Search results for: feature method
19526 The Implementation of Secant Method for Finding the Root of Interpolation Function
Authors: Nur Rokhman
Abstract:
A mathematical function gives the relationship between the variables composing it. Interpolation can be viewed as the process of finding a mathematical function which goes through some specified points. There are many interpolation methods, namely the Lagrange method, the Newton method, the Spline method, etc. Under some conditions, such as a large number of interpolation points, the interpolation function cannot be written explicitly; such a function consists only of computational steps. Solving an equation involving the interpolation function is therefore a nonlinear root-finding problem. Newton's method will not work on the interpolation function, for the derivative of the interpolation function cannot be written explicitly. This paper shows the use of the secant method to determine the numerical solution of an equation involving the interpolation function. The experiment shows that the secant method works better than Newton's method in finding the root of the Lagrange interpolation function.
Keywords: secant method, interpolation, nonlinear function, numerical solution
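The appeal of the secant update x_{n+1} = x_n − F(x_n)(x_n − x_{n−1})/(F(x_n) − F(x_{n−1})) is that it uses only function values, so it applies to an interpolant that exists only as computational steps. A minimal sketch (the sample points and tolerances are illustrative, not from the paper):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Secant iteration: only function values, no derivative required."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1 - f0) < 1e-15:    # flat secant line: give up
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Interpolate f(x) = x**2 - 2 through four points; the root of the
# interpolant is then sqrt(2).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x * x - 2.0 for x in xs]
root = secant(lambda x: lagrange(xs, ys, x), 1.0, 2.0)
```

Because the four sample points here come from a quadratic, the Lagrange interpolant reproduces it exactly and the secant iteration converges to √2.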
Procedia PDF Downloads 379
19525 Verbal Prefix Selection in Old Japanese: A Corpus-Based Study
Authors: Zixi You
Abstract:
There are a number of verbal prefixes in Old Japanese. However, the selection, or the compatibility, of verbs and verbal prefixes is among the least investigated topics in the Old Japanese language. Unlike other types of prefixes, verbal prefixes in dictionaries are more often than not listed with very brief information such as ‘unknown meaning’ or ‘rhythmic function only’. To fill in a part of this knowledge gap, this paper presents an exhaustive investigation based on the newly developed ‘Oxford Corpus of Old Japanese’ (OCOJ), which includes nearly all existing resources of the Old Japanese language, with detailed linguistic information in TEI-XML tags. In this paper, we propose the possibility that the following three prefixes, i-, sa-, and ta- (with ta- considered a variant of sa-), are relevant to split intransitivity in Old Japanese, with evidence that unaccusative verbs favor i- and that unergative verbs favor sa- (ta-). This might appear to be undermined by the fact that transitives are also found to follow i-. However, once several manifestations of split intransitivity in Old Japanese are discussed, the behavior of transitives in verbal prefix selection is no longer as surprising as it may seem when one looks at the selection of verbal prefixes in isolation. It is possible that one or more features played an essential role in determining the selection of i-, and the attested transitive verbs happen to have these features. The data suggest that this feature is a sense of ‘change’ of location or state involved in the event denoted by the verb, which is a feature of typical unaccusatives. This is further discussed in terms of the ‘affectedness’ hierarchy. The presentation of this paper, which includes a brief demonstration of the OCOJ, is expected to be of interest to both specialists and general audiences.
Keywords: Old Japanese, split intransitivity, unaccusatives, unergatives, verbal prefix selection
Procedia PDF Downloads 415
19524 Features of Normative and Pathological Realizations of Sibilant Sounds for Computer-Aided Pronunciation Evaluation in Children
Authors: Zuzanna Miodonska, Michal Krecichwost, Pawel Badura
Abstract:
Sigmatism (lisping) is a speech disorder in which sibilant consonants are mispronounced. The diagnosis of this phenomenon is usually based on auditory assessment. However, progress in speech analysis techniques creates the possibility of developing computer-aided sigmatism diagnosis tools. The aim of the study is to statistically verify whether specific acoustic features of sibilant sounds may be related to pronunciation correctness. Such knowledge can be of great importance when implementing classifiers and designing novel tools for automatic evaluation of sibilant pronunciation. The study covers the analysis of various speech signal measures, including features proposed in the literature for the description of normative sibilant realization. Amplitudes and frequencies of three fricative formants (FF) are extracted based on local spectral maxima of the friction noise. Skewness, kurtosis, four normalized spectral moments (SM), and 13 mel-frequency cepstral coefficients (MFCC) with their 1st and 2nd derivatives (13 Delta and 13 Delta-Delta MFCC) are included in the analysis as well. The resulting feature vector contains 51 measures. The experiments are performed on a speech corpus containing words with the selected sibilant sounds (/ʃ, ʒ/) pronounced by 60 preschool children with proper pronunciation or with natural pathologies. In total, 224 /ʃ/ segments and 191 /ʒ/ segments are employed in the study. The Mann-Whitney U test is employed for comparing sigmatic and normative pronunciation. Statistically significant differences at p < 0.05 are obtained for most of the proposed features between the children divided into these two groups. All spectral moments and fricative formants appear to be distinctive between pathological and proper pronunciation. These metrics describe the friction noise characteristic of sibilants, which makes them particularly promising for use in sibilant evaluation tools.
Correspondences found between phoneme feature values and an expert evaluation of pronunciation correctness encourage the involvement of speech analysis tools in the diagnosis and therapy of sigmatism. The proposed feature extraction methods could be used in computer-assisted sigmatism diagnosis or therapy systems.
Keywords: computer-aided pronunciation evaluation, sigmatism diagnosis, speech signal analysis, statistical verification
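As a sketch of the statistical step, the Mann-Whitney U statistic with a normal-approximation p-value can be computed directly from two groups of feature values (the numbers below are invented stand-ins for one spectral-moment feature, not the corpus data):

```python
import math

def mann_whitney_u(a, b):
    """Pairwise U statistic with a normal-approximation two-sided p-value
    (no tie correction; adequate for a quick screen, not for small n)."""
    u = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    n1, n2 = len(a), len(b)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u, p

# Invented stand-ins for one feature in the normative and sigmatic groups.
normative = [6.1, 6.4, 5.9, 6.3, 6.0, 6.2, 6.5, 6.1]
pathological = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 4.6]
u_stat, p_value = mann_whitney_u(normative, pathological)
```

Here every normative value exceeds every pathological one, so U takes its maximum value and the difference is significant at p < 0.05.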
Procedia PDF Downloads 301
19523 Stream Extraction from 1m-DTM Using ArcGIS
Authors: Jerald Ruta, Ricardo Villar, Jojemar Bantugan, Nycel Barbadillo, Jigg Pelayo
Abstract:
Streams are important in providing water supply for industrial, agricultural, and human consumption; in short, where there are streams, there is life. Identifying streams is essential, since many developed cities are situated in the vicinity of these bodies of water, and in flood management streams serve as basins for surface runoff within the area. This study aims to process and generate stream features from a high-resolution digital terrain model (DTM) with 1-meter resolution using the Hydrology tools of ArcGIS. The raster was filled, flow direction and flow accumulation were processed, stream order was derived with the raster calculator, the result was converted to vector, and undesirable features were removed using ancillary data or Google Earth. In field validation, streams were classified as perennial, intermittent, or ephemeral. Results show that more than 90% of the extracted features were found accurate through field validation.
Keywords: digital terrain models, hydrology tools, Strahler method, stream classification
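The Hydrology-tools chain (fill, flow direction, flow accumulation, stream definition) can be mimicked outside ArcGIS. Below is a minimal pure-Python D8 sketch on a toy DEM that skips the fill step, so it illustrates the idea rather than the exact ArcGIS implementation:

```python
import math

def d8_flow_accumulation(dem):
    """Minimal D8 sketch: each cell drains to its steepest downslope
    neighbour; accumulation counts contributing cells, visiting the grid
    from highest to lowest elevation."""
    rows, cols = len(dem), len(dem[0])
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    target = {}
    for r in range(rows):
        for c in range(cols):
            best, best_drop = None, 0.0
            for dr, dc in nbrs:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    # distance-weighted drop (1 for edges, sqrt(2) diagonals)
                    drop = (dem[r][c] - dem[rr][cc]) / math.hypot(dr, dc)
                    if drop > best_drop:
                        best, best_drop = (rr, cc), drop
            target[(r, c)] = best
    acc = [[1.0] * cols for _ in range(rows)]  # each cell contributes itself
    cells = sorted(((dem[r][c], r, c)
                    for r in range(rows) for c in range(cols)), reverse=True)
    for _, r, c in cells:
        if target[(r, c)] is not None:
            tr, tc = target[(r, c)]
            acc[tr][tc] += acc[r][c]
    return acc

# Tilted plane draining toward the right-hand column.
dem = [[3.0, 2.0, 1.0],
       [3.0, 2.0, 1.0],
       [3.0, 2.0, 1.0]]
acc = d8_flow_accumulation(dem)
streams = [[a >= 3.0 for a in row] for row in acc]  # accumulation threshold
```

Thresholding the accumulation grid mimics the stream-definition step; on this toy DEM the right-hand column collects all upstream cells and is flagged as the stream.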
Procedia PDF Downloads 272
19522 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit
Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic
Abstract:
Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have established their wide use today in low-background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required. Thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature virtually allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra where they would otherwise superimpose within a single-energy peak and, as such, could potentially compromise the analysis and produce wrongly assessed results. Naturally, this feature is of great importance when radionuclides and their activity concentrations are being identified and high precision comes as a necessity. In measurements of this nature, in order to be able to reproduce good and trustworthy results, one has to have initially performed an adequate full-energy peak (FEP) efficiency calibration of the used equipment. However, experimental determination of the response, i.e., efficiency curves for a given detector-sample configuration and its geometry, is not always easy and requires a certain set of reference calibration sources in order to account for and cover broader energy ranges of interest. With the goal of overcoming these difficulties, many researchers have turned towards the application of different software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), as it has proven time and time again to be a very powerful tool. In the process of creating a reliable model, one has to have well-established and described specifications of the detector.
Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters consequently decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if not properly taken into account. In this study, the optimisation method for two HPGe detectors through the implementation of the Geant4 toolkit developed by CERN is described, with the goal of further improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead layer thicknesses, inner crystal’s void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional co-axial extended range detector (XtRa HPGe, CANBERRA) and a broad energy range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimentally obtained data from measurements of a set of point-like radioactive sources. The acquired results of both detectors displayed good agreement with experimental data, falling within an average statistical uncertainty of ∼4.6% for the XtRa and ∼1.8% for the BEGe detector within the energy ranges of 59.4−1836.1 keV and 59.4−1212.9 keV, respectively.
Keywords: HPGe detector, γ-spectrometry, efficiency, Geant4 simulation, Monte Carlo method
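The experimental side of such a verification reduces to the full-energy-peak efficiency formula ε = N / (A · t · I_γ); a small sketch with hypothetical Cs-137 numbers (not the paper's measurements):

```python
def fep_efficiency(net_peak_counts, activity_bq, live_time_s, gamma_yield):
    """Full-energy-peak efficiency: counts in the photopeak divided by the
    number of photons of that energy emitted during the measurement."""
    emitted = activity_bq * live_time_s * gamma_yield
    return net_peak_counts / emitted

def relative_deviation_pct(simulated, experimental):
    """Percent deviation of a simulated efficiency from the measured one."""
    return (simulated - experimental) / experimental * 100.0

# Hypothetical Cs-137 point source: 661.7 keV line, emission yield 0.851.
eff_exp = fep_efficiency(net_peak_counts=125000, activity_bq=5000,
                         live_time_s=3600, gamma_yield=0.851)
```

Comparing `relative_deviation_pct` of the simulated against the measured efficiency, line by line, is what the optimisation of the detector model aims to minimise.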
Procedia PDF Downloads 120
19521 Modular Robotics and Terrain Detection Using Inertial Measurement Unit Sensor
Authors: Shubhakar Gupta, Dhruv Prakash, Apoorv Mehta
Abstract:
In this project, we design a modular robot capable of using and switching between multiple methods of propulsion and of classifying terrain based on input from an Inertial Measurement Unit (IMU). We wanted to make a robot that is not only intelligent in its functioning but also versatile in its physical design. The advantage of a modular robot is that it can be designed to hold several movement apparatuses, such as wheels, legs for a hexapod or quadruped setup, propellers for underwater locomotion, and any other solution that may be needed. The robot takes roughness input from the gyroscope and accelerometer in the IMU and, based on the terrain classification from an artificial neural network, decides which method of propulsion would best optimize its movement. This gives the robot adaptability across a set of terrains, optimizing its locomotion on each terrain based on its roughness. A feature like this would be a great asset in autonomous exploration or research drones.
Keywords: modular robotics, terrain detection, terrain classification, neural network
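A minimal sketch of the roughness-feature idea: variance and RMS of the vertical accelerometer axis, with a single threshold standing in for the project's neural-network classifier (the signals and the threshold are invented):

```python
import math

def roughness_features(accel_z):
    """Variance and RMS of the vertical accelerometer axis as simple
    roughness descriptors."""
    n = len(accel_z)
    mean = sum(accel_z) / n
    var = sum((a - mean) ** 2 for a in accel_z) / n
    rms = math.sqrt(sum(a * a for a in accel_z) / n)
    return var, rms

def classify_terrain(var, smooth_threshold=0.05):
    """Single-threshold stand-in for the neural-network classifier."""
    return "smooth" if var < smooth_threshold else "rough"

smooth_signal = [0.01 * math.sin(0.1 * i) for i in range(100)]  # gentle ripple
rough_signal = [0.5 * math.sin(2.5 * i) for i in range(100)]    # hard shaking
```

In the full system these features would feed the neural network, whose output in turn selects the propulsion module.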
Procedia PDF Downloads 145
19520 Ductility Spectrum Method for the Design and Verification of Structures
Authors: B. Chikh, L. Moussa, H. Bechtoula, Y. Mehani, A. Zerzour
Abstract:
This study presents a new method applicable to the evaluation and design of structures, developed and illustrated by comparison with the capacity spectrum method (CSM, ATC-40). The method uses inelastic spectra and gives peak responses consistent with those obtained with nonlinear time history analysis. Hereafter, this seismic demand assessment method is called the Ductility Spectrum Method (DSM). It is used to estimate the seismic deformation of Single-Degree-Of-Freedom (SDOF) systems based on the Ductility Demand Response Spectrum (DDRS) developed by the author.
Keywords: seismic demand, capacity, inelastic spectra, design and structure
Procedia PDF Downloads 396
19519 Large Eddy Simulations for Flow Blurring Twin-Fluid Atomization Concept Using Volume of Fluid Method
Authors: Raju Murugan, Pankaj S. Kolhe
Abstract:
The present study focuses on the numerical simulation of the Flow Blurring (FB) twin-fluid injection concept proposed by Ganan-Calvo, which involves back-flow atomization based on global bifurcation of the liquid and gas streams, thus creating two-phase flow near the injector exit. An interesting feature of the FB injector spray is the insignificant effect of variation in the atomizing air-to-liquid ratio (ALR) on the spray cone angle. Besides, FB injectors produce a nearly uniform spatial distribution of mean droplet diameter and are least susceptible to variation in the thermo-physical properties of fuels, making them a perfect candidate for fuel-flexible combustor development. The FB injector working principle has been realized through experimental flow visualization techniques only. The present study explores the potential of ANSYS Fluent based Large Eddy Simulation (LES) with the volume of fluid (VOF) method to investigate the two-phase flow just upstream of the injector dump plane and the spray quality immediately downstream of it. Note that water and air represent the liquid and gas phases in all simulations, and the ALR is varied by changing the air mass flow rate alone. Preliminary results capture the two-phase flow just upstream of the injector dump plane, and qualitative agreement is observed with the available experimental literature.
Keywords: flow blurring twin fluid atomization, large eddy simulation, volume of fluid, air to liquid ratio
Procedia PDF Downloads 214
19518 From Shallow Semantic Representation to Deeper One: Verb Decomposition Approach
Authors: Aliaksandr Huminski
Abstract:
Semantic Role Labeling (SRL), a shallow semantic parsing approach, includes recognizing and labeling the arguments of a verb in a sentence. Verb participants are linked with specific semantic roles (Agent, Patient, Instrument, Location, etc.). Thus, SRL can answer key questions such as ‘Who’, ‘When’, ‘What’, ‘Where’ in a text, and it is widely applied in dialog systems, question answering, named entity recognition, information retrieval, and other fields of NLP. However, SRL has the following flaw: two sentences with identical (or almost identical) meaning can have different semantic role structures. Consider two sentences: (1) John put butter on the bread. (2) John buttered the bread. SRL for (1) and (2) will be significantly different. For the verb put in (1) it is [Agent + Patient + Goal], but for the verb butter in (2) it is [Agent + Goal]. This happens because of one of the most interesting and intriguing features of a verb: its ability to capture participants, as in the case of the verb butter, or their features, as, say, in the case of the verb drink, where the participant's feature of being liquid is shared with the verb. This capture amounts to a total fusion of meaning and cannot be decomposed in a direct way (in comparison with compound verbs like babysit or breastfeed). From this perspective, SRL is really too shallow to represent semantic structure. If the key point of semantic representation is the opportunity to use it for making inferences and finding hidden reasons, it assumes by default that two different but semantically identical sentences must have the same semantic structure. Otherwise, we will draw different inferences from the same meaning. To overcome the above-mentioned flaw, the following approach is suggested.
Assume that: P is a participant of a relation; F is a feature of a participant; Vcp is a verb that captures a participant; Vcf is a verb that captures a feature of a participant; Vpr is a primitive verb, i.e., a verb that does not capture any participant and represents only a relation. In other words, a primitive verb is a verb whose meaning does not include meanings from its surroundings. Then Vcp and Vcf can be decomposed as: Vcp = Vpr + P; Vcf = Vpr + F. If all Vcp and Vcf are represented this way, then the primitive verbs Vpr can be considered a canonical form for SRL. As a result, there will be no hidden participants caught by a verb, since all participants will be explicitly unfolded. An obvious example of a Vpr is the verb go, which represents pure movement. In this case, the verb drink can be represented as man-made movement of liquid in a specific direction. Extracting and using primitive verbs for SRL creates a canonical representation that is unique for semantically identical sentences. This leads to the unification of semantic representation, and the critical flaw of SRL described above is resolved.
Keywords: decomposition, labeling, primitive verbs, semantic roles
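The decomposition Vcp = Vpr + P can be sketched as a lookup that unfolds the captured participant, so that 'John buttered the bread' receives the same role structure as 'John put butter on the bread'. The lexicon entries below, including 'move' as drink's primitive, are illustrative assumptions, not a published lexicon:

```python
# Toy decomposition lexicon: Vcp = Vpr + P (captured participant),
# Vcf = Vpr + F (captured feature). 'move' as the primitive of 'drink'
# follows the 'man-made movement of liquid' reading.
DECOMPOSITION = {
    "butter": {"Vpr": "put", "P": "butter"},
    "drink": {"Vpr": "move", "F": "liquid"},
}

def canonical_roles(verb, agent, goal):
    """Unfold a hidden participant so synonymous sentences get one
    canonical role structure."""
    entry = DECOMPOSITION.get(verb)
    if entry and "P" in entry:
        return {"verb": entry["Vpr"], "Agent": agent,
                "Patient": entry["P"], "Goal": goal}
    return {"verb": verb, "Agent": agent, "Goal": goal}

# 'John buttered the bread' -> structure of 'John put butter on the bread'.
roles = canonical_roles("butter", "John", "the bread")
```

After canonicalisation, both sentences yield [Agent + Patient + Goal] over the primitive verb, so downstream inference sees a single structure.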
Procedia PDF Downloads 367
19517 Top-Down Construction Method in Concrete Structures: Advantages and Disadvantages of This Construction Method
Authors: Hadi Rouhi Belvirdi
Abstract:
The construction of underground structures using the traditional method, which begins with excavation and the implementation of the foundation of the underground structure, continues with the construction of the main structure from the ground up, and concludes with the completion of the final ceiling, is known as the Bottom-Up Method. In contrast to this method, there is an advanced technique called the Top-Down Method, which has practically replaced the traditional construction method in large projects in industrialized countries in recent years. Unlike the traditional approach, this method starts with the construction of surrounding walls, columns, and the final ceiling and is completed with the excavation and construction of the foundation of the underground structure. Some of the most significant advantages of this method include the elimination or minimization of formwork surfaces, the removal of temporary bracing during excavation, the creation of some traffic facilities during the construction of the structure, and the possibility of using it in limited and high-traffic urban spaces. Despite these numerous advantages, unfortunately, there is still insufficient awareness of this method in our country, to the extent that it can be confidently stated that most stakeholders in the construction industry are unaware of the existence of such a construction method. However, it can be utilized as a very important execution option alongside other conventional methods in the construction of underground structures. Therefore, due to the extensive practical capabilities of this method, this article aims to present a methodology for constructing underground structures based on the aforementioned advanced method to the scientific community of the country, examine the advantages and limitations of this method and their impacts on time and costs, and discuss its application in urban spaces. 
Finally, some underground structures executed in the Ahvaz urban rail project, which, to the best of our knowledge, are being implemented using this advanced method, will be introduced.
Keywords: top-down method, bottom-up method, underground structure, construction method
Procedia PDF Downloads 12
19516 Sentiment Analysis of Chinese Microblog Comments: Comparison between Support Vector Machine and Long Short-Term Memory
Authors: Xu Jiaqiao
Abstract:
Text sentiment analysis is an important branch of natural language processing. This technology is widely used in public opinion analysis and web browsing recommendations. At present, mainstream sentiment analysis methods fall into three categories: methods based on a sentiment dictionary, on traditional machine learning, and on deep learning. This paper analyzes and compares the advantages and disadvantages of the SVM method of traditional machine learning and the Long Short-Term Memory (LSTM) method of deep learning in the field of Chinese sentiment analysis, using Chinese comments on Sina Microblog as the data set. Firstly, this paper classifies and adds labels to the original comment dataset obtained by a web crawler, and then uses Jieba word segmentation to segment the dataset and remove stop words. After that, this paper extracts text feature vectors and builds document word vectors to facilitate the training of the models. Finally, SVM and LSTM models are trained respectively. The accuracy of the LSTM model is 85.80%, while the accuracy of the SVM is 91.07%; at the same time, the LSTM needs only 2.57 seconds, while the SVM model needs 6.06 seconds. Therefore, this paper concludes that, compared with the SVM model, the LSTM model is worse in accuracy but faster in processing speed.
Keywords: sentiment analysis, support vector machine, long short-term memory, Chinese microblog comments
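As a sketch of the SVM half of the comparison, a linear SVM can be trained with Pegasos-style sub-gradient descent on bag-of-words counts; the toy English comments below stand in for the segmented Sina Microblog data:

```python
def vectorize(doc, vocab):
    """Bag-of-words count vector over a fixed vocabulary."""
    counts = [0.0] * len(vocab)
    for word in doc.split():
        if word in vocab:
            counts[vocab[word]] += 1.0
    return counts

def train_linear_svm(X, y, lam=0.01, epochs=1000):
    """Pegasos-style sub-gradient descent on the regularized hinge loss,
    a minimal stand-in for the paper's SVM classifier."""
    w, b, t = [0.0] * len(X[0]), 0.0, 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            score = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * score < 1.0:           # margin violated: hinge gradient
                w = [(1.0 - eta * lam) * wj + eta * yi * xj
                     for wj, xj in zip(w, xi)]
                b += eta * yi
            else:                          # only the L2 shrinkage step
                w = [(1.0 - eta * lam) * wj for wj in w]
    return w, b

# Toy stand-in for labeled comments (+1 positive, -1 negative).
docs = ["good great film", "great acting good plot",
        "bad boring film", "boring plot bad acting"]
labels = [1, 1, -1, -1]
vocab = {w: i for i, w in
         enumerate(sorted({t for d in docs for t in d.split()}))}
X = [vectorize(d, vocab) for d in docs]
w, b = train_linear_svm(X, labels)
```

The sign of `w · x + b` gives the predicted polarity; for the real experiment the vectors would come from the Jieba-segmented, stop-word-filtered corpus.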
Procedia PDF Downloads 94
19515 Difference Expansion Based Reversible Data Hiding Scheme Using Edge Directions
Authors: Toshanlal Meenpal, Ankita Meenpal
Abstract:
Difference expansion is a very important technique in the reversible data hiding field. Both the secret message and the cover image may be completely recovered without any distortion after the data extraction process, owing to the reversibility feature. In general, in any difference expansion scheme, embedding is performed by an integer transform on the difference image acquired by grouping two neighboring pixel values. This paper proposes an improved reversible difference expansion embedding scheme. We mainly consider edge direction for embedding by modifying the difference of two neighboring pixel values. In general, a larger difference tends to produce a more degraded stego image quality than a smaller one. An image quality improvement in the range of 0.5 to 3.7 dB on average is achieved by the proposed scheme, as shown through the experimental results. Payload-wise, however, it achieves almost the same capacity as the previous method.
Keywords: information hiding, edge direction, difference expansion, integer transform
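The integer transform underlying difference expansion (Tian's scheme, which difference-expansion methods build on) can be sketched for a single pixel pair; a real scheme would additionally check that the expanded pair stays within the valid pixel range:

```python
def de_embed(x, y, bit):
    """Embed one bit into a pixel pair with Tian's difference expansion."""
    l = (x + y) // 2        # integer average, preserved by the transform
    h = x - y               # difference
    h2 = 2 * h + bit        # expanded difference carries the payload bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the bit and the original pair exactly (reversibility)."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1
    h = h2 >> 1             # floor-divide undoes the expansion
    return l + (h + 1) // 2, l - h // 2, bit

# Round-trip on one pair: 100/98 with bit 1 gives a stego pair that
# restores (100, 98, 1) on extraction. A full scheme would also run the
# expandability test to keep stego pixels inside [0, 255].
stego = de_embed(100, 98, 1)
recovered = de_extract(*stego)
```

Since the integer average is invariant under the transform, the extractor can rebuild both original pixels and the bit from the stego pair alone; doubling the difference is also why large (high-edge) differences hurt stego quality most.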
Procedia PDF Downloads 484
19514 Classroom Management Practices of Hotel, Restaurant, and Institution Management Instructors
Authors: Diana Ruth Caga-Anan
Abstract:
Classroom management is a critical skill, but styles are constantly evolving. It is constantly under pressure, particularly at the college level, due to diversity in student profiles, modes of delivery, and the marketization of higher education. This study sought to analyze the extent of implementation of the classroom management practices (CMPs) of the college instructors of Hotel, Restaurant, and Institution Management at a premier university in the Philippines. It was also determined whether their length of teaching affects their classroom management style. A questionnaire with sixteen 'evidence-based' CMPs grouped into five critical features of classroom management, adopted from the literature search of Simonsen et al. (2008), was administered to 4 instructor-respondents and to their 88 students. Weighted mean scores for each of the CMPs revealed differences between the instructors' self-scores and their students' ratings of their implementation of the CMPs. The critical feature 'actively engage students in observable ways' got the highest mean score, corresponding to 'always' in the instructors' self-ratings and 'frequently' in their students' ratings. However, 'use a continuum of strategies to respond to inappropriate behaviors' got the lowest scores from both the instructors and their students, corresponding only to 'occasionally'. Analysis of variance showed that the only CMP affected by length of teaching is the practice of 'prompting students to respond'. Based on the findings, some recommendations for the instructors to improve on the critical feature where they scored low are discussed, and suggestions are included for future research.
Keywords: classroom management, CMPs, critical features, evidence-based classroom management practices
Procedia PDF Downloads 172
19513 ANOVA-Based Feature Selection and Machine Learning System for IoT Anomaly Detection
Authors: Muhammad Ali
Abstract:
Cyber-attack and anomaly detection on Internet of Things (IoT) infrastructure is an emerging concern in the domain of data-driven intrusion detection. Rapidly increasing IoT risk is now making headlines around the world. Denial of service, malicious control, data type probing, malicious operation, DDoS, scanning, spying, and wrong setup are attacks and anomalies that can cause an IoT system failure. Everyone talks about cyber security, connectivity, smart devices, and real-time data extraction. IoT devices expose a wide variety of new cyber security attack vectors in network traffic. For IoT development to advance, and mainly for smart and IoT applications, intelligent processing and analysis of data are necessary; our approach, accordingly, is to secure such systems. We train and compare several machine learning models for accurately predicting attacks and anomalies on IoT systems, considering IoT applications, with ANOVA-based feature selection yielding fewer prediction features for evaluating network traffic and helping protect IoT devices. The machine learning (ML) algorithms used here are KNN, SVM, NB, DT, and RF, with the most satisfactory test accuracy and fast detection. The evaluation of ML metrics includes precision, recall, F1 score, FPR, NPV, GM, MCC, and AUC-ROC. The Random Forest algorithm achieved the best results with the least prediction time, with an accuracy of 99.98%.
Keywords: machine learning, analysis of variance, Internet of Things, network security, intrusion detection
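The ANOVA ranking itself is just the one-way F statistic computed per feature; a self-contained sketch with two invented traffic features, where packet rate separates attack from normal traffic and TTL does not:

```python
def anova_f(groups):
    """One-way ANOVA F statistic: between-class variance over within-class
    variance, the score used to rank features for selection."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical per-feature samples: (normal traffic, attack traffic).
features = {
    "pkt_rate": ([10.0, 12.0, 11.0, 13.0], [95.0, 100.0, 98.0, 97.0]),
    "ttl":      ([60.0, 64.0, 62.0, 61.0], [61.0, 63.0, 60.0, 62.0]),
}
scores = {name: anova_f(groups) for name, groups in features.items()}
best = max(scores, key=scores.get)  # the feature ANOVA would keep first
```

Features with high F values are kept for the downstream classifiers (KNN, SVM, NB, DT, RF), shrinking the model input while preserving discriminative power.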
Procedia PDF Downloads 125
19512 Stating Best Commercialization Method: An Unanswered Question from Scholars and Practitioners
Authors: Saheed A. Gbadegeshin
Abstract:
A commercialization method is a means of making inventions available on the market for final consumption. It is described as an important tool for keeping business enterprises sustainable and for improving national economic growth. Thus, there are several scholarly publications on it, either presenting or testing different methods for commercialization. However, young entrepreneurs, technologists, and scientists would like to know the best method to commercialize their innovations. Thus, this question arises: what is the best commercialization method? To answer the question, a systematic literature review was conducted, and practitioners were interviewed. The literature results revealed that there are many methods, but new methods are needed to improve commercialization, especially during these times of economic crisis and political uncertainty. Similarly, the empirical results showed that there are several methods, but the best method is the one that reduces costs, reduces the risks associated with uncertainty, and improves customer participation and acceptability. Therefore, it was concluded that a new commercialization method is essential for today's high technologies, and such a method was presented.
Keywords: commercialization method, technology, knowledge, intellectual property, innovation, invention
Procedia PDF Downloads 342
19511 Critical Comparison of Two Teaching Methods: The Grammar Translation Method and the Communicative Teaching Method
Authors: Aicha Zohbie
Abstract:
The purpose of this paper is to critically compare two teaching methods: the communicative method and the grammar-translation method. The paper presents the importance of language awareness as an approach to teaching and learning a language, together with some challenges that language teachers face. In addition, the paper strives to determine whether adopting the communicative teaching method or the grammar-translation method would be more effective for teaching a language. A variety of features are considered in comparing the two methods: the purpose of each method, the techniques used, teachers' and students' roles, the use of L1, the skills that are emphasized, the correction of students' errors, and the students' assessments. Finally, the paper includes suggestions and recommendations for implementing an approach that best meets students' needs in a classroom.
Keywords: language teaching methods, language awareness, communicative method, grammar-translation method, advantages and disadvantages
Procedia PDF Downloads 151
19510 Virulence Phenotypes Among Multi-Drug Resistant Uropathogenic Bacteria
Authors: V. V. Lakshmi, Y. V. S. Annapurna
Abstract:
Urinary tract infection (UTI) is one of the most common infectious diseases seen in the community. Susceptible individuals experience multiple episodes and progress to acute pyelonephritis or urosepsis, or develop asymptomatic bacteriuria (ABU). The ability to cause extraintestinal infections depends on several virulence factors required for survival at extraintestinal sites. The presence of virulence phenotypes enhances the pathogenicity of these otherwise commensal organisms and thus augments their ability to cause extraintestinal infections, the most frequent being urinary tract infections (UTI). The present study focuses on the detection of the virulence characters exhibited by uropathogenic organisms and the most common factors exhibited by the local pathogens. A total of 700 isolates of E. coli and Klebsiella spp. were included in the study. These were isolated from patients of local hospitals reported to be suffering from UTI over a period of three years. Isolation and identification were done based on Gram character and IMViC reactions. Antibiotic sensitivity profiling was carried out by the disc diffusion method, and multi-drug resistant strains with a MAR index of 0.7 were selected for further study. Virulence features examined included the ability to produce exopolysaccharides, protease and gelatinase production, hemolysin production, haemagglutination, and the hydrophobicity test. Exopolysaccharide production was the most predominant virulence feature among the isolates when checked by the Congo red method. Biofilm production, examined in microtitre plates using an ELISA reader, confirmed that this is the major factor contributing to the virulence of the pathogens, followed by hemolysin production.
Keywords: Escherichia coli, Klebsiella sp, uropathogens, virulence features
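The MAR index used for the selection step is simply the fraction of tested antibiotics an isolate resists; a one-line sketch (the 7-of-10 isolate is a hypothetical example):

```python
def mar_index(resistant, tested):
    """Multiple antibiotic resistance (MAR) index: antibiotics the isolate
    resists divided by antibiotics tested."""
    return resistant / tested

# Hypothetical isolate: resistant to 7 of 10 tested antibiotics, so it
# meets the MAR index of 0.7 used here to select multi-drug resistant strains.
is_selected = mar_index(7, 10) >= 0.7
```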
Procedia PDF Downloads 421
19509 Numerical Iteration Method to Find New Formulas for Nonlinear Equations
Authors: Kholod Mohammad Abualnaja
Abstract:
A new algorithm is presented to derive new iterative methods for solving nonlinear equations F(x) = 0 by using the variational iteration method. The efficiency of the considered method is illustrated by an example. The results show that the proposed iteration technique, without linearization or small perturbation, is very effective and convenient.
Keywords: variational iteration method, nonlinear equations, Lagrange multiplier, algorithms
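In such VIM derivations, the correction functional x_{n+1} = x_n + λF(x_n) with the optimal Lagrange multiplier λ = −1/F′(x_n) recovers Newton's scheme; a sketch on the illustrative equation cos x − x = 0 (not necessarily the paper's example):

```python
import math

def vim_newton(F, dF, x0, tol=1e-12, max_iter=50):
    """Correction functional x_{n+1} = x_n + lam * F(x_n); the optimal
    Lagrange multiplier lam = -1/F'(x_n) yields Newton's iteration."""
    x = x0
    for _ in range(max_iter):
        lam = -1.0 / dF(x)
        step = lam * F(x)
        x += step
        if abs(step) < tol:
            break
    return x

# Illustrative equation: F(x) = cos(x) - x, root near 0.739085.
root = vim_newton(lambda x: math.cos(x) - x,
                  lambda x: -math.sin(x) - 1.0, 1.0)
```

Other choices of the multiplier and of the restricted variation give the alternative iterative formulas such papers derive.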
Procedia PDF Downloads 545
19508 Comparison of Finite-Element and IEC Methods for Cable Thermal Analysis under Various Operating Environments
Authors: M. S. Baazzim, M. S. Al-Saud, M. A. El-Kady
Abstract:
In this paper, the steady-state ampacity (current-carrying capacity) of an underground power cable system is evaluated using analytical and numerical methods under different conditions (depth of cable, spacing between phases, soil thermal resistivity, ambient temperature, wind speed) for two system voltage levels, 132 and 380 kV. The analytical, or traditional, method is based on the thermal analysis method developed by Neher and McGrath, further enhanced by the International Electrotechnical Commission (IEC) and published in standard IEC 60287. The numerical method is the finite element method, applied through commercial software based on it.
Keywords: cable ampacity, finite element method, underground cable, thermal rating
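For orientation, the IEC 60287 rating equation reduces, for a single cable with dielectric and sheath losses neglected, to I = √(Δθ / (R_ac · (T1 + T2 + T3 + T4))); the parameter values below are hypothetical, chosen only to give a plausible magnitude:

```python
import math

def iec_ampacity_single(delta_theta, r_ac, t1, t2, t3, t4):
    """Heavily reduced IEC 60287 steady-state rating for one single-core
    cable, neglecting dielectric and sheath/armour losses:
    I = sqrt(delta_theta / (R_ac * (T1 + T2 + T3 + T4)))."""
    return math.sqrt(delta_theta / (r_ac * (t1 + t2 + t3 + t4)))

# Hypothetical inputs: 65 K conductor-to-ambient rise, 30 uOhm/m AC
# resistance, and thermal resistances (K.m/W) of insulation, bedding,
# serving, and the surrounding soil.
I = iec_ampacity_single(delta_theta=65.0, r_ac=3.0e-5,
                        t1=0.4, t2=0.1, t3=0.05, t4=1.2)
```

The soil term T4 dominates the denominator, which is why burial depth and soil thermal resistivity are among the conditions varied in the comparison.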
Procedia PDF Downloads 37919507 Multistage Adomian Decomposition Method for Solving Linear and Non-Linear Stiff System of Ordinary Differential Equations
Authors: M. S. H. Chowdhury, Ishak Hashim
Abstract:
In this paper, linear and non-linear stiff systems of ordinary differential equations are solved by the classical Adomian decomposition method (ADM) and the multistage Adomian decomposition method (MADM). The MADM is a hybrid numeric-analytic technique obtained by applying the standard ADM over successive subintervals of the integration domain. The MADM is tested on several examples. Comparisons with an explicit Runge-Kutta-type method (RK) and the classical ADM demonstrate the limitations of the ADM and the promising capability of the MADM for solving stiff initial value problems (IVPs).Keywords: stiff system of ODEs, Runge-Kutta Type Method, Adomian decomposition method, Multistage ADM
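The multistage idea can be sketched on the linear test problem y' = -y, y(0) = y0 (an illustration, not one of the paper's examples). On each subinterval the Adomian recursion y_{k+1}(t) = -∫ y_k dt gives y_k = c(-(t - t_i))^k / k!, so the truncated series at the subinterval end is a truncated exponential, restarted with the new initial value:

```python
import math

def madm_linear_decay(y0, t_end, n_sub=10, n_terms=5):
    """Multistage ADM sketch for y' = -y. The classical ADM would use one
    series on [0, t_end]; MADM restarts the truncated series on each of
    n_sub subintervals, which keeps the truncation error controlled."""
    h = t_end / n_sub
    c = y0
    for _ in range(n_sub):
        # truncated Adomian series evaluated at the subinterval end
        c = c * sum((-h) ** k / math.factorial(k) for k in range(n_terms))
    return c

approx = madm_linear_decay(1.0, 1.0, n_sub=10, n_terms=5)
print(abs(approx - math.exp(-1.0)))  # small truncation error
```

For stiff systems the same restart mechanism is what lets the MADM track rapidly decaying components that a single global ADM series cannot.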
Procedia PDF Downloads 43719506 A Method for Measurement and Evaluation of Drape of Textiles
Authors: L. Fridrichova, R. Knížek, V. Bajzík
Abstract:
Drape is one of the important visual characteristics of a fabric. This paper introduces an innovative method for measuring and evaluating the drape shape of a fabric. The measuring principle is based on repeated vertical straining of the fabric, which more accurately simulates the real behavior of the fabric during draping. The method is fully automated, so a sample can be measured over any number of cycles in any time horizon. Using the present method of measurement, we are able to describe the viscoelastic behavior of the fabric.Keywords: drape, drape shape, automated drapemeter, fabric
Procedia PDF Downloads 65619505 Virulence Phenotypes among Multi Drug Resistant Uropathogenic E. Coli and Klebsiella SPP
Authors: V. V. Lakshmi, Y. V. S. Annapurna
Abstract:
Urinary tract infection (UTI) is one of the most common infectious diseases seen in the community. Susceptible individuals experience multiple episodes and may progress to acute pyelonephritis or uro-sepsis, or develop asymptomatic bacteriuria (ABU). The ability to cause extraintestinal infections depends on several virulence factors required for survival at extraintestinal sites. The presence of virulence phenotypes enhances the pathogenicity of these otherwise commensal organisms and thus augments their ability to cause extraintestinal infections, the most frequent of which are urinary tract infections (UTI). The present study focuses on detection of the virulence characters exhibited by the uropathogenic organisms and the most common factors exhibited in the local pathogens. A total of 700 isolates of E. coli and Klebsiella spp. were included in the study. These were isolated from patients from local hospitals reported to be suffering from UTI over a period of three years. Isolation and identification were done based on Gram character and IMViC reactions. The antibiotic sensitivity profile was determined by the disc diffusion method, and multidrug-resistant strains with a MAR index of 0.7 were further selected. Virulence features examined included the ability to produce exopolysaccharides, protease and gelatinase production, hemolysin production, haemagglutination, and the hydrophobicity test. Exopolysaccharide production was the most predominant virulence feature among the isolates when checked by the Congo red method. Biofilm production, examined in microtitre plates using an ELISA reader, confirmed that this is the major factor contributing to the virulence of the pathogens, followed by hemolysin production.Keywords: Escherichia coli, Klebsiella spp, Uropathogens, virulence features
Procedia PDF Downloads 31919504 Wolof Voice Response Recognition System: A Deep Learning Model for Wolof Audio Classification
Authors: Krishna Mohan Bathula, Fatou Bintou Loucoubar, FNU Kaleemunnisa, Christelle Scharff, Mark Anthony De Castro
Abstract:
Voice recognition algorithms such as automatic speech recognition and text-to-speech systems for African languages can play an important role in bridging the digital divide of Artificial Intelligence in Africa, contributing to the establishment of a fully inclusive information society. This paper proposes a Deep Learning model that can classify user responses as inputs for an interactive voice response system. A dataset of the Wolof words ‘yes’ and ‘no’ was collected as audio recordings. A two-stage data augmentation approach is adopted to enlarge the dataset to the size required by the deep neural network. Data preprocessing and feature engineering with Mel-Frequency Cepstral Coefficients are implemented. Convolutional Neural Networks (CNNs) have proven to be very powerful in image classification and are promising for audio processing when sounds are transformed into spectra. For voice response classification, the recordings are transformed into sound frequency feature spectra, and an image classification methodology is then applied using a deep CNN model. The inference model of this trained and reusable Wolof voice response recognition system can be integrated with many applications on both web and mobile platforms.Keywords: automatic speech recognition, interactive voice response, voice response recognition, wolof word classification
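The abstract does not detail which two augmentation stages were used; a plausible minimal sketch with two generic stages (random time shift, then additive noise) on raw 16 kHz waveforms could look like the following. The stage choices, shift range and noise level are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(waveform, shift_max=1600, noise_std=0.005):
    """Two hypothetical augmentation stages (not necessarily the paper's):
    1) random circular time shift of up to shift_max samples,
    2) additive Gaussian noise. Input/output: 1-D float waveform."""
    shift = int(rng.integers(-shift_max, shift_max + 1))
    shifted = np.roll(waveform, shift)                     # stage 1
    noisy = shifted + rng.normal(0.0, noise_std, waveform.shape)  # stage 2
    return noisy.astype(np.float32)

# One second of a 440 Hz tone standing in for a recorded Wolof word.
x = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000).astype(np.float32)
batch = np.stack([augment(x) for _ in range(4)])           # 4 augmented copies
print(batch.shape)  # (4, 16000)
```

MFCC extraction and the CNN itself would sit downstream of this step, consuming the augmented waveforms.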
Procedia PDF Downloads 11619503 The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos
Authors: Nassima Noufail, Sara Bouhali
Abstract:
In this work, we develop a semi-supervised solution for action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips using the K-means algorithm; the goal is to find groups of frames based on similarity within the video. Applying K-means clustering to all frames is time-consuming; therefore, we start by identifying transition frames, where the scene in the video changes significantly, and then apply K-means clustering to these transition frames. We use two image filters, the Gaussian filter and the Laplacian of Gaussian (LoG). Each filter extracts a set of features from the frames: the Gaussian filter blurs the image and removes the higher frequencies, while the Laplacian of Gaussian detects regions of rapid intensity change. We then use this vector of filter responses as input to our K-means algorithm. The output is a set of cluster centers. Each video frame pixel is then mapped to the nearest cluster center and painted with a corresponding color to form a visual map in which similar pixels are grouped. We then compute a cluster score indicating how near clusters are to each other and plot a signal of clustering score versus frame number. Our hypothesis was that the evolution of the signal would not change while semantically related events were happening in the scene. We mark the breakpoints at which the root mean square level of the signal changes significantly; each breakpoint indicates the beginning of a new video segment. In the second part, for each segment from part one, we randomly select a 16-frame clip and extract spatiotemporal features for every 16 frames with a pre-trained convolutional 3D network (C3D).
The final C3D output is a 512-dimensional feature vector; hence we use principal component analysis (PCA) for dimensionality reduction. The final part is classification. The C3D feature vectors are used as input to a multi-class linear support vector machine (SVM) for training, and a multi-classifier is used to detect the action. We evaluated our experiment on the UCF101 dataset, which consists of 101 human action categories, and achieved an accuracy that outperforms the state of the art by 1.2%.Keywords: video segmentation, action detection, classification, K-means, C3D
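The clustering step above can be sketched with a plain Lloyd's-algorithm K-means on per-frame feature vectors. The synthetic features below stand in for the Gaussian/LoG filter responses (the real pipeline, filters and scoring included, is more involved); a scene change shows up as a flip in the dominant cluster label:

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, n_iter=50):
    """Plain K-means (Lloyd's algorithm) on feature vectors X of shape
    (n_frames, n_features); a minimal stand-in for the clustering step."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                 # nearest center per frame
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical per-frame features: two synthetic "scenes" of 40 frames each.
scene_a = rng.normal(0.0, 0.1, (40, 8))
scene_b = rng.normal(3.0, 0.1, (40, 8))
frames = np.vstack([scene_a, scene_b])
labels, centers = kmeans(frames, k=2)
# The scene change flips the dominant cluster label around frame 40.
print(labels[:5], labels[-5:])
```

In the paper's pipeline, the per-frame cluster assignments feed the clustering-score signal whose RMS-level breakpoints mark segment boundaries.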
Procedia PDF Downloads 7719502 Assessing the Utility of Unmanned Aerial Vehicle-Borne Hyperspectral Image and Photogrammetry Derived 3D Data for Wetland Species Distribution Quick Mapping
Authors: Qiaosi Li, Frankie Kwan Kit Wong, Tung Fung
Abstract:
A lightweight unmanned aerial vehicle (UAV) carrying novel sensors offers a low-cost approach for data acquisition in complex environments. This study established a framework for applying a UAV system to quick mapping in complex environments and assessed the performance of UAV-based hyperspectral imagery and a digital surface model (DSM) derived from photogrammetric point clouds for classifying 13 species in the wetland area of the Mai Po Inner Deep Bay Ramsar Site, Hong Kong. The study area was part of a shallow bay with flat terrain, and the major species included reedbed and four mangroves: Kandelia obovata, Aegiceras corniculatum, Acrostichum auerum and Acanthus ilicifolius. Other species included various graminaceous plants, arbor, shrub, and the invasive species Mikania micrantha. In particular, the invasive species climbed up to the mangrove canopy, causing damage and morphological change that might make the species harder to distinguish. Hyperspectral images were acquired by a Headwall Nano sensor with a spectral range from 400 nm to 1000 nm at 0.06 m spatial resolution. A sequence of multi-view RGB images was captured at 0.02 m spatial resolution with 75% overlap. The hyperspectral imagery was corrected for radiometric and geometric distortion, while the high-resolution RGB images were matched to generate maximally dense point clouds. Further, a 5 cm grid digital surface model (DSM) was derived from the dense point clouds. Multiple feature reduction methods were compared to identify the most efficient method and to explore the spectral bands significant for distinguishing the species. Examined methods included stepwise discriminant analysis (DA), support vector machine (SVM) and minimum noise fraction (MNF) transformation.
Subsequently, spectral subsets composed of the 20 most important bands extracted by SVM, DA and MNF, and multi-source subsets adding the DSM to the 20 spectral bands, served as input to a maximum likelihood classifier (MLC) and an SVM classifier for comparison. Classification results showed that the feature reduction methods, from best to worst, were MNF transformation, DA and SVM; the MNF transformation accuracy was even higher than that of the all-bands input. Selected bands frequently lay along the green peak, red edge and near infrared. Additionally, DA found that the chlorophyll-absorption red band and the yellow band were also important for species classification. In terms of 3D data, the DSM enhanced the discriminant capacity among low plants, arbor and mangrove. Meanwhile, the DSM largely reduced misclassification due to the shadow effect and inter-species morphological variation. With respect to the classifier, the nonparametric SVM outperformed the MLC for high-dimension and multi-source data in this study. The SVM classifier tended to produce higher overall accuracy and fewer scattered patches, although it cost more time than the MLC. The best result was obtained by combining MNF components and the DSM in the SVM classifier. This study offered a precise species distribution survey solution for an inaccessible wetland area at a low cost of time and labour. In addition, findings on the positive effect of the DSM as well as on spectral feature identification indicated that the utility of UAV-borne hyperspectral and photogrammetry-derived 3D data is promising for further research on wetland species, such as bio-parameter modelling and biological invasion monitoring.Keywords: digital surface model (DSM), feature reduction, hyperspectral, photogrammetric point cloud, species mapping, unmanned aerial vehicle (UAV)
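The MNF transformation that performed best above whitens the estimated noise covariance and then applies PCA, so components are ordered by signal-to-noise rather than by variance. A minimal sketch follows; the shift-difference noise estimate is a common heuristic and not necessarily the paper's exact procedure:

```python
import numpy as np

def mnf(X, n_components=3):
    """Minimum noise fraction sketch. X: (n_pixels, n_bands), pixels in
    scan order. Noise covariance is estimated from adjacent-pixel
    differences; bands are noise-whitened, then PCA is applied."""
    Xc = X - X.mean(axis=0)
    noise = np.diff(Xc, axis=0) / np.sqrt(2.0)    # shift-difference estimate
    Sn = noise.T @ noise / (len(noise) - 1)        # noise covariance
    Sx = Xc.T @ Xc / (len(Xc) - 1)                 # data covariance
    w, U = np.linalg.eigh(Sn)
    F = U / np.sqrt(np.maximum(w, 1e-12))          # noise-whitening transform
    w2, V = np.linalg.eigh(F.T @ Sx @ F)           # PCA in whitened space
    order = np.argsort(w2)[::-1]                   # highest SNR first
    A = F @ V[:, order[:n_components]]
    return Xc @ A

# Synthetic cube: 500 pixels, 10 bands, rank-2 signal plus weak noise.
rng = np.random.default_rng(2)
signal = rng.normal(0, 1, (500, 2)) @ rng.normal(0, 1, (2, 10))
X = signal + rng.normal(0, 0.05, (500, 10))
Y = mnf(X, n_components=3)
print(Y.shape)  # (500, 3)
```

In the paper's setting, the leading MNF components (plus the DSM) form the reduced feature set fed to the SVM and MLC classifiers.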
Procedia PDF Downloads 25719501 Zero-Dissipative Explicit Runge-Kutta Method for Periodic Initial Value Problems
Authors: N. Senu, I. A. Kasim, F. Ismail, N. Bachok
Abstract:
In this paper, a zero-dissipative explicit Runge-Kutta method is derived for solving second-order ordinary differential equations with periodic solutions. The phase-lag and dissipation properties of Runge-Kutta (RK) methods are also discussed. The new method has algebraic order three with dissipation of order infinity. The numerical results for the new method are compared with an existing method when solving second-order differential equations with periodic solutions using a constant step size.Keywords: dissipation, oscillatory solutions, phase-lag, Runge-Kutta methods
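The test setting can be illustrated with a generic order-three explicit RK scheme (Kutta's classical third-order method, not the paper's new zero-dissipative method) applied to the standard periodic problem y'' = -y, rewritten as a first-order system:

```python
import math

def rk3_step(f, t, y, h):
    """One step of Kutta's classical third-order explicit RK method;
    a generic order-3 scheme for illustration only."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h, [yi - h * k1i + 2 * h * k2i
                   for yi, k1i, k2i in zip(y, k1, k2)])
    return [yi + h / 6 * (k1i + 4 * k2i + k3i)
            for yi, k1i, k2i, k3i in zip(y, k1, k2, k3)]

# y'' = -y as a first-order system: y1' = y2, y2' = -y1, y(0) = cos(t).
f = lambda t, y: [y[1], -y[0]]
y, t, h = [1.0, 0.0], 0.0, 2 * math.pi / 1000
for _ in range(1000):
    y = rk3_step(f, t, y, h)
    t += h
print(abs(y[0] - 1.0))  # small error after one full period
```

A zero-dissipative method of the paper's kind is designed so that, on such problems, the amplitude error vanishes and only phase-lag remains over long integrations.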
Procedia PDF Downloads 41119500 Regularized Euler Equations for Incompressible Two-Phase Flow Simulations
Authors: Teng Li, Kamran Mohseni
Abstract:
This paper presents an inviscid regularization technique for incompressible two-phase flow simulations. The technique is known as the observable method, after the notion of observability: any feature smaller than the actual resolution (physical or numerical), e.g., the size of the wire in hotwire anemometry or the grid size in numerical simulations, cannot be captured or observed. Unlike most regularization techniques, which apply to the numerical discretization, the observable method is employed at the PDE level during the derivation of the equations. Difficulties in the simulation and analysis of realistic fluid flow often result from discontinuities (or near-discontinuities) in the calculated fluid properties or state. Accurately capturing these discontinuities is especially crucial when simulating flows involving shocks, turbulence or sharp interfaces. Over the past several years, the properties of this new regularization technique have been investigated, showing the capability of simultaneously regularizing shocks and turbulence. The observable method has been applied to direct numerical simulations of shocks and turbulence, where the discontinuities are successfully regularized and the flow features are well captured. In the current paper, the observable method is extended to two-phase interfacial flows. Multiphase flows share a similar feature with shocks and turbulence, namely the nonlinear irregularity caused by the nonlinear terms in the governing equations, here the Euler equations. In direct numerical simulation of two-phase flows, the interfaces are usually treated as a smooth transition of properties from one fluid phase to the other. However, in high Reynolds number or low viscosity flows, the nonlinear terms generate smaller scales which sharpen the interface, causing discontinuities.
Many numerical methods for two-phase flows fail in the high Reynolds number case, while some others depend on the numerical diffusion from spatial discretization. The observable method regularizes this nonlinear mechanism by filtering the convective terms, and this process is inviscid. The filtering effect is controlled by an observable scale, usually about a grid length. A single rising bubble and the Rayleigh-Taylor instability are studied, in particular, to examine the performance of the observable method. A pseudo-spectral method, which introduces no numerical diffusion, is used for spatial discretization, and a Total Variation Diminishing (TVD) Runge-Kutta method is applied for time integration. The observable incompressible Euler equations are solved for these two problems. In the rising bubble problem, the terminal velocity and shape of the bubble are examined and compared with experiments and other numerical results. In the Rayleigh-Taylor instability, the shape of the interface is studied for different observable scales, and the spike and bubble velocities, as well as positions (under a proper observable scale), are compared with other simulation results. The results indicate that this regularization technique can potentially regularize the sharp interface in two-phase flow simulations.Keywords: Euler equations, incompressible flow simulation, inviscid regularization technique, two-phase flow
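The ingredients named above (inviscid filtering of the convective term, pseudo-spectral space discretization, TVD Runge-Kutta in time) can be illustrated on a much simpler 1-D analogue, Leray-type filtered advection u_t + ū u_x = 0 with a Helmholtz filter ū_hat = u_hat/(1 + α²k²). This is only a structural sketch, not the paper's two-phase observable Euler solver:

```python
import numpy as np

N, alpha, dt, steps = 128, 0.1, 1e-3, 500
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers

def rhs(u):
    """Filtered (Leray-type) convective term, pseudo-spectral in space."""
    u_hat = np.fft.fft(u)
    ubar = np.real(np.fft.ifft(u_hat / (1 + alpha**2 * k**2)))  # filtered u
    ux = np.real(np.fft.ifft(1j * k * u_hat))                   # spectral u_x
    return -ubar * ux

u = np.sin(x)
for _ in range(steps):                     # TVD (SSP) third-order Runge-Kutta
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    u = u / 3 + 2 / 3 * (u2 + dt * rhs(u2))
print(np.isfinite(u).all(), float(np.abs(u).max()))
```

The filter scale α plays the role of the observable scale: only the convective velocity is smoothed, and no viscous term is added, so the regularization is inviscid.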
Procedia PDF Downloads 50219499 Reflection on Using Bar Model Method in Learning and Teaching Primary Mathematics: A Hong Kong Case Study
Authors: Chui Ka Shing
Abstract:
This case study examines the use of the Bar Model Method in learning and teaching mathematics in a primary school in Hong Kong. The objectives of the study are to find out to what extent (a) the Bar Model Method approach enhances the construction of students’ mathematics concepts, and (b) the school-based mathematics curriculum development adopts the Bar Model Method approach. The case study illuminates the effectiveness of using the Bar Model Method to solve mathematics problems from Primary 1 to Primary 6. Some effective pedagogies and assessments were developed to strengthen the use of the Bar Model Method across year levels. Suggestions, including school-based curriculum development for using the Bar Model Method, and directions for further study are discussed.Keywords: bar model method, curriculum development, mathematics education, problem solving
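The unit-bar reasoning behind the method can be shown on a generic word problem (illustrative, not taken from the study): "A has 3 times as many stickers as B; together they have 48." The bar model draws B as one unit bar and A as three equal unit bars:

```python
# Bar model: B = 1 unit, A = 3 units, so 4 equal unit bars make up 48.
total, units = 48, 1 + 3
unit = total // units          # each unit bar represents 12 stickers
a, b = 3 * unit, 1 * unit
print(a, b)  # 36 12
```

The pedagogical point is that the bars make the "4 equal units = 48" step visible before any algebraic notation is introduced.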
Procedia PDF Downloads 22019498 An Analytical Method for Bending Rectangular Plates with All Edges Clamped Supported
Authors: Yang Zhong, Heng Liu
Abstract:
The decoupling method and the modified Navier method are combined for accurate bending analysis of rectangular thick plates with all edges clamped. The basic governing equations for Mindlin plates are first decoupled into independent partial differential equations which can be solved separately. Using the modified Navier method, the analytic solution of a rectangular thick plate with all edges clamped is then derived. The solution method used in this paper avoids the complicated derivation of coefficients and obtains the solution directly. Numerical comparisons confirm the correctness and accuracy of the results.Keywords: Mindlin plates, decoupling method, modified Navier method, bending rectangular plates
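As background to the modified Navier method, the classical (unmodified) Navier double-sine series solves the simply supported thin-plate case directly; the clamped boundary conditions of this paper are what require the modification. A sketch of the classical series for a uniformly loaded simply supported square plate:

```python
import math

def navier_center_deflection(q=1.0, a=1.0, D=1.0, terms=50):
    """Central deflection of a uniformly loaded, simply supported square
    thin plate via the classical Navier double-sine series:
      w = (16 q)/(pi^6 D) * sum over odd m, n of
          sin(m pi x/a) sin(n pi y/a) / (m n (m^2/a^2 + n^2/a^2)^2),
    evaluated at the center x = y = a/2."""
    w = 0.0
    for m in range(1, 2 * terms, 2):
        for n in range(1, 2 * terms, 2):
            w += (math.sin(m * math.pi / 2) * math.sin(n * math.pi / 2)
                  / (m * n * ((m / a) ** 2 + (n / a) ** 2) ** 2))
    return 16 * q / (math.pi ** 6 * D) * w

w_c = navier_center_deflection()
print(round(w_c, 5))  # ~0.00406 * q a^4 / D, the classical benchmark value
```

Because each double-sine term satisfies the simply supported conditions term by term, no coefficient system needs solving; the clamped case breaks this property, which is what the decoupling and modification in the paper address.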
Procedia PDF Downloads 60019497 Modern Methods of Technology and Organization of Production of Construction Works during the Implementation of Construction 3D Printers
Authors: Azizakhanim Maharramli
Abstract:
The gradual transition from entrenched traditional technology and organization of construction production to innovative additive construction technology inevitably meets technological, technical, organizational, labour and, finally, social difficulties. The chosen nodal method can help eliminate these difficulties by combining some of the usual construction methods, and it counters the perception, widespread in world practice, that the labour force will be subject to sharp reductions. The nodal method of additive technology will create favourable conditions for an optimal distribution of labour across facilities, owing to the consistent performance of homogeneous work and the introduction of additive technology alongside traditional technology into construction production.Keywords: parallel method, sequential method, stream method, combined method, nodal method
Procedia PDF Downloads 94