Search results for: algorithm techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9585

7245 A Phishing Email Detection Approach Using Machine Learning Techniques

Authors: Kenneth Fon Mbah, Arash Habibi Lashkari, Ali A. Ghorbani

Abstract:

Phishing e-mails are a security issue that not only annoys online users but has also resulted in significant financial losses for businesses. Phishing advertisements and pornographic e-mails are difficult to detect, as attackers have become increasingly intelligent and professional. Attackers track users and adjust their attacks based on users' interests and on hot topics extracted from community news and journals. This research focuses on deceptive phishing attacks and their variants, such as attacks through advertisements and pornographic e-mails. We propose a framework called Phishing Alerting System (PHAS) to accurately classify e-mails as phishing, advertisement, or pornographic. PHAS is able to detect all types of deceptive e-mails and alert users, supporting them in decision making. A well-known e-mail dataset was used for these experiments; based on previously extracted features, a detection accuracy of 93.11% is obtainable using the J48 and KNN machine learning techniques. Our proposed framework achieved approximately the same accuracy as the benchmark on this dataset.
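
As a sketch of the classification step, not the authors' implementation, the following trains the two named classifiers on a synthetic stand-in for the previously extracted e-mail features; scikit-learn's DecisionTreeClassifier (CART) approximates J48 (C4.5).

```python
# Sketch only: scikit-learn's CART tree stands in for J48 (C4.5); the feature
# matrix is a synthetic placeholder for the previously extracted e-mail features.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((600, 12))  # e.g., URL counts, keyword flags, header features
y = rng.choice(["phishing", "advertisement", "pornographic"], size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
for name, clf in [("J48-like tree", DecisionTreeClassifier()),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```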

Keywords: phishing e-mail, phishing detection, anti-phishing, alarm system, machine learning

Procedia PDF Downloads 324
7244 An Efficient Backward Semi-Lagrangian Scheme for Nonlinear Advection-Diffusion Equation

Authors: Soyoon Bak, Sunyoung Bu, Philsu Kim

Abstract:

In this paper, a backward semi-Lagrangian scheme combined with the second-order backward difference formula is designed to compute numerical solutions of nonlinear advection-diffusion equations. The primary aims of this paper are to remove any iteration process and to obtain an efficient algorithm with second-order convergence in time. To achieve these objectives, we use the second-order central finite difference to approximate the diffusion term and B-spline approximations of degree 2 and 3 for the spatial discretization. For the temporal discretization, the second-order backward difference formula is applied. To compute the starting points of the characteristic curves, we use the error-correction methodology recently developed by the authors. The proposed algorithm turns out to be completely iteration-free, which resolves the main weakness of the conventional backward semi-Lagrangian method. The adaptability of the proposed method is demonstrated by numerical simulations of Burgers' equations. These simulations show that the numerical results are in good agreement with the analytic solution and that the present scheme offers better accuracy than other existing numerical schemes.
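
To make the backward semi-Lagrangian idea concrete, here is a minimal sketch, not the authors' BDF2 error-correction scheme, that advances 1D linear advection one step at a time by tracing characteristics backward and interpolating with a cubic spline (SciPy assumed); the speed, grid, and step count are illustrative.

```python
# Minimal 1D semi-Lagrangian step for u_t + a u_x = 0: each grid point takes the
# value found at the foot of its characteristic, traced backward over one step.
import numpy as np
from scipy.interpolate import CubicSpline

a, dt, L = 1.0, 0.05, 2.0 * np.pi      # advection speed, time step, domain length
x = np.linspace(0.0, L, 201)
u = np.sin(x)
u[-1] = u[0]                           # enforce exact periodicity for the spline

for _ in range(20):
    x_dep = (x - a * dt) % L           # feet of the characteristics
    u = CubicSpline(x, u, bc_type="periodic")(x_dep)

# After 20 steps the profile approximates the exact solution sin(x - a * 20 * dt).
print("max error:", np.abs(u - np.sin(x - a * 20 * dt)).max())
```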

Keywords: Semi-Lagrangian method, iteration-free method, nonlinear advection-diffusion equation, second-order backward difference formula

Procedia PDF Downloads 310
7243 Genetic Programming: Principles, Applications and Opportunities for Hydrological Modelling

Authors: Oluwaseun K. Oyebode, Josiah A. Adeyemo

Abstract:

Hydrological modelling plays a crucial role in the planning and management of water resources, especially in water-stressed regions where the need to effectively manage the available water resources is of critical importance. However, due to the complex, nonlinear, and dynamic behaviour of hydro-climatic interactions, achieving reliable modelling of water resource systems and accurate projection of hydrological parameters is extremely challenging. Although a significant number of modelling techniques (process-based and data-driven) have been developed and adopted in that regard, the field of hydrological modelling is still considered to have progressed sluggishly over the past decades, largely because of the degree of uncertainty identified in the methodologies and results of the techniques adopted. In recent times, evolutionary computation (EC) techniques have been developed and introduced in response to the search for efficient and reliable means of providing accurate solutions to hydrological problems. This paper presents a comprehensive review of the underlying principles, methodological needs, and applications of a promising evolutionary computation modelling technique, genetic programming (GP). It examines the specific characteristics of the technique which make it suitable for solving hydrological modelling problems. It discusses the opportunities inherent in the application of GP in water-related studies such as rainfall estimation, rainfall-runoff modelling, streamflow forecasting, sediment transport modelling, water quality modelling, and groundwater modelling, among others. Furthermore, the means by which such opportunities could be harnessed in the near future are discussed. In all, a case is made for the full adoption of GP and its variants in hydrological modelling studies, so as to put in place strategies that would translate into meaningful progress in the modelling of water resource systems and positively influence decision-making by relevant stakeholders.

Keywords: computational modelling, evolutionary algorithms, genetic programming, hydrological modelling

Procedia PDF Downloads 278
7242 Case for Simulating Consumer Response to Feed in Tariff Based on Socio-Economic Parameters

Authors: Fahad Javed, Tasneem Akhter, Maria Zafar, Adnan Shafique

Abstract:

Evaluation and quantification of techniques is a critical element of the research and development of technology. Simulations and models play an important role in providing the tools for such assessments. When we look at technologies that impact, or depend on, the average consumer, modeling the socio-economic and psychological aspects of the consumer also gains importance. A feed-in tariff deployed for home consumers may turn many consumers into adopters of the technology. Understanding how consumers will adopt such technologies thus holds as much significance as evaluating how the techniques work in consumer-agnostic scenarios. In this paper we first build the case for simulators that accommodate the socio-economic realities of consumers in evaluating smart grid technologies, provide a glossary of data that can aid in this effort, and present an abstract model to capture and simulate consumers' adoption of, and behavioral response to, smart grid technologies. We provide a case study to demonstrate the power of such simulators.

Keywords: smart grids, simulation, socio-economic parameters, feed in tariff (FiT), forecasting

Procedia PDF Downloads 344
7241 Research Trends in Fine Arts Education Dissertations in Turkey

Authors: Suzan Duygu Bedir Erişti

Abstract:

The present study makes a general evaluation of the dissertations conducted in the last decade in the field of art education in the Departments of Fine Arts Education in the Institutes of Education Sciences in Turkey. The study covered most of the universities in Turkey that include an Institute of Education Sciences. A total of a hundred dissertations conducted in the Departments of Fine Arts Education at several universities (Anadolu, Gazi, Ankara, Marmara, Dokuz Eylul, Ondokuz Mayıs, Selcuk and Necmettin Erbakan) were identified via the open-access systems of the universities as well as via the Thesis Search System of the Higher Education Council; most were retrieved via the latter system, and the former was used when that failed. Consequently, most of the dissertations that had no access restriction and had appropriate content were obtained. The dissertations were examined through document analysis in terms of their research topics, research paradigms, contents, purposes, methodologies, data collection tools, and analysis techniques. The dissertations conducted in Institutes of Education Sciences have shown a development in quality, especially in recent years. A great majority were carried out at Gazi University and Marmara University, with a similar number conducted at the other universities. Taken together, the dissertations varied widely in their subject areas. Most adopted the quantitative paradigm, while in recent years more importance has been given to methods based on the qualitative paradigm. In addition, most of the quantitative dissertations were structured on the general survey model or the experimental research model. The use of statistical techniques was university-specific: some universities applied advanced statistical techniques, while others made moderate use of them. Most of the studies produced results generalizable to the levels of postgraduate and elementary school education. The studies were generally situated in face-to-face teaching processes, while some were designed in settings whose results are not generalizable to the face-to-face education system. It was seen that the dissertations conducted in the Departments of Fine Arts Education at the Institutes of Education Sciences in Turkey did not involve application-based approaches, including art-based or visual research, in terms of either research topic or methodology.

Keywords: fine arts education, dissertations, evaluation of dissertations, research trends in fine arts education

Procedia PDF Downloads 184
7240 Using the Point Analysis Algorithm (SANN) in Drought Analysis

Authors: Khosro Shafie Motlaghi, Amir Reza Salemian

Abstract:

In arid and semi-arid regions like our country, evapotranspiration constitutes the greatest portion of the water resource. Knowledge of its changes and of other climate parameters therefore plays an important role in the planning, development, and management of water resources. In this study, the long-term trends of evapotranspiration (ET0), average temperature, and monthly rainfall were tested. To do so, all synoptic stations in Iran were classified according to the De Martonne climate classification. The present research was carried out in the semi-arid climate of Iran, in which 14 synoptic stations with 30-year records were investigated using three methods: minimum square error, Mann-Kendall, and Wald-Wolfowitz. Evapotranspiration was calculated using the FAO Penman method. The results over the statistical period show that the evapotranspiration trend is positive for 24 percent of the stations, negative for 2 percent, and absent for 47 percent. Similarly, the temperature trend is positive for 22 percent of the stations, negative for 19 percent, and absent for 64 percent. The rainfall results show that the amount of rainfall at most stations did not exhibit a meaningful trend. The results of the Mann-Kendall method were similar to those of the minimum square error method. Based on these results, it can be concluded that in future years some regions will face increases in temperature and evapotranspiration.
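
As an illustration of the trend-testing step, the sketch below implements the basic Mann-Kendall statistic (without tie correction) on a synthetic 30-year ET0 series; the series values are made up for demonstration.

```python
# Basic Mann-Kendall trend test (no tie correction) on a synthetic annual series.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance of S, assuming no ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, z, 2 * (1 - norm.cdf(abs(z)))       # S, Z, two-sided p-value

rng = np.random.default_rng(1)
et0 = 1200 + 2.5 * np.arange(30) + rng.normal(0, 20, 30)  # rising annual ET0 (mm)
print(mann_kendall(et0))  # positive S with a small p-value indicates a rising trend
```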

Keywords: analysis, algorithm, SANN, ET0

Procedia PDF Downloads 280
7239 Digital Joint Equivalent Channel Hybrid Precoding for Millimeter Wave Massive Multiple Input Multiple Output Systems

Authors: Linyu Wang, Mingjun Zhu, Jianhong Xiang, Hanyu Jiang

Abstract:

To address the low spectral efficiency of hybrid precoding (HP) in current millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems, this paper proposes a digital joint equivalent channel hybrid precoding algorithm based on iterating the digital encoding matrix. First, the objective function is expanded to obtain the relation equation, and the pseudo-inverse iterative function of the analog encoder is derived using the pseudo-inverse method, which avoids the large increase in computation caused by the rank deficiency of the digital encoding matrix and reduces the overall complexity of hybrid precoding. Secondly, the analog coding matrix and the millimeter-wave sparse channel matrix are combined into an equivalent channel, which is subjected to singular value decomposition (SVD) to obtain a digital coding matrix; the derived pseudo-inverse iterative function is then used to iteratively regenerate the analog encoding matrix. The simulation results show that the proposed algorithm improves the system spectral efficiency by 10-20% compared with other algorithms, and its stability is also improved.
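
The equivalent-channel SVD stage can be sketched as follows, with NumPy, illustrative dimensions, a random channel in place of the mmWave sparse channel model, and an assumed SNR; this is not the authors' iterative algorithm, only the SVD-based digital step it builds on.

```python
# Illustrative equivalent-channel SVD stage of hybrid precoding (NumPy only).
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_rf, n_rx, n_s = 64, 8, 16, 4   # Tx antennas, RF chains, Rx antennas, streams

# Random stand-in for the mmWave sparse channel; phase-only analog precoder.
H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)
F_rf = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_tx, n_rf))) / np.sqrt(n_tx)

H_eq = H @ F_rf                        # equivalent channel seen by the digital stage
_, _, Vh = np.linalg.svd(H_eq)
F_bb = Vh.conj().T[:, :n_s]            # digital precoder from right singular vectors
F_bb *= np.sqrt(n_s) / np.linalg.norm(F_rf @ F_bb, "fro")  # transmit power constraint

F = F_rf @ F_bb                        # overall precoder; assumed linear SNR of 10
se = np.log2(np.real(np.linalg.det(
    np.eye(n_rx) + (10.0 / n_s) * H @ F @ F.conj().T @ H.conj().T)))
print("spectral efficiency (bit/s/Hz):", se)
```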

Keywords: mmWave, massive MIMO, hybrid precoding, singular value decomposition, equivalent channel

Procedia PDF Downloads 79
7238 Spatial Relationship of Drug Smuggling Based on Geographic Information System Knowledge Discovery Using Decision Tree Algorithm

Authors: S. Niamkaeo, O. Robert, O. Chaowalit

Abstract:

In this investigation, we focus on discovering the spatial relationships of drug smuggling along the northern border of Thailand. Thailand is no longer a drug production site, but it is still one of the major drug trafficking hubs, because its topographic characteristics facilitate drug smuggling from neighboring countries. Our study areas cover three districts (Mae-jan, Mae-fahluang, and Mae-sai) in Chiangrai city and four districts (Chiangdao, Mae-eye, Chaiprakarn, and Wienghang) in Chiangmai city, where smuggling of crystal methamphetamine and amphetamine mostly occurs. Data on drug smuggling incidents from 2011 to 2017 were collected from several national and local news reports, and a geo-spatial drug smuggling database was prepared. A decision tree algorithm was applied to discover the spatial relationships of factors related to drug smuggling, which were converted into rules using a rule-based system. The factors land use type, smuggling route, season, and distance within 500 meters of checkpoints were found to be related to drug smuggling in the extracted rules. Drug smuggling occurred mostly in forest areas in winter, and mainly along topographic roads where checkpoints are not reachable. This spatial relationship of drug smuggling could support the Thai Office of the Narcotics Control Board in drug smuggling surveillance.
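
A minimal sketch of the decision-tree-to-rules step, using scikit-learn, with hypothetical feature names and toy records in place of the geo-spatial database:

```python
# Hypothetical feature names and toy records stand in for the geo-spatial database.
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier, export_text

records = pd.DataFrame({
    "land_use":        ["forest", "farm", "forest", "urban"],
    "season":          ["winter", "summer", "winter", "rainy"],
    "near_checkpoint": ["no", "yes", "no", "yes"],
    "smuggling":       ["yes", "no", "yes", "no"],
})
X = OrdinalEncoder().fit_transform(records.drop(columns="smuggling"))
tree = DecisionTreeClassifier(max_depth=3).fit(X, records["smuggling"])

# Print the tree as human-readable IF-THEN rules, as in the rule-based step.
print(export_text(tree, feature_names=["land_use", "season", "near_checkpoint"]))
```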

Keywords: decision tree, drug smuggling, Geographic Information System, GIS knowledge discovery, rule-based system

Procedia PDF Downloads 157
7237 A Variational Reformulation for the Thermomechanically Coupled Behavior of Shape Memory Alloys

Authors: Elisa Boatti, Ulisse Stefanelli, Alessandro Reali, Ferdinando Auricchio

Abstract:

Thanks to their unusual properties, shape memory alloys (SMAs) are good candidates for advanced applications in a wide range of engineering fields, such as automotive, robotics, civil, biomedical, and aerospace. In the last decades, the ever-growing interest in such materials has prompted several research studies aimed at modeling their complex nonlinear behavior in an effective and robust way. Since the constitutive response of SMAs is strongly thermomechanically coupled, the non-isothermal evolution of the material must be taken into consideration. The present study considers an existing three-dimensional phenomenological model for SMAs, able to reproduce the main SMA properties while maintaining a simple, user-friendly structure, and proposes a variational reformulation of the full non-isothermal version of the model. While the considered model has been thoroughly assessed in an isothermal setting, the proposed formulation makes it possible to address the full non-isothermal problem. In particular, the reformulation is inspired by the GENERIC (General Equations for Non-Equilibrium Reversible-Irreversible Coupling) formalism and is based on a generalized gradient flow of the total entropy, related to thermal and mechanical variables. Such a phrasing of the model is new and allows for a discussion of the model from both theoretical and numerical points of view. Moreover, it directly implies the dissipativity of the flow. A semi-implicit time-discrete scheme is also presented for the fully coupled thermomechanical system and is proven unconditionally stable and convergent. The corresponding algorithm is then implemented, under a space-homogeneous temperature field assumption, and tested under different conditions. The core of the algorithm is composed of a mechanical subproblem and a thermal subproblem, and the iterative scheme is solved by a generalized Newton method. Numerous uniaxial and biaxial tests are reported to assess the performance of the model and algorithm, including variable imposed strain, strain rate, heat exchange properties, and external temperature. In particular, heat exchange with the environment is the only source of rate-dependency in the model. The reported curves clearly display the interdependence between phase transformation strain and material temperature. The full thermomechanical coupling makes it possible to reproduce the exothermic and endothermic effects during forward and backward phase transformation, respectively. The numerical tests have thus demonstrated that the model can appropriately reproduce the coupled SMA behavior under different loading conditions and rates. Moreover, the algorithm has proved effective and robust. Further developments are being considered, such as the extension of the formulation to the finite-strain setting and the study of the boundary value problem.

Keywords: generalized gradient flow, GENERIC formalism, shape memory alloys, thermomechanical coupling

Procedia PDF Downloads 208
7236 Agile Software Effort Estimation Using Regression Techniques

Authors: Mikiyas Adugna

Abstract:

Effort estimation is among the activities carried out in software development processes, and an accurate estimation model leads to project success. Agile effort estimation is a complex task because of the dynamic nature of software development, and researchers are still conducting studies on it to enhance prediction accuracy. For these reasons, we investigated and propose a model based on LASSO and Elastic Net regression to enhance estimation accuracy. The proposed model has four major components: preprocessing, train-test split, training with default parameters, and cross-validation. During the preprocessing phase, the entire dataset is normalized. After normalization, a train-test split is performed, with 80% of the dataset used for training and 20% for testing. Following the split, the two regression algorithms (Elastic Net and LASSO) are trained in two phases. In the first phase, the two algorithms are trained using their default parameters and evaluated on the testing data. In the second phase, grid search (used to tune and select optimum parameters) is combined with 5-fold cross-validation to obtain the final trained model, which is then evaluated using the testing set. The experimental work is applied to an agile story point dataset of 21 software projects collected from six firms. The results show that both Elastic Net and LASSO regression outperformed the compared methods. Of the two, LASSO regression achieved the better predictive performance, with PRED(8%) and PRED(25%) of 100.0, MMRE of 0.0491, MMER of 0.0551, MdMRE of 0.0593, MdMER of 0.063, and MSE of 0.0007. The results imply that the trained LASSO regression model is the most acceptable and achieves higher estimation performance than reported in the literature.
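
The described pipeline maps closely onto standard scikit-learn components. The following is a hedged sketch with a synthetic stand-in for the story-point dataset and illustrative parameter grids:

```python
# Sketch of the described pipeline; the story-point data below is synthetic.
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.random((210, 6))                          # e.g., story-level features
y = X @ rng.random(6) + rng.normal(0, 0.05, 210)  # e.g., effort in story points

X = MinMaxScaler().fit_transform(X)               # normalization step
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

grids = [(Lasso(), {"alpha": [0.001, 0.01, 0.1, 1.0]}),
         (ElasticNet(), {"alpha": [0.001, 0.01, 0.1], "l1_ratio": [0.2, 0.5, 0.8]})]
for model, grid in grids:
    # Phase 1: default parameters; Phase 2: grid search with 5-fold CV.
    print(type(model).__name__, "default:", model.fit(X_tr, y_tr).score(X_te, y_te))
    tuned = GridSearchCV(model, grid, cv=5).fit(X_tr, y_tr)
    print("  tuned:", tuned.best_params_, tuned.score(X_te, y_te))
```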

Keywords: agile software development, effort estimation, elastic net regression, LASSO

Procedia PDF Downloads 43
7235 Contrast Enhancement of Color Images with Color Morphing Approach

Authors: Javed Khan, Aamir Saeed Malik, Nidal Kamel, Sarat Chandra Dass, Azura Mohd Affandi

Abstract:

Low-contrast images can result from wrong image-acquisition settings or poor illumination conditions. Such images may not be visually appealing and can be difficult for feature extraction. Contrast enhancement of color images can be useful in the medical field for visual inspection. In this paper, a new technique is proposed to improve the contrast of color images. The RGB (red, green, blue) color image is transformed into normalized RGB color space, and the adaptive histogram equalization technique is applied to each of the three channels of that space. The corresponding channels of the original (low-contrast) image and of the contrast-enhanced image obtained with adaptive histogram equalization (AHE) are morphed together in proper proportions. The proposed technique is tested on seventy color images of acne patients. The results are analyzed using cumulative variance and contrast improvement factor measures and are also compared with decorrelation stretch. Both subjective and quantitative analyses demonstrate that the proposed technique outperforms the other techniques.
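
A minimal sketch of the channel-wise morphing idea, with OpenCV's CLAHE standing in for the paper's adaptive histogram equalization, a synthetic low-contrast image, and an illustrative morphing proportion alpha:

```python
# Sketch: normalized RGB, per-channel equalization (CLAHE as a stand-in for AHE),
# then blend original and equalized channels in a chosen proportion.
import cv2
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(40, 120, (200, 200, 3)).astype(np.float32)  # low-contrast stand-in

norm = img / (img.sum(axis=2, keepdims=True) + 1e-6)           # normalized RGB
norm8 = (norm * 255).astype(np.uint8)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
eq = np.stack([clahe.apply(norm8[..., c]) for c in range(3)], axis=2)

alpha = 0.6                                                    # morphing proportion
enhanced = cv2.addWeighted(norm8, 1 - alpha, eq, alpha, 0)
print("std before/after:", norm8.std(), enhanced.std())        # crude contrast proxy
```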

Keywords: contrast enhancement, normalized RGB, adaptive histogram equalization, cumulative variance

Procedia PDF Downloads 359
7234 Laser Ultrasonic Imaging Based on Synthetic Aperture Focusing Technique Algorithm

Authors: Sundara Subramanian Karuppasamy, Che Hua Yang

Abstract:

In this work, the laser ultrasound technique is used to analyze and image inner defects in metal blocks. To detect defects in blocks, researchers have traditionally used piezoelectric transducers for the generation and reception of ultrasonic signals. These transducers can be configured as sparse or phased arrays, but both configurations have drawbacks, including the need for many transducers, time-consuming calculations, limited bandwidth, and confined image resolution. Here, we focus on a non-contact method of generating and receiving ultrasound to examine inner defects in aluminum blocks. A Q-switched pulsed laser is used for generation, and reception is done with a Laser Doppler Vibrometer (LDV). Based on the Doppler effect, the LDV provides a rapid, high-spatial-resolution way of sensing ultrasonic waves. From the LDV, a series of scanning points is selected to serve as the phased array elements. A side-drilled hole of 10 mm diameter at a depth of 25 mm is introduced, and the defect is interrogated by the linear array of scanning points obtained from the LDV. With the aid of the Synthetic Aperture Focusing Technique (SAFT) algorithm, based on the time-shifting principle, images are reconstructed from the A-scan data acquired from the 1-D linear phased array elements. The defect can thus be precisely detected with good resolution.
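
The time-shifting principle behind SAFT can be sketched as a delay-and-sum over the scan points; the geometry, wave speed, and synthetic single-reflector A-scans below are illustrative only.

```python
# Time-domain SAFT sketch: for each image pixel, sum the A-scan samples at the
# round-trip delay from every scan point; echoes align only at the true reflector.
import numpy as np

c, fs = 6300.0, 50e6                     # wave speed in aluminum (m/s), sample rate
elems = np.linspace(-0.01, 0.01, 41)     # x-positions of LDV scan points (m)
n_t = 4000
ascans = np.zeros((elems.size, n_t))

# Synthetic data: one point reflector at (0, 25 mm) produces a spike per A-scan.
for k, e in enumerate(elems):
    t = 2.0 * np.hypot(e, 0.025) / c
    ascans[k, int(t * fs)] = 1.0

xs = np.linspace(-0.01, 0.01, 81)        # image grid
zs = np.linspace(0.005, 0.04, 81)
image = np.zeros((zs.size, xs.size))
for i, z in enumerate(zs):
    for j, x in enumerate(xs):
        t = 2.0 * np.hypot(elems - x, z) / c               # two-way travel times
        idx = np.clip((t * fs).astype(int), 0, n_t - 1)
        image[i, j] = ascans[np.arange(elems.size), idx].sum()

zi, xi = np.unravel_index(image.argmax(), image.shape)
print("focus at x = %.4f m, z = %.4f m" % (xs[xi], zs[zi]))  # near (0, 0.025)
```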

Keywords: laser ultrasonics, linear phased array, nondestructive testing, synthetic aperture focusing technique, ultrasonic imaging

Procedia PDF Downloads 111
7233 Improvement of Bearing Capacity of Soft Clay Using Geo-Cells

Authors: Siddhartha Paul, Aman Harlalka, Ashim K. Dey

Abstract:

Soft clayey soil possesses poor bearing capacity and high compressibility, because of which foundations cannot be placed directly over soft clay. Normally, pile foundations are constructed to carry the load through the soft soil to the hard stratum below, but pile construction is costly and time-consuming. To improve the properties of soft clay, many ground improvement techniques, such as stone columns and preloading with or without sand drains/band drains, are in vogue; time is a constraint on the successful application of these techniques. Another way to improve the bearing capacity of soft clay and reduce the possibility of settlement is to place geocells below the foundation. The geocells impart rigidity to the foundation soil, reduce the net load intensity on the soil, and thus reduce the compressibility. A well-designed geocell-reinforced soil may replace a pile foundation. The present paper deals with the applicability of geocells to improving bearing capacity. It is observed that a properly designed geocell may increase the bearing capacity of soft clay by up to two and a half times.

Keywords: bearing capacity, geo-cell, ground improvement, soft clay

Procedia PDF Downloads 304
7232 Management of H. Armigera by Using Various Techniques

Authors: Ajmal Khan Kassi, Humayun Javed, Syed Abdul Qadeem

Abstract:

The study was conducted to find the best management practices against the American bollworm on the okra variety Arka Anamika during 2016. The management practices, viz. release of Trichogramma chilonis, hoeing and weeding, clipping, and the insect growth regulator (IGR) lufenuron, were tested individually and in all possible combinations for the control of the American bollworm at three diverse sites, viz. the University Research Farm Koont, NARC, and a farmer's field at Taxila. All the treatment combinations showed significant results with respect to fruit damage. The minimum fruit infestation, i.e., 3.20% and 3.58%, was recorded with the combined treatment (T. chilonis + hoeing + weeding + lufenuron) in two different localities. This combined treatment also resulted in the maximum yield at NARC and Taxila, i.e., 57.67 and 62.66 q/ha, respectively, and gave the best results for managing H. armigera. On the basis of the different integrated pest management techniques, the Arka Anamika variety proved comparatively resistant to H. armigera in different localities, so this variety is recommended for cultivation in the Pothwar region to obtain maximum yield.

Keywords: management, American bollworm, Arka Anamika, okra

Procedia PDF Downloads 40
7231 Carbon-Based Electrochemical Detection of Pharmaceuticals from Water

Authors: M. Ardelean, F. Manea, A. Pop, J. Schoonman

Abstract:

The presence of pharmaceuticals in the environment, and especially in water, has gained increasing attention. They belong to the emerging class of pollutants, and for most of them legal limits have not been set, because their impact on human health and the ecosystem has not been determined and/or no sufficiently advanced analytical method exists for their quantification. In this context, the development of advanced analytical methods for the quantification of pharmaceuticals in water is required. Electrochemical methods are known to exhibit great potential as high-performance analytical methods, but their performance is directly related to the electrode material and the operating technique. In this study, two types of carbon-based electrode materials, boron-doped diamond (BDD) and carbon nanofiber (CNF)-epoxy composite electrodes, were investigated through voltammetric techniques for the detection of naproxen in water. The comparative electrochemical behavior of naproxen (NPX) on both BDD and CNF electrodes was studied by cyclic voltammetry, and a well-defined peak corresponding to NPX oxidation was found for each electrode. NPX oxidation occurred on the BDD electrode at a potential of about +1.4 V/SCE (saturated calomel electrode) and at about +1.2 V/SCE on the CNF electrode. The sensitivities for NPX detection were similar for both carbon-based electrodes, and the CNF electrode was thus superior with respect to detection potential. Differential-pulsed voltammetry (DPV) and square-wave voltammetry (SWV) were exploited to improve the electroanalytical performance for NPX detection, and the best results, corresponding to a sensitivity of 9.959 µA·µM⁻¹, were achieved using DPV. In addition, the simultaneous detection of NPX and fluoxetine, a very common antidepressant drug also present in water, was studied using the CNF electrode, and very good results were obtained. The detection potential values, which allowed a good separation of the detection signals, together with the good sensitivities, were appropriate for the simultaneous detection of both tested pharmaceuticals. These results confirm the CNF electrode as a valuable tool for the individual or simultaneous detection of pharmaceuticals in water.

Keywords: boron-doped diamond electrode, carbon nanofiber-epoxy composite electrode, emerging pollutants, pharmaceuticals

Procedia PDF Downloads 266
7230 Estimating Pile Toe Levels for Capacity Assessment of Piers and Wharves in the Philippines

Authors: Ailvy Faith Zamora, Serj Donn David, Michael Anderson

Abstract:

There are a number of decades-old piers and wharves in Manila, Philippines, that are currently used for container and bulk cargo handling port operations. These structures fulfill a very important role in the economy and have therefore undergone rehabilitation and capacity assessment to accommodate current and future operational requirements. The capacity assessment includes structural and pile geotechnical evaluation. Unfortunately, old marine structures in the Philippines may not have a complete set of as-built information; in certain instances, critical information such as pile toe levels is missing from the documentation. A combination of direct tests, geophysical tests, and numerical analysis/modelling has been performed to estimate the existing pile toe levels of open-type piers and anchored quay-wall wharves in Manila. These techniques were applied to both concrete and steel piles. This paper presents the tools utilized, the testing setup, and the techniques used for estimating the toe levels of existing piles for these structures, including the challenges encountered and the solutions applied.

Keywords: geophysical testing, pile toe level, structural assessment, piers, wharves

Procedia PDF Downloads 105
7229 A Study of Using Different Printed Circuit Board Design Methods on Ethernet Signals

Authors: Bahattin Kanal, Nursel Akçam

Abstract:

Data transmission size and frequency requirements are increasing rapidly in electronic communication protocols, and increasing data transmission speeds have made the design of printed circuit boards much more important. It is important to carefully examine the requirements and perform analyses before and after the design of a digital electronic circuit board. This paper delves into impedance matching techniques, signal trace routing considerations, and the impact of layer stacking on signal performance. It extensively explores techniques for minimizing crosstalk and interference, presenting a holistic perspective on design strategies to optimize the quality of high-speed signals. Through a comprehensive review of these design methodologies, this study aims to provide insights into achieving reliable, high-performance printed circuit board layouts for such signals. In this study, the effect of different design methods on Ethernet signals was examined in terms of S-parameters. Siemens' HyperLynx software was used for the analyses.

Keywords: HyperLynx, printed circuit board, S-parameters, Ethernet

Procedia PDF Downloads 13
7228 Insights into Archaeological Human Sample Microbiome Using 16S rRNA Gene Sequencing

Authors: Alisa Kazarina, Guntis Gerhards, Elina Petersone-Gordina, Ilva Pole, Viktorija Igumnova, Janis Kimsis, Valentina Capligina, Renate Ranka

Abstract:

The human body is inhabited by a vast number of microorganisms, collectively known as the human microbiome, and there is tremendous interest in evolutionary changes in human microbial ecology, diversity, and function. The field of paleomicrobiology, the study of the ancient human microbiome, is powered by modern techniques of Next Generation Sequencing (NGS), which allow microbial genomic data to be extracted directly from the archaeological sample of interest. One of the major techniques is 16S rRNA gene sequencing, in which certain 16S rRNA gene hypervariable regions are amplified and sequenced. However, this method has some limitations, including the taxonomic precision and efficacy of the different regions used. The aim of this study was to evaluate the phylogenetic sensitivity of different 16S rRNA gene hypervariable regions for microbiome studies in archaeological samples. Towards this aim, archaeological bone samples and corresponding soil samples from each burial environment were collected in medieval cemeteries in Latvia. The Ion 16S™ Metagenomics Kit, targeting different 16S rRNA gene hypervariable regions, was used for library construction (Ion Torrent technologies). The sequenced data were analysed using appropriate bioinformatic techniques; alignment and taxonomic representation were done using the Mothur program. Sequences of the most abundant genera were further aligned to the E. coli 16S rRNA gene reference sequence using MEGA7 in order to identify the hypervariable region of the segment of interest. Our results showed that different hypervariable regions had different discriminatory power depending on the groups of microbes as well as on the nature of the samples. On the basis of our results, we suggest that a wider range of primers can provide a more accurate recapitulation of the microbial communities in archaeological samples. Acknowledgements: This work was supported by ERAF grant Nr. 1.1.1.1/16/A/101.

Keywords: 16S rRNA gene, ancient human microbiome, archaeology, bioinformatics, genomics, microbiome, molecular biology, next-generation sequencing

Procedia PDF Downloads 176
7227 General Architecture for Automation of Machine Learning Practices

Authors: U. Borasi, Amit Kr. Jain, Rakesh, Piyush Jain

Abstract:

Data collection, data preparation, model training, model evaluation, and deployment are all processes in a typical machine learning workflow. Training data needs to be gathered and organised; this often entails collecting a sizable dataset and cleaning it to remove or correct any inaccurate or missing information. Preparing the data for use in the machine learning model requires pre-processing it after it has been acquired. This often entails actions like scaling or normalising the data, handling outliers, selecting appropriate features, reducing dimensionality, etc. The pre-processed data is then used to train a model with some machine learning algorithm. After the model has been trained, it needs to be assessed by determining metrics like accuracy, precision, and recall on a test dataset. Every time a new model is built, both data pre-processing and model training, two crucial processes in the machine learning (ML) workflow, must be carried out. Various machine learning algorithms can be employed with every single approach to data pre-processing, generating a large set of combinations to choose from. For example: for every method of handling missing values (dropping records, replacing with the mean, etc.), for every scaling technique, and for every combination of features selected, a different algorithm can be used. As a result, in order to obtain the optimum outcomes, these tasks are frequently repeated in different combinations. This paper suggests a simple architecture for organizing this large combination set of pre-processing steps and algorithms into an automated workflow, which simplifies the task of exploring all possibilities.
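
The combination set can be sketched directly as a product over an operator pool, here with scikit-learn pipelines and an illustrative pool of two imputers, two scalers, and two models:

```python
# Enumerate every imputer x scaler x model combination as a scored pipeline,
# mimicking the operator-pool/scheduler idea on a small built-in dataset.
from itertools import product
from sklearn.datasets import load_iris
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
imputers = [SimpleImputer(strategy="mean"), SimpleImputer(strategy="median")]
scalers = [StandardScaler(), MinMaxScaler()]
models = [LogisticRegression(max_iter=500), DecisionTreeClassifier()]

# The scheduler's job reduces to iterating over the operator pool's product.
for imp, sc, mdl in product(imputers, scalers, models):
    pipe = Pipeline([("impute", imp), ("scale", sc), ("model", mdl)])
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{imp.strategy:>6} + {type(sc).__name__:<14} + "
          f"{type(mdl).__name__:<22} {score:.3f}")
```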

Keywords: machine learning, automation, AutoML, architecture, operator pool, configuration, scheduler

Procedia PDF Downloads 39
7226 Enhancing Sensitivity in Multifrequency Atomic Force Microscopy

Authors: Babak Eslami

Abstract:

Bimodal and trimodal AFM have added capabilities to scanning probe microscopy characterization techniques. These capabilities have specifically enhanced the material characterization of surfaces and provided subsurface imaging in addition to conventional topography images. Bimodal and trimodal AFM, two multifrequency AFM techniques, are based on exciting the cantilever's fundamental eigenmode simultaneously with its second, or second and third, eigenmodes. Although higher eigenmodes provide a larger number of observables that can yield additional information about the sample, they cause experimental challenges. In this work, different experimental approaches to enhancing multifrequency AFM images for different characterization goals are provided. The trade-offs between eigenmodes, including the advantages and disadvantages of using each mode for different samples (ranging from stiff to soft matter) in both air and liquid environments, are discussed. Additionally, the advantage of performing conventional single tapping-mode AFM with higher eigenmodes of the cantilever in order to reduce sample indentation is discussed. These analyses are performed on widely used polymers such as polystyrene and polymethyl methacrylate, as well as on air nanobubbles on different surfaces, in both air and liquid.

Keywords: multifrequency, sensitivity, soft matter, polymer

Procedia PDF Downloads 126
7225 Rank-Based Chain-Mode Ensemble for Binary Classification

Authors: Chongya Song, Kang Yen, Alexander Pons, Jin Liu

Abstract:

In the field of machine learning, ensembling has been employed as a common methodology for improving performance over multiple base classifiers. However, true predictions are often canceled out by false ones during consensus due to a phenomenon called the 'curse of correlation', manifested as strong interference among the predictions produced by the base classifiers. In addition, existing practices are still not able to effectively mitigate the problem of imbalanced classification. Based on the analysis of our experimental results, we conclude that both problems are caused by inherent deficiencies in the consensus approach. We therefore create an enhanced ensemble algorithm which adopts a designed rank-based chain-mode consensus to overcome the two problems. To evaluate the proposed ensemble algorithm, we employ the well-known benchmark dataset NSL-KDD (the improved version of KDDCup99 produced by the University of New Brunswick) to make comparisons between the proposed algorithm and 8 common ensemble algorithms. In particular, each compared ensemble classifier uses the same 22 base classifiers, so that the differences in the improvements in accuracy and reliability over the base classifiers can be truly revealed. As a result, the proposed rank-based chain-mode consensus proves to be a more effective ensemble solution than the traditional consensus approach, outperforming the 8 ensemble algorithms by 20% on almost all compared metrics, which include accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve.

Keywords: consensus, curse of correlation, imbalanced classification, rank-based chain-mode ensemble

Procedia PDF Downloads 120
7224 Concept of Using an Indicator to Describe the Quality of Fit of Clothing to the Body Using a 3D Scanner and CAD System

Authors: Monika Balach, Iwona Frydrych, Agnieszka Cichocka

Abstract:

The objective of this research is to develop an algorithm, taking into account material type and body type, that will describe the fabric properties and the quality of fit of a garment to the body. One of the objectives is to develop a new algorithm to simulate cloth draping within CAD/CAM software, as existing virtual fitting does not accurately simulate fabric draping behaviour. Part of the research into virtual fitting therefore focuses on the mechanical properties of fabrics. Material behaviour depends on many factors, including fibre, yarn, manufacturing process, fabric weight, textile finish, etc. For this study, several fabric types with very different mechanical properties will be selected and evaluated for all of the above characteristics. These fabrics include a thick woven cotton fabric, which is stiff and does not bend, and a woven fabric with elastic content, which is elastic and bends on the body. Within the virtual simulation, the following mechanical properties can be specified: shear, bending, weight, thickness, and friction. To help calculate these properties, the Kawabata Evaluation System (KES) can be used; this system was originally developed to measure the mechanical properties of fabric. In this research, the author will focus on three properties: bending, shear, and roughness. This study will consider current research using the KES system to understand and simulate fabric folding on the virtual body. Testing will help determine which material properties have the largest impact on the fit of the garment. By developing an algorithm that factors in body type, material type, and clothing function, it will be possible to determine how a specific type of clothing made from a particular material will fit a specific body shape and size. A fit indicator will display areas of stress on the garment, such as the shoulders, chest, waist, and hips. From this data, CAD/CAM software can be used to develop garments that fit with a very high degree of accuracy. This research therefore aims to provide an innovative solution for garment fitting that will aid the manufacture of clothing. It will help the clothing industry by cutting the cost of the clothing manufacturing process and reducing the cost spent on fitting: the manufacturing process can be made more efficient by virtually fitting the garment before the real clothing sample is made. Fitting software could also be integrated into clothing retailers' websites, allowing customers to enter their biometric data and determine how a particular garment and material type would fit their body.

Keywords: 3D scanning, fabric mechanical properties, quality of fit, virtual fitting

Procedia PDF Downloads 161
7223 Utility of Executive Function Training in Typically Developing Adolescents and Special Populations: A Systematic Review of the Literature

Authors: Emily C. Shepard, Caroline Sweeney, Jessica Grimm, Sophie Jacobs, Lauren Thompson, Lisa L. Weyandt

Abstract:

Adolescence is a critical phase of development in which individuals are prone to riskier behavior while also facing potentially life-changing decisions. This balance of increased behavioral risk and responsibility underlines the importance of executive functioning ability. In recent years, executive function training has emerged as a technique to enhance this cognitive ability. The aim of the present systematic review was to discuss the reported efficacy of executive function training techniques among adolescents. After reviewing 3110 articles, a total of 24 articles were identified that examined the role of executive function training techniques among adolescents (ages 10-19). The retrieved articles offered points of comparison across psychiatric and medical diagnoses, location of training, and stage of adolescence. Typically developing samples, as well as samples with attention-deficit hyperactivity disorder (ADHD), autism spectrum disorder (ASD), conduct disorder, and physical health concerns, were found, allowing the efficacy of the techniques to be compared in light of physical and psychological heterogeneity. Among typically developing adolescents, executive function training yielded nonsignificant or small-effect-size improvements in executive functioning, and in some cases executive functioning ability decreased following the training. In special populations, including those with ADHD, ASD, conduct disorder, and physical health concerns, significant differences and larger effect sizes in executive functioning were seen following treatment, particularly among individuals with ADHD. Future research is needed to identify the long-term efficacy of these treatments, as well as their generalizability to real-world conditions.

Keywords: adolescence, attention-deficit hyperactivity disorder, executive function, executive function training, traumatic brain injury

Procedia PDF Downloads 168
7222 Autonomous Ground Vehicle Navigation Based on a Single Camera and Image Processing Methods

Authors: Auday Al-Mayyahi, Phil Birch, William Wang

Abstract:

Vision-based navigation for an autonomous ground vehicle (AGV) equipped with a single camera in an indoor environment is presented. The proposed navigation algorithm detects obstacles represented by coloured mini-cones placed at different positions inside a corridor. To recognize the position and orientation of the AGV relative to the coloured mini-cones, the features of the corridor structure are extracted using the single-camera vision system. The relative position, offset distance, and steering angle of the AGV with respect to the coloured mini-cones are derived from the simple corridor geometry to obtain a mapped environment in real-world coordinates. The corridor is first captured as an image using the single camera, and image processing functions are then performed to identify the cones within the environment. A bounding box surrounding each cone identifies its location in a pixel coordinate system. By matching the mapped and pixel coordinates using a projection transformation matrix, the real offset distances between the camera and the obstacles are obtained. Real-time experiments in an indoor environment are carried out with a wheeled AGV to demonstrate the validity and effectiveness of the proposed algorithm.
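
A minimal sketch of the cone-detection step with OpenCV, using a synthetic frame, an illustrative HSV colour range, and a placeholder homography in place of the calibrated projection transformation matrix:

```python
# Synthetic frame with one orange blob standing in for a cone; H is a placeholder
# where a pre-calibrated projection transformation matrix would be used.
import cv2
import numpy as np

frame = np.zeros((240, 320, 3), np.uint8)
cv2.circle(frame, (160, 180), 20, (0, 100, 255), -1)    # BGR orange "cone"

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (5, 120, 120), (25, 255, 255))  # illustrative orange range
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 200]

H = np.eye(3)                                           # placeholder homography
for x, y, w, h in boxes:
    foot = np.array([x + w / 2.0, y + h, 1.0])          # cone's contact point, pixels
    X, Y, W = H @ foot
    print("cone at", X / W, Y / W)                      # mapped floor coordinates
```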

Keywords: autonomous ground vehicle, navigation, obstacle avoidance, vision system, single camera, image processing, ultrasonic sensor

Procedia PDF Downloads 287
7221 Aeromagnetic Data Interpretation and Source Body Evaluation Using Standard Euler Deconvolution Technique in Obudu Area, Southeastern Nigeria

Authors: Chidiebere C. Agoha, Chukwuebuka N. Onwubuariri, Collins U. Amasike, Tochukwu I. Mgbeojedo, Joy O. Njoku, Lawson J. Osaki, Ifeyinwa J. Ofoh, Francis B. Akiang, Dominic N. Anuforo

Abstract:

In order to interpret the airborne magnetic data and evaluate the approximate location, depth, and geometry of the magnetic sources within the Obudu area using the standard Euler deconvolution method, very high-resolution aeromagnetic data over the area were acquired, processed digitally, and analyzed using Oasis Montaj 8.5 software. Data analysis and enhancement techniques, including reduction to the equator, horizontal derivative, first and second vertical derivatives, upward continuation, and regional-residual separation, were carried out for detailed data interpretation. Standard Euler deconvolution for structural indices of 0, 1, 2, and 3 was also carried out, and the respective maps were obtained using the Euler deconvolution algorithm. Results show that the total magnetic intensity ranges from -122.9 nT to 147.0 nT and the regional intensity varies between -106.9 nT and 137.0 nT, while the residual intensity ranges between -51.5 nT and 44.9 nT, clearly indicating the masking effect of deep-seated structures over surface and shallow subsurface magnetic materials. Results also indicate that the positive residual anomalies have an NE-SW orientation, which coincides with the trend of major geologic structures in the area. Euler deconvolution for all the considered structural indices yields depths to magnetic sources ranging from the surface to more than 2000 m. Interpretation of the various structural indices revealed the locations and depths of the source bodies and the existence of geologic models including sills, dykes, pipes, and spherical structures. The area is characterized by intrusive and very shallow basement materials and represents an excellent prospect for solid mineral exploration and development.
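
Standard Euler deconvolution solves the homogeneity equation (x−x0)∂T/∂x + (y−y0)∂T/∂y + (z−z0)∂T/∂z = N(B−T) by least squares in a moving window. A sketch for a single window, checked against a synthetic source of structural index 1 (the field and gradients below are analytic stand-ins for gridded aeromagnetic data):

```python
# Least-squares Euler deconvolution in one window, with a synthetic check.
import numpy as np

def euler_window(x, y, z, T, dTdx, dTdy, dTdz, N):
    """Solve Euler's equation for [x0, y0, z0, B] over one data window."""
    A = np.column_stack([dTdx, dTdy, dTdz, N * np.ones_like(T)])
    b = x * dTdx + y * dTdy + z * dTdz + N * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol

# Synthetic field homogeneous of degree -1 (structural index N = 1) from a
# source buried at (500, 400, 150); observations on the surface z = 0.
rng = np.random.default_rng(0)
x, y, z = rng.uniform(0, 1000, 50), rng.uniform(0, 1000, 50), np.zeros(50)
r = np.sqrt((x - 500.0) ** 2 + (y - 400.0) ** 2 + (z - 150.0) ** 2)
T = 1e6 / r
dTdx = -1e6 * (x - 500.0) / r**3
dTdy = -1e6 * (y - 400.0) / r**3
dTdz = -1e6 * (z - 150.0) / r**3
print(euler_window(x, y, z, T, dTdx, dTdy, dTdz, N=1))  # ~ [500, 400, 150, 0]
```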

Keywords: Euler deconvolution, horizontal derivative, Obudu, structural indices

Procedia PDF Downloads 61
7220 Analysis of the Touch and Step Potential Characteristics of an Earthing System Based on Finite Element Method

Authors: Nkwa Agbor Etobi Arreneke

Abstract:

A well-designed earthing/grounding system will not only provide an effective path for the direct dissipation of fault currents into the earth/soil but will also ensure that personnel within and around its immediate perimeter are safe from the possibility of fatal electric shock. To achieve the latter, it is of paramount importance to ensure that both the step and touch potentials are kept within the allowable tolerances set by standard IEEE Std 80-2000. In this article, the step and touch potentials of an earthing system are simulated, their conformity is verified using the Finite Element Method (FEM), and they are found to be 242.4 V and 194.80 V, respectively. The position of the current injection is also analyzed to observe its effect on a person within, or in contact with, any active part of the substation earthing system. The values obtained closely match those of other published works that used different numerical methods and/or genetic algorithm (GA) simulations. The current study aims to shed more light on the dangers of the step and touch potentials of the earthing systems of substations and electrical facilities as a whole, and on the need for further in-depth analysis of these parameters. The observations in this paper show that the position of contact with an energized earthing system is of paramount importance in determining its effect on living organisms in contact with any energized part of the earthing system.

Keywords: earthing/grounding systems, finite element method (FEM), ground/earth resistance, safety, touch and step potentials, genetic algorithm

Procedia PDF Downloads 82
7219 Evaluation of Three Digital Graphical Methods of Baseflow Separation Techniques in the Tekeze Water Basin in Ethiopia

Authors: Alebachew Halefom, Navsal Kumar, Arunava Poddar

Abstract:

The purpose of this work is to specify the parameter values and the base flow index (BFI), and to rank the methods that should be used for base flow separation. Three different digital graphical approaches were chosen and used in this study for the purpose of comparison. Daily time-series discharge data were collected from the site for a period of 30 years (1986 to 2015) and used to evaluate the algorithms. In order to separate the base flow from the surface runoff, the daily recorded streamflow (m³/s) data were used to calibrate the procedures and obtain parameter values for the basin. Additionally, the performance of the model was assessed using the standard error (SE), the coefficient of determination (R²), the flow duration curve (FDC), and baseflow indices. The findings indicate that, in general, each strategy can be used worldwide to differentiate base flow; however, the Sliding Interval Method (SIM) performs significantly better than the other two techniques in this basin. The average base flow index was calculated to be 0.72 using the local minimum method, 0.76 using the fixed interval method, and 0.78 using the sliding interval method.
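
Of the three graphical methods, the local minimum method is the simplest to sketch: connect local discharge minima and cap the result at the total flow. The window size and the synthetic series below are illustrative (in practice the search interval is derived from the drainage area):

```python
# Local-minimum baseflow separation on a synthetic daily streamflow series.
import numpy as np

def local_minimum_baseflow(q, n_star=5):
    """Connect local discharge minima (within +/- n_star days) by straight lines."""
    q = np.asarray(q, dtype=float)
    idx = [i for i in range(q.size)
           if q[i] == q[max(0, i - n_star):i + n_star + 1].min()]
    base = np.interp(np.arange(q.size), idx, q[idx])
    return np.minimum(base, q)            # baseflow cannot exceed total streamflow

rng = np.random.default_rng(2)
days = np.arange(365)
flow = 8.0 + 4.0 * np.sin(2 * np.pi * days / 365)         # slow seasonal baseflow
flow[rng.integers(0, 365, 25)] += rng.uniform(5, 30, 25)  # storm peaks
bf = local_minimum_baseflow(flow)
print("BFI =", bf.sum() / flow.sum())     # base flow index for the series
```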

Keywords: baseflow index, digital graphical methods, streamflow, Emba Madre Watershed

Procedia PDF Downloads 61
7218 Impact of Different Modulation Techniques on the Performance of Free-Space Optics

Authors: Naman Singla, Ajay Pal Singh Chauhan

Abstract:

The demand for high bit rates and high bandwidth is increasing rapidly, so there is a need to examine this problem and find a technology that provides both. One possible solution is optical fiber, which provides bandwidth in the THz range. However, optical fiber has the disadvantage of high cost and cannot be used everywhere, because it is not possible to reach all locations on earth; it also requires costly maintenance. A technology that is broadly similar to optical fiber is Free Space Optics (FSO). FSO is a line-of-sight technology in which a modulated optical beam, whether infrared or visible, is used to transfer information from one point to another through the atmosphere, which acts as the channel. This paper concentrates on analyzing the performance of FSO in terms of bit error rate (BER) and quality factor (Q) using different modulation techniques, namely non-return-to-zero on-off keying (NRZ-OOK), differential phase-shift keying (DPSK), and differential quadrature phase-shift keying (DQPSK), using OptiSystem software. The findings show that an FSO system based on the DQPSK modulation technique performs best.
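
For Gaussian noise and OOK-style detection, the BER and Q factor reported by tools such as OptiSystem are linked by BER = 0.5 · erfc(Q/√2), as the short sketch below shows:

```python
# Convert a Q factor to the corresponding BER under the Gaussian-noise assumption.
from math import erfc, sqrt

for q in (3, 6, 7):
    print(f"Q = {q}: BER = {0.5 * erfc(q / sqrt(2)):.3e}")
# Q = 6 gives roughly 1e-9, a common benchmark for acceptable link quality.
```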

Keywords: attenuation, bit rate, free space optics, link length

Procedia PDF Downloads 334
7217 Algorithm Development of Individual Lumped Parameter Modelling for Blood Circulatory System: An Optimization Study

Authors: Bao Li, Aike Qiao, Gaoyang Li, Youjun Liu

Abstract:

Background: The lumped parameter model (LPM) is a common numerical model for hemodynamic calculation. LPM uses circuit elements to simulate the human blood circulatory system, and physiological indicators and characteristics can be acquired through the model. However, because physiological indicators differ between individuals, the parameters in an LPM should be personalized for the calculated results to be convincing and to reflect individual physiological information. This study aimed to develop an automatic and effective optimization method to personalize the parameters in an LPM of the blood circulatory system, which is of great significance to the numerical simulation of individual hemodynamics. Methods: A closed-loop LPM of the human blood circulatory system applicable to most persons was established based on anatomical structures and physiological parameters. Patient-specific physiological data of 5 volunteers were collected non-invasively as personalization objectives for the individual LPMs. In this study, the blood pressure and flow rate of the heart, brain, and limbs were the main concerns. The collected systolic blood pressure, diastolic blood pressure, cardiac output, and heart rate were set as objective data, and the waveforms of carotid artery flow and ankle pressure were set as objective waveforms. With the collected data and waveforms as targets, a sensitivity analysis of each parameter in the LPM was conducted to determine the sensitive parameters that have an obvious influence on the objectives. Simulated annealing was adopted to iteratively optimize the sensitive parameters, with the objective function during optimization being the root mean square error between the collected and simulated waveforms and data. Each parameter in the LPM was optimized 500 times. Results: The sensitive parameters in the LPM were optimized according to the collected data of the 5 individuals. Results show a slight error between the collected and simulated data: the average relative root mean square errors of all optimization objectives for the 5 samples were 2.21%, 3.59%, 4.75%, 4.24%, and 3.56%, respectively. Conclusions: The slight errors demonstrate the good effect of the optimization. The individual modeling algorithm developed in this study can effectively achieve the individualization of an LPM for the blood circulatory system. An LPM with individual parameters can output the individual physiological indicators after optimization, which are applicable to the numerical simulation of patient-specific hemodynamics.
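
The optimization loop can be sketched as plain simulated annealing over the sensitive parameters, with a toy stand-in for the LPM solver (a real objective would come from solving the circuit model's equations against the measured waveforms):

```python
# Simulated annealing over model parameters, minimizing waveform RMSE.
# `simulate_lpm` is a hypothetical placeholder for the LPM circuit solver.
import numpy as np

rng = np.random.default_rng(0)

def simulate_lpm(params):          # toy stand-in: a real solver integrates the LPM ODEs
    t = np.linspace(0, 1, 100)
    return params[0] * np.sin(2 * np.pi * t) + params[1]

measured = simulate_lpm(np.array([1.3, 0.4])) + rng.normal(0, 0.01, 100)
rmse = lambda p: np.sqrt(np.mean((simulate_lpm(p) - measured) ** 2))

p, temp = np.array([1.0, 0.0]), 1.0
for _ in range(500):               # 500 iterations, as in the paper's per-parameter count
    cand = p + rng.normal(0, 0.05, size=p.size)
    d = rmse(cand) - rmse(p)
    if d < 0 or rng.random() < np.exp(-d / temp):   # occasionally accept worse moves
        p = cand
    temp *= 0.99                   # cooling schedule (illustrative)
print("recovered parameters:", p, "RMSE:", rmse(p))
```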

Keywords: blood circulatory system, individual physiological indicators, lumped parameter model, optimization algorithm

Procedia PDF Downloads 126
7216 Delineation of the Geoelectric and Geovelocity Parameters in the Basement Complex of Northwestern Nigeria

Authors: M. D. Dogara, G. C. Afuwai, O. O. Esther, A. M. Dawai

Abstract:

The geology of northern Nigeria is under intense investigation, particularly that of the northwest, which is believed to be part of the basement complex. The lithology is highly variable and inconsistent; hence the need for a close-range study. It is in view of the above that two geophysical techniques, vertical electrical sounding employing the Schlumberger array and the seismic refraction method, were used to delineate the geoelectric and geovelocity parameters of the basement complex of northwestern Nigeria. A total area of 400,000 m² was covered, with sixty geoelectric stations established and sixty sets of seismic refraction data collected using the forward and reverse method. The interpretation of the resistivity data suggests that the area is underlain by not more than five geoelectric layers of varying thickness and resistivity when a maximum half-electrode spread of 100 m is used. The interpreted seismic data revealed two geovelocity layers, with velocities ranging from 478 m/s to 1666 m/s in the first layer and from 1166 m/s to 7141 m/s in the second layer. The results of the two techniques suggest that the study area has an undulating bedrock topography, with geoelectric and geovelocity layers composed of weathered rock materials.
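
For forward-and-reverse refraction data such as these, a standard two-layer interpretation gives the depth to the refractor from the crossover distance x_c as z₁ = (x_c/2) · √((v₂ − v₁)/(v₂ + v₁)); a quick sketch with illustrative values within the reported velocity ranges:

```python
# Two-layer refraction depth from the crossover distance; values are illustrative
# picks within the velocity ranges reported in the abstract.
from math import sqrt

v1, v2, x_c = 900.0, 4500.0, 40.0    # layer velocities (m/s), crossover distance (m)
z1 = (x_c / 2.0) * sqrt((v2 - v1) / (v2 + v1))
print(f"depth to refractor: {z1:.1f} m")
```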

Keywords: basement complex, delineation, geoelectric, geovelocity, Nigeria

Procedia PDF Downloads 135