Search results for: feature method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19992

19332 Reduced Differential Transform Methods for Solving the Fractional Diffusion Equations

Authors: Yildiray Keskin, Omer Acan, Murat Akkus

Abstract:

In this paper, the solution of fractional diffusion equations is presented by means of the reduced differential transform method. Fractional partial differential equations have special importance in engineering and the sciences. Application of the reduced differential transform method to this problem shows the rapid convergence of the sequence constructed by the method to the exact solution. The numerical results show that the approach is easy to implement and accurate when applied to fractional diffusion equations. The method provides a promising tool for solving many fractional partial differential equations.
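For orientation, the setting can be sketched as follows (the notation below is assumed for illustration and is not quoted from the paper): the time-fractional diffusion equation with a Caputo derivative of order α, and the recurrence that the reduced differential transform induces on the series coefficients.

```latex
% Sketch under assumed notation: time-fractional diffusion with a Caputo derivative
\frac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}} = D\,\frac{\partial^{2} u(x,t)}{\partial x^{2}},
\qquad 0 < \alpha \le 1 .
% Writing u(x,t) = \sum_{k\ge 0} U_k(x)\, t^{k\alpha}, the reduced differential transform
% converts the equation into the recurrence
\frac{\Gamma\big((k+1)\alpha + 1\big)}{\Gamma\big(k\alpha + 1\big)}\, U_{k+1}(x)
  = D\,\frac{\partial^{2} U_k(x)}{\partial x^{2}},
\qquad U_0(x) = u(x,0),
% so each series term follows from the previous one by differentiation, which is what
% produces the rapidly convergent sequence mentioned in the abstract.
```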

Keywords: fractional diffusion equations, Caputo fractional derivative, reduced differential transform method, partial

Procedia PDF Downloads 525
19331 Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach

Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar

Abstract:

Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with Lung Cancer (LC) have an average life expectancy of five years provided diagnosis, detection, and prediction are made early, which reduces the number of cases where treatment is limited to high-risk invasive surgery and thus increases the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are commonly used for the earlier detection of cancer. A Gaussian filter along with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) for image enhancement gives the best results. The lung cavities are extracted, the background other than the two lung cavities is completely removed, and the right and left lungs are segmented separately. Region property measurements (area, perimeter, diameter, centroid, and eccentricity) are computed for the tumor-segmented image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; feature extraction provides the Region of Interest (ROI) given as input to the classifier. Two levels of classification are employed: K-Nearest Neighbor (KNN) is used to determine whether the patient's condition is normal or abnormal, while an Artificial Neural Network (ANN) is used to identify the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technology shows encouraging results for real-time information and online detection in future research.
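A minimal sketch of the GLCM texture stage and the first-level KNN classifier described above, assuming scikit-image and scikit-learn are available (the function and variable names here are illustrative, not the authors' pipeline):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(roi_u8):
    """Texture descriptors from a segmented ROI (8-bit grayscale image)."""
    glcm = graycomatrix(roi_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# First-level classification (normal vs. abnormal) with KNN on texture features.
# rois / labels would come from labelled, pre-processed CT slices.
def train_knn(rois, labels, k=3):
    X = np.vstack([glcm_features(r) for r in rois])
    return KNeighborsClassifier(n_neighbors=k).fit(X, labels)
```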

Keywords: artificial neural networks, ANN, discrete wavelet transform, DWT, gray-level co-occurrence matrix, GLCM, k-nearest neighbor, KNN, region of interest, ROI

Procedia PDF Downloads 153
19330 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks

Authors: Ahmed Abdullah Ahmed

Abstract:

The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although previous studies have made great efforts to devise various methods, their performance, especially in terms of accuracy, falls short, and the room for improvement is still wide open. The proposed technique employs optimal codebook based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most classical codebook based approaches, which segment the writing into graphemes, this study is based on fragmenting particular areas of the writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to divide the beginning and ending zones of the handwriting into small fragments. Similar fragments of beginning strokes are grouped together to create the beginning clusters, and similarly the ending strokes are grouped to create the ending clusters. These two sets of clusters lead to the development of two codebooks (beginning and ending) by choosing the center of each group of similar fragments. Writings under study are then represented by computing the probability of occurrence of codebook patterns. This probability distribution is used to characterize each writer, and two writings are compared by computing distances between their respective probability distributions. Evaluations were carried out on the standard ICFHR dataset of 206 writers using the beginning and ending codebooks separately. The ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on the ICFHR dataset.
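The codebook-and-occurrence-histogram idea can be illustrated as follows, assuming the beginning/ending fragments have already been extracted as fixed-length feature vectors (a simplified sketch with hypothetical names, not the authors' implementation):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(fragments, n_codes=64):
    """Cluster beginning- or ending-stroke fragments; the cluster centers form the codebook."""
    return KMeans(n_clusters=n_codes, n_init=10, random_state=0).fit(fragments)

def writer_descriptor(codebook, fragments):
    """Probability of occurrence of each codebook pattern in one writing sample."""
    counts = np.bincount(codebook.predict(fragments),
                         minlength=codebook.n_clusters).astype(float)
    return counts / counts.sum()

def writer_distance(p, q, eps=1e-12):
    """Chi-square distance between two occurrence distributions (one of several possible choices)."""
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))
```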

Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments

Procedia PDF Downloads 512
19329 Web Data Scraping Technology Using Term Frequency Inverse Document Frequency to Enhance the Big Data Quality on Sentiment Analysis

Authors: Sangita Pokhrel, Nalinda Somasiri, Rebecca Jeyavadhanam, Swathi Ganesan

Abstract:

Tourism is a booming industry with huge future potential for global wealth and employment. Countless data are generated on social media sites every day, creating numerous opportunities to bring more insights to decision-makers. The integration of Big Data Technology into the tourism industry will allow companies to learn where their customers have been and what they like. This information can then be used by businesses, such as those in charge of managing visitor centers or hotels, and tourists can get a clear idea of places before visiting. From a natural language processing perspective, the sentiment features of online reviews from tourists are analysed, and we then supply an enhanced long short-term memory (LSTM) framework for sentiment feature extraction from travel reviews. For experimental validation, we have constructed a web review database using a crawler and web scraping techniques to evaluate the effectiveness of our methodology. The sentences were first classified with the VADER and RoBERTa models to obtain the polarity of the reviews. In this paper, we have studied feature extraction methods such as Count Vectorization and TF-IDF Vectorization and implemented a Convolutional Neural Network (CNN) classifier for the sentiment analysis, deciding whether the tourist’s attitude towards a destination is positive, negative, or neutral based on the review text posted online. The results show that, after pre-processing and cleaning the dataset, the CNN algorithm achieved an accuracy of 96.12% for positive and negative sentiment analysis.
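The vectorization step can be illustrated with scikit-learn; this sketch covers only the Count/TF-IDF feature extraction, not the VADER/RoBERTa polarity labelling or the CNN classifier, and the sample reviews are invented:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

reviews = ["the hotel staff were friendly and the view was amazing",
           "overcrowded, overpriced and the room was dirty"]

# Raw term counts versus TF-IDF weights for the same review corpus.
count_X = CountVectorizer(stop_words="english").fit_transform(reviews)
tfidf_X = TfidfVectorizer(stop_words="english", ngram_range=(1, 2)).fit_transform(reviews)

print(count_X.shape, tfidf_X.shape)  # sparse matrices: (n_reviews, n_terms)
```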

Keywords: count vectorization, convolutional neural network, crawler, data technology, long short-term memory, web scraping, sentiment analysis

Procedia PDF Downloads 88
19328 Electroencephalogram Based Approach for Mental Stress Detection during Gameplay with Level Prediction

Authors: Priyadarsini Samal, Rajesh Singla

Abstract:

Many mobile games provide entertainment while also introducing stress to the human brain. In recognizing this mental stress, the brain-computer interface (BCI) plays an important role, relying on various neuroimaging approaches that help in analyzing brain signals. Electroencephalography (EEG) is the most commonly used method among them, as it is non-invasive, portable, and economical. This paper investigates the patterns in brain signals when mental stress is introduced. Two healthy volunteers played a game whose aim was to find hidden words in a grid, and the levels were chosen randomly. The EEG signals during gameplay were recorded to investigate the impact of stress as the levels changed from easy to medium to hard. A total of 16 EEG features were analyzed in this experiment, including power band features with relative powers, event-related desynchronization, and statistical features. A support vector machine was used as the classifier, which resulted in an accuracy of 93.9% for three-level stress analysis; for two levels, accuracies of 92% and 98% were achieved. In addition, another game of a similar nature was played by the volunteers. A suitable regression model was designed for prediction, where the feature sets of the first and second games were used for testing and training purposes, respectively, and an accuracy of 73% was found.
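A reduced sketch of the classification stage, assuming band-power features computed per trial with SciPy and a support vector machine from scikit-learn (the paper's full 16-feature set, including event-related desynchronization, is not reproduced):

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def band_power(eeg, fs, lo, hi):
    """Relative power of one EEG channel in the [lo, hi] Hz band (Welch PSD)."""
    f, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    mask = (f >= lo) & (f <= hi)
    return np.trapz(psd[mask], f[mask]) / np.trapz(psd, f)

def trial_features(eeg, fs=256):
    bands = [(4, 8), (8, 13), (13, 30)]          # theta, alpha, beta
    return np.array([band_power(eeg, fs, lo, hi) for lo, hi in bands])

# Three-level stress classification (easy / medium / hard) with an SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# clf.fit(X_train, y_train); clf.score(X_test, y_test)
```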

Keywords: brain computer interface, electroencephalogram, regression model, stress, word search

Procedia PDF Downloads 187
19327 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning

Authors: T. Bryan, V. Kepuska, I. Kostnaic

Abstract:

A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using “basis vectors” that are learned from the audio data itself. The basis vectors are shown to have higher data compression and better signal-to-noise enhancement than the Gabor and gammatone “seed atoms” that were used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using “envelope samples” of windowed segments of the audio data. The envelope samples are extracted from the audio data by performing atomic decomposition with Gabor or gammatone seed atoms via matching pursuit; this identifies segments of audio data that are locally coherent with the seed atoms. The envelope samples are then formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from Gabor and gammatone seed atoms. SNR data compression curves are generated for speech signals as well as for early American music recordings. The basis vectors are shown to have higher denoising capability for data compression rates ranging from 90% to 99.84% for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors. This display method is used to compare the output of the SAE with the envelope samples that produced it. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the highest denoising basis vectors.
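The atomic-decomposition step can be sketched as a plain matching-pursuit loop over a normalized atom dictionary (the Gabor/gammatone atom construction and the envelope Kronecker products are omitted; this is an illustration, not the authors' implementation):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=50):
    """Greedy decomposition; the dictionary columns are unit-norm atoms."""
    residual = signal.astype(float).copy()
    coeffs, chosen = [], []
    for _ in range(n_atoms):
        corr = dictionary.T @ residual           # inner products with every atom
        k = int(np.argmax(np.abs(corr)))         # most coherent atom
        coeffs.append(corr[k])
        chosen.append(k)
        residual -= corr[k] * dictionary[:, k]   # remove its contribution
    return chosen, np.array(coeffs), residual

# The locally coherent segments found this way are what the paper combines
# (via Kronecker products with the atomic envelopes) into "envelope samples"
# used to train the sparse autoencoder.
```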

Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit

Procedia PDF Downloads 252
19326 The Construction of Exact Solutions for the Nonlinear Lattice Equation via Coth and Csch Functions Method

Authors: A. Zerarka, W. Djoudi

Abstract:

The method developed in this work uses a generalised coth and csch functions method to construct new exact travelling wave solutions to the nonlinear lattice equation. The technique of the homogeneous balance method is used to determine the appropriate form of the solutions.
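In generalised coth/csch methods of this kind, the solution is typically sought as a finite series in coth and csch of a travelling-wave variable; a schematic form (notation assumed here, not quoted from the paper) is:

```latex
% Travelling-wave reduction and truncated coth/csch ansatz (schematic)
u(x,t) = U(\xi), \qquad \xi = x - c\,t,
\qquad
U(\xi) = a_0 + \sum_{i=1}^{M} \Big( a_i \coth^{i}(\mu\xi) + b_i \,\mathrm{csch}^{i}(\mu\xi) \Big),
% where M is fixed by homogeneous balance between the highest-order derivative and the
% strongest nonlinear term, and a_i, b_i, c, \mu follow from collecting like powers.
```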

Keywords: coth functions, csch functions, nonlinear partial differential equation, travelling wave solutions

Procedia PDF Downloads 662
19325 Hybridization of Manually Extracted and Convolutional Features for Classification of Chest X-Ray of COVID-19

Authors: M. Bilal Ishfaq, Adnan N. Qureshi

Abstract:

COVID-19 is among the most infectious diseases of recent times; it was first reported in Wuhan, the capital city of Hubei province in China, and then spread rapidly throughout the whole world. On 11 March 2020, the World Health Organisation (WHO) declared it a pandemic. Since COVID-19 is highly contagious, it has affected approximately 219M people worldwide and caused 4.55M deaths. It has brought the importance of accurate diagnosis of respiratory diseases such as pneumonia and COVID-19 to the forefront. In this paper, we propose a hybrid approach for the automated detection of COVID-19 using medical imaging. We present the hybridization of manually extracted and convolutional features. Our approach combines Haralick texture features and convolutional features extracted from chest X-rays and CT scans. We also employ a minimum redundancy maximum relevance (MRMR) feature selection algorithm to reduce computational complexity and enhance classification performance. The proposed model is evaluated on four publicly available datasets, including Chest X-ray Pneumonia, COVID-19 Pneumonia, COVID-19 CTMaster, and VinBig data. The results demonstrate high accuracy and effectiveness, with 0.9925 on the Chest X-ray Pneumonia dataset, 0.9895 on the COVID-19, Pneumonia and Normal Chest X-ray dataset, 0.9806 on the Covid CTMaster dataset, and 0.9398 on the VinBig dataset. We further evaluate the effectiveness of the proposed model using ROC curves, where the AUC for the best-performing model reaches 0.96. The proposed model provides a promising tool for the early detection and accurate diagnosis of COVID-19, which can assist healthcare professionals in making informed treatment decisions and improving patient outcomes. The results of the proposed model are quite plausible, and the system can be deployed in a clinical or research setting to assist in the diagnosis of COVID-19.
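The mRMR selection step can be sketched with a greedy mutual-information criterion; this uses scikit-learn estimators for illustration and is not the authors' exact implementation:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_select(X, y, n_select=20):
    """Greedy minimum-redundancy maximum-relevance selection over the columns of X."""
    relevance = mutual_info_classif(X, y, random_state=0)     # feature-vs-label MI
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        scores = np.full(X.shape[1], -np.inf)
        for j in range(X.shape[1]):
            if j in selected:
                continue
            # Redundancy: mean MI between candidate j and the already selected features.
            red = np.mean([mutual_info_regression(X[:, [s]], X[:, j],
                                                  random_state=0)[0]
                           for s in selected])
            scores[j] = relevance[j] - red                    # relevance minus redundancy
        selected.append(int(np.argmax(scores)))
    return selected
```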

Keywords: COVID-19, feature engineering, artificial neural networks, radiology images

Procedia PDF Downloads 75
19324 Sustainable Development: Evaluation of an Urban Neighborhood

Authors: Harith Mohammed Benbouali

Abstract:

The concept of sustainable development is becoming increasingly important in our society. The efforts of specialized agencies, widely relayed in the media, have produced broad environmental awareness. Far from the backward-looking nostalgia of the old environmental movement, the environment is now combined with today's idea of progress. Many sectors now include these concerns in their efforts in order to reduce the negative impact of human activities on the environment. The quantitative dimension of development has given way to the qualitative aspect. However, this attitude is not yet widespread, and the initial objective is often abandoned in favor of economic considerations. Specialists in the field of building and construction have constantly sought to further integrate the environmental dimension, creating a seal of high environmental quality for buildings. The well-being of neighborhood residents and the quality of buildings are also hot topics in planning, yet quality of life remains a secondary consideration, since financial concerns dominate to the detriment of the environment and the welfare of the occupants. This work concerns the development of an analytical method based on multiple objective-related indicators at the scale of the district. The quantification of indicators related to these objectives allows the construction professional, the developer, or the community to quantify and compare different alternatives for the development of a neighborhood. This quantification is based on the use of simulation tools and a multi-criteria aggregation.

Keywords: sustainable development, environment, district, indicators, multi-criteria analysis, evaluation

Procedia PDF Downloads 312
19323 Learning outside the Box by Using Memory Techniques Skill: Case Study in Indonesia Memory Sports Council

Authors: Muhammad Fajar Suardi, Fathimatufzzahra, Dela Isnaini Sendra

Abstract:

Learning is an activity that people are used to doing, especially students and academics. However, some people have not been using and maximizing the way their brain works, and some also do not know the best brain-work times for capturing lessons, so the knowledge absorbed is less than the maximum. The Indonesia Memory Sports Council (IMSC) is an institution engaged in brain performance and the development of effective learning methods, using several techniques for memorizing lessons and grasping knowledge well, including the loci method, the substitution method, and the chain method. This study aims to determine the techniques and benefits of applying the memory techniques taught by the Indonesia Memory Sports Council (IMSC) to students' learning and memorization, and the difference when these methods are not used. The research uses a quantitative approach with a survey method addressed to students of the Indonesia Memory Sports Council (IMSC). The results indicate that learning, understanding and remembering lessons using the memory techniques taught by the Indonesia Memory Sports Council is very effective and allows lessons to be absorbed faster than learning without these techniques, and this affects the academic achievement of students in each educational institution.

Keywords: chain method, Indonesia memory sports council, loci method, substitution method

Procedia PDF Downloads 290
19322 Italian Speech Vowels Landmark Detection through the Legacy Tool 'xkl' with Integration of Combined CNNs and RNNs

Authors: Kaleem Kashif, Tayyaba Anam, Yizhi Wu

Abstract:

This paper introduces a methodology for advancing Italian speech vowels landmark detection within the distinctive feature-based speech recognition domain. Leveraging the legacy tool 'xkl' by integrating combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs), the study presents a comprehensive enhancement to the 'xkl' legacy software. This integration incorporates re-assigned spectrogram methodologies, enabling meticulous acoustic analysis. Simultaneously, our proposed model, integrating combined CNNs and RNNs, demonstrates unprecedented precision and robustness in landmark detection. The augmentation of re-assigned spectrogram fusion within the 'xkl' software signifies a meticulous advancement, particularly enhancing precision related to vowel formant estimation. This augmentation catalyzes unparalleled accuracy in landmark detection, resulting in a substantial performance leap compared to conventional methods. The proposed model emerges as a state-of-the-art solution in the domain of distinctive feature-based speech recognition systems. In the realm of deep learning, a synergistic integration of combined CNNs and RNNs is introduced, endowed with specialized temporal embeddings, self-attention mechanisms, and positional embeddings. This allows the model to excel in capturing intricate dependencies within Italian speech vowels, rendering it highly adaptable and sophisticated in the distinctive feature domain. Furthermore, our advanced temporal modeling approach employs Bayesian temporal encoding, refining the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models reveals a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology. Upon rigorous testing on a database (LaMIT) of speech recorded in a silent room by four Italian native speakers, the landmark detector demonstrates exceptional performance, achieving a 95% true detection rate and a 10% false detection rate. A majority of missed landmarks were observed in proximity to reduced vowels. These promising results underscore the robust identifiability of landmarks within the speech waveform, establishing the feasibility of employing a landmark detector as a front end in a speech recognition system. The synergistic integration of re-assigned spectrogram fusion, CNNs, RNNs, and Bayesian temporal encoding not only signifies a significant advancement in Italian speech vowels landmark detection but also positions the proposed model as a leader in the field. The model offers distinct advantages, including unparalleled accuracy, adaptability, and sophistication, marking a milestone in the intersection of deep learning and distinctive feature-based speech recognition. This work contributes to the broader scientific community by presenting a methodologically rigorous framework for enhancing landmark detection accuracy in Italian speech vowels. The integration of cutting-edge techniques establishes a foundation for future advancements in speech signal processing, emphasizing the potential of the proposed model in practical applications across various domains requiring robust speech recognition systems.

Keywords: landmark detection, acoustic analysis, convolutional neural network, recurrent neural network

Procedia PDF Downloads 63
19321 A Finite Element Method Simulation for Rocket Motor Material Selection

Authors: T. Kritsana, P. Sawitri, P. Teeratas

Abstract:

This article studies the effect of pressure on a rocket motor case by Finite Element Method simulation in order to select the optimal material in the rocket motor manufacturing process. In this study, cylindrical tubes with an outside diameter of 122 mm and a thickness of 3 mm are used for the simulation. The rocket motor case materials considered are AISI4130, AISI1026, AISI1045, AL2024 and AL7075. The internal pressure used for the simulation is 22 MPa. The result from the Finite Element Method shows that at a pressure of 22 MPa a rocket motor case produced from AISI4130, AISI1045 or AL7075 can be used. A comparison between AISI4130, AISI1045 and AL7075 shows that AISI4130 has the minimum principal stress, and verification of the Finite Element Method results against an analytical calculation confirms that the Finite Element results are reliable.
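As a rough analytical cross-check of this kind of FEM result, a thin-walled-cylinder approximation (an assumption made here, not stated in the abstract) gives the order of magnitude of the hoop stress for the 122 mm x 3 mm case at 22 MPa:

```latex
% Thin-walled cylinder approximation (assumed here only as a sanity check)
\sigma_{\theta} \approx \frac{p\, r_m}{t}
  = \frac{22\ \mathrm{MPa}\times \frac{122 - 3}{2}\ \mathrm{mm}}{3\ \mathrm{mm}}
  \approx 436\ \mathrm{MPa},
% a stress level consistent with the finding that only the stronger alloys in the list
% remain serviceable at this internal pressure.
```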

Keywords: rocket motor case, finite element method, principal stress, simulation

Procedia PDF Downloads 449
19320 Anterior Chamber Depth Measured with Orbscan and Pentacam Compared with Smith Method in 102 Phakic Eyes

Authors: Mohammad Ghandehari Motlagh

Abstract:

Purpose: To compare anterior chamber depth (ACD) measured with Orbscan II and Pentacam HR against the results of the Smith method. Methods: The Smith method (1979) is a reliable way of measuring ACD with only the help of a slit lamp. In this study, 102 phakic eyes of PRK candidates were imaged with both OrbScan and Pentacam, and ACD was then measured with the Smith method at the slit lamp. The ACD measured with the Smith method was taken as the gold standard and compared with the ACD from the two imaging devices. Cases contraindicated for PRK and pseudophakic eyes were excluded from the study. Results: The mean age of the patients was 35.2 ±14.8 years, including 56 males (54.9%) and 46 females (45.09%). The correlations of ACD measured with the Smith method against Orbscan and Pentacam are R=0.958 and R=0.942, respectively, so Orbscan results can be used in procedures relying on ACD. Conclusion: ACD measured with OrbScan is more precise than with Pentacam and so can be more useful in surgical procedures that rely on ACD results, such as phakic IOLs, and when cycloplegia is contraindicated.

Keywords: orbscan, pentacam, anterior chamber depth, slit lamp

Procedia PDF Downloads 368
19319 Finite Element and Split Bregman Methods for Solving a Family of Optimal Control Problem with Partial Differential Equation Constraint

Authors: Mahmoud Lot

Abstract:

In this article, we discuss the solution of an elliptic optimal control problem. First, by using the finite element method, we obtain the discrete form of the problem. The obtained discrete problem is actually a large scale constrained optimization problem. Solving this optimization problem with traditional methods is difficult and requires a lot of CPU time and memory. The split Bregman method, however, converts the constrained problem to an unconstrained one, and hence saves time and memory. We then use the split Bregman method for solving this problem, and examples show the speed and accuracy of split Bregman methods for solving these types of problems. We also use the SQP method for solving the examples and compare it with the split Bregman method.
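For orientation, the generic (split) Bregman iteration for an equality-constrained problem min J(u) subject to Au = b can be written in its standard textbook form as follows; this is a sketch of the general idea, not the exact scheme of the article:

```latex
% Generic Bregman/split Bregman iteration for  min_u J(u)  subject to  A u = b
u^{k+1} = \arg\min_{u}\; J(u) + \frac{\lambda}{2}\,\bigl\lVert A u - b^{k} \bigr\rVert_2^{2},
\qquad
b^{k+1} = b^{k} + b - A u^{k+1},
% each step is an unconstrained problem, and the constraint residual is "added back"
% to the data term, which removes the large constrained system produced by the
% finite element discretization.
```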

Keywords: Split Bregman Method, optimal control with elliptic partial differential equation constraint, finite element method

Procedia PDF Downloads 152
19318 Surveillance Video Summarization Based on Histogram Differencing and Sum Conditional Variance

Authors: Nada Jasim Habeeb, Rana Saad Mohammed, Muntaha Khudair Abbass

Abstract:

For more efficient and fast video summarization, this paper presents a surveillance video summarization method. The presented method improves on existing video summarization techniques. It relies on temporal differencing to extract the most important data from a large video stream, and uses histogram differencing and the Sum Conditional Variance, which is robust against illumination variations, in order to extract moving objects. The experimental results show that the presented method gives better output compared with temporal-differencing-based summarization techniques.
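The histogram-differencing part of such a pipeline can be sketched as follows, using OpenCV for illustration (the sum-conditional-variance test for moving objects is not reproduced, and the threshold value is arbitrary):

```python
import cv2

def keyframes_by_histogram(video_path, threshold=0.25):
    """Keep frames whose grayscale histogram differs strongly from the previous keyframe."""
    cap = cv2.VideoCapture(video_path)
    keep, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        hist = cv2.normalize(hist, None)
        if prev_hist is None or cv2.compareHist(prev_hist, hist,
                                                cv2.HISTCMP_BHATTACHARYYA) > threshold:
            keep.append(idx)            # temporal difference is large -> candidate keyframe
            prev_hist = hist
        idx += 1
    cap.release()
    return keep
```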

Keywords: temporal differencing, video summarization, histogram differencing, sum conditional variance

Procedia PDF Downloads 349
19317 A Multistep Broyden’s-Type Method for Solving Systems of Nonlinear Equations

Authors: M. Y. Waziri, M. A. Aliyu

Abstract:

The paper proposes an approach to improve the performance of Broyden’s method for solving systems of nonlinear equations. In this work, we consider information from two preceding iterates, rather than a single preceding iterate, to update the Broyden matrix, which produces a better approximation of the Jacobian matrix in each iteration. The numerical results verify that the proposed method clearly enhances the numerical performance of Broyden’s method.
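For reference, the classic single-step Broyden iteration being improved upon looks as follows; the multistep variant of the paper replaces the single secant pair with information from two preceding iterates (this is a sketch, not the authors' code):

```python
import numpy as np

def broyden(F, x0, B0=None, tol=1e-10, max_iter=100):
    """Classic 'good' Broyden method for F(x) = 0."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size) if B0 is None else B0.copy()   # Jacobian approximation
    Fx = F(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -Fx)                    # quasi-Newton step
        x_new = x + s
        F_new = F(x_new)
        if np.linalg.norm(F_new) < tol:
            return x_new
        y = F_new - Fx
        B += np.outer(y - B @ s, s) / (s @ s)          # rank-one secant update
        x, Fx = x_new, F_new
    return x
```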

Keywords: multi-step Broyden, nonlinear systems of equations, computational efficiency, iterate

Procedia PDF Downloads 638
19316 The Development of Iranian Theatrical Performance through the Integration of Narrative Elements from Western Drama

Authors: Azadeh Abbasikangevari

Abstract:

Background and Objectives: Theatre and performance are two separate themes. What is presented in Iran as performance consists of the ritual and traditional forms of the play. Iranian performance has its roots in myth and ritual. Drama is essentially a Western phenomenon that gradually entered Iran and influenced Iranian performance. Theatre is based on antagonism (axis) and protagonism (anti-axis), while performance has a monotonous and steady motion. The elements of Iranian performance include the field, performance on the stage, and magnification in performance, all of which are based on narration. This type of narration has been present in Iranian modern drama. The objective of this study was to analyze the drama structure according to narration elements by comparing Western theatre and Iranian performance and to determine the structural differences in the type of narrative. Materials and Methods: In this study, the elements of the drama were analyzed using the library method among the available library resources. The review of the literature included research articles and textbooks which focused on Iranian plays, as well as books and articles on narrative and drama elements. Data were analyzed using the comparative-descriptive method. Results: Examining different kinds of Iranian performances showed that the narrative has always been a characteristic feature of Iranian plays. Iranians have narrated stories and myths and have had a particular skill in oral literature. Over time, they slowly introduced this narrative culture into their art, where it became the most important structural element in Iran's dramatic art. Narration in traditional Iranian plays, such as Ta'ziyeh and Naghali, was oral; consequently, it was slowly forgotten and excluded from written theatrical texts. Since drama entered Iran in its Western form, the plays written by Iranian authors have been influenced by the narrative elements found in Western plays. Conclusions: The narrative element has undoubtedly had an impact on modern and contemporary Iranian drama. Therefore, the element of narration is an integral part of the traditional Iranian play structure.

Keywords: drama methodology, Iranian performance, Iranian modern drama, narration

Procedia PDF Downloads 129
19315 Use of Social Media in Political Communications: Example of Facebook

Authors: Havva Nur Tarakci, Bahar Urhan Torun

Abstract:

The transformation brought to every area of life by technology, especially internet technology, changes the structure of political communications too. The Internet, which is at the forefront of new communication technologies, affects political communications in a way that no traditional communication tool ever has; it enables interaction and a channel between receiver and sender, and it has become one of the most effective and preferred tools among political communication applications. As a result of technological convergence, this makes the Internet an indispensable arena for political communication campaigns. Political communications, meaning every kind of communication strategy that political parties, the 'actors of political communications', use with the aim of conveying their opinions and party programmes to their present and potential voters as a target group, is a type of communication frequently conducted through social media tools at the present day. The electorate, consisting of different segments, is informed, directed, and managed by social media tools. Political parties easily reach their electorate through these tools without limitations of time or place, and they are also able to gather the opinions and reactions of their electorate through the interaction that is a defining feature of social media. In this context, Facebook, the social media platform that political parties use the most, is a communication network that has been part of our daily life since 2004. As one of the most popular social networks today, it is among the most-visited websites on a global scale. Accordingly, the research is based on the question, “How do the political parties use Facebook at the campaigns, which they conduct during the election periods, for informing their voters?” and it aims at clarifying the Facebook usage practices of the political parties. To this end, the official Facebook accounts of four political parties (JDP–AKParti, PDP–BDP, RPP-CHP, NMP-MHP), which reach their voters by social media alongside other communication tools, are examined, and a frame for the politics of Turkey is formed. The period of examination is restricted to two weeks in total, one week before and one week after the mayoral elections, when the political parties are assumed to use their Facebook accounts in full swing. As a research method, content analysis is preferred, and the texts and visual elements obtained are interpreted on the basis of this analysis.

Keywords: Facebook, political communications, social media, electorate

Procedia PDF Downloads 383
19314 MP-SMC-I Method for Slip Suppression of Electric Vehicles under Braking

Authors: Tohru Kawabe

Abstract:

In this paper, a new SMC (Sliding Mode Control) method with MP (Model Predictive Control) integral action for the slip suppression of an EV (Electric Vehicle) under braking is proposed. The proposed method introduces an integral term alongside the standard SMC gain, where the integral gain is optimized for each control period by the MPC algorithm. The aim of this method is to improve the safety and the stability of EVs under braking by controlling the wheel slip ratio. Numerical simulation results are also included to demonstrate the effectiveness of the method.
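The quantities involved can be summarized as follows, using standard slip-control notation assumed here for illustration rather than taken from the paper:

```latex
% Wheel slip ratio under braking and an SMC law with integral action (schematic)
\lambda = \frac{v - \omega r}{v}, \qquad s = \lambda - \lambda^{*},
\qquad
u = u_{\mathrm{eq}} - K\,\operatorname{sat}\!\left(\frac{s}{\phi}\right) - k_{I}\!\int_{0}^{t} s\,\mathrm{d}\tau ,
% where v is the vehicle speed, \omega the wheel angular speed, r the wheel radius,
% \lambda^{*} the target slip ratio, K the standard SMC gain, and k_{I} the integral
% gain that the proposed method re-optimises every control period via MPC.
```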

Keywords: sliding mode control, model predictive control, integral action, electric vehicle, slip suppression

Procedia PDF Downloads 561
19313 Assessment of Hargreaves Equation for Estimating Monthly Reference Evapotranspiration in the South of Iran

Authors: Ali Dehgan Moroozeh, B. Farhadi Bansouleh

Abstract:

Evapotranspiration is one of the most important components of the hydrological cycle. Reference evapotranspiration (ETo) is an important variable in water and energy balances on the earth’s surface, and knowledge of the distribution of ET is a key factor in hydrology, climatology, agronomy and ecology studies. Many researchers use empirical relationships, expressed as functions of climate factors, to estimate potential evapotranspiration and thereby prevent plant water stress or water loss. The FAO-Penman method (PM) is recommended as the standard method, but it requires many data that are not available in every area of the world, so other methods must be evaluated for such conditions. When sufficient or reliable data to solve the PM equation are not available, the Hargreaves equation can be used. The Hargreaves equation (HG) requires only daily mean, maximum and minimum air temperature and extraterrestrial radiation. In this study, the Hargreaves method (HG) was evaluated at 12 stations in the northwest region of Iran. The results of the HG and M.HG methods were compared with the results of the PM method. Statistical analysis of this comparison showed that the calibration process had a significant effect on the efficiency of the Hargreaves method.
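For reference, the Hargreaves-Samani equation in its commonly used form (the form typically calibrated in studies of this kind) is:

```latex
% Hargreaves--Samani reference evapotranspiration (standard form)
ET_{0} = 0.0023\; R_a \left(T_{\mathrm{mean}} + 17.8\right)\sqrt{T_{\max} - T_{\min}} ,
% with R_a the extraterrestrial radiation expressed as equivalent evaporation (mm/day)
% and temperatures in degrees Celsius; local calibration typically adjusts the 0.0023
% coefficient (and sometimes the exponent) against FAO-Penman estimates.
```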

Keywords: evapotranspiration, Hargreaves equation, FAO-Penman method

Procedia PDF Downloads 395
19312 Limit-Cycles Method for the Navigation and Avoidance of Any Form of Obstacles for Mobile Robots in Cluttered Environment

Authors: F. Boufera, F. Debbat

Abstract:

This paper deals with an approach based on the limit-cycles method for the problem of obstacle avoidance of mobile robots in unknown environments, for obstacles of any form. The purpose of this approach is to improve the limit-cycles method in order to obtain safe and flexible navigation. The proposed algorithm has been successfully tested in different configurations in simulation.
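The core idea of limit-cycle navigation is to steer the robot along the orbit of a planar vector field whose attracting cycle encircles the obstacle; a commonly used form of that field (notation assumed here for illustration) is:

```latex
% Planar limit-cycle vector field around an obstacle of radius r; the sign of the cross
% terms selects clockwise or counter-clockwise avoidance, and the robot tracks the orbit.
\dot{x} =  \pm y + x\left(r^{2} - x^{2} - y^{2}\right), \qquad
\dot{y} = \mp x + y\left(r^{2} - x^{2} - y^{2}\right),
% where (x, y) are coordinates relative to the obstacle centre and the circle
% x^{2} + y^{2} = r^{2} is the attracting limit cycle used to skirt the obstacle.
```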

Keywords: mobile robot, navigation, avoidance of obstacles, limit-cycles method

Procedia PDF Downloads 429
19311 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: Gaelle Candel, David Naccache

Abstract:

t-SNE is an embedding method that the data science community has widely adopted. It serves two main tasks: displaying results by coloring items according to their class or feature value, and, in a forensic setting, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, where all neighbors in high dimensional space cannot be represented correctly in low dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number, and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from high to low dimensional space is described but not learned, and two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of the data. While this approach is highly scalable, points could be mapped at the exact same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology to reuse an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, using the newly obtained embedding as the next support. The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the birth, evolution, and death of clusters to be observed. The proposed approach facilitates identifying significant trends and changes, which supports monitoring the dynamics of high-dimensional datasets.

Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning

Procedia PDF Downloads 144
19310 Wavelet Method for Numerical Solution of Fourth Order Wave Equation

Authors: A. H. Choudhury

Abstract:

In this paper, a highly accurate numerical method for the solution of the one-dimensional fourth-order wave equation is derived. This hyperbolic problem is solved by using semidiscrete approximations. The space direction is discretized by the wavelet-Galerkin method, and the time variable is discretized by using Newmark schemes.
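The Newmark time discretization referred to here has the standard textbook form below, stated for orientation (the usual choice β = 1/4, γ = 1/2 gives the unconditionally stable average-acceleration scheme):

```latex
% Newmark update for the semidiscrete system  M \ddot{u} + K u = f
u_{n+1} = u_{n} + \Delta t\, \dot{u}_{n}
          + \frac{\Delta t^{2}}{2}\Big[(1 - 2\beta)\,\ddot{u}_{n} + 2\beta\, \ddot{u}_{n+1}\Big],
\qquad
\dot{u}_{n+1} = \dot{u}_{n} + \Delta t\Big[(1 - \gamma)\,\ddot{u}_{n} + \gamma\, \ddot{u}_{n+1}\Big],
% with \beta and \gamma the Newmark parameters controlling accuracy and stability.
```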

Keywords: hyperbolic problem, semidiscrete approximations, stability, Wavelet-Galerkin Method

Procedia PDF Downloads 315
19309 Research and Application of Multi-Scale Three Dimensional Plant Modeling

Authors: Weiliang Wen, Xinyu Guo, Ying Zhang, Jianjun Du, Boxiang Xiao

Abstract:

Reconstructing and analyzing three-dimensional (3D) models from in situ measured data is important for a number of research areas and applications in plant science, including plant phenotyping, functional-structural plant modeling (FSPM), plant germplasm resources protection, and the popularization of agricultural technology. Plant structure spans many scales, from cell, tissue and organ to plant and canopy, from the microscopic to the macroscopic, and the techniques currently used for data capture, feature analysis, and 3D reconstruction differ considerably across these scales. In this context, morphological data acquisition and 3D analysis and modeling of plants at different scales are introduced systematically. The data capture equipment commonly used at these scales is presented, and the hot issues and difficulties at each scale are then described. Some examples are also given, such as micron-scale phenotyping quantification and 3D microstructure reconstruction of vascular bundles within maize stalks based on micro-CT scanning; 3D reconstruction of leaf surfaces and feature extraction from point clouds acquired with a 3D handheld scanner; and plant modeling by combining parameter-driven 3D organ templates. Several application examples using the resulting 3D models and analysis results are also introduced. A 3D maize canopy was constructed and light distribution was simulated within the canopy, which was used for the design of ideal plant types. A grape tree model was constructed from 3D digital and point cloud data and was used to produce science content for the 11th International Conference on Grapevine Breeding and Genetics. Using the tissue models of plants, Google Glass was employed to look around visually inside the plant and understand its internal structure. With the development of information technology, 3D data acquisition and data processing techniques will play a greater role in plant science.

Keywords: plant, three dimensional modeling, multi-scale, plant phenotyping, three dimensional data acquisition

Procedia PDF Downloads 277
19308 A New Method Presentation for Locating Fault in Power Distribution Feeders Considering DG

Authors: Rahman Dashti, Ehsan Gord

Abstract:

In this paper, an improved impedance based fault location method is proposed. In this method, online fault locating is performed using voltage and current information at the beginning of the feeder. Determining a precise fault location in a short time increases the reliability and efficiency of the system. The proposed method utilizes information about the main component of voltage and current at the beginning of the feeder and at the distributed generation unit (DGU) in order to precisely locate different faults in an acceptable time. To evaluate the precision and accuracy of the proposed method, a 13-node network is simulated and tested using MATLAB.
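Impedance-based locators of this family start from the apparent impedance seen at the measuring point; the simplest single-source, bolted-fault baseline (shown here only for orientation, not the paper's algorithm, which additionally accounts for the DG contribution) is the reactance method:

```latex
% Simple reactance method (illustrative baseline)
m \approx \frac{\operatorname{Im}\!\left( V_{s} / I_{s} \right)}{x_{\ell}},
% where V_s and I_s are the fundamental-frequency voltage and current measured at the
% beginning of the feeder, x_\ell is the line reactance per unit length, and m is the
% estimated distance to the fault.
```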

Keywords: distribution network, fault section determination, distributed generation units, distribution protection equipment

Procedia PDF Downloads 401
19307 Effect of Graded Levels of Detoxified Jatropha curcas on the Performance Characteristics of Cockerel Birds

Authors: W. S. Lawal, T. Akande

Abstract:

Four different methods were employed to detoxify Jatropha curcas: a physical method (soaking and sun drying), a chemical method (the use of methylated spirit, hexane and methane), a biological method (the use of Aspergillus niger followed by sun drying for 7 days, and then Bacillus licheniformis), and a combined method (the combination of the chemical and biological methods). Phorbol ester analysis was carried out after detoxification, and it was found that the combined method performed best (P<0.05). Detoxified Jatropha from each of these methods was sun dried and ground for easy inclusion into poultry feed, and was included at 0%, 0.5%, 1%, 2%, 3%, 4%, and 5%; for the combined method the level was increased up to 7% because the birds were able to tolerate it. The 0% diet served as the control. A total of 405 day-old broiler chicks were used to test the effect of detoxified Jatropha curcas on performance, with 5 birds per treatment and 3 replicates, and the experiment lasted for 8 weeks. The highest mortality was obtained with the physical method; birds on the chemical method tolerated up to 3% Jatropha curcas; the biological method performed better, as birds were comfortable at 5%; and the best of them was the combined method, in which the birds did very well at 7%, with the lowest mortality and the highest weight gain (P<0.05), and this method is therefore recommended.

Keywords: phorbol ester, inclusion level, tolerance level, Jatropha curcas

Procedia PDF Downloads 404
19306 Design and Analysis of Hybrid Morphing Smart Wing for Unmanned Aerial Vehicles

Authors: Chetan Gupta, Ramesh Gupta

Abstract:

Unmanned aerial vehicles of all sizes are prime targets for the wing morphing concept, as their lightweight structures demand high aerodynamic stability while traversing unsteady atmospheric conditions. In this research study, a hybrid morphing technology is developed to allow the trailing edge of the aircraft wing to alter its camber as a monolithic element rather than functioning through conventional appendages like flaps. Kinematic tailoring and actuation techniques involving shape memory alloys (SMAs) or piezoelectrics individually fall short of providing a simple solution to the conundrum of morphing aircraft wings. On the other hand, the negligible hysteresis of actuation through compliant mechanisms has shown higher levels of applicability and deliverability in morphing wings, even for large aircraft. This research paper delves into designing a wing section model with a periodic, multi-stable compliant structure requiring lower orders of topological optimization. The design is sub-divided into three smaller domains with external hyperelastic connections to achieve deflections ranging from -15° to +15° at the trailing edge of the wing. To facilitate this functioning, a hybrid actuation system is used, combining the larger bandwidth of piezoelectric macro-fibre composites with the relatively higher work density of shape memory alloy wires. Finite element analysis is applied to optimize the piezoelectric actuation of the internal compliant structure. A coupled fluid-surface interaction analysis is conducted on the wing section during morphing to study the development of the velocity boundary layer at low Reynolds numbers of airflow.

Keywords: compliant mechanism, hybrid morphing, piezoelectrics, shape memory alloys

Procedia PDF Downloads 311
19305 Groundwater Seepage Estimation into Amirkabir Tunnel Using Analytical Methods and DEM and SGR Method

Authors: Hadi Farhadian, Homayoon Katibeh

Abstract:

In this paper, groundwater seepage into the Amirkabir tunnel has been estimated using analytical and numerical methods for 14 different sections of the tunnel. The Site Groundwater Rating (SGR) method has also been applied for qualitative and quantitative classification of the tunnel sections, and the results of the above-mentioned methods were compared. The study shows reasonable agreement among the results of all methods except for two sections of the tunnel. In these two sections there are significant discrepancies between numerical and analytical results, mainly originating from the model geometry and the high overburden. The SGR method and the analytical and numerical calculations confirm the high concentration of seepage inflow in fault zones. The maximum seepage flow into the tunnel was estimated at 0.425 lit/sec/m using the analytical method and 0.628 lit/sec/m using the numerical method, occurring in the crushed zone. Based on the SGR method, six of the 14 sections along the Amirkabir tunnel axis are found to be in the "No Risk" class, which is supported by analytical and numerical seepage values of less than 0.04 lit/sec/m.
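Analytical estimates of steady groundwater inflow per unit length of tunnel are commonly of the Goodman type; one such relation (given for orientation only, and not necessarily the exact equation used in the paper) is:

```latex
% Goodman-type steady-state inflow per metre of tunnel (illustrative)
Q \approx \frac{2\pi K\, H}{\ln\!\left( \dfrac{2H}{r} \right)},
% with K the hydraulic conductivity of the surrounding rock mass, H the groundwater
% head above the tunnel axis, and r the tunnel radius; fractured and fault zones with
% locally higher K dominate the computed inflow, consistent with the SGR classification.
```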

Keywords: water Seepage, Amirkabir Tunnel, analytical method, DEM, SGR

Procedia PDF Downloads 476
19304 On the Derivation of Variable Step BBDF for Solving Second Order Stiff ODEs

Authors: S. A. M. Yatim, Z. B. Ibrahim, K. I. Othman, M. Suleiman

Abstract:

A method for solving second order stiff ordinary differential equations (ODEs) based on the backward differentiation formula (BDF) is considered in this paper. We derive the method by increasing the order of the existing method and using an improved strategy for choosing the step size. Numerical results are presented to compare the efficiency of the proposed method with MATLAB’s suite of ODE solvers, namely ode15s and ode23s. The method was found to be efficient for solving second order ordinary differential equations.

Keywords: backward differentiation formulae, block backward differentiation formulae, stiff ordinary differential equation, variable step size

Procedia PDF Downloads 497
19303 Determination of the Minimum Time and the Optimal Trajectory of a Moving Robot Using Picard's Method

Authors: Abbes Lounis, Kahina Louadj, Mohamed Aidene

Abstract:

This paper presents an optimal control problem applied to a robot; the problem is to determine a command which makes it possible to reach a final state from a given initial state in minimum time. The approach followed to solve this optimization problem with constraints on the control starts by presenting the equations of motion of the dynamic system, then applies Pontryagin's maximum principle (PMP) to characterize the optimal control, and finally uses Picard's successive approximation method combined with the shooting method to solve the resulting differential system.
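The Picard step applied to the state-costate system produced by the PMP can be written as the successive approximation below (general form with assumed notation, not the paper's specific system):

```latex
% Picard successive approximation for the two-point boundary value problem \dot{z} = g(z, u^{*}(z))
z^{(k+1)}(t) = z(0) + \int_{0}^{t} g\!\left( z^{(k)}(\tau),\, u^{*}\!\big(z^{(k)}(\tau)\big) \right) \mathrm{d}\tau ,
% where z collects the state and costate, u^{*} is the control delivered by the maximum
% principle, and the unknown initial costate is adjusted by the shooting method until the
% terminal conditions are satisfied.
```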

Keywords: robotics, Pontryagin's Maximum Principle, PMP, Picard's method, shooting method, non-linear differential systems

Procedia PDF Downloads 254