Search results for: ant colony algorithms
1630 Discrete Group Search Optimizer for the Travelling Salesman Problem
Authors: Raed Alnajjar, Mohd Zakree, Ahmad Nazri
Abstract:
In this study, we apply the Discrete Group Search Optimizer (DGSO) to solve the Traveling Salesman Problem (TSP). DGSO is a nature-inspired optimization algorithm that imitates animal behavior, especially animal searching behavior. The proposed DGSO uses a vector representation and several discrete operators, such as destruction, construction, differential evolution, swap and insert. The TSP is a well-known hard combinatorial optimization problem, which seeks the shortest tour through a given set of cities. The performance of the proposed DGSO is evaluated and tested on benchmark instances listed in the LIBTSP dataset. The experimental results show that the performance of the proposed DGSO is comparable with other state-of-the-art methods on some instances. The results show that DGSO outperforms Ant Colony System (ACS) on some instances and outperforms other metaheuristics on most instances. In addition, the new results include a number of optimal solutions and some best-known results. DGSO was able to obtain feasible, good-quality solutions across all datasets.
Procedia PDF Downloads 324
1629 Machine Learning Assisted Selective Emitter Design for Solar Thermophotovoltaic System
Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko
Abstract:
Solar thermophotovoltaic systems (STPV) have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. The STPV system comprises essential components such as an optical concentrator, selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to the overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four-layered materials (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This innovative methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The utilization of a machine learning approach brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, have the capability to analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods. This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This innovative methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic
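The abstract gives no implementation details, so the following is only a minimal Python sketch of the general workflow it describes: a random-forest surrogate trained on emitter data, then a simple genetic algorithm searching over the four layer thicknesses. The thickness ranges, the placeholder figure of merit and all variable names are assumptions for illustration, not the authors' code.

```python
# Sketch: random-forest surrogate + genetic algorithm for a 4-layer selective emitter.
# Assumes a dataset of layer thicknesses (nm) and a precomputed spectral figure of merit.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(10, 500, size=(2000, 4))        # SiC/W/SiO2/W thicknesses (hypothetical data)
y_train = -np.sum((X_train - 120.0) ** 2, axis=1)     # placeholder figure of merit

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_train, y_train)

def ga_optimize(pop_size=60, generations=40, mutation_scale=20.0):
    pop = rng.uniform(10, 500, size=(pop_size, 4))
    for _ in range(generations):
        fitness = surrogate.predict(pop)
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]        # selection: keep best half
        cut = rng.integers(1, 4, size=pop_size // 2)                # single-point crossover
        children = np.array([np.concatenate([parents[i % len(parents), :c],
                                             parents[(i + 1) % len(parents), c:]])
                             for i, c in enumerate(cut)])
        children += rng.normal(0, mutation_scale, children.shape)   # Gaussian mutation
        pop = np.clip(np.vstack([parents, children]), 10, 500)
    return pop[np.argmax(surrogate.predict(pop))]

print("candidate thicknesses (nm):", ga_optimize())
```

In a real setting the placeholder figure of merit would be replaced by simulated emissivity spectra, and the GA would typically be re-evaluated against the full electromagnetic solver for the final candidates.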
Procedia PDF Downloads 61
1628 Describing the Fine Electronic Structure and Predicting Properties of Materials with ATOMIC MATTERS Computation System
Authors: Rafal Michalski, Jakub Zygadlo
Abstract:
We present the concept, scientific methods and algorithms of our computation system called ATOMIC MATTERS. This is the first presentation of the new computer package, which allows its user to describe the physical properties of atomic localized electron systems subject to electromagnetic interactions. Our solution applies to situations where an unclosed electron 2p/3p/3d/4d/5d/4f/5f subshell interacts with an electrostatic potential of definable symmetry and an external magnetic field. Our methods are based on the Crystal Electric Field (CEF) approach, which takes into consideration the electrostatic ligand field as well as the magnetic Zeeman effect. The application allows us to predict macroscopic properties of materials, such as magnetic, spectral and calorimetric properties, as a result of the physical properties of their fine electronic structure. We emphasize the importance of the symmetry of the charge surroundings of the atom/ion, of spin-orbit interactions (spin-orbit coupling), and of the use of complex-number matrices in the definition of the Hamiltonian. The calculation methods, algorithms and convention-recalculation tools collected in ATOMIC MATTERS were chosen to permit the prediction of magnetic and spectral properties of materials in isostructural series.
Keywords: atomic matters, crystal electric field (CEF), spin-orbit coupling, localized states, electron subshell, fine electronic structure
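The ATOMIC MATTERS package itself is not reproduced here, but the core calculation the abstract describes, diagonalising a complex Hamiltonian that combines a crystal-field term with a Zeeman term for a localized electron shell, can be sketched as follows. The example uses a single B20 Stevens term and a field along z purely for illustration; the operator set, parameter values and units are assumptions.

```python
# Sketch: CEF + Zeeman Hamiltonian for a ground multiplet with total angular momentum J.
import numpy as np

def angular_momentum_matrices(J):
    m = np.arange(J, -J - 1, -1)                       # basis |J, m>, m = J ... -J
    Jz = np.diag(m)
    # J+ |J,m> = sqrt(J(J+1) - m(m+1)) |J,m+1>  (Jp would enter via transverse-field terms)
    Jp = np.diag(np.sqrt(J * (J + 1) - m[1:] * (m[1:] + 1)), k=1)
    return Jz.astype(complex), Jp.astype(complex)

J, gJ, B20, Bz = 4.0, 0.8, 0.05, 1.0                   # illustrative parameters (meV, T)
mu_B = 0.05788                                         # Bohr magneton in meV/T
Jz, Jp = angular_momentum_matrices(J)
I = np.eye(int(2 * J + 1), dtype=complex)

O20 = 3 * Jz @ Jz - J * (J + 1) * I                    # Stevens operator O_2^0
H = B20 * O20 + gJ * mu_B * Bz * Jz                    # CEF term + Zeeman term
energies, states = np.linalg.eigh(H)
print("eigenenergies (meV):", np.round(energies - energies.min(), 3))
```

A full CEF treatment adds the remaining symmetry-allowed Stevens operators and arbitrary field directions, which is where the complex-number matrices emphasized in the abstract become essential.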
Procedia PDF Downloads 319
1627 Proposed Framework based on Classification of Vertical Handover Decision Strategies in Heterogeneous Wireless Networks
Authors: Shidrokh Goudarzi, Wan Haslina Hassan
Abstract:
Heterogeneous wireless networks are converging towards an all-IP network as part of the so-called next-generation network. In this paradigm, different access technologies need to be interconnected; thus, vertical handovers or vertical handoffs are necessary for seamless mobility. In this paper, we conduct a review of existing vertical handover decision-making mechanisms that aim to provide ubiquitous connectivity to mobile users. To offer a systematic comparison, we categorize these vertical handover measurement and decision structures based on their respective methodology and parameters. Subsequently, we analyze several vertical handover approaches in the literature and compare them according to their advantages and weaknesses. The paper compares the algorithms based on the network selection methods, complexity of the technologies used and efficiency in order to introduce our vertical handover decision framework. We find that vertical handovers on heterogeneous wireless networks suffer from the lack of a standard and efficient method to satisfy both user and network quality of service requirements at different levels including architectural, decision-making and protocols. Also, the consolidation of network terminal, cross-layer information, multi packet casting and intelligent network selection algorithm appears to be an optimum solution for achieving seamless service continuity in order to facilitate seamless connectivity.Keywords: heterogeneous wireless networks, vertical handovers, vertical handover metric, decision-making algorithms
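As a concrete illustration of the simplest class of decision structures surveyed above, the snippet below sketches a weighted-scoring (simple additive weighting) network selection step over a few candidate access networks. The attributes, weights and normalisation are assumptions chosen for illustration and do not correspond to any specific algorithm reviewed in the paper.

```python
# Sketch: simple additive weighting (SAW) for vertical handover network selection.
# Higher score wins; cost-type attributes are inverted before weighting.
candidates = {
    "WLAN": {"bandwidth_mbps": 54, "delay_ms": 40, "cost": 1, "rss_dbm": -70},
    "LTE":  {"bandwidth_mbps": 30, "delay_ms": 25, "cost": 3, "rss_dbm": -85},
    "UMTS": {"bandwidth_mbps": 2,  "delay_ms": 60, "cost": 2, "rss_dbm": -80},
}
weights = {"bandwidth_mbps": 0.4, "delay_ms": 0.2, "cost": 0.2, "rss_dbm": 0.2}
benefit = {"bandwidth_mbps": True, "delay_ms": False, "cost": False, "rss_dbm": True}

def saw_select(candidates, weights, benefit):
    scores = {}
    for name, attrs in candidates.items():
        score = 0.0
        for key, w in weights.items():
            values = [c[key] for c in candidates.values()]
            lo, hi = min(values), max(values)
            norm = (attrs[key] - lo) / (hi - lo) if hi != lo else 1.0
            score += w * (norm if benefit[key] else 1.0 - norm)
        scores[name] = score
    return max(scores, key=scores.get), scores

best, scores = saw_select(candidates, weights, benefit)
print(best, scores)
```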
Procedia PDF Downloads 393
1626 Effects of Gamma Radiation on Tomato Leafminer, Tuta absoluta (Meyrick) (Lepidoptera: Gelechiidae)
Authors: Akın Kuyulu, Hanife Genç
Abstract:
In present study, it was aimed to evaluate the gamma radiation impacts on tomato leaf miner at different biological stages. The laboratory colony of tomato leaf miner was used to set up the experiments. Different biological stages of the insects (eggs, 4th instars and pupae) were irradiated using Cobalt-60 at doses of 0 (control), 100 Gray (Gy), 200 Gy, 300 Gy and 400 Gy in Cos-44HH-N source, at dose rate of 480 Gy/h. After irradiation, the eggs were incubated until hatching; the mature larvae were reared to complete their developments. Adult emergences from irradiated pupae were also evaluated. The results showed that there were no egg hatching at all tested irradiation doses. Although, the pupal percentages of irradiated mature larvae were 54%, 15% and 8% at doses of 100 Gy, 200 Gy and 300 Gy respectively, there were no adult emergences from irradiated mature larvae. On the other hand, the adult emergences were observed from irradiated pupae, decreased as radiation doses increased along with malformed adult appearance. Male and female individuals were out crossed with laboratory reared adults. Fecundity was correlated with radiation doses.Keywords: irradiation, tomato, tomato leafminer, Tuta absoluta
Procedia PDF Downloads 242
1625 Optimal Hybrid Linear and Nonlinear Control for a Quadcopter Drone
Authors: Xinhuang Wu, Yousef Sardahi
Abstract:
A hybrid and optimal multi-loop control structure combining linear and nonlinear control algorithms are introduced in this paper to regulate the position of a quadcopter unmanned aerial vehicle (UAV) driven by four brushless DC motors. To this end, a nonlinear mathematical model of the UAV is derived and then linearized around one of its operating points. Using the nonlinear version of the model, a sliding mode control is used to derive the control laws of the motor thrust forces required to drive the UAV to a certain position. The linear model is used to design two controllers, XG-controller and YG-controller, responsible for calculating the required roll and pitch to maneuver the vehicle to the desired X and Y position. Three attitude controllers are designed to calculate the desired angular rates of rotors, assuming that the Euler angles are minimal. After that, a many-objective optimization problem involving 20 design parameters and ten objective functions is formulated and solved by HypE (Hypervolume estimation algorithm), one of the widely used many-objective optimization algorithms approaches. Both stability and performance constraints are imposed on the optimization problem. The optimization results in terms of Pareto sets and fronts are obtained and show that some of the design objectives are competing. That is, when one objective goes down, the other goes up. Also, Numerical simulations conducted on the nonlinear UAV model show that the proposed optimization method is quite effective.Keywords: optimal control, many-objective optimization, sliding mode control, linear control, cascade controllers, UAV, drones
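The full controller in the abstract combines cascaded linear loops, sliding mode control and many-objective tuning; the fragment below only sketches the sliding-mode idea for the altitude channel of a simplified quadcopter model (a point mass with thrust and gravity). The gains, the boundary-layer tanh smoothing and the model itself are illustrative assumptions, not the authors' design.

```python
# Sketch: sliding mode control of quadcopter altitude, m*z_dd = u - m*g (simplified model).
import numpy as np

m, g = 1.2, 9.81
lam, K, phi = 2.0, 8.0, 0.1            # sliding-surface slope, switching gain, boundary layer
dt, z, z_dot = 0.005, 0.0, 0.0
z_des, z_des_dot, z_des_ddot = 1.5, 0.0, 0.0

for _ in range(4000):
    e, e_dot = z - z_des, z_dot - z_des_dot
    s = e_dot + lam * e                                              # sliding surface
    u = m * (g + z_des_ddot - lam * e_dot) - K * np.tanh(s / phi)    # equivalent + switching term
    z_ddot = u / m - g                                               # plant dynamics
    z_dot += z_ddot * dt
    z += z_dot * dt

print("final altitude:", round(z, 3))
```

With this choice of thrust, the surface dynamics reduce to s_dot = -(K/m) tanh(s/phi), so s and hence the tracking error are driven to zero; the optimization stage described in the abstract would then tune gains such as lam and K against the competing objectives.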
Procedia PDF Downloads 73
1624 Linguistic Cyberbullying, a Legislative Approach
Authors: Simona Maria Ignat
Abstract:
Bullying online has been an increasing studied topic during the last years. Different approaches, psychological, linguistic, or computational, have been applied. To our best knowledge, a definition and a set of characteristics of phenomenon agreed internationally as a common framework are still waiting for answers. Thus, the objectives of this paper are the identification of bullying utterances on Twitter and their algorithms. This research paper is focused on the identification of words or groups of words, categorized as “utterances”, with bullying effect, from Twitter platform, extracted on a set of legislative criteria. This set is the result of analysis followed by synthesis of law documents on bullying(online) from United States of America, European Union, and Ireland. The outcome is a linguistic corpus with approximatively 10,000 entries. The methods applied to the first objective have been the following. The discourse analysis has been applied in identification of keywords with bullying effect in texts from Google search engine, Images link. Transcription and anonymization have been applied on texts grouped in CL1 (Corpus linguistics 1). The keywords search method and the legislative criteria have been used for identifying bullying utterances from Twitter. The texts with at least 30 representations on Twitter have been grouped. They form the second corpus linguistics, Bullying utterances from Twitter (CL2). The entries have been identified by using the legislative criteria on the the BoW method principle. The BoW is a method of extracting words or group of words with same meaning in any context. The methods applied for reaching the second objective is the conversion of parts of speech to alphabetical and numerical symbols and writing the bullying utterances as algorithms. The converted form of parts of speech has been chosen on the criterion of relevance within bullying message. The inductive reasoning approach has been applied in sampling and identifying the algorithms. The results are groups with interchangeable elements. The outcomes convey two aspects of bullying: the form and the content or meaning. The form conveys the intentional intimidation against somebody, expressed at the level of texts by grammatical and lexical marks. This outcome has applicability in the forensic linguistics for establishing the intentionality of an action. Another outcome of form is a complex of graphemic variations essential in detecting harmful texts online. This research enriches the lexicon already known on the topic. The second aspect, the content, revealed the topics like threat, harassment, assault, or suicide. They are subcategories of a broader harmful content which is a constant concern for task forces and legislators at national and international levels. These topic – outcomes of the dataset are a valuable source of detection. The analysis of content revealed algorithms and lexicons which could be applied to other harmful contents. A third outcome of content are the conveyances of Stylistics, which is a rich source of discourse analysis of social media platforms. In conclusion, this corpus linguistics is structured on legislative criteria and could be used in various fields.Keywords: corpus linguistics, cyberbullying, legislation, natural language processing, twitter
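As a small illustration of the keyword-search step described above (matching tweets against a lexicon of utterances derived from legislative criteria), the sketch below flags texts containing lexicon entries and counts their frequency. The lexicon entries and tweets are invented placeholders; the real corpus in the study is built from legal documents and roughly 10,000 curated entries.

```python
# Sketch: flag texts that contain entries from a bullying-utterance lexicon (bag-of-words style).
import re
from collections import Counter

lexicon = {"nobody likes you", "go away", "you are worthless"}   # placeholder entries
tweets = [
    "honestly nobody likes you, just go away",
    "great game last night, congratulations!",
]

def match_utterances(text, lexicon):
    normalized = re.sub(r"[^a-z\s]", " ", text.lower())
    normalized = re.sub(r"\s+", " ", normalized).strip()
    return [u for u in lexicon if u in normalized]

counts = Counter()
for t in tweets:
    hits = match_utterances(t, lexicon)
    counts.update(hits)
    print("BULLYING" if hits else "clean", "->", t)

print(counts)
```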
Procedia PDF Downloads 86
1623 Delivery of Doxorubicin to Glioblastoma Multiforme Using Solid Lipid Nanoparticles with Surface Aprotinin and Melanotransferrin Antibody for Enhanced Chemotherapy
Authors: Yung-Chih Kuo, I-Hsuan Lee
Abstract:
Solid lipid nanoparticles (SLNs) conjugated with aprotinin (Apr) and melanotransferrin antibody (Anti-MTf) were used to carry doxorubicin (Dox) across the blood–brain barrier (BBB) for glioblastoma multiforme (GBM) chemotherapy. Dox-entrapped SLNs with grafted Apr and Anti-MTf (Apr-Anti-MTf-Dox-SLNs) were applied to a cultured monolayer comprising human brain-microvascular endothelial cells (HBMECs) with regulation of human astrocyte (HAs) and to a proliferated colony of U87MG cells. Based on the average particle diameter, zeta potential, entrapping efficiency of Dox, and grafting efficiency of Apr and Anti-MTf, we found that 40% (w/w) 1,2-dipalmitoyl-sn-glycero-3-phosphocholine in lipids were appropriate for fabricating Apr-Anti-MTf-Dox-SLNs. In addition, Apr-Anti-MTf-Dox-SLNs could prevent Dox from fast dissolution and did not induce a serious cytotoxicity to HBMECs and HAs when compared with free Dox. Moreover, the treatments with Apr-Anti-MTf-Dox-SLNs enhanced the ability of Dox to infuse the BBB and to inhibit the growth of GBM. The current Apr-Anti-MTf-Dox-SLNs can be a promising pharmacotherapeutic preparation to penetrate the BBB for malignant brain tumor treatment.Keywords: solid lipid nanoparticle, glioblastoma multiforme, blood–brain barrier, doxorubicin
Procedia PDF Downloads 361
1622 Improvements in Double Q-Learning for Anomalous Radiation Source Searching
Authors: Bo-Bin Xiao, Chia-Yi Liu
Abstract:
In the task of searching for anomalous radiation sources, personnel holding radiation detectors may be exposed to unnecessary radiation risk, so automated search using machines becomes a required project. This research uses several deep reinforcement learning algorithms, namely double Q-learning, dueling networks, and NoisyNet, to search for radiation sources. The simulation environment, a 10x10 grid with one shielding wall placed in it, supports the development of the AI model through training over 1 million episodes. In each training episode, the radiation source position, the radiation source intensity, the agent position, the shielding wall position, and the shielding wall length are all set randomly. The three algorithms are applied to train AI models in four environments, in which the training shielding wall is a full-shielding wall, a lead wall, a concrete wall, or a lead wall or concrete wall appearing randomly. The 12 best-performing AI models are selected by observing the reward value during the training period and are evaluated by comparing them with a gradient search algorithm. The results show that the performance of the AI models, regardless of the algorithm, is far better than that of the gradient search algorithm. In addition, as the simulation environment becomes more complex, the AI model that applies Double DQN combined with the dueling and NoisyNet algorithms performs better.
Keywords: double Q learning, dueling network, NoisyNet, source searching
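The study uses deep variants (Double DQN with dueling heads and NoisyNet); the snippet below only sketches the underlying double Q-learning update rule in tabular form, which is the part that reduces the over-estimation bias that plain Q-learning suffers from. The environment details, state encoding and hyperparameters are assumptions.

```python
# Sketch: one tabular double Q-learning update on a generic transition.
import numpy as np

n_states, n_actions = 100, 4        # e.g. cells of a 10x10 search grid, 4 moves (assumption)
alpha, gamma = 0.1, 0.95
Q1 = np.zeros((n_states, n_actions))
Q2 = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def double_q_update(s, a, r, s_next, done):
    """Randomly update one table, evaluating its greedy action with the other table."""
    if rng.random() < 0.5:
        a_star = int(np.argmax(Q1[s_next]))
        target = r + (0.0 if done else gamma * Q2[s_next, a_star])
        Q1[s, a] += alpha * (target - Q1[s, a])
    else:
        a_star = int(np.argmax(Q2[s_next]))
        target = r + (0.0 if done else gamma * Q1[s_next, a_star])
        Q2[s, a] += alpha * (target - Q2[s, a])

# Behaviour policy acts greedily on Q1 + Q2 (exploration handled elsewhere).
double_q_update(s=12, a=3, r=-1.0, s_next=13, done=False)
```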
Procedia PDF Downloads 112
1621 Lockit: A Logic Locking Automation Software
Authors: Nemanja Kajtez, Yue Zhan, Basel Halak
Abstract:
The significant rise in the cost of manufacturing of nanoscale integrated circuits (IC) has led the majority of IC design companies to outsource the fabrication of their products to other companies, often located in different countries. This multinational nature of the hardware supply chain has led to a host of security threats, including IP piracy, IC overproduction, and Trojan insertion. To combat that, researchers have proposed logic locking techniques to protect the intellectual properties of the design and increase the difficulty of malicious modification of its functionality. However, the adoption of logic locking approaches is rather slow due to the lack of the integration with IC production process and the lack of efficacy of existing algorithms. This work automates the logic locking process by developing software using Python that performs the locking on a gate-level netlist and can be integrated with the existing digital synthesis tools. Analysis of the latest logic locking algorithms has demonstrated that the SFLL-HD algorithm is one of the most secure and versatile in trading-off levels of protection against different types of attacks and was thus selected for implementation. The presented tool can also be expanded to incorporate the latest locking mechanisms to keep up with the fast-paced development in this field. The paper also presents a case study to demonstrate the functionality of the tool and how it could be used to explore the design space and compare different locking solutions. The source code of this tool is available freely from (https://www.researchgate.net/publication/353195333_Source_Code_for_The_Lockit_Tool).Keywords: design automation, hardware security, IP piracy, logic locking
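Lockit implements SFLL-HD on gate-level netlists; that algorithm is too involved to reproduce here, so the sketch below only illustrates the more basic idea behind logic locking, inserting XOR/XNOR key gates on selected wires of a tiny netlist so that the circuit computes the original function only for the correct key. The netlist representation and the choice of wires are invented for illustration and are unrelated to the Lockit source code or to SFLL-HD itself.

```python
# Sketch: naive XOR/XNOR key-gate insertion (basic logic locking, NOT the SFLL-HD algorithm).
import random

# Tiny combinational netlist: gate name -> (type, [fan-in wires])
netlist = {
    "n1":  ("AND", ["a", "b"]),
    "n2":  ("OR",  ["n1", "c"]),
    "out": ("NOT", ["n2"]),
}

def lock_netlist(netlist, wires_to_lock, seed=0):
    random.seed(seed)
    locked, key = dict(netlist), {}
    for i, wire in enumerate(wires_to_lock):
        key_bit = random.randint(0, 1)
        key_in, new_wire = f"k{i}", f"{wire}_locked"
        gate_type = "XOR" if key_bit == 0 else "XNOR"   # correct key bit restores the signal
        locked[new_wire] = (gate_type, [wire, key_in])
        # re-point every consumer of the original wire to the locked version
        for g, (typ, fanin) in locked.items():
            if g != new_wire:
                locked[g] = (typ, [new_wire if w == wire else w for w in fanin])
        key[key_in] = key_bit
    return locked, key

locked, key = lock_netlist(netlist, wires_to_lock=["n1", "n2"])
print("secret key:", key)
for gate, definition in locked.items():
    print(gate, definition)
```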
Procedia PDF Downloads 182
1620 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks
Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone
Abstract:
Seizure is the main factor that affects the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of epileptogenic zone, is commonly made by using continuous Electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is made manually by epileptologists and this process is usually very long and error prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using knowledge discovery in database process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on Artificial Neural Network classifier, trained by applying the multilayer perceptron algorithm, and by using a software application, called Training Builder that has been developed for the massive extraction of features from EEG signals. This tool is able to cover all the data preparation steps ranging from signal processing to data analysis techniques, including the sliding window paradigm, the dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performances, reaching an accuracy of over 99% during tests on data of a single patient retrieved from a publicly available EEG dataset.Keywords: artificial neural network, data mining, electroencephalogram, epilepsy, feature extraction, seizure detection, signal processing
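The Training Builder tool and the dataset used in the study are not reproduced here; the sketch below only illustrates the general pipeline the abstract describes, sliding-window feature extraction from an EEG channel followed by a multilayer-perceptron classifier. The window length, the three example features and the synthetic signal are assumptions.

```python
# Sketch: sliding-window features from one EEG channel + MLP classifier (illustrative only).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs, win = 256, 256                       # 1-second windows at 256 Hz (assumption)
signal = rng.standard_normal(fs * 600)   # placeholder for a real EEG recording
labels = rng.integers(0, 2, size=len(signal) // win)   # placeholder seizure/non-seizure labels

def window_features(x):
    # a few classic time-domain features: mean, std, line length, zero crossings
    return [x.mean(), x.std(),
            float(np.abs(np.diff(x)).sum()),
            int(np.sum(x[:-1] * x[1:] < 0))]

X = np.array([window_features(signal[i * win:(i + 1) * win])
              for i in range(len(signal) // win)])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", round(clf.score(X_te, y_te), 3))
```

With real recordings and labels, the feature list would grow to the spectral and information-theoretic measures mentioned in the abstract, and evaluation would use patient-wise splits rather than a random one.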
Procedia PDF Downloads 188
1619 Algorithm for Quantification of Pulmonary Fibrosis in Chest X-Ray Exams
Authors: Marcela de Oliveira, Guilherme Giacomini, Allan Felipe Fattori Alves, Ana Luiza Menegatti Pavan, Maria Eugenia Dela Rosa, Fernando Antonio Bacchim Neto, Diana Rodrigues de Pina
Abstract:
It is estimated that each year one death every 10 seconds (about 2 million deaths) in the world is attributed to tuberculosis (TB). Even after effective treatment, TB leaves sequelae such as, for example, pulmonary fibrosis, compromising the quality of life of patients. Evaluations of the aforementioned sequel are usually performed subjectively by radiology specialists. Subjective evaluation may indicate variations inter and intra observers. The examination of x-rays is the diagnostic imaging method most accomplished in the monitoring of patients diagnosed with TB and of least cost to the institution. The application of computational algorithms is of utmost importance to make a more objective quantification of pulmonary impairment in individuals with tuberculosis. The purpose of this research is the use of computer algorithms to quantify the pulmonary impairment pre and post-treatment of patients with pulmonary TB. The x-ray images of 10 patients with TB diagnosis confirmed by examination of sputum smears were studied. Initially the segmentation of the total lung area was performed (posteroanterior and lateral views) then targeted to the compromised region by pulmonary sequel. Through morphological operators and the application of signal noise tool, it was possible to determine the compromised lung volume. The largest difference found pre- and post-treatment was 85.85% and the smallest was 54.08%.Keywords: algorithm, radiology, tuberculosis, x-rays exam
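The study's segmentation of the lung fields and of the fibrotic region is done on real radiographs with expert input; the sketch below only illustrates the kind of operations mentioned (thresholding, morphological clean-up, area ratio) on a synthetic image using scikit-image. The thresholding rule, structuring-element sizes and the placeholder lung mask are assumptions.

```python
# Sketch: threshold + morphology to estimate the fraction of lung area affected (illustrative).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_closing, remove_small_objects, disk

rng = np.random.default_rng(0)
lung_mask = np.zeros((256, 256), dtype=bool)
lung_mask[40:220, 30:120] = True                     # placeholder lung field segmentation
image = rng.normal(0.3, 0.05, (256, 256))
image[150:200, 50:100] += 0.4                        # synthetic dense (fibrotic-like) patch

values = image[lung_mask]
t = threshold_otsu(values)                           # intensity threshold inside the lung field
lesion = np.zeros_like(lung_mask)
lesion[lung_mask] = values > t
lesion = binary_closing(lesion, disk(3))             # smooth ragged edges
lesion = remove_small_objects(lesion, min_size=50)   # drop speckle noise

fraction = lesion[lung_mask].mean()
print(f"estimated compromised fraction of lung area: {fraction:.1%}")
```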
Procedia PDF Downloads 418
1618 Finite-Sum Optimization: Adaptivity to Smoothness and Loopless Variance Reduction
Authors: Bastien Batardière, Joon Kwon
Abstract:
For finite-sum optimization, variance-reduced gradient methods (VR) compute at each iteration the gradient of a single function (or of a mini-batch), and yet achieve faster convergence than SGD thanks to a carefully crafted lower-variance stochastic gradient estimator that reuses past gradients. Another important line of research of the past decade in continuous optimization is the adaptive algorithms such as AdaGrad, that dynamically adjust the (possibly coordinate-wise) learning rate to past gradients and thereby adapt to the geometry of the objective function. Variants such as RMSprop and Adam demonstrate outstanding practical performance that has contributed to the success of deep learning. In this work, we present AdaLVR, which combines the AdaGrad algorithm with loopless variance-reduced gradient estimators such as SAGA or L-SVRG and benefits from a straightforward construction and a streamlined analysis. We assess that AdaLVR inherits both good convergence properties from VR methods and the adaptive nature of AdaGrad: in the case of L-smooth convex functions we establish a gradient complexity of O(n + (L + √(nL))/ε) without prior knowledge of L. Numerical experiments demonstrate the superiority of AdaLVR over state-of-the-art methods. Moreover, we empirically show that the RMSprop and Adam algorithms combined with variance-reduced gradient estimators achieve even faster convergence.
Keywords: convex optimization, variance reduction, adaptive algorithms, loopless
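The following is a compact sketch of the kind of estimator the paper combines: a loopless SVRG gradient estimator whose steps are scaled AdaGrad-style by accumulated squared gradients, applied to a least-squares problem. It follows the general description in the abstract rather than the authors' code, and the step size, reference-update probability and problem instance are illustrative.

```python
# Sketch: loopless SVRG estimator with AdaGrad-style coordinate-wise step sizes (least squares).
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

grad_i = lambda w, i: A[i] * (A[i] @ w - b[i])          # gradient of 1/2 (a_i^T w - b_i)^2
full_grad = lambda w: A.T @ (A @ w - b) / n

w = np.zeros(d)
w_ref, mu = w.copy(), full_grad(w)                      # reference point and its full gradient
G = np.zeros(d)                                         # AdaGrad accumulator
eta, p, eps = 0.5, 1.0 / n, 1e-8

for t in range(20000):
    i = rng.integers(n)
    g = grad_i(w, i) - grad_i(w_ref, i) + mu            # loopless SVRG estimator (unbiased)
    G += g * g
    w -= eta / np.sqrt(G + eps) * g                     # AdaGrad-style step
    if rng.random() < p:                                # loopless: refresh reference with prob p
        w_ref, mu = w.copy(), full_grad(w)

print("final objective:", round(0.5 * np.mean((A @ w - b) ** 2), 6))
```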
Procedia PDF Downloads 70
1617 Automatic Identification and Classification of Contaminated Biodegradable Plastics using Machine Learning Algorithms and Hyperspectral Imaging Technology
Authors: Nutcha Taneepanichskul, Helen C. Hailes, Mark Miodownik
Abstract:
Plastic waste has emerged as a critical global environmental challenge, primarily driven by the prevalent use of conventional plastics derived from petrochemical refining and manufacturing processes in modern packaging. While these plastics serve vital functions, their persistence in the environment post-disposal poses significant threats to ecosystems. Addressing this issue necessitates approaches, one of which involves the development of biodegradable plastics designed to degrade under controlled conditions, such as industrial composting facilities. It is imperative to note that compostable plastics are engineered for degradation within specific environments and are not suited for uncontrolled settings, including natural landscapes and aquatic ecosystems. The full benefits of compostable packaging are realized when subjected to industrial composting, preventing environmental contamination and waste stream pollution. Therefore, effective sorting technologies are essential to enhance composting rates for these materials and diminish the risk of contaminating recycling streams. In this study, it leverage hyperspectral imaging technology (HSI) coupled with advanced machine learning algorithms to accurately identify various types of plastics, encompassing conventional variants like Polyethylene terephthalate (PET), Polypropylene (PP), Low density polyethylene (LDPE), High density polyethylene (HDPE) and biodegradable alternatives such as Polybutylene adipate terephthalate (PBAT), Polylactic acid (PLA), and Polyhydroxyalkanoates (PHA). The dataset is partitioned into three subsets: a training dataset comprising uncontaminated conventional and biodegradable plastics, a validation dataset encompassing contaminated plastics of both types, and a testing dataset featuring real-world packaging items in both pristine and contaminated states. Five distinct machine learning algorithms, namely Partial Least Squares Discriminant Analysis (PLS-DA), Support Vector Machine (SVM), Convolutional Neural Network (CNN), Logistic Regression, and Decision Tree Algorithm, were developed and evaluated for their classification performance. Remarkably, the Logistic Regression and CNN model exhibited the most promising outcomes, achieving a perfect accuracy rate of 100% for the training and validation datasets. Notably, the testing dataset yielded an accuracy exceeding 80%. The successful implementation of this sorting technology within recycling and composting facilities holds the potential to significantly elevate recycling and composting rates. As a result, the envisioned circular economy for plastics can be established, thereby offering a viable solution to mitigate plastic pollution.Keywords: biodegradable plastics, sorting technology, hyperspectral imaging technology, machine learning algorithms
Procedia PDF Downloads 79
1616 Power Iteration Clustering Based on Deflation Technique on Large Scale Graphs
Authors: Taysir Soliman
Abstract:
One of the current popular clustering techniques is Spectral Clustering (SC) because of its advantages over conventional approaches such as hierarchical clustering, k-means, etc. and other techniques as well. However, one of the disadvantages of SC is the time consuming process because it requires computing the eigenvectors. In the past to overcome this disadvantage, a number of attempts have been proposed such as the Power Iteration Clustering (PIC) technique, which is one of versions from SC; some of PIC advantages are: 1) its scalability and efficiency, 2) finding one pseudo-eigenvectors instead of computing eigenvectors, and 3) linear combination of the eigenvectors in linear time. However, its worst disadvantage is an inter-class collision problem because it used only one pseudo-eigenvectors which is not enough. Previous researchers developed Deflation-based Power Iteration Clustering (DPIC) to overcome problems of PIC technique on inter-class collision with the same efficiency of PIC. In this paper, we developed Parallel DPIC (PDPIC) to improve the time and memory complexity which is run on apache spark framework using sparse matrix. To test the performance of PDPIC, we compared it to SC, ESCG, ESCALG algorithms on four small graph benchmark datasets and nine large graph benchmark datasets, where PDPIC proved higher accuracy and better time consuming than other compared algorithms.Keywords: spectral clustering, power iteration clustering, deflation-based power iteration clustering, Apache spark, large graph
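To make the PIC baseline discussed above concrete, the sketch below runs plain (single pseudo-eigenvector) power iteration clustering on a small affinity matrix with NumPy and scikit-learn; the deflation-based and Spark-parallel versions add further machinery on top of this. The data, stopping rule and parameters are illustrative.

```python
# Sketch: basic power iteration clustering (PIC) on a small similarity graph.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])

A = rbf_kernel(X, gamma=2.0)
np.fill_diagonal(A, 0.0)
W = A / A.sum(axis=1, keepdims=True)            # row-normalised affinity matrix

v = rng.random(len(X))
v /= np.abs(v).sum()
for _ in range(200):                            # power iteration toward the dominant pseudo-eigenvector
    v_new = W @ v
    v_new /= np.abs(v_new).sum()
    if np.abs(v_new - v).max() < 1e-9:          # early stop: cluster structure is still visible in v
        break
    v = v_new

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(v.reshape(-1, 1))
print("cluster sizes:", np.bincount(labels))
```

The inter-class collision problem mentioned in the abstract arises exactly because a single pseudo-eigenvector like v can assign nearly identical values to distinct clusters; deflation-based PIC extracts additional pseudo-eigenvectors to avoid that.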
Procedia PDF Downloads 189
1615 An Observer-Based Direct Adaptive Fuzzy Sliding Control with Adjustable Membership Functions
Authors: Alireza Gholami, Amir H. D. Markazi
Abstract:
In this paper, an observer-based direct adaptive fuzzy sliding mode (OAFSM) algorithm is proposed. In the proposed algorithm, the zero-input dynamics of the plant could be unknown. The input connection matrix is used to combine the sliding surfaces of individual subsystems, and an adaptive fuzzy algorithm is used to estimate an equivalent sliding mode control input directly. The fuzzy membership functions, which were determined by time consuming try and error processes in previous works, are adjusted by adaptive algorithms. The other advantage of the proposed controller is that the input gain matrix is not limited to be diagonal, i.e. the plant could be over/under actuated provided that controllability and observability are preserved. An observer is constructed to directly estimate the state tracking error, and the nonlinear part of the observer is constructed by an adaptive fuzzy algorithm. The main advantage of the proposed observer is that, the measured outputs is not limited to the first entry of a canonical-form state vector. The closed-loop stability of the proposed method is proved using a Lyapunov-based approach. The proposed method is applied numerically on a multi-link robot manipulator, which verifies the performance of the closed-loop control. Moreover, the performance of the proposed algorithm is compared with some conventional control algorithms.Keywords: adaptive algorithm, fuzzy systems, membership functions, observer
Procedia PDF Downloads 206
1614 Use of Machine Learning Algorithms to Pediatric MR Images for Tumor Classification
Authors: I. Stathopoulos, V. Syrgiamiotis, E. Karavasilis, A. Ploussi, I. Nikas, C. Hatzigiorgi, K. Platoni, E. P. Efstathopoulos
Abstract:
Introduction: Brain and central nervous system (CNS) tumors form the second most common group of cancer in children, accounting for 30% of all childhood cancers. MRI is the key imaging technique used for the visualization and management of pediatric brain tumors. Initial characterization of tumors from MRI scans is usually performed via a radiologist’s visual assessment. However, different brain tumor types do not always demonstrate clear differences in visual appearance. Using only conventional MRI to provide a definite diagnosis could potentially lead to inaccurate results, and so histopathological examination of biopsy samples is currently considered to be the gold standard for obtaining definite diagnoses. Machine learning is defined as the study of computational algorithms that can use, complex or not, mathematical relationships and patterns from empirical and scientific data to make reliable decisions. Concerning the above, machine learning techniques could provide effective and accurate ways to automate and speed up the analysis and diagnosis for medical images. Machine learning applications in radiology are or could potentially be useful in practice for medical image segmentation and registration, computer-aided detection and diagnosis systems for CT, MR or radiography images and functional MR (fMRI) images for brain activity analysis and neurological disease diagnosis. Purpose: The objective of this study is to provide an automated tool, which may assist in the imaging evaluation and classification of brain neoplasms in pediatric patients by determining the glioma type, grade and differentiating between different brain tissue types. Moreover, a future purpose is to present an alternative way of quick and accurate diagnosis in order to save time and resources in the daily medical workflow. Materials and Methods: A cohort, of 80 pediatric patients with a diagnosis of posterior fossa tumor, was used: 20 ependymomas, 20 astrocytomas, 20 medulloblastomas and 20 healthy children. The MR sequences used, for every single patient, were the following: axial T1-weighted (T1), axial T2-weighted (T2), FluidAttenuated Inversion Recovery (FLAIR), axial diffusion weighted images (DWI), axial contrast-enhanced T1-weighted (T1ce). From every sequence only a principal slice was used that manually traced by two expert radiologists. Image acquisition was carried out on a GE HDxt 1.5-T scanner. The images were preprocessed following a number of steps including noise reduction, bias-field correction, thresholding, coregistration of all sequences (T1, T2, T1ce, FLAIR, DWI), skull stripping, and histogram matching. A large number of features for investigation were chosen, which included age, tumor shape characteristics, image intensity characteristics and texture features. After selecting the features for achieving the highest accuracy using the least number of variables, four machine learning classification algorithms were used: k-Nearest Neighbour, Support-Vector Machines, C4.5 Decision Tree and Convolutional Neural Network. The machine learning schemes and the image analysis are implemented in the WEKA platform and MatLab platform respectively. Results-Conclusions: The results and the accuracy of images classification for each type of glioma by the four different algorithms are still on process.Keywords: image classification, machine learning algorithms, pediatric MRI, pediatric oncology
Procedia PDF Downloads 149
1613 Breast Cancer Survivability Prediction via Classifier Ensemble
Authors: Mohamed Al-Badrashiny, Abdelghani Bellaachia
Abstract:
This paper presents a classifier ensemble approach for predicting the survivability of the breast cancer patients using the latest database version of the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute. The system consists of two main components: features selection and classifier ensemble components. The features selection component divides the features in SEER database into four groups. After that it tries to find the most important features among the four groups that maximizes the weighted average F-score of a certain classification algorithm. The ensemble component uses three different classifiers, each of which models different set of features from SEER through the features selection module. On top of them, another classifier is used to give the final decision based on the output decisions and confidence scores from each of the underlying classifiers. Different classification algorithms have been examined; the best setup found is by using the decision tree, Bayesian network, and Naïve Bayes algorithms for the underlying classifiers and Naïve Bayes for the classifier ensemble step. The system outperforms all published systems to date when evaluated against the exact same data of SEER (period of 1973-2002). It gives 87.39% weighted average F-score compared to 85.82% and 81.34% of the other published systems. By increasing the data size to cover the whole database (period of 1973-2014), the overall weighted average F-score jumps to 92.4% on the held out unseen test set.
Keywords: classifier ensemble, breast cancer survivability, data mining, SEER
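scikit-learn has no Bayesian-network classifier, so the sketch below illustrates the same stacked architecture with a decision tree and two Naïve Bayes variants as base learners and Naïve Bayes as the final decision layer, on synthetic data standing in for the selected SEER features. It shows the structure only, not the study's feature selection or its reported scores.

```python
# Sketch: stacked classifier ensemble (three base learners + Naive Bayes meta-learner).
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB, BernoulliNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=3000, n_features=20, weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = StackingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=8, random_state=0)),
        ("gnb", GaussianNB()),
        ("bnb", BernoulliNB()),          # stand-in for the Bayesian network used in the paper
    ],
    final_estimator=GaussianNB(),        # meta-learner combines base-classifier probabilities
    stack_method="predict_proba",
    cv=5,
)
ensemble.fit(X_tr, y_tr)
print("weighted F1:", round(f1_score(y_te, ensemble.predict(X_te), average="weighted"), 3))
```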
Procedia PDF Downloads 328
1612 Distributed Real-time Framework for Experimental Multi Aerial Robotic Systems
Authors: Samuel Knox, Verdon Crann, Peyman Amiri, William Crowther
Abstract:
There exists a shortage of open-source firmware for allowing researchers to focus on implementing high-level planning and control strategies for multi aerial robotic systems in simulation and experiment. Within this body of work, practical firmware is presented, which performs all supplementary tasks, including communications, pre and post-experiment procedures, and emergency safety measures. This allows researchers to implement high-level planning and control algorithms for path planning, traffic management, flight formation and swarming of aerial robots. The framework is built in Python using the MAVSDK library, which is compatible with flight controllers running PX4 firmware and onboard computers based on Linux. Communication is performed using Wi-Fi and the MQTT protocol, currently implemented using a centralized broker. Finally, a graphical user interface (GUI) has been developed to send general commands and monitor the agents. This framework enables researchers to prepare customized planning and control algorithms in a modular manner. Studies can be performed experimentally and in simulation using PX4 software in the loop (SITL) and the Gazebo simulator. An example experimental use case of the framework is presented using novel distributed planning and control strategies. The demonstration is performed using off-the-shelf components and minimal setup.Keywords: aerial robotics, distributed framework, experimental, planning and control
Procedia PDF Downloads 112
1611 Value Chain Identification of Beekeeping Business in Indonesia: Case Study of Four Beekeeping Business in West Java
Authors: Dwi Purnomo, Anas Bunyamin, Fajar Susilo, Akbar Anugrah
Abstract:
Beekeeping has become a rural economic buffer, especially for people living by the forest side, allowing them to diversify their food or sell honey and bee colonies. Aside from the high price of honey and its derivative products, there are other revenue streams along the beekeeping value chain that could be optimized. Five of the nine honey bee species in the world exist in Indonesia, namely Apis Cerana, Apis Dorsata, Apis Andreniformis, Apis Koschevnikovi, and Apis Nigrocincta. Indonesian farmers generally keep Apis Cerana and two other honey bee species, Apis Mellifera and Trigona. This study tried to identify beekeeping business practices, challenges and opportunities in four beekeeping businesses in West Java through the value chain along the business. Data were collected by literature review, interviews and focus group discussions with key actors in the beekeeping business. There are six revenue streams in the beekeeping business in West Java: brood hunting, beehive making, agroforestry, agro-tourism, honey and derivative products, and bee acupuncture. This assessment concludes with the criteria that should be grasped for developing and sustaining the beekeeping business in West Java.
Keywords: beekeeping business, business developing, value chain, West Java
Procedia PDF Downloads 209
1610 Localization of Buried People Using Received Signal Strength Indication Measurement of Wireless Sensor
Authors: Feng Tao, Han Ye, Shaoyi Liao
Abstract:
City constructions collapse after earthquake and people will be buried under ruins. Search and rescue should be conducted as soon as possible to save them. Therefore, according to the complicated environment, irregular aftershocks and rescue allow of no delay, a kind of target localization method based on RSSI (Received Signal Strength Indication) is proposed in this article. The target localization technology based on RSSI with the features of low cost and low complexity has been widely applied to nodes localization in WSN (Wireless Sensor Networks). Based on the theory of RSSI transmission and the environment impact to RSSI, this article conducts the experiments in five scenes, and multiple filtering algorithms are applied to original RSSI value in order to establish the signal propagation model with minimum test error respectively. Target location can be calculated from the distance, which can be estimated from signal propagation model, through improved centroid algorithm. Result shows that the localization technology based on RSSI is suitable for large-scale nodes localization. Among filtering algorithms, mixed filtering algorithm (average of average, median and Gaussian filtering) performs better than any other single filtering algorithm, and by using the signal propagation model, the minimum error of distance between known nodes and target node in the five scene is about 3.06m.Keywords: signal propagation model, centroid algorithm, localization, mixed filtering, RSSI
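The sketch below illustrates the two computational steps named in the abstract: converting filtered RSSI to distance through a log-distance path-loss model and locating the target with a distance-weighted centroid of the anchor nodes. The path-loss exponent, reference power and node layout are invented for the example; the study fits its propagation model per scene from measurements.

```python
# Sketch: log-distance path-loss model + weighted centroid localization from RSSI.
import numpy as np

def rssi_to_distance(rssi_dbm, rssi_at_d0=-40.0, d0=1.0, n=2.5):
    """Log-distance model: RSSI = RSSI(d0) - 10*n*log10(d/d0)."""
    return d0 * 10 ** ((rssi_at_d0 - rssi_dbm) / (10 * n))

def weighted_centroid(anchors, rssi_values):
    d = np.array([rssi_to_distance(r) for r in rssi_values])
    w = 1.0 / d                                  # closer anchors (stronger RSSI) weigh more
    return (w[:, None] * anchors).sum(axis=0) / w.sum()

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # known node positions (m)
rssi = np.array([-55.0, -68.0, -66.0, -75.0])    # mixed-filtered RSSI from the target (dBm)
print("estimated position:", np.round(weighted_centroid(anchors, rssi), 2))
```

In practice the raw RSSI samples would first pass through the mixed filter (average, median and Gaussian) described above before being converted to distances.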
Procedia PDF Downloads 300
1609 Machine Learning and Deep Learning Approach for People Recognition and Tracking in Crowd for Safety Monitoring
Authors: A. Degale Desta, Cheng Jian
Abstract:
Deep learning applications in computer vision are rapidly advancing, giving systems the ability to monitor the public and quickly identify potentially anomalous behaviour in crowd scenes. The purpose of the current work is therefore to improve the safety of people in crowd events by detecting panic behaviour, through the innovative idea of Aggregation of Ensembles (AOE), which makes use of pre-trained ConvNets and a pool of classifiers to find anomalies in video data with packed scenes. Since methods such as K-means, KNN, CNN, SVD, Faster R-CNN and YOLOv5 learn different levels of semantic representation from crowd videos, the proposed approach leverages an ensemble of various fine-tuned convolutional neural networks (CNNs), allowing for the extraction of enriched feature sets. In addition to the above algorithms, a long short-term memory neural network is used to forecast future feature values, together with a handcrafted feature that takes the peculiarities of the crowd into consideration to understand human behavior. Experiments are run on well-known datasets of panic situations to assess the effectiveness and precision of the suggested method. The results reveal that, compared to state-of-the-art methodologies, the system produces better and more promising results in terms of accuracy and processing speed.
Keywords: action recognition, computer vision, crowd detecting and tracking, deep learning
Procedia PDF Downloads 161
1608 Evaluation of Negative Air Ions in Bioaerosol Removal: Indoor Concentration of Airborne Bacterial and Fungal in Residential Building in Qom City, Iran
Authors: Z. Asadgol, A. Nadali, H. Arfaeinia, M. Khalifeh Gholi, R. Fateh, M. Fahiminia
Abstract:
The present investigation was conducted to detect the type and concentrations of bacterial and fungal bioaerosols in one room (bedroom) of each selected residential building located in different regions of Qom during February 2015 (n=9) to July 2016 (n=11). Moreover, we evaluated the efficiency of negative air ions (NAIs) in bioaerosol reduction in indoor air in residential buildings. In the first step, the mean concentrations of bacterial and fungal in nine sampling sites evaluated in winter were 744 and 579 colony forming units (CFU)/m3, while these values were 1628.6 and 231 CFU/m3 in the 11 sampling sites evaluated in summer, respectively. The most predominant genera between bacterial and fungal in all sampling sites were detected as Micrococcus spp. and Staphylococcus spp. and also, Aspergillus spp. and Penicillium spp., respectively. The 95% and 45% of sampling sites have bacterial and fungal concentrations over the recommended levels, respectively. In the removal step, we achieved a reduction with a range of 38% to 93% for bacterial genera and 25% to 100% for fungal genera by using NAIs. The results suggested that NAI is a highly effective, simple and efficient technique in reducing the bacterial and fungal concentration in the indoor air of residential buildings.Keywords: bacterial, fungal, negative air ions (NAIs), indoor air, Iran
Procedia PDF Downloads 402
1607 Artificial Intelligence for Generative Modelling
Authors: Shryas Bhurat, Aryan Vashistha, Sampreet Dinakar Nayak, Ayush Gupta
Abstract:
As the technology is advancing more towards high computational resources, there is a paradigm shift in the usage of these resources to optimize the design process. This paper discusses the usage of ‘Generative Design using Artificial Intelligence’ to build better models that adapt the operations like selection, mutation, and crossover to generate results. The human mind thinks of the simplest approach while designing an object, but the intelligence learns from the past & designs the complex optimized CAD Models. Generative Design takes the boundary conditions and comes up with multiple solutions with iterations to come up with a sturdy design with the most optimal parameter that is given, saving huge amounts of time & resources. The new production techniques that are at our disposal allow us to use additive manufacturing, 3D printing, and other innovative manufacturing techniques to save resources and design artistically engineered CAD Models. Also, this paper discusses the Genetic Algorithm, the Non-Domination technique to choose the right results using biomimicry that has evolved for current habitation for millions of years. The computer uses parametric models to generate newer models using an iterative approach & uses cloud computing to store these iterative designs. The later part of the paper compares the topology optimization technology with Generative Design that is previously being used to generate CAD Models. Finally, this paper shows the performance of algorithms and how these algorithms help in designing resource-efficient models.Keywords: genetic algorithm, bio mimicry, generative modeling, non-dominant techniques
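Since the abstract leans on genetic algorithms and the non-domination technique, the short sketch below shows the core non-domination test used to keep Pareto-optimal candidate designs, here assuming all objectives are to be minimised. The objective values are placeholders; a real generative-design loop would obtain them from CAD/FEA evaluations of each candidate.

```python
# Sketch: extract the non-dominated (Pareto) set from a list of candidate designs.
import numpy as np

def non_dominated(objectives):
    """objectives: (n_candidates, n_objectives), all minimised. Returns indices of the Pareto set."""
    F = np.asarray(objectives, dtype=float)
    keep = []
    for i, fi in enumerate(F):
        # i is dominated if some other candidate is no worse everywhere and strictly better somewhere
        dominated = np.any(np.all(F <= fi, axis=1) & np.any(F < fi, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# e.g. candidates scored on (mass, compliance) from a structural evaluation (placeholder values)
scores = [(2.0, 5.0), (3.0, 3.0), (2.5, 4.0), (4.0, 3.5), (1.8, 6.0)]
print("Pareto-optimal candidates:", non_dominated(scores))
```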
Procedia PDF Downloads 149
1606 Melanoma and Non-Melanoma, Skin Lesion Classification, Using a Deep Learning Model
Authors: Shaira L. Kee, Michael Aaron G. Sy, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar AlDahoul
Abstract:
Skin diseases are considered the fourth most common disease, with melanoma and non-melanoma skin cancer as the most common type of cancer in Caucasians. The alarming increase in Skin Cancer cases shows an urgent need for further research to improve diagnostic methods, as early diagnosis can significantly improve the 5-year survival rate. Machine Learning algorithms for image pattern analysis in diagnosing skin lesions can dramatically increase the accuracy rate of detection and decrease possible human errors. Several studies have shown the diagnostic performance of computer algorithms outperformed dermatologists. However, existing methods still need improvements to reduce diagnostic errors and generate efficient and accurate results. Our paper proposes an ensemble method to classify dermoscopic images into benign and malignant skin lesions. The experiments were conducted using the International Skin Imaging Collaboration (ISIC) image samples. The dataset contains 3,297 dermoscopic images with benign and malignant categories. The results show improvement in performance with an accuracy of 88% and an F1 score of 87%, outperforming other existing models such as support vector machine (SVM), Residual network (ResNet50), EfficientNetB0, EfficientNetB4, and VGG16.Keywords: deep learning - VGG16 - efficientNet - CNN – ensemble – dermoscopic images - melanoma
Procedia PDF Downloads 81
1605 Improved Classification Procedure for Imbalanced and Overlapped Situations
Authors: Hankyu Lee, Seoung Bum Kim
Abstract:
The issue with imbalance and overlapping in the class distribution becomes important in various applications of data mining. The imbalanced dataset is a special case in classification problems in which the number of observations of one class (i.e., major class) heavily exceeds the number of observations of the other class (i.e., minor class). Overlapped dataset is the case where many observations are shared together between the two classes. Imbalanced and overlapped data can be frequently found in many real examples including fraud and abuse patients in healthcare, quality prediction in manufacturing, text classification, oil spill detection, remote sensing, and so on. The class imbalance and overlap problem is the challenging issue because this situation degrades the performance of most of the standard classification algorithms. In this study, we propose a classification procedure that can effectively handle imbalanced and overlapped datasets by splitting data space into three parts: nonoverlapping, light overlapping, and severe overlapping and applying the classification algorithm in each part. These three parts were determined based on the Hausdorff distance and the margin of the modified support vector machine. An experiments study was conducted to examine the properties of the proposed method and compared it with other classification algorithms. The results showed that the proposed method outperformed the competitors under various imbalanced and overlapped situations. Moreover, the applicability of the proposed method was demonstrated through the experiment with real data.Keywords: classification, imbalanced data with class overlap, split data space, support vector machine
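As a pointer to how the overlap-driven split described above can start, the snippet below computes the symmetric Hausdorff distance between the two class point sets with SciPy; in the paper this distance, together with a modified SVM margin, drives the partition into non-overlapping, lightly overlapping and severely overlapping regions. The data and the single-threshold rule here are purely illustrative.

```python
# Sketch: Hausdorff distance between the major and minor class point clouds.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(0)
major = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # majority class samples
minor = rng.normal(loc=1.0, scale=1.0, size=(60, 2))    # minority class samples (overlapping)

d_ab = directed_hausdorff(major, minor)[0]
d_ba = directed_hausdorff(minor, major)[0]
hausdorff = max(d_ab, d_ba)
print(f"Hausdorff distance between classes: {hausdorff:.3f}")

# Toy decision rule (assumption): small distance relative to data spread suggests heavy overlap.
spread = np.linalg.norm(major.std(axis=0))
print("overlap suspected" if hausdorff < 3 * spread else "classes well separated")
```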
Procedia PDF Downloads 308
1604 Automated Feature Detection and Matching Algorithms for Breast IR Sequence Images
Authors: Chia-Yen Lee, Hao-Jen Wang, Jhih-Hao Lai
Abstract:
In recent years, infrared (IR) imaging has been considered a potential tool for assessing the efficacy of chemotherapy and for the early detection of breast cancer. Regions of tumor growth with a high metabolic rate and angiogenesis lead to high temperatures. Observing differences between heat maps over the long term helps assess the growth of breast cancer cells and detect breast cancer earlier, and multi-time infrared image alignment is a necessary step in this process. Detection and matching of representative feature points are essential steps toward good performance in image registration and quantitative analysis. However, there are no clear boundaries in infrared images, and the subject's posture differs between shots. Adhesive markers cannot remain on a body surface for a very long period, and it is hard to find anatomic fiducial markers on a body surface. In other words, it is difficult to detect and match features in an IR image sequence. In this study, automated feature detection and matching algorithms are developed for two types of automatic feature points (i.e., vascular branch points and modified Harris corners). The preliminary results show that the proposed method could successfully identify representative feature points on IR breast images with 98% accuracy, with matching results of 93% accuracy.
Keywords: Harris corner, infrared image, feature detection, registration, matching
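The sketch below shows the Harris-corner half of the feature detection step using scikit-image on a placeholder image pair; vascular branch-point detection, the modified corner response and the actual matching/registration used in the study are not reproduced. All parameters are illustrative.

```python
# Sketch: Harris corner detection on two frames of a (placeholder) IR breast image sequence.
import numpy as np
from skimage.feature import corner_harris, corner_peaks

rng = np.random.default_rng(0)
frame_a = rng.random((128, 128))
frame_a[40:60, 40:60] = 1.0                               # synthetic bright block produces corner responses
frame_b = np.roll(frame_a, shift=(3, 2), axis=(0, 1))     # second frame, slightly shifted

def detect_corners(image, min_distance=5, threshold_rel=0.2):
    response = corner_harris(image, k=0.05, sigma=1.5)
    return corner_peaks(response, min_distance=min_distance, threshold_rel=threshold_rel)

corners_a = detect_corners(frame_a)
corners_b = detect_corners(frame_b)
print("corners in frame A:", len(corners_a), "| corners in frame B:", len(corners_b))
# Matching step (not shown): pair corners across frames, e.g. by patch descriptors, then
# estimate the transform that aligns the sequence for long-term temperature comparison.
```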
Procedia PDF Downloads 304
1603 A Supervised Approach for Detection of Singleton Spam Reviews
Authors: Atefeh Heydari, Mohammadali Tavakoli, Naomie Salim
Abstract:
In recent years, we have witnessed that online reviews are the most important source of customers’ opinion. They are progressively more used by individuals and organisations to make purchase and business decisions. Unfortunately, for the reason of profit or fame, frauds produce deceptive reviews to hoodwink potential customers. Their activities mislead not only potential customers to make appropriate purchasing decisions and organisations to reshape their business, but also opinion mining techniques by preventing them from reaching accurate results. Spam reviews could be divided into two main groups, i.e. multiple and singleton spam reviews. Detecting a singleton spam review that is the only review written by a user ID is extremely challenging due to lack of clue for detection purposes. Singleton spam reviews are very harmful and various features and proofs used in multiple spam reviews detection are not applicable in this case. Current research aims to propose a novel supervised technique to detect singleton spam reviews. To achieve this, various features are proposed in this study and are to be combined with the most appropriate features extracted from literature and employed in a classifier. In order to compare the performance of different classifiers, SVM and naive Bayes classification algorithms were used for model building. The results revealed that SVM was more accurate than naive Bayes and our proposed technique is capable to detect singleton spam reviews effectively.Keywords: classification algorithms, Naïve Bayes, opinion review spam detection, singleton review spam detection, support vector machine
Procedia PDF Downloads 309
1602 Commuters Trip Purpose Decision Tree Based Model of Makurdi Metropolis, Nigeria and Strategic Digital City Project
Authors: Emmanuel Okechukwu Nwafor, Folake Olubunmi Akintayo, Denis Alcides Rezende
Abstract:
Decision tree models are versatile and interpretable machine learning algorithms widely used for both classification and regression tasks, which can be related to cities, whether physical or digital. The aim of this research is to assess how well decision tree algorithms can predict trip purposes in Makurdi, Nigeria, while also exploring their connection to the strategic digital city initiative. The research methodology involves formalizing household demographic and trips information datasets obtained from extensive survey process. Modelling and Prediction were achieved using Python Programming Language and the evaluation metrics like R-squared and mean absolute error were used to assess the decision tree algorithm's performance. The results indicate that the model performed well, with accuracies of 84% and 68%, and low MAE values of 0.188 and 0.314, on training and validation data, respectively. This suggests the model can be relied upon for future prediction. The conclusion reiterates that This model will assist decision-makers, including urban planners, transportation engineers, government officials, and commuters, in making informed decisions on transportation planning and management within the framework of a strategic digital city. Its application will enhance the efficiency, sustainability, and overall quality of transportation services in Makurdi, Nigeria.Keywords: decision tree algorithm, trip purpose, intelligent transport, strategic digital city, travel pattern, sustainable transport
Procedia PDF Downloads 20
1601 Sex Differentiation of Elm Nymphalid (Nymphalis polychloros Linnaeus, 1758) on Pupal Stage
Authors: Hanife Genç
Abstract:
This study was conducted to determine sex differentiation of laboratory reared Elm nymphalid (Nymphalis polychloros Linnaeus, 1758) by examining the morphological structure of pupal stage. Laboratory colony of elm nymphalid, reared on pear leaves, were used to set up experiments. It was performed with 5 replications having 8 pupae for each replication. Dorsal, ventral and lateral parts of external morphological structures of pupae were examined by Olympus SZX9 microscope and photographed. When fully grown, mature larvae wander the highest part of the rearing cage and pupae were formed hanging by cremaster. After completing prepupa stage about 1.5±0.3 days, they all pupated. Pupal stage was completed at 25±1°C about 4.38±1.20 days. Pupal weights were 0.483±0.05 g in females and 0.392±0.08 g (n=40) in males respectively. Pupal emergence rate was 95%, with 22 females and 16 males. Examinations of ventral parts of 8th, 9th, and 10th abdominal segments revealed that anal opening were found at 10th abdominal segment in both sexes, 3 lumbs were determined at 9th abdominal segments then the specific opening structure at 8th segment was only found on female pupae.Keywords: sex differentiation, Nymphalis polychloros, pupa, Linnaeus
Procedia PDF Downloads 235