Search results for: modified simplex algorithm
3945 SEM Image Classification Using CNN Architectures
Authors: Güzin Tirkeş, Özge Tekin, Kerem Kurtuluş, Y. Yekta Yurtseven, Murat Baran
Abstract:
A scanning electron microscope (SEM) is a type of electron microscope used mainly in nanoscience and nanotechnology. Automatic image recognition and classification are among its general areas of application. In line with these usages, the present paper proposes a deep learning algorithm that classifies SEM images into nine categories by means of an online application to simplify the process. The NFFA-EUROPE 100% SEM data set, containing approximately 21,000 images, was used to train and test the algorithm with an 80%/20% split. Validation was carried out using a separate data set obtained from the Middle East Technical University (METU) in Turkey. To increase accuracy, the Inception-ResNet-V2 model was used with a fine-tuning approach. Using a confusion matrix, it was observed that the coated-surface category has a negative effect on accuracy, since it overlaps other categories in the data set and confuses the model when detecting category-specific patterns. For this reason, the coated-surface category was removed from the training data set, increasing accuracy to up to 96.5%.
Keywords: convolutional neural networks, deep learning, image classification, scanning electron microscope
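The confusion-matrix reasoning above (removing a category whose samples overlap the others to raise overall accuracy) can be illustrated with a small sketch; the matrix values below are invented for illustration and are not taken from the paper:

```python
def overall_accuracy(cm):
    """Fraction of correctly classified samples (rows = true, cols = predicted)."""
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total

def drop_class(cm, k):
    """Remove class k's row and column, as when a confusable category is excluded."""
    return [[v for j, v in enumerate(row) if j != k]
            for i, row in enumerate(cm) if i != k]

# hypothetical 3-class confusion matrix; the last class overlaps the others
cm = [
    [90,  5,  5],
    [ 4, 92,  4],
    [30, 30, 40],  # heavily confused "coated-surface"-like class
]
acc_all = overall_accuracy(cm)                      # 0.74
acc_reduced = overall_accuracy(drop_class(cm, 2))   # ≈ 0.95
```

Dropping the confusable class raises overall accuracy, mirroring the paper's observation.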
Procedia PDF Downloads 127
3944 Quality of Service Based Routing Algorithm for Real Time Applications in MANETs Using Ant Colony and Fuzzy Logic
Authors: Farahnaz Karami
Abstract:
Routing is an important and challenging task in mobile ad hoc networks (MANETs) due to node mobility, lack of central control, unstable links, and limited resources. Ant colony optimization has been found to be an attractive technique for routing in MANETs. However, existing swarm-intelligence-based routing protocols find an optimal path by considering only one or two route selection metrics, without considering correlations among such parameters, making them unsuitable on their own for routing real-time applications. Fuzzy logic can combine multiple route selection parameters containing uncertain or imprecise information, but it does not naturally provide multipath routing for load balancing. The objective of this paper is to design a routing algorithm using fuzzy logic and ant colony optimization that can solve several routing problems in MANETs: optimizing node energy consumption to increase network lifetime, reducing the link failure rate to increase packet delivery reliability, and providing load balancing to optimize available bandwidth. In the proposed algorithm, path information is gathered by ants and given to a fuzzy inference system. Based on the available path information and the parameters required for quality of service (QoS), the fuzzy cost of each path is calculated and the optimal paths are selected. The NS2.35 simulation tool is used for simulation, and the results are compared and evaluated against the newest QoS-based algorithms in MANETs according to packet delivery ratio, end-to-end delay, and routing overhead ratio criteria. The simulation results show significant improvement in the performance of these networks in terms of decreased end-to-end delay and routing overhead ratio, and increased packet delivery ratio.
Keywords: mobile ad hoc networks, routing, quality of service, ant colony, fuzzy logic
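A minimal sketch of the fuzzy path-cost idea: each QoS metric is mapped to a fuzzy "badness" membership, and the memberships are aggregated into a single cost used to rank the candidate paths the ants report back. The metric ranges and weights below are hypothetical placeholders, not the paper's inference rules:

```python
def badness(x, lo, hi):
    """Membership of x in the fuzzy set 'bad': 0 below lo, rising linearly to 1 at hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def fuzzy_cost(path, weights=(0.4, 0.3, 0.3)):
    """Aggregate fuzzy cost of a path from delay (ms), energy drain, link failure rate."""
    wd, we, wf = weights
    return (wd * badness(path["delay"], 10, 200)
            + we * badness(path["energy_drain"], 0.1, 0.9)
            + wf * badness(path["failure_rate"], 0.0, 0.5))

# two hypothetical candidate paths reported back by ants
paths = [
    {"name": "A", "delay": 40, "energy_drain": 0.2, "failure_rate": 0.05},
    {"name": "B", "delay": 150, "energy_drain": 0.3, "failure_rate": 0.02},
]
best = min(paths, key=fuzzy_cost)   # lowest aggregate fuzzy cost wins
```

Path A wins here despite a slightly worse failure rate, because the weighted aggregation trades the metrics off against each other instead of ranking by one metric alone.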
Procedia PDF Downloads 65
3943 A Monolithic Arbitrary Lagrangian-Eulerian Finite Element Strategy for Partly Submerged Solid in Incompressible Fluid with Mortar Method for Modeling the Contact Surface
Authors: Suman Dutta, Manish Agrawal, C. S. Jog
Abstract:
Accurate computation of hydrodynamic forces on floating structures and their deformation finds application in ocean and naval engineering and wave energy harvesting. This manuscript presents a monolithic finite element strategy for fluid-structure interaction involving hyper-elastic solids partly submerged in an incompressible fluid. A velocity-based Arbitrary Lagrangian-Eulerian (ALE) formulation has been used for the fluid and a displacement-based Lagrangian approach for the solid. The flexibility of the ALE technique permits us to treat the free surface of the fluid as a Lagrangian entity. At the interface, the continuity of displacement, velocity, and traction is enforced using the mortar method, in which the constraints are imposed in a weak sense via Lagrange multipliers. In the literature, the mortar method has been shown to be robust in solving various contact mechanics problems. The time-stepping strategy used in this work reduces to the generalized trapezoidal rule in the Eulerian setting. In the Lagrangian limit, in the absence of external load, the algorithm conserves the linear and angular momentum and the total energy of the system. The use of monolithic coupling with an energy-conserving time-stepping strategy gives an unconditionally stable algorithm and allows the user to take large time steps. All the governing equations and boundary conditions have been mapped to the reference configuration. The use of the exact tangent stiffness matrix ensures that the algorithm converges quadratically within each time step. The robustness and good performance of the proposed method are demonstrated by solving benchmark problems from the literature.
Keywords: ALE, floating body, fluid-structure interaction, monolithic, mortar method
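The energy-conservation property claimed above can be checked on a toy problem: for a linear oscillator, the implicit (generalized) trapezoidal rule conserves the quadratic energy exactly. This is a one-degree-of-freedom sketch of that property, not the paper's FSI solver:

```python
def trapezoidal_oscillator(x0, v0, k, h, steps):
    """Integrate x'' = -k x with the implicit trapezoidal rule.

    Writing u = (x, v) and A = [[0, 1], [-k, 0]], each step solves
    (I - h/2 A) u_new = (I + h/2 A) u_old, here in closed form for 2x2.
    """
    x, v = x0, v0
    det = 1.0 + h * h * k / 4.0
    for _ in range(steps):
        rx = x + 0.5 * h * v          # (I + h/2 A) u_old, first row
        rv = v - 0.5 * h * k * x      # second row
        x = (rx + 0.5 * h * rv) / det # 2x2 solve via the explicit inverse
        v = (rv - 0.5 * h * k * rx) / det
    return x, v

def energy(x, v, k):
    """Quadratic invariant E = v^2/2 + k x^2/2 of the oscillator."""
    return 0.5 * v * v + 0.5 * k * x * x

e0 = energy(1.0, 0.0, 2.0)
x, v = trapezoidal_oscillator(1.0, 0.0, 2.0, 0.1, 1000)
drift = abs(energy(x, v, 2.0) - e0)   # stays at round-off level
```

Because the trapezoidal rule preserves quadratic invariants of linear flows, the drift after 1000 steps is round-off only, illustrating why large stable time steps are possible.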
Procedia PDF Downloads 276
3942 Effects of Matrix Properties on Surfactant Enhanced Oil Recovery in Fractured Reservoirs
Authors: Xiaoqian Cheng, Jon Kleppe, Ole Torsæter
Abstract:
Rock properties affect the efficiency of surfactant flooding. One objective of this study is to analyze the effects of rock properties (permeability, porosity, initial water saturation) on surfactant spontaneous imbibition at laboratory scale. The other objective is to evaluate existing upscaling methods and establish a modified upscaling method. A core is placed in a container full of surfactant solution, assuming no space between the bottom of the core and the container. The core is modelled as a cuboid matrix with a length of 3.5 cm, a width of 3.5 cm, and a height of 5 cm. The initial matrix, brine, and oil properties are set to those of the Ekofisk Field. The simulation results for matrix permeability show that the oil recovery rate has a strong positive linear relationship with matrix permeability: higher oil recovery is obtained from the matrix with higher permeability. One existing upscaling method is verified by this model. The study on matrix porosity shows that the relationship between oil recovery rate and matrix porosity is a negative power function, whereas the relationship between ultimate oil recovery and matrix porosity is a positive power function. The initial water saturation of the matrix has negative linear relationships with ultimate oil recovery and enhanced oil recovery. However, the relationship between oil recovery and initial water saturation becomes more complicated over the imbibition time because the dominating force transitions from capillary force to gravity force. Modified upscaling methods are established. The work here could be used as a reference for surfactant application in fractured reservoirs, and the description of the relationships between matrix properties and the oil recovery rate and ultimate oil recovery helps to improve upscaling methods.
Keywords: initial water saturation, permeability, porosity, surfactant EOR
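Power-function relationships like the recovery-versus-porosity trends reported above are commonly estimated by linear regression in log-log space. A sketch on synthetic data (the coefficient and exponent are invented, not the study's values):

```python
import math

def fit_power_law(xs, ys):
    """Least-squares fit of y = c * x**m in log-log space."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    m = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
    c = math.exp(my - m * mx)
    return c, m

# synthetic "ultimate recovery vs porosity" data following a positive power law
porosity = [0.10, 0.15, 0.20, 0.25, 0.30]
recovery = [0.30 * p ** 0.5 for p in porosity]
c, m = fit_power_law(porosity, recovery)   # recovers c ≈ 0.30, m ≈ 0.5
```

A fitted exponent m > 0 corresponds to the positive power function reported for ultimate recovery, and m < 0 to the negative one reported for recovery rate.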
Procedia PDF Downloads 163
3941 Integrating Natural Language Processing (NLP) and Machine Learning in Lung Cancer Diagnosis
Authors: Mehrnaz Mostafavi
Abstract:
The assessment and categorization of incidental lung nodules present a considerable challenge in healthcare, often necessitating resource-intensive multiple computed tomography (CT) scans for growth confirmation. This research addresses this issue by introducing a distinct computational approach leveraging radiomics and deep-learning methods. However, understanding local services is essential before implementing these advancements. With diverse tracking methods in place, there is a need for efficient and accurate identification approaches, especially in the context of managing lung nodules alongside pre-existing cancer scenarios. This study explores the integration of text-based algorithms in medical data curation, indicating their efficacy in conjunction with machine learning and deep-learning models for identifying lung nodules. Combining medical images with text data has demonstrated superior data retrieval compared to using each modality independently. While deep learning and text analysis show potential in detecting previously missed nodules, challenges persist, such as increased false positives. The presented research introduces a Structured Query Language (SQL) algorithm designed for identifying pulmonary nodules in a tertiary cancer center, externally validated at another hospital. Leveraging natural language processing (NLP) and machine learning, the algorithm categorizes lung nodule reports based on sentence features, aiming to facilitate research and assess clinical pathways. The hypothesis posits that the algorithm can accurately identify lung nodule CT scans and predict concerning nodule features using machine-learning classifiers. Through a retrospective observational study spanning a decade, CT scan reports were collected, and an algorithm was developed to extract and classify data. Results underscore the complexity of lung nodule cohorts in cancer centers, emphasizing the importance of careful evaluation before assuming a metastatic origin.
The SQL and NLP algorithms demonstrated high accuracy in identifying lung nodule sentences, indicating potential for local service evaluation and research dataset creation. Machine-learning models exhibited strong accuracy in predicting concerning changes in lung nodule scan reports. While limitations include variability in disease group attribution, the potential for correlation rather than causality in clinical findings, and the need for further external validation, the algorithm's accuracy and its potential to support clinical decision-making and healthcare automation represent a significant stride in lung nodule management and research.
Keywords: lung cancer diagnosis, Structured Query Language (SQL), natural language processing (NLP), machine learning, CT scans
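A sketch of sentence-level classification of report text in the spirit of the approach described: regex patterns play the role of SQL LIKE clauses, flagging nodule sentences and "concerning" features. The patterns and labels are illustrative assumptions, not the validated algorithm:

```python
import re

# hypothetical patterns standing in for SQL LIKE clauses on report sentences
NODULE_PATTERNS = [
    re.compile(r"\bpulmonary nodule", re.I),
    re.compile(r"\blung nodule", re.I),
]
CONCERN_PATTERNS = [
    re.compile(r"\b(enlarg|grow|increas)", re.I),
    re.compile(r"\bspiculat", re.I),
]

def classify_sentence(sentence):
    """Label a report sentence as 'none', 'nodule', or 'nodule-concerning'."""
    if not any(p.search(sentence) for p in NODULE_PATTERNS):
        return "none"
    if any(p.search(sentence) for p in CONCERN_PATTERNS):
        return "nodule-concerning"
    return "nodule"
```

In the study, sentence-level labels of this kind feed machine-learning classifiers; here they only illustrate the pattern-matching first stage.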
Procedia PDF Downloads 103
3940 Physical and Microbiological Evaluation of Chitosan Films: Effect of Essential Oils and Storage
Authors: N. Valderrama, W. Albarracín, N. Algecira
Abstract:
The effect of including thyme and rosemary essential oils in chitosan films was studied, as well as the microbiological and physical properties of stored chitosan films with and without this inclusion. The film-forming solution was prepared by dissolving chitosan (2%, w/v), polysorbate 80 (4% w/w CH), and glycerol (16% w/w CH) in aqueous lactic acid solutions (control). The thyme (TEO) and rosemary (REO) essential oils (EOs) were included at 1:1 w/w (EOs:CH) in a 50/50 combination (TEO:REO). The films were stored at temperatures of 5, 20, and 33°C and a relative humidity of 75% for four weeks. The films with essential oil inclusion did not show antimicrobial activity against the strains. This behavior could be explained because chitosan only inhibits the growth of microorganisms in direct contact with its active sites. However, the inhibition capacity of TEO was higher than that of REO, and a synergistic effect between TEO and REO was found for S. enteritidis strains in the chitosan solution. Some physical properties were modified by the inclusion of essential oils. The addition of essential oils does not affect the mechanical properties (tensile strength, elongation at break, puncture deformation), the water solubility, the swelling index, or the DSC behavior. However, the essential oil inclusion can significantly decrease the thickness, the moisture content, and the L* value of films, whereas the b* value increased due to molecular interactions between the polymeric matrix, the loosening of the structure, and the chemical modifications. On the other hand, the temperature and time of storage changed some physical properties of the chitosan films. This could have occurred because of chemical changes, such as swelling in the presence of high-humidity air and the reacetylation of amino groups.
In the majority of cases, properties such as moisture content, tensile strength, elongation at break, puncture deformation, a*, b*, chroma, and ΔE increased, whereas water resistance, swelling index, L*, and hue angle decreased.
Keywords: chitosan, food additives, modified films, polymers
Procedia PDF Downloads 367
3939 Hyperspectral Data Classification Algorithm Based on the Deep Belief and Self-Organizing Neural Network
Authors: Li Qingjian, Li Ke, He Chun, Huang Yong
Abstract:
In this paper, a method combining a deep belief network (DBN) with a self-organizing neural network is proposed to classify targets. The method is mainly aimed at the high nonlinearity of hyperspectral images, the high sample dimension, and the difficulty of designing a classifier. The main features of the original data are extracted by the deep belief network; during feature extraction, labeled samples are added to fine-tune the network and enrich the main characteristics. The extracted feature vectors are then classified by the self-organizing neural network. This method can effectively reduce the dimensionality of the data in the spectral dimension while preserving a large amount of the raw data's information. It addresses both the shortcomings of traditional clustering and the long training times of deep learning algorithms when labeled samples are scarce, and it improves classification accuracy and robustness. Simulation results show that the proposed network structure achieves higher classification precision with a small number of labeled samples.
Keywords: DBN, SOM, pattern classification, hyperspectral, data compression
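A pure-Python sketch of the second stage, a small self-organizing map (SOM) that clusters feature vectors by competitive learning; the DBN feature-extraction stage is omitted, and the grid size, learning rate, and neighbourhood schedule are arbitrary choices, not the paper's settings:

```python
import random

def train_som(data, grid_w, grid_h, epochs=20, lr0=0.5, seed=0):
    """Train a tiny self-organizing map by competitive learning (pure-Python sketch)."""
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(grid_w * grid_h)]

    def bmu(x):
        """Index of the best-matching unit for sample x."""
        return min(range(len(weights)),
                   key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)
        # neighbourhood radius shrinks to 0 so units eventually specialise
        radius = (max(grid_w, grid_h) * (epochs - epoch)) // (2 * epochs)
        for x in data:
            b = bmu(x)
            bx, by = b % grid_w, b // grid_w
            for i, w in enumerate(weights):
                ix, iy = i % grid_w, i // grid_w
                if abs(ix - bx) + abs(iy - by) <= radius:
                    for d in range(dim):
                        w[d] += lr * (x[d] - w[d])
    return weights, bmu

# two well-separated feature clusters should map to different map units
weights, bmu = train_som([[0.0, 0.0], [0.1, 0.1], [0.9, 0.9], [1.0, 1.0]], 2, 1)
```

In the paper's pipeline, the inputs to such a map would be DBN-extracted feature vectors rather than raw coordinates.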
Procedia PDF Downloads 341
3938 Consequences of Some Remediative Techniques Used in Sewaged Soil Bioremediation on Indigenous Microbial Activity
Authors: E. M. Hoballah, M. Saber, A. Turky, N. Awad, A. M. Zaghloul
Abstract:
Remediation of cultivated sewage soils in Egypt has become an important issue in the last decade for producing healthy crops and protecting human health. In this respect, a greenhouse experiment was conducted in which contaminated sewage soil was treated with modified forms of 2% bentonite (T1), 2% kaolinite (T2), 1% bentonite + 1% kaolinite (T3), 2% probentonite (T4), 2% prokaolinite (T5), 1% bentonite + 0.5% kaolinite + 0.5% rock phosphate (RP) (T6), 2% iron oxide (T7), and 1% iron oxide + 1% RP (T8), applied as remediative materials. Untreated soil was used as a control. All soil samples were incubated for 2 months at 25°C at field capacity throughout the whole experiment. Carbon dioxide (CO2) efflux from both treated and untreated soils, as a biomass indicator, was measured through the incubation time, and the kinetic parameters of the best-fitted models used to describe the phenomena were taken to evaluate the success of the remediation. The obtained results indicated that, according to the kinetic parameters of the models used, CO2 efflux from remediated soils was significantly decreased compared to the control treatment, with rates varying according to the type of remediation material applied. In addition, the analyzed microbial biomass parameters showed that Ni and Zn were the potentially toxic elements (PTEs) that most influenced the decrease in microbial activity in untreated soil, whereas Ni was the only influential pollutant in treated soils. Although all applied materials significantly decreased the hazards of PTEs in treated soil, modified bentonite was the best treatment compared to the other materials used. This work discusses the different mechanisms taking place between the applied materials and the PTEs found in the studied sewage soil.
Keywords: remediation, potential toxic elements, soil biomass, sewage
Procedia PDF Downloads 228
3937 Sonodynamic Activity of Porphyrins-SWCNT
Authors: F. Bosca, F. Foglietta, F. Turci, E. Calcio Gaudino, S. Mana, F. Dosio, R. Canaparo, L. Serpe, A. Barge
Abstract:
In recent years, medical science has improved chemotherapy, radiation therapy, and adjuvant therapy, and has developed newer targeted therapies as well as refining surgical techniques for removing cancer. However, the chances of surviving the disease depend greatly on the type and location of the cancer and the extent of the disease at the start of treatment. Moreover, mainstream forms of cancer treatment have side effects which range from the unpleasant to the fatal. Therefore, continued progress in anti-cancer therapy may depend on placing emphasis on other existing but less thoroughly investigated therapeutic approaches such as Sonodynamic Therapy (SDT). SDT is based on the local activation of a so-called 'sonosensitizer', a molecule able to be excited by ultrasound, the production of radicals as a consequence of its relaxation processes, and cell death due to the different mechanisms induced by radical production. The present work deals with the synthesis, characterization, and preliminary in vitro testing of single-walled carbon nanotubes (SWCNTs) decorated with porphyrins and biological vectors. The SWCNT surface was modified exploiting 1,3-dipolar cycloaddition or Diels-Alder reactions. For this purpose, different porphyrin scaffolds were synthesized ad hoc, also using non-conventional techniques. To increase the cellular specificity of porphyrin-conjugated SWCNTs and to improve their ability to be suspended in aqueous solution, the modified nanotubes were grafted with suitable glutamine or hyaluronic acid derivatives. These nano-sized sonosensitizers were characterized by several methodologies and tested in vitro on different cancer cell lines.
Keywords: sonodynamic therapy, porphyrin synthesis and modification, SWCNT grafting, hyaluronic acid, anti-cancer treatment
Procedia PDF Downloads 390
3936 Semi-Supervised Hierarchical Clustering Given a Reference Tree of Labeled Documents
Authors: Ying Zhao, Xingyan Bin
Abstract:
Semi-supervised clustering algorithms have been shown to be effective in improving the clustering process even with limited supervision. However, semi-supervised hierarchical clustering remains challenging due to the complexity of expressing constraints for agglomerative clustering algorithms. This paper proposes novel semi-supervised agglomerative clustering algorithms that build a hierarchy based on a known reference tree. We prove that by enforcing distance constraints defined by a reference tree during the process of hierarchical clustering, the resultant tree is guaranteed to be consistent with the reference tree. We also propose a framework that makes the hierarchical tree generation aware of the levels of the agglomerative tree under creation, so that metric weights can be learned and adopted at each level in a recursive fashion. The experimental evaluation shows that the additional cost of our constraint-based semi-supervised hierarchical agglomerative clustering (HAC) algorithm is negligible, and our combined semi-supervised HAC algorithm outperforms the state-of-the-art algorithms on real-world datasets. The experiments also show that our proposed methods can improve clustering performance even with a small amount of unevenly distributed labeled data.
Keywords: semi-supervised clustering, hierarchical agglomerative clustering, reference trees, distance constraints
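A sketch of the core idea, agglomerative clustering that refuses merges violating supervision-derived constraints; here the reference-tree distance constraints are simplified to cannot-link pairs on 1D points, which is far weaker than the paper's guarantee but shows the mechanism:

```python
def constrained_hac(points, cannot_link):
    """Single-linkage agglomerative clustering on 1D points that refuses any
    merge joining a cannot-link pair (a toy stand-in for tree-derived constraints)."""
    clusters = [{i} for i in range(len(points))]

    def dist(c1, c2):
        return min(abs(points[i] - points[j]) for i in c1 for j in c2)

    def allowed(c1, c2):
        return not any((a in c1 and b in c2) or (a in c2 and b in c1)
                       for a, b in cannot_link)

    while len(clusters) > 1:
        pairs = [(dist(a, b), ia, ib)
                 for ia, a in enumerate(clusters)
                 for ib, b in enumerate(clusters)
                 if ia < ib and allowed(a, b)]
        if not pairs:
            break                       # all remaining merges violate constraints
        _, ia, ib = min(pairs)
        merged = clusters[ia] | clusters[ib]
        clusters = [c for k, c in enumerate(clusters) if k not in (ia, ib)]
        clusters.append(merged)
    return clusters

clusters = constrained_hac([0.0, 0.1, 0.2, 5.0, 5.1], cannot_link=[(0, 3)])
```

Because merges that would join points 0 and 3 are rejected, the dendrogram stops at two clusters, analogous to how the paper's constraints keep the resultant tree consistent with the reference tree.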
Procedia PDF Downloads 548
3935 A Fuzzy Multiobjective Model for Bed Allocation Optimized by Artificial Bee Colony Algorithm
Authors: Jalal Abdulkareem Sultan, Abdulhakeem Luqman Hasan
Abstract:
With the development of competition in health care systems, hospitals face more and more pressure. Meanwhile, resource allocation has a vital effect on achieving competitive advantage, and selecting the appropriate number of beds is one of the most important tasks in hospital management. However, in real situations, bed allocation is a multiple-objective problem over different items, with vagueness and randomness in the data, and is therefore very complex. Hence, research on the bed allocation problem that considers multiple departments, nursing hours, and stochastic information about patient arrival and service is relatively scarce. In this paper, we develop a fuzzy multiobjective bed allocation model that handles uncertainty and multiple departments. Fuzzy objectives and weights are applied simultaneously to help managers select the suitable number of beds for different departments. The proposed model is solved using the Artificial Bee Colony (ABC) algorithm, which is very effective. The paper describes an application of the model to a public hospital in Iraq. The results show that the fuzzy multi-objective model provides a suitable framework for bed allocation and optimal resource use.
Keywords: bed allocation problem, fuzzy logic, artificial bee colony, multi-objective optimization
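The ABC solver can be sketched in a few lines for a generic continuous objective; this toy minimiser (with the employed and onlooker phases collapsed into one greedy neighbourhood search) only illustrates the food-source/scout mechanics, not the paper's fuzzy bed-allocation model:

```python
import random

def abc_minimize(f, bounds, n_bees=20, limit=10, iters=100, seed=1):
    """Toy artificial bee colony for continuous minimisation."""
    rng = random.Random(seed)
    dim = len(bounds)

    def rand_solution():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    foods = [rand_solution() for _ in range(n_bees)]
    fits = [f(x) for x in foods]
    trials = [0] * n_bees
    best = min(zip(fits, foods))

    for _ in range(iters):
        # employed/onlooker phases collapsed: perturb one dimension toward
        # (or away from) a random other food source, keep if better
        for i in range(n_bees):
            k, d = rng.randrange(n_bees), rng.randrange(dim)
            cand = foods[i][:]
            cand[d] += rng.uniform(-1, 1) * (cand[d] - foods[k][d])
            lo, hi = bounds[d]
            cand[d] = min(max(cand[d], lo), hi)
            fc = f(cand)
            if fc < fits[i]:
                foods[i], fits[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
        # scout phase: abandon food sources that stopped improving
        for i in range(n_bees):
            if trials[i] > limit:
                foods[i], trials[i] = rand_solution(), 0
                fits[i] = f(foods[i])
        best = min(best, min(zip(fits, foods)))
    return best   # (objective value, solution)

value, solution = abc_minimize(lambda x: sum(v * v for v in x), [(-5, 5), (-5, 5)])
```

For the bed-allocation problem, `f` would be replaced by the fuzzy multiobjective cost over integer bed counts per department.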
Procedia PDF Downloads 328
3934 Design and Field Programmable Gate Array Implementation of Radio Frequency Identification for Boosting up Tag Data Processing
Authors: G. Rajeshwari, V. D. M. Jabez Daniel
Abstract:
Radio Frequency Identification (RFID) systems are used for automated identification in various applications such as automobiles, health care, and security; RFID is also called automated data collection technology. RFID readers are placed in an area to scan large numbers of tags over a wide distance. The placement of the RFID elements may result in several types of collisions, and a major challenge in RFID systems is collision avoidance. In previous works, collisions were avoided by using algorithms such as ALOHA and the tree algorithm. This work proposes collision reduction and increased throughput through a reading enhancement method combined with the tree algorithm. The reading enhancement is achieved by improving the interrogation procedure and increasing the data handling capacity of the RFID reader with parallel processing. The design is simulated using Xilinx ISE 14.5 with the Verilog language. By implementing this in the RFID system, we are able to achieve high throughput and avoid collisions in the reader at the same instant of time, increasing overall system efficiency.
Keywords: antenna, anti-collision protocols, data management system, reader, reading enhancement, tag
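The tree anti-collision algorithm mentioned above can be sketched in host-side software terms as a query-prefix splitting loop (this is an illustrative model of the protocol, not the Verilog/FPGA design):

```python
def tree_anticollision(tag_ids, id_bits):
    """Binary tree splitting: on a collision, re-query both one-bit-longer prefixes."""
    reads, queries, n_queries = [], [""], 0
    while queries:
        prefix = queries.pop()
        n_queries += 1
        matching = [t for t in tag_ids if t.startswith(prefix)]
        if len(matching) == 1:            # exactly one tag answers: identified
            reads.append(matching[0])
        elif len(matching) > 1 and len(prefix) < id_bits:
            queries.append(prefix + "0")  # collision: split the ID space
            queries.append(prefix + "1")
    return reads, n_queries

tags = ["0010", "0111", "1100"]
reads, n_queries = tree_anticollision(tags, 4)
```

Every tag is eventually singled out, at the cost of extra interrogation rounds; the paper's parallel-processing enhancement aims at reducing exactly this per-query overhead in hardware.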
Procedia PDF Downloads 306
3933 Optimisation of Intermodal Transport Chain of Supermarkets on Isle of Wight, UK
Authors: Jingya Liu, Yue Wu, Jiabin Luo
Abstract:
This work investigates an intermodal transportation system for delivering goods from a Regional Distribution Centre to supermarkets on the Isle of Wight (IOW) via the port of Southampton or Portsmouth in the UK. We consider this integrated logistics chain as a 3-echelon transportation system. In such a system, there are two types of transport methods used to deliver goods across the Solent Channel: one is accompanied transport, which is used by most supermarkets on the IOW, such as Spar, Lidl, and Co-operative Food; the other is unaccompanied transport, which is used by Aldi. Five transport scenarios are studied based on different transport modes and ferry routes. The aim is to determine an optimal delivery plan for supermarkets of different business scales on the IOW, in order to minimise the total running cost, fuel consumption, and carbon emissions. The problem is modelled as a vehicle routing problem with time windows and solved by a genetic algorithm. The computational results suggest that accompanied transport is more cost-efficient for small and medium business-scale supermarket chains on the IOW, while unaccompanied transport has the potential to improve the efficiency and effectiveness of large business-scale supermarket chains.
Keywords: genetic algorithm, intermodal transport system, Isle of Wight, optimization, supermarket
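A sketch of the genetic-algorithm machinery on a simplified routing cost (pure travel distance; the paper's time windows, ferry legs, and multi-echelon structure are omitted). Order crossover keeps the offspring valid permutations:

```python
import math
import random

def ga_route(dist, pop_size=40, gens=150, seed=2):
    """Tiny permutation GA (order crossover + swap mutation) for a routing cost."""
    rng = random.Random(seed)
    n = len(dist)

    def cost(route):
        return sum(dist[route[i]][route[(i + 1) % n]] for i in range(n))

    def order_crossover(p1, p2):
        a, b = sorted(rng.sample(range(n), 2))
        child = [None] * n
        child[a:b] = p1[a:b]
        fill = [g for g in p2 if g not in p1[a:b]]
        for i in range(n):
            if child[i] is None:
                child[i] = fill.pop(0)
        return child

    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 4]          # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            c = order_crossover(*rng.sample(elite, 2))
            if rng.random() < 0.3:            # swap mutation
                i, j = rng.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = elite + children
    best = min(pop, key=cost)
    return best, cost(best)

# demo: six hypothetical delivery stops on a circle; the optimal tour is the ring
pts = [(math.cos(2 * math.pi * k / 6), math.sin(2 * math.pi * k / 6)) for k in range(6)]
dist = [[math.hypot(ax - bx, ay - by) for bx, by in pts] for ax, ay in pts]
route, best_cost = ga_route(dist)
ring_cost = sum(dist[k][(k + 1) % 6] for k in range(6))
```

A time-window variant would add a lateness penalty to `cost`; the GA machinery itself is unchanged.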
Procedia PDF Downloads 370
3932 2D Hexagonal Cellular Automata: The Complexity of Forms
Authors: Vural Erdogan
Abstract:
We created two-dimensional hexagonal cellular automata to obtain complexity using simple rules similar to those of Conway's Game of Life. Considering the Game of Life rules, Wolfram's work on life-like structures, and John von Neumann's self-replication, self-maintenance, and self-reproduction problems, we developed 2-state and 3-state hexagonal growing algorithms that reach large populations from random initial states. Unlike the Game of Life, we used six-neighbourhood cellular automata instead of eight or four neighbourhoods. First simulations examined whether we were able to obtain oscillators, blinkers, and gliders. Inspired by Wolfram's 1D cellular automata complexity and life-like structures, we simulated 2D synchronous, discrete, deterministic cellular automata to reach life-like forms with 2-state cells. We explain how the life-like formations and the oscillators contribute to initiating self-maintenance together with self-reproduction and self-replication. After comparing simulation results, we developed the algorithm a step further: appending a new state to the same algorithm that we used for reaching life-like structures led us to experiment with new branching and fractal forms. All these studies attempt to demonstrate that complex life forms might come from uncomplicated rules.
Keywords: hexagonal cellular automata, self-replication, self-reproduction, self-maintenance
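The six-neighbourhood update described above can be sketched with axial hexagonal coordinates; the birth/survive thresholds below are hypothetical placeholders, not the rules the authors evolved:

```python
from collections import Counter

def hex_neighbours(q, r):
    """The six axial-coordinate neighbours of a hexagonal cell."""
    return [(q + dq, r + dr)
            for dq, dr in ((1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1))]

def step(alive, birth=frozenset({2}), survive=frozenset({3, 4})):
    """One synchronous update of a 2-state hexagonal life-like CA.

    `alive` is a set of (q, r) cells; the birth/survive counts are
    hypothetical six-neighbour analogues of Conway's rules.
    """
    counts = Counter(n for cell in alive for n in hex_neighbours(*cell))
    return {cell for cell, c in counts.items()
            if c in (survive if cell in alive else birth)}

# a lone cell has no live neighbours and dies;
# a pair spawns its two common neighbours under birth-on-2
```

Running `step` repeatedly from random initial sets is all that is needed to search for the oscillators, blinkers, and glider-like forms the abstract discusses.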
Procedia PDF Downloads 154
3931 Framework for Detecting External Plagiarism from Monolingual Documents: Use of Shallow NLP and N-Gram Frequency Comparison
Authors: Saugata Bose, Ritambhra Korpal
Abstract:
The internet has increased copy-paste scenarios amongst students as well as researchers, leading to different levels of plagiarized documents. For this reason, much research has focused on detecting plagiarism automatically. In this paper, an initiative is discussed in which Natural Language Processing (NLP) techniques and supervised machine learning algorithms have been combined to detect plagiarized texts. The major emphasis is on constructing a framework that successfully detects external plagiarism in monolingual texts. To detect the plagiarism, an n-gram frequency comparison approach has been implemented to construct the model framework. The framework is based on 120 characteristics extracted while pre-processing the documents using an NLP approach. Afterwards, filter metrics were applied to select the most relevant characteristics, and then a supervised classification learning algorithm was used to classify the documents into four levels of plagiarism. A confusion matrix was built to estimate the false positives and false negatives. Our plagiarism framework achieved a very high accuracy score.
Keywords: lexical matching, shallow NLP, supervised machine learning algorithm, word n-gram
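The n-gram frequency comparison at the heart of such a framework can be sketched as a containment measure, the fraction of a suspicious document's word n-grams that also occur in a source document (the whitespace tokenisation and n=3 choice are illustrative, not the paper's 120-feature setup):

```python
def ngrams(text, n=3):
    """Word n-grams of a text (lower-cased, whitespace tokenised)."""
    words = text.lower().split()
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def containment(suspicious, source, n=3):
    """Fraction of the suspicious document's n-grams found in the source."""
    sus = ngrams(suspicious, n)
    if not sus:
        return 0.0
    src = set(ngrams(source, n))
    return sum(1 for g in sus if g in src) / len(sus)
```

Containment scores like this, computed at several n, are the kind of feature a supervised classifier can then map to discrete plagiarism levels.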
Procedia PDF Downloads 359
3930 The Effects of High Velocity Low Amplitude Thrust Manipulation versus Low Velocity Low Amplitude Mobilization in Treatment of Chronic Mechanical Low Back Pain
Authors: Ahmed R. Z. Baghdadi, Ibrahim M. I. Hamoda, Mona H. Gamal Eldein, Ibrahim Magdy Elnaggar
Abstract:
Background: High-velocity low-amplitude thrust (HVLAT) manipulation and low-velocity low-amplitude (LVLA) mobilization are effective treatments for low back pain (LBP). Purpose: This study compared the effects of HVLAT versus LVLA on pain, functional deficits, and segmental mobility in the treatment of chronic mechanical LBP. Methods: Ninety patients suffering from chronic mechanical LBP were assigned to three groups: thirty patients treated by HVLAT (group I), thirty patients treated by LVLA (group II), and thirty patients as a control group (group III). The mean age was 28.00±2.92, 27.83±2.28, and 28.07±3.05 years and BMI 27.98±2.60, 28.80±2.40, and 28.70±2.53 kg/m² for groups I, II, and III, respectively. The Visual Analogue Scale (VAS), the Oswestry low back pain disability questionnaire, and the modified Schober test were used for assessment. Assessments were conducted two weeks before and after treatment, with the control group being assessed at the same time intervals. The treatment program for group I was two weeks with a single session per week, and for group II two sessions per week for two weeks. Results: The one-way ANOVA revealed that group I had significantly lower pain scores and Oswestry scores compared with group II two weeks after treatment. Moreover, mobility in the modified Schober test increased significantly, and the pain scores and Oswestry scores decreased significantly, after treatment in groups I and II compared with the control group. Interpretation/Conclusion: HVLAT is preferable to LVLA mobilization, possibly due to a beneficial neurophysiological effect from stimulating mechanically sensitive neurons in the lumbar facet joint capsule.
Keywords: low back pain, manipulation, mobilization, low velocity
Procedia PDF Downloads 604
3929 Cost Sensitive Feature Selection in Decision-Theoretic Rough Set Models for Customer Churn Prediction: The Case of Telecommunication Sector Customers
Authors: Emel Kızılkaya Aydogan, Mihrimah Ozmen, Yılmaz Delice
Abstract:
In recent years, the telecommunications sector has undergone continuous change and development in the global market. In this sector, churn analysis techniques are commonly used for analysing why some customers terminate their service subscriptions prematurely. Customer churn is of utmost significance in this sector since it causes considerable business loss, and many companies conduct research in order to prevent losses while increasing customer loyalty. Although a large quantity of accumulated data is available in this sector, its usefulness is limited by data quality and relevance. In this paper, a cost-sensitive feature selection framework is developed with the aim of obtaining the feature reducts to predict customer churn. The framework is a cost-based, optional pre-processing stage that removes redundant features for churn management. This cost-based feature selection algorithm is applied to a telecommunication company in Turkey, and the results obtained with the algorithm are presented.
Keywords: churn prediction, data mining, decision-theoretic rough set, feature selection
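The cost-versus-usefulness trade-off behind cost-sensitive feature selection can be sketched with a greedy heuristic; this is only an illustration of the trade-off, not the paper's decision-theoretic rough set reducts, and the feature names, relevance scores, and costs are hypothetical:

```python
def cost_sensitive_select(relevance, cost, budget):
    """Greedily pick features by relevance-per-unit-cost until the budget is spent."""
    ranked = sorted(relevance, key=lambda f: relevance[f] / cost[f], reverse=True)
    chosen, spent = [], 0.0
    for f in ranked:
        if spent + cost[f] <= budget:
            chosen.append(f)
            spent += cost[f]
    return chosen, spent

# hypothetical churn features: predictive relevance vs. acquisition cost
relevance = {"tenure": 0.9, "dropped_calls": 0.5, "survey_score": 0.8}
cost = {"tenure": 1.0, "dropped_calls": 1.0, "survey_score": 5.0}
chosen, spent = cost_sensitive_select(relevance, cost, budget=2.0)
```

Here the survey feature is skipped despite high relevance because its cost dominates, which is the kind of reduct-level decision the rough set framework formalises.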
Procedia PDF Downloads 449
3928 Application of Random Forest Model in The Prediction of River Water Quality
Authors: Turuganti Venkateswarlu, Jagadeesh Anmala
Abstract:
Excessive runoff from various non-point-source land uses, as well as other point sources, is rapidly contaminating the water quality of streams in the Upper Green River watershed, Kentucky, USA. It is essential to maintain stream water quality, as the river basin is one of the major freshwater sources in this region. It is also important to understand the water quality parameters (WQPs) quantitatively and qualitatively, along with their important features, as stream water is sensitive to climatic events and land-use practices. In this paper, a model was developed for predicting one of the significant WQPs, fecal coliform (FC), from precipitation, temperature, urban land use factor (ULUF), agricultural land use factor (ALUF), and forest land use factor (FLUF) using the Random Forest (RF) algorithm. The RF model, an ensemble learning algorithm, can also extract feature importance characteristics from the given model inputs for different combinations. The model's outcomes showed a good correlation between FC and the climate events and land use factors (R² = 0.94), with precipitation and temperature being the primary influencing factors for FC.
Keywords: water quality, land use factors, random forest, fecal coliform
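Two Random Forest ingredients the abstract leans on, bagging and feature importance, can be sketched with bagged one-split trees on synthetic data (the real study used full random forests on watershed data; everything below is an illustrative simplification):

```python
import random

def best_stump(data, feat):
    """Best single-threshold split on one feature, by squared error; None if no split."""
    pts = sorted(data, key=lambda r: r[0][feat])
    best = None
    for i in range(1, len(pts)):
        thr = (pts[i - 1][0][feat] + pts[i][0][feat]) / 2
        left = [y for x, y in pts if x[feat] <= thr]
        right = [y for x, y in pts if x[feat] > thr]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, thr)
    return best

def forest_importance(data, n_trees=200, seed=3):
    """Bagged one-split trees; importance = fraction of trees whose best split
    uses each feature."""
    rng = random.Random(seed)
    n_feats = len(data[0][0])
    votes = [0] * n_feats
    for _ in range(n_trees):
        boot = [rng.choice(data) for _ in data]       # bootstrap sample
        scored = [(best_stump(boot, f), f) for f in range(n_feats)]
        scored = [(s, f) for s, f in scored if s is not None]
        if scored:
            votes[min(scored)[1]] += 1                # feature with the lowest error
    return [v / n_trees for v in votes]

# synthetic data: the target depends on feature 0 only; feature 1 is noise
xs = [i / 20 for i in range(20)]
noise = [(i * 7 % 13) / 13 for i in range(20)]
data = [([x, z], 2.0 if x > 0.5 else 0.0) for x, z in zip(xs, noise)]
importance = forest_importance(data)
```

The informative feature accumulates nearly all the votes, mirroring how the study's RF model ranked precipitation and temperature as the primary drivers of fecal coliform.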
Procedia PDF Downloads 1983927 Organic Contaminant Degradation Using H₂O₂ Activated Biochar with Enhanced Persistent Free Radicals
Authors: Kalyani Mer
Abstract:
Hydrogen peroxide (H₂O₂) is one of the most efficient and commonly used oxidants in in-situ chemical oxidation (ISCO) of organic contaminants. In the present study, we investigated the activation of H₂O₂ by heavy metal (nickel and lead ion) loaded biochar for phenol degradation in an aqueous solution (concentration = 100 mg/L). It was found that H₂O₂ can be effectively activated by biochar, which produces hydroxyl (•OH) radicals owing to an increase in the formation of persistent free radicals (PFRs) on the biochar surface. Ultrasound-treated (30 s duration) biochar, chemically activated by 30% phosphoric acid and functionalized with diethanolamine (DEA), was used for the adsorption of heavy metal ions from aqueous solutions. It was found that the modified biochar could remove almost 60% of nickel in eight hours; however, for lead, the removal efficiency reached up to 95% for the same time duration. The heavy metal loaded biochar was further used for the degradation of phenol in the absence and presence of H₂O₂ (20 mM), within 4 hours of reaction time. The removal efficiency values for phenol in the presence of H₂O₂ were 80.3% and 61.9%, respectively, for the modified biochar loaded with nickel and lead ions. These results suggest that the biochar loaded with nickel exhibits a better removal capacity towards phenol than the lead-loaded biochar when used in H₂O₂-based oxidation systems. Meanwhile, control experiments were run in the absence of any activating biochar, and the removal efficiency was found to be 19.1% when only H₂O₂ was added to the reaction solution. Overall, the proposed approach serves a dual purpose of using biochar for heavy metal ion removal and treatment of organic contaminants by further using the metal-loaded biochar for H₂O₂ activation in ISCO processes.Keywords: biochar, ultrasound, heavy metals, in-situ chemical oxidation, chemical activation
Procedia PDF Downloads 1373926 Training of Future Computer Science Teachers Based on Machine Learning Methods
Authors: Meruert Serik, Nassipzhan Duisegaliyeva, Danara Tleumagambetova
Abstract:
The article highlights and describes the characteristic features of real-time face detection in images and videos using machine learning algorithms. Students of the educational programs "6B01511-Computer Science", "7M01511-Computer Science", "7M01525-STEM Education", and "8D01511-Computer Science" of L.N. Gumilyov Eurasian National University reviewed the research work. As a result, the advantages and disadvantages of the Haar Cascade (Haar Cascade OpenCV), HoG SVM (Histogram of Oriented Gradients, Support Vector Machine), and MMOD CNN Dlib (Max-Margin Object Detection, convolutional neural network) detectors used for face detection were determined. Dlib is a general-purpose cross-platform software library written in the C++ programming language. It includes the detectors used for face detection. The Haar Cascade OpenCV algorithm is efficient for fast face detection. The considered work forms the basis for the development of machine learning methods by future computer science teachers.Keywords: algorithm, artificial intelligence, education, machine learning
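The oriented-gradient histogram underlying the HoG+SVM detector mentioned above can be illustrated in a few lines. This is a teaching sketch on a tiny synthetic "image" with a vertical edge, not the Dlib or OpenCV implementation; the grid values and bin layout are assumptions.

```python
import math

# Tiny 6x6 grayscale grid with a vertical edge between columns 2 and 3.
img = [[0, 0, 0, 255, 255, 255] for _ in range(6)]

def hog_cell(image):
    """Histogram of gradient orientations: 9 bins over 0..180 degrees."""
    bins = [0.0] * 9
    h, w = len(image), len(image[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]   # horizontal gradient
            gy = image[y + 1][x] - image[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            bins[int(ang // 20) % 9] += mag          # magnitude-weighted vote
    return bins

hist = hog_cell(img)
# A vertical edge produces purely horizontal gradients (angle ~ 0 degrees),
# so all the gradient energy lands in bin 0.
print(hist)
```

In the full detector, such per-cell histograms are block-normalised, concatenated into a descriptor, and classified by a linear SVM slid over the image.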
Procedia PDF Downloads 753925 An Anode Based on Modified Silicon Nanostructured for Lithium – Ion Battery Application
Authors: C. Yaddaden, M. Berouaken, L. Talbi, K. Ayouz, M. Ayat, A. Cheriet, F. Boudeffar, A. Manseri, N. Gabouze
Abstract:
Lithium-ion batteries (LIBs) are widely used in various electronic devices due to their high energy density. However, the performance of the anode material in LIBs is crucial for enhancing the battery's overall efficiency. This research focuses on developing a new anode material by modifying silicon nanostructures, specifically porous silicon nanowires (PSiNWs) and porous silicon nanoparticles (NPSiP), with silver nanoparticles (Ag) to improve the performance of LIBs. The aim of this research is to investigate the potential application of PSiNWs/Ag and NPSiP/Ag as anodes in LIBs and evaluate their performance in terms of specific capacity and Coulombic efficiency. The research methodology involves the preparation of PSiNWs and NPSiP using metal-assisted chemical etching and electrochemical etching techniques, respectively. The Ag nanoparticles are introduced onto the nanostructures through electrodissolution of the porous film and ultrasonic treatment. Galvanostatic charge/discharge measurements are conducted between 1 and 0.01 V to evaluate the specific capacity and Coulombic efficiency of both PSiNWs/Ag and NPSiP/Ag electrodes. The specific capacity of the PSiNWs/Ag electrode is approximately 1800 mA h g⁻¹, with a Coulombic efficiency of 98.8% at the first charge/discharge cycle. On the other hand, the NPSiP/Ag electrode exhibits a specific capacity of 2600 mA h g⁻¹. Both electrodes show a slight increase in capacity retention after 80 cycles, attributed to the high porosity and surface area of the nanostructures and the stabilization of the solid electrolyte interphase (SEI). This research highlights the potential of using modified silicon nanostructures as anodes for LIBs, which can pave the way for the development of more efficient lithium-ion batteries.Keywords: porous silicon nanowires, silicon nanoparticles, lithium-ion batteries, galvanostatic charge/discharge
Procedia PDF Downloads 653924 Sinusoidal Roughness Elements in a Square Cavity
Authors: Muhammad Yousaf, Shoaib Usman
Abstract:
Numerical studies were conducted using the Lattice Boltzmann Method (LBM) to study natural convection in a square cavity in the presence of roughness. An algorithm based on a single relaxation time Bhatnagar-Gross-Krook (BGK) model of the Lattice Boltzmann Method (LBM) was developed. Roughness was introduced on both the hot and cold walls in the form of sinusoidal roughness elements. The study was conducted for a Newtonian fluid of Prandtl number (Pr) 1.0. The range of Ra number was explored from 10³ to 10⁶ in the laminar region. The thermal and hydrodynamic behavior of the fluid was analyzed using a differentially heated square cavity with roughness elements present on both the hot and cold walls. Neumann boundary conditions were introduced on the horizontal walls, with the vertical walls kept isothermal. The roughness elements were at the same boundary condition as the corresponding walls. The computational algorithm was validated against previous benchmark studies performed with different numerical methods, and good agreement was found to exist. Results indicate that the maximum reduction in the average heat transfer was 16.66 percent at Ra number 10⁵.Keywords: Lattice Boltzmann method, natural convection, nusselt number, rayleigh number, roughness
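The single-relaxation-time BGK collision step at the heart of such an LBM solver can be sketched for a single D2Q9 lattice node. This is a minimal illustration of the collision operator only (no streaming, no thermal coupling, no rough walls), with an assumed relaxation time; it is not the authors' code.

```python
# D2Q9 lattice: weights and discrete velocities.
W = [4/9] + [1/9] * 4 + [1/36] * 4
CX = [0, 1, 0, -1, 0, 1, -1, -1, 1]
CY = [0, 0, 1, 0, -1, 1, 1, -1, -1]
TAU = 0.8  # single relaxation time (assumed)

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium distribution."""
    usq = ux * ux + uy * uy
    feq = []
    for i in range(9):
        cu = CX[i] * ux + CY[i] * uy
        feq.append(W[i] * rho * (1 + 3 * cu + 4.5 * cu * cu - 1.5 * usq))
    return feq

def bgk_collide(f):
    """BGK relaxation of the distributions toward local equilibrium."""
    rho = sum(f)
    ux = sum(fi * c for fi, c in zip(f, CX)) / rho
    uy = sum(fi * c for fi, c in zip(f, CY)) / rho
    feq = equilibrium(rho, ux, uy)
    return [fi - (fi - fe) / TAU for fi, fe in zip(f, feq)]

# Start from equilibrium, perturb off it, and collide once.
f = equilibrium(1.0, 0.05, 0.0)
f = [fi * (1 + 0.01 * i) for i, fi in enumerate(f)]
rho_before = sum(f)
f = bgk_collide(f)
# BGK collision conserves mass (density) at the node.
print(sum(f) - rho_before)
```

A full simulation repeats collide-and-stream over the whole cavity, with bounce-back applied along the sinusoidal roughness profile.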
Procedia PDF Downloads 5293923 Optimization of Traffic Agent Allocation for Minimizing Bus Rapid Transit Cost on Simplified Jakarta Network
Authors: Gloria Patricia Manurung
Abstract:
The Jakarta Bus Rapid Transit (BRT) system, which was established in 2009 to reduce private vehicle usage and ease the rush-hour gridlock throughout the Greater Jakarta area, has failed to achieve its purpose. With the gradually increasing number of private vehicles and the road space reduced by BRT lane construction, private vehicle users intuitively invade the exclusive BRT lane, creating local traffic along the BRT network. The cost of invaded BRT lanes becomes the same as that of the general road network, making the BRT, which is supposed to be the city's main public transportation, unreliable. Efforts to guard critical lanes and prevent the invasion by allocating traffic agents at several intersections have been expended, leading to improved congestion levels along the lane. Given a set number of traffic agents, this study uses an analytical approach to find the best deployment strategy of traffic agents on a simplified Jakarta road network, minimizing the BRT link cost, which is expected to lead to an improvement in BRT system time reliability. A user-equilibrium traffic assignment model is used to reproduce the origin-destination demand flow on the network, and the optimum solution can conventionally be obtained with a brute-force algorithm. This method's main constraint is that traffic assignment simulation time escalates exponentially with the increase in the number of agents and the network size. Our proposed metaheuristic and heuristic algorithms exhibit a linear increase in simulation time and result in a minimized BRT cost approaching the brute-force optimum. Further analysis of the overall network link cost should be performed to see the impact of traffic agent deployment on the network system.Keywords: traffic assignment, user equilibrium, greedy algorithm, optimization
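The brute-force-versus-greedy comparison described above can be sketched on a toy allocation problem. The intersections, cost reductions, and the deliberately additive cost model below are invented for illustration; a real user-equilibrium assignment would re-solve the network for every deployment.

```python
from itertools import combinations

# Hypothetical per-intersection BRT cost reductions (arbitrary units).
reduction = {"A": 9.0, "B": 4.5, "C": 7.2, "D": 1.1, "E": 6.3}
AGENTS = 3  # number of traffic agents available

def network_cost(guarded):
    base = 100.0
    return base - sum(reduction[i] for i in guarded)

# Brute force: evaluate every subset of size AGENTS (exponential in general).
best_bf = min(combinations(reduction, AGENTS), key=network_cost)

# Greedy heuristic: repeatedly guard the intersection with the largest gain.
greedy = []
remaining = set(reduction)
for _ in range(AGENTS):
    pick = max(remaining,
               key=lambda i: network_cost(greedy) - network_cost(greedy + [i]))
    greedy.append(pick)
    remaining.remove(pick)

print(sorted(best_bf), sorted(greedy), network_cost(greedy))
```

With an additive cost the greedy choice matches the brute-force optimum exactly while evaluating far fewer deployments; with interacting link costs the match is only approximate, which is the trade-off the abstract reports.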
Procedia PDF Downloads 2323922 Intrusion Detection and Prevention System (IDPS) in Cloud Computing Using Anomaly-Based and Signature-Based Detection Techniques
Authors: John Onyima, Ikechukwu Ezepue
Abstract:
Virtualization and cloud computing are among the fastest-growing computing innovations in recent times. Organisations all over the world are moving their computing services towards the cloud because it rapidly transforms the organisation's infrastructure, improves efficient resource utilization, and reduces cost. However, this technology brings new security threats and challenges regarding safety, reliability and data confidentiality. Evidently, no single security technique can guarantee security or protection against malicious attacks on a cloud computing network; hence, an integrated model of an intrusion detection and prevention system has been proposed. Anomaly-based and signature-based detection techniques will be integrated to enable the network and its hosts to defend themselves with some level of intelligence. The anomaly-based detection was implemented using the local deviation factor graph-based (LDFGB) algorithm, while the signature-based detection was implemented using the Snort algorithm. Results from these collaborative intrusion detection and prevention techniques show a robust and efficient security architecture for cloud computing networks.Keywords: anomaly-based detection, cloud computing, intrusion detection, intrusion prevention, signature-based detection
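The complementary nature of the two techniques can be sketched as follows: a signature check catches known attack patterns, while an anomaly check flags statistical outliers. The signatures, requests, and deviation rule below are toy assumptions, not Snort rules or the LDFGB algorithm.

```python
# Known malicious patterns (toy stand-ins for real signatures).
SIGNATURES = ["' OR 1=1", "../etc/passwd", "<script>"]

def signature_alert(payload):
    """Signature-based check: exact pattern match against known attacks."""
    return any(sig in payload for sig in SIGNATURES)

def anomaly_alert(length, history, k=3.0):
    """Anomaly-based check: flag sizes deviating k std-devs from history."""
    mean = sum(history) / len(history)
    var = sum((h - mean) ** 2 for h in history) / len(history)
    std = var ** 0.5 or 1.0
    return abs(length - mean) / std > k

history = [120, 130, 125, 118, 122, 128]  # assumed baseline request sizes
requests = [
    ("GET /index.html", 124),                 # benign
    ("GET /login?u=admin' OR 1=1--", 131),    # matches a signature
    ("POST /upload", 5000),                   # anomalous size, no signature
]
alerts = [(p, signature_alert(p) or anomaly_alert(n, history))
          for p, n in requests]
print(alerts)
```

Note that each technique alone misses one of the two attacks here: the SQL-injection request is statistically normal in size, and the oversized upload matches no signature — which is why the abstract integrates both.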
Procedia PDF Downloads 3083921 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array
Authors: Yanping Liao, Zenan Wu, Ruigang Zhao
Abstract:
Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is always required to have strong concentration, high resolution and low sidelobe level to form the point-to-point interference in the concentrated set. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on proposed power exponential frequency offset to improve the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shape beam with more concentrated energy, and its resolution and sidelobe level performance are improved. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated by the finite-time snapshot data. When the number of snapshots is limited, the algorithm has an underestimation problem, which leads to the estimation error of the covariance matrix to cause beam distortion, so that the output pattern cannot form a dot-shape beam. And it also has main lobe deviation and high sidelobe level problems in the case of limited snapshot. Aiming at these problems, an adaptive beamforming technique based on exponential correction for multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under linear constrained minimum variance (LCMV) criteria. Then the eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the interference subspace, the noise subspace and the corresponding eigenvalues. 
Finally, the correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, improve the divergence of small eigenvalues in the noise subspace, and improve the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm can make the multi-carrier FDA form a dot-shape beam at limited snapshots, reduce the sidelobe level, improve the robustness of beamforming, and have better performance.Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust
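The correction step described in the last paragraph — raising the small noise-subspace eigenvalues to reduce their divergence while leaving the interference subspace untouched — can be sketched numerically. The eigenvalues, subspace split, and the specific correction law λ' = λ_ref^(1−a)·λ^a are our illustrative assumptions; the abstract does not give the exact form of the correction index.

```python
# Sorted eigenvalues of a sample covariance matrix (hypothetical values):
# two large interference/signal eigenvalues, four divergent noise eigenvalues.
eigvals = [25.0, 9.0, 0.9, 0.30, 0.05, 0.01]
N_INTERF = 2   # assumed interference-subspace dimension
a = 0.3        # correction index (assumed)

noise = eigvals[N_INTERF:]
lam_ref = max(noise)
# Exponential correction pulls small noise eigenvalues toward lam_ref.
corrected = eigvals[:N_INTERF] + [lam_ref ** (1 - a) * lam ** a for lam in noise]

def spread(vals):
    """Ratio of largest to smallest eigenvalue (divergence measure)."""
    return max(vals) / min(vals)

print(spread(noise), spread(corrected[N_INTERF:]))
```

The corrected noise eigenvalues keep their ordering but are far less spread out, which is the mechanism by which the adaptive weights become robust at limited snapshots.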
Procedia PDF Downloads 1313920 University of Sciences and Technology of Oran Mohamed Boudiaf (USTO-MB)
Authors: Patricia Mikchaela D. L. Feliciano, Ciela Kadeshka A. Fuentes, Bea Trixia B. Gales, Ethel Princess A. Gepulango, Martin R. Hernandez, Elina Andrea S. Lantion, Jhoe Cynder P. Legaspi, Peter F. Quilala, Gina C. Castro
Abstract:
Propolis is a resin-like material used by bees to fill large gaps and holes in the beehive. It has been found to possess an anti-inflammatory property, which stimulates hair growth in rats by inducing hair keratinocyte proliferation, causing water retention, and preventing damage caused by heat, ultraviolet rays, and microorganisms, without abnormalities in hair follicles. The present study aimed to formulate 10% and 30% propolis hair creams for use in enhancing hair properties. The raw propolis sample was tested for heavy metals using Atomic Absorption Spectroscopy; zinc and chromium were found to be present. Likewise, propolis was extracted in a percolator using 70% ethanol and concentrated under vacuum using a rotary evaporator. The propolis extract was analyzed for total flavonoid content. Compatibility of the propolis extract with excipients was evaluated using Differential Scanning Calorimetry (DSC). No significant changes in the organoleptic properties, pH, and viscosity of the formulated creams were noted after four weeks of storage at 2-8°C, 30°C, and 40°C. The formulated creams were found to be non-irritating based on the Modified Draize Rabbit Test. In vivo efficacy was evaluated based on the thickness and tensile strength of hair grown on previously shaved rat skin. Results show that the formulated 30% propolis-based cream had greater hair-enhancing properties than the 10% propolis cream, which had a comparable effect with minoxidil.Keywords: atomic absorption spectroscopy, differential scanning calorimetry (DSC), modified draize rabbit test, propolis
Procedia PDF Downloads 3473919 Analysis of a Damage-Control Target Displacement of Reinforced Concrete Bridge Pier for Seismic Design
Authors: Mohd Ritzman Abdul Karim, Zhaohui Huang
Abstract:
A current focus in seismic engineering practice is the development of design approaches centred on performance-based design. Performance-based design aims to design structures to achieve a specified performance based on damage limit states. This damage limit is a more restrictive limit than life safety and needs to be carefully estimated to avoid damage in piers due to failure in the transverse reinforcement. In this paper, a different perspective on damage limit states has been explored by integrating two damage-control material limit states, concrete and reinforcement, through parameters such as the expected yield stress of the transverse reinforcement, where the peak tension strain prior to bar buckling was introduced in a recent study. This different perspective on damage limit states, with modified yield displacement and modified plastic-hinge length, is used to predict the damage-control target displacement for a reinforced concrete (RC) bridge pier. A three-dimensional (3D) finite element (FE) model has been developed for estimating the damage target displacement to validate the proposed damage limit states. The results from the 3D FE analysis were validated against an experimental study found in the literature. The validated model was then applied to predict the damage target displacement for the RC bridge pier and to validate the proposed study. The tensile strain in the reinforcement and the compression in the concrete were used to determine the predicted damage target displacement, which was compared with the proposed study. The results show that the proposed damage limit states were efficient in predicting the damage-control target displacement, consistent with the FE simulations.Keywords: damage-control target displacement, damage limit states, reinforced concrete bridge pier, yield displacement
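The kind of quantity the abstract predicts can be illustrated with the standard plastic-hinge idealisation Δu = Δy + (φu − φy)·Lp·H for a cantilever pier. All numerical values, the yield-curvature estimate, and the plastic-hinge-length expression below are common textbook forms used as illustrative assumptions — they are not the paper's modified definitions or results.

```python
# Assumed pier and material properties (illustrative only).
H = 6.0               # pier height (m)
D = 1.2               # section diameter (m)
fye = 1.1 * 420e6     # expected yield stress of reinforcement (Pa), assumed
Es = 200e9            # steel elastic modulus (Pa)
db = 0.025            # longitudinal bar diameter (m)

eps_y = fye / Es                     # yield strain of reinforcement
phi_y = 2.25 * eps_y / D             # yield curvature (circular section form)
phi_u = 0.045                        # damage-control curvature (1/m), assumed
Lp = 0.08 * H + 0.022 * (fye / 1e6) * db   # plastic-hinge length (m)

delta_y = phi_y * H ** 2 / 3                  # yield displacement (cantilever)
delta_u = delta_y + (phi_u - phi_y) * Lp * H  # damage-control target displacement
print(round(delta_y, 4), round(delta_u, 4))
```

Modifying the yield displacement or the plastic-hinge length, as the paper proposes, shifts delta_u directly — which is why those two quantities control the predicted damage-control target displacement.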
Procedia PDF Downloads 1573918 Design an Development of an Agorithm for Prioritizing the Test Cases Using Neural Network as Classifier
Authors: Amit Verma, Simranjeet Kaur, Sandeep Kaur
Abstract:
Test Case Prioritization (TCP) has gained widespread acceptance as it often results in good-quality software free from defects. Due to the increasing rate of faults in software, traditional prioritization techniques result in increased cost and time. The main challenge in TCP is the difficulty of manually validating the priorities of different test cases due to the large size of test suites, and little emphasis has been placed on automating the TCP process. The objective of this paper is to detect the priorities of different test cases using an artificial neural network, which helps to predict the correct priorities with the help of the back-propagation algorithm. In our proposed work, one such method is implemented in which priorities are assigned to different test cases based on their frequency. After assigning the priorities, the ANN predicts whether the correct priority is assigned to every test case or not; otherwise, it generates an interrupt when a wrong priority is assigned. In order to classify the different priority test cases, classifiers are used. The proposed algorithm is very effective as it reduces complexity with robust efficiency and automates the process of prioritizing the test cases.Keywords: test case prioritization, classification, artificial neural networks, TF-IDF
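The two ingredients named in the abstract — frequency-based priority assignment and TF-IDF features — can be sketched as follows. The test cases, descriptions, and execution history are invented for illustration, and the ANN/back-propagation verification stage is omitted.

```python
import math
from collections import Counter

# Hypothetical test-case descriptions and execution history.
test_cases = {
    "TC1": "login invalid password retry login",
    "TC2": "checkout payment gateway timeout",
    "TC3": "login session timeout logout",
}
executions = ["TC1", "TC3", "TC1", "TC2", "TC1", "TC3"]

# Frequency-based priority: the most frequently executed test case comes first.
priority = [tc for tc, _ in Counter(executions).most_common()]
print(priority)

# TF-IDF vectors over the descriptions (features for the classifier stage).
docs = {tc: text.split() for tc, text in test_cases.items()}
n = len(docs)
df = Counter(term for words in docs.values() for term in set(words))
tfidf = {
    tc: {t: words.count(t) / len(words) * math.log(n / df[t])
         for t in set(words)}
    for tc, words in docs.items()
}
print(round(tfidf["TC1"]["password"], 3))
```

Terms unique to a test case (such as "password" here) receive higher TF-IDF weight than terms shared across cases (such as "login"), which is what lets a downstream classifier separate the priority classes.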
Procedia PDF Downloads 3983917 Performance Analysis of Modified Solar Water Heating System for Climatic Condition of Allahabad, India
Authors: Kirti Tewari, Rahul Dev
Abstract:
Solar water heating is a thermodynamic process of heating water using sunlight with the help of a solar water heater. Thus, a solar water heater is a device used to harness solar energy. In this paper, a modified solar water heating system (MSWHS) has been proposed over the flat plate collector (FPC) and evacuated tube collector (ETC). The modifications include the selection of materials other than glass and glass wool, which are conventionally used for fabricating FPC and ETC. Some modifications in design have also been proposed. Its collector is made of a double layer of semi-cylindrical acrylic tubes and a fibre reinforced plastic (FRP) insulation base. The water tank is made of a double layer of acrylic sheet except for the base and north wall. FRP is used in the base and north wall of the water tank. A concept of equivalent thickness has been utilised for calculating the dimensions of the collector plate, acrylic tube and tank. A thermal model for the proposed design of the MSWHS is developed, and simulation is carried out in MATLAB for a 200 L MSWHS having a collector area of 1.6 m² and an acrylic tube length of 2 m at an inclination angle of 25°, which is taken nearly equal to the latitude of the given location. The latitude of Allahabad is 24.45° N. The results show that the maximum temperature of water in the tank and tube has been found to be 71.2°C and 73.3°C at 17:00 hr and 16:00 hr, respectively, in March for the climatic data of Allahabad. Theoretical performance analysis has been carried out by varying the number of collector tubes, the tank capacity and the climatic data for the given months of winter and summer.Keywords: acrylic, fibre reinforced plastic, solar water heating, thermal model, conventional water heaters
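The kind of lumped energy balance such a thermal model integrates — dT/dt = (η·I·A − U·A_loss·(T − Ta)) / (m·c) — can be sketched with a simple Euler loop. Every coefficient and the daylight profile below are illustrative assumptions, not the paper's MATLAB model or its climatic data.

```python
# Assumed system parameters for a 200 L tank and 1.6 m^2 collector.
M, C = 200.0, 4186.0            # water mass (kg), specific heat (J/kg K)
A = 1.6                         # collector area (m^2), from the abstract
ETA = 0.55                      # collector efficiency (assumed)
U, A_LOSS = 4.0, 2.5            # loss coefficient (W/m^2 K) and loss area (m^2)
TA = 25.0                       # ambient temperature (deg C)

def irradiance(hour):
    """Crude daylight profile peaking at solar noon, W/m^2 (assumed)."""
    return max(0.0, 900.0 * (1 - ((hour - 12.0) / 6.0) ** 2))

T, dt = 25.0, 60.0              # start at ambient; 60 s time step
temps = []
for step in range(int(12 * 3600 / dt)):   # simulate 06:00 to 18:00
    hour = 6.0 + step * dt / 3600.0
    q_in = ETA * irradiance(hour) * A            # solar gain (W)
    q_loss = U * A_LOSS * (T - TA)               # losses to ambient (W)
    T += (q_in - q_loss) * dt / (M * C)          # explicit Euler update
    temps.append((hour, T))

peak_hour, peak_T = max(temps, key=lambda ht: ht[1])
print(round(peak_hour, 1), round(peak_T, 1))
```

As in the abstract, the thermal mass of the tank delays the peak water temperature to the afternoon, well after solar noon; the actual peak values depend on the real collector model and climatic data.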
Procedia PDF Downloads 3383916 A Biologically Inspired Approach to Automatic Classification of Textile Fabric Prints Based On Both Texture and Colour Information
Authors: Babar Khan, Wang Zhijie
Abstract:
Machine Vision has been playing a significant role in Industrial Automation, imitating a wide variety of human functions and providing improved safety, reduced labour cost, the elimination of human error and/or subjective judgments, and the creation of timely statistical product data. Despite the intensive research, there have not been any attempts to classify fabric prints based on printed texture and colour; most of the research so far encompasses only black-and-white or grey-scale images. We proposed a biologically inspired processing architecture to classify fabrics with respect to fabric print texture and colour. We created a texture descriptor based on the HMAX model for machine vision and incorporated a colour descriptor based on opponent colour channels, simulating the single-opponent and double-opponent neuronal functions of the brain. We found that our algorithm not only outperformed the original HMAX algorithm on classification of fabric print texture and colour, but also achieved a recognition accuracy of 85-100% on fabrics of different colours and textures.Keywords: automatic classification, texture descriptor, colour descriptor, opponent colour channel
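The single- and double-opponent colour responses the abstract refers to can be sketched per pixel. The arithmetic below is a common textbook form of the red-green and blue-yellow opponent channels, not the authors' exact descriptor.

```python
def opponent_channels(pixel):
    """Single-opponent responses for one (r, g, b) pixel."""
    r, g, b = pixel
    rg = r - g                     # red-green single-opponent response
    by = b - (r + g) / 2.0         # blue-yellow single-opponent response
    lum = (r + g + b) / 3.0        # luminance channel
    return rg, by, lum

def double_opponent(center, surround):
    """Centre-surround difference of the red-green opponent responses."""
    return opponent_channels(center)[0] - opponent_channels(surround)[0]

red_patch, green_patch = (200, 30, 30), (30, 200, 30)
print(opponent_channels(red_patch))
print(double_opponent(red_patch, green_patch))
```

A red centre on a green surround drives the double-opponent response strongly, which is how such units signal colour boundaries in a print rather than uniform colour regions.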
Procedia PDF Downloads 487