Search results for: ant colony algorithms
1510 Recent Advances in Data Warehouse
Authors: Fahad Hanash Alzahrani
Abstract:
This paper describes some recent advances in the quickly developing area of data storage and processing based on data warehouses and data mining techniques, which involve software, hardware, data mining algorithms, and visualisation techniques having common features across the specific problems and tasks of their implementation.
Keywords: data warehouse, data mining, knowledge discovery in databases, on-line analytical processing
Procedia PDF Downloads 404
1509 Refining Scheme Using Amphibious Epistemologies
Authors: David Blaine, George Raschbaum
Abstract:
The evaluation of DHCP has synthesized SCSI disks, and current trends suggest that the exploration of e-business that would allow for further study into robots will soon emerge. Given the current status of embedded algorithms, hackers worldwide obviously desire the exploration of replication, which embodies the confusing principles of programming languages. In our research we concentrate our efforts on arguing that erasure coding can be made "fuzzy", encrypted, and game-theoretic.
Keywords: SCSI disks, robot, algorithm, hacking, programming language
Procedia PDF Downloads 426
1508 Microbial Load of Fecal Material of Broiler Birds Administered with Lagenaria Breviflora Extract
Authors: Adeleye O. O., T. M. Obuotor, A. O. Kolawole, I. O. Opowoye, M. I. Olasoju, L. T. Egbeyale, R. A. Ajadi
Abstract:
This study investigated the effect of Lagenaria breviflora on broiler poultry birds, including its effect on the microbial count of the poultry droppings. A total of 240 day-old broiler chicks were randomly assigned to six groups, with four replicates per group. The first group was the control, while the other groups were given water containing 300 g/L and 500 g/L concentrations of Lagenaria breviflora twice and thrice daily. The microbial load was determined using the plate count method. The results showed that the administration of Lagenaria breviflora in the water of broiler birds significantly improved their growth performance, with an average weight gain range of 1.845 g - 2.241 g. The mortality rate was 0%. The study also found that Lagenaria breviflora had a significant effect on the microbial count of the poultry droppings, with colony count values from 3.5 x 10^7 - 9.9 x 10^7 CFU/ml. The total coliform count (Escherichia coli and Salmonella sp.) was 1 x 10^5 CFU/ml. The reduction in microbial counts of the poultry droppings could be attributed to the antimicrobial properties of Lagenaria breviflora, which contains phytochemicals reported to possess antimicrobial activity. Therefore, the inclusion of Lagenaria breviflora in the diets of broiler poultry could be an effective strategy for improving growth performance and immune function and reducing the microbial load of poultry droppings, which can help to mitigate the risk of disease transmission to humans and other animals.
Keywords: gut microbes, bacterial count, lagenaria breviflora, coliforms
Procedia PDF Downloads 96
1507 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring
Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti
Abstract:
Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (auto-associative neural network (ANN)), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with both normal and anomalous ones. In particular, a new anomaly detector strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) trained to exploit the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution increases the performance with respect to the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, the anomaly can be detected with accuracy and an F1 score greater than 96% with the proposed method.
Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement
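As an illustration of the two-step boundary idea behind OCCNN2, the sketch below pairs a classic one-class technique (coarse step) with a feedforward network trained on the coarse labels (fine step). The data, model sizes, and threshold are placeholder assumptions, not the authors' configuration.

```python
# Two-step one-class detection in the spirit of OCCNN2: a classic OCC gives
# a coarse boundary, then a feedforward NN trained on the coarsely labeled
# points refines it. Synthetic stand-in data for 4 modal frequencies (Hz).
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
normal = rng.normal(loc=[3.9, 5.0, 9.8, 10.3], scale=0.05, size=(500, 4))
test = np.vstack([normal + rng.normal(0, 0.05, normal.shape),
                  normal - 0.4])                      # shifted freqs mimic damage

# Step 1: coarse boundary with a standard OCC technique
coarse = OneClassSVM(nu=0.05, gamma="scale").fit(normal)
coarse_labels = (coarse.predict(normal) + 1) // 2     # 1 = inside boundary, 0 = outside

# Step 2: fine boundary with a feedforward NN trained on the coarse labels
fine = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
fine.fit(normal, coarse_labels)
anomaly_score = 1.0 - fine.predict_proba(test)[:, 1]  # high score = anomaly
print("flagged anomalies:", int((anomaly_score > 0.5).sum()), "of", len(test))
```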
Procedia PDF Downloads 123
1506 Sustainable Drinking Water Treatment Method Using Solar Light
Authors: Ayushi Arora
Abstract:
Solar photocatalysis has the potential to treat drinking water in a sustainable and cost-effective manner. According to the WHO, there should not be any colony-forming units (CFU) per 100 mL in drinking water, and as per the Central Pollution Control Board (CPCB) of India, bathing water should have less than 500 CFU/100 mL, with a maximum permissible limit of 2500 CFU/100 mL. In this study, 8 water sources near our collaborators, Indian Institute of Technology, Kharagpur, India, were analysed, and it was found that 6 out of 8 sources of water had significant coliform counts. Two of them were chosen to be treated by solar photocatalysis: a) well water, which had a count of 4800 CFU/100 mL for total coliforms and was used by people for drinking purposes, and b) pond water, which had a count of 92000 CFU/100 mL for total coliforms and 3000 CFU/mL for E. coli and was used by people for washing and bathing purposes. In this study, the semiconductor-semiconductor composites BTO-TiO2-RMSG and TiO2-SiO2 were tested for their ability to be activated under solar light and to reduce total coliforms and E. coli bacteria in real-world contaminated water, and it was found that the two catalysts were able to reduce the total coliform count in water by 99.7% and 98.2% in 2 hrs, respectively. They have also shown promising results in reusability tests. This study demonstrates the ability of solar photocatalysis to be used in real-world drinking water treatment and will promote future advancements in this field.
Keywords: sustainable water treatment, water purification technologies, water policies, water pollution and environmental engineering
Procedia PDF Downloads 80
1505 Early Gastric Cancer Prediction from Diet and Epidemiological Data Using Machine Learning in Mizoram Population
Authors: Brindha Senthil Kumar, Payel Chakraborty, Senthil Kumar Nachimuthu, Arindam Maitra, Prem Nath
Abstract:
Gastric cancer is predominantly caused by demographic and diet factors as compared to other cancer types. The aim of the study is to predict Early Gastric Cancer (EGC) from diet and lifestyle factors using supervised machine learning algorithms. For this study, 160 healthy individuals and 80 cases who had been followed for 3 years (2016-2019) at Civil Hospital, Aizawl, Mizoram, were selected. A dataset containing 11 features that are core risk factors for gastric cancer was extracted. The supervised machine learning algorithms Logistic Regression, Naive Bayes, Support Vector Machine (SVM), Multilayer Perceptron, and Random Forest were used to analyze the dataset using Python Jupyter Notebook Version 3. The classification results obtained were evaluated using the metrics minimum false positives, Brier score, accuracy, precision, recall, F1 score, and the Receiver Operating Characteristic (ROC) curve. The data analysis results, with respect to accuracy (in percent) and Brier score, were: Naive Bayes - 88, 0.11; Random Forest - 83, 0.16; SVM - 77, 0.22; Logistic Regression - 75, 0.25; and Multilayer Perceptron - 72, 0.27. The Naive Bayes algorithm outperforms the others, with a very low false positive rate, a low Brier score, and good accuracy. The Naive Bayes classification results in predicting EGC were very satisfactory using only diet and lifestyle factors, which will be very helpful for physicians in educating patients and the public; thereby, mortality from gastric cancer can be reduced or avoided with this knowledge-mining work.
Keywords: Early Gastric Cancer, Machine Learning, Diet, Lifestyle Characteristics
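A minimal sketch of the comparison protocol described above, reporting accuracy and Brier score for the five classifiers; the clinical dataset is not public, so synthetic data with 11 features stands in for the risk factors.

```python
# Compare the five supervised classifiers on accuracy and Brier score.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, brier_score_loss

X, y = make_classification(n_samples=240, n_features=11, n_informative=6, random_state=1)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=1)

models = {
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(random_state=1),
    "SVM": SVC(probability=True, random_state=1),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Multilayer Perceptron": MLPClassifier(max_iter=2000, random_state=1),
}
for name, model in models.items():
    proba = model.fit(Xtr, ytr).predict_proba(Xte)[:, 1]  # P(case)
    print(f"{name:22s} accuracy={accuracy_score(yte, proba > 0.5):.2f} "
          f"brier={brier_score_loss(yte, proba):.2f}")
```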
Procedia PDF Downloads 161
1504 Sensor Registration in Multi-Static Sonar Fusion Detection
Authors: Longxiang Guo, Haoyan Hao, Xueli Sheng, Hanjun Yu, Jingwei Yin
Abstract:
In order to prevent target splitting and ensure the accuracy of fusion, system error registration is an important step in a multi-static sonar fusion detection system. To eliminate the inherent system errors, including the distance error and angle error of each sonar in detection, this paper uses an offline estimation method for error registration. Suppose several sonars from different platforms work together to detect a target: the target position detected by each sonar is based on that sonar's own reference coordinate system. Based on the two-dimensional stereo projection method, this paper uses the real-time quality control (RTQC) method and the least squares (LS) method to estimate sensor biases. The RTQC method takes the average value of each sonar's data as the observation value, while the LS method applies least-squares processing to each sonar's data to obtain the observation value. MATLAB simulations are carried out in the underwater acoustic environment, and the simulation results show that both algorithms can estimate the distance and angle errors of the sonar system. The performance of the two algorithms is also compared through the root mean square error, and the influence of measurement noise on registration accuracy is explored by simulation. The system error convergence of the RTQC method is rapid, but the distribution of targets has a serious impact on its performance. The LS method is not affected by the target distribution, but increasing random noise slows down its convergence rate. The LS method is an improvement of the RTQC method and is widely used in two-dimensional registration. The improved method can be used for registration in underwater multi-target detection.
Keywords: data fusion, multi-static sonar detection, offline estimation, sensor registration problem
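A rough numeric sketch of offline bias registration in the spirit described above; the bias magnitudes, noise levels, and the simplified range/bearing model are illustrative assumptions, not the paper's simulation setup.

```python
# A sonar observes a target in range/bearing with constant, unknown biases.
# RTQC-style estimation averages the residuals; the LS estimate solves a
# least-squares problem stacked over all samples.
import numpy as np

rng = np.random.default_rng(2)
true_range = rng.uniform(800, 1200, 200)         # m
true_bearing = rng.uniform(0, np.pi / 2, 200)    # rad
bias_r, bias_b = 25.0, np.deg2rad(1.5)           # unknown systematic errors

meas_r = true_range + bias_r + rng.normal(0, 5.0, 200)
meas_b = true_bearing + bias_b + rng.normal(0, np.deg2rad(0.2), 200)

# RTQC-style: average residuals against the reference track
rtqc = np.array([np.mean(meas_r - true_range), np.mean(meas_b - true_bearing)])

# LS: stack residuals and solve A x = b for the constant biases
A = np.vstack([np.hstack([np.ones(200), np.zeros(200)]),
               np.hstack([np.zeros(200), np.ones(200)])]).T
b = np.hstack([meas_r - true_range, meas_b - true_bearing])
ls, *_ = np.linalg.lstsq(A, b, rcond=None)

print("range bias (m):     RTQC %.2f  LS %.2f  (true 25.00)" % (rtqc[0], ls[0]))
print("bearing bias (deg): RTQC %.2f  LS %.2f  (true 1.50)"
      % (np.rad2deg(rtqc[1]), np.rad2deg(ls[1])))
```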
Procedia PDF Downloads 169
1503 Evotrader: Bitcoin Trading Using Evolutionary Algorithms on Technical Analysis and Social Sentiment Data
Authors: Martin Pellon Consunji
Abstract:
Due to the rise in popularity of Bitcoin and other crypto assets as a store of wealth and speculative investment, there is an ever-growing demand for automated trading tools, such as bots, in order to gain an advantage over the market. Traditionally, trading in the stock market was done by professionals with years of training who understood patterns and exploited market opportunities in order to gain a profit. Nowadays, however, a larger portion of market participants are at minimum aided by market-data-processing bots, which can generally generate more stable signals than the average human trader. The rise in trading bot usage can be credited to the inherent advantages that bots have over humans in terms of processing large amounts of data, the lack of emotions such as fear or greed, and predicting market prices using past data and artificial intelligence; hence, a growing number of approaches have been brought forward to tackle this task. However, the general limitation of these approaches still comes down to the fact that limited historical data does not always determine the future, and that a lot of market participants are still human, emotion-driven traders. Moreover, developing markets such as the cryptocurrency space have even less historical data to interpret than most other well-established markets. Because of this, some human traders have gone back to the tried-and-tested traditional technical analysis tools for exploiting market patterns and simplifying the broader spectrum of data involved in making market predictions. This paper proposes a method that uses neuroevolution techniques on both sentiment data and the more traditionally human-consumed technical analysis data in order to gain a more accurate forecast of future market behavior and account for the way both automated bots and human traders affect the market prices of Bitcoin and other cryptocurrencies. This study's approach uses evolutionary algorithms to automatically develop increasingly improved populations of bots which, by using the latest inflows of market analysis and sentiment data, evolve to efficiently predict future market price movements. The effectiveness of the approach is validated by testing the system in a simulated historical trading scenario and a real Bitcoin market live trading scenario, and by testing its robustness in other cryptocurrency and stock market scenarios. Experimental results during a 30-day period show that this method outperformed the buy-and-hold strategy by over 260% in terms of net profits, even when taking standard trading fees into consideration.
Keywords: neuro-evolution, Bitcoin, trading bots, artificial neural networks, technical analysis, evolutionary algorithms
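A toy neuro-evolution loop in the spirit of the described approach: each bot is a tiny network mapping technical and sentiment features to a trade signal, and a genetic algorithm evolves the weights against simulated profit. All data and GA settings here are synthetic assumptions.

```python
# Evolve weight vectors of a one-layer trading "bot" against backtest profit.
import numpy as np

rng = np.random.default_rng(3)
prices = 30000 + np.cumsum(rng.normal(0, 150, 500))            # fake BTC series
feats = np.column_stack([np.gradient(prices),                  # momentum proxy
                         rng.normal(0, 1, 500)])               # sentiment proxy

def profit(w):
    signal = np.tanh(feats @ w)                  # >0 long, <=0 flat
    returns = np.diff(prices) / prices[:-1]
    return float(np.sum((signal[:-1] > 0) * returns))

pop = rng.normal(0, 1, (40, 2))                  # population of weight vectors
for gen in range(30):
    fitness = np.array([profit(w) for w in pop])
    parents = pop[np.argsort(fitness)[-10:]]     # selection: keep top 25%
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.2, (30, 2))  # mutation
    pop = np.vstack([parents, children])
print("best simulated return: %.2f%%" % (100 * max(profit(w) for w in pop)))
```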
Procedia PDF Downloads 123
1502 Effect of Climate Variability on Honeybee's Production in Ondo State, Nigeria
Authors: Justin Orimisan Ijigbade
Abstract:
The study was conducted to assess the effect of climate variability on honeybee production in Ondo State, Nigeria. A multistage sampling technique was employed to collect the data from 60 beekeepers across six Local Government Areas in Ondo State. The data collected were subjected to descriptive statistics and multiple regression analysis. The results showed that 93.33% of the respondents were male, with 80% above 40 years of age. The majority of the respondents (96.67%) had formal education, and 90% produced honey for commercial purposes. The results revealed that 90% of the respondents admitted that low temperature, as a result of long hours/periods of rainfall, affected the foraging efficiency of the worker bees; 73.33% claimed that long periods of low humidity resulted in a low level of nectar flow, while 70% submitted that high temperature resulted in an improper composition of workers, drones, and queen in the hive colony. The result of the multiple regression showed that beekeepers' experience, educational level, access to climate information, temperature, and rainfall were the main factors affecting honeybee production in the study area. Therefore, beekeepers should be given more education on climate variability and its adaptive strategies towards ensuring better honeybee production in the study area.
Keywords: climate variability, honeybee production, humidity, rainfall and temperature
Procedia PDF Downloads 272
1501 Medicompills Architecture: A Mathematical Precise Tool to Reduce the Risk of Diagnosis Errors on Precise Medicine
Authors: Adriana Haulica
Abstract:
Powered by Machine Learning, Precise medicine is now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of Machine Learning algorithms come from heuristics, the outputs have contextual validity. This is not very restrictive in the sense that medicine itself is not an exact science. Meanwhile, the progress made in Molecular Biology, Bioinformatics, Computational Biology, and Precise Medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatment, by processing as much as possible of the available information. The purpose of this paper is to present a deeper vision for the future of Artificial Intelligence in Precise medicine. In fact, current Machine Learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from the classical methods prevents obtaining 100% evidence on the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept for information processing in Precise medicine that delivers diagnosis and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, Natural Language Processing algorithms, and strong classes of optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known "a needle in a haystack" approach usually used when Machine Learning algorithms have to process differential genomic or molecular data to find biomarkers. Also, even though the input is drawn from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. This approach deciphers the biological meaning of input data up to the metabolic and physiologic mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until Bio-Logical operations can be performed on the basis of the "common denominator" rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical "proofs". The major impact of this architecture is expressed by the high accuracy of the diagnosis. The diagnosis is delivered as a multiple-condition diagnostic, constituted by some main diseases along with unhealthy biological states, a format highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in Precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to the better design of clinical trials and speed them up.
Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics
Procedia PDF Downloads 70
1500 Advancing Urban Sustainability through Data-Driven Machine Learning Solutions
Authors: Nasim Eslamirad, Mahdi Rasoulinezhad, Francesco De Luca, Sadok Ben Yahia, Kimmo Sakari Lylykangas, Francesco Pilla
Abstract:
With ongoing urbanization, cities face increasing environmental challenges impacting human well-being. To tackle these issues, data-driven approaches in urban analysis have gained prominence, leveraging urban data to promote sustainability. Integrating Machine Learning techniques enables researchers to analyze and predict complex environmental phenomena, such as Urban Heat Island (UHI) occurrences, in urban areas. This paper demonstrates the implementation of a data-driven approach and interpretable Machine Learning algorithms, together with interpretability techniques, to conduct comprehensive data analyses for sustainable urban design. The developed framework and algorithms are demonstrated for Tallinn, Estonia, to develop sustainable urban strategies that mitigate urban heat waves. Geospatial data, preprocessed and labeled with UHI levels, are used to train various ML models, with Logistic Regression emerging as the best-performing model based on evaluation metrics. The model is used to derive a mathematical equation separating areas with and without UHI effects, providing insights into UHI occurrence based on buildings and urban features. The derived formula highlights the importance of building volume, height, area, and shape length in creating an urban environment with UHI impact. The data-driven approach and the derived equation inform mitigation strategies and sustainable urban development in Tallinn and offer valuable guidance for other locations with varying climates.
Keywords: data-driven approach, machine learning transparent models, interpretable machine learning models, urban heat island effect
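A hedged sketch of how such a logistic-regression "UHI equation" can be derived from labeled building features; the feature names follow the abstract, but the data and coefficients below are synthetic stand-ins for the Tallinn geospatial dataset.

```python
# Fit a logistic regression on building features and print the resulting
# closed-form probability equation for UHI occurrence.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
names = ["volume", "height", "area", "shape_length"]
X = rng.normal(0, 1, (1000, 4))                                  # standardized features
y = (0.9 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(0, 0.5, 1000) > 0).astype(int)  # toy UHI label

model = LogisticRegression().fit(X, y)
terms = " + ".join(f"{c:+.2f}*{n}" for c, n in zip(model.coef_[0], names))
print(f"P(UHI) = 1 / (1 + exp(-({model.intercept_[0]:+.2f} {terms})))")
```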
Procedia PDF Downloads 37
1499 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection
Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine
Abstract:
Atrial fibrillation (AF) is considered the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients place AF among the most challenging medical issues. The automatic, early, and fast detection of AF is still a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation. However, the published results do not show satisfactory classification accuracy. This work aimed at resolving this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (AF Termination Challenge Database, MIT-BIH AF, Normal Sinus Rhythm RR Interval Database, and MIT-BIH Normal Sinus Rhythm Databases) were used for assessment. All time series were segmented into 1-min RR-interval windows, and then four specific features were calculated. Two pattern recognition methods, i.e., Principal Component Analysis (PCA) and a Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find the important features for discriminating between AF and Normal Sinus Rhythm. Despite its very simple structure, the results show that the LVQ model performs better on the analyzed databases than existing algorithms, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detection method holds several interesting properties and can be implemented with just a few arithmetical operations, which makes it a suitable choice for telecare applications.
Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine
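A compact illustration of the pipeline: features from 1-min RR windows, PCA for reduction, and a minimal LVQ1 classifier (scikit-learn ships no LVQ, so the update rule is written out). The RR data is simulated, with AF windows given higher beat-to-beat variability; the four features are plausible guesses, not the paper's exact choices.

```python
# RR-window features -> PCA -> LVQ1 prototypes for AF vs. normal sinus rhythm.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
def windows(n, sd):                         # n windows of 60 RR intervals (s)
    return rng.normal(0.8, sd, (n, 60))
rr = np.vstack([windows(200, 0.02), windows(200, 0.15)])   # NSR then AF
y = np.array([0] * 200 + [1] * 200)
feats = np.column_stack([rr.mean(1), rr.std(1),
                         np.abs(np.diff(rr, axis=1)).mean(1),
                         (np.abs(np.diff(rr, axis=1)) > 0.05).mean(1)])  # pRR50-like
X = PCA(n_components=2).fit_transform(feats)

protos = np.array([X[y == 0].mean(0), X[y == 1].mean(0)])   # one prototype per class
for lr in np.linspace(0.1, 0.01, 20):                       # LVQ1 epochs
    for xi, yi in zip(X, y):
        k = np.argmin(np.linalg.norm(protos - xi, axis=1))
        protos[k] += lr * (xi - protos[k]) * (1 if k == yi else -1)  # attract/repel
pred = np.argmin(np.linalg.norm(X[:, None] - protos, axis=2), axis=1)
sens = (pred[y == 1] == 1).mean(); spec = (pred[y == 0] == 0).mean()
print(f"sensitivity={sens:.2%} specificity={spec:.2%}")
```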
Procedia PDF Downloads 267
1498 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain
Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende
Abstract:
The SPICE-based simulators are quite robust and widely used for the simulation of electronic circuits; their algorithms support linear and non-linear lumped components, and they can manipulate an expressive number of encapsulated elements. Despite the great potential of these SPICE-based simulators in the analysis of quasi-static electromagnetic field interaction, that is, at low frequency, they are limited when applied to microwave hybrid circuits in which there are both lumped and distributed elements. Usually, the spatial discretization of the FDTD (Finite-Difference Time-Domain) method is done according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum temporal discretization accepted for that spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability conditions for the leapfrogging of the Yee algorithm; however, it is known that, for the field update, the stability of the complete FDTD procedure depends on factors other than just the stability of the Yee algorithm, because the FDTD program needs other algorithms in order to be useful in engineering problems. Examples of these algorithms are absorbing boundary conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors, and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulations at ultra-wide frequencies. The models of the resistive source, the resistor, the capacitor, the inductor, and the diode are evaluated, among the mathematical models for lumped components in the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) method, through a parametric analysis of the size of the Yee cells that discretize the lumped components. In this way, an ideal cell size is sought so that the analysis in the FDTD environment agrees more closely with the expected circuit behavior while maintaining the stability conditions of this method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the computational implementation of the models is carried out in the Matlab® environment. The Mur boundary condition is used as the absorbing boundary of the FDTD method. The validation of the model is done through a comparison between the results obtained by the FDTD method, in terms of the electric field values and the currents in the components, and the analytical results using circuit parameters.
Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis
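A small helper illustrating the Courant stability criterion referred to above: given the spatial discretization chosen for a lumped component, it returns the maximum stable time step for a uniform 3-D Yee grid. The 0.5 mm cell size is an illustrative assumption.

```python
# Courant limit for the 3-D Yee leapfrog scheme: dt <= 1 / (v * sqrt(sum(1/d^2)))
import math

C0 = 299_792_458.0  # free-space wave speed (m/s)

def courant_dt(dx, dy, dz, v=C0):
    """Max stable dt for a uniform 3-D Yee grid with cell (dx, dy, dz)."""
    return 1.0 / (v * math.sqrt(1 / dx**2 + 1 / dy**2 + 1 / dz**2))

# e.g. a 0.5 mm cell used to discretize a lumped resistor
dt = courant_dt(0.5e-3, 0.5e-3, 0.5e-3)
print(f"dt_max = {dt * 1e12:.3f} ps")  # halving the cell also halves the time step
```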
Procedia PDF Downloads 153
1497 Hsa-miR-329 Functions as a Tumor Suppressor through Targeting MET in Non-Small Cell Lung Cancer
Authors: Cheng-Cao Sun, Shu-Jun Li, Cuili Yang, Yongyong Xi, Liang Wang, Feng Zhang, De-Jia Li
Abstract:
MicroRNAs (miRNAs) act as key regulators in multiple cancers. Hsa-miR-329 (miR-329) functions as a tumor suppressor in some malignancies, but its role in lung cancer remains poorly understood. In this study, we investigated the role of miR-329 in the development of lung cancer. The results indicated that miR-329 was decreased in primary lung cancer tissues compared with matched adjacent normal lung tissues, and very low levels were found in non-small cell lung cancer (NSCLC) cell lines. Ectopic expression of miR-329 in lung cancer cell lines substantially repressed cell growth, as evidenced by cell viability assay, colony formation assay, and BrdU staining, through inhibiting cyclin D1 and cyclin D2 and up-regulating p57(Kip2) and p21(WAF1/CIP1). In addition, miR-329 promoted NSCLC cell apoptosis, as indicated by up-regulation of the key apoptosis marker cleaved caspase-3 and down-regulation of the anti-apoptosis gene Bcl2. Moreover, miR-329 inhibited cellular migration and invasiveness through inhibiting matrix metalloproteinases (MMP)-7 and MMP-9. Further, the oncogene MET was revealed to be a putative target of miR-329, and its expression was inversely correlated with miR-329 expression. Furthermore, down-regulation of MET by siRNA produced effects similar to over-expression of miR-329. Collectively, our results demonstrate that miR-329 plays a pivotal role in lung cancer by inhibiting cell proliferation, migration, and invasion and promoting apoptosis through targeting oncogenic MET.
Keywords: hsa-miRNA-329 (miR-329), MET, non-small cell lung cancer (NSCLC), proliferation, apoptosis
Procedia PDF Downloads 409
1496 Optimizing Parallel Computing Systems: A Java-Based Approach to Modeling and Performance Analysis
Authors: Maher Ali Rusho, Sudipta Halder
Abstract:
The purpose of the study is to develop optimal solutions for models of parallel computing systems using the Java language. During the study, programmes were written for the examined models of parallel computing systems. The result of the parallel sorting code is a sorted array of random numbers. When processing data in parallel, the time spent on processing and the first elements of the list of squared numbers are displayed. When processing requests asynchronously, processing-completion messages are displayed for each task with a slight delay. The main results include the development of optimisation methods for algorithms and processes, such as the division of tasks into subtasks, the use of non-blocking algorithms, effective memory management, and load balancing, as well as the construction of diagrams and a comparison of these methods by their characteristics, including descriptions, implementation examples, and advantages. In addition, various specialised libraries were analysed to improve the performance and scalability of the models. The results of the work showed a substantial improvement in response time, bandwidth, and resource efficiency in parallel computing systems. Scalability and load assessments were conducted, demonstrating how the system responds to an increase in data volume or in the number of threads. Profiling tools were used to analyse performance in detail and identify bottlenecks in the models, which improved the architecture and implementation of the parallel computing systems. The results obtained emphasise the importance of choosing the right methods and tools for optimising parallel computing systems, which can substantially improve their performance and efficiency.
Keywords: algorithm optimisation, memory management, load balancing, performance profiling, asynchronous programming
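The study's programmes are written in Java; purely to make the described experiments concrete (task splitting, timed parallel data processing, asynchronous completion messages), here is an equivalent sketch in Python.

```python
# Divide work into subtasks, process in parallel with timing, then handle
# asynchronous tasks and report completion as each finishes.
import random
import time
from concurrent.futures import ProcessPoolExecutor, as_completed

def square_chunk(chunk):
    return [x * x for x in chunk]

if __name__ == "__main__":
    data = [random.random() for _ in range(1_000_000)]
    chunks = [data[i::4] for i in range(4)]           # division of task into subtasks

    t0 = time.perf_counter()
    with ProcessPoolExecutor(max_workers=4) as pool:  # parallel data processing
        parts = list(pool.map(square_chunk, chunks))
    print(f"processed in {time.perf_counter() - t0:.2f}s,"
          f" first squares: {parts[0][:3]}")

    with ProcessPoolExecutor(max_workers=4) as pool:  # asynchronous requests
        futures = {pool.submit(time.sleep, d): n
                   for n, d in enumerate([0.3, 0.1, 0.2])}
        for fut in as_completed(futures):
            print(f"task {futures[fut]} completed")   # slight, varying delays
```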
Procedia PDF Downloads 12
1495 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning
Authors: Hossein Havaeji, Tony Wong, Thien-My Dao
Abstract:
1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) is a system that uses BT to drive SCS transparency, security, durability, and process integrity, as SCS data is not always visible, available, or trusted. The costs of operating BT in an SCS are a common problem in several organizations. These costs must be estimated, as they can impact existing cost control strategies. To account for system and deployment costs, the following hurdle must be overcome: the costs of developing and running BT in an SCS are not yet clear in most cases. Many industries aiming to use BT pay special attention to the BT installation cost, which has a direct impact on the total costs of the SCS. Predicting the BT installation cost in an SCS may help managers decide whether BT would be an economic advantage. The purpose of the research is to identify the main BT installation cost components in an SCS needed for deeper cost analysis. We then identify and categorize the main groups of cost components in more detail to utilize them in the prediction process. The second objective is to determine a suitable Supervised Learning technique for predicting the costs of developing and running BT in an SCS in a particular case study. The last aim is to investigate how the running BT cost can be included in the total cost of the SCS. 2. Work Performed: Applied successfully in various fields, Supervised Learning is a method for setting up the data frame, treating the data, and training the chosen model. It is a learning model directed at making predictions of an outcome measurement based on a set of unseen input data. The following steps are conducted to pursue the objectives of our subject. The first step is a literature review to identify the different cost components of BT installation in an SCS. Based on the literature review, we choose Supervised Learning methods suitable for BT installation cost prediction in an SCS. According to the literature review, Supervised Learning algorithms that provide a powerful tool to classify BT installation components and predict BT installation cost include the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). Choosing a case study to feed data into the models comes in the third step. Finally, we select the model with the best predictive performance to find the minimum BT installation costs in an SCS. 3. Expected Results and Conclusion: This study aims to propose a cost prediction of BT installation in an SCS with the help of Supervised Learning algorithms. In a first attempt, we will select a case study in the field of BT-enabled SCS and then use Supervised Learning algorithms to predict BT installation cost in the SCS. We continue by finding the best predictive performance for developing and running BT in the SCS. Finally, the paper will be presented at the conference.
Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning
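An illustrative-only sketch of the prediction step: once cost components are identified and a case study supplies data, an SVR model can be fitted to predict total installation cost. The component names, value ranges, and cost weights below are assumptions, not results from the study.

```python
# Fit and cross-validate an SVR cost model on assumed installation components.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
# assumed components: nodes, integration effort (person-days), licences, training hours
X = rng.uniform([5, 50, 1, 10], [200, 500, 20, 100], size=(120, 4))
cost = X @ [2.0, 1.5, 8.0, 0.5] + rng.normal(0, 20, 120)   # synthetic ground truth

model = make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=5.0))
r2 = cross_val_score(model, X, cost, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.2f} +/- {r2.std():.2f}")
```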
Procedia PDF Downloads 122
1494 Effect of Lactic Acid Bacteria Inoculant on Fermentation Quality of Sweet Sorghum Silage
Authors: Azizza Mala, Babo Fadlalla, Elnour Mohamed, Siran Wang, Junfeng Li, Tao Shao
Abstract:
Sweet sorghum is considered one of the best plants for silage production and is now an important feed crop in many countries worldwide. It is simple to ensile because of its high water-soluble carbohydrate (WSC) concentration and low buffering capacity. This study investigated the effect of adding Pediococcus acidilactici AZZ5 and Lactobacillus plantarum AZZ4, isolated from elephant grass, on the fermentation quality of sweet sorghum silage. One commercial bacterium (Lactobacillus plantarum Ecosyl MTD/1, C.B.) and the two isolated strains (Pediococcus acidilactici AZZ5 and Lactobacillus plantarum subsp. plantarum AZZ4) were used as additives at 6 log colony-forming units (cfu)/g of fresh sweet sorghum grass in laboratory silos (1000 g). The silos for each treatment were opened after 15, 30, and 60 days. All of the isolated strains enhanced the silage quality of sweet sorghum compared to the control, as evidenced by significantly (P < 0.05) lower ammonia nitrogen (NH3-N) content and undesirable microbial counts, as well as greater lactic acid (LA) content and lactic acid/acetic acid (LA/AA) ratios. In addition, AZZ4 performed better than all other inoculants during ensiling, as evidenced by a significant (P < 0.05) reduction in pH and ammonia-N content and a significant increase in lactic acid content.
Keywords: fermentation, lactobacillus plantarum, lactic acid bacteria, pediococcus acidilactici, sweet sorghum
Procedia PDF Downloads 91
1493 The Effect of Probiotics Lactococcus plantarum and Prebiotic Purple Sweet Potato (Ipomoea batatas sp.) on Performance and Cholesterol Meat of Local Ducks
Authors: Husmaini, Rijal Zein, Zulkarnain, Marlito Latifa, Syahrul E. Rambee
Abstract:
The present study was conducted to evaluate the effects of probiotic-fermented purple sweet potato (PPSP) on the performance and meat cholesterol of local ducks. One hundred two-week-old male local ducks were placed in 4 treatments for ten weeks. The treatments were dosages of PPSP, i.e., 0, 1, 2, and 3 grams of PPSP/bird/week. One gram of PPSP contains 1.3 x 10^8 colony-forming units. Data were analyzed statistically using SPSS and DMRT. The results showed that PPSP administration in local ducks did not affect intestinal villi height or feed consumption (P > 0.05), but highly significantly (P < 0.01) increased duodenum thickness, body weight, and carcass yield and reduced both feed conversion and meat cholesterol content. The different PPSP dosages (1, 2, and 3 grams) had the same effect on body weight gain. However, they had different impacts on feed conversion and meat cholesterol levels: the higher the PPSP dose given, the lower the feed conversion and the meat cholesterol level. This study has shown that administration of PPSP can improve performance and reduce the cholesterol levels of local duck meat. Giving 3 grams of PPSP per bird every week provided the best results.
Keywords: cholesterol, local duck, performance, probiotics, purple sweet potato
Procedia PDF Downloads 181
1492 Comparison of Deep Learning and Machine Learning Algorithms to Diagnose and Predict Breast Cancer
Authors: F. Ghazalnaz Sharifonnasabi, Iman Makhdoom
Abstract:
Breast cancer is a serious health concern that affects many people around the world. According to a study published in the Breast journal, the global burden of breast cancer is expected to increase significantly over the next few decades. The number of deaths from breast cancer has been increasing over the years, but the age-standardized mortality rate has decreased in some countries. It is important to be aware of the risk factors for breast cancer and to get regular check-ups to catch it early if it does occur. Machine learning techniques have been used to aid in the early detection and diagnosis of breast cancer. These techniques, which have been shown to be effective in predicting and diagnosing the disease, have become a research hotspot. In this study, we consider two deep learning approaches: the Multi-Layer Perceptron (MLP) and the Convolutional Neural Network (CNN). We also consider five machine learning algorithms: Decision Tree (C4.5), Naïve Bayesian (NB), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and XGBoost (eXtreme Gradient Boosting), on the Breast Cancer Wisconsin Diagnostic dataset. We carried out the process of evaluating and comparing the classifiers, which involved selecting appropriate metrics to evaluate classifier performance and an appropriate tool to quantify this performance. The main purpose of the study is to predict and diagnose breast cancer, applying the mentioned algorithms and discovering the most effective one with respect to the confusion matrix, accuracy, and precision. It was found that the CNN outperformed all other classifiers and achieved the highest accuracy (0.982456). The work is implemented in the Anaconda environment based on the Python programming language.
Keywords: breast cancer, multi-layer perceptron, Naïve Bayesian, SVM, decision tree, convolutional neural network, XGBoost, KNN
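A hedged re-creation of part of this comparison on the Wisconsin Diagnostic dataset, which ships with scikit-learn; the deep models (MLP/CNN) and XGBoost are omitted here to keep the sketch dependency-light, and the exact preprocessing of the study is not reproduced.

```python
# Compare four of the classifiers on accuracy, precision, and confusion matrix.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, precision_score, confusion_matrix

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=7)

for name, clf in [("Decision Tree (C4.5-like)", DecisionTreeClassifier(criterion="entropy")),
                  ("Naive Bayes", GaussianNB()),
                  ("SVM", SVC()),
                  ("KNN", KNeighborsClassifier())]:
    pred = clf.fit(Xtr, ytr).predict(Xte)
    print(f"{name:26s} acc={accuracy_score(yte, pred):.3f} "
          f"prec={precision_score(yte, pred):.3f}")
print(confusion_matrix(yte, pred))  # confusion matrix of the last model fitted
```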
Procedia PDF Downloads 75
1491 Logical-Probabilistic Modeling of the Reliability of Complex Systems
Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia
Abstract:
The paper presents logical-probabilistic methods, models, and algorithms for the reliability assessment of complex systems, on the basis of which a web application for structural analysis and reliability assessment of systems was created. It is important to design systems based on structural analysis, research, and the evaluation of efficiency indicators. One of the important efficiency criteria is the reliability of the system, which depends on the components of its structure. Quantifying the reliability of large-scale systems is a computationally complex process, and it is advisable to perform it with the help of a computer. Logical-probabilistic modeling is one of the effective means of describing the structure of a complex system and quantitatively evaluating its reliability, and it formed the basis of our application. The reliability assessment process includes the following stages, which are reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modeling of the shortest ways of successful functioning of the system; 3) description of the system operability condition with a logical function in disjunctive normal form (DNF); 4) transformation of the DNF into orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) replacement of logical elements with probabilistic elements in the ODNF, obtaining a reliability estimation polynomial and quantifying reliability; 6) calculation of the “weights” of the elements of the system. Using the logical-probabilistic methods, models, and algorithms discussed in the paper, special software was created, by means of which a quantitative assessment of the reliability of systems with a complex structure is produced. As a result, structural analysis of systems, research, and the design of systems with optimal structure are carried out.
Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability of systems, “weights” of elements
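A tiny worked example of stages 2-5: the operability condition is written as a DNF over the shortest success paths, and the reliability is computed exactly by summing probabilities over all element states, which is equivalent to evaluating the orthogonalized polynomial. The structure and element reliabilities below are assumed, not taken from the paper.

```python
# Exact reliability of a small system from its shortest success paths (DNF).
from itertools import product

paths = [{1, 2}, {1, 3}]          # shortest ways of successful functioning
p = {1: 0.95, 2: 0.90, 3: 0.85}   # element reliabilities

def system_works(up):             # DNF: the system works if any path is fully up
    return any(path <= up for path in paths)

reliability = 0.0
for state in product([0, 1], repeat=3):                 # all element states
    up = {i + 1 for i, s in enumerate(state) if s}
    prob = 1.0
    for i, s in enumerate(state):
        prob *= p[i + 1] if s else 1 - p[i + 1]
    if system_works(up):
        reliability += prob

print(f"R = {reliability:.5f}")   # matches p1*(p2 + p3 - p2*p3) = 0.93575
```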
Procedia PDF Downloads 66
1490 Effect of Probiotic and Prebiotic on Performance, Some Blood Parameters, and Intestine Morphology of Laying Hens
Authors: A. Zarei, M. Porkhalili, B. Gholamhosseini
Abstract:
In this experiment, sixty Hy-Line (W-36) laying hens were selected at 40 weeks of age. They consumed the experimental diets for a duration of 12 weeks. The experimental design was a completely randomized block comprising four treatments, each with five replications and three samples per replicate. The treatments were as follows: basal diet (control), basal diet + probiotic, basal diet + prebiotic, and basal diet + probiotic + prebiotic. Performance traits measured included hen production, egg weight, feed intake, feed conversion ratio, shell thickness, shell strength, shell weight, Haugh unit, yolk color, and yolk cholesterol. Blood parameters such as Ca, cholesterol, triglyceride, VLDL, and antibody titer, as well as intestinal morphology, were determined. At the end of the experimental period, after sampling from the end of the cecum, the bacterial colony count was measured. The results showed that shell weight was significantly greater in the probiotic treatment than in the other treatments. Yolk weight was significantly greater in the prebiotic treatment than in the other treatments. The ratios of villus height to crypt depth in the duodenum, jejunum, ileum, and cecum were significantly greater in the prebiotic treatment. The results for the other traits did not differ significantly between treatments; however, generally good results were obtained for the other traits with the simultaneous use of probiotic and prebiotic.
Keywords: probiotic, prebiotic, laying hens, performance, blood parameters, intestine morphology
Procedia PDF Downloads 322
1489 Ameliorating Effects of Silver Nanoparticles Synthesized Using Chlorophytum borivillianum against Gamma Radiation Induced Oxidative Stress in Testis of Swiss Albino Mice
Authors: Ruchi Vyas, Sanjay Singh, Rashmi Sisodia
Abstract:
Chlorophytum borivillianum root extract (CBE) was chosen as a reducing agent to fabricate silver nanoparticles with the aim of studying their radioprotective efficacy. The synthesized nanoparticles were characterized by UV-visible (UV-vis) analysis, Fourier-transform infrared (FT-IR) spectroscopy, transmission electron microscopy (TEM), and scanning electron microscopy (SEM). TEM analysis showed particle sizes in the range of 20-30 nm. For this study, Swiss albino mice were selected from an inbred colony and divided into 4 groups: group I - control (irradiated, 6 Gy), group II - normal (vehicle-treated), group III - plant extract alone, and group IV - CB-AgNPs (dose of 50 mg/kg body wt./day) administered orally for 7 consecutive days before irradiation, serving as the experimental group. CB-AgNP pretreatment produced a significant increase in body weight and testes weight at various post-irradiation intervals in comparison to the irradiated group. Supplementation with CB-AgNPs reversed the adverse effects of gamma radiation on biochemical parameters, as it notably ameliorated the elevation in lipid peroxidation and the decline in glutathione concentration in the testes. These observations indicate the radioprotective potential of CB-AgNPs for testicular constituents against gamma irradiation in mice.
Keywords: Chlorophytum borivillianum, gamma radiation, radioprotective, silver nanoparticles
Procedia PDF Downloads 148
1488 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores
Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan
Abstract:
Image-based facial features can be classified into category-recognition features and individual-recognition features. Current automated face recognition systems extract a specific feature vector, whose dimension varies by system, from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, an algorithm generally reduces the image details by pooling. This operation overlooks the details that forensic experts pay the most attention to. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of naturally collected face images with the known data of the same persons' frontal ID photos. Downscaling and manual handling were performed on the testing images. The results supported that facial recognition algorithms based on deep learning detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine scores and impostor scores, and likelihood ratios were tested to evaluate the accuracy of the biometric systems and the forensic experts. Experiments showed that the biometric systems were skilled at distinguishing category features, while forensic experts were better at discovering the individual features of human faces. In the proposed approach, fusion was performed at the score level. At the specified false-accept rate, the framework achieved a lower false-reject rate. This paper contributes to improving the interpretability of the objective method of facial comparison and provides a novel method for human-machine collaboration in this field.
Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics
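A schematic of score-level fusion between an automated system and examiner judgements: each source's score is converted to a log-likelihood ratio (LLR) and the LLRs are summed, which can lower the false-reject rate at a fixed false-accept rate. The Gaussian score distributions below are assumptions, not the paper's measured data.

```python
# Naive-Bayes fusion of two comparison scores via summed log-likelihood ratios.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
n = 5000
# (system score, examiner score) for genuine and impostor comparisons
genuine = np.column_stack([rng.normal(0.8, 0.1, n), rng.normal(0.7, 0.15, n)])
impostor = np.column_stack([rng.normal(0.4, 0.1, n), rng.normal(0.35, 0.15, n)])

def llr(scores, mu_g, sd_g, mu_i, sd_i):
    return norm.logpdf(scores, mu_g, sd_g) - norm.logpdf(scores, mu_i, sd_i)

def fused(scores):  # sum per-source LLRs (assumes conditional independence)
    return (llr(scores[:, 0], 0.8, 0.10, 0.40, 0.10)
            + llr(scores[:, 1], 0.7, 0.15, 0.35, 0.15))

thr = np.quantile(fused(impostor), 0.999)       # fix FAR at 0.1%
frr = np.mean(fused(genuine) < thr)
print(f"false-reject rate at FAR=0.1%: {frr:.2%}")
```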
Procedia PDF Downloads 130
1487 Control of a Quadcopter Using Genetic Algorithm Methods
Authors: Mostafa Mjahed
Abstract:
This paper concerns the control of a nonlinear system using two different methods: a reference model and a genetic algorithm. The quadcopter is a nonlinear, unstable system belonging to the family of aerial robots. It consists of four rotors placed at the ends of a cross, with the control circuit occupying the center of the cross. Its motion is governed by six degrees of freedom: three rotations around the 3 axes (roll, pitch, and yaw) and three spatial translations. The control of such a system is complex because of the nonlinearity of its dynamic representation and the number of parameters involved. Numerous studies have been developed to model and stabilize such systems. The classical PID and LQ correction methods are widely used. While these methods have the advantage of simplicity because they are linear, they have the drawback of requiring a linear model for synthesis. This also makes the resulting control laws complex, because they must be extended over the whole flight envelope of the quadcopter. Note that, while classical design methods are widely used to control aeronautical systems, artificial intelligence methods such as the genetic algorithm technique receive little attention. In this paper, we compare two PID design methods. First, the parameters of the PID are calculated according to a reference model. In a second phase, these parameters are established using genetic algorithms. By reference model, we mean that the corrected system behaves according to a reference system imposed by some specifications: settling time, zero overshoot, etc. Inspired by Darwin's theory of natural evolution advocating the survival of the fittest, John Holland developed this evolutionary algorithm. The genetic algorithm (GA) possesses three basic operators: selection, crossover, and mutation. We start the iterations with an initial population, and each member of this population is evaluated through a fitness function. Our purpose is to correct the behavior of the quadcopter around the three axes (roll, pitch, and yaw) with 3 PD controllers; for the altitude, we adopt a PID controller.
Keywords: quadcopter, genetic algorithm, PID, fitness, model, control, nonlinear system
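A hedged toy version of the GA tuning stage: PID gains for a single normalized attitude axis (modelled here as a double integrator) are evolved against an integral-of-time-weighted-absolute-error fitness on the step response. The plant model and GA settings are illustrative assumptions, not the paper's quadcopter model.

```python
# GA with selection, crossover, and mutation searching (kp, ki, kd).
import numpy as np

rng = np.random.default_rng(9)
dt, T = 0.01, 3.0

def step_cost(gains):
    kp, ki, kd = gains
    theta = omega = integ = 0.0; prev_err = 1.0; cost = 0.0
    for k in range(int(T / dt)):
        err = 1.0 - theta                     # unit step reference
        integ += err * dt
        u = kp * err + ki * integ + kd * (err - prev_err) / dt
        prev_err = err
        omega += u * dt                       # double-integrator axis dynamics
        theta += omega * dt
        cost += abs(err) * (k * dt) * dt      # ITAE criterion
    return cost

pop = rng.uniform(0, 20, (30, 3))             # initial population of gains
for gen in range(40):
    fit = np.array([step_cost(g) for g in pop])
    elite = pop[np.argsort(fit)[:8]]                         # selection
    kids = elite[rng.integers(0, 8, (22, 3)), np.arange(3)]  # uniform crossover
    pop = np.vstack([elite, kids + rng.normal(0, 0.5, kids.shape)])  # mutation
best = pop[np.argmin([step_cost(g) for g in pop])]
print("evolved gains kp=%.2f ki=%.2f kd=%.2f" % tuple(best))
```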
Procedia PDF Downloads 431
1486 Investigation of Rifampicin and Isoniazid Resistance Mutated Genes in Mycobacterium Tuberculosis Isolated From Patients
Authors: Seyyed Mohammad Amin Mousavi Sagharchi, Alireza Mahmoudi Nasab, Tim Bakker
Abstract:
Introduction: Mycobacterium tuberculosis (MTB) is, to the best of our knowledge, the most intelligent bacterium in the world. This bacterium causes tuberculosis (TB), which is responsible for its rapid spread and the deaths of millions of people around the world. MTB is adept at escaping anti-tuberculosis (AT) drugs; for this purpose, it acquires mutations in key genes and creates new patterns in the inhibited genes. Method and materials: We did our best to safely isolate MTB from the sputum specimens of 35 patients in hospitals in Tehran province and to detect MTB by culture on Löwenstein-Jensen (LJ) medium and microscopic examination. DNA was extracted from the established bacterial colonies by an enzymatic extraction method. It was amplified by the polymerase chain reaction (PCR), followed by reverse hybridization, and evaluated for the detection of resistance genes; in general, we applied the GenoType MTBDRplus assay. Results: Our investigations show that 21 of the isolated specimens (about 60%) have mutations in the rpoB gene, conferring resistance to rifampicin (the most prevalent), and 8 of them (about 22.8%) have mutations in the katG or inhA genes, conferring resistance to isoniazid. Also, 4 of them (about 11.4%) do not have any mutation, and 2 of them (about 5.7%) have mutations in all three genes, which makes them resistant to the two drugs mentioned above. Conclusion: Rifampicin and isoniazid are two essential AT drugs used in the first line of treatment. Resistance mutations in the rpoB, katG, and inhA genes related to the mentioned drugs lead to ineffective treatment.
Keywords: mycobacterium tuberculosis, tuberculosis, drug resistance, isoniazid, rifampicin
Procedia PDF Downloads 96
1485 Introduce a New Model of Anomaly Detection in Computer Networks Using Artificial Immune Systems
Authors: Mehrshad Khosraviani, Faramarz Abbaspour Leyl Abadi
Abstract:
Computer networks are a fundamental component of the modern information society, and these networks are generally connected to the Internet. Because the Internet was not originally designed with security as a primary goal, these networks have in recent decades been the target of many serious attacks. Today, different security tools and systems, including intrusion detection systems, are used to provide security in networks. An anomaly detection system based on artificial immunity is designed and evaluated. The idea of using artificial immune methods for detecting anomalies in computer networks is motivated by their specific properties, such as the ability to detect any abnormality and a variety of attacks, memory, learning ability, and self-regulation, which conventional security systems lack. The anomaly detection system proposed in this paper requires only normal samples from the network for training and needs no additional data about the types of attacks. In the proposed system, positive selection and negative selection processes are applied to the selected samples to create a distinction between the normal colony and attacks. Evaluation on real data collections indicates that the proposed system often has a low false alarm rate compared to other methods, while the detection rate varies.
Keywords: artificial immune system, abnormality detection, intrusion detection, computer networks
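A minimal negative-selection sketch of the idea described above: random detectors that match normal ("self") samples are discarded, so training needs only normal data, and the surviving detectors flag anomalies. The feature vectors, radius, and sizes are synthetic assumptions.

```python
# Negative selection: keep only detectors that do NOT match normal traffic.
import numpy as np

rng = np.random.default_rng(10)
self_set = rng.normal(0.5, 0.05, (300, 3))        # normal traffic features in [0, 1]
candidates = rng.uniform(0, 1, (3000, 3))         # random candidate detectors
R = 0.15                                          # detector matching radius

def matches(points, centers, r):
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return (d < r).any(axis=1)

detectors = candidates[~matches(candidates, self_set, R)]  # negative selection

normal_test = rng.normal(0.5, 0.05, (100, 3))
attack_test = rng.uniform(0, 1, (100, 3))
print("false alarm rate:", matches(normal_test, detectors, R).mean())
print("detection rate:  ", matches(attack_test, detectors, R).mean())
```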
Procedia PDF Downloads 353
1484 Soybean Seed Composition Prediction From Standing Crops Using Planet Scope Satellite Imagery and Machine Learning
Authors: Supria Sarkar, Vasit Sagan, Sourav Bhadra, Meghnath Pokharel, Felix B.Fritschi
Abstract:
Soybeans and their derivatives are very important agricultural commodities around the world because of their wide applicability in human food, animal feed, biofuel, and industry. However, the significance of soybean production depends on the quality of the soybean seeds rather than the yield alone. Seed composition depends widely on plant physiological properties, aerobic and anaerobic environmental conditions, nutrient content, and plant phenological characteristics, which can be captured by high-temporal-resolution remote sensing datasets. PlanetScope (PS) satellite images have high potential for capturing sequential information on crop growth due to their frequent revisits throughout the world. In this study, we estimate soybean seed composition while the plants are in the field by utilizing PlanetScope (PS) satellite images and different machine learning algorithms. Several experimental fields were established with varying genotypes, and seed compositions were measured from the samples as ground-truth data. The PS images were processed to extract 462 hand-crafted vegetative and textural features. Four machine learning algorithms, i.e., partial least squares regression (PLSR), random forest regression (RFR), gradient boosting machine (GBM), and support vector machine (SVM), and two recurrent neural network architectures, i.e., long short-term memory (LSTM) and gated recurrent unit (GRU), were used in this study to predict the oil, protein, sucrose, ash, starch, and fiber of soybean seed samples. The GRU and LSTM architectures had two separate branches, one for vegetative features and the other for textural features, which were later concatenated to predict seed composition. The results show that sucrose, ash, protein, and oil yielded comparable prediction results. The machine learning algorithms that best predicted the six seed composition traits differed. GRU worked well for oil (R-squared: 0.53) and protein (R-squared: 0.36), whereas SVM and PLSR showed the best results for sucrose (R-squared: 0.74) and ash (R-squared: 0.60), respectively. Although RFR and GBM provided comparable performance, these models tended to overfit severely. Among the features, vegetative features were found to be the most important variables compared to textural features. It is suggested to utilize many vegetation indices for machine learning training and select the best ones by using feature selection methods. Overall, the study reveals the feasibility and efficiency of PS images and machine learning for plot-level seed composition estimation. However, special care should be given when designing the plot size in experiments to avoid mixed-pixel issues.
Keywords: agriculture, computer vision, data science, geospatial technology
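A schematic of the two-branch recurrent architecture described above, with one GRU branch per feature family concatenated before the regression head; the time steps, feature counts, and head sizes are placeholders, not the study's actual dimensions.

```python
# Two-branch GRU: vegetative and textural time series merged for regression.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

T = 12                                  # e.g. PlanetScope acquisition dates
veg_in = keras.Input(shape=(T, 40), name="vegetative")
tex_in = keras.Input(shape=(T, 20), name="textural")
v = layers.GRU(32)(veg_in)
t = layers.GRU(32)(tex_in)
merged = layers.concatenate([v, t])     # branches joined before the head
out = layers.Dense(1, name="seed_trait")(layers.Dense(32, activation="relu")(merged))
model = keras.Model([veg_in, tex_in], out)
model.compile(optimizer="adam", loss="mse")

# toy fit on random plot-level sequences, predicting e.g. oil content
X_v, X_t = np.random.rand(64, T, 40), np.random.rand(64, T, 20)
y = np.random.rand(64, 1)
model.fit([X_v, X_t], y, epochs=2, verbose=0)
print(model.predict([X_v[:2], X_t[:2]], verbose=0).ravel())
```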
Procedia PDF Downloads 137
1483 Optimizing Machine Learning Algorithms for Defect Characterization and Elimination in Liquids Manufacturing
Authors: Tolulope Aremu
Abstract:
Key process steps in producing liquid detergent products, such as formulation, mixing, filling, and packaging, introduce potential defects that might compromise product quality, consumer safety, and operational efficiency. Real-time identification and characterization of such defects are of prime importance for maintaining high standards and reducing waste and costs. Usually, defect detection is performed by human inspection or rule-based systems, which are time-consuming, inconsistent, and error-prone. The present study overcomes these limitations by optimizing defect characterization within the liquid detergent manufacturing process using machine learning algorithms. The performance of various machine learning models was tested for the detection and classification of defects such as wrong viscosity, color deviations, improper bottle filling, and packaging anomalies: Support Vector Machine, Decision Trees, Random Forest, and Convolutional Neural Network. These algorithms benefited significantly from a variety of optimization techniques, including hyperparameter tuning and ensemble learning, which greatly improved detection accuracy while minimizing false positives. Our study draws on a rich dataset of defect types and production parameters consisting of more than 100,000 samples and further includes information from real-time sensor data, imaging technologies, and historical production records. The results show that optimized machine learning models significantly improve defect detection compared to traditional methods. For instance, the CNNs achieved 98% and 96% accuracy in packaging anomaly detection and bottle-filling inconsistency detection, respectively, after fine-tuning the model with real-time imaging data, which reduced false positives by about 30%. The optimized SVM model achieved 94% in detecting formulation defects such as viscosity variation and color variation. These performance metrics correspond to a giant leap in defect detection accuracy compared to the roughly 80% level achieved until now by rule-based systems. Moreover, the optimized models speed up defect characterization, reducing detection time to below 15 seconds from an average of 3 minutes with manual inspection, thanks to real-time data processing. This time saving is combined with a 25% reduction in production downtime because of proactive defect identification, which can save millions annually in recall and rework costs. Integrating real-time machine-learning-driven monitoring drives predictive maintenance and corrective measures, yielding a 20% improvement in overall production efficiency. Therefore, optimizing machine learning algorithms for defect characterization gives liquid detergent companies optimal scalability and efficiency and raises operational performance to higher levels of product quality. In general, this method could be applied in several industries within the fast-moving consumer goods sector, leading to an improved quality control process.
Keywords: liquid detergent manufacturing, defect detection, machine learning, support vector machines, convolutional neural networks, defect characterization, predictive maintenance, quality control, fast-moving consumer goods
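A sketch of the hyperparameter-tuning step mentioned above, using cross-validated grid search over an SVM defect classifier; the synthetic feature matrix, class imbalance, and parameter grid stand in for the study's sensor readings and production parameters.

```python
# Tune an SVM defect classifier with grid search, scoring by F1 because
# defective samples are the rare class.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                           weights=[0.9, 0.1], random_state=11)   # defects are rare

pipe = make_pipeline(StandardScaler(), SVC(class_weight="balanced"))
grid = GridSearchCV(pipe,
                    {"svc__C": [0.1, 1, 10, 100],
                     "svc__gamma": ["scale", 0.01, 0.1]},
                    scoring="f1", cv=5)
grid.fit(X, y)
print("best params:", grid.best_params_, "f1=%.3f" % grid.best_score_)
```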
Procedia PDF Downloads 18
1482 An Analysis of Miguel Syjuco’s Ilustrado: The Reconstructed Oriental Image
Authors: Christine Ivy A. Nogot
Abstract:
A colony of Spain for more than three centuries, the Philippines has a deep-rooted structure of Western ideologies and colonialism. The late 19th century, the period of Enlightenment, created a significant impact on our history when a group of middle-class Filipino men were sent to Europe to study. They were called ilustrados, a Spanish word for the erudite; they were the enlightened: well-educated, intellectual scholars. Their writings provided intellectual grounds for the awakening of national consciousness that eventually prompted national movements and revolutions, and they helped to establish a postcolonial society. In the modern era, Miguel Syjuco, a Filipino expatriate, wrote a novel and titled it Ilustrado. It is a representation of the liberal mind of the diasporic author in contemporary discourse and provides a critical examination of the ilustrado in transition through the character of Miguel, who is also an expatriate writer. Using Syjuco's award-winning novel as the primary text, and anchored in Said's concept of Orientalism, this paper examines how depictions of the Eastern world are presented in literary discourse. It looks into Said's concept of Orientalism as a hegemonic discursive structure and shows how notions of Western superiority influence Eastern culture in literary discourse. It draws on Gramsci's theory of cultural hegemony to explore Said's argument that Western powers conquer the Orient through culture and ideology. This paper presents how dominant ideologies and the social context redefine the ilustrado in the contemporary era.
Keywords: cultural hegemony, ilustrado, orientalism, postcolonial
Procedia PDF Downloads 76
1481 Application of Argumentation for Improving the Classification Accuracy in Inductive Concept Formation
Authors: Vadim Vagin, Marina Fomina, Oleg Morosin
Abstract:
This paper describes an argumentation approach to the problem of inductive concept formation. It is proposed to use argumentation, based on defeasible reasoning with justification degrees, to improve the quality of classification models obtained by generalization algorithms. Experimental results on both clean and noisy data are also presented.
Keywords: argumentation, justification degrees, inductive concept formation, noise, generalization
Procedia PDF Downloads 442