Search results for: optimization algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5871

1071 Energy Management Method in DC Microgrid Based on the Equivalent Hydrogen Consumption Minimum Strategy

Authors: Ying Han, Weirong Chen, Qi Li

Abstract:

An energy management method based on the equivalent hydrogen consumption minimum strategy is proposed in this paper for a direct-current (DC) microgrid consisting of photovoltaic cells, fuel cells, energy storage devices, converters and DC loads. The rational allocation of fuel cells and battery devices is achieved by adopting the equivalent minimum hydrogen consumption strategy while making full use of the power generated by the photovoltaic cells. Considering the balance of the battery’s state of charge (SOC), the optimal power of the battery under different SOC conditions is obtained and the reference output power of the fuel cell is calculated. A droop control method based on a time-varying droop coefficient is then proposed to realize automatic charge and discharge control of the battery, balance the system power and maintain the bus voltage. The proposed control strategy is verified on an RT-LAB hardware-in-the-loop simulation platform. The simulation results show that the designed control algorithm can realize the rational allocation of DC microgrid energy and improve the stability of the system.
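As a rough illustration of the time-varying droop idea, the following sketch scales a droop slope with the battery state of charge; the 400 V bus, the base coefficient and the quadratic SOC weighting are assumptions made for the example, not values from the paper.

```python
# Hypothetical time-varying droop law for a DC-bus battery converter.
# Illustrative only: bus voltage, coefficients and limits are assumed.

V_NOM = 400.0   # nominal DC bus voltage (V), assumed
K_BASE = 2.0    # base droop coefficient (V per kW), assumed

def droop_coefficient(soc, soc_min=0.2, soc_max=0.9):
    """Steepen the droop slope near the SOC limits so the battery
    discharges less aggressively when nearly empty and charges less
    aggressively when nearly full."""
    mid = 0.5 * (soc_min + soc_max)
    span = 0.5 * (soc_max - soc_min)
    return K_BASE * (1.0 + ((soc - mid) / span) ** 2)  # quadratic weighting

def battery_power(v_bus, soc):
    """Droop law: power command proportional to the bus-voltage deviation,
    with a slope that varies with SOC (positive = discharge, kW)."""
    return (V_NOM - v_bus) / droop_coefficient(soc)

for v, soc in [(396.0, 0.50), (396.0, 0.25), (404.0, 0.85)]:
    print(f"V={v:.0f} V, SOC={soc:.2f} -> P={battery_power(v, soc):+.2f} kW")
```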

Keywords: DC microgrid, equivalent minimum hydrogen consumption strategy, energy management, time-varying droop coefficient, droop control

Procedia PDF Downloads 280
1070 Statistical Analysis and Optimization of a Process for CO2 Capture

Authors: Muftah H. El-Naas, Ameera F. Mohammad, Mabruk I. Suleiman, Mohamed Al Musharfy, Ali H. Al-Marzouqi

Abstract:

CO2 capture and storage technologies play a significant role in contributing to the control of climate change through the reduction of carbon dioxide emissions into the atmosphere. The present study evaluates and optimizes CO2 capture through a process where carbon dioxide is passed into pH-adjusted high-salinity water and reacted with sodium chloride to form a precipitate of sodium bicarbonate. This process is based on a modified Solvay process with higher CO2 capture efficiency, higher sodium removal, and a higher pH level without the use of ammonia. The process was tested in a bubble column semi-batch reactor and was optimized using response surface methodology (RSM). CO2 capture efficiency and sodium removal were optimized in terms of the major operating parameters, based on four levels of the operating variables in a Central Composite Design (CCD). The operating parameters were gas flow rate (0.5-1.5 L/min), reactor temperature (10-50 °C), buffer concentration (0.2-2.6%) and water salinity (25-197 g NaCl/L). The experimental data were fitted to a second-order polynomial using multiple regression and analyzed using analysis of variance (ANOVA). The optimum values of the selected variables were obtained using a response optimizer. The optimum conditions were tested experimentally using desalination reject brine with salinity ranging from 65,000 to 75,000 mg/L. The CO2 capture efficiency in 180 min was 99%, and the maximum sodium removal was 35%. The experimental and predicted values were within the 95% confidence interval, which demonstrates that the developed model can successfully predict the capture efficiency and sodium removal using the modified Solvay method.
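For readers unfamiliar with RSM, the sketch below fits the kind of second-order polynomial used here by ordinary least squares on coded CCD levels; the two factors and all response values are invented placeholders, not the study's data.

```python
# Least-squares fit of a quadratic response surface in two coded factors.
import numpy as np

# Coded levels (think gas flow rate, temperature) and a response such as
# CO2 capture efficiency (%); the numbers are placeholders.
x1 = np.array([-1, -1, 1, 1, 0, 0, -1.4, 1.4, 0, 0])
x2 = np.array([-1, 1, -1, 1, 0, 0, 0, 0, -1.4, 1.4])
y = np.array([72, 80, 75, 90, 95, 94, 70, 85, 78, 88], dtype=float)

# y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("coefficients:", np.round(beta, 3))
print("predicted response at the centre point (0, 0):", beta[0])
```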

Keywords: CO2 capture, water desalination, Response Surface Methodology, bubble column reactor

Procedia PDF Downloads 259
1069 Relation between Electrical Properties and Application of Chitosan Nanocomposites

Authors: Evgen Prokhorov, Gabriel Luna-Barcenas

Abstract:

The polysaccharide chitosan (CS) is an attractive biopolymer for the stabilization of several nanoparticles in acidic aqueous media. This is due in part to the presence of abundant primary NH2 and OH groups, which may lead to steric or chemical stabilization. Applications of most CS nanocomposites are based upon the interaction of high-surface-area nanoparticles (NPs) with different substances. Therefore, agglomeration of NPs reduces the effective surface area and may decrease the efficiency of the nanocomposites. The aim of this work is to measure the nanocomposites' electrical conductivity phenomena, which allows one to formulate optimal concentrations of conductive NPs in CS-based nanocomposites. Additionally, by comparing the efficiency of such nanocomposites, one can guide applications in the biomedical field (antibacterial properties and tissue regeneration) and the sensor field (detection of copper and nitrate ions in aqueous solutions). It was shown that the best antibacterial (CS-AgNPs, CS-AgNPs-carbon nanotubes) and wound-healing properties (CS-AuNPs) are observed in nanocomposites with concentrations of NPs near the percolation threshold. Likewise, the best detection limits in potentiometric and impedimetric sensors for the detection of copper ions (using a CS-AuNPs membrane) and nitrate ions (using a CS-clay membrane) in aqueous solutions have been observed for membranes with concentrations of NPs near the percolation threshold. It is well known that at the percolation concentration of NPs an abrupt increase in conductivity is observed due to the presence of physical contacts between NPs; above this concentration, agglomeration of NPs takes place, such that a decrease in the effective surface area and performance of the nanocomposite appears. The obtained relationship between the electrical percolation threshold and the performance of polymer nanocomposites with conductive NPs is important for the design and optimization of polymer-based nanocomposites for different applications.

Keywords: chitosan, conductive nanoparticles, percolation threshold, polymer nanocomposites

Procedia PDF Downloads 186
1068 Distances over Incomplete Diabetes and Breast Cancer Data Based on Bhattacharyya Distance

Authors: Loai AbdAllah, Mahmoud Kaiyal

Abstract:

Missing values in real-world datasets are a common problem. Many algorithms have been developed to deal with this problem, most of which replace the missing values with a fixed value computed from the observed values. In our work, we use a distance function based on the Bhattacharyya distance, which measures the similarity of two probability distributions, to measure the distance between objects with missing values. The proposed distance distinguishes between known and unknown values. The distance between two known values is the Mahalanobis distance. When, on the other hand, one of them is missing, the distance is computed from the distribution of the known values of the coordinate that contains the missing value. This method was integrated with Wikaya, a digital health company developing a platform that helps to improve the prevention of chronic diseases such as diabetes and cancer. In order for Wikaya’s recommendation system to work, the distance between users needs to be measured; since there are missing values in the collected data, there is a need for a distance function that handles incomplete user profiles. To evaluate the accuracy of the proposed distance function in reflecting the actual similarity between different objects when some of them contain missing values, we integrated it within the framework of the k-nearest-neighbors (kNN) classifier, since its computation is based only on the similarity between objects. To validate this, we ran the algorithm over the diabetes and breast cancer datasets, standard benchmark datasets from the UCI repository. Our experiments show that the kNN classifier using our proposed distance function outperforms kNN using other existing methods.
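A simplified per-coordinate sketch of the idea follows: known-known pairs use a (one-dimensional) Mahalanobis-style distance, while pairs with a missing entry fall back to an expected distance under the coordinate's empirical distribution. The paper's exact formulation may differ.

```python
import numpy as np

def coordinate_distance(a, b, col_values):
    """Squared distance contribution of one coordinate, handling NaNs."""
    col = col_values[~np.isnan(col_values)]
    var = col.var() + 1e-12
    if not np.isnan(a) and not np.isnan(b):
        return (a - b) ** 2 / var            # Mahalanobis-style (1-D)
    known = a if not np.isnan(a) else b
    if np.isnan(known):                      # both values missing
        return 2.0 * col.var() / var         # expected squared gap
    # One value missing: expected squared distance from the known value
    # to the observed distribution of this coordinate.
    return np.mean((col - known) ** 2) / var

def distance(u, v, X):
    return np.sqrt(sum(coordinate_distance(u[j], v[j], X[:, j])
                       for j in range(X.shape[1])))

X = np.array([[1.0, 2.0], [1.5, np.nan], [8.0, 9.0]])
print(distance(X[0], X[1], X))   # complete profile vs. partial profile
print(distance(X[0], X[2], X))   # two complete profiles
```

Plugging a callable built from this function (e.g. via functools.partial to bind X) into scikit-learn's KNeighborsClassifier as the metric then reproduces the kind of evaluation described above.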

Keywords: missing values, incomplete data, distance, incomplete diabetes data

Procedia PDF Downloads 188
1067 Parkinson’s Disease Detection Analysis through Machine Learning Approaches

Authors: Muhtasim Shafi Kader, Fizar Ahmed, Annesha Acharjee

Abstract:

Machine learning and data mining are crucial in health care, as well as in medical information and detection. Machine learning approaches are now being utilized to improve awareness of a variety of critical health issues, including diabetes detection, neuron cell tumor diagnosis, COVID-19 identification, and so on. Parkinson’s disease mainly affects senior citizens in Bangladesh. Its symptoms are typically progressive and worsen with time: as the condition advances, patients have trouble walking and communicating, and they can also experience psychological and social changes, sleep problems, depression, memory loss, and fatigue. Parkinson's disease can occur in both men and women, although men are affected at a higher rate than women. In this research, we aim to identify the most accurate ML algorithm for detecting the disease on a given dataset by modeling the following machine learning classifiers. Nine ML classifiers are used in this study: Naive Bayes, Adaptive Boosting, Bagging Classifier, Decision Tree Classifier, Random Forest Classifier, XGB Classifier, K Nearest Neighbor Classifier, Support Vector Machine Classifier, and Gradient Boosting Classifier.
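A sketch of such a classifier comparison with scikit-learn follows; the data are synthetic stand-ins for the Parkinson's dataset, and since XGBoost lives outside scikit-learn, GradientBoostingClassifier stands in for the XGB entry here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the voice-measurement features
X, y = make_classification(n_samples=300, n_features=22, random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    "Adaptive Boosting": AdaBoostClassifier(),
    "Bagging": BaggingClassifier(),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(),
    "Gradient Boosting (XGB stand-in)": GradientBoostingClassifier(),
    "K Nearest Neighbor": KNeighborsClassifier(),
    "Support Vector Machine": SVC(),
}

for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:32s} accuracy: {acc:.3f}")
```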

Keywords: naive bayes, adaptive boosting, bagging classifier, decision tree classifier, random forest classifier, XGB classifier, k nearest neighbor classifier, support vector classifier, gradient boosting classifier

Procedia PDF Downloads 100
1066 Parametric Optimization of High-Performance Electric Vehicle E-Gear Drive for Radiated Noise Using 1-D System Simulation

Authors: Sanjai Sureshkumar, Sathish G. Kumar, P. V. V. Sathyanarayana

Abstract:

For an e-gear drivetrain, the transmission error and the resulting variation in mesh stiffness are among the main sources of excitation in a high-performance electric vehicle. These vibrations are transferred through the shaft to the bearings and then to the e-gear drive housing, eventually radiating noise. A parametric model is developed in 1-D system simulation by optimizing the micro and macro geometry along with bearing properties and oil filtration to achieve the least transmission error and a high contact ratio. Histogram analysis is performed to condense the actual road load data into a duty cycle, from which the bearing forces are found. The structural vibration generated by these forces will be simulated in a nonlinear solver to obtain the normal surface velocity of the housing, and the results will be carried forward to acoustic software, wherein a virtual environment of the surroundings (the actual testing scenario) with accurate microphone positions will be maintained to predict the sound pressure level of the radiated noise and the directivity plot of the e-gear drive. Order analysis will be carried out to find the root cause of the vibration and whine noise. The broadband spectrum will be checked to find the rattle noise source. Further, with the available results, the design will be optimized, and the next loop of simulation will be performed to build the best e-gear drive from an NVH perspective. Structural analysis will also be carried out to check the robustness of the e-gear drive.

Keywords: 1-D system simulation, contact ratio, e-Gear, mesh stiffness, micro and macro geometry, transmission error, radiated noise, NVH

Procedia PDF Downloads 130
1065 Influence of Local Soil Conditions on Optimal Load Factors for Seismic Design of Buildings

Authors: Miguel A. Orellana, Sonia E. Ruiz, Juan Bojórquez

Abstract:

Optimal load factors (dead, live and seismic) used for the design of buildings may differ depending on the characteristics of the seismic ground motions to which the buildings are subjected, which are closely related to the type of soil where the structures are located. The influence of the type of soil on those load factors is analyzed in the present study. A methodology is employed for establishing optimal load factors that minimize the cost over the life cycle of the structure; as a restriction, it is established that the probability of structural failure must be less than or equal to a prescribed value. The life-cycle cost model used here includes different types of costs. The optimization methodology is applied to two groups of reinforced concrete buildings. One set (consisting of 4-, 7-, and 10-story buildings) is located on firm ground (with a dominant period Ts=0.5 s) and the other (consisting of 6-, 12-, and 16-story buildings) on soft soil (Ts=1.5 s) of Mexico City. Each group of buildings is designed using different combinations of load factors. The statistics of the maximum inter-story drifts (associated with the structural capacity) are found by means of incremental dynamic analyses. The buildings located in the firm zone are analyzed under the action of 10 strong seismic records, and those in the soft zone under 13 strong ground motions. All the motions correspond to seismic subduction events with magnitude M=6.9. Then, the structural damage and the expected total costs corresponding to each group of buildings are estimated. It is concluded that the optimal load factor combination for the design of buildings located on firm ground differs from that for buildings located on soft soil.
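Conceptually, the optimization can be sketched as a constrained search over load-factor combinations, as below; every cost and fragility function in the sketch is an invented placeholder, not the paper's calibrated life-cycle model.

```python
# Toy search for the load-factor combination that minimizes a life-cycle
# cost subject to a cap on failure probability. All functions are invented.
import itertools

P_MAX = 1e-3   # allowable failure probability, assumed

def initial_cost(fd, fl, fs):
    # Normalized construction cost grows with heavier design factors.
    return 1.0 + 0.15 * fd + 0.10 * fl + 0.30 * fs

def failure_probability(fd, fl, fs):
    # Invented fragility: failure gets rarer as design factors grow.
    return 3e-3 / (fd * fl * fs**2)

def expected_damage_cost(pf):
    return 40.0 * pf   # invented expected loss over the life cycle

best = None
for fd, fl, fs in itertools.product([1.1, 1.2, 1.3, 1.4],   # dead
                                    [1.0, 1.2, 1.4, 1.6],   # live
                                    [1.0, 1.1, 1.2, 1.3]):  # seismic
    pf = failure_probability(fd, fl, fs)
    if pf > P_MAX:
        continue   # violates the reliability restriction
    total = initial_cost(fd, fl, fs) + expected_damage_cost(pf)
    if best is None or total < best[0]:
        best = (total, fd, fl, fs)

print("optimal (total cost, dead, live, seismic):", best)
```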

Keywords: life-cycle cost, optimal load factors, reinforced concrete buildings, total costs, type of soil

Procedia PDF Downloads 273
1064 Global Navigation Satellite System and Precise Point Positioning as Remote Sensing Tools for Monitoring Tropospheric Water Vapor

Authors: Panupong Makvichian

Abstract:

The Global Navigation Satellite System (GNSS) is nowadays a common technology that improves navigation functions in our lives. Additionally, GNSS is now also being employed as an accurate atmospheric sensor. Meteorology is a practical application of GNSS that goes largely unnoticed in the background of people’s lives. GNSS Precise Point Positioning (PPP) is a positioning method that requires data from a single dual-frequency receiver together with precise information about satellite positions and satellite clocks. In addition, careful attention to mitigating various error sources is required. All the above data are combined in a sophisticated mathematical algorithm. This research demonstrates how GNSS and the PPP method can provide high-precision estimates, such as 3D positions or zenith tropospheric delays (ZTDs). ZTDs combined with pressure and temperature information allow us to estimate the water vapor in the atmosphere as precipitable water vapor (PWV). If the process is replicated for a network of GNSS sensors, we can create thematic maps that allow extraction of water content information at any location within the network area. All of the above are possible thanks to advances in GNSS data processing. Therefore, we are able to use GNSS data for climatic trend analysis and the acquisition of further knowledge about atmospheric water content.
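A sketch of the standard ZTD-to-PWV conversion (a Saastamoinen hydrostatic delay plus the Bevis conversion factor) follows; the constants are the commonly cited ones, while the site values in the example are invented.

```python
import math

def zhd_saastamoinen(p_hpa, lat_rad, h_m):
    """Zenith hydrostatic delay (m) from surface pressure (Saastamoinen)."""
    return 0.0022768 * p_hpa / (1 - 0.00266 * math.cos(2 * lat_rad)
                                - 0.28e-6 * h_m)

def pwv_from_ztd(ztd_m, p_hpa, tm_k, lat_rad, h_m):
    """Precipitable water vapor (m) from the zenith total delay."""
    zwd = ztd_m - zhd_saastamoinen(p_hpa, lat_rad, h_m)  # wet delay
    k2p, k3 = 22.1, 3.739e5        # K/hPa and K^2/hPa (Bevis et al. values)
    rho_w, r_v = 1000.0, 461.5     # kg/m^3 and J/(kg K)
    # 1e8 combines the 1e-6 refractivity scaling and the hPa -> Pa conversion
    pi = 1.0e8 / (rho_w * r_v * (k3 / tm_k + k2p))
    return pi * zwd

# Invented site values: ZTD (m), pressure (hPa), mean temperature (K),
# latitude and station height
ztd, p, tm = 2.45, 1005.0, 275.0
lat, h = math.radians(15.0), 50.0
print(f"PWV = {1000 * pwv_from_ztd(ztd, p, tm, lat, h):.1f} mm")
```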

Keywords: GNSS, precise point positioning, Zenith tropospheric delays, precipitable water vapor

Procedia PDF Downloads 170
1063 Finite Volume Method for Flow Prediction Using Unstructured Meshes

Authors: Juhee Lee, Yongjun Lee

Abstract:

In designing low-energy buildings, the heat transfer through a large glass pane or wall becomes critical. Multiple layers of window glass and wall are employed for high insulation. The gravity-driven air flow between window glasses or wall layers is a natural convection phenomenon that is a key part of the heat transfer. As a first step toward the natural convection analysis, this study presents the development and application of a finite volume method for the numerical computation of viscous incompressible flows. It will become part of a natural convection analysis with a high-order scheme, a multi-grid method, and dual time stepping in the future. A finite volume method based on a fully implicit second-order scheme is used to discretize and solve the fluid flow on unstructured grids composed of arbitrary-shaped cells. The governing equations are integrated and discretised in the finite volume manner using a collocated arrangement of variables. The convergence of the SIMPLE segregated algorithm for the solution of the coupled nonlinear algebraic equations is accelerated by using a sparse matrix solver such as BiCGSTAB. The method used in the present study is verified by applying it to flows for which either the numerical solution is known or the solution can be obtained using another numerical technique available in other studies. The accuracy of the method is assessed through grid refinement.
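The linear-solver step inside one SIMPLE iteration can be illustrated as below: a sparse, non-symmetric system, here a toy 1-D convection-diffusion matrix standing in for the unstructured-mesh systems of the study, is handed to SciPy's BiCGSTAB.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

n, peclet = 50, 2.0   # number of cells and a cell Peclet number, assumed

# Upwind convection plus central diffusion gives a non-symmetric
# tridiagonal matrix (aW = -(D+F), aP = 2D+F, aE = -D with D=1, F=peclet).
lower = -(1.0 + peclet) * np.ones(n - 1)
main = (2.0 + peclet) * np.ones(n)
upper = -1.0 * np.ones(n - 1)
A = sp.diags([lower, main, upper], [-1, 0, 1], format="csr")

b = np.zeros(n)
b[-1] = 1.0           # unit Dirichlet value folded into the right boundary

x, info = bicgstab(A, b)
print("converged" if info == 0 else f"info={info}",
      "| phi at the mid cell:", round(x[n // 2], 4))
```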

Keywords: finite volume method, fluid flow, laminar flow, unstructured grid

Procedia PDF Downloads 256
1068 Automatic Registration of Rail Profile Based on Local Maximum Curvature Entropy

Authors: Hao Wang, Shengchun Wang, Weidong Wang

Abstract:

To counter the influence of train vibration and environmental noise on the measurement of track wear, we propose a method for the automatic extraction of the circular arcs on the inner or outer side of the rail waist and achieve high-precision registration of the rail profile. Firstly, a polynomial fitting method based on a truncated residual histogram is proposed to find the optimal fitting curve of the profile and reduce the influence of noise on profile curve fitting. Then, based on the curvature distribution characteristics of the fitted curve, an interval search algorithm based on a dynamic window's maximum curvature entropy is proposed to realize the automatic segmentation of the small circular arcs. At last, we fit two circle centers as matching reference points based on the small circular arcs on both sides and align the measured profile to the standard design profile. The static experimental results show that the mean and standard deviation of the method are controlled within 0.01 mm, with small measurement errors and high repeatability. The dynamic test also verified the repeatability of the method in the train-running environment, and the dynamic measurement deviation of rail wear is within 0.2 mm with high repeatability.
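The final centre-fitting step can be sketched with a linear least-squares (Kåsa) circle fit, as below; the arc points here are synthetic, whereas in the proposed method they come from the segmented arcs of the rail waist.

```python
import numpy as np

def fit_circle(x, y):
    """Kasa algebraic circle fit: returns centre (a, b) and radius r.
    Uses (x-a)^2 + (y-b)^2 = r^2 rewritten as a linear system."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    sol, *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    a, b, c = sol
    return a, b, np.sqrt(c + a**2 + b**2)

# Synthetic noisy arc: 60 degrees of a circle centred at (10, -5), r = 20
rng = np.random.default_rng(0)
theta = np.linspace(0.3, 1.35, 80)
x = 10 + 20 * np.cos(theta) + rng.normal(0, 0.01, theta.size)
y = -5 + 20 * np.sin(theta) + rng.normal(0, 0.01, theta.size)

a, b, r = fit_circle(x, y)
print(f"centre = ({a:.3f}, {b:.3f}), radius = {r:.3f}")
```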

Keywords: curvature entropy, profile registration, rail wear, structured light, train-running

Procedia PDF Downloads 228
1061 ANOVA-Based Feature Selection and Machine Learning System for IoT Anomaly Detection

Authors: Muhammad Ali

Abstract:

Cyber-attacks and anomalies on Internet of Things (IoT) infrastructure are an emerging concern in the domain of data-driven intrusion detection, and rapidly increasing IoT risk is now making headlines around the world. Denial of service, malicious control, data type probing, malicious operation, DDoS, scanning, spying, and wrong setup are attacks and anomalies that can cause an IoT system to fail. Everyone talks about cyber security, connectivity, smart devices, and real-time data extraction, and IoT devices expose a wide variety of new cyber security attack vectors in network traffic. For further IoT development, and mainly for smart and IoT applications, there is a necessity for intelligent processing and analysis of data; our approach aims to secure them. We train and compare several machine learning models for accurately predicting attacks and anomalies on IoT systems, using ANOVA-based feature selection so that fewer features feed the prediction models that evaluate network traffic and help protect IoT devices. The machine learning (ML) algorithms used here are KNN, SVM, NB, DT, and RF, compared for the most satisfactory test accuracy with fast detection. The evaluated ML metrics include precision, recall, F1 score, FPR, NPV, GM, MCC, and AUC-ROC. The Random Forest algorithm achieved the best results with the least prediction time, with an accuracy of 99.98%.
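A sketch of the ANOVA-based selection plus Random Forest pipeline with scikit-learn follows; the data are synthetic stand-ins for the IoT network-traffic features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=1000, n_features=30, n_informative=8,
                           random_state=0)

# The ANOVA F-test keeps the k features whose class means differ the most.
pipe = make_pipeline(SelectKBest(f_classif, k=10),
                     RandomForestClassifier(n_estimators=200, random_state=0))

print("CV accuracy:", round(cross_val_score(pipe, X, y, cv=5).mean(), 4))
```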

Keywords: machine learning, analysis of variance, Internet of Things, network security, intrusion detection

Procedia PDF Downloads 93
1060 Formulation and Ex Vivo Evaluation of Solid Lipid Nanoparticles Based Hydrogel for Intranasal Drug Delivery

Authors: Pramod Jagtap, Kisan Jadhav, Neha Dand

Abstract:

Risperidone (RISP) is an antipsychotic agent with low water solubility, and its nontargeted delivery results in numerous side effects. Hence, an attempt was made to develop an SLN hydrogel for intranasal delivery of RISP to achieve maximum bioavailability and a reduction of side effects. RISP-loaded SLNs composed of 1.65% (w/v) lipid mass were produced by a high shear homogenization (HSH) method coupled with ultrasound (US), using glyceryl monostearate (GMS) or Imwitor 900K as the solid lipid. The particles were loaded with 0.2% (w/v) of RISP and surface-tailored with 2.02% (w/v) of the non-ionic surfactant Tween® 80. Optimization was done using a 3² factorial design in Design Expert® software. The prepared SLN dispersion was incorporated into a Polycarbophil AA1 hydrogel (0.5% w/v). The final gel formulation was evaluated for entrapment efficiency, particle size, rheological properties, X-ray diffraction, in vitro diffusion, ex vivo permeation using sheep nasal mucosa, and histopathological studies for nasociliary toxicity. The entrapment efficiency of the optimized SLNs was found to be 76 ± 2%, the polydispersity index <0.3, and the particle size 278 ± 5 nm. This optimized batch was incorporated into the hydrogel. The pH was found to be 6.4 ± 0.14. The rheological behaviour of the hydrogel formulation revealed no thixotropic behaviour. In the histopathology study, no nasociliary toxicity was observed in the nasal mucosa after ex vivo permeation. X-ray diffraction data showed the drug was in amorphous form. The ex vivo permeation study showed a controlled release profile of the drug.

Keywords: ex vivo, particle size, risperidone, solid lipid nanoparticles

Procedia PDF Downloads 392
1059 Identification of Hepatocellular Carcinoma Using Supervised Learning Algorithms

Authors: Sagri Sharma

Abstract:

Analysis of diseases integrating multiple factors increases the complexity of the problem; therefore, the development of frameworks for the analysis of diseases is currently a topic of intense research. Due to the inter-dependence of the various parameters, the use of traditional methodologies has not been very effective. Consequently, newer methodologies are being sought to deal with the problem. Supervised learning algorithms are commonly used for performing prediction on previously unseen data. These algorithms are used for applications in fields ranging from image analysis to protein structure and function prediction, and they are trained using a known dataset to come up with a predictor model that generates reasonable predictions for the response to new data. Gene expression profiles generated by DNA analysis experiments can be quite complex, since these experiments can involve hypotheses spanning entire genomes. A well-known machine learning algorithm, the Support Vector Machine, is therefore applied to analyze the expression levels of thousands of genes simultaneously in a timely, automated and cost-effective way. The objectives of the presented work are the development of a methodology to identify genes relevant to Hepatocellular Carcinoma (HCC) from gene expression datasets utilizing supervised learning algorithms and statistical evaluations, along with the development of a predictive framework that can perform classification tasks on new, unseen data.
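A sketch of such an SVM workflow on a gene-expression-like matrix (many features, few samples) is shown below with synthetic data; the univariate filter stands in for the statistical evaluations mentioned above.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 100 samples by 5000 "genes", as in a typical expression matrix
X, y = make_classification(n_samples=100, n_features=5000, n_informative=40,
                           random_state=0)

pipe = make_pipeline(StandardScaler(),
                     SelectKBest(f_classif, k=100),  # candidate marker genes
                     SVC(kernel="linear", C=1.0))

print("CV accuracy:", round(cross_val_score(pipe, X, y, cv=5).mean(), 3))
```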

Keywords: artificial intelligence, biomarker, gene expression datasets, hepatocellular carcinoma, machine learning, supervised learning algorithms, support vector machine

Procedia PDF Downloads 401
1058 Information Management Approach in the Prediction of Acute Appendicitis

Authors: Ahmad Shahin, Walid Moudani, Ali Bekraki

Abstract:

This research aims at presenting a predictive data mining model to handle accurate diagnosis of acute appendicitis in patients, for the purpose of maximizing health service quality, minimizing morbidity/mortality, and reducing cost. Acute appendicitis is the most common disease requiring timely, accurate diagnosis and surgical intervention. Although the treatment of acute appendicitis is simple and straightforward, its diagnosis is still difficult because no single sign, symptom, laboratory test or image examination accurately confirms the diagnosis of acute appendicitis in all cases. This contributes to increased morbidity and negative appendectomies. In this study, the authors propose to generate an accurate model for the prediction of acute appendicitis in patients which is based, firstly, on a segmentation technique associated with the ABC algorithm to segment the patients; secondly, on applying fuzzy logic to process the massive volume of heterogeneous and noisy data (age, sex, fever, white blood cells, neutrophilia, CRP, urine, ultrasound, CT, appendectomy, etc.) in order to express knowledge and analyze the relationships among data in a comprehensive manner; and thirdly, on applying a dynamic programming technique to reduce the number of data attributes. The proposed model is evaluated against a set of benchmark techniques and on a set of benchmark classification problems of osteoporosis, diabetes and heart disease obtained from the UCI repository and other data sources.

Keywords: healthcare management, acute appendicitis, data mining, classification, decision tree

Procedia PDF Downloads 318
1057 Use of Corn Stover for the Production of 2G Bioethanol, Enzymes, and Xylitol Under a Biorefinery Concept

Authors: Astorga-Trejo Rebeca, Fonseca-Peralta Héctor Manuel, Beltrán-Arredondo Laura Ivonne, Castro-Martínez Claudia

Abstract:

The use of biomass as feedstock for the production of fuels and other chemicals of interest is an increasingly accepted option on the way to the development of biorefinery complexes. In the Mexican state of Sinaloa, two million tons of residues from corn crops are produced every year, most of which can be converted to bioethanol and other products through biotechnological conversion using yeast and other microorganisms. Therefore, the objective of this work was to take advantage of corn stover and evaluate its potential as a substrate for the production of second-generation (2G) bioethanol, enzymes, and xylitol. To produce 2G bioethanol, an acid-alkaline pretreatment was carried out prior to saccharification and fermentation. The microorganisms used for the production of enzymes, as well as for the production of xylitol, were isolated and characterized in our workgroup. Statistical analysis was performed using Design Expert version 11.0. The results showed that it is possible to obtain 2G bioethanol employing corn stover as a carbon source and Saccharomyces cerevisiae ItVer01 and Candida intermedia CBE002, with yields of 0.42 g and 0.31 g, respectively. It was also shown that C. intermedia has the ability to produce xylitol with a good yield (0.46 g/g). On the other hand, qualitative and quantitative studies showed that the native strains of Fusarium equiseti (0.4 IU/mL, xylanase), Bacillus velezensis (1.2 IU/mL, xylanase, and 0.4 IU/mL, amylase) and Penicillium funiculosum (1.5 IU/mL, cellulases) have the capacity to produce xylanases, amylases or cellulases using corn stover as the raw material. This study allowed us to demonstrate that it is possible to use corn stover, a low-cost raw material with high availability in our country, as a carbon source to obtain bioproducts of industrial interest, using processes that are more environmentally friendly and sustainable. It is necessary to continue optimizing each bioprocess.

Keywords: biomass, corn stover, biorefinery, bioethanol 2G, enzymes, xylitol

Procedia PDF Downloads 135
1056 Physical, Microstructural and Functional Quality Improvements of Cassava-Sorghum Composite Snacks

Authors: Adil Basuki Ahza, Michael Liong, Subarna Suryatman

Abstract:

Healthy chips now dominate the snack market shelves; more than 80% of processed snack foods on the market are chips. This research takes advantage of twin-screw extrusion technology to produce two types of product, i.e. directly expanded chips and intermediate ready-to-fry or microwavable chips. To improve the functional quality, the cereal-tuber based mix was enriched with an antioxidant-rich mix of temurui, celery, carrot and isolated soy protein (ISP) powder. The objectives of this research were to find the best cassava-sorghum composite ratio, i.e. 60:40, 70:30 or 80:20, to optimize the extrusion processing conditions, and to study the microstructural, physical and sensorial characteristics of the final products. Optimization was first done by applying metering-section barrel temperatures of 120, 130 and 140 °C with screw speeds of 150, 160 and 170 rpm to produce the directly expanded product. The intermediate product was extruded at 100 °C and a 100 rpm screw speed with feed moisture contents of 35, 40 and 45%. The directly expanded products were analyzed for color, hardness, density, microstructure, and organoleptic properties. The results showed that the interaction of the cassava-sorghum ratio and the cooking method affected the product's color, hardness, and bulk density (p<0.05). Extrusion processing conditions also significantly affected the product's microstructure (p<0.05). The directly expanded snacks with an 80:20 cassava-sorghum ratio and the fried expanded ones with 70:30 and 80:20 ratios showed the best organoleptic scores (slightly liked), while microwave-baking the intermediate product resulted in chips of unacceptable sensory quality.

Keywords: cassava-sorghum composite, extrusion, microstructure, physical characteristics

Procedia PDF Downloads 246
1055 Bayesian Inference for High Dimensional Dynamic Spatio-Temporal Models

Authors: Sofia M. Karadimitriou, Kostas Triantafyllopoulos, Timothy Heaton

Abstract:

Reduced-dimension Dynamic Spatio-Temporal Models (DSTMs) jointly describe the spatial and temporal evolution of a function observed subject to noise. A basic state space model is adopted for the discrete temporal variation, while a continuous autoregressive structure describes the continuous spatial evolution. Application of such a DSTM relies upon the pre-selection of a suitable reduced set of basis functions, and this can present a challenge in practice. In this talk, we propose an online estimation method for high dimensional spatio-temporal data based upon DSTMs, and we attempt to resolve this issue by allowing the basis to adapt to the observed data. Specifically, we present a wavelet decomposition in order to obtain a parsimonious approximation of the continuous spatial process. This parsimony can be achieved by placing a Laplace prior distribution on the wavelet coefficients. The aim of using the Laplace prior is to filter out wavelet coefficients with low contribution, and thus achieve the dimension reduction with significant computational savings. We then propose a hierarchical Bayesian state space model, for the estimation of which we offer an appropriate particle filter algorithm. The proposed methodology is illustrated using real environmental data.
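The effect of the Laplace prior can be illustrated with soft thresholding, which is its MAP estimate under Gaussian noise, applied to a wavelet decomposition as in the PyWavelets sketch below; the signal and the prior scale are invented for the example.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
signal = np.sin(6 * np.pi * t) + 0.5 * (t > 0.5)   # smooth part plus a jump
noisy = signal + rng.normal(0, 0.2, t.size)

coeffs = pywt.wavedec(noisy, "db4", level=4)
lam = 0.3                                           # prior scale, assumed
# Soft thresholding zeroes low-contribution detail coefficients, which is
# the dimension-reduction effect the Laplace prior is meant to achieve.
thresholded = [coeffs[0]] + [pywt.threshold(c, lam, mode="soft")
                             for c in coeffs[1:]]

kept = sum(np.count_nonzero(c) for c in thresholded[1:])
total = sum(c.size for c in coeffs[1:])
print(f"kept {kept}/{total} detail coefficients")
reconstruction = pywt.waverec(thresholded, "db4")
```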

Keywords: multidimensional Laplace prior, particle filtering, spatio-temporal modelling, wavelets

Procedia PDF Downloads 401
1054 Multi-Level Air Quality Classification in China Using Information Gain and Support Vector Machine

Authors: Bingchun Liu, Pei-Chann Chang, Natasha Huang, Dun Li

Abstract:

Machine learning and data mining are two important tools for extracting useful information and knowledge from large datasets. In machine learning, classification is a widely used technique to predict qualitative variables and is generally preferred over regression from an operational point of view. Due to the enormous increase in air pollution in various countries, especially China, air quality classification has become one of the most important topics in air quality research and modelling. This study aims at introducing a hybrid classification model based on information theory and the Support Vector Machine (SVM), using the air quality data of four cities in China, namely Beijing, Guangzhou, Shanghai and Tianjin, from Jan 1, 2014 to April 30, 2016. China's Ministry of Environmental Protection classifies daily air quality into six levels, namely Serious Pollution, Severe Pollution, Moderate Pollution, Light Pollution, Good and Excellent, based on the respective Air Quality Index (AQI) values. Using information theory, the information gain (IG) is calculated and feature selection is done for both categorical features and continuous numeric features. Then the SVM machine learning algorithm is implemented on the selected features with cross-validation. The final evaluation reveals that the IG and SVM hybrid model performs better than the SVM (alone), Artificial Neural Network (ANN) and K-Nearest Neighbours (KNN) models in terms of accuracy as well as complexity.
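A sketch of the IG-plus-SVM hybrid follows, with information gain estimated via mutual information in scikit-learn; the six-class synthetic data stand in for the AQI records of the four cities.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Six classes mirror the six air-quality levels; the features are synthetic.
X, y = make_classification(n_samples=800, n_features=20, n_informative=6,
                           n_classes=6, n_clusters_per_class=1,
                           random_state=1)

pipe = make_pipeline(StandardScaler(),
                     SelectKBest(mutual_info_classif, k=8),  # IG-style filter
                     SVC(kernel="rbf", C=10.0))

print("CV accuracy:", round(cross_val_score(pipe, X, y, cv=5).mean(), 3))
```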

Keywords: machine learning, air quality classification, air quality index, information gain, support vector machine, cross-validation

Procedia PDF Downloads 199
1053 Solid State Fermentation Process Development for Trichoderma asperellum Using Inert Support in a Fixed Bed Fermenter

Authors: Mauricio Cruz, Andrés Díaz García, Martha Isabel Gómez, Juan Carlos Serrato Bermúdez

Abstract:

The disadvantages of using natural substrates in SSF processes have been well recognized and are mainly associated with gradual decomposition of the substrate, formation of agglomerates and decreased bed porosity, generating limitations in mass and heat transfer. Additionally, in several cases, materials with a high agricultural value such as sour milk, beets, rice, beans and corn have been used. Thus, the use of economical inert supports (natural or synthetic) in combination with a nutrient suspension for the production of biocontrol microorganisms is a good alternative in SSF processes, but requires further studies in the fields of modeling and optimization. Therefore, the aim of this work is to compare the performance of two inert supports, a synthetic one (polyurethane foam) and a natural one (rice husk), identifying the factors that have the major effects on the productivity of T. asperellum Th204 and on the maximum specific growth rate in a PROPHYTA L05® fixed bed bioreactor. For this, the six factors C:N ratio, temperature, inoculation rate, bed height, air moisture content and airflow were evaluated using a fractional design. The factors C:N ratio and air flow were identified as having significant effects on the productivity (expressed as conidia/dry substrate·h). The polyurethane foam showed a higher maximum specific growth rate (0.1631 h-1) and a productivity of 3.89x107 conidia/dry substrate·h compared to rice husk (2.83x106) and a natural rice-based substrate (8.87x106) used as control. Finally, a quadratic model was generated and validated, obtaining productivities higher than 3.0x107 conidia/dry substrate·h with the air flow at 0.9 m3/h and the C:N ratio at 18.1.

Keywords: bioprocess, scale up, fractional design, C:N ratio, air flow

Procedia PDF Downloads 482
1052 Breast Cancer Survivability Prediction via Classifier Ensemble

Authors: Mohamed Al-Badrashiny, Abdelghani Bellaachia

Abstract:

This paper presents a classifier ensemble approach for predicting the survivability of breast cancer patients using the latest database version of the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute. The system consists of two main components: a features selection component and a classifier ensemble component. The features selection component divides the features in the SEER database into four groups. After that, it tries to find the most important features among the four groups that maximize the weighted average F-score of a certain classification algorithm. The ensemble component uses three different classifiers, each of which models a different set of features from SEER through the features selection module. On top of them, another classifier is used to give the final decision based on the output decisions and confidence scores from each of the underlying classifiers. Different classification algorithms have been examined; the best setup found uses the decision tree, Bayesian network, and Naïve Bayes algorithms for the underlying classifiers and Naïve Bayes for the classifier ensemble step. The system outperforms all published systems to date when evaluated against the exact same data from SEER (period of 1973-2002). It gives an 87.39% weighted average F-score compared to 85.82% and 81.34% for the other published systems. By increasing the data size to cover the whole database (period of 1973-2014), the overall weighted average F-score jumps to 92.4% on the held-out unseen test set.
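The two-level ensemble can be sketched with scikit-learn's StackingClassifier, as below; a Bayesian network base learner is not available in scikit-learn, so a second Naive Bayes variant stands in for it, and the data are synthetic rather than SEER records.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

stack = StackingClassifier(
    estimators=[("dt", DecisionTreeClassifier(max_depth=6)),
                ("nb", GaussianNB()),
                ("bn", GaussianNB(var_smoothing=1e-7))],  # Bayesian-net stand-in
    final_estimator=GaussianNB(),     # Naive Bayes combiner, as in the paper
    stack_method="predict_proba")

score = cross_val_score(stack, X, y, cv=5, scoring="f1_weighted").mean()
print("CV weighted F1:", round(score, 3))
```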

Keywords: classifier ensemble, breast cancer survivability, data mining, SEER

Procedia PDF Downloads 295
1051 A Multi-Templated Fe-Ni-Cu Ion Imprinted Polymer for the Selective and Simultaneous Removal of Toxic Metallic Ions from Wastewater

Authors: Morlu Stevens, Bareki Batlokwa

Abstract:

The use of treated wastewater is widely employed to compensate for the scarcity of safe and uncontaminated freshwater. However, toxic heavy metal ions in the wastewater pose a health hazard to animals and the environment, hence the importance of an effective technique to tackle the challenge. A multi-templated ion imprinted sorbent (Fe,Ni,Cu-IIP) for the simultaneous removal of heavy metal ions from wastewater was synthesised employing molecular imprinting technology (MIT) via a thermal free-radical bulk polymerization technique. Methacrylic acid (MAA) was employed as the functional monomer, ethylene glycol dimethacrylate (EGDMA) as the cross-linking agent, azobisisobutyronitrile (AIBN) as the initiator, Fe, Ni and Cu ions as the template ions, and 1,10-phenanthroline as the complexing agent. The template ions were exhaustively washed off the synthesized polymer by solvent extraction in several washing steps, while periodically increasing the solvent (HCl) concentration from 1.0 M to 10.0 M. The physical and chemical properties of the sorbents were investigated using Fourier Transform Infrared Spectroscopy (FT-IR), X-ray Diffraction (XRD) and Atomic Force Microscopy (AFM). Operational parameters such as time, pH and sorbent dosage were optimized to evaluate the effectiveness of the sorbents, and the optima were found to be 15 min, 7.5 and 666.7 mg/L, respectively. Selectivity and competitive sorption studies between the template and similar ions were carried out and showed good selectivity towards the targeted metal ions, with removal of 90%-98% of the templated ions compared to 58%-62% of similar ions. The sorbents were further applied for the selective removal of Fe, Ni and Cu from real wastewater samples, and recoveries of 92.14 ± 0.16% - 106.09 ± 0.17% with linearities of R2 = 0.9993 - 0.9997 were achieved.

Keywords: ion imprinting, ion imprinted polymers, heavy metals, wastewater

Procedia PDF Downloads 284
1050 Exergy Analysis of a Green Dimethyl Ether Production Plant

Authors: Marcello De Falco, Gianluca Natrella, Mauro Capocelli

Abstract:

CO₂ capture and utilization (CCU) is a promising approach to reduce GHG (greenhouse gas) emissions, and many technologies in this field have recently attracted attention. However, since CO₂ is a very stable compound, its utilization as a reagent is energy intensive. As a consequence, it is unclear whether CCU processes allow for a net reduction of environmental impacts from a life cycle perspective and whether these solutions are sustainable. Among the tools for quantifying the real environmental benefits of CCU technologies, exergy analysis is the most rigorous from a scientific point of view. The exergy of a system is the maximum obtainable work during a process that brings the system into equilibrium with its reference environment through a series of reversible processes in which the system can only interact with that environment. In other words, exergy is an “opportunity for doing work”, and in real processes it is destroyed by entropy generation. Exergy-based analysis is useful for evaluating the thermodynamic inefficiencies of processes, understanding and locating the main consumption of fuels or primary energy, providing an instrument for comparison among different process configurations, and detecting solutions that reduce the energy penalties of a process. In this work, the exergy analysis of a process for the production of dimethyl ether (DME) from green hydrogen generated through an electrolysis unit and pure CO₂ captured from flue gas is performed. The model simulates the behavior of all units composing the plant (electrolyzer, carbon capture section, DME synthesis reactor, purification step), with the scope of quantifying the performance indices based on the Second Law of Thermodynamics and identifying the entropy generation points. Then, a plant optimization strategy is proposed to maximize the exergy efficiency.
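As a worked illustration of a specific flow-exergy evaluation, ex = (h - h0) - T0 (s - s0), the sketch below treats an ideal-gas stream; the state points are invented, not taken from the plant model.

```python
import math

T0, P0 = 298.15, 101325.0   # dead-state temperature (K) and pressure (Pa)
cp, R = 1005.0, 287.0       # ideal-gas air properties, J/(kg K), assumed

def flow_exergy(T, P):
    """Specific physical exergy of an ideal-gas stream (J/kg)."""
    dh = cp * (T - T0)
    ds = cp * math.log(T / T0) - R * math.log(P / P0)
    return dh - T0 * ds

ex_in = flow_exergy(T=500.0, P=5.0 * 101325.0)
ex_out = flow_exergy(T=450.0, P=4.5 * 101325.0)
# For an adiabatic device with no work output, the exergy decrease equals
# the exergy destroyed by entropy generation.
print(f"exergy in:  {ex_in / 1e3:.1f} kJ/kg")
print(f"exergy out: {ex_out / 1e3:.1f} kJ/kg")
print(f"decrease:   {(ex_in - ex_out) / 1e3:.1f} kJ/kg")
```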

Keywords: green DME production, exergy analysis, energy penalties, exergy efficiency

Procedia PDF Downloads 213
1049 The Application of Participatory Social Media in Collaborative Planning: A Systematic Review

Authors: Yujie Chen, Zhen Li

Abstract:

In the context of planning transformation, how to promote public participation in the formulation and implementation of collaborative planning has been a focal issue of discussion. However, existing studies have often been case-specific or focused on a specific design field, leaving the role of participatory social media (PSM) in urban collaborative planning generally questioned. A systematic database search was conducted in December 2019. Articles and projects were eligible if they reported a quantitative empirical study applying participatory social media in the collaborative planning process (prospective, retrospective, experimental or longitudinal research, or collective actions in planning practice). Twenty studies and seven projects were included in the review. Findings showed that social media are generally applied in the public spatial behavior, transportation behavior, and community planning fields, with new technologies and new datasets. PSM has provided a new platform for participatory design, decision analysis, and collaborative negotiation, and is most widely used in participatory design. The review also identified several existing forms of PSM. PSM mainly plays three roles: a language of decision-making for communication, a study mode for spatial evaluation, and a decision agenda for interactive decision support. Three areas for optimizing PSM were recognized: improving the participatory scale, improving grass-roots organization, and the promotion of politics. However, participants can basically only provide information and comments through PSM in the future collaborative planning process; therefore, the issues of low data response rates, poor spatial data quality, and participation sustainability deserve more attention and solutions.

Keywords: participatory social media, collaborative planning, planning workshop, application mode

Procedia PDF Downloads 106
1048 Optimization and Validation for Determination of VOCs from Lime Fruit Citrus aurantifolia (Christm.) with and without California Red Scale Aonidiella aurantii (Maskell) Infestation by Using HS-SPME-GC-FID/MS

Authors: K. Mohammed, M. Agarwal, J. Mewman, Y. Ren

Abstract:

An optimal technique has been developed for extracting the volatile organic compounds which contribute to the aroma of lime fruit (Citrus aurantifolia). The volatile organic compounds of healthy lime fruit and of fruit infested with the California red scale, Aonidiella aurantii, were characterized using headspace solid phase microextraction (HS-SPME) combined with gas chromatography (GC) coupled with flame ionization detection (FID) and gas chromatography with mass spectrometry (GC-MS), as a very simple, efficient and nondestructive extraction method. A three-phase 50/30 μm PDMS/DVB/CAR fibre was used for the extraction process. The optimal sealing and fibre exposure times for volatiles from whole lime fruit to reach equilibrium in the headspace of the chamber were 16 and 4 hours, respectively. A desorption time of 5 min was selected for the three-phase fibre. Herbivore activity induces indirect plant defenses, such as the emission of herbivore-induced plant volatiles (HIPVs), which can be used by natural enemies for host location. GC-MS analysis showed qualitative differences between the volatiles emitted by infested and healthy lime fruit. The GC-MS analysis allowed the initial identification of 18 compounds, with similarities higher than 85%, in accordance with the NIST mass spectral library. One of these, D-limonene, was increased by A. aurantii infestation, and three, undecane, α-farnesene and 7-epi-α-selinene, were decreased. From an applied point of view, the application of the above-mentioned VOCs may help boost the efficiency of biocontrol programs and natural enemies’ production techniques.

Keywords: lime fruit, Citrus aurantifolia, California red scale, Aonidiella aurantii, VOCs, HS-SPME/GC-FID-MS

Procedia PDF Downloads 181
1047 Improvement of Artemisinin Production by P. indica in Hairy Root Cultures of A. annua L.

Authors: Seema Ahlawat, Parul Saxena, Malik Zainul Abdin

Abstract:

Malaria is a major health problem in many developing countries, and the parasite responsible for the vast majority of fatal malaria infections is Plasmodium falciparum. Unfortunately, most Plasmodium strains, including P. falciparum, have become resistant to most antimalarials, including chloroquine, mefloquine, etc. To combat this problem, the WHO has recommended the use of artemisinin and its derivatives in artemisinin-based combination therapy (ACT). Due to this use, the global demand for artemisinin is increasing continuously. However, the relatively low yield of artemisinin in A. annua L. plants and the unavailability of economically viable synthetic protocols are the major bottlenecks for its commercial production and clinical use; chemical synthesis of artemisinin is very complex and uneconomical. A hairy root system, using the Agrobacterium rhizogenes LBA 9402 strain to enhance the production of artemisinin in A. annua L., was developed in our laboratory. The transgenic nature of the hairy root lines and the copy number of the transgene (rolB) were confirmed using PCR and Southern blot analyses, respectively. The effect of different concentrations of Piriformospora indica on artemisinin production in hairy root cultures was evaluated. 3% P. indica resulted in a 1.97-fold increase in artemisinin production in comparison to control cultures. The effect of P. indica on artemisinin production was positively correlated with the regulatory genes of the MVA, MEP and artemisinin biosynthetic pathways, viz. hmgr, ads, cyp71av1, aldh1, dxs, dxr and dbr2, in hairy root cultures of A. annua L. Mass-scale cultivation of A. annua L. hairy roots by plant tissue culture technology may be an alternative route for the production of artemisinin. A comprehensive investigation of the hairy root system of A. annua L. would help in developing a viable process for the production of artemisinin. The efficiency of the scale-up systems still needs optimization before industrial exploitation becomes viable.

Keywords: A. annua L., artemisinin, hairy root cultures, malaria

Procedia PDF Downloads 391
1046 Strained Channel Aluminum Nitride/Gallium Nitride Heterostructures Homoepitaxially Grown on Aluminum Nitride-On-Sapphire Template by Plasma-Assisted Molecular Beam Epitaxy

Authors: Jiajia Yao, GuanLin Wu, Fang Liu, JunShuai Xue, JinCheng Zhang, Yue Hao

Abstract:

Due to its outstanding material properties, such as high thermal conductivity and an ultra-wide bandgap, aluminum nitride (AlN) has promising potential to provide high breakdown voltage and high output power among the III-nitrides for various applications in electronics and optoelectronics. This work presents the material growth and characterization of strained-channel aluminum nitride/gallium nitride (AlN/GaN) heterostructures grown by plasma-assisted molecular beam epitaxy (PA-MBE) on AlN-on-sapphire templates. To improve the crystal quality and demonstrate the capability of the PA-MBE approach, a 180 nm thick AlN buffer is first grown on the AlN template; it acts as a back-barrier to enhance the breakdown characteristics, isolates the leakage path existing at the interface between the AlN epilayer and the AlN template, and improves heat dissipation. The grown AlN buffer features a root-mean-square roughness of 0.2 nm over a scanned area of 2×2 µm2 measured by atomic force microscopy (AFM), and exhibits full-widths at half-maximum of 95 and 407 arcsec for the (002) and (102) plane X-ray rocking curves, respectively, measured by high-resolution X-ray diffraction (HR-XRD). With a thin, strained GaN channel, an electron mobility of 294 cm2/V·s with a carrier concentration of 2.82×1013 cm-2 at room temperature is achieved in AlN/GaN double-channel heterostructures, and the depletion capacitance is as low as 14 pF as resolved by capacitance-voltage measurements, which indicates promising opportunities for future applications in next-generation high-temperature, high-frequency and high-power electronics, with further increases in electron mobility possible through optimization of the heterointerface quality.

Keywords: AlN/GaN, HEMT, MBE, homoepitaxy

Procedia PDF Downloads 69
1045 Optimization of Quercus cerris Bark Liquefaction

Authors: Luísa P. Cruz-Lopes, Hugo Costa e Silva, Idalina Domingos, José Ferreira, Luís Teixeira de Lemos, Bruno Esteves

Abstract:

The liquefaction of cork-based tree barks has attracted increasing interest due to its potential for innovation in the lumber and wood industries. In this particular study, the bark of Quercus cerris (Turkish oak) is used due to its appreciable amount of cork tissue, although of inferior quality when compared to the cork provided by other Quercus trees. This study aims to optimize alkaline-catalyzed liquefaction conditions with regard to several parameters. To better comprehend the chemical characteristics of the bark of Quercus cerris, a complete chemical analysis was performed. The liquefaction process was performed in a double-jacket reactor heated with oil, using glycerol and a mixture of glycerol/ethylene glycol as solvents and potassium hydroxide as the catalyst, and varying the temperature, liquefaction time and granulometry. Due to the low liquefaction efficiency resulting from the first experimental procedures, a study was made of different washing techniques after the filtration process, using methanol and methanol/water. The chemical analysis showed that the bark of Quercus cerris is mostly composed of suberin (ca. 30%) and lignin (ca. 24%), as well as hemicelluloses insoluble in hot water (ca. 23%). In the liquefaction stage, the results that led to the highest yields were obtained using the glycerol/ethylene glycol mixture as solvent and a time and temperature of 120 minutes and 200 ºC, respectively. It is concluded that using a granulometry of <80 mesh leads to better results, even if this parameter barely influences the liquefaction efficiency. Regarding the filtration stage, washing the residue with methanol and then distilled water leads to a considerable increase in the final liquefaction percentages, which shows that this procedure is effective at liquefying the suberin content and the lignocellulosic fraction.

Keywords: liquefaction, Quercus cerris, polyalcohol liquefaction, temperature

Procedia PDF Downloads 303
1044 Optimizing Nitrogen Fertilizer Application in Rice Cultivation: A Decision Model for Top and Ear Dressing Dosages

Authors: Ya-Li Tsai

Abstract:

Nitrogen is a vital element for crop growth, significantly influencing crop yield. In rice cultivation, farmers often apply substantial nitrogen fertilizer to maximize yields. However, excessive nitrogen application increases the risk of lodging and pest infestation, leading to yield losses. Additionally, conventional flooded irrigation methods consume significant water resources, necessitating precision agriculture and intelligent water management systems. In this study, we leveraged physiological data and field images captured by unmanned aerial vehicles, considering fertilizer treatment and irrigation as key factors. Statistical models incorporating rice physiological data, yield, and vegetation indices from the image data were developed. Missing physiological data were addressed using multiple imputation and regression methods, and regression models were established using principal component analysis and stepwise regression. Target nitrogen accumulation at key growth stages was identified to optimize fertilizer application, with the difference between actual and target nitrogen accumulation guiding recommendations for the ear dressing dosage. Field experiments conducted in 2022 validated the recommended ear dressing dosage, demonstrating no significant difference in final yield compared to traditional fertilizer levels under alternate wetting and drying irrigation. These findings highlight the efficacy of applying recommended dosages based on fertilizer decision models, offering the potential for reduced fertilizer use while maintaining yield in rice cultivation.
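The statistical pipeline described above can be sketched as imputation followed by principal-component regression; the data below are synthetic, standing in for the physiological and image-derived variables, and the iterative imputer plays the role of the multiple-imputation step.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))                        # plot-level features
y = X[:, :3].sum(axis=1) + rng.normal(0, 0.3, 120)   # "yield" response
X[rng.random(X.shape) < 0.1] = np.nan                # 10% missing values

pipe = make_pipeline(IterativeImputer(random_state=0),
                     PCA(n_components=4),
                     LinearRegression())
pipe.fit(X, y)
print("R^2 on the training data:", round(pipe.score(X, y), 3))
```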

Keywords: intelligent fertilizer management, nitrogen top and ear dressing fertilizer, rice, yield optimization

Procedia PDF Downloads 27
1043 Modeling Stream Flow with Prediction Uncertainty by Using SWAT Hydrologic and RBNN Neural Network Models for Agricultural Watershed in India

Authors: Ajai Singh

Abstract:

Simulation of hydrological processes at the watershed outlet through a modelling approach is essential for proper planning and implementation of appropriate soil conservation measures in the Damodar Barakar catchment, Hazaribagh, India, where soil erosion is a dominant problem. This study quantifies the parametric uncertainty involved in the simulation of stream flow using the Soil and Water Assessment Tool (SWAT), a watershed-scale model, and a Radial Basis Neural Network (RBNN), an artificial neural network model. Both models were calibrated and validated against measured stream flow, and the uncertainty in the SWAT model output was quantified using the Sequential Uncertainty Fitting algorithm (SUFI-2). Although both models predicted satisfactorily, the RBNN model performed better than SWAT, with R2 and NSE values of 0.92 and 0.92 during training and 0.71 and 0.70 during the validation period, respectively. Comparison of the results of the two models also indicates a wider prediction interval for the results of the SWAT model. The P-factor values show that the percentage of observed stream flow values bracketed by the 95PPU is higher for the RBNN model (91%) than for SWAT (87%). In other words, the RBNN model estimates the stream flow values more accurately and with less uncertainty. It could be stated that the RBNN model, based on simple inputs, could be used for the estimation of monthly stream flow, the filling of missing data, and testing the accuracy and performance of other models.
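The two goodness-of-fit measures quoted above can be computed from scratch as below; the observed and simulated series are invented for illustration.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    """Square of the Pearson correlation of observed vs. simulated flow."""
    return np.corrcoef(obs, sim)[0, 1] ** 2

obs = np.array([12.0, 30.5, 55.2, 80.1, 42.3, 20.7])   # invented flows
sim = np.array([14.1, 28.0, 58.9, 75.5, 45.0, 18.2])
print(f"NSE = {nse(obs, sim):.3f}, R^2 = {r_squared(obs, sim):.3f}")
```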

Keywords: SWAT, RBNN, SUFI 2, bootstrap technique, stream flow, simulation

Procedia PDF Downloads 328
1042 PathoPy2.0: Application of Fractal Geometry for Early Detection and Histopathological Analysis of Lung Cancer

Authors: Rhea Kapoor

Abstract:

Fractal dimension provides a way to characterize irregular, non-Euclidean shapes like those found in nature. The purpose of this research is to estimate the Minkowski fractal dimension of human lung images for early detection of lung cancer. Lung cancer is the leading cause of death among all types of cancer, and early histopathological analysis will help reduce deaths that are primarily due to late diagnosis. A Python application program, PathoPy2.0, was developed for analyzing medical images in pixelated format and estimating the Minkowski fractal dimension using a new box-counting algorithm that allows windowing of images for more accurate calculation in suspected areas of cancerous growth. Benchmark geometric fractals were used to validate the accuracy of the program, and changes in the fractal dimension of lung images were used to indicate the presence of issues in the lung. The accuracy of the program for the benchmark examples was between 93% and 99% of the known values of the fractal dimensions. Fractal dimension values were then calculated for lung images, from the National Cancer Institute, taken over time to detect the presence of cancerous growth. For example, the fractal dimension for a given lung increased from 1.19 to 1.27 due to cancerous growth, a significant change for a quantity that lies between 1 and 2 for 2-D images. Based on the results obtained on many lung test cases, it was concluded that the fractal dimension of human lungs can be used to diagnose lung cancer early. The ideas behind PathoPy2.0 can also be applied to study patterns in the electrical activity of the human brain and in DNA matching.
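A minimal box-counting estimate of the Minkowski dimension of a binary image, in the spirit of PathoPy2.0 (whose actual implementation also supports windowing), is sketched below; the test pattern is synthetic rather than a lung image.

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Slope of log(box count) vs. log(1/box size) over the given sizes."""
    counts = []
    for s in sizes:
        # Trim so the image tiles exactly, then count boxes with any pixel set.
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Synthetic test pattern: a thin ring, whose dimension should be near 1
yy, xx = np.mgrid[:256, :256]
ring = np.abs(np.hypot(xx - 128, yy - 128) - 80) < 1.5
print(f"estimated dimension: {box_counting_dimension(ring):.2f}")
```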

Keywords: fractals, histopathological analysis, image processing, lung cancer, Minkowski dimension

Procedia PDF Downloads 140