Search results for: MSW quantity prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3110

800 A Hierarchical Method for Multi-Class Probabilistic Classification Vector Machines

Authors: P. Byrnes, F. A. DiazDelaO

Abstract:

The Support Vector Machine (SVM) has become widely recognised as one of the leading algorithms in machine learning for both regression and binary classification. It expresses predictions in terms of a linear combination of kernel functions, referred to as support vectors. Despite its popularity amongst practitioners, SVM has some limitations, the most significant being that it generates point predictions as opposed to predictive distributions. Stemming from this issue, a probabilistic model, the Probabilistic Classification Vector Machine (PCVM), has been proposed, which respects the original functional form of SVM whilst also providing a predictive distribution. As physical system designs become more complex, an increasing number of classification tasks involving industrial applications consist of more than two classes. Consequently, this research proposes a framework which allows for the extension of PCVM to a multi-class setting. Additionally, the original PCVM framework relies on the use of type II maximum likelihood to provide estimates for both the kernel hyperparameters and the model evidence. In a high-dimensional multi-class setting, however, this approach has been shown to be ineffective due to bad scaling as the number of classes increases. Accordingly, we propose the application of Markov Chain Monte Carlo (MCMC) based methods to provide a posterior distribution over both parameters and hyperparameters. The proposed framework will be validated against current multi-class classifiers through synthetic and real-life implementations.
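The MCMC machinery proposed in the abstract can be illustrated with a minimal random-walk Metropolis sampler. This is a generic sketch on a toy one-dimensional target, not the actual PCVM posterior over kernel hyperparameters; all names and the step size are illustrative assumptions.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps=5000, step=1.0, seed=0):
    """Random-walk Metropolis sampler for a one-dimensional log-posterior."""
    rng = np.random.default_rng(seed)
    samples = np.empty(n_steps)
    x, lp = x0, log_post(x0)
    for i in range(n_steps):
        proposal = x + step * rng.standard_normal()
        lp_prop = log_post(proposal)
        # Accept with probability min(1, posterior ratio)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = proposal, lp_prop
        samples[i] = x
    return samples

# Toy target: a standard normal log-density (unnormalised)
draws = metropolis_hastings(lambda x: -0.5 * x ** 2, x0=0.0)
posterior_mean = draws[1000:].mean()   # discard burn-in
posterior_std = draws[1000:].std()
```

In the full PCVM setting the same accept/reject loop would run over the vector of model weights and kernel hyperparameters, yielding a predictive distribution rather than a point estimate.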

Keywords: probabilistic classification vector machines, multi-class classification, MCMC, support vector machines

Procedia PDF Downloads 205
799 Adaptive Strategies to Nutrient Deficiency of Doubled Diploid Citrumelo 4475: A Prospective Study Based on Structural, Ultrastructural, Physiological and Biochemical Parameters

Authors: J. Oustric, L. Berti, J. Santini

Abstract:

Nowadays, the objective of sustainable agriculture, and in particular organic agriculture, is to reduce the level of fertilizer inputs used in crops. Limiting the quantity of fertilizer inputs would optimize the economic result while minimizing the environmental impact. Nutrient deficiency, particularly of a major nutrient (N, P, and K), can seriously affect fruit production and quality. In citrus crops, scion/rootstock combinations are frequently used to improve tolerance to various abiotic stresses. New rootstocks are needed to respond to these constraints, and the use of new tetraploid rootstocks better adapted to lower nutrient intake could offer a promising way forward. The aim of this work was to determine whether a better tolerance to nutrient deficiency could be observed in a doubled diploid seedling and whether this tolerance could be conferred on a common clementine scion when used as a rootstock. We selected diploid (CM2x) and doubled diploid (CM4x) Citrumelo 4475 seedlings and common clementine (C) grafted onto Citrumelo 4475 diploid (C/CM2x) and doubled diploid (C/CM4x) rootstocks. Nutrient deficiency effects on the seedlings and scion/rootstock combinations were analyzed by studying anatomical, structural and ultrastructural determinants (chlorosis, stomata, ostioles, and cells and their organelles), photosynthetic properties (leaf net photosynthetic rate (Pₙₑₜ), stomatal conductance (gₛ), chlorophyll a fluorescence (Fᵥ/Fₘ)) and an oxidative marker (malondialdehyde). Nutrient deficiency affected foliar tissues, physiological parameters, and oxidative metabolism differently in leaves of seedlings depending on their ploidy level, and in the common clementine scion depending on the rootstock's ploidy level.
Both CM4x and C/CM4x presented less foliar damage (chlorosis, chloroplasts, mitochondria, and plastoglobuli), less alteration of photosynthetic processes (Pₙₑₜ, gₛ, and Fᵥ/Fₘ), and less malondialdehyde accumulation than CM2x and C/CM2x after nutrient deficiency. Doubled diploid Citrumelo 4475 can improve nutrient deficiency tolerance, and its use as a rootstock confers this tolerance to the common clementine scion.

Keywords: nutrient deficiency, oxidative stress, photosynthesis, polyploid rootstocks

Procedia PDF Downloads 106
798 Flow Characteristics around Rectangular Obstacles with the Varying Direction of Obstacles

Authors: Hee-Chang Lim

Abstract:

The study aims to understand the surface pressure distribution around bodies, such as the suction pressure at the leading edge on the top and side faces, when the aspect ratio of the bodies and the wind direction are changed, respectively. We carried out wind tunnel measurements and numerical simulations around a series of rectangular bodies (40d×80w×80h, 80d×80w×80h, 160d×80w×80h, 80d×40w×80h and 80d×160w×80h in mm³) placed in a deep turbulent boundary layer. Based on a modern numerical platform, the Navier-Stokes equations were solved with the typical two-equation (k-ε) model and the DES (Detached Eddy Simulation) turbulence model, and both are compared with the measurement data. Regarding the turbulence model, the DES model makes a better prediction than the k-ε model, especially when calculating the separated turbulent flow around a bluff body with sharp-edged corners. In order to observe the effect of wind direction on the pressure variation around the cube (e.g., 80d×80w×80h in mm), it was rotated to 0º, 10º, 20º, 30º, and 45º, which represent the salient wind directions in the tunnel. The result shows that the surface pressure variation is highly dependent upon the approaching wind direction, especially on the top and the side faces of the cube. In addition, the transverse width has a substantial effect on the variation of surface pressure around the bodies, while the longitudinal length has little or no influence.

Keywords: rectangular bodies, wind direction, aspect ratio, surface pressure distribution, wind-tunnel measurement, k-ε model, DES model, CFD

Procedia PDF Downloads 205
797 The Possibility of Using Somatosensory Evoked Potential (SSEP) as a Parameter for Cortical Vascular Dementia

Authors: Hyunsik Park

Abstract:

As the rate of cerebrovascular disease increases in old populations, the prevalence of vascular dementia is expected to rise. Therefore, the authors designed this study to explore the possibility of using somatosensory evoked potentials (SSEP) as a parameter for early diagnosis and prognosis prediction of vascular dementia in cortical vascular dementia patients. Twenty-one patients were enrolled who met the criteria for vascular dementia according to DSM-IV, ICD-10 and NINDS-AIREN, with a history of recent cognitive impairment, fluctuating progression, and neurologic deficit. We subdivided these patients into two groups, a mild dementia group and a severe dementia group, by MMSE and CDR score, and compared them with a normal control group and a patient control group with a history of cerebrovascular accident (CVA) but without dementia, using the N20 latency and amplitude of the median nerve. In this study, the mild dementia group showed significant differences in latency and amplitude from the normal control group (p < 0.05) but not from the patient control group (p > 0.05). The severe dementia group showed significant differences from both the normal control group and the patient control group (p < 0.05, p < 0.001). Since no significant difference was found between the mild dementia group and the patient control group, SSEP is of limited use as an early diagnostic test. However, the comparisons between the severe dementia group and the others showed significant results, which indicate that SSEP can predict the prognosis of vascular dementia in cortical vascular dementia patients.

Keywords: SSEP, cortical vascular dementia, N20 latency, N20 amplitude

Procedia PDF Downloads 278
796 Improving the Digestibility of Agro-Industrial Co-Products by Treatment with Isolated Fungi in the Meknes-Morocco Region

Authors: Mohamed Benaddou, Mohammed Diouri

Abstract:

A country such as Morocco generates a high quantity of agricultural and food-industry residues. A large portion of these residues is disposed of by burning or landfilling. The valorization of this waste biomass as feed is an interesting alternative because it is considered among the best sources of cheap carbohydrates. However, its nutritional yield without any pre-treatment is very low, because lignin protects cellulose, the carbohydrate used as a source of energy by ruminants. Fungal treatment is an environmentally friendly, easy and inexpensive method. This study investigated the treatment of wheat straw (WS), cedar sawdust (CS) and olive pomace (OP) with fungi selected according to their carbon source, in order to improve digestibility. Two were selected in a culture medium in which cellulose was the only carbon source: Cosmospora viridescens (C. vir) and Penicillium crustosum (P. crus); two were selected in a culture medium in which lignin was the only carbon source: Fusarium oxysporum (F. oxy) and Fusarium sp. (F. sp); and two in a culture medium where cellulose and lignin were both carbon sources at the same time: Fusarium solani (F. solani) and Penicillium chrysogenum (P. chryso). P. chryso degraded the most CS cellulose. It is very important to notice that delignification by F. solani reached 70% after 12 weeks of treatment of wheat straw. Ligninase activity was detected in F. solani, F. sp and F. oxysporum, which made it possible to delignify the treated substrates. Delignification by C. vir was negligible in all three substrates after 12 weeks of treatment. P. crus and P. chryso degraded the lignin very slightly in WS (it did not exceed 12% after 12 weeks of treatment), but in OP this delignification was slight, reaching 25% and 13% for P. chryso and P. crus, respectively. P. chryso allowed 30% degradation of lignin from 4 weeks of treatment.
The degradation of the lignin reached its maximum within 8 weeks of treatment for most of the fungi, except F. solani, which continued degrading after this period. The variation in digestibility (IVTD variation) differed highly significantly from fungus to fungus, duration to duration, substrate to substrate, and across their interactions (P < 0.001). Indeed, all the fungi increased digestibility after 12 weeks of treatment, with differences in the degree of this increase. F. solani and F. oxy increased digestibility more than the others; this digestibility gain exceeded 50% in CS and OP but did not exceed 20% for WS after treatment with F. oxy. The IVTD variation did not exceed 20% in CS treated with P. chryso but reached 45% after 8 weeks of treatment in WS.

Keywords: lignin, cellulose, digestibility, fungi, treatment, lignocellulosic biomass

Procedia PDF Downloads 183
795 Development of Biodegradable Wound Healing Patch of Curcumin

Authors: Abhay Asthana, Shally Toshkhani, Gyati Shilakari

Abstract:

The objective of the present research work is to develop a topical biodegradable dermal patch formulation to aid accelerated wound healing. It is always better for patient compliance to be able to reduce the frequency of dressings while improving drug delivery and overall therapeutic efficacy. In the present study, an optimized formulation using biodegradable components was obtained by evaluating polymers and excipients (HPMC K4M, ethylcellulose, povidone, polyethylene glycol and gelatin) to impart significant folding endurance, elasticity, and strength. Molten gelatin was mixed with ethylene glycol. Chitosan dissolved in acidic medium was mixed into the gelatin mixture with stirring. With continued stirring, curcumin was added to the mixture with the aid of DCM and methanol in an optimized ratio of 60:40 to get a homogeneous dispersion. Polymers were dispersed into the final formulation with stirring. The mixture was sonicated and cast to get the film form. All steps were carried out under strict aseptic conditions. The final formulation was a thin, uniformly smooth-textured film with a dark brown-yellow color. In the optimized formulation, the film was found to have a folding endurance of around 20 to 21 folds without a crack at RT (23°C). The drug content was in the range of 96 to 102%, and it passed the content uniformity test. The final moisture content of the optimized formulation film was NMT 9.0%. The films passed stability studies conducted at refrigerated conditions (4 ± 0.2°C) and at room temperature (23 ± 2°C) for 30 days. Further, the drug content and texture remained undisturbed in stability studies conducted at RT (23 ± 2°C) for 45 and 90 days. The percentage cumulative drug release was found to be 80% in 12 h and matched the biodegradation rate as tested in vivo, with correlation factor R² > 0.9. In the in vivo study, one dose of equivalent quantity was applied topically every 2 days.
The data demonstrated a significant improvement in percentage wound contraction in contrast to the control and the plain drug, respectively, over the given period. The film-based formulation developed shows promising results in terms of stability and in vivo performance.

Keywords: wound healing, biodegradable, polymers, patch

Procedia PDF Downloads 451
794 Wind Speed Forecasting Based on Historical Data Using Modern Prediction Methods in Selected Sites of Geba Catchment, Ethiopia

Authors: Halefom Kidane

Abstract:

This study aims to assess the wind resource potential and characterize the urban wind patterns in Hawassa City, Ethiopia. The estimation and characterization of wind resources are crucial for sustainable urban planning, renewable energy development, and climate change mitigation strategies. A secondary data collection method was used to carry out the study. The data collected at 2 meters were analyzed statistically and extrapolated to the standard heights of 10 meters and 30 meters using the power law equation. The standard deviation method was used to calculate the values of the scale and shape factors. From the analysis presented, the maximum and minimum mean daily wind speeds at 2 meters were 1.33 m/s and 0.05 m/s in 2016, 1.67 m/s and 0.14 m/s in 2017, and 1.61 m/s and 0.07 m/s in 2018, respectively. The maximum monthly average wind speed of Hawassa City at 2 meters in 2016 was noticed in December, at around 0.78 m/s, while in 2017 the maximum wind speed was recorded in January with a magnitude of 0.80 m/s, and in 2018 June had the maximum speed, at 0.76 m/s. On the other hand, October was the month with the minimum mean wind speed in all years, with values of 0.47 m/s in 2016, 0.47 m/s in 2017 and 0.34 m/s in 2018. The annual mean wind speed at a height of 2 meters was 0.61 m/s in 2016, 0.64 m/s in 2017 and 0.57 m/s in 2018. From extrapolation, the annual mean wind speeds for 2016, 2017 and 2018 were 1.17 m/s, 1.22 m/s and 1.11 m/s at a height of 10 meters, and 3.34 m/s, 3.78 m/s and 3.01 m/s at a height of 30 meters, respectively. Thus, the site consists primarily of class I wind speeds, even at the extrapolated heights.
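The power-law extrapolation and the standard-deviation (empirical) Weibull method described above can be sketched as follows. The shear exponent alpha = 1/7 is a common default assumption, not the value fitted for this site, and the mean/standard-deviation pair passed to the Weibull estimator is illustrative:

```python
import math

def power_law_extrapolate(v_ref, h_ref, h_target, alpha=1.0 / 7.0):
    """Vertical wind-speed extrapolation: v = v_ref * (h_target / h_ref) ** alpha."""
    return v_ref * (h_target / h_ref) ** alpha

def weibull_params(mean_v, std_v):
    """Empirical (standard deviation) method for the Weibull shape k and scale c."""
    k = (std_v / mean_v) ** -1.086          # shape factor
    c = mean_v / math.gamma(1.0 + 1.0 / k)  # scale factor (m/s)
    return k, c

# 2016 annual mean at 2 m from the abstract, extrapolated to 10 m
v10 = power_law_extrapolate(0.61, 2.0, 10.0)

# Illustrative mean and standard deviation, not the site's measured values
k, c = weibull_params(5.0, 2.0)
```

With the assumed 1/7 exponent the extrapolated value differs from the abstract's 1.17 m/s, which suggests the study fitted a site-specific shear exponent rather than using the default.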

Keywords: artificial neural networks, forecasting, min-max normalization, wind speed

Procedia PDF Downloads 44
793 Reliability Modeling on Drivers’ Decision during Yellow Phase

Authors: Sabyasachi Biswas, Indrajit Ghosh

Abstract:

The random and heterogeneous behavior of vehicles in India poses a great challenge for researchers. Stop-and-go modeling at signalized intersections under heterogeneous traffic conditions has remained one of the most sought-after fields. Vehicles are often caught in the dilemma zone and are unable to decide quickly whether to stop or cross the intersection. This hampers traffic movement and may lead to accidents. The purpose of this work is to develop a stop-and-go prediction model that depicts drivers' decisions during the yellow time at signalized intersections. To accomplish this, certain traffic parameters were taken into account to develop a surrogate model. This research investigated the stop-and-go behavior of drivers by collecting data from four signalized intersections located in two major Indian cities. A model was developed to predict drivers' decision-making during the yellow phase of the traffic signal. The parameters used for modeling included distance to stop line, time to stop line, speed, and length of the vehicle. A Kriging-based surrogate model has been developed to investigate drivers' decision-making behavior in the amber phase. It is observed that the proposed approach yields a highly accurate result (97.4 percent) with the Gaussian function. The accuracy for the crossing probability was 95.45, 90.9 and 86.36 percent, respectively, as predicted by the Kriging models with Gaussian, Exponential and Linear functions.
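A zero-mean Kriging predictor with a Gaussian correlation function, the model family used above, can be sketched as follows. The feature vectors and stop probabilities are hypothetical stand-ins for the field data, and the bandwidth theta is an assumed value, not a fitted one:

```python
import numpy as np

def gaussian_corr(A, B, theta):
    """Gaussian correlation matrix: R_ij = exp(-theta * ||a_i - b_j||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-theta * d2)

def kriging_predict(X, y, Xq, theta=0.01, nugget=1e-10):
    """Simple zero-mean Kriging predictor (interpolates the training data)."""
    R = gaussian_corr(X, X, theta) + nugget * np.eye(len(X))
    r = gaussian_corr(Xq, X, theta)
    return r @ np.linalg.solve(R, y)

# Hypothetical observations: [distance to stop line (m), approach speed (m/s)]
X = np.array([[5.0, 8.0], [40.0, 15.0], [10.0, 6.0], [60.0, 20.0]])
y = np.array([0.90, 0.20, 0.95, 0.05])   # assumed stop probabilities

# Query a vehicle close to the stop line at moderate speed
p_stop = kriging_predict(X, y, np.array([[8.0, 7.0]]))
```

Because Kriging interpolates, predictions at observed points reproduce the data exactly (up to the nugget), which is the property that makes it attractive as a surrogate for expensive or scarce field observations.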

Keywords: decision-making, dilemma zone, surrogate model, Kriging

Procedia PDF Downloads 287
792 Detection and Molecular Identification of Bacteria Forming Polyhydroxyalkanoate and Polyhydroxybutyrate Isolated from Soil in Saudi Arabia

Authors: Ali Bahkali, Rayan Yousef Booq, Mohammad Khiyami

Abstract:

Soil samples were collected from five different regions in the Kingdom of Saudi Arabia. Microbiological methods including dilution and pour-plate methods were used to isolate and purify soil bacteria. The ability of the isolates to produce biopolymer was investigated on Petri dishes containing elements and substrate concentrations stimulating biopolymer production. The fluorescent stains Nile red and Nile blue were used to stain the bacterial cells producing biopolymers. In addition, Sudan black was used to detect biopolymers in bacterial cells. The isolates that produced biopolymers were identified based on their 16S rRNA gene sequences and their ability to grow and synthesize PHAs on mineral medium supplemented with 1% dates molasses as the only carbon source under nitrogen limitation. During the study, 293 bacterial isolates were isolated and examined. In the initial survey on the Petri dishes, 84 isolates showed the ability to produce biopolymers; these bacterial colonies developed a pink color due to the accumulation of the biopolymers in the cells. Twenty-three isolates were able to grow on dates molasses, three strains of which showed the ability to accumulate biopolymers: Bacillus sp., Ralstonia sp. and Microbacterium sp. They were detected by Nile blue A stain with fluorescence microscopy (Olympus IX 51). Among the isolated strains, Ralstonia sp. was selected after its ability to grow on dates molasses in the presence of a limited nitrogen source was confirmed. The optimum conditions for biopolymer formation by the isolated strains were investigated, including the best incubation duration (2 days), temperature (30°C) and pH (7-8). Among dates molasses concentrations of 1, 2, 3, 4 and 5% in MSM, the maximum PHB production was obtained at 1% (v/v), with the best results inoculated with 1% inoculum (OD = 1).
The best extraction method for PHA and PHB proved to be a 0.4% sodium hypochlorite solution, recovering a quantity of polymer equal to 98.79% of the cells' dry weight. The maximum PHB production was 1.79 g/L, recorded by Ralstonia sp. after 48 h, while it was 1.40 g/L for R. eutropha ATCC 17697 after 48 h.

Keywords: bacteria forming polyhydroxyalkanoate, detection, molecular, Saudi Arabia

Procedia PDF Downloads 323
791 Development of Antioxidant Rich Bakery Products by Applying Lysine and Maillard Reaction Products

Authors: Attila Kiss, Erzsébet Némedi, Zoltán Naár

Abstract:

Due to the rapidly growing number of conscious customers in recent years, more and more people look for products with positive physiological effects that may contribute to the preservation of their health. In response to these demands, the Food Science Research Institute of Budapest develops and introduces into the market new functional foods of guaranteed positive effect that contain bioactive agents. New, efficient technologies are also elaborated in order to preserve the maximum biological effect of the produced foods. The main objective of our work was the development of new functional biscuits fortified with physiologically beneficial ingredients. Bakery products constitute the base of the food nutrients' pyramid; thus, they might be regarded as the foodstuffs consumed in the largest quantity. In addition to the well-known and certified physiological benefits of lysine as an essential amino acid, a series of antioxidant-type compounds is formed as a consequence of the occurring Maillard reaction. Progress of the evoked Maillard reaction was studied by applying diverse sugars (glucose, fructose, saccharose, isosugar) and lysine at several temperatures (120-170°C). The interval of thermal treatment was also varied (10-30 min). The composition and production technologies were tailored in order to reach the maximum possible biological benefit, i.e. the highest antioxidant capacity in the biscuits. Of the examined sugar components, the extent of the Maillard-reaction-driven transformation of glucose was the most pronounced at both applied temperatures. For the precise assessment of the antioxidant activity of the products, FRAP and DPPH methods were adapted and optimised. To acquire an authentic and extensive mechanism of the occurring transformations, Maillard reaction products were identified, and relevant reaction pathways were revealed.
GC-MS and HPLC-MS techniques were applied for the analysis of the 60 generated MRPs and the characterisation of the actual transformation processes. Three plausible major transformation routes were suggested based on the analytical results and the deductive sequence of possible conversions between lysine and the sugars.

Keywords: Maillard-reaction, lysine, antioxidant activity, GC-MS and HPLC-MS techniques

Procedia PDF Downloads 456
790 Feature Based Unsupervised Intrusion Detection

Authors: Deeman Yousif Mahmood, Mohammed Abdullah Hussein

Abstract:

The goal of a network-based intrusion detection system is to classify network traffic activities into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS), using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps improve the efficiency, performance and prediction rate of the proposed approach. This paper applies the unsupervised k-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we have used the new NSL-KDD dataset, which is a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification (Normal, Attack) has been implemented. The Weka framework, a Java-based open-source software package consisting of a collection of machine learning algorithms for data mining tasks, has been used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and it takes less learning time in comparison with using the full feature set of the dataset with the same algorithm.
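The information-gain feature ranking used above can be sketched for a discrete feature; the toy traffic records and feature names below are hypothetical, not drawn from NSL-KDD:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) in bits of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - H(Y | X) for a discrete feature X."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# Toy traffic records labelled normal/attack
labels  = ['normal', 'normal', 'attack', 'attack']
f_port  = ['80', '80', '6667', '6667']   # perfectly separates the classes
f_noise = ['a', 'b', 'a', 'b']           # carries no class information
```

Features are ranked by IG and only the top-scoring ones are kept before clustering, which is what shrinks the learning time relative to the full NSL-KDD feature set.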

Keywords: information gain (IG), intrusion detection system (IDS), k-means clustering, Weka

Procedia PDF Downloads 268
789 A Framework Based on Dempster-Shafer Theory of Evidence Algorithm for the Analysis of the TV-Viewers’ Behaviors

Authors: Hamdi Amroun, Yacine Benziani, Mehdi Ammi

Abstract:

In this paper, we propose an approach for detecting the behavior of viewers of a TV program in a non-controlled environment. The experiment we propose is based on the use of three types of connected objects (a smartphone, a smartwatch, and a connected remote control). 23 participants were observed while watching their TV programs during three phases: before, during and after watching a TV program. Their behaviors were detected using an approach based on the Dempster-Shafer Theory (DST) in two phases. The first phase is to dynamically approximate the mass functions using an approach based on the correlation coefficient. The second phase is to calculate the approximate mass functions. To approximate the mass functions, two approaches were tested. The first approach was to divide each feature's data space into cells, each with a specific probability distribution over the behaviors; the probability distributions were computed statistically (estimated by the empirical distribution). The second approach was to predict the TV-viewing behaviors using classifier algorithms and to add uncertainty to the prediction based on the uncertainty of the model. Results showed that combining the fusion rule with the computation of the initial approximate mass functions using a classifier led to overall success rates of 96%, 95% and 96% for the first, second and third TV-viewing phases, respectively. The results were also compared to those found in the literature. This study aims to anticipate certain actions in order to maintain the attention of TV viewers towards the proposed TV programs with usual connected objects, taking into account the various uncertainties that can be generated.
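Dempster's rule of combination, the core of the DST fusion step described above, can be sketched as follows. The frame of discernment and the mass values are hypothetical stand-ins for the paper's sensor evidence:

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions keyed by frozenset focal elements."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict; the masses cannot be combined")
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Hypothetical evidence over the behaviors {watching, distracted}
W, D = frozenset({"watching"}), frozenset({"distracted"})
m_watch  = {W: 0.7, W | D: 0.3}          # e.g. smartwatch evidence
m_remote = {W: 0.6, D: 0.2, W | D: 0.2}  # e.g. remote-control evidence

fused = dempster_combine(m_watch, m_remote)
```

Note how mass on the whole frame (W | D) expresses each sensor's uncertainty, which is exactly what the classifier-based variant injects into its predictions.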

Keywords: IoT, TV-viewing behavior identification, automatic classification, unconstrained environment

Procedia PDF Downloads 200
788 Monocular Depth Estimation Benchmarking with Thermal Dataset

Authors: Ali Akyar, Osman Serdar Gedik

Abstract:

Depth estimation is a challenging computer vision task that involves estimating the distance between objects in a scene and the camera; it predicts how far each pixel in the 2D image is from the capture point. There are several important Monocular Depth Estimation (MDE) studies based on Vision Transformers (ViT). We benchmark three major studies. The first work aims to build a simple and powerful foundation model that deals with any image under any condition. The second work proposes a method of mixing multiple datasets during training together with a robust training objective. The third work combines generalization performance with state-of-the-art results on specific datasets. Although there are studies with thermal images too, we wanted to benchmark these three non-thermal, state-of-the-art studies on a hybrid image dataset captured with Multi-Spectral Dynamic Imaging (MSX) technology. MSX technology produces detailed thermal images by bringing together the thermal and visual spectrums. Thanks to this technology, our dataset images are not as blurry and poorly detailed as normal thermal images; on the other hand, they are not taken under the ideal lighting conditions of RGB images. We compared the three methods under test on our thermal dataset, a comparison that has not been done before. Additionally, we propose an image enhancement deep learning model for thermal data. This model helps extract the features required for monocular depth estimation. The experimental results demonstrate that, after using our proposed model, the performance of the three methods under test increased significantly for thermal image depth prediction.

Keywords: monocular depth estimation, thermal dataset, benchmarking, vision transformers

Procedia PDF Downloads 15
787 Factors that Predict Pre-Service Teachers' Decision to Integrate E-Learning: A Structural Equation Modeling (SEM) Approach

Authors: Mohd Khairezan Rahmat

Abstract:

Since the impetus of becoming a developed country by the year 2020, the Malaysian government has been proactive in strengthening the integration of ICT into the national educational system. Teacher-education programs have the responsibility to prepare the nation's future teachers by instilling in them the desire, confidence, and ability to fully utilize the potential of ICT in their instructional process. In an effort to fulfill this responsibility, teacher-education programs are beginning to create alternative means of preparing cutting-edge teachers. One of these alternatives is the student learning portal. In line with this mission, this study investigates Faculty of Education, Universiti Teknologi MARA (UiTM) pre-service teachers' perceptions of usefulness, attitude, and ability toward the usage of the university learning portal, known as iLearn. The study also aimed to identify factors that might hinder the pre-service teachers' decision to use iLearn as their learning platform. Structural Equation Modeling (SEM) was employed to analyze the survey data. The findings indicate that pre-service teachers' successful integration of iLearn was highly influenced by their perception of the usefulness of the system. The findings also suggest that the more familiar the pre-service teachers are with iLearn, the more likely they are to use the system. In light of similar studies, the present findings highlight the importance of understanding users' perceptions toward any proposed technology.

Keywords: e-learning, prediction factors, pre-service teacher, structural equation modeling (SEM)

Procedia PDF Downloads 300
786 Environmental Performance Improvement of Additive Manufacturing Processes with Part Quality Point of View

Authors: Mazyar Yosofi, Olivier Kerbrat, Pascal Mognol

Abstract:

Life cycle assessment of additive manufacturing processes has evolved significantly in recent years. Many existing studies focused mainly on energy consumption. Nowadays, new life cycle inventory acquisition methodologies have appeared in the literature and help manufacturers take into account all the input and output flows during the manufacturing step of the product life cycle. Indeed, the environmental analysis of the phenomena that occur during the manufacturing step of additive manufacturing processes is becoming well understood. It is now possible to count and measure accurately all the inventory data during the manufacturing step, so optimization of the environmental performance of processes can be considered. Environmental performance improvement can be achieved by varying process parameters. However, many of these parameters (such as manufacturing speed, the power of the energy source, or the quantity of support material) directly affect the mechanical properties, surface finish and dimensional accuracy of a functional part. This study aims to improve the environmental performance of an additive manufacturing process without deterioration of part quality. For that purpose, the authors have developed a generic method that has been applied to multiple parts made by additive manufacturing processes. First, a complete analysis of the process parameters is made in order to identify which parameters affect only the environmental performance of the process. Then, multiple parts are manufactured while varying the identified parameters. The aim of this second step is to find the optimum values of the parameters that significantly decrease the environmental impact of the process while keeping the part quality as desired. Finally, a comparison is made between parts made with the initial parameters and with the changed parameters.
In this study, the major finding claimed by the authors is the reduction of the environmental impact of an additive manufacturing process while respecting three part-quality criteria: mechanical properties, dimensional accuracy and surface roughness. Now that additive manufacturing processes can be seen as mature from a technical point of view, environmental improvement of these processes can be considered while respecting part properties. The first part of this study presents the methodology applied to multiple academic parts; then, the validity of the methodology is demonstrated on functional parts.

Keywords: additive manufacturing, environmental impact, environmental improvement, mechanical properties

Procedia PDF Downloads 263
785 Geometric, Energetic and Topological Analysis of (Ethanol)₉-Water Heterodecamers

Authors: Jennifer Cuellar, Angie L. Parada, Kevin N. S. Chacon, Sol M. Mejia

Abstract:

The purification of bio-ethanol through distillation is an unresolved issue in the biofuel industry because of the formation of the ethanol-water azeotrope, which increases the number of steps in the purification process and consequently the production costs. Therefore, understanding the nature of the mixture at the molecular level could provide new insights for improving current methods and/or designing new, more efficient purification methods. For that reason, the present study focuses on the evaluation and analysis of (ethanol)₉-water heterodecamers, the systems with the minimum molecular proportion that represents the azeotropic concentration (96% m/m in ethanol). The computational modelling was carried out at the B3LYP-D3/6-311++G(d,p) level in Gaussian 09. Initial explorations of the potential energy surface were done through two methods, simulated annealing runs and molecular dynamics trajectories, besides intuitive structures obtained from smaller (ethanol)n-water heteroclusters, n = 7, 8 and 9. The energetic ordering of the seven stable heterodecamers identifies the most stable heterodecamer (Hdec-1) as a structure forming a bicyclic geometry through O-H---O hydrogen bonds (HBs), in which the water molecule acts as a double proton donor. Hdec-1 combines 1 water molecule with the same quantity of every ethanol conformer, that is, 3 trans, 3 gauche 1 and 3 gauche 2; its abundance is 89%, and its decamerization energy is -80.4 kcal/mol, i.e. 13 kcal/mol more stable than the least stable heterodecamer. In addition, as a way to understand why methanol does not form an azeotropic mixture with water, the analogous systems (ethanol)₁₀, (methanol)₁₀, and (methanol)₉-water were optimized. Topological analysis of the electron density reveals that Hdec-1 forms 33 weak interactions in total: 11 O-H---O, 8 C-H---O, 2 C-H---C hydrogen bonds and 12 H---H interactions.
The strength and abundance of the most unconventional interactions (H---H, C-H---O and C-H---C) seem to explain the preference of ethanol for forming heteroclusters instead of clusters. Moreover, the O-H---O HBs present a significant covalent character according to topological parameters such as the Laplacian of the electron density and the ratio of potential to kinetic energy density evaluated at the bond critical points, which take negative values and values between 1 and 2, respectively.
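The reported 89% abundance of Hdec-1 is consistent with a Boltzmann weighting of the heterodecamer energies. As an illustrative sketch, populations at 298.15 K can be estimated as follows; the intermediate relative energies below are hypothetical, and only the endpoints (0 kcal/mol for Hdec-1 and the 13 kcal/mol gap to the least stable structure) come from the study.

```python
import math

def boltzmann_abundances(rel_energies_kcal, T=298.15):
    """Relative populations from energies above the global minimum (kcal/mol)."""
    R = 1.987204e-3  # gas constant in kcal/(mol*K)
    weights = [math.exp(-e / (R * T)) for e in rel_energies_kcal]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical relative energies for the seven heterodecamers; only the
# endpoints (0.0 for Hdec-1, 13.0 for the least stable) are from the study.
rel_e = [0.0, 1.5, 2.0, 3.1, 4.2, 6.8, 13.0]
populations = boltzmann_abundances(rel_e)
```

With these assumed energies, the lowest-lying structure dominates the ensemble at roughly the abundance the abstract reports.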

Keywords: ADMP, DFT, ethanol-water azeotrope, Grimme dispersion correction, simulated annealing, weak interactions

Procedia PDF Downloads 88
784 Advanced Numerical and Analytical Methods for Assessing Concrete Sewers and Their Remaining Service Life

Authors: Amir Alani, Mojtaba Mahmoodian, Anna Romanova, Asaad Faramarzi

Abstract:

Pipelines are extensively used engineering structures which convey fluid from one place to another. Most of the time, pipelines are placed underground, where they are loaded by soil weight and traffic. Corrosion of the pipe material is the most common form of pipeline deterioration and should be considered in both the strength and serviceability analysis of pipes. This research focuses on concrete pipes in sewage systems (concrete sewers). It first investigates how to incorporate the effect of corrosion, as a time-dependent deterioration process, into the structural and failure analysis of this type of pipe. Then three probabilistic time-dependent reliability analysis methods, including first passage probability theory, the gamma-distributed degradation model and the Monte Carlo simulation technique, are discussed and developed. Sensitivity analysis indexes which can be used to identify the most important parameters affecting pipe failure are also discussed. The reliability analysis methods developed in this paper serve as rational tools for decision makers with regard to the strengthening and rehabilitation of existing pipelines. The results can be used to obtain a cost-effective strategy for the management of the sewer system.
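As a minimal illustration of the Monte Carlo technique named above, the sketch below estimates a time-dependent failure probability for a corroding pipe wall. The wall thickness, corrosion-rate distribution, and critical depth fraction are hypothetical placeholders, not values from the study.

```python
import random

def failure_probability(t_years, wall_mm=50.0, rate_mean=0.5, rate_sd=0.1,
                        critical_frac=0.5, n=20000, seed=1):
    """Monte Carlo estimate of the probability that corrosion depth exceeds
    a critical fraction of the wall thickness by time t. The corrosion rate
    (mm/year) is drawn from a normal distribution truncated at zero."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        rate = max(0.0, rng.gauss(rate_mean, rate_sd))
        if rate * t_years > critical_frac * wall_mm:
            failures += 1
    return failures / n

pf_10 = failure_probability(10.0)   # early in service life
pf_50 = failure_probability(50.0)   # late in service life
```

Repeating the estimate over a grid of service times yields the kind of time-dependent reliability curve the paper discusses.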

Keywords: reliability analysis, service life prediction, Monte Carlo simulation method, first passage probability theory, gamma distributed degradation model

Procedia PDF Downloads 431
783 The Rational Design of Original Anticancer Agents Using Computational Approach

Authors: Majid Farsadrooh, Mehran Feizi-Dehnayebi

Abstract:

Serum albumin is the most abundant protein in the circulatory system of a wide variety of organisms. Besides being a significant macromolecule, it contributes to osmotic blood pressure and plays a major role in drug disposition and efficacy. Molecular docking simulation can improve in silico drug design and discovery procedures by proposing a lead compound and developing it from the discovery step to the clinic. In this study, molecular docking simulation was applied to select a lead molecule through an investigation of the interaction of two anticancer drugs (Alitretinoin and Abemaciclib) with Human Serum Albumin (HSA). Then, a series of new compounds (a-e) were suggested by modifying the lead molecule. Density functional theory (DFT) calculations, including MEP maps and HOMO-LUMO analysis, were used for the newly proposed compounds to predict the reactive zones on the molecules, their stability, and their chemical reactivity. The DFT calculations illustrated that these new compounds are stable. The estimated binding free energy (ΔG) values for compounds a-e were -5.78, -5.81, -5.95, -5.98, and -6.11 kcal/mol, respectively. Finally, the pharmaceutical properties and toxicity of these new compounds were estimated with the OSIRIS DataWarrior software. The results indicated no risk of tumorigenic, irritant, reproductive, or mutagenic effects for compounds d and e. As a result, compounds d and e could be selected for further study as potential therapeutic candidates. Moreover, combining molecular docking simulation with the prediction of pharmaceutical properties helps to discover new potential drug compounds.
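The lead-selection logic described above can be sketched in a few lines: rank the candidates by their estimated binding free energy and keep only those that passed the toxicity screen. The ΔG values are taken from the abstract; the data layout is illustrative.

```python
# Estimated binding free energies (kcal/mol) for compounds a-e, from the study
binding_dg = {"a": -5.78, "b": -5.81, "c": -5.95, "d": -5.98, "e": -6.11}

# Compounds reported free of tumorigenic/irritant/reproductive/mutagenic risk
passed_toxicity = {"d", "e"}

# Rank toxicity-screened compounds, most negative (strongest binding) first
candidates = sorted(passed_toxicity, key=binding_dg.get)
```

With these numbers, compound e (ΔG = -6.11 kcal/mol) ranks first among the screened candidates.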

Keywords: drug design, anticancer, computational studies, DFT analysis

Procedia PDF Downloads 49
782 Assessing the Competitiveness of Green Charcoal Energy as an Alternative Source of Cooking Fuel in Uganda

Authors: Judith Awacorach, Quentin Gausset

Abstract:

Wood charcoal and firewood are the primary cooking fuels in most Sub-Saharan African countries, including Uganda. This leads to unsustainable forest use and rapid deforestation. Green charcoal (made from agricultural residues that are carbonized, reduced to char powder, and bound into briquettes with a binder such as sugar molasses, cassava flour or clay) is a promising and sustainable alternative to wood charcoal and firewood. It is considered renewable energy because the carbon emissions released by the combustion of green charcoal are immediately captured again in the next agricultural cycle. If practiced on a large scale, this has the potential to replace wood charcoal and stop deforestation. However, the uptake of green charcoal for cooking remains low in Uganda despite the introduction of the technology 15 years ago. The present paper reviews the barriers to the production and commercialization of green charcoal. The paper is based on the study of 13 production sites, recording the raw materials used, the production techniques, the quantity produced, the frequency of production, and the business model. Observations were made on each site, and interviews were conducted with the managers of the facilities and with one or two employees in the larger facilities. We also interviewed project administrators from four funding agencies interested in financing green charcoal production. Our research identifies the main barriers as follows: 1) The price of green charcoal is not competitive (it is more labor- and capital-intensive than wood charcoal). 2) There is a problem with quality control and labeling (one finds a wide variety of green charcoal with very different performances). 3) The carbonization of agricultural crop residues is a major bottleneck in green charcoal production. Most briquettes are produced with wood charcoal dust or powder, a by-product of wood charcoal.
As such, they increase the efficiency of wood charcoal use but do not yet replace it. 4) There is almost no marketing chain for the product (most green charcoal is sold directly from producer to consumer without any middleman). 5) Financing institutions are reluctant to lend money for this kind of activity. 6) Storage can be challenging since briquettes can disintegrate due to moisture. In conclusion, a number of important barriers need to be overcome before green charcoal can become a serious alternative to wood charcoal.

Keywords: briquettes, competitiveness, deforestation, green charcoal, renewable energy

Procedia PDF Downloads 22
781 MiRNA Regulation of CXCL12β during Inflammation

Authors: Raju Ranjha, Surbhi Aggarwal

Abstract:

Background: Inflammation plays an important role in infectious and non-infectious diseases. MiRNAs are also reported to play a role in inflammation and associated cancers, as is the chemokine CXCL12. The CXCL12/CXCR4 chemokine axis is involved in the pathogenesis of IBD, especially UC. Supplementation of CXCL12 induces homing of dendritic cells to the spleen and enhances control of the Plasmodium parasite in BALB/c mice. Prolonged inflammation of the colon in UC patients increases the risk of developing colorectal cancer; we therefore examined the regulation of CXCL12β by miRNA in UC, and the expression differences of CXCL12β and its targeting miRNA in the cancer-susceptible area of the colon of UC patients. Aim: The aim of this study was to determine how CXCL12β expression is regulated by miRNA in inflammation. Materials and Methods: Biopsy and blood samples were collected from UC patients and non-IBD controls. mRNA expression was analyzed using microarray and real-time PCR. miRNAs targeting CXCL12β were identified using online target prediction tools. Expression of CXCL12β in blood samples and cell line supernatant was analyzed using ELISA. The miRNA target was validated using a dual luciferase assay. Results and conclusion: We found that miR-200a regulates the expression of CXCL12β in UC. Expression of CXCL12β was increased in the cancer-susceptible part of the colon, while expression of its targeting miRNA was decreased in the same part of the colon. miR-200a regulates CXCL12β expression in inflammation and may be an important therapeutic target in inflammation-associated cancer.

Keywords: inflammation, miRNA, regulation, CXCL12

Procedia PDF Downloads 246
780 Prediction of Distillation Curve and Reid Vapor Pressure of Dual-Alcohol Gasoline Blends Using Artificial Neural Network for the Determination of Fuel Performance

Authors: Leonard D. Agana, Wendell Ace Dela Cruz, Arjan C. Lingaya, Bonifacio T. Doma Jr.

Abstract:

The purpose of this paper is to predict the fuel performance parameters, which include the drivability index (DI), vapor lock index (VLI), and vapor lock potential, from the distillation curve and Reid vapor pressure (RVP) of dual alcohol-gasoline fuel blends. The distillation curve and Reid vapor pressure were predicted using artificial neural networks (ANNs) with macroscopic properties such as boiling points, RVP, and molecular weights as the input layer. The ANN consists of 5 hidden layers and was trained using Bayesian regularization. The training mean square error (MSE) and R-value for the ANN of RVP are 91.4113 and 0.9151, respectively, while the training MSE and R-value for the distillation curve are 33.4867 and 0.9927. Fuel performance analysis of the dual alcohol-gasoline blends indicated that highly volatile gasoline blended with dual alcohols results in fuel blends that do not comply with the D4814 standard. Mixtures of low-volatility gasoline and 10% methanol or 10% ethanol can still be blended with up to 10% C3 and C4 alcohols. Intermediate-volatility gasoline containing 10% methanol or 10% ethanol can still be blended with C3 and C4 alcohols that have low RVPs, such as 1-propanol, 1-butanol, 2-butanol, and i-butanol. Biography: Graduate School of Chemical, Biological, and Materials Engineering and Sciences, Mapua University, Muralla St., Intramuros, Manila, 1002, Philippines
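The trained network itself is not published in the abstract, but its forward structure (a handful of macroscopic inputs passing through five hidden layers to a scalar output) can be sketched with NumPy. The layer widths, tanh activation, and random weights below are assumptions for illustration, not the study's trained model.

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass through a fully connected network: tanh hidden
    layers followed by a linear output unit."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(a @ W + b)
    return a @ weights[-1] + biases[-1]

rng = np.random.default_rng(0)
# Assumed layout: 3 inputs (boiling point, RVP, molecular weight),
# five hidden layers of 10 units each, 1 output (predicted blend RVP).
sizes = [3, 10, 10, 10, 10, 10, 1]
weights = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

# Example input: hypothetical ethanol-like property values
y = mlp_forward(np.array([78.4, 55.0, 46.07]), weights, biases)
```

Bayesian-regularized training, as used in the study, would fit these weights while penalizing their magnitude; only the untrained forward pass is shown here.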

Keywords: dual alcohol-gasoline blends, distillation curve, machine learning, reid vapor pressure

Procedia PDF Downloads 72
779 Estimation of Maize Yield by Using a Process-Based Model and Remote Sensing Data in the Northeast China Plain

Authors: Jia Zhang, Fengmei Yao, Yanjing Tan

Abstract:

The accurate estimation of crop yield is of great importance for food security. In this study, a process-based mechanistic model was modified to estimate the yield of a C4 crop by modifying the carbon metabolic pathway in the photosynthesis sub-module of the RS-P-YEC (Remote-Sensing-Photosynthesis-Yield Estimation for Crops) model. The yield was calculated by multiplying net primary productivity (NPP) by the harvest index (HI) derived from the ratio of grain to stalk yield. The modified RS-P-YEC model was used to simulate maize yield in the Northeast China Plain during the period 2002-2011. Statistical maize yield data from the study area were used to validate the simulated results at the county level. The results showed that the Pearson correlation coefficient (R) between the simulated yield and the statistical data was 0.827 (P < 0.01), and the root mean square error (RMSE) was 712 kg/ha with a relative error (RE) of 9.3%. From 2002 to 2011, the yield of the maize planting zone in the Northeast China Plain increased, with a decreasing coefficient of variation (CV). The spatial pattern of simulated maize yield was consistent with the actual distribution in the Northeast China Plain, with an increasing trend from the northeast to the southwest. Hence, the results demonstrated that the modified process-based model coupled with remote sensing data is suitable for spatial yield prediction of maize in the Northeast China Plain.
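The core yield calculation (yield = NPP × HI) and the RMSE-style validation can be illustrated directly. The NPP, harvest index, and observed-yield numbers below are hypothetical, not the study's data.

```python
def maize_yield(npp, hi):
    """Yield as net primary productivity times harvest index (RS-P-YEC form)."""
    return npp * hi

def rmse(pred, obs):
    """Root mean square error between simulated and observed values."""
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

# Hypothetical county-level NPP (kg/ha) and a single harvest index
npp_values = [15000.0, 18000.0, 16500.0]
hi = 0.45
simulated = [maize_yield(n, hi) for n in npp_values]

# Hypothetical statistical yields (kg/ha) for validation
observed = [7000.0, 7900.0, 7600.0]
error = rmse(simulated, observed)
relative_error = error / (sum(observed) / len(observed))
```

In the study the same comparison was run over all counties of the Northeast China Plain, giving the reported RMSE of 712 kg/ha and RE of 9.3%.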

Keywords: process-based model, C4 crop, maize yield, remote sensing, Northeast China Plain

Procedia PDF Downloads 337
778 Thermochemical Modelling for Extraction of Lithium from Spodumene and Prediction of Promising Reagents for the Roasting Process

Authors: Allen Yushark Fosu, Ndue Kanari, James Vaughan, Alexandre Changes

Abstract:

Spodumene is a lithium-bearing mineral of great interest due to the increasing demand for lithium in emerging electric and hybrid vehicles. The conventional method of processing the mineral requires an inevitable thermal transformation of the α-phase to the β-phase, followed by roasting with suitable reagents to produce lithium salts for downstream processes. The selection of an appropriate reagent for roasting is key to the success of the process and the overall lithium recovery. Several studies have been conducted to identify good reagents for process efficiency, leading to sulfation, alkaline roasting, chlorination, fluorination, and carbonizing as the methods of lithium recovery from the mineral. HSC Chemistry is thermochemical software that can be used to model metallurgical process feasibility and predict possible reaction products prior to experimental investigation. The software was employed to investigate and explain the characteristics of the various reagents employed in the literature for spodumene roasting up to 1200°C. The simulations indicated that all reagents used for sulfation and alkaline roasting are feasible in the direction of lithium salt production. Chlorination was only feasible when Cl₂ and CaCl₂ were used as chlorination agents, but not NaCl or KCl. Depending on the kind of lithium salt formed during carbonizing and fluorination, the process was either spontaneous or non-spontaneous throughout the temperature range investigated. The HSC software was further used to simulate and predict some promising reagents which may be equally good for roasting the mineral for efficient lithium extraction but have not yet been considered by researchers.
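The feasibility screening performed in such thermochemical software rests on the sign of the Gibbs free energy, ΔG = ΔH − TΔS: a reaction is thermodynamically feasible where ΔG < 0. A minimal sketch with hypothetical (not HSC-derived) reaction data:

```python
def gibbs_energy(dH_kJ, dS_J_per_K, T_K):
    """Delta G = Delta H - T * Delta S, with Delta H in kJ/mol and
    Delta S in J/(mol*K); returns kJ/mol."""
    return dH_kJ - T_K * dS_J_per_K / 1000.0

def is_feasible(dH_kJ, dS_J_per_K, T_K):
    # Negative Gibbs energy marks a spontaneous (feasible) reaction
    return gibbs_energy(dH_kJ, dS_J_per_K, T_K) < 0.0

# Hypothetical endothermic roasting reaction with a positive entropy change:
# it becomes feasible only above some switching temperature.
temps_K = range(298, 1473, 100)
feasible_temps = [T for T in temps_K if is_feasible(120.0, 150.0, T)]
```

Sweeping ΔG over the roasting temperature range in this way reproduces the kind of feasibility-versus-temperature screening the study performed for each candidate reagent.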

Keywords: thermochemical modelling, HSC chemistry software, lithium, spodumene, roasting

Procedia PDF Downloads 132
777 Procedural Protocol for Dual Energy Computed Tomography (DECT) Inversion

Authors: Rezvan Ravanfar Haghighi, S. Chatterjee, Pratik Kumar, V. C. Vani, Priya Jagia, Sanjiv Sharma, Susama Rani Mandal, R. Lakshmy

Abstract:

Dual energy computed tomography (DECT) aims at recording the HU(V) values for a sample at two different voltages, V = V1, V2, and thus obtaining the electron density (ρe) and effective atomic number (Zeff) of the substance. In the present paper, we aim to obtain a numerical algorithm by which (ρe, Zeff) can be recovered from the HU(100) and HU(140) data, where V = 100, 140 kVp. The idea is to use this inversion method to characterize and distinguish between lipid and fibrous coronary artery plaques. With the aim of developing the inversion algorithm for low-Zeff materials, as is the case with non-calcified coronary artery plaque, we prepared aqueous samples whose calculated values of (ρe, Zeff) lie in the ranges 2.65×10²³ ≤ ρe ≤ 3.64×10²³ per cc and 6.80 ≤ Zeff ≤ 8.90. We filled the phantom with these known samples and experimentally determined HU(100) and HU(140) for the same pixels. Knowing that the HU(V) values are related to the attenuation coefficient of the system, we present an algorithm by which (ρe, Zeff) is calibrated with respect to (HU(100), HU(140)). The calibration is done with a known set of 20 samples; its accuracy is checked with a different set of 23 known samples. We find that the calibration gives ρe with an accuracy of ±4%, while Zeff is found within ±1% of the actual value, with 95% confidence. In this inversion method, the (ρe, Zeff) of the scanned sample can be found by eliminating the effects of the CT machine and by ensuring that the determinations of the two unknowns (ρe, Zeff) do not interfere with each other. It is found that this algorithm can be used to predict the chemical characteristics (ρe, Zeff) of unknown scanned materials at the 95% confidence level by inversion of the DECT data.
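A calibration of this kind can be sketched as a least-squares fit mapping the two HU readings to ρe. The synthetic data and the linear model form below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def fit_calibration(hu100, hu140, target):
    """Least-squares coefficients for target ~ a*HU(100) + b*HU(140) + c."""
    A = np.column_stack([hu100, hu140, np.ones(len(hu100))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coef

def predict(coef, hu100, hu140):
    """Apply the fitted calibration to new HU readings."""
    return coef[0] * hu100 + coef[1] * hu140 + coef[2]

# Synthetic calibration samples: HU pairs with a known linear rho_e trend
hu100 = np.array([10.0, 25.0, 40.0, 60.0, 80.0])
hu140 = np.array([8.0, 20.0, 33.0, 50.0, 66.0])
rho_e = 2.65 + 0.012 * hu100 + 0.001 * hu140  # units of 1e23 per cc, illustrative
coef = fit_calibration(hu100, hu140, rho_e)
```

In practice one would fit on the 20 known samples and check the accuracy on the independent set of 23, exactly as described above; a parallel fit would be made for Zeff.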

Keywords: chemical composition, dual-energy computed tomography, inversion algorithm

Procedia PDF Downloads 408
776 Estimation of the Length and Location of Ground Surface Deformation Caused by the Reverse Faulting

Authors: Nader Khalafian, Mohsen Ghaderi

Abstract:

Field observations have revealed many examples of structures which were damaged by ground surface deformation caused by faulting phenomena. In this paper, efforts were made to estimate the length and location of the zone of the ground surface where large displacements are created by reverse faulting. This research was conducted in two steps. (1) In the first step, a 2D explicit finite element model was developed using ABAQUS software. A subroutine for the Mohr-Coulomb failure criterion with a strain-softening model was developed by the authors in order to properly model the stress-strain behavior of the soil in the fault rupture zone. The results of the numerical analysis were verified against available centrifuge experiments, and reasonable agreement was found between the numerical and experimental data. (2) In the second step, the effects of the fault dip angle (δ), depth of the soil layer (H), dilation and friction angles of the sand (ψ and φ), and the amount of fault offset (d) on the soil surface displacement and fault rupture path were investigated. An artificial neural network (ANN) based model, as a powerful prediction tool, was developed to generate a general model for predicting faulting characteristics. A properly sized database was created to train and test the network. It was found that the length and location of the zone of displaced ground surface can be accurately estimated using the proposed model.

Keywords: reverse faulting, surface deformation, numerical, neural network

Procedia PDF Downloads 405
775 Determination of Direct Solar Radiation Using Atmospheric Physics Models

Authors: Pattra Pukdeekiat, Siriluk Ruangrungrote

Abstract:

This work was undertaken to determine direct solar radiation precisely using atmospheric physics models, since accurate prediction of solar radiation is necessary and useful for solar energy applications, including atmospheric research. Models and techniques for calculating regional direct solar radiation are essential when instrumental measurements are unavailable. The investigation was mathematically governed by six astronomical parameters, i.e. declination (δ), hour angle (ω), solar time, solar zenith angle (θz), extraterrestrial radiation (Iso) and eccentricity (E0), along with two atmospheric parameters, i.e. air mass (mr) and dew point temperature, at the Bangna meteorological station (13.67° N, 100.61° E) in Bangkok, Thailand. Five models of solar radiation determination under the assumption of clear sky were analyzed, accompanied by three statistical tests, Mean Bias Difference (MBD), Root Mean Square Difference (RMSD) and coefficient of determination (R²), in order to validate the accuracy of the results. The calculated direct solar radiation was in the range of 491-505 W/m² with a relative percentage error of 8.41% for winter, and 532-540 W/m² with a relative percentage error of 4.89% for summer 2014. Additionally, datasets of seven continuous days representing both seasons were considered, with MBD, RMSD and R² of -0.08, 0.25, 0.86 and -0.14, 0.35, 3.29, respectively, obtained with the Kumar model for winter and the CSR model for summer. In summary, the determination of direct solar radiation based on atmospheric models and empirical equations can advantageously provide immediate and reliable values of the solar components for any site in the region without the constraint of actual measurement.
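Two of the astronomical parameters listed above, the declination and the solar zenith angle, follow from standard textbook formulas (Cooper's equation for δ and the spherical-trigonometry expression for θz). The sketch below evaluates them for the Bangna station latitude at solar noon; the chosen day of year is an arbitrary example.

```python
import math

def declination_deg(n):
    """Solar declination (Cooper's equation) for day of year n."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + n) / 365.0))

def zenith_angle_deg(lat_deg, n, hour_angle_deg):
    """Solar zenith angle from latitude, declination and hour angle:
    cos(theta_z) = sin(lat)sin(dec) + cos(lat)cos(dec)cos(omega)."""
    lat, dec, ha = map(math.radians,
                       (lat_deg, declination_deg(n), hour_angle_deg))
    cos_tz = (math.sin(lat) * math.sin(dec)
              + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.acos(cos_tz))

# Solar noon (hour angle 0) at the Bangna station (13.67 N), June solstice
tz_noon = zenith_angle_deg(13.67, 172, 0.0)
```

From θz, the air mass mr and hence the clear-sky beam radiation models discussed in the paper can be evaluated.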

Keywords: atmospheric physics models, astronomical parameters, atmospheric parameters, clear sky condition

Procedia PDF Downloads 389
774 Proposal Method of Prediction of the Early Stages of Dementia Using IoT and Magnet Sensors

Authors: João Filipe Papel, Tatsuji Munaka

Abstract:

With society aging and the number of elderly with dementia rising, researchers have been actively studying how to support the elderly in the early stages of dementia, with the objective of allowing them a better quality of life and as much independence as possible. To make this possible, most researchers in this field use the Internet of Things to monitor the elderly's activities and assist them in performing them. The most common sensor used to monitor elderly activities is the camera, due to its easy installation and configuration; the other commonly used sensor is the sound sensor. However, privacy must be considered when using these sensors. This research aims to develop a system capable of predicting the early stages of dementia based on monitoring the elderly's activities of daily living. To make this system possible, some issues need to be addressed. The first issue is elderly privacy when detecting and monitoring their Activities of Daily Living, which is a serious concern; one of the purposes of this research is to achieve this detection and monitoring without putting the privacy of the elderly at risk. To make this possible, the study focuses on an approach based on magnet sensors that collect only binary data. The second issue is to use the data collected by monitoring Activities of Daily Living to predict the early stages of dementia. To make this possible, the research team proposes developing a proprietary ontology combined with both data-driven and knowledge-driven approaches.

Keywords: dementia, activity recognition, magnet sensors, ontology, data driven and knowledge driven, IoT, activities of daily living

Procedia PDF Downloads 73
773 A Convolutional Neural Network-Based Model for Lassa Fever Virus Prediction Using Patient Blood Smear Images

Authors: A. M. John-Otumu, M. M. Rahman, M. C. Onuoha, E. P. Ojonugwa

Abstract:

A Convolutional Neural Network (CNN) model for predicting Lassa fever was built using the Python 3.8.0 programming language, alongside the Keras 2.2.4 and TensorFlow 2.6.1 libraries as the development environment, in order to reduce the current high risk of Lassa fever in West Africa, particularly in Nigeria. The study was prompted by some major flaws in the existing conventional laboratory equipment for diagnosing Lassa fever (RT-PCR), as well as flaws in AI-based techniques that have been used for probing and prognosis of Lassa fever reported in the literature. A total of 15,679 blood smear microscopic images were collected. The proposed model was trained on 70% of the dataset and tested on 30% of the microscopic images to avoid overfitting. A 3x3x3 convolution filter was used in the proposed system to extract features from the microscopic images. The proposed CNN-based model had a recall of 96%, a precision of 93%, an F1 score of 95%, and an accuracy of 94% in predicting and accurately classifying the images into clean or infected samples. Based on empirical evidence from the literature consulted, the proposed model outperformed the other existing AI-based techniques evaluated. If properly deployed, the model will assist physicians, medical laboratory scientists, and patients in making accurate diagnoses of Lassa fever cases, allowing the mortality rate due to the Lassa fever virus to be reduced through sound decision-making.
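The reported metrics follow from the standard confusion-matrix definitions. The sketch below reproduces figures close to those quoted (96% recall, ~93% precision, ~94% accuracy) from hypothetical confusion counts chosen purely for illustration.

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1 and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Hypothetical counts for a 1000-image test split (infected = positive class)
precision, recall, f1, accuracy = classification_metrics(
    tp=480, fp=36, fn=20, tn=464)
```

Evaluating the trained CNN on the 30% held-out split and tallying these four counts is exactly how such scores are obtained.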

Keywords: artificial intelligence, ANN, blood smear, CNN, deep learning, Lassa fever

Procedia PDF Downloads 88
772 Enhancing Rupture Pressure Prediction for Corroded Pipes Through Finite Element Optimization

Authors: Benkouiten Imene, Chabli Ouerdia, Boutoutaou Hamid, Kadri Nesrine, Bouledroua Omar

Abstract:

Algeria is actively enhancing gas productivity by augmenting the supply flow. However, this effort has led to increased internal pressure, posing a potential risk to pipeline integrity, particularly in the presence of corrosion defects. Sonatrach relies on a vast network of pipelines spanning 24,000 kilometers for the transportation of gas and oil. The aging of these pipelines raises the likelihood of corrosion, both internal and external, heightening the risk of rupture. To address this issue, comprehensive inspection is imperative, utilizing specialized scraping tools; these advanced tools furnish a detailed assessment of all pipeline defects. It is essential to recalculate the pressure parameters to safeguard the integrity of the corroded pipeline while ensuring continuity of production. In this context, Sonatrach employs standard limit pressure calculations such as ASME B31G (2009) and the modified ASME B31G (2012). The aim of this study is to perform a comparative analysis of various limit pressure calculation methods documented in the literature, namely DNV RP F-101, SHELL, P-CORRC, NETTO, and CSA Z662. This comparative assessment is based on a dataset comprising 329 burst tests published in the literature. Ultimately, we introduce a novel approach grounded in the finite element method, employing ANSYS software.
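Of the codes mentioned, the modified ASME B31G criterion has a compact closed form: a flow stress of SMYS + 68.95 MPa, a Folias bulging factor M, and a depth-dependent strength reduction. A sketch with an illustrative defect geometry (not one of the 329 burst tests) might look like:

```python
import math

def modified_b31g_pressure(D, t, d, L, smys):
    """Modified ASME B31G failure pressure estimate (consistent units,
    e.g. mm and MPa). D: diameter, t: wall thickness, d: defect depth,
    L: defect axial length, smys: specified minimum yield strength."""
    flow_stress = smys + 68.95  # MPa, modified B31G flow stress
    z = L * L / (D * t)
    if z <= 50.0:
        M = math.sqrt(1 + 0.6275 * z - 0.003375 * z * z)  # Folias factor
    else:
        M = 0.032 * z + 3.3
    return (2 * t * flow_stress / D) * (1 - 0.85 * d / t) / (1 - 0.85 * d / (t * M))

# Illustrative case: 508 mm pipe, 8 mm wall, 4 mm deep x 200 mm long defect
pf = modified_b31g_pressure(D=508.0, t=8.0, d=4.0, L=200.0, smys=360.0)
```

Running such a formula over the burst-test dataset and comparing predicted against measured failure pressures is the kind of benchmarking the comparative assessment performs for each code.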

Keywords: pipeline burst pressure, burst test, corrosion defect, corroded pipeline, finite element method

Procedia PDF Downloads 37
771 Prediction of Compressive Strength of Concrete from Early Age Test Results Using Design of Experiments (RSM)

Authors: Salem Alsanusi, Loubna Bentaher

Abstract:

Response surface methods (RSM) provide statistically validated predictive models that can then be manipulated to find optimal process configurations. Variation transmitted to responses from poorly controlled process factors can be accounted for by the mathematical technique of propagation of error (POE), which facilitates 'finding the flats' on the surfaces generated by RSM. The dual response approach to RSM captures the standard deviation of the output as well as its average, accounting for unknown sources of variation. Dual response plus propagation of error (POE) provides a more useful model of overall response variation. In our case, we applied this technique to predict the 28-day compressive strength of concrete: waiting 28 days for test results is time-consuming, yet the result is essential for the quality control process. This paper investigates the potential of using design of experiments (DOE-RSM) to predict the compressive strength of concrete at the 28th day. Data for this study were obtained from experimental programs at the University of Benghazi, Civil Engineering Department. A total of 114 data sets were used. The ACI mix design method was utilized for the mix design. No admixtures were used; only the main concrete mix constituents, i.e. cement, coarse aggregate, fine aggregate and water, were utilized in all mixes. Different mix proportions of the ingredients and different water-cement ratios were used. The proposed mathematical models are capable of predicting the required 28-day compressive strength of concrete from early-age test results.
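A second-order response surface of the kind RSM fits can be sketched as an ordinary least-squares regression on quadratic terms. The factors, coefficients, and synthetic data below are illustrative assumptions, not the Benghazi dataset.

```python
import numpy as np

def quadratic_design_matrix(x1, x2):
    """Second-order RSM model terms: 1, x1, x2, x1^2, x2^2, x1*x2."""
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1 ** 2, x2 ** 2, x1 * x2])

# Hypothetical factors: x1 = water/cement ratio, x2 = 7-day strength (MPa);
# response y = 28-day strength (MPa), generated synthetically with noise.
rng = np.random.default_rng(42)
x1 = rng.uniform(0.4, 0.6, 40)
x2 = rng.uniform(15.0, 30.0, 40)
y = 60.0 - 40.0 * x1 + 1.2 * x2 + rng.normal(0.0, 0.5, 40)

A = quadratic_design_matrix(x1, x2)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # fitted surface coefficients
pred = A @ coef
```

Once fitted, the surface can be interrogated for the mix proportions that maximize predicted 28-day strength, which is the optimization step RSM is used for.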

Keywords: mix proportioning, response surface methodology, compressive strength, optimal design

Procedia PDF Downloads 245