Search results for: mixture regression model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19328

18878 Automatic and Highly Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

To describe and propagate the behavior of a system, mathematical models are formulated, and parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems this approach can be incomplete, and hence imprecise, and moreover too slow to be computed efficiently. Such models may therefore be inapplicable to the numerical optimization of real systems, since optimization techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification may be available, so the system must be adapted manually. Therefore, an approach is described that generates models which overcome the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general, since it generates models for any system detached from the scientific background. Additionally, it can be used in a more general sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables, not only of single variables. This enables a far more precise representation of causal correlations. The basis and justification of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. Herewith, system changes due to aging, wear or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g. time, energy consumption, operational costs or a mixture of them, subject to additional constraints. The proposed method has been tested successfully in several complex applications with strong industrial requirements. The generated models were able to simulate the given systems with a precision error of less than one percent. Moreover, the automatic identification of correlations was able to discover previously unknown relationships. In summary, the approach is able to efficiently compute highly precise, real-time-adaptive, data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm like WORHP (We Optimize Really Huge Problems), several complex systems can now be represented by a high-precision model and optimized according to the user's wishes. The proposed methods are illustrated with different examples.
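
The paper gives no implementation; as a rough illustration of regression on products of variables (a truncated multivariate series expansion), here is a minimal sketch using scikit-learn. The library choice and the data are assumptions, not the authors' method.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical sensor data: 200 samples of 3 measured inputs
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))
# Unknown system: output depends on a product of variables (x0*x1)
y = 2.0 * X[:, 0] * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.01, 200)

# Expand features into a truncated series: x_i, x_i*x_j, x_i^2
expansion = PolynomialFeatures(degree=2, include_bias=False)
X_series = expansion.fit_transform(X)

model = LinearRegression().fit(X_series, y)
# Large coefficients flag the relevant (product) terms
for name, coef in zip(expansion.get_feature_names_out(), model.coef_):
    if abs(coef) > 0.1:
        print(name, round(coef, 3))
```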

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 377
18877 Effect of Aeration on Co-Composting of Mixture of Food Waste with Sawdust and Sewage Sludge from Nicosia Waste Water Treatment Plant

Authors: Azad Khalid, Ime Akanyeti

Abstract:

About 68% of the urban solid waste generated in the Turkish Republic of Northern Cyprus (TRNC) is household solid waste, which at present is disposed of in landfills. In addition, more than 3000 tons of sewage sludge are produced per year at the Nicosia wastewater treatment plant, and the sludge is piled up without any processing. Co-composting the organic fraction of municipal solid waste with sewage sludge both diverts municipal solid waste from landfills and offers a sound disposal route for wastewater sewage sludge. Three 10 L insulated bioreactors (R1, R2 and R3) were operated; R2 and R3 were aerated at a rate of 0.05 m³/h·kg, while R1 was not aerated. The mixture was prepared at a sewage sludge:food waste:sawdust ratio of 1:5:0.8 (w/w). The effect of aeration was monitored over the 42 days of the process through the key parameters moisture, C/N ratio, temperature and pH. Results show that a high moisture content causes problems, and a value of around 60% is recommended. The C/N ratio decreased by about 17% in the aerated reactors and by 10% without aeration, and the mixture volume was reduced by 40% in the final compost, with a particle size of 1.00 to 20.0 mm. The temperature in the aerated reactors reached the thermophilic phase, above 50 °C, but remained below 40 °C without aeration. The final pH was 6.1 in R1, 8.23 in R2 and 8.1 in R3.

Keywords: aeration, sewage sludge, food waste, sawdust, composting

Procedia PDF Downloads 56
18876 The Role of Urban Development Patterns for Mitigating Extreme Urban Heat: The Case Study of Doha, Qatar

Authors: Yasuyo Makido, Vivek Shandas, David J. Sailor, M. Salim Ferwati

Abstract:

Mitigating extreme urban heat is challenging in a desert climate such as that of Doha, Qatar, since outdoor daytime temperatures are often too high for the human body to tolerate. Recent studies demonstrate that cities in arid and semiarid areas can exhibit ‘urban cool islands’ - urban areas that are cooler than the surrounding desert. However, how temperatures vary with the time of day, and which factors drive temperature change, remain open questions. To address these questions, we examined the spatial and temporal variation of air temperature in Doha, Qatar by conducting multiple vehicle-based local temperature observations. We also employed three statistical approaches to model surface temperatures using relevant predictors for three time periods: (1) Ordinary Least Squares, (2) Regression Tree Analysis and (3) Random Forest. Although the most important determinant factors varied by day and time, distance to the coast was the significant determinant at midday. A 70%/30% holdout method was used to create a testing dataset to validate the results through Pearson’s correlation coefficient. The Pearson’s analysis suggests that the Random Forest model predicts the surface temperatures more accurately than the other methods. We conclude with recommendations about the types of development patterns that show the greatest potential for reducing extreme heat in arid climates.
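
A minimal sketch of the validation scheme described above (70%/30% holdout, Pearson's r for OLS, a regression tree, and a random forest); the data, features, and library choice are assumptions, not the authors' code.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Hypothetical predictors (e.g., distance to coast, NDVI, albedo) vs surface temperature
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 3))
y = 45 - 8 * X[:, 0] + 3 * X[:, 1] ** 2 + rng.normal(0, 0.5, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=1)

models = {
    "OLS": LinearRegression(),
    "Regression tree": DecisionTreeRegressor(max_depth=6, random_state=1),
    "Random forest": RandomForestRegressor(n_estimators=300, random_state=1),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)  # fit on 70%, predict 30%
    r, _ = pearsonr(y_te, pred)
    print(f"{name}: Pearson r = {r:.3f}")
```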

Keywords: desert cities, tree-structure regression model, urban cool island, vehicle temperature traverse

Procedia PDF Downloads 367
18875 Mapping of Urban Micro-Climate in Lyon (France) by Integrating Complementary Predictors at Different Scales into Multiple Linear Regression Models

Authors: Lucille Alonso, Florent Renard

Abstract:

The characterization of the urban heat island (UHI) and its interactions with climate change and urban climates is a major research and public health issue, due to the increasing urbanization of the population. Addressing it requires better knowledge of the UHI and of the micro-climate in urban areas, combining measurements and modelling. This study contributes to this topic by evaluating microclimatic conditions in dense urban areas of the Lyon Metropolitan Area (France) using a combination of traditionally used data such as topography, together with LiDAR (Light Detection And Ranging) data, Landsat 8 and Sentinel satellite observations, and ground measurements by bicycle. These bicycle-based weather data collections are used to build the database of the variable to be modelled, the air temperature, over Lyon’s hyper-center. This study aims to model the air temperature, measured during 6 mobile campaigns in Lyon in clear weather, using multiple linear regressions based on 33 explanatory variables. They fall into various categories, such as meteorological parameters from remote sensing, topographic variables, vegetation indices, the presence of water, humidity, bare soil, buildings, radiation, urban morphology, and proximity and density to various land uses (water surfaces, vegetation, bare soil, etc.). The acquisition sources are multiple: the Landsat 8 and Sentinel satellites, LiDAR points, and cartographic products downloaded from the Greater Lyon open data platform. For the presence of low, medium, and high vegetation, buildings and bare ground, several buffer sizes around the measurement points were tested (5, 10, 20, 25, 50, 100, 200 and 500 m). The buffers with the best linear correlations with air temperature are 5 m for ground and for low and medium vegetation, 50 m for buildings, and 100 m for high vegetation. The explanatory model of the dependent variable is obtained by multiple linear regression on the remaining explanatory variables (after screening with a Pearson correlation matrix, |r| < 0.7, and VIF < 5) using a stepwise selection algorithm. Moreover, holdout validation (80% training, 20% testing) is performed, owing to its ability to detect over-fitting of multiple regression, even though multiple regression provides internal validation and randomization. Multiple linear regression explained, on average, 72% of the variance for the study days, with an average RMSE of only 0.20°C. Surface temperature is the most important variable in the estimation of air temperature. Other variables recur, such as distance to subway stations, distance to water areas, NDVI, the digital elevation model, the sky view factor, average vegetation density, and building density. Changing urban morphology influences the city's thermal patterns. The thermal atmosphere in dense urban areas can only be analysed at the microscale, in order to consider the local impact of trees, streets, and buildings. There is currently no sufficiently dense network of fixed weather stations in central Lyon or in most major urban areas. It is therefore necessary to use mobile measurements, followed by modelling, to characterize the city's multiple thermal environments.
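
A minimal sketch of the screening step described above (dropping predictors by pairwise correlation and variance inflation factor before fitting the multiple linear regression), assuming statsmodels and invented data; the thresholds follow the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical table: air temperature plus a few candidate predictors
rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(size=(300, 4)),
                  columns=["surface_temp", "ndvi", "building_density", "sky_view"])
df["air_temp"] = 20 + 0.6 * df.surface_temp - 0.4 * df.ndvi + rng.normal(0, 0.2, 300)

X = df.drop(columns="air_temp")
# 1) Drop one of each pair of predictors with |r| >= 0.7
corr = X.corr().abs()
drop = {c2 for i, c1 in enumerate(corr.columns)
        for c2 in corr.columns[i + 1:] if corr.loc[c1, c2] >= 0.7}
X = X.drop(columns=list(drop))
# 2) Iteratively drop the predictor with the largest VIF until all VIF < 5
while True:
    vifs = pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
                     index=X.columns)
    if vifs.max() < 5:
        break
    X = X.drop(columns=vifs.idxmax())

model = sm.OLS(df["air_temp"], sm.add_constant(X)).fit()
print(model.rsquared, list(X.columns))
```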

Keywords: air temperature, LiDAR, multiple linear regression, surface temperature, urban heat island

Procedia PDF Downloads 108
18874 Removal of Phenol from Aqueous Solution Using Watermelon (Citrullus C. lanatus) Rind

Authors: Fidelis Chigondo

Abstract:

This study investigates the effectiveness of watermelon rind in removing phenol from aqueous solution. The effects of various parameters (pH, initial phenol concentration, biosorbent dosage and contact time) on phenol adsorption were investigated. A pH of 2, an initial phenol concentration of 40 ppm, a biosorbent dosage of 0.6 g and a contact time of 6 h were deduced to be the optimum conditions for the adsorption process. The maximum phenol removal under optimized conditions was 85%. The sorption data fitted the Freundlich isotherm with a regression coefficient of 0.9824. The kinetics were best described by the intraparticle diffusion model and the Elovich equation, with regression coefficients of 1 and 0.8461 respectively, showing that the reaction is chemisorption on a heterogeneous surface and that intraparticle diffusion is the sole rate-determining step. The study revealed that watermelon rind has potential for removing phenol from industrial wastewaters.
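
The Freundlich fit reported here is conventionally obtained by linear regression on the log-transformed isotherm, qe = KF·Ce^(1/n). A minimal sketch with invented equilibrium data (the paper's raw data are not given):

```python
import numpy as np

# Hypothetical equilibrium data: residual concentration Ce (ppm), uptake qe (mg/g)
Ce = np.array([2.0, 5.0, 10.0, 20.0, 35.0])
qe = np.array([1.9, 3.2, 4.8, 7.1, 9.6])

# Linearized Freundlich isotherm: log(qe) = log(KF) + (1/n) * log(Ce)
slope, intercept = np.polyfit(np.log10(Ce), np.log10(qe), 1)
KF, n = 10 ** intercept, 1 / slope
r = np.corrcoef(np.log10(Ce), np.log10(qe))[0, 1]
print(f"KF = {KF:.2f}, n = {n:.2f}, R^2 = {r**2:.4f}")
```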

Keywords: biosorption, phenol, biosorbent, watermelon rind

Procedia PDF Downloads 225
18873 Low-Cost, Portable Optical Sensor with Regression Algorithm Models for Accurate Monitoring of Nitrites in Environments

Authors: David X. Dong, Qingming Zhang, Meng Lu

Abstract:

Nitrites enter waterways as runoff from croplands and are discharged from many industrial sites. Excessive nitrite inputs to water bodies lead to eutrophication. On-site rapid detection of nitrite is of increasing interest for managing fertilizer application and monitoring water source quality. Existing methods for detecting nitrites use spectrophotometry, ion chromatography, electrochemical sensors, ion-selective electrodes, chemiluminescence, and colorimetric methods. However, these methods either suffer from high cost or provide low measurement accuracy due to their poor selectivity to nitrites. It is therefore desirable to develop an accurate and economical method to monitor nitrites in the environment. We report a low-cost optical sensor, in conjunction with a machine learning (ML) approach, to enable high-accuracy detection of nitrites in water sources. The sensor works on the principle of measuring the molecular absorption of nitrites at three narrowband wavelengths (295 nm, 310 nm, and 357 nm) in the ultraviolet (UV) region. These wavelengths are chosen because they have relatively high sensitivity to nitrites, and low-cost light-emitting diodes (LEDs) and photodetectors are available at these wavelengths. A regression model is built, trained, and utilized to minimize the cross-sensitivities of these wavelengths to the same analyte, thus achieving precise and reliable measurements in the presence of various interfering ions. The measured absorbance data are input to the trained model, which provides a nitrite concentration prediction for the sample. The sensor is built with i) a miniature quartz cuvette as the test cell that contains the liquid sample under test, ii) three low-cost UV LEDs placed on one side of the cell as light sources, each LED providing narrowband light, and iii) a photodetector with a built-in amplifier and an analog-to-digital converter placed on the other side of the test cell to measure the power of the transmitted light. This simple optical design allows the absorbance of the sample to be measured at the three wavelengths. To train the regression model, the absorbances of nitrite ions, alone and in combination with various interfering ions, are first obtained at the three UV wavelengths using a conventional spectrophotometer. The spectrophotometric data are then input to different regression algorithm models for training and for evaluating high-accuracy nitrite concentration prediction. Our experimental results show that the proposed approach enables nitrite detection within several seconds. The sensor hardware costs about one hundred dollars, which is much cheaper than a commercial spectrophotometer. The ML algorithm helps to reduce the average relative error to below 3.5% over a concentration range from 0.1 ppm to 100 ppm of nitrites. The sensor has been validated by measuring nitrites at three sites in Ames, Iowa, USA. This work demonstrates an economical and effective approach to the rapid, reagent-free determination of nitrites with high accuracy. The integration of the low-cost optical sensor and ML data processing can find a wide range of applications in environmental monitoring and management.
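
As a rough illustration of the calibration idea (regressing concentration on absorbances at the three UV wavelengths so that cross-sensitivities cancel), here is a minimal sketch; the training data and the simple linear model are assumptions, not the authors' trained model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical calibration set: absorbance at 295, 310, 357 nm for samples
# spiked with nitrite plus an interfering ion, measured on a spectrophotometer
rng = np.random.default_rng(3)
conc = rng.uniform(0.1, 100, 150)            # nitrite, ppm
interferent = rng.uniform(0, 50, 150)        # e.g., nitrate, ppm
A = np.column_stack([
    0.010 * conc + 0.004 * interferent,      # A(295 nm)
    0.012 * conc + 0.001 * interferent,      # A(310 nm)
    0.006 * conc + 0.005 * interferent,      # A(357 nm)
]) + rng.normal(0, 1e-3, (150, 3))

model = LinearRegression().fit(A, conc)      # train the regression model
sample = np.array([[0.35, 0.40, 0.22]])      # absorbances of an unknown sample
print(f"Predicted nitrite: {model.predict(sample)[0]:.1f} ppm")
```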

Keywords: optical sensor, regression model, nitrites, water quality

Procedia PDF Downloads 47
18872 Spectral Analysis Approaches for Simultaneous Determination of Binary Mixtures with Overlapping Spectra: An Application on Pseudoephedrine Sulphate and Loratadine

Authors: Sara El-Hanboushy, Hayam Lotfy, Yasmin Fayez, Engy Shokry, Mohammed Abdelkawy

Abstract:

Simple, specific, accurate and precise spectrophotometric methods are developed and validated for the simultaneous determination of pseudoephedrine sulphate (PSE) and loratadine (LOR) in a combined dosage form, based on spectral analysis techniques. Pseudoephedrine (PSE) in the binary mixture can be analyzed either by using its resolved zero-order absorption spectrum at its λmax of 256.8 nm after subtraction of the LOR spectrum, or in the presence of the LOR spectrum by the absorption correction method at 256.8 nm, the dual wavelength (DWL) method at 254 nm and 273 nm, the induced dual wavelength (IDWL) method at 256 nm and 272 nm, and the ratio difference (RD) method at 256 nm and 262 nm. Loratadine (LOR) in the mixture can be analyzed directly at 280 nm without any interference from the PSE spectrum, or at 250 nm using its recovered zero-order absorption spectrum obtained by constant multiplication (CM). In addition, PSE and LOR can be determined simultaneously in their mixture by the induced amplitude modulation method (IAM) coupled with amplitude multiplication (PM).
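
The dual wavelength method rests on a simple computation: choose two wavelengths at which the interfering component absorbs equally, so that the absorbance difference depends only on the analyte. A schematic sketch, with invented numbers rather than the paper's calibration:

```python
import numpy as np

# Dual wavelength idea: at 254 nm and 273 nm LOR is assumed to absorb
# equally, so dA = A(254) - A(273) cancels LOR and tracks PSE only.
A254, A273 = 0.512, 0.348          # measured absorbances of the mixture
dA = A254 - A273

# Hypothetical calibration: dA measured for pure PSE standards (Beer's law)
pse_std = np.array([10, 20, 30, 40, 50])                 # ppm
dA_std = np.array([0.041, 0.082, 0.123, 0.165, 0.205])
slope = np.polyfit(pse_std, dA_std, 1)[0]

print(f"PSE = {dA / slope:.1f} ppm")
```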

Keywords: dual wavelength (DW), induced amplitude modulation method (IAM) coupled with amplitude multiplication (PM), loratadine, pseudoephedrine sulphate, ratio difference (RD)

Procedia PDF Downloads 293
18871 The Impact of Simulation-based Learning on the Clinical Self-efficacy and Adherence to Infection Control Practices of Nursing Students

Authors: Raeed Alanazi

Abstract:

Introduction: Nursing students have a crucial role to play in the prevention of infectious diseases and, therefore, must be trained in infection control and prevention modules prior to entering clinical settings. Simulations have been found to have a positive impact on infection control skills and the use of standard precautions. Aim: The purpose of this study was to use the four sources of self-efficacy to explain the level of clinical self-efficacy and adherence to infection control practices of Saudi nursing students during simulation practice. Method: A cross-sectional design with convenience sampling was used. The study was conducted in all Saudi nursing schools, with a total of 197 students participating. Three scales were used: the simulation self-efficacy scale (SSES), the four sources of self-efficacy scale (SSES), and the Compliance with Standard Precautions Scale (CSPS). Multiple linear regression was used to test how the four sources of self-efficacy explain the level of clinical self-efficacy and adherence to infection control in nursing students. Results: The vicarious experience subscale (p = .044) was statistically significant. The regression model indicated that for every one-unit increase in vicarious experience (observation and reflection in simulation), the participants’ adherence to infection control increased by .13 units (β = .22, t = 2.03, p = .044). In addition, the regression model indicated that for every one-unit increase in education level, the participants’ adherence to infection control increased by 1.82 units (β = .34, t = 3.64, p < .001). Also, the mastery experience subscale (p < .001) and the vicarious experience subscale (p = .020) showed significant associations with clinical self-efficacy. Conclusion: The findings of this research support the idea that simulation-based learning can be a valuable teaching-learning method to help nursing students develop clinical competence, which is essential in providing quality and safe nursing care.

Keywords: simulation-based learning, clinical self-efficacy, infection control, nursing students

Procedia PDF Downloads 48
18870 Transient Electrical Resistivity and Elastic Wave Velocity of Sand-Cement-Inorganic Binder Mixture

Authors: Kiza Rusati Pacifique, Ki-il Song

Abstract:

Cement milk grout has been used for ground improvement. Due to the environmental issues related to cement, a reduction in cement usage is requested. In this study, an inorganic binder is introduced to reduce the cement content used for ground improvement. To evaluate the transient electrical and mechanical properties of the sand-cement-inorganic binder mixture, two non-destructive testing (NDT) methods, Electrical Resistivity (ER) and Free Free Resonant Column (FFRC) tests, were adopted in addition to the unconfined compressive strength test. The electrical resistivity, longitudinal wave velocity and damping ratio of sand-cement admixture samples improved with the addition of inorganic binders were measured. Experimental tests were performed considering four different mixing ratios and three different cement contents, depending on the curing time. Results show that the mixing ratio and curing time have considerable effects on the electrical and mechanical properties of the mixture. The unconfined compressive strength (UCS) decreases as the cement content decreases. However, sufficient grout strength can be obtained by increasing the content of inorganic binder. From the results, it is found that the inorganic binder can be used to enhance the mechanical properties of the mixture and reduce the cement content. It is expected that the data and trends proposed in this study can serve as a reference for predicting grouting quality in the field.

Keywords: damping ratio, electrical resistivity, ground improvement, inorganic binder, longitudinal wave velocity, unconfined compression strength

Procedia PDF Downloads 323
18869 Measurement of Asphalt Pavement Temperature to Find out the Proper Asphalt Binder Performance Grade to the Asphalt Mixtures in Southern Desert of Libya

Authors: Khlifa El Atrash, Gabriel Assaf

Abstract:

Most developing countries use volumetric analysis in designing asphalt mixtures, an approach that can be upgraded for hot arid weather. To be effective, however, it should account for several important aspects: materials, environment, and method of construction. The overall intent of the work reported in this study is to test different asphalt mixtures while taking into consideration the environment, the type and source of material, the tools and equipment, and the construction method. In this study, several tests were conducted on samples carefully prepared for the traffic loads and temperatures expected in a hot, dry climate. Several asphalt concrete mixtures were designed using two different binders. These mixtures were analyzed under two types of tests - complex modulus and rutting tests - to evaluate the hot-mix asphalt properties under the temperatures and traffic loads representative of Libya. These factors play an important role in improving pavement performance in a hot climate, based on the properties of the asphalt mixture, the climate, and the traffic load. This research offers some recommendations for asphalt mixtures used in hot, dry areas. Such mixtures should use an asphalt binder that is less affected by pavement temperature change and traffic load. The properties of the mixture, such as durability, deformation, air voids and performance, largely depend on the type of materials, the environment, and the mixing method. These properties, in turn, affect pavement performance. Therefore, this study aims to develop a method for designing an asphalt mixture that takes into account field loading, various stresses, and temperature spectrums.

Keywords: volumetric analysis, pavement performances, hot climate, asphalt mixture, traffic load

Procedia PDF Downloads 288
18868 Point-Mutation in a Rationally Engineered Esterase Inverts its Enantioselectivity

Authors: Yasser Gaber, Mohamed Ismail, Serena Bisagni, Mohamad Takwa, Rajni Hatti-Kaul

Abstract:

Enzymes are safe and selective catalysts. They skillfully catalyze chemical reactions; however, the native form is usually not suitable for industrial applications. Enzymes are therefore engineered by several techniques to meet the required catalytic task. Clopidogrel was among the five best-selling pharmaceuticals in 2010 under the brand name Plavix. The route commonly used to produce the drug on an industrial scale is synthesis of the racemic mixture followed by diastereomeric resolution to obtain the pure S isomer. The process consumes a lot of solvents and chemicals. We have evaluated a cleaner biocatalytic approach for the asymmetric hydrolysis of racemic clopidogrel. Initial screening of a selected number of hydrolases showed only one enzyme, EST, to exhibit activity and selectivity towards the desired stereoisomer. As crude EST is a mixture of several isoenzymes, a homology model of EST-1 was used in molecular dynamics simulations to study the interaction of the enzyme with the R and S isomers of clopidogrel. Analysis of the geometric hindrances of the tetrahedral intermediates revealed a potential site for mutagenesis to improve the activity and selectivity. A single point mutation showed a dramatic increase in activity and an inversion of the enantioselectivity (400-fold change in E value).

Keywords: biocatalysis, biotechnology, enzyme, protein engineering, molecular modeling

Procedia PDF Downloads 420
18867 Quantitative Structure Activity Relationship Model for Predicting the Aromatase Inhibition Activity of 1,2,3-Triazole Derivatives

Authors: M. Ouassaf, S. Belaidi

Abstract:

Aromatase is an estrogen biosynthetic enzyme belonging to the cytochrome P450 family, which catalyzes the rate-limiting step in the conversion of androgens to estrogens; as such, it is relevant to the promotion of tumor cell growth. A set of thirty 1,2,3-triazole derivatives was used in a quantitative structure-activity relationship (QSAR) study using multiple linear regression (MLR). We divided the data into a training group and a testing group. The results showed a good predictive ability of the MLR model: the model was statistically robust internally (R² = 0.982), and its predictability was tested by several parameters, including external criteria (R²pred = 0.851, CCC = 0.946). The knowledge gained in this study should provide relevant information on the origins of aromatase inhibitory activity and therefore facilitate our ongoing quest for aromatase inhibitors with robust properties.
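
The external criteria quoted here (R²pred and the concordance correlation coefficient, CCC) are standard QSAR validation statistics. A minimal sketch of how they are computed from observed and predicted test-set activities; the data arrays are placeholders:

```python
import numpy as np

def r2_pred(y_obs, y_pred, y_train_mean):
    # Predictive R^2 on the external test set
    press = np.sum((y_obs - y_pred) ** 2)
    ss = np.sum((y_obs - y_train_mean) ** 2)
    return 1 - press / ss

def ccc(y_obs, y_pred):
    # Lin's concordance correlation coefficient
    mx, my = y_obs.mean(), y_pred.mean()
    vx, vy = y_obs.var(), y_pred.var()
    cov = np.mean((y_obs - mx) * (y_pred - my))
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Placeholder test-set activities (e.g., pIC50) and model predictions
y_obs = np.array([6.1, 6.8, 7.2, 5.9, 7.5])
y_pred = np.array([6.0, 6.9, 7.0, 6.1, 7.4])
print(r2_pred(y_obs, y_pred, y_train_mean=6.5), ccc(y_obs, y_pred))
```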

Keywords: aromatase inhibitors, QSAR, MLR, 1,2,3-triazole

Procedia PDF Downloads 91
18866 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing

Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan

Abstract:

This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions where the parameters of the distribution capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in capturing the stylized facts known for stock returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are taken from a principal component analysis of various world indices; an application to option pricing is presented. The factors of the GARCHX model are extracted from a matrix of world indices by applying principal component analysis (PCA). The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state. The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets - in our paper, a pool of international stock indices - and sorts them by order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and the standard Black-Scholes model. We show that our model outperforms the benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model, by capturing the stylized facts known for index returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence.
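
A minimal sketch of the factor-extraction step (PCA on a matrix of world index returns, keeping the leading components as GARCHX factors); the index data here are placeholders, not the paper's sample.

```python
import numpy as np

# Hypothetical daily log-returns of several world indices (T days x N indices)
rng = np.random.default_rng(4)
returns = rng.normal(0, 0.01, size=(1000, 6))

# PCA via eigendecomposition of the historical covariance matrix
cov = np.cov(returns, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]               # sort by explained variance
eigval, eigvec = eigval[order], eigvec[:, order]

factors = returns @ eigvec[:, :3]              # three leading factors for GARCHX
explained = eigval[:3] / eigval.sum()
print("share of variance explained:", np.round(explained, 3))
# The factor series would then enter the GARCHX variance equation as
# exogenous regressors alongside the HTSN return innovations.
```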

Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium

Procedia PDF Downloads 279
18865 Applicability of Cameriere’s Age Estimation Method in a Sample of Turkish Adults

Authors: Hatice Boyacioglu, Nursel Akkaya, Humeyra Ozge Yilanci, Hilmi Kansu, Nihal Avcu

Abstract:

A strong relationship between the reduction in the size of the pulp cavity and increasing age has been reported in the literature. This relationship can be utilized to estimate the age of an individual by measuring the pulp cavity size on dental radiographs, a non-destructive method. The purpose of this study is to develop a population-specific regression model for age estimation in a sample of Turkish adults by applying Cameriere’s method to panoramic radiographs. The sample consisted of 100 panoramic radiographs of Turkish patients (40 men, 60 women) aged between 20 and 70 years. Pulp and tooth area ratios (AR) of the maxillary canines were measured by two maxillofacial radiologists, and the results were subjected to regression analysis. There were no statistically significant intra-observer or inter-observer differences. The correlation coefficient between age and the AR of the maxillary canines was -0.71, and the following regression equation was derived: Estimated Age = 77.365 - (351.193 × AR). The mean prediction error was 4 years, which is within acceptable error limits for age estimation. This shows that the pulp/tooth area ratio is a useful variable for assessing age with reasonable accuracy. Based on the results of this research, it was concluded that Cameriere’s method is suitable for dental age estimation and can be used in forensic procedures for Turkish adults.
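
For illustration, the derived equation can be applied directly; here is a small helper with a made-up area ratio (the coefficients are those reported above):

```python
def estimate_age(area_ratio: float) -> float:
    """Cameriere-style age estimate from the pulp/tooth area ratio (AR)."""
    return 77.365 - 351.193 * area_ratio

# Example: a maxillary canine with AR = 0.12 (hypothetical measurement)
print(f"Estimated age: {estimate_age(0.12):.1f} years")  # ~35.2
```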

Keywords: age estimation by teeth, forensic dentistry, panoramic radiograph, Cameriere's method

Procedia PDF Downloads 426
18864 Cat Stool as an Additive Aggregate to Garden Bricks

Authors: Mary Joy B. Amoguis, Alonah Jane D. Labtic, Hyna Wary Namoca, Aira Jane V. Original

Abstract:

Animal waste has been increasing rapidly due to the growing animal population and the lack of innovative waste management practices. In a country like the Philippines, animal waste is rampant. This study aims to minimize animal waste by producing garden bricks using cat stool as an additive. The study analyzes different levels of concentration to determine the most efficient combination in terms of compressive strength and durability of cat stool as an additive to garden bricks. The researchers first collect and incinerate the cat stool, then prepare the different concentrations. The first concentration is a 25% cat stool and 75% cement mixture; the second is 50% cat stool and 50% cement; and the third is 75% cat stool and 25% cement. The researchers analyzed the data using one-way ANOVA, and the statistical analysis revealed a significant difference compared to the control. The findings show an inversely proportional relationship: the higher the concentration of the cat stool additive, the lower the compressive strength of the bricks, and the lower the concentration, the higher the compressive strength.
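
A minimal sketch of the one-way ANOVA used here, with invented compressive-strength readings for the control and the three concentrations:

```python
from scipy import stats

# Hypothetical compressive strength (MPa) per mix, several bricks each
control = [14.9, 15.3, 15.1, 14.7]
mix_25 = [12.2, 12.6, 11.9, 12.4]   # 25% cat stool / 75% cement
mix_50 = [9.1, 9.5, 8.8, 9.3]       # 50% / 50%
mix_75 = [6.0, 6.4, 5.8, 6.1]       # 75% / 25%

f_stat, p_value = stats.f_oneway(control, mix_25, mix_50, mix_75)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")  # small p => groups differ
```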

Keywords: cat stool, garden bricks, cement, concentrations, animal wastes, compressive strength, durability, one-way ANOVA, additive, incineration, aggregates, stray cats

Procedia PDF Downloads 31
18863 E-Consumers’ Attribute Non-Attendance Switching Behavior: Effect of Providing Information on Attributes

Authors: Leonard Maaya, Michel Meulders, Martina Vandebroek

Abstract:

Discrete Choice Experiments (DCE) are used to investigate how product attributes affect decision-makers’ choices. In DCEs, choice situations consisting of several alternatives are presented, from which decision-makers select the preferred alternative. Standard multinomial logit models based on random utility theory can be used to estimate the utilities for the attributes. The overarching principle in these models is that respondents understand and use all the attributes when making choices. However, studies suggest that respondents sometimes ignore some attributes (commonly referred to as attribute non-attendance, ANA). The choice modeling literature presents ANA as a static process, i.e., respondents’ ANA behavior does not change throughout the experiment. However, respondents may ignore attributes due to changing factors like the availability of information on attributes, learning or fatigue during the experiment, etc. We develop a dynamic mixture latent Markov model to capture changes in ANA when information on attributes is provided. The model is illustrated on e-consumers’ webshop choices. The results indicate that the dynamic ANA model describes the behavioral changes better than modeling the impact of information through changes in parameters. Further, we find that providing information on attributes leads to an increase in the attendance probabilities for the investigated attributes.

Keywords: choice models, discrete choice experiments, dynamic models, e-commerce, statistical modeling

Procedia PDF Downloads 110
18862 Increasing the Daily Production Rate of Methane through Pasteurization of Cow Dung

Authors: Khalid Elbadawi Elshafea, Mahmoud Hassan Onsa

Abstract:

This paper presents the results of experiments measuring the impact of pasteurizing cow dung on an important parameter of anaerobic digestion (retention time) and its effect on the daily production rate of biogas. Local materials were used in these experiments. Two experiments were carried out in two bio-digesters, 1 and 2 (18.0 L each), with a mixture volume of 16.0 litres and 4.0 kg of dry matter of cow dung in the mixture. The mixture in digester 2 was pasteurized, and both digesters were kept at room temperature. Digester 1 produced 268.5 litres of methane over a period of 49 days, a daily methane production rate of 1.37 L/kg/day, while digester 2 produced 302.7 litres of methane over 26 days, a daily rate of 2.91 L/kg/day. This study concludes that pasteurizing cow dung speeds up hydrolysis in the anaerobic process, because heating to a certain temperature for a certain time accelerates the chemical reactions (conversion of proteins to amino acids, carbohydrates to sugars, and fats to long-chain fatty acids); this reduces the retention time and therefore increases the daily methane production rate to 212% of its value without pasteurization.
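
The reported rates follow directly from gas volume, duration, and dry mass; a quick check of the arithmetic:

```python
dry_mass_kg = 4.0  # dry matter of cow dung per digester

rate_1 = 268.5 / 49 / dry_mass_kg   # digester 1, no pasteurization
rate_2 = 302.7 / 26 / dry_mass_kg   # digester 2, pasteurized dung
print(f"{rate_1:.2f} L/kg/day, {rate_2:.2f} L/kg/day")   # 1.37, 2.91
print(f"ratio: {rate_2 / rate_1:.0%}")                   # ~212%
```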

Keywords: methane, cow dung, daily production, pasteurization, increase

Procedia PDF Downloads 282
18861 An Alternative Approach for Assessing the Impact of Cutting Conditions on Surface Roughness Using Single Decision Tree

Authors: S. Ghorbani, N. I. Polushin

Abstract:

In this study, an approach to identify the factors affecting surface roughness in a machining process is presented. The study is based on 81 observations of surface roughness over a wide range of cutting tools (conventional, cutting tool with holes, cutting tool with composite material), workpiece materials (AISI 1045 steel, AA2024 aluminum alloy, A48-class30 gray cast iron), spindle speeds (630-1000 rpm), feed rates (0.05-0.075 mm/rev), depths of cut (0.05-0.15 mm) and tool overhangs (41-65 mm). A single decision tree (SDT) analysis was performed to identify the factors for predicting a surface roughness model, and the CART algorithm was employed for building and evaluating the regression tree. Results show that the single decision tree performs better than traditional regression models, with higher forecast accuracy.
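
CART regression trees are available off the shelf; a minimal sketch with the study's cutting conditions as features (the data are invented, not the 81 original observations):

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# Hypothetical machining records: cutting conditions -> surface roughness Ra
rng = np.random.default_rng(5)
df = pd.DataFrame({
    "spindle_speed": rng.uniform(630, 1000, 81),   # rpm
    "feed_rate": rng.uniform(0.05, 0.075, 81),     # mm/rev
    "depth_of_cut": rng.uniform(0.05, 0.15, 81),   # mm
    "tool_overhang": rng.uniform(41, 65, 81),      # mm
})
df["Ra"] = (0.002 * df.tool_overhang + 20 * df.feed_rate
            - 0.0005 * df.spindle_speed + rng.normal(0, 0.05, 81))

tree = DecisionTreeRegressor(max_depth=4, random_state=0)  # CART-style tree
tree.fit(df.drop(columns="Ra"), df["Ra"])
# Feature importances indicate which cutting conditions drive roughness
print(dict(zip(df.columns[:-1], tree.feature_importances_.round(2))))
```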

Keywords: cutting condition, surface roughness, decision tree, CART algorithm

Procedia PDF Downloads 346
18860 Enhancing Performance of Semi-Flexible Pavements through Self-Compacting Cement Mortar as Cementitious Grout

Authors: Mohamed Islam Dahmani

Abstract:

This research investigates the performance enhancement of semi-flexible pavements by incorporating self-compacting cement mortar as a cementitious grout. The study is divided into three phases for comprehensive evaluation. In the initial phase, a porous asphalt mixture is formulated with a target voids content of 25-30%. The goal is to achieve optimal interconnected voids that facilitate effective penetration of the self-compacting cement mortar. The mixture's compliance with porous asphalt performance standards is ensured through tests such as Marshall stability, indirect tensile strength, the Cantabro test, and the draindown test. The second phase focuses on creating a self-compacting cement mortar with high workability and superior penetration capability. This mortar is designed to fill the interconnected voids within the porous asphalt mixture. The formulated mortar's characteristics are assessed through tests such as mini V-funnel flow time and mini slump cone flow, as well as mechanical properties such as compressive strength, bending strength, and shrinkage strength. In the final phase, the performance of the semi-flexible pavement is thoroughly studied. Various tests, including Marshall stability, indirect tensile strength, high-temperature bending, low-temperature bending, resistance to rutting, and fatigue life, are conducted to assess the effectiveness of the self-compacting cement mortar-enhanced pavement.

Keywords: semi-flexible pavements, cementitious grout, self-compacting cement mortar, porous asphalt mixture, interconnected voids, rutting resistance

Procedia PDF Downloads 58
18859 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through predictive quality, the great potential for saving necessary quality control can be exploited by means of data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which is made at least more difficult by this data availability. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science. As in any process, the cost of eliminating errors increases significantly with each advancing process phase. For the quality prediction of the hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In this work, the initial phase of CRISP-DM, the business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
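
A rough sketch of the comparison framed here (predicting the leakage volume flow directly versus classifying the inspection decision against a tolerance); the data, features, and threshold are assumptions, not Bosch Rexroth's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import accuracy_score, r2_score
from sklearn.model_selection import train_test_split

# Hypothetical cross-process features and measured leakage volume flow
rng = np.random.default_rng(6)
X = rng.normal(size=(2000, 5))
leakage = 1.0 + 0.1 * X[:, 0] - 0.05 * X[:, 1] + rng.normal(0, 0.2, 2000)
passed = (leakage < 1.1).astype(int)           # inspection decision

X_tr, X_te, y_tr, y_te, c_tr, c_te = train_test_split(
    X, leakage, passed, test_size=0.25, random_state=6)

reg = RandomForestRegressor(random_state=6).fit(X_tr, y_tr)
clf = RandomForestClassifier(random_state=6).fit(X_tr, c_tr)
print(f"regression R^2: {r2_score(y_te, reg.predict(X_te)):.2f}")
print(f"classification accuracy: {accuracy_score(c_te, clf.predict(X_te)):.2f}")
```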

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 118
18858 Interaction between Space Syntax and Agent-Based Approaches for Vehicle Volume Modelling

Authors: Chuan Yang, Jing Bie, Panagiotis Psimoulis, Zhong Wang

Abstract:

Modelling and understanding vehicle volume distribution over the urban network are essential for urban design and transport planning. The space syntax approach has been widely applied as the main conceptual and methodological framework for contemporary vehicle volume models, with the help of the statistical method of multiple regression analysis (MRA). However, the MRA model with space syntax variables shows a limitation in vehicle volume prediction: it cannot account for the crossed effect of urban configurational characters and socio-economic factors. The aim of this paper is to construct models that capture the combined impact of the street network structure and socio-economic factors. In this paper, we present a multilevel linear (ML) and an agent-based (AB) vehicle volume model at an urban scale, interacting with the space syntax theoretical framework. The ML model allows random effects of urban configurational characteristics in different urban contexts, and the AB model was developed by incorporating transformed space syntax components of the MRA models into the agents’ spatial behaviour. The three models were implemented in the same urban environment. The ML model exhibits superiority over the original MRA model in identifying the relative impacts of the configurational characters and macro-scale socio-economic factors that shape vehicle movement distribution over the city. Compared with the ML model, the suggested AB model demonstrated the ability to estimate vehicle volume in the urban network considering the combined effects of configurational characters and land-use patterns at the street segment level.

Keywords: space syntax, vehicle volume modeling, multilevel model, agent-based model

Procedia PDF Downloads 112
18857 Monitoring Large-Coverage Forest Canopy Height by Integrating LiDAR and Sentinel-2 Images

Authors: Xiaobo Liu, Rakesh Mishra, Yun Zhang

Abstract:

Continuous monitoring of forest canopy height with large coverage is essential for obtaining forest carbon stocks and emissions, quantifying biomass estimation, analyzing vegetation coverage, and determining biodiversity. LiDAR can be used to collect accurate woody vegetation structure such as canopy height. However, LiDAR’s coverage is usually limited because of its high cost and limited maneuverability, which constrains its use for dynamic and large-area forest canopy monitoring. On the other hand, optical satellite images, like Sentinel-2, can cover large forest areas with a high repeat rate, but they do not contain height information. Hence, exploring ways of integrating LiDAR data and Sentinel-2 images to enlarge the coverage of forest canopy height prediction and increase the prediction repeat rate has been an active research topic in the environmental remote sensing community. In this study, we explore the potential of training a Random Forest Regression (RFR) model and a Convolutional Neural Network (CNN) model, respectively, to develop two predictive models for predicting and validating the forest canopy height of the Acadia Forest in New Brunswick, Canada, with a 10 m ground sampling distance (GSD), for the years 2018 and 2021. Two 10 m airborne LiDAR-derived canopy height models, one for 2018 and one for 2021, are used as ground truth to train and validate the RFR and CNN predictive models. To evaluate the prediction performance of the trained RFR and CNN models, two new predicted canopy height maps (CHMs), one for 2018 and one for 2021, are generated using the trained models and 10 m Sentinel-2 images of 2018 and 2021, respectively. The two 10 m predicted CHMs from Sentinel-2 images are then compared with the two 10 m airborne LiDAR-derived canopy height models for accuracy assessment. The validation results show that the mean absolute error (MAE) for 2018 is 2.93 m for the RFR model and 1.71 m for the CNN model, while the MAE for 2021 is 3.35 m for the RFR model and 3.78 m for the CNN model. These results demonstrate the feasibility of using the RFR and CNN models developed in this research for predicting large-coverage forest canopy height at 10 m spatial resolution and a high revisit rate.
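
A rough sketch of the RFR pathway described above (LiDAR-derived canopy height as ground truth, Sentinel-2 band values as predictors, MAE for validation); the band choice and data are placeholders, not the Acadia Forest dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical 10 m pixels: Sentinel-2 reflectances vs LiDAR canopy height
rng = np.random.default_rng(7)
bands = rng.uniform(0, 0.5, size=(5000, 4))      # e.g., B4, B8, B11, B12
height = 25 * bands[:, 1] - 10 * bands[:, 0] + 5 + rng.normal(0, 1.5, 5000)

X_tr, X_te, y_tr, y_te = train_test_split(bands, height, test_size=0.3,
                                          random_state=7)
rfr = RandomForestRegressor(n_estimators=200, random_state=7).fit(X_tr, y_tr)
print(f"MAE: {mean_absolute_error(y_te, rfr.predict(X_te)):.2f} m")
```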

Keywords: remote sensing, forest canopy height, LiDAR, Sentinel-2, artificial intelligence, random forest regression, convolutional neural network

Procedia PDF Downloads 59
18856 Rural Livelihood under a Changing Climate Pattern in the Zio District of Togo, West Africa

Authors: Martial Amou

Abstract:

This study was carried out to assess the situation of households’ livelihood under a changing climate pattern in the Zio district of Togo, West Africa. The study examined three important aspects: (i) assessment of the households’ livelihood situation under a changing climate pattern, (ii) farmers’ perception and understanding of local climate change, and (iii) determinants of adaptation strategies undertaken in cropping patterns in response to climate change. To this end, secondary sources of data and survey data collected from 235 farmers in four villages in the study area were used. A conceptual framework adapted from DFID’s Sustainable Livelihood Framework, a two-step binary logistic regression model, and descriptive statistics were used as methodological approaches. Based on the Sustainable Livelihood Approach (SLA), the various factors revolving around the livelihoods of the rural community were grouped into social, natural, physical, human, and financial capital. The study found that the households’ livelihood situation, represented by the overall livelihood index in the study area (34%), is below the standard average household livelihood security index (50%). Natural capital was found to be the poorest asset (13%), which will severely affect the sustainability of livelihoods in the long run. The results from the descriptive statistics and the first-step regression (selection model) indicated that most of the farmers in the study area have a clear understanding of climate change, even though they have no notion of greenhouse gases as the main cause behind the issue. From the second-step regression (output model), education, farming experience, access to credit, access to extension services, cropland size, membership of a social group, and distance to the nearest input market were found to be the significant determinants of adaptation measures undertaken in cropping patterns by farmers in the study area. Based on the results of this study, recommendations are made to farmers, policy makers, institutions, and development service providers in order to better target interventions that build, promote or facilitate the adoption of adaptation measures with the potential to build resilience to climate change and thereby improve rural livelihoods.

Keywords: climate change, rural livelihood, cropping pattern, adaptation, Zio District

Procedia PDF Downloads 301
18855 Study on the Factors Influencing the Built Environment of Residential Areas on the Lifestyle Walking Trips of the Elderly

Authors: Daming Xu, Yuanyuan Wang

Abstract:

Under the trend of rapid urbanization, cities are becoming increasingly motorized, and the walkability of urban space is seriously affected. As walking is the main mode of daily travel for the elderly, the construction of walkable space has become more and more important in the current context of a seriously aging society. The settlement is the most basic living unit of residents, and daily shopping, medical care, and other daily trips are closely related to the daily life of the elderly. Therefore, exploring the impact of the built environment on elderly people's daily walking trips at the settlement level is of great practical significance for the construction of pedestrian-friendly settlements for the elderly. The study takes three typical settlements from three different periods in Harbin's Daoli District as examples and obtains data on elderly people's walking trips and built environment characteristics through field research, questionnaire distribution, and internet data acquisition. Finally, correlation analysis and a multinomial logistic regression model were applied to analyze the influence mechanism of the built environment on elderly people's walking, controlling for personal attribute variables, in order to provide reference and guidance for the future construction of walkable built environments for the elderly.

Keywords: built environment, elderly, walkability, multinomial logistic regression model

Procedia PDF Downloads 52
18854 Bivariate Time-to-Event Analysis with Copula-Based Cox Regression

Authors: Duhania O. Mahara, Santi W. Purnami, Aulia N. Fitria, Merissa N. Z. Wirontono, Revina Musfiroh, Shofi Andari, Sagiran Sagiran, Estiana Khoirunnisa, Wahyudi Widada

Abstract:

For assessing interventions in numerous disease areas, the use of multiple time-to-event outcomes is common. An individual might experience two different events, called bivariate time-to-event data; the events may be correlated because they come from the same subject and are also influenced by individual characteristics. The bivariate time-to-event case can be handled by a copula-based bivariate Cox survival model, using the Clayton and Frank copulas to analyze the dependence structure of each event as well as the covariate effects. Applying this method to modeling recurrent infection events of hemodialysis insertion in chronic kidney disease (CKD) patients, the AIC and BIC values show that the Clayton copula model was the best model, with Kendall’s tau τ = 0.02.
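
For context, the Clayton copula ties the two event times together through a single dependence parameter θ, with Kendall's τ = θ/(θ + 2). A small illustrative sketch of the copula function and the τ relation:

```python
import numpy as np

def clayton_copula(u, v, theta):
    """Clayton copula C(u, v) for theta > 0."""
    return (u ** -theta + v ** -theta - 1) ** (-1 / theta)

def kendall_tau(theta):
    # For the Clayton family, tau = theta / (theta + 2)
    return theta / (theta + 2)

# tau = 0.02 (as reported) corresponds to very weak dependence
theta = 2 * 0.02 / (1 - 0.02)    # invert tau = theta / (theta + 2)
print(f"theta = {theta:.3f}, tau = {kendall_tau(theta):.2f}")
print(clayton_copula(0.6, 0.7, theta))   # joint probability on the copula scale
```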

Keywords: bivariate cox, bivariate event, copula function, survival copula

Procedia PDF Downloads 50
18853 Adsorption of Peppermint Essential Oil by Polypropylene Nanofiber

Authors: Duduku Krishnaiah, S. M. Anisuzzaman, Kumaran Govindaraj, Chiam Chel Ken, Zykamilia Kamin

Abstract:

Pure essential oil is in high demand in the market, since most of the so-called pure essential oils on the market contain alcohol, owing to the use of alcohol to separate the oil-water mixture. The removal of pure essential oil from water without using any chemical solvent has therefore become a challenging issue. Adsorbents generally have the property of separating hydrophobic oil from a hydrophilic mixture. Polypropylene nanofiber, a thermoplastic polymer produced from propylene, was used as the adsorbent in this study. The research found that the polypropylene nanofiber was able to adsorb peppermint oil from the aqueous solution over a wide range of concentrations. Scanning electron microscopy (SEM) showed fibers with a very small average diameter (on the nanometre scale) before adsorption and a larger average diameter after adsorption, indicating that a smaller fiber diameter enhances the adsorption process. The adsorption capacity for peppermint oil increases as the initial concentration of peppermint oil and the amount of polypropylene nanofiber increase. The maximum adsorption capacity of the polypropylene nanofiber was found to be 689.5 mg/g (at T = 30°C). Moreover, the adsorption capacity decreases as the temperature of the solution increases. The equilibrium data are best represented by the Freundlich isotherm, with a maximum adsorption capacity of 689.5 mg/g. The adsorption kinetics were best represented by the pseudo-second-order model.
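
The pseudo-second-order fit mentioned here is conventionally done on the linearized form t/qt = 1/(k₂·qe²) + t/qe; a minimal sketch with invented uptake data:

```python
import numpy as np

# Hypothetical kinetic data: time (min) and uptake qt (mg/g)
t = np.array([5, 10, 20, 40, 60, 120])
qt = np.array([210, 345, 480, 575, 615, 660])

# Linearized pseudo-second-order model: t/qt = 1/(k2*qe^2) + t/qe
slope, intercept = np.polyfit(t, t / qt, 1)
qe = 1 / slope                    # equilibrium capacity (mg/g)
k2 = slope ** 2 / intercept       # rate constant (g/mg/min)
print(f"qe = {qe:.1f} mg/g, k2 = {k2:.5f} g/mg/min")
```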

Keywords: nanofiber, adsorption, peppermint essential oil, isotherms, adsorption kinetics

Procedia PDF Downloads 128
18852 Using AI for Analysing Political Leaders

Authors: Shuai Zhao, Shalendra D. Sharma, Jin Xu

Abstract:

This research uses advanced machine learning models to test a number of hypotheses regarding political executives. Specifically, it analyses the impact these powerful leaders have on economic growth, using leaders’ data from the Archigos database from 1835 to the end of 2015. The data are processed with AutoGluon, which was developed by Amazon. Automated machine learning (AutoML) tools such as AutoGluon can automatically extract features from the data and then train multiple classifiers on it. We use a linear regression model and a classification model to establish the relationship between leaders and economic growth (GDP per capita growth), and to clarify the relationship between leaders’ characteristics and economic growth from a machine learning perspective. Our work may serve as a model, or a signal, for collaboration between the fields of statistics and artificial intelligence (AI) that can light the way for political researchers and economists.
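
A minimal sketch of the AutoGluon workflow described; the column names and synthetic data are assumptions, while TabularPredictor is AutoGluon's tabular interface.

```python
import numpy as np
import pandas as pd
from autogluon.tabular import TabularPredictor

# Hypothetical leader-level table in the spirit of Archigos-derived data
rng = np.random.default_rng(8)
n = 300
df = pd.DataFrame({
    "age_at_entry": rng.integers(35, 80, n),
    "years_in_office": rng.integers(1, 20, n),
    "entry_type": rng.choice(["regular", "irregular"], n),
})
df["gdp_pc_growth"] = (0.02 * df.years_in_office
                       - 0.01 * (df.entry_type == "irregular")
                       + rng.normal(0, 1, n))

train = df.sample(frac=0.8, random_state=0)
test = df.drop(train.index)

# AutoGluon extracts features and trains/stacks several models automatically
predictor = TabularPredictor(label="gdp_pc_growth",
                             problem_type="regression").fit(train)
print(predictor.leaderboard(test))
```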

Keywords: comparative politics, political executives, leaders’ characteristics, artificial intelligence

Procedia PDF Downloads 56
18851 Naïve Bayes: A Classical Approach for the Epileptic Seizures Recognition

Authors: Bhaveek Maini, Sanjay Dhanka, Surita Maini

Abstract:

Electroencephalography (EEG) is used worldwide to classify epileptic seizures. Identifying an epileptic seizure by manual EEG analysis is a crucial task for the neurologist, as it takes a great deal of effort and time. The risk of human error in EEG is always high, as acquiring the signals requires manual intervention. Disease diagnosis using machine learning (ML) has been explored continuously since its inception; moreover, where large numbers of datasets have to be analyzed, ML is acting as a boon for doctors. In this research paper, the authors propose two different ML models, logistic regression (LR) and Naïve Bayes (NB), to predict epileptic seizures based on general parameters. The two techniques are applied to the epileptic seizure recognition dataset available in the UCI ML repository. The algorithms are implemented with an 80:20 train-test ratio (80% for training and 20% for testing), and the performance of the models was validated by 10-fold cross-validation. The proposed study achieved accuracies of 81.87% and 95.49% for LR and NB, respectively.
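
A minimal sketch of the evaluation protocol described (80:20 split plus 10-fold cross-validation for LR and NB); the feature matrix here is synthetic, not the UCI dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for EEG-derived features and seizure/no-seizure labels
rng = np.random.default_rng(9)
X = rng.normal(size=(1000, 20))
y = (X[:, :3].sum(axis=1) + rng.normal(0, 1, 1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.20,
                                          random_state=9, stratify=y)
for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("NB", GaussianNB())]:
    cv = cross_val_score(clf, X_tr, y_tr, cv=10)          # 10-fold CV
    test_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)      # held-out 20%
    print(f"{name}: CV = {cv.mean():.3f}, test = {test_acc:.3f}")
```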

Keywords: epileptic seizure recognition, logistic regression, Naïve Bayes, machine learning

Procedia PDF Downloads 38
18850 New Approach for Load Modeling

Authors: Slim Chokri

Abstract:

Load forecasting is one of the central functions in power system operations. Electricity cannot be stored, which means that, for an electric utility, an estimate of future demand is necessary for managing production and purchasing in an economically reasonable way. A majority of recently reported approaches are based on neural networks. The attraction of these methods lies in the assumption that neural networks are able to learn properties of the load. However, the development of the methods is not finished, and the lack of comparative results on different model variations is a problem. This paper presents a new approach to predicting the Tunisian daily peak load. The proposed method employs a computational intelligence scheme based on a fuzzy neural network (FNN) and support vector regression (SVR). The experimental results obtained indicate that our proposed FNN-SVR technique gives significantly better prediction accuracy compared to some classical techniques.
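
A rough sketch of the SVR component only (the fuzzy neural network stage and the Tunisian load data are not reproduced; the features and data are invented):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical daily features: yesterday's peak, weekly lag, temperature
rng = np.random.default_rng(10)
X = np.column_stack([rng.uniform(2000, 3500, 730),   # peak(t-1), MW
                     rng.uniform(2000, 3500, 730),   # peak(t-7), MW
                     rng.uniform(5, 40, 730)])       # temperature, °C
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 8 * X[:, 2] + rng.normal(0, 40, 730)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=10)
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100, epsilon=5))
svr.fit(X_tr, y_tr)
print(f"R^2 on held-out days: {svr.score(X_te, y_te):.3f}")
```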

Keywords: neural network, load forecasting, fuzzy inference, machine learning, fuzzy modeling and rule extraction, support vector regression

Procedia PDF Downloads 409
18849 Effect of Peppermint Essential Oil versus a Mixture of Formic and Propionic Acids on Corn Silage Volatile Fatty Acid Score

Authors: Mohsen Danesh Mesgaran, Ali Hodjatpanah Montazeri, Alireza Vakili, Mansoor Tahmasbei

Abstract:

A study was conducted to compare the effects of peppermint essential oil and a mixture of formic and propionic acids on the volatile fatty acid (VFA) proportions and VFA score of corn silage. Chopped whole-crop corn (control) was treated with peppermint essential oil (240 mg kg-1 DM) or a 2:1 mixture of formic and propionic acids at 0.4% of fresh forage weight, and ensiled for 30 days. A silage extract was then prepared, and the concentration of each VFA was determined using gas chromatography. The VFA score was calculated according to the patented formula proposed by the Dairy One scientific committee; the score is calculated from the positive impact of lactic and acetic acids versus the negative effect of butyric acid to give a single value for evaluating silage quality. The essential oil lowered the pH and increased the concentrations of lactic and acetic acids in the silage extract. All corn silages evaluated in this study had a VFA score between 6 and 8. However, the silage with peppermint essential oil had a lower VFA score than the other treatments. Both additives caused a significant improvement in silage aerobic stability.

Keywords: peppermint, essential oil, corn silage, VFA (volatile fatty acids)

Procedia PDF Downloads 364