Search results for: mixture regression model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19236

18966 Nowcasting Indonesian Economy

Authors: Ferry Kurniawan

Abstract:

In this paper, we nowcast quarterly output growth in Indonesia by exploiting higher-frequency monthly indicators with a mixed-frequency factor model that combines quarterly and monthly data. Nowcasting quarterly GDP is particularly relevant for the central bank of Indonesia, which sets the policy rate at its monthly Board of Governors Meeting; one of the important steps in that meeting is the assessment of the current state of the economy. An accurate and up-to-date quarterly GDP nowcast, revised each time new monthly information becomes available, is therefore of clear interest to the central bank, for example because the initial assessment of the current state of the economy, including the nowcast, serves as input for longer-term forecasts. We consider a small-scale mixed-frequency factor model to produce nowcasts. In particular, we specify variables as year-on-year growth rates, so the relation between quarterly and monthly data is expressed in year-on-year growth rates. To assess the performance of the model, we compare the nowcasts with two other approaches: an autoregressive model, which is often difficult to beat when forecasting output growth, and Mixed Data Sampling (MIDAS) regression. Both the mixed-frequency factor model and MIDAS nowcasts are produced from the same set of monthly indicators, so the nowcast performance of the two approaches can be compared directly. To preview the results, exploiting monthly indicators with the mixed-frequency factor model and MIDAS regression improves nowcast accuracy over a benchmark simple autoregressive model that uses only quarterly data. However, it is not clear whether MIDAS or the mixed-frequency factor model is better: neither set of nowcasts encompasses the other, suggesting that both are valuable for nowcasting GDP but neither is sufficient on its own. By combining the two individual nowcasts, we find that the combination not only increases accuracy relative to the individual nowcasts but also lowers the risk of the worst individual performance.
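
For illustration only (not from the paper), the simple idea of combining two nowcasts can be sketched in Python: a weight on the factor-model nowcast is chosen to minimize in-sample squared error against realized growth, and the combination is compared with each individual nowcast. All data and names here are hypothetical.

```python
import numpy as np

def combine_nowcasts(y, f1, f2):
    """Combine two nowcast series with the weight w that minimizes the
    in-sample squared error of the combination w*f1 + (1-w)*f2."""
    grid = np.linspace(0.0, 1.0, 101)
    errors = [np.mean((y - (w * f1 + (1 - w) * f2)) ** 2) for w in grid]
    w_star = grid[int(np.argmin(errors))]
    return w_star, w_star * f1 + (1 - w_star) * f2

# Hypothetical quarterly GDP growth and two competing nowcast series
rng = np.random.default_rng(0)
y = rng.normal(5.0, 1.0, 40)                    # year-on-year growth, %
f_factor = y + rng.normal(0.0, 0.6, 40)         # factor-model nowcast (simulated)
f_midas = y + rng.normal(0.0, 0.7, 40)          # MIDAS nowcast (simulated)

w, combo = combine_nowcasts(y, f_factor, f_midas)
print(f"weight on factor model: {w:.2f}")
print(f"RMSE factor {np.sqrt(np.mean((y - f_factor) ** 2)):.3f}, "
      f"MIDAS {np.sqrt(np.mean((y - f_midas) ** 2)):.3f}, "
      f"combined {np.sqrt(np.mean((y - combo) ** 2)):.3f}")
```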

Keywords: nowcasting, mixed-frequency data, factor model, nowcasts combination

Procedia PDF Downloads 305
18965 Evaluation of Toxicity of Some Fungicides Against the Pathogen Fusarium sp.

Authors: M. Djekoun, H. Berrebah, M. R. Djebar

Abstract:

Fusarium wilt attacks plants of major economic interest, including wheat. This disease causes many problems for farmers, and the resulting economic losses are often very heavy. Chemical control is currently one of the most effective ways to fight these diseases. In this study, the efficacy of three fungicides (tebuconazole, thiram and a fludioxonil-difenoconazole mixture) was tested in vitro on the phytopathogenic Fusarium sp. isolated from wheat seeds. The active ingredients were tested at different concentrations: 0.06, 1.39, 2.79, 5.58, and 11.16 mg/l for tebuconazole; 0.035, 0.052, 0.105, 0.21, and 0.42 mg/l for thiram; and finally, for the fludioxonil-difenoconazole mixture, four concentrations were tested: 0.05, 0.1, 0.5, and 1 mg/l. Toxicity responses were expressed as the effective concentration that inhibits mycelial growth by 50% (EC50). Of the three fungicides, thiram proved to be the most effective, with an EC50 value of about 0.15 mg/l, followed by the fludioxonil-difenoconazole mixture with 0.27 mg/l and finally tebuconazole with a value of 3.79 mg/l.
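
As a hedged illustration of how an EC50 can be obtained from growth-inhibition data (the paper does not describe its fitting procedure), the sketch below fits a log-logistic dose-response curve with SciPy; the concentration grid matches the thiram doses above, but the inhibition values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(c, ec50, hill):
    """Fraction of mycelial growth inhibition at concentration c (mg/l)."""
    return 1.0 / (1.0 + (ec50 / c) ** hill)

# Concentrations as in the thiram series; inhibition fractions are hypothetical
conc = np.array([0.035, 0.052, 0.105, 0.21, 0.42])
inhib = np.array([0.18, 0.25, 0.41, 0.62, 0.83])

(ec50, hill), _ = curve_fit(log_logistic, conc, inhib, p0=[0.1, 1.0])
print(f"estimated EC50 = {ec50:.3f} mg/l, Hill slope = {hill:.2f}")
```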

Keywords: Fusarium sp, thiram, tebuconazole, fludioxonil, difenoconazole, EC50

Procedia PDF Downloads 513
18964 Efficient Estimation for the Cox Proportional Hazards Cure Model

Authors: Khandoker Akib Mohammad

Abstract:

While analyzing time-to-event data, it is possible that a certain fraction of subjects will never experience the event of interest; they are said to be cured. When this feature of survival models is taken into account, the models are commonly referred to as cure models. In the presence of covariates, the conditional survival function of the population can be modelled with the cure model, which depends on the probability of being uncured (incidence) and the conditional survival function of the uncured subjects (latency); a combination of logistic regression and Cox proportional hazards (PH) regression is used to model the incidence and latency, respectively. In this paper, we show the asymptotic normality of the profile likelihood estimator via an asymptotic expansion of the profile likelihood and obtain the explicit form of the variance estimator with an implicit function in the profile likelihood. We also show that the efficient score function, based on projection theory, and the profile likelihood score function are equal. Our contribution in this paper is that we express the efficient information matrix as the variance of the profile likelihood score function. A simulation study suggests that the estimated standard errors from bootstrap samples (SMCURE package) and from the profile likelihood score function (our approach) provide similar, comparable results. The numerical results of our proposed method are also shown using the melanoma data from the SMCURE R package, and we compare the results with the output obtained from the SMCURE package.
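
A minimal sketch of the mixture cure model structure described above, with logistic incidence and Cox PH latency; the coefficients and the exponential baseline survival are purely illustrative assumptions, not estimates from the paper.

```python
import numpy as np

def incidence(z, gamma):
    """Probability of being uncured, modelled by logistic regression."""
    return 1.0 / (1.0 + np.exp(-(z @ gamma)))

def latency_survival(t, x, beta, s0):
    """Conditional survival of uncured subjects under a Cox PH model:
    S_u(t | x) = S0(t) ** exp(x'beta), with baseline survival s0(t)."""
    return s0(t) ** np.exp(x @ beta)

def population_survival(t, x, z, beta, gamma, s0):
    """Mixture cure model: S_pop(t) = 1 - pi(z) + pi(z) * S_u(t | x)."""
    pi = incidence(z, gamma)
    return 1.0 - pi + pi * latency_survival(t, x, beta, s0)

# Illustrative (hypothetical) coefficients and an exponential baseline
s0 = lambda t: np.exp(-0.1 * t)
x = np.array([1.0, 0.5]); z = np.array([1.0, 0.5])
beta = np.array([0.3, -0.2]); gamma = np.array([-0.5, 1.0])
print(population_survival(5.0, x, z, beta, gamma, s0))
```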

Keywords: Cox PH model, cure model, efficient score function, EM algorithm, implicit function, profile likelihood

Procedia PDF Downloads 108
18963 Assessing the Impacts of Urbanization on Urban Precincts: A Case of Golconda Precinct, Hyderabad

Authors: Sai Akhila Budaraju

Abstract:

Heritage sites are an integral part of cities and carry a sense of identity for cities and towns, but the process of urbanization poses a potential threat of losing these heritage sites and monuments. The Central and State Governments have listed the historic Golconda Fort as a Monument of National Importance and designated the heritage precinct, with eight heritage-listed buildings and two historical sites, for conservation and preservation; however, because the IT corridor 6 km away accommodates more people, the precinct is under constant pressure. The heritage precinct possesses high property values, being a prime location connecting the IT corridor and the central business district (CBD). The primary objective of the study was to identify and assess the factors affecting the heritage precinct through mapping and documentation, empirical analysis, ordinal regression analysis and a hedonic pricing model. Ordinal regression analysis was used to identify the factors that contribute to changes in the precinct due to urbanization. The hedonic pricing model was used to establish whether the presence of historical monuments also contributes to property values and, if so, to what extent. These methods, together with field visits, indicate that the physical and socio-economic factors and the neighborhood characteristics of the precinct contribute to property values. The findings, and the potential elements derived from an analysis of the Development Control Rules, were framed as recommendations to integrate the old and newly built environments.
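
To make the hedonic pricing idea concrete, here is a small illustrative sketch using statsmodels: log price is regressed on structural attributes and a hypothetical distance-to-monument variable. The data are simulated and the variable names are assumptions rather than the study's actual covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical property data: structural attributes plus distance (km)
# to the nearest listed heritage monument.
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "area_sqm": rng.uniform(60, 300, n),
    "age_years": rng.integers(1, 40, n),
    "dist_monument_km": rng.uniform(0.2, 6.0, n),
})
# Simulated log price: positive effect of area, negative of age and distance
df["log_price"] = (10 + 0.004 * df.area_sqm - 0.01 * df.age_years
                   - 0.05 * df.dist_monument_km + rng.normal(0, 0.1, n))

X = sm.add_constant(df[["area_sqm", "age_years", "dist_monument_km"]])
model = sm.OLS(df["log_price"], X).fit()
# A negative, significant coefficient on dist_monument_km would suggest the
# monument's presence is capitalised into nearby property values.
print(model.params)
```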

Keywords: heritage planning, heritage conservation, hedonic pricing model, ordinal regression analysis

Procedia PDF Downloads 162
18962 Dielectric Recovery Characteristics of High Voltage Gas Circuit Breakers Operating with CO₂ Mixture

Authors: Peng Lu, Branimir Radisavljevic, Martin Seeger, Daniel Over, Torsten Votteler, Bernardo Galletti

Abstract:

CO₂-based gas mixtures exhibit huge potential as the interruption medium for replacing SF₆ in high voltage switchgears. In this paper, the recovery characteristics of the dielectric strength of a CO₂-O₂ mixture in the post-arc phase after current zero are presented. As representative examples, the dielectric recovery curves under conditions of different gas filling pressures and short-circuit current amplitudes are shown. A series of dielectric recovery measurements suggests that the dielectric recovery rate is proportional to the mass flux of the blowing gas, and that the dielectric strength recovers faster in the case of lower short-circuit currents.

Keywords: CO₂ mixture, high voltage circuit breakers, dielectric recovery rate, short-circuit current, mass flux

Procedia PDF Downloads 165
18961 Estimation of Missing Values in Aggregate Level Spatial Data

Authors: Amitha Puranik, V. S. Binu, Seena Biju

Abstract:

Missing data is a common problem in spatial analysis, especially at the aggregate level. Missingness can occur in the covariates, in the response variable, or in both at a given location. Many missing-data techniques are available to estimate missing values, but not all of these methods can be applied to spatial data, since the data are autocorrelated. Hence there is a need for a method that estimates the missing values of both the response variable and the covariates in spatial data while taking the spatial autocorrelation into account. The present study aims to develop a model to estimate missing data points at the aggregate level in spatial data by accounting for (a) spatial autocorrelation of the response variable, (b) spatial autocorrelation of the covariates, and (c) correlation between the covariates and the response variable. Estimating missing values in spatial data requires a model that explicitly accounts for the spatial autocorrelation. The proposed model not only accounts for spatial autocorrelation but also utilizes the correlations that exist between covariates, within covariates, and between the response variable and the covariates. Precise estimation of missing data points in spatial data will increase the precision of the estimated effects of the independent variables on the response variable in spatial regression analysis.

Keywords: spatial regression, missing data estimation, spatial autocorrelation, simulation analysis

Procedia PDF Downloads 343
18960 Indian Premier League (IPL) Score Prediction: Comparative Analysis of Machine Learning Models

Authors: Rohini Hariharan, Yazhini R, Bhamidipati Naga Shrikarti

Abstract:

In the realm of cricket, particularly within the context of the Indian Premier League (IPL), the ability to predict team scores accurately holds significant importance for cricket enthusiasts and stakeholders alike. This paper presents a comprehensive study on IPL score prediction utilizing various machine learning algorithms, including Support Vector Machines (SVM), XGBoost, Multiple Regression, Linear Regression, K-nearest neighbors (KNN), and Random Forest. Through meticulous data preprocessing, feature engineering, and model selection, we aimed to develop a robust predictive framework capable of forecasting team scores with high precision. Our experimentation involved the analysis of historical IPL match data encompassing diverse match and player statistics. Leveraging this data, we employed state-of-the-art machine learning techniques to train and evaluate the performance of each model. Notably, Multiple Regression emerged as the top-performing algorithm, achieving an accuracy of 77.19% and a precision of 54.05% (within a threshold of +/- 10 runs). This research contributes to the advancement of sports analytics by demonstrating the efficacy of machine learning in predicting IPL team scores. The findings underscore the potential of advanced predictive modeling techniques to provide valuable insights for cricket enthusiasts, team management, and betting agencies. Additionally, this study serves as a benchmark for future research aimed at enhancing the accuracy and interpretability of IPL score prediction models.
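
A rough sketch of the kind of model comparison described, using scikit-learn on simulated innings data and the same "within +/- 10 runs" accuracy notion; the features and data are hypothetical stand-ins for the historical IPL statistics used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

# Hypothetical features: current runs, wickets down, overs bowled, run rate
rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([rng.integers(20, 120, n), rng.integers(0, 8, n),
                     rng.uniform(5, 15, n), rng.uniform(5, 11, n)])
y = X[:, 0] + (20 - X[:, 2]) * X[:, 3] - 4 * X[:, 1] + rng.normal(0, 8, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {"Linear": LinearRegression(), "KNN": KNeighborsRegressor(),
          "SVR": SVR(), "RandomForest": RandomForestRegressor(random_state=0)}
for name, m in models.items():
    pred = m.fit(X_tr, y_tr).predict(X_te)
    within_10 = np.mean(np.abs(pred - y_te) <= 10)   # share within +/- 10 runs
    print(f"{name:12s} accuracy(+/-10 runs) = {within_10:.2%}")
```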

Keywords: indian premier league (IPL), cricket, score prediction, machine learning, support vector machines (SVM), xgboost, multiple regression, linear regression, k-nearest neighbors (KNN), random forest, sports analytics

Procedia PDF Downloads 9
18959 Weighted Rank Regression with Adaptive Penalty Function

Authors: Kang-Mo Jung

Abstract:

The use of regularization for statistical methods has become popular. The least absolute shrinkage and selection operator (LASSO) framework has become the standard tool for sparse regression. However, it is well known that the LASSO is sensitive to outliers and leverage points. We consider a new robust estimator composed of a weighted loss function of the pairwise differences of residuals and an adaptive penalty function regulating the tuning parameter for each variable. Rank regression is resistant to regression outliers, but not to leverage points. By adopting a weighted loss function, the proposed method is robust to leverage points in the predictor variables. Furthermore, the adaptive penalty function gives good statistical properties in variable selection, such as the oracle property and consistency. We develop an efficient algorithm to compute the proposed estimator using basic functions in the R language, with an optimal tuning parameter chosen by the Bayesian information criterion (BIC). Numerical simulations show that the proposed estimator is effective for analyzing real and contaminated data sets.
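
A hedged sketch of the idea: the weighted pairwise-difference (rank) loss plus an adaptive L1 penalty is minimized numerically. This is not the authors' implementation or algorithm, just a small illustration with simulated data, a hand-set down-weight for a leverage point, and adaptive weights built from a least-squares pilot fit.

```python
import numpy as np
from scipy.optimize import minimize

def weighted_rank_lasso(X, y, weights, lam):
    """Minimize the weighted pairwise-difference (rank) loss plus an
    adaptive L1 penalty: sum_{i<j} w_i w_j |e_i - e_j| + sum_k lam_k |b_k|."""
    n, p = X.shape
    i_idx, j_idx = np.triu_indices(n, k=1)

    def loss(beta):
        e = y - X @ beta
        pair = np.abs(e[i_idx] - e[j_idx]) * weights[i_idx] * weights[j_idx]
        return pair.sum() + np.sum(lam * np.abs(beta))

    # Powell is derivative-free, so the non-smooth loss is handled directly
    return minimize(loss, np.zeros(p), method="Powell").x

# Simulated data with one leverage point that is down-weighted
rng = np.random.default_rng(3)
X = rng.normal(size=(60, 3)); X[0] = [8, 8, 8]            # leverage point
y = X @ np.array([1.5, 0.0, -2.0]) + rng.normal(0, 0.5, 60)
w = np.ones(60); w[0] = 0.1                                # down-weight it
pilot = np.linalg.lstsq(X, y, rcond=None)[0]
lam = 0.5 / (np.abs(pilot) + 1e-6)                         # adaptive weights
print(weighted_rank_lasso(X, y, w, lam))
```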

Keywords: adaptive penalty function, robust penalized regression, variable selection, weighted rank regression

Procedia PDF Downloads 429
18958 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model

Authors: Nureni O. Adeboye, Dawud A. Agunbiade

Abstract:

This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous work on balanced panel data estimation, in the context of fitting a PDRM for banks' audit fees. The estimation of this model was achieved through the derivation of a joint Lagrange Multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, and a conditional LM test for homoscedasticity given first-order positive serial correlation via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1,000 times, and the assessment of the three estimators considered is based on the variance, absolute bias (ABIAS), mean square error (MSE) and root mean square error (RMSE) of the parameter estimates. Eighteen different models under different specified conditions were fitted, and the best-fitted model is that of the within estimator when heteroscedasticity is severe at either zero or positive serial correlation. The LM test results show that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of heteroscedasticity function, which establishes that banks' operations are severely heteroscedastic in nature with little or no periodicity effects.

Keywords: audit fee, heteroscedasticity, lagrange multiplier test, Monte-Carlo scheme, periodicity

Procedia PDF Downloads 113
18957 Effects of Some Fungicides on Mycelial Growth of Fusarium spp.

Authors: M. Djekoun, H. Berrebah, M. R. Djebar

Abstract:

Fusarium wilt is a destructive disease of small-grain cereal crops. It affects yields as well as crop quality, and the resulting economic losses are often very heavy. Chemical control is currently one of the most effective ways to fight these diseases. In this study, the efficacy of three fungicides (tebuconazole, thiram, and a fludioxonil-difenoconazole mixture) was tested in vitro on the phytopathogenic Fusarium spp. isolated from wheat seeds. The active ingredients were tested at different concentrations: 0.06, 1.39, 2.79, 5.58, and 11.16 mg/l for tebuconazole; 0.035, 0.052, 0.105, 0.21, and 0.42 mg/l for thiram; and finally, for the fludioxonil-difenoconazole mixture, four concentrations were tested: 0.05, 0.1, 0.5 and 1 mg/l. Toxicity responses were expressed as the effective concentration that inhibits mycelial growth by 50% (EC50). Of the three fungicides, thiram proved to be the most effective, with an EC50 value of about 0.15 mg/l, followed by the fludioxonil-difenoconazole mixture with 0.27 mg/l and finally tebuconazole with a value of 3.79 mg/l.

Keywords: Fusarium spp., thiram, tebuconazole, fludioxonil, difenoconazole, percentage of inhibition, EC50

Procedia PDF Downloads 334
18956 Microwave Dielectric Constant Measurements of Titanium Dioxide Using Five Mixture Equations

Authors: Jyh Sheen, Yong-Lin Wang

Abstract:

This research is dedicated to finding a different measurement procedure for the microwave dielectric properties of ceramic materials with high dielectric constants. For a composite of ceramic dispersed in a polymer matrix, the dielectric constants of composites with different concentrations can be obtained from various mixture equations. The other use of a mixture rule is to calculate the permittivity of the ceramic from measurements on the composite. To this end, the analysis method and theoretical accuracy of six basic mixture laws, derived from three basic particle shapes of ceramic fillers, have been reported for ceramic dielectric constants below 40 at microwave frequencies. Similar research has been done for other well-known mixture rules; it has shown that both physical curve matching with experimental results and low theoretical error are important for calculation accuracy. Recently, a modified mixture equation for high-dielectric-constant ceramics at microwave frequencies was presented for strontium titanate (SrTiO3); it was selected from five well-known mixing rules and showed good accuracy for high-dielectric-constant measurements. However, the accuracy of this modified equation for other high-dielectric-constant materials is still unclear. Therefore, the five well-known mixing rules are examined again to understand their application to other high-dielectric-constant ceramics. Another high-dielectric-constant ceramic, TiO2 with a dielectric constant of 100, was chosen for this research, and the corresponding theoretical error equations are derived. In addition to the theoretical work, experimental measurements are always required. Titanium dioxide is an interesting ceramic for microwave applications; in this research its powder is adopted as the filler material and polyethylene powder as the matrix material. The dielectric constants of ceramic-polyethylene composites with various compositions were measured at 10 GHz. The theoretical curves of the five published mixture equations are plotted together with the measured results to examine the curve-matching condition of each rule. Finally, based on the experimental observations and theoretical analysis, one of the five rules was selected and modified into a new powder mixture equation. This modified rule shows very good curve matching with the measurement data and low theoretical error. The dielectric constant of the pure filler medium (titanium dioxide) can then be calculated from the measured dielectric constants of the composites by means of those mixing equations, and the accuracy of estimating the dielectric constant of the pure ceramic by the various mixture rules is compared. The modified mixture rule also shows good accuracy for the dielectric constant of titanium dioxide. This approach can be applied to the measurement of the microwave dielectric properties of other high-dielectric-constant ceramic materials in the future.
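
For concreteness, a small sketch of one classical mixing law, the logarithmic (Lichtenecker) rule, and its inversion to back-calculate the filler permittivity from a composite measurement; this rule is only an assumed example and is not necessarily the modified equation developed in the paper. The polyethylene permittivity of about 2.3 is a typical literature value.

```python
import numpy as np

def lichtenecker(eps_filler, eps_matrix, vol_frac):
    """Logarithmic (Lichtenecker) mixture rule for composite permittivity."""
    return np.exp(vol_frac * np.log(eps_filler)
                  + (1.0 - vol_frac) * np.log(eps_matrix))

def filler_from_composite(eps_composite, eps_matrix, vol_frac):
    """Invert the rule: estimate the pure-filler permittivity from one
    measured composite permittivity at a known filler volume fraction."""
    return np.exp((np.log(eps_composite)
                   - (1.0 - vol_frac) * np.log(eps_matrix)) / vol_frac)

eps_pe = 2.3            # polyethylene matrix at 10 GHz (typical value)
eps_tio2 = 100.0        # nominal TiO2 permittivity
for v in [0.1, 0.2, 0.3]:
    eps_c = lichtenecker(eps_tio2, eps_pe, v)
    est = filler_from_composite(eps_c, eps_pe, v)
    print(f"v={v:.1f}  composite eps = {eps_c:.2f}  back-calculated filler eps = {est:.1f}")
```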

Keywords: microwave measurement, dielectric constant, mixture rules, composites

Procedia PDF Downloads 335
18955 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation

Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar

Abstract:

The modernization of computer technology and commercial computational fluid dynamics (CFD) simulation gives more detailed results than experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. Evaluation of pipeline erosion is a complex phenomenon to solve by numerical arithmetic techniques, whereas CFD simulation is an easy tool for resolving that type of problem. Erosion wear behaviour due to a solid-liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multi-phase Euler-Lagrange model was adopted to predict solid-particle erosion wear in a 22.5° pipe bend for the flow of a bottom ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with the standard k-ε turbulence model and a discrete phase model, and evaluates the erosion wear rate with the velocity varying from 2 to 4 m/s. The results show that the velocity of the solid-liquid mixture is the most dominant parameter compared to solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to the low inertia and the gravitational effect on the solid particulates, which leads to high erosion on the bottom side of the pipeline.

Keywords: computational fluid dynamics (CFD), erosion, slurry transportation, k-ε Model

Procedia PDF Downloads 381
18954 Inactivation Kinetics of DNA and RNA Viruses by Ozone-Air Mixture in a Flow Mixer

Authors: Nikolai Nosik, Vladislav Podmasterjev, Nina Kondrashina, Marina Chataeva, Olga Lobach, Dmitry Noosik, Sergei Razumovskii

Abstract:

The virucidal activity of ozone is well known: dissolved in water, it kills viruses very fast. The virucidal capacity of ozone in an ozone-air mixture is less well known. The goal of the study was to investigate the virucidal potential of the ozone-air mixture and the kinetics of virus inactivation. Materials and methods: ozone (O₃) was generated from oxygen with an ozonizer (1.0-75.0 mg/l). The ozone concentration was determined by spectrophotometric methods. Virus-contaminated samples were placed into the flowing reactor. The viruses, poliovirus type 1 (vaccine strain, Sabin) and adenovirus type 5, were obtained from the State virus collection. Titrations of viruses were carried out in appropriate cell cultures. The CxT value (mg/l x min) was calculated. Results: metallic, polycarbonate and "Kevlar" fiber samples were contaminated with virus, dried and treated with the ozone-air mixture in the flowing reactor. Kinetics of poliovirus inactivation: in 15 min at 5.0 mg/l, 2.0 lg TCID50 inhibition; in 15 min at 10 mg/l, 2.5 lg TCID50; 4.0 lg TCID50 inactivation of poliovirus (99.99%) was achieved after 75 min at an ozone concentration of 20.0 mg/l (CxT = 75, 150 and 1500 mg/l x min on all three types of surfaces). Inactivation of poliovirus was more effective when the virus-contaminated samples were wet (in 15 min at 20 mg/l, the inhibition of virus in dry samples was 2.0 lg TCID50 and in wet samples 4.0 lg TCID50). Adenovirus was less resistant to ozone treatment than poliovirus: 4.0 lg TCID50 inhibition was observed after 30 min of treatment with ozone at 20 mg/l (CxT = 300 mg/l x min for adenovirus, compared with 1500 for poliovirus). Conclusion: the ozone-air mixture inactivates viruses at rather high concentrations compared with the reported effect of ozone dissolved in water. Despite this, there is a difference in resistance to ozone between viruses: poliovirus is more resistant than adenovirus. The ozone-air mixture can be used for disinfection of large rooms, and keeping virus-contaminated surfaces wet allows the ozone load required for virus inactivation to be decreased.

Keywords: adenovirus, disinfection, ozone, poliovirus

Procedia PDF Downloads 318
18953 Influence of the Adsorption of Anionic–Nonionic Surfactants/Silica Nanoparticles Mixture on Clay Rock Minerals in Chemical Enhanced Oil Recovery

Authors: C. Mendoza Ramírez, M. Gambús Ordaz, R. Mercado Ojeda

Abstract:

Chemical flooding with surfactant solutions, based on their property of reducing the interfacial tension between crude oil and water, is a potential application of chemical enhanced oil recovery (CEOR); however, the high retention rate of surfactants associated with adsorption in the porous medium, together with the complex mineralogical composition of the reservoir rock, limits the crude oil displacement efficiency. This study evaluates the effect of the concentration of a mixture of anionic and non-ionic surfactants with silica nanoparticles on a rock sample composed of 25.14% clay minerals of the kaolinite, chlorite, halloysite and montmorillonite type, according to X-ray diffraction and scanning electron microscopy analyses (XRD and SEM, respectively). The amount of the surfactant mixture adsorbed on the clay rock minerals was analyzed by UV-visible spectroscopy, using its calibration curve and the 4-region isotherm model. The adsorption rate of the surfactant on the clay rock averages 32% across all concentrations, influenced by the substrate surface area of 1.6 m2/g and by the mineralogical composition of the clay, which increases the cation exchange capacity (CEC). In addition, in Regions I and II no final concentration measurement is evident in the UV-Vis, due to the surfactant's ionic nature, its high affinity with the clay rock and its low concentration. Finally, for potential CEOR applications the adsorption of these mixed surfactant systems must be considered, given their industrial relevance; it is concluded that concentrations in Regions III and IV can be used: initially the adsorption has an increasing slope and then levels off at equilibrium, where interfacial tension values of the order of 10⁻¹ mN/m are reached.

Keywords: anionic–nonionic surfactants, clay rock, adsorption, 4-region isotherm model, cation exchange capacity, critical micelle concentration, enhanced oil recovery

Procedia PDF Downloads 43
18952 Meta Model for Optimum Design Objective Function of Steel Frames Subjected to Seismic Loads

Authors: Salah R. Al Zaidee, Ali S. Mahdi

Abstract:

Except for simple problems of statically determinate structures, optimum design problems in structural engineering have implicit objective functions, in which structural analysis and design are required within each search loop. With these implicit functions, the structural engineer is usually forced to write his or her own computer code for analysis, design, and searching for the optimum design among many feasible candidates, and cannot take advantage of available software for structural analysis, design, and optimum searching. The meta-model is a regression model used to transform an implicit objective function into an explicit one, which in turn decouples the structural analysis and design processes from the optimum searching process. With the meta-model, well-known software for structural analysis and design can be used in sequence with optimum searching software. In this paper, the meta-model is used to develop an explicit objective function for plane steel frames subjected to dead, live, and seismic forces. The frame topology is assumed to be predefined based on architectural and functional requirements. Column and beam sections and different connection details are the main design variables in this study. Columns and beams are grouped to reduce the number of design variables and to make the problem similar to that adopted in engineering practice. Data for the implicit objective function were generated from analysis and assessment of many design proposals with CSI SAP software. These data were then used in SPSS software to develop a pure quadratic nonlinear regression model for the explicit objective function. Good correlations, with coefficients R² in the range from 0.88 to 0.99, were noted between the original implicit functions and the corresponding explicit functions generated with the meta-model.
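
A minimal sketch of the meta-model idea: sample the implicit objective (here a stand-in function playing the role of a structural-analysis run), then fit a pure quadratic surrogate with intercept, linear and squared terms; the SAP/SPSS workflow of the paper is replaced by scikit-learn purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Stand-in for the implicit objective: imagine each evaluation requires a
# full structural analysis/design run for design variables x1, x2.
rng = np.random.default_rng(7)
X = rng.uniform(1, 10, size=(60, 2))                     # sampled design proposals
y = (50 + 3 * X[:, 0] + 5 * X[:, 1] + 0.8 * X[:, 0] ** 2 + 1.2 * X[:, 1] ** 2
     + rng.normal(0, 1, 60))                             # "analysis" results

# Pure quadratic meta-model: intercept + linear + squared terms only
quad = PolynomialFeatures(degree=2, include_bias=False)
Z = quad.fit_transform(X)            # columns: x1, x2, x1^2, x1*x2, x2^2
pure_quadratic = [0, 1, 2, 4]        # drop the interaction column x1*x2
meta = LinearRegression().fit(Z[:, pure_quadratic], y)
print("R^2 of the explicit surrogate:", round(meta.score(Z[:, pure_quadratic], y), 3))
```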

Keywords: meta-model, objective function, steel frames, seismic analysis, design

Procedia PDF Downloads 216
18951 Phase II Monitoring of First-Order Autocorrelated General Linear Profiles

Authors: Yihua Wang, Yunru Lai

Abstract:

Statistical process control has been successfully applied in a variety of industries. In some applications, the quality of a process or product is better characterized and summarized by a functional relationship between a response variable and one or more explanatory variables. A collection of this type of data is called a profile. Profile monitoring is used to understand and check the stability of this relationship or curve over time. The independence assumption for the error term is commonly used in existing profile monitoring studies. However, in many applications the profile data show correlations over time. Therefore, we focus on a general linear regression model with first-order autocorrelation between profiles in this study. We propose an exponentially weighted moving average charting scheme to monitor this type of profile. A simulation study shows that our proposed methods outperform the existing schemes based on the average run length criterion.
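
As an illustrative sketch (a plain univariate EWMA on a profile coefficient, not the authors' exact scheme for autocorrelated general linear profiles), the code below computes EWMA values and time-varying control limits and flags the first signal after a simulated shift in the profile slope.

```python
import numpy as np

def ewma_chart(stats, lam=0.2, L=3.0):
    """EWMA monitoring of a sequence of profile statistics (e.g. the
    estimated slope of each profile). Returns EWMA values and control
    limits, with the in-control mean/sd estimated from the first 20 profiles."""
    mu0, sigma0 = stats[:20].mean(), stats[:20].std(ddof=1)
    z, values, limits = mu0, [], []
    for t, x in enumerate(stats, start=1):
        z = lam * x + (1 - lam) * z
        width = L * sigma0 * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        values.append(z)
        limits.append((mu0 - width, mu0 + width))
    return np.array(values), limits

rng = np.random.default_rng(11)
slopes = rng.normal(2.0, 0.1, 40)
slopes[30:] += 0.15                      # sustained shift in the profile slope
ewma, lims = ewma_chart(slopes)
signals = [t for t, (z, (lo, hi)) in enumerate(zip(ewma, lims), 1) if z < lo or z > hi]
print("first out-of-control signal at profile", signals[0] if signals else None)
```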

Keywords: autocorrelation, EWMA control chart, general linear regression model, profile monitoring

Procedia PDF Downloads 433
18950 Breast Cancer Detection Using Machine Learning Algorithms

Authors: Jiwan Kumar, Pooja, Sandeep Negi, Anjum Rouf, Amit Kumar, Naveen Lakra

Abstract:

In modern times, when health issues are increasing day by day, breast cancer is one of the most crucial to detect in its early stages. Doctors can use this model to tell their patients whether a tumour is not harmful (benign) or harmful (malignant). We have used machine learning to produce the model, with algorithms such as logistic regression, random forest, support vector classifier, Bayesian network and radial basis function. We used data on the crucial attributes and presented the results as pictures in order to make them easier for doctors to interpret. In this way, machine learning becomes better at finding breast cancer, which can lead to more lives saved and better health care.
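
A hedged sketch of the workflow on the publicly available Wisconsin breast cancer dataset bundled with scikit-learn; the Bayesian network and RBF models of the paper are omitted here, and the chosen classifiers and settings are assumptions made for illustration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Wisconsin breast cancer data shipped with scikit-learn (benign vs malignant)
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "LogisticRegression": LogisticRegression(max_iter=5000),
    "RandomForest": RandomForestClassifier(random_state=0),
    "SVC (RBF kernel)": SVC(),        # radial basis function kernel
}
for name, clf in models.items():
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name:20s} test accuracy = {acc:.3f}")
```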

Keywords: Bayesian network, radial basis function, ensemble learning, understandable, data making better, random forest, logistic regression, breast cancer

Procedia PDF Downloads 5
18949 Method of Estimating Absolute Entropy of Municipal Solid Waste

Authors: Francis Chinweuba Eboh, Peter Ahlström, Tobias Richards

Abstract:

Entropy, as an outcome of the second law of thermodynamics, measures the level of irreversibility associated with any process. The identification and reduction of irreversibility in the energy conversion process helps to improve the efficiency of the system. The entropy of a pure substance, known as absolute entropy, is determined at an absolute reference point and is useful in the thermodynamic analysis of chemical reactions; however, municipal solid waste (MSW) is a structurally complicated material with unknown absolute entropy. In this work, an empirical model to calculate the absolute entropy of MSW based on the content of carbon, hydrogen, oxygen, nitrogen, sulphur, and chlorine on a dry ash-free basis (daf) is presented. The proposed model was derived by statistical analysis from 117 relevant organic substances with known standard entropies, which represent the main constituents of MSW. The substances were divided into different waste fractions, namely food, wood/paper, textiles/rubber and plastics waste, and the standard entropies of each waste fraction and of the complete mixture were calculated. The correlation for the standard entropy of the complete waste mixture was found to be s°(MSW) = 0.0101 C + 0.0630 H + 0.0106 O + 0.0108 N + 0.0155 S + 0.0084 Cl (kJ K⁻¹ kg⁻¹), and the present correlation can be used for estimating the absolute entropy of MSW from the elemental composition of the fuel within the ranges 10.3% ≤ C ≤ 95.1%, 0.0% ≤ H ≤ 14.3%, 0.0% ≤ O ≤ 71.1%, 0.0% ≤ N ≤ 66.7%, 0.0% ≤ S ≤ 42.1%, 0.0% ≤ Cl ≤ 89.7%. The model is also applicable to the efficient modelling of a combustion system in a waste-to-energy plant.
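
The reported correlation translates directly into a small helper; the composition used in the example is hypothetical but lies within the stated validity ranges.

```python
def absolute_entropy_msw(C, H, O, N, S, Cl):
    """Standard entropy of MSW (kJ/(K*kg), dry ash-free basis) from the
    correlation reported in the abstract; inputs are wt% on a daf basis."""
    return (0.0101 * C + 0.0630 * H + 0.0106 * O
            + 0.0108 * N + 0.0155 * S + 0.0084 * Cl)

# Illustrative (hypothetical) composition within the stated validity ranges
print(absolute_entropy_msw(C=48.0, H=6.5, O=38.0, N=1.2, S=0.3, Cl=0.5))
```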

Keywords: absolute entropy, irreversibility, municipal solid waste, waste-to-energy

Procedia PDF Downloads 279
18948 MapReduce Logistic Regression Algorithms with RHadoop

Authors: Byung Ho Jung, Dong Hoon Lim

Abstract:

Logistic regression is a statistical method for analyzing a dataset in which one or more independent variables determine an outcome. It is used extensively in numerous disciplines, including the medical and social science fields. In this paper, we address the problem of estimating parameters of the logistic regression within the MapReduce framework using RHadoop, which integrates R and the Hadoop environment and is applicable to large-scale data. There are three learning algorithms for logistic regression, namely the gradient descent method, the cost minimization method and the Newton-Raphson method. The Newton-Raphson method does not require a learning rate, while the gradient descent and cost minimization methods need a manually chosen learning rate. The experimental results demonstrate that our learning algorithms using RHadoop scale well and efficiently process large data sets on commodity hardware. We also compared the performance of our Newton-Raphson method with the gradient descent and cost minimization methods; the results show that the Newton-Raphson method appears to be the most robust across all data tested.
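
To illustrate why Newton-Raphson needs no learning rate, here is a plain Python (not RHadoop/MapReduce) sketch of the update beta <- beta + H^{-1} g on simulated data; the distributed implementation described in the paper is not reproduced.

```python
import numpy as np

def logistic_newton_raphson(X, y, iters=25, tol=1e-8):
    """Fit logistic regression by Newton-Raphson (IRLS); the full step
    H^{-1} g is taken, so no learning rate has to be tuned."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))   # predicted probabilities
        W = mu * (1.0 - mu)                       # diagonal of the weight matrix
        grad = X.T @ (y - mu)
        hess = X.T @ (X * W[:, None])
        step = np.linalg.solve(hess, grad)
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

rng = np.random.default_rng(5)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
true_beta = np.array([-0.5, 1.2, -0.8])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ true_beta))))
print(logistic_newton_raphson(X, y))   # should be close to true_beta
```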

Keywords: big data, logistic regression, MapReduce, RHadoop

Procedia PDF Downloads 244
18947 Mixed Alumina-Silicate Materials for Groundwater Remediation

Authors: Ziyad Abunada, Abir Al-tabbaa

Abstract:

The current work investigates the effectiveness of combined mixed materials, mainly modified bentonites and organoclay, in treating contaminated groundwater. Sodium bentonite was modified with a quaternary amine surfactant, dimethyl ammonium chloride, to produce organoclay (OC). Inorgano-organo bentonite (IOB) was produced by intercalating an alkylbenzyl-dimethyl-ammonium chloride surfactant into sodium bentonite and pillaring it with a chlorohydrol pillaring agent. The efficiency of the materials was tested for both TEX compounds in model contaminated water and a mixture of organic contaminants found in groundwater samples collected from a contaminated site in the United Kingdom. The sorption data fitted well to both the Langmuir and Freundlich adsorption models, reflecting a double sorption model, with correlation coefficients greater than 0.89 for all materials. The mixed materials showed a higher sorptive capacity than the individual materials, with a preference order of X > E > T, and a maximum sorptive capacity of 21.8 mg/g was reported for the IOB-OC material for o-xylene. The mixed materials also showed at least two times higher affinity towards the mixture of organic contaminants in the groundwater samples. Other experimental parameters such as pH and contact time were also investigated. The pseudo-second-order rate equation provided the best description of the adsorption kinetics.
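
A minimal sketch of fitting the Langmuir and Freundlich isotherms mentioned above with SciPy; the equilibrium data are invented for illustration and do not reproduce the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k_l):
    """Langmuir isotherm: q = q_max * K_L * c / (1 + K_L * c)."""
    return q_max * k_l * c / (1.0 + k_l * c)

def freundlich(c, k_f, n):
    """Freundlich isotherm: q = K_F * c ** (1/n)."""
    return k_f * c ** (1.0 / n)

# Hypothetical equilibrium data: solute concentration (mg/l) vs uptake (mg/g)
c_eq = np.array([1, 2, 5, 10, 20, 40], dtype=float)
q_eq = np.array([4.1, 7.2, 12.5, 16.8, 19.7, 21.3])

(qm, kl), _ = curve_fit(langmuir, c_eq, q_eq, p0=[20, 0.1])
(kf, n), _ = curve_fit(freundlich, c_eq, q_eq, p0=[4, 2])
print(f"Langmuir:   q_max = {qm:.1f} mg/g, K_L = {kl:.3f} l/mg")
print(f"Freundlich: K_F = {kf:.2f}, n = {n:.2f}")
```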

Keywords: modified bentonite, groundwater, adsorption, contaminants

Procedia PDF Downloads 197
18946 The Impact of Public Open Space System on Housing Price in Chicago

Authors: Si Chen, Le Zhang, Xian He

Abstract:

The research explored the influence of the public open space system on housing prices through hedonic models, in order to support better open space plans and economic policies. We had three initial hypotheses: 1) the public open space system has an overall positive influence on surrounding housing prices; 2) different types of public open space have different levels of influence on surrounding housing prices; 3) walking and driving accessibility from a property to public open spaces have different statistical relations with housing prices. Cook County, Illinois, was chosen as the study area because of data availability, sufficient open space types, and long-term open space preservation strategies. We considered housing attributes, driving and walking accessibility scores from houses to nearby public open spaces, and driving accessibility scores to hospitals as explanatory features, and used actual housing sale prices in 2010 as the dependent variable in the hedonic models. Through ordinary least squares (OLS) regression analysis, General Moran's I analysis and geographically weighted regression analysis, we observed the statistical relations between public open spaces and housing sale prices in the three hedonic models built, and confirmed all three hypotheses.

Keywords: hedonic model, public open space, housing sale price, regression analysis, accessibility score

Procedia PDF Downloads 98
18945 In- and Out-Of-Sample Performance of Non-Symmetric Models in International Price Differential Forecasting in a Commodity Country Framework

Authors: Nicola Rubino

Abstract:

This paper presents an analysis of the nominal exchange rate movements of a group of commodity-exporting countries relative to the US dollar. Using a series of unrestricted self-exciting threshold autoregressive (SETAR) models, we model and evaluate sixteen national CPI price differentials relative to the US CPI. Out-of-sample forecast accuracy is evaluated by calculating mean absolute error measures on the basis of 253-month rolling-window forecasts, and the comparison is extended to three additional models, namely a logistic smooth transition regression (LSTAR), an additive nonlinear autoregressive model (AAR) and a simple linear neural network model (NNET). Our preliminary results confirm the presence of some form of TAR nonlinearity in the majority of the countries analyzed, with a relatively higher goodness of fit, with respect to the linear AR(1) benchmark, in five of the sixteen countries considered. Although no model appears to statistically prevail over the others, our final out-of-sample forecast exercise shows that SETAR models tend to have rather poor relative forecasting performance, especially when compared to alternative nonlinear specifications. Finally, by analyzing the implied half-lives of the estimated coefficients, our results confirm the presence, in the spirit of arbitrage-band adjustment, of band convergence with inner-unit-root behaviour in five of the sixteen countries analyzed.
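
As a rough illustration of a two-regime SETAR fit (a grid search over candidate thresholds with regime-wise OLS, not the authors' estimation code), the sketch below recovers the threshold of a simulated series whose AR(1) coefficient switches at zero.

```python
import numpy as np

def fit_setar1(y, delay=1, trim=0.15):
    """Two-regime SETAR(1): for each candidate threshold r, fit
    y_t = a + b*y_{t-d} by OLS separately in the regimes y_{t-d} <= r and
    y_{t-d} > r, and keep the threshold with the smallest total SSR."""
    y_lag, y_cur = y[:-delay], y[delay:]
    candidates = np.quantile(y_lag, np.linspace(trim, 1 - trim, 50))
    best_ssr, best_r, best_coefs = np.inf, None, None
    for r in candidates:
        ssr, coefs = 0.0, []
        for mask in (y_lag <= r, y_lag > r):
            X = np.column_stack([np.ones(mask.sum()), y_lag[mask]])
            beta = np.linalg.lstsq(X, y_cur[mask], rcond=None)[0]
            ssr += np.sum((y_cur[mask] - X @ beta) ** 2)
            coefs.append(beta)
        if ssr < best_ssr:
            best_ssr, best_r, best_coefs = ssr, r, coefs
    return best_r, best_coefs

# Simulated series whose AR(1) coefficient switches at a threshold of 0
rng = np.random.default_rng(2)
y = np.zeros(500)
eps = rng.normal(0, 0.2, 500)
for t in range(1, 500):
    phi = 0.8 if y[t - 1] <= 0.0 else -0.5
    y[t] = phi * y[t - 1] + eps[t]

threshold, (lower, upper) = fit_setar1(y)
print("estimated threshold:", round(threshold, 3))
print("lower-regime (a, b):", np.round(lower, 2), " upper-regime (a, b):", np.round(upper, 2))
```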

Keywords: transition regression model, real exchange rate, nonlinearities, price differentials, PPP, commodity points

Procedia PDF Downloads 251
18944 [Keynote Talk]: Photocatalytic Cleaning Performance of Air Filters for a Binary Mixture

Authors: Lexuan Zhong, Chang-Seo Lee, Fariborz Haghighat, Stuart Batterman, John C. Little

Abstract:

Ultraviolet photocatalytic oxidation (UV-PCO) technology has been recommended as a green approach to a healthy indoor environment when it is integrated into mechanical ventilation systems, both for the removal of inorganic and organic compounds and for energy saving due to reduced outdoor air intake. Although much research has been devoted to UV-PCO, limited information is available in the literature on UV-PCO behaviour when tested with mixtures. This project investigated UV-PCO performance and by-product generation using a single compound and a mixture of acetone and MEK, at 100 ppb each, in a single-pass duct system, in an effort to gain knowledge of the competitive photochemical reactions involved. The experiments were performed at 20% RH, 22 °C, and a gas flow rate of 128 m3/h (75 cfm). The results show that acetone and MEK mutually reduced each other's PCO removal efficiency, with the removal efficiency for acetone in particular becoming negative. These findings differ from previous observations of facilitatory effects on the adsorption of acetone and MEK on photocatalyst surfaces.

Keywords: by-products, inhibitory effect, mixture, photocatalytic oxidation

Procedia PDF Downloads 471
18943 Effects of Supplementation with Annatto (Bixa Orellana)-Derived δ-Tocotrienol on the Nicotine-Induced Reduction in Body Weight and 8-Cell Preimplantation Embryonic Development in Mice

Authors: M. H. Rajikin, S. M. M. Syairah, A. R. Sharaniza

Abstract:

The effects of nicotine on pre-partum body weight and preimplantation embryonic development have been reported previously. The present study was conducted to determine the effects of annatto (Bixa orellana)-derived delta-tocotrienol (TCT) (with 10% gamma-TCT isomer present) on the nicotine-induced reduction in body weight and 8-cell embryonic growth in mice. Twenty-four 6-8-week-old (23-25 g) female Balb/c mice were randomly divided into four groups (G1-G4; n=6). These groups were subjected to the following treatments for 7 consecutive days: G1 (control) was gavaged with 0.1 ml of tocopherol-stripped corn oil, G2 was subcutaneously (s.c.) injected with 3 mg/kg/day of nicotine, G3 received concurrent treatment of nicotine (3 mg/kg/day) and 60 mg/kg/day of the δ-TCT mixture (containing 90% delta and 10% gamma isomers), and G4 was given 60 mg/kg/day of the δ-TCT mixture alone. Body weights were recorded daily during the treatment. On Day 8, females were superovulated with 5 IU of Pregnant Mare's Serum Gonadotropin (PMSG), followed 48 hours later by 5 IU of human Chorionic Gonadotropin (hCG), before being mated with males at a ratio of 1:1. Females were sacrificed by cervical dislocation for embryo collection 48 hours post-coitum, and the collected embryos were cultured in vitro. Results showed that from Day 1 to Day 7, the body weight of the nicotine-treated group (G2) was significantly lower (p<0.05) than that of G1, G3 and G4. Intervention with the δ-TCT mixture (G3) brought the body weight close to that of the control group, as also observed in the group treated with the δ-TCT mixture alone (G4). The development of 8-cell embryos following in vitro culture (IVC) was totally inhibited in G2. Intervention with the δ-TCT mixture (G3) resulted in the production of 8-cell embryos, although not up to the level of the control group. Treatment with the δ-TCT mixture alone (G4) caused a significant increase in the average number of 8-cell embryos produced compared to G1. The present data indicate that the δ-TCT mixture was able to reverse the body weight loss in nicotine-treated mice and to improve the development of 8-cell embryos.

Keywords: δ-tocotrienol, body weight, nicotine, preimplantation embryonic development

Procedia PDF Downloads 325
18942 Interference among Lambsquarters and Oil Rapeseed Cultivars

Authors: Reza Siyami, Bahram Mirshekari

Abstract:

The seed and oil yield of rapeseed is considerably affected by weed interference, including mustard (Sinapis arvensis L.), lambsquarters (Chenopodium album L.) and redroot pigweed (Amaranthus retroflexus L.), throughout the East Azerbaijan province in Iran. To formulate the relationship between four independent growth variables measured in our experiment and a dependent variable, multiple regression analysis was carried out with the weed leaf number per plant (X1), green cover percentage (X2), LAI (X3) and leaf area per plant (X4) as independent variables and rapeseed oil yield as the dependent variable. The multiple regression equation is: seed essential oil yield (kg/ha) = 0.156 + 0.0325 X1 + 0.0489 X2 + 0.0415 X3 + 0.133 X4. Furthermore, stepwise regression analysis was carried out on the data obtained to test the significance of the independent variables affecting oil yield. The resulting stepwise regression equation is: oil yield = 4.42 + 0.0841 X2 + 0.0801 X3; R² = 81.5%. The stepwise regression analysis verified that the green cover percentage and LAI of the weed had a marked increasing effect on the oil yield of rapeseed.

Keywords: green cover percentage, independent variable, interference, regression

Procedia PDF Downloads 387
18941 A Case Comparative Study of Infant Mortality Rate in North-West Nigeria

Authors: G. I. Onwuka, A. Danbaba, S. U. Gulumbe

Abstract:

This study investigated the infant mortality rate as observed at a general hospital in Kaduna South, Kaduna State, North West Nigeria, and examined the causes of infant mortality. The data used for the analysis were collected at the statistics unit of the hospital. The analysis was carried out using the multiple linear regression technique, and this showed that there is a linear relationship between the dependent variable (deaths) and the independent variables (malaria, measles, anaemia, and coronary heart disease). The resulting model also revealed that a unit increase in each of these diseases would result in a unit increase in deaths recorded; 98.7% of the total variation in mortality is explained by the given model. The highest number of deaths was recorded in July 2005 and the lowest in October 2009. Recommendations were made based on the results of the study.

Keywords: infant mortality rate, multiple linear regression, diseases, serial correlation

Procedia PDF Downloads 298
18940 Artificial Neural Network and Statistical Method

Authors: Tomas Berhanu Bekele

Abstract:

Traffic congestion is one of the main transportation problems in developed as well as developing countries. Traffic control systems are based on the idea of avoiding traffic instabilities and homogenizing traffic flow in such a way that the risk of accidents is minimized and traffic flow is maximized. Lately, Intelligent Transport Systems (ITS) have become an important area of research for solving such road traffic issues and making smart decisions; they link people, roads and vehicles together using communication technologies to increase safety and mobility. Moreover, accurate prediction of road traffic is important for managing traffic congestion. The aim of this study is to develop an ANN model for the prediction of traffic flow and to compare the ANN model with a linear regression model of traffic flow prediction. Data extraction was carried out at 15-minute intervals from video recordings. Video of mixed traffic flow was taken and vehicles were counted during office hours in order to determine the traffic volume. Vehicles were classified into six categories, namely car, motorcycle, minibus, mid-bus, bus, and truck. The average time taken by each vehicle type to travel the trap length was measured from the time displayed on the video screen.

Keywords: intelligent transport system (ITS), traffic flow prediction, artificial neural network (ANN), linear regression

Procedia PDF Downloads 17
18939 Electricity Load Modeling: An Application to Italian Market

Authors: Giovanni Masala, Stefania Marica

Abstract:

Forecasting electricity load plays a crucial role in decision making and planning for economic purposes. Moreover, in the light of the recent privatization and deregulation of the power industry, forecasting future electricity load has turned out to be a very challenging problem. Empirical data on electricity load highlight a clear seasonal behaviour (higher load during the winter season), which is partly due to climatic effects. We also emphasize the presence of load periodicity on a weekly basis (electricity load is usually lower on weekends and holidays) and on a daily basis (electricity load is clearly influenced by the hour). Finally, a long-term trend may depend on the general economic situation (for example, industrial production affects electricity load). All these features must be captured by the model. The purpose of this paper is therefore to build an hourly electricity load model. The deterministic component of the model requires non-linear regression and Fourier series, while we investigate the stochastic component with econometric tools. The calibration of the model parameters is performed using data from the Italian market over a six-year period (2007-2012). We then perform a Monte Carlo simulation in order to compare the simulated data with the real data (both in-sample and out-of-sample). The reliability of the model is confirmed by standard tests, which highlight a good fit of the simulated values.
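
A small sketch of the deterministic part only: Fourier terms for daily, weekly and yearly periodicity plus a linear trend, fitted by least squares to simulated hourly load; the residuals would then feed the stochastic (e.g. ARMA-GARCH) component. The periods and data here are assumptions made for illustration.

```python
import numpy as np

def fourier_terms(hours, periods=(24.0, 168.0, 8766.0), harmonics=2):
    """Sine/cosine regressors capturing daily, weekly and yearly periodicity
    of hourly electricity load (8766 h is roughly an average year)."""
    cols = []
    for p in periods:
        for k in range(1, harmonics + 1):
            cols.append(np.sin(2 * np.pi * k * hours / p))
            cols.append(np.cos(2 * np.pi * k * hours / p))
    return np.column_stack(cols)

# Hypothetical hourly load with daily and weekly cycles plus a linear trend
rng = np.random.default_rng(4)
t = np.arange(24 * 7 * 52, dtype=float)
load = (1000 + 0.01 * t + 150 * np.sin(2 * np.pi * t / 24)
        + 60 * np.sin(2 * np.pi * t / 168) + rng.normal(0, 20, t.size))

X = np.column_stack([np.ones_like(t), t, fourier_terms(t)])
beta, *_ = np.linalg.lstsq(X, load, rcond=None)
residuals = load - X @ beta        # stochastic part, e.g. for ARMA-GARCH modelling
print("deterministic fit R^2:", round(1 - residuals.var() / load.var(), 3))
```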

Keywords: ARMA-GARCH process, electricity load, fitting tests, Fourier series, Monte Carlo simulation, non-linear regression

Procedia PDF Downloads 370
18938 Microwave Dielectric Relaxation Study of Diethanolamine with Triethanolamine from 10 MHz-20 GHz

Authors: A. V. Patil

Abstract:

A microwave dielectric relaxation study of a diethanolamine-triethanolamine binary mixture has been carried out over the frequency range of 10 MHz to 20 GHz, at various temperatures, using the time domain reflectometry (TDR) method for 11 concentrations of the system. The present work reveals molecular interactions between the same multi-functional groups [-OH and -NH2] of the alkanolamines (diethanolamine and triethanolamine) using different models, such as the Debye model, excess model, and Kirkwood model. The dielectric parameters, viz. the static dielectric constant (ε0) and relaxation time (τ), have been obtained with the Debye equation, characterized by a single relaxation time without a relaxation time distribution, by the least-squares fit method.
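
As a hedged illustration of fitting the single-relaxation Debye model, the sketch below stacks the real and imaginary parts of a simulated permittivity spectrum over 10 MHz-20 GHz and recovers the static permittivity, high-frequency permittivity and relaxation time with SciPy; the parameter values are invented, not the paper's results.

```python
import numpy as np
from scipy.optimize import curve_fit

def debye_stacked(freq_hz, eps_s, eps_inf, tau_ps):
    """Single-relaxation Debye model eps*(w) = eps_inf + (eps_s - eps_inf)/(1 + j*w*tau);
    real and imaginary parts are stacked so curve_fit can use both."""
    w = 2 * np.pi * freq_hz
    eps = eps_inf + (eps_s - eps_inf) / (1 + 1j * w * tau_ps * 1e-12)
    return np.concatenate([eps.real, eps.imag])

# Hypothetical complex-permittivity spectrum, 10 MHz - 20 GHz
f = np.logspace(7, np.log10(2e10), 60)
true = debye_stacked(f, 30.0, 3.5, 80.0)          # eps_s, eps_inf, tau in ps
rng = np.random.default_rng(6)
data = true + rng.normal(0, 0.1, true.size)

(eps_s, eps_inf, tau_ps), _ = curve_fit(debye_stacked, f, data, p0=[25, 3, 50])
print(f"eps_s = {eps_s:.1f}, eps_inf = {eps_inf:.1f}, tau = {tau_ps:.0f} ps")
```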

Keywords: diethanolamine, excess properties, kirkwood properties, time domain reflectometry, triethanolamine

Procedia PDF Downloads 271
18937 Adaptive Neuro Fuzzy Inference System Model Based on Support Vector Regression for Stock Time Series Forecasting

Authors: Anita Setianingrum, Oki S. Jaya, Zuherman Rustam

Abstract:

Forecasting stock prices is a challenging task due to the complexity of the time series data, which arises from the many variables that affect the stock market. Many time series models have been proposed before, but those previous models still have some problems: 1) they introduce subjectivity in choosing the technical indicators, and 2) they rely upon assumptions about the variables, so they are limited in their applicability to all datasets. Therefore, this paper studies a novel Adaptive Neuro-Fuzzy Inference System (ANFIS) time series model based on Support Vector Regression (SVR) for forecasting the stock market. In order to evaluate the performance of the proposed models, stock market transaction data of the TAIEX and HSI from January to December 2015 were collected as experimental datasets. As a result, the method outperformed its counterparts in terms of accuracy.

Keywords: ANFIS, fuzzy time series, stock forecasting, SVR

Procedia PDF Downloads 214