Search results for: MSW quantity prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3196

2236 Estimation of Desktop E-Wastes in Delhi Using Multivariate Flow Analysis

Authors: Sumay Bhojwani, Ashutosh Chandra, Mamita Devaburman, Akriti Bhogal

Abstract:

This article uses material flow analysis for estimating desktop e-waste in the Delhi/NCR region. The material flow analysis is based on sales data obtained from various sources. Much of the available sales data is unreliable because of the existence of a huge informal sector, which in India accounts for more than 90% of e-waste handling. Therefore, the scope of this study is limited to the formal sector. Also, for projection of the sales data till 2030, we have used linear regression to avoid complexity. The actual sales in the years following 2015 may vary non-linearly, but we have assumed a basic linear relation. The purpose of this study was to estimate the approximate quantity of desktop e-waste that we will have by the year 2030, so that we can start preparing for the ineluctable investment in the treatment of this ever-rising e-waste stream. The results of this study can be used to plan the installation of a treatment plant for e-waste in Delhi.
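A minimal sketch of the projection step described above, assuming hypothetical annual formal-sector sales figures and an illustrative desktop mass; a least-squares line is fitted and extrapolated to 2030, mirroring the linear assumption stated in the abstract (the real sales data are not reproduced here).

```python
import numpy as np

# Hypothetical formal-sector desktop sales (units per year) -- illustrative only.
years = np.array([2010, 2011, 2012, 2013, 2014, 2015])
sales = np.array([410_000, 455_000, 490_000, 530_000, 560_000, 600_000])

# Fit a straight line (linear regression) to the observed sales and extrapolate.
slope, intercept = np.polyfit(years, sales, 1)
future_years = np.arange(2016, 2031)
projected_sales = slope * future_years + intercept

# Crude stock-to-waste conversion: every unit sold eventually becomes e-waste,
# with an assumed average mass per desktop (illustrative value).
UNIT_MASS_KG = 9.0
cumulative_ewaste_tonnes = projected_sales.sum() * UNIT_MASS_KG / 1000.0

print(f"projected sales in 2030: {projected_sales[-1]:,.0f} units")
print(f"cumulative e-waste from 2016-2030 sales: ~{cumulative_ewaste_tonnes:,.0f} t")
```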

Keywords: e-wastes, Delhi, desktops, estimation

Procedia PDF Downloads 259
2235 A Generalized Weighted Loss for Support Vector Classification and Multilayer Perceptron

Authors: Filippo Portera

Abstract:

Standard algorithms usually employ a loss in which, for a regression task, each error is simply the absolute difference between the true value and the prediction. In the present work, we present several error-weighting schemes that generalize this consolidated routine. We study both a binary classification model for Support Vector Classification and a regression network for a Multilayer Perceptron. Results show that the error is never worse than with the standard procedure and in several cases it is better.
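A minimal sketch of the general idea of weighting individual errors, using scikit-learn's per-sample weights for an SVM classifier and a weighted squared loss for regression; the inverse-frequency weighting scheme is an illustrative assumption, not the scheme proposed in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Toy imbalanced binary problem.
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)

# Illustrative per-sample weights: the rarer class gets a larger weight.
class_freq = np.bincount(y) / len(y)
sample_weight = 1.0 / class_freq[y]

clf = SVC(kernel="rbf")
clf.fit(X, y, sample_weight=sample_weight)   # errors on rare samples cost more
print("training accuracy:", clf.score(X, y))

# The same idea for a regression loss: a generalized weighted squared error.
def weighted_squared_loss(y_true, y_pred, w):
    return np.mean(w * (y_true - y_pred) ** 2)
```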

Keywords: loss, binary-classification, MLP, weights, regression

Procedia PDF Downloads 95
2234 Potential of Safflower (Carthamus tinctorius L.) for Phytoremediation of Soils Contaminated with Heavy Metals

Authors: Violina R. Angelova, Vanja I. Akova, Stefan V. Krustev, Krasimir I. Ivanov

Abstract:

A field study was conducted to evaluate the efficacy of the safflower plant for phytoremediation of contaminated soils. The experiment was performed on agricultural fields contaminated by the Non-Ferrous-Metal Works near Plovdiv, Bulgaria. The concentrations of Pb, Zn and Cd in safflower (roots, stems, leaves and seeds), safflower oil and meal were determined. A correlation was found between the quantity of the mobile forms and the uptake of Pb, Zn and Cd by the safflower seeds. Safflower is a plant which is tolerant to heavy metals and can be grown on contaminated soils; it can be classed among the hyperaccumulators of cadmium and the accumulators of lead and zinc, and can be successfully used in the phytoremediation of heavy metal contaminated soils. The processing of seeds to oil and the use of the obtained oil for nutritional purposes will greatly reduce the cost of phytoremediation. The possibility of further industrial processing will make safflower an economically interesting crop for farmers applying phytoremediation technology.

Keywords: heavy metals, phytoremediation, polluted soils, safflower

Procedia PDF Downloads 318
2233 Analysis of Biomarkers in Intractable Epileptogenic Brain Networks with Independent Component Analysis and Deep Learning Algorithms: A Comprehensive Framework for Scalable Seizure Prediction with Unimodal Neuroimaging Data in Pediatric Patients

Authors: Bliss Singhal

Abstract:

Epilepsy is a prevalent neurological disorder affecting approximately 50 million individuals worldwide and 1.2 million Americans. There exist millions of pediatric patients with intractable epilepsy, a condition in which seizures fail to come under control. The occurrence of seizures can result in physical injury, disorientation, unconsciousness, and additional symptoms that could impede children's ability to participate in everyday tasks. Predicting seizures can help parents and healthcare providers take precautions, prevent risky situations, and mentally prepare children to minimize the anxiety and nervousness associated with the uncertainty of a seizure. This research proposes a comprehensive framework to predict seizures in pediatric patients by evaluating machine learning algorithms on unimodal neuroimaging data consisting of electroencephalogram signals. Bandpass filtering and independent component analysis proved to be effective in reducing the noise and artifacts in the dataset. The performance of various machine learning algorithms is evaluated on important metrics such as accuracy, precision, specificity, sensitivity, F1 score and MCC. The results show that the deep learning algorithms are more successful in predicting seizures than logistic regression and k-nearest neighbors. The recurrent neural network (RNN) gave the highest precision and F1 score, the long short-term memory (LSTM) network outperformed the RNN in accuracy, and the convolutional neural network (CNN) yielded the highest specificity. This research has significant implications for healthcare providers in proactively managing seizure occurrence in pediatric patients, potentially transforming clinical practices and improving pediatric care.
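A minimal sketch of the preprocessing chain named above (bandpass filtering followed by independent component analysis), applied to synthetic multi-channel EEG-like data; the filter band, sampling rate and component count are illustrative assumptions, not the study's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

fs = 256.0                      # assumed sampling rate (Hz)
n_channels, n_samples = 8, 4096
rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_channels, n_samples))   # stand-in for raw EEG

# 1) Bandpass filter each channel (0.5-40 Hz is a common EEG band).
b, a = butter(N=4, Wn=[0.5, 40.0], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, eeg, axis=1)

# 2) ICA to separate independent sources; artifact components would be
#    inspected and removed before reconstructing the cleaned signal.
ica = FastICA(n_components=n_channels, random_state=0)
sources = ica.fit_transform(filtered.T)      # shape: (n_samples, n_components)
cleaned = ica.inverse_transform(sources).T   # back to (n_channels, n_samples)

print(cleaned.shape)
```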

Keywords: intractable epilepsy, seizure, deep learning, prediction, electroencephalogram channels

Procedia PDF Downloads 84
2232 Gradient Boosted Trees on Spark Platform for Supervised Learning in Health Care Big Data

Authors: Gayathri Nagarajan, L. D. Dhinesh Babu

Abstract:

Health care is one of the prominent industries that generate voluminous data, thereby creating a need for machine learning techniques with big data solutions for efficient processing and prediction. Missing data, incomplete data, real-time streaming data, sensitive data, privacy, and heterogeneity are a few of the common challenges to be addressed for efficient processing and mining of health care data. In comparison with other applications, accuracy and fast processing are of higher importance for health care applications as they are directly related to human life. Though there are many machine learning techniques and big data solutions used for efficient processing and prediction in health care data, different techniques and different frameworks have proved to be effective for different applications, largely depending on the characteristics of the datasets. In this paper, we present a framework that uses the ensemble machine learning technique of gradient boosted trees for data classification in health care big data. The framework is built on the Spark platform, which is fast in comparison with other traditional frameworks. Unlike other works that focus on a single technique, our work presents a comparison of six different machine learning techniques along with gradient boosted trees on datasets of different characteristics. Five benchmark health care datasets are considered for experimentation, and the results of the different machine learning techniques are discussed in comparison with gradient boosted trees. The metrics chosen for comparison are the misclassification error rate and the run time of the algorithms. The goals of this paper are to i) compare the performance of gradient boosted trees with other machine learning techniques on the Spark platform specifically for health care big data and ii) discuss the results from the experiments conducted on datasets of different characteristics, thereby drawing inferences and conclusions. The experimental results show that, for the other machine learning techniques, accuracy is largely dependent on the characteristics of the datasets, whereas gradient boosted trees yield reasonably stable results in terms of accuracy without depending largely on the dataset characteristics.
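A minimal sketch of training gradient boosted trees on Spark with the spark.ml API, assuming a generic tabular health care dataset with a binary label column; the file path and column names are placeholders, not the benchmark datasets used in the paper.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import GBTClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("gbt-healthcare").getOrCreate()

# Placeholder path and columns; "label" is assumed to be a binary class column.
df = spark.read.csv("health_data.csv", header=True, inferSchema=True)
feature_cols = [c for c in df.columns if c != "label"]

assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
data = assembler.transform(df).select("features", "label")
train, test = data.randomSplit([0.8, 0.2], seed=42)

gbt = GBTClassifier(labelCol="label", featuresCol="features", maxIter=50)
model = gbt.fit(train)

pred = model.transform(test)
acc = MulticlassClassificationEvaluator(labelCol="label",
                                        metricName="accuracy").evaluate(pred)
print("misclassification error rate:", 1.0 - acc)
```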

Keywords: big data analytics, ensemble machine learning, gradient boosted trees, Spark platform

Procedia PDF Downloads 241
2231 Validation of Asymptotic Techniques to Predict Bistatic Radar Cross Section

Authors: M. Pienaar, J. W. Odendaal, J. C. Smit, J. Joubert

Abstract:

Simulations are commonly used to predict the bistatic radar cross section (RCS) of military targets since characterization measurements can be expensive and time consuming. It is thus important to accurately predict the bistatic RCS of targets. Computational electromagnetic (CEM) methods can be used for bistatic RCS prediction. CEM methods are divided into full-wave and asymptotic methods. Full-wave methods are numerical approximations to the exact solution of Maxwell’s equations. These methods are very accurate but are computationally very intensive and time consuming. Asymptotic techniques make simplifying assumptions in solving Maxwell's equations and are thus less accurate but require less computational resources and time. Asymptotic techniques can thus be very valuable for the prediction of bistatic RCS of electrically large targets, due to the decreased computational requirements. This study extends previous work by validating the accuracy of asymptotic techniques to predict bistatic RCS through comparison with full-wave simulations as well as measurements. Validation is done with canonical structures as well as complex realistic aircraft models instead of only looking at a complex slicy structure. The slicy structure is a combination of canonical structures, including cylinders, corner reflectors and cubes. Validation is done over large bistatic angles and at different polarizations. Bistatic RCS measurements were conducted in a compact range, at the University of Pretoria, South Africa. The measurements were performed at different polarizations from 2 GHz to 6 GHz. Fixed bistatic angles of β = 30.8°, 45° and 90° were used. The measurements were calibrated with an active calibration target. The EM simulation tool FEKO was used to generate simulated results. The full-wave multi-level fast multipole method (MLFMM) simulated results together with the measured data were used as reference for validation. The accuracy of physical optics (PO) and geometrical optics (GO) was investigated. Differences relating to amplitude, lobing structure and null positions were observed between the asymptotic, full-wave and measured data. PO and GO were more accurate at angles close to the specular scattering directions and the accuracy seemed to decrease as the bistatic angle increased. At large bistatic angles PO did not perform well due to the shadow regions not being treated appropriately. PO also did not perform well for canonical structures where multi-bounce was the main scattering mechanism. PO and GO do not account for diffraction but these inaccuracies tended to decrease as the electrical size of objects increased. It was evident that both asymptotic techniques do not properly account for bistatic structural shadowing. Specular scattering was calculated accurately even if targets did not meet the electrically large criteria. It was evident that the bistatic RCS prediction performance of PO and GO depends on incident angle, frequency, target shape and observation angle. The improved computational efficiency of the asymptotic solvers yields a major advantage over full-wave solvers and measurements; however, there is still much room for improvement of the accuracy of these asymptotic techniques.
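For reference, the bistatic RCS compared above is conventionally defined from the far-field ratio of scattered to incident field intensities (a textbook definition, not a result of this study):

\[ \sigma(\theta_i,\phi_i;\theta_s,\phi_s) = \lim_{R \to \infty} 4\pi R^{2}\, \frac{|\mathbf{E}_s|^{2}}{|\mathbf{E}_i|^{2}} \]

where \(\mathbf{E}_i\) and \(\mathbf{E}_s\) are the incident and scattered electric fields and \(R\) is the distance to the observation point.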

Keywords: asymptotic techniques, bistatic RCS, geometrical optics, physical optics

Procedia PDF Downloads 258
2230 Field Prognostic Factors on Discharge Prediction of Traumatic Brain Injuries

Authors: Mohammad Javad Behzadnia, Amir Bahador Boroumand

Abstract:

Introduction: In limited-facility situations, the available resources must be allocated to the greatest number of casualties. Accordingly, traumatic brain injury (TBI) is a condition that may require transporting the patient as soon as possible. In a mass casualty event, such decisions are hard to make when facilities are restricted. The Extended Glasgow Outcome Score (GOSE) has been introduced to assess the global outcome after brain injuries. Therefore, we aimed to evaluate the prognostic factors associated with GOSE. Materials and Methods: A multicenter cross-sectional study was conducted on 144 patients with TBI admitted to trauma emergency centers. All patients with isolated TBI who were mentally and physically healthy before the trauma entered the study. The patients' information was evaluated, including demographic characteristics, duration of hospital stay, mechanical ventilation on admission, laboratory measurements, and on-admission vital signs. We recorded the patients' TBI-related symptoms and brain computed tomography (CT) scan findings. Results: GOSE assessments showed an increasing trend across the on-discharge (7.47 ± 1.30), one-month (7.51 ± 1.30), and three-month (7.58 ± 1.21) evaluations (P < 0.001). On discharge, GOSE was positively correlated with the Glasgow Coma Scale (GCS) (r = 0.729, P < 0.001) and motor GCS (r = 0.812, P < 0.001), and inversely with age (r = −0.261, P = 0.002), hospitalization period (r = −0.678, P < 0.001), pulse rate (r = −0.256, P = 0.002) and white blood cell (WBC) count. Among imaging signs and trauma-related symptoms, intracranial hemorrhage (ICH), intraventricular hemorrhage (IVH) (P = 0.006), subarachnoid hemorrhage (SAH) (P = 0.06; marginal at P < 0.1), subdural hemorrhage (SDH) (P = 0.032), and epidural hemorrhage (EDH) (P = 0.037) were significantly associated with GOSE at discharge in the multivariable analysis. Conclusion: Our study identified predictive factors that could help decide which casualty should be transported earlier to a trauma center. According to the current study findings, GCS, pulse rate, WBC count, and, among imaging signs and trauma-related symptoms, ICH, IVH, SAH, SDH, and EDH are significant independent predictors of GOSE at discharge in TBI patients.

Keywords: field, Glasgow outcome score, prediction, traumatic brain injury

Procedia PDF Downloads 76
2229 In vitro Synergistic Antioxidant Activity of Honey-Mentha Spicata Combination

Authors: Yuva Bellik, Selles Mohamed Amar

Abstract:

The beneficial health effects, including the antioxidant properties, of mint (Mentha spicata) and of honey produced by bees (Apis mellifera) have been extensively studied. However, there are no data on the effects of their combined use. In this study, the total phenolic and flavonoid contents of individual extracts of mint and honey and of their combination were determined. The antioxidant activity was investigated using reducing power, 1,1-diphenyl-2-picrylhydrazyl (DPPH), 2,2′-azinobis-(3-ethylbenzothiazoline-6-sulphonic acid) diammonium salt (ABTS), and chelating power methods. The results showed that the individual extracts contained substantial quantities of phenolics and flavonoids, and their combination was found to produce the best antioxidant activity. A significant linear correlation between the phenolic/flavonoid contents and antioxidant activity, especially with reducing power and free radical scavenging abilities, was observed.

Keywords: honey, mint, synergy, antioxidant activity

Procedia PDF Downloads 389
2228 Depth to Basement Determination Sculpting of a Magnetic Mineral Using Magnetic Survey

Authors: A. Ikusika, O. I. Poppola

Abstract:

This study was carried out to delineate possible structures that may favour the accumulation of tantalite, a magnetic mineral. A ground-based technique was employed using a G-856 AX proton precession magnetometer. A total of ten geophysical traverses were established in the study area. The acquired magnetic field data were corrected for drift. Trend analysis was adopted to remove the regional gradient from the observed data, and the resulting data were presented as profiles. Quantitative interpretation only was adopted, obtaining the depth to basement using Peters' half-slope method. From the geological setting of the area and the information obtained from the magnetic survey, it can be concluded that the study area is underlain by a rock unit of accumulated minerals. It is therefore suspected that the overburden is relatively thin within the study area and that the metallic minerals occur in disseminated quantities at shallow depth.
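For context, Peters' half-slope method referred to above estimates the depth to the magnetic source from the anomaly profile: if \(x_{1/2}\) is the horizontal distance between the two points where the profile gradient equals half of its maximum gradient, the depth is commonly taken as

\[ d \approx \frac{x_{1/2}}{1.6} \]

with the index factor of 1.6 being the standard textbook value (quoted here as a general assumption, not from the paper itself).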

Keywords: basement, drift, magnetic field data, tantalite, traverses

Procedia PDF Downloads 475
2227 Multi-Layer Silica Alumina Membrane Performance for Flue Gas Separation

Authors: Ngozi Nwogu, Mohammed Kajama, Emmanuel Anyanwu, Edward Gobina

Abstract:

With the objective of creating technologically advanced, scientifically applicable materials, multi-layer silica alumina membranes were molecularly fabricated by continuously coating silica layers containing a hybrid material onto a porous ceramic substrate for flue gas separation applications. The multi-layer silica alumina membrane was prepared by the dip-coating technique before further drying in an oven at elevated temperature. The effects of substrate physical appearance, coating quantity, the cross-linking agent, the number of coatings and the testing conditions on the gas separation performance of the membrane have been investigated. A scanning electron microscope was used to investigate the development of the coating thickness. The membrane shows impressive permselectivity, especially for a CO2/N2 binary mixture representing a simulated flue gas stream.

Keywords: gas separation, silica membrane, separation factor, membrane layer thickness

Procedia PDF Downloads 415
2226 Role of Osmoregulators for Enhancing Salinity Stress Tolerance in Chickpea

Authors: Mahmoud Ahmed Khater

Abstract:

This study aimed to alleviate the deleterious effects of salinity stress in chickpeas using both proline and glycine betaine as osmoregulants. The aim was achieved by foliar spraying of chickpea plants grown in pots under salinity stress (3000 mg/l NaCl) with different concentrations of proline (5 mM and 10 mM) and glycine betaine (10 mM and 20 mM) at the greenhouse of the National Research Centre, Egypt, during two successive seasons, 2021/2022 and 2022/2023. Results indicated that all applied treatments caused significant increases in most of the investigated parameters of chickpea plants irrigated with either tap water or saline solution relative to the corresponding controls. It is worth mentioning that the proline treatments were more effective than the glycine betaine treatments in increasing the salinity tolerance of chickpea plants, reflected in their quality and quantity. Moreover, the proline treatment at 5 mM was the most pronounced treatment in alleviating the deleterious effect of salinity on chickpea plants.

Keywords: Cicer arietinum L., osmoprotectant, proline, glycinebetaine, salinity tolerance

Procedia PDF Downloads 48
2225 Development of Groundwater Management Model Using Groundwater Sustainability Index

Authors: S. S. Rwanga, J. M. Ndambuki, Y. Woyessa

Abstract:

The development of a groundwater management model is an important step in the exploitation and management of any groundwater aquifer, as it assists in the long-term sustainable planning of the resource. The current study was conducted in the central Limpopo province of South Africa with the overall objective of determining how much water can be withdrawn from the aquifer without producing non-reversible impacts on the groundwater quantity, hence developing a model which can sustainably protect the aquifer. The development was done through the computation of a Groundwater Sustainability Index (GSI). Values of GSI close to unity and above indicate overexploitation; in this study, an index of 0.8 was taken as the overexploitation threshold. The results indicated that there is potential for higher abstraction rates compared to the current abstraction rates. The GSI approach can be used in the management of a groundwater aquifer to develop the resource sustainably, and it also provides water managers and policy makers with fundamental information on where future water developments can be carried out.
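A minimal sketch of how such an index could be screened against the 0.8 threshold, assuming, purely as an illustration (the paper does not give the formula here), that GSI is computed as the ratio of annual abstraction to sustainable recharge.

```python
def groundwater_sustainability_index(abstraction_m3, recharge_m3):
    """Assumed form: ratio of annual abstraction to sustainable recharge."""
    return abstraction_m3 / recharge_m3

# Illustrative annual volumes (cubic metres), not data from the study.
scenarios = {"current": (4.0e6, 9.0e6), "expanded": (7.8e6, 9.0e6)}
for name, (abstraction, recharge) in scenarios.items():
    gsi = groundwater_sustainability_index(abstraction, recharge)
    status = "overexploited" if gsi >= 0.8 else "within sustainable limits"
    print(f"{name}: GSI = {gsi:.2f} -> {status}")
```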

Keywords: development, groundwater, groundwater sustainability index, model

Procedia PDF Downloads 170
2224 The Prospective Assessment of Zero-Energy Dwellings

Authors: Jovana Dj. Jovanovic, Svetlana M. Stevovic

Abstract:

The highest priority of so-called passive houses is to meet the appropriate energy demand. Every single material and layer which is incorporated into a dwelling has a certain quantity of energy stored in it. Passive houses include optimized insulation levels with minimal thermal bridges, minimal air leakage through the building, utilization of passive solar and internal gains, and good circulation of air, which relies on a mechanical ventilation system. The focus of this paper is on passive house features, benefits and targets, their feasibility and the energy demands which are set during each project. Numerous passive house standards outline the very significant role of zero-energy dwellings on the path towards sustainable development. It is clear that the performance of both new and existing housing stock must be addressed if energy objectives are to be met across the world. This article examines passive house features across the many passive house cases that have been launched.

Keywords: benefits, energy demands, passive houses, sustainable development

Procedia PDF Downloads 337
2223 Estimation of Fragility Curves Using Proposed Ground Motion Selection and Scaling Procedure

Authors: Esra Zengin, Sinan Akkar

Abstract:

Reliable and accurate prediction of nonlinear structural response requires the specification of appropriate earthquake ground motions to be used in nonlinear time history analysis. Current research has mainly focused on the selection and manipulation of real earthquake records, which can be seen as the most critical step in performance-based seismic design and assessment of structures. Utilizing amplitude-scaled ground motions that match the target spectra is a commonly used technique for the estimation of nonlinear structural response. Representative ground motion ensembles are selected to match a target spectrum such as a scenario-based spectrum derived from ground motion prediction equations, the Uniform Hazard Spectrum (UHS), the Conditional Mean Spectrum (CMS) or the Conditional Spectrum (CS). Different sets of criteria exist among the developed methodologies to select and scale ground motions with the objective of obtaining a robust estimate of structural performance. This study presents a ground motion selection and scaling procedure that considers the spectral variability at the target demand together with the level of ground motion dispersion. The proposed methodology provides a set of ground motions whose response spectra match the target median and the corresponding variance within a specified period interval. An efficient and simple algorithm is used to assemble the ground motion sets. The scaling stage is based on the minimization of the error between the scaled median and the target spectra, where the dispersion of the earthquake shaking is preserved along the period interval. The impact of the spectral variability on the nonlinear response distribution is investigated at the level of inelastic single-degree-of-freedom systems. In order to see the effect of different selection and scaling methodologies on fragility curve estimates, results are compared with those obtained by a CMS-based scaling methodology. The variability in fragility curves due to the consideration of dispersion in the ground motion selection process is also examined.
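A minimal sketch of the general amplitude-scaling idea described above: each record's response spectrum receives a single factor chosen by least squares against the target spectrum over a period interval, and the median of the scaled suite and its dispersion are then checked. The arrays below are synthetic stand-ins for real spectra, and this is a simplified illustration rather than the paper's algorithm.

```python
import numpy as np

# Illustrative period grid, target spectrum, and a suite of record spectra.
periods = np.linspace(0.1, 3.0, 30)
target = 0.8 * np.exp(-periods)                       # stand-in target Sa(T)
rng = np.random.default_rng(1)
suite = target * rng.lognormal(mean=0.0, sigma=0.4, size=(20, periods.size))

def least_squares_scale(record, target):
    """Single amplitude factor minimizing the squared error to the target."""
    return float(np.dot(record, target) / np.dot(record, record))

factors = np.array([least_squares_scale(r, target) for r in suite])
scaled = suite * factors[:, None]

# Check how well the median of the scaled suite tracks the target and what
# record-to-record dispersion remains over the period interval.
median_err = np.abs(np.median(scaled, axis=0) - target).max()
dispersion = np.std(np.log(scaled), axis=0).mean()
print(f"max |median - target| = {median_err:.3f}, mean log-std = {dispersion:.3f}")
```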

Keywords: ground motion selection, scaling, uncertainty, fragility curve

Procedia PDF Downloads 583
2222 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions

Authors: Vikrant Gupta, Amrit Goswami

Abstract:

The fixed income market forms the basis of the modern financial market. All other assets in financial markets derive their value from the bond market. Owing to its over-the-counter nature, the corporate bond market has relatively little data publicly available and is thus researched far less than equities. Bond price prediction is a complex financial time series forecasting problem and is considered very crucial in the domain of finance. Bond prices are highly volatile and full of noise, which makes it very difficult for traditional statistical time-series models to capture the complexity in series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines and random forests fail to provide efficient results when tested on highly complex sequences such as stock prices and bond prices. Hence, to capture these intricate sequence patterns, various deep learning based methodologies have been discussed in the literature. In this study, a recurrent neural network based deep learning model using long short-term memory networks for the prediction of corporate bond prices is discussed. Long short-term memory (LSTM) networks have been widely used in the literature for various sequence learning tasks in domains such as machine translation and speech recognition. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results when compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies due to their memory function, which traditional neural networks fail to capture. In this study, a simple LSTM, a stacked LSTM and a masked LSTM based model are discussed with respect to varying input sequence lengths (three days, seven days and 14 days). In order to facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) has been used, which resulted in an accuracy improvement over the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed the other two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results have been compared with traditional time series models (ARIMA), shallow neural networks and the three different LSTM models discussed above. In summary, our results show that the use of LSTM models provides more accurate results and should be explored more within the asset management industry.
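A minimal sketch of a stacked LSTM regressor of the kind discussed above, built with the Keras API on sliding windows of a univariate price series; the window length, layer sizes and synthetic series are illustrative assumptions, and the EMD preprocessing and technical indicators are omitted.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def make_windows(series, lookback):
    X = np.array([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., None], y          # shape (samples, lookback, 1)

# Synthetic stand-in for a bond price series.
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 0.3, size=1000))

LOOKBACK = 14                       # e.g. a 14-day input sequence
X, y = make_windows(prices, LOOKBACK)

model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(LOOKBACK, 1)),
    LSTM(32),                       # second layer -> "stacked" LSTM
    Dense(1),                       # next-day price
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print("in-sample MSE:", float(model.evaluate(X, y, verbose=0)))
```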

Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition

Procedia PDF Downloads 136
2221 Disruption Coordination of Supply Chain with Loss-Averse Retailer Under Buy-Back Contract

Authors: Yuan Tian, Benhe Gao

Abstract:

This paper investigates a two-stage supply chain of one leading supplier and one following retailer that experiences perturbations in two of the following factors: the supplier's production cost, the retailer's marginal cost and the retail price, in a stochastic demand environment. While the risk-neutral case has long been discussed, little attention has been given to disruptions under the premise of a risk-neutral supplier and a risk-averse, loss-averse retailer. We establish the optimal order quantity and reveal the profit distribution coefficient in the risk-neutral static model, make adjustments under the disruption scenario, and then adopt a utility function method for the risk-aversion model. Under a buy-back contract policy, adjusting the contract parameters can achieve channel coordination in which Pareto optimality is realized.
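As a point of reference for the risk-neutral static model mentioned above, the textbook newsvendor solution gives the optimal order quantity from the critical fractile of the demand distribution \(F\):

\[ q^{*} = F^{-1}\!\left(\frac{c_u}{c_u + c_o}\right) \]

where \(c_u\) is the unit underage cost and \(c_o\) the unit overage cost; under a buy-back contract, the buy-back price raises the retailer's effective salvage value and thus lowers \(c_o\). This is a standard baseline stated for context, not the paper's loss-averse solution.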

Keywords: supply chain coordination, disruption management, buy-back contract, loss aversion

Procedia PDF Downloads 327
2220 Measuring Enterprise Growth: Pitfalls and Implications

Authors: N. Šarlija, S. Pfeifer, M. Jeger, A. Bilandžić

Abstract:

Enterprise growth is generally considered a key driver of competitiveness, employment, economic development and social inclusion. As such, it is perceived to be a highly desirable outcome of entrepreneurship by scholars and decision makers. The extensive academic debate has resulted in a multitude of theoretical frameworks focused on explaining growth stages, determinants and future prospects. It has been widely accepted that enterprise growth is most likely nonlinear, temporal and related to a variety of factors which reflect the individual, firm, organizational, industry or environmental determinants of growth. However, the factors that affect growth are not easily captured, the instruments to measure those factors are often arbitrary, and causality between the variables and growth is elusive, indicating that growth is not easily modeled. Furthermore, in line with the heterogeneous nature of the growth phenomenon, there is a vast number of measurement constructs for assessing growth which are used interchangeably. Differences among the various growth measures, at the conceptual as well as the operationalization level, can hinder theory development, which emphasizes the need for more empirically robust studies. In line with these highlights, the main purpose of this paper is twofold: firstly, to compare the structure and performance of three growth prediction models based on the main growth measures (revenue, employment and assets growth); and secondly, to explore the prospects of financial indicators, as exact, visible, standardized and accessible variables, to serve as determinants of enterprise growth. Finally, the paper aims to contribute to the understanding of the implications for research results and recommendations for growth caused by different growth measures. The models include a range of financial indicators as lagged determinants of the enterprises' performance during 2008-2013, extracted from the national register of the financial statements of SMEs in Croatia. The design and testing stage of the modeling used logistic regression procedures. Findings confirm that growth prediction models based on different measures of growth have different sets of predictors. Moreover, the relationship between particular predictors and growth measures is inconsistent, namely the same predictor positively related to one growth measure may exert a negative effect on a different growth measure. Overall, financial indicators alone can serve as a good proxy of growth and yield adequate predictive power for the models. The paper sheds light on both the methodology and the conceptual framework of enterprise growth by using a range of variables which serve as a proxy for the multitude of internal and external determinants but, unlike them, are accessible, available, exact and free of perceptual nuances when building up the model. The selection of the growth measure seems to have a significant impact on the implications and recommendations related to growth. Furthermore, the paper points out potential pitfalls of measuring and predicting growth. Overall, the results and the implications of the study are relevant for advancing academic debates on growth-related methodology and can contribute to evidence-based decisions of policy makers.
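A minimal sketch of one growth-prediction model of the type described above: a logistic regression classifying firms as high-growth or not from lagged financial ratios. The feature names and data are placeholders, not the Croatian register variables.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder lagged financial indicators and a growth label defined on one
# chosen measure, e.g. revenue growth above a threshold.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "liquidity_ratio": rng.normal(1.5, 0.5, 1000),
    "debt_to_assets": rng.uniform(0.0, 1.0, 1000),
    "return_on_assets": rng.normal(0.05, 0.08, 1000),
})
logit = 12.0 * df["return_on_assets"] - 1.5 * df["debt_to_assets"]
df["high_growth"] = (rng.random(1000) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="high_growth"), df["high_growth"], random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
print(dict(zip(X_train.columns, model.coef_[0].round(2))))
```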

Keywords: growth measurement constructs, logistic regression, prediction of growth potential, small and medium-sized enterprises

Procedia PDF Downloads 252
2219 Lineup Optimization Model of Basketball Players Based on the Prediction of Recursive Neural Networks

Authors: Wang Yichen, Haruka Yamashita

Abstract:

In recent years, in the field of sports, decision making such as the selection of members for a game and game strategy based on the analysis of accumulated sports data has been widely attempted. In fact, in the NBA, the basketball league where the world's highest-level players gather, teams analyze data using various statistical techniques in order to win games. However, it is difficult to analyze game data for each play, such as ball tracking or the motion of players during the game, because the situation of the game changes rapidly and the structure of the data is complicated. Therefore, an analysis method for real-time game play data is proposed. In this research, we propose an analytical model for determining the optimal lineup composition using real-time play data, which is considered to be a difficult task for all coaches. Because replacing the entire lineup is too complicated, the actual questions addressed are whether or not the lineup should be changed and whether or not a Small Ball lineup should be adopted. Therefore, we propose an analytical model for the optimal player selection problem based on Small Ball lineups. In basketball, we can accumulate scoring data for each play, which indicates a player's contribution to the game, and the scoring data can be treated as time series data. In order to compare the importance of players in different situations and lineups, we combine an RNN (Recurrent Neural Network) model, which can analyze time series data, and an NN (Neural Network) model, which can analyze the situation on the court, to build a prediction model of the score. This model is capable of identifying the current optimal lineup for different situations. In this research, we collected all of the accumulated NBA data from the 2019-2020 season. We then apply the method to actual basketball play data to verify the reliability of the proposed model.
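A minimal sketch of the combination described above: a recurrent branch reads a play-by-play scoring sequence while a dense branch reads static lineup/situation features, and the two are merged to predict a score contribution. All shapes and data are illustrative placeholders, not the NBA dataset.

```python
import numpy as np
from tensorflow.keras.layers import Input, LSTM, Dense, Concatenate
from tensorflow.keras.models import Model

SEQ_LEN, SEQ_FEATURES, SITUATION_FEATURES = 20, 4, 10

# Recurrent branch: time series of per-play scoring-related features.
seq_in = Input(shape=(SEQ_LEN, SEQ_FEATURES), name="play_sequence")
seq_vec = LSTM(32)(seq_in)

# Dense branch: static description of the current lineup / game situation.
sit_in = Input(shape=(SITUATION_FEATURES,), name="situation")
sit_vec = Dense(16, activation="relu")(sit_in)

merged = Concatenate()([seq_vec, sit_vec])
score_out = Dense(1, name="predicted_score")(Dense(16, activation="relu")(merged))

model = Model(inputs=[seq_in, sit_in], outputs=score_out)
model.compile(optimizer="adam", loss="mse")

# Train on synthetic stand-in data; candidate lineups would then be ranked
# by the scores the model predicts for each of them in the current situation.
rng = np.random.default_rng(0)
X_seq = rng.normal(size=(256, SEQ_LEN, SEQ_FEATURES))
X_sit = rng.normal(size=(256, SITUATION_FEATURES))
y = rng.normal(size=(256, 1))
model.fit([X_seq, X_sit], y, epochs=3, verbose=0)
```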

Keywords: recurrent neural network, players lineup, basketball data, decision making model

Procedia PDF Downloads 133
2218 Comparing Performance of Neural Network and Decision Tree in Prediction of Myocardial Infarction

Authors: Reza Safdari, Goli Arji, Robab Abdolkhani, Maryam Zahmatkeshan

Abstract:

Background and purpose: Cardiovascular diseases are among the most common diseases in all societies. The most important step in minimizing myocardial infarction and its complications is to minimize its risk factors. The amount of medical data is growing rapidly, and medical data mining has great potential for transforming these data into information. Using data mining techniques to generate predictive models for identifying those at risk is very helpful for reducing the effects of the disease. The present study aimed to collect data related to risk factors of myocardial infarction from patients' medical records and to develop predictive models using data mining algorithms. Methods: The present work was an analytical study conducted on a database containing 350 records. The data were related to patients admitted to Shahid Rajaei specialized cardiovascular hospital, Iran, in 2011. Data were collected using a four-section data collection form. Data analysis was performed using SPSS and Clementine version 12. Seven predictive algorithms and one algorithm-based model for predicting association rules were applied to the data. Accuracy, precision, sensitivity, specificity, as well as positive and negative predictive values were determined and the final model was obtained. Results: Five parameters, including hypertension, dyslipidemia (DLP), tobacco smoking, diabetes, and A+ blood group, were the most critical risk factors of myocardial infarction. Among the models, the neural network model was found to have the highest sensitivity, indicating its ability to successfully diagnose the disease. Conclusion: Risk prediction models have great potential for facilitating the management of a patient with a specific disease. Therefore, health interventions or lifestyle changes can be undertaken based on these models to improve the health conditions of individuals at risk.
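A minimal sketch of how the reported evaluation measures (sensitivity, specificity, positive and negative predictive values) follow from a 2x2 confusion matrix; the predictions below are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=350)                           # 0 = no MI, 1 = MI
y_pred = np.where(rng.random(350) < 0.85, y_true, 1 - y_true)   # ~85% agreement

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)          # recall for the disease class
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                  # positive predictive value (precision)
npv = tn / (tn + fn)                  # negative predictive value
accuracy = (tp + tn) / (tp + tn + fp + fn)

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f} accuracy={accuracy:.2f}")
```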

Keywords: decision trees, neural network, myocardial infarction, data mining

Procedia PDF Downloads 429
2217 Non-Singular Gravitational Collapse of a Dust Cloud in Einstein-Cartan Theory

Authors: Amir Hadi Ziaie, Mostafa Hashemi, Shahram Jalalzadeh

Abstract:

It is now known that the end state of the collapse of a dense star under its own gravity is the formation of a spacetime singularity. This is the spacetime event where the energy density and spacetime curvature diverge and classical general relativity breaks down. As we know, a realistic star is composed of fermions, so their spin effects could alter the final fate of the collapse scenario. The underlying theory within which the inclusion of spin effects can be worked out is the Einstein-Cartan theory. In this theory, the spacetime torsion, which is defined as a geometrical quantity, is related to the intrinsic angular momentum (spin) of fermions. In this work, we study the collapse process of a homogeneous spin fluid in such a framework and show that taking into account the spin effects of the collapsing cloud could prevent the formation of a spacetime singularity.

Keywords: gravitational collapse, Einstein-Cartan theory, spacetime singularity, black hole physics

Procedia PDF Downloads 398
2216 Designing an Intelligent Voltage Instability System in Power Distribution Systems in the Philippines Using IEEE 14 Bus Test System

Authors: Pocholo Rodriguez, Anne Bernadine Ocampo, Ian Benedict Chan, Janric Micah Gray

Abstract:

The state of an electric power system may be classified as either stable or unstable. The borderline of stability is any condition for which a slight change in an unfavourable direction of any pertinent quantity will cause instability. Voltage instability in power distribution systems can lead to voltage collapse and thus power blackouts. The researchers present an intelligent system using the back-propagation algorithm that can detect voltage instability from the output voltage of a power distribution system and classify its state as stable or unstable. The researchers' contribution is the use of parameters involved in voltage instability as inputs to the neural network for training and testing purposes, which can provide faster detection and monitoring of the power distribution system.

Keywords: back-propagation algorithm, load instability, neural network, power distribution system

Procedia PDF Downloads 435
2215 The Effects of Land Grabbing on Livelihood Assets and Its Implication on Food Production in Ghana: A Case Study of Bui Dam Construction Project

Authors: Charles Kwaku Oppong

Abstract:

This study examined the effects of the agricultural land grabbed for the Bui Dam project on the livelihood assets of the affected people and the implications for food production. Both quantitative and qualitative data were collected through the use of focus group discussions, questionnaire administration, an interview guide, and observations. It was found that the land grabbing in the study communities as a result of the Bui Dam construction has resulted in improvements in the physical assets of the affected people. The findings also indicated that local food crop production and the quantity of fish caught have dwindled after the land grabs. In contrast, the local people's access to natural capital, particularly local land for agricultural activities, has worsened. The study recommends that the local government provide alternative sustainable livelihoods for the affected people.

Keywords: land grabbing, livelihood, asset, food production

Procedia PDF Downloads 166
2214 Machine Learning Approach for Predicting Students’ Academic Performance and Study Strategies Based on Their Motivation

Authors: Fidelia A. Orji, Julita Vassileva

Abstract:

This research aims to develop machine learning models for students' academic performance and study strategy prediction which could be generalized to all courses in higher education. The key learning attributes (intrinsic motivation, extrinsic motivation, autonomy, relatedness, competence, and self-esteem) used in building the models were chosen based on prior studies, which revealed that these attributes are essential in students' learning process. Previous studies revealed the individual effects of each of these attributes on students' learning progress. However, few studies have investigated the combined effect of the attributes in predicting student study strategy and academic performance to reduce the dropout rate. To bridge this gap, we used scikit-learn in Python to build five machine learning models (decision tree, k-nearest neighbours, random forest, linear/logistic regression, and support vector machine) for both regression and classification tasks to perform our analysis. The models were trained, evaluated, and tested for accuracy using data on 924 university dentistry students collected by Chilean authors through a quantitative research design. A comparative analysis of the models revealed that the tree-based models, such as the random forest (with a prediction accuracy of 94.9%) and the decision tree, show the best results compared to the linear/logistic, support vector, and k-nearest neighbour models. The models built in this research can be used to predict student performance and study strategy so that appropriate interventions can be implemented to improve student learning progress. Thus, incorporating strategies that could improve diverse student learning attributes in the design of online educational systems may increase the likelihood of students continuing with their learning tasks as required. Moreover, the results show that the attributes can be modelled together and used to adapt/personalize the learning process.
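A minimal sketch of the model comparison described above, using scikit-learn cross-validation to score the five classifier families on a placeholder dataset; the 924-student data and the motivation attributes themselves are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Placeholder stand-in for the six learning attributes + a class label
# (e.g. predicted study strategy).
X, y = make_classification(n_samples=924, n_features=6, n_informative=5,
                           n_redundant=1, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-nearest neighbours": KNeighborsClassifier(),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
    "support vector machine": SVC(),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:24s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```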

Keywords: classification models, learning strategy, predictive modeling, regression models, student academic performance, student motivation, supervised machine learning

Procedia PDF Downloads 128
2213 An Introduction to the Radiation-Thrust Based on Alpha Decay and Spontaneous Fission

Authors: Shiyi He, Yan Xia, Xiaoping Ouyang, Liang Chen, Zhongbing Zhang, Jinlu Ruan

Abstract:

As key systems of spacecraft, various propulsion systems have been developing rapidly, including ion thrusters, laser thrusters, solar sails and other micro-thrusters. However, there are still some shortcomings in these systems. The ion thruster requires a high voltage or magnetic field for acceleration, resulting in extra subsystems, heavy mass and large volume. Laser thrust is currently mostly ground-based and provides pulsed thrust, constrained by station distribution and laser capacity. The thrust direction of a solar sail is limited by its position relative to the Sun, so it is hard to propel toward the Sun or to adjust in shadow. In this paper, a novel nuclear thruster based on alpha decay and spontaneous fission is proposed, and the principle of this radiation-thrust with alpha particles is expounded. Radioactive materials with different released energies, such as 210Po with 5.4 MeV and 238Pu with 5.29 MeV, attached to a metal film provide various thrusts in the range 0.02-5 µN/cm². With this recoil force, radiation is able to serve as a propulsion source. With the advantages of low system mass, high accuracy and long active time, radiation thrust is promising in the fields of space debris removal, orbit control of nano-satellite arrays and deep space exploration. For further study, a formula relating the amplitude and direction of thrust to the released energy and decay coefficient is set up. With this initial formula, the alpha-emitting elements with half-lives longer than a hundred days are calculated and listed. As alpha particles are emitted continuously, the residual charge in the metal film grows and affects the energy distribution of the emitted alpha particles. With residual charge or an extra electromagnetic field, the emission of alpha particles behaves differently, and this is analyzed in this paper. Furthermore, three more complex situations are discussed: a radioactive element generating alpha particles with several energies at different intensities, a mixture of various radioactive elements, and cascaded alpha decay are studied respectively. In a combined way, it is more efficient and flexible to adjust the thrust amplitude. The propelling model for spontaneous fission is similar to that of alpha decay, with a more complex angular distribution. A new quasi-sphere space propulsion system based on the radiation-thrust is introduced, as well as the system for collecting and processing excess charge and reaction heat. The energy and spatial angular distribution of emitted alpha particles per unit area and for a given propulsion system have been studied. As alpha particles easily lose energy and are self-absorbed, the distribution is not a simple stacking of the contributions of each nuclide. With changes in the amplitude and angle of the radiation-thrust, an orbital variation strategy for space debris removal is shown and optimized.
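A minimal back-of-the-envelope sketch of the thrust-per-area scaling described above: each escaping alpha particle carries momentum p = sqrt(2mE), and for isotropic emission into the outward hemisphere the average normal momentum transferred to the film is p/2. The surface emission rate used below is purely illustrative, not a value from the paper.

```python
import math

MEV_TO_J = 1.602176634e-13
M_ALPHA = 6.6446573e-27          # alpha particle mass, kg

def thrust_per_area(E_mev, emission_rate_per_cm2):
    """Recoil thrust per unit area (N/cm^2) for isotropic hemispheric emission."""
    p = math.sqrt(2.0 * M_ALPHA * E_mev * MEV_TO_J)   # alpha momentum, kg m/s
    return emission_rate_per_cm2 * p / 2.0            # <cos theta> = 1/2

# 210Po alpha energy ~5.4 MeV; assumed escape rate of 1e13 alphas / cm^2 / s.
f = thrust_per_area(5.4, 1e13)
print(f"{f * 1e6:.2f} uN/cm^2")    # ~0.5 uN/cm^2, within the 0.02-5 uN/cm^2 range
```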

Keywords: alpha decay, angular distribution, emitting energy, orbital variation, radiation-thruster

Procedia PDF Downloads 208
2212 Exergy Analysis of Reverse Osmosis for Potable Water and Land Irrigation

Authors: M. Sarai Atab, A. Smallbone, A. P. Roskilly

Abstract:

A thermodynamic study is performed on the reverse osmosis (RO) desalination process for brackish water. A detailed RO model of the thermodynamic properties, with and without an energy recovery device, was built in Simulink/MATLAB and validated against reported measurement data. The efficiency of desalination plants can be estimated by both the first and second laws of thermodynamics. While the first law focuses on the quantity of energy, the second law analysis (i.e. exergy analysis) introduces quality. This paper uses the Main Outfall Drain in Iraq as a case study to conduct energy and exergy analyses of the RO process. The results show that it is feasible to use an energy recovery method for reverse osmosis at salinities below 15,000 ppm, as the exergy efficiency is roughly doubled. Moreover, the analysis shows that the highest exergy destruction occurs in the rejected water and the lowest in the permeate stream, accounting for 37% and 4.3%, respectively.
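For reference, the specific flow exergy and the second-law (exergetic) efficiency used in such an analysis take the standard textbook forms

\[ ex = (h - h_0) - T_0\,(s - s_0), \qquad \eta_{II} = \frac{\dot{W}_{\min}}{\dot{W}_{\text{actual}}} \]

where the subscript 0 denotes the dead state and \(\dot{W}_{\min}\) is the minimum (reversible) work of separation; for saline streams a chemical exergy term is added. These definitions are quoted for context, not from the paper.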

Keywords: brackish water, exergy, irrigation, reverse osmosis (RO)

Procedia PDF Downloads 174
2211 Artificial Neural Networks and Hidden Markov Model in Landslides Prediction

Authors: C. S. Subhashini, H. L. Premaratne

Abstract:

Landslides are the most recurrent and prominent disaster in Sri Lanka. Sri Lanka has been subjected to a number of extreme landslide disasters that resulted in significant loss of life, material damage, and distress. It is necessary to explore solutions for preparedness and mitigation to reduce the recurrent losses associated with landslides. Artificial Neural Networks (ANNs) and Hidden Markov Models (HMMs) are now widely used in many computer applications spanning multiple domains. This research examines the effectiveness of using Artificial Neural Networks and Hidden Markov Models in landslide prediction and the possibility of applying this modern technology to predict landslides in a prominent geographical area in Sri Lanka. A thorough survey was conducted with the participation of resource persons from several national universities in Sri Lanka to identify and rank the factors influencing landslides. A landslide database was created using existing topographic, soil, drainage and land cover maps and historical data. The landslide-related factors, which include external factors (rainfall and number of previous occurrences) and internal factors (soil material, geology, land use, curvature, soil texture, slope, aspect, soil drainage, and soil effective thickness), are extracted from the landslide database. These factors are used to recognize the possibility of landslide occurrence using an ANN and an HMM. Each model acquires the relationship between the landslide factors and the hazard index during the training session. These models, with the landslide-related factors as inputs, are trained to predict three classes, namely 'landslide occurs', 'landslide does not occur' and 'landslide likely to occur'. Once trained, the models are able to predict the most likely class for the prevailing data. Finally, the two models were compared with regard to prediction accuracy, false acceptance rate and false rejection rate. This research indicates that the Artificial Neural Network could be used as a strong decision support system to predict landslides more efficiently and effectively than the Hidden Markov Model.

Keywords: landslides, influencing factors, neural network model, hidden Markov model

Procedia PDF Downloads 384
2210 Polymorphism of Candidate Genes for Meat Production in Lori Sheep

Authors: Shahram Nanekarania, Majid Goodarzia

Abstract:

Calpastatin and callipyge are known candidate genes for meat quality and quantity. The calpastatin gene has been located on chromosome 5 of sheep, and the callipyge gene has been localized to the telomeric region of ovine chromosome 18. The objective of this study was the identification of calpastatin and callipyge gene polymorphisms and analysis of the genotype structure in a population of Lori sheep kept in Iran. Blood samples were taken from 120 sheep of the Lori breed, and genomic DNA was extracted by the salting-out method. Polymorphism was identified using the PCR-RFLP technique. The PCR products were digested with the MspI and FaqI restriction enzymes for the calpastatin gene and the callipyge gene, respectively. In this population, three patterns were observed, and the AA, AB and BB genotypes were identified with frequencies of 0.32, 0.63 and 0.05 for the calpastatin gene. The results obtained for the callipyge gene revealed that only the wild-type allele A was observed, indicating that only genotype AA was present in the population under consideration.
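A minimal sketch of how allele frequencies follow from the reported genotype frequencies (gene counting: each homozygote contributes two copies of its allele, each heterozygote one of each); the numbers below are the calpastatin frequencies quoted in the abstract.

```python
def allele_frequencies(f_AA, f_AB, f_BB):
    """Gene-counting estimate of allele frequencies from genotype frequencies."""
    p_A = f_AA + 0.5 * f_AB
    p_B = f_BB + 0.5 * f_AB
    return p_A, p_B

p_A, p_B = allele_frequencies(0.32, 0.63, 0.05)
print(f"A = {p_A:.3f}, B = {p_B:.3f}")   # A = 0.635, B = 0.365
```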

Keywords: polymorphism, calpastatin, callipyge, PCR-RFLP, Lori sheep

Procedia PDF Downloads 612
2209 Predicting Food Waste and Losses Reduction for Fresh Products in Modified Atmosphere Packaging

Authors: Matar Celine, Gaucel Sebastien, Gontard Nathalie, Guilbert Stephane, Guillard Valerie

Abstract:

To increase the very short shelf life of fresh fruits and vegetables, Modified Atmosphere Packaging (MAP) allows an optimal atmosphere composition to be maintained around the product and thus prevents its decay. This technology relies on the modification of the internal packaging atmosphere due to the equilibrium between the production/consumption of gases by the respiring product and gas permeation through the packaging material. While, to the best of our knowledge, the benefit of MAP for fresh fruits and vegetables has been widely demonstrated in the literature, its effect on shelf life increase has never been quantified and formalized in a clear and simple manner, making it difficult to anticipate its economic and environmental benefit, notably through the decrease of food losses. Mathematical modelling of mass transfers in the food/packaging system is the basis for a better design and dimensioning of the food packaging system. But up to now, existing models have not permitted the estimation of food quality or of the shelf life gain reached by using MAP. However, shelf life prediction is an indispensable prerequisite for quantifying the effect of MAP on food losses reduction. The objective of this work is to propose an innovative approach to predict the shelf life of MAP food products and then to link it to a reduction of food losses and wastes. For this purpose, a 'Virtual MAP modeling tool' was developed by coupling a new predictive deterioration model (based on the prediction of the visually deteriorated surface, encompassing colour, texture and spoilage development) with models from the literature for respiration and permeation. A major input of this modelling tool is the maximal percentage of deterioration (MAD), which was assessed from dedicated consumer studies. Strawberries of the variety Charlotte were selected as the model food for their high perishability and high respiration rate (50-100 ml CO₂/h/kg produced at 20°C), making them a good representative of challenging post-harvest storage. A value of 13% was determined as the limit of acceptability for consumers, permitting the definition of the products' shelf life. The 'Virtual MAP modeling tool' was validated in isothermal conditions (5, 10 and 20°C) and in dynamic temperature conditions mimicking commercial post-harvest storage of strawberries. RMSE values were systematically lower than 3% for the O₂, CO₂ and deterioration profiles as a function of time, respectively, confirming the goodness of the model fit. For the investigated temperature profile, a shelf life gain of 0.33 days was obtained in MAP compared to the conventional storage situation (no MAP condition). A shelf life gain of more than 1 day could be obtained for the optimized post-harvest conditions investigated numerically. Such a shelf life gain permitted the anticipation of a significant reduction of food losses at the distribution and consumer steps. This reduction of food losses as a function of shelf life gain has been quantified using a dedicated mathematical equation developed for this purpose.
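A minimal sketch of the mass-balance idea behind such a tool: headspace O₂ and CO₂ evolve from the balance between Michaelis-Menten respiration of the product and permeation through the film, integrated here with SciPy. All parameter values are illustrative placeholders, not the fitted strawberry/film values of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (placeholders, not fitted values).
V = 1000.0                # headspace volume, mL
M = 0.25                  # product mass, kg
k_O2, k_CO2 = 8.0, 30.0   # lumped film transmission, mL/h per unit fraction gradient
Vm, Km = 60.0, 0.05       # Michaelis-Menten respiration: mL O2/(kg h), fraction O2
RQ = 1.0                  # respiratory quotient (CO2 produced / O2 consumed)
y_O2_air, y_CO2_air = 0.21, 0.0004

def headspace(t, y):
    y_o2, y_co2 = y
    r_o2 = Vm * y_o2 / (Km + y_o2) * M                 # O2 consumption, mL/h
    dy_o2 = (k_O2 * (y_O2_air - y_o2) - r_o2) / V      # permeation in, respiration out
    dy_co2 = (k_CO2 * (y_CO2_air - y_co2) + RQ * r_o2) / V
    return [dy_o2, dy_co2]

sol = solve_ivp(headspace, t_span=(0, 120), y0=[0.21, 0.0004],
                t_eval=np.linspace(0, 120, 13))        # 120 h of storage
for t, o2, co2 in zip(sol.t, *sol.y):
    print(f"t={t:5.0f} h  O2={o2 * 100:5.1f}%  CO2={co2 * 100:5.1f}%")
```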

Keywords: food losses and wastes, modified atmosphere packaging, mathematical modeling, shelf life prediction

Procedia PDF Downloads 183
2208 Effect of Arsenic Treatment on Element Contents of Sunflower, Growing in Nutrient Solution

Authors: Szilvia Várallyay, Szilvia Veres, Éva Bódi, Farzaneh Garousi, Béla Kovács

Abstract:

The agricultural environment is contaminated with heavy metals and other toxic elements, which pose more and more threats. One of the most important toxic elements is arsenic. The consequences of arsenic toxicity in the plant organism include decreased root weight and discoloration and necrosis of the leaves. The toxicity of arsenic depends on the quality and quantity of the arsenic species present. The arsenic in the soil and in the plant is present in the most hazardous species. A dicotyledonous plant, namely sunflower, was chosen for the experiment. The sunflower plants were grown in nutrient solution at different As(III) levels. The As, P and Fe contents of the experimental plants were measured using ICP-MS. A negative correlation was observed between higher concentrations of As(V) and As(III) in the nutrient solution and the P content of the sunflower tissue. The amount of Fe decreased when we used a higher concentration of arsenic (30 mg kg-1). We can conclude that arsenic had a negative effect on the P and Fe content of sunflower tissue.

Keywords: arsenic, sunflower, ICP-MS, toxicity

Procedia PDF Downloads 647
2207 Abridging Pharmaceutical Analysis and Drug Discovery via LC-MS-TOF, NMR, in-silico Toxicity-Bioactivity Profiling for Therapeutic Purposing Zileuton Impurities: Need of Hour

Authors: Saurabh B. Ganorkar, Atul A. Shirkhedkar

Abstract:

The need for investigations protecting against toxic impurities seems to be a primary requirement; however, impurities which prove non-toxic can be explored for any therapeutic potential they may have, to assist advanced drug discovery. The essential role of pharmaceutical analysis can thus be extended effectively to achieve this. The present study successfully achieved these objectives with the characterization of major degradation products as impurities of Zileuton, which has been used to treat asthma for years. Forced degradation studies were performed to identify the potential degradation products using ultra-fast liquid chromatography. Liquid chromatography-mass spectrometry (time of flight) and proton nuclear magnetic resonance studies were utilized effectively to characterize the drug along with five major oxidative and hydrolytic degradation products (DPs). The mass fragments were identified for Zileuton and the degradation pathways were investigated. The characterized DPs were subjected to in-silico studies such as XP molecular docking to compare the gain or loss in binding affinity with the 5-lipoxygenase enzyme. One of the impurities was found to have a higher binding affinity than the drug itself, indicating its potential to be more bioactive as a better antiasthmatic. Close structural resemblance has the ability to potentiate or reduce bioactivity and/or toxicity. The chances of being biologically active at other sites cannot be denied, and this is assessed to some extent by predicting the probability of activity with the Prediction of Activity Spectra for Substances (PASS). The impurities were found to be bioactive as antineoplastics, antiallergics, and inhibitors of complement factor D. The toxicological endpoints Ames mutagenicity, carcinogenicity, developmental toxicity and skin irritancy were evaluated using Toxicity Prediction by Komputer Assisted Technology (TOPKAT). Two of the impurities were found to be non-toxic compared to the original drug Zileuton. As drugs are purposed and repurposed effectively, the impurities can be as well, since they can have greater binding affinity, less toxicity and a better ability to be bioactive at other biological targets.

Keywords: UFLC, LC-MS-TOF, NMR, Zileuton, impurities, toxicity, bio-activity

Procedia PDF Downloads 195