Search results for: MSW quantity prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3195

2385 Syngas From Polypropylene Gasification in a Fluidized Bed

Authors: Sergio Rapagnà, Alessandro Antonio Papa, Armando Vitale, Andre Di Carlo

Abstract:

In recent years, the growing world population has enormously increased the use of plastic products, in particular for transporting and storing consumer goods such as food and beverages. Plastics are also widely used in the automotive industry, in the construction of electronic equipment, and in clothing and home furnishings. Over the last 70 years, the annual production of plastic products has increased from 2 million tons to 460 million tons. About 20% of this quantity is mismanaged as waste. The consequence of this mismanagement is the release of plastic waste into terrestrial and marine environments, which represents a danger to human health and the ecosystem. Recycling all plastics is difficult because they are often made from mixtures of polymers that are incompatible with each other and contain different additives. The products obtained are always of lower quality, and after two or three recycling cycles they must be eliminated, either by thermal treatment to produce heat or by disposal in landfill. An alternative to these current solutions is to obtain a mixture of gases rich in H₂, CO and CO₂ suitable for the production of chemicals, with consequent savings in fossil resources. A hydrogen-rich syngas can be obtained by a gasification process in a fluidized bed reactor, with steam as the fluidization medium. The fluidized bed reactor allows the gasification of plastics to be carried out at a constant temperature and permits the use of different plastics with different compositions and grain sizes. Furthermore, during the gasification process, the use of steam increases the gasification of the char produced by the initial pyrolysis/devolatilization of the plastic particles.
The bed inventory can be made of particles with catalytic properties, such as olivine, capable of catalysing the steam reforming reactions of the heavy hydrocarbons normally called tars, with a consequent increase in the quantity of gases produced. The plant comprises a fluidized bed reactor made of AISI 310 steel, with an internal diameter of 0.1 m, containing 3 kg of olivine particles as the bed inventory. The reactor is externally heated by an oven up to 1000 °C. The hot product gases that exit the reactor, after being cooled, are quantified using a mass flow meter. Gas analyzers instantly measure the volumetric composition of H₂, CO, CO₂, CH₄ and NH₃. At the conference, the results obtained from the continuous gasification of polypropylene (PP) particles in a steam atmosphere at temperatures of 840-860 °C will be presented.
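
The measurement chain described above, a mass flow meter plus volumetric gas analyzers, lends itself to a simple specific gas yield calculation. The sketch below is a minimal illustration with assumed flow, feed-rate, and composition values, not measured campaign data:

```python
def gas_yield(total_flow_nl_min, feed_g_min, fractions):
    """Specific yield of each gas species (Nl per gram of plastic fed),
    from the total dry-gas flow and the analyzer volume fractions."""
    return {species: total_flow_nl_min * frac / feed_g_min
            for species, frac in fractions.items()}

# Illustrative readings: 30 Nl/min of product gas from 10 g/min of PP feed.
composition = {"H2": 0.45, "CO": 0.25, "CO2": 0.20, "CH4": 0.10}
yields = gas_yield(total_flow_nl_min=30.0, feed_g_min=10.0,
                   fractions=composition)
print(yields["H2"])  # 1.35 Nl of H2 per gram of PP
```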

Keywords: gasification, fluidized bed, hydrogen, olivine, polypropylene

Procedia PDF Downloads 27
2384 Temporal and Spatial Distribution Prediction of Patinopecten yessoensis Larvae in Northern China Yellow Sea

Authors: RuiJin Zhang, HengJiang Cai, JinSong Gui

Abstract:

It takes Patinopecten yessoensis larvae more than 20 days to go from spawning to settlement. Due to natural environmental factors such as currents, the larvae can be transported over distances of hundreds of kilometers, leading to high instability in their spatial and temporal distribution and great difficulty in natural spat collection. Predicting the distribution is therefore of great significance for improving the operating efficiency of spat collection. A hydrodynamic model of the Northern China Yellow Sea was established from the equations of motion of physical oceanography and verified against tidal harmonic constants and measured current velocities in Dalian Bay. According to the passive drift characteristics of the larvae, and by combining the hydrodynamic model with a particle tracking model, a spatial and temporal distribution prediction model was established, and the distribution of the larvae under the influence of currents and wind was simulated. The model results indicate that ocean currents have the greatest impact on the passive drift path and diffusion of the larvae; the impact of wind is also important, changing the direction and speed of the drift. Patinopecten yessoensis larvae were generated in the sea along Zhangzi Island and Guanglu-Dachangshan Island, but after two months, under the combined influence of wind and currents, the larvae appeared west of Dalian and south of Lvshun, and even in Bohai Bay. The model results are consistent with qualitative analyses in the relevant literature, and this conclusion explains, from the perspective of numerical simulation, where the larvae come from.
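
The passive drift step at the core of such a particle tracking model can be sketched as a simple explicit-Euler update. The 3% wind-drag factor and the steady current below are illustrative assumptions, not values from the study:

```python
def advect(pos, current, wind, dt_hours, wind_factor=0.03):
    """One explicit-Euler step of passive drift: the particle moves with
    the ambient current plus a small fraction of the wind speed (a common
    surface-drift approximation; the 3% factor is an assumption)."""
    x, y = pos
    u = current[0] + wind_factor * wind[0]   # east velocity, m/s
    v = current[1] + wind_factor * wind[1]   # north velocity, m/s
    dt = dt_hours * 3600.0                   # step length in seconds
    return (x + u * dt, y + v * dt)

# Drift over 20 days under a steady 0.1 m/s eastward current, no wind.
pos = (0.0, 0.0)
for _ in range(20 * 24):
    pos = advect(pos, current=(0.1, 0.0), wind=(0.0, 0.0), dt_hours=1.0)
print(round(pos[0] / 1000.0, 1), "km east")  # 172.8 km over 20 days
```

In a full model the current and wind fields would come from the hydrodynamic solution at each particle position and time step, rather than being constants.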

Keywords: numerical simulation, Patinopecten yessoensis larvae, predicting model, spatial and temporal distribution

Procedia PDF Downloads 304
2383 A Three Elements Vector Valued Structure’s Ultimate Strength-Strong Motion-Intensity Measure

Authors: A. Nicknam, N. Eftekhari, A. Mazarei, M. Ganjvar

Abstract:

This article presents an alternative collapse capacity intensity measure in three-element form, which is influenced by the spectral ordinates at periods longer than the first-mode period at near- and far-source sites. A parameter, denoted by β, is defined by which the effects of the spectral ordinates, up to the effective period (2T_1), on the intensity measure are taken into account. The methodology permits meeting the hazard-levelled target extreme event in probabilistic and deterministic forms. A MATLAB code involving OpenSees is developed to calculate the collapse capacities of 8 archetype RC structures of 2 to 20 stories for the regression process. The incremental dynamic analysis (IDA) method is used to calculate the structures' collapse values, accounting for element stiffness and strength deterioration. The general near-field record set presented by FEMA is used in performing a series of nonlinear analyses. Eight linear relationships are developed for the 8 structures, with correlation coefficients up to 0.93. A near-field collapse capacity prediction equation is developed taking into account the results of the regression processes obtained from the 8 structures. The proposed prediction equation is validated against a set of actual near-field records with good agreement. Implementation of the proposed equation on four archetype RC structures demonstrated collapse capacities at near-field sites different from those of FEMA. The differences are believed to be due to accounting for the spectral shape effects.
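
The quality of each of the eight fitted linear relationships is judged by its correlation coefficient. A minimal sketch of that computation on toy (β, collapse capacity) pairs, not the actual IDA results:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, the statistic used to judge each
    structure's fitted linear relationship."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Toy data: collapse capacity decreasing roughly linearly with beta.
beta = [0.2, 0.4, 0.6, 0.8, 1.0]
capacity = [1.10, 0.95, 0.84, 0.66, 0.52]
print(round(pearson_r(beta, capacity), 3))
```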

Keywords: collapse capacity, fragility analysis, spectral shape effects, IDA method

Procedia PDF Downloads 239
2382 What Are the Problems in the Case of Analysis of Selenium by Inductively Coupled Plasma Mass Spectrometry in Food and Food Raw Materials?

Authors: Béla Kovács, Éva Bódi, Farzaneh Garousi, Szilvia Várallyay, Dávid Andrási

Abstract:

For the analysis of elements in different food, feed and food raw material samples, a flame atomic absorption spectrometer (FAAS), a graphite furnace atomic absorption spectrometer (GF-AAS), an inductively coupled plasma optical emission spectrometer (ICP-OES) or an inductively coupled plasma mass spectrometer (ICP-MS) is generally applied. All these analytical instruments suffer different physical and chemical interfering effects when analysing food and food raw material samples. The smaller the concentration of an analyte and the larger the concentration of the matrix, the larger the interfering effects. Nowadays, it is very important to analyse increasingly small concentrations of elements. Of the above instruments, the inductively coupled plasma mass spectrometer is generally capable of analysing the smallest concentrations of elements. The applied ICP-MS instrument also has Collision Cell Technology (CCT). Using CCT mode, certain elements have detection limits better by one to three orders of magnitude compared to a normal ICP-MS analytical method. The CCT mode gives better detection limits mainly for the analysis of selenium (and of arsenic, germanium, vanadium, and chromium). To elaborate an analytical method for selenium with an inductively coupled plasma mass spectrometer, the most important interfering effects (problems) were evaluated: 1) isobaric elemental, 2) isobaric molecular, and 3) physical interferences. When analysing food and food raw material samples, another (new) interfering effect emerged in ICP-MS, namely the effect of various matrixes having different evaporation and nebulization effectiveness, and moreover different carbon contents of the food, feed and food raw material samples. In our research work, the effect of different water-soluble compounds and of various carbon contents (as sample matrix) on changes in the intensity of selenium was examined. In this way we could finally find “opportunities” to decrease the error of selenium analysis.
To analyse selenium in food, feed and food raw material samples, the most appropriate inductively coupled plasma mass spectrometer is a quadrupole instrument applying a collision cell technique (CCT). The extent of the interfering effect of the carbon content depends on the type of compound. The carbon content significantly affects the measured concentrations (intensities) of Se, which can be corrected using an internal standard (arsenic or tellurium).
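
The internal-standard correction mentioned in the conclusion can be sketched as a simple ratio scaling; the count rates below are invented for illustration:

```python
def internal_standard_correct(analyte_cps, is_cps_sample, is_cps_blank):
    """Correct the analyte intensity for matrix-induced signal suppression
    or enhancement: scale the analyte counts by the ratio of the
    internal-standard signal in a matrix-free calibration solution to its
    signal in the sample matrix."""
    recovery = is_cps_sample / is_cps_blank   # < 1 means suppression
    return analyte_cps / recovery

# A carbon-rich matrix suppresses the Te internal standard to 80% of its
# matrix-free signal, so the Se counts are scaled up accordingly.
corrected = internal_standard_correct(analyte_cps=8000.0,
                                      is_cps_sample=40000.0,
                                      is_cps_blank=50000.0)
print(round(corrected))  # 10000
```

Converting corrected intensities to concentrations would then go through the usual external calibration curve.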

Keywords: selenium, ICP-MS, food, food raw material

Procedia PDF Downloads 508
2381 Human Immune Response to Surgery: The Surrogate Prediction of Postoperative Outcomes

Authors: Husham Bayazed

Abstract:

Immune responses following surgical trauma play a pivotal role in postoperative outcomes, from healing and recovery to postoperative complications. Complications, including infections and protracted recovery, occur in a significant fraction of the roughly 300 million surgeries performed annually worldwide. They cause personal suffering along with a significant economic burden on the healthcare system of any community. The accurate prediction of postoperative complications, and patient-targeted interventions for their prevention, remain major clinical challenges. Recent Findings: Recent studies focus on the immune dysregulation mechanisms that occur in response to surgical trauma as a key determinant of postoperative complications. Earlier studies mainly addressed the detection of inflammatory plasma markers, which provide important clues regarding pathogenesis. More recently, single-cell technologies, such as mass cytometry and single-cell RNA sequencing, have markedly enhanced our ability to understand the immunological basis of postoperative complications of surgical trauma and to identify their prognostic biological signatures. Summary: The advent of proteomic technologies has significantly advanced our ability to predict the risk of postoperative complications. Multiomic modeling of patients' immune states holds promise for the discovery of preoperative predictive biomarkers, providing patients and surgeons with information to improve surgical outcomes. However, more studies are required to accurately predict the risk of postoperative complications in individual patients.

Keywords: immune dysregulation, postoperative complications, surgical trauma, flow cytometry

Procedia PDF Downloads 86
2380 Studying the Temperature Field of Hypersonic Vehicle Structure with Aero-Thermo-Elasticity Deformation

Authors: Geng Xiangren, Liu Lei, Gui Ye-Wei, Tang Wei, Wang An-ling

Abstract:

The malfunction of a thermal protection system (TPS) caused by aerodynamic heating is a latent threat to aircraft structural safety. Accurately predicting the structural temperature field is therefore quite important for the TPS design of a hypersonic vehicle. Since Thornton's work in 1988, coupled methods for aerodynamic heating and heat transfer have developed rapidly. However, little attention has been paid to the influence of structural deformation on aerodynamic heating and the structural temperature field. In flight, especially long-endurance flight, the structural deformation caused by aerodynamic heating and temperature rise has a direct impact on the aerodynamic heating and the structural temperature field; thus, the coupled interaction cannot be neglected. In this paper, based on the method of static aero-thermo-elasticity and considering the influence of aero-thermo-elastic deformation, the coupled aerodynamic heating and heat transfer results of a hypersonic vehicle wing model were calculated. The results show that, for low-curvature regions, such as the fuselage or the center section of the wing, structural deformation has little effect on the temperature field. However, for the stagnation region with high curvature, the coupled effect is not negligible, so it is quite important to take the effect of elastic deformation into account in structural temperature prediction. This work has laid a solid foundation for improving the prediction accuracy of the temperature distribution of aircraft structures and the evaluation capacity of structural performance.
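
The two-way dependence described, heating depending on the deformed shape while deformation depends on temperature, is typically resolved with a fixed-point coupling iteration. A minimal sketch with stand-in algebraic "solvers"; the closures and constants are invented:

```python
def coupled_solution(aero_heat, deform, t0=300.0, relax=0.5, tol=1e-6):
    """Fixed-point sketch of an aero-thermo-elastic coupling loop:
    compute deformation from temperature, then wall temperature from the
    deformed shape, and iterate with under-relaxation until converged."""
    t = t0
    for _ in range(1000):
        d = deform(t)           # elastic deformation from temperature
        t_new = aero_heat(d)    # wall temperature from deformed shape
        if abs(t_new - t) < tol:
            return t_new
        t = t + relax * (t_new - t)
    return t

# Toy closures: deformation grows with temperature; a deformed (blunted)
# stagnation region heats slightly less.
deform = lambda t: 0.001 * (t - 300.0)
aero_heat = lambda d: 1200.0 - 150.0 * d
t_wall = coupled_solution(aero_heat, deform)
print(round(t_wall, 1))  # 1082.6
```

In the real problem both closures would be full CFD and structural solves, so each outer iteration is expensive and the under-relaxation factor matters.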

Keywords: aerothermoelasticity, elastic deformation, structural temperature, multi-field coupling

Procedia PDF Downloads 341
2379 A Low Order Thermal Envelope Model for Heat Transfer Characteristics of Low-Rise Residential Buildings

Authors: Nadish Anand, Richard D. Gould

Abstract:

A simplified model is introduced for determining the thermal characteristics of a low-rise residential (LRR) building and predicting the energy usage of its Heating, Ventilation & Air Conditioning (HVAC) system as weather conditions, reflected in the ambient (outside air) temperature, change. The LRR building is treated as a simple lump for solving the heat transfer problem, and the model is derived using the lumped capacitance model of transient conduction heat transfer. Most contemporary HVAC systems have a thermostat control with an offset temperature and user-defined set-point temperatures that define when the HVAC system switches on and off. The aim is therefore to predict, as accurately as possible, the body temperature (i.e., the inside air temperature), which determines the switching on and off of the HVAC system. To validate the mathematical model derived from the lumped capacitance approach, we used the EnergyPlus simulation engine, which simulates buildings with considerable accuracy. Through the low-order model, we predicted the inside air temperature of a single house placed in three different climate zones (Detroit, Raleigh & Austin) and in different orientations for the summer and winter seasons. The prediction error of the model, for the same day as that of the model parameter calculation, was < 10% in winter for almost all orientations and climate zones, whereas in summer the prediction error was < 10% for all orientations only in the higher-latitude climate zones (Raleigh & Detroit). Possible factors responsible for the large variations are also noted in the work, paving the way for future research.
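
The lumped-capacitance balance with thermostat hysteresis described above can be sketched as a first-order ODE integrated with explicit Euler; all parameter values below are illustrative assumptions, not the paper's calibrated ones:

```python
def simulate(t_inside, t_ambient, hours, rc_hours=8.0, hvac_gain=3.0,
             setpoint=21.0, offset=0.5, dt=0.01):
    """Lumped-capacitance sketch: dT/dt = (T_amb - T)/RC + q_hvac, with a
    thermostat that switches heating on below (setpoint - offset) and off
    above (setpoint + offset)."""
    hvac_on = False
    t = t_inside
    for _ in range(int(hours / dt)):
        if t < setpoint - offset:
            hvac_on = True
        elif t > setpoint + offset:
            hvac_on = False
        q = hvac_gain if hvac_on else 0.0   # heating rate, degC per hour
        t += dt * ((t_ambient - t) / rc_hours + q)
    return t

# Winter day: 0 degC outside, house starts at 15 degC; the inside air
# temperature settles into the hysteresis band around the 21 degC setpoint.
final = simulate(t_inside=15.0, t_ambient=0.0, hours=24.0)
print(round(final, 1))
```

The RC time constant and HVAC gain are exactly the kind of parameters the paper identifies from EnergyPlus output before predicting new days.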

Keywords: building energy, energy consumption, energy+, HVAC, low order model, lumped capacitance

Procedia PDF Downloads 266
2378 Unlocking Green Hydrogen Potential: A Machine Learning-Based Assessment

Authors: Said Alshukri, Mazhar Hussain Malik

Abstract:

Green hydrogen is hydrogen produced using renewable energy sources. In the last few years, Oman has aimed to reduce its dependency on fossil fuels. Recently, the hydrogen economy has become a global trend, and many countries have started to investigate the feasibility of implementing this sector; Oman created an alliance to establish the policy and rules for it. Motivated by both global and local interest in green hydrogen, this paper investigates the potential of producing hydrogen from wind and solar energy at three different locations in Oman, namely Duqm, Salalah, and Sohar. Using the machine-learning software WEKA and local meteorological data, the project was designed to determine which location has the highest wind and solar energy potential. First, various supervised models were tested for prediction accuracy, and the Random Forest (RF) model was found to have the best prediction performance. The RF model was applied to 2021 meteorological data for each location, and the results indicated that Duqm has the highest wind and solar energy potential. A system of one wind turbine in Duqm can produce 8335 MWh/year, which could be utilized in the water electrolysis process to produce 88847 kg of hydrogen, while a solar system consisting of 2820 solar cells is estimated to produce 1666.223 MWh/year, capable of producing 177591 kg of hydrogen.
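
A generic back-of-envelope conversion from annual energy to electrolytic hydrogen mass can be sketched as follows. The 50 kWh/kg specific consumption is an assumed round figure, not the study's value, so the result deliberately differs from the abstract's reported masses:

```python
def hydrogen_mass_kg(energy_mwh, kwh_per_kg=50.0):
    """Electrolytic hydrogen yield from annual electrical energy.
    The specific consumption (kWh per kg of H2) is an assumed generic
    figure; practical electrolyser systems vary with efficiency."""
    return energy_mwh * 1000.0 / kwh_per_kg

# The Duqm wind figure from the text, 8335 MWh/year:
print(hydrogen_mass_kg(8335.0))  # 166700.0 kg at the assumed 50 kWh/kg
```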

Keywords: green hydrogen, machine learning, wind and solar energies, WEKA, supervised models, random forest

Procedia PDF Downloads 79
2377 Transformer Fault Diagnostic Predicting Model Using Support Vector Machine with Gradient Descent Optimization

Authors: R. O. Osaseri, A. R. Usiobaifo

Abstract:

The power transformer, which is responsible for voltage transformation, is of great relevance in the power system, and the oil-immersed transformer is widely used all over the world. Prompt and proper maintenance of the transformer is of utmost importance. The dissolved gas content in power transformer oil is of enormous importance in detecting incipient faults of the transformer. Accurate prediction of incipient faults from the transformer oil is needed to facilitate prompt maintenance, reduce cost, and minimize error. Fault prediction and diagnostics have been the focus of many researchers, and many previous works have reported the use of artificial intelligence to predict incipient transformer faults. In this study, a machine learning technique was employed, using gradient descent optimization and a Support Vector Machine (SVM), to predict incipient fault diagnoses of transformers. The method focuses on creating a system that improves its performance based on previous results and historical data. The system design approach has two phases: a training phase and a testing phase. The algorithm is trained with a training dataset, and the learned model is then applied to a set of new data; these two datasets are used to establish the accuracy of the proposed model. A transformer fault diagnostic model based on an SVM with gradient descent optimization is presented, with satisfactory diagnostic capability and a higher success rate in predicting incipient transformer faults than existing diagnostic methods.
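
The combination named in the abstract, an SVM trained with gradient descent, can be sketched for the linear case as subgradient descent on the regularized hinge loss. The dissolved-gas feature values below are invented for illustration, and this is a toy sketch, not the authors' implementation:

```python
def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Linear SVM trained by (sub)gradient descent on the hinge loss;
    labels must be +1/-1."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:                      # inside margin: hinge active
                w = [wj - lr * (lam * wj - yi * xj)
                     for wj, xj in zip(w, xi)]
                b += lr * yi
            else:                               # only the regularizer acts
                w = [wj - lr * lam * wj for wj in w]
    return w, b

# Toy standardized dissolved-gas features (e.g. H2 and C2H2 levels):
# -1 = normal, +1 = incipient fault. Values invented for illustration.
X = [[-1.0, -0.9], [-0.8, -1.1], [0.9, 0.8], [1.0, 1.1]]
y = [-1, -1, 1, 1]
w, b = train_linear_svm(X, y)
predict = lambda x: 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else -1
print([predict(x) for x in X])  # [-1, -1, 1, 1]
```

A practical dissolved-gas diagnostic would use more gas ratios as features and likely a kernelized SVM; the gradient-descent training loop stays the same in spirit.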

Keywords: diagnostic model, gradient descent, machine learning, support vector machine (SVM), transformer fault

Procedia PDF Downloads 322
2376 Land Suitability Prediction Modelling for Agricultural Crops Using Machine Learning Approach: A Case Study of Khuzestan Province, Iran

Authors: Saba Gachpaz, Hamid Reza Heidari

Abstract:

The sharp increase in population growth puts more pressure on agricultural areas to satisfy the food supply. Achieving this requires consuming more resources and, besides other environmental concerns, highlights the need for sustainable agricultural development. Land-use management is a crucial factor in obtaining optimum productivity. Machine learning is a widely used technique in the agricultural sector, from yield prediction to customer behavior; it learns patterns and correlations from a data set. In this study, nine physical control factors, namely soil classification, electrical conductivity, normalized difference water index (NDWI), groundwater level, elevation, annual precipitation, pH of water, annual mean temperature, and slope, in the alluvial plain in Khuzestan (an agricultural hotspot in Iran), are used to decide the best agricultural land use for both rainfed and irrigated agriculture for ten different crops. For this purpose, each variable was imported into ArcGIS and a raster layer was obtained. Next, using training samples, all layers were imported into the Python environment. A random forest model was applied, and the weight of each variable was specified. In the final step, results were visualized using a digital elevation model, and the importance of all factors for each of the crops was obtained. Our results show that although 62% of the study area is allocated to agricultural purposes, only 42.9% of these areas can be classified as suitable for cultivation.
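
The final suitability summary, the share of the map falling in a given class, reduces to counting raster cells. A toy sketch with an invented grid standing in for the GIS layer:

```python
def class_fraction(raster, cls):
    """Fraction of raster cells assigned to a given suitability class.
    The raster here is a toy nested list standing in for a GIS layer."""
    flat = [cell for row in raster for cell in row]
    return flat.count(cls) / len(flat)

# 0 = unsuitable, 1 = suitable for cultivation (toy 4x5 grid).
raster = [[1, 1, 0, 0, 1],
          [0, 1, 1, 0, 0],
          [1, 0, 0, 1, 0],
          [0, 0, 1, 0, 1]]
print(class_fraction(raster, 1))  # 0.45
```

On the real study data, the same count over the random-forest-classified raster, restricted to the agricultural zone, yields the reported 42.9% suitable share.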

Keywords: land suitability, machine learning, random forest, sustainable agriculture

Procedia PDF Downloads 84
2375 Enterprise Infrastructure Related to the Product Value Transferred from Intellectual Capital

Authors: Chih Chin Yang

Abstract:

The paper proposes a new theory of intellectual capital (IC) and a value approach associated with production and the market. After an in-depth review and analysis of leading firms in this field, a holistic intellectual capital model is discussed, which involves the transport, delivery support, interfaces, and systems of intellectual capital. Through a quantitative study, it is found that there is a significant relationship between product value and infrastructure in a company. Product values are transferred from intellectual capital, which includes three elements of content, while the enterprise includes three elements of infrastructure in its market and product values.

Keywords: enterprise, product value, intellectual capital, market and product values

Procedia PDF Downloads 392
2374 Optimization of Temperature Coefficients for MEMS Based Piezoresistive Pressure Sensor

Authors: Vijay Kumar, Jaspreet Singh, Manoj Wadhwa

Abstract:

Piezoresistive pressure sensors were among the first microelectromechanical systems (MEMS) devices developed and still display significant growth, prompted by advancements in micromachining techniques and material technology. In MEMS-based piezoresistive pressure sensors, temperature can be considered the main environmental condition affecting system performance. The study of the thermal behavior of these sensors is essential to define the parameters that cause the output characteristics to drift. In this work, a study of the effects of temperature and doping concentration in a boron-implanted piezoresistor for a silicon-based pressure sensor is discussed. We have optimized the temperature coefficient of resistance (TCR) and temperature coefficient of sensitivity (TCS) values to determine the effect of temperature drift on sensor performance. To be more precise, a high doping concentration is needed to reduce the temperature drift. The Wheatstone bridge in a pressure sensor is supplied with either a constant-voltage or a constant-current input. With a constant-voltage supply, the thermal drift can be compensated with an external compensation circuit, whereas with a constant-current supply the thermal drift can be compensated directly by the bridge itself. It is nevertheless beneficial to also compensate the temperature coefficient of the piezoresistors so as to further reduce the temperature drift. With a current supply, the TCS depends on both TCπ and TCR. As TCπ is a negative quantity and TCR is a positive quantity, it is possible to choose an appropriate doping concentration at which they cancel each other. An exact cancellation of the TCR and TCπ values is not readily attainable; therefore, an adjustable approach is generally used in practical applications.
Thus, one goal of this work has been to better understand the origin of temperature drift in pressure sensor devices so that the temperature effects can be minimized or eliminated. This paper describes the optimum doping levels for the piezoresistors at which the TCS of the pressure transducers will be zero due to the cancellation of the TCR and TCπ values. The fabrication and characterization of the pressure sensor are also carried out. The optimized TCR value obtained for the fabricated die is 2300 ± 100 ppm/°C, for which the piezoresistors are implanted at a doping concentration of 5E13 ions/cm³, and a TCS value of -2100 ppm/°C is achieved. The desired TCR and TCS values, approximately equal in magnitude, are therefore achieved, so the thermal effects are considerably reduced. Finally, we have calculated the effect of temperature and doping concentration on the output characteristics of the sensor. This study allows us to predict the sensor behavior against temperature and to minimize this effect by optimizing the doping concentration.
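
The first-order cancellation argument can be written out directly: for a constant-current bridge, TCS ≈ TCR + TCπ, so the reported TCR of +2300 ppm/°C together with a TCS of -2100 ppm/°C implies TCπ ≈ -4400 ppm/°C. A sketch under that simplified first-order model (higher-order terms ignored):

```python
def tcs_constant_current(tcr_ppm, tcpi_ppm):
    """First-order approximation for a constant-current-driven bridge:
    the temperature coefficient of sensitivity is the sum of the
    (positive) TCR and the (negative) temperature coefficient of the
    piezoresistive coefficient, TCpi."""
    return tcr_ppm + tcpi_ppm

# The fabricated die's TCR of +2300 ppm/degC with an implied
# TCpi of -4400 ppm/degC reproduces the reported TCS of -2100 ppm/degC.
print(tcs_constant_current(2300, -4400))  # -2100
```

Full cancellation (TCS = 0) would require a doping level at which TCπ shrinks to exactly -TCR, which, as the text notes, is only approximately attainable in practice.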

Keywords: piezo-resistive, pressure sensor, doping concentration, TCR, TCS

Procedia PDF Downloads 181
2373 Organizational Efficiency in the Age of the Current Financial Crisis: Strategies and Tracks of Progress

Authors: Aharouay Soumaya

Abstract:

Efficiency is a relative concept. It is measured by comparing the productivity obtained against a standard or objective criterion. The quantity and quality of output achieved and the level of service are likewise compared to targets or standards to determine the extent of changes in efficiency. Efficiency improves when more outputs of a specified quality are produced with the same or fewer resource inputs, or when the same amount of output is produced with fewer resources. This article proposes a review of the literature on the strategies adopted by firms in the age of the financial crisis to overcome its negative effects, and on the tracks of progress chosen by organizations to remain successful despite the plight of firms.

Keywords: effectiveness, efficiency, organizational capacity, strategy, management tool, progress, performance

Procedia PDF Downloads 346
2372 Numerical Erosion Investigation of Standalone Screen (Wire-Wrapped) Due to the Impact of Sand Particles Entrained in a Single-Phase Flow (Water Flow)

Authors: Ahmed Alghurabi, Mysara Mohyaldinn, Shiferaw Jufar, Obai Younis, Abdullah Abduljabbar

Abstract:

Erosion modeling equations are typically acquired from controlled experimental trials for solid particles entrained in single-phase or multi-phase flows. These equations are later employed to predict the erosion damage caused by the continuous impact of solid particles entrained in a flow stream. It is well known that the particle impact angle and velocity do not change drastically in gas-sand flows, so an accurate erosion prediction can be projected; on the contrary, high-density fluid flows, such as water flow, through complex geometries, such as sand screens, greatly affect the sand particles' trajectories and consequently the erosion rate predictions. Particle tracking models and erosion equations are therefore frequently applied simultaneously to improve erosion visualization and estimation. In the present work, computational fluid dynamics (CFD)-based erosion modeling was performed using commercially available software, ANSYS Fluent. The continuous phase (water flow) was simulated using the realizable k-epsilon model, and the secondary phase (solid particles), at a 5% flow concentration, was tracked with the discrete phase model (DPM). To accomplish successful erosion modeling, three erosion equations from the literature were introduced into ANSYS Fluent to predict the velocity surge at the screen wire slots and estimate the maximum erosion rates on the screen surface. Results for turbulent kinetic energy, turbulence intensity, dissipation rate, total pressure on the screen, screen wall shear stress, and flow velocity vectors are presented and discussed. Moreover, the particle tracks and path-lines are demonstrated based on their residence time, velocity magnitude, and flow turbulence. On one hand, the utilized erosion equations show similarities in screen erosion patterns, locations, and DPM concentrations;
on the other hand, they estimate slightly different maximum erosion rates for the wire-wrapped screen. This is solely because the utilized erosion equations were developed under assumptions controlled by the respective experimental laboratory conditions.
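
Erosion correlations of this kind share the general form E = k · vⁿ · f(θ). A hedged sketch of that structure, using the classic Finnie ductile angle function and placeholder constants rather than the calibrated values of any of the three compared correlations:

```python
import math

def erosion_rate(v_impact, angle_deg, k=2e-9, n=2.6):
    """Generic erosion correlation E = k * v^n * f(theta). f(theta) is
    the classic Finnie ductile angle function (two branches, continuous
    at tan(theta) = 1/3); k and n are placeholder constants."""
    theta = math.radians(angle_deg)
    if math.tan(theta) <= 1.0 / 3.0:           # shallow-impact branch
        f = math.sin(2.0 * theta) - 3.0 * math.sin(theta) ** 2
    else:                                      # steep-impact branch
        f = math.cos(theta) ** 2 / 3.0
    return k * v_impact ** n * f

# Ductile-metal behavior: erosion peaks at shallow impact angles and
# grows steeply with impact velocity.
shallow, steep = erosion_rate(20.0, 15.0), erosion_rate(20.0, 75.0)
print(shallow > steep)  # True
```

The differing constants and angle functions of the three literature equations are precisely why they agree on erosion locations but disagree slightly on maximum erosion rates.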

Keywords: CFD simulation, erosion rate prediction, material loss due to erosion, water-sand flow

Procedia PDF Downloads 163
2371 Prediction of Damage to Cutting Tools in an Earth Pressure Balance Tunnel Boring Machine (EPB TBM): A Case Study of the L3 Guadalajara Metro Line (Mexico)

Authors: Silvia Arrate, Waldo Salud, Eloy París

Abstract:

The wear of cutting tools is one of the most decisive elements when planning tunneling works, programming maintenance stops, and holding the optimum stock of spare parts during the excavation. Being able to predict the behavior of cutting tools can give a very competitive advantage in terms of costs and excavation performance, optimized to the needs of the TBM itself. The remarkable evolution of data science in recent years makes it possible to analyze the key and most critical machinery parameters in order to know how the cutting head is performing against the excavated ground. Taking Metro Line 3 of Guadalajara in Mexico as a case study, the feasibility of using Specific Energy versus data science applied to parameters such as torque, penetration, and contact force is developed to predict the behavior and status of the cutting tools. The results obtained through both techniques are analyzed and verified against the wear and the field situations observed during excavation in order to determine their effectiveness and predictive capacity. In conclusion, the possibilities and improvements offered by digital tools and calculation algorithms for analyzing the wear of cutting head elements, compared to purely empirical methods, allow early detection of possible damage to cutting tools, which is reflected in optimized excavation performance and a significant improvement in costs and deadlines.
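
The Specific Energy parameter can be illustrated with the textbook Teale-style formulation, a thrust term plus a rotary term; both the formulation and the EPB operating values below are illustrative assumptions, not the project's data:

```python
import math

def specific_energy(thrust_kn, torque_knm, rpm, rop_mm_min, diameter_m):
    """Teale-style specific energy of excavation (MJ/m^3): energy spent
    per unit volume of excavated ground, split into a thrust term and a
    rotary (torque) term."""
    area = math.pi * diameter_m ** 2 / 4.0       # face area, m^2
    rop_m_min = rop_mm_min / 1000.0              # penetration rate, m/min
    thrust_term = thrust_kn / area               # kN/m^2 = kJ/m^3
    rotary_term = 2.0 * math.pi * rpm * torque_knm / (area * rop_m_min)
    return (thrust_term + rotary_term) / 1000.0  # MJ/m^3

# Illustrative EPB values: 20,000 kN thrust, 4,000 kNm torque, 1.5 rpm,
# 40 mm/min penetration, 6.5 m excavation diameter.
print(round(specific_energy(20000, 4000, 1.5, 40, 6.5), 1))  # 29.0
```

A rising specific energy at constant ground conditions is the kind of signal that, cross-checked against the data-science predictions, flags deteriorating cutting tools.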

Keywords: cutting tools, data science, prediction, TBM, wear

Procedia PDF Downloads 49
2370 A Mathematical Model for Hepatitis B Virus Infection and the Impact of Vaccination on Its Dynamics

Authors: T. G. Kassem, A. K. Adunchezor, J. P. Chollom

Abstract:

This paper describes a mathematical model developed to predict the dynamics of hepatitis B virus (HBV) infection and to evaluate the potential impact of vaccination and treatment on those dynamics. We used a compartmental model, expressed as a set of differential equations based on the characteristics of HBV transmission. From this, we derive the threshold quantity R₀ and establish the local asymptotic stability of the disease-free and endemic equilibria. Furthermore, we establish the global stability of the disease-free and endemic equilibria.
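
The compartmental structure described, with vaccination moving a fraction of newborns directly into the recovered class, can be sketched with illustrative (unfitted) parameters; the simple S-I-R layout below is a stand-in for the paper's fuller HBV model:

```python
def simulate_hbv(s0, i0, r0, beta, gamma, mu, vacc, days, dt=0.1):
    """Euler integration of a toy compartmental model: births enter S
    (a fraction `vacc` vaccinated straight into R), transmission at rate
    beta, recovery at gamma, natural mortality mu. Total population is
    conserved by construction."""
    s, i, r = s0, i0, r0
    for _ in range(int(days / dt)):
        n = s + i + r
        ds = mu * n * (1.0 - vacc) - beta * s * i / n - mu * s
        di = beta * s * i / n - gamma * i - mu * i
        dr = mu * n * vacc + gamma * i - mu * r
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
    return s, i, r

def basic_reproduction_number(beta, gamma, mu):
    """R0 = beta / (gamma + mu) for this simple structure."""
    return beta / (gamma + mu)

print(round(basic_reproduction_number(0.3, 0.1, 0.01), 2))  # 2.73
s, i, r = simulate_hbv(990.0, 10.0, 0.0, 0.3, 0.1, 0.01, 0.0, 100.0)
```

The stability results in the paper hinge on this threshold: the disease-free equilibrium is locally stable when R₀ < 1 and the infection persists when R₀ > 1.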

Keywords: hepatitis B virus, epidemiology, vaccination, mathematical model

Procedia PDF Downloads 324
2369 Real Estate Trend Prediction with Artificial Intelligence Techniques

Authors: Sophia Liang Zhou

Abstract:

For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about the macroeconomic determinants of housing prices and have largely focused on one or two areas using point prediction. This study aims to develop data-driven models to accurately predict future housing market trends in different markets. This work studied five metropolitan areas representing different market trends and compared three time-lag situations: no lag, 6-month lag, and 12-month lag. Linear regression (LR), random forest (RF), and artificial neural network (ANN) models were employed to model real estate prices using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, and personal income, in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, FBI, and Freddie Mac. In the original data, some factors are reported monthly, some quarterly, and some yearly. Thus, two methods of imputing missing values, backfilling and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF's inherent limitations. Both the ANN and LR methods generated predictive models with high accuracy (> 95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It was also shown that the technique used to impute missing values and the implementation of a time lag can have a significant influence on model performance and require further investigation.
The best performing models varied for each area, but the backfilled 12-month lag LR models and the interpolated no-lag ANN models showed the most stable performance overall, with accuracies > 95% for each city. This study reveals the influence of input variables in different markets. It also provides evidence to support future studies to identify the optimal time lag and data imputation methods for establishing accurate predictive models.
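The two imputation strategies compared in the study can be sketched in a few lines. This is an illustrative sketch, not the authors' code; the quarterly series and values are invented.

```python
# Sketch: two ways to fill monthly gaps in a quarterly-reported series,
# as compared in the study -- backfilling vs. linear interpolation.

def backfill(series):
    """Fill each None with the next known value (backward fill)."""
    out = list(series)
    nxt = None
    for i in range(len(out) - 1, -1, -1):
        if out[i] is None:
            out[i] = nxt
        else:
            nxt = out[i]
    return out

def interpolate(series):
    """Fill runs of None linearly between the surrounding known values."""
    out = list(series)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1
            lo, hi = out[i - 1], out[j]          # assumes an interior gap
            for k in range(i, j):
                frac = (k - i + 1) / (j - i + 1)
                out[k] = lo + frac * (hi - lo)
            i = j
        else:
            i += 1
    return out

# A GDP-like quantity reported every third month:
quarterly = [100.0, None, None, 106.0, None, None, 112.0]
print(backfill(quarterly))     # gaps take the next observed value
print(interpolate(quarterly))  # gaps ramp linearly between observations
```

Backfilling propagates the next quarterly figure backwards, while interpolation assumes a smooth monthly trend; which performs better is exactly the model-dependent question the study investigates.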

Keywords: linear regression, random forest, artificial neural network, real estate price prediction

Procedia PDF Downloads 103
2368 Estimation of Constant Coefficients of Bourgoyne and Young Drilling Rate Model for Drill Bit Wear Prediction

Authors: Ahmed Z. Mazen, Nejat Rahmanian, Iqbal Mujtaba, Ali Hassanpour

Abstract:

In oil and gas well drilling, the drill bit is an important part of the Bottom Hole Assembly (BHA), which is installed and designed to drill and produce a hole by several mechanisms. The efficiency of the bit depends on many drilling parameters, such as weight on bit, rotary speed, and mud properties. When the bit is pulled out of the hole, the bit damage must be evaluated and recorded very carefully to guide engineers in selecting bits for further planned wells. Drilling with a worn bit may cause severe damage to the bit, leading to cutter or cone losses at the bottom of the hole, where a fishing job will have to take place; all of this increases the operating cost. The main way to reduce the cost of a drilling operation is to maximize the rate of penetration by analyzing real-time data to predict drill bit wear while drilling. There are numerous models in the literature for predicting the rate of penetration from drilling parameters, mostly based on empirical approaches. One of the most commonly used is the Bourgoyne and Young model, where the rate of penetration can be estimated from the drilling parameters as well as a wear index using an empirical correlation, provided all the constants and coefficients are accurately determined. This paper introduces a new methodology to estimate the eight coefficients of the Bourgoyne and Young model using the gPROMS parameter estimation tool, GPE (version 4.2.0). Real data collected from similar formations (12 ¼" sections) in two different fields in Libya are used to estimate the coefficients. The estimated coefficients are then used in the equations and applied to nearby wells in the same field to predict bit wear.
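The coefficient-estimation idea can be illustrated with a simple sketch. The Bourgoyne and Young model has an approximately log-linear form, ROP = exp(a1 + Σ aj·xj), so with ln(ROP) as the response, the coefficients can be fit by ordinary least squares. This is not the authors' gPROMS setup: the drilling records below are synthetic, only three of the eight coefficients are shown, and the solver is a bare normal-equations implementation.

```python
# Illustrative sketch: estimate Bourgoyne-Young-style coefficients by
# log-linearizing ROP = exp(a1 + sum_j a_j * x_j) and solving least squares.
import math

def least_squares(X, y):
    """Solve the normal equations (X^T X) a = X^T y by Gaussian elimination."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for c in range(n):                            # forward elimination
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    a = [0.0] * n
    for c in range(n - 1, -1, -1):                # back substitution
        a[c] = (b[c] - sum(A[c][k] * a[k] for k in range(c + 1, n))) / A[c][c]
    return a

# Synthetic drilling records: rows of [1, x2, x3] with true a = (2, 0.5, -0.3)
rows = [(1.0, 1.0, 2.0), (1.0, 2.0, 1.0), (1.0, 3.0, 3.0), (1.0, 4.0, 0.5)]
rop = [math.exp(2.0 + 0.5 * x2 - 0.3 * x3) for _, x2, x3 in rows]
coeffs = least_squares(rows, [math.log(r) for r in rop])
print(coeffs)  # recovers ~[2.0, 0.5, -0.3] on this noise-free data
```

Real field data are noisy and the full model includes a bit-wear term, which is why a dedicated parameter-estimation tool such as gPROMS is used in the paper.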

Keywords: Bourgoyne and Young model, bit wear, gPROMS, rate of penetration

Procedia PDF Downloads 154
2367 Utilizing Artificial Intelligence to Predict Post Operative Atrial Fibrillation in Non-Cardiac Transplant

Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula

Abstract:

Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Drawing on existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction for silent AF on the development of POAF within 30 days of transplant. A total of 2,261 non-cardiac transplant patients without a preexisting diagnosis of AF were found to have a 5.8% (133/2,261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=.80), there were differences by race and ethnicity (p<0.001 and p=0.035, respectively). The incidence in white transplanted patients was 7.2% (117/1,628), whereas the incidence in black patients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1,002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least one pre-transplant AI-ECG screen for AF above 0.10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1,104 patients without an elevated prediction pre-transplant was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest.
When the prediction for POAF was made using the first postoperative ECG in the sample without an elevated screen pre-transplant (n=1,084, on account of n=20 missing postoperative ECGs), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08 the AI-ECG algorithm had a 98% (95% CI: 97-99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that the incidence of POAF is rare, and a considerable fraction of the POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility for prioritizing monitoring and evaluation of transplant patients with a positive AI-ECG screen. Further development and refinement of a post-transplant-specific algorithm may be warranted to further enhance the diagnostic yield of ECG-based screening.
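The screening metrics quoted above (negative predictive value and sensitivity at a fixed threshold) come directly from a confusion matrix. The sketch below shows the computation; the counts are illustrative, not the study's data.

```python
# Sketch: NPV, sensitivity, and specificity from confusion-matrix counts
# at a fixed screening threshold. Counts are made up for illustration.

def screening_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)   # P(screen positive | POAF occurs)
    npv = tn / (tn + fn)           # P(no POAF | screen negative)
    specificity = tn / (tn + fp)
    return sensitivity, npv, specificity

# e.g. 40 true positives, 300 false positives, 724 true negatives, 20 false negatives
sens, npv, spec = screening_metrics(40, 300, 724, 20)
print(f"sensitivity={sens:.2f}, NPV={npv:.2f}, specificity={spec:.2f}")
```

A high NPV, as reported in the study, means a negative screen reliably rules out near-term POAF, even when sensitivity is only moderate.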

Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning

Procedia PDF Downloads 135
2366 Investigation of Preschool Children's Mathematics Concept Acquisition in Terms of Different Variables

Authors: Hilal Karakuş, Berrin Akman

Abstract:

The preschool years are considered critical because they shape individuals' future lives. Knowledge, skills, and concepts are acquired during this period, and the basis of academic skills rests on it. As all developmental areas advance fastest in this period, the foundation of mathematics education should also be laid then. Mathematics is seen as a difficult and abstract subject by most people. Therefore, the enjoyable side of mathematics should be presented in a concrete way in this period to avoid any bias against mathematics in children. This study was conducted to examine children's mathematics concept acquisition in terms of different variables. This quantitative study used a screening (survey) model. The study group consists of a total of 300 children, selected randomly in groups of five from each class, attending nursery classes and preschool institutions affiliated with the Ministry of National Education, in public and private preschools in Çankaya, a district of Ankara, during the 2014-2015 academic year. The study group was determined by stage sampling: the schools were chosen by convenience sampling, and the children were chosen by simple random sampling. Research data were collected with the Bracken Basic Concept Scale-Revised Form and a Child's Personal Information Form generated by the researcher to obtain information about the children and their families. The Bracken Basic Concept Scale-Revised Form consists of 11 sub-dimensions (color, letter, number, size, shape, comparison, direction-location, quantity, individual and social awareness, building-material) and 307 items. The subtests related to mathematics were used in this research.
The Child's Personal Information Form contains items on demographic information: the child's age, gender, preschool attendance, duration of attendance, and mother's and father's education levels. The results showed that children's mathematics skills differ by age, by whether and for how long they attended a preschool educational institution, and by their mothers' and fathers' education levels, but do not differ by gender or by the type of school attended.

Keywords: preschool education, preschool period children, mathematics education, mathematics concept acquisitions

Procedia PDF Downloads 350
2365 Extent of Fruit and Vegetable Waste at Wholesaler Stage of the Food Supply Chain in Western Australia

Authors: P. Ghosh, S. B. Sharma

Abstract:

The growing problem of food waste is causing unacceptable economic, environmental, and social impacts across the globe. In Australia, food waste is estimated at about AU$8 billion per year; however, information on the extent of wastage at different stages of the food value chain from farm to fork is very limited. This study aims to identify the causes for and extent of food waste at the wholesaler stage of the food value chain in the state of Western Australia. It also explores approaches applied by wholesalers to reduce and utilize food waste. The study was carried out at the Perth city market in Canning Vale, the main wholesale distribution centre for fruits and vegetables in Western Australia. A survey questionnaire was shared with 51 wholesalers, covering 10 targeted questions on the quantity of produce (fruits and vegetables) received and supplied onward, the reasons for waste generation, and the innovations applied or being considered to reduce and utilize food waste. Data were computed using the Statistical Package for the Social Sciences (SPSS version 21). Among the wholesalers, 52% were primary wholesalers (who buy produce directly from growers) and 48% were secondary wholesalers (who buy produce in bulk from major wholesalers and supply the local retail market, caterers, and customers with specific requirements). Average fruit and vegetable waste was 180 kilograms per week per primary wholesaler and 30 kilograms per week per secondary wholesaler. Based on this survey, fruit and vegetable waste at the wholesaler stage was estimated at about 286 tonnes per year. The secondary wholesalers distributed pre-ordered commodities, which minimized the potential for waste. A non-parametric test (Mann-Whitney) was carried out to assess wholesalers' contributions to waste generation. Over 56% of secondary wholesalers generally had nothing to bin as waste.
Pearson's correlation coefficient analysis showed a positive correlation (r = 0.425; p = 0.01) between the quantity of produce received and the waste generated. Low market demand was the predominant reason identified by the wholesalers for waste generation. About a third of the wholesalers suggested that high cosmetic standards for fruits and vegetables - appearance, shape, and size - should be relaxed to reduce waste. Donation of unutilized fruits and vegetables to charity was overwhelmingly (95%) considered one of the best options for utilizing discarded produce. The extent of waste at other stages of the fruit and vegetable supply chain is currently being studied.
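The Pearson correlation used above can be computed from scratch, as the following sketch shows. The figures are illustrative, not the survey data.

```python
# Sketch: Pearson's r between produce received and waste generated,
# computed directly from its definition. Data below are invented.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

received_kg = [500, 800, 1200, 400, 950]   # produce received per week
waste_kg    = [20,  60,  110,  15,  70]    # waste generated per week
print(round(pearson_r(received_kg, waste_kg), 3))
```

A positive r, as the study found (r = 0.425), indicates that wholesalers handling larger volumes also tend to generate more waste.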

Keywords: food waste, fruits and vegetables, supply chain, waste generation

Procedia PDF Downloads 312
2364 Hydrodynamics Study on Planing Hull with and without Step Using Numerical Solution

Authors: Koe Han Beng, Khoo Boo Cheong

Abstract:

Rising interest in stepped hull design has been driven by the demand for more efficient high-speed boats. At the same time, the need for an accurate prediction method for stepped planing hulls is becoming more pressing. Although understanding the flow at high Froude numbers is the key to designing a practical stepped hull, studies of stepped hulls have been conducted mainly in towing tanks, which is time-consuming and costly in the initial design phase. This work studies the feasibility of predicting the hydrodynamics of high-speed planing hulls, both with and without a step, using computational fluid dynamics (CFD) with the volume of fluid (VOF) methodology. First, the flow around a prismatic body is analyzed, and the force generated and its center of pressure are compared with available experimental and empirical data from the literature. The wake behind the transom on the keel line as well as on the quarter-beam buttock line is then compared with the available data; this is important since the afterbody flow of a stepped hull is subjected to the wake of the forebody. Finally, the calm-water performance of a conventional planing hull and its stepped version is predicted and analyzed. An overset mesh methodology is employed in solving the dynamic equilibrium of the hull. The resistance, trim, and heave are then compared with the experimental data. The resistance is found to be predicted well, and the dynamic equilibrium solved by the numerical method is deemed acceptable. This means that computational fluid dynamics will be very useful in further study of the complex flow around stepped hulls and has potential usage in the design phase.

Keywords: planing hulls, stepped hulls, wake shape, numerical simulation, hydrodynamics

Procedia PDF Downloads 282
2363 Application of Bayesian Model Averaging and Geostatistical Output Perturbation to Generate Calibrated Ensemble Weather Forecast

Authors: Muhammad Luthfi, Sutikno Sutikno, Purhadi Purhadi

Abstract:

Weather forecasting needs continual improvement to provide communities with accurate and objective predictions. To this end, numerical weather forecasting has been extensively developed to reduce the subjectivity of forecasts. Yet Numerical Weather Prediction (NWP) outputs are unfortunately issued without taking dynamic weather behavior and local terrain features into account. Thus, NWP outputs cannot accurately forecast weather quantities, particularly for medium- and long-range forecasts. The aim of this research is to aid and extend the development of ensemble forecasting for the Meteorology, Climatology, and Geophysics Agency of Indonesia. The ensemble method is an approach that combines various deterministic forecasts to produce a more reliable one. However, such a forecast is biased and uncalibrated due to its underdispersive or overdispersive nature. As a parametric method, Bayesian Model Averaging (BMA) generates a calibrated ensemble forecast and constructs a predictive PDF for a specified period. The method can utilize an ensemble of any size but does not take spatial correlation into account, whereas spatial dependencies between the site of interest and nearby sites are influenced by dynamic weather behavior. Meanwhile, Geostatistical Output Perturbation (GOP) accounts for spatial correlation to generate future weather quantities, though it is built from only a single deterministic forecast, and it is likewise able to generate an ensemble of any size. This research applies both BMA and GOP to generate calibrated ensemble forecasts of daily temperature at a few meteorological sites near Indonesian international airports.
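The core of BMA is that the calibrated predictive PDF is a weighted mixture of densities, each centred on a (bias-corrected) ensemble member. The sketch below assumes Gaussian component densities with a common spread; the member forecasts, weights, and sigma are invented (in practice the weights and sigma are fit by an EM algorithm on training data).

```python
# Sketch of the BMA predictive density: p(x) = sum_k w_k * N(x | f_k, sigma^2),
# where f_k are member forecasts and w_k their BMA weights. All values invented.
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bma_pdf(x, forecasts, weights, sigma):
    """BMA predictive density: weighted mixture of member-centred normals."""
    return sum(w * normal_pdf(x, f, sigma) for f, w in zip(forecasts, weights))

members = [27.1, 27.8, 28.4]   # ensemble temperature forecasts (deg C)
weights = [0.5, 0.3, 0.2]      # BMA weights (would be estimated by EM)
density = bma_pdf(27.5, members, weights, sigma=0.8)
print(f"predictive density at 27.5 C: {density:.3f}")
```

Because the mixture widens when members disagree and narrows when they agree, the resulting PDF is calibrated rather than underdispersive like the raw ensemble.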

Keywords: Bayesian Model Averaging, ensemble forecast, geostatistical output perturbation, numerical weather prediction, temperature

Procedia PDF Downloads 280
2362 Residual Analysis and Ground Motion Prediction Equation Ranking Metrics for Western Balkan Strong Motion Database

Authors: Manuela Villani, Anila Xhahysa, Christopher Brooks, Marco Pagani

Abstract:

The geological structure of the Western Balkans is strongly affected by the collision between the Adria microplate and the southwestern Eurasian margin, resulting in a considerably active seismic region. The Harmonization of Seismic Hazard Maps in the Western Balkan Countries Project (BSHAP) (2007-2011, 2012-2015), supported by NATO, produced new seismic hazard maps of the Western Balkans, but when inspecting the seismic hazard models later produced by these countries on a national scale, significant differences in design PGA values are observed at the borders, for instance between northern Albania and Montenegro, or southern Albania and Greece. Considering that the catalogues were unified and the seismic sources were defined within the BSHAP framework, the differences evidently arise from the selection of Ground Motion Prediction Equations (GMPEs), which are generally the component with the highest impact on seismic hazard assessment. At the time of the project, only a modest database was available, namely 672 three-component records, whereas this strong-motion database has since grown considerably, to 20,939 records with Mw ranging from 3.7 to 7 and epicentral distances from 0.47 km to 490 km. Statistical analysis of the strong-motion database showed a lack of recordings in the moderate-to-large magnitude and short-distance ranges; therefore, there is a need to re-evaluate the GMPEs in light of the recently updated database and the new generations of ground motion models (GMMs). In some cases, it was observed that some events were more extensively documented in one database than the other, like the 1979 Montenegro earthquake, which has a considerably larger number of records in the BSHAP analogue strong-motion database than in ESM23. Therefore, the strong-motion flat-file provided by the Harmonization of Seismic Hazard Maps in the Western Balkan Countries Project was merged with the ESM23 database for the polygon studied in this project.
After performing the preliminary residual analysis, the candidate GMPEs were identified. This was done using the GMPE performance metrics available within the SMT in the OpenQuake platform: the likelihood model and Euclidean Distance-Based Ranking (EDR). Finally, a GMPE logic tree was selected for this study and, following the selection of candidate GMPEs, model weights were assigned using the average sample log-likelihood approach of Scherbaum.
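The log-likelihood ranking mentioned above can be sketched as follows, assuming the Scherbaum et al. log2-based definition LLH = -(1/N) Σ log2 g(x_i), where g is the GMPE's predictive density evaluated at each observation. The normalized (z) residuals below are invented for illustration.

```python
# Sketch: average sample log-likelihood (LLH) score for GMPE ranking,
# assuming a standard normal model for normalized residuals.
import math

def llh(z_residuals):
    """Average negative log2-likelihood; lower values rank a GMPE better."""
    def std_normal_pdf(z):
        return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    return -sum(math.log2(std_normal_pdf(z)) for z in z_residuals) / len(z_residuals)

well_fit   = [0.1, -0.3, 0.2, -0.1, 0.0]   # GMPE matches the observations
poorly_fit = [2.5, -3.0, 2.8, -2.6, 3.1]   # large systematic misfit
print(llh(well_fit), "<", llh(poorly_fit))  # lower LLH = better-ranked GMPE
```

The same scores can be turned into logic-tree weights by normalizing 2^(-LLH) across the candidate models.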

Keywords: residual analysis, GMPE, Western Balkans, strong motion, OpenQuake

Procedia PDF Downloads 88
2361 Cirrhosis Mortality Prediction as Classification using Frequent Subgraph Mining

Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride

Abstract:

In this work, we use machine learning and novel data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models were applied to predict one-year mortality. A comprehensive feature space including demographic information, comorbidities, clinical procedures, and laboratory tests was analyzed, and a temporal pattern mining technique called Frequent Subgraph Mining (FSM) was used. The Model for End-stage Liver Disease (MELD) prediction of mortality was used as a comparator. All of our models statistically significantly outperform the MELD-score model and show an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, but FSM together with a machine learning technique called ensembling further improves model performance. With the abundance of data available in healthcare through electronic health records (EHRs), existing predictive models can be refined to identify and treat patients at risk of higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model does not yield significant improvements on its own. To the best of our knowledge, this is the first work to apply modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients, and it builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
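The AUC metric used to compare the models above has a direct probabilistic reading: it is the probability that a randomly chosen positive case outscores a randomly chosen negative one. The sketch below computes it from that definition; the labels and risk scores are invented, not the study's data.

```python
# Sketch: AUC computed directly from scores and labels, counting ties as 0.5.

def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# died-within-one-year labels and illustrative model risk scores
labels = [1, 1, 0, 0, 1, 0, 0, 0]
scores = [0.9, 0.5, 0.6, 0.3, 0.8, 0.2, 0.4, 0.1]
print(round(auc(scores, labels), 3))  # → 0.933
```

A 10% AUC improvement over the MELD baseline, as reported, corresponds to substantially better ranking of high-risk patients.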

Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning

Procedia PDF Downloads 134
2360 Predictive Maintenance: Machine Condition Real-Time Monitoring and Failure Prediction

Authors: Yan Zhang

Abstract:

Predictive maintenance is a technique to predict when an in-service machine will fail so that maintenance can be planned in advance. Analytics-driven predictive maintenance is gaining increasing attention in many industries such as manufacturing, utilities, and aerospace, along with the emerging demand for Internet of Things (IoT) applications and the maturity of technologies that support Big Data storage and processing. This study aims to build an end-to-end analytics solution that includes both real-time machine condition monitoring and machine learning based predictive analytics capabilities. The goal is to showcase a general predictive maintenance solution architecture, which suggests how the data generated by field machines can be collected, transmitted, stored, and analyzed. We use a publicly available aircraft engine run-to-failure dataset to illustrate the streaming analytics component and the batch failure prediction component. We outline the contributions of this study from four aspects. First, we compare predictive maintenance problems from the view of the traditional reliability-centered maintenance field and from the view of IoT applications. In the IoT era, predictive maintenance has shifted its focus from ensuring reliable machine operations to improving production and maintenance efficiency via any maintenance-related task. It covers a variety of topics, including but not limited to failure prediction, fault forecasting, failure detection and diagnosis, and recommendation of maintenance actions after failure. Second, we review the state-of-the-art technologies that enable a machine or device to transmit data all the way to the Cloud for storage and advanced analytics. These technologies vary drastically, mainly based on the power source and functionality of the devices.
For example, a consumer machine such as an elevator uses completely different data transmission protocols from the sensor units in an environmental sensor network. The former may transfer data into the Cloud via WiFi directly; the latter usually uses radio communication inherent to the network, and the data is stored in a staging data node before it can be transmitted to the Cloud when necessary. Third, we illustrate how to formulate a machine learning problem to predict machine faults and failures. By showing a step-by-step process of data labeling, feature engineering, model construction, and evaluation, we share the following experiences: (1) the specific data quality issues that have a crucial impact on predictive maintenance use cases; (2) how to train and evaluate a model when the training data contains inter-dependent records. Fourth, we review the tools available to build a data pipeline that digests the data and produces insights, including tools for data ingestion, streaming data processing, and machine learning model training, as well as the tool that coordinates and schedules different jobs. In addition, we show the visualization tool that creates rich data visualizations for both real-time insights and prediction results. To conclude, there are two key takeaways from this study. (1) It summarizes the landscape and challenges of predictive maintenance applications. (2) It takes an example from aerospace with publicly available data to illustrate each component in the proposed data pipeline and showcases how the solution can be deployed as a live demo.
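The data-labeling step for run-to-failure data can be sketched as follows. This is illustrative only: the engine length and warning window are invented, and a real run-to-failure dataset (e.g. the NASA aircraft engine data) has many engines and per-cycle sensor channels.

```python
# Sketch: label each cycle of a run-to-failure trace with its remaining
# useful life (RUL) and a binary "fails within `window` cycles" target.

def label_run_to_failure(n_cycles, window):
    """For an engine that failed at cycle n_cycles, label every cycle."""
    labels = []
    for cycle in range(1, n_cycles + 1):
        rul = n_cycles - cycle                   # cycles left before failure
        labels.append((cycle, rul, 1 if rul < window else 0))
    return labels

for cycle, rul, fails_soon in label_run_to_failure(n_cycles=5, window=2):
    print(cycle, rul, fails_soon)
# only the final cycles fall inside the warning window and get label 1
```

Because all cycles of one engine share a failure event, such records are inter-dependent, which is exactly why the abstract stresses splitting train and test sets by engine rather than by row.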

Keywords: Internet of Things, machine learning, predictive maintenance, streaming data

Procedia PDF Downloads 386
2359 Biochemical Characterization of Meat Goat in Algeria

Authors: Hafid Nadia, Meziane Toufik

Abstract:

The aim of this study was to characterize goat meat in the Batna region by determining its quantity and quality. The first part evaluates production and consumption. The investigations show that goat meat ranks third after mutton and beef and is consumed especially by the indigenous population of the mountain and steppe areas. The second part addresses the nutritional quality of this meat by quantifying its chemical composition, including the fat profile, and establishes a link between animal age and the values of these parameters. Moisture, fat content, and cholesterol levels varied with age. Because of the decreasing cholesterol level in chevon, it is recommended for consumption to prevent or reduce the incidence of coronary disease and heart attack.

Keywords: biochemical composition, cholesterol, goat meat, heart attack

Procedia PDF Downloads 669
2358 Phytoremediation of Zn-Contaminated Soils by Malva Sylvestris

Authors: Abdelouahab Diafat, Meribai Abdelmalek, Ahmed Bahloul

Abstract:

Phytoremediation is the use of plants to remove or degrade organic or inorganic contaminants in soil and water. This work aims to study the potential of Malva sylvestris for the phytoremediation of soils contaminated by Zn. Plants were grown in pots containing soil artificially contaminated with Zn at concentrations of 100, 200, and 300 mg/kg. The results obtained show that the Zn concentrations used have a negative effect on the growth of this plant. Metal analysis, carried out by atomic absorption spectrometry, shows that this plant accumulates only a small quantity of the metal. It can be concluded that Malva sylvestris tolerates Zn-contaminated soils but cannot be considered a zinc hyperaccumulator.

Keywords: phytoremediation, Zn-contaminated soils, Malva sylvestris, phytoextraction

Procedia PDF Downloads 86
2357 A Multi-Dimensional Neural Network Using the Fisher Transform to Predict the Price Evolution for Algorithmic Trading in Financial Markets

Authors: Cristian Pauna

Abstract:

Trading the financial markets is a widespread activity today. A large number of investors, companies, and public or private funds buy and sell every day in order to make a profit. Algorithmic trading has become the prevalent method of making trade decisions since the advent of electronic trading: orders are sent almost instantly by computers using mathematical models. This paper presents a price prediction methodology based on a multi-dimensional neural network. Using the Fisher transform, the neural network is trained in a low-latency, auto-adaptive process to predict the price evolution for the next period of time. The model is designed especially for algorithmic trading and uses real-time price series. It was found that the characteristics of the Fisher function applied at the node scale level can generate reliable trading signals using the neural network methodology. After real-time tests, it was found that this method can be applied in any timeframe to trade the financial markets. The paper also includes the steps to implement the presented methodology in an automated trading system. Real trading results are displayed and analyzed in order to qualify the model. In conclusion, the comparative results reveal that the neural network methodology, applied together with the Fisher transform at the node level, can generate good price predictions and build reliable trading signals for algorithmic trading.
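The Fisher transform itself is a standard mapping, 0.5·ln((1+x)/(1-x)), which takes a value in (-1, 1) to an approximately Gaussian, unbounded scale, sharpening turning points in normalized price series. The sketch below shows only the transform; the normalized series is invented, and the paper's network architecture is not reproduced here.

```python
# Sketch: the Fisher transform applied to prices normalized into (-1, 1).
import math

def fisher(x):
    """Fisher transform: 0.5 * ln((1 + x) / (1 - x)) for x in (-1, 1)."""
    x = max(min(x, 0.999), -0.999)   # clamp to avoid the +/-1 singularities
    return 0.5 * math.log((1 + x) / (1 - x))

# prices normalized over a rolling window, then transformed at a node
normalized = [-0.2, 0.1, 0.5, 0.8, 0.95]
print([round(fisher(v), 3) for v in normalized])
```

Values near the +/-1 extremes get stretched most strongly, which is what makes the transform useful as a node-level preprocessing step for detecting reversals.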

Keywords: algorithmic trading, automated trading systems, financial markets, high-frequency trading, neural network

Procedia PDF Downloads 160
2356 Using Statistical Significance and Prediction to Test Long/Short Term Public Services and Patients' Cohorts: A Case Study in Scotland

Authors: Raptis Sotirios

Abstract:

Health and social care (HSc) services planning and scheduling face unprecedented challenges due to pandemic pressure and also suffer from unplanned spending negatively impacted by the global financial crisis. Data-driven approaches can help improve policies and plan and design service provision schedules, using algorithms that assist healthcare managers in facing unexpected demands with fewer resources. The paper discusses packing services together using statistical significance tests and machine learning (ML) to evaluate the similarity and coupling of demands. This is achieved by predicting the range of a demand (its class) using ML methods such as classification and regression trees (CART), random forests (RF), and logistic regression (LGR). The Chi-squared and Student's t significance tests are applied to data spanning 39 years, for which HSc services data exist for services delivered in Scotland. The demands are probabilistically associated through statistical hypotheses that take the target service's demands being statistically dependent on other demands as the null hypothesis; this linkage can then be confirmed or rejected by the data. Complementarily, ML methods are used to linearly predict the target demands from the statistically found associations and to extend the linear dependence of the target's demand to independent demands, thus forming groups of services. The statistical tests confirm the ML couplings, making the predictions statistically meaningful as well, and prove that a target service can be matched reliably to other services, while ML shows that these indicated relationships can also be linear. Zero padding was used for missing year records and better illustrated such relationships, both for limited-year groups and over the entire span, offering long-term data visualizations; the limited-year groups showed how well patient numbers can be related over short periods or can change over time, as opposed to behavior across more years.
The prediction performance of the associations is measured using the Receiver Operating Characteristic (ROC) AUC and ACC metrics as well as the Chi-squared and Student's t statistical tests. Co-plots and comparison tables for RF, CART, and LGR, as well as p-values and Information Exchange (IE), are provided, showing the specific behavior of the ML methods and of the statistical tests, and their behavior under different learning ratios. The impact of k-NN, cross-correlation, and C-Means first groupings is also studied over limited years and over the entire span. It was found that CART was generally behind RF and LGR, but in some interesting cases LGR reached an AUC of 0, falling below CART, while the ACC was as high as 0.912, showing that ML methods can be confused by padding or by data irregularities or outliers. On average, three linear predictors were sufficient; LGR was found to compete well with RF, and CART followed with the same performance at higher learning ratios. Services were packed only when the significance level (p-value) of their association coefficient was more than 0.05. Social-factor relationships were observed between home care services and the treatment of old people, birth weights, alcoholism, drug abuse, and emergency admissions. The work found that different HSc services can be packed well into plans of limited years, across various service sectors and learning configurations, as confirmed using statistical hypotheses.
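The association step described above can be sketched with a Pearson Chi-squared statistic on a contingency table of two services' demand classes. The 2x2 table below is illustrative, not the Scottish HSc data.

```python
# Sketch: Chi-squared statistic for testing association between two
# services' demand classes (e.g. high/low per year). Counts are invented.

def chi_squared(table):
    """Pearson Chi-squared statistic for a 2D contingency table."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    total = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / total
            stat += (obs - exp) ** 2 / exp
    return stat

# years where home-care demand and emergency admissions were high/low
table = [[18, 4],   # home care high: admissions high / low
         [5, 12]]   # home care low:  admissions high / low
stat = chi_squared(table)
print(round(stat, 2))  # compare with the critical value at 1 d.f. (3.84 at 0.05)
```

A statistic above the critical value rejects independence, which is the evidence the paper uses before grouping two services into the same plan.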

Keywords: class, cohorts, data frames, grouping, prediction, probability, services

Procedia PDF Downloads 234