Search results for: predictive analytics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1278

768 Analysis of Green Wood Preservation Chemicals

Authors: Aitor Barbero-López, Soumaya Chibily, Gerhard Scheepers, Thomas Grahn, Martti Venäläinen, Antti Haapala

Abstract:

Wood decay is addressed continuously within the wood industry through the use and development of wood preservatives. Increasing awareness of the negative effects of many chemicals on the environment is leading to political restrictions on their use and creating a more urgent need for research on green alternatives. This paper discusses some possible natural extracts for wood-preserving applications and compares the analytical methods available for testing their behavior and efficiency against decay fungi. The results indicate that natural extracts have interesting chemical constituents that delay fungal growth but vary in efficiency depending on the chemical concentration and substrate used. The results also suggest that the presence and redistribution of preservatives in wood during exposure trials can be assessed by spectral imaging methods, although standardized methods are not yet available. This study concludes that, in addition to the many standard methods available, there is a need to develop new, faster methods for screening potential preservative formulations while maintaining the comparability and relevance of results.

Keywords: analytics, methods, preservatives, wood decay

Procedia PDF Downloads 224
767 A Study for Area-level Mosquito Abundance Prediction by Using Supervised Machine Learning Point-level Predictor

Authors: Theoktisti Makridou, Konstantinos Tsaprailis, George Arvanitakis, Charalampos Kontoes

Abstract:

In the literature, data-driven approaches to mosquito abundance prediction rely on supervised machine learning models trained with historical in-situ measurements. The drawback of this approach is that once the model is trained on point-level (specific x,y coordinate) measurements, its predictions again refer to the point level. These point-level predictions reduce the applicability of such solutions, since many early warning and mitigation applications need predictions at the area level, such as a municipality or village. In this study, we apply a data-driven predictive model, which relies on open satellite Earth Observation and geospatial data and is trained with historical point-level in-situ measurements of mosquito abundance. We then propose a methodology to extend a point-level predictive model to a broader area-level prediction. Our methodology relies on random spatial sampling of the area of interest (similar to a Poisson hard-core process), obtaining the EO and geomorphological information for each sample, making a point-wise prediction for each sample, and aggregating the predictions to represent the average mosquito abundance of the area. We quantify the performance of the transformation from point-level to area-level predictions and analyze it in order to understand which parameters have a positive or negative impact on it. The goal of this study is to propose a methodology that predicts the mosquito abundance of a given area by relying on point-level prediction and to provide qualitative insights regarding the expected performance of the area-level prediction. We applied our methodology to historical data (of Culex pipiens) from two areas of interest (the Veneto region of Italy and Central Macedonia in Greece). In both cases, the results were consistent. The mean mosquito abundance of a given area can be estimated with accuracy similar to that of the point-level predictor, sometimes even better. The density of the samples used to represent an area has a positive effect on performance, whereas the raw number of sampling points is not informative about performance without the size of the area. Additionally, we saw that the distance between the sampling points and the real in-situ measurements used for training did not strongly affect performance.
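A minimal sketch of the point-to-area aggregation idea described above, assuming a generic point-level regressor and a hard-core rejection-sampling scheme; the model, features and coordinates are illustrative placeholders, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Placeholder point-level model trained on historical in-situ measurements
# (features would normally be EO / geomorphological variables per point).
X_train = rng.random((500, 4))
y_train = rng.poisson(lam=20, size=500).astype(float)
point_model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

def sample_points_in_area(bounds, n_samples, min_dist):
    """Random spatial sampling with a hard-core constraint (no two points closer than min_dist)."""
    xmin, ymin, xmax, ymax = bounds
    points = []
    while len(points) < n_samples:
        p = np.array([rng.uniform(xmin, xmax), rng.uniform(ymin, ymax)])
        if all(np.linalg.norm(p - q) >= min_dist for q in points):
            points.append(p)
    return np.array(points)

def predict_area_abundance(bounds, n_samples=50, min_dist=0.005):
    pts = sample_points_in_area(bounds, n_samples, min_dist)
    # Placeholder for extracting EO/geomorphological features at each sampled point.
    features = rng.random((len(pts), 4))
    point_predictions = point_model.predict(features)
    # Area-level estimate = average of the point-wise predictions.
    return point_predictions.mean()

print(predict_area_abundance(bounds=(22.9, 40.5, 23.0, 40.6)))
```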

Keywords: mosquito abundance, supervised machine learning, culex pipiens, spatial sampling, west nile virus, earth observation data

Procedia PDF Downloads 142
766 Is School Misbehavior a Decision: Implications for School Guidance

Authors: Rachel C. F. Sun

Abstract:

This study examined the predictive effects of moral competence, prosocial norms and positive behavior recognition on school misbehavior among Chinese junior secondary school students. Results of multiple regression analysis showed that students were more likely to misbehave in school when they had lower levels of moral competence and prosocial norms, and when they perceived their positive behavior as less likely to be recognized. Practical implications on how to guide students to make the right choices and behave appropriately in school were discussed. Implications for future research were also discussed.

Keywords: moral competence, positive behavior recognition, prosocial norms, school misbehavior

Procedia PDF Downloads 378
765 Thermal Effect in Power Electrical for HEMTs Devices with InAlN/GaN

Authors: Zakarya Kourdi, Mohammed Khaouani, Benyounes Bouazza, Ahlam Guen-Bouazza, Amine Boursali

Abstract:

In this paper, we have evaluated the thermal effect in high-performance InAlN/GaN heterostructure high electron mobility transistors (HEMTs) with a 30 nm gate length. We also analyze and simulate these devices and show how they can be used in different applications. The Silvaco TCAD simulator was used to predict the DC, AC and RF characteristics. The devices offered a maximum drain current of 0.67 A, a transconductance of 720 mS/mm, a unilateral power gain of 180 dB, a cutoff frequency of 385 GHz, and a maximum frequency of 810 GHz. These results confirm the feasibility of using InAlN/GaN HEMTs in high power amplifiers, as well as in thermally demanding environments.

Keywords: HEMT, Thermal Effect, Silvaco, InAlN/GaN

Procedia PDF Downloads 463
764 Monomial Form Approach to Rectangular Surface Modeling

Authors: Taweechai Nuntawisuttiwong, Natasha Dejdumrong

Abstract:

Geometric modeling plays an important role in the construction and manufacturing of curves, surfaces and solids. Its algorithms are critically important not only in the automobile, ship and aircraft manufacturing business, but also in a wide variety of modern applications, e.g., robotics, optimization, computer vision, data analytics and visualization. The calculation and display of geometric objects can be accomplished by six techniques: polynomial basis, recursive, iterative, coefficient matrix, polar form and pyramidal algorithms. In this research, the coefficient matrix (here simply called the monomial form approach) is used to model polynomial rectangular patches, i.e., Said-Ball, Wang-Ball, DP, Dejdumrong and NB1 surfaces. Examples of the monomial forms for these surface models are illustrated in many aspects, e.g., construction, derivatives, model transformation, degree elevation and degree reduction.
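A small sketch of the coefficient-matrix (monomial) evaluation the abstract refers to: a rectangular patch is evaluated as S(u, v) = U · M · Vᵀ, where U and V are monomial vectors in the parameters. The 3x3 matrix below is an arbitrary illustrative example, not one of the named surface bases.

```python
import numpy as np

def eval_monomial_surface(M, u, v):
    """Evaluate a rectangular polynomial patch S(u, v) = U · M · Vᵀ,
    where M holds the monomial (coefficient-matrix) form of the surface."""
    degree_u, degree_v = M.shape[0] - 1, M.shape[1] - 1
    U = np.array([u**i for i in range(degree_u + 1)])
    V = np.array([v**j for j in range(degree_v + 1)])
    return U @ M @ V

# Illustrative 3x3 coefficient matrix for one coordinate of a biquadratic patch.
M = np.array([[0.0, 0.0, 1.0],
              [0.0, 2.0, 0.0],
              [1.0, 0.0, 0.0]])
print(eval_monomial_surface(M, 0.5, 0.25))
```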

Keywords: monomial forms, rectangular surfaces, CAGD curves, monomial matrix applications

Procedia PDF Downloads 144
763 Expression of uPA, tPA, and PAI-1 in Calcified Aortic Valves

Authors: Abdullah M. Alzahrani

Abstract:

Our physiopathological assumption is that u-PA, t-PA, and PAI-1 are released by calcified aortic valves and play a role in the calcification of these valves. Sixty-five calcified aortic valves were collected from patients suffering from aortic stenosis. Each valve was incubated for 24 hours in culture medium. The supernatants were used to measure u-PA, t-PA, and PAI-1 concentrations; the valve calcification was evaluated using biphotonic absorptiometry. Aortic stenosis valves expressed normal plasminogen activator concentrations and overexpressed PAI-1 (u-PA, t-PA, and PAI-1 mean concentrations were, respectively, 1.69 ng/mL ± 0.80, 2.76 ng/mL ± 1.33, and 53.27 ng/mL ± 36.39). There was no correlation between u-PA and PAI-1 (r = 0.3), but t-PA and PAI-1 were strongly correlated with each other (r = 0.6). Overexpression of PAI-1 was proportional to the calcium content of the AS valves. Our results demonstrate a consistent increase of PAI-1 proportional to the calcification. The overexpression of PAI-1 may be useful as a predictive indicator in patients with aortic stenosis.

Keywords: aortic valve, PAI-1, tPA gene, uPA gene

Procedia PDF Downloads 468
762 Reducing the Risk of Alcohol Relapse after Liver-Transplantation

Authors: Rebeca V. Tholen, Elaine Bundy

Abstract:

Background: Liver transplantation (LT) is considered the only curative treatment for end-stage liver disease (ESLD). The effects of alcoholism can cause irreversible liver damage, cirrhosis and subsequent liver failure. Alcohol relapse after transplant occurs in 20-50% of patients and increases the risk for recurrent cirrhosis, organ rejection, and graft failure. Alcohol relapse after transplant has been identified as a problem among liver transplant recipients at a large urban academic transplant center in the United States. Transplantation will reverse the complications of ESLD, but it does not treat underlying alcoholism or reduce the risk of relapse after transplant. The purpose of this quality improvement project is to implement and evaluate the effectiveness of the High-Risk Alcoholism Relapse (HRAR) Scale to screen and identify patients at high risk for alcohol relapse after receiving an LT. Methods: The HRAR Scale is a predictive tool designed to determine the severity of alcoholism and the risk of relapse after transplant. The scale consists of three variables identified as having the highest predictive power for early relapse: the daily number of drinks, a history of previous inpatient treatment for alcoholism, and the number of years of heavy drinking. All adult liver transplant recipients at a large urban transplant center were screened with the HRAR Scale prior to hospital discharge. A zero-to-two ordinal score is assigned for each variable, and the total score ranges from zero to six. High-risk scores are between three and six. Results: Descriptive statistics revealed that 25 patients were newly transplanted and discharged from the hospital during an 8-week period. 40% of patients (n=10) were identified as high-risk for relapse and 60% as low-risk (n=15). The daily number of drinks was determined by alcohol content (1 drink = 15 g of ethanol) and number of drinks per day. 60% of patients reported drinking 9-17 drinks per day, and 40% reported ≤ 9 drinks. 50% of high-risk patients reported drinking ≥ 25 years, 40% for 11-25 years, and 10% ≤ 11 years. For the number of inpatient treatments for alcoholism, 50% received inpatient treatment one time, 20% more than once, and 30% reported never receiving inpatient treatment. Findings reveal the importance and value of a validated screening tool as a more efficient method than other screening methods alone. Integration of a structured clinical tool will help guide the drinking-history portion of the psychosocial assessment. Targeted interventions can be implemented for all high-risk patients. Conclusions: Our findings validate the effectiveness of utilizing the HRAR Scale to screen and identify patients who are at high risk for alcohol relapse post-LT. Recommendations to help maintain post-transplant sobriety include starting a transplant support group within the organization for all high-risk patients.
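A small sketch of the HRAR-style scoring logic described above: each of the three variables contributes an ordinal 0-2 sub-score and totals of three to six are flagged as high risk. The cut-points follow the bins quoted in the abstract and are illustrative, not the published scale's exact definitions.

```python
def hrar_score(drinks_per_day, years_heavy_drinking, prior_inpatient_treatments):
    """Total HRAR-style score (0-6); each variable contributes an ordinal 0-2 sub-score.
    Cut-points are taken from the bins mentioned in the abstract and are illustrative."""
    if drinks_per_day < 9:
        drinks = 0
    elif drinks_per_day <= 17:
        drinks = 1
    else:
        drinks = 2

    if years_heavy_drinking < 11:
        years = 0
    elif years_heavy_drinking <= 25:
        years = 1
    else:
        years = 2

    if prior_inpatient_treatments == 0:
        treatment = 0
    elif prior_inpatient_treatments == 1:
        treatment = 1
    else:
        treatment = 2

    total = drinks + years + treatment
    return total, ("high risk" if total >= 3 else "low risk")

print(hrar_score(drinks_per_day=12, years_heavy_drinking=20, prior_inpatient_treatments=1))
```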

Keywords: alcoholism, liver transplant, quality improvement, substance abuse

Procedia PDF Downloads 109
761 Road Accidents Bigdata Mining and Visualization Using Support Vector Machines

Authors: Usha Lokala, Srinivas Nowduri, Prabhakar K. Sharma

Abstract:

Useful information has been extracted from road accident data in the United Kingdom (UK), using data analytics methods, for avoiding possible accidents in rural and urban areas. This analysis makes use of several methodologies such as data integration, support vector machines (SVM), correlation machines and multinomial goodness. The entire datasets have been imported from the traffic department of the UK with due permission. The information extracted from these huge datasets forms a basis for several predictions, which in turn help to avoid unnecessary memory lapses. Since data is expected to grow continuously over a period of time, this work primarily proposes a new framework model which can be trained on and adapts itself to new data to make accurate predictions. This work also throws some light on the use of the SVM methodology for text classifiers built from the obtained traffic data. Finally, it emphasizes the uniqueness and adaptability of the SVM methodology as appropriate for this kind of research work.
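A hedged sketch of the SVM classification step the abstract describes, using synthetic placeholder features in place of the UK accident records; the feature names and severity label are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Placeholder features (e.g., speed limit, light conditions, road surface, vehicle count)
# and a severity label; the real UK accident dataset would be loaded here instead.
X = rng.random((1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, 1000) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print("accident-severity classification accuracy:", model.score(X_test, y_test))
```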

Keywords: support vector machines (SVM), machine learning (ML), department for transport (DfT)

Procedia PDF Downloads 267
760 Agreement between Basal Metabolic Rate Measured by Bioelectrical Impedance Analysis and Estimated by Prediction Equations in Obese Groups

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Basal metabolic rate (BMR) is a widely used and accepted measure of energy expenditure. Its principal determinant is body mass. However, this parameter is also correlated with a variety of other factors. The objective of this study is to measure BMR and compare it with the values obtained from predictive equations in adults classified according to their body mass index (BMI) values. 276 adults were included in the scope of this study. Their age, height and weight values were recorded. Five groups were designed based on their BMI values. The first group (n = 85) was composed of individuals with BMI values between 18.5 and 24.9 kg/m2. Those with BMI values from 25.0 to 29.9 kg/m2 constituted Group 2 (n = 90). Individuals with 30.0-34.9 kg/m2, 35.0-39.9 kg/m2 and > 40.0 kg/m2 were included in Groups 3 (n = 53), 4 (n = 28) and 5 (n = 20), respectively. The most commonly used equations were selected for comparison with the measured BMR values. For this purpose, BMR was calculated with four prediction equations, namely those introduced by the Food and Agriculture Organization (FAO)/World Health Organization (WHO)/United Nations University (UNU), Harris and Benedict, Owen, and Mifflin. Descriptive statistics, ANOVA, post hoc Tukey and Pearson's correlation tests were performed with a statistical program designed for Windows (SPSS, version 16.0). p values smaller than 0.05 were accepted as statistically significant. Mean ± SD values of groups 1, 2, 3, 4 and 5 for measured BMR in kcal were 1440.3 ± 210.0, 1618.8 ± 268.6, 1741.1 ± 345.2, 1853.1 ± 351.2 and 2028.0 ± 412.1, respectively. Upon comparison of the means among groups, differences were highly significant between Group 1 and each of the remaining four groups. The values increased from Group 2 to Group 5. However, differences between Groups 2 and 3, Groups 3 and 4, and Groups 4 and 5 were not statistically significant. These non-significant differences were not reproduced by the predictive equations proposed by Harris and Benedict, FAO/WHO/UNU and Owen. For the Mifflin equation, the non-significance was limited to Groups 4 and 5. Upon evaluation of the correlations between measured BMR and the values computed from the prediction equations, the lowest correlations were observed among individuals within the normal BMI range. The highest correlations were detected in individuals with BMI values between 30.0 and 34.9 kg/m2. Correlations between measured BMR values and BMR values calculated by FAO/WHO/UNU as well as Owen were the same and the highest. In all groups, the highest correlations were observed between BMR values calculated from the Mifflin and Harris and Benedict equations, which use age as an additional parameter. In conclusion, the unique resemblance of the FAO/WHO/UNU and Owen equations was pointed out. However, mean values obtained from FAO/WHO/UNU were much closer to the measured BMR values. Besides, the highest correlations were found between BMR calculated from FAO/WHO/UNU and measured BMR. These findings suggest that FAO/WHO/UNU is the most reliable equation, which may be used when measured BMR values are not available.
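A worked example of two of the compared equation families, assuming the commonly published male forms of the Mifflin-St Jeor and revised Harris-Benedict equations; the coefficients come from the general literature, not from this paper, and the individual shown is illustrative.

```python
def bmr_mifflin_st_jeor_male(weight_kg, height_cm, age_years):
    # Mifflin-St Jeor equation (male form), kcal/day.
    return 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_years + 5.0

def bmr_harris_benedict_male(weight_kg, height_cm, age_years):
    # Revised Harris-Benedict equation (male form), kcal/day.
    return 88.362 + 13.397 * weight_kg + 4.799 * height_cm - 5.677 * age_years

weight, height, age = 95.0, 175.0, 45.0   # an illustrative individual in the obese BMI range
print(round(bmr_mifflin_st_jeor_male(weight, height, age)))   # ~1824 kcal
print(round(bmr_harris_benedict_male(weight, height, age)))   # ~1945 kcal
```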

Keywords: adult, basal metabolic rate, fao/who/unu, obesity, prediction equations

Procedia PDF Downloads 128
759 Housing Price Prediction Using Machine Learning Algorithms: The Case of Melbourne City, Australia

Authors: The Danh Phan

Abstract:

House price forecasting is a major topic in real estate market research. Effective house price prediction models could not only allow home buyers and real estate agents to make better data-driven decisions but may also be beneficial for the property policy-making process. This study investigates the housing market by using machine learning techniques to analyze real historical house sale transactions in Australia. It seeks useful models which could be deployed as an application for house buyers and sellers. Data analytics show a high discrepancy between house prices in the most expensive and the most affordable suburbs in the city of Melbourne. In addition, experiments demonstrate that the combination of stepwise selection and Support Vector Machine (SVM), based on the Mean Squared Error (MSE) measurement, consistently outperforms other models in terms of prediction accuracy.
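A hedged sketch of the stepwise-plus-SVM idea: here recursive feature elimination stands in for stepwise selection, and synthetic data stands in for the Melbourne sale transactions; this is not the authors' pipeline, only an illustration of the combination.

```python
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Synthetic stand-in for Melbourne sale transactions (rooms, distance, land size, ...).
X, y = make_regression(n_samples=800, n_features=10, n_informative=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Recursive feature elimination stands in for stepwise selection here.
selector = RFE(estimator=SVR(kernel="linear"), n_features_to_select=5).fit(X_train, y_train)
X_train_sel, X_test_sel = selector.transform(X_train), selector.transform(X_test)

model = SVR(kernel="rbf", C=100.0).fit(X_train_sel, y_train)
print("MSE:", mean_squared_error(y_test, model.predict(X_test_sel)))
```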

Keywords: house price prediction, regression trees, neural network, support vector machine, stepwise

Procedia PDF Downloads 219
758 Predicting Recessions with Bivariate Dynamic Probit Model: The Czech and German Case

Authors: Lukas Reznak, Maria Reznakova

Abstract:

A recession has a profound negative effect on all involved stakeholders. It follows that timely prediction of recessions has been of utmost interest both in theoretical research and in practical macroeconomic modelling. The current mainstream of recession prediction is based on standard OLS models of continuous GDP using macroeconomic data. This approach is not suitable for two reasons: the standard continuous models are proving to be obsolete, and the macroeconomic data are unreliable, often revised many years retroactively. The aim of the paper is to explore a different branch of recession forecasting theory and verify the findings on real data from the Czech Republic and Germany. In the paper, the authors present a family of discrete choice probit models with parameters estimated by the method of maximum likelihood. In the basic form, the probits model a univariate series of recessions and expansions in the economic cycle of a given country. The majority of the paper deals with more complex model structures, namely dynamic and bivariate extensions. The dynamic structure models the autoregressive nature of recessions, taking previous economic activity into consideration to predict the development in subsequent periods. Bivariate extensions utilize information from a foreign economy by incorporating correlation of error terms and thus modelling the dependencies between the two countries. Bivariate models predict a bivariate time series of economic states in both economies and thus enhance the predictive performance. A vital enabler of timely and successful recession forecasting is reliable and readily available data. Leading indicators, namely the yield curve and stock market indices, represent an ideal data base, as this information is available in advance and does not undergo any retroactive revisions. As importantly, the combination of the yield curve and stock market indices reflects a range of macroeconomic and financial-market investors' trends which influence the economic cycle. These theoretical approaches are applied to real data from the Czech Republic and Germany. Two models were identified for each country – one for in-sample and one for out-of-sample predictive purposes. All four followed a bivariate structure, while three contained a dynamic component.
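A hedged sketch of the simplest building block discussed above: a univariate dynamic probit of a recession dummy on leading indicators plus the lagged state. The data are synthetic stand-ins; the bivariate extension with correlated error terms requires joint maximum likelihood estimation and is not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins for the leading indicators and the recession dummy.
df = pd.DataFrame({
    "yield_curve_slope": rng.normal(1.0, 1.0, n),
    "stock_index_return": rng.normal(0.5, 2.0, n),
})
df["recession"] = (0.5 - 0.8 * df["yield_curve_slope"]
                   - 0.3 * df["stock_index_return"]
                   + rng.normal(0, 1, n) > 0).astype(int)

# Dynamic component: last period's state enters as a regressor.
df["recession_lag1"] = df["recession"].shift(1)
df = df.dropna()

X = sm.add_constant(df[["yield_curve_slope", "stock_index_return", "recession_lag1"]])
model = sm.Probit(df["recession"], X).fit(disp=False)
print(model.params)
print("P(recession next period):", np.asarray(model.predict(X.iloc[[-1]]))[0])
```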

Keywords: bivariate probit, leading indicators, recession forecasting, Czech Republic, Germany

Procedia PDF Downloads 242
757 Application of Granular Computing Paradigm in Knowledge Induction

Authors: Iftikhar U. Sikder

Abstract:

This paper illustrates an application of granular computing approach, namely rough set theory in data mining. The paper outlines the formalism of granular computing and elucidates the mathematical underpinning of rough set theory, which has been widely used by the data mining and the machine learning community. A real-world application is illustrated, and the classification performance is compared with other contending machine learning algorithms. The predictive performance of the rough set rule induction model shows comparative success with respect to other contending algorithms.
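A toy illustration of the rough set machinery the abstract refers to: objects are grouped into indiscernibility granules by their attribute values, and a decision concept is approximated from below (certain members) and above (possible members). The decision table here is invented for illustration, not the paper's dataset.

```python
from collections import defaultdict

# Toy decision table: object -> attribute values, plus a binary decision class.
objects = {
    "x1": (("outlook", "sunny"), ("windy", "no")),
    "x2": (("outlook", "sunny"), ("windy", "no")),
    "x3": (("outlook", "rainy"), ("windy", "yes")),
    "x4": (("outlook", "rainy"), ("windy", "yes")),
    "x5": (("outlook", "sunny"), ("windy", "yes")),
}
decision = {"x1": 1, "x2": 1, "x3": 0, "x4": 1, "x5": 0}
target = {obj for obj, d in decision.items() if d == 1}

# Indiscernibility classes (granules): objects sharing the same attribute values.
granules = defaultdict(set)
for obj, attrs in objects.items():
    granules[attrs].add(obj)

lower, upper = set(), set()
for g in granules.values():
    if g <= target:      # granule entirely inside the concept
        lower |= g
    if g & target:       # granule overlaps the concept
        upper |= g

print("lower approximation:", lower)   # objects certainly in the concept
print("upper approximation:", upper)   # objects possibly in the concept
print("boundary region:", upper - lower)
```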

Keywords: concept approximation, granular computing, reducts, rough set theory, rule induction

Procedia PDF Downloads 525
756 Assessment of Bisphenol A and 17 α-Ethinyl Estradiol Bioavailability in Soils Treated with Biosolids

Authors: I. Ahumada, L. Ascar, C. Pedraza, J. Montecino

Abstract:

It has been found that the addition of biosolids to soil is beneficial to soil health, enriching the soil with essential nutrient elements. Although this sludge has properties that allow for the improvement of the physical features and productivity of agricultural and forest soils and the recovery of degraded soils, it also contains trace elements, organic trace compounds and pathogens that can damage the environment. The application of these biosolids to land without full reclamation, and of treated wastewater, can transfer these compounds into terrestrial and aquatic environments, giving rise to potential accumulation in plants. The general aim of this study was to evaluate the bioavailability of bisphenol A (BPA) and 17 α-ethinyl estradiol (EE2) in a soil-biosolid system using wheat (Triticum aestivum) plant assays and a predictive extraction method using a solution of hydroxypropyl-β-cyclodextrin (HPCD), to determine whether the latter is a reliable surrogate for the bioassay. Two soils were obtained from the central region of Chile (Lo Prado and Chicauma). Biosolids were obtained from a regional wastewater treatment plant. The soils were amended with biosolids at 90 Mg ha-1. Soils treated with biosolids and spiked with 10 mg kg-1 of EE2 and with 15 mg kg-1 and 30 mg kg-1 of BPA were also included. The BPA and EE2 concentrations were determined in biosolid, soil and plant samples through ultrasound-assisted extraction, solid phase extraction (SPE) and gas chromatography coupled to mass spectrometry (GC/MS). The bioavailable fraction found in each of the soils cultivated with wheat plants was compared with results obtained through a cyclodextrin biosimulator method. The total concentrations found in the biosolid from a treatment plant were 0.150 ± 0.064 mg kg-1 and 12.8 ± 2.9 mg kg-1 for EE2 and BPA, respectively. BPA and EE2 bioavailability is affected by the organic matter content and the physical and chemical properties of the soil. The bioavailability response of both compounds in the two soils varied with the EE2 and BPA concentration. In the case of EE2, higher concentrations were found in the roots than in the shoots of the wheat plants. The concentration of EE2 increased with increasing biosolid rate. On the other hand, for BPA, a higher concentration was found in the shoots than in the roots of the plants. The predictive capability of the HPCD extraction was assessed using a simple linear correlation test for both compounds in wheat plants. The correlation coefficient between the EE2 obtained from the HPCD extraction and that obtained from the wheat plants was r = 0.99 with p-value ≤ 0.05. On the other hand, in the case of BPA, no correlation was found. Therefore, the methodology was validated with respect to wheat plant bioassays only in the case of EE2. Acknowledgments: The authors thank FONDECYT 1150502.

Keywords: emerging compounds, bioavailability, biosolids, endocrine disruptors

Procedia PDF Downloads 138
755 Risks for Cyanobacteria Harmful Algal Blooms in Georgia Piedmont Waterbodies Due to Land Management and Climate Interactions

Authors: Sam Weber, Deepak Mishra, Susan Wilde, Elizabeth Kramer

Abstract:

The frequency and severity of cyanobacteria harmful algal blooms (CyanoHABs) have been increasing over time, with point and non-point source eutrophication and shifting climate paradigms being blamed as the primary culprits. Excessive nutrients, warm temperatures, quiescent water, and heavy and less regular rainfall create more conducive environments for CyanoHABs. CyanoHABs have the potential to produce a spectrum of toxins that cause gastrointestinal stress, organ failure, and even death in humans and animals. To promote enhanced, proactive CyanoHAB management, risk modeling using geospatial tools can act as predictive mechanisms to supplement current CyanoHAB monitoring, management and mitigation efforts. The risk maps would empower water managers to focus their efforts on high-risk waterbodies in an attempt to prevent CyanoHABs before they occur, and/or more diligently observe those waterbodies. For this research, exploratory spatial data analysis techniques were used to identify the strongest predictors of CyanoHABs based on remote sensing-derived cyanobacteria cell density values for 771 waterbodies in the Georgia Piedmont and landscape characteristics of their watersheds. In-situ datasets for cyanobacteria cell density, nutrients, temperature, and rainfall patterns are not widely available, so free gridded geospatial datasets were used as proxy variables for assessing CyanoHAB risk. For example, the percent of a watershed that is agriculture was used as a proxy for nutrient loading, and the summer precipitation within a watershed was used as a proxy for water quiescence. Cyanobacteria cell density values were calculated using atmospherically corrected images from the European Space Agency's Sentinel-2A satellite and multispectral instrument sensor at a 10-meter ground resolution. Seventeen explanatory variables were calculated for each watershed utilizing the multi-petabyte geospatial catalogs available within the Google Earth Engine cloud computing interface. The seventeen variables were then used in a multiple linear regression model, and the strongest predictors of cyanobacteria cell density were selected for the final regression model. The seventeen explanatory variables included land cover composition, winter and summer temperature and precipitation data, topographic derivatives, vegetation index anomalies, and soil characteristics. Watershed maximum summer temperature, percent agriculture, percent forest, percent impervious, and waterbody area emerged as the strongest predictors of cyanobacteria cell density with an adjusted R-squared value of 0.31 and a p-value ~ 0. The final regression equation was used to make a normalized cyanobacteria cell density index, and a Jenks Natural Break classification was used to assign waterbodies designations of low, medium, or high risk. Of the 771 waterbodies, 24.38% were low risk, 37.35% were medium risk, and 38.26% were high risk. This study showed that there are significant relationships between free geospatial datasets representing summer maximum temperatures, nutrient loading associated with land use and land cover, and the area of a waterbody with cyanobacteria cell density. This data analytics approach to CyanoHAB risk assessment corroborated the literature-established environmental triggers for CyanoHABs, and presents a novel approach for CyanoHAB risk mapping in waterbodies across the greater southeastern United States.
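A hedged sketch of the workflow described above: fit a multiple linear regression of cell density on watershed predictors, normalize the predictions into an index, and bin waterbodies into three risk classes. Synthetic data and equal-quantile breaks are used here in place of the real watershed variables and the Jenks natural breaks classification.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 771  # number of waterbodies

# Synthetic stand-ins for the watershed predictors retained in the final model.
df = pd.DataFrame({
    "max_summer_temp": rng.normal(32, 2, n),
    "pct_agriculture": rng.uniform(0, 60, n),
    "pct_forest": rng.uniform(0, 80, n),
    "pct_impervious": rng.uniform(0, 30, n),
    "waterbody_area": rng.lognormal(2, 1, n),
})
cell_density = (5000 * df["max_summer_temp"] + 2000 * df["pct_agriculture"]
                - 800 * df["pct_forest"] + rng.normal(0, 30000, n))

model = LinearRegression().fit(df, cell_density)
predicted = model.predict(df)

# Normalized risk index, then three classes (quantile breaks stand in for Jenks natural breaks).
index = (predicted - predicted.min()) / (predicted.max() - predicted.min())
risk = pd.cut(index, bins=np.quantile(index, [0, 1/3, 2/3, 1]),
              labels=["low", "medium", "high"], include_lowest=True)
print(risk.value_counts())
```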

Keywords: cyanobacteria, land use/land cover, remote sensing, risk mapping

Procedia PDF Downloads 208
754 Adoption of Big Data by Global Chemical Industries

Authors: Ashiff Khan, A. Seetharaman, Abhijit Dasgupta

Abstract:

The new era of big data (BD) is influencing chemical industries tremendously, providing several opportunities to reshape the way they operate and helping them shift towards intelligent manufacturing. Given the availability of free software and the large amount of real-time data generated and stored in process plants, chemical industries are still in the early stages of big data adoption. The industry is just starting to realize the importance of the large amount of data it owns for making the right decisions and supporting its strategies. This article explores how professional competencies and data science influence BD adoption in chemical industries, to help the industry move towards intelligent manufacturing quickly and reliably. This article utilizes a literature review and identifies potential applications in the chemical industry for moving from conventional methods to a data-driven approach. The scope of this document is limited to the adoption of BD in chemical industries and the variables identified in this article. To achieve this objective, government, academia, and industry must work together to overcome all present and future challenges.

Keywords: chemical engineering, big data analytics, industrial revolution, professional competence, data science

Procedia PDF Downloads 80
753 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging

Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen

Abstract:

Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests where the protein value has been determined by the FOSS Infratec NOVA, which is the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. In the first dataset, protein regression is the problem to solve, while variety classification is the problem to solve in the second dataset. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required for classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted. These results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested. These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
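A minimal PyTorch sketch of the general idea of regressing a quantitative target (e.g., protein) from hyperspectral crops with a CNN, treating the spectral bands as input channels. The architecture, band count and crop size are illustrative assumptions, not the networks compared in the paper.

```python
import torch
import torch.nn as nn

class GrainCNN(nn.Module):
    """Small 2D CNN over the spatial dimensions, with spectral bands as input channels."""
    def __init__(self, n_bands=224):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Linear(128, 1)  # protein content (regression head)

    def forward(self, x):                   # x: (batch, bands, height, width)
        return self.regressor(self.features(x).flatten(1))

model = GrainCNN(n_bands=224)
crops = torch.randn(8, 224, 32, 32)         # a batch of hyperspectral image crops
print(model(crops).shape)                   # torch.Size([8, 1])
```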

Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques

Procedia PDF Downloads 94
752 The Study of Flood Resilient House in Ebo-Town

Authors: Alagie Salieu Nankey

Abstract:

A flood-resistant house is the key mechanism for withstanding flood hazards in Ebo Town. It has emerged as a simple yet powerful way of mitigating flooding in the community of Ebo Town. Even though there are different types of buildings, little is known yet about how and why floods affect buildings so severely. In this paper, we examine three different types of flood-resistant buildings that are suitable for Ebo Town. We gathered content and contextual features from six (6) respondents and used this data set to identify factors that are significantly associated with flood-resistant houses. Moreover, we built a suitable design concept. We found that, amongst all the approaches studied in the literature review, the stilt or elevated house is the most suitable building design in Ebo Town, and the pile foundation is the most appropriate foundation type in the study area. Amongst contextual features, local materials are the most economical materials for the proposed design. This research proposes a framework that explains the theoretical relationships between flood hazard zones and flood-resistant houses in Ebo Town. Moreover, this research informs the design of sense-making and analytics tools for the resistant house.

Keywords: flood-resistant, stilt, flood hazard zone, pile foundation

Procedia PDF Downloads 35
751 Fair Value Accounting and Evolution of the Ohlson Model

Authors: Mohamed Zaher Bouaziz

Abstract:

Our study examines the Ohlson Model, which links a company's market value to its equity and net earnings, in the context of the evolution of the Canadian accounting model, characterized by more extensive use of fair value and a broader measure of performance after IFRS adoption. Our hypothesis is that if equity is reported at its fair value, this valuation is closely linked to market capitalization, so the weight of earnings weakens or even disappears in the Ohlson Model. Drawing on Canada's adoption of the International Financial Reporting Standards (IFRS), our results support our hypothesis that equity appears to include most of the relevant information for investors, while earnings have become less important. However, the predictive power of earnings does not disappear.
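A hedged sketch of the basic Ohlson-style value-relevance regression the study builds on, relating market price to book value of equity and earnings; the per-share data are synthetic and the specification is the textbook form, not the paper's exact model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300

book_value = rng.uniform(5, 50, n)                  # book value of equity per share
earnings = 0.1 * book_value + rng.normal(0, 1, n)   # earnings per share
price = 2.0 + 1.1 * book_value + 3.0 * earnings + rng.normal(0, 5, n)

# Ohlson-style regression: P = a0 + a1*BV + a2*E + e
X = sm.add_constant(np.column_stack([book_value, earnings]))
results = sm.OLS(price, X).fit()
print(results.params)       # weights on book value and earnings
print(results.rsquared)
```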

Keywords: fair value accounting, Ohlson model, IFRS adoption, value-relevance of equity and earnings

Procedia PDF Downloads 183
750 Leveraging Digital Transformation Initiatives and Artificial Intelligence to Optimize Readiness and Simulate Mission Performance across the Fleet

Authors: Justin Woulfe

Abstract:

Siloed logistics and supply chain management systems throughout the Department of Defense (DOD) have led to disparate approaches to modeling and simulation (M&S), a lack of understanding of how one system impacts the whole, and issues with “optimal” solutions that are good for one organization but have dramatic negative impacts on another. Many different systems have evolved to try to understand and account for uncertainty and try to reduce the consequences of the unknown. As the DoD undertakes expansive digital transformation initiatives, there is an opportunity to fuse and leverage traditionally disparate data into a centrally hosted source of truth. With a streamlined process incorporating machine learning (ML) and artificial intelligence (AI), advanced M&S will enable informed decisions guiding program success via optimized operational readiness and improved mission success. One of the current challenges is to leverage the terabytes of data generated by monitored systems to provide actionable information for all levels of users. The implementation of a cloud-based application analyzing data transactions, learning and predicting future states from current and past states in real-time, and communicating those anticipated states is an appropriate solution for the purposes of reduced latency and improved confidence in decisions. Decisions made from an ML and AI application combined with advanced optimization algorithms will improve the mission success and performance of systems, which will improve the overall cost and effectiveness of any program. The Systecon team constructs and employs model-based simulations, cutting across traditional silos of data, aggregating maintenance and supply data, incorporating sensor information, and applying optimization and simulation methods to an as-maintained digital twin with the ability to aggregate results across a system’s lifecycle and across logical and operational groupings of systems. This coupling of data throughout the enterprise enables tactical, operational, and strategic decision support, detachable and deployable logistics services, and configuration-based automated distribution of digital technical and product data to enhance supply and logistics operations. As a complete solution, this approach significantly reduces program risk by allowing flexible configuration of data, data relationships, business process workflows, and early test and evaluation, especially budget trade-off analyses. A true capability to tie resources (dollars) to weapon system readiness in alignment with the real-world scenarios a warfighter may experience has been an objective yet to be realized to date. By developing and solidifying an organic capability to directly relate dollars to readiness and to inform the digital twin, the decision-maker is now empowered through valuable insight and traceability. This type of educated decision-making provides an advantage over the adversaries who struggle with maintaining system readiness at an affordable cost. The M&S capability developed allows program managers to independently evaluate system design and support decisions by quantifying their impact on operational availability and operations and support cost, resulting in the ability to simultaneously optimize readiness and cost. This will allow the stakeholders to make data-driven decisions when trading cost and readiness throughout the life of the program.
Finally, sponsors are available to validate product deliverables with efficiency and much higher accuracy than in previous years.

Keywords: artificial intelligence, digital transformation, machine learning, predictive analytics

Procedia PDF Downloads 155
749 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

Over recent years, with the rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of different industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is due to the fact that Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model is developed at Dun and Bradstreet that is focused on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns scorecard approaches cannot. First, through development of Machine Learning models, engineered features, latent variables and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards with sparse cases, which cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results of the analysis show that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. Also, it is observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without being concerned with difficulties in explaining the models for regulatory purposes.
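A hedged sketch contrasting the conventional per-bin Weight of Evidence with a WoE-like estimate derived from an ML model's average predicted bad-probability per bin. The data, binning and the probability-to-WoE conversion are illustrative stand-ins; the actual score-distribution matching step of the Hybrid Model is not specified in the abstract.

```python
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=5000, n_features=8, weights=[0.9], random_state=0)
df = pd.DataFrame(X, columns=[f"f{i}" for i in range(8)])
df["bad"] = y

# Bin one explanatory variable and compute the conventional WoE per bin.
df["bin"] = pd.qcut(df["f0"], q=5, duplicates="drop")
grouped = df.groupby("bin", observed=True)["bad"]
dist_bad = grouped.sum() / df["bad"].sum()
dist_good = (grouped.count() - grouped.sum()) / (df["bad"] == 0).sum()
woe_traditional = np.log(dist_good / dist_bad)

# ML-based alternative: use the model's average predicted bad-probability per bin
# to estimate a WoE-like value (a stand-in for the score-distribution matching step).
ml = GradientBoostingClassifier(random_state=0).fit(X, y)
df["p_bad"] = ml.predict_proba(X)[:, 1]
overall_log_odds = np.log((df["bad"] == 0).mean() / df["bad"].mean())
p_bin = df.groupby("bin", observed=True)["p_bad"].mean()
woe_ml = np.log((1 - p_bin) / p_bin) - overall_log_odds

print(pd.DataFrame({"woe_traditional": woe_traditional, "woe_ml_estimate": woe_ml}))
```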

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 130
748 A Research on Tourism Market Forecast and Its Evaluation

Authors: Min Wei

Abstract:

Traditional methods for forecasting the tourism market pay more attention to the accuracy of the forecasts, ignoring the feasibility and operability of the forecasting process, which has made it difficult to test the results scientifically. With the application of a Linear Regression Model, this paper attempts to construct a scientific evaluation system for predictive values, both to ensure the accuracy and stability of the predicted values and to ensure the feasibility and operability of the forecasting results. The findings show that a scientific evaluation system can implement the scientific concept of development and coordinate the harmonious development of man and nature.

Keywords: linear regression model, tourism market, forecast, tourism economics

Procedia PDF Downloads 328
747 Big Data-Driven Smart Policing: Big Data-Based Patrol Car Dispatching in Abu Dhabi, UAE

Authors: Oualid Walid Ben Ali

Abstract:

Big Data has become one of the buzzwords today. The recent explosion of digital data has led organizations, either private or public, into a new era of more efficient decision making. At some point, businesses decided to use this concept in order to learn what makes their clients tick, with phrases like ‘sales funnel’ analysis, ‘actionable insights’, and ‘positive business impact’. So, it stands to reason that Big Data was viewed through green (read: money) colored lenses. Somewhere along the line, however, someone realized that collecting and processing data doesn’t have to be for business purposes only, but could also be used for other purposes, such as assisting law enforcement, improving policing, or enhancing road safety. This paper presents briefly how Big Data has been used in the field of policing in order to improve the decision-making process in the daily operation of the police. As an example, we present a big-data-driven system which is used to accurately dispatch patrol cars in a geographic environment. The system is also used to allocate, in real time, the nearest patrol car to the location of an incident. This system has been implemented and applied in the Emirate of Abu Dhabi in the UAE.
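A minimal sketch of nearest-unit dispatching of the kind described above, assuming straight-line (haversine) distance between illustrative coordinates; a production system would instead use live GPS feeds and road-network travel times.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Current positions of available patrol cars (illustrative coordinates around Abu Dhabi).
patrol_cars = {
    "car_01": (24.4539, 54.3773),
    "car_02": (24.4667, 54.3667),
    "car_03": (24.4200, 54.4400),
}

def dispatch_nearest(incident_lat, incident_lon):
    return min(patrol_cars, key=lambda car: haversine_km(*patrol_cars[car], incident_lat, incident_lon))

print(dispatch_nearest(24.4300, 54.4500))  # nearest available unit to the incident
```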

Keywords: big data, big data analytics, patrol car allocation, dispatching, GIS, intelligent, Abu Dhabi, police, UAE

Procedia PDF Downloads 487
746 Real Estate Trend Prediction with Artificial Intelligence Techniques

Authors: Sophia Liang Zhou

Abstract:

For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about macroeconomic determinants of housing prices and largely focused on one or two areas using point prediction. This study aims to develop data-driven models to accurately predict future housing market trends in different markets. This work studied five different metropolitan areas representing different market trends and compared three time-lag situations: no lag, 6-month lag, and 12-month lag. Linear regression (LR), random forest (RF), and artificial neural network (ANN) models were employed to model the real estate price using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, personal income, etc., in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, FBI, and Freddie Mac. In the original data, some factors are monthly, some quarterly, and some yearly. Thus, two methods to compensate for missing values, backfill and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF's inherent limitations. Both ANN and LR methods generated predictive models with high accuracy (> 95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It was also shown that the technique used to compensate for missing values in the dataset and the implementation of the time lag can have a significant influence on model performance and require further investigation. The best performing models varied for each area, but the backfilled 12-month lag LR models and the interpolated no lag ANN models showed the best stable performance overall, with accuracies > 95% for each city. This study reveals the influence of input variables in different markets. It also provides evidence to support future studies to identify the optimal time lag and data imputing methods for establishing accurate predictive models.
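A hedged sketch of the two preprocessing choices compared above, backfill versus interpolation for mixed-frequency factors and a 12-month time lag, applied to synthetic monthly series before fitting a linear regression; the variables and values are placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
dates = pd.date_range("2005-03-01", "2018-12-01", freq="MS")

# Quarterly GDP reported only every third month; income and the home price index are monthly.
gdp = pd.Series(np.where(np.arange(len(dates)) % 3 == 0,
                         rng.normal(100, 5, len(dates)), np.nan), index=dates)
income = pd.Series(rng.normal(50, 2, len(dates)), index=dates)
hpi = pd.Series(np.linspace(150, 220, len(dates)) + rng.normal(0, 3, len(dates)), index=dates)

features = pd.DataFrame({
    "gdp_backfill": gdp.bfill(),          # one imputation option
    "gdp_interp": gdp.interpolate(),      # the other option compared in the study
    "income": income,
})

# 12-month lag: predict the home price index from features observed a year earlier.
X = features.shift(12).dropna()
y = hpi.loc[X.index]
model = LinearRegression().fit(X, y)
print("R^2 with 12-month lag:", model.score(X, y))
```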

Keywords: linear regression, random forest, artificial neural network, real estate price prediction

Procedia PDF Downloads 99
745 Intrusion Detection Based on Graph Oriented Big Data Analytics

Authors: Ahlem Abid, Farah Jemili

Abstract:

Intrusion detection has been the subject of numerous studies in industry and academia, but cyber security analysts always want greater precision and global threat analysis to secure their systems in cyberspace. To improve intrusion detection systems, the visualisation of security events in the form of graphs and diagrams is important for improving the accuracy of alerts. In this paper, we propose an approach to an IDS based on cloud computing, big data techniques and a machine learning graph algorithm which can detect different attacks in real time, as early as possible. We use the MAWILab intrusion detection dataset. We choose Microsoft Azure as a unified cloud environment on which to load our dataset. We implement the K2 algorithm, which is a graphical machine learning algorithm, to classify attacks. Our system showed good performance due to the graphical machine learning algorithm and the Spark Structured Streaming engine.

Keywords: Apache Spark Streaming, Graph, Intrusion detection, k2 algorithm, Machine Learning, MAWILab, Microsoft Azure Cloud

Procedia PDF Downloads 141
744 The Duty of Application and Connection Providers Regarding the Supply of Internet Protocol by Court Order in Brazil to Determine Authorship of Acts Practiced on the Internet

Authors: João Pedro Albino, Ana Cláudia Pires Ferreira de Lima

Abstract:

Humanity has undergone a transformation from the physical to the virtual world, generating an enormous amount of data on the world wide web, known as big data. Many facts that occur in the physical world or in the digital world are proven through records made on the internet, such as digital photographs, posts on social media, contract acceptances by digital platforms, email, banking, and messaging applications, among others. These data recorded on the internet have been used as evidence in judicial proceedings. The identification of internet users is essential for the security of legal relationships. This research was carried out on scientific articles and materials from courses and lectures, with an analysis of Brazilian legislation and some judicial decisions on the request of static data from logs and Internet Protocols (IPs) from application and connection providers. In this article, we will address the determination of authorship of data processing on the internet by obtaining the IP address and the appropriate judicial procedure for this purpose under Brazilian law.

Keywords: IP address, digital forensics, big data, data analytics, information and communication technology

Procedia PDF Downloads 119
743 Attention Problems among Adolescents: Examining Educational Environments

Authors: Zhidong Zhang, Zhi-Chao Zhang, Georgianna Duarte

Abstract:

This study investigated attention problems using the Achenbach System of Empirically Based Assessment (ASEBA) instrument. Two thousand eight hundred and ninety-four adolescents were surveyed using a stratified sampling method. We examined the relationships between relevant background variables and attention problems. Multiple regression models were applied to analyze the data. Relevant variables such as sports activities, hobbies, age, grade and the number of close friends were included in this study as predictive variables. The analysis results indicated that educational environments and extracurricular activities are important factors which influence students’ attention problems.

Keywords: adolescents, ASEBA, attention problems, educational environments, stratified sampling

Procedia PDF Downloads 274
742 Investigation into the Role of Leadership in the Management of Digital Transformation for Small and Medium Enterprises

Authors: Francesco Coraci, Abdul-Hadi G. Abulrub

Abstract:

Digital technology is transforming the landscape of the industrial sector at an unprecedented level by connecting people, processes, and machines in real time. It represents the means for a new pathway to achieve innovative, dynamic competitive advantages, deliver unique customer value, and sustain critical relationships. Thus, success in a constantly changing environment is governed by the ability of an organization to revolutionize its business models, deliver innovative solutions, and capture value from big data analytics and insights. Businesses need to re-strategize operations and develop extra capabilities to cope with the necessity for additional flexibility and agility. The traditional “command and control” leadership style is structurally and operationally incompatible with the digital era. In this paper, the authors discuss how transformational leaders can act as a glue in the social, organizational context, which is crucial to enable the workforce and develop a psychological attachment to the digital vision.

Keywords: internet of things, strategy, change leadership, dynamic competitive advantage, digital transformation

Procedia PDF Downloads 119
741 Point-of-Interest Recommender Systems for Location-Based Social Network Services

Authors: Hoyeon Park, Yunhwan Keon, Kyoung-Jae Kim

Abstract:

Location-Based Social Network services (LBSNs) is a new term that combines location-based services and social network services (SNS). Unlike traditional SNS, LBSNs emphasize empirical elements of the user's actual physical location. Point-of-Interest (POI) information is the most important factor in implementing an LBSN recommendation system; a POI is the most popular spot in an area. In this study, we would like to recommend POIs to users in a specific area through a recommendation system using collaborative filtering. The process is as follows: first, we use different data sets based on Seoul and New York to find interesting results on human behavior. Secondly, based on the location-based activity information obtained from the personalized LBSNs, we devise a new rating that defines the user's preference for the area. Finally, we develop an automated rating algorithm from massive raw data using distributed systems to reduce the advertising costs of LBSNs.
</gr_31>
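A hedged sketch of user-based collaborative filtering over a user-POI matrix with cosine similarity, in the spirit of the recommendation step described above; the POIs and the visit-derived preference scores are invented for illustration, not taken from the Seoul or New York data.

```python
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

# User x POI matrix of derived preference scores (0 = no recorded activity).
ratings = pd.DataFrame(
    [[5, 3, 0, 1],
     [4, 0, 0, 1],
     [1, 1, 0, 5],
     [0, 0, 5, 4],
     [0, 1, 5, 4]],
    index=["u1", "u2", "u3", "u4", "u5"],
    columns=["cafe", "museum", "park", "market"],
)

# User-based collaborative filtering: similarity-weighted average of other users' scores.
sim = pd.DataFrame(cosine_similarity(ratings), index=ratings.index, columns=ratings.index)

def recommend(user, top_n=2):
    weights = sim[user].drop(user)
    scores = ratings.drop(index=user).T @ weights / weights.sum()
    unseen = scores[ratings.loc[user] == 0]          # only recommend unvisited POIs
    return unseen.sort_values(ascending=False).head(top_n)

print(recommend("u1"))
```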

Keywords: location-based social network services, point-of-interest, recommender systems, business analytics

Procedia PDF Downloads 226
740 Advancing Dialysis Care Access and Health Information Management: A Blueprint for Nairobi Hospital

Authors: Kimberly Winnie Achieng Otieno

Abstract:

The Nairobi Hospital plays a pivotal role in healthcare provision in East and Central Africa, yet it faces challenges in providing accessible dialysis care. This paper explores strategic interventions to enhance dialysis care, improve access and streamline health information management, with the aim of fostering an integrated and patient-centered healthcare system in our region. Challenges at The Nairobi Hospital: The Nairobi Hospital currently grapples with insufficient dialysis machines, which results in extended turnaround times. This issue stems from both staffing bottlenecks and infrastructural limitations, given our growing demand for renal care services. Our paper-based record-keeping system and the fragmented downstream flow of information hinder the hospital's ability to manage health data effectively. There is also a need for investment in expanding The Nairobi Hospital's dialysis facilities to communities far from the main campus. Setting up satellite clinics that are closer to people who live in areas far from the main hospital will ensure better access for underserved areas. Community Outreach and Education: Implementing education programs on kidney health within local communities is vital for early detection and prevention. Collaborating with local leaders and organizations can establish a proactive approach to renal health, hence reducing the demand for acute dialysis interventions. We can amplify this effort by expanding The Nairobi Hospital's corporate social responsibility outreach program with weekend engagement activities such as walks, awareness classes and fund drives. Enhancing Efficiency in Dialysis Care: Demand for dialysis services continues to rise due to an aging Kenyan population and the increasing prevalence of chronic kidney disease (CKD). Present at this year's International Nursing Conference is a diverse group of caregivers from around the world who can share with us their process optimization strategies, patient engagement techniques and resource utilization efficiencies to catapult The Nairobi Hospital into the 21st century and beyond. Plans are underway to offer ongoing education opportunities to keep staff updated on best practices and emerging technologies, in addition to utilizing patient feedback mechanisms to identify areas for improvement and enhance satisfaction. Staff empowerment and suggestion boxes address The Nairobi Hospital's organizational challenges. Current financial constraints may limit a leapfrog in technology integration, such as the acquisition of new dialysis machines and an investment in predictive analytics to forecast patient needs and optimize resource allocation. Streamlining Health Information Management: Fully embracing a shift to 100% Electronic Health Records (EHRs) is a transformative step toward efficient health information management. Shared information promotes a holistic understanding of patients' medical history, minimizing redundancies and enhancing overall care quality. To manage the transition to community-based care and EHRs effectively, a phased implementation approach is recommended. Conclusion: By strategically enhancing dialysis care access and streamlining health information management, The Nairobi Hospital can strengthen its position as a leading healthcare institution in both East and Central Africa. This comprehensive approach aligns with the hospital's commitment to providing high-quality, accessible, and patient-centered care in an evolving landscape of healthcare delivery.

Keywords: Africa, urology, dialysis, healthcare

Procedia PDF Downloads 52
739 Investigating the Effect of Artificial Intelligence on the Improvement of Green Supply Chain in Industry

Authors: Sepinoud Hamedi

Abstract:

Over the past few decades, companies have developed growing concerns about the environmental impact of their manufacturing activities. Green supply chain management has been considered by manufacturers as an attainable option to decrease the environmental impact of operations while at the same time improving their operational performance. At the same time, the advent of digitalization and globalization in the supply chain space has led to a growing acknowledgment of the importance of data processing methodologies, such as big data analytics and artificial intelligence technologies, in improving and optimizing supply chain performance. Furthermore, supply chain collaboration partly mediates the relationship between artificial intelligence and supply chain performance. Studies show that the use of BDA-AI technologies has a significant impact on environmental process integration and green supply chain collaboration, and they also underline that both environmental process integration and green supply chain collaboration have a critical effect on environmental performance. Correspondingly, a smart supply chain contributes to green performance through managing green relationships and setting up green operations.

Keywords: green supply chain, artificial intelligence, manufacturers, technology, environmental

Procedia PDF Downloads 65