Search results for: predictive tracking
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1847

1157 Numerical Simulation of Flow and Particle Motion in Liquid – Solid Hydrocyclone

Authors: Seyed Roozbeh Pishva, Alireza Aboudi Asl

Abstract:

In this investigation, a hydrocyclone of the kind used to separate particles from fluid in the oil and gas, mining, and other industries is simulated. The case study is a cone-cylindrical, solid-liquid hydrocyclone. The fluid is water and the solid is a type of silica with particle diameters of 53, 75, 106, 150, 212, 250, and 300 microns. A CFD method is used to analyze the flow and the motion of particles in the hydrocyclone. The flow is modeled as three-dimensional and turbulent, and the Reynolds stress model (RSM) is used for turbulence closure. Particles are treated as three-dimensional, spherical, and non-rotating, and a Lagrangian model is used to track them. In addition to analyzing the flow field, the study obtains the hydrocyclone efficiency at 5, 7, 12, and 15 percent concentrations and compares it with experimental results, with which it shows suitable agreement.
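The Lagrangian tracking idea can be illustrated with a minimal one-dimensional sketch: a spherical silica particle settling in water under Stokes drag, integrated explicitly. This is not the paper's three-dimensional RSM/CFD model; densities, viscosity, and the drag law are illustrative assumptions.

```python
import math

# 1-D settling of a spherical particle under Stokes drag -- a minimal
# sketch of Lagrangian particle tracking, not the paper's 3-D RSM/CFD model.
RHO_P = 2650.0     # particle (silica) density, kg/m^3 (assumed)
RHO_F = 1000.0     # water density, kg/m^3
MU = 1.0e-3        # water dynamic viscosity, Pa.s
G = 9.81           # gravity, m/s^2

def stokes_terminal_velocity(d):
    """Analytical Stokes terminal velocity for diameter d (m)."""
    return (RHO_P - RHO_F) * G * d**2 / (18.0 * MU)

def settle(d, dt=1e-5, steps=20000):
    """Integrate dv/dt = g*(1 - rho_f/rho_p) - v/tau by explicit Euler."""
    v = 0.0
    tau = RHO_P * d**2 / (18.0 * MU)   # particle response time
    for _ in range(steps):
        a = G * (1.0 - RHO_F / RHO_P) - v / tau
        v += a * dt
    return v

for d_um in (53, 150, 300):            # diameters from the study, microns
    d = d_um * 1e-6
    print(d_um, settle(d), stokes_terminal_velocity(d))
```

For the larger diameters the particle Reynolds number exceeds the Stokes regime, so a real simulation would switch to a nonlinear drag correlation; the sketch only shows the integration pattern.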

Keywords: hydrocyclone, RSM Model, CFD, copper industry

Procedia PDF Downloads 565
1156 A Study for Area-level Mosquito Abundance Prediction by Using Supervised Machine Learning Point-level Predictor

Authors: Theoktisti Makridou, Konstantinos Tsaprailis, George Arvanitakis, Charalampos Kontoes

Abstract:

In the literature, data-driven approaches to mosquito abundance prediction rely on supervised machine learning models trained with historical in-situ measurements. The drawback of this approach is that once the model is trained on point-level (specific x, y coordinates) measurements, its predictions again refer to the point level. These point-level predictions reduce the applicability of such solutions, since many early-warning and mitigation applications need predictions at an area level, such as a municipality or village. In this study, we apply a data-driven predictive model that relies on public, open satellite Earth observation and geospatial data and is trained with historical point-level in-situ measurements of mosquito abundance. We then propose a methodology to extend a point-level predictive model to a broader area-level prediction. Our methodology relies on randomly sampling the area of interest spatially (similar to a Poisson hard-core process), obtaining the Earth observation and geomorphological information for each sample, making a point-wise prediction for each sample, and aggregating the predictions to represent the average mosquito abundance of the area. We quantify the performance of the transformation from point-level to area-level predictions and analyze it in order to understand which parameters have a positive or negative impact on it. The goal of this study is to propose a methodology that predicts the mosquito abundance of a given area by relying on point-level prediction and to provide qualitative insights regarding the expected performance of the area-level prediction. We applied our methodology to historical data (of Culex pipiens) for two areas of interest (the Veneto region of Italy and Central Macedonia in Greece). In both cases, the results were consistent.
The mean mosquito abundance of a given area can be estimated with accuracy similar to that of the point-level predictor, sometimes even better. The density of the samples used to represent an area has a positive effect on performance; in contrast, the raw number of sampling points is not informative about performance without the size of the area. Additionally, we found that the distance between the sampling points and the real in-situ measurements used for training did not strongly affect performance.
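The sample-predict-aggregate step described above can be sketched as follows. The point-level predictor here is a hypothetical stand-in function (the study's actual model is trained on Earth-observation features), and the rejection-sampling loop is only a crude approximation of a Poisson hard-core process.

```python
import random
import math

random.seed(0)

# Hypothetical point-level predictor: a stand-in for the trained ML model,
# faked here as a smooth function of the coordinates.
def point_prediction(x, y):
    return 50.0 + 20.0 * math.sin(x) * math.cos(y)

def area_prediction(x_min, x_max, y_min, y_max, n_samples=500, min_dist=0.0):
    """Randomly sample the area (min_dist > 0 crudely mimics a Poisson
    hard-core process), predict point-wise, and average the predictions."""
    samples = []
    while len(samples) < n_samples:
        p = (random.uniform(x_min, x_max), random.uniform(y_min, y_max))
        if all(math.dist(p, q) >= min_dist for q in samples):
            samples.append(p)
    return sum(point_prediction(x, y) for x, y in samples) / len(samples)

print(area_prediction(0.0, math.pi, 0.0, math.pi))
```

With a large enough sample density the average converges to the true mean of the predictor over the area, which is the effect the abstract reports.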

Keywords: mosquito abundance, supervised machine learning, culex pipiens, spatial sampling, west nile virus, earth observation data

Procedia PDF Downloads 142
1155 Is School Misbehavior a Decision: Implications for School Guidance

Authors: Rachel C. F. Sun

Abstract:

This study examined the predictive effects of moral competence, prosocial norms, and positive behavior recognition on school misbehavior among Chinese junior secondary school students. Results of multiple regression analysis showed that students were more likely to misbehave in school when they had lower levels of moral competence and prosocial norms, and when they perceived their positive behavior as less likely to be recognized. Practical implications are discussed regarding how to guide students to make the right choices and behave appropriately in school, along with implications for future research.

Keywords: moral competence, positive behavior recognition, prosocial norms, school misbehavior

Procedia PDF Downloads 378
1154 Modelling and Control of Electrohydraulic System Using Fuzzy Logic Algorithm

Authors: Hajara Abdulkarim Aliyu, Abdulbasid Ismail Isa

Abstract:

This research paper studies an electrohydraulic system for its role in position and motion control and develops a mathematical model describing the behaviour of the system. It further proposes fuzzy logic and conventional PID controllers in order to achieve both accurate positioning of the payload and overall improvement of system performance. The simulation results show that the fuzzy logic controller has superior tracking performance and higher disturbance-rejection efficiency, with a shorter settling time, less overshoot, and smaller integral absolute and deviation errors than the conventional PID controller under all testing conditions.
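The conventional-PID baseline mentioned above can be sketched in a few lines: a discrete PID law driving a first-order plant to a unit step setpoint. The plant model, gains, and time constant are illustrative assumptions, not the paper's electrohydraulic model or its fuzzy controller.

```python
# Minimal discrete PID controller on a first-order plant -- a sketch of
# the conventional-PID baseline, with assumed (not the paper's) parameters.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def simulate(steps=2000, dt=0.001, tau=0.05):
    pid = PID(kp=5.0, ki=20.0, kd=0.01, dt=dt)
    y = 0.0                       # plant output (e.g. actuator position)
    for _ in range(steps):
        u = pid.update(1.0, y)    # unit step setpoint
        y += dt * (u - y) / tau   # first-order plant: tau*dy/dt = u - y
    return y

print(simulate())                 # output settles near the setpoint 1.0
```

A fuzzy controller would replace the `update` law with rule-based inference on the error and its derivative; the simulation loop stays the same, which is how such comparisons are typically run.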

Keywords: electrohydraulic, fuzzy logic, modelling, NZ-PID

Procedia PDF Downloads 462
1153 Thermal Effect in Power Electrical for HEMTs Devices with InAlN/GaN

Authors: Zakarya Kourdi, Mohammed Khaouani, Benyounes Bouazza, Ahlam Guen-Bouazza, Amine Boursali

Abstract:

In this paper, we evaluate the thermal effect in high-performance InAlN/GaN high electron mobility transistors (HEMTs) with a 30 nm gate length. We also analyze and simulate these devices and show how they can be used in different applications. The Silvaco TCAD simulator was used to predict the DC, AC, and RF characteristics. The devices offer a maximum drain current of 0.67 A, a transconductance of 720 mS/mm, and a unilateral power gain of 180 dB, with a cutoff frequency of 385 GHz and a maximum oscillation frequency of 810 GHz. These results confirm the feasibility of using InAlN/GaN HEMTs in high-power amplifiers as well as in thermally demanding environments.

Keywords: HEMT, Thermal Effect, Silvaco, InAlN/GaN

Procedia PDF Downloads 463
1152 Expression of uPA, tPA, and PAI-1 in Calcified Aortic Valves

Authors: Abdullah M. Alzahrani

Abstract:

Our physiopathological assumption is that u-PA, t-PA, and PAI-1 are released by calcified aortic valves and play a role in the calcification of these valves. Sixty-five calcified aortic valves were collected from patients suffering from aortic stenosis. Each valve was incubated for 24 hours in culture medium. The supernatants were used to measure u-PA, t-PA, and PAI-1 concentrations; valve calcification was evaluated using biphotonic absorptiometry. Aortic stenosis valves expressed normal plasminogen activator concentrations and overexpressed PAI-1 (u-PA, t-PA, and PAI-1 mean concentrations were, respectively, 1.69 ng/mL ± 0.80, 2.76 ng/mL ± 1.33, and 53.27 ng/mL ± 36.39). There was no correlation between u-PA and PAI-1 (r = 0.3), but t-PA and PAI-1 were strongly correlated with each other (r = 0.6). Overexpression of PAI-1 was proportional to the calcium content of the AS valves. Our results demonstrate a consistent increase of PAI-1 proportional to the calcification. The overexpression of PAI-1 may be useful as a predictive indicator in patients with aortic stenosis.

Keywords: aortic valve, PAI-1, tPA gene, uPA gene

Procedia PDF Downloads 468
1151 Development of a Spatial Data for Renal Registry in Nigeria Health Sector

Authors: Adekunle Kolawole Ojo, Idowu Peter Adebayo, Egwuche Sylvester O.

Abstract:

Chronic Kidney Disease (CKD) is a significant cause of morbidity and mortality across developed and developing nations and is associated with increased health risks. There are no existing electronic means of capturing and monitoring CKD in Nigeria. This work is aimed at developing a spatial data model that can be used to implement the renal registries required for tracking and monitoring the spatial distribution of renal diseases by public health officers and patients. In this study, we have developed a spatial data model for a functional renal registry.

Keywords: renal registry, health informatics, chronic kidney disease, interface

Procedia PDF Downloads 199
1150 The Different Ways to Describe Regular Languages by Using Finite Automata and the Changing Algorithm Implementation

Authors: Abdulmajid Mukhtar Afat

Abstract:

This paper introduces finite automata theory and the different ways to describe regular languages, and presents a program implementing the subset construction algorithm to convert a nondeterministic finite automaton (NFA) to a deterministic finite automaton (DFA). The program is written in the C++ programming language. It reads an FA 5-tuple from a text file and classifies it as either a DFA or an NFA. For a DFA, the program reads the string w and decides whether it is accepted; if so, the program saves the tracking path and points it out. When the automaton is an NFA, the program converts it to a DFA so that it is easy to track and the program can decide whether w belongs to the regular language.
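The core of the conversion is the textbook subset construction, sketched below in Python rather than the paper's C++. The example NFA (accepting strings over {a, b} that end in "ab", with no epsilon moves) is an illustrative assumption, not the paper's test input.

```python
from collections import deque

# Subset construction: each DFA state is a frozenset of NFA states.
def nfa_to_dfa(alphabet, delta, start, accept):
    """delta: dict mapping (state, symbol) -> set of NFA states."""
    start_set = frozenset([start])
    dfa_delta = {}
    seen = {start_set}
    queue = deque([start_set])
    while queue:
        current = queue.popleft()
        for sym in alphabet:
            nxt = frozenset(s for q in current for s in delta.get((q, sym), set()))
            dfa_delta[(current, sym)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    # A DFA state accepts iff it contains an accepting NFA state
    dfa_accept = {s for s in seen if s & accept}
    return start_set, dfa_delta, dfa_accept

def dfa_accepts(word, start, dfa_delta, dfa_accept):
    state = start
    for sym in word:
        state = dfa_delta[(state, sym)]
    return state in dfa_accept

# Example NFA over {a, b} accepting strings that end in "ab"
delta = {("q0", "a"): {"q0", "q1"}, ("q0", "b"): {"q0"}, ("q1", "b"): {"q2"}}
start, d, acc = nfa_to_dfa("ab", delta, "q0", {"q2"})
print(dfa_accepts("aab", start, d, acc))   # True
print(dfa_accepts("abb", start, d, acc))   # False
```

Handling epsilon moves would add an epsilon-closure step before each transition; the breadth-first exploration of reachable state sets is otherwise unchanged.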

Keywords: finite automata, subset construction, DFA, NFA

Procedia PDF Downloads 424
1149 Reducing the Risk of Alcohol Relapse after Liver-Transplantation

Authors: Rebeca V. Tholen, Elaine Bundy

Abstract:

Background: Liver transplantation (LT) is considered the only curative treatment for end-stage liver disease (ESLD). The effects of alcoholism can cause irreversible liver damage, cirrhosis, and subsequent liver failure. Alcohol relapse after transplant occurs in 20-50% of patients and increases the risk of recurrent cirrhosis, organ rejection, and graft failure. Alcohol relapse after transplant has been identified as a problem among liver transplant recipients at a large urban academic transplant center in the United States. Transplantation reverses the complications of ESLD, but it does not treat underlying alcoholism or reduce the risk of relapse after transplant. The purpose of this quality improvement project is to implement and evaluate the effectiveness of the High-Risk Alcoholism Relapse (HRAR) Scale to screen for and identify patients at high risk of alcohol relapse after receiving an LT.
Methods: The HRAR Scale is a predictive tool designed to determine the severity of alcoholism and the risk of relapse after transplant. The scale consists of the three variables identified as having the highest predictive power for early relapse: daily number of drinks, history of previous inpatient treatment for alcoholism, and number of years of heavy drinking. All adult liver transplant recipients at a large urban transplant center were screened with the HRAR Scale prior to hospital discharge. An ordinal score of zero to two is assigned for each variable, and the total score ranges from zero to six; scores of three to six are considered high risk. Results: Descriptive statistics revealed that 25 patients were newly transplanted and discharged from the hospital during an 8-week period. 40% of patients (n=10) were identified as high risk for relapse and 60% as low risk (n=15). The daily number of drinks was determined by alcohol content (1 drink = 15 g of ethanol) and the number of drinks per day. 60% of patients reported drinking 9-17 drinks per day, and 40% reported ≤ 9 drinks. 50% of high-risk patients reported drinking for ≥ 25 years, 40% for 11-25 years, and 10% for ≤ 11 years. For number of inpatient treatments for alcoholism, 50% received inpatient treatment once, 20% more than once, and 30% reported never receiving inpatient treatment. The findings reveal the importance and value of a validated screening tool as a more efficient method than other screening methods alone. Integration of a structured clinical tool will help guide the drinking-history portion of the psychosocial assessment, and targeted interventions can be implemented for all high-risk patients. Conclusions: Our findings support the effectiveness of using the HRAR Scale to screen for and identify patients at high risk of alcohol relapse post-LT. Recommendations to help maintain post-transplant sobriety include starting a transplant support group within the organization for all high-risk patients.
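The three-variable scoring described in the Methods can be sketched as follows. The band boundaries are an assumption reconstructed from the bands the abstract reports (e.g. 9-17 drinks per day, 11-25 years); consult the original HRAR Scale before any clinical use.

```python
# Sketch of HRAR-style scoring: each of three variables scores 0-2,
# total 0-6, with 3-6 flagged as high risk. Cut-offs are assumptions
# inferred from the abstract's reported bands, not a clinical reference.
def hrar_score(drinks_per_day, years_heavy_drinking, prior_inpatient_treatments):
    def band(value, low, high):
        # 0 below the low cut-off, 1 between the cut-offs, 2 above the high one
        if value < low:
            return 0
        return 1 if value <= high else 2

    score = (band(drinks_per_day, 9, 17)
             + band(years_heavy_drinking, 11, 25)
             + band(prior_inpatient_treatments, 1, 1))
    return score, "high risk" if score >= 3 else "low risk"

print(hrar_score(12, 30, 2))   # (5, 'high risk')
```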

Keywords: alcoholism, liver transplant, quality improvement, substance abuse

Procedia PDF Downloads 110
1148 Agreement between Basal Metabolic Rate Measured by Bioelectrical Impedance Analysis and Estimated by Prediction Equations in Obese Groups

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Basal metabolic rate (BMR) is a widely used and accepted measure of energy expenditure. Its principal determinant is body mass; however, it is also correlated with a variety of other factors. The objective of this study was to measure BMR and compare it with the values obtained from predictive equations in adults classified according to their body mass index (BMI) values. 276 adults were included in the scope of this study. Their age, height, and weight values were recorded. Five groups were designed based on their BMI values. The first group (n = 85) was composed of individuals with BMI values between 18.5 and 24.9 kg/m2. Those with BMI values from 25.0 to 29.9 kg/m2 constituted Group 2 (n = 90). Individuals with 30.0-34.9 kg/m2, 35.0-39.9 kg/m2, and > 40.0 kg/m2 were included in Groups 3 (n = 53), 4 (n = 28), and 5 (n = 20), respectively. The most commonly used equations were selected for comparison with the measured BMR values. For this purpose, the values were calculated using four BMR prediction equations, namely those introduced by the Food and Agriculture Organization (FAO)/World Health Organization (WHO)/United Nations University (UNU), Harris and Benedict, Owen, and Mifflin. Descriptive statistics, ANOVA, post hoc Tukey, and Pearson's correlation tests were performed with a statistical program designed for Windows (SPSS, version 16.0). p values smaller than 0.05 were accepted as statistically significant. Mean ± SD values of groups 1, 2, 3, 4, and 5 for measured BMR in kcal were 1440.3 ± 210.0, 1618.8 ± 268.6, 1741.1 ± 345.2, 1853.1 ± 351.2, and 2028.0 ± 412.1, respectively. Upon comparison of the group means, differences were highly significant between Group 1 and each of the remaining four groups. The values increased from Group 2 to Group 5; however, the differences between Groups 2 and 3, Groups 3 and 4, and Groups 4 and 5 were not statistically significant.
These nonsignificant differences disappeared with the predictive equations proposed by Harris and Benedict, FAO/WHO/UNU, and Owen; for Mifflin, the nonsignificance was limited to Groups 4 and 5. Upon evaluation of the correlations between measured BMR and the values computed from the prediction equations, the lowest correlations were observed among individuals within the normal BMI range, and the highest in individuals with BMI values between 30.0 and 34.9 kg/m2. Correlations between measured BMR values and those calculated by FAO/WHO/UNU and by Owen were identical and the highest. In all groups, the highest correlations were observed between BMR values calculated from the Mifflin and the Harris and Benedict equations, which use age as an additional parameter. In conclusion, a unique resemblance between the FAO/WHO/UNU and Owen equations was noted; however, the mean values obtained from FAO/WHO/UNU were much closer to the measured BMR values, and the highest correlations with measured BMR were found for the FAO/WHO/UNU equation. These findings suggest that FAO/WHO/UNU is the most reliable equation and may be used when measured BMR values are not available.
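For reference, three of the compared equations can be written out directly. The coefficients below are from the published equations as commonly cited (weight in kg, height in cm, age in years, BMR in kcal/day); the FAO/WHO/UNU equation is age-banded and weight-only, so only its 18-30-year band is sketched. Verify against the primary sources before any clinical use.

```python
# Commonly cited forms of three BMR prediction equations (kcal/day).
def bmr_mifflin(weight_kg, height_cm, age_yr, male):
    """Mifflin-St Jeor."""
    base = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_yr
    return base + (5.0 if male else -161.0)

def bmr_harris_benedict(weight_kg, height_cm, age_yr, male):
    """Original Harris-Benedict."""
    if male:
        return 66.473 + 13.7516 * weight_kg + 5.0033 * height_cm - 6.755 * age_yr
    return 655.0955 + 9.5634 * weight_kg + 1.8496 * height_cm - 4.6756 * age_yr

def bmr_fao_who_unu_18_30(weight_kg, male):
    """FAO/WHO/UNU, 18-30-year age band only; other bands differ."""
    return 15.3 * weight_kg + 679.0 if male else 14.7 * weight_kg + 496.0

print(bmr_mifflin(70, 175, 25, True))           # 1673.75
print(bmr_harris_benedict(70, 175, 25, True))
print(bmr_fao_who_unu_18_30(70, True))          # 1750.0
```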

Keywords: adult, basal metabolic rate, fao/who/unu, obesity, prediction equations

Procedia PDF Downloads 128
1147 Design and Test a Robust Bearing-Only Target Motion Analysis Algorithm Based on Modified Gain Extended Kalman Filter

Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy

Abstract:

Passive sonar is a method for detecting acoustic signals in the ocean; it detects the acoustic signals emanating from external sources. With passive sonar, we can determine only the bearing of the target, with no information about its range. Target Motion Analysis (TMA) is the process of estimating the position and speed of a target using passive sonar information; since bearing is the only available information, the technique is called bearing-only TMA. Many TMA techniques have been developed; however, until now there has been no very effective method that can always track an unknown target and extract its moving trace. In this work, an effective bearing-only TMA algorithm is designed. The measured bearing angles are very noisy; moreover, for multi-beam sonar, the measurements are quantized due to the sonar beam width. To deal with this, a modified gain extended Kalman filter algorithm is used. The algorithm is fine-tuned, and many modules are added to improve the performance. A special validation gate module is used to ensure the stability of the algorithm. Many indicators of performance and confidence-level measurement are designed and tested. A new method to detect whether the target is maneuvering is proposed. Moreover, a reactive optimal observer maneuver based on bearing measurements is proposed, which ensures convergence to the right solution every time. To test the performance of the proposed TMA algorithm, a simulation is carried out with a MATLAB program. The simulator models a discrete scenario for an observer and a target, taking into consideration all the practical aspects of the problem, such as smooth transitions in speed, circular turns of the ship, noisy measurements, and the quantized bearing measurements that come from multi-beam sonar.
The tests were run for a large set of given scenarios. For all tests, full tracking was achieved within 10 minutes with very little error: the range estimation error was less than 5%, the speed error less than 5%, and the heading error less than 2 degrees. The online performance estimator was mostly aligned with the real performance; the range estimation confidence level gives a value of 90% when the range error is less than 10%. The experiments show that the proposed TMA algorithm is very robust and has low estimation error; however, the converging time of the algorithm still needs to be improved.
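The measurement model that makes this problem hard (noisy bearings, further quantized by the sonar beam width) can be sketched as follows; the full modified gain EKF is beyond a short example. The beam width, noise level, and geometry are illustrative assumptions.

```python
import math
import random

random.seed(1)

# Bearing-only measurement model: noisy bearing, quantized to beam centers.
BEAM_WIDTH_DEG = 2.0      # assumed multi-beam sonar beam width
NOISE_STD_DEG = 1.0       # assumed additive bearing noise

def true_bearing(observer, target):
    """Bearing in degrees, measured clockwise from north (the y axis)."""
    dx, dy = target[0] - observer[0], target[1] - observer[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def measured_bearing(observer, target):
    noisy = true_bearing(observer, target) + random.gauss(0.0, NOISE_STD_DEG)
    # quantize to the center of the beam containing the noisy bearing
    return (math.floor(noisy / BEAM_WIDTH_DEG) + 0.5) * BEAM_WIDTH_DEG % 360.0

# A target moving east past a stationary observer at the origin
for t in range(0, 600, 120):                    # seconds
    target = (-1000.0 + 5.0 * t, 3000.0)        # meters, 5 m/s eastward
    print(t, round(measured_bearing((0.0, 0.0), target), 1))
```

A TMA filter consumes exactly this bearing stream: each update compares the predicted bearing of the state estimate against the quantized measurement, which is why the observer must maneuver for range to become observable.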

Keywords: target motion analysis, Kalman filter, passive sonar, bearing-only tracking

Procedia PDF Downloads 393
1146 Predicting Recessions with Bivariate Dynamic Probit Model: The Czech and German Case

Authors: Lukas Reznak, Maria Reznakova

Abstract:

Recession of an economy has a profound negative effect on all involved stakeholders. It follows that timely prediction of recessions has been of utmost interest both in theoretical research and in practical macroeconomic modelling. The current mainstream of recession prediction is based on standard OLS models of continuous GDP using macroeconomic data. This approach is not suitable for two reasons: the standard continuous models are proving to be obsolete, and the macroeconomic data are unreliable, often revised many years retroactively. The aim of the paper is to explore a different branch of recession forecasting research and to verify the findings on real data for the Czech Republic and Germany. In the paper, the authors present a family of discrete-choice probit models with parameters estimated by the method of maximum likelihood. In its basic form, the probit models a univariate series of recessions and expansions in the economic cycle for a given country. The majority of the paper deals with more complex model structures, namely dynamic and bivariate extensions. The dynamic structure models the autoregressive nature of recessions, taking previous economic activity into consideration to predict the development in subsequent periods. Bivariate extensions utilize information from a foreign economy by incorporating correlation of the error terms and thus modelling the dependencies between the two countries. Bivariate models predict a bivariate time series of economic states in both economies and thus enhance predictive performance. A vital enabler of timely and successful recession forecasting is reliable and readily available data. Leading indicators, namely the yield curve and stock market indices, represent an ideal data base, as this information is available in advance and does not undergo retroactive revisions.
Just as importantly, the combination of the yield curve and stock market indices reflects a range of macroeconomic trends and financial-market investor sentiment which influence the economic cycle. These theoretical approaches are applied to real data for the Czech Republic and Germany. Two models were identified for each country, one each for in-sample and out-of-sample predictive purposes. All four followed a bivariate structure, and three contained a dynamic component.
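The univariate dynamic probit at the core of this family can be sketched via its log-likelihood: the recession indicator y_t is driven by a latent index combining a leading indicator x_t (e.g. the yield-curve slope) and the lagged state y_{t-1}. The bivariate extension with correlated errors is omitted, and the toy data and coefficients below are illustrative assumptions; in practice the parameters are found by maximizing this likelihood.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def dynamic_probit_loglik(params, y, x):
    """params = (b0, b1, rho); model: P(y_t = 1) = Phi(b0 + b1*x_t + rho*y_{t-1})."""
    b0, b1, rho = params
    ll = 0.0
    for t in range(1, len(y)):
        p = norm_cdf(b0 + b1 * x[t] + rho * y[t - 1])
        p = min(max(p, 1e-12), 1.0 - 1e-12)   # guard against log(0)
        ll += y[t] * math.log(p) + (1 - y[t]) * math.log(1.0 - p)
    return ll

# Toy series: recessions (1) tend to follow a negative (inverted) yield spread
y = [0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0]
x = [1.2, 0.8, -0.5, -1.0, -0.3, 0.9, 1.1, -0.7, -1.2, -0.6, 0.2, 1.0]

print(dynamic_probit_loglik((-0.5, -1.5, 1.0), y, x))
```

A negative b1 (inversions raise recession probability) and positive rho (recessions persist) fit the toy data far better than coefficients of the opposite sign, which is the pattern the estimation would recover.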

Keywords: bivariate probit, leading indicators, recession forecasting, Czech Republic, Germany

Procedia PDF Downloads 242
1145 An Android Geofencing App for Autonomous Remote Switch Control

Authors: Jamie Wong, Daisy Sang, Chang-Shyh Peng

Abstract:

A geofence is a virtual fence defined by a preset physical radius around a target location. A geofencing app provides location-based services that define the actionable operations to perform upon the crossing of a geofence. Geofencing requires continual location tracking, which can consume a noticeable amount of battery power. Additionally, location updates need to be frequent and accurate so that actions can be triggered within an expected time window after the mobile user crosses the geofence. In this paper, we build an Android mobile geofencing application to remotely and autonomously control a power switch.
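The core containment check behind such an app can be sketched with the haversine distance; an Android implementation would normally delegate this to the platform Geofencing API rather than compute it by hand, and the coordinates below are illustrative assumptions.

```python
import math

# Geofence containment via great-circle (haversine) distance.
EARTH_RADIUS_M = 6_371_000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_geofence(position, center, radius_m):
    return haversine_m(*position, *center) <= radius_m

fence_center = (34.1678, -118.3089)   # illustrative target location
print(inside_geofence((34.1680, -118.3085), fence_center, radius_m=100.0))
print(inside_geofence((34.2000, -118.3089), fence_center, radius_m=100.0))
```

Crossing detection then reduces to comparing the inside/outside result of consecutive location updates and firing the switch action on a transition.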

Keywords: location based service, geofence, autonomous, remote switch

Procedia PDF Downloads 311
1144 Application of Granular Computing Paradigm in Knowledge Induction

Authors: Iftikhar U. Sikder

Abstract:

This paper illustrates an application of the granular computing approach, namely rough set theory, in data mining. The paper outlines the formalism of granular computing and elucidates the mathematical underpinnings of rough set theory, which has been widely used by the data mining and machine learning communities. A real-world application is illustrated, and the classification performance is compared with other contending machine learning algorithms. The predictive performance of the rough set rule induction model shows comparable success with respect to the other contending algorithms.
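The concept approximation underlying rough set theory can be sketched directly: objects that are indiscernible on the chosen attributes form granules, and a target concept is bracketed by its lower and upper approximations. The toy decision table below is an illustrative assumption, not the paper's data set.

```python
# Rough-set lower/upper approximation of a concept over a toy table.
def granules(objects, attributes):
    """Partition objects into equivalence classes of identical attribute values."""
    blocks = {}
    for name, row in objects.items():
        key = tuple(row[a] for a in attributes)
        blocks.setdefault(key, set()).add(name)
    return list(blocks.values())

def approximations(objects, attributes, concept):
    lower, upper = set(), set()
    for block in granules(objects, attributes):
        if block <= concept:
            lower |= block          # granule certainly inside the concept
        if block & concept:
            upper |= block          # granule possibly inside the concept
    return lower, upper

objects = {
    "o1": {"soil": "clay", "wet": "yes"},
    "o2": {"soil": "clay", "wet": "yes"},
    "o3": {"soil": "sand", "wet": "no"},
    "o4": {"soil": "clay", "wet": "no"},
}
concept = {"o1", "o4"}              # objects of the target class
lower, upper = approximations(objects, ["soil", "wet"], concept)
print(sorted(lower), sorted(upper))  # ['o4'] ['o1', 'o2', 'o4']
```

Here o1 and o2 are indiscernible but only o1 is in the concept, so the concept is rough: rule induction then emits certain rules from the lower approximation and possible rules from the boundary.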

Keywords: concept approximation, granular computing, reducts, rough set theory, rule induction

Procedia PDF Downloads 525
1143 Referencing Anna: Findings From Eye-tracking During Dutch Pronoun Resolution

Authors: Robin Devillers, Chantal van Dijk

Abstract:

Children face ambiguities in everyday language use. Ambiguity in pronoun resolution can be particularly challenging for them, whereas adults can rapidly identify the antecedent of a mentioned pronoun. Two main factors underlie this process, namely the accessibility of the referent and the syntactic cues of the pronoun. Within 200 ms, adults have converged on the accessibility and the syntactic constraints, while reducing cognitive effort by considering contextual cues. As children are still developing their cognitive capacity, they are not yet able to simultaneously assess and integrate accessibility, contextual cues, and syntactic information. As such, they may fail to identify the correct referent and possibly fixate more on the competitor in comparison to adults. In this study, Dutch while-clauses were used to investigate the interpretation of pronouns by children. The aims are a) to examine the extent to which 7-10-year-old children are able to utilise discourse and syntactic information during online and offline sentence processing and b) to analyse the contribution of individual factors, including age, working memory, condition, and vocabulary. Adult and child participants are presented with filler items and while-clauses, the latter following a particular structure: 'Anna and Sophie are sitting in the library. While Anna is reading a book, she is taking a sip of water.' This sentence illustrates the ambiguous situation, as it is unclear whether 'she' refers to Anna or Sophie. In the unambiguous situation, either Anna or Sophie is substituted by a boy, such as 'Peter'; the pronoun in the second sentence then refers unambiguously to one of the characters due to its syntactic constraints. Children's and adults' responses were measured by means of a visual world paradigm. This paradigm consisted of two characters, of which one was the referent (the target) and the other the competitor.
A sentence was presented and followed by a question, which required the participant to choose which character was the referent. The paradigm thus yields an online (fixations) and an offline (accuracy) score. The findings will be analysed using generalised additive mixed models, which allow for a thorough estimation of the individual variables. The findings will contribute to the scientific literature in several ways. Firstly, while-clauses have not been studied much, and their processing has not yet been characterised. Moreover, online pronoun resolution has not been investigated much in either children or adults, so this study contributes to the pronoun-resolution literature for both groups. Lastly, pronoun resolution has not yet been studied in Dutch, and as such, this study broadens the range of languages covered.

Keywords: pronouns, online language processing, Dutch, eye-tracking, first language acquisition, language development

Procedia PDF Downloads 95
1142 Assessment of Bisphenol A and 17 α-Ethinyl Estradiol Bioavailability in Soils Treated with Biosolids

Authors: I. Ahumada, L. Ascar, C. Pedraza, J. Montecino

Abstract:

It has been found that the addition of biosolids to soil is beneficial to soil health, enriching the soil with essential nutrient elements. Although this sludge has properties that improve the physical features and productivity of agricultural and forest soils and aid the recovery of degraded soils, it also contains trace elements, organic trace compounds, and pathogens that can damage the environment. Application of these biosolids to land without full treatment, and of the treated wastewater, can transfer these compounds into terrestrial and aquatic environments, giving rise to potential accumulation in plants. The general aim of this study was to evaluate the bioavailability of bisphenol A (BPA) and 17 α-ethinyl estradiol (EE2) in a soil-biosolid system using wheat (Triticum aestivum) plant assays and a predictive extraction method using a solution of hydroxypropyl-β-cyclodextrin (HPCD), to determine whether the latter is a reliable surrogate for the bioassay. Two soils were obtained from the central region of Chile (Lo Prado and Chicauma). Biosolids were obtained from a regional wastewater treatment plant. The soils were amended with biosolids at 90 Mg ha-1. Soils treated with biosolids and spiked with 10 mg kg-1 of EE2 and with 15 mg kg-1 and 30 mg kg-1 of BPA were also included. The BPA and EE2 concentrations were determined in biosolids, soils, and plant samples through ultrasound-assisted extraction, solid-phase extraction (SPE), and gas chromatography coupled to mass spectrometry (GC/MS). The bioavailable fraction found in each of the soils cultivated with wheat plants was compared with results obtained through a cyclodextrin biosimulator method. The total concentrations found in the biosolid from a treatment plant were 0.150 ± 0.064 mg kg-1 and 12.8 ± 2.9 mg kg-1 of EE2 and BPA, respectively. BPA and EE2 bioavailability is affected by the organic matter content and the physical and chemical properties of the soil.
The bioavailability response of both compounds in the two soils varied with the EE2 and BPA concentration. In the case of EE2, wheat plants contained higher concentrations in the roots than in the shoots, and the concentration of EE2 increased with increasing biosolid rate. For BPA, on the other hand, a higher concentration was found in the shoots than in the roots of the plants. The predictive capability of the HPCD extraction was assessed using a simple linear correlation test for both compounds in wheat plants. The correlation coefficient between the EE2 values obtained from the HPCD extraction and those obtained from the wheat plants was r = 0.99 (p ≤ 0.05). In the case of BPA, on the other hand, no correlation was found. The methodology was therefore validated with respect to wheat plant bioassays only for EE2. Acknowledgments: The authors thank FONDECYT 1150502.

Keywords: emerging compounds, bioavailability, biosolids, endocrine disruptors

Procedia PDF Downloads 138
1141 Systems Versioning: A Features-Based Meta-Modeling Approach

Authors: Ola A. Younis, Said Ghoul

Abstract:

Today's systems are large and complex and exist in many versions. Controlling these versions and tracking their changes has become very difficult, as some versions are created with meaningless names or specifications, and many versions of a system are created with no clear difference between them. This leads to a mismatch between a user's request and the version the user receives. In this paper, we present a system-versions meta-modeling approach that produces versions based on a system's features. This model reduces the number of steps needed to configure a release and gives each version its own unique specification. The approach is applicable to systems whose specifications are expressed in terms of features.
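The core idea of identifying a version by its feature set rather than by an opaque name can be sketched as follows (a minimal illustration, not the authors' meta-model; the `Version` class and the feature names are hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Version:
    """A system version identified by its feature set, not an arbitrary name."""
    features: frozenset

    def identifier(self):
        # Derive a deterministic, meaningful identifier from the features,
        # so two versions with the same features get the same name.
        return "+".join(sorted(self.features))

    def diff(self, other):
        """Features that distinguish this version from another."""
        return self.features ^ other.features

# Two hypothetical releases of the same system.
v1 = Version(frozenset({"auth", "search"}))
v2 = Version(frozenset({"auth", "search", "export"}))
```

Because the identifier is derived from the features, the difference between any two versions is always explicit, which is exactly the mismatch problem feature-based versioning tries to remove.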

Keywords: features, meta-modeling, semantic modeling, SPL, VCS, versioning

Procedia PDF Downloads 439
1140 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging

Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen

Abstract:

Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm were used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests, where the protein value was determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The task is protein regression for the first dataset and variety classification for the second. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required by classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and the results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested.
These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
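One of the listed chemometric preprocessing steps, the standard normal variate transform, can be sketched as follows (a minimal illustration operating on a single spectrum with hypothetical reflectance values, not the authors' pipeline):

```python
from math import sqrt

def snv(spectrum):
    """Standard normal variate: center and scale one spectrum so it has
    zero mean and unit standard deviation, reducing multiplicative
    scatter differences between spectra."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    sd = sqrt(sum((x - mean) ** 2 for x in spectrum) / (n - 1))
    return [(x - mean) / sd for x in spectrum]

# Hypothetical reflectance values for one pixel spectrum (arbitrary units).
spectrum = [0.42, 0.45, 0.51, 0.60, 0.58, 0.49]
normalized = snv(spectrum)
```

Note that SNV normalizes each spectrum independently, unlike centering and scaling, which are usually applied per wavelength across the whole dataset.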

Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques

Procedia PDF Downloads 95
1139 SVM-DTC for PMSM Speed Tracking Control

Authors: Kendouci Khedidja, Mazari Benyounes, Benhadria Mohamed Rachid, Dadi Rachida

Abstract:

In recent years, direct torque control (DTC) has become an alternative to the well-known vector control, especially for the permanent magnet synchronous motor (PMSM). However, it suffers from flux linkage and torque ripple. To address this problem, conventional DTC is combined with space vector pulse width modulation (SVPWM), a combination that has achieved great success in PMSM control and has become an active research topic. This paper introduces the DTC and SVPWM-DTC control of the PMSM; each part of the system is simulated in Matlab/Simulink based on the mathematical model. The simulation results show that the improved SVPWM-DTC of the PMSM has good dynamic and static performance.

Keywords: PMSM, DTC, SVM, speed control

Procedia PDF Downloads 383
1138 Kuehne + Nagel's PharmaChain: IoT-Enabled Product Monitoring Using Radio Frequency Identification

Authors: Rebecca Angeles

Abstract:

This case study features the Kuehne + Nagel PharmaChain solution for ‘cold chain’ pharmaceutical and biologic product shipments with IoT-enabled features for shipment temperature and location tracking. Using the case study method and content analysis, this research project investigates the application of the structurational model of technology theory introduced by Orlikowski in order to interpret the firm’s entry and participation in the IoT-impelled marketplace.

Keywords: Internet of Things (IoT), radio frequency identification (RFID), structurational model of technology (Orlikowski), supply chain management

Procedia PDF Downloads 229
1137 CERD: Cost Effective Route Discovery in Mobile Ad Hoc Networks

Authors: Anuradha Banerjee

Abstract:

A mobile ad hoc network is an infrastructure-less network where nodes are free to move independently in any direction. Because nodes have limited battery power, an energy-efficient route discovery technique is required to enhance their lifetime and the network performance. In this paper, we propose an energy-efficient route discovery technique, CERD, that greatly reduces the number of route requests flooded into the network and gives priority to route request packets sent from routers that have communicated with the destination very recently, over single- or multi-hop paths. This not only enhances the lifetime of nodes but also decreases the delay in tracking the destination.
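The recency-based prioritization described here can be sketched as follows (a minimal, hypothetical illustration of the idea; the `Node` class and the `RECENCY_WINDOW` threshold are assumptions for illustration, not details from the paper):

```python
# Sketch of CERD's core idea: a router forwards a route request only if its
# last contact with the destination is recent enough, giving priority to
# routers that communicated with the destination recently and curbing
# blind flooding of route requests.
RECENCY_WINDOW = 30.0  # seconds; illustrative threshold, not from the paper

class Node:
    def __init__(self, name):
        self.name = name
        self.last_contact = {}  # destination -> timestamp of last communication

    def record_contact(self, dest, now):
        self.last_contact[dest] = now

    def should_forward(self, dest, now):
        """Forward a route request for `dest` only if this node talked to
        the destination within the recency window."""
        t = self.last_contact.get(dest)
        return t is not None and (now - t) <= RECENCY_WINDOW

n = Node("A")
n.record_contact("D", now=100.0)
```

Nodes with stale or no contact information simply drop the request, which is how the flooding volume, and hence the energy drain, is reduced.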

Keywords: ad hoc network, energy efficiency, flooding, node lifetime, route discovery

Procedia PDF Downloads 339
1136 Fair Value Accounting and Evolution of the Ohlson Model

Authors: Mohamed Zaher Bouaziz

Abstract:

Our study examines the Ohlson Model, which links a company's market value to its equity and net earnings, in the context of the evolution of the Canadian accounting model, characterized by more extensive use of fair value and a broader measure of performance after IFRS adoption. Our hypothesis is that if equity is reported at its fair value, this valuation is closely linked to market capitalization, so the weight of earnings weakens or even disappears in the Ohlson Model. Drawing on Canada's adoption of the International Financial Reporting Standards (IFRS), our results support our hypothesis that equity appears to include most of the relevant information for investors, while earnings have become less important. However, the predictive power of earnings does not disappear.
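The valuation relation being tested, market value as a linear function of book equity and net earnings, can be sketched as a small ordinary least squares fit (a minimal stdlib illustration with hypothetical, illustrative firm data, not the Canadian sample used in the study):

```python
def solve(A, b):
    """Solve a small linear system A x = b by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))  # partial pivoting
        M[i], M[p] = M[p], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(X, y):
    """Ordinary least squares via the normal equations: y = b0 + b1*x1 + ..."""
    rows = [[1.0] + list(x) for x in X]
    k = len(rows[0])
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    return solve(XtX, Xty)

# Hypothetical firm-level data (illustrative only): market value regressed
# on book equity and net earnings, in the spirit of the Ohlson Model
# MV = b0 + b1*equity + b2*earnings.
equity   = [10.0, 12.0, 15.0, 18.0, 20.0, 25.0]
earnings = [1.0, 1.5, 1.2, 2.0, 2.5, 3.0]
mv       = [12.0, 15.0, 17.0, 22.0, 25.0, 30.0]
b0, b1, b2 = ols(list(zip(equity, earnings)), mv)
```

The hypothesis in the abstract corresponds to the earnings coefficient b2 shrinking toward zero once equity is reported at fair value, while the equity coefficient b1 carries most of the explanatory weight.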

Keywords: fair value accounting, Ohlson model, IFRS adoption, value-relevance of equity and earnings

Procedia PDF Downloads 183
1135 Predicting the Success of Bank Telemarketing Using Artificial Neural Network

Authors: Mokrane Selma

Abstract:

The shift towards decision making (DM) based on artificial intelligence (AI) techniques will change the way consumer markets and our societies function. Through AI, predictive analytics is being used by businesses to identify patterns and major trends with the objective of improving DM and influencing future business outcomes. This paper proposes an Artificial Neural Network (ANN) approach to predict the success of telemarketing calls for selling bank long-term deposits. To validate the proposed model, we use a bank marketing dataset of 41,188 phone calls. The ANN attains 98.93% accuracy, which outperforms other conventional classifiers and confirms that it is a credible and valuable approach for telemarketing campaign managers.
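A minimal sketch of the kind of neuron such an ANN is built from, trained by stochastic gradient descent on hypothetical call features (illustrative values only, not the 41,188-call dataset or the paper's actual architecture):

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def train_logistic(X, y, lr=0.5, epochs=500):
    """Stochastic gradient descent for a single logistic unit, the
    building block of the ANN classifiers used for this kind of task."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss w.r.t. the pre-activation
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Hypothetical scaled call features (e.g. call duration, prior contacts)
# and subscription outcome (1 = client subscribed); illustrative only.
X = [[0.1, 0.0], [0.2, 0.1], [0.8, 0.9], [0.9, 0.7], [0.7, 0.8], [0.2, 0.2]]
y = [0, 0, 1, 1, 1, 0]
w, b = train_logistic(X, y)
predict = lambda xi: sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5
```

A full ANN stacks layers of such units with non-linear activations, which is what lets it outperform a single linear classifier on this kind of tabular prediction problem.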

Keywords: bank telemarketing, prediction, decision making, artificial intelligence, artificial neural network

Procedia PDF Downloads 147
1134 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

In recent years, with the rapid increase in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited by a special challenge: numerous regulations require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes, because they are often considered a black box and fail to provide information on why a certain risk score is given to a customer. To bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model was developed at Dun and Bradstreet that blends Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce the observed non-linear relationships between the explanatory and dependent variables into traditional scorecards.
Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model matches the score distribution generated by a Machine Learning algorithm, which provides an estimate of the WoE for each bin. This capability helps to build powerful scorecards for sparse cases that cannot be handled with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The analysis shows that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. It is therefore concluded that, with the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern about the difficulty of explaining the models for regulatory purposes.
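For contrast, the conventional Weight of Evidence computation that the Hybrid Model replaces with score-distribution matching can be sketched as follows (bin counts are hypothetical, illustrative values):

```python
from math import log

def weight_of_evidence(goods, bads):
    """Conventional per-bin WoE: ln(% of all goods in bin / % of all bads
    in bin). `goods` and `bads` are counts per scorecard bin; bins with
    zero counts would need smoothing, omitted here for brevity."""
    tg, tb = sum(goods), sum(bads)
    return [log((g / tg) / (b / tb)) for g, b in zip(goods, bads)]

# Hypothetical bin counts for one scorecard characteristic:
goods = [400, 300, 200, 100]   # non-defaulting accounts per bin
bads  = [ 20,  30,  60,  90]   # defaulting accounts per bin
woe = weight_of_evidence(goods, bads)
```

When bins are sparse, these count ratios become unstable, which is the motivation stated above for instead estimating per-bin WoE from the score distribution of a Machine Learning model.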

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 130
1133 A Research on Tourism Market Forecast and Its Evaluation

Authors: Min Wei

Abstract:

Traditional methods for forecasting the tourism market pay more attention to the accuracy of the forecasts while ignoring their feasibility and operability, which makes the forecast results difficult to test scientifically. Using a Linear Regression Model, this paper attempts to construct a scientific evaluation system for predicted values that both ensures the accuracy and stability of the predicted value and ensures the feasibility and operability of the forecasting results. The findings show that such a scientific evaluation system can implement the scientific concept of development and the harmonious, coordinated development of humanity and nature.

Keywords: linear regression model, tourism market, forecast, tourism economics

Procedia PDF Downloads 329
1132 Real Estate Trend Prediction with Artificial Intelligence Techniques

Authors: Sophia Liang Zhou

Abstract:

For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about macroeconomic determinants of housing price and largely focused on one or two areas using point prediction. This study aims to develop data-driven models to accurately predict future housing market trends in different markets. This work studied five different metropolitan areas representing different market trends and compared three time-lag settings: no lag, a 6-month lag, and a 12-month lag. Linear regression (LR), random forest (RF), and artificial neural network (ANN) were employed to model the real estate price using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, and personal income, in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, FBI, and Freddie Mac. In the original data, some factors are monthly, some quarterly, and some yearly. Thus, two methods for filling missing values, backfilling and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF’s inherent limitations. Both ANN and LR methods generated predictive models with high accuracy (> 95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It was also found that the technique used to fill missing values in the dataset and the implementation of the time lag can have a significant influence on model performance and require further investigation.
The best performing models varied for each area, but the backfilled 12-month lag LR models and the interpolated no lag ANN models showed the best stable performance overall, with accuracies > 95% for each city. This study reveals the influence of input variables in different markets. It also provides evidence to support future studies to identify the optimal time lag and data imputing methods for establishing accurate predictive models.
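The time-lag construction compared in this study can be sketched as follows (a minimal illustration with hypothetical monthly index values; the study's actual feature set has 12 demographic and macroeconomic variables):

```python
def make_lagged_features(series, lag):
    """Pair each target value with the observation made `lag` steps
    earlier, so a model predicts the price index from information
    available `lag` months before."""
    X = series[:len(series) - lag] if lag else series[:]
    y = series[lag:]
    return X, y

# Hypothetical monthly home-price-index values (illustrative only).
index = [100, 102, 101, 105, 108, 110, 112, 115]
X6, y6 = make_lagged_features(index, lag=6)  # 6-month-lag setting
```

Longer lags make the forecast more actionable but discard the most recent observations from the training pairs, which is the trade-off the no-lag, 6-month, and 12-month settings probe.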

Keywords: linear regression, random forest, artificial neural network, real estate price prediction

Procedia PDF Downloads 99
1131 Attention Problems among Adolescents: Examining Educational Environments

Authors: Zhidong Zhang, Zhi-Chao Zhang, Georgianna Duarte

Abstract:

This study investigated attention problems using the Achenbach System of Empirically Based Assessment (ASEBA). Two thousand eight hundred and ninety-four adolescents were surveyed using a stratified sampling method. We examined the relationships between relevant background variables and attention problems. Multiple regression models were applied to analyze the data. Relevant variables such as sports activities, hobbies, age, grade, and the number of close friends were included as predictive variables. The analysis results indicated that educational environments and extracurricular activities are important factors influencing students’ attention problems.

Keywords: adolescents, ASEBA, attention problems, educational environments, stratified sampling

Procedia PDF Downloads 274
1130 A Prospective Neurosurgical Registry Evaluating the Clinical Care of Traumatic Brain Injury Patients Presenting to Mulago National Referral Hospital in Uganda

Authors: Benjamin J. Kuo, Silvia D. Vaca, Joao Ricardo Nickenig Vissoci, Catherine A. Staton, Linda Xu, Michael Muhumuza, Hussein Ssenyonjo, John Mukasa, Joel Kiryabwire, Lydia Nanjula, Christine Muhumuza, Henry E. Rice, Gerald A. Grant, Michael M. Haglund

Abstract:

Background: Traumatic Brain Injury (TBI) is disproportionately concentrated in low- and middle-income countries (LMICs), with the odds of dying from TBI in Uganda more than 4 times higher than in high-income countries (HICs). The disparities in injury incidence and outcome between LMICs and resource-rich settings have led to increased health outcomes research on TBIs and their associated risk factors in LMICs. While TBI studies in LMICs have increased over the last decade, there is still a need for more robust prospective registries. In Uganda, a trauma registry implemented in 2004 at the Mulago National Referral Hospital (MNRH) showed that road traffic injury (RTI) is the major contributor (60%) to overall mortality in the casualty department. While the prior registry provides information on injury incidence and burden, it is limited in scope: it neither follows patients longitudinally throughout their hospital stay nor focuses specifically on TBIs. And although these retrospective analyses are helpful for benchmarking TBI outcomes, they make it hard to identify specific quality improvement initiatives. The relationships among epidemiology, patient risk factors, clinical care, and TBI outcomes are still relatively unknown at MNRH. Objective: The objectives of this study are to describe the processes of care and determine risk factors predictive of poor outcomes for TBI patients presenting to a single tertiary hospital in Uganda. Methods: Prospective data were collected for 563 TBI patients presenting to a tertiary hospital in Kampala from 1 June to 30 November 2016. Research Electronic Data Capture (REDCap) was used to systematically collect variables spanning 8 categories. Univariate and multivariate analyses were conducted to determine significant predictors of mortality. Results: 563 TBI patients were enrolled from 1 June to 30 November 2016.
102 patients (18%) received surgery, 29 patients (5.1%) intended for surgery failed to receive it, and 251 patients (45%) received non-operative management. Overall mortality was 9.6%, which ranged from 4.7% for mild and moderate TBI to 55% for severe TBI patients with GCS 3-5. Within each TBI severity category, mortality differed by management pathway. Variables predictive of mortality were TBI severity, more than one intracranial bleed, failure to receive surgery, high dependency unit admission, ventilator support outside of surgery, and hospital arrival delayed by more than 4 hours. Conclusions: The overall mortality rate of 9.6% in Uganda for TBI is high, and likely underestimates the true TBI mortality. Furthermore, the wide-ranging mortality (3-82%), high ICU fatality, and negative impact of care delays suggest shortcomings with the current triaging practices. Lack of surgical intervention when needed was highly predictive of mortality in TBI patients. Further research into the determinants of surgical interventions, quality of step-up care, and prolonged care delays are needed to better understand the complex interplay of variables that affect patient outcome. These insights guide the development of future interventions and resource allocation to improve patient outcomes.

Keywords: care continuum, global neurosurgery, Kampala Uganda, LMIC, Mulago, prospective registry, traumatic brain injury

Procedia PDF Downloads 230
1129 Pion/Muon Identification in a Nuclear Emulsion Cloud Chamber Using Neural Networks

Authors: Kais Manai

Abstract:

The main part of this work focuses on the study of pion/muon separation at low energy using a nuclear Emulsion Cloud Chamber (ECC) made of lead and nuclear emulsion films. The work consists of two parts: a particle reconstruction algorithm and a Neural Network that assigns to each reconstructed particle the probability of being a muon or a pion. The pion/muon separation algorithm has been optimized using a detailed Monte Carlo simulation of the ECC and tested on real data. The algorithm achieves a 60% muon identification efficiency with a pion misidentification rate smaller than 3%.

Keywords: nuclear emulsion, particle identification, tracking, neural network

Procedia PDF Downloads 497
1128 Using Mining Methods of WEKA to Predict Quran Verb Tense and Aspect in Translations from Arabic to English: Experimental Results and Analysis

Authors: Jawharah Alasmari

Abstract:

In verb inflection, tense marks past/present/future action, while aspect marks progressive/continuous and perfect/completed actions. The usage and meaning of tense and aspect differ between Arabic and English. In this research, we applied data mining methods to test the predictive power of candidate features, using our dataset of Arabic verbs in context and their 7 translations. Weka machine learning classifiers are used in this experiment to identify the key features that can guide a translator toward an appropriate English translation of Arabic verb tense and aspect.

Keywords: Arabic verb, English translations, mining methods, Weka software

Procedia PDF Downloads 267