Search results for: integrative model of behavior prediction
21995 Strategy Management of Soybean (Glycine max L.) for Dealing with Extreme Climate through the Use of Cropsyst Model
Authors: Aminah Muchdar, Nuraeni, Eddy
Abstract:
The aims of the research are (1) to verify the CropSyst crop model against experimental field data for soybean and (2) to predict planting time and potential soybean yield using the CropSyst model. The research was divided into two stages: (1) a calibration stage conducted in the field from June until September 2015, and (2) a model application stage, in which the data obtained from the field calibration were fed into the CropSyst model. The required model inputs are climate data, soil data, and crop genetic data. The agreement between the field results and the CropSyst simulation is indicated by an Efficiency Index (EF) of 0.939, showing that the CropSyst model performs well. The calculated RRMSE of 1.922% indicates that the prediction error of the simulation relative to the field results is about 1.92%. It is concluded that the CropSyst-based prediction of soybean planting time is valid for use, and that the appropriate planting time for soybean, particularly on rain-fed land, is at the end of the rainy season; in this study the first planting date (June 2, 2015) gave the highest production because some rain still fell at that time. The Tanggamus variety is more tolerant of delayed planting, as its percentage yield decrease per decade is lower than the average of all varieties.
Keywords: soybean, Cropsyst, calibration, efficiency index, RRMSE
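The Efficiency Index and RRMSE quoted above are commonly computed as in the minimal sketch below, assuming EF is the Nash–Sutcliffe-style model efficiency and RRMSE is the RMSE normalized by the observed mean; the abstract does not give its exact formulas, so these definitions and the sample yields are assumptions.

```python
import numpy as np

def efficiency_index(observed, simulated):
    """Nash-Sutcliffe-style model efficiency: 1 means a perfect match."""
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

def rrmse_percent(observed, simulated):
    """Relative RMSE: RMSE normalized by the observed mean, in percent."""
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    rmse = np.sqrt(np.mean((observed - simulated) ** 2))
    return 100.0 * rmse / observed.mean()

# Hypothetical soybean yields (t/ha): field observations vs. CropSyst simulation.
obs = [2.10, 1.95, 1.80, 1.60]
sim = [2.05, 1.98, 1.75, 1.65]
print(efficiency_index(obs, sim), rrmse_percent(obs, sim))
```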
Procedia PDF Downloads 179
21994 Stress Concentration and Strength Prediction of Carbon/Epoxy Composites
Authors: Emre Ozaslan, Bulent Acar, Mehmet Ali Guler
Abstract:
Unidirectional composites are very popular structural materials used in the aerospace, marine, energy, and automotive industries thanks to their superior material properties. However, the mechanical behavior of composite materials is more complicated than that of isotropic materials because of their anisotropic nature. A stress concentration in the structure, such as a hole, makes the problem even more complicated. Therefore, an enormous number of tests is required to understand the mechanical behavior and strength of composites containing stress concentrations. Accurate finite element analysis and analytical models make it possible to understand the mechanical behavior and predict the strength of composites without this large number of tests, which cost considerable time and money. In this study, unidirectional carbon/epoxy composite specimens with a central circular hole were investigated in terms of stress concentration factor and strength prediction. Specimens with different specimen width (W) to hole diameter (D) ratios were tested to investigate the effect of hole size on stress concentration and strength. Specimens with the same width-to-hole-diameter ratio but different sizes were also tested to investigate the size effect. Finite element analysis was performed to determine the stress concentration factor for all specimen configurations. For the quasi-isotropic laminate, the stress concentration factor was found to increase by approximately 15% as the W/D ratio decreased from 6 to 3. The point stress criterion (PSC), the inherent flaw method, and progressive failure analysis were compared in terms of predicting specimen strength. All methods predicted the strength of the specimens with a maximum error of 8%. PSC performed better than the other methods for high W/D ratios, whereas the inherent flaw method was more successful for low W/D ratios. Increasing the W/D ratio by a factor of 4 raised the failure strength of the composite specimen by 62.4%. For specimens with a constant W/D ratio, all strength prediction methods were more successful for smaller specimens than for larger ones. Doubling the specimen width and hole diameter together reduced the specimen failure strength by 13.2%.
Keywords: failure, strength, stress concentration, unidirectional composites
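The point stress criterion evaluates the stress at a fixed characteristic distance d0 from the hole edge and predicts failure when it reaches the unnotched strength. A minimal sketch follows; it uses the isotropic Kirsch stress distribution along the net section as a simplification (a quasi-isotropic laminate would normally use the orthotropic distribution), and the strength, hole radius, and characteristic distance are assumed values, not the paper's.

```python
import numpy as np

def sigma_yy_ratio(x, R):
    """Kirsch solution along the net section of an infinite isotropic plate with a
    circular hole: sigma_yy(x) / sigma_applied, with x measured from the hole center."""
    r = R / x
    return 0.5 * (2.0 + r**2 + 3.0 * r**4)

def psc_notched_strength(unnotched_strength, R, d0):
    """Point stress criterion: failure when sigma_yy at x = R + d0 reaches the
    unnotched strength, so the notched strength is reduced by that stress ratio."""
    return unnotched_strength / sigma_yy_ratio(R + d0, R)

# Assumed values for illustration only.
sigma_0 = 800.0   # MPa, unnotched laminate strength
R = 3.0           # mm, hole radius
d0 = 1.0          # mm, characteristic distance
print(psc_notched_strength(sigma_0, R, d0))
```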
Procedia PDF Downloads 155
21993 Using High Performance Computing for Online Flood Monitoring and Prediction
Authors: Stepan Kuchar, Martin Golasowski, Radim Vavrik, Michal Podhoranyi, Boris Sir, Jan Martinovic
Abstract:
The main goal of this article is to describe the online flood monitoring and prediction system Floreon+, primarily developed for the Moravian-Silesian region in the Czech Republic, and the basic process it uses for running automatic rainfall-runoff and hydrodynamic simulations along with their calibration and uncertainty modeling. Executing such a process sequentially takes too long to be acceptable in the online scenario, so the use of a high-performance computing environment is proposed for all parts of the process to shorten their duration. Finally, a case study on the Ostravice river catchment is presented that shows the actual durations and the speed-up gained from the parallel implementation.
Keywords: flood prediction process, high performance computing, online flood prediction system, parallelization
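Independent simulation runs, such as the members of an uncertainty ensemble, can be distributed across cores. A minimal sketch with Python's multiprocessing is given below; run_rainfall_runoff is a hypothetical stand-in for one calibrated rainfall-runoff run and is not Floreon+'s actual interface.

```python
from multiprocessing import Pool

def run_rainfall_runoff(params):
    """Hypothetical placeholder for one rainfall-runoff simulation run.
    In the real system this would call the hydrological model with `params`."""
    rainfall_scale, roughness = params
    # Toy computation standing in for the simulated peak discharge.
    return 100.0 * rainfall_scale / roughness

if __name__ == "__main__":
    # Uncertainty ensemble: perturbed inputs for each ensemble member.
    ensemble = [(1.0 + 0.05 * i, 0.030 + 0.001 * i) for i in range(8)]
    with Pool(processes=4) as pool:            # run 4 simulations at a time
        peaks = pool.map(run_rainfall_runoff, ensemble)
    print(peaks)
```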
Procedia PDF Downloads 492
21992 Numerical Analysis of Effect of Crack Location on the Crack Breathing Behavior
Authors: H. M. Mobarak, Helen Wu, Keqin Xiao
Abstract:
In this work, a three-dimensional finite element model was developed to investigate the crack breathing behavior at different crack locations, considering the effect of the unbalance force. A two-disk rotor with a crack is simulated using ABAQUS. The duration of each crack status (open, closed, and partially open/closed) during a full shaft rotation was examined to analyse the crack breathing behavior. The breathing behavior of an unbalanced cracked shaft was found to differ at different crack locations. The breathing behavior of the crack along the shaft length is divided into different regions depending on the unbalance force and crack location. The simulated results of this work can be further utilised to obtain the time-varying stiffness matrix of the cracked shaft element under the influence of the unbalance force.
Keywords: crack breathing, crack location, slant crack, unbalance force, rotating shaft
Procedia PDF Downloads 272
21991 Starlink Satellite Collision Probability Simulation Based on Simplified Geometry Model
Authors: Toby Li, Julian Zhu
Abstract:
In this paper, a model based on a simplified geometry is introduced to give a very conservative collision probability prediction for Starlink satellites in their most densely clustered region. Under this model, the probability of collision in the region where Starlink satellites are clustered most densely is found to be 8.484 × 10^-4. The predicted collision probability increases nonlinearly with the safety distance that is set. This simple model provides evidence that the continuous development of maneuver avoidance systems is necessary for the future orbital safety of satellites in an increasingly harsh Low Earth Orbit environment.
Keywords: Starlink, collision probability, debris, geometry model
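For context, a generic swept-volume (kinetic-theory) estimate of collision probability is sketched below; this is a common simplified-geometry approach, not necessarily the exact model of the paper, and the object density, relative velocity, and safety radius are assumed illustrative values.

```python
import math

def collision_probability(density_per_km3, safety_radius_km, rel_velocity_km_s, duration_s):
    """Swept-volume estimate: the satellite's safety sphere sweeps a cylinder
    through a uniform object field; P = 1 - exp(-n * A * v * t)."""
    swept_area = math.pi * safety_radius_km ** 2            # km^2 cross-section
    expected_encounters = density_per_km3 * swept_area * rel_velocity_km_s * duration_s
    return 1.0 - math.exp(-expected_encounters)

# Assumed illustrative values, not Starlink data.
n = 1e-8        # objects per km^3 in the densely clustered shell
r_safe = 0.005  # 5 m safety radius, in km
v_rel = 10.0    # km/s typical relative velocity in LEO
one_year = 365 * 24 * 3600.0
print(collision_probability(n, r_safe, v_rel, one_year))
```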
Procedia PDF Downloads 82
21990 Life Prediction of Condenser Tubes Applying Fuzzy Logic and Neural Network Algorithms
Authors: A. Majidian
Abstract:
The life prediction of thermal power plant components is necessary to prevent unexpected outages, optimize maintenance tasks in periodic overhauls, and plan inspection tasks and their schedules. One of the most critical components in a power plant is the condenser, because its failure can affect many other components positioned downstream of it. This paper deals with the factors affecting condenser life. The dependency of failure rates on these factors has been investigated using Artificial Neural Network (ANN) and fuzzy logic algorithms. These algorithms have shown their capabilities as dynamic tools for evaluating the life prediction of power plant equipment.
Keywords: life prediction, condenser tube, neural network, fuzzy logic
Procedia PDF Downloads 351
21989 Modeling of the Dynamic Characteristics of a Spindle with Experimental Validation
Authors: Jhe-Hao Huang, Kun-Da Wu, Wei-Cheng Shih, Jui-Pin Hung
Abstract:
This study presents an investigation of the dynamic characteristics of a spindle tool system by experimental and finite element modeling approaches. As is well known, machining stability is largely determined by the dynamic characteristics of the spindle tool system. Therefore, understanding the factors affecting the dynamic behavior of a spindle tooling system is a prerequisite for controlling the final machining performance of the machine tool system. To this purpose, a physical spindle unit was employed to assess the dynamic characteristics by vibration tests. Then, a three-dimensional finite element model of a high-speed spindle system integrated with the tool holder was created to simulate the dynamic behavior. To model the angular contact bearings, a series of spring elements were introduced between the inner and outer rings. The spring constant can be represented by the contact stiffness of the rolling bearing based on Hertz theory. The interface characteristics between the spindle nose and the tool holder taper can be quantified from the comparison of the measurements and predictions. According to the results obtained from experiments and finite element predictions, the vibration behavior of the spindle is dominated by the bending deformation of the spindle shaft in different modes, which is further determined by the stiffness of the bearings in the spindle housing. Also, the spindle unit with the tool holder shows a different dynamic behavior from that of the spindle without the tool holder. This indicates that the interface property between the tool holder and the spindle nose has a dominant influence on the dynamic characteristics of the spindle tool system. Overall, the dynamic behaviors of the spindle with and without the tool holder can be successfully investigated through the finite element model proposed in this study. The prediction accuracy is determined by the modeling of the rolling interface of the ball bearings in the spindle and the interface characteristics between the tool holder and the spindle nose. Besides, identification of the interface characteristics of the ball bearings and the spindle tool holder is important for the refinement of the spindle tooling system to achieve optimum machining performance.
Keywords: contact stiffness, dynamic characteristics, spindle, tool holder interface
Procedia PDF Downloads 298
21988 Predicting National Football League (NFL) Match with Score-Based System
Authors: Marcho Setiawan Handok, Samuel S. Lemma, Abdoulaye Fofana, Naseef Mansoor
Abstract:
This paper proposes a method to predict the outcome of National Football League matches using data from 2019 to 2022 and compares it with other popular models. The model uses open-source statistical data for each team, such as passing yards, rushing yards, fumbles lost, and scoring. Each statistic has an offensive and a defensive side. For instance, a data set of anticipated values for a specific matchup is created by comparing the offensive passing yards gained by one team to the defensive passing yards allowed by the opposition. We evaluated the model's performance by contrasting its results with those of established prediction algorithms. This research uses a neural network to predict the score of a National Football League match and then predict the winner of the game.
Keywords: game prediction, NFL, football, artificial neural network
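A minimal sketch of the matchup-feature idea follows: each offensive statistic is paired with the opponent's corresponding defensive statistic, and a small neural network regressor predicts each team's score. The team names, column names, and numbers are invented, and scikit-learn's MLPRegressor stands in for whatever network the authors used.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical season-average stats per team (offense gained / defense allowed).
teams = {
    "A": {"off_pass": 260.0, "off_rush": 120.0, "def_pass": 230.0, "def_rush": 110.0},
    "B": {"off_pass": 210.0, "off_rush": 140.0, "def_pass": 250.0, "def_rush": 100.0},
}

def matchup_features(offense, defense):
    """Pair each offensive stat with the opposing defensive stat, as in the abstract:
    e.g. team A's passing offense vs. team B's passing defense."""
    return [offense["off_pass"], defense["def_pass"],
            offense["off_rush"], defense["def_rush"]]

# Toy training set: past matchups (features) and the offense's final score (target).
X_train = np.array([[250, 240, 115, 105], [220, 260, 135, 95], [270, 225, 125, 112]])
y_train = np.array([27.0, 20.0, 31.0])

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

score_a = model.predict([matchup_features(teams["A"], teams["B"])])[0]
score_b = model.predict([matchup_features(teams["B"], teams["A"])])[0]
print("Predicted winner:", "A" if score_a > score_b else "B")
```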
Procedia PDF Downloads 84
21987 Exploring the Activity Fabric of an Intelligent Environment with Hierarchical Hidden Markov Theory
Authors: Chiung-Hui Chen
Abstract:
The Internet of Things (IoT) was designed for widespread convenience. With the smart tag and the sensing network, a large quantity of dynamic information is immediately presented in the IoT. Through the internal communication and interaction, meaningful objects provide real-time services for users. Therefore, the service with appropriate decision-making has become an essential issue. Based on the science of human behavior, this study employed the environment model to record the time sequences and locations of different behaviors and adopted the probability module of the hierarchical Hidden Markov Model for the inference. The statistical analysis was conducted to achieve the following objectives: First, define user behaviors and predict the user behavior routes with the environment model to analyze user purposes. Second, construct the hierarchical Hidden Markov Model according to the logic framework, and establish the sequential intensity among behaviors to get acquainted with the use and activity fabric of the intelligent environment. Third, establish the intensity of the relation between the probability of objects’ being used and the objects. The indicator can describe the possible limitations of the mechanism. As the process is recorded in the information of the system created in this study, these data can be reused to adjust the procedure of intelligent design services.Keywords: behavior, big data, hierarchical hidden Markov model, intelligent object
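To make the probability machinery concrete, here is a flat-HMM forward-pass sketch; the hierarchical extension used in the study adds further levels of hidden states, which is omitted here, and the activities, sensors, and probabilities are invented for illustration.

```python
import numpy as np

# Toy flat HMM over household activities; the hierarchical version in the paper
# would stack another layer of hidden states above these. All numbers are invented.
states = ["cooking", "watching_tv", "sleeping"]
observations = ["kitchen_sensor", "livingroom_sensor", "bedroom_sensor"]

start_p = np.array([0.4, 0.4, 0.2])
trans_p = np.array([[0.6, 0.3, 0.1],     # transitions between activities
                    [0.2, 0.6, 0.2],
                    [0.1, 0.3, 0.6]])
emit_p = np.array([[0.8, 0.15, 0.05],    # P(sensor event | activity)
                   [0.1, 0.8, 0.1],
                   [0.05, 0.15, 0.8]])

def forward(obs_seq):
    """Forward algorithm: likelihood of an observed sensor sequence under the HMM."""
    alpha = start_p * emit_p[:, obs_seq[0]]
    for o in obs_seq[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, o]
    return alpha.sum()

seq = [observations.index(s) for s in
       ["kitchen_sensor", "kitchen_sensor", "livingroom_sensor", "bedroom_sensor"]]
print(forward(seq))
```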
Procedia PDF Downloads 233
21986 Comparative Analysis of Predictive Models for Customer Churn Prediction in the Telecommunication Industry
Authors: Deepika Christopher, Garima Anand
Abstract:
To determine the best model for churn prediction in the telecom industry, this paper compares 11 machine learning algorithms, namely Logistic Regression, Support Vector Machine, Random Forest, Decision Tree, XGBoost, LightGBM, Cat Boost, AdaBoost, Extra Trees, Deep Neural Network, and Hybrid Model (MLPClassifier). It also aims to pinpoint the top three factors that lead to customer churn and conducts customer segmentation to identify vulnerable groups. According to the data, the Logistic Regression model performs the best, with an F1 score of 0.6215, 81.76% accuracy, 68.95% precision, and 56.57% recall. The top three attributes that cause churn are found to be tenure, Internet Service Fiber optic, and Internet Service DSL; conversely, the top three models in this article that perform the best are Logistic Regression, Deep Neural Network, and AdaBoost. The K means algorithm is applied to establish and analyze four different customer clusters. This study has effectively identified customers that are at risk of churn and may be utilized to develop and execute strategies that lower customer attrition.Keywords: attrition, retention, predictive modeling, customer segmentation, telecommunications
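A minimal sketch of the kind of pipeline compared above, using scikit-learn's LogisticRegression and the reported metrics (accuracy, precision, recall, F1); the telecom data set itself is not reproduced, so a tiny synthetic table with assumed column names stands in.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Tiny synthetic stand-in for a telecom churn table (tenure in months, fees, churn flag).
df = pd.DataFrame({
    "tenure":      [1, 34, 2, 45, 8, 22, 3, 60, 5, 28],
    "monthly_fee": [70, 56, 90, 42, 99, 60, 85, 40, 95, 55],
    "fiber_optic": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "churn":       [1, 0, 1, 0, 1, 0, 1, 0, 0, 0],
})
X, y = df.drop(columns="churn"), df["churn"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred, zero_division=0))
print("recall   :", recall_score(y_te, pred, zero_division=0))
print("F1       :", f1_score(y_te, pred, zero_division=0))
```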
Procedia PDF Downloads 57
21985 Social and Cognitive Stress Impact on Neuroscience and PTSD
Authors: Sadra Abbasi
Abstract:
The complex connection between psychological stress and the onset of different diseases has been an ongoing issue in the mental health field for a long time. Multiple studies have demonstrated that long-term stress can greatly heighten the likelihood of developing health issues like heart disease, cancer, arthritis, and severe depression. Recent research in cognitive science has provided insight into the intricate processes involved in posttraumatic stress disorder (PTSD), suggesting that distinct memory systems are accountable for both vivid reliving and normal autobiographical memories of traumatic incidents, as proposed by dual representation theory. This theory has important consequences for our comprehension of the neural mechanisms involved in fear and behavior related to threats, highlighting the amygdala-hippocampus-medial prefrontal cortex circuit as a crucial component in this process. This particular circuit, extensively researched in behavioral neuroscience, is essential for regulating the body's reactions to stress and trauma. This review will examine how incorporating a modern neuroscience viewpoint into an integrative case formulation offers a current way to comprehend the intricate connections among psychological stress, trauma, and disease.Keywords: social, cognitive, stress, neuroscience, behavior, PTSD
Procedia PDF Downloads 36
21984 Infilling Strategies for Surrogate Model Based Multi-disciplinary Analysis and Applications to Velocity Prediction Programs
Authors: Malo Pocheau-Lesteven, Olivier Le Maître
Abstract:
Engineering and optimisation of complex systems is often achieved through multi-disciplinary analysis of the system, where each subsystem is modeled and interacts with other subsystems to model the complete system. The coherence of the outputs of the different subsystems is achieved through the use of compatibility constraints, which enforce the coupling between the subsystems. Due to the complexity of some subsystems and the computational cost of evaluating their respective models, it is often necessary to build surrogate models of these subsystems to allow their repeated evaluation at a relatively low computational cost. In this paper, Gaussian processes are used, as their probabilistic nature can be leveraged to evaluate the likelihood of satisfying the compatibility constraints. This paper presents infilling strategies that build accurate surrogate models of the subsystems in the areas where they are likely to meet the compatibility constraint. It is shown that these infilling strategies can reduce the computational cost of building surrogate models for a given level of accuracy. An application of these methods to velocity prediction programs used in offshore racing naval architecture further demonstrates their applicability in a real engineering context. Some examples of the application of uncertainty quantification to the field of naval architecture are also presented.
Keywords: infilling strategy, gaussian process, multi disciplinary analysis, velocity prediction program
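One way to use the Gaussian process' probabilistic output for infilling is to score candidate points by the probability that the predicted coupling residual satisfies the compatibility constraint (here |residual| ≤ tolerance). The sketch below uses scikit-learn; the residual function, tolerance, and infill rule are invented for illustration and are not the paper's specific strategy.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Invented coupling residual between two subsystems; the compatibility constraint
# is satisfied where the residual is close to zero.
def coupling_residual(x):
    return np.sin(3.0 * x) + 0.3 * x

X_train = np.array([[0.0], [0.8], [1.6], [2.4], [3.2]])
y_train = coupling_residual(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X_train, y_train)

X_cand = np.linspace(0.0, 3.2, 200).reshape(-1, 1)
mean, std = gp.predict(X_cand, return_std=True)
std = np.maximum(std, 1e-9)              # avoid division by zero at training points

tol = 0.05  # compatibility tolerance on the residual
p_feasible = norm.cdf((tol - mean) / std) - norm.cdf((-tol - mean) / std)

# Infill where the surrogate is most likely to satisfy the constraint.
x_next = X_cand[np.argmax(p_feasible)]
print("next infill point:", x_next)
```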
Procedia PDF Downloads 168
21983 Traffic Analysis and Prediction Using Closed-Circuit Television Systems
Authors: Aragorn Joaquin Pineda Dela Cruz
Abstract:
Road traffic congestion is continually deteriorating in Hong Kong. The largest contributing factor is the increase in vehicle fleet size, resulting in higher competition over the utilisation of road space. This study proposes a project that can process closed-circuit television images and videos to provide real-time traffic detection and prediction capabilities. Specifically, a deep-learning model involving computer vision techniques for video and image-based vehicle counting, then a separate model to detect and predict traffic congestion levels based on said data. State-of-the-art object detection models such as You Only Look Once and Faster Region-based Convolutional Neural Networks are tested and compared on closed-circuit television data from various major roads in Hong Kong. It is then used for training in long short-term memory networks to be able to predict traffic conditions in the near future, in an effort to provide more precise and quicker overviews of current and future traffic conditions relative to current solutions such as navigation apps.Keywords: intelligent transportation system, vehicle detection, traffic analysis, deep learning, machine learning, computer vision, traffic prediction
Procedia PDF Downloads 102
21982 Applying the Regression Technique for Prediction of the Acute Heart Attack
Authors: Paria Soleimani, Arezoo Neshati
Abstract:
Myocardial infarction is one of the leading causes of death in the world. Some of these deaths occur even before the patient reaches the hospital. Myocardial infarction occurs as a result of impaired blood supply. Because most of these deaths are due to coronary artery disease, awareness of the warning signs of a heart attack is essential. Some heart attacks are sudden and intense, but most of them start slowly, with mild pain or discomfort, so early detection and successful treatment of these symptoms is vital. Therefore, the importance and usefulness of a system designed to assist physicians in the early diagnosis of acute heart attacks is obvious. The purpose of this study is to determine how well a predictive model would perform based only on patient-reportable clinical history factors, without using diagnostic tests or physical exams. This type of prediction model might have applications outside the hospital setting, giving accurate advice to patients and influencing them to seek care in appropriate situations. For this purpose, data were collected on 711 heart patients in Iranian hospitals, and 28 clinical factors that can be reported by patients were studied. Three logistic regression models were built on the basis of the 28 features to predict the risk of heart attack. The best logistic regression model in terms of performance had a C-index of 0.955 and an accuracy of 94.9%. The variables severe chest pain, back pain, cold sweats, shortness of breath, nausea, and vomiting were selected as the main features.
Keywords: coronary heart disease, acute heart attacks, prediction, logistic regression
Procedia PDF Downloads 449
21981 Prediction of Sepsis Illness from Patients Vital Signs Using Long Short-Term Memory Network and Dynamic Analysis
Authors: Marcio Freire Cruz, Naoaki Ono, Shigehiko Kanaya, Carlos Arthur Mattos Teixeira Cavalcante
Abstract:
The systems that record patient care information, known as Electronic Medical Records (EMR), and those that monitor patients' vital signs, such as heart rate, body temperature, and blood pressure, have been extremely valuable for the effectiveness of patient treatment. Several studies have used data from EMRs and patients' vital signs to predict illnesses. Among them, we highlight those that intend to predict, classify, or at least identify patterns of sepsis in patients under vital-sign monitoring. Sepsis is an organ dysfunction caused by a dysregulated patient response to an infection, and it affects millions of people worldwide. Early detection of sepsis is expected to provide a significant improvement in its treatment. Previous works usually combined medical, statistical, mathematical, and computational models to develop early detection methods, seeking higher accuracy with the smallest number of variables. Among other techniques, there are studies using survival analysis, expert systems, machine learning, and deep learning that reached great results. In our research, patients are modeled as points moving each hour in an n-dimensional space, where n is the number of vital signs (variables). These points can reach a sepsis target point after some time. For now, the sepsis target point is calculated using the median of all patients' variables at sepsis onset. From these points, we calculate for each hour the position vector, the first derivative (velocity vector), and the second derivative (acceleration vector) of the variables to evaluate their behavior, and we construct a prediction model based on a Long Short-Term Memory (LSTM) network that includes these derivatives as explanatory variables. The accuracy of the prediction 6 hours before the time of sepsis, considering only the vital signs, reached 83.24%; by including the position, velocity, and acceleration vectors, we obtained 94.96%. The data are collected from the Medical Information Mart for Intensive Care (MIMIC) database, a public database that contains vital signs, laboratory test results, observations, notes, and so on, from more than 60,000 patients.
Keywords: dynamic analysis, long short-term memory, prediction, sepsis
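The velocity and acceleration features described above can be obtained as hourly finite differences of each vital sign; a minimal sketch follows, with synthetic vitals and a plain first/second difference in place of whatever smoothing the authors may have applied.

```python
import numpy as np

# Synthetic hourly vital signs for one patient: heart rate, temperature, systolic BP.
vitals = np.array([
    [ 88.0, 37.1, 118.0],
    [ 92.0, 37.4, 115.0],
    [ 97.0, 37.9, 110.0],
    [105.0, 38.4, 102.0],
    [114.0, 38.9,  95.0],
])

position = vitals                             # the point in n-dimensional vital-sign space
velocity = np.gradient(position, axis=0)      # first derivative per hour
acceleration = np.gradient(velocity, axis=0)  # second derivative per hour

# Feature matrix per hour: [position | velocity | acceleration], ready to feed
# an LSTM as one time step of explanatory variables.
features = np.hstack([position, velocity, acceleration])
print(features.shape)   # (5 hours, 9 features)
```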
Procedia PDF Downloads 125
21980 Study the Effect of Friction on Barreling Behavior during Upsetting Process Using Anand Model
Authors: H. Mohammadi Majd, M. Jalali Azizpour, V. Tavaf, A. Jaderi
Abstract:
In upsetting processes, contact friction significantly influences metal flow, the stress-strain state, and the process parameters. Furthermore, tribological conditions influence workpiece deformation and its dimensional precision. A viscoplastic constitutive law, the Anand model, was applied to represent the inelastic deformation behavior in the upsetting process. This paper presents research results on the influence of the contact friction coefficient on workpiece deformation in upsetting, obtained with finite element simulations. The approach was tested in upsetting simulations of three different specimens and the corresponding material, and it can be successfully employed to predict the deformation in the upsetting process.
Keywords: friction, upsetting, barreling, Anand model
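For reference, the Anand model's flow rule is usually written as below; the evolution equation for the deformation resistance s is omitted, and the symbols follow the common formulation, which is assumed rather than quoted from the paper.

```latex
% Anand viscoplastic flow rule (common form; assumed, not quoted from the paper):
% inelastic strain rate driven by the stress sigma and the deformation resistance s.
\dot{\varepsilon}_p \;=\; A \, \exp\!\left(-\frac{Q}{R\,T}\right)
\left[\sinh\!\left(\xi \,\frac{\sigma}{s}\right)\right]^{1/m}
```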
Procedia PDF Downloads 336
21979 Taleghan Dam Break Numerical Modeling
Authors: Hamid Goharnejad, Milad Sadeghpoor Moalem, Mahmood Zakeri Niri, Leili Sadeghi Khalegh Abadi
Abstract:
While there are many benefits to using reservoir dams, their failure leads to destructive effects. From the viewpoint of the International Commission on Large Dams (ICOLD), dam break means the collapse of the whole or some part of a dam, leaving the dam unable to hold water. Therefore, studying the dam break phenomenon and predicting its behavior and effects reduces the losses and damages of this phenomenon. One of the most common types of reservoir dam is the embankment dam. Overtopping in embankment dams occurs because of the inability of the flood discharge system to release inflows to the reservoir. An important issue for managers and engineers is evaluating the performance of a reservoir dam when the reservoir rim slides into the storage, creating large and long waves. In this study, the effects of floods that caused overtopping of the dam have been investigated, assuming that the spillway is unable to release the inflow. To determine the outflow hydrograph resulting from the dam break, a numerical model using the Flow-3D software and empirical equations were used. The results of the numerical model and their comparison with the empirical equations show that both can be used to study the flood resulting from a dam break.
Keywords: embankment dam break, empirical equations, Taleghan dam, Flow-3D numerical model
Procedia PDF Downloads 321
21978 Prediction of Gully Erosion with Stochastic Modeling by using Geographic Information System and Remote Sensing Data in North of Iran
Authors: Reza Zakerinejad
Abstract:
Gully erosion is a serious problem threatening the sustainability of agricultural areas, rangeland, and water resources in a large part of Iran. This type of water erosion is the main source of sedimentation in many catchment areas in the north of Iran. Since many national assessment approaches have applied only qualitative models, the aim of this study is to predict the spatial distribution of gully erosion processes by means of detailed terrain analysis and GIS-based logistic regression in the loess deposits of a case study in the Golestan Province. In this study, a DEM with 25-meter resolution derived from ASTER data has been used, and Landsat ETM data have been used for land-use mapping. The TreeNet model, as a stochastic modeling approach, was applied to predict the areas susceptible to gully erosion; 20% of the data were set aside for testing, with performance evaluated using the ROC. GIS and satellite image analysis techniques were therefore applied to derive the input information for these stochastic models. The result of this study is a highly accurate map of gully erosion potential.
Keywords: TreeNet model, terrain analysis, Golestan Province, Iran
Procedia PDF Downloads 535
21977 Artificial Intelligence Methods for Returns Expectations in Financial Markets
Authors: Yosra Mefteh Rekik, Younes Boujelbene
Abstract:
We introduce in this paper a new conceptual model representing stock market dynamics. This model is essentially based on the cognitive behavior of intelligent investors. In order to validate our model, we build an artificial stock market simulation based on agent-oriented methodologies. The proposed simulator is composed of a market supervisor agent, essentially responsible for executing transactions via an order book, and various kinds of investor agents depending on their profile. The purpose of this simulation is to understand the influence of the psychological character of an investor and its neighborhood on its decision-making, and their impact on the market in terms of price fluctuations. The difficulty of prediction is due to several features: the complexity, non-linearity, and dynamism of the financial market system, as well as investor psychology. An Artificial Neural Network learning mechanism takes on the role of the traders, who form their future return expectations and place orders based on those expectations. The results of intensive analysis indicate that the existence of agents with heterogeneous beliefs and preferences provides a better understanding of price dynamics in the financial market.
Keywords: artificial intelligence methods, artificial stock market, behavioral modeling, multi-agent based simulation
Procedia PDF Downloads 445
21976 Developing a Hybrid Method to Diagnose and Predict Sports Related Concussions with Machine Learning
Authors: Melody Yin
Abstract:
Concussions affect a large number of adolescents; they make up as much as half of the diagnosed concussions in America. This research proposes a hybrid machine learning model based on the combination of human/knowledge-based domains and computer-generated feature rankings to improve the accuracy of diagnosing sports-related concussion (SRC). Using a data set of symptoms collected on the sideline after SRC events, the symptom selection criteria were developed by using Google AutoML's feature importance score to identify the top 10 symptom features. In addition, symptom domains were introduced as another parameter, categorizing the symptoms into physical, cognitive, sleep, and emotional domains. The hybrid machine learning model was trained with a combination of the top 10 symptoms and the 4 domains. From the results, the hybrid model was the best performer for symptom resolution time prediction at the 2- and 4-week thresholds. This research is a proof-of-concept study in the use of domains along with machine learning to improve concussion prediction accuracy. It is also possible that the use of domains can make the model more efficient due to reduced training time. This research examines the use of a hybrid method in predicting sports-related concussion; the achievement is based on data preprocessing and a hybrid criteria-selection method to achieve high performance.
Keywords: hybrid model, machine learning, sports related concussion, symptom resolution time
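A sketch of the idea of combining computer-generated rankings with knowledge-based domains: rank symptoms by a model's feature importance, keep the top ones, and add per-domain aggregate features. scikit-learn's random forest stands in for Google AutoML's importance score, and the symptom names, domain assignments, and data are invented.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
symptoms = ["headache", "dizziness", "fogginess", "memory_issue",
            "drowsiness", "sleep_less", "irritability", "sadness"]
domains = {"physical": ["headache", "dizziness"],
           "cognitive": ["fogginess", "memory_issue"],
           "sleep": ["drowsiness", "sleep_less"],
           "emotional": ["irritability", "sadness"]}

# Invented sideline symptom severities (0-6) and a binary "resolved within 2 weeks" label.
X = pd.DataFrame(rng.integers(0, 7, size=(120, len(symptoms))), columns=symptoms)
y = (X["headache"] + X["fogginess"] + rng.normal(0, 2, 120) < 6).astype(int)

# Computer-generated ranking: keep the top-k symptoms by feature importance
# (top 4 here because this toy set only has 8 symptoms; the paper keeps 10).
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = pd.Series(rf.feature_importances_, index=symptoms).sort_values(ascending=False)
top_symptoms = list(ranking.index[:4])

# Knowledge-based domains: add one aggregate severity feature per domain.
hybrid = X[top_symptoms].copy()
for name, members in domains.items():
    hybrid[f"domain_{name}"] = X[members].sum(axis=1)

print(hybrid.columns.tolist())
```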
Procedia PDF Downloads 168
21975 Facility Data Model as Integration and Interoperability Platform
Authors: Nikola Tomasevic, Marko Batic, Sanja Vranes
Abstract:
Emerging Semantic Web technologies can be seen as the next step in the evolution of intelligent facility management systems. In particular, this involves increased usage of open-source and/or standardized concepts for data classification and semantic interpretation. To deliver such facility management systems, providing a comprehensive integration and interoperability platform in the form of a facility data model is a prerequisite. In this paper, one possible modelling approach for such an integrative facility data model, based on the ontology modelling concept, is presented. The complete ontology development process is described, starting from input data acquisition, through the definition of ontology concepts, and finally the population of those concepts. At the beginning, a core facility ontology was developed, representing the generic facility infrastructure composed of the common facility concepts relevant from the facility management perspective. To develop the data model of a specific facility infrastructure, the core facility ontology was first extended and then populated. For the development of the full-blown facility data models, Malpensa and Fiumicino airports in Italy, two major European air-traffic hubs, were chosen as a test-bed platform. Furthermore, the way these ontology models supported the integration and interoperability of the overall airport energy management system was analyzed as well.
Keywords: airport ontology, energy management, facility data model, ontology modeling
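A minimal sketch of the extend-then-populate pattern with rdflib; the namespace, class names, properties, and the sample meter reading are invented and do not come from the actual airport models.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS, URIRef
from rdflib.namespace import OWL, XSD

FAC = Namespace("http://example.org/core-facility#")   # invented namespace
g = Graph()
g.bind("fac", FAC)

# Core facility ontology: generic concepts relevant to facility management.
for cls in ("Facility", "Building", "Zone", "EnergyMeter"):
    g.add((FAC[cls], RDF.type, OWL.Class))
g.add((FAC.Building, RDFS.subClassOf, FAC.Facility))

# Extension for a specific infrastructure: an airport terminal is a kind of building.
g.add((FAC.AirportTerminal, RDF.type, OWL.Class))
g.add((FAC.AirportTerminal, RDFS.subClassOf, FAC.Building))

# Population: one concrete terminal and an energy meter reading (invented values).
t1 = URIRef("http://example.org/airport#Terminal1")
m1 = URIRef("http://example.org/airport#Meter42")
g.add((t1, RDF.type, FAC.AirportTerminal))
g.add((m1, RDF.type, FAC.EnergyMeter))
g.add((m1, FAC.locatedIn, t1))
g.add((m1, FAC.readingKWh, Literal(1250.5, datatype=XSD.double)))

print(g.serialize(format="turtle"))
```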
Procedia PDF Downloads 448
21974 Prediction of Thermodynamic Properties of N-Heptane in the Critical Region
Authors: Sabrina Ladjama, Aicha Rizi, Azzedine Abbaci
Abstract:
In this work, we use the crossover model to formulate a comprehensive fundamental equation of state for the thermodynamic properties of several n-alkanes in the critical region that extends to the classical region. This equation of state is constructed on the basis of a comparison with selected measurements of pressure-density-temperature data and isochoric and isobaric heat capacities. The model can be applied in a wide range of temperatures and densities around the critical point of n-heptane. It is found that the developed model represents most of the reliable experimental data accurately.
Keywords: crossover model, critical region, fundamental equation, n-heptane
Procedia PDF Downloads 474
21973 SNR Classification Using Multiple CNNs
Authors: Thinh Ngo, Paul Rad, Brian Kelley
Abstract:
Noise estimation is essential in today's wireless systems for power control, adaptive modulation, interference suppression, and quality of service. Deep learning (DL) has already been applied in the physical layer for modulation and signal classification. An unacceptably low accuracy of less than 50% is found to undermine the traditional application of DL classification to SNR prediction. In this paper, we use a divide-and-conquer algorithm and a classifier fusion method to simplify SNR classification and therefore enhance DL learning and prediction. Specifically, multiple CNNs are used for classification rather than a single CNN. Each CNN performs a binary classification for a single SNR threshold with two labels: less than, or greater than or equal. Together, the multiple CNNs are combined to effectively classify over a range of SNR values from −20 ≤ SNR ≤ 32 dB. We use pre-trained CNNs to predict SNR over a wide range of joint channel parameters, including multiple Doppler shifts (0, 60, 120 Hz), power-delay profiles, and signal modulation types (QPSK, 16-QAM, 64-QAM). The approach achieves an individual SNR prediction accuracy of 92%, a composite accuracy of 70%, and prediction convergence one order of magnitude faster than that of traditional estimation.
Keywords: classification, CNN, deep learning, prediction, SNR
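The fusion step can be as simple as counting how many per-threshold binary classifiers vote "greater than or equal"; a sketch with stand-in binary decisions follows (a real system would replace the threshold comparison with each pre-trained CNN's output, and the 4 dB threshold spacing is an assumption).

```python
import numpy as np

# SNR thresholds covered by the bank of binary classifiers, in dB.
thresholds = np.arange(-20, 33, 4)

def binary_decisions(true_snr_db):
    """Stand-in for the bank of per-threshold CNNs: each one answers
    'is the SNR >= my threshold?'. A real system would use the CNN outputs."""
    return (true_snr_db >= thresholds).astype(int)

def fuse(decisions):
    """Classifier fusion: the number of positive votes locates the SNR bin."""
    k = int(decisions.sum())            # how many thresholds the signal exceeds
    if k == 0:
        return f"below {thresholds[0]} dB"
    if k == len(thresholds):
        return f"at or above {thresholds[-1]} dB"
    return f"in [{thresholds[k - 1]}, {thresholds[k]}) dB"

print(fuse(binary_decisions(7.3)))   # -> "in [4, 8) dB"
```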
Procedia PDF Downloads 134
21972 Evaluation of Spatial Distribution Prediction for Site-Scale Soil Contaminants Based on Partition Interpolation
Authors: Pengwei Qiao, Sucai Yang, Wenxia Wei
Abstract:
Soil pollution has become an important issue in China. Accurate spatial distribution prediction of pollutants with interpolation methods is the basis for soil remediation in the site. However, a relatively strong variability of pollutants would decrease the prediction accuracy. Theoretically, partition interpolation can result in accurate prediction results. In order to verify the applicability of partition interpolation for a site, benzo (b) fluoranthene (BbF) in four soil layers was adopted as the research object in this paper. IDW (inverse distance weighting)-, RBF (radial basis function)-and OK (ordinary kriging)-based partition interpolation accuracies were evaluated, and their influential factors were analyzed; then, the uncertainty and applicability of partition interpolation were determined. Three conclusions were drawn. (1) The prediction error of partitioned interpolation decreased by 70% compared to unpartitioned interpolation. (2) Partition interpolation reduced the impact of high CV (coefficient of variation) and high concentration value on the prediction accuracy. (3) The prediction accuracy of IDW-based partition interpolation was higher than that of RBF- and OK-based partition interpolation, and it was suitable for the identification of highly polluted areas at a contaminated site. These results provide a useful method to obtain relatively accurate spatial distribution information of pollutants and to identify highly polluted areas, which is important for soil pollution remediation in the site.Keywords: accuracy, applicability, partition interpolation, site, soil pollution, uncertainty
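Within each partition, inverse distance weighting estimates a point as a distance-weighted average of the sampled concentrations in that same partition. A minimal sketch follows; the coordinates, BbF concentrations, partition labels, and the power parameter of 2 (a common default) are all assumptions for illustration.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighting: weights are 1 / distance**power."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d < eps):                      # query coincides with a sample point
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

# Synthetic site samples: coordinates (m) and BbF concentration (mg/kg), with a
# partition label (e.g. from land-use or statistical zoning of the site).
xy = np.array([[0, 0], [30, 10], [60, 5], [10, 80], [40, 90], [70, 85]], float)
conc = np.array([0.8, 1.1, 0.9, 6.5, 7.2, 5.9])
partition = np.array([0, 0, 0, 1, 1, 1])     # 0 = low zone, 1 = hot-spot zone

query, query_partition = np.array([50.0, 8.0]), 0
mask = partition == query_partition          # partitioned IDW: use same-zone samples only
print(idw(xy[mask], conc[mask], query))
```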
Procedia PDF Downloads 144
21971 Numerical Approach of RC Structural Members Exposed to Fire and After-Cooling Analysis
Authors: Ju-young Hwang, Hyo-Gyoung Kwak, Hong Jae Yim
Abstract:
This paper introduces a numerical analysis method for reinforced-concrete (RC) structures exposed to fire and compares the results with experimental results. The proposed analysis method for RC structures under high temperature consists of two procedures. The first step is to determine the temperature distribution across the section through a heat transfer analysis using the time-temperature curve. After determination of the temperature distribution, a nonlinear analysis follows. By considering material and geometrical non-linearity together with the temperature distribution, the nonlinear analysis predicts the behavior of the RC structure under fire over the exposure time. The proposed method is validated by comparison with the experimental results. Finally, a prediction model describing the state of after-cooling concrete is also introduced based on the results of an additional experiment. The product of this study is expected to be embedded in smart structure monitoring systems against fire in the u-City.
Keywords: RC structures, heat transfer analysis, nonlinear analysis, after-cooling concrete model
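The first step, obtaining the temperature distribution across the section from a time-temperature curve, can be illustrated with a 1D explicit finite-difference conduction model; the thermal properties and the ISO-834-like fire curve below are assumed round numbers, not the values used in the paper.

```python
import numpy as np

# 1D slab of concrete, fire-exposed on one face; explicit finite differences.
L, nx = 0.20, 41                 # 200 mm section, 41 nodes
alpha = 8.0e-7                   # assumed thermal diffusivity of concrete, m^2/s
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha         # stable time step (explicit scheme needs r <= 0.5)

def fire_curve(t_s):
    """Assumed time-temperature curve of the hot face (ISO-834-like shape)."""
    return 20.0 + 345.0 * np.log10(8.0 * t_s / 60.0 + 1.0)

T = np.full(nx, 20.0)            # initial temperature, deg C
t, t_end = 0.0, 3600.0           # simulate one hour of fire exposure
while t < t_end:
    T[0] = fire_curve(t)                     # exposed face follows the fire curve
    T[-1] = T[-2]                            # insulated (adiabatic) far face
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    t += dt

print("temperature at 25/50/100 mm depth:", T[5], T[10], T[20])
```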
Procedia PDF Downloads 368
21970 A Generalized Model for Performance Analysis of Airborne Radar in Clutter Scenario
Authors: Vinod Kumar Jaysaval, Prateek Agarwal
Abstract:
Performance prediction of airborne radar is a challenging and cumbersome task in a clutter scenario for different types of targets. A generalized model is required to predict the performance of the radar for air targets as well as ground moving targets. In this paper, we propose a generalized model to bring out the performance of airborne radar for different Pulse Repetition Frequencies (PRF) as well as different types of targets. The model provides a platform to derive the subsystem parameters for different applications and performance requirements under different types of clutter terrain.
Keywords: airborne radar, blind zone, clutter, probability of detection
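Performance models of this kind typically start from the single-pulse radar range equation for signal-to-noise ratio before clutter and detection statistics are layered on top; a sketch with assumed round-number parameters (not the paper's radar) is shown below.

```python
import math

def single_pulse_snr_db(pt_w, gain_db, wavelength_m, rcs_m2, range_m,
                        bandwidth_hz, noise_figure_db, losses_db):
    """Classical radar range equation:
    SNR = Pt * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4 * k * T0 * B * F * L)."""
    k, t0 = 1.380649e-23, 290.0
    g = 10 ** (gain_db / 10)
    f = 10 ** (noise_figure_db / 10)
    l = 10 ** (losses_db / 10)
    snr = (pt_w * g**2 * wavelength_m**2 * rcs_m2) / (
        (4 * math.pi) ** 3 * range_m**4 * k * t0 * bandwidth_hz * f * l)
    return 10 * math.log10(snr)

# Assumed illustrative parameters for an airborne radar against a 1 m^2 target.
print(single_pulse_snr_db(pt_w=5e3, gain_db=32, wavelength_m=0.03,
                          rcs_m2=1.0, range_m=60e3,
                          bandwidth_hz=1e6, noise_figure_db=4, losses_db=6))
```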
Procedia PDF Downloads 470
21969 Integration of Educational Data Mining Models to a Web-Based Support System for Predicting High School Student Performance
Authors: Sokkhey Phauk, Takeo Okazaki
Abstract:
The challenging task in educational institutions is to maximize the number of high-performing students and minimize the failure rate of poor-performing students. An effective way to approach this task is to understand students' learning patterns and their most influential factors, and to obtain an early prediction of student learning outcomes so that improvement policies can be set up in time. Educational data mining (EDM) is an emerging discipline combining data mining, statistics, and machine learning, concerned with extracting useful knowledge and information for the sake of improvement and development in the education environment. The aim of this work is to propose EDM techniques and integrate them into a web-based system for predicting poor-performing students. A comparative study of prediction models is conducted, and high-performing models are subsequently developed. The hybrid random forest (Hybrid RF) produces the most successful classification. For the context of intervention and improving learning outcomes, a feature selection method called MICHI, which combines the mutual information (MI) and chi-square (CHI) algorithms based on ranked feature scores, is introduced to select a dominant feature set that improves prediction performance; the obtained dominant set is then used as information for intervention. Using the proposed EDM techniques, an academic performance prediction system (APPS) is developed to give educational stakeholders an early prediction of student learning outcomes for timely intervention. Experimental outcomes and evaluation surveys report the effectiveness and usefulness of the developed system, which is used to help educational stakeholders and related individuals intervene and improve student performance.
Keywords: academic performance prediction system, educational data mining, dominant factors, feature selection method, prediction model, student performance
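A sketch of the MI-plus-chi-square idea: score features with both criteria, rank each score, and combine the ranks to pick a dominant set. scikit-learn's mutual_info_classif and chi2 are used; the rank-averaging combination rule and the toy student data are assumptions, since the abstract does not give MICHI's exact combination.

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import chi2, mutual_info_classif

rng = np.random.default_rng(1)
n = 200
# Toy non-negative student features (chi2 requires non-negative inputs).
X = pd.DataFrame({
    "study_hours": rng.integers(0, 30, n),
    "absences":    rng.integers(0, 20, n),
    "prev_grade":  rng.integers(0, 100, n),
    "travel_time": rng.integers(5, 90, n),
})
y = (X["prev_grade"] + 2 * X["study_hours"] - 3 * X["absences"]
     + rng.normal(0, 15, n) > 60).astype(int)   # pass / at-risk label

mi = mutual_info_classif(X, y, random_state=0)
chi, _ = chi2(X, y)

scores = pd.DataFrame({"mi": mi, "chi2": chi}, index=X.columns)
# Combine the two criteria by averaging their ranks (assumed combination rule).
scores["combined_rank"] = scores.rank(ascending=False).mean(axis=1)
dominant = scores.sort_values("combined_rank").index[:2].tolist()
print(scores, "\ndominant features:", dominant)
```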
Procedia PDF Downloads 106
21968 Urban Big Data: An Experimental Approach to Building-Value Estimation Using Web-Based Data
Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin
Abstract:
Current real-estate value estimation is difficult for laymen and is usually performed by specialists. This paper presents an automated estimation process based on big data and machine-learning technology that calculates the influence of building conditions on real-estate prices. The present study analyzed actual building sales data for Nonhyeon-dong, Gangnam-gu, Seoul, Korea, measuring the major influencing factors among the various building conditions. Based on that analysis, a prediction model was established and applied using RapidMiner Studio, a graphical user interface (GUI)-based tool for deriving machine-learning prototypes. The prediction model is formulated by reference to previous examples; when new examples are applied, it analyses them and predicts accordingly. The analysis process discerns the crucial factors affecting price increases by calculating weighted values. The model was verified, and its accuracy determined, by comparing its predicted values with actual price increases.
Keywords: apartment complex, big data, life-cycle building value analysis, machine learning
Procedia PDF Downloads 374
21967 Numerical Erosion Investigation of Standalone Screen (Wire-Wrapped) Due to the Impact of Sand Particles Entrained in a Single-Phase Flow (Water Flow)
Authors: Ahmed Alghurabi, Mysara Mohyaldinn, Shiferaw Jufar, Obai Younis, Abdullah Abduljabbar
Abstract:
Erosion modeling equations were typically acquired from regulated experimental trials for solid particles entrained in single-phase or multi-phase flows. Evidently, those equations were later employed to predict the erosion damage caused by the continuous impacts of solid particles entrained in streamflow. It is also well-known that the particle impact angle and velocity do not change drastically in gas-sand flow erosion prediction; hence an accurate prediction of erosion can be projected. On the contrary, high-density fluid flows, such as water flow, through complex geometries, such as sand screens, greatly affect the sand particles’ trajectories/tracks and consequently impact the erosion rate predictions. Particle tracking models and erosion equations are frequently applied simultaneously as a method to improve erosion visualization and estimation. In the present work, computational fluid dynamic (CFD)-based erosion modeling was performed using a commercially available software; ANSYS Fluent. The continuous phase (water flow) behavior was simulated using the realizable K-epsilon model, and the secondary phase (solid particles), having a 5% flow concentration, was tracked with the help of the discrete phase model (DPM). To accomplish a successful erosion modeling, three erosion equations from the literature were utilized and introduced to the ANSYS Fluent software to predict the screen wire-slot velocity surge and estimate the maximum erosion rates on the screen surface. Results of turbulent kinetic energy, turbulence intensity, dissipation rate, the total pressure on the screen, screen wall shear stress, and flow velocity vectors were presented and discussed. Moreover, the particle tracks and path-lines were also demonstrated based on their residence time, velocity magnitude, and flow turbulence. On one hand, results from the utilized erosion equations have shown similarities in screen erosion patterns, locations, and DPM concentrations. On the other hand, the model equations estimated slightly different values of maximum erosion rates of the wire-wrapped screen. This is solely based on the fact that the utilized erosion equations were developed with some assumptions that are controlled by the experimental lab conditions.Keywords: CFD simulation, erosion rate prediction, material loss due to erosion, water-sand flow
Procedia PDF Downloads 163
21966 Uplink Throughput Prediction in Cellular Mobile Networks
Authors: Engin Eyceyurt, Josko Zec
Abstract:
Current and future cellular mobile communication networks generate enormous amounts of data. Networks have become extremely complex, with an extensive space of parameters, features, and counters. These networks are unmanageable with legacy methods, and an enhanced design and optimization approach, increasingly reliant on machine learning, is necessary. This paper proposes machine learning as a viable approach for uplink throughput prediction. LTE radio metrics, such as Reference Signal Received Power (RSRP), Reference Signal Received Quality (RSRQ), and Signal to Noise Ratio (SNR), are used to train models to estimate the expected uplink throughput. A prediction accuracy with a high coefficient of determination of 91.2% is obtained from measurements collected with a simple smartphone application.
Keywords: drive test, LTE, machine learning, uplink throughput prediction
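A minimal sketch of this kind of model: a regressor mapping RSRP/RSRQ/SNR to measured uplink throughput, scored with the coefficient of determination (R^2). The drive-test values below are synthetic, and the random forest is one possible choice of regressor rather than the authors' model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Synthetic drive-test samples: RSRP (dBm), RSRQ (dB), SNR (dB).
rsrp = rng.uniform(-120, -70, n)
rsrq = rng.uniform(-20, -5, n)
snr = rng.uniform(-5, 30, n)
# Synthetic uplink throughput (Mbps), loosely increasing with signal quality.
throughput = 0.35 * (rsrp + 120) + 1.2 * snr + 0.5 * rsrq + rng.normal(0, 3, n)

X = np.column_stack([rsrp, rsrq, snr])
X_tr, X_te, y_tr, y_te = train_test_split(X, throughput, test_size=0.25, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out drive-test samples:", r2_score(y_te, model.predict(X_te)))
```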
Procedia PDF Downloads 157