Search results for: Software Models
2828 Theoretical Modal Analysis of Freely and Simply Supported RC Slabs
Authors: M. S. Ahmed, F. A. Mohammad
Abstract:
This paper focuses on the dynamic behavior of reinforced concrete (RC) slabs. Theoretical modal analysis was performed using two different types of boundary conditions. Modal analysis is one of the most important dynamic analyses; the analysis is modal when no external force acts on the structure. Using this method, the effects of freely and simply supported boundary conditions on the frequencies and mode shapes of RC square slabs are studied. ANSYS software was employed to derive the finite element model and determine the natural frequencies and mode shapes of the slabs, and the results obtained through numerical (finite element) analysis were compared with the exact solution. The main goal of the study is to predict how the boundary conditions change the behavior of the slab structures prior to performing experimental modal analysis. Based on the results, it is concluded that the simply supported boundary condition markedly increases the natural frequencies and changes the mode shapes compared with the freely supported condition. This means that the support conditions have a direct influence on the dynamic behavior of the slabs. It is therefore suggested to use free-free boundary conditions in experimental modal analysis to precisely reflect the properties of the structure, since free-free conditions eliminate the influence of poorly defined supports.
Keywords: Natural frequencies, Mode shapes, Modal analysis, ANSYS software, RC slabs.
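The exact solution mentioned in this abstract is classical for thin plates: for a simply supported rectangular Kirchhoff plate, the natural frequencies follow in closed form from the flexural rigidity and the mass per unit area. A minimal Python sketch, using illustrative slab properties (assumed here, not taken from the paper):

```python
import math

def plate_frequency_hz(m, n, a, b, E, h, rho, nu):
    """Natural frequency (Hz) of mode (m, n) for a simply supported
    rectangular Kirchhoff plate of size a x b."""
    D = E * h**3 / (12.0 * (1.0 - nu**2))      # flexural rigidity
    omega = math.pi**2 * ((m / a)**2 + (n / b)**2) * math.sqrt(D / (rho * h))
    return omega / (2.0 * math.pi)

# Illustrative RC slab values (assumed, not from the paper)
E, h, rho, nu = 30e9, 0.15, 2400.0, 0.2        # Pa, m, kg/m^3, -
for m, n in [(1, 1), (1, 2), (2, 2)]:
    print(f"mode ({m},{n}): {plate_frequency_hz(m, n, 4.0, 4.0, E, h, rho, nu):.1f} Hz")
```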
2827 Dynamic Variational Multiscale LES of Bluff Body Flows on Unstructured Grids
Authors: Carine Moussaed, Stephen Wornom, Bruno Koobus, Maria Vittoria Salvetti, Alain Dervieux
Abstract:
The effects of dynamic subgrid scale (SGS) models are investigated in variational multiscale (VMS) LES simulations of bluff body flows. The spatial discretization is based on a mixed finite element/finite volume formulation on unstructured grids. In the VMS approach used in this work, the separation between the largest and the smallest resolved scales is obtained through a variational projection operator and a finite volume cell agglomeration. The dynamic versions of the Smagorinsky and WALE SGS models are used to account for the effects of the unresolved scales; in the VMS approach, these effects are only modeled in the smallest resolved scales. The dynamic VMS-LES approach is applied to the simulation of the flow around a circular cylinder at Reynolds numbers 3900 and 20000 and to the flow around a square cylinder at Reynolds numbers 22000 and 175000. It is observed, as in previous studies, that the dynamic SGS procedure has a smaller impact on the results within the VMS approach than in LES, but improvements are demonstrated for important features such as the recirculating part of the flow. The global prediction is improved at a small extra computational cost.
Keywords: Variational multiscale LES, dynamic SGS model, unstructured grids, circular cylinder, square cylinder.
2826 Analysis of Residents’ Travel Characteristics and Policy Improving Strategies
Authors: Zhenzhen Xu, Chunfu Shao, Shengyou Wang, Chunjiao Dong
Abstract:
To improve the satisfaction of residents' travel, this paper analyzes the characteristics and influencing factors of urban residents' travel behavior. First, a Multinomial Logit Model (MNL) is built to analyze the characteristics of residents' travel behavior, reveal the influence of individual attributes, family attributes and travel characteristics on the choice of travel mode, and identify the significant factors; suggestions for policy improvement are then put forward. Finally, Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) models are introduced to evaluate the policy effect. Futian Street in Futian District, Shenzhen City was selected for investigation and research. The results show that gender, age, education, income, number of cars owned, travel purpose, departure time, journey time, travel distance and travel frequency all have a significant influence on residents' choice of travel mode. Based on these results, two policy improvement suggestions are put forward, both aimed at reducing public transportation and non-motorized travel times, and the policy effect is evaluated. Before the evaluation, the prediction performance of the MNL, SVM and MLP models was assessed; after parameter optimization, the prediction accuracies of the three models were 72.80%, 71.42%, and 76.42%, respectively. The MLP model, having the highest prediction accuracy, was selected to evaluate the effect of policy improvement. The results showed that after the implementation of the policy, the share of public transportation in plans 1 and 2 increased by 14.04% and 9.86%, respectively, while the share of private cars decreased by 3.47% and 2.54%, respectively. The proportion of car trips decreased markedly, while the proportion of public transport trips increased. The measures can therefore be considered to have a positive effect on promoting green travel and improving the satisfaction of urban residents, and can provide a reference for relevant departments when formulating transportation policies.
Keywords: Travel characteristics analysis, transportation choice, travel sharing rate, neural network model, traffic resource allocation.
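As a rough illustration of the model-comparison step described above, the following Python sketch fits three classifiers of the kinds named in the abstract (a multinomial logit analogue, an SVM, and an MLP) on synthetic stand-in data; the features, labels, and resulting accuracies are placeholders, not the survey data from Futian Street:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for survey features (age, income, distance, ...);
# target = travel mode (0 = walk/bike, 1 = public transport, 2 = car)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
score = X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 1000)
y = np.digitize(score, [-0.5, 0.5])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "MNL": LogisticRegression(max_iter=1000),   # multinomial logit analogue
    "SVM": SVC(kernel="rbf"),
    "MLP": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000),
}
for name, model in models.items():
    clf = make_pipeline(StandardScaler(), model)
    clf.fit(X_tr, y_tr)
    print(name, f"accuracy: {clf.score(X_te, y_te):.2%}")
```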
2825 Phenomenological Ductile Fracture Criteria Applied to the Cutting Process
Authors: František Šebek, Petr Kubík, Jindřich Petruška, Jiří Hůlka
Abstract:
The present study is aimed at the cutting process of circular cross-section rods, where fracture is used to separate one rod into two pieces. By incorporating a phenomenological ductile fracture model into the explicit formulation of the finite element method, the process can be analyzed without the necessity of carrying out many real experiments, which could be expensive in the case of repetitive testing under different conditions. In the present paper, AISI 1045 steel was examined: tensile tests of smooth and notched cylindrical bars were conducted together with biaxial testing of notched tube specimens to calibrate the material constants of selected phenomenological ductile fracture models. These were implemented into Abaqus/Explicit through the user subroutine VUMAT and used for cutting process simulation. As the calibration process is based on variables which cannot be obtained directly from experiments, numerical simulations of the fracture tests are an inevitable part of the calibration. Finally, experiments on the cutting process were carried out, and the predictive capability of the selected fracture models is discussed. Concluding remarks summarize the experience gained with both the calibration and the application of the particular ductile fracture criteria.
Keywords: Ductile fracture, phenomenological criteria, cutting process, explicit formulation, AISI 1045 steel.
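The paper calibrates its own set of phenomenological criteria; as a generic illustration of the calibration idea, the sketch below fits the constants of one common criterion of this family, a Johnson-Cook-type fracture locus, to hypothetical (triaxiality, failure strain) pairs such as those obtained from smooth/notched bar and tube tests:

```python
import numpy as np
from scipy.optimize import curve_fit

# Johnson-Cook-type fracture locus: eps_f = D1 + D2 * exp(D3 * eta)
def fracture_strain(eta, d1, d2, d3):
    return d1 + d2 * np.exp(d3 * eta)

# Illustrative (triaxiality, failure strain) pairs, not the paper's data
eta = np.array([0.0, 0.33, 0.6, 0.9])
eps_f = np.array([0.95, 0.60, 0.42, 0.30])

params, _ = curve_fit(fracture_strain, eta, eps_f, p0=(0.1, 1.0, -1.5))
print("D1, D2, D3 =", params)
```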
2824 A Comprehensive Survey on Machine Learning Techniques and User Authentication Approaches for Credit Card Fraud Detection
Authors: Niloofar Yousefi, Marie Alaghband, Ivan Garibay
Abstract:
With the increase of credit card usage, the volume of credit card misuse has also significantly increased, which may cause appreciable financial losses for both credit card holders and the financial organizations issuing credit cards. As a result, financial organizations are working hard on developing and deploying credit card fraud detection methods, in order to adapt to ever-evolving, increasingly sophisticated defrauding strategies and to identify illicit transactions as quickly as possible to protect themselves and their customers. Compounding the complex nature of such adverse strategies, credit card fraudulent activities are rare events compared to the number of legitimate transactions. Hence, the challenge of developing fraud detection methods that are accurate and efficient is substantially intensified and, as a consequence, credit card fraud detection has lately become a very active area of research. In this work, we provide a survey of current techniques most relevant to the problem of credit card fraud detection. We carry out our survey in two main parts. In the first part, we focus on studies utilizing classical machine learning models, which mostly employ traditional transactional features to make fraud predictions. These models typically rely on some static physical characteristics, such as what the user knows (knowledge-based methods) or what he/she has access to (object-based methods). In the second part of our survey, we review more advanced techniques of user authentication, which use behavioral biometrics to identify an individual based on his/her unique behavior while interacting with his/her electronic devices. These approaches rely on how people behave (instead of what they do), which cannot be easily forged. By providing an overview of current approaches and the results reported in the literature, this survey aims to drive the future research agenda for the community in order to develop more accurate, reliable and scalable models of credit card fraud detection.
Keywords: Credit card fraud detection, user authentication, behavioral biometrics, machine learning, literature survey.
2823 A Machine Learning Approach for Anomaly Detection in Environmental IoT-Driven Wastewater Purification Systems
Authors: Giovanni Cicceri, Roberta Maisano, Nathalie Morey, Salvatore Distefano
Abstract:
The main goal of this paper is to present a solution for a water purification system based on an Environmental Internet of Things (EIoT) platform to monitor and control water quality, together with machine learning (ML) models to support decision making and speed up the water purification processes. A real case study has been implemented by deploying an EIoT platform and a network of devices, called Gramb meters and belonging to the Gramb project, on wastewater purification systems located in Calabria, in the south of Italy. The data thus collected are used to control the wastewater quality, detect anomalies and predict the behaviour of the purification system. To this end, three different statistical and machine learning models have been adopted and compared: Autoregressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM) autoencoder, and Facebook Prophet (FP). The results demonstrated that the ML solution (LSTM) outperforms the classical statistical approaches (ARIMA, FP) in terms of accuracy, efficiency and effectiveness in monitoring and controlling the wastewater purification processes.
Keywords: EIoT, machine learning, anomaly detection, environment monitoring.
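As a sketch of how a statistical baseline of the ARIMA type can flag anomalies in a sensor stream, the following Python fragment fits an ARIMA model and marks points whose residuals exceed a 3-sigma threshold; the series and the injected fault are synthetic stand-ins for the Gramb meter data:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic water-quality series with one injected fault
rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 0.1, 500)
series[300] += 2.0                       # injected anomaly

fit = ARIMA(series, order=(2, 0, 2)).fit()
resid = fit.resid
threshold = 3 * resid.std()              # 3-sigma rule on model residuals
anomalies = np.flatnonzero(np.abs(resid) > threshold)
print("anomalous indices:", anomalies)
```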
2822 Kinetic Modeling of the Fischer-Tropsch Reactions and Modeling Steady State Heterogeneous Reactor
Authors: M. Ahmadi Marvast, M. Sohrabi, H. Ganji
Abstract:
The rates of production of the main products of the Fischer-Tropsch reactions over an Fe/HZSM5 bifunctional catalyst in a fixed bed reactor are investigated over a broad range of temperature, pressure, space velocity, H2/CO feed molar ratio and CO2, CH4 and water flow rates. Model discrimination and parameter estimation were performed according to the integral method of kinetic analysis. Because no mechanism has been developed for Fischer-Tropsch synthesis on bifunctional catalysts, 26 different models were tested and the best model selected. Comprehensive one- and two-dimensional heterogeneous reactor models are developed to simulate the performance of fixed-bed Fischer-Tropsch reactors. To reduce the computational time for optimization purposes, an Artificial Feed Forward Neural Network (AFFNN) has been used to describe intra-particle mass and heat transfer diffusion in the catalyst pellet. It is seen that the product reaction rates are directly related to the H2 partial pressure and inversely related to the CO partial pressure. The results show that the hybrid model is in good agreement with the rigorous mechanistic model while being about 25-30 times faster.
Keywords: Fischer-Tropsch, heterogeneous modeling, kinetic study.
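The idea of replacing an expensive pellet-scale model with a trained feed-forward network can be sketched as follows; the "rigorous" model here is a toy closed-form stand-in, not the diffusion model of the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy stand-in for the rigorous pellet model: an effectiveness-factor-like
# quantity as a function of (T, P_H2, P_CO); the real model solves
# intra-particle diffusion equations.
def pellet_model(T, p_h2, p_co):
    return np.tanh(p_h2 / p_co) * np.exp(-3000.0 / T)

rng = np.random.default_rng(2)
T = rng.uniform(500, 600, 2000)
p_h2 = rng.uniform(5, 25, 2000)
p_co = rng.uniform(5, 25, 2000)
X = np.column_stack([T, p_h2, p_co])
y = pellet_model(T, p_h2, p_co)

# Train the fast surrogate on samples of the expensive model
surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000).fit(X, y)
print("surrogate R^2:", surrogate.score(X, y))
```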
2821 An Application for Risk of Crime Prediction Using Machine Learning
Authors: Luis Fonseca, Filipe Cabral Pinto, Susana Sargento
Abstract:
The increase of the world population, especially in large urban centers, has resulted in new challenges, particularly in the control and optimization of public safety. In the present work, a solution is proposed for the prediction of criminal occurrences in a city based on historical incident data and demographic information. The entire research and implementation are presented, starting with the data collection from its original source, the treatment and transformations applied to the data, and the choice, evaluation and implementation of the Machine Learning models, up to the application layer. Classification models are implemented to predict criminal risk for a given time interval and location. Machine Learning algorithms such as Random Forest, Neural Networks, K-Nearest Neighbors and Logistic Regression are used to predict occurrences, and their performance is compared according to the data processing and transformation used. The results show that the use of Machine Learning techniques helps to anticipate criminal occurrences, which contributes to the reinforcement of public security. Finally, the models were implemented on a platform that provides an API to enable other entities to request predictions in real time. An application is also presented in which criminal predictions can be displayed visually.
Keywords: Crime prediction, machine learning, public safety, smart city.
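A real-time prediction endpoint of the kind described could look like the following minimal Flask sketch; the toy model, route name, and payload format are hypothetical, not the authors' platform:

```python
import numpy as np
from flask import Flask, jsonify, request
from sklearn.ensemble import RandomForestClassifier

# Toy model trained on synthetic (lat, lon, hour) -> risk-class data;
# a real deployment would fit on historical incident records instead.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
y = (X[:, 2] > 0.7).astype(int)            # pretend late hours are riskier
model = RandomForestClassifier(n_estimators=50).fit(X, y)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])   # hypothetical endpoint
def predict():
    features = request.get_json()["features"]   # e.g. [lat, lon, hour]
    risk = model.predict_proba([features])[0].tolist()
    return jsonify({"risk_by_class": risk})

if __name__ == "__main__":
    app.run()
```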
2820 Landslide Susceptibility Mapping: A Comparison between Logistic Regression and Multivariate Adaptive Regression Spline Models in the Municipality of Oudka, Northern of Morocco
Authors: S. Benchelha, H. C. Aoudjehane, M. Hakdaoui, R. El Hamdouni, H. Mansouri, T. Benchelha, M. Layelmam, M. Alaoui
Abstract:
The logistic regression (LR) and multivariate adaptive regression spline (MarSpline) methods are applied and verified for the analysis of landslide susceptibility in Oudka, Morocco, using a geographical information system. From a spatial database containing data such as landslide mapping, topography, soil, hydrology and lithology, eight landslide-related factors, namely elevation, slope, aspect, distance to streams, distance to roads, distance to faults, lithology and the Normalized Difference Vegetation Index (NDVI), were calculated or extracted. Using these factors, landslide susceptibility indexes were calculated by the two methods. Before the calculation, the database was divided into two parts, the first for the formation of the model and the second for its validation. The results of the landslide susceptibility analysis were verified using success and prediction rates to evaluate the quality of these probabilistic models. This verification showed that the MarSpline model is the better model, with a success rate (AUC = 0.963) and a prediction rate (AUC = 0.951) higher than those of the LR model (success rate AUC = 0.918, prediction rate AUC = 0.901).
Keywords: Landslide susceptibility mapping, logistic regression, multivariate adaptive regression spline, Oudka, Taounate, Morocco.
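The success rate and prediction rate quoted above are AUC values computed on the training and validation partitions, respectively. A minimal sketch of this verification for the LR model, on synthetic stand-ins for the eight conditioning factors:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the eight factors (elevation, slope, aspect, ...)
rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 2000) > 0).astype(int)

X_fit, X_val, y_fit, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
lr = LogisticRegression().fit(X_fit, y_fit)
print("success rate AUC   :", roc_auc_score(y_fit, lr.predict_proba(X_fit)[:, 1]))
print("prediction rate AUC:", roc_auc_score(y_val, lr.predict_proba(X_val)[:, 1]))
```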
2819 Study of Aero-thermal Effects with Heat Radiation in Optical Side Window
Authors: Chun-Chi Li, Da-Wei Huang, Yin-Chia Su, Liang-Chih Tasi
Abstract:
In hypersonic environments, the aerothermal effect makes it difficult for the optical side windows of optically guided missiles to withstand high heat, producing cracking or breakage and resulting in an inability to function. This study used computational fluid dynamics to investigate external cooling jet conditions for optical side windows. The k-ε and k-ω turbulence models were simulated. To better match actual aerothermal environments, a thermal radiation model was added to examine suitable amounts of external coolant and the aerothermodynamic problems of the optical window. The simulation results indicate that when there are no external cooling jets, the airflow over the optical window and the tail groove produces vortices, and the temperatures at these two locations reach a peak of approximately 1600 K. When the external cooling jets operated at 0.15 kg/s, the surface temperature of the optical window dropped to approximately 280 K. When thermal radiation conditions were added, heat flux dissipation was faster and the surface temperature of the optical window fell from 280 K to approximately 260 K. The difference in influence of the k-ε and k-ω turbulence models on the optical window surface temperature was not significant.
Keywords: Aero-optical side window, aerothermal effect, cooling, hypersonic flow.
2818 The DAQ Debugger for iFDAQ of the COMPASS Experiment
Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius
Abstract:
In general, state-of-the-art Data Acquisition Systems (DAQ) in high energy physics experiments must satisfy high requirements in terms of reliability, efficiency and data rate capability. This paper presents the development and deployment of a debugging tool named DAQ Debugger for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. Utilizing a hardware event builder, the iFDAQ is designed to read out data at the experiment's average maximum rate of 1.5 GB/s. In complex software such as the iFDAQ, comprising thousands of lines of code, the debugging process is absolutely essential to reveal all software issues; unfortunately, conventional debugging of the iFDAQ is not possible during real data taking. The DAQ Debugger is a tool for identifying a problem, isolating its source, and then either correcting the problem or determining a way to work around it. It provides a layer for easy integration into any process and has no impact on process performance. Based on the handling of system signals, the DAQ Debugger represents an alternative to the conventional debuggers provided by most integrated development environments. Whenever a problem occurs, it generates reports containing all the information needed for deeper investigation and analysis. The DAQ Debugger was fully incorporated into all processes in the iFDAQ during the 2016 run. It helped to reveal remaining software issues and significantly improved the stability of the system in comparison with the previous run. In the paper, we present the DAQ Debugger from several perspectives and discuss it in detail.
Keywords: DAQ Debugger, data acquisition system, FPGA, system signals, Qt framework.
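The iFDAQ itself is C++, but the signal-handling idea behind the DAQ Debugger can be sketched in a few lines of Python: install handlers for fatal signals that write a report with the current stack before the process dies, without touching the normal data path. A minimal sketch, not the actual tool:

```python
import faulthandler
import signal
import sys
import traceback

def crash_report(signum, frame):
    """Write a report with the failing stack, then terminate."""
    with open("daq_crash_report.txt", "w") as fh:
        fh.write(f"received signal {signum}\n")
        traceback.print_stack(frame, file=fh)
    sys.exit(1)

# Intercept fatal signals; normal processing is unaffected until one fires
for sig in (signal.SIGTERM, signal.SIGABRT):
    signal.signal(sig, crash_report)
faulthandler.enable()                    # also dumps stacks on SIGSEGV/SIGFPE
print("handlers installed; process continues normally")
```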
2817 QSAR Studies of Certain Novel Heterocycles Derived from Bis-1, 2, 4 Triazoles as Anti-Tumor Agents
Authors: Madhusudan Purohit, Stephen Philip, Bharathkumar Inturi
Abstract:
In this paper we report the quantitative structure-activity relationship of novel bis-triazole derivatives for predicting their activity profile. The full model encompassed a dataset of 46 bis-triazoles. The Tripos Sybyl X 2.0 program was used to conduct CoMSIA QSAR modeling. The Partial Least-Squares (PLS) method was used to conduct the statistical analysis and to derive a QSAR model based on the field values of the CoMSIA descriptors. The compounds were divided into test and training sets and were evaluated with various CoMSIA parameters to find the best QSAR model. An optimum number of components was first determined by cross-validation regression for the CoMSIA model and then applied in the final analysis. A series of parameters was examined, and the best-fit model was obtained using the donor, partition coefficient and steric parameters. The CoMSIA models demonstrated good statistical results, with a regression coefficient (r2) and a cross-validated coefficient (q2) of 0.575 and 0.830, respectively. The standard error for the predicted model was 0.16322. In the CoMSIA model, the steric descriptors make a marginally larger contribution than the electrostatic descriptors. The finding that the steric descriptor is the largest contributor to the CoMSIA QSAR models is consistent with the observation that more than half of the binding site area is occupied by steric regions.
Keywords: 3D QSAR, CoMSIA, Triazoles.
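A minimal sketch of the PLS step, computing a conventional r2 and a cross-validated q2 on synthetic stand-ins for the CoMSIA field descriptors (the real study used Sybyl, not scikit-learn):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for CoMSIA field values of 46 compounds
rng = np.random.default_rng(4)
X = rng.normal(size=(46, 200))                      # field value per grid point
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.5, 46)   # pseudo-activity

pls = PLSRegression(n_components=5).fit(X, y)
r2 = pls.score(X, y)                                # conventional r^2
y_cv = cross_val_predict(pls, X, y, cv=5)           # cross-validated predictions
q2 = 1 - ((y - y_cv.ravel())**2).sum() / ((y - y.mean())**2).sum()
print(f"r2 = {r2:.3f}, q2 = {q2:.3f}")
```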
2816 Disaggregation the Daily Rainfall Dataset into Sub-Daily Resolution in the Temperate Oceanic Climate Region
Authors: Mohammad Bakhshi, Firas Al Janabi
Abstract:
High-resolution rain data are very important as input for hydrological models. Among the methods for generating high-resolution rainfall data, temporal disaggregation was chosen for this study. The paper attempts to generate rainfall at three different resolutions (4-hourly, hourly and 10-minute) from daily data for an approximately 20-year record period. The process was carried out with the DiMoN tool, which is based on a random cascade model and the method of fragments. Differences between the observed and simulated rain datasets are evaluated with a variety of statistical and empirical methods: the Kolmogorov-Smirnov (K-S) test, the usual statistics, and exceedance probability. The tool worked well at preserving the daily rainfall values on wet days; however, the generated data are concentrated in shorter time periods, producing stronger storms. It is demonstrated that the difference between the generated and observed cumulative distribution function curves of the 4-hourly datasets passed the K-S test criteria, while for the hourly and 10-minute datasets the p-value had to be employed to show that their differences were reasonable. The results are encouraging, considering the overestimation of the generated high-resolution rainfall data.
Keywords: DiMoN tool, disaggregation, exceedance probability, Kolmogorov-Smirnov Test, rainfall.
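The K-S comparison of observed versus disaggregated rainfall can be sketched as follows, with gamma-distributed stand-ins for the two depth samples:

```python
import numpy as np
from scipy.stats import ks_2samp

# Stand-ins for observed vs. disaggregated (simulated) hourly rainfall depths
rng = np.random.default_rng(5)
observed = rng.gamma(shape=0.6, scale=2.0, size=2000)
simulated = rng.gamma(shape=0.55, scale=2.2, size=2000)

stat, p_value = ks_2samp(observed, simulated)
print(f"K-S statistic = {stat:.3f}, p-value = {p_value:.3f}")
# A large p-value gives no evidence that the two CDFs differ
```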
2815 Estimation of Time-Varying Linear Regression with Unknown Time-Volatility via Continuous Generalization of the Akaike Information Criterion
Authors: Elena Ezhova, Vadim Mottl, Olga Krasotkina
Abstract:
The problem of estimating time-varying regression is inevitably concerned with the necessity of choosing the appropriate level of model volatility, ranging from the full stationarity of instant regression models to their complete independence of each other. In the stationary case the number of regression coefficients to be estimated equals the number of regressors, whereas the absence of any smoothness assumptions augments the dimension of the unknown vector by a factor equal to the time-series length. The Akaike Information Criterion is a commonly adopted means of adjusting a model to a given data set within a succession of nested parametric model classes, but its crucial restriction is that the classes are rigidly defined by the growing integer-valued dimension of the unknown vector. To make the Kullback information maximization principle underlying the classical AIC applicable to the problem of time-varying regression estimation, we extend it to a wider class of data models in which the dimension of the parameter is fixed, but the freedom of its values is softly constrained by a family of continuously nested a priori probability distributions.
Keywords: Time-varying regression, time-volatility of regression coefficients, Akaike Information Criterion (AIC), Kullback information maximization principle.
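For reference, the classical AIC that the paper generalizes trades goodness of fit against an integer parameter count; a minimal sketch for nested polynomial regression classes under a Gaussian noise assumption:

```python
import numpy as np

def aic_gaussian(y, y_hat, k):
    """Classical AIC for a least-squares fit with k free parameters."""
    n = len(y)
    rss = ((y - y_hat) ** 2).sum()
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(6)
x = np.linspace(0, 1, 200)
y = 1.0 + 2.0 * x + rng.normal(0, 0.1, 200)

for degree in (1, 3, 8):                 # rigidly nested model classes
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    print(f"degree {degree}: AIC = {aic_gaussian(y, y_hat, degree + 1):.1f}")
```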
2814 Simultaneous HPAM/SDS Injection in Heterogeneous/Layered Models
Authors: M. H. Sedaghat, A. Zamani, S. Morshedi, R. Janamiri, M. Safdari, I. Mahdavi, A. Hosseini, A. Hatampour
Abstract:
Although many enhanced oil recovery experiments have been reported, experiments that consider the effects of local and global heterogeneity on the efficiency of polymer-surfactant flooding are rare. In this research, numerous water flooding and polymer-surfactant flooding experiments were performed on five-spot glass micromodels under different conditions, such as different layer positions. For these experiments, five micromodels with three different pore structures were designed: three models with different layer orientations, one homogeneous model and one heterogeneous model. To capture the effect of heterogeneity of the porous medium, the three pore structure types were distributed randomly and in equal proportions throughout the heterogeneous micromodel network according to a random normal distribution. The results show that the maximum EOR recovery factor occurs when the layers are orthogonal to the path of the mainstream, and the minimum EOR recovery factor occurs when the model is heterogeneous. These experiments show that in polymer-surfactant flooding, the EOR recovery factor increases with the angle of the layers, and the recovery factor is strongly affected by local heterogeneity around the injection zone.
Keywords: Layered Reservoir, Micromodel, Local Heterogeneity, Polymer-Surfactant Flooding, Enhanced Oil Recovery.
2813 A Programming Assessment Software Artefact Enhanced with the Help of Learners
Authors: Romeo A. Botes, Imelda Smit
Abstract:
The demands of an ever-changing and complex higher education environment, along with the profile of modern learners, challenge current approaches to assessment and feedback. More learners enter the education system every year. The younger generation expects immediate feedback; at the same time, feedback should be meaningful. The assessment of practical activities in programming poses a particular problem, since both lecturers and learners in the information and computer science discipline acknowledge that paper-based assessment for programming subjects lacks meaningful real-life testing, while feedback lacks promptness, consistency, comprehensiveness and individualisation. Most of these aspects may be addressed by modern, technology-assisted assessment. The focus of this paper is the continuous development of an artefact that is used to assist the lecturer in the assessment of, and feedback on, practical programming activities in a senior database programming class. The artefact was developed using three Design Science Research cycles. The first implementation allowed one programming activity submission per assessment intervention. This pilot provided valuable insight into the obstacles to implementing this type of assessment tool. A second implementation improved the initial version to allow multiple programming activity submissions per assessment. The focus of this version is on providing scaffolded feedback to the learner, allowing improvement with each subsequent submission. It also has a built-in capability to provide the lecturer with information regarding the key problem areas of each assessment intervention.
Keywords: Programming, computer-aided assessment, technology-assisted assessment, programming assessment software, design science research, mixed-method.
2812 Students' Perception of the Evaluation System in Architecture Studios
Authors: Badiossadat Hassanpour, Nangkula Utaberta, Azami Zaharim, Nurakmal Goh Abdullah
Abstract:
Architecture education is based on apprenticeship models, and its nature has not changed much over a long period; the source of change has been its evaluation process and system. It is undeniable that art and architecture education is based on transmitting knowledge from instructor to students. In contrast to other majors, this transmission happens through iteration and practice: studio masters steer the design process and improve skills through supervision and critique, and the evaluation ends with marks given to students' achievements. The importance of the evaluation and assessment role is therefore obvious, and it is fair to say that to understand an architecture education system, we must first study its assessment procedures. The evolution of these changes in Western countries is well documented. However, this procedure appears to have been neglected in Malaysia, and there is a severe lack of research and documentation in this area. Malaysia, as a developing and multicultural country involving different races and cultures, is a suitable setting for scrutinizing evaluation systems and the degree of acceptance of currently implemented models, so as to keep the evaluation and assessment procedure abreast of the needs of different generations, cultures and even genders. This paper attempts to answer the questions of how evaluation and assessment are performed and how students perceive this evaluation system in the Malaysian context. The main contribution of this work is to the international debate on evaluation models.
Keywords: Architecture, assessment, design studio, learning.
2811 Environmental Effects on Energy Consumption of Smart Grid Consumers
Authors: S. M. Ali, A. Salam Khan, A. U. Khan, M. Tariq, M. S. Hussain, B. A. Abbasi, I. Hussain, U. Farid
Abstract:
Environment and surroundings play a pivotal role in structuring the lifestyle of consumers, and living standards in turn affect their energy consumption. In the smart grid paradigm, climate drifts, weather parameters and the green environment relate directly to the energy profiles of the various consumers, such as residential, commercial and industrial ones. Considering the above factors helps in shaping utility load curves and in the optimal management of demand and supply. Thus, there is a pressing need to develop correlation models of load and weather parameters and to critically analyze the factors affecting the energy profiles of smart grid consumers. In this paper, we elaborate the various environmental and weather parameters affecting consumer demand. Moreover, we develop correlation models, namely Pearson, Spearman, and Kendall, relating the dependent (load) parameter to the independent (weather) parameters. Furthermore, we validate our discussion with real-time data from the state of Texas. The numerical simulations demonstrate the strong relation between climatic drifts and the energy consumption of smart grid consumers.
Keywords: Climatic drifts, correlation analysis, energy consumption, smart grid, weather parameter.
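A minimal sketch of the three correlation models named above, applied to synthetic stand-ins for hourly load and temperature (the study used real Texas data):

```python
import numpy as np
from scipy import stats

# Stand-ins for hourly load (MW) and ambient temperature (deg C)
rng = np.random.default_rng(7)
temperature = rng.uniform(10, 40, 500)
load = 50 + 2.5 * temperature + rng.normal(0, 5, 500)  # cooling-driven demand

print("Pearson :", stats.pearsonr(temperature, load)[0])
print("Spearman:", stats.spearmanr(temperature, load)[0])
print("Kendall :", stats.kendalltau(temperature, load)[0])
```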
2810 Emulation of a Wind Turbine Using Induction Motor Driven by Field Oriented Control
Authors: L. Benaaouinate, M. Khafallah, A. Martinez, A. Mesbahi, T. Bouragba
Abstract:
This paper deals with the modeling, simulation, and emulation of a wind turbine emulator for standalone wind energy conversion systems. By using an emulation system, we aim to reproduce the dynamic behavior of the wind turbine torque on the generator shaft: it provides the testing facilities to optimize generator control strategies in a controlled environment, without reliance on natural resources. The aerodynamic, mechanical and electrical models are detailed, as well as the control of the pitch angle using fuzzy logic for horizontal-axis wind turbines. The wind turbine emulator consists mainly of an induction motor with an AC power drive with torque control. The control of the induction motor and the mathematical models of the wind turbine are designed in the MATLAB/Simulink environment. The simulation results confirm the effectiveness of the induction motor control system and the functionality of the wind turbine emulator in providing all necessary parameters of the wind turbine system, such as wind speed, output torque, power coefficient and tip speed ratio. The findings are of direct practical relevance.
Keywords: Wind turbine, modeling, emulator, electrical generator, renewable energy, induction motor drive, field oriented control, real time control, wind turbine emulator, pitch angle control.
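The torque that the emulator must reproduce on the motor shaft follows from the aerodynamic model; a Python sketch using a commonly used empirical power-coefficient curve (Heier form, coefficients assumed) rather than the paper's Simulink model:

```python
import math

def cp(lmbda, beta):
    """Empirical power-coefficient curve (Heier form); coefficients assumed."""
    li = 1.0 / (1.0 / (lmbda + 0.08 * beta) - 0.035 / (beta**3 + 1.0))
    return 0.5176 * (116.0 / li - 0.4 * beta - 5.0) * math.exp(-21.0 / li) \
        + 0.0068 * lmbda

def turbine_torque(v_wind, omega, R=2.0, rho=1.225, beta=0.0):
    """Aerodynamic torque (N*m) to reproduce on the induction motor shaft."""
    lmbda = omega * R / v_wind                # tip speed ratio
    power = 0.5 * rho * math.pi * R**2 * cp(lmbda, beta) * v_wind**3
    return power / omega

print(f"T = {turbine_torque(v_wind=8.0, omega=30.0):.1f} N*m")
```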
2809 A Review on Cloud Computing and Internet of Things
Authors: Sahar S. Tabrizi, Dogan Ibrahim
Abstract:
Cloud Computing is a convenient model for on-demand networks that uses shared pools of virtual configurable computing resources, such as servers, networks, storage devices, and applications. The cloud serves as an environment for companies and organizations to use infrastructure resources without making any purchases, and they can access such resources wherever and whenever they need them. Cloud computing is useful for overcoming a number of problems in various Information Technology (IT) domains such as Geographical Information Systems (GIS), scientific research, e-governance systems, decision support systems, ERP, web application development, and mobile technology. Companies can use Cloud Computing services to store large amounts of data that can be accessed from anywhere on Earth at any time. Such services are rented by client companies, where the actual rent depends upon the amount of data stored on the cloud and the amount of processing power used in a given time period. The resources offered by cloud service companies are flexible in the sense that user companies can increase or decrease their storage or processing power requirements at any time, thus minimizing the overall rental cost of the service they receive. In addition, Cloud Computing service providers offer fast processors and application software that can be shared by their clients. This is especially important for small companies with limited budgets which cannot afford to purchase their own expensive hardware and software. This paper is an overview of Cloud Computing, giving its types, principles, advantages, and disadvantages. In addition, the paper gives some example engineering applications of Cloud Computing and makes suggestions for possible future applications in the field of engineering.
Keywords: Cloud computing, cloud services, IaaS, PaaS, SaaS, IoT.
2808 End-to-End Spanish-English Sequence Learning Translation Model
Authors: Vidhu Mitha Goutham, Ruma Mukherjee
Abstract:
The low availability of well-trained, unlimited, dynamically accessible models for specific languages makes it hard for corporate users to adopt quick translation techniques and incorporate them into product solutions. As translation tasks increasingly require a dynamic sequence learning curve, stable, cost-free open-source models are scarce. We survey and compare current translation techniques and propose a modified sequence-to-sequence model repurposed with attention techniques. Sequence learning using an encoder-decoder model is now paving the path to higher precision levels in translation. Using a Convolutional Neural Network (CNN) encoder and a Recurrent Neural Network (RNN) decoder, we use Fairseq tools to produce an end-to-end, bilingually trained Spanish-English machine translation model that includes source language detection. We obtain competitive results using a model trained on a duo-lingo corpus, providing prospective, ready-made plug-in use for compound sentences and document translations. Our model serves as a decent system for large, organizational data translation needs. While acknowledging its shortcomings and future scope, it also identifies itself as a well-optimized deep neural network model and solution.
Keywords: Attention, encoder-decoder, Fairseq, Seq2Seq, Spanish, translation.
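A conceptual stand-in for the encoder-decoder idea (the paper uses a CNN encoder and RNN decoder via Fairseq; this GRU-based PyTorch sketch only shows the data flow, without attention):

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal encoder-decoder sketch for token-to-token translation."""
    def __init__(self, src_vocab, tgt_vocab, dim=64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src, tgt):
        _, state = self.encoder(self.src_emb(src))    # encode source sentence
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)
        return self.out(dec_out)                      # logits per target token

model = Seq2Seq(src_vocab=8000, tgt_vocab=8000)
src = torch.randint(0, 8000, (2, 12))                 # toy Spanish token ids
tgt = torch.randint(0, 8000, (2, 10))                 # toy English token ids
print(model(src, tgt).shape)                          # (2, 10, 8000)
```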
2807 ECG-Based Heartbeat Classification Using Convolutional Neural Networks
Authors: Jacqueline R. T. Alipo-on, Francesca I. F. Escobar, Myles J. T. Tan, Hezerul Abdul Karim, Nouar AlDahoul
Abstract:
Electrocardiogram (ECG) signal analysis and processing are crucial in the diagnosis of cardiovascular diseases, which are considered one of the leading causes of mortality worldwide. However, the traditional rule-based analysis of large volumes of ECG data is time-consuming, labor-intensive, and prone to human error. With the advancement of the programming paradigm, algorithms such as machine learning have been increasingly used to analyze ECG signals. In this paper, various deep learning algorithms were adapted to classify five classes of heartbeat types. The dataset used in this work is the synthetic MIT-Beth Israel Hospital (MIT-BIH) Arrhythmia dataset produced from generative adversarial networks (GANs). Various deep learning models, such as the ResNet-50 convolutional neural network (CNN), a 1-D CNN, and long short-term memory (LSTM), were evaluated and compared. ResNet-50 was found to outperform the other models in terms of recall and F1 score, with five-fold average scores of 98.88% and 98.87%, respectively. The 1-D CNN, on the other hand, was found to have the highest average precision of 98.93%.
Keywords: Heartbeat classification, convolutional neural network, electrocardiogram signals, ECG signals, generative adversarial networks, long short-term memory, LSTM, ResNet-50.
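A minimal 1-D CNN of the kind evaluated can be sketched in PyTorch as follows; the layer sizes and the 188-sample segment length are illustrative, not the architectures benchmarked in the paper:

```python
import torch
import torch.nn as nn

# Illustrative 1-D CNN for 5-class heartbeat classification
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
    nn.Flatten(),
    nn.Linear(32 * 47, 64), nn.ReLU(),
    nn.Linear(64, 5),                     # five heartbeat classes
)

beats = torch.randn(8, 1, 188)            # batch of single-lead ECG segments
logits = model(beats)
print(logits.shape)                       # (8, 5)
```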
2806 The Impact of Regulatory Changes on the Development of Mobile Medical Apps
Abstract:
Mobile applications are being used to perform a wide variety of tasks in day-to-day life, ranging from checking email to controlling home heating. Application developers have recognized the potential to transform a smart device, i.e. a mobile phone or a tablet, into a medical device by using a mobile medical application. When initially conceived, these mobile medical applications performed basic functions, e.g. BMI calculation or accessing reference material; however, increasing complexity offers clinicians and patients a growing range of functionality. As this complexity and functionality increase, so too does the potential risk associated with using such an application. Examples include applications that provide the ability to inflate and deflate blood pressure cuffs, as well as applications that use patient-specific parameters to calculate dosage or create a dosage plan for radiation therapy. If an unapproved mobile medical application is marketed by a medical device organization, the organization faces significant penalties, such as receiving an FDA warning letter to cease the prohibited activity, fines, and the possibility of a criminal conviction. Regulatory bodies have finalized guidance intended to help mobile application developers establish whether their applications are subject to regulatory scrutiny. However, regulatory controls appear to contradict the approaches taken by mobile application developers, who generally work with short development cycles and very little documentation, and as such there is the potential to stifle further improvement because of these regulations. The research presented in this paper details how, by adopting development techniques such as agile software development, mobile medical application developers can meet regulatory requirements whilst still fostering innovation.
Keywords: Medical, mobile, applications, software engineering, FDA, standards, regulations, agile.
2805 Effective Dose and Size Specific Dose Estimation with and without Tube Current Modulation for Thoracic Computed Tomography Examinations: A Phantom Study
Authors: S. Gharbi, S. Labidi, M. Mars, M. Chelli, F. Ladeb
Abstract:
The purpose of this study is to reduce the radiation dose of chest CT examinations by adding Tube Current Modulation (TCM) to a standard CT protocol. A scan of an anthropomorphic male Alderson phantom was performed on a 128-slice scanner. The effective dose (ED) in both scans, with and without mAs modulation, was estimated by multiplying the Dose Length Product (DLP) by a conversion factor, and the results were compared with those obtained with the CT-Expo software. The size-specific dose estimate (SSDE) values were obtained by multiplying the volume CT dose index (CTDIvol) by a conversion factor related to the phantom's effective diameter. Objective assessment of image quality was performed with Signal-to-Noise Ratio (SNR) measurements in the phantom, and SPSS software was used for data analysis. The results showed that with CARE Dose 4D included, ED was lowered by 48.35% and 51.51% according to the DLP and CT-Expo estimates, respectively. ED ranged between 7.01 mSv and 6.6 mSv for the standard protocol, and between 3.62 mSv and 3.2 mSv with TCM. Similar results were found for SSDE: the dose without TCM was 16.25 mGy and was lowered by 48.8% when TCM was included. The SNR values were significantly different (p = 0.03 < 0.05); the highest was measured on images acquired with TCM and reconstructed with filtered back projection (FBP). In conclusion, this study demonstrates the potential of the TCM technique for SSDE and ED reduction while conserving image quality at a high diagnostic reference level for thoracic CT examinations.
Keywords: Anthropomorphic phantom, computed tomography, CT-expo, radiation dose.
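The two dose estimates reduce to simple products of scanner outputs and conversion factors; a sketch with a commonly cited chest k-factor (about 0.014 mSv/(mGy*cm), assumed here) and an illustrative size factor:

```python
def effective_dose_msv(dlp_mgy_cm, k_factor=0.014):
    """ED = DLP x region-specific conversion factor (chest k assumed)."""
    return dlp_mgy_cm * k_factor

def ssde_mgy(ctdivol_mgy, size_factor):
    """SSDE = CTDIvol x factor for the patient's effective diameter."""
    return ctdivol_mgy * size_factor

# Illustrative values on the scale of a standard chest protocol
print(effective_dose_msv(dlp_mgy_cm=500.0))        # ~7 mSv
print(ssde_mgy(ctdivol_mgy=10.0, size_factor=1.3)) # 13 mGy
```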
2804 Finite Element Analysis of Sheet Metal Airbending Using Hyperform LS-DYNA
Authors: Himanshu V. Gajjar, Anish H. Gandhi, Harit K. Raval
Abstract:
Air bending is one of the important metal forming processes because of its simplicity and wide field of application. The accuracy of the analytical and empirical models reported for the analysis of bending processes is governed by simplifying assumptions, and such models do not consider the effect of dynamic parameters. A number of studies have reported finite element analysis (FEA) of V-bending, U-bending, and air V-bending processes. FEA of bending has been found to be very sensitive to many physical and numerical parameters, and FE models must be computationally efficient for practical use. The reported work presents 3D FEA of the air bending process using Hyperform LS-DYNA and its comparison with published 3D FEA results of air bending in Ansys LS-DYNA and with experimental results. Observing the planar symmetry, and based on the assumption of plane strain conditions, the air bending problem was modeled in 2D with a symmetric boundary condition in the width direction. Stress-strain results of the 2D FEA were compared with the 3D FEA results and experiments. Simplifying the air bending problem from 3D to 2D resulted in a tremendous reduction in solution time with only a marginal effect on the stress-strain results. FE model simplification by studying the problem symmetry is a more efficient and practical approach for the solution of more complex, large-dimension, slow forming processes.
Keywords: Air V-bending, finite element analysis, Hyperform LS-DYNA, planar symmetry.
2803 Vibration Suppression of Timoshenko Beams with Embedded Piezoelectrics Using POF
Authors: T. C. Manjunath, B. Bandyopadhyay
Abstract:
This paper deals with the design of a periodic output feedback controller for a flexible beam structure modeled with Timoshenko beam theory, the Finite Element Method, state space methods and the embedded piezoelectrics concept. The first three modes are considered in modeling the beam. The main objective of this work is to control the vibrations of the beam when it is subjected to an external force. Shear piezoelectric sensors and actuators are embedded into the top and bottom layers of a flexible aluminum beam structure, thus making it intelligent and self-adaptive. The composite beam is divided into five finite elements; the control actuator is placed at finite element position 1, whereas the sensor is varied from position 2 to 5, i.e., from near the fixed end to the free end. Four state space SISO models are thus developed. Periodic Output Feedback (POF) controllers are designed for the four SISO models of the same plant to control the flexural vibrations. The effect of placing the sensor at different locations on the beam is observed, and the performance of the controller is evaluated for vibration control. Conclusions are finally drawn.
Keywords: Smart structure, Timoshenko beam theory, periodic output feedback control, Finite Element Method, state space model, SISO, embedded sensors and actuators, vibration control.
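A modal state-space model of the kind developed here can be sketched as follows; the frequencies, damping, and influence vectors are assumed values for two modes, not the identified beam parameters:

```python
import numpy as np
from scipy import signal

# Modal state-space sketch: states [q1, q1_dot, q2, q2_dot],
# input = piezo actuator voltage, output = piezo sensor signal.
w = np.array([20.0, 125.0]) * 2 * np.pi       # modal frequencies (rad/s), assumed
zeta = 0.01                                    # modal damping ratio, assumed
A = np.zeros((4, 4))
for i, wi in enumerate(w):
    A[2*i:2*i+2, 2*i:2*i+2] = [[0, 1], [-wi**2, -2*zeta*wi]]
B = np.array([[0.0], [0.8], [0.0], [-0.5]])   # actuator influence (assumed)
C = np.array([[1.0, 0.0, 0.6, 0.0]])          # sensor picks up both modes
sys = signal.StateSpace(A, B, C, np.zeros((1, 1)))

t, y = signal.impulse(sys)                     # open-loop vibration response
print(y[:5])
```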
2802 Diagnosis of the Abdominal Aorta Aneurysm in Magnetic Resonance Imaging Images
Authors: W. Kultangwattana, K. Somkantha, P. Phuangsuwan
Abstract:
This paper presents a technique for the diagnosis of abdominal aorta aneurysms in magnetic resonance imaging (MRI) images. First, the technique segments the aorta in the MRI images; this step is required to determine the volume of the aorta, which is the important step for diagnosing an abdominal aorta aneurysm. The proposed technique detects the aorta in MRI images using a new external energy for the snakes model, calculated from Laws' texture measures. The new external energy increases the capture range of the snakes model more efficiently than the older external energies of snakes models. Second, the technique diagnoses the abdominal aorta aneurysm with a Bayesian classifier, a classification model based on statistical theory. The features for classifying an abdominal aorta aneurysm, namely area, perimeter and compactness, were derived from the aorta contour resulting from the segmentation by our snakes model. We also compare the proposed technique with the traditional snakes model. In our experiments, 30 images were used for training and 20 images for testing, with the results compared against expert opinion. The experimental results show that our technique provides an accuracy higher than 95%.
Keywords: Abdominal Aorta Aneurysm, Bayesian Classifier, Snakes Model, Texture Feature.
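Of the three contour features used by the classifier, compactness is the only derived one; a one-function sketch with illustrative numbers:

```python
import math

def compactness(area, perimeter):
    """Shape compactness P^2 / (4*pi*A); equals 1.0 for a perfect circle."""
    return perimeter**2 / (4.0 * math.pi * area)

# Contour features of a segmented aorta cross-section (illustrative values)
print(compactness(area=450.0, perimeter=80.0))
```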
2801 Perceptions of Educators on the Learners’ Youngest Age for the Introduction of ICTs in Schools: A Personality Theory Approach
Authors: K. E. Oyetade, S. D. Eyono Obono
Abstract:
Age ratings are very helpful in providing parents with relevant information for the purchase and use of digital technologies by children; this is why the absence of defined age ratings for the use of ICTs by children in schools is a major concern, and this problem motivates this study, whose aim is to examine the factors affecting the perceptions of educators on the learners' youngest age for the introduction of ICTs in schools. This aim is achieved through two types of research objectives: the identification and design of theories and models on age ratings, and the empirical testing of such theories and models in a survey of educators from the Camperdown district of the South African KwaZulu-Natal province. A questionnaire was used for data collection, and its validity and reliability were checked in SPSS prior to the descriptive and correlative quantitative analysis. The main hypothesis supporting this research is the association between the demographics of educators, their personality, and their perceptions on the learners' youngest age for the introduction of ICTs in schools, as claimed by existing research; the present study, however, looks at personality from three dimensions: self-actualized personalities, fully functioning personalities, and healthy personalities. This hypothesis was fully confirmed by the empirical study except for the demographic factor, where only the educators' grade or class was found to be associated with their personality.
Keywords: Age ratings, Educators, E-learning, Personality Theories.
2800 Complex Condition Monitoring System of Aircraft Gas Turbine Engine
Authors: A. M. Pashayev, D. D. Askerov, C. Ardil, R. A. Sadiqov, P. S. Abdullayev
Abstract:
Research shows that the application of probability-statistical methods, especially at the early stages of diagnosing the technical condition of an aviation Gas Turbine Engine (GTE), is unfounded when the flight information is fuzzy, limited and uncertain. Hence, the efficiency of applying the new Soft Computing technology, using fuzzy logic and neural network methods, at these diagnosing stages is considered. For this purpose, fuzzy multiple linear and non-linear models (fuzzy regression equations), obtained on the basis of statistical fuzzy data, are trained with high accuracy. To make the GTE technical condition model more adequate, the dynamics of changes in the skewness and kurtosis coefficients are analysed. Studies of the changes in skewness and kurtosis coefficient values characterize the distributions of the GTE work and output parameters of the multiple linear and non-linear generalised models in the presence of measurement noise, estimated with a new recursive Least Squares Method (LSM). The developed GTE condition monitoring system provides stage-by-stage estimation of the engine's technical condition. As an application of the given technique, the technical condition of a new operating aviation engine was estimated.
Keywords: Aviation gas turbine engine, technical condition, fuzzy logic, neural networks, fuzzy statistics.
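Tracking the shape of a parameter's distribution over time, as done with the skewness and kurtosis coefficients above, can be sketched as follows on a synthetic exhaust-gas-temperature trace (values assumed):

```python
import numpy as np
from scipy.stats import kurtosis, skew

# Synthetic engine output parameter with a slow drift
rng = np.random.default_rng(8)
egt = 850 + rng.normal(0, 5, 1000) + np.linspace(0, 8, 1000)

window = 100
for start in range(0, 1000, 250):
    seg = egt[start:start + window]
    print(f"t={start:4d}: skew={skew(seg):+.2f}, kurtosis={kurtosis(seg):+.2f}")
```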
2799 Optimizing of Fuzzy C-Means Clustering Algorithm Using GA
Authors: Mohanad Alata, Mohammad Molhim, Abdullah Ramini
Abstract:
The Fuzzy C-Means clustering algorithm (FCM) is a method frequently used in pattern recognition. It has the advantage of giving good modeling results in many cases, although it is not capable of specifying the number of clusters by itself. In the FCM algorithm, most researchers fix the weighting exponent (m) to a conventional value of 2, which might not be appropriate for all applications. Consequently, the main objective of this paper is to use the subtractive clustering algorithm to provide the optimal number of clusters needed by the FCM algorithm, by optimizing the parameters of the subtractive clustering algorithm with an iterative search approach, and then to find an optimal weighting exponent (m) for the FCM algorithm. In order to obtain an optimal number of clusters, the iterative search approach is used to find the optimal single-output Sugeno-type Fuzzy Inference System (FIS) model by optimizing the parameters of the subtractive clustering algorithm that give the minimum least-squares error between the actual data and the Sugeno fuzzy model. Once the number of clusters is optimized, two approaches are proposed to optimize the weighting exponent (m) in the FCM algorithm, namely the iterative search approach and genetic algorithms. The above-mentioned approach is tested on data generated from the original function, and optimal fuzzy models are obtained with minimum error between the real data and the obtained fuzzy models.
Keywords: Fuzzy clustering, Fuzzy C-Means, Genetic Algorithm, Sugeno fuzzy systems.
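For reference, the FCM iteration whose weighting exponent m is being optimized can be written compactly; this is a plain textbook implementation, not the GA-tuned version of the paper:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means; m is the weighting exponent discussed above."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # random fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)      # membership update
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
centers, U = fcm(X, c=2, m=2.0)
print(np.round(centers, 2))
```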