Search results for: digital terrain models
375 Educating the Educators: Interdisciplinary Approaches to Enhance Science Teaching
Authors: Denise Levy, Anna Lucia C. H. Villavicencio
Abstract:
In a rapidly changing world, science teachers face considerable challenges. In addition to the basic curriculum, several transversal themes must be included, and these demand creative and innovative strategies to be arranged and integrated into traditional disciplines. In Brazil, nuclear science is still a controversial theme, and teachers themselves seem to be unaware of the issue, most often perpetuating prejudice, errors and misconceptions. This article presents the authors' experience in the development of an interdisciplinary pedagogical proposal to include nuclear science in the basic curriculum in a transversal and integrating way. The methodology applied was based on the analysis of several normative documents that define the requirements of essential learning, competences and skills of basic education for all schools in Brazil. The didactic materials and resources were developed according to best practices for improving learning processes, privileging constructivist educational techniques with emphasis on active learning, collaborative learning and learning through research. The material consists of an illustrated book for students, a book for teachers and a manual of activities that can articulate nuclear science with different disciplines: Portuguese, mathematics, science, art, English, history and geography. The content maintains high scientific rigor and articulates nuclear technology with topics of interest to society in the most diverse spheres, such as food supply, public health, food safety and foreign trade. Moreover, this pedagogical proposal takes advantage of the potential value of digital technologies, implementing QR codes that excite and challenge students of all ages, improving interaction and engagement. The expected results include the education of the educators for nuclear science communication in a transversal and integrating way, demystifying nuclear technology in a contextualized and meaningful approach. It is expected that the interdisciplinary pedagogical proposal will contribute to improving attitudes towards knowledge construction, privileging reconstructive questioning, fostering a culture of systematic curiosity and encouraging critical thinking skills.
Keywords: Science education, interdisciplinary learning, nuclear science, scientific literacy.
374 Computational Modeling in Strategic Marketing
Authors: Petr Cernohorsky, Jan Voracek
Abstract:
Well-developed strategic marketing planning is the essential prerequisite for establishing the right and unique competitive advantage. A typical market, however, is a heterogeneous and decentralized structure with natural involvement of individual or group subjectivity and irrationality. These features cannot be fully expressed with one-shot rigorous formal models based on, e.g., mathematics, statistics or empirical formulas. We present an innovative solution, extending the domain of agent-based computational economics towards the concept of hybrid modeling in a service provider and consumer market such as telecommunications. The behavior of the market is described by two classes of agents - consumer and service provider agents - whose internal dynamics are fundamentally different. Customers are rather free multi-state structures, adjusting their behavior and preferences quickly in accordance with time and the changing environment. Producers, on the contrary, are traditionally structured companies with comparable internal processes and specific managerial policies. Their business momentum is higher and their immediate reaction possibilities are limited. This limitation underlines the importance of proper strategic planning as the main process advising managers in time whether to continue with more or less the same business or whether to consider the need for future structural changes that would ensure retention of existing customers or acquisition of new ones.
Keywords: Agent-based computational economics, hybrid modeling, strategic marketing, system dynamics.
373 Analysis of Hard Turning Process of AISI D3-Thermal Aspects
Authors: B. Varaprasad, C. Srinivasa Rao
Abstract:
In the manufacturing sector, hard turning has emerged as a vital machining process for cutting hardened steels. Besides its many advantages, the hard turning operation must be implemented so as to achieve close tolerances in terms of surface finish, high product quality, reduced machining time, low operating cost and environmentally friendly characteristics. In the present study, a three-dimensional CAE (Computer Aided Engineering) based simulation of hard turning using the commercial software DEFORM 3D is compared to experimental results for stresses, temperatures and tool forces in machining of AISI D3 steel using mixed ceramic inserts (CC6050). In the present analysis, orthogonal cutting models are proposed, considering several processing parameters such as cutting speed, feed, and depth of cut. Exhaustive friction modeling at the tool-work interface is carried out. Work material flow around the cutting edge is carefully modeled with an adaptive re-meshing simulation capability. In the process simulations, feed rate and cutting speed are constant (0.075 mm/rev and 155 m/min, respectively), and the analysis is focused on stresses, forces, and temperatures during machining. Close agreement is observed between the CAE simulation and experimental values.
Keywords: Hard turning, computer-aided engineering, computational machining, finite element method.
372 Model Order Reduction of Linear Time Variant High Speed VLSI Interconnects using Frequency Shift Technique
Authors: J. V. R. Ravindra, M. B. Srinivas
Abstract:
Accurate modeling of high speed RLC interconnects has become a necessity to address signal integrity issues in current VLSI design. To accurately model a dispersive system of interconnects at higher frequencies, a full-wave analysis is required. However, conventional circuit simulation of interconnects with full-wave models is extremely CPU expensive. We present an algorithm for reducing large VLSI circuits to much smaller ones with similar input-output behavior. A key feature of our method, called the Frequency Shift Technique, is that it is capable of reducing linear time-varying systems. This enables it to capture frequency-translation and sampling behavior, important in communication subsystems such as mixers, RF components and switched-capacitor filters. Reduction is obtained by projecting the original system, described by linear differential equations, into a lower dimension. Experiments have been carried out using the Cadence design simulator, which indicate that the proposed technique achieves a greater percentage reduction with less CPU time than other model order reduction techniques in the literature. We also present applications to RF circuit subsystems, obtaining size reductions and evaluation speedups of orders of magnitude with insignificant loss of accuracy.
Keywords: Model order reduction, RLC, crosstalk.
371 Optimization of Kinematics for Birds and UAVs Using Evolutionary Algorithms
Authors: Mohamed Hamdaoui, Jean-Baptiste Mouret, Stephane Doncieux, Pierre Sagaut
Abstract:
The aim of this work is to present a multi-objective optimization method to find maximum-efficiency kinematics for a flapping wing unmanned aerial vehicle. We restricted our study to rectangular wings with the same profile along the span and to harmonic dihedral motion. It is assumed that the bird-like aerial vehicle (whose span and surface area were fixed to 1 m and 0.15 m², respectively) is in horizontal, mechanically balanced motion at fixed speed. We used two flight physics models to describe the vehicle's aerodynamic performance, namely DeLaurier's model, which has been used in many studies dealing with flapping wings, and the model proposed by Dae-Kwan et al. Then, a constrained multi-objective optimization of the propulsive efficiency is performed using a recent evolutionary multi-objective algorithm called ε-MOEA. Firstly, we show that feasible solutions (i.e. solutions that fulfil the imposed constraints) can be obtained using Dae-Kwan et al.'s model. Secondly, we highlight that a single-objective optimization approach (the weighted sum method, for example) can also give optimal solutions as good as those of the multi-objective one, which nevertheless offers the advantage of directly generating the set of best trade-offs. Finally, we show that DeLaurier's model does not yield feasible solutions.
Keywords: Flight physics, evolutionary algorithm, optimization, Pareto surface.
370 Artificial Neural Networks Technique for Seismic Hazard Prediction Using Seismic Bumps
Authors: Belkacem Selma, Boumediene Selma, Samira Chouraqui, Hanifi Missoum, Tourkia Guerzou
Abstract:
Natural disasters have occurred and will continue to cause human and material damage. Therefore, the idea of "preventing" natural disasters will never be possible. However, their prediction is possible with the advancement of technology. Even if natural disasters are effectively inevitable, their consequences may be partly controlled. The rapid growth and progress of artificial intelligence (AI) has had a major impact on the prediction of natural disasters and the risk assessment necessary for effective disaster reduction. Earthquake prediction to prevent the loss of human lives and even property damage is an important factor; that is why it is crucial to develop techniques for predicting this natural disaster. This study aims to analyze the ability of artificial neural networks (ANNs) to predict earthquakes that occur in a given area. The data used describe the problem of forecasting high-energy (higher than 10⁴ J) seismic bumps in a coal mine, using two longwalls as an example. For this purpose, seismic bump data obtained from mines have been analyzed. The results obtained show that the ANN is able to predict earthquake parameters with high accuracy; the classification accuracy through neural networks is more than 94%, and the models developed are efficient and robust, depending only weakly on the initial database.
Keywords: Earthquake prediction, artificial intelligence, AI, Artificial Neural Network, ANN, seismic bumps.
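A minimal sketch of the kind of ANN classifier described above, using scikit-learn's MLPClassifier on synthetic stand-in data; the feature count, network size and labeling rule are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch: a small feed-forward neural network classifying high-energy seismic
# bumps, in the spirit of the ANN described in the abstract above.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((2500, 18))                          # seismic attributes per shift (placeholder)
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)     # 1 = bump with energy > 10^4 J (synthetic rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0).fit(X_tr, y_tr)
print("classification accuracy:", accuracy_score(y_te, ann.predict(X_te)))
```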
369 Evolutionary Techniques for Model Order Reduction of Large Scale Linear Systems
Authors: S. Panda, J. S. Yadav, N. P. Patidar, C. Ardil
Abstract:
Recently, genetic algorithm (GA) and particle swarm optimization (PSO) techniques have attracted considerable attention among modern heuristic optimization techniques. The GA has been popular in academia and industry mainly because of its intuitiveness, ease of implementation, and ability to effectively solve highly non-linear, mixed-integer optimization problems that are typical of complex engineering systems. The PSO technique is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behavior of biological populations. In this paper, both PSO and GA optimization are employed for finding stable reduced-order models of single-input single-output large-scale linear systems. Both techniques guarantee stability of the reduced-order model if the original high-order model is stable. The PSO method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model pertaining to a unit step input. Both methods are illustrated through a numerical example from the literature, and the results are compared with a recently published conventional model reduction technique.
Keywords: Genetic Algorithm, Particle Swarm Optimization, Order Reduction, Stability, Transfer Function, Integral Squared Error.
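As an illustration of the ISE criterion above, the following minimal sketch reduces a hypothetical stable fourth-order transfer function to second order by minimizing the ISE between unit-step responses; scipy's differential_evolution stands in for the GA/PSO optimizers used in the paper, and the example system is not taken from it.

```python
# Hedged sketch: reduced-order modelling by minimizing the Integral Squared Error (ISE)
# between the step responses of a full-order and a 2nd-order model.
import numpy as np
from scipy import signal
from scipy.optimize import differential_evolution

# Hypothetical stable 4th-order original model (illustrative, not from the paper)
num_full = [1.0, 8.0, 20.0, 16.0]
den_full = [1.0, 10.0, 35.0, 50.0, 24.0]
t = np.linspace(0.0, 10.0, 500)
_, y_full = signal.step(signal.TransferFunction(num_full, den_full), T=t)

def ise(params):
    b1, b0, a1, a0 = params
    den_red = [1.0, a1, a0]
    if np.any(np.real(np.roots(den_red)) >= 0.0):   # enforce stability of the reduced model
        return 1e6
    _, y_red = signal.step(signal.TransferFunction([b1, b0], den_red), T=t)
    return np.sum((y_full - y_red) ** 2) * (t[1] - t[0])   # discrete ISE

bounds = [(0.0, 10.0)] * 4
result = differential_evolution(ise, bounds, seed=0)
print("reduced-order parameters [b1, b0, a1, a0]:", result.x, "ISE:", result.fun)
```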
368 A Comprehensive Review of Adaptive Building Energy Management Systems Based on Users’ Feedback
Authors: P. Nafisi Poor, P. Javid
Abstract:
Over the past few years, the idea of adaptive buildings and, specifically, adaptive building energy management systems (ABEMS) has become popular. Well-performed energy management creates a balance between energy consumption and user comfort; therefore, in new energy management models, efficient energy consumption is not the sole factor and the user's comfort is also considered in the calculations. One of the main ways of measuring this factor is by analyzing user feedback on the conditions to understand whether users are satisfied with them or not. This paper provides a comprehensive review of recent approaches towards energy management systems based on users' feedback and subsequently compares them in terms of efficiency and accuracy, to understand which approaches were more accurate and which resulted in a more efficient way of minimizing energy consumption while maintaining users' comfort. It was concluded that the highest accuracy rate among the presented works was 95% in determining satisfaction, and that up to 51.08% energy savings can be achieved without disturbing users' comfort. Considering the growing interest in designing and developing adaptive buildings, these studies can support diverse inquiries about this subject and can be used as a resource for research towards efficient energy consumption that maintains the comfort of users.
Keywords: Adaptive buildings, energy efficiency, intelligent buildings, user comfortability.
367 Grid Coordination with Marketmaker Agents
Authors: Xin Bai, Kresimir Sivoncik, Damla Turgut, Ladislau Bölöni
Abstract:
Market-based models are frequently used for resource allocation on the computational grid. However, as the size of the grid grows, it becomes difficult for the customer to negotiate directly with all the providers. Middle agents are introduced to mediate between the providers and customers and facilitate the resource allocation process. The most frequently deployed middle agents are matchmakers and brokers. The matchmaking agent finds possible candidate providers who can satisfy the requirements of the consumers, after which the customer directly negotiates with the candidates. Broker agents mediate the negotiation with the providers in real time. In this paper we present a new type of middle agent, the marketmaker. Its operation is based on two parallel processes: through the investment process the marketmaker acquires resources and resource reservations in large quantities, while through the resale process it sells them to the customers. The operation of the marketmaker relies on the fact that, through its global view of the grid, it can perform a more efficient resource allocation than is possible in one-to-one negotiations between customers and providers. We present the operation of the marketmaker agent and the algorithms governing it, contrasting it with the matchmaker and broker agents. Through a series of simulations in the task-oriented domain we compare the operation of the three agent types. We find that the use of the marketmaker agent leads to better performance in the allocation of large tasks and a significant reduction of the messaging overhead.
Keywords: Grid computing, autonomous agents, market-based grid.
366 Internal Structure Formation in High Strength Fiber Concrete during Casting
Authors: Olga Kononova, Andrejs Krasnikovs, Videvuds Lapsa, Jurijs Kalinka, Angelina Galushchak
Abstract:
The post-cracking behavior and load-bearing capacity of steel fiber reinforced high-strength concrete (SFRHSC) depend on the number of fibers crossing the weakest crack (bridging the crack) and on their orientation to the crack surface. When the mould is filled with SFRHSC, fibers move and rotate with the concrete matrix flow until the motion stops at each internal point of the concrete body. By filling the same mould from different ends, SFRHSC samples with different internal structures (and different strengths) can be obtained. Numerical flow simulations (using Newtonian and Bingham flow models) were carried out, and the planar motion and rotation of a single fiber in viscous flow were investigated numerically and experimentally. X-ray pictures of prismatic samples were obtained, and the internal fiber positions and orientations were analyzed. Similarly, fiber positions and orientations in the cracked cross-section were identified and compared with the numerically simulated ones. A structural SFRHSC fracture model was created based on single-fiber pull-out laws, which were determined experimentally. Model predictions were validated by four-point bending tests on 15×15×60 cm prisms.
Keywords: Fibers, orientation, high-strength concrete, flow.
365 Catchment Yield Prediction in an Ungauged Basin Using PyTOPKAPI
Authors: B. S. Fatoyinbo, D. Stretch, O. T. Amoo, D. Allopi
Abstract:
This study extends the use of the Drainage Area Regionalization (DAR) method in generating synthetic data and calibrating PyTOPKAPI stream yield for an ungauged basin at a daily time scale. The generation of runoff in determining a river yield is subject to various topographic and spatial meteorological variables, which together form the Catchment Characteristics Model (CCM). Many of the conventional CCM models adopted in Africa have been challenged by a paucity of adequate, relevant and accurate data to parameterize and validate them. The purpose of generating synthetic flow is to test a hydrological model in a way that does not suffer from the impact of very low or very high flows, thus allowing a check of whether the model is structurally sound or not. The employed physically based, watershed-scale hydrologic model (PyTOPKAPI) was parameterized with GIS pre-processing parameters and remote-sensing hydro-meteorological variables. Validation against the mean annual runoff ratio shows reasonable graphical agreement between the observed and simulated discharge. Nash-Sutcliffe efficiency and coefficient of determination (R²) values of 0.704 and 0.739 indicate strong model efficiency. Given the impact of current climate variability, water planners now have a tool for flow quantification and sustainable planning purposes.
Keywords: Ungauged Basin, Catchment Characteristics Model, Synthetic data, GIS.
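A minimal sketch of the two goodness-of-fit measures quoted above (Nash-Sutcliffe efficiency and R²), computed for arbitrary observed/simulated daily discharge series; the arrays are placeholders, not the study data.

```python
# Hedged sketch: Nash-Sutcliffe efficiency (NSE) and coefficient of determination (R^2)
# for an observed versus simulated discharge series.
import numpy as np

observed  = np.array([12.0, 15.3, 9.8, 20.1, 18.4, 11.2])   # m^3/s, hypothetical
simulated = np.array([11.5, 14.9, 10.4, 19.0, 17.2, 12.0])  # m^3/s, hypothetical

nse = 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)
r = np.corrcoef(observed, simulated)[0, 1]
print(f"NSE = {nse:.3f}, R^2 = {r**2:.3f}")
```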
364 Requirements Driven Multiple View Paradigm for Developing Security Architecture
Authors: K. Chandra Sekaran
Abstract:
This paper describes a paradigmatic approach to developing the architecture of secure systems by describing the requirements from four different points of view: that of the owner, the administrator, the user, and the network. Deriving requirements and developing architecture implies jointly eliciting and describing the problem and the structure of the solution. The viewpoints proposed in this paper are those we consider essential, given their holders' contributions as major parties in the design, implementation, usage and maintenance of secure systems. The dramatic growth of Internet technology and of the applications deployed on the World Wide Web has led to a situation where security has become a very important concern in the development of secure systems. Many security approaches are currently being used in organizations, yet in spite of the widespread use of many different security solutions, security remains a problem. It is argued that the approach described in this paper for the development of secure architecture is practical in every respect. The models representing these multiple points of view are termed the requirements model (views of the owner and administrator) and the operations model (views of the user and network). In this paper, this multiple view paradigm is explained by first describing the specific requirements and/or characteristics of secure systems (particularly in the domain of networks) and then the secure architecture / system development methodology.
Keywords: Multiple view paradigms, requirements model, operations model, secure system, owner, administrator, user, network.
363 Investigation of the Effect of Number of Story on Different Structural Components of RC Building
Authors: Zasiah Tafheem, Mahadee Hasan Shourav, Zahidul Islam, Saima Islam Tumpa
Abstract:
The paper aims at investigating the effect of the number of stories on different structural components of a reinforced concrete building under gravity and lateral loading. For the study, three building models having the same building plan, of three, six and nine stories, are analyzed and designed using a software package. All the buildings are residential and are located in Dhaka city, Bangladesh. Lateral loads, including wind and earthquake loading, are applied to the buildings along both the longitudinal and transverse directions as per the Bangladesh National Building Code (BNBC, 2006). The equivalent static force method is followed for the applied seismic loading. The present study mainly investigates and compares the total steel requirement in different structural components of these buildings. It has been found that the total longitudinal steel requirement for beams at each floor is 48.57% for the three-storied building and 61.36% for the six-storied building, when the requirement of the nine-storied building is taken as 100%. For an exterior column, the steel ratio is 2.1%, 3.06% and 4.55% for the three-, six- and nine-storied buildings, respectively, for the first three floors. In addition, the total weight of longitudinal reinforcement of an interior column for the first three floors is 14.02% for the three-storied building and 43.12% for the six-storied building, when the reinforcement of the nine-storied building is considered 100%.
Keywords: Equivalent static force method, longitudinal reinforcement, seismic loading, steel ratio.
362 Design and Fabrication of Stent with Negative Poisson’s Ratio
Authors: S. K. Bhullar, J. Ko, F. Ahmed, M. B. G. Jun
Abstract:
Negative Poisson's ratios can be described in terms of models based on the geometry of the system and the way this geometry changes due to applied loads. Since the Poisson's ratio does not depend on scale, deformation can take place from the nano to the macro level; the only requirement is the right combination of geometry. Our thrust in this paper is to combine our knowledge of the tailored, enhanced mechanical properties of materials having a negative Poisson's ratio with micromachining and electrospinning technology to develop a novel stent carrying a drug delivery system. Therefore, the objectives of this paper include (i) fabrication of a micromachined metal sheet tailored with a structure having a negative Poisson's ratio, based on a rotating-solid-squares geometry, using femtosecond laser ablation; (ii) rolling and welding the fabricated structure to make it tubular; (iii) wrapping it with nanofibers of the biocompatible polymer PCL (polycaprolactone) for drug delivery; and (iv) analytical and experimental analysis of the functional and mechanical performance of the fabricated structure. Further, as far as applications are concerned, tubular structures have potential in biomedicine; for example, hollow tubes called stents are placed inside the body to provide mechanical support to a damaged artery or diseased region and to open a blocked esophagus, thus restoring feeding capacity and improving quality of life.
Keywords: Micromachining, electrospinning, auxetic materials, enhanced mechanical properties.
361 Transformability in Post-Earthquake Houses in Iran: with Special Focus on Lar City
Authors: M. Parva, K. Dola, F. Pour Rahimian
Abstract:
Earthquake is considered one of the most catastrophic disasters in Iran, in terms of both short-term and long-term hazards. Due to the particular financial and time constraints in Iran, quickly constructed post-earthquake houses (PEHs) do not fulfill the minimum requirements to be considered comfortable dwellings. Consequently, people often transform PEHs after they start to reside in them. However, a lack of understanding of the process, motivation, and results of housing transformation leads to the construction of some houses that are not suitable for future transformations, resulting in eventually demolished or abandoned PEHs. This study investigated housing transformations in the natural setting of post-earthquake Lar. This paper reports the results of a survey comparing normal-condition housing transformation with post-earthquake housing transformation, in order to reveal the factors that affect the latter in Iran. The findings propose the use of a combination of ‘Temporary’ and ‘Permanent’ housing reconstruction models in Iran to provide victims with basic but permanent post-disaster dwellings. It is also suggested that needs for future transformation should be predicted and addressed during the early stages of design and development. This study contributes to both research and practice regarding post-earthquake housing reconstruction in Iran by proposing new design approaches and guidelines.
Keywords: Housing transformation, Iran, Lar, post-earthquake housing.
360 Offset Dependent Uniform Delay Mathematical Optimization Model for Signalized Traffic Network Using Differential Evolution Algorithm
Authors: Tahseen Al-Shaikhli, Halim Ceylan, Jonathan Weaver, Osman Nuri Çelik, Onur Gungor Sahin
Abstract:
An offset-dependent uniform delay mathematical optimization problem is derived as the main objective of this study and solved using a differential evolution algorithm. Furthermore, the objectives are to control the coordination problem, which mainly depends on offset selection, and to estimate the uniform delay based on the offset choice at each signalized intersection. Arrival and departure patterns are assumed to follow a periodic sinusoidal function. The cycle time is optimized at the entry links, and the optimized value is used in the non-entry links as a common cycle time. The offset optimization algorithm is used to calculate the uniform delay at each link. The results are illustrated using a case study and compared with the canonical uniform delay model derived by Webster and with the Highway Capacity Manual's model. The findings show that the derived model reduces the total uniform delay to almost half of that of the conventional models, that the mathematical objective function is robust, and that the algorithm converges quickly.
Keywords: Area traffic control, differential evolution, offset variable, sinusoidal periodic function, traffic flow, uniform delay.
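For reference, a minimal sketch of the canonical Webster/HCM uniform delay term that the abstract uses as its comparison baseline; C is the cycle length (s), g the effective green (s) and x the volume-to-capacity ratio, with purely illustrative values.

```python
# Hedged sketch: Webster/HCM uniform delay term (average delay per vehicle, s/veh).
def uniform_delay(C, g, x):
    gc = g / C
    return 0.5 * C * (1.0 - gc) ** 2 / (1.0 - min(x, 1.0) * gc)

print(uniform_delay(C=90.0, g=40.0, x=0.85))   # illustrative signal-timing values
```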
359 Simulation of the Pedestrian Flow in the Tawaf Area Using the Social Force Model
Authors: Zarita Zainuddin, Kumatha Thinakaran, Mohammed Shuaib
Abstract:
In today's world, the number of vehicles on the road is increasing. This causes more people to choose walking instead of traveling by vehicle. Thus, proper planning of pedestrians' paths is important to ensure the safety of pedestrians in a walking area. Crowd dynamics studies pedestrians' behavior and models pedestrians' movement to ensure safety along their walking paths. To date, many models have been designed to ease pedestrians' movement. The Social Force Model is widely used among researchers as it is simpler and provides better simulation results. We discuss the problem regarding the ritual of circumambulating the Ka'aba (Tawaf), where the entrances to this area are usually congested, a situation that worsens during the Hajj season. We use the computer simulation model SimWalk, which is based on the Social Force Model, to simulate the movement of pilgrims in the Tawaf area. We first discuss the effect of uni- and bi-directional flows at the gates. We then restrict certain gates to serve as entrances only and others as exits only. From the simulations, we study the effect of the distance of the other entrances from the starting line on the time pilgrims take to circumambulate the Ka'aba. We distribute the pilgrims evenly among the different entrances so that the congestion at the entrances can be reduced. We also discuss various locations and designs of barriers at the exits and their effect on the time taken for the pilgrims to exit the Tawaf area.
Keywords: Circumambulation, Ka'aba, pedestrian flow, SFM, Tawaf, entrance, exit.
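A minimal sketch of the basic Social Force Model update that tools such as SimWalk build on: each pedestrian feels a driving force towards a goal plus exponential repulsion from neighbours. The parameter values (tau, A, B, radius) are illustrative textbook-style choices, not those used in the study.

```python
# Hedged sketch: one time step of a circular-specification Social Force Model,
# treating mass as 1 so forces act directly as accelerations.
import numpy as np

def social_force_step(pos, vel, goal, dt=0.1, v0=1.3, tau=0.5, A=2.0, B=0.3, radius=0.3):
    desired_dir = goal - pos
    desired_dir /= np.linalg.norm(desired_dir, axis=1, keepdims=True)
    force = (v0 * desired_dir - vel) / tau                 # driving force towards the goal
    for i in range(len(pos)):                              # pairwise repulsion from neighbours
        d = pos[i] - np.delete(pos, i, axis=0)
        dist = np.linalg.norm(d, axis=1, keepdims=True)
        force[i] += np.sum(A * np.exp((2 * radius - dist) / B) * d / dist, axis=0)
    vel = vel + dt * force
    return pos + dt * vel, vel

# Two pedestrians heading to the same exit (hypothetical coordinates, metres)
pos = np.array([[0.0, 0.0], [1.0, 0.2]])
vel = np.zeros_like(pos)
goal = np.array([[10.0, 0.0], [10.0, 0.0]])
pos, vel = social_force_step(pos, vel, goal)
```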
358 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model
Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin
Abstract:
Early detection of anomalies in data centers is important to reduce downtimes and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center's history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.
Keywords: Anomaly detection, autoencoder, data centers, deep learning.
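A minimal sketch of the per-sensor reconstruction idea described above: one small LSTM autoencoder per sensor trained on normal windows only, with the reconstruction-error signal summarized into features for a random forest classifier. Window length, layer sizes, toy training settings and the random stand-in data are assumptions for illustration, not the authors' configuration.

```python
# Hedged sketch: per-sensor LSTM autoencoders + random forest on reconstruction-error features.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

WINDOW, N_SENSORS = 32, 3   # e.g. temperature, humidity, power

def make_autoencoder(window):
    return tf.keras.Sequential([
        tf.keras.Input(shape=(window, 1)),
        tf.keras.layers.LSTM(16),                           # encoder
        tf.keras.layers.RepeatVector(window),
        tf.keras.layers.LSTM(16, return_sequences=True),    # decoder
        tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1)),
    ])

# Hypothetical data: normal windows for training, mixed windows + labels for the classifier
normal = [np.random.rand(500, WINDOW, 1) for _ in range(N_SENSORS)]
mixed  = [np.random.rand(200, WINDOW, 1) for _ in range(N_SENSORS)]
labels = np.random.randint(0, 2, size=200)

autoencoders = []
for s in range(N_SENSORS):
    ae = make_autoencoder(WINDOW)
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(normal[s], normal[s], epochs=2, batch_size=32, verbose=0)   # toy settings
    autoencoders.append(ae)

# Feature extraction: mean and max absolute reconstruction error per sensor and window
feats = []
for s, ae in enumerate(autoencoders):
    err = np.abs(mixed[s] - ae.predict(mixed[s], verbose=0))
    feats.append(np.c_[err.mean(axis=(1, 2)), err.max(axis=(1, 2))])
features = np.hstack(feats)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, labels)
```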
357 MPPT Operation for PV Grid-connected System using RBFNN and Fuzzy Classification
Authors: A. Chaouachi, R. M. Kamel, K. Nagasaka
Abstract:
This paper presents a novel methodology for Maximum Power Point Tracking (MPPT) of a grid-connected 20 kW photovoltaic (PV) system using a neuro-fuzzy network. The proposed method predicts the reference PV voltage guaranteeing optimal power transfer between the PV generator and the main utility grid. The neuro-fuzzy network is composed of a fuzzy rule-based classifier and three Radial Basis Function Neural Networks (RBFNN). The inputs of the network (irradiance and temperature) are classified before they are fed into the appropriate RBFNN for either the training or the estimation process, while the output is the reference voltage. The main advantage of the proposed methodology, compared to a conventional single-neural-network approach, is its distinct generalization ability with regard to the nonlinear and dynamic behavior of a PV generator. In fact, the neuro-fuzzy network is a neural-network-based multi-model machine learning scheme that defines a set of local models emulating the complex and non-linear behavior of a PV generator under a wide range of operating conditions. Simulation results under several rapid irradiance variations show that the proposed MPPT method achieved the highest efficiency compared to a conventional single neural network.
Keywords: MPPT, neuro-fuzzy, RBFN, grid-connected, photovoltaic.
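A minimal sketch of the classify-then-regress routing idea above: inputs (irradiance, temperature) are first assigned to a coarse operating class, and one RBF model per class then maps them to a reference voltage. scikit-learn's KernelRidge with an RBF kernel stands in for the RBFNN, the fuzzy classifier is reduced to simple irradiance thresholds, and the data and voltage law are synthetic placeholders.

```python
# Hedged sketch: route (irradiance, temperature) samples to one RBF regressor per class.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
G = rng.uniform(100, 1000, 600)            # irradiance, W/m^2 (placeholder)
T = rng.uniform(10, 45, 600)               # module temperature, deg C (placeholder)
V_ref = 30.0 - 0.07 * (T - 25.0) + 0.002 * (G - 500.0)   # synthetic reference voltage

def classify(g):
    """Stand-in for the fuzzy rule-based classifier: three irradiance classes."""
    return np.digitize(g, [300.0, 700.0])

X = np.column_stack([G, T])
labels = classify(G)
models = {c: make_pipeline(StandardScaler(), KernelRidge(kernel="rbf", gamma=0.5))
             .fit(X[labels == c], V_ref[labels == c])
          for c in np.unique(labels)}

# Estimation: a new sample is sent to the model of its class
x_new = np.array([[850.0, 32.0]])
v_hat = models[int(classify(x_new[:, 0])[0])].predict(x_new)[0]
print("predicted reference voltage:", v_hat)
```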
356 Numerical Optimization within Vector of Parameters Estimation in Volatility Models
Authors: J. Arneric, A. Rozga
Abstract:
In this paper, the usefulness of a quasi-Newton iteration procedure for parameter estimation of the conditional variance equation within the BHHH algorithm is presented. An analytical solution of the likelihood maximization using first and second derivatives is too complex when the variance is time-varying. The advantage of the BHHH algorithm in comparison to other optimization algorithms is that it requires no third derivatives and has assured convergence. To simplify the optimization procedure, the BHHH algorithm approximates the matrix of second derivatives according to the information identity. However, parameter estimation in the (a)symmetric GARCH(1,1) model assuming normally distributed returns is not that simple, i.e., it is difficult to solve analytically. The maximum of the likelihood function can be found by iterating until no further increase is obtained. Because the solutions of the numerical optimization are very sensitive to the initial values, starting parameters for the GARCH(1,1) model are defined. The number of iterations can be reduced using starting values close to the global maximum. The optimization procedure is illustrated in the framework of modeling the daily volatility of the most liquid stocks on the Croatian capital market: Podravka (food industry), Petrokemija (fertilizer industry) and Ericsson Nikola Tesla (information and communications industry).
Keywords: Heteroscedasticity, log-likelihood maximization, quasi-Newton iteration procedure, volatility.
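A minimal sketch of GARCH(1,1) maximum-likelihood estimation under normally distributed returns, the model discussed above; scipy's L-BFGS-B quasi-Newton routine stands in for the BHHH algorithm, and the simulated return series is a placeholder for the Croatian stock data.

```python
# Hedged sketch: GARCH(1,1) log-likelihood maximization with a quasi-Newton optimizer.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_normal(1000)       # hypothetical daily returns

def neg_loglik(params, r):
    omega, alpha, beta = params
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                           # starting value for the variance recursion
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    # Gaussian negative log-likelihood
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(sigma2) + r ** 2 / sigma2)

start = np.array([1e-6, 0.05, 0.90])              # starting values near the expected maximum
res = minimize(neg_loglik, start, args=(returns,), method="L-BFGS-B",
               bounds=[(1e-8, None), (0.0, 1.0), (0.0, 1.0)])
omega, alpha, beta = res.x
print("omega, alpha, beta:", omega, alpha, beta)
```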
355 Applying the Regression Technique for Prediction of the Acute Heart Attack
Authors: Paria Soleimani, Arezoo Neshati
Abstract:
Myocardial infarction is one of the leading causes of death in the world. Some of these deaths occur even before the patient reaches the hospital. Myocardial infarction occurs as a result of impaired blood supply. Because most of these deaths are due to coronary artery disease, awareness of the warning signs of a heart attack is essential. Some heart attacks are sudden and intense, but most of them start slowly, with mild pain or discomfort; early detection and successful treatment of these symptoms is therefore vital to saving patients. Hence, the importance and usefulness of a system designed to assist physicians in the early diagnosis of acute heart attacks is obvious. The main purpose of this study is to enable patients to become better informed about their condition and to encourage them to seek professional care at an earlier stage in the appropriate situations. For this purpose, data were collected on 711 heart patients in Iranian hospitals; 28 clinical attributes that can be reported by patients were studied. Three logistic regression models were built on the basis of the 28 features to predict the risk of heart attacks. The best logistic regression model in terms of performance had a C-index of 0.955 and an accuracy of 94.9%. The variables severe chest pain, back pain, cold sweats, shortness of breath, and nausea and vomiting were selected as the main features.
Keywords: Coronary heart disease, acute heart attacks, prediction, logistic regression.
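A minimal sketch of the modeling step described above: a logistic regression fitted on clinical features and evaluated with accuracy and the C-index (ROC AUC for a binary outcome). The random matrix stands in for the 711-patient, 28-attribute data set.

```python
# Hedged sketch: logistic regression for acute heart attack risk on placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.random((711, 28))                  # 28 self-reported clinical attributes (placeholder)
y = rng.integers(0, 2, size=711)           # 1 = acute heart attack (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

acc = accuracy_score(y_te, model.predict(X_te))
c_index = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])   # C-index for a binary outcome
print(f"accuracy = {acc:.3f}, C-index = {c_index:.3f}")
```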
354 Using the Nerlovian Adjustment Model to Assess the Response of Farmers to Price and Other Related Factors: Evidence from Sierra Leone Rice Cultivation
Authors: Alhaji M. H. Conteh, Xiangbin Yan, Alfred V. Gborie
Abstract:
The goal of this study was to increase awareness of the description and assessment of rice acreage response and to offer mechanisms for agricultural policy scrutiny. The ordinary least squares (OLS) technique was utilized to determine the coefficients of acreage response models for the rice varieties. The magnitudes of the coefficients (λ) of both the lagged ROK and lagged NERICA acreages were found to be positive and highly significant, which indicates that the farmers' adjustment rate was very low. Regarding the lagged actual price for both the ROK and NERICA rice varieties, the short-run price elasticities were lower than the long-run ones, suggesting a long-term adjustment of the acreage under the crop.
The apparent recommendations for policy transformation, however, are to open up farm gate prices and to decrease government involvement in the agricultural sector, especially in the acquisition of agricultural inputs. Future research should be centered on how this might be better realized. Necessary conditions should be made available to the private sector by minimizing price volatility. In accordance with structural reforms, it is necessary to convey output prices to farmers with minimum distortion. There is a need to eradicate price subsidies and controls, which generate distortion in the market in addition to huge financial costs.
Keywords: Acreage response, rate of adjustment, rice varieties, Sierra Leone.
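A minimal sketch of the Nerlovian partial-adjustment regression in log form, ln A_t = a + b ln P_{t-1} + c ln A_{t-1} + e, estimated by OLS: the short-run price elasticity is b, the long-run elasticity is b / (1 - c), and the adjustment rate is 1 - c. The series below are simulated placeholders, not the Sierra Leone data.

```python
# Hedged sketch: Nerlovian acreage-response regression by OLS on placeholder series.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 40
log_price   = np.log(rng.uniform(50, 150, n))                    # producer price (placeholder)
log_acreage = np.cumsum(0.02 * rng.standard_normal(n)) + 5.0     # acreage series (placeholder)

y = log_acreage[1:]                                              # ln A_t
X = sm.add_constant(np.column_stack([log_price[:-1], log_acreage[:-1]]))  # ln P_{t-1}, ln A_{t-1}
fit = sm.OLS(y, X).fit()

b, c = fit.params[1], fit.params[2]
print("short-run elasticity:", b,
      "long-run elasticity:", b / (1 - c),
      "adjustment rate:", 1 - c)
```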
353 Analyses of Socio-Cognitive Identity Styles by Slovak Adolescents
Authors: Blandína Šramová, Gabriel Bianchi, Barbara Lášticová, Katarína Fichnová, Anežka Hamranová
Abstract:
The contribution deals with the analysis of identity styles among adolescents (N = 463) aged 16 to 19 (average age 17.7 years). We used the Identity Style Inventory by Berzonsky, which distinguishes three basic measured identity styles: informational, normative and diffuse-avoidant, as well as commitment. The informational identity style, which influences personal adaptability, coping strategies and quality of life, and the normative identity style, i.e. the style in which an individual takes on the models of authorities in self-definition, were found to have the highest representation in the studied group of adolescents, with higher scores for girls than for boys. The normative identity style positively correlates with the informational identity style. The diffuse-avoidant identity style was found to be positively associated with maladaptive decisional strategies, neuroticism and depressive reactions. This is the style in which the individual puts off defining his or her personality. In our research sample it had the lowest score and correlated negatively with commitment, that is, with coping strategies and trust in oneself and the surrounding world. The age of the adolescents did not significantly differentiate the representation of identity styles. We found a model in which the informational and normative identity styles had a positive relationship and the informational and diffuse-avoidant styles a negative relationship, both determined by commitment. At the same time, commitment is influenced by other external factors.
Keywords: Identity Style Inventory, informational identity style, normative identity style, diffuse-avoidant style, identity commitment.
352 Modeling and Analysis for Effective Capacity of a Cross-Layer Optimized Wireless Networks
Authors: Reham A. El-mayet, Hesham M. El-Badawy, Salwa H. Elramly
Abstract:
New generation mobile communication networks are able to support triple play. To that end, Orthogonal Frequency Division Multiplexing (OFDM) access techniques have been chosen to enlarge the system capability for high-data-rate networks. Many cross-layer modeling and optimization schemes for the Quality of Service (QoS) and capacity of downlink multiuser OFDM systems have been proposed. In this paper, Maximum Weighted Capacity (MWC) based resource allocation at the Physical (PHY) layer is used. This resource allocation scheme provides much better QoS than previous resource allocation schemes, while maintaining the highest or nearly highest capacity at similar complexity. In addition, Delay Satisfaction (DS) scheduling at the Medium Access Control (MAC) layer, which allows more than one connection to be served in each slot, is used. This scheduling technique is more efficient than conventional scheduling for investigating both the number of users and the number of subcarriers against system capacity. The system is optimized for different operational environments: outdoor as well as indoor deployment scenarios are investigated, and different channel models are considered. In addition, the effective capacity approach [1] is used not only to provide QoS for different mobile users, but also to increase the total throughput of the wireless network.
Keywords: Cross-layer, effective capacity, LTE, OFDM, QoS, resource allocation, wireless networks.
351 Influence of Outer Corner Radius in Equal Channel Angular Pressing
Authors: Basavaraj V. Patil, Uday Chakkingal, T. S. Prasanna Kumar
Abstract:
Equal Channel Angular Pressing (ECAP) is currently being widely investigated because of its potential to produce ultrafine-grained microstructures in metals and alloys. A sound knowledge of the plastic deformation and strain distribution is necessary for understanding the relationships between strain inhomogeneity and die geometry. Considerable research has been reported on finite element analysis of this process assuming a two-dimensional plane strain condition. However, two-dimensional models are not suitable due to the geometry of the dies, especially cylindrical ones. In the present work, a three-dimensional simulation of the ECAP process was carried out for six outer corner radii (sharp to 10 mm in steps of 2 mm), with a channel angle of 105°, for a strain-hardening aluminium alloy (AA 6101) using the ABAQUS/Standard software. Strain inhomogeneity is presented and discussed for all cases. The pattern of strain variation along selected radial lines in the body of the workpiece is presented. It is found from the results that the outer corner has a significant influence on the strain distribution in the body of the workpiece. Based on inhomogeneity and average strain criteria, there is an optimum outer corner radius.
Keywords: Equal Channel Angular Pressing, finite element analysis, strain inhomogeneity, plastic equivalent strain, ultrafine grain size, aluminium alloy 6101.
350 In-Flight Radiometric Performances Analysis of an Airborne Optical Payload
Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou
Abstract:
Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential, but not sufficient to establish a valid in-flight one. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (L_i) for the artificial targets are first simulated from in situ measurements (atmospheric parameters and spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line of the form L = G × DN + B is fitted by a minimization regression method, and the fitted coefficients G and B are the in-flight calibration coefficients. The high point (L_H) and the low point (L_L) of the dynamic range can then be described as L_H = G × DN_H + B and L_L = B, respectively, where DN_H is equal to 2^n − 1 (n is the quantization number of the payload). Meanwhile, the sensor's response linearity (δ) is described as the correlation coefficient of the regressed line. The results show that the calibration coefficients G and B are 0.0083 W·sr⁻¹·m⁻²·µm⁻¹ and −3.5 W·sr⁻¹·m⁻²·µm⁻¹; the low point of the dynamic range is −3.5 W·sr⁻¹·m⁻²·µm⁻¹ and the high point is 30.5 W·sr⁻¹·m⁻²·µm⁻¹; the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor's SNR, and the normalized SNR is about 59.6 when the mean value of radiance is equal to 11.0 W·sr⁻¹·m⁻²·µm⁻¹; subsequently, the radiometric resolution is calculated as about 0.1845 W·sr⁻¹·m⁻²·µm⁻¹. Moreover, in order to validate the result, a comparison of the measured radiance with the radiative-transfer-code-predicted radiance over four portable artificial targets with reflectances of 20%, 30%, 40% and 50%, respectively, is performed. It is noted that the relative error of the calibration is within 6.6%.
Keywords: Calibration, dynamic range, radiometric resolution, SNR.
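A minimal sketch of the calibration fit described above: regress simulated at-sensor radiances against image digital numbers as L = G × DN + B, then derive the dynamic range from the quantization depth n. The three (DN, L) pairs and n = 12 bits are illustrative assumptions, not the study's values.

```python
# Hedged sketch: vicarious radiometric calibration fit and dynamic-range end points.
import numpy as np

dn = np.array([410.0, 1260.0, 2950.0])          # image DNs over the three-gray-scale target
L  = np.array([0.0, 7.0, 21.0])                 # simulated at-sensor radiance, W sr^-1 m^-2 um^-1

G, B = np.polyfit(dn, L, 1)                     # in-flight calibration coefficients (gain, offset)
n = 12                                          # quantization bits (assumption)
L_low, L_high = B, G * (2 ** n - 1) + B         # dynamic range end points
linearity = np.corrcoef(dn, L)[0, 1]            # response linearity (correlation coefficient)
print(G, B, L_low, L_high, linearity)
```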
349 Time-Domain Analysis Approaches of Soil-Structure Interaction: A Comparative Study
Authors: Abdelrahman Taha, Niloofar Malekghaini, Hamed Ebrahimian, Ramin Motamed
Abstract:
This paper compares the substructure and direct approaches for soil-structure interaction (SSI) analysis in the time domain. In the substructure approach, the soil domain is replaced by a set of springs and dashpots, also referred to as the impedance function, derived through the study of the behavior of a massless rigid foundation. The impedance function is inherently frequency dependent, i.e., it varies as a function of the frequency content of the structural response. To use the frequency-dependent impedance function for time-domain SSI analysis, the impedance function is approximated at the fundamental frequency of the coupled soil-structure system. To explore the potential limitations of the substructure modeling process, a two-dimensional (2D) reinforced concrete frame structure is modeled and analyzed using the direct and substructure approaches. The results show a discrepancy between the simulated responses of the direct and substructure models. It is concluded that the main source of discrepancy is likely attributable to the way the impedance functions are calculated, i.e., assuming a massless rigid foundation without considering the presence of the superstructure. Hence, a refined impedance function, considering the presence of the superstructure, should alternatively be developed. This refined impedance function is expected to improve the simulation accuracy of the substructure approach.
Keywords: Direct approach, impedance function, massless rigid foundation, soil-structure interaction, substructure approach.
348 Stochastic Subspace Modelling of Turbulence
Authors: M. T. Sichani, B. J. Pedersen, S. R. K. Nielsen
Abstract:
Turbulence of the incoming wind field is of paramount importance to the dynamic response of civil engineering structures. Hence, reliable stochastic models of the turbulence should be available, from which time series can be generated for dynamic response and structural safety analysis. In this paper, an empirical cross-spectral density function for the along-wind turbulence component over the wind field area is taken as the starting point. The spectrum is spatially discretized in terms of a Hermitian cross-spectral density matrix for the turbulence state vector, which turns out not to be positive definite. Since the succeeding state space and ARMA modelling of the turbulence rely on the positive definiteness of the cross-spectral density matrix, the problem of the non-positive definiteness of such matrices is first addressed and suitable treatments are proposed. From the adjusted positive definite cross-spectral density matrix, a frequency response matrix is constructed which determines the turbulence vector as a linear filtration of Gaussian white noise. Finally, an accurate state space modelling method is proposed which allows selection of an appropriate model order and estimation of a state space model for the vector turbulence process, incorporating its phase spectrum, in one stage; its results are compared with a conventional ARMA modelling method.
Keywords: Turbulence, wind turbine, complex coherence, state space modelling, ARMA modelling.
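The abstract does not specify the authors' adjustment of the non-positive-definite cross-spectral density matrix, so the sketch below shows one common stand-in remedy: clip negative eigenvalues of the Hermitian matrix to a small positive floor and rebuild it. The example matrix is a placeholder.

```python
# Hedged sketch: eigenvalue clipping to obtain a positive definite Hermitian matrix.
import numpy as np

def make_positive_definite(S, floor=1e-10):
    """Return a positive definite version of the Hermitian matrix S."""
    S = 0.5 * (S + S.conj().T)                 # enforce Hermitian symmetry
    w, V = np.linalg.eigh(S)                   # real eigenvalues for Hermitian input
    w = np.clip(w, floor, None)                # remove negative / zero eigenvalues
    return (V * w) @ V.conj().T                # rebuild V diag(w) V^H

# Hypothetical 3x3 cross-spectral density matrix at one frequency (placeholder values)
S = np.array([[1.0, 0.9 + 0.1j, 0.8],
              [0.9 - 0.1j, 1.0, 0.95],
              [0.8, 0.95, 1.0]])
S_pd = make_positive_definite(S)
```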
347 A Comparative Study of Global Power Grids and Global Fossil Energy Pipelines Using GIS Technology
Authors: Wenhao Wang, Xinzhi Xu, Limin Feng, Wei Cong
Abstract:
This paper comprehensively investigates the current development status of global power grids and fossil energy pipelines (oil and natural gas), and proposes a standard visual platform for global power and fossil energy based on Geographic Information System (GIS) technology. In this visual platform, a series of systematic visual models is proposed using global spatial data and systematic energy and power parameters. On this platform, the current Global Power Grids Map and Global Fossil Energy Pipelines Map are plotted for more than 140 countries and regions across the world. Using multi-scale data fusion processing and modeling methods, a basic information-system database of the world's fossil energy pipelines and power grids is established, which provides important data supporting global fossil energy and electricity research. Finally, through a systematic and comparative study of global fossil energy pipelines and global power grids, the general status of global fossil energy and electricity development is reviewed, and the energy transition in key areas is evaluated and analyzed. Through a comparative analysis of fossil energy and clean energy, the direction of relevant research towards clean development and energy transition is pointed out.
Keywords: Energy transition, geographic information system, fossil energy, power systems.
346 Preliminary Geophysical Assessment of Soil Contaminants around Wacot Rice Factory Argungu, North-Western Nigeria
Authors: A. I. Augie, Y. Alhassan, U. Z. Magawata
Abstract:
A geophysical investigation was carried out at the Wacot rice factory, Argungu, north-western Nigeria, using the 2D electrical resistivity method. The area lies between latitudes 12°44′23″N and 12°44′50″N and longitudes 4°32′18″E and 4°32′39″E, covering a total area of about 1.85 km². Two profiles were acquired with the Wenner configuration using a resistivity meter (Ohmega). The data obtained from the study area were modeled using RES2DINV software, which gave an automatic interpretation of the apparent resistivity data. The inverse resistivity models of the profiles show high resistivity values ranging from 208 Ωm to 651 Ωm. These high resistivity values in the overburden are due to the dryness and compactness of the strata, which lead to consolidation, an indication that these zones are free from leachate contamination. However, the inverse models also show regions of low resistivity values (1 Ωm to 18 Ωm); these zones were identified as clayey and as the most contaminated zones. The regions of low resistivity thereby indicate the leachate plume, or zones of high leachate concentration, since clay and leachate have similar resistivity values. The leachate spreads mainly from the factory into the surrounding area and its groundwater. The maximum leachate infiltration was found at depths of 1 m to 15.9 m (P1) and 6 m to 15.9 m (P2) vertically, as well as at distances along the profiles of 67 m to 75 m (P1), 155 m to 180 m (P1), and 115 m to 192 m (P2) laterally.
Keywords: Contaminant, leachate, soil, groundwater, 2D electrical resistivity, Argungu.