Search results for: approximate bayesian computation
494 An Approximate Formula for Calculating the Fundamental Mode Period of Vibration of Practical Building
Authors: Abdul Hakim Chikho
Abstract:
Most international codes allow the use of an equivalent lateral load method for designing practical buildings to withstand earthquake actions. This method requires calculating an approximation to the fundamental mode period of vibration of these buildings. Several empirical equations have been suggested to calculate approximations to the fundamental periods of different types of structures. Most of these equations are known to provide only a crude approximation to the required fundamental periods, and repeating the calculation with a more accurate formula is usually required. In this paper, a new formula to calculate a satisfactory approximation of the fundamental period of a practical building is proposed. This formula takes into account the mass and the stiffness of the building; it is therefore more logical than the conventional empirical equations. In order to verify the accuracy of the proposed formula, several examples have been solved. In these examples, the fundamental mode periods of several framed buildings have been calculated using the proposed formula and the conventional empirical equations. Comparing the obtained results with those obtained from a dynamic computer analysis has shown that the proposed formula provides a more accurate estimation of the fundamental periods of practical buildings. Since the proposed method is still simple to use and requires only a minimum computing effort, it is believed to be ideally suited for design purposes.
Keywords: earthquake, fundamental mode period, design, building
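As a point of comparison for mass-and-stiffness-based period estimates (the author's proposed formula is not reproduced in the abstract), the sketch below implements the classical Rayleigh-method approximation, in which the storey weights are applied as lateral forces and the resulting deflections feed a Rayleigh quotient. The storey values are hypothetical.

```python
import math

def rayleigh_period(weights, deflections, g=9.81):
    """Classical Rayleigh estimate T = 2*pi*sqrt(sum(w*d^2) / (g*sum(w*d))),
    with storey weights w (N) applied as the lateral forces that produce the
    elastic deflections d (m)."""
    num = sum(w * d * d for w, d in zip(weights, deflections))
    den = g * sum(w * d for w, d in zip(weights, deflections))
    return 2.0 * math.pi * math.sqrt(num / den)

# Hypothetical three-storey frame: storey weights and lateral deflections.
print(rayleigh_period([500e3, 500e3, 450e3], [0.010, 0.022, 0.030]))
```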
Procedia PDF Downloads 284
493 The Creative Unfolding of “Reduced Descriptive Structures” in Musical Cognition: Technical and Theoretical Insights Based on the OpenMusic and PWGL Long-Term Feedback
Authors: Jacopo Baboni Schilingi
Abstract:
We describe here the theoretical and philosophical understanding gained from the long-term use and development of algorithmic computer-based tools applied to music composition. The findings of our research lead us to interrogate some specific processes and systems of communication engaged in the discovery of specific cultural artworks: artistic creation in the sono-musical domain. Our hypothesis is that the patterns of auditory learning cannot be understood only in terms of social transmission, but would gain from being questioned in the way they rely on various ranges of acoustic stimuli and modes of consciousness, and in how the different types of memories engaged in the percept-action expressive systems of our cultural communities also rely on these shadowy conscious entities we have named “Reduced Descriptive Structures”.
Keywords: algorithmic sonic computation, corrected and self-correcting learning patterns in acoustic perception, morphological derivations in sensorial patterns, social unconscious modes of communication
Procedia PDF Downloads 154
492 Analyzing the Impact of Migration on HIV and AIDS Incidence Cases in Malaysia
Authors: Ofosuhene O. Apenteng, Noor Azina Ismail
Abstract:
The human immunodeficiency virus (HIV) that causes acquired immune deficiency syndrome (AIDS) remains a global cause of morbidity and mortality. It has caused panic since its emergence. Relationships between migration and HIV/AIDS have become complex. In the absence of prospectively designed studies, dynamic mathematical models that take migration movement into account can give very useful information. We have explored the utility of mathematical models in understanding the transmission dynamics of HIV and AIDS and in assessing the magnitude of the impact migration has on the disease. The model was calibrated to HIV and AIDS incidence data from the Malaysian Ministry of Health for the period 1986 to 2011 using Bayesian analysis combined with a Markov chain Monte Carlo (MCMC) approach to estimate the model parameters. From the estimated parameters, the estimated basic reproduction number was 22.5812. The rate at which susceptible individuals move to the HIV compartment has the highest sensitivity value, which is more significant than the remaining parameters. Thus, the disease becomes unstable. This is a big concern and not a good indicator from the public health point of view, since the aim is to stabilize the epidemic at the disease-free equilibrium. However, these results suggest that the government, as a policy maker, should make further efforts to curb illegal activities performed by migrants. It is shown that our models reflect considerably the dynamic behavior of the HIV/AIDS epidemic in Malaysia and eventually could be used strategically for other countries.
Keywords: epidemic model, reproduction number, HIV, MCMC, parameter estimation
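As a minimal illustration of the Bayesian/MCMC calibration step described above, the sketch below fits the transmission rate of a toy discrete-time SI model to yearly incidence counts with a random-walk Metropolis sampler. The model, data, prior and tuning constants are invented placeholders, not the authors' Malaysian HIV/AIDS model.

```python
import numpy as np

incidence = np.array([120, 150, 210, 260, 330, 400], dtype=float)  # hypothetical yearly cases
N = 1_000_000.0                                                     # hypothetical population size

def simulate_incidence(beta, years, i0=100.0):
    s, i, out = N - i0, i0, []
    for _ in range(years):
        new = beta * s * i / N          # new infections in one discrete SI time step
        s, i = s - new, i + new
        out.append(new)
    return np.array(out)

def log_posterior(beta):
    if beta <= 0:
        return -np.inf
    lam = np.maximum(simulate_incidence(beta, len(incidence)), 1e-9)
    loglik = np.sum(incidence * np.log(lam) - lam)   # Poisson log-likelihood (up to a constant)
    logprior = -beta                                  # Exponential(1) prior on beta
    return loglik + logprior

rng = np.random.default_rng(0)
beta, samples = 0.3, []
lp = log_posterior(beta)
for _ in range(20000):
    prop = beta + rng.normal(0, 0.02)                 # random-walk proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:          # Metropolis accept/reject step
        beta, lp = prop, lp_prop
    samples.append(beta)

print("posterior mean beta:", np.mean(samples[5000:]))  # discard burn-in
```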
Procedia PDF Downloads 366
491 FPGA Implementation of Novel Triangular Systolic Array Based Architecture for Determining the Eigenvalues of Matrix
Authors: Soumitr Sanjay Dubey, Shubhajit Roy Chowdhury, Rahul Shrestha
Abstract:
In this paper, we have presented a novel approach to calculating the eigenvalues of any matrix, for the first time on a Field Programmable Gate Array (FPGA), using a Triangular Systolic Array (TSA) architecture. Conventionally, an additional computation unit compliant with the eigenvalue-determination algorithm is required in the architecture, and this in turn increases the delay and power consumption. Moreover, recently reported works are dedicated only to symmetric matrices or some specific cases of matrices. This work presents an architecture to calculate the eigenvalues of any matrix based on the QR algorithm, which is fully implementable on FPGA. For the implementation of the QR algorithm we have used the TSA architecture, which further utilises the CORDIC (CO-ordinate Rotation DIgital Computer) algorithm to calculate the various trigonometric and arithmetic functions involved in the procedure. The proposed architecture gives an error in the range of 10⁻⁴. Power consumption by the design is 0.598 W. It can operate at a frequency of 900 MHz.
Keywords: coordinate rotation digital computer, three angle complex rotation, triangular systolic array, QR algorithm
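For reference, the software sketch below shows the unshifted QR iteration on which such architectures are based: repeatedly factor A = QR and form RQ, so each iterate stays similar to A and the diagonal converges to the eigenvalues for a broad class of matrices. In hardware the underlying rotations are performed by CORDIC; this NumPy version is only an assumed functional reference, not the systolic-array design.

```python
import numpy as np

def qr_eigenvalues(A, iters=500):
    Ak = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)   # the rotations here are what CORDIC computes in hardware
        Ak = R @ Q                # similarity transform: eigenvalues are preserved
    return np.diag(Ak)

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.sort(qr_eigenvalues(A)))
print(np.sort(np.linalg.eigvals(A)))   # cross-check against LAPACK
```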
Procedia PDF Downloads 415
490 Prediction of PM₂.₅ Concentration in Ulaanbaatar with Deep Learning Models
Authors: Suriya
Abstract:
Rapid socio-economic development and urbanization have led to an increasingly serious air pollution problem in Ulaanbaatar (UB), the capital of Mongolia. PM₂.₅ pollution has become the most pressing aspect of UB air pollution. Therefore, monitoring and predicting the PM₂.₅ concentration in UB is of great significance for the health of the local people and for environmental management. As yet, very few studies have used models to predict PM₂.₅ concentrations in UB. Using data from 0:00 on June 1, 2018, to 23:00 on April 30, 2020, we proposed two deep learning models based on Bayesian-optimized LSTM (Bayes-LSTM) and CNN-LSTM. We utilized hourly observed data, including Himawari-8 (H8) aerosol optical depth (AOD), meteorology, and PM₂.₅ concentration, as input for the prediction of PM₂.₅ concentrations. The correlation strengths between meteorology, AOD, and PM₂.₅ were analyzed using the gray correlation analysis method; the performance improvement obtained by including the AOD input value was tested, and the performance of these models was evaluated using the mean absolute error (MAE) and root mean square error (RMSE). The prediction accuracies of the Bayes-LSTM and CNN-LSTM deep learning models were both improved when AOD was included as an input parameter. The improvement in the prediction accuracy of the CNN-LSTM model was particularly pronounced in the non-heating season; in the heating season, the prediction accuracy of the Bayes-LSTM model slightly improved, while that of the CNN-LSTM model slightly decreased. We propose two novel deep learning models for PM₂.₅ concentration prediction in UB: the Bayes-LSTM and CNN-LSTM deep learning models. This study pioneers the use of AOD data from H8 and demonstrates that including AOD input data improves the performance of the two proposed deep learning models.
Keywords: deep learning, AOD, PM2.5, prediction, Ulaanbaatar
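A minimal Keras sketch of the kind of LSTM regressor described above, mapping a short window of hourly meteorology-plus-AOD features to the next PM₂.₅ value. The window length, feature count, layer sizes and the random placeholder data are assumptions, not the authors' Bayes-LSTM or CNN-LSTM configurations.

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense

window, n_features = 24, 6            # 24 hourly steps of 6 features (e.g. AOD, T, RH, wind, ...)
X = np.random.rand(1000, window, n_features).astype("float32")   # placeholder input windows
y = np.random.rand(1000, 1).astype("float32")                    # placeholder PM2.5 targets

model = Sequential([
    LSTM(64, input_shape=(window, n_features)),
    Dense(32, activation="relu"),
    Dense(1),                          # predicted PM2.5 concentration
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print(model.predict(X[:1]))
```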
Procedia PDF Downloads 48
489 Effect of Loose Bonding and Corrugated Boundary Surface on Propagation of Rayleigh-Type Wave
Authors: Kshitish Ch. Mistri, Abhishek Kumar Singh
Abstract:
The effect of the undulatory boundary surface of a medium, as well as the degree of bonding between two consecutive media, on the propagation of surface waves is an unavoidable matter of fact. Therefore, this paper investigates the propagation of a Rayleigh-type wave in a corrugated fibre-reinforced layer overlying an initially stressed orthotropic half-space under gravity. Also, the two media are assumed to be loosely (or imperfectly) bonded. Numerical computation of the obtained frequency equation has been carried out, which aids in analyzing the influence of corrugation, loose bonding, initial stress and gravity on the phase velocity of the Rayleigh-type wave. Moreover, the presence and absence of corrugation, loose bonding and initial stress are also discussed in a comparative manner.
Keywords: corrugated boundary surface, fibre-reinforced layer, initial stress, loose bonding, orthotropic half-space, Rayleigh-type wave
Procedia PDF Downloads 276
488 Efficient Principal Components Estimation of Large Factor Models
Authors: Rachida Ouysse
Abstract:
This paper proposes a constrained principal components (CnPC) estimator for efficient estimation of large-dimensional factor models when errors are cross-sectionally correlated and the number of cross-sections (N) may be larger than the number of observations (T). Although the principal components (PC) method is consistent for any path of the panel dimensions, it is inefficient, as the errors are treated as homoskedastic and uncorrelated. The new CnPC exploits the assumption of bounded cross-sectional dependence, which defines Chamberlain and Rothschild’s (1983) approximate factor structure, as an explicit constraint and solves a constrained PC problem. The CnPC method is computationally equivalent to the PC method applied to a regularized form of the data covariance matrix. Unlike maximum likelihood type methods, the CnPC method does not require inverting a large covariance matrix and thus is valid for panels with N ≥ T. The paper derives a convergence rate and an asymptotic normality result for the CnPC estimators of the common factors. We provide feasible estimators and show in a simulation study that they are more accurate than the PC estimator, especially for panels with N larger than T, and than the generalized PC type estimators, especially for panels with N almost as large as T.
Keywords: high dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, regularization, pseudo-out-of-sample forecasting
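The baseline PC step that CnPC regularizes can be sketched as follows: extract r common factors from a T x N panel by eigen-decomposition of the data's second-moment matrix. This is only the standard PC estimator under the usual normalization; the cross-sectional-dependence constraint that defines CnPC is omitted, and the panel is random placeholder data.

```python
import numpy as np

def pc_factors(X, r):
    """X: T x N panel (rows = time periods, columns = cross-section units)."""
    T = X.shape[0]
    Xc = X - X.mean(axis=0)                    # demean each series
    eigval, eigvec = np.linalg.eigh(Xc @ Xc.T / T)
    idx = np.argsort(eigval)[::-1][:r]         # r largest eigenvalues
    F = np.sqrt(T) * eigvec[:, idx]            # estimated factors, normalized so F'F/T = I
    L = Xc.T @ F / T                           # estimated loadings
    return F, L

X = np.random.randn(200, 500)                  # hypothetical panel with N = 500 > T = 200
F, L = pc_factors(X, r=3)
print(F.shape, L.shape)
```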
Procedia PDF Downloads 150
487 An Interpretable Data-Driven Approach for the Stratification of the Cardiorespiratory Fitness
Authors: D. Mendes, J. Henriques, P. Carvalho, T. Rocha, S. Paredes, R. Cabiddu, R. Trimer, R. Mendes, A. Borghi-Silva, L. Kaminsky, E. Ashley, R. Arena, J. Myers
Abstract:
The exploration of clinically relevant predictive models continues to be an important pursuit. Cardiorespiratory fitness (CRF) carries vital clinical information, and as such its accurate prediction is of high importance. Therefore, the aim of the current study was to develop a data-driven model, based on computational intelligence techniques and, in particular, clustering approaches, to predict CRF. Two prediction models were implemented and compared: 1) the traditional Wasserman/Hansen equations; and 2) an interpretable clustering approach. Data used for this analysis were from the 'FRIEND - Fitness Registry and the Importance of Exercise: The National Data Base'; in the present study a subset of 10690 apparently healthy individuals was utilized. The accuracy of the models was assessed through the computation of sensitivity, specificity, and geometric mean values. The results show the superiority of the clustering approach in the accurate estimation of CRF (i.e., maximal oxygen consumption).
Keywords: cardiorespiratory fitness, data-driven models, knowledge extraction, machine learning
Procedia PDF Downloads 286
486 Review of Dielectric Permittivity Measurement Techniques
Authors: Ahmad H. Abdelgwad, Galal E. Nadim, Tarek M. Said, Amr M. Gody
Abstract:
The prime objective of this manuscript is to provide an intensive review of the techniques used for permittivity measurements. The measurement techniques relevant for any desired application depend on the nature of the measured dielectric material, both electrically and physically, the degree of accuracy required, and the frequency of interest. Although various types of instruments can be utilized, the measuring devices considered must provide reliable determinations of the required electrical properties of the unknown material in the frequency range of interest. The challenge in making precise dielectric property or permittivity measurements lies in designing the material specimen holder for those measurements (in the RF and MW frequency ranges) and in adequately modeling the circuit for reliable computation of the permittivity from the electrical measurements. If RF circuit parameters such as the impedance or admittance are estimated appropriately at a certain frequency, the material’s permittivity at this frequency can be estimated by the equations which relate the dielectric properties of the material to the parameters of the circuit.
Keywords: dielectric permittivity, free space measurement, waveguide techniques, coaxial probe, cavity resonator
Procedia PDF Downloads 369
485 Prediction of Distillation Curve and Reid Vapor Pressure of Dual-Alcohol Gasoline Blends Using Artificial Neural Network for the Determination of Fuel Performance
Authors: Leonard D. Agana, Wendell Ace Dela Cruz, Arjan C. Lingaya, Bonifacio T. Doma Jr.
Abstract:
The purpose of this paper is to predict the fuel performance parameters, which include the drivability index (DI), vapor lock index (VLI), and vapor lock potential, using the distillation curve and Reid vapor pressure (RVP) of dual alcohol-gasoline fuel blends. The distillation curve and Reid vapor pressure were predicted using artificial neural networks (ANN) with macroscopic properties such as boiling points, RVP, and molecular weights as the input layers. The ANN consists of 5 hidden layers and was trained using Bayesian regularization. The training mean square error (MSE) and R-value for the ANN of RVP are 91.4113 and 0.9151, respectively, while the training MSE and R-value for the distillation curve are 33.4867 and 0.9927. Fuel performance analysis of the dual alcohol-gasoline blends indicated that highly volatile gasoline blended with dual alcohols results in fuel blends that do not comply with the D4814 standard. Mixtures of low-volatile gasoline and 10% methanol or 10% ethanol can still be blended with up to 10% C3 and C4 alcohols. Intermediate-volatile gasoline containing 10% methanol or 10% ethanol can still be blended with C3 and C4 alcohols that have low RVPs, such as 1-propanol, 1-butanol, 2-butanol, and i-butanol.
Keywords: dual alcohol-gasoline blends, distillation curve, machine learning, reid vapor pressure
Procedia PDF Downloads 101
484 Nadler's Fixed Point Theorem on Partial Metric Spaces and its Application to a Homotopy Result
Authors: Hemant Kumar Pathak
Abstract:
In 1994, Matthews (S.G. Matthews, Partial metric topology, in: Proc. 8th Summer Conference on General Topology and Applications, in: Ann. New York Acad. Sci., vol. 728, 1994, pp. 183-197) introduced the concept of a partial metric as part of the study of denotational semantics of data flow networks. He gave a modified version of the Banach contraction principle, more suitable in this context. In fact, (complete) partial metric spaces constitute a suitable framework to model several distinguished examples of the theory of computation and also to model metric spaces via domain theory. In this paper, we introduce the concept of an almost partial Hausdorff metric. We prove a fixed point theorem for multi-valued mappings on a partial metric space using the concept of the almost partial Hausdorff metric, and prove an analogue of the well-known Nadler’s fixed point theorem. In the sequel, we derive a homotopy result as an application of our main result.
Keywords: fixed point, partial metric space, homotopy, physical sciences
Procedia PDF Downloads 441
483 Computation of ΔV Requirements for Space Debris Removal Using Orbital Transfer
Authors: Sadhvi Gupta, Charulatha S.
Abstract:
Since the early 1950s, humans have launched numerous vehicles into space. From rockets to rovers, humans have achieved tremendous growth in the technology sector. While this has been mostly an upside for humans, the one major downside that can no longer be ignored is the amount of junk produced in space as a result, i.e. space debris. All this space junk comes from objects we launch from Earth, which remain in orbit until they re-enter the atmosphere. Space debris comes in various sizes: the large pieces are mainly dead satellites floating in space, while the small ones can consist of various things like paint flecks, screwdrivers, bolts, etc. Tracking small space debris less than 10 cm in size is impossible, and such debris can have vast implications. As the amount of space debris increases, the chance of it hitting a functional satellite also increases. It is also extremely costly to repair or recover a satellite once it has been hit by orbiting space debris. The proposed solution is therefore to actively remove space debris while keeping space sustainability in mind. For this solution, a total of 8 modules will be launched into LEO and GEO, and these modules will be placed in their desired orbits through Hohmann transfers, for which calculating ΔV values is crucial. The modules are then placed in their designated positions in the STK software and a thorough analysis is conducted.
Keywords: space debris, Hohmann transfer, STK, delta-V
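The ΔV computation for a coplanar circular-to-circular Hohmann transfer reduces to two impulses, as sketched below; the 500 km parking orbit is an illustrative assumption, not the mission's actual module orbits.

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3            # mean Earth radius, m

def hohmann_dv(r1, r2, mu=MU_EARTH):
    """Two-impulse Hohmann transfer between circular orbits of radii r1 and r2."""
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)   # burn leaving the departure orbit
    dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))   # circularization burn at arrival
    return dv1, dv2, dv1 + dv2

# Example: raise from a 500 km LEO parking orbit to GEO altitude (~35786 km).
dv1, dv2, total = hohmann_dv(R_EARTH + 500e3, R_EARTH + 35786e3)
print(f"dv1={dv1:.1f} m/s, dv2={dv2:.1f} m/s, total={total:.1f} m/s")
```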
Procedia PDF Downloads 86
482 Statistical and Land Planning Study of Tourist Arrivals in Greece during 2005-2016
Authors: Dimitra Alexiou
Abstract:
During the last 10 years, in spite of the economic crisis, the number of tourists arriving in Greece has increased, particularly during the tourist season from April to October. In this paper, the number of annual tourist arrivals is studied to explore their preferences with regard to the month of travel, the selected destinations, as well as the amount of money spent. The collected data are processed with statistical methods, yielding numerical and graphical results. From the computation of statistical parameters and the forecasting with exponential smoothing, useful conclusions are arrived at that can be used by the Greek tourism authorities, as well as by tourist organizations, for planning purposes for the coming years. The results of this paper and the computed forecast can also be used for decision making by private tourist enterprises that are investing in Greece. With regard to the statistical methods, the method of Simple Exponential Smoothing of time series data is employed. The search for the best forecast for 2017 and 2018 provides the value of the smoothing coefficient. For all statistical computations and graphics, Microsoft Excel is used.
Keywords: tourism, statistical methods, exponential smoothing, land spatial planning, economy
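A minimal sketch of simple exponential smoothing as described above: the level is updated as l_t = a*x_t + (1-a)*l_{t-1}, the flat forecast equals the last level, and the smoothing coefficient is found by searching for the value that minimises the one-step-ahead error. The arrival figures are placeholders, not the actual Greek data.

```python
def ses_forecast(series, alpha):
    """Simple exponential smoothing: the flat forecast is the last smoothed level."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def best_alpha(series, grid=[i / 100 for i in range(1, 100)]):
    """Pick the smoothing coefficient minimising the one-step-ahead squared error."""
    def sse(a):
        level, err = series[0], 0.0
        for x in series[1:]:
            err += (x - level) ** 2
            level = a * x + (1 - a) * level
        return err
    return min(grid, key=sse)

arrivals = [14.9, 16.4, 15.0, 15.9, 17.9, 22.0, 24.8, 23.6, 26.1, 27.2]  # hypothetical, millions
a = best_alpha(arrivals)
print("alpha:", a, "forecast:", ses_forecast(arrivals, a))
```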
Procedia PDF Downloads 265
481 Hybrid Localization Schemes for Wireless Sensor Networks
Authors: Fatima Babar, Majid I. Khan, Malik Najmus Saqib, Muhammad Tahir
Abstract:
This article provides range-based improvements over a well-known single-hop range-free localization scheme, Approximate Point in Triangulation (APIT), by proposing an energy-efficient Barycentric-coordinate-based Point-In-Triangulation (PIT) test along with PIT-based trilateration. These improvements result in energy efficiency, reduced localization error and improved localization coverage compared to APIT and its variants. Moreover, we propose to embed Received Signal Strength Indication (RSSI) based distance estimation in DV-Hop, which is a multi-hop localization scheme. The proposed localization algorithm achieves energy efficiency and reduced localization error compared to DV-Hop and its available improvements. Furthermore, a hybrid multi-hop localization scheme is also proposed that utilizes the Barycentric-coordinate-based PIT test and both range-based (received signal strength indicator) and range-free (hop count) techniques for distance estimation. Our experimental results provide evidence that the proposed hybrid multi-hop localization scheme results in a two- to five-fold reduction in the localization error compared to DV-Hop and its variants, at reduced energy requirements.
Keywords: localization, trilateration, triangulation, wireless sensor networks
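The barycentric point-in-triangle test at the core of the proposed PIT improvement can be sketched as follows: a node lies inside the triangle formed by three anchors iff all three barycentric coordinates are non-negative. The coordinates below are illustrative; the energy-efficiency aspects of the scheme are not modelled.

```python
def barycentric_pit(p, a, b, c):
    """Return True if point p lies inside (or on) the triangle with vertices a, b, c."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)     # signed double area of the triangle
    if det == 0:
        return False                                        # degenerate (collinear) anchors
    l1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    l2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    l3 = 1.0 - l1 - l2
    return l1 >= 0 and l2 >= 0 and l3 >= 0

print(barycentric_pit((2.0, 1.0), (0.0, 0.0), (5.0, 0.0), (0.0, 4.0)))   # True: inside
print(barycentric_pit((6.0, 6.0), (0.0, 0.0), (5.0, 0.0), (0.0, 4.0)))   # False: outside
```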
Procedia PDF Downloads 467
480 An Integrated Approach for Risk Management of Transportation of HAZMAT: Use of Quality Function Deployment and Risk Assessment
Authors: Guldana Zhigerbayeva, Ming Yang
Abstract:
Transportation of hazardous materials (HAZMAT) is inevitable in the process industries. Statistics show that a significant number of accidents have occurred during the transportation of HAZMAT. This makes risk management of HAZMAT transportation an important topic. Tree-based methods, including fault trees, event trees and cause-consequence analysis, as well as Bayesian networks, have been applied to risk management of HAZMAT transportation. However, there is limited work on the development of a systematic approach. The existing approaches fail to build up the linkages between the regulatory requirements and the development of safety measures. The analysis of historical data from past accident report databases would limit our focus to specific incidents and their specific causes. Thus, we may overlook some essential elements in risk management, including regulatory compliance, field expert opinions, and suggestions. A systematic approach is needed to translate the regulatory requirements of HAZMAT transportation into specified safety measures (both technical and administrative) to support the risk management process. This study first adapts the House of Quality (HoQ) into a House of Safety (HoS) and proposes a new approach: Safety Function Deployment (SFD). The results of SFD will be used in a multi-criteria decision-support system to find an optimal route for HAZMAT transportation. The proposed approach will be demonstrated through a hypothetical transportation case in Kazakhstan.
Keywords: hazardous materials, risk assessment, risk management, quality function deployment
Procedia PDF Downloads 141
479 SA-SPKC: Secure and Efficient Aggregation Scheme for Wireless Sensor Networks Using Stateful Public Key Cryptography
Authors: Merad Boudia Omar Rafik, Feham Mohammed
Abstract:
Data aggregation in wireless sensor networks (WSNs) provides a great reduction in energy consumption. The limited resources of sensor nodes make the choice of an encryption algorithm very important for providing security for data aggregation. Asymmetric cryptography involves large ciphertexts and heavy computations but, on the other hand, solves the key distribution problem of symmetric cryptography. The latter provides smaller ciphertexts and faster computations. Also, recent research has shown that achieving end-to-end confidentiality and end-to-end integrity at the same time is a challenging task. In this paper, we propose SA-SPKC, a novel security protocol which addresses both security services for WSNs, and where only the base station can verify the individual data and identify the malicious node. Our scheme is based on stateful public key encryption (StPKE). The latter combines the best features of both kinds of encryption along with state in order to reduce the computation overhead. Our analysis
Keywords: secure data aggregation, wireless sensor networks, elliptic curve cryptography, homomorphic encryption
Procedia PDF Downloads 297
478 An Automatic Bayesian Classification System for File Format Selection
Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan
Abstract:
This paper presents an approach to the classification of unstructured format descriptions for the identification of file formats. The main contribution of this work is the employment of data mining techniques to support file format selection with just the unstructured text description that comprises the most important format features for a particular organisation. Subsequently, the file format identification method employs a file format classifier and associated configurations to support digital preservation experts with an estimation of the required file format. Our goal is to make use of a format specification knowledge base aggregated from different Web sources in order to select a file format for a particular institution. Using the naive Bayes method, the decision support system recommends to an expert the file format for their institution. The proposed methods facilitate the selection of file formats and improve the quality of the digital preservation process. The presented approach is meant to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and specifications of file formats. To facilitate decision-making, the aggregated information about the file formats is presented as a file format vocabulary that comprises the most common terms that are characteristic of all researched formats. The goal is to suggest a particular file format based on this vocabulary for analysis by an expert. A sample file format calculation and the calculation results, including probabilities, are presented in the evaluation section.
Keywords: data mining, digital libraries, digital preservation, file format
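A minimal sketch of the naive Bayes recommendation step, assuming a bag-of-words representation of the unstructured format descriptions; the descriptions, labels and query below are invented placeholders for the knowledge base aggregated from Web sources.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical format-description vocabulary entries and their format labels.
descriptions = [
    "lossless raster image wide adoption open specification",
    "lossy raster image small size ubiquitous viewers",
    "page layout fixed rendering embedded fonts archival profile",
    "plain text structured markup human readable open standard",
]
labels = ["PNG", "JPEG", "PDF/A", "XML"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(descriptions, labels)

# An expert's free-text requirement for their institution:
query = ["open archival document format with fixed layout and embedded fonts"]
print(clf.predict(query), clf.predict_proba(query).max())
```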
Procedia PDF Downloads 499
477 Multi-Objective Random Drift Particle Swarm Optimization Algorithm Based on RDPSO and Crowding Distance Sorting
Authors: Yiqiong Yuan, Jun Sun, Dongmei Zhou, Jianan Sun
Abstract:
In this paper, we present a Multi-Objective Random Drift Particle Swarm Optimization algorithm (MORDPSO-CD) based on RDPSO and crowding distance sorting to improve convergence and distribution with less computation cost. MORDPSO-CD makes the most of RDPSO to approach the true Pareto optimal solutions fast. We adopt the crowding distance sorting technique to update and maintain the archived optimal solutions. Introducing the crowding distance technique into MORDPSO enables the leader particles to find the true Pareto solutions ultimately. The simulation results reveal that the proposed algorithm has better convergence and distribution.
Keywords: multi-objective optimization, random drift particle swarm optimization, crowding distance sorting, pareto optimal solution
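For reference, the crowding distance used to maintain the archive can be computed as in NSGA-II: for each objective, sort the solutions and accumulate the normalized gap between each solution's neighbours, with boundary solutions assigned infinite distance. A minimal sketch with a hypothetical two-objective archive:

```python
def crowding_distance(objectives):
    """objectives: list of tuples, one tuple of objective values per solution."""
    n = len(objectives)
    if n == 0:
        return []
    m = len(objectives[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: objectives[i][k])
        fmin, fmax = objectives[order[0]][k], objectives[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")   # keep boundary solutions
        if fmax == fmin:
            continue
        for j in range(1, n - 1):
            dist[order[j]] += (objectives[order[j + 1]][k] - objectives[order[j - 1]][k]) / (fmax - fmin)
    return dist

archive = [(0.1, 0.9), (0.4, 0.5), (0.5, 0.45), (0.9, 0.1)]   # hypothetical non-dominated set
print(crowding_distance(archive))   # boundary points -> inf, crowded points -> small values
```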
Procedia PDF Downloads 255
476 Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping
Authors: A. Belayadi, A. Mougari, L. Ait-Gougam, F. Mekideche-Chafa
Abstract:
The artificial neural network is one of the interesting techniques that have been advantageously used to deal with modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to modulate the information processing of a one-dimensional task. We aim to integrate a new method which is based on a new coding approach for generating the input-output mapping. The latter is based on increasing the number of neuron units in the last layer. Accordingly, to show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results illustrate that increasing the neuron units in the last layer allows finding the optimal network parameters that fit the mapping data. Moreover, it permits decreasing the training time during the computation process, which avoids the use of computers with high memory usage.
Keywords: neural network computing, continuous functions generating the input-output mapping, decreasing the training time, machines with big memories
Procedia PDF Downloads 283
475 Ambivalence as Ethical Practice: Methodologies to Address Noise, Bias in Care, and Contact Evaluations
Authors: Anthony Townsend, Robyn Fasser
Abstract:
While complete objectivity is a desirable scientific position from which to conduct a care and contact evaluation (CCE), it is precisely the recognition that we are inherently incapable of operating objectively that is the foundation of ethical practice and skilled assessment. Drawing upon recent research from Daniel Kahneman (2021) on the differences between noise and bias, as well as the different inherent biases collectively termed “The Elephant in the Brain” by Kevin Simler and Robin Hanson (2019) from Oxford University, this presentation addresses the various ways in which our judgments, perceptions and even procedures can be distorted and contaminated while conducting a CCE, and also considers the value of second-order cybernetics and the psychodynamic concept of ‘ambivalence’ as a conceptual basis to inform our assessment methodologies and limit such errors, or at least better identify them. Both a conceptual framework for ambivalence, our higher-order capacity to allow for the convergence and consideration of multiple emotional experiences and cognitive perceptions to inform our reasoning, and a practical methodology for assessment relying on data triangulation, Bayesian inference and hypothesis testing are presented as a means of promoting ethical practice for health care professionals conducting CCEs. An emphasis on widening awareness and perspective, limiting ‘splitting’, is demonstrated both in how this form of emotional processing plays out in alienating dynamics in families and in the assessment thereof. In addressing this concept, this presentation aims to illuminate the value of ambivalence as foundational to ethical practice for assessors.
Keywords: ambivalence, forensic, psychology, noise, bias, ethics
Procedia PDF Downloads 86
474 Feasibility Study of Wind Energy Potential in Turkey: Case Study of Catalca District in Istanbul
Authors: Mohammed Wadi, Bedri Kekezoglu, Mustafa Baysal, Mehmet Rida Tur, Abdulfetah Shobole
Abstract:
This paper investigates the technical evaluation of the wind potential for present and future investments in Turkey, taking into account the feasibility of sites, installation, operation, and maintenance. This evaluation is based on the hourly measured wind speed data at 30 m height for the three years 2008–2010 for the Çatalca district. These data, obtained from the national meteorology station in Istanbul, Republic of Turkey, are analyzed in order to evaluate the feasibility of the wind power potential and to ensure an optimal selection of wind turbines for the area of interest. Furthermore, the data are extrapolated and analyzed at 60 m and 80 m, taking into account the variability of the roughness factor. The Weibull bi-parameter probability function is used to approximate the monthly and annual wind potential and power density based on three calculation methods, namely the approximated, the graphical and the energy pattern factor methods. The annual mean wind power densities were found to be 400.31, 540.08 and 611.02 W/m² for 30, 60, and 80 m heights, respectively. Simulation results show that the analyzed area is an appropriate place for constructing large-scale wind farms.
Keywords: wind potential in Turkey, Weibull bi-parameter probability function, the approximated method, the graphical method, the energy pattern factor method, capacity factor
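A minimal sketch of the energy pattern factor method named in the keywords: estimate the Weibull shape k and scale c from the measured speeds via the energy pattern factor, then compute the mean power density 0.5*rho*c^3*Gamma(1+3/k). The synthetic wind speeds stand in for the Çatalca measurements.

```python
import math
import numpy as np

def weibull_epf(speeds, rho=1.225):
    """Weibull fit by the energy pattern factor method plus mean power density."""
    v = np.asarray(speeds, dtype=float)
    epf = np.mean(v**3) / np.mean(v) ** 3                # energy pattern factor
    k = 1.0 + 3.69 / epf**2                              # shape parameter (EPF approximation)
    c = np.mean(v) / math.gamma(1.0 + 1.0 / k)           # scale parameter (m/s)
    pd = 0.5 * rho * c**3 * math.gamma(1.0 + 3.0 / k)    # mean wind power density (W/m^2)
    return k, c, pd

speeds = np.random.weibull(2.0, 8760) * 7.0              # one hypothetical year of hourly data
print(weibull_epf(speeds))
```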
Procedia PDF Downloads 259
473 Integration GIS–SCADA Power Systems to Enclosure Air Dispersion Model
Authors: Ibrahim Shaker, Amr El Hossany, Moustafa Osman, Mohamed El Raey
Abstract:
This paper explores an integration model between a GIS–SCADA system and an enclosure quantification model to assess the impact of a fail-safe event. There are real demands to identify spatial objects and improve control system performance. The employed methodology predicts electro-mechanical operations and the corresponding time to environmental incident variations. Open processing, as an object systems technology, is presented for integrating the enclosure database with minimal memory size and computation time via connectivity drivers such as ODBC/JDBC during the main stages of the GIS–SCADA connection. The function of the Geographic Information System is to manipulate the power distribution in contrast to developing issues. In other words, GIS-SCADA systems integration will require numerical objects of the process to enable system model calibration and estimation demands, determination of past events for analysis, and prediction of emergency situations for response training.
Keywords: air dispersion model, environmental management, SCADA systems, GIS system, integration power system
Procedia PDF Downloads 369
472 Big Data in Telecom Industry: Effective Predictive Techniques on Call Detail Records
Authors: Sara ElElimy, Samir Moustafa
Abstract:
Mobile network operators are starting to face many challenges in the digital era, especially with high demands from customers. Mobile network operators are considered a source of big data, and traditional techniques are not effective in the new era of big data, the Internet of Things (IoT) and 5G; as a result, handling different big datasets effectively becomes a vital task for operators with the continuous growth of data and the move from long term evolution (LTE) to 5G. So, there is an urgent need for effective big data analytics to predict future demands, traffic, and network performance to fulfill the requirements of the fifth generation of mobile network technology. In this paper, we introduce data science techniques using machine learning and deep learning algorithms: the autoregressive integrated moving average (ARIMA), Bayesian-based curve fitting, and recurrent neural networks (RNN) are employed for a data-driven application for mobile network operators. The main framework of these models includes identification of the parameters of each model, estimation, prediction, and a final data-driven application of this prediction to business and network performance applications. These models are applied to the Telecom Italia Big Data Challenge call detail record (CDR) datasets. The performance of these models, assessed using specific well-known evaluation criteria, shows that ARIMA (the machine learning-based model) is more accurate as a predictive model on such a dataset than the RNN (the deep learning model).
Keywords: big data analytics, machine learning, CDRs, 5G
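A minimal statsmodels sketch of the ARIMA portion of the framework (order identification assumed, then estimation and prediction) on a synthetic hourly traffic series standing in for the CDR-derived data; the (2, 1, 2) order is an illustrative assumption.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
hours = pd.date_range("2024-01-01", periods=24 * 14, freq="H")
# Placeholder CDR-derived traffic volume: a daily cycle plus noise.
traffic = 100 + 30 * np.sin(2 * np.pi * np.arange(len(hours)) / 24) + rng.normal(0, 5, len(hours))
series = pd.Series(traffic, index=hours)

model = ARIMA(series, order=(2, 1, 2)).fit()        # estimation with an assumed (p, d, q)
forecast = model.forecast(steps=24)                  # next-day, hour-by-hour prediction
print(forecast.head())
```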
Procedia PDF Downloads 139
471 Constructing White-Box Implementations Based on Threshold Shares and Composite Fields
Authors: Tingting Lin, Manfred von Willich, Dafu Lou, Phil Eisen
Abstract:
A white-box implementation of a cryptographic algorithm is a software implementation intended to resist extraction of the secret key by an adversary. To date, most white-box techniques are used to protect block cipher implementations. However, a large proportion of white-box implementations have been proven to be vulnerable to affine equivalence attacks and other algebraic attacks, as well as differential computation analysis (DCA). In this paper, we identify a class of block ciphers for which we propose a method of constructing white-box implementations. Our method is based on threshold implementations and operations in composite fields. The resulting implementations consist of lookup tables and a few exclusive-OR operations. All intermediate values (inputs and outputs of the lookup tables) are masked. The threshold implementation makes the distribution of the masked values uniform and independent of the original inputs, and the operations in composite fields reduce the size of the lookup tables. The white-box implementations can provide resistance against algebraic attacks and DCA-like attacks.
Keywords: white-box, block cipher, composite field, threshold implementation
Procedia PDF Downloads 168
470 Determining of the Performance of Data Mining Algorithm Determining the Influential Factors and Prediction of Ischemic Stroke: A Comparative Study in the Southeast of Iran
Authors: Y. Mehdipour, S. Ebrahimi, A. Jahanpour, F. Seyedzaei, B. Sabayan, A. Karimi, H. Amirifard
Abstract:
Ischemic stroke is one of the common causes of disability and mortality. It is the fourth leading cause of death in the world, and the third according to some other sources. Only 1/3 of patients with ischemic stroke fully recover, 1/3 end up with permanent disability and 1/3 face death. Thus, the use of predictive models to predict stroke has a vital role in reducing the complications and costs related to this disease. The aim of this study was therefore to identify the effective factors and predict ischemic stroke with the help of DM methods. The present study was a descriptive-analytic study. The population was 213 cases from among patients referring to Ali ibn Abi Talib (AS) Hospital in Zahedan. The data collection tool was a checklist whose validity and reliability were confirmed. This study used decision tree DM algorithms for modeling. Data analysis was performed using SPSS-19 and SPSS Modeler 14.2. The comparison of algorithms showed that the CHAID algorithm, with 95.7% accuracy, has the best performance. Moreover, based on the model created, factors such as anemia, diabetes mellitus, hyperlipidemia, transient ischemic attacks, coronary artery disease, and atherosclerosis are the most effective factors in stroke. Decision tree algorithms, especially the CHAID algorithm, have acceptable precision and predictive ability to determine the factors affecting ischemic stroke. Thus, predictive models created through this algorithm can play a significant role in decreasing the mortality and disability caused by ischemic stroke.
Keywords: data mining, ischemic stroke, decision tree, Bayesian network
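A minimal decision tree sketch of the modelling step. scikit-learn provides CART rather than CHAID, so this is a stand-in illustration with invented binary risk-factor data (not the hospital checklist data); feature importances play the role of the "most effective factors".

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(213, 6))                    # 213 cases, 6 binary risk factors
y = (X[:, :3].sum(axis=1) + rng.integers(0, 2, 213) >= 2).astype(int)   # synthetic stroke outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

print("accuracy:", accuracy_score(y_te, tree.predict(X_te)))
print("feature importances:", tree.feature_importances_)  # proxy for the influential factors
```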
Procedia PDF Downloads 174
469 Non-Linear Regression Modeling for Composite Distributions
Authors: Mostafa Aminzadeh, Min Deng
Abstract:
Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the Normal, Exponential, and inverse-Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method. The simulations confirmed that the proposed method provides precise estimates for the regression parameters. It is important to note that this approach can be applied to datasets if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses the Fisher scoring algorithm as an iterative method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
Keywords: maximum likelihood estimation, fisher scoring method, non-linear regression models, composite distributions
Procedia PDF Downloads 33
468 Toward a Characteristic Optimal Power Flow Model for Temporal Constraints
Authors: Zongjie Wang, Zhizhong Guo
Abstract:
While the regular optimal power flow model focuses on a single time scan, the optimization of power systems is typically intended for a time duration with respect to a desired objective function. In this paper, a temporal optimal power flow model for a time period is proposed. To reduce the computational burden of calculating the temporal optimal power flow, a characteristic optimal power flow model is proposed, which employs different characteristic load patterns to represent the objective function and security constraints. A numerical method based on the interior point method is also proposed for solving the characteristic optimal power flow model. Both the temporal optimal power flow model and the characteristic optimal power flow model can improve the system's desired objective function for the entire time period. Numerical studies are conducted on the IEEE 14- and 118-bus test systems to demonstrate the effectiveness of the proposed characteristic optimal power flow model.
Keywords: optimal power flow, time period, security, economy
Procedia PDF Downloads 451
467 One-Dimension Model for Positive Displacement Pump with Cavitation Algorithm
Authors: Francesco Rizzuto, Matthew Stickland, Stephan Hannot
Abstract:
The simulation of a positive displacement pump system with commercial software for Computational Fluid Dynamics (CFD) results in an enormous computational effort due to the complexity of the pump system. This drawback restricts its use to a specific part of the pump in one simulation. This research focuses on developing an algorithm that provides a suitable result in agreement with experimental data, without that computational effort. The compressible equations are solved with an explicit algorithm. A comparison is presented between the finite volume (FV) method with the Monotonic Upwind Scheme for Conservation Laws (MUSCL) with a slope limiter and experimental results. The source terms for cavitation and friction are introduced into the algorithm with a splitting strategy and solved with a 4th-order Runge-Kutta scheme (RK4). Different pumps are modeled and analyzed to evaluate the flexibility of the code. The simulation required minimal computation time and resources without compromising the accuracy of the simulation results. Therefore, this algorithm highlights the feasibility of pressure pulsation simulation as a design tool for industrial purposes.
Keywords: cavitation, diaphragm, DVCM, finite volume, MUSCL, positive displacement pump
Procedia PDF Downloads 155
466 Fast and Accurate Model to Detect Ictal Waveforms in Electroencephalogram Signals
Authors: Piyush Swami, Bijaya Ketan Panigrahi, Sneh Anand, Manvir Bhatia, Tapan Gandhi
Abstract:
Visual inspection of electroencephalogram (EEG) signals to detect epileptic activity is a very challenging and time-consuming task, even for an expert neurophysiologist. This problem is most challenging in under-developed and developing countries due to the shortage of skilled neurophysiologists. In the past, notable research efforts have gone into trying to automate the seizure detection process. However, high false-alarm rates and the complexity of the models developed so far have vastly limited their practical implementation. In this paper, we present a novel scheme for epileptic seizure detection using the empirical mode decomposition technique. The intrinsic mode functions obtained were used to calculate the standard deviations. This was followed by a probability-density-based classifier to discriminate between non-ictal and ictal patterns in EEG signals. The model presented here demonstrated very high classification rates ( > 97%) without compromising the statistical performance. The computation times for each testing phase were also very low ( < 0.029 s), which makes this model ideal for practical applications.
Keywords: electroencephalogram (EEG), epilepsy, ictal patterns, empirical mode decomposition
Procedia PDF Downloads 406
465 Traction Behavior of Linear Piezo-Viscous Lubricants in Rough Elastohydrodynamic Lubrication Contacts
Authors: Punit Kumar, Niraj Kumar
Abstract:
The traction behavior of lubricants with the linear pressure-viscosity response in EHL line contacts is investigated numerically for smooth as well as rough surfaces. The analysis involves the simultaneous solution of the Reynolds, elasticity and energy equations along with the computation of lubricant properties and surface temperatures. The temperature-modified Doolittle-Tait equations are used to calculate viscosity and density as functions of fluid pressure and temperature, while the Carreau model is used to describe the lubricant rheology. The surface roughness is assumed to be sinusoidal and is present on the nearly stationary surface in a near-pure sliding EHL conjunction. The linear P-V oil is found to yield much lower traction coefficients and slightly thicker EHL films as compared to the synthetic oil for a given set of dimensionless speed and load parameters. Besides, the increase in traction coefficient attributed to surface roughness is much lower for the former case.
Keywords: EHL, linear pressure-viscosity, surface roughness, traction, water/glycol
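For reference, the Carreau model named above gives the generalized-Newtonian viscosity as a function of shear rate; the parameter values in this sketch are illustrative, not fitted to the lubricants studied.

```python
import numpy as np

def carreau_viscosity(gamma_dot, eta0, eta_inf, lam, n):
    """Carreau model: eta = eta_inf + (eta0 - eta_inf) * [1 + (lam*gamma_dot)^2]^((n-1)/2)."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)

shear_rates = np.logspace(2, 7, 6)            # shear rates in 1/s
# Illustrative parameters: zero-shear viscosity, infinite-shear viscosity, relaxation time, power index.
print(carreau_viscosity(shear_rates, eta0=0.05, eta_inf=0.001, lam=1e-5, n=0.8))
```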
Procedia PDF Downloads 382