Search results for: time prediction algorithms
19673 A Secure Cloud Storage Scheme Based on Accountable Key-Policy Attribute-Based Encryption without Key Escrow
Authors: Ming Lun Wang, Yan Wang, Ning Ruo Sun
Abstract:
With the development of cloud computing, more and more users are turning to cloud storage services. However, several issues remain: 1) the cloud server may steal the shared data, 2) sharers may collude with the cloud server to steal the shared data, 3) the cloud server may tamper with the shared data, and 4) sharers and the key generation center (KGC) may conspire to steal the shared data. In this paper, we use the advanced encryption standard (AES), hash algorithms, and accountable key-policy attribute-based encryption without key escrow (WOKE-AKP-ABE) to build a secure cloud storage scheme. The data are encrypted to protect privacy, and hash algorithms are used to prevent the cloud server from tampering with the data uploaded to the cloud. Analysis results show that this scheme can resist collusion attacks. Keywords: cloud storage security, sharing storage, attributes, hash algorithm
Procedia PDF Downloads 390
19672 Predicting Bridge Pier Scour Depth with SVM
Authors: Arun Goel
Abstract:
Prediction of maximum local scour is necessary for the safe and economical design of bridges. A number of equations have been developed over the years to predict local scour depth using laboratory data, and a few pier equations have also been proposed using field data. Most of these equations are empirical in nature, as indicated by past publications. In this paper, attempts have been made to compute the local depth of scour around a bridge pier in dimensional and non-dimensional form by using linear regression, simple regression and SVM (Poly and Rbf) techniques along with a few conventional empirical equations. The outcome of this study suggests that SVM (Poly and Rbf) based modeling can be employed as an alternative to linear regression, simple regression and the conventional empirical equations in predicting the scour depth of bridge piers. The results of the present study based on the non-dimensional form of bridge pier scour indicate improved performance of SVM (Poly and Rbf) in comparison to the dimensional form. Keywords: modeling, pier scour, regression, prediction, SVM (Poly and Rbf kernels)
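As an illustration of the comparison described in this abstract, a minimal scikit-learn sketch contrasting linear regression with SVM regression using polynomial and RBF kernels might look as follows; the input variables and synthetic data are placeholders, not the authors' laboratory measurements.

```python
# Illustrative comparison of linear regression vs. SVR (poly and RBF kernels)
# for scour-depth prediction; data, feature names and hyperparameters are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 200
# assumed predictors: pier width, flow depth, approach velocity
X = rng.uniform([0.1, 0.2, 0.5], [1.0, 2.0, 3.0], size=(n, 3))
y = 1.5 * X[:, 0] ** 0.65 * X[:, 1] ** 0.3 * X[:, 2] ** 0.4 + rng.normal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "linear": LinearRegression(),
    "svr_poly": SVR(kernel="poly", degree=2, C=10.0),
    "svr_rbf": SVR(kernel="rbf", C=10.0, gamma="scale"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "R2 =", round(r2_score(y_te, model.predict(X_te)), 3))
```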
Procedia PDF Downloads 451
19671 Application of a Model-Free Artificial Neural Networks Approach for Structural Health Monitoring of the Old Lidingö Bridge
Authors: Ana Neves, John Leander, Ignacio Gonzalez, Raid Karoumi
Abstract:
Systematic monitoring and inspection are needed to assess the present state of a structure and predict its future condition. If an irregularity is noticed, repair actions may take place, and an adequate intervention will most probably reduce future maintenance costs, minimize downtime and increase safety by avoiding the failure of the structure as a whole or of one of its structural parts. For this to be possible, decisions must be made at the right time, which implies using systems that can detect abnormalities in their early stage. In this sense, Structural Health Monitoring (SHM) is seen as an effective tool for improving the safety and reliability of infrastructures. This paper explores the decision-making problem in SHM regarding the maintenance of civil engineering structures. The aim is to assess the present condition of a bridge based exclusively on measurements, using the method suggested in this paper, such that action is taken coherently with the information made available by the monitoring system. Artificial Neural Networks are trained and their ability to predict structural behavior is evaluated in the light of a case study where acceleration measurements are acquired from a bridge located in Stockholm, Sweden. This relatively old bridge is still in operation despite experiencing obvious problems already reported in previous inspections. The prediction errors provide a measure of the accuracy of the algorithm and are subjected to further investigation, which comprises concepts like clustering analysis and statistical hypothesis testing. These make it possible to interpret the obtained prediction errors, draw conclusions about the state of the structure and thus support decision making regarding its maintenance. Keywords: artificial neural networks, clustering analysis, model-free damage detection, statistical hypothesis testing, structural health monitoring
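A minimal sketch of the model-free idea described above, assuming a simple autoregressive ANN on acceleration windows with clustering and a t-test applied to the prediction errors; the signals and network settings are invented for illustration.

```python
# Train an ANN to predict the next acceleration sample from recent history,
# then inspect prediction errors with a t-test and clustering (illustrative only).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.cluster import KMeans
from scipy import stats

def windows(signal, w=20):
    X = np.array([signal[i:i + w] for i in range(len(signal) - w)])
    return X, signal[w:]

t = np.linspace(0, 60, 6000)
healthy = np.sin(2 * np.pi * 2.0 * t) + 0.05 * np.random.randn(t.size)
damaged = np.sin(2 * np.pi * 1.8 * t) + 0.05 * np.random.randn(t.size)  # assumed frequency shift

X_h, y_h = windows(healthy)
ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_h, y_h)

err_h = y_h - ann.predict(X_h)
X_d, y_d = windows(damaged)
err_d = y_d - ann.predict(X_d)

# Hypothesis test: do the error magnitudes differ between the two states?
print("t-test p-value:", stats.ttest_ind(np.abs(err_h), np.abs(err_d), equal_var=False).pvalue)
# Clustering of error magnitudes as a simple grouping of structural states
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    np.abs(np.concatenate([err_h, err_d])).reshape(-1, 1))
print("share of samples in cluster 1:", labels.mean())
```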
Procedia PDF Downloads 208
19670 A Study on Performance Prediction in Early Design Stage of Apartment Housing Using Machine Learning
Authors: Seongjun Kim, Sanghoon Shim, Jinwooung Kim, Jaehwan Jung, Sung-Ah Kim
Abstract:
With the development of information and communication technology, the convergence of machine learning from the ICT area and design is being attempted. In this way, it becomes possible to grasp correlations between various design elements that were previously difficult to capture and to reflect them in the design result. In the architectural design area, attempts have been made to predict performance, which was difficult to estimate in the past, by finding correlations among the multiple factors that affect it, mainly through machine learning. With machine learning, performance can be predicted quickly. The aim of this study is to propose a model that, through machine learning, predicts performance according to the block arrangement of apartment housing and suggests the design alternative that satisfies performance requirements such as daylight hours in the form most similar to the alternative proposed by the designer. Through this study, a designer can proceed with the design from the early design stage while quickly considering various design alternatives and their accurate performance. Keywords: apartment housing, machine learning, multi-objective optimization, performance prediction
Procedia PDF Downloads 481
19669 Prediction of Heavy-Weight Impact Noise and Vibration of Floating Floor Using Modified Impact Spectrum
Authors: Ju-Hyung Kim, Dae-Ho Mun, Hong-Gun Park
Abstract:
When an impact is applied to a floating floor, the noise and vibration response in the high-frequency range is reduced effectively, while the response in the low-frequency range is amplified. This means a floating floor can worsen the noise condition when a heavy-weight impact is applied. The amplified response is the result of interaction between the finishing layer (mortar plate) and the concrete slab. Because the impact force is not delivered directly to the concrete slab, the impact force waveform or spectrum can be changed. In this paper, the changed impact spectrum was derived from several floating floor vibration tests. Based on the measured data, numerical modeling can describe the floating floor response, especially in the low-frequency range. As a result, heavy-weight impact noise can be predicted using the modified impact spectrum. Keywords: floating floor, heavy-weight impact, prediction, vibration
Procedia PDF Downloads 372
19668 Predicting and Obtaining New Solvates of Curcumin, Demethoxycurcumin and Bisdemethoxycurcumin Based on the Ccdc Statistical Tools and Hansen Solubility Parameters
Authors: J. Ticona Chambi, E. A. De Almeida, C. A. Andrade Raymundo Gaiotto, A. M. Do Espírito Santo, L. Infantes, S. L. Cuffini
Abstract:
The solubility of active pharmaceutical ingredients (APIs) is challenging for the pharmaceutical industry. The new multicomponent crystalline forms as cocrystal and solvates present an opportunity to improve the solubility of APIs. Commonly, the procedure to obtain multicomponent crystalline forms of a drug starts by screening the drug molecule with the different coformers/solvents. However, it is necessary to develop methods to obtain multicomponent forms in an efficient way and with the least possible environmental impact. The Hansen Solubility Parameters (HSPs) is considered a tool to obtain theoretical knowledge of the solubility of the target compound in the chosen solvent. H-Bond Propensity (HBP), Molecular Complementarity (MC), Coordination Values (CV) are tools used for statistical prediction of cocrystals developed by the Cambridge Crystallographic Data Center (CCDC). The HSPs and the CCDC tools are based on inter- and intra-molecular interactions. The curcumin (Cur), target molecule, is commonly used as an anti‐inflammatory. The demethoxycurcumin (Demcur) and bisdemethoxycurcumin (Bisdcur) are natural analogues of Cur from turmeric. Those target molecules have differences in their solubilities. In this way, the work aimed to analyze and compare different tools for multicomponent forms prediction (solvates) of Cur, Demcur and Biscur. The HSP values were calculated for Cur, Demcur, and Biscur using the chemical group contribution methods and the statistical optimization from experimental data. The HSPmol software was used. From the HSPs of the target molecules and fifty solvents (listed in the HSP books), the relative energy difference (RED) was determined. The probability of the target molecules would be interacting with the solvent molecule was determined using the CCDC tools. A dataset of fifty molecules of different organic solvents was ranked for each prediction method and by a consensus ranking of different combinations: HSP, CV, HBP and MC values. Based on the prediction, 15 solvents were selected as Dimethyl Sulfoxide (DMSO), Tetrahydrofuran (THF), Acetonitrile (ACN), 1,4-Dioxane (DOX) and others. In a starting analysis, the slow evaporation technique from 50°C at room temperature and 4°C was used to obtain solvates. The single crystals were collected by using a Bruker D8 Venture diffractometer, detector Photon100. The data processing and crystal structure determination were performed using APEX3 and Olex2-1.5 software. According to the results, the HSPs (theoretical and optimized) and the Hansen solubility sphere for Cur, Demcur and Biscur were obtained. With respect to prediction analyses, a way to evaluate the predicting method was through the ranking and the consensus ranking position of solvates already reported in the literature. It was observed that the combination of HSP-CV obtained the best results when compared to the other methods. Furthermore, as a result of solvent selected, six new solvates, Cur-DOX, Cur-DMSO, Bicur-DOX, Bircur-THF, Demcur-DOX, Demcur-ACN and a new Biscur hydrate, were obtained. Crystal structures were determined for Cur-DOX, Biscur-DOX, Demcur-DOX and Bicur-Water. Moreover, the unit-cell parameter information for Cur-DMSO, Biscur-THF and Demcur-ACN were obtained. The preliminary results showed that the prediction method is showing a promising strategy to evaluate the possibility of forming multicomponent. It is currently working on obtaining multicomponent single crystals.Keywords: curcumin, HSPs, prediction, solvates, solubility
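The Hansen-distance screening step described above can be illustrated with a short calculation of RED = Ra/R0, where Ra² = 4(δD1−δD2)² + (δP1−δP2)² + (δH1−δH2)²; the curcumin HSP values and interaction radius below are assumed placeholders, not the study's fitted values.

```python
# Sketch of the RED-based solvent ranking step of an HSP screening.
import math

def red(solute, solvent, R0):
    dD1, dP1, dH1 = solute
    dD2, dP2, dH2 = solvent
    Ra = math.sqrt(4 * (dD1 - dD2) ** 2 + (dP1 - dP2) ** 2 + (dH1 - dH2) ** 2)
    return Ra / R0

curcumin_hsp = (18.5, 10.0, 12.0)   # hypothetical (dD, dP, dH) in MPa^0.5
R0 = 8.0                            # assumed interaction radius
solvents = {"DMSO": (18.4, 16.4, 10.2), "THF": (16.8, 5.7, 8.0), "ACN": (15.3, 18.0, 6.1)}

ranking = sorted(solvents, key=lambda s: red(curcumin_hsp, solvents[s], R0))
for s in ranking:
    print(s, round(red(curcumin_hsp, solvents[s], R0), 2))  # RED < 1 suggests good affinity
```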
Procedia PDF Downloads 63
19667 Advancements in Mathematical Modeling and Optimization for Control, Signal Processing, and Energy Systems
Authors: Zahid Ullah, Atlas Khan
Abstract:
This abstract focuses on the advancements in mathematical modeling and optimization techniques that play a crucial role in enhancing the efficiency, reliability, and performance of these systems. In this era of rapidly evolving technology, mathematical modeling and optimization offer powerful tools to tackle the complex challenges faced by control, signal processing, and energy systems. This abstract presents the latest research and developments in mathematical methodologies, encompassing areas such as control theory, system identification, signal processing algorithms, and energy optimization. The abstract highlights the interdisciplinary nature of mathematical modeling and optimization, showcasing their applications in a wide range of domains, including power systems, communication networks, industrial automation, and renewable energy. It explores key mathematical techniques, such as linear and nonlinear programming, convex optimization, stochastic modeling, and numerical algorithms, that enable the design, analysis, and optimization of complex control and signal processing systems. Furthermore, the abstract emphasizes the importance of addressing real-world challenges in control, signal processing, and energy systems through innovative mathematical approaches. It discusses the integration of mathematical models with data-driven approaches, machine learning, and artificial intelligence to enhance system performance, adaptability, and decision-making capabilities. The abstract also underscores the significance of bridging the gap between theoretical advancements and practical applications. It recognizes the need for practical implementation of mathematical models and optimization algorithms in real-world systems, considering factors such as scalability, computational efficiency, and robustness. In summary, this abstract showcases the advancements in mathematical modeling and optimization techniques for control, signal processing, and energy systems. It highlights the interdisciplinary nature of these techniques, their applications across various domains, and their potential to address real-world challenges. The abstract emphasizes the importance of practical implementation and integration with emerging technologies to drive innovation and improve the performance of control, signal processing, and energy.Keywords: mathematical modeling, optimization, control systems, signal processing, energy systems, interdisciplinary applications, system identification, numerical algorithms
Procedia PDF Downloads 112
19666 An Alteration of the Boltzmann Superposition Principle to Account for Environmental Degradation in Fiber Reinforced Plastics
Authors: Etienne K. Ngoy
Abstract:
This analysis suggests that the comprehensive degradation caused by any environmental factor on fiber reinforced plastics under mechanical stress can be measured as a change in viscoelastic properties of the material. The change in viscoelastic characteristics is experimentally determined as a time-dependent function expressing the amplification of the stress relaxation. The variation of this experimental function provides a measure of the environmental degradation rate. Where real service environment conditions can be reliably simulated in the laboratory, it is possible to generate master curves that include environmental degradation effect and hence predict the durability of the fiber reinforced plastics under environmental degradation.Keywords: environmental effects, fiber reinforced plastics durability, prediction, stress effect
Procedia PDF Downloads 192
19665 Prediction of in situ Permeability for Limestone Rock Using Rock Quality Designation Index
Authors: Ahmed T. Farid, Muhammed Rizwan
Abstract:
Permeability is a highly important parameter in geotechnical studies of soil or rock. Permeability values for rock formations are more difficult to determine than for soil formations, as they depend on the rock quality and its degree of fracturing. In this research, the in situ permeability of limestone rock formations was predicted. The limestone rock permeability was evaluated using Lugeon tests (in-situ packer permeability). Different sites spread across the Riyadh region of Saudi Arabia were chosen for the study. Correlations were deduced between the in-situ permeability values of the limestone rock and the rock quality designation (RQD) values calculated during the execution of the boreholes in the study areas. The study was performed for the different ranges of RQD values measured during drilling of the site boreholes. The developed correlations are recommended for the onsite determination of the in-situ permeability of limestone rock only. For other sedimentary rock formations, more studies are needed to establish the correlations appropriate to each type. Keywords: in situ, packer, permeability, rock, quality
Procedia PDF Downloads 372
19664 Algorithm for Information Retrieval Optimization
Authors: Kehinde K. Agbele, Kehinde Daniel Aruleba, Eniafe F. Ayetiran
Abstract:
When using Information Retrieval Systems (IRS), users often present search queries made of ad-hoc keywords. It is then up to the IRS to obtain a precise representation of the user’s information need and the context of the information. This paper investigates the optimization of IRS for individual information needs in order of relevance. The study addressed the development of algorithms that optimize the ranking of documents retrieved from an IRS. It discusses and describes a Document Ranking Optimization (DROPT) algorithm for information retrieval (IR) in an Internet-based or designated-databases environment. As the volume of information available online and in designated databases grows continuously, ranking algorithms can play a major role in the context of search results. In this paper, a DROPT technique for documents retrieved from a corpus is developed with respect to the document index keywords and the query vectors. This is based on calculating the weight (Keywords: information retrieval, document relevance, performance measures, personalization
Procedia PDF Downloads 241
19663 Computational Study of Flow and Heat Transfer Characteristics of an Incompressible Fluid in a Channel Using Lattice Boltzmann Method
Authors: Imdat Taymaz, Erman Aslan, Kemal Cakir
Abstract:
The Lattice Boltzmann Method (LBM) is used to computationally investigate the laminar flow and heat transfer of an incompressible fluid with constant material properties in a 2D channel with a built-in triangular prism. Both momentum and energy transport are modelled by the LBM. A uniform lattice structure with a single-time-relaxation rule is used. Interpolation methods are applied to obtain higher flexibility on the computational grid, where the information is transferred from the lattice structure to the computational grid by Lagrange interpolation. The flow is studied for different Reynolds numbers, while the Prandtl number is kept constant at 0.7. The results show how the presence of a triangular prism affects the flow and heat transfer patterns for the steady-state and unsteady-periodic flow regimes. As an evaluation of the accuracy of the developed LBM code, the results are compared with those obtained by a commercial CFD code. It is observed that the present LBM code produces results with accuracy similar to the well-established CFD code and, additionally, that the LBM needs much less CPU time for the prediction of the unsteady phenomena. Keywords: laminar forced convection, LBM, triangular prism
Procedia PDF Downloads 373
19662 Utilizing Temporal and Frequency Features in Fault Detection of Electric Motor Bearings with Advanced Methods
Authors: Mohammad Arabi
Abstract:
The development of advanced technologies in the field of signal processing and vibration analysis has enabled more accurate analysis and fault detection in electrical systems. This research investigates the application of temporal and frequency features in detecting faults in electric motor bearings, aiming to enhance fault detection accuracy and prevent unexpected failures. The use of methods such as deep learning algorithms and neural networks in this process can yield better results. The main objective of this research is to evaluate the efficiency and accuracy of methods based on temporal and frequency features in identifying faults in electric motor bearings to prevent sudden breakdowns and operational issues. Additionally, the feasibility of using techniques such as machine learning and optimization algorithms to improve the fault detection process is also considered. This research employed an experimental method and random sampling. Vibration signals were collected from electric motors under normal and faulty conditions. After standardizing the data, temporal and frequency features were extracted. These features were then analyzed using statistical methods such as analysis of variance (ANOVA) and t-tests, as well as machine learning algorithms like artificial neural networks and support vector machines (SVM). The results showed that using temporal and frequency features significantly improves the accuracy of fault detection in electric motor bearings. ANOVA indicated significant differences between normal and faulty signals. Additionally, t-tests confirmed statistically significant differences between the features extracted from normal and faulty signals. Machine learning algorithms such as neural networks and SVM also significantly increased detection accuracy, demonstrating high effectiveness in timely and accurate fault detection. This study demonstrates that using temporal and frequency features combined with machine learning algorithms can serve as an effective tool for detecting faults in electric motor bearings. This approach not only enhances fault detection accuracy but also simplifies and streamlines the detection process. However, challenges such as data standardization and the cost of implementing advanced monitoring systems must also be considered. Utilizing temporal and frequency features in fault detection of electric motor bearings, along with advanced machine learning methods, offers an effective solution for preventing failures and ensuring the operational health of electric motors. Given the promising results of this research, it is recommended that this technology be more widely adopted in industrial maintenance processes.Keywords: electric motor, fault detection, frequency features, temporal features
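A hedged sketch of such a pipeline, combining temporal features (RMS, kurtosis) and a frequency-band energy with an SVM classifier; the synthetic vibration segments, sampling rate and fault band are assumptions for illustration only.

```python
# Temporal + frequency features from vibration segments, classified with an SVM.
import numpy as np
from scipy.stats import kurtosis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

fs = 12_000  # assumed sampling rate, Hz

def features(segment):
    spec = np.abs(np.fft.rfft(segment))
    freqs = np.fft.rfftfreq(segment.size, 1 / fs)
    band = spec[(freqs > 2000) & (freqs < 4000)].sum()   # energy in an assumed fault band
    return [np.sqrt(np.mean(segment ** 2)), kurtosis(segment), band]

rng = np.random.default_rng(1)
def make_segment(faulty):
    t = np.arange(2048) / fs
    sig = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
    if faulty:  # impulsive bursts as a stand-in for bearing fault signatures
        sig += 0.5 * (rng.random(t.size) < 0.01) * rng.standard_normal(t.size)
    return sig

y = np.array([0] * 100 + [1] * 100)
X = np.array([features(make_segment(label)) for label in y])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```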
Procedia PDF Downloads 47
19661 An Enhanced Floor Estimation Algorithm for Indoor Wireless Localization Systems Using Confidence Interval Approach
Authors: Kriangkrai Maneerat, Chutima Prommak
Abstract:
Indoor wireless localization systems play an important role in enhancing context-aware services. Determining the position of mobile objects in complex indoor environments, such as those in multi-floor buildings, is a very challenging problem. This paper presents an effective floor estimation algorithm, which can accurately determine the floor where a mobile object is located. The proposed algorithm is based on the confidence interval of the summation of the online Received Signal Strength (RSS) obtained from IEEE 802.15.4 Wireless Sensor Networks (WSN). We compare the performance of the proposed algorithm with that of other floor estimation algorithms in the literature by conducting a real WSN implementation in our facility. The experimental results and analysis showed that the proposed floor estimation algorithm outperformed the other algorithms and provided the highest floor accuracy, up to 100%, with a 95-percent confidence interval. Keywords: floor estimation algorithm, floor determination, multi-floor building, indoor wireless systems
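A simplified sketch of the confidence-interval idea, assuming per-floor training samples of the summed RSS; all RSS values and floor profiles are invented for illustration.

```python
# Sum the online RSS readings and check which floor's confidence interval
# (built from offline training sums) contains that value.
import numpy as np
from scipy import stats

def ci(samples, confidence=0.95):
    m, s, n = np.mean(samples), np.std(samples, ddof=1), len(samples)
    half = stats.t.ppf((1 + confidence) / 2, n - 1) * s / np.sqrt(n)
    return m - half, m + half

rng = np.random.default_rng(0)
# offline phase: summed RSS (dBm) over the anchor nodes, collected per floor (assumed)
training = {1: rng.normal(-320, 6, 50), 2: rng.normal(-350, 6, 50), 3: rng.normal(-380, 6, 50)}
intervals = {floor: ci(samples) for floor, samples in training.items()}

online_sum = -352.0  # summed RSS observed by the mobile node (example value)
estimate = [f for f, (lo, hi) in intervals.items() if lo <= online_sum <= hi]
print("estimated floor:", estimate or "no interval matched")
```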
Procedia PDF Downloads 418
19660 Development of Terrorist Threat Prediction Model in Indonesia by Using Bayesian Network
Authors: Hilya Mudrika Arini, Nur Aini Masruroh, Budi Hartono
Abstract:
There were more than 20 terrorist threats in Indonesia from 2002 to 2012. Despite this fact, preventive solutions based on studies in the field of national security in Indonesia have not been developed comprehensively. This study aims to provide a preventive solution by developing a prediction model of the terrorist threat in Indonesia using a Bayesian network. There are eight stages to build the model, starting from the literature review, through building and verifying the Bayesian belief network, to what-if scenarios. In order to build the model, four experts with different perspectives are consulted. The study yields several significant findings. First, news and the readiness of the terrorist group are the most influential factors. Second, according to several scenarios of the news portion, it can be concluded that the higher the positive news proportion, the higher the probability that a terrorist threat will occur. Therefore, based on the model, the preventive solution to reduce the terrorist threat in Indonesia is to keep the positive news portion to a maximum of 38%. Keywords: Bayesian network, decision analysis, national security system, text mining
Procedia PDF Downloads 392
19659 Artificial Neural Network-Based Prediction of Effluent Quality of Wastewater Treatment Plant Employing Data Preprocessing Approaches
Authors: Vahid Nourani, Atefeh Ashrafi
Abstract:
Prediction of treated wastewater quality is a matter of growing importance in the water treatment process. Artificial neural networks (ANN), as a robust data-driven approach, have been widely used for forecasting the effluent quality of wastewater treatment. However, developing an ANN model based on appropriate input variables is a major concern, because numerous parameters are collected from the treatment process and their number keeps increasing with the development of electronic sensors. Various studies have been conducted, using different clustering methods, in order to classify the most related and effective input variables. Selecting the dominant input variables among wastewater treatment parameters, which could effectively lead to more accurate prediction of water quality, has often been overlooked. In the presented study, two ANN models were developed with the aim of forecasting the effluent quality of the Tabriz city wastewater treatment plant. Biochemical oxygen demand (BOD) was used as the target water quality parameter. Model A used Principal Component Analysis (PCA), a linear variance-based clustering method, for input selection. Model B used the variables identified by the mutual information (MI) measure. The optimal ANN structure showed up to a 15% increase in the Determination Coefficient (DC) when the results of model B were compared with those of model A. Thus, this study highlights the advantage of the PCA method in selecting dominant input variables for ANN modeling of wastewater treatment plant performance. Keywords: artificial neural networks, biochemical oxygen demand, principal component analysis, mutual information, Tabriz wastewater treatment plant, wastewater treatment plant
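A brief sketch of the two input-selection routes compared here (PCA components versus the top variables by mutual information), each feeding an ANN; the influent variables and data are synthetic placeholders rather than the Tabriz plant records.

```python
# Compare PCA-based and mutual-information-based input selection for an ANN predicting BOD.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_regression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))       # e.g. flow, COD, TSS, pH, ... (assumed variables)
bod = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 7] + rng.normal(0, 0.3, 300)

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)

X_pca = PCA(n_components=3).fit_transform(X)   # model A: linear variance-based selection
mi = mutual_info_regression(X, bod)
X_mi = X[:, np.argsort(mi)[-3:]]               # model B: top-3 variables by mutual information

for name, Xs in [("PCA inputs", X_pca), ("MI inputs", X_mi)]:
    print(name, "R2:", cross_val_score(ann, Xs, bod, cv=5, scoring="r2").mean().round(3))
```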
Procedia PDF Downloads 128
19658 Breast Cancer Detection Using Machine Learning Algorithms
Authors: Jiwan Kumar, Pooja, Sandeep Negi, Anjum Rouf, Amit Kumar, Naveen Lakra
Abstract:
In modern times, where health issues are increasing day by day, breast cancer is one of them, and it is crucial to detect it in its early stages. Doctors can use this model to tell their patients whether a tumor is not harmful (benign) or harmful (malignant). We have used machine learning to produce the model, applying algorithms such as Logistic Regression, Random Forest, Support Vector Classifier, Bayesian Network and Radial Basis Function. We used data on the crucial attributes and presented the results as pictures to make them easier for doctors to interpret. By doing this, we are making machine learning better at finding breast cancer, which can lead to saving more lives and better health care. Keywords: Bayesian network, radial basis function, ensemble learning, understandable, data making better, random forest, logistic regression, breast cancer
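A minimal sketch using the classifiers named above that have direct scikit-learn counterparts (logistic regression, random forest, and an RBF-kernel SVC); the public Wisconsin diagnostic dataset stands in for the authors' data, and the Bayesian network and RBF-network models are omitted.

```python
# Benign/malignant classification with three of the listed algorithms.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
models = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000)),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svc_rbf": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}
for name, model in models.items():
    print(name, "accuracy:", cross_val_score(model, X, y, cv=5).mean().round(3))
```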
Procedia PDF Downloads 53
19657 Machine Learning Models for the Prediction of Heating and Cooling Loads of a Residential Building
Authors: Aaditya U. Jhamb
Abstract:
Due to the current energy crisis that many countries are battling, energy-efficient buildings are the subject of extensive research in the modern technological era because of growing worries about energy consumption and its effects on the environment. The paper explores eight factors that help determine the energy efficiency of a building (relative compactness, surface area, wall area, roof area, overall height, orientation, glazing area, and glazing area distribution), with Tsanas and Xifara providing the dataset. The dataset comprises 768 different residential building models used to anticipate heating and cooling loads with a low mean squared error. By optimizing these characteristics, machine learning algorithms may assess and properly forecast a building's heating and cooling loads, lowering energy usage while increasing the quality of people's lives. The paper therefore studied the magnitude of the correlation between these input factors and the two output variables using various statistical methods of analysis, after determining which input variable was most closely associated with the output loads. The most conclusive model was the Decision Tree Regressor, which had a mean squared error of 0.258, whilst the least definitive model was the Isotonic Regressor, which had a mean squared error of 21.68. This paper also investigated the KNN Regressor and the Linear Regression, which had mean squared errors of 3.349 and 18.141, respectively. In conclusion, the model, given the eight input variables, was able to predict the heating and cooling loads of a residential building accurately and precisely. Keywords: energy efficient buildings, heating load, cooling load, machine learning models
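A short sketch of the model comparison reported above (decision tree, k-NN and linear regression scored by mean squared error); synthetic stand-ins replace the 768-sample Tsanas and Xifara dataset, and the isotonic regressor is omitted because the scikit-learn implementation is univariate.

```python
# Compare regressors for heating-load prediction on synthetic building descriptors.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(768, 8))    # 8 descriptors, 768 building models (synthetic)
heating = 30 * X[:, 0] - 10 * X[:, 4] + 5 * X[:, 6] + rng.normal(0, 1, 768)

X_tr, X_te, y_tr, y_te = train_test_split(X, heating, test_size=0.25, random_state=0)
models = {
    "decision_tree": DecisionTreeRegressor(random_state=0),
    "knn": KNeighborsRegressor(n_neighbors=5),
    "linear": LinearRegression(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "MSE:", round(mean_squared_error(y_te, model.predict(X_te)), 3))
```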
Procedia PDF Downloads 96
19656 On the Study of All Waterloo Automaton Semilattices
Authors: Mikhail Abramyan, Boris Melnikov
Abstract:
The aim is to study the set of subsets of grids of the Waterloo automaton and the set of covering automata defined by the grid subsets. The study was carried out using the library for working with nondeterministic finite automata NFALib implemented by one of the authors (M. Abramyan) in C#. The results are regularities obtained when considering semilattices of covering automata for the Waterloo automaton. A complete description of the obtained semilattices from the point of view of equivalence of the covering automata to the original Waterloo automaton is given, the criterion of equivalence of the covering automaton to the Waterloo automaton in terms of properties of the subset of grids defining the covering automaton is formulated. The relevance of the subject area under consideration is due to the need to research a set of regular languages and, in particular, a description of their various subclasses. Also relevant are the problems that may arise in some subclasses. This will give, among other things, the possibility of describing new algorithms for the equivalent transformation of nondeterministic finite automata.Keywords: nondeterministic finite automata, universal automaton, grid, covering automaton, equivalent transformation algorithms, the Waterloo automaton
Procedia PDF Downloads 87
19655 Energy System Analysis Using Data-Driven Modelling and Bayesian Methods
Authors: Paul Rowley, Adam Thirkill, Nick Doylend, Philip Leicester, Becky Gough
Abstract:
The dynamic performance of all energy generation technologies is impacted to varying degrees by the stochastic properties of the wider system within which the generation technology is located. This stochasticity can include the varying nature of ambient renewable energy resources such as wind or solar radiation, or unpredicted changes in energy demand which impact upon the operational behaviour of thermal generation technologies. An understanding of these stochastic impacts are especially important in contexts such as highly distributed (or embedded) generation, where an understanding of issues affecting the individual or aggregated performance of high numbers of relatively small generators is especially important, such as in ESCO projects. Probabilistic evaluation of monitored or simulated performance data is one technique which can provide an insight into the dynamic performance characteristics of generating systems, both in a prognostic sense (such as the prediction of future performance at the project’s design stage) as well as in a diagnostic sense (such as in the real-time analysis of underperforming systems). In this work, we describe the development, application and outcomes of a new approach to the acquisition of datasets suitable for use in the subsequent performance and impact analysis (including the use of Bayesian approaches) for a number of distributed generation technologies. The application of the approach is illustrated using a number of case studies involving domestic and small commercial scale photovoltaic, solar thermal and natural gas boiler installations, and the results as presented show that the methodology offers significant advantages in terms of plant efficiency prediction or diagnosis, along with allied environmental and social impacts such as greenhouse gas emission reduction or fuel affordability.Keywords: renewable energy, dynamic performance simulation, Bayesian analysis, distributed generation
Procedia PDF Downloads 495
19654 An Intrusion Detection System Based on K-Means, K-Medoids and Support Vector Clustering Using Ensemble
Authors: A. Mohammadpour, Ebrahim Najafi Kajabad, Ghazale Ipakchi
Abstract:
Presently, the security of computer networks is rising in importance, and many studies have been conducted in this field. With the penetration of the internet into different fields, much needs to be done to provide secure industrial and non-industrial networks. Firewalls, appropriate Intrusion Detection Systems (IDS), encryption protocols for sending and receiving information, and the use of authentication certificates are among the things that should be considered for system security. The aim of the present study is to combine the outcomes of several algorithms in a way that reduces IDS errors, improves system security and avoids adding extra overhead to the system. Finally, from the obtained results, we can also detect the amount and percentage of further sub-attacks. By running the proposed system, which is based on combining multi-algorithmic outcomes, and comparing it with the proposed single-algorithm methods, we observed an attack detection rate of 78.64%, an improvement of 3.14% over the individual algorithms. Keywords: intrusion detection systems, clustering, k-means, k-medoids, SV clustering, ensemble
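A rough sketch of the ensemble idea, majority-voting the labels implied by two clustering views (k-means here; k-medoids would be analogous) together with an SVM; the features and cluster-to-label mapping are illustrative assumptions.

```python
# Majority-vote ensemble of cluster-derived labels and an SVM for attack detection.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(300, 5))
attack = rng.normal(2.5, 1, size=(300, 5))
X = np.vstack([normal, attack])
y = np.array([0] * 300 + [1] * 300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

def cluster_votes(model, X_fit, y_fit, X_new):
    labels = model.fit_predict(X_fit)
    # map each cluster to the majority training label it contains
    mapping = {c: int(round(y_fit[labels == c].mean())) for c in np.unique(labels)}
    return np.array([mapping[c] for c in model.predict(X_new)])

votes = np.vstack([
    cluster_votes(KMeans(n_clusters=2, n_init=10, random_state=0), X_tr, y_tr, X_te),
    cluster_votes(KMeans(n_clusters=2, n_init=10, random_state=1), X_tr, y_tr, X_te),
    SVC().fit(X_tr, y_tr).predict(X_te),
])
ensemble = (votes.mean(axis=0) >= 0.5).astype(int)   # simple majority vote
print("ensemble detection accuracy:", (ensemble == y_te).mean())
```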
Procedia PDF Downloads 221
19653 Prediction Modeling of Compression Properties of a Knitted Sportswear Fabric Using Response Surface Method
Authors: Jawairia Umar, Tanveer Hussain, Zulfiqar Ali, Muhammad Maqsood
Abstract:
Different knitted structures and knitting parameters play a vital role in the stretch and recovery management of compression sportswear, in addition to the materials used to generate this stretch and recovery behavior of the fabric. The present work was planned to predict the different performance indicators of a compression sportswear fabric from some basic parameters, i.e., the base yarn stitch length (polyester as the base yarn and spandex as the plating yarn are used to make a compression fabric) and the linear density of the spandex, which is a key material of any sportswear fabric. The prediction models were generated by the response surface method for performance indicators such as stretch and recovery percentage, compression generated by the garment on the body, total elongation on application of a high force, and the load generated at a certain percentage extension of the fabric. Certain physical properties of the fabric were also modeled using these two parameters. Keywords: compression, sportswear, stretch and recovery, statistical model, kikuhime
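A compact sketch of a response-surface-style model: a second-order polynomial in the two ground parameters named above (base-yarn stitch length and spandex linear density) fitted to a stretch response; all numbers and ranges are invented.

```python
# Second-order polynomial (response surface) fit for stretch percentage.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
stitch_length = rng.uniform(2.6, 3.2, 60)    # mm (assumed range)
spandex_denier = rng.uniform(20, 70, 60)     # spandex linear density (assumed range)
stretch = 80 - 15 * (stitch_length - 2.9) ** 2 + 0.2 * spandex_denier + rng.normal(0, 1, 60)

X = np.column_stack([stitch_length, spandex_denier])
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                    LinearRegression()).fit(X, stretch)
print("predicted stretch % at (2.8 mm, 40 denier):", round(rsm.predict([[2.8, 40]])[0], 1))
```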
Procedia PDF Downloads 379
19652 The Prognostic Prediction Value of Positive Lymph Nodes Numbers for the Hypopharyngeal Squamous Cell Carcinoma
Authors: Wendu Pang, Yaxin Luo, Junhong Li, Yu Zhao, Danni Cheng, Yufang Rao, Minzi Mao, Ke Qiu, Yijun Dong, Fei Chen, Jun Liu, Jian Zou, Haiyang Wang, Wei Xu, Jianjun Ren
Abstract:
We aimed to compare the prognostic prediction value of positive lymph node number (PLNN) to the American Joint Committee on Cancer (AJCC) tumor, lymph node, and metastasis (TNM) staging system for patients with hypopharyngeal squamous cell carcinoma (HPSCC). A total of 826 patients with HPSCC from the Surveillance, Epidemiology, and End Results database (2004–2015) were identified and split into two independent cohorts: training (n=461) and validation (n=365). Univariate and multivariate Cox regression analyses were used to evaluate the prognostic effects of PLNN in patients with HPSCC. We further applied six Cox regression models to compare the survival predictive values of the PLNN and AJCC TNM staging system. PLNN showed a significant association with overall survival (OS) and cancer-specific survival (CSS) (P < 0.001) in both univariate and multivariable analyses, and was divided into three groups (PLNN 0, PLNN 1-5, and PLNN>5). In the training cohort, multivariate analysis revealed that the increased PLNN of HPSCC gave rise to significantly poor OS and CSS after adjusting for age, sex, tumor size, and cancer stage; this trend was also verified by the validation cohort. Additionally, the survival model incorporating a composite of PLNN and TNM classification (C-index, 0.705, 0.734) performed better than the PLNN and AJCC TNM models. PLNN can serve as a powerful survival predictor for patients with HPSCC and is a surrogate supplement for cancer staging systems.Keywords: hypopharyngeal squamous cell carcinoma, positive lymph nodes number, prognosis, prediction models, survival predictive values
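A hedged sketch of the survival-modelling step using the lifelines package: a Cox model with a PLNN group variable and age, scored by the concordance index; the generated records are placeholders, not SEER data.

```python
# Cox proportional hazards model with a PLNN group covariate (illustrative data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
plnn_group = rng.integers(0, 3, n)            # 0: PLNN 0, 1: PLNN 1-5, 2: PLNN > 5
age = rng.normal(62, 9, n)
risk = 0.6 * plnn_group + 0.02 * (age - 62)
time = rng.exponential(60 / np.exp(risk))     # months; shorter survival at higher risk
event = (rng.random(n) < 0.7).astype(int)     # 1 = death observed, 0 = censored

df = pd.DataFrame({"time": time, "event": event, "plnn_group": plnn_group, "age": age})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])
print("C-index:", round(cph.concordance_index_, 3))
```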
Procedia PDF Downloads 154
19651 An Interpretable Data-Driven Approach for the Stratification of the Cardiorespiratory Fitness
Authors: D.Mendes, J. Henriques, P. Carvalho, T. Rocha, S. Paredes, R. Cabiddu, R. Trimer, R. Mendes, A. Borghi-Silva, L. Kaminsky, E. Ashley, R. Arena, J. Myers
Abstract:
The exploration of clinically relevant predictive models continues to be an important pursuit. Cardiorespiratory fitness (CRF) carries vital clinical information, and as such its accurate prediction is of high importance. Therefore, the aim of the current study was to develop a data-driven model, based on computational intelligence techniques and, in particular, clustering approaches, to predict CRF. Two prediction models were implemented and compared: 1) the traditional Wasserman/Hansen equations; and 2) an interpretable clustering approach. Data used for this analysis were from the 'FRIEND - Fitness Registry and the Importance of Exercise: The National Data Base'; in the present study, a subset of 10690 apparently healthy individuals was utilized. The accuracy of the models was assessed through the computation of sensitivity, specificity, and geometric mean values. The results show the superiority of the clustering approach in the accurate estimation of CRF (i.e., maximal oxygen consumption). Keywords: cardiorespiratory fitness, data-driven models, knowledge extraction, machine learning
Procedia PDF Downloads 286
19650 On Block Vandermonde Matrix Constructed from Matrix Polynomial Solvents
Authors: Malika Yaici, Kamel Hariche
Abstract:
In control engineering, systems described by matrix fractions are studied through the properties of block roots, also called solvents. These solvents are usually dealt with in block Vandermonde matrix form. Inverses and determinants of Vandermonde matrices and block Vandermonde matrices are used in solving numerical analysis problems in many domains but require costly computations. Even though Vandermonde matrices are well known and there are many methods, generally based on interpolation techniques, to compute their inverse and determinant, methods to compute the inverse and determinant of a block Vandermonde matrix have not been well studied. In this paper, some properties of these matrices and iterative algorithms to compute the determinant and the inverse of a block Vandermonde matrix are given. These methods are deduced from the partitioned matrix inversion and determinant computation methods. Due to their great size, parallelization may be a solution to reduce the computational cost, so a parallelization of these algorithms is proposed and validated by a comparison using algorithmic complexity. Keywords: block Vandermonde matrix, solvents, matrix polynomial, matrix inverse, matrix determinant, parallelization
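A small numerical sketch of the object under study: building a block Vandermonde matrix from two solvents of a matrix polynomial and checking its determinant and inverse with NumPy; the paper's iterative partitioned algorithms are not reproduced here.

```python
# Block Vandermonde matrix V whose block row i stacks R_j**i for each solvent R_j.
import numpy as np

def block_vandermonde(solvents):
    rows = []
    for i in range(len(solvents)):
        rows.append(np.hstack([np.linalg.matrix_power(R, i) for R in solvents]))
    return np.vstack(rows)

R1 = np.array([[1.0, 1.0], [0.0, 2.0]])   # example solvents (assumed block roots)
R2 = np.array([[3.0, 0.0], [1.0, 4.0]])
V = block_vandermonde([R1, R2])
print("det V =", round(np.linalg.det(V), 3))
print("V^-1 V is identity:", np.allclose(np.linalg.inv(V) @ V, np.eye(V.shape[0])))
```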
Procedia PDF Downloads 240
19649 Model Averaging in a Multiplicative Heteroscedastic Model
Authors: Alan Wan
Abstract:
In recent years, the body of literature on frequentist model averaging in statistics has grown significantly. Most of this work focuses on models with different mean structures but leaves out the variance consideration. In this paper, we consider a regression model with multiplicative heteroscedasticity and develop a model averaging method that combines maximum likelihood estimators of unknown parameters in both the mean and variance functions of the model. Our weight choice criterion is based on a minimisation of a plug-in estimator of the model average estimator's squared prediction risk. We prove that the new estimator possesses an asymptotic optimality property. Our investigation of finite-sample performance by simulations demonstrates that the new estimator frequently exhibits very favourable properties compared to some existing heteroscedasticity-robust model average estimators. The model averaging method hedges against the selection of very bad models and serves as a remedy to variance function misspecification, which often discourages practitioners from modeling heteroscedasticity altogether. The proposed model average estimator is applied to the analysis of two real data sets.Keywords: heteroscedasticity-robust, model averaging, multiplicative heteroscedasticity, plug-in, squared prediction risk
Procedia PDF Downloads 385
19648 A Systems-Level Approach towards Transition to Electrical Vehicles
Authors: Mayuri Roy Choudhury, Deepti Paul
Abstract:
Many states in the United States are aiming for high renewable energy targets by the year 2045. In order to achieve this goal, they must transition to Electrical Vehicles (EVs). We first applied the Multi-Level Perspective framework to describe the inter-disciplinary complexities associated with the transition to EVs. Thereafter, we addressed these complexities by creating an inter-disciplinary policy framework that uses data science algorithms to create evidence-based policies in favor of EVs. Our policy framework uses a systems-level approach, as it addresses transitions to EVs from a technology, economic, business and social perspective. By systems-level we mean approaching a problem from a multi-disciplinary perspective. Our systems-level approach could be a beneficial decision-making tool for a diverse set of stakeholders such as engineers, entrepreneurs, researchers, and policymakers. In addition, it will add value to the literature on electrical vehicles, sustainable energy, energy economics, and management, as well as efficient policymaking. Keywords: transition, electrical vehicles, systems-level, algorithms
Procedia PDF Downloads 228
19647 Estimation of Functional Response Model by Supervised Functional Principal Component Analysis
Authors: Hyon I. Paek, Sang Rim Kim, Hyon A. Ryu
Abstract:
In functional linear regression, one typical problem is to reduce dimension. Compared with multivariate linear regression, functional linear regression is regarded as an infinite-dimensional case, and the main task is to reduce dimensions of functional response and functional predictors. One common approach is to adapt functional principal component analysis (FPCA) on functional predictors and then use a few leading functional principal components (FPC) to predict the functional model. The leading FPCs estimated by the typical FPCA explain a major variation of the functional predictor, but these leading FPCs may not be mostly correlated with the functional response, so they may not be significant in the prediction for response. In this paper, we propose a supervised functional principal component analysis method for a functional response model with FPCs obtained by considering the correlation of the functional response. Our method would have a better prediction accuracy than the typical FPCA method.Keywords: supervised, functional principal component analysis, functional response, functional linear regression
Procedia PDF Downloads 75
19646 Control of a Quadcopter Using Genetic Algorithm Methods
Authors: Mostafa Mjahed
Abstract:
This paper concerns the control of a nonlinear system using two different methods: a reference model and a genetic algorithm. The quadcopter is a nonlinear, unstable system belonging to the class of aerial robots. It consists of four rotors placed at the ends of a cross, with the control circuit located at the center of this cross. Its motion is governed by six degrees of freedom: three rotations around three axes (roll, pitch and yaw) and three spatial translations. The control of such a system is complex because of the nonlinearity of its dynamic representation and the number of parameters it involves. Numerous studies have been developed to model and stabilize such systems. The classical PID and LQ correction methods are widely used. While these have the advantage of being simple because they are linear, they have the drawback of requiring a linear model for synthesis. This also makes the resulting control laws complex, because they must be extended over the whole flight envelope of the quadcopter. Note that, while classical design methods are widely used to control aeronautical systems, artificial intelligence methods such as the genetic algorithm technique receive little attention. In this paper, we compare two PID design methods. First, the parameters of the PID are calculated according to a reference model. In a second phase, these parameters are established using genetic algorithms. By reference model, we mean that the corrected system behaves according to a reference system imposed by some specifications: settling time, zero overshoot, etc. Inspired by Darwin's theory of natural evolution advocating the survival of the fittest, John Holland developed this evolutionary algorithm. The genetic algorithm (GA) possesses three basic operators: selection, crossover and mutation. We start the iterations with an initial population, and each member of this population is evaluated through a fitness function. Our purpose is to correct the behavior of the quadcopter around the three axes (roll, pitch and yaw) with three PD controllers; for the altitude, we adopt a PID controller. Keywords: quadcopter, genetic algorithm, PID, fitness, model, control, nonlinear system
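A toy illustration of the GA route, assuming a double-integrator stand-in for one attitude axis and a PD controller whose gains are evolved against a time-weighted error fitness; the plant, GA settings and gain ranges are simplifications.

```python
# Evolve (Kp, Kd) of a PD controller for a simplified attitude axis with a basic GA.
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.01, 3.0

def step_cost(gains):
    kp, kd = gains
    theta, omega, cost = 0.0, 0.0, 0.0
    for k in range(int(T / dt)):
        err = 1.0 - theta                # unit step reference on the angle
        u = kp * err - kd * omega        # PD control torque
        omega += u * dt                  # double-integrator axis dynamics
        theta += omega * dt
        cost += (k * dt) * abs(err) * dt  # time-weighted absolute error (ITAE-like)
    return cost

pop = rng.uniform([0.1, 0.1], [20.0, 10.0], size=(30, 2))
for gen in range(40):
    fitness = np.array([step_cost(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:10]]                    # selection: keep the best
    children = parents[rng.integers(0, 10, (20, 2)), [0, 1]]   # crossover: mix genes between parents
    children += rng.normal(0, 0.3, children.shape)             # mutation
    pop = np.vstack([parents, np.clip(children, 0.01, None)])
best = pop[np.argmin([step_cost(ind) for ind in pop])]
print("best (Kp, Kd):", best.round(2))
```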
Procedia PDF Downloads 431
19645 Wind Turbine Wake Prediction and Validation under a Stably-Stratified Atmospheric Boundary Layer
Authors: Yilei Song, Linlin Tian, Ning Zhao
Abstract:
Turbulence energetics and structures in the wake of large-scale wind turbines under the stably-stratified atmospheric boundary layer (SABL) can be complicated due to the presence of low-level jets (LLJs), a region of higher wind speeds than the geostrophic wind speed. With a modified one-k-equation, eddy viscosity model specified for atmospheric flows as the sub-grid scale (SGS) model, a realistic atmospheric state of the stable ABL is well reproduced by large-eddy simulation (LES) techniques. Corresponding to the precursor stably stratification, the detailed wake properties of a standard 5-MW wind turbine represented as an actuator line model are provided. An engineering model is proposed for wake prediction based on the simulation statistics and gets validated. Results confirm that the proposed wake model can provide good predictions for wind turbines under the SABL.Keywords: large-eddy simulation, stably-stratified atmospheric boundary layer, wake model, wind turbine wake
Procedia PDF Downloads 174
19644 Development of Structural Deterioration Models for Flexible Pavement Using Traffic Speed Deflectometer Data
Authors: Sittampalam Manoharan, Gary Chai, Sanaul Chowdhury, Andrew Golding
Abstract:
The primary objective of this paper is to present a simplified approach to developing a structural deterioration model for flexible pavements using traffic speed deflectometer data. Maintaining assets only to meet functional performance is neither economical nor sustainable in the long term; it would end up requiring much larger investments from road agencies and extra costs for road users. Performance models must include both structural and functional predictive capabilities in order to assess the needs and the time frame of those needs. As such, structural modelling plays a vital role in the prediction of pavement performance. The structural condition is important for the prediction of the remaining life and overall health of a road network and also has a major influence on the valuation of road pavement. Therefore, the structural deterioration model is a critical input into a pavement management system for accurately predicting pavement rehabilitation needs. The Traffic Speed Deflectometer (TSD) is a vehicle-mounted Doppler laser system that is capable of continuously measuring the structural bearing capacity of a pavement whilst moving at traffic speeds. The device’s high accuracy, high speed, and continuous deflection profiles are useful for network-level applications such as predicting road rehabilitation needs and remaining structural service life. The methodology adopted in this model utilizes time-series TSD maximum deflection (D0) data in conjunction with rutting, rutting progression, pavement age, subgrade strength and equivalent standard axle (ESA) data. Regression analyses were then undertaken to establish a correlation equation of structural deterioration as a function of rutting, pavement age, seal age and equivalent standard axles (ESA). This study developed a simple structural deterioration model which will make it possible to incorporate available TSD structural data in a pavement management system for developing network-level pavement investment strategies. Therefore, the available funding can be used effectively to minimize the whole-of-life cost of the road asset and also improve pavement performance. This study will contribute to narrowing the knowledge gap in structural data usage in network-level investment analysis and provide a simple methodology for using structural data effectively in the investment decision-making process, helping road agencies to manage ageing road assets. Keywords: adjusted structural number (SNP), maximum deflection (D0), equivalent standard axle (ESA), traffic speed deflectometer (TSD)
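A hedged sketch of the regression step, relating TSD maximum deflection D0 to rutting, pavement age, seal age and cumulative ESA by ordinary least squares; the records and coefficients below are synthetic, not the study's results.

```python
# Ordinary least squares relating D0 to pavement condition and loading variables.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 400
rutting = rng.uniform(2, 20, n)          # mm
pavement_age = rng.uniform(1, 40, n)     # years
seal_age = rng.uniform(0, 15, n)         # years
esa = rng.uniform(0.05, 5.0, n)          # million ESA
d0 = (150 + 6 * rutting + 2.5 * pavement_age + 1.5 * seal_age
      + 20 * np.log(esa + 1) + rng.normal(0, 15, n))

X = np.column_stack([rutting, pavement_age, seal_age, esa])
model = LinearRegression().fit(X, d0)
print("coefficients:", model.coef_.round(2), "intercept:", round(model.intercept_, 1))
print("R2:", round(r2_score(d0, model.predict(X)), 3))
```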
Procedia PDF Downloads 151