Search results for: wind data
24991 Emerging Technology for Business Intelligence Applications
Authors: Hsien-Tsen Wang
Abstract:
Business Intelligence (BI) has long helped organizations make informed decisions based on data-driven insights and gain competitive advantages in the marketplace. In the past two decades, businesses witnessed not only a dramatically increasing volume and heterogeneity of business data but also the emergence of new technologies, such as Artificial Intelligence (AI), the Semantic Web (SW), Cloud Computing, and Big Data. It is plausible that the convergence of these technologies would bring more value out of business data by establishing linked data frameworks and connecting them in ways that enable advanced analytics and improved data utilization. In this paper, we first review and summarize current BI applications and methodology. Emerging technologies that can be integrated into BI applications are then discussed. Finally, we conclude with a proposed synergy framework that aims at achieving a more flexible, scalable, and intelligent BI solution.
Keywords: business intelligence, artificial intelligence, semantic web, big data, cloud computing
Procedia PDF Downloads 94
24990 In-Farm Wood Gasification Energy Micro-Generation System in Brazil: A Monte Carlo Viability Simulation
Authors: Erich Gomes Schaitza, Antônio Francisco Savi, Glaucia Aparecida Prates
Abstract:
The penetration of renewable energy into the electricity supply in Brazil is high, one of the highest in the world. Centralized hydroelectric generation is the main source of energy, followed by biomass and wind. Surprisingly, mini- and micro-generation are negligible, with fewer than 2,000 connections to the national grid. In 2015, a new regulatory framework was put in place to change this situation. In the agricultural sector, the framework was complemented by the offer of low-interest-rate loans for in-farm renewable generation. Brazil proposed to more than double its area of planted forests as part of its INDC (Intended Nationally Determined Contribution) to the U.N. Framework Convention on Climate Change (UNFCCC). This is an ambitious target which will be achieved only if forests are attractive to farmers. Therefore, this paper analyses whether planting forests for in-farm energy generation with a woodchip gasifier is economically viable for micro-generation under the new framework and whether such plantations could be an economic driver for forest plantation. At first, a static case was analyzed with data from Eucalyptus plantations on five farms. Then, a broader analysis was developed with the use of the Monte Carlo technique. Planting short-rotation forests to generate energy could be a viable alternative, and the low-interest loans contribute to that. There are still barriers to such systems, such as the absence of a mature market for small-scale equipment and of a reference network of good practices and examples.
Keywords: biomass, distributed generation, small-scale, Monte Carlo
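As an illustration of how such a viability assessment can be set up, the sketch below runs a minimal Monte Carlo simulation of project NPV in Python; all distributions, prices, and costs are assumed placeholder values, not figures from the study.

```python
import random

def project_npv(discount_rate, horizon_years, capex, energy_mwh_per_year,
                energy_price, om_cost):
    """Net present value of a small in-farm gasification project."""
    npv = -capex
    for year in range(1, horizon_years + 1):
        cash_flow = energy_mwh_per_year * energy_price - om_cost
        npv += cash_flow / (1.0 + discount_rate) ** year
    return npv

def monte_carlo_viability(n_runs=10_000):
    viable = 0
    for _ in range(n_runs):
        # Draw uncertain inputs from assumed (illustrative) distributions.
        price = random.triangular(55.0, 95.0, 70.0)      # USD/MWh
        capex = random.uniform(80_000.0, 120_000.0)      # USD
        om = random.uniform(4_000.0, 8_000.0)            # USD/year
        energy = random.normalvariate(400.0, 40.0)       # MWh/year
        if project_npv(0.08, 15, capex, energy, price, om) > 0:
            viable += 1
    return viable / n_runs

if __name__ == "__main__":
    print(f"Probability of a positive NPV: {monte_carlo_viability():.1%}")
```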
Procedia PDF Downloads 284
24989 Using Equipment Telemetry Data for Condition-Based Maintenance Decisions
Authors: John Q. Todd
Abstract:
Given that modern equipment can provide comprehensive health, status, and error-condition data via built-in sensors, maintenance organizations have a new and valuable source of insight to take advantage of. This presentation shows what these data payloads might look like and how they can be filtered, visualized, calculated into metrics, used for machine learning, and turned into alerts for further action.
Keywords: condition-based maintenance, equipment data, metrics, alerts
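A minimal sketch of the kind of metric-and-alert pipeline described above, assuming a simple rolling-mean rule; the window, threshold, and sample readings are illustrative only.

```python
from collections import deque

def rolling_alerts(samples, window=20, threshold=3.0):
    """Flag telemetry samples that deviate strongly from a rolling mean.

    `samples` is an iterable of numeric sensor readings; `threshold` is the
    number of standard deviations beyond which an alert is raised.
    """
    history = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(samples):
        if len(history) == window:
            mean = sum(history) / window
            std = (sum((x - mean) ** 2 for x in history) / window) ** 0.5
            if std > 0 and abs(value - mean) > threshold * std:
                alerts.append((i, value))
        history.append(value)
    return alerts

if __name__ == "__main__":
    readings = [70.0 + 0.5 * (i % 3) for i in range(50)] + [95.0] + [70.0] * 10
    print(rolling_alerts(readings))   # flags the spike at index 50
```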
Procedia PDF Downloads 185
24988 Ethics Can Enable Open Source Data Research
Authors: Dragana Calic
Abstract:
The openness, availability, and sheer volume of big data have provided what some regard as an invaluable and rich dataset. Researchers, businesses, advertising agencies, and medical institutions, to name only a few, collect, share, and analyze this data to support their processes and decision making. However, there are important ethical considerations associated with the use of big data. The rapidly evolving nature of online technologies has overtaken many of the legislative, privacy, and ethical frameworks and principles that exist. For example, should we obtain consent to use people’s online data, and under what circumstances can privacy considerations be overridden? Current guidance on how to appropriately and ethically handle big data is inconsistent. Consequently, this paper focuses on two quite distinct but related ethical considerations that are at the core of the use of big data for research purposes: empowering the producers of data and empowering researchers who want to study big data. The first consideration focuses on informed consent, which is at the core of empowering producers of data. In this paper, we discuss some of the complexities associated with informed consent and consider studies of producers’ perceptions to inform research ethics guidelines and practice. The second consideration focuses on the researcher. Similarly, we explore studies that focus on researchers’ perceptions and experiences.
Keywords: big data, ethics, producers’ perceptions, researchers’ perceptions
Procedia PDF Downloads 282
24987 Hybrid Reliability-Similarity-Based Approach for Supervised Machine Learning
Authors: Walid Cherif
Abstract:
Data mining has, over recent years, seen big advances because of the spread of the internet, which generates a tremendous volume of data every day, and also because of the immense advances in technologies which facilitate the analysis of these data. In particular, classification techniques are a subdomain of data mining which determines the group to which each data instance belongs within a given dataset. They are used to classify data into different classes according to desired criteria. Generally, a classification technique is either statistical or machine learning based. Each type of these techniques has its own limits. Nowadays, data are becoming increasingly heterogeneous; consequently, current classification techniques encounter many difficulties. This paper defines new measure functions to quantify the resemblance between instances and then combines them in a new approach which differs from existing algorithms by its reliability computations. Results of the proposed approach exceeded most common classification techniques, with an F-measure exceeding 97% on the Iris dataset.
Keywords: data mining, knowledge discovery, machine learning, similarity measurement, supervised classification
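To make the idea of classifying by resemblance concrete, the sketch below implements a basic similarity-based (nearest-instance) classifier on the Iris dataset; it is not the paper's reliability computation, only an illustration of the general approach.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def similarity(a, b):
    """A simple resemblance measure: inverse of Euclidean distance."""
    return 1.0 / (1.0 + np.linalg.norm(a - b))

def classify(x, X_train, y_train):
    """Assign x to the class of its most similar training instance."""
    scores = [similarity(x, xt) for xt in X_train]
    return y_train[int(np.argmax(scores))]

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
y_pred = [classify(x, X_tr, y_tr) for x in X_te]
print("macro F-measure:", round(f1_score(y_te, y_pred, average="macro"), 3))
```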
Procedia PDF Downloads 462
24986 Relation of Black Carbon Aerosols and Atmospheric Boundary Layer Height during Wet Removal Processes over a Semi Urban Location
Authors: M. Ashok Williams, T. V. Lakshmi Kumar
Abstract:
The life cycle of black carbon aerosols depends on their physical removal from the atmosphere during precipitation events. Black Carbon (BC) mass concentration has been analysed during rainy and non-rainy days of the Northeast (NE) Monsoon months of the years 2015 and 2017 over a semi-urban environment near Chennai (12.81 N, 80.03 E), located on the east coast of India. BC, measured using an Aethalometer (AE-31), has been related to the atmospheric boundary layer height (BLH) obtained from the ERA-Interim reanalysis data during rainy and non-rainy days on a monthly mean basis to understand the wet removal of BC over the study location. The study reveals that boundary layer height has a profound effect on the BC concentration on rainy and non-rainy days. The BC concentration in the night time is lower on rainy days than on non-rainy days owing to washout, while the boundary layer height remains nearly the same on rainy and non-rainy days. In the daytime, on the other hand, the BC concentration remains nearly the same on rainy and non-rainy days, whereas the boundary layer height is lower on rainy days than on non-rainy days. This reveals that in the daytime, lower boundary layer heights compensate for the wet removal effect on BC concentration on rainy days. A quantitative relation is found between the product of BC and BLH during rainy and non-rainy days, which indicates the extent of redistribution of BC during non-rainy days when compared to rainy days. Further work on the wet removal processes of BC is in progress, considering individual rain events and other related parameters like wind speed.
Keywords: black carbon aerosols, atmospheric boundary layer, scavenging processes, tropical coastal location
Procedia PDF Downloads 151
24985 Fuelwood Heating, Felling, Energy Renewing in Total Fueling of Fuelwood, Renewable Technologies
Authors: Adeiza Matthew, Oluwamishola Abubakar
Abstract:
Fuelwood is a traditional and renewable source of energy that can have both positive and negative impacts. Adopting sustainable practices for its collection, transportation, and use, and investing in renewable technologies, can help mitigate the negative effects and provide a clean and reliable source of energy, improve living standards, and support economic development. For example, solar energy can be used to generate electricity, heat homes and water, and can even be used for cooking. Wind energy can be used to generate electricity, and geothermal energy can be used for heating and cooling. Biogas can be produced from waste products such as animal manure, sewage, and organic kitchen waste and can be used for cooking and lighting.
Keywords: calorific, BTU, wood moisture content, density of wood
Procedia PDF Downloads 103
24984 Seismic Data Scaling: Uncertainties, Potential and Applications in Workstation Interpretation
Authors: Ankur Mundhra, Shubhadeep Chakraborty, Y. R. Singh, Vishal Das
Abstract:
Seismic data scaling affects the dynamic range of the data, and with the present-day lower cost of storage and higher reliability of hard disk data, scaling is not normally suggested. However, when dealing with data of different vintages, which may have been processed in 16 bits or even 8 bits and need to be worked alongside 32-bit data, scaling is performed. Scaling also amplifies low-amplitude events in deeper regions which would otherwise disappear because high-amplitude shallow events saturate the amplitude scale. We have focused on the significance of scaling data to aid interpretation. This study elucidates a proper seismic loading procedure in workstations without using the default preset parameters available in most software suites. Differences and distributions of amplitude values at different depths are probed in this exercise. Proper loading parameters are identified, and the associated steps that need to be taken care of while loading data are explained. Finally, the exercise interprets the uncertainties which might arise when correlating scaled and unscaled versions of seismic data with synthetics. As the seismic well tie correlates seismic reflection events with well markers, it is used in our study to identify regions which are enhanced and/or affected by the scaling parameter(s).
Keywords: clipping, compression, resolution, seismic scaling
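The sketch below illustrates, with assumed synthetic amplitudes, how scaling a trace to a lower bit depth lets a strong shallow event control the scale while a weak deep event loses precision or vanishes.

```python
import numpy as np

def rescale_to_int(trace, n_bits):
    """Scale a floating-point trace into an n-bit signed integer range.

    Illustrative only: a single global scalar is chosen from the trace
    maximum, so strong shallow events control the scale and weak deep
    events lose precision (or clip) at low bit depths.
    """
    max_val = 2 ** (n_bits - 1) - 1
    scale = max_val / np.max(np.abs(trace))
    return np.clip(np.round(trace * scale), -max_val - 1, max_val).astype(int)

# Synthetic trace: a strong shallow reflection and a weak deep one.
trace = np.zeros(1000)
trace[100] = 1.0       # shallow, high amplitude
trace[800] = 0.002     # deep, low amplitude

for bits in (8, 16, 32):
    scaled = rescale_to_int(trace, bits)
    print(f"{bits:2d}-bit: shallow={scaled[100]}, deep={scaled[800]}")
```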
Procedia PDF Downloads 467
24983 Site Analysis’ Importance as a Valid Factor in Building Design
Authors: Mekwa Eme, Anya Chukwuma
Abstract:
The act of evaluating a particular site physically and socially in order to create a good design solution that addresses the physical and interior environment of the location is known as architectural site analysis. This paper describes site analysis as a useful design component. According to the introduction and supporting research, site evaluation and analysis are crucial to good design in terms of topography, orientation, site size, accessibility, rainfall, wind direction, and times of sunrise and sunset. Methodology: Both quantitative and qualitative analyses are used in this paper, drawing on primary and secondary data collection. The information was gathered via the case study approach, already published literature, journals, the internet, a local poll, oral interviews, inquiries, and in-person interviews. The purpose is to clarify the benefits of site analysis for the design process and its implications for the working or building stage. Results: Each site's criteria are unique in terms of factors like soil, plants, trees, accessibility, topography, and security. This will make it easier for the architect and environmentalist to decide on the concept, shape, and supporting structures of the design. Site analysis is crucial because, before any design work is done, the nature of the target location is determined through site visits and research. The location, contours, site features, and accessibility are just a few of the topics included in this site study. Site analysis is also a key component of architectural education, helping students and working architects understand the nature of the site they will be working on. The building's orientation, the site's circulation, and the sustainability of the site can all be determined with thorough research of the site's features.
Keywords: analysis, climate, statistics, design
Procedia PDF Downloads 247
24982 Association of Social Data as a Tool to Support Government Decision Making
Authors: Diego Rodrigues, Marcelo Lisboa, Elismar Batista, Marcos Dias
Abstract:
Based on data on child labor, this work raises questions about how to understand and locate the factors that make up child labor rates, and which properties are important for analyzing these cases. Using data mining techniques to discover valid patterns in Brazilian social databases, data on child labor in the State of Tocantins (located in northern Brazil, with a territory of 277,000 km2 comprising 139 counties) were evaluated. This work aims to detect factors that are determinant for the practice of child labor and their relationships with financial, educational, regional, and social indicators, generating information that is not explicit in the government database, thus enabling better monitoring and updating of policies for this purpose.
Keywords: social data, government decision making, association of social data, data mining
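As a sketch of the kind of pattern discovery involved, the example below computes support and confidence for simple association rules over hypothetical county-level indicators; the indicator names and records are illustrative, not taken from the Tocantins data.

```python
from itertools import combinations

# Hypothetical county-level records: each set holds the binary indicators
# present for that county (names are illustrative, not from the study).
records = [
    {"low_school_attendance", "low_income", "child_labor"},
    {"low_income", "rural", "child_labor"},
    {"low_school_attendance", "rural"},
    {"low_school_attendance", "low_income", "rural", "child_labor"},
    {"low_income"},
]

def support(itemset):
    """Fraction of records containing every item of the itemset."""
    return sum(itemset <= r for r in records) / len(records)

def confidence(antecedent, consequent):
    """P(consequent | antecedent) estimated from the records."""
    return support(antecedent | consequent) / support(antecedent)

# Enumerate simple one-item rules predicting child labor.
items = {i for r in records for i in r} - {"child_labor"}
for item in sorted(items):
    a, c = {item}, {"child_labor"}
    print(f"{item} -> child_labor: "
          f"support={support(a | c):.2f}, confidence={confidence(a, c):.2f}")
```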
Procedia PDF Downloads 368
24981 A Particle Filter-Based Data Assimilation Method for Discrete Event Simulation
Authors: Zhi Zhu, Boquan Zhang, Tian Jing, Jingjing Li, Tao Wang
Abstract:
Data assimilation is a model- and data-driven hybrid method that dynamically fuses new observation data with a numerical model to iteratively approach the real system state. It is widely used in state prediction and parameter inference for continuous systems. Because of the discrete event system's non-linearity and non-Gaussianity, the traditional Kalman filter, which is based on linear and Gaussian assumptions, cannot perform data assimilation for such systems, so the particle filter has gradually become a technical approach for discrete event simulation data assimilation. Hence, we proposed a particle filter-based discrete event simulation data assimilation method and took an unmanned aerial vehicle (UAV) maintenance service system as a proof of concept to conduct simulation experiments. The experimental results showed that the filtered state data are closer to the real state of the system, which verifies the effectiveness of the proposed method. This research can provide a reference framework for the data assimilation process of other complex nonlinear systems, such as discrete-time and agent-based simulations.
Keywords: discrete event simulation, data assimilation, particle filter, model and data-driven
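A minimal bootstrap particle filter is sketched below to illustrate the predict-weight-resample cycle; the one-dimensional random-walk model and noise levels are assumptions, not the UAV maintenance service system used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(observations, n_particles=500,
                              process_std=1.0, obs_std=2.0):
    """One-dimensional bootstrap particle filter (sequential importance
    resampling) with an assumed random-walk state model."""
    particles = rng.normal(0.0, 5.0, n_particles)   # initial prior
    estimates = []
    for z in observations:
        # Predict: propagate particles through the process model.
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Update: weight particles by the observation likelihood.
        weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Resample: draw particles in proportion to their weights.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

# Synthetic experiment: a hidden random walk observed with noise.
true_state = np.cumsum(rng.normal(0.0, 1.0, 50))
obs = true_state + rng.normal(0.0, 2.0, 50)
est = bootstrap_particle_filter(obs)
rmse = float(np.sqrt(np.mean((est - true_state) ** 2)))
print("RMSE of filtered estimate:", round(rmse, 3))
```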
Procedia PDF Downloads 11
24980 Outlier Detection in Stock Market Data Using Tukey Method and Wavelet Transform
Authors: Sadam Alwadi
Abstract:
Outlier values are a problem that frequently occurs in the data observation or recording process; thus, data imputation has become an essential matter. In this work, methods described in prior work are used to detect outlier values in a collection of stock market data. In order to implement the detection and find some solutions that may be helpful for investors, real closing price data were obtained from the Amman Stock Exchange (ASE). The Tukey method and the Maximal Overlap Discrete Wavelet Transform (MODWT) are used to detect and impute the outlier values.
Keywords: outlier values, imputation, stock market data, detecting, estimation
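The Tukey fence rule can be sketched as follows; the price series is synthetic, and the median-replacement imputation shown is just one simple option, not necessarily the estimation used in the study.

```python
import numpy as np

def tukey_outliers(prices, k=1.5):
    """Flag values outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(prices, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    mask = (prices < lower) | (prices > upper)
    return np.where(mask)[0], (lower, upper)

# Synthetic closing prices with two suspicious values.
prices = np.array([2.10, 2.12, 2.09, 2.15, 2.11, 5.80, 2.13, 2.08, 0.40, 2.14])
idx, fences = tukey_outliers(prices)
print("fences:", fences, "outlier indices:", idx)

# A simple imputation: replace detected outliers with the series median.
cleaned = prices.copy()
cleaned[idx] = np.median(prices)
print("cleaned series:", cleaned)
```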
Procedia PDF Downloads 80
24979 PEINS: A Generic Compression Scheme Using Probabilistic Encoding and Irrational Number Storage
Authors: P. Jayashree, S. Rajkumar
Abstract:
With social networks and smart devices generating a multitude of data, effective data management is the need of the hour for networks and cloud applications. Some applications need effective storage while others need effective communication over networks, and data reduction comes as a handy solution to meet both requirements. Most data compression techniques are based on data statistics and may result in either lossy or lossless data reduction. Though lossy reduction produces better compression ratios than lossless methods, many applications require data accuracy and miniature details to be preserved. A variety of data compression algorithms exists in the literature for different forms of data, such as text, image, and multimedia data. In the proposed work, a generic progressive compression algorithm based on probabilistic encoding, called PEINS, is projected as an enhancement over the irrational number storage coding technique to cater to the storage issues of increasing data volumes as a cost-effective solution, which also offers data security as a secondary outcome to some extent. The proposed work reveals cost effectiveness in terms of a better compression ratio with no deterioration in compression time.
Keywords: compression ratio, generic compression, irrational number storage, probabilistic encoding
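Since the abstract does not detail the PEINS encoder itself, the sketch below only illustrates the entropy bound that any probabilistic (frequency-driven) encoder works against, and the best-case compression ratio it implies for a given byte stream.

```python
import math
from collections import Counter

def entropy_bits_per_symbol(data: bytes) -> float:
    """Empirical (zeroth-order) entropy: the best average code length, in
    bits per symbol, achievable by an encoder driven only by symbol
    frequencies."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def best_case_compression_ratio(data: bytes) -> float:
    """Original size divided by the entropy-limited compressed size."""
    return 8.0 / entropy_bits_per_symbol(data)

sample = b"abracadabra " * 100
print(f"entropy: {entropy_bits_per_symbol(sample):.2f} bits/symbol")
print(f"best-case compression ratio: {best_case_compression_ratio(sample):.2f}")
```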
Procedia PDF Downloads 292
24978 IoT Device Cost Effective Storage Architecture and Real-Time Data Analysis/Data Privacy Framework
Authors: Femi Elegbeleye, Omobayo Esan, Muienge Mbodila, Patrick Bowe
Abstract:
This paper focuses on a cost-effective storage architecture using a fog and cloud data storage gateway and presents the design of a framework for a data privacy model and for data analytics with real-time analysis using machine learning methods. The paper begins with the system analysis, the system architecture and its component design, and the overall system operations. The results obtained on the data privacy model show that when two or more data privacy models are combined, the data enjoy stronger privacy. The fog storage gateway also has several advantages over traditional cloud storage: our results show that fog offers reduced latency/delay, lower bandwidth consumption, and lower energy usage when compared with cloud storage, and therefore fog storage helps to lessen excessive cost. The paper dwells on the system descriptions, focusing on the research design and the framework design for the data privacy model, data storage, and real-time analytics. It also presents the major system components and their framework specifications. Lastly, the overall research system architecture is shown, with its structure and interrelationships.
Keywords: IoT, fog, cloud, data analysis, data privacy
Procedia PDF Downloads 96
24977 Comparison of Selected Pier-Scour Equations for Wide Piers Using Field Data
Authors: Nordila Ahmad, Thamer Mohammad, Bruce W. Melville, Zuliziana Suif
Abstract:
Current methods for predicting local scour at wide bridge piers were developed on the basis of laboratory studies, and very few scour predictions have been tested against field data. A laboratory wide-pier scour equation from previous findings is presented together with field data. A wide range of field data was used, consisting of both live-bed and clear-water scour. A method for assessing the quality of the data was developed and applied to the data set. Three other wide-pier scour equations from the literature were used to compare the performance of each predictive method. The best-performing scour equation was identified using statistical analysis. Comparisons of computed and observed scour depths indicate that the equation from the previous publication produced the smallest discrepancy ratio and RMSE value when compared with the large amount of laboratory and field data.
Keywords: field data, local scour, scour equation, wide piers
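A sketch of the two comparison metrics mentioned above, computed for made-up scour depths; the discrepancy ratio is taken here as the mean of computed-to-observed depth ratios, which is one common definition.

```python
import numpy as np

def discrepancy_ratio(computed, observed):
    """Mean of computed/observed scour depths; 1.0 means unbiased prediction."""
    return float(np.mean(np.asarray(computed) / np.asarray(observed)))

def rmse(computed, observed):
    """Root-mean-square error between computed and observed depths."""
    return float(np.sqrt(np.mean((np.asarray(computed) - np.asarray(observed)) ** 2)))

# Illustrative (made-up) scour depths in metres for two candidate equations.
observed   = np.array([1.8, 2.4, 3.1, 1.2, 2.9])
equation_a = np.array([2.0, 2.6, 3.0, 1.5, 3.2])
equation_b = np.array([2.9, 3.8, 4.6, 2.1, 4.4])

for name, pred in (("A", equation_a), ("B", equation_b)):
    print(f"equation {name}: DR={discrepancy_ratio(pred, observed):.2f}, "
          f"RMSE={rmse(pred, observed):.2f} m")
```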
Procedia PDF Downloads 411
24976 The Maximum Throughput Analysis of UAV Datalink 802.11b Protocol
Authors: Inkyu Kim, SangMan Moon
Abstract:
The IEEE 802.11b protocol provides a data rate of up to 11 Mbps, whereas the aerospace industry seeks a higher-data-rate COTS data link system for the UAV. The Total Maximum Throughput (TMT) and delay time have been studied by many researchers in past years. This paper provides the theoretical data throughput performance of a UAV formation flight data link using existing 802.11b performance theory. We operate the UAV formation flight with more than 30 quadcopters using the 802.11b protocol. We predict that the number of UAVs in formation flight is bounded by the performance limitations of the data link protocol.
Keywords: UAV datalink, UAV formation flight datalink, UAV WLAN datalink application, UAV IEEE 802.11b datalink application
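A textbook-style estimate of the total maximum throughput for a single saturated 802.11b sender is sketched below; the DSSS timing values are nominal long-preamble figures and the calculation ignores collisions and errors, so it is an upper-bound illustration rather than the paper's exact model.

```python
# Nominal 802.11b DSSS timing values (long preamble); treat as assumptions.
SLOT = 20e-6          # slot time (s)
SIFS = 10e-6
DIFS = 50e-6
PLCP = 192e-6         # PLCP preamble + header
CW_MIN = 31           # minimum contention window (slots)
MAC_HEADER = 28       # MAC header + FCS (bytes)
ACK_FRAME = 14        # ACK frame (bytes)
ACK_RATE = 2e6        # ACK sent at a basic rate (bit/s)

def tmt_802_11b(payload_bytes=1500, data_rate=11e6):
    """Total maximum throughput for a single saturated sender (no errors,
    no RTS/CTS): payload bits divided by the full frame-exchange time."""
    backoff = (CW_MIN / 2.0) * SLOT
    t_data = PLCP + 8.0 * (MAC_HEADER + payload_bytes) / data_rate
    t_ack = PLCP + 8.0 * ACK_FRAME / ACK_RATE
    cycle = DIFS + backoff + t_data + SIFS + t_ack
    return 8.0 * payload_bytes / cycle

print(f"TMT at 11 Mbps, 1500-byte payload: {tmt_802_11b() / 1e6:.2f} Mbps")
```

With these nominal values the estimate comes out near 6 Mbps, which is why the usable throughput is well below the 11 Mbps physical rate even before multiple UAVs contend for the channel.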
Procedia PDF Downloads 391
24975 Methods for Distinction of Cattle Using Supervised Learning
Authors: Radoslav Židek, Veronika Šidlová, Radovan Kasarda, Birgit Fuerst-Waltl
Abstract:
Machine learning represents a set of topics dealing with the creation and evaluation of algorithms that facilitate pattern recognition, classification, and prediction based on models derived from existing data. The data can present identification patterns which are used to classify them into groups. The result of the analysis is a pattern which can be used for the identification of a data set without the need to obtain the input data used for the creation of this pattern. Important requirements in this process are careful data preparation, validation of the model used, and its suitable interpretation. For breeders, it is important to know the origin of animals from the point of view of genetic diversity. In the case of missing pedigree information, other methods can be used for the traceability of an animal's origin. Genetic diversity written in genetic data holds relatively useful information to identify animals originating from individual countries. We can conclude that the application of data mining to molecular genetic data using supervised learning is an appropriate tool for hypothesis testing and for identifying an individual.
Keywords: genetic data, Pinzgau cattle, supervised learning, machine learning
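As an illustration of supervised learning on genotype-like data, the sketch below trains a random forest to separate animals from two hypothetical populations with slightly different allele frequencies; the data are simulated, not the Pinzgau cattle data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

# Synthetic SNP genotypes (0/1/2 allele counts) for animals from two
# hypothetical countries whose allele frequencies differ slightly.
n_animals, n_snps = 200, 50
freq_a = rng.uniform(0.2, 0.8, n_snps)
freq_b = np.clip(freq_a + rng.normal(0.0, 0.15, n_snps), 0.05, 0.95)
geno_a = rng.binomial(2, freq_a, size=(n_animals, n_snps))
geno_b = rng.binomial(2, freq_b, size=(n_animals, n_snps))

X = np.vstack([geno_a, geno_b])
y = np.array([0] * n_animals + [1] * n_animals)   # 0 = country A, 1 = country B

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
print("origin-classification accuracy:",
      round(accuracy_score(y_te, clf.predict(X_te)), 3))
```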
Procedia PDF Downloads 549
24974 Router 1X3 - RTL Design and Verification
Authors: Nidhi Gopal
Abstract:
Routing is the process of moving a packet of data from source to destination; it enables messages to pass from one computer to another and eventually reach the target machine. A router is a networking device that forwards data packets between computer networks. It is connected to two or more data lines from different networks (as opposed to a network switch, which connects data lines from one single network). This paper mainly emphasizes the study of the router device and its top-level architecture, and how the various sub-modules of the router, i.e., the register, FIFO, FSM, and synchronizer, are synthesized, simulated, and finally connected to the top module.
Keywords: data packets, networking, router, routing
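Because the RTL itself would be written in an HDL, the sketch below gives only a Python behavioral reference ("golden") model of one sub-module, the FIFO, of the kind a verification environment might compare simulation results against; the depth and test data are assumptions.

```python
class FifoModel:
    """Behavioral reference model of a synchronous FIFO sub-module.

    Depth is illustrative; a verification environment would compare the
    RTL outputs against this golden model transaction by transaction.
    """

    def __init__(self, depth=16):
        self.depth = depth
        self.mem = []

    @property
    def full(self):
        return len(self.mem) == self.depth

    @property
    def empty(self):
        return not self.mem

    def write(self, data):
        if self.full:
            raise OverflowError("write while full")
        self.mem.append(data)

    def read(self):
        if self.empty:
            raise IndexError("read while empty")
        return self.mem.pop(0)

# Simple self-check: data read out must match data written in, in order.
fifo = FifoModel(depth=4)
packet = [0x55, 0xAA, 0x0F, 0xF0]
for byte in packet:
    fifo.write(byte)
assert [fifo.read() for _ in packet] == packet
print("FIFO reference model: in-order check passed")
```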
Procedia PDF Downloads 811
24973 Aerodynamic Effects of Ice and Its Influences on Flight Characteristics of Low Speed Unmanned Aerial Vehicles
Authors: I. McAndrew, K. L. Witcher, E. Navarro
Abstract:
This paper presents the theory and application of low-speed flight for unmanned aerial vehicles when subjected to surface environmental conditions such as ice on the leading edge and upper surface. A model was developed and tested in a wind tunnel to see how theory compares with practice at various speeds, including take-off, landing, and operational applications where head winds substantially alter parameters. Furthermore, a comparison is drawn with manned operations, showing that this subject is currently under-supported with accurate theory or knowledge for designers or operators to make informed decisions or accommodate individual applications. The effects of ice formation on lift and drag are determined for a range of different angles of attack.
Keywords: aerodynamics, environmental influences, glide path ratio, unmanned vehicles
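A worked example of the lift and drag relations behind such measurements is sketched below; the clean-wing coefficients and the icing penalties are assumed illustrative values, not the wind-tunnel results.

```python
RHO = 1.225   # sea-level air density (kg/m^3)

def lift_drag(v, s, cl, cd):
    """Lift and drag (N) from the standard L = 0.5*rho*V^2*S*CL relation."""
    q = 0.5 * RHO * v ** 2 * s
    return q * cl, q * cd

# Clean-wing coefficients for a small low-speed UAV and an assumed icing
# penalty (illustrative percentages, not measured values from the paper).
clean_cl, clean_cd = 0.9, 0.05
iced_cl, iced_cd = clean_cl * 0.80, clean_cd * 1.40

for label, cl, cd in (("clean", clean_cl, clean_cd), ("iced", iced_cl, iced_cd)):
    lift, drag = lift_drag(v=15.0, s=0.5, cl=cl, cd=cd)
    print(f"{label}: lift={lift:.1f} N, drag={drag:.1f} N, L/D={lift / drag:.1f}")
```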
Procedia PDF Downloads 327
24972 Noise Reduction in Web Data: A Learning Approach Based on Dynamic User Interests
Authors: Julius Onyancha, Valentina Plekhanova
Abstract:
One of the significant issues facing web users is the amount of noise in web data, which hinders the process of finding useful information in relation to their dynamic interests. Current research works consider noise to be any data that does not form part of the main web page and propose noise web data reduction tools which mainly focus on eliminating noise in relation to the content and layout of web data. This paper argues that not all data that form part of the main web page are of interest to a user, and not all noise data are actually noise to a given user. Therefore, learning the noise web data allocated to user requests ensures not only a reduction of the noisiness level in a web user profile but also a decrease in the loss of useful information, and hence improves the quality of the web user profile. A Noise Web Data Learning (NWDL) tool/algorithm capable of learning noise web data in a web user profile is proposed. The proposed work considers the elimination of noise data in relation to dynamic user interest. In order to validate the performance of the proposed work, an experimental design setup is presented. The results obtained are compared with current algorithms applied in the noise web data reduction process. The experimental results show that the proposed work considers the dynamic change of user interest prior to the elimination of noise data. The proposed work contributes towards improving the quality of a web user profile by reducing the amount of useful information eliminated as noise.
Keywords: web log data, web user profile, user interest, noise web data learning, machine learning
Procedia PDF Downloads 263
24971 Data Mining and Knowledge Management Application to Enhance Business Operations: An Exploratory Study
Authors: Zeba Mahmood
Abstract:
Modern business organizations are adopting technological advancements to achieve a competitive edge and satisfy their consumers. Developments in the field of information technology systems have changed the way of conducting business today. Business operations today rely more on the data they obtain, and this data is continuously increasing in volume. The data stored in different locations is difficult to find and use without the effective implementation of data mining and knowledge management techniques. Organizations that smartly identify, obtain, and then convert data into useful formats for decision making and operational improvements create additional value for their customers and enhance their operational capabilities. Marketers and customer relationship departments of firms use data mining techniques to make relevant decisions; this paper emphasizes the identification of the different data mining and knowledge management techniques that are applied in different business industries. The challenges and issues in executing these techniques are also discussed and critically analyzed.
Keywords: knowledge, knowledge management, knowledge discovery in databases, business, operational, information, data mining
Procedia PDF Downloads 537
24970 Indexing and Incremental Approach Using Map Reduce Bipartite Graph (MRBG) for Mining Evolving Big Data
Authors: Adarsh Shroff
Abstract:
Big data is a collection of data sets so large and complex that it becomes difficult to process them using database management tools. Operations like search, analysis, and visualization are performed on big data using data mining, which is the process of extracting patterns or knowledge from large data sets. In recent years, the results of data mining applications have become stale and obsolete over time. Incremental processing is a promising approach to refreshing mining results: it utilizes previously saved states to avoid the expense of re-computation from scratch. This project uses i2MapReduce, an incremental processing extension to MapReduce, the most widely used framework for mining big data. i2MapReduce performs key-value pair level incremental processing rather than task-level re-computation, supports not only one-step computation but also the more sophisticated iterative computation widely used in data mining applications, and incorporates a set of novel techniques to reduce the I/O overhead of accessing preserved fine-grain computation states. To evaluate the mining results, i2MapReduce is assessed using a one-step algorithm and three iterative algorithms with diverse computation characteristics for efficient mining.
Keywords: big data, map reduce, incremental processing, iterative computation
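The key-value pair level incremental idea can be sketched in a single process as follows: per-document map outputs are preserved, and a change to one document adjusts only the affected keys instead of recomputing everything. This illustrates the principle only, not the distributed i2MapReduce framework.

```python
from collections import Counter, defaultdict

def map_phase(doc_id, text):
    """Map step: emit (word, count) pairs for one document."""
    return Counter(text.lower().split())

def reduce_phase(per_doc_counts):
    """Reduce step: sum counts per key across documents."""
    totals = defaultdict(int)
    for counts in per_doc_counts.values():
        for word, c in counts.items():
            totals[word] += c
    return totals

# Initial run over the full data set; per-document map outputs are preserved.
docs = {1: "wind data and wind farms", 2: "big data mining"}
saved_map_state = {d: map_phase(d, t) for d, t in docs.items()}
totals = reduce_phase(saved_map_state)

def apply_delta(doc_id, new_text):
    """Key-value level incremental update: re-map only the changed document
    and adjust totals for the affected keys, instead of recomputing all docs."""
    old = saved_map_state.get(doc_id, Counter())
    new = map_phase(doc_id, new_text)
    for word in set(old) | set(new):
        totals[word] = totals.get(word, 0) + new.get(word, 0) - old.get(word, 0)
    saved_map_state[doc_id] = new

apply_delta(2, "big data mining of wind data")
print(dict(totals))
```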
Procedia PDF Downloads 349
24969 The Relationship between Proximity to Sources of Industrial-Related Outdoor Air Pollution and Children Emergency Department Visits for Asthma in the Census Metropolitan Area of Edmonton, Canada, 2004/2005 to 2009/2010
Authors: Laura A. Rodriguez-Villamizar, Alvaro Osornio-Vargas, Brian H. Rowe, Rhonda J. Rosychuk
Abstract:
Introduction/Objectives: The Census Metropolitan Area of Edmonton (CMAE) has significant industrial emissions to the air from the Industrial Heartland Alberta (IHA) at the northeast and the coal-fired power plants (CFPP) at the west. The objective of the study was to explore the presence of clusters of children's asthma ED visits in the areas around the IHA and the CFPP. Methods: Retrospective data on children's asthma ED visits were collected at the dissemination area (DA) level for children between 2 and 14 years of age living in the CMAE between April 1, 2004, and March 31, 2010. We conducted a spatial analysis of disease clusters around putative sources with count (ecological) data using descriptive, hypothesis testing, and multivariable modeling analyses. Results: The mean crude rate of asthma ED visits was 9.3/1,000 children per year during the study period. The circular spatial scan test for cases and events identified a cluster of children's asthma ED visits in the DA where the CFPP are located, in the Wabamun area. No clusters were identified around the IHA area. The multivariable models suggest that there is a significant decline in risk for children's asthma ED visits as distance increases around the CFPP area; this effect is modified in the SE direction (mean angle 125.58 degrees), where the risk increases with distance. In contrast, the regression models for the IHA suggest that there is a significant increase in risk for children's asthma ED visits as distance increases around the IHA area; this effect is modified in the SW direction (mean angle 216.52 degrees), where the risk increases at shorter distances. Conclusions: Different methods for detecting clusters of disease consistently suggested the existence of a cluster of children's asthma ED visits around the CFPP but not around the IHA within the CMAE. These results are probably explained by the direction of air pollutant dispersion caused by the predominant and subdominant wind directions at each point. The use of different approaches to detect clusters of disease is valuable for a better understanding of the presence, shape, direction, and size of clusters of disease around pollution sources.
Keywords: air pollution, asthma, disease cluster, industry
Procedia PDF Downloads 281
24968 Analyzing Large Scale Recurrent Event Data with a Divide-And-Conquer Approach
Authors: Jerry Q. Cheng
Abstract:
Currently, in analyzing large-scale recurrent event data, there are many challenges such as memory limitations and unscalable computing time. In this research, a divide-and-conquer method is proposed using parametric frailty models. Specifically, the data are randomly divided into many subsets, and the maximum likelihood estimator from each individual data set is obtained. A weighted method is then proposed to combine these individual estimators into the final estimator. It is shown that this divide-and-conquer estimator is asymptotically equivalent to the estimator based on the full data. Simulation studies are conducted to demonstrate the performance of the proposed method. The approach is applied to a large real dataset of repeated heart failure hospitalizations.
Keywords: big data analytics, divide-and-conquer, recurrent event data, statistical computing
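A sketch of the combine step, using inverse-variance weighting of per-subset estimates for a simple mean-rate example; the paper's weighting for frailty-model MLEs may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(2)

def combine_subset_estimates(estimates, variances):
    """Inverse-variance weighted combination of per-subset estimators.

    This is one common weighting choice for divide-and-conquer estimation.
    """
    w = 1.0 / np.asarray(variances)
    return float(np.sum(w * np.asarray(estimates)) / np.sum(w))

# Toy example: estimate a mean event rate from data split into 20 subsets.
full_data = rng.exponential(scale=2.0, size=200_000)   # true mean = 2.0
subsets = np.array_split(full_data, 20)

estimates = [s.mean() for s in subsets]                 # per-subset MLE of the mean
variances = [s.var(ddof=1) / len(s) for s in subsets]   # variance of each estimate

combined = combine_subset_estimates(estimates, variances)
print(f"combined estimate: {combined:.4f}  full-data estimate: {full_data.mean():.4f}")
```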
Procedia PDF Downloads 163
24967 Maintenance Alternatives Related to Costs of Wind Turbines Using Finite State Markov Model
Authors: Boukelkoul Lahcen
Abstract:
The cumulative costs for O&M may represent as much as 65%-90% of a turbine's investment cost. Nowadays, cost effectiveness has become a decision-making and technology evaluation metric. The cost-of-energy metric accounts for the effect of replacement cost and unscheduled maintenance cost parameters. One key element of the proposed approach is the idea that the maintenance of the wind turbines (WTs) can be captured via the use of a finite state Markov chain. Such a model can be embedded within a probabilistic operation and maintenance simulation reflecting the action to be taken. In this paper, an approach to estimating the cost of O&M is presented. The finite state Markov model is used for decision problems with a determined number of periods (the life cycle) to predict the cost under various maintenance options.
Keywords: cost, finite state, Markov model, operation and maintenance
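A minimal sketch of a finite-state Markov cost model: the chain is evolved over a fixed number of periods and the expected per-period state costs are accumulated; the transition probabilities and costs are assumed illustrative values, not figures from the paper.

```python
import numpy as np

def expected_om_cost(P, state_cost, start_state, n_periods):
    """Expected cumulative O&M cost over a fixed number of periods for a
    finite-state Markov chain with transition matrix P and a per-period
    cost attached to each state."""
    dist = np.zeros(P.shape[0])
    dist[start_state] = 1.0
    total = 0.0
    for _ in range(n_periods):
        total += float(dist @ state_cost)   # cost incurred this period
        dist = dist @ P                     # evolve the state distribution
    return total

# Illustrative 3-state model: 0 = healthy, 1 = degraded, 2 = failed.
# Transition probabilities and costs (EUR/period) are assumed values.
P = np.array([[0.90, 0.08, 0.02],
              [0.30, 0.60, 0.10],    # preventive maintenance can restore state 0
              [0.80, 0.00, 0.20]])   # corrective repair after failure
state_cost = np.array([500.0, 2_000.0, 15_000.0])

print(f"expected 20-period O&M cost: {expected_om_cost(P, state_cost, 0, 20):,.0f} EUR")
```

Comparing this expected cost across alternative transition matrices (one per maintenance option) is the kind of comparison the decision problem calls for.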
Procedia PDF Downloads 531
24966 Numerical Study on the Effect of Spudcan Penetration on the Jacket Platform
Authors: Xiangming Ge, Bing Pan, Wei He, Hao Chen, Yong Zhou, Jiayao Wu, Weijiang Chu
Abstract:
How the extraction and penetration of a spudcan affect the performance of the adjacent pile foundation supporting a jacket platform was studied in the program FLAC3D, based on a wind farm project in the Bohai Sea. The simulations were conducted at the end of the spudcan penetration, which induced a pockmark in the seabed. The effects of the distance between the pile foundation and the pockmark were studied. The displacement at the mudline increased when the pockmark was closer. The bearing capacity of this jacket platform with deep pile foundations was less influenced by the process of spudcan penetration, although the process can induce severe stresses on the pile foundation. The induced rotation also satisfied the rotation-controlling criteria.
Keywords: offshore foundation, pile-soil interaction, spudcan penetration, FLAC3D
Procedia PDF Downloads 213
24965 Adoption of Big Data by Global Chemical Industries
Authors: Ashiff Khan, A. Seetharaman, Abhijit Dasgupta
Abstract:
The new era of big data (BD) is influencing chemical industries tremendously, providing several opportunities to reshape the way they operate and helping them shift towards intelligent manufacturing. Despite the availability of free software and the large amount of real-time data generated and stored in process plants, chemical industries are still in the early stages of big data adoption. The industry is just starting to realize the importance of the large amount of data it owns for making the right decisions and supporting its strategies. This article explores the professional competencies and data science capabilities that influence BD adoption in chemical industries and help the industry move towards intelligent manufacturing quickly and reliably. The article uses a literature review and identifies potential applications in the chemical industry for moving from conventional methods to a data-driven approach. The scope of this document is limited to the adoption of BD in chemical industries and the variables identified in this article. To achieve this objective, government, academia, and industry must work together to overcome all present and future challenges.
Keywords: chemical engineering, big data analytics, industrial revolution, professional competence, data science
Procedia PDF Downloads 84
24964 [Keynote Talk]: Aerodynamic Effects of Ice and Its Influences on Flight Characteristics of Low Speed Unmanned Aerial Vehicles
Authors: I. McAndrew, K. L. Witcher, E. Navarro
Abstract:
This paper presents the theory and application of low-speed flight for unmanned aerial vehicles when subjected to surface environmental conditions such as ice on the leading edge and upper surface. A model was developed and tested in a wind tunnel to see how theory compares with practice at various speeds, including take-off, landing, and operational applications where head winds substantially alter parameters. Furthermore, a comparison is drawn with manned operations, showing that this subject is currently under-supported with accurate theory or knowledge for designers or operators to make informed decisions or accommodate individual applications. The effects of ice formation on lift and drag are determined for a range of different angles of attack.
Keywords: aerodynamics, low speed flight, unmanned vehicles, environmental influences
Procedia PDF Downloads 437
24963 Evaluation of Soil Modulus Variation by IS 2911 and Broms Method
Authors: Mandeep Kamboj, Anand R. Katti
Abstract:
A pile of 2.4 m diameter is subjected to lateral loads and moments. These lateral loads are caused by wind/wave forces when the pile is used in the foundations of various structures, such as bridge piers and high-rise towers, and the pile exhibits deflections with depth. Researchers and developers have studied and established various procedures to evaluate the coefficient of soil modulus variation (nh) using various methods. These are verified here for slender piles in sand with various diameters up to 2.4 m. The paper presents a simplified approach to the theoretical values using the IS 2911 procedure and the Broms method and compares them with actual field soil pressure/displacement distributions measured in a mono-pile along its length and across its diameter.
Keywords: bridge pier, lateral loads, mono-pile, slender piles
Procedia PDF Downloads 187
24962 Secure Multiparty Computations for Privacy Preserving Classifiers
Authors: M. Sumana, K. S. Hareesha
Abstract:
Secure computations are essential when performing privacy-preserving data mining. Distributed privacy-preserving data mining involves two or more sites that cannot pool their data with a third party, as doing so would violate laws protecting the individual. Hence, in order to model the private data without compromising privacy or losing information, secure multiparty computations are used. Secure computations of the product, mean, variance, dot product, and sigmoid function using the additive and multiplicative homomorphic properties are discussed. The computations are performed on vertically partitioned data, with a single site holding the class value.
Keywords: homomorphic property, secure product, secure mean and variance, secure dot product, vertically partitioned data
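One standard building block behind such protocols, a secure sum (and hence mean) via additive secret sharing, is sketched below; it illustrates the additive principle only and is not the homomorphic computations proposed in the paper.

```python
import random

PRIME = 2**61 - 1   # shares live in a finite field to hide individual values

def make_shares(value, n_parties):
    """Split an integer value into n additive shares modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_sum(private_values):
    """Each party splits its value into shares and sends one share to every
    other party; summing all shares reveals only the total, not the inputs."""
    n = len(private_values)
    all_shares = [make_shares(v, n) for v in private_values]
    # Party j aggregates the j-th share from every party.
    partial_sums = [sum(all_shares[i][j] for i in range(n)) % PRIME
                    for j in range(n)]
    return sum(partial_sums) % PRIME

# Three sites with private counts; the mean follows from the secure sum.
private_counts = [120, 75, 310]
total = secure_sum(private_counts)
print("secure sum:", total, "secure mean:", total / len(private_counts))
```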
Procedia PDF Downloads 410