Search results for: data block
25430 Seismic Data Scaling: Uncertainties, Potential and Applications in Workstation Interpretation
Authors: Ankur Mundhra, Shubhadeep Chakraborty, Y. R. Singh, Vishal Das
Abstract:
Seismic data scaling affects the dynamic range of the data; with present-day lower storage costs and the higher reliability of hard-disk storage, scaling is generally not recommended. However, when dealing with data of different vintages, which may have been processed in 16 bits or even 8 bits and need to be combined with 32-bit data, scaling is performed. Scaling also amplifies low-amplitude events in the deeper region that would otherwise disappear behind high-amplitude shallow events saturating the amplitude scale. We have focused on the significance of scaling data to aid interpretation. This study elucidates a proper seismic loading procedure in workstations that does not rely on the default preset parameters available in most software suites. Differences and distributions of amplitude values at different depths are probed in this exercise. Proper loading parameters are identified, and the associated steps that need to be taken care of while loading data are explained. Finally, the exercise interprets the uncertainties that might arise when correlating scaled and unscaled versions of seismic data with synthetics. As a seismic well tie correlates seismic reflection events with well markers, it is used in this study to identify regions that are enhanced and/or affected by the scaling parameter(s).
Keywords: clipping, compression, resolution, seismic scaling
Procedia PDF Downloads 471
25429 3rd Generation Modular Execution: A Global Breakthrough in Modular Facility Construction System
Authors: Sean Bryner S. Rey, Eric Tanjutco
Abstract:
Modular execution strategies address the various challenges of a project and are implemented in each project phase, covering Engineering, Procurement, Fabrication and Construction. Only in recent years was the intent to surpass the mechanical modularization approach conceptualized, to provide a solution to the much greater demands of project components such as site location and adverse weather conditions, material sourcing, construction schedule, safety risks, and overall plot layout and allocation. The intent of this paper is to introduce 3rd Generation Modular Execution, with an overview of its advantages for project execution and an emphasis on Engineering, Construction, Operation and Maintenance. Most importantly, the paper will present the key differentiators of 3rd Gen modular execution against conventional project execution and the merits it bears for the industry.
Keywords: 3rd generation modular, process block, construction, operation & maintenance
Procedia PDF Downloads 475
25428 Association of Social Data as a Tool to Support Government Decision Making
Authors: Diego Rodrigues, Marcelo Lisboa, Elismar Batista, Marcos Dias
Abstract:
Based on data on child labor, this work raises questions about how to understand and locate the factors that make up child labor rates, and which properties are important for analyzing such cases. Using data mining techniques to discover valid patterns in Brazilian social databases, data on child labor in the State of Tocantins (located in the north of Brazil, with a territory of 277,000 km² comprising 139 counties) were evaluated. This work aims to detect factors that are deterministic for the practice of child labor and their relationships with financial, educational, regional, and social indicators, generating information that is not explicit in the government database, thus enabling better monitoring and updating of policies for this purpose.
Keywords: social data, government decision making, association of social data, data mining
Procedia PDF Downloads 371
25427 A Particle Filter-Based Data Assimilation Method for Discrete Event Simulation
Authors: Zhi Zhu, Boquan Zhang, Tian Jing, Jingjing Li, Tao Wang
Abstract:
Data assimilation is a hybrid model- and data-driven method that dynamically fuses new observation data with a numerical model to iteratively approach the real system state. It is widely used in state prediction and parameter inference for continuous systems. Because of a discrete event system’s non-linearity and non-Gaussianity, the traditional Kalman filter, based on linear and Gaussian assumptions, cannot perform data assimilation for such systems, so the particle filter has gradually become the technical approach for discrete event simulation data assimilation. Hence, we propose a particle filter-based discrete event simulation data assimilation method and take an unmanned aerial vehicle (UAV) maintenance service system as a proof of concept for simulation experiments. The experimental results showed that the filtered state data are closer to the real state of the system, which verifies the effectiveness of the proposed method. This research can provide a reference framework for the data assimilation process of other complex nonlinear systems, such as discrete-time and agent-based simulation.
Keywords: discrete event simulation, data assimilation, particle filter, model and data-driven
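The predict-weight-resample cycle that a bootstrap particle filter performs can be sketched minimally as follows. This is not the authors' code: the 1-D random-walk state model and all parameter values below are invented stand-ins for their UAV maintenance service model.

```python
import math
import random

def particle_filter(observations, n=500, q=1.0, r=1.0, seed=0):
    """Bootstrap particle filter for a 1-D random-walk state observed in Gaussian noise.

    q: process-noise std, r: observation-noise std (both assumed values)."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in observations:
        # Predict: propagate each particle through the (possibly nonlinear) model.
        particles = [p + rng.gauss(0.0, q) for p in particles]
        # Update: weight each particle by the observation likelihood.
        w = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in particles]
        s = sum(w)
        w = [x / s for x in w]
        # Filtered state estimate is the weighted particle mean.
        estimates.append(sum(p * wi for p, wi in zip(particles, w)))
        # Resample (multinomial) to avoid weight degeneracy.
        particles = rng.choices(particles, weights=w, k=n)
    return estimates
```

Fed a constant true state, the estimates converge toward it within a few assimilation steps, mirroring the abstract's claim that filtered states approach the real system state.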
Procedia PDF Downloads 20
25426 Statistical Modelling of Maximum Temperature in Rwanda Using Extreme Value Analysis
Authors: Emmanuel Iyamuremye, Edouard Singirankabo, Alexis Habineza, Yunvirusaba Nelson
Abstract:
Temperature is one of the most important climatic factors for crop production. However, extreme temperatures cause droughts, heat waves, and cold spells that have various consequences for human life, agriculture, and the environment in general. It is necessary to provide reliable information on such incidents and on the probability of such extreme events occurring. In the 21st century, the world faces a huge number of threats, especially from climate change, due to global warming and environmental degradation. The rise in temperature has a direct effect on the decrease in rainfall. This has an impact on crop growth and development, which in turn decreases crop yield and quality. Countries that are heavily dependent on agriculture suffer considerably and need to take preventive steps to overcome these challenges. The main objective of this study is to model the statistical behaviour of extreme maximum temperature values in Rwanda. To achieve this objective, daily temperature data spanning the period from January 2000 to December 2017, recorded at nine weather stations and collected from the Rwanda Meteorological Agency, were used. Two methods, namely the block maxima (BM) method and the Peaks Over Threshold (POT) method, were applied to model and analyse extreme temperatures. Model parameters were estimated, while extreme temperature return periods and confidence intervals were predicted. The model fit suggests the Gumbel and Beta distributions to be the most appropriate models for the annual maxima of daily temperature. The results show that the temperature will continue to increase, as shown by the estimated return levels.
Keywords: climate change, global warming, extreme value theory, Rwanda, temperature, generalised extreme value distribution, generalised pareto distribution
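The block maxima step described here can be sketched briefly: the Gumbel distribution named in the authors' results is fitted to annual maxima (a method-of-moments fit is shown for compactness; the authors' estimation method is not specified here), and return levels are read off from the fitted parameters. The data values below are invented for illustration.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def fit_gumbel(block_maxima):
    """Method-of-moments fit of a Gumbel distribution to block (annual) maxima."""
    n = len(block_maxima)
    mean = sum(block_maxima) / n
    var = sum((x - mean) ** 2 for x in block_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi  # scale parameter
    mu = mean - EULER_GAMMA * beta         # location parameter
    return mu, beta

def return_level(mu, beta, T):
    """Value exceeded on average once every T blocks (e.g. T-year return level)."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))
```

For example, a 100-year return level is always higher than a 10-year one, which is how the "temperature will continue to increase" statement is read off from estimated return levels.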
Procedia PDF Downloads 185
25425 Outlier Detection in Stock Market Data using Tukey Method and Wavelet Transform
Authors: Sadam Alwadi
Abstract:
Outlier values are a problem that frequently occurs in the data observation or recording process, so data imputation has become an essential matter. In this work, the methods described in prior work are used to detect outlier values in a collection of stock market data. In order to implement the detection and find solutions that may be helpful for investors, real closing price data were obtained from the Amman Stock Exchange (ASE). The Tukey method and the Maximal Overlap Discrete Wavelet Transform (MODWT) are used to detect and impute the outlier values.
Keywords: outlier values, imputation, stock market data, detecting, estimation
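The Tukey fence rule used for detection can be sketched as follows (the MODWT stage is omitted, and the price values are invented; this is a generic illustration, not the authors' implementation):

```python
import statistics

def tukey_outliers(values, k=1.5):
    """Flag values outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR]."""
    # statistics.quantiles with n=4 returns [Q1, Q2, Q3] (exclusive method).
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    flagged = [v for v in values if v < lower or v > upper]
    return flagged, (lower, upper)
```

A flagged closing price would then be replaced by an imputed estimate (e.g. from the wavelet-smoothed series) rather than simply dropped.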
Procedia PDF Downloads 82
25424 Investigation on Optical Performance of Operational Shutter Panels for Transparent Displays
Authors: Jaehong Kim, Sunhee Park, HongSeop Shin, Kyongho Lim, Suhyun Kwon, Don-Gyou Lee, Pureum Kim, Moojong Lim, JongSang Baek
Abstract:
Transparent displays with OLEDs are the most commonly produced form of see-through displays on the market or in development. In order to block the visual interruption caused by light coming from the background, a special panel is combined with transparent OLED displays. There are, however, few studies to date on the optical performance of operational shutter panels for transparent displays. This paper therefore describes the optical performance of operational shutter panels. A novel evaluation method was developed by measuring the amount of light that can form a transmitted background image. The proposed method can quantify the degree to which transmitted background images can be recognized, and is consistent with the viewer’s perception.
Keywords: transparent display, operational shutter panel, optical performance, OLEDs
Procedia PDF Downloads 445
25423 PEINS: A Generic Compression Scheme Using Probabilistic Encoding and Irrational Number Storage
Authors: P. Jayashree, S. Rajkumar
Abstract:
With social networks and smart devices generating a multitude of data, effective data management is the need of the hour for networks and cloud applications. Some applications need effective storage, while others need effective communication over networks, and data reduction comes as a handy solution to meet both requirements. Most data compression techniques are based on data statistics and may result in either lossy or lossless data reduction. Though lossy reduction produces better compression ratios than lossless methods, many applications require data accuracy and minute details to be preserved. A variety of data compression algorithms exist in the literature for different forms of data such as text, images, and multimedia. In the proposed work, a generic progressive compression algorithm based on probabilistic encoding, called PEINS, is presented as an enhancement over the irrational number storage coding technique, to address the storage issues of increasing data volumes as a cost-effective solution that also offers data security as a secondary outcome to some extent. The proposed work reveals cost effectiveness in terms of a better compression ratio with no deterioration in compression time.
Keywords: compression ratio, generic compression, irrational number storage, probabilistic encoding
Procedia PDF Downloads 296
25422 IoT Device Cost-Effective Storage Architecture and Real-Time Data Analysis/Data Privacy Framework
Authors: Femi Elegbeleye, Omobayo Esan, Muienge Mbodila, Patrick Bowe
Abstract:
This paper focuses on a cost-effective storage architecture using a fog and cloud data storage gateway, and presents the design of a framework for the data privacy model and for real-time data analytics using machine learning methods. The paper begins with the system analysis, the system architecture and its component design, as well as the overall system operations. The results obtained on the data privacy model show that when two or more data privacy models are combined, the data enjoy stronger privacy, and that a fog storage gateway has several advantages over traditional cloud storage: our results show that fog storage reduces latency/delay, bandwidth consumption, and energy usage compared with cloud storage, and will therefore help to lessen excessive cost. The paper dwells on the system descriptions; the researchers focused on the research design and the framework design for the data privacy model, data storage, and real-time analytics. The paper also shows the major system components and their framework specifications. Lastly, the overall system architecture is presented, along with its structure and interrelationships.
Keywords: IoT, fog, cloud, data analysis, data privacy
Procedia PDF Downloads 100
25421 Comparison of Selected Pier-Scour Equations for Wide Piers Using Field Data
Authors: Nordila Ahmad, Thamer Mohammad, Bruce W. Melville, Zuliziana Suif
Abstract:
Current methods for predicting local scour at wide bridge piers were developed on the basis of laboratory studies, and very few scour predictions have been tested against field data. A laboratory wide-pier scour equation from previous findings is compared with field data here. A wide range of field data was used, consisting of both live-bed and clear-water scour. A method for assessing the quality of the data was developed and applied to the data set. Three other wide-pier scour equations from the literature were used to compare the performance of each predictive method. The best-performing scour equation was identified using statistical analysis. Comparisons of computed and observed scour depths indicate that the equation from the previous publication produced the smallest discrepancy ratio and RMSE value when compared with the large amount of laboratory and field data.
Keywords: field data, local scour, scour equation, wide piers
Procedia PDF Downloads 415
25420 The Maximum Throughput Analysis of UAV Datalink 802.11b Protocol
Authors: Inkyu Kim, SangMan Moon
Abstract:
The IEEE 802.11b protocol provides up to an 11 Mbps data rate, whereas the aerospace industry seeks a higher-data-rate COTS data link system for UAVs. The Total Maximum Throughput (TMT) and delay time have been studied by many researchers in past years. This paper provides the theoretical data throughput performance of a UAV formation flight data link using existing 802.11b performance theory. We operate a UAV formation flight of more than 30 quadcopters with the 802.11b protocol. We predict that the number of UAVs in formation flight is bounded by the data link protocol's performance limitations.
Keywords: UAV datalink, UAV formation flight datalink, UAV WLAN datalink application, UAV IEEE 802.11b datalink application
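The throughput ceiling the abstract refers to can be approximated with a back-of-the-envelope DCF cycle-time calculation. The timing constants below are typical long-preamble 802.11b values and should be treated as assumptions, not as the authors' exact model:

```python
def tmt_80211b(payload_bytes=1460, rate_mbps=11.0):
    """Rough single-station throughput ceiling for an 802.11b DATA+ACK exchange."""
    slot, sifs = 20e-6, 10e-6
    difs = sifs + 2 * slot                        # 50 us
    backoff = (31 / 2) * slot                     # mean initial backoff, CWmin = 31
    plcp = 192e-6                                 # long PLCP preamble + header
    mac = 28 * 8 / (rate_mbps * 1e6)              # MAC header + FCS at the data rate
    t_data = plcp + mac + payload_bytes * 8 / (rate_mbps * 1e6)
    t_ack = plcp + 14 * 8 / 2e6                   # ACK sent at the 2 Mbit/s basic rate
    cycle = difs + backoff + t_data + sifs + t_ack
    return payload_bytes * 8 / cycle              # useful bits per second
```

With these assumed values the ceiling for 1460-byte payloads comes out near 6 Mbps, well under the nominal 11 Mbps, which is the kind of gap that bounds the number of UAVs a shared formation-flight link can serve.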
Procedia PDF Downloads 393
25419 Pea Seedlings (Pisum sativum L.) Have a High Potential to Be Used as a Promising Candidate for the Study of Phytoremediation Mechanisms Following a Polycyclic Aromatic Hydrocarbon (PAH) Contamination Such as Naphthalene
Authors: Agoun-bahar Salima
Abstract:
The environmental variations to which plants are subjected require them to have a strong capacity for adaptation. Some plants are affected by pollutants and are used as pollution indicators; others have the capacity to block, extract, accumulate, transform or degrade the xenobiotic. The legume family includes around 20,000 species and offers opportunities for exploitation through its agronomic, dietary and ecological interests. The lack of data on the bioavailability of Polycyclic Aromatic Hydrocarbons (PAHs) in polluted environments, on their passage into food chains and on the effects of interaction with other pollutants justifies priority research on this vast family of hydrocarbons. Naphthalene is a PAH formed of two aromatic rings; it is listed and classified as a priority pollutant among the 16 PAHs identified by the United States Environmental Protection Agency. The aim of this work was to determine the effect of naphthalene at different concentrations on the morphological and physiological responses of pea seedlings. At the same time, the behavior of the pollutant in the soil and its fate in the different parts of the plant (roots, stems, leaves and fruits) were recorded by Gas Chromatography/Mass Spectrometry (GC/MS). In controlled laboratory studies, plants exposed to naphthalene were able to grow efficiently. From a quantitative analysis, 67% of the naphthalene was removed from the soil and found in the leaves of the seedlings within just three weeks of cultivation. Interestingly, no trace of naphthalene or its derivatives was detected in the chromatograms corresponding to the dosage of the pollutant at the fruit level after ten weeks of cultivating the seedlings, for all the pollutant concentrations used. The pea seedlings appear to tolerate the pollutant when it is applied to the soil.
In conclusion, the pea represents an interesting biological model for the study of phytoremediation mechanisms.
Keywords: naphthalene, PAH, pea, phytoremediation, pollution
Procedia PDF Downloads 78
25418 Methods for Distinction of Cattle Using Supervised Learning
Authors: Radoslav Židek, Veronika Šidlová, Radovan Kasarda, Birgit Fuerst-Waltl
Abstract:
Machine learning comprises a set of topics dealing with the creation and evaluation of algorithms that facilitate pattern recognition, classification, and prediction based on models derived from existing data. The data can exhibit identification patterns that are used to classify it into groups. The result of the analysis is a pattern that can be used to identify a data set without needing the input data used to create that pattern. An important requirement in this process is careful data preparation, validation of the model used, and its suitable interpretation. For breeders, it is important to know the origin of animals from the point of view of genetic diversity. Where pedigree information is missing, other methods can be used to trace an animal's origin. The genetic diversity written in genetic data holds relatively useful information for identifying animals originating from individual countries. We conclude that the application of data mining to molecular genetic data using supervised learning is an appropriate tool for hypothesis testing and for identifying an individual.
Keywords: genetic data, Pinzgau cattle, supervised learning, machine learning
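The "learn a pattern from labelled data, then classify new animals without the training data" workflow can be illustrated with a minimal nearest-centroid classifier over marker-style feature vectors. This is not the authors' method or data; the breed labels and allele-frequency-like values below are invented:

```python
import math

def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def train(samples):
    """samples: {breed_label: [marker_vector, ...]} -> per-breed centroid model."""
    return {label: centroid(rows) for label, rows in samples.items()}

def classify(model, vec):
    """Assign the breed whose centroid is nearest in Euclidean distance."""
    return min(model, key=lambda label: math.dist(model[label], vec))
```

Once trained, only the compact model (the centroids) is needed to assign an unpedigreed animal to a group, matching the abstract's point that the learned pattern replaces the original input data.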
Procedia PDF Downloads 552
25417 Router 1X3 - RTL Design and Verification
Authors: Nidhi Gopal
Abstract:
Routing is the process of moving a packet of data from source to destination; it enables messages to pass from one computer to another and eventually reach the target machine. A router is a networking device that forwards data packets between computer networks. It is connected to two or more data lines from different networks (as opposed to a network switch, which connects data lines from one single network). This paper mainly emphasizes the study of the router device and its top-level architecture, and how the various sub-modules of the router, i.e. Register, FIFO, FSM and Synchronizer, are synthesized, simulated, and finally connected to the top module.
Keywords: data packets, networking, router, routing
Procedia PDF Downloads 815
25416 Solutions to Reduce CO2 Emissions in Autonomous Robotics
Authors: Antoni Grau, Yolanda Bolea, Alberto Sanfeliu
Abstract:
Mobile robots can be used in many different applications, including mapping, search, rescue, reconnaissance, hazard detection, carpet cleaning, exploration, etc. However, they are limited by their reliance on traditional energy sources such as electricity and oil, which cannot always provide a convenient energy source in all situations. In an ever more eco-conscious world, solar energy offers the most environmentally clean option of all energy sources. Electricity presents threats of pollution resulting from its production process, and oil poses a huge threat to the environment. Not only does it cause harm through the toxic emissions (for instance CO2) of the combustion process needed to produce energy, but there is also the ever-present risk of oil spillages and damage to ecosystems. Solar energy can help to mitigate carbon emissions by replacing more carbon-intensive sources of heat and power. The challenge of this work is to propose the design and implementation of electric battery recharge stations. These recharge docks are based on the use of renewable energy, namely solar energy (with photovoltaic panels), with the aim of reducing CO2 emissions. In this paper, a comparative study of CO2 emission production (from the use of different energy sources: natural gas, gas oil, fuel oil, and solar panels) in the charging process of Segway PT batteries is carried out. To conduct the study with solar energy, a photovoltaic panel and a buck-boost DC/DC block were used. Specifically, the STP005S-12/Db solar panel was used to carry out our experiments. This module is a 5 Wp photovoltaic (PV) module, configured with 36 serially connected monocrystalline cells. With these elements, a battery recharge station was built to recharge the robot batteries. For the energy storage DC/DC block, a series of ultracapacitors was used.
Due to the variation of the PV panel output with temperature and irradiation, the non-integer behavior of the ultracapacitors, and the non-linearities of the whole system, the authors used a fractional control method so that the solar panels supply the maximum allowed power to recharge the robots in the least time. Greenhouse gas emissions from the production of electricity vary due to regional differences in source fuel. The impact of an energy technology on the climate can be characterised by its carbon emission intensity, a measure of the amount of CO2, or CO2 equivalent, emitted per unit of energy generated. In our work, coal is the most hazardous fossil energy source, producing 53% more gas emissions than natural gas and 30% more than fuel oil. Moreover, it is remarkable that existing fossil fuel technologies produce high carbon emission intensity through the combustion of carbon-rich fuels, whilst renewable technologies such as solar produce little or no emissions during operation, though they may incur emissions during manufacture. Solar energy can thus help to mitigate carbon emissions.
Keywords: autonomous robots, CO2 emissions, DC/DC buck-boost, solar energy
Procedia PDF Downloads 422
25415 Noise Reduction in Web Data: A Learning Approach Based on Dynamic User Interests
Authors: Julius Onyancha, Valentina Plekhanova
Abstract:
One of the significant issues facing web users is the amount of noise in web data, which hinders the process of finding useful information relevant to their dynamic interests. Current research works consider noise to be any data that does not form part of the main web page, and propose noise web data reduction tools that mainly focus on eliminating noise related to the content and layout of web data. This paper argues that not all data that forms part of the main web page is of interest to a user, and that not all noise data is actually noise to a given user. Therefore, learning the noise web data allocated to user requests ensures not only a reduction of the noisiness level in a web user profile, but also a decrease in the loss of useful information, hence improving the quality of the profile. A Noise Web Data Learning (NWDL) tool/algorithm capable of learning noise web data in the web user profile is proposed. The proposed work considers the elimination of noise data in relation to dynamic user interest. In order to validate the performance of the proposed work, an experimental design is presented. The results obtained are compared with current algorithms applied in the noise web data reduction process. The experimental results show that the proposed work considers the dynamic change of user interest prior to the elimination of noise data. The proposed work contributes towards improving the quality of a web user profile by reducing the amount of useful information eliminated as noise.
Keywords: web log data, web user profile, user interest, noise web data learning, machine learning
Procedia PDF Downloads 265
25414 Investigation of Detectability of Orbital Objects/Debris in Geostationary Earth Orbit by Microwave Kinetic Inductance Detectors
Authors: Saeed Vahedikamal, Ian Hepburn
Abstract:
Microwave Kinetic Inductance Detectors (MKIDs) are considered one of the most promising photon detectors of the future in many astronomical applications such as exoplanet detection. The MKID advantages stem from their single-photon sensitivity (ranging from UV to optical and near infrared), photon energy resolution, and high temporal capability (~microseconds). There has been substantial progress in the development of these detectors, and MKIDs with megapixel arrays are now possible. The unique capability of recording an incident photon and its energy (or wavelength) while also registering its time of arrival to within a microsecond enables an array of MKIDs to produce a four-dimensional data block of x, y, z and t, comprising the x, y spatial axes, a per-pixel spectral axis z, and a per-pixel temporal axis t. This offers the possibility that the spectrum and brightness variation over time of any detected piece of space debris might offer a unique identifier or fingerprint. Such a fingerprint signal from any object identified in multiple detections by different observers has the potential to determine the orbital features of the object and be used for tracking. Modelling performed so far shows that with a 20 cm telescope located at an astronomical observatory (e.g. La Palma, Canary Islands) we could detect sub-cm objects at GEO. Considering a Lambertian sphere with 10% reflectivity (the albedo of the Moon), we anticipate the following for a GEO object: a 10 cm object imaged in a 1-second capture; a 1.2 cm object for a 70-second image integration; or a 0.65 cm object for a 4-minute image integration. We present details of our modelling and the potential instrument for a dedicated GEO surveillance system.
Keywords: space debris, orbital debris, detection system, observation, microwave kinetic inductance detectors, MKID
Procedia PDF Downloads 98
25413 Data Mining and Knowledge Management Application to Enhance Business Operations: An Exploratory Study
Authors: Zeba Mahmood
Abstract:
Modern business organizations are adopting technological advancements to achieve a competitive edge and satisfy their consumers. Developments in the field of information technology systems have changed the way of conducting business today. Business operations today rely more on the data they obtain, and this data is continuously increasing in volume. The data stored in different locations is difficult to find and use without the effective implementation of data mining and knowledge management techniques. Organizations that smartly identify and obtain data, and then convert it into useful formats for decision making and operational improvements, create additional value for their customers and enhance their operational capabilities. Marketers and customer relationship departments of firms use data mining techniques to make relevant decisions; this paper emphasizes the identification of the different data mining and knowledge management techniques that are applied in different business industries. The challenges and issues in executing these techniques are also discussed and critically analyzed.
Keywords: knowledge, knowledge management, knowledge discovery in databases, business, operational, information, data mining
Procedia PDF Downloads 538
25412 Description of Geotechnical Properties of Jabal Omar
Authors: Ibrahim Abdel Gadir Malik, Dafalla Siddig Dafalla, Osama Abdelgadir El-Bushra
Abstract:
Geological and engineering characteristics of the intact rock and the discontinuity surfaces were used to describe and classify the rock mass into zones based on mechanical and physical properties. Many condition terms that affect the rock mass, such as rock strength, Rock Quality Designation (RQD) value, joint spacing and joint condition, and water condition, together with block size, joint roughness, separation, joint hardness, friction angle, and weathering, were used to classify the rock mass into: good quality (class II) (RMR values between 75% and 56%), good to fair quality (class II to III) (RMR values between 70% and 55%), fair quality (class III) (RMR values between 60% and 50%), and fair to poor quality (class III to IV) (RMR values between 50% and 35%).
Keywords: rock strength, RQD, joints, weathering
Procedia PDF Downloads 417
25411 Indexing and Incremental Approach Using Map Reduce Bipartite Graph (MRBG) for Mining Evolving Big Data
Authors: Adarsh Shroff
Abstract:
Big data is a collection of data sets so large and complex that they become difficult to process using database management tools. Operations like search, analysis, and visualization on big data are performed using data mining, the process of extracting patterns or knowledge from large data sets. In recent years, data mining results have tended to become stale and obsolete over time. Incremental processing is a promising approach to refreshing mining results: it utilizes previously saved states to avoid the expense of re-computation from scratch. This project uses i2MapReduce, an incremental processing extension to MapReduce, the most widely used framework for mining big data. i2MapReduce performs key-value pair level incremental processing rather than task-level re-computation, supports not only one-step computation but also the more sophisticated iterative computation widely used in data mining applications, and incorporates a set of novel techniques to reduce the I/O overhead of accessing preserved fine-grain computation states. To evaluate the mining results, i2MapReduce is assessed using a one-step algorithm and three iterative algorithms with diverse computation characteristics.
Keywords: big data, map reduce, incremental processing, iterative computation
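Key-value pair level incremental processing, as opposed to task-level re-computation, can be illustrated under heavy simplification with a toy incremental word count that preserves per-chunk map output and re-reduces only the delta when one chunk of input changes. All names here are invented for illustration and are not the i2MapReduce API:

```python
from collections import Counter

def map_chunk(text):
    """Map phase for one input chunk: emit (word, count) pairs."""
    return Counter(text.split())

class IncrementalWordCount:
    """Toy key-value-level incremental MapReduce in the spirit of i2MapReduce."""
    def __init__(self):
        self.saved = {}          # chunk_id -> preserved fine-grain map output
        self.totals = Counter()  # current reduce result

    def update(self, chunk_id, text):
        """Re-map only the changed chunk and fold its delta into the totals."""
        new = map_chunk(text)
        old = self.saved.get(chunk_id, Counter())
        self.totals += new       # add the chunk's new pairs
        self.totals -= old       # retract the chunk's stale pairs
        self.saved[chunk_id] = new
        return self.totals
```

Re-running one changed chunk touches only that chunk's saved key-value pairs, which is the cost saving over recomputing every chunk from scratch.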
Procedia PDF Downloads 354
25410 Evaluation of Commercial Herbicides for Weed Control and Yield under Direct Dry Seeded Rice Cultivation System in Pakistan
Authors: Sanaullah Jalil, Abid Majeed, Syed Haider Abbas
Abstract:
Direct dry seeded rice cultivation is an emerging production technology in Pakistan. Weeds are a major constraint to the success of direct dry seeded rice (DDSR). Studies were carried out over two years, during 2015 and 2016, to evaluate the performance of pre-emergence herbicides (Top Max @ 2.25 L/ha, Click @ 1.5 L/ha and Pendimethalin @ 1.25 L/ha) and post-emergence herbicides (Clover @ 200 g/ha, Pyranex Gold @ 250 g/ha, Basagran @ 2.50 L/ha, Sunstar Gold @ 50 g/ha and Wardan @ 1.25 L/ha) in the rice research field area of the National Agriculture Research Centre (NARC), Islamabad. The experiments were laid out in a Randomized Complete Block Design (RCBD) with three replications. All evaluated herbicides reduced weed density and biomass by a significant amount. The net plot size was 2.5 x 5 m with 10 rows. Basmati-385 was used as the test variety of rice. The data indicated that Top Max and Click provided the best weed control efficiency but suppressed the germination of rice seed, which caused the lowest grain yields (680.6 kg/ha and 314.5 kg/ha, respectively). The weedy check plot yielded 524.7 kg/ha of paddy with the highest weed density. Pyranex Gold provided better weed control efficiency and contributed a significantly higher paddy yield, 5116.6 kg/ha, than all other herbicide applications, followed by Clover with a paddy yield of 4241.7 kg/ha. The results of our study suggest that the pre-emergence herbicides provided the best weed control but are not fit for the direct dry seeded rice (DDSR) cultivation system, and therefore the post-emergence herbicides Pyranex Gold and Clover can be suggested for weed control and higher yield.
Keywords: pyranex gold, clover, direct dry seeded rice (DDSR), yield
Procedia PDF Downloads 263
25409 Analyzing Large Scale Recurrent Event Data with a Divide-And-Conquer Approach
Authors: Jerry Q. Cheng
Abstract:
Currently, in analyzing large-scale recurrent event data, there are many challenges, such as memory limitations, unscalable computing time, etc. In this research, a divide-and-conquer method using parametric frailty models is proposed. Specifically, the data is randomly divided into many subsets, and the maximum likelihood estimator is obtained from each individual data set. A weighted method is then proposed to combine these individual estimators into the final estimator. It is shown that this divide-and-conquer estimator is asymptotically equivalent to the estimator based on the full data. Simulation studies are conducted to demonstrate the performance of the proposed method. The approach is applied to a large real dataset of repeated heart failure hospitalizations.
Keywords: big data analytics, divide-and-conquer, recurrent event data, statistical computing
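The split-estimate-combine scheme can be sketched for a simple exponential event-rate model, used here as a stand-in for the parametric frailty models of the abstract; the sample-size weighting below is one plausible choice of weights, not necessarily the authors':

```python
import random

def subset_mle_rate(gap_times):
    """MLE of an exponential event rate on one subset: 1 / (sample mean gap)."""
    return len(gap_times) / sum(gap_times)

def dac_estimate(gap_times, k):
    """Divide-and-conquer: split into k subsets, estimate each, combine by weight."""
    data = list(gap_times)
    random.shuffle(data)                      # random division into subsets
    chunks = [data[i::k] for i in range(k)]
    total = sum(len(c) for c in chunks)
    # Sample-size-weighted combination of the per-subset estimators.
    return sum(len(c) / total * subset_mle_rate(c) for c in chunks)
```

On a large sample the combined estimator agrees closely with the full-data MLE, which is the asymptotic-equivalence claim in miniature; each subset fit, meanwhile, fits in memory on its own.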
Procedia PDF Downloads 167
25408 Low Complexity Deblocking Algorithm
Authors: Jagroop Singh Sidhu, Buta Singh
Abstract:
A low-complexity deblocking filter with three frequency-related modes (smooth, intermediate, and non-smooth, for low-, mid-, and high-frequency regions, respectively) is proposed. The suggested approach requires zero additions, zero subtractions, and zero multiplications for the intermediate region, no divisions for the non-smooth region, and no comparisons, keeping the computation low and making the filter suitable for block-based image coding systems. A comparison of the average number of operations per pixel vector for each block (smooth, intermediate, and non-smooth) against the filter suggested by Chen shows that the proposed filter requires less computation and is thus suitable for fast processing algorithms.
Keywords: blocking artifacts, computational complexity, non-smooth, intermediate, smooth
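A toy sketch of the three-mode idea follows. The activity measure, thresholds, and filter taps here are illustrative assumptions, not the paper's operation-free filter (in particular, this sketch still uses comparisons for mode selection, which the proposed method avoids); it only shows how a block boundary can be classified as smooth, intermediate, or non-smooth and then filtered with shift-and-add arithmetic.

```python
def filter_boundary(left, right, t_low=2, t_high=8):
    """Hypothetical three-mode deblocking sketch for one block boundary.

    left/right are integer pixel rows on either side of the block edge.
    The boundary is classified by its activity (sample spread) and the
    two edge-adjacent pixels are smoothed using only shifts and adds.
    """
    act = max(left + right) - min(left + right)   # boundary activity
    p, q = left[-1], right[0]                     # pixels adjacent to the edge
    if act < t_low:                               # smooth region: strong 2-tap average
        avg = (p + q) >> 1
        left[-1], right[0] = avg, avg
    elif act < t_high:                            # intermediate: mild adjustment
        d = (q - p) >> 2
        left[-1], right[0] = p + d, q - d
    # non-smooth region: leave the edge untouched to preserve real detail
    return left, right
```

A flat boundary is averaged away, a moderate step is softened, and a large step (likely a true edge) is left intact.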
Procedia PDF Downloads 464
25407 Assessment of Microorganisms in Irrigation Water Collected from Various Vegetable Growing Areas of SWAT Valley, Khyber Pakhtunkhwa
Authors: Islam Zeb
Abstract:
Water of poor quality is a potential source of contamination and a route for spreading pollutants in the field and the surrounding environment. A number of comprehensive review articles have highlighted irrigation water as a source of pathogenic microorganisms and of heavy metal toxicity that leads to chronic diseases in humans. This study was planned to determine the microbial status of irrigation water collected from various locations of district Swat in various months. The analyses were carried out at the Environmental Horticulture Laboratory, Department of Horticulture, The University of Agriculture Peshawar, during the year 2018-19. The experiment was laid out in a Randomized Complete Block Design (RCBD) with two factors and three replicates: factor A consisted of the different locations, and factor B represented the various months. Across locations, the highest values for Total Bacterial Count, Enterobacteriaceae, E. coli, Salmonella, and Listeria (9.05, 8.54, 6.01, 5.84, and 5.03 log cfu L-1, respectively) were recorded for samples collected from the Mingora location, whereas the lowest values (6.70, 6.38, 4.47, 4.42, and 3.77 log cfu L-1, respectively) were observed for the Matta location. Across months, the maximum counts (12.01, 11.70, 8.46, 8.41, and 6.88 log cfu L-1, respectively) were noted for irrigation water samples collected in May/June, whereas the lowest (4.41, 4.08, 2.61, 2.55, and 3.39 log cfu L-1, respectively) were observed in Jan/Feb.
A significant interaction was found for all the studied parameters. It was concluded that the maximum bacterial counts were recorded in May/June from the Mingora location, which might be due to favorable weather conditions.
Keywords: contamination, irrigation water, microbes, SWAT, various months
Procedia PDF Downloads 66
25406 Adoption of Big Data by Global Chemical Industries
Authors: Ashiff Khan, A. Seetharaman, Abhijit Dasgupta
Abstract:
The new era of big data (BD) is influencing chemical industries tremendously, providing several opportunities to reshape the way they operate and helping them shift towards intelligent manufacturing. Despite the availability of free software and the large amount of real-time data generated and stored in process plants, chemical industries are still in the early stages of big data adoption; the industry is just starting to realize the importance of the large amount of data it owns for making the right decisions and supporting its strategies. This article explores the professional competencies and data science capabilities that influence BD adoption in chemical industries and that can help the sector move towards intelligent manufacturing quickly and reliably. It draws on a literature review and identifies potential applications in the chemical industry for moving from conventional methods to a data-driven approach. The scope of this document is limited to the adoption of BD in chemical industries and the variables identified in this article. To achieve this objective, government, academia, and industry must work together to overcome all present and future challenges.
Keywords: chemical engineering, big data analytics, industrial revolution, professional competence, data science
Procedia PDF Downloads 86
25405 Secure Multiparty Computations for Privacy Preserving Classifiers
Authors: M. Sumana, K. S. Hareesha
Abstract:
Secure computations are essential when performing privacy-preserving data mining. Distributed privacy-preserving data mining involves two or more sites that cannot pool their data at a third party without violating laws that protect individuals. Hence, in order to model the private data without compromising privacy or losing information, secure multiparty computations are used. Secure computations of the product, mean, variance, dot product, and sigmoid function using the additive and multiplicative homomorphic properties are discussed. The computations are performed on vertically partitioned data, with a single site holding the class value.
Keywords: homomorphic property, secure product, secure mean and variance, secure dot product, vertically partitioned data
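As a concrete illustration of the flavor of such protocols, the sketch below computes a sum over private inputs using additive secret sharing over a prime modulus. This is a standard multiparty-computation building block chosen for illustration, not the specific homomorphic-property protocol of the paper: no party's share reveals its input, yet the locally combined shares reconstruct the exact total.

```python
import random

Q = 2 ** 61 - 1  # large prime modulus for share arithmetic

def share(value, n_parties, rng):
    """Split a non-negative secret integer into n additive shares mod Q."""
    shares = [rng.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % Q)  # shares sum to value mod Q
    return shares

def secure_sum(private_values):
    """Each party shares its value with every other party; each party adds
    the shares it holds, and only the combined total is reconstructed."""
    rng = random.Random(0)
    n = len(private_values)
    all_shares = [share(v, n, rng) for v in private_values]
    partials = [sum(col) % Q for col in zip(*all_shares)]  # per-party local sums
    return sum(partials) % Q
```

A secure dot product on vertically partitioned data can be built on the same primitive by sharing the element-wise products before summation.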
Procedia PDF Downloads 412
25404 Decline in Melon Yield and Its Contribution to Young Farmers' Diversification into Watermelon Farming in Oyo State, Nigeria
Authors: Oyediran Wasiu Oyeleke
Abstract:
Melon is a popular economic cucurbit in Southwest Nigeria. In recent times, many young farmers have been shifting from melon to watermelon farming due to poor yields and low monetary returns. Hence, this study was carried out to assess the decline in melon yield and its contribution to young farmers' diversification into watermelon farming in Oyo State, Nigeria. A purposive sampling technique was used to select 75 respondents from five villages in the Ibarapa block of the Oyo State Agricultural Development Project (ADP). Data collected were analyzed using descriptive statistics and the Pearson Product Moment Correlation (PPMC). Results show that the majority of the respondents (77.3%) were between 31 and 40 years of age, and 46.7% had secondary school education. Most of the respondents (80%) cultivated more than 3 ha of land for watermelon. The majority of the respondents (74.7%) intercropped melon with other crops, while watermelon was cultivated as a sole crop. None of the respondents grew improved (certified) melon seeds or applied fertilizers, but all respondents cultivated treated watermelon seeds and applied fertilizers and agro-chemicals. The average yield of melon fell from 376.53 kg/ha in 2009 to 280.70 kg/ha in 2011. The respondents were shifting into watermelon production because of the availability of quality seeds and watermelon's early maturity, easy harvest, and high sales. There was a significant relationship between melon output and young farmers' diversification to watermelon in the study area at p < 0.05. The study concluded that the decline in melon yield discouraged youth from continuing melon farming in the study area. It is recommended that certified melon seeds be made available and that extension service providers offer training support for young farmers in order to reposition and boost melon production in the study area.
Keywords: decline, melon yield, contribution, watermelon, diversification, young farmers
Procedia PDF Downloads 188
25403 Efficacy of Methyl Eugenol and Food-Based Lures in Trapping Oriental Fruit Fly Bactrocera dorsalis (Diptera: Tephritidae) on Mango Homestead Trees
Authors: Juliana Amaka Ugwu
Abstract:
The trapping efficiency of methyl eugenol and three locally made food-based lures for B. dorsalis on mango homestead trees was evaluated at three locations in Ibadan, Southwest Nigeria. The treatments were methyl eugenol, brewery waste, pineapple juice, orange juice, and a control (water). The experiment was laid out in a Complete Randomized Block Design (CRBD) and replicated three times in each location. Data collected were subjected to analysis of variance, and significant means were separated by Tukey's test. The results showed that B. dorsalis was recorded in all study locations. Methyl eugenol trapped a significantly (p < 0.05) higher population of B. dorsalis throughout the study area, and the population density of B. dorsalis was highest during the mango ripening period in all locations. The percentages of flies trapped after 7 weeks were 77.85-82.38% (methyl eugenol), 7.29-8.64% (pineapple juice), 5.62-7.62% (brewery waste), 4.41-5.95% (orange juice), and 0.24-0.47% (control). There were no significant differences (p > 0.05) in the population of B. dorsalis trapped among locations; similarly, there were no significant differences (p > 0.05) among the food attractants, although all three significantly (p < 0.05) trapped more flies than the control. Methyl eugenol trapped only male flies, while brewery waste and the other food-based attractants trapped both male and female flies. The food baits tested were promising attractants for trapping B. dorsalis on mango homestead trees; hence, increased dosages could be considered for monitoring and mass trapping as management strategies against fruit fly infestation.
Keywords: attractants, trapping, mango, Bactrocera dorsalis
Procedia PDF Downloads 123
25402 Cross Project Software Fault Prediction at Design Phase
Authors: Pradeep Singh, Shrish Verma
Abstract:
Software fault prediction models are built from source code, metrics computed from the same or a previous version of the code, and the related fault data. Some companies do not store and track all of the artifacts required for software fault prediction; to construct a fault prediction model for such a company, training data from other projects is one potential solution. The earlier a fault is predicted, the less it costs to correct. The training data consist of metrics and related fault data at the function/module level. This paper investigates fault prediction at an early stage using cross-project data, focusing on design metrics. An empirical analysis is carried out to validate design metrics for cross-project fault prediction, with Naïve Bayes as the machine learning technique used for evaluation. The design-phase metrics of other projects can serve as an initial guideline for projects where no previous fault data are available. Seven data sets from the NASA Metrics Data Program, which offer design as well as code metrics, are analyzed. Overall, the cross-project results are comparable to learning from within-company data.
Keywords: software metrics, fault prediction, cross project, within project
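To make the cross-project setup concrete, here is a minimal from-scratch Gaussian Naïve Bayes sketch (pure Python, no ML library assumed): a model is fitted on design metrics labeled with fault data from one project and applied to modules of a different project. The metric values below are invented for illustration and are not from the NASA data sets.

```python
import math

def fit_gnb(X, y):
    """Fit per-class priors and per-feature Gaussian means/variances."""
    stats = {}
    for c in set(y):
        rows = [x for x, yi in zip(X, y) if yi == c]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        vars_ = [max(sum((v - m) ** 2 for v in col) / len(rows), 1e-9)
                 for col, m in zip(zip(*rows), means)]  # floor avoids zero variance
        stats[c] = (len(rows) / len(y), means, vars_)
    return stats

def predict_gnb(stats, x):
    """Return the class with the highest log-posterior for feature vector x."""
    best, best_lp = None, -math.inf
    for c, (prior, means, vars_) in stats.items():
        lp = math.log(prior)
        for v, m, s2 in zip(x, means, vars_):
            lp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Hypothetical design metrics per module, e.g. (fan-in, cyclomatic complexity),
# with fault labels (1 = faulty) from a source project:
X_train = [[2.0, 3.0], [2.2, 2.8], [9.0, 10.0], [8.5, 9.5]]
y_train = [0, 0, 1, 1]
model = fit_gnb(X_train, y_train)
# Classify a module from a different (target) project:
label = predict_gnb(model, [8.8, 9.2])
```

Training on one project's modules and predicting on another's mirrors the cross-project protocol; in practice, the metric distributions of the two projects would first need to be on comparable scales.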
Procedia PDF Downloads 344
25401 Some Agricultural Characteristics of Cephalaria syriaca Lines Selected from a Population and Developed as Winter Type
Authors: Rahim Ada, Ahmet Tamkoç
Abstract:
The research was conducted in a Randomized Complete Block Design with three replications in the research field of the Faculty of Agriculture, Selcuk University, Konya, Turkey. A total of 9 promising Cephalaria syriaca lines (9, 37, 38, 42, Beyaz 4, 5 Beyaz, 13 Beyaz, 27 Beyaz, Başaklar 2), which were selected from the Sivas population, and 1 population were evaluated over two growing seasons (2012-13 and 2013-14). According to the results, the highest plant height, first branch height, first head height, number of branches per plant, number of heads per plant, head diameter, 1000-seed weight, seed yield, oil content, and oil yield were obtained, respectively, from Başaklar 2 (68.37 cm), Başaklar 2 (37.80 cm), Başaklar 2 (54.83 cm), 37 (7.73 branches/plant), 42 (18.03 heads/plant), 9 (10.30 mm), Başaklar 2 (19.33 g), 27 Beyaz (1254.2 kg ha-1), Başaklar 2 (28.77%), and 27 Beyaz (357.9 kg ha-1).
Keywords: Cephalaria syriaca, yield, oil, population
Procedia PDF Downloads 474