Search results for: data block
25074 Finite Element Modelling and Optimization of Post-Machining Distortion for Large Aerospace Monolithic Components
Authors: Bin Shi, Mouhab Meshreki, Grégoire Bazin, Helmi Attia
Abstract:
Large monolithic components are widely used in the aerospace industry to reduce airplane weight. Milling is an important operation in the manufacturing of monolithic parts: more than 90% of the material may be removed to obtain the final shape, which results in low rigidity and post-machining distortion. Post-machining distortion is the deviation of the final shape from the original design after the clamps are released. It is a major challenge in the machining of monolithic parts, causing billions of dollars in economic losses every year. Three sources are directly related to part distortion: initial residual stresses (RS) generated by previous manufacturing processes, machining-induced RS, and the thermal load generated during machining. A finite element model was developed to simulate a milling process and predict the post-machining distortion. In this study, a rolled-aluminum AA7175 plate with a thickness of 60 mm was used as the raw block. The initial residual stress distribution in the block was measured using a layer-removal method, and a stress-mapping technique was developed to implement the initial stress distribution into the part; it is demonstrated that this technique significantly reduces the simulation time. Machining-induced residual stresses on the machined surface were measured using the MTS3000 hole-drilling strain-gauge system. The measured RS was applied to the machined surface of a plate to predict the distortion, and the prediction was compared with experimental results. It was found that the effect of machining-induced residual stress on the distortion of a thick plate is very limited: the distortion can be ignored if the wall thickness is larger than a certain value. The RS generated by the thermal load during machining is another important factor causing part distortion, yet very little research on this topic has been reported in the literature. A coupled thermo-mechanical FE model was developed to evaluate the thermal effect on the plastic deformation of a plate. A moving heat source with a feed rate was used to simulate the dynamic cutting heat in a milling process; as the heat source passed over the part surface, a thin layer was removed to simulate the cutting operation. The results show that, for different feed rates and plate thicknesses, plastic deformation/distortion occurs only if the temperature exceeds a critical level. In summary, the initial residual stress makes the major contribution to part distortion; the machining-induced stress has limited influence on the distortion of thin-wall structures once the wall thickness exceeds a certain value; and the thermal load can also generate part distortion when the cutting temperature is above a critical level. The developed numerical model was employed to predict the distortion of a frame part with complex structures, and the predictions were in good agreement with the experimental measurements. By optimizing the position of the part inside the raw plate using the developed numerical models, the part distortion can be significantly reduced, by 50%.
Keywords: modelling, monolithic parts, optimization, post-machining distortion, residual stresses
Procedia PDF Downloads 562
25073 Geographic Information System Using Google Fusion Table Technology for the Delivery of Disease Data Information
Authors: I. Nyoman Mahayasa Adiputra
Abstract:
Data in the field of health can be useful for data analysis; one example is disease data, which is usually plotted geographically according to the area where it was collected, in this case the city of Denpasar, Bali. Disease data reports are still published in tabular form, and disease information has not yet been mapped in GIS form. In this research, disease information in Denpasar city is digitized in the form of a geographic information system, with the district as the smallest administrative area. Denpasar City consists of four districts: North Denpasar, East Denpasar, West Denpasar, and South Denpasar. We use Google Fusion Table technology for the map digitization process, a technology that is convenient both for the administrator and for the recipient of the information. On the administrator side, disease data input can be done easily and quickly; on the receiving side, the resulting GIS application can be published as a website-based application that can be accessed anywhere and anytime. In general, the results obtained in this study are divided into two parts: (1) geolocation of Denpasar and all of its districts, where the digitization process produces a polygon geolocation of each district of Denpasar city; these results can be reused in subsequent GIS studies of the same administrative area. (2) Dengue fever mapping for 2014 and 2015, based on dengue fever case data taken from the 2015 and 2016 profile reports of the Denpasar Health Department. This mapping can be useful for analyzing the spread of dengue hemorrhagic fever in the city of Denpasar.
Keywords: geographic information system, Google Fusion Table technology, delivery of disease data information, Denpasar city
Procedia PDF Downloads 132
25072 Inclusive Practices in Health Sciences: Equity Proofing Higher Education Programs
Authors: Mitzi S. Brammer
Abstract:
Given that the cultural make-up of programs of study in institutions of higher learning is becoming increasingly diverse, much has been written about cultural diversity from a university-level perspective. However, there are few data on specific programs and how they address inclusive practices when teaching and working with marginalized populations. This research study aimed to discover the baseline knowledge and attitudes of health sciences faculty, instructional staff, and students related to inclusive teaching/learning and interactions. Quantitative data were collected via an anonymous online survey (one designed for students and another for faculty/instructional staff) using the web-based program Qualtrics, and were analyzed for faculty/instructional staff and students, respectively, using descriptive and comparative statistics (t-tests). Additionally, some participants voluntarily engaged in a focus group discussion in which qualitative data were collected on the same variables; collecting qualitative data to triangulate the quantitative data added trustworthiness to the overall findings. The research team analyzed the collected data, compared identified categories and trends between faculty/staff and students, and reported results as well as implications for future study and professional practice.
Keywords: inclusion, higher education, pedagogy, equity, diversity
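As a minimal sketch of the comparative statistics mentioned above, the snippet below runs an independent-samples t-test on two groups of survey scores; the group names and data are illustrative placeholders, not values from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical Likert-scale survey scores (1-5) for the two respondent groups.
faculty_scores = np.array([4.2, 3.8, 4.5, 3.9, 4.1, 3.6, 4.4, 4.0])
student_scores = np.array([3.5, 3.9, 3.2, 4.1, 3.4, 3.8, 3.3, 3.7])

# Descriptive statistics for each group.
for name, scores in [("faculty/staff", faculty_scores), ("students", student_scores)]:
    print(f"{name}: mean={scores.mean():.2f}, sd={scores.std(ddof=1):.2f}")

# Welch's t-test, which does not assume equal variances between groups.
t_stat, p_value = stats.ttest_ind(faculty_scores, student_scores, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```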
Procedia PDF Downloads 68
25071 An Analysis of Sequential Pattern Mining on Databases Using Approximate Sequential Patterns
Authors: J. Suneetha, Vijayalaxmi
Abstract:
Sequential pattern mining involves applying data mining methods to large data repositories to extract usage patterns. Sequential pattern mining methodologies are used to analyze the data and identify patterns. These patterns have been used to implement efficient systems that can make recommendations based on previously observed patterns, make predictions, improve the usability of systems, detect events, and, in general, help in making strategic product decisions. In this paper, we analyze the performance of approximate sequential pattern mining, defined as identifying patterns approximately shared by many sequences. Approximate sequential patterns can effectively summarize and represent databases by identifying the underlying trends in the data. An extensive and systematic performance evaluation was conducted over synthetic and real data. The results demonstrate that ApproxMAP is effective and scalable in mining large sequence databases with long patterns.
Keywords: multiple data, performance analysis, sequential pattern, sequence database scalability
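As a rough illustration of "patterns approximately shared by many sequences" (not the ApproxMAP algorithm itself, which aligns and clusters sequences), the sketch below counts a pattern as approximately supported by a sequence when, in order, most of the pattern's items occur in it; the tolerance value is an arbitrary assumption.

```python
from math import ceil

def lcs_length(pattern, sequence):
    # Classic dynamic-programming longest common subsequence.
    m, n = len(pattern), len(sequence)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if pattern[i - 1] == sequence[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def approx_support(pattern, database, tolerance=0.25):
    # A sequence approximately supports the pattern if, in order,
    # at least (1 - tolerance) of the pattern's items occur in it.
    needed = ceil((1 - tolerance) * len(pattern))
    return sum(lcs_length(pattern, seq) >= needed for seq in database)

database = [list("abcde"), list("axcye"), list("abde"), list("xyz")]
print(approx_support(list("abcde"), database))  # 2: "abcde" and "abde" qualify
```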
Procedia PDF Downloads 346
25070 Medical Knowledge Management since the Integration of Heterogeneous Data until the Knowledge Exploitation in a Decision-Making System
Authors: Nadjat Zerf Boudjettou, Fahima Nader, Rachid Chalal
Abstract:
Knowledge management means acquiring and representing knowledge relevant to a domain, a task, or a specific organization in order to facilitate its access, reuse, and evolution. This usually means building, maintaining, and evolving an explicit representation of knowledge. The next step is to provide access to that knowledge, that is, to disseminate it in order to enable effective use. Knowledge management in the medical field aims to improve the performance of the medical organization by allowing individuals in the care facility (doctors, nurses, paramedics, etc.) to capture, share, and apply collective knowledge in order to make optimal decisions in real time. In this paper, we propose a knowledge management approach based on a technique for integrating heterogeneous data in the medical field by creating a data warehouse, a technique for extracting knowledge from medical data by choosing a data mining method, and, finally, a technique for exploiting that knowledge in a case-based reasoning system.
Keywords: data warehouse, data mining, knowledge discovery in databases (KDD), medical knowledge management, Bayesian networks
Procedia PDF Downloads 396
25069 GIS Data Governance: GIS Data Submission Process for Build-in Project, Replacement Project at Oman Electricity Transmission Company
Authors: Rahma Al Balushi
Abstract:
Oman Electricity Transmission Company's (OETC) vision is to be a renowned world-class transmission grid by 2025, and one of the indications of achieving this vision is obtaining Asset Management ISO 55001 certification, which requires setting out documented Standard Operating Procedures (SOPs). Hence, a documented SOP for the geographical information system (GIS) data process has been established. Also, to effectively manage and improve OETC power transmission, asset data and information need to be governed by the Asset Information & GIS department. This paper describes the GIS data submission process in detail and the journey to develop the current process. The methodology used to develop the process is based on three main pillars: system and end-user requirements; risk evaluation; and data availability and accuracy. The output of this paper shows the dramatic change in the process used, which subsequently results in more efficient, accurate, and up-to-date data. Furthermore, owing to this process, GIS has been, and is, ready to be integrated with other systems, as well as serving as the source of data for all OETC users. Some decisions related to issuing no-objection certificates (NOC) and scheduling asset maintenance plans in the Computerized Maintenance Management System (CMMS) have consequently been made based upon GIS data availability. On the other hand, defining agreed and documented procedures for data collection, data system updates, data release/reporting, and data alterations also helped to reduce the missing attributes in GIS transmission data. A considerable difference in geodatabase (GDB) completeness percentage was observed between 2017 and 2021. Overall, it is concluded that, through governance, the Asset Information & GIS department can control the GIS data process and collect, properly record, and manage asset data and information within the OETC network. This control extends to other applications and systems integrated with or related to GIS systems.
Keywords: asset management ISO 55001, standard procedures process, governance, geodatabase, NOC, CMMS
Procedia PDF Downloads 208
25068 Importance of Ethics in Cloud Security
Authors: Pallavi Malhotra
Abstract:
This paper examines the importance of ethics in cloud computing. In modern society, cloud computing offers individuals and businesses virtually unlimited space for storing and processing data and information. Much of the data and information stored in the cloud by various users, such as banks, doctors, architects, engineers, lawyers, consulting firms, and financial institutions, requires a high level of confidentiality and safeguarding. Cloud computing offers centralized storage and processing of data, which has contributed immensely to the growth of businesses and improved the sharing of information over the internet. However, the accessibility and management of data and servers by a third party raise concerns regarding the privacy of clients' information and possible manipulation of the data by third parties. This document suggests the approaches various stakeholders should take to address ethical issues involving cloud-computing services. Ethical education and training are key for all stakeholders involved in handling data and information stored or processed in the cloud.
Keywords: IT ethics, cloud computing technology, cloud privacy and security, ethical education
Procedia PDF Downloads 326
25067 The Feminism of Data Privacy and Protection in Africa
Authors: Olayinka Adeniyi, Melissa Omino
Abstract:
The field of data privacy and data protection in Africa is still evolving, with many African countries yet to enact legislation on the subject. While African governments are bringing their legislation up to speed in this field, it must be considered how patriarchy pervades every sector of African thought and manifests in society. Moreover, the laws enacted ought to be inclusive, especially towards women. This, in a nutshell, is the essence of data feminism. Data feminism is a new way of thinking about data science and data ethics informed by the ideas of intersectional feminism. Feminizing data privacy and protection involves considering women in the issues of data privacy and protection, particularly in legislation, as is the case in this paper. This line of thought is not uncommon: even the international and regional human rights instruments specific to women came long after the general human rights instruments, although such provisions arguably should have been included in the original general instruments in the first place. Since legislation on data privacy is arriving only in this century, having seen the rights and shortcomings of earlier instruments, the cue should be taken to ensure inclusive, holistic legislation for data privacy and protection from the outset. Data feminism is arguably an area that has been only scantily researched, albeit a needful one. With violence against women spiraling in the cyber world, compounded by COVID-19 and the necessary responses of governments, and the effect of these on women and their rights, research on the feminism of data privacy and protection in Africa becomes inevitable. This paper seeks to answer the following questions: what is data feminism in the African context, and why is it important in data privacy and protection legislation; what laws, if any, exist on data privacy and protection in Africa, are they women-inclusive, and if not, why; what measures are in place for the privacy and protection of women in Africa, and how can these be made possible? The paper aims to investigate the issue of data privacy and protection in Africa, the legal framework, and the protection or provision it offers women, if any. It further researches the importance and necessity of feminizing data privacy and protection, the effect of its absence, the challenges or bottlenecks in attaining this feat, and the possibilities of accessing data privacy and protection for African women. The paper also examines emerging practices of data privacy and protection of women in other jurisdictions. It approaches the research through a methodology of reviewing papers and analyzing laws and reports. It seeks to contribute to the existing literature in the field and is explorative in its suggestions, including a draft of clauses to make any data privacy and protection legislation women-inclusive. It would be useful for policymaking, academia, and public enlightenment.
Keywords: feminism, women, law, data, Africa
Procedia PDF Downloads 208
25066 Evaluation of Practicality of On-Demand Bus Using Actual Taxi-Use Data through Exhaustive Simulations
Authors: Jun-ichi Ochiai, Itsuki Noda, Ryo Kanamori, Keiji Hirata, Hitoshi Matsubara, Hideyuki Nakashima
Abstract:
We conducted exhaustive simulations for data assimilation and evaluation of service quality under various settings in a new shared transportation system called SAVS. Computational social simulation is a key technology for designing recent social services, such as SAVS, as new transportation services. One open issue in SAVS was determining the service scale through social simulation. Using our exhaustive simulation framework, OACIS, we performed data assimilation and evaluated the effects of SAVS based on actual taxi-use data from Tajimi city, Japan. Finally, we obtained the conditions under which the new service can be realized with reasonable service quality.
Keywords: on-demand bus system, social simulation, data assimilation, exhaustive simulation
Procedia PDF Downloads 321
25065 Structural Parameter-Induced Focusing Pattern Transformation in CEA Microfluidic Device
Authors: Xin Shi, Wei Tan, Guorui Zhu
Abstract:
The contraction-expansion array (CEA) microfluidic device is widely used for particle focusing and particle separation. Without the introduction of external fields, it can manipulate particles using hydrodynamic forces, including inertial lift forces and Dean drag forces. The focusing pattern of the particles in a CEA channel can be affected by the structural parameter, the block ratio, and the flow streamlines. Here, two typical focusing patterns with five different structural parameters were investigated, and the force mechanism was analyzed. We present nine CEA channels with different aspect ratios based on the process of changing the particle equilibrium positions. The results show that 10-15 μm particles have the potential to generate a side focusing line as the structural parameter (R𝓌) increases. For a given channel structure and target particles, when the Reynolds number (Rₑ) exceeds a critical value, the focusing pattern transforms from a single pattern to a double pattern. The parameter α/R𝓌 can be used to calculate the critical Reynolds number for the focusing pattern transformation. These results can provide guidance for microchannel design and biomedical analysis.
Keywords: microfluidic, inertial focusing, particle separation, Dean flow
Procedia PDF Downloads 80
25064 Optimal Pricing Based on Real Estate Demand Data
Authors: Vanessa Kummer, Maik Meusel
Abstract:
Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information; for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete data are used. Usually, however, the proportion of complete data is rather small, which leads to most information being neglected; the complete data may also be strongly distorted. In addition, the reason that data are missing might itself contain information, which is ignored by that approach. An interesting question is, therefore, whether for economic analyses such as the one at hand there is added value in using the whole data set with imputed missing values compared to using the usually small percentage of complete data (the baseline), and how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, and neural network techniques, are applied. By training the model iteratively on the imputed data and thereby including the information of all data in the model, the distortion of the first training set (the complete data) vanishes. In a next step, the performance of the algorithms is measured by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data. After the optimal parameter set has been found for each algorithm, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on the imputed data sets do not differ significantly from each other, whereas the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Also, demand estimates derived from the whole data set are much more accurate than the baseline estimation. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning
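Below is a minimal sketch of the evaluation loop described above: randomly masking known values, imputing them, and scoring the imputations against the held-out truth. A KNN imputer stands in for the clustering/PCA/neural approaches compared in the paper, and the feature names and distributions are illustrative assumptions.

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)

# Hypothetical search-subscription features: rooms, max price (kCHF), area (m2).
X_true = rng.normal(loc=[3.5, 800.0, 90.0], scale=[1.0, 200.0, 25.0], size=(500, 3))

# Randomly mask 20% of the entries to mimic missing subscription fields.
mask = rng.random(X_true.shape) < 0.2
X_missing = X_true.copy()
X_missing[mask] = np.nan

# Impute with one candidate algorithm; the paper compares several.
imputer = KNNImputer(n_neighbors=5)
X_imputed = imputer.fit_transform(X_missing)

# Score the imputations against the held-out true values.
rmse = np.sqrt(((X_imputed[mask] - X_true[mask]) ** 2).mean())
print(f"overall RMSE on masked entries: {rmse:.2f}")
```

Repeating this for each algorithm and parameter combination, and keeping the best performer per algorithm, mirrors the parameter search the abstract describes.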
Procedia PDF Downloads 290
25063 Unlocking the Puzzle of Borrowing Adult Data for Designing Hybrid Pediatric Clinical Trials
Authors: Rajesh Kumar G
Abstract:
A challenging aspect of any clinical trial is carefully planning the study design to meet the study objective in an optimal way and validating the assumptions made during protocol design. When it is a pediatric study, there is the added challenge of stringent guidelines and difficulty in recruiting the necessary subjects. Unlike adult trials, not much historical data is available for pediatrics, although such data is required to validate the assumptions used in planning pediatric trials. Typically, pediatric studies are initiated as soon as approval is obtained for a drug to be marketed for adults, so with the historical information from the adult study, together with available pediatric pilot study data or simulated pediatric data, the pediatric study can be well planned. Generalizing a historical adult study to a new pediatric study is a tedious task; however, it is possible by integrating various statistical techniques and exploiting the advantages of a hybrid study design, which helps achieve the study objective smoothly even in the presence of many constraints. This paper explains how a hybrid study design can be planned along with an integrated technique (SEV) for planning the pediatric study. In brief, the SEV technique (Simulation, Estimation using borrowed adult data and Bayesian methods, and Validation) incorporates simulating the planned study data and obtaining the desired estimates to validate the assumptions. This method of validation can be used to improve the accuracy of data analysis, ensuring that results are as valid and reliable as possible, and allows us to make informed decisions well ahead of study initiation. With professional precision, this technique, based on the collected data, gives insight into best practices when using data from a historical study and simulated data alike.
Keywords: adaptive design, simulation, borrowing data, Bayesian model
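A minimal sketch of the simulate-then-estimate idea follows, assuming a normal endpoint and a simple power prior that discounts the borrowed adult data; the paper does not specify its model, so all numbers, the discount weight, and the endpoint are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical historical adult data: treatment effect on a normal endpoint.
adult = rng.normal(loc=1.8, scale=1.0, size=400)

# Power prior: discount the adult information by a0 in (0, 1].
a0 = 0.3
sigma2 = 1.0                       # assumed known sampling variance
prior_prec = a0 * len(adult) / sigma2
prior_mean = adult.mean()

# Simulate the planned (small) pediatric study under an assumed true effect.
true_pediatric_effect = 1.5
child = rng.normal(loc=true_pediatric_effect, scale=1.0, size=40)

# Conjugate normal update: posterior precision and mean.
data_prec = len(child) / sigma2
post_prec = prior_prec + data_prec
post_mean = (prior_prec * prior_mean + data_prec * child.mean()) / post_prec
post_sd = post_prec ** -0.5

print(f"posterior effect: {post_mean:.2f} +/- {post_sd:.2f}")
# Repeating this simulation many times, varying the assumed truth, gives the
# operating characteristics used to validate the design assumptions.
```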
Procedia PDF Downloads 77
25062 Analyzing Test Data Generation Techniques Using Evolutionary Algorithms
Authors: Arslan Ellahi, Syed Amjad Hussain
Abstract:
Software testing is a vital process in the software development life cycle; the quality of software is attained by passing it through the software testing phase. We have surveyed automatic test data generation techniques, a key research area of software testing aimed at achieving the test automation that can eventually decrease testing time. In this paper, we review some of the approaches presented in the literature which use evolutionary search-based algorithms, such as the genetic algorithm and particle swarm optimization (PSO), to drive the test data generation process. We also look into the quality of the generated test data, which increases or decreases the efficiency of testing. We have proposed test data generation techniques for model-based testing and have worked on tuning the PSO algorithm and its fitness function.
Keywords: search based, evolutionary algorithm, particle swarm optimization, genetic algorithm, test data generation
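To make the search-based idea concrete, here is a small sketch of PSO minimizing a branch-distance fitness so that the swarm converges on a test input covering a target branch; the branch condition, swarm size, and PSO coefficients are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def branch_distance(x):
    # Target branch in hypothetical code under test:
    #   if x[0] ** 2 - 4 * x[0] + 3 == 0: ...
    # The closer the expression is to zero, the closer we are to covering it.
    return abs(x[0] ** 2 - 4 * x[0] + 3)

# Standard PSO with inertia, cognitive, and social terms.
n_particles, n_dims, iters = 20, 1, 100
w, c1, c2 = 0.7, 1.5, 1.5

pos = rng.uniform(-10, 10, (n_particles, n_dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([branch_distance(p) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    fit = np.array([branch_distance(p) for p in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmin()].copy()

print(f"test input covering the branch: x = {gbest[0]:.4f}")  # near 1.0 or 3.0
```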
Procedia PDF Downloads 191
25061 Comparative Analysis of the Third Generation of Research Data for Evaluation of Solar Energy Potential
Authors: Claudineia Brazil, Elison Eduardo Jardim Bierhals, Luciane Teresa Salvi, Rafael Haag
Abstract:
Renewable energy sources depend on climatic variability, so adequate energy planning requires observations of meteorological variables, preferably as long-period series. Despite the scientific and technological advances that meteorological measurement systems have undergone in recent decades, there is still a considerable lack of meteorological observations forming long-period series. Reanalysis is a data assimilation system built on general atmospheric circulation models, combining data collected at surface stations, ocean buoys, satellites, and radiosondes, and allowing the production of long-period data for a wide range of variables. The third generation of reanalysis data emerged in 2010; among these products is the Climate Forecast System Reanalysis (CFSR) developed by the National Centers for Environmental Prediction (NCEP), whose data have a spatial resolution of 0.5° x 0.5°. To overcome the difficulties described above, this work evaluates the performance of solar radiation estimation from alternative data bases, such as reanalysis data and meteorological satellite data, which can satisfactorily compensate for the absence of solar radiation observations at the global and/or regional level. The analysis of the solar radiation data indicated that the reanalysis data of the CFSR model performed well relative to the observed data, with a coefficient of determination around 0.90. It is therefore concluded that these data have the potential to be used as an alternative source at locations lacking stations or long series of solar radiation observations, which is important for the evaluation of solar energy potential.
Keywords: climate, reanalysis, renewable energy, solar radiation
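A small sketch of the validation step follows, computing the coefficient of determination between observed and reanalysis-derived daily solar radiation; the paired series are illustrative placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical paired daily solar radiation series (MJ m-2 day-1):
# station observations vs. values extracted from the matching CFSR grid cell.
observed = np.array([18.2, 21.5, 16.8, 23.1, 19.4, 25.0, 17.3, 22.6])
reanalysis = np.array([17.5, 22.0, 17.9, 22.4, 18.8, 24.1, 18.0, 21.9])

# Coefficient of determination: R^2 = 1 - SS_res / SS_tot.
ss_res = np.sum((observed - reanalysis) ** 2)
ss_tot = np.sum((observed - observed.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.3f}")
```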
Procedia PDF Downloads 209
25060 Determination of Agricultural Characteristics of Smooth Bromegrass (Bromus inermis Leyss) Lines under Konya Regional Conditions
Authors: Abdullah Özköse, Ahmet Tamkoç
Abstract:
The present study was conducted to determine the yield and yield components of smooth bromegrass lines under the environmental conditions of the Konya region during the growing seasons between 2011 and 2013. The experiment was performed in a randomized complete block design (RCBD) with four replications. It was found that the selected lines had a statistically significant effect on all the investigated traits except the main stem length and the number of nodes in the main stem. According to the two-year averages calculated for the various parameters, the main stem length ranged from 71.6 cm to 79.1 cm, the main stem diameter from 2.12 mm to 2.70 mm, the number of nodes in the main stem from 3.2 to 3.7, the internode length from 11.6 cm to 18.9 cm, flag leaf length from 9.7 cm to 12.7 cm, flag leaf width from 3.58 mm to 6.04 mm, herbage yield from 221.3 kg da⁻¹ to 354.7 kg da⁻¹, and hay yield from 100.4 kg da⁻¹ to 190.1 kg da⁻¹. The study concluded that the smooth bromegrass lines differ in yield and yield components. Therefore, it is crucial to select suitable varieties of smooth bromegrass to obtain optimum yield.
Keywords: semiarid region, smooth bromegrass, yield, yield components
Procedia PDF Downloads 276
25059 Effect of Time and Rate of Nitrogen Application on the Malting Quality of Barley Yield in Sandy Soil
Authors: A. S. Talaab, Safaa, A. Mahmoud, Hanan S. Siam
Abstract:
A field experiment was conducted during the winter season of 2013/2014 in the barley production area of Dakhala, New Valley Governorate, Egypt, to assess the effect of nitrogen rate and time of N fertilizer application on barley grain yield, yield components, and N use efficiency, and their association with grain yield. The treatments consisted of three levels of nitrogen (0, 70, and 100 kg N/acre) and five application times. The experiment was laid out as a randomized complete block design with three replications. Results revealed that barley grain yield and yield components increased significantly in response to N rate. Splitting the N fertilizer amount over several applications had a significant effect on grain yield, yield components, protein content, and N uptake efficiency compared with applying the entire N amount at once. Application of N at a rate of 100 kg N/acre resulted in the accumulation of nitrate in the subsurface soil below 30 cm. When N application timing was considered, less NO3 was found in the soil profile with split N application compared with all-preplant application.
Keywords: nitrogen use efficiency, splitting N fertilizer, barley, NO3
Procedia PDF Downloads 314
25058 Data Mining Spatial: Unsupervised Classification of Geographic Data
Authors: Chahrazed Zouaoui
Abstract:
In recent years, the volume of geospatial information has been increasing due to the evolution of communication and information technologies; this information is often presented by geographic information systems (GIS) and stored in spatial databases. Classical data mining has shown a weakness in knowledge extraction from these enormous amounts of data due to the particularity of spatial entities, which are characterized by interdependence between them (the first law of geography). This gave rise to spatial data mining. Spatial data mining is a process of analyzing geographic data that allows the extraction of knowledge and spatial relationships from geospatial data; among the methods of this process we distinguish the monothematic and the thematic. Geo-clustering is one of the main tasks of spatial data mining and belongs to the monothematic methods. It groups similar geo-spatial entities into the same class and assigns more dissimilar ones to different classes; in other words, it maximizes intra-class similarity and minimizes inter-class similarity, taking into account the particularity of geo-spatial data. Two approaches to geo-clustering exist: dynamic processing of the data, which involves applying algorithms designed for the direct treatment of spatial data, and an approach based on spatial data pre-processing, which consists of applying classic clustering algorithms to pre-processed data (by integration of spatial relationships). The latter approach (based on pre-treatment) is quite complex in various cases, so the search for approximate solutions involves the use of approximation algorithms, including the algorithms we are interested in: dedicated approaches (partitioning and density-based clustering methods) and bee-inspired approaches (biomimetic). Our study proposes a design highly relevant to this problem, using different algorithms for automatically detecting geo-spatial neighborhoods in order to implement geo-clustering by pre-treatment, and applying the bees algorithm to this problem for the first time in the geo-spatial field.
Keywords: mining, GIS, geo-clustering, neighborhood
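As a concrete taste of one of the dedicated, density-based approaches named above (not the bees algorithm), the sketch below clusters geo-spatial points with DBSCAN under a haversine distance; the coordinates and the 5 km radius are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical geo-spatial entities as (latitude, longitude) in degrees.
points = np.array([
    [36.75, 3.05], [36.76, 3.06], [36.74, 3.04],    # dense group A
    [35.70, -0.63], [35.71, -0.64], [35.69, -0.62], # dense group B
    [34.00, 5.00],                                  # isolated point (noise)
])

# DBSCAN with the haversine metric expects coordinates in radians;
# eps is an angular distance (here ~5 km on Earth, radius ~6371 km).
eps_km = 5.0
db = DBSCAN(eps=eps_km / 6371.0, min_samples=2, metric="haversine")
labels = db.fit_predict(np.radians(points))

print(labels)  # e.g. [0 0 0 1 1 1 -1]: two clusters, one noise point
```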
Procedia PDF Downloads 375
25057 Root Biomass Growth in Different Growth Stages of Wheat and Barley Cultivars
Abstract:
This work was conducted under greenhouse conditions in order to investigate the root biomass growth of two bread wheat, two durum wheat, and two barley cultivars that are grown in irrigated and dry lands, respectively. The work was planned with four replications in a completely randomized block design in the 2011-2012 growing season. Root biomass growth was evaluated at the stages of stem elongation (GS 31), completion of anthesis (GS 69), and full grain maturity (GS 92). Results showed significant differences in root biomass between cultivars grown in dry and irrigated lands at all growth stages (P < 0.01). At all growth stages, the dry-type bread and durum wheats generally had higher root biomass than the irrigated-type cultivars; furthermore, the dry-type barley cultivar had higher root biomass at GS 31 and GS 69, but lower at GS 92, than Larende. In all cultivars, root biomass increased between GS 31 and GS 69, with the dry-type cultivars showing a greater increase than the irrigated-type cultivars. The root biomass of bread wheat increased between GS 69 and GS 92, whereas the root biomass of barley and durum wheat decreased.
Keywords: bread and durum wheat, barley, root biomass, different growth stage
Procedia PDF Downloads 606
25056 An Image Segmentation Algorithm for Gradient Target Based on Mean-Shift and Dictionary Learning
Authors: Yanwen Li, Shuguo Xie
Abstract:
In electromagnetic imaging, because the system is diffraction-limited, pixel values change slowly near the edges of image targets and also change with location within the same target. Using traditional digital image segmentation methods to segment electromagnetic gradient images can therefore produce many errors. To address this issue, this paper proposes a novel image segmentation and extraction algorithm based on Mean-Shift and dictionary learning. First, the preliminary segmentation results from an adaptive-bandwidth Mean-Shift algorithm are expanded, merged, and extracted. Then the overlap rate of the extracted image blocks is checked before determining a segmentation region containing a single complete target. Finally, the gradient edge of the extracted targets is recovered and reconstructed using a dictionary-learning algorithm, yielding final segmentation results that are very close to the gradient target in the original image. Both the experimental results and the simulated results show that the segmentation is very accurate: the Dice coefficients are improved by 70% to 80% compared with the Mean-Shift-only method.
Keywords: gradient image, segmentation and extraction, mean-shift algorithm, dictionary learning
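The following sketch shows only the Mean-Shift pre-segmentation idea (not the paper's adaptive-bandwidth variant or the dictionary-learning reconstruction), clustering pixels in a joint spatial-intensity feature space; the synthetic image and the intensity weight are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.default_rng(0)

# Hypothetical 32x32 grayscale "gradient target": a bright blob whose pixel
# values fall off slowly toward the edge, over a noisy background.
yy, xx = np.mgrid[0:32, 0:32]
image = np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / 60.0) + 0.05 * rng.random((32, 32))

# Cluster pixels in a joint (row, col, intensity) feature space, so that
# spatial proximity and similar intensity both pull pixels together.
features = np.column_stack([yy.ravel(), xx.ravel(), 40.0 * image.ravel()])
bandwidth = estimate_bandwidth(features, quantile=0.2, random_state=0)
labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(features)

segmentation = labels.reshape(image.shape)
print(f"found {segmentation.max() + 1} segments")
```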
Procedia PDF Downloads 267
25055 Analysis and Prediction of Netflix Viewing History Using Netflixlatte as an Enriched Real Data Pool
Authors: Amir Mabhout, Toktam Ghafarian, Amirhossein Farzin, Zahra Makki, Sajjad Alizadeh, Amirhossein Ghavi
Abstract:
The high number of Netflix subscribers makes it attractive for data scientists to extract valuable knowledge from viewers' behavioral analyses. This paper presents a set of statistical insights into viewers' viewing history. A deep learning model is then used to predict the future watching behavior of users based on their previous watching history within the Netflixlatte data pool. Netflixlatte is an aggregated and anonymized data pool of 320 Netflix viewers comprising 250,000 data points recorded between 2008 and 2022. We observe insightful correlations between the distribution of viewing time and the COVID-19 pandemic outbreak. The presented deep learning model predicts future movie and TV series viewing habits with an average loss of 0.175.
Keywords: data analysis, deep learning, LSTM neural network, Netflix
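The paper does not specify its architecture beyond "LSTM", so the sketch below is a generic one-layer sequence-to-next-step model with made-up feature names and shapes, shown only to illustrate the prediction setup.

```python
import torch
import torch.nn as nn

class ViewingLSTM(nn.Module):
    """Predict the next step of a viewing-behavior series from a window of history."""

    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x):             # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # prediction for the next time step

# Hypothetical training setup: windows of 30 past observations per viewer,
# each encoded as 4 features (e.g., daily minutes watched, genre shares).
model = ViewingLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(16, 30, 4)   # a placeholder mini-batch
y = torch.randn(16, 4)
for _ in range(5):           # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(f"final batch loss: {loss.item():.3f}")
```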
Procedia PDF Downloads 255
25054 Analysis of User Data Usage Trends on Cellular and Wi-Fi Networks
Authors: Jayesh M. Patel, Bharat P. Modi
Abstract:
The availability of Wi-Fi on mobile devices has demonstrated that the total data demand from users is far higher than previously articulated by measurements based solely on a cellular-centric view of smartphone usage. The ratio of Wi-Fi to cellular traffic varies significantly between countries. This paper presents a comparison between users' cellular data usage and Wi-Fi data usage. This analysis helps operators understand the growing importance and application of yield management strategies designed to squeeze maximum returns from their investments in the networks and devices that enable the mobile data ecosystem. The transition from unlimited data plans towards tiered pricing and, in the future, towards more value-centric pricing offers significant revenue upside potential for mobile operators; but, without complete insight into all aspects of smartphone customer behavior, operators will be unlikely to capture the maximum return from this billion-dollar market opportunity.
Keywords: cellular, Wi-Fi, mobile, smart phone
Procedia PDF Downloads 366
25053 Data Driven Infrastructure Planning for Offshore Wind Farms
Authors: Isha Saxena, Behzad Kazemtabrizi, Matthias C. M. Troffaes, Christopher Crabtree
Abstract:
The calculations done at the beginning of the life of a wind farm are rarely reliable, which makes it important to conduct research into the failure and repair rates of wind turbines under various conditions. The miscalculation arises because current models make the simplifying assumption that the failure/repair rate remains constant over time, which means that the reliability function is exponential in nature. This research aims to create a more accurate model using sensory data and a data-driven approach. Data cleaning and processing are done by comparing the power curve data of the wind turbines with SCADA data, which is then converted to times-to-repair and times-to-failure time series data. Several mathematical functions are fitted to the times-to-failure and times-to-repair data of the wind turbine components using maximum likelihood estimation and the posterior expectation method for Bayesian parameter estimation. Initial results indicate that the two-parameter Weibull function and the exponential function produce almost identical results. Further analysis is being done using complex system analysis, considering the failures of each electrical and mechanical component of the wind turbine. The aim of this project is to perform a more accurate reliability analysis that can help engineers schedule maintenance and repairs to decrease the downtime of the turbine.
Keywords: reliability, Bayesian parameter inference, maximum likelihood estimation, Weibull function, SCADA data
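As a minimal sketch of the maximum-likelihood fitting step, the snippet below fits a two-parameter Weibull and an exponential to synthetic times-to-failure and compares their log-likelihoods; the data are generated, not from the project's SCADA records.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical times-to-failure (days) for one turbine component.
ttf = rng.weibull(1.8, size=200) * 120.0

# Maximum likelihood fit of a two-parameter Weibull (location fixed at 0).
shape, loc, scale = stats.weibull_min.fit(ttf, floc=0)
print(f"Weibull shape k = {shape:.2f}, scale lambda = {scale:.1f} days")

# Compare against the exponential fit (constant failure rate, k = 1).
exp_loc, exp_scale = stats.expon.fit(ttf, floc=0)
ll_weibull = stats.weibull_min.logpdf(ttf, shape, loc, scale).sum()
ll_expon = stats.expon.logpdf(ttf, exp_loc, exp_scale).sum()
print(f"log-likelihood: Weibull {ll_weibull:.1f} vs exponential {ll_expon:.1f}")
# A fitted shape k != 1 indicates the failure rate is not constant in time.
```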
Procedia PDF Downloads 87
25052 Empirical Acceleration Functions and Fuzzy Information
Authors: Muhammad Shafiq
Abstract:
In accelerated life testing approaches, lifetime data are obtained under conditions considered more severe than the usual condition. Classical techniques are based on precise measurements and are used to model variation among the observations. In fact, there are two types of uncertainty in data: variation among the observations and fuzziness. Analysis techniques that do not consider fuzziness and are based only on precise lifetime observations lead to pseudo results. This study aimed to examine the behavior of empirical acceleration functions using fuzzy lifetime data. The results showed increased fuzziness in the transformed lifetimes compared to the input data.
Keywords: acceleration function, accelerated life testing, fuzzy number, non-precise data
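A toy sketch of the idea follows, assuming lifetimes are modeled as triangular fuzzy numbers and a crisp empirical acceleration factor maps stressed lifetimes back to use conditions; the numbers and the factor are illustrative, and the growing support width shows how fuzziness increases under the transformation.

```python
# Triangular fuzzy number represented as (left, peak, right); values illustrative.
def scale_tfn(tfn, factor):
    """Transform a fuzzy lifetime by a crisp acceleration factor."""
    left, peak, right = tfn
    return (left * factor, peak * factor, right * factor)

def width(tfn):
    """Width of the support, a simple measure of fuzziness."""
    return tfn[2] - tfn[0]

# Hypothetical fuzzy lifetimes (hours) observed at a high stress level.
stressed_lifetimes = [(90, 100, 115), (140, 160, 175), (200, 230, 260)]

# Empirical acceleration factor mapping stressed lifetimes to use conditions,
# e.g. estimated from median lifetimes at the two stress levels.
af = 4.0

for tfn in stressed_lifetimes:
    at_use = scale_tfn(tfn, af)
    print(f"stressed {tfn} -> use {at_use}; "
          f"fuzziness {width(tfn)} -> {width(at_use)}")
```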
Procedia PDF Downloads 301
25051 Procedure for Monitoring the Process of Behavior of Thermal Cracking in Concrete Gravity Dams: A Case Study
Authors: Adriana de Paula Lacerda Santos, Bruna Godke, Mauro Lacerda Santos Filho
Abstract:
Several dams around the world have already collapsed, causing environmental, social, and economic damage. The concern to avoid future disasters has stimulated the creation of a great number of laws and rules in many countries. In Brazil, Law 12.334/2010 was created, establishing the National Policy on Dam Safety. Overall, this policy requires dam owners to invest in the maintenance of their structures and to improve their monitoring systems in order to provide faster and more straightforward responses when risks increase. As monitoring tools, visual inspections provide a comprehensive assessment of structural performance, while auscultation instrumentation adds specific information on operational or behavioral changes, raising an alarm when a performance indicator exceeds acceptable limits. These limits can be set using statistical methods based on the relationship between instrument measurements and other variables, such as reservoir level, time of year, or the measurements of other instruments. Besides the design parameters (uplift of the foundation, displacements, etc.), dam instrumentation can also be used to monitor the behavior of defects and damage manifestations. Specifically in concrete gravity dams, one of the main causes of cracking is the concrete volumetric change generated by thermal phenomena, which are associated with the construction process of these structures. Based on this, the goal of this research is to propose a process for monitoring the behavior of thermal cracking in concrete gravity dams through instrumentation data analysis and the establishment of control values. As a case study, Block B-11 of the Governor José Richa Dam Power Plant was selected; it presents a cracking process that was identified even before the reservoir was filled in August 1998, and crack meters and surface thermometers were installed for its monitoring. Although these instruments were installed in May 2004, the research was restricted to the last 4.5 years of data (June 2010 to November 2014), when all the instruments were calibrated and producing reliable data. The adopted method is based on simple linear correlation procedures for understanding the interactions among the instrument time series and verifying the response times between them. Scatter plots were drafted from the best correlations, which supported the definition of the limit control values. Among the conclusions, it is shown that there is a strong or very strong correlation between the ambient temperature and the crack meter and flowmeter measurements. Based on the results of the statistical analysis, it was possible to develop a tool for monitoring the behavior of the case study cracks, thus fulfilling the goal of developing a proposal for a process of monitoring the behavior of thermal cracking in concrete gravity dams.
Keywords: concrete gravity dam, dams safety, instrumentation, simple linear correlation
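A minimal sketch of the statistical core of such a procedure is shown below: measuring the correlation between temperature and crack-meter series, then deriving control limits from the regression residuals. The series, the 3-sigma threshold, and the linear model are illustrative assumptions, not the study's actual data or limits.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical daily series: ambient temperature (deg C) and crack opening (mm).
temperature = 20 + 8 * np.sin(np.linspace(0, 6 * np.pi, 365)) + rng.normal(0, 1, 365)
crack_opening = 1.2 - 0.03 * temperature + rng.normal(0, 0.03, 365)

# Strength of the linear relationship between the two instrument series.
r, p = stats.pearsonr(temperature, crack_opening)
print(f"Pearson r = {r:.2f} (p = {p:.1e})")

# Control limits from the regression: flag readings that deviate from the
# temperature-predicted crack opening by more than 3 residual standard deviations.
slope, intercept, *_ = stats.linregress(temperature, crack_opening)
predicted = intercept + slope * temperature
resid_sd = np.std(crack_opening - predicted, ddof=2)
alarms = np.abs(crack_opening - predicted) > 3 * resid_sd
print(f"readings outside control limits: {alarms.sum()}")
```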
Procedia PDF Downloads 292
25050 Maize Farmers’ Perception of Sharp Practices among Agro-Input Dealers in Ibadan/Ibarapa Agricultural Zone, Oyo State
Authors: Ademola A. Ladele, Peace I. Aburime
Abstract:
Fake and substandard agricultural inputs pose a serious stumbling block to farm productivity and, subsequently, to improved livelihoods. There is, therefore, a need to pave the way for sustainable agriculture and self-sufficiency in food production by proffering solutions to this challenge. Maize farmers' perception of sharp practices among agro-input dealers in the Ibadan/Ibarapa agricultural zone in Oyo State was therefore investigated. A multi-stage random sampling technique was used to select registered maize farmers in the Ibadan/Ibarapa agricultural zone of the Oyo State Agricultural Development Programme (OYSADEP). A structured questionnaire was used to collect information on the perception of sharp practices and their effects; a total of seventy-five maize farmers were interviewed. A focus group discussion was organized to identify ways of curbing sharp practices, complementing the survey. Data were analyzed using descriptive statistics, chi-square, and Pearson Product Moment Correlation (PPMC). The forms of sharp practices indicated were sales of expired fertilizers, pesticides, and herbicides; underweight fertilizers; adulterated fertilizers and herbicides; packs containing broken or infested seeds; lack of truth in labeling/wrong labels; manipulation of measuring scales; and false declaration of hectarages covered by tractor operators. The majority had an unfavorable perception of agro-input dealers with respect to sharp practices. A significant relationship was observed between respondents' level of education and their perception of sharp practices, whereas no significant relationships were found for sex, marital status, or religion. A significant correlation exists between the forms of sharp practices and their perceived effect on agricultural production. It is concluded that the perceived effect of sharp practices was critical and that an endemic culture of sharp practices prevailed in agro-input dealing in the Ibadan/Ibarapa agricultural zone. A standard regulatory system that will certify and monitor the quality of inputs should be put in place.
Keywords: agricultural productivity, agro-input dealers, maize farmers, sharp practices
Procedia PDF Downloads 200
25049 Visual Working Memory, Reading Abilities, and Vocabulary in Mexican Deaf Signers
Authors: A. Mondaca, E. Mendoza, D. Jackson-Maldonado, A. García-Obregón
Abstract:
Deaf signers usually show lower scores on Auditory Working Memory (AWM) tasks and higher scores on Visual Working Memory (VWM) tasks than their hearing peers. Further, working memory has been correlated with reading abilities and vocabulary in Deaf and hearing individuals. The aim of the present study is to compare the performance of Mexican Deaf signers and hearing adults on VWM, reading, and vocabulary tasks, and to observe whether the latter two are correlated with the former. Fifteen Mexican Deaf signers were assessed using the Corsi block test for VWM, four subtests of the PROLEC (Batería de Evaluación de los Procesos Lectores) for reading abilities, and the Spanish version of the LexTale for vocabulary. T-tests show significant differences between groups for VWM and vocabulary, but not for all the PROLEC subtests. A significant Pearson correlation was found between VWM and vocabulary, but not between VWM and reading abilities. This work is part of a larger research study, and the results are not yet conclusive. A discussion of the use of the PROLEC as a tool to explore reading abilities in a Deaf population is included.
Keywords: deaf signers, visual working memory, reading, Mexican sign language
Procedia PDF Downloads 169
25048 Machine Learning Methods for Flood Hazard Mapping
Authors: Stefano Zappacosta, Cristiano Bove, Maria Carmela Marinelli, Paola di Lauro, Katarina Spasenovic, Lorenzo Ostano, Giuseppe Aiello, Marco Pietrosanto
Abstract:
This paper proposes a novel neural network approach for flood hazard mapping. The core of the model is a machine learning component fed by frequency ratios, namely statistical correlations between flood event occurrences and a selected number of topographic properties. The proposed hybrid model can be used to classify four increasing levels of hazard. Its classification capability was compared with the flood hazard mapping of the River Basin Plans (PAI) designed by the Italian Institute for Environmental Research and Defence, ISPRA (Istituto Superiore per la Protezione e la Ricerca Ambientale). The study area of Piemonte, an Italian region, was considered without loss of generality. The frequency ratios may be used as a standalone block to model the flood hazard map. Nevertheless, combining them with a neural network improves the classification power by several percentage points, and the hybrid may be proposed as a basic tool for modeling flood hazard maps in a wider scope.
Keywords: flood modeling, hazard map, neural networks, hydrogeological risk, flood risk assessment
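A minimal sketch of how a frequency ratio can be computed per class of a topographic property follows; the authors' exact properties and binning are not given, so the slope classes and counts here are illustrative.

```python
import pandas as pd

# Hypothetical inventory: one row per terrain cell, with a topographic
# property binned into classes and a flag for observed flood events.
cells = pd.DataFrame({
    "slope_class": ["0-2", "0-2", "0-2", "2-5", "2-5", "5-10", "5-10", "5-10"],
    "flooded":     [1,     1,     0,     1,     0,     0,      0,      1],
})

# Frequency ratio per class: (share of flood events in the class) divided by
# (share of all cells in the class). FR > 1 marks flood-prone classes.
events_share = cells.groupby("slope_class")["flooded"].sum() / cells["flooded"].sum()
cells_share = cells.groupby("slope_class").size() / len(cells)
frequency_ratio = events_share / cells_share
print(frequency_ratio)
# These per-class ratios become the input features of the neural network.
```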
Procedia PDF Downloads 181
25047 Evaluating Alternative Structures for Prefix Trees
Authors: Feras Hanandeh, Izzat Alsmadi, Muhammad M. Kwafha
Abstract:
Prefix trees, or tries, are data structures used to store data or an index over data. The goal is to be able to store and retrieve data by executing queries in a quick and reliable manner. In principle, the structure of the trie depends on having letters in nodes at the different levels that point to the actual words at the leaves. However, the exact structure of the trie may vary in several respects. In this paper, we evaluated different structures for building tries. Using datasets of words of different sizes, we evaluated the different forms of trie structures. Results showed that some characteristics may significantly impact, positively or negatively, the size and performance of the trie. Among the forms and structures investigated, using an array of pointers at each level to represent the different alphabet letters proved to be the best choice.
Keywords: data structures, indexing, tree structure, trie, information retrieval
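Here is a compact sketch of the array-of-pointers layout the results favor, assuming a lowercase a-z alphabet; the trade-off it illustrates is constant-time child lookup at the cost of 26 slots per node.

```python
class TrieNode:
    """Trie node holding a fixed array of child pointers, one per letter."""

    __slots__ = ("children", "is_word")

    def __init__(self):
        self.children = [None] * 26   # one slot per lowercase letter a-z
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            i = ord(ch) - ord("a")    # letter -> array index
            if node.children[i] is None:
                node.children[i] = TrieNode()
            node = node.children[i]
        node.is_word = True

    def contains(self, word):
        node = self.root
        for ch in word:
            node = node.children[ord(ch) - ord("a")]
            if node is None:
                return False
        return node.is_word

trie = Trie()
for w in ("data", "database", "index"):
    trie.insert(w)
print(trie.contains("data"), trie.contains("dat"))  # True False
```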
Procedia PDF Downloads 452
25046 Data Management System for Environmental Remediation
Authors: Elizaveta Petelina, Anton Sizo
Abstract:
Environmental remediation projects deal with a wide spectrum of data, including data collected during site assessment, execution of remediation activities, and environmental monitoring. Appropriate data management is therefore required as a key factor for well-grounded decision making. The Environmental Data Management System (EDMS) was developed to address all necessary data management aspects, including efficient data handling and data interoperability, access to historical and current data, spatial and temporal analysis, 2D and 3D data visualization, mapping, and data sharing. The system focuses on supporting well-grounded decisions about required mitigation measures and on assessing remediation success. The EDMS is a combination of enterprise- and desktop-level data management and Geographic Information System (GIS) tools assembled to assist environmental remediation, project planning and evaluation, and environmental monitoring of mine sites. The EDMS consists of seven main components: a geodatabase containing a spatial database to store and query spatially distributed data; a GIS and Web GIS component that combines desktop and server-based GIS solutions; a field data collection component containing tools for field work; a quality assurance (QA)/quality control (QC) component that combines operational procedures for QA with measures for QC; a data import and export component that includes tools and templates to support project data flow; a lab data component that provides the connection between the EDMS and laboratory information management systems; and a reporting component that includes server-based services for real-time report generation. The EDMS has been successfully implemented for Project CLEANS (Clean-up of Abandoned Northern Mines), a multi-year, multimillion-dollar project aimed at assessing and reclaiming 37 uranium mine sites in northern Saskatchewan, Canada. The EDMS has effectively facilitated integrated decision-making for CLEANS project managers and transparency amongst stakeholders.
Keywords: data management, environmental remediation, geographic information system, GIS, decision making
Procedia PDF Downloads 163
25045 An Efficient Approach for Speed up Non-Negative Matrix Factorization for High Dimensional Data
Authors: Bharat Singh, Om Prakash Vyas
Abstract:
Nowadays, applications dealing with high-dimensional data are tremendously common in popular areas, and various approaches have been developed by researchers over the last few decades to handle such data. One of the problems with NMF (non-negative matrix factorization) approaches is that randomly initialized values cannot provide global optimization within a limited number of iterations, only local optimization. For this reason, we propose a new approach that chooses the initial values of the decomposition to tackle the issue of computational expense. We have devised an algorithm for initializing the values of the decomposed matrices based on PSO (Particle Swarm Optimization). Through the experimental results, we show that the proposed method converges much faster than other low-rank approximation techniques such as simple multiplicative NMF and ACLS.
Keywords: ALS, NMF, high dimensional data, RMSE
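To ground the discussion, here is a sketch of the standard multiplicative-update NMF that the initialization question concerns; the random starting matrices below are the baseline, and the paper's contribution would replace them with PSO-optimized starting points. The data and rank are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf_multiplicative(V, W, H, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~ W @ H, starting from given W, H."""
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Hypothetical high-dimensional non-negative data: 100 samples x 50 features.
V = rng.random((100, 50))
rank = 5

# Baseline: random initialization (the convergence problem described above).
W0 = rng.random((100, rank))
H0 = rng.random((rank, 50))
W, H = nmf_multiplicative(V, W0.copy(), H0.copy())

rmse = np.sqrt(((V - W @ H) ** 2).mean())
print(f"reconstruction RMSE: {rmse:.4f}")
# The paper's idea is to feed better W0, H0 into the same update rule,
# so that a good RMSE is reached in fewer iterations.
```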
Procedia PDF Downloads 343