Search results for: minimum data set
25415 An Efficient Approach for Speed up Non-Negative Matrix Factorization for High Dimensional Data
Authors: Bharat Singh, Om Prakash Vyas
Abstract:
Applications in many popular areas now deal with high dimensional data, and researchers have developed various approaches to handle such data over the last few decades. One problem with NMF approaches is that random initialization does not reach a global optimum within a limited number of iterations, but only a local one. To address this computational expense, we propose a new approach that chooses the initial values of the decomposition. We have devised an algorithm for initializing the values of the factor matrices based on PSO (Particle Swarm Optimization). Experimental results show that the proposed method converges much faster than other low-rank approximation techniques such as simple multiplicative NMF and ACLS. Keywords: ALS, NMF, high dimensional data, RMSE
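As a rough illustration of the idea in this abstract, the sketch below runs standard multiplicative-update NMF from either a random start or a seeded start; the pso_like_init helper is a hypothetical stand-in for the PSO-based initialization, not the authors' algorithm.

```python
# Minimal sketch: multiplicative-update NMF whose factor matrices can be seeded
# by an external search (here a hypothetical stand-in for a PSO-derived guess).
import numpy as np

def nmf_multiplicative(X, rank, W0=None, H0=None, iters=200, eps=1e-9):
    n, m = X.shape
    rng = np.random.default_rng(0)
    W = W0 if W0 is not None else rng.random((n, rank))
    H = H0 if H0 is not None else rng.random((rank, m))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # standard Lee-Seung updates
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    rmse = np.sqrt(np.mean((X - W @ H) ** 2))
    return W, H, rmse

def pso_like_init(X, rank, candidates=20):
    # Hypothetical stand-in: keep the best of several random candidates by error,
    # mimicking a swarm searching for a good starting point.
    rng = np.random.default_rng(1)
    best = None
    for _ in range(candidates):
        W = rng.random((X.shape[0], rank)); H = rng.random((rank, X.shape[1]))
        err = np.linalg.norm(X - W @ H)
        if best is None or err < best[0]:
            best = (err, W, H)
    return best[1], best[2]

X = np.abs(np.random.default_rng(2).random((100, 50)))
W0, H0 = pso_like_init(X, rank=5)
print("seeded RMSE:", nmf_multiplicative(X, 5, W0, H0)[2])
print("random RMSE:", nmf_multiplicative(X, 5)[2])
```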
Procedia PDF Downloads 342
25414 Integrating Time-Series and High-Spatial Remote Sensing Data Based on Multilevel Decision Fusion
Authors: Xudong Guan, Ainong Li, Gaohuan Liu, Chong Huang, Wei Zhao
Abstract:
Due to the low spatial resolution of MODIS data, the accuracy of extracting small patches in landscapes with a high degree of fragmentation is greatly limited. To this end, this study combines Landsat data, with its higher spatial resolution, and MODIS data, with its higher temporal resolution, for decision-level fusion. Considering the importance of land heterogeneity in the fusion process, it is incorporated as a weighting factor that linearly weights the Landsat classification result and the MODIS classification result. Three levels were used to complete the data fusion: the MODIS pixel level, the Landsat pixel level, and an object level that connects the two. The multilevel decision fusion scheme was tested at two sites in the lower Mekong basin. A comparison test showed that the classification accuracy was improved over the single-data-source classification results in terms of overall accuracy. The method was also compared with the two-level combination results and a weighted-sum decision-rule-based approach. The decision fusion scheme is extensible to other multi-resolution data decision fusion applications. Keywords: image classification, decision fusion, multi-temporal, remote sensing
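A minimal sketch of the weighting idea described above, assuming synthetic per-pixel class probabilities and an invented heterogeneity factor; it is not the authors' multilevel fusion implementation.

```python
# Linearly weight two per-pixel class-probability maps by a heterogeneity factor.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_classes = 1000, 4
p_modis = rng.dirichlet(np.ones(n_classes), n_pixels)    # temporally rich source
p_landsat = rng.dirichlet(np.ones(n_classes), n_pixels)  # spatially rich source
heterogeneity = rng.random(n_pixels)  # 0 = homogeneous, 1 = highly fragmented

# Where the landscape is fragmented, trust the higher-spatial-resolution map more.
w_landsat = 0.5 + 0.5 * heterogeneity
p_fused = w_landsat[:, None] * p_landsat + (1 - w_landsat)[:, None] * p_modis
fused_class = p_fused.argmax(axis=1)
print(fused_class[:10])
```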
Procedia PDF Downloads 124
25413 Weed Out the Bad Seeds: The Impact of Strategic Portfolio Management on Patent Quality
Authors: A. Lefebre, M. Willekens, K. Debackere
Abstract:
Since the 1990s, patent applications have been booming, especially in the field of telecommunications. However, this increase in patent filings has been associated with an (alleged) decrease in patent quality. The plethora of low-quality patents devalues the high-quality ones, thus weakening the incentives for inventors to patent inventions. Despite the rich literature on strategic patenting, previous research has neglected to emphasize the importance of patent portfolio management and its impact on patent quality. In this paper, we compare related patent portfolios vs. nonrelated patents and investigate whether the patent quality and innovativeness differ between the two types. In the analyses, patent quality is proxied by five individual proxies (number of inventors, claims, renewal years, designated states, and grant lag), and these proxies are then aggregated into a quality index. Innovativeness is proxied by two measures: the originality and radicalness index. Results suggest that related patent portfolios have, on average, a lower patent quality compared to nonrelated patents, thus suggesting that firms use them for strategic purposes rather than for the extended protection they could offer. Even upon testing the individual proxies as a dependent variable, we find evidence that related patent portfolios are of lower quality compared to nonrelated patents, although not all results show significant coefficients. Furthermore, these proxies provide evidence of the importance of adding fixed effects to the model. Since prior research has found that these proxies are inherently flawed and never fully capture the concept of patent quality, we have chosen to run the analyses with individual proxies as supplementary analyses; however, we stick with the comprehensive index as our main model. This ensures that the results are not dependent upon one certain proxy but allows for multiple views of the concept. The presence of divisional applications might be linked to the level of innovativeness of the underlying invention. It could be the case that the parent application is so important that firms are going through the administrative burden of filing for divisional applications to ensure the protection of the invention and the preemption of competition. However, it could also be the case that the preempting is a result of divisional applications being used strategically as a backup plan and prolonging strategy, thus negatively impacting the innovation in the portfolio. Upon testing the level of novelty and innovation in the related patent portfolios by means of the originality and radicalness index, we find evidence for a significant negative association with related patent portfolios. The minimum innovation that has been brought on by the patents in the related patent portfolio is lower compared to the minimum innovation that can be found in nonrelated portfolios, providing evidence for the second argument.Keywords: patent portfolio management, patent quality, related patent portfolios, strategic patenting
Procedia PDF Downloads 94
25412 Cluster-Based Exploration of System Readiness Levels: Mathematical Properties of Interfaces
Authors: Justin Fu, Thomas Mazzuchi, Shahram Sarkani
Abstract:
A key factor in technological immaturity in defense weapons acquisition is a lack of understanding of critical integrations at the subsystem and component level. To address this shortfall, recent research combines integration readiness level (IRL) with technology readiness level (TRL) to form a system readiness level (SRL). SRL can be enriched with more robust quantitative methods to provide the program manager with a useful tool prior to committing to major weapons acquisition programs. This research harnesses previous mathematical models based on graph theory, Petri nets, and tropical algebra and proposes a modification of the desirable SRL mathematical properties such that a tightly integrated subsystem (with a multitude of interfaces) can display a lower SRL than an inherently less coupled subsystem. The synthesis of these methods informs an improved decision tool for the program manager before committing to expensive technology development. This research ties the separately developed manufacturing readiness level (MRL) into the network representation of the system and addresses shortfalls in previous frameworks, including the lack of integration weighting and the over-importance of a single extremely immature component. Tropical algebra (based on the minimum of a set of TRLs or IRLs) allows one low IRL or TRL value to diminish the SRL of the entire system, which may not reflect reality if that component is not critical or tightly coupled. Integration connections can therefore be weighted according to importance, and readiness levels are converted to a cardinal scale (based on an analytic hierarchy process). The importance of an integration arc depends on the connected nodes and on the additional integration arcs connected to those nodes. Lack of integration is not represented by zero but by a perfect integration maturity value; naturally, the weight of such an arc would be zero. To further explore the impact of grouping subsystems, a multi-objective genetic algorithm is then used to find clusters or communities that can be optimized for the most representative subsystem SRL. This novel calculation is then benchmarked through simulation and against past defense acquisition program data, focusing on the newly introduced Middle Tier of Acquisition (rapid fielding of prototypes). The model remains a relatively simple, accessible tool, but at higher fidelity and validated with past data, for the program manager deciding major defense acquisition program milestones. Keywords: readiness, maturity, system, integration
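The toy example below contrasts a min-based (tropical) readiness score with an importance-weighted one, using invented readiness values and weights; it only illustrates why weighting can keep a non-critical immature interface from dominating the SRL.

```python
# Compare a purely min-based subsystem score with an importance-weighted average.
import numpy as np

# readiness value (TRL or IRL) and importance weight for each node/interface
readiness = np.array([9, 8, 7, 3, 9], dtype=float)
weights = np.array([0.3, 0.25, 0.25, 0.05, 0.15])  # the immature item is non-critical

tropical_srl = readiness.min()                      # one low value drags everything down
weighted_srl = float(np.average(readiness, weights=weights))
print(f"min-based SRL: {tropical_srl:.2f}, weighted SRL: {weighted_srl:.2f}")
```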
Procedia PDF Downloads 92
25411 Analysis of Cooperative Learning Behavior Based on the Data of Students' Movement
Authors: Wang Lin, Li Zhiqiang
Abstract:
The purpose of this paper is to analyze cooperative learning behavior patterns based on data about students' movement. The study first reviews cooperative learning theory and its research status and briefly introduces the k-means clustering algorithm. It then uses the clustering algorithm and mathematical statistics to analyze the activity rhythm of individual students and groups in different functional areas, according to movement data provided by 10 first-year graduate students. It also focuses on students' behavior in the learning area and explores the regularities of cooperative learning behavior. The results show that the cooperative learning behavior analysis method based on movement data proposed in this paper is feasible. From the data analysis, the behavioral characteristics of students and their cooperative learning behavior patterns could be identified. Keywords: behavior pattern, cooperative learning, data analysis, k-means clustering algorithm
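A small sketch of the clustering step, assuming invented per-student features (time spent in each functional area) rather than the study's movement data.

```python
# k-means on simple per-student movement features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# rows = students, columns = hours spent in [learning area, discussion area, rest area]
movement = np.vstack([
    rng.normal([5, 1, 1], 0.5, size=(5, 3)),   # mostly individual learners
    rng.normal([2, 4, 1], 0.5, size=(5, 3)),   # cooperative-learning oriented
])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(movement)
print(labels)
```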
Procedia PDF Downloads 187
25410 Fibrin Glue Reinforcement of Choledochotomy Closure Suture Line for Prevention of Bile Leak in Patients Undergoing Laparoscopic Common Bile Duct Exploration with Primary Closure: A Pilot Study
Authors: Rahul Jain, Jagdish Chander, Anish Gupta
Abstract:
Introduction: Laparoscopic common bile duct exploration (LCBDE) allows cholecystectomy and the removal of common bile duct (CBD) stones to be performed during the same sitting, thereby decreasing hospital stay. CBD exploration through choledochotomy can be closed primarily with an absorbable suture material, but can lead to biliary leakage postoperatively. In this study we tried to find a solution to further lower the incidence of bile leakage by using fibrin glue to reinforce the sutures put on choledochotomy suture line. It has haemostatic and sealing action, through strengthening the last step of the physiological coagulation and biostimulation, which favours the formation of new tissue matrix. Methodology: This study was conducted at a tertiary care teaching hospital in New Delhi, India, from 2011 to 2013. 20 patients with CBD stones documented on MRCP with CBD diameter of 9 mm or more were included in this study. Patients were randomized into two groups namely Group A in which choledochotomy was closed with polyglactin 4-0 suture and suture line reinforced with fibrin glue, and Group ‘B’ in which choledochotomy was closed with polyglactin 4-0 suture alone. Both the groups were evaluated and compared on clinical parameters such as operative time, drain content, drain output, no. of days drain was required, blood loss & transfusion requirements, length of postoperative hospital stay and conversion to open surgery. Results: The operative time for Group A ranged from 60 to 210 min (mean 131.50 min) and Group B 65 to 300 min (mean 140 minutes). The blood loss in group A ranged from 10 to 120 ml (mean 51.50 ml), in group B it ranged from 10 to 200 ml (mean 53.50 ml). In Group A, there was no case of bile leak but there was bile leak in 2 cases in Group B, minimum 0 and maximum 900 ml with a mean of 97 ml and p value of 0.147 with no statistically significant difference in bile leak in test and control groups. The minimum and maximum serous drainage in Group A was nil & 80 ml (mean 11 ml) and in Group B was nil & 270 ml (mean 72.50 ml). The p value came as 0.028 which is statistically significant. Thus serous leakage in Group A was significantly less than in Group B. The drains in Group A were removed from 2 to 4 days (mean: 3 days) while in Group B from 2 to 9 days (mean: 3.9 days). The patients in Group A stayed in hospital post operatively from 3 to 8 days (mean: 5.30) while in Group B it ranged from 3 to 10 days with a mean of 5 days. Conclusion: Fibrin glue application on CBD decreases bile leakage but in statistically insignificant manner. Fibrin glue application on CBD can significantly decrease post operative serous drainage after LCBDE. Fibrin glue application on CBD is safe and easy technique without any significant adverse effects and can help less experienced surgeons performing LCBDE.Keywords: bile leak, fibrin glue, LCBDE, serous leak
Procedia PDF Downloads 215
25409 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis
Authors: Meng Su
Abstract:
High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality reduction capability of Diffusion Maps with the data modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis
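A minimal diffusion-map sketch, assuming a synthetic data set and a simple Gaussian kernel; it covers only the dimensionality-reduction half of the proposed hybrid, not the DPM generative half.

```python
# Embed data via the leading non-trivial eigenvectors of a diffusion (Markov) matrix.
import numpy as np

def diffusion_map(X, n_components=2, epsilon=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-d2 / epsilon)                              # Gaussian affinity
    P = K / K.sum(axis=1, keepdims=True)                   # row-normalized diffusion matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    idx = order[1:n_components + 1]                         # skip the trivial constant mode
    return vecs.real[:, idx] * vals.real[idx]

X = np.random.default_rng(0).normal(size=(200, 10))
embedding = diffusion_map(X, n_components=2)
print(embedding.shape)  # (200, 2)
```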
Procedia PDF Downloads 108
25408 A Security Cloud Storage Scheme Based Accountable Key-Policy Attribute-Based Encryption without Key Escrow
Authors: Ming Lun Wang, Yan Wang, Ning Ruo Sun
Abstract:
With the development of cloud computing, more and more users are utilizing cloud storage services. However, several issues exist: 1) the cloud server steals the shared data, 2) sharers collude with the cloud server to steal the shared data, 3) the cloud server tampers with the shared data, 4) sharers and the key generation center (KGC) conspire to steal the shared data. In this paper, we use the advanced encryption standard (AES), hash algorithms, and accountable key-policy attribute-based encryption without key escrow (WOKE-AKP-ABE) to build a secure cloud storage scheme. The data are encrypted to protect privacy, and hash algorithms are used to prevent the cloud server from tampering with the data uploaded to the cloud. Analysis results show that this scheme can resist collusion attacks. Keywords: cloud storage security, sharing storage, attributes, hash algorithm
Procedia PDF Downloads 390
25407 The Application of Enzymes on Pharmaceutical Products and Process Development
Authors: Reginald Anyanwu
Abstract:
Enzymes are biological molecules that regulate the rate of almost all of the chemical reactions that take place within cells and have been widely used in product innovation. They are vital for life and serve a wide range of important functions in the body, such as aiding digestion and metabolism. The present study aimed to find out the extent to which these biological molecules have been utilized by the pharmaceutical, food and beverage, and biofuel industries in commercial and scale-up applications. Taking into account the escalating business opportunities in this vertical, biotech firms have also been entering the enzymes industry, especially in food. The aims of the study were therefore to find out how biocatalysis can be successfully deployed and how enzyme application can improve industrial processes. To achieve this purpose, the researcher focused on the analytical tools that are critical for the scale-up implementation of enzyme immobilization, to ascertain the extent of increased product yield at minimum logistical burden and maximum market profitability for the environment and user. The researcher collected data from four pharmaceutical companies located in Anambra and Imo states of Nigeria; questionnaires were distributed to these companies. The researcher also made personal observations on the applicability of these biological molecules to innovative products, given the shifting trend toward the consumption of healthy, high-quality food. In conclusion, it was found that enzymes have been widely used for product innovation, although their applications vary. It was also found that pivotal contenders in the enzymes market have lately been making heavy investments in the development of innovative product solutions. It was recommended that the application of enzymes to innovative products be practiced more widely. Keywords: enzymes, pharmaceuticals, process development, quality food consumption, scale-up applications
Procedia PDF Downloads 141
25406 Influence of Processing Regime and Contaminants on the Properties of Postconsumer Thermoplastics
Authors: Fares Alsewailem
Abstract:
Material recycling of thermoplastic waste offers a practical solution for municipal solid waste reduction. Post-consumer plastics such as polyethylene (PE), polyethylene terephthalate (PET), and polystyrene (PS) may be separated from each other by physical methods such as density difference and hence processed as single plastics; however, one should be cautious about contaminants present in the waste stream in the form of paper, glue, etc., since these articles, even in trace amounts, may deteriorate the properties of the recycled plastics, especially the mechanical properties. Furthermore, melt processing methods used to recycle thermoplastics, such as extrusion and compression molding, may induce degradation of some of the recycled plastics, such as PET and PS. This research shows that care should be taken when processing recycled plastics by melt processing in two respects: first, contaminants should be minimized as far as possible, and secondly, the number of melt processing steps should be kept to a minimum. Keywords: recycling, PET, PS, HDPE, mechanical
Procedia PDF Downloads 284
25405 Homogeneity and Trend Analyses of Temperature Indices: The Case Study of Umbria Region (Italy) in the Mediterranean Area
Authors: R. Morbidelli, C. Saltalippi, A. Flammini, A. Garcia-Marin, J. L. Ayuso-Munoz
Abstract:
Climate change, mainly due to greenhouse gas emissions associated with human activities, has been modifying hydrologic processes, with a direct effect on air surface temperature, which has increased significantly at the global scale over the last century. In this context, the Mediterranean area is considered particularly sensitive to climate change impacts on temperature indices. An analysis aimed at studying the evolution of temperature indices and checking for significant trends in the Umbria Region (Italy) is presented. Temperature data were obtained from seven meteorological stations uniformly distributed over the study area and characterized by very long series of temperature observations (at least 60 years) spanning the 1924-2015 period. A set of 39 temperature indices, consisting of monthly and annual mean, average maximum, and average minimum temperatures, was derived. The trend analysis was carried out by applying the non-parametric Mann-Kendall test, while the non-parametric Pettitt test and the parametric Standard Normal Homogeneity Test (SNHT) were used to check for breakpoints or inhomogeneities due to environmental change, anthropic activity, or climate change effects. The Umbria region, in agreement with other recent studies exploring temperature behavior in Italy, shows a general increase in all temperature indices, with the only exception of the Gubbio site, which exhibits very slight negative trends or an absence of trend. The presence of breakpoints and inhomogeneities was explored extensively with the selected tests, and the results were checked against the well-known metadata of the meteorological stations. Keywords: reception theory, reading, literary translation, horizons of expectation, reader
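A basic Mann-Kendall test of the kind used for the trend analysis, written without tie correction and applied to a synthetic temperature series rather than the Umbria records.

```python
# Mann-Kendall trend test (no tie correction) on an annual temperature series.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0          # variance without tie correction
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))                    # two-sided p-value
    return s, z, p

years = np.arange(1924, 2016)
temps = 12.0 + 0.01 * (years - 1924) + np.random.default_rng(0).normal(0, 0.4, len(years))
print(mann_kendall(temps))
```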
Procedia PDF Downloads 162
25404 The Study on Life of Valves Evaluation Based on Tests Data
Authors: Binjuan Xu, Qian Zhao, Ping Jiang, Bo Guo, Zhijun Cheng, Xiaoyue Wu
Abstract:
Astronautical valves are key units in the engine systems of astronautical products; their reliability influences the outcome of rocket or missile launches and can even lead to damage to staff and devices on the ground. In addition, failure in the engine system may affect the hitting accuracy and flight of missiles. Therefore, high reliability is essential for astronautical products. Quite a few studies estimate valves' reliability from a small amount of failure test data; this paper proposes a new method to estimate valves' reliability. Using the tests corresponding to different failure modes, the paper takes advantage of data acquired from temperature, vibration, and action tests to estimate the reliability for every failure mode; it then regards these three kinds of tests as three stages of the product's process and integrates their results to obtain the valves' reliability. Through a comparison of results obtained from test data and simulated data, it is illustrated how to obtain valves' reliability based on few failure data with failure modes, and it is shown that the results are effective and rational. Keywords: censored data, temperature tests, valves, vibration tests
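A tiny sketch of the integration step, assuming independence between the three failure modes and illustrative per-stage reliabilities rather than the paper's estimates.

```python
# Combine per-stage reliability estimates into an overall valve reliability,
# treating the three tests as a series system under an independence assumption.
r_temperature = 0.995   # reliability w.r.t. the temperature-related failure mode
r_vibration = 0.990     # reliability w.r.t. the vibration-related failure mode
r_action = 0.985        # reliability w.r.t. the action-test failure mode

# series combination: the valve survives only if it survives every stage
r_valve = r_temperature * r_vibration * r_action
print(f"estimated valve reliability: {r_valve:.4f}")
```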
Procedia PDF Downloads 345
25403 Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences
Authors: C. Xavier Mendieta, J. J McArthur
Abstract:
Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have resulted in a significant amount of publicly available data, providing researchers with a unique opportunity to develop location-specific energy and carbon emission benchmarks from this data set, which can then be used to develop building archetypes and used to inform urban energy models. This study presents the development of such a benchmark using the public reporting data. The data from Ontario’s Ministry of Energy for Post-Secondary Educational Institutions are being used to develop a series of building archetype dynamic building loads and energy benchmarks to fill a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas in Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology presented includes data cleaning, statistical analysis, and benchmark development, and lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) the importance of careful data screening and outlier identification to develop a valid dataset; (2) the key features used to develop a model of the data are building age, size, and occupancy schedules and these can be used to estimate energy consumption; and (3) policy changes affecting the primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical to evaluate the validity of the reported data.Keywords: building archetypes, data analysis, energy benchmarks, GHG emissions
Procedia PDF Downloads 306
25402 Collision Detection Algorithm Based on Data Parallelism
Authors: Zhen Peng, Baifeng Wu
Abstract:
Modern computing technology has entered the era of parallel computing, with a trend toward sustainable and scalable parallelism. Single Instruction Multiple Data (SIMD) is an important way to go along with this trend: it can gather more and more computing power by increasing the number of processor cores without the need to modify the program. Meanwhile, in the fields of scientific computing and engineering design, many computation-intensive applications are facing the challenge of an increasingly large amount of data. Data-parallel computing will be an important way to further improve the performance of these applications. In this paper, we take accurate collision detection in building information modeling as an example and demonstrate a model for constructing a data-parallel algorithm. According to the model, a complex object is decomposed into sets of simple objects, and collision detection among complex objects is converted into collision detection among simple objects. The resulting algorithm is a typical SIMD algorithm, and its advantages in parallelism and scalability are unparalleled with respect to traditional algorithms. Keywords: data parallelism, collision detection, single instruction multiple data, building information modeling, continuous scalability
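A small sketch of the data-parallel idea, assuming the simple objects are axis-aligned bounding boxes and using NumPy broadcasting as a stand-in for SIMD execution.

```python
# Pairwise axis-aligned bounding-box overlap tests evaluated in one vectorized pass.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
mins = rng.random((n, 3)) * 10
maxs = mins + rng.random((n, 3))

# broadcast all n x n box pairs at once: overlap on every axis => collision
overlap = (mins[:, None, :] <= maxs[None, :, :]) & (maxs[:, None, :] >= mins[None, :, :])
collides = overlap.all(axis=2)
np.fill_diagonal(collides, False)
print("colliding pairs:", int(collides.sum()) // 2)
```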
Procedia PDF Downloads 289
25401 Changing Arbitrary Data Transmission Period by Using Bluetooth Module on Gas Sensor Node of Arduino Board
Authors: Hiesik Kim, Yong-Beom Kim, Jaheon Gu
Abstract:
Internet of Things (IoT) applications are widely deployed and spreading worldwide, and local wireless data transmission techniques must keep pace with them. Bluetooth is a wireless data communication technique defined by the Special Interest Group (SIG); it uses the 2.4 GHz frequency range and exploits frequency hopping to avoid collisions with other devices. For the experiment, equipment that transmits measured data was built using an Arduino open source hardware board, a gas sensor, and a Bluetooth module, and an algorithm controlling the transmission rate is demonstrated. The experiment on controlling the transmission rate also proceeds by developing an Android application that receives the measured data, and the experimental results show that controlling this rate is feasible. In the future, the communication algorithm needs improvement, because a few errors occur when data are transmitted or received. Keywords: Arduino, Bluetooth, gas sensor, IoT, transmission
Procedia PDF Downloads 278
25400 Real-Time Sensor Fusion for Mobile Robot Localization in an Oil and Gas Refinery
Authors: Adewole A. Ayoade, Marshall R. Sweatt, John P. H. Steele, Qi Han, Khaled Al-Wahedi, Hamad Karki, William A. Yearsley
Abstract:
Understanding the behavioral characteristics of sensors is a crucial step in fusing data from several sensors of different types. This paper introduces a practical, real-time approach to integrate heterogeneous sensor data to achieve higher accuracy than would be possible from any one individual sensor in localizing a mobile robot. We use this approach in both indoor and outdoor environments and it is especially appropriate for those environments like oil and gas refineries due to their sparse and featureless nature. We have studied the individual contribution of each sensor data to the overall combined accuracy achieved from the fusion process. A Sequential Update Extended Kalman Filter(EKF) using validation gates was used to integrate GPS data, Compass data, WiFi data, Inertial Measurement Unit(IMU) data, Vehicle Velocity, and pose estimates from Fiducial marker system. Results show that the approach can enable a mobile robot to navigate autonomously in any environment using a priori information.Keywords: inspection mobile robot, navigation, sensor fusion, sequential update extended Kalman filter
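A compact sketch of a sequential measurement update with a validation gate, using an invented 2-D position state and made-up sensor noise levels rather than the robot's actual models.

```python
# Sequentially fuse measurements into a state estimate, gating out outliers.
import numpy as np

def sequential_update(x, P, measurements, gate=9.21):  # ~99% chi-square gate, 2 dof
    for z, H, R in measurements:
        y = z - H @ x                         # innovation
        S = H @ P @ H.T + R                   # innovation covariance
        d2 = float(y.T @ np.linalg.inv(S) @ y)
        if d2 > gate:                         # reject a measurement outside the gate
            continue
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x = x + K @ y
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x = np.array([0.0, 0.0])          # prior position estimate
P = np.eye(2) * 4.0               # prior covariance
H = np.eye(2)
measurements = [
    (np.array([1.0, 0.8]), H, np.eye(2) * 1.0),    # GPS-like fix
    (np.array([0.9, 1.1]), H, np.eye(2) * 0.5),    # WiFi-like fix
    (np.array([25.0, 30.0]), H, np.eye(2) * 1.0),  # gross outlier, gated out
]
print(sequential_update(x, P, measurements))
```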
Procedia PDF Downloads 472
25399 Energy Efficient Massive Data Dissemination Through Vehicle Mobility in Smart Cities
Authors: Salman Naseer
Abstract:
One of the main challenges of operating a smart city (SC) is collecting the massive data generated from multiple data sources (DS) and transmitting them to the control units (CU) for further processing and analysis. These ever-increasing data demands not only require more and more capacity in the transmission channels but also result in resource over-provisioning to meet resilience requirements, and thus unavoidable waste because of data fluctuations throughout the day. In addition, the high energy consumption (EC) and carbon discharges from these data transmissions pose serious issues for the environment we live in. Therefore, to overcome the problems of intensive EC and carbon emissions (CE) of massive data dissemination in smart cities, we propose an energy-efficient, carbon-reducing approach that utilizes the daily mobility of existing vehicles as an alternative communication channel to accommodate data dissemination in smart cities. To illustrate the effectiveness and efficiency of our approach, we take Auckland City in New Zealand as an example, assuming massive data generated by various sources geographically scattered throughout the Auckland region and destined for control centres located in the city centre. The numerical results show that, when transferring large volumes of data, our proposed approach can provide up to 5 times lower delay by utilizing existing daily vehicle mobility than the conventional transmission network. Moreover, our proposed approach offers about 30% less EC and CE than the conventional network transmission approach. Keywords: smart city, delay tolerant network, infrastructure offloading, opportunistic network, vehicular mobility, energy consumption, carbon emission
Procedia PDF Downloads 142
25398 Successful Optimization of a Shallow Marginal Offshore Field and Its Applications
Authors: Kumar Satyam Das, Murali Raghunathan
Abstract:
This note discusses the feasibility of field development of a challenging shallow offshore field in South East Asia and how its learnings can be applied to marginal field development across the world especially developing marginal fields in this low oil price world. The field was found to be economically challenging even during high oil prices and the project was put on hold. Shell started development study with the aim to significantly reduce cost through competitively scoping and revive stranded projects. The proposed strategy to achieve this involved Improve Per platform recovery and Reduction in CAPEX. Methodology: Based on various Benchmarking Tool such as Woodmac for similar projects in the region and economic affordability, a challenging target of 50% reduction in unit development cost (UDC) was set for the project. Technical scope was defined to the minimum as to be a wellhead platform with minimum functionality to ensure production. The evaluation of key project decisions like Well location and number, well design, Artificial lift methods and wellhead platform type under different development concept was carried out through integrated multi-discipline approach. Key elements influencing per platform recovery were Wellhead Platform (WHP) location, Well count, well reach and well productivity. Major Findings: Reservoir being shallow posed challenges in well design (dog-leg severity, casing size and the achievable step-out), choice of artificial lift and sand-control method. Integrated approach amongst relevant disciplines with challenging mind-set enabled to achieve optimized set of development decisions. This led to significant improvement in per platform recovery. It was concluded that platform recovery largely depended on the reach of the well. Choice of slim well design enabled designing of high inclination and better productivity wells. However, there is trade-off between high inclination Gas Lift (GL) wells and low inclination wells in terms of long term value, operational complexity, well reach, recovery and uptime. Well design element like casing size, well completion, artificial lift and sand control were added successively over the minimum technical scope design leading to a value and risk staircase. Logical combinations of options (slim well, GL) were competitively screened to achieve 25% reduction in well cost. Facility cost reduction was achieved through sourcing standardized Low Cost Facilities platform in combination with portfolio execution to maximizing execution efficiency; this approach is expected to reduce facilities cost by ~23% with respect to the development costs. Further cost reductions were achieved by maximizing use of existing facilities nearby; changing reliance on existing water injection wells and utilizing existing water injector (W.I.) platform for new injectors. Conclusion: The study provides a spectrum of technically feasible options. It also made clear that different drivers lead to different development concepts and the cost value trade off staircase made this very visible. Scoping of the project through competitive way has proven to be valuable for decision makers by creating a transparent view of value and associated risks/uncertainty/trade-offs for difficult choices: elements of the projects can be competitive, whilst other parts will struggle, even though contributing to significant volumes. 
Reduction in UDC through proper scoping of present projects, and benchmarking them, provides lessons for the development of marginal fields across the world, especially in this low oil price scenario. This way of developing a field achieves, on average, a 40% cost reduction for the Shell projects. Keywords: benchmarking, full field development, CAPEX, feasibility
Procedia PDF Downloads 158
25397 Exploring Data Stewardship in Fog Networking Using Blockchain Algorithm
Authors: Ruvaitha Banu, Amaladhithyan Krishnamoorthy
Abstract:
IoT networks today solve various consumer problems, from home automation systems to aiding in driving autonomous vehicles with the exploration of multiple devices. For example, in an autonomous vehicle environment, multiple sensors are available on roads to monitor weather and road conditions and interact with each other to aid the vehicle in reaching its destination safely and timely. IoT systems are predominantly dependent on the cloud environment for data storage, and computing needs that result in latency problems. With the advent of Fog networks, some of this storage and computing is pushed to the edge/fog nodes, saving the network bandwidth and reducing the latency proportionally. Managing the data stored in these fog nodes becomes crucial as it might also store sensitive information required for a certain application. Data management in fog nodes is strenuous because Fog networks are dynamic in terms of their availability and hardware capability. It becomes more challenging when the nodes in the network also live a short span, detaching and joining frequently. When an end-user or Fog Node wants to access, read, or write data stored in another Fog Node, then a new protocol becomes necessary to access/manage the data stored in the fog devices as a conventional static way of managing the data doesn’t work in Fog Networks. The proposed solution discusses a protocol that acts by defining sensitivity levels for the data being written and read. Additionally, a distinct data distribution and replication model among the Fog nodes is established to decentralize the access mechanism. In this paper, the proposed model implements stewardship towards the data stored in the Fog node using the application of Reinforcement Learning so that access to the data is determined dynamically based on the requests.Keywords: IoT, fog networks, data stewardship, dynamic access policy
Procedia PDF Downloads 59
25396 Speeding-up Gray-Scale FIC by Moments
Authors: Eman A. Al-Hilo, Hawraa H. Al-Waelly
Abstract:
In this work, a fractal image compression (FIC) technique is introduced that uses moment features to index the zero-mean range-domain blocks. The moment features are used to speed up the IFS-matching stage: a moment-ratio descriptor filters the domain blocks and keeps only those suitable for IFS matching with the tested range block. Tests conducted on the Lena and Cat images (256 pixels, 24 bits/pixel) showed a minimum encoding time (0.89 s for the Lena image and 0.78 s for the Cat image) with appropriate PSNR (30.01 dB for the Lena image and 29.8 dB for the Cat image). The reduction in encoding time is about 12% for the Lena image and 67% for the Cat image. Keywords: fractal gray level image, fractal compression technique, iterated function system, moments feature, zero-mean range-domain block
Procedia PDF Downloads 492
25395 An Automated Approach to Consolidate Galileo System Availability
Authors: Marie Bieber, Fabrice Cosson, Olivier Schmitt
Abstract:
Europe's Global Navigation Satellite System, Galileo, provides worldwide positioning and navigation services. The satellites in space are only one part of the Galileo system. An extensive ground infrastructure is essential to oversee the satellites and ensure accurate navigation signals. High reliability and availability of the entire Galileo system are crucial to continuously provide positioning information of high quality to users. Outages are tracked, and operational availability is regularly assessed. A highly flexible and adaptive tool has been developed to automate the Galileo system availability analysis. Not only does it enable a quick availability consolidation, but it also provides first steps towards improving the data quality of maintenance tickets used for the analysis. This includes data import and data preparation, with a focus on processing strings used for classification and identifying faulty data. Furthermore, the tool allows to handle a low amount of data, which is a major constraint when the aim is to provide accurate statistics.Keywords: availability, data quality, system performance, Galileo, aerospace
Procedia PDF Downloads 167
25394 Use of In-line Data Analytics and Empirical Model for Early Fault Detection
Authors: Hyun-Woo Cho
Abstract:
Automatic process monitoring schemes are designed to give early warnings of unusual process events or abnormalities as soon as possible. To this end, various techniques have been developed and utilized in many industrial processes, including multivariate statistical methods, representations in reduced spaces, and kernel-based nonlinear techniques. This work presents a nonlinear empirical monitoring scheme for batch-type production processes with incomplete process measurement data. While normal operation data are easy to obtain, unusual fault data occur infrequently and are thus difficult to collect. In this work, noise filtering steps are added in order to enhance monitoring performance by eliminating irrelevant information from the data. The performance of the monitoring scheme was demonstrated using batch process data. The results showed that the monitoring performance was improved significantly in terms of the detection success rate for process faults. Keywords: batch process, monitoring, measurement, kernel method
Procedia PDF Downloads 323
25393 Perception, Knowledge and Practices on Balanced Diet among Adolescents, Their Parents and Frontline Functionaries in Rural Sites of Banda, Varanasi and Allahabad, Uttar Pradesh, India
Authors: Gunjan Razdan, Priyanka Sreenath, Jagannath Behera, S. K. Mishra, Sunil Mehra
Abstract:
Uttar Pradesh is one of the poor performing states with high Malnutrition and Anaemia among adolescent girls resulting in high MMR, IMR and low birth weight rate. The rate of anaemia among adolescent girls has doubled in the past decade. Adolescents gain around 15-20% of their optimum height, 25-50% of the ideal adult weight and 45% of the skeletal mass by the age of 19. Poor intake of energy, protein and other nutrients is one of the factors for malnutrition and anaemia. METHODS: The cross-sectional survey using a mixed method (quantitative and qualitative) was adopted in this study. The respondents (adolescents, parents and frontline health workers) were selected randomly from 30 villages and surveyed through a semi-structured questionnaire for qualitative information and FGDs and IDIs for qualitative information. A 24 hours dietary recall method was adopted to estimate their dietary practices. A total of 1069 adolescent girls, 1067 boys, 1774 parents and 69 frontline functionaries were covered under the study. Percentages and mean were calculated for quantitative variable, and content analysis was carried out for qualitative data. RESULTS: Over 80 % of parents provided assertions that they understood the term balanced diet and strongly felt that their children were having balanced diet. However, only negligible 1.5 % of parents could correctly recount essential eight food groups and 22% could tell about four groups which was the minimum response expected to say respondents had fair knowledge on a balanced diet. Only 10 percent of parents could tell that balanced diet helps in physical and mental growth and only 2% said it has a protective role. Besides, qualitative data shows that the perception regarding balanced diet is having costly food items like nuts and fruits. The dietary intake of adolescents is very low despite the increased iron needs associated with physical growth and puberty.The consumption of green leafy vegetables (less than 35 %) and citrus fruits (less than 50%) was found to be low. CONCLUSIONS: The assertions on an understanding of term balanced diet are contradictory to the actual knowledge and practices. Knowledge on essential food groups and nutrients is crucial to inculcate healthy eating practices among adolescents. This calls for comprehensive communication efforts to improve the knowledge and dietary practices among adolescents.Keywords: anemia, knowledge, malnutrition, perceptions
Procedia PDF Downloads 400
25392 Identification of Rice Quality Using Gas Sensors and Neural Networks
Authors: Moh Hanif Mubarok, Muhammad Rivai
Abstract:
Public demand for high-quality rice is very high, so it is necessary to set minimum standards for checking rice quality. Most rice quality measurements still use manual methods, which are prone to error due to the limits of human vision and the subjectivity of testers. A gas detection system can therefore be an effective and objective solution to these problems. The use of gas sensors for testing rice quality must take several parameters into account; the parameters measured in this research are the percentage of rice water content, gas concentration, output voltage, and measurement time. This research was therefore carried out to identify carbon dioxide (CO₂), nitrous oxide (N₂O), and methane (CH₄) gases for assessing rice quality using an array of gas sensors and the neural network method. Keywords: carbon dioxide, dinitrogen oxide, methane, semiconductor gas sensor, neural network
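A minimal sketch, assuming synthetic sensor readings and a small scikit-learn MLP in place of the paper's neural network and measured data.

```python
# Map gas-sensor responses plus moisture content to a rice-quality label.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# columns: [CO2 response, N2O response, CH4 response, moisture %] -- all synthetic
good = rng.normal([0.2, 0.1, 0.1, 12.0], 0.05, size=(50, 4))
poor = rng.normal([0.6, 0.3, 0.4, 15.0], 0.05, size=(50, 4))
X = np.vstack([good, poor])
y = np.array([0] * 50 + [1] * 50)   # 0 = acceptable, 1 = below standard

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict([[0.25, 0.12, 0.11, 12.2], [0.55, 0.28, 0.38, 14.8]]))
```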
Procedia PDF Downloads 48
25391 The Impact of the General Data Protection Regulation on Human Resources Management in Schools
Authors: Alexandra Aslanidou
Abstract:
The General Data Protection Regulation (GDPR), concerning the protection of natural persons within the European Union with regard to the processing of personal data and on the free movement of such data, became applicable in the European Union (EU) on 25 May 2018 and transformed the way personal data were being treated under the Data Protection Directive (DPD) regime, generating sweeping organizational changes to both public sector and business. A social practice that is considerably influenced in the way of its day-to-day operations is Human Resource (HR) management, for which the importance of GDPR cannot be underestimated. That is because HR processes personal data coming in all shapes and sizes from many different systems and sources. The significance of the proper functioning of an HR department, specifically in human-centered, service-oriented environments such as the education field, is decisive due to the fact that HR operations in schools, conducted effectively, determine the quality of the provided services and consequently have a considerable impact on the success of the educational system. The purpose of this paper is to analyze the decisive role that GDPR plays in HR departments that operate in schools and in order to practically evaluate the aftermath of the Regulation during the first months of its applicability; a comparative use cases analysis in five highly dynamic schools, across three EU Member States, was attempted.Keywords: general data protection regulation, human resource management, educational system
Procedia PDF Downloads 100
25390 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data
Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz
Abstract:
In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, a tremendous amount of real-time spatial data is generated every day. The growth of the data volume seems to outspeed the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period regardless of the load on the system. But with the huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data because they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence using the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the data amount of each partition within a balanced limit and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This contributes to QoS (Quality of Service) improvement in real-time spatial Big Data, especially with a huge volume of stream data. The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable and that it outperforms comparable algorithms. Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query
Procedia PDF Downloads 157
25389 Assessment of Land Use Land Cover Change-Induced Climatic Effects
Authors: Mahesh K. Jat, Ankan Jana, Mahender Choudhary
Abstract:
Rapid population and economic growth have resulted in large-scale land use land cover (LULC) changes. Changes in the biophysical properties of the Earth's surface and their impact on climate are of primary concern nowadays. Different approaches, ranging from location-based relationships to modelling Earth surface-atmosphere interaction through techniques like the surface energy balance (SEB), have been used in the recent past to examine the relationship between changes in Earth surface land cover and climatic characteristics like temperature and precipitation. A remote sensing-based model, the Surface Energy Balance Algorithm for Land (SEBAL), has been used to estimate the surface heat fluxes over the Mahi Bajaj Sagar catchment (India) from 2001 to 2020. Landsat ETM and OLI satellite data are used to model the SEB of the area. Changes in observed precipitation and temperature, obtained from the India Meteorological Department (IMD), have been correlated with changes in surface heat fluxes to understand the relative contribution of LULC change to changes in these climatic variables. Results indicate a noticeable impact of LULC changes on climatic variables, aligned with the respective changes in SEB components. Results suggest that precipitation increases at a rate of 20 mm/year, the maximum and minimum temperatures decrease and increase at 0.007 °C/year and 0.02 °C/year, respectively, and the average temperature increases at 0.009 °C/year. Changes in latent heat flux and sensible heat flux correlate positively with precipitation and temperature, respectively. Variations in surface heat fluxes influence the climate parameters and are a plausible driver of climate change, so SEB modelling is helpful for understanding LULC change and its impact on climate. Keywords: LULC, sensible heat flux, latent heat flux, SEBAL, Landsat, precipitation, temperature
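A one-step sketch of the SEBAL-style energy balance bookkeeping, with invented flux values: latent heat flux is taken as the residual of net radiation, soil heat flux, and sensible heat flux.

```python
# LE = Rn - G - H, evaluated per pixel with illustrative numbers.
import numpy as np

net_radiation = np.array([520.0, 480.0, 610.0])    # Rn, W/m^2 per pixel
soil_heat_flux = 0.1 * net_radiation                # G, assumed fraction of Rn
sensible_heat = np.array([180.0, 220.0, 150.0])     # H, W/m^2 per pixel

latent_heat = net_radiation - soil_heat_flux - sensible_heat   # LE as the residual
evaporative_fraction = latent_heat / (latent_heat + sensible_heat)
print(latent_heat, evaporative_fraction)
```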
Procedia PDF Downloads 116
25388 A Comparison of Tsunami Impact to Sydney Harbour, Australia at Different Tidal Stages
Authors: Olivia A. Wilson, Hannah E. Power, Murray Kendall
Abstract:
Sydney Harbour is an iconic location with a dense population and low-lying development. On the east coast of Australia, facing the Pacific Ocean, it is exposed to several tsunamigenic trenches. This paper presents a component of the most detailed assessment of the potential for earthquake-generated tsunami impact on Sydney Harbour to date. Models in this study use dynamic tides to account for tide-tsunami interaction. Sydney Harbour’s tidal range is 1.5 m, and the spring tides from January 2015 that are used in the modelling for this study are close to the full tidal range. The tsunami wave trains modelled include hypothetical tsunami generated from earthquakes of magnitude 7.5, 8.0, 8.5, and 9.0 MW from the Puysegur and New Hebrides trenches as well as representations of the historical 1960 Chilean and 2011 Tohoku events. All wave trains are modelled for the peak wave to coincide with both a low tide and a high tide. A single wave train, representing a 9.0 MW earthquake at the Puysegur trench, is modelled for peak waves to coincide with every hour across a 12-hour tidal phase. Using the hydrodynamic model ANUGA, results are compared according to the impact parameters of inundation area, depth variation and current speeds. Results show that both maximum inundation area and depth variation are tide dependent. Maximum inundation area increases when coincident with a higher tide, however, hazardous inundation is only observed for the larger waves modelled: NH90high and P90high. The maximum and minimum depths are deeper on higher tides and shallower on lower tides. The difference between maximum and minimum depths varies across different tidal phases although the differences are slight. Maximum current speeds are shown to be a significant hazard for Sydney Harbour; however, they do not show consistent patterns according to tide-tsunami phasing. The maximum current speed hazard is shown to be greater in specific locations such as Spit Bridge, a narrow channel with extensive marine infrastructure. The results presented for Sydney Harbour are novel, and the conclusions are consistent with previous modelling efforts in the greater area. It is shown that tide must be a consideration for both tsunami modelling and emergency management planning. Modelling with peak tsunami waves coinciding with a high tide would be a conservative approach; however, it must be considered that maximum current speeds may be higher on other tides.Keywords: emergency management, sydney, tide-tsunami interaction, tsunami impact
Procedia PDF Downloads 242
25387 Predicting Groundwater Areas Using Data Mining Techniques: Groundwater in Jordan as Case Study
Authors: Faisal Aburub, Wael Hadi
Abstract:
Data mining is the process of extracting useful or hidden information from a large database. Extracted information can be used to discover relationships among features, where data objects are grouped according to logical relationships, or to predict the assignment of unseen objects to one of the predefined groups. In this paper, we aim to investigate four well-known data mining algorithms in order to predict groundwater areas in Jordan. These algorithms are Support Vector Machines (SVMs), Naïve Bayes (NB), K-Nearest Neighbor (kNN) and Classification Based on Association Rule (CBA). The experimental results indicate that the SVMs algorithm outperformed the other algorithms in terms of classification accuracy, precision and F1 evaluation measures on the datasets of groundwater areas collected from the Jordanian Ministry of Water and Irrigation. Keywords: classification, data mining, evaluation measures, groundwater
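A quick sketch of how such a comparison can be set up, using a synthetic dataset instead of the Jordanian groundwater data and omitting CBA, which has no standard scikit-learn implementation.

```python
# Cross-validated accuracy comparison of SVM, naive Bayes, and kNN classifiers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=8, n_informative=5, random_state=0)
models = {"SVM": SVC(), "Naive Bayes": GaussianNB(), "kNN": KNeighborsClassifier()}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```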
Procedia PDF Downloads 280
25386 Optimal Scheduling for Energy Storage System Considering Reliability Constraints
Authors: Wook-Won Kim, Je-Seok Shin, Jin-O Kim
Abstract:
This paper proposes a method for the optimal scheduling of a battery energy storage system subject to a reliability constraint on the energy storage system. The optimal scheduling problem is solved by dynamic programming with a proposed transition matrix. The proposed method guarantees the minimum fuel cost within the specified reliability constraint. For evaluating the proposed method, a timely capacity outage probability table (COPT), calculated by convolving the probability mass functions of the individual generators, is used. This study presents the resulting optimal schedule of the energy storage system. Keywords: energy storage system (ESS), optimal scheduling, dynamic programming, reliability constraints
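A toy dynamic program over a discretized state of charge, with invented prices, demand, and a reserve requirement standing in for the COPT-based reliability constraint.

```python
# Minimize fuel cost over a short horizon while keeping the battery above a
# reserve level in the peak hours; all numbers are illustrative.
import numpy as np

hours = 6
prices = np.array([20, 25, 40, 60, 55, 30], dtype=float)   # cost of substitute generation
demand = np.array([3, 3, 4, 5, 5, 3], dtype=float)         # MWh served each hour
soc_levels = np.arange(0, 11)                               # battery state of charge, MWh
reserve = {3: 4, 4: 4}                                      # hour -> minimum SoC required

INF = float("inf")
cost = np.full(len(soc_levels), INF); cost[5] = 0.0         # start half charged
for t in range(hours):
    new_cost = np.full(len(soc_levels), INF)
    for s, c in enumerate(cost):
        if c == INF:
            continue
        for s_next in soc_levels:
            if abs(s_next - s) > 2:                          # charge/discharge rate limit
                continue
            if t in reserve and s_next < reserve[t]:
                continue                                     # reserve (reliability) constraint
            # discharging reduces the fuel needed; charging adds to it
            fuel = prices[t] * max(0.0, demand[t] + (s_next - s))
            new_cost[s_next] = min(new_cost[s_next], c + fuel)
    cost = new_cost
print("minimum total fuel cost:", cost[cost < INF].min())
```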
Procedia PDF Downloads 407