Search results for: data utilization
25272 Evaluating Alternative Structures for Prefix Trees
Authors: Feras Hanandeh, Izzat Alsmadi, Muhammad M. Kwafha
Abstract:
Prefix trees, or tries, are data structures used to store data or indexes of data. The goal is to be able to store and retrieve data by executing queries quickly and reliably. In principle, the structure of the trie depends on having letters in nodes at the different levels to point to the actual words in the leaves. However, the exact structure of the trie may vary based on several aspects. In this paper, we evaluated different structures for building tries. Using datasets of words of different sizes, we evaluated the different forms of trie structures. Results showed that some characteristics may significantly impact, positively or negatively, the size and the performance of the trie. Results showed that using an array of pointers in each level to represent the different alphabet letters is the best choice.
Keywords: data structures, indexing, tree structure, trie, information retrieval
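The winning layout can be sketched in a few lines: each node holds an array of 26 child pointers, one per alphabet letter. This is a generic illustration assuming lowercase ASCII words, not the authors' implementation:

```python
class TrieNode:
    def __init__(self):
        self.children = [None] * 26  # one pointer slot per letter 'a'..'z'
        self.is_word = False         # marks that a stored word ends here

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            i = ord(ch) - ord('a')   # letter index into the pointer array
            if node.children[i] is None:
                node.children[i] = TrieNode()
            node = node.children[i]
        node.is_word = True

    def search(self, word):
        node = self.root
        for ch in word:
            i = ord(ch) - ord('a')
            if node.children[i] is None:
                return False
            node = node.children[i]
        return node.is_word

t = Trie()
for w in ["data", "index", "indexing"]:
    t.insert(w)
```

The fixed-size array trades memory for constant-time child lookup, which is the size/performance balance the paper evaluates.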
Procedia PDF Downloads 452
25271 Data Management System for Environmental Remediation
Authors: Elizaveta Petelina, Anton Sizo
Abstract:
Environmental remediation projects deal with a wide spectrum of data, including data collected during site assessment, execution of remediation activities, and environmental monitoring. Therefore, appropriate data management is required as a key factor for well-grounded decision making. The Environmental Data Management System (EDMS) was developed to address all necessary data management aspects, including efficient data handling and data interoperability, access to historical and current data, spatial and temporal analysis, 2D and 3D data visualization, mapping, and data sharing. The system focuses on supporting well-grounded decision making in relation to required mitigation measures and assessment of remediation success. The EDMS is a combination of enterprise- and desktop-level data management and Geographic Information System (GIS) tools assembled to assist in environmental remediation, project planning and evaluation, and environmental monitoring of mine sites. EDMS consists of seven main components: a Geodatabase that contains a spatial database to store and query spatially distributed data; a GIS and Web GIS component that combines desktop and server-based GIS solutions; a Field Data Collection component that contains tools for field work; a Quality Assurance (QA)/Quality Control (QC) component that combines operational procedures for QA and measures for QC; a Data Import and Export component that includes tools and templates to support project data flow; a Lab Data component that provides a connection between EDMS and laboratory information management systems; and a Reporting component that includes server-based services for real-time report generation. The EDMS has been successfully implemented for Project CLEANS (Clean-up of Abandoned Northern Mines), a multi-year, multimillion-dollar project aimed at assessing and reclaiming 37 uranium mine sites in northern Saskatchewan, Canada.
The EDMS has effectively facilitated integrated decision-making for CLEANS project managers and transparency amongst stakeholders.
Keywords: data management, environmental remediation, geographic information system, GIS, decision making
Procedia PDF Downloads 161
25270 An Efficient Approach for Speed up Non-Negative Matrix Factorization for High Dimensional Data
Authors: Bharat Singh Om Prakash Vyas
Abstract:
Nowadays, applications dealing with high-dimensional data are widely used in popular areas. To handle such data, various approaches have been developed by researchers in the last few decades. One of the problems with NMF approaches is that their randomized initial values cannot provide absolute optimization within a limited number of iterations, yielding only local optima. Therefore, we have proposed a new approach that chooses the initial values of the decomposition to address the issue of computational expense. We have devised an algorithm for initializing the values of the decomposed matrices based on PSO (Particle Swarm Optimization). Through the experimental results, we show that the proposed method converges very fast in comparison to other low-rank approximation techniques such as simple multiplicative NMF and ACLS.
Keywords: ALS, NMF, high dimensional data, RMSE
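The multiplicative-update baseline that the proposal is compared against can be sketched as follows. The PSO-based initialization is replaced here by a seeded random start, since the authors' exact initializer is not given:

```python
import numpy as np

def nmf(V, rank, iters=200, eps=1e-9, seed=0):
    """Factor nonnegative V (m x n) into W (m x rank) @ H (rank x n)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps   # random init; the paper swaps this for PSO
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        # standard Lee-Seung multiplicative updates; eps avoids division by zero
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.random.default_rng(1).random((6, 5))   # toy nonnegative data matrix
W, H = nmf(V, rank=2)
rmse = np.sqrt(np.mean((V - W @ H) ** 2))     # the paper's RMSE criterion
```

A better initial (W, H) lets the same updates reach a given RMSE in fewer iterations, which is the effect the PSO initializer targets.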
Procedia PDF Downloads 342
25269 Integrating Time-Series and High-Spatial Remote Sensing Data Based on Multilevel Decision Fusion
Authors: Xudong Guan, Ainong Li, Gaohuan Liu, Chong Huang, Wei Zhao
Abstract:
Due to the low spatial resolution of MODIS data, the accuracy of small-area patch extraction in landscapes with a high degree of fragmentation is greatly limited. To this end, the study combines Landsat data, with its higher spatial resolution, and MODIS data, with its higher temporal resolution, for decision-level fusion. Considering the importance of the land heterogeneity factor in the fusion process, it is superimposed as a weighting factor that linearly weights the Landsat classification result and the MODIS classification result. Three levels were used to complete the process of data fusion: the pixel level of MODIS data, the pixel level of Landsat data, and an object level that connects these two levels. The multilevel decision fusion scheme was tested at two sites in the lower Mekong basin. A comparison test proved that the classification accuracy was improved, in terms of overall accuracy, compared with the single-data-source classification results. The method was also compared with the two-level combination results and a weighted-sum decision-rule-based approach. The decision fusion scheme is extensible to other multi-resolution data decision fusion applications.
Keywords: image classification, decision fusion, multi-temporal, remote sensing
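The heterogeneity-weighted linear combination can be illustrated with a toy per-pixel sketch. The weighting form below (trusting the high-resolution classifier more where heterogeneity is high) is an assumption for illustration, not the authors' exact formula:

```python
def fuse(modis_scores, landsat_scores, heterogeneity, base_w=0.5):
    """Linearly weight two classifiers' class scores for one pixel.

    heterogeneity in [0, 1]: more fragmented landscape -> higher weight
    on the finer-resolution Landsat result (illustrative assumption).
    """
    w_landsat = base_w + (1.0 - base_w) * heterogeneity
    fused = {cls: (1.0 - w_landsat) * modis_scores[cls]
                  + w_landsat * landsat_scores[cls]
             for cls in modis_scores}
    return max(fused, key=fused.get)   # decision: highest fused score wins
```

In a homogeneous area the coarse MODIS vote can dominate; in a fragmented one the Landsat vote takes over, which is the intuition behind the superimposed weighting factor.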
Procedia PDF Downloads 124
25268 A Structure-Based Approach for Adaptable Building System
Authors: Alireza Taghdiri, Sara Ghanbarzade Ghomi
Abstract:
Existing buildings are permanently subject to change and are continuously renovated and repaired over their long service life. Old buildings are demolished and their materials and components are recycled or reused for constructing new ones. In this process, the importance of sustainability principles for building construction is well known, and great significance must be attached to the consumption of resources, the resulting effects on the environment, and economic costs. Strategies for extending buildings' service life and delaying demolition have a positive effect on environmental protection. In addition, simpler alterability or expandability of buildings' structures and reduced energy and natural resource consumption benefit users, producers, and the environment. To address these problems, by applying theories of open building, the structural components of some conventional building systems have been analyzed, and a new geometry-adaptive building system has been developed which can transform and support different imposed loads. To achieve this goal, various research methods and tools, such as review of professional and scientific literature, comparative analysis, case study, and computer simulation, were applied, and data interpretation was implemented using descriptive statistics and logical arguments. The hypothesis and proposed strategies were thereby evaluated, and an adaptable and reusable 2-dimensional building system is presented which can respond appropriately to dwellers' and end-users' needs and provides reusability of the building system's structural components in new construction or functions.
Investigations showed that this incremental building system can be successfully applied in achieving the architectural design objectives, and that with small modifications to components and joints, it is easy to obtain different and adaptable load-optimized component alternatives for flexible spaces.
Keywords: adaptability, durability, open building, service life, structural building system
Procedia PDF Downloads 581
25267 Analysis of Cooperative Learning Behavior Based on the Data of Students' Movement
Authors: Wang Lin, Li Zhiqiang
Abstract:
The purpose of this paper is to analyze cooperative learning behavior patterns based on data on students' movement. The study first reviewed cooperative learning theory and its research status, and briefly introduced the k-means clustering algorithm. It then used the clustering algorithm and mathematical statistics to analyze the activity rhythms of individual students and groups in different functional areas, according to movement data provided by 10 first-year graduate students. It also focused on the analysis of students' behavior in the learning area and explored the laws of cooperative learning behavior. The results showed that the cooperative learning behavior analysis method based on movement data proposed in this paper is feasible. From the results of the data analysis, the behavioral characteristics of students and their cooperative learning behavior patterns could be found.
Keywords: behavior pattern, cooperative learning, data analysis, k-means clustering algorithm
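The clustering step can be sketched as a minimal k-means on toy 2-D positions. The naive first-k initialization is an assumption for brevity; real use would run on the students' movement coordinates:

```python
def kmeans(points, k, iters=20):
    """Cluster 2-D points into k groups by iterative mean-assignment."""
    centers = points[:k]  # naive initialization: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared distance)
            nearest = min(range(k),
                          key=lambda c: (p[0] - centers[c][0]) ** 2
                                        + (p[1] - centers[c][1]) ** 2)
            clusters[nearest].append(p)
        # recompute each center as the mean of its assigned points
        centers = [(sum(q[0] for q in cl) / len(cl), sum(q[1] for q in cl) / len(cl))
                   if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers

positions = [(0, 0), (0, 1), (10, 10), (10, 11)]  # two obvious activity areas
centers = kmeans(positions, k=2)
```

Applied to timestamped positions per functional area, the resulting clusters expose the activity rhythms the paper analyzes.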
Procedia PDF Downloads 187
25266 Nano-MFC (Nano Microbial Fuel Cell): Utilization of Carbon Nano Tube to Increase Efficiency of Microbial Fuel Cell Power as an Effective, Efficient and Environmentally Friendly Alternative Energy Sources
Authors: Annisa Ulfah Pristya, Andi Setiawan
Abstract:
Electricity is a primary requirement of today's world, including Indonesia, because it is a flexible form of energy to use. Fossil fuels are the major energy source used in power plants. Unfortunately, this conversion process depletes fossil fuel reserves and increases the amount of CO2 in the atmosphere, harming health, depleting the ozone layer, and contributing to the greenhouse effect. Solutions that have been applied include solar cells, ocean wave power, wind, water, and so forth. However, low efficiency and complicated maintenance mean that most people and industries in Indonesia still use fossil fuels. In response, the Fuel Cell was developed. Fuel Cells are an electrochemical technology that continuously converts the chemical energy of a fuel and an oxidizer into electrical energy, with an efficiency of 40-60%, considerably higher than previous natural sources of electrical energy. However, Fuel Cells still have some weaknesses, notably the use of an expensive platinum catalyst, which is limited in supply and not environmentally friendly. A source of electrical energy that is both sustainable and environmentally friendly is therefore required. On the other hand, Indonesia is a country rich in marine sediments whose organic content is never exhausted. This accumulation of organic components can serve as an alternative energy source, and the continued development of the fuel cell that exploits it is the Microbial Fuel Cell. Microbial Fuel Cells (MFC) are devices that use bacteria to generate electricity from organic and non-organic compounds. An MFC, like a conventional fuel cell, is composed of an anode, a cathode, and an electrolyte. Its main advantages are that the catalyst is a microorganism and that it operates in neutral solution, at low temperatures, and in a more environmentally friendly way than previous (chemical) fuel cells. However, compared to a chemical Fuel Cell, an MFC has an efficiency of only 40%.
Therefore, the authors propose a solution in the form of the Nano-MFC (Nano Microbial Fuel Cell): utilization of Carbon Nano Tubes to increase the power efficiency of the Microbial Fuel Cell as an effective, efficient, and environmentally friendly alternative energy source. The Nano-MFC has the advantages of being effective, highly efficient, cheap, and environmentally friendly. Related stakeholders include government ministries, especially the Energy Ministry, research institutes, and industry as a production facilitator. The strategic steps undertaken to achieve this begin with preliminary research, followed by lab-scale testing, dissemination and building cooperation with related parties (MOU), final research and its application in the field, and then licensing and production of the Nano-MFC on an industrial scale and publication to the public.
Keywords: CNT, efficiency, electric, microorganisms, sediment
Procedia PDF Downloads 409
25265 Direct Drive Double Fed Wind Generator
Authors: Vlado Ostovic
Abstract:
An electric machine topology characterized by single-tooth windings in both stator and rotor is presented. The proposed machine is capable of operating as a direct drive double fed wind generator (DDDF, D3F) because it requires no gearbox and only a reduced-size converter. A wind turbine drive built around a D3F generator is cheaper to manufacture, requires less maintenance, and has a higher energy yield than its conventional counterparts. The single-tooth wound generator of a D3F turbine has superb volume utilization and lower stator I²R losses due to its extremely short end windings. Both the stator and rotor of a D3F generator can be manufactured in segments, which simplifies assembly and transportation to the site and makes production cheaper.
Keywords: direct drive, double fed generator, gearbox, permanent magnet generators, single tooth winding, wind power
Procedia PDF Downloads 190
25264 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis
Authors: Meng Su
Abstract:
High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality reduction capability of Diffusion Maps with the data modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.
Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis
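The Diffusion Map half of the pipeline can be sketched compactly. The DPM half is omitted, and the Gaussian kernel with bandwidth `epsilon` is an assumed, conventional choice:

```python
import numpy as np

def diffusion_map(X, n_components=2, epsilon=1.0):
    """Embed rows of X into n_components diffusion coordinates."""
    # pairwise squared Euclidean distances between all rows
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / epsilon)                 # Gaussian kernel affinities
    P = K / K.sum(axis=1, keepdims=True)      # row-stochastic random-walk matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)            # leading eigenvalue (=1) is trivial
    idx = order[1:n_components + 1]           # keep the next n_components modes
    return vecs.real[:, idx] * vals.real[idx] # eigenvalue-scaled coordinates

X = np.random.default_rng(0).random((10, 3))  # toy high-dimensional samples
embedding = diffusion_map(X)
```

New points generated by a DPM in this low-dimensional coordinate system would then be lifted back to the ambient space, which is the round trip the abstract describes.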
Procedia PDF Downloads 108
25263 A Security Cloud Storage Scheme Based Accountable Key-Policy Attribute-Based Encryption without Key Escrow
Authors: Ming Lun Wang, Yan Wang, Ning Ruo Sun
Abstract:
With the development of cloud computing, more and more users are starting to use cloud storage services. However, several issues exist: 1) the cloud server steals the shared data; 2) sharers collude with the cloud server to steal the shared data; 3) the cloud server tampers with the shared data; 4) sharers and the key generation center (KGC) conspire to steal the shared data. In this paper, we use the advanced encryption standard (AES), hash algorithms, and accountable key-policy attribute-based encryption without key escrow (WOKE-AKP-ABE) to build a secure cloud storage scheme. Moreover, the data are encrypted to protect privacy. We use hash algorithms to prevent the cloud server from tampering with the data uploaded to the cloud. Analysis results show that this scheme can resist collusion attacks.
Keywords: cloud storage security, sharing storage, attributes, hash algorithm
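The hash-based tamper check can be illustrated in a few lines. SHA-256 is assumed here for concreteness; the abstract does not name a specific hash function:

```python
import hashlib

def digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a data blob."""
    return hashlib.sha256(data).hexdigest()

original = b"shared file contents"
stored = digest(original)            # digest kept by the data owner before upload
tampered = b"shared file contentz"   # a copy modified by a malicious server
```

On download, the owner re-hashes the returned copy and compares it with the stored digest; any single-byte modification changes the digest and is detected.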
Procedia PDF Downloads 390
25262 Effect of Cooking Time, Seed-To-Water Ratio and Soaking Time on the Proximate Composition and Functional Properties of Tetracarpidium conophorum (Nigerian Walnut) Seeds
Authors: J. O. Idoko, C. N. Michael, T. O. Fasuan
Abstract:
This study investigated the effects of cooking time, seed-to-water ratio, and soaking time on the proximate and functional properties of African walnut seeds using a Box-Behnken design and Response Surface Methodology (BBD-RSM), with a view to increasing their utilization in the food industry. African walnut seeds were sorted, washed, soaked, cooked, dehulled, sliced, dried, and milled. Proximate analysis and functional properties of the samples were evaluated using standard procedures. Data obtained were analyzed using descriptive and inferential statistics. Quadratic models were obtained to predict the proximate and functional qualities as a function of cooking time, seed-to-water ratio, and soaking time. The results showed that crude protein ranged between 11.80% and 23.50%, moisture content between 1.00% and 4.66%, ash content between 3.35% and 5.25%, crude fibre from 0.10% to 7.25%, and carbohydrate from 1.22% to 29.35%. The functional properties showed that soluble protein ranged from 16.26% to 42.96%, viscosity from 23.43 mPas to 57 mPas, emulsifying capacity from 17.14% to 39.43%, and water absorption capacity from 232% to 297%. An increase in the volume of water used during cooking resulted in loss of water-soluble protein through leaching; the length of soaking time and the moisture content of the dried product are inversely related; ash content is inversely related to the cooking time and amount of water used; extraction of fat is enhanced by an increase in soaking time; and increases in cooking and soaking times result in a decrease in fibre content. The results obtained indicated that African walnut could be used in several food formulations as a protein supplement and binder.
Keywords: African walnut, functional properties, proximate analysis, response surface methodology
Procedia PDF Downloads 396
25261 Nanoparticle Induced Neurotoxicity Mediated by Mitochondria
Authors: Nandini Nalika, Suhel Parvez
Abstract:
Nanotechnology has come to play a vital role throughout the industrial world, with an immense production of nanomaterials, including nanoparticles (NPs). Many toxicological studies have confirmed that, due to their uniquely small size (1-100 nm) and physico-chemical properties, NPs can be potentially hazardous. Metallic NPs of small size have been shown to induce higher levels of cellular oxidative stress; they can easily pass through the blood-brain barrier (BBB) and accumulate significantly in the brain. With the wide application of titanium dioxide nanoparticles (TNPs) in day-to-day life in the form of cosmetics, paints, sterilisation, and so on, there is growing concern regarding the deleterious effects of TNPs on the central nervous system. Mitochondria appear to be important cellular organelles targeted by the pro-oxidative effects of NPs and an important source that contributes significantly to the production of reactive oxygen species after a toxic insult or injury. The aim of our study was to elucidate the effect of TNPs in anatase form at different concentrations (5-50 µg/ml) on various oxidative stress markers in isolated brain mitochondria as an in vitro model. Oxidative stress was determined by measuring different markers such as lipid peroxidation as well as protein carbonyl content, which were found to be significantly increased. Reduced glutathione content and the major glutathione-metabolizing enzymes were also modulated, signifying the role of the glutathione redox cycle in the pathophysiology of TNPs. The study also assessed the mitochondrial enzymes (Complex I, Complex II, Complex IV, Complex V), which showed toxicity in a relatively short time due to the effect of TNPs.
The study provides a range of concentrations that were toxic to the neuronal cells and data pointing to a general toxicity of TNPs in brain mitochondria; therefore, the proper utilization of NPs in the environment needs to be considered.
Keywords: mitochondria, nanoparticles, brain, in vitro
Procedia PDF Downloads 398
25260 The Study on Life of Valves Evaluation Based on Tests Data
Authors: Binjuan Xu, Qian Zhao, Ping Jiang, Bo Guo, Zhijun Cheng, Xiaoyue Wu
Abstract:
Astronautical valves are key units in the engine systems of astronautical products; their reliability influences the results of rocket or missile launches and can even lead to damage to staff and devices on the ground. Moreover, failure in the engine system may affect the hitting accuracy and flight path of missiles. Therefore, high reliability is essential for astronautical products. Quite a few studies estimate valves' reliability based on scarce failure test data; this paper therefore proposes a new method. According to the corresponding tests for different failure modes, this paper takes advantage of test data acquired from temperature, vibration, and action tests to estimate the reliability of each failure mode, and then regards these three kinds of tests as three stages in the product's process, integrating their results to obtain the valves' overall reliability. Through a comparison of results obtained from test data and simulated data, we illustrate how to obtain valves' reliability based on few failure data with failure modes and show that the results are effective and rational.
Keywords: censored data, temperature tests, valves, vibration tests
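The integration of the three test stages can be illustrated under the simplifying assumption of independent stages in series, so the overall reliability is the product of the per-stage (per-failure-mode) reliabilities. Both the independence assumption and the reliability values below are hypothetical, not taken from the paper:

```python
def series_reliability(stage_reliabilities):
    """Overall reliability of independent stages in series: the product."""
    r = 1.0
    for ri in stage_reliabilities:
        r *= ri
    return r

# hypothetical per-mode reliabilities from temperature, vibration, action tests
overall = series_reliability([0.99, 0.98, 0.97])
```

Under these assumed values the valve must survive all three failure modes, so the combined figure is lower than any single-stage estimate.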
Procedia PDF Downloads 346
25259 Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences
Authors: C. Xavier Mendieta, J. J McArthur
Abstract:
Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have resulted in a significant amount of publicly available data, providing researchers with a unique opportunity to develop location-specific energy and carbon emission benchmarks from this data set, which can then be used to develop building archetypes and to inform urban energy models. This study presents the development of such a benchmark using the public reporting data. The data from Ontario's Ministry of Energy for Post-Secondary Educational Institutions are used to develop a series of building-archetype dynamic building loads and energy benchmarks to fill a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas in Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology presented includes data cleaning, statistical analysis, and benchmark development, and lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) the importance of careful data screening and outlier identification to develop a valid dataset; (2) the key features used to develop a model of the data are building age, size, and occupancy schedules, and these can be used to estimate energy consumption; and (3) policy changes affecting primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical to evaluate the validity of the reported data.
Keywords: building archetypes, data analysis, energy benchmarks, GHG emissions
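The outlier-screening step in finding (1) can be sketched with a conventional 1.5×IQR fence. Both the fence rule and the crude quartile computation are common-practice assumptions, not the authors' documented procedure:

```python
def iqr_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] (crude quartiles)."""
    s = sorted(values)
    n = len(s)
    q1, q3 = s[n // 4], s[(3 * n) // 4]       # rough quartiles, fine for screening
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

# hypothetical energy-use intensities; one implausible reporting error
flagged = iqr_outliers([10, 12, 11, 13, 12, 11, 95])
```

Records flagged this way would be reviewed (or excluded) before the benchmark statistics are computed, so a few mis-reported buildings do not distort the archetype.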
Procedia PDF Downloads 306
25258 Adaptability of Steel-Framed Industrialized Building System
Authors: Alireza Taghdiri, Sara Ghanbarzade Ghomi
Abstract:
Existing buildings are permanently subject to change and are continuously renovated and repaired over their long service life. Old buildings are demolished and their materials and components are recycled or reused for constructing new ones. In this process, the importance of sustainability principles for building construction is well known, and great significance must be attached to the consumption of resources, the resulting effects on the environment, and economic costs. Strategies for extending buildings' service life and delaying demolition have a positive effect on environmental protection. In addition, simpler alterability or expandability of buildings' structures and reduced energy and natural resource consumption benefit users, producers, and the environment. To address these problems, by applying theories of open building, the structural components of some conventional building systems have been analyzed, and a new geometry-adaptive building system has been developed which can transform and support different imposed loads. To achieve this goal, various research methods and tools, such as review of professional and scientific literature, comparative analysis, case study, and computer simulation, were applied, and data interpretation was implemented using descriptive statistics and logical arguments. The hypothesis and proposed strategies were thereby evaluated, and an adaptable and reusable 2-dimensional building system is presented which can respond appropriately to dwellers' and end-users' needs and provides reusability of the building system's structural components in new construction or functions.
Investigations showed that this incremental building system can be successfully applied in achieving the architectural design objectives, and that with small modifications to components and joints, it is easy to obtain different and adaptable load-optimized component alternatives for flexible spaces.
Keywords: adaptability, durability, open building, service life, structural building system
Procedia PDF Downloads 366
25257 Collision Detection Algorithm Based on Data Parallelism
Authors: Zhen Peng, Baifeng Wu
Abstract:
Modern computing technology has entered the era of parallel computing, with a trend toward sustainable and scalable parallelism. Single Instruction Multiple Data (SIMD) is an important way to follow this trend, as it can gather more and more computing power by increasing the number of processor cores without modifying the program. Meanwhile, in the fields of scientific computing and engineering design, many computation-intensive applications face the challenge of increasingly large amounts of data. Data-parallel computing will be an important way to further improve the performance of these applications. In this paper, we take accurate collision detection in building information modeling as an example. We demonstrate a model for constructing a data-parallel algorithm. According to the model, a complex object is decomposed into sets of simple objects, and collision detection among complex objects is converted into detection among simple objects. The resulting algorithm is a typical SIMD algorithm, and its advantages in parallelism and scalability are unmatched by the traditional algorithms.
Keywords: data parallelism, collision detection, single instruction multiple data, building information modeling, continuous scalability
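The decomposition idea can be sketched as one vectorized (SIMD-style) pass over all pairs of simple objects instead of a nested loop. Axis-aligned bounding boxes are an assumed choice of simple object for this illustration:

```python
import numpy as np

def aabb_collisions(boxes_a, boxes_b):
    """All-pairs overlap test between two sets of axis-aligned boxes.

    Each box is a row (xmin, ymin, zmin, xmax, ymax, zmax); the whole
    N x M pair grid is evaluated in one vectorized pass.
    """
    a_min, a_max = boxes_a[:, None, :3], boxes_a[:, None, 3:]
    b_min, b_max = boxes_b[None, :, :3], boxes_b[None, :, 3:]
    overlap = (a_min <= b_max) & (b_min <= a_max)  # per-axis interval overlap
    return overlap.all(axis=2)                     # collide iff all 3 axes overlap

A = np.array([[0, 0, 0, 1, 1, 1],
              [5, 5, 5, 6, 6, 6]], dtype=float)   # simple parts of one object
B = np.array([[0.5, 0.5, 0.5, 2, 2, 2]], dtype=float)
hits = aabb_collisions(A, B)   # hits[i, j]: does A[i] intersect B[j]?
```

Because every pair is an identical independent test, the same data layout maps directly onto SIMD lanes or additional cores without changing the program.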
Procedia PDF Downloads 290
25256 Changing Arbitrary Data Transmission Period by Using Bluetooth Module on Gas Sensor Node of Arduino Board
Authors: Hiesik Kim, Yong-Beom Kim, Jaheon Gu
Abstract:
Internet of Things (IoT) applications are widely deployed and spreading worldwide. Local wireless data transmission techniques must be developed to keep pace with this trend. Bluetooth is a wireless data communication technique created by the Bluetooth Special Interest Group (SIG) that uses the 2.4 GHz frequency range and exploits frequency hopping to avoid collisions with other devices. For the experiment, equipment for transmitting measured data was built using an Arduino board as open-source hardware, a gas sensor, and a Bluetooth module, and an algorithm controlling the transmission rate is demonstrated. The experiment on controlling the transmission rate also proceeded by developing an Android application that receives the measured data, and the experimental results show that controlling this rate is feasible. In the future, improvement of the communication algorithm will be needed because a few errors occur when data are transmitted or received.
Keywords: Arduino, Bluetooth, gas sensor, IoT, transmission
Procedia PDF Downloads 278
25255 Real-Time Sensor Fusion for Mobile Robot Localization in an Oil and Gas Refinery
Authors: Adewole A. Ayoade, Marshall R. Sweatt, John P. H. Steele, Qi Han, Khaled Al-Wahedi, Hamad Karki, William A. Yearsley
Abstract:
Understanding the behavioral characteristics of sensors is a crucial step in fusing data from several sensors of different types. This paper introduces a practical, real-time approach to integrating heterogeneous sensor data to achieve higher accuracy in localizing a mobile robot than would be possible from any one individual sensor. We use this approach in both indoor and outdoor environments, and it is especially appropriate for environments like oil and gas refineries due to their sparse and featureless nature. We have studied the individual contribution of each sensor's data to the overall combined accuracy achieved from the fusion process. A Sequential Update Extended Kalman Filter (EKF) using validation gates was used to integrate GPS data, compass data, WiFi data, Inertial Measurement Unit (IMU) data, vehicle velocity, and pose estimates from a fiducial marker system. Results show that the approach can enable a mobile robot to navigate autonomously in any environment using a priori information.
Keywords: inspection mobile robot, navigation, sensor fusion, sequential update extended Kalman filter
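The sequential update with validation gates can be illustrated in one dimension: each sensor reading updates the state in turn, and readings whose normalized innovation exceeds the gate are rejected as outliers. This is a deliberately simplified scalar sketch, not the paper's full multi-sensor EKF:

```python
def sequential_update(x, P, measurements, gate=9.0):
    """Scalar Kalman sequential update with a chi-square-style validation gate.

    x, P: state estimate and its variance.
    measurements: list of (reading z, sensor variance R), applied in order.
    """
    for z, R in measurements:
        innovation = z - x
        S = P + R                                # innovation covariance
        if innovation * innovation / S > gate:   # validation gate
            continue                             # reject this reading as an outlier
        K = P / S                                # Kalman gain
        x = x + K * innovation
        P = (1.0 - K) * P                        # variance shrinks after each accept
    return x, P

# two consistent sensors around 1.0 and one gross outlier at 50.0
x, P = sequential_update(x=0.0, P=4.0,
                         measurements=[(1.0, 1.0), (50.0, 1.0), (1.2, 1.0)])
```

Processing sensors one at a time this way avoids a large stacked-measurement inversion and lets each sensor be gated individually, which is the appeal of the sequential form.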
Procedia PDF Downloads 472
25254 Energy Efficient Massive Data Dissemination Through Vehicle Mobility in Smart Cities
Authors: Salman Naseer
Abstract:
One of the main challenges of operating a smart city (SC) is collecting the massive data generated from multiple data sources (DS) and transmitting them to the control units (CU) for further processing and analysis. These ever-increasing data demands not only require more and more capacity in the transmission channels but also result in resource over-provisioning to meet resilience requirements, and thus unavoidable waste because of data fluctuations throughout the day. In addition, the high energy consumption (EC) and carbon discharges from these data transmissions pose serious issues to the environment we live in. Therefore, to overcome the issues of intensive EC and carbon emissions (CE) of massive data dissemination in smart cities, we propose an energy-efficient and carbon-reducing approach that utilizes the daily mobility of existing vehicles as an alternative communication channel to accommodate data dissemination. To illustrate the effectiveness and efficiency of our approach, we take Auckland City in New Zealand as an example, assuming massive data generated by various sources geographically scattered throughout the Auckland region must reach control centres located in the city centre. The numerical results show that our proposed approach can provide up to 5 times lower delay when transferring large volumes of data via existing daily vehicle mobility than over the conventional transmission network. Moreover, our proposed approach offers about 30% less EC and CE than the conventional network transmission approach.
Keywords: smart city, delay tolerant network, infrastructure offloading, opportunistic network, vehicular mobility, energy consumption, carbon emission
Procedia PDF Downloads 142
25253 Investigation into Varied Inspection Utilization for Mass Customization
Authors: Trishen Naidoo, Anthony Walker, Shaniel Davrajh, Glen Bright
Abstract:
An investigation into on-line inspection was performed, focusing on the use of varied inspection (as opposed to 100% inspection) for mass customization (MC). Manufacturers need new methods for quality control in mass customization, and these methods need to address old problems such as over-inspection and bottlenecking. Due to the risks of varied inspection, many manufacturers do not implement it and instead opt for sampling methods. However, varied inspection has many advantages and potential applications in mass customization. A control system incorporating fuzzy logic (FL) control is used to vary inspection usage in a simulated environment. The proposed system can have a key impact on reducing appraisal costs and possibly work-in-process in high-variety environments. Keywords: appraisal costs, fuzzy logic, quality control, work-in-process
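A fuzzy-logic controller for inspection usage can be sketched with a handful of rules mapping the observed defect rate to a sampling fraction. The membership breakpoints and rule outputs below are illustrative assumptions, not the paper's tuned values.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def inspection_rate(defect_rate):
    """Map the observed defect rate (0..1) to a sampling fraction
    (0..1) using three fuzzy rules and weighted-average
    defuzzification."""
    low  = tri(defect_rate, -0.1, 0.0, 0.05)
    med  = tri(defect_rate, 0.02, 0.06, 0.10)
    high = tri(defect_rate, 0.08, 0.15, 1.1)
    # Rules: low defects -> inspect 10%, medium -> 50%, high -> 100%
    num = low * 0.1 + med * 0.5 + high * 1.0
    den = low + med + high
    return num / den if den else 0.5
```

Because the rules overlap, the sampling fraction varies smoothly with the defect rate instead of jumping between fixed sampling plans, which is what lets the controller avoid both over-inspection and undetected quality drift.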
Procedia PDF Downloads 231
25252 On the Use of Analytical Performance Models to Design a High-Performance Active Queue Management Scheme
Authors: Shahram Jamali, Samira Hamed
Abstract:
One of the open issues in the Random Early Detection (RED) algorithm is how to set its parameters to reach high performance under the dynamic conditions of the network. Although the original RED uses fixed values for its parameters, this paper follows a model-based approach to upgrade the performance of the RED algorithm. It models the router's queue behavior using a Markov model and uses this model to predict future conditions of the queue. This prediction helps the proposed algorithm make some tunings of RED's parameters and provide better efficiency and performance. Widespread packet-level simulations confirm that the proposed algorithm, called Markov-RED, outperforms RED and FARED in terms of queue stability, bottleneck utilization, and dropped-packet count. Keywords: active queue management, RED, Markov model, random early detection algorithm
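For reference, the fixed-parameter RED baseline that Markov-RED tunes at runtime computes its drop probability from an exponentially weighted average of the queue length. A minimal sketch, with typical illustrative threshold values rather than the paper's settings:

```python
def ewma(avg, sample, w=0.002):
    """RED smooths the instantaneous queue length with an
    exponentially weighted moving average so that short bursts
    do not trigger drops."""
    return (1.0 - w) * avg + w * sample

def red_drop_prob(avg_q, min_th=5.0, max_th=15.0, max_p=0.1):
    """Classic RED: no drops below min_th, forced drop above max_th,
    and a linearly increasing drop probability in between."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

The fixed `min_th`, `max_th`, and `max_p` are exactly what a model-based scheme would adjust as the predicted queue state changes.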
Procedia PDF Downloads 539
25251 Assessing the Impact of Frailty in Elderly Patients Undergoing Emergency Laparotomies in Singapore
Authors: Zhao Jiashen, Serene Goh, Jerry Goo, Anthony Li, Lim Woan Wui, Paul Drakeford, Chen Qing Yan
Abstract:
Introduction: Emergency laparotomy (EL) is one of the most common surgeries performed in Singapore to treat acute abdominal pathologies. A significant proportion of these surgeries are performed in the geriatric population (65 years and older), who tend to have the highest postoperative morbidity and mortality and the highest utilization of intensive care resources. Frailty, a state of vulnerability to adverse outcomes arising from an accumulation of physiological deficits, has been shown to be associated with poorer outcomes after surgery and remains a strong driver of healthcare utilization and costs. To date, there is little understanding of its impact on emergency laparotomy outcomes. The objective of this study is to examine the impact of frailty on postoperative morbidity, mortality, and length of stay after EL. Methods: A retrospective study was conducted in two tertiary centres in Singapore, Tan Tock Seng Hospital and Khoo Teck Puat Hospital, during the period from January to December 2019. Patients aged 65 years and above who underwent emergency laparotomy for intestinal obstruction, perforated viscus, bowel ischaemia, adhesiolysis, gastrointestinal bleed, or another suspected acute abdomen were included. Laparotomies performed for trauma, cholecystectomy, appendectomy, vascular surgery, and non-GI surgery were excluded. The Clinical Frailty Scale (CFS) developed by the Canadian Study of Health and Aging (CSHA) was used; a score of 1 to 4 was defined as non-frail and 5 to 7 as frail. We compared the clinical outcomes of elderly patients in the frail and non-frail groups. Results: There were 233 elderly patients who underwent EL during the study period. Up to 26.2% of patients were frail. Patients who were frail tended to be older, 79 ± 7 vs 79 ± 5 years of age, p<0.01. Gender distribution was equal in both groups.
The indication for emergency laparotomy, time from diagnosis to surgery, and presence of consultant surgeons and anaesthetists in the operating theatre were comparable (p>0.05). Patients in the frail group were more likely to receive a postoperative geriatric assessment than those in the non-frail group, 49.2% vs. 27.9% (p<0.01). Postoperative complications were comparable (p>0.05). The length of stay in the critical care unit was longer for frail patients, 2 (IQR 1-6.5) versus 1 (IQR 0-4) days, p<0.01. Frailty, but not age, was found to be an independent predictor of 90-day mortality, OR 2.9 (1.1-7.4), p=0.03. Conclusion: Up to one-fourth of the elderly patients who underwent EL were frail. Frailty was associated with a longer length of stay in the critical care unit and a 90-day mortality rate more than three times that of non-frail counterparts. P-POSSUM was a better predictor of 90-day mortality in the non-frail group than in the frail group. As frailty scoring was a significant predictor of 90-day mortality, its integration into acute surgical units to facilitate shared decision-making and discharge planning should be considered. Keywords: frailty, elderly, emergency, laparotomy
Procedia PDF Downloads 145
25250 Exploring Data Stewardship in Fog Networking Using Blockchain Algorithm
Authors: Ruvaitha Banu, Amaladhithyan Krishnamoorthy
Abstract:
IoT networks today solve various consumer problems, from home automation systems to aiding autonomous vehicles, through the deployment of multiple devices. For example, in an autonomous vehicle environment, multiple sensors are available on roads to monitor weather and road conditions and interact with each other to help the vehicle reach its destination safely and in a timely manner. IoT systems are predominantly dependent on the cloud environment for data storage and computing needs, which results in latency problems. With the advent of fog networks, some of this storage and computing is pushed to the edge/fog nodes, saving network bandwidth and reducing latency proportionally. Managing the data stored in these fog nodes becomes crucial, as they might also store sensitive information required for a certain application. Data management in fog nodes is strenuous because fog networks are dynamic in their availability and hardware capability. It becomes more challenging when the nodes in the network also have short lifespans, detaching and joining frequently. When an end user or fog node wants to access, read, or write data stored in another fog node, a new protocol becomes necessary to access and manage the data, as a conventional static way of managing data does not work in fog networks. The proposed solution discusses a protocol that defines sensitivity levels for the data being written and read. Additionally, a distinct data distribution and replication model among the fog nodes is established to decentralize the access mechanism. In this paper, the proposed model implements stewardship of the data stored in the fog nodes using reinforcement learning, so that access to the data is determined dynamically based on the requests. Keywords: IoT, fog networks, data stewardship, dynamic access policy
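A static version of the sensitivity-level and replication policy can be sketched as follows. The labels, clearance ordering, and replica counts are illustrative assumptions; the paper's model learns the access decision dynamically with reinforcement learning rather than hard-coding it.

```python
SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2}
REPLICAS = {"public": 3, "internal": 2, "confidential": 1}

def can_access(clearance, level):
    """A requester may read or write data only if its clearance is
    at or above the sensitivity level assigned at write time."""
    return SENSITIVITY[clearance] >= SENSITIVITY[level]

def place_replicas(level, alive_nodes):
    """Replicate less sensitive data more widely so it survives node
    churn; confidential data stays on one node to limit exposure."""
    return alive_nodes[:REPLICAS[level]]
```

Replacing the fixed `can_access` rule with a learned policy is what lets access decisions adapt as nodes join and leave.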
Procedia PDF Downloads 59
25249 An Automated Approach to Consolidate Galileo System Availability
Authors: Marie Bieber, Fabrice Cosson, Olivier Schmitt
Abstract:
Europe's Global Navigation Satellite System, Galileo, provides worldwide positioning and navigation services. The satellites in space are only one part of the Galileo system; an extensive ground infrastructure is essential to oversee the satellites and ensure accurate navigation signals. High reliability and availability of the entire Galileo system are crucial to continuously providing high-quality positioning information to users. Outages are tracked, and operational availability is regularly assessed. A highly flexible and adaptive tool has been developed to automate the Galileo system availability analysis. Not only does it enable quick availability consolidation, but it also provides first steps towards improving the data quality of the maintenance tickets used for the analysis. This includes data import and preparation, with a focus on processing strings used for classification and identifying faulty data. Furthermore, the tool can handle a low amount of data, which is a major constraint when the aim is to provide accurate statistics. Keywords: availability, data quality, system performance, Galileo, aerospace
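The core of such a consolidation is straightforward once the maintenance tickets are cleaned: merge overlapping outage intervals and divide the remaining uptime by the reporting period. A minimal sketch, where the ticket format is an assumption rather than the tool's actual schema:

```python
from datetime import datetime

def availability(outages, start, end):
    """Fraction of the reporting period not covered by outages.
    outages: list of (outage_start, outage_end) datetimes;
    overlapping or duplicated tickets are merged before summing
    downtime, so double-counting does not skew the statistic."""
    merged = []
    for s, e in sorted(outages):
        s, e = max(s, start), min(e, end)   # clip to the period
        if e <= s:
            continue
        if merged and s <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], e)  # overlap: extend
        else:
            merged.append([s, e])
    down = sum((e - s).total_seconds() for s, e in merged)
    return 1.0 - down / (end - start).total_seconds()

# Two overlapping tickets covering 18 h of a 10-day period.
a = availability(
    [(datetime(2024, 1, 1, 0), datetime(2024, 1, 1, 12)),
     (datetime(2024, 1, 1, 6), datetime(2024, 1, 1, 18))],
    datetime(2024, 1, 1), datetime(2024, 1, 11))
```

The merge step is where ticket data quality matters most: duplicated or mis-dated tickets show up as implausible overlaps and can be flagged during preparation.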
Procedia PDF Downloads 167
25248 Carbon Dioxide Capture and Utilization by Using Seawater-Based Industrial Wastewater and Alkanolamine Absorbents
Authors: Dongwoo Kang, Yunsung Yoo, Injun Kim, Jongin Lee, Jinwon Park
Abstract:
Since the industrial revolution, energy usage by human beings has increased drastically, resulting in enormous emissions of carbon dioxide into the atmosphere. The high concentration of carbon dioxide is well recognized as the main driver of climate change, as it breaks the heat equilibrium of the earth. Many technologies have been developed to decrease carbon dioxide emissions. One method is to capture carbon dioxide after the combustion process using liquid absorbents. However, in some nations, captured carbon dioxide cannot be treated and stored properly due to their geological structures. Also, captured carbon dioxide can leak out when crustal activity is high. Hence, methods to convert carbon dioxide into stable and useful products have been developed, usually called CCU: Carbon Capture and Utilization. There are several ways to convert carbon dioxide into useful substances. For example, carbon dioxide can be converted into products such as diesel fuel, plastics, and polymers. However, these technologies require a great deal of energy to turn stable carbon dioxide into a reactive form. Hence, converting it into metal carbonate salts has been studied widely. When carbon dioxide is captured by alkanolamine-based liquid absorbents, it exists in ionic forms such as carbonate, carbamate, and bicarbonate. When adequate metal ions are added, a metal carbonate salt can be produced by an ionic reaction with fast kinetics. However, finding metal sources is one of the obstacles to commercializing this method. If natural resources such as calcium oxide were used to supply calcium ions, treating carbon dioxide would not be economically feasible. In this research, highly concentrated industrial wastewater produced from a refined salt production facility has been used as the metal-supplying source, especially for calcium cations.
To ensure the purity of the final products, calcium ions were first selectively separated in the form of gypsum dihydrate. After that, carbon dioxide was captured using alkanolamine-based absorbents, converting it into a reactive ionic form, and a high-purity calcium carbonate salt was then produced. The existence of calcium carbonate was confirmed by X-ray diffraction (XRD) and scanning electron microscopy (SEM) images. Carbon dioxide loading curves for absorption, conversion, and desorption are also provided. In order to investigate the possibility of absorbent reuse, reabsorption experiments were performed as well. The calcium carbonate produced as the final product has the potential to be used in various industrial fields, including the cement and paper-making industries and pharmaceutical engineering. Keywords: alkanolamine, calcium carbonate, climate change, seawater, industrial wastewater
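As a sanity check on the mass balance, standard molar masses (not figures from the paper) give the maximum calcium carbonate yield per tonne of captured CO2, assuming each mole of CO2 ends up as one mole of CaCO3:

```python
M_CO2 = 44.01     # g/mol, molar mass of carbon dioxide
M_CACO3 = 100.09  # g/mol, molar mass of calcium carbonate

def caco3_yield_t(co2_t, conversion=1.0):
    """Tonnes of CaCO3 obtainable from co2_t tonnes of captured CO2
    at the given fractional conversion (1.0 = complete reaction of
    CO2-derived carbonate with calcium ions)."""
    return co2_t * conversion * M_CACO3 / M_CO2

# One tonne of CO2 can yield at most about 2.27 t of CaCO3.
max_yield = caco3_yield_t(1.0)
```

The ratio also bounds the calcium demand: every tonne of CO2 mineralized requires a matching supply of calcium cations, which is why a concentrated wastewater stream is attractive as the metal source.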
Procedia PDF Downloads 185
25247 Use of In-line Data Analytics and Empirical Model for Early Fault Detection
Authors: Hyun-Woo Cho
Abstract:
Automatic process monitoring schemes are designed to give early warnings of unusual process events or abnormalities as soon as possible. To this end, various techniques have been developed and utilized in many industrial processes, including multivariate statistical methods, representation techniques in reduced spaces, and kernel-based nonlinear techniques. This work presents a nonlinear empirical monitoring scheme for batch-type production processes with incomplete process measurement data. While normal operation data are easy to obtain, fault data occur infrequently and are thus difficult to collect. In this work, noise filtering steps are added in order to enhance monitoring performance by eliminating irrelevant information from the data. The performance of the monitoring scheme was demonstrated using batch process data. The results showed that the monitoring performance improved significantly in terms of the detection success rate for process faults. Keywords: batch process, monitoring, measurement, kernel method
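One common form of kernel-based monitoring scores a new sample by its average Gaussian-kernel similarity to the normal operation data and flags low-similarity samples as potential faults. The sketch below illustrates the idea; the kernel width and threshold are illustrative assumptions, not the paper's tuned scheme.

```python
import math

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel between two measurement vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def similarity_to_normal(sample, normal_data, gamma=0.5):
    """Average kernel similarity of a sample to the normal operation
    data: values near 1 mean normal, values near 0 mean unusual."""
    return sum(rbf(sample, n, gamma) for n in normal_data) / len(normal_data)

def is_fault(sample, normal_data, threshold=0.5):
    """Flag a sample whose similarity falls below the threshold."""
    return similarity_to_normal(sample, normal_data) < threshold
```

Because only normal operation data are needed to build the reference set, this fits the setting described above, where fault data are scarce; the noise filtering step would clean `normal_data` before it is used as the reference.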
Procedia PDF Downloads 323
25246 A Systematic Review of Business Strategies Which Can Make District Heating a Platform for Sustainable Development of Other Sectors
Authors: Louise Ödlund, Danica Djuric Ilic
Abstract:
Sustainable development involves many challenges related to energy use, such as (1) developing flexibility on the demand side of electricity systems due to an increased share of intermittent electricity sources (e.g., wind and solar power), (2) overcoming economic challenges related to an increased share of renewable energy in the transport sector, (3) increasing the efficiency of biomass use, and (4) increasing the utilization of industrial excess heat (approximately two-thirds of the energy currently used in the EU is lost in the form of excess and waste heat). The European Commission has recognized DH technology as of essential importance for reaching sustainability. Flexibility in the fuel mix and the possibilities of industrial waste heat utilization, combined heat and power (CHP) production, and energy recovery through waste incineration are only some of the benefits that characterize DH technology. The aim of this study is to provide an overview of possible business strategies that would enable DH to play an important role in future sustainable energy systems. The methodology used in this study is a systematic literature review. The study takes a systematic approach in which DH is seen as part of an integrated system that also includes the transport, industrial, and electricity sectors. DH technology can play a decisive role in overcoming the sustainability challenges related to our energy use. The introduction of biofuels in the transport sector can be facilitated by integrating biofuel and DH production in local DH systems. This would enable the development of local biofuel supply chains and reduce biofuel production costs. In this way, DH can also promote the development of biofuel production technologies that are not yet mature.
Converting the energy for running industrial processes from fossil fuels and electricity to DH (above all, biomass- and waste-based DH) and delivering excess heat from industrial processes to local DH systems would make industry less dependent on fossil fuels and fossil-fuel-based electricity, while increasing the energy efficiency of the industrial sector and reducing production costs. The electricity sector would also benefit from these measures. Reducing electricity use in the industrial sector while increasing CHP production in local DH systems would replace fossil-based electricity production with electricity from biomass- or waste-fueled CHP plants and reduce the capacity requirements on the national electricity grid (i.e., it would reduce the pressure on the bottlenecks in the grid). Furthermore, by operating their centrally controlled heat pumps and CHP plants in response to variations in intermittent electricity production, DH companies may enable an increased share of intermittent electricity production in the national grid. Keywords: energy system, district heating, sustainable business strategies, sustainable development
Procedia PDF Downloads 169
25245 The Impact of the General Data Protection Regulation on Human Resources Management in Schools
Authors: Alexandra Aslanidou
Abstract:
The General Data Protection Regulation (GDPR), concerning the protection of natural persons within the European Union with regard to the processing of personal data and the free movement of such data, became applicable in the European Union (EU) on 25 May 2018 and transformed the way personal data were treated under the Data Protection Directive (DPD) regime, generating sweeping organizational changes in both the public sector and business. A social practice considerably influenced in its day-to-day operations is Human Resource (HR) management, for which the importance of the GDPR cannot be underestimated, because HR processes personal data coming in all shapes and sizes from many different systems and sources. The proper functioning of an HR department is decisive, specifically in human-centered, service-oriented environments such as education, because HR operations in schools, when conducted effectively, determine the quality of the provided services and consequently have a considerable impact on the success of the educational system. The purpose of this paper is to analyze the decisive role that the GDPR plays in HR departments that operate in schools. In order to evaluate the practical aftermath of the Regulation during the first months of its applicability, a comparative use-case analysis was undertaken in five highly dynamic schools across three EU Member States. Keywords: general data protection regulation, human resource management, educational system
Procedia PDF Downloads 100
25244 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data
Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz
Abstract:
In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, there is a tremendous amount of real-time spatial data generated every day. The growth of the data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period regardless of the load on the system. But with the huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data; they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. First, we find the optimal attribute sequence using the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the data amount of each partition within a balanced limit and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This improves QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data.
The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable and that it outperforms comparable algorithms. Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query
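The role of the Hamming distance in the keywords can be illustrated with a toy version of vertical attribute grouping: represent each attribute by a binary usage vector over the query workload, then greedily co-locate attributes with similar vectors so the most frequent queries touch few partitions. This is only a sketch of the general idea; the paper's Matching algorithm and cost model are more elaborate.

```python
def hamming(a, b):
    """Hamming distance between two binary attribute-usage vectors
    (one bit per query: does the query touch this attribute?)."""
    return sum(x != y for x, y in zip(a, b))

def partition_attributes(usage, max_dist=1):
    """Greedy sketch: place each attribute into the first vertical
    partition whose representative attribute has a similar usage
    vector; otherwise open a new partition."""
    partitions = []  # each entry: (representative vector, member names)
    for name, vec in usage.items():
        for rep, members in partitions:
            if hamming(rep, vec) <= max_dist:
                members.append(name)
                break
        else:
            partitions.append((vec, [name]))
    return [members for _, members in partitions]
```

Attributes that are always queried together (distance 0) end up in the same fragment, so the hot queries can run against one partition while rarely co-accessed attributes are split off and scanned in parallel.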
Procedia PDF Downloads 157
25243 A Hybrid Data-Handler Module Based Approach for Prioritization in Quality Function Deployment
Authors: P. Venu, Joeju M. Issac
Abstract:
Quality Function Deployment (QFD) is a systematic technique that creates a platform where customer responses can be positively converted into design attributes. The accuracy of a QFD process heavily depends on the data it handles, which is captured from customers or QFD team members. Customized computer programs that perform Quality Function Deployment within a stipulated time have been used by various companies across the globe. These programs rely heavily on the storage and retrieval of data in a common database. This database must act as a reliable source with minimal missing or erroneous values in order to perform actual prioritization. This paper introduces a missing/error data handler module that uses a Genetic Algorithm and fuzzy numbers. The prioritization of customer requirements for sesame oil is illustrated, and a comparison is made between the proposed data-handler-module-based deployment and manual deployment. Keywords: hybrid data handler, QFD, prioritization, module-based deployment
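The fuzzy-number side of such a handler can be sketched simply: each customer rating is a triangular fuzzy number (a, b, c), and a missing response is replaced by the fuzzy mean of the observed ones before defuzzification. This is a simple stand-in for the GA-driven imputation described above; the representation and averaging rule are illustrative assumptions.

```python
def centroid(tfn):
    """Defuzzify a triangular fuzzy number (a, b, c) by its centroid."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def fuzzy_mean(tfns):
    """Component-wise average of triangular fuzzy numbers."""
    n = len(tfns)
    return tuple(sum(t[i] for t in tfns) / n for i in range(3))

def fill_missing(ratings):
    """Replace missing ratings (None) with the fuzzy mean of the
    observed ones, so prioritization can proceed on complete data."""
    observed = [r for r in ratings if r is not None]
    fill = fuzzy_mean(observed)
    return [r if r is not None else fill for r in ratings]
```

Keeping the imputed value fuzzy, rather than collapsing it to a crisp number immediately, preserves the uncertainty of the guess through the prioritization step.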
Procedia PDF Downloads 297