Search results for: missing data estimation
24127 Cloud Shield: Model to Secure User Data While Using Content Delivery Network Services
Authors: Rachna Jain, Sushila Madan, Bindu Garg
Abstract:
Cloud computing has become a key powerhouse for numerous organizations as they shift their data to the cloud environment. In recent years, cloud-based services have been used on a large scale for content storage, distribution and processing. Various issues observed in the cloud computing environment need to be addressed, and security and privacy are found to be the topmost concerns. In this paper, a novel security model is proposed to secure data while utilizing CDN services such as image-to-icon conversion. A CDN service is a content delivery service that converts, for example, an image to an icon, Word to PDF, or LaTeX to PDF. The presented model converts an image into an icon while keeping the image secret. Security is imparted so that the image can be encrypted and decrypted by the data owners only. The paper also discusses how the server performs multiplication and selection on encrypted data without decryption. The data can be an image file, a word file, or an audio or video file. Moreover, the proposed model is capable of multiplying images, encrypting them and sending them to a server application for conversion. Eventually, the prime objective is to encrypt an image and convert the encrypted image to an image icon by utilizing homomorphic encryption.
Keywords: cloud computing, user data security, homomorphic encryption, image multiplication, CDN service
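Although the abstract does not name the scheme, a multiplicatively homomorphic cipher is the simplest way to see how a server can multiply encrypted values without decrypting them. The sketch below uses unpadded "textbook" RSA with toy parameters, purely as an assumed illustration; it is not the paper's implementation.

```python
# Minimal sketch of multiplicative homomorphism using unpadded ("textbook")
# RSA. The toy primes are for illustration only; the paper's actual scheme
# and parameters are not specified in the abstract.
def rsa_keygen():
    p, q = 61, 53                    # toy primes; real keys use large primes
    n, phi = p * q, (p - 1) * (q - 1)
    e = 17                           # public exponent, coprime with phi
    d = pow(e, -1, phi)              # private exponent (modular inverse)
    return (e, n), (d, n)

def encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)

def decrypt(c, priv):
    d, n = priv
    return pow(c, d, n)

pub, priv = rsa_keygen()
m1, m2 = 7, 6                        # two pixel values the owner wants multiplied
c1, c2 = encrypt(m1, pub), encrypt(m2, pub)

# The server multiplies ciphertexts without ever decrypting:
c_prod = (c1 * c2) % pub[1]
print(decrypt(c_prod, priv))         # 42, i.e. m1 * m2 recovered by the owner
```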
24126 Data Mining Approach: Classification Model Evaluation
Authors: Lubabatu Sada Sodangi
Abstract:
The rapid growth in the exchange and accessibility of information via the internet has led many organisations to acquire data on their own operations. The aim of data mining is to analyse the different behaviours of a dataset through observation, although the subset of the dataset being analysed may not display all the behaviours and relationships of the entire dataset and may therefore not represent other parts of it. A range of techniques is used in data mining to determine the hidden or unknown information in datasets. In this paper, the performance of two algorithms, Chi-Square Automatic Interaction Detection (CHAID) and the multilayer perceptron (MLP), is compared on the Adult dataset to find the percentage of adults who earn > 50k per year and those who earn <= 50k. The two algorithms were studied and compared using IBM SPSS Statistics software. The result for CHAID shows that the most important predictors are relationship and education: those who are married (husband), hold a Bachelor's, Master's, Doctorate or Prof-school qualification, and are aged > 41 and < 57 earn > 50k. The multilayer perceptron identifies marital status and capital gain as the most important predictors of income: individuals whose capital gain is less than 6,849 and who are single, separated or widowed earn <= 50k, whereas individuals whose capital gain is > 6,849, who work > 35 hrs/wk and are > 27 yrs old earn > 50k. Comparing the two algorithms shows that both are reliable, but CHAID is more strongly reliable, clearly showing that relationship and education contribute to the prediction, as displayed in the data visualisation.
Keywords: data mining, CHAID, multi-layer perceptron, SPSS, Adult dataset
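For readers who want to reproduce the comparison outside SPSS, the sketch below sets up two models on the UCI Adult dataset. scikit-learn has no CHAID implementation, so a CART decision tree is an assumed stand-in for the tree-based model, and fetching the data from OpenML (with network access) is also an assumption.

```python
# Rough sketch of the comparison on the UCI Adult dataset. scikit-learn has no
# CHAID implementation, so a CART decision tree stands in for the tree model;
# the paper itself used IBM SPSS.
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.compose import make_column_transformer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = fetch_openml("adult", version=2, as_frame=True, return_X_y=True)
X = X.dropna()                        # drop records with missing fields
y = y.loc[X.index]

pre = make_column_transformer(
    (OneHotEncoder(handle_unknown="ignore"), X.select_dtypes("category").columns),
    (StandardScaler(), X.select_dtypes("number").columns))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=6)),
                  ("MLP", MLPClassifier(hidden_layer_sizes=(32,), max_iter=300))]:
    model = make_pipeline(pre, clf).fit(X_tr, y_tr)
    print(name, "accuracy:", round(model.score(X_te, y_te), 3))
```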
24125 Developing an Information Model of Manufacturing Process for Sustainability
Authors: Jae Hyun Lee
Abstract:
Manufacturing companies use life-cycle inventory databases to analyze the sustainability of their manufacturing processes. Life-cycle inventory data provides reference data which may not be accurate for a specific company, while collecting accurate data on a specific company's manufacturing processes requires enormous time and effort. An information model of typical manufacturing processes can reduce the time and effort needed to obtain appropriate reference data for a specific company. This paper shows an attempt to build an abstract information model which can be used to develop information models for specific manufacturing processes.
Keywords: process information model, sustainability, OWL, manufacturing
24124 Embolism: How Changes in Xylem Sap Surface Tension Affect the Resistance against Hydraulic Failure
Authors: Adriano Losso, Birgit Dämon, Stefan Mayr
Abstract:
In vascular plants, water flows from roots to leaves in a metastable state, and even a small perturbation of the system can lead to a sudden transition from the liquid to the vapor phase, resulting in xylem embolism (cavitation). Xylem embolism, induced by drought stress and/or freezing stress, is caused by the aspiration of gaseous bubbles into xylem conduits from adjacent gas-filled compartments through pit membrane pores ('air seeding'). At water potentials less negative than the threshold for air seeding, the surface tension (γ) stabilizes the air-water interface and thus prevents air from passing the pit pores. This probably also holds for conifers, where this effect occurs at the edge of the sealed torus. Accordingly, it was experimentally demonstrated that γ influences air seeding, but information on the relevance of this effect under field conditions is missing. In this study, we analyzed seasonal changes in the xylem sap γ of two conifers growing at the alpine timberline (Picea abies and Pinus mugo). In addition, cut branches were perfused (40 min perfusion at 0.004 MPa) with solutions of different γ (i.e., distilled and degassed water and 2, 5 and 15% (v/v) ethanol-water solutions, corresponding to γ of 74, 65, 55 and 45 mN m-1, respectively) and their vulnerability to drought-induced embolism analyzed via the centrifuge technique (Cavitron). In both species, xylem sap γ changed considerably (ca. 53-67 and ca. 50-68 mN m-1 in P. abies and P. mugo, respectively) over the season. Branches perfused with low-γ solutions showed reduced resistance against drought-induced embolism in both species. A significant linear relationship (P < 0.001) between P12, P50 and P88 (i.e., the water potentials at 12, 50 and 88% loss of conductivity) and xylem sap γ was found. Based on this correlation, a variation in P50 between -3.10 and -3.83 MPa (P. abies) and between -3.21 and -4.11 MPa (P. mugo) over the season could be estimated. The results demonstrate that changes in the γ of the xylem sap can considerably influence a tree's resistance to drought-induced embolism. They indicate that vulnerability analyses, normally conducted at a γ near that of pure water, might often underestimate vulnerabilities under field conditions. For the studied timberline conifers, seasonal changes in γ might be especially relevant in winter, when frost drought and freezing stress can lead to excessive embolism.
Keywords: conifers, Picea abies, Pinus mugo, timberline
24123 Status of Reintroduced Houbara Bustard Chlamydotis macqueeni in Saudi Arabia
Authors: Mohammad Zafar-ul Islam
Abstract:
The breeding programme for the Houbara bustard was started in Saudi Arabia in 1986 to undertake the restoration of native species such as the Houbara through a re-introduction programme involving the release of captive-bred birds into the wild. Two sites were selected for Houbara re-introduction, i.e., the Mahazat as-Sayd and Saja Umm Ar-Rimth protected areas, in 1988 and 1998 respectively. Both areas are fenced, fairly level sandy plains with a few rock outcrops. Captive-bred Houbara have been released in Mahazat by NWRC since 1992, and those birds have been breeding successfully since then. The nesting season of the Houbara at Mahazat runs from February to May, and on average 20-25 nests are located each year, but no nesting has been recorded in Saja. Houbara are monitored using radio transmitters through aerial tracking as well as terrestrial tracking by vehicle. The total population of Houbara in Mahazat is roughly estimated at 300-400 birds, using the following: N = n1+n2+n3+n4+n5 (n1 = released or wild-born, radio-tagged, regularly monitored/checked; n2 = radio-tagged, missing; n3 = wild-born chicks not recorded; n4 = wild-born chicks recorded but not tagged; n5 = immigrants). However, in Saja only 4-7 individuals have survived since 2001, because most of the birds are predated immediately after release. The mean annual home range was also calculated, using the Kernel and Convex polygon methods with Range VII software, and the minimum density of Houbara was calculated as well. To track Houbara movement and migration to other regions, two captive-reared male Houbara that were released into the wild and one wild-born female were fitted with Platform Transmitter Terminals (PTTs). The home ranges show that the wild-born female moved over a larger area than the two males. More areas need to be selected for the re-introduction programme to establish a network of sites that gives these birds easy access to move and mingle with wild Houbara. Some potential sites have been proposed, which require further surveys to check habitat suitability.
Keywords: re-introduction, survival rate, home range, Saudi Arabia
24122 Liraglutide Augments Extra Body Weight Loss after Sleeve Gastrectomy without Change in Intrahepatic and Intra-Pancreatic Fat in Obese Individuals: Randomized, Controlled Study
Authors: Ashu Rastogi, Uttam Thakur, Jimmy Pathak, Rajesh Gupta, Anil Bhansali
Abstract:
Introduction: Liraglutide is known to induce weight loss and metabolic benefits in obese individuals. However, its effects after sleeve gastrectomy are not known. Methods: People with obesity (BMI > 27.5 kg/m2) underwent laparoscopic sleeve gastrectomy (LSG). Subsequently, participants were randomized to receive either 0.6 mg liraglutide subcutaneously daily from 6 weeks post-surgery, continued until 24 weeks (L-L group), or placebo (L-P group). Patients were assessed before surgery (baseline) and 6, 12, 18 and 24 weeks after surgery for height, weight, waist and hip circumference, BMI, body fat percentage, HbA1c, fasting C-peptide, fasting insulin, HOMA-IR, HOMA-β and GLP-1 levels (after a standard OGTT). MRI of the abdomen was performed prior to surgery and at 24 weeks post-operatively to estimate intra-pancreatic and intra-hepatic fat content. Outcome measures: Primary outcomes were changes in the metabolic variables of fasting and stimulated GLP-1, insulin, C-peptide and plasma glucose levels. Secondary variables were indices of insulin resistance (HOMA-IR, Matsuda index) and pancreatic and hepatic steatosis. Results: Thirty-eight patients undergoing LSG were screened and 29 participants were enrolled. Two patients withdrew consent and one patient died of an acute coronary event; 26 patients were randomized and their data analysed. Median BMI was 40.73±3.66 and 46.25±6.51, and EBW was 49.225±11.14 and 651.48±4.85, in the L-P and L-L groups, respectively. Baseline FPG was 132±51.48 and 125±39.68; fasting insulin 21.5±13.99 and 13.15±9.20; fasting GLP-1 2.4±0.37 and 2.4±0.32; AUC GLP-1 340.78±44 and 332.32±44.1; HOMA-IR 7.0±4.2 and 4.42±4.5 in the L-P and L-L groups, respectively. EBW loss was 47±13.20 versus 65.59±24.20 (p < 0.05) in the placebo and liraglutide groups. However, no inter-group difference in metabolic parameters was observed, in spite of significant intra-group changes 6 months after LSG. Intra-pancreatic fat prior to surgery was 3.21±1.7 and 2.2±0.9 (p = 0.38), decreasing to 2.14±1.8 and 1.06±0.8 (p = 0.25) at 6 months in the L-P and L-L groups, respectively. Similarly, intra-hepatic fat was 1.97±0.27 and 1.88±0.36 (p = 0.361) at baseline, decreasing to 1.14±0.44 and 1.36±0.47 (p = 0.465) at 6 months in the L-P and L-L groups, respectively. Conclusion: Liraglutide augments extra body weight loss after sleeve gastrectomy. A decrease in intra-pancreatic and intra-hepatic fat is noticed after bariatric surgery, without an additive benefit of liraglutide administration.
Keywords: sleeve gastrectomy, liraglutide, intra-pancreatic fat, insulin
24121 Effect of Specimen Thickness on Probability Distribution of Grown Crack Size in Magnesium Alloys
Authors: Seon Soon Choi
Abstract:
Fatigue crack growth is stochastic because fatigue behavior involves uncertainty and randomness. It is therefore necessary to determine the probability distribution of the grown crack size at a specific fatigue crack propagation life, both for the maintenance of structures and for reliability estimation. The essential purpose of this study is to present the probability distribution that best fits the grown crack size at a specified fatigue life in a rolled magnesium alloy under different specimen thickness conditions. Fatigue crack propagation experiments were carried out in laboratory air under three specimen thickness conditions using AZ31 to investigate stochastic crack growth behavior. The goodness of fit of the probability distributions of the grown crack size under the different specimen thickness conditions was assessed with the Anderson-Darling test. The effect of specimen thickness on the variability of the grown crack size was also investigated.
Keywords: crack size, fatigue crack propagation, magnesium alloys, probability distribution, specimen thickness
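The goodness-of-fit step can be sketched with scipy's Anderson-Darling test; the crack-size sample below is synthetic, since the AZ31 measurements are not given in the abstract, and the candidate families are those scipy supports directly.

```python
# Sketch of the goodness-of-fit step with scipy's Anderson-Darling test. The
# crack-size sample is synthetic; the AZ31 measurements behind the paper are
# not published in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
crack_sizes = rng.lognormal(mean=1.0, sigma=0.15, size=40)   # mm, synthetic

for dist in ("norm", "gumbel_r", "logistic"):    # candidate families
    res = stats.anderson(crack_sizes, dist=dist)
    print(f"{dist:9s} A2 = {res.statistic:.3f}  "
          f"critical value at {res.significance_level[2]:.0f}% = "
          f"{res.critical_values[2]:.3f}")
# A statistic below the critical value means the family is not rejected;
# the smallest A2 suggests the best-fitting candidate.
```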
24120 Dissecting Big Trajectory Data to Analyse Road Network Travel Efficiency
Authors: Rania Alshikhe, Vinita Jindal
Abstract:
Digital innovation has played a crucial role in managing smart transportation. For this purpose, big trajectory data collected from traveling vehicles, such as taxis, through installed global positioning system (GPS)-enabled devices can be utilized. Such data offer an unprecedented opportunity to trace the movements of vehicles at fine spatiotemporal granularity. This paper explores big trajectory data to measure the travel efficiency of road networks using the proposed statistical travel efficiency measure (STEM) across an entire city. Further, it identifies the causes of low travel efficiency using the proposed least square approximation network-based causality exploration (LANCE). Finally, the resulting data analysis reveals the causes of low travel efficiency, along with the road segments that need to be optimized to improve traffic conditions and thus minimize the average travel time from a given point A to point B in the road network. The obtained results show that our proposed approach outperforms the baseline algorithms for measuring the travel efficiency of the road network.
Keywords: GPS trajectory, road network, taxi trips, digital map, big data, STEM, LANCE
24119 Monitoring Large-Coverage Forest Canopy Height by Integrating LiDAR and Sentinel-2 Images
Authors: Xiaobo Liu, Rakesh Mishra, Yun Zhang
Abstract:
Continuous monitoring of forest canopy height with large coverage is essential for obtaining forest carbon stocks and emissions, quantifying biomass, analyzing vegetation coverage, and determining biodiversity. LiDAR can be used to collect an accurate woody vegetation structure, such as canopy height. However, LiDAR's coverage is usually limited because of its high cost and limited maneuverability, which constrains its use for dynamic and large-area forest canopy monitoring. On the other hand, optical satellite images, like Sentinel-2, can cover large forest areas with a high repeat rate, but they do not carry height information. Hence, exploring ways to integrate LiDAR data and Sentinel-2 images to enlarge the coverage of forest canopy height prediction and increase the prediction repeat rate has been an active research topic in the environmental remote sensing community. In this study, we explore the potential of training a Random Forest Regression (RFR) model and a Convolutional Neural Network (CNN) model, respectively, to develop two predictive models for predicting and validating the forest canopy height of the Acadia Forest in New Brunswick, Canada, with a 10 m ground sampling distance (GSD), for the years 2018 and 2021. Two 10 m airborne LiDAR-derived canopy height models, one for 2018 and one for 2021, are used as ground truth to train and validate the RFR and CNN predictive models. To evaluate the prediction performance of the trained RFR and CNN models, two new predicted canopy height maps (CHMs), one for 2018 and one for 2021, are generated using the trained models and 10 m Sentinel-2 images of 2018 and 2021, respectively. The two 10 m predicted CHMs from Sentinel-2 images are then compared with the two 10 m airborne LiDAR-derived canopy height models for accuracy assessment. The validation results show a mean absolute error (MAE) for 2018 of 2.93 m for the RFR model and 1.71 m for the CNN model, and an MAE for 2021 of 3.35 m for the RFR model and 3.78 m for the CNN model. These results demonstrate the feasibility of using the RFR and CNN models developed in this research for predicting large-coverage forest canopy height at 10 m spatial resolution and a high revisit rate.
Keywords: remote sensing, forest canopy height, LiDAR, Sentinel-2, artificial intelligence, random forest regression, convolutional neural network
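A minimal sketch of the RFR branch of this workflow follows, under the assumption of synthetic per-pixel data standing in for the actual Sentinel-2 bands and the LiDAR-derived canopy height model.

```python
# Minimal sketch of the RFR path: predict canopy height per 10 m pixel from
# Sentinel-2 band values, with LiDAR-derived heights as ground truth. The
# arrays are synthetic; band count and preprocessing are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n_pixels = 5000
bands = rng.uniform(0, 1, size=(n_pixels, 10))     # 10 Sentinel-2 bands / pixel
lidar_chm = 30 * bands[:, 7] - 10 * bands[:, 3] \
            + rng.normal(0, 1.5, n_pixels)         # synthetic LiDAR heights, m

X_tr, X_te, y_tr, y_te = train_test_split(bands, lidar_chm,
                                          test_size=0.25, random_state=0)
rfr = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE (m):", round(mean_absolute_error(y_te, rfr.predict(X_te)), 2))
```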
24118 Mitigating Supply Chain Risk for Sustainability Using Big Data Knowledge: Evidence from the Manufacturing Supply Chain
Authors: Mani Venkatesh, Catarina Delgado, Purvishkumar Patel
Abstract:
The sustainable supply chain is gaining popularity among practitioners because of increased environmental degradation and stakeholder awareness. On the other hand, supply chain risk management is crucial for practitioners, as risk can potentially disrupt supply chain operations. Predicting and addressing the risk caused by social issues in the supply chain is of paramount importance to the sustainable enterprise. More recently, the usage of big data analytics for forecasting business trends has been gaining momentum among professionals. The aim of this research is to explore the application of big data and predictive analytics in successfully mitigating supply chain social risk, and to demonstrate how such mitigation can help in achieving sustainability (environmental, economic and social). The method involves the identification and validation of social issues in the supply chain by an expert panel and survey. Later, we use a case study to illustrate the application of big data in the successful identification and mitigation of social issues in the supply chain. Our results show that a company can predict various social issues through big data and predictive analytics and mitigate the social risk. We also discuss the implications of this research for the body of knowledge and practice.
Keywords: big data, sustainability, supply chain social sustainability, social risk, case study
24117 Improving the Analytical Power of Dynamic DEA Models, by the Consideration of the Shape of the Distribution of Inputs/Outputs Data: A Linear Piecewise Decomposition Approach
Authors: Elias K. Maragos, Petros E. Maravelakis
Abstract:
In Dynamic Data Envelopment Analysis (DDEA), a subfield of Data Envelopment Analysis (DEA), the productivity of Decision Making Units (DMUs) is considered in relation to time. In this case, as accepted by most researchers, there are outputs produced by a DMU to be used as inputs at a future time; those outputs are known as intermediates. The common models in DDEA do not take into account the shape of the distribution of those input, output or intermediate data, assuming that the distribution of their virtual value does not deviate from linearity. This weakness limits the accuracy and analytical power of the traditional DDEA models. In this paper, the authors, using the concept of piecewise linear inputs and outputs, propose an extended DDEA model. The proposed model increases the flexibility of the traditional DDEA models and improves the measurement of the dynamic performance of DMUs.
Keywords: Dynamic Data Envelopment Analysis, DDEA, piecewise linear inputs, piecewise linear outputs
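For orientation, the sketch below solves the standard static, input-oriented CCR efficiency score that the proposed piecewise-linear DDEA model extends; the dynamic links (intermediates) and piecewise virtual values are beyond this toy, and the data are invented.

```python
# Sketch of the static, input-oriented CCR score that the proposed
# piecewise-linear DDEA model extends. Data are invented; the dynamic links
# (intermediates) and piecewise virtual values are not modelled here.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0, 5.0],     # input 1 for 4 DMUs
              [3.0, 1.0, 2.0, 4.0]])    # input 2
Y = np.array([[1.0, 1.0, 2.0, 2.5]])    # single output

n = X.shape[1]
for o in range(n):                      # score each DMU in turn
    c = np.r_[1.0, np.zeros(n)]         # minimize theta; vars = [theta, lambdas]
    A_in = np.c_[-X[:, [o]], X]         # sum_j lambda_j x_ij <= theta * x_io
    A_out = np.c_[np.zeros((Y.shape[0], 1)), -Y]   # sum_j lambda_j y_rj >= y_ro
    b = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b,
                  bounds=[(None, None)] + [(0, None)] * n)
    print(f"DMU {o}: efficiency theta = {res.x[0]:.3f}")
```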
24116 Estimation of Functional Response Model by Supervised Functional Principal Component Analysis
Authors: Hyon I. Paek, Sang Rim Kim, Hyon A. Ryu
Abstract:
In functional linear regression, one typical problem is dimension reduction. Compared with multivariate linear regression, functional linear regression is regarded as an infinite-dimensional case, and the main task is to reduce the dimensions of the functional response and the functional predictors. One common approach is to apply functional principal component analysis (FPCA) to the functional predictors and then use a few leading functional principal components (FPCs) to predict the functional model. The leading FPCs estimated by typical FPCA explain a major portion of the variation of the functional predictor, but these leading FPCs may not be strongly correlated with the functional response, so they may not be significant in the prediction of the response. In this paper, we propose a supervised functional principal component analysis method for a functional response model, with FPCs obtained by considering the correlation with the functional response. Our method would achieve better prediction accuracy than the typical FPCA method.
Keywords: supervised, functional principal component analysis, functional response, functional linear regression
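The selection idea can be sketched on discretized curves: rank principal components by their correlation with the response instead of by explained variance. The scalar response and the data below are assumptions made for brevity; true supervised FPCA handles smoothing and a functional response.

```python
# Toy sketch of supervised component selection on discretized curves: compute
# the predictor's principal components, then rank them by |correlation| with
# the response rather than by explained variance. Scalar response is an
# assumed simplification of the paper's functional response.
import numpy as np

rng = np.random.default_rng(0)
n, t = 200, 50
curves = rng.normal(size=(n, t)).cumsum(axis=1)     # rough functional predictors
y = curves[:, -1] - curves[:, t // 2] + rng.normal(0, 0.5, n)   # response

Xc = curves - curves.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pc_scores = U * s                                   # n x t component scores

corr = [abs(np.corrcoef(pc_scores[:, k], y)[0, 1]) for k in range(t)]
order = np.argsort(corr)[::-1]
print("components ranked by |corr| with response:", order[:3])
print("components ranked by variance (typical FPCA): [0 1 2]")
```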
24115 A Proposal of Advanced Key Performance Indicators for Assessing Six Performances of Construction Projects
Authors: Wi Sung Yoo, Seung Woo Lee, Youn Kyoung Hur, Sung Hwan Kim
Abstract:
Large-scale construction projects are continuously increasing, and the need for tools to monitor and evaluate project success is emphasized. At the construction industry level, there are limitations in deriving performance evaluation factors that reflect the diversity of construction sites, and in systems that can objectively evaluate and manage performance. Additionally, there are difficulties in integrating the structured and unstructured data generated at construction sites and deriving improvements. In this study, we propose Key Performance Indicators (KPIs) to enable performance evaluation that reflects the increased diversity of construction sites and the unstructured data generated, and we present a model for measuring performance with the derived indicators. The comprehensive performance of a unit construction site is assessed based on 6 areas (Time, Cost, Quality, Safety, Environment, Productivity) and 26 indicators. We collect performance indicator information from 30 construction sites that meet legal standards and have been successfully completed, and we apply data augmentation and optimization techniques to establish measurement standards for each indicator. In other words, the KPIs for construction site performance evaluation presented in this study provide standards for evaluating performance in six areas using institutional requirement data and document data. This can be expanded to establish a performance evaluation system considering the scale and type of construction project. The indicators are also expected to serve as a comprehensive indicator for the construction industry and as basic data for tracking competitiveness at the national level and establishing policies.
Keywords: key performance indicator, performance measurement, structured and unstructured data, data augmentation
24114 Toxicity of PPCPs on Adapted Sludge Community
Authors: G. Amariei, K. Boltes, R. Rosal, P. Leton
Abstract:
Wastewater treatment plants (WWTPs) are supposed to hold an important place in the reduction of emerging contaminants, but they provide an environment with potential for the development and/or spread of adaptation, as bacteria are continuously mixed with contaminants at sub-inhibitory concentrations. Reviewing the literature, there are few data available regarding the use of adapted bacteria forming an activated sludge community for toxicity assessment, and only individual validations have been performed. Therefore, the aim of this work was to study the toxicity of Triclosan (TCS) and Ibuprofen (IBU), individually and in binary combination, on adapted activated sludge (AS). For this purpose, a battery of biomarkers was assessed, involving oxidative stress and cytotoxicity responses: glutathione-S-transferase (GST), catalase (CAT) and viable cells with FDA. In addition, we compared the toxic effects on adapted bacteria with those on unadapted bacteria from previous research. The adapted AS comes from three continuous-flow AS laboratory systems: two systems received IBU and TCS individually, while the other received the binary combination, for 14 days. After adaptation, each bacterial culture condition was exposed to IBU, TCS and the combination for 12 h. The concentrations of IBU and TCS ranged over 0.5-4 mg/L and 0.012-0.1 mg/L, respectively. Batch toxicity experiments were performed using an Oxygraph system (Hansatech) to determine the activity of the CAT enzyme, based on quantification of the oxygen production rate. A fluorimetric technique was applied as well, using a Fluoroskan Ascent Fl (Thermo), to determine the activity of the GST enzyme using monochlorobimane-GSH as substrate, and to estimate the viable cells of the sludge by fluorescence staining with Fluorescein Diacetate (FDA). For the IBU-adapted sludge, CAT activity was increased at low concentrations of IBU, TCS and the mixture. However, with increasing concentration the behavior differed: while IBU tended to stabilize CAT activity, TCS and the mixture decreased it. GST activity was significantly increased by TCS and the mixture; for IBU, no variations were observed. For the TCS-adapted sludge, no significant variations in CAT activity were observed, and GST activity was significantly decreased for all contaminants. For the mixture-adapted sludge, the behavior of CAT activity was similar to that of the IBU-adapted sludge; GST activity was decreased at all concentrations of IBU, while the presence of TCS and of the mixture, respectively, increased GST activity. These findings were consistent with the cell viability evaluation, which clearly showed a variation in sludge viability. Our results suggest that, compared with unadapted bacteria, the adapted bacterial conditions play a relevant role in toxicity behaviour towards activated sludge communities.
Keywords: adapted sludge community, mixture, PPCPs, toxicity
24113 A Fuzzy TOPSIS Based Model for Safety Risk Assessment of Operational Flight Data
Authors: N. Borjalilu, P. Rabiei, A. Enjoo
Abstract:
A Flight Data Monitoring (FDM) program assists an operator in the aviation industry to identify, quantify, assess and address operational safety risks in order to improve the safety of flight operations. FDM is a powerful tool for an aircraft operator when integrated into the operator's Safety Management System (SMS), allowing it to detect, confirm and assess safety issues and to check the effectiveness of corrective actions associated with human errors. This article proposes a model for assessing the safety risk level of flight data, with a different focus for each event, based on fuzzy set values. It permits evaluation of the operational safety level from the point of view of flight activities. The main advantage of this method is the proposed qualitative safety analysis of flight data. This research applies the opinions of aviation experts through a number of questionnaires related to flight data in four categories of occurrence that can take place during an accident or an incident: Runway Excursions (RE), Controlled Flight Into Terrain (CFIT), Mid-Air Collision (MAC) and Loss of Control in Flight (LOC-I). By weighting each category (by F-TOPSIS) and applying it to the number of risks of the event, the safety risk of each related event can be obtained.
Keywords: F-TOPSIS, fuzzy set, flight data monitoring (FDM), flight safety
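As background for the F-TOPSIS weighting, the sketch below runs the crisp TOPSIS ranking step on invented event scores and weights; the fuzzy aggregation of expert questionnaires that precedes this step is not reproduced here.

```python
# Sketch of the crisp TOPSIS ranking that underlies the fuzzy version:
# events scored on criteria, weighted, then ranked by closeness to the ideal
# solution. Event scores and criterion weights are invented for illustration.
import numpy as np

events = ["RE", "CFIT", "MAC", "LOC-I"]
M = np.array([[7, 5, 6],        # rows: events; columns: risk criteria
              [9, 8, 7],
              [8, 9, 9],
              [9, 7, 8]], float)
w = np.array([0.5, 0.3, 0.2])   # criterion weights (assumed)

R = M / np.linalg.norm(M, axis=0)          # vector-normalize each criterion
V = R * w                                  # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0) # all criteria read as "higher = riskier"
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
for ev, c in sorted(zip(events, closeness), key=lambda t: -t[1]):
    print(f"{ev}: closeness = {c:.3f}")
```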
24112 From Modeling of Data Structures towards Automatic Programs Generating
Authors: Valentin P. Velikov
Abstract:
Automatic program generation saves time and human resources and produces syntactically clear and logically correct modules. Fourth-generation programming languages are related to drawing the data and the processes of the subject area, as well as to obtaining a frame of the respective information system. The application can be separated into an interface and business logic. That means that, for interactive generation of the needed system, either an already existing toolkit can be used or a new one can be created.
Keywords: computer science, graphical user interface, user dialog interface, dialog frames, data modeling, subject area modeling
24111 Optimized Weight Selection of Control Data Based on Quotient Space of Multi-Geometric Features
Authors: Bo Wang
Abstract:
The geometric processing of multi-source remote sensing data using control data of different scales and different accuracies is an important research direction for multi-platform earth observation systems. In existing block bundle adjustment methods, the approach of using a single observation scale and precision for the controlling information in the adjustment system cannot screen out the control information or assign it reasonable and effective corresponding weights, which reduces the convergence and reliability of the adjustment results. Referring to the relevant theory and technology of quotient space, several subjects are researched in this project. A multi-layer quotient space of multi-geometric features is constructed to describe and filter control data. A normalized granularity merging mechanism for multi-layer control information is studied, and, based on the normalized scale factor, a strategy to optimize the weight selection of control data that is less relevant to the adjustment system can be realized. At the same time, geometric positioning experiments are conducted using multi-source remote sensing data, aerial images and multiple classes of control data to verify the theoretical research results. This research is expected to break through the convention of single-scale, single-accuracy control data in the adjustment process and to expand the theory and technology of photogrammetry, thus solving the problem of processing multi-source remote sensing data both theoretically and practically.
Keywords: multi-source image geometric process, high precision geometric positioning, quotient space of multi-geometric features, optimized weight selection
24110 Consortium Blockchain-based Model for Data Management Applications in the Healthcare Sector
Authors: Teo Hao Jing, Shane Ho Ken Wae, Lee Jin Yu, Burra Venkata Durga Kumar
Abstract:
Current distributed healthcare systems face the challenge of interoperability of health data. Storing electronic health records (EHRs) in local databases causes them to be fragmented, a problem aggravated as patients visit multiple healthcare providers in their lifetime. Existing solutions are unable to solve this issue and have placed burdens on healthcare specialists and patients alike. Blockchain technology was found to be able to increase the interoperability of health data by implementing digital access rules, enabling a uniform patient identity, and providing data aggregation. Consortium blockchains were found to have high read throughput, to be more trustworthy and more secure against external disruptions, and to accommodate transactions without fees. Therefore, this paper proposes a blockchain-based model for data management applications. In this model, a consortium blockchain is implemented using delegated proof of stake (DPoS) as its consensus mechanism. This blockchain allows collaboration between users from different organizations, such as hospitals and medical bureaus. Patients serve as the owners of their information, and users from other parties require authorization from the patient to view it. Hospitals upload the hash value of patients' generated data to the blockchain, whereas the encrypted information is stored in distributed cloud storage.
Keywords: blockchain technology, data management applications, healthcare, interoperability, delegated proof of stake
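The on-chain/off-chain split described above can be sketched as follows; the ledger entry is a toy stand-in for a real DPoS consortium block, and all identifiers are hypothetical.

```python
# Sketch of the on-chain/off-chain split: the hospital keeps the (encrypted)
# EHR in distributed storage and anchors only its hash on the consortium
# chain. The dictionary "ledger entry" is a toy stand-in for a DPoS block.
import hashlib, json, time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

ehr_record = json.dumps({"patient": "P-1001", "visit": "2023-04-01",
                         "diagnosis": "..."}).encode()
off_chain_blob = ehr_record        # placeholder: would be ciphertext in practice

on_chain_entry = {
    "uploader": "hospital-A",
    "patient_id": "P-1001",
    "record_hash": sha256(off_chain_blob),
    "timestamp": time.time(),
}

# Any authorized reader verifies integrity by re-hashing the fetched blob:
assert on_chain_entry["record_hash"] == sha256(off_chain_blob)
print("integrity verified:", on_chain_entry["record_hash"][:16], "...")
```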
24109 System Security Impact on the Dynamic Characteristics of Measurement Sensors in Smart Grids
Authors: Yiyang Su, Jörg Neumann, Jan Wetzlich, Florian Thiel
Abstract:
Smart grid is a term used to describe the next-generation power grid. New challenges, such as the integration of renewable and decentralized energy sources, the requirement for continuous grid estimation and optimization, and the use of two-way flows of energy, have been brought to the power grid. In order to achieve efficient, reliable, sustainable and secure delivery of electric power, more and more information and communication technologies are used for the monitoring and control of power grids. Consequently, the need for cybersecurity has dramatically increased and has converged into several standards, which will be presented here. These standards for the smart grid must be designed to satisfy both performance and reliability requirements. An in-depth investigation of the effect of retrospectively embedded security on the dynamic behavior of existing grids is required. Therefore, a retrofitting plan for existing meters is offered, and its performance in a test low-voltage microgrid is investigated. As a result, the integration of security measures into the measurement architectures of smart grids at the design phase is strongly recommended.
Keywords: cyber security, performance, protocols, security standards, smart grid
24108 Finding the Free Stream Velocity Using Flow Generated Sound
Authors: Saeed Hosseini, Ali Reza Tahavvor
Abstract:
Sound processing is one of the subjects that has newly attracted many researchers. It is efficient and usually less expensive than other methods. In this paper, flow-generated sound is used to estimate the speed of free flows. Many sound samples were gathered. After analyzing the data, a parameter named wave power was chosen. For all samples, the wave power was calculated and averaged for each flow speed. A curve was fitted to the averaged data, and a correlation between the wave power and flow speed was found. Test data were used to validate the method, and the errors for all test data were under 10 percent. The speed of the flow can thus be estimated by calculating the wave power of the flow-generated sound and using the proposed correlation.
Keywords: flow generated sound, free stream, sound processing, speed, wave power
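A sketch of the described pipeline follows, under an assumed definition of wave power (mean squared amplitude) and with synthesized clips in place of recorded audio.

```python
# Sketch of the pipeline: compute a "wave power" figure (here the mean squared
# amplitude, an assumed definition) for clips recorded at known flow speeds,
# fit a curve, and invert it for an unknown clip. The audio is synthesized
# noise whose level grows with flow speed.
import numpy as np

rng = np.random.default_rng(0)

def wave_power(signal):
    return np.mean(np.asarray(signal, float) ** 2)

speeds = np.array([2.0, 4.0, 6.0, 8.0, 10.0])            # m/s calibration flows
powers = np.array([wave_power(rng.normal(0, 0.1 * v, 44100)) for v in speeds])

coeffs = np.polyfit(powers, speeds, deg=2)               # assumed quadratic form

unknown_clip = rng.normal(0, 0.1 * 7.0, 44100)           # truly 7 m/s
estimate = np.polyval(coeffs, wave_power(unknown_clip))
print(f"estimated flow speed: {estimate:.2f} m/s")
```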
24107 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves
Authors: Shengnan Chen, Shuhua Wang
Abstract:
Successful production of hydrocarbons from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority for society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations to enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, developing data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. The objective of this research is to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including reservoir geological data, reservoir geophysical data, well completion data and production data for thousands of wells is first established to discover valuable insights and knowledge related to tight oil reserve development. Several data analysis methods are introduced to analyze such a huge dataset: for example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; and exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data. Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operations, leading to better designs, higher oil recovery and greater economic return for future wells in unconventional oil reserves.
Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves
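The first analysis steps named above (standardization, PCA, K-means clustering) can be sketched as follows; the feature names and values are synthetic stand-ins for the actual well database.

```python
# Sketch of the first analysis steps: standardize well records, compress them
# with PCA, and partition the wells with k-means. Feature names and values
# are synthetic stand-ins for the geological/completion database.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# columns: porosity, permeability, lateral length, frac stages, proppant mass
wells = rng.normal(size=(1000, 5)) * [0.02, 0.1, 300.0, 5.0, 200.0] \
        + [0.08, 0.5, 1500.0, 30.0, 1000.0]

Z = StandardScaler().fit_transform(wells)
pcs = PCA(n_components=2).fit_transform(Z)        # bring out dominant variation
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pcs)
for k in range(4):
    print(f"cluster {k}: {np.count_nonzero(labels == k)} wells")
```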
24106 Efficiency of DMUs in Presence of New Inputs and Outputs in DEA
Authors: Esmat Noroozi, Elahe Sarfi, Farha Hosseinzadeh Lotfi
Abstract:
Examining the impacts of data modification is known as sensitivity analysis. Many studies have considered the modification of input and output data in DEA. An issue that has not heretofore been considered in DEA sensitivity analysis is modification of the number of inputs and/or outputs and determination of the impact of this modification on the efficiency status of DMUs. This paper presents systems that show the impacts of adding one or multiple inputs or outputs on the efficiency status of DMUs; furthermore, a model is presented for recognizing the minimum number of inputs and/or outputs, from among specified inputs and outputs, that can be added so that an inefficient DMU becomes efficient. Finally, the presented systems and model have been applied to a set of real data and the results reported.
Keywords: data envelopment analysis, efficiency, sensitivity analysis, input, output
24105 WebAppShield: An Approach Exploiting Machine Learning to Detect SQLi Attacks in an Application Layer in Run-time
Authors: Ahmed Abdulla Ashlam, Atta Badii, Frederic Stahl
Abstract:
In recent years, SQL injection attacks have been identified as prevalent against web applications. They affect network security and user data, leading to considerable losses of money and data every year. This paper presents the use of classification algorithms in machine learning, applying a method that classifies login inputs into "SQLi" or "Non-SQLi", thus increasing the reliability and accuracy of results in deciding whether an operation is an attack or a valid operation. A method of shielding against SQLi attacks based on web-app auto-generated twin data structure replication (WebAppShield) has been developed; it verifies all users and prevents attackers (SQLi attacks) from entering and/or accessing the database, admitting only the inputs that the machine learning module predicts as "Non-SQLi". A special login form has been developed with a special instance of data validation; this verification process secures the web application from its early stages. The system has been tested and validated, preventing up to 99% of SQLi attacks.
Keywords: SQL injection, attacks, web application, accuracy, database
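The classification step alone can be sketched with a character n-gram text classifier; the tiny training list is invented, and WebAppShield's twin data-structure replication is not reproduced here.

```python
# Sketch of the classification step only: character n-gram features over
# login inputs with a linear model labelling each as "SQLi" or "Non-SQLi".
# The training examples are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

inputs = ["alice", "bob42", "admin' OR '1'='1", "john.doe",
          "1; DROP TABLE users--", "' UNION SELECT password FROM users--",
          "mary_smith", "robert'); --"]
labels = ["Non-SQLi", "Non-SQLi", "SQLi", "Non-SQLi",
          "SQLi", "SQLi", "Non-SQLi", "SQLi"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),  # catches ', =, --
    LogisticRegression(max_iter=1000))
clf.fit(inputs, labels)

for probe in ["carol", "x' OR 1=1 --"]:
    print(probe, "->", clf.predict([probe])[0])
```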
24104 Mechanical Properties and Microstructural Analysis of Al6061-Red Mud Composites
Authors: M. Gangadharappa, M. Ravi Kumar, H. N. Reddappa
Abstract:
The mechanical properties and morphology of Al6061-red mud particulate composites were investigated. The composites consist of an Al6061 matrix reinforced with red mud particles of 53-75 micron size, ranging from 0% to 12% at intervals of 2%. A stir casting technique was used to fabricate the Al6061-red mud composites. Density, percentage porosity, tensile properties, fracture toughness, hardness, impact energy, percentage elongation and percentage reduction in area were evaluated. Further, microstructural and SEM examinations were carried out to characterize the composites produced. The results show a uniform dispersion of the red mud particles along the grain boundaries of the Al6061 alloy. The tensile strength and hardness values increase with the addition of red mud particles, but there is a slight decrease in the impact energy, percentage elongation and percentage reduction in area as the reinforcement increases. From these results, we conclude that red mud, an industrial waste, can be used to enhance the properties of the Al6061 alloy for engineering applications.
Keywords: Al6061, red mud, tensile strength, hardness and microstructures
24103 From Theory to Practice: Harnessing Mathematical and Statistical Sciences in Data Analytics
Authors: Zahid Ullah, Atlas Khan
Abstract:
The rapid growth of data in diverse domains has created an urgent need for effective utilization of mathematical and statistical sciences in data analytics. This abstract explores the journey from theory to practice, emphasizing the importance of harnessing mathematical and statistical innovations to unlock the full potential of data analytics. Drawing on a comprehensive review of existing literature and research, this study investigates the fundamental theories and principles underpinning mathematical and statistical sciences in the context of data analytics. It delves into key mathematical concepts such as optimization, probability theory, statistical modeling, and machine learning algorithms, highlighting their significance in analyzing and extracting insights from complex datasets. Moreover, this abstract sheds light on the practical applications of mathematical and statistical sciences in real-world data analytics scenarios. Through case studies and examples, it showcases how mathematical and statistical innovations are being applied to tackle challenges in various fields such as finance, healthcare, marketing, and social sciences. These applications demonstrate the transformative power of mathematical and statistical sciences in data-driven decision-making. The abstract also emphasizes the importance of interdisciplinary collaboration, as it recognizes the synergy between mathematical and statistical sciences and other domains such as computer science, information technology, and domain-specific knowledge. Collaborative efforts enable the development of innovative methodologies and tools that bridge the gap between theory and practice, ultimately enhancing the effectiveness of data analytics. Furthermore, ethical considerations surrounding data analytics, including privacy, bias, and fairness, are addressed within the abstract. It underscores the need for responsible and transparent practices in data analytics, and highlights the role of mathematical and statistical sciences in ensuring ethical data handling and analysis. In conclusion, this abstract highlights the journey from theory to practice in harnessing mathematical and statistical sciences in data analytics. It showcases the practical applications of these sciences, the importance of interdisciplinary collaboration, and the need for ethical considerations. By bridging the gap between theory and practice, mathematical and statistical sciences contribute to unlocking the full potential of data analytics, empowering organizations and decision-makers with valuable insights for informed decision-making.
Keywords: data analytics, mathematical sciences, optimization, machine learning, interdisciplinary collaboration, practical applications
24102 An Exploratory Research of Human Character Analysis Based on Smart Watch Data: Distinguish the Drinking State from Normal State
Authors: Lu Zhao, Yanrong Kang, Lili Guo, Yuan Long, Guidong Xing
Abstract:
Smart watches, as handy devices with rich functionality, have become some of the most popular wearable devices all over the world. Among their various functions, the most basic is health monitoring. The monitoring data can provide effective evidence or clues for the detection of crime cases. For instance, step counting data can help to determine whether the watch wearer was still or moving during a given time period. There is, however, still very little research on the analysis of human character based on these data. The purpose of this research is to analyze health monitoring data to distinguish the drinking state from the normal state. The analysis results may play a role in cases involving drinking, such as drunk driving. The experiment mainly focused on finding the figures in smart watch health monitoring data that change with drinking and estimating the scope of the change. The chosen subjects were mostly in their 20s, and each had been wearing the same smart watch for a week. Each subject drank several times during the week and noted down the start and end times of each drinking session. The researcher then extracted and analyzed the health monitoring data from the watch. According to the descriptive statistical analysis, the heart rate was found to change with drinking: the average heart rate is about 10% higher than normal, and the coefficient of variation is less than about 30% of that in the normal state. Though more research needs to be carried out, this experiment and analysis provide a line of thought for the application of data from smart watches.
Keywords: character analysis, descriptive statistics analysis, drink state, heart rate, smart watch
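The descriptive comparison reported above can be sketched as follows, with synthetic heart-rate readings standing in for the smart-watch exports.

```python
# Sketch of the descriptive comparison: mean heart rate and coefficient of
# variation (CV) for drinking vs. normal windows. The readings are synthetic
# stand-ins for real smart-watch logs.
import numpy as np

normal = np.array([72, 70, 75, 68, 74, 71, 73, 69], float)    # bpm
drinking = np.array([80, 81, 80, 81, 80, 82, 80, 81], float)  # bpm

def mean_and_cv(x):
    return x.mean(), x.std(ddof=1) / x.mean()

for name, series in [("normal", normal), ("drinking", drinking)]:
    m, cv = mean_and_cv(series)
    print(f"{name:9s} mean = {m:6.1f} bpm   CV = {cv:.4f}")

m_n, cv_n = mean_and_cv(normal)
m_d, cv_d = mean_and_cv(drinking)
print(f"drinking mean is {100 * (m_d / m_n - 1):.1f}% above normal; "
      f"drinking CV is {100 * cv_d / cv_n:.0f}% of the normal CV")
```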
24101 Just Not Seeing It: Exploring the Relationship between Inattention Blindness and Banner Blindness
Authors: Carie Cunningham, Krsiten Lynch
Abstract:
Despite viewers' belief that they are paying attention, they often miss out on their surroundings, a phenomenon referred to as inattentional blindness. Inattentional blindness refers to the failure of an individual to orient their attention to a particular item in their visual field; it is well defined in the psychology literature. A similar phenomenon has been evaluated for media types in advertising, where not comprehending or remembering items in one's field of vision is known as banner blindness. Banner blindness occurs when individuals habitually see banners in specific areas of a webpage and thus condition themselves to ignore those habitual areas. Another reason individuals avoid these habitual areas (usually at the top or sides of a webpage) is a lack of personal relevance or pertinent information for the viewer. Banner blindness, while a web-based concept, may therefore relate to inattentional blindness. This paper proposes an analysis of the true similarities and differences between these concepts, bridging the two lines of thinking. Forty participants took part in an eye-tracking and post-survey experiment to test attention and memory measures in both a banner blindness and an inattentional blindness condition. The two conditions were conducted between subjects in semi-randomized order. Half of the participants were first told to search through the content ignoring the advertising banners; the other half were first told to search through the content ignoring the distractor icon. These groups were switched after 5 trials, and then 5 more trials were completed. In reviewing the literature, sustainability communication was found to have many inconsistencies in message production and viewer awareness; for the purpose of this study, advertising materials were used as stimuli. Results suggest that there are gaps between the two concepts and that more research should be done testing these effects in a real-world setting versus an online environment. This contributes to theory by exploring the overlapping concepts of inattention blindness and banner blindness, and provides the advertising industry with support that viewers can still fall victim to ignoring items in their field of view, even if not consciously, which will impact message development.
Keywords: attention, banner blindness, eye movement, inattention blindness
24100 Preliminary Evaluation of Decommissioning Wastes for the First Commercial Nuclear Power Reactor in South Korea
Authors: Kyomin Lee, Joohee Kim, Sangho Kang
Abstract:
Kori Unit 1, the first commercial nuclear power reactor in South Korea, was a 587 MWe pressurized water reactor that started operation in 1978 and was permanently shut down in June 2017 without an additional operating license extension. Kori Unit 1 is scheduled to become the first Korean nuclear power unit to enter the decommissioning phase. In this study, a preliminary evaluation of the decommissioning wastes for Kori Unit 1 was performed based on the following series of steps. First, the plant inventory is investigated based on various documents (i.e., equipment/component lists, construction records, general arrangement drawings). Second, the radiological conditions of systems, structures and components (SSCs) are established to estimate the amount of radioactive waste by waste classification. Third, the waste management strategies for Kori Unit 1, including waste packaging, are established. Fourth, the selection of proper decontamination and dismantling (D&D) technologies is made considering various factors. Finally, the amount of decommissioning waste by classification for Kori 1 is estimated using the DeCAT program, which was developed by KEPCO-E&C for decommissioning cost estimation. The preliminary evaluation results show that the expected amounts of decommissioning waste were less than about 2% and 8% of the total waste generated (i.e., the sum of clean waste and radwaste) before and after waste processing, respectively, and it was found that the majority of contaminated material was carbon or alloy steel and stainless steel. In addition, within the range of available information, the results of the evaluation were compared with results from various decommissioning experience data and international/national decommissioning studies. The comparison shows that the radioactive waste amount from Kori Unit 1 decommissioning is much less than that from plants decommissioned in the U.S. and comparable to that from plants in Europe. This result comes from the difference in disposal cost and clearance criteria (i.e., free release level) between the U.S. and non-U.S. countries. The preliminary evaluation performed using the methodology established in this study will be useful as important information in establishing the decommissioning plan for the decommissioning schedule and the waste management strategy, including the transportation, packaging, handling, and disposal of radioactive wastes.
Keywords: characterization, classification, decommissioning, decontamination and dismantling, Kori 1, radioactive waste
24099 Development and Validation of First Derivative Method and Artificial Neural Network for Simultaneous Spectrophotometric Determination of Two Closely Related Antioxidant Nutraceuticals in Their Binary Mixture
Authors: Mohamed Korany, Azza Gazy, Essam Khamis, Marwa Adel, Miranda Fawzy
Abstract:
Background: Two new, simple and specific methods were developed and validated in accordance with ICH guidelines: first, a zero-crossing first-derivative technique, and second, a chemometric-assisted spectrophotometric artificial neural network (ANN). Both methods were used for the simultaneous estimation of the two closely related antioxidant nutraceuticals coenzyme Q10 (Q), also known as ubidecarenone or ubiquinone-10, and vitamin E (E), alpha-tocopherol acetate, in their pharmaceutical binary mixture. Results: For the first method, by applying the first derivative, Q and E were alternately determined, each at the zero-crossing of the other. The D1 amplitudes of Q and E, at 285 nm and 235 nm respectively, were recorded and correlated to their concentrations. The calibration curve is linear over the concentration ranges of 10-60 and 5.6-70 μg mL-1 for Q and E, respectively. For the second method, an ANN (as a multivariate calibration method) was developed and applied for the simultaneous determination of both analytes. A training set (or concentration set) of 90 different synthetic mixtures containing Q and E, in wide concentration ranges of 0-100 µg/mL and 0-556 µg/mL respectively, was prepared in ethanol. The absorption spectra of the training sets were recorded in the spectral region of 230-300 nm. A gradient-descent back-propagation ANN chemometric calibration was computed by relating the concentration sets (x-block) to their corresponding absorption data (y-block). Another set of 45 synthetic mixtures of the two drugs, in a defined range, was used to validate the proposed network. Neither chemical separation, a preparation stage nor mathematical graphical treatment was required. Conclusions: The proposed methods were successfully applied for the assay of Q and E in laboratory-prepared mixtures and a combined pharmaceutical tablet, with excellent recoveries. The ANN method was superior to the derivative technique, as the former determined both drugs under the non-linear experimental conditions. It also offers rapidity, high accuracy, and savings of effort and money, and no analyst is needed for its application. Although the ANN technique needed a large training set, it is the method of choice in the routine analysis of Q and E tablets. No interference was observed from common pharmaceutical additives. The results of the two methods were compared with each other.
Keywords: coenzyme Q10, vitamin E, chemometry, quantitative analysis, first derivative spectrophotometry, artificial neural network
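The zero-crossing principle behind the first method can be sketched numerically: at the wavelength where one component's first-derivative spectrum crosses zero, the mixture's derivative amplitude reflects the other component alone. The band shapes below are invented; the paper's working wavelengths are 285 nm (Q) and 235 nm (E).

```python
# Toy sketch of the zero-crossing first-derivative principle: two overlapping
# Gaussian absorption bands stand in for the Q and E spectra. Band centres
# and widths are assumptions, not the measured spectra.
import numpy as np

wl = np.linspace(220, 320, 1001)                 # wavelength grid, nm

def band(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

A_Q = 0.8 * band(275, 12)                        # "coenzyme Q10" band
A_E = 0.6 * band(250, 15)                        # "vitamin E" band
dQ, dE, dMix = (np.gradient(a, wl) for a in (A_Q, A_E, A_Q + A_E))

# E's derivative crosses zero at its band maximum; measure Q there.
i = np.where(np.diff(np.sign(dE)) != 0)[0][0]
print(f"zero-crossing of E near {wl[i]:.1f} nm")
print(f"mixture D1 = {dMix[i]:.5f}  vs  pure-Q D1 = {dQ[i]:.5f}")
```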
24098 Method for Evaluating the Monetary Value of a Customized Version of the Digital Twin for the Additive Manufacturing
Authors: Fabio Oettl, Sebastian Hoerbrand, Tobias Wittmeir, Johannes Schilp
Abstract:
By combining the additive manufacturing (AM) process with digital concepts like the digital twin (DT), or the downsized concept based on it, the digital part file (DPF), the competitiveness of additive manufacturing is enhanced and new use cases like decentralized production are enabled. In the literature, however, one cannot find any quantitative approach for valuing the usage of a DT or DPF in AM. Given this fact, such an approach is developed within this paper in order to further promote or dissuade the usage of these concepts. The focus is set on production as an early lifecycle phase, meaning that the AM production process is analyzed regarding the potential advantages of using the DPF in AM. With this approach, these advantages are translated into a monetary value. By calculating the costs of the DPF, an overall monetary value results. A tool based on a simulation environment is then constructed, in which the algorithms are implemented as a program. The results of applying this tool show that an overall value of 20.81 € for the DPF can be realized for one specific use case. For future applications of the DPF, it is recommended to integrate sustainability information in particular, because from this, a higher value of the DPF can be expected.
Keywords: additive manufacturing, digital concept costs, digital part file, digital twin, monetary value estimation