Search results for: data mapping
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25757

24227 An Approach for Ensuring Data Flow in Freight Delivery and Management Systems

Authors: Aurelija Burinskienė, Dalė Dzemydienė, Arūnas Miliauskas

Abstract:

This research aims to develop an approach for more effective freight delivery and transportation process management. Road congestion and the identification of its causes are important, as are the recognition and management of context information. Measuring many parameters during the transportation period and properly monitoring driver work have become problems. In some situations, the number of vehicles passing a given point per unit of time can be evaluated for drivers. The collected data are mainly used to establish new trips. The data flow is more complex in urban areas, where the movement of freight is reported in detail, down to street level. When traffic density is extremely high, as in congestion, and traffic speed is very low, data transmission reaches its peak. Different data sets are generated depending on the type of freight delivery network. There are three types of networks: long-distance delivery networks, last-mile delivery networks, and mode-based delivery networks; the last includes different modes, in particular railways. When freight delivery is switched from one of the above network types to another, more data may be included for reporting purposes, and vice versa. In this case, a significant amount of these data is used for control operations, and the problem requires an integrated methodological approach. The paper presents an approach for providing e-services to drivers that includes assessing the multi-component infrastructure needed for freight delivery according to network type. Such a methodology is required to evaluate data flow conditions and overloads and to minimize time gaps in data reporting. The results show that the proposed methodological approach can support management and decision-making processes by incorporating networking specifics and helping to minimize overloads in data reporting.

Keywords: transportation networks, freight delivery, data flow, monitoring, e-services

Procedia PDF Downloads 126
24226 Regulating Issues concerning Data Protection in Cloud Computing: Developing a Saudi Approach

Authors: Jumana Majdi Qutub

Abstract:

Rationale: Cloud computing has developed rapidly over the past few years. Because of the importance of protecting personal data used in cloud computing, the role of data protection in promoting trust and confidence in users' data has become an important policy priority. This research examines the key regulatory challenges raised by the growing use and importance of cloud computing, focusing on the protection of individuals' personal data. Methodology: The study describes and analyzes the governance challenges facing policymakers and industry in Saudi Arabia, with an account of anticipated governance responses. The aim of the research is to define the regulatory challenges that cloud computing poses for policymaking in Saudi Arabia and to compare them with the compliance issues raised in respect of data transferred to EU member states. In addition, it discusses information privacy issues. Finally, the research proposes policy recommendations that would resolve concerns surrounding the privacy and effectiveness of cloud computing frameworks for data protection. Results: There are still no clear regulations in Saudi Arabia specifically governing cloud computing, nor specialized regulations on transferring data internationally and locally. Decision makers need to review the applicable law in Saudi Arabia that protects information in cloud computing, from both an international and a local perspective, in order to identify all requirements surrounding this area. It is important to educate cloud computing users about the value of their information and their rights before putting it in the cloud, to avoid further legal complications; for example, an educational program could warn against giving personal information to a bank employee. Therefore, with so many kinds of cloud computing services, it is important to have them covered by the law in all aspects.

Keywords: cloud computing, cyber crime, data protection, privacy

Procedia PDF Downloads 262
24225 Multistage Data Envelopment Analysis Model for Malmquist Productivity Index Using Grey's System Theory to Evaluate Performance of Electric Power Supply Chain in Iran

Authors: Mesbaholdin Salami, Farzad Movahedi Sobhani, Mohammad Sadegh Ghazizadeh

Abstract:

Evaluation of organizational performance is among the most important measures that help organizations and entities continuously improve their efficiency. Organizations can use existing data and the results of comparisons among the units under investigation to estimate their performance. The Malmquist Productivity Index (MPI) is an important index in the evaluation of overall productivity, which considers technological development and technical efficiency at the same time. This article proposes a model based on the multistage MPI that accommodates limited data (Grey's theory). The model can evaluate the performance of units in a multistage process using limited and uncertain data. It was applied by the electricity market manager to Iran's electric power supply chain (EPSC), which contains uncertain data, to evaluate the performance of its actors. Results from solving the model showed an improvement in the accuracy of the estimated future performance of the units under investigation thanks to Grey's system theory. This model can be used in any case study in which MPI is used and data are limited or uncertain.
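
For context, the classical (single-stage, crisp) Malmquist Productivity Index between periods t and t+1, on which the proposed multistage grey model builds, is the geometric mean of two distance-function ratios:

\[
\mathrm{MPI} = \left[ \frac{D^{t}\left(x^{t+1}, y^{t+1}\right)}{D^{t}\left(x^{t}, y^{t}\right)} \cdot \frac{D^{t+1}\left(x^{t+1}, y^{t+1}\right)}{D^{t+1}\left(x^{t}, y^{t}\right)} \right]^{1/2}
\]

where \(D^{t}\) denotes the distance function measured against the period-t frontier and \((x^{t}, y^{t})\) the inputs and outputs of period t; MPI > 1 indicates productivity growth.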

Keywords: Malmquist Index, Grey's Theory, CCR Model, network data envelopment analysis, Iran electricity power chain

Procedia PDF Downloads 165
24224 Cloud Shield: Model to Secure User Data While Using Content Delivery Network Services

Authors: Rachna Jain, Sushila Madan, Bindu Garg

Abstract:

Cloud computing is the key powerhouse in numerous organizations due to the shifting of their data to the cloud environment. In recent years, cloud-based services have been used on a large scale for content storage, distribution, and processing. Various issues in the cloud computing environment need to be addressed, and security and privacy are the topmost areas of concern. In this paper, a novel security model is proposed to secure data while utilizing CDN services such as image-to-icon conversion. A CDN service is a content delivery service that converts, for example, an image to an icon, a Word document to PDF, or LaTeX to PDF. The presented model is used to convert an image into an icon while keeping the image secret. Security is imparted so that the image can be encrypted and decrypted by the data owner only. The paper also discusses how the server performs multiplication and selection on encrypted data without decryption. The data can be an image, a Word document, or an audio or video file. Moreover, the proposed model is capable of multiplying images, encrypting them, and sending them to a server application for conversion. The prime objective is to encrypt an image and convert the encrypted image to an image icon by utilizing homomorphic encryption.
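
Server-side multiplication on encrypted data is the defining property of multiplicatively homomorphic schemes. A minimal sketch using textbook RSA with deliberately tiny, insecure parameters — our illustration of the principle, not the paper's actual scheme:

```python
# Toy illustration of multiplicative homomorphism in textbook RSA.
# A server can multiply ciphertexts without decrypting them; only the
# key holder (the data owner) can decrypt the product. The parameters
# are tiny and insecure -- for exposition only.

n, e = 3233, 17          # public key (n = 61 * 53)
d = 2753                 # private key

def encrypt(m): return pow(m, e, n)
def decrypt(c): return pow(c, d, n)

m1, m2 = 12, 5           # e.g., two pixel values
c1, c2 = encrypt(m1), encrypt(m2)

# Server side: multiply ciphertexts without ever seeing m1 or m2.
c_prod = (c1 * c2) % n

assert decrypt(c_prod) == (m1 * m2) % n
print(decrypt(c_prod))   # 60
```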

Keywords: cloud computing, user data security, homomorphic encryption, image multiplication, CDN service

Procedia PDF Downloads 334
24223 Data Mining Approach: Classification Model Evaluation

Authors: Lubabatu Sada Sodangi

Abstract:

The rapid growth in the exchange and accessibility of information via the internet leads many organisations to acquire data on their own operations. The aim of data mining is to analyse the different behaviours of a dataset using observation. However, the subset of the dataset being analysed may not display all the behaviours and relationships of the entire dataset and, therefore, may not represent other parts of it. A range of techniques is used in data mining to uncover hidden or unknown information in datasets. In this paper, the performance of two algorithms, Chi-Square Automatic Interaction Detection (CHAID) and the multilayer perceptron (MLP), is compared using the Adult dataset to find out the percentage of adults who earn > 50k and those who earn <= 50k per year. The two algorithms were studied and compared using IBM SPSS Statistics software. The result for CHAID shows that the most important predictors are relationship and education: individuals who are married (husband), hold a Bachelor's, Master's, Doctorate, or Prof-school qualification, and are aged between 41 and 57 earn > 50k. The multilayer perceptron identifies marital status and capital gain as the most important predictors of income: individuals whose capital gain is less than 6,849 and who are single, separated, or widowed earn <= 50k, whereas individuals whose capital gain is greater than 6,849, who work more than 35 hrs/wk, and who are older than 27 years earn > 50k. Comparing the two algorithms shows that both are reliable, but CHAID is the more strongly reliable, clearly showing that relationship and education contribute to the prediction, as displayed in the data visualisation.
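
A minimal sketch of this comparison is shown below. scikit-learn has no CHAID implementation (the authors used IBM SPSS), so a CART decision tree stands in for the tree-based model, and preprocessing (imputation, scaling) is reduced to the bare minimum:

```python
# Sketch: tree model vs. MLP on the Adult (census income) dataset.
# CART replaces CHAID here; details like scaling are omitted for brevity.
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import make_column_transformer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = fetch_openml("adult", version=2, as_frame=True, return_X_y=True)
X = X.dropna()                        # drop records with missing values
y = y.loc[X.index]

categorical = X.select_dtypes("category").columns
pre = make_column_transformer(
    (OneHotEncoder(handle_unknown="ignore"), categorical),
    remainder="passthrough")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (DecisionTreeClassifier(max_depth=5),
              MLPClassifier(hidden_layer_sizes=(32,), max_iter=300)):
    clf = make_pipeline(pre, model)
    clf.fit(X_tr, y_tr)
    print(type(model).__name__, round(clf.score(X_te, y_te), 3))
```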

Keywords: data mining, CHAID, multi-layer perceptron, SPSS, Adult dataset

Procedia PDF Downloads 378
24222 Cassava Plant Architecture: Insights from Genome-Wide Association Studies

Authors: Abiodun Olayinka, Daniel Dzidzienyo, Pangirayi Tongoona, Samuel Offei, Edwige Gaby Nkouaya Mbanjo, Chiedozie Egesi, Ismail Yusuf Rabbi

Abstract:

Cassava (Manihot esculenta Crantz) is a major source of starch for various industrial applications. However, the traditional cultivation and harvesting methods of cassava are labour-intensive and inefficient, limiting the supply of fresh cassava roots for industrial starch production. To achieve improved productivity and quality of fresh cassava roots through mechanized cultivation, cassava cultivars with compact plant architecture and moderate plant height are needed. Plant architecture-related traits, such as plant height, harvest index, stem diameter, branching angle, and lodging tolerance, are critical for crop productivity and suitability for mechanized cultivation. However, the genetics of cassava plant architecture remain poorly understood. This study aimed to identify the genetic bases of the relationships between plant architecture traits and productivity-related traits, particularly starch content. A panel of 453 clones developed at the International Institute of Tropical Agriculture, Nigeria, was genotyped and phenotyped for 18 plant architecture and productivity-related traits at four locations in Nigeria. A genome-wide association study (GWAS) was conducted using the phenotypic data from a panel of 453 clones and 61,238 high-quality Diversity Arrays Technology sequencing (DArTseq) derived Single Nucleotide Polymorphism (SNP) markers that are evenly distributed across the cassava genome. Five significant associations between ten SNPs and three plant architecture component traits were identified through GWAS. We found five SNPs on chromosomes 6 and 16 that were significantly associated with shoot weight, harvest index, and total yield through genome-wide association mapping. We also discovered an essential candidate gene that is co-located with peak SNPs linked to these traits in M. esculenta. A review of the cassava reference genome v7.1 revealed that the SNP on chromosome 6 is in proximity to Manes.06G101600.1, a gene that regulates endodermal differentiation and root development in plants. The findings of this study provide insights into the genetic basis of plant architecture and yield in cassava. Cassava breeders could leverage this knowledge to optimize plant architecture and yield in cassava through marker-assisted selection and targeted manipulation of the candidate gene.
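
The per-SNP test at the core of a GWAS can be sketched as a simple regression scan. The snippet below uses synthetic genotypes and an invented effect size, not the DArTseq panel itself; real pipelines also correct for kinship and population structure:

```python
# Sketch of a GWAS scan: regress the phenotype on each SNP's allele
# dosage (0/1/2) and keep SNPs that clear a Bonferroni threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_clones, n_snps = 453, 1000             # panel size echoes the paper
genotypes = rng.integers(0, 3, size=(n_clones, n_snps))
# Synthetic trait (e.g., plant height) driven by SNP 10 plus noise.
phenotype = 0.8 * genotypes[:, 10] + rng.normal(size=n_clones)

pvals = np.array([stats.linregress(genotypes[:, j], phenotype).pvalue
                  for j in range(n_snps)])
threshold = 0.05 / n_snps                # Bonferroni correction
print("significant SNPs:", np.flatnonzero(pvals < threshold))
```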

Keywords: Manihot esculenta Crantz, plant architecture, DArtseq, SNP markers, genome-wide association study

Procedia PDF Downloads 70
24221 Developing an Information Model of Manufacturing Process for Sustainability

Authors: Jae Hyun Lee

Abstract:

Manufacturing companies use life-cycle inventory databases to analyze the sustainability of their manufacturing processes. Life-cycle inventory data provide reference values that may not be accurate for a specific company, yet collecting accurate data on the manufacturing processes of a specific company requires enormous time and effort. An information model of typical manufacturing processes can reduce the time and effort needed to obtain appropriate reference data for a specific company. This paper presents an attempt to build an abstract information model which can be used to develop information models for specific manufacturing processes.
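
As a minimal sketch — with an invented namespace and class names, not the author's actual ontology — such an abstract process model could be expressed in OWL with rdflib:

```python
# Sketch of an abstract manufacturing-process class in OWL via rdflib.
# The mfg namespace, classes, and property are illustrative assumptions.
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

MFG = Namespace("http://example.org/mfg#")   # hypothetical namespace
g = Graph()
g.bind("mfg", MFG)

# Abstract process class; a concrete process specializes it.
g.add((MFG.ManufacturingProcess, RDF.type, OWL.Class))
g.add((MFG.Milling, RDF.type, OWL.Class))
g.add((MFG.Milling, RDFS.subClassOf, MFG.ManufacturingProcess))

# A sustainability-relevant attribute attached at the abstract level.
g.add((MFG.hasEnergyInput, RDF.type, OWL.DatatypeProperty))
g.add((MFG.hasEnergyInput, RDFS.domain, MFG.ManufacturingProcess))

print(g.serialize(format="turtle"))
```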

Keywords: process information model, sustainability, OWL, manufacturing

Procedia PDF Downloads 430
24220 An Interpretable Data-Driven Approach for the Stratification of the Cardiorespiratory Fitness

Authors: D.Mendes, J. Henriques, P. Carvalho, T. Rocha, S. Paredes, R. Cabiddu, R. Trimer, R. Mendes, A. Borghi-Silva, L. Kaminsky, E. Ashley, R. Arena, J. Myers

Abstract:

The exploration of clinically relevant predictive models continues to be an important pursuit. Cardiorespiratory fitness (CRF) conveys vital clinical information, so its accurate prediction is of high importance. The aim of the current study was therefore to develop a data-driven model, based on computational intelligence techniques and, in particular, clustering approaches, to predict CRF. Two prediction models were implemented and compared: 1) the traditional Wasserman/Hansen equations; and 2) an interpretable clustering approach. Data used for this analysis were from the 'FRIEND - Fitness Registry and the Importance of Exercise: The National Data Base'; in the present study, a subset of 10,690 apparently healthy individuals was utilized. The accuracy of the models was assessed through the computation of sensitivity, specificity, and geometric mean values. The results show the superiority of the clustering approach in the accurate estimation of CRF (i.e., maximal oxygen consumption).
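
A hedged sketch of the clustering idea: group individuals by simple features and predict CRF for a new case from its cluster's mean VO2max. The features below are illustrative stand-ins, not the FRIEND variables:

```python
# Sketch: cluster-based CRF prediction on synthetic data.
# Feature scaling and model selection are omitted for brevity.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(20, 70, 500),        # age (years)
                     rng.integers(0, 2, 500),         # sex (0/1)
                     rng.uniform(18, 35, 500)])       # BMI
vo2max = 50 - 0.3 * X[:, 0] - 4 * X[:, 1] + rng.normal(0, 3, 500)

km = KMeans(n_clusters=8, n_init=10, random_state=1).fit(X)
cluster_means = np.array([vo2max[km.labels_ == k].mean() for k in range(8)])

new_case = np.array([[45, 1, 27.0]])
print("predicted VO2max:", cluster_means[km.predict(new_case)[0]])
```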

Keywords: cardiorespiratory fitness, data-driven models, knowledge extraction, machine learning

Procedia PDF Downloads 286
24219 Assessment of the Landscaped Biodiversity in the National Park of Tlemcen (Algeria) Using Per-Object Analysis of Landsat Imagery

Authors: Bencherif Kada

Abstract:

In forest management practice, landscape and the Mediterranean forest are never treated as linked objects. Yet sustainable forestry requires the valorization of the forest landscape, and this aim involves assessing the spatial distribution of biodiversity by mapping forest landscape units and subunits and by monitoring environmental trends. This contribution aims to highlight, through object-oriented classifications, the landscape biodiversity of the National Park of Tlemcen (Algeria). The methodology is based on ground data and on the basic processing units of object-oriented classification, namely segments, so-called image-objects, representing relatively homogeneous units on the ground. The classification of Landsat Enhanced Thematic Mapper Plus (ETM+) imagery is performed on image-objects, not on pixels. The advantages of object-oriented classification are full use of meaningful statistics and texture calculation, uncorrelated shape information (e.g., length-to-width ratio, direction, and area of an object), topological features (neighbor, super-object, etc.), and the close relation between real-world objects and image-objects. The results show that per-object classification using the k-nearest-neighbours method is more efficient than per-pixel classification. It simplifies the content of the image while preserving spectrally and spatially homogeneous land-cover types such as Aleppo pine stands; cork oak groves; mixed groves of cork oak, holm oak, and zen oak; mixed groves of holm oak and thuja; water bodies; dense and open shrublands of oaks; vegetable crops or orchards; herbaceous plants; and bare soils. Texture attributes seem to provide no useful information, while the spatial attributes of shape and compactness perform well for all the dominant features, such as pure stands of Aleppo pine and/or cork oak and bare soils. Landscape subunits are individualized while conserving the spatial information: continuously dominant dense stands over a large area form a single class, as do dense fragmented stands with clear stands. Low shrubland formations and high wooded shrublands are well individualized, though the former show some confusion with enclaves. Overall, a visual evaluation shows that the classification reflects the actual spatial state of the study area at the landscape level.
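
A minimal sketch of the per-object k-NN step: each segment is summarized by spectral and shape attributes and classified as a single unit. The feature values below are invented for illustration:

```python
# Sketch of per-object classification: one row per image-object,
# combining band means with shape attributes, labelled with k-NN.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Columns: [mean_band1..mean_band6, length_width_ratio, compactness]
X_train = np.array([[0.12, 0.10, 0.08, 0.35, 0.22, 0.15, 1.4, 0.8],   # Aleppo pine
                    [0.10, 0.09, 0.07, 0.30, 0.20, 0.13, 1.2, 0.7],   # cork oak
                    [0.25, 0.22, 0.20, 0.28, 0.30, 0.28, 2.5, 0.4]])  # bare soil
y_train = ["aleppo_pine", "cork_oak", "bare_soil"]

knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
new_object = [[0.11, 0.10, 0.08, 0.33, 0.21, 0.14, 1.3, 0.75]]
print(knn.predict(new_object))
```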

Keywords: forest, oaks, remote sensing, diversity, shrublands

Procedia PDF Downloads 124
24218 Dissecting Big Trajectory Data to Analyse Road Network Travel Efficiency

Authors: Rania Alshikhe, Vinita Jindal

Abstract:

Digital innovation has played a crucial role in managing smart transportation. For this, big trajectory data collected from traveling vehicles, such as taxis, through installed global positioning system (GPS)-enabled devices can be utilized. Such data offer an unprecedented opportunity to trace the movements of vehicles at fine spatiotemporal granularity. This paper aims to explore big trajectory data to measure the travel efficiency of road networks using the proposed statistical travel efficiency measure (STEM) across an entire city, and to identify the causes of low travel efficiency with the proposed least-square-approximation network-based causality exploration (LANCE). The resulting analysis reveals the causes of low travel efficiency, along with the road segments that need to be optimized to improve traffic conditions and thus minimize the average travel time from a given point A to point B in the road network. The obtained results show that the proposed approach outperforms baseline algorithms for measuring the travel efficiency of the road network.
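
The abstract does not define STEM; as a purely illustrative stand-in, one simple efficiency ratio for a road segment compares free-flow travel time with observed travel times from GPS trips. The function and figures below are assumptions, not the authors' measure:

```python
# Illustrative only: segment efficiency as free-flow time over the
# observed mean travel time from GPS trip records. This is a generic
# ratio, not the paper's STEM definition.
def segment_efficiency(length_m, speed_limit_ms, observed_times_s):
    free_flow = length_m / speed_limit_ms
    mean_observed = sum(observed_times_s) / len(observed_times_s)
    return free_flow / mean_observed     # 1.0 = free flow, -> 0 = congested

trips = [95.0, 120.0, 150.0]   # seconds to traverse an 800 m segment
print(round(segment_efficiency(800, 13.9, trips), 2))
```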

Keywords: GPS trajectory, road network, taxi trips, digital map, big data, STEM, LANCE

Procedia PDF Downloads 157
24217 A Bayesian Hierarchical Poisson Model with an Underlying Cluster Structure for the Analysis of Measles in Colombia

Authors: Ana Corberan-Vallet, Karen C. Florez, Ingrid C. Marino, Jose D. Bermudez

Abstract:

In 2016, the Region of the Americas was declared free of measles, a viral disease that can cause severe health problems. However, since 2017, measles has reemerged in Venezuela and has subsequently reached neighboring countries. In 2018, twelve American countries reported confirmed cases of measles. Governmental and health authorities in Colombia, a country that shares the longest land boundary with Venezuela, are aware of the need for a strong response to restrict the spread of the epidemic. In this work, we apply a Bayesian hierarchical Poisson model with an underlying cluster structure to describe disease incidence in Colombia. Concretely, the proposed methodology provides relative risk estimates at the department level and identifies clusters of disease, which facilitates the implementation of targeted public health interventions. Socio-demographic factors, such as the percentage of migrants, gross domestic product, and entry routes, are included in the model to better describe the incidence of disease. Since the model does not impose any spatial correlation at any level of the model hierarchy, it avoids the spatial confounding problem and provides a suitable framework for estimating the fixed-effect coefficients associated with spatially structured covariates.
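
A generic specification consistent with this description (the paper's exact priors and cluster construction may differ) is

\[
y_i \mid \lambda_i \sim \mathrm{Poisson}(E_i \lambda_i), \qquad \log \lambda_i = \mathbf{x}_i^{\top} \boldsymbol{\beta} + u_{c(i)}, \qquad u_c \sim \mathrm{N}(0, \sigma_u^2),
\]

where \(y_i\) and \(E_i\) are the observed and expected case counts in department \(i\), \(\mathbf{x}_i\) collects the socio-demographic covariates, and \(c(i)\) assigns department \(i\) to a latent cluster. Because the cluster effects \(u_c\) are exchangeable rather than spatially structured, the model sidesteps spatial confounding.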

Keywords: Bayesian analysis, cluster identification, disease mapping, risk estimation

Procedia PDF Downloads 151
24216 Synthesis of Amorphous Nanosilica Anode Material from Philippine Waste Rice Hull for Lithium Battery Application

Authors: Emie A. Salamangkit-Mirasol, Rinlee Butch M. Cervera

Abstract:

Rice hull or rice husk (RH) is an agricultural waste obtained from milling rice grains. Since RH has no commercial value and is difficult to use in agriculture, its volume is often reduced through open-field burning, which is an environmental hazard. In this study, amorphous nanosilica was prepared from Philippine waste RH via an acid precipitation method, and the synthesized samples were fully characterized for their microstructural properties. The X-ray diffraction pattern reveals that the structure of the prepared sample is amorphous in nature, while the Fourier transform infrared spectrum shows the different vibration bands of the synthesized sample. Scanning electron microscopy (SEM) and particle size analysis (PSA) confirmed the presence of agglomerated silica particles. On the other hand, transmission electron microscopy (TEM) revealed an amorphous sample with grain sizes in the range of about 5 to 20 nanometers and about 95% purity according to EDS analyses. The elemental mapping also suggests that leaching of the rice hull ash effectively removed metallic impurities such as potassium. Hence, amorphous nanosilica was successfully prepared from Philippine waste rice hull via a low-cost acid precipitation method. In addition, the initial electrode performance of the synthesized samples as an anode material in lithium batteries has been investigated.

Keywords: agricultural waste, anode material, nanosilica, rice hull

Procedia PDF Downloads 283
24215 Mitigating Supply Chain Risk for Sustainability Using Big Data Knowledge: Evidence from the Manufacturing Supply Chain

Authors: Mani Venkatesh, Catarina Delgado, Purvishkumar Patel

Abstract:

The sustainable supply chain is gaining popularity among practitioners because of increased environmental degradation and stakeholder awareness. On the other hand, supply chain risk management is crucial for practitioners, as risk can potentially disrupt supply chain operations. Predicting and addressing the risk caused by social issues in the supply chain is of paramount importance to the sustainable enterprise. More recently, the use of big data analytics for forecasting business trends has been gaining momentum among professionals. The aim of this research is to explore the application of big data and predictive analytics in successfully mitigating supply chain social risk and to demonstrate how such mitigation can help in achieving sustainability (environmental, economic, and social). The method involves the identification and validation of social issues in the supply chain by an expert panel and a survey. A case study then illustrates the application of big data in the successful identification and mitigation of social issues in the supply chain. Our results show that a company can predict various social issues through big data and predictive analytics and mitigate the social risk. We also discuss the implications of this research for the body of knowledge and for practice.

Keywords: big data, sustainability, supply chain social sustainability, social risk, case study

Procedia PDF Downloads 408
24214 Improving the Analytical Power of Dynamic DEA Models, by the Consideration of the Shape of the Distribution of Inputs/Outputs Data: A Linear Piecewise Decomposition Approach

Authors: Elias K. Maragos, Petros E. Maravelakis

Abstract:

In Dynamic Data Envelopment Analysis (DDEA), a subfield of Data Envelopment Analysis (DEA), the productivity of Decision Making Units (DMUs) is considered in relation to time. In this case, as accepted by most researchers, there are outputs produced by a DMU to be used as inputs at a future time; those outputs are known as intermediates. The common models in DDEA do not take into account the shape of the distribution of those input, output, or intermediate data, assuming that the distribution of their virtual value does not deviate from linearity. This weakness limits the accuracy and analytical power of the traditional DDEA models. In this paper, the authors, using the concept of piecewise linear inputs and outputs, propose an extended DDEA model. The proposed model increases the flexibility of the traditional DDEA models and improves the measurement of the dynamic performance of DMUs.
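
One common way to express the piecewise linear idea — a sketch of the concept, not necessarily the authors' exact formulation — is to split an input's range at breakpoints \(b_0 < b_1 < \dots < b_K\) and give each segment its own multiplier:

\[
\tilde{v}(x) = \sum_{k=1}^{K} v_k \, \delta_k(x), \qquad \delta_k(x) = \max\!\left(0, \; \min(x, b_k) - b_{k-1}\right),
\]

so that the virtual value \(\tilde{v}(x)\) can bend at each breakpoint instead of being forced through a single weight over the whole range.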

Keywords: Dynamic Data Envelopment Analysis, DDEA, piecewise linear inputs, piecewise linear outputs

Procedia PDF Downloads 161
24213 A Proposal of Advanced Key Performance Indicators for Assessing Six Performances of Construction Projects

Authors: Wi Sung Yoo, Seung Woo Lee, Youn Kyoung Hur, Sung Hwan Kim

Abstract:

Large-scale construction projects are continuously increasing, and the need for tools to monitor and evaluate project success is emphasized. At the construction-industry level, there are limitations in deriving performance evaluation factors that reflect the diversity of construction sites, and in systems that can objectively evaluate and manage performance. Additionally, there are difficulties in integrating the structured and unstructured data generated at construction sites and deriving improvements. In this study, we propose Key Performance Indicators (KPIs) that enable performance evaluation reflecting the increased diversity of construction sites and the unstructured data generated, and we present a model for measuring performance with the derived indicators. The comprehensive performance of a unit construction site is assessed based on 6 areas (Time, Cost, Quality, Safety, Environment, Productivity) and 26 indicators. We collect performance indicator information from 30 construction sites that meet legal standards and have been successfully completed, and we apply data augmentation and optimization techniques to establish measurement standards for each indicator. In other words, the KPIs for construction site performance evaluation presented in this study provide standards for evaluating performance in six areas using institutional requirement data and document data. This can be expanded to establish a performance evaluation system considering the scale and type of construction project. The indicators are also expected to be used as a comprehensive measure for the construction industry and as basic data for tracking competitiveness at the national level and establishing policies.

Keywords: key performance indicator, performance measurement, structured and unstructured data, data augmentation

Procedia PDF Downloads 42
24212 Mapping of Research Productivity of Balochistan University Faculty: A Bibliometric Study of Pakistan Studies Bilingual/Bi-annual Pakistan Studies, English/Urdu Research Journal from 2015 to 2020

Authors: Muhammad Anwar

Abstract:

The prime objective of the study is to investigate the research productivity reflected in the Pakistan Studies Bilingual/Bi-annual Pakistan Studies, English/Urdu Research Journal from 2015 to 2020. The study also examines the frequency of publications, author contributions, paper length, references, the most productive authors, and the degree of collaboration. It finds that 271 research articles were contributed by faculty members of the University of Balochistan, Quetta. The highest number of papers, 75 (27.67%), was published in 2020, and 59 (21.77%) papers were published in 2019. Volumes 10 and 11 contributed 36 (13.28%) and 45 (16.00%) research articles, respectively. Two-author papers accounted for 179 (66.05%) of the contributions and three-author papers for 62 (22.87%). The results revealed a degree of collaboration of 0.97. The study further discloses paper length: the majority, 122 (45.07%) papers, were in the range of 11-15 pages, and 73 (26.93%) articles were in the range of 6-10 pages. The most prolific author was Dr. Noor Ahmed from the Pakistan Study Centre, ranked 1st with 15 papers, while Dr. Kaleem Bareach, with 14 articles, ranked 2nd.
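
The degree of collaboration reported above is conventionally computed with Subramanyam's formula:

\[
DC = \frac{N_m}{N_m + N_s},
\]

where \(N_m\) is the number of multi-authored papers and \(N_s\) the number of single-authored papers. The reported value of 0.97 implies that roughly 263 of the 271 papers were multi-authored.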

Keywords: research, bibliometric, bilingual, bi-annual, Pakistan, university, Balochistan

Procedia PDF Downloads 102
24211 A Fuzzy TOPSIS Based Model for Safety Risk Assessment of Operational Flight Data

Authors: N. Borjalilu, P. Rabiei, A. Enjoo

Abstract:

A Flight Data Monitoring (FDM) program assists an operator in the aviation industry in identifying, quantifying, assessing, and addressing operational safety risks, in order to improve the safety of flight operations. FDM is a powerful tool for an aircraft operator when integrated into the operator's Safety Management System (SMS), allowing it to detect, confirm, and assess safety issues and to check the effectiveness of corrective actions associated with human errors. This article proposes a model for assessing the safety risk level of flight data across different event categories based on fuzzy set values. It permits evaluation of the operational safety level from the point of view of flight activities. The main advantage of this method is the qualitative safety analysis of flight data it proposes. The research applies the opinions of aviation experts, gathered through questionnaires related to flight data, in four categories of occurrence that can take place during an accident or an incident: Runway Excursion (RE), Controlled Flight Into Terrain (CFIT), Mid-Air Collision (MAC), and Loss of Control in Flight (LOC-I). By weighting each one (by F-TOPSIS) and applying the weights to the number of risks of the event, the safety risk of each related event can be obtained.
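
The ranking core of TOPSIS can be sketched compactly. The version below is the crisp variant (the paper uses a fuzzy extension), and the scores and weights are invented for illustration:

```python
# Sketch of crisp TOPSIS: rank event categories by closeness to the
# ideal alternative. Rows = RE, CFIT, MAC, LOC-I; columns = criteria.
import numpy as np

X = np.array([[7.0, 6.0, 8.0],    # RE      -- illustrative expert scores
              [9.0, 8.0, 6.0],    # CFIT
              [8.0, 9.0, 7.0],    # MAC
              [9.0, 7.0, 9.0]])   # LOC-I
w = np.array([0.5, 0.3, 0.2])     # criteria weights (illustrative)

R = X / np.linalg.norm(X, axis=0)          # vector-normalise each column
V = R * w                                  # weighted normalised matrix
ideal, anti = V.max(axis=0), V.min(axis=0)
d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal
closeness = d_neg / (d_pos + d_neg)        # relative closeness, 0..1
print(closeness.round(3))
```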

Keywords: F-TOPSIS, fuzzy set, flight data monitoring (FDM), flight safety

Procedia PDF Downloads 168
24210 From Modeling of Data Structures towards Automatic Programs Generating

Authors: Valentin P. Velikov

Abstract:

Automatic program generation saves time and human resources and yields syntactically clear and logically correct modules. Fourth-generation programming languages are related to drawing the data and the processes of the subject area, as well as to obtaining a frame of the respective information system. The application can be separated into an interface and business logic. That means that, for interactive generation of the needed system, either an already existing toolkit is used or a new one is created.

Keywords: computer science, graphical user interface, user dialog interface, dialog frames, data modeling, subject area modeling

Procedia PDF Downloads 306
24209 Optimized Weight Selection of Control Data Based on Quotient Space of Multi-Geometric Features

Authors: Bo Wang

Abstract:

The geometric processing of multi-source remote sensing data using control data of different scales and accuracies is an important research direction for multi-platform Earth-observation systems. In existing block bundle adjustment methods, the controlling information in the adjustment system is handled at a single observation scale and precision, which makes it impossible to screen the control information and to assign reasonable and effective weights, reducing the convergence and reliability of the adjustment results. Drawing on the theory and technology of quotient space, this project researches several subjects. A multi-layer quotient space of multi-geometric features is constructed to describe and filter control data. A normalized granularity merging mechanism for multi-layer control information is studied and, based on the normalized scale factor, a strategy is realized for optimizing the weight selection of control data that is less relevant to the adjustment system. At the same time, geometric positioning experiments are conducted using multi-source remote sensing data, aerial images, and multiple classes of control data to verify the theoretical results. This research is expected to move beyond the convention of single-scale, single-accuracy control data in the adjustment process and to expand the theory and technology of photogrammetry, so that the problem of processing multi-source remote sensing data is solved both theoretically and practically.

Keywords: multi-source image geometric process, high precision geometric positioning, quotient space of multi-geometric features, optimized weight selection

Procedia PDF Downloads 284
24208 Consortium Blockchain-based Model for Data Management Applications in the Healthcare Sector

Authors: Teo Hao Jing, Shane Ho Ken Wae, Lee Jin Yu, Burra Venkata Durga Kumar

Abstract:

Current distributed healthcare systems face the challenge of interoperability of health data. Storing electronic health records (EHR) in local databases causes them to be fragmented, a problem that is aggravated as patients visit multiple healthcare providers in their lifetime. Existing solutions are unable to solve this issue and have burdened healthcare specialists and patients alike. Blockchain technology was found to be able to increase the interoperability of health data by implementing digital access rules, enabling a uniform patient identity, and providing data aggregation. Consortium blockchains were found to have high read throughput, to be more trustworthy and more secure against external disruptions, and to accommodate transactions without fees. Therefore, this paper proposes a blockchain-based model for data management applications. In this model, a consortium blockchain is implemented using delegated proof of stake (DPoS) as its consensus mechanism. This blockchain allows collaboration between users from different organizations, such as hospitals and medical bureaus. Patients serve as the owners of their information, and users from other parties require authorization from the patient to view it. Hospitals upload the hash value of patients' generated data to the blockchain, whereas the encrypted information is stored in distributed cloud storage.
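
A minimal sketch of the hash-anchoring step described above; the ledger and record fields are mocked stand-ins, not a real DPoS chain:

```python
# Sketch: store the record off-chain, publish only its digest on-chain,
# so any later tampering with the off-chain copy is detectable.
import hashlib, json

record = {"patient_id": "P-001", "visit": "2021-03-14", "dx": "J45.9"}
payload = json.dumps(record, sort_keys=True).encode()

digest = hashlib.sha256(payload).hexdigest()
ledger = []                                   # stand-in for the blockchain
ledger.append({"patient": "P-001", "hash": digest})

# Verification: recompute the digest from the off-chain copy and compare.
assert hashlib.sha256(payload).hexdigest() == ledger[0]["hash"]
print("record integrity verified")
```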

Keywords: blockchain technology, data management applications, healthcare, interoperability, delegated proof of stake

Procedia PDF Downloads 138
24207 Thermoplastic Polyurethane/Barium Titanate Composites

Authors: Seyfullah Madakbaş, Ferhat Şen, Memet Vezir Kahraman

Abstract:

The aim of this study was to improve the thermal stability and the mechanical and surface properties of thermoplastic polyurethane (TPU) through the addition of BaTiO3. TPU/BaTiO3 composites with various ratios of TPU and BaTiO3 were prepared. The chemical structure of the prepared composites was investigated by FT-IR, whose spectra show that the composites were successfully prepared. The thermal stability of the samples was evaluated by thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC). The prepared composites showed high thermal stability, and the char yield increased as the barium titanate content increased. The glass transition temperatures of the composites rise with the addition of barium titanate. The mechanical properties of the samples were characterized with a stress-strain test and improved with the addition of barium titanate. The hydrophobicity of the samples was determined by contact angle measurements; the contact angles tended to increase when barium titanate was added to the TPU, indicating more hydrophobic surface behavior. Moreover, the surface morphology of the samples was investigated by scanning electron microscopy (SEM); SEM-EDS mapping images showed that the barium titanate particles were dispersed homogeneously. Finally, the obtained results prove that the prepared composites have good thermal, mechanical, and surface properties and that they can be used in many applications, such as electronic devices, materials engineering, and other emerging fields.

Keywords: barium titanate, composites, thermoplastic polyurethane, scanning electron microscopy

Procedia PDF Downloads 329
24206 Mapping the Ties That Bind: Corruption, Political Alienation and Culture of Corruption

Authors: Mabrouka Immhemd Al-Werfalli

Abstract:

How are political alienation and corruption related? What is the nature of the relationship linking corruption and political alienation? When citizens withdraw their loyalty from their political regime and leaders, they highlight their alienation from them. The link between corruption and political alienation is that individuals will intentionally involve themselves in corruption, particularly when a state of lawlessness prevails. This paper takes on a challenge: how to gauge the links between corruption, political alienation, and the culture of corruption. It aims to highlight the political-alienation-related factors that determine the levels of corruption in Libya. One of the most prominent reasons for the Libyan uprising in February 2011 was the pervasiveness of corruption. Corruption in Libya remained a significant problem despite a robust anti-corruption discourse and harsh legislation undertaken by the previous regime. Long-standing political corruption in Libya has offered ample opportunity for the evolution of a structure of negative values and morals. This has formed what is termed a 'culture of corruption', which has induced people to accept and justify corrupt behavior. The paper is part of a study concerning the phenomenon of political alienation in Libya, based on a survey conducted in 2001 in the city of Benghazi. The findings show that abuse of power, embezzlement, and misuse of public funds for personal enrichment were thought to be rife within public bodies, institutions, companies, factories, banks, and enterprises owned entirely or partially by the state.

Keywords: Libya, abuse of power, anti-corruption, corruption, culture of corruption, embezzlement, participation in corruption, political alienation

Procedia PDF Downloads 313
24205 Flood Hazard and Risk Mapping to Assess Ice-Jam Flood Mitigation Measures

Authors: Karl-Erich Lindenschmidt, Apurba Das, Joel Trudell, Keanne Russell

Abstract:

In this presentation, we explore options for mitigating ice-jam flooding along the Athabasca River in western Canada. We consider not only flood hazard, expressed in this case as the probability of flood depths and extents being exceeded, but also flood risk, in which annual expected damages are calculated. Calculating flood risk allows a cost-benefit analysis, so that decisions on the best mitigation options are based not solely on flood hazard but also on the costs related to flood damages and the benefits of mitigation. The river ice model is used to simulate extreme ice-jam flood events, and scenarios are run with it to determine flood exposure and damages in flood-prone areas along the river. We concentrate on four mitigation options: the placement of a dike, artificial breakage of the ice cover along the river, the installation of an ice-control structure, and the construction of a reservoir. However, no mitigation option is totally failsafe. For example, dikes can still be overtopped and breached, and ice jams may still occur in areas of the river where ice covers have been artificially broken up. Hence, for all options, it is recommended that zoning of building developments away from greater flood hazard areas be upheld. Flood mitigation can have the negative effect of giving inhabitants a false sense of security that flooding may not happen again, leading to zoning policies being relaxed. (Text adapted from Lindenschmidt [2022] "Ice Destabilization Study - Phase 2", submitted to the Regional Municipality of Wood Buffalo, Alberta, Canada)
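
A hedged sketch of the flood-risk side of the analysis: expected annual damage (EAD) is the integral of damage over annual exceedance probability, approximated here with the trapezoidal rule. All figures are illustrative, not the Athabasca study's values:

```python
# Sketch: expected annual damage from a damage-probability curve.
import numpy as np

aep    = np.array([0.01, 0.02, 0.10, 0.20, 0.50])  # annual exceedance probability
damage = np.array([55.0, 30.0, 4.5, 1.2, 0.0])     # damage in $ millions

# Trapezoidal integration of damage with respect to probability.
ead = float(((damage[:-1] + damage[1:]) / 2 * np.diff(aep)).sum())
print(f"expected annual damage: ${ead:.2f}M")

# A mitigation option pays off when its annualised cost is below the EAD
# reduction it achieves -- the basis of the cost-benefit comparison.
```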

Keywords: ice jam, flood hazard, river ice modelling, flood risk

Procedia PDF Downloads 185
24204 Finding the Free Stream Velocity Using Flow Generated Sound

Authors: Saeed Hosseini, Ali Reza Tahavvor

Abstract:

Sound processing is a subject that has recently attracted many researchers. It is efficient and usually less expensive than other methods. In this paper, flow-generated sound is used to estimate the speed of free flows. Many sound samples were gathered, and after analyzing the data, a parameter named wave power was chosen. For all samples, the wave power was calculated and averaged for each flow speed. A curve was fitted to the averaged data, and a correlation between the wave power and flow speed was found. Test data were used to validate the method, and errors for all test data were under 10 percent. The speed of the flow can thus be estimated by calculating the wave power of the flow-generated sound and using the proposed correlation.
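
A hedged sketch of the calibration idea: the paper does not give its wave-power formula, so mean squared amplitude is used below as one plausible definition, and the recordings are synthetic stand-ins:

```python
# Sketch: compute a wave-power figure per recording, average per known
# flow speed, and fit a calibration curve mapping power -> speed.
import numpy as np

def wave_power(samples):
    # One plausible definition (mean squared amplitude), assumed here.
    return np.mean(np.asarray(samples, dtype=float) ** 2)

rng = np.random.default_rng(2)
speeds = np.array([2.0, 4.0, 6.0, 8.0])        # known flow speeds, m/s
# Synthetic recordings whose amplitude grows with flow speed.
powers = np.array([np.mean([wave_power(rng.normal(0, 0.1 * v, 4096))
                            for _ in range(10)]) for v in speeds])

coeffs = np.polyfit(powers, speeds, deg=2)     # calibration curve
estimate = np.polyval(coeffs, wave_power(rng.normal(0, 0.55, 4096)))
print(f"estimated free-stream speed: {estimate:.1f} m/s")
```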

Keywords: the flow generated sound, free stream, sound processing, speed, wave power

Procedia PDF Downloads 415
24203 Systematic Identification of Noncoding Cancer Driver Somatic Mutations

Authors: Zohar Manber, Ran Elkon

Abstract:

Accumulation of somatic mutations (SMs) in the genome is a major driving force of cancer development. Most SMs in the tumor's genome are functionally neutral; however, some cause damage to critical processes and provide the tumor with a selective growth advantage (termed cancer driver mutations). Current research on functional significance of SMs is mainly focused on finding alterations in protein coding sequences. However, the exome comprises only 3% of the human genome, and thus, SMs in the noncoding genome significantly outnumber those that map to protein-coding regions. Although our understanding of noncoding driver SMs is very rudimentary, it is likely that disruption of regulatory elements in the genome is an important, yet largely underexplored mechanism by which somatic mutations contribute to cancer development. The expression of most human genes is controlled by multiple enhancers, and therefore, it is conceivable that regulatory SMs are distributed across different enhancers of the same target gene. Yet, to date, most statistical searches for regulatory SMs have considered each regulatory element individually, which may reduce statistical power. The first challenge in considering the cumulative activity of all the enhancers of a gene as a single unit is to map enhancers to their target promoters. Such mapping defines for each gene its set of regulating enhancers (termed "set of regulatory elements" (SRE)). Considering multiple enhancers of each gene as one unit holds great promise for enhancing the identification of driver regulatory SMs. However, the success of this approach is greatly dependent on the availability of comprehensive and accurate enhancer-promoter (E-P) maps. To date, the discovery of driver regulatory SMs has been hindered by insufficient sample sizes and statistical analyses that often considered each regulatory element separately. In this study, we analyzed more than 2,500 whole-genome sequence (WGS) samples provided by The Cancer Genome Atlas (TCGA) and The International Cancer Genome Consortium (ICGC) in order to identify such driver regulatory SMs. Our analyses took into account the combinatorial aspect of gene regulation by considering all the enhancers that control the same target gene as one unit, based on E-P maps from three genomics resources. The identification of candidate driver noncoding SMs is based on their recurrence. We searched for SREs of genes that are "hotspots" for SMs (that is, they accumulate SMs at a significantly elevated rate). To test the statistical significance of recurrence of SMs within a gene's SRE, we used both global and local background mutation rates. Using this approach, we detected - in seven different cancer types - numerous "hotspots" for SMs. To support the functional significance of these recurrent noncoding SMs, we further examined their association with the expression level of their target gene (using gene expression data provided by the ICGC and TCGA for samples that were also analyzed by WGS).
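
The recurrence test can be sketched as a one-sided Poisson comparison of the observed SM count in a gene's SRE against the count expected under a background mutation rate; every number below is invented for illustration:

```python
# Sketch: is the SM count in this gene's set of regulatory elements
# (SRE) surprisingly high given a background rate? One-sided Poisson
# test; the rate, lengths, and counts are illustrative assumptions.
from scipy.stats import poisson

background_rate = 1e-7          # SMs per base per sample (assumed)
sre_length = 15_000             # total bases across the gene's enhancers
n_samples = 2_500               # WGS cohort size, as in the study
observed = 18                   # SMs seen in this SRE across the cohort

expected = background_rate * sre_length * n_samples
p_value = poisson.sf(observed - 1, expected)   # P(X >= observed)
print(f"expected {expected:.1f}, observed {observed}, p = {p_value:.2e}")
```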

Keywords: cancer genomics, enhancers, noncoding genome, regulatory elements

Procedia PDF Downloads 104
24202 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves

Authors: Shengnan Chen, Shuhua Wang

Abstract:

Successful production of hydrocarbons from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority of society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations to enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, developing data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. This research aims to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including the reservoir geological data, reservoir geophysical data, well completion data, and production data for thousands of wells is first established to discover valuable insights and knowledge related to tight oil reserves development. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; and exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data. Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operations, leading to better designs and higher oil recovery and economic return for future wells in unconventional oil reserves.
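
A hedged sketch of the exploratory step (standardize, compress with PCA, cluster with k-means); the well attributes are synthetic stand-ins for the study's database schema:

```python
# Sketch: group wells by standardized, PCA-compressed attributes.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
# Rows = wells; columns could be porosity, permeability, frac stages,
# proppant tonnage, lateral length (illustrative, synthetic values).
wells = rng.normal(size=(1000, 5))

pipe = make_pipeline(StandardScaler(), PCA(n_components=2),
                     KMeans(n_clusters=4, n_init=10, random_state=3))
labels = pipe.fit_predict(wells)
print(np.bincount(labels))          # number of wells per cluster
```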

Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves

Procedia PDF Downloads 283
24201 Efficiency of DMUs in Presence of New Inputs and Outputs in DEA

Authors: Esmat Noroozi, Elahe Sarfi, Farha Hosseinzadeh Lotfi

Abstract:

Examining the impacts of data modification is known as sensitivity analysis, and many studies have considered the modification of input and output data in DEA. An issue that has not heretofore been considered in DEA sensitivity analysis is modification of the number of inputs and/or outputs and the determination of its impact on the efficiency status of DMUs. This paper presents systems that show the impact of adding one or multiple inputs or outputs on the efficiency status of DMUs; furthermore, a model is presented for recognizing the minimum number of inputs and/or outputs, from among specified inputs and outputs, that can be added so that an inefficient DMU becomes efficient. Finally, the presented systems and model are applied to a set of real data, and the results are reported.

Keywords: data envelopment analysis, efficiency, sensitivity analysis, input, output

Procedia PDF Downloads 450
24200 Credit Card Fraud Detection with Ensemble Model: A Meta-Heuristic Approach

Authors: Gong Zhilin, Jing Yang, Jian Yin

Abstract:

The purpose of this paper is to develop a novel system for credit card fraud detection based on sequential modeling of data using hybrid deep learning models. The projected model encapsulates five major phases: pre-processing, imbalanced-data handling, feature extraction, optimal feature selection, and fraud detection with an ensemble classifier. The collected raw data (input) are pre-processed to enhance their quality by alleviating missing data, noisy data, and null values. The pre-processed data are class-imbalanced in nature and are therefore handled with a K-means clustering-based SMOTE model. From the class-balanced data, the most relevant features are extracted: improved Principal Component Analysis (PCA) features, statistical features (mean, median, standard deviation), and higher-order statistical features (skewness and kurtosis). Among the extracted features, the most optimal ones are selected with the Self-improved Arithmetic Optimization Algorithm (SI-AOA), a conceptual improvement of the standard Arithmetic Optimization Algorithm. The deep learning models used are Long Short-Term Memory (LSTM), a Convolutional Neural Network (CNN), and an optimized Quantum Deep Neural Network (QDNN). The LSTM and CNN are trained with the extracted optimal features, and their outcomes enter as input to the optimized QDNN, which provides the final detection outcome. Since the QDNN is the ultimate detector, its weight function is also fine-tuned with the SI-AOA.
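
For the class-balancing phase, imbalanced-learn packages exactly this k-means-plus-SMOTE combination as KMeansSMOTE. A hedged sketch on synthetic transactions (not the paper's data; the cluster-balance threshold is relaxed so minority clusters are found):

```python
# Sketch of the imbalance-handling step with KMeansSMOTE.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import KMeansSMOTE

# Synthetic stand-in for transaction features, ~5% fraud.
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.95],
                           class_sep=2.0, random_state=0)
print("before:", Counter(y))

sampler = KMeansSMOTE(cluster_balance_threshold=0.05, random_state=0)
X_res, y_res = sampler.fit_resample(X, y)
print("after: ", Counter(y_res))
```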

Keywords: credit card, data mining, fraud detection, money transactions

Procedia PDF Downloads 131
24199 WebAppShield: An Approach Exploiting Machine Learning to Detect SQLi Attacks in an Application Layer in Run-time

Authors: Ahmed Abdulla Ashlam, Atta Badii, Frederic Stahl

Abstract:

In recent years, SQL injection attacks have been identified as prevalent against web applications. They affect network security and user data, which leads to a considerable loss of money and data every year. This paper presents the use of classification algorithms in machine learning with a method that classifies login-data filtering inputs into "SQLi" or "Non-SQLi", thus increasing the reliability and accuracy of results in terms of deciding whether an operation is an attack or a valid operation. A method using web-app auto-generated twin data structure replication for shielding against SQLi attacks (WebAppShield) has been developed; it verifies all users and prevents attackers (SQLi attacks) from entering or accessing the database, admitting only inputs that the machine learning module predicts as "Non-SQLi". A special login form has been developed with a special instance of data validation; this verification process secures the web application from its early stages. The system has been tested and validated, preventing up to 99% of SQLi attacks.
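
A hedged sketch of the classification idea: featurize raw login inputs with character n-grams and train a classifier to label them "SQLi" or "Non-SQLi". The training strings are illustrative; a real system like the one described would train on a large labelled corpus:

```python
# Sketch: character n-gram features + logistic regression for labelling
# login inputs. The tiny training set is for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

inputs = ["alice", "bob42", "' OR '1'='1", "admin'--",
          "carol_smith", "1; DROP TABLE users--"]
labels = ["Non-SQLi", "Non-SQLi", "SQLi", "SQLi", "Non-SQLi", "SQLi"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
    LogisticRegression())
clf.fit(inputs, labels)

print(clf.predict(["dave99", "x' UNION SELECT password FROM users--"]))
```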

Keywords: SQL injection, attacks, web application, accuracy, database

Procedia PDF Downloads 151
24198 From Theory to Practice: Harnessing Mathematical and Statistical Sciences in Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid growth of data in diverse domains has created an urgent need for effective utilization of mathematical and statistical sciences in data analytics. This abstract explores the journey from theory to practice, emphasizing the importance of harnessing mathematical and statistical innovations to unlock the full potential of data analytics. Drawing on a comprehensive review of existing literature and research, this study investigates the fundamental theories and principles underpinning mathematical and statistical sciences in the context of data analytics. It delves into key mathematical concepts such as optimization, probability theory, statistical modeling, and machine learning algorithms, highlighting their significance in analyzing and extracting insights from complex datasets. Moreover, this abstract sheds light on the practical applications of mathematical and statistical sciences in real-world data analytics scenarios. Through case studies and examples, it showcases how mathematical and statistical innovations are being applied to tackle challenges in various fields such as finance, healthcare, marketing, and social sciences. These applications demonstrate the transformative power of mathematical and statistical sciences in data-driven decision-making. The abstract also emphasizes the importance of interdisciplinary collaboration, as it recognizes the synergy between mathematical and statistical sciences and other domains such as computer science, information technology, and domain-specific knowledge. Collaborative efforts enable the development of innovative methodologies and tools that bridge the gap between theory and practice, ultimately enhancing the effectiveness of data analytics. Furthermore, ethical considerations surrounding data analytics, including privacy, bias, and fairness, are addressed within the abstract. It underscores the need for responsible and transparent practices in data analytics, and highlights the role of mathematical and statistical sciences in ensuring ethical data handling and analysis. In conclusion, this abstract highlights the journey from theory to practice in harnessing mathematical and statistical sciences in data analytics. It showcases the practical applications of these sciences, the importance of interdisciplinary collaboration, and the need for ethical considerations. By bridging the gap between theory and practice, mathematical and statistical sciences contribute to unlocking the full potential of data analytics, empowering organizations and decision-makers with valuable insights for informed decision-making.

Keywords: data analytics, mathematical sciences, optimization, machine learning, interdisciplinary collaboration, practical applications

Procedia PDF Downloads 93