Search results for: forensic autopsy data
25002 Resource Framework Descriptors for Interestingness in Data
Authors: C. B. Abhilash, Kavi Mahesh
Abstract:
Human beings are the most advanced species on earth, largely because of their ability to communicate and share information through language. Today, a huge amount of data is available on the web in text format, which has also led to the generation of big data in structured and unstructured forms. Much of this data is textual and therefore highly unstructured. To extract insights and actionable content from it, we need to apply text mining and natural language processing. Our study focuses on interestingness: generating interesting facts from data for a knowledge base. The approach derives analytics from text through natural language processing. Using the semantic web's Resource Description Framework (RDF), we generate triples from the given data and derive interesting patterns from them. The methodology also illustrates data integration using RDF to obtain reliable, interesting patterns.
Keywords: RDF, interestingness, knowledge base, semantic data
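The abstract does not name a toolchain; as a minimal sketch under assumed tooling, the triple-generation step it describes could look like the following Python fragment using the rdflib library (the namespace, entity names, and the upstream extraction step are illustrative, not from the paper):

```python
from rdflib import Graph, Namespace

# Illustrative namespace; the paper does not specify a vocabulary.
EX = Namespace("http://example.org/facts/")

g = Graph()
g.bind("ex", EX)

# Assume an upstream NLP step produced (subject, predicate, object) tuples.
extracted = [("Mona_Lisa", "createdBy", "Leonardo_da_Vinci")]

for s, p, o in extracted:
    g.add((EX[s], EX[p], EX[o]))  # one RDF triple per extracted fact

print(g.serialize(format="turtle"))
```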
Procedia PDF Downloads 162
25001 Data Mining Practices: Practical Studies on the Telecommunication Companies in Jordan
Authors: Dina Ahmad Alkhodary
Abstract:
This study aimed to investigate data mining practices in telecommunication companies in Jordan from the viewpoint of the respondents. To achieve the goal of the study and test the validity of its hypotheses, the researcher designed a questionnaire to collect data from managers and staff members of the main departments in the companies studied. The results show progressive stages of improvement in the telecommunication companies' adoption of data mining.
Keywords: data, mining, development, business
Procedia PDF Downloads 498
25000 The Translational Fandom of Marvel Cinematic Universe in the Outlier of Chinese Television Culture
Authors: Xiao Yao
Abstract:
The escalating pace of technological innovation in new media culture is liberating audiences from passive consumption toward more productive and critical engagement with legacy and streaming television media. However, how fan translation furthers the reception and interpretation of global screen stories remains an outlier of television studies. This paper showcases fan-based cross-cultural engagement with the Marvel Cinematic Universe (MCU) in China. It highlights: 1) the ways marginal audiences (Chinese MCU fans) seek to sync with the recent telecinematic expansion of the MCU; 2) the forensic and interpretative work done by Chinese MCU fans, who persistently seek to amplify the pleasure of MCU content in their media contexts; and 3) the crucial but largely unacknowledged cultural value generated by Chinese MCU fandom in the outlier of contemporary Chinese TV culture. Taken together, this study aims to further explore the notion of “translational fandom” and integrate its theorisation into current research on television culture.
Keywords: Chinese MCU fans, cross-cultural engagement, Loki, television media, translational fandom
Procedia PDF Downloads 129
24999 The Impact of System and Data Quality on Organizational Success in the Kingdom of Bahrain
Authors: Amal M. Alrayes
Abstract:
Data and system quality play a central role in organizational success, and the quality of any existing information system has a major influence on the effectiveness of overall system performance. Given the importance of system and data quality to an organization, it is relevant to highlight their effect on organizational performance in the Kingdom of Bahrain. This research aims to discover whether system quality and data quality are related and to study the impact of both on organizational success. A theoretical model based on previous research is used to show the relationship between data quality, system quality, and organizational impact. We hypothesize, first, that system quality is positively associated with organizational impact; second, that system quality is positively associated with data quality; and finally, that data quality is positively associated with organizational impact. A questionnaire survey was conducted among public and private organizations in the Kingdom of Bahrain. The results show a strong association between data and system quality, which affects organizational success.
Keywords: data quality, performance, system quality, Kingdom of Bahrain
Procedia PDF Downloads 493
24998 Cloud Computing in Data Mining: A Technical Survey
Authors: Ghaemi Reza, Abdollahi Hamid, Dashti Elham
Abstract:
Cloud computing poses a diversity of challenges for data mining operations arising from the dynamic structure of data distribution, as opposed to the typical database scenarios of conventional architectures. Because an immense number of users seek data on a daily basis, there are serious security concerns for cloud providers as well as for the data providers who place their data in the cloud computing environment. Big data analytics uses compute-intensive data mining algorithms (hidden Markov models, MapReduce parallel programming, the Apache Mahout project, the Hadoop distributed file system, k-means and k-medoids, Apriori) that require efficient, high-performance processors to produce timely results; these algorithms are used to solve or optimize model parameters. The challenges such operations encounter are establishing successful transactions with the existing virtual machine environment and keeping the databases under control. Several factors have driven the shift from normal, centralized mining to distributed data mining. One approach is a SaaS model that uses multi-agent systems to implement the different tasks of the system. There remain open problems in cloud-based data mining, including the design and selection of data mining algorithms.
Keywords: cloud computing, data mining, computing models, cloud services
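The survey lists k-means among the compute-intensive algorithms; as a hedged illustration (not code from the paper), a single k-means iteration can be written so that the assignment step parallelizes naturally across cloud workers:

```python
import numpy as np

def kmeans_step(points, centroids):
    """One k-means iteration: assign each point to its nearest
    centroid, then recompute centroids as cluster means."""
    # Assignment step: embarrassingly parallel, so it maps well
    # onto distributed workers (e.g., MapReduce mappers).
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Update step: a per-cluster reduction (MapReduce reducers).
    new_centroids = np.array([
        points[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
        for k in range(len(centroids))
    ])
    return labels, new_centroids

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 2))
cents = pts[:3].copy()
for _ in range(10):
    labels, cents = kmeans_step(pts, cents)
```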
Procedia PDF Downloads 479
24997 Cross-border Data Transfers to and from South Africa
Authors: Amy Gooden, Meshandren Naidoo
Abstract:
Genetic research and transfers of big data are not confined to a particular jurisdiction, yet there is a lack of clarity regarding the legal requirements for importing and exporting such data. Using direct-to-consumer genetic testing (DTC-GT) as an example, this research assesses the status of data sharing into and out of South Africa (SA). While SA law covers the sending of genetic data out of SA, prohibiting such transfer unless a legal ground exists, the position where genetic data comes into the country depends on the laws of the country from which it is sent, making the legal position less clear.
Keywords: cross-border, data, genetic testing, law, regulation, research, sharing, South Africa
Procedia PDF Downloads 125
24996 The Study of Security Techniques on Information System for Decision Making
Authors: Tejinder Singh
Abstract:
An information system (IS) is the flow of data across different levels and in different directions for decision making and data operations. Data can be compromised in various ways, such as manual or technical errors, data tampering, or loss of integrity, and the security layer of an IS, such as a firewall, is affected by these kinds of violations. The flow of data among the various levels of an information system is carried by the networking system in the form of packets or frames. To protect these packets from unauthorized access and virus attacks, and to maintain their integrity, network security is an important factor, and various security techniques are used to protect data from piracy. This paper surveys these security techniques and characterizes different harmful attacks with the help of detailed data analysis. It should help organizations make their systems more secure and effective and support future decision making.
Keywords: information systems, data integrity, TCP/IP network, vulnerability, decision, data
Procedia PDF Downloads 307
24995 Data Integration with Geographic Information System Tools for Rural Environmental Monitoring
Authors: Tamas Jancso, Andrea Podor, Eva Nagyne Hajnal, Peter Udvardy, Gabor Nagy, Attila Varga, Meng Qingyan
Abstract:
The paper deals with the conditions and circumstances of integrating remotely sensed data for rural environmental monitoring purposes. The main task is to make sound decisions during the integration process when the data sources differ in resolution, location, spectral channels, and dimension. In order to know exactly what integration and data fusion are possible, it is necessary to know the properties (metadata) that characterize the data. The paper explains the joining of these data sources through their attribute data using a sample project. The resulting product will be used for rural environmental analysis.
Keywords: remote sensing, GIS, metadata, integration, environmental analysis
Procedia PDF Downloads 120
24994 Analysis of Genomics Big Data in Cloud Computing Using Fuzzy Logic
Authors: Mohammad Vahed, Ana Sadeghitohidi, Majid Vahed, Hiroki Takahashi
Abstract:
In the genomics field, huge amounts of data are produced by next-generation sequencers (NGS). Data volumes are growing very rapidly; it has been projected that more than one billion bases will be produced per year by 2020, a growth rate much faster than Moore's law in computer technology. This makes genomics data harder to deal with: storing the data, searching for information, and finding hidden information. An analysis platform for genomics big data is therefore required. Newly developed cloud computing enables us to deal with big data more efficiently, and Hadoop is one such distributed computing framework, forming the core of Big Data as a Service (BDaaS). Although many services have adopted this technology (e.g., Amazon), there are few applications in the biology field. Here, we propose a new algorithm to deal more efficiently with genomics big data such as sequencing data. Our algorithm consists of two parts: first, BDaaS is applied to handle the data more efficiently; second, a hybrid method of MapReduce and fuzzy logic is applied for data processing, a step that can be parallelized in implementation. Our algorithm has great potential for the computational analysis of genomics big data, e.g., de novo genome assembly and sequence similarity search. We discuss our algorithm and its feasibility.
Keywords: big data, fuzzy logic, MapReduce, Hadoop, cloud computing
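The abstract does not detail the hybrid method; as a rough sketch under assumed semantics, a mapper could emit k-mer counts and the reduced counts could then be graded by a fuzzy membership function rather than a hard threshold (the membership shape and cut-offs are illustrative):

```python
from collections import defaultdict

def mapper(read, k=4):
    # Emit (k-mer, 1) pairs from one sequencing read.
    for i in range(len(read) - k + 1):
        yield read[i:i + k], 1

def fuzzy_high_abundance(count, low=2.0, high=8.0):
    # Fuzzy membership in [0, 1] instead of a hard
    # "abundant / not abundant" cut-off (illustrative shape).
    if count <= low:
        return 0.0
    if count >= high:
        return 1.0
    return (count - low) / (high - low)

reads = ["ACGTACGTAC", "CGTACGTACG"]
counts = defaultdict(int)
for read in reads:                # "map" phase
    for kmer, one in mapper(read):
        counts[kmer] += one       # "reduce" phase (shuffle + sum)

membership = {kmer: fuzzy_high_abundance(c) for kmer, c in counts.items()}
```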
Procedia PDF Downloads 299
24993 Forthcoming Big Data on Smart Buildings and Cities: An Experimental Study on Correlations among Urban Data
Authors: Yu-Mi Song, Sung-Ah Kim, Dongyoun Shin
Abstract:
Cities are complex systems of diverse and intertangled activities. These activities and their complex interrelationships create diverse urban phenomena, which in turn considerably influence the lives of citizens. This research aimed to develop a method for revealing the causes and effects among diverse urban elements, in order to enable a better understanding of urban activities and, from that, better urban planning strategies. Specifically, the study addressed a data-recommendation problem found on a Korean public-data portal. First, a correlation analysis was conducted to find correlations among arbitrary urban datasets. Then, based on the results of that analysis, a weighted data network for each urban dataset was presented to users. It is expected that the data weights thereby obtained will provide insights into cities and show how diverse urban activities influence one another and induce feedback.
Keywords: big data, machine learning, ontology model, urban data model
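As a minimal sketch of the correlation-to-weighted-network step (the column names, values, and threshold are illustrative assumptions, not the study's data):

```python
import pandas as pd

# Hypothetical urban indicators; the paper's datasets are not public here.
df = pd.DataFrame({
    "bus_ridership": [120, 135, 150, 160, 180],
    "retail_sales":  [30, 32, 37, 41, 45],
    "air_quality":   [80, 78, 74, 70, 66],
})

corr = df.corr()  # pairwise Pearson correlations

# Build weighted edges between datasets whose |r| clears a threshold.
edges = [
    (a, b, round(corr.loc[a, b], 2))
    for i, a in enumerate(corr.columns)
    for b in corr.columns[i + 1:]
    if abs(corr.loc[a, b]) >= 0.7
]
print(edges)
```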
Procedia PDF Downloads 418
24992 Data-driven Decision-Making in Digital Entrepreneurship
Authors: Abeba Nigussie Turi, Xiangming Samuel Li
Abstract:
Data-driven business models are more typical of established businesses than of early-stage startups striving to penetrate a market. This paper provides an extensive discussion of the principles of data analytics for early-stage digital entrepreneurial businesses. We develop a data-driven decision-making (DDDM) framework that applies to startups prone to multifaceted barriers such as poor data access and technical and financial constraints, to name a few. The startup DDDM framework proposed in this paper is novel in encompassing startup data analytics enablers and metrics that align with startups' business models, ranging from customer-centric product development to servitization, the future of modern digital entrepreneurship.
Keywords: startup data analytics, data-driven decision-making, data acquisition, data generation, digital entrepreneurship
Procedia PDF Downloads 329
24991 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Model assessment, in the Bayesian context, involves evaluating goodness-of-fit and comparing several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction that penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive CV variant, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS), and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to exact LOO-CV; they reuse the existing MCMC results and so avoid the expensive computation. The reciprocals of the predictive densities calculated over the posterior draws for each observation are treated as the raw importance weights, which in turn are used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO the raw weights are used directly; in TIS-LOO and PSIS-LOO the larger weights are replaced by modified, truncated weights. Although information criteria and LOO-CV cannot reflect goodness-of-fit in an absolute sense, their differences can measure the relative performance of the models of interest; however, the use of these measures is only valid under specific circumstances. This study developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data: four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations, and even though IS-LOO, TIS-LOO, and PSIS-LOO are considered approximations of exact LOO-CV, the study observed some drastic deviations in their results. However, there are some interesting relationships among the logarithms of the pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles were observed for the models conditional on equal posterior variances in lppds. This study illustrates the limitations of the information criteria in practical model comparison problems; in addition, the relationships among the LOO-CV approximation methods and WAIC, with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
Keywords: cross-validation, importance sampling, information criteria, predictive accuracy
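As a hedged numerical sketch of the quantities the abstract describes, given a draws-by-observations matrix of pointwise log-likelihoods from MCMC (the matrix below is synthetic, and real analyses would typically use PSIS rather than raw IS weights):

```python
import numpy as np

rng = np.random.default_rng(1)
# log p(y_i | theta_s): S posterior draws x n observations (synthetic).
log_lik = rng.normal(loc=-1.0, scale=0.3, size=(4000, 25))

def logmeanexp(a, axis=0):
    m = a.max(axis=axis)
    return m + np.log(np.exp(a - m).mean(axis=axis))

# lppd: sum over observations of the log mean posterior density.
lppd = logmeanexp(log_lik, axis=0).sum()

# WAIC penalty: sum of posterior variances of the pointwise log-likelihood.
p_waic = log_lik.var(axis=0, ddof=1).sum()
elpd_waic = lppd - p_waic

# IS-LOO: raw weights are reciprocals of the predictive densities, giving
# a weighted (harmonic-mean) average of posterior densities per observation.
elpd_is_loo = (-logmeanexp(-log_lik, axis=0)).sum()

print(elpd_waic, elpd_is_loo)
```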
Procedia PDF Downloads 392
24990 Cryptographic Protocol for Secure Cloud Storage
Authors: Luvisa Kusuma, Panji Yudha Prakasa
Abstract:
Cloud storage, a subservice of infrastructure as a service (IaaS) in cloud computing, is a model of networked storage in which data can be stored on servers. In this paper, we propose a secure cloud storage system consisting of two main components: the client, a user of the cloud storage service, and the server, which provides the cloud storage service. For this system, we propose protocol schemes that guard against security attacks on data transmission: a login protocol, an upload data protocol, a download protocol, and a push data protocol. These implement a hybrid cryptographic mechanism based on encrypting data before it is sent to the cloud, so the cloud storage provider does not know the user's data and cannot analyze it, because there is no correspondence between data and user.
Keywords: cloud storage, security, cryptographic protocol, artificial intelligence
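The paper's exact protocol messages are not reproduced in the abstract; as a minimal sketch of the client-side "encrypt before upload" idea using the Python cryptography package (the key handling and the upload call are illustrative assumptions, not the paper's protocol):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_upload(plaintext: bytes, key: bytes) -> bytes:
    # Fresh nonce per object; prepend it so the client can decrypt later.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_after_download(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)  # stays with the client
blob = encrypt_for_upload(b"patient record 42", key)
# upload(blob)  # hypothetical transport; the provider sees only ciphertext
assert decrypt_after_download(blob, key) == b"patient record 42"
```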
Procedia PDF Downloads 357
24989 Decentralized Data Marketplace Framework Using Blockchain-Based Smart Contract
Authors: Meshari Aljohani, Stephan Olariu, Ravi Mukkamala
Abstract:
Data is essential for enhancing the quality of life, and its value creates chances for users to profit from data sales and purchases. Users in data marketplaces, however, must share and trade data in a secure and trusted environment while maintaining their privacy. The first main contribution of this paper is to identify the enabling technologies and challenges facing the development of decentralized data marketplaces. The second is to propose a decentralized data marketplace framework based on blockchain technology, which enables sellers and buyers to transact with more confidence. Using a security deposit, the system implements a unique approach to enforcing honesty in data exchange among anonymous individuals: before a transaction is considered complete, the system allows a time frame during which users can submit disputes to the arbitrators, who review them and respond with their decision. Use cases are presented to demonstrate how these technologies help data marketplaces handle issues and challenges.
Keywords: blockchain, data, data marketplace, smart contract, reputation system
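As a hedged, off-chain sketch of the deposit-plus-dispute-window logic the abstract describes (a real deployment would be an on-chain smart contract; the amounts, window, and payout rule here are invented):

```python
import time

class EscrowTrade:
    """Toy simulation of the security-deposit idea; not the paper's contract."""

    def __init__(self, price: int, deposit: int, window_s: int):
        self.price, self.deposit, self.window_s = price, deposit, window_s
        self.opened_at = time.time()
        self.disputed = False

    def dispute(self) -> bool:
        # Buyers may only dispute inside the time frame.
        if time.time() - self.opened_at <= self.window_s:
            self.disputed = True
        return self.disputed

    def settle(self, arbiter_says_seller_honest: bool = True) -> dict:
        if self.disputed and not arbiter_says_seller_honest:
            # A dishonest seller forfeits the security deposit.
            return {"buyer": self.price + self.deposit, "seller": 0}
        return {"buyer": 0, "seller": self.price + self.deposit}

trade = EscrowTrade(price=100, deposit=20, window_s=3600)
print(trade.settle())  # no dispute: seller recovers price + deposit
```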
Procedia PDF Downloads 158
24988 Optical Coherence Tomography in Parkinson’s Disease: A Potential in-vivo Retinal α-Synuclein Biomarker in Parkinson’s Disease
Authors: Jessica Chorostecki, Aashka Shah, Fen Bao, Ginny Bao, Edwin George, Navid Seraji-Bozorgzad, Veronica Gorden, Christina Caon, Elliot Frohman
Abstract:
Background: Parkinson's disease (PD) is a neurodegenerative disorder associated with the loss of dopaminergic cells and the presence of α-synuclein (AS) aggregation in Lewy bodies. Both dopaminergic cells and AS are found in the retina. Optical coherence tomography (OCT) allows high-resolution in-vivo examination of retinal structural injury in neurodegenerative disorders, including PD. Methods: We performed a cross-sectional OCT study in patients with definite PD and healthy controls (HC) using a spectral-domain (SD-OCT) platform to measure peripapillary retinal nerve fiber layer (pRNFL) thickness and total macular volume (TMV). We performed intra-retinal segmentation with fully automated software to measure the volumes of the RNFL, ganglion cell layer (GCL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), and outer nuclear layer (ONL). Segmentation was performed blinded to the clinical status of the study participants. Results: 101 eyes from 52 PD patients (mean age 65.8 years) and 46 eyes from 24 HC subjects (mean age 64.1 years) were included in the study. Mean pRNFL thickness was not significantly different (96.95 μm vs 94.42 μm, p=0.07), but TMV was significantly lower in PD than in HC (8.33 mm³ vs 8.58 mm³, p=0.0002). Intra-retinal segmentation showed no significant difference in RNFL volume between the PD and HC groups (0.95 mm³ vs 0.92 mm³, p=0.454). However, GCL, IPL, INL, and ONL volumes were significantly reduced in PD compared to HC, whereas OPL volume was significantly increased in PD. Conclusions: Our finding of an enlarged OPL corresponds with mRNA expression studies showing localization of AS in the OPL across vertebrate species and with autopsy studies demonstrating AS aggregation in the deeper layers of the retina in PD. We propose that enlargement of the OPL may represent a potential biomarker of AS aggregation in PD. Longitudinal studies in larger cohorts are warranted to confirm our observations, which may have significant implications for disease monitoring and therapeutic development.
Keywords: optical coherence tomography, biomarker, Parkinson's disease, alpha-synuclein, retina
Procedia PDF Downloads 437
24987 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence
Authors: Gus Calderon, Richard McCreight, Tammy Schwartz
Abstract:
Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort has had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high-spatial-resolution (5-inch ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing combines object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community is divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat-station imaging of fixed GPS locations to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe. To assist homeowners fighting increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners' understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements that can be mitigated. Geospatial data from FireWatch's defensible space maps were combined, using Black Swan's patented approach, with 39 other risk characteristics into a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction, and it starts with high-quality data and education.
Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk
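FireWatch's algorithms are proprietary; as a generic, hedged example of the kind of spectral vegetation index such systems derive from multispectral bands, NDVI can be computed per pixel from the red and near-infrared channels (the toy reflectance values below are invented):

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, (NIR - Red)/(NIR + Red).
    Values near 1 indicate dense green vegetation; near 0, bare ground."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = nir + red
    return np.where(denom == 0, 0.0, (nir - red) / denom)

# Toy 2x2 "image": vegetated and bare pixels mixed.
red_band = np.array([[0.05, 0.30], [0.10, 0.25]])
nir_band = np.array([[0.60, 0.32], [0.55, 0.26]])
print(ndvi(red_band, nir_band))
```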
Procedia PDF Downloads 108
24986 Age Estimation from Teeth among North Indian Population: Comparison and Reliability of Qualitative and Quantitative Methods
Authors: Jasbir Arora, Indu Talwar, Daisy Sahni, Vidya Rattan
Abstract:
Introduction: Age estimation is a crucial step in establishing the identity of a person, whether deceased or alive. In adults, age can be estimated on the basis of six regressive changes in teeth (attrition, secondary dentine, dentine transparency, root resorption, cementum apposition, and periodontal disease), qualitatively using a scoring system and quantitatively by a micrometric method. The present research was designed to establish the reliability of qualitative (method 1) and quantitative (method 2) age estimation among North Indians and to compare the efficacy of the two methods. Method: 250 single-rooted extracted teeth (18-75 yrs.) were collected from the Department of Oral Health Sciences, PGIMER, Chandigarh. Before extraction, the periodontal score of each tooth was noted. Labiolingual sections were prepared and examined under a light microscope for regressive changes. Each parameter was scored using Gustafson's 0-3 point system (qualitative), and a total score was calculated. For the quantitative method, each regressive change was measured in the form of 18 micrometric parameters under the microscope with a measuring eyepiece. Age was estimated using linear regression for Gustafson's method and multiple regression for Kedici's method, and the estimated age was compared with actual age on the basis of absolute mean error. Results: In the pooled data, Gustafson's method showed a significant correlation (r=0.8) between total score and actual age, with an absolute mean error of ±7.8 years. For Kedici's method, a correlation of r=0.5 (p<0.01) was observed between the eighteen micrometric parameters and known age; multiple regression gave an absolute mean error of ±12.18 years. Conclusion: Gustafson's (qualitative) method was found to be the better predictor for age estimation among North Indians.
Keywords: forensic odontology, age estimation, North India, teeth
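As a hedged sketch of the qualitative route, fitting age against the Gustafson total score with simple linear regression (the data points below are fabricated stand-ins, not the study's measurements):

```python
import numpy as np

# Hypothetical (total_score, known_age) pairs for illustration only.
scores = np.array([2, 4, 5, 7, 8, 10, 12])
ages = np.array([21, 30, 34, 45, 52, 61, 73])

slope, intercept = np.polyfit(scores, ages, deg=1)  # age = a*score + b

predicted = slope * scores + intercept
abs_mean_error = np.mean(np.abs(predicted - ages))
print(f"age ≈ {slope:.2f}*score + {intercept:.2f}, AME ±{abs_mean_error:.1f} yrs")
```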
Procedia PDF Downloads 242
24985 Visualization of Latent Sweat Fingerprints Deposit on Paper by Infrared Radiation and Blue Light
Authors: Xiaochun Huang, Xuejun Zhao, Yun Zou, Feiyu Yang, Wenbin Liu, Nan Deng, Ming Zhang, Nengbin Cai
Abstract:
A simple device based on infrared radiation (IR) was developed for rapid visualization of sweat fingerprints deposited on paper with blue light (450 nm, 11 W). In this approach, IR serves as a pretreatment before the sweat fingerprints are illuminated by an annular blue light source. Sample fingerprints were examined under various conditions after deposition, and the experimental results indicate that the recovery rate of latent sweat fingerprints is in the range of 50%-100% without chemical treatment. A mechanism for the observed visibility is proposed, based on the transport and re-impregnation of fluorescer in the paper at the water-marked regions, and further exploratory experiments fully support this mechanism. An IR-pretreated method of detecting latent fingerprints may therefore be preferable for examinations in which biological information from the samples is needed for subsequent testing.
Keywords: forensic science, visualization, infrared radiation, blue light, latent sweat fingerprints, detection
Procedia PDF Downloads 497
24984 Data Mining Approach for Commercial Data Classification and Migration in Hybrid Storage Systems
Authors: Mais Haj Qasem, Maen M. Al Assaf, Ali Rodan
Abstract:
Parallel hybrid storage systems consist of a hierarchy of different storage devices that vary in data-reading speed: as we ascend the hierarchy, reading becomes faster. Thus, migrating the application's important data, that which will be accessed in the near future, to the uppermost level reduces the application's I/O waiting time and hence its execution elapsed time. In this research, we implement a trace-driven, two-level parallel hybrid storage system prototype consisting of HDDs and SSDs. The prototype uses data mining techniques to classify the application's data in order to determine its near-future data accesses, in parallel with its on-demand requests. The important data (i.e., the data that the application will access in the near future) are continuously migrated to the uppermost level of the hierarchy. Our simulation results show that our data migration approach, integrated with data mining techniques, reduces the application execution elapsed time by at least 22% across a variety of traces.
Keywords: hybrid storage system, data mining, recurrent neural network, support vector machine
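As a hedged sketch of the migration loop (the classifier stub below stands in for the paper's RNN/SVM models, whose details the abstract does not give; capacities and trace are invented):

```python
from collections import deque

SSD_CAPACITY = 3
ssd: set[str] = set()            # fast tier
lru = deque()                    # eviction order for the fast tier

def predicted_hot(block: str, history: list[str]) -> bool:
    # Stand-in for the data-mining classifier (e.g., RNN/SVM):
    # here, "hot" simply means recently re-accessed.
    return history.count(block) >= 2

def migrate_if_hot(block: str, history: list[str]) -> None:
    if block in ssd or not predicted_hot(block, history):
        return
    if len(ssd) >= SSD_CAPACITY:         # evict coldest block to HDD
        ssd.discard(lru.popleft())
    ssd.add(block)
    lru.append(block)

trace = ["a", "b", "a", "c", "a", "b", "d", "b"]
for i, blk in enumerate(trace):
    migrate_if_hot(blk, trace[: i + 1])
print(ssd)  # blocks promoted to the SSD tier
```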
Procedia PDF Downloads 308
24983 Discussion on Big Data and One of Its Early Training Application
Authors: Fulya Gokalp Yavuz, Mark Daniel Ward
Abstract:
This study focuses on a contemporary and inevitable topic of data science and an exemplary application of it for early career building: Big Data and the Living Learning Community (LLC). Academia and industry agree on the importance of Big Data, yet both are at risk of missing out on training in this interdisciplinary area, and some traditional teaching doctrines are far from effective for data science. Practitioners need intuition and real-life examples of how to apply new methods to terabyte-scale data. We outline the scope of data science training and exemplify its early-stage application with the LLC, a National Science Foundation (NSF) funded project under the supervision of Prof. Ward since 2014. Essentially, we aim to give professors, researchers, and practitioners some intuition for combining data science tools in comprehensive real-life examples, guided by mentees' feedback. By discussing mentoring methods and the computational challenges of Big Data, we intend to underline its potential with some further realizations.
Keywords: Big Data, computation, mentoring, training
Procedia PDF Downloads 362
24982 Towards a Secure Storage in Cloud Computing
Authors: Mohamed Elkholy, Ahmed Elfatatry
Abstract:
Cloud computing has emerged as a flexible computing paradigm that has reshaped the information technology map. However, it has brought a number of security challenges as a result of the physical distribution of computational resources and the limited control users have over physical storage. This situation raises many security challenges for data integrity and confidentiality, as well as for authentication and access control. This work proposes a security mechanism for data integrity that allows a data owner to be aware of any modification to their data. The integrity mechanism is integrated with an extended Kerberos authentication that ensures authorized access control, and the proposed mechanism protects data confidentiality even when data are stored on untrusted storage. The mechanism has been evaluated against different types of attacks and proved efficient in protecting cloud data storage from malicious attacks.
Keywords: access control, data integrity, data confidentiality, Kerberos authentication, cloud security
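The paper's concrete construction is not given in the abstract; as a minimal sketch of one standard way an owner can detect modification of remotely stored data (keyed HMAC tags, using only Python's standard library; the storage calls are hypothetical):

```python
import hmac
import hashlib

def tag(data: bytes, owner_key: bytes) -> bytes:
    # Owner-side tag; the untrusted store never sees owner_key.
    return hmac.new(owner_key, data, hashlib.sha256).digest()

def verify(data: bytes, stored_tag: bytes, owner_key: bytes) -> bool:
    # Constant-time comparison detects any modification of the data.
    return hmac.compare_digest(tag(data, owner_key), stored_tag)

key = b"owner-secret-key"
blob = b"quarterly report v1"
t = tag(blob, key)
# put(blob); put(t)            # hypothetical upload to untrusted storage
tampered = b"quarterly report v2"
print(verify(blob, t, key), verify(tampered, t, key))  # True False
```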
Procedia PDF Downloads 335
24981 Biodegradation Effects onto Source Identification of Diesel Fuel Contaminated Soils
Authors: Colin S. Chen, Chien-Jung Tien, Hsin-Jan Huang
Abstract:
In weathering studies, changes in chemical constituents due to biodegradation in diesel-contaminated soils are important factors to consider, especially over prolonged weathering. The objective was to evaluate, through laboratory and field studies, the effects of biodegradation on hydrocarbon fingerprinting and distribution patterns of diesel fuels, fuel source screening and differentiation, source-specific marker compounds, and diagnostic ratios of diesel fuel constituents. Biodegradation in diesel-contaminated soils was evaluated in experiments lasting 15 and 12 months, respectively. Degradation of diesel fuel in topsoil was affected by the organic carbon content and microbial biomass of the soils: greater depletion of total petroleum hydrocarbons (TPH), n-alkanes, and polynuclear aromatic hydrocarbons (PAHs) and their alkyl homologues was observed in soils with higher organic carbon content and biomass. Decreased ratios of n-alkanes to the selected isoprenoids pristane (Pr) and phytane (Ph), namely n-C17/pristane and n-C18/phytane, were observed, while the pristane/phytane ratio remained consistent for a longer period, decreasing only at the end of the experiment. Biomarker compounds among the bicyclic sesquiterpanes (BS) were less susceptible to biodegradation. The ratios of characteristic factors such as C15 sesquiterpane/8β(H)-drimane (BS3/BS5), C15 sesquiterpane/8β(H)-drimane (BS4/BS5), 8β(H)-drimane/8β(H)-homodrimane (BS5/BS10), and C15 sesquiterpane/8β(H)-homodrimane (BS3/BS10) could be adopted for source identification of diesel fuels in topsoil; however, for biodegradation lasting six months but shorter than nine, only BS3/BS5 and BS3/BS10 could distinguish the two diesel fuels. In subsoil experiments (contaminated soil located 50 cm below the surface), the ratios BS3/BS5, BS4/BS5, and BS5/BS10 remained valid for source identification of the two diesel fuels after nine months of biodegradation. At the early stage of contamination, soil biomass decreased significantly; nevertheless, 6 and 7 dominant species were found in the topsoils of the two soils, respectively. With less oxygen and fewer nutrients in the subsoil, less microbial biomass was observed, and only 2 and 4 diesel-degrading species were identified in the two soils, respectively. Double-ratio parameters such as fluorene/C1-fluorene : C2-phenanthrene/C3-phenanthrene (C0F/C1F:C2P/C3P) in both topsoil and subsoil, and C2-naphthalene/C2-phenanthrene : C1-phenanthrene/C3-phenanthrene (C2N/C2P:C1P/C3P) and C1-phenanthrene/C1-fluorene : C3-naphthalene/C3-phenanthrene (C1P/C1F:C3N/C3P) in subsoil, could serve as forensic indicators in diesel-contaminated sites; BS3/BS10:BS4/BS5 could be used for 6 to 9 months of biodegradation. Results of principal component analysis (PCA) indicated that source identification of diesel fuels in topsoil could only be performed for weathering of less than 6 months; for subsoil, identification can be conducted for weathering of less than 9 months. Ratios of isoprenoids (pristane and phytane) and PAHs may be affected by biodegradation at spill sites, whereas ratios of bicyclic sesquiterpanes could serve as forensic indicators in diesel-contaminated soils. Finally, source identification was attempted for samples collected from different fuel-contaminated sites using the unique pattern of sesquiterpanes. It is anticipated that the information generated in this study will be adopted by decision makers to evaluate cleanup liability at diesel-contaminated sites.
Keywords: biodegradation, diagnostic ratio, diesel fuel, environmental forensics
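As a small hedged sketch of the diagnostic-ratio step (the concentration values below are invented placeholders; only the ratio definitions follow the abstract):

```python
# Hypothetical GC-MS peak areas for two samples (arbitrary units).
samples = {
    "spill_site": {"nC17": 120.0, "Pr": 150.0, "nC18": 110.0, "Ph": 140.0},
    "suspect_fuel": {"nC17": 180.0, "Pr": 160.0, "nC18": 170.0, "Ph": 150.0},
}

def diagnostic_ratios(c: dict) -> dict:
    # Ratios that degrade (n-alkane/isoprenoid) vs. one that persists (Pr/Ph).
    return {
        "nC17/Pr": c["nC17"] / c["Pr"],
        "nC18/Ph": c["nC18"] / c["Ph"],
        "Pr/Ph":   c["Pr"] / c["Ph"],
    }

for name, conc in samples.items():
    ratios = {k: round(v, 2) for k, v in diagnostic_ratios(conc).items()}
    print(name, ratios)
```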
Procedia PDF Downloads 228
24980 Ontological Modeling Approach for Statistical Databases Publication in Linked Open Data
Authors: Bourama Mane, Ibrahima Fall, Mamadou Samba Camara, Alassane Bah
Abstract:
National statistical institutes hold large volumes of data, generally in formats that condition how the information they contain can be published. Each household or business data collection project includes its own dissemination platform, and these previously used dissemination methods neither promote rapid access to information nor offer the option of linking data for in-depth processing. In this paper, we present an approach to modeling these data so as to publish them in a format intended for the Semantic Web. Our objective is to publish all of this data on a single platform and offer the option of linking it with external data sources. The approach will be applied to data from major national surveys, such as those on employment, poverty, and child labor, and the general population census of Senegal.
Keywords: Semantic Web, linked open data, database, statistic
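As a minimal sketch of what such published output enables (the ex: vocabulary and figures are invented for illustration; real deployments would typically use a standard vocabulary such as the RDF Data Cube):

```python
from rdflib import Graph

g = Graph()
# Tiny Turtle snippet standing in for a published survey dataset.
g.parse(data="""
@prefix ex: <http://example.org/stats/> .
ex:obs1 ex:region "Dakar" ; ex:indicator "employment_rate" ; ex:value 42.1 .
ex:obs2 ex:region "Thies" ; ex:indicator "employment_rate" ; ex:value 38.7 .
""", format="turtle")

# Once published as linked data, observations become queryable with SPARQL.
q = """
PREFIX ex: <http://example.org/stats/>
SELECT ?region ?value WHERE {
  ?obs ex:indicator "employment_rate" ; ex:region ?region ; ex:value ?value .
}
"""
for region, value in g.query(q):
    print(region, value)
```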
Procedia PDF Downloads 175
24979 The Role of Data Protection Officer in Managing Individual Data: Issues and Challenges
Authors: Nazura Abdul Manap, Siti Nur Farah Atiqah Salleh
Abstract:
For decades, the misuse of personal data has been a critical issue. Malaysia accepted responsibility by implementing the Malaysian Personal Data Protection Act 2010 (PDPA 2010) to secure personal data. After more than a decade, this legislation is set to be revised by the current PDPA 2023 Amendment Bill to align with the world's key personal data protection regulations, such as the European Union General Data Protection Regulation (GDPR). Among the suggested adjustments is the data user's appointment of a Data Protection Officer (DPO) to ensure the commercial entity's compliance with the PDPA 2010 criteria. The change is expected to be enacted in parliament fairly soon; nevertheless, based on the experience of the Personal Data Protection Department (PDPD) in implementing the Act, it is projected that there will be a slew of additional concerns associated with the DPO mandate. Consequently, the goal of this article is to highlight the issues that DPOs will encounter and how the Personal Data Protection Department should respond. The results were produced using a qualitative technique based on an examination of the current literature. This research reveals probable obstacles for DPOs and argues that there should be a definite, clear guideline in place to aid DPOs in executing their tasks. Appointing a DPO is a wise measure for ensuring that legal data security requirements are met.
Keywords: guideline, law, data protection officer, personal data
Procedia PDF Downloads 78
24978 Data Collection Based on the Questionnaire Survey In-Hospital Emergencies
Authors: Nouha Mhimdi, Wahiba Ben Abdessalem Karaa, Henda Ben Ghezala
Abstract:
The methods used in data collection are diverse: electronic media, focus group interviews, and short-answer questionnaires [1]. Poor-quality data, resulting for example from poorly designed questionnaires, the absence of good translators or interpreters, or the incorrect recording of data, allows conclusions to be drawn that are not supported by the data, or focuses attention only on the average effect of a program or policy. Several solutions exist to avoid or minimize the most frequent errors, including obtaining expert advice on the design or adaptation of data collection instruments, or using technologies allowing better "anonymity" in the responses [2]. In this context, we opted to collect good-quality data by conducting a sizeable questionnaire-based survey on hospital emergencies, in order to improve emergency services and alleviate the problems encountered. In this paper, we present our study and detail the steps followed to collect relevant, consistent, and practical data.
Keywords: data collection, survey, questionnaire, database, data analysis, hospital emergencies
Procedia PDF Downloads 108
24977 Federated Learning in Healthcare
Authors: Ananya Gangavarapu
Abstract:
Convolutional neural network (CNN) based models provide diagnostic capabilities on par with medical specialists in many specialty areas. However, collecting medical data for training purposes is very challenging because of increased regulation around data collection and privacy concerns around personal health data. Gathering the data becomes even more difficult when the capture devices are edge-based mobile devices (such as smartphones) with feeble wireless connectivity in rural or remote areas. In this paper, I highlight the federated learning approach as a way to mitigate data privacy and security issues.
Keywords: deep learning in healthcare, data privacy, federated learning, training in distributed environment
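The abstract does not commit to a specific aggregation rule; as a hedged sketch, federated averaging (FedAvg, the canonical scheme, assumed here rather than taken from the paper) combines locally trained weights without raw patient data ever leaving the device:

```python
import numpy as np

def fed_avg(client_weights: list[np.ndarray], client_sizes: list[int]) -> np.ndarray:
    """Federated averaging: weight each client's model update by its
    local dataset size; only weights travel, never raw medical data."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy round: three hospitals/phones with differently sized local datasets.
updates = [np.array([0.2, 1.0]), np.array([0.4, 0.8]), np.array([0.1, 1.2])]
sizes = [100, 300, 50]
global_weights = fed_avg(updates, sizes)
print(global_weights)
```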
Procedia PDF Downloads 141
24976 Development of Zinc Oxide Coated Carbon Nanoparticles from Pineapples Leaves Using SOL Gel Method for Optimal Adsorption of Copper ion and Reuse in Latent Fingerprint
Authors: Bienvenu Gael Fouda Mbanga, Zikhona Tywabi-Ngeva, Kriveshini Pillay
Abstract:
This work highlights a new method for preparing a nanocomposite of nitrogen carbon nanoparticles fused on zinc oxide nanoparticles (N-CNPs/ZnONPsNC) by the sol-gel method to remove copper ions (Cu²⁺) from wastewater, and applies the metal-loaded adsorbent in latent fingerprint detection. The N-CNPs/ZnONPsNC proved an effective sorbent, with optimum Cu²⁺ sorption at pH 8 and a 0.05 g dose. The Langmuir isotherm best fitted the process, with a maximum adsorption capacity of 285.71 mg/g, higher than most values reported in other research for Cu²⁺ removal. Adsorption was spontaneous and endothermic at 25 °C. In addition, the Cu²⁺-loaded N-CNPs/ZnONPsNC was found to be sensitive and selective for latent fingerprint (LFP) recognition on a range of porous surfaces, making it an effective distinguishing agent for latent fingerprint detection in forensic research.
Keywords: latent fingerprint, nanocomposite, adsorption, copper ions, metal loaded adsorption, adsorbent
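As a hedged sketch of fitting the Langmuir isotherm q_e = q_max·K·C_e/(1 + K·C_e) that the study reports (the equilibrium data below are fabricated for illustration; only the 285.71 mg/g figure comes from the abstract):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, q_max, k):
    # q_e = q_max * K * C_e / (1 + K * C_e)
    return q_max * k * ce / (1.0 + k * ce)

# Hypothetical equilibrium concentrations (mg/L) and uptakes (mg/g).
ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
qe = np.array([55.0, 100.0, 180.0, 230.0, 262.0, 275.0])

(q_max, k), _ = curve_fit(langmuir, ce, qe, p0=(300.0, 0.01))
print(f"q_max ≈ {q_max:.1f} mg/g, K ≈ {k:.3f} L/mg")
# The study itself reports a fitted maximum capacity of 285.71 mg/g.
```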
Procedia PDF Downloads 84
24975 The Utilization of Big Data in Knowledge Management Creation
Authors: Daniel Brian Thompson, Subarmaniam Kannan
Abstract:
The weight of knowledge in the world, and within the repositories of organizations, has already reached immense proportions and is constantly increasing. To accommodate this, Big Data implementations and algorithms are utilized to obtain new or enhanced knowledge for decision-making, and the transition from data to knowledge provides transformational changes with tangible benefits for those who implement these practices. Today, organizations derive knowledge from observations and intuitions, and this information or data is translated into best practices for knowledge acquisition, generation, and sharing. Through the widespread usage of Big Data, the main intention is to provide information that has been cleaned and analyzed, nurturing tangible insights that an organization can apply to its knowledge-creation practices based on facts and figures. The translation of data into knowledge generates the value an organization needs to make decisive decisions and proceed with the transition to best practices. Without a strong foundation of knowledge and Big Data, businesses cannot grow or be enhanced in a competitive environment.
Keywords: big data, knowledge management, data driven, knowledge creation
Procedia PDF Downloads 116
24974 Survey on Data Security Issues Through Cloud Computing Amongst Sme’s in Nairobi County, Kenya
Authors: Masese Chuma Benard, Martin Onsiro Ronald
Abstract:
Businesses have been using cloud computing more frequently recently because they wish to take advantage of its benefits. However, employing cloud computing also introduces new security concerns, particularly with regard to data security, the potential risks and weaknesses that attackers could exploit, and the various tactics and strategies that could lessen these risks. This study examines data security issues in cloud computing among SMEs in Nairobi County, Kenya. The study used a sample size of 48 and a mixed-methods research approach. The findings show that the data owner has no control over the cloud merchant's data management procedures, so there is no way to ensure that data is handled legally; this implies a loss of control over data stored in the cloud. Data and information stored in the cloud may also face a range of availability issues due to internet outages, which can represent a significant risk for data kept in shared clouds. Integrity, availability, and secrecy are all affected.
Keywords: data security, cloud computing, information, information security, small and medium-sized firms (SMEs)
Procedia PDF Downloads 85
24973 Cloud Design for Storing Large Amount of Data
Authors: M. Strémy, P. Závacký, P. Cuninka, M. Juhás
Abstract:
The main goal of this paper is to introduce our design of a private cloud for storing large amounts of data, especially pictures, and to provide a good technological backend for data analysis based on parallel processing and business intelligence. We tested hypervisors, cloud management tools, storage for all the data, and Hadoop for analysis of unstructured data. Providing high availability, virtual network management, logical separation of projects, and rapid deployment of physical servers into our environment was also needed.
Keywords: cloud, glusterfs, hadoop, juju, kvm, maas, openstack, virtualization
Procedia PDF Downloads 353