Search results for: data comparison
27623 Building a Scalable Telemetry Based Multiclass Predictive Maintenance Model in R
Authors: Jaya Mathew
Abstract:
Many organizations face the challenge of analyzing and building machine learning models on their sensitive telemetry data. In this paper, we discuss how users can leverage the power of R without having to move their big data around, as well as a cloud-based solution for organizations willing to host their data in the cloud. By using ScaleR technology to benefit from parallelization and remote computing, or R Services on premise or in the cloud, users can apply R at scale without moving their data.
Keywords: predictive maintenance, machine learning, big data, cloud based, on premise solution, R
Procedia PDF Downloads 379
27622 Trusting the Big Data Analytics Process from the Perspective of Different Stakeholders
Authors: Sven Gehrke, Johannes Ruhland
Abstract:
Data is the oil of our time; without it, progress would come to a halt [1]. On the other hand, mistrust of data mining is increasing [2]. The paper at hand shows different aspects of the concept of trust and describes the information asymmetry among the typical stakeholders of a data mining project using the CRISP-DM phase model. Based on the identified influencing factors in relation to trust, problematic aspects of the current approach are verified through interviews with the stakeholders. The results of the interviews confirm the theoretically identified weak points of the phase model with regard to trust and point to potential research areas.
Keywords: trust, data mining, CRISP-DM, stakeholder management
Procedia PDF Downloads 94
27621 Wireless Transmission of Big Data Using Novel Secure Algorithm
Authors: K. Thiagarajan, K. Saranya, A. Veeraiah, B. Sudha
Abstract:
This paper presents a novel algorithm for secure, reliable and flexible transmission of big data in two-hop wireless networks using a cooperative jamming scheme. Two-hop wireless networks consist of source, relay and destination nodes. Big data has to be transmitted from source to relay and from relay to destination, with security deployed at the physical layer. The cooperative jamming scheme makes the transmission of big data more secure by protecting it from eavesdroppers and malicious nodes of unknown location. The novel algorithm, which ensures secure and energy-balanced transmission of big data, includes selection of the data transmitting region, segmenting the selected region, determining the probability ratio for each node (capture, non-capture and eavesdropper node) in every segment, and evaluating the probability using a binary-based evaluation. If the transmission is secure, it resumes with the two-hop transmission of big data; otherwise, the attackers are countered by the cooperative jamming scheme and the data is then transmitted over the two hops.
Keywords: big data, two-hop transmission, physical layer wireless security, cooperative jamming, energy balance
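The decision flow listed in the abstract (segment the transmitting region, compute per-node probability ratios, evaluate a binary secure/not-secure condition, then either transmit over two hops or invoke cooperative jamming) can be sketched in Python as follows. This is an illustrative sketch only; the node lists, the 0.2 eavesdropper threshold, and all function names are assumptions, not taken from the paper.

```python
import random

# Hypothetical node types present in a segment of the transmitting region
NODE_TYPES = ("capture", "non_capture", "eavesdropper")

def probability_ratios(segment):
    """Return the fraction of each node type in one segment."""
    total = len(segment) or 1
    return {t: sum(1 for n in segment if n == t) / total for t in NODE_TYPES}

def is_secure(segment, threshold=0.2):
    """Binary evaluation: a segment is treated as secure if the eavesdropper
    ratio stays below an assumed threshold."""
    return probability_ratios(segment)["eavesdropper"] < threshold

def transmit(region_segments):
    """Decide, per segment, whether to relay the data or trigger cooperative jamming."""
    for i, segment in enumerate(region_segments):
        if is_secure(segment):
            print(f"segment {i}: secure -> two-hop transmission (source -> relay -> destination)")
        else:
            print(f"segment {i}: not secure -> cooperative jamming before transmission")

# Toy example: three segments with randomly assigned node types
random.seed(1)
segments = [[random.choice(NODE_TYPES) for _ in range(20)] for _ in range(3)]
transmit(segments)
```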
Procedia PDF Downloads 490
27620 Comparison of Sign Language Skill and Academic Achievement of Deaf Students in Special and Inclusive Primary Schools of South Nation Nationalities People Region, Ethiopia
Authors: Tesfaye Basha
Abstract:
The purpose of this study was to examine the sign language skills and academic achievement of deaf students in special and inclusive primary schools of Southern Ethiopia. The study used a mixed method to collect varied data, including signed Amharic and English skill tasks, a questionnaire, 8th-grade Primary School Leaving Certificate Examination results, classroom observation, and interviews. For the quantitative part, (n=70) deaf students took part, and 16 participants were involved in the qualitative data collection. The findings revealed that limited sign language is a problem for both signing and academic achievement, which indicates that schools are not linguistically rich enough to enable sign language development for deaf students. Moreover, the findings revealed that the contribution of Total Communication to the growth of natural sign language for deaf students was unsatisfactory. The results also indicated that deaf students in special schools showed better sign language skills and academic achievement than those in inclusive schools. In addition, the high sign language skill group showed higher academic achievement than the low skill group, which indicates that sign language skill is strongly associated with academic achievement. Furthermore, to qualify deaf students in sign language and academics, teacher education institutions must produce teachers competent in teaching deaf students with sign language and literacy skills.
Keywords: academic achievement, inclusive school, sign language, signed Amharic, signed English, special school, total communication
Procedia PDF Downloads 134
27619 Role of Support, Experience and Education in Livelihood Resilience
Authors: Madhuri, H. R. Tewari, P. K. Bhowmick
Abstract:
The study attempts to find out the role of community and government support, flood experience, flood education, and education of male-headed households in their livelihood resilience. The study is based on a randomly drawn sample of 472 households from the river basins of the Ganga and Kosi in the district of Bhagalpur, Bihar. Structural equation modeling (SEM) and analysis of variance (ANOVA) are used to analyze the data. The findings reveal that community support plays a more significant role than government support, owing to its stand-by position in the rescue and livelihood resilience of the affected households, whereas government support arrives late and in far smaller quantity than what is required. However, government support remains vital because of its control over resources, which are essential in the rescue and rehabilitation of the affected households. The study also unravels the strategic value of households' indigenous knowledge and flood experience in livelihood resilience.
Keywords: flood education, flood experience, livelihood resilience, community support, government support
Procedia PDF Downloads 507
27618 Visibility of the Borders of the Mandibular Canal: A Comparative in Vitro Study Using Digital Panoramic Radiography, Reformatted Panoramic Radiography and Cross Sectional Cone Beam Computed Tomography
Authors: Keerthilatha Pai, Sakshi Kamra
Abstract:
Objectives: Determining the position of the mandibular canal prior to implant placement and surgeries of the posterior mandible is important to avoid nerve injury. The visibility of the mandibular canal varies according to the imaging modality. Although panoramic radiography is the most common modality, it is slowly being replaced by cone beam computed tomography. This study was conducted to determine and compare the visibility of the superior and inferior borders of the mandibular canal in digital panoramic radiographs, reformatted panoramic radiographs and cross-sectional images of cone beam computed tomography. Study design: Digital panoramic, reformatted panoramic and cross-sectional CBCT images of 25 human mandibles were evaluated for the visibility of the superior and inferior borders of the mandibular canal according to a 5-point scoring criterion. The canal was also classified as completely visible, partially visible or not visible. The mean scores and visibility percentages of all the imaging modalities were determined and compared, and the interobserver and intraobserver agreement in the visualization of the superior and inferior borders of the mandibular canal was determined. Results: The superior and inferior borders of the mandibular canal were completely visible in 47% of the samples in digital panoramic, 63% in reformatted panoramic and 75.6% in CBCT cross-sectional images. The mandibular canal was invisible in 24% of samples in digital panoramic, 19% in reformatted panoramic and 2% in cross-sectional CBCT images. Maximum visibility was seen in Zone 5 and least visibility in Zone 1. On comparison of all the imaging modalities, CBCT cross-sectional images showed better visibility of both the superior and inferior borders in Zones 2, 3, 4 and 6, and the difference was statistically significant. Conclusion: CBCT cross-sectional images were much superior in the visualization of the mandibular canal in comparison to reformatted and digital panoramic radiographs. The inferior border was better visualized than the superior border in digital panoramic imaging. The mandibular canal was most visible in the posterior one-third region of the mandible, and visibility decreased towards the mental foramen.
Keywords: cone beam computed tomography, mandibular canal, reformatted panoramic radiograph, visualization
Procedia PDF Downloads 127
27617 A Comparison between Modelled and Actual Thermal Performance of Load Bearing Rammed Earth Walls in Egypt
Authors: H. Hafez, A. Mekkawy, R. Rostom
Abstract:
Around 10% of the world’s CO₂ emissions can be attributed to the operational energy of buildings; that is why more research is directed towards rammed earth walls, which are claimed to have enhanced thermal properties compared to conventional building materials. The objective of this paper is to outline how the thermal performance of rammed earth walls compares to a conventional reinforced concrete skeleton with red brick in-fill walls. To this end, the indoor temperature and relative humidity of a classroom built with rammed earth walls and a vaulted red brick roof in the area of Behbeit, Giza, Egypt were measured hourly over 6 months using smart sensors. These parameters were also compared against the values obtained from a 'DesignBuilder v5' model to verify the model assumptions. The thermal insulation of the rammed earth walls was found to be 30% better than that of the red brick infill, and the recorded data were almost 90% similar to the modelled values.
Keywords: rammed earth, thermal insulation, indoor air quality, design builder
Procedia PDF Downloads 146
27616 Numerical Simulation of the Air Pollutants Dispersion Emitted by CPH Using ANSYS CFX
Authors: Oliver Mărunţălu, Gheorghe Lăzăroiu, Elena Elisabeta Manea, Dana Andreya Bondrea, Lăcrămioara Diana Robescu
Abstract:
This paper presents the results obtained by numerical simulation of the atmospheric dispersion of pollutants coming from the evacuation of combustion gases of an electric thermal power plant, using the ANSYS CFX-CFD software. The model uses the Navier-Stokes equations to simulate the dispersion of pollutants in the atmosphere. Important factors considered in the simulation were the atmospheric conditions (pressure, temperature, wind speed, wind direction), the exhaust velocity of the combustion gases, the chimney height and the obstacles (buildings). Using air quality monitoring stations, we measured the concentrations of the main pollutants (SO2, NOx and PM). The pollutants were monitored over a period of 3 months, after which we calculated the average concentrations used by the software: 8.915 μg/m3 (NOx), 9.587 μg/m3 (SO2) and 42 μg/m3 (PM). A comparison of test data with simulation results demonstrated that CFX was able to describe the dispersion of the pollutants as well as their concentration in the atmosphere.
Keywords: air pollutants, computational fluid dynamics, dispersion, simulation
Procedia PDF Downloads 457
27615 One Step Further: Pull-Process-Push Data Processing
Authors: Romeo Botes, Imelda Smit
Abstract:
In today’s age of technology, vast amounts of data need to be processed in real time to keep users satisfied. This data comes from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices, which use different protocols, including TCP, UDP and HTTP/S, for data communication to web servers and eventually to users. The data obtained from these devices may provide valuable information to users but is mostly in an unreadable format, which needs to be processed to provide information and business intelligence. This data is not always current; it is mostly historical data, and it is not subject to the consistency and redundancy measures that most other data usually is. Most important to users is that the data be pre-processed into a readable format when it is entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers use various techniques in such programs, but sometimes neglect the effect some of these techniques may have on database performance. One technique generally used is to pull data from the database server, process it and push it back to the database server in one single step. Since processing the data usually takes some time, this keeps the database busy and locked for the duration of the processing, which decreases the overall performance of the database server and therefore of the system. This paper follows on a paper discussing the performance increase that may be achieved by utilizing array lists along with a pull-process-push data processing technique split into three steps. The purpose of this paper is to expand the number of clients when comparing the two techniques, to establish the impact this may have on performance in terms of CPU, storage and processing time.
Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list
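As a rough illustration of the three-step pull-process-push idea described above, the following Python sketch pulls rows into an in-memory list, processes them outside any database transaction, and pushes the results back in one short write. The sqlite3 schema, the placeholder decoding step, and the function names are illustrative assumptions, not the authors' implementation.

```python
import sqlite3

def pull(conn):
    # Step 1: read the raw rows into an in-memory list, then release the database
    with conn:
        return list(conn.execute("SELECT id, raw_payload FROM telemetry WHERE processed = 0"))

def process(rows):
    # Step 2: decode/transform entirely in memory; the database stays free for other clients
    return [(row_id, raw.upper()) for row_id, raw in rows]   # placeholder "decoding"

def push(conn, decoded):
    # Step 3: write the readable results back in one short transaction
    with conn:
        conn.executemany("UPDATE telemetry SET decoded = ?, processed = 1 WHERE id = ?",
                         [(text, row_id) for row_id, text in decoded])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE telemetry (id INTEGER PRIMARY KEY, raw_payload TEXT, "
             "decoded TEXT, processed INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO telemetry (raw_payload) VALUES (?)",
                 [("gps:12.3,45.6",), ("gprs:ok",)])
push(conn, process(pull(conn)))
print(conn.execute("SELECT * FROM telemetry").fetchall())
```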
Procedia PDF Downloads 244
27614 Comparison of Different Activators Impact on the Alkali-Activated Aluminium-Silicate Composites
Authors: Laura Dembovska, Ina Pundiene, Diana Bajare
Abstract:
Alkali-activated aluminium-silicate composites (AASC) can be used in the production of innovative materials with a wide range of properties and applications. AASC are associated with low CO₂ emissions; in the production process, it is possible to use industrial by-products and waste, thereby minimizing the use of non-renewable natural resources. This study deals with the preparation of heat-resistant porous AASC based on chamotte for high-temperature applications up to 1200°C. Different fillers, aluminium scrap recycling waste as a pore-forming agent, and alkali activation with 6M sodium hydroxide (NaOH) and potassium hydroxide (KOH) solutions were used. Sodium hydroxide is widely used for the synthesis of AASC compared to potassium hydroxide, but a comparison of different activators for geopolymer synthesis is not well established. Changes in the chemical composition of AASC during heating were identified and quantitatively analyzed using DTA, dimensional changes during heating were determined using HTOM, the pore microstructure was examined by SEM, and the mineralogical composition of AASC was determined by XRD. Lightweight porous AASC activated with NaOH were obtained with densities ranging from 600 to 880 kg/m³ and compressive strengths from 0.8 to 2.7 MPa, while for AASC activated with KOH, the density ranged from 750 to 850 kg/m³ and the compressive strength from 0.7 to 2.1 MPa.
Keywords: alkali activation, alkali activated materials, elevated temperature application, heat resistance
Procedia PDF Downloads 178
27613 Demand for Care in Primary Health Care in the Governorate of Ariana: Results of a Survey in Ariana Primary Health Care and Comparison with the Last 30 Years
Authors: Chelly Souhir, Harizi Chahida, Hachaichi Aicha, Aissaoui Sihem, Chahed Mohamed Kouni
Abstract:
Introduction: In Tunisia, few studies have attempted to describe the demand for primary care in a standardized and systematic way. The purpose of this study is to describe the main reasons for the demand for care in primary health care, through a survey of the PHC centres of the Ariana Governorate, and to identify their evolutionary trend compared to the last 30 years, as reported by studies of the same type. Materials and methods: This is a cross-sectional descriptive study of consultants in the primary care facilities of the governorate of Ariana and their use of care, recorded during 2 days of the same week in May 2016 in each of these PHC centres. The same data collection sheet was used in all centres. The coding of the information was done according to the International Classification of Primary Care (ICPC). The data were entered and analyzed with the EPI Info 7 software. Results: Our study found that the most common ICPC chapters are respiratory (42%) and digestive (13.2%). In 1996, they were respiratory (43.5%) and circulatory (7.8%); in 2000, respiratory (39.6%) and circulatory (10.9%); and in 2002, respiratory (43%) and digestive (10.1%) motives were the most frequent. According to the ICPC, the pathologies in our study were acute angina (19%) and acute bronchitis and bronchiolitis (8%). In 1996, they were tonsillitis (21.6%) and acute bronchitis (7.2%); for Ben Abdelaziz in 2000, tonsillitis (14.5%) followed by acute bronchitis (8.3%); and in 2002, acute angina (15.7%) and acute bronchitis and bronchiolitis (11.2%) were the most common. Conclusion: Acute angina and tonsillitis are the most common in all studies conducted in Tunisia.
Keywords: acute angina, classification of primary care, primary health care, tonsillitis, Tunisia
Procedia PDF Downloads 531
27612 Extreme Temperature Forecast in Mbonge, Cameroon Through Return Level Analysis of the Generalized Extreme Value (GEV) Distribution
Authors: Nkongho Ayuketang Arreyndip, Ebobenow Joseph
Abstract:
In this paper, temperature extremes are forecast by employing the block maxima method of the generalized extreme value (GEV) distribution to analyse temperature data from the Cameroon Development Corporation (CDC). By considering two sets of data (raw data and simulated data) and two models (stationary and non-stationary) of the GEV distribution, a return level analysis is carried out. It was found that in the stationary model, the return levels are constant over time with the raw data, while with the simulated data the return levels show an increasing trend with an upper bound. In the non-stationary model, the return levels of both the raw data and the simulated data show an increasing trend with an upper bound. This clearly shows that although temperatures in the tropics show a sign of increase in the future, there is a maximum temperature at which there is no exceedance. The results of this paper are very vital in agricultural and environmental research.
Keywords: forecasting, generalized extreme value (GEV), meteorology, return level
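A minimal sketch of the block maxima workflow described above is given below, using scipy's GEV implementation on synthetic stand-in data (the CDC temperature series is not reproduced here); the T-year return level is read off as the (1 - 1/T) quantile of the fitted stationary distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stand-in for annual (block) maxima of temperature, in degrees Celsius
annual_maxima = rng.gumbel(loc=33.0, scale=1.5, size=40)

# Fit a stationary GEV; note scipy's genextreme uses a shape parameter c whose
# sign convention is opposite to the xi used in many extreme value texts
c, loc, scale = stats.genextreme.fit(annual_maxima)

# The T-year return level is the (1 - 1/T) quantile of the fitted distribution
for T in (10, 50, 100):
    level = stats.genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
    print(f"{T}-year return level: {level:.2f} degrees C")
```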
Procedia PDF Downloads 478
27611 Testing of Gas Turbine KingTech with Biodiesel
Authors: Nicolas Lipchak, Franco Aiducic, Santiago Baieli
Abstract:
The present work is part of the research project 'Testing of gas turbine KingTech with biodiesel', carried out by the Department of Industrial Engineering of the National Technological University at Buenos Aires. The research group aims to experiment with biodiesel in a KingTech K-100 gas turbine to verify its correct operation. Tests have been carried out to obtain real data for the parameters of the work cycle, to be used later as a basis for comparison and performance analysis. In the first instance, the study consisted in testing the gas turbine with a fuel mixture of 50% biodiesel and 50% diesel, and the parameters arising from these measurements were compared with the parameters of the gas turbine running on 100% diesel. In the second instance, the measured parameters were used to calculate the power generated and the thermal efficiency of the KingTech K-100 turbine. The turbine was also inspected to verify the condition of the internals after the use of biofuel. The conclusions empirically demonstrate that it is feasible to use biodiesel in this type of gas turbine without a loss of power or degradation of the internals.
Keywords: biodiesel, efficiency, KingTech, turbine
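The thermal efficiency referred to above is conventionally the ratio of useful output power to the fuel energy input rate. A small sketch of that calculation follows; the numbers are purely illustrative assumptions, not measurements from the KingTech K-100 tests.

```python
def thermal_efficiency(power_out_kw, fuel_flow_kg_s, lower_heating_value_kj_kg):
    """Standard definition: useful power divided by the fuel energy input rate."""
    return power_out_kw / (fuel_flow_kg_s * lower_heating_value_kj_kg)

# Illustrative numbers only (a small turbine burning biodiesel, LHV roughly 37.5 MJ/kg)
eta = thermal_efficiency(power_out_kw=8.5,
                         fuel_flow_kg_s=0.0035,
                         lower_heating_value_kj_kg=37500)
print(f"thermal efficiency: {eta:.1%}")
```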
Procedia PDF Downloads 245
27610 Impact of Stack Caches: Locality Awareness and Cost Effectiveness
Authors: Abdulrahman K. Alshegaifi, Chun-Hsi Huang
Abstract:
Treating data based on its location in memory has received much attention in recent years, because stack data and non-stack data have different properties that matter for cache utilization and may interfere with each other’s locality in the data cache. One of the important properties of stack data is its high spatial and temporal locality. In this work, we simulate a non-unified cache design that splits the data cache into a stack cache and a non-stack cache, so that stack data and non-stack data are kept separate. We observe that the overall hit rate of the non-unified cache design is sensitive to the size of the non-stack cache. We then investigate the appropriate size and associativity for the stack cache to achieve a high hit ratio, especially when over 99% of accesses are directed to the stack cache. The results show that, on average, a stack cache hit rate of more than 99% is achieved using 2 KB of capacity and 1-way associativity. Further, we analyze the improvement in hit rate when a small, fixed-size stack cache is added at level 1 to a unified cache architecture. The results show that the overall hit rate of the unified cache design with an added 1 KB stack cache improves by approximately 3.9% on average for the Rijndael benchmark. The stack cache is simulated using the SimpleScalar toolset.
Keywords: hit rate, locality of program, stack cache, stack data
Procedia PDF Downloads 303
27609 Modelling of Moisture Loss and Oil Uptake during Deep-Fat Frying of Plantain
Authors: James A. Adeyanju, John O. Olajide, Akinbode A. Adedeji
Abstract:
A predictive mathematical model based on the fundamental principles of mass transfer was developed to simulate the moisture content and oil content during the deep-fat frying (DFF) of dodo. The resulting governing equation, a partial differential equation describing the rate of moisture loss and oil uptake, was solved numerically using an explicit finite difference technique (FDT). Computer codes were written in the MATLAB environment to implement the FDT at different frying conditions and to simulate moisture loss and oil uptake during DFF of dodo. Plantain samples were sliced to 5 mm thickness and fried at different frying oil temperatures (150, 160 and 170 °C) for periods varying from 2 to 4 min. The comparison between the predicted results and the experimental data used for validation showed reasonable agreement: the correlation coefficients between the predicted and experimental values of the moisture and oil transfer models ranged from 0.912 to 0.947 and from 0.895 to 0.957, respectively. The predicted results could be further used for the design, control and optimization of the deep-fat frying process.
Keywords: frying, moisture loss, modelling, oil uptake
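A minimal sketch of an explicit finite-difference solution is shown below, assuming the moisture-loss equation takes the common Fickian form dM/dt = D d²M/dx² for a thin slice; the diffusivity, boundary values and initial moisture are illustrative assumptions, not the paper's fitted parameters, and the coupled oil-uptake equation is omitted.

```python
import numpy as np

# Explicit finite-difference (FTCS) sketch for 1-D Fickian moisture diffusion
D = 5e-9          # assumed effective moisture diffusivity, m^2/s
L = 0.005         # slice thickness, 5 mm
nx = 21
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D          # respects the explicit stability limit dt <= dx^2 / (2D)

M = np.full(nx, 2.0)          # assumed initial moisture content, dry basis
M_surface = 0.1               # assumed equilibrium moisture at the oil/crust boundary

t, t_end = 0.0, 180.0         # simulate 3 min of frying
while t < t_end:
    M[0] = M[-1] = M_surface                                  # surface boundary condition
    M[1:-1] += D * dt / dx**2 * (M[2:] - 2 * M[1:-1] + M[:-2])  # FTCS update of interior nodes
    t += dt

print(f"average moisture after {t_end:.0f} s: {M.mean():.3f} (dry basis)")
```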
Procedia PDF Downloads 448
27608 Discrete Crack Modeling of Side Face FRP-Strengthened Concrete Beam
Authors: Shahriar Shahbazpanahi, Mohammad Hemen Jannaty, Alaleh Kamgar
Abstract:
Shear strengthening of concrete structures can be carried out with external fibre reinforced polymer (FRP). In the present investigation, a new fracture mechanics model is developed for a concrete beam strengthened on its side faces by external FRP. The discrete crack is simulated by a spring element with softening behavior ahead of the crack tip to model the cohesive zone in the concrete. A truss element is used, parallel to the spring element, to simulate the energy dissipation rate of the FRP. The strain energy release rate is calculated directly using a virtual crack closure technique, and the crack propagation criterion is then presented. The results are found to be acceptable when compared to previous experimental results and ABAQUS software data. It is observed that the length of the fracture process zone (FPZ) increases when FRP is applied to the side faces, at the same load, in comparison with the control beam.
Keywords: FPZ, fracture, FRP, shear
Procedia PDF Downloads 534
27607 Autonomic Threat Avoidance and Self-Healing in Database Management System
Authors: Wajahat Munir, Muhammad Haseeb, Adeel Anjum, Basit Raza, Ahmad Kamran Malik
Abstract:
Databases are key components of software systems. Due to the exponential growth of data, it is a major concern that data remain accurate and available. The data in databases are vulnerable to internal and external threats, especially when they contain sensitive information, as in medical or military applications. Whenever data are changed with malicious intent, the results of data analysis may lead to disastrous decisions. Autonomic self-healing in computer systems is modeled on the autonomic system of the human body. In order to guarantee the accuracy and availability of data, we propose a technique that, on a priority basis, tries to prevent any malicious transaction from executing and, in case a malicious transaction does affect the system, heals the system in an isolated mode in such a way that the availability of the system is not compromised. Using this autonomic system, the management cost and time of DBAs can be minimized. In the end, we test our model and present the findings.
Keywords: autonomic computing, self-healing, threat avoidance, security
Procedia PDF Downloads 504
27606 Information Extraction Based on Search Engine Results
Authors: Mohammed R. Elkobaisi, Abdelsalam Maatuk
Abstract:
Search engines are large-scale information retrieval tools for the Web that are currently freely available to all. This paper explains how to convert the raw results returned by search engines into useful information, which represents a new method for data gathering compared with traditional methods. Submitting queries for a large number of keywords one at a time takes considerable time and effort, so we developed a user interface program that searches automatically, taking multiple keywords at the same time, and is left to collect the wanted data automatically. The collected raw data are then processed using mathematical and statistical methods to eliminate unwanted data and convert them into usable information.
Keywords: search engines, information extraction, agent system
Procedia PDF Downloads 430
27605 Thermal Properties and Water Vapor Permeability for Cellulose-Based Materials
Authors: Stanislavs Gendelis, Maris Sinka, Andris Jakovics
Abstract:
Insulation materials made from natural sources have become more popular with the ecologisation of buildings, which implies wide use of such renewable materials. These natural materials replace synthetic products that consume a large quantity of energy. The most common and cheapest natural materials in Latvia are cellulose-based (wood and agricultural plants). The ecological aspects of such materials are well known, but experimental data about their physical properties remain lacking. In this study, six different samples of wood wool panels and a mixture of hemp shives and lime (hempcrete) are analysed. Thermal conductivity and heat capacity measurements were carried out for the wood wool and cement panels using a calibrated hot plate device, and water vapor permeability was tested for the hempcrete material using the gravimetric dry cup method. The studied wood wool panels are an eco-friendly and harmless material, widely used in the interior design of public and residential buildings where noise absorption and sound insulation are of importance; they are also suitable for high-humidity facilities (e.g., swimming pools). The panels differed in the width of the wood wool used, which is linked to their density. The measured thermal conductivities span a wide range, showing a worsening of properties with increasing wool width (0.066 W/(m·K) for the least dense, 0.091 W/(m·K) for the densest). Comparison with mineral insulation materials shows that the thermal conductivity of the panels is 2-3 times higher and comparable to plywood and fibreboard. The measured heat capacity lay in a narrower range; here, the dependence on the wool width was not as strong, because heat capacity relates to mass, not volume, and the resulting value is a combination of two main components. A comparison of the results for the different panels allows selecting the most suitable sample for a specific application, because the dependencies of the thermal insulation and heat capacity properties on the wool width are not the same. Hempcrete is a much denser material than conventional thermal insulating materials; therefore, its use helps to reinforce the structural capacity of the constructional framework while remaining lightweight. By altering the proportions of the ingredients, hempcrete can be produced as a structural, thermal, or moisture-absorbent component. Water absorption and water vapor permeability are the most important properties of these materials; information about absorption can be found in the literature, but there are no data about the water vapor transmission properties. Water vapor permeability was therefore tested for a sample of locally made hempcrete using different air humidity values to evaluate a possible difference. The results show only a slight influence of air humidity on the water vapor permeability. The measured absolute 'sd value' is similar to mineral wool and wood fibreboard, meaning that, due to very low resistance, water vapor passes easily through the material, while the other properties of hempcrete, structural and thermal, are entirely different. As a result, the experimentally based knowledge of the thermal and water vapor transmission properties of cellulose-based materials was significantly improved.
Keywords: heat capacity, hemp concrete, thermal conductivity, water vapor transmission, wood wool
Procedia PDF Downloads 221
27604 Implementation and Performance Analysis of Data Encryption Standard and RSA Algorithm with Image Steganography and Audio Steganography
Authors: S. C. Sharma, Ankit Gambhir, Rajeev Arya
Abstract:
In today’s era, data security is an important concern and one of the most demanding issues, because it is essential for people using online banking, e-shopping, reservations, etc. The two major techniques used for secure communication are cryptography and steganography. Cryptographic algorithms scramble the data so that an intruder will not be able to retrieve it, whereas steganography hides the data in some cover file so that the presence of communication itself is concealed. This paper presents the implementation of the Rivest-Shamir-Adleman (RSA) algorithm with image and audio steganography and of the Data Encryption Standard (DES) algorithm with image and audio steganography. The coding for both algorithms has been done using MATLAB, and it is observed that the combined techniques performed better than the individual techniques. The risk of unauthorized access is alleviated to a certain extent by using these techniques, which could be applied in banks, intelligence agencies such as RAW, etc., where highly confidential data are transferred. Finally, a comparison of the two techniques is also given in tabular form.
Keywords: audio steganography, data security, DES, image steganography, intruder, RSA, steganography
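For orientation, textbook RSA reduces to modular exponentiation with a public pair (e, n) and a private exponent d. The Python sketch below uses the classic tiny-prime example (p = 61, q = 53) for illustration only; it is not the paper's MATLAB implementation, and real systems use much larger keys with padding.

```python
# Textbook RSA with tiny primes, purely didactic
p, q = 61, 53
n = p * q                      # modulus: 3233
phi = (p - 1) * (q - 1)        # Euler totient: 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e (2753)

message = 65                   # a plaintext byte, e.g. the character 'A'
cipher = pow(message, e, n)    # encryption: m^e mod n
plain = pow(cipher, d, n)      # decryption: c^d mod n
print(cipher, plain)           # 2790 65

# An LSB image-steganography step would then hide the cipher bits in the least
# significant bit of each pixel of a cover image (not shown here).
```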
Procedia PDF Downloads 290
27603 Data Monetisation by E-commerce Companies: A Need for a Regulatory Framework in India
Authors: Anushtha Saxena
Abstract:
This paper examines the process of data monetisation by e-commerce companies operating in India. Data monetisation means collecting, storing, and analysing consumers’ data and then further using the data that is generated for profit and revenue. It enables e-commerce companies to obtain better business opportunities, innovative products and services, and a competitive edge over others, and to generate millions in revenue. This paper analyses the issues and challenges arising from the process of data monetisation; some of the issues highlighted pertain to the right to privacy and the protection of e-commerce consumers’ data. At the same time, data monetisation cannot be prohibited, but it can be regulated and monitored by stringent laws and regulations. The right to privacy is a fundamental right guaranteed to the citizens of India through Article 21 of the Constitution of India, and the Supreme Court of India recognized it as such in the landmark judgment of Justice K.S. Puttaswamy (Retd) and Another v. Union of India. This paper highlights the legal issue of how e-commerce businesses violate individuals’ right to privacy by using the data collected and stored by them for economic gain, and the issue of data protection. The researcher has mainly focused on e-commerce companies, such as online shopping websites, to analyse the legal issue of data monetisation. In the age of the Internet of Things, people have shifted to online shopping because it is convenient, easy, flexible, comfortable, and time-saving. But at the same time, e-commerce companies store the data of their consumers and use it by selling it to third parties or by generating more data from the data already held. This violates individuals’ right to privacy, because consumers do not know what happens to the data they provide online, and many times data is collected without the consent of individuals. The data, whether structured or unstructured, is used by analytics for monetisation. Indian legislation such as the Information Technology Act, 2000 does not effectively protect e-consumers with respect to their data and how it is used by e-commerce businesses to monetise and generate revenue. The paper also examines the draft Data Protection Bill, 2021, pending in the Parliament of India, and how this Bill can make a huge impact on data monetisation, as well as the European Union General Data Protection Regulation and how that legislation can be helpful in the Indian scenario concerning e-commerce businesses and data monetisation.
Keywords: data monetization, e-commerce companies, regulatory framework, GDPR
Procedia PDF Downloads 120
27602 Assessment of Planet Image for Land Cover Mapping Using Soft and Hard Classifiers
Authors: Lamyaa Gamal El-Deen Taha, Ashraf Sharawi
Abstract:
Planet imagery is a new data source from Planet Labs. This research is concerned with the assessment of Planet imagery for land cover mapping. Two pixel-based classifiers and one subpixel-based classifier were compared. Firstly, rectification of the Planet image was performed. Secondly, minimum distance, maximum likelihood and neural network classifications of the Planet image were compared. Thirdly, the overall classification accuracy and the kappa coefficient were calculated. Results indicate that neural network classification performs best, followed by the maximum likelihood classifier and then minimum distance classification, for land cover mapping.
Keywords: planet image, land cover mapping, rectification, neural network classification, multilayer perceptron, soft classifiers, hard classifiers
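The overall accuracy and kappa coefficient mentioned above are both computed from the classification confusion matrix. A short Python sketch with an assumed 3-class matrix (not the paper's data) is given below.

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                   # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return po, (po - pe) / (1 - pe)

# Illustrative 3-class matrix (e.g. water, vegetation, built-up)
cm = [[50, 3, 2],
      [4, 45, 6],
      [1, 5, 44]]
oa, kappa = overall_accuracy_and_kappa(cm)
print(f"overall accuracy = {oa:.3f}, kappa = {kappa:.3f}")
```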
Procedia PDF Downloads 187
27601 Using Machine Learning to Build a Real-Time COVID-19 Mask Safety Monitor
Authors: Yash Jain
Abstract:
The US Centers for Disease Control and Prevention has recommended wearing masks to slow the spread of the virus. This research uses a video feed from a camera to conduct real-time classification of whether a person is wearing a mask correctly, wearing a mask incorrectly, or not wearing a mask at all. Utilizing two distinct datasets from the open-source website Kaggle, a mask detection network was trained. The first dataset, titled 'Face Mask Detection', was used to train the two-stage model, and the second, titled 'Face Mask Dataset (YOLO Format)', provided the data in the format needed to train the TinyYoloV3 model. Based on the data from Kaggle, two machine learning models were implemented and trained: a TinyYoloV3 real-time model and a two-stage neural network classifier. The two-stage neural network classifier first identifies distinct faces within the image and then classifies the state of the mask on each face as worn correctly, worn incorrectly, or no mask at all. TinyYoloV3 was used for the live feed as well as for comparison against the two-stage classifier, and was trained using the darknet neural network framework. The two-stage classifier attained a mean average precision (mAP) of 80%, while the model trained using TinyYoloV3 real-time detection had a mean average precision of 59%. Overall, both models were able to correctly classify the no-mask, mask, and incorrectly worn mask scenarios.
Keywords: datasets, classifier, mask-detection, real-time, TinyYoloV3, two-stage neural network classifier
Procedia PDF Downloads 163
27600 Computational Fluid Dynamics (CFD) Simulations of Air Pollutant Dispersion: Validation of Fire Dynamics Simulator Against the CUTE Experiments of the COST ES1006 Action
Authors: Virginie Hergault, Siham Chebbah, Bertrand Frere
Abstract:
Following in-house objectives, the Central Laboratory of the Paris Police Prefecture conducted a general review of the models and Computational Fluid Dynamics (CFD) codes used to simulate pollutant dispersion in the atmosphere. Starting from that review and considering the main features of Large Eddy Simulation, the Central Laboratory of the Paris Police Prefecture (LCPP) postulates that the Fire Dynamics Simulator (FDS) model, from the National Institute of Standards and Technology (NIST), should be well suited for air pollutant dispersion modeling. This paper focuses on the implementation and evaluation of FDS in the frame of the European COST ES1006 Action, which aimed at quantifying the performance of modeling approaches. In this paper, the CUTE dataset, acquired in the city of Hamburg, and its mock-up have been used. We have compared FDS results with wind tunnel measurements from the CUTE trials on the one hand, and with the results of the models involved in the COST Action on the other. The most time-consuming part of creating input data for the simulations is the transfer of obstacle geometry information to the format required by FDS; we have therefore developed Python codes to automatically convert building and topographic data into the FDS input file. In order to evaluate the FDS predictions against observations, statistical performance measures have been used. These metrics include the fractional bias (FB), the normalized mean square error (NMSE) and the fraction of predictions within a factor of two of observations (FAC2). Like the CFD models tested in the COST Action, the FDS results demonstrate good agreement with the measured concentrations. Furthermore, the metrics assessment indicates that FB and NMSE meet the acceptable tolerances.
Keywords: numerical simulations, atmospheric dispersion, cost ES1006 action, CFD model, cute experiments, wind tunnel data, numerical results
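The three validation metrics have standard definitions: FB = 2(mean(obs) - mean(pred))/(mean(obs) + mean(pred)), NMSE = mean((obs - pred)^2)/(mean(obs)*mean(pred)), and FAC2 is the fraction of pairs with 0.5 <= pred/obs <= 2. A small Python sketch is shown below; the concentration values are illustrative, not the CUTE wind-tunnel data.

```python
import numpy as np

def fb(obs, pred):
    """Fractional bias."""
    return 2 * (np.mean(obs) - np.mean(pred)) / (np.mean(obs) + np.mean(pred))

def nmse(obs, pred):
    """Normalized mean square error."""
    return np.mean((obs - pred) ** 2) / (np.mean(obs) * np.mean(pred))

def fac2(obs, pred):
    """Fraction of predictions within a factor of two of observations."""
    ratio = pred / obs
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

# Illustrative concentration values only
obs = np.array([1.2, 0.8, 2.5, 3.1, 0.4])
pred = np.array([1.0, 1.1, 2.0, 2.6, 0.9])
print(f"FB = {fb(obs, pred):.3f}, NMSE = {nmse(obs, pred):.3f}, FAC2 = {fac2(obs, pred):.2f}")
```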
Procedia PDF Downloads 133
27599 Contact Temperature of Sliding Surfaces in AISI 316 Austenitic Stainless Steel During PIN on Disk Dry Wear Testing
Authors: Dler Abdullah Ahmed, Zozan Ahmed Mohammed
Abstract:
This study looked into the contact surface temperature during a pin-on-disk test. Friction and wear between the sliding surfaces raise the temperature difference Tdiff between the contact surface and the ambient temperature. Tdiff was significantly influenced by the wear test variables: it rose with increasing sliding speed and applied load, while it dropped with increasing ambient temperature. The highest Tdiff was 289°C during the tests at room temperature and 2.5 m/s sliding speed, while the minimum was only 24°C during the tests at 400°C and 0.5 m/s. However, the maximum contact temperature Tmax was found during tests conducted at high ambient temperatures. Tmax was estimated from the theoretical equation, and the comparison of the experimental and theoretical Tmax data revealed good agreement.
Keywords: pin on disk test, contact temperature, wear, sliding surface, friction, ambient temperature
Procedia PDF Downloads 82
27598 Experiments on Weakly-Supervised Learning on Imperfect Data
Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler
Abstract:
Supervised predictive models require labeled data for training purposes. Complete and accurate labeled data, i.e., a ‘gold standard’, is not always available, and imperfectly labeled data may need to serve as an alternative. An important question is if the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments which demonstrated that models trained on imperfect data can (but do not always) outperform the accuracy of the training data, e.g., the area under the curve for some models is higher than 80% when trained on the data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning.
Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation
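A minimal sketch of the weakly-supervised setup described above (a linear-kernel SVM trained on labels with injected error, evaluated with AUC against clean test labels) is shown below using scikit-learn on synthetic data; the clinical delirium corpus is not public, so the dataset, noise rate and parameters here are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, accuracy_score

# Synthetic stand-in for the labeled documents
X, y = make_classification(n_samples=2000, n_features=50, weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Corrupt 20% of the training labels to mimic weak (imperfect) supervision
rng = np.random.default_rng(0)
noisy = rng.random(len(y_train)) < 0.2
y_train_weak = np.where(noisy, 1 - y_train, y_train)

model = SVC(kernel="linear", probability=True, random_state=0)
model.fit(X_train, y_train_weak)

scores = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, scores))
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```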
Procedia PDF Downloads 199
27597 Performance Analysis of Proprietary and Non-Proprietary Tools for Regression Testing Using Genetic Algorithm
Authors: K. Hema Shankari, R. Thirumalaiselvi, N. V. Balasubramanian
Abstract:
The present paper addresses research in the area of regression testing, with emphasis on automated tools as well as prioritization of test cases. The uniqueness of regression testing and its cyclic nature are pointed out, and the difference in approach between industry, with the business model as its basis, and academia, with its focus on data mining, is highlighted. Test metrics are discussed as a prelude to our formula for prioritization, and a case study is discussed to illustrate this methodology. An industrial case study is also described in the paper, where the number of test cases is so large that they have to be grouped into test suites. In such situations, a genetic algorithm proposed by us can be used to reconfigure these test suites in each cycle of regression testing. The comparison between a proprietary tool and an open-source tool is made using the above-mentioned metrics, and our approach is clarified through several tables.
Keywords: APFD metric, genetic algorithm, regression testing, RFT tool, test case prioritization, selenium tool
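The APFD metric listed in the keywords is commonly defined as APFD = 1 - (TF1 + ... + TFm)/(n*m) + 1/(2n), where n is the number of test cases, m the number of faults, and TFi the position of the first test that reveals fault i. A small sketch of that computation follows; the test/fault mapping is illustrative, not the industrial case study from the paper.

```python
def apfd(ordering, faults_detected_by):
    """Average Percentage of Faults Detected for a given test-case ordering.

    ordering           -- list of test-case ids in execution order
    faults_detected_by -- dict mapping each fault id to the set of tests that reveal it
    """
    n = len(ordering)
    m = len(faults_detected_by)
    position = {test: i + 1 for i, test in enumerate(ordering)}   # 1-based positions
    tf_sum = sum(min(position[t] for t in tests) for tests in faults_detected_by.values())
    return 1 - tf_sum / (n * m) + 1 / (2 * n)

# Illustrative example: 5 tests, 4 faults
faults = {"F1": {"T3"}, "F2": {"T1", "T4"}, "F3": {"T5"}, "F4": {"T2", "T3"}}
print(apfd(["T3", "T1", "T5", "T2", "T4"], faults))   # prioritized order -> 0.75
print(apfd(["T1", "T2", "T3", "T4", "T5"], faults))   # original order    -> 0.55
```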
Procedia PDF Downloads 436
27596 Transforming Healthcare Data Privacy: Integrating Blockchain with Zero-Knowledge Proofs and Cryptographic Security
Authors: Kenneth Harper
Abstract:
Blockchain technology presents solutions for managing healthcare data, addressing critical challenges in privacy, integrity, and access. This paper explores how privacy-preserving technologies, such as zero-knowledge proofs (ZKPs) and homomorphic encryption (HE), enhance decentralized healthcare platforms by enabling secure computations and patient data protection. It examines the mathematical foundations of these methods, their practical applications, and how they meet the evolving demands of healthcare data security. Using real-world examples, this research highlights industry-leading implementations and offers a roadmap for future applications in secure, decentralized healthcare ecosystems.
Keywords: blockchain, cryptography, data privacy, decentralized data management, differential privacy, healthcare, healthcare data security, homomorphic encryption, privacy-preserving technologies, secure computations, zero-knowledge proofs
Procedia PDF Downloads 19
27595 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance
Authors: Ammar Alali, Mahmoud Abughaban
Abstract:
Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents were a major segment of non-productive time (NPT) associated costs. Traditionally, stuck pipe problems are treated as part of operations and solved after sticking occurs; however, the real key to savings and success is in predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and machine learning (ML) algorithms to predict drilling events in real time, using surface drilling data with minimal computational power. The method combines two types of analysis: (1) real-time prediction and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses these two physical operations (stacking and flattening) to filter noise in the signature and create a robust pre-determined pilot signature that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at the same frequency as the pre-determined signature. The matrix is then correlated with the pre-determined stuck-pipe signature for this field in real time. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects relevant features from the class and identifies redundant features. The correlation output is interpreted as a probability curve for stuck-pipe incidents in real time. Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user of the expected incident based on the pre-determined signatures, and a set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures had previously been created for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. The prediction of the stuck pipe problem requires a method to capture geological, geophysical and drilling data and to recognize the indicators of this issue at the field and geological formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe
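A heavily simplified sketch of the real-time matching step described above is given below; it substitutes a plain Pearson correlation and an arbitrary 0.75 threshold for the paper's CFS-based correlation, and the signature and live data are hypothetical.

```python
import numpy as np

def correlate_with_signature(live_window, signature):
    """Pearson correlation between the aggregated live surface data window and the
    pre-determined stuck-pipe signature for the current formation."""
    return np.corrcoef(live_window, signature)[0, 1]

def stuck_pipe_alert(live_window, signature, threshold=0.75):
    """Raise an alert when the correlation passes a user-defined threshold."""
    score = correlate_with_signature(live_window, signature)
    return score, score >= threshold

# Illustrative data: a hypothetical hook-load signature and a live window resembling it
signature = np.array([100, 102, 105, 110, 118, 130, 150, 180], dtype=float)
live = signature + np.random.default_rng(2).normal(0, 3, size=signature.size)
score, alert = stuck_pipe_alert(live, signature)
print(f"correlation = {score:.2f}, alert = {alert}")
```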
Procedia PDF Downloads 230
27594 Comparison of Risk and Return on Trading and Profit Sharing Based Financing Contract in Indonesian Islamic Bank
Authors: Fatin Fadhilah Hasib, Puji Sucia Sukmaningrum, Imron Mawardi, Achsania Hendratmi
Abstract:
Murabaha is the most popular contract used by Islamic banks in Indonesia, since there is an opinion that the risk levels of mudharaba and musyaraka are higher and their return is uncertain. This research aims to analyze the differences in return, risk, and coefficient of variation between profit sharing-based and trading-based financing in Islamic banks. The research uses a quantitative approach based on the Wilcoxon signed rank test, with data sampled from 13 Indonesian Islamic banks and collected from their quarterly financial reports from 2011 to 2015. The result shows a significant difference in return, while risk and coefficient of variation are almost the same. From the analysis, it can be concluded that profit sharing-based financing is less desirable not because of its risk; trading-based financing is more desirable than profit sharing because of its return.
Keywords: financing, Islamic bank, return, risk
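As a reference for the statistical test used above, the following sketch runs a paired Wilcoxon signed rank test with scipy on hypothetical return series; the figures are illustrative only, not the banks' actual quarterly data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired returns (%) of trading-based vs profit-sharing-based financing
trading = np.array([12.1, 11.4, 13.0, 12.6, 11.9, 12.8, 13.3, 12.2, 11.7, 12.9, 13.1, 12.4, 12.0])
profit_sharing = np.array([10.2, 10.8, 11.1, 10.5, 10.9, 11.4, 10.7, 11.0, 10.4, 11.2, 10.6, 10.8, 11.3])

# Paired, non-parametric comparison of the two financing types
stat, p_value = stats.wilcoxon(trading, profit_sharing)
print(f"Wilcoxon statistic = {stat:.1f}, p-value = {p_value:.4f}")
```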
Procedia PDF Downloads 378