Search results for: data comparison
27957 Ontological Modeling Approach for Statistical Databases Publication in Linked Open Data
Authors: Bourama Mane, Ibrahima Fall, Mamadou Samba Camara, Alassane Bah
Abstract:
At the level of the National Statistical Institutes, there is a large volume of data, generally in a format that conditions the method of publication of the information it contains. Each household or business data collection project includes its own dissemination platform. These previously used dissemination methods do not promote rapid access to information and, in particular, do not offer the option of linking data for in-depth processing. In this paper, we present an approach to modeling these data in order to publish them in a format intended for the Semantic Web. Our objective is to publish all these data on a single platform and to offer the option of linking them with other external data sources. The approach is applied to data from major national surveys, such as those on employment, poverty, and child labor, and the general census of the population of Senegal.
Keywords: Semantic Web, linked open data, database, statistic
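As an editorial illustration of the kind of Semantic Web publication the abstract describes, here is a minimal Python sketch using rdflib; the namespace, property names, and values are hypothetical, and a production model would more likely use the standard RDF Data Cube (qb) vocabulary for statistical observations.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Hypothetical namespace for a national statistics portal (illustrative only).
STAT = Namespace("http://example.org/stat/")

g = Graph()
g.bind("stat", STAT)
obs = URIRef("http://example.org/stat/obs/employment-2019-dakar")

# One statistical observation expressed as linked-data triples.
g.add((obs, RDF.type, STAT.Observation))
g.add((obs, STAT.indicator, Literal("employment rate")))
g.add((obs, STAT.region, Literal("Dakar")))
g.add((obs, STAT.year, Literal(2019, datatype=XSD.gYear)))
g.add((obs, STAT.value, Literal(42.5, datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```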
Procedia PDF Downloads 175
27956 Influence of Biological and Chemical Fertilizers on Quantitative Characteristics of Sweet Wormwood
Authors: Anahita Yarahmadi, Nazanin Mahboobi, Nahid Sadat Rahmatpour Nori, Mohammad Hossein Bijeh Keshavarzi, Mohammad Javad Shakori
Abstract:
To consider the effects of biological and chemical fertilizers on the quantitative characteristics of sweet wormwood (Artemisia annua L.), a factorial experiment based on a completely randomized design with four replications was carried out in an experimental greenhouse located in Tehran. Treatments involved chemical fertilizers (nitrogen, phosphorus) at four levels and biological fertilizers at four levels (control, Nitroxin, bio-phosphorus, and vermicompost). Results showed that applying biological fertilizers and increasing the levels of chemical fertilizers (N, P) had significant effects on all the characteristics. Means comparison showed that biological fertilizers led to significant enhancement of all the characteristics, with the vermicompost treatment having the greatest effect. Means comparison across the levels of chemical fertilizer showed that N80P80 produced the greatest increase in the measured characteristics.
Keywords: Artemisia annua L., bio-fertilizer, chemical fertilizer, vermicompost
Procedia PDF Downloads 455
27955 A Photoredox (C)sp³-(C)sp² Coupling Method Comparison Study
Authors: Shasline Gedeon, Tiffany W. Ardley, Ying Wang, Nathan J. Gesmundo, Katarina A. Sarris, Ana L. Aguirre
Abstract:
Drug discovery and delivery involve drug targeting, an approach that helps find a drug against a chosen target through high-throughput screening and other methods, identifying the physical properties of the potential lead compound along the way. The physical properties of potential drug candidates have been an imperative focus since the unveiling of Lipinski's Rule of 5 for oral drugs. Throughout a compound's journey from discovery through clinical phase trials to becoming a classified drug on the market, the desirable properties are optimized while toxicity and undesirable properties are minimized or eliminated. In the pharmaceutical industry, the ability to generate molecules in parallel with maximum efficiency is a substantial factor, achieved through sp²-sp² carbon coupling reactions such as Suzuki coupling. These reaction types allow aromatic fragments to be added to a compound. More recent literature has found benefits in decreasing aromaticity, calling for more sp³-sp² carbon coupling reactions instead. The objective of this project is to compare various sp³-sp² carbon coupling methods and reaction conditions, collecting data on production of the desired product. Four different coupling methods were tested across three cores and 4-5 installation groups per method; each method was run under three distinct reaction conditions. The tested methods include Photoredox Decarboxylative Coupling, Photoredox Potassium Alkyl Trifluoroborate (BF3K) Coupling, Photoredox Cross-Electrophile (PCE) Coupling, and Weix Cross-Electrophile (WCE) Coupling. The results showed that the decarboxylative method rarely yielded product despite the several literature conditions chosen. The BF3K and PCE methods produced competitive results. Of the two cross-electrophile coupling methods, the photoredox method surpassed the Weix method on numerous accounts. The results will be used to build future libraries.
Keywords: drug discovery, high throughput chemistry, photoredox chemistry, sp³-sp² carbon coupling methods
Procedia PDF Downloads 144
27954 The Role of Data Protection Officer in Managing Individual Data: Issues and Challenges
Authors: Nazura Abdul Manap, Siti Nur Farah Atiqah Salleh
Abstract:
For decades, the misuse of personal data has been a critical issue. Malaysia has responded by implementing the Personal Data Protection Act 2010 (PDPA 2010) to secure personal data. After more than a decade, this legislation is set to be revised by the PDPA Amendment Bill 2023 to align it with the world's key personal data protection regulations, such as the European Union General Data Protection Regulation (GDPR). Among the suggested adjustments is the data user's appointment of a Data Protection Officer (DPO) to ensure the commercial entity's compliance with PDPA 2010 requirements. The change is expected to be enacted in parliament fairly soon; nevertheless, based on the experience of the Personal Data Protection Department (PDPD) in implementing the Act, it is projected that there will be additional concerns associated with the DPO mandate. Consequently, the goal of this article is to highlight the issues that DPOs will encounter and how the Personal Data Protection Department should respond. The study result was produced using a qualitative technique based on an examination of the current literature. This research reveals probable obstacles faced by DPOs, and thus there should be a definite, clear guideline in place to aid DPOs in executing their tasks. It is argued that appointing a DPO is a wise measure to ensure that legal data security requirements are met.
Keywords: guideline, law, data protection officer, personal data
Procedia PDF Downloads 78
27953 Comparison of Radiated Emissions in Offshore and Onshore Wind Turbine Towers
Authors: Sajeesh Sulaiman, Gomathisankar A., Aravind Devaraj, Aswin R., Vijay Kumar G., Rachana Raj
Abstract:
Wind turbines are the next big answer to the emerging and ever-growing demand for electricity. These high-mast structures, whether on land or at sea, have also become significant sources of electromagnetic interference (EMI) in the not so distant past. The emergence of AC-AC converters and the routing of large power cables through wind turbine towers have made this clean and efficient source of renewable energy one of the culprits in creating electromagnetic interference. This paper presents the sources of such EMI, a comparison of radiated emission patterns (both electric and magnetic field) in onshore and offshore wind turbine towers, and a close look at IEC 61400-40 (the new standard for EMC design of wind turbines). At present, offshore wind turbines are tested in onshore facilities. This paper will present the anomaly in results for offshore wind turbines when tested onshore, which existing and upcoming standards have failed to address.
Keywords: emissions, electric field, magnetic field, wind turbine, tower, standards and regulations
Procedia PDF Downloads 248
27952 Volume Density of Power of Multivector Electric Machine
Authors: Aldan A. Sapargaliyev, Yerbol A. Sapargaliyev
Abstract:
Since its invention, the electric machine (EM) can be described as an oEM (one-vector electric machine), as it works through one-vector inductive coupling using a one-vector electromagnet. The disadvantages of the oEM are its large size and limited efficiency in low- and medium-power applications. This paper describes the multi-vector electric machine (mEM), based on multi-vector inductive coupling, which is characterized by an increased surface area of inductive coupling per EM volume and a reduced share of the inefficient, energy-consuming part of the winding in comparison with oEMs. In particular, the performance and power of three different electric motors are considered, calculated, and compared at the same volumes and rotor frequencies. The calculated correlation between power density and volume for the oEM and mEM is also presented. The method of multi-vector inductive coupling enables the mEM to achieve a 1.5-4.0 times greater power density per volume and significantly higher efficiency than today's oEMs, especially in low- and medium-power applications. The mEM has distinct advantages when used in transport vehicles such as electric cars and aircraft.
Keywords: electric machine, electric motor, electromagnet, efficiency of electric motor
Procedia PDF Downloads 338
27951 Variable Selection in a Data Envelopment Analysis Model by Multiple Proportions Comparison
Authors: Jirawan Jitthavech, Vichit Lorchirachoonkul
Abstract:
A statistical procedure using a multiple comparisons test for proportions is proposed for variable selection in a data envelopment analysis (DEA) model. The test statistic in the multiple comparisons is the proportion of efficient decision making units (DMUs) in a DEA model. Three methods of multiple comparisons testing for proportions are used in the proposed statistical procedure of iteratively eliminating variables in a backward manner: multiple Z tests with Bonferroni correction, multiple tests in a 2×c crosstabulation, and the Marascuilo procedure. Two simulated populations of moderately and lowly correlated variables are used to compare the results of the statistical procedure using the three methods of multiple comparisons against the hypothesis testing of the efficiency contribution measure. From the simulation results, it can be concluded that the proposed procedure using multiple Z tests for proportions with Bonferroni correction clearly outperforms the procedure using the remaining two multiple comparisons methods and the hypothesis testing of the efficiency contribution measure.
Keywords: Bonferroni correction, efficient DMUs, Marascuilo procedure, Pastor et al. method, 2×c crosstabulation
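A minimal Python sketch of the first of the three methods, multiple Z tests for proportions with Bonferroni correction, applied to hypothetical counts of efficient DMUs; the counts and model names are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

def z_test_two_proportions(x1, n1, x2, n2):
    """Two-sided Z test for equality of two proportions (pooled estimate)."""
    p1, p2 = x1 / n1, x2 / n2
    pool = (x1 + x2) / (n1 + n2)
    se = np.sqrt(pool * (1 - pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * (1 - norm.cdf(abs(z)))

# Hypothetical counts of efficient DMUs (out of n = 50) under four models.
efficient = {"full": 12, "drop_x1": 14, "drop_x2": 25, "drop_x3": 13}
n = 50
names = list(efficient)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
alpha = 0.05 / len(pairs)  # Bonferroni-corrected significance level

for a, b in pairs:
    p = z_test_two_proportions(efficient[a], n, efficient[b], n)
    flag = "significant" if p < alpha else "ns"
    print(f"{a} vs {b}: p = {p:.4f} ({flag})")
```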
Procedia PDF Downloads 310
27950 Comparison of Classical Computer Vision vs. Convolutional Neural Networks Approaches for Weed Mapping in Aerial Images
Authors: Paulo Cesar Pereira Junior, Alexandre Monteiro, Rafael da Luz Ribeiro, Antonio Carlos Sobieranski, Aldo von Wangenheim
Abstract:
In this paper, we present a comparison between convolutional neural networks and classical computer vision approaches for the specific precision agriculture problem of weed mapping in aerial images of sugarcane fields. A systematic literature review was conducted to find which computer vision methods are being used for this specific problem. The most cited methods were implemented, as well as four models of convolutional neural networks. All implemented approaches were tested using the same dataset, and their results were quantitatively and qualitatively analyzed. The obtained results were compared to a ground truth produced by a human expert for validation. The results indicate that the convolutional neural networks achieve better precision and generalize better than the classical models.
Keywords: convolutional neural networks, deep learning, digital image processing, precision agriculture, semantic segmentation, unmanned aerial vehicles
Procedia PDF Downloads 260
27949 Determination of Optimum Water Consumption Using a Deficit Irrigation Model for Barley: A Case Study in Arak, Iran
Authors: Mohsen Najarchi
Abstract:
This research was carried out in five fields (5-15 hectares) in Arak, located in the center of Iran, to determine the optimum level of water consumption for barley in four growth stages (vegetative, yield formation, flowering, and ripening). Actual evapotranspiration was calculated using the water requirement measured in the fields. Five levels of water requirement, equal to 50, 60, 70, 80, and 90 percent, formed the treatments. Linear programming was used to determine the optimum level of water requirement. The study showed that 60 percent of the water requirement (40 percent deficit irrigation) was the optimum irrigation level for barley across the four growth stages. Comparison of all the above treatments with the normal condition (100% water requirement) shows an increase in water use efficiency. Although the 40% deficit irrigation treatment led to a 38% decrease in yield, the net benefit increased by 11.37%. Furthermore, in comparison with the normal condition, 70% of the water requirement increased water use efficiency by 30%.
Keywords: optimum, deficit irrigation, water use efficiency, evapotranspiration
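A hedged sketch of how linear programming can pick an irrigation allocation, in the spirit of the abstract; the benefit and water-use coefficients below are invented, not the paper's field data.

```python
from scipy.optimize import linprog

# Hypothetical decision: x[i] = hectares irrigated at level i, where the
# levels are 90, 80, 70, 60 and 50 percent of full water requirement.
net_benefit = [520, 560, 540, 500, 430]       # $/ha at each level (assumed)
water_use = [5400, 4800, 4200, 3600, 3000]    # m3/ha at each level (assumed)
total_land, total_water = 15.0, 60000.0       # ha, m3 available (assumed)

res = linprog(
    c=[-b for b in net_benefit],              # linprog minimizes, so negate
    A_ub=[water_use, [1.0] * 5],              # water budget, land budget
    b_ub=[total_water, total_land],
    bounds=[(0, None)] * 5,
)
print("hectares per level:", res.x.round(2), " max net benefit:", -res.fun)
```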
Procedia PDF Downloads 396
27948 Comparison of Slaughter Processes in Beef Cattle Based on Cortisol Level and Fourier Transform Infrared Spectroscopy (FTIR)
Authors: Pudji Astuti, C. P. C. Putro, C. M. Airin, L. Sjahfirdi, S. Widiyanto, H. Maheshwari
Abstract:
Stress in slaughter animals begins long before and continues through the slaughtering process, causing suffering and a decrease in meat quality. Meanwhile, determining animal stress using hormones such as cortisol is expensive and impractical, so a portable stress indicator for cattle based on Fourier Transform Infrared Spectroscopy (FTIR) is needed. The aims of this research are to compare the Rope Casting Local (RCL) and Restraining Box Method (RBM) slaughter processes by measuring cortisol and FTIR spectra. Thirty-two male Ongole crossbred cattle were used in this experiment. Blood samples were taken from the jugular vein when the animals were at rest and again at slaughter. All blood samples were centrifuged at 3000 rpm for 20 minutes to obtain serum, which was then divided into two parts: one for cortisol assay using ELISA and one for FTIR measurement. The serum was measured at wavenumbers between 4000-400 cm-1 using an MB3000 FTIR. Absorption band data are analyzed descriptively using FTIR Horizon MBTM. For RCL, mean serum cortisol was 11.47 ± 4.88 ng/mL at rest and 23.27 ± 7.84 ng/mL at slaughter. For RBM, cortisol was 13.67 ± 3.41 ng/mL at rest and 53.47 ± 20.25 ng/mL at slaughter. Based on Student's t-test, the RBM and RCL methods differed significantly when the cattle were slaughtered (P < 0.05) but not when they were at rest (P > 0.05). FTIR identified various bands, such as methyl (=CH3) at 2986 cm-1, methylene (=CH2) at 2827 cm-1, hydroxyl (-OH) at 3371 cm-1, carbonyl (ketone, C=O) at 1636 cm-1, carboxyl (COO-) at 1408 cm-1, glucose at 1057 cm-1, and urea at 1011 cm-1. It can be concluded that the RCL slaughter method is better than the RBM method, based on the smaller increase in cortisol as an indicator of stress in beef cattle (P < 0.05). FTIR could plausibly be used as a stress indicator tool, since it differentiates the resting and slaughter conditions through increased absorption and the separation of component groups at these wavenumbers.
Keywords: cows, cortisol, FTIR, RBM, RCL, stress indicator
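For illustration, the reported comparison can be reproduced on synthetic data mimicking the published means and standard deviations; Welch's variant of the t-test is used here as a hedge against unequal variances (the paper specifies only Student's t-test).

```python
import numpy as np
from scipy import stats

# Synthetic cortisol values (ng/mL) at slaughter, mimicking the reported
# means and SDs (RCL: 23.27 +/- 7.84; RBM: 53.47 +/- 20.25); not raw data.
rng = np.random.default_rng(0)
rcl = rng.normal(23.27, 7.84, 16)
rbm = rng.normal(53.47, 20.25, 16)

t, p = stats.ttest_ind(rcl, rbm, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")  # expect p < 0.05, as reported
```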
Procedia PDF Downloads 641
27947 Comparison and Validation of a dsDNA Biomimetic Quality Control Reference for NGS-Based BRCA CNV Analysis versus MLPA
Authors: A. Delimitsou, C. Gouedard, E. Konstanta, A. Koletis, S. Patera, E. Manou, K. Spaho, S. Murray
Abstract:
Background: There remains a lack of international standard control reference materials for Next Generation Sequencing (NGS)-based approaches or device calibration. We have designed and validated dsDNA biomimetic reference materials for such targeted approaches, incorporating proprietary motifs (patent pending) for device/test calibration. They enable internal single-sample calibration, alleviating sample comparison against pooled historical population-based data assemblies or statistical modelling approaches. We have validated such an approach for BRCA Copy Number Variation (CNV) analysis using iQRS™-CNVSUITE versus Multiplex Ligation-dependent Probe Amplification (MLPA). Methods: Standard BRCA CNV analysis was compared between MLPA and NGS using a cohort of 198 breast/ovarian cancer patients. NGS-based CNV analysis of samples spiked with iQRS™ dsDNA biomimetics was performed using the proprietary CNVSUITE software. MLPA analyses were performed on an ABI-3130 sequencer and analysed with Coffalyser software. Results: Concordance of BRCA CNV events between MLPA and CNVSUITE indicated an overall sensitivity of 99.88% and specificity of 100% for iQRS™-CNVSUITE. The negative predictive value of iQRS™-CNVSUITE for BRCA was 100%, allowing for accurate exclusion of any event. The positive predictive value was 99.88%, with no discrepancy between MLPA and iQRS™-CNVSUITE. For device calibration purposes, precision was 100%; spiking of patient DNA demonstrated linearity to 1% (±2.5%) and range from 100 copies. Traditional training was supplemented by predefining the calibrator-to-sample cut-off (lock-down) for amplicon gain or loss based upon a relative ratio threshold, following training of iQRS™-CNVSUITE using spiked iQRS™ calibrator and control mocks. BRCA CNV analysis using iQRS™-CNVSUITE was successfully validated and ISO 15189 accredited, and now enters CE-IVD performance evaluation. Conclusions: The inclusion of a reference control competitor (iQRS™ dsDNA mimetic) in NGS-based sequencing offers a more robust sample-independent approach for the assessment of CNV events than MLPA. The approach simplifies data analysis, improves independent sample data analysis, and allows for direct comparison to an internal reference control for sample-specific quantification. Our iQRS™ biomimetic reference materials allow for single-sample CNV analytics and further decentralisation of diagnostics to single patient sample assessment.
Keywords: validation, diagnostics, oncology, copy number variation, reference material, calibration
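CNVSUITE is proprietary, but the core idea of a relative ratio threshold against an internal spiked-in calibrator can be sketched generically; the counts and thresholds below are illustrative assumptions, not iQRS™ values.

```python
import numpy as np

# Hypothetical per-amplicon read counts; the spiked-in reference provides
# internal single-sample normalization. Thresholds are illustrative.
sample = np.array([980, 1020, 460, 990, 1530])
reference = np.array([1000, 1000, 1000, 1000, 1000])
ratio = sample / reference

def call_cnv(r, loss=0.65, gain=1.35):
    """Relative-ratio threshold call for amplicon loss or gain."""
    return "loss" if r < loss else "gain" if r > gain else "normal"

for i, r in enumerate(ratio, start=1):
    print(f"amplicon {i}: ratio = {r:.2f} -> {call_cnv(r)}")
```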
Procedia PDF Downloads 66
27946 Data Collection Based on a Questionnaire Survey in Hospital Emergencies
Authors: Nouha Mhimdi, Wahiba Ben Abdessalem Karaa, Henda Ben Ghezala
Abstract:
The methods identified for data collection are diverse: electronic media, focus group interviews, and short-answer questionnaires [1]. The collection of poor-quality data (resulting, for example, from poorly designed questionnaires, the absence of good translators or interpreters, or the incorrect recording of data) allows conclusions to be drawn that are not supported by the data, or focuses attention only on the average effect of the program or policy. There are several solutions to avoid or minimize the most frequent errors, including obtaining expert advice on the design or adaptation of data collection instruments, or using technologies allowing better "anonymity" in the responses [2]. In this context, we opted to collect good-quality data by conducting a sizeable questionnaire-based survey on hospital emergencies to improve emergency services and alleviate the problems encountered. In this paper, we present our study and detail the steps followed to collect relevant, consistent, and practical data.
Keywords: data collection, survey, questionnaire, database, data analysis, hospital emergencies
Procedia PDF Downloads 108
27945 An Econometric Analysis of the Flat Tax Revolution
Authors: Wayne Tarrant, Ethan Petersen
Abstract:
The concept of a flat tax goes back at least to the Biblical tithe. A progressive income tax was first vociferously espoused in a small but famous pamphlet in 1848 (although England had an emergency progressive tax for war costs prior to this). Within a few years, many countries had adopted the progressive structure. The flat tax was reinstated only in some small countries and British protectorates until Mart Laar was elected Prime Minister of Estonia in 1992. Since Estonia's adoption of the flat tax in 1993, many other formerly Communist countries have likewise abandoned progressive income taxes. Economists had expectations of what would happen when a flat tax was enacted, but very little work has been done on actually measuring the effect. With a testbed of 21 countries in this region that currently have a flat tax, much comparison is possible. Several countries have retained progressive taxes, giving an opportunity for contrast. There are also the cases of the Czech Republic and Slovakia, which adopted and later abandoned the flat tax. Further, with over 20 years' worth of economic history in some flat tax countries, we can begin to do serious longitudinal study. In this paper we consider many economic variables to determine if there are statistically significant differences from before to after the adoption of a flat tax. We consider unemployment rates, tax receipts, GDP growth, Gini coefficients, and market data where the data are available. Comparisons are made through the use of event studies and time series methods. The results are mixed, but we draw statistically significant conclusions about some effects. We also look at the different implementations of the flat tax. In some countries there are equal income and corporate tax rates; in others the income tax has a lower rate, while in still others the reverse is true. Each of these sends a clear message to individuals and corporations, and the policy makers surely have a desired effect in mind. We group countries with similar policies, try to determine if the intended effect actually occurred, and then report the results. This is a work in progress, and we welcome the suggestion of variables to consider. Further, some of the data from before the fall of the Iron Curtain are suspect: since there are new ruling regimes in these countries, the methods of computing different statistical measures have changed. Although we first look at the raw data as reported, we also attempt to account for these changes. We show which data seem to be fictional and suggest ways to infer the needed statistics from other data; these results are reported beside those on the reported data. Since there is debate about taxation structure, this paper can help inform policymakers of the changes the flat tax has caused in other countries. The work shows some strengths and weaknesses of a flat tax structure. Moreover, it provides the beginnings of a scientific analysis of the flat tax in practice, rather than discussion based solely upon theory and conjecture.
Keywords: flat tax, financial markets, GDP, unemployment rate, Gini coefficient
Procedia PDF Downloads 339
27944 The Different Contributions of Numerical Magnitude and Spatial Representation of Numbers to Symbolic Approximate Arithmetic: A Training Study of Preschoolers
Abstract:
The spatial representation of numbers and numerical magnitude are important for preschoolers' mathematical ability. The mental number line, a typical index of the spatial representation of numbers, and numerical comparison are both clearly related to arithmetic. However, they seem to rely on different mechanisms and probably influence arithmetic through different pathways. In line with this idea, preschool children were trained on two tasks to investigate which is more important for approximate arithmetic. The training of numerical processing and of number line estimation both proved effective: each improved approximate arithmetic ability. When the difficulty of the approximate arithmetic task was taken into account, performance in the number line training group did not differ significantly among the three levels, whereas the two harder levels reached significance in the numerical comparison training group. Thus, compared with spatial representation ability, symbolic approximate arithmetic relies more on numerical magnitude. Educational implications of the study are discussed.
Keywords: approximate arithmetic, mental number line, numerical magnitude, preschooler
Procedia PDF Downloads 252
27943 Numerical Simulation of Different Enhanced Oil Recovery (EOR) Scenarios on a Volatile Oil Reservoir
Authors: Soheil Tavakolpour
Abstract:
Enhanced Oil Recovery (EOR) can be considered an essential action over a reservoir's life. Different kinds of EOR methods are available, but the suitable method depends on reservoir properties, such as rock and fluid properties. In this paper, we selected the fifth SPE Comparative Solution Project (CSP) for testing different scenarios. We used seven EOR scenarios for this reservoir and simulated each for 10 years following 2 years of production without any injection. The first scenario is waterflooding for the whole 10-year period. The second scenario is gas injection for ten years. The third scenario is Water-Alternating-Gas (WAG). In the fourth scenario, water is injected for 4 years before starting WAG injection for the next 6 years. In the fifth scenario, water is injected for 4 years after 6 years of WAG injection. The sixth and last scenarios are similar to the fourth and fifth, but gas is injected instead of water. Results show that the fourth scenario was the most efficient for 10-year EOR, but it resulted in very high water production. The fifth scenario was also efficient, with little water production in comparison to the fourth scenario. Gas injection was not economically attractive: in addition to high gas production, it produced less oil than the other scenarios.
Keywords: WAG, SPE comparative solution projects, numerical simulation, EOR scenarios
Procedia PDF Downloads 434
27942 Federated Learning in Healthcare
Authors: Ananya Gangavarapu
Abstract:
Convolutional Neural Network (CNN)-based models are providing diagnostic capabilities on par with medical specialists in many specialty areas. However, collecting medical data for training purposes is very challenging because of increased regulation around data collection and privacy concerns around personal health data. Gathering the data becomes even more difficult if the capture devices are edge-based mobile devices (like smartphones) with feeble wireless connectivity in rural/remote areas. In this paper, I highlight the Federated Learning approach to mitigate data privacy and security issues.
Keywords: deep learning in healthcare, data privacy, federated learning, training in distributed environment
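A minimal sketch of federated averaging (FedAvg), the canonical federated learning algorithm, with a logistic-regression local update standing in for the CNNs mentioned in the text; data and hyperparameters are synthetic.

```python
import numpy as np

def local_update(w_global, X, y, lr=0.1, epochs=5):
    """One client's local training; logistic-regression SGD stands in for
    the CNNs described in the text."""
    w = w_global.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def fed_avg(w_global, clients):
    """FedAvg: average local updates weighted by local dataset size."""
    sizes = [len(y) for _, y in clients]
    updates = [local_update(w_global, X, y) for X, y in clients]
    return np.average(updates, axis=0, weights=sizes)

# Four synthetic clients; raw data never leaves a client, only weights do.
rng = np.random.default_rng(1)
clients = [(rng.normal(size=(40, 3)), rng.integers(0, 2, 40)) for _ in range(4)]

w = np.zeros(3)
for _ in range(10):  # communication rounds
    w = fed_avg(w, clients)
print("global model weights:", w.round(3))
```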
Procedia PDF Downloads 141
27941 Prioritizing the Most Important Information from Contractors' BIM Handover for Firefighters' Responsibilities
Authors: Akram Mahdaviparsa, Tamera McCuen, Vahideh Karimimansoob
Abstract:
Fire services are responsible for protecting life, assets, and natural resources from fire and other hazardous incidents. Search and rescue in unfamiliar buildings is a vital part of firefighters' responsibilities. Providing firefighters with precise building information in an easy-to-understand format is a potential solution for mitigating the negative consequences of fire hazards. Insufficient knowledge of a building's indoor environment impedes firefighters' capabilities and leads to lost property. A data-rich building information model (BIM) is a potentially useful source of three-dimensional (3D) visualization and data/information storage for fire emergency response. Therefore, this research's purpose is to prioritize the information firefighters require, from most important to least important. A survey was carried out with firefighters working in the Norman Fire Department to obtain the importance of each building information item. The results show that "the location of exit doors, windows, corridors, elevators, and stairs", "material of building elements", and "building data" are the three most important items specified by firefighters. The results also implied that 2D models of architectural, structural, and wayfinding information are easier to understand than 3D models, while a 3D model of the MEP system conveys more information than its 2D counterpart. Furthermore, color in visualization helps firefighters understand building information more easily and quickly. Sufficient internal consistency of all responses was demonstrated by the Pearson correlation matrix and a Cronbach's alpha of 0.916. Therefore, the results of this study are reliable and could be applied to the population.
Keywords: BIM, building fire response, ranking, visualization
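The internal-consistency statistic reported (Cronbach's alpha of 0.916) is straightforward to compute; a sketch on invented survey scores:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of questionnaire scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Invented 5-point ratings from 8 respondents on 4 items; the study itself
# reports alpha = 0.916 on its own survey data.
scores = [[5, 4, 5, 4], [4, 4, 5, 5], [5, 5, 5, 4], [3, 4, 4, 3],
          [5, 5, 4, 5], [4, 3, 4, 4], [5, 5, 5, 5], [4, 4, 4, 4]]
print(f"alpha = {cronbach_alpha(scores):.3f}")
```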
Procedia PDF Downloads 133
27940 Suitable Site Selection of Small Dams Using Geo-Spatial Technique: A Case Study of Dadu Tehsil, Sindh
Authors: Zahid Khalil, Saad Ul Haque, Asif Khan
Abstract:
Decision-making about identifying suitable sites for a project while considering different parameters is difficult. Using GIS and Multi-Criteria Analysis (MCA) can make it easier for such projects. This technology has proved to be efficient and adequate in acquiring the desired information. In this study, GIS and MCA were employed to identify suitable sites for small dams in Dadu Tehsil, Sindh. GIS software was used to create all the spatial parameters for the analysis. The derived parameters are slope, drainage density, rainfall, land use/land cover, soil groups, Curve Number (CN), and runoff index, at a spatial resolution of 30 m. The data used to derive these layers include 30-meter resolution SRTM DEM, Landsat 8 imagery, rainfall from the National Centers for Environmental Prediction (NCEP), and soil data from the World Harmonized Soil Database (WHSD). The land use/land cover map is derived from Landsat 8 using supervised classification. Slope, the drainage network, and watersheds are delineated by terrain processing of the DEM. The Soil Conservation Service (SCS) method is implemented to estimate surface runoff from rainfall; prior to this, an SCS-CN grid is developed by integrating the soil and land use/land cover rasters. These layers, with some technical and ecological constraints, are assigned weights on the basis of suitability criteria. The pairwise comparison method, also known as the Analytic Hierarchy Process (AHP), is taken as the MCA technique for assigning weights to each decision element. All the parameters and groups of parameters are integrated using weighted overlay in the GIS environment to map suitable sites for the dams. The resultant layer is then classified into four classes: best suitable, suitable, moderate, and less suitable. This study demonstrates decision-making about suitable-site analysis for small dams using geospatial data with a minimal amount of ground data. The suitability maps can help water resource management organizations determine feasible rainwater harvesting (RWH) structures.
Keywords: remote sensing, GIS, AHP, RWH
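A sketch of the AHP step: deriving criterion weights from a pairwise comparison matrix via the principal eigenvector, with a consistency-ratio check; the judgments below are hypothetical, not the study's.

```python
import numpy as np

# Hypothetical Saaty-scale pairwise judgments for four criteria:
# slope, drainage density, rainfall, land use (illustrative values).
A = np.array([[1.0, 3.0, 5.0, 2.0],
              [1/3, 1.0, 3.0, 1/2],
              [1/5, 1/3, 1.0, 1/4],
              [1/2, 2.0, 4.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()              # criterion weights for weighted overlay

n = len(A)
ci = (eigvals[k].real - n) / (n - 1)  # consistency index
cr = ci / 0.90                        # random index RI = 0.90 for n = 4
print("weights:", weights.round(3), " CR:", round(cr, 3))
```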
Procedia PDF Downloads 389
27939 The Utilization of Big Data in Knowledge Management Creation
Authors: Daniel Brian Thompson, Subarmaniam Kannan
Abstract:
The weight of knowledge in this world and within the repositories of organizations has already reached immense capacity and is constantly increasing. To accommodate these constraints, Big Data implementations and algorithms are utilized to obtain new or enhanced knowledge for decision-making. The transition from data to knowledge provides the transformational change that delivers tangible benefits to the individuals implementing these practices. Today, various organizations derive knowledge from observations and intuitions, and this information or data is translated into best practices for knowledge acquisition, generation, and sharing. Through the widespread usage of Big Data, the main intention is to provide information that has been cleaned and analyzed to nurture tangible insights that an organization can apply to its knowledge-creation practices based on facts and figures. The translation of data into knowledge generates value for an organization, enabling decisive decisions about the transition to best practices. Without a strong foundation of knowledge and Big Data, businesses cannot grow and be enhanced within the competitive environment.
Keywords: big data, knowledge management, data driven, knowledge creation
Procedia PDF Downloads 116
27938 Four-Dimensional (4D) Decoding of Information Presented in Reports of Project Progress in Developing Countries
Authors: Vahid Khadjeh Anvary, Hamideh Karimi Yazdi
Abstract:
Generally, the tool for comparing the performance of each stage in the life of a project is the project progress figure for that period, which in most cases is determined one-dimensionally, with reference to only one of three factors (physical, time, and financial). In many projects in developing countries, there are controversies about the accuracy and the analysis of project progress reports that hinder definitive, engineering-grade conclusions on the status of a project. Identifying the weaknesses of this one-dimensional view of a project and establishing a reliable engineering approach for multi-dimensional decoding of the information obtainable from a project is of great importance in project management. This can help identify the hidden diseases of a project before irreversible symptoms, usually delays or increased execution costs, appear. The method used in this paper is to define and evaluate a hypothetical example project, analyzing different scenarios and comparing them numerically along with related graphs and tables. Finally, by analyzing different possible scenarios in the project, the possibility or impossibility of predicting their occurrence is examined through the evidence.
Keywords: physical progress, time progress, financial progress, delays, critical path
Procedia PDF Downloads 374
27937 A Comparison of South East Asian Face Emotion Classification Based on Optimized Ellipse Data Using Clustering Technique
Authors: M. Karthigayan, M. Rizon, Sazali Yaacob, R. Nagarajan, M. Muthukumaran, Thinaharan Ramachandran, Sargunam Thirugnanam
Abstract:
In this paper, a set of irregular and regular ellipse-fitting equations optimized using a genetic algorithm (GA) is applied to lip and eye features to classify human emotions. Two South East Asian (SEA) faces are considered in this work for the emotion classification. Six emotions and one neutral state are considered as the output. Each subject shows unique characteristics of the lip and eye features for the various emotions. GA is adopted to optimize the irregular ellipse characteristics of the lip and eye features for each emotion. That is, the top portion of the lip configuration is part of one ellipse and the bottom part of a different ellipse. Two ellipse-based fitness equations are proposed for the lip configuration, and the relevant parameters that define the emotions are listed. The GA method achieved reasonably successful classification of emotion. In some emotion classifications, the optimized data values of one emotion overlap with the ranges of other emotions. In order to overcome this overlapping problem between the optimized emotion values and, at the same time, improve the classification, a fuzzy c-means (FCM) clustering approach has been implemented to offer better classification. The GA-FCM approach offers reasonably good classification within the ranges of the clusters; this has been proven by applying it to two SEA subjects, improving the classification rate.
Keywords: ellipse fitness function, genetic algorithm, emotion recognition, fuzzy clustering
Procedia PDF Downloads 546
27936 Prediction of Bariatric Surgery Publications by Using Different Machine Learning Algorithms
Authors: Senol Dogan, Gunay Karli
Abstract:
Identification of relevant publications based on a Medline query is time-consuming and error-prone. An AI-based process has the potential to solve this problem without any manual work. To the best of our knowledge, our study is the first to investigate the ability of machine learning to identify relevant articles accurately. Five different machine learning algorithms were tested using 23 predictors based on several metadata fields attached to publications. We find that the boosted model is the best-performing algorithm, with an overall accuracy of 96%. In addition, the specificity and sensitivity of the algorithm are 97% and 93%, respectively. As a result of this work, we understood that the same procedure can be applied to understanding cancer gene expression big data.
Keywords: prediction of publications, machine learning, algorithms, bariatric surgery, comparison of algorithms, boosted tree, logistic regression, ANN model
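The reported accuracy, sensitivity, and specificity follow directly from a confusion matrix; a sketch on invented relevance labels:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on relevant), specificity."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return (tp + tn) / len(y_true), tp / (tp + fn), tn / (tn + fp)

# Invented relevance labels vs. classifier output for 20 publications.
y_true = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
acc, sens, spec = binary_metrics(y_true, y_pred)
print(f"accuracy = {acc:.2f}, sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```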
Procedia PDF Downloads 210
27935 Survey on Data Security Issues Through Cloud Computing Amongst SMEs in Nairobi County, Kenya
Authors: Masese Chuma Benard, Martin Onsiro Ronald
Abstract:
Businesses have been using cloud computing more frequently recently because they wish to take advantage of its benefits. However, employing cloud computing also introduces new security concerns, particularly with regard to data security, potential risks and weaknesses that could be exploited by attackers, and the various tactics and strategies that could be used to lessen these risks. This study examines data security issues in cloud computing amongst SMEs in Nairobi County, Kenya. The study used a sample size of 48, and the research approach was mixed methods. The findings show that the data owner has no control over the cloud merchant's data management procedures, and there is no way to ensure that data is handled legally. This implies a loss of control over the data stored in the cloud. Data and information stored in the cloud may face a range of availability issues due to internet outages; this can represent a significant risk to data kept in shared clouds. Integrity, availability, and secrecy are all affected.
Keywords: data security, cloud computing, information, information security, small and medium-sized firms (SMEs)
Procedia PDF Downloads 85
27934 Cloud Design for Storing Large Amounts of Data
Authors: M. Strémy, P. Závacký, P. Cuninka, M. Juhás
Abstract:
The main goal of this paper is to introduce our design of a private cloud for storing large amounts of data, especially pictures, and to provide a good technological backend for data analysis based on parallel processing and business intelligence. We have tested hypervisors, cloud management tools, storage for all data, and Hadoop for data analysis on unstructured data. Providing high availability, virtual network management, logical separation of projects, and rapid deployment of physical servers to our environment was also needed.
Keywords: cloud, glusterfs, hadoop, juju, kvm, maas, openstack, virtualization
Procedia PDF Downloads 353
27933 Comparison of Propofol versus Ketamine-Propofol Combination as an Anesthetic Agent in Supratentorial Tumors: A Randomized Controlled Study
Authors: Jakkireddy Sravani
Abstract:
Introduction: The maintenance of hemodynamic stability is of pivotal importance in supratentorial surgeries. Anesthesia for supratentorial tumors requires an understanding of localized or generalized rising intracranial pressure (ICP), regulation and maintenance of intracerebral perfusion, and avoidance of secondary systemic ischemic insults. We aimed to compare the effects of a combination of ketamine and propofol with propofol alone when used as induction and maintenance anesthetic agents during supratentorial tumor surgery. Methodology: This prospective, randomized, double-blinded controlled study was conducted at AIIMS Raipur after obtaining Institute Ethics Committee approval (1212/IEC-AIIMSRPR/2022, dated 15/10/2022), CTRI/2023/01/049298 registration, and written informed consent. Fifty-two supratentorial tumor patients scheduled for craniotomy and excision were included in the study. The patients were randomized into two groups: one group received a combination of ketamine and propofol, and the other group received propofol for induction and maintenance of anesthesia. Intraoperative hemodynamic stability and quality of brain relaxation were studied in both groups. Statistical analysis and technique: An MS Excel spreadsheet was used to code and record the data. Data analysis was done using IBM SPSS v23. The independent-sample t test was applied for continuously distributed data when two groups were compared, the chi-square test for categorical data, and the Wilcoxon test for non-normally distributed data. Results: The patients were comparable in terms of demographic profile, duration of surgery, and intraoperative input-output status. The trends in BIS over time were similar between the two groups (p-value = 1.00). Intraoperative hemodynamics (SBP, DBP, MAP) were better maintained in the ketamine-propofol group during induction and maintenance (p-value < 0.01). The quality of brain relaxation was comparable between the two groups (p-value = 0.364). Conclusion: The ketamine-propofol combination for induction and maintenance of anesthesia was associated with superior hemodynamic stability, required fewer vasopressors during excision of supratentorial tumors, provided adequate brain relaxation, and offered some degree of neuroprotection compared to propofol alone.
Keywords: supratentorial tumors, hemodynamic stability, brain relaxation, ketamine, propofol
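A sketch of the three statistical tests named in the methodology, run on synthetic data shaped like the trial (group sizes and values are invented, not the trial's measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic data shaped like the trial (n = 26 per arm); invented values.
map_ketofol = rng.normal(88, 7, 26)    # mean arterial pressure, mmHg
map_propofol = rng.normal(78, 9, 26)

# Independent-samples t test for a normally distributed outcome.
print(stats.ttest_ind(map_ketofol, map_propofol))

# Chi-square test for a categorical outcome (e.g., vasopressor use yes/no).
table = np.array([[4, 22],    # ketamine-propofol: used / not used
                  [12, 14]])  # propofol alone
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# Wilcoxon signed-rank test for a paired, non-normal variable
# (e.g., a relaxation score at two time points in the same patients).
score_t0 = rng.integers(1, 5, 26)
score_t1 = rng.integers(1, 5, 26)
print(stats.wilcoxon(score_t0, score_t1))
```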
Procedia PDF Downloads 25
27932 Estimation of Missing Values in Aggregate Level Spatial Data
Authors: Amitha Puranik, V. S. Binu, Seena Biju
Abstract:
Missing data is a common problem in spatial analysis, especially at the aggregate level. Missingness can occur in a covariate, in the response variable, or in both at a given location. Many missing data techniques are available to estimate missing values, but not all of these methods can be applied to spatial data, since the data are autocorrelated. Hence there is a need to develop a method that estimates missing values in both the response variable and covariates in spatial data by taking account of the spatial autocorrelation. The present study aims to develop a model to estimate missing data points at the aggregate level in spatial data by accounting for (a) spatial autocorrelation of the response variable, (b) spatial autocorrelation of the covariates, and (c) correlation between the covariates and the response variable. Estimating missing values in spatial data requires a model that explicitly accounts for the spatial autocorrelation. The proposed model not only accounts for spatial autocorrelation but also utilizes the correlations that exist between covariates, within covariates, and between the response variable and covariates. Precise estimation of missing data points in spatial data will increase the precision of the estimated effects of independent variables on the response variable in spatial regression analysis.
Keywords: spatial regression, missing data estimation, spatial autocorrelation, simulation analysis
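The paper's model is regression-based, but as a simple hedged illustration of spatially informed imputation, here is an inverse-distance-weighted fill of a missing aggregate value; the coordinates and rates are invented.

```python
import numpy as np

def idw_impute(coords, values, power=2):
    """Fill NaN entries with an inverse-distance-weighted average of the
    observed neighbours; a simple stand-in for the model-based spatial
    estimation the abstract proposes."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    out = values.copy()
    obs = ~np.isnan(values)
    for i in np.where(np.isnan(values))[0]:
        d = np.linalg.norm(coords[obs] - coords[i], axis=1)
        w = 1.0 / np.maximum(d, 1e-9) ** power
        out[i] = np.sum(w * values[obs]) / np.sum(w)
    return out

# Invented aggregate rates at six area centroids, one of them missing.
coords = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1), (2, 2)]
rates = [12.0, 14.5, np.nan, 15.1, 16.0, 17.2]
print(idw_impute(coords, rates))
```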
Procedia PDF Downloads 382
27931 Development of an Automatic Monitoring System Based on the Open Architecture Concept
Authors: Andrii Biloshchytskyi, Serik Omirbayev, Alexandr Neftissov, Sapar Toxanov, Svitlana Biloshchytska, Adil Faizullin
Abstract:
Kazakhstan has adopted a carbon neutrality strategy running until 2060. In accordance with this strategy, various tools must be introduced to maintain environmental safety. The use of IoT, combined with the characteristics and requirements of Kazakhstan's environmental legislation, makes it possible to develop a modern environmental monitoring system. The article proposes a solution for developing an example of an automated system for the continuous collection of data on the concentration of pollutants in the atmosphere, based on an open architecture. An Arduino-based device acts as the microcontroller. It should be noted that the transmission of measured values is carried out via an open wireless communication protocol. The architecture of the system, which was used to build a prototype based on sensors, an Arduino microcontroller, and a wireless data transmission module, is presented. The selection of elementary components may change depending on the requirements of the system; the introduction of new units is limited by the number of ports. The openness of the solution allows the configuration to be changed depending on conditions. The advantages of the solution are openness, low cost, versatility, and mobility. However, no comparison of the working processes of the proposed solution with traditional ones is given.
Keywords: environmental monitoring, greenhouse gas emissions, environmental pollution, Industry 4.0, IoT, microcontroller, automated monitoring system
Procedia PDF Downloads 48
27930 Data Refinement Enhances the Accuracy of Short-Term Traffic Latency Prediction
Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong
Abstract:
Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost that yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than the one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications to predicting the full set of latencies among the various locations in the freeway system.
Keywords: data refinement, machine learning, mutual information, short-term latency prediction
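A sketch of conclusion (2), comparing predictions against the 15-minutes-ago baseline, and of the mutual information calculation in conclusion (3); the latency series is synthetic, and sklearn's estimator stands in for whatever MI estimator the authors used.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

# Synthetic 5-minute median-latency series (minutes); illustrative only.
rng = np.random.default_rng(3)
latency = 15 + 3 * np.sin(np.arange(300) / 20) + rng.exponential(2, 300)

horizon = 3                       # 3 steps of 5 minutes = 15 minutes ahead
y_true = latency[horizon:]
baseline = latency[:-horizon]     # "median latency 15 minutes ago"
print(f"baseline MSE: {np.mean((y_true - baseline) ** 2):.2f}")

# Mutual information between the past latency and the future latency.
mi = mutual_info_regression(baseline.reshape(-1, 1), y_true, random_state=0)
print(f"MI(past; future) = {mi[0]:.3f} nats")
```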
Procedia PDF Downloads 169
27929 Estimation of Particle Size Distribution Using Magnetization Data
Authors: Navneet Kaur, S. D. Tiwari
Abstract:
Magnetic nanoparticles possess fascinating properties that make their behavior unique in comparison to the corresponding bulk materials. Superparamagnetism is one such interesting phenomenon, exhibited only by small particles of magnetic materials. In this state, the thermal energy of the particles becomes greater than their magnetic anisotropy energy, so the particle magnetic moment vectors fluctuate between states of minimum energy. This situation is similar to the paramagnetism of non-interacting ions and is termed superparamagnetism. The magnetization of such systems has been described by the Langevin function, but the fit parameters estimated this way are found to be unphysical, owing to the neglect of the particle size distribution. In this work, an analysis of magnetization data on NiO nanoparticles is presented that takes the particle size distribution into account. NiO nanoparticles of two different sizes were prepared by heating freshly synthesized Ni(OH)₂ at different temperatures. Room-temperature X-ray diffraction patterns confirm the formation of a single phase of NiO. The diffraction lines are quite broad, indicating the nanocrystalline nature of the samples; the average crystallite sizes are estimated to be about 6 and 8 nm. The samples were also characterized by transmission electron microscopy. The magnetization of both samples was measured as a function of temperature and applied magnetic field. Zero-field-cooled and field-cooled magnetization were measured as a function of temperature to determine the bifurcation temperature. The magnetization was also measured at several temperatures in the superparamagnetic region. The data were fitted to an appropriate expression incorporating a distribution in particle size, following a least-squares fit procedure. The computer codes are written in Python. The presented analysis proves very useful for estimating the particle size distribution present in the samples. The estimated distributions are compared with those determined from transmission electron micrographs.
Keywords: anisotropy, magnetization, nanoparticles, superparamagnetism
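Since the abstract notes the codes are written in Python, a hedged sketch of the core fit is natural: a Langevin function averaged over a lognormal moment distribution, fitted by least squares; the parameter values and the exact weighting are assumptions, not the paper's expression.

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 1.380649e-23   # Boltzmann constant, J/K
MU_B = 9.274e-24    # Bohr magneton, J/T
T = 300.0           # temperature, K

def langevin(x):
    x = np.where(np.abs(x) < 1e-8, 1e-8, x)  # guard the 1/x singularity
    return 1.0 / np.tanh(x) - 1.0 / x

def m_lognormal(H, m_s, n_med, sigma):
    """Langevin magnetization averaged over a lognormal distribution of
    particle moments (n_med = median moment in Bohr magnetons). The
    moment-weighted averaging here is a common choice, assumed rather
    than taken from the paper."""
    ln_n = np.linspace(np.log(n_med) - 4 * sigma, np.log(n_med) + 4 * sigma, 201)
    n = np.exp(ln_n)
    w = np.exp(-(ln_n - np.log(n_med)) ** 2 / (2 * sigma ** 2)) * n
    w /= w.sum()
    return m_s * (langevin(np.outer(H, n * MU_B) / (kB * T)) @ w)

H = np.linspace(-5, 5, 61)  # applied field, tesla
rng = np.random.default_rng(4)
data = m_lognormal(H, 12.0, 1000.0, 0.5) + rng.normal(0, 0.05, H.size)

popt, _ = curve_fit(m_lognormal, H, data, p0=[10.0, 800.0, 0.4])
print("Ms, median moment (muB), sigma:", popt)
```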
Procedia PDF Downloads 143
27928 Prediction of Remaining Life of Industrial Cutting Tools with Deep Learning-Assisted Image Processing Techniques
Authors: Gizem Eser Erdek
Abstract:
This study investigates predicting the remaining life of industrial cutting tools used in the production process with deep learning methods. When the life of a cutting tool decreases, it damages the raw material it is processing. This study aims to predict the remaining life of the cutting tool based on the damage caused to the raw material. For this, hole photos were collected from the hole-drilling machine for 8 months. Photos were labeled into 5 classes according to hole quality. In this way, the problem was transformed into a classification problem. Using the prepared dataset, a model was created with convolutional neural networks, a deep learning method. In addition, the VGGNet and ResNet architectures, which have been successful in the literature, were tested on the dataset. A hybrid model using convolutional neural networks and support vector machines was also used for comparison. When all models are compared, the model using convolutional neural networks was found to give successful results, with a 74% accuracy rate. In preliminary studies, the dataset was arranged to include only the best and worst classes, and the study gave ~93% accuracy when a binary classification model was applied. The results of this study showed that the remaining life of cutting tools can be predicted by deep learning methods based on the damage to the raw material. Experiments have proven that deep learning methods can be used as an alternative for cutting tool life estimation.
Keywords: classification, convolutional neural network, deep learning, remaining life of industrial cutting tools, ResNet, support vector machine, VGGNet
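A minimal Keras sketch of the 5-class CNN classifier the study describes; the input size and layer widths are assumptions, not the paper's architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Minimal 5-class hole-quality classifier; the 128x128 grayscale input and
# layer widths are assumptions, not the paper's architecture.
model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(5, activation="softmax"),  # five hole-quality classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=20, validation_split=0.2)
```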
Procedia PDF Downloads 77