Search results for: data sets

24840 Identity Verification Using k-NN Classifiers and Autistic Genetic Data

Authors: Fuad M. Alkoot

Abstract:

DNA data have been used in forensics for decades. However, current research looks at using DNA as a biometric identity verification modality, with the goal of improving the speed of identification. We aim at using gene data that was initially collected for autism detection to determine whether, and how accurately, these data can be used for identification applications. Mainly, our goal is to find out whether our data preprocessing technique yields data useful as a biometric identification tool. We experiment with the nearest neighbor classifier to identify subjects. Results show that the optimal classification rate is achieved when the test set is corrupted by normally distributed noise with zero mean and a standard deviation of 1. The classification rate remains close to optimal at higher noise standard deviations, up to 3. This shows that the data can be used for identity verification with high accuracy using a simple classifier such as the k-nearest neighbor (k-NN).
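
As an illustration of the experiment described above, the following is a minimal Python sketch (using scikit-learn and a synthetic stand-in for the preprocessed gene data, not the actual autism data set): a k-NN classifier is trained on clean features and scored on a test set corrupted with zero-mean Gaussian noise of a chosen standard deviation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def knn_accuracy_under_noise(X, y, noise_std=1.0, k=1, seed=0):
    """Train k-NN on clean data, test on a copy corrupted with N(0, noise_std) noise."""
    rng = np.random.default_rng(seed)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=seed, stratify=y)
    X_test_noisy = X_test + rng.normal(0.0, noise_std, size=X_test.shape)
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    return clf.score(X_test_noisy, y_test)

# Example with synthetic stand-in data (the real study used preprocessed gene data)
X = np.random.rand(200, 50)
y = np.random.randint(0, 10, size=200)
for std in (0.0, 1.0, 3.0):
    print(std, knn_accuracy_under_noise(X, y, noise_std=std))
```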

Keywords: biometrics, genetic data, identity verification, k nearest neighbor

Procedia PDF Downloads 251
24839 A Critical Examination of the Iranian National Legal Regulation of the Ecosystem of Lake Urmia

Authors: Siavash Ostovar

Abstract:

The Iranian national Law on the Ramsar Convention (officially known as the Convention on International Wetlands and Aquatic Birds' Habitat) was approved by the Senate and became law in 1974 after ratification by the National Council. Other national laws also aim at preserving the environment in the country. However, Lake Urmia, which was declared a wetland of international importance by the Ramsar Convention in 1971 and designated a UNESCO Biosphere Reserve in 1976, is now at the brink of total disappearance, due mainly to climate change, water mismanagement, dam construction, and agricultural deficiencies. Lake Urmia is located in the north-western corner of Iran. It is the third largest salt water lake in the world and the largest lake in the Middle East. Locally, it is designated as a National Park. It is, indeed, a unique lake both nationally and internationally. This study investigated how effective the national legal regulation of the ecosystem of Lake Urmia is in Iran. To do so, the Iranian national laws enforcing the Ramsar Convention in the country were investigated, comprising three nationally established bodies of law: (i) the five sets of laws for the programme of economic, social and cultural development of the Islamic Republic of Iran, (ii) the Iranian Penal Code, and (iii) the law of conservation, restoration and management of the country. Using black letter law methods, it was revealed that: (i) regarding the national five sets of laws, the benchmark to force the implementation of the legislation and policies is not set clearly; in other words, there is no clear guarantee of enforcement of these legislations and policies in cases of deviation and violation; (ii) regarding the Penal Code, there is a lack of precise determination of environmental crimes, of appropriate penalties for those crimes, of proper implementation of those penalties, and of monitoring and training programmes; (iii) regarding the law of conservation, restoration and management, implementation of this regulation is deferred to the preparation, announcement and approval of several categories of enactments and guidelines. In fact, this study used a national environmental catastrophe caused by the drying up of Lake Urmia as an occasion to direct attention to the weaknesses of the existing national rules and regulations. Finally, as we all depend on the natural world for our survival, this study recommended further research on every environmental issue, including Lake Urmia.

Keywords: conservation, environmental law, Lake Urmia, national laws, Ramsar Convention, water management, wetlands

Procedia PDF Downloads 196
24838 A Review on Intelligent Systems for Geoscience

Authors: R. Palson Kennedy, P. Kiran Sai

Abstract:

This article introduces machine learning (ML) researchers to the hurdles that geoscience problems present, as well as the opportunities for improvement in both ML and the geosciences, and presents a review from the data life cycle perspective to meet that need. Numerous facets of the geosciences present unique difficulties for the study of intelligent systems. Geoscience data are notoriously difficult to analyze, since they are frequently unpredictable, intermittent, sparse, multi-resolution, and multi-scale. The first half of the article addresses data science's essential concepts and theoretical underpinnings, while the second half covers key themes and shares experiences from current publications focused on each stage of the data life cycle. Finally, themes such as open science, smart data, and team science are considered.

Keywords: data science, intelligent system, machine learning, big data, data life cycle, recent development, geoscience

Procedia PDF Downloads 131
24837 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh

Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila

Abstract:

Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. Therefore, it is important for an organization to ensure that the data it uses is of high quality. This is where the concept of data mesh comes in. Data mesh is a decentralized organizational and architectural approach to data management that can help organizations improve the quality of data. The concept of data mesh was first introduced in 2020. Its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can help organizations improve data quality by reducing the reliance on centralized data teams and allowing domain experts to take charge of their data. This paper discusses how a set of elements, including data mesh, are tools capable of increasing data quality. One of the key benefits of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata and thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity in responsibilities. With data mesh, domain experts are responsible for managing their own data, which provides clarity in roles and responsibilities and improves data quality. Additionally, data mesh can contribute to a new form of organization that is more agile and adaptable. By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn can improve overall performance by enabling better business insights through better reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data. This helps identify and address data quality problems quickly, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization. This can lead to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can help enhance data quality by automating many data-related tasks, like data cleaning and data validation. By integrating AI into data mesh, organizations can further enhance the quality of their data. The concepts mentioned above are illustrated by experience feedback from AEKIDEN, an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.

Keywords: data culture, data-driven organization, data mesh, data quality for business success

Procedia PDF Downloads 130
24836 Exploring the Issue of Occult Hypoperfusion in the Pre-Hospital Setting

Authors: A. Fordham, A. Hudson

Abstract:

Background: Studies have suggested that 16-25% of normotensive trauma patients with no clinical signs of shock have abnormal lactate and BD readings evidencing shock, a phenomenon known as occult hypoperfusion (OH). In light of the scarce quantity of evidence currently documenting OH, this study aimed to identify the prevalence of OH in the pre-hospital setting and explore ways to improve its identification and management. Methods: A quantitative retrospective data analysis was carried out on 75 sets of patient records for trauma patients treated by Kent Surrey Sussex Air Ambulance Trust between November 2013 and October 2014. The KSS HEMS notes and subsequent ED notes were collected. Trends between patients' SBP on scene, whether or not they received PRBCs on scene, and lactate and BD readings in the ED were assessed. Patients' KSS HEMS notes written by the HEMS crew were also reviewed and recorded. Results: Suspected OH was identified in 7% of the patients who did not receive PRBCs in the pre-hospital phase. SBP heavily influenced the physicians' decision on whether or not to transfuse PRBCs in the pre-hospital phase. Preliminary conclusions: OH is an under-studied and underestimated phenomenon. We suggest that a prospective trial be carried out to evaluate whether detecting trauma patients' tissue perfusion status in the pre-hospital phase, using portable devices capable of measuring serum BD and/or lactate, could aid more accurate detection and management of all haemorrhaging trauma patients, including patients with OH.

Keywords: occult hypoperfusion, PRBC transfusion, point of care testing, pre-hospital emergency medicine, trauma

Procedia PDF Downloads 357
24835 Biofilm Text Classifiers Developed Using Natural Language Processing and Unsupervised Learning Approach

Authors: Kanika Gupta, Ashok Kumar

Abstract:

Biofilms are dense, highly hydrated cell clusters that are irreversibly attached to a substratum, to an interface or to each other, and are embedded in a self-produced gelatinous matrix composed of extracellular polymeric substances. Research in the biofilm field has become very significant, as biofilms show high mechanical resilience and resistance to antibiotic treatment and constitute a significant problem both in healthcare and in other industries related to microorganisms. The massive amount of information, both stated and hidden, in the biofilm literature is growing exponentially; it is therefore not feasible for researchers and practitioners to manually extract and relate information from different written resources. So, the current work proposes and discusses the use of text mining techniques for the extraction of information from a biofilm literature corpus containing 34306 documents. It is very difficult and expensive to obtain annotated material for biomedical literature, as the literature is unstructured, i.e., free text. Therefore, we considered an unsupervised approach, where no annotated training material is necessary, and using this approach we developed a system that classifies the text on the basis of growth and development, drug effects, radiation effects, classification, and physiology of biofilms. For this, a two-step structure was used: the first step is to extract keywords from the biofilm literature using a metathesaurus and standard natural language processing tools like Rapid Miner_v5.3, and the second step is to discover relations between the genes extracted from the whole set of biofilm literature using pubmed.mineR_v1.0.11. We used an unsupervised approach, which is the machine learning task of inferring a function to describe hidden structure from 'unlabeled' data, on the above-extracted datasets to develop classifiers, using WinPython-64 bit_v3.5.4.0Qt5 and R studio_v0.99.467 packages, which automatically classify the text using the mentioned sets. The developed classifiers were tested on a large data set of biofilm literature, which showed that the proposed unsupervised approach is promising as well as suited for a semi-automatic labeling of the extracted relations. The entire information was stored in a relational database hosted locally on the server. The generated biofilm vocabulary and gene relations will be significant for researchers dealing with biofilm research, making their search easy and efficient, as the keywords and genes can be directly mapped to the documents used for database development.
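
As a minimal sketch of the unsupervised classification step, the snippet below uses Python with scikit-learn (rather than the RapidMiner/pubmed.mineR/R pipeline actually used in the study) to vectorize a stand-in corpus with TF-IDF and group documents with k-means; the documents and cluster count are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "biofilm growth and development on medical implants",
    "antibiotic drug effects on mature biofilms",
    "radiation effects on biofilm-forming bacteria",
    "physiology and matrix composition of biofilms",
]  # stand-in corpus; the study used 34306 documents

# TF-IDF keyword weighting, then unsupervised grouping into k topic clusters
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

for doc, label in zip(docs, km.labels_):
    print(label, doc[:50])
```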

Keywords: biofilms literature, classifiers development, text mining, unsupervised learning approach, unstructured data, relational database

Procedia PDF Downloads 166
24834 Big Data Analysis with RHadoop

Authors: Ji Eun Shin, Byung Ho Jung, Dong Hoon Lim

Abstract:

It is almost impossible to store or analyze big data, which increases exponentially, with traditional technologies. Hadoop is a new technology that makes this possible. The R programming language is by far the most popular statistical tool for big data analysis based on distributed processing with Hadoop technology. With RHadoop, which integrates the R and Hadoop environments, we implemented parallel multiple regression analysis with different sizes of actual data. Experimental results showed that our RHadoop system became much faster as the number of data nodes increased. We also compared the performance of our RHadoop system with the lm function and the biglm package available on bigmemory. The results showed that our RHadoop system was faster than the other packages, owing to parallel processing that increases the number of map tasks as the size of the data increases.
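
The study itself is implemented in R with RHadoop; purely as a conceptual analogue, the Python sketch below shows how a multiple regression can be parallelized in the MapReduce style, with each map task returning the sufficient statistics XᵀX and Xᵀy for its data chunk and the reduce step summing them before solving the normal equations.

```python
import numpy as np
from multiprocessing import Pool

def map_task(chunk):
    """Map step: per-chunk sufficient statistics for least squares."""
    X, y = chunk
    return X.T @ X, X.T @ y

def parallel_ols(X, y, n_chunks=4):
    chunks = list(zip(np.array_split(X, n_chunks), np.array_split(y, n_chunks)))
    with Pool(n_chunks) as pool:
        parts = pool.map(map_task, chunks)
    # Reduce step: sum the statistics and solve the normal equations
    XtX = sum(p[0] for p in parts)
    Xty = sum(p[1] for p in parts)
    return np.linalg.solve(XtX, Xty)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(10000), rng.normal(size=(10000, 3))])
    beta_true = np.array([1.0, 2.0, -1.0, 0.5])
    y = X @ beta_true + rng.normal(scale=0.1, size=10000)
    print(parallel_ols(X, y))
```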

Keywords: big data, Hadoop, parallel regression analysis, R, RHadoop

Procedia PDF Downloads 432
24833 A Mutually Exclusive Task Generation Method Based on Data Augmentation

Authors: Haojie Wang, Xun Li, Rui Yin

Abstract:

In order to solve the memorization overfitting problem in the meta-learning MAML algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutex task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution in the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth of computation, this paper also proposes a key-data extraction method that extracts only part of the data to generate the mutex task. The experiments show that the method of generating mutually exclusive tasks can effectively solve the memorization overfitting problem in the meta-learning MAML algorithm.
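
A minimal illustration of the general idea, assuming a label-permutation scheme (the paper's exact augmentation and key-data extraction procedures are not reproduced here): each generated task remaps the labels so that the same input corresponds to different labels across tasks, making the tasks mutually exclusive.

```python
import numpy as np

def make_mutex_tasks(X, y, n_tasks=5, seed=0):
    """Generate tasks in which the same input is mapped to different labels.

    Each task applies its own random permutation of the label set, so the
    label of a given example is inconsistent across tasks and cannot be
    memorized by a single non-adapting model.
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    tasks = []
    for _ in range(n_tasks):
        perm = rng.permutation(classes)
        remap = dict(zip(classes, perm))
        y_task = np.array([remap[label] for label in y])
        tasks.append((X, y_task))
    return tasks

# Toy example: 6 samples, 3 classes
X = np.arange(12).reshape(6, 2)
y = np.array([0, 1, 2, 0, 1, 2])
for i, (_, y_t) in enumerate(make_mutex_tasks(X, y, n_tasks=3)):
    print(f"task {i}: labels = {y_t}")
```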

Keywords: data augmentation, mutex task generation, meta-learning, text classification

Procedia PDF Downloads 89
24832 Efficient Positioning of Data Aggregation Point for Wireless Sensor Network

Authors: Sifat Rahman Ahona, Rifat Tasnim, Naima Hassan

Abstract:

Data aggregation is a helpful technique for reducing the data communication overhead in wireless sensor networks. One of the important tasks of data aggregation is the positioning of the aggregator points. There is a lot of work on data aggregation, but the efficient positioning of the aggregator points has received much less attention. In this paper, the authors focus on the positioning, or placement, of the aggregation points in a wireless sensor network. An algorithm is proposed to select the aggregator positions for a scenario where aggregator nodes are more powerful than sensor nodes.
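
The paper proposes its own selection algorithm; as a simple illustration of the placement problem only, the sketch below clusters sensor coordinates with k-means and, for each cluster, picks the candidate (more powerful) node closest to the centroid as the aggregation point.

```python
import numpy as np
from sklearn.cluster import KMeans

def place_aggregators(sensor_xy, candidate_xy, n_aggregators=3, seed=0):
    """Cluster sensors and choose, per cluster, the nearest candidate node."""
    km = KMeans(n_clusters=n_aggregators, n_init=10, random_state=seed).fit(sensor_xy)
    chosen = []
    for centroid in km.cluster_centers_:
        dists = np.linalg.norm(candidate_xy - centroid, axis=1)
        chosen.append(int(np.argmin(dists)))
    return chosen  # indices into candidate_xy

rng = np.random.default_rng(1)
sensors = rng.uniform(0, 100, size=(60, 2))     # ordinary sensor nodes
candidates = rng.uniform(0, 100, size=(10, 2))  # more powerful candidate nodes
print(place_aggregators(sensors, candidates))
```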

Keywords: aggregation point, data communication, data aggregation, wireless sensor network

Procedia PDF Downloads 152
24831 Spatial Econometric Approaches for Count Data: An Overview and New Directions

Authors: Paula Simões, Isabel Natário

Abstract:

This paper reviews a number of theoretical aspects of implementing an explicit spatial perspective in econometrics for modelling non-continuous data in general, and count data in particular. It provides an overview of the several spatial econometric approaches that are available to model data collected with reference to location in space, from the classical spatial econometrics approaches to the recent developments in spatial econometrics for modelling count data in a Bayesian hierarchical setting. Considerable attention is paid to the inferential framework necessary for structurally consistent spatial econometric count models incorporating spatial lag autocorrelation, to the corresponding estimation and testing procedures under different assumptions, and to the constraints and implications embedded in the various specifications in the literature. This review combines insights from the classical spatial econometrics literature as well as from hierarchical modelling and analysis of spatial data, in order to look for possible new directions in the processing of count data in a spatial hierarchical Bayesian econometric context.
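
As a minimal illustration of one simple member of this model family (not of the Bayesian hierarchical approaches reviewed above), the sketch below fits a Poisson regression that includes a spatially lagged covariate built from a row-standardized nearest-neighbour weight matrix, using synthetic data and statsmodels.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
coords = rng.uniform(0, 10, size=(n, 2))

# Row-standardized weights based on the 5 nearest neighbours of each location
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
W = np.zeros((n, n))
for i in range(n):
    nbrs = np.argsort(d[i])[1:6]
    W[i, nbrs] = 1.0
W = W / W.sum(axis=1, keepdims=True)

x = rng.normal(size=n)
y = rng.poisson(np.exp(0.3 + 0.5 * x + 0.4 * (W @ x)))  # synthetic counts

# Poisson GLM with the covariate and its spatial lag (an SLX-type specification)
X = sm.add_constant(np.column_stack([x, W @ x]))
model = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(model.params)
```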

Keywords: spatial data analysis, spatial econometrics, Bayesian hierarchical models, count data

Procedia PDF Downloads 589
24830 Machine Learning and Deep Learning Approach for People Recognition and Tracking in Crowd for Safety Monitoring

Authors: A. Degale Desta, Cheng Jian

Abstract:

Deep learning applications in computer vision are rapidly advancing, giving systems the ability to monitor the public and quickly identify potentially anomalous behaviour in crowd scenes. The purpose of the current work is therefore to improve the safety of people in crowd events against panic behaviour by introducing the innovative idea of Aggregation of Ensembles (AOE), which makes use of pre-trained ConvNets and a pool of classifiers to find anomalies in video data with packed scenes. Building on algorithms such as K-means, KNN, CNN, SVD, Faster R-CNN, and YOLOv5, which learn different levels of semantic representation from crowd videos, the proposed approach leverages an ensemble of various fine-tuned convolutional neural networks (CNNs), allowing for the extraction of enriched feature sets. In addition to the above algorithms, a long short-term memory (LSTM) neural network is used to forecast future feature values, together with a handcrafted feature that takes the peculiarities of the crowd into consideration to understand human behaviour. Experiments are run on well-known datasets of panic situations to assess the effectiveness and precision of the suggested method. The results reveal that, compared to state-of-the-art methodologies, the system produces better and more promising results in terms of accuracy and processing speed.
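
A minimal sketch of the aggregation step only, with random stand-ins for the per-frame anomaly probabilities produced by the fine-tuned networks; the full AOE system additionally relies on pretrained ConvNets, LSTM forecasting, and handcrafted crowd features.

```python
import numpy as np

def aggregate_ensemble(score_list, threshold=0.5):
    """Average per-frame anomaly probabilities from several models and flag frames."""
    scores = np.mean(np.stack(score_list, axis=0), axis=0)  # ensemble average
    return scores, scores > threshold

rng = np.random.default_rng(0)
n_models, n_frames = 4, 10
# Stand-in for the anomaly probability emitted by each fine-tuned CNN per frame
per_model_scores = [rng.uniform(0, 1, size=n_frames) for _ in range(n_models)]

avg, flags = aggregate_ensemble(per_model_scores, threshold=0.6)
print(np.round(avg, 2))
print(flags.astype(int))
```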

Keywords: action recognition, computer vision, crowd detecting and tracking, deep learning

Procedia PDF Downloads 156
24829 A NoSQL Based Approach for Real-Time Managing of Robotics's Data

Authors: Gueidi Afef, Gharsellaoui Hamza, Ben Ahmed Samir

Abstract:

This paper deals with the continual growth of data, which has led to the emergence of new data management solutions: NoSQL databases. These have spread across several areas such as personalization, profile management, big data in real time, content management, catalogues, customer views, mobile applications, the Internet of Things, digital communication, and fraud detection. Nowadays, the use of these database management systems is increasing. These systems store data very well, and with the trend towards big data, new storage challenges demand new structures and methods for managing enterprise data. The new intelligent machines in the e-learning sector thrive on data: with more data, smart machines can learn more and faster. Robotics is the use case on which our tests focus. Implementing NoSQL for robotics wrestles all the data the robots acquire into usable form, because with ordinary approaches to robotics we face severe limits in managing and finding the exact information in real time. Our proposed approach is demonstrated by experimental studies and a running example used as a use case.
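
As an illustration of the kind of schema-less, real-time storage and retrieval described above, the sketch below uses MongoDB through the pymongo driver (assuming a local MongoDB instance; the collection and field names are hypothetical, not the authors' schema).

```python
from datetime import datetime, timedelta
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")          # local MongoDB assumed
telemetry = client["robotics"]["telemetry"]
telemetry.create_index([("robot_id", ASCENDING), ("ts", ASCENDING)])

# Store a schema-less sensor reading as it arrives
telemetry.insert_one({
    "robot_id": "arm-01",
    "ts": datetime.utcnow(),
    "joint_angles": [0.12, 1.04, -0.33],
    "battery": 0.87,
})

# Real-time style query: readings from the last 10 seconds for one robot
recent = telemetry.find({
    "robot_id": "arm-01",
    "ts": {"$gte": datetime.utcnow() - timedelta(seconds=10)},
}).sort("ts", ASCENDING)
for doc in recent:
    print(doc["ts"], doc["battery"])
```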

Keywords: NoSQL databases, database management systems, robotics, big data

Procedia PDF Downloads 348
24828 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis

Authors: C. B. Le, V. N. Pham

Abstract:

In modern data analysis, multi-source data appears more and more often in real applications. Multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide information about different aspects of the data; therefore, linking multi-source data is essential to improve clustering performance. However, in practice, multi-source data is often heterogeneous, uncertain, and large, which is considered a major challenge of multi-source data analysis. Ensembling is a versatile machine learning approach in which learning techniques can work in parallel on big data. Clustering ensembles have been shown to outperform standard clustering algorithms in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis. The fuzzy optimized multi-objective clustering ensemble method is called FOMOCE. Firstly, a clustering ensemble mathematical model based on the structure of a multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. The experiments were performed on standard sample data sets. The experimental results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
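
For orientation, the sketch below shows a generic clustering-ensemble step (not FOMOCE itself, and without the fuzzy multi-objective or dark-knowledge components): several base clusterings vote through a co-association matrix, which is then cut with hierarchical clustering to obtain a consensus partition.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)  # stand-in data

# Base clusterings (in practice these could come from different sources/algorithms)
base_labels = [KMeans(n_clusters=k, n_init=10, random_state=s).fit_predict(X)
               for s, k in enumerate([2, 3, 3, 4])]

# Co-association matrix: fraction of base clusterings placing i and j together
n = X.shape[0]
coassoc = np.zeros((n, n))
for labels in base_labels:
    coassoc += (labels[:, None] == labels[None, :]).astype(float)
coassoc /= len(base_labels)

# Consensus partition: hierarchical clustering on the distance 1 - co-association
consensus = AgglomerativeClustering(
    n_clusters=3, metric="precomputed", linkage="average"
).fit_predict(1.0 - coassoc)
print(np.bincount(consensus))
```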

Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering

Procedia PDF Downloads 184
24827 SiC Merged PiN and Schottky (MPS) Power Diodes Electrothermal Modeling in SPICE

Authors: A. Lakrim, D. Tahri

Abstract:

This paper sets out a behavioral macro-model of a Merged PiN and Schottky (MPS) diode based on silicon carbide (SiC). The model holds good for both static and dynamic electrothermal simulations for industrial applications. Its parameters have been extracted from datasheet curves using the Simulated Annealing (SA) optimization method for SiC MPS diodes available in the industry. The model is implemented using the Analog Behavioral Modeling (ABM) facility of PSPICE. The thermal behavior of the devices was also taken into consideration by making use of a Foster canonical network, identified from electro-thermal measurements provided by the manufacturer of the device.
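
A short numerical illustration of the Foster canonical network used for the thermal part: the step-response thermal impedance is a sum of RC terms, and the junction temperature rise follows from the dissipated power. The R and tau values below are placeholders, not the manufacturer's data.

```python
import numpy as np

def foster_zth(t, R, tau):
    """Step-response thermal impedance of an n-stage Foster network."""
    t = np.asarray(t, dtype=float)
    return sum(Ri * (1.0 - np.exp(-t / ti)) for Ri, ti in zip(R, tau))

# Placeholder Foster parameters (K/W and s); real values come from the datasheet fit
R = [0.05, 0.20, 0.45]
tau = [1e-4, 1e-2, 0.5]

t = np.logspace(-5, 1, 7)          # s
P = 25.0                           # W dissipated in the diode
T_ambient = 25.0                   # degC
T_junction = T_ambient + P * foster_zth(t, R, tau)
for ti, Tj in zip(t, T_junction):
    print(f"t = {ti:.0e} s  Tj = {Tj:.1f} degC")
```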

Keywords: SiC MPS diode, electro-thermal, SPICE model, behavioral macro-model

Procedia PDF Downloads 404
24826 Impact of Climate Change on Flow Regime in Himalayan Basins, Nepal

Authors: Tirtha Raj Adhikari, Lochan Prasad Devkota

Abstract:

This research studied the hydrological regime of three glacierized river basins in the Khumbu, Langtang and Annapurna regions of Nepal using the Hydrologiska Byråns Vattenbalansavdelning (HBV) model, HBV-light 3.0. Future discharge scenarios are also studied using downscaled climate data derived from a statistical downscaling method. General Circulation Models (GCMs) successfully simulate future climate variability and climate change on a global scale; however, poor spatial resolution constrains their application for impact studies at a regional or local level. The dynamically downscaled precipitation and temperature data from the Coupled Global Circulation Model 3 (CGCM3) were used for the climate projection under the A2 and A1B SRES scenarios. In addition, observed historical temperature, precipitation and discharge data were collected from 14 different hydro-meteorological locations for the implementation of this study, which includes watershed and hydro-meteorological characterization, trend analysis and water balance computation. The simulated precipitation and temperature were corrected for bias before being used in the HBV-light 3.0 conceptual rainfall-runoff model to predict the flow regime, in which the Groups Algorithms Programming (GAP) optimization approach and subsequent calibration were used to obtain several parameter sets that reproduced the observed streamflow. Except in summer, the analysis showed increasing trends in annual as well as seasonal precipitation during the period 2001-2060 for both the A2 and A1B scenarios over the three basins under investigation. In these river basins, the model projected warmer days in every season of the entire period from 2001 to 2060 for both the A1B and A2 scenarios. These warming trends are higher in maximum than in minimum temperatures throughout the year, indicating an increasing trend of the daily temperature range due to the recent global warming phenomenon. Furthermore, there are decreasing trends in summer discharge in the Langtang Khola (Langtang region) basin, whereas summer discharge is increasing in the Modi Khola (Annapurna region) and Dudh Koshi (Khumbu region) river basins. The changes in flow regime are more pronounced during the later parts of the future decades than during the earlier parts in all basins. Annual water surpluses of 1419 mm, 177 mm and 49 mm are observed in the Annapurna, Langtang and Khumbu regions, respectively.
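
The abstract states that the simulated precipitation and temperature were bias-corrected before being fed to HBV-light; the sketch below illustrates one common correction, monthly linear scaling (multiplicative for precipitation, additive for temperature), which may differ from the exact method used by the authors.

```python
import numpy as np
import pandas as pd

def linear_scaling(obs, sim, sim_future):
    """Monthly multiplicative bias correction (suitable for precipitation)."""
    factors = obs.groupby(obs.index.month).mean() / sim.groupby(sim.index.month).mean()
    return sim_future * sim_future.index.month.map(factors).to_numpy()

def additive_scaling(obs, sim, sim_future):
    """Monthly additive bias correction (suitable for temperature)."""
    offsets = obs.groupby(obs.index.month).mean() - sim.groupby(sim.index.month).mean()
    return sim_future + sim_future.index.month.map(offsets).to_numpy()

# Toy daily series (stand-ins for station observations and GCM output)
idx = pd.date_range("1991-01-01", "2000-12-31", freq="D")
rng = np.random.default_rng(0)
obs_p = pd.Series(rng.gamma(2.0, 3.0, len(idx)), index=idx)
sim_p = pd.Series(rng.gamma(2.0, 2.0, len(idx)), index=idx)   # biased low
fut_idx = pd.date_range("2001-01-01", "2010-12-31", freq="D")
sim_fut_p = pd.Series(rng.gamma(2.0, 2.0, len(fut_idx)), index=fut_idx)

corrected = linear_scaling(obs_p, sim_p, sim_fut_p)
print(corrected.groupby(corrected.index.month).mean().round(2))
```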

Keywords: temperature, precipitation, water discharge, water balance, global warming

Procedia PDF Downloads 341
24825 Hybrid Approach for Software Defect Prediction Using Machine Learning with Optimization Technique

Authors: C. Manjula, Lilly Florence

Abstract:

Software technology is developing rapidly, which leads to the growth of various industries. Nowadays, software-based applications have been adopted widely for business purposes. For any software industry, the development of reliable software is becoming a challenging task, because a faulty software module may be harmful to the growth of the industry and business. Hence, there is a need to develop techniques that can be used for the early prediction of software defects. Due to the complexity of manual prediction, automated software defect prediction techniques have been introduced. These techniques are based on learning patterns from previous software versions and finding the defects in the current version. They have attracted researchers due to their significant impact on industrial growth by identifying bugs in software. Several studies have been carried out on this basis, but achieving the desired defect prediction performance is still a challenging task. To address this issue, we present a machine learning based hybrid technique for software defect prediction. First, a Genetic Algorithm (GA) is presented in which an improved fitness function is used for better optimization of the features in the data sets. These features are then processed through a Decision Tree (DT) classification model. Finally, an experimental study is presented in which the results of the proposed GA-DT based hybrid approach are compared with those of the DT classification technique. The results show that the proposed hybrid approach achieves better classification accuracy.
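
A compact sketch of the hybrid idea, using a plain GA whose fitness is the decision-tree cross-validation accuracy of the selected feature subset on synthetic data; the paper's improved fitness function is not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)

def fitness(mask):
    """Cross-validated decision-tree accuracy on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

def ga_feature_selection(n_features, pop_size=20, generations=15, p_mut=0.05):
    pop = rng.integers(0, 2, size=(pop_size, n_features))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Tournament selection, single-point crossover, bit-flip mutation
        parents = pop[[max(rng.choice(pop_size, 2), key=lambda i: scores[i])
                       for _ in range(pop_size)]]
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n_features)
            children[i, cut:], children[i + 1, cut:] = (
                parents[i + 1, cut:].copy(), parents[i, cut:].copy())
        flip = rng.random(children.shape) < p_mut
        children[flip] = 1 - children[flip]
        pop = children
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)], scores.max()

best_mask, best_score = ga_feature_selection(X.shape[1])
print("selected features:", np.flatnonzero(best_mask), "CV accuracy:", round(best_score, 3))
```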

Keywords: decision tree, genetic algorithm, machine learning, software defect prediction

Procedia PDF Downloads 326
24824 Development of Sustainable Building Environmental Model (SBEM) in Hong Kong

Authors: Kwok W. Mui, Ling T. Wong, F. Xiao, Chin T. Cheung, Ho C. Yu

Abstract:

This study addresses the concept of a Sustainable Building Environmental Model (SBEM) developed to optimize energy consumption in air conditioning and ventilation (ACV) systems without any deterioration of indoor environmental quality (IEQ). The SBEM incorporates two main components: an adaptive comfort temperature control module (ACT) and a new carbon dioxide demand control module (nDCV). These two modules take an innovative approach to maintaining satisfaction with the indoor environmental quality at optimum energy consumption, providing a rational basis for effective control. A total of 2133 sets of measurements of indoor air temperature (Ta), relative humidity (Rh) and carbon dioxide concentration (CO2) were collected in Hong Kong offices to investigate the potential of integrating the SBEM. A simulation was used to evaluate the dynamic performance of the energy and air conditioning system with the SBEM integrated in an air-conditioned building. It gives a clear picture of the control strategies and allows controllers to be pre-tuned before being used in real systems. With the integration of the SBEM, it was possible to save up to 12.3% of overall electricity consumption in simulation and 15% in field measurements, while maintaining the average carbon dioxide concentration within 1000 ppm and occupant dissatisfaction within 20%.
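
The ACT and nDCV control laws themselves are not given in the abstract; the sketch below is only a generic illustration of the two ideas, an outdoor-temperature-dependent comfort setpoint and a proportional CO2-based fresh-air demand, with entirely hypothetical coefficients.

```python
def adaptive_setpoint(t_outdoor, base=22.0, slope=0.3, low=21.0, high=26.0):
    """Illustrative adaptive comfort temperature: drifts with outdoor temperature."""
    return min(max(base + slope * (t_outdoor - 20.0), low), high)

def co2_demand_ventilation(co2_ppm, setpoint_ppm=1000.0, min_flow=0.1, max_flow=1.0, gain=0.002):
    """Proportional demand-controlled ventilation: more fresh air as CO2 exceeds the setpoint."""
    demand = min_flow + gain * max(co2_ppm - setpoint_ppm, 0.0)
    return min(demand, max_flow)  # normalized fresh-air flow fraction

for t_out, co2 in [(15.0, 800.0), (28.0, 950.0), (33.0, 1250.0)]:
    print(f"T_out={t_out:4.1f}C  setpoint={adaptive_setpoint(t_out):.1f}C  "
          f"fresh-air fraction={co2_demand_ventilation(co2):.2f}")
```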

Keywords: sustainable building environmental model (SBEM), adaptive comfort temperature (ACT), new demand control ventilation (nDCV), energy saving

Procedia PDF Downloads 633
24823 Modeling Activity Pattern Using XGBoost for Mining Smart Card Data

Authors: Eui-Jin Kim, Hasik Lee, Su-Jin Park, Dong-Kyu Kim

Abstract:

Smart-card data are expected to provide information on activity patterns as an alternative to conventional person trip surveys. The focus of this study is to propose a method for training on person trip surveys in order to supplement smart-card data, which do not contain the purpose of each trip. We selected only the features available from smart-card data, such as spatiotemporal information on the trip and geographic information system (GIS) data near the stations, to train with the survey data. XGBoost, which is a state-of-the-art tree-based ensemble classifier, was used to train on data from multiple sources. This classifier uses a more regularized model formalization to control over-fitting and shows very fast execution times with good performance. The validation results showed that the proposed method efficiently estimated the trip purpose. The GIS data of the station and the duration of stay at the destination were significant features in modelling trip purpose.
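
A minimal sketch of the classification step with the xgboost Python package; the feature names are hypothetical stand-ins for the spatiotemporal and GIS attributes described above, and the labels stand in for the surveyed trip purposes.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 1000
# Hypothetical smart-card / GIS features: boarding hour, trip duration,
# stay duration at the destination, land-use mix near the alighting station
X = pd.DataFrame({
    "boarding_hour": rng.integers(5, 24, n),
    "trip_minutes": rng.uniform(5, 90, n),
    "stay_minutes": rng.uniform(10, 600, n),
    "dest_landuse_commercial": rng.uniform(0, 1, n),
})
y = rng.integers(0, 4, n)  # stand-in trip-purpose labels from the survey

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                    objective="multi:softprob", eval_metric="mlogloss")
clf.fit(X_tr, y_tr)
print("accuracy:", (clf.predict(X_te) == y_te).mean())
```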

Keywords: activity pattern, data fusion, smart-card, XGboost

Procedia PDF Downloads 240
24822 An Overview of Posterior Fossa Associated Pathologies and Segmentation

Authors: Samuel J. Ahmad, Michael Zhu, Andrew J. Kobets

Abstract:

Segmentation tools continue to advance, evolving from manual methods to automated contouring technologies utilizing convolutional neural networks. These techniques have been used to evaluate ventricular and hemorrhagic volumes in the past but may be applied in novel ways to assess posterior fossa-associated pathologies such as Chiari malformations. Herein, we summarize the literature pertaining to segmentation in the context of this and other posterior fossa-based diseases such as trigeminal neuralgia, hemifacial spasm, and posterior fossa syndrome. A literature search for volumetric analysis of the posterior fossa identified 27 papers in which semi-automated segmentation, automated segmentation, manual segmentation, linear measurement-based formulas, and the Cavalieri estimator were utilized. These studies produced better data than older methods that relied on formulas for rough volumetric estimation. The most commonly used technique was semi-automated segmentation (12 studies). Manual segmentation was the second most common technique (7 studies). Automated segmentation techniques (4 studies) and the Cavalieri estimator (3 studies), a point-counting method that uses a grid of points to estimate the volume of a region, were the next most commonly used techniques. The least commonly utilized technique was linear measurement-based formulas (1 study). Semi-automated segmentation produced accurate, reproducible results. However, it is apparent that there does not exist a single semi-automated software package, open source or otherwise, that has been widely applied to the posterior fossa. Fully automated segmentation via open source software such as FSL and FreeSurfer produced highly accurate posterior fossa segmentations. Various forms of segmentation have been used to assess posterior fossa pathologies, and each has its advantages and disadvantages. According to our results, semi-automated segmentation is the predominant method. However, atlas-based automated segmentation is an extremely promising method that produces accurate results. Future evolution of segmentation technologies will undoubtedly yield superior results, which may be applied to posterior fossa related pathologies. Medical professionals will save time and effort analyzing large sets of data due to these advances.
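
As a short illustration of the Cavalieri point-counting estimator mentioned above: the volume is estimated from the slice spacing, the area represented by each grid point, and the points counted on each slice; the counts below are hypothetical.

```python
def cavalieri_volume(points_per_slice, slice_spacing_mm, area_per_point_mm2):
    """Cavalieri estimator: V ~ d * (a/p) * sum of counted points over slices."""
    return slice_spacing_mm * area_per_point_mm2 * sum(points_per_slice)

# Hypothetical grid-point counts on 8 consecutive MR slices through the posterior fossa
counts = [30, 60, 90, 105, 100, 85, 55, 25]
volume_mm3 = cavalieri_volume(counts, slice_spacing_mm=3.0, area_per_point_mm2=100.0)
print(f"estimated volume: {volume_mm3 / 1000:.1f} cm^3")
```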

Keywords: chiari, posterior fossa, segmentation, volumetric

Procedia PDF Downloads 104
24821 Analysis of Decentralized on Demand Cross Layer in Cognitive Radio Ad Hoc Network

Authors: A. Sri Janani, K. Immanuel Arokia James

Abstract:

In cognitive radio ad hoc networks, different unlicensed users may acquire different sets of available channels. This non-uniform spectrum availability imposes special design challenges for broadcasting in CR ad hoc networks. Cognitive radio automatically detects available channels in the wireless spectrum; this is a form of dynamic spectrum management. Cross-layer optimization is proposed, which allows far-away secondary users to also take part in channel operation. This can increase the throughput and overcome collisions and time delay.

Keywords: cognitive radio, cross layer optimization, CR mesh network, heterogeneous spectrum, mesh topology, random routing optimization technique

Procedia PDF Downloads 355
24820 A Mutually Exclusive Task Generation Method Based on Data Augmentation

Authors: Haojie Wang, Xun Li, Rui Yin

Abstract:

In order to solve the memorization overfitting problem in the model-agnostic meta-learning (MAML) algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutex task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution in the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to an exponential growth of computation, this paper also proposes a key-data extraction method that extracts only part of the data to generate the mutex task. The experiments show that the method of generating mutually exclusive tasks can effectively solve the memorization overfitting problem in the meta-learning MAML algorithm.

Keywords: mutex task generation, data augmentation, meta-learning, text classification

Procedia PDF Downloads 135
24819 Revolutionizing Traditional Farming Using Big Data/Cloud Computing: A Review on Vertical Farming

Authors: Milind Chaudhari, Suhail Balasinor

Abstract:

Due to massive deforestation and an ever-increasing population, the organic content of the soil is depleting at a much faster rate. Because of this, there is a significant risk that worldwide food production will drop by 40% in the next two decades. Vertical farming can help in aiding food production by leveraging big data and cloud computing to ensure plants are grown naturally, providing the optimum nutrients and sunlight based on the analysis of millions of data points. This paper outlines the most important parameters in vertical farming and how a combination of big data and AI helps in calculating and analyzing these millions of data points. Finally, the paper outlines how different organizations are controlling the indoor environment by leveraging big data to enhance food quantity and quality.

Keywords: big data, IoT, vertical farming, indoor farming

Procedia PDF Downloads 171
24818 Pregnancy and Birth Outcomes of Single versus Multiple Embryo Transfer in Gestational Surrogacy Arrangements: A Systematic Review

Authors: Jutharat Attawet, Alex Y. Wang, Cindy M. Farquhar, Elizabeth A. Sullivan

Abstract:

Background: The adverse maternal and perinatal outcomes of multiple pregnancies resulting from multiple embryo transfers (ET) have become a significant concern. This is particularly relevant for gestational carriers, since they usually do not have infertility issues. Single embryo transfer (SET) has therefore been encouraged in assisted reproductive technology (ART) practice in order to reduce multiple pregnancies. Objectives: This systematic review aims to investigate the pregnancy and birth outcomes of SET and multiple ET in surrogacy arrangements. Search methods: This study is a systematic review. Electronic databases were searched in CINAHL, Medline, Embase, Scopus and ProQuest for studies from 1980 to 2017. Cross-references and national ART reports were also searched manually. Articles were accessed without restriction on language or study type. Carrier cycles involving SET and multiple ET were identified in the database search. The main outcome measures, including clinical pregnancy, live delivery and multiple deliveries per gestational carrier cycle, were compared between SET and multiple ET. Mantel-Haenszel risk ratios (RRs) with 95% confidence intervals (CIs), using the numbers of outcome events in SET and multiple ET of each study, were calculated using RevMan 5.3. Outcomes: The search returned 97 articles, of which 5 met the inclusion criteria. Approximately 50% of carrier cycles involved the transfer of a single embryo and 50% the transfer of more than one embryo. The clinical pregnancy rate (CPR) was 39% for SET and 53% for multiple ET, which was not significantly different, with RR = 0.83 (95% CI: 0.67-1.03). The live delivery rate was 33% for SET and 57% for multiple ET, which was not significantly different, with RR = 0.78 (95% CI: 0.61-1.00). The risk of multiple delivery per carrier was greater in the multiple ET carrier cycles (RR = 0.4, 95% CI: 0.01-0.26). There were 104 sets of twins (including one set selectively reduced from triplets to twins) and 1 set of triplets in the multiple ET carrier cycles. In the SET carrier cycles, there were 2 sets of twins. Significance of the study: SET should be advocated among surrogate carriers to prevent multiple pregnancies and subsequent adverse outcomes for both carrier and baby. Surrogacy practice should be reviewed, and surrogate carriers should be fully informed of the risk of adverse maternal and birth outcomes of multiple pregnancies due to multiple embryo transfers.
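
The pooled analysis was performed in RevMan 5.3; purely as an illustration of the underlying arithmetic, the sketch below computes a single-study risk ratio and its 95% confidence interval from hypothetical 2x2 counts, the quantity that Mantel-Haenszel weighting then pools across studies.

```python
import math

def risk_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Risk ratio of group A vs group B with a 95% CI computed on the log scale."""
    rr = (events_a / total_a) / (events_b / total_b)
    se_log = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical single-study counts: clinical pregnancies per carrier cycle
rr, lo, hi = risk_ratio_ci(events_a=39, total_a=100, events_b=53, total_b=100)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```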

Keywords: assisted reproduction, birth outcomes, carrier, gestational surrogacy, multiple embryo transfer, multiple pregnancy, pregnancy outcomes, single embryo transfer, surrogate mother, systematic review

Procedia PDF Downloads 401
24817 Biogas Production from Kitchen Waste for a Household Sustainability

Authors: Vuiswa Lucia Sethunya, Tonderayi Matambo, Diane Hildebrandt

Abstract:

South African’s informal settlements produce tonnes of kitchen waste (KW) per year which is dumped into the landfill. These landfill sites are normally located in close proximity to the household of the poor communities; this is a problem in which the young children from those communities end up playing in these landfill sites which may result in some health hazards because of methane, carbon dioxide and sulphur gases which are produced. To reduce this large amount of organic materials being deposited into landfills and to provide a cleaner place for those within the community especially the children, an energy conversion process such as anaerobic digestion of the organic waste to produce biogas was implemented. In this study, the digestion of various kitchen waste was investigated in order to understand and develop a system that is suitable for household use to produce biogas for cooking. Three sets of waste of different nutritional compositions were digested as per acquired in the waste streams of a household at mesophilic temperature (35ᵒC). These sets of KW were co-digested with cow dung (CW) at different ratios to observe the microbial behaviour and the system’s stability in a laboratory scale system. The gas chromatography-flame ionization detector analyses have been performed to identify and quantify the presence of organic compounds in the liquid samples from co-digested and mono-digested food waste. Acetic acid, propionic acid, butyric acid and valeric acid are the fatty acids which were studied. Acetic acid (1.98 g/L), propionic acid (0.75 g/L) and butyric acid (2.16g/L) were the most prevailing fatty acids. The results obtained from organic acids analysis suggest that the KW can be an innovative substituent to animal manure for biogas production. The faster degradation period in which the microbes break down the organic compound to produce the fatty acids during the anaerobic process of KW also makes it a better feedstock during high energy demand periods. The C/N ratio analysis showed that from the three waste streams the first stream containing vegetables (55%), fruits (16%), meat (25%) and pap (4%) yielded more methane-based biogas of 317mL/g of volatile solids (VS) at C/N of 21.06. Generally, this shows that a household will require a heterogeneous composition of nutrient-based waste to be fed into the digester to acquire the best biogas yield to sustain a households cooking needs.

Keywords: anaerobic digestion, biogas, kitchen waste, household

Procedia PDF Downloads 197
24816 Data Challenges Facing Implementation of Road Safety Management Systems in Egypt

Authors: A. Anis, W. Bekheet, A. El Hakim

Abstract:

Implementing a Road Safety Management System (SMS) in a crowded developing country such as Egypt is a necessity. Establishing a sustainable SMS requires a comprehensive, reliable data system for all information pertinent to road crashes. In this paper, a survey of the data available in Egypt is presented, and its validity for use in an SMS in Egypt is assessed. The research provides some missing data and points to the data that are unavailable in Egypt, looking forward to the contribution of the scientific community, the authorities, and the public in solving the problem of missing or unreliable crash data. The data required for implementing an SMS in Egypt are divided into three categories: the first is available data, such as fatality and injury rates, which this research shows may be inconsistent and unreliable; the second category is data that are not available but may be estimated, and an example of estimating vehicle cost is given in this research; the third is data that are not available but can be measured case by case, such as the functional and geometric properties of a facility. Some inquiries are provided in this research for the scientific community, such as how to improve the links among stakeholders of road safety in order to obtain a consistent, non-biased, and reliable data system.

Keywords: road safety management system, road crash, road fatality, road injury

Procedia PDF Downloads 137
24815 Duration Patterns of English by Native British Speakers and Mandarin ESL Speakers

Authors: Chen Bingru

Abstract:

This study is intended to describe and analyze the effects of polysyllabic shortening and word or phrase boundaries on the duration patterns of utterances spoken by Mandarin learners of English, in comparison with native speakers of English. To investigate the relative contribution of these effects, two production experiments were conducted. The study included 11 native British English speakers and 20 Mandarin learners of English who were asked to produce four sets of tokens consisting of a monosyllabic base form, disyllabic and trisyllabic words derived from the base by the addition of suffixes, and a set of short sentences with a particular combination of phrase size, stress pattern, and boundary location. The durations of words and segments were measured, and the results of the data analysis suggest that the amount of polysyllabic shortening and the effect of word or phrase position are likely to contribute to a Chinese accent in Mandarin ESL speakers. This study sheds light on research on duration patterns by demonstrating the effect of duration-related factors on the foreign accent of Mandarin ESL speakers. It can also benefit both L2 learners and language teachers by increasing their sensitivity to the duration differences and difficulties experienced by L2 learners of English. An understanding of the amount of polysyllabic shortening and the effect of position in words and phrases on syllable duration can also help L2 teachers establish priorities for teaching pronunciation to ESL learners.

Keywords: duration patterns, Chinese accent, Mandarin ESL speakers, polysyllabic shortening

Procedia PDF Downloads 134
24814 Big Data-Driven Smart Policing: Big Data-Based Patrol Car Dispatching in Abu Dhabi, UAE

Authors: Oualid Walid Ben Ali

Abstract:

Big Data has become one of the buzzwords of today. The recent explosion of digital data has led organizations, whether private or public, into a new era of more efficient decision making. At some point, businesses decided to use that concept in order to learn what makes their clients tick, with phrases like 'sales funnel' analysis, 'actionable insights', and 'positive business impact'. So, it stands to reason that Big Data was viewed through green (read: money) colored lenses. Somewhere along the line, however, someone realized that collecting and processing data doesn't have to be for business purposes only, but can also be used for other purposes, such as assisting law enforcement, improving policing, or improving road safety. This paper presents briefly how Big Data has been used in the field of policing in order to improve the decision-making process in the daily operations of the police. As an example, we present a big-data-driven system which is used to accurately dispatch patrol cars in a geographic environment. The system is also used to allocate, in real time, the nearest patrol car to the location of an incident. This system has been implemented and applied in the Emirate of Abu Dhabi in the UAE.
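
A minimal sketch of the nearest-car allocation step, using the haversine great-circle distance between an incident and each available patrol car; the coordinates and car identifiers are made up, and the deployed system naturally draws on much richer live GIS and availability data.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_patrol_car(incident, cars):
    """Return the (car_id, distance_km) of the closest available patrol car."""
    best_id = min(cars, key=lambda cid: haversine_km(*incident, *cars[cid]))
    return best_id, haversine_km(*incident, *cars[best_id])

# Made-up coordinates around Abu Dhabi island
cars = {"P-101": (24.4539, 54.3773), "P-102": (24.4861, 54.3520), "P-103": (24.4219, 54.4420)}
incident = (24.4667, 54.3667)
print(nearest_patrol_car(incident, cars))
```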

Keywords: big data, big data analytics, patrol car allocation, dispatching, GIS, intelligent, Abu Dhabi, police, UAE

Procedia PDF Downloads 489
24813 Union-Primes and Immediate Neighbors

Authors: Shai Sarussi

Abstract:

The union of a nonempty chain of prime ideals in a noncommutative ring is not necessarily a prime ideal. An ideal is called union-prime if it is a union of a nonempty chain of prime ideals but is not a prime. In this paper, some relations between chains of prime ideals and the induced chains of union-prime ideals are shown; in particular, the cardinality of such chains and the cardinality of the sets of cuts of such chains are discussed. For a ring R and a nonempty full chain of prime ideals C of R, several characterizations for the property of immediate neighbors in C are given.
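
In symbols, the defining property discussed above can be stated as follows (a restatement of the definition, not an additional result):

```latex
% P is union-prime in a ring R when it is a union of a chain of primes
% but is not itself a prime ideal.
P \;=\; \bigcup_{\alpha \in A} P_{\alpha},
\qquad \{P_{\alpha}\}_{\alpha \in A} \ \text{a nonempty chain of prime ideals of } R,
\qquad P \ \text{not prime}.
```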

Keywords: prime ideals, union-prime ideals, immediate neighbors, Kaplansky's conjecture

Procedia PDF Downloads 129
24812 Factors Affecting Employee Decision Making in an AI Environment

Authors: Yogesh C. Sharma, A. Seetharaman

Abstract:

The decision-making process in humans is a complicated system influenced by a variety of intrinsic and extrinsic factors. Human decisions have a ripple effect on subsequent decisions. In this study, the scope of human decision making is limited to employees. In an organisation, a person makes a variety of decisions from the time they are hired to the time they retire. The goal of this research is to identify the various elements that influence decision making. In addition, the environment in which a decision is made is a significant aspect of the decision-making process. Employees in today's workplace use artificial intelligence (AI) systems for automation and decision augmentation. The impact of AI systems on the decision-making process is examined in this study. The research is designed as a systematic literature review. Based on gaps in the literature, limitations and the scope of future research have been identified. From these findings, a research framework has been designed to identify the various factors affecting employee decision making. Employee decision making is influenced by technological advancement, data-driven culture, human trust, decision automation-augmentation, and workplace motivation. Hybrid human-AI systems require the development of new skill sets and organisational design. Employee psychological safety and supportive leadership influence overall job satisfaction.

Keywords: employee decision making, artificial intelligence (AI) environment, human trust, technology innovation, psychological safety

Procedia PDF Downloads 105
24811 Model Order Reduction for Frequency Response and Effect of Order of Method for Matching Condition

Authors: Aref Ghafouri, Mohammad javad Mollakazemi, Farhad Asadi

Abstract:

In this paper, a model order reduction method is used to approximate linear and nonlinear aspects of some experimental data. The method can be used to obtain an offline reduced-order model that approximates the experimental data, follows the data and the order of the system, and matches the experimental data over certain frequency ranges. In this study, the method is compared on different sets of experimental data; the influence of the choice of reduction order on obtaining the best, sufficient matching condition for following the data is investigated in terms of the imaginary and real parts of the frequency response curve. Finally, the effect of the reduction order, as an important parameter, on nonlinear experimental data is explained further.

Keywords: frequency response, order of model reduction, frequency matching condition, nonlinear experimental data

Procedia PDF Downloads 397