Search results for: data databases
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24902

24782 Surface Geodesic Derivative Pattern for Deformable Textured 3D Object Comparison: Application to Expression and Pose Invariant 3D Face Recognition

Authors: Farshid Hajati, Soheila Gheisari, Ali Cheraghian, Yongsheng Gao

Abstract:

This paper presents a new Surface Geodesic Derivative Pattern (SGDP) for matching textured deformable 3D surfaces. SGDP encodes micro-pattern features based on local surface higher-order derivative variation. It extracts local information by encoding various distinctive textural relationships contained in a geodesic neighborhood, hence fusing texture and range information of a surface at the data level. Geodesic texture rings are encoded into local patterns for similarity measurement between non-rigid 3D surfaces. The performance of the proposed method is evaluated extensively on the Bosphorus and FRGC v2 face databases. Compared to existing benchmarks, experimental results show the effectiveness and superiority of combining the texture and 3D shape data at the earliest level in recognizing typical deformable faces under expression, illumination, and pose variations.
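The micro-pattern idea behind SGDP — thresholding the samples on a ring around a center point into a binary code — can be sketched with a toy LBP-style encoder. This is only an illustration of the binary encoding step: the actual SGDP operates on geodesic rings and higher-order surface derivatives, and the function and argument names below are ours, not the paper's.

```python
def ring_derivative_pattern(center, ring):
    """Encode a ring of texture samples around a center value into a
    binary micro-pattern code: each neighbor contributes one bit,
    set when its value exceeds the center value."""
    code = 0
    for bit, value in enumerate(ring):
        if value > center:
            code |= 1 << bit
    return code
```

For example, a center value of 5 with ring samples [6, 4, 7, 5] sets bits 0 and 2, yielding the code 5. Similarity between two surfaces can then be measured by comparing histograms of such codes.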

Keywords: 3D face recognition, pose, expression, surface matching, texture

Procedia PDF Downloads 371
24781 An Enhanced MEIT Approach for Itemset Mining Using Levelwise Pruning

Authors: Tanvi P. Patel, Warish D. Patel

Abstract:

Association rule mining forms the core of data mining and is one of its best-known methodologies. The objective of mining is to find interesting correlations, frequent patterns, associations, or causal structures among sets of items in transaction databases or other data repositories. Association rule mining is therefore essential for mining patterns and then generating rules from the patterns obtained. For efficient targeted query processing, frequent-pattern discovery, and itemset mining, the Memory Efficient Itemset Tree (MEIT) provides an efficient itemset tree structure. The memory-efficient itemset tree stores itemsets compactly but takes more time than the traditional itemset tree. The proposed strategy generates maximal frequent itemsets from the memory-efficient itemset tree using levelwise pruning: first, items are pre-pruned based on a minimum support count, and the itemset tree is then reconstructed. Because only maximal frequent itemsets are kept, fewer patterns are generated and the tree size is reduced compared to MEIT. The enhanced memory-efficient itemset tree approach proposed here therefore helps to optimize main-memory overhead as well as reduce processing time.
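The levelwise idea — discard items that cannot reach minimum support, then keep only the maximal frequent itemsets — can be sketched with a brute-force miner. This is a toy stand-in, not the MEIT tree itself; the function and variable names are ours.

```python
from itertools import combinations

def maximal_frequent_itemsets(transactions, min_support):
    """Toy sketch of levelwise pruning: pre-prune items below
    min_support, enumerate frequent itemsets over the pruned item
    universe, then keep only the maximal ones (those with no
    frequent strict superset)."""
    # Pre-pruning: an itemset can only be frequent if all its items are.
    counts = {}
    for t in transactions:
        for item in t:
            counts[item] = counts.get(item, 0) + 1
    frequent_items = [i for i, c in counts.items() if c >= min_support]

    def support(itemset):
        return sum(1 for t in transactions if itemset <= set(t))

    frequent = []
    for k in range(1, len(frequent_items) + 1):
        for combo in combinations(sorted(frequent_items), k):
            s = frozenset(combo)
            if support(s) >= min_support:
                frequent.append(s)
    # Maximal: no frequent strict superset exists.
    return [s for s in frequent
            if not any(s < other for other in frequent)]
```

On the transactions {a,b,c}, {a,b}, {a,c}, {b,c} with minimum support 2, the frequent sets are a, b, c, ab, ac, bc, of which only ab, ac and bc are maximal — fewer patterns to report, as the abstract describes.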

Keywords: association rule mining, itemset mining, itemset tree, MEIT, maximal frequent pattern

Procedia PDF Downloads 356
24780 Systematic Review and Meta-Analysis of Mid-Term Survival and Recurrent Mitral Regurgitation for Robotic-Assisted Mitral Valve Repair

Authors: Ramanen Sugunesegran, Michael L. Williams

Abstract:

Over the past two decades, surgical approaches for mitral valve (MV) disease have evolved with the advent of minimally invasive techniques. The safety and efficacy of robotic mitral valve repair (RMVr) have been well documented; however, mid- to long-term data are limited. The aim of this review was to provide a comprehensive analysis of the available mid- to long-term data for RMVr. Electronic searches of five databases were performed to identify all relevant studies reporting minimum 5-year data on RMVr. Pre-defined primary outcomes of interest were overall survival, freedom from MV reoperation, and freedom from moderate or worse mitral regurgitation (MR) at 5 years or more post-RMVr. A meta-analysis of proportions or means was performed, utilizing a random effects model, to present the data. Kaplan-Meier curves were aggregated using reconstructed individual patient data. Nine studies totaling 3,300 patients undergoing RMVr were identified. Rates of overall survival at 1, 5 and 10 years were 99.2%, 97.4% and 92.3%, respectively. Freedom from MV reoperation at 8 years post-RMVr was 95.0%. Freedom from moderate or worse MR at 7 years was 86.0%. Rates of early post-operative complications were low, with only 0.2% all-cause mortality and 1.0% cerebrovascular accident. Reoperation for bleeding was low at 2.2%, and the rate of successful RMVr was 99.8%. Mean intensive care unit and hospital stays were 22.4 hours and 5.2 days, respectively. RMVr is a safe procedure with low rates of early mortality and other complications, and it can be performed with low complication rates in high-volume, experienced centers. Evaluation of available mid-term data post-RMVr suggests favorable rates of overall survival, freedom from MV reoperation, and freedom from moderate or worse MR recurrence.
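The pooling step ("a meta-analysis of proportions... utilizing a random effects model") can be sketched with a DerSimonian-Laird estimator applied to raw proportions. This is illustrative only: published meta-analyses usually transform proportions (logit or arcsine) before pooling, and the Kaplan-Meier reconstruction step is not reproduced here.

```python
import math

def pool_proportions_dl(events, totals):
    """Minimal DerSimonian-Laird random-effects pooling of study
    proportions. Returns the pooled estimate and a 95% confidence
    interval. Assumes no study proportion is exactly 0 or 1."""
    p = [e / n for e, n in zip(events, totals)]
    v = [pi * (1 - pi) / n for pi, n in zip(p, totals)]  # within-study variance
    w = [1 / vi for vi in v]                             # fixed-effect weights
    fixed = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
    # Heterogeneity: Cochran's Q and the DL estimate of tau^2.
    q = sum(wi * (pi - fixed) ** 2 for wi, pi in zip(w, p))
    df = len(p) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights add tau^2 to each study's variance.
    w_re = [1 / (vi + tau2) for vi in v]
    pooled = sum(wi * pi for wi, pi in zip(w_re, p)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

For three hypothetical cohorts with 95/100, 97/100 and 92/100 five-year survivors, the pooled survival lands near 95% with a confidence interval straddling it.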

Keywords: mitral valve disease, mitral valve repair, robotic cardiac surgery, robotic mitral valve repair

Procedia PDF Downloads 73
24779 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables

Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez

Abstract:

Over the years, the Flight Management System (FMS) has experienced a continuous improvement of its many features, to the point of becoming the pilot's primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized, providing the crew with the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system's predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a Level D Research Aircraft Flight Simulator (RAFS) was used as a test aircraft.
According to the Federal Aviation Administration, Level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was then improved using the proposed methodology. To do so, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the current APM in order to minimize the error between the predicted and measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the maximum error deviation of the FCOM prediction of the engine fan speed was reduced from 5.0% to 0.2% after only ten flights.
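The correction loop just described can be sketched in a few lines: each in-flight measurement nudges the corresponding lookup-table cell toward the observed value, so the prediction error shrinks as flights accumulate. This is a minimal sketch; the learning rate, the cell indexing by (Mach, flight level), and the schedule below are our assumptions, not the paper's tuned scheme.

```python
def adapt(table, samples, rate=0.3, iters=20):
    """Adaptive-lookup-table sketch: for each (cell_index, measured)
    sample, move the stored prediction a fraction `rate` of the way
    toward the measurement, repeated over `iters` passes."""
    for _ in range(iters):
        for index, measured in samples:
            predicted = table[index]
            table[index] = predicted + rate * (measured - predicted)
    return table
```

For instance, a fuel-flow cell initialized from FCOM data at 1200 kg/h converges toward a measured 1000 kg/h after a handful of passes, mirroring the rapid error reduction the abstract reports.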

Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X

Procedia PDF Downloads 252
24778 Artificial Intelligence in Melanoma Prognosis: A Narrative Review

Authors: Shohreh Ghasemi

Abstract:

Introduction: Melanoma is a complex disease with various clinical and histopathological features that impact prognosis and treatment decisions. Traditional methods of melanoma prognosis involve manual examination and interpretation of clinical and histopathological data by dermatologists and pathologists. However, the subjective nature of these assessments can lead to inter-observer variability and suboptimal prognostic accuracy. AI, with its ability to analyze vast amounts of data and identify patterns, has emerged as a promising tool for improving melanoma prognosis. Methods: A comprehensive literature search was conducted to identify studies that employed AI techniques for melanoma prognosis. The search included databases such as PubMed and Google Scholar, using keywords such as "artificial intelligence," "melanoma," and "prognosis." Studies published between 2010 and 2022 were considered. The selected articles were critically reviewed, and relevant information was extracted. Results: The review identified various AI methodologies utilized in melanoma prognosis, including machine learning algorithms, deep learning techniques, and computer vision. These techniques have been applied to diverse data sources, such as clinical images, dermoscopy images, histopathological slides, and genetic data. Studies have demonstrated the potential of AI in accurately predicting melanoma prognosis, including survival outcomes, recurrence risk, and response to therapy. AI-based prognostic models have shown comparable or even superior performance compared to traditional methods.

Keywords: artificial intelligence, melanoma, accuracy, prognosis prediction, image analysis, personalized medicine

Procedia PDF Downloads 63
24777 The Systems Biology Verification Endeavor: Harness the Power of the Crowd to Address Computational and Biological Challenges

Authors: Stephanie Boue, Nicolas Sierro, Julia Hoeng, Manuel C. Peitsch

Abstract:

Systems biology relies on large numbers of data points and sophisticated methods to extract biologically meaningful signal and mechanistic understanding. For example, analyses of transcriptomics and proteomics data make it possible to gain insights into the molecular differences in tissues exposed to diverse stimuli or test items. Whereas the interpretation of endpoints specifically measuring a mechanism is relatively straightforward, the interpretation of big data is more complex and benefits from comparing results obtained with diverse analysis methods. The sbv IMPROVER project was created to implement solutions to verify systems biology data, methods, and conclusions. Computational challenges leveraging the wisdom of the crowd allow benchmarking of methods for specific tasks, such as signature extraction and/or sample classification. Four challenges have already been successfully conducted and confirmed that the aggregation of predictions often leads to better results than individual predictions and that methods perform best in specific contexts. Whenever the scientific question of interest does not have a gold standard, but may greatly benefit from the scientific community coming together to discuss approaches and results, datathons are set up. The inaugural sbv IMPROVER datathon was held in Singapore on 23-24 September 2016. It allowed bioinformaticians and data scientists to consolidate their ideas and work on the most promising methods as teams, after having initially reflected on the problem on their own. The outcome is a set of visualization and analysis methods that will be shared with the scientific community via the Garuda platform, an open connectivity platform that provides a framework to navigate through different applications, databases and services in biology and medicine.
We will present the results we obtained when analyzing data with our network-based method, and introduce a datathon that will take place in Japan to encourage the analysis of the same datasets with other methods to allow for the consolidation of conclusions.

Keywords: big data interpretation, datathon, systems toxicology, verification

Procedia PDF Downloads 269
24776 Handling, Exporting and Archiving Automated Mineralogy Data Using TESCAN TIMA

Authors: Marek Dosbaba

Abstract:

Within the mining sector, SEM-based Automated Mineralogy (AM) has been the standard application for quickly and efficiently handling mineral processing tasks. Over the last decade, the trend has been to analyze larger numbers of samples, often with a higher level of detail. This has necessitated a shift from interactive sample analysis performed by an operator using a SEM, to an increased reliance on offline processing to analyze and report the data. In response to this trend, TESCAN TIMA Mineral Analyzer is designed to quickly create a virtual copy of the studied samples, thereby preserving all the necessary information. Depending on the selected data acquisition mode, TESCAN TIMA can perform hyperspectral mapping and save an X-ray spectrum for each pixel or segment, respectively. This approach allows the user to browse through elemental distribution maps of all elements detectable by means of energy dispersive spectroscopy. Re-evaluation of the existing data for the presence of previously unconsidered elements is possible without the need to repeat the analysis. Additional tiers of data such as a secondary electron or cathodoluminescence images can also be recorded. To take full advantage of these information-rich datasets, TIMA utilizes a new archiving tool introduced by TESCAN. The dataset size can be reduced for long-term storage and all information can be recovered on-demand in case of renewed interest. TESCAN TIMA is optimized for network storage of its datasets because of the larger data storage capacity of servers compared to local drives, which also allows multiple users to access the data remotely. This goes hand in hand with the support of remote control for the entire data acquisition process. TESCAN also brings a newly extended open-source data format that allows other applications to extract, process and report AM data. This offers the ability to link TIMA data to large databases feeding plant performance dashboards or geometallurgical models. 
The traditional tabular particle-by-particle or grain-by-grain export process is preserved and can be customized with scripts to include user-defined particle/grain properties.

Keywords: Tescan, electron microscopy, mineralogy, SEM, automated mineralogy, database, TESCAN TIMA, open format, archiving, big data

Procedia PDF Downloads 97
24775 The Development of Chinese-English Homophonic Word Pairs Databases for English Teaching and Learning

Authors: Yuh-Jen Wu, Chun-Min Lin

Abstract:

Homophonic words are common in Mandarin Chinese, which belongs to the tonal language family. Using homophonic cues to study foreign languages is one of the mnemonic learning techniques that can aid the retention and retrieval of information in human memory. When learning difficult foreign words, some learners associate them with words in a language they are familiar with to build an association and strengthen working memory. These phonological clues are a beneficial aid for novice language learners. In the classroom, if mnemonic skills are used at the appropriate time in the instructional sequence, they may achieve their maximum effectiveness. For Chinese-speaking students, proper use of Chinese-English homophonic word pairs may help them learn difficult vocabulary. In this study, a database program is developed using Visual Basic. The database contains two corpora, one with Chinese lexical items and the other with English ones. The Chinese corpus contains 59,053 Chinese words that were collected by a web crawler. The pronunciations of this group of words are compared with words in an English corpus based on WordNet, a lexical database for the English language. Words in both databases with similar pronunciation chunks are detected in batches. A total of approximately 1,000 Chinese lexical items were located in the preliminary comparison. These homophonic word pairs can serve as a valuable tool to assist Chinese-speaking students in learning and memorizing new English vocabulary.
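The chunk-comparison step can be illustrated with a deliberately naive matcher that pairs a romanized (pinyin) Chinese word with an English word when their spellings share a long enough leading chunk. This is a toy stand-in for the authors' Visual Basic tool, which compares pronunciations rather than spellings; the function names and the `min_overlap` threshold are our assumptions.

```python
def shared_prefix_len(a, b):
    """Length of the common leading chunk of two strings."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def homophonic_pairs(pinyin_words, english_words, min_overlap=3):
    """Pair each (hanzi, pinyin) entry with every English word whose
    spelling shares at least min_overlap leading characters with the
    pinyin — a crude proxy for 'similar pronunciation chunk'."""
    pairs = []
    for hanzi, pinyin in pinyin_words:
        for en in english_words:
            if shared_prefix_len(pinyin, en.lower()) >= min_overlap:
                pairs.append((hanzi, en))
    return pairs
```

With this proxy, "mama" (妈妈) matches "mamba" (shared chunk "mam"), while "shu" (书) fails to match "shoe" because only "sh" overlaps — showing why a phonetic rather than orthographic comparison is needed in the real system.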

Keywords: Chinese, corpus, English, homophonic words, vocabulary

Procedia PDF Downloads 161
24774 Counterfeit Product Detection Using Block Chain

Authors: Sharanya C. H., Pragathi M., Vathsala R. S., Theja K. V., Yashaswini S.

Abstract:

Identifying counterfeit products has become increasingly important to product manufacturing industries in recent decades. This ongoing counterfeiting problem has an impact on company sales and profits. To address this issue, a functional blockchain technology was implemented that effectively prevents products from being counterfeited. With blockchain technology, consumers no longer need to rely on third parties to determine the authenticity of the product being purchased. A blockchain is a distributed database that stores data records known as blocks, linked into chains and replicated across a network of nodes. Counterfeit products are identified using a QR code reader, and the product's QR code is linked to the blockchain management system. The system compares the unique code obtained from the customer against the stored unique code to determine whether or not the product is original.
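The register-then-verify flow can be sketched with an in-memory stand-in for the on-chain registry. The paper's system runs on Ethereum; this toy ledger only mimics the two operations the abstract describes — hashing a product code into an append-only chain of blocks, then recomputing the hash from a scanned QR payload to check authenticity.

```python
import hashlib

class ToyLedger:
    """Illustrative, in-memory stand-in for a blockchain product
    registry (not a real distributed ledger)."""

    def __init__(self):
        # Append-only list of (previous_hash, block_hash) pairs.
        self._blocks = []

    def register(self, product_code):
        """Append a block whose hash chains the product code to the
        previous block, as a manufacturer would at production time."""
        prev = self._blocks[-1][1] if self._blocks else "0" * 64
        h = hashlib.sha256((prev + product_code).encode()).hexdigest()
        self._blocks.append((prev, h))
        return h

    def verify(self, product_code):
        """Recompute the hash from the scanned QR payload and look
        for a matching block; no match means counterfeit."""
        for prev, h in self._blocks:
            if hashlib.sha256((prev + product_code).encode()).hexdigest() == h:
                return True
        return False
```

A customer scanning a genuine code gets a match against the stored hash; an unregistered code hashes to nothing on the ledger and is flagged as counterfeit.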

Keywords: blockchain, ethereum, QR code

Procedia PDF Downloads 158
24773 Processing Big Data: An Approach Using Feature Selection

Authors: Nikat Parveen, M. Ananthi

Abstract:

Big data is an emerging technology that collects data from various sensors for use in many fields. Data retrieval is a major issue, since the exact data required must be extracted on demand. In this paper, a large data set is processed using feature selection. Feature selection helps choose the data that are actually needed to process and execute the task. The key value is what points to the exact data available in the storage space. Here, the available data are streamed, and R-Center is proposed to achieve this task.
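Since the abstract does not specify the proposed R-Center procedure, a generic stand-in can still show what "choosing the data actually needed" looks like in practice: a variance-threshold feature selector that drops near-constant columns before further processing. Names and the threshold below are our assumptions.

```python
def select_features(rows, threshold=0.1):
    """Keep only the columns whose variance exceeds `threshold`;
    near-constant columns carry no information for the task.
    Returns the reduced rows and the indices of the kept columns."""
    n = len(rows)
    kept = []
    for j in range(len(rows[0])):
        col = [row[j] for row in rows]
        mean = sum(col) / n
        var = sum((x - mean) ** 2 for x in col) / n
        if var > threshold:
            kept.append(j)
    reduced = [[row[j] for j in kept] for row in rows]
    return reduced, kept
```

On rows where the first column is constant, only the informative second column survives, shrinking the data volume that downstream retrieval has to touch.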

Keywords: big data, key value, feature selection, retrieval, performance

Procedia PDF Downloads 329
24772 Prioritizing the Factors Effective in Decreasing the Rate of Accidents on Freeways in Iran between 2013 and 2015

Authors: Mansour Hadji Hosseinlou, Alireza Mahdavi

Abstract:

Transportation is a need of every society, developing as societies advance economically and socially, and is one of the symbols of civilization today. Although it is very useful to humans, it also leads to serious harm and injury. The development of communication systems and the building of new roads have increased the rate of accidents; in practice, this rising rate has eroded the advantages of transportation. Traffic accidents are a cause of death and of serious financial and bodily harm, and their significant social, economic and cultural consequences seriously threaten societies. Iran's ground transportation system is among the most accident-prone in the world, and its mortality rate and financial losses impose a heavy cost on the country at the national level. We therefore compiled a data collection from the recorded statistics of accidents that occurred on freeways from 2013 to 2015. These statistics are recorded in different related databases, chiefly those of the police and the road transportation authority. The data were separated and arranged in tables; after preparation, processing and prioritization of the factors, the resulting collection is presented to departments, managers and researchers to help them propose practical solutions.

Keywords: freeway accidents, human causes, death, tiredness, drowsiness

Procedia PDF Downloads 182
24771 Rendering Cognition Based Learning in Coherence with Development within the Context of PostgreSQL

Authors: Manuela Nayantara Jeyaraj, Senuri Sucharitharathna, Chathurika Senarath, Yasanthy Kanagaraj, Indraka Udayakumara

Abstract:

PostgreSQL is an Object-Relational Database Management System (ORDBMS) that has been in existence for a long while. Despite the superior features it packages for managing databases and data, the database community has not fully realized the importance and advantages of PostgreSQL. This research therefore focuses on providing a better development environment for PostgreSQL in order to encourage its use and make its importance clear. PostgreSQL is also known as the world's most advanced SQL-compliant open-source ORDBMS. Yet users have not adopted PostgreSQL, in part because its persistent textual environment is complex for an introductory user. Simply put, there is a dire need to explain, in an easy way, the procedures and standards by which databases are created in PostgreSQL, how tables and the relationships among them are defined, and how queries are built and their flow controlled by conditions, so that the community adopts PostgreSQL at a higher rate. Hence, this research first identifies the dominant features provided by PostgreSQL over its competitors. Following the identified merits, an analysis of why the database community hesitates to migrate to PostgreSQL's environment is carried out. These findings are modulated and tailored to the scope and the constraints discovered. The research proposes a system that serves as both a design platform and a learning tool, providing an interactive method of learning via a visual editor mode and incorporating a textual editor for well-versed users. The study is based on devising viable solutions that analyze a user's cognitive perception of human-computer interfaces and the behavioural processing of design elements.
By providing a visually draggable and manipulable environment for working with PostgreSQL databases and table queries, the system is expected to highlight the elementary features PostgreSQL offers over existing systems, so that its importance and simplicity can be grasped by, and conveyed to, hesitant users.

Keywords: cognition, database, PostgreSQL, text-editor, visual-editor

Procedia PDF Downloads 266
24770 Climate Change and Health: Scoping Review of Scientific Literature 1990-2015

Authors: Niamh Herlihy, Helen Fischer, Rainer Sauerborn, Anneliese Depoux, Avner Bar-Hen, Antoine Flauhault, Stefanie Schütte

Abstract:

In recent decades, the number of publications on the potential health risks associated with climate change has increased in both the scientific and the grey literature. Though interest in climate change and health is growing, there are still many gaps in our ability to adequately assess our future health needs in a warmer world. Generating a greater understanding of the health impacts of climate change could be a key step in inciting the changes necessary to decelerate global warming and to target new strategies to mitigate the consequences for health systems. A long-term and broad overview of the existing scientific literature in the field of climate change and health is currently missing, making it difficult to ensure that all priority areas are being adequately addressed. We conducted a scoping review of published peer-reviewed literature on climate change and health from two large databases, PubMed and Web of Science, between 1990 and 2015. A scoping review allowed for a broad analysis of this complex topic at a meta-level, as opposed to a thematically refined literature review. A detailed search strategy including specific climate and health terminology was used to search the two databases. Inclusion and exclusion criteria were applied in order to capture the most relevant literature on the human health impact of climate change within the chosen timeframe. Two reviewers screened the papers independently, and any differences arising were resolved by a third party. Data were extracted, categorized and coded both manually and using R software, and analytics and infographics were developed from the results. There were 7,269 articles identified across the two databases following the removal of duplicates. After screening by both reviewers, 3,751 were included. As expected, preliminary results indicate that the number of publications on the topic has increased over time.
Geographically, the majority of publications address the impact of climate change and health in Europe and North America. This is particularly alarming given that countries in the Global South will bear the greatest health burden. Concerning health outcomes, infectious diseases, particularly dengue fever and other mosquito-transmitted infections, are the most frequently cited. We highlight research gaps in certain areas, e.g., climate migration and mental health issues. We are developing a database of the identified climate change and health publications and are compiling a report for publication and dissemination of the findings. As health is a major co-benefit of climate change mitigation strategies, our results may serve as a useful source of information for research funders and investors when considering future research needs as well as the cost-effectiveness of climate change strategies. This study is part of an interdisciplinary project called 4CHealth that confronts the results of research on scientific, political and press literature to better understand how knowledge on climate change and health circulates within those different fields and whether and how it is translated into real-world change.

Keywords: climate change, health, review, mapping

Procedia PDF Downloads 302
24769 AI-Enabled Smart Contracts for Reliable Traceability in the Industry 4.0

Authors: Harris Niavis, Dimitra Politaki

Abstract:

Thanks to advances in the ICT sector, the manufacturing industry collects vast amounts of data for monitoring product quality, and dedicated IoT infrastructure is deployed to track and trace the production line. However, industries have not yet managed to unleash the full potential of these data due to defective data collection methods and untrusted data storage and sharing. Blockchain is gaining increasing ground as a key technology enabler for Industry 4.0 and the smart manufacturing domain, as it enables the secure storage and exchange of data between stakeholders. At the same time, AI techniques are increasingly used to detect anomalies in batch and time-series data, enabling the identification of unusual behaviors. The proposed scheme is based on smart contracts to enable automation and transparency in the data exchange, coupled with anomaly detection algorithms to enable reliable data ingestion into the system. Before sensor measurements are fed to the blockchain component and the smart contracts, the anomaly detection mechanism combines artificial intelligence models to effectively detect unusual values such as outliers and extreme deviations in the incoming data. Specifically, autoregressive integrated moving average (ARIMA), long short-term memory (LSTM) and dense autoencoders, as well as generative adversarial network (GAN) models, are used to detect both point and collective anomalies. Toward the goal of preserving the privacy of industries' information, the smart contracts employ techniques that ensure only anonymized pointers to the actual data are stored on the ledger, while sensitive information remains off-chain. In the same spirit, blockchain technology guarantees the security of the data storage through strong cryptography, as well as the integrity of the data through the decentralization of the network and the execution of the smart contracts by the majority of the blockchain network actors.
The blockchain component of the Data Traceability Software is based on the Hyperledger Fabric framework, which lays the ground for the deployment of smart contracts and APIs to expose the functionality to the end-users. The results of this work demonstrate that such a system can increase the quality of the end-products and the trustworthiness of the monitoring process in the smart manufacturing domain. The proposed AI-enabled data traceability software can be employed by industries to accurately trace and verify records about quality through the entire production chain and take advantage of the multitude of monitoring records in their databases.
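The point-anomaly screening that gates data before it reaches the ledger can be illustrated with a deliberately simple detector: a z-score rule on a univariate sensor series. The paper's actual mechanism combines ARIMA, LSTM/dense autoencoders and GANs, none of which is reproduced here; this only shows the ingestion-time filtering idea.

```python
import statistics

def point_anomalies(series, z_thresh=3.0):
    """Flag indices whose values lie more than z_thresh population
    standard deviations from the series mean — a minimal point-anomaly
    detector for sensor readings before on-chain ingestion."""
    mean = statistics.mean(series)
    sd = statistics.pstdev(series)
    if sd == 0:
        return []  # constant series: nothing to flag
    return [i for i, x in enumerate(series)
            if abs(x - mean) / sd > z_thresh]
```

A single spike in an otherwise steady sensor stream is flagged and can be withheld from the smart contract, while normal readings pass through.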

Keywords: blockchain, data quality, Industry 4.0, product quality

Procedia PDF Downloads 168
24768 GIS Database Creation for the Impacts of Domestic Wastewater Disposal on Bida Town, Niger State, Nigeria

Authors: Ejiobih Hyginus Chidozie

Abstract:

A Geographic Information System (GIS) is a configuration of computer hardware and software specifically designed to effectively capture, store, update, manipulate, analyse and display all forms of spatially referenced information. The GIS database is referred to as the heart of a GIS. It holds location data, attribute data and the spatial relationships between objects and their attributes. Sewage and wastewater management have assumed increased importance lately as a result of general concern expressed worldwide about the problems of environmental pollution: contamination of the atmosphere, rivers, lakes, oceans and groundwater. In this research, a GIS database was created to study the impacts of domestic wastewater disposal methods on Bida town, Niger State, as a model for investigating similar impacts in other Nigerian cities. Results from the GIS database are very useful to decision makers and researchers. Bida town was subdivided into four regions, eight zones and 24 sectors based on the prevailing natural morphology of the town. A GPS receiver and a structured questionnaire were used to collect location information and attribute data from 240 households in the study area. Domestic wastewater samples were collected from the 24 sectors of the study area for laboratory analysis. ArcView 3.2a GIS software was used to create the GIS databases for the ecological, health and socioeconomic impacts of domestic wastewater disposal methods in Bida town.

Keywords: environment, GIS, pollution, software, wastewater

Procedia PDF Downloads 408
24767 Forecasting Unusual Infections of Patients Using Infrequent Weighted Itemset Mining

Authors: Seema Vaidya

Abstract:

Association rule mining is a key issue in data mining. However, standard models ignore the differences among transactions, and weighted association rule mining does not operate on databases with only binary attributes. This paper builds on the frequent-pattern tree (FP-tree), an extended prefix-tree structure for storing compressed, critical information about patterns, and uses an efficient FP-tree-based mining method (the FP-growth algorithm) to mine the complete set of patterns by pattern-fragment growth. The paper addresses the problem of mining rare and weighted itemsets, i.e., the infrequent weighted itemset (IWI) mining problem. Two novel quality measures are proposed for the IWI mining problem, and algorithms are presented that perform both IWI and minimal IWI mining. The rare itemsets are then used in a decision-based structure. The general problem of inducing reliable diagnostic rules is difficult because, in theory, no induction technique can by itself guarantee the correctness of the induced hypotheses; this framework therefore associates disorders with rare signs. An evaluation study demonstrates that the proposed algorithm improves on the base structure and is effective and scalable for mining both long and short diagnostic rules, thereby improving the prediction of rare diseases in patients.
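The core notion of infrequent weighted itemset (IWI) mining — itemsets whose weighted support stays at or below a ceiling — can be sketched with a brute-force toy. The measure below (summing, over the transactions containing the itemset, the minimum weight of its items) is one plausible reading of an IWI-support-min style measure; the paper's actual algorithms work on FP-tree structures, which this omits.

```python
from itertools import combinations

def infrequent_weighted_itemsets(transactions, weights, max_wsup):
    """Enumerate itemsets whose weighted support is positive but does
    not exceed max_wsup. Weighted support of an itemset = sum over the
    transactions containing it of the minimum weight of its items."""
    items = sorted({i for t in transactions for i in t})
    result = []
    for k in range(1, len(items) + 1):
        for combo in combinations(items, k):
            s = set(combo)
            wsup = sum(min(weights[i] for i in s)
                       for t in transactions if s <= set(t))
            if 0 < wsup <= max_wsup:
                result.append((frozenset(s), wsup))
    return result
```

In a toy medical record where 'fever' is common (low weight) and 'rash' is rare, the combination {fever, rash} surfaces as a rare weighted itemset — the kind of rare sign the framework associates with a disorder.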

Keywords: association rule, data mining, IWI mining, infrequent item set, frequent pattern growth

Procedia PDF Downloads 388
24766 Development of Non-Intrusive Speech Evaluation Measure Using S-Transform and LightGBM

Authors: Tusar Kanti Dash, Ganapati Panda

Abstract:

The evaluation of speech quality and intelligibility is critical to the overall effectiveness of speech enhancement algorithms. Several intrusive and non-intrusive measures are employed to calculate these parameters. Non-intrusive evaluation is the most challenging because, very often, the reference clean speech data is not available. In this paper, a novel non-intrusive speech evaluation measure is proposed using audio features derived from the Stockwell transform. These features are used with the Light Gradient Boosting Machine (LightGBM) for the effective prediction of speech quality and intelligibility. The proposed model is analyzed using noisy and reverberant speech from four databases, and the results are compared with standard intrusive evaluation measures. The comparative analysis shows that the proposed model performs better than the standard non-intrusive models.
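A minimal sketch of the non-intrusive idea: map time-frequency features of a degraded signal to a quality score without a clean reference. NumPy-only stand-ins are used here, with short-time FFT statistics in place of Stockwell-transform features and a least-squares fit in place of the LightGBM regressor; the signals and pseudo-MOS targets are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def tf_features(x, win=64):
    # Stand-in for Stockwell-transform features: mean and std of the
    # magnitude spectrum over short frames (a much cruder descriptor).
    frames = x[: len(x) // win * win].reshape(-1, win)
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return np.hstack([mag.mean(0), mag.std(0)])

# Toy dataset: noisy sinusoids whose "quality" drops with the noise level.
X, y = [], []
for noise in np.linspace(0.0, 1.0, 50):
    sig = np.sin(np.linspace(0, 40 * np.pi, 1024)) + noise * rng.standard_normal(1024)
    X.append(tf_features(sig))
    y.append(5.0 - 4.0 * noise)          # pseudo-MOS target in [1, 5]
X, y = np.asarray(X), np.asarray(y)

# Linear least squares as a stand-in for the LightGBM regressor.
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ w
```

In the actual measure the regressor would be trained on features from databases labelled with intrusive scores (e.g., PESQ/STOI), then applied to unseen degraded speech alone.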

Keywords: non-intrusive speech evaluation, S-transform, LightGBM, speech quality, intelligibility

Procedia PDF Downloads 248
24765 Microbial Dark Matter Analysis Using 16S rRNA Gene Metagenomics Sequences

Authors: Hana Barak, Alex Sivan, Ariel Kushmaro

Abstract:

Microorganisms are the most diverse and abundant life forms on Earth and account for a large portion of the Earth’s biomass and biodiversity. To date, though, our knowledge regarding microbial life is lacking, as it is based mainly on information from cultivated organisms. Indeed, microbiologists have borrowed from astrophysics and termed the ‘uncultured microbial majority’ as ‘microbial dark matter’. The realization of how diverse and unexplored microorganisms are actually stems from recent advances in molecular biology, and in particular from novel methods for sequencing microbial small subunit ribosomal RNA genes directly from environmental samples, termed next-generation sequencing (NGS). This led us to use NGS, which generates several gigabases of sequencing data in a single experimental run, to identify and classify environmental samples of microorganisms. In metagenomic sequencing analysis (both 16S and shotgun), sequences are compared to reference databases that contain only a small part of the existing microorganisms, and therefore their taxonomy assignment may reveal groups of unknown microorganisms or origins. These unknowns, the ‘microbial sequences dark matter’, are usually ignored in spite of their great importance. The goal of this work was to develop an improved bioinformatics method that enables more complete analyses of the microbial communities in numerous environments. Therefore, NGS was used to identify previously unknown microorganisms from three different environments (industrial wastewater, Negev Desert rocks, and water wells in the Arava valley). 16S rRNA gene metagenome analysis of the microorganisms from these three environments produced about 4 million reads across 75 samples. Between 0.1% and 12% of the sequences in each sample were tagged as ‘Unassigned’.
Employing a relatively simple methodology of resequencing the original gDNA samples by Sanger or Illumina MiSeq sequencing with specific primers, this study demonstrates that the mysterious ‘Unassigned’ group apparently contains sequences of candidate phyla. These unknown sequences can be located on a phylogenetic tree and thus provide a better understanding of the ‘sequences dark matter’ and its role in the research of microbial communities and diversity. Studying this ‘dark matter’ will extend the existing databases and could reveal the hidden potential of the ‘microbial dark matter’.

Keywords: bacteria, bioinformatics, dark matter, Next Generation Sequencing, unknown

Procedia PDF Downloads 236
24764 The Impact of Globalization on the Development of Israel's Advanced Industry

Authors: Erez Cohen

Abstract:

The study examines the socioeconomic impact of the development of an advanced industry in Israel. The research method is based on data collected from the Israel Central Bureau of Statistics and from the National Insurance Institute (NII) databases, which provided information on the economic and social changes during the 1990s. The research findings indicate that, as a result of globalization processes, the weight of traditional industry began to diminish through factory closures and the laying off of workers. These circumstances led to growing unemployment among the weaker groups in Israeli society, detracting from their income, thus increasing inequality among different socioeconomic groups in Israel and widening social disparities.

Keywords: globalization, Israeli advanced industry, public policy, socio-economic indicators

Procedia PDF Downloads 150
24763 Urban Growth Analysis Using Multi-Temporal Satellite Images, Non-stationary Decomposition Methods and Stochastic Modeling

Authors: Ali Ben Abbes, Imed Riadh Farah, Vincent Barra

Abstract:

Remotely sensed data are a significant source for monitoring and updating land use/cover databases. Change detection in urban areas has been a subject of intensive research, and timely, accurate data on the spatio-temporal changes of urban areas are therefore required. The data extracted from multi-temporal satellite images are usually non-stationary: the changes evolve in both time and space. This paper proposes a methodology for change detection in urban areas that combines a non-stationary decomposition method with stochastic modeling. The input of the methodology is a sequence of satellite images I1, I2, ..., In acquired at different periods (t = 1, 2, ..., n). First, preprocessing of the multi-temporal satellite images is applied (e.g., radiometric, atmospheric, and geometric corrections). The systematic study of global urban expansion in this methodology can be approached in two ways. The first considers the urban area as a single object as opposed to non-urban areas (e.g., vegetation, bare soil, and water); the objective is to extract the urban mask. The second aims at a more detailed knowledge of the urban area, distinguishing different types of tissue within it. To validate the approach, a database of Tres Cantos, Madrid (Spain) was used, derived from Landsat over the period January 2004 to July 2013 by collecting two frames per year at a spatial resolution of 25 meters. The obtained results show the effectiveness of the method.
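The first approach, extracting a binary urban mask per date and then differencing, can be illustrated on toy data. The tiny "images", the threshold, and the band semantics below are invented for illustration and are not the paper's actual processing chain.

```python
import numpy as np

# Two toy single-band "satellite images" at t1 and t2, in which urban
# surfaces are assumed bright (illustrative reflectance values).
img_t1 = np.array([[0.2, 0.2, 0.7],
                   [0.2, 0.7, 0.7],
                   [0.2, 0.2, 0.2]])
img_t2 = np.array([[0.2, 0.7, 0.7],
                   [0.7, 0.7, 0.7],
                   [0.2, 0.2, 0.7]])

# Urban vs. non-urban as binary masks via simple thresholding.
urban_t1 = img_t1 > 0.5
urban_t2 = img_t2 > 0.5

# Pixels that became urban between the two dates, and a crude growth rate
# relative to the initially non-urban area.
new_urban = urban_t2 & ~urban_t1
growth_rate = new_urban.sum() / (~urban_t1).sum()
```

A real pipeline would replace the threshold with the non-stationary decomposition and feed the per-date masks into the stochastic model over the whole image sequence.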

Keywords: multi-temporal satellite image, urban growth, non-stationary, stochastic model

Procedia PDF Downloads 417
24762 Global Solar Irradiance: Data Imputation to Analyze Complementarity Studies of Energy in Colombia

Authors: Jeisson A. Estrella, Laura C. Herrera, Cristian A. Arenas

Abstract:

The Colombian electricity sector has been transforming through the insertion of new energy sources for electricity generation, one of them being solar energy, which is being promoted by companies interested in photovoltaic technology. The study of this technology is important for electricity generation in general and for the planning of the sector from the perspective of energy complementarity. It is precisely in this last approach that the project is situated: we are interested in the reliability of the electrical system when climatic phenomena such as El Niño occur, and in determining whether it is viable to replace or expand thermoelectric plants with renewable electricity generation systems. In this regard, some difficulties related to the basic information on renewable energy sources must first be solved, since the measured data come from automatic weather stations administered by the Institute of Hydrology, Meteorology and Environmental Studies (IDEAM) and, over the study period (2005-2019), contain significant amounts of missing data. For this reason, the overall objective of the project is to complete the global solar irradiance datasets, obtaining time series that will allow the elaboration of energy complementarity analyses in a subsequent project. The filling of the databases will be done through numerical and statistical methods, which are basic techniques for undergraduate students in technical areas who are starting out as researchers.
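As a minimal illustration of the kind of numerical gap filling the project describes, the sketch below linearly interpolates missing hourly irradiance values. The data are synthetic and clear-sky-like; the real work would use IDEAM station records and more elaborate statistical methods.

```python
import numpy as np

# Hypothetical hourly global solar irradiance series (W/m²) with gaps (NaN).
t = np.arange(24)
ghi = 800 * np.clip(np.sin((t - 6) * np.pi / 12), 0, None)  # clear-sky-like shape
ghi[[9, 10, 15]] = np.nan                                   # simulated missing records

def fill_gaps(series):
    # Linear interpolation across missing points: a basic first-pass
    # imputation before more elaborate statistical methods are applied.
    s = series.copy()
    mask = np.isnan(s)
    s[mask] = np.interp(np.flatnonzero(mask), np.flatnonzero(~mask), s[~mask])
    return s

filled = fill_gaps(ghi)
```

Linear interpolation is only defensible for short gaps; longer outages usually call for climatological or regression-based infilling against neighbouring stations.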

Keywords: time series, global solar irradiance, imputed data, energy complementarity

Procedia PDF Downloads 55
24761 Use of the Gas Chromatography Method for Hydrocarbons' Quality Evaluation in the Offshore Fields of the Baltic Sea

Authors: Pavel Shcherban, Vlad Golovanov

Abstract:

Currently, there is active geological exploration and development of the subsoil of the Kaliningrad region shelf. To carry out a comprehensive and accurate assessment of the volumes and degree of extraction of hydrocarbons from the discovered deposits, it is necessary to establish not only a number of geological and lithological characteristics of the structures under study, but also to determine the oil quality, viscosity, density, and fractional composition as accurately as possible. Within the scope of the work considered, gas chromatography is one of the most productive methods, allowing the rapid generation of a significant amount of initial data. This article examines the application of the gas chromatography method for determining the chemical characteristics of hydrocarbons from the Kaliningrad shelf fields, together with a correlation-regression analysis of these parameters against the previously obtained chemical characteristics of hydrocarbon deposits located onshore in the region. In the course of the research, a number of methods of mathematical statistics and computer processing of large data sets were applied, which makes it possible to evaluate the similarity of the deposits, to refine the reserve estimates, and to make a number of assumptions about the genesis of the hydrocarbons under analysis.
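The correlation-regression step can be sketched as follows. The paired density values are illustrative placeholders, not the study's measurements; the point is only the shape of the comparison between offshore and onshore samples.

```python
import numpy as np

# Hypothetical paired oil densities (g/cm³): offshore shelf samples vs.
# nearby onshore fields (illustrative values only).
offshore = np.array([0.845, 0.851, 0.860, 0.872, 0.880])
onshore  = np.array([0.848, 0.855, 0.861, 0.875, 0.884])

# Pearson correlation and a least-squares regression line: the core of a
# correlation-regression comparison between the two groups of deposits.
r = np.corrcoef(offshore, onshore)[0, 1]
slope, intercept = np.polyfit(offshore, onshore, 1)
```

A correlation near 1 with a slope near 1 would support the identity (common genesis) of the deposits; in practice the same analysis would be run across viscosity, fractional composition, and the chromatographic peak data.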

Keywords: computer processing of large databases, correlation-regression analysis, hydrocarbon deposits, method of gas chromatography

Procedia PDF Downloads 145
24760 Applications of Big Data in Education

Authors: Faisal Kalota

Abstract:

Big Data and analytics have gained huge momentum in recent years. Big Data feeds into the field of Learning Analytics (LA), which may allow academic institutions to better understand learners’ needs and proactively address them. Hence, it is important to have an understanding of Big Data and its applications. The purpose of this descriptive paper is to provide an overview of Big Data, the technologies used in Big Data, and some of the applications of Big Data in education. Additionally, it discusses some of the concerns related to Big Data and current research trends. While Big Data can provide big benefits, it is important that institutions understand their own needs, infrastructure, resources, and limitations before jumping on the Big Data bandwagon.

Keywords: big data, learning analytics, analytics, big data in education, Hadoop

Procedia PDF Downloads 393
24759 Peripheral Nerves Cross-Sectional Area for the Diagnosis of Diabetic Polyneuropathy: A Meta-Analysis of Ultrasonographic Measurements

Authors: Saeed Pourhassan, Nastaran Maghbouli

Abstract:

1) Background: It has been hypothesized that, in individuals with diabetes mellitus, the peripheral nerves are swollen due to sorbitol over-accumulation. Additionally, growing evidence suggests that electrodiagnostic study of diabetes-induced neuropathy is a method with some challenges. 2) Objective: To examine the performance of sonographic cross-sectional area (CSA) measurements in the diagnosis of diabetic polyneuropathy (DPN). 3) Data Sources: Electronic databases, comprising PubMed, EMBASE, and Google Scholar, were searched for appropriate studies published before Jan 1, 2020. 4) Study Selection: Eleven trials comparing peripheral nerve CSA measurements between participants with and without DPN were included. 5) Data Extraction: Study design, participants' demographic characteristics, the diagnostic reference for DPN, the peripheral nerves evaluated, and the methods of CSA measurement. 6) Data Synthesis: Among the different peripheral nerves, the tibial nerve diagnostic odds ratio pooled from five studies (713 participants) was 4.46 (95% CI, 0.35–8.57), the largest among the nerves examined (P < 0.0001, I² = 64%). Median nerve CSA at the wrist and mid-arm took second and third place, with ORs of 2.82 (1.50–4.15) and 2.02 (0.26–3.77), respectively. The sensitivity and specificity pooled from two studies for the sural nerve were 0.78 (95% CI, 0.68–0.89) and 0.68 (95% CI, 0.53–0.74). Studies of other nerves were limited to one study each. The largest sensitivity was for the sural nerve, and the largest specificity was for the tibial nerve. 7) Conclusions: Peripheral nerve CSA measured by ultrasound imaging is useful for the diagnosis of DPN and differs most significantly between patients and participants without DPN at the tibial nerve. Because the tibial nerve CSA in healthy participants, at various locations, rarely exceeds 24 mm², this value can be considered a cutoff point for diagnosing DPN.
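The inverse-variance pooling behind such summary odds ratios can be sketched in a few lines. The per-study odds ratios and standard errors below are illustrative placeholders, not the review's actual data, and a fixed-effect model is assumed for simplicity.

```python
import math

# Hypothetical per-study diagnostic odds ratios and standard errors of
# the log-OR (illustrative numbers only).
studies = [(4.2, 0.35), (5.1, 0.40), (3.8, 0.30), (4.9, 0.45), (4.4, 0.38)]

# Fixed-effect inverse-variance pooling on the log-OR scale: each study
# is weighted by 1/SE², then the weighted mean is exponentiated back.
num = sum(math.log(or_) / se**2 for or_, se in studies)
den = sum(1 / se**2 for _, se in studies)
log_pooled = num / den
se_pooled = math.sqrt(1 / den)

pooled_or = math.exp(log_pooled)
ci = (math.exp(log_pooled - 1.96 * se_pooled),
      math.exp(log_pooled + 1.96 * se_pooled))
```

With the heterogeneity reported above (I² = 64%), a random-effects model (e.g., DerSimonian-Laird, which inflates each SE by a between-study variance term) would normally be preferred over this fixed-effect sketch.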

Keywords: diabetes, diagnosis, polyneuropathy, ultrasound

Procedia PDF Downloads 120
24758 The Implementation of Sexual and Reproductive Health Education Policy in Schools in Asia and Africa: A Scoping Review

Authors: Rhea Khosla, Victoria Tzortziou-Brown

Abstract:

Introduction: Adolescent SRH has been neglected since the start of the millennium. Adolescents comprise 16% of the global population, with the largest proportion living in Asia (650 million). By late adolescence, individuals in these regions are likely to become sexually active, and thus they must understand their SRH rights. Many lack knowledge of SRH and use unreliable sources for such information. Sex education is necessary to standardize and inform sexual knowledge, which empowers adolescents to make informed SRH decisions. School is an appropriate environment for this; however, SRH education requires effective policy for its enforcement. Nonetheless, this issue remains of low political priority in Asia and Africa. Current literature on sex education policy in schools in these regions is scarce and tends to have broad aims. Thus, a scoping review was necessary. Methods: Literature searches were conducted in February 2023 using six databases, including grey literature databases (PubMed, Scopus, Embase, Web of Science, Google Scholar, Global Index Medicus), returning a total of 1,537 unique articles. After screening titles, abstracts, and full text, 17 articles remained. The references of included articles were additionally searched, producing a further 7 articles, which then underwent thematic analysis. Results: Most countries in Africa and Asia did not have studies on this topic. The included studies derived data from interviews with key stakeholders, while quantitative methods quantified questionnaire responses. Barriers were: policy/curriculum issues, societal opinions, teaching discomfort, and lack of educator training. Limitations were insufficient timing, inconsistent implementation, insufficient hours dedicated to teaching, education received late into schooling, and discrepancies between teachers, schools, and students about whether policies were being implemented.
Discussion: Based on the existing limited evidence, a cultural shift to reduce stigma seems necessary, alongside teacher and student involvement in policy formulation, with effective implementation monitoring and educator training.

Keywords: adolescent, Africa, Asia, education, sexual and reproductive health, policy

Procedia PDF Downloads 37
24757 The Use Management of the Knowledge Management and the Information Technologies in the Competitive Strategy of a Self-Propelling Industry

Authors: Guerrero Ramírez Sandra, Ramos Salinas Norma Maricela, Muriel Amezcua Vanesa

Abstract:

This article presents the beginning of a wider study that intends to demonstrate how, within organizations of the automotive industry in the city of Querétaro, knowledge management and technological management are required, as well as people’s initiative and the interaction embedded within the organization, with an appropriate environment that facilitates information conversion supported by a wide range of information technologies management (ITM). A company was identified for the pilot study of this research, from which descriptive and inferential research information was obtained. The results of the pilot suggest that some respondents did not identify the knowledge management topic, even though staff have access to information technology (IT) that serves to enhance access to knowledge (through the internet, email, databases, external and internal company personnel, suppliers, customers, and competitors); this implies that there are knowledge management (KM) problems. The data show that academically well-prepared organizations often do not recognize the importance of knowledge in the business, nor of its implementation, which in the end greatly influences how knowledge is managed; this should guide the company toward greater insight in the search for a competitive strategy, given that the company has an excellent technological infrastructure yet KM was not exploited. Cultural diversity is another factor that was observed among the staff.

Keywords: Knowledge Management (KM), Technological Knowledge Management (TKM), Technology Information Management (TI), access to knowledge

Procedia PDF Downloads 484
24756 Association between Anemia and Maternal Depression during Pregnancy: Systematic Review

Authors: Gebeyaw Molla Wondim, Damen Haile Mariam, Wubegzier Mekonnen, Catherine Arsenault

Abstract:

Introduction: Maternal depression is a common psychological disorder that mostly occurs during pregnancy and after childbirth. It affects approximately one in four women worldwide. There is inconsistent evidence regarding the association between anemia and maternal depression. The objective of this systematic review was to examine the association between anemia and depression during pregnancy. Method: A comprehensive search of articles published before March 8, 2024, was conducted in seven databases: PubMed, Scopus, Web of Science, PsycINFO, CINAHL, the Cochrane Library, and Google Scholar. The Boolean operators “AND”, “OR”, and “NOT” were used to connect the MeSH terms and keywords. Rayyan software was used to screen articles for final retrieval, and a PRISMA diagram was used to show the article selection process. Data extraction and risk-of-bias assessment were done by two reviewers independently. The JBI critical appraisal tool was used to assess the methodological quality of the retrieved articles. Heterogeneity was assessed through visual inspection of the extracted results, and narrative analysis was used to synthesize the results. Result: A total of 2,413 articles were obtained from the seven electronic databases. Of these, 2,398 were removed: 702 as duplicates, 1,678 by the title and abstract selection criteria, and 18 after full-text review. Finally, 15 articles with a total of 628,781 pregnant women were included in this systematic review: seven cohort studies, two case-control studies, and six cross-sectional studies. All included studies were published between 2013 and 2022. Studies conducted in the United States, South Korea, Finland, and one in South India found no significant association between anemia and maternal depression during pregnancy.
On the other hand, studies conducted in Australia, Canada, Finland, Israel, Turkey, Vietnam, Ethiopia, and South India showed a significant association between anemia and depression during pregnancy. Conclusion: The overall findings of the systematic review show that the burden of anemia and antenatal depression is much higher among pregnant women in developing countries. Around three-fourths of the studies, and almost all of those conducted in LMICs, show that anemia is positively associated with antenatal depression.

Keywords: pregnant, women, anemia, depression

Procedia PDF Downloads 15
24755 Birth Path and the Vitality of Caring Models in the Continuity of Midwifery

Authors: Elnaz Lalezari, Ramin Ghasemi Shaya

Abstract:

The birth path is affected by a fracture in the patient care process, creating a discontinuity of care. The pregnant woman has to interact with many professionals during pregnancy, childbirth, and the puerperium. However, during the last ten years, there has been an increase in pregnancy care delivered by the midwife, who is considered the practitioner with the right competences, able to take care of each pregnancy and to draw on other professionals' contributions in order to improve maternal and neonatal health outcomes. The aim is to verify whether there is evidence of effectiveness supporting the caseload midwifery care model, and whether this model can be applied to the birth path in Italy. A literature review was conducted using several search engines (Google, Bing) and specific databases (MEDLINE, CINAHL, Embase, ClinicalTrials.gov). Italian regulations, national guidelines, and WHO recommendations were also discussed. Results: The search string, properly adapted to the three databases, returned the following results: MEDLINE 64 articles, CINAHL 94 articles, Embase 88 articles. From this selection, 14 articles were extracted: 1 systematic review, 3 randomized controlled trials, 7 observational studies, and 3 qualitative studies. Caseload midwifery appears to be an effective and reliable organizational and caring model. It meets criteria of quality and safety and responds to women's needs not only during pregnancy but also during the post-partum phase. For these reasons, it also appears very valuable for the birth path in the Italian context.

Keywords: midwifery, care, caseload, maternity

Procedia PDF Downloads 120
24754 Virtual Dimension Analysis of Hyperspectral Imaging to Characterize a Mining Sample

Authors: L. Chevez, A. Apaza, J. Rodriguez, R. Puga, H. Loro, Juan Z. Davalos

Abstract:

The Virtual Dimension (VD) procedure is used to analyze hyperspectral image (HSI) data in order to estimate the abundance of the mineral components of a mining sample. Hyperspectral images derived from reflectance spectra (NIR region) are pre-treated using Standard Normal Variate (SNV) and Minimum Noise Fraction (MNF) methodologies. The endmember components are identified by the Simplex Growing Algorithm (SGA) and then fitted to the reflectance spectra of reference databases using a Simulated Annealing (SA) methodology. The mineral abundances obtained for the studied sample are very close to those obtained using XRD, with a total relative error of 2%.
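The SNV pre-treatment mentioned above has a standard, compact definition: each spectrum is centred and scaled by its own mean and standard deviation, which removes multiplicative scatter effects between samples. A minimal sketch with toy spectra:

```python
import numpy as np

def snv(spectra):
    # Standard Normal Variate: centre and scale each spectrum individually
    # so that differing scatter/path-length effects cancel out.
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Toy NIR reflectance spectra (rows = samples, columns = wavelengths);
# the second row has the same shape as the first but twice the scatter.
raw = np.array([[0.8, 0.9, 1.1, 1.2],
                [1.6, 1.8, 2.2, 2.4]])
corrected = snv(raw)
```

After SNV the two rows coincide, since only their multiplicative scaling differed; the MNF, SGA, and SA stages of the pipeline then operate on these normalized spectra.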

Keywords: hyperspectral imaging, minimum noise fraction, MNF, simplex growing algorithm, SGA, standard normal variate, SNV, virtual dimension, XRD

Procedia PDF Downloads 148
24753 Applications Using Geographic Information System for Planning and Development of Energy Efficient and Sustainable Living for Smart-Cities

Authors: Javed Mohammed

Abstract:

As urbanization has been, and will be, happening at an unprecedented scale worldwide, strong requirements from academic research and practical fields are pressing for smart management and intelligent planning of cities, to handle the increasing demands on infrastructure and the potential risks of inhabitant agglomeration in disaster management. Geospatial data and Geographic Information Systems (GIS) are essential components for building smart cities, at the most basic level mapping the physical world into a virtual environment as a referencing framework. At a higher level, GIS has been becoming very important to smart cities in different sectors. In the digital-city era, digital maps and geospatial databases have long been integrated into government workflows in land management, urban planning, and transportation. GIS is anticipated to be more powerful not only as an archival and data-management tool but also as a source of spatial models supporting decision-making in intelligent cities. The purpose of this project is to offer observations and analysis based on a detailed discussion of a GIS-driven framework for the development of smart and sustainable cities through high penetration of renewable energy technologies.

Keywords: digital maps, geo-spatial, geographic information system, smart cities, renewable energy, urban planning

Procedia PDF Downloads 515