Search results for: NoSQL databases
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 808

778 Speech Emotion Recognition with Bi-GRU and Self-Attention based Feature Representation

Authors: Bubai Maji, Monorama Swain

Abstract:

Speech is considered an essential and most natural medium for interaction between machines and humans. However, extracting effective features for speech emotion recognition (SER) remains challenging. Existing studies show that temporal information can be captured, but high-level temporal-feature learning has yet to be fully investigated. In this paper, we present an efficient novel method using a self-attention (SA) mechanism in combination with a Convolutional Neural Network (CNN) and a Bi-directional Gated Recurrent Unit (Bi-GRU) network to learn high-level temporal features. To further enhance the representation of these high-level temporal features, we integrate the Bi-GRU output with learnable attention weights produced by SA, which improves performance. We evaluate the proposed method on our own SITB-OSED database and on the IEMOCAP database, and the experimental results achieve state-of-the-art performance on both.
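The attention-weighted pooling step described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: toy vectors stand in for real Bi-GRU hidden states, and the query vector is fixed rather than learned.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(hidden_states, query):
    """Collapse a sequence of hidden states (e.g. Bi-GRU outputs) into one
    utterance-level vector via dot-product attention weighting."""
    scores = [sum(h_i * q_i for h_i, q_i in zip(h, query)) for h in hidden_states]
    weights = softmax(scores)
    dim = len(hidden_states[0])
    return [sum(w * h[d] for w, h in zip(weights, hidden_states)) for d in range(dim)]

# Toy example: 3 time steps, 2-dimensional hidden states.
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
pooled = attention_pool(H, query=[1.0, 1.0])
```

In a real SER pipeline the query would itself be a learnable parameter and the hidden states would come from the CNN/Bi-GRU stack.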

Keywords: Bi-GRU, 1D-CNNs, self-attention, speech emotion recognition

Procedia PDF Downloads 92
777 Applying Spanning Tree Graph Theory for Automatic Database Normalization

Authors: Chetneti Srisa-an

Abstract:

In the field of knowledge and data engineering, the relational database is the best-established repository for storing real-world data, and it has been in use around the world for decades. Normalization is the most important process in the analysis and design of relational databases. It aims at creating a set of relational tables with minimum data redundancy that preserve consistency and facilitate correct insertion, deletion, and modification. Despite its importance, very few algorithms have been developed for use in commercial automatic normalization tools, and normalization is rarely performed automatically rather than manually. Moreover, for today's large and complex databases, manual normalization is harder still. This paper presents a new, fully automated relational database normalization method. It first produces a directed graph and a spanning tree, then proceeds to generate the 2NF, 3NF, and BCNF normal forms. The benefit of this new algorithm is that it can cope with a large set of complex functional dependencies.
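Normal-form tests of the kind automated here rest on computing attribute closures under a set of functional dependencies. A minimal sketch of that computation, on an invented toy schema rather than the paper's graph-based algorithm, is:

```python
def closure(attrs, fds):
    """Compute the closure of a set of attributes under functional
    dependencies; fds is a list of (lhs, rhs) pairs of attribute sets."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# Toy schema R(A, B, C, D) with A -> B and B -> C.
fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
key_check = closure({"A", "D"}, fds) == {"A", "B", "C", "D"}  # {A, D} is a key
```

A set of attributes is a candidate key when its closure covers the whole schema, which is exactly the test `key_check` performs.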

Keywords: relational database, functional dependency, automatic normalization, primary key, spanning tree

Procedia PDF Downloads 330
776 BiLex-Kids: A Bilingual Word Database for Children 5-13 Years Old

Authors: Aris R. Terzopoulos, Georgia Z. Niolaki, Lynne G. Duncan, Mark A. J. Wilson, Antonios Kyparissiadis, Jackie Masterson

Abstract:

As word databases for bilingual children are not available, researchers, educators and textbook writers must rely on monolingual databases. The aim of this study is thus to develop a bilingual word database, BiLex-kids, an online open access developmental word database for 5-13 year old bilingual children who learn Greek as a second language and have English as their dominant one. BiLex-kids is compiled from 120 Greek textbooks used in Greek-English bilingual education in the UK, USA and Australia, and provides word translations in the two languages, pronunciations in Greek, and psycholinguistic variables (e.g. Zipf, Frequency per million, Dispersion, Contextual Diversity, Neighbourhood size). After clearing the textbooks of non-relevant items (e.g. punctuation), algorithms were applied to extract the psycholinguistic indices for all words. As well as one total lexicon, the database produces values for all ages (one lexicon for each age) and for three age bands (one lexicon per age band: 5-8, 9-11, 12-13 years). BiLex-kids provides researchers with accurate figures for a wide range of psycholinguistic variables, making it a useful and reliable research tool for selecting stimuli to examine lexical processing among bilingual children. In addition, it offers children the opportunity to study word spelling, learn translations and listen to pronunciations in their second language. It further benefits educators in selecting age-appropriate words for teaching reading and spelling, while special educational needs teachers will have a resource to control the content of word lists when designing interventions for bilinguals with literacy difficulties.

Keywords: bilingual children, psycholinguistics, vocabulary development, word databases

Procedia PDF Downloads 289
775 Algorithm for Information Retrieval Optimization

Authors: Kehinde K. Agbele, Kehinde Daniel Aruleba, Eniafe F. Ayetiran

Abstract:

When using Information Retrieval Systems (IRS), users often present search queries made of ad-hoc keywords, and it is then up to the IRS to obtain a precise representation of the user’s information need and the context of the information. This paper investigates optimizing an IRS for individual information needs, ranking results in order of relevance. The study addresses the development of algorithms that optimize the ranking of documents retrieved from an IRS, and it discusses and describes a Document Ranking Optimization (DROPT) algorithm for information retrieval (IR) in Internet-based or designated-database environments. As the volume of information available online and in designated databases grows continuously, ranking algorithms can play a major role in the relevance of search results. In this paper, a DROPT technique for documents retrieved from a corpus is developed with respect to document index keywords and the query vectors, based on calculating the weight of each document.
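As one concrete example of keyword weighting for ranking, a TF-IDF score can be computed as below. This is a standard weighting scheme used for illustration, not necessarily the DROPT weighting itself, and the documents are invented.

```python
import math
from collections import Counter

def tfidf_rank(query_terms, documents):
    """Rank documents by the summed TF-IDF weight of the query terms."""
    n = len(documents)
    doc_counts = [Counter(doc) for doc in documents]
    df = Counter()                      # document frequency per term
    for counts in doc_counts:
        for term in counts:
            df[term] += 1
    scores = []
    for i, counts in enumerate(doc_counts):
        score = 0.0
        for term in query_terms:
            if term in counts:
                tf = counts[term] / sum(counts.values())
                idf = math.log(n / df[term])
                score += tf * idf
        scores.append((score, i))
    return sorted(scores, reverse=True)  # best-scoring documents first

docs = [["nosql", "graph", "database"],
        ["sql", "relational", "database"],
        ["nosql", "document", "store"]]
ranking = tfidf_rank(["nosql"], docs)
```

Documents that do not contain any query term score zero and sink to the bottom of the ranking.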

Keywords: information retrieval, document relevance, performance measures, personalization

Procedia PDF Downloads 214
774 Dynamic Environmental Impact Study during the Construction of the French Nuclear Power Plants

Authors: A. Er-Raki, D. Hartmann, J. P. Belaud, S. Negny

Abstract:

This paper has a double purpose: first, a literature review of life cycle analysis (LCA), and second, a comparison between conventional (static) LCA and multi-level dynamic LCA on the following items: (i) the evolution of inventories over time and (ii) the temporal evolution of the databases. The first part of the paper summarizes the state of the art of the static LCA approach. The limits of static LCA have been identified, especially the failure to consider spatial and temporal evolution in the inventory, in the characterization factors (CFs), and in the databases. A description of the different levels of integration of temporality into life cycle analysis studies is then given. In the second part, the dynamic inventory is evaluated first for a single nuclear plant and then for the entire French nuclear power fleet, taking into account the construction durations of all the plants. In addition, the databases have been adapted by integrating the temporal variability of the French energy mix; several iterations were used to converge towards the real environmental impact of the mix. The databases were further adapted to take into account the temporal evolution of market data for raw materials. The energy mix of the period studied was identified by extrapolating the reference production values of each means of production. An application to the construction of the French nuclear power plants from 1971 to 2000 has been performed, in which a dynamic inventory of raw materials has been evaluated. The impacts were then characterized with the ILCD 2011 characterization method. For comparison with a purely static approach, a static impact assessment was made with the Ecoinvent v3.4 data sheets without adaptation and with a static inventory assuming that all the power stations were built at the same time.
Finally, static and dynamic LCA approaches were compared to determine the gap between them at each of the two levels of integration. The results were analyzed to identify the contribution of the evolving nuclear fleet construction to the total environmental impacts of the French energy mix over the same period. An equivalent dynamic strategy will further be applied to identify the environmental impacts of different energy-transition scenarios, making it possible to choose the best energy mix from an environmental viewpoint.
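The dynamic-inventory idea, spreading each plant's material demand over its construction years and weighting each year by a time-dependent impact factor for that year's energy mix, can be sketched as follows. The plant schedules, demands, and factors here are invented for illustration only.

```python
def dynamic_impact(plants, yearly_factor):
    """Sum year-by-year impacts: each plant's material demand is spread
    uniformly over its construction years, then multiplied by the
    time-dependent impact factor of that year's energy mix."""
    impact_by_year = {}
    for start, end, total_demand in plants:
        years = range(start, end + 1)
        per_year = total_demand / len(years)
        for y in years:
            impact_by_year[y] = impact_by_year.get(y, 0.0) + per_year * yearly_factor[y]
    return impact_by_year

# (start year, end year, total demand) per plant -- invented figures.
plants = [(1971, 1975, 100.0), (1974, 1978, 100.0)]
# Impact factor per unit demand, declining as the mix decarbonizes -- invented.
factor = {y: 1.0 - 0.05 * (y - 1971) for y in range(1971, 1979)}
result = dynamic_impact(plants, factor)
```

A static assessment would instead apply a single factor to the whole demand; with a declining factor, the dynamic total comes out lower than the static one evaluated at the start year.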

Keywords: LCA, static, dynamic, inventory, construction, nuclear energy, energy mix, energy transition

Procedia PDF Downloads 82
773 Anomaly Detection with ANN and SVM for Telemedicine Networks

Authors: Edward Guillén, Jeisson Sánchez, Carlos Omar Ramos

Abstract:

In recent years, a wide variety of applications have been developed with Support Vector Machine (SVM) methods and Artificial Neural Networks (ANNs). In general, these methods depend on intrusion-knowledge databases such as KDD99, ISCX, and CAIDA, among others. New classes of detectors are generated by machine learning techniques, trained and tested over network databases. The detectors are then employed to detect anomalies in network communication scenarios according to the behavior of users' connections. The first detector, based on the training dataset, is deployed in different real-world networks with mobile and non-mobile devices to analyze performance and accuracy against static detection. The vulnerabilities are based on previous work on telemedicine apps developed by the research group. This paper presents the differences in detection results across several network scenarios when traditional detectors built with artificial neural networks and support vector machines are applied.
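A linear SVM of the kind used in such detectors can be trained with the Pegasos sub-gradient method. The sketch below is illustrative only: it uses two invented "connection features" rather than KDD99 records, and omits the bias term and projection step of the full algorithm.

```python
import random

def train_linear_svm(data, labels, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM with the Pegasos sub-gradient method.
    Labels must be +1 (normal) or -1 (anomalous)."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(data)), len(data)):  # shuffled pass
            t += 1
            eta = 1.0 / (lam * t)
            margin = labels[i] * sum(wj * xj for wj, xj in zip(w, data[i]))
            w = [(1 - eta * lam) * wj for wj in w]          # regularization step
            if margin < 1:                                  # hinge-loss step
                w = [wj + eta * labels[i] * xj for wj, xj in zip(w, data[i])]
    return w

# Invented two-feature connection data (e.g. scaled packet rate, error rate).
X = [[1.0, 0.1], [0.9, 0.2], [0.1, 0.9], [0.2, 1.0]]
y = [1, 1, -1, -1]
w = train_linear_svm(X, y)
predict = lambda x: 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1
```

Real detectors would use a full SVM library with kernels and a bias term; the point here is only the shape of the training loop.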

Keywords: anomaly detection, back-propagation neural networks, network intrusion detection systems, support vector machines

Procedia PDF Downloads 321
772 Content-Based Mammograms Retrieval Based on Breast Density Criteria Using Bidimensional Empirical Mode Decomposition

Authors: Sourour Khouaja, Hejer Jlassi, Nadia Feddaoui, Kamel Hamrouni

Abstract:

Most medical images, and especially mammograms, are now stored in large databases, and retrieving a desired image is of great importance for finding diagnoses of previous similar cases. Our method is designed to assist radiologists in retrieving mammographic images whose breast density is similar to that seen on a query mammogram. This is a real challenge given the importance of density criteria in cancer prediction and their effect on segmentation issues. We use Bidimensional Empirical Mode Decomposition (BEMD) to characterize the content of images and the Euclidean distance to measure similarity between images. Experiments on the MIAS mammography image database confirm that the results are promising. Performance was evaluated using precision and recall curves comparing query and retrieved images, and the recall-precision computation proved the effectiveness of applying CBIR to large mammographic image databases: we found a precision of 91.2% at a recall of 86.8%.
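The retrieval step itself, nearest neighbours under Euclidean distance over feature vectors, is simple to sketch. The feature vectors below are invented two-dimensional stand-ins for real BEMD-derived density descriptors, and the image ids only mimic MIAS naming.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query_vec, database, k=3):
    """Return the k database entries closest to the query feature vector.
    `database` maps image ids to feature vectors."""
    ranked = sorted(database.items(), key=lambda item: euclidean(query_vec, item[1]))
    return [image_id for image_id, _ in ranked[:k]]

db = {"mdb001": [0.9, 0.1], "mdb002": [0.85, 0.2],
      "mdb003": [0.1, 0.9], "mdb004": [0.2, 0.8]}
hits = retrieve([0.88, 0.15], db, k=2)
relevant = {"mdb001", "mdb002"}                 # ground-truth similar images
precision = len(set(hits) & relevant) / len(hits)
```

Precision at k is then the fraction of the k retrieved images that are actually relevant, which is how the recall-precision curves in the evaluation are built point by point.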

Keywords: BEMD, breast density, content-based, image retrieval, mammography

Procedia PDF Downloads 210
771 Analyzing Medical Workflows Using Market Basket Analysis

Authors: Mohit Kumar, Mayur Betharia

Abstract:

The healthcare domain, with the emergence of the Electronic Medical Record (EMR), collects a great deal of data that has been attracting data mining experts' interest. In the past, doctors have relied on their intuition while making critical clinical decisions. This paper presents a means to analyze medical workflows and extract business insights from huge medical databases. Market Basket Analysis (MBA) is a data mining technique that has been widely used in the marketing and e-commerce fields to discover associations between products bought together by customers; it helps businesses increase sales by analyzing purchasing behavior and pitching the right product to the right customer. This paper is an attempt to demonstrate Market Basket Analysis applications in healthcare. In particular, it discusses applications of the MBA algorithm Apriori within healthcare in major areas such as analyzing the workflow of diagnostic procedures, up-selling and cross-selling of healthcare systems, and making healthcare systems more user-friendly. We demonstrate the MBA applications using angiography systems, but the approach can be extrapolated to other modalities as well.
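The core of Apriori can be sketched compactly: count candidate itemsets level by level, keeping only those above a minimum support. The "procedure baskets" below are invented; in the paper's setting each basket would be one exam session's recorded steps.

```python
def apriori(transactions, min_support):
    """Return all itemsets whose support (fraction of transactions
    containing them) is at least min_support."""
    n = len(transactions)
    items = {item for t in transactions for item in t}
    frequent = {}
    candidates = [frozenset([i]) for i in items]
    size = 1
    while candidates:
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        level = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(level)
        size += 1
        prev = list(level)
        # Grow candidates only from frequent itemsets (the Apriori property).
        candidates = list({a | b for a in prev for b in prev if len(a | b) == size})
    return frequent

sessions = [frozenset(t) for t in
            [{"angio", "contrast"}, {"angio", "contrast", "stent"},
             {"angio", "stent"}, {"contrast"}]]
freq = apriori(sessions, min_support=0.5)
# Confidence of the rule angio -> contrast:
confidence = freq[frozenset({"angio", "contrast"})] / freq[frozenset({"angio"})]
```

Rules such as "angio implies contrast" are then read off from the frequent itemsets via their confidence, exactly the quantity computed in the last line.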

Keywords: data mining, market basket analysis, healthcare applications, knowledge discovery in healthcare databases, customer relationship management, healthcare systems

Procedia PDF Downloads 140
770 The Role of Cyfra 21-1 in Diagnosing Non Small Cell Lung Cancer (NSCLC)

Authors: H. J. T. Kevin Mozes, Dyah Purnamasari

Abstract:

Background: Lung cancer is the fourth most common cancer in Indonesia, and 85% of all lung cancer cases are non-small cell lung cancer (NSCLC). The indistinct signs and symptoms of NSCLC sometimes lead to misdiagnosis. The gold standard for the diagnosis of NSCLC is histopathological biopsy, which is invasive. Cyfra 21-1 is a tumor marker found in the intermediate filament protein structure of the epithelium. The accuracy of Cyfra 21-1 in diagnosing NSCLC is not yet known, so this report seeks to answer that question. Methods: A literature search was done using the online databases ProQuest and PubMed. Literature was then selected on the basis of inclusion and exclusion criteria, and the selected studies were appraised using the criteria of validity, importance, and applicability. Results: Of the six journal articles appraised, five are valid. The sensitivity reported across the five studies ranges from 50-84.5%, while the specificity ranges from 87.8-94.4%. The likelihood ratios across the appraised literature range from 5.09-10.54, which is categorized as intermediate to high. Conclusion: Serum Cyfra 21-1 is a sensitive and very specific tumor marker for the diagnosis of non-small cell lung cancer (NSCLC).
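The likelihood ratios follow directly from sensitivity and specificity. As an illustration, with hypothetical figures near the top of the reported ranges (not taken from any single appraised study):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios of a diagnostic test."""
    lr_pos = sensitivity / (1.0 - specificity)       # LR+ = sens / (1 - spec)
    lr_neg = (1.0 - sensitivity) / specificity       # LR- = (1 - sens) / spec
    return lr_pos, lr_neg

# Hypothetical values: sensitivity 84.5%, specificity 92%.
lr_pos, lr_neg = likelihood_ratios(0.845, 0.92)
```

An LR+ above 10 is conventionally taken as strong evidence for ruling in disease, which is why the reported 5.09-10.54 range is read as intermediate to high.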

Keywords: cyfra 21-1, diagnosis, nonsmall cell lung cancer, NSCLC, tumor marker

Procedia PDF Downloads 210
769 Formulating Rough Approximations in Information Tables with Possibilistic Information

Authors: Michinori Nakata, Hiroshi Sakai

Abstract:

A rough set, which consists of lower and upper approximations, is formulated in information tables containing possibilistic information. First, lower and upper approximations are derived on the basis of possible-world semantics, in the same way as Lipski did in the field of incomplete databases, in order to clarify the fundamentals of rough sets under possibilistic information. Possibility and necessity measures are used, as is done in possibilistic databases. As a result, each object has certain and possible membership degrees in the lower and upper approximations, and these degrees are lower and upper bounds; the degree to which an object belongs to the lower and upper approximations is therefore expressed by an interval value. Moreover, the complementarity property linking the lower and upper approximations holds, as it does under complete information. Second, the approach based on indiscernibility relations proposed by Dubois and Prade is extended to three cases. In the first case, the objects used to approximate a set of objects are characterized by possibilistic information. In the second case, the objects used to approximate a set of objects with possibilistic information are characterized by complete information. In the third case, objects characterized by possibilistic information approximate a set of objects with possibilistic information. The extended approach yields the same results as the approach based on possible-world semantics, which justifies our extension.
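For reference, the classical complete-information approximations that the possibilistic construction generalizes can be sketched as below, using a single invented attribute to induce the indiscernibility classes.

```python
def partition(objects, attribute):
    """Group objects into indiscernibility classes by an attribute function."""
    classes = {}
    for obj in objects:
        classes.setdefault(attribute(obj), set()).add(obj)
    return list(classes.values())

def approximations(objects, attribute, target):
    """Pawlak lower/upper approximations of `target` under the
    indiscernibility relation induced by `attribute`."""
    lower, upper = set(), set()
    for cls in partition(objects, attribute):
        if cls <= target:        # class wholly inside the target: certain
            lower |= cls
        if cls & target:         # class overlapping the target: possible
            upper |= cls
    return lower, upper

# Objects 1..6 described by one complete attribute (color) -- invented.
color = {1: "r", 2: "r", 3: "g", 4: "g", 5: "b", 6: "b"}.get
low, up = approximations({1, 2, 3, 4, 5, 6}, color, target={1, 2, 3})
```

Under possibilistic information, the crisp membership tests in the two `if` branches become possibility and necessity degrees, so each object receives an interval of membership degrees instead of a yes/no answer.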

Keywords: rough sets, possibilistic information, possible world semantics, indiscernibility relations, lower approximations, upper approximations

Procedia PDF Downloads 297
768 An Optimized Association Rule Mining Algorithm

Authors: Archana Singh, Jyoti Agarwal, Ajay Rana

Abstract:

Data mining is an efficient technology for discovering patterns in large databases. Association rule mining techniques are used to find correlations between the various item sets in a database, and these correlations are used in decision making and pattern analysis. In recent years, the problem of finding association rules in large datasets has been studied by many researchers. Various research papers on association rule mining (ARM) are first studied and analyzed to understand the existing algorithms. The Apriori algorithm is the basic ARM algorithm, but it requires many database scans. The DIC algorithm needs fewer database scans but uses a complex lattice data structure. The main focus of this paper is to propose a new optimized algorithm (the Friendly Algorithm) and compare its performance with the existing algorithms. A dataset is used to find frequent itemsets and association rules with the help of the existing algorithms and the proposed Friendly Algorithm, and it is observed that the proposed algorithm finds all the frequent itemsets and essential association rules with fewer database scans than the existing algorithms. The proposed algorithm uses optimized data structures, namely a graph and an adjacency matrix.
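The scan-reduction idea behind matrix-based ARM can be illustrated with a hypothetical sketch (not the Friendly Algorithm itself): all pair counts are accumulated in an adjacency-matrix-style table during a single pass over the database, after which frequent pairs are filtered without rescanning.

```python
from itertools import combinations

def frequent_pairs(transactions, min_count):
    """Count all item pairs in one database scan using an
    adjacency-matrix-style table, then filter by support count."""
    matrix = {}
    for t in transactions:
        for a, b in combinations(sorted(t), 2):
            matrix[(a, b)] = matrix.get((a, b), 0) + 1
    return {pair: c for pair, c in matrix.items() if c >= min_count}

baskets = [{"x", "y"}, {"x", "y", "z"}, {"y", "z"}]
pairs = frequent_pairs(baskets, min_count=2)
```

Apriori, by contrast, would rescan the database once per candidate level; the trade-off is the memory held by the pair-count table.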

Keywords: association rules, data mining, dynamic item set counting, FP-growth, friendly algorithm, graph

Procedia PDF Downloads 391
767 An Informetrics Analysis of Research on Phishing in Scopus and Web of Science Databases from 2012 to 2021

Authors: Nkosingiphile Mbusozayo Zungu

Abstract:

The purpose of the current study is to adopt informetrics methods to analyse research on phishing from 2012 to 2021 in the selected databases, in order to contribute to global cybersecurity through impactful research. The study follows a quantitative research methodology; we opted for a positivist epistemology and an objectivist ontology. The analysis focuses on: (i) the productivity of individual authors, institutions, and countries; (ii) the research contributions, using co-authorship as a measure of collaboration; (iii) the altmetrics of selected research contributions; (iv) the citation patterns and research impact of research on phishing; and (v) research contributions by keywords, to discover the concepts that are related to phishing. The preliminary findings favour developed countries in terms of both quantity and quality of research in the domain. There are unique research trends and patterns in developing countries, including those in Africa, that provide opportunities for research development in the domain in the region. This study explores an important research domain using methods little explored in the region. It supports the 2030 SDG Agenda, for example ending abuse, exploitation, trafficking, and all other forms of violence against and torture of children through the use of cyberspace (SDG 16). Further, the results from this study can inform research, teaching, and learning, largely in Africa. Invariably, the study contributes to cybersecurity awareness that will mitigate cybersecurity threats against vulnerable communities.

Keywords: phishing, cybersecurity, informetrics, information security

Procedia PDF Downloads 86
766 Emerging Research Trends in Routing Protocol for Wireless Sensor Network

Authors: Subhra Prosun Paul, Shruti Aggarwal

Abstract:

Nowadays, routing protocols for Wireless Sensor Networks have become a promising technique in various fields of the latest computer technology. Routing in a Wireless Sensor Network is a demanding task due to the different design issues of the sensor nodes. Network architecture, number of nodes, routing traffic, the capacity of each sensor node, network consistency, and service value are important factors in the design and analysis of routing protocols for Wireless Sensor Networks. Additionally, internal energy, the distance between nodes, and the load on sensor nodes play a significant role in an efficient routing protocol. In this paper, our intention is to analyze the research trends in different routing protocols for Wireless Sensor Networks in terms of different parameters. To explain these research trends, data related to this research topic are analyzed with the help of the Web of Science and Scopus databases. The data analysis is performed from a global perspective, taking into account parameters such as author, source, document, country, organization, keyword, year, and number of publications. Different types of experiments are also performed, which help us evaluate the recent research tendencies in routing protocols for Wireless Sensor Networks. For this purpose, we used the Web of Science and Scopus databases separately for data analysis, and we observed that research on this topic has developed tremendously in the last few years as the topic has become more and more popular.

Keywords: analysis, routing protocol, research trends, wireless sensor network

Procedia PDF Downloads 191
765 A Generic Middleware to Instantly Sync Intensive Writes of Heterogeneous Massive Data via Internet

Authors: Haitao Yang, Zhenjiang Ruan, Fei Xu, Lanting Xia

Abstract:

Industry data centers often need to sync data changes reliably and instantly from a large set of heterogeneous autonomous relational databases accessed via the not-so-reliable Internet, for which a practical universal sync middleware of low maintenance and operation costs is much wanted; but developing such a product and adapting it to various scenarios is a very sophisticated and continuous practice. The authors have been devising, applying, and optimizing a generic sync middleware system named GSMS since 2006, following the principles that the middleware must be SyncML-compliant and transparent to the data application layer logic, need not refer to implementation details of the databases synced, must not rely on the host operating systems deployed, and must be lightweight in construction and hence of low cost. A series of stress experiments on GSMS sync performance was conducted on a persuasive example: a source relational database that underwent a broad range of write loads, from one thousand to one million intensive writes within a few minutes. The tests proved that GSMS achieves an instant sync level well below a fraction of a millisecond per record synced, and GSMS' smooth performance under extreme write loads also showed that it is feasible and competent.

Keywords: heterogeneous massive data, instantly sync intensive writes, Internet generic middleware design, optimization

Procedia PDF Downloads 100
764 Optimal and Critical Path Analysis of State Transportation Network Using Neo4J

Authors: Pallavi Bhogaram, Xiaolong Wu, Min He, Onyedikachi Okenwa

Abstract:

A transportation network is a realization of a spatial network, describing a structure that permits either vehicular movement or the flow of some commodity. Examples include road networks, railways, air routes, and pipelines. The transportation network plays a vital role in maintaining the vigor of the nation's economy; hence, ensuring that the network stays resilient at all times, especially in the face of challenges such as heavy traffic loads and large-scale natural disasters, is of utmost importance. In this paper, we used the Neo4j application to develop the graph. Neo4j is a leading open-source NoSQL native graph database that implements an ACID-compliant transactional backend for applications. The Southern California network model was developed using Neo4j, and the most critical and optimal nodes and paths in the network were obtained using centrality algorithms. The edge betweenness centrality algorithm computes the critical or optimal paths using Yen's k-shortest-paths algorithm, while the node betweenness centrality algorithm computes the amount of influence a node has over the network. The preliminary study results confirm that Neo4j can be a suitable tool for studying the important nodes and critical paths of a heavily congested metropolitan area.
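Node betweenness can be illustrated with a brute-force pure-Python computation on a tiny invented road graph; a production system would instead call Neo4j's built-in centrality procedures or Brandes' algorithm, so this is only a sketch of what the measure counts.

```python
from collections import deque

def shortest_paths(graph, s, t):
    """All shortest paths from s to t in an unweighted graph (BFS + backtrack)."""
    dist, preds, q = {s: 0}, {s: []}, deque([s])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                preds[v] = [u]
                q.append(v)
            elif dist[v] == dist[u] + 1:
                preds[v].append(u)
    paths = []
    def walk(v, acc):
        if v == s:
            paths.append([s] + acc)
            return
        for p in preds[v]:
            walk(p, [v] + acc)
    if t in dist:
        walk(t, [])
    return paths

def betweenness(graph, node):
    """Sum over node pairs of the fraction of shortest paths through `node`."""
    score = 0.0
    nodes = [n for n in graph if n != node]
    for i, s in enumerate(nodes):
        for t in nodes[i + 1:]:
            paths = shortest_paths(graph, s, t)
            if paths:
                score += sum(node in p for p in paths) / len(paths)
    return score

# Tiny invented road network: intersections A-E, C is the cut vertex.
roads = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D", "E"],
         "D": ["C"], "E": ["C"]}
```

Every route between the two halves of this graph passes through C, so C's betweenness is maximal: it is the node whose failure would fragment the network.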

Keywords: critical path, transportation network, connectivity reliability, network model, Neo4j application, edge betweenness centrality index

Procedia PDF Downloads 103
763 An Adaptive Distributed Incremental Association Rule Mining System

Authors: Adewale O. Ogunde, Olusegun Folorunso, Adesina S. Sodiya

Abstract:

Most existing Distributed Association Rule Mining (DARM) systems still face several challenges. One such challenge, which has not received the attention of many researchers, is the inability of existing systems to adapt to constantly changing databases and mining environments. In this work, an Adaptive Incremental Mining Algorithm (AIMA) is therefore proposed to address these problems. AIMA employs multiple mobile agents for the entire mining process. It was designed to adapt to changes in the distributed databases by mining only the incremental database updates and using these to update the existing rules, thereby improving the overall response time of the DARM system. In AIMA, global association rules are integrated incrementally from one data site to another through Results Integration Coordinating Agents. The mining agents in AIMA were made adaptive by defining mining goals with reasoning and behavioral capabilities and protocols that enable them to either maintain or change their goals. AIMA employs the Java Agent Development Environment Extension for the internal agents' architecture. Results from experiments conducted on real datasets showed that the adaptive system, AIMA, performed better than non-adaptive systems, with lower communication costs and higher task completion rates.
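The incremental step, updating existing itemset counts from only the new rows rather than rescanning the whole database, can be sketched as below. This is a simplified illustration, not AIMA itself: the itemsets and counts are invented, and a real incremental miner must also handle itemsets that first become frequent after the update.

```python
def update_counts(existing_counts, total_rows, increment):
    """Merge counts from an incremental batch into existing itemset counts,
    scanning only the new rows."""
    for row in increment:
        for itemset in existing_counts:
            if itemset <= row:
                existing_counts[itemset] += 1
    return existing_counts, total_rows + len(increment)

counts = {frozenset({"a"}): 80, frozenset({"a", "b"}): 30}
counts, n = update_counts(counts, 100, [frozenset({"a", "b"}), frozenset({"c"})])
support_ab = counts[frozenset({"a", "b"})] / n   # refreshed support
```

Supports computed this way stay exact after each batch, which is what lets the rules be refreshed without a full remine.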

Keywords: adaptivity, data mining, distributed association rule mining, incremental mining, mobile agents

Procedia PDF Downloads 369
762 The Development of Chinese-English Homophonic Word Pairs Databases for English Teaching and Learning

Authors: Yuh-Jen Wu, Chun-Min Lin

Abstract:

Homophonic words are common in Mandarin Chinese, which belongs to the tonal language family. Using homophonic cues to study foreign languages is one of the mnemonic learning techniques that can aid the retention and retrieval of information in human memory. When learning difficult foreign words, some learners transpose them onto words in a language they are familiar with to build an association and strengthen working memory. These phonological clues are a beneficial means for novice language learners, and if mnemonic skills are used at the appropriate time in the instructional sequence, they may achieve their maximum effectiveness in the classroom. For Chinese-speaking students, proper use of Chinese-English homophonic word pairs may help them learn difficult vocabulary. In this study, a database program is developed in Visual Basic. The database contains two corpora, one with Chinese lexical items and the other with English ones. The Chinese corpus contains 59,053 Chinese words collected by a web crawler. The pronunciations of this group of words are compared with words in an English corpus based on WordNet, a lexical database for the English language, and words in both databases with similar pronunciation chunks are detected in batches. A total of approximately 1,000 Chinese lexical items were located in the preliminary comparison. These homophonic word pairs can serve as a valuable tool to assist Chinese-speaking students in learning and memorizing new English vocabulary.
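The chunk-matching idea can be sketched as a comparison of romanized pronunciations. This is a guess at the general approach, not the study's Visual Basic implementation; the matching rule (shared leading chunk of a tone-stripped romanization) and the miniature word lists are invented.

```python
def homophone_pairs(chinese, english, min_len=3):
    """Pair words whose romanized pronunciations share a leading chunk.
    Inputs map words to simplified, tone-stripped romanizations."""
    matches = []
    for zh_word, zh_pron in chinese.items():
        for en_word, en_pron in english.items():
            if zh_pron[:min_len] == en_pron[:min_len]:
                matches.append((zh_word, en_word))
    return matches

# Invented miniature corpora; the real databases hold 59,053 Chinese words.
zh = {"马上": "mashang"}                    # masha`ng, "immediately"
en = {"mash": "mash", "market": "market"}
found = homophone_pairs(zh, en)
```

A production version would need phonetic rather than orthographic comparison (e.g. matching pinyin against English phoneme transcriptions), but the pairing loop has the same shape.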

Keywords: Chinese, corpus, English, homophonic words, vocabulary

Procedia PDF Downloads 155
761 Spatial-Temporal Clustering Characteristics of Dengue in the Northern Region of Sri Lanka, 2010-2013

Authors: Sumiko Anno, Keiji Imaoka, Takeo Tadono, Tamotsu Igarashi, Subramaniam Sivaganesh, Selvam Kannathasan, Vaithehi Kumaran, Sinnathamby Noble Surendran

Abstract:

Dengue outbreaks are affected by biological, ecological, socio-economic, and demographic factors that vary over time and space. These factors have been examined separately and still require systematic clarification. The present study aimed to investigate the spatial-temporal clustering relationships between these factors and dengue outbreaks in the northern region of Sri Lanka. Remote sensing (RS) data gathered from several satellites were used to develop an index comprising rainfall, humidity, and temperature data; RS data gathered by ALOS/AVNIR-2 were used to detect urbanization, and a digital land cover map was used to extract land cover information. Other data on relevant factors and dengue outbreaks were collected through institutions and extant databases. The analyzed RS data and databases were integrated into geographic information systems, enabling temporal analysis, spatial statistical analysis, and space-time clustering analysis. Our results showed that increases in the number of ecological, socio-economic, and demographic factors that were above average or present contributed to significantly high rates of space-time dengue clustering.

Keywords: ALOS/AVNIR-2, dengue, space-time clustering analysis, Sri Lanka

Procedia PDF Downloads 451
760 An Extensible Software Infrastructure for Computer Aided Custom Monitoring of Patients in Smart Homes

Authors: Ritwik Dutta, Marylin Wolf

Abstract:

This paper describes the trade-offs and the from-scratch design of a self-contained, easy-to-use health dashboard software system that provides customizable data tracking for patients in smart homes. The system is made up of different software modules and comprises a front-end and a back-end component. Built with HTML, CSS, and JavaScript, the front-end allows adding users, logging into the system, selecting metrics, and specifying health goals. The back-end consists of a NoSQL Mongo database, a Python script, and a SimpleHTTPServer written in Python. The database stores user profiles and health data in JSON format. The Python script uses the PyMongo driver library to query the database and displays formatted data as a daily snapshot of user health metrics against target goals. Any number of standard and custom metrics can be added to the system, and the corresponding health data can be fed automatically, via sensor APIs, or manually, as text or picture data files. A real-time METAR request API permits correlating weather data with patient health, and an advanced query system is implemented to allow trend analysis of selected health metrics over custom time intervals. Available on GitHub, the project is free to use for academic purposes of learning and experimenting, or for practical purposes by building on it.
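The daily-snapshot step, rendering a user's metrics against target goals, can be sketched as below. The document shapes are invented stand-ins for the JSON profiles the dashboard stores in Mongo; a real deployment would fetch them via PyMongo instead of using in-memory dicts.

```python
def daily_snapshot(profile, readings):
    """Render one day's health metrics against the user's target goals."""
    lines = []
    for metric, goal in profile["goals"].items():
        value = readings.get(metric)
        if value is None:
            status = "no data"
        else:
            status = "met" if value >= goal else "below goal"
        lines.append(f"{metric}: {value} / {goal} ({status})")
    return lines

# Invented JSON-style documents mirroring the stored profile/reading shape.
profile = {"user": "demo", "goals": {"steps": 8000, "sleep_hours": 7}}
readings = {"steps": 9100}
snapshot = daily_snapshot(profile, readings)
```

Because custom metrics are just extra keys in the goals document, the same loop handles any number of standard or user-defined metrics without code changes.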

Keywords: flask, Java, JavaScript, health monitoring, long-term care, Mongo, Python, smart home, software engineering, webserver

Procedia PDF Downloads 365
759 3D Object Retrieval Based on Similarity Calculation in 3D Computer Aided Design Systems

Authors: Ahmed Fradi

Abstract:

Nowadays, technological advances in the acquisition, modeling, and processing of three-dimensional (3D) object data have led to the creation of models stored in huge databases, which are used in various domains such as computer vision, augmented reality, the game industry, medicine, CAD (computer-aided design), and 3D printing. At the same time, industry currently benefits from powerful modeling tools enabling designers to easily and quickly produce 3D models. The great ease of acquisition and modeling of 3D objects makes it possible to create large 3D model databases, which then become difficult to navigate. Indexing of 3D objects therefore appears as a necessary and promising solution to manage this type of data, extract model information, retrieve an existing model, or calculate similarity between 3D objects. The objective of the proposed research is to develop a framework allowing easy and fast access to 3D objects in a CAD model database, with a specific indexing algorithm to find objects similar to a reference model. Our main objectives are to study existing methods of similarity calculation for 3D objects (essentially shape-based methods), specifying the characteristics of each method as well as the differences between them, and then to propose a new approach for indexing and comparing 3D models that is suitable for our case study and based on some of the previously studied methods. Our proposed approach is finally illustrated by an implementation and evaluated in a professional context.
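One classical shape-based method of the kind surveyed here is the D2 shape distribution: a histogram of distances between random point pairs sampled from the model. The sketch below uses tiny invented point clouds in place of sampled CAD surfaces, so it illustrates the descriptor, not the paper's proposed approach.

```python
import math
import random

def d2_descriptor(points, n_samples=2000, n_bins=8, seed=42):
    """D2 shape descriptor: normalized histogram of distances between
    random point pairs, made scale-comparable by dividing by the max."""
    rng = random.Random(seed)
    dists = []
    for _ in range(n_samples):
        a, b = rng.sample(points, 2)
        dists.append(math.dist(a, b))
    top = max(dists)
    hist = [0] * n_bins
    for d in dists:
        hist[min(int(n_bins * d / top), n_bins - 1)] += 1
    return [h / n_samples for h in hist]

def descriptor_distance(h1, h2):
    """Euclidean distance between two descriptors: small means similar shape."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

# Invented point clouds standing in for sampled CAD-model surfaces.
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
rod = [(x, 0, 0) for x in range(8)]
```

Retrieval then reduces to ranking database models by `descriptor_distance` to the reference model's descriptor, which is exactly the indexing pattern the framework needs.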

Keywords: CAD, 3D object retrieval, shape based retrieval, similarity calculation

Procedia PDF Downloads 237
758 Birth Path and the Vitality of Caring Models in the Continuity of Midwifery

Authors: Elnaz Lalezari, Ramin Ghasemi Shaya

Abstract:

The birth path is affected by a fragmentation of the patient care process, creating a discontinuity of care. The pregnant woman has to interact with many professionals during the pregnancy, the childbirth, and the puerperium. However, during the last ten years, there has been an increase in pregnancy care provided by the midwife, who is considered to be the practitioner with the right competences, who can take care of each pregnancy and may avail herself of other professionals' contributions in order to improve the outcomes of maternal and neonatal health. The aim is to verify whether there is evidence of effectiveness that supports the caseload midwifery care model, and whether it is possible to apply this model to the birth path in Italy. A review of the literature has been done using several search engines (Google, Bing) and specific databases (MEDLINE, CINAHL, Embase, ClinicalTrials.gov). There has also been a discussion of the Italian regulations, the national guidelines, and the recommendations of the WHO. Results: The search string, properly adapted to the three databases, has given the following results: MEDLINE 64 articles, CINAHL 94 articles, Embase 88 articles. From this selection, 14 articles have been extracted: 1 systematic review, 3 randomized controlled trials, 7 observational studies, 3 qualitative studies. Caseload midwifery care appears to be an effective and reliable organisational/caring model. It responds to the criteria of quality and safety and to the needs of women, not only during the pregnancy but also during the post-partum phase. For these reasons, it appears very valuable also for the birth path in the Italian context.

Keywords: midwifery, care, caseload, maternity

Procedia PDF Downloads 112
757 Feasibility Study of MongoDB and Radio Frequency Identification Technology in Asset Tracking System

Authors: Mohd Noah A. Rahman, Afzaal H. Seyal, Sharul T. Tajuddin, Hartiny Md Azmi

Abstract:

In real-world settings, higher academic institutions, small, medium, and large companies, and organizations across the public and private sectors all experience inventory or asset shrinkage due to theft, loss, or inventory tracking errors. This is often due to absent or poor security systems and measures in those organizations. Hence, integrating Radio Frequency Identification (RFID) technology into a manual or existing web-based system or web application can deter, and eventually solve, many of these issues and serve better data retrieval and data access. Such a system can also be enhanced into a mobile-based system or application, and the availability of internet connections can support better services. This combination of technologies offers various benefits to individuals and organizations in terms of accessibility, availability, mobility, efficiency, effectiveness, real-time information, and security. This paper looks deeper into the integration of mobile devices with RFID technologies for the purpose of asset tracking and control. This is followed by the development and utilization of MongoDB as the main database to store data and its association with RFID technology. Finally, we describe the development of a web-based system, viewable in a mobile format, built with Hypertext Preprocessor (PHP), MongoDB, HTML5, Android, JavaScript, and AJAX.
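As a hedged sketch of the data model, each RFID read can be stored as a MongoDB-style JSON document and folded into a last-seen view per asset; the field names (tag_id, reader, ts) are illustrative assumptions, not the paper's schema. With PyMongo, the same fold could be expressed as an aggregation along the lines of db.scans.aggregate([{"$sort": {"ts": 1}}, {"$group": {"_id": "$tag_id", "last": {"$last": "$$ROOT"}}}]).

```python
from datetime import datetime

def last_seen(scans):
    """Fold RFID read events (JSON-style documents) into each tag's
    latest known location, keyed by the tag identifier."""
    view = {}
    for s in sorted(scans, key=lambda s: s["ts"]):  # chronological order
        view[s["tag_id"]] = {"location": s["reader"], "ts": s["ts"]}
    return view

scans = [
    {"tag_id": "A-001", "reader": "lab-entrance", "ts": datetime(2024, 3, 1, 9, 0)},
    {"tag_id": "A-002", "reader": "store-room", "ts": datetime(2024, 3, 1, 9, 5)},
    {"tag_id": "A-001", "reader": "server-room", "ts": datetime(2024, 3, 1, 10, 0)},
]
view = last_seen(scans)
```

Because each read is an independent document, readers can append scans concurrently while the last-seen view is recomputed on demand.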

Keywords: RFID, asset tracking system, MongoDB, NoSQL

Procedia PDF Downloads 279
756 Scientific Production on Lean Supply Chains Published in Journals Indexed by SCOPUS and Web of Science Databases: A Bibliometric Study

Authors: T. Botelho de Sousa, F. Raphael Cabral Furtado, O. Eduardo da Silva Ferri, A. Batista, W. Augusto Varella, C. Eduardo Pinto, J. Mimar Santa Cruz Yabarrena, S. Gibran Ruwer, F. Müller Guerrini, L. Adalberto Philippsen Júnior

Abstract:

Lean Supply Chain Management (LSCM) is an emerging research field in Operations Management (OM). As a strategic model that focuses on reducing cost and waste while fulfilling the needs of customers, LSCM attracts great interest among researchers and practitioners. The purpose of this paper is to present an overview of the Lean Supply Chain literature, based on a bibliometric analysis of 57 papers published in journals indexed by the SCOPUS and/or Web of Science databases. The results indicate that the last three years (2015, 2016, and 2017) were the most productive for LSCM discussion, especially in the Supply Chain Management and International Journal of Lean Six Sigma journals. India, the USA, and the UK are the most productive countries; moreover, social network analysis detected cross-country studies through collaboration among researchers as a practice that appears to play an increasingly important role in LSCM studies. Despite existing limitations, such as the restricted indexed-journal database, bibliometric analysis helps to illuminate ongoing LSCM research efforts, including the most used technical procedures and collaboration networks, and reveals important research gaps, especially for researchers in developing countries.

Keywords: Lean Supply Chains, Bibliometric Study, SCOPUS, Web of Science

Procedia PDF Downloads 317
755 Specification of Requirements to Ensure Proper Implementation of Security Policies in Cloud-Based Multi-Tenant Systems

Authors: Rebecca Zahra, Joseph G. Vella, Ernest Cachia

Abstract:

The notion of cloud computing is rapidly gaining ground in the IT industry and is appealing mostly because it makes computing more adaptable and expedient whilst diminishing the total cost of ownership. This paper focuses on the software as a service (SaaS) architecture of cloud computing, which is used for the outsourcing of databases with their associated business processes. One approach to offering SaaS is basing the system’s architecture on multi-tenancy. Multi-tenancy allows multiple tenants (users) to make use of the same single application instance. Their requests and configurations might then differ according to specific requirements met through tenant customisation of the software. Despite the known advantages, companies still feel uneasy about opting for multi-tenancy, with data security being a principal concern. The fact that multiple tenants, possibly competitors, would have their data located on the same server process and share the same database tables heightens the fear of unauthorised access. Security is a vital aspect which needs to be considered by application developers, database administrators, data owners, and end users. This is further complicated in cloud-based multi-tenant systems, where boundaries must be established between tenants and additional access control models must be in place to prevent unauthorised cross-tenant access to data. Moreover, when altering the database state, the transactions need to strictly adhere to the tenant’s known business processes. This paper argues that security in cloud databases should not be considered an isolated issue; rather, it should be included in the initial phases of the database design and monitored continuously throughout the whole development process. This paper aims to identify a number of the most common security risks and threats specifically in the area of multi-tenant cloud systems. Issues and bottlenecks relating to security risks in cloud databases are surveyed. Some techniques which might be utilised to overcome them are then listed and evaluated. After a description and evaluation of the main security threats, this paper produces a list of software requirements to ensure that proper security policies are implemented by a software development team when designing and implementing a multi-tenant SaaS. This would then assist cloud service providers to define, implement, and manage security policies as per tenant customisation requirements whilst assuring security for the customers’ data.
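One concrete technique of the kind surveyed above is mandatory tenant scoping: every read and write on a shared table is forced through a wrapper that filters and stamps the tenant identifier, so application code cannot issue a cross-tenant query by accident. The sketch below is illustrative only (class and field names are assumptions), using an in-memory table in place of a real cloud database.

```python
class TenantStore:
    """Wrapper around a shared multi-tenant table that enforces isolation."""

    def __init__(self, rows):
        self._rows = rows  # shared table: dicts carrying a "tenant_id" column

    def query(self, tenant_id, **filters):
        """Reads are always scoped: only the caller's tenant rows are visible."""
        return [r for r in self._rows
                if r["tenant_id"] == tenant_id
                and all(r.get(k) == v for k, v in filters.items())]

    def insert(self, tenant_id, row):
        """Writes are stamped with the caller's tenant id, never taken on trust."""
        self._rows.append({**row, "tenant_id": tenant_id})

store = TenantStore([
    {"tenant_id": "t1", "order": 101},
    {"tenant_id": "t2", "order": 201},
])
store.insert("t1", {"order": 102})
t1_view = store.query("t1")
cross_tenant = store.query("t1", order=201)  # t2's row is invisible to t1
```

Centralising the tenant filter in one choke point is what makes the policy auditable; scattering per-query tenant checks across application code is exactly the risk the requirements aim to eliminate.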

Keywords: cloud computing, data management, multi-tenancy, requirements, security

Procedia PDF Downloads 131
754 Structuring After-School Physical Education Programs That are Engaging, Diverse, and Inclusive

Authors: Micah J. Dobson

Abstract:

After-school physical education programs provide children with opportunities to engage in physical activity while developing healthy habits. To ensure that these programs are inclusive, diverse, and engaging, however, schools must consider various factors when designing and implementing them. This study sought to identify effective strategies for structuring after-school physical education programs. The literature review was conducted using various databases and search engines, including ERIC, Google Scholar, Scopus, Web of Science, and EBSCOhost. The search terms were combinations of keywords such as “after-school,” “physical education,” “inclusion,” “diversity,” “engagement,” “program design,” “program implementation,” “program effectiveness,” and “best practices.” The findings of this study suggest that schools that desire inclusivity must consider four key factors when designing and implementing after-school physical education programs. First, the programs must be designed with variety and fun by incorporating activities such as dance, sports, and games that appeal to all students. Second, instructors must be trained to create supportive and positive environments that foster student engagement while promoting physical literacy. Third, schools must collaborate with community stakeholders and organizations to ensure that programs are culturally inclusive and responsive. Fourth, schools can incorporate technology into their programs to enhance engagement and provide additional growth and learning opportunities. In conclusion, this study provides valuable insights into effective strategies for structuring after-school physical education programs that are inclusive, diverse, and engaging for all students. By considering these factors when designing and implementing their programs, schools can promote physical activity while supporting students’ overall well-being and health.

Keywords: after-school programs of physical education, community partnership, inclusivity, instructor training, technology

Procedia PDF Downloads 52
753 Recent Advances in Data Warehouse

Authors: Fahad Hanash Alzahrani

Abstract:

This paper describes some recent advances in the quickly developing area of data storage and processing based on data warehouses and data mining techniques. These advances involve software, hardware, data mining algorithms, and visualisation techniques that share common features across the specific problems and tasks of their implementation.

Keywords: data warehouse, data mining, knowledge discovery in databases, on-line analytical processing

Procedia PDF Downloads 370
752 Ageing Patterns and Concerns in the Arabian Gulf: A Systematic Review

Authors: Asharaf Abdul Salam

Abstract:

Arabian Gulf countries have demographic norms and rules different from others: they host an influx of male immigrant contract labourers aged 20-60 years as a floating population. Such a demographic scenario camouflages population ageing. However, ageing is observed on vigilant examination, not only in the native population but also in the general population. This research on population ageing in the Arabian Gulf examines the ageing scenario and its concerns through analyses of international databases (US Census Bureau and United Nations) and national-level databases (censuses and surveys), along with a review of published research. Transitions in demography and epidemiology lead to gains in life expectancy and reductions in fertility, and thereby to ageing of the population in the region. Even with the arrival of adult immigrants, indices and age pyramids show an increasing ageing trend in the total population, demonstrating an ageing workforce. Besides, analysis of the native population alone reveals a trend of expansive pyramids (pre-transitional stage) turning to constrictive (transition stage) and cylindrical (post-transition stage) shapes. Age-based indices such as the index of ageing, age dependency ratio, and median age confirm this trend. While the feminine nature of ageing is vivid, gains in life expectancy and causes of death in old age, indicating co-morbidity compression, are concerns of ageing. Preparations are needed to cope with ageing along different dimensions, as explained in the United Nations Plans of Action. A strategy of strengthening informal care with supportive semi-formal and supplementary formal care networks would alleviate the crisis associated with population ageing.

Keywords: total versus native population, indices of ageing, age pyramids, feminine nature, comorbidity compression, strategic interventions

Procedia PDF Downloads 65
751 Management of Cultural Heritage: Bologna Gates

Authors: Alfonso Ippolito, Cristiana Bartolomei

Abstract:

A growing demand is felt today for realistic 3D models that enable the cognition and popularization of historical-artistic heritage. The evaluation and preservation of cultural heritage are inextricably connected with the innovative processes of gaining, managing, and using knowledge. The development and perfecting of techniques for acquiring and elaborating photorealistic 3D models have made them pivotal elements for popularizing information about objects on the scale of architectonic structures.

Keywords: cultural heritage, databases, non-contact survey, 2D-3D models

Procedia PDF Downloads 393
750 Clinical Effectiveness of Bulk-fill Resin Composite: A Review

Authors: Taraneh Estedlal

Abstract:

The objective of this study was to review in-vivo and in-vitro studies comparing the effectiveness of bulk-fill and conventional resin composites with regard to marginal adaptation, polymerization shrinkage, and other mechanical properties. The PubMed and Scopus databases were searched for in-vitro studies and randomized clinical trials comparing the incidence of fractures, color stability, marginal adaptation, pain and discomfort, recurrent caries, occlusion, pulpal reaction, and proper proximal contacts of restorations made with conventional and bulk-fill resins. Despite differences in physical and in-vitro properties, the failure rate of conventional and flowable bulk-fill resin composites was not significantly different from that of sculptable bulk-fill resin composites.

Keywords: polymerization shrinkage, color stability, marginal adaptation, recurrent caries, occlusion, pulpal reaction

Procedia PDF Downloads 129
749 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection

Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine

Abstract:

Atrial fibrillation (AF) is considered the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients place AF among the most challenging medical issues. Automatic, early, and fast AF detection is still a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation. However, the published results do not show satisfactory classification accuracy. This work aimed at resolving this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (AF Termination Challenge Database, MIT-BIH AF, Normal Sinus Rhythm RR Interval Database, and MIT-BIH Normal Sinus Rhythm Database) were used for assessment. All time series were segmented into 1-min RR-interval windows, and four specific features were then calculated. Two pattern recognition methods, i.e., Principal Component Analysis (PCA) and a Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find important features to discriminate between AF and normal sinus rhythm. Despite its very simple structure, the results show that the LVQ model performs better on the analyzed databases than existing algorithms, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detection method holds several interesting properties and can be implemented with just a few arithmetical operations, which makes it a suitable choice for telecare applications.
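To make the windowing-and-features stage concrete, the sketch below segments an RR series into 1-min windows and computes four common heart-rate-variability features (mean RR, SDNN, RMSSD, pNN50). These four are illustrative assumptions, since the abstract does not name the exact features used; the point of the example is that irregular AF-like rhythms show markedly higher beat-to-beat variability (RMSSD) than a steady sinus rhythm, which is the kind of separation the classifier exploits.

```python
import math

def rr_features(rr_ms):
    """Four illustrative HRV features from one window of RR intervals (ms)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / n)
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    pnn50 = sum(1 for d in diffs if abs(d) > 50) / len(diffs)  # fraction, not %
    return mean_rr, sdnn, rmssd, pnn50

def one_minute_windows(rr_ms):
    """Split an RR series into consecutive windows whose durations reach 60 s."""
    window, elapsed, out = [], 0.0, []
    for r in rr_ms:
        window.append(r)
        elapsed += r
        if elapsed >= 60_000:  # one minute, in milliseconds
            out.append(window)
            window, elapsed = [], 0.0
    return out

steady = [800] * 80                                             # regular rhythm
irregular = [800 + (120 if i % 2 else -120) for i in range(80)]  # AF-like jitter
_, _, rmssd_steady, _ = rr_features(steady)
_, _, rmssd_irregular, _ = rr_features(irregular)
```

Each per-window feature vector would then be passed through PCA and classified by the LVQ network; the arithmetic above is cheap enough for the telecare setting the paper targets.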

Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine

Procedia PDF Downloads 242