Search results for: database replication
1683 TomoTherapy® System Repositioning Accuracy According to Treatment Localization
Authors: Veronica Sorgato, Jeremy Belhassen, Philippe Chartier, Roddy Sihanath, Nicolas Docquiere, Jean-Yves Giraud
Abstract:
We analyzed the image-guided radiotherapy method used by the TomoTherapy® System (Accuray Corp.) for patient repositioning in clinical routine. The TomoTherapy® System computes X, Y, Z and roll displacements to match the reference CT, on which the dosimetry has been performed, with the pre-treatment MV CT. The accuracy of the repositioning method has been studied according to the treatment localization. For this, a database of 18,774 treatment sessions, performed over two consecutive years (2016-2017), was used. The database includes the X, Y, Z and roll displacements proposed by the TomoTherapy® System as well as the manual correction of these proposals applied by the radiation therapist. This manual correction aims to further improve the repositioning based on the clinical situation and depends on the structures surrounding the target tumor tissue. The statistical analysis performed on the database aims to define repositioning limits to be used as a safety and guiding tool for the manual adjustment implemented by the radiation therapist. This tool will not only help flag potential repositioning errors but also further improve patient positioning for optimal treatment.
Keywords: accuracy, IGRT MVCT, image-guided radiotherapy megavoltage computed tomography, statistical analysis, tomotherapy, localization
Procedia PDF Downloads 225
1682 Bundle Block Detection Using Spectral Coherence and Levenberg Marquardt Neural Network
Authors: K. Padmavathi, K. Sri Ramakrishna
Abstract:
This study describes a procedure for the detection of Left and Right Bundle Branch Block (LBBB and RBBB) ECG patterns using the spectral coherence (SC) technique and a Levenberg-Marquardt (LM) neural network. The coherence function finds common frequencies between two signals and evaluates their similarity. The QT variations of bundle blocks are observed in lead V1 of the ECG. The spectral coherence technique uses the Welch method for calculating the PSD. For the detection of normal and bundle block beats, SC output values are given as input features to the LMNN classifier. The overall accuracy of the LMNN classifier is 99.5 percent. The data were collected from the MIT-BIH Arrhythmia database.
Keywords: bundle block, SC, LMNN classifier, Welch method, PSD, MIT-BIH, arrhythmia database
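As a rough illustration of the spectral step described above (not the authors' code), the sketch below estimates the magnitude-squared coherence between a template beat and a candidate beat using Welch-based spectral estimates and averages it over a few frequency bands to form classifier features; the sampling rate, band edges and placeholder signals are assumptions.

```python
# Illustrative sketch: Welch-based magnitude-squared coherence as a feature source.
import numpy as np
from scipy.signal import coherence

fs = 360.0                                    # assumed sampling rate (MIT-BIH records are 360 Hz)
rng = np.random.default_rng(0)
template_beat = rng.standard_normal(720)      # placeholder for an averaged normal V1 beat
candidate_beat = rng.standard_normal(720)     # placeholder for the beat under test

# Welch PSD/CSD estimates underlie the coherence computation.
f, Cxy = coherence(template_beat, candidate_beat, fs=fs, nperseg=256)

# Band-averaged coherence values can then serve as input features for a classifier.
bands = [(0, 5), (5, 15), (15, 40)]           # assumed band edges in Hz
features = [Cxy[(f >= lo) & (f < hi)].mean() for lo, hi in bands]
print(features)
```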
Procedia PDF Downloads 280
1681 Knowledge-Driven Decision Support System Based on Knowledge Warehouse and Data Mining by Improving Apriori Algorithm with Fuzzy Logic
Authors: Pejman Hosseinioun, Hasan Shakeri, Ghasem Ghorbanirostam
Abstract:
In recent years, research on knowledge sources, decision support systems, data mining and the process of knowledge discovery in databases has gained increasing importance, and each of these aspects is considered to affect the others. In this article, we merge the information source and the knowledge source to propose a knowledge-based management system built on the storage and retrieval of knowledge, in order to manage information and improve decision-making and resource use. We use data mining and the Apriori algorithm in the knowledge discovery process. One of the problems of the Apriori algorithm is that the user must specify the minimum support threshold. Imagine that a user wants to apply the Apriori algorithm to a database with millions of transactions. The user clearly does not have knowledge of all existing transactions in that database and therefore cannot specify a suitable threshold. Our purpose in this article is to improve the Apriori algorithm. To achieve this goal, we use fuzzy logic to group the data into different clusters before applying the Apriori algorithm to the data in the database, and we also suggest the most suitable threshold to the user automatically.
Keywords: decision support system, data mining, knowledge discovery, data discovery, fuzzy logic
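The following sketch illustrates the general idea of clustering transactions before Apriori-style counting and suggesting a support threshold automatically; it is not the authors' method: crisp k-means stands in for the fuzzy clustering step, and the threshold heuristic and toy transactions are invented for illustration.

```python
# Minimal illustrative sketch: cluster transactions, then mine each cluster with an
# automatically suggested minimum support.
from itertools import combinations
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans

transactions = [
    {"milk", "bread"}, {"milk", "bread", "eggs"}, {"bread", "eggs"},
    {"beer", "chips"}, {"beer", "chips", "salsa"}, {"chips", "salsa"},
]

# Encode transactions as binary vectors so they can be clustered before mining.
items = sorted(set().union(*transactions))
X = np.array([[1 if it in t else 0 for it in items] for t in transactions])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for c in sorted(set(labels)):
    cluster = [t for t, lbl in zip(transactions, labels) if lbl == c]
    # Heuristic automatic threshold: half the average per-item relative frequency.
    item_counts = Counter(i for t in cluster for i in t)
    min_support = 0.5 * np.mean(list(item_counts.values())) / len(cluster)

    # Plain Apriori-style counting of 1- and 2-itemsets against that threshold.
    frequent = {}
    for size in (1, 2):
        for cand in combinations(items, size):
            support = sum(set(cand) <= t for t in cluster) / len(cluster)
            if support >= min_support:
                frequent[cand] = round(support, 2)
    print(f"cluster {c}: suggested min_support={min_support:.2f}, frequent={frequent}")
```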
Procedia PDF Downloads 334
1680 Descriptive Analysis of the Database of Poliomyelitis Surveillance System in Mauritania from 2012-2019
Authors: B. Baba Ahmed, P. Yanogo, B. Djibryl. N. Medas
Abstract:
Introduction: Polio is a highly contagious viral infection, with children under 5 years of age being the most affected. It is a public health emergency of international concern. Polio surveillance in Mauritania has been ongoing since 1998, and the country achieved "polio-free" status in 2007. Our objective is to analyse the epidemiological surveillance database of poliomyelitis in Mauritania from 2012 to 2019. Method: A cross-sectional descriptive analysis of the poliomyelitis database was carried out in Mauritania for the period 2012-2019. Exhaustive sampling was applied to all suspected polio cases recorded in the database from 2012 to 2019. The study used Epi Info 7.4 to calculate frequencies for qualitative variables, and means and standard deviations for quantitative variables. Results: We found 459 suspected cases of polio over the study period, with an average rate of acute non-polio flaccid paralysis of 25.4 cases/100,000 children under 15 years of age. The age group 0-6 years represented 75.2%. Males constituted 50.2% and females 49.78%, giving an M/F ratio of 1. Among the 422 observations, the average age was 4 ± 3.38 years. The four regions TIRIS-ZEMMOUR, INCHIRI, TAGANT and NOUAKCHOTT OUEST recorded the lowest percentages of notifications (3.28%, 3.93%, 4.37% and 4.8%, respectively). 99.34% [98.09-99.78] of cases presented acute flaccid paralysis, 56.77% [52.19-61.23] had limb asymmetry, and 82.93% [79.21-86.10] had fever. We found that 89.5% of suspected polio cases were investigated within 48 hours, and 88.39% of suspected cases had two adequate samples taken 48 hours apart and within 14 days after the onset of symptoms. Only 30.95% of samples arrived at the referral laboratory within 72 hours. Conclusion: This study showed that Mauritania has achieved the objectives for most of the quantitative performance indicators of polio surveillance. It also showed low notification of cases in the northern and central regions of the country, and a problem with the transport of samples to the laboratory.
Keywords: analysis, database, Epi Info, polio
Procedia PDF Downloads 175
1679 On the Design of a Secure Two-Party Authentication Scheme for Internet of Things Using Cancelable Biometrics and Physically Unclonable Functions
Authors: Behnam Zahednejad, Saeed Kosari
Abstract:
Widespread deployment of the Internet of Things (IoT) has raised security and privacy issues in this environment. Designing a secure two-factor authentication scheme between the user and the server is still a challenging task. In this paper, we focus on Cancelable Biometrics (CB) as an authentication factor in IoT. We show that previous CB-based schemes fail to provide real two-factor security and Perfect Forward Secrecy (PFS) and suffer from database attacks and user traceability. We then propose our improved scheme based on CB and Physically Unclonable Functions (PUF), which can provide real two-factor security, PFS, user unlinkability, and resistance to database attacks. In addition, Key Compromise Impersonation (KCI) resilience is achieved in our scheme. We also prove the security of our proposed scheme formally using both the Real-Or-Random (RoR) model and the ProVerif analysis tool. Regarding the usability of our scheme, we conducted a performance analysis and showed that our scheme has the lowest communication cost compared to previous CB-based schemes. The computational cost of our scheme is also acceptable for the IoT environment.
Keywords: IoT, two-factor security, cancelable biometric, key compromise impersonation resilience, perfect forward secrecy, database attack, real-or-random model, ProVerif
Procedia PDF Downloads 100
1678 Analysis of Nonlinear and Non-Stationary Signal to Extract the Features Using Hilbert Huang Transform
Authors: A. N. Paithane, D. S. Bormane, S. D. Shirbahadurkar
Abstract:
Emotion recognition is an important research topic in the field of human-computer interfaces. A novel technique for feature extraction (FE) is presented here, along with a new method for human emotion recognition based on the Hilbert-Huang Transform (HHT). This method is feasible for analyzing nonlinear and non-stationary signals. Each signal is decomposed into intrinsic mode functions (IMFs) using empirical mode decomposition (EMD). These functions are used to extract features through fission and fusion processes. The decomposition technique we adopt is a new way of adaptively decomposing signals. In this perspective, we report the potential usefulness of EMD-based techniques. We evaluated the algorithm on the manually annotated Augsburg University Database.
Keywords: intrinsic mode function (IMF), Hilbert-Huang transform (HHT), empirical mode decomposition (EMD), emotion detection, electrocardiogram (ECG)
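To make the feature-extraction step concrete, here is a minimal sketch (not the authors' pipeline) that takes IMFs as given and derives Hilbert-based instantaneous amplitude and frequency statistics; the sampling rate and the synthetic placeholder IMFs are assumptions.

```python
# Illustrative sketch: Hilbert-based features from IMFs that an EMD step would produce.
import numpy as np
from scipy.signal import hilbert

fs = 256.0                                   # assumed ECG sampling rate
t = np.arange(0, 2.0, 1.0 / fs)
# Placeholder "IMFs": in practice these come from empirical mode decomposition.
imfs = [np.sin(2 * np.pi * 5 * t), 0.5 * np.sin(2 * np.pi * 1 * t)]

features = []
for imf in imfs:
    analytic = hilbert(imf)                  # analytic signal via the Hilbert transform
    amplitude = np.abs(analytic)             # instantaneous amplitude envelope
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency in Hz
    features += [amplitude.mean(), amplitude.std(), inst_freq.mean(), inst_freq.std()]

print(np.round(features, 3))                 # one flat feature vector per signal
```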
Procedia PDF Downloads 578
1677 Modeling of Geotechnical Data Using GIS and Matlab for Eastern Ahmedabad City, Gujarat
Authors: Rahul Patel, S. P. Dave, M. V Shah
Abstract:
Ahmedabad is a rapidly growing city in western India that is experiencing significant urbanization and industrialization. With projections indicating that it will become a metropolitan city in the near future, various construction activities are taking place, making soil testing a crucial requirement before construction can commence. To achieve this, construction companies and contractors need to periodically conduct soil testing. This study focuses on the process of creating a spatial database that is digitally formatted and integrated with geotechnical data and a Geographic Information System (GIS). Building a comprehensive geotechnical geo-database involves three essential steps. Firstly, borehole data is collected from reputable sources. Secondly, the accuracy and redundancy of the data are verified. Finally, the geotechnical information is standardized and organized for integration into the database. Once the geo-database is complete, it is integrated with GIS. This integration allows users to visualize, analyze, and interpret geotechnical information spatially. Using a Topographic to Raster interpolation process in GIS, estimated values are assigned to all locations based on sampled geotechnical data values. The study area was contoured for SPT N-values, soil classification, Φ-values, and bearing capacity (T/m2). Various interpolation techniques were cross-validated to ensure information accuracy. The GIS map generated by this study enables the calculation of SPT N-values, Φ-values, and bearing capacities for different footing widths and various depths. This approach highlights the potential of GIS in providing an efficient solution to complex phenomena that would otherwise be tedious to achieve through other means. Not only does GIS offer greater accuracy, but it also generates valuable information that can be used as input for correlation analysis. Furthermore, this system serves as a decision support tool for geotechnical engineers. The information generated by this study can be utilized by engineers to make informed decisions during construction activities. For instance, they can use the data to optimize foundation designs and improve site selection. In conclusion, the rapid growth experienced by Ahmedabad requires extensive construction activities, necessitating soil testing. This study focused on the process of creating a comprehensive geotechnical database integrated with GIS. The database was developed by collecting borehole data from reputable sources, verifying its accuracy and redundancy, and organizing the information for integration. The GIS map generated by this study is an efficient solution that offers greater accuracy and generates valuable information that can be used as input for correlation analysis. It also serves as a decision support tool for geotechnical engineers, allowing them to make informed decisions during construction activities.
Keywords: ArcGIS, borehole data, geographic information system (GIS), geo-database, interpolation, SPT N-value, soil classification, φ-value, bearing capacity
Procedia PDF Downloads 67
1676 Partner Selection in International Strategic Alliances: The Case of the Information Industry
Authors: H. Nakamura
Abstract:
This study analyzes international strategic alliances in the information industry. Its first purpose is to clarify the strategic intention behind an international alliance; its second is to investigate the influence of differences in the target markets of partner companies on alliances. Using an international strategy theory approach to analyze the global strategies of global companies, the study compares a database business and an electronic publishing business. In particular, these cases emphasized factors attributable to "people" and "learning": reliability and communication between organizations and the evolution of the IT infrastructure. The theory developed in this study validates the effectiveness of these strategies.
Keywords: database business, electronic library, international strategic alliances, partner selection
Procedia PDF Downloads 370
1675 The Development of Chinese-English Homophonic Word Pairs Databases for English Teaching and Learning
Authors: Yuh-Jen Wu, Chun-Min Lin
Abstract:
Homophonic words are common in Mandarin Chinese, which belongs to the tonal language family. Using homophonic cues to study foreign languages is one of the mnemonic learning techniques that can aid the retention and retrieval of information in human memory. When learning difficult foreign words, some learners transpose them with words in a language they are familiar with to build an association and strengthen working memory. These phonological clues are a beneficial means for novice language learners. In the classroom, if mnemonic skills are used at the appropriate time in the instructional sequence, they may achieve their maximum effectiveness. For Chinese-speaking students, proper use of Chinese-English homophonic word pairs may help them learn difficult vocabulary. In this study, a database program was developed using Visual Basic. The database contains two corpora, one with Chinese lexical items and the other with English ones. The Chinese corpus contains 59,053 Chinese words that were collected by a web crawler. The pronunciations of this group of words are compared with words in an English corpus based on WordNet, a lexical database for the English language. Words in both databases with similar pronunciation chunks are detected in batches. A total of approximately 1,000 Chinese lexical items were located in the preliminary comparison. These homophonic word pairs can serve as a valuable tool to assist Chinese-speaking students in learning and memorizing new English vocabulary.
Keywords: Chinese, corpus, English, homophonic words, vocabulary
Procedia PDF Downloads 181
1674 Resource Sharing Issues of Distributed Systems Influences on Healthcare Sector Concurrent Environment
Authors: Soo Hong Da, Ng Zheng Yao, Burra Venkata Durga Kumar
Abstract:
The healthcare sector is a business that consists of providing medical services, manufacturing medical equipment and drugs, and providing medical insurance to the public. Most of the time, the data stored in the healthcare database relates to patients' information, which must be accurate when accessed by authorized stakeholders. In distributed systems, one important issue is concurrency, as it ensures that shared resources are synchronized and remain consistent through multiple read and write operations by multiple clients. The problems of concurrency in the healthcare sector are who gets access and how the shared data is synchronized and kept consistent when two or more stakeholders attempt to access the shared data simultaneously. In this paper, a framework that benefits the distributed healthcare sector's concurrent environment is proposed. In the proposed framework, four levels of database nodes, namely the national center, regional center, referral center, and local center, are explained. Moreover, the framework's synchronization is not symmetrical. Two synchronization techniques, complete and partial synchronization operations, are explained. Furthermore, for the case where multiple clients access the data at the same time, synchronization types are also discussed, with cases at different levels and priorities to ensure data remains synchronized throughout the processes.
Keywords: resources, healthcare, concurrency, synchronization, stakeholders, database
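A minimal sketch of the underlying consistency problem, under assumed simplifications (a single in-process record and a mutual-exclusion lock rather than the multi-level distributed scheme proposed above):

```python
# Illustrative sketch: a lock serializes concurrent updates to a shared patient record.
import threading

patient_record = {"patient_id": 101, "allergy_list": []}
record_lock = threading.Lock()

def add_allergy(allergy):
    # Without the lock, concurrent read-modify-write cycles could lose updates.
    with record_lock:
        current = list(patient_record["allergy_list"])
        current.append(allergy)
        patient_record["allergy_list"] = current

threads = [threading.Thread(target=add_allergy, args=(a,))
           for a in ("penicillin", "latex", "aspirin")]
for th in threads:
    th.start()
for th in threads:
    th.join()

print(patient_record)   # all three entries present regardless of interleaving
```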
Procedia PDF Downloads 147
1673 Decision Support System for Tourism in Northern Part of Thailand
Authors: Katejarinporn Chaiya, Thawit Janbanklong
Abstract:
The purposes of this study were to design a decision support system for tourism in the northern part of Thailand and to assess users' satisfaction after using it. The system provides tourists with touristic information and helps them plan their personal voyage; such information can be retrieved systematically based on personal budget and province. The samples of this study were five experts and 30 "white collar" users in Bangkok. The decision support system was designed with ASP.NET, and its database was developed using MySQL so that administrators can manage the database effectively. The application outcome revealed that the system works properly, as sought in the objectives. The specialists and white-collar workers in Bangkok evaluated the decision support system, and the result was satisfactorily positive.
Keywords: decision support system, ASP.NET, MySQL, white collars
Procedia PDF Downloads 356
1672 A.T.O.M.- Artificial Intelligent Omnipresent Machine
Authors: R. Kanthavel, R. Yogesh Kumar, T. Narendrakumar, B. Santhosh, S. Surya Prakash
Abstract:
This paper primarily focuses on developing an affordable personal assistant and implementing it in the field of Artificial Intelligence (AI) to create a virtual assistant/friend. The problem with existing home automation techniques is that they require the use of the exact command words present in the database to execute the corresponding task. Our proposed work is ATOM, a.k.a. 'Artificial intelligence Talking Omnipresent Machine'. Our inspiration came from an unlikely source: the movie 'Iron Man', in which a character called J.A.R.V.I.S. has omnipresence and device-controlling capability. This device can control household devices in real time and send live information to the user. It does not require the user to utter the exact commands specified in the database, as it can capture keywords from the uttered commands, correlate the obtained keywords and perform the specified task. This ability to compare and correlate keywords gives the user the liberty to give commands that are not necessarily the exact words provided in the database. The proposed work has higher flexibility (due to its ability to extract keywords from user input) compared to the existing work, Intelligent Home Automation System (IHAS), is more accurate, and is much more affordable, as it uses a Wi-Fi module and a Raspberry Pi 2 instead of ZigBee and a computer, respectively.
Keywords: home automation, speech recognition, voice control, personal assistant, artificial intelligence
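A toy sketch of the keyword-capturing idea described above; the command table, keyword sets and matching rule are hypothetical and far simpler than the actual system.

```python
# Illustrative sketch: map a free-form spoken command to an action by keyword overlap,
# so the user need not utter an exact stored phrase.
COMMANDS = {
    ("turn", "on", "light"): "LIGHT_ON",
    ("turn", "off", "light"): "LIGHT_OFF",
    ("fan", "on"): "FAN_ON",
}

def resolve_command(utterance: str) -> str:
    words = set(utterance.lower().split())
    # Pick the action whose keyword set overlaps most with the spoken words.
    best_action, best_overlap = "UNKNOWN", 0
    for keywords, action in COMMANDS.items():
        overlap = len(words & set(keywords))
        if overlap > best_overlap:
            best_action, best_overlap = action, overlap
    return best_action

print(resolve_command("could you please turn the light on"))   # -> LIGHT_ON
```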
Procedia PDF Downloads 335
1671 Bonding Characteristics Between FRP and Concrete Substrates
Authors: Houssam A. Toutanji, Meng Han
Abstract:
This study focuses on the development of a fracture-mechanics-based model that predicts the debonding behavior of FRP-strengthened RC beams. In this study, a database including 351 concrete prisms bonded with FRP plates and tested in single and double shear was prepared. The existing fracture-mechanics-based models are applied to this database. Unfortunately, the properties of the adhesive layer, especially a soft adhesive layer, used on the specimens in the existing studies could not always be found. Thus, the new model was proposed based on fifteen newly conducted pullout tests and twenty-four data points selected from two independent existing studies, for which soft adhesive layers were applied and the adhesive properties were available.
Keywords: carbon fiber composite materials, interface response, fracture characteristics, maximum shear stress, ultimate transferable load
Procedia PDF Downloads 266
1670 Prevalence and Risk Factors of Economic Toxicity in Gynecologic Malignancies: A Systematic Review
Authors: Dongliu Li
Abstract:
Objective: This study systematically evaluates the incidence and influencing factors of economic toxicity in patients with gynecological malignant tumors. Methods: Literature on the economic toxicity of gynecological malignancies was comprehensively searched in PubMed, The Cochrane Library, Web of Science, Embase, CINAHL, CNKI, the Wanfang Database, the Chinese Biomedical Literature Database and the VIP database. The search period extends up to February 2024. Stata 17 software was used to conduct a single-group meta-analysis of the incidence of economic toxicity in gynecological malignant tumors, and descriptive analysis was used to analyze the influencing factors. Results: A total of 11 studies were included, covering 6475 patients with gynecological malignant tumors. The results of the meta-analysis showed that the incidence of economic toxicity in gynecological malignant tumors was 40% (95% CI 31%-48%). The influencing factors of economic toxicity in patients with gynecological malignant tumors include sociodemographic factors, medical insurance-related factors and disease-related factors. Conclusion: The incidence of economic toxicity in patients with gynecological malignant tumors is high, and medical staff should conduct early screening of patients according to the relevant influencing factors, carry out personalized assessment of patients' economic status, and provide early prevention work and personalized intervention measures.
Keywords: gynecological malignancy, economic toxicity, incidence rate, influencing factors, systematic review
Procedia PDF Downloads 26
1669 Development of a Software System for Management and Genetic Analysis of Biological Samples for Forensic Laboratories
Authors: Mariana Lima, Rodrigo Silva, Victor Stange, Teodiano Bastos
Abstract:
Due to the high reliability reached by DNA tests, since the 1980s this kind of test has allowed the identification of a growing number of criminal cases, including old cases that were unsolved and now have a chance to be solved with this technology. Currently, the use of genetic profiling databases is a typical method to increase the scope of genetic comparison. Forensic laboratories must process, analyze, and generate genetic profiles for a growing number of samples, which requires time and great storage capacity. Therefore, it is essential to develop methodologies capable of organizing and minimizing the time spent on both biological sample processing and the analysis of genetic profiles, using software tools. Thus, the present work aims to develop a software system for forensic genetics laboratories that allows sample, criminal case and local database management, minimizing the time spent in the workflow and helping to compare genetic profiles. For the development of this software system, all data related to the storage and processing of samples, workflows and requirements incorporated into the system have been considered. The system uses the following languages: HTML, CSS, and JavaScript as Web technologies, with Node.js as the server platform, which has great efficiency in data input and output. In addition, the data are stored in a relational database (MySQL), which is free, allowing better acceptance by users. The software system developed here brings more agility to the workflow and analysis of samples, contributing to the rapid insertion of genetic profiles into the national database and to increasing the resolution of crimes. The next step of this research is its validation, in order to operate in accordance with current Brazilian national legislation.
Keywords: database, forensic genetics, genetic analysis, sample management, software solution
Procedia PDF Downloads 369
1668 An Attempt to Decipher the Meaning of a Mithraic Motif
Authors: Attila Simon
Abstract:
The subject of this research is an element of Mithras' iconography. It is a new element in the series of research begun with the study of the Bull in the Boat motif. The stylized altars represented by seven adjacent rectangles appear on only a small group of Mithraic reliefs, which may explain why they have received less attention and fewer attempts at decipherment than other motifs. Using Vermaseren's database, CIMRM (Corpus Inscriptionum et Monumentorum Religionis Mithriacae), we collected all the cases containing the motif under investigation, created a database of them grouped by location, then used a comparative method to compare the different forms of the motif and to isolate these cases, and finally evaluated the results. The aim of this research is to interpret the iconographic element in question and attempt to determine its place of origin. The study may provide an interpretation of a Mithraic representation that, to the best of the author's knowledge, has not been explained so far, and the question may generate scientific discourse.
Keywords: Roman history, religion, Mithras, iconography
Procedia PDF Downloads 87
1667 A Psychophysiological Evaluation of an Effective Recognition Technique Using Interactive Dynamic Virtual Environments
Authors: Mohammadhossein Moghimi, Robert Stone, Pia Rotshtein
Abstract:
Recording psychological and physiological correlates of human performance within virtual environments and interpreting their impact on human engagement, 'immersion' and related emotional or 'affective' states is both academically and technologically challenging. By exposing participants to an affective, real-time (game-like) virtual environment, designed and evaluated in an earlier study, a psychophysiological database containing the EEG, GSR and heart rate of 30 male and female gamers, exposed to 10 games, was constructed. Some 174 features were subsequently identified and extracted from a number of windows with 28 different lengths (e.g. 2, 3, 5, etc. seconds). After reducing the number of features to 30 using a feature selection technique, K-Nearest Neighbour (KNN) and Support Vector Machine (SVM) methods were subsequently employed for the classification process. The classifiers categorised the psychophysiological database into four affective clusters (defined in a 3-dimensional valence-arousal-dominance space) and eight emotion labels (relaxed, content, happy, excited, angry, afraid, sad, and bored). The KNN and SVM classifiers achieved average cross-validation accuracies of 97.01% (±1.3%) and 92.84% (±3.67%), respectively. However, no significant differences were found in the classification process based on affective clusters or emotion labels.
Keywords: virtual reality, affective computing, affective VR, emotion-based affective physiological database
Procedia PDF Downloads 231
1666 COVID-19 and Heart Failure Outcomes: Readmission Insights from the 2020 United States National Readmission Database
Authors: Induja R. Nimma, Anand Reddy Maligireddy, Artur Schneider, Melissa Lyle
Abstract:
Background: Although heart failure is one of the most common causes of hospitalization in adult patients, there is limited knowledge of outcomes following initial hospitalization for heart failure with coinciding COVID-19 (HFC-19). We felt it pertinent to analyze 30-day readmission causes and outcomes among patients with HFC-19 using real-world data from the United States National Readmission Database. Objective: The aim is to describe the rate and causes of readmission and the morbidity of heart failure with coinciding COVID-19 in the United States, using the 2020 National Readmission Database (NRD). Methods: A descriptive, retrospective study was conducted on the 2020 NRD, a nationally representative sample of all US hospitalizations. Adult (>18 years) inpatient admissions with COVID-19 and HF, and readmissions within 30 days, were selected based on International Classification of Diseases, Tenth Revision procedure codes. Results: In 2020, 260,372 adult patients were hospitalized with COVID-19 and HF. The median age was 74 (IQR: 64-83), and 47% were female. The median length of stay was 7 (4-13) days, and the total cost of stay was 62,025 (31,956-130,670) United States dollars. Among the index hospital admissions, 61,527 (23.6%) patients died, and 22,794 (11.5%) were readmitted within 30 days. The median age of patients readmitted within 30 days was 73 (63-82), 45% were female, and 1,962 (16%) died. The most common principal diagnoses for readmission in these patients were COVID-19 (34.8%), sepsis (16.5%), HF (7.1%), AKI (2.2%), respiratory failure with hypoxia (1.7%), and pneumonia (1%). Conclusion: The rate of readmission in patients with heart failure exacerbations is increasing yearly. COVID-19 was observed to be the most common principal diagnosis in patients readmitted within 30 days. Complicated hypertension, chronic pulmonary disease, complicated diabetes, renal failure, alcohol use, drug use, and peripheral vascular disorders are risk factors associated with readmission. Familiarity with the most common causes and predictors of readmission helps guide the development of initiatives to minimize adverse outcomes and the cost of medical care.
Keywords: COVID-19, heart failure, national readmission database, readmission outcomes
Procedia PDF Downloads 78
1665 Using India’s Traditional Knowledge Digital Library on Traditional Tibetan Medicine
Authors: Chimey Lhamo, Ngawang Tsering
Abstract:
Traditional Tibetan medicine, known as Sowa Rigpa (science of healing), originated more than 2,500 years ago with an insightful background, and it has been gaining significant attention in many Asian countries such as China, India, Bhutan, and Nepal. In particular, the Indian government has recognized Traditional Tibetan medicine as one of its major Indian medical systems, alongside Ayurveda. Although Traditional Tibetan medicine has a long history and growing interest, it is not easily recognized worldwide because it exists only in the Tibetan language and is neither accessible to nor understood by patent examiners at international patent offices; data about Traditional Tibetan medicine is not yet broadly available on the Internet. The exploitation of Traditional Tibetan medicine has also been increasing. The Traditional Knowledge Digital Library is a database aiming to prevent the patenting and misappropriation of India's traditional medical knowledge; it is applied to Sowa Rigpa in order to prevent its exploitation at international patent offices with the help of information technology tools and an innovative classification system, the Traditional Knowledge Resource Classification (TKRC). To date, more than 3,000 Sowa Rigpa formulations have been transcribed into the Traditional Knowledge Digital Library database. In this paper, we present India's Traditional Knowledge Digital Library for Traditional Tibetan medicine; this database system helps to preserve Sowa Rigpa and prevent its exploitation. Gradually it will be approved and accepted globally.
Keywords: traditional Tibetan medicine, India's Traditional Knowledge Digital Library, Traditional Knowledge Resource Classification, International Patent Classification
Procedia PDF Downloads 126
1664 A Computational Cost-Effective Clustering Algorithm in Multidimensional Space Using the Manhattan Metric: Application to the Global Terrorism Database
Authors: Semeh Ben Salem, Sami Naouali, Moetez Sallami
Abstract:
The increasing amount of collected data has limited the performance of current analysis algorithms. Thus, developing new cost-effective algorithms in terms of complexity, scalability, and accuracy has raised significant interest. In this paper, a modified, effective k-means-based algorithm is developed and tested. The new algorithm aims to reduce the computational load without significantly affecting the quality of the clusterings. The algorithm uses the City Block distance and a new stop criterion to guarantee convergence. Experiments conducted on a real data set show its high performance compared with the original k-means version.
Keywords: pattern recognition, global terrorism database, Manhattan distance, k-means clustering, terrorism data analysis
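An illustrative sketch of a k-means-style loop using the City Block (Manhattan) metric and a no-reassignment stop criterion; this is not the paper's implementation, and the median-based update step and the synthetic data are assumptions made for the example.

```python
# Illustrative sketch: L1-distance assignment, median update, stop when labels stabilize.
import numpy as np

def manhattan_kmeans(X, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = None
    for _ in range(max_iter):
        # Assignment step: City Block (L1) distance to every center.
        dists = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)
        new_labels = dists.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break                      # convergence: no point changed cluster
        labels = new_labels
        # Update step: the component-wise median minimizes total L1 distance.
        centers = np.array([np.median(X[labels == j], axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

X = np.vstack([np.random.default_rng(1).normal(0, 1, (50, 2)),
               np.random.default_rng(2).normal(6, 1, (50, 2))])
labels, centers = manhattan_kmeans(X, k=2)
print(centers.round(2))
```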
Procedia PDF Downloads 385
1663 Breast Cancer Survivability Prediction via Classifier Ensemble
Authors: Mohamed Al-Badrashiny, Abdelghani Bellaachia
Abstract:
This paper presents a classifier ensemble approach for predicting the survivability of breast cancer patients using the latest database version of the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute. The system consists of two main components: a feature selection component and a classifier ensemble component. The feature selection component divides the features in the SEER database into four groups. After that, it tries to find the most important features among the four groups that maximize the weighted average F-score of a certain classification algorithm. The ensemble component uses three different classifiers, each of which models a different set of features from SEER selected through the feature selection module. On top of them, another classifier is used to give the final decision based on the output decisions and confidence scores from each of the underlying classifiers. Different classification algorithms have been examined; the best setup found uses the decision tree, Bayesian network, and Naïve Bayes algorithms for the underlying classifiers and Naïve Bayes for the classifier ensemble step. The system outperforms all systems published to date when evaluated against the exact same SEER data (period of 1973-2002). It gives an 87.39% weighted average F-score compared to 85.82% and 81.34% for the other published systems. By increasing the data size to cover the whole database (period of 1973-2014), the overall weighted average F-score jumps to 92.4% on the held-out unseen test set.
Keywords: classifier ensemble, breast cancer survivability, data mining, SEER
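A hedged sketch of the stacked-ensemble idea (not the paper's exact setup or data): three base classifiers combined by a Naive Bayes meta-classifier and scored with a weighted F-score. scikit-learn has no Bayesian network estimator, so a second Naive Bayes variant stands in for that base model, and a public dataset stands in for SEER.

```python
# Illustrative sketch: stacking three base classifiers under a Naive Bayes combiner.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import StackingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB, GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)      # stand-in dataset; SEER is not bundled
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = StackingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
        ("bayes_net_proxy", BernoulliNB()),      # substitute for the Bayesian network
        ("naive_bayes", GaussianNB()),
    ],
    final_estimator=GaussianNB(),                # decision-level combiner
)
ensemble.fit(X_tr, y_tr)
pred = ensemble.predict(X_te)
print("weighted F-score:", round(f1_score(y_te, pred, average="weighted"), 4))
```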
Procedia PDF Downloads 323
1662 The Management Information System for Convenience Stores: Case Study in 7 Eleven Shop in Bangkok
Authors: Supattra Kanchanopast
Abstract:
The purpose of this research is to design and develop a management information system for 7-Eleven shops in Bangkok. The system was designed and developed to meet users' requirements over the Internet by using application software such as MySQL for database management, Apache HTTP Server as the web server, and PHP Hypertext Preprocessor as the interface between the web server, the database and the users. The system was designed as two subsystems: the main system, or system for the head office, and the branch system for branch shops. These consist of three parts, classified by user management, namely shop management, inventory management and Point of Sale (POS) management. The implementation of the MIS for the mini-mart shop can lessen the amount of paperwork and reduce repetitive tasks, so it may decrease the capital required for the business and support the extension of branches in the future as well.
Keywords: convenience store, management information system, inventory management, 7-Eleven shop
Procedia PDF Downloads 480
1661 New Approaches for the Handwritten Digit Image Features Extraction for Recognition
Authors: U. Ravi Babu, Mohd Mastan
Abstract:
This paper proposes a novel approach for a handwritten digit recognition system. It extracts digit image features based on a distance measure and derives an algorithm to classify the digit images. The distance measure can be computed on the thinned image; thinning is one of the preprocessing techniques in image processing. The paper mainly concentrates on the extraction of features from the digit image for effective recognition of the numeral. To assess its effectiveness, the proposed method was tested on the MNIST database, CENPARMI, CEDAR, and newly collected data. The proposed method was applied to more than one lakh (100,000) digit images and achieved good comparative recognition results, with a recognition rate of about 97.32%.
Keywords: handwritten digit recognition, distance measure, MNIST database, image features
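A small sketch of one possible distance-based feature definition on a thinned digit image; the probe-line scheme and the toy skeleton are assumptions for illustration, not the paper's exact features.

```python
# Illustrative sketch: distance from the image border to the first skeleton pixel along
# a few fixed rows/columns of a thinned digit, giving a compact feature vector.
import numpy as np

def border_distance_features(thinned, n_probes=4):
    h, w = thinned.shape
    rows = np.linspace(0, h - 1, n_probes, dtype=int)
    cols = np.linspace(0, w - 1, n_probes, dtype=int)
    feats = []
    for r in rows:                       # distance from the left border along probe rows
        hits = np.flatnonzero(thinned[r])
        feats.append(hits[0] / w if hits.size else 1.0)
    for c in cols:                       # distance from the top border along probe columns
        hits = np.flatnonzero(thinned[:, c])
        feats.append(hits[0] / h if hits.size else 1.0)
    return np.array(feats)

# Toy 7x7 "thinned" skeleton (1 = skeleton pixel), roughly a plus/cross shape.
skeleton = np.zeros((7, 7), dtype=int)
skeleton[1:6, 3] = 1                     # vertical stroke
skeleton[3, 1:6] = 1                     # horizontal stroke
print(border_distance_features(skeleton).round(2))
```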
Procedia PDF Downloads 460
1660 Thermochemical Study of the Degradation of the Panels of Wings in a Space Shuttle by Utilization of HSC Chemistry Software and Its Database
Authors: Ahmed Ait Hou
Abstract:
The wing leading edge and nose cone of the space shuttle are fabricated from a reinforced carbon/carbon material. This material attains its durability from a diffusion coating of silicon carbide (SiC) and a glass sealant. During re-entry into the atmosphere, this material is subject to an oxidizing high-temperature environment. Thermochemical calculations performed with the HSC Chemistry software and its database allow us to interpret the oxidation and chloridation phenomena observed on the wing leading edge and nose cone of the space shuttle during its mission. The first study follows the oxidation reaction of SiC. It has been demonstrated that thermal oxidation of SiC gives the two compounds SiO₂(s) and CO(g). Under the extreme conditions of very low oxygen partial pressures and high temperatures, there is a reaction between SiC and SiO₂, leading to SiO(g) and CO(g). We represented the phase stability diagram of the Si-C-O system calculated with HSC Chemistry at 1300°C. The principal characteristic of this predominance diagram is the SiC + SiO₂ coexistence line. The second study follows the chloridation reaction of SiC. The other problem encountered, in addition to oxidation, is the phenomenon of chloridation due to the presence of NaCl. Indeed, after many missions, the leading edge wing surfaces have exhibited small pinholes. We have used the HSC Chemistry database to analyze these various reactions. Our calculations agree with the phenomena reported in research work from the NASA Lewis Research Center.
Keywords: thermochemical calculations, HSC software, oxidation and chloridation, wings in space
Procedia PDF Downloads 122
1659 Risk of Heatstroke Occurring in Indoor Built Environment Determined with Nationwide Sports and Health Database and Meteorological Outdoor Data
Authors: Go Iwashita
Abstract:
The paper describes how the frequencies of heatstroke occurring in the indoor built environment are related to the outdoor thermal environment, using large statistical data sets. As the statistical accident data on heatstroke, nationwide accident data were obtained from the National Agency for the Advancement of Sports and Health (NAASH). The meteorological database of the Japan Meteorological Agency supplied data such as 1-hour average temperature, humidity, wind speed, and solar radiation. Each heatstroke data point from the NAASH database was linked to the meteorological data point acquired from the meteorological station nearest to where the accident occurred. This analysis was performed for a 10-year period (2005-2014). During this period, 3,819 cases of heatstroke were reported in the NAASH database for the investigated secondary/high schools of nine representative Japanese cities. Heatstroke most commonly occurred in the outdoor schoolyard at a wet-bulb globe temperature (WBGT) of 31°C and in the indoor gymnasium during athletic club activities at a WBGT > 31°C. The accident ratio (number of accidents during each club activity divided by the club's population) in the gymnasium during female badminton club activities was the highest. Although badminton is played in a gymnasium, these WBGT results show that the risk level during badminton under hot and humid conditions is equal to that of baseball or rugby played in the schoolyard. Apart from sports, a high risk of heatstroke was observed in school houses during cultural activities. Based on the above WBGT results, the risk level for the indoor environment under hot and humid conditions would be equal to that for the outdoor environment. Therefore, control measures against hot and humid indoor conditions, such as installing air conditioning, are needed not only in schools but also in residences.
Keywords: accidents in schools, club activity, gymnasium, heatstroke
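For reference, a minimal sketch of the standard WBGT weightings (ISO 7243) on which the index above is based; the example input temperatures are illustrative, not values from the study.

```python
# Illustrative sketch: standard WBGT formulas for outdoor (with solar load) and indoor use.
def wbgt_outdoor(t_nwb, t_globe, t_air):
    # Natural wet-bulb, globe and dry-bulb temperatures in degrees C, with solar load.
    return 0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_air

def wbgt_indoor(t_nwb, t_globe):
    # Indoors (no solar load) the dry-bulb term is dropped.
    return 0.7 * t_nwb + 0.3 * t_globe

print(round(wbgt_outdoor(t_nwb=27.0, t_globe=45.0, t_air=33.0), 1))   # 31.2 degC
print(round(wbgt_indoor(t_nwb=27.0, t_globe=38.0), 1))                # 30.3 degC
```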
Procedia PDF Downloads 212
1658 Voice Liveness Detection Using Kolmogorov Arnold Networks
Authors: Arth J. Shah, Madhu R. Kamble
Abstract:
Voice biometric liveness detection aims to certify that the voice data presented in an authentication process is genuine and not a recording or a synthetic voice. With the rise of deepfakes and other equally sophisticated spoofing generation techniques, it is becoming challenging to ensure that the person on the other end is a live speaker. A Voice Liveness Detection (VLD) system is a group of security measures that detect and prevent voice spoofing attacks. Motivated by the recent development of the Kolmogorov-Arnold Network (KAN), based on the Kolmogorov-Arnold theorem, we propose KAN for the VLD task. To date, multilayer perceptron (MLP) based classifiers have been used for such classification tasks. We aim to capture not only the compositional structure of the model but also to optimize the values of the univariate functions. This study provides a mathematical as well as an experimental analysis of KAN for VLD tasks, thereby opening a new perspective for scientists working on speech and signal processing tasks. The study combines traditional signal processing with new deep learning models, which proves to be a strong combination for VLD tasks. The experiments are performed on the POCO and ASVspoof 2017 V2 databases. We used constant-Q transform (CQT), Mel, and short-time Fourier transform (STFT) based front-end features, and CNN, BiLSTM, and KAN as back-end classifiers. The best accuracy is 91.26% on the POCO database, obtained using STFT features with the KAN classifier. On the ASVspoof 2017 V2 database, the lowest EER we obtained was 26.42%, using CQT features and KAN as the classifier.
Keywords: Kolmogorov-Arnold networks, multilayer perceptron, pop noise, voice liveness detection
Procedia PDF Downloads 38
1657 Sequential Pattern Mining from Data of Medical Record with Sequential Pattern Discovery Using Equivalent Classes (SPADE) Algorithm (A Case Study: Bolo Primary Health Care, Bima)
Authors: Rezky Rifaini, Raden Bagus Fajriya Hakim
Abstract:
This research was conducted at the Bolo Primary Health Care (PHC) in Bima Regency. The purpose of the research is to find the association patterns formed in the medical record database of Bolo Primary Health Care's patients. The data used are secondary data from the PHC's medical record database. The sequential pattern mining technique is the method used for the analysis. Transaction data were generated from Patient_ID, Check_Date and diagnosis. Sequential Pattern Discovery using Equivalent Classes (SPADE) is one of the algorithms in sequential pattern mining; it finds frequent sequences in transaction data using a vertical database format and a sequence join process. The results of the SPADE algorithm are frequent sequences, which are then used to form rules. This technique is used to find association patterns between item combinations. Based on the sequential association rule analysis with the SPADE algorithm, for a minimum support of 0.03 and a minimum confidence of 0.75, three sequential association patterns were obtained based on the sequence of Patient_ID, Check_Date and diagnosis data in the Bolo PHC.
Keywords: diagnosis, primary health care, medical record, data mining, sequential pattern mining, SPADE algorithm
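A simplified sketch of SPADE's vertical id-list idea on hypothetical toy visit sequences (not the PHC records): each diagnosis is stored as (patient, position) pairs, and 2-sequences are counted via a temporal join of those lists.

```python
# Illustrative sketch: vertical id-lists and a temporal join for "a then b" sequences.
from collections import defaultdict
from itertools import permutations

# patient_id -> ordered list of diagnoses over successive visits (invented toy data)
sequences = {
    1: ["ISPA", "Gastritis", "Hypertension"],
    2: ["ISPA", "Hypertension"],
    3: ["Gastritis", "Hypertension"],
}

# Build vertical id-lists: item -> set of (patient, position) occurrences.
idlists = defaultdict(set)
for pid, visits in sequences.items():
    for pos, dx in enumerate(visits):
        idlists[dx].add((pid, pos))

min_support = 2 / len(sequences)          # assumed threshold: at least 2 of 3 patients

# Frequent 1-sequences.
freq1 = {dx: occ for dx, occ in idlists.items()
         if len({pid for pid, _ in occ}) / len(sequences) >= min_support}

# Temporal join: "a then b" is supported by a patient if some a occurs before some b.
freq2 = {}
for a, b in permutations(freq1, 2):
    patients = {pa for pa, ia in freq1[a] for pb, ib in freq1[b] if pa == pb and ia < ib}
    support = len(patients) / len(sequences)
    if support >= min_support:
        freq2[(a, b)] = round(support, 2)

print(freq2)   # {('ISPA', 'Hypertension'): 0.67, ('Gastritis', 'Hypertension'): 0.67}
```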
Procedia PDF Downloads 401
1656 Oncolytic Efficacy of Thymidine Kinase-Deleted Vaccinia Virus Strain Tiantan (oncoVV-TT) in Glioma
Authors: Seyedeh Nasim Mirbahari, Taha Azad, Mehdi Totonchi
Abstract:
Oncolytic viruses, which replicate only in tumor cells, are being extensively studied for their use in cancer therapy. The vaccinia virus, a member of the poxvirus family, has demonstrated oncolytic ability in glioma. Treating glioma with traditional methods such as chemotherapy and radiotherapy is quite challenging. Even though oncolytic viruses have shown immense potential in cancer treatment, their effectiveness in glioblastoma treatment is still low; therefore, there is a need to improve and optimize immunotherapies for better results. In this study, we designed oncoVV-TT, which can more effectively target tumor cells while minimizing replication in normal cells, by replacing the thymidine kinase gene with a luc-p2a-GFP gene expression cassette. The human glioblastoma cell line U251 MG, the rat glioblastoma cell line C6, and the non-tumor cell line HFF were plated at 10⁵ cells per well in 12-well plates in 2 mL of DMEM-F2 medium with 10% FBS added to each well and incubated at 37°C. After 16 hours, the cells were treated with oncoVV-TT at MOIs of 0.01 and 0.1 and left in the incubator for a further 24, 48, 72 and 96 hours. A viral replication assay, fluorescence imaging and viability tests, including trypan blue and crystal violet assays, were conducted to evaluate the cytotoxic effect of oncoVV-TT. The findings show that oncoVV-TT had significantly higher cytotoxic activity and proliferation rates in tumor cells in a dose- and time-dependent manner, with the strongest effect observed in U251 MG. To conclude, oncoVV-TT has the potential to be a promising oncolytic virus for cancer treatment, with a more cytotoxic effect in human glioblastoma cells versus rat glioma cells. To assess the effectiveness of vaccinia virus-mediated viral therapy, we tested the U251 MG and C6 tumor cell lines, taken from human and rat gliomas, respectively. The study evaluated oncoVV-TT's ability to replicate and lyse cells and analyzed the survival rates of the tested cell lines when treated with different doses of oncoVV-TT. Additionally, we compared the sensitivity of the human and rat glioma cell lines to the oncolytic vaccinia virus. All experiments involving viruses were conducted under biosafety level 2. We engineered a vaccinia-based oncolytic virus called oncoVV-TT to replicate specifically in tumor cells. To propagate the oncoVV-TT virus, HeLa cells (5 × 10⁴/well) were plated in 24-well plates and incubated overnight to attach to the bottom of the wells. Subsequently, virus at an MOI of 10 was added. After 48 h, cells were harvested by scraping, and viruses were collected by 3 sequential freeze-thaw cycles followed by removal of cell debris by centrifugation (1500 rpm, 5 min). The supernatant was stored at −80 °C for the following experiments. To measure the replication of the virus in HeLa cells, cells (5 × 10⁴/well) were plated in 24-well plates and incubated overnight to attach to the bottom of the wells. Subsequently, virus at an MOI of 5 or an equal dilution of PBS was added. At treatment times of 0 h, 24 h, 48 h, 72 h and 96 h, the viral titers were determined under a fluorescence microscope (BZ-X700; Keyence, Osaka, Japan). Fluorescence intensity was quantified using the ImageJ software according to the manufacturer's protocol. For the isolation of single-virus clones, HeLa cells were seeded in six-well plates (5 × 10⁵ cells/well). After 24 h (100% confluent), the cells were infected with a 10-fold dilution series of TianTan green fluorescent protein (GFP) virus and incubated for 4 h. To examine the cytotoxic effect of the oncoVV-TT virus on U251 MG and C6 cells, trypan blue and crystal violet assays were used.
Keywords: oncolytic virus, immune therapy, glioma, vaccinia virus
Procedia PDF Downloads 77
1655 Preserving Digital Arabic Text Integrity Using Blockchain Technology
Authors: Zineb Touati Hamad, Mohamed Ridda Laouar, Issam Bendib
Abstract:
With the massive development of technology today, the Arabic language has gained a prominent position among the languages most used for writing articles, expressing opinions, and citing sources on many websites, despite its growing sensitivity in terms of structure, language skills, diacritics, writing methods, etc. In the context of the spread of the Arabic language, the Holy Quran represents the most prevalent Arabic text today in many applications and websites, used for citation purposes or for reading and learning rituals. The Quranic verses/surahs are published quickly and without cost, which raises great concern about keeping the content safe from tampering and alteration. To protect the content of texts from distortion, it is necessary to refer to the original database and conduct a comparison process to extract the percentage of distortion. The disadvantage of this method is that it takes time, in addition to the lack of any guarantee of the integrity of the database itself, as it belongs to one central party. Blockchain technology today represents the best way to maintain immutable content. A blockchain is a distributed database that stores information in blocks linked to each other through cryptography, where the modification of any block can be easily detected. To exploit these advantages, we seek in this paper to justify the use of this technique in preserving the integrity of Arabic texts that are sensitive to change, by building a decentralized framework to authenticate and verify the integrity of the digital Quranic verses/surahs spread on websites.
Keywords: Arabic text, authentication, blockchain, integrity, Quran, verification
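A minimal hash-chain sketch of the integrity idea described above; it is illustrative only (placeholder verse strings and a local Python list instead of a distributed ledger), not the paper's framework.

```python
# Illustrative sketch: chain text blocks by hash so any later alteration is detectable.
import hashlib
import json

def make_block(text, previous_hash):
    payload = {"text": text, "prev": previous_hash}
    block_hash = hashlib.sha256(json.dumps(payload, ensure_ascii=False,
                                           sort_keys=True).encode("utf-8")).hexdigest()
    return {**payload, "hash": block_hash}

def verify_chain(chain):
    for i, block in enumerate(chain):
        expected = make_block(block["text"], block["prev"])["hash"]
        prev_ok = (i == 0) or (block["prev"] == chain[i - 1]["hash"])
        if block["hash"] != expected or not prev_ok:
            return False
    return True

verses = ["placeholder text of verse one", "placeholder text of verse two"]
chain = []
for verse in verses:
    chain.append(make_block(verse, chain[-1]["hash"] if chain else "0" * 64))

print(verify_chain(chain))          # True: content matches the recorded hashes
chain[0]["text"] += " tampered"     # alter the first block's text
print(verify_chain(chain))          # False: the altered block no longer matches its hash
```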
Procedia PDF Downloads 162
1654 Aptamers: A Potential Strategy for COVID-19 Treatment
Authors: Mohamad Ammar Ayass, Natalya Griko, Victor Pashkov, Wanying Cao, Kevin Zhu, Jin Zhang, Lina Abi Mosleh
Abstract:
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is the causative agent of coronavirus disease 2019 (COVID-19). Early evidence pointed to the angiotensin-converting enzyme 2 (ACE-2) receptor expressed on the epithelial cells of the lung as the main entry point of SARS-CoV-2 into cells. Viral entry is mediated by the binding of the receptor binding domain (RBD) of the spike protein, expressed on the surface of the virus, to the ACE-2 receptor. As the number of SARS-CoV-2 variants continues to increase, mutations arising in the RBD of SARS-CoV-2 may render RBD-targeted neutralizing antibodies ineffective. To address this limitation, the objective of this study is to develop a combination of aptamers that target different regions of the RBD, preventing the binding of the spike protein to the ACE-2 receptor and the subsequent viral entry and replication. A safe and innovative biomedical tool was developed to inhibit viral infection and reduce the harms of COVID-19. In the present study, DNA aptamers were developed against a recombinant trimeric S protein using Systematic Evolution of Ligands by Exponential Enrichment (SELEX). Negative selection was introduced at round 7 to select for aptamers that bind specifically to the RBD domain. A series of 9 aptamers (ADI2010, ADI2011, ADI201L, ADI203L, ADI205L, ADIR68, ADIR74, ADIR80, ADIR83) were selected and characterized, with high binding affinity and specificity to the RBD of the spike protein. Aptamers (ADI25, ADI2009, ADI203L) were able to bind and pull down endogenous spike protein expressed on the surface of the SARS-CoV-2 virus in COVID-19-positive patient samples, as determined by liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis. The LC-MS/MS data confirmed that the aptamers can bind to the RBD of the spike protein. Furthermore, the results indicated that the combination of the 9 best aptamers inhibited the binding of the purified trimeric spike protein to the ACE-2 receptor found on the surface of Vero E6 cells. In the same experiment, the combined aptamers displayed a better neutralizing effect than antibodies. The data suggest that the selected aptamers could be used in therapy to neutralize the effect of the SARS-CoV-2 virus by inhibiting the interaction between the RBD and the ACE-2 receptor, preventing viral entry into target cells and therefore blocking viral replication.
Keywords: aptamer, ACE-2 receptor, binding inhibitor, COVID-19, spike protein, SARS-CoV-2, treatment
Procedia PDF Downloads 183