Search results for: overton database
1517 Development of a Software System for Management and Genetic Analysis of Biological Samples for Forensic Laboratories
Authors: Mariana Lima, Rodrigo Silva, Victor Stange, Teodiano Bastos
Abstract:
Due to the high reliability reached by DNA tests since the 1980s, this kind of test has allowed the identification of a growing number of criminal cases, including old unsolved cases that now have a chance to be solved with this technology. Currently, the use of genetic profiling databases is a typical method to increase the scope of genetic comparison. Forensic laboratories must process, analyze, and generate genetic profiles for a growing number of samples, which requires time and great storage capacity. Therefore, it is essential to develop methodologies, based on software tools, capable of organizing and minimizing the time spent on both biological sample processing and the analysis of genetic profiles. Thus, the present work aims at the development of a software system for forensic genetics laboratories that allows sample, criminal case, and local database management, minimizing the time spent in the workflow and helping to compare genetic profiles. For the development of this software system, all data related to the storage and processing of samples, workflows, and requirements incorporated into the system have been considered. The system uses the following languages: HTML, CSS, and JavaScript as Web technology, with the NodeJS platform as the server, which has great efficiency in data input and output. In addition, the data are stored in a relational database (MySQL), which is free, allowing better acceptance by users. The software system developed here brings more agility to the workflow and to sample analysis, contributing to the rapid insertion of genetic profiles into the national database and to increased resolution of crimes. The next step of this research is its validation, in order to operate in accordance with current Brazilian national legislation.
Keywords: database, forensic genetics, genetic analysis, sample management, software solution
Procedia PDF Downloads 370
1516 An Attempt to Decipher the Meaning of a Mithraic Motif
Authors: Attila Simon
Abstract:
The subject of this research is an element of Mithras' iconography. It is a new element in the series of research begun with the study of the Bull in the Boat motif. The stylized altars, represented by seven adjacent rectangles, appear on only a small group of Mithraic reliefs, which may explain why they have received less attention and fewer attempts at decipherment than other motifs. Using Vermaseren's database, CIMRM (Corpus Inscriptionum et Monumentorum Religionis Mithriacae), the author collected all the cases containing the motif under investigation, created a database of them grouped by location, then used a comparative method to compare the different forms of the motif and to isolate these cases, and finally evaluated the results. The aim of this research is to interpret the iconographic element in question and attempt to determine its place of origin. The study may provide an interpretation of a Mithraic representation that, to the best of the author's knowledge, has not been explained so far, and the question may generate scientific discourse.
Keywords: Roman history, religion, Mithras, iconography
Procedia PDF Downloads 90
1515 A Psychophysiological Evaluation of an Affective Recognition Technique Using Interactive Dynamic Virtual Environments
Authors: Mohammadhossein Moghimi, Robert Stone, Pia Rotshtein
Abstract:
Recording psychological and physiological correlates of human performance within virtual environments and interpreting their impacts on human engagement, ‘immersion’ and related emotional or ‘affective’ states is both academically and technologically challenging. By exposing participants to an affective, real-time (game-like) virtual environment, designed and evaluated in an earlier study, a psychophysiological database containing the EEG, GSR, and heart rate of 30 male and female gamers, exposed to 10 games, was constructed. Some 174 features were subsequently identified and extracted from a number of windows with 28 different timing lengths (e.g. 2, 3, 5, etc. seconds). After reducing the number of features to 30 using a feature selection technique, K-Nearest Neighbour (KNN) and Support Vector Machine (SVM) methods were employed for the classification process. The classifiers categorised the psychophysiological database into four affective clusters (defined based on a 3-dimensional space of valence, arousal, and dominance) and eight emotion labels (relaxed, content, happy, excited, angry, afraid, sad, and bored). The KNN and SVM classifiers achieved average cross-validation accuracies of 97.01% (±1.3%) and 92.84% (±3.67%), respectively. However, no significant differences were found in the classification process based on affective clusters or emotion labels.
Keywords: virtual reality, affective computing, affective VR, emotion-based affective physiological database
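As an illustration of the KNN stage described above, the sketch below classifies toy 2-D feature vectors by majority vote among the k nearest neighbours. The feature values and emotion labels are invented stand-ins, not the study's 30 selected psychophysiological features:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify a query vector by majority vote among its k nearest
    training samples (Euclidean distance)."""
    # train: list of (feature_vector, label) pairs
    dists = sorted((math.dist(x, query), label) for x, label in train)
    top_k = [label for _, label in dists[:k]]
    return Counter(top_k).most_common(1)[0][0]

# Toy 2-D feature vectors standing in for reduced psychophysiological features
train = [((0.1, 0.2), "relaxed"), ((0.2, 0.1), "relaxed"),
         ((0.9, 0.8), "excited"), ((0.8, 0.9), "excited"),
         ((0.85, 0.7), "excited")]
print(knn_classify(train, (0.15, 0.15)))  # relaxed
print(knn_classify(train, (0.9, 0.75)))   # excited
```

In the actual study, the feature vectors would be the 30 selected EEG/GSR/heart-rate features and the labels the four affective clusters or eight emotion labels.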
Procedia PDF Downloads 233
1514 COVID-19 and Heart Failure Outcomes: Readmission Insights from the 2020 United States National Readmission Database
Authors: Induja R. Nimma, Anand Reddy Maligireddy, Artur Schneider, Melissa Lyle
Abstract:
Background: Although heart failure is one of the most common causes of hospitalization in adult patients, there is limited knowledge on outcomes following initial hospitalization for heart failure with COVID-19 (HFC-19). We felt it pertinent to analyze 30-day readmission causes and outcomes among patients with HFC-19 in the United States using real-world big data from the National Readmission Database. Objective: The aim is to describe the rate and causes of readmission and the morbidity of heart failure with coinciding COVID-19 in the United States, using the 2020 National Readmission Database (NRD). Methods: A descriptive, retrospective study was conducted on the 2020 NRD, a nationally representative sample of all US hospitalizations. Adult (>18 years) inpatient admissions with COVID-19 and HF, and readmissions within 30 days, were selected based on International Classification of Diseases, Tenth Revision codes. Results: In 2020, 260,372 adult patients were hospitalized with COVID-19 and HF. The median age was 74 (IQR: 64-83), and 47% were female. The median length of stay was 7 (4-13) days, and the median total cost of stay was 62,025 (31,956-130,670) United States dollars. Among the index hospital admissions, 61,527 (23.6%) died, and 22,794 (11.5%) were readmitted within 30 days. The median age of patients readmitted within 30 days was 73 (63-82), 45% were female, and 1,962 (16%) died. The most common principal diagnoses for readmission in these patients were COVID-19 (34.8%), sepsis (16.5%), HF (7.1%), AKI (2.2%), respiratory failure with hypoxia (1.7%), and pneumonia (1%). Conclusion: The rate of readmission in patients with heart failure exacerbations is increasing yearly. COVID-19 was observed to be the most common principal diagnosis in patients readmitted within 30 days. Complicated hypertension, chronic pulmonary disease, complicated diabetes, renal failure, alcohol use, drug use, and peripheral vascular disorders are risk factors associated with readmission. Familiarity with the most common causes of and predictors for readmission helps guide the development of initiatives to minimize adverse outcomes and the cost of medical care.
Keywords: COVID-19, heart failure, national readmission database, readmission outcomes
Procedia PDF Downloads 79
1513 Using India’s Traditional Knowledge Digital Library on Traditional Tibetan Medicine
Authors: Chimey Lhamo, Ngawang Tsering
Abstract:
Traditional Tibetan medicine, known as Sowa Rigpa (science of healing), originated more than 2500 years ago with an insightful background, and it has been attracting significant attention in many Asian countries such as China, India, Bhutan, and Nepal. In particular, the Indian government has recognized Traditional Tibetan medicine as a major Indian medical system, alongside Ayurveda. Although Traditional Tibetan medicine has attracted growing interest and has a long history, it is not easily recognized worldwide because it exists only in the Tibetan language: it is neither accessible to nor understood by patent examiners at international patent offices, and data about Traditional Tibetan medicine are not yet broadly available on the Internet. Exploitation of Traditional Tibetan medicine has also been increasing. The Traditional Knowledge Digital Library is a database aiming to prevent the patenting and misappropriation of India’s traditional medical knowledge; using it for Sowa Rigpa helps prevent exploitation at international patent offices with the help of information technology tools and an innovative classification system, the Traditional Knowledge Resource Classification (TKRC). To date, more than 3000 Sowa Rigpa formulations have been transcribed into the Traditional Knowledge Digital Library database. In this paper, we present India's Traditional Knowledge Digital Library for Traditional Tibetan medicine; this database system helps to preserve Sowa Rigpa and prevent its exploitation. Gradually it will be approved and accepted globally.
Keywords: traditional Tibetan medicine, India's traditional knowledge digital library, traditional knowledge resource classification, international patent classification
Procedia PDF Downloads 128
1512 A Computational Cost-Effective Clustering Algorithm in Multidimensional Space Using the Manhattan Metric: Application to the Global Terrorism Database
Authors: Semeh Ben Salem, Sami Naouali, Moetez Sallami
Abstract:
The increasing amount of collected data has limited the performance of current analysis algorithms. Thus, developing new cost-effective algorithms in terms of complexity, scalability, and accuracy has raised significant interest. In this paper, a modified, effective k-means based algorithm is developed and evaluated experimentally. The new algorithm aims to reduce the computational load without significantly affecting the quality of the clustering. The algorithm uses the City Block distance and a new stop criterion to guarantee convergence. Experiments conducted on a real data set show its high performance when compared with the original k-means version.
Keywords: pattern recognition, global terrorism database, Manhattan distance, k-means clustering, terrorism data analysis
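A minimal sketch of Lloyd-style clustering with the City Block (Manhattan) distance and a centroid-shift stop criterion is shown below. This is an illustrative reconstruction under stated assumptions, not the authors' exact modified algorithm or their stop criterion:

```python
import random

def manhattan(a, b):
    """City Block (L1) distance between two equal-length vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def kmeans_l1(points, k, max_iter=100, tol=1e-9, seed=0):
    """k-means-style clustering assigning points by Manhattan distance;
    stops when centroids stop moving (a simple convergence criterion)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: manhattan(p, centroids[j]))
            clusters[i].append(p)
        new_centroids = []
        for c, old in zip(clusters, centroids):
            if not c:                      # keep empty clusters' centroids
                new_centroids.append(old)
                continue
            dim = len(c[0])
            new_centroids.append(tuple(sum(p[d] for p in c) / len(c)
                                       for d in range(dim)))
        shift = sum(manhattan(a, b) for a, b in zip(centroids, new_centroids))
        centroids = new_centroids
        if shift < tol:                    # stop criterion: centroids settled
            break
    return centroids, clusters

# Two well-separated toy groups in 2-D
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
cents, clus = kmeans_l1(points, k=2)
print(sorted(len(c) for c in clus))  # [3, 3]
```

On real multidimensional data such as the Global Terrorism Database, the L1 assignment step avoids the squares and square roots of the Euclidean metric, which is one source of the computational savings the abstract describes.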
Procedia PDF Downloads 386
1511 Breast Cancer Survivability Prediction via Classifier Ensemble
Authors: Mohamed Al-Badrashiny, Abdelghani Bellaachia
Abstract:
This paper presents a classifier ensemble approach for predicting the survivability of breast cancer patients using the latest database version of the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute. The system consists of two main components: a feature selection component and a classifier ensemble component. The feature selection component divides the features in the SEER database into four groups. It then tries to find the most important features among the four groups that maximize the weighted average F-score of a certain classification algorithm. The ensemble component uses three different classifiers, each of which models a different set of features from SEER through the feature selection module. On top of them, another classifier is used to give the final decision based on the output decisions and confidence scores from each of the underlying classifiers. Different classification algorithms have been examined; the best setup found uses the decision tree, Bayesian network, and Naïve Bayes algorithms for the underlying classifiers and Naïve Bayes for the classifier ensemble step. The system outperforms all systems published to date when evaluated against the exact same SEER data (period of 1973-2002). It gives an 87.39% weighted average F-score compared to 85.82% and 81.34% for the other published systems. By increasing the data size to cover the whole database (period of 1973-2014), the overall weighted average F-score jumps to 92.4% on the held-out unseen test set.
Keywords: classifier ensemble, breast cancer survivability, data mining, SEER
Procedia PDF Downloads 328
1510 The Management Information System for Convenience Stores: Case Study in 7 Eleven Shop in Bangkok
Authors: Supattra Kanchanopast
Abstract:
The purpose of this research is to design and develop a management information system for a 7-Eleven shop in Bangkok. The system was designed and developed to meet users’ requirements over the Internet, using application software such as MySQL for database management, Apache HTTP Server as the web server, and PHP Hypertext Preprocessor as the interface between the web server, the database, and users. The system was designed as two subsystems: the main system, for the head office, and the branch system, for branch shops. These consisted of three parts, classified by user management: shop management, inventory management, and Point of Sale (POS) management. The implementation of the MIS for the mini-mart shop can lessen the amount of paperwork and reduce repetitive tasks, so it may decrease the capital of the business and support an extension of branches in the future as well.
Keywords: convenience store, management information system, inventory management, 7 eleven shop
Procedia PDF Downloads 482
1509 New Approaches for the Handwritten Digit Image Features Extraction for Recognition
Authors: U. Ravi Babu, Mohd Mastan
Abstract:
The present paper proposes a novel approach for a handwritten digit recognition system. It extracts digit image features based on a distance measure and derives an algorithm to classify the digit images. The distance measure is computed on the thinned image; thinning is one of the preprocessing techniques in image processing. The paper mainly concentrates on the extraction of features from the digit image for effective recognition of the numeral. To assess the effectiveness of the proposed method, it was tested on the MNIST, CENPARMI, and CEDAR databases, as well as on newly collected data. The proposed method was run on more than one hundred thousand digit images and obtained good comparative recognition results, with a recognition rate of about 97.32%.
Keywords: handwritten digit recognition, distance measure, MNIST database, image features
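To give a flavour of distance-measure features computed on a thinned binary image, the toy sketch below measures how far the ON pixels of a skeleton lie from their centroid. The specific features are hypothetical; the abstract does not state the paper's exact feature set:

```python
import math

def distance_features(img):
    """Toy distance-measure features for a binary (thinned) digit image:
    mean and max Euclidean distance of ON pixels from their centroid.
    Illustrative only; not the paper's exact feature definitions."""
    pixels = [(r, c) for r, row in enumerate(img)
              for c, v in enumerate(row) if v]
    cr = sum(r for r, _ in pixels) / len(pixels)   # centroid row
    cc = sum(c for _, c in pixels) / len(pixels)   # centroid column
    dists = [math.hypot(r - cr, c - cc) for r, c in pixels]
    return sum(dists) / len(dists), max(dists)

# A thinned vertical stroke, a crude skeleton of the digit "1"
one = [[0, 1, 0],
       [0, 1, 0],
       [0, 1, 0],
       [0, 1, 0]]
mean_d, max_d = distance_features(one)
print(round(mean_d, 2), round(max_d, 2))  # 1.0 1.5
```

Features of this kind, computed per image after thinning, would then feed the classification algorithm the paper derives.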
Procedia PDF Downloads 461
1508 Thermochemical Study of the Degradation of the Panels of Wings in a Space Shuttle by Utilization of HSC Chemistry Software and Its Database
Authors: Ahmed Ait Hou
Abstract:
The wing leading edge and nose cone of the space shuttle are fabricated from a reinforced carbon/carbon material. This material attains its durability from a diffusion coating of silicon carbide (SiC) and a glass sealant. During re-entry into the atmosphere, this material is subject to an oxidizing high-temperature environment. The use of thermochemical calculations performed with the HSC Chemistry software and its database allows us to interpret the phenomena of oxidation and chloridation observed on the wing leading edge and nose cone of the space shuttle during its mission in space. The first study is the monitoring of the oxidation reaction of SiC. It has been demonstrated that thermal oxidation of SiC gives the two compounds SiO₂(s) and CO(g). Under the extreme conditions of very low oxygen partial pressures and high temperatures, there is a reaction between SiC and SiO₂, leading to SiO(g) and CO(g). We represented the phase stability diagram of the Si-C-O system, calculated using HSC Chemistry at 1300°C. The principal characteristic of this predominance diagram is the line of SiC + SiO₂ coexistence. The second study is the monitoring of the chloridation reaction of SiC. The other problem encountered, in addition to oxidation, is the phenomenon of chloridation due to the presence of NaCl. Indeed, after many missions, the leading edge wing surfaces have exhibited small pinholes. We have used the HSC Chemistry database to analyze these various reactions. Our calculations agree with the phenomena reported in research work from the NASA Lewis Research Center.
Keywords: thermochemical calculations, HSC software, oxidation and chloridation, wings in space
Procedia PDF Downloads 123
1507 Risk of Heatstroke Occurring in Indoor Built Environment Determined with Nationwide Sports and Health Database and Meteorological Outdoor Data
Authors: Go Iwashita
Abstract:
The paper describes how the frequencies of heatstroke occurring in indoor built environments are related to the outdoor thermal environment, using big statistical data. The nationwide accident data on heatstroke were obtained from the National Agency for the Advancement of Sports and Health (NAASH). The meteorological database of the Japanese Meteorological Agency supplied data on 1-hour average temperature, humidity, wind speed, solar radiation, and so forth. Each heatstroke data point from the NAASH database was linked to the meteorological data point acquired from the station nearest to where the accident occurred. This analysis was performed for a 10-year period (2005–2014). During this period, 3,819 cases of heatstroke were reported in the NAASH database for the investigated secondary/high schools of nine representative Japanese cities. Heatstroke most commonly occurred in the outdoor schoolyard at a wet-bulb globe temperature (WBGT) of 31°C and in the indoor gymnasium during athletic club activities at a WBGT > 31°C. The accident ratio (the number of accidents during each club activity divided by the club's population) was highest in the gymnasium during female badminton club activities. Although badminton is played in a gymnasium, these WBGT results show that the risk level during badminton under hot and humid conditions is equal to that of baseball or rugby played in the schoolyard. Beyond sports, a high risk of heatstroke was also observed in school buildings during cultural activities. Based on these WBGT results, the risk level for indoor environments under hot and humid conditions can equal that for outdoor environments. Therefore, control measures against hot and humid indoor conditions, such as installing air conditioning, are needed not only in schools but also in residences.
Keywords: accidents in schools, club activity, gymnasium, heatstroke
Procedia PDF Downloads 213
1506 Voice Liveness Detection Using Kolmogorov Arnold Networks
Authors: Arth J. Shah, Madhu R. Kamble
Abstract:
Voice biometric liveness detection certifies that the voice data presented during authentication is genuine and not a recording or synthetic voice. With the rise of deepfakes and other equally sophisticated spoofing-generation techniques, it is becoming challenging to ensure that the person on the other end is a live speaker. A Voice Liveness Detection (VLD) system is a group of security measures that detect and prevent voice spoofing attacks. Motivated by the recent development of the Kolmogorov-Arnold Network (KAN), based on the Kolmogorov-Arnold theorem, we propose KAN for the VLD task. To date, multilayer perceptron (MLP) based classifiers have been used for such classification tasks. We aim to capture not only the compositional structure of the model but also to optimize the values of the univariate functions. This study presents the mathematical as well as experimental analysis of KAN for VLD tasks, thereby opening a new perspective for scientists working on speech and signal processing based tasks. The study combines traditional signal processing with new deep learning models, which proves to be a better combination for VLD tasks. The experiments are performed on the POCO and ASVspoof 2017 V2 databases. We used constant-Q transform (CQT), Mel, and short-time Fourier transform (STFT) based front-end features, and used CNN, BiLSTM, and KAN as back-end classifiers. The best accuracy is 91.26% on the POCO database, using STFT features with the KAN classifier. On the ASVspoof 2017 V2 database, the lowest EER we obtained was 26.42%, using CQT features and KAN as the classifier.
Keywords: Kolmogorov Arnold networks, multilayer perceptron, pop noise, voice liveness detection
Procedia PDF Downloads 39
1505 Sequential Pattern Mining from Data of Medical Record with Sequential Pattern Discovery Using Equivalent Classes (SPADE) Algorithm (A Case Study: Bolo Primary Health Care, Bima)
Authors: Rezky Rifaini, Raden Bagus Fajriya Hakim
Abstract:
This research was conducted at the Bolo Primary Health Care (PHC) in Bima Regency. The purpose of the research is to find the association patterns formed from the medical record database of Bolo Primary Health Care's patients. The data used are secondary data from the PHC's medical records database. Sequential pattern mining is the method used for the analysis. Transaction data were generated from Patient_ID, Check_Date, and diagnosis. Sequential Pattern Discovery using Equivalent classes (SPADE) is one of the algorithms in sequential pattern mining; it finds frequent sequences in transaction data using a vertical database layout and a sequence join process. The result of the SPADE algorithm is a set of frequent sequences that are then used to form rules. This technique is used to find the association patterns between item combinations. Based on sequential association rule analysis with the SPADE algorithm, for a minimum support of 0.03 and a minimum confidence of 0.75, 3 sequential association patterns were obtained from the Patient_ID, Check_Date, and diagnosis data in the Bolo PHC.
Keywords: diagnosis, primary health care, medical record, data mining, sequential pattern mining, SPADE algorithm
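The core SPADE idea, a vertical database mapping each item to the sequences and positions where it occurs, plus a temporal join of id-lists, can be sketched for 2-item patterns as below. The patient IDs and diagnoses are invented placeholders, not data from the Bolo PHC:

```python
from collections import defaultdict

# Toy transactions: patient_id -> diagnoses ordered by check date.
# Hypothetical data; the paper mines real medical records.
sequences = {
    "P1": ["flu", "cough", "asthma"],
    "P2": ["flu", "asthma"],
    "P3": ["cough", "flu", "asthma"],
    "P4": ["flu", "cough"],
}

# Vertical database layout: item -> {patient: list of positions}
vertical = defaultdict(lambda: defaultdict(list))
for pid, seq in sequences.items():
    for pos, item in enumerate(seq):
        vertical[item][pid].append(pos)

def support(a, b):
    """Fraction of patients whose sequence contains item a before item b,
    computed by a temporal join of the two items' id-lists."""
    hits = 0
    for pid, pos_a in vertical[a].items():
        pos_b = vertical[b].get(pid, [])
        if pos_b and min(pos_a) < max(pos_b):
            hits += 1
    return hits / len(sequences)

print(support("flu", "asthma"))   # 0.75 (P1, P2, P3)
print(support("cough", "asthma")) # 0.5  (P1, P3)
```

Patterns whose support clears the minimum support threshold (0.03 in the study) would be kept and extended to longer sequences; rules passing the minimum confidence (0.75) form the reported association patterns.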
Procedia PDF Downloads 401
1504 Lipid from Activated Sludge as a Feedstock for the Production of Biodiesel
Authors: Ifeanyichukwu Edeh, Tim Overton, Steve Bowra
Abstract:
There is increasing interest in utilising low-grade or waste biomass for the production of renewable bioenergy vectors, i.e. waste to energy. In this study we have chosen to assess activated sludge, a microbial biomass generated during the second stage of waste water treatment, as a source of lipid for biodiesel production. To date, a significant proportion of biodiesel is produced from used cooking oil and animal fats. It was reasoned that if activated sludge proved a viable feedstock, it has the potential to support increased biodiesel production capacity. Activated sludge was obtained at different times of the year and from two different sewage treatment works in the UK. The biomass within the activated sludge slurry was recovered by filtration, and the total weight of material was calculated by combining the dry weights of the total suspended solid (TSS) and total dissolved solid (TDS) fractions. Total lipids were extracted from the TSS and TDS using solvent extraction (Folch method). The classes of lipids within the total lipid extract were characterised using high performance thin layer chromatography (HPTLC) by reference to known standards. The fatty acid profile and content of the lipid extract were determined using acid-mediated methanolysis to obtain fatty acid methyl esters (FAMEs), which were analysed by gas chromatography and HPTLC. The results showed differences in the total biomass content of the activated sludge collected from the different sewage works. Lipid yields from TSS obtained from both sewage treatment works differed according to the time of year (between 3.0 and 7.4 wt. %). The lipid yield varied slightly within the same source of biomass but more widely between the two sewage treatment works. The neutral lipid classes identified were acylglycerols, free fatty acids, sterols, and wax esters, while the phospholipid classes included phosphatidylcholine, lysophosphatidylcholine, phosphatidylethanolamine, and phosphatidylinositol. The fatty acid profile revealed the presence of palmitic acid, palmitoleic acid, linoleic acid, oleic acid, and stearic acid, with unsaturated fatty acids being the most abundant. Following optimisation, the FAME yield was greater than 10 wt. %, which is required for an economic advantage in biodiesel production.
Keywords: activated sludge, biodiesel, lipid, methanolysis
Procedia PDF Downloads 472
1503 Preserving Digital Arabic Text Integrity Using Blockchain Technology
Authors: Zineb Touati Hamad, Mohamed Ridda Laouar, Issam Bendib
Abstract:
With the massive development of technology today, the Arabic language has gained a prominent position among the languages most used for writing articles, expressing opinions, and citation on many websites, despite its growing sensitivity in terms of structure, language skills, diacritics, writing methods, etc. In the context of the spread of the Arabic language, the Holy Quran represents the most prevalent Arabic text today in many applications and on many websites, whether for citation purposes or for reading and learning rituals. Quranic verses/surahs are published quickly and without cost, which raises great concern about keeping the content safe from tampering and alteration. To protect the content of texts from distortion, it is currently necessary to refer to the original database and conduct a comparison process to extract the percentage of distortion. The disadvantage of this method is that it takes time, in addition to the lack of any guarantee of the integrity of the database itself, as it belongs to one central party. Blockchain technology today represents the best way to maintain immutable content. A blockchain is a distributed database that stores information in blocks linked to each other through cryptography, where the modification of any block can be easily detected. To exploit these advantages, we seek in this paper to justify the use of this technique in preserving the integrity of Arabic texts sensitive to change, by building a decentralized framework to authenticate and verify the integrity of the digital Quranic verses/surahs spread on websites.
Keywords: arabic text, authentication, blockchain, integrity, quran, verification
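The tamper-evidence property described above, where modifying any block is easily detected, can be illustrated with a minimal hash chain. This toy sketch models only the linking and verification; the distributed, decentralized aspects of a real blockchain are out of scope, and the verse strings are placeholders:

```python
import hashlib
import json

def block_hash(index, prev_hash, text):
    """Hash a block's contents; changing the text changes every later link."""
    payload = json.dumps({"i": index, "prev": prev_hash, "text": text},
                         sort_keys=True, ensure_ascii=False)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def build_chain(texts):
    """Link each text block to its predecessor by hash."""
    chain, prev = [], "0" * 64  # genesis link
    for i, text in enumerate(texts):
        h = block_hash(i, prev, text)
        chain.append({"i": i, "prev": prev, "text": text, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Recompute every hash; any alteration breaks the chain."""
    prev = "0" * 64
    for blk in chain:
        if blk["prev"] != prev or blk["hash"] != block_hash(blk["i"], prev, blk["text"]):
            return False
        prev = blk["hash"]
    return True

chain = build_chain(["verse one", "verse two", "verse three"])
print(verify(chain))           # True
chain[1]["text"] = "tampered"  # a single-character change is enough
print(verify(chain))           # False
```

Verifying a published verse then reduces to recomputing hashes against the chain, instead of running a full text comparison against a central database.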
Procedia PDF Downloads 164
1502 A Geometric Based Hybrid Approach for Facial Feature Localization
Authors: Priya Saha, Sourav Dey Roy Jr., Debotosh Bhattacharjee, Mita Nasipuri, Barin Kumar De, Mrinal Kanti Bhowmik
Abstract:
Biometric face recognition technology (FRT) has gained a lot of attention due to its extensive variety of applications from both security and non-security perspectives. It has come into view to provide a secure solution for the identification and verification of person identity. Although other biometric methods like fingerprint scans and iris scans are available, FRT has proven an efficient technology for its user-friendliness and contact-free nature. Accurate facial feature localization plays an important role in many facial analysis applications, including biometrics and emotion recognition. However, certain factors make facial feature localization a challenging task. On the human face, expressions can be seen in the subtle movements of facial muscles and are influenced by internal emotional states. These non-rigid facial movements cause noticeable alterations in the locations of facial landmarks and their usual shapes, sometimes creating occlusions in facial feature areas that make face recognition a difficult problem. The paper proposes a new hybrid technique for automatic landmark detection in both neutral and expressive frontal and near-frontal face images. The method uses the concepts of thresholding, sequential searching, and other image processing techniques to locate the landmark points on the face. Also, a Graphical User Interface (GUI) based software was designed that can automatically detect 16 landmark points around the eyes, nose, and mouth that are most affected by changes in facial muscles. The proposed system has been tested on the widely used JAFFE and Cohn-Kanade databases, as well as on the DeitY-TU face database, which was created in the Biometrics Laboratory of Tripura University under a research project funded by the Department of Electronics & Information Technology, Govt. of India. The performance of the proposed method has been evaluated in terms of error measure and accuracy. The method has a detection rate of 98.82% on the JAFFE database, 91.27% on the Cohn-Kanade database, and 93.05% on the DeitY-TU database. We have also carried out a comparative study of our proposed method against techniques developed by other researchers. In the future, this work will put emotion-oriented systems into focus through AU (Action Unit) detection based on the located features.
Keywords: biometrics, face recognition, facial landmarks, image processing
Procedia PDF Downloads 412
1501 Big Brain: A Single Database System for a Federated Data Warehouse Architecture
Authors: X. Gumara Rigol, I. Martínez de Apellaniz Anzuola, A. Garcia Serrano, A. Franzi Cros, O. Vidal Calbet, A. Al Maruf
Abstract:
Traditional federated architectures for data warehousing work well when corporations have existing regional data warehouses and there is a need to aggregate data at a global level. Schibsted Media Group has been maturing from a decentralised organisation into a more globalised one and needed to build some of the regional data warehouses for some brands at the same time as the global one. In this paper, we present the architectural alternatives studied and why a custom federated approach was the recommendation for proceeding with the implementation. Although the data warehouses are logically federated, the implementation uses a single database system, which presented many advantages: cost reduction and improved data access for global users, allowing consumers of the data to have a common data model for detailed analysis across different geographies together with a flexible layer for local specific needs in the same place.
Keywords: data integration, data warehousing, federated architecture, Online Analytical Processing (OLAP)
Procedia PDF Downloads 236
1500 Investigation of Regional Differences in Strong Ground Motions for the Iranian Plateau
Authors: Farhad Sedaghati, Shahram Pezeshk
Abstract:
Regional variations in strong ground motions for the Iranian Plateau have been investigated using a simple statistical method called Analysis of Variance (ANOVA). In this respect, a large database consisting of 1157 records occurring within the Iranian Plateau, with moment magnitudes greater than or equal to 5 and Joyner-Boore distances up to 200 km, has been considered. Geometric averages of horizontal peak ground accelerations (PGA), as well as 5% damped linear elastic response spectral accelerations (SA) at periods of 0.2, 0.5, 1.0, and 2.0 sec, are used as strong-motion parameters. The initial database is divided into two datasets, for Northern Iran (NI) and Central and Southern Iran (CSI). The comparison between the strong ground motions of these two regions reveals no evidence for significant differences; therefore, data from these two regions may be combined to estimate the unknown coefficients of attenuation relationships.
Keywords: ANOVA, attenuation relationships, Iranian Plateau, PGA, regional variation, SA, strong ground motion
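The one-way ANOVA test at the heart of this comparison reduces to an F-statistic, the ratio of between-group to within-group variance. The sketch below computes it from scratch; the two small samples are hypothetical stand-ins for the regional ground-motion parameters, not values from the 1157-record database:

```python
def anova_f(groups):
    """One-way ANOVA F-statistic: between-group variance (mean square)
    divided by within-group variance (mean square)."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical PGA samples (in g) for the two regions, NI and CSI
ni  = [0.12, 0.15, 0.11, 0.14, 0.13]
csi = [0.13, 0.14, 0.12, 0.15, 0.12]
f = anova_f([ni, csi])
print(round(f, 3))  # 0.048, far below any usual critical F value
```

A small F relative to the critical value for the chosen significance level means the between-region variation is indistinguishable from within-region scatter, which is the basis for pooling the NI and CSI datasets.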
Procedia PDF Downloads 314
1499 Localization of Geospatial Events and Hoax Prediction in the UFO Database
Authors: Harish Krishnamurthy, Anna Lafontant, Ren Yi
Abstract:
Unidentified Flying Objects (UFOs) have been an interesting topic for most enthusiasts, and hence people all over the United States report such findings online at the National UFO Report Center (NUFORC). Some of these reports are hoaxes, and among those that seem legitimate, our task is not to establish that these events are indeed related to flying objects from aliens in outer space. Rather, we intend to identify whether a report was a hoax, as identified by the UFO database team with their existing curation criteria. However, the database provides a wealth of information that can be exploited to provide various analyses and insights, such as social reporting, identifying real-time spatial events, and much more. We perform analysis to localize these time-series geospatial events and correlate them with known real-time events. This paper does not confirm any legitimacy of alien activity but rather attempts to gather information from likely legitimate reports of UFOs by studying the online reports. These events happen in geospatial clusters and are also time-based. We look at cluster density and data visualization to search the space of various cluster realizations to decide on the most probable clusters that provide us information about the proximity of such activity. A random forest classifier is also presented that is used to identify true events and hoax events, using the best available features such as region, week, time period, and duration. Lastly, we show the performance of the scheme on various days and correlate it with real-time events, where one of the UFO reports strongly correlates with a missile test conducted in the United States.
Keywords: time-series clustering, feature extraction, hoax prediction, geospatial events
Procedia PDF Downloads 376
1498 An Extensible Software Infrastructure for Computer Aided Custom Monitoring of Patients in Smart Homes
Authors: Ritwik Dutta, Marylin Wolf
Abstract:
This paper describes the trade-offs in, and the design from scratch of, a self-contained, easy-to-use health dashboard software system that provides customizable data tracking for patients in smart homes. The system is made up of different software modules and comprises a front-end and a back-end component. Built with HTML, CSS, and JavaScript, the front-end allows adding users, logging into the system, selecting metrics, and specifying health goals. The back-end consists of a NoSQL Mongo database, a Python script, and a SimpleHTTPServer written in Python. The database stores user profiles and health data in JSON format. The Python script uses the PyMongo driver library to query the database and displays formatted data as a daily snapshot of user health metrics against target goals. Any number of standard and custom metrics can be added to the system, and the corresponding health data can be fed in automatically via sensor APIs, or manually as text or picture data files. A real-time METAR request API permits correlating weather data with patient health, and an advanced query system is implemented to allow trend analysis of selected health metrics over custom time intervals. The project is available on GitHub and is free to use for academic purposes of learning and experimenting, or for practical purposes by building on it.
Keywords: Flask, Java, JavaScript, health monitoring, long-term care, Mongo, Python, smart home, software engineering, webserver
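The daily-snapshot idea can be sketched without a live Mongo instance; the sketch below operates on JSON-style records in plain Python, with field names that are assumptions for illustration (the actual system queries MongoDB through PyMongo):

```python
# Field names ("steps", "sleep_hours", "goals", ...) are assumptions for
# illustration; the actual schema lives in the project's Mongo database.
user_profile = {"user": "alice", "goals": {"steps": 8000, "sleep_hours": 8.0}}
health_data = [
    {"user": "alice", "date": "2024-05-01", "steps": 9100, "sleep_hours": 7.5},
    {"user": "alice", "date": "2024-05-01", "steps": 0, "sleep_hours": 0.5},
]

def daily_snapshot(profile, records, day):
    """Sum each tracked metric for the day and compare against the user's goals."""
    totals = {}
    for rec in records:
        if rec["user"] == profile["user"] and rec["date"] == day:
            for metric in profile["goals"]:
                totals[metric] = totals.get(metric, 0) + rec.get(metric, 0)
    return {m: {"value": totals.get(m, 0), "goal": g, "met": totals.get(m, 0) >= g}
            for m, g in profile["goals"].items()}

snapshot = daily_snapshot(user_profile, health_data, "2024-05-01")
```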
Procedia PDF Downloads 390
1497 Comparative Methods for Speech Enhancement and the Effects on Text-Independent Speaker Identification Performance
Authors: R. Ajgou, S. Sbaa, S. Ghendir, A. Chemsa, A. Taleb-Ahmed
Abstract:
Speech enhancement algorithms aim to improve speech quality. In this paper, we review several speech enhancement methods and evaluate their performance using Perceptual Evaluation of Speech Quality scores (PESQ, ITU-T P.862). All methods were evaluated in the presence of different kinds of noise using the TIMIT database and the NOIZEUS noisy speech corpus. The noise was taken from the AURORA database and includes suburban train, babble, car, exhibition hall, restaurant, street, airport, and train station noise. Simulation results showed improved enhancement performance for the non-stationary noise tracking approach in comparison with the other methods in terms of the PESQ measure. Moreover, we evaluated the effects of the speech enhancement techniques on a speaker identification system based on an autoregressive (AR) model and Mel-frequency cepstral coefficients (MFCC).
Keywords: speech enhancement, PESQ, speaker recognition, MFCC
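As a concrete example of the kind of method reviewed, here is a minimal numpy sketch of classic magnitude spectral subtraction; it illustrates the technique family, not necessarily one of the exact variants evaluated in the paper:

```python
# A generic magnitude spectral-subtraction sketch: the noise magnitude
# spectrum, estimated from a noise-only segment, is subtracted from each
# frame's magnitude spectrum while the noisy phase is kept.
import numpy as np

def spectral_subtraction(noisy, noise_estimate, frame_len=256):
    """Frame-wise magnitude subtraction with a 1% spectral floor."""
    noise_mag = np.abs(np.fft.rfft(noise_estimate[:frame_len]))
    out = np.zeros_like(noisy)
    for start in range(0, len(noisy) - frame_len + 1, frame_len):
        frame = noisy[start:start + frame_len]
        spec = np.fft.rfft(frame)
        mag = np.maximum(np.abs(spec) - noise_mag, 0.01 * np.abs(spec))
        out[start:start + frame_len] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame_len)
    return out

rng = np.random.default_rng(1)
clean = np.sin(2 * np.pi * 440 * np.arange(4096) / 8000)   # 440 Hz tone at 8 kHz
noise = 0.3 * rng.standard_normal(4096)
enhanced = spectral_subtraction(clean + noise, noise)
```

Tracking approaches differ from this sketch mainly in updating the noise estimate continuously rather than from a single fixed segment.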
Procedia PDF Downloads 424
1496 A Study on Automotive Attack Database and Data Flow Diagram for Concretization of HEAVENS: A Car Security Model
Authors: Se-Han Lee, Kwang-Woo Go, Gwang-Hyun Ahn, Hee-Sung Park, Cheol-Kyu Han, Jun-Bo Shim, Geun-Chul Kang, Hyun-Jung Lee
Abstract:
In recent years, with the advent of smart cars and the expansion of their market, the presentation of 'Adventures in Automotive Networks and Control Units' at the DEFCON 21 conference in 2013 revealed that cars are not safe from hacking. As a result, the HEAVENS model, which considers not only the functional safety of the vehicle but also its security, has been suggested. However, the HEAVENS model only presents a high-level process; there are no detailed procedures and activities for each step, making it difficult to apply the model to an actual vehicle security vulnerability assessment. In this paper, we propose an automotive attack database that systematically summarizes attack vectors, attack types, and vulnerable vehicle models to prepare for various car hacking attacks, together with data flow diagrams that can expose various vulnerabilities, and thereby suggest a way to concretize the HEAVENS model.
Keywords: automotive security, HEAVENS, car hacking, security model, information security
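A minimal sketch of what one record in the proposed attack database might look like; the three fields named in the abstract (attack vector, attack type, vulnerable vehicle models) are kept, while everything else, including the example values, is an assumption for illustration:

```python
# The three fields from the abstract are kept; example values and the
# lookup helper are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AttackRecord:
    attack_vector: str                 # e.g. "OBD-II port", "telematics unit"
    attack_type: str                   # e.g. "CAN message injection"
    vulnerable_models: list = field(default_factory=list)
    reference: str = ""                # source report or publication

attack_db = [
    AttackRecord("OBD-II port", "CAN message injection",
                 ["example-model-A"], "DEFCON 21 (2013)"),
]

def by_vector(records, vector):
    """Filter the database by attack vector, e.g. for a DFD-guided review."""
    return [r for r in records if r.attack_vector == vector]
```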
Procedia PDF Downloads 362
1495 Hybrid Approach for Face Recognition Combining Gabor Wavelet and Linear Discriminant Analysis
Authors: A. Annis Fathima, V. Vaidehi, S. Ajitha
Abstract:
Face recognition systems find many applications in surveillance and human-computer interaction. As these applications are of much importance and demand high accuracy, face recognition systems are expected to be more robust while requiring less computation time. In this paper, a hybrid approach for face recognition combining Gabor Wavelet and Linear Discriminant Analysis (HGWLDA) is proposed. The normalized input grayscale image is approximated and reduced in dimension to lower the processing overhead of the Gabor filters. The image is then convolved with a bank of Gabor filters of varying scales and orientations. LDA, a subspace analysis technique, is used to reduce the intra-class spread and maximize the inter-class separation. The variants used are 2-dimensional Linear Discriminant Analysis (2D-LDA), 2-dimensional bidirectional LDA ((2D)2LDA), and weighted 2-dimensional bidirectional Linear Discriminant Analysis (Wt(2D)2LDA). LDA reduces the feature dimension by extracting the features with the greatest variance. A k-Nearest Neighbour (k-NN) classifier recognizes the test image by comparing its features with those of each image in the training set. The HGWLDA approach is robust to illumination conditions, as Gabor features are illumination invariant, and it aims at a better recognition rate using fewer features across varying expressions. The performance of the proposed HGWLDA approach is evaluated using the AT&T database, the MIT-India face database, and the faces94 database. It is found that the proposed HGWLDA approach provides better results than the existing Gabor approach.
Keywords: face recognition, Gabor wavelet, LDA, k-NN classifier
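The overall pipeline shape (Gabor filter bank, then LDA, then k-NN) can be sketched on synthetic data as follows; note that standard LDA is substituted here for the paper's 2D-LDA variants, and the hand-rolled Gabor kernel is an illustrative simplification:

```python
# Illustrative pipeline: Gabor features -> LDA -> k-NN, on synthetic
# striped "faces". Standard LDA stands in for the paper's 2D-LDA variants.
import numpy as np
from scipy.signal import convolve2d
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def gabor_kernel(freq, theta, sigma=2.0, size=9):
    """Real Gabor kernel: Gaussian envelope times an oriented cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * xr)

def gabor_features(img, freqs=(0.1, 0.3), thetas=(0.0, np.pi / 2)):
    """Mean magnitude response per (frequency, orientation) pair."""
    return [np.abs(convolve2d(img, gabor_kernel(f, t), mode="valid")).mean()
            for f in freqs for t in thetas]

rng = np.random.default_rng(0)
imgs, labels = [], []
for label in (0, 1):                       # two synthetic "subjects"
    for _ in range(6):
        img = np.zeros((16, 16))
        if label == 0:
            img[::2, :] = 1.0              # horizontal stripes
        else:
            img[:, ::2] = 1.0              # vertical stripes
        imgs.append(img + 0.1 * rng.standard_normal((16, 16)))
        labels.append(label)

X = np.array([gabor_features(i) for i in imgs])
y = np.array(labels)
X_lda = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y)
knn = KNeighborsClassifier(n_neighbors=3).fit(X_lda, y)
acc = knn.score(X_lda, y)
```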
Procedia PDF Downloads 467
1494 A Novel Probabilistic Spatial Locality of Reference Technique for Automatic Cleansing of Digital Maps
Authors: A. Abdullah, S. Abushalmat, A. Bakshwain, A. Basuhail, A. Aslam
Abstract:
GIS (Geographic Information System) applications require geo-referenced data; these data may be available as databases or in the form of digital or hard-copy agro-meteorological maps. Such parameter maps are color-coded, with different regions corresponding to different parameter values, and converting them into a database is not very difficult. However, text and other planimetric elements overlaid on these maps make accurate image-to-database conversion a challenging problem, because it is almost impossible to recover exactly what lay underneath the text or icons; this points to the need for inpainting. In this paper, we propose a probabilistic inpainting approach that uses the probability of spatial locality of colors in the map to replace overlaid elements with the underlying color. We tested the limits of the proposed technique using non-textual simulated data and compared its text-removal results with those of a popular image editing tool on public-domain data, with promising results.
Keywords: noise, image, GIS, digital map, inpainting
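The spatial-locality idea can be sketched in numpy as a simple neighbour-mode fill: each overlaid pixel takes the most frequent color among its known neighbours. This is an illustration only, not the authors' exact probabilistic formulation:

```python
# Illustration of neighbour-based color fill on a label image; masked pixels
# (overlaid text/icons) are replaced by the mode of their unmasked neighbours.
import numpy as np
from collections import Counter

def inpaint_by_neighbour_mode(img, mask, radius=1):
    """img: 2-D array of color labels; mask: True where text/icons overlay."""
    img, mask = img.copy(), mask.copy()
    h, w = img.shape
    while mask.any():
        progress = False
        for r, c in zip(*np.nonzero(mask)):
            neigh = [img[rr, cc]
                     for rr in range(max(0, r - radius), min(h, r + radius + 1))
                     for cc in range(max(0, c - radius), min(w, c + radius + 1))
                     if not mask[rr, cc]]
            if neigh:  # most common neighbouring color wins
                img[r, c] = Counter(neigh).most_common(1)[0][0]
                mask[r, c] = False
                progress = True
        if not progress:  # isolated region with no known neighbours
            break
    return img

region = np.full((5, 5), 3)        # uniform map region with color label 3
overlay = np.zeros((5, 5), bool)
overlay[2, 1:4] = True             # simulated text overlay
region[overlay] = 9                # overlay color
restored = inpaint_by_neighbour_mode(region, overlay)
```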
Procedia PDF Downloads 352
1493 Effectiveness and Efficiency of Unified Philippines Accident Reporting and Database System in Optimizing Road Crash Data Usage with Various Stakeholders
Authors: Farhad Arian Far, Anjanette Q. Eleazar, Francis Aldrine A. Uy, Mary Joyce Anne V. Uy
Abstract:
The Unified Philippine Accident Reporting and Database System (UPARDS) is a newly developed system by Dr. Francis Aldrine Uy of the Mapua Institute of Technology. Its main purpose is to provide an advanced road accident investigation tool and a record-keeping and analysis system for stakeholders such as the Philippine National Police (PNP), the Metro Manila Development Authority (MMDA), the Department of Public Works and Highways (DPWH), the Department of Health (DOH), and insurance companies. The system is composed of two components: a mobile application for road accident investigators that takes advantage of available technology to advance data gathering, and a web application that integrates all accident data for the use of all stakeholders. The researchers, with the cooperation of the PNP's Vehicle Traffic Investigation Sector in the City of Manila, conducted field testing of the application on fifteen (15) accident cases. Simultaneously, the researchers distributed surveys to the PNP, Manila Doctors Hospital, and Charter Ping An Insurance Company to gather their insights regarding the web application. The survey was designed around the Technology Acceptance Model, an information systems theory. The survey results revealed that the respondents were greatly satisfied with the visualization and functions of the applications, which proved to be effective and far more efficient than the conventional pen-and-paper method. In conclusion, the pilot study was able to address the need for improvement of the current system.
Keywords: accident, database, investigation, mobile application, pilot testing
Procedia PDF Downloads 442
1492 Semi Empirical Equations for Peak Shear Strength of Rectangular Reinforced Concrete Walls
Authors: Ali Kezmane, Said Boukais, Mohand Hamizi
Abstract:
This paper presents an analytical study of the behavior of reinforced concrete walls with rectangular cross sections. Several experiments on such walls were selected for study, and a database was compiled from them. Nominal shear wall strengths were calculated using established formulas, such as the ACI (American) and NZS (New Zealand) code equations, the Mexican NTCC equation, and the Wood and Barda equations. Subsequently, the nominal shear wall strengths from these formulas were compared with the ultimate shear wall strengths in the database. The formulas vary substantially in functional form and do not account for all variables that affect the response of walls, and there is substantial scatter in the predicted values of ultimate shear strength. Two new semi-empirical equations are therefore developed, using data from tests of 57 transition walls and 27 slender walls, with the objective of predicting the peak strength of walls as accurately as possible.
Keywords: shear strength, reinforced concrete walls, rectangular walls, shear walls, models
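For reference, one of the code equations being compared can be sketched as follows; this is the commonly cited textbook form of the ACI 318 wall shear equation in psi units, written from memory, and should be verified against the current code edition before any design use:

```python
# Hedged sketch of the ACI 318 nominal wall shear strength (psi units):
#   Vn = Acv * (alpha_c * lambda * sqrt(f'c) + rho_t * fy)
# alpha_c is commonly taken as 3.0 for hw/lw <= 1.5 and 2.0 for hw/lw >= 2.0,
# with linear interpolation in between. Verify against the current code.
import math

def aci_wall_shear_strength(acv, fc, rho_t, fy, hw_over_lw, lam=1.0):
    """Nominal shear strength Vn in lb (acv in in^2, fc and fy in psi)."""
    if hw_over_lw <= 1.5:
        alpha_c = 3.0
    elif hw_over_lw >= 2.0:
        alpha_c = 2.0
    else:  # linear interpolation between hw/lw = 1.5 and 2.0
        alpha_c = 3.0 - 2.0 * (hw_over_lw - 1.5)
    return acv * (alpha_c * lam * math.sqrt(fc) + rho_t * fy)

# Hypothetical wall: 8 in x 120 in web, f'c = 4000 psi, rho_t = 0.0025, fy = 60000 psi
vn = aci_wall_shear_strength(acv=8 * 120, fc=4000, rho_t=0.0025, fy=60000, hw_over_lw=2.5)
```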
Procedia PDF Downloads 343
1491 Developing an Exhaustive and Objective Definition of Social Enterprise through Computer Aided Text Analysis
Authors: Deepika Verma, Runa Sarkar
Abstract:
One of the prominent debates in the social entrepreneurship literature has been whether entrepreneurial work for social well-being by for-profit organizations can be classified as social entrepreneurship. Of late, the scholarship has reached a consensus: there is little sense in confining social entrepreneurship to non-profit organizations alone. Encouraged by this research, a growing number of businesses engaged in filling social infrastructure gaps in developing countries are calling themselves social enterprises. These organizations are diverse in their ownership, size, objectives, operations, and business models. The lack of a comprehensive definition of social enterprise leads to three issues. Firstly, researchers may find it difficult to create a database of social enterprises, because classifying an entity as a social enterprise becomes subjective, or rests on researcher-defined parameters that are not replicable. Secondly, practitioners who use 'social enterprise' in their vision or mission statements may find it difficult to adjust their business models accordingly, especially when they face the dilemma of choosing social well-being over business viability. Thirdly, social enterprise and social entrepreneurship attract considerable donor funding and venture capital; in the absence of a comprehensive definitional guide, donors and investors may find it difficult to assign grants and investments. It therefore becomes necessary to develop an exhaustive and objective definition of social enterprise and to examine whether academicians' and practitioners' understandings of social enterprise match. This paper develops a dictionary of words often associated with social enterprise and social entrepreneurship.
It further compares two lexicographic definitions of social enterprise derived from the abstracts of academic journal papers and trade publications extracted from the EBSCO database, using the 'tm' package in R.
Keywords: EBSCO database, lexicographic definition, social enterprise, text mining
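The dictionary-building step can be sketched in Python on an invented two-abstract corpus (the actual study uses the 'tm' package in R over EBSCO texts):

```python
# Toy corpus: two invented abstracts standing in for the EBSCO texts.
from collections import Counter
import re

abstracts = [
    "Social enterprise combines social mission with commercial revenue.",
    "A social enterprise reinvests profit toward social well-being.",
]
stopwords = {"a", "with", "the", "toward"}

# Tokenize, lowercase, and drop stopwords, then count term frequencies.
tokens = [w for text in abstracts
          for w in re.findall(r"[a-z]+", text.lower())
          if w not in stopwords]
dictionary = Counter(tokens).most_common(5)
```

The most frequent terms across the corpus become candidate entries for the dictionary of words associated with social enterprise.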
Procedia PDF Downloads 395
1490 A Hybrid Data-Handler Module Based Approach for Prioritization in Quality Function Deployment
Authors: P. Venu, Joeju M. Issac
Abstract:
Quality Function Deployment (QFD) is a systematic technique that creates a platform on which customer responses can be converted into design attributes. The accuracy of a QFD process depends heavily on the data it handles, which are captured from customers or QFD team members. Customized computer programs that perform Quality Function Deployment within a stipulated time have been used by companies across the globe. These programs rely heavily on the storage and retrieval of data in a common database, which must act as a reliable source with minimal missing or erroneous values in order to perform accurate prioritization. This paper introduces a missing/erroneous-data handler module that uses a genetic algorithm and fuzzy numbers. The prioritization of customer requirements for sesame oil is illustrated, and a comparison is made between deployment based on the proposed data handler module and manual deployment.
Keywords: hybrid data handler, QFD, prioritization, module-based deployment
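One ingredient named above, fuzzy numbers, is often encoded in QFD work as triangular fuzzy numbers; the centroid defuzzification below is a common textbook choice, not necessarily the paper's exact formulation:

```python
# Triangular fuzzy number (low, mode, high) and its centroid, a common way
# to turn an imprecise customer rating into a crisp value for prioritization.
def triangular_centroid(low, mode, high):
    """Centroid (crisp value) of a triangular fuzzy number (low, mode, high)."""
    return (low + mode + high) / 3.0

# A missing/uncertain rating replaced by a fuzzy estimate "about 7 on a 1-9 scale"
estimate = triangular_centroid(5.0, 7.0, 9.0)
```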
Procedia PDF Downloads 297
1489 Ultimate Strength Prediction of Shear Walls with an Aspect Ratio between One and Two
Authors: Said Boukais, Ali Kezmane, Kahil Amar, Mohand Hamizi, Hannachi Neceur Eddine
Abstract:
This paper presents an analytical study of the behavior of rectangular reinforced concrete walls with an aspect ratio between one and two. Several experiments on such walls were selected for study, and a database was compiled from them. Nominal wall strengths were calculated using formulas such as the ACI (American), NZS (New Zealand), and Mexican (NTCC) code equations and the Wood equation for shear, together with strain-compatibility analysis for flexure. Subsequently, the nominal ultimate wall strengths from these formulas were compared with the ultimate wall strengths in the database. The formulas vary substantially in functional form and do not account for all variables that affect the response of walls, and there is substantial scatter in the predicted values of ultimate strength. New semi-empirical equations are developed using data from tests of 46 walls, with the objective of predicting the ultimate strength of walls as accurately as possible for all failure modes.
Keywords: prediction, ultimate strength, reinforced concrete walls, walls, rectangular walls
Procedia PDF Downloads 337
1488 Computer Aided Classification of Architectural Distortion in Mammograms Using Texture Features
Authors: Birmohan Singh, V.K.Jain
Abstract:
Computer aided diagnosis systems provide a vital second opinion to radiologists in detecting early signs of breast cancer in mammogram images. Masses, microcalcifications, and architectural distortions are the major abnormalities. In this paper, a computer aided diagnosis system is proposed for distinguishing abnormal mammograms with architectural distortion from normal mammograms. Four types of texture features (GLCM, GLRLM, fractal, and spectral) are extracted from the regions of suspicion. A Support Vector Machine is used as the classifier in this study. The proposed system yielded an overall sensitivity of 96.47% and an accuracy of 96% for the detection of abnormalities on mammogram images collected from the Digital Database for Screening Mammography (DDSM).
Keywords: architectural distortion, mammograms, GLCM texture features, GLRLM texture features, support vector machine classifier
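The feature-to-classifier shape described above can be sketched on synthetic patches as follows; the GLCM statistics are computed by hand for a small number of gray levels, whereas the real system uses four feature families and DDSM mammograms:

```python
# Illustrative sketch: horizontal-neighbour GLCM statistics fed to an SVM,
# on synthetic smooth vs. noisy patches (stand-ins for normal vs. distorted).
import numpy as np
from sklearn.svm import SVC

def glcm_features(img, levels=4):
    """Horizontal-neighbour GLCM, then contrast and energy statistics."""
    glcm = np.zeros((levels, levels))
    for row in img:
        for a, b in zip(row[:-1], row[1:]):
            glcm[a, b] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    return [contrast, energy]

rng = np.random.default_rng(0)
# Sorted rows give smooth (low-contrast) texture; uniform noise gives high contrast.
smooth = [np.sort(rng.integers(0, 4, (8, 8)), axis=1) for _ in range(8)]
noisy = [rng.integers(0, 4, (8, 8)) for _ in range(8)]
X = np.array([glcm_features(p) for p in smooth + noisy])
y = np.array([0] * 8 + [1] * 8)

clf = SVC(kernel="linear").fit(X, y)
acc = clf.score(X, y)
```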
Procedia PDF Downloads 491