Search results for: decision support filters
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3147

297 Optimized Facial Features-based Age Classification

Authors: Md. Zahangir Alom, Mei-Lan Piao, Md. Shariful Islam, Nam Kim, Jae-Hyeung Park

Abstract:

The evaluation and measurement of human body dimensions are achieved by physical anthropometry. This research was conducted in view of the importance of anthropometric indices of the face in forensic medicine, surgery, and medical imaging. The main goal of this research is to optimize facial feature points by establishing a mathematical relationship among facial features, and to use the optimized feature points for age classification. Since the selected facial feature points are located in the areas of the mouth, nose, eyes and eyebrows on facial images, all desired facial feature points can be extracted accurately. According to the proposed method, sixteen Euclidean distances are calculated from the eighteen selected facial feature points, both vertically and horizontally. Mathematical relationships among the horizontal and vertical distances are established. Moreover, it is also found that the facial feature distances follow a constant ratio during age progression: the distances between the specified feature points increase as a person ages from childhood, but the ratio of the distances does not change (d = 1.618). Finally, according to the proposed mathematical relationship, four independent feature distances related to eight feature points are selected from the sixteen distances and eighteen feature points, respectively. These four feature distances are used for age classification with the Support Vector Machine (SVM)-Sequential Minimal Optimization (SMO) algorithm, achieving around 96% accuracy. The experimental results show that the proposed system is effective and accurate for age classification.
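A minimal sketch of the classification step is given below (not the authors' code): the landmark indices, the pairing of points into four distances, and the age-group labels are hypothetical placeholders, and scikit-learn's SVC (whose libsvm backend uses an SMO-style solver) stands in for the SVM-SMO classifier.

```python
# Minimal sketch: classify age groups from four facial feature distances.
# Point pairs and labels are hypothetical; real landmarks would come from a detector.
import numpy as np
from sklearn.svm import SVC  # libsvm backend uses an SMO-style solver

def feature_distances(landmarks, pairs=((0, 1), (2, 3), (4, 5), (6, 7))):
    """Euclidean distances between selected pairs of (x, y) feature points."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j]) for i, j in pairs])

# toy data: each sample is 8 landmark points -> 4 distances
rng = np.random.default_rng(0)
X = np.array([feature_distances(rng.random((8, 2))) for _ in range(100)])
y = rng.integers(0, 3, size=100)           # e.g. child / adult / senior (toy labels)

clf = SVC(kernel="rbf").fit(X, y)          # SMO-type optimisation internally
print(clf.predict(X[:5]))
```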

Keywords: 3D Face Model, Face Anthropometrics, Facial Features Extraction, Feature distances, SVM-SMO

296 Face Recognition Using Principal Component Analysis, K-Means Clustering, and Convolutional Neural Network

Authors: Zukisa Nante, Wang Zenghui

Abstract:

Face recognition is the problem of identifying or recognizing individuals in an image. This paper investigates a possible method to solve this problem: an amalgamation of Principal Component Analysis (PCA), K-Means clustering, and a Convolutional Neural Network (CNN) for a face recognition system. It is trained and evaluated using the ORL dataset, which consists of 400 different faces divided into 40 classes of 10 face images per class. Firstly, PCA enables the use of a smaller network, which reduces the training time of the CNN; redundancy is removed and the variance is preserved with a smaller number of coefficients. Secondly, the K-Means clustering model is trained on the PCA-compressed data, which selects K-Means cluster centers with better characteristics. Lastly, the K-Means features serve as initial values for the CNN and act as its input data. The accuracy and performance of the proposed method were tested against other face recognition (FR) techniques, namely PCA, Support Vector Machine (SVM), and K-Nearest Neighbour (kNN). During experimentation, our suggested method achieved the highest performance after 90 epochs: 99% accuracy, a 99% F1-score, 99% precision, and 99% recall, in 463.934 seconds. It outperformed PCA, which obtained 97%, and kNN, which obtained 84%, during the conducted experiments. Therefore, this method proved to be efficient in identifying faces in images.
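The pipeline described above can be sketched roughly as follows. This is an illustration under assumptions: faces are flattened to vectors, 50 principal components are kept, and the K-Means distances to the 40 cluster centers are what is fed to a small 1-D CNN; the paper does not spell out this exact wiring.

```python
# Sketch of a PCA -> K-Means -> CNN front end on ORL-like data (toy inputs).
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((400, 64 * 64))            # stand-in for 400 ORL face images
y = np.repeat(np.arange(40), 10)          # 40 classes x 10 images

X_pca = PCA(n_components=50).fit_transform(X)    # remove redundancy, keep variance
km = KMeans(n_clusters=40, n_init=10).fit(X_pca)
X_feat = km.transform(X_pca)                      # distances to the 40 cluster centers

class SmallCNN(nn.Module):
    def __init__(self, n_classes=40):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(8 * 40, n_classes))
    def forward(self, x):
        return self.net(x.unsqueeze(1))   # (batch, 1, 40)

model = SmallCNN()
logits = model(torch.tensor(X_feat, dtype=torch.float32))
print(logits.shape)                        # torch.Size([400, 40])
```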

Keywords: Face recognition, Principal Component Analysis, PCA, Convolutional Neural Network, CNN, Rectified Linear Unit, ReLU, feature extraction.

295 Application of a Theoretical Framework as a Context for a Travel Behavior Change Policy Intervention

Authors: F. Moghtaderi, M. Burke, J. Troelsen

Abstract:

There has been a significant decline in active travel and a massive increase in car-dependent travel in many countries during the past two decades. Evidential risks to people’s physical and mental health are correlated with this increased use of motorized travel, ranging from overweight and obesity to increased air pollution. In response to these rising concerns, health professionals, traffic planners, local authorities and others have introduced a variety of initiatives to counterbalance the dominance of cars for daily journeys. However, travel behavior change interventions that aim to reduce car use are complex and challenging because of their interactions with human behavior. To change travel behavior, at least two aspects have to be taken into consideration: first, how to alter attitudes and perceptions toward sustainable and healthy modes of travel, in competition with the experience of private car use; and second, how to make these behavior change processes irreversible and sustainable. There are no comprehensive models available to guide policy interventions to increase the success of travel behavior change interventions across both these dimensions. A comprehensive theoretical framework is therefore required to facilitate and guide data collection and analysis and to derive the best possible guidelines for policy makers. Addressing this gap in the travel behavior change literature, this paper identifies and suggests a multidimensional framework to facilitate the planning of travel behavior change interventions. A structured mixed-method model is suggested to improve the analytic power of the results, given the complexity of human behavior. The Theory of Planned Behavior (TPB) was operationalized to recognize people’s attitudes towards a specific travel mode, while the Transtheoretical Model of Behavior Change (TTM) was used to capture decision-making processes. Consequently, the combination of these two theories (TTM and TPB) results in a synthesis with appropriate concepts for identifying and designing travel behavior change interventions.

Keywords: Behavior change theories, Theoretical framework, Travel behavior change interventions.

294 Optimal Simultaneous Sizing and Siting of DGs and Smart Meters Considering Voltage Profile Improvement in Active Distribution Networks

Authors: T. Sattarpour, D. Nazarpour

Abstract:

This paper investigates the effect of the simultaneous placement of DGs and smart meters (SMs) on voltage profile improvement in active distribution networks (ADNs). Responsive loads have recently become a substantial center of attention in power system studies, alongside distributed generations (DGs). The existence of responsive loads in ADNs has an undeniable effect on the sizing and siting of DGs. For this reason, an optimal framework is proposed for the sizing and siting of DGs and SMs in ADNs. SMs are taken into consideration for the sake of successfully implementing demand response programs (DRPs), such as direct load control (DLC), with end-side consumers. Targeting voltage profile improvement, the optimization procedure is solved by a genetic algorithm (GA) and tested on the IEEE 33-bus distribution test system. Different scenarios were established, with variations in the number of DG units, individual or simultaneous placement of DGs and SMs, and an adaptive power factor (APF) mode for DGs to support reactive power. The obtained results confirm the significant effect of DRPs and the APF mode in determining the optimal size and site of DGs to be connected in the ADN, resulting in the improvement of the voltage profile as well.
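The optimization loop might be sketched as below; this is not the paper's implementation. The chromosome encoding, the numbers of DGs and SMs, and the fitness function (a stand-in for the voltage-profile objective that would normally come from a 33-bus load-flow solution) are all assumptions, and plain re-seeding replaces full crossover and mutation.

```python
# Bare-bones evolutionary sketch for DG/SM sizing and siting (toy fitness).
import random

N_BUS, N_DG, N_SM = 33, 2, 5

def random_chromosome():
    dg_bus = random.sample(range(2, N_BUS + 1), N_DG)                # candidate DG buses
    dg_size = [random.uniform(0.1, 2.0) for _ in range(N_DG)]        # DG sizes in MW
    sm_bus = random.sample(range(2, N_BUS + 1), N_SM)                # candidate SM buses
    return dg_bus, dg_size, sm_bus

def fitness(ch):
    # Placeholder: stands in for minimising sum of |1 - V_bus| after a load-flow run.
    dg_bus, dg_size, sm_bus = ch
    return -abs(sum(dg_size) - 2.5) - 0.01 * (sum(dg_bus) + sum(sm_bus)) / N_BUS

def evolve(generations=50, pop_size=30):
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                               # elitist selection
        pop = parents + [random_chromosome() for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

print(evolve())
```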

Keywords: Active distribution network (ADN), distributed generations (DGs), smart meters (SMs), demand response programs (DRPs), adaptive power factor (APF).

293 Fusion of Finger Inner Knuckle Print and Hand Geometry Features to Enhance the Performance of Biometric Verification System

Authors: M. L. Anitha, K. A. Radhakrishna Rao

Abstract:

With the advent of modern computing technology, there is an increased demand for developing recognition systems that are capable of verifying the identity of individuals. Recognition systems are required by several civilian and commercial applications for providing access to secured resources. Traditional recognition systems, which are based on physical identities, are not sufficiently reliable to satisfy security requirements due to advances in forgery and identity impersonation methods. Recognizing individuals based on their unique physiological characteristics, known as biometric traits, is a reliable technique, since these traits are not transferable and cannot be stolen or lost. Since the performance of a biometric-based recognition system depends on the particular trait that is utilized, the present work proposes a fusion approach which combines the inner knuckle print (IKP) trait of the middle, ring and index fingers with the geometrical features of the hand. The hand image captured from a digital camera is preprocessed to find the finger IKP as a region of interest (ROI) and the hand geometry features. Geometrical features are represented as the distances between different key points, and IKP features are extracted by applying a local binary pattern descriptor to the IKP ROI. Decision-level AND fusion was adopted, which improved the performance of the combined scheme. The proposed approach is tested on a database collected at our institute. The approach is significant since both hand geometry and IKP features can be extracted from the palm region of the hand. The fusion of these features yields a false acceptance rate of 0.75% and a false rejection rate of 0.86% for the verification tests conducted, which are lower than the results obtained using the individual traits. The results obtained confirm the usefulness of the proposed approach and the suitability of the selected features for developing a biometric recognition system based on features from the palmar region of the hand.
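A rough sketch of the decision-level AND fusion is shown below (toy data; the thresholds, LBP parameters and the set of geometry distances are assumptions, not the values used in the paper).

```python
# Sketch: LBP-histogram matching for the knuckle ROI, distance matching for hand
# geometry, and decision-level AND fusion of the two accept/reject outcomes.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(roi, P=8, R=1):
    codes = local_binary_pattern(roi, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def accept_ikp(probe_roi, gallery_hist, thr=0.25):
    return np.linalg.norm(lbp_histogram(probe_roi) - gallery_hist) < thr

def accept_geometry(probe_dists, gallery_dists, thr=5.0):
    return np.linalg.norm(probe_dists - gallery_dists) < thr

def verify(probe_roi, probe_dists, gallery_hist, gallery_dists):
    # AND fusion: both modalities must accept for a genuine decision.
    return accept_ikp(probe_roi, gallery_hist) and accept_geometry(probe_dists, gallery_dists)

rng = np.random.default_rng(0)
roi = rng.integers(0, 256, (64, 64)).astype(np.uint8)     # stand-in knuckle ROI
print(verify(roi, rng.random(10), lbp_histogram(roi), rng.random(10)))
```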

Keywords: Biometrics, hand geometry features, inner knuckle print, recognition.

292 Routing Medical Images with Tabu Search and Simulated Annealing: A Study on Quality of Service

Authors: Mejía M. Paula, Ramírez L. Leonardo, Puerta A. Gabriel

Abstract:

In telemedicine, the image repository service is important for increasing the accuracy of the diagnostic support given to medical personnel. This study compares two routing algorithms with regard to quality of service (QoS), in order to analyze their performance when uploading and/or downloading medical images. The study focused on comparing the performance of Tabu Search with other heuristic and metaheuristic algorithms that improve QoS in telemedicine services in Colombia. The Tabu Search and Simulated Annealing heuristic algorithms were chosen for their high usability in this type of application; QoS is measured using the following metrics: delay, throughput, jitter and latency. In addition, routing tests were carried out on ten images of 40 MB in Digital Imaging and Communications in Medicine (DICOM) format. These tests were carried out for ten minutes under different traffic conditions, reaching a total of 25 tests, from a server at Universidad Militar Nueva Granada (UMNG) in Bogotá, Colombia, to a remote user at Universidad de Santiago de Chile (USACH), Chile. The results show that Tabu Search presents better QoS performance than Simulated Annealing, managing to optimize the routing of medical images, a basic requirement for offering diagnostic image services in telemedicine.
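As an illustration only, a generic tabu search over loop-free routes with a composite delay/jitter cost might look like the following; the graph, link metrics and cost weights are invented for the sketch and do not reflect the UMNG-USACH testbed.

```python
# Generic tabu-search sketch for QoS-aware route selection on a toy graph.
import networkx as nx

G = nx.connected_watts_strogatz_graph(12, 4, 0.3, seed=1)
for u, v in G.edges:
    G.edges[u, v]["delay"] = 1 + (u + v) % 5      # toy per-link metrics
    G.edges[u, v]["jitter"] = (u * v) % 3

def qos_cost(path):
    links = zip(path, path[1:])
    return sum(G.edges[e]["delay"] + 0.5 * G.edges[e]["jitter"] for e in links)

def neighbours(path, src, dst):
    # Neighbour routes: force the route through a different intermediate node.
    for k in set(G) - set(path):
        p = nx.shortest_path(G, src, k) + nx.shortest_path(G, k, dst)[1:]
        if len(p) == len(set(p)):                  # keep loop-free routes only
            yield p

def tabu_search(src, dst, iters=30, tabu_len=5):
    best = current = nx.shortest_path(G, src, dst)
    tabu = []
    for _ in range(iters):
        cands = [p for p in neighbours(current, src, dst) if tuple(p) not in tabu]
        if not cands:
            break
        current = min(cands, key=qos_cost)
        tabu = (tabu + [tuple(current)])[-tabu_len:]
        if qos_cost(current) < qos_cost(best):
            best = current
    return best, qos_cost(best)

print(tabu_search(0, 11))
```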

Keywords: Medical image, QoS, simulated annealing, Tabu search, telemedicine.

291 A Psychophysiological Evaluation of an Effective Recognition Technique Using Interactive Dynamic Virtual Environments

Authors: Mohammadhossein Moghimi, Robert Stone, Pia Rotshtein

Abstract:

Recording psychological and physiological correlates of human performance within virtual environments, and interpreting their impact on human engagement, ‘immersion’ and related emotional or ‘affective’ states, is both academically and technologically challenging. By exposing participants to an affective, real-time (game-like) virtual environment, designed and evaluated in an earlier study, a psychophysiological database containing the EEG, GSR and heart rate of 30 male and female gamers, exposed to 10 games, was constructed. Some 174 features were subsequently identified and extracted from a number of windows with 28 different timing lengths (e.g. 2, 3, 5 seconds, etc.). After reducing the number of features to 30 using a feature selection technique, K-Nearest Neighbour (KNN) and Support Vector Machine (SVM) methods were employed for the classification process. The classifiers categorised the psychophysiological database into four affective clusters (defined based on a 3-dimensional space of valence, arousal and dominance) and eight emotion labels (relaxed, content, happy, excited, angry, afraid, sad, and bored). The KNN and SVM classifiers achieved average cross-validation accuracies of 97.01% (±1.3%) and 92.84% (±3.67%), respectively. However, no significant differences were found in the classification process based on affective clusters or emotion labels.
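The reported evaluation could be approximated with a sketch like the one below (synthetic features standing in for the EEG/GSR/heart-rate windows; the feature-selection method and the number of folds are assumptions).

```python
# Sketch: select 30 of 174 features, then cross-validate KNN and SVM classifiers.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((300, 174))                 # 174 extracted features per window (toy)
y = rng.integers(0, 4, size=300)           # four affective clusters (toy labels)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    model = make_pipeline(SelectKBest(f_classif, k=30), clf)   # keep 30 features
    scores = cross_val_score(model, X, y, cv=5)
    print(name, scores.mean())
```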

Keywords: Virtual Reality, affective computing, affective VR, emotion-based affective physiological database.

290 Analysis of the Influence of Reshoring on the Structural Behavior of Reinforced Concrete Beams

Authors: Keith Danila Aquino Neves, Júlia Borges dos Santos

Abstract:

There is little published research about the influence of execution methods on structural behavior. Structural analysis is typically based on a constructed building, considering the actions of all forces under which it was designed. However, during construction, execution loads do not match those designed, and in some cases the loads begin to act when the concrete has not yet reached its maximum strength. Changes to structural element support conditions may occur, resulting in unforeseen alterations to the structure’s behavior. Shoring is an example of a construction process that, if executed improperly, will directly influence the structural performance, and may result in unpredicted cracks and displacements. The NBR 14931/2004 standard, which guides the execution of reinforced concrete structures, mentions that shoring must be executed in a way that avoids unpredicted loads and that it may be removed after previous analysis of the structure’s behavior by the professional responsible for the structure’s design. Differences in structural behavior are reduced for small spans. It is important to qualify and quantify how the incorrect placement of shores can compromise a structure’s safety. The results of this research allowed a more precise acknowledgment of the relationship between spans and loads, for which the influence of execution processes can be considerable, and reinforced that civil engineering practice must be performed with the presence of a qualified professional, respecting existing standards’ guidelines.

Keywords: Structural analysis, structural behavior, reshoring, static scheme, reinforced concrete.

289 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information

Authors: Haifeng Wang, Haili Zhang

Abstract:

Most movie recommendation systems have been developed for customers to find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based and analytical approach to stock the right movies for local audiences and retain more customers. We used classification models to extract features from thousands of customers’ demographic, behavioral and social information to predict their movie genre preferences. In the implementation, a Gaussian kernel support vector machine (SVM) classification model and a logistic regression model were established to extract features from sample data, and their in-sample test errors were compared. Out-of-sample errors were also compared under different Vapnik–Chervonenkis (VC) dimensions of the machine learning algorithm to detect and prevent overfitting. The Gaussian kernel SVM prediction model can correctly predict movie genre preferences in 85% of positive cases. The accuracy of the algorithm increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use a machine learning approach to predict customers’ preferences with a small data set and design prediction tools for these enterprises.
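A minimal sketch of the model comparison follows (synthetic data in place of the customer information; the split ratio is an assumption).

```python
# Sketch: compare in-sample and out-of-sample error of a Gaussian-kernel SVM
# and logistic regression, mirroring the paper's model comparison and overfitting check.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("Gaussian SVM", SVC(kernel="rbf", gamma="scale")),
                  ("Logistic regression", LogisticRegression(max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    e_in = 1 - clf.score(X_tr, y_tr)        # in-sample error
    e_out = 1 - clf.score(X_te, y_te)       # out-of-sample error
    print(f"{name}: E_in={e_in:.3f}  E_out={e_out:.3f}")
```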

Keywords: Computational social science, movie preference, machine learning, SVM.

288 Language Politics and Identity in Translation: From a Monolingual Text to Multilingual Text in Chinese Translations

Authors: Chu-Ching Hsu

Abstract:

This paper focuses on how government-led language policies and political changes in Taiwan manipulate the choice of languages in translations, and on what translation strategies are employed by translators to show their language ideology behind power struggles and decision-making. Framed by Lefevere’s theoretical concept of translating as rewriting, and carried out as a diachronic and chronological study, this paper specifically investigates the language ideology and translators’ idiolects in Chinese translations of Anglo-American novels. The examples drawn on to explore these issues were taken from different Chinese renditions of Mark Twain’s English-language novel The Adventures of Huckleberry Finn, which contains several dialogues originally written in the colloquial language and dialect of the American state of Mississippi. Adopting a corpus methodology, many examples are extracted from the translated texts and the source text to illuminate how translators in Taiwan deal with the dialectal features encoded in Twain’s works, and how different versions of the Chinese translations were used by Taiwanese translators to conform to language policies and to express their language identity textually in different periods of the past five decades, from the 1960s onward. The findings of this study suggest that the use of Taiwanese dialect and language patterns in translations is related to the mother-tongue movement and the language ideology of the translator, as well as to the issue of language identity raised on the island of Taiwan. Furthermore, this study confirms that the change of political power in Taiwan has a significant impact on language policy (assimilationism, pluralism or multiculturalism), which has also turned Taiwan from a monolingual into a multilingual society, where language ideology and identity are revealed not only in people’s daily communication but also in written translations.

Keywords: Language politics and policies, literary translation, mother-tongue, multiculturalism, translator’s ideology.

287 Juxtaposing South Africa’s Private Sector and Its Public Service Regarding Innovation Diffusion, to Explore the Obstacles to E-Governance

Authors: Petronella Jonck, Freda van der Walt

Abstract:

Despite the benefits of innovation diffusion in the South African public service, implementation thereof seems to be problematic, particularly with regard to e-governance which would enhance the quality of service delivery, especially accessibility, choice, and mode of operation. This paper reports on differences between the public service and the private sector in terms of innovation diffusion. Innovation diffusion will be investigated to explore identified obstacles that are hindering successful implementation of e-governance. The research inquiry is underpinned by the diffusion of innovation theory, which is premised on the assumption that innovation has a distinct channel, time, and mode of adoption within the organisation. A comparative thematic document analysis was conducted to investigate organisational differences with regard to innovation diffusion. A similar approach has been followed in other countries, where the same conceptual framework has been used to guide document analysis in studies in both the private and the public sectors. As per the recommended conceptual framework, three organisational characteristics were emphasised, namely the external characteristics of the organisation, the organisational structure, and the inherent characteristics of the leadership. The results indicated that the main difference in the external characteristics lies in the focus and the clientele of the private sector. With regard to organisational structure, private organisations have veto power, which is not the case in the public service. Regarding leadership, similarities were observed in social and environmental responsibility and employees’ attitudes towards immediate supervision. Differences identified included risk taking, the adequacy of leadership development, organisational approaches to motivation and involvement in decision making, and leadership style. Due to the organisational differences observed, it is recommended that differentiated strategies be employed to ensure effective innovation diffusion, and ultimately e-governance. It is recommended that the results of this research be used to stimulate discussion on ways to improve collaboration between the mentioned sectors, to capitalise on the benefits of each sector.

Keywords: E-governance, ICT, innovation diffusion, comparative analysis.

286 Authenticity of Lipid and Soluble Sugar Profiles of Various Oat Cultivars (Avena sativa)

Authors: Marijana M. Ačanski, Kristian A. Pastor, Djura N. Vujić

Abstract:

The identification of lipid and soluble sugar components was performed in flour samples of different cultivars belonging to the common oat species (Avena sativa L.): spring oat, winter oat and hulless oat. Fatty acids were extracted from the flour samples with n-hexane and derivatized into volatile methyl esters using TMSH (trimethylsulfonium hydroxide in methanol). Soluble sugars were then extracted from the defatted and dried samples of oat flour with 96% ethanol, and further derivatized into the corresponding TMS-oximes using hydroxylamine hydrochloride solution and BSTFA (N,O-bis-(trimethylsilyl)-trifluoroacetamide). The hexane and ethanol extracts of each oat cultivar were analyzed using a GC-MS system. The lipid and simple sugar compositions are very similar in all samples of the investigated cultivars. A chemometric tool was applied to the numeric values of the automatically integrated surface areas of the detected lipid and simple sugar components in their corresponding derivatized forms. Hierarchical cluster analysis shows a very high similarity between the investigated flour samples of oat cultivars according to fatty acid content (0.9955), and moderate similarity according to the content of soluble sugars (0.50). These preliminary results support the idea of establishing methods for oat flour authentication, and provide the means for distinguishing oat flour samples, regardless of the variety, from flour samples made of other cereal species, just by lipid and simple sugar profile analysis.
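As an illustration of the chemometric step, the sketch below clusters toy peak-area profiles hierarchically; the similarity measure (correlation) and linkage method are assumptions, since the abstract does not state which were used.

```python
# Sketch: hierarchical clustering of flour samples by integrated GC-MS peak areas.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
samples = {"spring oat": rng.random(25), "winter oat": rng.random(25),
           "hulless oat": rng.random(25)}          # toy fatty-acid peak areas

X = np.vstack(list(samples.values()))
D = pdist(X, metric="correlation")                 # 1 - Pearson correlation
Z = linkage(D, method="average")                   # cluster tree of the cultivars
print(1 - D)                                       # pairwise similarity of profiles
print(Z)
```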

Keywords: Authentication, chemometrics, GC-MS, lipid and soluble sugar composition, oat cultivars.

285 Steps towards the Development of National Health Data Standards in Developing Countries: An Exploratory Qualitative Study in Saudi Arabia

Authors: Abdullah I. Alkraiji, Thomas W. Jackson, Ian R. Murray

Abstract:

The proliferation of health data standards today is somewhat overlapping and conflicting, resulting in market confusion and leading to increasing proprietary interests. The government’s role and support in standardization for health data are thought to be crucial in order to establish credible standards for the next decade, to maximize interoperability across the health sector, and to decrease the risks associated with the implementation of non-standard systems. The normative literature has not explored the different steps required to be undertaken by the government towards the development of national health data standards. Based on the lessons learned from a qualitative study investigating the different issues in the adoption of health data standards in the major tertiary hospitals in Saudi Arabia, and on the opinions and feedback of different experts in the areas of data exchange, standards and medical informatics in Saudi Arabia and the UK, a list of steps required for the development of national health data standards was constructed. The main steps are the existence of a national formal reference for health data standards, an agreed national strategic direction for medical data exchange, a national medical information management plan, and a national accreditation body; even more important is change management at the national and organizational levels. The outcome of this study can be used by academics and practitioners in planning for health data standards, in particular in developing countries.

Keywords: Interoperability, Case Study, Health Data Standards, Medical Data Exchange, Saudi Arabia.

284 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks

Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone

Abstract:

Seizures are the main factor that affects the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made by continuous Electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is done manually by epileptologists, and this process is usually very long and error prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an Artificial Neural Network classifier, trained by applying the multilayer perceptron algorithm, and on a software application called Training Builder that has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on data from a single patient retrieved from a publicly available EEG dataset.
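A minimal sketch of the sliding-window-plus-ANN idea is given below (synthetic signal; the window length, the three summary features and the network size are assumptions, not the Training Builder feature set).

```python
# Sketch: sliding-window feature extraction from one EEG channel, then an MLP classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(signal, fs=256, win_s=2.0):
    step = int(fs * win_s)
    feats = []
    for start in range(0, len(signal) - step, step):
        w = signal[start:start + step]
        feats.append([w.mean(), w.std(), np.abs(np.diff(w)).mean()])  # simple statistics
    return np.array(feats)

rng = np.random.default_rng(0)
eeg = rng.standard_normal(256 * 600)          # 10 minutes of one channel (toy)
X = window_features(eeg)
y = rng.integers(0, 2, size=len(X))           # 1 = seizure window (toy labels)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(X, y)
print(clf.score(X, y))
```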

Keywords: Artificial Neural Network, Data Mining, Electroencephalogram, Epilepsy, Feature Extraction, Seizure Detection, Signal Processing.

283 Development System for Emotion Detection Based on Brain Signals and Facial Images

Authors: Suprijanto, Linda Sari, Vebi Nadhira, IGN. Merthayasa, Farida I. M.

Abstract:

Detection of human emotions has many potential applications. One application is to quantify the attentiveness of an audience in order to evaluate acoustic quality in a concert hall, using the subjective audio preferences reported by the audience. To obtain a fair evaluation of acoustic quality, this research proposed a system for multimodal emotion detection: one modality is based on brain signals measured using electroencephalography (EEG), and the second modality is sequences of facial images. In the experiment, an audio signal was customized to consist of normal and disordered sounds, and was played in order to stimulate positive or negative emotional feedback from volunteers. EEG signals from the temporal lobes, i.e. T3 and T4, were used to measure the brain response, and sequences of facial images were used to monitor facial expression while the volunteers listened to the audio signal. For the EEG signal, features were extracted from changes in the brain waves, particularly in the alpha and beta bands. Features of facial expression were extracted based on the analysis of motion images. We implemented an advanced optical flow method to detect the most active facial muscles from the neutral to other emotional expressions, represented as vector flow maps. To reduce the problem of detecting the emotional state, the vector flow maps were transformed into a compass mapping that represents the major directions and velocities of facial movement. The results showed that the power of the beta wave increased when the disordered sound stimulation was given; however, each volunteer gave different emotional feedback. Based on the features derived from the facial images, optical flow compass mapping is promising as additional information for making decisions about emotional feedback.
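One plausible reading of the optical-flow compass mapping is sketched below (toy frames; OpenCV's Farneback flow and an eight-sector binning are assumptions about the exact method).

```python
# Sketch: dense optical flow between two facial frames, converted to magnitude and
# direction and binned into eight compass sectors.
import cv2
import numpy as np

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (120, 120), dtype=np.uint8)   # stand-ins for face frames
curr = np.roll(prev, 2, axis=1)                            # simulated rightward motion

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])     # velocity and direction

# Sum flow magnitude into 8 compass sectors (N, NE, E, ...).
sector = (ang / (2 * np.pi) * 8).astype(int) % 8
compass = np.array([mag[sector == k].sum() for k in range(8)])
print(compass.argmax())                                    # dominant movement direction
```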

Keywords: Multimodal Emotion Detection, EEG, Facial Image, Optical Flow, compass mapping, Brain Wave

282 Selecting Negative Examples for Protein-Protein Interaction

Authors: Mohammad Shoyaib, M. Abdullah-Al-Wadud, Oksam Chae

Abstract:

Proteomics is one of the largest areas of research in bioinformatics and medical science. An ambitious goal of proteomics is to elucidate the structure, interactions and functions of all proteins within cells and organisms. Predicting Protein-Protein Interaction (PPI) is one of the crucial and decisive problems in current research. Genomic data offer a great opportunity and, at the same time, a lot of challenges for the identification of these interactions. Many methods have already been proposed in this regard. In the case of in-silico identification, most of the methods require both positive and negative examples of protein interaction, and the quality of these examples is crucial for the final prediction accuracy. Positive examples are relatively easy to obtain from well-known databases, but the generation of negative examples is not a trivial task. Current PPI identification methods generate negative examples based on assumptions that are likely to affect their prediction accuracy. Hence, if more reliable negative examples are used, PPI prediction methods may achieve even higher accuracy. Focusing on this issue, a graph-based negative example generation method is proposed, which is simple and more accurate than the existing approaches. An interaction graph of the protein sequences is created. The basic assumption is that the longer the shortest path between two protein sequences in the interaction graph, the lower the possibility of their interaction. A well-established PPI detection algorithm was employed with our negative examples, and in most cases it increases the accuracy by more than 10% in comparison with the negative pair selection method used in that work.
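The negative-example selection can be sketched as follows (toy interaction graph; the distance threshold is an assumption).

```python
# Sketch: protein pairs whose shortest-path distance in the interaction graph
# exceeds a threshold are taken as negative examples.
import itertools
import networkx as nx

positives = [("P1", "P2"), ("P2", "P3"), ("P3", "P4"), ("P4", "P5"), ("P1", "P6")]
G = nx.Graph(positives)

def negative_pairs(G, min_dist=3):
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    for u, v in itertools.combinations(G.nodes, 2):
        d = lengths[u].get(v, float("inf"))   # unreachable pairs count as far apart
        if d >= min_dist:
            yield u, v

print(list(negative_pairs(G)))
```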

Keywords: Interaction graph, Negative training data, Protein-Protein interaction, Support vector machine.

281 A Framework for Enhancing Mobile Development Software for Rangsit University, Thailand

Authors: Thossaporn Thossansin

Abstract:

This paper presents the development of a mobile application for students at the Faculty of Information Technology, Rangsit University (RSU), Thailand. RSU has upgraded its enrollment process by improving its information systems, and students can easily download the RSU APP in order to access substantial RSU information. The purpose of having a mobile application is to help students access the system regardless of time and place. The objectives of this paper are: 1. to develop an application on the iOS platform for students at the Faculty of Information Technology, Rangsit University, Thailand; and 2. to obtain the students’ perception of the new mobile app. The target group comprises students from the freshman year to the senior year of the Faculty of Information Technology, Rangsit University. The new mobile application, called RSU APP, was developed by the Department of Information Technology, Rangsit University. It contains useful features and various functionalities, particularly those that support students. The core contents of the app consist of RSU announcements, a calendar, events, activities, and e-books. The mobile app was developed on the iOS platform. User satisfaction was analyzed from interview data from 81 interviewees, as well as from a Google Forms questionnaire in which 122 interviewees were involved. The results show that users are satisfied with the application, giving it a mean satisfaction score of 4.67 (SD 0.52). The score for the question of whether users can learn and use the application quickly is also high, at 4.82 (SD 0.71). On the other hand, the lowest satisfaction rating, for the app’s lists feature, is 4.01 (SD 0.45).

Keywords: Mobile application, development of mobile application, framework of mobile development, software development for mobile devices.

280 Geophysical Investigation for Pre-Engineering Construction Works in Part of Ilorin, Northcentral Nigeria

Authors: O. Ologe, A. I. Augie

Abstract:

A geophysical investigation involving geoelectric depth sounding has been conducted as a pre-foundation study in part of Ilorin, Nigeria. The area is underlain by Precambrian basement complex rocks. 15 sounding stations were established along five traverses. The three to five Vertical Electrical Soundings (VES) conducted along each traverse were subjected to computer iteration using IP2Win software. Three to five subsurface geologic layers were delineated in the study area. These include the topsoil, with resistivity and thickness values ranging from 103 Ωm to 210 Ωm and 0 m to 1 m; laterite (117 Ωm to 590 Ωm and 1 m to 4.7 m); sandy clay (137 Ωm to 859 Ωm and 2.9 m to 4.3 m); the weathered layer (60.5 Ωm to 2539 Ωm and 3.2 m to 10 m); and fresh basement (2253 Ωm to ∞ and 7.1 m to ∞), respectively. The resistivity pseudosection shows a continuous high-resistivity zone at the surface. The resistivity of this layer at depths of 0-5 m varies from 300 Ωm to 800 Ωm along traverses 1 and 2. Hence, this layer is rated competent, as it has the ability to support engineering structures. However, along traverse 1, a very low-resistivity layer occurs between VES 5 and 15, with resistivity values ranging from 30 Ωm to 70 Ωm. This layer was rated incompetent based on the competence rating. This study revealed the importance of a geophysical survey as a pre-construction engineering survey at any civil engineering site, since it can reliably evaluate the competence of the subsurface geomaterials.

Keywords: Competence rating, geoelectric, pseudosection, soil, vertical electrical sounding.

279 Implementation of Congestion Management Strategies on Arterial Roads: Case Study of Geelong

Authors: A. Das, L. Hitihamillage, S. Moridpour

Abstract:

Natural disasters are an inevitable threat to communities and biodiversity. Disasters such as floods, tsunamis and tornadoes can be brutal, harsh and devastating. In Australia, flooding is a major issue experienced in different parts of the country. In such a crisis, delays in evacuation can decide the life and death of the people living in those regions. Congestion management can become a mammoth task if no steps are taken before such situations arise. In the past, many strategies were utilised to manage congestion in such circumstances, such as converting road shoulders to extra lanes or changing the road geometry by adding more lanes. However, expanding roads to resolve congestion problems is no longer considered a viable option. The authorities avoid this option for many reasons, such as a lack of financial support and land space, and instead tend to focus on optimising the resources they currently possess, using traffic signals to overcome congestion problems. A traffic signal management strategy was considered a viable option to alleviate congestion problems in the City of Geelong, Victoria. An arterial road with signalised intersections is considered in this paper, and the traffic data required for modelling were collected from VicRoads. The traffic signalling software SIDRA was used to model the roads with the information gathered from VicRoads. In this paper, various signal parameters are utilised to assess and improve the corridor performance to achieve the best possible Level of Service (LOS) for the arterial road.

Keywords: Congestion, constraints, management, LOS.

278 Cultural Practices as a Coping Measure for Women who Terminated a Pregnancy in Adolescence: A Qualitative Study

Authors: Botshelo R. Sebola

Abstract:

Unintended pregnancy often results in pregnancy termination. Most countries have legalised the termination of a pregnancy and pregnant adolescents can visit designated clinics without their parents’ consent. In most African and Asian countries, certain cultural practices are performed following any form of childbirth, including abortion, and such practices are ingrained in societies. The aim of this paper was to understand how women who terminated a pregnancy during adolescence coped by embracing cultural practices. A descriptive multiple case study design was adopted for the study. In-depth, semi-structured interviews and reflective diaries were used for data collection. Participants were 13 women aged 20 to 35 years who had terminated a pregnancy in adolescence. Three women kept their soiled sanitary pads, burned them to ash and waited for the rainy season to scatter the ash in a flowing stream. This ritual was performed to appease the ancestors, ask them for forgiveness and as a send-off for the aborted foetus. Five women secretly consulted Sangoma (traditional healers) to perform certain rituals. Three women isolated themselves to perform herbal cleansings, and the last two chose not to engage in any sexual activity for one year, which led to the loss of their partners. This study offers a unique contribution to understanding the solitary journey of women who terminate a pregnancy. The study challenges healthcare professionals who work in clinics that offer pregnancy termination services to look beyond releasing the foetus to advocating and providing women with the necessary care and support in performing cultural practices.

Keywords: Adolescence, case study, cultural rituals, pregnancy.

277 Resilient Manufacturing: Use of Augmented Reality to Advance Training and Operating Practices in Manual Assembly

Authors: L. C. Moreira, M. Kauffman

Abstract:

This paper outlines the results of experimental research on deploying an emerging augmented reality (AR) system for real-time task assistance (or work instructions) in highly customised and high-risk manual operations. The focus is on human operators’ training effectiveness and performance, and the aim is to test whether such technologies can enhance knowledge retention and the accuracy of task execution so as to improve health and safety (H&S). An AR-enhanced assembly method is proposed and experimentally tested using a real industrial process as a case study: electric vehicle (EV) battery module assembly. The experimental results revealed that the proposed method improved training practices and performance, with knowledge retention increasing from 40% to 84% and accuracy of task execution from 20% to 71%, when compared to the traditional paper-based method. The results of this research validate and demonstrate how emerging technologies are advancing the choice between manual, hybrid or fully automated processes by promoting XR-assisted processes and the connected worker (a vision for Industry 4.0 and 5.0), and by supporting manufacturing in becoming more resilient in times of constant market change.

Keywords: Augmented reality, extended reality, connected worker, XR-assisted operator, manual assembly 4.0, industry 5.0, smart training, battery assembly.

276 Modeling Spatial Distributions of Point and Nonpoint Source Pollution Loadings in the Great Lakes Watersheds

Authors: Chansheng He, Carlo DeMarchi

Abstract:

A physically based, spatially-distributed water quality model is being developed to simulate spatial and temporal distributions of material transport in the Great Lakes Watersheds of the U.S. Multiple databases of meteorology, land use, topography, hydrography, soils, agricultural statistics, and water quality were used to estimate nonpoint source loading potential in the study watersheds. Animal manure production was computed from tabulations of animals by zip code area for the census years of 1987, 1992, 1997, and 2002. Relative chemical loadings for agricultural land use were calculated from fertilizer and pesticide estimates by crop for the same periods. Comparison of these estimates to the monitored total phosphorous load indicates that both point and nonpoint sources are major contributors to the total nutrient loads in the study watersheds, with nonpoint sources being the largest contributor, particularly in the rural watersheds. These estimates are used as the input to the distributed water quality model for simulating pollutant transport through surface and subsurface processes to Great Lakes waters. Visualization and GIS interfaces are developed to visualize the spatial and temporal distribution of the pollutant transport in support of water management programs.

Keywords: Distributed Large Basin Runoff Model, Great Lakes Watersheds, nonpoint source pollution, point sources.

275 Weighted-Distance Sliding Windows and Cooccurrence Graphs for Supporting Entity-Relationship Discovery in Unstructured Text

Authors: Paolo Fantozzi, Luigi Laura, Umberto Nanni

Abstract:

The problem of entity relation discovery in unstructured data, a well-covered topic in the literature, consists in searching within unstructured sources (typically, text) in order to find connections among entities. These entities can be a whole dictionary, or a specific collection of named items. In many cases, machine learning and/or text mining techniques are used for this goal. These approaches might be unfeasible in computationally challenging problems, such as processing massive data streams. A faster approach consists in collecting the cooccurrences of any two words (entities) in order to create a graph of relations, a cooccurrence graph. Indeed, each cooccurrence highlights some degree of semantic correlation between the words, because it is more common to have related words close to each other than in opposite parts of the text. Some authors have used sliding windows for this problem: they count all the cooccurrences within a sliding window running over the whole text. In this paper we generalise this technique, arriving at a Weighted-Distance Sliding Window, where each occurrence of two named items within the window is accounted for with a weight depending on the distance between the items: a closer distance implies stronger evidence of a relationship. We develop an experiment in order to support this intuition, by applying the technique to a data set consisting of the text of the Bible, split into verses.
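A minimal sketch of the weighted-distance window follows; the 1/distance weighting is one natural choice, assumed here rather than taken from the paper.

```python
# Sketch: every pair of entities appearing within the window adds a weight that
# decays with their token distance, accumulating into a cooccurrence graph.
from collections import defaultdict

def cooccurrence_graph(tokens, entities, window=10):
    graph = defaultdict(float)
    for i, w in enumerate(tokens):
        if w not in entities:
            continue
        for j in range(i + 1, min(i + window, len(tokens))):
            v = tokens[j]
            if v in entities and v != w:
                edge = tuple(sorted((w, v)))
                graph[edge] += 1.0 / (j - i)     # closer pairs weigh more
    return dict(graph)

text = "moses spoke to aaron and moses went with aaron to the people".split()
print(cooccurrence_graph(text, {"moses", "aaron", "people"}))
```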

Keywords: Cooccurrence graph, entity relation graph, unstructured text, weighted distance.

274 Increasing Fishery Economic Added Value through Post Fishing Program: Cold Storage Program

Authors: Indrijuli Magsari Putri, Dicky R. Munaf

Abstract:

The purpose of this paper is to guide efforts to improve the economic added value of Indonesian fisheries products through a post-fishing program, namely a cold storage program. Indonesia’s fisheries potential has been acknowledged worldwide: the FAO (2009) stated that Indonesia is among the ten highest producers of fishery products in the world. Based on BPS (Statistics Indonesia) data, national fisheries production in 2011 reached 5.714 million tons, of which 93.55% came from marine fisheries and 6.45% from open waters. Waters make up two-thirds of Indonesian territory, which has given enormous benefits to Indonesia, especially fishermen. Improving the economic level of fishermen requires efforts to develop fishery business units, one of which is improving the quality of products marketed at the regional and international levels. This certainly needs the support of various fishery facilities (from infrastructure to superstructure), one of which is cold storage. Given the many benefits of cold storage as a means of processing fishery resources, the Indonesia Maritime Security Coordinating Board (IMSCB), as one of the maritime institutions for maritime security and safety, has a program to empower coastal communities by encouraging the development of cold storage in middle- and lower-tier fishery business units. The development of cold storage facilities that can fulfil their role to the maximum requires the synergistic efforts of various parties.

Keywords: Cold Storage, Fish, Regulation.

273 Using SMS Mobile Technology to Assess the Mastery of Subject Content Knowledge of Science and Mathematics Teachers of Secondary Schools in Tanzania

Authors: Joel S. Mtebe, Aron Kondoro, Mussa M. Kissaka, Elia Kibga

Abstract:

Sub-Saharan Africa is described as the second fastest growing region in mobile phone penetration in the world, growing faster than the United States or the European Union. Mobile phones have provided many opportunities to improve people’s lives in the region, such as in banking, marketing, entertainment, and paying various bills such as water, TV, and electricity. However, the potential of mobile phones to enhance teaching and learning has not been fully explored. This study presents the experience of developing and delivering SMS-based quiz questions used to assess the mastery of subject content knowledge of science and mathematics secondary school teachers in Tanzania. The SMS quizzes were used as a follow-up support mechanism for 500 teachers who participated in a project to upgrade the subject content knowledge of science and mathematics teachers in Tanzania. Quizzes of 10-15 questions were sent to teachers each week for 8 weeks, and the results were analyzed using SPSS. The results show that teachers who participated in chemistry and biology performed better than those who participated in mathematics and physics. Teachers reported some challenges that led to poor performance. This research has several practical implications for those who are implementing or planning to use mobile phones in teaching and learning, especially in rural secondary schools in sub-Saharan Africa.

Keywords: Mobile learning, e-learning, educational technologies, SMS, secondary education, assessment.

272 Application of KL Divergence for Estimation of Each Metabolic Pathway Genes

Authors: Shohei Maruyama, Yasuo Matsuyama, Sachiyo Aburatani

Abstract:

Development of a method to estimate gene functions is an important task in bioinformatics. One of the approaches for such annotation is the identification of the metabolic pathway that genes are involved in. Since gene expression data reflect various intracellular phenomena, those data are considered to be related to genes’ functions. However, it has been difficult to estimate gene function with high accuracy. It is considered that the low accuracy of the estimation is caused by the difficulty of accurately measuring gene expression: even when measured under the same conditions, gene expression usually varies. In this study, we proposed a feature extraction method that focuses on the variability of gene expressions in order to estimate the genes’ metabolic pathways accurately. First, we estimated the distribution of each gene’s expression from replicate data. Next, we calculated the similarity between all gene pairs by KL divergence, which is a method for calculating the similarity between distributions. Finally, we utilized the similarity vectors as feature vectors and trained a multiclass SVM for identifying the genes’ metabolic pathways. To evaluate our developed method, we applied it to budding yeast and trained the multiclass SVM for identifying the seven metabolic pathways. As a result, the accuracy calculated by our developed method was higher than the one calculated from the raw gene expression data. Thus, our developed method combined with KL divergence is useful for identifying the genes’ metabolic pathways.
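A rough sketch of the feature construction and classification is given below (toy replicate data; Gaussian distribution estimates, symmetrised KL divergence and an RBF kernel are assumptions).

```python
# Sketch: estimate a Gaussian per gene from its replicates, build a vector of
# symmetrised KL divergences to all other genes, and train a multiclass SVM.
import numpy as np
from sklearn.svm import SVC

def kl_gauss(m1, s1, m2, s2):
    """KL divergence between univariate Gaussians N(m1, s1^2) || N(m2, s2^2)."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2) ** 2) / (2 * s2**2) - 0.5

rng = np.random.default_rng(0)
n_genes, n_reps = 60, 5
expr = rng.normal(rng.random(n_genes)[:, None], 0.2, size=(n_genes, n_reps))
pathway = rng.integers(0, 7, size=n_genes)          # 7 metabolic pathways (toy labels)

mu, sd = expr.mean(axis=1), expr.std(axis=1) + 1e-6
X = np.array([[kl_gauss(mu[i], sd[i], mu[j], sd[j]) +
               kl_gauss(mu[j], sd[j], mu[i], sd[i]) for j in range(n_genes)]
              for i in range(n_genes)])              # similarity (divergence) vectors

clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, pathway)
print(clf.score(X, pathway))
```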

Keywords: Metabolic pathways, gene expression data, microarray, Kullback–Leibler divergence, KL divergence, support vector machines, SVM, machine learning.

271 Applying Participatory Design for the Reuse of Deserted Community Spaces

Authors: Wei-Chieh Yeh, Yung-Tang Shen

Abstract:

The concept of community building started in 1994 in Taiwan. After years of development, it has fostered the notion of active local resident participation in community issues, with residents acting as co-operators rather than passive followers. Participatory design gives participants more control in the decision-making process, helps to reduce the friction caused by arguments, and assists in bringing different parties to consensus. This results in an increase in the efficiency of projects run in the community. Therefore, the participation of local residents is key to the success of community building. This study applied participatory design to develop plans for the reuse of deserted spaces in the community, from the first stage of brainstorming design ideas and making creative models to be employed later, through to the final stage of construction. By conducting a series of participatory design activities, it aimed to integrate the different opinions of residents, develop a sense of belonging and reach a consensus. Besides this, it also aimed at building the residents’ awareness of their responsibilities for the environment and related issues of sustainable development. By reviewing relevant literature and understanding the history of related studies, the study formulated a theory. It took the “2012-2014 Changhua County Community Planner Counseling Program” as a case study to investigate the implementation process of participatory design. Research data were collected by document analysis, participant observation and in-depth interviews. After examining the three elements of “Design Participation”, “Construction Participation”, and “Follow-up Maintenance Participation” in the case, the study reached a promising conclusion: maintenance works were carried out better than in common public works; maintenance costs were lower; the works that residents were involved in were more creative; and, most importantly, the community characteristics could be easily recognized.

Keywords: Participatory design, Deserted spaces, Community building, Reuse.

270 A Survey on Data-Centric and Data-Aware Techniques for Large Scale Infrastructures

Authors: Silvina Caíno-Lores, Jesús Carretero

Abstract:

Large scale computing infrastructures have been widely developed with the core objective of providing a suitable platform for high-performance and high-throughput computing. These systems are designed to support resource-intensive and complex applications, which can be found in many scientific and industrial areas. Currently, large scale data-intensive applications are hindered by the high latencies that result from access to vastly distributed data. Recent works have suggested that improving data locality is key to moving towards exascale infrastructures efficiently, as solutions to this problem aim to reduce the bandwidth consumed in data transfers and the overheads that arise from them. There are several techniques that attempt to move computations closer to the data. In this survey we analyse the different mechanisms that have been proposed to provide data locality for large scale high-performance and high-throughput systems. This survey intends to assist the scientific computing community in understanding the various technical aspects and strategies that have been reported in recent literature regarding data locality. As a result, we present an overview of locality-oriented techniques, which are grouped into four main categories: application development, task scheduling, in-memory computing and storage platforms. Finally, the authors include a discussion on future research lines and synergies among the former techniques.

Keywords: Co-scheduling, data-centric, data-intensive, data locality, in-memory storage, large scale.

269 Design Optimization of Doubly Fed Induction Generator Performance by Differential Evolution

Authors: Mamidi Ramakrishna Rao

Abstract:

Doubly-fed induction generators (DFIGs), due to their advantages such as speed variation and four-quadrant operation, find application in wind turbines. Besides supplying power to the grid, a DFIG has to support reactive power (kVAr) under grid voltage variations, contribute minimum fault current during faults, have high efficiency and minimum weight, and provide adequate rotor protection during crowbar operation from +20% to -20% of rated speed. To achieve optimum performance, a good electromagnetic design of the DFIG is required. In this paper, a simple and heuristic global optimization method, Differential Evolution, has been used. The variables considered are lamination details such as slot dimensions, stack diameters, air gap length, and generator stator and rotor stack length. Two operating conditions have been considered: voltage and speed variations. Constraints included the reactive power supplied to the grid and limits on fault current and torque. The optimization has been executed separately for three objective functions: maximum efficiency, weight reduction, and grid fault stator currents. Subsequent calculations led to the conclusion that designs determined through differential evolution help in arriving at an optimum electrical design for each objective function.
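For illustration, SciPy's differential evolution can be wired up as below; the design variables, the toy loss model and the penalty term are placeholders for the electromagnetic model and constraints described above.

```python
# Sketch: differential evolution over a few toy design variables with a penalty
# standing in for constraints such as reactive-power capability and fault-current limits.
from scipy.optimize import differential_evolution

# variables: [air gap (mm), stack length (m), slot depth (mm)]
bounds = [(0.5, 2.0), (0.3, 1.2), (10.0, 40.0)]

def objective(x):
    air_gap, stack_len, slot_depth = x
    loss = 0.02 * air_gap + 0.5 / stack_len + 0.001 * slot_depth   # toy loss model
    penalty = max(0.0, 0.6 - stack_len) * 10.0                     # toy constraint
    return loss + penalty            # minimising loss ~ maximising efficiency

result = differential_evolution(objective, bounds, seed=1, maxiter=200)
print(result.x, result.fun)
```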

Keywords: Design optimization, performance, doubly fed induction generators, DFIG, differential evolution.

268 Alumina Supported Cu-Mn-Cr Catalysts for CO and VOCs Oxidation

Authors: Krasimir I. Ivanov, Elitsa N. Kolentsova, Dimitar Y. Dimitrov, Petya Ts. Petrova, Tatyana T. Tabakova

Abstract:

This work studies the effect of chemical composition on the activity and selectivity of γ-alumina supported CuO/MnO2/Cr2O3 catalysts toward the deep oxidation of CO, dimethyl ether (DME) and methanol. The catalysts were prepared by impregnation of the support with an aqueous solution of copper nitrate, manganese nitrate and CrO3 under different conditions. Thermal, XRD and TPR analyses were performed. The catalytic measurements of single-compound oxidation were carried out on continuous flow equipment with a four-channel isothermal stainless steel reactor. Flow-line equipment with an adiabatic reactor was used for the simultaneous oxidation of all compounds under conditions that closely mimic the industrial ones. The reactant and product gases were analyzed by means of on-line gas chromatographs. On the basis of the XRD analysis, it can be concluded that the active component of the mixed Cu-Mn-Cr/γ-alumina catalysts consists of at least six compounds – CuO, Cr2O3, MnO2, Cu1.5Mn1.5O4, Cu1.5Cr1.5O4 and CuCr2O4 – depending on the Cu/Mn/Cr molar ratio. Chemical composition strongly influences the catalytic properties, and this influence varies considerably between the different processes. The rate of CO oxidation decreases rapidly with increasing chromium content in the active component, while the reverse trend was observed for DME. It was concluded that the best compromise is offered by the catalysts with a Cu/(Mn + Cr) molar ratio of 1:5 and a Mn/Cr molar ratio from 1:3 to 1:4.

Keywords: Copper-manganese-chromium oxide catalysts, CO, deep oxidation, volatile organic compounds.
