Search results for: adaptive cluster sampling
Paper Count: 4679

4019 A Two-Pronged Truncated Deferred Sampling Plan for Log-Logistic Distribution

Authors: Braimah Joseph Odunayo, Jiju Gillariose

Abstract:

This paper is aimed at developing a sampling plan that uses information from preceding and succeeding lots for lot disposition, under the assumption that the lifetime of the product follows a log-logistic distribution. A Two-pronged Truncated Deferred Sampling Plan (TTDSP) for the log-logistic distribution is proposed for the case where testing is truncated at a fixed time. The minimum sample sizes are obtained for given values of the Maximum Allowable Percent Defective (MAPD), Test Suspension Ratios (TSR), and acceptance numbers (c). A formula for calculating the operating characteristics of the proposed plan is also developed, and the operating characteristics and mean-ratio values are used to measure the performance of the plan. The findings show that the log-logistic distribution has a decreasing failure rate; as the mean-life ratio increases, the failure rate decreases; and the sample size increases as the acceptance number, test suspension ratio, and maximum allowable percent defective increase. The study concludes that the minimum sample sizes are small, which makes the plan economical to adopt when production cost and time are high and the experiment is destructive.
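As an illustration of the acceptance-probability building block behind such plans (a minimal sketch, not the paper's TTDSP formula; the sample size n, acceptance number c, truncation time, and log-logistic parameters below are illustrative assumptions), one can compute the probability of accepting a lot when at most c of n items fail before the truncation time under a log-logistic lifetime:

```python
# Minimal sketch (not the paper's TTDSP formula): acceptance probability for a
# single-stage truncated life test with a log-logistic lifetime assumption.
from scipy.stats import binom

def loglogistic_cdf(t, alpha, beta):
    """P(lifetime <= t) for a log-logistic distribution with scale alpha, shape beta."""
    return 1.0 / (1.0 + (t / alpha) ** (-beta))

def acceptance_probability(n, c, t, alpha, beta):
    """Probability of accepting the lot: at most c of n items fail before time t."""
    p_fail = loglogistic_cdf(t, alpha, beta)
    return binom.cdf(c, n, p_fail)

if __name__ == "__main__":
    # Illustrative values only: n items, acceptance number c, truncation time t.
    for alpha in (1.0, 2.0, 4.0):      # larger scale corresponds to longer mean life
        pa = acceptance_probability(n=20, c=2, t=1.0, alpha=alpha, beta=2.0)
        print(f"scale alpha={alpha:.1f}: P(accept) = {pa:.3f}")
```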

Keywords: consumer's risk, mean life, minimum sample size, operating characteristics, producer's risk

Procedia PDF Downloads 140
4018 Simulation of a Control System for an Adaptive Suspension System for Passenger Vehicles

Authors: S. Gokul Prassad, S. Aakash, K. Malar Mohan

Abstract:

In the process of coping with the challenges faced by the automobile industry in providing ride comfort, electronics and control systems play a vital role. The control systems in an automobile monitor various parameters and control the performance of the subsystems, thereby providing better handling characteristics. The suspension system is one of the main systems that ensure the safety, stability, and comfort of the passengers, and it is solely responsible for isolating the vehicle from harmful road vibrations. Thus, integrating control systems into the suspension system enhances its performance. The diverse road conditions of India demand an efficient suspension system that can provide optimum ride comfort in all road conditions. For any passenger vehicle, the design of the suspension system plays a very important role in assuring ride comfort and handling characteristics. In recent years, the air suspension system has been preferred over conventional suspension systems to ensure ride comfort. In this article, the ride comfort of the adaptive suspension system is compared with that of the passive suspension system. The model is created in the MATLAB/Simulink environment, and the system is controlled by a proportional-integral-derivative (PID) controller. Tuning of the controller was done with the Particle Swarm Optimization (PSO) algorithm, since it suited the problem best; the Ziegler-Nichols and Modified Ziegler-Nichols tuning methods were also tried and compared. Both the static and dynamic responses of the systems were calculated. Various random road profiles as per the ISO 8608 standard are modelled in the MATLAB environment and their responses plotted. Open-loop and closed-loop responses to the random roads, various bumps, and potholes are also plotted. The simulation results of the proposed design are compared with the available passive suspension system. The obtained results show that the proposed adaptive suspension system is effective in controlling the maximum overshoot, and the settling time of the system is reduced considerably.
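As a rough sketch of the PSO-based PID tuning idea (not the paper's MATLAB/Simulink suspension model; the second-order plant, cost function, and PSO settings below are assumptions for illustration), the gains can be chosen by minimizing the integral of squared error of the closed-loop step response:

```python
# Minimal sketch of PSO-based PID tuning (illustrative plant and settings,
# not the paper's MATLAB/Simulink suspension model).
import numpy as np
from scipy import signal

def closed_loop_step_cost(gains, t=np.linspace(0, 10, 1000)):
    """ISE of the unit-step response for plant G(s)=1/(s^2+s+1) with PID C(s)."""
    kp, ki, kd = gains
    # Closed loop: T(s) = (kd s^2 + kp s + ki) / (s^3 + (1+kd) s^2 + (1+kp) s + ki)
    num = [kd, kp, ki]
    den = [1.0, 1.0 + kd, 1.0 + kp, ki]
    _, y = signal.step(signal.TransferFunction(num, den), T=t)
    return np.trapz((1.0 - y) ** 2, t)

def pso(cost, dim=3, n_particles=20, iters=50, bounds=(0.0, 20.0)):
    """Basic particle swarm optimizer over a box-constrained search space."""
    rng = np.random.default_rng(0)
    x = rng.uniform(bounds[0], bounds[1], (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, *bounds)
        c = np.array([cost(p) for p in x])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = x[better], c[better]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest

if __name__ == "__main__":
    kp, ki, kd = pso(closed_loop_step_cost)
    print(f"PSO-tuned gains: Kp={kp:.2f}, Ki={ki:.2f}, Kd={kd:.2f}")
```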

Keywords: automobile suspension, MATLAB, control system, PID, PSO

Procedia PDF Downloads 294
4017 A Survey of Techniques for Imbalanced Intrusion Detection System Datasets

Authors: Najmeh Abedzadeh, Matthew Jacobs

Abstract:

An intrusion detection system (IDS) is a software application that monitors malicious activities and generates alerts if any are detected. However, most network activities in IDS datasets are normal, and the relatively small number of attacks makes the available data imbalanced. Consequently, cyber-attacks can hide inside a large number of normal activities, and machine learning algorithms have difficulty learning and classifying the data correctly. In this paper, a comprehensive literature review is conducted on different types of algorithms both for implementing the IDS and for correcting the imbalanced IDS dataset. The most prominent approaches are machine learning (ML), deep learning (DL), the synthetic minority over-sampling technique (SMOTE), and reinforcement learning (RL). Most of the research uses the CSE-CIC-IDS2017, CSE-CIC-IDS2018, and NSL-KDD datasets for evaluating the algorithms.
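A minimal sketch of the SMOTE rebalancing step surveyed above, using the imbalanced-learn library on a synthetic stand-in for an IDS dataset (the dataset, class ratio, and classifier are illustrative, not NSL-KDD or CSE-CIC-IDS):

```python
# Minimal sketch: rebalancing an imbalanced dataset with SMOTE before training
# a classifier (synthetic data stands in for an IDS dataset such as NSL-KDD).
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=10000, n_features=20, weights=[0.98, 0.02],
                           random_state=42)  # ~2% minority "attack" class
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

print("before:", Counter(y_train))
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)
print("after: ", Counter(y_res))

clf = RandomForestClassifier(random_state=42).fit(X_res, y_res)
print("test accuracy:", clf.score(X_test, y_test))
```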

Keywords: IDS, imbalanced datasets, sampling algorithms, big data

Procedia PDF Downloads 328
4016 Distributing the Complementary Food Supplement Yingyangbao to Reduce Anemia in Young Children in a County of Sichuan Province after the Wenchuan Earthquake

Authors: Lijuan Wang, Junsheng Huo, Jing Sun, Wenxian Li, Jian Huang, Lin Ling, Yiping Zhou, Chengyu Huang, Jifang Hu

Abstract:

Background and Objective: This study aimed to evaluate the impact of the highly nutrient-dense complementary food supplement Yingyangbao, distributed 3 months after the Wenchuan earthquake, on anemia in young children in a county of Sichuan province. Methods: Young children aged 6-23 months in the county were fed one sachet of Yingyangbao per day, and Yingyangbao was distributed free of charge for 15 months; children reaching 6 months of age were enrolled. The length, weight, and hemoglobin of children aged 6-29 months were assessed by cluster sampling at baseline (n=257) and after 6 months (n=218) and 15 months (n=253) of Yingyangbao intervention. Growth status is not described in this paper. The analysis was conducted for the 6-11, 12-17, 18-23, and 24-29 month age groups. Results: The hemoglobin concentration in the four groups increased by 4.9, 6.4, 8.0, and 9.5 g/L after 6 months and by 12.7, 11.4, 16.7, and 15.7 g/L after 15 months compared with baseline, respectively. The total anemia prevalence in each group was significantly lower after 6 and 15 months than at baseline (P<0.001), except in the 6-11 month group after 6 months because of lower Yingyangbao consumption. The total moderate anemia rate decreased from 18.3% to 5.5% after 6 months and kept decreasing to 0.8% after another 9 months. The hemoglobin concentration was significantly correlated with the amount of Yingyangbao consumed (P<0.001), and the anemia rate differed significantly according to Yingyangbao compliance (P<0.001). Conclusion: Yingyangbao, which contains quality protein, vitamins, and micronutrients, given for 15 months can be effective in improving anemia in young children. The study supports the use of complementary food supplements to reduce anemia in young children in emergencies following natural disasters.

Keywords: young children, anemia, nutrition intervention, complementary food supplements, Yingyangbao

Procedia PDF Downloads 526
4015 Human Resources Management for Temples in Northeastern Thailand

Authors: Routsukol Sunalai

Abstract:

The purpose of this research is to study and compare the administration of Buddhist monks in northeastern Thailand. The sample consisted of 190 monks in the Northeast, selected by simple random sampling. The instrument was a questionnaire of 40 items, and the statistics used for data analysis were percentage, mean, and standard deviation. The research found that human resources management for the Buddhist monks as a whole is at a moderate level; the highest mean was for policy, followed by management information. For monks aged less than 25 years, the overall differences were not significant, and monks with less than 10 years of experience did not differ significantly from those who had held their position for 10 years or more.

Keywords: employee job-related outcomes, ethical institutionalization, quality of work life, stock exchange of Thailand

Procedia PDF Downloads 210
4014 Reflections of a Nocturnal Librarian: Attaining a Work-Life Balance in the Mega-City of Lagos State, Nigeria

Authors: Oluwole Durodolu

Abstract:

The rationale for this study is to explore the adaptive strategies that librarians adopt in performing night shifts in a mega-city like Lagos state. Maslach Burnout Theory will be used to measure the three dimensions of burnout, namely emotional exhaustion, depersonalisation, and personal accomplishment, in order to scrutinise job-related burnout syndrome associated with long-standing, unresolved stress. A qualitative methodology guided by a phenomenological research paradigm, an approach that focuses on the commonality of lived experience within a particular group, will be used, with focus group discussion adopted as the method of data collection from library staff involved in the night shift. Participants for the focus group discussion will be selected using a convenience sampling technique, with staff of the cataloguing unit included in the sample because of the representative characteristics of that unit; this is done to enable readers to understand the phenomenon from close up rather than from a remote perspective. The exploratory interviews, conducted as focus groups, will shed light on issues relating to security, housing, transportation, budgeting, energy supply, employee duties, time management, information access, and sustaining professional levels of service, and on how all these variables affect the productivity of the 149 library staff and their work-life balance.

Keywords: nightshift, work-life balance, mega-city, academic library, Maslach Burnout Theory, Lagos State, University of Lagos

Procedia PDF Downloads 132
4013 The Quality of the Presentation Influences the Audience's Perception

Authors: Gilang Maulana, Dhika Rahma Qomariah, Yasin Fadil

Abstract:

Purpose: This research aims to measure the magnitude of the influence of presentation quality on the target audience's perception in grasping the information presented. Design/Methodology/Approach: This research uses a quantitative method with primary data. The population consists of students of the Economics Faculty of Semarang State University. Purposive sampling was used, and data were collected through a questionnaire administered to 30 respondents; the data were analysed using descriptive analysis. Result: Presentation quality has a positive influence on audience perception, showing that a higher-quality presentation increases the audience's perception. Limitation: Respondents were limited to only 30 people.

Keywords: quality of presentation, presentation, audience, perception, Semarang State University

Procedia PDF Downloads 392
4012 Validation of the Recovery of House Dust Mites from Fabrics by Means of Vacuum Sampling

Authors: A. Aljohani, D. Burke, D. Clarke, M. Gormally, M. Byrne, G. Fleming

Abstract:

Introduction: House dust mites (HDMs) are a source of allergen particles embedded in textiles and furnishings. Vacuum sampling is commonly used to recover and determine the abundance of HDMs, but the efficiency of this method is less than standardized. Here, the efficiency of recovery of HDMs from home-associated textiles was evaluated using vacuum sampling protocols. Methods/Approach: Live mites (LMs) or dead mites (DMs) of the house dust mite Dermatophagoides pteronyssinus (FERA, UK) were separately seeded onto the surfaces of Smooth Cotton, Denim, and Fleece (25 mites per 10x10 cm² square) and left for 10 minutes before vacuuming. Fabrics were vacuumed (SKC Flite 2 pump) at a flow rate of 14 L/min for 60, 90, or 120 seconds, and the number of mites retained by the filter unit (0.4 μm x 37 mm) was determined. Vacuuming was carried out in a linear direction (Protocol 1) or in a multidirectional pattern (Protocol 2). Additional fabrics with LMs were also frozen and then thawed, thereby euthanizing the live mites (now termed EMs). Results/Findings: While recovery of mites was significantly greater (p=0.000, 76% greater) from fabrics seeded with DMs than with LMs irrespective of vacuuming protocol or fabric type, the efficiency of recovery of DMs (72%-76%) did not vary significantly between fabrics. For fabrics containing EMs, recovery was greatest for Smooth Cotton and Denim (65-73% recovered) and least for Fleece (15% recovered). There was no significant difference (p=0.99) between the recovery of mites across all three mite categories from Smooth Cotton and Denim, but significantly fewer (p=0.000) mites were recovered from Fleece. Scanning electron microscopy images of HDM-seeded fabrics showed that live mites burrowed deeply into the Fleece weave, which reduced their efficiency of recovery by vacuuming. Research Implications: The results presented here have implications for the recovery of HDMs by vacuuming and for the choice of fabric to ameliorate HDM-dust sensitization.

Keywords: allergy, asthma, dead, fabric, fleece, live mites, sampling

Procedia PDF Downloads 139
4011 Workflow Based Inspection of Geometrical Adaptability from 3D CAD Models Considering Production Requirements

Authors: Tobias Huwer, Thomas Bobek, Gunter Spöcker

Abstract:

Driving forces for enhancements in production are trends like digitalization and individualized production. Currently, such developments are restricted to assembly parts; thus, complex freeform surfaces are not addressed in this context. The need for efficient use of resources and near-net-shape production will require individualized production of complex shaped workpieces. Due to variations between the nominal model and the actual geometry, this can lead to changes in operations in computer-aided process planning (CAPP) to make CAPP manageable for adaptive serial production. In this context, 3D CAD data can be a key to realizing that objective. Along with developments in geometrical adaptation, a preceding inspection method based on CAD data is required to support the process planner by finding objective criteria for decisions about the adaptive manufacturability of workpieces. Nowadays, such decisions depend on the experience-based knowledge of humans (e.g. process planners) and are therefore subjective, leading to variability in workpiece quality and potential failure in production. In this paper, we present an automatic part inspection method, based on design and measurement data, which evaluates the actual geometries of individual workpiece preforms. The aim is to automatically determine the suitability of the current shape for further machining and to provide a basis for an objective decision about subsequent adaptive manufacturability. The proposed method is realized by a workflow-based approach, keeping in mind the requirements of industrial applications. Workflows are a well-known design method for standardized processes; especially in applications like the aerospace industry, standardization and certification of processes are an important aspect. Function blocks, providing a standardized, event-driven abstraction of algorithms and data exchange, are used for modeling and execution of the inspection workflows. Each analysis step of the inspection, such as positioning of measurement data or checking of geometrical criteria, is carried out by a function block. One advantage of this approach is its flexibility in designing workflows and adapting algorithms specific to the application domain. In general, it is checked whether a geometrical adaption is possible within the specified tolerance range. The development of particular function blocks is predicated on workpiece-specific information, e.g. design data. Furthermore, appropriate logics and decision criteria have to be considered for different product lifecycle phases; for example, tolerances for geometric deviations differ in type and size between new-part production and repair processes. In addition to function blocks, appropriate referencing systems are important: they need to support exact determination of the position and orientation of the actual geometries to provide a basis for precise analysis. The presented approach provides an inspection methodology for adaptive and part-individual process chains. The analysis of each workpiece results in an inspection protocol and an objective decision about further manufacturability. A representative application domain is the product lifecycle of turbine blades, comprising new-part production and a maintenance process; in both cases, a geometrical adaptation is required to calculate individual production data. In contrast to existing approaches, the proposed initial inspection method provides information to decide between different potential adaptive machining processes.

Keywords: adaptive, CAx, function blocks, turbomachinery

Procedia PDF Downloads 297
4010 Climate Adaptive Building Shells for Plus-Energy-Buildings, Designed on Bionic Principles

Authors: Andreas Hammer

Abstract:

Six distinctive architectural designs from Frankfurt University are discussed in this paper, and the future potential of their adaptable facades, implemented with solar thin-film sheets, is shown acting and reacting to the climatic and solar changes of their specific sites. The different aspects, as well as limitations with regard to technical and functional restrictions, are named. The design process for a "multi-purpose building", a "high-rise building refurbishment" and a "biker's lodge" in the Rhine river valley has been critically outlined and developed step by step, from an international student project towards an overall energy strategy that, firstly, had to push the design to a plus-energy building and, secondly, had to incorporate bionic aspects into the design of the building skins. Both main parameters needed to be reviewed and refined during the whole design process. Various basic bionic approaches were given (e.g. solar ivy™, flectofin™ or hygroskin™) to experiment with, regarding the use of bendable photovoltaic thin-film elements as parts of a hybrid, kinetic façade system.

Keywords: bionic and bioclimatic design, climate adaptive building shells [CABS], energy-strategy, harvesting façade, high-efficiency building skin, photovoltaic in building skins, plus-energy-buildings, solar gain, sustainable building concept

Procedia PDF Downloads 430
4009 Biosignal Recognition for Personal Identification

Authors: Hadri Hussain, M.Nasir Ibrahim, Chee-Ming Ting, Mariani Idroas, Fuad Numan, Alias Mohd Noor

Abstract:

A biometric security system has become an important application in client identification and verification systems. A conventional biometric system is normally based on a unimodal biometric that depends on either behavioural or physiological information for authentication purposes. Behavioural biometrics depend on human body signals such as speech, while biosignal biometrics include the electrocardiogram (ECG) and the phonocardiogram or heart sound (HS). The speech signal is commonly used in biometric recognition systems, while the ECG and the HS have been used to identify a person's diseases uniquely related to their cluster. However, a conventional biometric system is liable to spoofing attacks that affect the performance of the system. Therefore, a multimodal biometric security system is developed, based on the biometric signals of ECG, HS, and speech. The biosignal data involved in the biometric system are initially segmented, and for each segment the Mel Frequency Cepstral Coefficients (MFCC) method is exploited to extract features. A Hidden Markov Model (HMM) is used to model each client and to classify the unknown input with respect to the models. The recognition system involves training and testing sessions, known as client identification (CID). In this project, twenty clients were tested with the developed system. The best overall performance at 44 kHz was 93.92% for ECG, and the worst overall performance was 88.47%, also for ECG. The results were compared, as the number of clients was incremented, to the best overall performance at 44 kHz for 20 clients, which was 90.00% for HS, with the worst overall performance falling to 79.91% for ECG. It can be concluded that the different multimodal biometrics have a substantial effect on the performance of the biometric system and that, with the increment of data, even with higher frequency sampling, the performance still decreased slightly, as predicted.
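A minimal sketch of the per-client MFCC-plus-HMM modeling described above, using librosa for MFCC extraction and hmmlearn for the Gaussian HMM (the signals, sampling rate, and model sizes below are placeholders, not the project's ECG/HS/speech data):

```python
# Minimal sketch of the MFCC + HMM client-identification idea described above.
# Signals, sampling rate, and model sizes are illustrative placeholders.
import numpy as np
import librosa
from hmmlearn import hmm

def mfcc_features(signal, sr=44100, n_mfcc=13):
    """MFCC feature matrix, one row per analysis frame."""
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc).T

def train_client_model(signals, sr=44100):
    """Fit one GaussianHMM per client on the concatenated MFCC frames."""
    feats = [mfcc_features(s, sr) for s in signals]
    lengths = [f.shape[0] for f in feats]
    model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=100)
    model.fit(np.vstack(feats), lengths)
    return model

def identify(unknown_signal, client_models, sr=44100):
    """Return the client whose HMM gives the highest log-likelihood."""
    feats = mfcc_features(unknown_signal, sr)
    return max(client_models, key=lambda cid: client_models[cid].score(feats))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake 1-second recordings; real use would load ECG/HS/speech segments.
    clients = {cid: [rng.standard_normal(44100).astype(np.float32) for _ in range(3)]
               for cid in ("client_A", "client_B")}
    models = {cid: train_client_model(sigs) for cid, sigs in clients.items()}
    probe = rng.standard_normal(44100).astype(np.float32)
    print("identified as:", identify(probe, models))
```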

Keywords: electrocardiogram, phonocardiogram, hidden Markov model, mel frequency cepstral coefficients, client identification

Procedia PDF Downloads 280
4008 The Impact of Autonomous Driving on Cities of the Future: A Literature Review

Authors: Maximilian A. Richter

Abstract:

Public authorities need to understand the role and impacts of autonomous vehicles (AVs) on the mobility system. At present, however, research shows that the projected impact of AVs on cities varies. As a consequence, it is difficult to make recommendations to policymakers on how they should prepare for the future when so much remains unknown about this technology. The study aims to provide an overview of the literature on how autonomous vehicles will affect the cities and traffic of the future. To this purpose, the most important studies are first selected and their results summarized. It is then clarified which advantages AVs have for cities and how they can lead to an improvement in the current problems and challenges of cities. To achieve the research aim and objectives, this paper takes a literature review approach. In a first step, the most important studies are extracted; this is limited to studies that are peer-reviewed and have been published in high-ranking journals such as the Journal of Transportation: Part A. In step 2, the most important key performance indicators (KPIs), such as traffic volume or energy consumption, are selected from the literature. Because different terms are used in the literature for similar statements and KPIs, these must first be clustered; for each cluster, the reported changes from the respective studies are compiled, along with their survey methodologies. In step 3, a sensitivity analysis is performed per cluster, analysing how the different studies arrive at their findings and on which assumptions, scenarios, and methods these calculations are based. From the results of the sensitivity analysis, the success factors for the implementation of autonomous vehicles are derived, and statements are made about the conditions under which AVs can be successful.

Keywords: autonomous vehicles, city of the future, literature review, traffic simulations

Procedia PDF Downloads 106
4007 The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos

Authors: Nassima Noufail, Sara Bouhali

Abstract:

In this work, we develop a semi-supervised solution for action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips using the K-means algorithm; the goal is to find groups based on similarity within the video. Applying K-means clustering to all the frames is time-consuming; therefore, we start by identifying transition frames, where the scene in the video changes significantly, and then apply K-means clustering to these transition frames. We use two image filters, the Gaussian filter and the Laplacian of Gaussian. Each filter extracts a set of features from the frames: the Gaussian filter blurs the image and removes the higher frequencies, while the Laplacian of Gaussian detects regions of rapid intensity change. This vector of filter responses is used as the input to our K-means algorithm, whose output is a set of cluster centers. Each video frame pixel is then mapped to the nearest cluster center and painted with a corresponding color to form a visual map in which similar pixels are grouped. We then compute a cluster score indicating how near clusters are to each other and plot a signal of frame number versus clustering score. Our hypothesis is that the evolution of this signal does not change while semantically related events are happening in the scene. We mark the breakpoints at which the root-mean-square level of the signal changes significantly, and each breakpoint indicates the beginning of a new video segment. In the second part, for each segment from part 1, we randomly select a 16-frame clip and extract spatiotemporal features using the convolutional 3D network C3D with a pre-trained model. The final C3D output is a 512-dimensional feature vector, so we use principal component analysis (PCA) for dimensionality reduction. The final part is classification: the C3D feature vectors are used as input to a multi-class linear support vector machine (SVM) to train the model, and the multi-class classifier is used to detect the action. We evaluated the approach on the UCF101 dataset, which consists of 101 human action categories, and achieved an accuracy that outperforms the state of the art by 1.2%.
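A minimal sketch of the per-frame visual-map step described above, clustering per-pixel Gaussian and Laplacian-of-Gaussian filter responses with K-means (the frame data and parameters are placeholders):

```python
# Minimal sketch of the per-frame visual-map step described above:
# per-pixel Gaussian / Laplacian-of-Gaussian responses clustered with K-means.
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def visual_map(frame, n_clusters=5, sigma=2.0):
    """Cluster pixels of a grayscale frame by their filter responses."""
    gauss = ndimage.gaussian_filter(frame, sigma=sigma)   # low-pass response
    log = ndimage.gaussian_laplace(frame, sigma=sigma)    # rapid intensity changes
    features = np.stack([gauss.ravel(), log.ravel()], axis=1)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
    return labels.reshape(frame.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((120, 160))       # placeholder grayscale frame
    vmap = visual_map(frame)
    print("cluster map shape:", vmap.shape, "clusters used:", np.unique(vmap))
```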

Keywords: video segmentation, action detection, classification, K-means, C3D

Procedia PDF Downloads 77
4006 Phonological Characteristics of Severe to Profound Hearing Impaired Children

Authors: Akbar Darouie, Mamak Joulaie

Abstract:

Given the importance of phonological skill development and its influence on other aspects of language, this study was performed. The objective was to determine some phonological indexes in children with hearing impairment and compare them with hearing children. A convenience sample was selected from a rehabilitation center and a kindergarten in Karaj, Iran. Participants consisted of 12 hearing-impaired and 12 hearing children (age range: 5 years and 6 months to 6 years and 6 months). The hearing-impaired children had severe to profound hearing loss; three of them had cochlear implants and the others wore hearing aids. Conversational speech of these children was recorded, and the first 50 utterances were selected for analysis. The percentage of consonants correct (PCC) and vowels correct (PVC), initial and final consonant omission errors, consonant cluster omission errors, and syllabic structure variety were compared between the two groups. Data were analyzed with a t-test (SPSS version 16). The comparison of PCC and PVC averages between the two groups showed a significant difference (P<0.01). There were also significant differences in final consonant omission errors (P<0.001) and initial consonant omission errors (P<0.01), and the differences between the two groups in consonant cluster omission were significant (P<0.001). Furthermore, some changes were seen in the syllabic structures of children with hearing impairment compared to the typical group. This study demonstrates some phonological differences between the two groups of children in the Farsi language; therefore, it seems that in clinical practice we must attend to this issue.

Keywords: hearing impairment, phonology, vowel, consonant

Procedia PDF Downloads 244
4005 Non-Coplanar Nuclei in Heavy-Ion Reactions

Authors: Sahila Chopra, Hemdeep, Arshdeep Kaur, Raj K. Gupta

Abstract:

In recent times, we noticed an interesting and important role of the non-coplanar degree-of-freedom (Φ ≠ 0°) in heavy ion reactions. Using the dynamical cluster-decay model (DCM) with the Φ degree-of-freedom included, we have studied three compound systems, 246Bk∗, 164Yb∗ and 105Ag∗. Here, within the DCM with pocket formula for the nuclear proximity potential, we look for the effects of including compact, non-coplanar configurations (Φc ≠ 0°) on the non-compound nucleus (nCN) contribution in the total fusion cross section σfus. For 246Bk∗, formed in the 11B+235U and 14N+232Th reaction channels, the DCM with coplanar nuclei (Φc = 0°) shows an nCN contribution for the 11B+235U channel, but none for the 14N+232Th channel, which on including Φ gives both reaction channels as pure compound nucleus decays. In the case of 164Yb∗, formed in 64Ni+100Mo, the small nCN effects for Φ = 0° are reduced to almost zero for Φ ≠ 0°. Interestingly, however, 105Ag∗ for Φ = 0° shows a small nCN contribution, which gets strongly enhanced for Φ ≠ 0°, such that the characteristic property of PCN presents a change of behaviour, like that of a strongly fissioning superheavy element to a weakly fissioning nucleus; note that 105Ag∗ is a weakly fissioning nucleus and Psurv behaves like one for a weakly fissioning nucleus for both Φ = 0° and Φ ≠ 0°. Apparently, Φ is presenting itself as a good degree-of-freedom in the DCM.

Keywords: dynamical cluster-decay model, fusion cross sections, non-compound nucleus effects, non-coplanarity

Procedia PDF Downloads 302
4004 A Case Study on the Drivers of Household Water Consumption for Different Socio-Economic Classes in Selected Communities of Metro Manila, Philippines

Authors: Maria Anjelica P. Ancheta, Roberto S. Soriano, Erickson L. Llaguno

Abstract:

The main purpose of this study is to examine whether there is a significant relationship between socio-economic class and household water demand, by determining or verifying the factors governing household water consumption patterns in a sample drawn from different socio-economic classes in Metro Manila, the national capital region of the Philippines. This study is also an opportunity to address the scarcity of local academic literature, as very few publications on urban household water demand have appeared since 1999. A rapid survey was conducted in over 600 Metro Manila households on their average monthly water consumption and household water-use habits. The questions in the rapid survey were based on an extensive review of the literature on urban household water demand. Sample households were divided into socio-economic classes A-B and C-D. Cluster analysis, dummy coding, and outlier tests were performed to prepare the data for regression analysis. Subsequently, backward stepwise regression analysis was used to determine statistical models describing the determinants of water consumption. The key finding of this study is that the socio-economic class of a household in Metro Manila is a significant factor in water consumption. A-B households consume more water than C-D households: the mean monthly water consumption for A-B and C-D households was 36.75 m³ and 18.92 m³, respectively. The most significant proxy factors of socio-economic class related to household water consumption were examined in order to suggest improvements in policy formulation and household water demand management.

Keywords: household water uses, socio-economic classes, urban planning, urban water demand management

Procedia PDF Downloads 302
4003 Finding the Longest Common Subsequence in Normal DNA and Disease Affected Human DNA Using Self Organizing Map

Authors: G. Tamilpavai, C. Vishnuppriya

Abstract:

Bioinformatics is an active research area that combines biology and computer science. The longest common subsequence (LCSS) is one of the major challenges in various bioinformatics applications. The computation of the LCSS plays a vital role in biomedicine, and it is an essential task in DNA sequence analysis in genetics, including a wide range of disease-diagnosis steps. The objective of the proposed system is to find the longest common subsequence present in normal and disease-affected human DNA sequences using a Self Organizing Map (SOM) and LCSS. The human DNA sequences are collected from the National Center for Biotechnology Information (NCBI) database. Initially, each human DNA sequence is separated into k-mers using a k-mer separation rule. Mean and median values are calculated for each separated k-mer. These calculated values are fed as input to the Self Organizing Map for clustering. The obtained clusters are then given to the Longest Common Subsequence (LCSS) algorithm to find the common subsequences present in every cluster. It returns n(n-1)/2 subsequences for each cluster, where n is the number of k-mers in that cluster. Experimental outcomes of the proposed system produce the possible longest common subsequences of normal and disease-affected DNA data; thus, the proposed system will be a good aid for finding disease-causing sequences. Finally, a performance analysis is carried out for different DNA sequences, and the obtained values show that the retrieval of the LCSS is done in a shorter time than in the existing system.
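A minimal sketch of the longest-common-subsequence building block used above, implemented by standard dynamic programming over two DNA strings (the SOM clustering and k-mer separation steps are not reproduced, and the example sequences are illustrative):

```python
# Minimal sketch of the longest-common-subsequence building block used above
# (dynamic programming over two DNA strings; the SOM clustering step is omitted).
def longest_common_subsequence(a: str, b: str) -> str:
    n, m = len(a), len(b)
    # dp[i][j] = length of the LCS of a[:i] and b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack to recover one LCS.
    out, i, j = [], n, m
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

if __name__ == "__main__":
    # Prints one longest common subsequence (length 5) of the two example strings.
    print(longest_common_subsequence("ACGGTAC", "ACTGAC"))
```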

Keywords: clustering, k-mers, longest common subsequence, SOM

Procedia PDF Downloads 267
4002 A Simple Adaptive Atomic Decomposition Voice Activity Detector Implemented by Matching Pursuit

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

A simple adaptive voice activity detector (VAD) is implemented using Gabor and gammatone atomic decomposition of speech for high-Gaussian-noise environments. Matching pursuit is used for the atomic decomposition and is shown to achieve optimal speech detection capability at high data compression rates for low signal-to-noise ratios. The most active dictionary elements found by matching pursuit are used for the signal reconstruction, so that the algorithm adapts to the individual speaker's dominant time-frequency characteristics. Speech has a high peak-to-average ratio, enabling matching pursuit's greedy heuristic of selecting the highest inner products to isolate high-energy speech components in high-noise environments. Gabor and gammatone atoms are both investigated, with identical logarithmically spaced center frequencies and similar bandwidths. The algorithm performs equally well for both Gabor and gammatone atoms, with no significant statistical differences. The algorithm achieves 70% accuracy at a 0 dB SNR, 90% accuracy at a 5 dB SNR, and 98% accuracy at a 20 dB SNR, using 30 dB SNR as a reference for voice activity.
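A stripped-down sketch of the matching pursuit loop over a Gabor dictionary (full-length atoms without time shifts; the gammatone dictionary, frame handling, and the VAD decision rule are omitted, and all parameters are illustrative):

```python
# Minimal sketch of matching pursuit with a Gabor atom dictionary
# (the gammatone dictionary and the VAD decision logic are omitted).
import numpy as np

def gabor_atom(n, center_freq, sigma, fs):
    """Unit-norm Gabor atom: Gaussian-windowed cosine at center_freq Hz."""
    t = (np.arange(n) - n // 2) / fs
    atom = np.exp(-(t ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * center_freq * t)
    return atom / np.linalg.norm(atom)

def matching_pursuit(x, dictionary, n_iter=20):
    """Greedily pick the atom with the largest inner product, subtract, repeat."""
    residual = x.astype(float).copy()
    reconstruction = np.zeros_like(residual)
    for _ in range(n_iter):
        inner = dictionary @ residual
        k = np.argmax(np.abs(inner))
        reconstruction += inner[k] * dictionary[k]
        residual -= inner[k] * dictionary[k]
    return reconstruction, residual

if __name__ == "__main__":
    fs, n = 8000, 512
    freqs = np.geomspace(100, 3500, 40)        # logarithmically spaced center frequencies
    D = np.array([gabor_atom(n, f, 0.01, fs) for f in freqs])
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.random.default_rng(0).standard_normal(n)
    recon, res = matching_pursuit(x, D, n_iter=10)
    print("residual energy ratio:", np.sum(res ** 2) / np.sum(x ** 2))
```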

Keywords: atomic decomposition, gabor, gammatone, matching pursuit, voice activity detection

Procedia PDF Downloads 290
4001 Heritability and Diversity Analysis of Blast Resistant Upland Rice Genotypes Based on Quantitative Traits

Authors: Mst. Tuhina-Khatun, Mohamed Hanafi Musa, Mohd Rafii Yosup, Wong Mui Yun, Md. Aktar-Uz-Zaman, Mahbod Sahebi

Abstract:

Rice is a staple crop of economic importance for most Asian people, and blast is the major constraint on higher yield. Heritability of plant traits helps plant breeders to make appropriate selections and to assess the magnitude of genetic improvement achievable through hybridization. Diversity of crop plants is necessary to manage continuing genetic erosion and to address the issues of genetic conservation needed to meet future food requirements. Therefore, an experiment was conducted to estimate heritability and to determine the diversity of 27 blast-resistant upland rice genotypes based on 18 quantitative traits, using a randomized complete block design. Heritability values varied from 38 to 93%. The lowest heritability belonged to the trait total number of tillers per plant (38%); in contrast, number of filled grains per panicle and yield per plant (g) recorded the highest heritability values, 93% and 91%, respectively. Cluster analysis based on the 18 traits grouped the 27 rice genotypes into six clusters. Cluster I was the largest, comprising 17 genotypes and accounting for about 62.96% of the total population. The multivariate analysis suggested that the genotype 'Chokoto 14' could be hybridized with 'IR 5533-55-1-11' and 'IR 5533-PP 854-1' to broaden the gene pool of blast-resistant upland rice germplasm for yield and other favorable characters.

Keywords: blast resistant, diversity analysis, heritability, upland rice

Procedia PDF Downloads 369
4000 A New Approach to the Digital Implementation of Analog Controllers for a Power System Control

Authors: G. Shabib, Esam H. Abd-Elhameed, G. Magdy

Abstract:

In this paper, a comparison of discrete-time PID and PSS controllers is presented through the small-signal stability of a power system comprising one machine connected to an infinite bus. The comparison is achieved by using a new discretization approach, which converts the s-domain model of the analog controllers to a z-domain model to enhance the damping of the single-machine power system. The new method utilizes the Plant Input Mapping (PIM) algorithm. The proposed algorithm is stable for any sampling rate and takes the closed-loop characteristics into consideration. On the other hand, traditional discretization methods such as Tustin's method produce satisfactory results only when the sampling period is sufficiently small.
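For comparison, a minimal sketch of the traditional Tustin (bilinear) discretization mentioned above, applied to an analog PID-with-filter transfer function using SciPy (the gains, filter constant, and sampling periods are illustrative; the PIM algorithm itself is not reproduced here):

```python
# Minimal sketch of the Tustin (bilinear) baseline mentioned above, applied to an
# analog PID-with-filter transfer function; gains and sampling periods are illustrative.
# (The paper's Plant Input Mapping discretization is not reproduced here.)
from scipy.signal import cont2discrete

kp, ki, kd, tau = 2.0, 1.0, 0.5, 0.01   # PID gains and derivative filter constant
# C(s) = kp + ki/s + kd*s/(tau*s + 1)  ->  one rational transfer function
num = [kp * tau + kd, kp + ki * tau, ki]
den = [tau, 1.0, 0.0]

for dt in (0.01, 0.5):                   # small vs. large sampling period
    numz, denz, _ = cont2discrete((num, den), dt, method="bilinear")
    print(f"dt={dt}: numerator {numz.ravel().round(4)}, denominator {denz.round(4)}")
```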

Keywords: power system stabilizer (PSS), proportional-integral-derivative (PID), plant input mapping (PIM)

Procedia PDF Downloads 505
3999 Computational Analysis of Variation in Thrust of Oblique Detonation Ramjet Engine With Adaptive Inlet

Authors: Aditya, Ganapati Joshi, Vinod Kumar

Abstract:

In the modern-warfare era, the prime requirements are high speed and high Mach number. When missiles strike in the hypersonic regime, the opponent can detect them with an anti-defense system but cannot stop them from causing damage. To achieve speeds of this level, two engines are available that can work in this region: the ramjet and the scramjet. The problem with the ramjet begins when the Mach number exceeds 4, as the static pressure at the inlet becomes equal to the exit pressure. The scramjet engine deals with this problem: it has nearly the same working principle, but the flow is not slowed down in the diffuser as much as in the ramjet. However, it suffers from problems such as inlet buzz, thermal choking, mixing of fuel and oxidizer, thermal heating, and many more. Here, a new engine is developed on the same principle as the scramjet engine, but combustion occurs by detonation instead of deflagration. The problem with this engine arises when the Mach number is variable while the inlet geometry is fixed, which leads to inlet spillage and adversely affects the thrust. Therefore, an adaptive inlet made of shape memory alloys is used, which enhances the inlet mass flow rate as well as the thrust.

Keywords: detonation, ramjet engine, shape memory alloy, ignition delay, shock-boundary layer interaction, eddy dissipation, asymmetric nozzle

Procedia PDF Downloads 102
3998 Two-Phase Sampling for Estimating a Finite Population Total in Presence of Missing Values

Authors: Daniel Fundi Murithi

Abstract:

Missing data is a real bane in many surveys. To overcome the problems caused by missing data, partial deletion and single imputation methods, among others, have been proposed. However, problems such as discarding usable data and inaccuracy in reproducing known population parameters and standard errors are associated with them. For regression and stochastic imputation, it is assumed that there is a variable with complete cases to be used as a predictor in estimating missing values in the other variable, and that the relationship between the two variables is linear, which might not be realistic in practice. In this project, we estimate a finite population total in the presence of missing values in two-phase sampling. Instead of regression or stochastic models, a non-parametric model-based regression model is used to impute the missing values. An empirical study showed that nonparametric model-based regression imputation reproduces the variance of the population total estimate obtained when there were no missing values better than the mean, median, regression, and stochastic imputation methods. Although regression and stochastic imputation were better than nonparametric model-based imputation at reproducing the population total estimates obtained with no missing values for one of the sample sizes considered, nonparametric model-based imputation may be preferred when the relationship between the outcome and predictor variables is not linear.
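A minimal sketch of the model-based imputation idea before estimating a finite population total, with a k-nearest-neighbours regressor standing in for the (unspecified) nonparametric regression model; the data, missingness rate, and choice of KNN are assumptions for illustration:

```python
# Minimal sketch of model-based imputation before estimating a finite population
# total. A k-nearest-neighbours regressor stands in for the (unspecified)
# nonparametric regression model; the data and sample design are illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(0, 10, n)                       # auxiliary variable (fully observed)
y = 5 + np.sin(x) * x + rng.normal(0, 1, n)     # study variable (nonlinear in x)

missing = rng.random(n) < 0.3                   # ~30% of y values missing
y_obs = y.copy()
y_obs[missing] = np.nan

# Fit the nonparametric model on complete cases, predict the missing cases.
model = KNeighborsRegressor(n_neighbors=10)
model.fit(x[~missing].reshape(-1, 1), y_obs[~missing])
y_imputed = y_obs.copy()
y_imputed[missing] = model.predict(x[missing].reshape(-1, 1))

print("true total        :", y.sum().round(1))
print("KNN-imputed total :", y_imputed.sum().round(1))
print("mean-imputed total:", np.where(missing, np.nanmean(y_obs), y_obs).sum().round(1))
```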

Keywords: finite population total, missing data, model-based imputation, two-phase sampling

Procedia PDF Downloads 131
3997 Layouting for Phase II of New Priok Project Using Adaptive Port Planning Frameworks

Authors: Mustarakh Gelfi, Poonam Taneja, Tiedo Vellinga, Delon Hamonangan

Abstract:

The initial masterplan of New Priok in the Port of Tanjung Priok, developed in 2012, is being updated to cater to new developments and new demands. In the new masterplan (2017), Phase II of development will start from 2035 onwards, depending on future conditions. This study is about creating a robust masterplan for Phase II that will remain functional under future uncertainties. The methodology applied in this study is scenario-based planning within the framework of Adaptive Port Planning (APP). Scenario-based planning helps to open up the perspective of the future as a horizon of possibilities. The scenarios are built around two major uncertainties in a 2x2 matrix approach. The two major uncertainties for New Priok port are economics and sustainability awareness. The outcome is four plausible scenarios: Green Port, Business As Usual, Moderate Expansion, and No Expansion. Terminal needs in each scenario are analyzed through traffic analysis and identification of the key cargos and commodities. In conclusion, this study gives a wide perspective for the Port of Tanjung Priok for planning Phase II of the development. The port has to realize that uncertainties persist and are very likely to influence decision-making on future layouts. Instead of ignoring uncertainty, the port needs to make action plans to deal with these uncertainties.

Keywords: Indonesia Port, port's layout, port planning, scenario-based planning

Procedia PDF Downloads 534
3996 Cloning and Expression of Human Interleukin 15: A Promising Candidate for Cytokine Immunotherapy

Authors: Sadaf Ilyas

Abstract:

Recombinant cytokines have been employed successfully as potential therapeutic agents. Some cytokine therapies are already part of clinical practice, ranging from early exploratory trials to well-established therapies that have received approval. Interleukin 15 is a pleiotropic cytokine with multiple roles in peripheral innate and adaptive immune cell function. It regulates the activation, proliferation, and maturation of NK cells, T-cells, monocytes/macrophages, and granulocytes, and the interactions between them, thus acting as a bridge between innate and adaptive immune responses. Unraveling the biology of IL-15 has revealed some interesting surprises that may point toward some of the first therapeutic applications for this cytokine. In this study, the human interleukin 15 gene was isolated, amplified, and ligated into a TA vector, which was then transformed into a bacterial host, E. coli Top10F'. The sequence of the cloned gene was confirmed and showed 100% homology with the reported sequence. The confirmed gene was then subcloned into the pET expression system to study the IPTG-induced expression of the IL-15 gene. Positive expression was obtained for a number of clones that showed the 15 kDa band of IL-15 in SDS-PAGE analysis, indicating successful strain development that can be studied further to assess the potential therapeutic use of this cytokine in human disease.

Keywords: Interleukin 15, pET expression system, immune therapy, protein purification

Procedia PDF Downloads 413
3995 Air Pollution from Volatile Metals and Acid Gases

Authors: F. Ait Ahsene-Aissat, Y. Kerchiche, Y. Moussaoui, M. Hachemi

Abstract:

Environmental pollution is at the heart of the debate today; the pollutants released into the atmosphere must be measured and reduced to international emission standards. Industrial pollution is caused by emissions of SO₂, CO, and heavy metals in volatile form, which must be quantified and monitored. This study presents a qualitative and quantitative analysis of such emissions; the collection of volatile heavy metals was performed by active sampling using an isokinetic sampler. The maximum SO₂ concentration reached 343 mg/m³, far exceeding the SO₂ emission standard applied to incineration industries in Algeria. The Cr concentration exceeds the standard by a factor of 8, Pb by a factor of 6, Fe by a factor of 30 (reaching very high values), Zn by a factor of 5, Ni by a factor of 4, and Cu is almost double the standard.

Keywords: SO₂, CO, volatiles metals, active sampling isokinetic

Procedia PDF Downloads 297
3994 Designing Floor Planning in 2D and 3D with an Efficient Topological Structure

Authors: V. Nagammai

Abstract:

Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining thousands of transistors into a single chip. Technological development increases the complexity of IC manufacturing, which may affect power consumption and increase size and latency. Topology defines the connections within a network. In this project, the NoC topology is generated using the Atlas tool, which increases performance and, in turn, makes the determination of constraints effective. Routing is performed by the XY routing algorithm with wormhole flow control. In NoC topology generation, the values of power, area, and latency are predetermined. In previous work, placement, routing, and shortest-path evaluation were performed using an algorithm called floor planning with cluster reconstruction and path allocation (FCRPA), using four 3x3 switches, six 4x4 switches, and two 5x5 switches. The use of the 4x4 and 5x5 switches increases the power consumption and area of the block. To avoid this problem, this paper uses one 8x8 switch and four 3x3 switches. This paper uses IPRCA, which consists of three steps: placement, clustering, and shortest-path evaluation. Placement is performed using min-cut placement, clustering is performed using a cluster generation algorithm, and the shortest path is evaluated using Dijkstra's algorithm. The power consumption of each block is determined. The experimental results show that area, power, and wire length are improved simultaneously.
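A minimal sketch of the Dijkstra shortest-path step described above, run on an illustrative switch-connectivity graph (node names and link costs are placeholders, not the paper's NoC instance):

```python
# Minimal sketch of the Dijkstra shortest-path step described above, run on an
# illustrative switch-connectivity graph (node names and link costs are placeholders).
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, cost), ...]}. Returns the shortest distance to each node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

if __name__ == "__main__":
    noc = {  # one central switch (S0) connected to four smaller switches
        "S0": [("S1", 1), ("S2", 1), ("S3", 2), ("S4", 2)],
        "S1": [("S0", 1), ("S2", 1)],
        "S2": [("S0", 1), ("S1", 1), ("S3", 1)],
        "S3": [("S0", 2), ("S2", 1), ("S4", 1)],
        "S4": [("S0", 2), ("S3", 1)],
    }
    print(dijkstra(noc, "S1"))
```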

Keywords: application specific noc, b* tree representation, floor planning, t tree representation

Procedia PDF Downloads 393
3993 RGB Color Based Real Time Traffic Sign Detection and Feature Extraction System

Authors: Kay Thinzar Phu, Lwin Lwin Oo

Abstract:

In intelligent transport systems and advanced driver assistance systems, the development of real-time traffic sign detection and recognition (TSDR) systems plays an important part in current research. There are many challenges in developing a real-time TSDR system due to motion artifacts, variable lighting and weather conditions, and the condition of the traffic signs themselves. Researchers have already proposed various methods to minimize these challenges. The aim of the proposed research is to develop an efficient and effective real-time TSDR system. This system proposes an adaptive thresholding method based on RGB color for traffic sign detection and new features for traffic sign recognition. In this system, RGB color thresholding is used to detect blue and yellow traffic sign regions. The system performs shape identification to decide whether a candidate region is a traffic sign or not. Lastly, new features such as termination points, bifurcation points, and 90° angles are extracted from the validated image. This system uses the Myanmar Traffic Sign dataset.
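A minimal sketch of RGB-range thresholding for blue and yellow sign candidates followed by contour extraction, using OpenCV (the color ranges and image path are placeholders; the adaptive threshold adjustment and the new recognition features are not reproduced here):

```python
# Minimal sketch of RGB-range thresholding for blue/yellow sign candidates followed
# by contour extraction. The ranges and image path are placeholders; the adaptive
# threshold adjustment and the recognition features are not reproduced here.
import cv2

def candidate_regions(bgr_image, min_area=300):
    """Return bounding boxes of blue-ish and yellow-ish regions."""
    # OpenCV loads images as BGR, so the ranges below are (B, G, R).
    masks = {
        "blue":   cv2.inRange(bgr_image, (100, 0, 0),  (255, 120, 80)),
        "yellow": cv2.inRange(bgr_image, (0, 120, 120), (100, 255, 255)),
    }
    boxes = []
    for color, mask in masks.items():
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) >= min_area:
                boxes.append((color, cv2.boundingRect(c)))
    return boxes

if __name__ == "__main__":
    img = cv2.imread("frame.jpg")          # placeholder path
    if img is not None:
        for color, (x, y, w, h) in candidate_regions(img):
            print(f"{color} candidate at x={x}, y={y}, w={w}, h={h}")
```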

Keywords: adaptive thresholding based on RGB color, blue color detection, feature extraction, yellow color detection

Procedia PDF Downloads 313
3992 Perceptual Image Coding by Exploiting Internal Generative Mechanism

Authors: Kuo-Cheng Liu

Abstract:

In perceptual image coding, the objective is to shape the coding distortion such that the amplitude of the distortion does not exceed the error visibility threshold, or to remove perceptually redundant signals from the image. While most research focuses on color image coding, the perceptual quantizers developed for luminance signals are often directly applied to chrominance signals, making such color image compression methods inefficient. In this paper, the internal generative mechanism is integrated into the design of a color image compression method. The internal generative mechanism working model, based on structure-based spatial masking, is used to assess subjective distortion visibility thresholds that are more visually consistent with human perception. An estimation method for structure-based distortion visibility thresholds of the color components is further presented in a locally adaptive way to design the quantization process in the wavelet color image compression scheme. Since the lowest subband coefficient matrix of an image in the wavelet domain preserves the local properties of the image in the spatial domain, the error visibility threshold inherent in each coefficient of the lowest subband of each color component is estimated using the proposed spatial error visibility threshold assessment. The threshold inherent in each coefficient of the other subbands of each color component is then estimated in a locally adaptive fashion based on the distortion energy allocation. Because the error visibility thresholds are estimated using predicted and reconstructed signals of the color image, the coding scheme incorporating the locally adaptive perceptual color quantizer does not require side information. Experimental results show that the entropies of the three color components obtained using the proposed IGM-based color image compression scheme are lower than those obtained using the existing color image compression method at perceptually lossless visual quality.

Keywords: internal generative mechanism, structure-based spatial masking, visibility threshold, wavelet domain

Procedia PDF Downloads 248
3991 Ambiguity Resolution for Ground-based Pulse Doppler Radars Using Multiple Medium Pulse Repetition Frequency

Authors: Khue Nguyen Dinh, Loi Nguyen Van, Thanh Nguyen Nhu

Abstract:

In this paper, we propose an adaptive method to resolve ambiguities and a ghost-target removal process to extract targets detected by a ground-based pulse-Doppler radar using medium pulse repetition frequency (PRF) waveforms. The ambiguity resolution method is an adaptive implementation of the coincidence algorithm, applied to a two-dimensional (2D) range-velocity matrix to resolve range and velocity ambiguities simultaneously, with a proposed clustering filter to enhance the robustness of the system against errors. Here we consider the scenario of multiple-target environments. The ghost-target removal process, which is based on the power after Doppler processing, is proposed to mitigate ghost detections and enhance the performance of ground-based radars using a short PRF schedule in multiple-target environments. Simulation results on a ground-based pulse-Doppler radar model are presented to show the effectiveness of the proposed approach.
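A minimal one-dimensional sketch of the coincidence idea behind medium-PRF ambiguity resolution: each PRF's folded range is unfolded over multiples of its unambiguous range, and the value at which the candidates from all PRFs coincide within a tolerance is taken as the true range (the PRFs, tolerance, and target range are illustrative; the paper's 2D range-velocity implementation and clustering filter are not reproduced):

```python
# Minimal 1-D sketch of the coincidence idea behind medium-PRF range ambiguity
# resolution. PRFs, tolerance, and the true range are illustrative.
import numpy as np

C = 3e8  # speed of light, m/s

def unambiguous_range(prf):
    return C / (2.0 * prf)

def resolve_range(folded_ranges, prfs, max_range=150e3, tol=150.0):
    """Return the unfolded range (m) at which all PRFs' candidates coincide."""
    candidate_sets = []
    for r_folded, prf in zip(folded_ranges, prfs):
        ru = unambiguous_range(prf)
        k = np.arange(int(max_range // ru) + 1)
        candidate_sets.append(r_folded + k * ru)      # all possible unfolded ranges
    best, best_spread = None, np.inf
    for cand in candidate_sets[0]:
        # Nearest candidate from every other PRF; coincidence means a small spread.
        nearest = [s[np.argmin(np.abs(s - cand))] for s in candidate_sets[1:]]
        spread = max(abs(n - cand) for n in nearest)
        if spread < tol and spread < best_spread:
            best, best_spread = float(np.mean([cand, *nearest])), spread
    return best

if __name__ == "__main__":
    prfs = [8e3, 9e3, 11e3]                            # three medium PRFs (Hz)
    true_range = 87_500.0                              # metres (illustrative)
    folded = [true_range % unambiguous_range(p) for p in prfs]
    print("resolved range (m):", resolve_range(folded, prfs))
```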

Keywords: ambiguity resolution, coincidence algorithm, medium PRF, ghosting removal

Procedia PDF Downloads 151
3990 Vine Growers' Climate Change Adaptation Strategies in Hungary

Authors: Gabor Kiraly

Abstract:

Wine regions are based on an equilibrium between climate, soil, grape varieties, and farming expertise that defines the special character and quality of local vine farming and wine production. Changes in climate conditions may increase the risk of destabilizing this equilibrium. Adaptation decisions, including adjusting practices, processes, and capital in response to climate change stresses, may reduce this risk. However, farmers' adaptive behavior is subject to a wide range of factors and forces, such as the links between climate change impacts and production, farm-scale adaptive capacity, and other external forces, which might hinder them from responding efficiently to climate change challenges. This paper aims to study the climate change adaptation practices and strategies of grape growers by applying a complex and holistic approach involving theories, methods, and tools from both the environmental and social sciences. It introduces the field of adaptation studies as an evidence-based discourse by presenting an overview of examples from wine regions where adaptation studies have already reached an advanced stage. This serves as the theoretical background for preliminary research aimed at examining the feasibility and applicability of such a research approach in the Hungarian context.

Keywords: climate change, adaptation, viticulture, Hungary

Procedia PDF Downloads 237