Search results for: relative intensity noise
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5041


3751 Construction of the Large Scale Biological Networks from Microarrays

Authors: Fadhl Alakwaa

Abstract:

One of the long-standing goals of systems biology is understanding gene-gene interactions. Hence, gene regulatory networks (GRNs) need to be constructed for understanding disease ontology and for reducing the cost of drug development. To construct gene regulatory networks from gene expression data, many challenges must be overcome, such as measurement noise and high dimensionality. In this paper, we develop an integrated system to reduce the data dimensionality and remove the noise. The network generated by our system was validated against available interaction databases and compared to previous methods. The results demonstrate the performance of the proposed method.
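The denoising and dimensionality-reduction stage described above can be sketched with a truncated SVD, a common stand-in for such pre-processing; this is only an illustrative assumption (the paper's actual system uses biclustering), and the expression matrix and rank k below are synthetic:

```python
import numpy as np

# Hypothetical sketch: denoise a gene expression matrix (genes x samples)
# by truncating its SVD to the top-k components, then work in the reduced
# space. The low-rank "signal" and the rank k = 3 are illustrative.
rng = np.random.default_rng(0)
signal = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 20))  # low-rank structure
expression = signal + 0.1 * rng.normal(size=(100, 20))          # + measurement noise

U, s, Vt = np.linalg.svd(expression, full_matrices=False)
k = 3                                   # number of components kept
denoised = (U[:, :k] * s[:k]) @ Vt[:k]  # rank-k reconstruction (denoised matrix)
reduced = U[:, :k] * s[:k]              # genes embedded in k dimensions

err_noisy = np.linalg.norm(expression - signal)
err_denoised = np.linalg.norm(denoised - signal)
```

On this synthetic matrix the truncated reconstruction is closer to the noise-free signal than the raw data, which is the point of the pre-processing step.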

Keywords: gene regulatory network, biclustering, denoising, systems biology

Procedia PDF Downloads 228
3750 Automatic Target Recognition in SAR Images Based on Sparse Representation Technique

Authors: Ahmet Karagoz, Irfan Karagoz

Abstract:

Synthetic Aperture Radar (SAR) is a radar system that can be integrated into manned and unmanned aerial vehicles to create high-resolution images in all weather conditions, day and night. In this study, SAR images of military vehicles with different azimuth and depression angles are pre-processed in the first stage. The main purpose is to reduce the strong speckle noise found in SAR images. For this, the adaptive Wiener filter, the mean filter, and the median filter are used to reduce the amount of speckle noise in the images without causing loss of data. During the image segmentation phase, pixel values are ordered so that the target vehicle region is separated from regions containing unnecessary information. The target image is binarized by setting the brightest 20% of pixels to 255 and all other pixels to 0. In addition, a segmentation comparison is performed using appropriate parameters of the statistical region merging algorithm. In the feature extraction step, feature vectors for the vehicles are obtained using Gabor filters with different orientation, frequency, and angle values; a bank of Gabor filters is created by varying these parameters so as to extract the important, distinctive features of the images. Finally, the images are classified by the sparse representation method, using l₁-norm analysis. A joint database of the feature vectors generated from the target images of the military vehicle types is assembled and arranged in matrix form. To classify a vehicle, each test image is converted to vector form and l₁-norm analysis of the sparse representation method is applied against the existing database matrix.
As a result, correct recognition is achieved by matching the target images of military vehicles with the test images by means of the sparse representation method, with a classification accuracy of 97% on SAR images of different military vehicle types.
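The segmentation rule quoted above (brightest 20% of pixels set to 255, the rest to 0) can be sketched as follows; the random image stands in for a real SAR chip:

```python
import numpy as np

# Threshold at the 80th percentile so the top 20% brightest pixels become
# the target mask (255) and everything else is suppressed (0).
rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(64, 64))   # synthetic stand-in for a SAR chip

threshold = np.percentile(image, 80)          # 80th percentile -> top 20%
segmented = np.where(image >= threshold, 255, 0).astype(np.uint8)

frac_target = (segmented == 255).mean()       # ~0.2 of the pixels survive
```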

Keywords: automatic target recognition, sparse representation, image classification, SAR images

Procedia PDF Downloads 357
3749 Analytical Technique for Definition of Internal Forces in Links of Robotic Systems and Mechanisms with Statically Indeterminate and Determinate Structures Taking into Account the Distributed Dynamical Loads and Concentrated Forces

Authors: Saltanat Zhilkibayeva, Muratulla Utenov, Nurzhan Utenov

Abstract:

Distributed inertia forces of a complex nature arise in the links of rod mechanisms during motion. Such loads raise a number of problems, such as failure caused by large inertia forces; moreover, the elastic deformation of the mechanism can be considerable enough to put the mechanism out of action. In this work, a new analytical approach is proposed for the definition of internal forces in links of robotic systems and mechanisms with statically indeterminate and determinate structures, taking into account distributed inertial and concentrated forces. The relations between the intensity of the distributed inertia forces and link weight and the geometrical, physical, and kinematic characteristics are determined. The distribution laws of the inertia forces and dead weight make it possible, at each position of the links, to deduce the laws of distribution of internal forces along the axis of the link, from which the loads can be found at any point of the link. The approximation matrices of forces of an element under the action of distributed inertia loads with trapezoidal intensity are defined. The obtained approximation matrices establish the dependence between the force vector in any cross-section of the element and the force vectors in the calculated cross-sections, and also allow defining the physical characteristics of the element, i.e., the compliance matrix of discrete elements. Hence, the compliance matrices of an element under the action of distributed inertial loads of trapezoidal shape along the axis of the element are determined. The internal loads of each continual link are unambiguously determined by a set of internal loads in its separate cross-sections and by the approximation matrices. Therefore, the task is reduced to the calculation of internal forces in a finite number of cross-sections of the elements, which leads to a discrete model of the elastic calculation of links of rod mechanisms.
The discrete models of the elements of mechanisms and robotic systems, and the discrete model of the system as a whole, are constructed. The dynamic equilibrium equations for the discrete model of the elements are also derived in this work, as are the equilibrium equations of the pin and rigid joints expressed through the required parameters of the internal forces. The obtained systems of dynamic equilibrium equations are sufficient for the definition of internal forces in links of mechanisms whose structure is statically determinate. For the determination of the internal forces of statically indeterminate mechanisms, it is necessary to build a compliance matrix for the entire discrete model of the rod mechanism, which is achieved in this work. Finally, by means of the developed technique, programs were written in the Maple 18 system, and animations were obtained of the motion of fourth-class mechanisms of statically determinate and statically indeterminate structures, plotting along the links the intensity of the transverse and axial distributed inertial loads, the bending moments, and the transverse and axial forces as functions of the kinematic characteristics of the links.
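As a much-simplified numerical illustration of how a trapezoidal distributed intensity determines internal forces along a link, the sketch below integrates q(x) to obtain the shear force and bending moment at a cross-section of a link fixed at x = 0; the geometry and load values are illustrative assumptions, not the paper's approximation-matrix formulation:

```python
import numpy as np

# Link of length L with transverse trapezoidal load intensity
# q(x) = q0 + (q1 - q0) * x / L. For a link fixed at x = 0, the shear
# transmitted at x is the resultant of the load beyond x, and the bending
# moment is that load's first moment about x (midpoint-rule quadrature).
L_len, q0, q1 = 2.0, 100.0, 300.0   # m, N/m, N/m (illustrative values)

def q(x):
    return q0 + (q1 - q0) * x / L_len

def _integrate(f, a, b, n=20000):
    xs = np.linspace(a, b, n + 1)
    mid = 0.5 * (xs[:-1] + xs[1:])
    return float(np.sum(f(mid) * np.diff(xs)))

def shear(x):
    return _integrate(q, x, L_len)                      # V(x) = ∫ q

def moment(x):
    return _integrate(lambda s: q(s) * (s - x), x, L_len)  # M(x) = ∫ q·(s-x)

V0 = shear(0.0)    # total load: (q0 + q1)/2 * L = 400 N
M0 = moment(0.0)   # analytic value: 1400/3 N·m
```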

Keywords: distributed inertial forces, internal forces, statically determinate mechanisms, statically indeterminate mechanisms

Procedia PDF Downloads 215
3748 Global Stability of Nonlinear Itô Equations and N. V. Azbelev's W-Method

Authors: Arcady Ponosov, Ramazan Kadiev

Abstract:

This work studies the global moment stability of solutions of systems of nonlinear differential Itô equations with delays. A modified regularization method (the W-method) for the analysis of various types of stability of such systems, based on the choice of auxiliary equations and on applications of the theory of positive invertible matrices, is proposed and justified. The development of this method for deterministic functional differential equations is due to N. V. Azbelev and his students. Sufficient conditions for the moment stability of solutions, stated in terms of the coefficients, are given for sufficiently general as well as for specific classes of Itô equations.
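As a hedged illustration (the paper's exact system may differ), a nonlinear Itô equation with a single discrete delay $\tau > 0$ has the general form

```latex
dx(t) = f\bigl(t,\, x(t),\, x(t-\tau)\bigr)\,dt
      + g\bigl(t,\, x(t),\, x(t-\tau)\bigr)\,dB(t), \qquad t \ge 0,
```

where $B(t)$ is a standard Brownian motion; global moment stability of order $p$ then means $\mathbb{E}\,|x(t)|^{p} \to 0$ as $t \to \infty$ for all admissible initial data.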

Keywords: asymptotic stability, delay equations, operator methods, stochastic noise

Procedia PDF Downloads 217
3747 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry

Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood

Abstract:

The flow over a backward-facing step is characterized by the presence of flow separation, recirculation, and reattachment for a simple geometry. This type of fluid behaviour takes place in many practical engineering applications, hence the reason for its being investigated. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry, or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique to this type of flow is investigated at various Reynolds numbers corresponding to different flow regimes. Reports of this measuring technique being applied to separated flows are very scarce in the literature; moreover, most studies evaluating the Reynolds number effect in separated flows rely on numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow in a recirculating laboratory flume at various Reynolds numbers (Reh = 3738, 5452, 7908, and 17388) based on the step height (h), in order to capture different flow regimes, and the results were compared to those obtained using other measuring techniques. To compare results with other researchers, the step height, expansion ratio, and the positions upstream and downstream of the step were reproduced. The post-processing of the ADV records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density of the stream-wise horizontal velocity component.
The normalized mean stream-wise velocity profiles, skin-friction coefficients, and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses, and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out for the measured variables using the moving-block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness, and the errors obtained in the uncertainty analysis were in general relatively low. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique, and good agreement was found. The ADV technique proved able to characterize the flow properly over a backward-facing step, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; d) the identification of the transition from transitional to turbulent flow. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and is thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data in order to obtain low noise levels, thus decreasing the uncertainty.
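The noise-level check via the power spectral density of the stream-wise velocity can be sketched with a hand-rolled averaged (Welch-style) periodogram; the 25 Hz sampling rate, record length, and synthetic velocity signal are assumptions, not values from the study:

```python
import numpy as np

# Synthetic 60 s stream-wise velocity record: mean flow + a 1 Hz oscillation
# + white measurement noise, sampled at an assumed ADV rate of 25 Hz.
fs = 25.0
rng = np.random.default_rng(2)
t = np.arange(0, 60, 1 / fs)
u = 0.3 + 0.05 * np.sin(2 * np.pi * 1.0 * t) + 0.01 * rng.normal(size=t.size)

# Averaged periodogram over non-overlapping Hann-windowed segments.
nseg = 256
segs = [u[i:i + nseg] for i in range(0, u.size - nseg + 1, nseg)]
win = np.hanning(nseg)
scale = fs * (win ** 2).sum()          # density normalisation
psd = np.mean(
    [np.abs(np.fft.rfft((s - s.mean()) * win)) ** 2 / scale for s in segs],
    axis=0,
)
freqs = np.fft.rfftfreq(nseg, 1 / fs)
peak_freq = freqs[np.argmax(psd[1:]) + 1]   # skip the DC bin
```

The spectral peak lands at the injected 1 Hz component, while the flat high-frequency tail gives an estimate of the instrument noise floor.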

Keywords: ADV, experimental data, multiple Reynolds number, post-processing

Procedia PDF Downloads 136
3746 Seismic Behaviour of RC Knee Joints in Closing and Opening Actions

Authors: S. Mogili, J. S. Kuang, N. Zhang

Abstract:

Knee joints, the beam-column connections found at the roof level of moment-resisting frame buildings, are inherently different from conventional interior and exterior beam-column connections in the way that forces from adjoining members are transferred into the joint and then resisted by it. A knee connection has two distinct load-resisting mechanisms, one each for closing and opening actions, acting simultaneously under reversed cyclic loading. In spite of these distinct differences in shear-resistance behaviour, there are no special design provisions for knee joints in the major design codes worldwide, owing to a lack of in-depth research on knee connections. To understand the relative importance of opening and closing actions in design, it is imperative to study knee joints under varying shear stresses, especially at higher opening-to-closing shear stress ratios. Three knee joint specimens, under different input shear stresses, were designed to produce varying ratios of input opening to closing shear stresses. The design was carried out in such a way that the ratio of the flexural strength of the beams, with consideration of axial forces, in opening to closing actions was maintained at 0.5, 0.7, and 1.0, thereby producing the required variation of opening-to-closing joint shear stress ratios among the specimens. The behaviour of these specimens was then carefully studied in terms of closing and opening capacities, hysteretic behaviour, and envelope curves to understand the differences in joint performance, based on which an attempt is made to suggest design guidelines for knee joints that emphasize the relative importance of opening and closing actions. Specimens with relatively higher opening stresses were observed to be more vulnerable under seismic loading.

Keywords: knee joints, large-scale testing, opening and closing shear stresses, seismic performance

Procedia PDF Downloads 214
3745 The Inherent Flaw in the NBA Playoff Structure

Authors: Larry Turkish

Abstract:

Introduction: The NBA is an example of mediocrity, and this will be evident in the following paper. The study examines and evaluates the characteristics of NBA champions. As divisions and playoff teams increase, so does the probability that the champion originates from the mediocre category. Since its inception in 1947, the league has been mediocre, and it continues to be so to this day. Why does a professional league allow any team with less than a 50% winning percentage into the playoffs? As long as the finances flow into the league, owners will not change the current algorithm. The objective of this paper is to determine whether the regular season has meaning in finding an NBA champion. Statistical Analysis: The data originate from the NBA website. The following variables are part of the statistical analysis: Rank, the rank of a team relative to other teams in the league based on the regular-season win-loss record; Winning Percentage of a team based on the regular season; Divisions, the number of divisions within the league; and Playoff Teams, the number of playoff teams relative to a particular season. The following statistical applications are applied to the data: Pearson product-moment correlation, analysis of variance, and factor and regression analysis. Conclusion: The results indicate that the divisional structure and the number of playoff teams have a negative effect on the winning percentage of playoff teams. They also prevent teams with higher winning percentages from accessing the playoffs. Recommendations: 1. Teams whose regular-season winning percentage is greater than 1 standard deviation above the mean gain access to the playoffs (eliminates mediocre teams). 2. Eliminate divisions (eliminates weaker teams from access to the playoffs). 3. Eliminate conferences (eliminates weaker teams from access to the playoffs). 4. Adopt a balanced regular-season schedule (reduces the number of regular-season games, creates equilibrium, and reduces bias), which will also reduce the need for load management.
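Recommendation 1 above (admit only teams more than one standard deviation above the mean winning percentage) can be sketched as a z-score filter; the team records below are made up for illustration:

```python
import statistics

# Hypothetical regular-season winning percentages for a 10-team league.
win_pct = {
    "A": 0.720, "B": 0.680, "C": 0.610, "D": 0.550, "E": 0.500,
    "F": 0.480, "G": 0.450, "H": 0.400, "I": 0.350, "J": 0.300,
}

mean = statistics.mean(win_pct.values())
sd = statistics.stdev(win_pct.values())   # sample standard deviation
cutoff = mean + sd                        # 1 SD above the league mean
qualified = sorted(t for t, p in win_pct.items() if p > cutoff)
```

With these made-up records only the two clearly above-average teams pass the cutoff; every sub-.500 team is excluded by construction.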

Keywords: alignment, mediocrity, regression, z-score

Procedia PDF Downloads 126
3744 On the Influence of Sleep Habits for Predicting Preterm Births: A Machine Learning Approach

Authors: C. Fernandez-Plaza, I. Abad, E. Diaz, I. Diaz

Abstract:

Births occurring before the 37th week of gestation are considered preterm births. A threat of preterm labour is defined as the beginning of regular uterine contractions, dilation, and cervical effacement between 23 and 36 gestational weeks. To the authors' best knowledge, the factors that determine the onset of birth are not yet completely defined. In particular, the influence of sleep habits on preterm births is weakly studied. The aim of this study is to develop a model to predict the factors affecting premature delivery in pregnancy, based on potential risk factors including those derived from sleep habits and light exposure at night (introduced as 12 variables obtained by a telephone survey using two questionnaires previously used by other authors). Thus, three groups of variables were included in the study (maternal, fetal, and sleep habits). The study was approved by the Research Ethics Committee of the Principado de Asturias (Spain). An observational, retrospective, and descriptive study was performed on 481 births between January 1, 2015 and May 10, 2016 in the University Central Hospital of Asturias (Spain). A statistical analysis using SPSS was carried out to compare qualitative and quantitative variables between preterm and term deliveries; the chi-square test was applied to qualitative variables and the t-test to quantitative variables. Statistically significant differences (p < 0.05) between preterm and term births were found for primiparity, multiparity, kind of conception, place of residence, premature rupture of membranes, and interruptions during the night. In addition to the statistical analysis, machine learning methods were tested in the search for a prediction model. In particular, tree-based models were applied, as their trade-off between performance and interpretability is especially suitable for this study. C5.0, recursive partitioning, random forest, and tree bag models were analysed using the caret R package.
Ten-fold cross-validation and parameter tuning were applied to optimize the methods. In addition, different noise reduction methods were applied to the initial data using the NoiseFiltersR package. The best performance was obtained by the C5.0 method, with accuracy 0.91, sensitivity 0.93, specificity 0.89, and precision 0.91. Some well-known preterm birth factors were identified: cervix dilation, maternal BMI, premature rupture of membranes, and nuchal translucency analysis in the first trimester. The model also identifies other new factors related to sleep habits, such as light through the window, bedtime on working days, usage of electronic devices before sleeping from Monday to Friday, and changes in sleeping habits reflected in the number of hours, the depth of sleep, or the lighting of the room. "IF dilation <= 2.95 AND usage of electronic devices before sleeping from Monday to Friday = YES AND change of sleeping habits = YES, THEN preterm" is one of the prediction rules obtained by C5.0. In this work, a model for predicting preterm births is developed, based on machine learning together with noise reduction techniques; the method maximizing the performance is the one selected. This model shows the influence of variables related to sleep habits on preterm prediction.
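The single C5.0 rule quoted above can be expressed directly as a function; the input encoding (dilation in cm, booleans for the two sleep-habit answers) is an assumption for illustration:

```python
# Sketch of the quoted C5.0 rule. It fires (predicts preterm) only when all
# three conditions hold; otherwise this particular rule stays silent.
def predict_preterm(dilation_cm, devices_before_sleep, habits_changed):
    """Return True (preterm) when the quoted rule fires, else False."""
    return dilation_cm <= 2.95 and devices_before_sleep and habits_changed

r1 = predict_preterm(2.0, True, True)    # all conditions met: rule fires
r2 = predict_preterm(2.0, True, False)   # habits unchanged: rule silent
r3 = predict_preterm(3.5, True, True)    # dilation too large: rule silent
```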

Keywords: machine learning, noise reduction, preterm birth, sleep habit

Procedia PDF Downloads 139
3743 Evaluation of Entomopathogenic Fungi Strains for Field Persistence and Its Relationship to in Vitro Heat Tolerance

Authors: Mulue Girmay Gebreslasie

Abstract:

Entomopathogenic fungi are naturally safe and eco-friendly biological agents. Their potential host specificity and ease of handling make them appealing options to substitute for synthetic pesticides in pest control programs. However, they are highly delicate and unstable under field conditions. Therefore, the current experiment was conducted to identify persistent fungal strains by defining the relationship between in vitro heat tolerance and field persistence. The leaf and soil persistence assays revealed that the Metarhizium strains M. pingshaense (F2685), M. pingshaense (MS2), and M. brunneum (F709) exhibited the maximum cumulative CFU counts and relative survival rates and the lowest percentage CFU reductions, with significant differences at 7 and 28 days post inoculation (dpi), in the hot season for both sampled soils and leaves and in the cold season for soil samples. The relative survival of B. brongniartii (TNO6) in the leaf treatment was found to be significantly higher in cold weather than in the hot season, where it was as persistent as the other fungal strains, while higher deterioration of fungal conidia was seen with M. pingshaense (MS2). Overall, the strains Beauveria brongniartii (TNO6) and Cordyceps javanica (Czy-LP) were relatively vulnerable under field conditions, with the greatest reductions in colony forming units (CFUs) and the lowest survival rates. Further, the two parameters (heat tolerance and field persistence) showed a strong positive linear correlation, indicating that a heat test could be used in the selection of field-persistent fungal strains for hot-season applications.
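The correlation analysis described above amounts to computing Pearson's r between an in vitro heat-tolerance score and a field-persistence score across strains; the values below are made up purely to illustrate a strong positive linear relation:

```python
import numpy as np

# Hypothetical per-strain scores (NOT the study's data): a heat-tolerance
# index and a field-persistence index for five strains.
heat_tolerance = np.array([0.91, 0.85, 0.78, 0.55, 0.40])
persistence = np.array([0.88, 0.80, 0.75, 0.50, 0.42])

# Pearson product-moment correlation coefficient.
r = np.corrcoef(heat_tolerance, persistence)[0, 1]
```

A value of r close to +1 is what supports using the quick in vitro heat test as a proxy for slow field-persistence trials.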

Keywords: integrated pest management, biopesticides, insect pathology and microbial control, entomology

Procedia PDF Downloads 87
3742 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high additive white Gaussian noise (AWGN) environments using principles of computational auditory scene analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space", where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning, implemented by a sparse autoencoder, learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set, selected randomly from a single district. Each speaker has 10 sentences: two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and for each test sentence, and classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and those from the test sentences. Training is done at a 30 dB signal-to-noise ratio (SNR), and testing is performed at SNRs of 0 dB, 5 dB, 10 dB, and 30 dB. The algorithm has a baseline classification accuracy of ~93%, averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as to the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN, and the method produces ~93% accuracy at 0 dB SNR.
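The atomic decomposition step can be sketched with a minimal greedy matching pursuit over a dictionary of unit-norm atoms; the random dictionary below stands in for the learned Gabor/autoencoder dictionary, which is an assumption of this sketch:

```python
import numpy as np

# Random dictionary: 128 unit-norm atoms of dimension 64 (a stand-in for
# the learned dictionary in the paper).
rng = np.random.default_rng(3)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)

# Test signal built from two known atoms (indices 5 and 42).
signal = 2.0 * D[:, 5] + 1.5 * D[:, 42]

# Matching pursuit: repeatedly pick the atom most correlated with the
# residual, record its index and weight, and subtract its contribution.
residual = signal.copy()
picked, weights = [], []
for _ in range(2):
    corr = D.T @ residual
    k = int(np.argmax(np.abs(corr)))
    picked.append(k)
    weights.append(float(corr[k]))
    residual = residual - corr[k] * D[:, k]
```

The recovered indices form the sparse "weight space" representation; in the paper, the arrangement of such indices is what feeds the classifier.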

Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder

Procedia PDF Downloads 285
3741 Application of Data Driven Based Models as Early Warning Tools of High Stream Flow Events and Floods

Authors: Mohammed Seyam, Faridah Othman, Ahmed El-Shafie

Abstract:

The early warning of high stream flow (HSF) events and floods is an important aspect of the management of surface water and river systems. This process can be performed using either process-based models or data-driven models such as artificial intelligence (AI) techniques. The main goal of this study is to develop an efficient AI-based model for predicting the real-time hourly stream flow (Q) and to apply it as an early warning tool for HSF events and floods in the downstream area of the Selangor River basin, taken here as a paradigm of humid tropical rivers in Southeast Asia. The performance of the AI-based models has been improved through the integration of lag time (Lt) estimation in the modelling process. A total of 8753 patterns of hourly Q, water level, and rainfall (RF) records, representing a one-year period (2011), were utilized in the modelling process. Six hydrological scenarios were arranged through hypothetical cases of input variables to investigate how changes in RF intensity at upstream stations can lead to the formation of floods. The initial stream flow was changed for each scenario in order to include a wide range of hydrological situations in this study. The performance evaluation of the developed AI-based model shows that a high correlation coefficient (R) between the observed and predicted Q is achieved. The AI-based model has been successfully employed in early warning through the advance detection of the hydrological conditions that could lead to the formation of floods and HSF events, represented by three levels of severity (i.e., alert, warning, and danger). Based on the results of the scenarios, reaching the danger level in the downstream area requires high RF intensity in at least two upstream areas. According to the results of the applications, it can be concluded that AI-based models are beneficial tools for the local authorities for flood control and awareness.
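The three-level early-warning scheme described above can be sketched as a simple mapping from the predicted hourly stream flow to a severity level; the discharge thresholds are illustrative assumptions, not values from the study:

```python
# Hypothetical discharge thresholds (m^3/s) separating the warning levels.
ALERT, WARNING, DANGER = 80.0, 120.0, 160.0

def severity(q_pred):
    """Map a predicted hourly stream flow to a warning level."""
    if q_pred >= DANGER:
        return "danger"
    if q_pred >= WARNING:
        return "warning"
    if q_pred >= ALERT:
        return "alert"
    return "normal"

levels = [severity(q) for q in (50.0, 90.0, 130.0, 200.0)]
```

In the study's setting, q_pred would come from the AI model's lag-time-aware hourly forecast rather than being supplied directly.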

Keywords: floods, stream flow, hydrological modelling, hydrology, artificial intelligence

Procedia PDF Downloads 243
3740 Musical Notation Reading versus Alphabet Reading-Comparison and Implications for Teaching Music Reading to Students with Dyslexia

Authors: Ora Geiger

Abstract:

Reading is a cognitive process of deciphering visual signs to produce meaning. During the reading process, written information in the form of symbols and signs is received by the eye and processed in the brain. This definition is relevant both to the reading of letters and to the reading of musical notation. But while the letters of the alphabet are signs determined arbitrarily, notes are recorded systematically on a staff, with the location of each note on the staff indicating its relative pitch. In this paper, the researcher specifies the characteristics of alphabet reading in comparison to musical notation reading and discusses whether a person diagnosed with dyslexia will necessarily have difficulty in reading musical notes. Dyslexia is a learning disorder that makes it difficult to acquire alphabet-reading skills due to difficulties in the identification of letters, spelling, and other language-deciphering skills. In order to read, one must be able to connect a symbol with a sound and to join the sounds into words. A person who has dyslexia finds it difficult to translate a graphic symbol into the sound that it represents. When teaching reading to children diagnosed with dyslexia, the multi-sensory approach, which supports the activation and involvement of most of the senses in the learning process, has been found to be particularly effective. Over years of experience, the researcher, who is a music specialist, has followed the music-reading learning process of elementary-school-age students, some of them diagnosed with dyslexia, while they studied to play the soprano (descant) recorder. She argues that learning music reading while studying to play a musical instrument is by its nature a multi-sensory experience.
The senses involved are sight, hearing, touch, and the kinesthetic sense (motion), which provides the brain with information on the relative positions of the body. In this way, the learner simultaneously experiences visual, auditory, tactile, and kinesthetic impressions. The researcher concludes that there should be no contraindication to teaching standard music reading to children with dyslexia if an appropriate process is offered. This conclusion is based on two main characteristics of music reading: (1) the musical notation system is a systematic, logical, relative set of symbols written on a staff; and (2) music-reading learning connected with playing a musical instrument is by its nature a multi-sensory activity, since it combines sight, hearing, touch, and movement. This paper describes music-reading teaching procedures and provides unique teaching methods that have been found to be effective for students diagnosed with dyslexia. It provides theoretical explanations in addition to guidelines for music education practices.

Keywords: alphabet reading, dyslexia, multisensory teaching method, music reading, recorder playing

Procedia PDF Downloads 358
3739 Numerical Simulation of Laser Propagation through Turbulent Atmosphere Using Zernike Polynomials

Authors: Mohammad Moradi

Abstract:

In this article, the propagation of a laser beam through a turbulent atmosphere is evaluated. First the laser beam is simulated, and then the turbulent atmosphere is simulated using Zernike polynomials. Parameters such as the intensity and the point spread function (PSF) are measured for four wavelengths at different values of the refractive index structure constant Cn².
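Building a phase screen from Zernike polynomials can be sketched as a weighted sum of modes over the unit pupil; only tilt and defocus are included here and the coefficients are arbitrary (a real turbulence screen would use many more modes with Kolmogorov-weighted random coefficients):

```python
import numpy as np

# Unit-disc pupil grid.
n = 128
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
r = np.sqrt(X**2 + Y**2)
theta = np.arctan2(Y, X)
pupil = r <= 1.0

# Two low-order Zernike modes (Noll-normalised).
z_tilt_x = 2 * r * np.cos(theta)          # Z(1, 1): x-tilt
z_defocus = np.sqrt(3) * (2 * r**2 - 1)   # Z(2, 0): defocus

# Weighted sum of modes gives the phase screen (radians); coefficients are
# illustrative, not drawn from any turbulence statistics.
coeffs = {"tilt_x": 0.5, "defocus": 1.2}
phase = (coeffs["tilt_x"] * z_tilt_x + coeffs["defocus"] * z_defocus) * pupil
```

Applying exp(i·phase) to the beam's complex field inside the pupil then models one realisation of the aberrated wavefront.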

Keywords: laser beam propagation, phase screen, turbulent atmosphere, Zernike polynomials

Procedia PDF Downloads 507
3738 Performance of High Efficiency Video Codec over Wireless Channels

Authors: Mohd Ayyub Khan, Nadeem Akhtar

Abstract:

Due to recent advances in wireless communication technologies and hand-held devices, there is huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw video requires very high bandwidth, which makes compression a must before its transmission over wireless channels. The High Efficiency Video Codec (HEVC, also called H.265) is the latest state-of-the-art video coding standard, developed by the joint effort of the ITU-T and ISO/IEC teams. HEVC targets high-resolution videos, such as 4K or 8K, and can fulfil the recent demand for video services. The compression ratio achieved by HEVC is twice that of its predecessor H.264/AVC at the same quality level. Compression efficiency is generally increased by removing more correlation between frames/pixels using complex techniques such as extensive intra- and inter-prediction. As more correlation is removed, the interdependency among coded bits increases; thus, bit errors may have a large effect on the reconstructed video, and sometimes even a single bit error can lead to catastrophic failure of the reconstruction. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over quadrature amplitude modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide some redundancy. The channel-coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream, which is then used to reconstruct the video using the HEVC decoder.
It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video degrades drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing an optimized FEC code rate such that the quality of the reconstructed video is maximized over wireless channels.
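The transmission chain described above, minus the HEVC codec and FEC, can be sketched with 4-QAM (QPSK) over an AWGN channel: modulate random bits, add white Gaussian noise, hard-decide, and count bit errors. The SNR value and bit count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
bits = rng.integers(0, 2, size=20000)    # stand-in for a channel-coded bitstream

# Gray-mapped 4-QAM: one bit on I, one on Q, unit average symbol energy.
symbols = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)

# AWGN at an assumed symbol SNR of 10 dB (per-dimension noise variance N0/2).
snr_db = 10.0
noise_std = np.sqrt(10 ** (-snr_db / 10) / 2)
rx = symbols + noise_std * (rng.normal(size=symbols.size)
                            + 1j * rng.normal(size=symbols.size))

# Hard-decision demodulation: sign of I and Q recovers the two bits.
bits_hat = np.empty_like(bits)
bits_hat[0::2] = (rx.real > 0).astype(int)
bits_hat[1::2] = (rx.imag > 0).astype(int)
ber = np.mean(bits != bits_hat)          # bit error rate before any FEC
```

Sweeping snr_db reproduces the qualitative observation above: the raw BER, and hence the HEVC reconstruction quality, degrades sharply as the channel SNR drops, which is what the FEC layer must compensate for.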

Keywords: AWGN, forward error correction, HEVC, video coding, QAM

Procedia PDF Downloads 142
3737 Open Reading Frame Marker-Based Capacitive DNA Sensor for Ultrasensitive Detection of Escherichia coli O157:H7 in Potable Water

Authors: Rehan Deshmukh, Sunil Bhand, Utpal Roy

Abstract:

We report the label-free electrochemical detection of Escherichia coli O157:H7 (ATCC 43895) in potable water using a DNA probe as the sensing molecule, targeting an open reading frame marker. An indium tin oxide (ITO) surface was modified with organosilane, and glutaraldehyde was applied as a linker to fabricate the DNA sensor chip. Non-Faradaic electrochemical impedance spectroscopy (EIS) behavior was investigated at each step of sensor fabrication using cyclic voltammetry, impedance, phase, relative permittivity, capacitance, and admittance. Atomic force microscopy (AFM) and scanning electron microscopy (SEM) revealed significant changes in surface topography during DNA sensor chip fabrication. The decrease in the percentage of pinholes from 2.05 (bare ITO) to 1.46 (after DNA hybridization) suggested the capacitive behavior of the DNA sensor chip. The non-Faradaic EIS studies of the DNA sensor chip showed a systematic declining trend in capacitance as well as relative permittivity upon DNA hybridization. The DNA sensor chip exhibited linearity over 0.5 to 25 pg/10 mL for E. coli O157:H7 (ATCC 43895). The limit of detection (LOD) at 95% confidence, estimated by logistic regression, was 0.1 pg DNA/10 mL of E. coli O157:H7 (equivalent to 13.67 CFU/10 mL), with a p-value of 0.0237. Moreover, the fabricated DNA sensor chip showed no significant cross-reactivity with closely and distantly related bacteria such as Escherichia coli MTCC 3221, Escherichia coli O78:H11 MTCC 723, and Bacillus subtilis MTCC 736. Consequently, the results obtained in our study demonstrate the possible application of the developed DNA sensor chip for detecting E. coli O157:H7 ATCC 43895 in real water samples as well.

Keywords: capacitance, DNA sensor, Escherichia coli O157:H7, open reading frame marker

Procedia PDF Downloads 138
3736 Sustainable Mangrove Environment and Biodiversity of Gastropods and Crabs: A Case Study on the Effect of Mangrove Replantation under Ecotourism and Restoration in Ko Libong, Trang, Thailand

Authors: Wah Wah Min

Abstract:

The relative abundance and diversity of gastropods and crabs were assessed in mangrove areas of Ko Libong, Kantang district, Trang, Thailand in June 2022. Two sample sites (I and II) were studied. Site I was replanted under ecotourism, whereas site II represented protected, naturally restored mangroves. This study aimed to assess faunal diversity and how it could become re-established and come to resemble that of naturally restored mangroves. There was one sample plot at each study site, measuring 10 m x 25 m at site I and 20 m x 30 m at site II. Samples were taken randomly from each plot using quadrats of 1 m² at site I and 3 m² at site II, with four quadrats in total at each site. Species richness (S), the Shannon index (H'), the evenness index (J'), vegetative measurements, and physico-chemical parameters were calculated for each site. Seventeen gastropod species belonging to 11 families and six crab species in two families were collected across the two study sites. Among the gastropods, Nerita planospira exhibited the highest relative abundance (53.45%, category C) with a low population density (1.61 individuals/m²), observed at study site II; among the crabs, Parasesarma plicatum did so (83.33%, category C) with a low population density (0.33 individuals/m²). The diversity indices of gastropod species were higher at study site I (S = 12, H' = 2.27, J' and SDI = 0.91) than at study site II (S = 7, H' = 1.22, J' and SDI = 0.63 and 0.62). For the crabs, S = 4, H' = 1.33, and J' and SDI = 0.96 and 0.9 at study site I, and S = 2, H' = 0.64, and J' and SDI = 0.92 and 0.67 at site II. Overall, the higher species diversity indices of study site I can be categorized as "very equal", a very good category according to the evenness criterion (>0.81).
Such gains can be extended by increasing restoration sites through ecotourism replanting programs to achieve the goals of sustainable development for mangrove conservation; long-term studies are required to confirm this hypothesis.
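For reference, the diversity indices used above (S, H', J') can be computed as follows; the abundance counts in the example are hypothetical, not the Ko Libong data:

```python
import math

def diversity_indices(counts):
    """Species richness S, Shannon index H' and Pielou evenness J' = H'/ln(S)."""
    counts = [c for c in counts if c > 0]
    n = sum(counts)
    s = len(counts)
    h = -sum((c / n) * math.log(c / n) for c in counts)
    j = h / math.log(s) if s > 1 else 0.0
    return s, h, j

# Hypothetical abundance counts for seven species (illustration only).
s, h, j = diversity_indices([31, 12, 8, 5, 4, 2, 1])
```

A more even split of individuals across the same number of species pushes J' toward 1, which is the basis of the "very equal" category cited in the abstract.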

Keywords: biodiversity, ecotourism, restoration, population

Procedia PDF Downloads 110
3735 Effectiveness of High-Intensity Interval Training in Overweight Individuals between 25-45 Years of Age Registered in Sports Medicine Clinic, General Hospital Kalutara

Authors: Dimuthu Manage

Abstract:

Introduction: The prevalence of obesity and obesity-related non-communicable diseases is becoming a massive health concern worldwide. Physical activity is recognized as an effective solution to this problem, but published data on the effectiveness of High-Intensity Interval Training (HIIT) in improving health parameters in overweight and obese individuals in Sri Lanka are sparse; hence, this study was conducted. Methodology: This quasi-experimental study was conducted at the Sports Medicine Clinic, General Hospital, Kalutara. Participants engaged in a programme of HIIT three times per week for six weeks. Data collection was based on precise measurements using structured and validated methods. Ethical clearance was obtained. Results: Forty-eight participants registered for the study, and only 52% completed it. The mean age was 32 (SD = 6.397) years, and 64% were male. All the assessed anthropometric measurements (i.e., waist circumference (P<0.001), weight (P<0.001) and BMI (P<0.001)), body fat percentage (P<0.001), VO2 max (P<0.001), and lipid profile (i.e., HDL (P=0.016), LDL (P<0.001), cholesterol (P<0.001), triglycerides (P<0.010) and LDL:HDL (P<0.001)) showed statistically significant improvement after the HIIT programme. Conclusions: This study confirms HIIT as a time-saving and effective exercise method that helps prevent obesity as well as non-communicable diseases. HIIT markedly ameliorates body anthropometry, fat percentage, cardiopulmonary status, and lipid profile in overweight and obese individuals. As with the majority of studies, the design of the current study is subject to some limitations. First, the study used a single-group design; a comparative design contrasting HIIT with other training methods would have given the findings more validity.
Second, although validated tools were used to measure the variables, and the same tools were used on pre- and post-exercise occasions with the available facilities, it would have been better to measure some of them using gold-standard methods. This evidence should therefore be further assessed in larger-scale trials using comparison groups to generalize the efficacy of the HIIT exercise programme.
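The pre/post comparisons reported above amount to paired significance tests; a minimal sketch on synthetic data (not the study's measurements) is:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic pre/post BMI values for 25 completers (illustrative only).
pre = rng.normal(29.0, 2.0, size=25)
post = pre - rng.normal(1.2, 0.4, size=25)   # assumed mean reduction ~1.2 kg/m^2

# Paired t statistic computed directly: t = mean(d) / (sd(d) / sqrt(n)).
d = pre - post
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(d.size))
```

A t statistic this far above conventional critical values (about 2.06 for 24 degrees of freedom at P = 0.05) corresponds to the P < 0.001 improvements reported in the abstract.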

Keywords: HIIT, lipid profile, BMI, VO2 max

Procedia PDF Downloads 61
3734 Sigma-Delta ADCs Converter a Study Case

Authors: Thiago Brito Bezerra, Mauro Lopes de Freitas, Waldir Sabino da Silva Júnior

Abstract:

Sigma-Delta A/D converters have been proposed as a practical solution for A/D conversion at high rates because of their simplicity and robustness to circuit imperfections, whereas traditional converters are more difficult to implement in VLSI technology: conventional conversion methods need precise analog components in their filters and conversion circuits, and are more vulnerable to noise and interference. This paper analyzes the architecture, function, and application of Sigma-Delta analog-to-digital (A/D) converters to overcome these difficulties, showing some simulations using the Simulink and Multisim software.
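A first-order sigma-delta modulator of the kind analyzed above can be sketched in a few lines; this is an idealized discrete-time model (1-bit quantizer, unit feedback), not the paper's Simulink/Multisim setup:

```python
import numpy as np

def first_order_sigma_delta(x):
    """First-order sigma-delta modulator: integrator + 1-bit quantizer + feedback."""
    integrator = 0.0
    out = np.empty_like(x)
    for i, sample in enumerate(x):
        # Accumulate the difference between the input and the fed-back output.
        integrator += sample - (out[i - 1] if i else 0.0)
        out[i] = 1.0 if integrator >= 0 else -1.0
    return out

# Heavily oversampled DC input: the 1-bit stream's average tracks the input.
x = np.full(256 * 64, 0.3)        # constant input at 0.3 (full scale = +/-1)
bits = first_order_sigma_delta(x)
recovered = bits.mean()           # crude decimation: plain averaging
```

The averaging step stands in for the decimation filter: the density of +1 pulses encodes the input level, which is the core idea behind the noise-shaping robustness discussed in the abstract.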

Keywords: analysis, oversampling modulator, A/D converters, sigma-delta

Procedia PDF Downloads 322
3733 Investigating the Relative Priority of the Factors Affecting Customer Satisfaction in Gaining the Competitive Advantage in Pars-Khazar Company

Authors: Samaneh Pouyanfar, Michael Oliff

Abstract:

The home appliance industry may be one of the most competitive industries, and what can guarantee survival in this industry is discovering superior services. A drive to provide quality products and services plays an important role in this industry, because discovering such services is vital to manufacturing organizations' survival and profitability. Given the importance of the topic, this paper attempts to investigate the relative priority of the factors influencing customer satisfaction in gaining competitive advantage in the Pars-Khazar Company. In total, 96 executives of the Pars-Khazar Company were surveyed in a census. For this purpose, after reviewing the research literature and conducting in-depth interviews with pundits and experts active in the industry, the research questionnaire was constructed from the variables affecting customer satisfaction and the components determining business competitive advantage. Content validity was established by expert judgement. The reliability of each construct was measured using Cronbach's alpha coefficient; since the value of Cronbach's alpha was higher than 0.7 for each construct, the internal consistency of the statements was high and the reliability of the questionnaire was acceptable. Data analysis was done with the Kolmogorov-Smirnov test and the Friedman test using SPSS software. The results showed that, among the factors affecting customer satisfaction, the history of the trade name (brand), familiarity with the product brand, brand reputation, and safety have the highest priority, respectively, and the variable of firm growth has the highest priority among the components determining the performance of competitive advantage.
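The reliability check described above can be reproduced in outline; the snippet below computes Cronbach's alpha for a hypothetical Likert-score matrix (the actual questionnaire data are not available here):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses from 96 respondents to 4 items,
# driven by one latent trait so the items cohere (illustration only).
rng = np.random.default_rng(2)
latent = rng.normal(3.0, 1.0, size=(96, 1))
scores = np.clip(np.rint(latent + rng.normal(0, 0.5, size=(96, 4))), 1, 5)
alpha = cronbach_alpha(scores)
```

Values above the 0.7 threshold cited in the abstract indicate acceptable internal consistency of a construct's items.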

Keywords: customer satisfaction, competitive advantage, brand history, safety, growth

Procedia PDF Downloads 224
3732 Study of Adaptive Filtering Algorithms and the Equalization of Radio Mobile Channel

Authors: Said Elkassimi, Said Safi, B. Manaut

Abstract:

This paper presents a study of algorithms for equalizing the radio mobile channel: equalization with the ZF and MMSE criteria, applied to the Bran A channel, and the adaptive filtering algorithms LMS and RLS used to estimate the equalizer filter parameters, i.e., to track the channel estimate, reflect its temporal variations, and reduce the error in the transmitted signal. The performance of the ZF and MMSE equalizers is first examined in the noiseless case, followed by a comparison of the performance of the LMS and RLS algorithms.
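A minimal sketch of the LMS update described above, run on a toy dispersive channel (the Bran A model and the RLS variant are not reproduced here), is:

```python
import numpy as np

rng = np.random.default_rng(3)

n, n_taps, mu, delay = 5000, 8, 0.01, 4
tx = rng.choice([-1.0, 1.0], size=n)        # known BPSK training sequence
channel = np.array([1.0, 0.4, 0.2])         # hypothetical dispersive channel taps
rx = np.convolve(tx, channel)[:n]           # noiseless received signal

taps = np.zeros(n_taps)
sq_err = []
for i in range(n_taps, n):
    window = rx[i - n_taps:i][::-1]         # most recent sample first
    y = taps @ window                       # equalizer output
    e = tx[i - delay] - y                   # error vs. delayed training symbol
    taps += mu * e * window                 # LMS stochastic-gradient update
    sq_err.append(e * e)

mse_early = np.mean(sq_err[:500])
mse_late = np.mean(sq_err[-500:])
```

The squared error shrinks as the taps converge toward a delayed inverse of the channel; RLS would reach the same solution in fewer iterations at higher per-step cost.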

Keywords: adaptive filtering, equalizer, LMS, RLS, Bran A, Proakis (B), MMSE, ZF

Procedia PDF Downloads 308
3731 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm

Authors: Zachary Huffman, Joana Rocha

Abstract:

Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The sound emitted can be represented by the pressure-fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those of Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for the pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol’yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy; in general, they are most accurate under the specific Reynolds and Mach numbers they were developed for, and less accurate under other flow conditions. Despite this, recent research into alternative methods for deriving the models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning.
The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
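The stepwise selection idea can be sketched as follows; the paper's work was done in R on wind-tunnel PSD data, whereas this is an illustrative Python version (forward selection by AIC) on synthetic regressors:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data: three informative predictors, two pure-noise predictors.
n, names = 200, ["Re", "Mach", "delta", "noise1", "noise2"]
X = rng.standard_normal((n, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.3, n)

def aic(features):
    """AIC of an OLS fit using the given feature columns (plus intercept)."""
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in features])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta) ** 2)
    return n * np.log(rss / n) + 2 * A.shape[1]

# Forward stepwise selection: add the predictor that most lowers AIC,
# stop when no addition improves it.
selected, remaining = [], list(range(5))
while remaining:
    best = min(remaining, key=lambda j: aic(selected + [j]))
    if aic(selected + [best]) < aic(selected):
        selected.append(best)
        remaining.remove(best)
    else:
        break
chosen = sorted(names[j] for j in selected)
```

The AIC penalty is what performs the feature selection mentioned in the abstract: uninformative inputs rarely reduce the residual sum of squares enough to justify an extra parameter.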

Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations

Procedia PDF Downloads 132
3730 Bacterial Interactions of Upper Respiratory Tract Microbiota

Authors: Sarah Almuhayya, Andrew Mcbain, Gavin Humphreys

Abstract:

Background: The microbiome of the upper respiratory tract (URT) has received less research attention than other body sites. This study aims to investigate the microbial ecology of the human URT with a focus on the antagonism between corynebacteria and staphylococci. Methods: Mucosal swabs were collected from the anterior nares and nasal turbinates of 20 healthy adult subjects. Genomic DNA amplification targeting the V4 region of the 16S rRNA gene was conducted and analyzed using QIIME. Nasal swab isolates were cultured and identified using near full-length sequencing of the 16S rRNA gene. Isolates identified as corynebacteria or staphylococci were typed using rep-PCR. Antagonism was determined using an agar-based inhibition assay. Results: Four major bacterial phyla (Actinobacteria, Bacteroidetes, Firmicutes, and Proteobacteria) were identified in all volunteers. The typing of cultured staphylococci and corynebacteria suggested that intra-individual strain diversity was limited. Analysis of the generated nasal microbiota profiles suggested an inverse correlation in relative abundance between staphylococci and corynebacteria. Despite this apparent antagonism between the genera, little inhibition was observed on agar: of 1000 pairwise interactions, observable zones of inhibition were reported only between a single strain of C. pseudodiphtheriticum and S. aureus. Imaging under EM revealed this effect to be bactericidal, with clear lytic effects on staphylococcal cell morphology. Conclusion: The nasal microbiota is complex, but culturable staphylococci and corynebacteria were limited in terms of clone type, and the inverse correlation in relative abundance between these genera suggests antagonism or competition between these taxonomic groups.

Keywords: nasal, microbiota, S. aureus, microbial interaction

Procedia PDF Downloads 104
3729 Features of Normative and Pathological Realizations of Sibilant Sounds for Computer-Aided Pronunciation Evaluation in Children

Authors: Zuzanna Miodonska, Michal Krecichwost, Pawel Badura

Abstract:

Sigmatism (lisping) is a speech disorder in which sibilant consonants are mispronounced. Diagnosis of this phenomenon is usually based on auditory assessment. However, progress in speech analysis techniques creates the possibility of developing computer-aided sigmatism diagnosis tools. The aim of the study is to statistically verify whether specific acoustic features of sibilant sounds may be related to pronunciation correctness. Such knowledge can be of great importance when implementing classifiers and designing novel tools for automatic evaluation of sibilant pronunciation. The study covers analysis of various speech signal measures, including features proposed in the literature for describing normative sibilant realization. Amplitudes and frequencies of three fricative formants (FF) are extracted based on local spectral maxima of the friction noise. Skewness, kurtosis, four normalized spectral moments (SM), and 13 mel-frequency cepstral coefficients (MFCC) with their 1st and 2nd derivatives (13 Delta and 13 Delta-Delta MFCC) are included in the analysis as well. The resulting feature vector contains 51 measures. The experiments are performed on a speech corpus containing words with the selected sibilant sounds (/ʃ, ʒ/) pronounced by 60 preschool children with proper pronunciation or with natural pathologies. In total, 224 /ʃ/ segments and 191 /ʒ/ segments are employed in the study. The Mann-Whitney U test is employed for the comparison of sigmatism and normative pronunciation. Statistically significant differences at p < 0.05 are obtained for most of the proposed features between the children divided into these two groups. All spectral moments and fricative formants appear to be distinctive between pathological and proper pronunciation. These metrics describe the friction noise characteristic of sibilants, which makes them particularly promising for use in sibilant evaluation tools.
The correspondences found between phoneme feature values and an expert evaluation of pronunciation correctness encourage the involvement of speech analysis tools in the diagnosis and therapy of sigmatism. The proposed feature extraction methods could be used in computer-assisted sigmatism diagnosis or therapy systems.
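The spectral-moment features named above can be computed directly from a power spectrum; the snippet below does so for synthetic band-passed noise standing in for a sibilant segment (the 3-6 kHz band and sampling rate are assumptions, not the corpus settings):

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 44_100
n_idx = np.arange(-64, 65)

def lowpass_taps(fc):
    """Truncated ideal low-pass FIR prototype with cutoff fc (Hz)."""
    return 2 * fc / fs * np.sinc(2 * fc / fs * n_idx)

# Synthetic "friction noise": white noise band-passed to roughly 3-6 kHz.
taps = lowpass_taps(6000) - lowpass_taps(3000)
sibilant = np.convolve(rng.standard_normal(fs // 10), taps, mode="same")

power = np.abs(np.fft.rfft(sibilant)) ** 2
freqs = np.fft.rfftfreq(sibilant.size, d=1 / fs)
p = power / power.sum()                                # spectrum as a distribution

centroid = np.sum(freqs * p)                           # 1st spectral moment
spread = np.sqrt(np.sum((freqs - centroid) ** 2 * p))  # 2nd moment (std. dev.)
skew = np.sum(((freqs - centroid) / spread) ** 3 * p)  # normalized 3rd moment
kurt = np.sum(((freqs - centroid) / spread) ** 4 * p)  # normalized 4th moment
```

Shifts in these moments between pathological and normative realizations are exactly the kind of group difference the Mann-Whitney U test in the study probes.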

Keywords: computer-aided pronunciation evaluation, sigmatism diagnosis, speech signal analysis, statistical verification

Procedia PDF Downloads 294
3728 Inverse Scattering of Two-Dimensional Objects Using an Enhancement Method

Authors: A.R. Eskandari, M.R. Eskandari

Abstract:

A complete 2D identification algorithm for dielectric and multiple objects immersed in air is presented. The employed technique consists of initially retrieving the shape and position of the scattering object using a linear sampling method and then determining the electric permittivity and conductivity of the scatterer using adjoint sensitivity analysis. This inversion algorithm offers high computational speed and efficiency, and it can be generalized to any scatterer structure. The method is also robust with respect to noise. The numerical results clearly show that this hybrid approach provides accurate reconstructions of various objects.

Keywords: inverse scattering, microwave imaging, two-dimensional objects, Linear Sampling Method (LSM)

Procedia PDF Downloads 379
3727 Quantification of Dispersion Effects in Arterial Spin Labelling Perfusion MRI

Authors: Rutej R. Mehta, Michael A. Chappell

Abstract:

Introduction: Arterial spin labelling (ASL) is an increasingly popular perfusion MRI technique, in which arterial blood water is magnetically labelled in the neck before flowing into the brain, providing a non-invasive measure of cerebral blood flow (CBF). The accuracy of ASL CBF measurements, however, is hampered by dispersion effects; the distortion of the ASL labelled bolus during its transit through the vasculature. In spite of this, the current recommended implementation of ASL – the white paper (Alsop et al., MRM, 73.1 (2015): 102-116) – does not account for dispersion, which leads to the introduction of errors in CBF. Given that the transport time from the labelling region to the tissue – the arterial transit time (ATT) – depends on the region of the brain and the condition of the patient, it is likely that these errors will also vary with the ATT. In this study, various dispersion models are assessed in comparison with the white paper (WP) formula for CBF quantification, enabling the errors introduced by the WP to be quantified. Additionally, this study examines the relationship between the errors associated with the WP and the ATT – and how this is influenced by dispersion. Methods: Data were simulated using the standard model for pseudo-continuous ASL, along with various dispersion models, and then quantified using the formula in the WP. The ATT was varied from 0.5s-1.3s, and the errors associated with noise artefacts were computed in order to define the concept of significant error. The instantaneous slope of the error was also computed as an indicator of the sensitivity of the error with fluctuations in ATT. Finally, a regression analysis was performed to obtain the mean error against ATT. Results: An error of 20.9% was found to be comparable to that introduced by typical measurement noise. The WP formula was shown to introduce errors exceeding 20.9% for ATTs beyond 1.25s even when dispersion effects were ignored. 
Using a Gaussian dispersion model, a mean error of 16% was introduced by using the WP, and a dispersion threshold of σ=0.6 was determined, beyond which the error was found to increase considerably with ATT. The mean error ranged from 44.5% to 73.5% when other physiologically plausible dispersion models were implemented, and the instantaneous slope varied from 35 to 75 as dispersion levels were varied. Conclusion: It has been shown that the WP quantification formula holds only within an ATT window of 0.5 to 1.25s, and that this window gets narrower as dispersion occurs. Provided that the dispersion levels fall below the threshold evaluated in this study, however, the WP can measure CBF with reasonable accuracy if dispersion is correctly modelled by the Gaussian model. However, substantial errors were observed with other common models for dispersion with dispersion levels similar to those that have been observed in literature.
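For reference, the single-PLD quantification formula of the white paper (Alsop et al., 2015) that the study evaluates can be written as a function; the example values for the post-labelling delay, label duration and constants are typical recommendations, not the study's settings:

```python
import math

def wp_cbf(delta_m, si_pd, pld, tau, t1b=1.65, alpha=0.85, lam=0.9):
    """Single-PLD pCASL CBF (mL/100 g/min) per the ASL white-paper model.

    delta_m: control-label difference signal; si_pd: proton-density signal;
    pld: post-labelling delay (s); tau: label duration (s); t1b: blood T1 (s);
    alpha: labelling efficiency; lam: blood-brain partition coefficient.
    Dispersion and transit effects are deliberately ignored, as in the white paper.
    """
    return (6000 * lam * delta_m * math.exp(pld / t1b)
            / (2 * alpha * t1b * si_pd * (1 - math.exp(-tau / t1b))))

# Illustrative grey-matter-like signal ratio of 0.9%.
cbf = wp_cbf(delta_m=0.009, si_pd=1.0, pld=1.8, tau=1.8)
```

Because the model contains no dispersion or arterial-transit-time term, any mismatch between the true ATT and the assumed PLD propagates directly into the CBF estimate, which is the error source the study quantifies.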

Keywords: arterial spin labelling, dispersion, MRI, perfusion

Procedia PDF Downloads 366
3726 Site Investigations and Mitigation Measures of Landslides in Sainj and Tirthan Valley of Kullu District, Himachal Pradesh, India

Authors: Laxmi Versain, R. S. Banshtu

Abstract:

Landslides are among the most commonly occurring geological hazards in the mountainous regions of the Himalaya. This mountainous zone faces a large number of seismic disturbances, climatic changes, and topographic changes due to increasing urbanization, which has led several researchers to search for the most suitable methodologies for inferring reliable results. Landslide Hazard Zonation (LHZ) has become widely accepted as a suitable method for identifying the factors that trigger landslide phenomena on higher reaches. The most vulnerable zones, or zones of weakness, are identified, and safe mitigation measures are suggested to mitigate hazards and channel the study of an affected area. The use of the LHZ methodology in relative zones of weakness depends upon the data available for the particular site. The causative factors are identified, and data are made available to infer the results. Factors like seismicity in a mountainous region are closely associated with making zones of thrusts, faults, or lineaments more vulnerable. Data related to soil, terrain, rainfall, geology, slope, and the nature of the terrain vary across landforms and areas; thus, the relative causes are identified and classified by giving a specific weightage to each parameter. The factors that cause slope instability are numerous and can be grouped to infer the potential modes of failure. The triggering factors of landslides in the mountains are not uniform. Urbanization has climbed like a ladder, and concrete jungles are emerging at a very fast pace in the hilly regions of the Himalayas; the local terrain has been extensively modified, and hence instability is being triggered in several zones at a very fast pace. More strategic and pronounced methods are required to reduce the effect of landslides.
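The weighted classification of causative factors mentioned above is commonly implemented as a weighted-overlay hazard index; the weights, ratings and category thresholds below are purely hypothetical placeholders, since real LHZ studies calibrate them from field and remote-sensing data:

```python
# Hypothetical per-factor weights (summing to 1) and class ratings for one
# map cell, on a 1 (stable) to 10 (unstable) scale. Illustration only.
weights = {"slope": 0.25, "geology": 0.20, "rainfall": 0.20,
           "lineament_proximity": 0.20, "land_use": 0.15}
ratings = {"slope": 8, "geology": 6, "rainfall": 9,
           "lineament_proximity": 7, "land_use": 5}

# Weighted-overlay index: weighted sum of the factor ratings.
hazard_index = sum(weights[f] * ratings[f] for f in weights)
category = ("low" if hazard_index < 4 else
            "moderate" if hazard_index < 7 else "high")
```

Repeating this sum for every cell of a gridded study area yields the zonation map from which vulnerable zones are delineated.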

Keywords: zonation, LHZ, susceptible, weightages, methodology

Procedia PDF Downloads 193
3725 The Misuse of Free Cash and Earnings Management: An Analysis of the Extent to Which Board Tenure Mitigates Earnings Management

Authors: Michael McCann

Abstract:

Managerial theories propose that, in joint stock companies, executives may be tempted to waste excess free cash on unprofitable projects to keep control of resources. In order to conceal their projects' poor performance, they may seek to engage in earnings management. On the one hand, managers may manipulate earnings upwards in order to post 'good' performances and safeguard their position. On the other hand, since managers' pursuit of unrewarding investments is likely to lead to low long-term profitability, managers will use negative accruals to reduce the current year's earnings, smoothing earnings over time in order to conceal the negative effects. Agency models argue that boards of directors are delegated by shareholders to ensure that companies are governed properly; part of that responsibility is ensuring the reliability of financial information. Analyses of the impact of board characteristics, particularly board independence, on the misuse of free cash flow and earnings management find conflicting evidence. However, existing characterizations of board independence do not account for such directors gaining firm-specific knowledge over time, which influences their monitoring ability. Further, there is little analysis of the influence of the relative experience of independent directors and executives on decisions surrounding the use of free cash. This paper contributes to the literature on the heterogeneous characteristics of boards by investigating the influence of independent director tenure on earnings management, and the relative tenures of independent directors and Chief Executives. A balanced panel dataset comprising 51 companies across 11 annual periods from 2005 to 2015 is used for the analysis. In each annual period, firms were classified as conducting earnings management if they had discretionary accruals in the bottom quartile (downwards) or top quartile (upwards) of the distributed values for the sample.
Logistic regressions were conducted to determine the marginal impact of independent board tenure and a number of control variables on the probability of conducting earnings management. The findings indicate that both absolute and relative measures of board independence and experience do not have a significant impact on the likelihood of earnings management. It is the level of free cash flow which is the major influence on the probability of earnings management: higher free cash flow increases the probability of earnings management significantly. The research also investigates whether board monitoring of earnings management is contingent on the level of free cash flow; however, the results suggest that board monitoring is not amplified when free cash flow is higher. This suggests that the extent of earnings management in companies is determined by a range of company-, industry- and situation-specific factors.
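The classification-plus-logistic-regression design can be sketched on synthetic firm-years; the coefficients and data-generating process below are invented for illustration (free cash flow matters, tenure does not, mirroring the reported findings), and the fit uses plain gradient ascent rather than a statistics package:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic panel: 51 firms x 11 years, standardized regressors.
n = 561
fcf = rng.normal(0.0, 1.0, n)              # free cash flow
tenure = rng.normal(0.0, 1.0, n)           # independent-director tenure
true_logit = -1.0 + 1.2 * fcf + 0.0 * tenure
em = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Logistic regression fitted by gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n), fcf, tenure])
beta = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.01 * X.T @ (em - p) / n
b_const, b_fcf, b_tenure = beta
```

The recovered coefficient on free cash flow is large and positive while the tenure coefficient stays near zero, the qualitative pattern the paper reports.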

Keywords: corporate governance, boards of directors, agency theory, earnings management

Procedia PDF Downloads 222
3724 Affordable and Environmental Friendly Small Commuter Aircraft Improving European Mobility

Authors: Diego Giuseppe Romano, Gianvito Apuleo, Jiri Duda

Abstract:

Mobility is one of the most important societal needs, for amusement, business activities, and health. Transport needs are thus continuously increasing, with a consequent increase in traffic congestion and pollution. Aeronautical efforts aim at smarter use of infrastructure and at introducing greener concepts. A possible solution to the abovementioned issues is the development of a Small Air Transport (SAT) system, able to guarantee operability from today's underused airfields in an affordable and green way, while also helping to reduce travel time. In the framework of Horizon 2020, the EU (European Union) has funded the Clean Sky 2 SAT TA (Transverse Activity) initiative to address market innovations able to reduce SAT operational cost and environmental impact while ensuring good levels of operational safety. Nowadays, most of the key technologies to improve passenger comfort and to reduce community noise, DOC (Direct Operating Costs) and pilot workload for SAT have reached an intermediate level of maturity, TRL (Technology Readiness Level) 3/4. Thus, the key technologies must be developed, validated and integrated on dedicated ground and flying aircraft demonstrators to reach higher TRL levels (5/6). In particular, SAT TA focuses on the integration at aircraft level of the following technologies [1]: 1) Low-cost composite wing box and engine nacelle using OoA (Out of Autoclave) technology, LRI (Liquid Resin Infusion) and advanced automation processes; 2) Innovative high-lift devices, allowing aircraft operations from short airfields (< 800 m); 3) Affordable small-aircraft manufacturing of metallic fuselage using FSW (Friction Stir Welding) and LMD (Laser Metal Deposition); 4) Affordable fly-by-wire architecture for small aircraft (CS23 certification rules); 5) More electric systems replacing pneumatic and hydraulic systems (high-voltage EPGDS -Electrical Power Generation and Distribution System-, hybrid de-ice system, landing gear and brakes);
6) Advanced avionics for small aircraft, reducing pilot workload; 7) Advanced cabin comfort with new interior materials and more comfortable seats; 8) A new generation of turboprop engine with reduced fuel consumption, emissions, noise and maintenance costs for 19-seat aircraft; and 9) An alternative diesel engine for 9-seat commuter aircraft. To address the abovementioned market innovations, two different platforms have been designed: the Reference and the Green aircraft. The Reference aircraft is a virtual aircraft designed with 2014 technologies and an existing engine assuring the requested take-off power; the Green aircraft is designed integrating the technologies addressed in Clean Sky 2. Preliminary integration of the proposed technologies shows an encouraging reduction in the emissions and operational costs of small aircraft: about 20% CO2 reduction, about 24% NOx reduction, about 10 dB(A) noise reduction at the measurement point, and about 25% DOC reduction. A detailed description of the performed studies, analyses and validations for each technology, as well as the expected benefits at aircraft level, is reported in the present paper.

Keywords: affordable, European, green, mobility, technologies development, travel time reduction

Procedia PDF Downloads 94
3723 Migration in Times of Uncertainty

Authors: Harman Jaggi, David Steinsaltz, Shripad Tuljapurkar

Abstract:

Understanding the effect of fluctuations on populations is crucial in the context of increasing habitat fragmentation, climate change, and biological invasions, among others. Migration in response to environmental disturbances enables populations to escape unfavorable conditions, benefit from new environments, and thereby ride out fluctuations in variable environments. Would populations disperse if there were no uncertainty? Karlin showed in 1982 that when sub-populations experience distinct but fixed growth rates at different sites, greater mixing of populations lowers the overall growth rate relative to that of the most favorable site. Here we ask if and when environmental variability favors migration over no migration. Specifically, in random environments, would a small amount of migration increase the overall long-run growth rate relative to the zero-migration case? We use analysis and simulations to show how the long-run growth rate changes with the migration rate. Our results show that when fitness (dis)advantages fluctuate over time across sites, migration may allow populations to benefit from variability. When there is one best site with the highest growth rate, the effect of migration on the long-run growth rate depends on the difference in expected growth between sites, scaled by the variance of the difference. When the variance is large, there is a substantial probability of an inferior site experiencing a higher growth rate than its average; thus, a high variance can compensate for a difference in average growth rates between sites. Positive correlations in growth rates across sites favor less migration. With multiple sites and large fluctuations, the length of the shortest cycle (excursion) from the best site (on average) matters, and we explore the interplay between excursion length, average differences between sites, and the size of fluctuations.
Our findings have implications for conservation biology: even when there are superior sites in a sea of poor habitats, variability and habitat quality across space may be key to determining the importance of migration.
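The two-site setting the abstract describes can be sketched with a small Monte Carlo simulation. The growth-rate means, the noise level, and the symmetric migration matrix below are illustrative assumptions for the sketch, not parameters from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def long_run_growth(m, T=20000):
    """Estimate the long-run growth rate of a two-site population
    with symmetric migration rate m and fluctuating log growth rates."""
    mu = np.array([0.05, 0.0])   # assumed mean log growth: site 1 is better on average
    sigma = 0.3                  # assumed size of fluctuations
    M = np.array([[1 - m, m],    # symmetric migration matrix
                  [m, 1 - m]])
    n = np.array([0.5, 0.5])     # normalized population vector
    log_growth = 0.0
    for _ in range(T):
        r = np.exp(mu + sigma * rng.standard_normal(2))
        n = M @ (r * n)          # local growth, then migration
        s = n.sum()
        log_growth += np.log(s)  # accumulate log of total growth factor
        n /= s                   # renormalize to avoid overflow
    return log_growth / T

for m in (0.0, 0.01, 0.1):
    print(f"m={m:.2f}  long-run growth rate ~ {long_run_growth(m):.4f}")
```

Comparing the printed estimates across migration rates is the numerical analogue of the paper's question: whether a small positive m beats m = 0 when fluctuations are large.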

Keywords: migration, variable-environments, random, dispersal, fluctuations, habitat-quality

Procedia PDF Downloads 134
3722 Information-Controlled Laryngeal Feature Variations in Korean Consonants

Authors: Ponghyung Lee

Abstract:

This study investigates variations in Korean consonants that center on the laryngeal features of the sounds concerned, to the exclusion of others. Our fundamental premise is that the weak contrast among the segments in question may account for the instability of the current realizations of these consonants. Moreover, we assume that a set of notions measuring the communicative efficiency of linguistic units significantly influences the triggering of those variations. To this end, we computed the surprisal, entropic contribution, and relative contrastiveness associated with Korean obstruent consonants. We found that the information-theoretic perspective lends considerable support to our approach: the variant realizations, both chronological and stylistic, prove to be profoundly affected by the information-theoretic factors enumerated above. For the biblical proper names, we use the Georgetown University CQP Web-Bible corpora. From 8 texts (4 from the Old Testament and 4 from the New Testament) among the total of 64 texts, we extracted 199 samples. We address the issue of laryngeal feature variations associated with Korean obstruent consonants under the presumption that the variations stem from the weak contrast among the triad manifestations of laryngeal features. The variants emerge from diverse sources, both chronological and stylistic: Christian biblical texts, ordinary casual speech, shifts in loanword adaptation over time, and ideophones. To discuss what these variants are really like from the perspective of Information Theory, it is necessary to look closely at the data. Among them, the massive changes in the loanword adaptation of proper nouns during the centennial history of Korean Christianity draw our special attention.
We searched 199 types of initially capitalized words among 45,528 word tokens, which account for around 5% of the total 901,701 word tokens (12,786 word types) in the Georgetown University CQP Web-Bible corpora. We focus on the shift of the laryngeal features incorporated into word-initial consonants, which are available through two distinct versions of the Korean Bible: one published in the 1960s for Protestants, and the other published in the 1990s for the Catholic Church. Of these proper names, we have closely traced the adaptation of plain obstruents, e.g. /b, d, g, s, ʤ/, in the sources. The results show that as much as 41% of the extracted proper names exhibit variations: 37% in terms of aspiration and 4% in terms of tensing. This study set out to shed light on the question: to what extent can the variations in the laryngeal features of Korean obstruent consonants be attributed to the communicative aspects of linguistic activity? In this vein, the concerted effects of the triad of surprisal, entropic contribution, and relative contrastiveness can be credited with the ups and downs in feature specification, despite some lingering controversy over the role of surprisal.
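Of the triad of measures invoked here, surprisal and entropy have standard definitions that can be sketched directly from category counts. The toy counts below are hypothetical and do not come from the Bible corpora used in the study:

```python
import math
from collections import Counter

def surprisal(counts, x):
    """Surprisal of category x in bits: -log2 p(x)."""
    total = sum(counts.values())
    return -math.log2(counts[x] / total)

def entropy(counts):
    """Entropy of the distribution in bits: the expected surprisal."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical counts of word-initial laryngeal categories
counts = Counter({"plain": 120, "aspirated": 60, "tense": 20})
print(surprisal(counts, "tense"))   # rare category -> high surprisal
print(entropy(counts))              # expected surprisal over all categories
```

A category's entropic contribution is then its term in the entropy sum, -p(x) * log2 p(x); relative contrastiveness is a study-specific measure not reconstructed here.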

Keywords: entropic contribution, laryngeal feature variation, relative contrastiveness, surprisal

Procedia PDF Downloads 123