Search results for: linearly constrained minimum variance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3532

3382 Phytochemical and in vitro Antimicrobial Screening of Extract of Sunflower Chrysanthlum indicum

Authors: I. Ibrahim, A. Mann

Abstract:

Phytochemical screening of crude Chrysanthlum indicum extract revealed the presence of carbohydrates, flavonoids, saponins, tannins, alkaloids, a steroidal nucleus and cardiac glycosides. The extract was evaluated against some pathogenic organisms by the agar dilution method. The minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) of the active extract of Chrysanthlum indicum show that the extract could be a potential source of antimicrobial agents.

Keywords: extract, phytochemicals, antimicrobial, antibacterial, Chrysanthlum indicum

Procedia PDF Downloads 541
3381 A Priority Based Imbalanced Time Minimization Assignment Problem: An Iterative Approach

Authors: Ekta Jain, Kalpana Dahiya, Vanita Verma

Abstract:

This paper discusses a priority-based imbalanced time minimization assignment problem dealing with the allocation of n jobs to m < n persons, in which the project is carried out in two stages, viz. Stage-I and Stage-II. Stage-I consists of n1 (< m) primary jobs and Stage-II consists of the remaining (n-n1) secondary jobs, which are commenced only after the primary jobs are finished. Each job is to be allocated to exactly one person, and each person has to do at least one job. It is assumed that the nature of the Stage-I jobs is such that one person can do exactly one primary job, whereas a person can do more than one secondary job in Stage-II. In a particular stage, all persons start doing the jobs simultaneously, but if a person is doing more than one job, he does them one after the other in any order. The aim of the proposed study is to find the feasible assignment which minimizes the total time for the two-stage execution of the project. For this, an iterative algorithm is proposed, which, at each iteration, solves a constrained imbalanced time minimization assignment problem to generate a pair of Stage-I and Stage-II times. An algorithm for solving this constrained problem is also developed in the current paper. Later, an alternate combinations-based method to solve the priority-based imbalanced problem is discussed, and a comparative study is carried out. Numerical illustrations are provided in support of the theory.
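
A minimal sketch of the kind of time-minimization (bottleneck) subproblem such iterative schemes solve is shown below; it is a simplification for the balanced case, not the authors' Stage-I/Stage-II algorithm, and the data are hypothetical. It thresholds candidate completion times and tests feasibility with a maximum bipartite matching.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def bottleneck_assignment(times):
    """Minimize the largest completion time of a one-to-one assignment by
    binary-searching over candidate thresholds and testing feasibility with
    a maximum bipartite matching (square person x job matrix assumed)."""
    times = np.asarray(times, dtype=float)
    candidates = np.unique(times)          # every entry is a candidate bottleneck value
    lo, hi, best = 0, len(candidates) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        allowed = csr_matrix(times <= candidates[mid])
        match = maximum_bipartite_matching(allowed, perm_type='column')
        if np.all(match >= 0):             # a perfect matching exists under this threshold
            best, hi = match, mid - 1
        else:
            lo = mid + 1
    return best                            # best[i] = job assigned to person i

rng = np.random.default_rng(0)
t = rng.integers(1, 20, size=(5, 5))
assignment = bottleneck_assignment(t)
print("assignment:", assignment, "bottleneck time:", t[np.arange(5), assignment].max())
```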

Keywords: assignment, imbalanced, priority, time minimization

Procedia PDF Downloads 201
3380 The Hall Coefficient and Magnetoresistance in Rectangular Quantum Wires with Infinitely High Potential under the Influence of a Laser Radiation

Authors: Nguyen Thu Huong, Nguyen Quang Bau

Abstract:

The Hall coefficient (HC) and the magnetoresistance (MR) have been studied in two-dimensional systems. In this work, the HC and the MR in a rectangular quantum wire (RQW) subjected to a crossed DC electric field and magnetic field in the presence of a strong electromagnetic wave (EMW), characterized by its electric field, are studied. Using the quantum kinetic equation for electrons interacting with optical phonons, we obtain analytic expressions for the HC and the MR with a dependence on the magnetic field, the EMW frequency, the temperature of the system and the characteristic length parameters of the RQW. These expressions differ from those obtained for bulk semiconductors and cylindrical quantum wires. The analytical results are applied to GaAs/GaAsAl. For this material, the MR depends on the ratio of the EMW frequency to the cyclotron frequency: the MR reaches a minimum at a ratio of 5/4 and, as this ratio increases, tends towards a saturation value. The HC can take negative or positive values, and each curve has one maximum and one minimum. As the magnetic field increases, the HC is negative, reaches a minimum value and then increases suddenly to a maximum with a positive value. This phenomenon differs from that observed in the cylindrical quantum wire, where no such maximum and minimum appear.

Keywords: hall coefficient, rectangular quantum wires, electron-optical phonon interaction, quantum kinetic equation

Procedia PDF Downloads 457
3379 The Effect of Emotional Support towards Quality of Work Life on Balinese Working Women

Authors: I. Ketut Yoga Adityawira, Putu Ayu Novia Viorica, Komang Rahayu Indrawati

Abstract:

In addition to working and taking care of the family, Balinese women also have a role to play in social activities in Bali, and this affects their quality of work life. One way to reduce the impact of fulfilling these multiple roles is through emotional support. The aim of this research is to find out the effect of emotional support on the quality of work life of Balinese working women. Data were collected using a quasi-experimental method with a pretest-posttest design and analyzed by analysis of variance (ANOVA) in SPSS 17.0 for Windows. The number of subjects in this research is 30, meeting the following criteria: Balinese women, aged 27 to 55 years, with a minimum of two years of work experience, and married. The analysis showed that there is no effect of emotional support on the quality of work life of Balinese working women, with a non-significant probability value of p = 0.304 (p > 0.05).

Keywords: Balinese women, emotional support, quality of work life, working women

Procedia PDF Downloads 183
3378 Experimental Investigation on the Effect of Prestress on the Dynamic Mechanical Properties of Conglomerate Based on 3D-SHPB System

Authors: Wei Jun, Liao Hualin, Wang Huajian, Chen Jingkai, Liang Hongjun, Liu Chuanfu

Abstract:

The Kuqa Piedmont is rich in oil and gas resources and has great development potential in the Tarim Basin, China. However, a huge, thick gravel layer has developed there, with high gravel content, wide distribution and large variation in gravel size, leading to strong heterogeneity. As a result, the drill string is in a state of severe vibration and the drill bit wears seriously while drilling, which greatly reduces the rock-breaking efficiency, and the rock at the bottom of the hole is under a complex load state combining impact and three-dimensional in-situ stress. The dynamic mechanical properties of conglomerate, the main component of the gravel layer, and their influencing factors are the basis of engineering design, efficient rock-breaking methods and theoretical research. Limited by previous experimental techniques, few works have been published on conglomerate, especially under dynamic load. Accordingly, a 3D SHPB system, in which a three-dimensional prestress can be applied to simulate in-situ stress conditions, is adopted for dynamic testing of the conglomerate. The results show that the dynamic strength is obviously higher than the static strength: with zero three-dimensional prestress and a loading strain rate of 81.25~228.42 s-1, the true triaxial equivalent strength is 167.17~199.87 MPa, and the dynamic-to-static strength ratio is 1.61~1.92. The higher the impact velocity, the greater the loading strain rate, the higher the dynamic strength and the greater the failure strain, all of which increase linearly. There is a critical prestress in the impact direction and in the direction perpendicular to it. In the impact direction, while the prestress is less than the critical value, the dynamic strength and the loading strain rate increase linearly; otherwise, the strength decreases slightly and the strain rate decreases rapidly. In the direction perpendicular to the impact load, the strength increases and the strain rate decreases linearly before the critical prestress, and the opposite holds afterwards. The dynamic strength of the conglomerate can be reduced appropriately by reducing the amplitude of the impact load, so that the service life of rock-breaking tools can be prolonged while drilling in strata rich in gravel. The research has important reference significance for speed-increasing technology and theoretical research while drilling in gravel layers.

Keywords: huge thick gravel layer, conglomerate, 3D SHPB, dynamic strength, the deformation characteristics, prestress

Procedia PDF Downloads 159
3377 Towards Computational Fluid Dynamics Based Methodology to Accelerate Bioprocess Scale Up and Scale Down

Authors: Vishal Kumar Singh

Abstract:

Bioprocess development is a time-constrained activity aimed at harnessing the full potential of culture performance in an environment that is not natural to cells. Even with the use of chemically defined media and feeds, a significant amount of time is devoted to identifying the apt operating parameters. In addition, the scale-up of these processes is often accompanied by loss of antibody titer and product quality, which further delays commercialization of the drug product. In such a scenario, the investigation of this disparity in culture performance is done by further experimentation at a smaller scale that is representative of at-scale production bioreactors. These scale-down model developments are also time-intensive. In this study, a computational fluid dynamics-based multi-objective scaling approach is illustrated to speed up process transfer. For the implementation of this approach, a transient multiphase water-air system has been studied in Ansys CFX to visualize the air bubble distribution and the volumetric mass transfer coefficient (kLa) profiles, followed by a design-of-experiments-based parametric optimization approach to define the operational space. The proposed approach is entirely in silico and requires minimum experimentation, thereby rendering a high throughput to the overall process development.

Keywords: bioprocess development, scale up, scale down, computational fluid dynamics, multi-objective, Ansys CFX, design of experiment

Procedia PDF Downloads 50
3376 Minimum Wages and Its Impact on Agriculture and Non Agricultural Sectors with Special Reference to Recent Labour Reforms in India

Authors: Bikash Kumar Malick

Abstract:

Labour reform is a celebrated theme among policy makers; at the same time, it is a misunderstood and sceptically viewed concept even among the educated masses in India. One of the widely discussed topics that needs in-depth examination is India's labour laws, and such an examination may help identify the exact requirements of labour reform by making the labour laws simpler and more concise in form and implementation. It is also necessary to guide states in India in framing their own laws, as the Indian Constitution itself is federal in form and unitary in spirit. Recently, the Code on Wages Bill has been introduced in the Indian Parliament, while three other codes are waiting to follow the same path; these codes set out the simplified features of the labour laws intended to enable labour reform in a succinct manner. However, they still create confusion in the minds of people. This article attempts to dispel that confusion, to relate the labour reforms of the centre and the states, which together generate employment and make growth sustainable in India, and to provide clear public understanding; the time is also ripe to minimize apprehension about the forthcoming labour laws simplified into different codes. The article highlights the need for labour reform and its possible impact. It also examines higher minimum wage rates and their links with coverage of the agricultural and non-agricultural sectors (including mines) over time. It further considers the central-sphere and state-sphere minimum wages, which are linked to the Consumer Price Index to account for the living standard of workers, and examines the cause and effect between minimum wage and output in both the agricultural and non-agricultural sectors with regression analysis. The increase in the minimum wage has actually strengthened sustainable output.

Keywords: codes of wages, indian constitution, minimum wage, labour laws, labour reforms

Procedia PDF Downloads 172
3375 High-Temperature Corrosion of Weldment of Fe-2%Mn-0.5%Si Steel in N2/H2O/H2S-Mixed Gas

Authors: Sang Hwan Bak, Min Jung Kim, Dong Bok Lee

Abstract:

Fe-2%Mn-0.5%Si-0.2C steel was welded and then corroded at 600, 700 and 800 °C for 20 h in 1 atm of N2/H2S/H2O-mixed gas in order to characterize the high-temperature corrosion behavior of the welded joint. Corrosion proceeded fast and almost linearly, and it increased with increasing corrosion temperature. Sulfur released from H2S formed FeS. The scales were fragile and nonadherent.

Keywords: Fe-Mn-Si steel, corrosion, welding, sulfidation, H2S gas

Procedia PDF Downloads 375
3374 Speech Intelligibility Improvement Using Variable Level Decomposition DWT

Authors: Samba Raju Chiluveru, Manoj Tripathy

Abstract:

Intelligibility is an essential characteristic of a speech signal, as it reflects how well the information in the signal can be understood. Background noise in the environment can deteriorate the intelligibility of recorded speech. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform, which improves the intelligibility of speech. The proposed algorithm does not require an explicit estimation of the noise, i.e., prior knowledge of the noise; hence, it is easy to implement and reduces the computational burden. The algorithm chooses a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. The performance of the proposed algorithm is evaluated with the short-time objective intelligibility (STOI) measure, and the results obtained are compared with universal discrete wavelet transform (DWT) thresholding and minimum mean square error (MMSE) methods. The experimental results reveal that the proposed scheme outperforms the competing methods.
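
The frame-wise idea can be illustrated with a minimal sketch using PyWavelets; the level-selection rule and threshold below are generic stand-ins (a MAD-based noise proxy and the universal threshold), not the exact criteria of the paper.

```python
import numpy as np
import pywt

def enhance_frame(frame, wavelet="db8", max_level=5):
    """Denoise one speech frame with a frame-dependent DWT depth: frames with a
    lower signal-to-noise proxy get deeper decompositions before soft thresholding."""
    sigma = np.median(np.abs(frame - np.median(frame))) / 0.6745     # robust noise proxy
    snr_proxy = np.var(frame) / (sigma**2 + 1e-12)
    level = int(np.clip(max_level - np.log10(snr_proxy + 1.0), 1, max_level))
    level = min(level, pywt.dwt_max_level(len(frame), pywt.Wavelet(wavelet).dec_len))
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    thr = sigma * np.sqrt(2 * np.log(len(frame)))                    # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(frame)]

def enhance(signal, frame_len=512):
    frames = [signal[i:i + frame_len] for i in range(0, len(signal), frame_len)]
    return np.concatenate([enhance_frame(f) for f in frames if len(f) > 16])
```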

Keywords: discrete wavelet transform, speech intelligibility, STOI, standard deviation

Procedia PDF Downloads 114
3373 A Minimum Spanning Tree-Based Method for Initializing the K-Means Clustering Algorithm

Authors: J. Yang, Y. Ma, X. Zhang, S. Li, Y. Zhang

Abstract:

The traditional k-means algorithm has been widely used as a simple and efficient clustering method. However, the algorithm often converges to local minima because it is sensitive to the initial cluster centers. In this paper, an algorithm for selecting initial cluster centers on the basis of the minimum spanning tree (MST) is presented. Sets of vertices in the MST with the same degree are regarded as a whole, which is used to find the skeleton data points. Furthermore, a distance measure between the skeleton data points that takes both degree and Euclidean distance into consideration is presented. Finally, the MST-based initialization method for the k-means algorithm is presented, and the corresponding time complexity is analyzed as well. The presented algorithm is tested on five data sets from the UCI Machine Learning Repository. The experimental results illustrate the effectiveness of the presented algorithm compared to three existing initialization methods.
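
A minimal sketch of MST-guided seeding is given below; the skeleton-point rule here (prefer high-degree MST vertices and keep the chosen centers far apart) is a simplified stand-in for the paper's degree-based construction.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist, pdist, squareform
from sklearn.cluster import KMeans

def mst_init_centers(X, k):
    """Pick k initial centers guided by the minimum spanning tree of X."""
    D = squareform(pdist(X))
    mst = minimum_spanning_tree(D).toarray()
    degree = ((mst + mst.T) > 0).sum(axis=1)
    candidates = np.argsort(-degree)                 # high-degree vertices first
    min_sep = np.median(mst[mst > 0])                # separation scale from MST edge lengths
    centers = [candidates[0]]
    for idx in candidates[1:]:
        if len(centers) == k:
            break
        if cdist(X[[idx]], X[centers]).min() > min_sep:
            centers.append(idx)
    centers += [i for i in candidates if i not in centers][: k - len(centers)]
    return X[centers]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in [(0, 0), (3, 0), (0, 3)]])
km = KMeans(n_clusters=3, init=mst_init_centers(X, 3), n_init=1).fit(X)
print("inertia:", round(km.inertia_, 3))
```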

Keywords: degree, initial cluster center, k-means, minimum spanning tree

Procedia PDF Downloads 377
3372 Determining Best Fitting Distributions for Minimum Flows of Streams in Gediz Basin

Authors: Naci Büyükkaracığan

Abstract:

Today, the need for water resources is swiftly increasing due to population growth. At the same time, it is known that some regions will face water shortages and drought because of global warming and climate change. In this context, the evaluation and analysis of hydrological data, such as observed trends and the prediction of drought and floods from short-term flows, is of great importance. Selecting the most suitable probability distribution is important for describing low-flow statistics in studies related to drought analysis. As with many basins in Turkey, the Gediz River basin will be considerably affected by drought, which will decrease the amount of usable water. The aim of this study is to derive appropriate probability distributions for frequency analysis of annual minimum flows at six gauging stations of the Gediz Basin. After applying 10 different probability distributions, six different parameter estimation methods and three goodness-of-fit tests, the Pearson type 3 and generalized extreme value distributions were found to give optimal results.
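
The fit-and-compare step can be sketched briefly with SciPy; the flow values below are hypothetical, only maximum likelihood fitting is shown (the study also used other estimation methods), and the Kolmogorov-Smirnov statistic stands in for the full set of goodness-of-fit tests.

```python
import numpy as np
from scipy import stats

# Hypothetical annual minimum flows (m^3/s) at one gauging station.
flows = np.array([4.2, 3.1, 5.6, 2.8, 3.9, 4.4, 6.1, 2.5, 3.3, 4.0,
                  5.2, 3.7, 2.9, 4.8, 3.5, 4.1, 2.7, 5.0, 3.8, 4.6])

candidates = {
    "Pearson 3": stats.pearson3,
    "GEV": stats.genextreme,
    "Log-normal": stats.lognorm,
    "Weibull (min)": stats.weibull_min,
}

for name, dist in candidates.items():
    params = dist.fit(flows)                                  # maximum likelihood fit
    ks_stat, p_value = stats.kstest(flows, dist.cdf, args=params)
    print(f"{name:14s} KS = {ks_stat:.3f}  p = {p_value:.3f}")
```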

Keywords: Gediz Basin, goodness-of-fit tests, minimum flows, probability distribution

Procedia PDF Downloads 246
3371 Countering the Bullwhip Effect by Absorbing It Downstream in the Supply Chain

Authors: Geng Cui, Naoto Imura, Katsuhiro Nishinari, Takahiro Ezaki

Abstract:

The bullwhip effect, which refers to the amplification of demand variance as one moves up the supply chain, has been observed in various industries and extensively studied through analytic approaches. Existing methods to mitigate the bullwhip effect, such as decentralized demand information, vendor-managed inventory, and the Collaborative Planning, Forecasting, and Replenishment System, rely on the willingness and ability of supply chain participants to share their information. However, in practice, information sharing is often difficult to realize due to privacy concerns. The purpose of this study is to explore new ways to mitigate the bullwhip effect without the need for information sharing. This paper proposes a 'bullwhip absorption strategy' (BAS) to alleviate the bullwhip effect by absorbing it downstream in the supply chain. To achieve this, a two-stage supply chain system was employed, consisting of a single retailer and a single manufacturer. In each time period, the retailer receives an order generated according to an autoregressive process. Upon receiving the order, the retailer depletes the ordered amount, forecasts future demand based on past records, and places an order with the manufacturer using the order-up-to replenishment policy. The manufacturer follows a similar process. In essence, the mechanism of the model is similar to that of the beer game. The BAS is implemented at the retailer's level to counteract the bullwhip effect. This strategy requires the retailer to reduce the uncertainty in its orders, thereby absorbing the bullwhip effect downstream in the supply chain. The advantage of the BAS is that upstream participants can benefit from a reduced bullwhip effect. Although the retailer may incur additional costs, if the gain in the upstream segment can compensate for the retailer's loss, the entire supply chain will be better off. Two indicators, order variance and inventory variance, were used to quantify the bullwhip effect in relation to the strength of absorption. It was found that implementing the BAS at the retailer's level results in a reduction in both the retailer's and the manufacturer's order variances. However, when examining the impact on inventory variances, a trade-off relationship was observed. The manufacturer's inventory variance monotonically decreases with an increase in absorption strength, while the retailer's inventory variance does not always decrease as the absorption strength grows. This is especially true when the autoregression coefficient has a high value, causing the retailer's inventory variance to become a monotonically increasing function of the absorption strength. Finally, numerical simulations were conducted for verification, and the results were consistent with our theoretical analysis.
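
The setting described above can be reproduced in a small beer-game-style simulation. The sketch below is a simplified stand-in for the authors' model (the demand parameters, forecasting window and order-up-to details are assumptions), with the absorption strength beta pulling the retailer's order toward its demand forecast before it is passed upstream.

```python
import numpy as np

def simulate(beta=0.0, phi=0.7, periods=20_000, lead=2, window=10, z=1.65, seed=0):
    """Two-stage retailer -> manufacturer chain with AR(1) customer demand and
    order-up-to replenishment. Returns the bullwhip ratios (order variance divided
    by customer-demand variance) after the retailer's orders are 'absorbed'."""
    rng = np.random.default_rng(seed)
    demand = np.empty(periods)
    demand[0] = 20.0
    for t in range(1, periods):                               # AR(1) customer demand
        demand[t] = 20.0 * (1 - phi) + phi * demand[t - 1] + rng.normal(0, 2)

    def order_up_to(stream):
        orders = np.zeros_like(stream)
        prev_level = stream[:window].mean() * (lead + 1)
        for t in range(window, len(stream)):
            mu, sd = stream[t - window:t].mean(), stream[t - window:t].std()
            level = mu * (lead + 1) + z * sd * np.sqrt(lead + 1)
            orders[t] = max(0.0, stream[t] + level - prev_level)
            prev_level = level
        return orders[window:]

    retailer_orders = order_up_to(demand)
    forecast = np.convolve(demand, np.ones(window) / window, "valid")[: len(retailer_orders)]
    absorbed = (1 - beta) * retailer_orders + beta * forecast  # the absorption step
    manufacturer_orders = order_up_to(absorbed)
    return absorbed.var() / demand.var(), manufacturer_orders.var() / demand.var()

for beta in (0.0, 0.3, 0.6):
    print("beta =", beta, "->", [round(v, 2) for v in simulate(beta)])
```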

Keywords: bullwhip effect, supply chain management, inventory management, demand forecasting, order-up-to policy

Procedia PDF Downloads 44
3370 Balancing Electricity Demand and Supply to Protect a Company from Load Shedding: A Review

Authors: G. W. Greubel, A. Kalam

Abstract:

South Africa finds itself at a confluence of forces where the national electricity supply system is constrained by under-supply, primarily from old and failing coal-fired power stations and congested, inadequate transmission and distribution systems. Simultaneously, the country attempts to meet carbon reduction targets driven both by alignment with international goals and by consumer-driven requirements. The constrained electricity system is one aspect of an economy characterized by very low economic growth, high unemployment, and frequent and significant load shedding. The fiscus does not have the funding to build new generation capacity or strengthen the grid. The under-supply is increasingly alleviated by the penetration of wind and solar generation capacity and embedded roof-top solar. However, this increased penetration results in less inertia, less synchronous generation, and less capability for fast frequency response, with resultant instability. The renewable energy facilities assist in solving the under-supply issues, but merely 'kick the can down the road' by neither contributing to grid stability nor substituting for the lost inertia, thus creating an expanding issue for the grid to manage. By technically balancing its electricity demand and supply, a company with facilities located across the country can be spared the effects of load shedding and thus ensure financial and production performance, protect jobs, and contribute meaningfully to the economy. By treating the company's load (across the country) and its various distributed generation facilities as a 'virtual grid', which by design will provide ancillary services to the grid, one is able to create a win-win situation for both the company and the grid. This paper provides a review of the technical problems facing the South African electricity system and discusses a hypothetical 'virtual grid' concept that may assist in solving the problems. The proposed solution has potential application across emerging markets with constrained power infrastructure or for companies who wish to be entirely powered by renewable energy.

Keywords: load shedding, renewable energy integration, smart grid, virtual grid

Procedia PDF Downloads 19
3369 Sensitivity Analysis of the Thermal Properties in Early Age Modeling of Mass Concrete

Authors: Farzad Danaei, Yilmaz Akkaya

Abstract:

In many civil engineering applications, especially the construction of large concrete structures, the early-age behavior of concrete has been shown to be a crucial problem. The uneven rise in temperature within the concrete in these constructions is the fundamental issue for quality control. Therefore, developing accurate and fast temperature prediction models is essential. The thermal properties of concrete fluctuate over time as it hardens, but taking all of these fluctuations into account makes numerical models more complex. Experimental measurement of the thermal properties under laboratory conditions also cannot accurately predict the variation of these properties under site conditions. Therefore, the specific heat capacity and the thermal conductivity coefficient are two variables that are treated as constants in many previously recommended models. The proposed equations demonstrate that these two quantities decrease linearly as the cement hydrates, and that their values are related to the degree of hydration. The effects of changing the thermal conductivity and specific heat capacity values on the maximum temperature, and on the time it takes for the concrete to reach that temperature, are examined in this study using numerical sensitivity analysis, and the results are compared to models that take fixed values for these two thermal properties. The study is conducted on seven different concrete mix designs with varying amounts of supplementary cementitious materials (fly ash and ground granulated blast furnace slag). It is concluded that the maximum temperature does not change as a result of assuming a constant conductivity coefficient, but a variable specific heat capacity must be taken into account; regarding the time at which the concrete's central node reaches its maximum temperature, a variable specific heat capacity can also have a considerable effect on the final result. In addition, the use of GGBFS has more influence than fly ash.
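
A lumped, semi-adiabatic toy model can illustrate why a hydration-dependent specific heat shifts the temperature history; every material value below is an illustrative assumption, not one of the paper's seven mixes.

```python
import numpy as np

def semi_adiabatic_temperature(variable_c=True, hours=200.0, dt=0.01):
    """Lumped early-age temperature sketch: hydration follows an exponential
    degree-of-hydration curve, and the specific heat either stays constant or
    decreases linearly with the degree of hydration."""
    rho = 2400.0                  # kg/m^3, concrete density (assumed)
    q_total = 3.5e8               # J/m^3, total heat of hydration (assumed)
    h_loss = 150.0                # W/(m^3 K), lumped heat-loss coefficient (assumed)
    c0, c_drop = 1100.0, 200.0    # J/(kg K): initial specific heat and its linear drop
    tau = 30.0 * 3600.0           # s, hydration time constant (assumed)

    t = np.arange(0.0, hours * 3600.0, dt * 3600.0)
    T = np.full_like(t, 20.0)             # start and ambient at 20 degrees C
    alpha = 1.0 - np.exp(-t / tau)        # degree of hydration
    dalpha_dt = np.exp(-t / tau) / tau
    for i in range(1, len(t)):
        c = c0 - c_drop * alpha[i - 1] if variable_c else c0
        heat_in = q_total * dalpha_dt[i - 1]
        heat_out = h_loss * (T[i - 1] - 20.0)
        T[i] = T[i - 1] + (heat_in - heat_out) / (rho * c) * (dt * 3600.0)
    imax = T.argmax()
    return T[imax], t[imax] / 3600.0

for flag in (False, True):
    tmax, hour = semi_adiabatic_temperature(variable_c=flag)
    print(f"variable c = {flag}:  Tmax = {tmax:.1f} C at {hour:.1f} h")
```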

Keywords: early-age concrete, mass concrete, specific heat capacity, thermal conductivity coefficient

Procedia PDF Downloads 47
3368 Information Theoretic Approach for Beamforming in Wireless Communications

Authors: Syed Khurram Mahmud, Athar Naveed, Shoaib Arif

Abstract:

Beamforming is a signal processing technique extensively utilized in wireless communications and radar for desired-signal intensification and interference minimization through spatial selectivity. In this paper, we present a method for calculating optimal weight vectors for a smart antenna array to achieve a directive pattern during transmission and selective reception in an interference-prone environment. In the proposed scheme, mutual information (MI) extrema are evaluated through an energy-constrained objective function, which is based on a-priori information about the interference source and the desired array factor. Signal-to-interference-plus-noise ratio (SINR) performance is evaluated for both transmission and reception. In our scheme, MI is presented as an index to identify the trade-off between information gain, SINR, illumination time and spatial selectivity in an energy-constrained optimization problem. The employed method yields lower computational complexity, which is demonstrated through comparative analysis with conventional methods. MI-based beamforming offers enhancement of signal integrity in degraded environments while reducing computational intricacy and correlating key performance indicators.
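
For reference, the conventional baseline that such schemes are compared against can be sketched quickly: minimum-variance (MVDR) weights computed from a-priori knowledge of the interferer direction, together with the resulting output SINR. This is the standard textbook beamformer, not the authors' MI-based criterion, and the array geometry and powers below are assumed.

```python
import numpy as np

def steering_vector(n_elements, theta_deg, spacing=0.5):
    """Uniform linear array steering vector (element spacing in wavelengths)."""
    n = np.arange(n_elements)
    return np.exp(1j * 2 * np.pi * spacing * n * np.sin(np.deg2rad(theta_deg)))

def mvdr_weights(R, a_desired):
    """Minimum-variance distortionless-response weights: minimize output power
    subject to unit gain toward the desired direction."""
    Rinv_a = np.linalg.solve(R, a_desired)
    return Rinv_a / (a_desired.conj() @ Rinv_a)

def output_sinr(w, a_s, R_in, p_signal=1.0):
    return p_signal * np.abs(w.conj() @ a_s) ** 2 / np.real(w.conj() @ R_in @ w)

n = 8
a_sig = steering_vector(n, 0.0)        # desired user at broadside
a_int = steering_vector(n, 40.0)       # known interferer (a-priori information)
R_in = 10.0 * np.outer(a_int, a_int.conj()) + 0.1 * np.eye(n)   # interference + noise

w = mvdr_weights(R_in, a_sig)
print("output SINR (dB):", round(10 * np.log10(output_sinr(w, a_sig, R_in)), 1))
```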

Keywords: beamforming, interference, mutual information, wireless communications

Procedia PDF Downloads 251
3367 Digital Transformation and Environmental Disclosure in Industrial Firms: The Moderating Role of the Top Management Team

Authors: Yongxin Chen, Min Zhang

Abstract:

As industrial enterprises are the primary source of national pollution, environmental information disclosure is a crucial way for them to demonstrate to stakeholders the work they have done in fulfilling their environmental responsibilities and accepting social supervision. In the era of the digital economy, many companies, actively embracing the opportunities that come with digital transformation, have begun to apply digital technology to information collection and disclosure within the enterprise. However, less is known about the relationship between digital transformation and environmental disclosure. Drawing on information processing theory, this study investigates how enterprise digital transformation affects environmental disclosure in 643 Chinese industrial companies. Intriguingly, the depth (size) and breadth (diversity) of environmental disclosure increase linearly with the rise in collection, processing and analytical capabilities during the digital transformation process. However, the volume of data grows exponentially, leading to marginally increasing economic and environmental costs of utilizing, storing and managing data. In our empirical findings, linearly increasing benefits and marginally increasing costs create a unique inverted U-shaped relationship between the degree of digital transformation and environmental disclosure in the Chinese industrial sector. In addition, based on upper echelons theory, we propose that a top management team with high stability and managerial capabilities will invest more effort and expense in improving environmental disclosure quality, lowering the carbon footprint caused by digital technology, maintaining data security, and so on. In both of these contexts, the increasing marginal cost curves become steeper, weakening the inverted U-shaped slope between digital transformation and environmental disclosure.

Keywords: digital transformation, environmental disclosure, the top management team, information processing theory, upper echelon theory

Procedia PDF Downloads 100
3366 Coefficient of Performance (COP) Optimization of an R134a Cross Vane Expander Compressor Refrigeration System

Authors: Y. D. Lim, K. S. Yap, K. T. Ooi

Abstract:

The cross vane expander compressor (CVEC) is a newly invented combined expander-compressor unit, introduced to replace the compressor and the expansion valve in a traditional refrigeration system. A mathematical model of the CVEC has been developed to examine its performance, and it was found that the energy consumption of a conventional refrigeration system was reduced by as much as 18%. It is believed that energy consumption can be further reduced by optimizing the device. In this study, the coefficient of performance (COP) of the CVEC has been optimized under predetermined operational parameters and constrained main design parameters. Several main design parameters of the CVEC were selected as the variables, and the optimization was done with a theoretical model in a simulation program. The theoretical model consists of a geometrical model, a dynamic model, a heat transfer model and a valve dynamics model. The complex optimization method, a constrained, direct-search, multi-variable method, was used in the study. As a result, the optimization study suggested that, with an appropriate combination of design parameters, a 58% COP improvement in the CVEC R134a refrigeration system is possible.
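
The overall pattern, a derivative-free search for a COP maximum over a bounded design space, can be sketched as below. The surrogate objective, the two normalized design parameters and their bounds are all hypothetical, and SciPy's differential evolution is used here simply as an off-the-shelf constrained direct-search routine in place of the complex (Box) method.

```python
import numpy as np
from scipy.optimize import differential_evolution

def negative_cop(x):
    """Hypothetical smooth surrogate for the CVEC coefficient of performance as a
    function of two normalized design parameters (e.g. a radius ratio and a vane
    width ratio); it stands in for the geometrical, dynamic, heat-transfer and
    valve models, which are not reproduced here."""
    r_ratio, vane_ratio = x
    cop = (3.0
           + 1.2 * np.exp(-((r_ratio - 0.65) ** 2) / 0.02)
           + 0.8 * np.exp(-((vane_ratio - 0.3) ** 2) / 0.01)
           - 2.0 * max(0.0, r_ratio + vane_ratio - 1.0) ** 2)   # soft feasibility penalty
    return -cop                                                 # minimize the negative

bounds = [(0.4, 0.9), (0.1, 0.6)]          # constrained design-parameter ranges (assumed)
result = differential_evolution(negative_cop, bounds, seed=3, tol=1e-8)
print("best parameters:", np.round(result.x, 3), "surrogate COP:", round(-result.fun, 3))
```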

Keywords: COP, cross vane expander-compressor, CVEC, design, simulation, refrigeration system, air-conditioning, R134a, multi variables

Procedia PDF Downloads 302
3365 Astronomical Object Classification

Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan

Abstract:

We present a photometric method for identifying stars, galaxies and quasars in multi-color surveys, which uses a library of more than 65,000 color templates for comparison with observed objects. The method aims to extract the information content of object colors in a statistically correct way, and performs classification as well as redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the minimum error variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS) but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for quasar selection, and redshifts accurate to within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, which are designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte-Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters. The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad and deeply exposed filters, but less severe for surveys with many, narrow and less deep filters.
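
The core template-matching step can be illustrated with a toy library; the classes, colors and redshift grid below are invented, the likelihood is a plain chi-square, and the probability-weighted redshift is only a simple stand-in for the minimum-error-variance estimator built from the same probability density function.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy template library: each class provides colors in three hypothetical bands.
# Galaxies and quasars get a small redshift grid; stars get none.
z_grid = np.linspace(0.0, 2.0, 41)
library = {
    "star":   (np.array([[0.3, 0.1, -0.2]]), np.array([np.nan])),
    "galaxy": (np.stack([[0.8 + 0.5 * z, 0.4 + 0.3 * z, 0.2 * z] for z in z_grid]), z_grid),
    "quasar": (np.stack([[0.1 + 0.2 * z, -0.3 + 0.4 * z, 0.5 - 0.1 * z] for z in z_grid]), z_grid),
}

def classify(colors, sigma=0.05):
    """Chi-square template matching; the redshift estimate is the probability-
    weighted mean over the template grid for the best-fitting class."""
    best = None
    for name, (templates, zs) in library.items():
        chi2 = np.sum((templates - colors) ** 2, axis=1) / sigma**2
        like = np.exp(-0.5 * (chi2 - chi2.min()))
        prob = like / like.sum()
        z_est = np.nan if np.isnan(zs).all() else float(np.sum(prob * zs))
        if best is None or chi2.min() < best[1]:
            best = (name, chi2.min(), z_est)
    return best

obs = np.array([0.8 + 0.5 * 0.7, 0.4 + 0.3 * 0.7, 0.2 * 0.7]) + rng.normal(0, 0.05, 3)
print(classify(obs))   # expected: ('galaxy', ..., redshift near 0.7)
```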

Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis

Procedia PDF Downloads 38
3364 Frequency Analysis of Minimum Ecological Flow and Gage Height in Indus River Using Maximum Likelihood Estimation

Authors: Tasir Khan, Yejuan Wan, Kalim Ullah

Abstract:

Hydrological frequency analysis has been conducted to estimate the minimum flow level (gage height) of the Indus River in Pakistan in order to protect the ecosystem. The maximum likelihood estimation (MLE) technique is used to identify the best-fitting distribution for minimum ecological flows at nine stations of the Indus River. The four selected distributions usually used in hydrological frequency analysis, the generalized extreme value (GEV), generalized logistic (GLO), generalized Pareto (GPA) and Pearson type 3 (PE3) distributions, are fitted at all sites. The performance of these distributions is compared using goodness-of-fit tests, namely the Kolmogorov-Smirnov, Anderson-Darling and chi-square tests. The study concludes that, under the MLE method, GEV and GPA are the most suitable distributions and can be effectively applied to all the proposed sites. Quantiles are estimated for return periods from 5 to 1000 years using the MLE estimates; MLE is a robust method for larger sample sizes. The results of these analyses can be used for water resources research, including water quality management, designing irrigation systems, determining downstream flow requirements for hydropower, and assessing the impact of long-term drought on the country's aquatic systems.
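
A compact sketch of the MLE fit and return-period quantiles is given below; the flow values are hypothetical, only the GEV case is shown, and the usual low-flow convention is assumed in which the T-year event is the flow with non-exceedance probability 1/T.

```python
import numpy as np
from scipy import stats

# Hypothetical annual minimum ecological flows (m^3/s) at one Indus station.
min_flows = np.array([310., 295., 340., 280., 325., 301., 288., 355., 270., 315.,
                      298., 332., 305., 276., 348., 290., 322., 284., 338., 300.])

shape, loc, scale = stats.genextreme.fit(min_flows)      # maximum likelihood estimates
return_periods = np.array([5, 10, 25, 50, 100, 500, 1000])

# T-year low flow: the level under-shot on average once every T years.
quantiles = stats.genextreme.ppf(1.0 / return_periods, shape, loc=loc, scale=scale)
for T, q in zip(return_periods, quantiles):
    print(f"T = {T:4d} yr  ->  design low flow = {q:6.1f} m^3/s")
```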

Keywords: minimum ecological flow, frequency distribution, indus river, maximum likelihood estimation

Procedia PDF Downloads 53
3363 Genetic Analysis of Iron, Phosphorus, Potassium and Zinc Concentration in Peanut

Authors: Ajay B. C., Meena H. N., Dagla M. C., Narendra Kumar, Makwana A. D., Bera S. K., Kalariya K. A., Singh A. L.

Abstract:

Their high energy value, protein content and minerals make peanuts a rich source of nutrition at a comparatively low cost. Basic information on the genetics and inheritance of these mineral elements is very scarce. Hence, in the present study, the inheritance (using an additive-dominance model) and the association of mineral elements were studied in two peanut crosses. Dominance variance (H) played an important role in the inheritance of P, K, Fe and Zn in peanut pods. The average degree of dominance for most of the traits was greater than unity, indicating over-dominance for these traits. Significant associations were also observed among mineral elements in both the F2 and F3 generations, but pod yield had no association with the mineral elements (with a few exceptions). Diallel/biparental mating could be followed to identify high-yielding and mineral-dense segregants.

Keywords: correlation, dominance variance, mineral elements, peanut

Procedia PDF Downloads 383
3362 Observationally Constrained Estimates of Aerosol Indirect Radiative Forcing over Indian Ocean

Authors: Sofiya Rao, Sagnik Dey

Abstract:

Aerosol-cloud-precipitation interaction continues to be one of the largest sources of uncertainty in quantifying aerosol climate forcing, and the uncertainty increases from the global to the regional scale. This problem remains unresolved due to the large discrepancies in the representation of cloud processes in climate models. Most studies on aerosol-cloud-climate and aerosol-cloud-precipitation interactions over the Indian Ocean (such as the INDOEX and CAIPEEX campaigns) are restricted either to one season or to one region. Here we developed a theoretical framework to quantify aerosol indirect radiative forcing using 15 years (2000-2015) of Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol and cloud products over the Indian Ocean. This framework relies on an observationally constrained estimate of the aerosol-induced change in cloud albedo. We partitioned the change in cloud albedo into the changes in liquid water path (LWP) and cloud effective radius (Reff) in response to a change in aerosol optical depth (AOD). The cloud albedo response to an increase in AOD is most sensitive for LWP in the range 120-300 g/m² and Reff in the range 8-24 micrometres, which means aerosols are most effective within this range of LWP and Reff. Using this framework, the aerosol forcing during a transition from the indirect to the semi-direct effect is also calculated. The analysis gives the best results over the Arabian Sea, in comparison with the Bay of Bengal and the South Indian Ocean, because of the heterogeneity in aerosol species over the Arabian Sea: more strongly absorbing aerosols dominate there during the winter season, while dust (coarse-mode aerosol particles) dominates during the pre-monsoon. In winter and pre-monsoon, aerosol forcing largely dominates, while during the monsoon and post-monsoon seasons, meteorological forcing dominates. Over the South Indian Ocean, more or less the same type of aerosol (sea salt) is present. Over the Arabian Sea, the aerosol indirect radiative forcing is around -5 ± 4.5 W/m² in the winter season, while in the other seasons it decreases. The results provide observationally constrained estimates of aerosol indirect forcing in the Indian Ocean, which can be helpful in evaluating climate model performance in the context of such complex interactions.

Keywords: aerosol-cloud-precipitation interaction, aerosol-cloud-climate interaction, indirect radiative forcing, climate model

Procedia PDF Downloads 142
3361 A Profile of an Exercise Addict: The Relationship between Exercise Addiction and Personality

Authors: Klary Geisler, Dalit Lev-Arey, Yael Hacohen

Abstract:

It is a well-known fact that exercise has favorable effects on people's physical health as well as their mental well-being. Excessive exercise, however, is likely to bring negative consequences (e.g., physical injuries, neglect of everyday responsibilities such as work and family life). Lately, there has been growing interest in exercise addiction, sometimes referred to as exercise dependence, which is defined as a craving for physical activity that results in extreme workout sessions and generates negative physiological and psychological symptoms (e.g., withdrawal symptoms, tolerance, social conflict). Exercise addiction is considered a behavioral addiction, yet it was not included in the latest editions of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) due to a lack of sufficient research. Specifically, research on the relationship between exercise addiction and personality dimensions is scarce. The purpose of the current research was to examine the relationship between primary exercise addiction symptoms and the Big Five dimensions, perfectionism (high performance expectations and self-critical performance evaluations) and subjective affect. Participants were 152 trainees in a variety of aerobic sports (running, cycling, swimming) recruited through sports groups and trainers. Of the participants, 88% trained for at least 5 hours per week and 24% trained more than 10 hours per week. To test the predictive ability of the independent variables, a hierarchical linear regression with forced block entry was performed. It was found that neuroticism significantly predicted exercise addiction symptoms (20% of the variance, p < 0.001), while conscientiousness was negatively correlated with exercise addiction symptoms (14% of the variance, p < 0.05); both had a unique contribution. The other Big Five dimensions (agreeableness, openness and extraversion) made no contribution to the dependent variable. Moreover, maladaptive perfectionism (self-critical performance evaluations) also significantly predicted exercise addiction symptoms (10% of the variance, p < 0.05). The overall regression model explained 54% of the variance.
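
The block-entry logic can be sketched with statsmodels on simulated data; the variable names, effect sizes and two-block structure below are illustrative assumptions, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data standing in for the 152 trainees (names and scales assumed).
rng = np.random.default_rng(7)
n = 152
df = pd.DataFrame({
    "neuroticism": rng.normal(0, 1, n),
    "conscientiousness": rng.normal(0, 1, n),
    "self_critical_perf": rng.normal(0, 1, n),
})
df["exercise_addiction"] = (0.45 * df.neuroticism - 0.35 * df.conscientiousness
                            + 0.30 * df.self_critical_perf + rng.normal(0, 1, n))

# Forced block entry: Block 1 = personality traits, Block 2 adds maladaptive perfectionism.
block1 = smf.ols("exercise_addiction ~ neuroticism + conscientiousness", data=df).fit()
block2 = smf.ols("exercise_addiction ~ neuroticism + conscientiousness + self_critical_perf",
                 data=df).fit()
print("Block 1 R^2:", round(block1.rsquared, 3))
print("Block 2 R^2:", round(block2.rsquared, 3),
      " delta R^2:", round(block2.rsquared - block1.rsquared, 3))
```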

Keywords: big five, conscientiousness, excessive exercise, exercise addiction, neuroticism, perfectionism, personality

Procedia PDF Downloads 193
3360 The Study of Rapeseed Characteristics by Factor Analysis under Normal and Drought Stress Conditions

Authors: Ali Bakhtiari Gharibdosti, Mohammad Hosein Bijeh Keshavarzi, Samira Alijani

Abstract:

To understand the relationships among characteristics and to determine the factors that explain the characteristics under consideration in rapeseed varieties, 10 rapeseed genotypes were grown in a completely randomized design with three replications under drought stress in 2009-2010 at the research field of the Agriculture College, Islamic Azad University, Karaj Branch. In this research, 11 characteristics, including those related to the growth, production and yield stages, were considered. The analysis of variance showed that there is a significant difference among the rapeseed varieties for these characteristics. Simple correlation coefficients calculated under both normal and drought stress conditions indicate that seed yield per plant and pod number have a positive and significant correlation at the 1% probability level with seed yield, and that selection based on these characteristics would be effective for improving yield. Under normal conditions, factor analysis identified five factors with eigenvalues greater than one, which together accounted for 82.72% of the total variance, while under drought stress four factors were identified, accounting for 76.78% of the total variance. Considering the overall results of this research, the characteristics found to be effective in the factor analysis, and the different components they load on, can be used in breeding programs to select suitable and tolerant genotypes under drought stress conditions.
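
The retain-factors-with-eigenvalues-above-one step can be sketched as follows; the trait matrix is simulated and the dimensions are assumptions, not the study's measurements.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical trait matrix: 30 plots (10 genotypes x 3 replications) x 11 traits.
rng = np.random.default_rng(11)
latent = rng.normal(size=(30, 4))                       # four underlying factors
loadings = rng.normal(scale=0.8, size=(4, 11))
X = latent @ loadings + rng.normal(scale=0.5, size=(30, 11))
Xz = StandardScaler().fit_transform(X)

# Retain factors whose eigenvalues of the correlation matrix exceed one,
# then fit that many factors.
eigvals = np.linalg.eigvalsh(np.corrcoef(Xz, rowvar=False))[::-1]
n_factors = int(np.sum(eigvals > 1.0))
fa = FactorAnalysis(n_components=n_factors).fit(Xz)
explained = eigvals[:n_factors].sum() / eigvals.sum() * 100
print(f"retained {n_factors} factors, ~{explained:.1f}% of total variance (eigenvalue proxy)")
print("loadings shape:", fa.components_.shape)
```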

Keywords: correlation, drought stress, factor analysis, rapeseed

Procedia PDF Downloads 151
3359 EWMA and MEWMA Control Charts for Monitoring Mean and Variance in Industrial Processes

Authors: L. A. Toro, N. Prieto, J. J. Vargas

Abstract:

There are many control charts for monitoring the mean and variance. Among these, the X-bar and R, X-bar and S, S², Hotelling and Shewhart control charts, to mention some, are widely used for monitoring the mean and variance in industrial processes. In particular, the Shewhart charts are based only on the information about the process contained in the current observation and ignore any information given by the entire sequence of points; in other words, the Shewhart chart is a control chart without memory. Consequently, Shewhart control charts are found to be less sensitive in detecting smaller shifts, particularly shifts smaller than 1.5 times the standard deviation, and these kinds of small shifts are important in many industrial applications. In this study, an effective alternative to the Shewhart control chart was implemented: for a univariate process, an exponentially weighted moving average (EWMA) control chart was developed, and for a multivariate process, a multivariate exponentially weighted moving average (MEWMA) control chart. Both of these charts are based on memory and perform better than the Shewhart chart in detecting smaller shifts. In these charts, information from past samples is accumulated up to the current sample, and then the decision about process control is taken. This characteristic of the EWMA and MEWMA charts is of paramount importance when it is necessary to control an industrial process, because it makes it possible to correct or predict problems in the process before it reaches a dangerous limit.
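
A univariate EWMA chart is short enough to sketch in full; the data below are simulated with a deliberate 1.5-sigma mean shift, and the limits use the standard time-varying form with smoothing constant lambda and width multiplier L.

```python
import numpy as np

def ewma_chart(x, target, sigma, lam=0.2, L=3.0):
    """EWMA statistic z_t = lam*x_t + (1-lam)*z_{t-1} with time-varying control
    limits that widen toward their asymptote as the correction term decays."""
    z = np.empty(len(x))
    prev = target
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev
        z[t] = prev
    i = np.arange(1, len(x) + 1)
    half_width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
    return z, target - half_width, target + half_width

rng = np.random.default_rng(5)
data = np.r_[rng.normal(10.0, 1.0, 30), rng.normal(11.5, 1.0, 20)]   # 1.5-sigma shift at t=30
z, lcl, ucl = ewma_chart(data, target=10.0, sigma=1.0)
signals = np.where((z > ucl) | (z < lcl))[0]
print("first out-of-control signal at sample:", int(signals[0]) if signals.size else None)
```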

Keywords: control charts, multivariate exponentially weighted moving average (MEWMA), exponentially weighted moving average (EWMA), industrial process control

Procedia PDF Downloads 327
3358 Designing a Cricket Team Selection Method Using Super-Efficient DEA and Semi Variance Approach

Authors: Arnab Adhikari, Adrija Majumdar, Gaurav Gupta, Arnab Bisi

Abstract:

Team formation plays an instrumental role in sports like cricket. The existing literature reveals that most works on player selection focus only on a player's efficiency and ignore consistency, which motivates us to design an improved player selection method based on both efficiency and consistency. To measure player efficiency, we employ a modified data envelopment analysis (DEA) technique, namely the super-efficient DEA model. We design a modified consistency index based on a semi-variance approach and introduce a new parameter, called the 'fitness index', into the consistency computation to assess a player's fitness level. Finally, we devise a single performance score from the efficiency score and the consistency score with the help of a linear programming model. To test the robustness of our method, we perform a rigorous numerical analysis to determine the all-time best One Day International (ODI) Cricket XI. We then conduct extensive comparative studies of the efficiency scores, consistency scores and selected teams between the existing methods and the proposed method, and explain the rationale behind the improvement.
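
The consistency side of such a score can be sketched with a toy downside-risk (semi-variance) index scaled by a fitness factor and combined with an efficiency score through fixed weights; the players, numbers and weighting below are invented, and the paper's actual DEA and linear programming formulations are not reproduced.

```python
import numpy as np

def semivariance_consistency(scores, fitness, eps=1e-9):
    """Toy consistency index: penalize only downside dispersion (innings below
    the player's own mean), scaled by a fitness index in [0, 1]."""
    scores = np.asarray(scores, dtype=float)
    below = np.minimum(scores - scores.mean(), 0.0)
    semi_std = np.sqrt(np.mean(below**2))
    return fitness * scores.mean() / (semi_std + eps)

# Hypothetical per-innings performance plus DEA-style efficiency for three players.
players = {
    "A": dict(innings=[55, 60, 58, 62, 57], efficiency=0.92, fitness=0.95),
    "B": dict(innings=[5, 120, 10, 110, 8],  efficiency=0.97, fitness=0.90),
    "C": dict(innings=[40, 45, 38, 50, 42],  efficiency=0.80, fitness=1.00),
}

w_eff, w_cons = 0.6, 0.4          # assumed weights for the single performance score
cons = {p: semivariance_consistency(d["innings"], d["fitness"]) for p, d in players.items()}
cmax = max(cons.values())
for p, d in players.items():
    score = w_eff * d["efficiency"] + w_cons * cons[p] / cmax   # normalize consistency
    print(p, "consistency:", round(cons[p], 2), "combined score:", round(score, 3))
```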

Keywords: decision support systems, sports, super-efficient data envelopment analysis, semi variance approach

Procedia PDF Downloads 373
3357 Preparation of Wireless Networks and Security; Challenges in Efficient Accession of Encrypted Data in Healthcare

Authors: M. Zayoud, S. Oueida, S. Ionescu, P. AbiChar

Abstract:

Background: A wireless sensor network encompasses diverse information technology tools and is widely applied in a range of domains, including military surveillance, weather forecasting and earthquake forecasting. Stronger foundations are continually being developed for wireless sensor networks, yet security issues usually emerge during professional application. Thus, the essential technological tools for secure aggregation of data need to be assessed. Moreover, such practices have to be incorporated into healthcare practice, where they serve the best mutual interest. Objective: Aggregation of encrypted data has been assessed through a homomorphic stream cipher to assure its effectiveness and to provide optimal solutions for the field of healthcare. Methods: An experimental design was used, which utilized the newly developed cipher along with CPU-constrained devices. Modular additions were employed to aggregate the encrypted data, and the processes of the homomorphic stream cipher were examined across different sensors and modular additions. Results: The homomorphic stream cipher has been recognized as a simple and secure process, which allows efficient aggregation of encrypted data. In addition, its application has contributed to the improvement of healthcare practices. Statistical values can easily be computed from the aggregates produced under the selected cipher; the mean, variance and standard deviation of the sensed data have also been computed with the selected tool. Conclusion: It can be concluded that a homomorphic stream cipher can be an ideal tool for appropriate aggregation of data, while also providing effective solutions to the healthcare sector.
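
The basic mechanism, encryption by modular addition of a keystream so that an untrusted aggregator can sum ciphertexts directly, can be sketched in a few lines; the modulus, key handling and heart-rate readings below are illustrative assumptions, not the specific scheme evaluated in the paper.

```python
import secrets

M = 2**32                          # modulus large enough to hold the aggregate

def encrypt(value, key):
    return (value + key) % M       # additively homomorphic: Enc(a)+Enc(b) = Enc(a+b) mod M

def aggregate(ciphertexts):
    return sum(ciphertexts) % M    # performed by an untrusted aggregator node

def decrypt_sum(agg_cipher, keys):
    return (agg_cipher - sum(keys)) % M   # the sink knows (or re-derives) every key

# Hypothetical sensed heart-rate readings from CPU-constrained sensor nodes.
readings = [72, 85, 66, 90, 78]
keys = [secrets.randbelow(M) for _ in readings]          # per-node keystream values
ciphers = [encrypt(m, k) for m, k in zip(readings, keys)]

total = decrypt_sum(aggregate(ciphers), keys)
print("recovered sum:", total, " mean:", round(total / len(readings), 1))
```

Aggregating the encrypted squares in the same way would let the sink recover the variance and standard deviation as well.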

Keywords: aggregation, cipher, homomorphic stream, encryption

Procedia PDF Downloads 225
3356 High Precision 65nm CMOS Rectifier for Energy Harvesting using Threshold Voltage Minimization in Telemedicine Embedded System

Authors: Hafez Fouad

Abstract:

Telemedicine applications operate at very low voltages, which requires a high-precision rectifier design with high sensitivity in order to operate at a minimum input voltage. In this work, we target a 0.2 V input voltage using a 65 nm CMOS rectifier for an energy-harvesting telemedicine application. The proposed rectifier, designed at 2.4 GHz using a two-stage structure, is found to perform better: its minimum operating voltage is lower than in previously published work, and the rectifier can work over a wide range of low input voltage amplitudes. Performance summary of the full-wave fully gate cross-coupled rectifier (FWFR) CMOS rectifier at f = 2.4 GHz: the minimum and maximum output voltages generated using an input voltage amplitude of 2 V are 490.9 mV and 1.997 V, with a maximum VCE of 99.85% and a maximum PCE of 46.86%. Performance summary of the differential-drive CMOS rectifier with an external bootstrapping circuit at f = 2.4 GHz: the minimum and maximum output voltages generated using an input voltage amplitude of 2 V are 265.5 mV and 1.467 V, respectively, with a maximum VCE of 93.9% and a maximum PCE of 15.8%.

Keywords: energy harvesting, embedded system, IoT telemedicine system, threshold voltage minimization, differential drive cmos rectifier, full-wave fully gate cross-coupled rectifiers CMOS rectifier

Procedia PDF Downloads 121
3355 Efficient Principal Components Estimation of Large Factor Models

Authors: Rachida Ouysse

Abstract:

This paper proposes a constrained principal components (CnPC) estimator for efficient estimation of large-dimensional factor models when the errors are cross-sectionally correlated and the number of cross-sections (N) may be larger than the number of observations (T). Although the principal components (PC) method is consistent for any path of the panel dimensions, it is inefficient because the errors are treated as homoskedastic and uncorrelated. The new CnPC exploits the assumption of bounded cross-sectional dependence, which defines Chamberlain and Rothschild's (1983) approximate factor structure, as an explicit constraint, and solves a constrained PC problem. The CnPC method is computationally equivalent to the PC method applied to a regularized form of the data covariance matrix. Unlike maximum likelihood type methods, the CnPC method does not require inverting a large covariance matrix and thus is valid for panels with N ≥ T. The paper derives a convergence rate and an asymptotic normality result for the CnPC estimators of the common factors. We provide feasible estimators and show in a simulation study that they are more accurate than the PC estimator, especially for panels with N larger than T, and than generalized PC type estimators, especially for panels with N almost as large as T.
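
As a point of reference, the plain PC estimator of a factor model is short to sketch; the optional diagonal shrinkage of the covariance below is only a generic illustration of applying PC to a regularized covariance matrix, not the paper's constrained estimator.

```python
import numpy as np

def pc_factors(X, r, shrink=0.0):
    """Principal-components estimation of X (T x N) = F Lambda' + e. With
    shrink > 0 the N x N sample covariance is shrunk toward its diagonal
    before the eigen-decomposition."""
    T, N = X.shape
    X = X - X.mean(axis=0)
    S = X.T @ X / T
    if shrink > 0.0:
        S = (1 - shrink) * S + shrink * np.diag(np.diag(S))
    eigval, eigvec = np.linalg.eigh(S)
    V = eigvec[:, ::-1][:, :r]            # leading eigenvectors
    loadings = np.sqrt(N) * V             # Lambda'Lambda / N = I normalization
    factors = X @ loadings / N            # T x r common factors
    return factors, loadings

# Simulated panel with N > T and two common factors.
rng = np.random.default_rng(2)
T, N, r = 100, 250, 2
F_true = rng.normal(size=(T, r))
Lam = rng.normal(size=(N, r))
X = F_true @ Lam.T + rng.normal(size=(T, N))
F_hat, _ = pc_factors(X, r, shrink=0.1)

# R^2 of each estimated factor on the span of the true factors.
proj = F_true @ np.linalg.lstsq(F_true, F_hat, rcond=None)[0]
R2 = 1 - ((F_hat - proj) ** 2).sum(axis=0) / ((F_hat - F_hat.mean(0)) ** 2).sum(axis=0)
print("R^2 of estimated factors on the true factor space:", np.round(R2, 3))
```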

Keywords: high dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, regularization, pseudo-out-of-sample forecasting

Procedia PDF Downloads 120
3354 Inheritance of Protein Content and Grain Yield in Half Diallel Maize (Zea mays L.) Populations

Authors: Gül Ebru Orhun

Abstract:

A half-diallel crossing design was carried out during the 2011 and 2012 growing seasons under the ecological conditions of Çanakkale, Turkey. In this research, 20 F1 maize hybrids obtained by 6x6 half-diallel crossing were used. Gene action for the protein content and grain yield traits was explored in a half-diallel set involving six elite inbred lines. According to the results of the diallel analysis, dominance and additive gene variances were determined for protein content. Variance/covariance graphs were drawn for the grain yield and protein content traits. In this study, the inheritance of grain yield and protein content demonstrated an over-dominance type of gene action.
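
In such analyses, the over-dominance conclusion typically follows from the average degree of dominance computed from the additive (D) and dominance (H) variance components; the component values in the one-line check below are hypothetical.

```python
import math

D, H = 0.42, 0.97    # hypothetical additive and dominance variance components
print("average degree of dominance:", round(math.sqrt(H / D), 2), "(> 1 indicates over-dominance)")
```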

Keywords: protein, maize, inheritance, gene action

Procedia PDF Downloads 492
3353 Improvement of the Melon (Cucumis melo L.) through Genetic Gain and Discriminant Function

Authors: M. R. Naroui Rad, H. Fanaei, A. Ghalandarzehi

Abstract:

Morphological traits are vital for determining melon yield. This research was performed with the objective of assessing the impact of nine different morphological traits on the production of 20 melon landraces in the Sistan climatic region. Genetic variation was noted for all traits. The minimum genetic variance (9.66), along with a high genotype-by-environment interaction, led to the low heritability (0.24) of yield. The broad-sense heritability of the traits that were included in the discriminant model was higher than that of yield. In this study, the five selected traits (number of fruits, fruit weight, fruit width, flesh diameter and plant yield) can differentiate the genotypes with high or low production, which demonstrates the significance of these five traits in plant breeding programs. A discriminant function of these five traits, particularly fruit weight, was employed as an all-inclusive index for identifying the landraces with the highest yield. This index explains 75% of the variation in yield, and fruit weight also has a substantial relationship with total production (r = 0.72**). This index can be highly beneficial for selection in future breeding programs.
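
A discriminant function over the five traits can be sketched with scikit-learn; the landrace records and the high/low production labels below are simulated stand-ins for the study's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical records for 20 landraces: the five traits in the abstract's order
# (fruit number, fruit weight, fruit width, flesh diameter, plant yield) and a
# high/low production label derived from yield.
rng = np.random.default_rng(9)
n = 20
fruit_number = rng.integers(2, 8, n)
fruit_weight = rng.normal(1.8, 0.5, n)
fruit_width = rng.normal(14, 2, n)
flesh_diameter = rng.normal(3.5, 0.6, n)
plant_yield = 0.9 * fruit_number * fruit_weight + rng.normal(0, 0.8, n)
X = np.column_stack([fruit_number, fruit_weight, fruit_width, flesh_diameter, plant_yield])
y = (plant_yield > np.median(plant_yield)).astype(int)   # high vs low producers

lda = LinearDiscriminantAnalysis().fit(X, y)
print("discriminant coefficients:", np.round(lda.coef_.ravel(), 3))
print("training accuracy:", lda.score(X, y))
```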

Keywords: melon, discriminant analysis, genetic components, yield, selection

Procedia PDF Downloads 296