Search results for: parameter and SOC estimation
2914 Parameter Measurement Systems to Evaluate Performance of Archers
Authors: Muhammad Zikril Hakim Md. Azizi, Norhafizan Ahmad, Raja Ariffin Raja Ghazilla
Abstract:
Postural stability, the attention level of the archer, and particularly the vibrations of the bow itself play a prominent role in determining an athlete's performance. Many techniques and systems have been developed to monitor the parameters of archers during training. In Malaysia, archery coaches tend to rely on familiar, non-scientific methods to evaluate archer performance. An approach that offers a more affordable yet accurate system with a relatively easy deployment procedure is therefore needed, and this project addresses that need. Three areas of archer parameters are covered by the monitoring sensors: attention level is measured using an EEG sensor, the centre of mass linked to postural stability is measured by a foot pressure sensor, and the bow vibrations along three axes are relayed wirelessly by vibration sensors placed directly on the bow. An Arduino-based microcontroller relays all the data back to the interfacing system. The interface software uses Python for the user interface and a C++ framework for hardware interfacing. All sensor data can be observed in real time using the in-house application, and each session can be saved to common file formats so that the coach and the team can carry out further discussion and comparison.
Keywords: archery, graphical user interface, microcontroller, wireless sensor, monitoring system
Procedia PDF Downloads 299
2913 Comparison of Two Theories for the Critical Laser Radius in Thermal Quantum Plasma
Authors: Somaye Zare
Abstract:
The critical beam radius is a significant factor that predicts the behavior of a laser beam in plasma: if the laser beam radius is sufficiently larger than the critical radius, the beam undergoes stable focusing in the plasma; otherwise, the beam diverges after entering the plasma. In this work, considering the paraxial approximation and moment theories, the localization of a relativistic laser beam in thermal quantum plasma is investigated. Using the dielectric function obtained in the quantum hydrodynamic model, the equation for the laser beam width parameter is derived and solved numerically by the fourth-order Runge-Kutta method. The results demonstrate that a stronger focusing effect occurs in the moment theory than in the paraxial approximation. In both theories, with increasing Fermi temperature, plasma density, and laser intensity, the oscillation rate of the beam width parameter grows and the focusing length shortens, which indicates an improved focusing effect. Furthermore, the critical laser radius behaves differently in the two theories: in the paraxial approximation, the critical radius passes through a minimum and then increases with increasing laser intensity, whereas in the moment theory the critical radius decreases with increasing laser intensity until it becomes independent of the intensity.
Keywords: laser localization, quantum plasma, paraxial approximation, moment theory, quantum hydrodynamic model
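As an illustration of the numerical step mentioned above, the sketch below integrates a generic second-order beam-width-parameter equation with a hand-written fourth-order Runge-Kutta scheme; the right-hand side and its coefficients are placeholders, not the paper's dielectric-function-based expression.

```python
import numpy as np

# Minimal RK4 sketch for a beam-width-parameter equation of the generic form
#   d^2 f / d xi^2 = a / f**3 - b * f
# The coefficients a and b and this right-hand side are placeholders only.
def beam_width_rhs(xi, y, a=1.0, b=0.5):
    f, df = y
    return np.array([df, a / f**3 - b * f])

def rk4(rhs, y0, xi_span, n_steps=2000):
    xi = np.linspace(*xi_span, n_steps + 1)
    h = xi[1] - xi[0]
    y = np.empty((n_steps + 1, len(y0)))
    y[0] = y0
    for i in range(n_steps):
        k1 = rhs(xi[i], y[i])
        k2 = rhs(xi[i] + h / 2, y[i] + h / 2 * k1)
        k3 = rhs(xi[i] + h / 2, y[i] + h / 2 * k2)
        k4 = rhs(xi[i] + h, y[i] + h * k3)
        y[i + 1] = y[i] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return xi, y

# initial beam width parameter f(0) = 1 with a plane wavefront, f'(0) = 0
xi, y = rk4(beam_width_rhs, np.array([1.0, 0.0]), (0.0, 20.0))
```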
Procedia PDF Downloads 70
2912 On a Transient Magnetohydrodynamics Heat Transfer Within Radiative Porous Channel Due to Convective Boundary Condition
Authors: Bashiru Abdullahi, Isah Bala Yabo, Ibrahim Yakubu Seini
Abstract:
In this paper, steady and transient MHD heat transfer within a radiative porous channel due to convective boundary conditions is considered. The steady-state solution was obtained by a perturbation method and the transient solution by a finite difference method. The heat transfer analysis of the present work ascertains the influence of the Biot number (Bi1), magnetizing parameter (M), radiation parameter (R), temperature difference, suction/injection parameter (S), Grashof number (Gr), and time (t) on velocity (u), temperature (θ), skin friction (τ), and Nusselt number (Nu). The results are discussed with the help of line graphs. It was found that the velocity, temperature, and skin friction decay with increasing suction/injection and magnetizing parameters, while the Nusselt number increases with suction/injection at y = 0 and falls at y = 1. The steady-state solution was in perfect agreement with the transient version for sufficiently large values of time t. It is interesting to report that the Biot number has a strong influence: as its values increase, the results of the present work tend toward those of the extended literature.
Keywords: heat transfer, thermal radiation, porous channel, MHD, transient, convective boundary condition
Procedia PDF Downloads 119
2911 Optimization of Surface Roughness in Additive Manufacturing Processes via Taguchi Methodology
Authors: Anjian Chen, Joseph C. Chen
Abstract:
This paper studies a case in which the targeted surface roughness of a fused deposition modeling (FDM) additive manufacturing process is improved. The process is designed to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for the FDM process. The baseline Cp is 0.274 and Cpk is 0.654. This research utilizes the Taguchi methodology to eliminate defects and improve the process. The Taguchi method is used to optimize the printing parameters that affect the targeted surface roughness of FDM additive manufacturing. A Taguchi L9 orthogonal array is used to organize the effects of the parameters (four controllable and one non-controllable) on the FDM process. The four controllable parameters are nozzle temperature [°C], layer thickness [mm], nozzle speed [mm/s], and extruder speed [%]. The non-controllable parameter is the environmental temperature [°C]. After the optimization of the parameters, a confirmation print was produced to verify that the results reduce the amount of defects and improve the process capability index Cp from 0.274 to 1.605 and Cpk from 0.654 to 1.233 for the FDM additive manufacturing process. The final results confirmed that the Taguchi methodology is sufficient to improve the surface roughness of the FDM additive manufacturing process.
Keywords: additive manufacturing, fused deposition modeling, surface roughness, six-sigma, Taguchi method, 3D printing
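The capability indices quoted above follow the standard definitions Cp = (USL − LSL)/(6σ) and Cpk = min(USL − x̄, x̄ − LSL)/(3σ). A minimal sketch, using hypothetical roughness readings and specification limits rather than the study's data:

```python
import numpy as np

def process_capability(samples, lsl, usl):
    """Capability indices from measured surface-roughness samples."""
    mean = np.mean(samples)
    sigma = np.std(samples, ddof=1)                   # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)                    # potential capability (spread only)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)   # also accounts for centering
    return cp, cpk

# hypothetical roughness readings (microns) and spec limits
ra = np.array([2.9, 3.1, 2.7, 3.0, 2.8, 3.2, 2.95, 3.05])
print(process_capability(ra, lsl=2.0, usl=4.0))
```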
Procedia PDF Downloads 3912910 Poverty Dynamics in Thailand: Evidence from Household Panel Data
Authors: Nattabhorn Leamcharaskul
Abstract:
This study aims to examine the determining factors of the dynamics of poverty in Thailand by using panel data on 3,567 households over 2007-2017. Four estimation techniques are employed to analyze the poverty situation across households and time periods: the multinomial logit model, the sequential logit model, the quantile regression model, and the difference-in-differences model. Households are categorized based on their experiences into five groups, namely chronically poor, falling into poverty, re-entering poverty, exiting poverty, and never poor households. Estimation results emphasize the effects of demographic and socioeconomic factors as well as unexpected events on the economic status of a household. It is found that remittances have a positive impact on a household's economic status in that they are likely to lower the probability of falling into poverty or remaining trapped in poverty, while they tend to increase the probability of exiting poverty. In addition, receiving a secondary source of household income not only raises the probability of being a never poor household, but also significantly increases the household income per capita of the chronically poor and falling-into-poverty households. Public work programs are recommended as an important tool to relieve household financial burden and uncertainty and thus increase the chance for households to escape from poverty.
Keywords: difference in difference, dynamic, multinomial logit model, panel data, poverty, quantile regression, remittance, sequential logit model, Thailand, transfer
Procedia PDF Downloads 112
2909 Optimization of Wire EDM Parameters for Fabrication of Micro Channels
Authors: Gurinder Singh Brar, Sarbjeet Singh, Harry Garg
Abstract:
Wire electric discharge machining (WEDM) is a thermal machining process capable of machining electrically conductive materials irrespective of their hardness. WEDM is being widely used to machine micro-scale parts with high dimensional accuracy and surface finish. The objective of this paper is to optimize the process parameters of wire EDM to fabricate microchannels and to evaluate the surface finish and material removal rate of the microchannels fabricated using wire EDM. The material used is aluminum 6061 alloy. The experiments were performed on a CNC wire-cut electric discharge machine. The effects of various WEDM parameters, namely pulse-on time (TON) at the levels (100, 150, 200), pulse-off time (TOFF) at the levels (25, 35, 45), and peak current (IP) at the levels (105, 110, 115), on the output parameters, i.e., surface roughness and material removal rate (MRR), were investigated. Each experiment was conducted under different conditions of pulse-on time, pulse-off time, and peak current. For material removal rate, TON and IP were the most significant process parameters. MRR increases with an increase in TON and IP and decreases with an increase in TOFF. For surface roughness, TON and IP have the greatest effect, and TOFF was found to be less influential.
Keywords: microchannels, Wire Electric Discharge Machining (WEDM), Metal Removal Rate (MRR), surface finish
Procedia PDF Downloads 497
2908 Instant Location Detection of Objects Moving at High Speed in C-OTDR Monitoring Systems
Authors: Andrey V. Timofeev
Abstract:
A practical and efficient approach is suggested for estimating the instantaneous bounds of high-speed objects in C-OTDR monitoring systems. In the case of super-dynamic objects (trains, cars), it is difficult to obtain an adequate estimate of the instantaneous object localization because of estimation lag. In other words, reliable estimation of the coordinates of a monitored object requires time to collect observations by means of the C-OTDR system, and only once the required sample volume has been collected can the final decision be issued. This is contrary to the requirements of many real applications. For example, in rail traffic management systems, we need data on dynamic object localization in real time. The way to solve this problem is to use a set of statistically independent parameters of C-OTDR signals to obtain the most reliable solution in real time. Parameters of this type can be called 'signaling parameters' (SP). There are several SPs which carry information about the instantaneous localization of dynamic objects for each C-OTDR channel. The problem is that some of these parameters are very sensitive to the dynamics of seismoacoustic emission sources but are unstable, while an SP that is very stable is, as a rule, insensitive. This report describes a method for co-processing SPs designed to obtain the most effective dynamic-object localization estimates within the C-OTDR monitoring system framework.
Keywords: C-OTDR-system, co-processing of signaling parameters, high-speed objects localization, multichannel monitoring systems
Procedia PDF Downloads 468
2907 Developing Allometric Equations for More Accurate Aboveground Biomass and Carbon Estimation in Secondary Evergreen Forests, Thailand
Authors: Titinan Pothong, Prasit Wangpakapattanawong, Stephen Elliott
Abstract:
Shifting cultivation is an indigenous agricultural practice among upland people and has long been one of the major land-use systems in Southeast Asia. As a result, fallows and secondary forests have come to cover a large part of the region. However, they are increasingly being replaced by monocultures, such as corn cultivation. This is believed to be a main driver of deforestation and forest degradation, and one of the reasons behind the recurring winter smog crisis in Thailand and around Southeast Asia. Accurate biomass estimation of trees is important to quantify valuable carbon stocks and changes to these stocks in case of land-use change. However, Thailand presently lacks proper tools and optimal equations to quantify its carbon stocks, especially for secondary evergreen forests, including fallow areas after shifting cultivation and smaller trees with a diameter at breast height (DBH) of less than 5 cm. Developing new allometric equations to estimate biomass is urgently needed to accurately estimate and manage carbon storage in tropical secondary forests. This study established new equations using a destructive method at three study sites: an approximately 50-year-old secondary forest, a 4-year-old fallow, and a 7-year-old fallow. Tree biomass was collected by harvesting 136 individual trees (including coppiced trees) from 23 species, with DBH ranging from 1 to 31 cm. Oven-dried samples were sent for carbon analysis. Wood density was calculated from disk samples and from samples collected with an increment borer for 79 species, including 35 species currently missing from the Global Wood Density database. Several models were developed, showing that aboveground biomass (AGB) was strongly related to DBH, height (H), and wood density (WD). Including WD in the model was found to improve the accuracy of the AGB estimation. This study provides insights for reforestation management and can be used to prepare baseline data for Thailand's carbon stocks for REDD+ and other carbon trading schemes. These may provide monetary incentives to stop illegal logging and deforestation for monoculture.
Keywords: aboveground biomass, allometric equation, carbon stock, secondary forest
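As an illustration of how such an allometric model can be fitted, the sketch below performs a log-linear fit of the commonly used form ln(AGB) = b0 + b1·ln(DBH²·H·WD); both this functional form and the harvest data are assumptions for illustration, not the equations or measurements reported in the study.

```python
import numpy as np

# Hypothetical harvest data: DBH (cm), height H (m), wood density WD (g cm^-3),
# and oven-dry aboveground biomass AGB (kg). Values are illustrative only.
dbh = np.array([1.5, 3.2, 5.8, 9.4, 14.0, 21.0, 31.0])
h   = np.array([2.1, 4.0, 7.5, 11.0, 15.0, 19.0, 24.0])
wd  = np.array([0.45, 0.52, 0.48, 0.60, 0.55, 0.58, 0.62])
agb = np.array([0.4, 2.1, 9.8, 38.0, 110.0, 320.0, 900.0])

# Log-linear fit of a commonly used allometric form:
#   ln(AGB) = b0 + b1 * ln(DBH^2 * H * WD)
x = np.log(dbh**2 * h * wd)
y = np.log(agb)
b1, b0 = np.polyfit(x, y, 1)

def predict_agb(dbh, h, wd):
    # back-transform; a log-bias correction factor could also be applied
    return np.exp(b0 + b1 * np.log(dbh**2 * h * wd))

print(predict_agb(10.0, 12.0, 0.55))
```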
Procedia PDF Downloads 283
2906 Prevalence of Cognitive Decline in Major Depressive Illness
Authors: U. B. Zubair, A. Kiyani
Abstract:
Introduction: Depressive illness predisposes individuals to many physical and mental health issues. Anxiety and substance use disorders have been studied widely as comorbidities. Biological symptoms are also now considered part of the depressive spectrum. Cognitive abilities can also decline or be affected and need to be examined in detail in depressed patients. Objective: To determine the prevalence of cognitive decline among patients with major depressive illness and analyze the associated socio-demographic factors. Methods: 190 patients with major depressive illness were included in our study to determine the presence of cognitive decline among them. Depression was diagnosed by a consultant psychiatrist using the ICD-10 criteria for major depressive disorder. The British Columbia Cognitive Complaints Inventory (BC-CCI) was the psychometric tool used to determine cognitive decline. The sociodemographic profile was recorded, and the relationship of various factors with cognitive decline was also ascertained. Findings: 70% of the patients suffering from depression included in this study showed some degree of cognitive decline, while 30% did not show any evidence of cognitive decline when screened with the BC-CCI. Statistical testing revealed that female gender was the only socio-demographic parameter linked significantly with the presence of cognitive decline. Conclusion: A decline in cognitive abilities was found in a significant number of patients suffering from major depression in our sample population. Screening for this parameter of mental function should be done in depression clinics to detect it early.
Keywords: depression, cognitive decline, prevalence, socio-demographic factors
Procedia PDF Downloads 143
2905 Longitudinal Vibration of a Micro-Beam in a Micro-Scale Fluid Media
Authors: M. Ghanbari, S. Hossainpour, G. Rezazadeh
Abstract:
In this paper, the longitudinal vibration of a micro-beam in a micro-scale fluid medium has been investigated. The proposed mathematical model for this study consists of a micro-beam and a micro-plate at its free end. An AC voltage is applied to the pair of piezoelectric layers on the upper and lower surfaces of the micro-beam in order to actuate it longitudinally. The whole structure is bounded between two fixed plates on its upper and lower surfaces. The micro-gap between the structure and the fixed plates is filled with fluid. Fluids behave differently at the micro-scale than at the macro-scale, so the fluid field in the gap has been modeled based on micro-polar theory. The coupled governing equations of motion of the micro-beam and the micro-scale fluid field have been derived. Because of the non-homogeneous boundary conditions, the derived equations have been transformed into an enhanced form with homogeneous boundary conditions. Using a Galerkin-based reduced-order model, the enhanced equations have been discretized over the beam and fluid domains and solved simultaneously in order to obtain the forced response of the micro-beam. The effects of the micro-polar parameters of the fluid, such as the characteristic length scale, coupling parameter, and surface parameter, on the response of the micro-beam have been studied.
Keywords: micro-polar theory, Galerkin method, MEMS, micro-fluid
Procedia PDF Downloads 182
2904 Oscillatory Electroosmotic Flow of Power-Law Fluids in a Microchannel
Authors: Rubén Bãnos, José Arcos, Oscar Bautista, Federico Méndez
Abstract:
Oscillatory electroosmotic flow (OEOF) of power-law fluids through a microchannel is studied numerically. A time-dependent external electric field (AC) is suddenly imposed at the ends of the microchannel, which induces the fluid motion. The continuity and momentum equations in the x and y directions for the flow field were simplified in the limit of the lubrication approximation theory (LAT) and then solved using a numerical scheme. The solution of the electric potential is based on the Debye-Hückel approximation, which assumes that the surface potential is small, say, smaller than 0.025 V, and a symmetric (z:z) electrolyte. Our results suggest that the velocity profiles across the channel width are controlled by the following dimensionless parameters: the angular Reynolds number, Reω; the electrokinetic parameter, κ̄, defined as the ratio of the characteristic length scale to the Debye length; the parameter λ, which represents the ratio of the Helmholtz-Smoluchowski velocity to the characteristic length scale; and the flow behavior index, n. The results also reveal that the velocity profiles become more and more non-uniform across the channel width as Reω and κ̄ are increased, so oscillatory electroosmotic flow can be very useful in microfluidic devices such as micro-mixers.
Keywords: low zeta potentials, non-Newtonian, oscillatory electroosmotic flow, power-law model
Procedia PDF Downloads 167
2903 Hybrid Lateral-Directional Robust Flight Control with Propulsive Systems
Authors: Alexandra Monteiro, K. Bousson, Fernando J. O. Moreira, Ricardo Reis
Abstract:
Fixed-wing flying vehicles are usually controlled by means of control surfaces such as elevators, ailerons, and rudders. The failure of these systems may lead to severe or even fatal crashes. Such failures have led to increased research activity on propulsion control in recent decades. The present work deals with a hybrid control architecture in which the propulsion-controlled vehicle maintains its traditional control surfaces, addressing the issue of robust lateral-directional dynamics control. The challenges stem from the parameter uncertainties in the stability and control derivatives and from some unknown terms in the flight dynamics model. Two approaches are implemented and tested: linear quadratic regulation with robustness characteristics and H∞ control. The problem is centered on roll-yaw controller design with full state feedback, which is able to deal with a standalone propulsion control mode as well as a hybrid mode combining both propulsion control and conventional control surface concepts while maintaining the original flight maneuverability characteristics. Both controllers achieved very good control performance; however, the H∞ controller showed higher stabilization rates and robustness, albeit with a slightly higher control magnitude than the linear quadratic regulator.
Keywords: robust propulsion control, h-infinity control, lateral-directional flight dynamics, parameter uncertainties
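To illustrate the full-state-feedback linear quadratic regulation mentioned above, the sketch below computes an LQR gain for a placeholder lateral-directional state-space model; the matrices, weights, and state ordering are assumptions for illustration and are not the aircraft model or tuning used in the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder lateral-directional model x = [beta, p, r, phi], u = [aileron, rudder].
# The numerical entries are illustrative only.
A = np.array([[-0.10,  0.00, -1.00, 0.18],
              [-4.00, -1.20,  0.40, 0.00],
              [ 1.50, -0.05, -0.30, 0.00],
              [ 0.00,  1.00,  0.05, 0.00]])
B = np.array([[ 0.00,  0.02],
              [ 6.00,  0.50],
              [ 0.20, -2.50],
              [ 0.00,  0.00]])

# Standard LQR: minimize the integral of x'Qx + u'Ru with full state feedback u = -Kx
Q = np.diag([10.0, 1.0, 5.0, 10.0])
R = np.diag([1.0, 1.0])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # K = R^{-1} B^T P

closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print(K, closed_loop_eigs)
```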
Procedia PDF Downloads 153
2902 Decay Analysis of 118Xe* Nucleus Formed in 28Si Induced Reaction
Authors: Manoj K. Sharma, Neha Grover
Abstract:
The dynamical cluster-decay model (DCM) is applied to study the decay mechanism of the 118Xe* nucleus in reference to recent data on the 28Si + 90Zr → 118Xe* reaction, as an extension of our previous work on the dynamics of the 112Xe* nucleus. It is relevant to mention here that the DCM is based on a collective clusterization approach, in which the emission probabilities of different decay paths, such as evaporation residues (ER), intermediate mass fragments (IMF), and fission, are worked out on a parallel scale. Calculations have been done over a wide range of center-of-mass energies, Ec.m. = 65 - 92 MeV. The evaporation residue (ER) cross-sections of the 118Xe* compound nucleus are fitted to the available data using spherical and quadrupole (β2) deformed choices of decaying fragments within the optimum-orientations approach. It may be noted that our calculated cross-sections are in decent agreement with the experimental data and hence provide an opportunity to analyze the exclusive role of deformations in the fragmentation behavior of the 118Xe* nucleus. The possible contribution of IMFs is worked out, and an extensive effort is made to analyze the roles of excitation energy, angular momentum, the diffuseness parameter, and the level density parameter to gain a better understanding of the decay patterns governing the dynamics of the 28Si + 90Zr → 118Xe* reaction.
Keywords: cross-sections, deformations, fragmentation, angular momentum
Procedia PDF Downloads 316
2901 Calibration and Validation of ArcSWAT Model for Estimation of Surface Runoff and Sediment Yield from Dhangaon Watershed
Authors: M. P. Tripathi, Priti Tiwari
Abstract:
The Soil and Water Assessment Tool (SWAT) is a distributed-parameter, continuous-time model and was tested on a daily and fortnightly basis for a small agricultural watershed (Dhangaon) in Chhattisgarh state, India. The SWAT model has recently been interfaced with ArcGIS and is called ArcSWAT. The watershed and sub-watershed boundaries, drainage networks, slope, and texture maps were generated in the ArcGIS environment of ArcSWAT. A supervised classification method was used for land use/cover classification from satellite imagery of the years 2009 and 2012. Manning's roughness coefficient 'n' for overland and channel flow and the Fraction of Field Capacity (FFC) were calibrated for the monsoon seasons of 2009 and 2010. The model was validated on a daily basis for the years 2011 and 2012 using the observed daily rainfall and temperature data. The calibration and validation results revealed that the model predicted the daily surface runoff and sediment yield satisfactorily. Sensitivity analysis showed that the annual sediment yield was inversely proportional to the overland and channel 'n' values, whereas the annual runoff and sediment yields were directly proportional to the FFC. The model was also tested (calibrated and validated) for fortnightly runoff and sediment yield for the years 2009-10 and 2011-12, respectively. Simulated values of fortnightly runoff and sediment yield for the calibration and validation years compared well with their observed counterparts. The calibration and validation results revealed that the ArcSWAT model could be used for identification of critical sub-watersheds and for developing management scenarios for the Dhangaon watershed. Further, the model should be tested for simulating surface runoff and sediment yield using generated rainfall and temperature before applying it to develop management scenarios for the critical or priority sub-watersheds.
Keywords: watershed, hydrologic and water quality, ArcSWAT model, remote sensing, GIS, runoff and sediment yield
Procedia PDF Downloads 377
2900 The Role of Human Capital in the Evolution of Inequality and Economic Growth in Latin-America
Authors: Luis Felipe Brito-Gaona, Emma M. Iglesias
Abstract:
There is a growing literature that studies the main determinants and drivers of inequality and economic growth in several countries, using panel data and different estimation methods (fixed effects, the Generalized Method of Moments (GMM), and Two-Stage Least Squares (TSLS)). A recent study examined the evolution of these variables over the period 1980-2009 in the 18 countries of Latin America and found that one of the main variables explaining their evolution was Foreign Direct Investment (FDI). We extend this study to the year 2015 for the same 18 countries and find that FDI no longer plays a significant role, while schooling levels have a significant negative effect on inequality and a significant positive effect on economic growth. We also find that the point estimates associated with human capital are the largest among the variables included in the analysis, which means that an increase in human capital (measured by secondary schooling levels) is the main determinant that can help to reduce inequality and to increase economic growth in Latin America. Therefore, we advise that economic policies in Latin America should be directed towards increasing the level of education. We use fixed effects, GMM, and TSLS estimation to check the robustness of our results, and our conclusion is the same regardless of the estimation method chosen. We also find that the 2008 international recession significantly reduced economic growth in the Latin American countries.
Keywords: economic growth, human capital, inequality, Latin-America
Procedia PDF Downloads 224
2899 Application of KL Divergence for Estimation of Each Metabolic Pathway Genes
Authors: Shohei Maruyama, Yasuo Matsuyama, Sachiyo Aburatani
Abstract:
The development of methods to annotate unknown gene functions is an important task in bioinformatics. One approach to annotation is the identification of the metabolic pathway in which genes are involved. Gene expression data have been utilized for this identification, since gene expression data reflect various intracellular phenomena. However, it has been difficult to estimate gene function with high accuracy. It is considered that the low accuracy of the estimation is caused by the difficulty of accurately measuring gene expression: even when measured under the same conditions, gene expressions usually vary. In this study, we propose a feature extraction method focusing on the variability of gene expressions to estimate genes' metabolic pathways accurately. First, we estimated the distribution of each gene expression from replicate data. Next, we calculated the similarity between all gene pairs by KL divergence, which quantifies the difference between distributions. Finally, we utilized the similarity vectors as feature vectors and trained a multiclass SVM for identifying the genes' metabolic pathways. To evaluate the developed method, we applied it to budding yeast and trained a multiclass SVM to identify seven metabolic pathways. As a result, the accuracy obtained with our method was higher than that calculated from the raw gene expression data. Thus, our developed method combined with KL divergence is useful for identifying genes' metabolic pathways.
Keywords: metabolic pathways, gene expression data, microarray, Kullback–Leibler divergence, KL divergence, support vector machines, SVM, machine learning
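A minimal sketch of this kind of pipeline is shown below, assuming each gene's replicate expressions are summarized by a univariate Gaussian so that the KL divergence has a closed form; the data, pathway labels, and Gaussian assumption are illustrative stand-ins, not the authors' exact procedure.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic replicate expression data: n_genes genes, n_reps replicates each,
# with hypothetical pathway labels 0..6.
rng = np.random.default_rng(0)
n_genes, n_reps = 60, 8
expr = rng.normal(loc=rng.uniform(2, 10, n_genes)[:, None],
                  scale=rng.uniform(0.2, 1.5, n_genes)[:, None],
                  size=(n_genes, n_reps))
pathway = rng.integers(0, 7, n_genes)

mu, sd = expr.mean(axis=1), expr.std(axis=1, ddof=1)

def kl_gauss(m1, s1, m2, s2):
    # KL divergence between two univariate Gaussians N(m1, s1^2) || N(m2, s2^2)
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

# symmetrized KL divergences between all gene pairs used as feature vectors
features = np.array([[kl_gauss(mu[i], sd[i], mu[j], sd[j]) +
                      kl_gauss(mu[j], sd[j], mu[i], sd[i])
                      for j in range(n_genes)] for i in range(n_genes)])

clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(features, pathway)
print(clf.score(features, pathway))
```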
Procedia PDF Downloads 401
2898 Cooking Attributes of Rice Stored under Varying Temperature and Moisture Regimes
Authors: Lakshmi E. Jayachandran, Manepally Rajkumar, Pavuluri Srinivasa Rao
Abstract:
The objective of this research was to study the changes in the eating quality of rice during storage under varying temperature and moisture regimes. Paddy (IR-36) with a high amylose content (27%) was stored at temperatures between 10 and 40°C and moisture contents from 9 to 18% (d.b.) for 6 months. Drastic changes in color and in parameters representing cooking quality, cooked rice texture, and surface morphology occurred after 4 months of storage, especially under elevated temperature conditions. Head rice yield was stable throughout storage except under extreme conditions of temperature and moisture content. Yellowing of rice was prominent at combinations of high temperature and moisture content, both of which had a synergistic effect on the b* values of rice. The cooking time, length expansion ratio, and volume expansion ratio of all the rice samples increased with prolonged storage. The texture parameters, primarily the hardness, cohesiveness, and adhesiveness of cooked rice samples, were higher following storage at elevated temperature. Surface morphology was also significantly affected in stored rice as compared to fresh rice. Storage of rice at 10°C with a grain moisture content of 10% for 2 months gave cooked rice samples with good palatability and minimal cooking time. Temperature was found to be the most influential storage parameter for rough rice, followed by moisture content and storage duration.
Keywords: rice, cooking quality, storage, surface morphology
Procedia PDF Downloads 176
2897 Specification Requirements for a Combined Dehumidifier/Cooling Panel: A Global Scale Analysis
Authors: Damien Gondre, Hatem Ben Maad, Abdelkrim Trabelsi, Frédéric Kuznik, Joseph Virgone
Abstract:
The use of a radiant cooling solution would make it possible to lower cooling needs, which is of great interest when the demand is initially high (hot climates). However, radiant systems are not naturally compatible with humid climates, since a low-temperature surface leads to condensation risks as soon as the surface temperature is close to or lower than the dew-point temperature. A radiant cooling system combined with a dehumidification system would remove humidity from the space, thereby lowering the dew-point temperature. The humidity removal needs to be especially effective near the cooled surface. This requirement could be fulfilled by a system using a single desiccant fluid for the removal of both excess heat and moisture. This task aims at providing an estimation of the specification requirements of such a system in terms of the cooling power and dehumidification rate required to meet comfort needs and to prevent any condensation risk on the cooled panel surface. The present paper develops a preliminary study of the specification requirements, performance, and behavior of a combined dehumidifier/cooling ceiling panel under different operating conditions. This study was carried out using the TRNSYS software, which allows nodal calculations of thermal systems. It consists of the dynamic modeling of the heat and vapor balances of a 5 m x 3 m x 2.7 m office space. In a first design estimation, this room is equipped with an ideal heating, cooling, humidification, and dehumidification system so that the room temperature is always maintained between 21°C and 25°C with a relative humidity between 40% and 60%. The room is also equipped with a ventilation system that includes a heat-recovery heat exchanger and another heat exchanger connected to a heat sink. The main results show that the system should be designed to meet a cooling power of 42 W.m−2 and a desiccant rate of 45 gH2O.h−1. In a second step, a parametric study of comfort and system performance was carried out on a more realistic system (including a chilled ceiling) under different operating conditions. It enables an estimation of an acceptable range of operating conditions. This preliminary study is intended to provide useful information for the system design.
Keywords: dehumidification, nodal calculation, radiant cooling panel, system sizing
Procedia PDF Downloads 174
2896 Estimating the Receiver Operating Characteristic Curve from Clustered Data and Case-Control Studies
Authors: Yalda Zarnegarnia, Shari Messinger
Abstract:
Receiver operating characteristic (ROC) curves have been widely used in medical research to illustrate the performance of a biomarker in correctly distinguishing the diseased and non-diseased groups. Correlated biomarker data arise in study designs that include subjects who share the same genetic or environmental factors. Information about this correlation might help to identify family members at increased risk of disease development and may lead to initiating treatment to slow or stop the progression to disease. Approaches appropriate to a case-control design matched by family identification must be able to accommodate the correlation inherent in the design when estimating the biomarker's ability to differentiate between cases and controls, as well as to handle estimation from a matched case-control design. This talk will review some existing methods for ROC curve estimation in settings with correlated data from case-control designs and will discuss the limitations of current methods for analyzing correlated familial paired data. An alternative approach using conditional ROC curves will be demonstrated, to provide appropriate ROC curves for correlated paired data. The proposed approach will use the information about the correlation among biomarker values, producing conditional ROC curves that evaluate the ability of a biomarker to discriminate between diseased and non-diseased subjects in a familial paired design.
Keywords: biomarker, correlation, familial paired design, ROC curve
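For context, the sketch below computes the standard (unconditional) empirical ROC curve and AUC on synthetic biomarker values; the conditional ROC approach for correlated familial pairs described in the talk is not implemented here.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic biomarker values for non-diseased (controls) and diseased (cases) subjects.
rng = np.random.default_rng(1)
controls = rng.normal(0.0, 1.0, 200)
cases = rng.normal(1.2, 1.0, 200)

y_true = np.r_[np.zeros_like(controls), np.ones_like(cases)]
y_score = np.r_[controls, cases]

# empirical ROC: false positive rate, true positive rate, and the thresholds used
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC =", roc_auc_score(y_true, y_score))
```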
Procedia PDF Downloads 238
2895 Feasibility Study of Wind Energy Potential in Turkey: Case Study of Catalca District in Istanbul
Authors: Mohammed Wadi, Bedri Kekezoglu, Mustafa Baysal, Mehmet Rida Tur, Abdulfetah Shobole
Abstract:
This paper presents a technical evaluation of the wind potential for present and future investments in Turkey, taking into account the feasibility of sites, installation, operation, and maintenance. The evaluation is based on hourly wind speed data measured at a height of 30 m over the three years 2008–2010 for the Çatalca district. These data, obtained from the national meteorology station in Istanbul, Republic of Turkey, are analyzed in order to evaluate the feasibility of the wind power potential and to support the optimal selection of wind turbines for the area of interest. Furthermore, the data are extrapolated to and analyzed at 60 m and 80 m, taking into account the variability of the roughness factor. The Weibull bi-parameter probability function is used to approximate monthly and annual wind potential and power density based on three calculation methods, namely the approximated, the graphical, and the energy pattern factor methods. The annual mean wind power densities were found to be 400.31, 540.08, and 611.02 W/m² for heights of 30, 60, and 80 m, respectively. Simulation results indicate that the analyzed area is an appropriate place for constructing large-scale wind farms.
Keywords: wind potential in Turkey, Weibull bi-parameter probability function, the approximated method, the graphical method, the energy pattern factor method, capacity factor
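Two of the three Weibull fitting methods named above have simple closed forms: the approximated (moment) method uses k = (σ/v̄)^(-1.086) with c = v̄/Γ(1+1/k), and the energy pattern factor method uses Epf = mean(v³)/mean(v)³ with k = 1 + 3.69/Epf². The sketch below applies them to a synthetic wind series, not the Çatalca data, and evaluates the mean power density 0.5·ρ·c³·Γ(1+3/k).

```python
import numpy as np
from math import gamma

# Synthetic hourly wind speeds (m/s) for one year; stand-in for the measured series.
rng = np.random.default_rng(2)
v = rng.weibull(2.0, 8760) * 7.0

def weibull_approximated(v):
    k = (np.std(v, ddof=1) / np.mean(v)) ** -1.086     # shape parameter
    c = np.mean(v) / gamma(1 + 1 / k)                  # scale parameter (m/s)
    return k, c

def weibull_energy_pattern(v):
    epf = np.mean(v**3) / np.mean(v) ** 3              # energy pattern factor
    k = 1 + 3.69 / epf**2
    c = np.mean(v) / gamma(1 + 1 / k)
    return k, c

def power_density(k, c, rho=1.225):
    # mean wind power density (W/m^2) implied by the fitted Weibull distribution
    return 0.5 * rho * c**3 * gamma(1 + 3 / k)

k, c = weibull_approximated(v)
print(k, c, power_density(k, c))
```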
Procedia PDF Downloads 257
2894 Bayesian Inference for High Dimensional Dynamic Spatio-Temporal Models
Authors: Sofia M. Karadimitriou, Kostas Triantafyllopoulos, Timothy Heaton
Abstract:
Reduced-dimension dynamic spatio-temporal models (DSTMs) jointly describe the spatial and temporal evolution of a function observed subject to noise. A basic state-space model is adopted for the discrete temporal variation, while a continuous autoregressive structure describes the continuous spatial evolution. Application of such a DSTM relies upon the pre-selection of a suitable reduced set of basis functions, and this can present a challenge in practice. In this talk, we propose an online estimation method for high-dimensional spatio-temporal data based upon DSTMs, and we attempt to resolve this issue by allowing the basis to adapt to the observed data. Specifically, we present a wavelet decomposition in order to obtain a parsimonious approximation of the continuous spatial process. This parsimony can be achieved by placing a Laplace prior distribution on the wavelet coefficients. The aim of using the Laplace prior is to filter out wavelet coefficients with low contribution and thus achieve dimension reduction with significant computational savings. We then propose a hierarchical Bayesian state-space model, for the estimation of which we offer an appropriate particle filter algorithm. The proposed methodology is illustrated using real environmental data.
Keywords: multidimensional Laplace prior, particle filtering, spatio-temporal modelling, wavelets
Procedia PDF Downloads 424
2893 Taguchi-Based Optimization of Surface Roughness and Dimensional Accuracy in Wire EDM Process with S7 Heat Treated Steel
Authors: Joseph C. Chen, Joshua Cox
Abstract:
This research focuses on the use of the Taguchi method to reduce the surface roughness and improve the dimensional accuracy of parts machined by wire electrical discharge machining (EDM) from S7 heat-treated steel. Due to its high impact toughness, the material is a candidate for a wide variety of tooling applications that require high dimensional precision and a desired surface roughness. This paper demonstrates that the Taguchi parameter design methodology is able to optimize both dimensional accuracy and surface roughness by investigating seven controllable wire-EDM parameters: pulse on time (ON), pulse off time (OFF), servo voltage (SV), voltage (V), servo feed (SF), wire tension (WT), and wire speed (WS). The temperature of the water in the wire EDM process is investigated as the noise factor in this research. Experimental design and analysis based on an L18 Taguchi orthogonal array are conducted. This paper demonstrates that the Taguchi-based system enables the wire EDM process to produce (1) high-precision parts with an average dimension of 0.6601 inches, against a desired dimension of 0.6600 inches; and (2) a surface roughness of 1.7322 microns, significantly improved from 2.8160 microns.
Keywords: Taguchi Parameter Design, surface roughness, Wire EDM, dimensional accuracy
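In a Taguchi analysis of this kind, a smaller-the-better response such as surface roughness is typically compared across factor levels through the signal-to-noise ratio S/N = -10·log10(mean(y²)). The sketch below applies that formula to invented roughness replicates for a single factor; it is not the paper's L18 data.

```python
import numpy as np

# Smaller-the-better signal-to-noise ratio used in Taguchi analysis:
#   S/N = -10 * log10( mean(y^2) )
def sn_smaller_is_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# hypothetical: 3 levels of one factor (e.g., pulse-on time), 3 roughness replicates each
roughness = {
    "ON_level_1": [2.9, 3.0, 2.8],
    "ON_level_2": [2.2, 2.4, 2.3],
    "ON_level_3": [1.8, 1.7, 1.9],
}
sn = {level: sn_smaller_is_better(y) for level, y in roughness.items()}
best = max(sn, key=sn.get)   # a larger S/N is better for a smaller-the-better response
print(sn, best)
```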
Procedia PDF Downloads 371
2892 Assessment of DNA Degradation Using Comet Assay: A Versatile Technique for Forensic Application
Authors: Ritesh K. Shukla
Abstract:
Degradation of biological samples in terms of macromolecules (DNA, RNA, and protein) is a major challenge in forensic investigation and can mislead the interpretation of results. Currently, there are no precise methods available to circumvent this problem; therefore, at the preliminary level, some methods are urgently needed to address this issue. In this regard, the comet assay is one of the most versatile, rapid, and sensitive molecular biology techniques for assessing DNA degradation. This technique helps to assess DNA degradation even with very small amounts of sample. Moreover, a convenient aspect of this method is that it does not require any additional DNA extraction or isolation steps during the degradation assessment. Samples are embedded directly on an agarose pre-coated microscopic slide, and electrophoresis is performed on the same slide after the lysis step. After electrophoresis, the slide is stained with a DNA-binding dye and observed under a fluorescence microscope equipped with Komet software. With the help of this technique, the extent of DNA degradation can be assessed, which can help to screen a sample before DNA fingerprinting to determine whether it is appropriate for DNA analysis. This technique not only helps to assess DNA degradation; many other challenges in forensic investigation, such as estimating the time since deposition of biological fluids, repairing genetic material from degraded biological samples, and early estimation of the time since death, could also be addressed. This study attempts to explore the application of a well-known molecular biology technique, the comet assay, in the field of forensic science. This assay will open new avenues in forensic research and development.
Keywords: comet assay, DNA degradation, forensic, molecular biology
Procedia PDF Downloads 153
2891 Estimation of Normalized Glandular Doses Using a Three-Layer Mammographic Phantom
Authors: Kuan-Jen Lai, Fang-Yi Lin, Shang-Rong Huang, Yun-Zheng Zeng, Po-Chieh Hsu, Jay Wu
Abstract:
The normalized glandular dose (DgN) estimates the energy deposition of mammography in clinical practice. Monte Carlo simulations frequently use a uniformly mixed phantom for calculating the conversion factor. However, breast tissues are not uniformly distributed, leading to errors in conversion factor estimation. This study constructed a three-layer phantom to estimate the normalized glandular dose more accurately. The MCNP code (Monte Carlo N-Particle code) was used to create the geometric structure. We simulated three target/filter combinations (Mo/Mo, Mo/Rh, Rh/Rh), six voltages (25–35 kVp), six HVL parameters, and nine breast phantom thicknesses (2–10 cm) for the three-layer mammographic phantom. The conversion factors for 25%, 50%, and 75% glandularity were calculated. The error of the conversion factors compared with the results of the American College of Radiology (ACR) was within 6%; for Rh/Rh, the difference was within 9%. The difference between the 50% average glandularity and the uniform phantom ranged from 7.1% to -6.7% for the Mo/Mo combination at a voltage of 27 kVp, a half-value layer of 0.34 mmAl, and a breast thickness of 4 cm. According to the simulation results, regression analysis showed that the three-layer mammographic phantom at 0%–100% glandularity can be used to accurately calculate the conversion factors. Differences in glandular tissue distribution lead to errors in conversion factor calculation; the three-layer mammographic phantom can provide accurate estimates of glandular dose in clinical practice.
Keywords: Monte Carlo simulation, mammography, normalized glandular dose, glandularity
Procedia PDF Downloads 188
2890 [Keynote Talk]: Discovering Liouville-Type Problems for p-Energy Minimizing Maps in Closed Half-Ellipsoids by Calculus Variation Method
Authors: Lina Wu, Jia Liu, Ye Li
Abstract:
The goal of this project is to investigate constancy properties (the Liouville-type problem) for a p-stable map, as a local or global minimum of a p-energy functional, where the domain is a Euclidean space and the target space is a closed half-ellipsoid. The first and second variation formulas for a p-energy functional have been applied in the calculus of variations as computational techniques. Stokes' theorem, the Cauchy-Schwarz inequality, Hardy-Sobolev type inequalities, and the Bochner formula have been used as estimation techniques to bound from below and above the derived p-harmonic stability inequality. One challenging point in this project is to construct a family of variation maps such that the images of the variation maps are guaranteed to lie in the closed half-ellipsoid. The other challenging point is to find a contradiction between the lower bound and the upper bound in the analysis of the p-harmonic stability inequality when a p-energy minimizing map is not constant. Therefore, the possibility of a non-constant p-energy minimizing map has been ruled out, and the constancy property for a p-energy minimizing map has been obtained. Our research finding is the constancy property for a p-stable map from a Euclidean space into a closed half-ellipsoid for a certain range of p. This range of p is determined by the dimensions of the Euclidean space (the domain) and the ellipsoid (the target space), and it is also bounded by the curvature values on the ellipsoid (that is, the ratio of the longest axis to the shortest axis). Regarding Liouville-type results for a p-stable map, our finding on an ellipsoid is a generalization of existing results on a sphere. Our result also extends Liouville-type results from a special ellipsoid with only one parameter to any ellipsoid with (n+1) parameters in the general setting.
Keywords: Bochner formula, Calculus Stokes' Theorem, Cauchy-Schwarz Inequality, first and second variation formulas, Liouville-type problem, p-harmonic map
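For reference, in its standard form (normalization conventions may differ from those used in the talk), the p-energy functional and the p-harmonic map equation satisfied by its critical points read:

```latex
% Standard p-energy functional for a map u : \Omega \subset \mathbb{R}^m \to N
% (here N would be the closed half-ellipsoid); normalizations may differ from the talk.
E_p(u) = \frac{1}{p}\int_{\Omega} |\mathrm{d}u|^{p}\,\mathrm{d}x , \qquad p \ge 2 ,
% and a critical point (a p-harmonic map) satisfies
\operatorname{div}\!\left(|\mathrm{d}u|^{p-2}\,\mathrm{d}u\right)
  + |\mathrm{d}u|^{p-2}\, A(u)(\mathrm{d}u,\mathrm{d}u) = 0 ,
% where A denotes the second fundamental form of N in the ambient space.
```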
Procedia PDF Downloads 273
2889 Earnings vs Cash Flows: The Valuation Perspective
Authors: Megha Agarwal
Abstract:
This research paper is an effort to compare earnings-based and cash-flow-based methods of valuing an enterprise. The theoretically equivalent methods based on earnings, such as the Residual Earnings Model (REM), the Abnormal Earnings Growth Model (AEGM), the Residual Operating Income Method (ReOIM), and the Abnormal Operating Income Growth Model (AOIGM) together with its extensions to multipliers such as the price/earnings ratio and price/book value ratio, or on cash flows, such as the Dividend Valuation Method (DVM) and the Free Cash Flow Method (FCFM), all provide different estimates of the value of the Indian corporate giant Reliance India Limited (RIL). An ex-post analysis of published accounting and financial data for four financial years from 2008-09 to 2011-12 has been conducted. A comparison of these valuation estimates with the actual market capitalization of the company shows that the complex accounting-based model AOIGM provides the closest forecasts. The differing estimates may arise from inconsistencies in the discount rate, growth rates, and the other forecasted variables. Although inputs for earnings-based models may be available to investors and analysts through published statements, precise estimation of free cash flows may be better undertaken by internal management. Estimation of value from more stable parameters, such as residual operating income and RNOA, could be considered superior to valuations based on the more volatile return on equity.
Keywords: earnings, cash flows, valuation, Residual Earnings Model (REM)
Procedia PDF Downloads 375
2888 Immunosuppressive Effect of Chloroquine through the Inhibition of Myeloperoxidase
Authors: J. B. Minari, O. B. Oloyede
Abstract:
Polymorphonuclear neutrophils (PMNs) play a crucial role in combating a variety of infections caused by bacteria, fungi, and parasites. Indeed, the involvement of PMNs in host defence against Plasmodium falciparum is well documented both in vitro and in vivo. Many antimalarial drugs, such as chloroquine, used in the treatment of human malaria significantly reduce the immune response of the host in vitro and in vivo. Myeloperoxidase is the most abundant enzyme in the polymorphonuclear neutrophil and plays a crucial role in its function. This study was carried out to investigate the effect of chloroquine on the enzyme. In investigating the effects of the drug on myeloperoxidase, the influence of concentration and pH, the partition ratio, and the kinetics of inhibition were studied. This study showed that chloroquine is a concentration-dependent inhibitor of myeloperoxidase with an IC50 of 0.03 mM. Partition ratio estimation showed that 40 enzymatic turnover cycles are required for complete inhibition of myeloperoxidase in the presence of chloroquine. The study of the influence of pH showed significant inhibition of myeloperoxidase by chloroquine at physiological pH. The kinetic inhibition studies showed that chloroquine caused non-competitive inhibition with an inhibition constant Ki of 0.27 mM. The results obtained from this study show that chloroquine is a potent inhibitor of myeloperoxidase and is capable of inactivating the enzyme. It is therefore considered that the inhibition of myeloperoxidase in the presence of chloroquine, as revealed in this study, may partly explain the impairment of polymorphonuclear neutrophils and the consequent immunosuppression of the host defence system against secondary infections.
Keywords: myeloperoxidase, chloroquine, inhibition, neutrophil, immune
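For reference, the classical rate law for pure non-competitive inhibition, the scheme named above, has the following form (the exact kinetic model fitted in the study may differ):

```latex
% Pure non-competitive inhibition rate law (for reference only):
v \;=\; \frac{V_{\max}\,[S]}{\left(K_m + [S]\right)\left(1 + \dfrac{[I]}{K_i}\right)},
\qquad K_i \approx 0.27\ \mathrm{mM}\ \text{for chloroquine in this study}.
```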
Procedia PDF Downloads 372
2887 Effect Analysis of an Improved Adaptive Speech Noise Reduction Algorithm in Online Communication Scenarios
Authors: Xingxing Peng
Abstract:
With the development of society, there are more and more online communication scenarios, such as teleconferencing and online education. In conference communication, voice quality is a very important part, and noise may greatly reduce the effectiveness of communication for the participants. Therefore, speech noise reduction has an important impact on scenarios such as voice calls. This research focuses on the key technologies of the sound transmission process; the purpose is to preserve audio quality as much as possible so that the listener hears clearer and smoother sound. Firstly, to address the problem that traditional speech enhancement algorithms perform poorly when dealing with non-stationary noise, an adaptive speech noise reduction algorithm is studied in this paper. Traditional noise estimation methods are mainly designed for stationary noise. Here, we study the spectral characteristics of different noise types, especially the characteristics of non-stationary burst noise, and design a noise estimator module to deal with non-stationary noise. Noise features are extracted from non-speech segments, and the noise estimation module is adjusted in real time according to the different noise characteristics. This adaptive algorithm can enhance speech according to the noise characteristics and improves the performance of traditional algorithms in dealing with non-stationary noise, thereby achieving a better enhancement effect. The experimental results show that the proposed algorithm is effective and adapts better to different types of noise, yielding a better speech enhancement effect.
Keywords: speech noise reduction, speech enhancement, self-adaptation, Wiener filter algorithm
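As a baseline illustration of the Wiener-filter-style enhancement referenced in the keywords, the sketch below applies a simple spectral gain with the noise spectrum estimated from leading frames assumed to contain no speech; the adaptive burst-noise estimator proposed in the paper is not reproduced here, and the signal is synthetic.

```python
import numpy as np
from scipy.signal import stft, istft

# Simplified Wiener-type spectral gain with the noise power estimated from the
# first few (assumed non-speech) frames.
def enhance(x, fs, nperseg=512, noise_frames=10, gain_floor=0.1):
    f, t, X = stft(x, fs=fs, nperseg=nperseg)
    noise_psd = np.mean(np.abs(X[:, :noise_frames]) ** 2, axis=1, keepdims=True)
    snr_post = np.abs(X) ** 2 / (noise_psd + 1e-12)
    gain = np.maximum(1.0 - 1.0 / snr_post, gain_floor)   # Wiener-like gain per bin
    _, y = istft(gain * X, fs=fs, nperseg=nperseg)
    return y

# toy example: a sine "speech" signal plus white noise, preceded by a noise-only lead-in
fs = 16000
t = np.arange(0, 2.0, 1 / fs)
clean = 0.5 * np.sin(2 * np.pi * 440 * t) * (t > 0.5)     # silent lead-in first
noisy = clean + 0.05 * np.random.default_rng(3).standard_normal(t.size)
enhanced = enhance(noisy, fs)
```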
Procedia PDF Downloads 55
2886 Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis
Authors: Petr Gurný
Abstract:
One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. In this paper, the possibility of determining a financial institution's PD using credit-scoring models is discussed. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression, and probit regression) from a sample of almost three hundred US commercial banks. Afterwards, these models are compared and verified on a control sample with a view to choosing the best one. The second part of the paper applies the chosen model to the portfolio of three key Czech banks to estimate their present financial stability. However, it is no less important to be able to estimate the future evolution of PD. For this reason, the second task in this paper is to estimate the probability distribution of the future PD for the Czech banks. To this end, the values of particular indicators are sampled randomly and the distribution of PDs is estimated, assuming that the indicators are distributed according to a multidimensional subordinated Lévy model (the Variance Gamma model and the Normal Inverse Gaussian model, in particular). Although the obtained results show that all banks are relatively healthy, there is still a high chance that 'a financial crisis' will occur, at least in terms of probability. This is indicated by the estimates of various quantiles of the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.
Keywords: credit-scoring models, multidimensional subordinated Lévy model, probability of default
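To illustrate the credit-scoring idea behind the logit variant, the sketch below fits a logistic regression to synthetic bank indicators and converts the fitted score into a PD; the indicators, coefficients, and data are hypothetical and not the paper's estimated model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical bank indicators: capital adequacy ratio, return on assets,
# non-performing loan ratio. All values below are synthetic.
rng = np.random.default_rng(4)
n = 300
X = np.column_stack([
    rng.normal(0.12, 0.03, n),   # capital adequacy ratio
    rng.normal(0.01, 0.01, n),   # return on assets
    rng.normal(0.05, 0.03, n),   # non-performing loan ratio
])
# default indicator generated from an assumed linear score, for illustration only
score = -2.0 - 20 * X[:, 0] - 50 * X[:, 1] + 30 * X[:, 2]
y = rng.uniform(size=n) < 1 / (1 + np.exp(-score))

model = LogisticRegression().fit(X, y)
pd_hat = model.predict_proba([[0.10, 0.005, 0.08]])[:, 1]   # PD of a new bank
print(pd_hat)
```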
Procedia PDF Downloads 453
2885 The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application on Accident Data
Authors: Y. Sunecher, N. Mamode Khan, V. Jowaheer
Abstract:
This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average process of order one (BINARMA(1,1)) with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Based on these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) model are derived iteratively using some initial stationary moments. As regards the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution properties. The forecasting equations of the BINARMA(1,1) model are also derived. A simulation study is also proposed in which BINARMA(1,1) count data are generated using multivariate Poisson R code for the innovation terms. The performance of the BINARMA(1,1) model is then assessed through a simulation experiment, and the mean estimates of the model parameters obtained are all efficient, based on their standard errors. The proposed model is then used to analyse real-life accident data from the motorway in Mauritius, based on several covariates: policemen, daily patrols, speed cameras, traffic lights, and roundabouts. The BINARMA(1,1) model is applied to the accident data, and the CML estimates clearly indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius. The forecasting equations also provide reliable one-step-ahead forecasts.
Keywords: non-stationary, BINARMA(1, 1) model, Poisson innovations, conditional maximum likelihood, CML
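The binomial thinning operator at the core of this model class is defined by α∘X ~ Binomial(X, α). The sketch below simulates a simplified univariate INARMA(1,1)-type recursion with Poisson innovations built from that operator; it is a stand-in for illustration only and does not reproduce the paper's bivariate BINARMA(1,1) with correlated innovations.

```python
import numpy as np

# Univariate sketch of an INARMA(1,1)-type recursion built from binomial thinning,
#   X_t = alpha o X_{t-1} + eps_t + beta o eps_{t-1},  where alpha o X ~ Binomial(X, alpha),
# with Poisson innovations eps_t.
rng = np.random.default_rng(5)

def thin(count, prob):
    return rng.binomial(count, prob)

def simulate_inarma11(n, alpha=0.4, beta=0.3, lam=2.0):
    x = np.zeros(n, dtype=int)
    eps_prev = rng.poisson(lam)
    for t in range(1, n):
        eps = rng.poisson(lam)
        x[t] = thin(x[t - 1], alpha) + eps + thin(eps_prev, beta)
        eps_prev = eps
    return x

counts = simulate_inarma11(500)
print(counts.mean(), counts.var())
```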
Procedia PDF Downloads 128