Search results for: error rate
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9317

8957 The Effect of Heart Rate and Valence of Emotions on Perceived Intensity of Emotion

Authors: Madeleine Nicole G. Bernardo, Katrina T. Feliciano, Marcelo Nonato A. Nacionales III, Diane Frances M. Peralta, Denise Nicole V. Profeta

Abstract:

This study aims to find out if heart rate variability and valence of emotion have an effect on perceived intensity of emotion. Psychology undergraduates (N = 60) from the University of the Philippines Diliman were shown 10 photographs from the Japanese Female Facial Expression (JAFFE) Database, along with a corresponding questionnaire with a Likert scale on perceived intensity of emotion. In this 3 x 2 mixed-subjects factorial design, each group was made either to do a simple exercise prior to answering the questionnaire in order to increase the heart rate, to listen to a heart rate of 120 bpm, or to colour a drawing to keep the heart rate stable. After the activity, the participants answered the questionnaire, rating the faces according to their perceived emotional intensity. The photographs presented were of either positive or negative emotional valence. The results of the experiment showed that neither an induced fast heart rate nor a perceived fast heart rate had any significant effect on the participants’ perceived intensity of emotion. There was also no interaction effect of heart rate variability and valence of emotion. The non-significance of the results was explained by the Philippines’ high-context culture, accompanied by the prevalence of both intensely valenced positive and negative emotions in Philippine society. The non-significant effects were also attributed to the Cannon-Bard theory, the Schachter-Singer theory and various methodological limitations.

Keywords: heart rate variability, perceived intensity of emotion, Philippines, valence of emotion

Procedia PDF Downloads 222
8956 Optimal ECG Sampling Frequency for Multiscale Entropy-Based HRV

Authors: Manjit Singh

Abstract:

Multiscale entropy (MSE) is an extensively used index that provides a general understanding of the multiple complexities of the physiologic mechanisms of heart rate variability (HRV), which operate over a wide range of time scales. Accurate selection of the electrocardiogram (ECG) sampling frequency is an essential concern for clinically significant HRV quantification; a high ECG sampling rate increases memory requirements and processing time, whereas a low sampling rate degrades signal quality and results in clinically misinterpreted HRV. In this work, the impact of ECG sampling frequency on MSE-based HRV has been quantified. MSE measures are found to be sensitive to ECG sampling frequency, and the effect of sampling frequency is a function of time scale.
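The coarse-graining and sample-entropy steps behind MSE can be sketched as follows. This is a minimal illustration on a synthetic RR-interval series; the abstract's ECG data and its sampling-rate sweep are not reproduced, and the template length and tolerance settings are assumed typical values.

```python
import numpy as np

def coarse_grain(x, scale):
    """Average consecutive, non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m, r):
    """SampEn = -ln(A/B): B counts template matches of length m,
    A counts matches of length m + 1, both within tolerance r."""
    def matches(length):
        t = np.array([x[i:i + length] for i in range(len(x) - length)])
        c = 0
        for i in range(len(t) - 1):
            c += np.sum(np.max(np.abs(t[i + 1:] - t[i]), axis=1) <= r)
        return c
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale, m=2, r_factor=0.15):
    r = r_factor * np.std(x)  # tolerance fixed from the original series
    return [sample_entropy(coarse_grain(x, s), m, r)
            for s in range(1, max_scale + 1)]

rng = np.random.default_rng(0)
rr = rng.normal(0.8, 0.05, 2000)  # synthetic RR-interval series (seconds)
mse = multiscale_entropy(rr, max_scale=3)
```

Repeating the computation on beat sequences extracted from ECGs digitized at different sampling rates would expose the sensitivity the abstract reports.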

Keywords: ECG (electrocardiogram), heart rate variability (HRV), multiscale entropy, sampling frequency

Procedia PDF Downloads 240
8955 Aggregate Supply Response of Some Livestock Commodities in Algeria: Cointegration-Vector Error Correction Model Approach

Authors: Amine M. Benmehaia, Amine Oulmane

Abstract:

The supply response of agricultural commodities to changes in price incentives is an important issue for the success of any policy reform in the agricultural sector. This study aims to quantify the responsiveness of producers of some livestock commodities to price incentives in the Algerian context. Time series analysis is used on annual data for a period of 52 years (1966-2018). Both cointegration and a vector error correction model (VECM) are used through the Nerlove model of partial adjustment. The study attempts to determine the long-run and short-run relationships, along with the magnitudes of disequilibria, in the selected commodities. Results show that the short-run price elasticities are low in the cow and sheep meat sectors (8.7% and 8% respectively), while their respective long-run elasticities are 16.5% and 10.5%, whereas eggs and milk have very high short-run price elasticities (82% and 90% respectively) with long-run elasticities of 40% and 46% respectively. The error correction coefficient, reflecting the speed of adjustment towards the long-run equilibrium, is statistically significant and has the expected negative sign. Its estimates are 12.7% for cow meat, 33.5% for sheep meat, 46.7% for eggs and 8.4% for milk. It seems that cow meat and milk producers have weak feedback, correcting only about 12.7% and 8.4% respectively of the previous year's disequilibrium from the long-run equilibrium, whereas sheep meat and egg producers adjust to correct long-run disequilibrium with a high speed of adjustment (33.5% and 46.7% respectively). The implication of this is that much more in-depth research is needed to identify those factors that affect agricultural supply and to describe the effect of factors that shift supply in response to price incentives. This could provide valuable information for the government in the use of appropriate policy measures.
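The two-step logic behind a cointegration/error-correction estimate can be sketched on simulated data. The series, parameter values, and plain OLS approach below are illustrative assumptions, not the paper's Algerian dataset or its Nerlove specification.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
price = np.cumsum(rng.normal(0, 1, T))               # I(1) price series
supply = 2.0 + 0.5 * price + rng.normal(0, 0.5, T)   # cointegrated with price

# Step 1: long-run (cointegrating) regression: supply_t = a + b*price_t + u_t
X = np.column_stack([np.ones(T), price])
beta_lr, *_ = np.linalg.lstsq(X, supply, rcond=None)
u = supply - X @ beta_lr                             # disequilibrium term

# Step 2: ECM: d(supply)_t = c + g*d(price)_t + phi*u_{t-1} + e_t
dy, dx, u_lag = np.diff(supply), np.diff(price), u[:-1]
Z = np.column_stack([np.ones(T - 1), dx, u_lag])
coef, *_ = np.linalg.lstsq(Z, dy, rcond=None)
phi = coef[2]  # speed of adjustment; expected negative
```

The coefficient `phi` on the lagged disequilibrium term is the speed-of-adjustment estimate; its expected negative sign mirrors the error correction coefficients reported above.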

Keywords: Algeria, cointegration, livestock, supply response, vector error correction model

Procedia PDF Downloads 106
8954 Enhancement of Coupler-Based Delay Line Filters Modulation Techniques Using Optical Wireless Channel and Amplifiers at 100 Gbit/s

Authors: Divya Sisodiya, Deepika Sipal

Abstract:

Optical wireless communication (OWC) is a relatively new technology in optical communication systems that allows for high-speed wireless optical communication. This research focuses on developing a cost-effective OWC system using a hybrid configuration of optical amplifiers, including EDFA amplifiers. A comparison study was conducted to determine which modulation technique is more effective for communication. This research examines the performance of an OWC system based on ASK and PSK modulation techniques by varying OWC parameters under various atmospheric conditions such as rain, mist, haze, and snow. Finally, the simulation results are discussed and analyzed.
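As a point of reference for the ASK-versus-PSK comparison, the textbook bit error rates over an additive white Gaussian noise channel can be computed in closed form. These expressions assume coherent detection on an AWGN channel, not the attenuated atmospheric OWC channel simulated in the study.

```python
import math

def ber_bpsk(ebn0_db):
    """Theoretical BER of coherent binary PSK in AWGN: Q(sqrt(2*Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0))

def ber_ook(ebn0_db):
    """Theoretical BER of coherent on-off ASK (OOK) in AWGN: Q(sqrt(Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0 / 2))
```

Under these assumptions, binary PSK needs 3 dB less Eb/N0 than on-off ASK for the same error rate, consistent with PSK generally outperforming ASK in such comparisons.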

Keywords: OWC, bit error rate, amplitude shift keying, phase shift keying, attenuation, amplifiers

Procedia PDF Downloads 107
8953 Continuous Differential Evolution Based Parameter Estimation Framework for Signal Models

Authors: Ammara Mehmood, Aneela Zameer, Muhammad Asif Zahoor Raja, Muhammad Faisal Fateh

Abstract:

In this work, the strength of a bio-inspired computational intelligence technique is exploited for parameter estimation of periodic signals using Continuous Differential Evolution (CDE), by defining an error function in the mean-square sense. The multidimensional and nonlinear nature of the problem arising in sinusoidal signal models, along with noise, makes it a challenging optimization task, which is addressed through the robustness and effectiveness of CDE to ensure convergence and avoid trapping in local minima. In the proposed scheme of Continuous Differential Evolution based Signal Parameter Estimation (CDESPE), unknown adjustable weights of the signal system identification model are optimized utilizing the CDE algorithm. The performance of the CDESPE model is validated through various statistics-based performance indices, over a sufficiently large number of runs, in terms of estimation error, mean squared error and Theil's inequality coefficient. The efficacy of CDESPE is examined by comparison with the actual parameters of the system, Genetic Algorithm based outcomes, and various deterministic approaches at different signal-to-noise ratio (SNR) levels.
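The core of such a scheme, a differential evolution search minimizing a mean-square error between a noisy sinusoid and its model, can be sketched as follows. The DE/rand/1/bin variant, the population settings, and the signal parameters are illustrative assumptions rather than the paper's exact CDE formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
# Noisy sinusoid with unknown amplitude, frequency, phase
y = 1.5 * np.sin(2 * np.pi * 4.0 * t + 0.8) + rng.normal(0, 0.05, t.size)

def mse(p):
    A, f, phi = p
    return np.mean((y - A * np.sin(2 * np.pi * f * t + phi)) ** 2)

# Differential evolution (DE/rand/1/bin) over bounds for (A, f, phi)
lo, hi = np.array([0.1, 1.0, 0.0]), np.array([3.0, 8.0, 2 * np.pi])
NP, F, CR = 30, 0.7, 0.9
pop = lo + rng.random((NP, 3)) * (hi - lo)
cost = np.array([mse(p) for p in pop])
for _ in range(300):
    for i in range(NP):
        idx = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        trial = np.where(rng.random(3) < CR, a + F * (b - c), pop[i])
        trial = np.clip(trial, lo, hi)
        tc = mse(trial)
        if tc < cost[i]:          # greedy selection
            pop[i], cost[i] = trial, tc
best = pop[np.argmin(cost)]       # estimated (A, f, phi)
```

After the run, `best` should approach the true parameters (1.5, 4.0, 0.8), with the residual cost near the noise floor.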

Keywords: parameter estimation, bio-inspired computing, continuous differential evolution (CDE), periodic signals

Procedia PDF Downloads 274
8952 Cellular Traffic Prediction through Multi-Layer Hybrid Network

Authors: Supriya H. S., Chandrakala B. M.

Abstract:

Deep learning based models have recently been successfully adopted for network traffic prediction. However, training a deep learning model for various prediction tasks is considered one of the critical tasks due to various reasons. This research work develops a Multi-Layer Hybrid Network (MLHN) for network traffic prediction and analysis; MLHN comprises three distinct networks for handling the different inputs for custom feature extraction. Furthermore, an optimized and efficient parameter-tuning algorithm is introduced to enhance parameter learning. MLHN is evaluated on the “Big Data Challenge” dataset using Mean Absolute Error, Root Mean Square Error and R² as metrics; furthermore, MLHN's efficiency is demonstrated through comparison with a state-of-the-art approach.

Keywords: MLHN, network traffic prediction

Procedia PDF Downloads 57
8951 Response Surface Methodology to Optimize the Performance of a CO2 Geothermal Thermosyphon

Authors: Badache Messaoud

Abstract:

Geothermal thermosyphons (GTs) are increasingly used in many heating and cooling geothermal applications owing to their high heat transfer performance. This paper proposes a response surface methodology (RSM) to investigate and optimize the performance of a CO2 geothermal thermosyphon. The filling ratio (FR), temperature, and flow rate of the heat transfer fluid are selected as the design parameters, and heat transfer rate and effectiveness are adopted as response parameters (objective functions). First, a dedicated experimental GT test bench filled with CO2 was built and subjected to different test conditions. An RSM was used to establish corresponding models between the input parameters and responses. Various diagnostic tests were used to assess the quality and validity of the best-fit models, which explain 98.9% and 99.2% of the output variability, respectively. Overall, it is concluded from the RSM analysis that the heat transfer fluid inlet temperature and the flow rate are the factors that have the greatest impact on heat transfer rate (Q) and effectiveness (εff), while the FR has only a slight effect on Q and no effect on εff. The maximum heat transfer rate and effectiveness achieved are 1.86 kW and 47.81%, respectively. Moreover, these optimal values are associated with different flow rate levels (mc level = 1 for Q and -1 for εff), indicating distinct operating regions for maximizing Q and εff within the GT system. Therefore, a multilevel optimization approach is necessary to optimize both the heat transfer rate and effectiveness simultaneously.
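The second-order model fitting at the heart of RSM can be sketched with ordinary least squares on synthetic data. The variable ranges, units, and location of the optimum below are invented for illustration and do not reproduce the paper's test-bench measurements.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic "response" with a maximum near (T, m) = (30, 0.5)
T = rng.uniform(10, 50, 60)    # hypothetical inlet temperature (deg C)
m = rng.uniform(0.1, 1.0, 60)  # hypothetical flow rate (kg/s)
Q = 2.0 - 0.004 * (T - 30) ** 2 - 1.5 * (m - 0.5) ** 2 + rng.normal(0, 0.02, 60)

# Fit a full second-order response surface: Q ~ 1, T, m, T^2, m^2, T*m
X = np.column_stack([np.ones_like(T), T, m, T**2, m**2, T * m])
b, *_ = np.linalg.lstsq(X, Q, rcond=None)

# Stationary point of the fitted quadratic: solve grad = 0
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = -np.array([b[1], b[2]])
T_opt, m_opt = np.linalg.solve(H, g)
```

The diagnostic step in the abstract corresponds to checking how much of the response variability this fitted surface explains before trusting the located optimum.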

Keywords: geothermal thermosyphon, CO2, response surface methodology, heat transfer performance

Procedia PDF Downloads 42
8950 Progression Rate, Prevalence, and Incidence of Black Band Disease on Stony Coral (Scleractinia) in Barranglompo Island, South Sulawesi

Authors: Baso Hamdani, Arniati Massinai, Jamaluddin Jompa

Abstract:

Coral diseases are one of the factors that affect reef degradation. This research analysed the progression rate, incidence, and prevalence of Black Band Disease (BBD) on stony coral (Pachyseris sp.) in relation to the environmental parameters pH, nitrate, phosphate, dissolved organic matter (DOM), and turbidity. The incidence of coral disease was measured weekly for 6 weeks using the Belt Transect Method. The progression rate of BBD was measured manually. Furthermore, the prevalence and incidence of BBD were calculated for each infected colony. The relationship between the environmental parameters and the progression rate, prevalence and incidence of BBD was analysed by Principal Component Analysis (PCA). The results showed an average progression rate of 0.07 ± 0.02 cm/day. The prevalence of BBD increased from 0.92% to 19.73% over 7 weeks of observation, with an average incidence of newly infected coral colonies of 0.2 - 0.65 colony/day. The most important environmental factors were pH, nitrate, phosphate, DOM, and turbidity.

Keywords: progression rate, incidence, prevalence, Black Band Disease, Barranglompo

Procedia PDF Downloads 619
8949 Modelling Vehicle Fuel Consumption Utilising Artificial Neural Networks

Authors: Aydin Azizi, Aburrahman Tanira

Abstract:

The main source of energy used in this modern age is fossil fuels. There is a myriad of problems that come with the use of fossil fuels, out of which the issues with the greatest impact are their scarcity and the cost they impose on the planet. Fossil fuels are the only plausible option for many vital functions and processes; the most important of these is transportation. Thus, using this source of energy wisely and as efficiently as possible is a must. The aim of this work was to explore utilising mathematical modelling and artificial intelligence techniques to enhance fuel consumption in passenger cars by focusing on the speed at which cars are driven. An artificial neural network with an error of less than 0.05 was developed to be applied practically to predict the rate of fuel consumption in vehicles.

Keywords: mathematical modeling, neural networks, fuel consumption, fossil fuel

Procedia PDF Downloads 373
8948 Exclusive Breastfeeding Abandonment among Adolescent Mothers: A Cohort Study

Authors: Maria I. Nuñez-Hernández, Maria L. Riesco

Abstract:

Background: Exclusive breastfeeding (EBF) up to 6 months of age has been considered one of the most important factors in the overall development of children. Nevertheless, as resources are scarce, it is essential to identify the most vulnerable groups that have a major risk of EBF abandonment, in order to deliver the best strategies. Children of adolescent mothers are within these groups. Aims: To determine the EBF abandonment rate among adolescent mothers and to analyze the associated factors. Methods: Prospective cohort study of adolescent mothers in the southern area of Santiago, Chile, conducted in primary care services of the public health system. The cohort was established from 2014 to 2015, with a sample of 105 adolescent mothers and their children at 2 months of life. The inclusion criteria were: adolescent mother from 14 to 19 years old; no twin babies; mother and baby leaving the hospital together after childbirth; correct attachment of the baby to the breast; no difficulty understanding the Spanish language or communicating. Follow-up was performed at 4 and 6 months of age. Data were collected by interviews, considering EBF as breastfeeding only, without adding other milk, tea, juice, water or any other product that is not breast milk, except drugs. Data were analyzed by descriptive and inferential statistics, using the Kaplan-Meier estimator and the Log-Rank test, admitting a probability of type I error of 5% (p-value = 0.05). Results: The cumulative EBF abandonment rate at 2, 4 and 6 months was 33.3%, 52.2% and 63.8%, respectively. Factors associated with EBF abandonment were maternal perception of the quality of milk as poor (p < 0.001), maternal perception that the child was not satisfied after breastfeeding (p < 0.001), use of a pacifier (p < 0.001), maternal consumption of illicit drugs after delivery (p < 0.001), the mother's return to school (p = 0.040) and presence of nipple trauma (p = 0.045).
Conclusion: The EBF abandonment rate was higher in the first 4 months of life and is higher than in the general population of breastfeeding women. Among the EBF abandonment factors, one is related to the adolescent condition, and two are related to the maternal subjective perception.

Keywords: adolescent, breastfeeding, midwifery, nursing

Procedia PDF Downloads 298
8947 Profitability Assessment of Granite Aggregate Production and the Development of a Profit Assessment Model

Authors: Melodi Mbuyi Mata, Blessing Olamide Taiwo, Afolabi Ayodele David

Abstract:

The purpose of this research is to create empirical models for assessing the profitability of granite aggregate production in aggregate quarries in Akure, Ondo State. In addition, an artificial neural network (ANN) model and multivariate predictive models for granite profitability were developed in the study. A formal survey questionnaire was used to collect data for the study. The data extracted from the case study mine include granite marketing operations, royalty, production costs, and mine production information. The following methods were used to achieve the goal of this study: descriptive statistics, MATLAB 2017, and SPSS 16.0 software for analyzing and modeling the data collected from granite traders in the study areas. The prediction accuracy of the ANN and multivariate regression models was compared using the coefficient of determination (R²), root mean square error (RMSE), and mean square error (MSE). Owing to the high prediction error of the multivariate regression model, the evaluation indices revealed that the ANN model was more suitable for predicting generated profit in a typical quarry. More quarries in Nigeria's southwest region and other geopolitical zones should be considered to improve ANN prediction accuracy.

Keywords: national development, granite, profitability assessment, ANN models

Procedia PDF Downloads 72
8946 Identification of Architectural Design Error Risk Factors in Construction Projects Using IDEF0 Technique

Authors: Sahar Tabarroki, Ahad Nazari

Abstract:

The design process is one of the key project processes in the construction industry. Although architects have the responsibility to produce complete, accurate, and coordinated documents, architectural design is accompanied by many errors. A design error occurs when the constraints and requirements of the design are not satisfied. Errors are potentially costly and time-consuming to correct if not caught early during the design phase, and they become expensive in either the construction documents or the construction phase. The aim of this research is to identify the risk factors of architectural design errors, so identification of risks is necessary. First, a literature review of the design process was conducted, and then a questionnaire was designed to identify the risks and risk factors. The questions in the questionnaire were based on the “similar service description of study and supervision of architectural works” published by the “Vice Presidency of Strategic Planning & Supervision of I.R. Iran” as the basis of architects’ tasks. Second, the top 10 risks of architectural activities were identified. To determine the positions of possible causes of risks with respect to architectural activities, these activities were located in a design process modeled by the IDEF0 technique. The research was carried out by choosing a case study, checking the design drawings, interviewing its architect and client, and providing a checklist in order to identify concrete examples of architectural design errors. The results revealed that activities such as “defining the current and future requirements of the project”, “studies and space planning,” and “time and cost estimation of the suggested solution” have a higher error risk than others. Moreover, the most important causes include “unclear goals of a client”, “time pressure from a client”, and “lack of knowledge of architects about the requirements of end-users”.
For error detection in the case study, the lack of standards and design criteria, and the lack of coordination among them, was a barrier. Nevertheless, “lack of coordination between architectural design and electrical and mechanical facilities”, “violation of standard dimensions and sizes in space design”, and “design omissions” were identified as the most important design errors.

Keywords: architectural design, design error, risk management, risk factor

Procedia PDF Downloads 105
8945 Investigation on Scattered Dose Rate and Exposure Parameters during Diagnostic Examination Done with an Overcouch X-Ray Tube in Nigerian Teaching Hospital

Authors: Gbenga Martins, Christopher J. Olowookere, Lateef Bamidele, Kehinde O. Olatunji

Abstract:

The aims of this research are to measure the scattered dose rate during X-ray examinations in an X-ray room, to compare the scattered dose rate with exposure parameters based on the body region examined, and to examine X-ray examinations done with an overcouch tube. The research was carried out using Gamma Scout software installed on a laptop computer to record the radiation counts, pulse rate, and dose rate. The measurement was made by placing the detector at 90° to the incident X-ray beam. A proforma was used for the collection of patients’ data such as age, sex, examination type, and initial diagnosis. Data such as focus-skin distance (FSD), body mass index (BMI), body thickness of the patients, and the beam output (kVp) were collected at Obafemi Awolowo University, Ile-Ife, Western Nigeria. A total of 136 patients was considered during this research. The dose rate ranged between 14.21 and 86.78 µSv/h for the plain abdominal region, 2.86 and 85.70 µSv/h for the lumbosacral region, 1.3 and 3.6 µSv/yr in the pelvis region, 2.71 and 28.88 µSv/yr for the leg region, and 3.06 and 29.98 µSv/yr in the hand region. The results of this study were compared with those of other studies carried out in other countries. The findings of this study indicated that the exposure parameters selected for each diagnostic examination contributed to the dose rate recorded. Therefore, these results call for a quality assurance program (QAP) in diagnostic X-ray units in Nigerian hospitals.

Keywords: X-radiation, exposure parameters, dose rate, pulse rate, number of counts, tube current, tube potential, diagnostic examination, scattered radiation

Procedia PDF Downloads 78
8944 Feature Location Restoration for Under-Sampled Photoplethysmogram Using Spline Interpolation

Authors: Hangsik Shin

Abstract:

The purpose of this research is to restore the feature locations of under-sampled photoplethysmograms using spline interpolation and to investigate the feasibility of feature shape restoration. We obtained a 10 kHz-sampled photoplethysmogram and decimated it to generate under-sampled datasets with 5 kHz, 2.5 kHz, 1 kHz, 500 Hz, 250 Hz, 25 Hz and 10 Hz sampling frequencies. To investigate the restoration performance, we interpolated the under-sampled signals back to 10 kHz and then compared the feature locations with those of the 10 kHz-sampled photoplethysmogram. The features were the upper and lower peaks of the photoplethysmography waveform. Results showed that the time differences were dramatically decreased by interpolation. The location error was less than 1 ms for both feature types. In the 10 Hz-sampled cases, the location error was also decreased considerably; however, it was still over 10 ms.
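The restoration step can be sketched with a cubic spline on a decimated surrogate waveform. The 1.2 Hz sine below stands in for a PPG beat, and the 50 Hz under-sampling rate is an assumption for illustration (the study's rates ranged from 5 kHz down to 10 Hz).

```python
import numpy as np
from scipy.interpolate import CubicSpline

fs_hi, fs_lo = 10000, 50                 # reference and under-sampled rates (Hz)
t_hi = np.arange(0, 1, 1 / fs_hi)
pulse = np.sin(2 * np.pi * 1.2 * t_hi)   # 1.2 Hz surrogate for a PPG beat

step = fs_hi // fs_lo                    # decimate to 50 Hz
t_lo, x_lo = t_hi[::step], pulse[::step]

# Restore the 10 kHz grid by cubic-spline interpolation
restored = CubicSpline(t_lo, x_lo)(t_hi)

true_peak = t_hi[np.argmax(pulse)]       # upper-peak location on the reference
rest_peak = t_hi[np.argmax(restored)]    # upper-peak location after restoration
err_ms = abs(true_peak - rest_peak) * 1000
```

Locating the peak directly on the 50 Hz grid is off by several milliseconds here, while the spline-restored location lands well under 1 ms, consistent with the sub-millisecond errors reported above.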

Keywords: peak detection, photoplethysmography, sampling, signal reconstruction

Procedia PDF Downloads 340
8943 Bayesian Borrowing Methods for Count Data: Analysis of Incontinence Episodes in Patients with Overactive Bladder

Authors: Akalu Banbeta, Emmanuel Lesaffre, Reynaldo Martina, Joost Van Rosmalen

Abstract:

Including data from previous studies (historical data) in the analysis of the current study may reduce the sample size requirement and/or increase the power of the analysis. The most common example is incorporating historical control data in the analysis of a current clinical trial. However, this only applies when the historical control data are similar enough to the current control data. Recently, several Bayesian approaches for incorporating historical data have been proposed, such as the meta-analytic-predictive (MAP) prior and the modified power prior (MPP), both for single control as well as for multiple historical control arms. Here, we examine the performance of the MAP and the MPP approaches for the analysis of (over-dispersed) count data. To this end, we propose a computational method for the MPP approach for the Poisson and the negative binomial models. We conducted an extensive simulation study to assess the performance of the Bayesian approaches. Additionally, we illustrate our approaches on an overactive bladder data set. For similar data across the control arms, the MPP approach outperformed the MAP approach with respect to the statistical power. When the means across the control arms are different, the MPP yielded a slightly inflated type I error (TIE) rate, whereas the MAP did not. In contrast, when the dispersion parameters are different, the MAP gave an inflated TIE rate, whereas the MPP did not. We conclude that the MPP approach is more promising than the MAP approach for incorporating historical count data.
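For intuition about borrowing, consider the simplest conjugate case: a Poisson rate with a Gamma prior, where a fixed power parameter delta down-weights the historical likelihood. This is the plain power prior (the paper's modified power prior treats delta as random, and its negative-binomial computation is more involved); the counts and hyperparameters below are invented.

```python
import numpy as np

def power_prior_posterior(y_curr, y_hist, delta, a0=0.5, b0=0.5):
    """Gamma(a, b) posterior for a Poisson rate when historical counts are
    down-weighted by a fixed power parameter delta in [0, 1]."""
    a = a0 + delta * np.sum(y_hist) + np.sum(y_curr)
    b = b0 + delta * len(y_hist) + len(y_curr)
    return a, b  # posterior mean a/b, posterior variance a/b**2

y_hist = np.array([3, 4, 2, 5, 3])  # hypothetical historical episode counts
y_curr = np.array([2, 3, 4, 2])     # hypothetical current-trial counts
a_full, b_full = power_prior_posterior(y_curr, y_hist, delta=1.0)  # full pooling
a_none, b_none = power_prior_posterior(y_curr, y_hist, delta=0.0)  # no borrowing
```

With delta = 1 the historical arm is pooled fully and the posterior variance shrinks; with delta = 0 the historical data are ignored, which is the trade-off the MAP/MPP comparison above is probing.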

Keywords: count data, meta-analytic prior, negative binomial, Poisson

Procedia PDF Downloads 91
8942 Effect of Depressurization Rate in Batch Foaming of Porous Microcellular Polycarbonate on Microstructure Development

Authors: Indrajeet Singh, Abhishek Gandhi, Smita Mohanty, S. K. Nayak

Abstract:

In this article, a focused study has been performed to comprehend the influence of the depressurization rate on the morphological attributes of microcellular polycarbonate foams. The depressurization rates considered in this study were 0.5, 0.05, 0.01 and 0.005 MPa/sec, and the physical blowing agent utilized was carbon dioxide, owing to its high solubility in polycarbonate at room temperature. The study was performed at two distinct saturation pressures, i.e., 3 MPa and 6 MPa, to determine whether the saturation pressure has any effect. It is reported that with an increase in depressurization rate, a higher amount of thermodynamic instability was induced, which resulted in the generation of a larger number of smaller-sized cells. This article puts forward an understanding of how depressurization rate control can be exploited during the batch foaming process to develop high-quality microcellular foamed products with a well-controlled cell size.

Keywords: depressurization, porous polymer, foaming, microcellular

Procedia PDF Downloads 235
8941 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring

Authors: Daniel Fundi Murithi

Abstract:

Data from economic, social, clinical, and industrial studies are in some way incomplete or incorrect due to censoring. Such data may have adverse effects if used in an estimation problem. We propose the use of Maximum Likelihood Estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and Newton-Raphson (NR) algorithms. These algorithms are compared because they iteratively produce satisfactory results in the estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that in most simulation cases, the estimates obtained using the Expectation-Maximization algorithm had small biases, small variances, narrower confidence interval widths, and small root mean squared errors compared to those generated via the Newton-Raphson (NR) algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) showed that the Expectation-Maximization (EM) algorithm performs better than the Newton-Raphson (NR) algorithm in all simulation cases under the progressive type-II censoring scheme.
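As a minimal sketch of Newton-Raphson iteration on a Rayleigh likelihood, the scale-only (one-parameter, uncensored) case is convenient because it has a closed-form MLE to check against; the two-parameter, progressively censored setting of the abstract is substantially more involved.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma_true = 2.0
# Rayleigh(sigma) samples via inverse transform: x = sigma*sqrt(-2*ln(U))
u = 1.0 - rng.random(5000)
x = sigma_true * np.sqrt(-2 * np.log(u))

def score(s):    # d logL / d sigma for the Rayleigh log-likelihood
    return -2 * len(x) / s + np.sum(x**2) / s**3

def hessian(s):  # d2 logL / d sigma2
    return 2 * len(x) / s**2 - 3 * np.sum(x**2) / s**4

s = 1.0          # starting value
for _ in range(50):
    s_new = s - score(s) / hessian(s)   # Newton-Raphson step
    if abs(s_new - s) < 1e-10:
        s = s_new
        break
    s = s_new

closed_form = np.sqrt(np.sum(x**2) / (2 * len(x)))  # exact MLE of sigma
```

The iteration reproduces the closed-form estimate, a useful sanity check before moving to the censored two-parameter likelihood, where no closed form is available.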

Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring

Procedia PDF Downloads 132
8940 The Potential Use of Crude Palm Oil Liquid Wastes to Improve Nutrient Levels in Vegetable Plants

Authors: Hasan Basri Jumin

Abstract:

Application of crude palm oil waste combined with a suitable concentration of benzyladenine had a significant effect on the mean relative growth rate of vegetable plants, and the same pattern was observed for the net assimilation rate, which also increased significantly in 28-day-old plants. The combination of a suitable concentration of crude palm oil and benzyladenine increased the growth and production of vegetable plants. The relative growth rate of vegetable plants was rapid 3 weeks after planting and gradually decreased towards the end of the harvest period. The combination of 400 mg.l-1 CPO with 1.0 mg.l-1 to 10 mg.l-1 BA increased the mean relative growth rate (MRGR), net assimilation rate (NAR), leaf area and dry weight of Brassica juncea, Brassica oleracea and Lactuca sativa.

Keywords: benzyladenine, crude-palm-oil, nutrient, vegetable, waste

Procedia PDF Downloads 163
8939 Closed-Form Sharma-Mittal Entropy Rate for Gaussian Processes

Authors: Septimia Sarbu

Abstract:

The entropy rate of a stochastic process is a fundamental concept in information theory. It provides a limit to the amount of information that can be transmitted reliably over a communication channel, as stated by Shannon's coding theorems. Recently, researchers have focused on developing new measures of information that generalize Shannon's classical theory. The aim is to design more efficient information encoding and transmission schemes. This paper continues the study of generalized entropy rates, by deriving a closed-form solution to the Sharma-Mittal entropy rate for Gaussian processes. Using the squeeze theorem, we solve the limit in the definition of the entropy rate, for different values of alpha and beta, which are the parameters of the Sharma-Mittal entropy. In the end, we compare it with Shannon and Rényi's entropy rates for Gaussian processes.
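For a single Gaussian random variable (rather than the process-level rate derived in the paper), the Sharma-Mittal entropy has a simple closed form that recovers the Shannon and Rényi entropies in the appropriate limits; the sketch below checks those limits numerically.

```python
import math

def sm_entropy_gauss(sigma, alpha, beta):
    """Sharma-Mittal entropy of N(0, sigma^2), alpha != 1, beta != 1,
    using the closed form of the integral of p^alpha over the real line."""
    int_p_alpha = alpha ** -0.5 * (2 * math.pi * sigma**2) ** ((1 - alpha) / 2)
    return (int_p_alpha ** ((1 - beta) / (1 - alpha)) - 1) / (1 - beta)

def shannon_gauss(sigma):
    """Shannon (differential) entropy of N(0, sigma^2)."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma**2)

def renyi_gauss(sigma, alpha):
    """Renyi entropy of N(0, sigma^2): log(int p^alpha) / (1 - alpha)."""
    int_p_alpha = alpha ** -0.5 * (2 * math.pi * sigma**2) ** ((1 - alpha) / 2)
    return math.log(int_p_alpha) / (1 - alpha)
```

Taking beta to 1 gives the Rényi entropy and alpha, beta to 1 gives the Shannon entropy, the same limiting structure the paper exploits when comparing the three entropy rates for Gaussian processes.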

Keywords: generalized entropies, Sharma-Mittal entropy rate, Gaussian processes, eigenvalues of the covariance matrix, squeeze theorem

Procedia PDF Downloads 479
8938 Clinical Outcome after in Vitro Fertilization in Women Aged 40 Years and Above: Reasonable Cut-Off Age for Successful Pregnancy

Authors: Eun Jeong Yu, Inn Soo Kang, Tae Ki Yoon, Mi Kyoung Koong

Abstract:

Advanced female age is associated with higher cycle cancellation rates, a lower clinical pregnancy rate, and increased miscarriage and aneuploidy rates in IVF (In Vitro Fertilization) cycles. This retrospective cohort study was conducted at Cha Fertility Center, Seoul Station. All fresh non-donor IVF cycles performed in women aged 40 years and above from January 2016 to December 2016 were reviewed. Donor/recipient treatment and PGD/PGS (Preimplantation Genetic Diagnosis/Preimplantation Genetic Screening) were excluded from the analysis. Of the 1,166 cycles from 753 women who completed ovulation induction, 1,047 were appropriate for evaluation according to the inclusion and exclusion criteria. The mean age of the patients was 42.4 ± 1.8 years. The median AMH (Anti-Mullerian Hormone) level was 1.2 ± 1.5 ng/mL. The mean number of retrieved oocytes was 4.9 ± 4.3. The clinical pregnancy rate and live birth rate in women aged 40 years and above significantly decreased with each year of advancing age (p < 0.001). The clinical pregnancy rate decreased from 21% at the age of 40 years to 0% at ages above 45 years. The live birth rate decreased from 12.3% to 0%, respectively. There were no clinical pregnancies among the 95 patients aged above 45 years. The overall miscarriage rate was 40.7% (range, 36.7%-70%). The transfer of at least one good quality embryo was associated with an approximately 4-9% increased chance of clinical pregnancy. Therefore, IVF in women below 46 years of age had a reasonable chance of a successful pregnancy outcome, especially when a good quality embryo was transferred.

Keywords: advanced maternal age, in vitro fertilization, pregnancy rate, live birth rate

Procedia PDF Downloads 118
8937 Maximum Initial Input Allowed to Iterative Learning Control Set-up Using Singular Values

Authors: Naser Alajmi, Ali Alobaidly, Mubarak Alhajri, Salem Salamah, Muhammad Alsubaie

Abstract:

Iterative Learning Control (ILC) is known as a control technique for overcoming periodic disturbances in repetitive systems. The technique is required to drive the error signal toward zero as the number of operations increases. The learning process in this context depends strongly on the initial input which, if selected properly, makes learning more effective than starting the system blind. ILC uses recorded data from previous executions to update the input for the following execution/trial such that a reference trajectory is followed with high accuracy. Error convergence in ILC is generally highly dependent on the input applied to the plant for trial 1; a good choice of initial input makes learning faster and, as a consequence, drives the error to zero faster as well. In the work presented here, an upper limit based on singular values (SV) is derived for the initial input signal applied at trial 1 such that the system follows the reference in fewer trials without responding aggressively or exceeding the working envelope within which the system is required to move (in a robot arm, for example). Simulation results illustrate the theory introduced in this paper.
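The trial-to-trial mechanism described above can be sketched as a simple P-type ILC loop. The first-order plant, gains, and reference below are illustrative assumptions, not the system studied in the paper; the singular-value bound appears only in its schematic form ||y|| ≤ σ_max · ||u||, which is how a limit on the trial-1 input can keep the output inside a working envelope.

```python
import numpy as np

# Illustrative P-type ILC loop (assumed plant and gains, not the paper's):
# a first-order discrete plant in lifted form y = G u, updated trial by trial.

N = 50                                   # samples per trial
t = np.arange(N)
r = np.sin(2 * np.pi * t / N)            # reference trajectory (assumed)

a, b = 0.3, 0.5                          # plant parameters (assumed)
G = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        G[i, j] = b * a ** (i - j)       # impulse-response (Toeplitz) matrix

# Largest singular value of G bounds the output energy: ||y|| <= s_max * ||u||,
# so an envelope limit y_max on the output implies ||u_1|| <= y_max / s_max.
s_max = np.linalg.svd(G, compute_uv=False)[0]

u = np.zeros(N)                          # starting "blind": zero initial input
L_gain = 0.8                             # learning gain (assumed)
errors = []
for trial in range(30):
    y = G @ u                            # execute the trial
    e = r - y                            # tracking error
    errors.append(np.linalg.norm(e))
    u = u + L_gain * e                   # P-type ILC update for the next trial
```

With these assumed values the error norm shrinks monotonically over trials; a larger, well-chosen initial input (rather than the zero start) would reach the same accuracy in fewer trials, which is the trade-off the derived upper limit governs.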

Keywords: initial input, iterative learning control, maximum input, singular values

Procedia PDF Downloads 214
8936 The Non-Existence of Perfect 2-Error Correcting Lee Codes of Word Length 7 over Z

Authors: Catarina Cruz, Ana Breda

Abstract:

Tiling problems have been capturing the attention of many mathematicians due to their real-life applications. In this study, we deal with tilings of Zⁿ by Lee spheres, where n is a positive integer, these tilings being related to error correcting codes for the transmission of information over a noisy channel. We focus our attention on the question ‘for what values of n and r does the n-dimensional Lee sphere of radius r tile Zⁿ?’. It is conjectured that the n-dimensional Lee sphere of radius r does not tile Zⁿ for n ≥ 3 and r ≥ 2. Here, we prove that it is not possible to tile Z⁷ with Lee spheres of radius 2, presenting a proof based on a combinatorial method and faithful to the geometric idea of the problem. The non-existence of such tilings has been studied by several authors, the most difficult cases being those in which the radius of the Lee spheres is equal to 2. The relation between these tilings and error correcting codes is established by considering the center of a Lee sphere as a codeword and the other elements of the sphere as words which are decoded to the central codeword. When the Lee spheres of radius r centered at the elements of a set M ⊂ Zⁿ tile Zⁿ, M is a perfect r-error correcting Lee code of word length n over Z, denoted by PL(n, r). Our strategy to prove the non-existence of PL(7, 2) codes is based on the assumption that such a code M exists. Without loss of generality, we suppose that O ∈ M, where O = (0, ..., 0). Since we are dealing with Lee spheres of radius 2, O covers all words which are distant two or fewer units from it. By the definition of a PL(7, 2) code, each word which is distant three units from O must be covered by a unique codeword of M. These words have to be covered by codewords which are distant five units from O.
We prove the non-existence of PL(7, 2) codes by showing that it is not possible to cover all the referred words without superposition of Lee spheres whose centers are distant five units from O, contradicting the definition of a PL(7, 2) code. We achieve this contradiction by combining the cardinalities of particular subsets of codewords which are distant five units from O. There exists an extensive literature on codes in the Lee metric; here, we present a new approach to prove the non-existence of PL(7, 2) codes.
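The counting quantities the argument manipulates can be reproduced computationally. The short script below is an illustrative aid, not part of the paper's proof: it computes the size of the Lee sphere B(7, 2) and the number of words at Lee (L1) distance exactly 3 from O, cross-checking the closed-form counts against brute-force enumeration in a small case.

```python
from itertools import product
from math import comb

def lee_sphere_size(n, r):
    """Number of points x in Z^n with ||x||_1 <= r (the Lee sphere B(n, r))."""
    return sum(2**k * comb(n, k) * comb(r, k) for k in range(min(n, r) + 1))

def words_at_distance(n, d):
    """Number of points x in Z^n with ||x||_1 == d exactly."""
    if d == 0:
        return 1
    return sum(2**k * comb(n, k) * comb(d - 1, k - 1)
               for k in range(1, min(n, d) + 1))

def brute_count(n, d):
    """Brute-force count over a bounding box, to cross-check small cases."""
    return sum(1 for x in product(range(-d, d + 1), repeat=n)
               if sum(map(abs, x)) == d)

# Sphere size is the sum of the shells up to the radius.
assert lee_sphere_size(7, 2) == sum(words_at_distance(7, d) for d in range(3))
assert words_at_distance(3, 2) == brute_count(3, 2)

# In a PL(7, 2) code with O a codeword, O covers the 113 words of B(7, 2);
# each of the 462 words at distance exactly 3 must then be covered by a
# codeword at distance exactly 5 from O (minimum distance 2r + 1 = 5).
sphere = lee_sphere_size(7, 2)        # 113
shell3 = words_at_distance(7, 3)      # 462
```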

Keywords: Golomb-Welch conjecture, Lee metric, perfect Lee codes, tilings

Procedia PDF Downloads 133
8935 Experimental Investigation and Constitutive Modeling of Volume Strain under Uniaxial Strain Rate Jump Test in HDPE

Authors: Rida B. Arieby, Hameed N. Hameed

Abstract:

In this work, tensile tests on high density polyethylene have been carried out under various constant strain rates and under strain rate jump tests. The dependence of the true stress, and especially the variation of the volume strain, has been investigated; the volume strain due to the phenomenon of damage was determined in real time during the tests by an optical extensometer called Videotraction. Modified constitutive equations, including strain rate and damage effects, are proposed; the model is based on a non-equilibrium thermodynamic approach called DNLR. The ability of the model to predict the complex nonlinear response of this polymer is examined by comparing the model simulations with the available experimental data, which demonstrates that the model can represent the deformation behavior of the polymer reasonably well.

Keywords: strain rate jump tests, volume strain, high density polyethylene, large strain, thermodynamics approach

Procedia PDF Downloads 230
8934 Assessment of Time-variant Work Stress for Human Error Prevention

Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee

Abstract:

For an operator in a nuclear power plant, human error is one of the most dreaded factors, as it may result in unexpected accidents. The probability of human error may be low, but its risk can be enormous. Thus, for accident prevention, it is indispensable to analyze the influence of any factor that may raise the possibility of human error. Over the past decades, many research results have shown that the performance of human operators varies over time due to a variety of factors. Among them, stress is known to be an indirect factor that may cause human errors and result in mental illness. To date, quite a few assessment tools have been developed to assess the stress level of human workers. However, it is still questionable whether they can be utilized to anticipate human performance, which is related to human error probability, because they were developed mainly from the viewpoint of mental health rather than industrial safety. The stress level of a person may go up or down with work time; in that sense, to be applicable in the safety domain, a tool should at least be able to assess the variation resulting from work time. Therefore, this study compared the applicability of existing tools for safety purposes. More than 10 kinds of work stress tools were analyzed with reference to assessment items, assessment and analysis methods, and follow-up measures, which are known to be factors closely related to work stress. The results showed that most tools concentrate their weights on a few common organizational factors such as demands, supports, and relationships, in that order, and their weights were broadly similar. However, they failed to recommend practical solutions; instead, they merely advised setting up overall counterplans in a PDCA cycle or risk management activities, which falls far short of practical human error prevention.
Thus, it was concluded that applying stress assessment tools developed mainly for mental health is impractical for safety purposes with respect to anticipating human performance, and that developing a new assessment tool is inevitable if one wants to assess stress level in terms of human performance variation and accident prevention. As a practical countermeasure, this study proposes a new scheme for assessing the work stress level of a human operator as it varies over work time, which is closely related to the possibility of human error.

Keywords: human error, human performance, work stress, assessment tool, time-variant, accident prevention

Procedia PDF Downloads 647
8933 Banking Sector Development and Economic Growth: Evidence from the State of Qatar

Authors: Fekri Shawtari

Abstract:

The banking sector plays a crucial role in the economic development of a country; as a financial intermediary, it is assigned a great role in economic growth and stability. This paper aims to examine empirically the relationship between the banking industry and economic growth in the State of Qatar. We adopt a vector error correction model (VECM) along with Granger causality to address the long-run and short-run relationships between the banking sector and economic growth. It is expected that the results will give policy direction to policymakers in forming strategies conducive to boosting development and achieving the targeted economic growth in the current situation.
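The error correction idea behind a VECM can be shown schematically. The numpy sketch below estimates a two-variable error correction model on synthetic cointegrated series using a two-step, Engle-Granger-style procedure; the series names ("banking depth", "GDP"), parameters, and estimation route are illustrative assumptions, not the paper's Qatari data or its VECM/Granger methodology.

```python
import numpy as np

# Minimal two-variable error correction sketch on synthetic data:
# Delta y_t is driven by the lagged equilibrium error (y - beta*x).

rng = np.random.default_rng(0)
T = 500
x = np.cumsum(rng.normal(size=T))                  # "banking depth" (random walk)
beta_true = 0.8
y = beta_true * x + rng.normal(scale=0.5, size=T)  # "GDP", cointegrated with x

# Step 1: estimate the long-run relation y = beta * x + u by OLS.
beta_hat = np.sum(x * y) / np.sum(x * x)
u = y - beta_hat * x                               # equilibrium (cointegration) error

# Step 2: ECM regression  Delta y_t = alpha * u_{t-1} + gamma * Delta x_t + e_t.
dy = np.diff(y)
dx = np.diff(x)
X = np.column_stack([u[:-1], dx])
alpha_hat, gamma_hat = np.linalg.lstsq(X, dy, rcond=None)[0]

# A negative alpha_hat indicates adjustment back toward long-run equilibrium,
# the short-run/long-run decomposition a VECM formalizes in system form.
```

In practice a full VECM with cointegration-rank selection and Granger causality tests (as the abstract describes) would be estimated with an econometrics package rather than by hand.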

Keywords: economic growth, banking sector, Qatar, vector error correction model, VECM

Procedia PDF Downloads 144
8932 Virtual Assessment of Measurement Error in the Fractional Flow Reserve

Authors: Keltoum Chahour, Mickael Binois

Abstract:

Due to a lack of standardization of the invasive fractional flow reserve (FFR) procedure, the index is subject to many sources of uncertainty. In this paper, we investigate, through simulation, the effect of the FFR device position and configuration on the obtained FFR value. For this purpose, we use computational fluid dynamics (CFD) in a 3D domain corresponding to a diseased arterial portion. The FFR pressure captor is introduced into the domain with a given length and bending coefficient to capture the FFR value. To overcome the computational limitations (one simulation takes about 2 h 15 min per FFR value), we build a Gaussian process (GP) model for FFR prediction. The GP model shows good accuracy and demonstrates the effective measurement error created by the random configuration of the pressure captor.
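The surrogate-modeling step can be sketched in a few lines of numpy. The toy "simulator", the squared-exponential kernel, and all hyperparameters below are invented for illustration; the cheap function stands in for the 2 h 15 min CFD run, and the two configuration inputs stand in for captor position and bending.

```python
import numpy as np

def simulator(x):
    # cheap stand-in for the CFD run: a smooth FFR-like response to two
    # captor configuration parameters in [0, 1] (hypothetical)
    return 0.8 + 0.1 * np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

def rbf(A, B, ls=0.3):
    # squared-exponential (RBF) kernel with assumed length scale
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

rng = np.random.default_rng(1)
X = rng.uniform(size=(80, 2))            # 80 "expensive" simulation runs
y = simulator(X)

Xs = rng.uniform(size=(5, 2))            # new captor configurations to predict
K = rbf(X, X) + 1e-6 * np.eye(len(X))    # jitter for numerical stability
Ks = rbf(Xs, X)

ym = y.mean()
mu = ym + Ks @ np.linalg.solve(K, y - ym)                    # GP posterior mean
var = np.diag(rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T))   # posterior variance
```

The posterior variance is what makes the GP useful here: it quantifies how much of the spread in predicted FFR comes from the (random) captor configuration without paying for a CFD run at every candidate configuration.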

Keywords: fractional flow reserve, Gaussian processes, computational fluid dynamics, drift

Procedia PDF Downloads 99
8931 Effect of Traffic Composition on Delay and Saturation Flow at Signal Controlled Intersections

Authors: Arpita Saha, Apoorv Jain, Satish Chandra, Indrajit Ghosh

Abstract:

Level of service at a signal-controlled intersection is measured directly from the delay; similarly, the saturation flow rate is a fundamental parameter for measuring intersection capacity. The present study calculates the vehicle arrival rate, departure rate, and queue length at five-second intervals in each cycle. Based on the queue lengths, the total delay of the cycle has been calculated using Simpson's 1/3 rule. Saturation flow has been estimated in terms of veh/hr of green per lane for every five-second interval of the green period until at least three vehicles are left to cross the stop line. Vehicle composition shows an immense effect on total delay and saturation flow rate: an increase in the two-wheeler proportion increases the saturation flow rate and significantly reduces the total delay per vehicle, while an increase in the heavy vehicle proportion reduces the saturation flow rate and increases the total delay per vehicle.
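The delay integration step can be illustrated directly. The queue-length samples below are invented; the function simply applies Simpson's 1/3 rule to queue length sampled every 5 s, which yields total delay in vehicle-seconds per cycle (queue length integrated over time).

```python
# Hypothetical sketch of the delay computation: queue length (vehicles)
# sampled every 5 s over a cycle, integrated with Simpson's 1/3 rule.

def simpson_total_delay(queue, h=5.0):
    """Integrate queue-length samples (vehicles) at spacing h seconds.
    Simpson's 1/3 rule needs an even number of intervals (odd sample count)."""
    n = len(queue) - 1
    if n % 2 != 0:
        raise ValueError("Simpson's 1/3 rule needs an even number of intervals")
    s = queue[0] + queue[-1]
    s += 4 * sum(queue[i] for i in range(1, n, 2))   # odd-index samples
    s += 2 * sum(queue[i] for i in range(2, n, 2))   # interior even-index samples
    return h * s / 3.0                               # vehicle-seconds per cycle

# queue lengths observed at 0, 5, ..., 60 s of a cycle (invented numbers)
queue = [0, 2, 5, 8, 10, 12, 13, 12, 9, 6, 3, 1, 0]
total = simpson_total_delay(queue)       # ≈ 406.7 veh·s of delay in this cycle
avg_delay = total / 25                   # per-vehicle delay if 25 vehicles arrived
```

Dividing the integrated delay by the cycle's arrival count gives the average delay per vehicle from which level of service is read.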

Keywords: delay, saturation flow, signalised intersection, vehicle composition

Procedia PDF Downloads 438
8930 Thermal Efficiency Analysis and Optimal of Feed Water Heater for Mae Moh Thermal Power Plant

Authors: Khomkrit Mongkhuntod, Chatchawal Chaichana, Atipoang Nuntaphan

Abstract:

The feed water heater is an important piece of equipment in a thermal power plant: the temperature achieved in the feed heating process affects plant efficiency, or heat rate. Normally, the degradation of a feed water heater operated for a long time decreases plant efficiency and increases plant heat rate. At the Mae Moh power plant, each unit has operated for more than 20 years, and the degradation of the main equipment affects plant efficiency and heat rate. Efficiency and heat rate analysis shows that the Mae Moh power plant operates at a higher heat rate than during the commissioning period. Some equipment has been replaced to improve plant efficiency and heat rate, such as the HP and LP turbines, increasing plant efficiency by 5% and decreasing plant heat rate by 1%. Under the power generation plan, the Mae Moh power plant must operate for more than 10 years. This work focuses on a thermal efficiency analysis of the feed water heaters, compared with the commissioning data, to find ways to improve feed water heater efficiency and thereby increase plant efficiency or decrease plant heat rate. A heat balance model simulation and the economic value added (EVA) method are used to study the investment in replacing the feed water heaters and to analyze whether the project can stay above the break-even point, in order to support the project decision.
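The break-even reasoning can be sketched with a toy calculation. Every figure below (capital cost, annual fuel saving, cost of capital) is an invented placeholder, not Mae Moh data; the two functions show only the shape of an EVA capital charge and a discounted payback check.

```python
# Hypothetical break-even sketch for a feed-water-heater replacement decision.
# All figures are assumptions for illustration only.

capex = 2_000_000.0          # cost of the new feed water heater (assumed)
annual_saving = 350_000.0    # fuel saving from improved heat rate (assumed, per year)
wacc = 0.08                  # cost of capital used for the EVA charge (assumed)

def eva(year_saving, invested_capital, rate):
    """Economic value added: operating saving minus a capital charge."""
    return year_saving - rate * invested_capital

def discounted_payback(capex, saving, rate, horizon=15):
    """First year in which cumulative discounted savings cover the investment."""
    cum = 0.0
    for year in range(1, horizon + 1):
        cum += saving / (1 + rate) ** year
        if cum >= capex:
            return year
    return None                      # does not break even within the horizon

years = discounted_payback(capex, annual_saving, wacc)
```

With these assumed numbers the project breaks even within the plant's remaining operating horizon; in the actual study the savings term would come from the heat balance model rather than a flat annual figure.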

Keywords: feed water heater, power plant efficiency, plant heat rate, thermal efficiency analysis

Procedia PDF Downloads 338
8929 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is finetuned for one epoch with a batch size of one, attempting to create a scenario similar to human memorability experiments where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, which is quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. 
The reconstruction error of each image, the error reduction, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and that of its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate strong correlations of both the reconstruction error and the distinctiveness of images with their memorability scores, suggesting that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content, and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
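The two image-level quantities described above, per-image reconstruction error and latent-space distinctiveness, can be sketched with a toy linear autoencoder. The PCA-style encoder and random "images" below are illustrative stand-ins for the paper's VGG-based autoencoder and the MemCat images; with real memorability scores, the final helper would report the correlations the study examines.

```python
import numpy as np

# Schematic illustration (not the paper's model) of reconstruction error and
# latent-space distinctiveness for a batch of images.

rng = np.random.default_rng(0)
n_images, dim, latent_dim = 200, 64, 8

X = rng.normal(size=(n_images, dim))                 # stand-in "images"

# Linear encoder/decoder from PCA of the data (a toy autoencoder).
U, S, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
W = Vt[:latent_dim].T                                # encoder weights
Z = (X - X.mean(0)) @ W                              # latent codes
X_hat = Z @ W.T + X.mean(0)                          # reconstructions

recon_error = ((X - X_hat) ** 2).mean(axis=1)        # per-image MSE

# Distinctiveness: Euclidean distance from each latent code to its nearest
# neighbor among the other images' codes.
D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
np.fill_diagonal(D, np.inf)
distinctiveness = D.min(axis=1)

def pearson(a, b):
    """Pearson correlation, e.g. pearson(recon_error, memorability_scores)."""
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))
```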

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 44
8928 Seasonal and Monthly Field Soil Respiration Rate and Litter Fall Amounts of Kasuga-Yama Hill Primeval Forest

Authors: Ayuko Itsuki, Sachiyo Aburatani

Abstract:

The seasonal (January, April, July and October) and monthly soil respiration rates and the monthly litter fall amounts were examined in the laurel-leaved (B_B-1) and Cryptomeria japonica (B_B-2 and PW) forests in the Kasugayama Hill Primeval Forest (Nara, Japan). The change in the seasonal soil respiration rate corresponded to that of the soil temperature. The soil respiration rate was higher in October, when fresh organic matter was supplied to the forest floor, than in April, in spite of the same temperature. The seasonal soil respiration rate of B_B-1 was higher than that of B_B-2, corresponding to the larger numbers of bacteria and fungi counted by the dilution plate method and by direct microscopic counting in B_B-1 than in B_B-2. The seasonal soil respiration rate of B_B-2 was higher than that of PW, corresponding to the larger microbial biomass found by direct microscopic counting in B_B-2 than in PW. The correlation coefficient between the seasonal soil respiration rate and soil temperature was higher than that for the monthly soil respiration rate. The soil respiration carbon exceeded the litter fall carbon. It is suggested that the soil respiration included carbon dioxide emitted by plant roots and soil animals, or that the litter supplied to the forest floor included animal as well as plant litter.

Keywords: field soil respiration rate, forest soil, litter fall, mineralization rate

Procedia PDF Downloads 264