Search results for: apparent error rate
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9808

9388 Profitability Assessment of Granite Aggregate Production and the Development of a Profit Assessment Model

Authors: Melodi Mbuyi Mata, Blessing Olamide Taiwo, Afolabi Ayodele David

Abstract:

The purpose of this research is to create empirical models for assessing the profitability of granite aggregate production in aggregate quarries in Akure, Ondo State. In addition, an artificial neural network (ANN) model and multivariate regression models for granite profitability were developed in the study. A formal survey questionnaire was used to collect data for the study. The data extracted from the case study mine include granite marketing operations, royalty, production costs, and mine production information. The following tools were used to achieve the goal of this study: descriptive statistics, MATLAB 2017, and SPSS 16.0 software for analyzing and modeling the data collected from granite traders in the study areas. The prediction accuracy of the ANN and multivariate regression models was compared using the coefficient of determination (R²), root mean square error (RMSE), and mean square error (MSE). Owing to its lower prediction error, the model evaluation indices revealed that the ANN model was suitable for predicting generated profit in a typical quarry. More quarries in Nigeria's southwest region and other geopolitical zones should be considered to improve ANN prediction accuracy.
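The three comparison metrics named in the abstract can be sketched as follows (the profit figures below are hypothetical, purely for illustration, not the study's data):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """R^2, MSE and RMSE, as used to compare the ANN and regression models."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return r2, mse, rmse

# Illustrative actual vs. predicted profit values (hypothetical)
actual = [10.0, 12.0, 9.0, 14.0, 11.0]
predicted = [9.8, 12.3, 9.1, 13.6, 11.2]
r2, mse, rmse = regression_metrics(actual, predicted)
```

A model with lower RMSE/MSE and R² closer to 1 would be preferred, which is the basis of the ANN-versus-regression comparison described above.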

Keywords: national development, granite, profitability assessment, ANN models

Procedia PDF Downloads 79
9387 Modelling Vehicle Fuel Consumption Utilising Artificial Neural Networks

Authors: Aydin Azizi, Aburrahman Tanira

Abstract:

The main source of energy used in this modern age is fossil fuels. There is a myriad of problems that come with the use of fossil fuels, out of which the issues with the greatest impact are their scarcity and the cost they impose on the planet. Fossil fuels are the only plausible option for many vital functions and processes, the most important of which is transportation. Thus, using this source of energy wisely and as efficiently as possible is a must. The aim of this work was to explore the use of mathematical modelling and artificial intelligence techniques to reduce fuel consumption in passenger cars by focusing on the speed at which cars are driven. An artificial neural network with an error of less than 0.05 was developed for practical application in predicting the rate of fuel consumption in vehicles.

Keywords: mathematical modeling, neural networks, fuel consumption, fossil fuel

Procedia PDF Downloads 379
9386 Identification of Architectural Design Error Risk Factors in Construction Projects Using IDEF0 Technique

Authors: Sahar Tabarroki, Ahad Nazari

Abstract:

The design process is one of the key processes in the construction industry. Although architects have the responsibility to produce complete, accurate, and coordinated documents, architectural design is accompanied by many errors. A design error occurs when the constraints and requirements of the design are not satisfied. Errors are potentially costly and time-consuming to correct if not caught early during the design phase, and they become expensive once they reach the construction documents or the construction phase. The aim of this research is to identify the risk factors of architectural design errors. First, a literature review of the design process was conducted, and then a questionnaire was designed to identify the risks and risk factors. The questions in the questionnaire were based on the "similar service description of study and supervision of architectural works" published by the "Vice Presidency of Strategic Planning & Supervision of I.R. Iran" as the basis of architects' tasks. Second, the top 10 risks of architectural activities were identified. To determine the positions of possible causes of risks with respect to architectural activities, these activities were located in a design process modeled by the IDEF0 technique. The research was carried out by choosing a case study, checking the design drawings, interviewing its architect and client, and providing a checklist in order to identify concrete examples of architectural design errors. The results revealed that activities such as "defining the current and future requirements of the project", "studies and space planning", and "time and cost estimation of the suggested solution" have a higher error risk than others. Moreover, the most important causes include "unclear goals of the client", "time pressure from the client", and "lack of knowledge of architects about the requirements of end-users".
In error detection for the case study, the lack of standards and design criteria, and the lack of coordination among them, was a barrier; nevertheless, "lack of coordination between the architectural design and the electrical and mechanical facilities", "violation of standard dimensions and sizes in space design", and "design omissions" were identified as the most important design errors.

Keywords: architectural design, design error, risk management, risk factor

Procedia PDF Downloads 109
9385 Response Surface Methodology to Optimize the Performance of a CO2 Geothermal Thermosyphon

Authors: Badache Messaoud

Abstract:

Geothermal thermosyphons (GTs) are increasingly used in many heating and cooling geothermal applications owing to their high heat transfer performance. This paper proposes a response surface methodology (RSM) to investigate and optimize the performance of a CO2 geothermal thermosyphon. The filling ratio (FR), temperature, and flow rate of the heat transfer fluid are selected as the design parameters, and the heat transfer rate and effectiveness are adopted as response parameters (objective functions). First, a dedicated experimental GT test bench filled with CO2 was built and subjected to different test conditions. An RSM was used to establish corresponding models between the input parameters and responses. Various diagnostic tests were used to assess the quality and validity of the best-fit models, which explain 98.9% and 99.2% of the variability in the responses, respectively. Overall, it is concluded from the RSM analysis that the heat transfer fluid inlet temperature and the flow rate are the factors that have the greatest impact on the heat transfer rate (Q) and effectiveness (εff), while the FR has only a slight effect on Q and no effect on εff. The maximal heat transfer rate and effectiveness achieved are 1.86 kW and 47.81%, respectively. Moreover, these optimal values are associated with different flow rate levels (mc level = 1 for Q and -1 for εff), indicating distinct operating regions for maximizing Q and εff within the GT system. Therefore, a multilevel optimization approach is necessary to optimize both the heat transfer rate and effectiveness simultaneously.
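The model-fitting step behind an RSM analysis of this kind can be sketched as an ordinary least-squares fit of a second-order polynomial in coded factors (synthetic data with assumed coefficients; not the bench measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
# Coded factors, e.g. x1 = fluid inlet temperature, x2 = flow rate (hypothetical)
x1 = rng.uniform(-1, 1, 30)
x2 = rng.uniform(-1, 1, 30)
true = [1.2, 0.5, 0.3, -0.2, -0.1, 0.05]   # assumed coefficients, illustration only

# Second-order response surface: b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
q = X @ true + rng.normal(0, 0.01, 30)     # noisy response, e.g. heat transfer rate

beta, *_ = np.linalg.lstsq(X, q, rcond=None)
r2 = 1 - np.sum((q - X @ beta) ** 2) / np.sum((q - q.mean()) ** 2)
```

The fitted surface can then be explored (or differentiated) to locate the factor settings that maximize the response, which is the role the RSM models play in the abstract above.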

Keywords: geothermal thermosyphon, CO2, response surface methodology, heat transfer performance

Procedia PDF Downloads 47
9384 Progression Rate, Prevalence, and Incidence of Black Band Disease on Stony Coral (Scleractinia) in Barranglompo Island, South Sulawesi

Authors: Baso Hamdani, Arniati Massinai, Jamaluddin Jompa

Abstract:

Coral diseases are one of the factors that drive reef degradation. This research analysed the progression rate, incidence, and prevalence of Black Band Disease (BBD) on stony coral (Pachyseris sp.) in relation to the environmental parameters pH, nitrate, phosphate, dissolved organic matter (DOM), and turbidity. The incidence of coral disease was measured weekly for 6 weeks using the belt transect method. The progression rate of BBD was measured manually. Furthermore, the prevalence and incidence of BBD were calculated for each infected colony. The relationship between the environmental parameters and the progression rate, prevalence, and incidence of BBD was analysed by Principal Component Analysis (PCA). The results showed an average progression rate of 0.07 ± 0.02 cm/day. The prevalence of BBD increased from 0.92% to 19.73% over 7 weeks of observation, with an average incidence of newly infected coral colonies of 0.2 to 0.65 colonies/day. The important environmental factors were pH, nitrate, phosphate, DOM, and turbidity.
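The PCA step can be sketched with a plain SVD on a centred data matrix (synthetic values standing in for the measured water-quality variables; in practice each column would also be scaled to unit variance before decomposition):

```python
import numpy as np

rng = np.random.default_rng(2)
# Rows: weekly observations; columns stand in for pH, nitrate, phosphate, DOM, turbidity
X = rng.normal(size=(30, 5))

Xc = X - X.mean(axis=0)                    # centre each variable
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)        # share of variance per principal component
scores = Xc @ Vt.T                         # observations expressed in PC coordinates
```

The loadings in `Vt` indicate how strongly each environmental variable contributes to each component, which is how PCA relates disease metrics to the environmental parameters above.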

Keywords: progression rate, incidence, prevalence, Black Band Disease, Barranglompo

Procedia PDF Downloads 627
9383 Exclusive Breastfeeding Abandonment among Adolescent Mothers: A Cohort Study

Authors: Maria I. Nuñez-Hernández, Maria L. Riesco

Abstract:

Background: Exclusive breastfeeding (EBF) up to 6 months of age has been considered one of the most important factors in the overall development of children. Nevertheless, as resources are scarce, it is essential to identify the most vulnerable groups at major risk of EBF abandonment, in order to deliver the best strategies. Children of adolescent mothers are within these groups. Aims: To determine the EBF abandonment rate among adolescent mothers and to analyze the associated factors. Methods: Prospective cohort study of adolescent mothers in the southern area of Santiago, Chile, conducted in primary care services of the public health system. The cohort was established from 2014 to 2015, with a sample of 105 adolescent mothers and their children at 2 months of life. The inclusion criteria were: adolescent mother from 14 to 19 years old; no twin babies; mother and baby leaving the hospital together after childbirth; correct attachment of the baby to the breast; no difficulty understanding the Spanish language or communicating. Follow-up was performed at 4 and 6 months of age. Data were collected by interviews, considering EBF as breastfeeding only, without adding other milk, tea, juice, water, or any product other than breast milk, except drugs. Data were analyzed by descriptive and inferential statistics, using the Kaplan-Meier estimator and the log-rank test, admitting a probability of type I error of 5% (p-value = 0.05). Results: The cumulative EBF abandonment rate at 2, 4 and 6 months was 33.3%, 52.2% and 63.8%, respectively. Factors associated with EBF abandonment were maternal perception of the quality of milk as poor (p < 0.001), maternal perception that the child was not satisfied after breastfeeding (p < 0.001), use of a pacifier (p < 0.001), maternal consumption of illicit drugs after delivery (p < 0.001), the mother's return to school (p = 0.040), and presence of nipple trauma (p = 0.045).
Conclusion: The EBF abandonment rate was highest in the first 4 months of life and is higher than in the general population of breastfeeding women. Among the EBF abandonment factors, one is related to the adolescent condition, and two are related to the maternal subjective perception.
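The survival analysis named in the Methods can be sketched with a from-scratch product-limit (Kaplan-Meier) estimator; the follow-up data below are a toy example, not the cohort's:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.  events[i] = 1 if EBF was abandoned
    at times[i] (the event), 0 if censored (still breastfeeding at last visit)."""
    pairs = sorted(zip(times, events))
    surv, curve = 1.0, []
    for t in sorted(set(t for t, e in pairs if e == 1)):
        d = sum(1 for tt, e in pairs if tt == t and e == 1)  # events at time t
        n = sum(1 for tt, _ in pairs if tt >= t)             # still at risk at t
        surv *= 1.0 - d / n
        curve.append((t, surv))
    return curve

# Hypothetical follow-up months (2, 4, 6) and abandonment indicators
times = [2, 2, 4, 4, 6, 6, 6, 6]
events = [1, 1, 1, 0, 1, 0, 0, 0]
curve = kaplan_meier(times, events)
```

One minus the survival estimate at each visit gives the cumulative abandonment rate reported in the Results; the log-rank test then compares such curves between factor levels.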

Keywords: adolescent, breastfeeding, midwifery, nursing

Procedia PDF Downloads 301
9382 Feature Location Restoration for Under-Sampled Photoplethysmogram Using Spline Interpolation

Authors: Hangsik Shin

Abstract:

The purpose of this research is to restore the feature locations of under-sampled photoplethysmograms using spline interpolation and to investigate the feasibility of feature shape restoration. We obtained a 10 kHz-sampled photoplethysmogram and decimated it to generate under-sampled datasets with sampling frequencies of 5 kHz, 2.5 kHz, 1 kHz, 500 Hz, 250 Hz, 25 Hz and 10 Hz. To investigate the restoration performance, we interpolated the under-sampled signals back to 10 kHz, then compared their feature locations with those of the original 10 kHz-sampled photoplethysmogram. The features were the upper and lower peaks of the photoplethysmography waveform. Results showed that the time differences were dramatically decreased by interpolation; the location error was less than 1 ms for both feature types. In the 10 Hz-sampled case, the location error also decreased considerably; however, it remained above 10 ms.
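The restoration procedure can be sketched as follows, assuming SciPy's `CubicSpline` and a toy sine wave standing in for a real PPG beat (sampling rates chosen for illustration):

```python
import numpy as np
from scipy.interpolate import CubicSpline

fs_high, fs_low = 10_000, 100              # reference and under-sampled rates (Hz)
step = fs_high // fs_low
t_high = np.arange(0, 1, 1 / fs_high)
ppg = np.sin(2 * np.pi * 1.2 * t_high)     # toy 1.2 Hz wave, not a real PPG

t_low = t_high[::step]                     # decimation to 100 Hz
restored = CubicSpline(t_low, ppg[::step])(t_high)  # re-interpolated to 10 kHz

peak_true = t_high[np.argmax(ppg)]         # upper-peak location, original signal
peak_rest = t_high[np.argmax(restored)]    # upper-peak location after restoration
err_ms = abs(peak_true - peak_rest) * 1000.0
```

Comparing `peak_true` and `peak_rest` mirrors the feature-location comparison in the abstract; at very low sampling rates the spline can no longer recover the peak to sub-millisecond accuracy, as the 10 Hz result shows.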

Keywords: peak detection, photoplethysmography, sampling, signal reconstruction

Procedia PDF Downloads 343
9381 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring

Authors: Daniel Fundi Murithi

Abstract:

Data from economic, social, clinical, and industrial studies are often incomplete or incorrect due to censoring. Such data may have adverse effects if used in the estimation problem. We propose the use of maximum likelihood estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are realized under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and Newton-Raphson (NR) algorithms. These algorithms are compared because both iteratively produce satisfactory results in the estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that, in most simulation cases, the estimates obtained using the EM algorithm had smaller biases, smaller variances, narrower confidence intervals, and smaller RMSE than those generated via the NR algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) showed that the EM algorithm performs better than the NR algorithm in all cases under the progressive type-II censoring scheme.
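As a sketch of the Newton-Raphson side of the comparison, consider the simplified complete-data case with the location held fixed (the censored-data likelihood used in the paper is more involved; the parameterization below is one common form of the two-parameter Rayleigh density):

```python
import numpy as np

def rayleigh_scale_mle_nr(x, mu=0.0, lam0=1.0, tol=1e-10):
    """Newton-Raphson for the scale lam (= sigma^2) of the Rayleigh density
    f(x) = ((x - mu) / lam) * exp(-(x - mu)^2 / (2 * lam)),  x > mu,
    with the location mu fixed and complete (uncensored) data."""
    s = np.sum((np.asarray(x) - mu) ** 2)
    n = len(x)
    lam = lam0
    for _ in range(100):
        score = -n / lam + s / (2 * lam ** 2)   # d logL / d lam
        hess = n / lam ** 2 - s / lam ** 3      # d^2 logL / d lam^2
        step = score / hess
        lam -= step
        if abs(step) < tol:
            break
    return lam

rng = np.random.default_rng(1)
data = 2.0 + rng.rayleigh(scale=1.5, size=500)  # mu = 2, sigma = 1.5
lam_hat = rayleigh_scale_mle_nr(data, mu=2.0)
```

For this complete-data case the score equation has the closed form λ̂ = Σ(x−µ)²/(2n), so the NR iterate can be checked against it; under progressive type-II censoring no such closed form exists, which is why the EM and NR iterations are compared.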

Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring

Procedia PDF Downloads 134
9380 Bayesian Borrowing Methods for Count Data: Analysis of Incontinence Episodes in Patients with Overactive Bladder

Authors: Akalu Banbeta, Emmanuel Lesaffre, Reynaldo Martina, Joost Van Rosmalen

Abstract:

Including data from previous studies (historical data) in the analysis of a current study may reduce the sample size requirement and/or increase the power of the analysis. The most common example is incorporating historical control data in the analysis of a current clinical trial. However, this only applies when the historical control data are similar enough to the current control data. Recently, several Bayesian approaches for incorporating historical data have been proposed, such as the meta-analytic-predictive (MAP) prior and the modified power prior (MPP), both for a single control arm as well as for multiple historical control arms. Here, we examine the performance of the MAP and MPP approaches for the analysis of (over-dispersed) count data. To this end, we propose a computational method for the MPP approach for the Poisson and negative binomial models. We conducted an extensive simulation study to assess the performance of the Bayesian approaches. Additionally, we illustrate our approaches on an overactive bladder data set. For similar data across the control arms, the MPP approach outperformed the MAP approach with respect to statistical power. When the means across the control arms are different, the MPP yielded a slightly inflated type I error (TIE) rate, whereas the MAP did not. In contrast, when the dispersion parameters are different, the MAP gave an inflated TIE rate, whereas the MPP did not. We conclude that the MPP approach is more promising than the MAP approach for incorporating historical count data.
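The borrowing idea can be sketched with a simplified fixed-delta power prior for the conjugate Poisson-gamma case (the modified power prior in the abstract treats the power parameter as random; fixing it keeps the sketch conjugate and closed-form). The counts below are hypothetical:

```python
def poisson_power_prior_posterior(y_current, y_hist, delta, a=0.5, b=0.5):
    """Gamma posterior for a Poisson rate when historical counts are
    discounted by a fixed power parameter delta in [0, 1].
    Initial conjugate prior: Gamma(a, b).  Returns (shape, rate)."""
    shape = a + delta * sum(y_hist) + sum(y_current)
    rate = b + delta * len(y_hist) + len(y_current)
    return shape, rate

# Hypothetical incontinence-episode counts per patient (illustration only)
hist = [3, 5, 4, 6, 2]
curr = [4, 3, 5]
posteriors = {d: poisson_power_prior_posterior(curr, hist, d)
              for d in (0.0, 0.5, 1.0)}      # no / partial / full borrowing
```

With delta = 0 the historical arm is ignored; with delta = 1 it is pooled fully; intermediate values shrink the posterior toward the historical rate, which is the trade-off behind the power and type I error results reported above.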

Keywords: count data, meta-analytic prior, negative binomial, poisson

Procedia PDF Downloads 94
9379 Investigation on Scattered Dose Rate and Exposure Parameters during Diagnostic Examination Done with an Overcouch X-Ray Tube in Nigerian Teaching Hospital

Authors: Gbenga Martins, Christopher J. Olowookere, Lateef Bamidele, Kehinde O. Olatunji

Abstract:

The aims of this research are to measure the scattered dose rate during X-ray examinations in an X-ray room, to compare the scattered dose rate with the exposure parameters based on the body region examined, and to examine X-ray examinations done with an overcouch tube. The research was carried out using Gamma Scout software installed on a laptop computer to record the radiation counts, pulse rate, and dose rate. The measurement was performed by placing the detector at 90° to the incident X-ray beam. A proforma was used for the collection of patient data such as age, sex, examination type, and initial diagnosis. Data such as focus-skin distance (FSD), body mass index (BMI), body thickness of the patients, and the beam output (kVp) were collected at Obafemi Awolowo University, Ile-Ife, Western Nigeria. A total of 136 patients were considered in this research. The dose rate ranged between 14.21 and 86.78 µSv/h for the plain abdominal region, 2.86 and 85.70 µSv/h for the lumbosacral region, 1.3 and 3.6 µSv/yr in the pelvis region, 2.71 and 28.88 µSv/yr for the leg region, and 3.06 and 29.98 µSv/yr in the hand region. The results of this study were compared with those of studies carried out in other countries. The findings indicated that the exposure parameters selected for each diagnostic examination contributed to the dose rate recorded. Therefore, these results call for a quality assurance program (QAP) in diagnostic X-ray units in Nigerian hospitals.

Keywords: X-radiation, exposure parameters, dose rate, pulse rate, number of counts, tube current, tube potential, diagnostic examination, scattered radiation

Procedia PDF Downloads 82
9378 Effect of Depressurization Rate in Batch Foaming of Porous Microcellular Polycarbonate on Microstructure Development

Authors: Indrajeet Singh, Abhishek Gandhi, Smita Mohanty, S. K. Nayak

Abstract:

In this article, a focused study has been performed to understand the influence of the depressurization rate on the morphological attributes of microcellular polycarbonate foams. The depressurization rates considered in this study were 0.5, 0.05, 0.01 and 0.005 MPa/sec, and the physical blowing agent utilized was carbon dioxide, owing to its high solubility in polycarbonate at room temperature. The study was performed at two distinct saturation pressures, i.e., 3 MPa and 6 MPa, to understand whether the saturation pressure has any effect. It is reported that with an increase in depressurization rate, a greater degree of thermodynamic instability is induced, which results in the generation of a larger number of smaller cells. This article puts forward an understanding of how depressurization rate control can be exploited during the batch foaming process to develop high-quality microcellular foamed products with well-controlled cell size.

Keywords: depressurization, porous polymer, foaming, microcellular

Procedia PDF Downloads 243
9377 Maximum Initial Input Allowed to Iterative Learning Control Set-up Using Singular Values

Authors: Naser Alajmi, Ali Alobaidly, Mubarak Alhajri, Salem Salamah, Muhammad Alsubaie

Abstract:

Iterative Learning Control (ILC) is known to be a control tool for overcoming periodic disturbances in repetitive systems. This technique requires the error signal to tend to zero as the number of operations increases. The learning process that lies within this context is strongly dependent on the initial input, which, if selected properly, makes the learning process more effective compared to the case where the system starts blind. ILC uses previously recorded execution data to update the input of the following execution/trial such that a reference trajectory is followed to a high accuracy. Error convergence in ILC is generally highly dependent on the input applied to the plant for trial 1; thus, a good choice of initial input signal makes learning faster and, as a consequence, the error tends to zero faster as well. In the work presented here, an upper limit based on singular values (SV) is derived for the initial input signal applied at trial 1, such that the system follows the reference in fewer trials without responding aggressively or exceeding the working envelope within which a system, such as a robot arm, is required to move. Simulation results presented illustrate the theory introduced in this paper.
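The trial-to-trial convergence mechanism can be sketched with a toy lifted-system model (the plant, gains, and reference below are invented for illustration; this is a generic gradient-type ILC law, not the authors' exact scheme). The singular value of the error-propagation matrix is what governs convergence:

```python
import numpy as np

N = 50                                    # samples per trial
h = 0.8 ** np.arange(N)                   # Markov parameters of a toy stable plant
idx = np.subtract.outer(np.arange(N), np.arange(N))
G = np.where(idx >= 0, h[np.clip(idx, 0, N - 1)], 0.0)  # lifted lower-triangular plant

ref = np.sin(np.linspace(0, 2 * np.pi, N))              # reference trajectory
gamma = 1.0 / np.linalg.norm(G, 2) ** 2                 # gain from sigma_max(G)
L = gamma * G.T                                         # gradient-type learning matrix

u, errs = np.zeros(N), []                               # trial-1 input: starting blind
for trial in range(30):
    e = ref - G @ u                                     # tracking error this trial
    errs.append(np.linalg.norm(e))
    u = u + L @ e                                       # ILC update for next trial

# Error propagates as e_{k+1} = (I - G L) e_k; convergence needs sigma_max < 1
sigma = np.linalg.norm(np.eye(N) - G @ L, 2)
```

Because the largest singular value of (I − GL) is below 1 here, the error norm decays monotonically from trial 1; seeding `u` with a better-informed initial input (the paper's subject) simply shrinks the starting error this decay acts on.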

Keywords: initial input, iterative learning control, maximum input, singular values

Procedia PDF Downloads 221
9376 The Non-Existence of Perfect 2-Error Correcting Lee Codes of Word Length 7 over Z

Authors: Catarina Cruz, Ana Breda

Abstract:

Tiling problems have been capturing the attention of many mathematicians due to their real-life applications. In this study, we deal with tilings of Zⁿ by Lee spheres, where n is a positive integer, these tilings being related to error correcting codes for the transmission of information over a noisy channel. We focus our attention on the question 'for what values of n and r does the n-dimensional Lee sphere of radius r tile Zⁿ?'. It seems that the n-dimensional Lee sphere of radius r does not tile Zⁿ for n ≥ 3 and r ≥ 2. Here, we prove that it is not possible to tile Z⁷ with Lee spheres of radius 2, presenting a proof based on a combinatorial method and faithful to the geometric idea of the problem. The non-existence of such tilings has been studied by several authors, with the most difficult cases considered to be those in which the radius of the Lee spheres is equal to 2. The relation between these tilings and error correcting codes is established by considering the center of a Lee sphere as a codeword and the other elements of the sphere as words which are decoded to the central codeword. When the Lee spheres of radius r centered at the elements of a set M ⊂ Zⁿ tile Zⁿ, M is a perfect r-error correcting Lee code of word length n over Z, denoted by PL(n, r). Our strategy to prove the non-existence of PL(7, 2) codes is based on the assumption that such a code M exists. Without loss of generality, we suppose that O ∈ M, where O = (0, ..., 0). In this sense, and taking into account that we are dealing with Lee spheres of radius 2, O covers all words which are distant two or fewer units from it. By the definition of a PL(7, 2) code, each word which is distant three units from O must be covered by a unique codeword of M. These words have to be covered by codewords which are distant five units from O.
We prove the non-existence of PL(7, 2) codes showing that it is not possible to cover all the referred words without superposition of Lee spheres whose centers are distant five units from O, contradicting the definition of PL(7, 2) code. We achieve this contradiction by combining the cardinality of particular subsets of codewords which are distant five units from O. There exists an extensive literature on codes in the Lee metric. Here, we present a new approach to prove the non-existence of PL(7, 2) codes.
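The counting arguments above rest on the cardinality of the n-dimensional Lee sphere, which is easy to check numerically (this is the standard formula, stated here for context rather than taken from the paper):

```python
from math import comb

def lee_sphere_size(n, r):
    """Number of points of Z^n within Lee (L1) distance r of a fixed center:
    sum over k of 2^k * C(n, k) * C(r, k), k = 0 .. min(n, r)."""
    return sum(2 ** k * comb(n, k) * comb(r, k) for k in range(min(n, r) + 1))

size = lee_sphere_size(7, 2)   # words each PL(7, 2) codeword would have to cover
```

For n = 7 and r = 2 each sphere contains 113 words, so a PL(7, 2) code would have to partition Z⁷ into disjoint 113-point spheres; the proof shows the words at distance three from O cannot all be covered without overlap.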

Keywords: Golomb-Welch conjecture, Lee metric, perfect Lee codes, tilings

Procedia PDF Downloads 134
9375 Assessment of Time-variant Work Stress for Human Error Prevention

Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee

Abstract:

For an operator in a nuclear power plant, human error is one of the most dreaded factors that may result in unexpected accidents. The possibility of human errors may be low, but the risk they pose would be unimaginably enormous. Thus, for accident prevention, it is quite indispensable to analyze the influence of any factors which may raise the possibility of human errors. During the past decades, numerous research results have shown that the performance of human operators may vary over time due to many factors. Among them, stress is known to be an indirect factor that may cause human errors and result in mental illness. Until now, many assessment tools have been developed to assess the stress level of human workers. However, it is still questionable to utilize them for anticipating human performance as related to human error possibility, because they were mainly developed from the viewpoint of mental health rather than industrial safety. The stress level of a person may go up or down with work time. In that sense, if these tools are to be applicable in the safety domain, they should at least be able to assess the variation resulting from work time. Therefore, this study aimed to compare their applicability for safety purposes. More than 10 work stress tools were analyzed with reference to assessment items, assessment and analysis methods, and follow-up measures, which are known to be factors closely related to work stress. The results showed that most tools placed their weights mainly on some common organizational factors such as demands, supports, and relationships, in that sequence. Their weights were broadly similar. However, they failed to recommend practical solutions; instead, they merely advised setting up overall counterplans in PDCA cycles or risk management activities, which would be far from practical human error prevention.
Thus, it was concluded that applying stress assessment tools mainly developed for mental health seemed impractical for anticipating human performance for safety purposes, and that development of a new assessment tool would be inevitable if one wants to assess stress level in terms of human performance variation and accident prevention. As a practical counterplan, this study proposed a new scheme for assessing the work stress level of a human operator that may vary over work time, which is closely related to the possibility of human errors.

Keywords: human error, human performance, work stress, assessment tool, time-variant, accident prevention

Procedia PDF Downloads 649
9374 Banking Sector Development and Economic Growth: Evidence from the State of Qatar

Authors: Fekri Shawtari

Abstract:

The banking sector plays a very crucial role in the economic development of a country. As a financial intermediary, it is assigned a great role in economic growth and stability. This paper aims to examine empirically the relationship between the banking industry and economic growth in the State of Qatar. We adopt the vector error correction model (VECM), along with Granger causality, to address the long-run and short-run relationship between the banking sector and economic growth. It is expected that the results will give policy directions to policymakers to devise strategies that are conducive to boosting development and achieving the targeted economic growth in the current situation.

Keywords: economic growth, banking sector, Qatar, vector error correction model, VECM

Procedia PDF Downloads 148
9373 The Potential Use of Crude Palm Oil Liquid Wastes to Improve Nutrient Levels in Vegetable Plants

Authors: Hasan Basri Jumin

Abstract:

Application of crude palm oil (CPO) waste combined with a suitable concentration of benzyladenine (BA) had a significant effect on the mean relative growth rate of vegetable plants, and the net assimilation rate showed the same pattern, also increasing significantly during the first 28 days. The combination of a suitable concentration of crude palm oil waste and benzyladenine increased the growth and production of vegetable plants. The relative growth rate of the vegetable plants was rapid 3 weeks after planting and gradually decreased toward the end of the harvest period. Combining 400 mg/l CPO with 1.0 to 10 mg/l BA increased the mean relative growth rate (MRGR), net assimilation rate (NAR), leaf area, and dry weight of Brassica juncea, Brassica oleracea, and Lactuca sativa.

Keywords: benzyladenine, crude-palm-oil, nutrient, vegetable, waste

Procedia PDF Downloads 164
9372 Closed-Form Sharma-Mittal Entropy Rate for Gaussian Processes

Authors: Septimia Sarbu

Abstract:

The entropy rate of a stochastic process is a fundamental concept in information theory. It provides a limit to the amount of information that can be transmitted reliably over a communication channel, as stated by Shannon's coding theorems. Recently, researchers have focused on developing new measures of information that generalize Shannon's classical theory, with the aim of designing more efficient information encoding and transmission schemes. This paper continues the study of generalized entropy rates by deriving a closed-form solution to the Sharma-Mittal entropy rate for Gaussian processes. Using the squeeze theorem, we solve the limit in the definition of the entropy rate for different values of alpha and beta, the parameters of the Sharma-Mittal entropy. Finally, we compare it with the Shannon and Rényi entropy rates for Gaussian processes.
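For context, the Sharma-Mittal entropy of a discrete distribution and its standard limiting cases (these are the textbook definitions, not the paper's Gaussian-process results) are:

```latex
% Sharma-Mittal entropy of a discrete distribution p, parameters alpha, beta
H_{\alpha,\beta}(p) \;=\; \frac{1}{1-\beta}
  \left[\Bigl(\sum_i p_i^{\alpha}\Bigr)^{\frac{1-\beta}{1-\alpha}} - 1\right],
\qquad \alpha>0,\ \alpha\neq 1,\ \beta\neq 1 .

% Limiting cases recover the familiar entropies:
\lim_{\beta\to 1} H_{\alpha,\beta}(p)
  = \frac{1}{1-\alpha}\ln\sum_i p_i^{\alpha} \quad\text{(R\'enyi)},
\qquad
\lim_{\beta\to\alpha} H_{\alpha,\beta}(p)
  = \frac{1}{1-\alpha}\Bigl(\sum_i p_i^{\alpha}-1\Bigr) \quad\text{(Tsallis)},
```

and letting both parameters tend to 1 recovers the Shannon entropy, which is why the Sharma-Mittal rate interpolates between the Shannon and Rényi rates compared in the abstract.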

Keywords: generalized entropies, Sharma-Mittal entropy rate, Gaussian processes, eigenvalues of the covariance matrix, squeeze theorem

Procedia PDF Downloads 485
9371 Virtual Assessment of Measurement Error in the Fractional Flow Reserve

Authors: Keltoum Chahour, Mickael Binois

Abstract:

Due to a lack of standardization during the invasive fractional flow reserve (FFR) procedure, the index is subject to many sources of uncertainty. In this paper, we investigate, through simulation, the effect of the FFR device position and configuration on the obtained value of the FFR fraction. For this purpose, we use computational fluid dynamics (CFD) in a 3D domain corresponding to a diseased arterial portion. The FFR pressure captor is introduced inside it with a given length and coefficient of bending to capture the FFR value. To get over the computational limitations (a single simulation takes about 2 h 15 min for one FFR value), we generate a Gaussian process (GP) model for FFR prediction. The GP model shows good accuracy and demonstrates the effective error in the measurement created by the random configuration of the pressure captor.
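The surrogate idea can be sketched minimally, assuming a one-dimensional coded input and a squared-exponential kernel (toy data standing in for the expensive CFD runs; the actual study uses a higher-dimensional configuration space):

```python
import numpy as np

def rbf(a, b, ell=0.3, var=1.0):
    """Squared-exponential covariance between two 1-D input sets
    (ell and var are assumed hyperparameters, illustration only)."""
    return var * np.exp(-0.5 * (np.subtract.outer(a, b) / ell) ** 2)

# A handful of "expensive simulations": coded captor configuration -> FFR-like output
x_train = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])   # hypothetical design points
y_train = np.sin(3 * x_train)                        # stand-in for the CFD response
jitter = 1e-8                                        # numerical stabilisation

K = rbf(x_train, x_train) + jitter * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)

x_new = np.linspace(0.0, 1.0, 101)
k_star = rbf(x_new, x_train)
mean = k_star @ alpha                                # GP posterior mean (the surrogate)
var = rbf(x_new, x_new).diagonal() - np.einsum(
    'ij,ji->i', k_star, np.linalg.solve(K, k_star.T))  # posterior variance
```

Once fitted, the surrogate's posterior mean replaces the 2 h 15 min CFD run for new captor configurations, and the posterior variance quantifies the measurement uncertainty the abstract describes.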

Keywords: fractional flow reserve, Gaussian processes, computational fluid dynamics, drift

Procedia PDF Downloads 104
9370 Present State of Local Public Transportation Service in Local Municipalities of Japan and Its Effects on Population

Authors: Akiko Kondo, Akio Kondo

Abstract:

We are facing regional problems related to a low birth rate and an aging population in Japan. Under this situation, some local municipalities are losing their vitality. The aims of this study are to clarify the present state of local public transportation services in local municipalities and to quantify the relation between local public transportation services and population. We conducted a questionnaire survey concerning the regional agenda in all local municipalities in Japan and obtained responses concerning the present state of convenience in the use of public transportation and local public transportation services. Based on the data gathered from the survey, it is apparent that some measures concerning public transportation services should be taken. Convenience in the use of public transportation is an object of public concern in many rural regions. It is also clarified that some local municipalities introduce a demand bus for the purpose of promoting administrative and financial efficiency. They also introduce a demand taxi in order to secure transportation for transportation-disadvantaged people and to eliminate blank areas in public transportation service coverage. In addition, we construct a population model which includes explanatory variables for the present state of local public transportation services. From this result, we quantitatively clarify the relation between public transportation services and population.

Keywords: public transportation, local municipality, regional analysis, regional issue

Procedia PDF Downloads 373
9369 Clinical Outcome after in Vitro Fertilization in Women Aged 40 Years and Above: Reasonable Cut-Off Age for Successful Pregnancy

Authors: Eun Jeong Yu, Inn Soo Kang, Tae Ki Yoon, Mi Kyoung Koong

Abstract:

Advanced female age is associated with higher cycle cancellation rates, lower clinical pregnancy rates, and increased miscarriage and aneuploidy rates in IVF (in vitro fertilization) cycles. This retrospective cohort study was conducted at CHA Fertility Center Seoul Station. All fresh non-donor IVF cycles performed in women aged 40 years and above from January 2016 to December 2016 were reviewed. Donor/recipient treatments and PGD/PGS (preimplantation genetic diagnosis/preimplantation genetic screening) cycles were excluded from the analysis. Of the 1,166 cycles from 753 women who completed ovulation induction, 1,047 met the inclusion and exclusion criteria. IVF cycles were categorized into the following one-year age groups: 40, 41, 42, 43, 44, 45, and ≥ 46. The mean age of patients was 42.4 ± 1.8 years. The median AMH (anti-Mullerian hormone) level was 1.2 ± 1.5 ng/mL. The mean number of retrieved oocytes was 4.9 ± 4.3. The clinical pregnancy rate and live birth rate in women > 40 years decreased significantly with each year of advancing age (p < 0.001). The clinical pregnancy rate decreased from 21% at the age of 40 years to 0% at ages above 45 years; the live birth rate decreased from 12.3% to 0%, respectively. There were no clinical pregnancies among the 95 patients above 45 years of age. The overall miscarriage rate was 40.7% (range, 36.7%-70%). The transfer of at least one good quality embryo was associated with an approximately 4-9% increased chance of clinical pregnancy. Therefore, IVF in women aged 40 to 45 offers a reasonable chance of a successful pregnancy, especially when a good quality embryo is transferred.

Keywords: advanced maternal age, in vitro fertilization, pregnancy rate, live birth rate

Procedia PDF Downloads 121
9368 Experimental Investigation and Constitutive Modeling of Volume Strain under Uniaxial Strain Rate Jump Test in HDPE

Authors: Rida B. Arieby, Hameed N. Hameed

Abstract:

In this work, tensile tests on high density polyethylene have been carried out under various constant strain rates and under strain rate jump tests. The dependence of the true stress and, especially, the variation of the volume strain have been investigated; the volume strain due to damage was determined in real time during the tests by an optical extensometer technique called VideoTraction. Modified constitutive equations, including strain rate and damage effects, are proposed; the model is based on a non-equilibrium thermodynamic approach called DNLR. The ability of the model to predict the complex nonlinear response of this polymer is examined by comparing the model simulation with the available experimental data, which demonstrates that the model can represent the deformation behavior of the polymer reasonably well.

Keywords: strain rate jump tests, volume strain, high density polyethylene, large strain, thermodynamics approach

Procedia PDF Downloads 239
9367 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, which is quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data.
The reconstruction error of each image, the error reduction, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate strong correlations between the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability scores and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
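A sketch of the analysis step only (not the VGG autoencoder itself): given per-image latent codes and reconstruction errors, compute distinctiveness as the distance to the nearest latent neighbor and correlate reconstruction error with memorability. All arrays below are simulated stand-ins for quantities the study would obtain from the trained autoencoder and the MemCat scores.

```python
# Stand-in analysis: latent codes, reconstruction errors, and memorability
# scores are simulated here; in the study they come from the autoencoder
# and the MemCat dataset.
import numpy as np

rng = np.random.default_rng(2)
n, d = 300, 32
latents = rng.normal(size=(n, d))              # stand-in latent codes
recon_error = np.linalg.norm(latents, axis=1)  # stand-in reconstruction error
memorability = 0.1 * recon_error + rng.normal(0, 0.05, n)  # stand-in scores

# Distinctiveness: Euclidean distance to the nearest *other* latent code.
dists = np.linalg.norm(latents[:, None, :] - latents[None, :, :], axis=-1)
np.fill_diagonal(dists, np.inf)                # exclude self-distance
distinctiveness = dists.min(axis=1)

# Pearson correlation between reconstruction error and memorability.
r = np.corrcoef(recon_error, memorability)[0, 1]
```

With real data, `r` would correspond to the strong positive correlation the abstract reports between reconstruction error and memorability score.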

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 54
9366 Effect of Traffic Composition on Delay and Saturation Flow at Signal Controlled Intersections

Authors: Arpita Saha, Apoorv Jain, Satish Chandra, Indrajit Ghosh

Abstract:

The level of service at a signal-controlled intersection is directly measured from the delay; similarly, the saturation flow rate is a fundamental parameter for measuring intersection capacity. The present study calculates the vehicle arrival rate, departure rate, and queue length for every five-second interval in each cycle. Based on the queue lengths, the total delay of the cycle has been calculated using Simpson's 1/3rd rule. Saturation flow has been estimated in terms of veh/hr of green/lane for every five-second interval of the green period, until at least three vehicles are left to cross the stop line. Vehicle composition has a pronounced effect on both total delay and saturation flow rate: an increase in the two-wheeler proportion increases the saturation flow rate and significantly reduces the total delay per vehicle, whereas an increase in the heavy vehicle proportion reduces the saturation flow rate and increases the total delay for each vehicle.
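The delay computation the abstract names can be sketched directly: total delay is the time-integral of queue length over the cycle, and with queue lengths sampled every five seconds this integral can be evaluated with Simpson's 1/3rd rule. The queue values below are invented for illustration.

```python
# Total delay as the time-integral of queue length (veh·s), via Simpson's
# 1/3rd rule over 5-second samples. Queue data are illustrative, not measured.
def simpson_delay(queue_lengths, h=5.0):
    """Simpson's 1/3rd rule; requires an even number of intervals."""
    n = len(queue_lengths) - 1
    if n % 2 != 0:
        raise ValueError("Simpson's 1/3rd rule needs an even number of intervals")
    s = queue_lengths[0] + queue_lengths[-1]
    s += 4 * sum(queue_lengths[1:-1:2])  # odd-indexed samples
    s += 2 * sum(queue_lengths[2:-1:2])  # interior even-indexed samples
    return h / 3 * s

# One hypothetical cycle: queue builds during red, discharges during green.
queue = [0, 2, 5, 8, 10, 7, 4, 2, 0]       # vehicles, sampled every 5 s
total_delay = simpson_delay(queue)          # veh·s over the cycle
avg_delay = total_delay / 12                # per-vehicle, if 12 vehicles arrived
```

Dividing the cycle's total delay by the number of arriving vehicles gives the average delay per vehicle used for level-of-service assessment.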

Keywords: delay, saturation flow, signalised intersection, vehicle composition

Procedia PDF Downloads 441
9365 Thermal Efficiency Analysis and Optimal of Feed Water Heater for Mae Moh Thermal Power Plant

Authors: Khomkrit Mongkhuntod, Chatchawal Chaichana, Atipoang Nuntaphan

Abstract:

The feed water heater is an important piece of equipment in a thermal power plant: the feedwater temperature reached in the feed heating process directly affects plant efficiency, i.e., the heat rate. Normally, the degradation of a feed water heater operated for a long time decreases plant efficiency and increases the plant heat rate. At the Mae Moh power plant, each unit has operated for more than 20 years, and degradation of the main equipment affects plant efficiency and heat rate. Efficiency and heat rate analysis shows that the Mae Moh power plant now operates at a higher heat rate than during the commissioning period. Some equipment has been replaced to improve plant efficiency and heat rate, such as the HP and LP turbines, which increased plant efficiency by 5% and decreased the plant heat rate by 1%. Under the power generation plan, the Mae Moh power plant must continue operating for more than 10 years. This work focuses on a thermal efficiency analysis of the feed water heaters, compared against commissioning data, to find ways to improve feed water heater efficiency and thereby increase plant efficiency or decrease the plant heat rate. A heat balance model simulation and the economic value added (EVA) method are used to study the investment in replacing the feed water heaters and to analyze whether the project stays above the break-even point, supporting the project decision.

Keywords: feed water heater, power plant efficiency, plant heat rate, thermal efficiency analysis

Procedia PDF Downloads 340
9364 Seasonal and Monthly Field Soil Respiration Rate and Litter Fall Amounts of Kasuga-Yama Hill Primeval Forest

Authors: Ayuko Itsuki, Sachiyo Aburatani

Abstract:

The seasonal (January, April, July, and October) and monthly soil respiration rates and the monthly litter fall amounts were examined in the laurel-leaved (B_B-1) and Cryptomeria japonica (B_B-2 and PW) forests in the Kasugayama Hill Primeval Forest (Nara, Japan). The change in the seasonal soil respiration rate corresponded to that of the soil temperature. The soil respiration rate was higher in October, when fresh organic matter was supplied to the forest floor, than in April, despite the same temperature. The seasonal soil respiration rate of B_B-1 was higher than that of B_B-2, consistent with the larger numbers of bacteria and fungi counted in B_B-1 by the dilution plate method and by direct microscopic count. The seasonal soil respiration rate of B_B-2 was higher than that of PW, consistent with the larger microbial biomass found in B_B-2 by direct microscopic count. The correlation coefficient between the seasonal soil respiration rate and soil temperature was higher than that for the monthly soil respiration rate. The carbon released by soil respiration exceeded the carbon supplied by litter fall, suggesting that the soil respiration included carbon dioxide emitted by plant roots and soil animals, or that the litter supplied to the forest floor included animal as well as plant litter.

Keywords: field soil respiration rate, forest soil, litter fall, mineralization rate

Procedia PDF Downloads 266
9363 Financial Inclusion for Inclusive Growth in an Emerging Economy

Authors: Godwin Chigozie Okpara, William Chimee Nwaoha

Abstract:

The paper sets out to show how a financial inclusion index can be calculated and investigates the impact of inclusive finance on inclusive growth in an emerging economy. In light of these objectives, the chi-wins method was used to calculate indexes of financial inclusion, while co-integration and an error correction model were used to evaluate the impact of financial inclusion on inclusive growth. The analysis revealed that financial inclusion, while having a long-run relationship with GDP growth, is an insignificant determinant of the growth of the economy. The speed of adjustment is correctly signed and significant. On the basis of these results, the researchers call for sustained efforts by the government and the banking sector to promote financial inclusion in developing countries.
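A hedged sketch of the co-integration/error-correction machinery the abstract relies on, in the Engle-Granger two-step style (the paper's exact estimator is not stated): regress the growth series on the financial inclusion index, then fit an error correction model whose adjustment coefficient should be negative ("correctly signed"). The series below are simulated.

```python
# Engle-Granger-style sketch with simulated, cointegrated series; not the
# paper's actual data or estimator.
import numpy as np

rng = np.random.default_rng(3)
T = 200
fi_index = np.cumsum(rng.normal(0.1, 1, T))         # I(1) financial inclusion index
gdp = 2.0 + 0.5 * fi_index + rng.normal(0, 0.5, T)  # cointegrated with it

# Step 1: long-run (cointegrating) regression; keep the residuals.
X = np.column_stack([np.ones(T), fi_index])
beta, *_ = np.linalg.lstsq(X, gdp, rcond=None)
ect = gdp - X @ beta                                 # error correction term

# Step 2: ECM in first differences with the lagged error correction term.
d_gdp, d_fi, ect_lag = np.diff(gdp), np.diff(fi_index), ect[:-1]
Z = np.column_stack([np.ones(T - 1), d_fi, ect_lag])
gamma, *_ = np.linalg.lstsq(Z, d_gdp, rcond=None)
speed_of_adjustment = gamma[2]   # expected negative: deviations are corrected
```

A negative, significant `speed_of_adjustment` is what the abstract means by the speed of adjustment being "correctly signed and significant".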

Keywords: chi-wins index, co-integration, error correction model, financial inclusion

Procedia PDF Downloads 629
9362 The Effect of Deformation Activation Volume, Strain Rate Sensitivity and Processing Temperature of Grain Size Variants

Authors: P. B. Sob, A. A. Alugongo, T. B. Tengen

Abstract:

The activation volume of 6082-T6 aluminum is investigated at different temperatures for several grain size variants. The deformation activation volume was computed from the relationship between Boltzmann's constant k, the testing temperature, the material strain rate sensitivity, and the material yield stress of the grain size variants. The material strain rate sensitivity is computed as a function of yield stress and strain rate for each grain size variant. The effects of the material strain rate sensitivity and the deformation activation volume of 6082-T6 aluminum at different temperatures for 3-D grains are discussed. It is shown that the strain rate sensitivities and activation volumes are negative for the grain size variants during the deformation of nanostructured materials. It is also observed that the activation volume varies in different ways with the equivalent radius, semi-minor axis radius, semi-major axis radius, and major axis radius. The results show that the activation volume both increased and decreased with the testing temperature, and that an increase in strain rate sensitivity led to a decrease in activation volume, whereas an increase in activation volume led to a decrease in strain rate sensitivity.
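The relationship the abstract describes between k, temperature, strain rate sensitivity, and yield stress can be sketched in one common form, V* = kT / (m·σ), where m = ∂(ln σ)/∂(ln ε̇); note that some formulations include an additional geometric factor (e.g. √3), and the numbers below are illustrative, not the paper's data.

```python
# Hedged sketch: strain rate sensitivity m from a strain-rate jump, then the
# apparent activation volume V* = k*T / (m * sigma). Illustrative numbers only;
# some formulations add a factor such as sqrt(3).
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def strain_rate_sensitivity(rate1, stress1, rate2, stress2):
    """m = d(ln stress) / d(ln strain rate), estimated from a jump test."""
    return (math.log(stress2) - math.log(stress1)) / (
        math.log(rate2) - math.log(rate1))

def activation_volume(temperature_k, m, yield_stress_pa):
    """Apparent activation volume in m^3; sign follows the sign of m."""
    return K_BOLTZMANN * temperature_k / (m * yield_stress_pa)

# Hypothetical jump from 1e-4 to 1e-3 /s raising the flow stress 250 -> 260 MPa.
m = strain_rate_sensitivity(1e-4, 250e6, 1e-3, 260e6)
v_star = activation_volume(293.0, m, 250e6)  # a negative m would flip the sign
```

This sign-coupling is why the abstract can report negative activation volumes whenever the fitted strain rate sensitivity is negative.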

Keywords: nanostructured materials, grain size variants, temperature, yield stress, strain rate sensitivity, activation volume

Procedia PDF Downloads 231
9361 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data

Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini

Abstract:

A considerable part of the rainfall data used in hydrological practice is available only in aggregated form over constant time intervals. This can produce undesirable effects, such as the underestimation of the annual maximum rainfall depth, Hd, associated with a given duration, d, which is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining whether climate change is affecting extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are investigated here. Our results indicate that: 1) in the worst conditions, for d=ta, the estimate of a single Hd value can be affected by an underestimation error of up to 50%, while the average underestimation error for a series with at least 15-20 Hd values is less than or equal to 16.7%; 2) the underestimation error values follow an exponential probability density function; 3) every very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed at many stations in Central Italy, may overcome this issue; 5) these equations should make it possible to improve the Hd estimates and the associated depth-duration-frequency curves, at least in areas with similar climatic conditions.
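The aggregation effect itself is easy to demonstrate: the maximum d-duration depth computed from non-overlapping ta-windows can never exceed, and usually underestimates, the maximum from a sliding window over the fine-resolution series. The rainfall series below is synthetic, used only to show the mechanism.

```python
# Demonstration of Hd underestimation from coarse aggregation (synthetic rain).
import numpy as np

rng = np.random.default_rng(4)
rain_1min = rng.exponential(0.02, 60 * 24)  # one day at 1-min resolution, mm

d = 60  # duration of interest, minutes; worst case ta = d
# "True" Hd: maximum over all sliding 60-min windows (continuous data).
csum = np.concatenate([[0.0], np.cumsum(rain_1min)])
true_hd = (csum[d:] - csum[:-d]).max()

# Aggregated Hd: maximum over non-overlapping 60-min blocks (ta = d).
agg_hd = rain_1min.reshape(-1, d).sum(axis=1).max()

underestimate = 1 - agg_hd / true_hd  # fraction lost to coarse aggregation
```

Since a sliding window spans at most two adjacent blocks, `true_hd` is at most twice `agg_hd`, which is exactly the abstract's worst-case 50% underestimation bound for d = ta.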

Keywords: central Italy, extreme events, rainfall data, underestimation errors

Procedia PDF Downloads 168
9360 MCERTL: Mutation-Based Correction Engine for Register-Transfer Level Designs

Authors: Khaled Salah

Abstract:

In this paper, we present MCERTL (mutation-based correction engine for RTL designs), an automatic error correction technique based on mutation analysis. A mutation-based correction methodology is proposed to automatically fix erroneous RTL designs. The proposed strategy combines the processes of mutation and assertion-based localization: the erroneous statements are mutated to produce candidate fixes for the failing RTL code. A concurrent mutation engine is proposed to mitigate the computational cost of running sequential mutation operators. The proposed methodology is evaluated against several benchmarks. The experimental results demonstrate that our method automatically locates and corrects multiple bugs in reasonable time.
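A toy illustration of the mutate-and-check loop (not the actual RTL engine, which operates on hardware description code): mutate the operator in a faulty statement and keep the mutant that satisfies every assertion-style check. The "design" and checks below are invented.

```python
# Toy mutation-based repair: try operator mutants of a faulty statement until
# all checks (assertions) pass. Hypothetical example, not MCERTL itself.
import operator

OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '^': operator.xor}

# Faulty "design": should compute a XOR b but was written with '+'.
faulty_op = '+'
checks = [((1, 1), 0), ((1, 0), 1), ((2, 3), 1)]  # (inputs, expected output)

def passes(op_symbol):
    fn = OPS[op_symbol]
    return all(fn(a, b) == expected for (a, b), expected in checks)

# Mutation engine: try each operator mutant until one satisfies every check.
fix = next(op for op in OPS if passes(op))
```

In MCERTL the same idea is applied concurrently across many mutation operators, with assertion-based localization first narrowing down which statements to mutate.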

Keywords: bug localization, error correction, mutation, mutants

Procedia PDF Downloads 255
9359 An Application of Modified M-out-of-N Bootstrap Method to Heavy-Tailed Distributions

Authors: Hannah F. Opayinka, Adedayo A. Adepoju

Abstract:

This study extends prior work on modifying the existing m-out-of-n (moon) bootstrap method for heavy-tailed distributions, in which the modified m-out-of-n (mmoon) bootstrap was proposed as an alternative to the existing moon technique. In this study, both the moon and mmoon techniques were applied to two real income datasets, which followed Lognormal and Pareto distributions, respectively, with finite variances. The performances of the two techniques were compared using the standard error (SE) and root mean square error (RMSE). The findings showed that mmoon outperformed the moon bootstrap, yielding smaller SEs and RMSEs for all the sample sizes considered in the two datasets.
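A hedged sketch of the m-out-of-n bootstrap idea underlying both techniques (the mmoon modification's exact resample-size rule is not specified in the abstract): resample m < n observations to stabilize the bootstrap under heavy tails, then rescale the spread of the m-sample statistic back to the n-sample scale. The income data are simulated from a Pareto-type distribution with finite variance.

```python
# m-out-of-n bootstrap SE of the mean on simulated heavy-tailed income data;
# the choice m = n^0.7 is an illustrative assumption, not the paper's rule.
import numpy as np

def moon_bootstrap_se(data, m, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        sample = rng.choice(data, size=m, replace=True)  # m-out-of-n resample
        stats[b] = sample.mean()
    # Rescale: SE at sample size n is sqrt(m/n) times the m-sample spread.
    return np.sqrt(m / len(data)) * stats.std(ddof=1)

rng = np.random.default_rng(5)
income = rng.pareto(3.0, 500) + 1.0        # heavy-tailed, finite variance
se_moon = moon_bootstrap_se(income, m=int(500 ** 0.7))
```

Comparing such SEs (and RMSEs against a known target) across sample sizes is how the study ranks the moon and mmoon variants.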

Keywords: bootstrap, income data, lognormal distribution, Pareto distribution

Procedia PDF Downloads 168