Search results for: probability estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2939

419 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework

Authors: Iulia E. Falcan

Abstract:

The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand in times of low wind speed and low solar radiation, at a reasonable cost. This is likely to be achieved through a combination of 1) energy storage technologies, 2) development of the cross-border power grid, 3) installed overcapacity of RE and 4) dispatchable power sources, such as biomass. This paper uses NASA-derived hourly data on weather patterns of sixteen European countries for the past twenty-five years, and load data from the European Network of Transmission System Operators-Electricity (ENTSO-E), to develop a stochastic optimization model. The model aims to understand the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technology portfolio. While this issue has been addressed before, it was done using deterministic models that extrapolated historic data on weather patterns and power demand, while ignoring the risk of an unbalanced grid, a risk stemming from both the supply and the demand side. This paper aims to explicitly account for the inherent uncertainty in the energy system transition. It articulates two levels of uncertainty: a) the inherent uncertainty in future weather patterns and b) the uncertainty of fully meeting power demand. The first level is addressed by developing probability distributions for future weather data, and thus for the expected power output from RE technologies, rather than assuming a known future power output. The second level is operationalized by introducing a Conditional Value at Risk (CVaR) constraint in the portfolio optimization problem. By setting the risk threshold at different levels (1%, 5% and 10%), important insights are revealed regarding the synergies of the different energy technologies, i.e., the circumstances under which they behave as either complements or substitutes to each other.
The paper concludes that allowing for uncertainty in expected power output, rather than extrapolating historic data, paints a more realistic picture and reveals important departures from the results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid, and assigning it different thresholds, reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix.
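The CVaR constraint described in the abstract can be illustrated in its empirical form: the CVaR at level alpha is the mean of the worst alpha-fraction of outcomes. The sketch below uses invented shortfall samples (none of the numbers come from the paper) to show how the 1%, 5% and 10% thresholds select progressively larger tails:

```python
import numpy as np

def cvar(samples, alpha):
    """Empirical CVaR: mean of the worst alpha-fraction of outcomes."""
    samples = np.sort(np.asarray(samples, dtype=float))
    k = max(1, int(np.ceil(alpha * len(samples))))  # size of the alpha-tail
    return samples[-k:].mean()                      # average over that tail

# Hypothetical unmet-demand samples (illustrative only, not study data)
rng = np.random.default_rng(0)
shortfall = rng.gamma(shape=2.0, scale=50.0, size=10_000)

for a in (0.01, 0.05, 0.10):  # the paper's 1%, 5% and 10% risk thresholds
    print(f"CVaR at {a:.0%}: {cvar(shortfall, a):.1f}")
```

A tighter threshold (1%) averages over a smaller, more extreme tail, so its CVaR is at least as large as the 10% value, which is what makes the constraint progressively stricter.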

Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization

Procedia PDF Downloads 143
418 Evaluating Value of Users' Personal Information Based on Cost-Benefit Analysis

Authors: Jae Hyun Park, Sangmi Chai, Minkyun Kim

Abstract:

As users spend more time on the Internet, the probability of their personal information being exposed has been growing. The main purpose of this research is to investigate factors, and to examine relationships among them, when Internet users assess the value of their private information as an economic asset. The study targets Internet users, and the value of their private information is converted into economic figures. Moreover, how that economic value changes in relation to individual attributes, the dealer's traits, and circumstantial properties is studied. In this research, the changes in factors affecting private information value in different situations are analyzed from an economic perspective. Additionally, this study examines the associations between users' perceived risk and the value of their personal information. Using the cost-benefit analysis framework, the hypothesis that users' sense of private information value can be influenced by individual attributes and situational properties is tested. This research therefore attempts to address three research objectives. First, it identifies factors that affect users' recognition of the value of their personal information. Second, it provides evidence that information system users' economic valuation of information differs in response to personal, trade opponent, and situational attributes. Third, it investigates the impact of those attributes on individuals' perceived risk. Based on the assumption that personal, trade opponent and situational attributes affect users' recognition of the value of private information, this research presents an understanding of the different impacts of those attributes from an economic perspective and tests the associative relationships between perceived risk and users' valuation of their personal information.
To validate the research model, regression methodology was used. The results support that information breach experience and information security systems are associated with users' perceived risk. Information control and uncertainty are also related to users' perceived risk. Therefore, users' perceived risk is a significant factor in evaluating the value of personal information, and it is differentiated by trade opponent and situational attributes. This research presents a new perspective on evaluating the value of users' personal information in the context of perceived risk and personal, trade opponent and situational attributes. It fills a gap in the literature by showing how users' perceived risk is associated with personal, trade opponent and situational attributes when users provide personal information in business transactions. It adds to previous literature the finding that a relationship exists between perceived risk and the value of users' private information from an economic perspective. It also provides meaningful insight to managers: to minimize the cost of information breaches, managers need to recognize the value of individuals' personal information and decide on the proper amount of investment in protecting users' online information privacy.

Keywords: private information, value, users, perceived risk, online information privacy, attributes

Procedia PDF Downloads 200
417 Clustering-Based Computational Workload Minimization in Ontology Matching

Authors: Mansir Abubakar, Hazlina Hamdan, Norwati Mustapha, Teh Noranis Mohd Aris

Abstract:

In order to build a matching pattern for each class correspondence of an ontology, a set of attribute correspondences across two corresponding classes must be specified by clustering. Clustering reduces the number of potential attribute correspondences considered in the matching activity, which significantly reduces the computational workload; otherwise, all attributes of a class would have to be compared with all attributes of the corresponding class. Most existing ontology matching approaches lack scalable attribute discovery methods, such as cluster-based attribute searching, which makes the ontology matching activity computationally expensive. It is therefore vital in ontology matching to design a scalable element or attribute correspondence discovery method that reduces the number of potential element correspondences during mapping, and thereby the computational workload of the matching process as a whole. The objectives of this work are 1) to design a clustering method for discovering similar attribute correspondences and relationships between ontologies, and 2) to discover element correspondences by classifying the elements of each class based on their value features using the K-medoids clustering technique. Discovering attribute correspondences is essential for comparing instances when matching two ontologies. During the matching process, any two instances from two different data sets should be compared on their attribute values, so that they can be judged to be the same or not. Intuitively, any two instances that come from classes linked by a class correspondence are likely to be identical to each other, and any two instances that hold more similar attribute values are more likely to be matched than ones with less similar attribute values. Most of the time, similar attribute values exist in two instances linked by an attribute correspondence.
This work presents how to classify the attributes of each class with K-medoids clustering and then map the clustered groups by their statistical value features. We also show how to map the attributes of a clustered group to the attributes of the mapped clustered group, generating a set of potential attribute correspondences to be applied in generating a matching pattern. The K-medoids clustering phase largely reduces the number of non-corresponding attribute pairs considered when comparing instances, as only attribute pairs whose coverage probability reaches 100%, and attributes above the specified threshold, are considered as potential attributes for matching. Using clustering reduces the number of potential element correspondences considered during the mapping activity, which in turn reduces the computational workload significantly; otherwise, all elements of a class in the source ontology would have to be compared with all elements of the corresponding classes in the target ontology. K-medoids can effectively cluster the attributes of each class, so that a proportion of non-corresponding attribute pairs is not considered when constructing the matching pattern.
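As a sketch of the K-medoids step (grouping attributes by their value features before matching), the following is a naive PAM-style loop over invented one-dimensional feature vectors; a production system would use an optimized library implementation, and the features and k chosen here are purely illustrative:

```python
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    """Naive K-medoids: alternate nearest-medoid assignment and medoid update."""
    X = np.asarray(X, dtype=float)
    D = np.abs(X[:, None, :] - X[None, :, :]).sum(-1)  # pairwise L1 distances
    medoids = np.random.default_rng(seed).choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = D[:, medoids].argmin(axis=1)          # assign to nearest medoid
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.flatnonzero(labels == c)
            # new medoid: member minimizing total within-cluster distance
            new_medoids[c] = members[D[np.ix_(members, members)].sum(axis=0).argmin()]
        if np.array_equal(new_medoids, medoids):       # converged
            break
        medoids = new_medoids
    return labels, medoids

# Invented attribute value features: two well-separated groups
feats = [[0.0], [0.4], [0.8], [10.0], [10.4], [10.8]]
labels, medoids = k_medoids(feats, k=2)
```

Only attribute pairs falling in corresponding clusters would then be carried forward as candidate correspondences, which is what shrinks the comparison space.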

Keywords: attribute correspondence, clustering, computational workload, k-medoids clustering, ontology matching

Procedia PDF Downloads 221
416 Organ Dose Calculator for Fetus Undergoing Computed Tomography

Authors: Choonsik Lee, Les Folio

Abstract:

Pregnant patients may undergo CT in emergencies unrelated to pregnancy, and the potential risk to the developing fetus is of concern. It is critical to accurately estimate fetal organ doses in CT scans. We developed a fetal organ dose calculation tool using pregnancy-specific computational phantoms combined with Monte Carlo radiation transport techniques. We adopted a series of pregnancy computational phantoms developed at the University of Florida for gestational ages of 8, 10, 15, 20, 25, 30, 35, and 38 weeks (Maynard et al. 2011). More than 30 organs and tissues and 20 skeletal sites are defined in each fetus model. We calculated fetal organ doses normalized by CTDIvol to derive organ dose conversion coefficients (mGy/mGy) for the eight fetuses, for consecutive slice locations ranging from the top to the bottom of the pregnancy phantoms with 1 cm slice thickness. Organ dose from helical scans was approximated by the summation of doses from the multiple axial slices included in the scan range of interest. We then compared dose conversion coefficients for major fetal organs in abdominal-pelvis CT scans of the pregnancy phantoms with the uterine dose of a non-pregnant adult female computational phantom. A comprehensive library of organ conversion coefficients was established for the eight developing fetuses undergoing CT and implemented in an in-house graphical user interface-based computer program for convenient estimation of fetal organ doses from the CT technical parameters and the gestational age of the fetus. We found that the esophagus received the lowest dose and the kidneys the greatest dose in all fetuses in AP scans of the pregnancy phantoms. We also found that when the uterine dose of a non-pregnant adult female phantom is used as a surrogate for fetal organ doses, the root-mean-square error ranged from 0.08 mGy (8 weeks) to 0.38 mGy (38 weeks).
The uterine dose was up to 1.7-fold greater than the esophagus dose of the 38-week fetus model. The calculation tool should be useful in cases requiring fetal organ dose estimates in emergency CT scans, as well as for patient dose monitoring.
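The helical-scan approximation described above (summing per-slice, CTDIvol-normalized conversion coefficients over the scanned range, then scaling by the scan's CTDIvol) can be sketched as follows; the coefficient values, slice indices and CTDIvol below are invented for illustration, not values from the study:

```python
def organ_dose(slice_coeffs, start, stop, ctdi_vol):
    """Approximate helical-scan organ dose (mGy) from per-slice
    dose conversion coefficients (mGy/mGy) and the scan's CTDIvol (mGy)."""
    covered = slice_coeffs[start:stop]   # 1 cm axial slices inside the scan range
    return ctdi_vol * sum(covered)       # sum of slice contributions, scaled

# Hypothetical per-slice coefficients for one organ along the phantom axis
coeffs = [0.1, 0.2, 0.3, 0.4]
dose = organ_dose(coeffs, start=1, stop=3, ctdi_vol=10.0)  # scan covers slices 1-2
```

Widening the scan range can only add non-negative slice contributions, so the estimated organ dose is monotone in the scanned extent, which matches the intuition behind the slice-summation approximation.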

Keywords: computed tomography, fetal dose, pregnant women, radiation dose

Procedia PDF Downloads 110
415 Constraints on Source Rock Organic Matter Biodegradation in the Biogenic Gas Fields in the Sanhu Depression, Qaidam Basin, Northwestern China: A Study of Compound Concentration and Concentration Ratio Changes Using GC-MS Data

Authors: Mengsha Yin

Abstract:

Extractable organic matter (EOM) from thirty-six biogenic gas source rocks from the Sanhu Depression in the Qaidam Basin in northwestern China was obtained via Soxhlet extraction. Twenty-nine of the extracts were subjected to SARA (saturates, aromatics, resins and asphaltenes) separation for bulk composition analysis. The saturated and aromatic fractions of all extracts were analyzed by gas chromatography-mass spectrometry (GC-MS) to investigate the compound compositions. More abundant n-alkanes, naphthalene, phenanthrene, dibenzothiophene and their alkylated products occur in samples at shallower depths. From 2000 m downward, concentrations of these compounds increase sharply, and the concentration ratios of more-over-less biodegradation-susceptible compounds coincidently decrease dramatically. The ∑iC15-16, 18-20/∑nC15-16, 18-20 and hopanoids/∑n-alkanes concentration ratios, and the mono- and tri-aromatic sterane concentrations and concentration ratios, frequently fluctuate with depth rather than trend with it, reflecting effects of organic input and paleoenvironments rather than biodegradation. The saturated and aromatic compound distributions on the total ion chromatogram (TIC) traces of the samples display different degrees of biodegradation. The dramatic and simultaneous variations in compound concentrations and their ratios at 2000 m, and their changes with depth beneath it, jointly justify the crucial control of burial depth on the scale of organic matter biodegradation in source rocks and prompt the proposition that 2000 m is the bottom depth boundary for active microbial activity in this study. The study helps to better constrain the depths at which effective source rocks occur in the Sanhu biogenic gas fields and calls for additional attention to source rock pore size estimation during biogenic gas source rock appraisals.

Keywords: pore space, Sanhu depression, saturated and aromatic hydrocarbon compound concentration, source rock organic matter biodegradation, total ion chromatogram

Procedia PDF Downloads 126
414 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances

Authors: P. Mounnarath, U. Schmitz, Ch. Zhang

Abstract:

Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of expansion joints according to various bridge design codes is largely inconsistent, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm and 350 mm) are designed following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference; it uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time history analysis is performed, taking as input artificial ground motion sets with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g in increments of 0.05 g. Soil-structure interaction and P-Δ effects are also included in the analysis. Component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that, in the component fragility analysis, the reference bridge model exhibits a severe vulnerability compared to the other, more sophisticated bridge models for all damage states.
In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states, but a higher fragility than the other curves at larger PGA levels. In the fourth damage state, the reference curve shows the smallest vulnerability. In both the component and the system fragility analysis, the same trend is found: bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
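A fragility curve of the kind built here gives the probability of reaching a damage state at a given PGA, and is commonly modeled as a lognormal CDF in the intensity measure. The sketch below assumes that common form; the median capacity (theta, in g) and dispersion (beta) are invented values, not results from this study:

```python
import math

def fragility(pga, theta, beta):
    """Lognormal fragility model: P(damage state reached | PGA).
    theta is the median capacity (g), beta the log-standard deviation."""
    z = math.log(pga / theta) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF of z

# Evaluate over the abstract's PGA grid, 0.1 g to 1.0 g in 0.05 g steps
curve = [fragility(0.1 + 0.05 * i, theta=0.45, beta=0.5) for i in range(19)]
```

At PGA equal to the median capacity the exceedance probability is 0.5 by construction, and the curve increases monotonically with PGA, which is the behavior the component and system curves in the study exhibit.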

Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis

Procedia PDF Downloads 412
413 Synthesis and Thermoluminescence Investigations of Doped LiF Nanophosphor

Authors: Pooja Seth, Shruti Aggarwal

Abstract:

Thermoluminescence dosimetry (TLD) is one of the most effective methods for the assessment of dose during diagnostic radiology and radiotherapy applications. In these applications, monitoring of the absorbed dose is essential to prevent patients from undue exposure and to evaluate the risks that may arise from exposure. LiF-based thermoluminescence (TL) dosimeters are promising materials for the estimation, calibration and monitoring of dose due to their favourable dosimetric characteristics, such as tissue equivalence, high sensitivity, energy independence and dose linearity. As the TL efficiency of a phosphor strongly depends on the preparation route, it is interesting to investigate the TL properties of LiF-based phosphors in nanocrystalline form. LiF doped with magnesium (Mg), copper (Cu), sodium (Na) and silicon (Si) in nanocrystalline form was prepared using a chemical co-precipitation method, forming cubic LiF nanostructures. The TL dosimetry properties were investigated by exposing the phosphor to gamma rays. The TL glow curve of the nanocrystalline form consists of a single peak at 419 K, as compared to the multiple peaks observed in the microcrystalline form. A consistent glow curve structure with maximum TL intensity at an annealing temperature of 573 K and a linear dose response from 0.1 to 1000 Gy are observed, which is advantageous for radiotherapy applications. Good reusability, low fading (5% over a month) and a negligible residual signal (0.0019%) are observed. Photoluminescence measurements show a wide emission band at 360-550 nm in undoped LiF, whereas an intense peak at 488 nm is observed in the doped LiF nanophosphor. The phosphor also exhibits intense optically stimulated luminescence. In summary, the nanocrystalline LiF: Mg, Cu, Na, Si phosphor prepared by the co-precipitation method showed a simple glow curve structure, linear dose response, reproducibility, negligible residual signal, good thermal stability and low fading.
The LiF: Mg, Cu, Na, Si phosphor in nanocrystalline form has tremendous potential in diagnostic radiology, radiotherapy and high-energy radiation applications.

Keywords: thermoluminescence, nanophosphor, optically stimulated luminescence, co-precipitation method

Procedia PDF Downloads 383
412 Assessment of the Performance of the Sonoreactors Operated at Different Ultrasound Frequencies, to Remove Pollutants from Aqueous Media

Authors: Gabriela Rivadeneyra-Romero, Claudia del C. Gutierrez Torres, Sergio A. Martinez-Delgadillo, Victor X. Mendoza-Escamilla, Alejandro Alonzo-Garcia

Abstract:

Ultrasonic degradation is currently used in sonochemical reactors to degrade pollutant compounds, such as emerging contaminants (e.g., pharmaceuticals, drugs and personal care products), from aqueous media, because such compounds can have ecological impacts on the environment. For this reason, it is important to develop appropriate water and wastewater treatments able to reduce pollution and increase reuse. Pollutants such as textile dyes, aromatic and phenolic compounds, chlorobenzene, bisphenol-A, carboxylic acids and other organic pollutants can be removed from wastewaters by sonochemical oxidation. The removal of pollutants depends on the ultrasonic frequency used; however, not many studies have addressed the behavior of the fluid in sonoreactors operated at different ultrasonic frequencies. It is therefore necessary to study the hydrodynamic behavior of the liquid generated by ultrasonic irradiation in order to design efficient sonoreactors that reduce treatment times and costs. In this work, the hydrodynamic behavior of the fluid in sonochemical reactors at different frequencies (250 kHz, 500 kHz and 1000 kHz) was studied, and the performance of the sonoreactors at those frequencies was simulated using computational fluid dynamics (CFD). Because there is a large sound speed gradient between the piezoelectric transducer and the fluid, k-ε models were used. The piezoelectric was defined as a vibrating surface in order to evaluate the effect of the different frequencies on the fluid in the sonochemical reactor. Structured hexahedral cells were used to mesh the computational liquid domain, and fine triangular cells were used to mesh the piezoelectric transducers. Unsteady-state conditions were used in the solver. The dissipation rate, flow field velocities, Reynolds stresses and turbulent quantities were estimated by CFD and 2D-PIV measurements.
The test results show that an increase in ultrasonic frequency does not necessarily improve pollutant degradation; moreover, the reactor geometry and power density are important factors that should be considered in sonochemical reactor design.

Keywords: CFD, reactor, ultrasound, wastewater

Procedia PDF Downloads 169
411 MIMO Radar-Based System for Structural Health Monitoring and Geophysical Applications

Authors: Davide D’Aria, Paolo Falcone, Luigi Maggi, Aldo Cero, Giovanni Amoroso

Abstract:

The paper presents a methodology for real-time structural health monitoring and geophysical applications. The key elements of the system are a high-performance MIMO RADAR sensor, an optical camera and a dedicated set of software algorithms encompassing interferometry, tomography and photogrammetry. The MIMO radar sensor proposed in this work provides an extremely high sensitivity to displacements, making the system able to react to tiny deformations (down to tens of microns) on time scales spanning from milliseconds to hours. The MIMO feature makes the system capable of providing a set of two-dimensional images of the observed scene, each mapped on the azimuth-range directions with notable resolution in both dimensions and with an outstanding repetition rate. The back-scattered energy, which is distributed in 3D space, is projected onto a 2D plane, where each pixel has as coordinates the line-of-sight distance and the cross-range azimuthal angle. At the same time, the high-performing processing unit allows the system to sense the observed scene with remarkable refresh periods (down to milliseconds), thus opening the way for combined static and dynamic structural health monitoring. Thanks to the smart TX/RX antenna array layout, the MIMO data can be processed through a tomographic approach to reconstruct the three-dimensional map of the observed scene. This 3D point cloud is then accurately mapped onto a 2D digital optical image through photogrammetric techniques, allowing easy and straightforward interpretation of the measurements. Once the three-dimensional image is reconstructed, a 'repeat-pass' interferometric approach is exploited to provide the user with high-frequency three-dimensional motion/vibration estimates for each point of the reconstructed image. At this stage, the methodology leverages consolidated atmospheric correction algorithms to provide reliable displacement and vibration measurements.
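The repeat-pass interferometric step relies on the standard relation between interferometric phase and line-of-sight displacement for a two-way radar path, d = λ·Δφ/(4π). The sketch below assumes that textbook relation; the carrier frequency used in the example is an invented placeholder, not the sensor's actual frequency:

```python
import math

def los_displacement(delta_phi, freq_hz):
    """Line-of-sight displacement from repeat-pass interferometric phase.
    Two-way path: d = wavelength * delta_phi / (4 * pi)."""
    wavelength = 3.0e8 / freq_hz          # carrier wavelength in meters
    return wavelength * delta_phi / (4.0 * math.pi)

# Hypothetical example: a 17 GHz carrier and a 0.1 rad phase change
d = los_displacement(0.1, 17e9)           # on the order of tens of microns
```

Because a full 2π phase cycle corresponds to only half a wavelength of motion, millimeter-wave carriers resolve micron-scale displacements, which is consistent with the sensitivity claimed above.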

Keywords: interferometry, MIMO RADAR, SAR, tomography

Procedia PDF Downloads 164
410 Linkage Disequilibrium and Haplotype Blocks Study from Two High-Density Panels and a Combined Panel in Nelore Beef Cattle

Authors: Priscila A. Bernardes, Marcos E. Buzanskas, Luciana C. A. Regitano, Ricardo V. Ventura, Danisio P. Munari

Abstract:

Genotype imputation has been used to reduce genomic selection costs. In order to increase haplotype detection accuracy in methods that consider linkage disequilibrium, another approach can be used: combining genotype data from different panels. Therefore, this study aimed to evaluate linkage disequilibrium and haplotype blocks in two high-density panels, before and after imputation to a combined panel, in Nelore beef cattle. A total of 814 animals were genotyped with the Illumina BovineHD BeadChip (IHD), of which 93 animals (23 bulls and 70 progenies) were also genotyped with the Affymetrix Axiom Genome-Wide BOS 1 Array Plate (AHD). After quality control, 809 IHD animals (509,107 SNPs) and 93 AHD animals (427,875 SNPs) remained for analysis. The combined genotype panel (CP) was constructed by merging both panels after quality control, resulting in 880,336 SNPs. Imputation analysis was conducted using the software FImpute v.2.2b. The reference (CP) and target (IHD) populations consisted of 23 bulls and 786 animals, respectively. Linkage disequilibrium and haplotype block studies were carried out for IHD, AHD, and the imputed CP. Two linkage disequilibrium measures were considered: the correlation coefficient between alleles at two loci (r²) and |D'|. Both measures were calculated using the software PLINK, and the haplotype blocks were estimated using the software Haploview. The r² measure presented a different decay compared to |D'|, while AHD and IHD had almost the same decay. For r², even with possible overestimation due to the sample size for AHD (93 animals), IHD presented higher values than AHD at shorter distances, but with increasing distance, both panels presented similar values. The r² measure is influenced by the minor allele frequencies of the pair of SNPs, which can cause the observed difference between the r² decay and the |D'| decay.
As a sum of the combinations between the Illumina and Affymetrix panels, the CP presented a decay equivalent to the mean of these combinations. The numbers of haplotype blocks detected for IHD, AHD, and CP were 84,529, 63,967, and 140,336, respectively. The IHD haplotype blocks had a mean length of 137.70 ± 219.05 kb, the AHD blocks 102.10 ± 155.47 kb, and the CP blocks 107.10 ± 169.14 kb. The majority of the haplotype blocks in these three panels were composed of fewer than 10 SNPs, with only 3,882 (IHD), 193 (AHD) and 8,462 (CP) haplotype blocks composed of 10 SNPs or more. There was an increase in the number of chromosomes covered with long haplotypes when CP was used, as well as an increase in haplotype coverage for the short chromosomes (23-29), which can contribute to studies that explore haplotype blocks. In general, using the CP could be an alternative to increase the density and number of haplotype blocks, increasing the probability of obtaining a marker close to a quantitative trait locus of interest.
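The two LD measures compared in the study can be illustrated from haplotype counts at a pair of biallelic loci: D is the deviation of the observed two-locus haplotype frequency from the product of allele frequencies, r² scales D by the allele-frequency variances (which is why it is sensitive to minor allele frequency), and |D'| scales |D| by its maximum attainable value. The counts below are invented for illustration:

```python
def ld_measures(n_ab, n_aB, n_Ab, n_AB):
    """r-squared and |D'| from counts of the four two-locus haplotypes."""
    n = n_ab + n_aB + n_Ab + n_AB
    p_a = (n_ab + n_aB) / n              # frequency of allele a at locus 1
    p_b = (n_ab + n_Ab) / n              # frequency of allele b at locus 2
    d = n_ab / n - p_a * p_b             # coefficient of disequilibrium D
    r2 = d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))
    d_max = (min(p_a * (1 - p_b), (1 - p_a) * p_b) if d > 0
             else min(p_a * p_b, (1 - p_a) * (1 - p_b)))
    d_prime = abs(d) / d_max if d_max > 0 else 0.0
    return r2, d_prime

# Invented counts: perfect association between the two loci
r2, d_prime = ld_measures(50, 0, 0, 50)
```

Because r² depends on both allele frequencies while |D'| is normalized by the frequency-dependent bound, the two measures can decay differently with distance, as observed here.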

Keywords: Bos taurus indicus, decay, genotype imputation, single nucleotide polymorphism

Procedia PDF Downloads 252
409 Drivers of Liking: Probiotic Petit Suisse Cheese

Authors: Helena Bolini, Erick Esmerino, Adriano Cruz, Juliana Paixao

Abstract:

The current concern for health has increased the demand for low-calorie ingredients and functional foods such as probiotics. Understanding the reasons behind food choice, besides being a challenging task, is an important step in the development and/or reformulation of existing food products. The use of appropriate multivariate statistical techniques, such as the External Preference Map (PrefMap) associated with regression by Partial Least Squares (PLS), can help in determining those factors. Thus, this study aimed to determine, through PLS regression analysis, the sensory attributes considered drivers of liking in strawberry-flavored probiotic petit suisse cheeses sweetened with different sweeteners. Five samples at equivalent sweetness, PROB1 (sucralose 0.0243%), PROB2 (stevia 0.1520%), PROB3 (aspartame 0.0877%), PROB4 (neotame 0.0025%) and PROB5 (sucrose 15.2%), determined by just-about-right and magnitude estimation methods, and three commercial samples, COM1, COM2 and COM3, were studied. The analysis was performed on data from quantitative descriptive analysis (QDA), carried out by 12 highly trained assessors on 20 descriptor terms, correlated with overall liking data from an acceptance test carried out by 125 consumers on all samples. The results were then submitted to PLS regression using the XLSTAT software from Byossistemes. The results show that three sensory descriptor terms can be considered drivers of liking of the sweetened probiotic petit suisse cheese samples (p<0.05). Milk flavor was a sensory characteristic with a positive impact on acceptance, while the descriptors bitter taste and sweet aftertaste had a negative impact on the acceptance of the probiotic petit suisse cheeses.
It can be concluded that PLS regression analysis is a practical and useful tool for determining the drivers of liking of probiotic petit suisse cheeses sweetened with artificial and natural sweeteners, allowing the food industry to understand and improve its formulations, maximizing the acceptability of its products.
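The logic of identifying drivers of liking can be sketched with a one-component PLS1 fit: the first PLS weight vector points in the direction of predictors most covarying with liking, so a positive weight flags a descriptor as a driver and a negative weight as a detractor. The descriptor data below are invented (this is not the study's QDA data, and a full PrefMap would use dedicated software):

```python
import numpy as np

def pls1_weights(X, y):
    """First-component PLS1 weights of centered predictors against centered y."""
    Xc = X - X.mean(axis=0)              # center descriptor scores
    yc = y - y.mean()                    # center liking scores
    w = Xc.T @ yc                        # covariance direction with liking
    return w / np.linalg.norm(w)         # unit-norm weight vector

# Invented example: liking rises with 'milk flavor', falls with 'bitter taste'
rng = np.random.default_rng(1)
milk = rng.normal(size=40)
bitter = rng.normal(size=40)
liking = 2.0 * milk - 1.5 * bitter + rng.normal(scale=0.1, size=40)

w = pls1_weights(np.column_stack([milk, bitter]), liking)
# w[0] > 0: milk flavor acts as a driver; w[1] < 0: bitter acts as a detractor
```

This mirrors, in miniature, how the study reads positive loadings (milk flavor) and negative loadings (bitter taste, sweet aftertaste) off the PLS model.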

Keywords: acceptance, consumer, quantitative descriptive analysis, sweetener

Procedia PDF Downloads 414
408 Factor Associated with Uncertainty Undergoing Hematopoietic Stem Cell Transplantation

Authors: Sandra Adarve, Jhon Osorio

Abstract:

Uncertainty has been studied in patients with different types of cancer, but not in patients with hematologic cancer undergoing transplantation. The purpose of this study was to identify factors associated with uncertainty in adult patients with malignant hemato-oncologic diseases scheduled to undergo hematopoietic stem cell transplantation, based on Merle Mishel's theory of uncertainty in illness. This was a cross-sectional study with an analytical purpose. The study sample included 50 patients with leukemia, myeloma, or lymphoma, selected by non-probability convenience and purposive sampling. Sociodemographic and clinical variables were measured, and Mishel's Uncertainty in Illness Scale was used to measure uncertainty. Bivariate and multivariate analyses were performed to explore the relationships and associations between the different variables and the uncertainty level. For this analysis, the distribution of the uncertainty scale values was evaluated through the Shapiro-Wilk normality test to select the statistical tests to be used, and the multivariate analysis was conducted through stepwise logistic regression. Patients were 18-74 years old, with a mean age of 44.8 years. The disease course had a median duration of 9.5 months, and 50% of the patients underwent transplantation within 20 days. On the uncertainty scale, a mean score of 95.46 was identified; by dimension, the mean score for the framework of stimuli was 25.6, for cognitive capacity 47.4, and for structure providers 22.8. Age correlated with the total uncertainty score (p=0.012). Additionally, statistically significant differences in uncertainty score were evidenced between religious creeds (p=0.023), education levels (p=0.012), family history of cancer (p=0.001), presence of comorbidities (p=0.023) and previous radiotherapy treatment (p=0.022).
After performing logistic regression, previous radiotherapy treatment (OR=0.04, 95% CI 0.004-0.48) and family history of cancer (OR=30.7, 95% CI 2.7-349) were found to be factors associated with a high level of uncertainty. Uncertainty is present at high levels in patients who are about to undergo bone marrow transplantation, and it is the responsibility of the nurse to assess uncertainty levels and the presence of factors that may contribute to them. Once assessed, uncertainty should be addressed through the identified associated factors, especially those related to cognitive capacity. This implies designing and implementing intervention strategies to improve knowledge of the disease and of the therapeutic procedures the patients will undergo. All interventions should favor the adaptation of these patients to their current experience and help them see uncertainty as an opportunity for growth and transcendence.
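The analysis pipeline described above (a Shapiro-Wilk normality check that then determines the statistical test) can be sketched as follows; the scores are synthetic values with a mean near the reported 95.46, not the study's data, and the variable names are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical uncertainty-scale scores for 50 patients (mean near the reported 95.46)
scores = rng.normal(loc=95.5, scale=12.0, size=50)
age = rng.uniform(18, 74, size=50)

# Step 1: Shapiro-Wilk test of normality on the uncertainty scores
w_stat, p_norm = stats.shapiro(scores)

# Step 2: choose the correlation test based on the normality result
if p_norm > 0.05:
    r, p = stats.pearsonr(age, scores)   # parametric test for normal data
else:
    r, p = stats.spearmanr(age, scores)  # non-parametric alternative
```

The same branching logic extends naturally to choosing between t-tests and rank-based tests for the group comparisons reported in the abstract.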

Keywords: hematopoietic stem cell transplantation, hematologic diseases, nursing, uncertainty

Procedia PDF Downloads 119
407 Fear of Falling and Physical Activities: A Comparison Between Rural and Urban Elderly People

Authors: Farhad Azadi, Mohammad Mahdi Mohammadi, Mohsen Vahedi, Zahra Mahdiin

Abstract:

Context: The aging population is growing all over the world, and maintaining physical activity is essential for healthy aging. However, fear of falling is a major obstacle to physical activity among the elderly. The aim of this study is to investigate and compare the relationship between fear of falling and physical activity in Iranian urban and rural elderly. Research Aim: The main aim of this cross-sectional analytical study is to investigate and compare the relationship between fear of falling and physical activity in Iranian rural and urban elderly. Methodology: The study used simple non-probability sampling to select 350 participants aged 60 years and older from rural and urban areas of Konarak, in Sistan and Baluchistan province, Iran. The Persian versions of the Falls Efficacy Scale - International (FES-I), Rapid Assessment of Physical Activity (RAPA), Activities of Daily Living (ADL), and Instrumental Activities of Daily Living (IADL) questionnaires were used to assess fear of falling and physical activity. The data were analyzed using Pearson correlation tests. Findings: The study found a statistically significant negative correlation between fear of falling and physical activity, as measured by ADL, IADL, and RAPA1 (aerobic), in all elderly and in both rural and urban elderly (p<0.001). Fear of falling was higher in rural areas, while physical activity levels measured by ADL and RAPA1 were higher in urban areas. No significant difference was found between the two groups in IADL and RAPA2 (strength and flexibility) scores. Theoretical Importance: This study highlights the importance of considering fear of falling as a significant obstacle to proper physical activity, especially among the elderly living in rural areas. Furthermore, the study provides insight into the differences between rural and urban elderly people in terms of fear of falling and physical activity. Data Collection and Analysis Procedures: Data were collected through questionnaires and analyzed using Pearson correlation tests. 
Questions Addressed: The study attempted to answer the following questions: Is there a relationship between fear of falling and physical activity in Iranian urban and rural elderly people? Is there a difference in fear of falling and physical activity between rural and urban elderly? Conclusion: Fear of falling is a major obstacle to physical activity among the elderly, especially in rural areas. The study found a significant negative correlation between fear of falling and physical activity in all elderly and rural and urban elderly. In addition, urban and rural elderly have differences in aerobic activity levels, but they do not differ in terms of flexibility and strength. Therefore, proper interventions are required to ensure that the elderly can maintain physical activity, especially in rural and deprived areas.
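As an illustration of the correlation analysis used above, a Pearson correlation between a fear-of-falling score and an activity score can be computed as below; the data are synthetic (constructed to be negatively related), not the study's measurements.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 350                               # sample size matching the study
fes_i = rng.uniform(16, 64, size=n)   # hypothetical FES-I fear-of-falling scores (16-64)
# Hypothetical ADL activity score, built to fall as fear of falling rises
adl = 100 - 0.8 * fes_i + rng.normal(0, 5, size=n)

r, p = pearsonr(fes_i, adl)           # expect a strong negative correlation
```

With a relationship this strong, the p-value comes out far below the 0.001 threshold reported in the abstract.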

Keywords: aged, fear of falling, physical activity, urban population, rural population

Procedia PDF Downloads 42
406 Two-Level Separation of High Air Conditioner Consumers and Demand Response Potential Estimation Based on Set Point Change

Authors: Mehdi Naserian, Mohammad Jooshaki, Mahmud Fotuhi-Firuzabad, Mohammad Hossein Mohammadi Sanjani, Ashknaz Oraee

Abstract:

In recent years, the development of communication infrastructure and smart meters has facilitated the utilization of demand-side resources, which can enhance the stability and economic efficiency of power systems. Direct load control programs can play an important role in the utilization of demand-side resources in the residential sector. However, the investments required for installing control equipment can be a limiting factor in the development of such demand response programs. Thus, the selection of consumers with higher potential is crucial to the success of a direct load control program. Heating, ventilation, and air conditioning (HVAC) systems, which feature relatively high flexibility due to the heat capacity of buildings, make up a major part of household consumption. Considering that the consumption of HVAC systems depends highly on the ambient temperature, and bearing in mind the high investments required for control systems enabling direct load control demand response programs, in this paper a solution is presented to uncover consumers with high air conditioner demand among a large number of consumers and to measure the demand response potential of such consumers. This can pave the way for estimating the investments needed for the implementation of direct load control programs for residential HVAC systems and for estimating the demand response potentials in a distribution system. In doing so, we first cluster consumers into several groups based on the correlation coefficients between hourly consumption data and hourly temperature data, using the K-means algorithm. Then, by applying a recently proposed algorithm to the hourly consumption and temperature data, consumers with high air conditioner consumption are identified. Finally, the demand response potential of such consumers is estimated based on the equivalent desired temperature set-point changes.
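The first stage above (clustering consumers by the correlation between hourly load and temperature, then flagging the high-correlation cluster) can be sketched in a minimal form. The data below are synthetic and the one-dimensional k-means is a bare-bones implementation, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
hours = np.arange(24 * 30)                        # one month of hourly samples
temp = 25 + 10 * np.sin(2 * np.pi * hours / 24)   # synthetic hourly temperature

# Two synthetic consumer groups: AC-heavy (load tracks temperature) and flat
ac_users = 1.0 + 0.05 * temp[None, :] + 0.1 * rng.standard_normal((20, temp.size))
flat_users = 1.5 + 0.1 * rng.standard_normal((20, temp.size))
loads = np.vstack([ac_users, flat_users])

# Feature: correlation coefficient between each consumer's load and temperature
corr = np.array([np.corrcoef(load, temp)[0, 1] for load in loads])

# Minimal 1-D k-means (k=2) on the correlation coefficients
centers = np.array([corr.min(), corr.max()])
for _ in range(20):
    labels = np.argmin(np.abs(corr[:, None] - centers[None, :]), axis=1)
    centers = np.array([corr[labels == k].mean() for k in (0, 1)])

high_ac = np.flatnonzero(labels == np.argmax(centers))  # candidate high-AC consumers
```

The cluster with the higher center correctly recovers the twenty temperature-tracking consumers; a real application would use more features and more clusters.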

Keywords: communication infrastructure, smart meters, power systems, HVAC system, residential HVAC systems

Procedia PDF Downloads 35
405 Application of Groundwater Level Data Mining in Aquifer Identification

Authors: Liang Cheng Chang, Wei Ju Huang, You Cheng Chen

Abstract:

Investigation and research are key to the conjunctive use of surface water and groundwater resources. The hydrogeological structure is an important basis for groundwater analysis and simulation. Traditionally, the hydrogeological structure is determined manually based on geological drill logs, the structure of wells, groundwater levels, and so on. In Taiwan, a groundwater observation network has been built, and a large amount of groundwater-level observation data is available. The groundwater level is the state variable of the groundwater system, reflecting the system response that combines the hydrogeological structure with groundwater injection and extraction. This study applies analytical tools to the observation database to develop a methodology for the identification of confined and unconfined aquifers. These tools include frequency analysis, cross-correlation analysis between rainfall and groundwater level, groundwater regression curve analysis, and a decision tree. The developed methodology is then applied to groundwater layer identification in two groundwater systems: the Zhuoshui River alluvial fan and the Pingtung Plain. The frequency analysis applies the Fourier transform to the time-series groundwater-level observations to analyze the daily frequency amplitude of the groundwater level caused by artificial groundwater extraction. The cross-correlation analysis between rainfall and groundwater level is used to obtain the groundwater replenishment time between infiltration and the peak groundwater level during wet seasons. The groundwater regression curve, the average rate of groundwater regression, is used to analyze the internal flux in the groundwater system and the flux caused by artificial behaviors. The decision tree uses the information obtained from the abovementioned analytical tools to determine the best estimate of the hydrogeological structure. 
The developed method achieves a training accuracy of 92.31% and a verification accuracy of 93.75% on the Zhuoshui River alluvial fan, and a training accuracy of 95.55% and a verification accuracy of 100% on the Pingtung Plain. This high accuracy indicates that the developed methodology is an effective tool for identifying hydrogeological structures.
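The frequency-analysis step above (extracting the daily-frequency amplitude caused by pumping) can be sketched with a discrete Fourier transform; the groundwater-level series below is synthetic (a linear trend plus a 0.3 m daily pumping cycle), not observation-network data.

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 1.0                              # sampling interval: 1 hour
t = np.arange(24 * 60)                # 60 days of hourly groundwater levels
# Synthetic level: slow trend + daily pumping cycle (amplitude 0.3 m) + noise
level = (50 - 0.001 * t
         + 0.3 * np.sin(2 * np.pi * t / 24)
         + 0.02 * rng.standard_normal(t.size))

# Remove the linear trend, then take the one-sided amplitude spectrum
detrended = level - np.polyval(np.polyfit(t, level, 1), t)
amp = np.abs(np.fft.rfft(detrended)) * 2 / t.size
freq = np.fft.rfftfreq(t.size, d=dt)  # cycles per hour

daily_amp = amp[np.argmin(np.abs(freq - 1 / 24))]  # amplitude at 1 cycle/day
```

A pronounced peak at one cycle per day, as recovered here, is the signature of artificial extraction that the methodology uses to flag pumped (typically confined) layers.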

Keywords: aquifer identification, decision tree, groundwater, Fourier transform

Procedia PDF Downloads 130
404 Fatigue Analysis and Life Estimation of the Helicopter Horizontal Tail under Cyclic Loading by Using Finite Element Method

Authors: Defne Uz

Abstract:

The horizontal tail of a helicopter is exposed to repeated oscillatory loading generated by aerodynamic and inertial loads and by bending moments that depend on the operating conditions and maneuvers of the helicopter. In order to ensure that maximum stress levels do not exceed the fatigue limit of the material, and to prevent damage, a numerical analysis approach can be utilized through the finite element method. Therefore, in this paper, a fatigue analysis of the horizontal tail model is studied numerically to predict the high-cycle and low-cycle fatigue life associated with the defined loading. The analysis estimates the stress field at stress concentration regions, such as around fastener holes, where the maximum principal stresses are considered for each load case. Critical element identification of the main load-carrying structural components of the model with rivet holes is performed as a post-process, since critical regions with high stress values are used as input for the fatigue life calculation. Once the maximum stress at the critical element and its mean and alternating components are obtained, they are compared with the endurance limit by applying the Soderberg approach. The constant-life straight line provides the limit for combinations of mean and alternating stresses. A life calculation based on the S-N (stress vs. number of cycles) curve is also applied with fully reversed loading to determine the number of cycles corresponding to the oscillatory stress with zero mean. The results determine the appropriateness of the design of the model in terms of fatigue strength and the number of cycles that the model can withstand at the calculated stress. The effect of correctly determining the critical rivet holes is investigated by analyzing the stresses at different structural parts of the model. In the case of a low life prediction, alternative design solutions are developed, and flight hours can be estimated for the fatigue-safe operation of the model.
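The Soderberg check described above amounts to a simple calculation once the mean and alternating stresses at the critical element are known; the material properties and stresses below are hypothetical values, not the horizontal tail's actual data.

```python
# Soderberg criterion: sigma_a/S_e + sigma_m/S_y = 1/n
S_e = 240.0       # endurance limit, MPa (hypothetical material data)
S_y = 500.0       # yield strength, MPa (hypothetical)
sigma_m = 100.0   # mean stress at the critical rivet hole, MPa
sigma_a = 80.0    # alternating stress at the critical rivet hole, MPa

n = 1.0 / (sigma_a / S_e + sigma_m / S_y)   # fatigue safety factor
# Equivalent fully reversed stress (zero mean) for the S-N life lookup
sigma_ar = sigma_a / (1.0 - sigma_m / S_y)
```

A safety factor n above 1 indicates the combined mean/alternating state lies inside the constant-life line; sigma_ar is the zero-mean stress used to enter the fully reversed S-N curve.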

Keywords: fatigue analysis, finite element method, helicopter horizontal tail, life prediction, stress concentration

Procedia PDF Downloads 113
403 In vivo Alterations in Ruminal Parameters by Megasphaera Elsdenii Inoculation on Subacute Ruminal Acidosis (SARA)

Authors: M. S. Alatas, H. D. Umucalilar

Abstract:

SARA is a common and serious metabolic disorder in early-lactation dairy cattle and in finishing beef cattle, caused by diets with a high inclusion of cereal grain. This experiment was performed to determine the efficacy of Megasphaera elsdenii, a major lactate-utilizing bacterium, in the prevention/treatment of SARA in vivo. Eight ruminally cannulated rams were used, and rapid adaptation was applied with grain mixtures based on wheat (80% wheat, 20% barley) and barley (80% barley, 20% wheat). During the systematic adaptation, the probability of SARA formation was followed by measuring rumen pH at two-hour intervals before and after feeding. Evaluation of the data showed that ruminal pH ranged from 5.2 to 5.6 when feeding 60% of the barley- or wheat-based grain mixture, which produced a definite form of subacute acidosis. In the four-day SARA period, M. elsdenii (10¹⁰ cfu ml⁻¹) was inoculated during the first two days. During the SARA period, a decrease in feed intake was observed with M. elsdenii inoculation. Inoculation of M. elsdenii changed rumen pH (P < 0.0001): pH was approximately 5.55 in inoculated animals and 5.63 in the others. Total VFA tended to change with bacterium inoculation depending on the grain source (P < 0.07): total VFA increased with inoculation in the barley-based diet but was more stable in the wheat-based diet. Bacterium inoculation increased the proportion of propionic acid (from 18.33% to 21.38%) but decreased butyric acid and the acetic/propionic acid ratio. During the rapid adaptation, the concentration of lactic acid in the rumen fluid increased with grain level (P < 0.0001), whereas bacterium inoculation had no effect on lactic acid concentration. M. elsdenii inoculation did not affect ruminal ammonia concentration, although the concentration was numerically higher in the non-inoculated group. M. elsdenii inoculation did not change the protozoa count in the barley-based diet, whereas it decreased it in the wheat-based diet. In the SARA period, blood glucose, lactate, and hematocrit increased greatly after inoculation (P < 0.0001). Overall, M. elsdenii inoculation did not have a positive impact on rumen parameters. Therefore, to reveal the full impact of inoculation with different strains, feedstuffs, and animal groups, further research is required.

Keywords: in vivo, subacute ruminal acidosis, Megasphaera elsdenii, rumen fermentation

Procedia PDF Downloads 611
402 Historic Fire Occurrence in Hemi-Boreal Forests: Exploring Natural and Cultural Scots Pine Multi-Cohort Fire Regimes in Lithuania

Authors: Charles Ruffner, Michael Manton, Gintautas Kibirkstis, Gediminas Brazaitas, Vitas Marozas, Ekaterine Makrickiene, Rutile Pukiene, Per Angelstam

Abstract:

In dynamic boreal forests, fire is an important natural disturbance that drives the regeneration and mortality of living and dead trees, and thus successional trajectories. However, current forest management practices focusing on wood production only have effectively eliminated fire as a stand-level disturbance. While this is generally well studied across much of Europe, in Lithuania little is known about the historic fire regime and the role fire plays as a management tool for the sustainable management of future landscapes. Focusing on Scots pine forests, we explore: i) the relevance of fire disturbance regimes on the forestlands of Lithuania; ii) fire occurrence in the Dzukija landscape for dry upland and peatland forest sites; and iii) correlations of tree-ring data with climate variables to ascertain climatic influences on growth and fire occurrence. We sampled and cross-dated 132 Scots pine samples with fire scars from 4 dry pine forest stands and 4 peatland forest stands. The fire history of each sample was analyzed using standard dendrochronological methods and presented in FHAES format. Analyses of soil moisture and nutrient conditions revealed a strong probability (59%) of high fire frequency in Scots pine forests, which cover 34.5% of Lithuania’s current forestland. The fire history analysis revealed 455 fire scars and 213 fire events during the period 1742-2019. Within the Dzukija landscape, the mean fire interval was 4.3 years for the dry Scots pine forests and 8.7 years for the peatland Scots pine forests. However, our comparison of fire frequency before and after 1950 shows a marked decrease in the mean fire interval. Our data suggest that the hemi-boreal forest landscapes of Lithuania provide strong evidence that fire, both human- and lightning-ignited, has been, and should remain, a natural phenomenon, and that the examination of biological archives can be used to guide sustainable forest management into the future. 
Currently, fire use is prohibited by law as a tool for forest management in Lithuania. We recommend introducing trials that use low-intensity prescribed burning of Scots pine stands as a regeneration tool to mimic natural forest disturbance regimes.
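A mean fire interval of the kind reported above is computed from the composite list of dated fire events for a stand; the fire years below are illustrative, not the Dzukija chronology.

```python
import numpy as np

# Hypothetical composite fire-scar dates for one dry Scots pine stand
fire_years = np.array([1742, 1748, 1751, 1757, 1760, 1766, 1771, 1775])

intervals = np.diff(fire_years)   # years between successive fire events
mfi = intervals.mean()            # mean fire interval, in years
```

Splitting `fire_years` at 1950 and comparing the two means reproduces the kind of before/after comparison the study uses to show fire suppression.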

Keywords: biodiversity conservation, cultural burning, dendrochronology, forest dynamics, forest management, succession

Procedia PDF Downloads 176
401 Estimation of Dynamic Characteristics of a Middle Rise Steel Reinforced Concrete Building Using Long-Term

Authors: Fumiya Sugino, Naohiro Nakamura, Yuji Miyazu

Abstract:

In the earthquake-resistant design of buildings, the evaluation of vibration characteristics is important. In recent years, due to the increase in super high-rise buildings, the evaluation of the response is important not only for the first mode but also for higher modes. Knowledge of the vibration characteristics of buildings is mostly limited to the first mode, and knowledge of higher modes is still insufficient. In this paper, using earthquake observation records of an SRC building and applying a frequency filter to an ARX model, the characteristics of the first and second modes were studied. First, we studied the change of the eigenfrequency and the damping ratio during the 3.11 earthquake. The eigenfrequency gradually decreases from the time of earthquake occurrence and is almost stable after about 150 seconds have passed. At this time, the decreasing rates of the 1st and 2nd eigenfrequencies are both about 0.7. Although the damping ratio has a larger error than the eigenfrequency, both the 1st and 2nd damping ratios are 3 to 5%. Also, there is a strong correlation between the 1st and 2nd eigenfrequencies, with a regression line of y=3.17x; for the damping ratio, the regression line is y=0.90x, so the 1st and 2nd damping ratios are approximately the same. Next, we study the eigenfrequency and damping ratio from 1998 to 2014, covering the period after the 3.11 earthquake. All the considered earthquakes are connected in order of occurrence. The eigenfrequency slowly declined from immediately after completion and tended to stabilize after several years, although it declined greatly after the 3.11 earthquake. The decreasing rates of both the 1st and 2nd eigenfrequencies until about 7 years later are about 0.8. For the damping ratio, both the 1st and 2nd are about 1 to 6%. After the 3.11 earthquake, the 1st increases by about 1% and the 2nd by less than 1%. 
For the eigenfrequency, there is a strong correlation between the 1st and 2nd, with a regression line of y=3.17x. For the damping ratio, the regression line is y=1.01x; therefore, the 1st and 2nd damping ratios are approximately the same. Based on the above results, the changes in eigenfrequency and damping ratio are summarized as follows. In the long-term study of the eigenfrequency, both the 1st and 2nd gradually declined from immediately after completion and tended to stabilize after a few years; they declined further after the 3.11 earthquake. In addition, there is a strong correlation between the 1st and 2nd, and the declining time and the decreasing rate are of the same degree. In the long-term study of the damping ratio, both the 1st and 2nd are about 1 to 6%. After the 3.11 earthquake, the 1st increases by about 1% and the 2nd by less than 1%; the 1st and 2nd remain approximately the same.
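The model-based identification of eigenfrequency and damping ratio above can be illustrated on a single mode: fit an AR(2) model to a free-decay response and read the frequency and damping from its complex pole. The signal below is simulated, not the building's records, and the model is a simplified output-only AR rather than the paper's full ARX with frequency filtering.

```python
import numpy as np

# Simulate the free-decay response of a 1-DOF mode (f = 1.2 Hz, zeta = 0.03)
dt, f_true, zeta_true = 0.01, 1.2, 0.03
wn = 2 * np.pi * f_true
wd = wn * np.sqrt(1 - zeta_true**2)
t = np.arange(0, 60, dt)
x = np.exp(-zeta_true * wn * t) * np.cos(wd * t)

# Fit an AR(2) model x[n] = a1*x[n-1] + a2*x[n-2] by least squares
A = np.column_stack([x[1:-1], x[:-2]])
a1, a2 = np.linalg.lstsq(A, x[2:], rcond=None)[0]

# The complex pole of the AR(2) model gives frequency and damping
pole = np.roots([1, -a1, -a2])[0]
s = np.log(pole) / dt                 # map the discrete pole to continuous time
f_est = abs(s) / (2 * np.pi)          # identified eigenfrequency, Hz
zeta_est = -s.real / abs(s)           # identified damping ratio
```

Because a damped cosine satisfies an AR(2) recursion exactly, the fit recovers the true frequency and damping to machine precision; with real records, noise and multiple modes require higher model orders and the filtering the paper applies.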

Keywords: eigenfrequency, damping ratio, ARX model, earthquake observation records

Procedia PDF Downloads 195
400 Identification of Architectural Design Error Risk Factors in Construction Projects Using IDEF0 Technique

Authors: Sahar Tabarroki, Ahad Nazari

Abstract:

The design process is one of the key processes in the construction industry. Although architects have the responsibility to produce complete, accurate, and coordinated documents, architectural design is accompanied by many errors. A design error occurs when the constraints and requirements of the design are not satisfied. Errors are potentially costly and time-consuming to correct if not caught early during the design phase, and they become expensive once they reach the construction documents or the construction phase. The aim of this research is to identify the risk factors of architectural design errors. First, a literature review on the design process was conducted, and then a questionnaire was designed to identify the risks and risk factors. The questions in the questionnaire were based on the “similar service description of study and supervision of architectural works” published by the “Vice Presidency of Strategic Planning & Supervision of I.R. Iran” as the basis of architects’ tasks. Second, the top 10 risks of architectural activities were identified. To determine the positions of possible causes of risks with respect to architectural activities, these activities were located in a design process modeled with the IDEF0 technique. The research was carried out by choosing a case study, checking the design drawings, interviewing its architect and client, and providing a checklist in order to identify concrete examples of architectural design errors. The results revealed that activities such as “defining the current and future requirements of the project”, “studies and space planning,” and “time and cost estimation of the suggested solution” have a higher error risk than others. Moreover, the most important causes include “unclear goals of the client”, “time pressure from the client”, and “lack of knowledge of architects about the requirements of end-users”. 
In error detection for the case study, the lack of standards and design criteria, and the lack of coordination among them, was a barrier; nevertheless, “lack of coordination between the architectural design and the electrical and mechanical facilities”, “violation of standard dimensions and sizes in space design”, and “design omissions” were identified as the most important design errors.

Keywords: architectural design, design error, risk management, risk factor

Procedia PDF Downloads 103
399 Experimental Study of Sand-Silt Mixtures with Torsional and Flexural Resonant Column Tests

Authors: Meghdad Payan, Kostas Senetakis, Arman Khoshghalb, Nasser Khalili

Abstract:

Dynamic properties of soils, especially in the range of very small strains, are of particular interest in geotechnical engineering practice for characterizing the behavior of geo-structures subjected to a variety of stress states. This study reports on the small-strain dynamic properties of sand-silt mixtures, with particular emphasis on the effect of non-plastic fines content on the small-strain shear modulus (Gmax), Young’s modulus (Emax), material damping (Ds,min), and Poisson’s ratio (v). Several clean sands with a wide range of grain size characteristics and particle shapes are mixed with variable percentages of a silica non-plastic silt as fines content. Prepared specimens of sand-silt mixtures at different initial void ratios are subjected to sequential torsional and flexural resonant column tests, with elastic dynamic properties measured along an isotropic stress path up to 800 kPa. It is shown that while at low percentages of fines content there is a significant difference between the dynamic properties of the various samples, owing to the different characteristics of the sand portion of the mixtures, this variance diminishes as the fines content increases and the soil behavior becomes mainly silt-dominant, rendering no significant influence of the sand properties on the elastic dynamic parameters. Indeed, beyond a specific portion of fines content, around 20% to 30%, typically denoted the threshold fines content, silt controls the behavior of the mixture. Using the experimental results, new expressions for the prediction of the small-strain dynamic properties of sand-silt mixtures are developed, accounting for the percentage of silt and the characteristics of the sand portion. These expressions are general in nature and are capable of evaluating the elastic dynamic properties of sand-silt mixtures with any type of parent sand over the whole range of silt percentages. 
The inefficiency of the skeleton void ratio concept in estimating the small-strain stiffness of sand-silt mixtures is also illustrated.
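In a fixed-free torsional resonant column test of the kind used above, Gmax is back-calculated from the measured resonant frequency via the frequency equation I/I0 = β tan β; the sketch below uses hypothetical readings, not the study's data.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical resonant-column readings
I_ratio = 0.5    # specimen-to-drive-system polar inertia ratio, I/I0
f_res = 80.0     # measured torsional resonant frequency, Hz
L = 0.10         # specimen height, m
rho = 1800.0     # bulk density of the specimen, kg/m^3

# Solve the frequency equation I/I0 = beta * tan(beta) for beta in (0, pi/2)
beta = brentq(lambda b: b * np.tan(b) - I_ratio, 1e-6, np.pi / 2 - 1e-6)

Vs = 2 * np.pi * f_res * L / beta   # shear wave velocity, m/s
G_max = rho * Vs**2                 # small-strain shear modulus, Pa
```

The flexural configuration follows the same pattern with its own frequency equation, yielding Emax and hence Poisson's ratio from the Gmax/Emax pair.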

Keywords: damping ratio, Poisson’s ratio, resonant column, sand-silt mixture, shear modulus, Young’s modulus

Procedia PDF Downloads 228
398 Prediction of Formation Pressure Using Artificial Intelligence Techniques

Authors: Abdulmalek Ahmed

Abstract:

Formation pressure is the main factor that affects the economics and efficiency of drilling operations. Knowing the pore pressure and the parameters that affect it will help to reduce the cost of the drilling process. Many empirical models reported in the literature were used to calculate the formation pressure based on different parameters. Some of these models used only drilling parameters to estimate pore pressure; other models predicted the formation pressure based on log data. All of these models required a trend assumption, normal or abnormal, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict the formation pressure, and then using only one or at most two AI methods. The objective of this research is to predict the pore pressure based on both drilling parameters and log data, namely: weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity, and delta sonic time. Real field data are used to predict the formation pressure using five different AI methods: artificial neural networks (ANN), radial basis function (RBF), fuzzy logic (FL), support vector machine (SVM), and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated the formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity, reflected in its estimation of pore pressure without the need for a trend assumption, in contrast to other models, which require distinguishing between normal and abnormal pressure trends. Moreover, comparing the AI tools with each other, the results indicate that SVM has the advantage in pore pressure prediction, with fast processing speed and high performance (a correlation coefficient of 0.997 and an average absolute percentage error of 0.14%). 
Finally, a new empirical correlation for formation pressure was developed using the ANN method; it can estimate pore pressure with high precision (a correlation coefficient of 0.998 and an average absolute percentage error of 0.17%).
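The two accuracy measures quoted above, the correlation coefficient and the average absolute percentage error (AAPE), are straightforward to compute; the measured and predicted pressures below are illustrative numbers, not the field data.

```python
import numpy as np

def correlation_coefficient(actual, predicted):
    """Pearson correlation between measured and predicted values."""
    return np.corrcoef(actual, predicted)[0, 1]

def aape(actual, predicted):
    """Average absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Hypothetical measured vs. predicted pore pressures (psi)
measured = np.array([4500.0, 4700.0, 4900.0, 5200.0, 5600.0])
predicted = np.array([4510.0, 4690.0, 4905.0, 5190.0, 5610.0])

cc = correlation_coefficient(measured, predicted)
err = aape(measured, predicted)
```

These are the metrics by which the abstract ranks the five AI methods against the empirical models.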

Keywords: artificial intelligence (AI), formation pressure, artificial neural networks (ANN), fuzzy logic (FL), support vector machine (SVM), functional networks (FN), radial basis function (RBF)

Procedia PDF Downloads 126
397 Genetic Structure Analysis through Pedigree Information in a Closed Herd of the New Zealand White Rabbits

Authors: M. Sakthivel, A. Devaki, D. Balasubramanyam, P. Kumarasamy, A. Raja, R. Anilkumar, H. Gopi

Abstract:

The New Zealand White breed of rabbit is one of the most commonly used, well-adapted exotic breeds in India. Earlier studies were limited to analyzing the environmental factors affecting growth and reproductive performance. In the present study, a closed herd of New Zealand White rabbits was evaluated for its genetic structure. Pedigree information (n=2508) covering 18 years (1995-2012) was utilized for the study. Pedigree analysis and the estimation of population genetic parameters based on gene-origin probabilities were performed using the software program ENDOG (version 4.8). The analysis revealed that the mean values of the generation interval, the coefficient of inbreeding, and the coefficient of equivalent inbreeding were 1.489 years, 13.233 percent, and 17.585 percent, respectively. The proportion of the population inbred was 100 percent. The estimated mean values of average relatedness and the individual increase in inbreeding were 22.727 and 3.004 percent, respectively. The percent increase in inbreeding over generations was 1.94, 3.06, and 3.98, estimated through maximum generations, equivalent generations, and complete generations, respectively. The number of ancestors explaining 50% of the genes (fₐ₅₀) in the gene pool of the reference population was 4, which might have led to reduced genetic variability and an increased amount of inbreeding. The extent of genetic bottleneck, assessed by calculating the effective number of founders (fₑ) and the effective number of ancestors (fₐ) and expressed as the fₑ/fₐ ratio, was 1.1, which is indicative of the absence of stringent bottlenecks. Up to the 5th generation, 71.29 percent of the pedigree was complete, reflecting well-maintained pedigree records. The maximum number of known generations was 15, with an average of 7.9, and the average number of equivalent generations traced was 5.6, indicating a fairly good depth of pedigree. 
The realized effective population size was 14.93, which is critically low; with the increasing trend of inbreeding, the situation is expected to worsen in the future. The proportion of animals with a genetic conservation index (GCI) greater than 9 was 39.10 percent; animals with higher GCI values can be used to maintain a balanced contribution from the founders. The study showed that the herd was completely inbred, with a very high inbreeding coefficient, and that the effective population size was critical. Recommendations were made to reduce the probability of deleterious effects of inbreeding and to improve the genetic variability in the herd. The present study can guide similar studies aimed at meeting the demand for animal protein in developing countries.
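The effective population size reported above follows from the rate of inbreeding; as a rough sketch (ENDOG's realized Ne averages individual increases over the reference population, so this simple formula gives a figure close to, but not identical with, the reported 14.93):

```python
# Effective population size from the rate of inbreeding: Ne = 1 / (2 * delta_F)
delta_F = 0.03004   # individual increase in inbreeding per generation (from the abstract)

Ne = 1.0 / (2.0 * delta_F)   # about 16.6 with this textbook formula
```

Small Ne values like this signal rapid loss of genetic variability, which is why the abstract flags the herd as critical.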

Keywords: effective population size, genetic structure, pedigree analysis, rabbit genetics

Procedia PDF Downloads 267
396 Investigating the Impacts on Cyclist Casualty Severity at Roundabouts: A UK Case Study

Authors: Nurten Akgun, Dilum Dissanayake, Neil Thorpe, Margaret C. Bell

Abstract:

Cycling has gained great attention owing to its comparable speeds, low cost, health benefits, and reduced impact on the environment. The main challenge associated with cycling is the provision of safety for the people choosing to cycle as their main means of transport. From the road safety point of view, cyclists are considered vulnerable road users because they are at higher risk of serious casualty in the urban network, and more specifically at roundabouts. This research addresses the development of an enhanced mathematical model by including a broad spectrum of casualty-related variables. These variables were geometric design measures (approach number of lanes and entry path radius), speed limit, meteorological condition variables (light, weather, road surface), and socio-demographic characteristics (age and gender), as well as contributory factors. Contributory factors included driver-behaviour variables such as failed to look properly, sudden braking, a vehicle passing too close to a cyclist, junction overshot, failed to judge other person’s path, restart moving off at the junction, poor turn or manoeuvre, and disobeyed give-way. Tyne and Wear in the UK was selected as the case study area. The cyclist casualty data were obtained from the UK STATS19 national dataset. The outcome categories for the regression model were slight and serious cyclist casualties; therefore, binary logistic regression was applied. The analysis showed that the approach number of lanes was statistically significant at the 95% level of confidence: a higher number of approach lanes increased the probability of a severe cyclist casualty. In addition, sudden braking statistically significantly increased cyclist casualty severity at the 95% level of confidence. It is concluded that cyclist casualty severity was highly related to the number of approach lanes and sudden braking. 
Further research should carry out an in-depth analysis exploring the connection between sudden braking and approach number of lanes in order to investigate driver behaviour at approach locations. The output of this research will inform investment in measures to improve the safety of cyclists at roundabouts.

Keywords: binary logistic regression, casualty severity, cyclist safety, roundabout

Procedia PDF Downloads 159
395 International Entrepreneurial Orientation and Institutionalism: The Effect on International Performance for Latin American SMEs

Authors: William Castillo, Hugo Viza, Arturo Vargas

Abstract:

The Pacific Alliance is a trade bloc composed of four emerging economies: Chile, Colombia, Peru, and Mexico. These economies have gained macroeconomic stability in the past decade and consequently promise future economic progress. Under this positive scenario, international business firms have flourished. However, the region remains largely unexamined in the literature. It is therefore critical to fill this theoretical gap, especially considering that Latin America is starting to become a global player and possesses a different institutional context than developed markets. This paper analyzes the effect of international entrepreneurial orientation and institutionalism on the international performance of Pacific Alliance small-to-medium enterprises (SMEs). The literature considers international entrepreneurial orientation a powerful managerial capability, in line with the resource-based view, that firms can leverage to obtain satisfactory international performance, thereby gaining a competitive advantage through the correct allocation of key resources to exploit the capabilities involved. Entrepreneurial orientation is defined around five factors: innovation, proactiveness, risk-taking, competitive aggressiveness, and autonomy. Nevertheless, the institutional environment, both local and foreign, adversely affects international performance; this is especially the case in emerging markets with uncertain scenarios. The study thus analyzes entrepreneurial orientation as a key endogenous variable of international performance, and institutionalism as an exogenous variable. The survey data consist of Pacific Alliance SMEs that have foreign operations in at least one other country in the trade bloc. The research is ongoing; the study will undertake structural equation modeling (SEM) using the variance-based partial least squares estimation procedure, implemented in SmartPLS.
This research contributes to the theoretical discussion of a long-neglected topic, SMEs in Latin America, which has received limited academic attention. It also has practical implications for decision-makers and policy-makers, providing insights into what drives international performance.

Keywords: institutional theory, international entrepreneurial orientation, international performance, SMEs, Pacific Alliance

Procedia PDF Downloads 224
394 Partial M-Sequence Code Families Applied in Spectral Amplitude Coding Fiber-Optic Code-Division Multiple-Access Networks

Authors: Shin-Pin Tseng

Abstract:

Nowadays, numerous spectral amplitude coding (SAC) fiber-optic code-division multiple-access (FO-CDMA) techniques are appealing because they can provide moderate security and relieve the effects of multiuser interference (MUI). Nonetheless, the performance of previous networks is degraded by a fixed in-phase cross-correlation (IPCC) value. To address this problem, a new SAC FO-CDMA network using a partial M-sequence (PMS) code is presented in this study. Because the proposed PMS code is derived from the M-sequence code, a system using the PMS code can effectively suppress the effects of MUI. In addition, a two-code keying (TCK) scheme can be applied in the proposed SAC FO-CDMA network to enhance overall network performance. For system flexibility, simple optical encoders/decoders (codecs) using fiber Bragg gratings (FBGs) were also developed. First, we constructed a diagram of the SAC FO-CDMA network, including (N/2-1) optical transmitters, (N/2-1) optical receivers, and one N×N star coupler that broadcasts the transmitted optical signals to the input port of each optical receiver, where the parameter N for the PMS code is the code length. The proposed SAC network uses superluminescent diodes (SLDs) as light sources, which substantially reduces system cost compared with other FO-CDMA methods. Each optical transmitter is composed of an SLD, one optical switch, and two optical encoders matched to the assigned PMS codewords. On the other hand, each optical receiver includes a 1×2 splitter, two optical decoders, and one balanced photodiode for mitigating the effect of MUI. To simplify the analysis, some assumptions were made. First, the unpolarized SLD has a flat power spectral density (PSD). Second, the received optical power at the input port of each optical receiver is the same.
Third, all photodiodes in the proposed network have the same electrical properties. Fourth, transmitting '1' and '0' is equally probable. Subsequently, taking into account phase-induced intensity noise (PIIN) and thermal noise, the corresponding performance was evaluated and compared with that of previous SAC FO-CDMA networks. The numerical results show that the proposed network performs about 25% better than networks using other codes at BER = 10⁻⁹, because the effect of PIIN is effectively mitigated and the received power is doubled. As a result, the SAC FO-CDMA network using PMS codes is a candidate for next-generation optical network applications.
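To make the code construction concrete, the sketch below (an illustration of the parent M-sequence, not the paper's exact PMS design) generates a length-31 M-sequence with a Fibonacci linear-feedback shift register and checks the fixed in-phase cross-correlation property that motivates the PMS modification: any two distinct cyclic shifts of a unipolar M-sequence overlap in exactly 2^(n-2) = 8 chip positions.

```python
# Generate an M-sequence of length N = 2^5 - 1 = 31 with a 5-stage LFSR
# (tap positions 5 and 2, a maximal-length configuration), then compute the
# in-phase cross-correlation (IPCC) against a cyclic shift of itself.
def m_sequence(taps=(5, 2), length=31):
    state = [1, 0, 0, 0, 0]                  # any nonzero initial fill works
    seq = []
    for _ in range(length):
        seq.append(state[-1])                # output the last register stage
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]            # shift the feedback bit in
    return seq

seq = m_sequence()
shifted = seq[7:] + seq[:7]                  # an arbitrary nonzero cyclic shift
ipcc = sum(a & b for a, b in zip(seq, shifted))
print(len(seq), sum(seq), ipcc)              # 31 chips, 16 ones, IPCC = 8
```

The constant IPCC of 8, independent of the shift, is precisely the "fixed IPCC value" that limits the earlier SAC networks and that the partial-sequence construction is designed to relax.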

Keywords: spectral amplitude coding, SAC, fiber-optic code-division multiple-access, FO-CDMA, partial M-sequence, PMS code, fiber Bragg grating, FBG

Procedia PDF Downloads 350
393 Serum Vitamin D and Carboxy-Terminal Telopeptide of Type I Collagen Levels as Markers for Bone Health Impairment in Patients Treated with Different Antiepileptic Drugs

Authors: Moetazza M. Al-Shafei, Hala Abdel Karim, Eitedal M. Daoud, Hassan Zaki Hassuna

Abstract:

Epilepsy is a common neurological disorder affecting all age groups and one of the world's most prevalent non-communicable diseases. There is increasing evidence suggesting that long-term use of antiepileptic drugs (AEDs) can adversely affect bone mineralization and bone modeling. The study aimed to examine these effects and to give guidelines for supporting bone health through early intervention. From the Neurology Out-Patient Clinic of Kasr El-Aini University Hospital, 60 patients were enrolled: 40 patients on antiepileptic drugs for at least two years, and 20 controls matched for age and sex who were epileptic but had not yet started treatment, both chosen under specific criteria. Patients were divided into four groups: three monotherapy groups treated with either phenytoin, valproic acid or carbamazepine, and a fourth group treated with both valproic acid and carbamazepine. Serum carboxy-terminal telopeptide of type I collagen (ICTP), a bone resorption marker, serum 25(OH) vitamin D3, calcium, magnesium and phosphorus were estimated. Results showed that all patients on AEDs had significantly low levels of 25(OH) vitamin D3 (p<0.001), with significant elevation of ICTP (p<0.05) versus controls. The phenytoin group showed highly significant elevation of ICTP and decreases in both serum 25(OH) vitamin D3 (p<0.0001) and serum calcium (p<0.05) versus controls. The double-drug group showed a significant decrease in serum 25(OH) vitamin D3 (p<0.0001) and a decrease in phosphorus (p<0.05) versus controls. Serum magnesium showed no significant differences between the studied groups. We concluded that antiepileptic drugs appear to be an aggravating factor for impaired bone mineralization, so it can be worthwhile to supplement calcium and vitamin D even before initiation of antiepileptic therapy. The ICTP marker can be used to evaluate changes in bone resorption before and during AED therapy.

Keywords: antiepileptic drugs, bone minerals, carboxy-terminal telopeptide of type-1 collagen bone resorption marker, vitamin D

Procedia PDF Downloads 470
392 The Impact of Intimate Partner Violence on Women’s Mental Health in Kenya

Authors: Josephine Muchiri, Makena Muriithi

Abstract:

Adverse mental health consequences are experienced by those affected by intimate partner violence (IPV), whether directly or indirectly. These negative effects are felt not only in the short term but for years to come. It is important to examine the prevalence and co-occurrence of mental disorders in order to provide strategic interventions for women who have experienced IPV. The aim of this study was to examine the prevalence and comorbidity of post-traumatic stress disorder (PTSD), depression, and anxiety among women who had experienced intimate partner violence in two selected informal settlements in Nairobi County, Kenya. Participants were 116 women (15-60 years) selected through purposive and snowball sampling from low socio-economic settlements (Kawangware and Kibera) in Nairobi, Kenya. A socio-demographic questionnaire and the Woman Abuse Screening Tool (WAST) were used to collect data on intimate partner violence experiences. The PTSD Checklist for DSM-5 (PCL-5), Beck’s Depression Inventory, and Beck’s Anxiety Inventory assessed post-traumatic stress disorder, depression, and anxiety, respectively. Data analysis was conducted using the Statistical Package for the Social Sciences (SPSS) version 29, utilizing descriptive and correlation analyses. Findings indicated that the women had undergone various forms of abuse from their intimate partners: physical abuse 111 (92.5%), sexual abuse 70 (88.6%), and verbal abuse 92 (93.9%). The prevalence of PTSD was 47 (32.4%; M=44.11, SD=14.67); depression was the highest at n=131 (90.3%; M=33.37±9.98), with severe depression having the highest representation among the severity levels [moderate: n=35 (24.1%), severe: n=69 (47.6%) and extremely severe: n=27 (18.6%)].
Anxiety had the second highest prevalence at n=99 (68.8%; M=28.55±13.63), with differing rates across severity levels: normal anxiety 45 (31.3%), moderate anxiety n=62 (43.1%) and severe anxiety n=37 (25.7%). Regarding comorbidities, Pearson correlation tests showed significant positive relationships between PTSD and depression (r=0.379, p<.001), PTSD and anxiety (r=0.624, p<.001), and depression and anxiety (r=0.386, p<.001), such that an increase in one disorder was associated with increases in the other two; hence the comorbidity of the three disorders was ascertained. Conclusion: The study confirmed the adverse impact of IPV on women’s mental well-being, establishing the prevalence of PTSD, depression, and anxiety. Almost all the women had depressive symptoms, more than half had anxiety, and slightly more than a third had PTSD. Regarding the severity levels of anxiety and depression, almost half of the women with depression had severe depression, whereas moderate anxiety was most prevalent among those with anxiety. The three disorders were found to co-occur, with PTSD and anxiety showing the highest probability of co-occurrence. It is thus recommended that mental health interventions focusing on the three disorders be offered to women experiencing IPV.
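The comorbidity analysis reported above amounts to a set of pairwise Pearson correlations. A minimal sketch with synthetic scores looks like the following; the shared "distress" factor is an assumption used only to induce correlated scales, not a claim about the study's data.

```python
# Hedged illustration: pairwise Pearson correlations between synthetic
# PTSD, depression and anxiety scores, mirroring the SPSS analysis.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 116                                  # sample size reported in the study
common = rng.normal(size=n)              # shared distress factor (assumed)
ptsd = common + rng.normal(size=n)
depression = common + rng.normal(size=n)
anxiety = common + rng.normal(size=n)

for a, b, label in [(ptsd, depression, "PTSD-depression"),
                    (ptsd, anxiety, "PTSD-anxiety"),
                    (depression, anxiety, "depression-anxiety")]:
    r, p = pearsonr(a, b)
    print(f"{label}: r={r:.3f}, p={p:.4f}")
```

Positive r values with small p values across all three pairs is the pattern that establishes comorbidity in the analysis above.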

Keywords: anxiety, comorbidity, depression, intimate partner violence, post-traumatic stress disorder

Procedia PDF Downloads 46
391 The Classification Performance of Parametric and Nonparametric Discriminant Analysis for Class-Unbalanced Data of Diabetes Risk Groups

Authors: Lily Ingsrisawang, Tasanee Nacharoen

Abstract:

Introduction: The problem of unbalanced data sets commonly appears in real-world applications. Due to unequal class distribution, many research papers have found that the performance of existing classifiers tends to be biased towards the majority class. Nonparametric discriminant analysis based on k-nearest neighbors is one method that has been proposed for classifying unbalanced classes with good performance. Hence, the methods of discriminant analysis are of interest to us in investigating misclassification error rates for class-imbalanced data of three diabetes risk groups. Objective: The purpose of this study was to compare the classification performance of parametric and nonparametric discriminant analysis in a three-class classification application with class-imbalanced data of diabetes risk groups. Methods: Data on 599 staff members from a health project in a government hospital in Bangkok were obtained for the classification problem. The staff were diagnosed into one of three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, with the variables diabetes risk group, age, gender, cholesterol, and BMI, were analyzed and bootstrapped into 50 and 100 samples of 599 observations each for additional estimation of the misclassification error rate. Each data set was examined for departure from multivariate normality and for equality of the covariance matrices of the three risk groups. Both the original data and the bootstrap samples showed non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors discriminant function were performed over the 50 and 100 bootstrap samples and applied to the original data.
In finding the optimal classification rule, prior probabilities were set both to equal proportions (0.33:0.33:0.33) and to unequal proportions with three choices: (0.90:0.05:0.05), (0.80:0.10:0.10) or (0.70:0.15:0.15). Results: The results from the 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k = 3 or k = 4 and prior probabilities of {non-risk:risk:diabetic} set to {0.90:0.05:0.05} or {0.80:0.10:0.10} gave the smallest misclassification error rate. Conclusion: The k-nearest neighbors approach is suggested for classifying three-class-imbalanced data of diabetes risk groups.
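A minimal sketch of the comparison, using scikit-learn rather than the authors' software, might look like the following; the two features and class means are synthetic stand-ins for the study's variables, with the same 90:5:5 imbalance.

```python
# Hedged illustration: LDA, QDA and k-NN on synthetic class-imbalanced data,
# with prior probabilities set as in the study's unequal-proportion choice.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)
sizes = {0: 539, 1: 30, 2: 30}                    # ~90:5:5 split of 599
means = {0: [0, 0], 1: [2, 2], 2: [4, 4]}         # hypothetical class centers
X = np.vstack([rng.normal(means[c], 1.0, size=(m, 2)) for c, m in sizes.items()])
y = np.concatenate([np.full(m, c) for c, m in sizes.items()])

lda = LinearDiscriminantAnalysis(priors=[0.90, 0.05, 0.05]).fit(X, y)
qda = QuadraticDiscriminantAnalysis(priors=[0.90, 0.05, 0.05]).fit(X, y)
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)

for name, clf in [("LDA", lda), ("QDA", qda), ("3-NN", knn)]:
    err = 1 - clf.score(X, y)       # apparent misclassification error rate
    print(f"{name}: error = {err:.3f}")
```

In the study the error rates were additionally averaged over bootstrap samples; here a single apparent error rate per classifier illustrates the comparison.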

Keywords: error rate, bootstrap, diabetes risk groups, k-nearest neighbors

Procedia PDF Downloads 409
390 Monte Carlo Simulation of Thyroid Phantom Imaging Using Geant4-GATE

Authors: Parimalah Velo, Ahmad Zakaria

Abstract:

Introduction: Monte Carlo simulations of preclinical imaging systems open opportunities for new research, ranging from hardware design to the discovery of new imaging applications. A simulation system that can accurately model an imaging modality provides a platform for imaging developments that might be impractical in physical experimental systems due to expense, unnecessary radiation exposure and technological difficulties. The aim of the present study is to validate a Monte Carlo simulation of thyroid phantom imaging using Geant4-GATE for the Siemens e.cam single-head gamma camera. After validating the gamma camera simulation model by comparing physical characteristics such as energy resolution, spatial resolution, sensitivity, and dead time, the GATE simulation of thyroid phantom imaging was carried out. Methods: A thyroid phantom was defined geometrically, comprising two lobes 80 mm in diameter, one hot spot, and three cold spots, accurately matching the actual dimensions of the thyroid phantom. A planar image of 500k counts with a 128×128 matrix size was acquired using the simulation model and in the actual experimental setup. Upon image acquisition, quantitative image analysis was performed by investigating the total number of counts in the image, the image contrast, the radioactivity distribution in the image, and the dimensions of the hot spot. The algorithm for each quantification is described in detail. The difference between estimated and actual values for both the simulation and the experimental setup was analyzed for the radioactivity distribution and the dimensions of the hot spot. Results: The results show that the contrast levels of the simulated and experimental images differ by less than 2%. The difference in total counts between the simulation and the actual study is 0.4%. The activity estimation shows that the relative difference between estimated and actual activity is 4.62% for the experimental study and 3.03% for the simulation.
The deviation in the estimated diameter of the hot spot is the same for the simulation and experimental studies, at 0.5 pixel. In conclusion, the comparisons show good agreement between the simulation and experimental data.
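As an illustration of the contrast quantification mentioned above (not the authors' exact algorithm; the ROI positions and count levels are assumptions), one can compute a simple hot-spot contrast on a synthetic 128×128 planar image:

```python
# Hedged sketch: hot-spot contrast on a synthetic Poisson-count planar image.
import numpy as np

rng = np.random.default_rng(3)
image = rng.poisson(30.0, size=(128, 128)).astype(float)   # background counts
image[60:68, 60:68] += rng.poisson(60.0, size=(8, 8))      # hypothetical hot spot

hot = image[60:68, 60:68].mean()         # mean counts inside the hot-spot ROI
background = image[:20, :20].mean()      # mean counts in a background ROI
contrast = (hot - background) / (hot + background)   # Michelson-style contrast
print(round(contrast, 3))
```

The reported within-2% agreement corresponds to this kind of contrast value being computed on both the simulated and the experimental image and then compared.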

Keywords: gamma camera, Geant4 Application for Tomographic Emission (GATE), Monte Carlo, thyroid imaging

Procedia PDF Downloads 248