Search results for: finite difference simulation
401 Patterns of Libido, Sexual Activity and Sexual Performance in Female Migraineurs
Authors: John Farr Rothrock
Abstract:
Although migraine traditionally has been assumed to convey a relative decrease in libido, sexual activity, and sexual performance, recent data have suggested that the female migraine population is far from homogeneous in this regard. We sought to determine the levels of libido, sexual activity, and sexual performance in the female migraine patient population, both generally and according to clinical phenotype. In this single-blind study, a consecutive series of sexually active new female patients aged 25-55, initially presenting to a university-based headache clinic and having a >1 year history of migraine, were asked to anonymously complete a survey assessing their sexual histories generally and as they related to their headache disorder, along with the 19-item Female Sexual Function Index (FSFI). To serve as 2 separate control groups, 100 sexually active females with no history of migraine and 100 female migraineurs from the general (non-clinic) population, matched for age, marital status, educational background, and socioeconomic status, completed a similar survey. Over a period of 3 months, 188 consecutive migraine patients were invited to participate. Twenty declined, and 28 of the remaining 160 potential subjects failed to meet the inclusion criterion utilized for "sexually active" (i.e., heterosexual intercourse at a frequency of > once per month in each of the preceding 6 months). In all groups, younger age (p<.005), higher educational level attained (p<.05), and higher socioeconomic status (p<.025) correlated with a higher monthly frequency of intercourse and a higher likelihood of intercourse resulting in orgasm. Relative to the 100 control subjects with no history of migraine, the two migraine groups (total n=232) reported a lower monthly frequency of intercourse and recorded a lower FSFI score (both p<.025), but the contribution to this difference came primarily from the chronic migraine (CM) subgroup (n=92).
Patients with low frequency episodic migraine (LFEM) and mid frequency episodic migraine (MFEM) reported a higher FSFI score, higher monthly frequency of intercourse, higher likelihood of intercourse resulting in orgasm, and higher likelihood of multiple active sex partners than controls. All migraine subgroups reported a decreased likelihood of engaging in intercourse during an active migraine attack, but relative to the CM subgroup (8/92=9%), a higher proportion of patients in the LFEM (12/49=25%), MFEM (14/67=21%), and high frequency episodic migraine (HFEM: 6/14=43%) subgroups reported utilizing intercourse - and orgasm specifically - as a means of potentially terminating a migraine attack. Between the clinic and non-clinic groups there were no significant differences in the dependent variables assessed. Research subjects with LFEM and MFEM may report a level of libido, frequency of intercourse, and likelihood of orgasm-associated intercourse that exceeds what is reported by age-matched controls free of migraine. Many patients with LFEM, MFEM, and HFEM appear to utilize intercourse/orgasm as a means to potentially terminate an acute migraine attack.
Keywords: migraine, female, libido, sexual activity, phenotype
Procedia PDF Downloads 77
400 Sugar-Induced Stabilization Effect of Protein Structure
Authors: Mitsuhiro Hirai, Satoshi Ajito, Nobutaka Shimizu, Noriyuki Igarashi, Hiroki Iwase, Shinichi Takata
Abstract:
Sugars and polyols are known to be bioprotectants that prevent protein denaturation and enzyme deactivation, and they are widely used as nontoxic additives in various industrial and medical products. The mechanism of their protective actions has been explained by specific bindings between biological components and additives, by changes in solvent viscosities, and by surface tension and free energy changes upon transfer of those components into additive solutions. On the other hand, some organisms with tolerance to extreme environments produce stress proteins and/or accumulate sugars in their cells, a phenomenon called cryptobiosis. In particular, trehalose has been drawing attention relevant to cryptobiosis under external stresses such as high or low temperature, drying, osmotic pressure, and so on. The function of cryptobiosis by trehalose has been explained by the restriction of intra- and/or inter-molecular movement through vitrification, or by the replacement of water molecules by trehalose. Previous results suggest that the structure of, and interaction between, sugar and water are key determinants for understanding cryptobiosis. Recently, we have shown direct evidence that protein hydration (solvation) and structural stability against chemical and thermal denaturation significantly depend on the sugar species and on glycerol. Sugar and glycerol molecules tend to be preferentially or weakly excluded from the protein surface and preserve the native protein hydration shell. Due to this protective action on the protein hydration shell, the protein structure is stabilized against chemical (guanidinium chloride) and thermal denaturation. The protective action depends on the sugar species. To understand the above trend and difference in detail, it is essential to clarify the characteristics of solutions containing those additives.
In this study, by using a wide-angle X-ray scattering technique covering a wide spatial region (~3-120 Å), we have clarified the structures of sugar solutions with concentrations from 5% w/w to 65% w/w. The sugars measured in the present study were monosaccharides (glucose, fructose, mannose) and disaccharides (sucrose, trehalose, maltose). Owing to the wide spatial resolution of the observed scattering data, we succeeded in obtaining information on the internal structure of individual sugar molecules and on the correlation between them. Every sugar gradually shortened the average inter-molecular distance as the concentration increased. The inter-molecular interaction between sugar molecules was essentially exclusive for every sugar, which appeared as the presence of a repulsive correlation hole. This trend was weaker for trehalose compared to the other sugars. The inter-molecular distance and the spread of the individual molecule clearly depended on the sugar species. We will discuss the relation between the characteristics of a sugar solution and its protective action on biological materials.
Keywords: hydration, protein, sugar, X-ray scattering
Procedia PDF Downloads 156
399 Application of NBR 14861:2011 for the Design of Prestressed Hollow Core Slabs Subjected to Shear
Authors: Alessandra Aparecida Vieira França, Adriana de Paula Lacerda Santos, Mauro Lacerda Santos Filho
Abstract:
The purpose of this research is to study the behavior of precast prestressed hollow core slabs subjected to shear. In order to achieve this goal, shear tests were performed using hollow core slabs 26.5 cm thick, with and without a concrete cover of 5 cm, with no cores filled, with two cores filled, and with three cores filled with concrete. The tests were performed according to the procedures recommended by FIP (1992) and EN 1168:2005, following the method presented in Costa (2009). The ultimate shear strength obtained in the tests was compared with the theoretical resistant shear values calculated in accordance with the codes currently used in Brazil, namely NBR 6118:2003 and NBR 14861:2011. When calculating the shear resistance through the equations presented in NBR 14861:2011, it was found that its provisions are much more accurate for calculating the shear strength of hollow core slabs than those of the NBR 6118 code. Due to the large difference between the calculated results, even for slabs without filled cores, the authors consulted the committee that drafted NBR 14861:2011 and found that there is an error in the text of the standard: the suggested coefficient is actually double the required value. ABNT later issued an amendment to NBR 14861:2011 with the necessary corrections. During the tests for the present study, it was confirmed that the concrete filling the cores contributes to increasing the shear strength of hollow core slabs. However, for slabs 26.5 cm thick, the quantity should be limited to a maximum of two filled cores, because most of the results for slabs with three filled cores were smaller. This confirmed the recommendation of NBR 14861:2011, which is consistent with standard practice.
After analyzing the cracking configuration and failure mechanisms of the hollow core slabs during the shear tests, strut-and-tie models were developed representing the forces acting on the slab at the moment of rupture. Through these models, the authors were able to calculate the tensile stress acting on the concrete ties (ribs) and to scale the geometry of these ties. The experimental results have shown that the failure mechanism of hollow core slabs can be predicted using the strut-and-tie procedure within a good range of accuracy. In addition, the research confirmed the need to correct the Brazilian standard by revising the duplicated correction factor σcp in NBR 14861:2011, and to limit the number of cores (holes) filled with concrete when increasing the shear resistance of the slab. It is also suggested to increase the number of test results for slabs 26.5 cm thick, and for a larger range of slab thicknesses, in order to obtain shear test results for slabs with cores concreted after the release of the prestressing force. Another set of shear tests must be performed on slabs with filled cores and a concrete cover reinforced with welded steel mesh, for comparison with theoretical values calculated by the new revision of NBR 14861:2011.
Keywords: prestressed hollow core slabs, shear, strut-and-tie models
Procedia PDF Downloads 333
398 Collaborative Student Community Service Program as a New Approach for Development in Rural Areas: The Case of Western Java
Authors: Brian Yulianto, Syachrial, Saeful Aziz, Anggita Clara Shinta
Abstract:
Indonesia, with a population of about two hundred and fifty million people, indicates an outstanding wealth of human resources. Hundreds of millions of people are scattered across various communities in various regions of Indonesia, with different economic and social characteristics and unique cultures. Broadly speaking, communities in Indonesia are divided into two classes, namely urban communities and rural communities. Rural communities are characterized by low potential and poor management of natural and human resources, limited access to development, a lack of social and economic infrastructure, and scattered and isolated populations. West Java is one of the provinces with the largest population in Indonesia. Based on data from the Central Bureau of Statistics, in 2015 the population of West Java reached 46.7096 million people, spread over 18 districts and 9 cities. The large differences in the geographical and social conditions of people in West Java from one region to another, especially from the south to the north, cause a high disparity. This is closely related to the flow of investment to develop each area. Poverty and underdevelopment are the classic problems that occur on a massive scale in the region as effects of inequity in development. South Cianjur and South Tasikmalaya have become portraits of areas where the existing potential has not been capable of prospering society. The Tri Dharma of higher education not only defines colleges as pioneers in the implementation of education and research to improve the quality of human resources, but also demands that they be pioneers in development through the concept of public service. Bandung Institute of Technology, as one such institution of higher education, implements a community service system through the collaborative community work program "one of the university community" as one approach to developing villages.
The program is based on community service, where students are not only required to take part in community service but also to develop a community development strategy that is comprehensive and integrated, in cooperation with related government and non-government agencies, as a concrete effort to align the potential, positions, and roles of the various parties. The western Java areas in particular have high poverty rates and disparity. On the other hand, there are three fundamental pillars in the development of rural communities, namely economic development, community development, and integrated infrastructure development. These pillars require the commitment of all components of the community, including students and colleges, to succeed. A college community program is one approach to the development of rural communities. ITB is committed to implementing a form of student community service as a community-college program that integrates all elements of the community, called Kuliah Kerja Nyata-Thematic.
Keywords: development in rural area, collaborative, student community service, Kuliah Kerja Nyata-Thematic ITB
Procedia PDF Downloads 222
397 The Processing of Context-Dependent and Context-Independent Scalar Implicatures
Authors: Liu Jia’nan
Abstract:
The default accounts hold the view that there exists a kind of scalar implicature which can be processed without context and which enjoys a psychological privilege over other scalar implicatures that depend on context. In contrast, Relevance Theorists regard context as a must, because all scalar implicatures have to meet the need of relevance in discourse. However, in Katsos' study, the experimental results showed that although, quantitatively, adults rejected under-informative utterances with lexical scales (context-independent) and ad hoc scales (context-dependent) at almost the same rate, they still regarded the violation of utterances with lexical scales as much more severe than with ad hoc scales. Neither the default account nor Relevance Theory can fully explain this result. Thus, there are two questionable points in this result: (1) Is it possible that the strange discrepancy is due to other factors instead of the generation of scalar implicature? (2) Are the ad hoc scales truly formed under the possible influence of mental context? Do the participants generate scalar implicatures with ad hoc scales, instead of just comparing semantic differences among target objects in the under-informative utterance? In our Experiment 1, question (1) will be answered by a repetition of Katsos' Experiment 1. Test materials will be shown in PowerPoint in the form of pictures, and each procedure will be done under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The pictorial test material will be transformed into literal words in DMDX, and the target sentence will be shown word-by-word to participants in the soundproof room in our lab. The reading time of the target parts, i.e., the words containing scalar implicatures, will be recorded.
We presume that in the group with lexical scales, a standardized pragmatic mental context would help generate the scalar implicature once the scalar word occurs, which will make the participants expect the upcoming words to be informative. Thus, if the new input after the scalar word is under-informative, more time will be spent on the extra semantic processing. However, in the group with ad hoc scales, the scalar implicature may hardly be generated without the support of a fixed mental context of scale. Thus, whether the new input is informative or not does not matter at all, and the reading time of the target parts will be the same in informative and under-informative utterances. People's minds may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, will it shed light on the interplay of default accounts and context factors in scalar implicature processing? We might be able to assume, based on our experiments, that one single dominant processing paradigm may not be plausible. Furthermore, in the processing of scalar implicature, the semantic interpretation and the pragmatic interpretation may be made in a dynamic interplay in the mind. As to the lexical scale, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also lead the possible default or standardized paradigm to override the role of context. However, the objects in an ad hoc scale are not usually treated as scalar members in the mental context, and thus the lexical-semantic association of the objects may prevent their pragmatic reading from generating a scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate the scalar implicature.
Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing
Procedia PDF Downloads 322
396 Comparison of Cu Nanoparticle Formation and Properties with and without Surrounding Dielectric
Authors: P. Dubcek, B. Pivac, J. Dasovic, V. Janicki, S. Bernstorff
Abstract:
When grown only to nanometric sizes, metallic particles (e.g., Ag, Au, and Cu) exhibit specific optical properties caused by the presence of a plasmon band. The plasmon band represents the collective oscillation of the conduction electrons and causes a narrow-band absorption of light in the visible range. When the nanoparticles are embedded in a dielectric, they also modify the dielectric's optical properties. This can be fine-tuned by tuning the particle size. We investigated Cu nanoparticle growth with and without a surrounding dielectric (SiO2 capping layer). The morphology and crystallinity were investigated by GISAXS and GIWAXS, respectively. Samples were produced by high-vacuum thermal evaporation of Cu onto a monocrystalline silicon substrate held at room temperature, 100°C, or 180°C. One series was capped in situ by a 10 nm SiO2 layer. Additionally, samples were annealed at different temperatures up to 550°C, also in high vacuum. The samples deposited at room temperature and annealed at lower temperatures exhibit a continuous film structure: strong oscillations in the GISAXS intensity are present, especially in the capped samples. At higher temperatures, enhanced surface dewetting and Cu nanoparticle (nanoisland) formation partially destroy the flatness of the interface. Therefore, the particle type of scattering is enhanced, while the film fringes are depleted. However, the capping layer hinders particle formation, and the continuous film structure is preserved up to higher annealing temperatures (visible as strong and persistent fringes in GISAXS) compared to the non-capped samples. According to GISAXS, the lateral particle sizes are reduced at higher temperatures, while the particle height increases. This is ascribed to close packing of the formed particles at lower temperatures, so that the GISAXS-deduced sizes partially reflect the dimensions of particle agglomerates.
Lateral maxima in GISAXS are an indication of good positional correlation, and the particle-to-particle distance increases as the particles grow with temperature elevation. This coordination is much stronger in the capped and lower-temperature-deposited samples. The dewetting is much more vigorous in the non-capped sample, and since the nanoparticles are formed in a range of sizes, the correlation recedes with both deposition and annealing temperature. The surface topology was checked by atomic force microscopy (AFM). The capped samples' surfaces were smoother, and the lateral sizes of the surface features were larger compared to the non-capped samples. Altogether, the AFM results suggest somewhat larger particles and a wider size distribution, which can be attributed to the difference in probe size. Finally, the plasmonic effect was monitored by UV-Vis reflectance spectroscopy, and the relatively weak plasmonic effect could be explained by incomplete dewetting or partial interconnection of the formed particles.
Keywords: copper, GISAXS, nanoparticles, plasmonics
Procedia PDF Downloads 123
395 Simulation of Technological, Energy and GHG Comparison between a Conventional Diesel Bus and E-Bus: Feasibility to Promote E-Bus Adoption in Highland Cities
Authors: Riofrio Jonathan, Fernandez Guillermo
Abstract:
Renewable energy represented around 80% of the power generation mix in Ecuador during 2020, so current public policy is focused on taking advantage of the high share of renewable sources to carry out several electrification projects. These projects are part of the portfolio sent to the United Nations Framework Convention on Climate Change (UNFCCC) as a commitment to reduce greenhouse gas (GHG) emissions in the established nationally determined contribution (NDC). In this sense, the Ecuadorian Organic Energy Efficiency Law (LOEE), published in 2019, promotes e-mobility as one of its main milestones. In fact, it states that new vehicles for urban and interurban use must be e-buses from 2025. For a successful implementation of this technological change in the national context, it is important to deploy surveys focused on technical and geographical factors to maintain the quality of service in both the electricity and transport sectors. Therefore, this research presents a technological and energy comparison between a conventional diesel bus and its equivalent e-bus. Both vehicles fulfill all the technical requirements to operate in the case-study city, Ambato, in the province of Tungurahua, Ecuador. In addition, the analysis includes the development of a model for the energy estimation of both technologies as applied in a highland city such as Ambato. The altimetry of the most important bus routes in the city varies from 2557 to 3200 m.a.s.l. at the lowest and highest points, respectively. These operating conditions lend a degree of novelty to this paper. Complementarily, the technical specifications of the diesel buses are defined following the common features of buses registered in Ambato. On the other hand, the specifications for the e-buses come from the most common units introduced in Latin America, because there is not enough evidence from similar cities at the moment.
The achieved results will be good input data for decision-makers, since electricity demand forecasts, energy savings, costs, and greenhouse gas emissions are computed. Indeed, the GHG figures are important because they support reporting under the transparency framework that is part of the Paris Agreement. Finally, the presented results correspond to stage I of the project "Analysis and Prospective of Electromobility in Ecuador and Energy Mix towards 2030", supported by Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ).
Keywords: high altitude cities, energy planning, NDC, e-buses, e-mobility
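The abstract does not specify the energy-estimation model. As a hedged sketch only, a simple longitudinal model covering the grade-dominated routes of a highland city might combine rolling, aerodynamic, and potential-energy terms, with regenerative braking on descents for the e-bus; every parameter value below (mass, drag area, efficiencies) is an illustrative assumption, not data from the study:

```python
G = 9.81  # gravitational acceleration, m/s^2

def segment_energy_kwh(mass_kg, dist_m, delta_h_m, speed_ms,
                       crr=0.008,   # rolling resistance coefficient (assumed)
                       cda=6.0,     # drag area Cd*A in m^2 (assumed)
                       rho=1.0,     # air density; ~1.0 kg/m^3 near 2600-3200 m.a.s.l.
                       eta=0.9,     # drivetrain efficiency (assumed)
                       regen=0.6):  # fraction of descent energy recovered (e-bus)
    """Net traction energy (kWh) for one route segment of a simple bus model."""
    rolling = crr * mass_kg * G * dist_m            # rolling resistance work, J
    aero = 0.5 * rho * cda * speed_ms ** 2 * dist_m # aerodynamic drag work, J
    grade = mass_kg * G * delta_h_m                 # potential energy change, J
    wheel_j = rolling + aero + max(grade, 0.0)      # energy demanded at the wheel
    recovered = regen * max(-grade, 0.0)            # regenerative braking on descents
    return (wheel_j / eta - recovered) / 3.6e6      # J -> kWh
```

Summing this over a route's altimetry profile (segment distances and elevation changes) would yield the per-trip demand; setting `regen=0` approximates the diesel case, where descent energy is dissipated in the brakes.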
Procedia PDF Downloads 151
394 Concentrations of Leptin, C-Peptide and Insulin in Cord Blood as Fetal Origins of Insulin Resistance and Their Effect on the Birth Weight of the Newborn
Authors: R. P. Hewawasam, M. H. A. D. de Silva, M. A. G. Iresha
Abstract:
Obesity is associated with an increased risk of developing insulin resistance. Insulin resistance often progresses to type 2 diabetes mellitus and is linked to a wide variety of other pathophysiological features, including hypertension, hyperlipidemia, atherosclerosis (metabolic syndrome), and polycystic ovarian syndrome. Macrosomia is common in infants born not only to women with gestational diabetes mellitus but also to non-diabetic obese women. During the past two decades, obesity in children and adolescents has risen significantly in Asian populations, including Sri Lanka. There is increasing evidence that infants who are born large for gestational age (LGA) are more likely to be obese in childhood. It is also established from previous studies that Asian populations have a higher percentage of body fat at a lower body mass index compared to Caucasians. High leptin levels in cord blood have been reported to correlate with fetal adiposity at birth. Previous studies have also shown that cord blood C-peptide and insulin levels are significantly and positively correlated with birth weight. Therefore, the objective of this preliminary study was to determine the relationship between parameters of fetal insulin resistance, such as leptin, C-peptide, and insulin, and the birth weight of the newborn in a study population in southern Sri Lanka. Umbilical cord blood was collected from 90 newborns, and the concentrations of insulin, leptin, and C-peptide were measured by ELISA. The birth weight, length, and occipital-frontal, chest, hip, and calf circumferences of the newborns were measured, and characteristics of the mother, such as age, height, weight before pregnancy, and weight gain, were collected. The relationships between insulin, leptin, C-peptide, and anthropometrics were assessed by Pearson's correlation, while the Mann-Whitney U test was used to assess the differences in cord blood leptin, C-peptide, and insulin levels between groups.
A significant difference (p < 0.001) was observed between the insulin levels of infants born LGA (18.73 ± 0.64 µIU/ml) and AGA (13.08 ± 0.43 µIU/ml). Consistently, a significant increase in concentration (p < 0.001) was observed in the C-peptide levels of infants born LGA (9.32 ± 0.77 ng/ml) compared to AGA (5.44 ± 0.19 ng/ml). The cord blood leptin concentration of LGA infants (12.67 ± 1.62 ng/ml) was significantly higher (p < 0.001) than that of AGA infants (7.10 ± 0.97 ng/ml). Significant positive correlations (p < 0.05) were observed between cord leptin levels and the birth weight, pre-pregnancy maternal weight, and BMI of both AGA and LGA infants. Consistently, a significant positive correlation (p < 0.05) was observed between the birth weight and the C-peptide concentration. The significantly high concentrations of leptin, C-peptide, and insulin in the cord blood of LGA infants suggest that they may be involved in regulating fetal growth. Although previous studies suggest comparatively high levels of body fat in Asian populations, the values obtained in this study are not significantly different from values previously reported for Caucasian populations. According to this preliminary study, maternal pre-pregnancy BMI and weight may serve as significant indicators of cord blood parameters of insulin resistance and possibly the birth weight of the newborn.
Keywords: large for gestational age, leptin, C-peptide, insulin
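The LGA vs. AGA group comparisons above rely on the Mann-Whitney U test. As a minimal illustration of the rank-based statistic (using made-up numbers, not the study's data), U can be computed in pure Python:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples.

    Illustrative sketch: ties receive average ranks, and no p-value
    is computed (an exact table or normal approximation is needed
    for that, as statistical packages provide).
    """
    pooled = sorted(x + y)

    def avg_rank(v):
        # average of the 1-based positions v occupies in the pooled order
        first = pooled.index(v) + 1
        last = len(pooled) - pooled[::-1].index(v)
        return (first + last) / 2

    r1 = sum(avg_rank(v) for v in x)        # rank sum of group x
    u1 = r1 - len(x) * (len(x) + 1) / 2     # U statistic for group x
    u2 = len(x) * len(y) - u1               # U statistic for group y
    return min(u1, u2)
```

Complete separation of the two groups gives U = 0, the most extreme value, which is what drives small p-values in comparisons like the LGA vs. AGA insulin levels.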
Procedia PDF Downloads 157
393 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Authors: Elham Bagheri, Yalda Mohsenzadeh
Abstract:
Image memorability refers to the phenomenon whereby certain images are more likely to be remembered by humans than others; it is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence: it reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate a human-like memorability assessment, inspired by the visual memory game employed in memorability estimation. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories (animals, sports, food, landscapes, and vehicles) along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data.
The reconstruction error of each image, the error reduction, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate a strong correlation between the reconstruction error and distinctiveness of images and their memorability scores. This suggests that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and marks a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception
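The two per-image measures described above, reconstruction error and nearest-neighbor distinctiveness in latent space, can be sketched with plain NumPy. The MSE loss and array shapes here are illustrative assumptions, not the study's exact VGG-based pipeline or its structural/perceptual losses:

```python
import numpy as np

def reconstruction_error(originals, reconstructions):
    # Per-image mean squared error over all pixels/channels.
    diff = (originals - reconstructions) ** 2
    return diff.reshape(len(originals), -1).mean(axis=1)

def distinctiveness(latents):
    # Euclidean distance from each latent vector to its nearest neighbor.
    d = np.linalg.norm(latents[:, None, :] - latents[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # ignore each vector's zero self-distance
    return d.min(axis=1)

def spearman(a, b):
    # Spearman rank correlation = Pearson correlation of the ranks
    # (no tie handling; enough for a sketch).
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])
```

Given encoded latents and reconstructions for a dataset, `spearman(distinctiveness(latents), memorability_scores)` would then quantify the relationship the abstract reports, with `memorability_scores` supplied per image (e.g., from MemCat).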
Procedia PDF Downloads 90
392 The Seller’s Sense: Buying-Selling Perspective Affects the Sensitivity to Expected-Value Differences
Authors: Taher Abofol, Eldad Yechiam, Thorsten Pachur
Abstract:
In four studies, we examined whether sellers and buyers differ not only in the subjective price levels they set for objects (i.e., the endowment effect) but also in their relative accuracy given objects varying in expected value. If, as has been proposed, sellers stand to accrue a more substantial loss than buyers do, then their pricing decisions should be more sensitive to expected-value differences between objects. This is implied by loss aversion, due to the steeper slope of prospect theory's value function for losses than for gains, as well as by the loss attention account, which posits that losses increase the attention invested in a task. Both accounts suggest that losses increase sensitivity to the relative values of different objects, which should result in better alignment of pricing decisions with the objective value of objects on the part of sellers. Under loss attention, this characteristic should emerge only under certain boundary conditions. In Study 1, a published dataset was reanalyzed in which 152 participants indicated buying or selling prices for monetary lotteries with different expected values. Relative EV sensitivity was calculated for each participant as the Spearman rank correlation between their pricing decisions for the lotteries and the lotteries' expected values. An ANOVA revealed a main effect of perspective (sellers versus buyers), F(1,150) = 85.3, p < .0001, with greater EV sensitivity for sellers. Study 2 examined the prediction (implied by loss attention) that the positive effect of losses on performance emerges particularly under conditions of time constraints. A published dataset was reanalyzed in which 84 participants were asked to provide selling and buying prices for monetary lotteries under three deliberation time conditions (5, 10, or 15 seconds). As in Study 1, an ANOVA revealed greater EV sensitivity for sellers than for buyers, F(1,82) = 9.34, p = .003. Importantly, there was also an interaction of perspective by deliberation time.
Post-hoc tests revealed that there were main effects of perspective in the conditions with 5 s and 10 s of deliberation time, but not in the 15 s condition. Thus, sellers' EV-sensitivity advantage disappeared with extended deliberation. Study 3 replicated the design of Study 1 but administered the task three times to test whether the effect decays with repeated presentation. The results showed that the difference between buyers' and sellers' EV sensitivity was replicated in repeated task presentations. Study 4 examined the loss-attention prediction that EV-sensitivity differences can be eliminated by manipulations that reduce the differential attention investment of sellers and buyers. This was carried out by randomly mixing selling and buying trials for each participant. The results revealed no differences in EV sensitivity between selling and buying trials. The pattern of results is consistent with an attentional resource-based account of the differences between sellers and buyers. Thus, asking people to price an object from a seller's perspective rather than the buyer's improves the relative accuracy of pricing decisions; subtle changes in the framing of one's perspective in a trading negotiation may improve price accuracy.
Keywords: decision making, endowment effect, pricing, loss aversion, loss attention
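The EV-sensitivity measure described above can be sketched in a few lines: each participant prices a set of lotteries, and sensitivity is the Spearman rank correlation between those prices and the lotteries' expected values. The data below are hypothetical illustrations, not values from the reanalyzed datasets.

```python
def ranks(xs):
    """Rank values from 1..n (no tie handling; fine for distinct values)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rho via the rank-difference formula (distinct values)."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

evs = [2.0, 4.0, 6.0, 8.0, 10.0]           # lottery expected values
seller_prices = [2.5, 4.1, 5.8, 8.3, 9.9]  # track EV closely in rank order
buyer_prices = [3.0, 2.5, 4.0, 3.5, 5.0]   # noisier ordering
print(spearman(seller_prices, evs))  # 1.0
print(spearman(buyer_prices, evs))   # 0.8
```

A higher rho means the participant's prices are better aligned with the lotteries' objective values, which is exactly the sellers' advantage reported in Studies 1 and 2.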
Procedia PDF Downloads 345
391 Mood Symptom Severity in Service Members with Posttraumatic Stress Symptoms after Service Dog Training
Authors: Tiffany Riggleman, Andrea Schultheis, Kalyn Jannace, Jerika Taylor, Michelle Nordstrom, Paul F. Pasquina
Abstract:
Introduction: Posttraumatic Stress (PTS) and Posttraumatic Stress Disorder (PTSD) remain significant problems for military and veteran communities. Symptoms of PTSD often include poor sleep, intrusive thoughts, difficulty concentrating, and trouble with emotional regulation. Unfortunately, despite its high prevalence, service members diagnosed with PTSD often do not seek help, usually because of the perceived stigma surrounding behavioral health care. To help address these challenges, non-pharmacological therapeutic approaches are being developed to improve care and enhance compliance. The Service Dog Training Program (SDTP), which involves teaching patients how to train puppies to become mobility service dogs, has been successfully implemented in PTS/PTSD care programs, with anecdotal reports of improved outcomes. This study was designed to assess the biopsychosocial effects of SDTP in military beneficiaries with PTS symptoms. Methods: Individuals between the ages of 18 and 65 with PTS symptoms were recruited to participate in this prospective study. Each subject completed 4 weeks of baseline testing, followed by 6 weeks of active service dog training (twice per week in one-hour sessions) with a professional service dog trainer. Outcome measures included the Posttraumatic Stress Checklist for the DSM-5 (PCL-5), Generalized Anxiety Disorder questionnaire-7 (GAD-7), Patient Health Questionnaire-9 (PHQ-9), social support/interaction, anthropometrics, blood/serum biomarkers, and qualitative interviews. Preliminary analysis of 17 participants examined mean scores on the GAD-7, PCL-5, and PHQ-9, pre- and post-SDTP, and changes were assessed using Wilcoxon signed-rank tests. Results: Post-SDTP, there was a statistically significant mean decrease in PCL-5 scores of 13.5 on an 80-point scale (p=0.03) and a significant mean decrease of 2.2 in PHQ-9 scores on a 27-point scale (p=0.04), suggestive of decreased PTSD and depression symptoms.
While there was a decrease in mean GAD-7 scores post-SDTP, the difference was not significant (p=0.20). Recurring themes among results from the qualitative interviews included decreased pain, forgetting about stressors, an improved sense of calm, increased confidence, improved communication, and establishing a connection with the service dog. Conclusion: Preliminary results from the first 17 participants in this study suggest that individuals who received SDTP had a statistically significant decrease in PTS symptoms, as measured by the PCL-5 and PHQ-9. This ongoing study seeks to enroll a total of 156 military beneficiaries with PTS symptoms. Future analyses will include additional psychological outcomes, pain scores, blood/serum biomarkers, and other measures of the social aspects of PTSD, such as relationship satisfaction and sleep hygiene.
Keywords: post-concussive syndrome, posttraumatic stress, service dog, service dog training program, traumatic brain injury
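The paired pre/post comparison above rests on the Wilcoxon signed-rank statistic. A minimal sketch follows, computing only the test statistic W on hypothetical PCL-5 scores (the study used the full test with p-values on its 17 participants; ties in absolute differences, which the standard test handles with average ranks, are absent in this toy data):

```python
def wilcoxon_w(pre, post):
    """Smaller of the positive/negative signed-rank sums (zero diffs dropped)."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    w_pos = w_neg = 0.0
    for rank, i in enumerate(order, start=1):
        if diffs[i] > 0:
            w_pos += rank
        else:
            w_neg += rank
    return min(w_pos, w_neg)

pre  = [55, 48, 62, 51, 44, 58]   # hypothetical baseline PCL-5 scores
post = [40, 45, 50, 52, 30, 47]   # hypothetical post-SDTP scores
print(wilcoxon_w(pre, post))      # 1.0: nearly all rank mass on decreases
```

A small W (most signed-rank mass on one side) is what yields the significant p-values reported for the PCL-5 and PHQ-9.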
Procedia PDF Downloads 113
390 The Impact of Using Flattening Filter-Free Energies on Treatment Efficiency for Prostate SBRT
Authors: T. Al-Alawi, N. Shorbaji, E. Rashaidi, M. Alidrisi
Abstract:
Purpose/Objective(s): The main purpose of this study is to analyze the planning of SBRT treatments for localized prostate cancer with 6FFF and 10FFF energies, to see whether there is a dosimetric difference between the two energies and how we can increase plan efficiency and reduce plan complexity. A further aim is to introduce a planning method in our department to treat prostate cancer utilizing high-energy photons without increasing patient toxicity while fulfilling all dosimetric constraints for organs at risk (OAR), and then to evaluate the 95% target coverage (PTV95), V5%, V2%, V1%, low-dose volumes for OAR (V1Gy, V2Gy, V5Gy), monitor units (beam-on time), and the values of the homogeneity index (HI), conformity index (CI) and gradient index (GI) for each treatment plan. Materials/Methods: Two treatment plans were generated retrospectively for 15 patients with localized prostate cancer, using the CT planning images acquired for radiotherapy purposes. Each plan contains two or three complete arcs with two or three different collimator angle sets. The maximum dose rate available is 1400 MU/min for 6FFF and 2400 MU/min for 10FFF. Therefore, when we need to avoid changing the gantry speed during rotation, we tend to use a third arc in the 6FFF plan to accommodate the high dose per fraction. The clinical target volume (CTV) consists of the entire prostate for organ-confined disease. The planning target volume (PTV) involves a margin of 5 mm; a 3-mm margin is favored posteriorly. Organs at risk identified and contoured include the rectum, bladder, penile bulb, femoral heads, and small bowel. The prescription is to deliver 35 Gy in five fractions to the PTV and to apply constraints for organs at risk (OAR) derived from those reported in the references.
Results: The CI (0.99), HI (0.7), and GI (4.1) were the same for both energies, 6FFF and 10FFF, with no differences, but the total delivered MUs were much lower for the 10FFF plans (2907 for 6FFF vs. 2468 for 10FFF), and the total delivery time was 124 s for 6FFF vs. 61 s for 10FFF beams. There were no dosimetric differences between 6FFF and 10FFF in terms of PTV coverage and mean doses; the mean doses for the bladder, rectum, femoral heads, penile bulb, and small bowel were collected, and they were in favor of 10FFF. We also obtained lower V1Gy, V2Gy, and V5Gy doses for all OAR with the 10FFF plans. Integral doses (ID, in Gy·L) were recorded for all OAR, and they were lower with the 10FFF plans. Conclusion: The higher-energy 10FFF beam has lower treatment time and lower delivered MUs; 10FFF also showed lower integral and mean doses to organs at risk. In this study, we suggest using a 10FFF beam for SBRT prostate treatment, which has the advantage of lowering the treatment time, leading to less plan complexity with respect to 6FFF beams.
Keywords: FFF beam, SBRT prostate, VMAT, prostate cancer
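The beam-on times quoted above follow directly from the total MU and the maximum dose rate of each energy. A quick check, using the figures reported in the abstract:

```python
def beam_on_time_s(total_mu, dose_rate_mu_per_min):
    """Minimum beam-on time in seconds at the maximum dose rate."""
    return total_mu / dose_rate_mu_per_min * 60.0

# 6FFF: 2907 MU at 1400 MU/min; 10FFF: 2468 MU at 2400 MU/min
print(int(beam_on_time_s(2907, 1400)))  # 124 s
print(int(beam_on_time_s(2468, 2400)))  # 61 s
```

The roughly halved delivery time for 10FFF is thus almost entirely a dose-rate effect, with the lower total MU contributing the remainder.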
Procedia PDF Downloads 84
389 Smallholder’s Agricultural Water Management Technology Adoption, Adoption Intensity and Their Determinants: The Case of Meda Welabu Woreda, Oromia, Ethiopia
Authors: Naod Mekonnen Anega
Abstract:
The objective of this paper was to empirically identify technology-tailored determinants of the adoption and adoption intensity (extent of use) of agricultural water management technologies in Meda Welabu Woreda, Oromia regional state, Ethiopia. Meda Welabu Woreda, one of the administrative Woredas of the Oromia regional state, was selected purposively, as it is one of the Woredas in the region where small-scale irrigation practices and the use of agricultural water management technologies can be found among smallholders. Using the existence of water management practices (use of water management technologies) and land use pattern as criteria, Genale Mekchira Kebele was selected for the study. A total of 200 smallholders were selected from the Kebele using the technique developed by Krejcie and Morgan. The study employed Logit and Tobit models to estimate and identify the economic, social, geographical, household, institutional, psychological and technological factors that determine adoption and adoption intensity of water management technologies. The study revealed that while 55 of the sampled households were adopters of agricultural water management technology, the remaining 140 were non-adopters. Among the adopters included in the sample, 97% were using (traditional) river diversion technology with a traditional canal, while the remaining 7% were using pond with treadle pump technology. The Logit estimation revealed that while adoption of river diversion is positively and significantly affected by membership in local institutions, active labor force, income, access to credit and land ownership, adoption of treadle pump technology is positively and significantly affected by family size, education level, access to credit, extension contact, income, access to market, and slope.
The Logit estimation also revealed that group action requirement, distance to farm, and size of active labor force negatively and significantly influenced adoption of river diversion, while age and perception negatively and significantly influenced the adoption decision for treadle pump technology. On the other hand, the Tobit estimation revealed that adoption intensity (extent of use) of agricultural water management is positively and significantly affected by education, access to credit, extension contact, access to market and income. This study revealed that technology-tailored study of the adoption of agricultural water management technologies (AWMTs) should be considered to identify and scale up best agricultural water management practices. Indeed, in countries like Ethiopia, where social, economic, cultural, environmental and agro-ecological conditions differ even within the same Kebele, technology-tailored studies that fit the conditions of each Kebele would help to identify and scale up best practices in agricultural water management.
Keywords: water management technology, adoption, adoption intensity, smallholders, technology tailored approach
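The binary Logit model used above can be sketched as a maximum-likelihood logistic regression. The version below is fitted by plain gradient ascent on hypothetical data (adoption as a function of scaled income and credit access); the study itself used far richer covariates and standard econometric software.

```python
import math

def fit_logit(X, y, lr=0.1, steps=5000):
    """Logistic regression by gradient ascent on the log-likelihood."""
    w = [0.0] * (len(X[0]) + 1)            # intercept + coefficients
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted adoption probability
            err = yi - p
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / len(y) for wj, g in zip(w, grad)]
    return w

# Hypothetical smallholders: [income (scaled 0-1), credit access (0/1)]
X = [[0.2, 0], [0.4, 0], [0.5, 1], [0.8, 1], [0.9, 1], [0.3, 0]]
y = [0, 0, 1, 1, 1, 0]   # 1 = adopted the technology
w = fit_logit(X, y)
print(w[1] > 0 and w[2] > 0)  # both coefficients positive, as in the abstract
```

Positive coefficients correspond to the abstract's finding that income and access to credit raise the odds of adoption; a Tobit model would replace the binary outcome with a censored intensity measure.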
Procedia PDF Downloads 454
388 Common Misconceptions around Human Immunodeficiency Virus in Rural Uganda: Establishing the Role for Patient Education Leaflets Using Patient and Staff Surveys
Authors: Sara Qandil, Harriet Bothwell, Lowri Evans, Kevin Jones, Simon Collin
Abstract:
Background: Uganda suffers from high rates of HIV. Misconceptions around HIV are known to be prevalent in Sub-Saharan Africa (SSA). Two of the most common misconceptions in Uganda are that HIV can be transmitted by mosquito bites or by sharing food. The aim of this project was to establish the local misconceptions around HIV in a Central Ugandan population and to identify whether there is a role for patient education leaflets. This project was undertaken as a student selected component (SSC) offered by Swindon Academy, based at the Great Western Hospital, to medical students in the fourth year of the undergraduate programme. Methods: The study was conducted at Villa Maria Hospital, a private, rural hospital in Kalungu District, Central Uganda. 36 patients, 23 from the hospital clinic and 13 from the community, were interviewed regarding their understanding of HIV and the channels through which they had obtained this understanding. Interviews were conducted using local student nurses as translators. Verbal responses were translated and then transcribed by the researcher. The same 36 patients then undertook a 'misconception' test consisting of 35 questions. Quantitative data were analysed using descriptive statistics, and results were scored on three components: 'transmission knowledge', 'prevention knowledge' and 'misconception rejection'. Each correct response to a question scored one point, otherwise zero; e.g., correctly rejecting a misconception scored one point, but answering 'yes' or 'don't know' scored zero. Scores ≤ 27 (the average score) were classified as indicating 'poor understanding'. Mean scores were compared between participants seen at the HIV clinic and in the community, and p-values (including Fisher's exact test) were calculated using Stata 2015. The level of significance was set at 0.05. Interviews with 7 members of staff working in the HIV clinic were undertaken to establish what methods of communication are used to educate patients.
Interviews were transcribed and thematic analysis undertaken. Results: The most common misconceptions that failed to be rejected concerned transmission of HIV by kissing (78%), mosquitoes (69%) and touching (36%). 33% believed HIV may be prevented by praying. The overall mean scores for transmission knowledge (87.5%) and prevention knowledge (81.1%) were better than the misconception rejection score (69.3%). HIV clinic respondents tended to have higher scores, i.e. fewer misconceptions, although there was statistical evidence of a significant difference only for prevention knowledge (p=0.03). Analysis of the qualitative data is ongoing, but several patients expressed concerns about not being able to read, and therefore leaflets not having a helpful role. Conclusions: Results from this paper identified that a high proportion of the population studied held misconceptions about HIV. Qualitative data suggest that there may be a role for patient education leaflets if they are pictorial-based and suitable for those with low literacy skills.
Keywords: HIV, human immunodeficiency virus, misconceptions, patient education, Sub-Saharan Africa, Uganda
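The scoring rule described above can be sketched directly: for a misconception item, rejecting it scores one point, while "yes" or "don't know" score zero, and respondents at or below the average total (27 of 35) are classed as having poor understanding. The answer list below is hypothetical.

```python
def score_misconception_item(answer):
    """One point only for correctly rejecting the misconception."""
    return 1 if answer == "no" else 0   # 'yes' or 'dont_know' score 0

def classify(total, threshold=27):
    """Totals at or below the average score indicate poor understanding."""
    return "poor understanding" if total <= threshold else "adequate"

# Hypothetical respondent over a 35-item test
answers = ["no"] * 25 + ["yes"] * 6 + ["dont_know"] * 4
total = sum(score_misconception_item(a) for a in answers)
print(total, classify(total))  # 25 poor understanding
```

Knowledge items are scored analogously (one point per correct response), so the three component scores in the abstract are just sub-totals of the same 0/1 scheme.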
Procedia PDF Downloads 259
387 Oil-Price Volatility and Economic Prosperity in Nigeria: Empirical Evidence
Authors: Yohanna Panshak
Abstract:
The impact of macroeconomic instability on economic growth and prosperity has been at the forefront of many discourses among researchers and policy makers and has generated a lot of controversy over the years. This has generated a series of research efforts towards understanding the remote causes of this phenomenon: its nature, its determinants and how it can be targeted and mitigated. While some have opined that the root cause of macroeconomic flux in Nigeria is oil-price volatility, others view the issue as resulting from a constellation of structural constraints both within and outside the shores of the country. Research works of scholars such as [Akpan (2009), Aliyu (2009), Olomola (2006), etc.] argue that oil volatility can determine economic growth or has the potential to do so. On the contrary, [Darby (1982), Cerralo (2005), etc.] share the opinion that it can slow down growth. The earlier argument rests on the understanding that for net oil-exporting economies, a price upbeat directly increases real national income through higher export earnings, whereas the latter alludes to the case of net oil-importing countries, which experience price rises, increased input costs, reduced non-oil demand, low investment, falls in tax revenues and ultimately an increase in the budget deficit, which further reduces the welfare level. Therefore, the precise impact of oil-price volatility on virtually any economy is a function of whether it is an oil-exporting or oil-importing nation. Research on oil-price volatility and its outcome on the growth of the Nigerian economy is evolving, in a march towards resolving Nigeria's macroeconomic instability as long as oil revenue remains the mainstay and driver of socio-economic engineering. Recently, a major importer of Nigeria's oil, the United States, made a historic breakthrough towards a more efficient source of energy for its economy, with the capacity to serve a significant part of the world.
This undoubtedly suggests a threat to the exchange earnings of the country, and the need to understand fluctuation in its major export commodity is critical. This paper leans on renaissance growth theory, with particular focus on the theoretical work of Lee (1998), a leading proponent of this school, who draws a clear distinction between oil-price changes and oil-price volatility. Against this background, the research seeks to empirically examine the impact of oil-price volatility on government expenditure using quarterly time series data spanning 1986:1 to 2014:4. A Vector Auto Regression (VAR) econometric approach shall be used. The structural properties of the model shall be tested using Augmented Dickey-Fuller and Phillips-Perron tests. Relevant diagnostic tests for heteroscedasticity, serial correlation and normality shall also be carried out. Policy recommendations shall be offered on the basis of the empirical findings, and it is believed these will assist policy makers not only in Nigeria but the world over.
Keywords: oil-price, volatility, prosperity, budget, expenditure
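The Augmented Dickey-Fuller pre-test mentioned above checks each series for a unit root before it enters the VAR. A stdlib-only sketch of the core Dickey-Fuller idea follows: regress the first difference of a series on its own lag; a slope near zero signals a unit root (the series must be differenced), while a strongly negative slope signals mean reversion. The data are simulated, and the real ADF test adds lagged differences, a constant and proper critical values.

```python
import random

def df_coefficient(y):
    """OLS slope of  delta-y_t = rho * y_(t-1) + error  (no constant, no lags)."""
    num = sum(y[t - 1] * (y[t] - y[t - 1]) for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

random.seed(0)
walk = [0.0]                       # random walk: has a unit root
for _ in range(500):
    walk.append(walk[-1] + random.gauss(0, 1))
stationary = [random.gauss(0, 1) for _ in range(500)]  # white noise

print(abs(df_coefficient(walk)) < 0.1)    # rho near 0: unit root suspected
print(df_coefficient(stationary) < -0.5)  # strongly mean-reverting
```

In the paper's setting, oil-price and expenditure series found non-stationary by ADF/Phillips-Perron would be differenced (or modeled in error-correction form) before VAR estimation.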
Procedia PDF Downloads 270
386 Comparative Analysis of Mechanical Properties of Paddy Rice for Different Variety-Moisture Content Interactions
Authors: Johnson Opoku-Asante, Emmanuel Bobobee, Joseph Akowuah, Eric Amoah Asante
Abstract:
In recent years, the issue of postharvest losses has become a serious concern in Sub-Saharan Africa. Postharvest technology development and adaptation need urgent attention, particularly for small and medium-scale rice farmers in Africa. However, to better develop any postharvest technology, knowledge of the mechanical properties of different varieties of paddy rice is vital. There is also the issue of the development of new rice cultivars. The objectives of this research are to (1) determine the mechanical properties of the selected paddy rice varieties at varying moisture content; (2) conduct a comparative analysis of the mechanical properties of selected paddy rice for different variety-moisture content interactions; and (3) determine the significant statistical differences between the mean values of the various variety-moisture content interactions. The mechanical properties of AGRA rice, CRI-Amankwatia, CRI-Enapa and CRI-Dartey, four local varieties developed by the Crop Research Institute (CRI) of Ghana, are compared at 11.5%, 13.0% and 16.5% dry-basis moisture content. The mechanical properties measured are sphericity, aspect ratio, grain mass, 1000-grain mass, bulk density, true density, porosity and angle of repose. Samples were collected from the Kwadaso Agric College of the CRI in Kumasi. The samples were threshed manually and winnowed before conducting the experiment. The moisture content was determined on a dry basis using the Moistex Screw-Type Digital Grain Moisture Meter. Other equipment used for data collection were vernier calipers and a Citizen electronic scale. A 4×3 factorial arrangement was used in a completely randomized design with three replications. Tukey's HSD comparison test was conducted during data analysis to compare all possible pairwise combinations of the various variety-moisture content interactions.
From the results, sphericity ranged from 0.391 for CRI-Dartey at 16.5% to 0.377 for CRI-Enapa at 13.5%, whereas aspect ratio ranged from 0.298 for CRI-Dartey at 16.5% to 0.269 for CRI-Enapa at 13.5% (both are dimensionless ratios). For grain mass, AGRA rice at 13.0% recorded 0.0312 g as the highest score and CRI-Enapa at 13.0% obtained 0.0237 g as the lowest score. The 1000-grain mass (GM1000) ranged from 29.33 g for CRI-Amankwatia at 16.5% moisture content to 22.54 g for CRI-Enapa at 16.5%. Bulk density ranged from 654.0 kg/m³ for CRI-Amankwatia at 16.5% (the highest recording) to 422.9 kg/m³ for CRI-Enapa at 11.5% (the lowest). True density ranged from 1685.8 kg/m³ for AGRA rice at 13.0% moisture content to 1352.5 kg/m³ for CRI-Enapa at 16.5%. In the case of porosity, CRI-Enapa at 11.5% received the highest score of 70.83% and CRI-Amankwatia at 16.5% received the lowest score of 55.88%. Finally, in the case of angle of repose, CRI-Amankwatia at 16.5% recorded the highest score of 47.3° and CRI-Enapa at 11.5% recorded the lowest score of 34.27°. In all cases, the difference in mean values was less than the LSD. This indicates that there were no significant statistical differences between the mean values, suggesting that technologies developed and adapted for one variety can equally be used for all the other varieties.
Keywords: angle of repose, aspect ratio, bulk density, porosity, sphericity, mechanical properties
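The porosity figures above are consistent with the standard relation between bulk and true density, porosity = (1 − ρ_bulk/ρ_true) × 100. A quick check with the reported CRI-Enapa values at 11.5% moisture; the true density of about 1450 kg/m³ used here is inferred from the reported bulk density and porosity, not a value stated in the abstract:

```python
def porosity_pct(bulk_kg_m3, true_kg_m3):
    """Porosity (%) from bulk and true (particle) density."""
    return (1.0 - bulk_kg_m3 / true_kg_m3) * 100.0

# Reported bulk density 422.9 kg/m3 and porosity 70.83% imply a true
# density of roughly 1450 kg/m3 (an inference for illustration only).
print(round(porosity_pct(422.9, 1450.0), 1))  # 70.8
```

The same relation explains why the densest bulk packing (CRI-Amankwatia at 16.5%) corresponds to the lowest porosity in the results.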
Procedia PDF Downloads 99
385 Case Study of Migrants, Cultures and Environmental Crisis
Authors: Christina Y. P. Ting
Abstract:
Migration is a global phenomenon, with movements of migrants from developed and developing countries to host societies. Migrants have changed the host countries' demography, both its population structure and its ethnic cultural diversity. Acculturation of migrants, in terms of their adoption of the host culture, is seen as important to ensure that they 'fit into' their adopted country so as to participate in everyday public life. However, this research found that the increase in China-born migrants' post-migration consumption level had an impact on Australia's environment not only because of their adoption of elements of the host culture, but also because of their retention of aspects of Chinese culture, indicating that the influence of bi-culturalism was in operation. This research, based on face-to-face interviews with 61 China-born migrants in the suburb of Box Hill, Melbourne, investigated the pattern of change in the migrants' consumption upon their settlement in Australia. Using an ecological footprint calculator, their post-migration footprints were found to be larger than their pre-migration footprints. The uniquely derived CALD (Culturally and Linguistically Diverse) Index was used to measure individuals' strength of connectedness to ethnic culture. Multivariate analysis was carried out to understand which independent factors influencing consumption best explain the change in footprint (the difference between pre- and post-migration footprints, as the dependent factor). These independent factors ranged from socio-economic and demographic variables to the cultural context, that is, the CALD Index and indicators of acculturation.
The major findings from the analysis were that Chinese culture (as measured by the CALD Index) and indicators of acculturation such as length of residency and using English in communications, besides traditional factors such as age, income and education level, made significant contributions to the large increase in the China-born group's post-migration consumption level. This paper, as part of a larger study, found that younger migrants' large change in footprint was related to high income and low level of education. This group of migrants also practiced bi-cultural consumption, retaining their ethnic culture while adopting the host culture. These findings importantly highlight that for a host society to tackle an environmental crisis, governments need not only to understand the relationship between age and consumption behaviour, but also to understand and embrace migrants' ethnic cultures, which may act as bridges and/or fences in relationships. In conclusion, for governments to deal with national issues such as an environmental crisis within a culturally diverse population, an understanding of age and of the aspects of ethnic culture that may act as bridges and fences is necessary. This understanding can aid in putting in place policies that enable the co-existence of a hybrid of the ethnic and host cultures, in order to create and maintain a harmonious and secure living environment for population groups.
Keywords: bicultural consumer, CALD index, consumption, ethnic culture, migrants
Procedia PDF Downloads 246
384 System Analysis on Compact Heat Storage in the Built Environment
Authors: Wilko Planje, Remco Pollé, Frank van Buuren
Abstract:
An increased share of renewable energy sources in the built environment implies the usage of energy buffers to match supply and demand and to prevent overloads of existing grids. Compact heat storage systems based on thermochemical materials (TCM) are promising candidates for incorporation in future installations as an alternative to regular thermal buffers, due to their high energy density (1–2 GJ/m³). In order to determine the feasibility of TCM-based systems at building level, several installation configurations are simulated and analyzed for different mixes of renewable energy sources (solar thermal, PV, wind, underground, air) for apartments/multi-storey buildings in the Dutch situation, and capacity, volume and financial costs are calculated. The simulation includes options for current and future wind power (sea and land) and local roof-attached PV or solar-thermal systems. The compact thermal buffer and optionally an electric battery (typically 10 kWhe) form the local storage elements for energy matching and shaving purposes. Besides, electric-driven heat pumps (air/ground source) can be included for efficient heat generation in the case of power-to-heat. The total local installation provides space heating, domestic hot water and electricity for a specific case of low-energy apartments (annually 9 GJth + 8 GJe) in the year 2025. The energy balance is completed with grid-supplied non-renewable electricity. Taking into account the grid capacities (permanent 1 kWe/household), the spatial requirements for the thermal buffer (< 2.5 m³/household) and a desired minimum 90% share of renewable energy per household in total consumption, the wind-powered scenario results in acceptable sizes of compact thermal buffers, with an energy capacity of 4–5 GJth per household. This buffer is combined with a 10 kWhe battery and an air source heat pump system.
Compact thermal buffers of less than 1 GJ (typically 0.5–1 m³ in volume) are possible when the installed wind power is increased by a factor of 5. In the case of a 15-fold increase in installed wind power, compact heat storage devices compete with 1000 L water buffers. The conclusion is that compact heat storage systems can be of interest in the coming decades in combination with well-retrofitted low-energy residences, based on the current trends in installed renewable energy power.
Keywords: compact thermal storage, thermochemical material, built environment, renewable energy
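The buffer sizes above follow directly from the quoted TCM energy density: at 1–2 GJ/m³, a 4–5 GJ thermal buffer occupies 2–5 m³ at the low end of the density range and 2–2.5 m³ at the high end, consistent with the stated < 2.5 m³/household space requirement. A trivial sizing check:

```python
def buffer_volume_m3(capacity_gj, energy_density_gj_per_m3):
    """Required storage volume for a given thermal capacity."""
    return capacity_gj / energy_density_gj_per_m3

print(buffer_volume_m3(5.0, 2.0))  # 2.5 m3 at the high-density end
print(buffer_volume_m3(4.0, 2.0))  # 2.0 m3
print(buffer_volume_m3(1.0, 1.0))  # 1.0 m3, the sub-1 GJ case
```

This is why the space requirement is only met toward the upper end of the TCM energy-density range; a water buffer (sensible heat, roughly 0.25 GJ/m³ over a 60 K swing) would need several times the volume.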
Procedia PDF Downloads 244
383 Relationship between Gully Development and Characteristics of Drainage Area in Semi-Arid Region, NW Iran
Authors: Ali Reza Vaezi, Ouldouz Bakhshi Rad
Abstract:
Gully erosion is a widespread and often dramatic form of soil erosion caused by water during and immediately after heavy rainfall. It occurs when flowing surface water is channelled across unprotected land and washes away the soil along the drainage lines. The formation of gullies is influenced by various factors, including climate, drainage surface area, slope gradient, vegetation cover, land use, and soil properties. It is a very important problem in semi-arid regions, where soils have lower organic matter and are weakly aggregated. Intensive agriculture and tillage along the slope can accelerate soil erosion by water in the region. There is little information on the development of gully erosion in rainfed agricultural areas. Therefore, this study was carried out to investigate the relationship between gully erosion and the morphometric characteristics of the drainage area, and the effects of soil properties and soil management factors (land use and tillage method) on gully development. A field study was conducted in a 900 km² agricultural area in Hashtroud township, located in the south of East Azerbaijan province, NW Iran. Two hundred and twenty-two gullies created in rainfed lands were found in the area. Some properties of the gullies, consisting of length, width, depth, height difference, cross-section area, and volume, were determined. Drainage areas for each gully or group of gullies were determined, and their boundaries were drawn. Additionally, the surface area of each drainage, land use, tillage direction, and soil properties that may affect gully formation were determined. The soil erodibility factor (K) defined in the Universal Soil Loss Equation (USLE) was estimated based on five soil properties (silt and very fine sand, coarse sand, organic matter, soil structure code, and soil permeability). Gully development in each drainage area was quantified using its volume and soil loss.
The dependency of gully development on drainage area characteristics (surface area, land use, tillage direction, and soil properties) was determined using correlation matrix analysis. Based on the results, gully length was the most important morphometric characteristic indicating the development of gully erosion in these lands. Gully development in the area was related to slope gradient (r = -0.26), surface area (r = 0.71), the area of rainfed lands (r = 0.23), and the area of rainfed land tilled along the slope (r = 0.24). Nevertheless, its correlation with the area of pasture and with the soil erodibility factor (K) was not significant. Among the characteristics of the drainage area, surface area is the major factor controlling gully volume in the agricultural land. No significant correlation was found between gully erosion and the soil erodibility factor (K) estimated by the Universal Soil Loss Equation (USLE). It seems the estimated soil erodibility cannot describe the susceptibility of the studied soils to the gully erosion process. In these soils, aggregate stability and soil permeability are the two soil physical properties that affect the actual soil erodibility, and consequently these soil properties can control gully erosion in the rainfed lands.
Keywords: agricultural area, gully properties, soil structure, USLE
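The K factor discussed above, estimated from the five soil properties named in the abstract, is conventionally computed with the Wischmeier-Smith nomograph equation. A sketch follows in US customary K units (multiply by about 0.1317 for SI); the soil inputs are hypothetical, not values from the study.

```python
def usle_k(silt_vfs_pct, clay_pct, om_pct, structure_code, perm_class):
    """Wischmeier-Smith nomograph approximation of the USLE K factor.
    silt_vfs_pct: % silt + very fine sand; structure_code: 1-4;
    perm_class: 1 (rapid) to 6 (very slow)."""
    m = silt_vfs_pct * (100.0 - clay_pct)   # particle-size parameter
    return (2.1e-4 * m**1.14 * (12.0 - om_pct)
            + 3.25 * (structure_code - 2)
            + 2.5 * (perm_class - 3)) / 100.0

# Hypothetical silty soil: 65% silt + very fine sand, 15% clay, 2% OM,
# fine granular structure (2), moderate permeability (3)
k = usle_k(65.0, 15.0, 2.0, 2, 3)
print(round(k, 3))
```

The study's finding that this K does not correlate with gully volume is plausible precisely because the nomograph captures sheet/rill-scale detachability rather than the aggregate stability and permeability that govern gully incision.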
Procedia PDF Downloads 77
382 Impact of Lined and Unlined Water Bodies on the Distribution and Abundance of Fresh Water Snails in Certain Governorates in Egypt
Authors: Nahed Mohamed Ismail, Bayomy Mostafa, Ahmed Abdel Kader, Ahmed Mohamed Azzam
Abstract:
The effect of lining watercourses on the distribution and abundance of freshwater snails in two Egyptian governorates, Baheria (a newly reclaimed area) and Giza, was studied. A seasonal survey of lined and unlined sites during two successive years was carried out. Samples of snails and water were collected from each examined site, and the ecological conditions were recorded. The collected snails from each site were placed in plastic aquaria and transferred to the laboratory, where they were sorted, identified, counted and examined for natural infection. The size frequency distribution was calculated for each snail species. Results revealed that snails were represented in all examined watercourses (lined and unlined) in the two tested habitats by 14 species (Biomphalaria alexandrina, B. glabrata, Bulinus truncatus, Physa acuta, Helisoma duryi, Lymnaea natalensis, Planorbis planorbis, Cleopatra bulimoids, Lanistes carinatus, Bellamya unicolor, Melanoides tuberculata, Theodoxus nilotica, Succinia cleopatra and Gabbiella senaarensis). During spring, the percentage of live snail species (45% live, 55% dead) was highly significantly lower (p<0.001) in lined water bodies than in the unlined ones (93.5% live, 6.5% dead) in the examined sites at Baheria. At Giza, the percentages of live snail species in all lined watercourses (82.6% and 60.2% during winter and spring, respectively) were significantly lower (p<0.05 and p<0.01) than those in unlined ones (91.1% and 79%, respectively). The size frequency distribution of snails collected from the lined and unlined water bodies at Baheria and Giza governorates during all seasons revealed that snail populations were stable and the recruitment of young to adult was continuing for some species, where recruits were observed with adults. However, there was no sign of small snails in the case of B. glabrata and B. alexandrina during autumn, winter and spring, and they disappeared during summer at Giza.
Meanwhile, they were completely absent during all seasons at Baheria governorate. Chemical analyses of some heavy metals in water samples collected from lined and unlined sites in Baheria and Giza governorates during autumn, winter and spring were approximately the same in both lined and unlined water bodies. However, Zn and Fe were higher in lined sites (0.78±0.37 and 17.4±4.3, respectively) than in unlined ones (0.4±0.1 and 10.95±1.93, respectively), and Cu was absent in both lined and unlined sites during summer at Baheria governorate. At Giza, Cu and Pb were absent and Fe was higher in lined sites (4.7±4.2) than in unlined ones (2.5±1.4) during summer. Statistical analysis showed no significant difference in any physico-chemical parameter of the water between lined and unlined water bodies at the two tested habitats during all seasons. However, water conductivity and TDS showed lower mean values in lined sites than in unlined ones. Thus, the present data support the concept of utilizing environmental modifications such as the lining of watercourses to help minimize the population density of certain vector snails and consequently reduce the transmission of snail-borne diseases.
Keywords: lining, fresh water, snails, watercourses
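The significance of the lined-versus-unlined differences reported above can be checked with a chi-square test of independence; a minimal sketch, where the counts are illustrative assumptions reconstructed from the reported Baheria spring percentages (45% live in lined sites vs. 93.5% in unlined), not the study's raw field data:

```python
from scipy.stats import chi2_contingency

# Rows: lined, unlined; columns: live, dead.
# Hypothetical counts (assume 100 snails sampled in lined sites and 200 in
# unlined sites) chosen only to match the reported percentages.
table = [[45, 55],     # lined:   45% live, 55% dead
         [187, 13]]    # unlined: 93.5% live, 6.5% dead
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")
```

With counts of this magnitude the test comfortably reaches the p<0.001 level reported for the spring survey.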
Procedia PDF Downloads 254
381 Multi-Scale Modeling of Ti-6Al-4V Mechanical Behavior: Size, Dispersion and Crystallographic Texture of Grains Effects
Authors: Fatna Benmessaoud, Mohammed Cheikh, Vencent Velay, Vanessa Vidal, Farhad Rezai-Aria, Christine Boher
Abstract:
Ti-6Al-4V titanium alloy is one of the most widely used materials in the aeronautical and aerospace industries. Because of its high specific strength and good fatigue and corrosion resistance, this alloy is very suitable for moderate-temperature applications. At room temperature, the mechanical behavior of Ti-6Al-4V is generally controlled by the behavior of the alpha phase (the beta phase fraction is less than 8%). The plastic strain of this phase, accommodated chiefly by crystallographic slip, can be hindered by various obstacles and mechanisms (crystal lattice friction, sessile dislocations, strengthening by solute atoms, grain boundaries…). The grain characteristics of the alpha phase (morphology and texture) and the nature of its crystal lattice (hexagonal close-packed) give the plastic strain heterogeneous, discontinuous and anisotropic characteristics at the local scale. The aim of this work is to develop a multi-scale model of Ti-6Al-4V mechanical behavior using a crystal plasticity approach; this multi-scale model is then used to investigate the effects of grain size, dispersion of grain sizes, crystallographic texture and slip system activation on Ti-6Al-4V mechanical behavior under monotonic quasi-static loading. Nine representative elementary volumes (REVs) are built to take into account the physical elements mentioned above (grain size, dispersion and crystallographic texture), and the boundary conditions of a tension test are applied. Finally, a simulation of the mechanical behavior of Ti-6Al-4V and a study of slip system activation in the alpha phase are reported. The results show that the macroscopic mechanical behavior of Ti-6Al-4V is strongly linked to the active slip system family (prismatic, basal or pyramidal). The crystallographic texture determines which family of slip systems can be activated; it therefore gives the plastic strain a heterogeneous character and thus an anisotropic macroscopic mechanical behavior to the modeled Ti-6Al-4V alloy.
Grain size also influences the mechanical properties of Ti-6Al-4V, especially the yield stress: as the grain size decreases, the yield strength increases. Finally, the grain distribution, which characterizes the morphology (homogeneous or heterogeneous), makes the deformation fields markedly heterogeneous, because crystallographic slip is easier in large grains than in small ones; this generates a localization of plastic deformation in certain areas and a concentration of stresses in others.
Keywords: multi-scale modeling, Ti-6Al-4V alloy, crystal plasticity, grain size, crystallographic texture
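The grain-size effect on yield stress described above is classically captured by the Hall-Petch relation, σ_y = σ₀ + k/√d. A minimal sketch; σ₀ and k below are illustrative placeholders, not constants fitted to Ti-6Al-4V:

```python
import numpy as np

def hall_petch(d_um, sigma0=780.0, k=350.0):
    """Yield strength (MPa) vs. grain size d (micrometres) via Hall-Petch.

    sigma_y = sigma0 + k / sqrt(d); sigma0 and k are illustrative
    placeholders, not fitted Ti-6Al-4V constants.
    """
    return sigma0 + k / np.sqrt(np.asarray(d_um, dtype=float))

# Smaller grains give a higher predicted yield strength.
for d in (20.0, 10.0, 5.0, 2.0):
    print(f"d = {d:5.1f} um -> sigma_y = {hall_petch(d):.0f} MPa")
```

The same monotonic trend is what the REV simulations reproduce when the mean grain size of the microstructure is reduced.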
Procedia PDF Downloads 157
380 Development of a Reduced Multicomponent Jet Fuel Surrogate for Computational Fluid Dynamics Application
Authors: Muhammad Zaman Shakir, Mingfa Yao, Zohaib Iqbal
Abstract:
This study proposes four jet fuel surrogates (S1, S2, S3, and S4) with a careful selection of seven large hydrocarbon fuel components, ranging from C₉ to C₁₆ with higher molecular weights and higher boiling points, reproducing the molecular size distribution of actual jet fuel. Each surrogate was composed of seven components: n-propylcyclohexane (C₉H₁₈), n-propylbenzene (C₉H₁₂), n-undecane (C₁₁H₂₄), n-dodecane (C₁₂H₂₆), n-tetradecane (C₁₄H₃₀), n-hexadecane (C₁₆H₃₄) and iso-cetane (iC₁₆H₃₄). The skeletal jet fuel surrogate reaction mechanism was developed by two approaches, the first based on a decoupling methodology that combines a C₄-C₁₆ skeletal mechanism for the oxidation of heavy hydrocarbons with a detailed H₂/CO/C₁ mechanism for predicting the oxidation of small hydrocarbons. The combined skeletal jet fuel surrogate mechanism was compressed into 128 species and 355 reactions and can therefore be used in computational fluid dynamics (CFD) simulations. Extensive validation was performed for each individual component, including ignition delay times, species concentration profiles and laminar flame speeds, against various fundamental experiments under wide operating conditions, as well as for the blended mixtures. Among all the surrogates, S1 has been extensively validated against experimental data from shock tubes, rapid compression machines, jet-stirred reactors, counterflow flames, and premixed laminar flames over wide ranges of temperature (700-1700 K), pressure (8-50 atm), and equivalence ratio (0.5-2.0) to capture the properties of the target fuel Jet-A, while the remaining three surrogates, S2, S3 and S4, have been validated against shock-tube ignition delay times only, to capture the ignition characteristics of the target fuels S-8 & GTL, IPK and RP-3, respectively.
Based on the newly proposed HyChem model, another four surrogates with similar components and compositions were developed, and the same validation data as for the previously developed surrogates were used, but at high-temperature conditions only. After testing the prediction performance of the surrogate mechanisms developed by the decoupling methodology, a comparison was made with the results of the surrogates developed with the HyChem model. All four surrogates proposed in this study showed good agreement with the experimental measurements, and the study concludes that, like the decoupling methodology, the HyChem model also has great potential for the development of oxidation mechanisms for heavy alkanes because of its applicability, simplicity, and compactness.
Keywords: computational fluid dynamics, decoupling methodology, HyChem, jet fuel, surrogate, skeletal mechanism
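Ignition delay times of the kind used for validation above are often summarized by an Arrhenius-type correlation, τ = A · p⁻ⁿ · φ⁻ᵐ · exp(Eₐ/RT). A minimal sketch; every coefficient below is an illustrative assumption, not a value fitted to Jet-A data:

```python
import numpy as np

def ignition_delay_us(T_K, p_atm, phi, A=2.0e-5, Ea_over_R=16000.0, n=0.7, m=0.3):
    """Correlation-style ignition delay (microseconds).

    tau = A * p^-n * phi^-m * exp(Ea/(R*T)); A, Ea/R, n and m are
    illustrative placeholders, not fitted jet-fuel values.
    """
    return A * p_atm ** -n * phi ** -m * np.exp(Ea_over_R / np.asarray(T_K, dtype=float))

# Delay shortens with increasing temperature and pressure, as in shock-tube data.
for T in (900.0, 1100.0, 1300.0):
    print(f"T = {T:.0f} K -> tau = {ignition_delay_us(T, 20.0, 1.0):.1f} us")
```

A form like this is only a summary of the high-temperature trend; the skeletal mechanisms themselves are what a CFD solver would integrate.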
Procedia PDF Downloads 137
379 Leveraging Multimodal Neuroimaging Techniques to in vivo Address Compensatory and Disintegration Patterns in Neurodegenerative Disorders: Evidence from Cortico-Cerebellar Connections in Multiple Sclerosis
Authors: Efstratios Karavasilis, Foteini Christidi, Georgios Velonakis, Agapi Plousi, Kalliopi Platoni, Nikolaos Kelekis, Ioannis Evdokimidis, Efstathios Efstathopoulos
Abstract:
Introduction: Advanced structural and functional neuroimaging techniques contribute to the study of anatomical and functional brain connectivity and its role in the pathophysiology and symptom heterogeneity of several neurodegenerative disorders, including multiple sclerosis (MS). Aim: In the present study, we applied multiparametric neuroimaging techniques to investigate structural and functional cortico-cerebellar changes in MS patients. Material: We included 51 MS patients (28 with clinically isolated syndrome [CIS], 31 with relapsing-remitting MS [RRMS]) and 51 age- and gender-matched healthy controls (HC) who underwent MRI on a 3.0T scanner. Methodology: The acquisition protocol included high-resolution 3D T1-weighted, diffusion-weighted imaging and echo planar imaging sequences for the analysis of volumetric, tractography and functional resting-state data, respectively. We performed between-group comparisons (CIS, RRMS, HC) using the CAT12 and CONN16 MATLAB toolboxes for the analysis of volumetric (cerebellar gray matter density) and functional (cortico-cerebellar resting-state functional connectivity) data, respectively. The Brainance suite was used for the analysis of tractography data (cortico-cerebellar white matter integrity: fractional anisotropy [FA], axial and radial diffusivity [AD; RD]) to reconstruct the cerebellar tracts. Results: Patients with CIS did not show significant gray matter (GM) density differences compared with HC. However, they showed decreased FA and increased diffusivity measures in cortico-cerebellar tracts, and increased cortico-cerebellar functional connectivity. Patients with RRMS showed decreased GM density in cerebellar regions, decreased FA and increased diffusivity measures in cortico-cerebellar WM tracts, as well as a pattern of increased and mostly decreased functional cortico-cerebellar connectivity compared to HC.
The comparison between CIS and RRMS patients revealed significant GM density differences, reduced FA and increased diffusivity measures in WM cortico-cerebellar tracts, and increased/decreased functional connectivity. The identification of decreased WM integrity and increased functional cortico-cerebellar connectivity without GM changes in CIS, and the pattern of decreased GM density, decreased WM integrity and mostly decreased functional connectivity in RRMS patients, emphasizes the role of compensatory mechanisms in early disease stages and the disintegration of structural and functional networks with disease progression. Conclusions: In conclusion, our study highlights the added value of multimodal neuroimaging techniques for the in vivo investigation of cortico-cerebellar brain changes in neurodegenerative disorders. An extension and future opportunity for leveraging multimodal neuroimaging data remains the integration of such data into recently applied machine learning approaches to more accurately classify and predict patients' disease course.
Keywords: advanced neuroimaging techniques, cerebellum, MRI, multiple sclerosis
Procedia PDF Downloads 140
378 Effects of Polydispersity on the Glass Transition Dynamics of Aqueous Suspensions of Soft Spherical Colloidal Particles
Authors: Sanjay K. Behera, Debasish Saha, Paramesh Gadige, Ranjini Bandyopadhyay
Abstract:
The zero-shear viscosity (η₀) of a suspension of hard-sphere colloids characterized by a significant polydispersity (≈10%) increases with volume fraction (ϕ) and shows a dramatic increase at ϕ = ϕg, with the system entering a colloidal glassy state. Fragility, which is a measure of how rapidly these suspensions approach the glassy state, is sensitive to the size polydispersity and the stiffness of the particles. Soft poly(N-isopropylacrylamide) (PNIPAM) particles deform in the presence of neighboring particles at volume fractions above the random close packing volume fraction of undeformed monodisperse spheres. Softness, therefore, enhances the packing efficiency of these particles. In this study, PNIPAM particles with a nearly constant swelling ratio and with polydispersities varying over a wide range (7.4%-48.9%) are synthesized to study the effects of polydispersity on the dynamics of suspensions of soft PNIPAM colloidal particles. The size and polydispersity of these particles are characterized using dynamic light scattering (DLS) and scanning electron microscopy (SEM). As these particles are deformable, their packing in aqueous suspensions is quantified in terms of an effective volume fraction (ϕeff). The zero-shear viscosity (η₀) of these colloidal suspensions, estimated from rheometric experiments as a function of ϕeff, increases with ϕeff and shows a dramatic increase at ϕeff = ϕ₀. The η₀ versus ϕeff data fit well to the Vogel-Fulcher-Tammann (VFT) equation. It is observed that increasing polydispersity results in increasingly fragile supercooled liquid-like behavior, with the parameter ϕ₀, extracted from the fits to the VFT equation, shifting towards higher ϕeff.
The observed increase in fragility is attributed to the prevalence of dynamical heterogeneities (DHs) in these polydisperse suspensions, while the simultaneous shift in ϕ₀ is ascribed to the decoupling of the dynamics of the smallest and largest particles. Finally, it is observed that the intrinsic nonlinearity of these suspensions, estimated at the third harmonic near ϕ₀ in Fourier-transform oscillatory rheology experiments, increases with polydispersity. These results are in agreement with theoretical predictions and simulation results for polydisperse hard-sphere colloidal glasses and clearly demonstrate that jammed suspensions of polydisperse colloidal particles can be effectively fluidized by increasing polydispersity. Suspensions of these particles are therefore excellent candidates for detailed experimental studies of the effects of polydispersity on the dynamics of glass formation.
Keywords: dynamical heterogeneity, effective volume fraction, fragility, intrinsic nonlinearity
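A VFT fit of the kind described above is commonly performed in log-viscosity space; a minimal sketch with synthetic data standing in for the measured η₀(ϕeff) (the generating parameters are illustrative placeholders, not the study's fitted values):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_vft(phi, log_eta_inf, D, phi0):
    """VFT form in log space: ln(eta0) = ln(eta_inf) + D*phi/(phi0 - phi)."""
    return log_eta_inf + D * phi / (phi0 - phi)

# Synthetic eta0(phi_eff) data; the "true" parameters (1e-3, 1.2, 0.62)
# are illustrative only.
rng = np.random.default_rng(0)
phi = np.linspace(0.30, 0.58, 15)
log_eta = log_vft(phi, np.log(1e-3), 1.2, 0.62) + rng.normal(0.0, 0.02, phi.size)

popt, _ = curve_fit(log_vft, phi, log_eta, p0=[np.log(1e-3), 1.0, 0.65],
                    bounds=([-20.0, 0.0, 0.59], [0.0, 10.0, 1.0]))
log_eta_inf_fit, D_fit, phi0_fit = popt
# Smaller D signals a more fragile glass former; phi0 marks the apparent
# divergence of the zero-shear viscosity.
print(f"D = {D_fit:.2f}, phi0 = {phi0_fit:.3f}")
```

Fitting the logarithm tames the enormous dynamic range of η₀ near ϕ₀, which is why the lower bound on ϕ₀ is kept above the largest measured ϕeff.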
Procedia PDF Downloads 164
377 Comparison of the Chest X-Ray and Computerized Tomography Scans Requested from the Emergency Department
Authors: Sahabettin Mete, Abdullah C. Hocagil, Hilal Hocagil, Volkan Ulker, Hasan C. Taskin
Abstract:
Objectives and Goals: An emergency department is a place where people can come for a multitude of reasons, 24 hours a day, and it is easily accessible thanks to the dedicated people who work there. However, the workload and overcrowding of emergency departments are increasing day by day. Under these circumstances, it is important to choose a quick, easily accessible and effective test for diagnosis; laboratory and imaging tests account for more than 40% of all emergency department costs. Despite all the technological advances in imaging methods and the availability of computerized tomography (CT), the chest X-ray, the older imaging method, has not lost its appeal and effectiveness for nearly all emergency physicians. Progress in imaging methods is very convenient, but physicians should consider radiation dose, cost, and effectiveness, and imaging methods should be carefully selected and used. The aim of the study was to investigate the effectiveness of the chest X-ray for immediate diagnosis against the advancing technology by comparing the chest X-ray and chest CT results of patients in the emergency department. Methods: Patients who presented to the emergency department of Bulent Ecevit University Faculty of Medicine between 1 September 2014 and 28 February 2015 were investigated retrospectively. Data were obtained via MIAMED (Clear Canvas Image Server v6.2, Toronto, Canada), the clinic's information management system in which patients' files are saved electronically, and were retrospectively reviewed. The study included 199 patients who were 18 or older and had both chest X-ray and chest CT imaging. Chest X-ray images were evaluated by the emergency medicine senior assistant in the emergency department, and the findings were recorded on the study form. CT findings were obtained from the reports already issued by the radiology department. The chest X-ray was evaluated with seven questions in terms of technique and dose adequacy.
Patients' age, gender, presenting complaints, comorbid diseases, vital signs, physical examination findings, diagnosis, chest X-ray findings and chest CT findings were evaluated. Data were recorded and statistical analyses were performed using SPSS 19.0 for Windows, with p < 0.05 accepted as statistically significant. Results: 199 patients were included in the study. Pneumonia was the most common diagnosis, found in 38.2% (n=76) of all patients. The chest X-ray imaging technique was appropriate in only 31% (n=62) of all patients. There was no statistically significant difference (p > 0.05) between the two imaging methods (chest X-ray and chest CT) in determining the rates of displacement of the trachea, pneumothorax, parenchymal consolidation, increased cardiothoracic ratio, lymphadenopathy, diaphragmatic hernia, free air levels in the abdomen (in sections including the image), pleural thickening, parenchymal cyst, parenchymal mass, parenchymal cavity, parenchymal atelectasis and bone fractures. Conclusions: When the imaging findings of cases that needed to be quickly diagnosed were investigated, chest X-ray and chest CT findings matched at a high rate in patients imaged with an appropriate technique. However, chest X-rays evaluated in the emergency department were frequently taken with an inappropriate technique.
Keywords: chest x-ray, chest computerized tomography, chest imaging, emergency department
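Agreement between two imaging modalities of the kind reported above is often quantified with Cohen's kappa; a minimal sketch on hypothetical per-patient findings (the arrays below are invented for illustration, not the study's data):

```python
import numpy as np

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary raters/modalities."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                          # observed agreement
    pe = (np.mean(a) * np.mean(b)
          + (1 - np.mean(a)) * (1 - np.mean(b)))  # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical presence/absence of one finding (e.g. pneumothorax) per patient.
xray = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]
ct   = [0, 0, 1, 0, 1, 0, 1, 1, 0, 0]
print(f"kappa = {cohens_kappa(xray, ct):.3f}")
```

Kappa discounts the agreement that two modalities would reach by chance alone, which raw match rates do not.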
Procedia PDF Downloads 192
376 Investigation of Alumina Membrane Coated Titanium Implants on Osseointegration
Authors: Pinar Erturk, Sevde Altuntas, Fatih Buyukserin
Abstract:
In order to obtain effective integration between an implant and bone, implant surfaces should have properties similar to those of bone tissue. In particular, mimicry of the chemical, mechanical and topographic properties of the bone by the implant is crucial for fast and effective osseointegration. Titanium-based biomaterials are preferred in clinical use, and there are studies on coating these implants with oxide layers whose chemical/nanotopographic properties stimulate cell interactions for enhanced osseointegration. Current implantations have low success rates, especially in craniofacial applications, which involve large and vital zones; an oxide layer coating increases bone-implant integration, providing long-lasting implants that do not require revision surgery. Our aim in this study is to examine bone-cell behavior on titanium implants carrying an anodic aluminum oxide (AAO) layer and their osseointegration potential for the repair of large zones where spontaneous healing is difficult. In our study, aluminum-coated titanium surfaces were anodized in sulfuric, phosphoric, and oxalic acid, the most commonly used AAO anodization electrolytes. After morphological, chemical, and mechanical characterization of the AAO-coated Ti substrates, the viability, adhesion, and mineralization of adult bone cells on these substrates were analyzed. In addition, with atomic layer deposition (ALD) as a sensitive and conformal technique, these surfaces were coated with pure alumina (5 nm); thus, cell studies were also performed on ALD-coated nanoporous oxide layers with suppressed ionic content. Lastly, in order to investigate the effect of topography on cell behavior, flat non-porous alumina layers formed on silicon wafers by ALD were compared with the porous ones. The cell viability ratio was similar among the anodized surfaces, but pure-alumina-coated titanium and anodized surfaces showed a higher viability ratio than bare titanium and bare anodized ones.
Alumina-coated titanium surfaces anodized in phosphoric acid showed significantly different mineralization ratios after 21 days compared with bare titanium and with titanium surfaces anodized in the other electrolytes. Bare titanium was the surface with the second-highest mineralization ratio, whereas titanium anodized in oxalic acid electrolyte demonstrated the lowest mineralization. No significant difference was found between bare titanium and the anodized surfaces except for the AAO titanium surface anodized in phosphoric acid. Currently, the osteogenic activities of these cells are being investigated at the genetic level by quantitative real-time polymerase chain reaction (qRT-PCR) analysis of the RUNX-2, VEGF, OPG, and osteopontin genes. Western blotting will also be used to detect the proteins expressed as a result of the activities of these genes. Acknowledgment: The project is supported by The Scientific and Technological Research Council of Turkey.
Keywords: alumina, craniofacial implant, MG-63 cell line, osseointegration, oxalic acid, phosphoric acid, sulphuric acid, titanium
Procedia PDF Downloads 131
375 Creative Resolutions to Intercultural Conflicts: The Joint Effects of International Experience and Cultural Intelligence
Authors: Thomas Rockstuhl, Soon Ang, Kok Yee Ng, Linn Van Dyne
Abstract:
Intercultural interactions are often challenging and fraught with conflicts. To shed light on how to interact effectively across cultures, academics and practitioners alike have advanced a plethora of intercultural competence models. However, the majority of this work has emphasized distal outcomes, such as job performance and cultural adjustment, rather than proximal outcomes, such as how individuals resolve inevitable intercultural conflicts. As a consequence, the processes by which individuals negotiate challenging intercultural conflicts are not well understood. The current study advances theorizing on intercultural conflict resolution by exploring antecedents of how people resolve intercultural conflicts. To this end, we examine creativity – the generation of novel and useful ideas – in the context of resolving cultural conflicts in intercultural interactions. Based on the dual-identity theory of creativity, we propose that individuals with greater international experience will display greater creativity, and that this relationship is accentuated by an individual's cultural intelligence. Two studies test these hypotheses. The first study comprises 84 senior university students, drawn from an international organizational behavior course. The second study replicates the findings of the first in a sample of 89 executives from eleven countries. Participants in both studies provided protocols of their strategies for resolving two intercultural conflicts, as depicted in two multimedia vignettes of challenging intercultural work-related interactions. Two research assistants, trained in intercultural management but blind to the study hypotheses, coded all strategies for their novelty and usefulness, following scoring procedures for creativity tasks. Participants also completed online surveys of demographic background information, including their international experience, and cultural intelligence.
Hierarchical linear modeling showed that, surprisingly, while international experience is positively associated with usefulness, it is unrelated to novelty. Further, a person's cultural intelligence strengthens the positive effect of international experience on usefulness and mitigates the effect of international experience on novelty. Theoretically, our findings offer an important extension to the dual-identity theory of creativity by identifying cultural intelligence as an individual-difference moderator that qualifies the relationship between international experience and creative conflict resolution. In terms of novelty, individuals higher in cultural intelligence seem less susceptible to the rigidity effects of international experience. Perhaps they are more capable of assessing which aspects of culture are relevant and of applying relevant experiences when they brainstorm novel ideas. In terms of usefulness, individuals high in cultural intelligence are better able to leverage their international experience to assess the viability of their ideas, because their richer and more organized cultural knowledge structure allows them to assess possible options more efficiently and accurately. In sum, our findings suggest that cultural intelligence is an important and promising intercultural competence that fosters creative resolutions to intercultural conflicts. We hope that our findings stimulate future research on creativity and conflict resolution in intercultural contexts.
Keywords: cultural intelligence, intercultural conflict, intercultural creativity, international experience
Procedia PDF Downloads 148
374 How Virtualization, Decentralization, and Network-Building Change the Manufacturing Landscape: An Industry 4.0 Perspective
Authors: Malte Brettel, Niklas Friederichsen, Michael Keller, Marius Rosenberg
Abstract:
The German manufacturing industry has to withstand increasing global competition on product quality and production costs. As labor costs are high, several industries have suffered severely from the relocation of production facilities to aspiring countries, which have managed to close the productivity and quality gap substantially. Established manufacturing companies have recognized that customers are not willing to pay large price premiums for incremental quality improvements. As a consequence, many companies in the German manufacturing industry adjust their production to focus on customized products and fast time to market. Leveraging the advantages of novel production strategies such as agile manufacturing and mass customization, manufacturing companies transform into integrated networks in which companies unite their core competencies. Here, virtualization of the process and supply chain ensures smooth inter-company operations, providing real-time access to relevant product and production information for all participating entities. Boundaries between companies blur as autonomous systems exchange data gathered by embedded systems throughout the entire value chain. With the inclusion of cyber-physical systems, advanced communication between machines becomes tantamount to their dialogue with humans. The increasing utilization of information and communication technology allows digital engineering of products and production processes alike. Modular simulation and modeling techniques allow decentralized units to flexibly alter products and thereby enable rapid product innovation. The present article describes the developments of Industry 4.0 within the literature and reviews the associated research streams. To this end, we analyze eight scientific journals with regard to the following research fields: individualized production, end-to-end engineering in a virtual process chain, and production networks.
We employ cluster analysis to assign sub-topics to the respective research fields. To assess the practical implications, we conducted face-to-face interviews with managers from industry as well as from the consulting business, using a structured interview guideline. The results reveal reasons for the adoption or rejection of Industry 4.0 practices from a managerial point of view. Our findings contribute to the emerging research stream of Industry 4.0 and support decision-makers in assessing their need for transformation towards Industry 4.0 practices.
Keywords: Industry 4.0, mass customization, production networks, virtual process-chain
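A cluster analysis of the kind described, assigning article sub-topics to research fields, can be sketched as TF-IDF vectorization followed by k-means; the titles below are invented stand-ins for the reviewed journal corpus, not the articles actually analyzed:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical article titles standing in for the journal corpus.
titles = [
    "mass customization in smart factories",
    "customer-driven individualized production",
    "virtual process chain for digital engineering",
    "end-to-end engineering in virtual environments",
    "collaborative production networks of core competencies",
    "supply network integration across company boundaries",
]
X = TfidfVectorizer().fit_transform(titles)       # sparse term-weight matrix
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for title, label in zip(titles, labels):
    print(label, title)
```

The three clusters play the role of the three research fields named above; in practice the assignment would of course be validated manually.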
Procedia PDF Downloads 277
373 Efficiency of Maritime Simulator Training in Oil Spill Response Competence Development
Authors: Antti Lanki, Justiina Halonen, Juuso Punnonen, Emmi Rantavuo
Abstract:
Marine oil spill response operations require extensive vessel maneuvering and navigation skills. At-sea oil containment and recovery involve both single-vessel and multi-vessel operations. Towing long oil containment booms, several hundred meters in length, is a challenge in itself. Boom deployment and towing in multi-vessel configurations is an added challenge that requires precise coordination and control of the vessels. Efficient communication, as a prerequisite for shared situational awareness, is needed in order to execute the response task effectively. Practical training is needed to gain and maintain adequate maritime skills. Field exercises are the most effective way of learning, but the related vessel operations in particular are resource-intensive and costly. Field exercises may also be affected by environmental limitations such as high sea states or other adverse weather conditions. In Finland, the seasonal ice coverage also limits the training period to the summer season. In addition, the environmental sensitivity of the sea area restricts the use of real oil or other target substances. This paper examines whether maritime simulator training can offer a complementary method to overcome the training challenges related to field exercises. The objective is to assess the efficiency and the learning impact of simulator training, and the specific skills that can be trained most effectively in simulators. This paper provides an overview of learning results from two oil spill response pilot courses in which maritime navigational bridge simulators, equipped with an oil spill functionality module, were used to train the oil spill response authorities. The courses were targeted at the coastal Fire and Rescue Services responsible for near-shore oil spill response in Finland. The competence levels of the participants were surveyed before and after the course in order to measure potential shifts in competencies due to the simulator training.
In addition to the quantitative analysis, the efficiency of the simulator training is evaluated qualitatively through feedback from the participants. The results indicate that simulator training is a valid and effective method for developing marine oil spill response competencies and complements traditional field exercises. Simulator training provides a safe environment for assessing various oil containment and recovery tactics. One of the main benefits of the simulator training was found to be the immediate feedback the spill modelling software provides on oil spill behaviour in reaction to response measures.
Keywords: maritime training, oil spill response, simulation, vessel manoeuvring
Procedia PDF Downloads 172
372 Response Analysis of a Steel Reinforced Concrete High-Rise Building during the 2011 Tohoku Earthquake
Authors: Naohiro Nakamura, Takuya Kinoshita, Hiroshi Fukuyama
Abstract:
The 2011 off the Pacific Coast of Tohoku Earthquake caused considerable damage to wide areas of eastern Japan, and a large number of earthquake observation records were obtained at various places. To design more earthquake-resistant buildings and improve earthquake disaster prevention, it is necessary to utilize these data to analyze and evaluate the behavior of a building during an earthquake. This paper presents an earthquake response simulation analysis (hereafter a seismic response analysis) that was conducted using data recorded during the main earthquake (hereafter the main shock) as well as the earthquakes before and after it. The data were obtained at a high-rise steel-reinforced concrete (SRC) building in the bay area of Tokyo. We first give an overview of the building, along with the characteristics of the earthquake motion and the building during the main shock. The data indicate that there was a change in the natural period before and after the earthquake. Next, we present the results of our seismic response analysis. First, the analysis model and conditions are shown, and then the analysis result is compared with the observational records. Using the analysis result, we then study the effect of soil-structure interaction on the response of the building. We identify the characteristics of the building during the earthquake (i.e., the first natural period and the first damping ratio) with the Auto-Regressive eXogenous (ARX) model and compare the analysis result with the observational records to evaluate the accuracy of the response analysis. In this study, a lumped-mass system SR model was used to conduct a seismic response analysis using observational data as input waves. The main results of this study are as follows: 1) The observational records of the 3/11 main shock put it between a level 1 and level 2 earthquake.
The result of the ground response analysis showed that the maximum shear strain in the ground was about 0.1% and that the possibility of liquefaction occurring was low. 2) During the 3/11 main shock, the observed wave showed that the eigenperiod of the building became longer; this behavior could be generally reproduced in the response analysis. The prolonged eigenperiod was due to the nonlinearity of the superstructure, and the effect of the nonlinearity of the ground seems to have been small. 3) For the 4/11 aftershock, a continuous analysis was conducted in which the subject seismic wave was input after the 3/11 main shock. The analyzed values generally corresponded well with the observed values, which means that the effect of the nonlinearity of the main shock was retained by the building; it is important to consider this when conducting the response evaluation. 4) The first period and the damping ratio during a vibration were evaluated by an ARX model. Our results show that the response analysis model in this study is generally good at estimating changes in the response of the building during a vibration.
Keywords: ARX model, response analysis, SRC building, the 2011 off the Pacific Coast of Tohoku Earthquake
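ARX identification of a first natural period and damping ratio, as used above, can be sketched as a least-squares fit of a second-order ARX model followed by mapping the discrete poles to continuous time. This is an illustrative sketch on synthetic data, not the authors' implementation:

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares fit of ARX(na, nb): y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j]."""
    n = max(na, nb)
    Phi = np.array([np.concatenate([y[k-na:k][::-1], u[k-nb:k][::-1]])
                    for k in range(n, len(y))])
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta[:na], theta[na:]

def modal_from_ar(a, dt):
    """First natural period (s) and damping ratio from 2nd-order AR coefficients."""
    z = np.roots([1.0, -a[0], -a[1]])   # discrete-time poles
    s = np.log(z[0]) / dt               # map to a continuous-time pole
    wn = abs(s)
    return 2.0 * np.pi / wn, -s.real / wn

# Demo on a synthetic SDOF record (period 3 s, 5% damping; assumed values).
dt, zeta, wn = 0.02, 0.05, 2.0 * np.pi / 3.0
wd = wn * np.sqrt(1.0 - zeta**2)
a1 = 2.0 * np.exp(-zeta * wn * dt) * np.cos(wd * dt)
a2 = -np.exp(-2.0 * zeta * wn * dt)
rng = np.random.default_rng(1)
u = rng.normal(size=2000)               # input motion (white noise stand-in)
y = np.zeros(2000)
for k in range(2, 2000):
    y[k] = a1 * y[k-1] + a2 * y[k-2] + 0.5 * u[k-1] + 0.2 * u[k-2]

a, b = fit_arx(y, u)
T, z_ratio = modal_from_ar(a, dt)
print(f"period = {T:.2f} s, damping = {z_ratio:.3f}")
```

Running the identification in moving windows over a recorded response is what reveals the eigenperiod lengthening described in the abstract.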
Procedia PDF Downloads 164