Search results for: magnitude of agreement

2176 Finite Element Modeling of the Effects of Loss of Rigid Pavements Slab Support Due to Built-In Curling

Authors: Ali Ashtiani, Cesar Carrasco

Abstract:

Accurate determination of thermo-mechanical responses of jointed concrete pavement slabs is essential to implement an effective mechanistic design. Temperature-induced curling of concrete slabs can produce premature top-down cracking in rigid pavements. Curling of concrete slabs can result from daily temperature variation through the slab thickness. The slab curling can also result from temperature gradients due to hot weather construction, drying shrinkage, and creep that are permanently built into the slabs. The existence of permanent curling implies that concrete slabs are not flat at zero temperature gradient. In this case, slabs may not be in full contact with the underlying base layer when subjected to traffic. Built-in curling can be a major factor producing loss of slab support. The magnitude of stresses induced in slabs is influenced by the stiffness of the underlying foundation layers and the contact condition along the slab-foundation interface. An approach for finite element modeling of the effect of loss of slab support due to built-in curling is presented in this paper. A series of parametric studies is carried out for a pavement system loaded with a combination of traffic and thermal loads, considering different built-in curling and different foundation rigidities. The results explain the effect of loss of support on the magnitude of stresses produced in concrete slabs. The results of the parametric study can also be used to evaluate whether the governing equations that are used to idealize the behavior of jointed concrete pavements and the effect of loss of support have been accurately selected and implemented in the finite element model.
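
As a rough hand-check on the curling stresses such finite element studies report, Westergaard/Bradbury's classical formula is often used. A minimal Python sketch (all inputs and coefficients below are illustrative assumptions, not values from the paper):

```python
# Illustrative estimate of temperature-curling stress in a concrete slab
# using Bradbury's classical formula (a common hand-check for FE results):
#   sigma = E * alpha * dT / (2 * (1 - mu**2)) * (Cx + mu * Cy)
# All input values below are assumptions for illustration only.
E = 30e9           # concrete modulus of elasticity, Pa
mu = 0.15          # Poisson's ratio
alpha = 1e-5       # coefficient of thermal expansion, 1/degC
dT = 12.0          # temperature differential through the slab, degC
Cx, Cy = 1.0, 0.8  # Bradbury coefficients (depend on slab/foundation geometry)

sigma_curl = E * alpha * dT / (2 * (1 - mu**2)) * (Cx + mu * Cy)
print(f"Curling stress ~ {sigma_curl / 1e6:.2f} MPa")
```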

Keywords: built-in curling, finite element modeling, loss of slab support, rigid pavement

Procedia PDF Downloads 126
2175 Adsorption and Desorption Behavior of Ionic and Nonionic Surfactants on Polymer Surfaces

Authors: Giulia Magi Meconi, Nicholas Ballard, José M. Asua, Ronen Zangi

Abstract:

Experimental and computational studies are combined to elucidate the adsorption properties of ionic and nonionic surfactants on a hydrophobic polymer surface such as poly(styrene). To represent these two types of surfactants, sodium dodecyl sulfate and poly(ethylene glycol)-block-poly(ethylene), both commonly utilized in emulsion polymerization, were chosen. By applying quartz crystal microbalance with dissipation monitoring, it is found that, at low surfactant concentrations, it is easier to desorb (as measured by rate) ionic surfactants than nonionic surfactants. From molecular dynamics simulations, the effective, attractive force of these nonionic surfactants to the surface increases as their concentration decreases, whereas the ionic surfactant exhibits mildly the opposite trend. The contrasting behavior of ionic and nonionic surfactants critically relies on two observations obtained from the simulations. The first is that there is a large degree of interweavement between head and tail groups in the adsorbed layer formed by the nonionic surfactant (PEO/PE systems). The second is that water molecules penetrate this layer. In the disordered layer that these nonionic surfactants generate at the surface, only oxygens of the head groups present at the interface with the water phase, or oxygens next to the penetrating waters, can form hydrogen bonds. Oxygens inside this layer lose this favorable energy, with a magnitude that increases with the surfactant density at the interface. This reduced stability of the surfactants diminishes their driving force for adsorption. All of this is shown to be in accordance with experimental results on the dynamics of surfactant desorption. Ionic surfactants assemble into an ordered structure, and their attraction to the surface is even slightly augmented at higher surfactant concentration, in agreement with the experimentally determined adsorption isotherm. The reason these two types of surfactants behave differently is that the ionic surfactant has a small head group that is strongly hydrophilic, whereas the head groups of the nonionic surfactants are large and only weakly attracted to water.

Keywords: emulsion polymerization process, molecular dynamics simulations, polymer surface, surfactants adsorption

Procedia PDF Downloads 314
2174 Quantifying Fatigue during Periods of Intensified Competition in Professional Ice Hockey Players: Magnitude of Fatigue in Selected Markers

Authors: Eoin Kirwan, Christopher Nulty, Declan Browne

Abstract:

The professional ice hockey season consists of approximately 60 regular season games, with periods of fixture congestion occurring several times in the average season. These periods of congestion provide limited time for recovery, exposing the athletes to the risk of competing whilst not fully recovered. Although a body of research is growing with respect to monitoring fatigue, particularly during periods of congested fixtures in team sports such as rugby and soccer, it has received little to no attention thus far in ice hockey athletes. Consequently, there is limited knowledge on monitoring tools that might effectively detect a fatigue response and the magnitude of fatigue that can accumulate when recovery is limited by competitive fixtures. The benefit of quantifying and establishing fatigue status is the ability to optimise training and provide pertinent information on player health, injury risk, availability, and readiness. Some commonly used methods to assess fatigue and recovery status of athletes include the use of perceived fatigue and wellbeing questionnaires, tests of muscular force, and ratings of perceived exertion (RPE). These measures are widely used in popular team sports such as soccer and rugby and show promise as assessments of fatigue and recovery status for ice hockey athletes. As part of a larger study, this study explored the magnitude of changes in adductor muscle strength after game play and throughout a period of fixture congestion and examined the relationship between internal game load and perceived wellbeing with adductor muscle strength. Methods: Eight professional ice hockey players from a British Elite League club volunteered to participate (age = 29.3 ± 2.49 years, height = 186.15 ± 6.75 cm, body mass = 90.85 ± 8.64 kg). Prior to and after competitive games, each player performed trials of the adductor squeeze test at 0˚ hip flexion with the lead investigator using hand-held dynamometry. Rating of perceived exertion was recorded for each game, and individual session RPE was calculated from data on total ice time. After each game, players completed a 5-point questionnaire to assess perceived wellbeing. Data were collected from six competitive games, one practice, and 36 hours post the final game, over a 10-day period. Results: Pending final data collection in February. Conclusions: Pending final data collection in February.
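
For context, session RPE load of the kind described above is conventionally computed as the product of the rating of perceived exertion and the playing time. A minimal sketch (the values are hypothetical, not the study's data):

```python
# Session RPE (sRPE) training load: RPE (CR-10 scale) x duration in minutes.
# Values below are hypothetical examples, not data from this study.
games = [
    {"player": "A", "rpe": 7, "ice_time_min": 18.5},
    {"player": "A", "rpe": 8, "ice_time_min": 21.0},
]

for g in games:
    g["srpe_load"] = g["rpe"] * g["ice_time_min"]  # arbitrary units (AU)
    print(f'{g["player"]}: sRPE load = {g["srpe_load"]:.1f} AU')
```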

Keywords: congested fixtures, fatigue monitoring, ice hockey, readiness

Procedia PDF Downloads 109
2173 The Effects of a Nursing Dignity Care Program on Patients’ Dignity in Care

Authors: Yea-Pyng Lin

Abstract:

Dignity is a core element of nursing care. Maintaining the dignity of patients is an important issue because the health and recovery of patients can be adversely affected by a lack of dignity in their care. The aim of this study was to explore the effects of a nursing dignity care program upon patients' dignity in care. A quasi-experimental research design was implemented. Nurses were recruited by purposive sampling, and their patients were recruited by simple random sampling. Nurses in the experimental group received the nursing educational program on dignity care, while nurses in the control group received in-service education as usual. Data were collected via two instruments: the dignity in care scale for nurses and the dignity in care scale for patients, both of which were developed by the researcher. Both questionnaires consisted of three domains: agreement, importance, and frequency of providing dignity care. A total of 178 nurses in the experimental group and 193 nurses in the control group completed the pretest and the follow-up evaluations at the first month, the third month, and the sixth month. The number of patients who were cared for by the nurses in the experimental group was 94 in the pretest. The numbers of patients in the post-test at the first, third, and sixth months were 91, 85, and 77, respectively. In the control group, 88 patients completed the pretest, and 80 filled out the post-test at the first month, 77 at the third, and 74 at the sixth month. The major findings revealed that the scores of the agreement domain among nurses in the experimental group were significantly different from those in the control group at each point of time. The scores of the importance domain between these two groups also displayed significant differences at pretest and at the first month of post-test. Moreover, the frequencies of providing dignity care to patients were significant at pretest and at the third and sixth months of post-test. However, the experimental group was only significantly different from the control group on the frequencies of receiving dignity care, especially in the items of 'privacy care,' 'communication care,' and 'emotional care' for the patients. The results show that the nursing program on dignity care could increase nurses' dignity care for patients in the three domains of agreement, importance, and frequency of providing dignity care. For patients, only the frequencies of receiving dignity care were significantly increased. Therefore, the nursing program on dignity care could be applicable for nurses' in-service education and practice to enhance the ability of nurses to care for patients' dignity.

Keywords: nurses, patients, dignity care, quasi-experimental, nursing education

Procedia PDF Downloads 433
2172 Analysis and Quantification of Historical Drought for Basin Wide Drought Preparedness

Authors: Joo-Heon Lee, Ho-Won Jang, Hyung-Won Cho, Tae-Woong Kim

Abstract:

Drought is a recurrent climatic feature that occurs in virtually every climatic zone around the world. Korea experiences drought almost every year at the regional scale, mainly during the winter and spring seasons. Moreover, extremely severe droughts at a national scale also occurred at a frequency of six to seven years. Various drought indices have been developed as tools to quantitatively monitor different types of droughts and are utilized in the field of drought analysis. Since drought is closely related to the climatological and topographic characteristics of the drought-prone areas, the basins where droughts frequently occur need separate drought preparedness and contingency plans. In this study, an analysis using statistical methods was carried out for the historical droughts that occurred in the five major river basins in Korea so that drought characteristics could be quantitatively investigated. It was also aimed to provide information with which differentiated and customized drought preparedness plans can be established based on the basin-level analysis results. Conventional methods of quantifying drought carry out an evaluation by applying various drought indices. However, the evaluation results for the same drought event differ according to the analysis technique. In particular, the evaluation of a drought event differs depending on how we view the severity or duration of the drought in the evaluation process. Therefore, it was intended to draw a drought history for the most severely affected five major river basins of Korea by investigating a magnitude of drought that can simultaneously consider severity, duration, and the damaged areas by applying drought run theory with the use of the SPI (Standardized Precipitation Index), which efficiently quantifies meteorological drought. Further, quantitative analysis of the historical extreme droughts from various viewpoints, such as average severity, duration, and magnitude of drought, was attempted. At the same time, it was intended to quantitatively analyze the historical drought events by estimating the return period from the SDF (severity-duration-frequency) curve derived for the five major river basins through parametric regional drought frequency analysis. Analysis results showed that the extremely severe drought years were 1962, 1988, 1994, and 2014 in the Han River basin. The extreme droughts occurred in 1982 and 1988 in the Nakdong River basin, 1994 in the Geum River basin, 1988 and 1994 in the Youngsan River basin, and 1988, 1994, 1995, and 2000 in the Seomjin River basin. At the national level, the extremely severe drought years in the Korean Peninsula were 1988 and 1994. The most damaging droughts were in 1981-1982 and 1994-1995, which lasted longer than two years. The return period of the most severe drought at each river basin turned out to be at a frequency of 50-100 years.
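
Run theory of the kind applied here extracts drought events from an index series as uninterrupted runs below a truncation level: duration is the run length, severity the accumulated deficit, and intensity their ratio. A minimal sketch over a monthly SPI series (threshold and values are illustrative assumptions):

```python
# Drought run theory over a monthly SPI series: an event is a run of
# consecutive months with SPI below a threshold; duration = run length,
# severity = accumulated deficit, intensity = severity / duration.
# The SPI values and the -1.0 threshold below are illustrative only.
spi = [0.3, -0.2, -1.4, -1.8, -0.9, -1.2, 0.5, 1.1, -1.1, -1.6, -2.0, 0.2]
THRESHOLD = -1.0

events, run = [], []
for value in spi + [0.0]:           # sentinel to flush a trailing run
    if value < THRESHOLD:
        run.append(value)
    elif run:
        severity = -sum(run)        # positive accumulated deficit
        events.append({"duration": len(run), "severity": severity,
                       "intensity": severity / len(run)})
        run = []

for e in events:
    print(e)
```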

Keywords: drought magnitude, regional frequency analysis, SPI, SDF (severity-duration-frequency) curve

Procedia PDF Downloads 375
2171 Electrochemical Determination of Caffeine Content in Ethiopian Coffee Samples Using Lignin Modified Glassy Carbon Electrode

Authors: Meareg Amare, Senait Aklog

Abstract:

Lignin film was deposited at the surface of the glassy carbon electrode potentiostatically. In contrast to the unmodified glassy carbon electrode, an oxidative peak with an improved current and overpotential for caffeine at the modified electrode showed catalytic activity of the modifier towards oxidation of caffeine. Linear dependence of peak current on caffeine concentration in the range 6 × 10⁻⁶ to 100 × 10⁻⁶ mol L⁻¹ with determination coefficient and method detection limit (LoD = 3 s/slope) of 0.99925 and 8.37 × 10⁻⁷ mol L⁻¹, respectively, supplemented by recovery results of 93.79–102.17%, validated the developed method. An attempt was made to determine the caffeine content of aqueous coffee extracts of Ethiopian coffees grown in four coffee-cultivating localities (Wonbera, Wolega, Finoteselam, and Zegie) and hence to evaluate the correlation between users' preference and caffeine content. In agreement with reported works, caffeine contents (w/w%) of 0.164 in Wonbera coffee, 0.134 in Wolega coffee, 0.097 in Finoteselam coffee, and 0.089 in Zegie coffee were detected, confirming the applicability of the developed method for determination of caffeine in a complex matrix environment. The result indicated that users' highest preference for Wonbera and least preference for Zegie cultivated coffees are in agreement with the caffeine content.
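
The calibration and detection-limit arithmetic quoted above (LoD = 3 s/slope) can be reproduced as follows. The concentrations and currents are fabricated for illustration, and s is taken here as the standard error of the regression (a blank SD is also commonly used):

```python
import numpy as np

# Linear calibration and method detection limit (LoD = 3*s/slope).
# Concentrations and currents below are fabricated for illustration.
conc = np.array([6, 20, 40, 60, 80, 100]) * 1e-6      # mol/L
current = np.array([0.8, 2.6, 5.1, 7.7, 10.2, 12.8])  # microamperes (hypothetical)

slope, intercept = np.polyfit(conc, current, 1)
residuals = current - (slope * conc + intercept)
s = residuals.std(ddof=2)          # standard error of the regression (n - 2 dof)
lod = 3 * s / slope                # mol/L

print(f"slope = {slope:.3e} uA L/mol, LoD = {lod:.2e} mol/L")
```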

Keywords: electrochemical, lignin, caffeine, electrode

Procedia PDF Downloads 80
2170 Understanding ASPECTS of Stroke: Interrater Reliability between Emergency Medicine Physician and Radiologist in a Rural Setup

Authors: Vineel Inampudi, Arjun Prakash, Joseph Vinod

Abstract:

Aims and Objectives: To evaluate the interrater reliability in grading the ASPECTS score, between the emergency medicine physician at first contact and the radiologist, among patients with acute ischemic stroke. Materials and Methods: We conducted a retrospective analysis of 86 acute ischemic stroke cases referred to the Department of Radiodiagnosis between November 2014 and January 2016. The imaging (plain CT scan) was performed using a GE Bright Speed Elite 16-slice CT scanner. The ASPECTS score was calculated separately by an emergency medicine physician and a radiologist. Interrater reliability for total and dichotomized ASPECTS (≥ 6 and < 6) scores was assessed using statistical analysis (ICC and Cohen κ coefficients) on SPSS software (v17.0). Results: Interrater agreement for total and dichotomized ASPECTS was substantial (ICC 0.79 and Cohen κ 0.68) between the emergency physician and radiologist. The mean difference in ASPECTS between the two readers was only 0.15, with a standard deviation of 1.58. No proportionality bias was detected. A Bland-Altman plot was constructed to demonstrate the distribution of ASPECTS differences between the two readers. Conclusion: Substantial interrater agreement was noted in grading ASPECTS between the emergency medicine physician at first contact and the radiologist, thereby confirming its robustness even in a rural setting.
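
As an illustration of the dichotomized agreement statistic used here, a minimal sketch of Cohen's kappa (the ratings are fabricated, not the study's):

```python
# Cohen's kappa for dichotomized ASPECTS (>= 6 vs < 6) between two readers.
# The ratings below are fabricated for illustration.
def cohens_kappa(a, b):
    n = len(a)
    labels = sorted(set(a) | set(b))
    p_o = sum(x == y for x, y in zip(a, b)) / n                     # observed
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance
    return (p_o - p_e) / (1 - p_e)

physician   = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 1 = ASPECTS >= 6
radiologist = [1, 1, 0, 1, 1, 1, 1, 0, 0, 1]
print(f"kappa = {cohens_kappa(physician, radiologist):.2f}")
```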

Keywords: ASPECTS, computed tomography, MCA territory, stroke

Procedia PDF Downloads 209
2169 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA that uses catalogues to develop area or smoothed-seismicity sources is limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address the long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation, and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; for low strain rate regions where such data are scarce, this is especially challenging. Integrating faults in PSHA requires conversion of the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled using a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, with a rate defined by the rate in the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, with a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected magnitude threshold may potentially occur in the background and not only at the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, could rupture during a single fault-to-fault rupture. It is then essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorically in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology to calculate the earthquake rates in a fault system where the slip-rate budget of each fault is converted into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model to analyse the impact on the seismic hazard and, through sensitivity studies, better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected, which is in an area of moderate to high seismicity (southeast of France) and where the fault is assumed to have a low strain rate.
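
The bookkeeping behind such slip-rate conversions is a seismic-moment budget. A minimal sketch under a deliberately crude single-magnitude assumption (SHERIFS itself distributes the budget over many single-fault and fault-to-fault rupture scenarios; all numbers are illustrative):

```python
# Seismic-moment budget behind slip-rate-to-rate conversions: the moment
# accumulated by slip (mu * area * slip_rate) is released by earthquakes
# of moment M0(Mw). A single characteristic magnitude is assumed here for
# simplicity. All numbers are illustrative.
MU = 3.0e10                  # crustal shear modulus, Pa
area = 30e3 * 12e3           # fault plane: 30 km long x 12 km deep, m^2
slip_rate = 1.0e-3           # 1 mm/yr expressed in m/yr

moment_rate = MU * area * slip_rate          # N*m accumulated per year

def moment_from_mw(mw):
    # Hanks & Kanamori scaling; the constant is sometimes quoted as 9.1
    return 10.0 ** (1.5 * mw + 9.05)         # N*m

mw = 6.5
rate = moment_rate / moment_from_mw(mw)      # events per year
print(f"Mw {mw}: {rate:.2e} events/yr, recurrence ~ {1.0 / rate:.0f} yr")
```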

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 25
2168 Kinematic Analysis of the Calf Raise Test Using a Mobile iOS Application: Validation of the Calf Raise Application

Authors: Ma. Roxanne Fernandez, Josie Athens, Balsalobre-Fernandez, Masayoshi Kubo, Kim Hébert-Losier

Abstract:

Objectives: The calf raise test (CRT) is used in rehabilitation and sports medicine to evaluate calf muscle function. For testing, individuals stand on one leg and go up on their toes and back down to volitional fatigue. The newly developed Calf Raise application (CRapp) for iOS uses computer-vision algorithms enabling objective measurement of CRT outcomes. We aimed to validate the CRapp by examining its concurrent validity and agreement levels against laboratory-based equipment and establishing its intra- and inter-rater reliability. Methods: CRT outcomes (i.e., repetitions, positive work, total height, peak height, fatigue index, and peak power) were assessed in thirteen healthy individuals (6 males, 7 females) on three occasions and on both legs using the CRapp, 3D motion capture, and force plate technologies simultaneously. Data were extracted from two markers: one placed immediately below the lateral malleolus and another on the heel. Concurrent validity and agreement measures were determined using intraclass correlation coefficients (ICC(3,k)), typical errors expressed as coefficients of variation (CV), and Bland-Altman methods to assess biases and precision. Reliability was assessed using ICC(3,1) and CV values. Results: Validity of CRapp outcomes was good to excellent across measures for both markers (mean ICC ≥ 0.878), with precision plots showing good agreement and precision. CVs ranged from 0% (repetitions) to 33.3% (fatigue index) and were, on average, better for the lateral malleolus marker. Additionally, inter- and intra-rater reliability were excellent (mean ICC ≥ 0.949, CV ≤ 5.6%). Conclusion: These results confirm the CRapp is valid and reliable within and between users for measuring CRT outcomes in healthy adults. The CRapp provides a tool to objectivise CRT outcomes in research and practice, aligning with recent advances in mobile technologies and their increased use in healthcare.
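
For reference, the single-measure reliability coefficient ICC(3,1) reported above can be computed from two-way ANOVA mean squares (Shrout & Fleiss). A minimal sketch with fabricated data:

```python
import numpy as np

# ICC(3,1), two-way mixed effects, single measure (Shrout & Fleiss 1979):
#   ICC = (MS_subjects - MS_error) / (MS_subjects + (k - 1) * MS_error)
# Rows = subjects, columns = repeated sessions. Data are fabricated.
x = np.array([[25.0, 27.0, 26.0],
              [18.0, 17.0, 19.0],
              [30.0, 31.0, 29.0],
              [22.0, 21.0, 23.0]])   # 4 subjects x 3 sessions
n, k = x.shape
grand = x.mean()

ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between sessions
ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols

ms_rows = ss_rows / (n - 1)
ms_err = ss_err / ((n - 1) * (k - 1))
icc31 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
print(f"ICC(3,1) = {icc31:.3f}")
```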

Keywords: calf raise test, mobile application, validity, reliability

Procedia PDF Downloads 141
2167 Scalable and Accurate Detection of Pathogens from Whole-Genome Shotgun Sequencing

Authors: Janos Juhasz, Sandor Pongor, Balazs Ligeti

Abstract:

Next-generation sequencing, especially whole genome shotgun sequencing, is becoming a common approach to gain insight into microbiomes in a culture-independent way, even in clinical practice. It not only gives us information about the species composition of an environmental sample but also opens the possibility to detect antimicrobial resistance and novel, or currently unknown, pathogens. Accurately and reliably detecting the microbial strains is a challenging task. Here we present a sensitive approach for detecting pathogens in metagenomics samples, with special regard to detecting novel variants of known pathogens. We have developed a pipeline that uses fast, short-read aligner programs (i.e., Bowtie2/BWA) and comprehensive nucleotide databases. Taxonomic binning is based on the lowest common ancestor (LCA) principle; each read is assigned to a taxon covering the most significantly hit taxa. This approach helps in balancing between sensitivity and running time. The program was tested both on experimental and synthetic data. The results indicate that our method performs as well as the state-of-the-art BLAST-based ones; furthermore, in some cases, it even proves to be better, while running two orders of magnitude faster. It is sensitive and capable of identifying taxa present only in low abundance. Moreover, it needs two orders of magnitude fewer reads to complete the identification than MetaPhlAn2 does. We analyzed an experimental anthrax dataset (B. anthracis strain BA104). The majority of the reads (96.50%) were classified as Bacillus anthracis; a small portion, 1.2%, was classified as other species from the Bacillus genus. We demonstrate that the evaluation of high-throughput sequencing data is feasible in a reasonable time with good classification accuracy.
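
The LCA assignment principle mentioned above can be illustrated with a toy taxonomy; the tree and helper names below are assumptions for illustration, not the pipeline's actual data structures:

```python
# Lowest-common-ancestor (LCA) read assignment: a read hitting several
# taxa is assigned to the deepest taxon covering all significant hits.
# The toy taxonomy (child -> parent) below is an illustrative assumption.
PARENT = {
    "B. anthracis": "Bacillus", "B. cereus": "Bacillus",
    "Bacillus": "Bacillaceae", "Bacillaceae": "Bacteria",
    "Bacteria": "root",
}

def lineage(taxon):
    path = [taxon]
    while taxon in PARENT:
        taxon = PARENT[taxon]
        path.append(taxon)
    return path

def lca(taxa):
    paths = [lineage(t) for t in taxa]
    common = set(paths[0]).intersection(*map(set, paths[1:]))
    # deepest node = the common node with the longest path to the root
    return max(common, key=lambda t: len(lineage(t)))

print(lca(["B. anthracis"]))               # -> B. anthracis
print(lca(["B. anthracis", "B. cereus"]))  # -> Bacillus
```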

Keywords: metagenomics, taxonomy binning, pathogens, microbiome, B. anthracis

Procedia PDF Downloads 107
2166 A Step Magnitude Haptic Feedback Device and Platform for Better Way to Review Kinesthetic Vibrotactile 3D Design in Professional Training

Authors: Biki Sarmah, Priyanko Raj Mudiar

Abstract:

In the modern world of remotely interactive virtual reality-based learning and teaching, including professional skill-building training and acquisition practices, as well as data acquisition and robotic systems, the revolutionary application or implementation of field-programmable neurostimulator aids and first-hand interactive sensitisation techniques into 3D holographic audio-visual platforms has been a coveted dream of many scholars, professionals, scientists, and students. Integration of 'kinaesthetic vibrotactile haptic perception' along with an actuated step magnitude contact profiloscopy in augmented reality-based learning platforms and professional training can be implemented by using extremely calculated and well-coordinated image telemetry, including remote data mining and control techniques. A real-time, computer-aided (PLC-SCADA) field calibration-based algorithm must be designed for the purpose. Most importantly, in order to actually realise, as well as to 'interact' with, 3D holographic models displayed over a remote screen using remote laser image telemetry and control, all spatio-physical parameters like cardinal alignment and gyroscopic compensation, as well as surface profile and thermal compositions, must be implemented using zero-order type 1 actuators (or transducers), because they provide zero hysteresis, zero backlash, and low dead time, as well as a linear, absolutely controllable, intrinsically observable, and smooth performance with the least amount of error compensation, while ensuring the best ergonomic comfort ever possible for the users.

Keywords: haptic feedback, kinaesthetic vibrotactile 3D design, medical simulation training, piezo diaphragm based actuator

Procedia PDF Downloads 126
2165 Commissioning of a Flattening Filter Free (FFF) Beam Using an Anisotropic Analytical Algorithm (AAA)

Authors: Safiqul Islam, Anamul Haque, Mohammad Amran Hossain

Abstract:

Aim: To compare the dosimetric parameters of the flattened and flattening filter free (FFF) beams and to validate the beam data using the anisotropic analytical algorithm (AAA). Materials and Methods: All the dosimetric data (i.e., depth dose profiles, profile curves, output factors, penumbra, etc.) required for the beam modeling of AAA were acquired using the Blue Phantom RFA for 6 MV, 6 FFF, 10 MV, and 10 FFF. The Progressive Resolution Optimizer and Dose Volume Optimizer algorithms for VMAT and IMRT were also configured in the beam model. Beam modeling of the AAA was compared with the measured data sets. Results: Due to the higher low-energy component in 6 FFF and 10 FFF, the surface doses are 10 to 15% higher compared to the flattened 6 MV and 10 MV beams. An FFF beam has a lower mean energy compared to the flattened beam, and the beam quality indices were 0.667 (6 MV), 0.629 (6 FFF), 0.74 (10 MV), and 0.695 (10 FFF). Gamma evaluation with 2% dose and 2 mm distance criteria for the open beam, IMRT, and VMAT plans was also performed, and a good agreement between the modeled and measured data was found. Conclusion: We have successfully modeled the AAA algorithm for the flattened and FFF beams and achieved a good agreement between the calculated and measured values.
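
As an illustration of the gamma evaluation used above, a minimal one-dimensional global gamma index with 2%/2 mm criteria (the dose profiles are toy data, not the commissioning measurements):

```python
import numpy as np

# 1-D global gamma index with 2% dose / 2 mm distance-to-agreement criteria.
# gamma(i) = min over reference points of sqrt((dD/2%)^2 + (dx/2mm)^2).
# Dose arrays below are toy data for illustration.
def gamma_1d(ref, meas, spacing_mm, dose_crit=0.02, dta_mm=2.0):
    """Return per-point gamma of `meas` against `ref` (same grid)."""
    x = np.arange(len(ref)) * spacing_mm
    dmax = ref.max()                                # global normalization
    gammas = np.empty(len(meas))
    for i in range(len(meas)):
        dd = (meas[i] - ref) / (dose_crit * dmax)   # dose difference term
        dx = (x[i] - x) / dta_mm                    # distance term
        gammas[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gammas

ref = np.array([10, 50, 95, 100, 96, 52, 11], dtype=float)
meas = ref * 1.01                                   # 1% global offset
g = gamma_1d(ref, meas, spacing_mm=1.0)
print(f"pass rate (gamma <= 1): {100 * (g <= 1).mean():.1f}%")
```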

Keywords: commissioning, flattening filter free (FFF), anisotropic analytical algorithm (AAA), flattened beam, parameters

Procedia PDF Downloads 275
2164 Mitigating Climate Change: Cross-Country Variation in Policy Ambition

Authors: Mohammad Aynal Haque

Abstract:

Under international cooperation through the Paris Agreement, countries outline their self-determined policy ambition for emissions reduction in their Nationally Determined Contributions (NDCs) as a key to addressing climate change globally. Although practically all countries commit themselves to reaching the Paris landmark (below 2°C) globally, some act as climate leaders, others behave as followers, and others turn out to be climate laggards. As a result, there is substantial variation in 'emissions reduction targets' across countries. Thus, a question emerges: What explains this variation? Or why do some countries opt for higher while others opt for lower 'emissions reduction targets' toward global mitigation efforts? Conceptualizing the 'emissions reduction targets by 2030' outlined in the NDCs of each country as climate policy ambition (CPA), this paper explores how certain national political, economic, environmental, and external factors play vital roles in determining climate policy ambition. Based on a cross-country regression analysis among 168 countries, this study finds that democracy, vulnerability to climate change effects, and foreign direct investment have substantial effects on CPA. The paper also finds that resource capacity has a minimal negative effect on CPA across developed countries.

Keywords: climate change, Paris agreement, international cooperation, political economy, environmental politics, NDCs

Procedia PDF Downloads 44
2153 Determinants of Quality of Life in Patients with Atypical Parkinsonian Syndromes: 1-Year Follow-Up Study

Authors: Tatjana Pekmezovic, Milica Jecmenica-Lukic, Igor Petrovic, Vladimir Kostic

Abstract:

Background: The group of atypical parkinsonian syndromes (APS) includes a variety of rare neurodegenerative disorders characterized by reduced life expectancy, increasing disability, and considerable impact on health-related quality of life (HRQoL). Aim: In this study, we wanted to answer two questions: a) which demographic and clinical factors are the main contributors to HRQoL in our cohort of patients with APS, and b) how does the quality of life of these patients change over a 1-year follow-up period. Patients and Methods: We conducted a prospective cohort study in hospital settings. The initial study comprised all consecutive patients who were referred to the Department of Movement Disorders, Clinic of Neurology, Clinical Centre of Serbia, Faculty of Medicine, University of Belgrade (Serbia), from January 31, 2000 to July 31, 2013, with the initial diagnoses of 'Parkinson's disease', 'parkinsonism', 'atypical parkinsonism', and 'parkinsonism plus' during the first 8 months from the appearance of the first symptom(s). The patients were afterwards regularly followed at 4-6 month intervals, and eventually the diagnoses were established for 46 patients fulfilling the criteria for clinically probable progressive supranuclear palsy (PSP) and 36 patients for probable multiple system atrophy (MSA). Health-related quality of life was assessed using the SF-36 questionnaire (Serbian translation). Hierarchical multiple regression analysis was conducted to identify predictors of the composite scores of the SF-36. The significance of changes in quality of life scores of patients with APS between baseline and the follow-up time-point was quantified using the Wilcoxon signed ranks test. The magnitude of any differences in the quality of life changes was calculated as an effect size (ES). Results: The final models of the hierarchical regression analysis showed that apathy, measured by the Apathy Evaluation Scale (AES) score, accounted for 59% of the variance in the Physical Health Composite Score of the SF-36 and 14% of the variance in the Mental Health Composite Score of the SF-36 (p<0.01). The changes in HRQoL were assessed in 52 patients with APS who completed the 1-year follow-up period. The analysis of the magnitude of changes in HRQoL during the one-year follow-up period showed sustained medium ES (0.50-0.79) for both the Physical and Mental Health composite scores and total quality of life, as well as for Physical Health, Vitality, Role Emotional, and Social Functioning. Conclusion: This study provides insight into new potential predictors of HRQoL and its changes over time in patients with APS. Additionally, the identification of both prognostic markers of a poor HRQoL and the magnitude of its changes should be considered when developing comprehensive treatment-related strategies and health care programs aimed at improving HRQoL and well-being in patients with APS.

Keywords: atypical parkinsonian syndromes, follow-up study, quality of life, APS

Procedia PDF Downloads 278
2162 Meeting India's Energy Demand: U.S.-India Energy Cooperation under Trump

Authors: Merieleen Engtipi

Abstract:

India's total share of the global population is nearly 18%; however, its per capita energy consumption is only one-third of the global average. The demand and supply of electricity are uneven in the country; around 240 million people have no access to electricity. However, with India's trajectory for modernisation and economic growth, the demand for energy is only expected to increase. India is at a crossroads, on the one hand facing the increasing demand for energy and on the other hand meeting the Paris climate policy commitments, and further the struggle to provide efficient energy. This paper analyses the policies to meet India's need for energy, as per capita energy consumption is likely to double in a 6-7 year period. Simultaneously, India's Paris commitment requires curbing carbon emissions from fossil fuels. There is an increasing need for renewables to be cheaply and efficiently available in the market and for clean technology to extract fossil fuels to meet climate policy goals. Fossil fuels are the most significant generator of energy in India; with the Paris Agreement, the demand for clean energy technology is increasing. Finally, the U.S. decided to withdraw from the Paris Agreement; however, the two countries plan to continue engaging bilaterally on energy issues. U.S. energy cooperation under the Trump administration is significantly vital for greater energy security, transfer of technology, and efficiency in energy supply and demand.

Keywords: energy demand, energy cooperation, fossil fuels, technology transfer

Procedia PDF Downloads 220
2161 Conceptualising Project Complexity in Ghana’s Construction Industry: A Qualitative Study

Authors: Kwasi Dartey-Baah, Mias De Klerk

Abstract:

Project complexity has been cited as one of the essential areas of project management. It can be observed from environmental, social, technological, and organisational viewpoints, and its handling is critical to project success. Project complexity has been conceptualised in varied industries; this paper seeks to ascertain the meaning and understanding of project complexity within the Ghanaian construction industry based on the three dimensions of complexity (faith, fact, and interaction), using experts' opinions. Taking the form of a focus group discussion, the paper sought to gain an in-depth understanding of project complexity issues in Ghana's construction industry. The method obtained data from experts (a purposively selected group) comprising project leaders and project management academics. The findings indicated that the experts broadly agreed with the complexity items but offered varied reasons for their agreement. In the composite assessment of the complexity dimensions (faith, fact, and interaction), it emerged that there was some agreement with the complexity dimensions of fact and interaction within Ghana's construction industry. On the other hand, with the dimension of complexity by faith, it was noted that the experts in Ghana's construction industry construed complexity by faith not as the absence of evidence but as evidence that hinges on at least one member of the project team. It is expected that further research on project complexity will focus on other industries to enhance knowledge of the same within the field of project management.

Keywords: project complexity, complexity by faith, complexity by fact, complexity by interaction, construction industry, Ghana

Procedia PDF Downloads 129
2160 Generalized Vortex Lattice Method for Predicting Characteristics of Wings with Flap and Aileron Deflection

Authors: Mondher Yahyaoui

Abstract:

A generalized vortex lattice method for complex lifting surfaces with flap and aileron deflection is formulated. The method is not restricted by the linearized theory assumption and accounts for all standard geometric lifting surface parameters: camber, taper, sweep, washout, and dihedral, in addition to flap and aileron deflection. Thickness is not accounted for, since the physical lifting body is replaced by a lattice of panels located on the mean camber surface. This panel lattice setup and the treatment of different wake geometries are what distinguish the present work from the overwhelming majority of previous solutions based on the vortex lattice method. A MATLAB code implementing the proposed formulation is developed and validated by comparing our results to existing experimental and numerical ones, and good agreement is demonstrated. It is then used to study the accuracy of the widely used classical vortex lattice method. It is shown that the classical approach gives good agreement in the clean configuration but is off by as much as 30% when a flap or aileron deflection of 30° is imposed. This discrepancy is mainly due to the linearized theory assumption associated with the conventional method. A comparison of the effect of four different wake geometries on the values of the aerodynamic coefficients was also carried out, and it was found that the choice of the wake shape had very little effect on the results.
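
The flavor of such vortex lattice calculations can be conveyed by their two-dimensional cousin, the lumped-vortex (discrete vortex) method for a flat plate with a plain flap. This is a textbook-style sketch (in the spirit of Katz & Plotkin), not the paper's generalized MATLAB formulation, and all geometry values are illustrative:

```python
import numpy as np

# 2-D lumped-vortex method for a flat plate with a plain flap.
# Each panel carries a point vortex at its quarter-chord and a control
# point at its three-quarter chord; solving the no-penetration condition
# at the control points yields the circulation distribution and Cl.
N, c, U = 60, 1.0, 1.0          # panels, chord, freestream speed
alpha = np.radians(4.0)         # angle of attack
delta = np.radians(30.0)        # flap deflection
x_hinge = 0.75 * c              # flap hinge position (illustrative)

dx = c / N
xv = (np.arange(N) + 0.25) * dx     # vortex points
xc = (np.arange(N) + 0.75) * dx     # control points

# local surface slope dz/dx: 0 ahead of the hinge, -tan(delta) on the flap
dzdx = np.where(xc > x_hinge, -np.tan(delta), 0.0)

# influence matrix: downwash at control i from a unit (clockwise) vortex at j
A = -1.0 / (2.0 * np.pi * (xc[:, None] - xv[None, :]))
rhs = U * (dzdx - alpha)            # no-penetration boundary condition
gamma = np.linalg.solve(A, rhs)

Cl = 2.0 * gamma.sum() / (U * c)    # Kutta-Joukowski, per unit span
print(f"Cl = {Cl:.3f}  (thin-airfoil flat plate alone: {2 * np.pi * alpha:.3f})")
```

With one panel this reduces exactly to the thin-airfoil result Cl = 2πα, which is a convenient sanity check on the sign conventions.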

Keywords: aileron deflection, camber-surface-bound vortices, classical VLM, generalized VLM, flap deflection

Procedia PDF Downloads 410
2159 CFD Studies on Forced Convection Nanofluid Flow Inside a Circular Conduit

Authors: M. Khalid, W. Rashmi, L. L. Kwan

Abstract:

This work provides an overview of experimental and numerical simulations of various nanofluids and their flow and heat transfer behavior. It was further extended to study the effect of nanoparticle concentration, fluid flow rates, and thermo-physical properties on the heat transfer enhancement of an Al2O3/water nanofluid in a turbulent-flow circular conduit using ANSYS FLUENT™ 14.0. A single-phase approximation (homogeneous model) and two-phase (mixture and Eulerian) models were used to simulate the nanofluid flow behavior in the 3-D horizontal pipe. The numerical results were further validated with experimental correlations reported in the literature. It was found that the heat transfer of nanofluids increases with increasing particle volume concentration and Reynolds number. Results showed good agreement (~9% deviation) with the experimental correlations, especially for the single-phase model with constant properties. Among the two-phase models, the mixture model (~14% deviation) showed better prediction compared to the Eulerian-dispersed model (~18% deviation) when temperature-independent properties were used. Non-drag forces were also employed in the Eulerian two-phase model. However, the two-phase mixture model with temperature-dependent nanofluid properties gave slightly closer agreement (~12% deviation).
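
The single-phase (homogeneous) approximation referred to above rests on effective-property models. A minimal sketch using the standard mixture density, Einstein viscosity, and Maxwell conductivity rules (the property values are approximate, and the paper does not state which specific correlations it used):

```python
# Common single-phase ("homogeneous") nanofluid property models:
# mixture density, Einstein viscosity (dilute), and Maxwell conductivity.
# Property values are for Al2O3/water near 300 K and are approximate.
phi = 0.02                      # particle volume fraction (2%)
rho_f, rho_p = 997.0, 3970.0    # base fluid / particle density, kg/m^3
mu_f = 0.000855                 # base fluid viscosity, Pa*s
k_f, k_p = 0.613, 40.0          # thermal conductivities, W/(m*K)

rho_nf = (1 - phi) * rho_f + phi * rho_p
mu_nf = mu_f * (1 + 2.5 * phi)                      # Einstein (dilute limit)
k_nf = k_f * (k_p + 2 * k_f + 2 * phi * (k_p - k_f)) / (
              k_p + 2 * k_f - phi * (k_p - k_f))    # Maxwell model

print(f"rho = {rho_nf:.0f} kg/m^3, mu = {mu_nf:.2e} Pa.s, k = {k_nf:.3f} W/m.K")
```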

Keywords: nanofluid, CFD, heat transfer, forced convection, circular conduit

Procedia PDF Downloads 495
2158 Non-Linear Velocity Fields in Turbulent Wave Boundary Layer

Authors: Shamsul Chowdhury

Abstract:

The objective of this paper is to present a detailed analysis of the turbulent wave boundary layer produced by progressive finite-amplitude wave theory. Most previous work on mass transport in the turbulent boundary layer has assumed that the eddy viscosity is not time varying, with the sediment movement induced by the mean velocity. Near the ocean bottom, the waves produce a thin turbulent boundary layer, where the flow is highly rotational, and the shear stress associated with the fluid motion cannot be neglected. The magnitude and the predominant direction of the sediment transport near the bottom are known to be closely related to the flow in the wave-induced boundary layer. The magnitude of the water particle velocity at the crest phase differs from that at the trough phase due to the non-linearity of the waves, which plays an important role in determining the sediment movement. The non-linearity of the waves becomes predominant in the surf zone area, where the sediment movement occurs vigorously. Therefore, in order to describe the flow near the bottom and the relationship between the flow and the movement of the sediment, the analysis was done using the non-linear boundary layer equation, and finite-amplitude wave theory was applied to represent the velocity fields in the turbulent wave boundary layer. At first, the calculation was done for the turbulent wave boundary layer with a two-dimensional model in which the calculation is non-linear throughout, but a Stokes second-order wave profile is adopted at the upper boundary. The calculated profile was compared with experimental data. Finally, the calculation was done based on various modes of the velocity and turbulent energy. The mean velocity is found to differ depending on the relative depth and the roughness. It is also found that, due to non-linearity, the absolute values of the velocity and turbulent energy, as well as the Reynolds stress, are asymmetric. The mean velocity in the laminar boundary layer is always positive, but in the turbulent boundary layer it plays a very complicated role.

Keywords: wave boundary, mass transport, mean velocity, shear stress

Procedia PDF Downloads 237
2157 Intergenerational Influences on Automobile Brand Preferences in Pakistan

Authors: Amena Sibghatullah

Abstract:

The purpose of this study was to examine the existence of intergenerational influence (IGI) between two successive generations in the selection of automobile brands. IGI was examined between mother-daughter dyads and father-son dyads. A total sample of 320 respondents (80 fathers and their 80 sons, 80 mothers and their 80 daughters) from the upper-middle class was selected. Three important findings from this study are: (a) the difference in the proportion of agreements for Brand-In-Use versus Brand-In-Mind appeared to be statistically significant in the automobile product category; thus, the Brand-In-Use situation between parent and child shows more agreement than the Brand-In-Mind situation; (b) the difference in proportions between women and men (women meaning mother-daughter dyad agreement, and men meaning father-son dyad agreement) is statistically significant in automobile brand preferences, which means that mother-daughter dyad brand preferences, both Brand-In-Mind and Brand-In-Use, are more significant than those of father-son dyads; and (c) dominance of the top three brands was exhibited in automobiles, both Brand-In-Use and Brand-In-Mind. These three brands hold more than 57% of auto brand preferences, which means that the three brands occupy distinct and strong positions in the minds of consumers. These results reflect that there is significant evidence of the presence of IGI between parent and adult child. Marketers of auto brands need to understand this sort of influence on their target consumers.

Keywords: automobile brands, branding, intergenerational influence, preferences

Procedia PDF Downloads 104
2156 Turkey-Syria Relations between 2002-2011 from the Perspective of Social Construction

Authors: Didem Aslantaş

Abstract:

In this study, the reforms carried out by the Justice and Development Party, which came to power in 2002, and how the foreign policy understanding it transformed was reflected in relations with Syria will be analyzed from the perspective of social constructivist theory. In contrast to states' increasing security concerns after the September 11 attacks, the main problem of the research is how the relations between Syria and Turkey developed and how they progressed in non-security dimensions. In order to find an answer to this question, the basic assumptions of constructivist theory will be used. Since there are a limited number of studies in the literature, a comparative analysis of the Adana Consensus, the Cooperation Agreement between the Republic of Turkey and the Syrian Arab Republic, and the Joint Cooperation Agreement Against Terrorism and Terrorist Organizations will be included. In order to answer the main problem of the research and to support its arguments, document and archive scanning methods from among qualitative research methods will be used. In the first part of the study, social constructivist theory and its basic assumptions are explained, while the second part covers Turkey-Syria relations between 2002 and 2011. In the third and last part, the relations between the two countries will be read through social constructivism, with reference to the foreign policy features of the AK Party period.

Keywords: Social Constructivist Theory, foreign policy analysis, Justice and Development Party, Syria

Procedia PDF Downloads 59
2155 Bilingual Identities of Kuwaiti Students at Universities with EMI

Authors: Marta Tryzna, Shahd Al Shammari

Abstract:

Though Modern Standard Arabic (MSA) is the only official language in GCC states, including Kuwait, and traditionally the preferred vehicle for literacy in the Arab countries, recent studies in Qatar and the UAE observe a growing role of English, particularly in literacy and knowledge transmission contexts. The present study examines the attitudes to Arabic and English and the use of both languages in literacy-related domains based on a sample of bilingual Arabic-English undergraduates (N=522) at a private university with EMI in Kuwait. The results indicate that Arabic (Kuwaiti dialect) is associated with familial interactions, Arabic-English bilingualism predominates in interactions with classmates, friends, on social media and at work, while English is prevalent in literacy-related contexts such as reading books, magazines, or online material, domains traditionally associated with MSA. Attitudes towards Arabic and English are equally positive according to the majority of the respondents, who report being comfortable expressing themselves and projecting their identity in both languages. No statistically significant differences were found comparing the importance of Arabic and English in the sample. Future trends were identified based on high agreement on the importance of speaking English with children and low agreement on speaking only Arabic at home. The study corroborates recently observed trends in the GCC favoring bilingualism across personal, academic and professional domains, with English becoming the preferred language of literacy among young bilingual Kuwaitis.

Keywords: bilingual, English, Arabic, EMI, identity

Procedia PDF Downloads 114
2154 Development of an Electronic Waste Management Framework at the College of Engineering, Design, Art and Technology

Authors: Wafula Simon Peter, Kimuli Nabayego Ibtihal, Nabaggala Kimuli Nashua

Abstract:

The worldwide use of information and communications technology (ICT) equipment and other electronic equipment is growing, and consequently there is a growing amount of equipment that becomes waste after its time in use. This growth is expected to accelerate, since equipment lifetimes decrease over time while consumption grows. As a result, e-waste is one of the fastest-growing waste streams globally. The United Nations University (UNU) calculates in its second Global E-waste Monitor that 44.7 million metric tonnes (Mt) of e-waste were generated globally in 2016. This research was carried out to investigate the problem of e-waste and come up with a framework to improve e-waste management. The study population was 80 respondents, from which a sample of 69 respondents was selected using simple and purposive sampling techniques. The objective of the study was to develop a framework for improving e-waste management at the College of Engineering, Design, Art and Technology (CEDAT). This was achieved by breaking it down into specific objectives, which included the establishment of the policy and other regulatory frameworks being used in e-waste management at CEDAT, the determination of the effectiveness of the e-waste management practices at CEDAT, the establishment of the critical challenges constraining e-waste management at the College, and the development of a framework for e-waste management. The study reviewed the e-waste regulatory framework used at the college and then collected data, which was used to come up with a framework. The study also established that a weak policy and regulatory framework, lack of proper infrastructure, improper disposal of e-waste, and a general lack of awareness of e-waste and the magnitude of the problem are the critical challenges of e-waste management. In conclusion, the policy and regulatory framework should be revised, localized, and strengthened to contextually address the problem. Awareness campaigns, the development of proper infrastructure, and extensive research to establish the volumes and magnitude of the problem will come in handy. The study recommends a framework for the improvement of e-waste management.

Keywords: e-waste, treatment, disposal, computers, model, management policy and guidelines

Procedia PDF Downloads 51
2153 Shear Layer Investigation through a High-Load Cascade in Low-Pressure Gas Turbine Conditions

Authors: Mehdi Habibnia Rami, Shidvash Vakilipour, Mohammad H. Sabour, Rouzbeh Riazi, Hossein Hassannia

Abstract:

This paper deals with the steady and unsteady flow behavior of the separation bubble occurring on the rear portion of the suction side of the T106A blade. The first phase was to simulate the steady condition capturing the separation bubble. To accurately predict the separated region, the effects of three different turbulence models and computational grids were separately investigated. The results of the Large Eddy Simulation (LES) model on the finest grid structure are in acceptably good agreement with the relevant experimental results. The second phase mainly addresses the effects of wake entrance on bubble disappearance in the unsteady situation. In the current simulations, following what was suggested in an experiment, the key issue is simulating the flow unsteadiness by concentrating on small-scale disturbances instead of simulating a complete oncoming wake. Subsequently, the results from the current strategy for applying the effects of the wake were compared with two experimental works and found to be in good agreement. Of the two experiments, one deals with wake-passing unsteady flow, and the other implements experimentally the same approach as the current Computational Fluid Dynamics (CFD) simulation.

Keywords: low-pressure turbine cascade, large-Eddy simulation (LES), RANS turbulence models, unsteady flow measurements, flow separation

Procedia PDF Downloads 275
2152 Non-Invasive Assessment of Peripheral Arterial Disease: Automated Ankle Brachial Index Measurement and Pulse Volume Analysis Compared to Ultrasound Duplex Scan

Authors: Jane E. A. Lewis, Paul Williams, Jane H. Davies

Abstract:

Introduction: There is, at present, a clear and recognized need to optimize the diagnosis of peripheral arterial disease (PAD), particularly in non-specialist settings such as primary care, and this arises from several key facts. Firstly, PAD is a highly prevalent condition. In 2010, it was estimated that, globally, PAD affected more than 202 million people; furthermore, this prevalence is predicted to escalate further. The disease itself, although frequently asymptomatic, can cause considerable patient suffering, with symptoms such as lower limb pain, ulceration, and gangrene, which, in worst-case scenarios, can necessitate limb amputation. A further, and perhaps the most eminent, consequence of PAD arises from the fact that it is a manifestation of systemic atherosclerosis and is therefore a powerful predictor of coronary heart disease and cerebrovascular disease. Objective: This cross-sectional study aimed to individually and cumulatively compare the sensitivity and specificity of (i) the ankle brachial index (ABI) and (ii) the pulse volume waveform (PVW), recorded by the same automated device, with the presence or absence of peripheral arterial disease (PAD) being verified by an ultrasound duplex scan (UDS). Methods: Patients (n = 205) referred for lower limb arterial assessment underwent ABI and PVW measurement using volume plethysmography, followed by a UDS. Presence of PAD was recorded if ABI < 0.9 (noted if > 1.30), if the PVW was graded as 2, 3, or 4, or if a hemodynamically significant stenosis > 50% was found with UDS. The outcome measure was agreement between the measured ABI and the interpretation of the PVW for PAD diagnosis, using UDS as the reference standard. Results: Sensitivity of ABI was 80%, specificity 91%, and overall accuracy 88%. Cohen's kappa revealed good agreement between ABI and UDS (k = 0.7, p < .001). PVW sensitivity was 97%, specificity 81%, and overall accuracy 84%, with a good level of agreement between PVW and UDS (k = 0.67, p < .001). The combined sensitivity of ABI and PVW was 100%, specificity 76%, and overall accuracy 85% (k = 0.67, p < .001). Conclusions: Combining these two diagnostic modalities within one device provided a highly accurate method of ruling out PAD. Such a device could be utilized within the primary care environment to reduce the number of unnecessary referrals to secondary care, with concomitant cost savings, reduced patient inconvenience, and prioritization of urgent PAD cases.
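
The reported diagnostic statistics follow from a 2x2 table against the UDS reference standard. A minimal sketch (the counts are hypothetical, chosen only so that they reproduce the ABI figures quoted above):

```python
# Sensitivity, specificity, and overall accuracy from a 2x2 confusion
# table against the reference standard (UDS). Counts are hypothetical,
# picked to be consistent with the reported ABI results (n = 205).
tp, fn = 40, 10    # PAD on UDS: detected / missed by the index test
tn, fp = 141, 14   # no PAD on UDS: correctly ruled out / false alarms

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fn + tn + fp)
print(f"sens = {sensitivity:.0%}, spec = {specificity:.0%}, acc = {accuracy:.0%}")
```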

Keywords: ankle brachial index, peripheral arterial disease, pulse volume waveform, ultrasound duplex scan

Procedia PDF Downloads 132
2151 Effect of Concrete Strength and Aspect Ratio on Strength and Ductility of Concrete Columns

Authors: Mohamed A. Shanan, Ashraf H. El-Zanaty, Kamal G. Metwally

Abstract:

This paper presents the effect of concrete compressive strength and rectangularity ratio on the strength and ductility of normal- and high-strength reinforced concrete columns confined with transverse steel under axial compressive loading. Nineteen normal-strength concrete rectangular columns with different variables tested in this research were used to study the effect of concrete compressive strength and rectangularity ratio on the strength and ductility of columns. The paper also presents a nonlinear finite element analysis of these specimens, and of another twenty high-strength concrete square columns tested by other researchers, using ANSYS 15 finite element software. The results indicate that the axial force-axial strain relationship obtained from the analytical model using ANSYS is in good agreement with the experimental data. The comparison shows that ANSYS is capable of modeling and predicting the actual nonlinear behavior of confined normal- and high-strength concrete columns under concentric loading. The maximum applied load and the maximum strain have also been confirmed to be satisfactory. Based on this agreement between the experimental and analytical results, a parametric numerical study was conducted with ANSYS 15 to clarify and evaluate the effect of each variable on the strength and ductility of the columns.

Keywords: ANSYS, concrete compressive strength effect, ductility, rectangularity ratio, strength

Procedia PDF Downloads 480
2150 The Correlation between Head of Bed Angle and Intra-Abdominal Pressure of Intubated Patients: A Pre-Post Clinical Trial

Authors: Sedigheh Samimian, Sadra Ashrafi, Tahereh Khaleghdoost Mohammadi, Mohammad Reza Yeganeh, Ali Ashraf, Hamideh Hakimi, Maryam Dehghani

Abstract:

Introduction: The recommended position for measuring intra-abdominal pressure (IAP) is the supine position. However, patients put in this position are prone to ventilator-associated pneumonia. This study was done to evaluate the relationship between the head of bed angle and the IAP measurements of intubated patients in the intensive care unit. Methods: In this clinical trial, seventy-six critically ill patients under mechanical ventilation were enrolled. IAP measurement was performed every 8 hours for 24 hours using the KORN method at three different degrees of head of bed (HOB) elevation (0°, 15°, and 30°). Bland-Altman analysis was performed to identify the bias and limits of agreement among the three HOB angles. According to the World Society of the Abdominal Compartment Syndrome (WSACS), we can consider two IAP techniques equivalent if a bias of < 1 mmHg and limits of agreement of -4 to +4 mmHg are found between them. Data were analyzed using SPSS statistical software (v. 19), and the significance level was set at 0.05. Results: The prevalence of intra-abdominal hypertension was 18.42%. The mean ± standard deviation (SD) of IAP was 8.44 ± 4.02 mmHg for an HOB angle of 0°, 9.58 ± 4.52 for an HOB angle of 15°, and 11.10 ± 4.73 for an HOB angle of 30° (p = 0.0001). The IAP measurement bias between HOB angle 0° and HOB angle 15° was 1.13 mmHg. This bias was 2.66 mmHg between HOB angle 0° and HOB angle 30°. Conclusion: Elevation of the HOB angle from 0 to 30 degrees significantly increases IAP. It seems that the measurement of IAP at an HOB angle of 15° is more reliable than at 30°.
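
The Bland-Altman bias, limits of agreement, and the quoted WSACS equivalence criterion can be computed as follows (the IAP values are fabricated, not the trial's data):

```python
import numpy as np

# Bland-Altman bias and 95% limits of agreement between IAP measured at
# two head-of-bed angles, plus the WSACS equivalence check quoted above
# (bias < 1 mmHg, limits within -4..+4 mmHg). Data are fabricated.
iap_0  = np.array([7.5, 12.0, 5.8, 9.1, 14.2, 8.0, 6.4, 10.3])  # mmHg
iap_15 = np.array([8.1, 13.4, 6.5, 10.2, 15.8, 9.4, 7.0, 11.6])

diff = iap_15 - iap_0
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement

equivalent = abs(bias) < 1 and loa[0] > -4 and loa[1] < 4
print(f"bias = {bias:.2f} mmHg, LoA = ({loa[0]:.2f}, {loa[1]:.2f}), "
      f"equivalent by WSACS criteria: {equivalent}")
```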

Keywords: pressure, intra-abdominal hypertension, head of bed, critical care, compartment syndrome, supine position

Procedia PDF Downloads 28
2149 The Role and Function of National Land Authority as Mediator in Land Dispute Settlements in Indonesia

Authors: Nia Kurniati, Efa Laela Fakhriah

Abstract:

The regulation in Indonesia provides space for land disputes to be settled outside the court by the government through the National Land Authority. In this case, a bureaucrat of Badan Pertanahan Nasional (BPN) acts as mediator to reach a fair agreement between the disputing parties. A land dispute arises when one party denies the other party's ownership of a parcel of land and denies the legal-technical facts written on the land certificate published by BPN. Appointing a BPN bureaucrat as mediator in dispute settlements may create a conflict of interest, since the object of the dispute is the very certificate that BPN itself has published. The concern is that a BPN bureaucrat acting as mediator may be biased and partial in assisting the dispute settlement, thus hampering the spirit and purposes of mediation. This issue warrants thorough examination in relation to the role and function of BPN as a land dispute mediator. The methodology used in this research is normative-legal, with a qualitative legal-analytical method. The object of this research is a random sample of land dispute cases occurring in several areas. Several principles of mediation have to be taken as the basis for considering the appointment of a BPN bureaucrat as mediator, since a mediator should be an impartial third party who works with both disputing parties and assists them in reaching a fair resolution, written in an agreement that serves as the foundation of the land dispute settlement. The existence of BPN as mediator in land dispute settlement entails a conflict of interest that casts doubt on its ability to act objectively and creates legal uncertainty.

Keywords: Indonesia, land dispute, mediator, national land authority

Procedia PDF Downloads 278
2148 Analysis of the Relationship between Micro-Regional Human Development and Brazil's Greenhouse Gas Emissions

Authors: Geanderson Eduardo Ambrósio, Dênis Antônio Da Cunha, Marcel Viana Pires

Abstract:

Historically, human development has been based on economic gains associated with energy-intensive activities, which are often heavy emitters of Greenhouse Gases (GHGs). This requires the establishment of GHG mitigation targets in order to decouple human development from emissions and prevent further climate change. Brazil is one of the largest GHG emitters, and it is critically important to discuss such reductions in an intra-national framework with the objective of distributional equity, exploring the country's full mitigation potential without compromising the development of its less developed regions. This research presents some initial considerations about which of Brazil's micro-regions should reduce emissions, when the reductions should begin, and what their magnitude should be. We start from the methodological assumption that human development and GHG emissions will evolve in the future as they behaved in the past. Furthermore, we assume that once a micro-region becomes developed, it is able to maintain gains in human development without further growth in its GHG emission rates. The human development index and carbon dioxide equivalent emissions (CO2e) were extrapolated to the year 2050, which allowed us to calculate when each micro-region will become developed and the mass of GHGs emitted. The results indicate that Brazil will emit 300 Gt CO2e into the atmosphere between 2011 and 2050, of which only 50 Gt will be emitted by micro-regions before they develop and 250 Gt after development. We also determined national mitigation targets and structured reduction schemes in which only the developed micro-regions are required to reduce emissions. The micro-region of São Paulo, the most developed in the country, should also be the one that reduces emissions the most, emitting 90% less in 2050 than the value observed in 2010. On the other hand, less developed micro-regions will be responsible for less drastic reductions; Vale do Ipanema, for example, will emit in 2050 only 10% below the value observed in 2010. Under this assumption the country would emit 56.5% less in 2050 than observed in 2010, so that cumulative emissions between 2011 and 2050 would fall by 130 Gt CO2e relative to the initial projection. Linking the magnitude of the reductions to the level of human development of the micro-regions encourages the adoption of policies that favor both variables, as the governmental planner will have to deal both with the increasing demand for higher standards of living and with the increasing magnitude of emission reductions. However, if economic agents do not act proactively at the local and national levels, the country is closer to the scenario in which it emits more than to the one in which it mitigates emissions. The research highlights the importance of considering heterogeneity in determining individual mitigation targets and also ratifies the theoretical and methodological feasibility of allocating a larger share of the contribution to those who have historically emitted more. The proposals and discussions presented should be considered in mitigation policy formulation in Brazil regardless of the reduction target adopted.
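
The extrapolation and target-allocation logic can be sketched schematically. The Python snippet below is a minimal illustration under assumptions not taken from the paper: a linear HDI extrapolation, a development threshold of 0.8, and a linear mapping of projected 2050 HDI onto the 10%-90% reduction range mentioned above; the series is hypothetical:

    import numpy as np

    HDI_DEVELOPED = 0.8  # assumed threshold for a "developed" micro-region

    def year_developed(years, hdi, threshold=HDI_DEVELOPED):
        """Linearly extrapolate an HDI series and return the first year
        (up to 2050) at or above the threshold, or None if never reached."""
        slope, intercept = np.polyfit(years, hdi, 1)
        for y in range(years[-1] + 1, 2051):
            if slope * y + intercept >= threshold:
                return y
        return None

    def reduction_target(hdi_2050, hdi_min=0.6, hdi_max=0.9):
        """Map projected 2050 HDI onto a 10%-90% cut vs. 2010 emissions."""
        frac = np.clip((hdi_2050 - hdi_min) / (hdi_max - hdi_min), 0, 1)
        return 0.10 + 0.80 * frac

    years = [2000, 2005, 2010]
    hdi = [0.74, 0.78, 0.81]  # hypothetical Sao Paulo-like series
    print(year_developed(years, hdi))
    print(f"2050 target: {reduction_target(0.90):.0%} below 2010")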

Keywords: greenhouse gases, human development, mitigation, intensive energy activities

Procedia PDF Downloads 294
2147 Variations of Total Electron Content over High Latitude Region during the 24th Solar Cycle

Authors: Arun Kumar Singh, Rupesh M. Das, Shailendra Saini

Abstract:

The effects of the solar cycle and of the seasons on the total electron content have been investigated over a high-latitude region during the 24th solar cycle (2010-2014). The total electron content data were recorded with a Global Ionospheric Scintillation and TEC Monitoring (GISTM) system installed at the Indian permanent scientific 'Maitri' station [70˚46'00"S, 11˚43'56"E]. The dependence of TEC on the solar cycle has been examined by performing a linear regression analysis between the vertical total electron content (VTEC) and the daily total sunspot number (SSN). It has been found that season and level of geomagnetic activity have a considerable effect on VTEC. VTEC and SSN show better agreement during the summer season than during the winter and equinox seasons, and extraordinarily good agreement during the minimum phase of the solar cycle (the year 2010). The correlation between VTEC and SSN is stronger during the quiet days of each year than over all days of the years 2010-2014. Further, a saturation effect has been observed during the maximum phase of the 24th solar cycle (the year 2014). It is also found that the Ap index and SSN have a linear correlation (R = 0.37), and that most of the geomagnetic activity occurs during the declining phase of the solar cycle.
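
A regression of the kind described can be reproduced in a few lines of Python; the sketch below uses scipy.stats.linregress on hypothetical daily values, purely for illustration and not the Maitri observations:

    import numpy as np
    from scipy.stats import linregress

    # hypothetical daily means: sunspot number vs. vertical TEC (TECU)
    ssn  = np.array([12, 25, 40, 58, 75, 90, 110, 130])
    vtec = np.array([8.1, 9.5, 12.0, 14.2, 16.8, 18.1, 21.5, 24.0])

    # slope gives TECU per unit SSN; rvalue is the correlation coefficient
    fit = linregress(ssn, vtec)
    print(f"slope={fit.slope:.3f} TECU/SSN, R={fit.rvalue:.2f}, p={fit.pvalue:.3g}")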

Keywords: high latitude ionosphere, sunspot number, correlation, vertical total electron content

Procedia PDF Downloads 162