Search results for: ground failure
679 The Impact of the Constitution of Myanmar on the Political Power of Aung San Suu Kyi and the Rohingya Conflict
Authors: Nur R. Daut
Abstract:
The objective of this paper is to offer insight into how political power inequality has contributed to and exacerbated the political violence towards the Rohingya ethnic group in Myanmar. In particular, this paper attempts to illustrate how power inequality in the country has prevented Myanmar's leader Aung San Suu Kyi from taking effective measures on the issue. The research centers on the question of why Aung San Suu Kyi has been seen as not doing enough to stop the persecution of the Rohingya ethnic group ever since she was appointed State Counsellor to the Myanmar government. As a Nobel Peace Prize laureate, Suu Kyi has come under severe criticism by the international community for her lack of action on the matter. Many have seen this as Suu Kyi's failure to establish democracy and as allowing mass killing to spread in the country. The real question that many perhaps should be asking, however, is how much power Suu Kyi actually holds within a government which is still heavily controlled by the military, or Tatmadaw. This paper argues that Suu Kyi's role within the government is limited, which hinders constructive and effective measures from being taken on the Rohingya issue. Political power in this research is measured by three factors: control over events, such as the burning of Rohingya villages; control over resources, such as land ownership and the media; and control over actors, such as the Tatmadaw, police force and civil society, who are greatly needed to ease and resolve the conflict. In order to illustrate which individuals or institutions have control over all three factors, this paper will first study the constitution of Myanmar. The constitution will also be able to show the asymmetrical power relations, as it will provide evidence as to how much political power Suu Kyi holds within the government in comparison to other political actors and institutions.
Suu Kyi's role as State Counsellor, akin to a prime minister, is a newly created position, as the current constitution of Myanmar bars anyone with a foreign spouse from holding the post of president in the country. This is already an indication of the inequality of political power between Suu Kyi and the military. Apart from studying the constitution of Myanmar, Suu Kyi's speeches and various interviews are also studied in order to answer the research question. Suu Kyi's political power is further limited by the Buddhist monks in Myanmar, who have held significant influence throughout the history of the country. This factor further prevents Suu Kyi from preserving the sanctity of human rights in Myanmar.
Keywords: Aung San Suu Kyi, constitution of Myanmar, inequality, political power, political violence, Rohingya, Tatmadaw
Procedia PDF Downloads 114
678 Is Sodium Channel Nav1.7 an Ideal Therapeutically Analgesic Target? A Systematic Review
Authors: Yutong Wan, John N. Wood
Abstract:
Introduction: SCN9A-encoded Nav1.7 is an attractive therapeutic target with minimal side effects for the pharmaceutical industry, because SCN9A variants can cause both gain-of-function pain-related mutations and loss-of-function pain-free mutations in humans. This study reviews the clinical effectiveness of existing Nav1.7 inhibitors, which theoretically should be powerful analgesics. Methods: A systematic review was conducted on the effectiveness of current Nav1.7 blockers undergoing clinical trials. Studies were mainly extracted from PubMed, the U.S. National Library of Medicine Clinical Trials database, the World Health Organization International Clinical Trials Registry, the ISRCTN registry platform, and the Integrated Research Approval System by the NHS. Only studies with full text available and those conducted using double-blinded, placebo-controlled, and randomised designs and reporting at least one analgesic measurement were included. Results: Overall, 61 trials were screened, and eight studies covering PF 05089771 (Pfizer), TV 45070 (Teva & Xenon), and BIIB074 (Biogen) met the inclusion criteria. Most studies were excluded because their results were not published. All three compounds demonstrated insignificant analgesic effects, and the comparison between PF 05089771 and pregabalin/ibuprofen showed that PF 05089771 was a much weaker analgesic. All three drug candidates have only mild side effects, indicating the potential for further investigation of Nav1.7 antagonists. Discussion: The failure of current Nav1.7 small-molecule inhibitors might be attributed to neglect of the key role of endogenous systems in Nav1.7 null mutants, the lack of selectivity and blocking potency, and central impermeability. A recent UCL patent on the synergistic combination of analgesic drugs, combining a small dose of Nav1.7 blockers with opioids or enkephalinase inhibitors, dramatically enhanced the analgesic effects. Conclusion: The Nav1.7 blockers currently in clinical testing are generally disappointing.
However, the newer generation of Nav1.7-targeting analgesics has overcome the major constraints of its predecessors.
Keywords: chronic pain, Nav1.7 blockers, SCN9A, systematic review
Procedia PDF Downloads 129
677 The Magnitude and Associated Factors of Immune Hemolytic Anemia among Human Immuno Deficiency Virus Infected Adults Attending University of Gondar Comprehensive Specialized Hospital North West Ethiopia 2021 GC, Cross Sectional Study Design
Authors: Samul Sahile Kebede
Abstract:
Background: Immune hemolytic anemia (IHA) commonly affects HIV-infected individuals. Among anemic HIV patients in Africa, the burden of IHA due to autoantibody ranged from 2.34% to 3.06%, while that due to drugs was 43.4%. Autoimmune IHA is a potentially fatal complication of HIV and accounts for the greatest share of acquired hemolytic anemia. Objective: The main aim of this study was to determine the magnitude and associated factors of immune hemolytic anemia among HIV-infected adults at the University of Gondar comprehensive specialized hospital, northwest Ethiopia, from March to April 2021. Methods: An institution-based cross-sectional study was conducted on 358 HIV-infected adults selected by systematic random sampling at the University of Gondar comprehensive specialized hospital from March to April 2021. Socio-demographic, dietary and clinical data were collected with a structured, pretested questionnaire. Five ml of venous blood was drawn from each participant and analyzed by a UniCel DxH 800 hematology analyzer; blood film examination and an antihuman globulin test were performed to diagnose immune hemolytic anemia. Data were entered into Epidata version 4.6 and analyzed with STATA version 14. Descriptive statistics were computed, and Firth penalized logistic regression was used to identify predictors. A p-value less than 0.005 was interpreted as significant. Results: The overall prevalence of immune hemolytic anemia was 2.8% (10 of 358 participants). Of these, 5 were males, and 7 were in the 31 to 50 year age group. Among individuals with immune hemolytic anemia, 40% had mild and 60% had moderate anemia. The factors that showed association were family history of anemia (AOR 8.30, 95% CI 1.56 to 44.12), not eating meat (AOR 7.39, 95% CI 1.25 to 45.0), and high viral load (AOR 6.94, 95% CI 1.13 to 42.6).
Conclusion and recommendation: Immune hemolytic anemia is a less frequent condition in HIV-infected adults, and moderate anemia was common in this population. The prevalence increased with a high viral load, a family history of anemia, and not eating meat. In these patients, early detection and treatment of immune hemolytic anemia is necessary.
Keywords: anemia, hemolytic, immune, autoimmune, HIV/AIDS
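The headline figure above (10 of 358, i.e. 2.8%) can be reproduced, together with an illustrative 95% Wilson score interval; this is a hedged sketch only, and not the Firth penalized regression the authors ran in STATA.

```python
import math

def prevalence_with_wilson_ci(cases, n, z=1.96):
    """Point prevalence with an approximate 95% Wilson score interval."""
    p = cases / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, centre - half, centre + half

# 10 cases of immune hemolytic anemia among 358 participants.
p, ci_lo, ci_hi = prevalence_with_wilson_ci(10, 358)
print(f"prevalence {p:.1%}, 95% CI ({ci_lo:.1%}, {ci_hi:.1%})")
```

The Wilson interval is chosen here only because it behaves well for small case counts; the paper itself does not state which interval, if any, was computed.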
Procedia PDF Downloads 104
676 Data and Model-based Metamodels for Prediction of Performance of Extended Hollo-Bolt Connections
Authors: M. Cabrera, W. Tizani, J. Ninic, F. Wang
Abstract:
Open section beam to concrete-filled tubular column structures have been increasingly utilized in construction over the past few decades due to their enhanced structural performance, as well as economic and architectural advantages. However, the use of this configuration in construction is limited due to the difficulties in connecting the structural members, as there is no access to the inner part of the tube to install standard bolts. Blind-bolted systems are a relatively new approach to overcome this limitation, as they only require access to one side of the tubular section to tighten the bolt. The performance of these connections in concrete-filled steel tubular sections remains uncharacterized due to the complex interactions between concrete, bolt, and steel section. Over the last years, research in structural performance has moved to a more sophisticated and efficient approach consisting of machine learning algorithms to generate metamodels. This method reduces the need for developing complex and computationally expensive finite element models, optimizing the search for desirable design variables. Metamodels generated by a data fusion approach use numerical and experimental results, combining multiple models to capture the dependency between the simulation design variables and connection performance, learning the relations between different design parameters and predicting a given output. Fully characterizing this connection will transform high-rise and multistorey construction by means of the introduction of design guidance for moment-resisting blind-bolted connections, which is currently unavailable. This paper presents a review of the steps taken to develop metamodels, generated by means of artificial neural network algorithms, which predict the connection stress and stiffness based on the design parameters when using Extended Hollo-Bolt blind bolts.
It also considers the failure modes and mechanisms that contribute to the deformability, as well as the feasibility of achieving blind-bolted rigid connections when using the blind fastener.
Keywords: blind-bolted connections, concrete-filled tubular structures, finite element analysis, metamodeling
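The paper's metamodels are artificial neural networks trained on fused numerical and experimental data; those models are not reproduced here. As a minimal sketch of the surrogate-modelling idea only, a least-squares polynomial response surface can stand in for the expensive finite element model (all data values and the quadratic form are illustrative assumptions, not from the paper):

```python
import numpy as np

# Toy "simulation" data: a response (e.g., connection stiffness) against a
# single normalised design parameter. Values are purely illustrative.
x = np.linspace(0.0, 1.0, 20)
y_true = 2.0 + 3.0 * x - 1.5 * x**2
rng = np.random.default_rng(0)
y = y_true + rng.normal(0.0, 0.01, x.size)  # small "experimental" scatter

# Fit a quadratic surrogate (metamodel) by least squares.
coeffs = np.polyfit(x, y, deg=2)            # [a2, a1, a0]
surrogate = np.poly1d(coeffs)

# The surrogate now predicts the response without re-running the simulation.
print(surrogate(0.5))
```

Once fitted, the surrogate can be evaluated thousands of times in a design search at negligible cost, which is the motivation the abstract gives for metamodelling.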
Procedia PDF Downloads 157
675 Assessing the Prevalence of Accidental Iatrogenic Paracetamol Overdose in Adult Hospital Patients Weighing <50kg: A Quality Improvement Project
Authors: Elisavet Arsenaki
Abstract:
Paracetamol overdose is associated with significant and possibly permanent consequences, including hepatotoxicity, acute and chronic liver failure, and death. This quality improvement project explores the prevalence of accidental iatrogenic paracetamol overdose in hospital patients with a low body weight, defined as <50 kg, and assesses the impact of educational posters in trying to reduce it. The study included all adult inpatients on the admissions ward, a short stay ward for patients requiring 12-72 hours of treatment, and consisted of three cycles. Each cycle consisted of 3 days of data collection in a given month (data collection for cycle 1 occurred in January 2022, February 2022 for cycle 2 and March 2022 for cycle 3). All patients given paracetamol had their prescribed dose checked against their charted weight to identify the percentage of adult inpatients <50 kg who were prescribed 1 g of paracetamol instead of 500 mg. In the first cycle of the audit, data were collected from 83 patients who were prescribed paracetamol on the admissions ward. Subsequently, four A4 educational posters were displayed across the ward, on two separate occasions and with a one-month interval between each poster display. The aim of this was to remind prescribing doctors of their responsibility to check patient body weight prior to prescribing paracetamol. Data were collected again one week after each round of poster display, from 72 and 70 patients respectively. Over the 3 cycles, with a cumulative 225 patients, 15 weighed <50 kg (6.67%) and of those, 5 were incorrectly prescribed 1 g of paracetamol, yielding a 33.3% prevalence of accidental iatrogenic paracetamol overdose in adult inpatients. In cycle 1 of the project, 3 out of 6 adult patients weighing <50 kg were overdosed on paracetamol, meaning that 50% of low weight patients were prescribed the wrong dose of paracetamol for their weight.
In the second data collection cycle, 1 out of 5 patients <50 kg was overdosed (20%) and in the third cycle, 1 out of 4 (25%). The use of educational posters resulted in a lower prevalence of accidental iatrogenic paracetamol overdose in low body weight adult inpatients. However, the differences observed were not statistically significant (p = 0.993 and 0.995, respectively). Educational posters did not induce a significant decrease in the prevalence of accidental iatrogenic paracetamol overdose. More robust strategies need to be employed to further decrease paracetamol overdose in patients weighing <50 kg.
Keywords: iatrogenic, overdose, paracetamol, patient, safety
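The dosing rule audited above, that adults under 50 kg should receive 500 mg rather than 1 g per dose, can be sketched as an automated prescribing check. The logic follows the audit's own definition; this is an illustration, not clinical guidance.

```python
def max_paracetamol_dose_mg(weight_kg: float) -> int:
    """Maximum single oral paracetamol dose for an adult, per the
    low-body-weight rule used in the audit (<50 kg -> 500 mg)."""
    return 500 if weight_kg < 50 else 1000

def is_overdosed(weight_kg: float, prescribed_mg: int) -> bool:
    """True when the prescribed dose exceeds the weight-based maximum."""
    return prescribed_mg > max_paracetamol_dose_mg(weight_kg)

# A 48 kg patient prescribed 1 g is flagged; a 70 kg patient is not.
print(is_overdosed(48, 1000), is_overdosed(70, 1000))
```

Embedding such a check in electronic prescribing would be one of the "more robust strategies" the conclusion calls for, since it does not rely on a prescriber noticing a poster.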
Procedia PDF Downloads 111
674 Algorithm for Modelling Land Surface Temperature and Land Cover Classification and Their Interaction
Authors: Jigg Pelayo, Ricardo Villar, Einstine Opiso
Abstract:
The rampant and unintended spread of urban areas has increased the artificial component features in the land cover of the countryside, bringing forth the urban heat island (UHI). This has paved the way to a wide range of negative influences on human health and the environment, commonly relating to air pollution, drought, higher energy demand, and water shortage. Land cover type also plays a relevant role in the process of understanding the interaction between ground surfaces and the local temperature. At the moment, the depiction of land surface temperature (LST) at city/municipality scale, particularly in certain areas of Misamis Oriental, Philippines, is inadequate to support efficient mitigation of and adaptation to the surface urban heat island (SUHI). Thus, this study purposely attempts to apply Landsat 8 satellite data and low-density Light Detection and Ranging (LiDAR) products to map out a quality automated LST model and crop-level land cover classification at a local scale, through a theoretical and algorithm-based approach utilizing the principle of data analysis subjected to a multi-dimensional image object model. The paper also aims to explore the relationship between the derived LST and the land cover classification. The results of the presented model showed the ability of comprehensive data analysis and GIS functionalities, with the integration of an object-based image analysis (OBIA) approach, to automate complex map production processes with considerable efficiency and high accuracy. The findings may potentially lead to expanded investigation of the temporal dynamics of the land surface UHI.
It is worthwhile to note that the environmental significance of these interactions, examined through the combined application of remote sensing, geographic information tools, mathematical morphology and data analysis, can provide microclimate perception, awareness and improved decision-making for land use planning and characterization at local and neighborhood scale. As a result, it can aid in facilitating problem identification and support mitigations and adaptations more efficiently.
Keywords: LiDAR, OBIA, remote sensing, local scale
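A common first step in Landsat 8 LST retrieval, which may differ from the authors' full algorithm, converts Band 10 top-of-atmosphere spectral radiance to at-sensor brightness temperature using the thermal constants published in the Landsat 8 metadata, followed by a simple single-channel emissivity correction. The input radiance and emissivity below are illustrative values.

```python
import math

# Landsat 8 TIRS Band 10 thermal conversion constants (standard
# Landsat 8 metadata values).
K1 = 774.8853   # W/(m^2 sr um)
K2 = 1321.0789  # Kelvin

def brightness_temperature_k(radiance):
    """At-sensor brightness temperature (K) from TOA spectral radiance."""
    return K2 / math.log(K1 / radiance + 1.0)

def emissivity_corrected_lst_k(bt_k, emissivity, wavelength_um=10.895):
    """Single-channel emissivity correction (illustrative sketch):
    LST = BT / (1 + (lambda * BT / rho) * ln(emissivity))."""
    rho = 1.438e-2  # h*c/k_B in m*K
    lam = wavelength_um * 1e-6
    return bt_k / (1.0 + (lam * bt_k / rho) * math.log(emissivity))

bt = brightness_temperature_k(10.0)      # illustrative radiance value
lst = emissivity_corrected_lst_k(bt, 0.97)
print(bt, lst)
```

Per-pixel emissivity is usually derived from the land cover classification itself, which is one reason LST and land cover interact as the abstract describes.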
Procedia PDF Downloads 281
673 Comprehensive Multilevel Practical Condition Monitoring Guidelines for Power Cables in Industries: Case Study of Mobarakeh Steel Company in Iran
Authors: S. Mani, M. Kafil, E. Asadi
Abstract:
Condition Monitoring (CM) of electrical equipment has gained remarkable importance during recent years, due to huge production losses, substantial imposed costs and increases in vulnerability, risk and uncertainty levels. Power cables feed numerous electrical equipment such as transformers, motors, and electric furnaces; thus their condition assessment is of great importance. This paper investigates electrical, structural and environmental failure sources, all of which influence cables' performance and limit their uptime, and provides a comprehensive framework entailing practical CM guidelines for the maintenance of cables in industries. The multilevel CM framework presented in this study covers performance-indicative features of power cables, with a focus on both online and offline diagnosis and test scenarios, and covers short-term and long-term threats to the operation and longevity of power cables. The study, after concisely overviewing the concept of CM, thoroughly investigates five major areas: power quality; the insulation quality features of partial discharges, tan delta and voltage withstand capability; sheath faults; shield currents; and the environmental features of temperature and humidity. It elaborates the interconnections and mutual impacts between those areas, using mathematical formulation and practical guidelines. Detection, location, and severity identification methods for every threat or fault source are also elaborated. Finally, the comprehensive, practical guidelines developed in the study are presented for the specific case of Electric Arc Furnace (EAF) feeder MV power cables in Mobarakeh Steel Company (MSC), the largest steel company in the MENA region, in Iran.
Specific technical and industrial characteristics and limitations of a harsh industrial environment like the MSC EAF feeder cable tunnels are imposed on the presented framework, making the suggested package more practical and tangible.
Keywords: condition monitoring, diagnostics, insulation, maintenance, partial discharge, power cables, power quality
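Of the insulation-quality features listed above, tan delta is the simplest to state: it is the ratio of the resistive (loss) current to the capacitive charging current through the insulation. A minimal sketch with an illustrative acceptance limit; the 0.5% threshold is a placeholder assumption, not a value from the paper.

```python
def tan_delta(resistive_current_a, capacitive_current_a):
    """Dissipation factor: loss (resistive) current divided by the
    capacitive charging current in the cable insulation."""
    return resistive_current_a / capacitive_current_a

def insulation_flag(td, limit=0.005):
    """Flag insulation whose tan-delta exceeds an acceptance limit.
    The 0.5% limit here is illustrative only."""
    return "investigate" if td > limit else "ok"

td = tan_delta(2e-6, 1e-3)  # 2 uA loss current vs 1 mA charging current
print(td, insulation_flag(td))
```

In practice the trend of tan delta over time and over test voltage, rather than a single reading, is what drives maintenance decisions.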
Procedia PDF Downloads 227
672 Brazilian Public Security: Governability and Constitutional Change
Authors: Gabriel Dolabella, Henrique Rangel, Stella Araújo, Carlos Bolonha, Igor de Lazari
Abstract:
Public security is a common subject on the Brazilian political agenda. The seventh largest economy in the world has high crime and insecurity rates. Specialists try to explain this social picture based on poverty, inequality or public policies addressed to drug trafficking. This excerpt approaches State measures to handle that picture. Therefore, public security, meaning the law enforcement institutions, is at the core of this paper, particularly the relationship among federal and state law enforcement agencies, mainly ruled by a system of urgency. The problems are informal changes in law enforcement management and public opinion's collaboration with these changes. Whenever there were huge international events, the Brazilian armed forces occupied streets to assure law enforcement, ensuring order. This logic, considered over the long term, could impact the federal structure of the country. Post-Madisonian theorists verify that urgency is often associated with delegation of powers, which is true for Brazilian law enforcement, but here there is a different delegation: states continuously delegate law enforcement powers to the federal government through the use of the Armed Forces. Therefore, the hypothesis is: Brazil is under a political process of federalization of public security. The political framework addressed here can be explained by the disrespect of legal constraints and the failure of rule of law theoretical models. The methodology of analysis is based on general criteria. Temporally, this study investigates events from 2003, when discussions about the disarmament statute began. Geographically, this study is limited to Brazilian borders. Materially, the analysis results from the observation of legal resources and political resources (pronouncements of government officials).
The main parameters are based on post-Madisonianism, and the federalization of public security can be assessed through credibility and popularity, which allow evaluation of this political process of constitutional change. The objective is to demonstrate how the military forces are used in public security, not as a random fact or an isolated political event, in order to understand the political motivations and effects that stem from that use from an institutional perspective.
Keywords: public security, governability, rule of law, federalism
Procedia PDF Downloads 677
671 Ultra-Wideband Antennas for Ultra-Wideband Communication and Sensing Systems
Authors: Meng Miao, Jeongwoo Han, Cam Nguyen
Abstract:
Ultra-wideband (UWB) time-domain impulse communication and radar systems use ultra-short duration pulses in the sub-nanosecond regime, instead of continuous sinusoidal waves, to transmit information. The pulse directly generates a very wide-band instantaneous signal with various duty cycles depending on specific usages. In UWB systems, the total transmitted power is spread over an extremely wide range of frequencies, so the power spectral density is extremely low. This effectively results in extremely small interference to other radio signals while maintaining excellent immunity to interference from those signals. UWB devices can therefore work within frequencies already allocated for other radio services, helping to maximize this dwindling resource. The impulse UWB technique is therefore attractive for realizing high-data-rate, short-range communications, ground penetrating radar (GPR), and military radar with relatively low emission power levels. UWB antennas are the key element dictating the transmitted and received pulse shape and amplitude in both the time and frequency domains. They should have a good impulse response with minimal distortion. To facilitate integration with transmitters and receivers employing microwave integrated circuits, UWB antennas enabling direct integration are preferred. We present the development of two UWB antennas, operating from 3.1 to 10.6 GHz and from 0.3 to 6 GHz, for UWB systems that provide direct integration with microwave integrated circuits. The operation of these antennas is based on the principle of wave propagation on a non-uniform transmission line. Time-domain EM simulation is conducted to optimize the antenna structures to minimize reflections occurring at the open-end transition. Calculated and measured results of these UWB antennas are presented in both the frequency and time domains. The antennas have good time-domain responses.
They can transmit and receive pulses effectively with minimum distortion, little ringing, and small reflection, clearly demonstrating the signal fidelity of the antennas in reproducing the waveform of UWB signals, which is critical for UWB sensors and communication systems. Good performance together with seamless microwave integrated-circuit integration makes these antennas good candidates not only for UWB applications but also for integration with printed-circuit UWB transmitters and receivers.
Keywords: antennas, ultra-wideband, UWB, UWB communication systems, UWB radar systems
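The sub-nanosecond pulses these antennas carry are inherently wideband; a usual definition classifies a signal as UWB when its -10 dB fractional bandwidth exceeds 20% or its absolute bandwidth exceeds 500 MHz. A sketch generating a Gaussian monocycle, a common UWB test pulse not specific to this paper, and checking that definition (the pulse parameters are illustrative):

```python
import numpy as np

fs = 100e9                       # 100 GS/s sampling rate, illustrative
t = np.arange(-2e-9, 2e-9, 1/fs)
tau = 0.1e-9                     # ~0.1 ns pulse width parameter
# Gaussian monocycle: first derivative of a Gaussian pulse.
pulse = -t / tau**2 * np.exp(-t**2 / (2 * tau**2))

spec = np.abs(np.fft.rfft(pulse))
freqs = np.fft.rfftfreq(t.size, 1/fs)
peak = spec.max()
band = freqs[spec >= peak * 10**(-10/20)]  # frequencies above -10 dB
f_lo, f_hi = band.min(), band.max()
f_c = (f_lo + f_hi) / 2
frac_bw = (f_hi - f_lo) / f_c
print(f"-10 dB band {f_lo/1e9:.2f}-{f_hi/1e9:.2f} GHz, fractional BW {frac_bw:.2f}")
```

The monocycle's fractional bandwidth comes out far above the 20% threshold, which illustrates why the paper's antennas must stay well matched over several octaves rather than at a single carrier.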
Procedia PDF Downloads 236
670 Exploring Coexisting Opportunity of Earthquake Risk and Urban Growth
Authors: Chang Hsueh-Sheng, Chen Tzu-Ling
Abstract:
Earthquakes are unpredictable natural disasters, and intense earthquakes have caused serious impacts on socio-economic systems and environmental and social resilience, further increasing vulnerability. Earthquakes do not kill people; buildings do. Buildings located near earthquake-prone areas and constructed upon poorer soils may suffer earthquake-induced ground damage. In addition, many existing buildings were built before improved seismic provisions began to be required in building codes, and inappropriate land use in densely populated areas can result in much more serious earthquake disasters. Indeed, not only do earthquake disasters impact seriously on the urban environment, but urban growth itself can increase vulnerability. Since the 1980s, 'cutting down risks and vulnerability' has been brought up in both urban planning and architecture; this concept goes well beyond retrofitting of seismic damage, seismic resistance, and better anti-seismic structures, and has become the key action in disaster mitigation. Land use planning and zoning are two critical non-structural measures for controlling physical development, yet it is difficult for zoning boards and governing bodies to restrict development of questionable lands to uses compatible with the hazard without credible earthquake loss projections. Therefore, identifying potential earthquake exposure, vulnerable people and places, and urban development areas can provide strongly supportive information for decision makers. Taiwan is located on the Pacific Ring of Fire, a seismically active zone. Some of the active faults have been found close to densely populated and highly developed built environments in the cities. Therefore, this study attempts, from the perspective of carrying capacity, to draft a micro-zonation according to both a vulnerability index and an urban growth index, while considering the spatial variance of multiple factors via geographically weighted principal components analysis (GWPCA).
The purpose of this study is to provide supporting information for decision makers on revising existing zoning in high-risk areas towards more compatible uses, and for the public on managing risks.
Keywords: earthquake disaster, vulnerability, urban growth, carrying capacity, geographically weighted principal components analysis (GWPCA), bivariate spatial association statistic
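GWPCA weights each observation by its geographic location; as a simplified, hedged illustration of the underlying step only, here is an ordinary (non-geographical) principal component analysis of a synthetic vulnerability-indicator matrix via eigendecomposition of the covariance. The indicators and data are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic indicator matrix: rows = zones, columns = vulnerability
# indicators (e.g., building age, soil score, population density).
X = rng.normal(size=(200, 3))
X[:, 2] = 0.8 * X[:, 0] + 0.2 * rng.normal(size=200)  # correlated indicator

Xc = X - X.mean(axis=0)               # centre each indicator
cov = Xc.T @ Xc / (len(X) - 1)        # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]     # sort components by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()   # variance explained per component
scores = Xc @ eigvecs                 # component scores per zone
print(explained)
```

In the geographically weighted variant, the covariance is recomputed at each location with a spatial kernel weighting nearby zones more heavily, so loadings vary over space.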
Procedia PDF Downloads 256
669 Health Monitoring of Composite Pile Construction Using Fiber Bragg Gratings Sensor Arrays
Authors: B. Atli-Veltin, A. Vosteen, D. Megan, A. Jedynska, L. K. Cheng
Abstract:
Composite materials combine the advantages of being lightweight and possessing high strength. This is of particular interest for the development of large constructions, e.g., aircraft, space applications, wind turbines, etc. One of the shortcomings of using composite materials is the complex nature of the failure mechanisms, which makes it difficult to predict the remaining lifetime. Therefore, condition and health monitoring are essential when using composite material for critical parts of a construction. Different types of sensors are used or developed to monitor composite structures. These include ultrasonic, thermography, shearography and fiber optic sensors. The first three technologies are complex and mostly used for measurement in the laboratory or during maintenance of the construction. Optical fiber sensors can be surface mounted on or embedded in the composite construction to provide the unique advantage of in-operation measurement of mechanical strain and other parameters of interest. This is identified as a promising technology for Structural Health Monitoring (SHM) or Prognostic Health Monitoring (PHM) of composite constructions. Among the different fiber optic sensing technologies, the Fiber Bragg Grating (FBG) sensor is the most mature and widely used. FBG sensors can be realized in an array configuration with many FBGs in a single optical fiber. In the current project, different aspects of using embedded FBGs for composite wind turbine monitoring are investigated. The activities are divided into two parts. Firstly, an FBG-embedded carbon composite laminate is subjected to tensile and bending loading to investigate the response of FBGs placed in different orientations with respect to the fiber. Secondly, the use of an FBG sensor array for temperature and strain sensing and monitoring of a 5 m long scale model of a glass fiber mono-pile is demonstrated. Two different FBG types are used: special in-house fibers and off-the-shelf ones.
The results from the first part of the study show that the FBG sensors survive the conditions during the production of the laminate. The test results from the tensile and bending experiments indicate that the sensors successfully respond to the change of strain. The measurements from the sensors will be correlated with the strain gauges that are placed on the surface of the laminates.
Keywords: Fiber Bragg Gratings, embedded sensors, health monitoring, wind turbine towers
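The FBG strain response measured above follows the textbook Bragg condition, lambda_B = 2 * n_eff * Lambda, with a relative wavelength shift under axial strain of approximately (1 - p_e) * strain. The constants below are typical silica-fibre values, not values taken from the paper.

```python
def bragg_wavelength_nm(n_eff: float, period_nm: float) -> float:
    """Bragg condition: lambda_B = 2 * n_eff * grating period."""
    return 2.0 * n_eff * period_nm

def wavelength_shift_nm(lambda_b_nm: float, strain: float,
                        p_e: float = 0.22) -> float:
    """Strain-induced shift: d(lambda)/lambda = (1 - p_e) * strain.
    p_e ~ 0.22 is a typical effective photo-elastic coefficient
    for silica fibre (assumed, not from the paper)."""
    return lambda_b_nm * (1.0 - p_e) * strain

lam = bragg_wavelength_nm(1.45, 535.0)   # ~1551.5 nm, telecom band
shift = wavelength_shift_nm(lam, 1e-3)   # 1000 microstrain
print(lam, shift)
```

The shift of roughly 1.2 nm per 1000 microstrain is why an interrogator resolving picometres can track strain at the microstrain level, and why many gratings at distinct wavelengths can share one fibre as an array.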
Procedia PDF Downloads 242
668 Enhancing Signal Reception in a Mobile Radio Network Using Adaptive Beamforming Antenna Arrays Technology
Authors: Ugwu O. C., Mamah R. O., Awudu W. S.
Abstract:
This work aims at enhancing signal reception and minimizing outage probability in a mobile radio network using adaptive beamforming antenna arrays. In this research work, an empirical real-time drive measurement was done in a cellular network of Globalcom Nigeria Limited located at Ikeja, the capital of Lagos State, Nigeria, with reference base station number KJA 004. The empirical measurement included Received Signal Strength and Bit Error Rate, which were recorded for exact prediction of the signal strength of the network at the time of carrying out this research work. The Received Signal Strength and Bit Error Rate were measured with a spectrum monitoring van with the help of a ray tracer at intervals of 100 meters, up to 700 meters from the transmitting base station. The distance and angular location measurements from the reference network were done with the help of the Global Positioning System (GPS). The other equipment used were transmitting equipment measurement software (Temsoftware), laptops and log files, which showed received signal strength with distance from the base station. An outage of about 11% was obtained from the real-time experiment, which showed that mobile radio networks are prone to signal failure; this can be minimized using an adaptive beamforming antenna array, in terms of a significant reduction in Bit Error Rate, which implies improved performance of the mobile radio network. In addition, this work not only included experiments done through empirical measurement but also enhanced mathematical models that were developed and implemented as a reference model for accurate prediction. The proposed signal models were based on the analysis of continuous time and discrete space, and some other assumptions. These developed (proposed) enhanced models were validated using MATLAB (version 7.6.3.35) and compared with the conventional antenna for accuracy.
These outage models were used to manage the blocked call experience in the mobile radio network. A 20% improvement was obtained when the adaptive beamforming antenna arrays were implemented on the wireless mobile radio network.
Keywords: beamforming algorithm, adaptive beamforming, simulink, reception
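Adaptive beamforming improves reception by steering the array's main lobe toward the desired signal. As a minimal sketch of the principle (a uniform linear array with fixed steering weights, rather than the adaptive algorithm the paper evaluates), here is the array factor of an assumed 8-element, half-wavelength-spaced array steered to 30 degrees:

```python
import numpy as np

def array_factor_db(n_elem, d_over_lambda, steer_deg, angles_deg):
    """Normalised array factor (dB) of a uniform linear array whose
    per-element phase weights steer the beam toward steer_deg."""
    k_d = 2 * np.pi * d_over_lambda
    n = np.arange(n_elem)
    # Steering weights: conjugate phase of the desired direction.
    steer = np.exp(-1j * k_d * n * np.sin(np.radians(steer_deg)))
    af = np.array([np.abs(np.sum(steer * np.exp(1j * k_d * n *
                                                np.sin(np.radians(a)))))
                   for a in angles_deg])
    af /= n_elem
    return 20 * np.log10(np.maximum(af, 1e-12))

angles = np.linspace(-90, 90, 721)
af_db = array_factor_db(8, 0.5, 30.0, angles)
print(angles[np.argmax(af_db)])  # main lobe near 30 degrees
```

An adaptive beamformer updates those weights continuously from the received data, which is what yields the Bit Error Rate reduction reported above.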
Procedia PDF Downloads 40
667 Applications of Space Technology in Flood Risk Mapping in Parts of Haryana State, India
Authors: B. S. Chaudhary
Abstract:
The severity and frequency of different disasters around the globe are increasing in recent years. India is also facing disasters in the form of drought, cyclones, earthquakes, landslides, and floods. One of the major causes of disaster in northern India is flooding, which brings great losses and extensive damage to agricultural crops, property, and human and animal life, causing environmental imbalances in places. The annual global figure for losses due to floods runs to over 2 billion dollars. India is a vast country with wide variations in climate and topography. Due to widespread and heavy rainfall during the monsoon months, floods of varying magnitude occur all over the country from June to September. The magnitude depends upon the intensity of rainfall, its duration and also the ground conditions at the time of rainfall. Haryana, one of the agriculturally dominated northern states, is also suffering from a number of disasters such as floods, desertification, soil erosion, and land degradation. Earthquakes also occur frequently, but of small magnitude, so they are not causing much concern and damage. Most of the damage in Haryana is due to floods, which have occurred in 1978, 1988, 1993, 1995, 1998, and 2010, to mention a few. The present paper deals with Remote Sensing and GIS applications in preparing flood risk maps in parts of Haryana State, India. Satellite data of various years have been used for mapping flood-affected areas. The flooded areas have been interpreted both visually and digitally, and two classes, flooded and receded water/wet areas, have been identified for each year. These have been analyzed in a GIS environment to prepare the risk maps, which show areas of high, moderate and low risk depending on the frequency with which flooding was witnessed.
The floods leave a trail of suffering in the form of unhygienic conditions due to improper sanitation, waterlogging, filth littered in the area, degradation of materials, and unsafe drinking water, making people prone to many types of diseases in the short and long run. Attempts have also been made to enumerate the causes of floods, and suggestions are given for mitigating the fury of floods and for proper management of issues related to evacuation and safe places nearby.
Keywords: flood mapping, GIS, Haryana, India, remote sensing, space technology
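The high, moderate, and low risk classes described in this abstract come from counting how many of the mapped years each area was flooded. A minimal sketch of such a frequency-based classification; the thresholds, cell identifiers, and flood counts below are illustrative assumptions, not values from the study:

```python
# Hypothetical sketch: classify grid cells into flood-risk classes by how
# often each cell was mapped as flooded across the analysed years.
# Thresholds are illustrative, not taken from the paper.

FLOOD_YEARS = [1978, 1988, 1993, 1995, 1998, 2010]  # years mapped in the study

def risk_class(times_flooded, n_years=len(FLOOD_YEARS)):
    """Return 'high', 'moderate' or 'low' from flood frequency."""
    ratio = times_flooded / n_years
    if ratio >= 0.5:
        return "high"
    if ratio >= 0.25:
        return "moderate"
    return "low"

# toy cells: cell id -> number of years the cell was interpreted as flooded
cells = {"A1": 5, "A2": 2, "A3": 0, "B1": 3}
risk_map = {cid: risk_class(n) for cid, n in cells.items()}
```

In a real GIS workflow the per-year flood/receded-water rasters would be overlaid and the count computed per pixel; the classification step itself is this simple.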
Procedia PDF Downloads 208
666 Selection of Strategic Suppliers for Partnership: A Model with a Two-Stage Approach
Authors: Safak Isik, Ozalp Vayvay
Abstract:
Strategic partnerships with suppliers play a vital role in the long-term, value-based supply chain. Such strategic collaboration remains one of the top priorities of many business organizations seeking to create additional value, benefiting mainly from a supplier's specialization, capacity, and innovative power while securing supply and better managing costs and quality. However, many organizations encounter difficulties in initiating, developing, and managing these partnerships, and many attempts end in failure. One reason for such failure is the incompatibility of the partnership's members, in other words, wrong supplier selection, which underlines the significance of the selection process as the beginning stage. An effective process for selecting strategic suppliers is critical to the success of the partnership. Although there are several research studies on supplier selection in the literature, only a few relate to strategic supplier selection for long-term partnership. The purpose of this study is to propose a conceptual model for the selection of strategic partnership suppliers. A two-stage approach has been used in the proposed model, incorporating segmentation first and selection second. In the first stage, considering the fact that not all suppliers are strategically equal, Kraljic's purchasing portfolio matrix can be used for segmentation instead of a long list of potential suppliers. This supplier segmentation is the process of categorizing suppliers based on a defined set of criteria in order to identify supplier types and determine potential suppliers for strategic partnership. In the second stage, a comprehensive evaluation and selection can be performed on the pool of potential suppliers defined in the first phase to finally identify strategic suppliers, considering various tangible and intangible criteria.
Since a long-term relationship with strategic suppliers is anticipated, the criteria should consider both the current and future status of the supplier. Based on an extensive literature review, strategic, operational, and organizational criteria have been determined and elaborated. The result of the selection can also be used to identify suppliers who are not yet ready for a partnership but could be developed toward one. Since the model is based on multiple criteria at both stages, it provides a framework for further utilization of Multi-Criteria Decision Making (MCDM) techniques. The model may also be applied to a wide range of industries and involve managerial features in business organizations.
Keywords: Kraljic's matrix, purchasing portfolio, strategic supplier selection, supplier collaboration, supplier partnership, supplier segmentation
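The first-stage segmentation described above can be sketched numerically. In Kraljic's matrix, items (and hence their suppliers) are placed in four quadrants by profit impact and supply risk; only the "strategic" quadrant feeds the second-stage MCDM evaluation. The scores, cutoff, and supplier names below are invented for illustration:

```python
# Illustrative first-stage Kraljic segmentation (scores and cutoff are
# assumptions, not from the paper): each supplier is scored on profit
# impact and supply risk in [0, 1]; "strategic" quadrant members become
# candidates for the second-stage evaluation.

def kraljic_quadrant(profit_impact, supply_risk, cutoff=0.5):
    if profit_impact >= cutoff and supply_risk >= cutoff:
        return "strategic"
    if profit_impact >= cutoff:
        return "leverage"
    if supply_risk >= cutoff:
        return "bottleneck"
    return "non-critical"

suppliers = {
    "S1": (0.8, 0.7),   # high impact, high risk -> strategic candidate
    "S2": (0.9, 0.2),   # high impact, low risk  -> leverage
    "S3": (0.3, 0.8),   # low impact, high risk  -> bottleneck
    "S4": (0.2, 0.1),   # low impact, low risk   -> non-critical
}
candidates = [s for s, (p, r) in suppliers.items()
              if kraljic_quadrant(p, r) == "strategic"]
```

In practice, the quadrant scores would themselves be aggregated from the defined criteria set, and the `candidates` pool would then be ranked with an MCDM technique such as AHP or TOPSIS.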
Procedia PDF Downloads 238
665 Factors Associated with Seroconversion of Oral Polio Vaccine among Children under 5 Years in District Mirpurkhas, Pakistan, 2015
Authors: Muhammad Asif Syed, Mirza Amir Baig
Abstract:
Background: Pakistan is one of the two remaining polio-endemic countries, posing a significant public health challenge for global polio eradication due to the failure to interrupt polio transmission. Country-specific seroprevalence studies help in evaluating immunization program performance and population susceptibility to poliovirus, and in identifying the existing level of immunity along with the factors that affect seroconversion of the oral polio vaccine (OPV). The objective of the study was to identify factors associated with seroconversion of the OPV among children aged 6-59 months in Pakistan. Methods: A hospital-based cross-sectional serosurvey was undertaken in May-June 2015 in District Mirpurkhas, Sindh, Pakistan. A total of 180 children aged 6-59 months were selected by systematic random sampling from Muhammad Medical College Hospital, Mirpurkhas. Demographic information, vaccination history, and risk-factor data were collected from the parents/guardians. Blood samples were collected and tested for poliovirus IgG antibodies using an ELISA kit. IgG titers of <10 IU/ml, 50 to <150 IU/ml, and >150 IU/ml were defined as negative, weak positive, and positive immunity, respectively. Pearson's chi-square test was used to determine differences in seroprevalence in univariate analysis. Results: A total of 180 subjects were enrolled; the mean age was 23 months (range 7-59 months). Of these, 160 (89%) children were well protected and 18 (10%) partially protected against poliovirus. Two (1.1%) children had no protection against poliovirus, with IgG antibody titers of <10 IU/ml. Both negative cases were female, in the 12-23 month age group, from urban areas, and had a BMI below the 50th percentile. The difference between normal and wasted children attained statistical significance (χ² = 35.5, p < 0.001).
A difference in seroconversion was also observed in relation to gender (χ² = 6.23, p = 0.04), duration of breastfeeding (χ² = 18.6, p = 0.04), history of diarrheal disease before polio vaccine administration (χ² = 7.7, p = 0.02), and stunting (χ² = 114, p < 0.001). Conclusion: This study demonstrated that nearly 90% of children achieved seroconversion of OPV and were well protected against poliovirus. There is an urgent need for immunization strategy to address factors such as duration of breastfeeding, diarrheal diseases, and malnutrition (acute and chronic) among children.
Keywords: seroconversion, oral polio vaccine, polio, Pakistan
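The titre banding reported in this abstract can be written down directly. Note the abstract's own thresholds leave some ranges (e.g. 10 to 50 IU/ml) undefined, which the sketch makes explicit rather than guessing:

```python
# Sketch of the titre banding as reported in the abstract: IgG < 10 IU/ml is
# negative, 50 to < 150 IU/ml weak positive, > 150 IU/ml positive.  Bands the
# abstract does not define (e.g. 10-50 IU/ml) are returned as "undefined"
# rather than assigned a made-up class.

def classify_titre(igg_iu_ml):
    if igg_iu_ml < 10:
        return "negative"
    if 50 <= igg_iu_ml < 150:
        return "weak positive"
    if igg_iu_ml > 150:
        return "positive"
    return "undefined"

results = [classify_titre(t) for t in (5, 80, 200)]
```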
Procedia PDF Downloads 300
664 Terrestrial Laser Scans to Assess Aerial LiDAR Data
Authors: J. F. Reinoso-Gordo, F. J. Ariza-López, A. Mozas-Calvache, J. L. García-Balboa, S. Eddargani
Abstract:
The quality of DEMs may depend on several factors, such as the data source, the capture method, the type of processing used to derive them, or the cell size of the DEM. The two most important capture methods for producing regional-sized DEMs are photogrammetry and LiDAR; DEMs covering entire countries have been obtained with these methods. The quality of these DEMs has traditionally been evaluated by the national cartographic agencies through punctual sampling focused on the vertical component. For this type of evaluation there are standards such as NMAS and the ASPRS Positional Accuracy Standards for Digital Geospatial Data. However, it seems more appropriate to carry out this evaluation with a method that takes into account the superficial nature of the DEM, so that the sampling is superficial rather than punctual. This work is part of the research project "Functional Quality of Digital Elevation Models in Engineering", in which it is necessary to control the quality of a DEM whose data source is an experimental LiDAR flight with a density of 14 points per square meter, which we call the Point Cloud Product (PCpro). The present work describes the data capture on the ground and the post-processing tasks required to obtain the point cloud that will be used as reference (PCref) to evaluate the quality of the PCpro. Each PCref consists of a 50 x 50 m patch produced by registering 4 different scan stations. The area studied was the Spanish region of Navarra, which covers 10,391 km2; 30 homogeneously distributed patches were necessary to sample the entire surface. The patches were captured using a Leica BLK360 terrestrial laser scanner mounted on a pole reaching heights of up to 7 meters; the scanner was mounted inverted so that the characteristic shadow circle that appears when the scanner is in the direct position is eliminated.
To ensure that the accuracy of the PCref is greater than that of the PCpro, the georeferencing of the PCref was carried out with real-time GNSS, and its positioning accuracy was better than 4 cm; this is much better than the altimetric mean square error estimated for the PCpro (<15 cm). The DEM of interest corresponds to the bare earth, so it was necessary to apply a filter to eliminate vegetation and auxiliary elements such as poles, tripods, etc. After the post-processing tasks, the PCref is ready to be compared with the PCpro using different techniques: cloud to cloud, or, after a resampling process, DEM to DEM.
Keywords: data quality, DEM, LiDAR, terrestrial laser scanner, accuracy
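The cloud-to-cloud comparison mentioned at the end of the abstract amounts to measuring, for each product point, the distance to its nearest reference point. A toy sketch with invented coordinates (a real 50 x 50 m patch at 14 pts/m² would need a spatial index such as a k-d tree instead of brute force):

```python
# Minimal cloud-to-cloud check: for every point of the product cloud (PCpro),
# find the distance to the nearest reference point (PCref), then summarise
# the per-point errors as an RMSE.  Coordinates below are toy values.
import math

def nearest_distance(p, cloud):
    return min(math.dist(p, q) for q in cloud)  # brute force; fine for a toy

pc_ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.1)]
pc_pro = [(0.0, 0.0, 0.05), (1.0, 0.1, 0.0)]

errors = [nearest_distance(p, pc_ref) for p in pc_pro]
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
```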
Procedia PDF Downloads 99
663 Evaluation of Simple, Effective and Affordable Processing Methods to Reduce Phytates in the Legume Seeds Used for Feed Formulations
Authors: N. A. Masevhe, M. Nemukula, S. S. Gololo, K. G. Kgosana
Abstract:
Background and Study Significance: Legume seeds are important in agriculture, where they are used for feed formulations because of their nutrient density, low cost, and easy accessibility. Although they are important sources of energy, proteins, carbohydrates, vitamins, and minerals, they contain abundant quantities of anti-nutritive factors that reduce the bioavailability of nutrients, the digestibility of proteins, and mineral absorption in livestock. The removal of these factors, however, is too costly, as it requires expensive state-of-the-art techniques such as high-pressure and thermal processing. Basic Methodology: The aim of the study was to investigate cost-effective methods that can be used to reduce the inherent phytates, putative antinutrients, in legume seeds. The seeds of Arachis hypogaea, Pisum sativum and Vigna radiata L. were subjected to the single processing methods, viz. raw seeds plus dehulling (R+D), soaking plus dehulling (S+D), ordinary cooking plus dehulling (C+D), infusion plus dehulling (I+D), autoclaving plus dehulling (A+D), and microwaving plus dehulling (M+D), and to five combined methods (S+I+D; S+A+D; I+M+D; S+C+D; S+M+D). All the processed seeds were dried, ground into powder, extracted, and analyzed on a microplate reader to determine the percentage of phytates per dry mass of the legume seeds. Phytic acid was used as a positive control, and one-way ANOVA was used to determine significant differences between the means of the processing methods at a threshold of 0.05. Major Findings: The processing methods gave percentage yield ranges of 39.1-96%, 67.4-88.8%, and 70.2-93.8% for V. radiata, A. hypogaea and P. sativum, respectively. Though the raw seeds contained the highest phytate contents, ranging between 0.508 and 0.527% as expected, R+D resulted in a slightly lower phytate percentage range of 0.469-0.485%, while the other processing methods resulted in phytate contents below 0.35%.
The M+D and S+M+D methods showed low phytate percentage ranges of 0.276-0.296% and 0.272-0.294%, respectively, with the lowest percentage determined for S+M+D in P. sativum. These results were found to be significantly different (p < 0.05). Since phytates cause micronutrient deficits by chelating important minerals such as calcium, zinc, iron, and magnesium, and cannot be digested by ruminants, their reduction may enhance nutrient bioavailability. Concluding Statement: Although the nutritive analysis of the processed legume seeds is still in progress, the M+D and S+M+D methods, which significantly reduced the phytates in the investigated legume seeds, may be recommended to local farmers and feed-producing industries to enhance animal health and production at an affordable cost.
Keywords: anti-nutritive factors, extraction, legume seeds, phytate
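The method comparison above rests on one-way ANOVA. A minimal sketch of the F-statistic computation, with made-up phytate replicates (the study's raw data are not given in the abstract; in practice `scipy.stats.f_oneway` would also supply the p-value):

```python
# Hand-rolled one-way ANOVA F-statistic; replicate values are invented
# for illustration, not the paper's measurements.

def anova_f(groups):
    k = len(groups)                              # number of methods
    n = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

raw   = [0.51, 0.52, 0.53]   # illustrative phytate % per replicate
m_d   = [0.28, 0.29, 0.30]
s_m_d = [0.27, 0.28, 0.29]
f_stat = anova_f([raw, m_d, s_m_d])
```

A large F relative to the F(k-1, n-k) critical value corresponds to the significant between-method difference the abstract reports.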
Procedia PDF Downloads 26
662 Assessing the Influence of Station Density on Geostatistical Prediction of Groundwater Levels in a Semi-arid Watershed of Karnataka
Authors: Sakshi Dhumale, Madhushree C., Amba Shetty
Abstract:
The effect of station density on the geostatistical prediction of groundwater levels is of critical importance to ensure accurate and reliable predictions. Monitoring station density directly impacts the accuracy and reliability of geostatistical predictions by influencing the model's ability to capture localized variations and small-scale features in groundwater levels. This is particularly crucial in regions with complex hydrogeological conditions and significant spatial heterogeneity. Insufficient station density can result in larger prediction uncertainties, as the model may struggle to adequately represent the spatial variability and correlation patterns of the data. On the other hand, an optimal distribution of monitoring stations enables effective coverage of the study area and captures the spatial variability of groundwater levels more comprehensively. In this study, we investigate the effect of station density on the predictive performance of groundwater levels using the geostatistical technique of Ordinary Kriging. The research utilizes groundwater level data collected from 121 observation wells within the semi-arid Berambadi watershed, gathered over a six-year period (2010-2015) from the Indian Institute of Science (IISc), Bengaluru. The dataset is partitioned into seven subsets representing varying sampling densities, ranging from 15% (12 wells) to 100% (121 wells) of the total well network. The results obtained from different monitoring networks are compared against the existing groundwater monitoring network established by the Central Ground Water Board (CGWB). The findings of this study demonstrate that higher station densities significantly enhance the accuracy of geostatistical predictions for groundwater levels. The increased number of monitoring stations enables improved interpolation accuracy and captures finer-scale variations in groundwater levels. 
These results shed light on the relationship between station density and the geostatistical prediction of groundwater levels, emphasizing the importance of appropriate station densities to ensure accurate and reliable predictions. The insights gained from this study have practical implications for designing and optimizing monitoring networks, facilitating effective groundwater level assessments, and enabling sustainable management of groundwater resources.
Keywords: station density, geostatistical prediction, groundwater levels, monitoring networks, interpolation accuracy, spatial variability
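The density experiment (15% to 100% subsets of the well network) can be illustrated with a toy subsampling loop. Note the interpolator below is inverse-distance weighting, used only as a lightweight stand-in for the paper's Ordinary Kriging, which additionally requires fitting a variogram and solving a kriging system; the wells, levels, and target point are synthetic:

```python
# Toy density experiment: subsample a synthetic well network at several
# fractions and measure interpolation error at a held-out location.
# IDW stands in for Ordinary Kriging here; all data are invented.
import math
import random

def idw(x, y, stations, power=2):
    """Inverse-distance-weighted estimate at (x, y) from (sx, sy, level)."""
    num = den = 0.0
    for sx, sy, level in stations:
        d = math.hypot(x - sx, y - sy)
        if d < 1e-9:
            return level                 # exact hit on a station
        w = 1.0 / d ** power
        num += w * level
        den += w
    return num / den

random.seed(0)
# synthetic "wells": groundwater level is a smooth function of position
wells = [(x, y, 10 + 0.5 * x - 0.3 * y)
         for x in range(10) for y in range(10)]
target = (4.5, 4.5)
truth = 10 + 0.5 * 4.5 - 0.3 * 4.5

errors = {}
for frac in (0.15, 0.5, 1.0):            # mimic the 15%-100% subsets
    subset = random.sample(wells, max(3, int(frac * len(wells))))
    errors[frac] = abs(idw(*target, subset) - truth)
```

Repeating the loop over many random subsets and averaging the errors per fraction reproduces the qualitative finding: denser networks give smaller interpolation error.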
Procedia PDF Downloads 56
661 Dancing with Perfectionism and Emotional Inhibition on the Ground of Disordered Eating Behaviors: Investigating Emotion Regulation Difficulties as a Mediating Factor
Authors: Merve Denizci Nazligul
Abstract:
Dancers seem to have a much higher risk of developing eating disorders than their non-dancing counterparts. In the remarkably competitive dance environment, perfectionism and emotion regulation difficulties become inevitable risk factors. Moreover, early maladaptive schemas are associated with various eating disorders. The current study aimed to investigate the mediating role of difficulties with emotion regulation in the relationship between perfectionism and disordered eating behaviors, as well as in the relationship between early maladaptive schemas and disordered eating behaviors. A total of 70 volunteer dancers (n = 47 women, n = 23 men) were recruited for the study (M age = 25.91, SD = 8.9, range 19-63) from university teams or private clubs in Turkey. The sample included various types of dancers (n = 26 ballet, n = 32 Latin, n = 10 tango, n = 2 hip-hop). The mean dancing time per week was 11.09 hours (SD = 7.09), within a range of 1-30 hours. The participants completed a questionnaire set including a demographic information form, the Dutch Eating Behavior Questionnaire, the Multidimensional Perfectionism Scale, three subscales (Emotional Inhibition, Unrelenting Standards-Hypercriticalness, Approval Seeking-Recognition Seeking) from the Young Schema Questionnaire-Short Form-3, and the Difficulties in Emotion Regulation Scale. The mediation hypotheses were tested using the PROCESS macro in SPSS. The findings revealed that emotion regulation difficulties significantly mediated the relationship between three distinct subtypes of perfectionism and emotional eating. The results of the Sobel test suggested that there were significant indirect effects of self-oriented perfectionism (b = .06, 95% CI = .0084, .1739), other-oriented perfectionism (b = .15, 95% CI = .0136, .4185), and socially prescribed perfectionism (b = .09, 95% CI = .0104, .2344) on emotional eating through difficulties with emotion regulation.
Moreover, emotion regulation difficulties significantly mediated the relationship between emotional inhibition and emotional eating (F(1,68) = 4.67, R² = .06, p < .05). These results provide some evidence that perfectionism may become a risk factor for disordered eating behaviors when dancers are unable to regulate their emotions. Further, understanding how the inhibition of emotions adversely affects eating behavior may be important for developing intervention strategies to manage disordered eating patterns in risk groups. The present study also supports the importance of using unified protocols for transdiagnostic approaches that focus on identifying, accepting, and prompting the expression of maladaptive emotions and appraisals.
Keywords: dancers, disordered eating, emotion regulation difficulties, perfectionism
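The Sobel test used for the indirect effects above has a closed form: for mediation paths a (predictor to mediator) and b (mediator to outcome), z = ab / sqrt(b²·SE_a² + a²·SE_b²). A minimal sketch; the coefficients and standard errors below are placeholders, not the study's values:

```python
# Sobel z-statistic for an indirect (mediated) effect.
# a:  path from predictor (e.g. perfectionism) to mediator
#     (emotion regulation difficulties), se_a its standard error
# b:  path from mediator to outcome (emotional eating), se_b its SE
import math

def sobel_z(a, se_a, b, se_b):
    """z = a*b / sqrt(b^2*se_a^2 + a^2*se_b^2)."""
    return (a * b) / math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)

# placeholder coefficients, not taken from the paper
z = sobel_z(a=0.40, se_a=0.10, b=0.30, se_b=0.08)
significant = abs(z) > 1.96          # two-tailed, alpha = .05
```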
Procedia PDF Downloads 144
660 Adsorptive Media Selection for Bilirubin Removal: An Adsorption Equilibrium Study
Authors: Vincenzo Piemonte
Abstract:
The liver is a complex, large-scale biochemical reactor that plays a unique role in human physiology. When the liver ceases to perform its physiological activity, a functional replacement is required. At present, liver transplantation is the only clinically effective method of treating severe liver disease. This therapeutic approach, however, is hampered by the disparity between organ availability and the number of patients on the waiting list. To overcome this critical issue, research activities have focused on liver support device systems (LSDs) designed to bridge patients to transplantation or to keep them alive until the recovery of native liver function. In recirculating albumin dialysis devices, such as MARS (Molecular Adsorbent Recirculating System), adsorption is one of the fundamental steps in albumin-dialysate regeneration. Among the albumin-bound toxins that must be removed from blood during liver-failure therapy, bilirubin and tryptophan can be considered representative of two different toxin classes: the first not water soluble at physiological blood pH and strongly bound to albumin, the second loosely albumin bound and partially water soluble at pH 7.4. Fixed-bed units are normally used for this task, and the design of such units requires information on both toxin adsorption equilibrium and kinetics. The most common adsorptive media used in LSDs are activated carbon, non-ionic polymeric resins, and anionic resins. In this paper, bilirubin adsorption isotherms on different adsorptive media, such as polymeric resin, albumin-coated resin, anionic resin, activated carbon, and alginate beads with entrapped albumin, are presented. Comparing all the results, it can be stated that the adsorption capacity for bilirubin of the five media increases in the following order: alginate beads < polymeric resin < albumin-coated resin < activated carbon < anionic resin.
The main focus of this paper is to provide useful guidelines for the optimization of liver support devices that implement adsorption columns to remove albumin-bound toxins from albumin-dialysate solutions.
Keywords: adsorptive media, adsorption equilibrium, artificial liver devices, bilirubin, mathematical modelling
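The capacity ranking above comes from equilibrium isotherms. As an illustration only (the abstract does not name the isotherm model used), a Langmuir form q = q_max·K·C/(1 + K·C) shows how fitted parameters let media be compared at a common equilibrium concentration; every parameter value below is invented:

```python
# Langmuir isotherm sketch for comparing adsorptive media.  The (q_max, K)
# pairs are invented placeholders, merely ordered to echo the abstract's
# ranking; the paper's actual isotherm model and parameters are not given.

def langmuir(c, q_max, k):
    """Adsorbed amount q at free-toxin equilibrium concentration c."""
    return q_max * k * c / (1.0 + k * c)

media = {
    "alginate beads":       (5.0, 0.4),
    "polymeric resin":      (8.0, 0.5),
    "albumin-coated resin": (10.0, 0.5),
    "activated carbon":     (12.0, 0.6),
    "anionic resin":        (15.0, 0.6),
}
c_eq = 2.0  # illustrative equilibrium concentration
uptake = {m: langmuir(c_eq, *p) for m, p in media.items()}
best = max(uptake, key=uptake.get)
```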
Procedia PDF Downloads 254
659 Assessment of Groundwater Quality in Karakulam Grama Panchayath in Thiruvananthapuram, Kerala State, South India
Authors: D. S. Jaya, G. P. Deepthi
Abstract:
Groundwater is vital to the livelihoods and health of the majority of people, since it provides almost the entire water resource for domestic, agricultural, and industrial uses. Groundwater quality comprises physical, chemical, and bacteriological qualities. The present investigation was carried out to determine the physicochemical and bacteriological quality of the groundwater sources in the residential areas of Karakulam Grama Panchayath in Thiruvananthapuram district, Kerala state, India. Karakulam is located in the eastern suburbs of Thiruvananthapuram city, and the major drinking water sources of the residents in the study area are wells. The present study aims to assess the potability and irrigational suitability of groundwater in the study area. After a preliminary field survey, water samples were collected from randomly selected dug wells and bore wells in the study area during the post-monsoon and pre-monsoon seasons of the year 2014. The physical, chemical, and bacteriological parameters of the water samples were analysed following standard procedures. The concentrations of heavy metals (Cd, Pb, and Mn) in the acid-digested water samples were determined using an Atomic Absorption Spectrophotometer. The results showed that the pH of the well water samples ranged from acidic to alkaline. In the majority of well water samples (>54%), the iron and magnesium content was found to be high in both seasons studied, with values above the permissible limits of the WHO drinking water quality standards. Bacteriological analyses showed that 63% of the wells were contaminated with total coliforms in both seasons studied. Irrigational suitability was assessed by determining chemical indices, namely Sodium Percentage (%Na), Sodium Adsorption Ratio (SAR), Residual Sodium Carbonate (RSC), and Permeability Index (PI), and the results indicate that the well water in the study area is suitable for irrigation purposes.
Therefore, the study reveals the degradation of drinking water quality in the groundwater sources of Karakulam Grama Panchayath in Thiruvananthapuram District, Kerala, in terms of their chemical and bacteriological characteristics, and the water is not potable without proper treatment. In the study, more than one-third of the wells tested positive for total coliforms, and this bacterial contamination may pose threats to public health. The study recommends periodic well water quality monitoring in the study area and awareness programs among the residents.
Keywords: bacteriological, groundwater, irrigational suitability, physicochemical, potability
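The irrigation indices named in the abstract have standard closed-form definitions on ion concentrations in meq/L. A sketch using those standard formulas; the sample values and the screening thresholds are illustrative, not the study's data:

```python
# Standard irrigation-suitability indices on ion concentrations in meq/L.
# Sample values below are invented; thresholds are common screening rules
# (SAR < 10 "excellent", RSC < 1.25 "safe"), not this study's criteria.
import math

def sar(na, ca, mg):
    """Sodium Adsorption Ratio: Na / sqrt((Ca + Mg) / 2)."""
    return na / math.sqrt((ca + mg) / 2.0)

def sodium_percent(na, k, ca, mg):
    """%Na = 100 * (Na + K) / (Na + K + Ca + Mg)."""
    return 100.0 * (na + k) / (na + k + ca + mg)

def rsc(co3, hco3, ca, mg):
    """Residual Sodium Carbonate: (CO3 + HCO3) - (Ca + Mg)."""
    return (co3 + hco3) - (ca + mg)

# invented sample (meq/L)
na, k, ca, mg, co3, hco3 = 3.0, 0.2, 2.0, 1.0, 0.5, 1.5
indices = {
    "SAR": sar(na, ca, mg),
    "%Na": sodium_percent(na, k, ca, mg),
    "RSC": rsc(co3, hco3, ca, mg),
}
suitable = indices["SAR"] < 10 and indices["RSC"] < 1.25
```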
Procedia PDF Downloads 263
658 Sweden's SARS-CoV-2 Mitigation Failure as a Science and Solutions Principle Case Study
Authors: Dany I. Doughan, Nizam S. Najd
Abstract:
In today's global pandemic, different governments are approaching the challenging and complex issue of mitigating the spread of the SARS-CoV-2 virus differently, while simultaneously considering their national economic and operational bottom lines. One of the most notable successes has been Taiwan's multifaceted virus containment approach, which resulted in a substantially lower incidence rate than Sweden's chief mitigation tactic of herd immunity. From a classic Swiss Cheese Model perspective, integrating more fail-safe layers of defense against the virus meant that Taiwan's government, unlike Sweden's, did not have to resort to extreme measures such as the national lockdown Sweden is currently contemplating. From the standpoint of developing an optimized virus-spread mitigation solution using the Solutions Principle, the Taiwanese and Swedish solutions were desirable economically to businesses that remained open and non-economically, or socially, to individuals who enjoyed fewer disruptions from what they considered normal before the pandemic. Of the two, the Taiwanese approach was more feasible in the long term from the workforce management and quality control perspective of healthcare facilities and their professionals, who were able to provide better, longer, more attentive care to the fewer new positive COVID-19 cases. Furthermore, the Taiwanese approach was more applicable as an overall model to emulate, thanks in part to its short-term and long-term multilayered approach, which allows the kind of flexibility other governments need to fully or partially adopt or adapt the model. The Swedish approach, on the other hand, ignored the biochemical nature of the virus and relied heavily on short-term personal behavioral adjustments and conduct modifications, which are not as reliable as establishing the required societal norms and awareness programs.
The available international data on COVID-19 cases and the published governmental approaches to controlling the spread of the coronavirus show that Taiwan's Swiss Cheese Model success story fits the Solutions Principle better than Sweden's approach does.
Keywords: coronavirus containment and mitigation, solutions principle, Swiss Cheese Model, viral mutation
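The Swiss Cheese Model argument above has a simple numerical reading: if defensive layers fail independently, the chance that a transmission event penetrates every layer is the product of the per-layer failure probabilities, so stacking imperfect layers beats relying on one. The probabilities below are purely illustrative assumptions:

```python
# Numerical reading of the Swiss Cheese Model: breakthrough probability
# under independent defensive layers is the product of per-layer failure
# probabilities.  All values are illustrative, not epidemiological data.

def breakthrough_probability(layer_failure_probs):
    p = 1.0
    for q in layer_failure_probs:
        p *= q
    return p

few_layers = breakthrough_probability([0.3])                  # one weak defence
many_layers = breakthrough_probability([0.3, 0.4, 0.5, 0.5])  # stacked defences
```

Even with each individual layer being leaky, the stacked configuration drives the breakthrough probability an order of magnitude lower, which is the intuition behind Taiwan's multilayered approach.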
Procedia PDF Downloads 133
657 The Problem of Suffering: Job, The Servant and Prophet of God
Authors: Barbara Pemberton
Abstract:
Now that people of all faiths are experiencing suffering due to many global issues, shared narratives may provide common ground in which true understanding of each other may take root. This paper considers the all too common problem of suffering and addresses how adherents of the three great monotheistic religions seek understanding of, and the appropriate believer's response to, the same story found within their respective sacred texts. Most scholars in each of these three traditions, Judaism, Christianity, and Islam, consider the writings of the Tanakh/Old Testament to at least contain divine revelation. While they may not agree on the extent of the revelation or the method of its delivery, they do share stories as well as a common desire to glean God's message for God's people from the pages of the text. One such shared story is that of Job, the servant of Yahweh, called Ayyub, the prophet of Allah, in the Qur'an. Job is described as a pious, righteous man who loses everything, family, possessions, and health, when his faith is tested. Three friends come to console him. Through it all, Job remains faithful to his God, who rewards him by restoring all that was lost. All three hermeneutic communities consider Job to be an archetype of the human response to suffering, regarding his response to his situation as exemplary. The story of Job addresses more than the problem of the distribution of evil; at stake in the story is Job's very relationship to his God. Some exegetes believe that Job was adapted into the Jewish milieu by a gifted redactor who used the original ancient tale as the "frame" for the biblical account (chapters 1, 2, and 42:7-17) and then enlarged the story with the complex center section of poetic dialogues, creating a complex work with numerous possible interpretations. Within the poetic center, Job goes so far as to question God, a response to which Jews relate, finding strength in dialogue, even in wrestling with God.
Muslims embrace only the Job of the biblical narrative frame, as further identified through the Qur'an and the prophetic traditions, considering the center section an errant human addition not representative of a true prophet of Islam. The Qur'anic injunction against questioning God also renders the center theologically suspect. Christians likewise draw various responses from the story of Job. While many believers may agree with the Islamic perspective of God's ultimate sovereignty, others would join their Jewish neighbors in questioning God, anticipating not answers but rather an awareness of his presence, peace and hope becoming a reality experienced through the indwelling presence of God's Holy Spirit. Related questions are as endless as the possible responses. This paper considers a few of the many Jewish, Christian, and Islamic insights from the ancient story, in the hope that adherents within each tradition will use it to better understand the other faiths' approach to suffering.
Keywords: suffering, Job, Qur'an, Tanakh
Procedia PDF Downloads 185
656 Facilitating Career Development of Women in Science, Technology, Engineering, Mathematics and Medicine: Towards Increasing Understanding, Participation, Progression and Retention through an Intersectionality Perspective
Authors: Maria Tsouroufli, Andrea Mondokova, Subashini Suresh
Abstract:
Background: The under-representation of women, and the consequent failure to fulfil their potential contribution, in Science, Technology, Engineering, Maths, and Medicine (STEMM) subjects in the UK is an issue that the Higher Education sector is being encouraged to address. Focus: The aim of this research is to investigate the barriers, facilitators, and incentives that influence diverse groups of women who have embarked upon a career in STEMM subjects. The project will address a number of interconnected research questions: 1. How do participants perceive the barriers, facilitators, and incentives for women in terms of research, teaching, and management/leadership at each stage of their development towards forging a career in STEMM? 2. How might gender intersect with ethnicity, pregnancy/maternity, and academic grade in the career experiences of women in STEMM? 3. How do participants perceive the example of female role models when emulating them as career models? 4. How do successful women in STEMM see themselves as role models, and what strategies do they employ to promote their careers? 5. How does institutional culture manifest itself as a barrier or facilitator for women in STEMM subjects in the institution? Methodology and Theoretical Framework: A mixed methodology will be employed in a case study of one university. The study will draw on extant quantitative data for context and will involve a qualitative inquiry to discover the perceptions of staff and students around the key concepts under study (career progression, sense of belonging and tenure, role models, personal satisfaction, perceived gender in/equality, institutional culture). The analysis will be informed by an intersectionality framework, feminist and gender theory, and organisational psychology and human resource management perspectives. Implications: Preliminary findings will be collected in 2017.
Conclusions will be drawn and used to inform recruitment and retention, and the development and implementation of initiatives to enhance the experiences and outcomes of women working and studying in STEMM subjects in Higher Education.
Keywords: under-representation, women, STEMM subjects, intersectionality
Procedia PDF Downloads 284
655 Success of Trabeculectomy: May Not Always Depend on Mitomycin C
Authors: Sushma Tejwani, Shoruba Dinakaran, Rupa Rokhade, K. Bhujang Shetty
Abstract:
Introduction and aim: One of the major causes of failure of trabeculectomy is fibrosis and scarring of the subconjunctival tissue around the bleb; hence, intraoperative usage of anti-fibrotic agents like Mitomycin C (MMC) has become very popular. However, the long-term effects of MMC, such as thin, avascular blebs, hypotony, bleb leaks, and late-onset endophthalmitis, cannot be ignored and may preclude its usage in routine trabeculectomy. This study aims to examine the outcomes of trabeculectomy with and without MMC in uncomplicated glaucoma patients. Methods: A retrospective study of a series of patients who underwent trabeculectomy, with or without cataract surgery, performed by a single surgeon in the glaucoma department of a tertiary eye care centre for primary open angle glaucoma (POAG), primary angle closure glaucoma (PACG), and pseudoexfoliation glaucoma (PXF glaucoma). Patients with secondary, juvenile, or congenital glaucoma were excluded, as were patients undergoing a second trabeculectomy. The outcomes were studied in terms of IOP control at 1 month, 6 months, and 1 year, analyzed separately for surgery with and without MMC. Success was defined as IOP < 16 mmHg on applanation tonometry. The need for medication and for postoperative 5-fluorouracil (5FU) injections or needling was also noted. Results: Eighty-nine patients' medical records were reviewed, of which 58 patients had undergone trabeculectomy without MMC and 31 with MMC. The mean age was 62.4 years (95% CI: 61-64); 34 were female and 55 male. MMC group (n = 31): Preoperative mean IOP was 21.1 mmHg (95% CI: 17.6-24.6), and 22 patients had IOP > 16 mmHg. Three out of 33 patients were on a single medication, and the rest were on multiple drugs. At 1 month (n = 27), mean IOP was 12.4 mmHg (CI: 10.7-14), and 31/33 had success.
At 6 months (n=18), mean IOP was 13 mmHg (CI: 10.3-14.6) and 16/18 had a good outcome; at 1 year, only 11 patients were available for follow-up, of whom 91% (10/11) had success. Overall, 3 patients required medication and one required a postoperative injection of 5FU. No-MMC group (n=58): Preoperative mean IOP was 21.9 mmHg (CI: 19.8-24.2), and 42 patients had IOP > 16 mmHg. Twelve out of 58 patients were on a single medication; the rest were on multiple drugs. At 1 month (n=52), mean IOP was 14.6 mmHg (CI: 13.2-15.9), and 45/58 had IOP < 16 mmHg. At 6 months (n=31), mean IOP was 13.5 mmHg (CI: 11.9-15.2) and 26/31 had success; at 1 year, only 23 patients came for follow-up, of whom 87% (20/23) had success. Overall, 1 patient required needling, 5 required 5FU injections and 5 required medication. The success rates at each follow-up visit were not significantly different between the two groups. Conclusion: Intraoperative MMC may not be required in all patients undergoing trabeculectomy, and patients without MMC also have fairly good outcomes in primary glaucoma.
Keywords: glaucoma filtration surgery, mitomycin C, outcomes of trabeculectomy, wound modulation
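For context, the reported 1-year proportions (10/11 successes with MMC vs. 20/23 without) can be compared with a two-sided Fisher exact test, a standard choice for small 2x2 tables. The sketch below, using only the Python standard library, is illustrative only; the abstract does not state which test the authors used.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]],
    where rows are treatment groups and columns are success/failure."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d

    def p_table(x):
        # hypergeometric probability of x row-1 successes given fixed margins
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = p_table(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    # sum the probabilities of all tables at least as extreme as the observed one
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# 1-year follow-up: MMC 10 successes / 1 failure; no MMC 20 successes / 3 failures
p = fisher_exact_two_sided(10, 1, 20, 3)
```

With these counts the test is far from significance, consistent with the abstract's conclusion that the groups did not differ.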
Procedia PDF Downloads 273
654 Sustainable Concepts Applied in the Pre-Columbian Andean Architecture in Southern Ecuador
Authors: Diego Espinoza-Piedra, David Duran
Abstract:
All architectural and land-use processes are framed in a cultural, social and geographical context. The present study analyzes the Andean culture before the Spanish conquest in southern Ecuador, in the province of Azuay. This area has been inhabited for more than 10,000 years. The Canari and Inca cultures occupied Azuay shortly before the arrival of the Spanish conquerors. The Inca culture was settled across the Andes Mountains, while the Canari culture was established in the south of Ecuador, in the present-day provinces of Azuay and Canar. In contrast with its history and archaeology, the architecture of this area has, to the best of our knowledge, not yet been studied, owing to the scarcity of surviving architectural structures. Consequently, the present research reviewed land use and culture to support architectural interpretation. The two main architectural objects in these cultures were dwellings and public buildings. In the first case, housing was conceived as temporary: it had to stand only as long as its inhabitants lived. Therefore, houses were built when a couple got married, and the whole community took part in the construction through the so-called 'minga', or collective work. The construction materials were tree branches, reeds, agave, earth and straw, so that when the owners aged and died, the house could easily be dismantled and demolished, and its materials became part of the land for agriculture. This cycle was repeated indefinitely. In the second case, the buildings that we may call public have been subject to erroneous interpretations. They have been defined as temples, but according to our conclusions, they were places for temporary accommodation and storage of objects and products, and in some special cases even astronomical observatories. These public buildings were set along the important road system called 'Capac-Nam', now declared a World Cultural Heritage site by UNESCO. The buildings had different scales and stood at regular distances.
They were also established in special or strategic places, which together constituted a system of observatories. These observatories made it possible to determine the cycles or calendars (solar or lunar) necessary for agricultural production, as well as to track other natural phenomena. The few physical structures that survive today are mostly reduced to foundations or fragments of walls. Therefore, this study was carried out by first identifying the history and culture of the inhabitants of this Andean region.
Keywords: Andean, pre-Columbian architecture, Southern Ecuador, sustainable
Procedia PDF Downloads 127
653 Ecosystem, Environment Being Threatened by the Activities of Major Industries
Authors: Charles Akinola Imolehin
Abstract:
According to world population records, with over 6.6 billion people on earth and almost a quarter million added each day, the scale of human activity and environmental impact is unprecedented. Soaring human population growth over the past century has created a visible challenge to earth's life-support systems. Critical natural resources such as clean ground water, fertile topsoil and biodiversity are diminishing at an exponential rate, orders of magnitude above that at which they can be regenerated. In addition, the world faces an onslaught of other environmental threats, including global climate change, global warming, intensified acid rain, stratospheric ozone depletion and health-threatening pollution. Overpopulation and the use of deleterious technologies combine to increase the scale of human activities to a level that underlies all of these problems. These intensifying trends cannot continue indefinitely; hopefully, through increased understanding and valuation of ecosystems and their services, earth's basic life-support system will be protected for the future. In fact, human civilization is now the dominant cause of change in the global environment. Now that the human relationship to the earth has changed so utterly, there is a need to recognize that change and understand its implications. There are two aspects to this challenge. The first is to realize that human activity has the power to harm the earth and can indeed have global and even permanent effects. The second is to realize that the only way to understand humanity's new role as a co-architect of nature is to see human activities as part of a complex system that does not operate according to the simple rules of cause and effect we are accustomed to. Understanding the physical and biological dimensions of the earth system is thus an important precondition for making sensible policy to protect our environment.
Belief in sustainable development is a matter of reconciling respect for the environment, social equity and economic profitability. There is also a strong belief that environmental protection is naturally about reducing air and water pollution, but it also includes improving the environmental performance of existing processes. That is why it is important to keep at the heart of business policy the idea that the environmental problem is not so much our effect on the environment as the relationship of production activities to the environment. Operations should always aim to be environmentally friendly, and sustainability awareness should be considered at all sites of operation.
Keywords: earth's oceans, marine animal life under threat, flooding, critical natural resources polluted
Procedia PDF Downloads 17
652 Assessment of Designed Outdoor Playspaces as Learning Environments and Its Impact on Child’s Wellbeing: A Case of Bhopal, India
Authors: Richa Raje, Anumol Antony
Abstract:
Playing is the foremost stepping stone of childhood development. Play is an essential aspect of a child's development and learning because it creates meaningful, enduring environmental connections and increases children's performance. Children's proficiencies are ever-changing over the course of their growth. There is innovation in their activities, as play kindles the senses, feeds the love of exploration, helps overcome linguistic barriers, and supports physiological development, which in turn allows children to discover their own caliber, spontaneity, curiosity, cognitive skills and creativity while learning through play. This paper aims to comprehend the learning embedded in play, which is the most essential underpinning of the outdoor play area. It also assesses the trend of playground design that merely hammers in equipment. It attempts to derive a relation between the natural environment, children's activities, and the emotions and senses that can be evoked in the process. One of the major concerns with our outdoor play areas is that they are limited to spaces with similar kinds of equipment, making play highly regimented and monotonous. This problem is often driven by the strict timetables of our education system, which hardly accommodate play. For these reasons, play areas remain neglected, both in terms of design that allows learning and in terms of wellbeing. Poorly designed spaces fail to inspire the physical, emotional, social and psychological development of the young. Currently, the play space has been condensed to an enclosed playground, driveway or backyard, which confines children's capability to leap beyond the boundaries set for them. The paper presents a study of children aged 5 to 11 years, in which their behaviors during interactions in a playground are mapped and analyzed. The theory of affordance is applied to various outdoor play areas in order to study and understand children's environments and how variedly they perceive and use them.
A higher degree of affordance shall form the basis for designing the activities suitable for play spaces. It was observed during play that children chose certain spaces of interest, the majority natural rather than artificial equipment. Activities like rolling on the ground, jumping from a height, molding earth and hiding behind trees suggest that, despite the equipment provided, children have an affinity towards nature. Therefore, we as designers need to take a cue from their behavior and practices to be able to design meaningful spaces for them, so that children get the freedom to test their limits.
Keywords: children, landscape design, learning environment, nature and play, outdoor play
Procedia PDF Downloads 123
651 Experimental Study Analyzing the Similarity Theory Formulations for the Effect of Aerodynamic Roughness Length on Turbulence Length Scales in the Atmospheric Surface Layer
Authors: Matthew J. Emes, Azadeh Jafari, Maziar Arjomandi
Abstract:
Velocity fluctuations of shear-generated turbulence are largest in the atmospheric surface layer (ASL) of nominal 100 m depth, which can lead to dynamic effects such as galloping and flutter on small physical structures on the ground when the turbulence length scales and characteristic length of the physical structure are the same order of magnitude. Turbulence length scales are a measure of the average sizes of the energy-containing eddies that are widely estimated using two-point cross-correlation analysis to convert the temporal lag to a separation distance using Taylor’s hypothesis that the convection velocity is equal to the mean velocity at the corresponding height. Profiles of turbulence length scales in the neutrally-stratified ASL, as predicted by Monin-Obukhov similarity theory in Engineering Sciences Data Unit (ESDU) 85020 for single-point data and ESDU 86010 for two-point correlations, are largely dependent on the aerodynamic roughness length. Field measurements have shown that longitudinal turbulence length scales show significant regional variation, whereas length scales of the vertical component show consistent Obukhov scaling from site to site because of the absence of low-frequency components. Hence, the objective of this experimental study is to compare the similarity theory relationships between the turbulence length scales and aerodynamic roughness length with those calculated using the autocorrelations and cross-correlations of field measurement velocity data at two sites: the Surface Layer Turbulence and Environmental Science Test (SLTEST) facility in a desert ASL in Dugway, Utah, USA and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) wind tower in a rural ASL in Jemalong, NSW, Australia. The results indicate that the longitudinal turbulence length scales increase with increasing aerodynamic roughness length, as opposed to the relationships derived by similarity theory correlations in ESDU models. 
However, the ratio of the turbulence length scales in the lateral and vertical directions to the longitudinal length scales is relatively independent of surface roughness, showing consistent inner-scaling between the two sites and the ESDU correlations. Further, the diurnal variation of wind velocity due to changes in atmospheric stability conditions has a significant effect on the turbulence structure of the energy-containing eddies in the lower ASL.
Keywords: aerodynamic roughness length, atmospheric surface layer, similarity theory, turbulence length scales
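The single-point estimation procedure the abstract describes (autocorrelate the velocity fluctuations, integrate to an integral time scale, then convert time to distance via Taylor's frozen-turbulence hypothesis) can be sketched as below. This is a minimal illustration under simplifying assumptions (rectangle-rule integration, integration cut off at the first zero crossing of the autocorrelation), not the authors' actual processing code.

```python
import numpy as np

def integral_length_scale(u, dt):
    """Estimate a longitudinal integral length scale from a single-point
    velocity record u (m/s) sampled at interval dt (s). Taylor's hypothesis
    takes the convection velocity equal to the mean velocity, so the
    length scale is mean speed times the integral time scale."""
    u = np.asarray(u, dtype=float)
    mean_speed = u.mean()
    fluct = u - mean_speed
    var = fluct.var()
    n = len(fluct)
    # autocorrelation coefficient at increasing time lags
    rho = np.array([np.mean(fluct[:n - k] * fluct[k:]) / var
                    for k in range(n // 2)])
    # integrate up to the first zero crossing (one common convention)
    crossings = np.nonzero(rho <= 0)[0]
    stop = crossings[0] if crossings.size else len(rho)
    T = rho[:stop].sum() * dt        # integral time scale (rectangle rule)
    return mean_speed * T            # Taylor's hypothesis: L = U * T

# Synthetic check: a pure sine has autocorrelation cos(w*t), whose integral
# up to the first zero crossing is 1/w, so L should be close to U / w.
dt, freq = 0.01, 1.0
t = np.arange(0.0, 20.0, dt)
u = 5.0 + np.sin(2.0 * np.pi * freq * t)
L = integral_length_scale(u, dt)
```

On real ASL records the low-frequency content makes the zero-crossing cutoff (and hence the longitudinal length scale) sensitive to record length and detrending choices, which is consistent with the site-to-site variation the abstract reports.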
Procedia PDF Downloads 123
650 Strategies for Enhancing Academic Honesty as an Ethical Concern in Electronic Learning (E-learning) among University Students: A Philosophical Perspective
Authors: Ekeh Greg
Abstract:
Learning has been part of human existence from time immemorial, and the aim of all learning is to know the truth. In education, it is desirable that true knowledge be imparted and imbibed. For this to be achieved, there is a need for honesty, in this context academic honesty among students, especially in e-learning. This is an ethical issue, since honesty concerns human conduct. However, research findings have shown that academic honesty remains a big challenge for online learners, especially among university students. This is worrisome, since university education is the final stage of formal education and a gateway to life in the wider society after schooling. If students practice honesty in their academic life, it is likely that they will practice honesty in society, thereby contributing positively wherever they find themselves. With this in mind, the significance of this study becomes obvious. On the grounds of this significance, this paper focuses on strategies judged certain to enhance the practice of honesty in e-learning, so as to equip learners to contribute to society through honest means. The aim of the paper is to contribute to the effort of instilling the consciousness and practice of honesty in the minds and hearts of learners. This will, in turn, promote effective teaching and learning, high academic standards, competence and self-confidence in university education. The philosophical methods of conceptual analysis, clarification, description and prescription are adopted for the study. A philosophical perspective is chosen so as to ground the paper in rationality rather than in emotional sentiments and biases emanating from cultural, religious and ethnic differences and orientations; such sentiments and biases can becloud objective reasoning and sound judgment. A review of related literature is also carried out.
The findings show that academic honesty in e-learning is a cherished value, but one bedeviled by challenges such as a carefree attitude on the part of students and the absence of monitoring. The findings also show that, despite these challenges, strategies such as self-discipline, determination, hard work, and imbibing ethical and philosophical principles, among others, can certainly enhance the practice of honesty in e-learning among university students. The paper therefore concludes that these constitute strategies for enhancing academic honesty among students. Consequently, it is suggested that instructors, school counsellors and other stakeholders endeavour to help students imbibe these strategies and put them into practice. Students themselves are enjoined to cherish honesty in their academic pursuits and avoid short-cuts, which can only lead to mediocrity and incompetence on the part of learners and may have long-lasting adverse consequences, both for themselves and for others.
Keywords: academic, ethical, philosophical, strategies
Procedia PDF Downloads 75