Search results for: literature review software
1121 Hydrogen Purity: Developing Low-Level Sulphur Speciation Measurement Capability
Authors: Sam Bartlett, Thomas Bacquart, Arul Murugan, Abigail Morris
Abstract:
Fuel cell electric vehicles offer the potential to decarbonise road transport, create new economic opportunities, diversify national energy supply, and significantly reduce the environmental impacts of road transport. A potential issue, however, is that the catalyst used at the fuel cell cathode is susceptible to degradation by impurities, especially sulphur-containing compounds. A recent European Directive (2014/94/EU) stipulates that, from November 2017, all hydrogen provided to fuel cell vehicles in Europe must comply with the hydrogen purity specifications listed in ISO 14687-2; these include reactive and toxic chemicals such as ammonia and total sulphur-containing compounds. This requirement poses great analytical challenges due to the instability of some of these compounds in calibration gas standards at relatively low amount fractions and the difficulty associated with undertaking measurements of groups of compounds rather than individual compounds. Without the available reference materials and analytical infrastructure, hydrogen refuelling stations will not be able to demonstrate compliance with the ISO 14687 specifications. The hydrogen purity laboratory at NPL provides world-leading, accredited purity measurements that allow hydrogen refuelling stations to evidence compliance with ISO 14687. Utilising state-of-the-art methods developed by NPL’s hydrogen purity laboratory, including a novel method for measuring total sulphur compounds at 4 nmol/mol and a hydrogen impurity enrichment device, we provide the capabilities necessary to achieve these goals. An overview of these capabilities will be given in this paper. As part of the EMPIR Hydrogen co-normative project ‘Metrology for sustainable hydrogen energy applications’, NPL is developing a validated analytical methodology for the measurement of speciated sulphur-containing compounds in hydrogen at low amount fractions (pmol/mol to nmol/mol) to allow identification and measurement of individual sulphur-containing impurities in real samples of hydrogen (as opposed to a ‘total sulphur’ measurement). This is achieved by producing a suite of stable gravimetrically-prepared primary reference gas standards containing low amount fractions of sulphur-containing compounds (hydrogen sulphide, carbonyl sulphide, carbon disulphide, 2-methyl-2-propanethiol and tetrahydrothiophene have been selected for use in this study) to be used in conjunction with novel dynamic dilution facilities to enable generation of pmol/mol to nmol/mol level gas mixtures (a dynamic method is required as compounds at these levels would be unstable in gas cylinder mixtures). Method development and optimisation are performed using gas chromatographic techniques assisted by cryo-trapping technologies and coupled with sulphur chemiluminescence detection to allow improved qualitative and quantitative analyses of sulphur-containing impurities in hydrogen. The paper will review state-of-the-art gas standard preparation techniques, including the use and testing of dynamic dilution technologies for reactive chemical components in hydrogen. Method development will also be presented, highlighting the advances in the measurement of speciated sulphur compounds in hydrogen at low amount fractions.
Keywords: gas chromatography, hydrogen purity, ISO 14687, sulphur chemiluminescence detector
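As a rough illustration of why dynamic dilution can reach amount fractions that would be unstable in cylinders, the sketch below propagates a gravimetric standard through two ideal dilution stages. All flows and fractions are invented for the example; they are not NPL values.

```python
# Minimal sketch (not NPL's actual procedure): amount fraction after one ideal
# dynamic dilution stage, x_out = x_std * f_std / (f_std + f_dil), where
# f_std and f_dil are the standard and diluent (hydrogen) flows.
def diluted_fraction(x_std_nmol_mol: float, f_std: float, f_dil: float) -> float:
    """Amount fraction (nmol/mol) leaving an ideal dynamic dilution stage."""
    return x_std_nmol_mol * f_std / (f_std + f_dil)

# A hypothetical 100 nmol/mol H2S primary standard diluted 1:999 with hydrogen:
x_stage1 = diluted_fraction(100.0, f_std=1.0, f_dil=999.0)      # 0.1 nmol/mol
# A second stage reaches the pmol/mol regime:
x_stage2 = diluted_fraction(x_stage1, f_std=10.0, f_dil=990.0)  # 1 pmol/mol
print(f"stage 1: {x_stage1:.4f} nmol/mol, stage 2: {x_stage2 * 1000:.2f} pmol/mol")
```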
Procedia PDF Downloads 226
1120 Safer Staff: A Survey of Staff Experiences of Violence and Aggression at Work in Coventry and Warwickshire Partnership National Health Service Trust
Authors: Rupinder Kaler, Faith Ndebele, Nadia Saleem, Hafsa Sheikh
Abstract:
Background: Workplace-related violence and aggression seem to be considered an acceptable occupational hazard for staff in mental health services. There is evidence in the literature that healthcare workers in mental health settings are at higher risk of aggression from patients. Aggressive behaviours pose a physical and psychological threat to psychiatric staff and can result in stress, burnout, sickness, and exhaustion. Further evidence indicates that health professionals are among the most exposed to psychological disorders such as anxiety, depression and post-traumatic stress disorder. The fear that results from working in a dangerous environment and exhaustion can have a damaging impact on patient care and the healthcare relationship. Aim: The aim of this study is to investigate the prevalence and impact of aggressive behaviour on staff working at Coventry and Warwickshire Partnership Trust (CWPT). Methodology: The study consisted of a manual, anonymised, multi-disciplinary cross-sectional survey questionnaire administered across all clinical and non-clinical staff at CWPT, from both inpatient and community settings. Findings: The unsurprising finding was a higher prevalence of aggressive behaviours experienced by inpatient staff in comparison to community staff. Conclusion: There is a high rate of verbal and physical aggression at work, and this has a negative impact on staff members' emotional and physical well-being. There is also a higher reliance on colleagues for support on an informal basis than on formal organisational support systems. Recommendations: A workforce that is well and functioning is the biggest resource for an organisation. Staff safety during working hours is everyone's responsibility and sits with both individual staff members and the organisation. Post-incident organisational support needs to be consolidated, and hands-on, timely support offered to help maintain emotionally well staff at CWPT. The authors recommend the development of preventative and practical protocols for aggression, with patient and carer involvement.
Keywords: safer staff, survey of staff experiences, violence and aggression, mental health
Procedia PDF Downloads 204
1119 Re-Evaluation of Field X Located in Northern Lake Albert Basin to Refine the Structural Interpretation
Authors: Calorine Twebaze, Jesca Balinga
Abstract:
Field X is located on the eastern shores of Lake Albert, Uganda, on the rift flank where the gross sedimentary fill is typically less than 2,000 m. The field was discovered in 2006 and encountered about 20.4 m of net pay across three (3) stratigraphic intervals within the discovery well. The field covers an area of 3 km², with the structural configuration comprising a 3-way dip-closed hanging wall anticline that seals against the basement to the southeast along the bounding fault. Field X had been mapped on reprocessed 3D seismic data, originally acquired in 2007 and reprocessed in 2013. The seismic data quality is good across the field, and the reprocessing work reduced the uncertainty in the location of the bounding fault and enhanced the lateral continuity of reservoir reflectors. The current study was a re-evaluation of Field X to refine the fault interpretation and understand the structural uncertainties associated with the field. The seismic data and three (3) well datasets were used during the study. The evaluation followed standard workflows using Petrel software and structural attribute analysis, spanning seismic-to-well tie, structural interpretation, and structural uncertainty analysis. Analysis of the well ties generated for the three (3) wells provided a geophysical interpretation that was consistent with geological picks. The generated time-depth curves showed a general increase in velocity with burial depth; however, the separation in curve trends observed below 1,100 m was mainly attributed to minimal lateral variation in velocity between the wells. In addition to attribute analysis, three velocity modelling approaches were evaluated: the time-depth curve, the Vo + kZ, and the average velocity methods. The generated models were calibrated at well locations using well tops to obtain the best velocity model for Field X. The time-depth method resulted in the most reliable depth surfaces, with good structural coherence between the TWT and depth maps and minimal error of 2 to 5 m at well locations. Both the NNE-SSW rift border fault and the minor faults in the existing interpretation were re-evaluated. The new interpretation, however, delineated an E-W trending fault in the northern part of the field that had not been interpreted before. The fault was interpreted at all stratigraphic levels; it therefore propagates from the basement to the surface and is an active fault today. It was also noted that the field is only lightly faulted overall, with more faults in its deeper part. The major structural uncertainties defined included: 1) the time horizons, due to reduced data quality, especially in the deeper parts of the structure, for which an error equal to one-third of the reflection time thickness was assumed; 2) check-shot analysis, which showed varying velocities within the wells and thus varying depth values for each well; and 3) the very few average velocity points available from the limited wells, which produced a pessimistic average velocity model.
Keywords: 3D seismic data interpretation, structural uncertainties, attribute analysis, velocity modelling approaches
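For readers unfamiliar with the Vo + kZ approach named above, the sketch below shows the depth conversion it implies: with instantaneous velocity V(Z) = V0 + kZ, integrating dZ/dt = V gives Z(t) = (V0/k)(exp(kt) − 1) for one-way time t. The V0 and k values are assumed, not the Field X calibration.

```python
import numpy as np

V0 = 1800.0  # m/s at datum (assumed, for illustration only)
K = 0.4      # 1/s velocity gradient (assumed)

def depth_from_twt(twt_s: np.ndarray) -> np.ndarray:
    """Depth (m) from two-way time (s) under the linear Vo + kZ velocity model."""
    t_one_way = np.asarray(twt_s) / 2.0
    return (V0 / K) * (np.exp(K * t_one_way) - 1.0)

twt = np.array([0.5, 1.0, 1.5])  # two-way times in seconds
print(depth_from_twt(twt))       # depths in metres
# Calibration against well tops would then quantify the 2-5 m mismatch
# reported for the preferred time-depth method.
```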
Procedia PDF Downloads 59
1118 Adaptation of Retrofit Strategies for the Housing Sector in Northern Cyprus
Authors: B. Ozarisoy, E. Ampatzi, G. Z. Lancaster
Abstract:
This research project was undertaken in the Turkish Republic of Northern Cyprus (T.R.N.C.). The study focuses on identifying refurbishment activities capable of diagnosing and detecting the underlying problems, alongside the challenges posed by the buildings’ typology, in addition to identifying the correct construction materials in the refurbishment process that allow for the maximisation of expected energy savings. Attention is drawn to the level of awareness and understanding of refurbishment activity that needs to be raised in the current construction process, alongside factors that include the positive environmental impact and the saving of energy. The approach here is to look at buildings that have been built by private construction companies and already refurbished by occupants, and to suggest additional control mechanisms for retrofitting that can further enhance the process of renewal. The objective of the research is to investigate the occupants’ behaviour and role in the refurbishment activity; to explore how and why occupants decide to change building components; and to understand why and how occupants consider using energy-efficient materials. The present work is based on data from this researcher’s first-hand experience and incorporates the preliminary data collection on recent housing sector statistics, including the year in which housing estates were built, an examination of the characteristics that define the construction industry in the T.R.N.C., building typology and the demographic structure of house owners. The housing estates are chosen from 16 different projects in four different regions of the T.R.N.C. that include urban and suburban areas. There is, therefore, a broad representation of the common drivers in the property market, each with different levels of refurbishment activity, coupled with different samplings from different climatic regions within the T.R.N.C. The study is conducted through semi-structured interviews to identify occupants’ behaviour as it is associated with refurbishment activity. The interviews provide all the occupants’ demographic information, needs and intentions as they relate to various aspects of the refurbishment process. This research paper presents the results of semi-structured interviews with 70 homeowners in a selected group of 16 housing estates in five different parts of the T.R.N.C. The people who agreed to be interviewed in this study are all residents of single- or multi-family housing units. Alongside the construction process and its impact on the environment, the results point out the need for control mechanisms in the housing sector to promote and support the adoption of retrofit strategies and minimise uncontrolled refurbishment activities, in line with diagnostic information on the selected buildings. The expected solutions should be effective, environmentally acceptable and feasible given the type of housing projects under review, with due regard for their location, the climatic conditions within which they were undertaken, the socio-economic standing of the house owners and their attitudes, local resources and legislative constraints. Furthermore, the study emphasises the practical and long-term economic benefits of refurbishment under the proper conditions and why these should be fully understood by the householders.
Keywords: construction process, energy-efficiency, refurbishment activity, retrofitting
Procedia PDF Downloads 326
1117 On the Dwindling Supply of the Observable Cosmic Microwave Background Radiation
Authors: Jia-Chao Wang
Abstract:
The cosmic microwave background radiation (CMB) freed during the recombination era can be considered a photon source of small duration: a one-time event that happened everywhere in the universe simultaneously. If space is divided into concentric shells centered at an observer’s location, one can imagine that the CMB photons originating from the nearby shells reach and pass the observer first, and those in shells farther away follow as time goes forward. In the Big Bang model, space expands rapidly in a time-dependent manner as described by the scale factor. This expansion results in an event horizon coincident with one of the shells, and its radius can be calculated using cosmological calculators available online. Using Planck 2015 results, its value during the recombination era at cosmological time t = 0.379 million years (My) is calculated to be Revent = 56.95 million light-years (Mly). The event horizon sets a boundary beyond which the freed CMB photons will never reach the observer. The photons within the event horizon also exhibit a peculiar behavior. Calculated results show that the CMB observed today was freed in a shell located 41.8 Mly away (inside the boundary set by Revent) at t = 0.379 My. These photons traveled 13.8 billion years (Gy) to reach here. Similarly, the CMB reaching the observer at t = 1, 5, 10, 20, 40, 60, 80, 100 and 120 Gy is calculated to originate at shells of R = 16.98, 29.96, 37.79, 46.47, 53.66, 55.91, 56.62, 56.85 and 56.92 Mly, respectively. The results show that as time goes by, the R value approaches Revent = 56.95 Mly but never exceeds it, consistent with the earlier statement that beyond Revent the freed CMB photons will never reach the observer. The difference Revent − R can be used as a measure of the remaining observable CMB photons. Its value becomes smaller and smaller as R approaches Revent, indicating a dwindling supply of the observable CMB radiation. In this paper, detailed dwindling effects near the event horizon are analyzed with the help of online cosmological calculators based on the lambda cold dark matter (ΛCDM) model. It is demonstrated in the literature that if the CMB is assumed to be a blackbody at recombination (about 3000 K), it will remain so over time under cosmological redshift and homogeneous expansion of space, but with the temperature lowered (2.725 K now). The present result suggests that the observable CMB photon density, besides changing with space expansion, can also be affected by the dwindling supply associated with the event horizon. This raises the question of whether the blackbody spectrum of the CMB at recombination can remain so over time. Being able to explain the blackbody nature of the observed CMB is an important part of the success of the Big Bang model. The present results cast some doubt on that and suggest that the model may have an additional challenge to deal with.
Keywords: blackbody of CMB, CMB radiation, dwindling supply of CMB, event horizon
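The shell distances quoted above can be reproduced to good approximation with a short numerical integration of flat ΛCDM; the sketch below uses assumed Planck-like parameters rather than the exact calculator settings behind the paper’s figures.

```python
import numpy as np
from scipy.integrate import quad

# Assumed Planck-like parameters, not necessarily those used in the paper.
H0 = 67.7 * 1.0227e-12        # 67.7 km/s/Mpc converted to 1/yr
OM, OR, OL = 0.31, 9e-5, 0.69  # matter, radiation, dark-energy densities
C = 1.0                        # speed of light in ly/yr

def H(a):
    """Hubble rate (1/yr) at scale factor a, flat LambdaCDM."""
    return H0 * np.sqrt(OM / a**3 + OR / a**4 + OL)

def proper_distance_at_emission(a_emit, a_obs=1.0):
    # Comoving distance chi = integral of c da / (a^2 H(a)), scaled by a_emit
    # to give the proper shell distance at the emission time, in Mly.
    chi, _ = quad(lambda a: C / (a * a * H(a)), a_emit, a_obs)
    return a_emit * chi / 1e6

a_rec = 1.0 / 1090.0           # scale factor at recombination
print(proper_distance_at_emission(a_rec))  # ~42 Mly, near the quoted 41.8 Mly
```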
Procedia PDF Downloads 120
1116 The Implementation of a Nurse-Driven Palliative Care Trigger Tool
Authors: Sawyer Spurry
Abstract:
Problem: Palliative care providers at an academic medical center in Maryland stated that medical intensive care unit (MICU) patients are often referred late in their hospital stay. The MICU has performed well below the hospital quality performance metric that 80% of patients who expire with expected outcomes should have received a palliative care consult within 48 hours of admission. Purpose: The purpose of this quality improvement (QI) project is to increase palliative care utilization in the MICU through the implementation of a Nurse-Driven Palliative Trigger Tool to prompt the need for specialty palliative care consults. Methods: MICU nursing staff and providers received education concerning the implications of underused palliative care services and the literature supporting the use of nurse-driven palliative care tools as a means of increasing utilization of palliative care. A MICU population-specific set of palliative trigger criteria (the Palliative Care Trigger Tool) was formulated by the QI implementation team, the palliative care team, and the patient care services department. Nursing staff were asked to assess patients daily for the presence of palliative triggers using the Palliative Care Trigger Tool and to present findings during bedside rounds. MICU providers were asked to consult palliative medicine given the presence of palliative triggers, following interdisciplinary rounds. Rates of palliative consult, given the presence of triggers, were collected via electronic medical record e-data pull, de-identified, and recorded in the data collection tool. Preliminary Results: Over 140 MICU registered nurses were educated on the palliative trigger initiative, along with 8 nurse practitioners, 4 intensivists, 2 pulmonary critical care fellows, and 2 palliative medicine physicians. Over 200 patients were admitted to the MICU and screened for palliative triggers during the 15-week implementation period. Primary outcomes showed an increase in palliative care consult rates for patients presenting with triggers, a decreased mean time from admission to palliative consult, and increased recognition of unmet palliative care needs by MICU nurses and providers. Conclusions: The anticipated findings of this QI project suggest a positive correlation between utilizing palliative care trigger criteria and decreased time to palliative care consult. The direct outcomes of effective palliative care are decreased length of stay, healthcare costs, and moral distress, as well as improved symptom management and quality of life (QOL).
Keywords: palliative care, nursing, quality improvement, trigger tool
Procedia PDF Downloads 197
1115 Incentive-Based Motivation to Network with Coworkers: Strengthening Professional Networks via Online Social Networks
Authors: Jung Lee
Abstract:
The last decade has witnessed more people than ever before using social media and broadening their social circles. Social media users connect not only with their friends but also with professional acquaintances, primarily coworkers and clients; personal and professional social circles are mixed within the same social media platform. Considering the positive role of social media in facilitating communication and mutual understanding between individuals, we infer that social media interactions with coworkers could indeed benefit one’s professional life. However, given privacy issues, sharing all personal details with one’s coworkers is not necessarily the best practice. Should one connect with coworkers via social media? Will social media connections with coworkers eventually benefit one’s long-term career? Will the benefit differ across cultures? To answer these questions, this study examines how social media can contribute to organizational communication by tracing the foundation of user motivation based on social capital theory, leader-member exchange (LMX) theory, and the expectancy theory of motivation. Although social media was originally designed for personal communication, users have shown intentions to extend social media use to professional communication, especially when a proper incentive is expected. To articulate the user motivation and the mechanism of the incentive expectation scheme, this study applies these three theories and identifies six antecedents and three moderators of social media use motivation, including social network flaunt, shared interest, and perceived social inclusion. It also hypothesizes that the moderating effects of those constructs differ significantly based on the relationship hierarchy among the workers. To validate the model, this study conducted a survey of 329 active social media users with acceptable levels of job experience. The analysis confirms the specific roles of the three moderators in social media adoption for organizational communication. The present study contributes to the literature by developing a theoretical model of ambivalent employee perceptions about establishing social media connections with coworkers. This framework shows not only how both positive and negative expectations of social media connections with coworkers are formed, based on the expectancy theory of motivation, but also how such expectations lead to behavioral intentions, using a career success model. It also enhances understanding of how various relationships among employees can be influenced through social media use and how such usage can potentially affect both performance and careers. Finally, it shows how cultural factors induced by social media use can influence relations among coworkers.
Keywords: the social network, workplace, social capital, motivation
Procedia PDF Downloads 124
1114 Administrative Supervision of Local Authorities’ Activities in Selected European Countries
Authors: Alina Murtishcheva
Abstract:
The development of an effective system of administrative supervision is a prerequisite for the functioning of local self-government on the basis of the rule of law. Administrative supervision of local self-government is of particular importance in the EU countries due to the influence of integration processes. The central authorities act on the international level; however, subnational authorities also have to implement European legislation in order to strengthen integration. Therefore, the central authority, being the connecting link between supranational and subnational authorities, should bear responsibility, including financial responsibility, for possible mistakes of subnational authorities. Consequently, the state should have sufficient mechanisms of control over local and regional authorities in order to correct their mistakes. At the same time, these control mechanisms do not deny the autonomy of local self-government. The paper analyses models of administrative supervision of local self-government in Ukraine, Poland, Lithuania, Belgium, Great Britain, Italy, and France. The research methods used in this paper are theoretical methods of analysis of scientific literature, constitutions, legal acts, reports of the Congress of Local and Regional Authorities of the Council of Europe, and constitutional court decisions, as well as comparative and logical analysis. The legislative basis of administrative supervision was scrutinized, and the models of administrative supervision were classified, including a priori control, ex-post control, and combinations of the two. The advantages and disadvantages of these models of administrative supervision are analysed. Compliance with Article 8 of the European Charter of Local Self-Government is of great importance for countries achieving common goals and sharing common values. However, the countries under study have problems and, in some cases, demonstrate non-compliance with the provisions of Article 8. Instances of non-conformity, such as the endorsement of a mayor by the Flemish Government in Belgium, supervision with a view to expediency in Great Britain, and the tendency to overuse supervisory power in Poland, are analysed. On the basis of this research, the tendencies of administrative supervision of local authorities’ activities in the selected European countries are described. Several recommendations are formulated for Ukraine as a country that has been granted EU candidate status. Having emphasised its willingness to become a member of the European community, Ukraine should not only follow the best European practices but also avoid the mistakes of countries that have long-term experience in developing the institution of local self-government. This project has received funding from the Research Council of Lithuania (LMTLT), agreement № S-PD-22-65.
Keywords: administrative supervision, decentralisation, legality, local authorities, local self-government
Procedia PDF Downloads 64
1113 Structural Equation Modelling Based Approach to Integrate Customers and Suppliers with Internal Practices for Lean Manufacturing Implementation in the Indian Context
Authors: Protik Basu, Indranil Ghosh, Pranab K. Dan
Abstract:
Lean management is an integrated socio-technical system for bringing about a competitive state in an organization. The purpose of this paper is to explore and integrate the role of customers and suppliers with the internal practices of the Indian manufacturing industries towards successful implementation of lean manufacturing (LM). An extensive literature survey was carried out. An attempt is made to build an exhaustive list of all the input manifests related to customers, suppliers and internal practices necessary for LM implementation, coupled with a similar exhaustive list of the benefits accrued from its successful implementation. A structural model is thus conceptualized, which is empirically validated based on data from the Indian manufacturing sector. With the current impetus on developing the industrial sector, the Government of India recently introduced the Lean Manufacturing Competitiveness Scheme, which aims to increase competitiveness with the help of lean concepts. There is huge scope to enrich the Indian industries with the lean benefits, the implementation status being quite low. Hardly any survey-based empirical study in India has been found to integrate customers and suppliers with the internal processes towards successful LM implementation. This empirical research was thus carried out in the Indian manufacturing industries. The basic steps of the research methodology followed in this research are: identification of input and output manifest variables and latent constructs; model proposition and hypotheses development; development of the survey instrument; sampling and data collection; and model validation (exploratory factor analysis, confirmatory factor analysis, and structural equation modeling). The analysis reveals six key input constructs and three output constructs, indicating that these constructs should act in unison to maximize the benefits of implementing lean. The structural model presented in this paper may be treated as a guide to integrating customers and suppliers with internal practices to successfully implement lean. Integrating customers and suppliers with internal practices into a unified, coherent manufacturing system will lead to an optimum utilization of resources. This work is one of the very first studies to offer a survey-based empirical analysis of the role of customers, suppliers and internal practices of the Indian manufacturing sector towards an effective lean implementation.
Keywords: customer management, internal manufacturing practices, lean benefits, lean implementation, lean manufacturing, structural model, supplier management
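As a hedged illustration of the exploratory-factor-analysis step in the methodology above, the sketch below runs EFA on synthetic Likert-style responses; the real study would use the Indian survey data and continue with confirmatory factor analysis and structural equation modeling.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic stand-in for survey data: 200 respondents, 12 items driven by
# 3 hypothetical latent constructs (the study itself found 6 input and
# 3 output constructs from a larger item pool).
rng = np.random.default_rng(0)
n_respondents, n_items = 200, 12
latent = rng.normal(size=(n_respondents, 3))
loadings = rng.uniform(0.5, 0.9, size=(3, n_items))
X = latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))

# Exploratory factor analysis with varimax rotation (scikit-learn >= 0.24).
fa = FactorAnalysis(n_components=3, rotation="varimax").fit(X)
print(np.round(fa.components_.T, 2))  # item loadings on the extracted factors
```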
Procedia PDF Downloads 179
1112 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation
Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim
Abstract:
In this article, a portfolio optimization problem is solved in a Solvency II context: it illustrates how advanced optimization techniques can help to tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress tests on interest rate, equity, property, credit and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate as an optimization criterion. In the SCR formulation, correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR to reduce non-invested capital but also to ensure the stability of the SCR. Some optimizations have already been performed in the literature, simplifying the standard formula into a quadratic function. But to our knowledge, it is the first time that the standard formula of the market SCR is used directly in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement, compared to a classical Markowitz approach based on the historical volatility. A comparative analysis of different optimization models (equi-risk-contribution portfolio, minimum-volatility portfolio and minimum value-at-risk portfolio) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected, since the market SCR strongly penalizes this type of financial instrument. It is shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, proving the value of a portfolio construction approach that can incorporate such features. The present results are further explained by the market SCR modelling.
Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement
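The aggregation that makes the market SCR awkward as an optimization criterion can be sketched as follows: sub-module SCRs are combined through a fixed correlation matrix as SCR = sqrt(s' C s). The correlation values and sub-module SCRs below are illustrative stand-ins, not the official Solvency II calibration.

```python
import numpy as np

# Illustrative correlation matrix between market sub-modules (assumed values,
# not the regulatory calibration) and illustrative sub-module SCRs.
labels = ["interest", "equity", "property", "spread", "fx", "concentration"]
corr = np.array([
    [1.00, 0.50, 0.50, 0.50, 0.25, 0.00],
    [0.50, 1.00, 0.75, 0.75, 0.25, 0.00],
    [0.50, 0.75, 1.00, 0.50, 0.25, 0.00],
    [0.50, 0.75, 0.50, 1.00, 0.25, 0.00],
    [0.25, 0.25, 0.25, 0.25, 1.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00],
])
scr_sub = np.array([12.0, 40.0, 8.0, 15.0, 5.0, 3.0])  # sub-module SCRs

# Standard-formula aggregation: SCR = sqrt(s' C s).
market_scr = float(np.sqrt(scr_sub @ corr @ scr_sub))
print(f"aggregated market SCR: {market_scr:.1f}")
# Diversification benefit relative to a simple sum of sub-modules:
print(f"diversification benefit: {scr_sub.sum() - market_scr:.1f}")
```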
Procedia PDF Downloads 117
1111 Evaluating Textbooks for Brazilian Air Traffic Controllers’ English Language Training: A Checklist Proposal
Authors: Elida M. R. Bonifacio
Abstract:
English language proficiency has become an essential issue in aviation communication in the wake of aviation incidents and accidents. Lack of proficiency or inappropriate use of the English language has been found to be one of the factors that caused most of those incidents or accidents. Therefore, the International Civil Aviation Organization (ICAO) established requirements for the minimum English language proficiency of aviation personnel, especially pilots and air traffic controllers, in the 192 member states. In Brazil, discussion of this topic became patent after an accident that occurred in 2006, a mid-air collision that cost the lives of 154 passengers and crew members. Thus, the number of schools and private practitioners willing to teach English for aviation purposes started to increase. Although the number of teaching materials internationally used for general purposes is relatively large, it would be inappropriate to adopt the same materials in classes that focus on communication in aviation contexts. On the contrary, the options for aviation English materials are scarce; moreover, they are internationally used and may not fulfill the linguistic needs of all their users around the world. In order to diminish the problems that Brazilian practitioners may encounter in the adoption of materials that demand a great deal of adaptation to meet their students’ needs, a checklist was conceived to evaluate textbooks. The aim of this paper is to propose a checklist that evaluates textbooks used in the English language training of Brazilian air traffic controllers. The criteria that compose the checklist are based on the materials development literature, on the linguistic requirements established by ICAO in its publications, on English for Specific Purposes (ESP) principles, and on the format of the Brazilian aviation English language proficiency test. The checklist has as its main indicators the language learning tenets under which the book was written, graphical features, the lexical, grammatical and functional competencies required for minimum proficiency, similarities to the official testing format, and support materials, totaling 117 items marked as YES, NO or PARTIALLY. In order to verify whether the use of the checklist is effective, an aviation English textbook was evaluated. From this evaluation, it is possible to measure quantitatively how well the material meets the students’ needs and to offer a tool that helps professionals engaged in aviation English teaching around the world choose the most appropriate textbook for their audience. From the results, practitioners are able to verify which items the material does not fulfill and to make proper adaptations, since the perfect material will be difficult to find.
Keywords: aviation English, ICAO, materials development, English language proficiency
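One plausible way to turn the 117-item checklist into the quantitative measure mentioned above is sketched below; the YES/PARTIALLY/NO weighting is an assumption of this illustration, not prescribed by the paper.

```python
# Assumed weighting: YES = 1, PARTIALLY = 0.5, NO = 0 (not prescribed by the paper).
WEIGHTS = {"YES": 1.0, "PARTIALLY": 0.5, "NO": 0.0}

def checklist_score(marks: list[str]) -> float:
    """Percentage of the checklist the evaluated textbook satisfies."""
    return 100.0 * sum(WEIGHTS[m] for m in marks) / len(marks)

# Hypothetical evaluation of one aviation English textbook (117 items total):
marks = ["YES"] * 70 + ["PARTIALLY"] * 30 + ["NO"] * 17
print(f"{checklist_score(marks):.1f}% of 117 items met")  # -> 72.6%
```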
Procedia PDF Downloads 136
1110 Simultaneous Detection of Cd⁺², Fe⁺², Co⁺², and Pb⁺² Heavy Metal Ions by Stripping Voltammetry Using Polyvinyl Chloride Modified Glassy Carbon Electrode
Authors: Sai Snehitha Yadavalli, K. Sruthi, Swati Ghosh Acharyya
Abstract:
Heavy metal ions are toxic to humans and all living species when exposure occurs in large quantities or over long durations. Though Fe acts as a nutrient, it becomes toxic when taken in large quantities. These toxic heavy metal ions, when consumed through water, cause many disorders and are harmful to all flora and fauna through biomagnification. Humans in particular are prone to innumerable diseases, ranging from skin to gastrointestinal and neurological conditions. In higher quantities, these ions can even cause cancer in humans. Detection of these toxic heavy metal ions in water is thus important. Traditionally, the detection of heavy metal ions in water has been done by techniques like Inductively Coupled Plasma Mass Spectrometry (ICP-MS) and Atomic Absorption Spectroscopy (AAS). Though these methods offer accurate quantitative analysis, they require expensive equipment and cannot be used for on-site measurements. Anodic stripping voltammetry is a good alternative, as the equipment is affordable and measurements can be made at river basins or lakes. In the current study, Square Wave Anodic Stripping Voltammetry (SWASV) was used to detect the heavy metal ions in water. The literature reports various electrodes on which deposition of heavy metal ions has been carried out, such as bismuth- and polymer-modified electrodes. The working electrode used in this study is a polyvinyl chloride (PVC) modified glassy carbon electrode (GCE); an Ag/AgCl reference electrode and a platinum counter electrode were used. A BioLogic SP-300 potentiostat was used for conducting the experiments. Through this work of simultaneous detection, four heavy metal ions were successfully detected at a time. The influence of modifying the GCE with PVC was studied in comparison with the unmodified GCE. The simultaneous detection of Cd⁺², Fe⁺², Co⁺², and Pb⁺² heavy metal ions was done using the PVC-modified GCE prepared by drop-casting 1 wt.% of PVC dissolved in tetrahydrofuran (THF) solvent onto the GCE. The concentration of all heavy metal ions was 0.2 mg/L. The scan rate was 0.1 V/s. Detection parameters such as pH, scan rate, temperature, and time of deposition were optimized. It was clearly understood that PVC helped in increasing the sensitivity and selectivity of detection, as the current values were higher for the PVC-modified GCE compared to the unmodified GCE. The peaks were well defined when the PVC-modified GCE was used.
Keywords: cadmium, cobalt, electrochemical sensing, glassy carbon electrodes, heavy metal ions, iron, lead, polyvinyl chloride, potentiostat, square wave anodic stripping voltammetry
Procedia PDF Downloads 104
1109 Estimation of Scour Using a Coupled Computational Fluid Dynamics and Discrete Element Model
Authors: Zeinab Yazdanfar, Dilan Robert, Daniel Lester, S. Setunge
Abstract:
Scour has been identified as the most common threat to bridge stability worldwide. Traditionally, scour around bridge piers is calculated using empirical approaches that have considerable limitations and are difficult to generalize. The multi-physics nature of scouring, which involves turbulent flow, soil mechanics and solid-fluid interactions, cannot be captured by simple empirical equations developed from limited laboratory data. These limitations can be overcome by direct numerical modeling of the coupled hydro-mechanical scour process, which provides a robust prediction of bridge scour and valuable insights into the scour process. Several numerical models have been proposed in the literature for bridge scour estimation, including Eulerian flow models and coupled Euler-Lagrange models incorporating an empirical sediment transport description. However, the contact forces between particles and the flow-particle interaction have not been taken into consideration. Incorporating collisional and frictional forces between soil particles, as well as the effect of flow-driven forces on particles, will facilitate accurate modeling of the complex nature of scour. In this study, a coupled Computational Fluid Dynamics and Discrete Element Model (CFD-DEM) has been developed that simulates the scour process by directly modeling the hydro-mechanical interactions between the sediment particles and the flowing water. This approach obviates the need for an empirical description, as the fundamental fluid-particle and particle-particle interactions are fully resolved. The sediment bed is simulated as a dense pack of particles, and the frictional and collisional forces between particles are calculated, whilst the turbulent fluid flow is modeled using a Reynolds-Averaged Navier-Stokes (RANS) approach. The CFD-DEM model is validated against experimental data in order to assess its reliability. The modeling results reveal the criticality of particle impact in the assessment of scour depth, which, to the authors’ best knowledge, has not been considered in previous studies. The results of this study open new perspectives on scour depth and time assessment, which is the key to managing the failure risk of bridge infrastructure.
Keywords: bridge scour, discrete element method, CFD-DEM model, multi-phase model
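A minimal sketch of the two force ingredients such a coupled model resolves for each sediment grain is given below: a spring-dashpot contact force between particles and a fluid drag force from the resolved flow field. The contact law, drag law, and parameter values are illustrative, not those of the validated model.

```python
import numpy as np

# Assumed illustrative parameters (not calibrated values from the study):
K_N, ETA_N = 1.0e4, 5.0       # normal stiffness (N/m) and damping (N.s/m)
MU_F = 1.0e-3                  # water dynamic viscosity (Pa.s)

def contact_force(x_i, x_j, v_i, v_j, r_i, r_j):
    """Linear spring-dashpot normal contact force on particle i from particle j."""
    d = x_i - x_j
    dist = np.linalg.norm(d)
    overlap = (r_i + r_j) - dist
    if overlap <= 0.0:
        return np.zeros(3)     # no contact
    n = d / dist               # unit normal pointing from j to i
    rel_vn = np.dot(v_i - v_j, n)
    return (K_N * overlap - ETA_N * rel_vn) * n

def drag_force(u_fluid, v_p, d_p):
    """Stokes drag on a small grain; valid at low particle Reynolds number."""
    return 3.0 * np.pi * MU_F * d_p * (u_fluid - v_p)

# Two 1 mm-radius grains slightly overlapping, plus drag from a 0.2 m/s flow:
f = contact_force(np.zeros(3), np.array([0.0019, 0.0, 0.0]),
                  np.zeros(3), np.zeros(3), 0.001, 0.001)
f += drag_force(np.array([0.2, 0.0, 0.0]), np.zeros(3), 0.002)
print(f)  # net force (N) on the first grain
```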
Procedia PDF Downloads 131
1108 Analysis of the Perception of Medical Professionalism by Specialists of Family Medicine in Kazakhstan
Authors: Nurgul A. Abenova, Gaukhar S. Dilmagambetova, Lazzat M. Zhamaliyeva
Abstract:
Professionalism is a core competency that all medical students must achieve throughout their studies. Clinical knowledge, good communication skills and an understanding of ethics form the basis of professionalism. Patients, medical societies and accrediting organizations expect future specialists to be professionals in their field, which in turn leads to the best clinical results. Currently, there are no studies devoted to medical professionalism in the Republic of Kazakhstan. As a result, medical education in the Kazakhstani system has a limited perception of the concept of professionalism compared to many Western medical schools. Thus, the primary purpose of this study is to analyze the perception of medical professionalism among residents and teachers of family medicine at the West Kazakhstan Marat Ospanov Medical University. A qualitative research method was used, based on content analysis methodology. A focus group discussion was held with 60 residents and 12 family medicine teachers to gather participants’ views and experiences in the field of medical professionalism. The collected information was processed using the MAXQDA-2020 software package. Respondents were selected for the study based on their age, gender, and educational level. The results of the survey confirmed the respondents’ acknowledgment of the basic attributes of professionalism, such as medical knowledge and skills (more than 40% of the answers), personal and moral qualities of the doctor (more than 25% of the answers), respect for the interests of the patient (15% of the answers), and the relationship between the doctor and the patient and among professionals themselves (15% of responses). Another important finding of the survey was that residents are five times more likely than teachers of family medicine to define the doctor-patient relationship in a “respect for the interests of the patient” model; the teachers primarily reported responsibility and collegiality to be the basis for the development of professionalism and traditionally view the doctor-patient relationship as formed on the basis of paternalism, defined by a high degree of control over patients. This significant difference demonstrates a rift among specialists in the field of family medicine, which causes many problems; for example, professional family doctors nowadays regularly face burnout for many reasons and under many factors that force them to abandon their jobs. In addition, elements of professionalism such as reflective skills, time management and feedback collection were represented to the least extent (less than 1%) by both groups, which differs from the perception in Western medical schools and is a significant issue that needs to be addressed. The qualitative nature of our study provides a detailed understanding of medical professionalism in the context of the Central Asian healthcare system, revealing many aspects that lag behind their Western medical school counterparts, and suggests a solution: to teach the attributes and skills required for medical professionalism at all stages of the medical education of family doctors.
Keywords: family medicine, family doctors, medical professionalism, medical education
Procedia PDF Downloads 141
1107 A First Step towards Automatic Evolutionary for Gas Lifts Allocation Optimization
Authors: Younis Elhaddad, Alfonso Ortega
Abstract:
Oil production by means of gas lift is a standard technique in the oil production industry. Optimizing the total oil production with respect to the amount of gas injected is a key question in this domain. Different methods have been tested in attempts to propose a general methodology. Many of them apply well-known numerical methods, and some have exploited the power of evolutionary approaches. Our goal is to provide the experts of the domain with a powerful automatic search engine into which they can introduce their knowledge in a format close to the one used in their domain, and get solutions comprehensible in the same terms as well. Such proposals introduce into the genetic engine the most expressive formal models to represent the solutions to the problem. These algorithms have proven to be as effective as other genetic systems but more flexible and comfortable for the researcher, although they usually require huge search spaces to justify their use, due to the computational resources involved in the formal models. The first step in evaluating the viability of applying our approaches to this realm is to fully understand the domain and to select an instance of the problem (gas lift optimization) in which applying genetic approaches seems promising. After analyzing the state of the art of this topic, we decided to choose a previous work from the literature that faces the problem by means of numerical methods. This contribution includes enough detail to be reproduced and complete data to be carefully analyzed. We designed a classical, simple genetic algorithm just to try to obtain the same results and to understand the problem in depth. We could easily incorporate the well model and the well data used by the authors, and easily translate their mathematical model, to be numerically optimized, into a proper fitness function. We analyzed the 100 curves they use in their experiment, and similar results were observed; in addition, our system automatically inferred an optimum total amount of injected gas for the field compatible with the sum of the optimum gas injected in each well reported by them. We identified several constraints that could be interesting to incorporate into the optimization process but that could be difficult to express numerically. It could also be interesting to automatically propose other mathematical models to fit both individual well curves and the behaviour of the complete field. All these facts and conclusions justify continuing to explore the viability of applying the more sophisticated approaches previously proposed by our research group.
Keywords: evolutionary automatic programming, gas lift, genetic algorithms, oil production
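A classical, simple genetic algorithm of the kind described can be sketched as follows; the quadratic well-performance curves and the GA settings are assumptions standing in for the 100 measured curves and the authors' actual fitness function.

```python
import numpy as np

rng = np.random.default_rng(1)
N_WELLS, GAS_BUDGET = 10, 50.0
# Assumed quadratic well-performance curves: oil_i = a_i*q - b_i*q^2.
a = rng.uniform(1.0, 3.0, N_WELLS)
b = rng.uniform(0.05, 0.2, N_WELLS)

def fitness(q):
    """Total oil production, penalising allocations over the gas budget."""
    q = np.clip(q, 0.0, None)
    penalty = 1000.0 * max(0.0, q.sum() - GAS_BUDGET)
    return (a * q - b * q**2).sum() - penalty

pop = rng.uniform(0.0, GAS_BUDGET / N_WELLS, size=(60, N_WELLS))
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-30:]]              # truncation selection
    i, j = rng.integers(0, 30, 60), rng.integers(0, 30, 60)
    mask = rng.random((60, N_WELLS)) < 0.5
    kids = np.where(mask, parents[i], parents[j])        # uniform crossover
    kids += rng.normal(scale=0.5, size=kids.shape)       # Gaussian mutation
    pop = kids
best = max(pop, key=fitness)
print(np.round(np.clip(best, 0.0, None), 2), round(fitness(best), 2))
```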
Procedia PDF Downloads 162
1106 Prioritizing Temporary Shelter Areas for Disaster Affected People Using Hybrid Decision Support Model
Authors: Ashish Trivedi, Amol Singh
Abstract:
In recent years, the magnitude and frequency of disasters have increased at an alarming rate. Every year, more than 400 natural disasters affect the global population. A large-scale disaster leads to destruction of or damage to houses, thereby rendering a notable number of residents homeless. Since the humanitarian response and recovery process takes considerable time, temporary establishments are arranged in order to provide shelter to the affected population. These shelter areas are vital for effective humanitarian relief; therefore, they must be strategically planned. Choosing the locations of temporary shelter areas for accommodating homeless people is critical to the quality of humanitarian assistance provided after a large-scale emergency. There has been extensive research on the facility location problem, both in theory and in application. In order to deliver sufficient relief aid within a relatively short time frame, humanitarian relief organisations pre-position warehouses at strategic locations. However, such approaches have received limited attention from the perspective of providing shelters to disaster-affected people. In the present research work, this aspect of humanitarian logistics is considered. The present work proposes a hybrid decision support model to determine the relative preference of potential shelter locations by assessing them against key subjective criteria. Initially, the factors that are kept in mind while locating potential areas for establishing temporary shelters were identified by reviewing the extant literature and through consultation with a panel of disaster management experts. In order to determine the relative importance of individual criteria while taking into account the subjectivity of judgements, a hybrid approach of fuzzy sets and the Analytic Hierarchy Process (AHP) was adopted. Further, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) was applied to an illustrative data set to evaluate potential locations for establishing temporary shelter areas for homeless people in a disaster scenario. The contribution of this work is to propose a range of possible shelter locations for a humanitarian relief organization, using a robust multi-criteria decision support framework.
Keywords: AHP, disaster preparedness, fuzzy set theory, humanitarian logistics, TOPSIS, temporary shelters
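The TOPSIS ranking step can be sketched on an illustrative data set as follows; the sites, criterion scores, and weights are assumptions, with the weights imagined as the output of the fuzzy-AHP stage.

```python
import numpy as np

# Rows: candidate shelter sites; columns: criteria (all treated as benefits).
# Scores and weights are invented for illustration only.
X = np.array([
    [7.0, 5.0, 8.0, 6.0],   # site A: access, water supply, safety, capacity
    [6.0, 8.0, 6.0, 7.0],   # site B
    [9.0, 4.0, 7.0, 5.0],   # site C
])
w = np.array([0.35, 0.25, 0.25, 0.15])  # criterion weights, e.g. from fuzzy AHP

R = X / np.linalg.norm(X, axis=0)       # vector normalisation
V = R * w                               # weighted normalised matrix
ideal, anti = V.max(axis=0), V.min(axis=0)
d_plus = np.linalg.norm(V - ideal, axis=1)   # distance to ideal solution
d_minus = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal solution
closeness = d_minus / (d_plus + d_minus)     # relative closeness coefficient
print(np.argsort(closeness)[::-1])           # sites ranked best to worst
```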
Procedia PDF Downloads 205
1105 Probabilistic Study of Impact Threat to Civil Aircraft and Realistic Impact Energy
Authors: Ye Zhang, Chuanjun Liu
Abstract:
In-service aircraft are exposed to different types of threats, e.g., bird strike, ground vehicle impact, runway debris, or even lightning strike. To satisfy the aircraft damage tolerance design requirements, the designer has to understand the threat level for the different types of aircraft structures, whether metallic or composite. Exposure to low-velocity impacts may produce very serious internal damage, such as delaminations and matrix cracks, without leaving a visible mark on the impacted surfaces of composite structures. This internal damage can cause a significant reduction in the load-carrying capacity of structures. The semi-probabilistic method provides a practical and proper approximation for establishing the impact-threat-based energy cut-off level for the damage tolerance evaluation of aircraft components. Thus, the probabilistic distribution of impact threat and the realistic impact energy cut-off levels are the essential foundations required for the certification of aircraft composite structures. A new survey of impact threats to civil aircraft in service has recently been carried out based on field records concerning around 500 civil aircraft (mainly single-aisle) and more than 4.8 million flight hours. In total, 1,006 instances of damage caused by low-velocity impact events were screened out from more than 8,000 records, including impact dents, scratches, corrosion, delaminations, cracks, etc. The dependency of the impact threat on the location on the aircraft structure and on the structural configuration was analyzed. Although the survey mainly focused on metallic structures, the resulting low-energy impact data are believed to be representative of civil aircraft in general, since the service environments and the maintenance operations are independent of the materials of the structures. The probability of impact damage occurrence (Po) and the probability of impact energy exceedance (Pe) are the two key parameters for describing the statistical distribution of impact threat. From the impact damage events in the survey, Po can be estimated as 2.1×10⁻⁴ per flight hour. Concerning the calculation of Pe, a numerical model was developed using the commercial FEA software ABAQUS to backward-estimate the impact energy based on the visible damage characteristics. The relationship between the visible dent depth and impact energy was established and validated by drop-weight impact experiments. Based on the survey results, Pe was calculated and assumed to have a log-linear relationship with the impact energy. For the product of the two aforementioned probabilities, it is reasonable and conservative to assume Pa = Po × Pe = 10⁻⁵, which indicates that low-velocity impact events are about as likely as Limit Load events. Combining Pa with the two probabilities Po and Pe obtained from the field survey, the cut-off level of realistic impact energy was estimated as 34 J. In summary, a new survey of field records of civil aircraft was recently carried out to investigate the probabilistic distribution of impact threat. Based on the data, the two probabilities, Po and Pe, were obtained. Under a conservative assumption for Pa, the cut-off level for the realistic impact energy has been determined, which has potential applicability in the damage tolerance certification of future civil aircraft.
Keywords: composite structure, damage tolerance, impact threat, probabilistic
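The cut-off arithmetic above can be reconstructed in a few lines: with Pa = Po × Pe fixed at 10⁻⁵ and Po = 2.1×10⁻⁴ per flight hour, the required exceedance probability is Pe = Pa/Po, and the cut-off energy follows from the assumed log-linear law. The coefficients below are assumptions, chosen so the example reproduces the reported 34 J, since the survey fit itself is not given in the abstract.

```python
import numpy as np

Po, Pa = 2.1e-4, 1.0e-5
Pe = Pa / Po                    # required exceedance probability, ~0.048

# Assumed log-linear exceedance law: log10(Pe) = c0 - c1 * E (J).
# c0 = 0 means Pe = 1 at E = 0; c1 is chosen here to reproduce 34 J.
c0, c1 = 0.0, 0.0388
E_cutoff = (c0 - np.log10(Pe)) / c1
print(f"Pe = {Pe:.3f}, cut-off energy ~ {E_cutoff:.0f} J")  # ~34 J
```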
Procedia PDF Downloads 308
1104 An Approach for Estimating Open Education Resources Textbook Savings: A Case Study
Authors: Anna Ching-Yu Wong
Abstract:
Introduction: Textbooks account for a sizable portion of the overall cost borne by higher education students. It is broadly accepted that open education resources (OER) reduce textbook costs and provide students a way to receive high-quality learning materials at little or no cost to them. However, there is less agreement over exactly how much. This study presents an approach for calculating OER savings, using SUNY Canton non-OER courses (N=233) to estimate the potential textbook savings for one semester, Fall 2022. The purpose in collecting the data is to understand how much could potentially be saved through OER materials and to have a record for further studies in the future. Literature Review: In past years, researchers identified that the rising cost of textbooks disproportionately harms students in higher education institutions and estimated the average cost of a textbook. For example, Nyamweya (2018), using a simple formula, found that on average students save $116.94 per course when OER are adopted in place of traditional commercial textbooks. Student PIRGs (2015) used reports of per-course savings when transforming a course from a commercial textbook to OER to reach an estimate of $100 average cost savings per course. Allen and Wiley (2016) presented multiple cost-savings studies at the 2016 Open Education Conference and concluded that $100 was a reasonable per-course savings estimate. Ruth (2018) calculated the average cost of a textbook to be $79.37 per course. Hilton et al. (2014) conducted a study with seven community colleges across the nation and found the average textbook cost to be $90.61. There is less agreement over exactly how much would be saved by adopting an OER course. This study used SUNY Canton as a case study to create an approach for estimating OER savings. Methodology: Step one: identify non-OER courses from the UcanWeb Class Schedule. Step two: view the textbook lists for the classes (campus bookstore prices). Step three: calculate the average textbook price by averaging the new book and used book prices. Step four: multiply the average textbook price by the number of students in the course. Findings: The result of this calculation was straightforward. The average price of a traditional textbook is $132.45. Students potentially saved $1,091,879.94. Conclusion: (1) The result confirms what we have known: adopting OER in place of traditional textbooks and materials achieves significant savings for students, as well as for the parents and taxpayers who support them through grants and loans. (2) The average textbook savings from adopting an OER course varies depending on the size of the college as well as the number of enrolled students.
Keywords: textbook savings, open textbooks, textbook costs assessment, open access
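The four-step methodology reduces to a short computation; the sketch below uses invented prices and enrolments, not the actual SUNY Canton bookstore data.

```python
# Sketch of the four-step estimate described above; all numbers are
# illustrative, not the actual SUNY Canton bookstore or enrolment data.
def course_savings(new_price: float, used_price: float, enrolled: int) -> float:
    avg_price = (new_price + used_price) / 2.0  # step three
    return avg_price * enrolled                  # step four

# Hypothetical non-OER courses: (new price, used price, enrolment)
courses = [(150.00, 90.00, 30), (200.00, 120.00, 25), (95.00, 55.00, 40)]
total = sum(course_savings(n, u, e) for n, u, e in courses)
print(f"potential savings: ${total:,.2f}")  # summed over all 233 courses in the study
```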
Procedia PDF Downloads 75
1103 A Computational Model of the Thermal Grill Illusion: Simulating the Perceived Pain Using Neuronal Activity in Pain-Sensitive Nerve Fibers
Authors: Subhankar Karmakar, Madhan Kumar Vasudevan, Manivannan Muniyandi
Abstract:
The Thermal Grill Illusion (TGI) elicits a strong and often painful burning sensation when interlaced warm and cold stimuli, each individually non-painful, excite thermoreceptors beneath the skin. Among the several theories of TGI, the “disinhibition” theory is the most widely accepted in the literature. According to this theory, TGI is the result of the disinhibition or unmasking of the pain-sensitive HPC (Heat-Pinch-Cold) nerve fibers, caused by the inhibition of the cold-sensitive nerve fibers that normally mask the HPC fibers. Although researchers have focused on understanding TGI through experiments and models, none of them has investigated the prediction of TGI pain intensity through a computational model. Furthermore, the comparison of psychophysically perceived TGI intensity with neurophysiological models has not yet been studied. The prediction of pain intensity through a computational model of TGI can help in optimizing thermal displays and in understanding pathological conditions related to temperature perception. The current study focuses on developing a computational model to predict the intensity of TGI pain and on experimentally observing the perceived TGI pain. The computational model is developed based on the disinhibition theory and by utilizing existing popular models of warm and cold receptors in the skin. The model aims to predict the neuronal activity of the HPC nerve fibers. With a temperature-controlled thermal grill setup, fifteen participants (ten males and five females) were presented with five temperature differences between warm and cold grills (each repeated three times). All the participants rated the perceived TGI pain sensation on a scale of one to ten. For the range of temperature differences, the experimentally observed perceived intensity of TGI is compared with the neuronal activity of pain-sensitive HPC nerve fibers. The simulation results show a monotonically increasing relationship between the temperature differences and the neuronal activity of the HPC nerve fibers. Moreover, a similar monotonically increasing relationship is experimentally observed between temperature differences and the perceived TGI intensity. This demonstrates that the TGI pain intensity observed in the experimental study can be compared with the neuronal activity predicted through the model. The proposed model intends to bridge the theoretical understanding of the TGI and the experimental results obtained through psychophysics. Further studies in pain perception are needed to develop a more accurate version of the current model.
Keywords: thermal grill illusion, computational modelling, simulation, psychophysics, haptics
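A highly simplified firing-rate sketch of the disinhibition account is given below: HPC output is driven by cold, but its central effect is masked by cold-sensitive (COOL) fibers, and concurrent warm input suppresses the COOL channel, unmasking the HPC response. All tuning curves and gains are assumptions of this illustration, not the receptor models used in the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hpc(t_cold):            # HPC drive from the cold bars (assumed tuning)
    return relu(32.0 - t_cold)

def cool(t_cold):           # masking COOL channel (assumed tuning)
    return relu(32.0 - t_cold) * 1.2

def warm_suppression(t_warm):  # warm input quiets the COOL channel (assumed)
    return 1.0 / (1.0 + relu(t_warm - 33.0) * 0.5)

def perceived_pain(t_warm, t_cold, w=1.0):
    inhibition = w * cool(t_cold) * warm_suppression(t_warm)
    return relu(hpc(t_cold) - inhibition)

print(perceived_pain(t_warm=20.0, t_cold=20.0))  # uniform cold: pain masked (0.0)
print(perceived_pain(t_warm=40.0, t_cold=20.0))  # grill: unmasked "burning" (8.8)
# Pain rises monotonically with the warm-cold difference, matching the trend
# reported in the abstract.
```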
Procedia PDF Downloads 173
1102 Scalable Performance Testing: Facilitating the Assessment of Application Performance under Substantial Loads and Mitigating the Risk of System Failures
Authors: Solanki Ravirajsinh
Abstract:
In the software testing life cycle, failing to conduct thorough performance testing can result in significant losses for an organization due to application crashes and improper behavior under high user loads in production. Simulating large volumes of requests, such as 5 million within 5-10 minutes, is challenging without a scalable performance testing framework. Leveraging cloud services to implement a performance testing framework makes it feasible to handle 5-10 million requests in just 5-10 minutes, helping organizations ensure their applications perform reliably under peak conditions. Implementing a scalable performance testing framework using cloud services and tools like JMeter, EC2 instances (virtual machines), cloud logs (for monitoring errors and logs), EFS (file storage), and security groups offers several key benefits. Building a performance testing framework with this approach helps optimize resource utilization, enables effective benchmarking, increases reliability, and saves costs by resolving performance issues before the application is released. In performance testing, a master-slave framework facilitates distributed testing across multiple EC2 instances to emulate many concurrent users and efficiently handle high loads. The master node orchestrates the test execution by coordinating with multiple slave nodes to distribute the workload. Slave nodes execute the test scripts provided by the master node, with each node handling a portion of the overall user load and generating requests to the target application or service. By leveraging JMeter's master-slave framework in conjunction with cloud services like EC2 instances, EFS, CloudWatch logs, security groups, and command-line tools, organizations can achieve superior scalability and flexibility in their performance testing efforts. In this master-slave framework, JMeter must be installed on both the master and each slave EC2 instance. The master EC2 instance functions as the "brain," while the slave instances operate as the "body parts." The master directs each slave to execute a specified number of requests. Upon completion of the execution, the slave instances transmit their results back to the master, which consolidates them into a comprehensive report detailing metrics such as the number of requests sent, encountered errors, network latency, response times, server capacity, throughput, and bandwidth. Leveraging cloud services, the framework benefits from automatic scaling based on the volume of requests. Notably, integrating cloud services allows organizations to handle more than 5-10 million requests within 5 minutes, depending on the server capacity of the hosted website or application. Keywords: identifying application crashes under heavy load, JMeter with cloud services, scalable performance testing, JMeter master-slave using cloud services
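A hedged sketch of how the master node might launch a distributed run, using JMeter's standard non-GUI flags (-n non-GUI, -t test plan, -R remote slave hosts, -l results log); the slave IP addresses and file names are placeholders, and each slave is assumed to already be running jmeter-server:

```python
import subprocess

# Placeholder private IPs of the slave EC2 instances; in practice these could
# be discovered via the AWS API or injected by infrastructure tooling.
SLAVE_HOSTS = ["10.0.1.11", "10.0.1.12", "10.0.1.13"]

def run_distributed_test(test_plan: str, results_file: str) -> None:
    """Start a JMeter master that farms the test plan out to the slaves."""
    cmd = [
        "jmeter", "-n",               # non-GUI mode
        "-t", test_plan,              # the .jmx test plan to distribute
        "-R", ",".join(SLAVE_HOSTS),  # remote slaves that generate the load
        "-l", results_file,           # consolidated results log (.jtl)
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_distributed_test("load_test.jmx", "results.jtl")
```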
Procedia PDF Downloads 30
1101 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading
Authors: Robert Caulk
Abstract:
A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training dataset and using that parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive-training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-Lib, pandas-ta). The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, Scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides a road map for future development of FreqAI. Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration
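A rough sketch of the adaptive-retraining idea, not FreqAI's actual implementation: the window sizes, retraining cadence, and the crude z-score outlier rule below are illustrative stand-ins for FreqAI's parameter-space checks:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def rolling_retrain_predict(features: np.ndarray, target: np.ndarray,
                            train_window: int = 1000, retrain_every: int = 50):
    """Retrain on the most recent window and predict one step ahead."""
    preds, model = [], None
    for t in range(train_window, len(features)):
        if model is None or (t - train_window) % retrain_every == 0:
            X_tr = features[t - train_window:t]
            y_tr = target[t - train_window:t]
            # Drop training rows far outside the feature distribution.
            mu, sd = X_tr.mean(axis=0), X_tr.std(axis=0) + 1e-9
            keep = (np.abs((X_tr - mu) / sd) < 4).all(axis=1)
            model = GradientBoostingRegressor().fit(X_tr[keep], y_tr[keep])
        preds.append(model.predict(features[t:t + 1])[0])
    return np.array(preds)
```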
Procedia PDF Downloads 89
1100 Plastic Behavior of Steel Frames Using Different Concentric Bracing Configurations
Authors: Madan Chandra Maurya, A. R. Dar
Abstract:
Among all natural calamities, earthquakes are the most devastating: even the combined losses from all other calamities are far smaller than those caused by earthquakes. We must therefore be ready to face such events, which is only possible if we make our structures earthquake resistant. A review of structural damage to braced frame systems after several major earthquakes, including recent ones, has identified both anticipated and unanticipated damage. This damage has prompted many engineers and researchers around the world to consider new approaches to improve the behavior of braced frame systems. Extensive experimental studies over the last forty years on conventional buckling brace components and several braced frame specimens are briefly reviewed, highlighting that the number of studies on full-scale concentrically braced frames is still limited. The study therefore centers on the plastic behavior of steel braced frame systems. Two different analytical approaches have been used to predict the behavior and strength of an un-braced frame. The first is incremental elasto-plastic analysis, a plastic approach that gives a complete load-deflection history of the structure until collapse. It is based on the plastic hinge concept for fully plastic cross sections in a structure under increasing proportional loading. The incremental elasto-plastic hinge-by-hinge method is used in this study because of its simplicity in capturing the complete load-deformation history of a two-storey un-braced scaled model. Experiments were then conducted on two-storey scaled building models with and without a bracing system to obtain the experimental load-deformation curves. The study, titled Plastic Behavior of Steel Frames Using Different Concentric Bracing Configurations, aims to improve already practiced traditional systems and to check the behavior and usefulness of a new configuration with respect to the X-braced system as a reference model, i.e., how plastically it differs from the X-braced frame. Laboratory tests determined the plastic behavior of these models (with and without bracing) in terms of load-deformation curves. The aim of this study is thus to improve the lateral displacement resistance capacity by using a new configuration of concentric brace member that differs from the conventional concentric brace. Once the experimental and manual results (using the plastic approach) were compared, both were also compared with a nonlinear static (pushover) analysis using ETABS, i.e., how closely both previous results depict the behavior in the pushover curve and up to what limit. Test results show that all three approaches behave similarly up to the yield point, confirming the applicability of elasto-plastic (hinge-by-hinge) analysis for capturing plastic behavior. Finally, the outcomes of the three approaches show that the new configuration chosen for study behaves between the plane frame (without brace, the reference frame) and the conventional X-braced frame. Keywords: elasto-plastic analysis, concentric steel braced frame, pushover analysis, ETABS
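A heavily simplified sketch of the hinge-by-hinge bookkeeping, assuming the analyst supplies, for each stage, the elastic moment at every candidate hinge section due to a unit load on the current (progressively hinged) structure; a real analysis would recompute these from an updated stiffness matrix, and the sign handling here is deliberately naive:

```python
import numpy as np

def hinge_by_hinge(unit_moments_per_stage, Mp):
    """Cumulative load factors at which successive plastic hinges form."""
    Mp = np.asarray(Mp, dtype=float)
    M = np.zeros_like(Mp)     # accumulated moments at candidate sections
    lam = 0.0                 # accumulated load factor
    history = []
    for m_unit in unit_moments_per_stage:
        m = np.abs(np.asarray(m_unit, dtype=float))
        # Load increment that brings the most critical section to Mp.
        dlam = np.where(m > 1e-12, (Mp - np.abs(M)) / m, np.inf)
        step, hinge = dlam.min(), int(dlam.argmin())
        lam += step
        M += step * m
        history.append((hinge, lam))
    return history            # last entry approximates the collapse load factor
```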
Procedia PDF Downloads 230
1099 Leadership Lessons from Female Executives in the South African Oil Industry
Authors: Anthea Carol Nefdt
Abstract:
In this article, observations are drawn from a number of interviews conducted with female executives in the South African oil industry in 2017. Globally, the oil industry represents one of the most male-dominated organisational structures and cultures in the business world. Some of the remarkable women who hold upper management positions have not only emerged from the science and finance spheres (equally gendered organisations) but have also navigated their way through an aggressive, patriarchal atmosphere of rivalry and competition. We examine various myths associated with the industry, such as the cowboy myth, the frontier ideology and the queen bee syndrome directed at female executives. One of the themes to emerge from the interviews was the almost unanimous rejection of the 'glass ceiling' metaphor favoured by some feminists. The women of the oil industry instead affirmed a picture of their rise to leadership positions through a strategic labyrinth of challenges and obstacles in terms of both gender and race. This article aims to share the insights of women leaders in a complex industry through both their reflections and a theoretical feminist lens. The study is located within the South African context and, given the country's historical legacy, an intersectional approach was optimal, allowing issues of race, gender, ethnicity and language to emerge. A qualitative research methodology was employed, with a thematic interpretative analysis used to analyse and interpret the data. This methodology was used precisely because it acknowledges the experiences women have and places these experiences at the centre of the research. Multiple methods of recruiting research participants were utilised: the initial method was snowball sampling, and the second was purposive sampling. In addition, semi-structured interviews gave the participants an opportunity to ask questions, add information and discuss issues or aspects of the research area that interested them. One of the key objectives of the study was to investigate whether there is a difference between the leadership styles of men and women. Findings show that, despite the wealth of literature on the topic, some women do not perceive a significant difference in men's and women's leadership styles; other respondents felt that there were important differences in the experiences of male and female superiors, although they hesitated to generalise from these experiences. Further findings suggest that although the oil industry presents unique challenges to women as a gendered organisation, it also incorporates various progressive initiatives for their advancement. Keywords: petroleum industry, gender, feminism, leadership
Procedia PDF Downloads 166
1098 Experimental Field for the Study of Soil-Atmosphere Interaction in Soft Soils
Authors: Andres Mejia-Ortiz, Catalina Lozada, German R. Santos, Rafael Angulo-Jaramillo, Bernardo Caicedo
Abstract:
The interaction between atmospheric variables and soil properties is a determining factor when evaluating the flow of water through the soil. This interaction directly determines the behavior of the soil and greatly influences the changes that occur in it. Atmospheric variations, such as changes in relative humidity, air temperature, wind velocity and precipitation, are the external variables with the greatest incidence on the changes generated in the subsoil as a consequence of descending and ascending water flow. These environmental variations are of major importance in the study of the soil because the humidity and temperature conditions at the soil surface depend on them; in addition, they control the thickness of the unsaturated zone and the position of the water table with respect to the surface. However, understanding the relationship between the atmosphere and the soil is somewhat complex, mainly because of the difficulty involved in estimating the changes that occur in the soil from climatic changes, since this is a coupled process in which mass and heat transfer act together. In this research, an experimental field was implemented to study in situ the interaction between the atmosphere and the soft soils of the city of Bogota, Colombia. The soil under study consists of a 60 cm layer composed of two silts of similar characteristics at the surface and a deep soft clay deposit located under the silty material. The vegetal layer and organic matter were removed to avoid the evapotranspiration phenomenon. Instrumentation was installed in situ through a field arrangement of measuring devices, such as soil moisture sensors, thermocouples, relative humidity sensors and a wind velocity sensor, among others, which register the variations of both the atmospheric variables and the properties of the soil. With the information collected through field monitoring, water balances were computed using the Hydrus-1D software to determine the flow conditions that developed in the soil during the study. The moisture profile for different periods and time intervals was also determined from the balance supplied by Hydrus-1D; this profile was validated by experimental measurements. As a boundary condition, the actual evaporation rate was included using semi-empirical equations proposed by different authors. In this study, a descending flow governed by the infiltration capacity of the soil was obtained for the rainy periods. During dry periods, on the other hand, an increase in the actual evaporation of the soil induces an upward flow of water, increasing suction due to the decrease in moisture content; cracks also developed, accelerating the evaporation process. This work concerns the study of soil-atmosphere interaction through an experimental field and is a very useful tool, since it allows considering all the factors and parameters of the soil in its natural state together with real values of the different environmental conditions. Keywords: field monitoring, soil-atmosphere, soft soils, soil-water balance
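A minimal daily bucket-model sketch in the spirit of the water balances described above; it is emphatically not the Hydrus-1D computation, and the storage capacity, runoff rule and evaporation reduction factor are illustrative assumptions:

```python
def water_balance(precip, pot_evap, capacity=120.0, storage=60.0):
    """Track soil storage (mm) from daily precipitation and potential evaporation."""
    series = []
    for p, pe in zip(precip, pot_evap):
        ae = pe * min(1.0, storage / capacity)  # actual evaporation drops as soil dries
        storage += p - ae
        runoff = max(0.0, storage - capacity)   # excess above capacity leaves as runoff
        storage = min(max(storage, 0.0), capacity)
        series.append((round(storage, 1), round(ae, 1), round(runoff, 1)))
    return series

# e.g. three wet days followed by three dry days (all values in mm):
print(water_balance([20, 35, 10, 0, 0, 0], [2, 2, 3, 5, 5, 5]))
```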
Procedia PDF Downloads 137
1097 Analysis of Urban Flooding in Wazirabad Catchment of Kabul City with Help of Geo-SWMM
Authors: Fazli Rahim Shinwari, Ulrich Dittmer
Abstract:
Like many megacities around the world, Kabul is facing severe problems due to the rising frequency of urban flooding. Since 2001, Kabul has experienced rapid population growth because of the repatriation of refugees and internal migration. Due to unplanned development, green areas inside the city and hilly areas within and around it have been converted into new housing towns, which has increased runoff. Trenches along the roadside comprise the unplanned drainage network of the city and drain the combined sewer flow. In the rainy season overflow occurs, and after the streets become dry, dust particles contaminate the air, a major cause of air pollution in Kabul city. In this study, a stormwater management model is introduced as a basis for a systematic approach to urban drainage planning in Kabul. For this purpose, Kabul city was delineated into 8 watersheds with the help of a one-meter-resolution LIDAR DEM. A stormwater management model was developed for the Wazirabad catchment using available data and literature values. Due to a lack of long-term meteorological data, the model was only run with hourly rainfall data for a rain event that occurred in April 2016. The rain event from 1st to 3rd April, with a maximum intensity of 3 mm/hr, caused huge flooding in the Wazirabad catchment of Kabul city. The model estimated flooding at several points of the catchment; as actual measurements of flooding were not possible, results were compared with information obtained from local people, Kabul Municipality and the Capital Region Independent Development Authority. The model helped to identify areas where flooding occurred because of insufficient drainage capacity and areas where the main reason for flooding was blockage in the drainage canals. The model was then used for further analysis to find a sustainable solution to the problem. The option of constructing new canals was analyzed, and two new canals were proposed that will reduce the flooding frequency in the Wazirabad catchment of Kabul city. By establishing a methodology to build a stormwater management model from digital data and information, the study fulfilled its primary objective, and a similar methodology can be used for other catchments of Kabul city to prepare emergency and long-term plans for the city's drainage system. Keywords: urban hydrology, stormwater management, modeling, SWMM, GEO-SWMM, GIS, identification of flood vulnerable areas, urban flooding analysis, sustainable urban drainage
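A hedged sketch of how node flooding in such a model can be inspected programmatically with the pyswmm wrapper around SWMM; the input file name is a placeholder, and the study itself worked in Geo-SWMM rather than through this API:

```python
from pyswmm import Simulation, Nodes

peak_flooding = {}
with Simulation("wazirabad.inp") as sim:       # placeholder SWMM input file
    nodes = Nodes(sim)
    for _ in sim:                              # step through the rain event
        for node in nodes:
            if node.flooding > 0.0:            # current overflow rate at the node
                peak_flooding[node.nodeid] = max(
                    peak_flooding.get(node.nodeid, 0.0), node.flooding)

print("Nodes that flooded, with peak overflow rates:", peak_flooding)
```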
Procedia PDF Downloads 153
1096 Teachers' Experience for Improving Fine Motor Skills of Children with Down Syndrome in the Context of Special Education in Southern Province of Sri Lanka
Authors: Sajee A. Gamage, Champa J. Wijesinghe, Patricia Burtner, Ananda R. Wickremasinghe
Abstract:
Background: Teachers working in the context of special education have an enormous responsibility for enhancing the performance skills of children in their classroom settings. Fine motor skills (FMS) are essential functional skills for children to gain independence in activities of daily living. Children with Down Syndrome (DS) are predisposed to specific challenges due to deficits in FMS. This study aimed to determine teachers' experience of improving FMS of children with DS in the context of special education in Southern Province, Sri Lanka. Methodology: A cross-sectional study was conducted among all consenting eligible teachers (n=147) working in the context of special education in government schools of Southern Province of Sri Lanka. A self-administered questionnaire was developed, based on literature and expert opinion, to assess teachers' experience regarding deficits of FMS, limitations of classroom activity performance and barriers to improving FMS of children with DS. Results: Approximately 93% of the teachers were female, with a mean age (±SD) of 43.1 (±10.1) years. Thirty percent of the teachers had training in special education, and 83% had children with DS in their classrooms. Major deficits of FMS reported were deficits in grasping (n=116; 79%), in-hand manipulation (n=103; 70%) and bilateral hand use (n=99; 67.3%). Paperwork (n=70; 47.6%), painting (n=58; 39.5%), scissor work (n=50; 34.0%), pencil use for writing (n=45; 30.6%) and use of tools in the classroom (n=41; 27.9%) were identified as major classroom performance limitations of children with DS. Parental factors (n=67; 45.6%), disease-specific characteristics (n=58; 39.5%) and classroom factors (n=36; 24.5%) were identified as major barriers to improving FMS in the classroom setting. Lack of resources and standard tools, social stigma and late school admission were also identified as barriers to FMS training. Eighty-nine percent of the teachers reported that training fine motor activities in a special education classroom was more successful than working within a normal classroom setting. Conclusion: Major areas of FMS deficits were grasping, in-hand manipulation and bilateral hand use; classroom performance limitations included paperwork, painting and scissor work of children with DS. Teachers recommended regular practice of fine motor activities according to individual need. Further research is required to design a culturally specific FMS assessment tool and intervention methods to improve FMS of children with DS in Sri Lanka. Keywords: classroom activities, Down syndrome, experience, fine motor skills, special education, teachers
Procedia PDF Downloads 153
1095 Dual-Phase High Entropy (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)BxCy Ceramics Produced by Spark Plasma Sintering
Authors: Ana-Carolina Feltrin, Daniel Hedman, Farid Akhtar
Abstract:
High entropy ceramic (HEC) materials are characterized by their compositional disorder, with atoms of different metallic elements occupying the cation position and non-metal elements occupying the anion position. Several studies have focused on the processing and characterization of high entropy carbides and high entropy borides, as these HECs present interesting mechanical and chemical properties, but only a few studies have been published on HECs containing two non-metallic elements in the composition. Dual-phase high entropy (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)BxCy ceramics with different amounts of x and y, (0.25 HfC + 0.25 ZrC + 0.25 VC + 0.25 TiB₂), (0.25 HfC + 0.25 ZrC + 0.25 VB₂ + 0.25 TiB₂) and (0.25 HfC + 0.25 ZrB₂ + 0.25 VB₂ + 0.25 TiB₂), were sintered from boride and carbide precursor powders using SPS at 2000°C with a holding time of 10 min, a uniaxial pressure of 50 MPa and an Ar atmosphere. The sintered specimens formed two HEC phases: a Zr-Hf-rich FCC phase and a Ti-V-rich HCP phase, and both phases contained all the metallic elements at 5-50 at%. Phase quantification analysis of XRD data revealed that the molar amount of the hexagonal phase increased with an increased mole fraction of borides in the starting powders, whereas the cubic FCC phase increased with increased carbide in the starting powders. SPS-consolidated (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)BC₀.₅ and (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)B₁.₅C₀.₂₅ had 94.74% and 88.56% relative density, respectively. (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)B₀.₅C₀.₇₅ presented the highest relative density of 95.99%, with a Vickers hardness of 26.58±1.2 GPa for the boride phase and 18.29±0.8 GPa for the carbide phase, which exceeds the hardness values reported in the literature for high entropy ceramics. The SPS-sintered specimens containing lower boron and higher carbon presented superior properties even though the metallic composition in each phase was similar across the compositions investigated. Dual-phase high entropy (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)BxCy ceramics were thus successfully fabricated as a boride-carbide solid solution, and the amounts of boron and carbon were shown to influence the phase fractions, the hardness of the phases, and the density of the consolidated HECs. The microstructure and phase formation were highly dependent on the amount of non-metallic elements in the composition, and not only on the molar ratio between metals, when producing high entropy ceramics with more than one anion in the sublattice. These findings show the importance of further studies on optimizing the ratio between C and B to further improve the properties of dual-phase high entropy ceramics. Keywords: high-entropy ceramics, borides, carbides, dual-phase
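For reference, hardness values like those above follow from the standard Vickers relation HV = 1.8544·F/d²; the load and diagonal in this hedged example are invented numbers chosen to land near the reported boride-phase hardness:

```python
def vickers_hardness_gpa(load_n: float, diagonal_mm: float) -> float:
    """Vickers hardness in GPa from indent load (N) and mean diagonal (mm)."""
    return 1.8544 * load_n / diagonal_mm ** 2 / 1000.0  # N/mm^2 (MPa) -> GPa

# e.g. a 9.81 N (1 kgf) indent leaving a 26 µm mean diagonal:
print(round(vickers_hardness_gpa(9.81, 0.026), 1), "GPa")  # ~26.9 GPa
```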
Procedia PDF Downloads 172
1094 Effects of Starvation, Glucose Treatment and Metformin on Resistance in Chronic Myeloid Leukemia Cells
Authors: Nehir Nebioglu
Abstract:
Chemotherapy is widely used for the treatment of cancer. Doxorubicin (DOX) is an anti-cancer chemotherapy drug classified as an anthracycline antibiotic; antitumor antibiotics are natural products produced by species of the soil fungus Streptomyces. These drugs act in multiple phases of the cell cycle and are considered cell-cycle specific. Although DOX is a valuable clinical antineoplastic agent, resistance limits its utility, alongside the problem of cardiotoxicity. The drug resistance of cancer cells results from multiple factors, including individual variation, genetic heterogeneity within a tumor, and cellular evolution. The mechanism of resistance is thought to involve, in particular, the transporters ABCB1 (MDR1, Pgp) and ABCC1 (MRP1), among others. Several studies on DOX-resistant cell lines have shown that resistance can be overcome by inhibition of ABCB1, ABCC1 and ABCC2. This study attempts to understand the effects of different glucose concentrations and of starvation on the proliferation of DOX-resistant cancer cell lines. To understand the effect of starvation, K562/Dox and K562 cell lines were treated with 0, 5 nM, 50 nM, 500 nM, 5 uM and 50 uM Dox concentrations under both starvation and normal medium conditions. In addition, to assess the effect of glucose treatment, different concentrations of glucose (0, 1 mM, 5 mM, 25 mM) were applied to Dox-treated (0, 5 nM, 50 nM, 500 nM, 5 uM and 50 uM) K562/Dox and K562 cell lines. All results show a significant decrease in the cell count of K562/Dox when the cells were starved. However, while the proliferation of K562/Dox lines decreases with increasing applied Dox concentration, the starved K562/Dox cells remain at the same proliferation level. The results thus imply that a fraction of the K562/Dox cells gain starvation resistance and remain resistant. Furthermore, for K562/Dox, there is no clear effect of glucose treatment on cell proliferation. In the presence of a moderate level of glucose (5 mM), proliferation increases compared to the other glucose concentrations for each Dox application, but a significant increase in cell proliferation at the moderate glucose level is only observed at a 5 uM Dox concentration; this moderate concentration range could be examined in further studies. At the high glucose concentration (25 mM), cell proliferation levels are lower than with the moderate glucose application; the reason could be that such a high amount of glucose is not absorbable by the cells. Also, at the low glucose concentration, proliferation decreases in an orderly manner with increasing Dox concentration. This situation can be explained by glucose depletion (the Warburg effect) described in the literature. Keywords: drug resistance, cancer cells, chemotherapy, doxorubicin
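A hedged sketch of how proliferation readings across a Dox dilution series like the one above could be fit to a four-parameter Hill curve to estimate an IC50; the viability values below are invented placeholders, not data from this study:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Four-parameter log-logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

conc = np.array([5e-9, 5e-8, 5e-7, 5e-6, 5e-5])       # 5 nM .. 50 uM, in M
viability = np.array([0.98, 0.90, 0.65, 0.35, 0.12])  # placeholder fractions

params, _ = curve_fit(hill, conc, viability, p0=[0.1, 1.0, 1e-6, 1.0], maxfev=10000)
print(f"estimated IC50 ~ {params[2]:.2e} M")
```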
Procedia PDF Downloads 176
1093 Exploring Valproic Acid (VPA) Analogues Interactions with HDAC8 Involved in VPA Mediated Teratogenicity: A Toxicoinformatics Analysis
Authors: Sakshi Piplani, Ajit Kumar
Abstract:
Valproic acid (VPA) is the first synthetic therapeutic agent used to treat epileptic disorders, which affect nearly 1% of the world population. The teratogenicity caused by VPA has prompted the search for next-generation drugs with better efficacy and fewer side effects. Recent studies have identified HDAC8 as a direct target of VPA that causes the teratogenic effect in the foetus. We have employed molecular dynamics (MD) and docking simulations to understand the binding mode of VPA and its analogues onto HDAC8. A total of twenty 3D structures of human HDAC8 isoforms were selected using a BLAST-P search against the PDB. Multiple sequence alignment was carried out using ClustalW, and PDB entry 3F07, having the fewest missing and mutated regions, was selected for the study. The missing loop residues were constructed using MODELLER and the energy was minimized. A set of 216 structural analogues (>90% identity) of VPA was obtained from the PubChem and ZINC databases, and their energies were optimized with ChemSketch software using a 3-D CHARMM-type force field. Four major neurotransmitter-related enzymes involved in anticonvulsant activity (GABAt, SSADH, α-KGDH, GAD) were docked with VPA and its analogues. Out of the 216 analogues, 75 were selected on the basis of lower binding energy and inhibition constant compared to VPA, and were thus predicted to have anticonvulsant activity. The selected hHDAC8 structure was then subjected to MD simulation using the licensed version of YASARA with the AMBER99SB force field. The structure was solvated in a rectangular box of TIP3P water. The simulation was carried out with periodic boundary conditions, and electrostatic interactions were treated with the particle mesh Ewald algorithm. The pH of the system was set to 7.4, the temperature to 323 K and the pressure to 1 atm. Simulation snapshots were stored every 25 ps. The MD simulation was carried out for 20 ns, and a PDB file of the HDAC8 structure was saved every 2 ns. The structures were analysed using CASTp and UCSF Chimera, and the most stabilized structure (20 ns) was used for the docking study. Molecular docking of the 75 selected VPA analogues with PDB-3F07 was performed using AutoDock 4.2.6, with the Lamarckian genetic algorithm used to generate conformations of the docked ligands. The docking study revealed that VPA and its analogues have greater affinity towards the 'hydrophobic active site channel', where their hydrophobic properties allow them to take part in van der Waals interactions with TYR24, HIS42, VAL41, TYR20, SER138 and TRP137, while TRP137 and SER138 showed hydrogen-bonding interactions with the VPA analogues. Fourteen analogues showed better binding affinity than VPA. The admetSAR server was used to predict the ADMET properties of the selected VPA analogues to assess their druggability. On the basis of ADMET screening, nine molecules were selected and are being used for in vivo evaluation in a Danio rerio model. Keywords: HDAC8, docking, molecular dynamics simulation, valproic acid
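A hedged sketch of the analogue-triage step described above (keep analogues whose docked binding energy and inhibition constant both beat VPA's); the file, column names and reference values are assumptions for illustration, as the study worked from AutoDock output logs:

```python
import pandas as pd

VPA_ENERGY_KCAL = -4.0   # placeholder reference values, not the study's numbers
VPA_KI_UM = 100.0

# Assumed CSV with columns: analogue, energy (kcal/mol), ki_um (inhibition constant)
docked = pd.read_csv("docking_results.csv")
hits = docked[(docked["energy"] < VPA_ENERGY_KCAL) & (docked["ki_um"] < VPA_KI_UM)]
print(f"{len(hits)} analogues outperform VPA on both criteria")
```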
Procedia PDF Downloads 255
1092 The Importance of Clinical Pharmacy and Computer Aided Drug Design
Authors: Mario Hanna Louis Hanna
Abstract:
The use of CAD (Computer Aided Design) technology is ubiquitous in the architecture, engineering and construction (AEC) industry. This has led to its inclusion in the curriculum of architecture faculties in Nigeria as an important part of the training module. This article examines the ethical issues involved in implementing CAD content in the architectural training curriculum. Using current literature, the study begins with the advantages of integrating CAD into architectural education and the responsibilities of the various stakeholders in the implementation process. It also examines issues related to the misuse of information technology and the perceived negative effect of CAD use on design creativity. Using a survey technique, information from the architecture department of Chukwuemeka Odumegwu Ojukwu University, Uli was collected to serve as a case study of how the issues raised are being addressed. The article draws conclusions on what ensures successful ethical implementation. Tens of millions of people around the world suffer from hepatitis C, one of the world's deadliest diseases. Interferon (IFN) is a treatment option for patients with hepatitis C, but these treatments have their side effects. Our research targeted developing an oral small-molecule drug that targets hepatitis C virus (HCV) proteins and has fewer side effects. Our current study aims to develop a drug based on a small-molecule antiviral specific to HCV. Drug development using laboratory experiments is not only expensive but also time-consuming. Instead, in this in silico study, we used computational methods to propose a specific antiviral drug for the protein domains found in the hepatitis C virus. The study used homology modeling and ab initio modeling to generate the 3-D structures of the proteins and then identified pockets within them. Suitable ligands for the pockets were developed using the de novo drug design method, with pocket geometry taken into consideration when designing the ligands. From the various ligands generated, a different one for each of the HCV protein domains has been proposed. Keywords: drug design, anti-viral drug, in silico drug design, hepatitis C virus (HCV), CAD (Computer Aided Design), CAD education, education improvement, PLC, control system, management system, communication
Procedia PDF Downloads 31