1330 Female Athlete Triad: How Much Is Known
Authors: Nadine Abuqtaish
Abstract:
Female participation in athletic sports events has increased in recent decades, and eating disorders and menstrual dysfunction among female athletes have been documented since the early 1980s. The term “female athlete triad” was first defined by the Task Force on Women’s Issues of the American College of Sports Medicine (ACSM) in 1992. Menstrual irregularities are prevalent in competitive female athletes, especially during adolescence and early adulthood. Nutritional restriction to maintain a certain physique and lean look is thought to be advantageous in aesthetic sports such as gymnastics and cheerleading, and in weight-sensitive endurance sports such as cycling and marathon running. This stress places female athletes at risk of menstrual irregularities, which can disrupt their cyclical estrogen levels. Estrogen is an important female reproductive hormone that plays a role in maintaining bone mass, and bone mineral density peaks by age 25. Inadequate estrogen due to missed menstrual cycles or amenorrhea has been estimated to cause a yearly loss of 2% of bone mass, increasing the risk of osteoporosis in the postmenopausal phase. This paper is intended to develop a deeper understanding of whether female athletes are being monitored by their official entities or coaches. A qualitative research method, using online search engines and the keywords “females, athletes, triad, amenorrhea, anorexia, osteoporosis,” was used to collect the available primary sources from official public library databases. The latest consensus was published in 2014 by the Female Athlete Triad Coalition, and further research and emphasis on this issue are still lacking.
Keywords: female, athlete, triad, amenorrhea, anorexia, bone loss
Procedia PDF Downloads 63
1329 De-Securitizing Identity: Narrative (In)Consistency in Periods of Transition
Authors: Katerina Antoniou
Abstract:
When examining conflicts around the world, it is evident that the majority of intractable conflicts are steeped in identity. Identity seems to be not only a causal variable for conflict but also a catalytic parameter in the process of reconciliation that follows a ceasefire. This paper focuses on the process of identity securitization that occurs between rival groups of heterogeneous collective identities – ethnic, national, or religious – as well as on the relationship between identity securitization and the ability of the groups involved to reconcile. Are securitized identities obstacles to the process of reconciliation, able to hinder any prospect of peace? If the level to which an identity is securitized is catalytic to a conflict’s discourse and settlement, then which factors act as indicators of identity de-securitization? The level of an in-group’s identity securitization can be estimated through a number of indicators, one of which is narrative. The stories, views, and stances each in-group adopts in relation to its history of conflict and its relation with its rival out-group can clarify whether that in-group feels victimized and threatened or safe and ready to reconcile. Accordingly, this study discusses identity securitization through narrative in relation to intractable conflicts. Are there conflicts around the world that, despite having been identified as intractable, stagnated, or insoluble, show signs of identity de-securitization through narrative? This inquiry uses the case of the Cyprus conflict and its partitioned societies to present official narratives from the two communities and assess whether these narratives have transformed, indicating a less securitized in-group identity for the Greek and Turkish Cypriots.
Specifically, the study compares the official historical overviews presented on each community’s Ministry of Foreign Affairs website and discusses the extent to which the two official narratives present a securitized collective identity. In addition, the study observes whether official stances adopted by the two communities’ leaders have transformed to depict less securitization over time, and evaluates how far the leaders reflect popular opinion through recent opinion polls from each community. Cyprus is currently experiencing renewed optimism for reunification, with the leaders of its two communities engaging in rigorous negotiations and rumors that a reunification referendum could take place as early as 2016. Although the leaders have shown a shift in their rhetoric and have moved away from narratives of victimization, this is not the case for the official narratives used by their respective ministries of foreign affairs. The study’s findings explore whether this narrative inconsistency indicates that Cyprus is transitioning towards reunification, or whether the leaders risk sending a securitized population to the polls to reject a potential reunification. More broadly, this study suggests that where intractable conflicts may be moving towards viable peace, in-group narratives, official narratives in particular, can act as indicators of the extent to which rival entities have managed to reconcile.
Keywords: conflict, identity, narrative, reconciliation
Procedia PDF Downloads 324
1328 A Study of Basic and Reactive Dyes Removal from Synthetic and Industrial Wastewater by Electrocoagulation Process
Authors: Almaz Negash, Dessie Tibebe, Marye Mulugeta, Yezbie Kassa
Abstract:
Large-scale textile industries use large amounts of toxic chemicals that are hazardous to human health and environmental sustainability. In this study, the removal of various dyes from textile industry effluents using the electrocoagulation process was investigated. The studied dyes were Reactive Red 120 (RR-120), Basic Blue 3 (BB-3), and Basic Red 46 (BR-46), which were found in effluent samples collected from three major textile factories in the Amhara region, Ethiopia. For maximum removal, BB-3 required acidic conditions (pH 3), RR-120 basic conditions (pH 11), and BR-46 neutral conditions (pH 7). BB-3 required a longer treatment time (80 min) than BR-46 and RR-120, which required 30 and 40 min, respectively. The best removal efficiencies of 99.5%, 93.5%, and 96.3% were achieved for BR-46, BB-3, and RR-120, respectively, from synthetic wastewater containing 10 mg L⁻¹ of each dye at an applied potential of 10 V. The method was applied to real textile wastewaters, and 73.0 to 99.5% removal of the dyes was achieved, indicating that electrocoagulation can be used as a simple and reliable method for treating real wastewater from textile industries, and as a potentially viable and inexpensive tool for the treatment of textile dyes. Analysis of the electrochemically generated sludge by X-ray diffraction, scanning electron microscopy, and Fourier transform infrared spectroscopy revealed the expected crystalline aluminum oxides, bayerite (Al(OH)₃) and diaspore (AlO(OH)), in the sludge. An amorphous phase was also found in the floc. Textile industry owners should be aware of the impact of effluent discharge on the ecosystem and should use the investigated electrocoagulation method to treat effluents before discharging them into the environment.
Keywords: electrocoagulation, aluminum electrodes, Basic Blue 3, Basic Red 46, Reactive Red 120, textile industry, wastewater
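The removal efficiencies quoted above follow the definition commonly used in dye-treatment studies, 100 × (C₀ − Cₜ)/C₀; a minimal sketch (the function name and sample concentrations are illustrative, not taken from the paper):

```python
# Removal efficiency as commonly defined for dye treatment:
# 100 * (initial concentration - final concentration) / initial concentration.
def removal_efficiency(c0_mg_per_l, ct_mg_per_l):
    """Percentage of dye removed from an initial concentration c0."""
    return 100.0 * (c0_mg_per_l - ct_mg_per_l) / c0_mg_per_l

# e.g. an initial 10 mg/L dye reduced to 0.05 mg/L after treatment:
print(round(removal_efficiency(10.0, 0.05), 1))  # 99.5
```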
Procedia PDF Downloads 53
1327 Experimental Study of Particle Deposition on Leading Edge of Turbine Blade
Authors: Yang Xiao-Jun, Yu Tian-Hao, Hu Ying-Qi
Abstract:
Ingestion of foreign objects during aircraft engine operation, impurities in aviation fuel, and products of incomplete combustion can produce deposits on the surface of the turbine blades. These deposits reduce not only the turbine's operating efficiency but also the life of the turbine blades. Using a small open wind tunnel, deposition on the leading edge of the turbine was simulated in this work, and the effect of film cooling on particulate deposition was investigated. Based on the analysis, the adhesion of molten pollutants reaching the turbine surface was simulated by matching the Stokes number, TSP (a dimensionless number characterizing particle phase transition), and Biot number of the test facility to those of the real engine. The thickness distribution and growth trend of the deposits were observed with a high-power microscope and an infrared camera under different main-flow temperatures, particulate solidification temperatures, and blowing ratios. The experimental results for leading-edge particulate deposition demonstrate that the thickness of the deposition increases with time until a quasi-stable thickness is reached, showing a striking effect of the blowing ratio on the deposition. Under different blowing ratios, there is a large difference in the thickness distribution of the deposition, and the deposition is minimal at a specific blowing ratio. In addition, the main-flow temperature and the solidification temperature of the particulate have a great influence on the deposition.
Keywords: deposition, experiment, film cooling, leading edge, paraffin particles
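The Stokes-number matching mentioned above can be sketched with the textbook definition St = ρₚdₚ²U/(18μL), the ratio of the particle response time to the flow time scale; the numbers below are hypothetical, not the facility's actual conditions:

```python
# Standard Stokes-number definition for a particle in a gas flow (a
# textbook formula; the paper's exact matching procedure is not given here):
# St = rho_p * d_p**2 * U / (18 * mu * L)
def stokes_number(rho_p, d_p, velocity, mu, length):
    """Ratio of particle response time to flow characteristic time."""
    return rho_p * d_p**2 * velocity / (18.0 * mu * length)

# Hypothetical illustration: a rig can match the engine Stokes number by
# adjusting particle size and flow speed, even at lower temperature.
st_engine = stokes_number(rho_p=2500, d_p=5e-6, velocity=300, mu=4e-5, length=0.05)
st_rig = stokes_number(rho_p=900, d_p=1.2e-5, velocity=40, mu=1.8e-5, length=0.03)
```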
Procedia PDF Downloads 146
1326 Self-Organized TiO₂–Nb₂O₅–ZrO₂ Nanotubes on β-Ti Alloy by Anodization
Authors: Muhammad Qadir, Yuncang Li, Cuie Wen
Abstract:
Surface properties such as topography and physicochemistry of metallic implants determine cell behavior. The surface of a titanium (Ti)-based implant can be modified to enhance bioactivity and biocompatibility. In this study, a self-organized titania–niobium pentoxide–zirconia (TiO₂–Nb₂O₅–ZrO₂) nanotubular layer on a β-phase Ti35Zr28Nb alloy was fabricated via electrochemical anodization. Energy-dispersive X-ray spectroscopy (EDX), scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), and water contact angle measurements were used to investigate the nanotube dimensions (i.e., the inner and outer diameters and wall thicknesses), microstructural features, and evolution of the hydrophilic properties. The in vitro biocompatibility of the TiO₂–Nb₂O₅–ZrO₂ nanotubes (NTs) was assessed using osteoblast cells (SaOS2). The influence of anodization parameters on the morphology of the TiO₂–Nb₂O₅–ZrO₂ NTs was studied. The results indicated that the average inner diameter, outer diameter, and wall thickness of the TiO₂–Nb₂O₅–ZrO₂ NTs ranged from 25–70 nm, 45–90 nm, and 5–13 nm, respectively, and were directly influenced by the applied voltage during anodization. The average inner and outer diameters of the NTs increased with increasing applied voltage, and the length of the NTs increased with increasing anodization time and water content of the electrolyte. In addition, the size distribution of the NTs noticeably affected the hydrophilic properties and enhanced the biocompatibility compared with the uncoated substrate. The results of this study could inform the development of nano-scale coatings for a wide range of biomedical applications.
Keywords: titanium alloy, TiO₂–Nb₂O₅–ZrO₂ nanotubes, anodization, surface wettability, biocompatibility
Procedia PDF Downloads 155
1325 Understanding of Malaysian Community Disaster Resilience: Australian Scorecard Adaptation
Authors: Salizar Mohamed Ludin, Mohd Khairul Hasyimi Firdaus, Paul Arbon
Abstract:
Purpose: This paper aims to develop Malaysian government- and community-level critical thinking, planning, and action for improving community disaster resilience by reporting Phase 1, Part 1 of a larger community disaster resilience measurement study on adapting the Torrens Resilience Institute Australian Community Disaster Resilience Scorecard to the Malaysian context. Methodology: Participatory action research encouraged key people involved in managing the six areas most affected by the 2014 flooding of Kelantan in Malaysia’s north-east to participate in discussions about adapting and self-testing the Australian Community Disaster Resilience Scorecard to measure and improve their communities’ disaster resilience. Findings: Communities need to strengthen their disaster resilience through better communication, cross-community cooperation, maximizing opportunities to compare their plans, actions, and reactions with those reported in research publications, and aligning their community disaster management with internationally reported best practice while acknowledging the need to adapt such practice to local contexts. Research implications: There is a need for a Malaysia-wide, simple-to-use, standardized disaster resilience scorecard to improve the quality, quantity, and capability of healthcare and emergency services’ preparedness, and to facilitate urgent reallocation of aid. Value: This study is the first of its kind in Malaysia. The resulting community disaster resilience guideline, based on participants’ feedback about the Kelantan floods and scorecard self-testing, has the potential for further adaptation to suit contexts across Malaysia, as well as demonstrating how the scorecard can be adapted for international use.
Keywords: community disaster resilience, CDR Scorecard, participatory action research, flooding, Malaysia
Procedia PDF Downloads 336
1324 An Interactive Voice Response Storytelling Model for Learning Entrepreneurial Mindsets in Media Dark Zones
Authors: Vineesh Amin, Ananya Agrawal
Abstract:
In a prolonged period of uncertainty and disruption to the previous normal order, non-cognitive skills, especially entrepreneurial mindsets, have become a pillar on which educational models can be reformed to serve the economy. Dreamverse Learning Lab’s IVR-based storytelling program, Call-a-Kahaani, is an evolving experiment that aims to kindle entrepreneurial mindsets in the remotest locations of India in an accessible and engaging manner. At the heart of this experiment is the belief that at every phase of our life’s story, we have a choice that brings us closer to achieving our true potential. This interactive program is designed using real-time storytelling principles to empower learners aged 24 and below to make choices and take decisions as they become more self-aware, practice grit, and try new things through stories, guided activities, and interactions, simply over a phone call. This research paper presents the framework behind an ongoing, scalable, data-oriented, low-tech program to kindle entrepreneurial mindsets in media dark zones, supported by iterative design and prototyping. Within one and a half years of its inception, the program reached 13,700+ unique learners who made 59,000+ calls, accumulating 183,900+ minutes of listening to content pieces of around 3 to 4 minutes each, with the last monitored record (March 2022) showing 34% serious listenership. The paper provides an in-depth account of the technical development, content creation, learning, and assessment frameworks, as well as the mobilization models leveraged to build this end-to-end system.
Keywords: non-cognitive skills, entrepreneurial mindsets, speech interface, remote learning, storytelling
Procedia PDF Downloads 209
1323 Cities Under Pressure: Unraveling Urban Resilience Challenges
Authors: Sherine S. Aly, Fahd A. Hemeida, Mohamed A. Elshamy
Abstract:
In the face of rapid urbanization and the myriad challenges posed by climate change, population growth, and socio-economic disparities, fostering urban resilience has become paramount. This abstract offers an overview of the study of urban resilience challenges, covering its background, methodology, major findings, and concluding insights. The paper unveils a spectrum of challenges encompassing environmental stressors and deep-seated socio-economic issues, such as unequal access to resources and opportunities. Emphasizing their interconnected nature, the study underscores the imperative for holistic and integrated approaches to urban resilience, recognizing the intricate web of factors shaping the urban landscape. Urbanization has witnessed an unprecedented surge, transforming cities into dynamic and complex entities. With this growth, however, comes an array of challenges that threaten the sustainability and resilience of urban environments. This study seeks to unravel these multifaceted urban resilience challenges, exploring their origins and implications for contemporary cities. Cities serve as hubs of economic, social, and cultural activity, attracting diverse populations seeking opportunities and a higher quality of life. However, the urban fabric is increasingly strained by climate-related events, infrastructure vulnerabilities, and social inequalities. Understanding the nuances of these challenges is crucial for developing strategies that enhance urban resilience and ensure the longevity of cities as vibrant and adaptive entities. This paper endeavors to discern strategic guidelines for enhancing urban resilience amid the dynamic challenges posed by rapid urbanization, distilling actionable insights that can guide the formulation of effective strategies to fortify cities against multifaceted pressures. The study employs a multifaceted approach to dissect urban resilience challenges.
A qualitative method was employed, including comprehensive literature reviews and analysis of data on urban vulnerabilities, which provided valuable insight into the lived experience of resilience challenges in diverse urban settings. In conclusion, this study underscores the urgency of addressing urban resilience challenges to ensure the sustained vitality of cities worldwide. The interconnected nature of these challenges necessitates a paradigm shift in urban planning and governance. By adopting holistic strategies that integrate environmental, social, and economic considerations, cities can navigate the complexities of the 21st century. The findings provide a roadmap for policymakers, planners, and communities to collaboratively forge resilient urban futures that withstand the challenges of an ever-evolving urban landscape.
Keywords: resilient principles, risk management, sustainable cities, urban resilience
Procedia PDF Downloads 54
1322 Analysis of the Level of Production Failures by Implementing New Assembly Line
Authors: Joanna Kochanska, Dagmara Gornicka, Anna Burduk
Abstract:
The article examines the process of implementing a new assembly line in a manufacturing enterprise in the household appliances industry. At the initial stages of the project, it was decided that one of its foundations should be the concept of lean management; because of that, eliminating as many errors as possible in the first phases of its operation was emphasized. During the start-up of the line, all production losses were identified and documented, from serious machine failures, through any unplanned downtime, to micro-stops and quality defects. Over the 6-week line start-up period, all errors resulting from problems in various areas were analyzed. These areas included, among others, production, logistics, quality, and organization. The aim of the work was to analyze the occurrence of production failures during the initial phase of starting up the line and to propose a method for determining their critical level during its full functionality. The repeatability of production losses in the various areas and at different levels at this early stage of implementation was examined using statistical process control methods. Based on a Pareto analysis, the weakest points were identified in order to focus improvement actions on them. The next step was to examine the effectiveness of the actions undertaken to reduce the level of recorded losses. Based on the obtained results, a method for determining the critical failure level in the studied areas was proposed. The developed coefficient can be used as an alarm in case of production imbalance caused by increased failure levels in production and production-support processes during the standardized operation of the line.
Keywords: production failures, level of production losses, new production line implementation, assembly line, statistical process control
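The Pareto step described above, identifying the weakest points on which to focus improvement actions, can be sketched as follows (the area names and loss figures are made up for illustration; this is not the study's data):

```python
# Minimal Pareto-analysis sketch: rank areas by recorded losses and flag
# those that together account for ~80% of the cumulative total as the
# focus for improvement actions.
def pareto_focus(losses, threshold=0.8):
    """Return the areas covering `threshold` of total losses, largest first."""
    total = sum(losses.values())
    ranked = sorted(losses.items(), key=lambda kv: kv[1], reverse=True)
    focus, cum = [], 0.0
    for area, loss in ranked:
        focus.append(area)
        cum += loss
        if cum / total >= threshold:
            break
    return focus

# Hypothetical weekly loss counts per area:
losses = {"production": 120, "logistics": 45, "quality": 30, "organization": 5}
print(pareto_focus(losses))  # ['production', 'logistics']
```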
Procedia PDF Downloads 128
1321 Growth Comparison and Intestinal Health in Broilers Fed Scent Leaf Meal (Ocimum gratissimum) and Synthetic Antibiotic
Authors: Adedoyin Akintunde Adedayo, Onilude Abiodun Anthony
Abstract:
The continuous use of synthetic antibiotics in livestock production has led to resistance among microbial pathogens, prompting research into alternative sources. This study compares the growth and intestinal health of broilers fed scent leaf meal (SLM) as an alternative to synthetic antibiotics. The study used a completely randomized design (CRD) with 300 one-week-old Arbor Acres broiler chicks. The chicks were divided into six treatments with five replicates of ten birds each. The feeding trial lasted 49 days, including a one-week acclimatization period. Commercial broiler diets were used. The diets included a negative control (no leaf meal or antibiotics), a positive control (0.10% oxytetracycline), and four diets with different levels of SLM (0.5%, 1.0%, 1.5%, and 2.0%). Supplementation with either oxytetracycline or SLM improved feed intake during the finisher phase. Birds fed SLM at a 1% inclusion level showed significantly (P<0.05) improved average body weight gain (ABWG), a lower feed-to-gain ratio, and a lower cost per kilogram of weight gain than birds on the other diets. The mortality rate (2.0%) was significantly higher in the negative control group. White blood cell levels varied significantly (P<0.05) in birds fed SLM-supplemented diets, and the use of 2% SLM led to an increase in liver weight. However, welfare indices were not compromised.
Keywords: Arbor Acres, phyto-biotic, synthetic antibiotic, white blood cell, liver weight
Procedia PDF Downloads 74
1320 Urban Planning Patterns after (COVID-19): An Assessment toward Resiliency
Authors: Mohammed AL-Hasani
Abstract:
The COVID-19 pandemic altered daily habits and affected the functional performance of cities, leaving remarkable impacts on many metropolises worldwide. It has become evident that greater densification in a city brings greater threats, challenging an approach long advocated for achieving sustainable development. The main goal in achieving urban resilience, especially under risk, is a planning system able to resist, absorb, accommodate, and recover from impacts. Cities such as London, Wuhan, and New York adopted different planning approaches and varied in their reactions to safeguard against the impacts of the pandemic. Cities globally vary in pattern: the radiant pattern predicted by Le Corbusier, the multiple urban centers of Frank Lloyd Wright’s Broadacre City, the linear growth or gridiron expansion common in Doxiadis’s work, the compact pattern, and many other hygienic patterns. These urban patterns shape the spatial distribution and identify both open and natural spaces along with gentrified and gentrifying areas. The crisis drew attention to the need to reassess many planning approaches and examine existing urban patterns, focusing on continuity and resiliency in managing crises amid rapid transformation and the power of market forces. Accordingly, this paper hypothesizes that urban planning patterns determine how cities react in assuring quarantine for their inhabitants and in the performance of public services, and that they need to be updated by implementing an innovative urban management system and adopting more resilient patterns in prospective urban planning approaches.
This paper investigates the adaptivity and resiliency of variant urban planning patterns in selected cities worldwide that were affected by COVID-19, and their role in applying certain management strategies to control the pandemic's spread, identifying the main potentials that should be included in prospective planning approaches. The examination encompasses the spatial arrangement, block definition, plot arrangement, and urban space typologies. This paper aims to investigate urban patterns in order to deliberate the debate between densification, as one of the more sustainable planning approaches, and the tendency toward disaggregation that followed the pandemic, restructuring and managing its application according to an assessment of spatial distribution and urban patterns. The biggest long-term threat to dense cities has proved to be the shift to online working and telecommuting, creating a mixture of cyber and urban spaces to remobilize the city. Reassessing spatial design and growth, open spaces, urban population density, and public awareness are the main solutions that should be carried out to face the outbreak in our current cities, which should be managed from the global to the tertiary level; these measures could develop criteria for designing prospective cities.
Keywords: COVID-19, densification, resiliency, urban patterns
Procedia PDF Downloads 130
1319 Bi-Liquid Free Surface Flow Simulation of Liquid Atomization for Bi-Propellant Thrusters
Authors: Junya Kouwa, Shinsuke Matsuno, Chihiro Inoue, Takehiro Himeno, Toshinori Watanabe
Abstract:
Bi-propellant thrusters use impinging-jet atomization to atomize liquid fuel and oxidizer. The atomized propellants are mixed and combusted by auto-ignition. Simulating the primary atomization phenomenon is therefore important for predicting thruster performance; in particular, the local mixture ratio can be used as an indicator of thrust performance, so it is useful to evaluate it from numerical simulations. In this research, we propose a numerical method for treating two liquids and their mixture, and implement it in CIP-LSM, a two-phase flow solver that uses the level-set and MARS methods for interface tracking and can predict the local mixture ratio distribution downstream of an impingement point. A new parameter, beta, defined as the volume fraction of one liquid in the mixed liquid within a cell, is introduced, and the solver calculates the advection of beta and the inflow and outflow flux of beta for each cell. To validate the solver, we conducted a simple experiment and the corresponding simulation. The results show that the solver correctly predicts the penetration length of a liquid jet, confirming that it can simulate the mixing of liquids. We then apply the solver to the numerical simulation of impinging-jet atomization. The inclination angle of the fan after impingement in the bi-liquid condition agrees reasonably with the theoretical value, and the mixing of the liquids is reproduced. Furthermore, the simulation results clarify that the injection conditions drastically affect the atomization process and the local mixture ratio distribution downstream.
Keywords: bi-propellant thrusters, CIP-LSM, free-surface flow simulation, impinging jet atomization
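The per-cell advection of the volume fraction beta described above can be sketched as a minimal one-dimensional upwind scheme (an illustrative simplification; CIP-LSM itself is a three-dimensional level-set/MARS solver, and this is not its implementation):

```python
# Minimal 1D sketch of advecting a liquid volume fraction (beta) with a
# first-order upwind finite-volume scheme, mirroring the idea of
# accounting for the inflow and outflow flux of beta for each cell.
def advect_beta(beta, u, dx, dt):
    """Advance beta one time step for a uniform velocity u > 0."""
    new = beta[:]
    for i in range(len(beta)):
        inflow = u * beta[i - 1] if i > 0 else 0.0  # flux entering cell i
        outflow = u * beta[i]                       # flux leaving cell i
        new[i] = beta[i] + dt / dx * (inflow - outflow)
    return new

# Liquid A initially occupies the two left cells; beta is transported
# downstream while staying bounded in [0, 1] (CFL = u*dt/dx = 0.5).
beta = [1.0, 1.0, 0.0, 0.0, 0.0]
for _ in range(4):
    beta = advect_beta(beta, u=1.0, dx=1.0, dt=0.5)
```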
Procedia PDF Downloads 279
1318 Quantifying Multivariate Spatiotemporal Dynamics of Malaria Risk Using Graph-Based Optimization in Southern Ethiopia
Authors: Yonas Shuke Kitawa
Abstract:
Background: Although malaria incidence has fallen sharply over the past few years, the rate of decline varies by district, time, and malaria type, and malaria remains a major public health threat in various districts of Ethiopia. Consequently, the present study aims to develop a predictive model that helps identify the spatio-temporal variation in malaria risk by multiple Plasmodium species. Methods: We propose a multivariate spatio-temporal Bayesian model to obtain a more coherent picture of the temporally varying spatial variation in disease risk. The spatial autocorrelation in such a data set is typically modeled by a set of random effects assigned a conditional autoregressive prior distribution. However, the autocorrelation considered in such cases depends on a binary neighborhood matrix specified through the border-sharing rule. Here, we propose a graph-based optimization algorithm for estimating a neighborhood matrix that better represents the spatial correlation, treating the areal units as the vertices of a graph and the neighbor relations as its edges. We used aggregated malaria counts in southern Ethiopia from August 2013 to May 2019. Results: We found that precipitation, temperature, and humidity are positively associated with malaria risk in the area, while enhanced vegetation index, nighttime light (NTL), and distance from coastal areas are negatively associated. Moreover, nonlinear relationships were observed between malaria incidence and precipitation, temperature, and NTL. Additionally, lagged effects of temperature and humidity have a significant effect on malaria risk for either species. A more elevated risk of P. falciparum was observed following the rainy season, and unstable transmission of P. vivax was observed in the area. Finally, P. vivax risk is less sensitive to environmental factors than that of P. falciparum.
Conclusion: Improved inference was gained by employing the proposed approach in comparison with the commonly used border-sharing rule. Additionally, different covariates were identified, including delayed effects, and elevated risks of both species were observed in districts in the central and western regions. As malaria transmission operates in a spatially continuous manner, a spatially continuous model should be employed when computationally feasible.
Keywords: disease mapping, MSTCAR, graph-based optimization algorithm, P. falciparum, P. vivax, weight matrix
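The graph view described in the Methods, areal units as vertices and neighbor relations as edges, can be illustrated by building the binary neighborhood matrix W from an edge list (a toy four-district example; the edge choices are hypothetical and this is not the study's optimization algorithm):

```python
# Illustrative sketch: districts as graph vertices, neighbor relations as
# edges, and the resulting symmetric binary neighborhood matrix W of the
# kind used by CAR-type spatial priors.
def neighborhood_matrix(n_units, edges):
    """Build a symmetric binary neighborhood matrix from an edge list."""
    W = [[0] * n_units for _ in range(n_units)]
    for i, j in edges:
        W[i][j] = 1
        W[j][i] = 1
    return W

# Hypothetical 4-district example: border sharing gives a simple chain,
# while a data-driven graph might add an extra edge such as (0, 2).
border_edges = [(0, 1), (1, 2), (2, 3)]
optimized_edges = border_edges + [(0, 2)]
W = neighborhood_matrix(4, optimized_edges)
```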
Procedia PDF Downloads 78
1317 Reconceptualising Faculty Teaching Competence: The Role of Agency during the Pandemic
Authors: Ida Fatimawati Adi Badiozaman, Augustus Raymond Segar
Abstract:
The Covid-19 pandemic transformed teaching contexts at an unprecedented level. Although studies have focused mainly on its impact on students, little is known about how emergency online teaching affects faculty members in higher education. Given that the pandemic robbed teachers of opportunities for adequate preparation, it is vital to understand how teaching competencies were perceived in the crisis-response transition to online teaching and learning (OTL). Through a mixed-methods design, the study first explores, through a survey, how academics perceive their readiness for OTL and which competencies were perceived to be central. Emerging trends from the quantitative data of 330 academics (from three public and three private higher learning institutions) led to the formulation of interview guides for the subsequent qualitative phase. The authors use critical sensemaking (CSM) to analyse interviews with twenty-two teachers (n = 22) from three public and three private institutions, toward understanding the interconnected layers of influence they draw on as they make sense of their teaching competence. The sensemaking process reframed competence and readiness in that agentic competency emerged as crucial in shaping resilience and adaptability during the transition to OTL. The findings also highlight professional learning as critical to teacher competence: course design, communication, time management, technological competence, and identity (re)construction. The findings point to opportunities for strategic orientation to change during a crisis. Implications for pedagogy and policy are discussed.
Keywords: online teaching, pedagogical competence, agentic competence, agency, technological competence
Procedia PDF Downloads 811316 The Sustained Utility of Japan's Human Security Policy
Authors: Maria Thaemar Tana
Abstract:
The paper examines the policy and practice of Japan’s human security. Specifically, it asks: how does Japan’s shift towards a more proactive defence posture affect the place of human security in its foreign policy agenda? Corollary to this, how is Japan sustaining its human security policy? The objective of this research is to understand how Japan, chiefly through the Ministry of Foreign Affairs (MOFA) and the Japan International Cooperation Agency (JICA), sustains the concept of human security as a policy framework. In addition, the paper aims to show how and why Japan continues to include the concept in its overall foreign policy agenda. In light of recent developments in Japan’s security policy, which essentially result from the changing security environment, human security appears to be gradually losing relevance. The paper, however, argues that despite the strategic challenges Japan has faced and is facing, as well as the apparent decline of its economic diplomacy, human security remains an area of critical importance for Japanese foreign policy. In fact, as Japan becomes more proactive in its international affairs, the strategic value of human security also increases. Human security was initially envisioned to help Japan compensate for its weaknesses in the areas of traditional security, but as Japan moves closer to a more activist foreign policy, the soft policy of human security complements its hard security policies. Using the framework of neoclassical realism (NCR), the paper recognizes that policy-making is essentially a convergence of incentives and constraints at the international and domestic levels. The theory posits that there is no perfect 'transmission belt' linking material power on the one hand, and actual foreign policy on the other. 
State behavior is influenced by both international- and domestic-level variables, but while systemic pressures and incentives determine the general direction of foreign policy, they are not strong enough to determine the exact details of state conduct. Internal factors such as leaders’ perceptions, domestic institutions, and domestic norms serve as intervening variables between the international system and foreign policy. Applied to this study, Japan’s sustained utilization of human security as a foreign policy instrument (dependent variable) is essentially a result of systemic pressures acting indirectly (independent variables) and domestic processes acting directly (intervening variables). Two cases of Japan’s human security practice in two regions are examined in two time periods: Iraq in the Middle East (2001-2010) and South Sudan in Africa (2011-2017). The cases show that despite the different motives behind Japan’s decision to participate in these international peacekeeping and peace-building operations, human security continues to be incorporated in both rhetoric and practice, thus demonstrating that it was and remains an important diplomatic tool. Different variables at the international and domestic levels will be examined to understand how the interaction among them results in changes and continuities in Japan’s human security policy.Keywords: human security, foreign policy, neoclassical realism, peace-building
Procedia PDF Downloads 1331315 Analytical Study of the Structural Response to Near-Field Earthquakes
Authors: Isidro Perez, Maryam Nazari
Abstract:
Numerous earthquakes across the world have led to catastrophic damage and collapse of structures (e.g., the 1971 San Fernando, 1995 Kobe (Japan), and 2010 Chile earthquakes). Engineers are constantly studying methods to moderate the effect this phenomenon has on structures to further reduce damage and costs and, ultimately, to provide life safety to occupants. However, there are regions where structures, cities, or water reservoirs are built near fault lines. When an earthquake occurs near the fault lines, it is categorized as a near-field earthquake; by contrast, a far-field earthquake occurs when the region is further away from the seismic source. A near-field earthquake generally has a higher initial peak resulting in a larger seismic response when compared to a far-field earthquake ground motion. These larger responses may result in serious structural damage, posing a high risk to public safety. Unfortunately, the response of structures subjected to near-field records is not properly reflected in current building design specifications. For example, in ASCE 7-10 the design response spectrum is mostly based on far-field design-level earthquakes. This may result in catastrophic damage to structures that are not properly designed for near-field earthquakes. This research investigates the effect that near-field earthquakes have on the response of structures. To examine this topic, a structure was designed following the current seismic building design specifications, e.g., ASCE 7-10 and ACI 318-14, and analytically modeled using the SAP2000 software. Next, utilizing the FEMA P695 report, several near-field and far-field earthquakes were selected, and the near-field earthquake records were scaled to represent the design-level ground motions. The prototype structural model created in SAP2000 was then subjected to the scaled ground motions. 
A linear time-history analysis and a pushover analysis were conducted in SAP2000 to evaluate the structural seismic responses. On average, the structure experienced an 8% and 1% increase in story drift and absolute acceleration, respectively, when subjected to the near-field earthquake ground motions. The pushover analysis was run to identify and properly define hinge formation in the structure for the nonlinear time-history analysis. A near-field ground motion is characterized by a high-energy pulse, making it unique among earthquake ground motions. Therefore, pulse extraction methods were used in this research to estimate the maximum response of structures subjected to near-field motions. The results will be utilized in the generation of a design spectrum for the estimation of design forces for buildings subjected to NF ground motions.Keywords: near-field, pulse, pushover, time-history
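The linear time-history step above can be illustrated in miniature. The sketch below is not the authors' SAP2000 model: it integrates a single-degree-of-freedom system with the standard Newmark average-acceleration method, with all masses, stiffnesses, and loads hypothetical, and checks the classic result that a suddenly applied load produces roughly twice the static displacement.

```python
import numpy as np

def newmark_sdof(m, c, k, p, dt, beta=0.25, gamma=0.5):
    """Linear time-history response of an SDOF system
    (incremental Newmark average-acceleration method)."""
    n = len(p)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = (p[0] - c * v[0] - k * u[0]) / m
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    for i in range(n - 1):
        dp = (p[i+1] - p[i]
              + (m/(beta*dt) + gamma*c/beta) * v[i]
              + (m/(2*beta) + dt*(gamma/(2*beta) - 1)*c) * a[i])
        du = dp / k_eff
        dv = gamma/(beta*dt)*du - gamma/beta*v[i] + dt*(1 - gamma/(2*beta))*a[i]
        da = du/(beta*dt**2) - v[i]/(beta*dt) - a[i]/(2*beta)
        u[i+1], v[i+1], a[i+1] = u[i] + du, v[i] + dv, a[i] + da
    return u, v, a

# Undamped SDOF under a suddenly applied constant load:
# theory predicts a peak displacement of twice the static value.
m, k = 1.0, 400.0           # hypothetical mass and stiffness (T ~ 0.31 s)
p = np.full(2000, 10.0)     # step force, 4 s at dt = 0.002 s
u, _, _ = newmark_sdof(m, 0.0, k, p, dt=0.002)
print(u.max() / (10.0 / k))  # ~ 2.0
```

In a real analysis the force vector would be replaced by scaled ground-motion accelerations and the model by the full structural stiffness matrix; the integration scheme is the same.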
Procedia PDF Downloads 1461314 De-Densifying Congested Cores of Cities and Their Emerging Design Opportunities
Authors: Faith Abdul Rasak Asharaf
Abstract:
Every city has a threshold, known as its urban carrying capacity, based on which it can withstand a particular density of people, above which the city might need to resort to measures like expanding its boundaries or growing vertically. As a result, the number of squatter communities is growing, as is the claustrophobic feeling of being confined inside a "concrete jungle." The expansion of suburbs, commercial areas, and industrial real estate in the areas surrounding medium-sized cities has resulted in changes to their landscapes and urban forms, as well as a systematic shift in their role in the urban hierarchy when functional endowment and connections to other territories are considered. The urban carrying capacity idea provides crucial guidance for city administrators and planners in better managing, designing, planning, constructing, and distributing urban resources to satisfy the huge demands of an ever-growing urban population. The ecological footprint is one criterion of urban carrying capacity: the amount of land required to provide humanity with renewable resources and absorb its waste. However, as each piece of land has its unique carrying capacity, including ecological, social, and economic considerations, these metropolitan areas begin to reach a saturation point over time. Various city models have been tried throughout the years to meet the increasing urban population density by moving the zones of work, life, and leisure to achieve maximum sustainable growth. The current scenario is that of the vertical city and compact city concepts, in which the maximum density of people is fitted into a definite area using efficient land use and a variety of other strategies, but this has proven to be a very unsustainable method of growth, as evidenced by the COVID-19 period. 
Due to a shortage of housing and basic infrastructure, densely populated cities gave rise to massive squatter communities, unable to accommodate the overflowing migrants. To achieve optimum carrying capacity, planning measures such as the polycentric city and diffuse city concepts can be implemented. These relieve the congested city core by relocating certain sectors of the town to the city periphery, creating new spaces for design in terms of public space, transportation, and housing, which is a major concern in the current scenario. The study aims to suggest design options and solutions, in terms of placemaking, for better urban quality and urban life for citizens once city centres have been de-densified based on urban carrying capacity and ecological footprint. Kochi is taken as an apt example of a highly densified city core, with a focus on Edappally, an agglomeration of many urban factors.Keywords: urban carrying capacity, urbanization, urban sprawl, ecological footprint
Procedia PDF Downloads 791313 Water-Repellent Coating Based on Thermoplastic Polyurethane, Silica Nanoparticles and Graphene Nanoplatelets
Authors: S. Naderizadeh, A. Athanassiou, I. S. Bayer
Abstract:
This work describes a layer-by-layer spraying method to produce a non-wetting coating based on thermoplastic polyurethane (TPU) and silica nanoparticles (Si-NPs). The main purpose of this work was to transform a hydrophilic polymer into a superhydrophobic coating. The contact angle of pure TPU was measured at about 77˚ ± 2, and water droplets did not roll away upon tilting, even at 90°. After applying a layer of Si-NPs on top of this, the contact angle not only increased to 165˚ ± 2, but water droplets also rolled away at tilt angles below 5˚. The most important restriction in this study was the weak interfacial adhesion between the polymer and the nanoparticles, which degraded the durability of the coatings. To overcome this problem, we used a very thin layer of graphene nanoplatelets (GNPs) as an interlayer between the TPU and Si-NPs layers, followed by thermal treatment at 150˚C. The samples' morphology and topography were characterized by scanning electron microscopy (SEM), EDX analysis and atomic force microscopy (AFM). It was observed that Si-NPs embedded into the polymer phase in the presence of the GNPs layer, probably because of the high surface area and considerable thermal conductivity of the graphene platelets. The contact angle for the sample containing graphene decreased slightly with respect to the coating without graphene, reaching 156.4˚ ± 2, due to the reduction in surface roughness. The durability of the coatings against abrasion was evaluated by the Taber® abrasion test, and it was observed that the superhydrophobicity of the coatings persisted longer in the presence of the GNPs layer. Owing to the simple fabrication method and good durability, this coating can be used as a durable superhydrophobic coating for metals and can be produced at large scale.Keywords: graphene, silica nanoparticles, superhydrophobicity, thermoplastic polyurethane
Procedia PDF Downloads 1861312 Effect of Inclusion of Moringa oleifera Leaf on Physiological Responses of Broiler Chickens at Finisher Phase during Hot-Dry Season
Authors: Oyegunle Emmanuel Oke, A. O. Onabajo, M. O. Abioja, F. O. Sorungbe, D. E. Oyetunji, J. A. Abiona, A. O. Ladokun, O. M. Onagbesan
Abstract:
An experiment was conducted to determine the effect of different dietary inclusion levels of Moringa oleifera leaf powder (MOLP) on growth and physiological responses of broiler chickens during the hot-dry season in Nigeria. Two hundred and forty (240) day-old commercial broiler chicks were randomly allotted to four dietary treatments with four replicates each; each replicate had 15 birds. The inclusion levels were 0g (control group), 4g, 8g and 12g/kg feed. The experiment lasted eight weeks. The results revealed that the initial body weight was significantly (P < 0.05) higher in birds fed the 12g/kg diet than in those fed 0, 4 and 8g MOLP; the birds fed the 0, 4 and 8g/kg diets had similar weights. The final body weight was significantly (P < 0.05) higher in the birds fed 12g MOLP than in those fed 0, 4 and 8g MOLP. The final weights were similar in the birds fed the 4 and 8g/kg diets but higher (P < 0.05) than those of the birds in the control group. The body weight gain was similar in birds fed 0 and 4g MOLP but significantly higher (P < 0.05) than that of the birds on the 12g/kg diet. There were no significant differences (P > 0.05) in feed intake. The serum albumin of the birds fed the 12g MOLP/kg diet (48.85 g/L) was significantly (P < 0.05) higher than the mean values of those fed the control (0g) and 8g MOLP/kg diets (36.05 and 37.10 g/L, respectively). Birds fed 12g MOLP/kg feed recorded the lowest triglyceride level (122.75 g/L), significantly (P < 0.05) lower than those of the birds fed the 0 and 4g/kg MOLP diets. Serum corticosterone decreased with increasing MOLP inclusion level, with the birds fed 12g MOLP having the lowest value. This study has shown that MOLP may contain potent antioxidants capable of ameliorating the effects of heat stress in broiler chickens, with 12g/kg being the most effective inclusion level.Keywords: physiology, performance, heat stress, anti-oxidant
Procedia PDF Downloads 3511311 Transformer Fault Diagnostic Predicting Model Using Support Vector Machine with Gradient Decent Optimization
Authors: R. O. Osaseri, A. R. Usiobaifo
Abstract:
The power transformer, which is responsible for voltage transformation, is of great relevance in the power system, and oil-immersed transformers are widely used all over the world. Prompt and proper maintenance of the transformer is of utmost importance. The dissolved gas content in power transformer oil is of enormous importance in detecting incipient faults of the transformer. There is a need for accurate prediction of incipient faults in transformer oil in order to facilitate prompt maintenance, reduce cost, and minimize error. Fault prediction and diagnosis have been the focus of many researchers, and many previous works have reported the use of artificial intelligence to predict incipient transformer faults. In this study, a machine learning technique using gradient descent algorithms and a Support Vector Machine (SVM) was employed to predict incipient transformer faults. The method focuses on creating a system that improves its performance based on previous results and historical data. The system design approach comprises two phases: a training phase and a testing phase. The gradient descent algorithm is trained on a training dataset, while the learned algorithm is applied to a set of new data. These two datasets are used to prove the accuracy of the proposed model. In this study, a transformer fault diagnostic model based on a Support Vector Machine (SVM) and gradient descent algorithms has been presented, with satisfactory diagnostic capability and a higher percentage of correct predictions of incipient transformer faults than existing diagnostic methods.Keywords: diagnostic model, gradient descent, machine learning, support vector machine (SVM), transformer fault
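The training phase described above can be sketched with a toy example. The code below is not the authors' model: it trains a linear SVM by subgradient descent on the regularised hinge loss, and the "dissolved gas" features are synthetic values invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dissolved-gas features (e.g., H2, CH4, C2H2 levels) for
# illustration only -- not real transformer data.
X_fault = rng.normal(loc=2.0, scale=0.6, size=(50, 3))   # label +1
X_normal = rng.normal(loc=0.0, scale=0.6, size=(50, 3))  # label -1
X = np.vstack([X_fault, X_normal])
y = np.array([1] * 50 + [-1] * 50)

def train_linear_svm(X, y, lam=0.01, lr=0.05, epochs=200):
    """Minimise the regularised hinge loss with subgradient descent."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                    # samples violating the margin
        grad_w = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy: {acc:.2f}")
```

A real diagnostic model would use measured dissolved-gas-analysis data for training and a held-out test set, as the abstract's two-phase design describes.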
Procedia PDF Downloads 3221310 The Use of Emerging Technologies in Higher Education Institutions: A Case of Nelson Mandela University, South Africa
Authors: Ayanda P. Deliwe, Storm B. Watson
Abstract:
The COVID-19 pandemic has disrupted the established practices of higher education institutions (HEIs). Most higher education institutions worldwide had to shift from traditional face-to-face to online learning. The online environment and new online tools are disrupting the way in which higher education is presented. Furthermore, the structures of higher education institutions have been impacted by rapid advancements in information and communication technologies. Emerging technologies should not be viewed in a negative light because, as opposed to the traditional curriculum that worked to create productive and efficient researchers, emerging technologies encourage creativity and innovation. Therefore, using technology together with traditional means will enhance teaching and learning. Emerging technologies in higher education not only change the experience of students, lecturers, and the content but are also influencing the attraction and retention of students. Higher education institutions are under immense pressure because they are not only competing locally and nationally; emerging technologies also expand the competition internationally. Emerging technologies have eliminated border barriers, allowing students to study in the country of their choice regardless of where they are in the world. Technology is finding its way into the lecture room day by day, and academics need to utilise the technology at their disposal if they want to get through to their students. Academics are now competing for students' attention with social media platforms such as WhatsApp, Snapchat, Instagram, Facebook, TikTok, and others. This poses a significant challenge to higher education institutions. It is, therefore, critical to pay attention to emerging technologies to see how they can be incorporated into the classroom to improve educational quality while remaining relevant to the world of work. 
This study aims to understand how emerging technologies have been utilised at Nelson Mandela University in presenting teaching and learning activities since April 2020. The primary objective is to analyse how academics are incorporating emerging technologies into their teaching and learning activities. This objective was pursued by conducting a literature review clarifying and conceptualising the emerging technologies utilised by higher education institutions and reviewing and analysing their use; it will be further investigated through an empirical analysis of the use of emerging technologies at Nelson Mandela University. Findings from the literature review revealed that emerging technology is impacting several key areas in higher education institutions, such as the attraction and retention of students, the enhancement of teaching and learning, increased global competition, the elimination of border barriers, and the highlighting of the digital divide. The literature review further identified learning management systems, open educational resources, learning analytics, and artificial intelligence as the most prevalent emerging technologies used in higher education institutions. These technologies will be empirically analysed to identify how they are being utilised at Nelson Mandela University.Keywords: artificial intelligence, emerging technologies, learning analytics, learner management systems, open educational resources
Procedia PDF Downloads 691309 Effect of Gas Boundary Layer on the Stability of a Radially Expanding Liquid Sheet
Authors: Soumya Kedia, Puja Agarwala, Mahesh Tirumkudulu
Abstract:
Linear stability analysis is performed for a radially expanding liquid sheet in the presence of a gas medium. A liquid sheet can break up because of aerodynamic effects as well as its thinning. However, these two effects are usually studied separately, as the combined formulation becomes complicated and difficult to solve. The present work combines the aerodynamic and thinning effects, ignoring the non-linearity in the system. This is done by taking into account the formation of the gas boundary layer while neglecting viscosity in the liquid phase. Axisymmetric flow is assumed for simplicity. Base-state analysis results in a Blasius-type system, which is solved numerically. Perturbation theory is then applied to study the stability of the liquid sheet, where the gas-liquid interface is subjected to small deformations. The linear model derived here can be applied to investigate the instability for sinuous as well as varicose modes, where the former represents displacement of the centerline of the sheet and the latter represents modulation of the sheet thickness. Temporal instability analysis is performed for sinuous modes, which are significantly more unstable than varicose modes, at a fixed radial distance, implying local stability analysis. The growth rates, measured for fixed wavenumbers, predicted by the present model are significantly lower than those obtained from the inviscid Kelvin-Helmholtz instability and compare better with experimental results. Thus, the present theory gives better insight into the stability of a thin liquid sheet.Keywords: boundary layer, gas-liquid interface, linear stability, thin liquid sheet
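The numerical solution of a Blasius-type base state can be sketched with the classic Blasius equation f''' + (1/2) f f'' = 0, with f(0) = f'(0) = 0 and f'(∞) = 1; the sheet problem in the abstract yields an analogous, not identical, system. A standard shooting approach integrates with a guessed wall value f''(0) and bisects until the far-field condition is met:

```python
import numpy as np

def rhs(y):
    """Blasius equation written as a first-order system in (f, f', f'')."""
    f, fp, fpp = y
    return np.array([fp, fpp, -0.5 * f * fpp])

def shoot(s, eta_max=10.0, h=0.01):
    """Integrate with guessed f''(0) = s by classic RK4; return f'(eta_max)."""
    y = np.array([0.0, 0.0, s])
    for _ in range(int(eta_max / h)):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2)
        k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[1]

# Bisection on the wall value f''(0) so that f' -> 1 far from the wall.
lo, hi = 0.1, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(mid) > 1.0:
        hi = mid
    else:
        lo = mid
print(f"f''(0) = {mid:.4f}")   # known Blasius value, ~0.3321
```

The bracketing interval [0.1, 1.0] is a convenient assumption; f'(η_max) increases monotonically with the guessed wall shear, so bisection converges to the unique solution.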
Procedia PDF Downloads 2291308 Targeting Glucocorticoid Receptor Eliminate Dormant Chemoresistant Cancer Stem Cells in Glioblastoma
Authors: Aoxue Yang, Weili Tian, Haikun Liu
Abstract:
Brain tumor stem cells (BTSCs) are resistant to therapy and give rise to recurrent tumors. These rare and elusive cells are likely to disseminate during cancer progression, and some may enter dormancy, remaining viable but not proliferating. The identification of dormant BTSCs is thus necessary to design effective therapies for glioblastoma (GBM) patients. Glucocorticoids (GCs) are used to treat GBM-associated edema. However, glucocorticoids participate in the physiological response to psychosocial stress, which is linked to poor cancer prognosis. This raises concern that glucocorticoids affect the tumor and BTSCs. Identifying markers specifically expressed by BTSCs may enable targeted therapies that spare their normal tissue-resident counterparts. By ribosome profiling analysis, we identified that glycerol-3-phosphate dehydrogenase 1 (GPD1) is expressed by dormant BTSCs but not by neural stem cells (NSCs). Through different stress-induction experiments in vitro, we found that only dexamethasone (DEXA) significantly increases the expression of GPD1 in NSCs. Conversely, mifepristone (MIFE), a glucocorticoid receptor antagonist, decreased GPD1 protein levels and weakened proliferation and stemness in BTSCs. Furthermore, DEXA induced GPD1 expression in tumor-bearing mouse brains and shortened animal survival, whereas MIFE had the opposite effect, prolonging the mice's lifespan. Knocking out GR in NSCs blocked the upregulation of GPD1 induced by DEXA, and ChIP-Seq identified the specific sequences on the GPD1 promoter bound by GR, which enhance GPD1 transcription. Moreover, GR and GPD1 are highly co-stained on GBM sections obtained from patients and mice. All these findings confirm that GR regulates GPD1 and that loss of GPD1 impairs multiple pathways important for BTSC maintenance. GPD1 is also a critical enzyme regulating glycolysis and lipid synthesis. 
We observed that DEXA and MIFE can change the metabolic profiles of BTSCs by regulating GPD1, shifting the transition into and out of cell dormancy. Our transcriptome and lipidomics analyses demonstrated that cell-cycle signaling and phosphoglyceride synthesis pathways contributed substantially to the effects of the GPD1 inhibition caused by MIFE. In conclusion, our findings raise concern that treatment of GBM with GCs may compromise the efficacy of chemotherapy and contribute to BTSC dormancy. Inhibition of GR can dramatically reduce GPD1 and extend the survival of GBM-bearing mice. The molecular link between GPD1 and GR may offer an attractive therapeutic target for glioblastoma.Keywords: cancer stem cell, dormancy, glioblastoma, glycerol-3-phosphate dehydrogenase 1, glucocorticoid receptor, dexamethasone, RNA-sequencing, phosphoglycerides
Procedia PDF Downloads 1321307 The Impact of Window Opening Occupant Behavior Models on Building Energy Performance
Authors: Habtamu Tkubet Ebuy
Abstract:
Purpose: Conventional dynamic energy simulation tools go beyond the static dimension of simplified methods by providing better and more accurate prediction of building performance. However, their ability to forecast actual performance is undermined by a poor representation of human interactions. The purpose of this study is to examine the potential benefits of incorporating information on occupant diversity into the occupant behavior models used to simulate building performance. Co-simulating the stochastic behavior of occupants substantially increases the accuracy of the simulation. Design/methodology/approach: In this article, probabilistic models of occupants' window opening and closing behavior were developed in a separate multi-agent platform, SimOcc, and implemented in the building simulation tool TRNSYS, so that window behavior and its interconnectivity can be reflected in the simulation analysis of the building. Findings: The results of the study show that representing complex behaviors is important for predicting actual building performance, and they help identify the gap between reality and existing simulation methods. We hope this study and its results will serve as a guide for researchers interested in investigating occupant behavior in the future. Research limitations/implications: Further case studies involving multi-user behavior in complex commercial buildings are needed to better understand the impact of occupant behavior on building performance. Originality/value: This study is considered a good opportunity to support the national strategy by showing a suitable tool to help stakeholders in the design phase of new or retrofitted buildings to improve the performance of office buildings.Keywords: occupant behavior, co-simulation, energy consumption, thermal comfort
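A stochastic window-opening model of the kind co-simulated above can be illustrated in miniature. The sketch below is not SimOcc: it is a generic two-state Markov chain in which the per-step probability of opening follows a logistic function of indoor temperature, with all coefficients hypothetical.

```python
import math
import random

def p_open(t_in, a=-10.0, b=0.4):
    """Hypothetical logistic probability of opening the window per time step."""
    return 1.0 / (1.0 + math.exp(-(a + b * t_in)))

def simulate(temps, p_close=0.2, seed=1):
    """Two-state Markov chain: 0 = window closed, 1 = window open."""
    rng = random.Random(seed)
    state, states = 0, []
    for t in temps:
        if state == 0 and rng.random() < p_open(t):
            state = 1
        elif state == 1 and rng.random() < p_close:
            state = 0
        states.append(state)
    return states

# Warm vs cool indoor conditions: the window should be open a larger
# fraction of the time when it is warm.
warm = simulate([28.0] * 500)
cool = simulate([18.0] * 500)
print(sum(warm) / 500, sum(cool) / 500)
```

In a co-simulation setup, the window state computed at each step would be fed back to the thermal model (here, TRNSYS), which in turn updates the indoor temperature driving the next step.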
Procedia PDF Downloads 1041306 A Comparative Human Rights Analysis of Expulsion as a Counterterrorism Instrument: An Evaluation of Belgium
Authors: Louise Reyntjens
Abstract:
Where criminal law used to be the traditional response to the terrorist threat, European governments are increasingly relying on administrative paths. The reliance on immigration law fits into this trend: terrorism is seen as a civilizational menace emanating from abroad. In this context, the expulsion of dangerous aliens, immigration law's core task, is put forward as a key security tool. Governments all over Europe are focusing on removing dangerous individuals from their territory rather than bringing them to justice. This research reflects on the consequences for the expelled individuals' fundamental rights. For this, the author selected four European countries for a comparative study: Belgium, France, the United Kingdom and Sweden. All these countries face similar social and security issues, prompting the recourse to immigration law as a counterterrorism tool. Yet they adopt very different approaches: the United Kingdom positions itself on the repressive side of the spectrum; Sweden, on the other hand, also 'securitized' its immigration policy after the recent terrorist attack in Stockholm, but remains on the tolerant side of the spectrum; Belgium and France are situated in between. This paper addresses the situation in Belgium. In 2017, the Belgian parliament introduced several legislative changes by which it considerably expanded and facilitated the possibility to expel unwanted aliens. First, the expulsion measure was subjected to new and questionable definitions: a serious attack on the nation's safety used to be required to expel certain categories of aliens, whereas presently mere suspicions suffice to fulfil the new definition of a 'serious threat to national security'. This definition fails to satisfy the principle of legality: neither the law nor the preparatory works clarify what is meant by 'a threat to national security'. 
This creates the risk of submitting this concept's interpretation almost entirely to the discretion of the immigration authorities. Secondly, in the name of intervening more quickly and efficiently, the automatically suspensive appeal against expulsions was abolished. The European Court of Human Rights nonetheless requires such an automatically suspensive appeal under Articles 13 and 3 of the Convention. Whether this procedural reform will endure is thus questionable. This contribution also raises questions regarding expulsion's efficacy as a key security tool. In a globalized and mobile world, particularly in a European Union with no internal boundaries, questions can be raised about the usefulness of this measure. Even more so, by simply expelling a dangerous individual, states avoid their responsibility and shift the risk to another state. Criminal law might in these instances be more capable of providing a conclusive and long-term response. This contribution explores the human rights consequences of expulsion as a security tool in Belgium. It also offers a critical view of its efficacy in protecting national security.Keywords: Belgium, counter-terrorism and human rights, expulsion, immigration law
Procedia PDF Downloads 1271305 Quantification of Hydrogen Sulfide and Methyl Mercaptan in Air Samples from a Waste Management Facilities
Authors: R. F. Vieira, S. A. Figueiredo, O. M. Freitas, V. F. Domingues, C. Delerue-Matos
Abstract:
The presence of sulphur compounds like hydrogen sulphide and mercaptans is one of the reasons why waste-water treatment and waste management are associated with odour emissions. In this context, a method for quantifying these compounds helps in the optimization of treatments aimed at their elimination, namely biofiltration processes. The aim of this study was the development of a method for the quantification of odorous gases in air samples from waste treatment plants. A method based on headspace solid-phase microextraction (HS-SPME) coupled with gas chromatography with flame photometric detection (GC-FPD) was used to analyse H2S and methyl mercaptan (MM). The extraction was carried out with a 75-μm Carboxen-polydimethylsiloxane fiber coating at 22 ºC for 20 min, and the analysis was performed on a Shimadzu GC 2010 Plus A with a sulphur-filter detector: splitless mode (0.3 min); column temperature program from 60 ºC, increased at 15 ºC/min to 100 ºC (2 min). The injector temperature was held at 250 ºC and the detector at 260 ºC. For the calibration curve, a gas diluter (digital Hovagas G2 - Multi Component Gas Mixer) was used to prepare the standards. This unit had two input connections, one for a stream of the dilute gas and another for a stream of nitrogen, and an output connected to a glass bulb. A 40 ppm H2S cylinder and a 50 ppm MM cylinder were used. The equipment was programmed to the selected concentration and automatically carried out the dilution into the glass bulb. The mixture was left flowing through the glass bulb for 5 min, and then the extremities were closed. This method allowed calibration in the ranges 1-20 ppm for H2S and 0.02-0.1 ppm and 1-3.5 ppm for MM. Several quantifications of air samples from the inlet and outlet of a biofilter operating at a waste management facility in the north of Portugal allowed the evaluation of the biofilter's performance.Keywords: biofiltration, hydrogen sulphide, mercaptans, quantification
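Once standards have been measured, converting a sample's detector response into a concentration is a simple inversion of the calibration line. The sketch below uses hypothetical peak areas (the abstract reports only the calibration ranges, not the responses), and for simplicity fits a straight line; a real FPD sulphur response is roughly quadratic and is often linearised before fitting.

```python
import numpy as np

# Hypothetical H2S calibration standards (ppm) and GC-FPD peak areas,
# invented for illustration over the abstract's 1-20 ppm range.
conc = np.array([1.0, 5.0, 10.0, 15.0, 20.0])             # ppm
area = np.array([210.0, 1030.0, 2095.0, 3110.0, 4180.0])  # arbitrary units

# Least-squares straight-line calibration: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area, 1)

def quantify(sample_area):
    """Invert the calibration line to estimate concentration in ppm."""
    return (sample_area - intercept) / slope

print(f"slope = {slope:.1f} units/ppm, "
      f"sample estimate = {quantify(1500.0):.2f} ppm")
```

An estimate is only valid within the calibrated range; a sample area outside the fitted standards would require dilution or a new calibration.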
Procedia PDF Downloads 4761304 Azadrachea indica Leaves Extract Assisted Green Synthesis of Ag-TiO₂ for Degradation of Dyes in Aqueous Medium
Authors: Muhammad Saeed, Sheeba Khalid
Abstract:
Aqueous pollution due to the textile industry is an important issue. Photocatalysis using metal oxides as catalysts is one of the methods used for the eradication of dyes from textile industrial effluents. In this study, the synthesis, characterization, and evaluation of the photocatalytic activity of Ag-TiO₂ are reported. TiO₂ catalysts with 2, 4, 6 and 8% loading of Ag were prepared by a green method using Azadrachea indica leaves' extract as the reducing agent and titanium dioxide and silver nitrate as precursor materials. The 4% Ag-TiO₂ exhibited the best catalytic activity for the degradation of dyes. The prepared catalyst was characterized by advanced techniques. Catalytic degradation of methylene blue and rhodamine B was carried out in a Pyrex glass batch reactor. Deposition of Ag greatly enhanced the catalytic efficiency of TiO₂ towards the degradation of dyes. Irradiation of the catalyst excites electrons from the valence band to the conduction band, yielding electron-hole pairs. These photoexcited electrons and positive holes undergo secondary reactions and produce OH radicals, and these active radicals take part in the degradation of the dyes. More than 90% of the dyes were degraded in 120 minutes. It was found that there was no loss of catalytic efficiency of the prepared Ag-TiO₂ after recycling it twice. Photocatalytic degradation of methylene blue and rhodamine B followed the Eley-Rideal mechanism, which states that the dye reacts in the fluid phase with adsorbed oxygen. Activation energies of 27 kJ/mol and 20 kJ/mol were found for the photodegradation of the methylene blue and rhodamine B dyes, respectively.Keywords: TiO₂, Ag-TiO₂, methylene blue, Rhodamine B., photo degradation
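The reported activation energies can be put to work with the Arrhenius equation, k = A·exp(-Ea/RT): the ratio k(T2)/k(T1) eliminates the pre-exponential factor. The short calculation below uses the abstract's Ea values; the 10 K temperature rise is an illustrative choice, not from the abstract.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def rate_ratio(ea_j_mol, t1, t2):
    """Arrhenius ratio k(T2)/k(T1) for a given activation energy Ea."""
    return math.exp(-ea_j_mol / R * (1.0 / t2 - 1.0 / t1))

# Activation energies reported in the abstract (converted to J/mol).
for name, ea in [("methylene blue", 27_000.0), ("rhodamine B", 20_000.0)]:
    r = rate_ratio(ea, 298.15, 308.15)   # 25 C -> 35 C
    print(f"{name}: a 10 K rise speeds degradation by about x{r:.2f}")
```

The larger Ea of methylene blue means its degradation rate is the more temperature-sensitive of the two, consistent with the Arrhenius form.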
Procedia PDF Downloads 165
1303 A Density Functional Theory Based Comparative Study of Trans- and Cis-Resveratrol
Authors: Subhojyoti Chatterjee, Peter J. Mahon, Feng Wang
Abstract:
Resveratrol (RvL), a phenolic compound, is a key ingredient in wine and tomatoes that has been studied over the years because of its important bioactivities, such as its anti-oxidant, anti-aging, and antimicrobial properties. Of the two isomeric forms of resveratrol, trans and cis, the health benefit is primarily associated with the trans form. Thus, studying the structural properties of the isomers will not only provide insight into understanding the RvL isomers but will also help in designing parameters for differentiation in order to achieve 99.9% purity of trans-RvL. In the present study, a density functional theory (DFT) study is conducted, using the B3LYP/6-311++G** model, to explore the through-bond and through-space intramolecular interactions. Properties such as vibrational spectra (IR and Raman), nuclear magnetic resonance (NMR) spectra, the excess orbital energy spectrum (EOES), energy-based decomposition analyses (EDA), and the Fukui function are calculated. It is discovered that although the structure of trans-RvL is C1 non-planar, its backbone non-H atoms lie nearly in the same plane, whereas cis-RvL consists of two major planes, R1 and R2, that are not coplanar. The absence of planarity gives rise to an H-bond of 2.67 Å in cis-RvL. Rotation of the C(5)-C(8) single bond in trans-RvL produces higher energy barriers, since it may break the (planar) fully conjugated structure, while such rotation in cis-RvL produces multiple minima and maxima depending on the positions of the rings. The calculated FT-IR spectrum shows very different spectral features for trans- and cis-RvL in the region 900-1500 cm⁻¹, where the spectral peaks at 1138-1158 cm⁻¹ are split in cis-RvL compared to a single peak at 1165 cm⁻¹ in trans-RvL. In the Raman spectra, there is significant enhancement for cis-RvL in the region above 3000 cm⁻¹.
Further, the carbon chemical environment (¹³C NMR) of the RvL molecule exhibits a larger chemical shift for cis-RvL than for trans-RvL (Δδ = 8.18 ppm) at carbon atom C(11), indicating that the chemical environment of this C group in cis-RvL is more diverse than in the other isomer. The energy gap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) is 3.95 eV for trans-RvL and 4.35 eV for cis-RvL. A more detailed inspection using the recently developed EOES revealed that most of the large energy differences in their orbitals, i.e., Δεcis-trans > ±0.30 eV, are contributed by the outer valence shell: MO60 (HOMO), MO52-55, and MO46. The active sites captured by the Fukui function (f⁺ > 0.08) are associated with the stilbene C=C bond of RvL, and cis-RvL is more active at these sites than trans-RvL, as the cis orientation breaks the large conjugation of trans-RvL so that the hydroxyl oxygens are more active in cis-RvL. Finally, EDA highlights the interaction energy (ΔEInt) of the phenolic compound, where trans-RvL is preferred over the cis-RvL isomer (ΔΔEi = -4.35 kcal·mol⁻¹). Thus, these quantum mechanical results could help in unravelling the diverse beneficial activities associated with resveratrol.
Keywords: resveratrol, FT-IR, Raman, NMR, excess orbital energy spectrum, energy decomposition analysis, Fukui function
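For reference, the standard definitions behind the quantities discussed above (textbook forms, not taken from the paper itself) can be written as:

```latex
% HOMO-LUMO gap (reported as 3.95 eV for trans-RvL, 4.35 eV for cis-RvL)
\Delta E_{\mathrm{gap}} = \varepsilon_{\mathrm{LUMO}} - \varepsilon_{\mathrm{HOMO}}

% Fukui function for nucleophilic attack (finite-difference form),
% where \rho_N is the electron density of the N-electron system;
% sites with large f^{+} are the reactive sites flagged by f^{+} > 0.08
f^{+}(\mathbf{r}) = \rho_{N+1}(\mathbf{r}) - \rho_{N}(\mathbf{r})
```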
Procedia PDF Downloads 194
1302 Machine Learning Techniques for Estimating Ground Motion Parameters
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site conditions. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially in subsequent risk assessment of different types of structures. Typically, linear regression-based models with pre-defined equations and coefficients are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing these terms as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained on a selected database of 4,528 ground motions, including 376 seismic events of magnitude 3 to 5.8, recorded over a hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The choice of this database stems from the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states.
Accuracy of the models in predicting intensity measures, generalization capability of the models to future data, and usability of the models are discussed in the evaluation process. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, and in particular, Random Forest outperforms the other algorithms. However, the conventional method is the better tool when limited data are available.
Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine
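The conventional linear regression baseline that the study compares against can be sketched as an ordinary least squares fit of a simple pre-defined functional form. The form, coefficients, and synthetic data below are illustrative assumptions, not the study's actual model or database:

```python
import math, random

random.seed(0)

# Synthetic ground motions from an assumed simple GMPE-style form
# (illustrative coefficients): ln PGA = c0 + c1*M - c2*ln(R) + noise
c_true = (-2.0, 1.1, 1.3)
data = []
for _ in range(400):
    M = random.uniform(3.0, 5.8)    # magnitude range from the text
    R = random.uniform(4.0, 500.0)  # hypocentral distance, km
    ln_pga = (c_true[0] + c_true[1] * M - c_true[2] * math.log(R)
              + random.gauss(0.0, 0.3))
    data.append((M, R, ln_pga))

# Ordinary least squares via the normal equations (X^T X) c = X^T y
X = [(1.0, M, -math.log(R)) for M, R, _ in data]
y = [v for _, _, v in data]
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * v for r, v in zip(X, y)) for i in range(3)]

# Solve the 3x3 system with Gauss-Jordan elimination (partial pivoting)
A = [row[:] + [b] for row, b in zip(XtX, Xty)]
for i in range(3):
    p = max(range(i, 3), key=lambda r: abs(A[r][i]))
    A[i], A[p] = A[p], A[i]
    for r in range(3):
        if r != i:
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
coef = [A[i][3] / A[i][i] for i in range(3)]
print([round(c, 2) for c in coef])
```

Such a model is fixed to the chosen equation; the ML alternatives discussed above learn magnitude scaling and distance dependency directly from the data.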
Procedia PDF Downloads 122
1301 Experimental Correlation for Erythrocyte Aggregation Rate in Population Balance Modeling
Authors: Erfan Niazi, Marianne Fenech
Abstract:
Red blood cells (RBCs), or erythrocytes, tend to form chain-like aggregates called rouleaux under low shear rates. This is a reversible process, and rouleaux disaggregate at high shear rates. Therefore, RBC aggregation occurs in the microcirculation, where low shear rates are present, but does not occur under normal physiological conditions in large arteries. Numerical modeling of RBC interactions is fundamental in analytical models of blood flow in the microcirculation. Population Balance Modeling (PBM) is particularly useful for studying problems where particles agglomerate and break up in two-phase flow systems, in order to find flow characteristics. In this method, the elementary particles lose their individual identity owing to continuous destruction and recreation by break-up and agglomeration. The aim of this study is to find the RBC aggregation rate in a dynamic situation. A simplified PBM was used previously to find the aggregation rate from a static observation of RBC aggregation in a drop of blood under the microscope. To find the aggregation rate in a dynamic situation, we propose an experimental setup testing RBC sedimentation. In this test, RBCs interact and aggregate to form rouleaux. In this configuration, disaggregation can be neglected due to the low shear stress. A high-speed camera is used to acquire video-microscopic images of the process. The sizes of the aggregates and the sedimentation velocity are extracted using image processing techniques. Based on data collected from 5 healthy human blood samples, the aggregation rate was estimated as 2.7×10³ (±0.3×10³) s⁻¹.
Keywords: red blood cell, rouleaux, microfluidics, image processing, population balance modeling
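The aggregation-only population balance (breakup neglected, as in the sedimentation test) can be sketched with the discrete Smoluchowski equations. This is a minimal illustration with an assumed constant kernel and arbitrary units, not the study's model or its measured rate:

```python
# Aggregation-only discrete population balance (Smoluchowski):
# dn_k/dt = 0.5 * sum_{i+j=k} K n_i n_j  -  n_k * sum_j K n_j
K = 1e-3          # constant aggregation kernel (assumed, arbitrary units)
nmax = 20         # largest rouleau size tracked (cells per aggregate)
n = [0.0] * (nmax + 1)
n[1] = 100.0      # start with singlet RBCs only
dt, steps = 0.01, 2000

for _ in range(steps):
    dn = [0.0] * (nmax + 1)
    for i in range(1, nmax + 1):
        for j in range(1, nmax + 1):
            rate = K * n[i] * n[j]
            dn[i] -= rate                # size i consumed by collision
            if i + j <= nmax:
                dn[i + j] += 0.5 * rate  # birth of size i+j (0.5 avoids
                                         # double-counting ordered pairs)
    n = [a + dt * b for a, b in zip(n, dn)]  # explicit Euler step

total_cells = sum(k * n[k] for k in range(1, nmax + 1))
mean_size = total_cells / sum(n[1:])
print(round(mean_size, 2))
```

The mean aggregate size grows over time while the total cell count is (nearly) conserved; in the paper's setting, the kernel magnitude is what the sedimentation experiment calibrates.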
Procedia PDF Downloads 355