Search results for: small device
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6563

683 Mathematical Model to Simulate Liquid Metal and Slag Accumulation, Drainage and Heat Transfer in Blast Furnace Hearth

Authors: Hemant Upadhyay, Tarun Kumar Kundu

Abstract:

It is of utmost importance for a blast furnace operator to understand the mechanisms governing liquid flow, accumulation, drainage and heat transfer between the various phases in the blast furnace hearth for a stable and efficient blast furnace operation. Abnormal drainage behavior may lead to a high liquid build-up in the hearth. Operational problems such as pressurization, low wind intake, and lower material descent rates are normally encountered if the liquid levels in the hearth exceed a critical limit, at which the hearth coke and deadman start to float. Similarly, hot metal temperature is an important parameter to be controlled in BF operation; it should be kept at an optimal level to obtain the desired product quality and a stable BF performance. It is not possible to carry out any direct measurement of the above due to the hostile conditions in the hearth, with chemically aggressive hot liquids. The objective here is to develop a mathematical model to simulate the variation in hot metal/slag accumulation and temperature during tapping of the blast furnace, based on the computed drainage rate, production rate, mass balance, and heat transfer between metal and slag, metal and solids, and slag and solids, as well as among the various zones of metal and slag themselves. For modeling purposes, the BF hearth is considered as a pressurized vessel filled with solid coke particles. Liquids trickle down into the hearth from the top and accumulate in the voids between the coke particles, which are assumed to be thermally saturated. A set of generic mass balance equations gives the amount of metal and slag entering the hearth. A small drainage outlet (tap hole) is situated at the bottom of the hearth, and the flow rate of liquids from the tap hole is computed taking into account the amounts of both phases accumulated, their levels in the hearth, the pressure from gases in the furnace, and the erosion behavior of the tap hole itself. Heat transfer equations describe the exchange of heat between the various layers of liquid metal and slag, and the heat loss to the cooling system through the refractories. Based on all of this information, a dynamic simulation is carried out that provides real-time information on liquid accumulation in the hearth before and during tapping and on the drainage rate and its variation, and predicts critical event timings during tapping and the expected tapping temperature of metal and slag at preset time intervals. The model is in use at BF-II of JSPL, India, and its output is regularly cross-checked against actual tapping data, with which it is in good agreement.
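As a rough illustration of the kind of coupled mass-balance and tap-hole drainage calculation described above, the following Python sketch integrates the liquid metal inventory in the hearth over time. The hearth geometry, production rate, discharge coefficient and orifice-type drainage law are illustrative assumptions, not the parameters or equations of the authors' model.

```python
import math

# Illustrative parameters (assumed, not from the paper)
HEARTH_AREA = 80.0      # m^2, hearth cross-section
VOID_FRACTION = 0.3     # coke bed porosity
RHO_METAL = 7000.0      # kg/m^3, liquid iron density
PROD_RATE = 60.0        # kg/s, hot metal production rate
TAP_AREA = 2.0e-3       # m^2, tap-hole cross-section (erosion ignored)
CD = 0.6                # discharge coefficient
GAS_PRESSURE = 4.0e5    # Pa, gas pressure above the liquid
G = 9.81

def drainage_rate(level_m, tap_open):
    """Orifice-type outflow (kg/s) driven by liquid head plus gas pressure."""
    if not tap_open or level_m <= 0.0:
        return 0.0
    dp = RHO_METAL * G * level_m + GAS_PRESSURE
    return CD * TAP_AREA * RHO_METAL * math.sqrt(2.0 * dp / RHO_METAL)

def simulate(hours=4.0, dt=10.0, tap_start_s=3600.0):
    """Euler integration of metal accumulation before and during tapping."""
    mass, t, history = 0.0, 0.0, []
    while t < hours * 3600.0:
        level = mass / (RHO_METAL * HEARTH_AREA * VOID_FRACTION)
        out = drainage_rate(level, tap_open=(t >= tap_start_s))
        mass = max(0.0, mass + (PROD_RATE - out) * dt)
        history.append((t, level, out))
        t += dt
    return history

# Print the liquid level and drainage rate every 30 minutes
for t, level, out in simulate()[::180]:
    print(f"t={t/60:6.1f} min  level={level:5.3f} m  drainage={out:6.1f} kg/s")
```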

Keywords: blast furnace, hearth, deadman, hot metal

Procedia PDF Downloads 184
682 Stuttering Persistence in Children: Effectiveness of the Psicodizione Method in a Small Italian Cohort

Authors: Corinna Zeli, Silvia Calati, Marco Simeoni, Chiara Comastri

Abstract:

Developmental stuttering affects about 10% of preschool children; despite the high percentage of natural recovery, a quarter of them will become adults who stutter. An effective early intervention should help those children at high risk of persistence. The Psicodizione method for early stuttering is an Italian indirect behavioral treatment for preschool children who stutter, in which parents act as good guides for communication, modeling their own fluency. In this study, we provide a preliminary measure of the long-term effectiveness of the Psicodizione method for stuttering preschool children with a high persistence risk. Among all Italian children treated with the Psicodizione method between 2018 and 2019, we selected 8 children with at least 3 high-risk persistence factors from the Illinois Prediction Criteria proposed by Yairi and Seery. The factors chosen for the selection were: one parent who stutters (1 pt mother; 1.5 pt father), male gender, ≥ 4 years old at onset, and ≥ 12 months from onset of symptoms before treatment. For this study, the families were contacted after an average period of 14.7 months (range 3-26 months). Parental reports were gathered with a standard online questionnaire in order to obtain data reflecting fluency across a wide range of the children’s life situations. The minimum worthwhile outcome was set at "mild evidence" on a 5-point Likert scale (1 = mild evidence, 5 = high-severity evidence). A second group of 6 children, among those treated with the Psicodizione method, was selected as having high potential for spontaneous remission (low persistence risk). The children in this group had to fulfill all the following criteria: female gender, symptoms for less than 12 months (before treatment), age of onset < 4 years old, and neither parent with persistent stuttering. At the time of this follow-up, the children in the high persistence risk group were aged 6-9 years, with a mean of 15 months post-treatment; 2 of them (25%) no longer stuttered, and 3 (37.5%) had a mild stutter based on parental reports. In the low persistence risk group, the children were aged 4-6 years, with a mean of 14 months post-treatment, and 5 (84%) no longer stuttered (for the past 16 months on average). Overall, 62.5% of the children at high risk of persistence showed at most mild evidence of stuttering after Psicodizione treatment, and 75% of parents reported better fluency than before the treatment. The low persistence risk group seemed to be representative of spontaneous recovery. This study’s design could help to better evaluate the success of proposed interventions for stuttering preschool children and provides a preliminary measure of the effectiveness of the Psicodizione method on children at high persistence risk.

Keywords: early treatment, fluency, preschool children, stuttering

Procedia PDF Downloads 217
681 A Model for a Continuous Professional Development Program for Early Childhood Teachers in Villages: Insights from the Coaching Pilot in Indonesia

Authors: Ellen Patricia, Marilou Hyson

Abstract:

Coaching has shown great potential to strengthen the impact of brief group trainings and to help early childhood teachers solve specific problems at work, with the goal of raising the quality of early childhood services. However, there have been doubts about the benefits that village teachers can receive from coaching: it is perceived that village teachers may struggle with the thinking skills needed to make coaching beneficial. Furthermore, there are reservations about whether principals and supervisors in villages are open to coaching’s facilitative approach, as opposed to the directive approach they have been using. As such, the use of coaching to develop the professionalism of early childhood teachers in villages needs to be examined. The Coaching Pilot for early childhood teachers in Indonesian villages provides insights into the above issues. The Coaching Pilot is part of the ECED Frontline Pilot, a collaboration between the Government of Indonesia and the World Bank with support from the Australian Government (DFAT). The Pilot started with coordinated efforts with the local governments in two districts to select principals and supervisors, equipped with basic knowledge about early childhood education, to take part in a 2-day coaching training. Afterwards, the participants were asked to complete 25 hours of coaching with early childhood teachers who had participated in the Enhanced Basic Training for village teachers. The participants who completed this requirement were then invited for an assessment of their coaching skills. Following that, a qualitative evaluation was conducted using in-depth interviews and focus group discussion techniques. The evaluation focused on the impact of the Coaching Pilot in helping village teachers develop their professionalism, as well as on the sustainability of the intervention. Results from the evaluation indicated that, although their low level of education may limit their thinking skills, village teachers benefited from the coaching they received. Moreover, the evaluation results also suggested that, with enough training and support, principals and supervisors in the villages were able to provide an adequate coaching service for the teachers. Beyond this small start, interest is growing, both within the pilot districts and beyond, due to word of mouth about the benefits that the Coaching Pilot has created. The districts where coaching was piloted have planned to continue the coaching program, since a number of early childhood teachers have requested to be coached, and a number of principals and supervisors have requested to be trained as coaches. Furthermore, the Association for Early Childhood Educators in Indonesia has started to adopt coaching into its programs. Although further research is needed, the Coaching Pilot suggests that coaching can positively impact early childhood teachers in villages, and that village principals and supervisors can become a promising source of future coaches. As such, coaching has significant potential to become a sustainable model for a continuous professional development program for early childhood teachers in villages.

Keywords: coaching, coaching pilot, early childhood teachers, principals and supervisors, village teachers

Procedia PDF Downloads 240
680 Association between Maternal Personality and Postnatal Mother-to-Infant Bonding

Authors: Tessa Sellis, Marike A. Wierda, Elke Tichelman, Mirjam T. Van Lohuizen, Marjolein Berger, François Schellevis, Claudi Bockting, Lilian Peters, Huib Burger

Abstract:

Introduction: Most women develop a healthy bond with their children; however, adequate mother-to-infant bonding cannot be taken for granted. Mother-to-infant bonding refers to the feelings and emotions experienced by the mother towards her child. It is an ongoing process that starts during pregnancy and develops during the first year postpartum and likely throughout early childhood. The prevalence of inadequate bonding ranges from 7 to 11% in the first weeks postpartum. An impaired mother-to-infant bond can cause long-term complications for both mother and child. Very little research has been conducted on the direct relationship between the personality of the mother and mother-to-infant bonding. This study explores the associations between maternal personality and postnatal mother-to-infant bonding. The main hypothesis is that there is a relationship between neuroticism and mother-to-infant bonding. Methods: Data for this study were drawn from the Pregnancy Anxiety and Depression Study (2010-2014), which examined symptoms of and risk factors for anxiety and depression during pregnancy and the first year postpartum among 6220 pregnant women who received primary, secondary or tertiary care in the Netherlands. The study was expanded in 2015 to investigate postnatal mother-to-infant bonding. For the current research, 3836 participants were included. During the first trimester of gestation, baseline characteristics as well as personality were measured through online questionnaires. Personality was measured with the NEO Five-Factor Inventory (NEO-FFI), which covers the Big Five personality traits (neuroticism, extraversion, openness, altruism and conscientiousness). Mother-to-infant bonding was measured postpartum with the Postpartum Bonding Questionnaire (PBQ). Univariate linear regression analysis was performed to estimate the associations. Results: 5% of the PBQ respondents reported impaired bonding. A statistically significant association was found between neuroticism and mother-to-infant bonding (p < .001): mothers scoring higher on neuroticism reported a lower score on mother-to-infant bonding. In addition, a positive correlation was found between the personality traits extraversion (b: -.081), openness (b: -.014), altruism (b: -.067) and conscientiousness (b: -.060) and mother-to-infant bonding. Discussion: This study is one of the first to demonstrate a direct association between the personality of the mother and mother-to-infant bonding. A statistically significant relationship was found between neuroticism and mother-to-infant bonding; however, the percentage of variance predictable by a personality dimension is very small. This study has examined one part of the multi-factorial topic of mother-to-infant bonding and offers more insight into this rarely investigated and complex matter. For midwives, it is important to recognize the risks for impaired bonding and subsequently improve policy for women at risk.
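The univariate regression step described above can be illustrated with a short Python sketch; the data below are synthetic and the coefficient values arbitrary, chosen only to show the form of the analysis (PBQ score regressed on NEO-FFI neuroticism), not to reproduce the study's results.

```python
import numpy as np

# Synthetic illustration only -- not the PAD study data.
rng = np.random.default_rng(0)
n = 500
neuroticism = rng.normal(30, 7, n)             # NEO-FFI-style scores (assumed scale)
# Assumed coding: higher PBQ total = more impaired bonding
pbq = 5.0 + 0.15 * neuroticism + rng.normal(0, 4, n)

# Univariate OLS: pbq = b0 + b1 * neuroticism
X = np.column_stack([np.ones(n), neuroticism])
beta, *_ = np.linalg.lstsq(X, pbq, rcond=None)

# Standard error and t statistic of the slope
residuals = pbq - X @ beta
sxx = np.sum((neuroticism - neuroticism.mean()) ** 2)
se_b1 = np.sqrt(np.sum(residuals ** 2) / (n - 2) / sxx)
print(f"b1 = {beta[1]:.3f}, t = {beta[1] / se_b1:.2f}")
```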

Keywords: mother-to-infant bonding, personality, postpartum, pregnancy

Procedia PDF Downloads 364
679 Combination of Silver-Curcumin Nanoparticle for the Treatment of Root Canal Infection

Authors: M. Gowri, E. K. Girija, V. Ganesh

Abstract:

Background and Significance: Among dental infections, inflammation and infection of the root canal are common in all age groups. Currently, the management of root canal infections involves cleaning the canal with powerful irrigants followed by intracanal medicament application. Though these treatments have been in vogue for a long time, root canal failures do occur. Treatment of root canal infections is limited by the anatomical complexity, in terms of small micrometer-scale volumes, and by poor penetration of drugs. Thus, infections of the root canal are a challenge that demands the development of new agents that can eradicate C. albicans. Methodology: In the present study, we synthesized silver-curcumin nanoparticles and screened them against Candida albicans. Detailed molecular studies of the effect of the silver-curcumin nanoparticles on C. albicans pathogenicity were carried out. Morphological cell damage and the antibiofilm activity of the silver-curcumin nanoparticles on C. albicans were studied using scanning electron microscopy (SEM). Biochemical evidence for membrane damage was obtained using flow cytometry. Further, the antifungal activity of the silver-curcumin nanoparticles was evaluated in an ex vivo dentinal tubule infection model. Results: Screening data showed that the silver-curcumin nanoparticles were active against C. albicans, exerting a time-kill effect and a post-antifungal effect. When used in combination with fluconazole or nystatin, the silver-curcumin nanoparticles produced a decrease in the minimum inhibitory concentration (MIC) of both drugs. In-depth molecular studies showed that the silver-curcumin nanoparticles inhibited yeast-to-hyphae (Y-H) conversion in C. albicans. Further, SEM images of C. albicans showed that the silver-curcumin nanoparticles caused membrane damage and inhibited biofilm formation. Biochemical evidence for membrane damage was confirmed by increased propidium iodide (PI) uptake in flow cytometry. The antifungal activity of the silver-curcumin nanoparticles was also evaluated in an ex vivo dentinal tubule infection model, which mimics human tooth root canal infection. Confocal laser scanning microscopy studies showed eradication of C. albicans and a reduction in colony-forming units (CFU) after 24 h of treatment in the infected tooth samples in this model. Conclusion: The results of this study can pave the way for developing new antifungal agents with well-deciphered mechanisms of action, and the silver-curcumin nanoparticles can be a promising antifungal agent or medicament against root canal infection.

Keywords: C. albicans, ex vivo dentine model, inhibition of biofilm formation, root canal infection, yeast to hyphae conversion inhibition

Procedia PDF Downloads 208
678 Assessment of the Efficacy of Routine Medical Tests in Screening Medical Radiation Staff in Shiraz University of Medical Sciences Educational Centers

Authors: Z. Razi, S. M. J. Mortazavi, N. Shokrpour, Z. Shayan, F. Amiri

Abstract:

Long-term exposure to low doses of ionizing radiation occurs in radiation health care workplaces. Although doses in the health professions are generally very low, there are still matters of concern. The radiation safety program promotes occupational radiation safety through accurate and reliable monitoring of radiation workers in order to effectively manage radiation protection. To achieve this goal, it has become mandatory to implement periodic health examinations. As a result, working populations with a common occupational radiation history are screened based on hematological alterations. This paper calls into question the effectiveness of blood component analysis as a screening program, which is mandatory for medical radiation workers in some countries. The study details the distribution of and trends in blood components, including white blood cells (WBCs), red blood cells (RBCs) and platelets, as well as the cumulative doses received from occupational radiation exposure. The study was conducted among 199 participants and 100 control subjects at the medical imaging departments of the central hospital of Shiraz University of Medical Sciences during the years 2006-2010. Descriptive and analytical statistics were used for data analysis, with P < 0.05 considered statistically significant. The results of this study show that there is no significant difference between the radiation workers and controls regarding WBC and platelet counts over the 4 years. We also found no statistically significant difference between the two groups with respect to RBCs. In addition, no statistically significant difference was observed with respect to RBCs with regard to gender, which was analyzed separately because of the lower reference range for normal RBC levels in women compared to men. Moreover, in a separate evaluation of WBC count, the personnel’s working experience and their annual exposure dose, no linear correlation was found among the three variables. Since the hematological findings were within the range of control levels, it can be concluded that the radiation dose (which did not exceed 7.58 mSv in this study) was too small to stimulate any quantifiable change in medical radiation workers’ blood counts. Thus, the use of a more accurate screening method based on the working profiles of the radiation workers and their accumulated doses is suggested. In addition, the complexity of radiation-induced effects and the influence of various factors on blood count alterations should be taken into account.

Keywords: blood cell count, mandatory testing, occupational exposure, radiation

Procedia PDF Downloads 461
677 Tracing the Developmental Repertoire of the Progressive: Evidence from L2 Construction Learning

Authors: Tianqi Wu, Min Wang

Abstract:

Research investigating language acquisition from a constructionist perspective has demonstrated that language is learned as constructions at various linguistic levels, a process related to frequency, semantic prototypicality, and form-meaning contingency. However, previous research on construction learning has tended to focus on clause-level constructions such as verb argument constructions, and few attempts have been made to study morpheme-level constructions such as the progressive construction, which is regarded as a source of acquisition problems for English learners from diverse L1 backgrounds, especially for those whose L1s do not have an equivalent construction, such as German and Chinese. To trace the developmental trajectory of Chinese EFL learners’ use of the progressive with respect to verb frequency, verb-progressive contingency, and verbal prototypicality and generality, a learner corpus consisting of three sub-corpora representing three different English proficiency levels was extracted from the Chinese Learners of English Corpora (CLEC). As the reference point, a native speakers’ corpus extracted from the Louvain Corpus of Native English Essays was also established. All texts were annotated with the C7 tagset by part-of-speech tagging software. After annotation, all valid progressive hits were retrieved with AntConc 3.4.3, followed by a manual check. Frequency-related data showed that, from the lowest to the highest proficiency level, (1) the type-token ratio increased steadily from 23.5% to 35.6%, approaching the 36.4% found in the native speakers’ corpus, indicating a wider use of verbs in the progressive; (2) the normalized entropy value rose from 0.776 to 0.876, working towards the target score of 0.886 in the native speakers’ corpus, revealing that upper-intermediate learners exhibited a more even distribution and more productive use of verbs in the progressive; and (3) activity verbs (i.e., verbs with prototypical progressive meanings like running and singing) dropped from 59% to 34%, while non-prototypical verbs such as state verbs (e.g., being and living) and achievement verbs (e.g., dying and finishing) were increasingly used in the progressive. Apart from the raw frequency analyses, collostructional analyses were conducted to quantify verb-progressive contingency and to determine which verbs were distinctively associated with the progressive construction. The results were in line with the raw frequency findings, showing that the contingency between the progressive and non-prototypical verbs, represented by light verbs (e.g., going, doing, making, and coming), increased as English proficiency increased. These findings altogether suggest that beginning Chinese EFL learners were less productive in using the progressive construction: they were constrained by a small set of verbs with concrete and typical progressive meanings (e.g., the activity verbs). With increasing English proficiency, however, their use of the progressive began to spread to marginal members such as the light verbs.
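The two frequency measures reported above, type-token ratio and normalized entropy of the verbs attested in the progressive, can be computed as in the following Python sketch; the toy verb lists are invented for illustration and do not come from the CLEC data.

```python
import math
from collections import Counter

def progressive_profile(verb_tokens):
    """Type-token ratio and normalized entropy of verbs used in the progressive."""
    counts = Counter(verb_tokens)
    n_tokens = sum(counts.values())
    n_types = len(counts)
    ttr = n_types / n_tokens
    probs = [c / n_tokens for c in counts.values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    normalized = entropy / math.log2(n_types) if n_types > 1 else 0.0
    return ttr, normalized

# Toy lists of verb lemmas found in progressive slots (assumed examples)
low_level = ["run", "sing", "play", "run", "run", "play", "sing", "run"]
high_level = ["run", "live", "do", "make", "die", "go", "finish", "be"]

for name, hits in [("lower proficiency", low_level), ("higher proficiency", high_level)]:
    ttr, h_norm = progressive_profile(hits)
    print(f"{name}: TTR = {ttr:.2f}, normalized entropy = {h_norm:.3f}")
```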

Keywords: construction learning, corpus-based, progressives, prototype

Procedia PDF Downloads 128
676 Analyzing Transit Network Design versus Urban Dispersion

Authors: Hugo Badia

Abstract:

This research addresses which transit network structure is most suitable to serve specific demand requirements under an increasing urban dispersion process. Two main approaches to network design are found in the literature. On the one hand, there is the traditional answer, widespread in our cities, which develops a high number of lines to connect most origin-destination pairs by direct trips; this approach is based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks in which transferring is essential to complete most trips. To answer which of them is the better option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct-trip-based network; and a transfer-based one, the latter two representing the alternative transit network designs. The model optimizes the network configuration with regard to the total cost for each structure. For a given dispersion scenario, the best alternative is the structure with the minimum cost. The dispersion degree is defined in a simple way by considering that only a central area attracts all trips: if this area is small, the mobility pattern is highly concentrated; if this area is very large, the city is highly decentralized. In this first step, we can determine the area of applicability of each structure as a function of that urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when demand starts to scatter, new transit lines should be implemented to avoid transfers. If urban dispersion advances further, the introduction of more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers. The area of applicability of each network strategy is not constant; it depends on the characteristics of the demand, the city and the transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, measured by the Gini coefficient, and centralization, measured by an area-based centralization index. Once the real dispersion degree is estimated, we are able to identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology allows us to obtain the best network design approach for a city by comparing the theoretical results with its real dispersion degree.
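As an illustration of the concentration measure mentioned above, a Gini coefficient over the trips attracted by each zone can be computed as in the following Python sketch; the zone values are invented examples, not data from the case study.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative distribution (0 = even, 1 = fully concentrated)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Standard discrete formula based on the Lorenz curve
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Toy example (assumed data): trips attracted by each zone of a city
even_city = [100, 110, 95, 105, 98, 102]        # dispersed demand
monocentric_city = [5, 10, 8, 12, 7, 600]       # one central area attracts most trips

print("dispersed:", round(gini(even_city), 3))
print("monocentric:", round(gini(monocentric_city), 3))
```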

Keywords: analytical network design model, network structure, public transport, urban dispersion

Procedia PDF Downloads 230
675 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements

Authors: Alexander Buhr, Klaus Ehrenfried

Abstract:

Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving-model experiments is lower compared to full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to obtain a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, high-speed particle image velocimetry (HS-PIV) measurements on an ICE3 train model were carried out in the moving model rig of the DLR in Göttingen, the so-called Tunnel Simulation Facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness, as well as the momentum thickness and the form factor, are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by using larger roughness elements, especially when applied at a height close to the measuring plane. The roughness elements also cause strong fluctuations in the form factor of the boundary layer. Behind the roughness elements, the form factor rapidly approaches constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
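The integral boundary layer quantities referred to above (displacement thickness, momentum thickness and form factor) follow from the measured velocity profile by straightforward integration; the Python sketch below shows the computation for an assumed 1/7th-power-law profile and an assumed model speed, not for the actual PIV data.

```python
import numpy as np

def integral_thicknesses(y, u, u_inf):
    """Displacement thickness, momentum thickness and form factor H
    from a wall-normal velocity profile u(y)."""
    ratio = u / u_inf
    delta_star = np.trapz(1.0 - ratio, y)          # displacement thickness
    theta = np.trapz(ratio * (1.0 - ratio), y)     # momentum thickness
    return delta_star, theta, delta_star / theta   # H = form factor

# Illustrative 1/7th-power-law turbulent profile (assumed, not measured PIV data)
delta = 0.05                                 # boundary layer thickness, m (assumed)
y = np.linspace(0.0, delta, 200)
u = 60.0 * (y / delta) ** (1.0 / 7.0)        # u_inf = 60 m/s model speed (assumed)

d_star, theta, H = integral_thicknesses(y, u, 60.0)
print(f"delta* = {d_star*1e3:.2f} mm, theta = {theta*1e3:.2f} mm, H = {H:.2f}")
```

For a 1/7th-power-law profile the analytical values are delta*/delta = 1/8 and theta/delta = 7/72, giving H of about 1.29, which the numerical integration reproduces.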

Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements

Procedia PDF Downloads 305
674 Automatic Aggregation and Embedding of Microservices for Optimized Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs in its own process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed on different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendors' local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding deploys a1 and b1 on machine m1, while a2 and b2 are deployed on a different machine m2. This deployment configuration allows each pair (a1-b1, a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies that prevent them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Fortunately, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the previous security concern, and thus greatly simplifying aggregation/embedding implementations by simply deploying a microservice container on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs and failure tolerance.
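A toy Python sketch of the idea follows: it takes a small declarative architecture definition and derives a machine placement that embeds communicating replica pairs (such as a1-b1 and a2-b2 above) and aggregates the remaining services. The service names, data format and placement heuristic are invented for illustration and are not the formal method or the i2kit prototype described in the paper.

```python
# Toy sketch (not the i2kit prototype): derive a machine placement from a
# declarative microservice definition using embedding (co-locate communicating
# replicas) and aggregation (pack remaining services behind a shared proxy).

architecture = {
    "A": {"replicas": 2, "talks_to": ["B"]},   # assumed example services
    "B": {"replicas": 2, "talks_to": []},
    "C": {"replicas": 1, "talks_to": []},
}
embed_pairs = [("A", "B")]    # services whose replicas should share a machine
aggregate = [["C"]]           # groups of services allowed to share one machine

def place(arch, embed_pairs, aggregate):
    machines = {}
    # Embedding: put the i-th replica of each pair on the same machine,
    # so a1-b1 and a2-b2 talk over localhost without a load balancer.
    for a, b in embed_pairs:
        for i in range(min(arch[a]["replicas"], arch[b]["replicas"])):
            machines[f"m-{a}{b}-{i}"] = [f"{a}{i}", f"{b}{i}"]
    # Aggregation: remaining services in a group share one machine via a proxy.
    already_embedded = {s for pair in embed_pairs for s in pair}
    for group in aggregate:
        containers = [f"{s}{i}" for s in group if s not in already_embedded
                      for i in range(arch[s]["replicas"])]
        if containers:
            machines[f"m-agg-{'-'.join(group)}"] = containers
    return machines

for machine, containers in place(architecture, embed_pairs, aggregate).items():
    print(machine, "->", containers)
```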

Keywords: aggregation, deployment, embedding, resource allocation

Procedia PDF Downloads 203
673 Long-Term Variabilities and Tendencies in the Zonally Averaged TIMED-SABER Ozone and Temperature in the Middle Atmosphere over 10°N-15°N

Authors: Oindrila Nath, S. Sridharan

Abstract:

Long-term (2002-2012) temperature and ozone measurements by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere Ionosphere Mesosphere Energetics and Dynamics (TIMED) satellite, zonally averaged over 10°N-15°N, are used to study their long-term changes and their responses to the solar cycle, the quasi-biennial oscillation (QBO) and the El Niño-Southern Oscillation (ENSO). The region is selected to provide more accurate long-term trends and variabilities, which were not possible earlier with lidar measurements over Gadanki (13.5°N, 79.2°E), as those are limited to cloud-free nights, whereas continuous data sets of SABER temperature and ozone are available. Regression analysis of temperature shows a cooling trend of 0.5 K/decade in the stratosphere and of 3 K/decade in the mesosphere. Ozone shows a statistically significant decreasing trend of 1.3 ppmv per decade in the mesosphere, although there is a small positive trend in the stratosphere at 25 km; other than this, no significant ozone trend is observed in the stratosphere. A negative ozone-QBO response (0.02 ppmv/QBO), a positive ozone-solar cycle response (0.91 ppmv/100 SFU) and a negative ozone response to ENSO (0.51 ppmv/SOI) are found mainly in the mesosphere, whereas a positive ozone response to ENSO (0.23 ppmv/SOI) is pronounced in the stratosphere (20-30 km). The temperature response to the solar cycle is more positive (3.74 K/100 SFU) in the upper mesosphere, its response to ENSO is negative around 80 km and positive around 90-100 km, and its response to the QBO is insignificant at most heights. The composite monthly mean of the ozone volume mixing ratio shows maximum values, around 10 ppmv, during the pre-monsoon and post-monsoon seasons in the middle stratosphere (25-30 km) and in the upper mesosphere (85-95 km). The composite monthly mean of temperature shows a semi-annual variation with large values (~250-260 K) in equinox months and lower values in solstice months in the upper stratosphere and lower mesosphere (40-55 km), whereas the SAO becomes weaker above 55 km. The semi-annual variation appears again at 80-90 km, with large values in the spring equinox and winter months. In the upper mesosphere (90-100 km), lower temperatures (~170-190 K) prevail in all months except September, when the temperature is slightly higher. The height profiles of the amplitudes of the semi-annual and annual oscillations in ozone show maximum values of 6 ppmv and 2.5 ppmv, respectively, in the upper mesosphere (80-100 km), whereas the SAO and AO in temperature show maximum values of 5.8 K and 4.6 K in the lower and middle mesosphere, around 60-85 km. The phase profiles of both the SAO and AO show downward progressions. These results are being compared with long-term lidar temperature measurements over Gadanki (13.5°N, 79.2°E), and the results obtained will be presented during the meeting.
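The trend and response coefficients quoted above come from a multiple linear regression of the monthly time series on a linear trend and on solar, QBO and ENSO proxies; the Python sketch below illustrates that regression form on synthetic data with arbitrary coefficients, not on the SABER measurements.

```python
import numpy as np

# Synthetic illustration of the regression form (not SABER data):
# y(t) = mean + trend*t + a*F10.7 + b*QBO + c*SOI + noise
rng = np.random.default_rng(1)
months = np.arange(132)                                  # 2002-2012, monthly
t_years = months / 12.0
f107 = 120 + 60 * np.sin(2 * np.pi * t_years / 11.0)     # crude solar-cycle proxy
qbo = 15 * np.sin(2 * np.pi * t_years / 2.3)             # crude QBO proxy
soi = rng.normal(0, 1, months.size)                      # crude ENSO (SOI) proxy

temp = (250.0 - 0.3 * t_years + 0.037 * f107 + 0.02 * qbo
        - 0.5 * soi + rng.normal(0, 1.0, months.size))   # arbitrary true coefficients

X = np.column_stack([np.ones_like(t_years), t_years, f107, qbo, soi])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
print(f"trend = {coef[1] * 10.0:+.2f} K/decade, "
      f"solar = {coef[2] * 100:+.2f} K/100 SFU, "
      f"QBO = {coef[3]:+.3f} K/unit, ENSO = {coef[4]:+.2f} K/SOI")
```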

Keywords: trends, QBO, solar cycle, ENSO, ozone, temperature

Procedia PDF Downloads 410
672 Testing the Impact of the Nature of Services Offered on Travel Sites and Links on Traffic Generated: A Longitudinal Survey

Authors: Rania S. Hussein

Abstract:

Background: This study aims to trace the evolution of service provision by Egyptian travel sites and how these services change in terms of their level of sophistication over the ten-year period of the study. To the author’s best knowledge, this is the first longitudinal study that focuses on such an extended time frame. Additionally, the study attempts to determine the popularity of these websites through the number of links to them; links may be viewed as the online equivalent of a referral or word of mouth. Both popularity and the nature of the services provided by these websites are used to explain the traffic on these sites. In examining the nature of the services provided, the website itself is viewed as an overall service offering composed of different travel products and services. Method: This study uses content analysis in the form of a small-scale survey of 30 Egyptian travel agents’ websites to examine whether Egyptian travel websites are static or dynamic in terms of the services that they provide, and whether they provide simple or sophisticated travel services. To determine the level of sophistication of these travel sites, the nature and composition of the products and services offered by these sites were first examined. A framework adapted from Kotler’s (1997) 'five levels of a product' was used. The target group for this study consists of companies that do inbound tourism. Four rounds of data collection were conducted over a period of 10 years: two rounds in 2004 and two rounds in 2014. Data from the travel agents’ sites were collected over a two-week period in each of the four rounds. Besides collecting data on website features, data were also collected on the popularity of these websites through a software program called Alexa, which reports the traffic rank and number of links of each site. Regression analysis was used to test the effect of links and services offered on websites, as independent variables, on traffic, the dependent variable of this study. Findings: Results indicate that as companies moved from having simple websites with basic travel information to being more interactive, the number of visitors, as illustrated by traffic, and the popularity of those sites, as shown by the number of links, increased. Results also show that travel companies use the web much more for promotion than for distribution, since most travel agents use it basically for information provision. The results of this content analysis tap into an unexplored area and provide useful insights for marketers on how they can generate more traffic to their websites by developing distinctive content and by focusing on the visibility of their sites, thus enhancing their popularity, or the links to their sites.

Keywords: levels of a product, popularity, travel, website evolution

Procedia PDF Downloads 321
671 Structural and Biochemical Characterization of Red and Green Emitting Luciferase Enzymes

Authors: Wael M. Rabeh, Cesar Carrasco-Lopez, Juliana C. Ferreira, Pance Naumov

Abstract:

Bioluminescence, the emission of light from a biological process, is found in various living organisms including bacteria, fireflies, beetles, fungi and different marine organisms. Luciferase is an enzyme that catalyzes a two-step oxidation of luciferin in the presence of Mg2+ and ATP to produce oxyluciferin and release energy in the form of light. The luciferase assay is used in biological research and clinical applications for in vivo imaging, cell proliferation assays, and protein folding and secretion analysis. The luciferase enzyme consists of two domains, a large N-terminal domain (residues 1-436) that is connected to a small C-terminal domain (residues 440-544) by a flexible loop that functions as a hinge for opening and closing the active site. The two domains are separated by a large cleft housing the active site, which closes after binding the substrates, luciferin and ATP. Even though all insect luciferases catalyze the same chemical reaction and share 50% to 90% sequence homology and high structural similarity, they emit light of different colors, from green at 560 nm to red at 640 nm. Currently, the majority of structural and biochemical studies have been conducted on green-emitting firefly luciferases. To address the color emission mechanism, we expressed and purified two luciferase enzymes with blue-shifted green and red emission from the indigenous Brazilian species Amydetes fanestratus and Phrixothrix, respectively. The two enzymes naturally emit light of different colors, and they are an excellent system for studying the color-emission mechanism of luciferases, as the currently proposed mechanisms are based on mutagenesis studies. Using a vapor-diffusion method and a high-throughput approach, we crystallized both enzymes and solved their crystal structures, at 1.7 Å and 3.1 Å resolution respectively, using X-ray crystallography. The free enzyme adopted two open conformations in the crystallographic unit cell that are different from the previously characterized firefly luciferase. The blue-shifted green luciferase crystallized as a monomer, similar to other luciferases reported in the literature, while the red luciferase crystallized as an octamer and was also purified as an octamer in solution. The octamer conformation is the first of its kind for any insect luciferase and might be related to the red color emission. Structurally designed mutations confirmed the importance of the transition between the open and closed conformations in the fine-tuning of the color, and the characterization of other interesting mutants is underway.

Keywords: bioluminescence, enzymology, structural biology, x-ray crystallography

Procedia PDF Downloads 326
670 Separate Collection System of Recyclables and Biowaste Treatment and Utilization in Metropolitan Area Finland

Authors: Petri Kouvo, Aino Kainulainen, Kimmo Koivunen

Abstract:

The separate collection system for recyclable waste in the Helsinki region was ranked the second best among European capitals. The collection system includes paper, cardboard, glass, metals and biowaste. Residual waste is collected and used in energy production. The collection system, excluding paper, is managed by the Helsinki Region Environmental Services HSY, a public organization owned by four municipalities (Helsinki, Espoo, Kauniainen and Vantaa). Paper collection is handled through the producer responsibility scheme. The efficiency of the collection system in the Helsinki region relies on good coverage of door-to-door collection. All properties with 10 or more dwelling units are required to source-separate biowaste and cardboard; this covers about 75% of the population of the area. The obligation is extended to glass and metal in properties with 20 or more dwelling units. Other success factors include public awareness campaigns and a fee system that encourages recycling. As a result of waste management regulations for the source separation of recyclables and biowaste, a recycling rate of nearly 50 percent for household waste has been reached. For households and small and medium-sized enterprises, a fleet of five sorting stations is available. More than 50 percent of the waste received at sorting stations is utilized as material. The separate collection of plastic packaging in Finland began in 2016 within the producer responsibility scheme. HSY started supplementing the national bring-point system with door-to-door collection, and pilot operations began in spring 2016. The results of the plastic packaging pilot have been encouraging: by the end of 2016, over 3500 apartment buildings had joined the pilot, and more than 1800 tons of plastic packaging had been collected separately. In summer 2015, a novel partial flow digestion process combining digestion and tunnel composting was adopted for the management of source-separated household and commercial biowaste. The product gas from the digestion process is converted into heat and electricity in a piston engine and an organic Rankine cycle process with very high overall efficiency. This paper describes the collection system and discusses key success factors, main obstacles and lessons learned, as well as the partial flow process for biowaste management.

Keywords: biowaste, HSY, MSW, plastic packages, recycling, separate collection

Procedia PDF Downloads 217
669 Spatial Ecology of an Endangered Amphibian Litoria Raniformis within Modified Tasmanian Landscapes

Authors: Timothy Garvey, Don Driscoll

Abstract:

Within Tasmania, the growling grass frog (Litoria raniformis) has experienced a rapid contraction in distribution. This decline is primarily attributed to habitat loss through landscape modification and improved land drainage. Reductions in seasonal water sources have placed increasing importance on permanent water bodies for reproduction and foraging. Tasmanian agricultural and commercial forestry landscapes often feature small artificial ponds, utilized for watering livestock and fighting wildfires. Improved knowledge of how L. raniformis exploits these anthropogenic ponds is required for better conservation management. We implemented telemetric tracking to evaluate the spatial ecology of L. raniformis (n = 20) within agricultural and managed forestry sites, with tracking conducted periodically over the breeding season (November/December, January/February, March/April). Frogs were found to remain in close proximity to ponds throughout November/December, with individuals occupying vegetatively depauperate water bodies beginning to disperse by January/February. Dispersing individuals traversed exposed plantation understory and agricultural pastureland to enter patches of native scrubland. By March/April all individuals captured at minimally vegetated ponds had retreated to adjacent scrub corridors. Animals found in ponds featuring dense riparian vegetation were not recorded dispersing. No difference in behavior was recorded between the sexes. Rising temperatures coincided with increased movement of individuals towards native scrub refugia. The patterns of movement reported in this investigation emphasize the significant contribution of man-made water bodies to the conservation of L. raniformis within modified landscapes. The use of natural scrubland as cyclical retreats between breeding seasons also highlights the importance of the continued preservation of remnant vegetation corridors. Loss of artificial dams or buffering scrubland in heavily altered landscapes could see the breakdown of the greater L. raniformis meta-population, further threatening its regional persistence.

Keywords: habitat loss, modified landscapes, spatial ecology, telemetry

Procedia PDF Downloads 117
668 Analysis of Reduced Mechanisms for Premixed Combustion of Methane/Hydrogen/Propane/Air Flames in Geometrically Modified Combustor and Its Effects on Flame Properties

Authors: E. Salem

Abstract:

Combustion has long been used as a means of energy extraction. However, in recent years there has been a further increase in air pollution through pollutants such as nitrogen oxides, acids, etc. In order to address this problem, there is a need to reduce carbon and nitrogen oxides through lean burning, modified combustors and fuel dilution. A numerical investigation has been carried out to assess the effectiveness of several reduced mechanisms, in terms of computational time and accuracy, for the combustion of hydrocarbon/air mixtures, pure or diluted with hydrogen, in a micro combustor. The simulations were carried out using ANSYS Fluent 19.1. To validate the results, the PREMIX and CHEMKIN codes were used to calculate 1D premixed flames based on the temperature and composition of the burned and unburned gas mixtures. Numerical calculations were carried out for several hydrocarbons by changing the equivalence ratios and adding small amounts of hydrogen to the fuel blends, then analyzing the flammability limit and the reduction in NOx and CO emissions, and comparing the results with experimental data. By solving the conservation equations, several global reduced mechanisms (2-9-12) were obtained. These reduced mechanisms were simulated on a 2D cylindrical tube with dimensions of 40 cm in length and 2.5 cm in diameter. The mesh of the model included a suitably fine quad mesh within the first 7 cm of the tube and around the walls. After developing a proper boundary layer, several simulations were performed on hydrocarbon/air blends to visualize the flame characteristics, which were then compared with experimental data. Once the results were within an acceptable range, the geometry of the combustor was modified by changing the length and diameter, adding hydrogen by volume, and changing the equivalence ratios from lean to rich in the fuel blends, and the effects on flame temperature, shape and velocity and on the concentrations of radicals and emissions were observed. It was determined that the reduced mechanisms provided results within an acceptable range. The variation of the inlet velocity and the geometry of the tube led to an increase in temperature and CO2 emissions; the highest temperatures were obtained under lean conditions (equivalence ratio 0.5-0.9). The addition of hydrogen to the combustor fuel blends resulted in a reduction in CO and NOx emissions and an expansion of the flammability limit, under the same laminar flow conditions and with varying equivalence ratios. The production of NO is reduced because the combustion occurs in a leaner state, which helps address environmental problems.
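The equivalence-ratio bookkeeping for hydrogen-diluted hydrocarbon blends mentioned above reduces to simple stoichiometric arithmetic, illustrated in the Python sketch below; the blend composition and air amount are arbitrary examples, and the sketch is not part of the CFD setup.

```python
# Minimal sketch of equivalence-ratio arithmetic for hydrogen-diluted
# hydrocarbon blends (illustrative only, not the simulation setup).

# Moles of O2 needed per mole of fuel for complete combustion
O2_DEMAND = {"CH4": 2.0, "C3H8": 5.0, "H2": 0.5}
AIR_O2_FRACTION = 0.21  # mole fraction of O2 in air

def stoich_air(fuel_moles):
    """Moles of air required for stoichiometric combustion of a fuel blend."""
    o2 = sum(O2_DEMAND[f] * n for f, n in fuel_moles.items())
    return o2 / AIR_O2_FRACTION

def equivalence_ratio(fuel_moles, air_moles):
    """phi = (fuel/air)_actual / (fuel/air)_stoichiometric."""
    return stoich_air(fuel_moles) / air_moles

# Example: 90% CH4 + 10% H2 by mole, burned with 12 moles of air per mole of blend
blend = {"CH4": 0.9, "H2": 0.1}
phi = equivalence_ratio(blend, air_moles=12.0)
print(f"stoichiometric air = {stoich_air(blend):.2f} mol, phi = {phi:.2f}  (lean if < 1)")
```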

Keywords: combustor, equivalence-ratio, hydrogenation, premixed flames

Procedia PDF Downloads 114
667 Characteristics of the Rock Glacier Deposits in the Southern Carpathians, Romania

Authors: Petru Urdea

Abstract:

As a distinct part of the mountain system, the rock glacier system is a particular periglacial debris system. Being an open system, it is interconnected with other subsystems, such as the glacial, cliff, rocky slope and talus slope subsystems, which are sources of sediments. One characteristic is that, for long periods of time, it acts as a storage unit for debris and ice, and temporarily for snow and water. In the Southern Carpathians, 306 rock glaciers were identified. The vast majority of these rock glaciers, 74%, are talus rock glaciers, and 26% are debris rock glaciers. The area occupied by granites and granodiorites hosts 49% of all the rock glaciers, representing 61% of the area occupied by Southern Carpathian rock glaciers. This lithological dependence also leaves its mark on the specifics of the deposits, everything bearing the imprint of the particular way the rocks respond to physical weathering processes, all in a periglacial regime. In the domain of granites and granodiorites the blocks are large, of metric order, even 10 m³, whereas in the domain of metamorphic rocks only gneisses can yield similar sizes. Amphibolites, amphibolitic schists, micaschists, sericite-chlorite schists and phyllites crop out in much smaller blocks, of decimetric order, mostly in the form of slabs. In the case of rock glaciers made up of large blocks, with an open-work type structure, the density and volume of voids between the blocks are greater, while smaller debris generates more compact structures with fewer voids. All of this influences the thermal regime, which is associated with a certain type of seasonal air circulation and with the emergence of permafrost formation conditions. The rock glaciers are fed by rock falls, rock avalanches, debris flows and avalanches, so the structure is heterogeneous, which is also reflected in the detailed topography of the rock glaciers. This heterogeneity is also influenced by the spatial arrangement of the rock bodies in the supply area and, an element that cannot be omitted, by the behavior of the rocks during periglacial weathering. The production of small gelifracts leads to the filling of voids and the appearance of more compact structures, with effects on the creep process. In general, surface deposits are coarser while those at depth are finer, and their characteristics can be detected by applying geophysical methods. The electrical resistivity tomography (ERT) and ground-penetrating radar (GPR) investigations carried out in the Făgăraş, Retezat and Parâng Mountains, each with a different lithological specificity, allowed the identification of some differentiations, including the presence of permafrost bodies.

Keywords: rock glaciers deposits, structure, lithology, permafrost, Southern Carpathians, Romania

Procedia PDF Downloads 26
666 Journal Bearing with Controllable Radial Clearance, Design and Analysis

Authors: Majid Rashidi, Shahrbanoo Farkhondeh Biabnavi

Abstract:

The hydrodynamic instability phenomenon in a journal bearing may occur due to a reduction in the load carried by the journal bearing, an increase in the journal speed, a change in the lubricant viscosity, or a combination of these factors. Previous research and development work done to overcome the instability issue of journal bearings operating in the hydrodynamic lubrication regime can be categorized as follows: a) actively controlling the bearing sleeve by using a piezo actuator, b) inclusion of strategically located and shaped internal grooves within the inner surface of the bearing sleeve, c) actively controlling the bearing sleeve using an electromagnetic actuator, d) actively and externally pressurizing the lubricant within a journal bearing set, and e) incorporating tilting pads within the inner surface of the bearing sleeve that assume different equilibrium angular positions in response to changes in bearing design parameters such as speed and load. This work presents an innovative design concept for a 'smart journal bearing' set to operate in a stable hydrodynamic lubrication regime despite variations in bearing speed, load and lubricant viscosity. The proposed bearing design allows its radial clearance to be adjusted in an attempt to maintain stable bearing operation under conditions that may cause instability for a bearing with a fixed radial clearance. The design concept allows the radial clearance to be adjusted in small increments on the order of 0.00254 mm. This is achieved by axially moving two symmetric conical rigid cavities that are in close contact with the conically shaped outer shell of a sleeve bearing. The proposed work includes a 3D model of the bearing that depicts the structural interactions of the bearing components. The 3D model is employed to conduct finite element analyses to simulate the mechanical behavior of the bearing from a structural point of view. The concept of controlling the radial clearance, as presented in this work, is original and has not been proposed or discussed in previous research. A typical journal bearing was analyzed under a set of design parameters, namely r = 1.27 cm (journal radius), c = 0.0254 mm (radial clearance), L = 1.27 cm (bearing length), w = 445 N (bearing load), and μ = 0.028 Pa·s (lubricant viscosity). A shaft speed of 3600 rpm was considered, and the mass supported by the bearing, m, was set to 4.38 kg. The Sommerfeld number associated with the above bearing design parameters turns out to be S = 0.3. This combination resulted in stable bearing operation. Subsequently, the speed was postulated to increase from 3600 rpm to 7200 rpm; the bearing was found to be unstable at the increased speed. In order to regain stability, the radial clearance was increased from c = 0.0254 mm to 0.0358 mm. This change in radial clearance was shown to bring the bearing back to a stable operating condition.
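The Sommerfeld number quoted above can be recovered from the stated design parameters with a few lines of Python; the sketch below assumes the common definition S = (r/c)²·μN/P, with P the load over the projected bearing area, which reproduces the reported value of about 0.3, and it also evaluates S for the increased speed and widened clearance.

```python
import math

# Recomputing the Sommerfeld number from the design parameters stated above.
r = 0.0127        # journal radius, m (1.27 cm)
c = 0.0254e-3     # radial clearance, m (0.0254 mm)
L = 0.0127        # bearing length, m (1.27 cm)
W = 445.0         # bearing load, N
mu = 0.028        # lubricant viscosity, Pa.s
N = 3600 / 60.0   # shaft speed, rev/s

P = W / (L * 2 * r)                    # projected bearing pressure, Pa
S = (r / c) ** 2 * mu * N / P          # Sommerfeld number
print(f"S = {S:.2f}")                  # ~0.30, matching the value in the abstract

# Doubling the speed to 7200 rpm and widening the clearance to 0.0358 mm
# brings S back to roughly the original value:
S2 = (r / 0.0358e-3) ** 2 * mu * (7200 / 60.0) / P
print(f"S at 7200 rpm, c = 0.0358 mm: {S2:.2f}")
```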

Keywords: adjustable clearance, bearing, hydrodynamic, instability, journal

Procedia PDF Downloads 283
665 The Influences of Facies and Fine Kaolinite Formation Migration on Sandstone's Reservoir Quality, Sarir Formation, Sirt Basin Libya

Authors: Faraj M. Elkhatri

Abstract:

The spatial and temporal distribution of diagenetic alterations impacts the reservoir quality of the Sarir Formation (present-day burial depth of about 9000 feet). Depositional facies and diagenetic alterations are the main controls on the reservoir quality of the Sarir Formation, Sirt Basin, Libya; these depend on lithology and grain size as well as on the types of authigenic clay minerals and their distributions. The petrographic investigation of the study area, covering five sandstone wells, concentrated on the main rock components and the parameters that may impact the reservoirs. The main authigenic clay minerals are kaolinite and dickite, as confirmed by XRD analysis of the clay fraction. Kaolinite and dickite were extensively present in all wells in high amounts, and traces of detrital smectite and smaller amounts of illitized mud matrix were also tentatively identified in SEM images. Thin clay layers present as clay-grain coatings at some depths are interpreted as remains of dissolved clay matrix partly transformed into kaolinite adjacent to and towards the pore throats. This may also affect most of the pore throats of this sandstone, which are open and relatively clean, with some fine material formed in occluded pores. This material is identified by EDS analysis as collections of not only kaolinite booklets but also small disaggregated kaolinite platelets derived from the disaggregation of larger kaolinite booklets. These patches of kaolinite not only fill pores but also coat some of the surrounding framework grains. Quartz grains, often enlarged by authigenic quartz overgrowths, partially occlude pores and reduce porosity. Scanning electron microscopy (SEM) with energy-dispersive spectroscopy (EDS) was conducted on the post-test samples to examine any mud filtrate particles that may be present in the pore throats. Semi-qualitative elemental data on selected minerals observed during the SEM study were obtained through the use of the EDS unit. The samples showed mostly clean, open pore throats with limited occlusion by kaolinite. Very fine-grained elemental combinations (Si/Al/Na/Cl, Si/Al Ca/Cl/Ti, and Qtz/Ti) were identified and confirmed by EDS analysis. Overall, the fine-grained disaggregated material is identified as mainly kaolinite throughout the study area.

Keywords: pore throat, fine migration, formation damage, solids plugging, porosity loss

Procedia PDF Downloads 153
664 Annexing the Strength of Information and Communication Technology (ICT) for Real-time TB Reporting Using TB Situation Room (TSR) in Nigeria: Kano State Experience

Authors: Ibrahim Umar, Ashiru Rajab, Sumayya Chindo, Emmanuel Olashore

Abstract:

INTRODUCTION: Kano is the most populous state in Nigeria and one of the two states with the highest TB burden in the country. The state notifies an average of more than 8,000 TB cases quarterly and had the highest yearly notification of all the states in Nigeria from 2020 to 2022. The contribution of the state TB program to national TB notification varied between 9% and 10% quarterly from the first quarter of 2022 to the second quarter of 2023. The Kano State TB Situation Room is an innovative platform for timely data collection, collation, and analysis for informed decision-making in the health system. During the second National TB Testing Week (NTBTW) in 2023, the Kano TB program aimed at early TB detection, prevention, and treatment. The state TB Situation Room provided an avenue for coordination and surveillance through real-time data reporting, review, analysis, and use during the NTBTW. OBJECTIVES: To assess the role of an innovative information and communication technology platform for real-time TB reporting during the second National TB Testing Week in Nigeria, 2023. To showcase the NTBTW data cascade analysis using the TSR as an innovative ICT platform. METHODOLOGY: The state TB program deployed a real-time virtual dashboard for NTBTW reporting, analysis, and feedback. A data room team was set up to receive real-time data via a Google link. The data received were analyzed using the Power BI analytics tool, with a statistical significance level (alpha) of <0.05. RESULTS: At the end of the week-long activity, and using the real-time dashboard with on-site mentorship of the field workers, the state TB program screened a total of 52,054 people for TB out of 72,112 individuals eligible for screening (72% screening rate). A total of 9,910 presumptive TB clients were identified and evaluated for TB, leading to the diagnosis of 445 TB patients (about 4.5% yield from presumptives) and the placement of 435 TB patients on treatment (98% enrolment). CONCLUSION: The TB Situation Room (TSR) has been a great asset to the Kano State TB Control Program in meeting the growing demand for timely data reporting in TB and other global health responses. The use of real-time surveillance data during the 2023 NTBTW has in no small measure improved the TB response and feedback in Kano State. Scaling up this intervention to other disease areas, states, and nations is a positive step towards global TB eradication.
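The reported care cascade follows from simple arithmetic on the quoted counts; the short sketch below (illustrative only, not the Power BI workflow actually used by the program) reproduces the indicators.

```python
# Illustrative cascade calculation from the quoted NTBTW counts
# (not the actual Power BI workflow used by the Kano TB program).
eligible = 72_112
screened = 52_054
presumptive = 9_910
diagnosed = 445
on_treatment = 435

print(f"Screening rate: {screened / eligible:.0%}")               # ~72%
print(f"Yield from presumptives: {diagnosed / presumptive:.1%}")  # ~4.5%
print(f"Enrolment on treatment: {on_treatment / diagnosed:.0%}")  # ~98%
```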

Keywords: tuberculosis (tb), national tb testing week (ntbtw), tb situation room (tsr), information communication technology (ict)

Procedia PDF Downloads 71
663 Applicability and Reusability of Fly Ash and Base Treated Fly Ash for Adsorption of Catechol from Aqueous Solution: Equilibrium, Kinetics, Thermodynamics and Modeling

Authors: S. Agarwal, A. Rani

Abstract:

Catechol is a natural polyphenolic compound that widely exists in higher plants such as teas, vegetables, fruits, tobaccos, and some traditional Chinese medicines. Fly ash-based zeolites are capable of adsorbing a wide range of pollutants, but zeolite synthesis is time-consuming and requires technical setups in industry, and the market cost of zeolites is quite high, restricting their use by small-scale industries for the removal of phenolic compounds. The present research proposes a simple method of alkaline treatment of FA to produce an effective adsorbent for catechol removal from wastewater. The effects of experimental parameters such as pH, temperature, initial concentration, and adsorbent dose on the removal of catechol were studied in a batch reactor. For this purpose, the adsorbent materials were mixed with aqueous solutions containing catechol at initial concentrations of 50-200 mg/L and shaken continuously in a thermostatic orbital incubator shaker at 30 ± 0.1 °C for 24 h. Samples were withdrawn from the shaker at predetermined time intervals and separated by centrifugation (centrifuge machine MBL-20) at 2000 rpm for 4 min to yield a clear supernatant for analysis of the equilibrium concentrations of the solutes. The concentrations were measured with a double-beam UV/Visible spectrophotometer (model Spectrscan UV 2600/02) at a wavelength of 275 nm for catechol. In the present study, the use of a low-cost adsorbent (BTFA) derived from coal fly ash (FA) has been investigated as a substitute for expensive methods for the sequestration of catechol. The FA and BTFA adsorbents were well characterized by XRF, FE-SEM with EDX, FTIR, and surface area and porosity measurements, which confirmed the chemical constituents, functional groups, and morphology of the adsorbents. The catechol adsorption capacities of the synthesized BTFA and the native material were determined. Adsorption increased slightly with an increase in pH. The monolayer adsorption capacities of FA and BTFA for catechol were 100 mg g⁻¹ and 333.33 mg g⁻¹, respectively, and maximum adsorption occurred within 60 minutes for both adsorbents used in this test. The equilibrium data are best fitted by the Freundlich isotherm, as determined on the basis of error analysis (RMSE, SSE, and χ²). Adsorption was found to be spontaneous and exothermic on the basis of thermodynamic parameters (ΔG°, ΔS°, and ΔH°). The pseudo-second-order kinetic model better fitted the data for both FA and BTFA. BTFA showed a larger adsorption capacity, higher separation selectivity, and better recyclability than FA. These findings indicate that BTFA could be employed as an effective and inexpensive adsorbent for the removal of catechol from wastewater.
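As a minimal sketch of how equilibrium data can be screened against the Freundlich model, which the study reports as the best-fitting isotherm, the routine below performs the standard linearized fit ln qe = ln KF + (1/n) ln Ce; the numerical arrays are placeholders for illustration and are not the measured values from this work.

```python
import numpy as np

def fit_freundlich(Ce, qe):
    """Linearized Freundlich fit: ln(qe) = ln(KF) + (1/n) * ln(Ce).
    Returns (KF, n)."""
    slope, intercept = np.polyfit(np.log(Ce), np.log(qe), 1)
    return np.exp(intercept), 1.0 / slope

# Placeholder equilibrium data (Ce in mg/L, qe in mg/g), for illustration only;
# these are not the measured values from this study.
Ce = np.array([5.0, 12.0, 30.0, 55.0])
qe = np.array([40.0, 70.0, 120.0, 170.0])

KF, n = fit_freundlich(Ce, qe)
print(f"KF = {KF:.1f} (mg/g)(L/mg)^(1/n), n = {n:.2f}")
```

The error metrics quoted in the abstract (RMSE, SSE, χ²) can then be computed between the measured qe and the model prediction KF·Ce^(1/n) to compare candidate isotherms.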

Keywords: catechol, fly ash, isotherms, kinetics, thermodynamic parameters

Procedia PDF Downloads 125
662 Bringing the World to Net Zero Carbon Dioxide by Sequestering Biomass Carbon

Authors: Jeffrey A. Amelse

Abstract:

Many corporations aspire to become Net Zero Carbon Dioxide by 2035-2050. This paper examines what it will take to achieve those goals. Achieving Net Zero CO₂ requires an understanding of where energy is produced and consumed, the magnitude of CO₂ generation, and a proper understanding of the Carbon Cycle. The latter leads to the distinction between CO₂ and biomass carbon sequestration. Short reviews are provided of technologies previously proposed for reducing CO₂ emissions from fossil fuels or substituting renewable energy, to focus on their limitations and to show that none offers a complete solution. Of these, CO₂ sequestration is poised to have the largest impact; it will simply cost money, scale-up is a huge challenge, and it will not be a complete solution. CO₂ sequestration is still at the demonstration and semi-commercial scale. Transportation accounts for only about 30% of total U.S. energy demand, and renewables account for only a small fraction of that sector. Yet bioethanol production consumes 40% of the U.S. corn crop, and biodiesel consumes 30% of U.S. soybeans. It is unrealistic to believe that biofuels can completely displace fossil fuels in the transportation market. Bioethanol is traced through its Carbon Cycle and shown to be both energy inefficient and an inefficient use of biomass carbon. Both biofuels and CO₂ sequestration reduce future CO₂ emissions from continued use of fossil fuels; they will not remove CO₂ already in the atmosphere. Planting more trees has been proposed as a way to reduce atmospheric CO₂, but trees are a temporary solution: when they complete their Carbon Cycle, they die and release their carbon as CO₂ to the atmosphere. Thus, planting more trees is just 'kicking the can down the road.' The only way to permanently remove CO₂ already in the atmosphere is to break the Carbon Cycle by growing biomass from atmospheric CO₂ and sequestering the biomass carbon. Sequestering tree leaves is proposed as a solution. Unlike wood, leaves have a short Carbon Cycle time constant; they renew and decompose every year. Allometric equations from the USDA indicate that, theoretically, sequestering only a fraction of the world's tree leaves can get the world to Net Zero CO₂ without disturbing the underlying forests. How can tree leaves be permanently sequestered? It may be as simple as rethinking how landfills are designed, to discourage instead of encourage decomposition. In traditional landfills, municipal waste undergoes rapid initial aerobic decomposition to CO₂, followed by slow anaerobic decomposition to methane and CO₂; the latter can take hundreds to thousands of years. The first step in anaerobic decomposition is hydrolysis of cellulose to release sugars, which those who have worked on cellulosic ethanol know is challenging for a number of reasons. The key to permanent leaf sequestration may be keeping the landfills dry and exploiting known inhibitors of anaerobic bacteria.

Keywords: carbon dioxide, net zero, sequestration, biomass, leaves

Procedia PDF Downloads 128
661 Correlation of Unsuited and Suited 5ᵗʰ Female Hybrid III Anthropometric Test Device Model under Multi-Axial Simulated Orion Abort and Landing Conditions

Authors: Christian J. Kennett, Mark A. Baldwin

Abstract:

As several companies are working towards returning American astronauts to space on US-made spacecraft, NASA developed a human flight certification-by-test-and-analysis approach due to the cost-prohibitive nature of extensive testing. This process relies heavily on the quality of analytical models to accurately predict crew injury potential specific to each spacecraft and under dynamic environments not tested. As the prime contractor on the Orion spacecraft, Lockheed Martin was tasked with quantifying the correlation of analytical anthropometric test devices (ATDs), also known as crash test dummies, against test measurements under representative impact conditions. Multiple dynamic impact sled tests were conducted to characterize Hybrid III 5th ATD lumbar, head, and neck responses with and without a modified shuttle-era advanced crew escape suit (ACES) under simulated Orion landing and abort conditions. Each ATD was restrained via a 5-point harness in a mockup Orion seat fixed to a dynamic impact sled at the Wright Patterson Air Force Base (WPAFB) Biodynamics Laboratory in the horizontal impact accelerator (HIA). ATDs were subjected to multiple impact magnitudes, half-sine pulse rise times, and XZ 'eyeballs out/down' or Z-axis 'eyeballs down' orientations for landing, or an X-axis 'eyeballs in' orientation for abort. Several helmet constraint devices were evaluated during suited testing. Unique finite element models (FEMs) were developed of the unsuited and suited sled test configurations using an analytical 5th ATD model developed by LSTC (Livermore, CA) and deformable representations of the seat, suit, helmet constraint countermeasures, and body restraints. Explicit FE analyses were conducted using the non-linear solver LS-DYNA. Head linear and rotational acceleration, head rotational velocity, upper neck force and moment, and lumbar force time histories were compared between test and analysis using the enhanced error assessment of response time histories (EEARTH) composite score index. The EEARTH rating, paired with the correlation and analysis (CORA) corridor rating, provided a composite ISO score that was used to assess model correlation accuracy. NASA occupant protection subject matter experts established an ISO score of 0.5 or greater as the minimum expectation for correlating analytical and experimental ATD responses. Unsuited 5th ATD head X, Z, and resultant linear accelerations, head Y rotational accelerations and velocities, neck X and Z forces, and lumbar Z forces all showed consistent ISO scores above 0.5 in the XZ impact orientation, regardless of peak g-level or rise time. Upper neck Y moments were near or above the 0.5 score for most of the XZ cases. Similar trends were found in the XZ and Z-axis suited tests despite the addition of several different countermeasures for restraining the helmet. For the X-axis 'eyeballs in' loading direction, only resultant head linear acceleration and lumbar Z-axis force produced ISO scores above 0.5, whether unsuited or suited. The analytical LSTC 5th ATD model showed good correlation across multiple head, neck, and lumbar responses in both the unsuited and suited configurations when loaded in the XZ 'eyeballs out/down' direction. Upper neck moments were consistently the most difficult to predict, regardless of impact direction or test configuration.
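To illustrate the general idea of scoring agreement between a measured and a simulated time history on a 0-1 scale, the sketch below uses a deliberately simple phase-times-magnitude metric; it is only a stand-in for illustration, since the EEARTH and CORA ratings referenced above use considerably more elaborate formulations.

```python
import numpy as np

def simple_agreement_score(test, sim):
    """Crude 0-1 agreement score between two equally sampled time histories:
    peak normalized cross-correlation (shape/phase) multiplied by
    1 - relative RMS error (magnitude), floored at zero. Illustrative only;
    not the EEARTH or CORA formulation."""
    test = np.asarray(test, dtype=float)
    sim = np.asarray(sim, dtype=float)
    xcorr = np.correlate(test - test.mean(), sim - sim.mean(), mode="full")
    denom = np.sqrt(np.sum((test - test.mean())**2) * np.sum((sim - sim.mean())**2))
    phase = xcorr.max() / denom if denom > 0 else 0.0
    magnitude = max(0.0, 1.0 - np.linalg.norm(test - sim) / np.linalg.norm(test))
    return phase * magnitude

# Synthetic example: a simulated channel that slightly lags and under-predicts
# the test signal (made-up signals for demonstration, not test data).
t = np.linspace(0.0, 0.1, 500)
test = 50.0 * np.sin(2 * np.pi * 20 * t) * np.exp(-30 * t)
sim = 42.0 * np.sin(2 * np.pi * 20 * (t - 0.002)) * np.exp(-30 * t)
print(f"score = {simple_agreement_score(test, sim):.2f}")  # compared against a 0.5-style threshold
```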

Keywords: impact biomechanics, manned spaceflight, model correlation, multi-axial loading

Procedia PDF Downloads 114
660 Radar Cross Section Modelling of Lossy Dielectrics

Authors: Ciara Pienaar, J. W. Odendaal, J. Joubert, J. C. Smit

Abstract:

The radar cross section (RCS) of dielectric objects plays an important role in many applications, such as low-observability technology development, drone detection and monitoring, and coastal surveillance. Various materials are used to construct the targets of interest, such as metal, wood, composite materials, radar absorbent materials, and other dielectrics. Since simulated datasets are increasingly being used to supplement in-field measurements, as simulation is more cost-effective and a larger variety of targets can be simulated, it is important to have a high level of confidence in the predicted results. Confidence can be attained through validation. Various computational electromagnetic (CEM) methods are capable of predicting the RCS of dielectric targets. This study extends previous studies by validating full-wave and asymptotic RCS simulations of dielectric targets against measured data. The paper provides measured RCS data for a number of canonical dielectric targets exhibiting different material properties. As stated previously, these measurements are used to validate numerous CEM methods. The dielectric properties are accurately characterized to reduce the uncertainties in the simulations. Finally, an analysis of the sensitivity of oblique and normal incidence scattering predictions to material characteristics is also presented. In this paper, the ability of several CEM methods, including the method of moments (MoM) and physical optics (PO), to calculate the RCS of dielectrics was validated with measured data. A few dielectrics exhibiting different material properties were selected, and several canonical targets, such as flat plates and cylinders, were manufactured. The RCS of these dielectric targets was measured in a compact range at the University of Pretoria, South Africa, over a frequency range of 2 to 18 GHz and a 360° azimuth angle sweep. This study also investigated the effect of slight variations in the material properties on the calculated RCS results, by varying the material properties within a realistic tolerance range and comparing the calculated RCS results. Interesting measured and simulated results have been obtained. Large discrepancies were observed between the different methods as well as the measured data. It was also observed that the accuracy of the RCS data of the dielectrics can be frequency and angle dependent. The simulated RCS for some of these materials also exhibits high sensitivity to variations in the material properties. Comparison graphs between the measured and simulated RCS datasets will be presented, and the validation thereof will be discussed. Finally, the effect that small tolerances in the material properties have on the calculated RCS results will be shown, and the importance of accurate dielectric material properties for validation purposes will be discussed.
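As a minimal sketch of the kind of sensitivity check described above, the snippet below evaluates the normal-incidence RCS of an electrically large flat dielectric plate using the physical-optics flat-plate term scaled by a half-space Fresnel reflection coefficient, and sweeps the relative permittivity over a small tolerance. The plate size, frequency, permittivity, and tolerance are assumed illustrative values, not the measured materials or targets from this study, and slab-thickness resonances and losses are ignored.

```python
import numpy as np

def plate_rcs_dbsm(freq_hz, area_m2, eps_r):
    """Approximate normal-incidence RCS (dBsm) of a large flat dielectric plate:
    PO flat-plate term 4*pi*A^2/lambda^2 scaled by the half-space Fresnel
    reflection |Gamma|^2. Thickness resonances and losses are ignored."""
    lam = 3.0e8 / freq_hz
    gamma = (1.0 - np.sqrt(eps_r)) / (1.0 + np.sqrt(eps_r))
    sigma = 4.0 * np.pi * area_m2**2 / lam**2 * np.abs(gamma)**2
    return 10.0 * np.log10(sigma)

# Illustrative +/-5% tolerance sweep on an assumed eps_r = 4.0 for a
# 0.2 m x 0.2 m plate at 10 GHz (assumed values, not the study's materials).
for eps in (3.8, 4.0, 4.2):
    print(f"eps_r = {eps}: {plate_rcs_dbsm(10e9, 0.2 * 0.2, eps):.2f} dBsm")
```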

Keywords: asymptotic, CEM, dielectric scattering, full-wave, measurements, radar cross section, validation

Procedia PDF Downloads 240
659 Evidence of Social Media Addiction and Problematic Internet Use Among High School and University Students in Cyprus

Authors: Costas Christodoulides

Abstract:

Excessive use of social networking sites (SNSs) and the Internet by high school pupils and university students can have consequences similar to those observed in substance- or gambling-related addictions and can negatively influence individual well-being, understood as the self-assessments that people make about their lives and experiences. The present study examined, for the first time in Cyprus, the levels of problematic use of social media and the Internet among Cypriot pupils and students, aiming to contribute to the discussion about the need for a more conclusive policy framework on the island. The Bergen Social Media Addiction Scale (BSMAS) and the Generalized Problematic Internet Use Scale 2 (GPIUS-2) were adapted into a Cypriot version and, along with a sociodemographic questionnaire, administered to a sample of 1,059 young persons in order to assess the risk of social media addiction and of problematic Internet use, respectively. The sample consisted of 59% females, aged 15 to 35 (M = 18.9 years, SD = 3.20); 465 were high school students and 594 were university students. Of the 1,059 respondents from 4 high schools and 5 universities (HEIs) in Cyprus, 8.3% had BSMAS scores suggestive of addiction. Approximately a quarter of the sample (24%) demonstrated GPIUS-2 scores suggestive of a high risk of problematic Internet use. Notably, differences appear to exist across gender, with the proportion of females reaching levels of social media addiction (11.1%) more than twice that of males (4.3%). Also, female high school students appear to be the most vulnerable to problematic Internet use (28%). Some 26% of the sample often or very often used SNSs to forget personal problems. The results of this study show that half of those surveyed used the Internet to feel better when they were upset or to escape the isolation they felt; among pupils and female university students, 60% agreed with the relevant statements. In conclusion, this study suggests that SNS addiction levels among pupils and students in Cyprus ought to be an important public health concern, and the prevalence of problematic Internet use identified among the same population is equally, if not more, alarming. These results confirm international trends reported in scholarly research while also suggesting that particular categories, such as high school pupils and young females, may be more exposed to problematic SNS and Internet use. Preventive strategies need first to acknowledge the problem in order to then formulate an effective strategy for prevention and intervention. For the relevant authorities, it is of primary importance to 'exploit' the fact that high schools and universities can be seen as small communities and units potentially available for forging alliances for healthy social media and Internet use.

Keywords: problematic internet use, social media addiction, social networking sites, well-being

Procedia PDF Downloads 183
658 Communication Skills Training in Continuing Nursing Education: Enabling Nurses to Improve Competency and Performance in Communication

Authors: Marzieh Moattari Mitra Abbasi, Masoud Mousavinasab, Poorahmad

Abstract:

Background: Nurses in their daily practice need to communicate with patients and their families as well as with members of the health professional team. Effective communication contributes to patient satisfaction, which is a fundamental outcome of nursing practice. There is some evidence of patients' dissatisfaction with nurses' performance in the communication process. Therefore, improving nurses' communication skills is a necessity for nursing scholars and nursing administrators. Objective: The aim of the present study was to evaluate the effect of a 2-day workshop on nurses' competencies and performance in communication in a central hospital located in the south of Iran. Materials and Method: This is a randomized controlled trial comprising a convenience sample of 70 eligible nurses working in a central hospital. They were randomly divided into experimental and control groups. Nurses' competencies were measured by an Objective Structured Clinical Examination (OSCE), and their performance was measured by asking eligible patients hospitalized in the nurses' work setting during a one-month period to evaluate the nurses' communication skills before and 2 months after the intervention. The experimental group participated in a 2-day workshop on communication skills. The content of this workshop included the importance of communication (verbal and non-verbal) and basic communication skills such as initiating communication, active listening, and questioning techniques. Other subjects were patient teaching, problem-solving and decision-making, cross-cultural communication, and breaking bad news. Appropriate teaching strategies such as brief didactic sessions, small-group discussion, and reflection were applied to enhance participants' learning. The data were analyzed using SPSS 16. Result: A significant between-group difference was found in nurses' communication skills competencies and performance at posttest. The mean scores of the experimental group were higher than those of the control group for the total OSCE score as well as for all OSCE stations (p < 0.003). Overall posttest mean scores of patient satisfaction with nurses' communication skills, and all of its four dimensions, differed significantly between the two study groups (p < 0.001). Conclusion: This study shows that educating nurses in communication skills improves their competencies and performance. Measurement of nurses' communication skills, as a central component of an efficient nurse-patient relationship, by valid and reliable methods of evaluation is recommended. It is also necessary to integrate the teaching of communication skills into continuing nursing education programs. Trial Registration Number: IRCT201204042621N11

Keywords: communication skills, simulation, performance, competency, objective structure, clinical evaluation

Procedia PDF Downloads 218
657 Solutions of Thickening the Sludge from the Wastewater Treatment by a Rotor with Bars

Authors: Victorita Radulescu

Abstract:

Introduction: In the second stage, sewage treatment plants are formed by tanks whose main purpose is to form suspensions with the highest possible solid concentration. The paper presents a solution for rapidly concentrating the slurry and sludge, with the main purpose of minimizing the size of the tanks as much as possible. The solution is based on a rotor with bars, tested in two different areas of industrial activity: remediation of wastewater from the oil industry and, in the last year, the mining industry. Basic Methods: A thickening system with vertical bars was designed, built, and tested; it reduces the sludge moisture content from 94% to 87%. The design was based on the hypothesis that the streamlines of the vortices detached from the rotor with vertical bars accelerate, under certain conditions, the sludge thickening. The sludge is displaced towards the lateral sides and, in time, becomes sediment. The vertical-axis vortices formed in the viscous fluid, under the action of lift, drag, weight, and inertia forces, contribute to a rapid aggregation of the particles, thus accelerating the sludge concentration. An interdependence appears between the Reynolds number of the vortex flow induced by the vertical bars and the extent of the hydraulic compaction phenomenon resulting from the accelerated sedimentation process; therefore, the rotor's dimensions are designed according to the physico-chemical characteristics of the resulting sludge. Major Findings/Results: Based on the experimental measurements, a numerical simulation of the hydraulic rotor was performed so as to ensure the necessary vortices. The experimental measurements were performed to determine the optimal height and density of the bars for the sludge thickening system, so as to keep the tank dimensions as small as possible. The thickening/settling time was reduced by 24% compared to conventionally used systems. At present, thickeners are intended to reduce the intermediate stage of water treatment, which uses primary and secondary settling; but these stages take quite a long time, on the order of 10-15 hours. With this system, there are no intermediate steps; the thickening occurs automatically once the vortices are created. Conclusions: The experimental tests were carried out in the wastewater treatment plant of the oil refinery at Brazi, near the city of Ploiesti. The results prove the system's efficiency in reducing the time needed to compact the sludge and the lower humidity of the evacuated sediments. The use of this equipment has now been extended and is being tested in the mining industry, with significant results, at the Lupeni mine in the Jiu Valley.
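Since the bar-induced vortex shedding is governed by the Reynolds number of the flow past the bars, the tiny sketch below evaluates Re = ρUD/μ at the rotor tip; every numerical value in it is an illustrative assumption, not a parameter reported for the tested rotor.

```python
import math

def bar_reynolds(rho, rotor_rpm, radius_m, bar_diameter_m, mu):
    """Reynolds number for flow past a vertical bar moving at the local
    tangential speed of the rotor: Re = rho * U * D / mu."""
    tip_speed = 2.0 * math.pi * rotor_rpm / 60.0 * radius_m
    return rho * tip_speed * bar_diameter_m / mu

# Example with assumed, water-like sludge properties (rho ~ 1050 kg/m^3,
# mu ~ 5e-3 Pa*s) and an assumed 20 r.p.m. rotor of 0.5 m radius with 10 mm bars.
print(bar_reynolds(1050.0, 20.0, 0.5, 0.01, 5e-3))  # ~2.2e3, vortex-shedding regime
```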

Keywords: experimental tests, hydrodynamic modeling, rotor efficiency, wastewater treatment

Procedia PDF Downloads 118
656 Sustainability from Ecocity to Ecocampus: An Exploratory Study on Spanish Universities' Water Management

Authors: Leyla A. Sandoval Hamón, Fernando Casani

Abstract:

Sustainability has been integrated into cities' agendas due to the impact that cities generate. The dimensions of sustainability most widely taken as a reference are economic, social, and environmental. Thus, management decisions in sustainable cities seek a balance between these dimensions in order to provide environmentally friendly alternatives. In this context, urban models that manage water consumption, energy consumption, and waste production, among others, in harmony with the environment are known as Ecocities. A similar model, but on a smaller scale, is the 'Ecocampus', developed in universities (considered 'small cities' due to their complex structure). Thus, sustainable practices are being implemented in the management of university campus activities, following different relevant lines of work. Universities have a strategic role in society, and their activities can strengthen policies, strategies, and measures of sustainability, both internal and external to the organization. Because of their mission in knowledge creation and transfer, these institutions can promote and disseminate more advanced activities in sustainability. Replicating this model also implies challenges in the sustainable management of water, energy, waste, and transportation, among others, on campus. The challenge this paper focuses on is water management, taking into account that universities consume large amounts of this resource. The purpose of this paper is to analyze the sustainability experience, with emphasis on water management, of two different campuses belonging to two different Spanish universities: one urban campus in a historic city and one suburban campus on the outskirts of a large city. Both universities are in the top hundred of international rankings of sustainable universities. The methodology is qualitative, based on in-depth interviews and focus-group discussions with administrative and academic staff of the 'Ecocampus' offices, the organizational units for sustainability management, at the two Spanish universities. The hypotheses indicate that sustainable water management policies work best on campuses without large green spaces and where the buildings have been built or rebuilt in a modern style. The sustainability efforts of the universities are independent of the kind of campus (urban or suburban), but an important aspect to improve is the degree of awareness of the university community about water scarcity. In general, the paper suggests that higher education institutions adapt their sustainability policies depending on the location and features of the campus and their engagement with water conservation. Many Spanish universities have proposed sustainability policies, good practices, and measures, and several Ecocampus offices or centers have been founded. The originality of this study lies in learning from the different experiences of universities' sustainability policies.

Keywords: ecocampus, ecocity, sustainability, water management

Procedia PDF Downloads 221
655 Study of Ion Density Distribution and Sheath Thickness in Warm Electronegative Plasma

Authors: Rajat Dhawan, Hitendra K. Malik

Abstract:

Electronegative plasmas, comprising electrons, positive ions, and negative ions, are advantageous for their expanding industrial applications. In plasma cleaning, plasma etching, and plasma deposition processes, electronegative plasmas are preferred because a relatively lower potential develops on the surface of the material under investigation. Also, the presence of negative ions avoids irregularities in the etched shapes and enhances material processing during fabrication. Understanding the interaction of a metallic conducting surface with the plasma is therefore essential for these applications. A metallic conducting probe immersed in a plasma results in the formation of a thin layer of charged species around the probe, called a sheath. The density of the ions embedded on the surface of the material and the sheath thickness are the important parameters for the surface-plasma interaction. The sheath thickness indicates the extent of the plasma region affected by the conducting surface/probe. Knowledge of the density of ions in the sheath region is advantageous in plasma nitriding, and their temperature is equally important as it strongly influences the thickness of the modified layer during surface-plasma interaction. In the present work, we considered a negatively biased metallic probe immersed in a warm electronegative plasma. For this system, we adopted the continuity and momentum transfer equations for both the positive and negative ions, whereas the electrons are described by a Boltzmann distribution; finally, we use Poisson's equation. Here, we assumed spherical geometry for a small probe radius. Poisson's equation, together with the continuity and momentum transfer equations and proper boundary conditions, reveals the behaviour of the potential surrounding the conducting metallic probe; in turn, it provides the density profiles of the charged species and, most importantly, the thickness of the sheath. All calculations are done keeping in mind the well-known Bohm sheath criterion. We found that the positive ion density decreases with an increase in positive ion temperature, whereas it increases with a higher temperature of the negative ions. The positive ion density decreases as we move away from the center of the probe and is found to show a discontinuity at a particular distance from the center of the probe. The distance where the discontinuity occurs is designated as the sheath edge, i.e., the point where the sheath ends. These results are beneficial for industrial applications, as the density of ions embedded on a material surface is strongly affected by the temperature of the plasma species, which in turn has a drastic influence on surface properties such as hardness and corrosion resistance.
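To illustrate how the coupled fluid and Poisson equations yield a sheath profile and a thickness estimate, the sketch below integrates a deliberately simplified planar model with Boltzmann electrons, Boltzmann negative ions, and cold positive ions; the parameter values are assumptions for demonstration, and this is not the warm-ion, spherical-geometry formulation solved in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simplified planar electronegative sheath: Boltzmann electrons and negative
# ions, cold positive ions. phi = e*V/(k*Te) <= 0; lengths in Debye lengths
# based on Te and the edge ion density. All parameter values are assumed.
alpha = 0.5       # negative-ion to electron density ratio at the sheath edge
gamma = 15.0      # Te / T(negative ion) temperature ratio
M0 = 1.2          # ion Mach number at the sheath edge (above the generalized Bohm value)
phi_wall = -10.0  # normalized wall (probe) potential

def net_negative_charge(phi):
    """(n_e + n_n - n_i) normalized to the edge ion density."""
    ne = np.exp(phi) / (1.0 + alpha)
    nn = alpha * np.exp(gamma * phi) / (1.0 + alpha)
    ni = M0 / np.sqrt(M0**2 - 2.0 * phi)   # cold-ion energy conservation
    return ne + nn - ni

def rhs(xi, y):
    phi, dphi = y
    return [dphi, net_negative_charge(phi)]   # Poisson: phi'' = ne + nn - ni

hit_wall = lambda xi, y: y[0] - phi_wall
hit_wall.terminal = True

# Start just inside the sheath edge with a tiny potential perturbation.
sol = solve_ivp(rhs, [0.0, 200.0], [-1e-3, -1e-3], events=hit_wall, max_step=0.05)
print(f"sheath thickness ~ {sol.t_events[0][0]:.1f} Debye lengths")
```

Varying alpha and gamma in this toy model gives a feel for how the negative-ion population and the species temperatures shift the space-charge balance and hence the sheath extent.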

Keywords: electronegative plasmas, plasma surface interaction positive ion density, sheath thickness

Procedia PDF Downloads 133
654 A Narrative Inquiry of Identity Formation of Chinese Fashion Designers

Authors: Lily Ye

Abstract:

The contemporary fashion industry has witnessed the global rise of Chinese fashion designers, and China plays an increasingly important role in this sector globally. One of the key contemporary debates concerns the conception of Chinese fashion. A close look at previous discussions of Chinese fashion reveals that most of them are explored through the lens of cultural knowledge and assumptions, using dichotomous models of East and West. The results of these studies generate an essentialist and orientalist notion of Chinoiserie and Chinese fashion, which sees individual designers from China as undifferentiated collective members marked by a unique and fixed set of cultural scripts. This study challenges this essentialist conceptualization and brings fresh insights to the discussion of Chinese fashion identity against the backdrop of globalisation. Departing from a culturalist approach to researching Chinese fashion, this paper presents an alternative position that addresses the research agenda through the mobilisation of Giddens' (1991) theory of reflexive identity formation, privileging individuals' agency and reflexivity. This approach to the discussion of identity formation not only challenges the traditional view of identity as the distinctive and essential characteristics belonging to any given individual or shared by all members of a particular social category or group, but also highlights fashion designers' strategic agency and their role as fashion activists. This study draws evidence from a textual analysis of the published stories of a group of established Chinese designers such as Guo Pei, Huishan Zhang, Masha Ma, Uma Wang, and Ma Ke. In line with Giddens' concept of the 'reflexive project of the self', this study uses a narrative methodology. Narratives are verbal accounts or stories relating to the experiences of Chinese fashion designers. This approach offers the fashion designers a chance to 'speak' for themselves and show the depths and complexities of their experiences. It also emphasises the nuances of identity formation in fashion designers, whose experiences cannot be captured in neat typologies. Thematic analysis (Braun and Clarke, 2006) is adopted to identify and investigate common themes across the whole dataset. At the centre of the analysis is individuals' self-articulation of their perceptions, experiences, and themselves in relation to culture, fashion, and identity. The findings indicate that identity is constructed around anchors such as agency, cultural hybridity, reflexivity, and sustainability rather than traditional collective categories such as culture and ethnicity. Thus, the old East-West dichotomy is broken down, and essentialised social categories are challenged by the multiplicity and fragmentation of the self and the cultural hybridity created within designers' 'small narratives'.

Keywords: Chinoiserie, fashion identity, fashion activism, narrative inquiry

Procedia PDF Downloads 293