Search results for: number of order
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21923

2273 A Complex Network Approach to Structural Inequality of Educational Deprivation

Authors: Harvey Sanchez-Restrepo, Jorge Louca

Abstract:

Equity and education are a major focus of government policies around the world due to their relevance for addressing the sustainable development goals launched by UNESCO. In this research, we developed a primary analysis of a data set of more than one hundred educational and non-educational factors associated with learning, coming from a census-based large-scale assessment carried out in Ecuador on 1,038,328 students, their families, teachers, and school directors throughout 2014-2018. Each participating student was assessed by a standardized computer-based test. Learning outcomes were calibrated through item response theory with a two-parameter logistic model to obtain raw scores that were re-scaled and synthesized into a learning index (LI). Our objective was to develop a network for modelling educational deprivation and to analyze the structure of inequality gaps, as well as their relationship with socioeconomic status, school financing, and students' ethnicity. Results from the model show that 348,270 students did not develop the minimum skills (prevalence rate = 0.215) and that Afro-Ecuadorian, Montuvio and Indigenous students exhibited the highest prevalence, with 0.312, 0.278 and 0.226, respectively. Regarding the socioeconomic status (SES) of students, modularity class shows clearly that the system is out of equilibrium: the first decile (the poorest) exhibits a prevalence rate of 0.386, while the rate for decile ten (the richest) is 0.080, showing an intense negative relationship between learning and SES given by R = –0.58 (p < 0.001). Another interesting and unexpected result is the average weighted degree (426.9) for both private and public schools attended by Afro-Ecuadorian students, the groups that obtained the highest PageRank (0.426), pointing out that they suffer the highest educational deprivation due to discrimination, even when belonging to the richest decile.
The model also identified the factors that explain deprivation, through the highest PageRank and the greatest degree of connectivity for the first decile: financial bonus for attending school, computer access, internet access, number of children, living with at least one parent, access to books, reading books, phone access, time for homework, teachers arriving late, paid work, positive expectations about schooling, and mother's education. These results provide accurate and clear knowledge about the variables affecting the poorest students and the inequalities they produce, from which needs profiles might be defined, as well as actions on the factors that can be influenced. Finally, these results confirm that network analysis is fundamental for educational policy, especially when linking reliable microdata with social macro-parameters, because it allows us to infer how gaps in educational achievement are driven by students' context at the time of assigning resources.
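The network statistics the abstract leans on (weighted degree, PageRank) can be illustrated on a toy graph. A minimal sketch in Python, using purely hypothetical group/factor nodes and edge weights (none of these numbers come from the study), with PageRank computed by plain power iteration:

```python
# Toy student-group/factor graph; names and weights are illustrative only.
edges = [
    ("decile_1", "no_internet", 380.0), ("decile_1", "no_computer", 350.0),
    ("decile_1", "paid_work", 120.0), ("decile_10", "no_internet", 20.0),
    ("afro_ecuadorian", "no_internet", 90.0), ("afro_ecuadorian", "private_school", 60.0),
]

# Build an undirected weighted adjacency structure.
adj = {}
for u, v, w in edges:
    adj.setdefault(u, {})[v] = w
    adj.setdefault(v, {})[u] = w

# Weighted degree: total edge weight incident to each node.
wdeg = {u: sum(nbrs.values()) for u, nbrs in adj.items()}

# PageRank by power iteration with damping d = 0.85.
d, nodes = 0.85, list(adj)
pr = {u: 1.0 / len(nodes) for u in nodes}
for _ in range(100):
    new = {}
    for u in nodes:
        incoming = sum(pr[v] * adj[v][u] / wdeg[v] for v in adj[u])
        new[u] = (1 - d) / len(nodes) + d * incoming
    pr = new

print(wdeg["decile_1"])     # 850.0
print(max(pr, key=pr.get))  # node with highest PageRank in the toy graph
```

In the study, the analogous quantities were computed over the full factor network; this sketch only shows how the two measures are defined.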

Keywords: complex network, educational deprivation, evidence-based policy, large-scale assessments, policy informatics

Procedia PDF Downloads 124
2272 Numerical Simulation of the Fractional Flow Reserve in the Coronary Artery with Serial Stenoses of Varying Configuration

Authors: Mariia Timofeeva, Andrew Ooi, Eric K. W. Poon, Peter Barlis

Abstract:

Atherosclerotic plaque build-up, commonly known as stenosis, limits blood flow and hence oxygen and nutrient supply to the heart muscle. Thus, assessment of its severity is of great interest to health professionals. Numerically simulated fractional flow reserve (FFR) has proved to correlate well with invasively measured FFR used for physiological assessment of the severity of coronary stenosis. Atherosclerosis may affect the diseased artery in several locations, causing serial stenoses, a complicated subset of coronary artery disease that requires careful treatment planning. However, the hemodynamics of serial stenoses in coronary arteries has not been extensively studied. It is complex because the stenoses in the series interact and affect the flow through each other. To address this, serial stenoses in a 3.4 mm left anterior descending (LAD) artery are examined in this study. Two diameter stenoses (DS) are considered: 30 and 50 percent of the reference diameter. Serial stenoses configurations are divided into three groups based on the order of the stenoses in the series, the spacing between them, and the deviation of the stenoses' symmetry (eccentricity). A patient-specific pulsatile waveform is used in the simulations. Blood flow within the stenotic artery is assumed to be laminar, Newtonian, and incompressible. Results for the FFR are reported. Based on the simulation results, a larger drop in pressure (smaller FFR) is expected when the percentage of the second stenosis in the series is larger. Varying the distance between the stenoses affects the location of the maximum pressure drop, while the minimal FFR in the artery remains unchanged. Eccentric serial stenoses are characterized by a noticeably larger pressure decrease through the stenoses and by the development of chaotic flow downstream of the stenoses.
The largest pressure drop (about 4% difference compared to the axisymmetric case) is obtained for the serial stenoses in which both stenoses are highly eccentric, with centerlines deflected to different sides of the LAD. In conclusion, varying the configuration of serial stenoses results in a different distribution of FFR through the LAD. The results presented in this study provide insight into the clinical assessment of the severity of coronary serial stenoses, which is shown to depend on the relative position of the stenoses and the deviation of the stenoses' symmetry.
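The FFR logic described above can be sketched numerically. All values below are illustrative (aortic pressure, hyperemic flow, and the per-stenosis loss coefficients are not the study's data), and the pressure loss uses the common empirical form with a viscous term linear in flow plus a separation term quadratic in flow:

```python
# FFR is the ratio of distal coronary pressure to aortic pressure; for
# serial stenoses the individual pressure drops add, so the minimal FFR
# sits downstream of the last stenosis in the series.
P_aortic = 93.0  # mean aortic pressure, mmHg (assumed)
flow = 3.0       # hyperemic flow, mL/s (assumed)

def pressure_drop(flow_ml_s, a_visc, b_sep):
    # Viscous (linear) + flow-separation (quadratic) loss terms.
    return a_visc * flow_ml_s + b_sep * flow_ml_s**2

drop_30pct = pressure_drop(flow, a_visc=1.2, b_sep=0.3)  # milder stenosis
drop_50pct = pressure_drop(flow, a_visc=2.5, b_sep=0.9)  # tighter stenosis

ffr = (P_aortic - drop_30pct - drop_50pct) / P_aortic
print(round(ffr, 3))  # 0.765
```

A computed FFR below the clinical cut-off (commonly around 0.80) would indicate a hemodynamically significant lesion; the study resolves this distribution along the artery with full CFD rather than lumped loss coefficients.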

Keywords: computational fluid dynamics, coronary artery, fractional flow reserve, serial stenoses

Procedia PDF Downloads 182
2271 Catalyst Assisted Microwave Plasma for NOx Formation

Authors: Babak Sadeghi, Rony Snyders, Marie-Paule Delplancke-Ogletree

Abstract:

Nitrogen fixation (NF) is one of the crucial industrial processes. Many attempts have been made to fix nitrogen artificially, and among them, the Haber-Bosch (H-B) process is the most widely used. However, it presents two major drawbacks: huge fossil feedstock consumption and considerable greenhouse gas emissions. It is, therefore, necessary to develop alternatives. Plasma technology, as an inherently "green" technology, is considered to have great potential for reducing the environmental impact and improving the energy efficiency of the NF process. In this work, we have studied catalyst-assisted microwave plasma for NF applications. Heterogeneous MoO₃ catalysts, with loads of 0, 5, 10, 20, and 30 wt%, supported on γ-alumina were prepared by conventional wet impregnation. Crystallinity, surface area, pore size, and microstructure were characterized by X-ray diffraction (XRD), Brunauer-Emmett-Teller (BET) adsorption isotherms, scanning electron microscopy (SEM), and transmission electron microscopy (TEM). The XRD patterns of the calcined alumina confirm the γ-phase. Characteristic peaks of MoO₃ could not be observed for low loads (< 20 wt%), likely indicating a high dispersion of the metal oxide over the support. The specific surface area and pore size decrease with increasing calcination temperature and MoO₃ loading. The MoO₃ loading does not modify the microstructure. TEM and SEM results for loadings below 20 wt% are consistent with a monolayer of MoO₃ on the support, as proposed elsewhere. For loadings of 20 wt% and more, TEM and electron diffraction (ED) show nanocrystalline 3-D MoO₃ particles. The catalytic performance of these catalysts was investigated in the post-discharge of a microwave plasma for NOₓ formation from N₂/O₂ mixtures. The plasma is sustained by a surface wave launched in a quartz tube via a surfaguide supplied by a 2.45 GHz microwave generator in pulsed mode.
In-situ identification and quantification of the products were carried out by Fourier-transform infrared spectroscopy (FTIR) in the post-discharge region. FTIR analysis of the exhaust gas reveals NO and NO₂ bands in the presence of the catalyst, while only the NO band was observed without the catalyst. Moreover, in the presence of the catalyst, a 10% increase in NOₓ formation and a 20% increase in energy efficiency are observed.

Keywords: γ-Al₂O₃-MoO₃, microwave plasma, N₂ fixation, plasma catalysis, plasma diagnostics

Procedia PDF Downloads 176
2270 Inclusive Design for Regaining Lost Identity: Accessible, Aesthetic and Effortless Clothing

Authors: S. Tandon, A. Oussoren

Abstract:

Clothing is a need for all humans. Besides serving the commonly understood function of protection, it is also a means of self-expression and adornment. However, most clothing for people with disabilities is developed merely to respond to their functional needs. Such clothing aggravates feelings of inadequacy and lowers self-esteem. Investigations into apparel-related barriers faced by women with disabilities, and their expectations and desires about clothing, pointed to a huge void in terms of well-designed inclusive clothing. The incredible stories and experiences shared by the participants in this research highlighted the fact that people with disabilities want to feel, dress, and look how they want, by wearing what they want to wear. Clothing should be about self-expression, reflecting their moods, taste, and style, and not limited merely to fulfilling functional needs. Inclusive Design for Regaining Lost Identity was undertaken to design and develop accessible clothing that is inclusive and fashionable, to foster psycho-social well-being and to enhance the self-esteem of women with disabilities. The research explored inclusive design solutions for the saree, a traditional Indian garment for women. The saree is an elaborate garment that requires precise draping, which makes it complicated to wear and inconvenient to carry, particularly for women with physical disabilities. For many women in India, the saree remains the customary dress, especially for work and occasions, yet minimal advancement has been made to enhance its accessibility and ease of use. The project followed a qualitative research approach incorporating a combination of methods: a questionnaire, interviews, and co-creation workshops. The research adhered to the principles of applied research, such that the designed products aim to solve a problem and are functional and purposeful.
To reduce complications and simplify the wrapping of the garment fabric around the body, different combinations of pre-stitching of the layers of the saree were created and their outcomes investigated. 3D modelling and printing technology was employed to develop feasible fasteners, keeping in mind the participants' movement limitations, and to enhance their agency with these newly designed fasteners. The underlying principle of the project is that every individual should be able to access life the way they wish to and should not have to compromise their desires because of their disability.

Keywords: accessibility, co-creation, design ethics, inclusive

Procedia PDF Downloads 114
2269 The Emancipation of the Inland Areas Between Depopulation, Smart Community and Living Labs: A Case Study of Sardinia

Authors: Daniela Pisu

Abstract:

The paper deals with territorial inequalities, focusing on the marginalization of inland areas with respect to the centrality of urban centers. These areas are subject to an almost unstoppable demographic hemorrhage in a territory, such as Sardinia, marked by a tendency toward depopulation, to which further intense phenomena of de-anthropization are added. The research question explores the effectiveness of the interventions envisaged by the Piano Nazionale di Ripresa e Resilienza (PNRR) for reducing territorial imbalances in these areas, to the extent that it is possible to identify policy strategies aimed at increasing the relational expertise of citizens, functional to consolidating results in a long-term perspective. To answer this question, a qualitative case study of the Municipality of Ulàssai (province of Nuoro) is presented, the only winner on the island with the pilot project 'Where nature meets art', intended for the cultural and social regeneration of small towns. The main findings, which emerged from the analysis of institutional sources and secondary data, highlight the socio-demographic fragility of the territory alongside the active institutional commitment to make Ulàssai a smart community, starting from the enhancement of natural resources and the artistic heritage of fellow citizen Maria Lai. The findings drawn from site visits and focus groups with the youth population present the aforementioned project as a generative opportunity for both the economic and social fabric, leveraging the public debates of the living labs, where the process of public communication becomes the main vector for exercising the rights of participatory democracy.
This qualitative deep dive leads to the conclusion that the effects envisaged by the PNRR in internal areas will be able to show their self-sustaining character through dialogic administrations such as that of Ulàssai, capable of seeing in the interactive paradigm of public communication the natural process with which to reduce the historical sense of extraneousness attributed to the institution-citizen relationship.

Keywords: social labs, smart community, depopulation, Sardinia, Piano Nazionale di Ripresa e Resilienza

Procedia PDF Downloads 40
2268 Train-The-Trainer in Neonatal Resuscitation in Rural Uganda: A Model for Sustainability and the Barriers Faced

Authors: Emilia K. H. Danielsson-Waters, Malaz Elsaddig, Kevin Jones

Abstract:

Unfortunately, it is well known that neonatal deaths are a common and potentially preventable occurrence across the world. Neonatal resuscitation is a simple and inexpensive intervention that can effectively reduce this rate and can be taught and implemented globally. This project is a follow-on from one in 2012, which found that neonatal resuscitation simulation was valuable for education but would be improved by being delivered by local staff. Methods: This study involved auditing the neonatal admission and death records of a rural Ugandan hospital, alongside implementing a Train-the-Trainer teaching scheme for neonatal resuscitation. One local doctor was trained in simulating neonatal resuscitation, who subsequently taught an additional 14 staff members in a one-afternoon session. Participants were asked to complete questionnaires to assess their knowledge and confidence pre- and post-simulation, and a survey to identify barriers and drivers to simulation. Results: The neonatal mortality rate in this hospital was 25% between July 2016 and July 2017, with birth asphyxia, prematurity and sepsis being the most common causes. Barriers to simulation that were identified predominantly included a lack of time, facilities and opportunity, yet all members stated simulation was beneficial for improving skills and confidence. The simulation session received very positive qualitative feedback, a 0.58-point increase in knowledge (p=0.197) and a 0.73-point increase in confidence (p=0.079). Conclusion: This research shows that it is possible to create a teaching scheme in a rural hospital; however, many barriers to its sustainability are in place, and a larger sample size with a more sensitive scale would be required to achieve statistical significance. This is undeniably important, because teaching neonatal resuscitation can have a direct impact on neonatal mortality.
Subsequently, recommendations include putting efforts in place to create a sustainable training scheme, for example by employing a resuscitation officer. Moreover, neonatal resuscitation teaching should be conducted more frequently in hospitals and in a wider geographical context, including within the community, in order to achieve its full effect.

Keywords: neonatal resuscitation, sustainable medical education, train-the-trainer, Uganda

Procedia PDF Downloads 149
2267 Comparison of Microbiological Assessment of Non-adhesive Use and the Use of Adhesive on Complete Dentures

Authors: Hyvee Gean Cabuso, Arvin Taruc, Danielle Villanueva, Channela Anais Hipolito, Jia Bianca Alfonso

Abstract:

Introduction: Denture adhesives help provide additional retention, support and comfort for patients with loose dentures, as well as for patients who seek optimal denture adhesion. But given their growing popularity, arising oral health issues should be considered, including the possible impact they may have on the microbiological condition of the denture. Such changes may further develop into denture-related oral diseases that can affect the day-to-day lives of patients. Purpose: The study aims to assess and compare the microbiological status of dentures without adhesives versus dentures with adhesives applied. The study also intends to identify the presence of specific microorganisms, their colony concentration, and their possible effects on the oral microflora. It also aims to educate subjects by introducing an alternative denture cleaning method as well as denture and oral health care. Methodology: Edentulous subjects aged 50-80 years, both physically and medically fit, were selected to participate. Before samples were obtained for the study, the alternative cleaning method was introduced by demonstrating a step-by-step cleaning process. Samples were obtained by swabbing the intaglio surface of the upper and lower prostheses. These swabs were placed in thioglycollate broth, which served as a transport and enrichment medium, and were then processed through bacterial culture. Colony-forming units (CFUs) were counted on MacConkey agar plates (MAP) and blood agar plates (BAP) in order to assess the microbiological status, including species identification and microbial counting.
Result: Upon evaluation and analysis of the collected data, the microbiological assessment of the upper dentures with adhesives showed little to no difference compared to dentures without adhesives. For the lower dentures, however, P=0.005, which is less than α = 0.05; the researchers therefore reject the null hypothesis (H₀): there is a significant difference between the mean ranks of the lower dentures without adhesive and those with, implying a significant decrease in the bacterial count. Conclusion: These findings may indicate that the addition of denture adhesives contributes to a significant decrease of microbial colonization on the dentures.
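The rank-based comparison reported above (mean ranks against α = 0.05) can be illustrated with a hand-computed Mann-Whitney U statistic; the CFU counts below are purely hypothetical and are not the study's data:

```python
# Hypothetical CFU counts for lower dentures without and with adhesive.
without_adhesive = [120, 95, 150, 110, 130]  # CFU, illustrative
with_adhesive    = [60, 45, 80, 70, 55]      # CFU, illustrative

# Rank all observations jointly (1 = smallest); ties ignored for simplicity.
pooled = sorted(without_adhesive + with_adhesive)
rank = {v: i + 1 for i, v in enumerate(pooled)}

r1 = sum(rank[v] for v in without_adhesive)  # rank sum of group 1
n1, n2 = len(without_adhesive), len(with_adhesive)
u1 = r1 - n1 * (n1 + 1) / 2                  # U statistic for group 1

print(r1, u1)  # 40 25.0 - complete separation, since max U = n1 * n2 = 25
```

In practice the U statistic would be compared against critical values (or converted to a p-value, as the reported P=0.005 was); this sketch only shows how the mean-rank comparison is constructed.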

Keywords: denture, denture adhesive, denture-related, microbiological assessment

Procedia PDF Downloads 128
2266 Positivity Rate of Person under Surveillance among Institut Jantung Negara’s Patients with Various Vaccination Statuses in the First Quarter of 2022, Malaysia

Authors: Mohd Izzat Md. Nor, Norfazlina Jaffar, Noor Zaitulakma Md. Zain, Nur Izyanti Mohd Suppian, Subhashini Balakrishnan, Geetha Kandavello

Abstract:

During the Coronavirus disease (COVID-19) pandemic, Malaysia focused on building herd immunity by introducing vaccination programs into the community. Hospital Standard Operating Procedures (SOP) were developed to prevent inpatient transmission. Objective: In this study, we focus on the rate at which inpatient Persons Under Surveillance (PUS) became COVID-19 positive, compare it to the national rate, and examine the outcomes of patients who became COVID-19 positive in relation to their vaccination status. Methodology: This is a retrospective observational study carried out from 1 January until 30 March 2022 in Institut Jantung Negara (IJN). There were 5,255 patients admitted during the time of this study. A pre-admission polymerase chain reaction (PCR) swab was done for all patients, and patients positive on pre-admission screening were excluded. Patients who had been exposed to COVID-19-positive staff or patients during hospitalization were defined as PUS and were quarantined and monitored for potential COVID-19 infection. Their frequency and risk of exposure (WHO definition) were recorded. A repeat PCR swab was done for PUS patients who showed clinical deterioration, with or without COVID symptoms, and on their last day of quarantine. The severity of COVID-19 infection was defined as category 1-5A. All patients' vaccination status was recorded, and they were divided into three groups: fully immunised, partially immunised, and unvaccinated. We analyzed the positivity rate of PUS patients becoming COVID-positive, their outcomes, and the correlation with vaccination status. Result: The total number of inpatient PUS exposed to patients and staff was 492; only 13 became positive, giving a positivity rate of 2.6%. Eight (62%) had multiple exposures. The majority, 8/13 (72.7%), had high-risk exposure, and the remaining 5 had medium-risk exposure. Four (30.8%) had received a booster dose, 7 (53.8%) were fully vaccinated, and 2 (15.4%) were partially vaccinated or unvaccinated.
Eight patients were in categories 1-2, whilst 38% were in categories 3-5. Vaccination status did not correlate with COVID-19 category (P=0.641). One (7.7%) patient died due to COVID-19 complications and sepsis. Conclusion: Within the first quarter of 2022, our institution's positivity rate (2.6%) was significantly lower than the country's (14.4%). High-risk exposure and multiple exposures to positive COVID-19 cases increased the risk of PUS becoming COVID-19 positive, regardless of their vaccination status.
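The headline rates above reduce to simple arithmetic, restated here as a sketch (figures taken from the abstract):

```python
# Positivity rate among quarantined PUS patients vs the national figure.
pus_total, pus_positive = 492, 13
positivity = pus_positive / pus_total
print(round(positivity * 100, 1))  # 2.6 (%)

national_rate_pct = 14.4
print(positivity * 100 < national_rate_pct)  # True: institutional rate is lower
```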

Keywords: COVID-19, booster dose, high risk, Malaysia, quarantine, vaccination status

Procedia PDF Downloads 88
2265 Computer-Assisted Management of Building Climate and Microgrid with Model Predictive Control

Authors: Vinko Lešić, Mario Vašak, Anita Martinčević, Marko Gulin, Antonio Starčić, Hrvoje Novak

Abstract:

Accounting for 40% of total world energy consumption, building systems are developing into technically complex, large energy consumers suitable for sophisticated power management approaches that can greatly increase energy efficiency and even make buildings active energy market participants. A centralized control system for building heating and cooling managed by economically optimal model predictive control shows promising results, with an estimated 30% increase in energy efficiency. The research is focused on the implementation of such a method in a case study performed on two floors of our faculty building, with wireless data acquisition from the corresponding sensors, remote heating/cooling units, and a central climate controller. Building walls are mathematically modelled with their corresponding material types, surface shapes and sizes. The models are then exploited to predict thermal characteristics and changes in different building zones. Exterior influences such as environmental conditions and weather forecasts, occupant behavior, and comfort demands are all taken into account when deriving price-optimal climate control. Finally, a DC microgrid with photovoltaics, a wind turbine, a supercapacitor, batteries and fuel cell stacks is added to make the building a unit capable of active participation in a price-varying energy market. The computational burden of applying model predictive control to such a complex system is relaxed through a hierarchical decomposition of the microgrid and climate control: the former is designed as the higher hierarchical level with pre-calculated price-optimal power flow control, and the latter as the lower-level control responsible for ensuring thermal comfort while exploiting the optimal supply conditions enabled by microgrid energy flow management. Such an approach is expected to enable the inclusion of more complex building subsystems in order to further increase energy efficiency.
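The rolling price-optimal idea behind model predictive control can be sketched with a toy single-zone model. Everything below is assumed for illustration (the thermal coefficients, prices, and heater levels are not the study's), and the optimization is brute-force enumeration over a short horizon rather than the solvers a real MPC would use:

```python
import itertools

a, b = 0.9, 1.0            # thermal inertia and heater gain (assumed)
t_out = 5.0                # outdoor temperature, degC (assumed)
comfort_lo = 20.0          # comfort lower bound, degC
levels = [0.0, 1.0, 2.0]   # discrete heater power levels, kW
prices = [0.30, 0.10, 0.30, 0.10]  # energy price per step (assumed)
horizon = len(prices)

def step(t, u):
    # First-order zone model: relax toward outdoor temperature, add heating.
    return a * t + (1 - a) * t_out + b * u

def best_plan(t0):
    # Enumerate all control sequences over the horizon; keep the cheapest
    # one that never violates the comfort constraint.
    best_cost, best_seq = float("inf"), None
    for seq in itertools.product(levels, repeat=horizon):
        t, cost, feasible = t0, 0.0, True
        for u, p in zip(seq, prices):
            t = step(t, u)
            cost += p * u
            if t < comfort_lo:
                feasible = False
                break
        if feasible and cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq, best_cost

plan, cost = best_plan(21.0)
print(plan, round(cost, 2))  # heating is shifted toward the cheap price steps
```

In a real MPC only the first step of the plan is applied, the state is re-measured, and the optimization is repeated; the hierarchical scheme in the abstract layers a microgrid power-flow optimization on top of this climate loop.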

Keywords: price-optimal building climate control, microgrid power flow optimisation, hierarchical model predictive control, energy efficient buildings, energy market participation

Procedia PDF Downloads 465
2264 The Dynamics of a Droplet Spreading on a Steel Surface

Authors: Evgeniya Orlova, Dmitriy Feoktistov, Geniy Kuznetsov

Abstract:

Spreading of a droplet over a solid substrate is a key phenomenon in the following engineering applications: thin film coating, oil extraction, inkjet printing, and spray cooling of heated surfaces. Droplet cooling systems are known to be more effective than film or rivulet cooling systems. This is caused by the greater evaporation surface area of droplets compared with a film of the same mass and wetted surface, and this greater surface area is connected with the curvature of the interface. The location of the droplets on the cooling surface influences the heat transfer conditions. A close distance between droplets provides intensive heat removal, but there is a possibility of their coalescence into a liquid film. A long distance leads to overheating of local areas of the cooling surface and the occurrence of thermal stresses. The location of droplets can be controlled by changing the roughness, structure and chemical composition of the surface; thus, control of spreading can be implemented. The most important characteristic of droplet spreading on solid surfaces is the dynamic contact angle, which is a function of the contact line speed or capillary number. However, there is currently no universal equation describing the relationship between these parameters. This paper presents the results of experimental studies of water droplet spreading on metal substrates with different surface roughness. The effect of the droplet growth rate and the surface roughness on spreading characteristics was studied at low capillary numbers. The shadow method was implemented using high-speed video cameras recording up to 10,000 frames per second. Droplet profiles were analyzed by Axisymmetric Drop Shape Analysis techniques.
According to the change of the dynamic contact angle and the contact line speed, three sequential spreading stages were observed: a rapid increase in the dynamic contact angle; a monotonous decrease in the contact angle and the contact line speed; and the formation of the equilibrium contact angle at a constant contact line. At a low droplet growth rate, the dynamic contact angle of a droplet spreading on the surfaces with the maximum roughness is found to increase throughout the spreading time. This is due to the fact that the friction force on such surfaces is significantly greater than the inertia force, and the contact line is pinned on the microasperities of the relief. At a high droplet growth rate, the contact angle decreases during the second stage even on the surfaces with the maximum roughness, as in this case the liquid does not fill the microcavities and the droplet moves over an "air cushion", i.e. the interface is a liquid/gas/solid system. At such growth rates, pulsation of the liquid flow was also detected, and the droplet oscillates during spreading. Thus, the obtained results allow us to conclude that it is possible to control spreading by using the surface roughness and the droplet growth rate as control factors. The research findings may also be used for analyzing heat transfer in rivulet and drop cooling systems of high-energy equipment.
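The low-capillary-number condition mentioned above can be made concrete. A minimal sketch with standard water properties and an illustrative contact line speed (the study's actual measured speeds are not reproduced here):

```python
# Capillary number Ca = mu * U / sigma compares viscous forces at the moving
# contact line with surface tension forces; Ca << 1 means surface tension
# dominates, which is the regime studied in the abstract.
mu = 1.0e-3      # dynamic viscosity of water, Pa*s
sigma = 72.8e-3  # surface tension of water at ~20 degC, N/m
U = 5.0e-3       # contact line speed, m/s (illustrative)

Ca = mu * U / sigma
print(f"{Ca:.2e}")  # on the order of 1e-5: low-capillary-number spreading
```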

Keywords: contact line speed, droplet growth rate, dynamic contact angle, shadow system, spreading

Procedia PDF Downloads 330
2263 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the current technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. In addition, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve the issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is posed as an operator mapping an input in the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, representing the alphanumeric and the language it comes from.
These inputs and outputs, along with the internal variables z ∈ Z, represent the system's current state, implying a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify both the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
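The corrector pipeline sketched above (center the measurements, separate past errors Y from correct predictions M, flag new inputs by hyperplanes) can be reduced to its simplest case: a single separating hyperplane on toy 2-D data. The Kaiser-rule regularization, whitening, and pairwise clustering steps are omitted for brevity, and all data and the zero threshold are assumptions of this sketch:

```python
def mean(vs):
    # Component-wise mean of a list of vectors.
    n = len(vs)
    return [sum(v[i] for v in vs) / n for i in range(len(vs[0]))]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

M = [[0.9, 1.1], [1.2, 0.8], [1.0, 1.0]]  # correct predictions (toy)
Y = [[-1.0, -0.9], [-1.1, -1.2]]          # past errors (toy)

# Center everything by the global mean, as in the corrector's first step.
mu_all = mean(M + Y)
center = lambda v: [x - m for x, m in zip(v, mu_all)]

# Linear functional pointing from the correct cluster toward the error
# cluster; its zero level set is the separating hyperplane (threshold assumed).
w = [a - b for a, b in zip(mean([center(v) for v in Y]),
                           mean([center(v) for v in M]))]
theta = 0.0

def flags_error(x):
    # New measurement lands on the error side of the hyperplane?
    return dot(w, center(x)) > theta

print(flags_error([-1.05, -1.0]), flags_error([1.05, 0.95]))  # True False
```

The stochastic separation theorem is what justifies this construction at scale: in high dimension, a few errors can, with high probability, be separated from a large set of correct samples by simple linear functionals like `w`.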

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 100
2262 Reduction of Nitrogen Monoxide with Carbon Monoxide from Gas Streams by 10% wt. Cu-Ce-Fe-Co/Activated Carbon

Authors: K. L. Pan, M. B. Chang

Abstract:

Nitrogen oxides (NOₓ) are regarded as among the most important air pollutants. They not only cause adverse environmental effects but also harm human lungs and the respiratory system. As a post-combustion treatment, selective catalytic reduction (SCR) possesses the highest NO removal efficiency (≥ 85%) and is considered the most effective technique for removing NO from gas streams. However, injection of a reducing agent such as NH₃ is required, which is costly and may cause secondary pollution. Reduction of NO with carbon monoxide (CO) as the reducing agent has been previously investigated. In this process, the key step involves NO adsorption and dissociation. High performance relies mainly on the amount of oxygen vacancies on the catalyst surface and the redox ability of the catalyst: oxygen vacancies activate the N-O bond to promote its dissociation, while good redox ability promotes the adsorption of NO and the oxidation of CO. Typically, noble metals such as iridium (Ir), platinum (Pt), and palladium (Pd) are used as catalysts for the reduction of NO with CO; however, their high cost has limited their applications. Recently, transition metal oxides have been investigated for the reduction of NO with CO; CuₓOy, CoₓOy, Fe₂O₃, and MnOₓ in particular are considered effective catalysts. However, deactivation is inevitable when oxygen (O₂) exists in the gas stream, because the active sites (oxygen vacancies) of the catalyst are occupied by O₂. In this study, Cu-Ce-Fe-Co is prepared and supported on activated carbon by the impregnation method to form a 10% wt. Cu-Ce-Fe-Co/activated carbon catalyst.
Generally, the addition of activated carbon to a catalyst brings several advantages: (1) NO can be effectively adsorbed through the interaction between the catalyst and activated carbon, improving NO removal; (2) direct NO decomposition may be achieved over carbon associated with the catalyst; and (3) reduction of NO can be enhanced by a reducing agent over a carbon-supported catalyst. Therefore, 10% wt. Cu-Ce-Fe-Co/activated carbon may perform better for the reduction of NO with CO. Experimental results indicate that the NO conversion achieved with 10% wt. Cu-Ce-Fe-Co/activated carbon reaches 83% at 150°C with 300 ppm NO and 10,000 ppm CO. As the temperature is further increased to 200°C, 100% NO conversion is achieved, implying that the prepared 10% wt. Cu-Ce-Fe-Co/activated carbon has good activity for the reduction of NO with CO. To investigate the effect of O₂ on the reduction of NO with CO, 1-5% O₂ was introduced into the system. The results indicate that NO conversion is still maintained at ≥ 90% under 1-5% O₂ at 200°C. It is worth noting that the adverse effect of O₂ on the reduction of NO with CO is significantly mitigated when carbon is used as the support; it is inferred that the carbon support reacts with O₂ to produce CO₂ when O₂ is present in the gas streams. Overall, 10% wt. Cu-Ce-Fe-Co/activated carbon demonstrates good potential for the reduction of NO with CO, and possible mechanisms are elucidated in this paper.

Keywords: nitrogen oxides (NOₓ), carbon monoxide (CO), reduction of NO with CO, carbon material, catalysis

Procedia PDF Downloads 256
2261 Vortex Control by a Downstream Splitter Plate in Pseudoplastic Fluid Flow

Authors: Sudipto Sarkar, Anamika Paul

Abstract:

Pseudoplastic fluids (n < 1, where n is the power-law index) are of great importance in the food, pharmaceutical, and chemical process industries and therefore deserve close attention. Unfortunately, owing to their complex flow behavior, little research is available even for the laminar flow regime. In the present work, a practical problem is solved by numerical simulation: controlling the vortex shedding from a square cylinder using a horizontal splitter plate placed in the downstream flow region. The plate is positioned on the centerline of the cylinder at varying distances from it in order to determine the critical gap ratio; if the plate is placed inside this critical gap, vortex shedding from the cylinder is suppressed completely. The Reynolds number considered here lies in the unsteady laminar vortex shedding regime, Re = 100 (Re = U∞a/ν, where U∞ is the free-stream velocity, a is the side of the cylinder, and ν is the maximum value of the kinematic viscosity of the fluid). Flow behavior has been studied for three gap ratios (G/a = 2, 2.25 and 2.5, where G is the gap between cylinder and plate) and for fluids with three flow behavior indices (n = 1, 0.8 and 0.5). The flow domain was constructed using Gambit 2.2.30, which was also used to generate the mesh and impose the boundary conditions. For G/a = 2, the domain size is 37.5a × 16a with 316 × 208 grid points in the streamwise and flow-normal directions, respectively, following a thorough grid-independence study. Fine, uniform grid spacing is used close to the geometry to capture the vortices shed from the cylinder and the boundary layer developed over the flat plate; away from the geometry, the mesh is stretched with unequal spacing. For the other gap ratios, proportionate domain sizes and grid counts are used with a similar mesh distribution.
A velocity inlet (u = U∞), a pressure outlet (Neumann condition), and symmetry (free-slip) conditions at the upper and lower domain boundaries are used for the simulation. A no-slip wall condition (u = v = 0) is applied on both the cylinder and the splitter plate surfaces. The discretized forms of the fully conservative 2-D unsteady Navier-Stokes equations are then solved with Ansys Fluent 14.5 using the SIMPLE algorithm, a default finite-volume solver in Fluent. The results obtained for Newtonian fluid flow agree well with previous works, supporting Fluent’s usefulness in academic research. A thorough analysis of the instantaneous and time-averaged flow fields is presented for both Newtonian and pseudoplastic fluid flow. It is observed that as the value of n decreases, the stretching of the shear layers also decreases, and these layers tend to roll up before reaching the plate. For flow with high pseudoplasticity (n = 0.5), the nature of the vortex shedding changes and the value of the critical gap ratio decreases. These are notable findings for the laminar periodic vortex shedding regime in a pseudoplastic flow environment.
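The Reynolds number definition above, together with the power-law (Ostwald-de Waele) model commonly used for pseudoplastic fluids, can be expressed as a short sketch. The consistency index K and the example values below are illustrative inputs, not parameters reported in the abstract.

```python
def power_law_apparent_viscosity(K, n, shear_rate):
    """Apparent viscosity of a power-law fluid, mu = K * gamma_dot**(n - 1).
    For a pseudoplastic fluid (n < 1), viscosity falls as shear rate rises."""
    return K * shear_rate ** (n - 1.0)

def reynolds_number(U_inf, a, nu):
    """Re = U_inf * a / nu, as defined in the text
    (U_inf: free-stream velocity, a: cylinder side, nu: kinematic viscosity)."""
    return U_inf * a / nu
```

For n = 1 the apparent viscosity reduces to the Newtonian constant K, which is why the n = 1 case serves as the validation baseline against previous works.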

Keywords: CFD, pseudoplastic fluid flow, wake-boundary layer interactions, critical gap-ratio

Procedia PDF Downloads 111
2260 Qualitative Modeling of Transforming Growth Factor Beta-Associated Biological Regulatory Network: Insight into Renal Fibrosis

Authors: Ayesha Waqar Khan, Mariam Altaf, Jamil Ahmad, Shaheen Shahzad

Abstract:

Kidney fibrosis is an anticipated outcome of possibly all types of progressive chronic kidney disease (CKD). The epithelial-mesenchymal transition (EMT) signaling pathway is responsible for the production of matrix-producing fibroblasts and myofibroblasts in the diseased kidney. In this study, a discrete model of TGF-beta (transforming growth factor beta) and CTGF (connective tissue growth factor) was constructed using the René Thomas formalism to investigate renal fibrosis turnover. The kinetic logic proposed by René Thomas is a renowned approach for the modeling of Biological Regulatory Networks (BRNs). This modeling approach uses a set of constraints that represents the dynamics of the BRN, thus allowing the pathway to be analyzed and critical trajectories leading to a normal or diseased state to be predicted. The molecular connection between TGF-beta, Smad2/3 (transcription factor) phosphorylation, and CTGF is modeled using GenoTech. The variables of the BRN are ordered as CTGF, TGF-B, and SMAD3, respectively. The predicted cycle depicts activation of TGF-B (TGF-β) via cleavage of its own pro-domain (0,1,0) and presentation to the TGFR-II receptor, phosphorylating SMAD3 (Smad2/3) in the state (0,1,1). TGF-B is then turned off (0,0,1), leaving activated SMAD3 to stimulate the expression of CTGF in the state (1,0,1) before itself turning off in (1,0,0). Elevated CTGF expression reactivates TGF-B (1,1,0), and the cycle continues. The model generated one cycle and two steady states. The cyclic behavior represents the diseased state, in which all three proteins contribute to renal fibrosis. The proposed model is in accordance with existing experimental findings for the diseased state. The extended cycle results in enhanced CTGF expression through Smad2/3 and Smad4 translocation into the nucleus. The results suggest that the system converges towards organ fibrogenesis if CTGF remains constitutively active along with Smad2/3 and Smad4, which play an important role in kidney fibrosis.
Therefore, modeling the regulatory pathways of kidney fibrosis will guide the development of therapeutic tools and real-world applications such as predictive and preventive medicine.
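The six-state cycle reported above can be checked mechanically. The sketch below encodes the predicted trajectory as a transition table over (CTGF, TGF-B, SMAD3) states; the transition comments are our reading of the abstract, not output from GenoTech.

```python
# State tuples are ordered (CTGF, TGF-B, SMAD3), as in the text.
CYCLE = {
    (0, 1, 0): (0, 1, 1),  # active TGF-B phosphorylates SMAD3
    (0, 1, 1): (0, 0, 1),  # TGF-B turns off
    (0, 0, 1): (1, 0, 1),  # SMAD3 stimulates CTGF expression
    (1, 0, 1): (1, 0, 0),  # SMAD3 turns off
    (1, 0, 0): (1, 1, 0),  # elevated CTGF reactivates TGF-B
    (1, 1, 0): (0, 1, 0),  # CTGF decays; the cycle restarts
}

def trajectory(start, steps):
    """Iterate the qualitative model from a start state."""
    path = [start]
    for _ in range(steps):
        path.append(CYCLE[path[-1]])
    return path
```

Following the table for six steps from (0,1,0) visits all six states and returns to the start, reproducing the diseased-state cycle described in the abstract.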

Keywords: CTGF, renal fibrosis signaling pathway, systems biology, qualitative modeling

Procedia PDF Downloads 179
2259 Evaluating Impact of Teacher Professional Development Program on Students’ Learning

Authors: S. C. Lin, W. W. Cheng, M. S. Wu

Abstract:

This study investigated the connection between a teacher professional development program and students’ learning, taking the Readers’ Theater Teaching Program (RTTP) as an example to examine how participants applied the new knowledge and skills learned from the RTTP to their teaching practice and how this influenced student learning. The goals of the RTTP included: 1) to enhance teachers’ RT content knowledge; 2) to implement RT instruction in teachers’ classrooms in response to their professional development; and 3) to improve students’ reading fluency in the professional development teachers’ classrooms. This study was a two-year project. The researchers applied mixed methods, including qualitative inquiry and a one-group pretest-posttest experimental design. In the first year, the study focused on designing and implementing the RTTP and evaluating participants’ satisfaction with it, what they learned, and how they applied it in designing their English reading curriculum. In the second year, the study adopted a quasi-experimental design and evaluated how participants’ RT instruction influenced their students’ learning, including English knowledge, skills, and attitudes. The participants comprised two junior high school English teachers and their students. Data were collected from a number of sources, including teaching observation, semi-structured interviews, teaching diaries, teachers’ professional development portfolios, pre/post RT content knowledge tests, a teacher survey, and students’ reading fluency tests. Both qualitative and quantitative data analyses were used. The qualitative analysis included three stages: organizing the data, coding the data, and analyzing and interpreting the data. The quantitative analysis consisted of descriptive statistics.
The results indicated that the average percentage correct on the pre-test of RT content knowledge was 40.75%, with the two teachers ranging in prior knowledge from 35% to 46% in specific RT content. Post-test RT content scores ranged from 70% to 82% correct, with an average score of 76.50%, giving the teachers an average gain of 35.75% in overall content knowledge as measured by these pre/post exams. Teachers’ pre-test scores were lowest in script writing and highest in performing; script writing was also the content area that showed the highest gains. Moreover, participants held a positive attitude toward the RTTP and noted that the professional learning community approach applied in the RTTP was beneficial to their professional development. Participants also applied the new skills and knowledge learned from the RTTP in their practice. The evidence from this study indicated that RT English instruction significantly influenced students’ reading fluency and the classroom climate: all of the experimental group students made substantial progress in reading fluency after RT instruction. The study also identified several obstacles, and suggestions were made accordingly.

Keywords: teacher’s professional development, program evaluation, readers’ theater, english reading instruction, english reading fluency

Procedia PDF Downloads 398
2258 Performance Tests of Wood Glues on Different Wood Species Used in Wood Workshops: Morogoro Tanzania

Authors: Japhet N. Mwambusi

Abstract:

Deforestation of high tropical forests for the solid wood furniture industry is among the contributing agents of climate change. This pressure is indirectly caused by furniture joint failure arising from poor gluing technology, namely the improper matching of glues to wood species, which leads to low-quality, weak wood-glue joints. This study was carried out to run performance tests of wood glues on different wood species used in wood workshops in Morogoro, Tanzania, whereby three popular wood species, C. lusitanica, T. grandis and E. maidenii, were tested against five glues found in the market: Woodfix, Bullbond, Ponal, Fevicol and Coral. The findings were needed for developing a guideline on proper glue selection for joining a particular wood species. Random sampling was employed to interview carpenters in a survey of their background, such as education level, and to determine the factors that influence their choice of glue. A Monsanto tensiometer was used to determine the bonding strength of the identified wood glues to the different wood species under the British Standard procedure for testing wood shear strength (BS EN 205). Data from the carpenter interviews were analyzed with the Statistical Package for the Social Sciences (SPSS) to allow comparison of the different data, while laboratory data were compiled, related and compared using MS Excel worksheets as well as Analysis of Variance (ANOVA). Results revealed that, among the five wood glues tested in the laboratory on the three wood species, Coral performed much better, with average shear strengths of 4.18 N/mm², 3.23 N/mm² and 5.42 N/mm² for cypress, teak and eucalyptus, respectively. This indicates that, to form a strong joint in all three wood species, whether softwood or hardwood, Coral should be the first choice.
The table of guidelines developed from this research can help carpenters select the proper glue for a particular wood species so as to achieve adequate glue-bond strength. This will secure the furniture market and reduce the pressure on forests for furniture production, because furniture with strong joints lasts longer. Indeed, this can be a good strategy for slowing climate change in the tropics, which results in part from high deforestation of trees for furniture production.

Keywords: climate change, deforestation, gluing technology, joint failure, wood-glue, wood species

Procedia PDF Downloads 240
2257 Investigation of Dry-Blanching and Freezing Methods of Fruits

Authors: Epameinondas Xanthakis, Erik Kaunisto, Alain Le-Bail, Lilia Ahrné

Abstract:

Fruits and vegetables are perishable food matrices with a short shelf life, as several deterioration mechanisms are involved. Prior to common preservation methods such as freezing or canning, fruits and vegetables are blanched in order to inactivate deteriorative enzymes. Both conventional blanching pretreatments and conventional freezing methods hide drawbacks behind their beneficial impacts on the preservation of these matrices. Conventional blanching may require long processing times and causes leaching of minerals and nutrients through contact with the warm water, which in turn produces effluent with a large BOD. An important issue in freezing technologies is the size of the ice crystals formed, which is critical for the final quality of the frozen food, as large crystals can cause irreversible damage to the cellular structure and subsequently degrade the texture and colour of the product. Herein, the developed microwave blanching methodology and the results regarding quality aspects and enzyme inactivation will be presented. Moreover, the heat transfer phenomena, mass balance, temperature distribution, and enzyme inactivation (such as of Pectin Methyl Esterase and Ascorbic Acid Oxidase) of our microwave blanching approach will be evaluated based on measurements and computer modelling. The present work is part of the COLDμWAVE project, which aims at the development of an innovative, environmentally sustainable process for blanching and freezing of fruits and vegetables with improved textural and nutritional quality. In this context, COLDµWAVE will develop tailored equipment for MW blanching of vegetables with very high energy efficiency and no water consumption. Furthermore, the next steps of this project, concerning the development of innovative pathways in MW-assisted freezing to improve the quality of frozen vegetables by exploring in depth previous results acquired by the authors, will be presented.

The application of the MW-assisted freezing process to fruits and vegetables is expected to lead to improved quality characteristics compared with conventional freezing. Acknowledgments: COLDμWAVE has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 660067.

Keywords: blanching, freezing, fruits, microwave blanching, microwave

Procedia PDF Downloads 267
2256 Developing Improvements to Multi-Hazard Risk Assessments

Authors: A. Fathianpour, M. B. Jelodar, S. Wilkinson

Abstract:

This paper outlines approaches to multi-hazard risk assessment. There is currently confusion about how to assess multi-hazard impacts, so this study aims to determine which of the available options are the most useful. The paper uses an international literature search, an analysis of current multi-hazard assessments, and a case study to illustrate the effectiveness of the chosen method. Findings from this study will help those wanting to assess multi-hazards to adopt a straightforward approach; the paper is significant in that it interprets the various approaches and concludes with a preferred method. Many people in the world live in hazardous environments and are susceptible to disasters. Unfortunately, when a disaster strikes it is often compounded by additional cascading hazards, so people may confront more than one hazard simultaneously. Hazards include natural hazards (earthquakes, floods, etc.) and cascading human-made hazards (for example, natural-hazard-triggered technological disasters (Natech) such as fire, explosion, and toxic release). Multi-hazards have a more destructive impact on urban areas than any single hazard alone. In addition, climate change is creating links between different disasters, for example causing landslide dams and debris flows that lead to more destructive incidents. Much of the prevailing literature deals with only one hazard at a time; however, sophisticated multi-hazard assessments have recently started to appear. Given that multi-hazards occur, it is essential to take multi-hazard risk assessment into consideration. This paper reviews the multi-hazard assessment methods published to date and categorizes the strengths and weaknesses of using these methods in risk assessment. Napier City is selected as a case study to demonstrate the necessity of multi-hazard risk assessments.
In order to assess multi-hazard risk assessments, the current methods were first described, their drawbacks were then outlined, and finally the improvements made to date were summarised. In general, the main problem of multi-hazard risk assessment is making valid assumptions about the risk arising from the interactions of different hazards. Risk assessment studies have started to address multi-hazard situations, but drawbacks such as uncertainty and lack of data show the need for more precise risk assessment. It should be noted that ignoring, or only partially considering, multi-hazards in risk assessment will lead to overestimation or oversight in resilience and recovery management.

Keywords: cascading hazards, disaster assessment, multi-hazards, risk assessment

Procedia PDF Downloads 112
2255 AIR SAFE: An Internet of Things System for Air Quality Management Leveraging Artificial Intelligence Algorithms

Authors: Mariangela Viviani, Daniele Germano, Simone Colace, Agostino Forestiero, Giuseppe Papuzzo, Sara Laurita

Abstract:

Nowadays, people spend most of their time in closed environments, in offices, or at home. Therefore, secure and highly livable environmental conditions are needed to reduce the probability of aerial viruses spreading. Also, to lower the human impact on the planet, it is important to reduce energy consumption. Heating, Ventilation, and Air Conditioning (HVAC) systems account for the major part of energy consumption in buildings [1]. Devising systems to control and regulate the airflow is, therefore, essential for energy efficiency. Moreover, an optimal setting for thermal comfort and air quality is essential for people’s well-being, at home or in offices, and increases productivity. Thanks to the features of Artificial Intelligence (AI) tools and techniques, it is possible to design innovative systems with: (i) improved monitoring and prediction accuracy; (ii) enhanced decision-making and mitigation strategies; (iii) real-time air quality information; (iv) increased efficiency in data analysis and processing; (v) advanced early warning systems for air pollution events; (vi) an automated and cost-effective monitoring network; and (vii) a better understanding of air quality patterns and trends. We propose AIR SAFE, an IoT-based infrastructure designed to optimize air quality and thermal comfort in indoor environments leveraging AI tools. AIR SAFE employs a network of smart sensors collecting indoor and outdoor data to be analyzed in order to take any corrective measures needed to ensure the occupants’ wellness. The data are analyzed through AI algorithms able to predict the future levels of temperature, relative humidity, and CO₂ concentration [2]. Based on these predictions, AIR SAFE takes actions, such as opening or closing the window and turning the air conditioner on or off, to guarantee a high level of thermal comfort and air quality in the environment.
In this contribution, we present the results from the AI algorithm we have implemented on the first set of data collected in a real environment. The results were compared with other models from the literature to validate our approach.
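As an illustration of the sense-predict-act loop described above, the following sketch forecasts the next CO₂ reading with a simple moving average and triggers ventilation when the forecast exceeds a comfort limit. Both the moving-average predictor and the 1000 ppm threshold are illustrative placeholders, not AIR SAFE's actual AI models or parameters.

```python
from collections import deque

class AirSafeController:
    """Toy sense-predict-act loop: forecast the next CO2 reading with a
    moving average over recent sensor samples, then decide whether to
    ventilate. Thresholds are illustrative, not the project's parameters."""

    def __init__(self, window=5, co2_limit_ppm=1000.0):
        self.readings = deque(maxlen=window)  # rolling sensor buffer
        self.co2_limit = co2_limit_ppm

    def observe(self, co2_ppm):
        self.readings.append(co2_ppm)

    def forecast(self):
        # Naive predictor: mean of the last `window` readings.
        return sum(self.readings) / len(self.readings)

    def action(self):
        # Ventilate (open window / start HVAC) when the predicted level
        # would exceed the comfort limit; otherwise hold.
        return "ventilate" if self.forecast() > self.co2_limit else "hold"
```

In a deployed system the `forecast` step would be replaced by the learned prediction model for temperature, humidity, and CO₂.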

Keywords: air quality, internet of things, artificial intelligence, smart home

Procedia PDF Downloads 93
2254 Semantic-Based Collaborative Filtering to Improve Visitor Cold Start in Recommender Systems

Authors: Baba Mbaye

Abstract:

In collaborative filtering recommender systems, a user receives item suggestions based on the opinions and evaluations of a community of users. This type of recommender system uses only the information (ratings as numerical values) contained in a usage matrix as input data. This matrix can be constructed from users’ behaviors or by asking users to declare their opinions on the items they know. The cold start problem leads to very poor performance for new users: it occurs at the beginning of use, when the system lacks the data needed to make recommendations. There are three types of cold start problems: cold start for a new item, for a new system, and for a new user. In this article, we are interested in the cold start for a new user. When the system welcomes a new user, the profile exists but does not yet contain enough data, and its communities with other user profiles are still unknown, leading to recommendations that are not adapted to the new user’s profile. In this paper, we propose an approach that improves cold start by using notions of similarity and semantic proximity between user profiles. We use the available cold metadata (metadata extracted from the new user’s data) to position the new user within a community, looking for similarities and semantic proximities with the existing user profiles of the system. Proximity is represented by close concepts considered to belong to the same group, while similarity groups together elements that appear alike; they are two related but distinct notions. This leads us to construct a similarity measure based on: a) concepts (properties, terms, instances) independent of the ontology structure and b) the joint representation of two concepts (relations, presence of terms in a document, simultaneous presence of the authorities).
We propose an ontology, OIVCSRS (Ontology of Improvement of Visitor Cold Start in Recommender Systems), to structure the terms and concepts representing the meaning of an information field, whether through the metadata of a namespace or the elements of a knowledge domain. This approach allows us to automatically attach the new user to a user community, partially compensate for the data that was not initially provided, and ultimately associate a better first profile with the cold start. Thus, the aim of this paper is to propose an approach to improving cold start using semantic technologies.
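A minimal sketch of attaching a new user to a community via similarity over cold metadata is shown below. It uses plain cosine similarity over bag-of-terms profiles; the semantic-proximity component based on the OIVCSRS ontology is not modeled here, and all profile data are hypothetical.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-terms profiles (Counters)."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def attach_to_community(new_user_terms, communities):
    """Pick the community whose aggregate term profile is most similar
    to the new user's cold metadata (terms extracted at sign-up)."""
    profile = Counter(new_user_terms)
    return max(communities, key=lambda c: cosine(profile, communities[c]))
```

The new user's first recommendations can then be drawn from the chosen community until enough ratings accumulate to build an individual profile.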

Keywords: visitor cold start, recommender systems, collaborative filtering, semantic filtering

Procedia PDF Downloads 218
2253 Reconstruction of Performance-Based Budgeting in Indonesian Local Government: Application of Soft Systems Methodology in Producing a Guideline for Policy Implementation

Authors: Deddi Nordiawan

Abstract:

Effective public policy creation requires a strong budget system, in terms of both design and implementation. The performance-based budget is an evolutionary approach with two substantial characteristics: first, the strong integration between budgeting and planning, and second, its role as guidance so that all activities and expenditures refer to measurable performance targets. There are four processes that government should follow in order to make the budget performance-based: the preparation of a vision according to a bold aspiration, the formulation of outcomes, the determination of outputs based on an analysis of organizational resources, and the formulation of a value creation map containing a series of programs and activities. This is consistent with the logic model concept, which holds that budget performance should be placed within a relational framework of resources, activities, outputs, outcomes, and impacts. Through the issuance of Law 17/2003 regarding State Finance, local governments in Indonesia have to implement performance-based budgeting; the central government then issued Government Regulation 58/2005, containing detailed guidelines on how to prepare local government budgets. After a decade, the implementation of performance budgeting in local government still does not fully meet expectations, even though the guidance is complete, socialization is routinely performed, and trainings have been carried out at all levels. Accordingly, this study views the practice of performance-based budgeting in local governments as a problematic situation. This condition must be approached with a systems approach that allows solutions from many points of view. Given that the infrastructure of budgeting is already settled, the study treats the situation as a complexity; the intervention therefore needs to be made in the area of the human activity system.
Using Soft Systems Methodology, this research reconstructs the process of performance-based budgeting in local governments as a human activity system. Through conceptual models, this study invites all actors (central government, local government, and the parliament) to dialogue and to formulate interventions in human activity systems that are systemically desirable and culturally feasible. The result will direct the central government in revising its guidance on the local government budgeting process, as well as serving as a reference for building a capacity-building strategy.

Keywords: soft systems methodology, performance-based budgeting, Indonesia, public policy

Procedia PDF Downloads 252
2252 Screens Design and Application for Sustainable Buildings

Authors: Fida Isam Abdulhafiz

Abstract:

Traditional vernacular architecture in the United Arab Emirates consisted mainly of adobe houses with a limited number of openings in their facades. The thick mud and rubble walls and wooden window screens protected the inhabitants from the harsh desert climate, provided them with privacy, and met their comfort needs to an extent. However, with the rise of the immediate post-petroleum era, reinforced concrete villas built with glass and steel technology replaced traditional vernacular dwellings, and more load was put on mechanical cooling systems to satisfy today’s more demanding inhabitants. In the early 21st century, professionals started to pay more attention to the carbon footprint of the built environment, and many studies and innovative approaches are now dedicated to lowering the impact of existing operating buildings on their surroundings. UAE government agencies have introduced local and international building codes and urban design policies, such as Estidama and LEED, that aim to revive sustainable and environmental design. The focus in this paper is on reducing the emissions resulting from energy use in cooling and heating systems through innovative screen designs and façade solutions that provide a green footprint and aesthetic architectural icons. Screens are a popular innovative technique that can be incorporated in the design process or applied to existing buildings as a renovation technique to develop passive green buildings. Preparing future architects to understand the importance of environmental design was attempted through physical modelling of window screens as an educational means of combining theory with a hands-on teaching approach. Designing screens proved to be a popular technique that helped students understand the importance of sustainable design and passive cooling.
After creating models of prototype screens, several tests were conducted to calculate the amounts of sun, light, and wind passing through the screens and affecting the heat load and the light entering the building. Theory classes further explored concepts of green buildings and materials that produce low carbon emissions. This paper highlights the importance of hands-on experience for student architects and how physical modelling helped raise eco-awareness in the design studio. The paper examines the different types of façade screens and shading devices developed by the architecture students and explains their production of diverse patterns for traditional screens based on sustainable design concepts suited to the climate requirements of the Middle East region.

Keywords: building’s screens modeling, façade design, sustainable architecture, sustainable dwellings, sustainable education

Procedia PDF Downloads 298
2251 Groundwater Flow Dynamics in Shallow Coastal Plain Sands Aquifer, Abesan Area, Eastern Dahomey Basin, Southwestern Nigeria

Authors: Anne Joseph, Yinusa Asiwaju-Bello, Oluwaseun Olabode

Abstract:

Sustainable administration of the groundwater resources tapped in the Coastal Plain Sands aquifer in the Abesan area, Eastern Dahomey Basin, Southwestern Nigeria, requires knowledge of the pattern of groundwater flow in order to meet a suitable environmental need for habitation. Thirty hand-dug wells were identified and evaluated to study the groundwater flow dynamics and the distribution of anionic species in the study area. The topography and water-table-levels method, with the aid of Surfer, was adopted to identify recharge and discharge zones; six recharge and six discharge zones were delineated. The dissolved anionic species HCO₃⁻, Cl⁻, SO₄²⁻ and NO₃⁻ were determined using titrimetric and spectrophotometric methods. The significant anionic concentrations of the groundwater samples follow the order Cl⁻ > HCO₃⁻ > SO₄²⁻ > NO₃⁻. The prominent anions in the discharge and recharge areas are Cl⁻ and HCO₃⁻, ranging from 0.22 ppm to 3.67 ppm and from 0.72 ppm to 2.59 ppm, respectively. Analysis of the groundwater head distribution and the groundwater flow vectors in the Abesan area confirmed that the Cl⁻ concentration is higher than the HCO₃⁻ concentration in recharge zones. Conversely, HCO₃⁻ concentrations exceed Cl⁻ concentrations inland towards the continent, so the HCO₃⁻ concentration in the discharge zones is higher than the Cl⁻ concentration. The anions were found to be closely related to the recharge and discharge areas, which was confirmed by comparison with activities such as the rainfall regime and anthropogenic activities in the Abesan area. A large percentage of the samples showed HCO₃⁻, Cl⁻, SO₄²⁻ and NO₃⁻ within the permissible limits of the WHO standard. Most of the samples revealed a Cl⁻/(CO₃²⁻ + HCO₃⁻) ratio higher than 0.5, indicating saltwater intrusion imprints in the groundwater of the study area. A Gibbs plot showed that most of the samples fall in the rock-dominance field, some in evaporation dominance, and a few in precipitation dominance.
The potential salinity and SO42-/Cl- ratios signify that most of the groundwater in Abesan is saline and falls in a water class unsuitable for irrigation. Continued dissolution of these anionic species may pose a significant threat to the inhabitants of the Abesan area in the near future.
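The saltwater-intrusion screening criterion used in the abstract (a Cl- / (CO3- + HCO3-) ratio above 0.5) and the reported anion dominance order can be sketched in a few lines of Python. The concentration values below are hypothetical illustrations, not measurements from the study.

```python
# Saltwater-intrusion screening via the Cl- / (CO3- + HCO3-) ratio,
# where a value above 0.5 suggests intrusion imprints.
# All concentrations are hypothetical illustrative values in ppm.

def intrusion_ratio(cl_ppm, co3_ppm, hco3_ppm):
    """Return the Cl- / (CO3 + HCO3) ratio; > 0.5 flags possible intrusion."""
    return cl_ppm / (co3_ppm + hco3_ppm)

sample = {"Cl": 3.67, "HCO3": 2.59, "SO4": 1.10, "CO3": 0.40, "NO3": 0.25}

# Rank the major anions to check the reported order Cl- > HCO3- > SO42- > NO3-
order = sorted(["Cl", "HCO3", "SO4", "NO3"], key=lambda a: sample[a], reverse=True)
print(order)  # dominance order for this hypothetical sample

ratio = intrusion_ratio(sample["Cl"], sample["CO3"], sample["HCO3"])
print(round(ratio, 2), ratio > 0.5)  # ratio and the intrusion flag
```

For this made-up sample the ratio is about 1.23, which would flag intrusion under the 0.5 criterion.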

Keywords: Abesan, Anionic species, Discharge, Groundwater flow, Recharge

Procedia PDF Downloads 124
2250 The ‘Quartered Head Technique’: A Simple, Reliable Way of Maintaining Leg Length and Offset during Total Hip Arthroplasty

Authors: M. Haruna, O. O. Onafowokan, G. Holt, K. Anderson, R. G. Middleton

Abstract:

Background: Requirements for satisfactory outcomes following total hip arthroplasty (THA) include restoration of femoral offset, version, and leg length. Various techniques have been described for restoring these biomechanical parameters, with leg length restoration the most commonly described. We describe a “quartered head technique” (QHT) which uses a stepwise series of femoral head osteotomies to identify and preserve the centre of rotation of the femoral head during THA, ensuring reconstruction of leg length, offset and stem version so that hip biomechanics are restored as near to normal as possible. This study aims to identify whether using the QHT during hip arthroplasty effectively restores leg length and femoral offset to within acceptable parameters. Methods: A retrospective review of 206 hips was carried out, leaving 124 hips in the final analysis after exclusions. Power analysis indicated a minimum of 37 patients was required. All operations were performed using an anterolateral approach by a single surgeon. All femoral implants were cemented, collarless, polished double-taper CPT® stems (Zimmer, Swindon, UK). Both cemented and uncemented acetabular components were used (Zimmer, Swindon, UK). Leg length, version, and offset were assessed intra-operatively and reproduced using the QHT. Post-operative leg length and femoral offset were determined and compared with the contralateral native hip, and the difference was calculated. To determine leg length discrepancy (LLD), we used the method described by Williamson & Reckling, which has been shown to be reproducible with a measurement error of ±1 mm. As references, the inferior margin of the acetabular teardrop and the most prominent point of the lesser trochanter were used. An LLD of less than 6 mm was chosen as acceptable. All peri-operative radiographs were assessed by two independent observers.
Results: The mean post-operative difference in leg length from the contralateral leg was +3.58 mm; 84% of patients (104/124) had an LLD within ±6 mm of the contralateral limb. The mean post-operative difference in offset from the contralateral leg was +3.88 mm (range -15 to +9 mm, median 3 mm); 90% of patients (112/124) were within ±6 mm of the contralateral limb's offset. No statistical difference was noted between observer measurements. Conclusion: The QHT provides a simple, inexpensive yet effective method of maintaining femoral leg length and offset during total hip arthroplasty. Combining this technique with pre-operative templating or the other techniques described may enable surgeons to reduce the discrepancies between the pre-operative state and post-operative outcome even further.
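The acceptability criterion used here, discrepancy versus the contralateral native hip within ±6 mm, reduces to a simple comparison; the sketch below illustrates it with hypothetical radiographic measurements, not the study's data.

```python
# Post-operative leg-length / offset check against the contralateral hip,
# mirroring the paper's +/-6 mm acceptability threshold. Measurement values
# below are hypothetical, not taken from the study.

def within_tolerance(operated_mm, contralateral_mm, tol_mm=6.0):
    """True when the discrepancy versus the native hip is within +/-tol_mm."""
    return abs(operated_mm - contralateral_mm) <= tol_mm

# hypothetical (operated, contralateral) leg-length measurements in mm
cohort = [(82.0, 80.0), (77.0, 80.5), (88.0, 80.0), (79.0, 80.0)]
discrepancies = [op - native for op, native in cohort]       # signed LLD
acceptable = sum(within_tolerance(op, native) for op, native in cohort)

print(discrepancies)
print(f"{acceptable}/{len(cohort)} within +/-6 mm")
```

The signed discrepancy preserves the direction of lengthening or shortening, while the acceptability count uses its absolute value, matching the way the paper reports both a mean difference and a within-tolerance percentage.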

Keywords: leg length discrepancy, technical tip, total hip arthroplasty, operative technique

Procedia PDF Downloads 81
2249 The Microstructure and Corrosion Behavior of High Entropy Metallic Layers Electrodeposited by Low and High-Temperature Methods

Authors: Zbigniew Szklarz, Aldona Garbacz-Klempka, Magdalena Bisztyga-Szklarz

Abstract:

Typical metallic alloys are based on one major alloying component, with additions of other elements intended to improve or modify certain properties, above all the mechanical properties. In 1995, however, a new concept of metallic alloys was described and defined. High Entropy Alloys (HEA) contain at least five alloying elements, each in an amount from 5 to 20 at.%. Common features of this type of alloy are an absence of intermetallic phases, high homogeneity of the microstructure and a unique chemical composition, which leads to materials with very high strength indicators, stable structures (also at high temperatures) and excellent corrosion resistance. Hence, HEA can successfully substitute for typical metallic alloys in various applications where sufficiently high properties are desirable. HEA are fabricated in a few ways: 1/ from the liquid phase, i.e. casting (usually arc melting); 2/ from the solid phase, i.e. powder metallurgy (sintering methods preceded by mechanical synthesis); 3/ from the gas phase, e.g. sputtering; or 4/ other deposition methods, such as electrodeposition from liquids. Different production methods create different microstructures of HEA, which can entail differences in their properties. The last two methods also allow coatings with HEA structures to be obtained, hereinafter referred to as High Entropy Films (HEF). With reference to the above, the crucial aim of this work was the optimization of the manufacturing process of multi-component metallic layers (HEF) by low- and high-temperature electrochemical deposition (ED). The low-temperature deposition process was carried out at ambient or elevated temperature (up to 100 °C) in an organic electrolyte. The high-temperature electrodeposition (several hundred degrees Celsius), in turn, allowed the HEF layer to be formed by electrochemical reduction of metals from molten salts. The basic chemical composition of the coatings was CoCrFeMnNi (known as Cantor’s alloy).
However, it was modified with other, selected elements such as Al or Cu. The optimization of the parameters that yield a HEF composition as homogeneous and equimolar as possible is the main result of the presented studies. To analyse and compare the microstructures, SEM/EBSD, TEM and XRD techniques were employed. Moreover, the determination of the corrosion resistance of the CoCrFeMnNi(Cu or Al) layers in selected electrolytes (i.e. organic and non-organic liquids) was no less important than the above-mentioned objectives.

Keywords: high entropy alloys, electrodeposition, corrosion behavior, microstructure

Procedia PDF Downloads 80
2248 The ‘Fun, Move, Play’ Project: Qualitative and Quantitative Findings from Irish Primary School Children (6-8 Years), Parents and Teachers

Authors: Jemma McGourty, Brid Delahunt, Fiona Hackett, Sharon Courtney, Richard English, Graham Russell, Sinéad O’Connor

Abstract:

Fundamental Movement Skills (FMS) mastery is considered essential for children’s ongoing, meaningful engagement in Physical Activity (PA). There has been a dearth of Irish research on baseline FMS and their development through intervention in young primary school children. In addition, as children’s participation in PA is heavily influenced by both parents and teachers, it is imperative to understand their attitudes and perceptions towards PA participation and its promotion in children. The ‘Fun, Move, Play’ Project investigated the effect of a 6-week play-based PA intervention on primary school children’s (aged 6-8 years) FMS while also exploring the attitudes and perceptions of their parents and teachers towards PA participation. The FMS intervention utilised a pre-post quasi-experimental design to determine the effect of the 6-week play-based PA intervention (devised from the iCoach Kids Programme) on the children’s FMS (N = 176: 90 girls and 86 boys; M = 7.2 years; SD = 0.48). Objective measures of 7 FMS (run, skip, vertical jump, static balance, stationary dribble, catch, kick) were made using a combination of the TGMD2 and Get Skilled, Get Active resources. One hundred parents (87 mothers; 13 fathers; M = 36 years; SD = 5.45) and 90 teachers (67 females; 23 males) completed surveys investigating their attitudes and perceptions towards PA participation. In addition, 19 of these parents and 9 of these teachers participated in semi-structured qualitative interviews to explore, in more depth, their views and perceptions of PA participation. Both the FMS data set and the survey responses were analysed using SPSS version 23 with appropriate statistical analyses. A thematic analysis framework was used to analyse the qualitative findings.
A significant improvement was observed in the children’s overall FMS score pre-post intervention (t = 16.67; df = 175; p < 0.001), and there were also significant improvements in each of the seven individual FMS measured, pre-post intervention. Findings from the parent surveys and interviews indicated that parents had positive attitudes towards PA, viewed it as important and supported their child’s PA participation. However, a lack of knowledge regarding the amount and intensity of PA that children should participate in emerged as a recurrent finding. There was also a significant positive correlation between the PA levels of parents and their children (r = .41; n = 100; p < .001). The teachers’ surveys and interviews likewise revealed a positive attitude towards PA and the impact it has on a child’s health and well-being. Teachers also reported feeling more confident teaching certain aspects of the PE curriculum (games and sports) than others (gymnastics, dance), where they appreciate working with specialist practitioners. Conclusion: A short-term PA intervention has a positive effect on children’s FMS. While parents are supportive of their child’s PA participation, there is a knowledge gap regarding the National PA guidelines for children. Teachers appreciate the importance of PA for children but face a number of challenges in its implementation and promotion.
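The two statistics reported here, a paired t-test on pre/post FMS scores and a Pearson correlation between parent and child PA levels, can be sketched with small stdlib-only helpers. The arrays below are short hypothetical examples, not the study’s data (which were analysed in SPSS v23 with N = 176 children and 100 parents).

```python
# Paired t statistic (pre/post repeated measures) and Pearson correlation,
# the two analyses named in the abstract. All data below are hypothetical.
import math

def paired_t(post, pre):
    """t statistic for a paired (repeated-measures) t-test."""
    diffs = [a - b for a, b in zip(post, pre)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

pre  = [12, 15, 11, 14, 13, 10, 16, 12]   # hypothetical pre-intervention FMS scores
post = [15, 18, 14, 16, 15, 13, 19, 14]   # hypothetical post-intervention FMS scores
print(f"t = {paired_t(post, pre):.2f}")   # large positive t -> improvement

parent_pa = [3, 5, 2, 6, 4, 7, 1, 5]      # hypothetical parent PA levels
child_pa  = [4, 6, 3, 7, 4, 8, 2, 5]      # hypothetical child PA levels
print(f"r = {pearson_r(parent_pa, child_pa):.2f}")
```

The paired design tests the mean of within-child differences rather than comparing two independent groups, which is why df = N - 1 (175 in the study) rather than 2N - 2.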

Keywords: fundamental movement skills, parents' attitudes to physical activity, short-term intervention, teachers' attitudes to physical activity

Procedia PDF Downloads 179
2247 Pioneering Conservation of Aquatic Ecosystems under Australian Law

Authors: Gina M. Newton

Abstract:

Australia’s Environment Protection and Biodiversity Conservation Act (EPBC Act) is the premier national law under which species and 'ecological communities' (i.e., ecosystem-like assemblages) can be formally recognised and 'listed' as threatened across all jurisdictions. The listing process involves assessment against a range of criteria (similar to the IUCN process) to demonstrate conservation status (i.e., vulnerable, endangered, critically endangered, etc.) based on the best available science. Over the past decade in Australia, there has been a transition from almost solely terrestrial listings to the first aquatic threatened ecological community (TEC, or ecosystem) listings (e.g., River Murray, Macquarie Marshes, Coastal Saltmarsh, Salt-wedge Estuaries). All constitute large areas, with some spanning multiple state jurisdictions. Development of these conservation and listing advices has enabled, for the first time, a more forensic analysis of three key factors across a range of aquatic and coastal ecosystems: (i) the contribution of invasive species to conservation status; (ii) how to demonstrate decline in 'ecological integrity' and attribute it to conservation status; and (iii) identification of related priority conservation actions for management. There is increasing global recognition of the disproportionate degree of biodiversity loss within aquatic ecosystems. In Australia, legislative protection at Commonwealth or State level remains one of the strongest conservation measures. Such laws have associated compliance mechanisms for breaches of protected status. They also trigger the need for environmental impact statements during applications for major developments (which may be denied). However, not all jurisdictions have such laws in place.
There remains much opposition to the listing of freshwater systems – for example, the River Murray (Australia's largest river) and Macquarie Marshes (an internationally significant wetland) were both disallowed by parliament four months after formal listing. This was mainly due to a change of government, dissent from a major industry sector, and a 'loophole' in the law. In Australia, at least over immediate to medium-term time frames, invasive species (aliens, native pests, pathogens, etc.) appear to be the number one biotic threat to the biodiversity and the ecological function and integrity of our aquatic ecosystems. Consequently, this should be considered a current priority for research, conservation, and management actions. Another key outcome of this analysis was the recognition that drawing together multiple lines of evidence to form a 'conservation narrative' is a more useful approach to assigning conservation status. This also helps to address a glaring gap in long-term ecological data sets in Australia, which often precludes a more empirical, data-driven approach. An important lesson also emerged – the recognition that while conservation must be underpinned by the best available scientific evidence, it remains a 'social and policy' goal rather than a 'scientific' goal. Communication, engagement, and 'politics' necessarily play a significant role in achieving conservation goals and need to be managed and resourced accordingly.

Keywords: aquatic ecosystem conservation, conservation law, ecological integrity, invasive species

Procedia PDF Downloads 132
2246 Increasing the Competitiveness of Batik Products as a Ready-To-Wear Cash Material Through Patterned Batik Innovation with Quilting Technique, at Klampar Batik Tourism Village

Authors: Urip Wahyuningsih, Indarti, Yuhri Inang Prihatina

Abstract:

The current development of batik art has given rise to various batik industries. The batik industry has emerged to meet the needs of the growing batik fashion market, producing competition among batik producers for a share of the existing batik clothing market. Such conditions also occur in Klampar Village, Pamekasan, Madura, one of the Batik Tourism Villages in Indonesia: it must continue to improve by maintaining the characteristics of Klampar Pamekasan Madura batik fashion while always innovating, so that it remains highly competitive and remains a popular batik tourism destination. Ready-to-wear clothing is clothing that is mass-produced in various sizes and colors, which can be purchased directly and worn easily. Patterned batik cloth is essentially batik cloth on which the pattern lines of the garment pieces are already arranged efficiently, so there is no need to lay out the garment pattern pieces on the batik cloth before cutting. Quilting can be defined as the art of combining fabric pieces of certain sizes and cuts to form unique motifs. Based on the above, a breakthrough production innovation is needed that does not abandon the characteristics of Klampar Pamekasan Madura batik, as one of the Batik Tourism Villages in Indonesia. One such innovation is creating ready-to-wear patterned batik clothing products using a quilting technique. The method used in this research is the Double Diamond Design Process method.
This method is divided into four phases: discover (designing the theme of the ready-to-wear patterned batik fashion innovation concept using quilting techniques in the Batik Village of Klampar, Pamekasan, Madura), define (determining the design brief and the challenges it presents to the design), develop (presenting prototypes that are developed, tested, reviewed and refined) and deliver (the selected designs are produced, pass final tests and are ready to be commercialized). The research produced ready-to-wear patterned batik products made with the quilting technique, validated by experts and accepted by the public.

Keywords: competitiveness, ready to wear, innovation, quilting, Klampar batik village

Procedia PDF Downloads 49
2245 Factors Associated with Death during Tuberculosis Treatment of Patients Co-Infected with HIV at a Tertiary Care Setting in Cameroon: An 8-Year Hospital-Based Retrospective Cohort Study (2006-2013)

Authors: A. A. Agbor, Jean Joel R. Bigna, Serges Clotaire Billong, Mathurin Cyrille Tejiokem, Gabriel L. Ekali, Claudia S. Plottel, Jean Jacques N. Noubiap, Hortence Abessolo, Roselyne Toby, Sinata Koulla-Shiro

Abstract:

Background: Contributors to fatal outcomes in patients undergoing tuberculosis (TB) treatment in the setting of HIV co-infection are poorly characterized, especially in sub-Saharan Africa. Our study’s aim was to assess factors associated with death in TB/HIV co-infected patients during the first 6 months of their TB treatment. Methods: We conducted a tertiary-care hospital-based retrospective cohort study from January 2006 to December 2013 at the Yaoundé Central Hospital, Cameroon. We reviewed medical records to identify hospitalized co-infected TB/HIV patients aged 15 years and older. Death was defined as any death occurring during TB treatment, as per the World Health Organization’s recommendations. Logistic regression analysis identified factors associated with death. Magnitudes of associations were expressed as adjusted odds ratios (aOR) with 95% confidence intervals. A p value < 0.05 was considered statistically significant. Results: The 337 patients enrolled had a mean age of 39.3 (+/- 10.3) years and the majority (54.3%) were women. TB treatment outcomes included: treatment success in 60.8% (n=205), death in 29.4% (n=99), not evaluated in 5.3% (n=18), loss to follow-up in 4.2% (n=14), and failure in 0.3% (n=1). After exclusion of patients lost to follow-up and not evaluated, death in TB/HIV co-infected patients during TB treatment was associated with: a TB diagnosis made before national implementation of guidelines on initiation of antiretroviral therapy (aOR = 2.50 [1.31-4.78]; p = 0.006), the presence of other AIDS-defining infections (aOR = 2.73 [1.27-5.86]; p = 0.010), non-AIDS comorbidities (aOR = 3.35 [1.37-8.21]; p = 0.008), not receiving co-trimoxazole prophylaxis (aOR = 3.61 [1.71-7.63]; p = 0.001), not receiving antiretroviral therapy (aOR = 2.45 [1.18-5.08]; p = 0.016), and CD4 cell counts < 50 cells/mm3 (aOR = 16.43 [1.05-258.04]; p = 0.047).
Conclusions: The success rate of anti-tuberculosis treatment among hospitalized TB/HIV co-infected patients in our setting is low. Mortality in the first 6 months of treatment was high and strongly associated with specific clinical factors, including states of greater immunosuppression, highlighting the urgent need for targeted interventions, including provision of antiretroviral therapy and co-trimoxazole prophylaxis, in order to enhance patient outcomes.
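The odds ratios with 95% confidence intervals reported above come from logistic regression; for a single factor, the unadjusted calculation reduces to a 2x2 table with a Wald interval, sketched below. The counts are hypothetical and do not reproduce the study’s adjusted estimates (e.g., aOR = 3.61 [1.71-7.63] for no co-trimoxazole prophylaxis), which additionally control for covariates.

```python
# Unadjusted odds ratio with a Wald 95% CI for a 2x2 exposure/outcome table.
# Counts below are hypothetical, not taken from the study.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI for a 2x2 table:
       a = exposed & died,   b = exposed & survived,
       c = unexposed & died, d = unexposed & survived."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: 40/100 deaths without prophylaxis, 59/237 with it
or_, lo, hi = odds_ratio_ci(40, 60, 59, 178)
print(f"OR = {or_:.2f} [{lo:.2f}-{hi:.2f}]")
```

An interval whose lower bound stays above 1 (as all the reported intervals do) corresponds to a statistically significant association at the 5% level.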

Keywords: TB/HIV co-infection, death, treatment outcomes, factors

Procedia PDF Downloads 446
2244 Research on Spatial Pattern and Spatial Structure of Human Settlement from the View of Spatial Anthropology – A Case Study of the Settlement in Sizhai Village, City of Zhuji, Zhejiang Province, China

Authors: Ni Zhenyu

Abstract:

A human settlement is defined by the social activities, social relationships and lifestyles generated within a certain territory; it is also a relatively independent territorial living space and domain composed of ordinary people. With the advancement of technology and the development of society, the idea presented in traditional research that human settlements are substantial organic wholes with strong autonomy is more often challenged nowadays. The spatial form of a human settlement is one of its most outstanding external expressions, with its own subjectivity and autonomy; nevertheless, the projections of social and economic activities onto certain territories are even more significant. What exactly is the relationship between human beings and the spatial form of the settlements they live in? This raises a question worth thinking over: whether a new view, a spatial anthropological one, can be constructed within the profession of architecture to review and respond to the spatial form of human settlements, based on the research theories and methods of cultural anthropology. This article interprets how the typical spatial form of human settlements in the basin area of Bac Giang Province is formed under the collective impacts of local social order, land use conditions, topographic features, and social contracts. The particular case of the settlement in Sizhai Village, City of Zhuji, Zhejiang Province is chosen for study. The spatial form of a human settlement is interpreted as a modeled whole shaped jointly by the dominant economy, social patterns, key symbolic marks and core values, etc.
The spatial form of a human settlement, being a structured existence, is a materialized, behavioral, and social space; it can be considered a place where human beings realize their behaviors, a path along which the continuity of those behaviors is kept, and, for social practice, a territory where the current social structure and social relationships are maintained, strengthened and rebuilt. This article aims to break through the understanding of the spatial form of human settlements as pure physical space and, furthermore, endeavors to highlight the autonomous status of human beings, focusing on their relationships with certain territories, their interpersonal relationships, man-earth relationships and the state of existence of human beings, elaborating the deeper connotations behind the spatial form of human settlements.

Keywords: spatial anthropology, human settlement, spatial pattern, spatial structure

Procedia PDF Downloads 411