Search results for: initial contact
182 Carbon-Foam Supported Electrocatalysts for Polymer Electrolyte Membrane Fuel Cells
Authors: Albert Mufundirwa, Satoru Yoshioka, K. Ogi, Takeharu Sugiyama, George F. Harrington, Bretislav Smid, Benjamin Cunning, Kazunari Sasaki, Akari Hayashi, Stephen M. Lyth
Abstract:
Polymer electrolyte membrane fuel cells (PEMFCs) are electrochemical energy conversion devices used for portable, residential, and vehicular applications due to their low emissions, high efficiency, and quick start-up characteristics. However, PEMFCs generally use expensive, Pt-based electrocatalysts at the electrodes. Due to the high cost and limited availability of platinum, research and development to either drastically reduce platinum loading or replace platinum with alternative catalysts is of paramount importance. A combination of high surface area supports and nano-structured active sites is essential for effective operation of catalysts. We synthesize carbon foam supports by thermal decomposition of sodium ethoxide, using a template-free, gram-scale, cheap, and scalable pyrolysis method. This carbon foam has a high-surface-area, highly porous, three-dimensional framework which is ideal for electrochemical applications. These carbon foams can have surface areas larger than 2500 m²/g, and electron microscopy reveals that they have micron-scale cells separated by few-layer graphene-like carbon walls. We applied this carbon foam as a platinum catalyst support, resulting in improved electrochemical surface area and mass activity for the oxygen reduction reaction (ORR) compared to carbon black. Similarly, silver-decorated carbon foams showed higher activity and efficiency for electrochemical carbon dioxide conversion than silver-decorated carbon black. A promising alternative to Pt-based catalysts for the ORR is iron-impregnated nitrogen-doped carbon catalysts (Fe-N-C). Doping carbon with nitrogen alters the chemical structure and modulates the electronic properties, allowing a degree of control over the catalytic properties. We have adapted our synthesis method to produce nitrogen-doped carbon foams with large surface area, using triethanolamine as a nitrogen feedstock, in a novel bottom-up protocol. These foams are then infiltrated with iron acetate (FeAc) and pyrolysed to form Fe-N-C foams. The resulting Fe-N-C foam catalysts have high initial activity (half-wave potential of 0.68 V vs. RHE), comparable to that of commercially available Pt-free catalysts (e.g., NPC-2000, Pajarito Powder) in acid solution. In alkaline solution, the Fe-N-C carbon foam catalysts have a half-wave potential of 0.89 V vs. RHE, which is higher than that of NPC-2000 by almost 10 mV, far outperforming platinum. However, the durability is still a problem at present. The lessons learned from X-ray absorption spectroscopy (XAS), transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), and electrochemical measurements will be used to carefully design Fe-N-C catalysts for higher-performance PEMFCs.
Keywords: carbon-foam, polymer electrolyte membrane fuel cells, platinum, Pt-free, Fe-N-C, ORR
181 The Influence of a Radio Intervention on Farmers’ Practices in Climate Change Mitigation and Adaptation in Kilifi, Kenya
Authors: Fiona Mwaniki
Abstract:
Climate change is considered a serious threat to sustainable development globally and one of the greatest ecological, economic, and social challenges of our time. The global demand for food is projected to increase by 60% by 2050. Smallholder farmers, who are vulnerable to the adverse effects of climate change, are expected to contribute to this projected demand. Effective climate change education and communication is therefore required for smallholder and subsistence farmers in order to build communities that are more climate change aware, prepared, and resilient. In Kenya, radio is the most important and dominant mass communication tool for agricultural extension. This study investigated the potential role of radio in influencing farmers’ understanding and use of climate change information. The broad aims of this study were three-fold: firstly, to identify Kenyan farmers’ perceptions and responses to the impacts of climate change; secondly, to develop radio programs that communicate climate change information to Kenyan farmers; and thirdly, to evaluate the impact of information disseminated through radio on farmers’ understanding and responses to climate change mitigation and adaptation. This study was conducted within the farming community of Kilifi County, located along the Kenyan coast. Education and communication about climate change were undertaken using radio to make the available information understandable to different social and cultural groups. A mixed methods pre- and post-intervention design that provided the opportunity for triangulating results from both quantitative and qualitative data was used. Quantitative and qualitative data were collected simultaneously: quantitative data were collected through semi-structured surveys with 421 farmers, and qualitative data were derived from 11 focus group interviews, six interviews with key informants, and nine climate change experts. The climate change knowledge gaps identified in the initial quantitative and qualitative data were used in developing the radio programs. Final quantitative and qualitative data collection and analysis enabled an assessment of the impact of climate change messages aired through radio on the farming community in Kilifi County. Results of this study indicate that 32% of the farmers listened to the radio programs and 26% implemented technologies aired on the programs that would help them adapt to climate change. The most adopted technologies were planting drought-tolerant crops including indigenous crop varieties, planting trees, water harvesting, and use of manure. The proportion of farmers who indicated they knew “a fair amount” about climate change increased significantly (Z = -5.1977, p < 0.001) from 33% (at the pre-intervention phase of this study) to 64% (post-intervention). However, 68% of the farmers felt they needed “a lot more” information on agriculture interventions (43%), access to financial resources (21%), and the effects of climate change (15%). The challenges farmers faced when adopting the interventions included lack of access to financial resources (18%), high cost of adaptation measures (17%), and poor access to water (10%). This study concludes that radio effectively complements other agricultural extension methods and has the potential to engage farmers on climate change issues and motivate them to take action.
Keywords: climate change, climate change intervention, farmers, radio
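The significance test reported above compares the proportion of farmers reporting "a fair amount" of climate change knowledge before and after the intervention. A minimal sketch of a two-proportion z-test of that kind is shown below; the equal sample sizes, the pooling choice, and the exact test used in the study are assumptions made for illustration, so the numbers it prints are not the study's.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for a difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)               # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))   # standard error of the difference
    z = (p1 - p2) / se
    p_value = 2 * norm.sf(abs(z))                          # two-sided p-value
    return z, p_value

# Hypothetical inputs: 33% pre-intervention vs 64% post-intervention, 421 respondents each
z, p = two_proportion_z(0.33, 421, 0.64, 421)
print(f"Z = {z:.3f}, p = {p:.3g}")
```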
180 Elements of Creativity and Innovation
Authors: Fadwa Al Bawardi
Abstract:
In March 2021, the Saudi Arabian Council of Ministers issued a decision to form a committee called the "Higher Committee for Research, Development and Innovation," a committee linked to the Council of Economic and Development Affairs, chaired by the Chairman of the Council of Economic and Development Affairs, and concerned with the development of the research, development and innovation sector in the Kingdom. In order to talk about the dimensions of this wonderful step, let us first try to answer the following questions. Is there a difference between creativity and innovation? What are the factors of creativity in the individual? Are they mental, genetic factors, or are they factors that an individual acquires through learning? The methodology included surveys conducted on more than 500 individuals, males and females, between the ages of 18 and 60. And the answer is: "creativity" is the creation of a new idea, while "innovation" is the development of an already existing idea in a new, successful way. They are two sides of the same coin, as the "creative idea" needs to be developed and transformed into an "innovation" in order to achieve either strategic achievements at the level of countries and institutions to enhance organizational intelligence, or achievements at the level of individuals. For example, the beginning of smartphones was just a creative idea from IBM in 1994, but the actual successful innovation for the manufacture, development, and marketing of these phones came through Apple later. Nor does creativity have to be hereditary. There are three basic factors for creativity. The first factor is "the presence of a challenge or an obstacle" that the individual faces and seeks to overcome through thinking, even if that thinking requires a long time. The second factor is the "surrounding environment" of the individual, which includes science, training, experience gained, the ability to use techniques, as well as the ability to assess whether the idea is feasible or otherwise. To achieve this factor, the individual must be aware of their own skills, strengths, hobbies, and the aspects in which they can be creative, and the individual must also be self-confident and courageous enough to suggest those new ideas. The third factor is "Experience and the Ability to Accept Risk and Lack of Initial Success," and then learning from mistakes and trying again tirelessly. There are some tools and techniques that help the individual reach creative and innovative ideas, such as the Mind Maps tool, through which the available information is drawn by writing a short word for each piece of information and arranging all other relevant information through clear lines, which helps in logical thinking and correct vision. There is also a tool called "Flow Charts", which are graphics that show the sequence of data and expected results according to an ordered scenario of events and workflow steps, giving clarity to the ideas, their sequence, and what is expected of them. There are also other useful tools, such as the Six Hats tool, which can be applied by a group of people for effective planning and detailed logical thinking, and the Snowball tool. All of these tools greatly help in organizing and arranging mental thoughts and making the right decisions, and they are easy to learn, apply, and use to reach creative and innovative solutions.
The detailed figures and results of the conducted surveys are available upon request, with charts showing the percentages based on gender, age groups, and job categories.
Keywords: innovation, creativity, factors, tools
179 Optimal-Based Structural Vibration Attenuation Using Nonlinear Tuned Vibration Absorbers
Authors: Pawel Martynowicz
Abstract:
Vibrations are a crucial problem for slender structures such as towers, masts, chimneys, wind turbines, bridges, high buildings, etc., which is why most of them are equipped with vibration attenuation or fatigue reduction solutions. In this work, a slender structure (i.e., a wind turbine tower-nacelle model) equipped with nonlinear, semiactive tuned vibration absorber(s) is analyzed. For the purposes of this study, magnetorheological (MR) dampers are used as semiactive actuators. Several optimal-based approaches to structural vibration attenuation are investigated against the standard ‘ground-hook’ law and passive tuned vibration absorber(s) implementations. The common approach to optimal control of nonlinear systems is offline computation of the optimal solution; however, the open-loop control determined in this way suffers from a lack of robustness to uncertainties (e.g., unmodelled dynamics, perturbations of external forces or initial conditions), and thus perturbation control techniques are often used. However, proper linearization may be an issue for highly nonlinear systems with implicit relations between state, co-state, and control. The main contribution of the author is the development as well as numerical and experimental verification of Pontryagin maximum-principle-based vibration control concepts that directly produce the actuator control input (not the demanded force); thus, a force-tracking algorithm that results in control inaccuracy is entirely omitted. These concepts, including one-step optimal control, quasi-optimal control, and an optimal-based modified ‘ground-hook’ law, can be directly implemented in online and real-time feedback control for periodic (or semi-periodic) disturbances with invariant or time-varying parameters, as well as for non-periodic, transient, or random disturbances, which is a limitation of some other known solutions. No offline calculation, excitation/disturbance assumption, or vibration frequency determination is necessary; moreover, all of the nonlinear actuator (MR damper) force constraints, i.e., no active forces, lower and upper saturation limits, hysteresis-type dynamics, etc., are embedded in the control technique, so the solution is optimal or suboptimal for the assumed actuator, respecting its limitations. Depending on the selected method variant, a moderate or decisive reduction in the computational load is possible compared to other methods of nonlinear optimal control, while assuring the quality and robustness of the vibration reduction system, as well as considering multi-pronged operational aspects, such as possible minimization of the amplitude of the deflection and acceleration of the vibrating structure, its potential and/or kinetic energy, the required actuator force, the control input (e.g., electric current in the MR damper coil), and/or the stroke amplitude. The developed solutions are characterized by high vibration reduction efficiency: the obtained maximum values of the dynamic amplification factor are close to 2.0, while for the best of the passive systems, these values exceed 3.5.
Keywords: magnetorheological damper, nonlinear tuned vibration absorber, optimal control, real-time structural vibration attenuation, wind turbines
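The ‘ground-hook’ law mentioned above is the baseline against which the optimal-based controllers are benchmarked. The sketch below shows a generic on-off ground-hook rule for a semi-active tuned vibration absorber with an MR damper, followed by an illustrative linear map from the commanded damping level to a coil current; the damping limits, current limit, and the mapping are assumptions for illustration and do not represent the authors' Pontryagin-based controllers.

```python
def on_off_groundhook(v_structure, v_relative, c_max=5.0e3, c_min=0.5e3):
    """On-off 'ground-hook' law for a semi-active tuned vibration absorber.

    v_structure : velocity of the primary (protected) structure [m/s]
    v_relative  : velocity of the absorber mass relative to the structure [m/s]
    Returns the commanded damping coefficient for the MR damper [N*s/m].
    """
    # Apply high damping only when the achievable damper force opposes
    # the motion of the primary structure; otherwise fall back to the minimum.
    if v_structure * v_relative >= 0.0:
        return c_max
    return c_min

def damping_to_current(c_cmd, c_min=0.5e3, c_max=5.0e3, i_max=2.0):
    """Map the commanded damping level to an MR-damper coil current (illustrative linear map)."""
    frac = (c_cmd - c_min) / (c_max - c_min)
    return max(0.0, min(i_max, frac * i_max))

# Example: structure moving upward while the absorber moves away from it
c = on_off_groundhook(v_structure=0.12, v_relative=0.30)
print(c, damping_to_current(c))
```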
178 Design and Development of Graphene Oxide Modified by Chitosan Nanosheets Showing pH-Sensitive Surface as a Smart Drug Delivery System for Control Release of Doxorubicin
Authors: Parisa Shirzadeh
Abstract:
Traditional drug delivery systems, in which drugs are taken by patients in multiple stages and at specified intervals, do not meet the needs of up-to-date drug delivery. In today's world, we are dealing with a huge number of recombinant peptide and protein drugs and analogues of hormones in the body, most of which are made with genetic engineering techniques. Most of these drugs are used to treat critical diseases such as cancer. Due to the limitations of the traditional method, researchers have sought ways to solve its problems to a large extent. Following these efforts, controlled drug release systems were introduced, which have many advantages. Using controlled release of the drug in the body, the concentration of the drug is kept at a certain level and, over a short time, delivered at a higher rate. Graphene is a natural material that is biodegradable and non-toxic; compared to carbon nanotubes, its price is lower, and it is cost-effective for industrialization. On the other hand, the presence of highly active and wide surfaces on graphene plates makes it more effective to modify graphene than carbon nanotubes. Graphene oxide is often synthesized using concentrated oxidizers such as sulfuric acid, nitric acid, and potassium permanganate based on the Hummers method. In comparison with the initial graphene, the resulting graphene oxide is heavier and has carboxyl, hydroxyl, and epoxy groups. Therefore, graphene oxide is very hydrophilic, dissolves easily in water, and forms a stable solution. On the other hand, because the hydroxyl, carboxyl, and epoxy groups created on the surface are highly reactive, they can react with other functional groups such as amines, esters, polymers, etc., connecting them to the surface of graphene and bringing it new features. In fact, it can be concluded that the creation of hydroxyl, carboxyl, and epoxy groups, that is, graphene oxidation, is the first step in creating other functional groups on the surface of graphene. Chitosan is a natural polymer and does not cause toxicity in the body. Due to its chemical structure, with OH and NH groups, it is suitable for binding to graphene oxide and increasing its solubility in aqueous solutions. Here, graphene oxide (GO) has been covalently modified by chitosan (CS) and developed for controlled release of doxorubicin (DOX). In this study, GO is produced by the Hummers method under acidic conditions. Then, it is chlorinated by oxalyl chloride to increase its reactivity toward amines. After that, in the presence of chitosan, an amidation reaction was performed to form amide linkages, and doxorubicin was attached to the carrier surface by π-π interaction in phosphate buffer. GO, GO-CS, and GO-CS-DOX were characterized by FT-IR, Raman, TGA, and SEM. The ability to load and release the drug was determined by UV-Visible spectroscopy. The loading results showed a high capacity of DOX absorption (99%), and pH dependence was identified in the release of DOX from the GO-CS nanosheet at pH 5.3 and 7.4, which showed a fast release rate in acidic conditions.
Keywords: graphene oxide, chitosan, nanosheet, controlled drug release, doxorubicin
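Loading and release in the abstract above are quantified by UV-Visible spectroscopy. Below is a minimal, hypothetical sketch of how a loading efficiency of that kind is typically estimated from absorbance readings via a Beer-Lambert calibration; the calibration slope, intercept, absorbance values, and solution volume are placeholders, not values from the study.

```python
def loading_efficiency(abs_initial, abs_supernatant, slope, intercept, volume_ml):
    """Estimate drug loading from UV-Vis absorbance of the DOX solution before loading
    and of the supernatant after loading, using a linear Beer-Lambert calibration."""
    c0 = (abs_initial - intercept) / slope          # initial DOX concentration [mg/mL]
    c_free = (abs_supernatant - intercept) / slope  # unbound DOX after loading [mg/mL]
    loaded_mg = (c0 - c_free) * volume_ml           # mass of DOX taken up by the carrier
    efficiency = 100.0 * (c0 - c_free) / c0         # percentage of drug adsorbed
    return loaded_mg, efficiency

# Placeholder calibration and readings (absorbance = slope * concentration + intercept)
loaded, eff = loading_efficiency(abs_initial=0.95, abs_supernatant=0.01,
                                 slope=1.9, intercept=0.0, volume_ml=5.0)
print(f"loaded = {loaded:.2f} mg, efficiency = {eff:.1f}%")
```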
177 Nature of Forest Fragmentation Owing to Human Population along Elevation Gradient in Different Countries in Hindu Kush Himalaya Mountains
Authors: Pulakesh Das, Mukunda Dev Behera, Manchiraju Sri Ramachandra Murthy
Abstract:
Large numbers of people living in and around the Hindu Kush Himalaya (HKH) region depend on this diverse mountainous region for ecosystem services. Following the global trend, this region is also experiencing rapid population growth and increasing demand for timber and agricultural land. The eight countries sharing the HKH region have different forest resource utilization and conservation policies that exert varying pressures on the forest ecosystem. This has created variable spatial and altitudinal gradients in the rate of deforestation and the corresponding forest patch fragmentation. The quantitative relationship between fragmentation and demography along the elevation gradient has not previously been established for the HKH. This study was carried out to attribute the overall and differing nature of landscape fragmentation along the altitudinal gradient to the demography of each of the sharing countries. We used tree canopy cover data derived from Landsat data to analyze the deforestation and afforestation rates and the corresponding landscape fragmentation observed during 2000-2010. The area-weighted mean radius of gyration (AMN radius of gyration) was computed owing to its advantage as a spatial indicator of fragmentation over non-spatial fragmentation indices. Using the subtraction method, the change in fragmentation during 2000-2010 was computed. Using the tree canopy cover data as a surrogate of forest cover, the highest forest loss was observed in Myanmar, followed by China, India, Bangladesh, Nepal, Pakistan, Bhutan, and Afghanistan. However, the sequence of fragmentation was different: the maximum fragmentation was observed in Myanmar, followed by India, China, Bangladesh, and Bhutan, whereas an increase in fragmentation was seen following the sequence Nepal, Pakistan, and Afghanistan. Using the SRTM-derived DEM, we observed a higher rate of fragmentation up to 2400 m, which corroborated the high human population there in 2000 and 2010. To derive the nature of fragmentation along the altitudinal gradient, the Statistica software was used, where a user-defined function was fitted by regression applying the Gauss-Newton estimation method with 50 iterations. We observed an overall logarithmic decrease in fragmentation change (area-weighted mean radius of gyration), forest cover loss, and population growth during 2000-2010 along the elevation gradient, with very high R² values (0.889, 0.895, and 0.944, respectively). The observed negative logarithmic function, with the major contribution in the initial elevation gradients, suggests gap-filling afforestation at lower altitudes to enhance forest patch connectivity. Our findings on the pattern of forest fragmentation and human population across the elevation gradient in the HKH region will have policy-level implications for the different nations and will help in characterizing hotspots of change. The availability of free satellite-derived data products on forest cover and DEM, gridded data on demography, and the utility of geospatial tools helped in a quick evaluation of forest fragmentation vis-a-vis the human impact pattern along the elevation gradient in the HKH.
Keywords: area-weighted mean radius of gyration, fragmentation, human impact, tree canopy cover
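The elevation-gradient relationship above is fitted with a negative logarithmic function using Gauss-Newton regression. A minimal sketch of such a fit is shown below using SciPy's nonlinear least-squares routine (a Levenberg-Marquardt/Gauss-Newton-type method); the model form y = a - b*ln(elevation), the elevation bins, and the response values are synthetic placeholders rather than the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_model(elev, a, b):
    """Negative logarithmic decrease with elevation: y = a - b * ln(elevation)."""
    return a - b * np.log(elev)

# Synthetic placeholder data: elevation bins [m] and a fragmentation-change metric
elevation = np.array([200, 600, 1000, 1400, 1800, 2200, 2600, 3000], dtype=float)
frag_change = np.array([5.2, 3.9, 3.1, 2.6, 2.3, 2.1, 1.9, 1.8])

popt, _ = curve_fit(log_model, elevation, frag_change, p0=(10.0, 1.0))
pred = log_model(elevation, *popt)
ss_res = np.sum((frag_change - pred) ** 2)
ss_tot = np.sum((frag_change - frag_change.mean()) ** 2)
print(f"a = {popt[0]:.2f}, b = {popt[1]:.2f}, R^2 = {1 - ss_res / ss_tot:.3f}")
```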
176 Management of Femoral Neck Stress Fractures at a Specialist Centre and Predictive Factors to Return to Activity Time: An Audit
Authors: Charlotte K. Lee, Henrique R. N. Aguiar, Ralph Smith, James Baldock, Sam Botchey
Abstract:
Background: Femoral neck stress fractures (FNSF) are uncommon, making up 1 to 7.2% of stress fractures in healthy subjects. FNSFs are prevalent in young women, military recruits, endurance athletes, and individuals with energy deficiency syndrome or female athlete triad. Presentation is often non-specific, and the injury is often misdiagnosed following the initial examination. There is limited research addressing the return-to-activity time after FNSF. Previous studies have demonstrated prognostic time predictions based on various imaging techniques. Here, (1) OxSport clinic FNSF practice standards are retrospectively reviewed, (2) FNSF cohort demographics are examined, and (3) regression models are used to predict return-to-activity prognosis and consequently determine bone stress risk factors. Methods: Patients with a diagnosis of FNSF attending the OxSport clinic between 01/06/2020 and 01/01/2020 were selected from the Rheumatology Assessment Database Innovation in Oxford (RhADiOn) and the OxSport Stress Fracture Database (n = 14). (1) Clinical practice was audited against five criteria based on local and National Institute for Health and Care Excellence guidance, with a 100% standard. (2) Demographics of the FNSF cohort were examined with Student’s t-test. (3) Lastly, linear regression and random forest regression models were used on this patient cohort to predict return-to-activity time. Consequently, an analysis of feature importance was conducted after fitting each model. Results: OxSport clinical practice met the standard (100%) in 3/5 criteria. The criteria not met were patient waiting times and documentation of all bone stress risk factors. Importantly, analysis of patient demographics showed that, of the population with complete bone stress risk factor assessments, 53% were positive for modifiable bone stress risk factors. Lastly, linear regression analysis was utilized to identify demographic factors that predicted return-to-activity time [R² = 79.172%; average error 0.226]. This analysis identified four key variables that predicted return-to-activity time: vitamin D level, total hip DEXA T value, femoral neck DEXA T value, and history of an eating disorder/disordered eating. Furthermore, random forest regression models were employed for this task [R² = 97.805%; average error 0.024]. Analysis of the importance of each feature again identified a set of 4 variables, 3 of which matched the linear regression analysis (vitamin D level, total hip DEXA T value, and femoral neck DEXA T value), and a fourth: age. Conclusion: OxSport clinical practice could be improved by more comprehensively evaluating bone stress risk factors. The importance of this evaluation is demonstrated by the proportion of the population found positive for these risk factors. Using this cohort, potential bone stress risk factors that significantly impacted return-to-activity prognosis were predicted using regression models.
Keywords: eating disorder, bone stress risk factor, femoral neck stress fracture, vitamin D
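The prediction step described above fits a linear regression and a random forest regression to the cohort and then inspects feature importances. A minimal sketch of that workflow with scikit-learn is shown below; the cohort here is synthetic (random numbers standing in for the four named predictors and for return-to-activity time), so the scores and importances it prints are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

# Synthetic placeholder cohort: the four predictors named in the abstract
rng = np.random.default_rng(0)
n = 14
X = np.column_stack([
    rng.normal(50, 20, n),     # vitamin D level [nmol/L]
    rng.normal(-0.5, 1.0, n),  # total hip DEXA T value
    rng.normal(-0.8, 1.0, n),  # femoral neck DEXA T value
    rng.integers(0, 2, n),     # history of disordered eating (0/1)
])
# Placeholder outcome: weeks until return to activity
y = 12 - 0.05 * X[:, 0] - 2 * X[:, 2] + 3 * X[:, 3] + rng.normal(0, 1, n)

linear = LinearRegression().fit(X, y)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

print("linear R^2 :", linear.score(X, y))
print("forest R^2 :", forest.score(X, y))
print("feature importances:", forest.feature_importances_)
```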
175 Microbial Analysis of Street Vended Ready-to-Eat Meat around Thohoyandou Area, Vhembe District, Limpopo Province, RSA
Authors: Tshimangadzo Jeanette Raedani, Edgar Musie, Afsatou Traore
Abstract:
Background: Street-vended meats, including chicken, pork, and beef, are popular in urban areas worldwide due to their convenience and affordability. However, these meats often pose a significant risk of foodborne diseases. The high water activity, protein content, and nearly neutral pH of meat create conditions conducive to the growth of pathogenic bacteria. Street foods, particularly meats, are frequently linked to outbreaks of foodborne illnesses due to potential contamination from improper handling and preparation. This study aimed to assess the microbial quality and safety of street-vended ready-to-eat meat sold in the Thohoyandou area. Method: The study involved collecting 168 samples of street-vended meat, split evenly between chicken (n=84) and beef (n=84), from various vendors around Thohoyandou. The samples were randomly selected and transported under sterile conditions to the Department of Food Microbiology at the University of Venda for analysis. Each 10-gram sample was cultured in selective media: MSA for Staphylococcus aureus, EMB for E. coli O157, XLD agar for Salmonella, and Sorbitol MacConkey for Shigella. After initial culturing, presumptive colonies were sub-cultured for purification and identified through Gram staining and biochemical tests, including catalase, API 20E, the Kligler Iron Agar test, and the Vitek 2 system. Antibiotic susceptibility was tested using agents such as Ampicillin, Chloramphenicol, Penicillin, Neomycin, Tetracycline, Streptomycin, and Amoxicillin. Molecular characterization was performed to identify E. coli pathotypes using multiplex PCR. Results: Out of 168 samples tested, 32 (19%) were positive for Staphylococcus spp., with the highest prevalence found in cooked chicken meat. The most common Staphylococcus species identified were S. xylosus (13.2%) and S. saprophyticus (10.5%). E. coli was present in 29 (19.3%) of the samples, with the highest prevalence in fried chicken. Antibiotic susceptibility testing showed that 100% of E. coli isolates were resistant to Ampicillin, Tetracycline, and Penicillin, but 100% were susceptible to Neomycin. Staphylococcus spp. isolates were also 100% resistant to Ampicillin and 100% susceptible to Neomycin. The study detected a range of virulence genes in E. coli, with prevalence rates from 13.33% to 86.67%. The identified pathotypes included EPEC, EHEC, ETEC, EAEC, and EIEC, with many isolates showing mixed pathotypes. Conclusion: The study highlighted that the microbial quality and safety of street-vended meats in Thohoyandou are inadequate, rendering them unsafe for consumption. The presence of pathogenic microorganisms in both beef and chicken samples indicates significant risks associated with poor personal hygiene and food preparation practices. This underscores the need for improved monitoring and stricter food safety measures to prevent foodborne diseases and ensure consumer safety.
Keywords: meat, microbial analysis, street vendors, E. coli
174 Crisis Management and Corporate Political Activism: A Qualitative Analysis of Online Reactions toward Tesla
Authors: Roxana D. Maiorescu-Murphy
Abstract:
In the US, corporations have recently embraced political stances in an attempt to respond to the external pressure exerted by activist groups. To date, research in this area remains in its infancy, and few studies have been conducted on the way stakeholder groups respond to corporate political advocacy in general, and in the immediacy of such a corporate announcement in particular. The current study aims to fill in this research void. In addition, the study contributes to an emerging trajectory in the field of crisis management by focusing on the delineation between crises (unexpected events related to products and services) and scandals (crises that spur moral outrage). The present study looked at online reactions in the aftermath of Elon Musk’s endorsement of the Republican party on Twitter. Two data sets were collected from Twitter following two political endorsements made by Elon Musk on May 18, 2022, and June 15, 2022, respectively. The total sample of analysis stemming from the two data sets consisted of N = 1,374 user comments written in response to Musk’s initial tweets. Given the paucity of studies in the preceding research areas, the analysis employed a case study methodology, used in circumstances in which the phenomena to be studied have not been researched before. According to the case study methodology, which answers the questions of how and why a phenomenon occurs, this study responded to the research questions of how online users perceived Tesla and why they did so. The data were analyzed in NVivo by use of the grounded theory methodology, which implied multiple exposures to the text and the undertaking of an inductive-deductive approach. Through multiple exposures to the data, the researcher ascertained the common themes and subthemes in the online discussion. Each theme and subtheme was later defined and labeled. Additional exposures to the text ensured that these were exhaustive. The results revealed that the CEO’s political endorsements triggered moral outrage, leading to Tesla facing a scandal as opposed to a crisis. The moral outrage revolved around the stakeholders’ predominant rejection of a perceived intrusion of an influential figure on a domain reserved for voters. As expected, Musk’s political endorsements led to polarizing opinions, and those who opposed his views engaged in online activism aimed at boycotting the Tesla brand. These findings reveal that the moral outrage that characterizes a scandal requires communication practices that differ from those that practitioners currently borrow from the field of crisis management. Specifically, because scandals flourish in online settings, practitioners should regularly monitor stakeholder perceptions and address them in real time. While promptness is essential when managing crises, it becomes crucial to respond immediately as a scandal is flourishing online. Finally, attempts should be made to distance a brand, its products, and its CEO from the latter’s political views.
Keywords: crisis management, communication management, Tesla, corporate political activism, Elon Musk
173 Use of End-Of-Life Footwear Polymer EVA (Ethylene Vinyl Acetate) and PU (Polyurethane) for Bitumen Modification
Authors: Lucas Nascimento, Ana Rita, Margarida Soares, André Ribeiro, Zlatina Genisheva, Hugo Silva, Joana Carvalho
Abstract:
The footwear industry is an essential fashion industry, focused on producing various types of footwear, such as shoes, boots, sandals, sneakers, and slippers. Global footwear consumption has doubled every 20 years since the 1950s. It is estimated that in 1950, each person consumed one new pair of shoes yearly; by 2005, over 20 billion pairs of shoes were consumed. To meet global footwear demand, production reached $24.2 billion, equivalent to about $74 per person in the United States. This means three new pairs of shoes per person worldwide. The issue of footwear waste is related to the fact that shoe production can generate a large amount of waste, much of which is difficult to recycle or reuse. This waste includes scraps of leather, fabric, rubber, plastics, toxic chemicals, and other materials. The search for alternative solutions for waste treatment and valorization is increasingly relevant in the current context, mainly when focused on utilizing waste as a source of substitute materials. From the perspective of the new circular economy paradigm, this approach is of utmost importance, as it aims to preserve natural resources and minimize the environmental impact associated with sending waste to landfills. In this sense, the incorporation of waste into industrial sectors that allow for the recovery of large volumes, such as road construction, becomes an urgent and necessary solution from an environmental standpoint. This study explores the use of plastic waste from the footwear industry as a substitute for virgin polymers in bitumen modification, a solution that points toward a more sustainable future. Replacing conventional polymers with plastic waste in asphalt composition reduces the amount of waste sent to landfills and offers an opportunity to extend the lifespan of road infrastructure. By incorporating waste into construction materials, it is possible to reduce the consumption of natural resources and the emission of pollutants, promoting a more circular and efficient economy. In the initial phase of this study, waste materials from end-of-life footwear were selected, and the plastic waste with the highest potential for application was separated. Based on a literature review, EVA (ethylene vinyl acetate) and PU (polyurethane) were identified as the polymers suitable for modifying 50/70 grade bitumen. Each polymer was analysed at concentrations of 3% and 5%. The production process involved fragmentation of the polymer to a size of 4 millimetres; the materials were then heated to 180 ºC and mixed for 10 minutes at low speed, after which the blend was mixed for 30 minutes in a high-speed mixer. The tests included penetration, softening point, viscosity, and rheological assessments. Based on the results obtained from the tests, the mixtures with EVA demonstrated better performance than those with PU, as EVA showed more resistance to temperature, a better viscosity curve, and greater elastic recovery in rheology.
Keywords: footwear waste, hot asphalt pavement, modified bitumen, polymers
172 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms
Authors: Abdul Rehman, Bo Liu
Abstract:
Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses. These losses contribute almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques used to reduce the secondary flow loss. In this paper, the non-axisymmetric endwall profile construction and optimization for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA Fine/Design3D coupled with Fine/Turbo was used for the numerical investigation, the design of experiments, and the optimization. All the flow simulations were conducted using steady RANS with Spalart-Allmaras as the turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created using a perturbation law based on Bezier curves. Each cut, having multiple control points, was created along the virtual streamlines in the blade channel. For the design of experiments, each sample was arbitrarily generated based on values automatically chosen for the control points defined during parameterization. The optimization was achieved using two algorithms, i.e., a stochastic algorithm and a gradient-based algorithm. For the stochastic algorithm, a genetic algorithm based on an artificial neural network was used as the optimization method in order to achieve the global optimum. The evaluation of the successive design iterations was performed using the artificial neural network prior to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming, as it requires derivative information of the objective function. The objective was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant. The performance was quantified by using a multi-objective function. Apart from these two classes of optimization methods, there were four optimization cases, i.e., the hub only, the shroud only, and the combination of hub and shroud; for the fourth case, the shroud endwall was optimized using the optimized hub endwall geometry. The hub optimization resulted in an increase in efficiency due to more homogeneous inlet conditions for the rotor. The adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub was increased. The shroud optimization resulted in an increase in efficiency; total pressure loss and entropy were reduced. The combination of hub and shroud did not show the overwhelming results that were achieved for the individual cases of the hub and the shroud. This may be caused by the fact that there were too many control variables. The fourth case of optimization showed the best result because the optimized hub was used as the initial geometry to optimize the shroud. The efficiency was increased more than in the individual cases of optimization, with a mass flow rate equal to that of the baseline design of the turbine. The results of the artificial neural network and the conjugate gradient method were compared.
Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization
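The two optimization routes described above (a genetic algorithm working on an artificial-neural-network surrogate, and a conjugate-gradient search using solver-derived gradients) can be illustrated in heavily compressed form. The sketch below trains a small neural-network surrogate on design-of-experiments samples of a placeholder objective, searches it with a stochastic optimizer, and refines the result with a conjugate-gradient method; the objective function, sample counts, bounds, and network size are assumptions, and the real workflow couples NUMECA Fine/Design3D and Fine/Turbo rather than SciPy and scikit-learn.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import differential_evolution, minimize

# Placeholder objective standing in for a CFD evaluation of two endwall
# control-point amplitudes (the real evaluation is a steady RANS solution).
def expensive_objective(x):
    return (x[0] - 0.3) ** 2 + 0.5 * (x[1] + 0.2) ** 2 + 0.05 * np.sin(8 * x[0])

# 1) Design of experiments: random samples of the control-point amplitudes.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(60, 2))
y = np.array([expensive_objective(x) for x in X])

# 2) Artificial-neural-network surrogate trained on the DoE samples.
surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                         random_state=0).fit(X, y)

def predict(x):
    return float(surrogate.predict(np.atleast_2d(x))[0])

# 3a) Stochastic search of the surrogate (stand-in for the genetic algorithm).
res_stoch = differential_evolution(predict, bounds=[(-1, 1), (-1, 1)], seed=0)

# 3b) Gradient-based refinement with a conjugate-gradient method.
res_grad = minimize(predict, x0=res_stoch.x, method="CG")
print("stochastic optimum:", res_stoch.x, "gradient-refined:", res_grad.x)
```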
171 Biofuels from Hybrid Poplar: Using Biochemicals and Wastewater Treatment as Opportunities for Early Adoption
Authors: Kevin W. Zobrist, Patricia A. Townsend, Nora M. Haider
Abstract:
Advanced Hardwood Biofuels Northwest (AHB) is a consortium funded by the United States Department of Agriculture (USDA) to research the potential for a system to produce advanced biofuels (jet fuel, diesel, and gasoline) from hybrid poplar in the Pacific Northwest region of the U.S. An Extension team was established as part of the project to examine community readiness and willingness to adopt hybrid poplar as a purpose-grown bioenergy crop. The Extension team surveyed key stakeholder groups, including growers, Extension professionals, policy makers, and environmental groups, to examine attitudes and concerns about growing hybrid poplar for biofuels. The surveys found broad skepticism about the viability of such a system. The top concern for most stakeholder groups was economic viability and the availability of predictable markets. Growers had additional concerns stemming from negative past experience with hybrid poplar as an unprofitable endeavor for pulp and paper production. Additional barriers identified included overall land availability and the availability of water and water rights for irrigation in dry areas of the region. Since the beginning of the project, oil and natural gas prices have plummeted due to rapid increases in domestic production. This has exacerbated the problem of economic viability by making biofuels even less competitive with fossil fuels. However, the AHB project has identified intermediate market opportunities to use poplar as a renewable source for other biochemicals otherwise produced by petroleum refineries, such as acetic acid, ethyl acetate, ethanol, and ethylene. These chemicals can be produced at a lower cost with higher yields and higher, more stable prices. Despite these promising market opportunities, the survey results suggest that it will still be challenging to induce growers to adopt hybrid poplar. Early adopters will be needed to establish an initial feedstock supply for a budding industry. Through demonstration sites and outreach events to various stakeholder groups, the project attracted interest from wastewater treatment facilities, since these facilities are already growing hybrid poplar plantations for applying biosolids and treated wastewater for further purification, clarification, and nutrient control through hybrid poplar’s phytoremediation capabilities. Since these facilities are already using hybrid poplar, selling the wood as feedstock for a biorefinery would be an added bonus rather than something requiring a high rate of return to compete with other crops and land uses. By holding regional workshops and conferences with wastewater professionals, AHB Extension has found strong interest from wastewater treatment operators. In conclusion, there are several significant barriers to developing a successful system for producing biofuels from hybrid poplar, with the largest barrier being economic viability. However, there is potential for wastewater treatment facilities to serve as early adopters of hybrid poplar production for intermediate biochemicals and eventually biofuels.
Keywords: hybrid poplar, biofuels, biochemicals, wastewater treatment
170 Assessing the Severity of Traffic Related Air Pollution in South-East London to School Pupils
Authors: Ho Yin Wickson Cheung, Liora Malki-Epshtein
Abstract:
Outdoor air pollution presents a significant challenge for public health globally, especially in urban areas, with road traffic acting as the primary contributor to air pollution. Several studies have documented the adverse relationship between traffic-related air pollution (TRAP) and health, especially for vulnerable groups of the population, particularly young pupils. Generally, TRAP can cause damage to the brain, restricting children's ability to learn and, more importantly, causing detrimental respiratory issues in later life. But little is known about the specific exposure of children at school during the school day and the impact this may have on their overall exposure to pollution at a crucial time in their development. This project set out to examine the air quality across primary schools in South-East London and assess the variability of the data found based on their geographic location and surroundings. The study first examines the geographical features of the schools' surroundings (e.g., coverage of urban road structure and green infrastructure), then utilizes three different methods to capture pollutant data. Nitrogen dioxide, PM contaminants, and carbon dioxide were measured with diffusion tubes and portable monitoring equipment at eight schools across three local areas, namely Greenwich, Lewisham, and Tower Hamlets. Moreover, the obtained results were compared with existing data from monitoring stations to understand the differences in air quality before and during the pandemic. Furthermore, most studies in this field have unfortunately neglected human exposure to pollutants, calculating it based on values from fixed monitoring stations. Therefore, this paper introduces an alternative approach by calculating human exposure to air pollution from real-time data obtained when commuting within the related areas (driving routes and field walking). It is found that schools located very close to motorways do not generally suffer from the most air pollution contaminants; instead, a school with the worst traffic-congested routes nearby might also have poor air quality. Monitored results also indicate that the annual air pollution values slightly decreased during the pandemic. However, the majority of the data still exceeds the WHO guidelines. Finally, the total human exposure to NO2 during commuting along the two selected routes was calculated. Results illustrated that the total exposure for route 1 was 21,730 μm/m3 and 28,378.32 μm/m3, and for route 2 was 30,672 μm/m3 and 16,473 μm/m3. The variance that occurred might be due to the difference in traffic volume, which requires further research. Exposure to NO2 during commuting was plotted with detailed timesteps, showing that the peaks usually occurred while commuting. These results consolidated the initial assumption about the extremeness of TRAP. To conclude, this paper has yielded significant benefits to understanding air quality across schools in London with the new approach of capturing human exposure (driving routes). It confirms the severity of air pollution and promotes the necessity for policymakers to consider environmental sustainability during decision-making to protect society's future pillars.
Keywords: air pollution, schools, pupils, congestion
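The commuting exposure totals reported above are time-integrated values built from real-time readings along each route. A minimal sketch of that calculation (concentration multiplied by the time spent in each logging interval, summed along the route) is given below; the readings, the one-minute interval, and the units are placeholders, not the study's data.

```python
def total_exposure(concentrations, interval_minutes):
    """Time-integrated exposure along a commuting route:
    sum of concentration x duration over each logging interval."""
    return sum(c * interval_minutes for c in concentrations)

# Placeholder one-minute NO2 readings from a portable monitor along a route
route_readings = [45, 60, 120, 310, 250, 90, 55]
print(total_exposure(route_readings, interval_minutes=1.0), "concentration-minutes")
```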
169 Climate Change Scenario Phenomenon in Malaysia: A Case Study in MADA Area
Authors: Shaidatul Azdawiyah Abdul Talib, Wan Mohd Razi Idris, Liew Ju Neng, Tukimat Lihan, Muhammad Zamir Abdul Rasid
Abstract:
Climate change has received great attention worldwide due to its impact in causing extreme weather events. Rainfall and temperature are crucial weather components associated with climate change. In Malaysia, increasing temperatures and changes in rainfall distribution patterns lead to drought and flood events involving agricultural areas, especially rice fields. The Muda Agricultural Development Authority (MADA) area is the largest rice-growing area among the 10 granary areas in Malaysia and has faced floods and droughts in the past due to the changing climate. Changes in rainfall and temperature patterns affect rice yield. Therefore, trend analysis is important to identify changes in temperature and rainfall patterns, as it gives an initial overview for further analysis. Six locations across the MADA area were selected based on the availability of meteorological station (MetMalaysia) data. Historical data (1991 to 2020) collected from MetMalaysia and future climate projections from a multi-model ensemble of CMIP5 climate models (CNRM-CM5, GFDL-CM3, MRI-CGCM3, NorESM1-M, and IPSL-CM5A-LR) have been analyzed using the Mann-Kendall test to detect the time-series trend, together with the standardized precipitation anomaly, rainfall anomaly index, precipitation concentration index, and temperature anomaly. Future projection data were analyzed for three different periods: early century (2020-2046), middle century (2047-2073), and late century (2074-2099). Results indicate that the MADA area does encounter extremely wet and dry conditions, which led to drought and flood events in the past. The Mann-Kendall (MK) trend analysis detected a significant increasing trend (p < 0.05) in annual rainfall (z = 0.40; s = 15.12) and temperature (z = 0.61; s = 0.04) during the historical period. Similarly, for both the RCP 4.5 and RCP 8.5 scenarios, a significant increasing trend (p < 0.05) was found for rainfall (RCP 4.5: z = 0.15, s = 2.55; RCP 8.5: z = 0.41, s = 8.05) and temperature (RCP 4.5: z = 0.84, s = 0.02; RCP 8.5: z = 0.94, s = 0.05). Under the RCP 4.5 scenario, the average temperature is projected to increase by up to 1.6 °C in the early century, 2.0 °C in the middle century, and 2.4 °C in the late century. In contrast, under the RCP 8.5 scenario, the average temperature is projected to increase by up to 1.8 °C in the early century, 3.1 °C in the middle century, and 4.3 °C in the late century. Drought is projected to occur in 2038 and 2043 (early century); 2052 and 2069 (middle century); and 2095 and 2097 to 2099 (late century) under the RCP 4.5 scenario. As for the RCP 8.5 scenario, drought is projected to occur in 2021, 2031, and 2034 (early century) and in 2069 (middle century). No drought is projected to occur in the late century under the RCP 8.5 scenario. This information can be used for the analysis of the impact of climate change scenarios on the growth and yield of rice and other crops found in the MADA area. Additionally, this study would be helpful for researchers and decision-makers in developing applicable adaptation and mitigation strategies to reduce the impact of climate change.
Keywords: climate projection, drought, flood, rainfall, RCP 4.5, RCP 8.5, temperature
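The trend detection above relies on the Mann-Kendall test applied to annual rainfall and temperature series. A minimal sketch of the test (the S statistic, its variance without tie correction, and the standard normal Z) is given below; the rainfall series is a synthetic placeholder, not MetMalaysia data.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(series):
    """Mann-Kendall trend test: returns the S statistic, the standard normal Z,
    and the two-sided p-value (no tie correction, for clarity)."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * norm.sf(abs(z))
    return s, z, p

# Placeholder annual rainfall series [mm]
rain = [1850, 1900, 1875, 1960, 2010, 1995, 2080, 2120, 2090, 2150]
print(mann_kendall(rain))
```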
168 Welfare and Sustainability in Beef Cattle Production on Tropical Pasture
Authors: Andre Pastori D'Aurea, Lauriston Bertelli Feranades, Luis Eduardo Ferreira, Leandro Dias Pinto, Fabiana Ayumi Shiozaki
Abstract:
The aim of this study was to improve the production of beef cattle on tropical pasture without harming the environment. On tropical pastures, cattle's live weight gain is lower than in feedlots, and forage production is seasonal, changing from season to season. Thus, concerned with sustainable livestock production, the Premix Company has developed strategies to improve the production of beef cattle on tropical pasture and ensure the sustainability of welfare and production. There are two important principles in this production system: 1) increasing individual gains with the use of better supplementation, and 2) increasing productivity per unit area with better forage quality, such as corn silage or other forms of forage conservation, currently used only in winter, and by adding natural additives to the diet. This production system was applied from June 2017 to May 2018 at the Research Center of the Premix Company, Patrocínio Paulista, São Paulo State, Brazil. The area used comprised 9 hectares of Brachiaria brizantha pasture. Thirty-six Nellore steers were evaluated for one year. The initial weight was 253 kg. The parameters used were average daily gain and gain per area. This indicated the corrections to be made and helped design future fertilization. In this case, we fertilized the pasture with 30 kg of nitrogen per animal, divided into two applications. The diet was pasture and protein-energy supplements (0.4% of live weight). The supplement used included the natural additive Fator P® (Premix Company). Fator P® is an additive composed of amino acids (lysine, methionine, and tyrosine at 16,400, 2,980, and 3,000 mg/kg, respectively), minerals, probiotics (Saccharomyces cerevisiae, 7 x 10⁸ CFU/kg), and essential fatty acids (linoleic and oleic acids, 108.9 and 99 g/kg, respectively). Due to seasonal changes, in the winter we supplemented the diet by increasing the forage on offer with maize silage: 1% of live weight was offered as corn silage and 0.4% of live weight as protein-energy supplement with the additive Fator P®. At the end of the period, productivity was calculated by summing the individual gains over the area used. The average daily gain of the animals was 693 grams per day, and 1,005 kg/hectare/year was produced. This production is about 8 times higher than the Brazilian national average for beef production. To succeed with this system, it is necessary to increase the gain per area, and therefore the stocking capacity per area. Pasture management is very important to the system's success because dietary decisions were taken based on the quantity and quality of the forage. We therefore recommend the use of animals in the growth phase, because the response to supplementation is greater in that phase and more animals can be allocated per area. This system's carbon footprint reduces emissions by 61.2 percent compared to the Brazilian average. This beef cattle production system can be efficient and environmentally friendly. Another point is that the cattle benefit from their natural environment without competing with, or having an impact on, human food production.
Keywords: cattle production, environment, pasture, sustainability
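The gain-per-area figure quoted above follows from the average daily gain, the stocking rate, and the length of the evaluation period. A back-of-envelope reconstruction is sketched below; the 365-day year and the rounding are assumptions, so the result only approximately matches the reported 1,005 kg/hectare/year.

```python
# Back-of-envelope check of the reported gain per area (assumed 365-day period)
average_daily_gain_kg = 0.693   # kg/animal/day
animals = 36
area_ha = 9.0
days = 365

gain_per_animal = average_daily_gain_kg * days       # ~253 kg/animal/year
stocking_rate = animals / area_ha                     # 4 animals/ha
gain_per_hectare = gain_per_animal * stocking_rate    # ~1012 kg/ha/year
print(round(gain_per_animal), "kg/animal/yr,", round(gain_per_hectare), "kg/ha/yr")
```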
167 Evaluation of Nanoparticle Application to Control Formation Damage in Porous Media: Laboratory and Mathematical Modelling
Authors: Gabriel Malgaresi, Sara Borazjani, Hadi Madani, Pavel Bedrikovetsky
Abstract:
Suspension-colloidal flow in porous media occurs in numerous engineering fields, such as industrial water treatment, the disposal of industrial wastes into aquifers with the propagation of contaminants, and low-salinity water injection into petroleum reservoirs. The main effects are particle mobilization and capture by the porous rock, which can cause pore plugging and permeability reduction, known as formation damage. Various factors such as fluid salinity, pH, temperature, and rock properties affect particle detachment. Formation damage is unfavorable specifically near injection and production wells. One way to control formation damage is pre-treatment of the rock with nanoparticles. Adsorption of nanoparticles on fines and rock surfaces alters the zeta potential of the surfaces and enhances the attachment force between the rock and fine particles. The main objective of this study is to develop a two-stage mathematical model for (1) flow and adsorption of nanoparticles on the rock in the pre-treatment stage and (2) fines migration and permeability reduction during water production after the pre-treatment. The model accounts for adsorption and desorption of nanoparticles, fines migration, and the kinetics of particle capture. The system of equations allows for an exact solution. The non-self-similar wave-interaction problem was solved by the Method of Characteristics. The analytical model is new in two ways: first, it accounts for the specific boundary and initial conditions describing the injection of nanoparticles and production from the pre-treated porous media; second, it contains the effect of nanoparticle sorption hysteresis. The derived analytical model contains explicit formulae for the concentration fronts along with the pressure drop. The solution is used to determine the optimal injection concentration of nanoparticles to avoid formation damage. The mathematical model was validated via an innovative laboratory program. The laboratory study includes two sets of core-flood experiments: (1) production of water without nanoparticle pre-treatment; (2) pre-treatment of a similar core with nanoparticles followed by water production. Positively charged alumina nanoparticles with an average particle size of 100 nm were used for the rock pre-treatment. The core was saturated with the nanoparticles and then flushed with low-salinity water; the pressure drop across the core and the outlet fines concentration were monitored and used for model validation. The results of the analytical modeling showed a significant reduction in the outlet fines concentration and formation damage. This observation was in close agreement with the core-flood data. The exact solution accurately describes fines particle breakthrough and evaluates the positive effect of nanoparticles on formation damage. We show that the adsorbed nanoparticle concentration strongly affects the permeability of the porous media. For the laboratory case presented, the reduction of permeability after 1 PVI of production in the pre-treated scenario is 50% lower than in the reference case. The main outcome of this study is a validated mathematical model to evaluate the effect of nanoparticles on formation damage.
Keywords: nano-particles, formation damage, permeability, fines migration
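The authors' two-stage model, with sorption hysteresis and the specific injection/production boundary conditions, is not reproduced here. As a much simpler illustration of the capture-kinetics ingredient that such characteristics-based solutions build on, the sketch below evaluates the classical deep-bed filtration profiles behind the injection front (exponential decay of the suspended concentration in space and linear-in-time growth of the retained concentration); the filtration coefficient, velocity, and inlet concentration are arbitrary placeholders.

```python
import numpy as np

# Classical deep-bed filtration along characteristics for constant-rate injection of a
# suspension with inlet concentration c0, interstitial velocity v, and filtration
# coefficient lam: behind the concentration front (x <= v * t),
#   c(x)        = c0 * exp(-lam * x)
#   sigma(x, t) = lam * v * c0 * exp(-lam * x) * (t - x / v)
def suspended(x, t, c0=1.0, lam=5.0, v=1.0):
    x = np.asarray(x, dtype=float)
    return np.where(x <= v * t, c0 * np.exp(-lam * x), 0.0)

def retained(x, t, c0=1.0, lam=5.0, v=1.0):
    x = np.asarray(x, dtype=float)
    return np.where(x <= v * t, lam * v * c0 * np.exp(-lam * x) * (t - x / v), 0.0)

x = np.linspace(0.0, 1.0, 5)
print(suspended(x, t=0.5))
print(retained(x, t=0.5))
```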
166 Stability of Porous SiC Based Materials under Relevant Conditions of Radiation and Temperature
Authors: Marta Malo, Carlota Soto, Carmen García-Rosales, Teresa Hernández
Abstract:
SiC-based composites are candidates for possible use as structural and functional materials in future fusion reactors, with the main role intended for the blanket modules. In the blanket, the neutrons produced in the fusion reaction slow down, and their energy is transformed into heat in order to finally generate electrical power. In the blanket design named Dual Coolant Lead Lithium (DCLL), a PbLi alloy for power conversion and tritium breeding circulates inside hollow channels called Flow Channel Inserts (FCIs). These FCIs must protect the steel structures against the highly corrosive PbLi liquid and the high temperatures, but also provide electrical insulation in order to minimize magnetohydrodynamic interactions of the flowing liquid metal with the high magnetic field present in a magnetically confined fusion environment. Due to its nominally high temperature and radiation stability as well as corrosion resistance, SiC is the main choice for the flow channel inserts. Its significantly lower manufacturing cost presents porous SiC (a dense coating is required in order to assure protection against corrosion and to act as a tritium barrier) as a firm alternative to SiC/SiC composites for this purpose. This application requires the materials to be exposed to high radiation levels and extreme temperatures, conditions for which previous studies have shown noticeable changes in both the microstructure and the electrical properties of different types of silicon carbide. Both the initial properties and the radiation/temperature-induced damage strongly depend on the crystal structure, polytype, and impurities/additives, which are determined by the fabrication process, so the development of a suitable material requires full control of these variables. For this work, several SiC samples with different percentages of porosity and sintering additives were manufactured by the so-called sacrificial template method at the Ceit-IK4 Technology Center (San Sebastián, Spain) and characterized at Ciemat (Madrid, Spain). Electrical conductivity was measured as a function of temperature before and after irradiation with 1.8 MeV electrons in the Ciemat HVEC Van de Graaff accelerator up to 140 MGy (~2·10⁻⁵ dpa). Radiation-induced conductivity (RIC) was also examined during irradiation at 550 ºC for different dose rates (from 0.5 to 5 kGy/s). Although no significant RIC was found in general for any of the samples, an increase in electrical conductivity with irradiation dose, with a linear tendency, was observed for some compositions. However, first results indicate enhanced radiation resistance for coated samples. Preliminary thermogravimetric tests of selected samples, together with subsequent XRD analysis, allowed the radiation-induced modification of the electrical conductivity to be interpreted in terms of changes in the SiC crystalline structure. Further analysis is needed in order to confirm this.
Keywords: DCLL blanket, electrical conductivity, flow channel insert, porous SiC, radiation damage, thermal stability
Procedia PDF Downloads 200165 Stochastic Matrices and Lp Norms for Ill-Conditioned Linear Systems
Authors: Riadh Zorgati, Thomas Triboulet
Abstract:
In quite diverse application areas such as astronomy, medical imaging, geophysics or nondestructive evaluation, many problems related to calibration, fitting or estimation of a large number of input parameters of a model from a small amount of noisy output data can be cast as inverse problems. Due to noisy data corruption, insufficient data and model errors, most inverse problems are ill-posed in the Hadamard sense, i.e. existence, uniqueness and stability of the solution are not guaranteed. A wide class of inverse problems in physics relates to the Fredholm equation of the first kind. The ill-posedness of such an inverse problem results, after discretization, in a very ill-conditioned linear system of equations; the condition number of the associated matrix can typically range from 10⁹ to 10¹⁸. This condition number acts as an amplifier of uncertainties in the data during inversion and thus renders the inverse problem difficult to handle numerically. Similar problems appear in other areas such as numerical optimization, where using interior-point algorithms for solving linear programs leads to ill-conditioned systems of linear equations. Devising efficient solution approaches for such systems of equations is therefore of great practical interest. Efficient iterative algorithms are proposed for solving a system of linear equations. The approach is based on a preconditioning of the initial matrix of the system with an approximation of a generalized inverse, leading to a stochastic preconditioned matrix. This approach, valid for non-negative matrices, is first extended to Hermitian, semi-definite positive matrices and then generalized to any complex rectangular matrices. The main results obtained are as follows: 1) We are able to build a generalized inverse of any complex rectangular matrix which satisfies the convergence condition required in iterative algorithms for solving a system of linear equations. This completes the (short) list of generalized inverses having this property, after the Kaczmarz and Cimmino matrices. Theoretical results on both the characterization of the type of generalized inverse obtained and the convergence are derived. 2) Thanks to its properties, this matrix can be efficiently used in different solving schemes such as Richardson-Tanabe or preconditioned conjugate gradients. 3) By using Lp norms, we propose generalized Kaczmarz-type matrices. We also show how Cimmino's matrix can be considered as a particular case consisting in choosing the Euclidean norm in an asymmetrical structure. 4) Regarding numerical results obtained on some pathological well-known test cases (Hilbert, Nakasaka, …), some of the proposed algorithms are empirically shown to be more efficient on ill-conditioned problems and more robust to error propagation than the known classical techniques we have tested (Gauss, Moore-Penrose inverse, minimum residue, conjugate gradients, Kaczmarz, Cimmino). We end on a very early prospective application of our approach based on stochastic matrices, aiming at computing some parameters (such as the extreme values, the mean, the variance, …) of the solution of a linear system prior to its resolution. Such an approach, if it were to be efficient, would be a source of information on the solution of a system of linear equations.Keywords: conditioning, generalized inverse, linear system, norms, stochastic matrix
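For context, the classical Kaczmarz row-projection iteration mentioned as a baseline can be sketched as below on a small Hilbert system, one of the pathological test cases cited; the stochastic preconditioning proposed in the abstract is not reproduced here, and the matrix size and sweep count are arbitrary.

```python
# Classical Kaczmarz iteration on an ill-conditioned Hilbert system (illustrative baseline only).
import numpy as np

n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert matrix
x_true = np.ones(n)
b = A @ x_true

x = np.zeros(n)
for sweep in range(2000):            # cyclic sweeps over the rows
    for i in range(n):
        a_i = A[i]
        x += (b[i] - a_i @ x) / (a_i @ a_i) * a_i   # project onto the i-th hyperplane

print("condition number:", np.linalg.cond(A))
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```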
Procedia PDF Downloads 136164 We Have Never Seen a Dermatologist. Prisons Telederma Project Reaching the Unreachable Through Teledermatology
Authors: Innocent Atuhe, Babra Nalwadda, Grace Mulyowa, Annabella Habinka Ejiri
Abstract:
Background: Atopic Dermatitis (AD) is one of the most prevalent and growing chronic inflammatory skin diseases in African prisons. AD care is limited in Africa due to a lack of information about the disease amongst primary care workers, limited access to dermatologists, lack of proper training of healthcare workers, and a shortage of appropriate treatments. We designed and implemented the Prisons Telederma project based on the recommendations of the International Society of Atopic Dermatitis. We aimed to: i) increase awareness and understanding of teledermatology among prison health workers and ii) improve treatment outcomes of prisoners with atopic dermatitis through increased access to and utilization of consultant dermatologists through teledermatology in Uganda prisons. Approach: We used store-and-forward teledermatology (SAF-TD) to increase access to dermatologist-led care for prisoners and prison staff with AD. We conducted five days of training for prison health workers using an adapted WHO training guide on recognizing neglected tropical diseases through changes on the skin, together with an adapted American Academy of Dermatology (AAD) Childhood AD Basic Dermatology Curriculum designed to help trainees develop a clinical approach to the evaluation and initial management of patients with AD. This training was followed by blended e-learning, webinars facilitated by consultant dermatologists with local knowledge of medication and local practices, apps adjusted for pigmented skin, WhatsApp group discussions, and sharing of pigmented-skin AD pictures and treatment via Zoom meetings. We hired a team of Ugandan senior consultant dermatologists to draft an iconographic atlas of the main dermatoses in pigmented African skin and shared this atlas with prison health staff for use as a job aid. We had planned to use the MySkinSelfie mobile phone application to take and share skin pictures of prisoners with AD with consultant dermatologists, who would review the pictures and prescribe appropriate treatment. Unfortunately, the National Health Service withdrew the app from the market due to technical issues. We monitored and evaluated treatment outcomes using the Patient-Oriented Eczema Measure (POEM) tool. We held four advocacy meetings to persuade relevant stakeholders to increase supplies and availability of first-line AD treatments such as emollients in prison health facilities. Results: We have the very first iconographic atlas of the main dermatoses in pigmented African skin. We increased: i) the proportion of prison health staff with adequate knowledge of AD and teledermatology from 20% to 80%; ii) the proportion of prisoners with AD reporting improvement in disease severity (POEM scores) from 25% to 35% in one year; iii) the proportion of prisoners with AD seen by a consultant dermatologist through teledermatology from 0% to 20% in one year; and iv) the availability of AD-recommended treatments in prison health facilities from 5% to 10% in one year. Our study contributes to the use, evaluation, and verification of teledermatology to increase access to specialist dermatology services in the hardest-to-reach areas and for vulnerable populations such as prisoners.Keywords: teledermatology, prisoners, reaching, un-reachable
Procedia PDF Downloads 101163 Vitamin B9 Separation by Synergic Pertraction
Authors: Blaga Alexandra Cristina, Kloetzer Lenuta, Bompa Amalia Stela, Galaction Anca Irina, Cascaval Dan
Abstract:
Vitamin B9 is an important member of the B vitamin group, being a growth factor important for making genetic material such as DNA and RNA and red blood cells, and for building muscle tissue, especially during periods of infancy, adolescence and pregnancy. Its production by biosynthesis is based on the high metabolic potential of mutant Bacillus subtilis, owing to a superior bioavailability compared to that obtained by chemical pathways. Pertraction, defined as extraction and transport through liquid membranes, consists of the transfer of a solute between two aqueous phases of different pH values, phases that are separated by a solvent layer of various sizes. The pertraction efficiency and selectivity can be significantly enhanced by adding a carrier to the liquid membrane, such as organophosphoric compounds, long-chain amines or crown ethers, the separation process then being called facilitated pertraction. The aim of this work is to determine the impact of the presence of two extractants/carriers in the bulk liquid membrane, i.e. di(2-ethylhexyl) phosphoric acid (D2EHPA) and lauryltrialkylmethylamine (Amberlite LA-2), on the transport kinetics of vitamin B9. The experiments have been carried out using two pertraction cells with a free (bulk) liquid membrane. One pertraction cell consists of a U-shaped glass pipe (used for the dichloromethane membrane) and the second is an H-shaped glass pipe (used for n-heptane), with a 45 mm inner diameter and a total volume of 450 mL, the volume of each compartment being 150 mL. The aqueous solutions are independently mixed by means of double-blade stirrers of 6 mm diameter and 3 mm height, at a rotation speed of 500 rpm. In order to reach high diffusional rates through the solvent layer, the organic phase was mixed with a similar stirrer at a similar rotation speed (500 rpm). The mass transfer area, both for extraction and for reextraction, was 1.59×10⁻³ m². The study of facilitated pertraction with the mixture of the two carriers, namely D2EHPA and Amberlite LA-2, dissolved in two solvents of different polarities, n-heptane and dichloromethane, indicated the possibility of obtaining a synergic effect. The synergism was analyzed by considering the initial and final vitamin mass flows, as well as the permeability factors through the liquid membrane. The synergic effect was observed at low D2EHPA concentrations and high Amberlite LA-2 concentrations, being more important for the low-polarity solvent (n-heptane). The results suggest that the mechanism of synergic pertraction consists of the reaction between the organophosphoric carrier and vitamin B9 at the interface between the feed and membrane phases, while the aminic carrier enhances the hydrophobicity of this compound by solvation. However, the formation of this complex reduced the reextraction rate and, consequently, affected the synergism related to the final mass flows and permeability factor. To describe the influence of the carrier concentrations on the synergistic coefficients, equations have been proposed taking into account the vitamin mass flows or permeability factors, with average deviations between 4.85% and 10.73%.Keywords: pertraction, synergism, vitamin B9, Amberlite LA-2, di(2-ethylhexyl) phosphoric acid
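Purely as an illustration of how a synergism metric can be quantified (the abstract's own equations are not given), the sketch below compares the vitamin mass flow obtained with the carrier mixture against the sum of the mass flows obtained with each carrier alone; the definition and all flow values are assumptions for demonstration, not the authors' formulation or data.

```python
# Hypothetical illustration: a synergistic coefficient for a two-carrier pertraction system.
def synergistic_coefficient(flow_mixture, flow_d2ehpa, flow_la2):
    """Ratio of the mass flow with both carriers to the sum of the single-carrier flows.
    Values > 1 would indicate synergism, values < 1 antagonism (assumed definition)."""
    return flow_mixture / (flow_d2ehpa + flow_la2)

# assumed mass flows (arbitrary consistent units); not measured data
flow_mix = 0.85
flow_d2ehpa_only = 0.30
flow_la2_only = 0.35

coeff = synergistic_coefficient(flow_mix, flow_d2ehpa_only, flow_la2_only)
print(f"synergistic coefficient: {coeff:.2f}")
```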
Procedia PDF Downloads 275162 Music Piracy Revisited: Agent-Based Modelling and Simulation of Illegal Consumption Behavior
Authors: U. S. Putro, L. Mayangsari, M. Siallagan, N. P. Tjahyani
Abstract:
The National Collective Management Institute (LKMN) in Indonesia stated that legal music products amounted to about 77.552.008 units while illegal music products amounted to about 22.0688.225 units in 1996, and this number keeps getting worse every year. Consequently, Indonesia was named as one of the countries with high piracy levels in 2005. This study models people's decisions toward unlawful behavior, music content piracy in particular, using agent-based modeling and simulation (ABMS). The actors in the model constructed in this study are classified as legal consumers, illegal consumers, and neutral consumers. The decision toward piracy among the actors is a manifestation of the social norm, whose attributes are social pressure, peer pressure, social approval, and perceived prevalence of piracy. The influencing attributes fluctuate depending on the majority of surrounding behavior, called the social network. There are two main interventions undertaken in the model, campaign and peer influence, which lead to four scenarios in the simulation: positively-framed descriptive norm message, negatively-framed descriptive norm message, positively-framed injunctive norm with benefits message, and negatively-framed injunctive norm with costs message. Using NetLogo, the model is simulated in 30 runs with 10.000 iterations for each run. The initial number of agents was set to 100 with a proportion of 95:5 for illegal consumption. The assumed proportion is based on data stating that 95% of music industry sales are pirated. The finding of this study is that the negatively-framed descriptive norm message has a counterproductive, reversed effect on music piracy. The study discovers that selecting a context-based campaign is the key to reducing the level of intention toward music piracy as unlawful behavior by increasing compliance awareness. The context of Indonesia reveals that the majority of people are actively engaged in music piracy as unlawful behavior, so that people think this illegal act is common behavior. Therefore, providing information about how widespread and big this problem is could instead encourage illegal consumption behavior. The positively-framed descriptive norm message scenario works best to reduce music piracy numbers, as it focuses on supporting positive behavior and the right perception of this phenomenon. Music piracy is not merely an economic but rather a social phenomenon, due to the underlying motivation of the actors, which has shifted toward community sharing. An indication of a misconception of value co-creation in the context of music piracy in Indonesia is also discussed. This study contributes theoretically in that understanding how social norms configure the decision-making process is essential to break down the phenomenon of unlawful behavior in the music industry. In practice, this study proposes that a reward-based and context-based strategy is the most relevant strategy for stakeholders in the music industry. Furthermore, this study provides an opportunity for the findings to generalize well beyond the music piracy context. As an emerging body of work that systematically constructs the backstage of how law and social norms affect the decision-making process, it is interesting to see how the model can be implemented in other decision-behavior related situations.Keywords: music piracy, social norm, behavioral decision-making, agent-based model, value co-creation
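A much-simplified sketch of the kind of agent-based simulation described (the original is implemented in NetLogo): 100 agents start in a 95:5 illegal-to-legal split, and at each step every agent re-evaluates its behaviour from the perceived prevalence of piracy among a few random peers minus a campaign term. The update rule, peer-sample size, and campaign strength are assumptions for illustration, not the authors' specification.

```python
# Toy agent-based sketch of peer influence and a descriptive-norm campaign on piracy behaviour.
import random

random.seed(42)
N, STEPS, PEERS = 100, 200, 5
CAMPAIGN_EFFECT = 0.05          # assumed reduction in piracy propensity per step

# True = illegal consumer, False = legal consumer; initial proportion 95:5
agents = [True] * 95 + [False] * 5

for _ in range(STEPS):
    updated = []
    for agent in agents:
        peers = random.sample(agents, PEERS)
        perceived_prevalence = sum(peers) / PEERS      # share of pirating peers
        propensity = perceived_prevalence - CAMPAIGN_EFFECT
        updated.append(random.random() < propensity)
    agents = updated

print("final share of illegal consumers:", sum(agents) / N)
```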
Procedia PDF Downloads 187161 Permeable Asphalt Pavement as a Measure of Urban Green Infrastructure in the Extreme Events Mitigation
Authors: Márcia Afonso, Cristina Fael, Marisa Dinis-Almeida
Abstract:
Population growth in cities has led to an increase in infrastructure construction, including buildings and roadways. This leads directly to soil waterproofing. In turn, changes in precipitation patterns are producing higher and more frequent rainfall intensities. These two combined aspects decrease rainwater infiltration into soils and increase the volume of surface runoff. The practice of green and sustainable urban solutions has encouraged research in these areas. Porous asphalt pavement, as a green infrastructure, is part of the set of practical solutions to address urban challenges related to land use and adaptation to climate change. In this field, permeable pavements with porous asphalt mixtures (PA) have several advantages in terms of reducing the runoff generated during flood events. The porous structure of these pavements, compared to a conventional asphalt pavement, allows rainwater infiltration into the subsoil and, consequently, an improvement in water quality. This green infrastructure solution can be applied in cities, particularly in streets or parking lots, to mitigate flood effects. Over the years, the pores of these pavements can become filled with sediment, reducing their rainwater infiltration function. Thus, double layer porous asphalt (DLPA) was developed to mitigate the clogging effect and facilitate water infiltration into the lower layers. This study intends to deepen the knowledge of the performance of DLPA when subjected to clogging. The experimental methodology consisted of four evaluation phases of the DLPA infiltration capacity, submitted to three precipitation events (100, 200 and 300 mm/h) in each phase. The first evaluation phase determined the behavior after DLPA construction. In phases two and three, two 500 g/m² clogging cycles were performed, totaling a final simulation of 1000 g/m². Sand with a gradation rich in fine particles was used as the clogging material. In the last phase, the DLPA was subjected to simple sweeping and vacuuming maintenance. A sprinkler-type precipitation simulator capable of reproducing real precipitation was developed for this purpose. The main conclusions show that the DLPA has the capacity to drain the water, even after two clogging cycles. The infiltration flow results indicate an efficient performance of the DLPA in surface runoff attenuation, since runoff was not observed in any of the evaluation phases, even at intensities of 200 and 300 mm/h, simulating intense precipitation events. The infiltration capacity under clogging conditions decreased by about 7% on average across the three intensities relative to the initial performance, that is, after construction. However, this was restored when subjected to simple maintenance, recovering the DLPA's hydraulic functionality. In summary, the study proved the efficacy of a DLPA in retaining coarser sediments at the surface and limiting the entry of fine sediments to the remaining layers. At the same time, rainwater infiltration and surface runoff reduction are guaranteed, and it is therefore a viable solution to put into practice in permeable pavements.Keywords: clogging, double layer porous asphalt, infiltration capacity, rainfall intensity
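The evaluation essentially compares infiltration rates before and after clogging; the sketch below shows that bookkeeping with hypothetical infiltrated flows at the three rainfall intensities (the reported average reduction was about 7%). The numbers are invented placeholders, not the measured data.

```python
# Hypothetical example: percentage loss of infiltration capacity after the clogging cycles.
intensities = [100, 200, 300]                 # rainfall intensities, mm/h
after_construction = [95.0, 180.0, 260.0]     # assumed infiltrated flows, mm/h
after_clogging = [88.0, 168.0, 242.0]         # assumed infiltrated flows after 1000 g/m2 of sand

reductions = [
    100 * (before - after) / before
    for before, after in zip(after_construction, after_clogging)
]
for i, r in zip(intensities, reductions):
    print(f"{i} mm/h: infiltration capacity reduced by {r:.1f}%")
print(f"average reduction: {sum(reductions) / len(reductions):.1f}%")
```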
Procedia PDF Downloads 491160 Improving a Stagnant River Reach Water Quality by Combining Jet Water Flow and Ultrasonic Irradiation
Authors: A. K. Tekile, I. L. Kim, J. Y. Lee
Abstract:
Human activities put freshwater quality at risk, mainly due to the expansion of agriculture and industry, damming, diversion and the discharge of inadequately treated wastewaters. Rapid human population growth and climate change have escalated the problem. External control actions on point and non-point pollution sources are a long-term solution to manage water quality. For a holistic approach, these mechanisms should be coupled with in-water control strategies. The available in-lake or in-river methods are either costly or have some adverse effect on the ecological system, so the search for an alternative and effective solution with a reasonable balance is still going on. This study aimed at the physical and chemical water quality improvement of a stagnant Yeo-cheon River reach (Korea), which has recently shown signs of water quality problems such as scum formation and fish death. The river water quality was monitored for a duration of three months, operating only the water flow generator in the first two weeks; the ultrasonic irradiation device was then coupled to the flow unit for the remaining duration of the experiment. In addition to assessing the water quality improvement, the correlation among the parameters was analyzed to explain the contribution of the ultrasonication. Generally, the combined strategy showed localized improvement of water quality in terms of dissolved oxygen, chlorophyll-a and dissolved reactive phosphate. At locations under limited influence of the system operation, chlorophyll-a increased strongly, but within 25 m of the operation the low initial value was maintained. The inverse correlation coefficient between dissolved oxygen and chlorophyll-a decreased from 0.51 to 0.37 when the ultrasonic irradiation unit was used with the flow, showing that ultrasonic treatment reduced the chlorophyll-a concentration and inhibited photosynthesis. The relationship between dissolved oxygen and reactive phosphate also indicated that the influence of ultrasonication on the reactive phosphate concentration was greater than that of flow. Even though flow increased turbidity by suspending sediments, ultrasonic waves canceled out this effect through the agglomeration of suspended particles and their subsequent settling. There was also variation of interaction in the water column, as the decrease of pH and dissolved oxygen from the surface to the bottom played a role in phosphorus release into the water column. The variation of nitrogen and dissolved organic carbon concentrations showed mixed trends, probably due to the complex chemical reactions following the operation. Besides, the intensive rainfall and strong wind around the end of the field trial had an apparent impact on the results. The combined effect of water flow and ultrasonic irradiation was a cumulative water quality improvement, and it maintained the dissolved oxygen and chlorophyll-a requirements of the river for healthy ecological interaction. However, the overall improvement of water quality is not guaranteed, as the effectiveness of ultrasonic technology requires long-term monitoring of water quality before, during and after treatment. Even though the short duration of the study limited the characterization of nutrient patterns, the use of ultrasound at field scale to improve water quality is promising.Keywords: stagnant, ultrasonic irradiation, water flow, water quality
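The correlation analysis mentioned (e.g., the inverse DO-chlorophyll-a coefficient moving from 0.51 to 0.37) can in principle be reproduced with a simple Pearson computation over paired measurements from each monitoring phase; the series below are invented placeholders, not the monitoring data.

```python
# Sketch of the correlation bookkeeping used to compare monitoring phases (placeholder data).
import numpy as np

# paired measurements of dissolved oxygen (mg/L) and chlorophyll-a (ug/L), hypothetical
do_flow_only = np.array([6.1, 5.8, 6.5, 7.0, 5.5, 6.8])
chl_flow_only = np.array([22.0, 25.0, 18.0, 15.0, 27.0, 17.0])

do_with_us = np.array([6.4, 6.0, 6.9, 7.2, 5.9, 7.0])
chl_with_us = np.array([18.0, 20.0, 16.0, 15.0, 21.0, 16.0])

r_flow = np.corrcoef(do_flow_only, chl_flow_only)[0, 1]
r_combined = np.corrcoef(do_with_us, chl_with_us)[0, 1]
print(f"DO vs chlorophyll-a, flow only: r = {r_flow:.2f}")
print(f"DO vs chlorophyll-a, flow + ultrasound: r = {r_combined:.2f}")
```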
Procedia PDF Downloads 193159 The Analgesic Effect of Electroacupuncture in a Murine Fibromyalgia Model
Authors: Bernice Jeanne Lottering, Yi-Wen Lin
Abstract:
Introduction: Chronic pain suffers from a definite lack of objective parameters for measuring disease severity and treatment efficacy in conditions such as fibromyalgia (FM). Persistent widespread pain and generalized tenderness are the characteristic symptoms, affecting a large number of people globally, particularly females. This disease has shown a refractory tendency to conventional treatment, largely resulting from a lack of etiological and pathogenic understanding of the disease development. Emerging evidence indicates that the central nervous system (CNS) plays a critical role in the amplification of pain signals and the neurotransmitters associated therewith. Various stimuli have been found to activate the channels present on nociceptor terminals, thereby triggering nociceptive impulses along the pain pathways. The transient receptor potential vanilloid 1 (TRPV1) channel functions as a molecular integrator for numerous sensory inputs, such as nociception, and was explored in the current study. Current intervention approaches face a multitude of challenges, ranging from the lack of effective therapeutic interventions to the limitation of pathognomonic criteria resulting from incomplete understanding and partial evidence on the mechanisms of action of FM. It remains unclear whether electroacupuncture (EA) plays an integral role in the functioning of the TRPV1 pathway, and whether or not it can reduce the chronic pain induced by FM. Aims: The aim of this study was to explore the mechanisms underlying the activation and modulation of the TRPV1 channel pathway in a cold stress model of FM applied to a murine model. Furthermore, the effect of EA in the treatment of the mechanical and thermal pain expressed in FM was also investigated. Methods: 18 C57BL/6 wild-type and 6 TRPV1 knockout (KO) mice, aged 8-12 weeks, were exposed to an intermittent cold stress-induced fibromyalgia-like pain model, with or without EA treatment at Zusanli ST36 (2 Hz/20 min) on days 3 to 5. Von Frey and Hargreaves behaviour tests were implemented in order to analyze the mechanical and thermal pain thresholds on days 0, 3 and 5 in the control group (C), FM group (FM), FM with EA treatment group (FM + EA) and FM in the KO group. Results: An increase in mechanical and thermal hyperalgesia was observed in the FM, EA and KO groups when compared to the control group. This initial increase was reduced in the EA group, which points to the treatment efficacy of EA in nociceptive sensitization and the analgesic effect of EA in attenuating FM-associated pain. Discussion: An increase in nociceptive sensitization was observed through higher withdrawal thresholds in the von Frey mechanical test and the Hargreaves thermal test. TRPV1 function in mice has been scientifically associated with these nociceptive conduits, and the increased behaviour test results suggest that TRPV1 upregulation is central to the FM-induced hyperalgesia. These data were supported by the decrease in sensitivity observed in the TRPV1 KO group. Moreover, EA treatment decreased this FM-induced nociceptive sensitization, suggesting that TRPV1 upregulation and overexpression can be attenuated by EA at bilateral ST36. This evidence compellingly implies that the analgesic effect of EA is associated with TRPV1 downregulation.Keywords: fibromyalgia, electroacupuncture, TRPV1, nociception
Procedia PDF Downloads 139158 Analysis of Fish Preservation Methods for Traditional Fishermen Boat
Authors: Kusno Kamil, Andi Asni, Sungkono
Abstract:
According to a report of the Food and Agriculture Organization (FAO), post-harvest fish losses in Indonesia reach 30 percent; with marine fisheries reserves of 170 trillion rupiahs, the potential loss thus reaches 51 trillion rupiahs (end of 2016 data). This condition is caused by traditional fish catches being vulnerable to damage due to disruption of the preservation cold chain. The physical and chemical changes in fish flesh progress rapidly, especially if the fish is exposed to scorching heat in the middle of the sea, and are exacerbated by low awareness of catch hygiene; many unclean catches containing blood are often handled without special attention and mixed with freshly caught fish, thereby increasing the potential for faster spoilage. This background encourages research on preservation methods for the catches of traditional fishermen, aiming to find the best and most affordable method and/or combination of fish preservation methods, so that fishermen can increase their fishing duration without worrying that their catch will be damaged and lose economic value by the time they return to the beach to sell it. This goal is pursued through experimental treatments of fresh fish catches in containers with the addition of anti-bacterial copper, a liquid smoke solution, and the use of vacuum containers. Three further treatments combined these three treatment variables with an electrically powered cooler (temperature 0–4 °C). As control specimens, untreated fresh fish (placed in the open air and in the refrigerator) were also prepared for comparison over 1, 3, and 6 days. To test the level of freshness of the fish for each treatment, physical observations were used, complemented by tests of bacterial content in a trusted laboratory. The copper (Cu) content of the fish meat (suspected of having a negative impact on consumers) was also examined on the 6th day of experimentation. The results of the physical observations of the test specimens (organoleptic method) showed that preservation assisted by coolers was still better for all treatment variables. For the specimens without cooling, the order of preservation effectiveness was: addition of copper plates, use of vacuum containers, and then liquid smoke immersion. In the case of liquid smoke in particular, soaking over 6 days of preservation makes the fish meat soft and easy to crumble, even though it does not have a bad odor. The visual observations were then complemented by the results of testing the growth (or retardation) of putrefactive bacteria in each treatment of the test specimens over the same observation periods. Laboratory measurements showed that the lowest amounts of putrefactive bacteria were achieved, in order, by the preservation treatment combining the cooler with liquid smoke (sample A+), the cooler only (D+), the copper layer inside the cooler (B+), and the vacuum container inside the cooler (C+). The other treatments in the open air produced a hundred times more putrefactive bacteria. In addition, the copper-layer treatment contaminated the preserved fresh fish with more than a thousand times the initial copper amount, from 0.69 to 1241.68 µg/g.Keywords: fish, preservation, traditional, fishermen, boat
Procedia PDF Downloads 70157 The Pigeon Circovirus Evolution and Epidemiology under Conditions of One Loft Race Rearing System: The Preliminary Results
Authors: Tomasz Stenzel, Daria Dziewulska, Ewa Łukaszuk, Joy Custer, Simona Kraberger, Arvind Varsani
Abstract:
Viral diseases, especially those leading to impairment of the immune system, are among the most important problems in avian pathology. However, not much data is available on this subject for species other than commercial poultry. Recently, increasing attention has been paid to racing pigeons, which have been selected for many years for their ability to return to their place of origin. Currently, these birds are used for races at distances from 100 to 1000 km, and winning pigeons are highly valuable. The rearing system of racing pigeons contradicts the principles of biosecurity, as birds originating from various breeding facilities are commonly transported and reared in “One Loft Race” (OLR) facilities. This favors the spread of multiple infections and provides conditions for the development of novel variants of various pathogens through recombination. One of the most significant viruses occurring in this avian species is the pigeon circovirus (PiCV), which is detected in ca. 70% of pigeons. Circoviruses are characterized by vast genetic diversity, which is due to, among other things, the phenomenon of recombination. It consists of an exchange of fragments of genetic material among various strains of the virus during the infection of one organism. The rate and intensity of the development of PiCV recombinants have not been determined so far. For this reason, an experiment was performed to investigate the frequency of development of novel PiCV recombinants in racing pigeons kept in OLR-type conditions. 15 racing pigeons originating from 5 different breeding facilities, subclinically infected with various PiCV strains, were housed in one room for eight weeks, which was intended to mimic the conditions of OLR rearing. Blood and swab samples were collected from the birds every seven days to recover complete PiCV genomes, which were amplified through Rolling Circle Amplification (RCA), cloned, sequenced, and subjected to bioinformatic analyses aimed at determining the genetic diversity and the dynamics of the recombination phenomenon among the viruses. In addition, the virus shedding rate/level of viremia, the expression of IFN-γ and interferon-related genes, and anti-PiCV antibodies were determined to enable a complete analysis of the course of infection in the flock. Initial results have shown that 336 full PiCV genomes were obtained, exhibiting nucleotide similarity ranging from 86.6 to 100%, and 8 of those were recombinants originating from viruses of different lofts of origin. The first recombinant appeared after seven days of the experiment, but most of the recombinants appeared after 14 and 21 days of joint housing. The level of viremia and virus shedding was highest in the 2nd week of the experiment and gradually decreased towards the end of the experiment, which partially corresponded with Mx1 gene expression and antibody dynamics. The results show that the OLR pigeon-rearing system could play a significant role in spreading infectious agents such as circoviruses and in contributing to PiCV evolution through recombination. Therefore, it is worth considering whether a popular gambling game such as pigeon racing is sensible from both animal welfare and epidemiological points of view.Keywords: pigeon circovirus, recombination, evolution, one loft race
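One of the basic bioinformatic quantities reported, pairwise nucleotide similarity between recovered genomes, can be sketched as below for already-aligned sequences of equal length; real analyses would use dedicated alignment and recombination-detection pipelines, and the toy sequences here are invented.

```python
# Toy sketch: percent nucleotide identity between two aligned genome fragments.
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Share of matching positions between two aligned, equal-length sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# invented fragments for illustration only
genome_1 = "ATGCCGTTAGGCATCGATCC"
genome_2 = "ATGCCGTTGGGCATCGTTCC"
print(f"nucleotide identity: {percent_identity(genome_1, genome_2):.1f}%")
```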
Procedia PDF Downloads 72156 Treatment Process of Sludge from Leachate with an Activated Sludge System and Extended Aeration System
Authors: A. Chávez, A. Rodríguez, F. Pinzón
Abstract:
Society is concerned about the environmental, economic and social impacts generated by solid waste disposal. These places of confinement, also known as landfills, are locations where problems of pollution and damage to human health are reduced. They are technically designed and operated using engineering principles, storing the residue in a small area, compacting it to reduce volume and covering it with soil layers. A key problem is preventing the release of the liquid (leachate) and gases produced by the decomposition of organic matter. Despite planning and site selection for disposal, and monitoring and control of the selected processes, the dilemma of the leachate remains: its extreme concentration of pollutants devastates soil, flora and fauna, an aggressive process requiring priority attention. One biological technology is the activated sludge system, used for influent streams with high pollutant loads, since it transforms biodegradable dissolved and particulate matter into CO2, H2O and sludge; transforms suspended and non-settleable solids; removes nutrients such as nitrogen and phosphorus; and degrades heavy metals. The microorganisms that remove organic matter in these processes are in general facultative heterotrophic bacteria, forming heterogeneous populations. It is also possible to find unicellular fungi, algae, protozoa and rotifers that process the organic carbon source and oxygen, as well as nitrogen and phosphorus, which are vital for cell synthesis. The mixture of the substrate, in this case sludge leachate, molasses and wastewater, is kept aerated by mechanical aeration diffusers. The biological processes remove dissolved material (< 45 microns), generating biomass that is easily recovered by decantation. The design consists of an artificial support and aeration pumps, favoring the development of denitrifying microorganisms that use the oxygen (O) in nitrate, releasing nitrogen (N) in the gas phase and thus avoiding the negative effects of the presence of ammonia or phosphorus. Overall, the activated sludge system includes about 8 hours of hydraulic retention time, which does not prevent the demand for nitrification, occurring on average at an MLSS value of 3,000 mg/L. Extended aeration works with detention times greater than 24 hours, a ratio of organic load to biomass inventory under 0.1, and an average retention time (sludge age) of more than 8 days. This project developed a pilot system with sludge leachate from the Doña Juana landfill (RSDJ), located in Bogota, Colombia, in which the leachate was subjected to an activated sludge and extended aeration process in a sequencing batch reactor (SBR), so that it can be discharged into water bodies without causing ecological collapse. The system worked with a dwell time of 8 days and a 30 L capacity, removing BOD and COD at levels above 90%, from initial values of 1720 mg/L and 6500 mg/L respectively. With deliberate nitrification, commercial use of diffused aeration systems for sludge leachate from landfills is expected to be possible.Keywords: sludge, landfill, leachate, SBR
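The reported performance can be expressed with a simple removal-efficiency calculation; the sketch below uses the initial BOD and COD values quoted above (1720 and 6500 mg/L) with assumed effluent values chosen to be consistent with removal above 90%.

```python
# Removal efficiency for the SBR pilot; effluent values are assumed for illustration.
def removal_efficiency(c_in: float, c_out: float) -> float:
    """Percentage of a pollutant removed between influent and effluent."""
    return 100.0 * (c_in - c_out) / c_in

influent = {"BOD": 1720.0, "COD": 6500.0}   # mg/L, initial values from the study
effluent = {"BOD": 150.0, "COD": 600.0}     # mg/L, assumed effluent concentrations

for parameter in influent:
    eff = removal_efficiency(influent[parameter], effluent[parameter])
    print(f"{parameter}: {eff:.1f}% removal")
```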
Procedia PDF Downloads 272155 The Impact of Trade on Stock Market Integration of Emerging Markets
Authors: Anna M. Pretorius
Abstract:
The emerging markets category for portfolio investment was introduced in 1986 in an attempt to promote capital market development in less developed countries. Investors traditionally diversified their portfolios by investing in different developed markets. However, high growth opportunities forced investors to consider emerging markets as well. Examples include the rapid growth of the “Asian Tigers” during the 1980s, growth in Latin America during the 1990s and the increased interest in emerging markets during the global financial crisis. As such, portfolio flows to emerging markets have increased substantially. In 2002 7% of all equity allocations from advanced economies went to emerging markets; this increased to 20% in 2012. The stronger links between advanced and emerging markets led to increased synchronization of asset price movements. This increased level of stock market integration for emerging markets is confirmed by various empirical studies. Against the background of increased interest in emerging market assets and the increasing level of integration of emerging markets, this paper focuses on the determinants of stock market integration of emerging market countries. Various studies have linked the level of financial market integration with specific economic variables. These variables include: economic growth, local inflation, trade openness, local investment, budget surplus/ deficit, market capitalization, domestic bank credit, domestic institutional and legal environment and world interest rates. The aim of this study is to empirically investigate to what extent trade-related determinants have an impact on stock market integration. The panel data sample include data of 16 emerging market countries: Brazil, Chile, China, Colombia, Czech Republic, Hungary, India, Malaysia, Pakistan, Peru, Philippines, Poland, Russian Federation, South Africa, Thailand and Turkey for the period 1998-2011. The integration variable for each emerging stock market is calculated as the explanatory power of a multi-factor model. These factors are extracted from a large panel of global stock market returns. Trade related explanatory variables include: exports as percentage of GDP, imports as percentage of GDP and total trade as percentage of GDP. Other macroeconomic indicators – such as market capitalisation, the size of the budget deficit and the effectiveness of the regulation of the securities exchange – are included in the regressions as control variables. An initial analysis on a sample of developed stock markets could not identify any significant determinants of stock market integration. Thus the macroeconomic variables identified in the literature are much more significant in explaining stock market integration of emerging markets than stock market integration of developed markets. The three trade variables are all statistically significant at a 5% level. The market capitalisation variable is also significant while the regulation variable is only marginally significant. The global financial crisis has highlighted the urgency to better understand the link between the financial and real sectors of the economy. This paper comes to the important finding that, apart from the level of market capitalisation (as financial indicator), trade (representative of the real economy) is a significant determinant of stock market integration of countries not yet classified as developed economies.Keywords: emerging markets, financial market integration, panel data, trade
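A simplified cross-sectional sketch of the two-step procedure described: first, each market's integration is proxied by the explanatory power (R²) of a regression of its returns on common factors extracted from a panel of global returns; second, those integration scores are regressed on trade openness and a control variable. Everything below uses randomly generated placeholder data, not the 1998-2011 sample.

```python
# Two-step sketch: factor-model R2 as an integration measure, then a regression on trade variables.
import numpy as np

rng = np.random.default_rng(0)
T, n_world, n_emerging, k = 120, 30, 16, 3   # months, world markets, emerging markets, factors

world_returns = rng.normal(size=(T, n_world))             # placeholder return panel
# extract common factors as leading principal components
u, s, vt = np.linalg.svd(world_returns - world_returns.mean(0), full_matrices=False)
factors = u[:, :k] * s[:k]

integration = np.empty(n_emerging)
for i in range(n_emerging):
    r = rng.normal(size=T) + factors @ rng.normal(size=k) * 0.5   # placeholder emerging-market returns
    X = np.column_stack([np.ones(T), factors])
    beta, *_ = np.linalg.lstsq(X, r, rcond=None)
    resid = r - X @ beta
    integration[i] = 1 - resid.var() / r.var()                    # R2 = explanatory power

# step two: cross-sectional regression of integration on trade openness and a control (placeholders)
trade_openness = rng.uniform(0.2, 1.5, n_emerging)
market_cap = rng.uniform(0.1, 2.0, n_emerging)
Z = np.column_stack([np.ones(n_emerging), trade_openness, market_cap])
coef, *_ = np.linalg.lstsq(Z, integration, rcond=None)
print("coefficients [const, trade, market cap]:", np.round(coef, 3))
```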
Procedia PDF Downloads 306154 Determinants of Child Nutritional Inequalities in Pakistan: Regression-Based Decomposition Analysis
Authors: Nilam Bano, Uzma Iram
Abstract:
Globally, the dilemma of undernutrition has long been a notable concern for researchers, academics, and policymakers because of its severe consequences. Nutritional deficiencies create hurdles for people in achieving a better lifestyle. The consequences of undernutrition affect economic progress not only at the micro level but also at the macro level. The initial five years of a child’s life are considered critical for physical growth and brain development. In this regard, children require special care and good quality food (nutrient intake) to fulfill the nutritional demands of the growing body. Owing to their sensitive stature and health, children, especially those under the age of 5 years, are more vulnerable to poor economic, housing, environmental and other social conditions. Besides confronting economic challenges and political upheavals, Pakistan is also going through a rough patch in the context of social development. A majority of children face serious health problems in the absence of the required nutrition. The issue is becoming more severe day by day, and children in particular are left with various immune problems and vitamin and mineral deficiencies. It is noted that children from well-off backgrounds are less likely to be affected by undernutrition. To underline this issue, the present study aims to highlight the existing nutritional inequalities among children under five years of age in Pakistan. Moreover, this study strives to decompose the factors that severely affect the existing nutritional inequality and that demand the attention of the concerned authorities. The Pakistan Demographic and Health Survey 2012-13 was employed to assess the relevant indicators of undernutrition, namely stunting, wasting and underweight, and the associated socioeconomic factors. The objectives were pursued using the relevant empirical techniques. Concentration indices were constructed to measure the nutritional inequalities using three measures of undernutrition: stunting, wasting and underweight. In addition, a decomposition analysis based on logistic regression was performed to unfold the determinants that most severely affect the nutritional inequalities. The negative values of the concentration indices indicate that children from marginalized backgrounds are affected by undernutrition more than their counterparts from rich households. Furthermore, the results of the decomposition analysis indicate that child age, size of the child at birth, wealth index, household size, parents’ education, mother’s health and place of residence are the factors contributing most to the prevalence of the existing nutritional inequalities. Considering the results of the study, it is suggested that policymakers design policies in a way that stimulates the health sector of Pakistan in a productive manner. Increasing the number of effective health awareness programs for mothers would make a notable difference. Moreover, parental education must be a concern for policymakers, as it shows a significant association with the eradication of nutritional inequalities among children.Keywords: concentration index, decomposition analysis, inequalities, undernutrition, Pakistan
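For readers unfamiliar with the concentration index, a common computational shortcut is the covariance formula CI = 2·cov(h, r)/μ, where h is the health outcome (e.g., a stunting indicator), r is the fractional rank in the wealth distribution, and μ is the mean of h; the sketch below applies it to invented data, not the PDHS 2012-13 sample.

```python
# Concentration index via the covariance formula, on invented data.
import numpy as np

rng = np.random.default_rng(1)
n = 500
wealth_score = rng.normal(size=n)                          # placeholder wealth index
# invented stunting indicator, more likely for poorer children
p_stunted = 1 / (1 + np.exp(1.0 + 0.8 * wealth_score))
stunted = (rng.random(n) < p_stunted).astype(float)

order = np.argsort(wealth_score)
frac_rank = np.empty(n)
frac_rank[order] = (np.arange(1, n + 1) - 0.5) / n         # fractional rank, poorest first

ci = 2 * np.cov(stunted, frac_rank, bias=True)[0, 1] / stunted.mean()
print(f"concentration index for stunting: {ci:.3f}")       # negative => concentrated among the poor
```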
Procedia PDF Downloads 132153 Executive Function and Attention Control in Bilingual and Monolingual Children: A Systematic Review
Authors: Zihan Geng, L. Quentin Dixon
Abstract:
It has been proposed that early bilingual experience confers a number of advantages in the development of executive control mechanisms. Although the literature provides empirical evidence for bilingual benefits, some studies also report null or mixed results. To make sense of these contradictory findings, the current review synthesizes recent empirical studies investigating bilingual effects on children’s executive function and attention control. The publication dates of the studies included in the review range from 2010 to 2017. The key search terms are bilingual, bilingualism, children, executive control, executive function, and attention. The key terms were combined within each of the following databases: ERIC (EBSCO), Education Source, PsycINFO, and Social Science Citation Index. Studies involving both children and adults were also included, but the analysis was based only on the data generated by the children’s group. The initial search yielded 137 distinct articles. Twenty-eight studies from 27 articles with a total of 3367 participants were finally included based on the selection criteria. The selected studies were then coded in terms of (a) the setting (i.e., the country where the data was collected), (b) the participants (i.e., age and languages), (c) sample size (i.e., the number of children in each group), (d) cognitive outcomes measured, (e) data collection instruments (i.e., cognitive tasks and tests), and (f) statistical analysis models (e.g., t-test, ANOVA). The results show that the majority of the studies were undertaken in Western countries, mainly the U.S., Canada, and the UK. A variety of languages such as Arabic, French, Dutch, Welsh, German, Spanish, Korean, and Cantonese were involved. In relation to cognitive outcomes, the studies examined children’s overall planning and problem-solving abilities, inhibition, cognitive complexity, working memory (WM), and sustained and selective attention. The results indicate that though bilingualism is associated with several cognitive benefits, the advantages seem to be weak, at least for children. Additionally, the nature of the cognitive measures was found to greatly moderate the results. No significant differences were observed between bilinguals and monolinguals in overall planning and problem-solving ability, indicating that there is no bilingual benefit in the cooperation of executive function components at an early age. In terms of inhibition, the mixed results suggest that bilingual children, especially young children, may have better conceptual inhibition measured in conflict tasks, but not better response inhibition measured by delay tasks. Further, bilingual children showed better inhibitory control for bivalent displays, which resembles the process of maintaining two language systems. Null results were obtained for both cognitive complexity and WM, suggesting no bilingual advantage in these two cognitive components. Finally, findings on children’s attention system associate bilingualism with heightened attention control. Together, these findings support the hypothesis of cognitive benefits for bilingual children. Nevertheless, whether these advantages are observable appears to depend highly on the cognitive assessments. Therefore, future research should be more specific about the cognitive outcomes (e.g., the type of inhibition) and should report the validity of the cognitive measures consistently.Keywords: attention, bilingual advantage, children, executive function
Procedia PDF Downloads 185