287 Numerical and Experimental Comparison of Surface Pressures around a Scaled Ship Wind-Assisted Propulsion System
Authors: James Cairns, Marco Vezza, Richard Green, Donald MacVicar
Abstract:
Significant legislative changes are set to revolutionise the commercial shipping industry. Upcoming emissions restrictions will force operators to look at technologies that can improve the efficiency of their vessels, reducing fuel consumption and emissions. A device which may help in this challenge is the Ship Wind-Assisted Propulsion system (SWAP), an actively controlled aerofoil mounted vertically on the deck of a ship. The device functions in a similar manner to a sail on a yacht, whereby the aerodynamic forces generated by the sail reach an equilibrium with the hydrodynamic forces on the hull and a forward velocity results. Numerical and experimental testing of the SWAP device is presented in this study. Circulation control takes the form of a co-flow jet aerofoil, utilising both blowing from the leading edge and suction from the trailing edge. A jet at the leading edge uses the Coanda effect to energise the boundary layer in order to delay flow separation and create high lift with low drag. The SWAP concept originated with the research and development team at SMAR Azure Ltd. The device will be retrofitted to existing ships so that a component of the aerodynamic forces acts forward and partially reduces the reliance on existing propulsion systems. Wind tunnel tests have been carried out in the de Havilland wind tunnel at the University of Glasgow on a 1:20 scale model of this system. The tests aim to understand the airflow characteristics around the aerofoil and to estimate the approximate lift and drag coefficients that an early iteration of the SWAP device may produce. The data exhibits clear trends of increasing lift as injection momentum increases, with critical flow attachment points identified at specific combinations of jet momentum coefficient, Cµ, and angle of attack, AOA. Various combinations of flow conditions were tested, with the jet momentum coefficient ranging from 0 to 0.7 and the AOA ranging from 0° to 35°.
The Reynolds number across the tested conditions ranged from 80,000 to 240,000. Comparisons between 2D computational fluid dynamics (CFD) simulations and the experimental data are presented for multiple Reynolds-Averaged Navier-Stokes (RANS) turbulence models in the form of normalised surface pressure comparisons. These show good agreement for most of the tested cases. However, certain simulation conditions exhibited a well-documented shortcoming of RANS-based turbulence models for circulation control flows, over-predicting surface pressures and lift coefficients for fully attached flow cases. Work must continue on finding an all-encompassing modelling approach which predicts surface pressures well for all combinations of jet injection momentum and AOA.
Keywords: CFD, circulation control, Coanda, turbo wing sail, wind tunnel
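The jet momentum coefficient Cµ swept in these tests is conventionally defined as the jet momentum flux normalised by the freestream dynamic pressure and a reference area. A minimal sketch of that standard definition follows; all numerical values are illustrative assumptions, not figures from the test campaign.

```python
# Sketch: jet momentum coefficient C_mu as commonly defined in
# circulation-control studies. Values below are illustrative only.
def jet_momentum_coefficient(m_dot, v_jet, rho, u_inf, ref_area):
    """C_mu = (m_dot * v_jet) / (0.5 * rho * u_inf**2 * ref_area)."""
    q_inf = 0.5 * rho * u_inf ** 2          # freestream dynamic pressure [Pa]
    return (m_dot * v_jet) / (q_inf * ref_area)

# Hypothetical numbers for a small-scale wind tunnel model
c_mu = jet_momentum_coefficient(m_dot=0.05, v_jet=40.0, rho=1.225,
                                u_inf=15.0, ref_area=0.2)
print(round(c_mu, 3))
```

Sweeping `m_dot` (or the plenum pressure driving `v_jet`) is how a test campaign covers a Cµ range such as the 0 to 0.7 reported here.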
286 A Comparative Study on South-East Asian Leading Container Ports: Jawaharlal Nehru Port Trust, Chennai, Singapore, Dubai, and Colombo Ports
Authors: Jonardan Koner, Avinash Purandare
Abstract:
In today's globalized world, international business is a key driver of a country's growth. Connecting ports, the road network, and the rail network are strategic areas for supporting a country's international business. India's international business is booming in both exports and imports. Ports play a central part in the growth of international trade, and ensuring competitive ports is of critical importance. India has a long coastline, a major asset that has enabled the development of a large number of major and minor ports contributing to the development of maritime trade. The national economic development of India requires a well-functioning seaport system. To assess the comparative strength of Indian ports against similar South-East Asian ports, the study considers the objectives of (i) identifying the key parameters of an international mega container port, (ii) comparing the five selected container ports (JNPT, Chennai, Singapore, Dubai, and Colombo) according to users of the ports, and (iii) measuring and comparing the growth of the five selected container ports' throughput over time. The study is based on both primary and secondary databases. Linear time trend analysis is done to show the trend in the quantum of exports, imports, and total goods/services handled by individual ports over the years. Comparative trend analysis is done for the five selected ports on cargo traffic handled in terms of tonnage (weight) and number of containers (TEUs). Comparative trend analysis is also done between containerized and non-containerized cargo traffic in the five selected ports.
The primary data analysis comprises a comparative analysis of factor ratings through bar diagrams, statistical inference on factor ratings for the five selected ports, consolidated comparative line and bar charts of factor ratings, and the distribution of ratings (in frequency terms). A linear regression model is used to forecast the container capacities required for JNPT Port and Chennai Port by the year 2030. Multiple regression analysis is carried out to measure the impact of 34 selected explanatory variables on the 'Overall Performance of the Port' for each of the five selected ports. The research outcome is of high significance to the stakeholders of Indian container handling ports. The Indian container ports of JNPT and Chennai are benchmarked against international ports such as Singapore, Dubai, and Colombo, the competing ports in the neighbouring region. The study has analysed the feedback ratings for 35 selected factors regarding physical infrastructure and services rendered to port users. This feedback would provide valuable data for carrying out improvements in the facilities provided to port users and would help port users carry out their work in a more efficient manner.
Keywords: throughput, twenty-foot equivalent units (TEUs), cargo traffic, shipping lines, freight forwarders
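The capacity forecast described above rests on a linear time trend. A minimal sketch of such a forecast follows, using made-up throughput figures rather than actual JNPT or Chennai data.

```python
import numpy as np

# Sketch of a linear time-trend forecast for port capacity planning.
# Throughput figures below are hypothetical placeholders, not port data.
years = np.array([2010, 2012, 2014, 2016, 2018])
teu_millions = np.array([4.3, 4.5, 4.7, 5.0, 5.2])   # assumed TEU throughput

slope, intercept = np.polyfit(years, teu_millions, 1)  # least-squares line
forecast_2030 = slope * 2030 + intercept
print(f"Projected 2030 throughput: {forecast_2030:.2f} million TEU")
```

The same fit, applied per port, gives the comparable 2030 capacity projections that the study derives for JNPT and Chennai.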
285 Characteristics-Based LQ-Control of Cracking Reactor by Integral Reinforcement
Authors: Jana Abu Ahmada, Zaineb Mohamed, Ilyasse Aksikas
Abstract:
A linear quadratic control scheme for systems of hyperbolic first-order partial differential equations (PDEs) is presented. The aim of this research is to control chemical reactions. This is achieved by converting the PDE system to ordinary differential equations (ODEs) using the method of characteristics, reducing the system so that it can be controlled using integral reinforcement learning. The designed controller is applied to a catalytic cracking reactor. Background: Transport-reaction systems cover a large class of chemical and biochemical processes. They are best described by nonlinear PDEs derived from mass and energy balances. The main application considered in this work is the catalytic cracking reactor. Indeed, the cracking reactor is widely used to convert high-boiling, high-molecular-weight hydrocarbon fractions of petroleum crude oils into more valuable gasoline, olefinic gases, and other products. On the other hand, control of PDE systems is an important and rich area of research. One of the main control techniques is feedback control. This type of control utilizes information coming from the system to correct its trajectories and drive it to a desired state. Moreover, feedback control rejects disturbances and reduces the effects of variation in the plant parameters. Linear-quadratic control is a feedback control, since the developed optimal input is expressed as feedback on the system state to exponentially stabilize and drive a linear plant to the steady state while minimizing a cost criterion. The integral reinforcement learning policy iteration technique is a strong method that solves the linear quadratic regulator problem for continuous-time systems online in real time, using only partial information about the system dynamics (i.e., the drift dynamics A of the system need not be known) and without requiring measurements of the state derivative. This is, in effect, a direct (i.e.
no system identification procedure is employed) adaptive control scheme for partially unknown linear systems that converges to the optimal control solution. Contribution: The goal of this research is to develop a characteristics-based optimal controller for a class of hyperbolic PDEs and apply it to a catalytic cracking reactor model. In the first part, an algorithm to control a class of hyperbolic PDE systems is developed. The method of characteristics is employed to convert the PDE system into a system of ODEs, and the control problem is then solved along the characteristic curves. The reinforcement technique is implemented to find the state-feedback matrix. In the second part, the developed algorithm is applied to the important application of a catalytic cracking reactor. The main objective is to use the inlet fraction of gas oil as a manipulated variable to drive the process state towards desired trajectories. The outcome of this challenging research could provide a significant technological innovation for the gas industries, since the catalytic cracking reactor is one of the most important conversion processes in petroleum refineries.
Keywords: PDEs, reinforcement iteration, method of characteristics, Riccati equation, cracking reactor
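The policy-iteration idea behind integral reinforcement learning can be sketched in its model-based form: each iteration evaluates the current feedback gain by solving a Lyapunov equation, then improves the gain, converging to the Riccati solution. The 2-state system below is purely illustrative, not the cracking-reactor model (and unlike integral RL, this sketch does use the drift matrix A explicitly).

```python
import numpy as np

# Sketch: model-based LQR policy iteration (Newton iteration on the Riccati
# equation). The true integral RL scheme replaces the Lyapunov solve with
# data collected along trajectories, avoiding knowledge of A.
A = np.array([[0.0, 1.0], [-2.0, -1.0]])   # illustrative stable drift dynamics
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
n = A.shape[0]

K = np.zeros((1, n))                       # initial admissible policy (A stable)
for _ in range(20):
    Ac = A - B @ K
    Qk = Q + K.T @ R @ K
    # Policy evaluation: solve Ac.T P + P Ac = -Qk via Kronecker products
    lhs = np.kron(np.eye(n), Ac.T) + np.kron(Ac.T, np.eye(n))
    P = np.linalg.solve(lhs, -Qk.flatten(order="F")).reshape((n, n), order="F")
    K = np.linalg.inv(R) @ B.T @ P         # policy improvement

# P now satisfies the algebraic Riccati equation; the closed loop is stable
residual = A.T @ P + P @ A - P @ B @ np.linalg.inv(R) @ B.T @ P + Q
print(np.allclose(residual, 0, atol=1e-8))
```

In the paper's setting, one such regulator problem is solved along each characteristic curve of the reduced ODE system.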
284 Health and Greenhouse Gas Emission Implications of Reducing Meat Intakes in Hong Kong
Authors: Cynthia Sau Chun Yip, Richard Fielding
Abstract:
High meat and especially red meat intakes are significantly and positively associated with a multiple burden of diseases and with high greenhouse gas (GHG) emissions. This study investigated population meat intake patterns in Hong Kong. It quantified the burden-of-disease and GHG emission outcomes by modelling the adjustment of Hong Kong population meat intakes to recommended healthy levels. It compared age- and sex-specific population meat, fruit and vegetable intakes, obtained from a population survey among adults aged 20 years and over in Hong Kong in 2005-2007, against intake recommendations suggested in the Modelling System to Inform the Revision of the Australian Guide to Healthy Eating (AGHE-2011-MS) technical document. This study found that meat and meat-alternative intakes, especially red meat intakes among Hong Kong males aged 20 years and over, are significantly higher than recommended. Red meat intakes among females aged 50-69 years and other meat and alternative intakes among those aged 20-59 years are also higher than recommended. Taking the 2005-2007 age- and sex-specific population meat intakes as baselines, three counterfactual scenarios of adjusting Hong Kong adult population meat intakes to AGHE-2011-MS and pre-2011 AGHE recommendations by the year 2030 were established. Consequent energy intake gaps were substituted with additional legume, fruit and vegetable intakes. To quantify the GHG emission outcomes associated with Hong Kong meat intakes, cradle-to-ready-to-eat life-cycle assessment emission outcome modelling was used. A comparative risk assessment burden-of-disease model was used to quantify the health outcomes. This study found that adjusting meat intakes to recommended levels could reduce Hong Kong GHG emissions by 17%-44% compared against baseline meat intake emissions, and prevent 2,519 to 7,012 premature deaths in males and 53 to 1,342 in females, as well as a multiple burden of diseases, when compared to the baseline meat intake scenario.
Whereas previous co-benefit studies compared lump-sum meat intake reductions and outcome measures across the entire population, using emission factors and relative risks from individual studies, this study used age- and sex-specific input and output measures, with emission factors and relative risks obtained from high-quality meta-analyses and meta-reviews respectively, and took government dietary recommendations into account. Hence, the evaluations in this study are of better quality and more reflective of real-life practices. Going beyond previous co-benefit studies, this study pinpointed age-, sex- and meat-type-specific intervention points and leverages. When compared with similar studies in Australia, this study also showed that intervention points and leverages among populations of different geographic and cultural backgrounds can differ, and that globalization also globalizes the emission effects of meat consumption. More region- and culture-specific evaluations are recommended to promote more sustainable meat consumption and enhance global food security.
Keywords: burden of diseases, greenhouse gas emissions, Hong Kong diet, sustainable meat consumption
283 Implementing a Comprehensive Emergency Care and Life Support Course in a Low- and Middle-Income Country Setting: A Survey of Learners in India
Authors: Vijayabhaskar Reddy Kandula, Peter Provost Taillac, Balasubramanya M. A., Ram Krishnan Nair, Gokul Toshnival, Vibhu Dhawan, Vijaya Karanam, Buffy Cramer
Abstract:
Introduction: The lack of emergency care services (ECS) is a cause of extensive and serious public health problems in low- and middle-income countries (LMICs). Many LMICs have ambulance services that allow timely transfer of ill patients, but poor care during the 'Golden Hour' leads to many otherwise preventable deaths. A lack of adequate training, as evidenced by a study in India, is a major reason for poor care during the 'Golden Hour'. Adopting developed-country models, which include staffing specialty-trained doctors in emergency care, is neither feasible nor a guarantee of cost-effective ECS. Methods: Based on our assessment and the needs expressed by first-line doctors providing emergency care in 2014, Rajiv Gandhi Health Sciences University's JeevaRaksha Trust, in partnership with the University of Utah, USA, designed, piloted and successfully implemented a 4-day Comprehensive Emergency Care and Life Support course (C-ECLS) for allopathic doctors. 1,730 doctors completed the 4-day course between June 2014 and December 2020. Subsequently, we conducted a survey to investigate the utilization rates and usefulness of the training. 1,662 were contacted, but only 309 completed the survey. The respondents had the following designations: senior faculty (33%), junior faculty (25%), resident (16%), private practitioner (8%), medical officer (16%) and not working (11%). 51% were generalists and the rest were specialists (>30 specialties). Results: 97% (271/280) felt they are better doctors because of C-ECLS. 79% (244/309) reported that the training helped to save a life; specialists were more likely to report this than generalists (91% vs. 68%, p<0.05). 64% agreed that they were more confident in managing symptomatic COVID-19 patients because of C-ECLS; 27% (77) were neutral and 9% (24) disagreed. 66% agreed that the training helped them to be confident in managing critically ill COVID-19 patients; 26% (72) were neutral and 8% (23) disagreed.
Frequency of use of C-ECLS skills: hemorrhage control (70%), airway (67%), circulation skills (62%), safe transport and communication (60%), managing critically ill patients (58%), cardiac arrest (51%), trauma (49%), poisoning/animal bites/stings (44%), neonatal resuscitation (39%), breathing (36%), post-partum hemorrhage and eclampsia (35%). Among those who used the skills, the majority (ranging from 88% to 94%) reported that they were able to apply the skill more effectively because of C-ECLS training. Conclusion: JeevaRaksha's C-ECLS is the world's first comprehensive training. It improves the confidence of front-line doctors and enables them to provide quality care during the 'Golden Hour' of an emergency. It also prepares doctors to manage unknown emergencies (e.g., COVID-19). C-ECLS was piloted in Morocco and Uzbekistan and implemented countrywide in Bhutan. C-ECLS is relevant to most settings and offers a replicable model across LMICs.
Keywords: comprehensive emergency care and life support, training, capacity building, low- and middle-income countries, developing countries
282 Pursuing Knowledge Society Excellence: Knowledge Management and Open Innovation Platforms for Research, Industry and Business Collaboration in Singapore
Authors: Irina-Emily Hansen, Ola Jon Mork
Abstract:
The European economic growth strategy and its supporting framework for research and innovation highlight the importance of nurturing open innovation in order to strengthen Europe's competitiveness. One of the main approaches to enhancing innovation in European society is the Triple Helix model, which centres on science-industry collaboration where the universities are assigned the managerial role. In spite of the defined collaboration strategy, collaboration between academia and industry in Europe still faces many challenges. Many of them are explained by cultural difference: academic culture aims towards scientific knowledge, while businesses are oriented towards production and profitable results; the execution of collaborative projects is also seen differently by the partners involved. This shows that traditional management strategies applied to collaboration between researchers and businesses are not effective. There is a need for dynamic strategies that can support the interaction between researchers and industry, intensifying knowledge co-creation and contributing to the development of a national innovation system (NIS) by incorporating individual, organizational and inter-organizational learning. In order to find a good example to follow, the researchers of this paper investigated one of the most rapidly developing knowledge-based innovation societies, Singapore. Singapore does not possess much in the way of land or sea resources that normally provide income for a country. Therefore, Singapore was forced to think differently and build its society on the resources that are available: talented people and knowledge. Over the last twenty years, Singapore has attracted highly rated university campuses, research institutions and leading industrial companies from all over the world. This article elucidates and elaborates Singapore's national innovation strategies from a knowledge management perspective.
The research covers the variety of organizations that enable and support knowledge development in this state: governmental research and development (R&D) centers in universities, private talent incubators for entrepreneurs, and industrial companies with their own R&D departments. The research methods are based on presentations, documents, and visits to a number of universities, research institutes, innovation parks, governmental institutions, industrial companies and innovation exhibitions in Singapore. In addition, a literature review of scientific articles on the topic was made. The first finding is that the objectives of collaboration between researchers, entrepreneurs and industry in Singapore correspond to the primary goals of the state: knowledge and economic growth. There are common objectives for all stakeholders at all national levels. The second finding is that Singapore has an enabling system at the national level that supports innovation all the way from fostering or capturing new knowledge, through knowledge exchange and co-creation, to its application in real life. The conclusion is that innovation means not only a new idea, but also the enabling mechanism for its execution and a market-oriented approach so that new knowledge can be absorbed by society. Future research can address the application of Singapore's knowledge management strategy in innovation to European countries.
Keywords: knowledge management strategy, national innovation system, research, industry and business collaboration, knowledge enabling
281 Development and Validation of a Quantitative Measure of Engagement in the Analysing Aspect of Dialogical Inquiry
Authors: Marcus Goh Tian Xi, Alicia Chua Si Wen, Eunice Gan Ghee Wu, Helen Bound, Lee Liang Ying, Albert Lee
Abstract:
The Map of Dialogical Inquiry provides a conceptual look at the underlying nature of future-oriented skills. According to the Map, learning is learner-oriented, with conversational time shifted from teachers to learners, who play a strong role in deciding what and how they learn. For example, in courses operating on the principles of Dialogical Inquiry, learners were able to leave the classroom with a deeper understanding of the topic, broader exposure to differing perspectives, and stronger critical thinking capabilities, compared to traditional approaches to teaching. Despite its contributions to learning, the Map is grounded in a qualitative approach both in its development and in its application for providing feedback to learners and educators. Studies hinge on open-ended responses by Map users, which can be time-consuming and resource-intensive. The present research is motivated by this gap in practicality and aims to develop and validate a quantitative measure of the Map. In addition, a quantifiable measure may strengthen applicability by making learning experiences trackable and comparable. The Map outlines eight learning aspects that learners should holistically engage. This research focuses on the Analysing aspect of learning. According to the Map, Analysing has four key components: liking or engaging in logic, using interpretative lenses, seeking patterns, and critiquing and deconstructing. Existing scales of constructs related to these components (e.g., critical thinking, rationality) were identified for the current scale to adapt items from. Specifically, items were phrased beginning with an "I", followed by an action phrase, to assess learners' engagement with Analysing either in general or in classroom contexts.
Following standard scale development procedure, the 26-item Analysing scale was administered to 330 participants alongside existing scales with varying levels of association to Analysing, to establish construct validity. Subsequently, the scale was refined and its dimensionality, reliability, and validity were determined. Confirmatory factor analysis (CFA) revealed whether scale items loaded onto the four factors corresponding to the components of Analysing. To refine the scale, items were systematically removed via an iterative procedure, according to their factor loadings and the results of likelihood ratio tests at each step. Eight items were removed this way. The Analysing scale is better conceptualised as unidimensional, rather than comprising the four components identified by the Map, for three reasons: 1) the covariance matrix of the model specified for the CFA was not positive definite, 2) correlations among the four factors were high, and 3) exploratory factor analyses did not yield an easily interpretable factor structure of Analysing. Regarding validity, since the Analysing scale had higher correlations with conceptually similar scales than with conceptually distinct scales, with minor exceptions, construct validity was largely established. Overall, the satisfactory reliability and validity of the scale suggest that the current procedure can result in a valid and easy-to-use measure for each aspect of the Map.
Keywords: analytical thinking, dialogical inquiry, education, lifelong learning, pedagogy, scale development
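Once the scale is treated as unidimensional, a standard internal-consistency check is Cronbach's alpha over the retained items. A sketch with simulated responses (not the study's data), assuming 18 retained items driven by a single shared factor:

```python
import numpy as np

# Sketch: Cronbach's alpha for a unidimensional scale. The response matrix
# is simulated; item count and factor structure are assumptions.
def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(330, 1))                      # shared "Analysing" factor
responses = latent + 0.5 * rng.normal(size=(330, 18))   # 18 retained items + noise
print(round(cronbach_alpha(responses), 2))
```

With strongly correlated items like these, alpha lands close to 1; weakly related items would pull it toward 0.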
280 Fiber Stiffness Detection of GFRP Using Combined ABAQUS and Genetic Algorithms
Authors: Gyu-Dong Kim, Wuk-Jae Yoo, Sang-Youl Lee
Abstract:
Composite structures offer numerous advantages over conventional structural systems in the form of higher specific stiffness and strength, lower life-cycle costs, and benefits such as easy installation and improved safety. Recently, there has been a considerable increase in the use of composites in engineering applications and as wraps for seismic upgrading and repairs. However, these composites deteriorate with time because of outdated materials, excessive use, repetitive loading, climatic conditions, manufacturing errors, and deficiencies in inspection methods. In particular, damaged fibers in a composite result in significant degradation of structural performance. In order to reduce the failure probability of composites in service, techniques to assess the condition of the composites to prevent continual growth of fiber damage are required. Condition assessment technology and nondestructive evaluation (NDE) techniques have provided various solutions for the safety of structures by means of detecting damage or defects from static or dynamic responses induced by external loading. A variety of techniques based on detecting the changes in static or dynamic behavior of isotropic structures has been developed in the last two decades. These methods, based on analytical approaches, are limited in their capabilities in dealing with complex systems, primarily because of their limitations in handling different loading and boundary conditions. Recently, investigators have introduced direct search methods based on metaheuristics techniques and artificial intelligence, such as genetic algorithms (GA), simulated annealing (SA) methods, and neural networks (NN), and have promisingly applied these methods to the field of structural identification. 
Among them, GAs attract attention because they do not require a considerable amount of data in advance when dealing with complex problems, and they make a global solution search possible, as opposed to classical gradient-based optimization techniques. In this study, we propose an alternative damage-detection technique that can determine the degraded stiffness distribution of vibrating laminated composites made of glass fiber-reinforced polymer (GFRP). The proposed method uses a modified form of the bivariate Gaussian distribution function to detect degraded stiffness characteristics. In addition, this study presents a method to detect the fiber property variation of laminated composite plates from the micromechanical point of view. A finite element model is used to study the free vibrations of laminated composite plates for fiber stiffness degradation. In order to solve the inverse problem using the combined method, this study uses only the first mode shapes of a structure as the measured frequency data. In particular, this study focuses on the effect of the interaction among various parameters, such as fiber angles, layup sequences, and damage distributions, on fiber-stiffness damage detection.
Keywords: stiffness detection, fiber damage, genetic algorithm, layup sequences
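The GA-based inverse search described above can be sketched on a toy problem: recover a stiffness degradation factor from a "measured" natural frequency. The single-degree-of-freedom model below is a stand-in for the laminated-plate finite element model, and all parameters are assumptions.

```python
import numpy as np

# Minimal GA sketch in the spirit of the inverse problem above: candidate
# damage ratios evolve to match a measured first natural frequency.
rng = np.random.default_rng(42)

K0, M = 1.0e6, 10.0                       # nominal stiffness and mass (assumed)
true_damage = 0.35                        # hidden degradation (to be recovered)
f_measured = np.sqrt(K0 * (1 - true_damage) / M) / (2 * np.pi)

def fitness(d):
    f_model = np.sqrt(K0 * (1 - d) / M) / (2 * np.pi)
    return -(f_model - f_measured) ** 2   # maximise negated squared error

pop = rng.uniform(0, 1, size=40)          # candidate damage ratios in [0, 1)
for _ in range(60):
    fit = fitness(pop)
    parents = pop[np.argsort(fit)][-20:]                   # selection: best half
    children = 0.5 * (parents + rng.permutation(parents))  # crossover: averaging
    children += rng.normal(0, 0.02, size=children.shape)   # mutation
    pop = np.clip(np.concatenate([parents, children]), 0, 0.999)

best = pop[np.argmax(fitness(pop))]
print(round(float(best), 3))
```

In the combined approach the scalar frequency mismatch is replaced by a mismatch over FE-computed mode shapes, with ABAQUS (or any FE solver) evaluating each candidate stiffness distribution.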
279 Numerical Investigations of Unstable Pressure Fluctuations Behavior in a Side Channel Pump
Authors: Desmond Appiah, Fan Zhang, Shouqi Yuan, Wei Xueyuan, Stephen N. Asomani
Abstract:
The side channel pump has distinctive hydraulic performance characteristics compared with other vane pumps because it generates high pressure heads in only one impeller revolution. Hence, its utilization is soaring in petrochemical and food processing, and in automotive and aerospace fuel pumping, where high heads are required at low flows. The side channel pump is characterized by unstable flow: after fluid flows into the impeller passage, it moves into the side channel, comes back to the impeller, and then moves on to the next circulation, so the flow leaves the side channel pump following a helical path. The pressure fluctuation exhibited in the flow contributes greatly to the unwanted noise and vibration associated with the flow. In this paper, a side channel pump prototype was examined thoroughly through numerical calculations based on the SST k-ω turbulence model to ascertain the pressure fluctuation behavior. The pressure fluctuation intensity of the 3D unstable flow dynamics was carefully investigated under three working conditions: 0.8QBEP, 1.0QBEP and 1.2QBEP. The results showed that the pressure fluctuation distribution around the pressure side of the blade is greater than on the suction side at the impeller and side channel interface (z=0) for all three operating conditions. The part-load condition 0.8QBEP recorded the highest pressure fluctuation distribution because of the high circulation velocity, causing an intense exchanged flow between the impeller and side channel. Time and frequency domain spectra of the pressure fluctuation patterns in the impeller and the side channel were also analyzed at the best efficiency point, QBEP, using the solution from the numerical calculations.
It was observed from the time-domain analysis that the pressure fluctuation in the impeller flow passage increased steadily until the flow reached the interrupter, which separates the low pressure at the inflow from the high pressure at the outflow. The pressure fluctuation amplitudes in the frequency domain spectrum at the different monitoring points depicted a gently decreasing trend, which was common among the operating conditions. The frequency domain also revealed that the main excitation frequencies occurred at 600 Hz, 1200 Hz, and 1800 Hz, and continued at integer multiples of the rotating shaft frequency. Also, the mass flow exchange plots indicated that the side channel pump is characterized by many vortex flows. Operating conditions 0.8QBEP and 1.0QBEP depicted fewer, similar vortex flows, while 1.2QBEP recorded many vortex flows around the inflow, middle and outflow regions. The results of the numerical calculations were finally verified experimentally. The performance characteristic curves from the simulated results showed that the 0.8QBEP working condition recorded a head increase of 43.03% and an efficiency decrease of 6.73% compared to 1.0QBEP. It can be concluded that for industrial applications where high heads are mostly required, the side channel pump can be designed to operate at part-load conditions. This paper can serve as a source of information for optimizing reliable performance and widening the applications of side channel pumps.
Keywords: exchanged flow, pressure fluctuation, numerical simulation, side channel pump
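The frequency-domain analysis above amounts to locating spectral peaks in the monitored pressure signals. A sketch using a synthetic signal whose tones are placed at the reported excitation frequencies (the signal itself, sampling rate and amplitudes are assumptions, not simulation output):

```python
import numpy as np

# Sketch: extracting the dominant excitation frequency from a monitored
# pressure signal via FFT. Synthetic signal: 600 Hz tone plus harmonics + noise.
fs = 20_000                                  # assumed sampling rate [Hz]
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(1)
p = (1.0 * np.sin(2 * np.pi * 600 * t)       # fundamental
     + 0.4 * np.sin(2 * np.pi * 1200 * t)    # 2nd harmonic
     + 0.2 * np.sin(2 * np.pi * 1800 * t)    # 3rd harmonic
     + 0.1 * rng.normal(size=t.size))        # broadband noise

spectrum = np.abs(np.fft.rfft(p))            # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
dominant = freqs[np.argmax(spectrum)]
print(dominant)
```

Applied at each monitoring point, the same peak-picking identifies the 600/1200/1800 Hz family and its relation to the shaft frequency.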
278 Reducing Flood Risk in a Megacity: Using Mobile Application and Value Capture for Flood Risk Prevention and Risk Reduction Financing
Authors: Dedjo Yao Simon, Takahiro Saito, Norikazu Inuzuka, Ikuo Sugiyama
Abstract:
The megacity of Abidjan is a coastal urban area where the number of reported floods and the associated impacts are increasing rapidly due to climate change, uncontrolled urbanization, rapid population increase, a lack of flood disaster mitigation, and low citizen awareness. The objective of this research is to reduce, in both the short and long term, the human and socio-economic impacts of flooding. Hydrological simulation is applied to free-of-charge global spatial data (digital elevation model, satellite-based rainfall estimates, land use) to identify flood-prone areas and to map flood risk. Direct interviews with a sample of residents are used to validate the simulation results. Then a mobile application (Flood Locator) is prototyped to disseminate the risk information to citizens. In addition, a value capture strategy is proposed to mobilize financial resources for disaster risk reduction financing (DRRf) to reduce the impact of flooding. The town of Cocody in Abidjan is selected as the case study area for this research. The mapping of flood risk reveals that the population living in the study area is highly vulnerable. For a 5-year flood, more than 60% of the floodplain is affected by a water depth of at least 0.5 meters, and more than 1,000 ha with at least 5,000 buildings are directly exposed. The risk becomes higher for 50- and 100-year floods. The interviews also reveal that the majority of citizens are not aware of the risk and severity of flooding in their community. This shortage of information is overcome by Flood Locator and by an urban flood database we prototyped to accumulate flood data. Flood Locator allows users to view the floodplain and flood depth on a digital map; the user can activate the GPS sensor of the mobile device to visualize his or her location on the map. Additional features allow citizen users to capture flood event and damage information that they can send remotely to the database.
The disclosure of risk information could also result in a decrease (-14%) in the value of properties located inside the floodplain and an increase (+19%) in the value of properties in the suburban areas. The tax increment generated in the safer areas should be captured to constitute the DRRf. The fund should be allocated to the reduction of flood risk for the benefit of people living in flood-prone areas. The flood prevention system discussed in this research will minimize, in the short and long term, the direct damages in risky areas through effective citizen awareness and the availability of the DRRf. It will also contribute to the growth of the urban area in the safer zone and reduce human settlement in risky areas in the long term. Data accumulated in the urban flood database through the warning app will contribute to regenerating Abidjan as a more resilient city by means of risk-aware land use in the master plan.
Keywords: abidjan, database, flood, geospatial techniques, risk communication, smartphone, value capture
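The exposure statistics quoted above (the share of the floodplain with a water depth of at least 0.5 m, and the directly exposed area) can be derived from a gridded water-depth map in a few lines. The sketch below is a hypothetical illustration only: the toy depth grid and the 30 m cell size are invented, not outputs of the study's hydrological simulation.

```python
import numpy as np

# Invented water-depth grid (metres) for a 5-year flood scenario; in the
# study this would come from a DEM-based hydrological simulation.
depth = np.array([
    [0.0, 0.2, 0.6, 1.1],
    [0.1, 0.7, 0.9, 1.4],
    [0.0, 0.0, 0.5, 0.8],
])
cell_area_ha = (30 * 30) / 10_000  # assumed 30 m cells, converted to hectares

floodplain = depth > 0.0   # wet cells define the floodplain
exposed = depth >= 0.5     # cells at or above the 0.5 m depth threshold

fraction_exposed = exposed.sum() / floodplain.sum()
exposed_area_ha = exposed.sum() * cell_area_ha
print(round(fraction_exposed, 2), round(exposed_area_ha, 2))  # 0.78 0.63
```

The same threshold logic, applied to a full-resolution depth raster together with a building footprint layer, yields the kind of exposure counts reported in the abstract.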
Procedia PDF Downloads 290
277 Positioning Mama Mkubwa Indigenous Model into Social Work Practice through Alternative Child Care in Tanzania: Ubuntu Perspective
Authors: Johnas Buhori, Meinrad Haule Lembuka
Abstract:
Introduction: Social work expands its boundaries to accommodate indigenous knowledge and practice for better competence and services. In Tanzania, Mama Mkubwa (MMM) (mother's elder sister) is an indigenous practice of alternative child care that represents other traditional practices across African societies known as Ubuntu practice. Ubuntu is African humanism, with values and approaches that are connected to social work. MMM focuses on using the elder sister of a deceased mother or father, or a trusted elder woman from the extended family or indigenous community, to provide alternative care to an orphan or vulnerable child. In the Ubuntu perspective, it takes a whole village or community to raise a child, meaning that every person in the community is responsible for child care. Methodology: A desk review method guided by Ubuntu theory was applied to enrich the study. Findings: MMM resembles the Ubuntu ideal of traditional protection of children in need as part of alternative child care throughout Tanzanian history. Social work practice, along with other forms of formal alternative child care, was introduced in Tanzania during the colonial era in the 1940s. The socio-economic problems of the 1980s affected the country's formal social welfare system, and the HIV/AIDS pandemic then increased the vulnerability of children and hampered the capacity of the formal sector to provide social welfare services, including alternative child care. For decades, AIDS contributed to an influx of orphans and vulnerable children, which facilitated the re-emergence of traditional alternative child care at the community level, including MMM. MMM was strongly practiced in regions where the AIDS pandemic affected the community, such as Njombe, the Coastal region, and Kagera. Despite existing challenges, MMM has remained a remarkable form of alternative child care practiced in both rural and urban communities, integrated with social welfare services.
Tanzania envisions a traditional mechanism of family or community environment for alternative child care, with the notion that institutionalized care sometimes fails to offer children all they need to become productive members of society, and that it later becomes difficult for them to reconnect with society. Implications for Social Work: MMM is compatible with social work through its use of strengths perspectives; MMM reflects the Ubuntu perspective on the grounds of humane social work, using humane methods to achieve human goals. MMM further demonstrates the connectedness of those who care and those cared for, and the inextricable link between them, as an Ubuntu-inspired model of social work that views children from family, community, environmental, and spiritual perspectives. Conclusion: Social work and MMM are compatible at the micro and mezzo levels; thus, MMM can be applied in social work practice beyond Tanzania when properly designed and integrated into other systems. When MMM is applied in social work, alternative care has the potential not only to support children but also to empower families and communities. Since MMM is community-owned and voluntary-based, it can relieve the government, social workers, and other formal sectors of the annual cost burden of providing institutionalized alternative child care.
Keywords: ubuntu, indigenous social work, african social work, ubuntu social work, child protection, child alternative care
Procedia PDF Downloads 66
276 The Integration of Apps for Communicative Competence in English Teaching
Authors: L. J. de Jager
Abstract:
In the South African English school curriculum, one of the aims is to achieve communicative competence: the knowledge of using language competently and appropriately in a speech community. Communicatively competent speakers should not only produce grammatically correct sentences but also produce contextually appropriate sentences for various purposes and in different situations. As most speakers of English are non-native speakers, achieving communicative competence remains a complex challenge. Moreover, the changing needs of society necessitate not merely language proficiency but also technological proficiency. One of the burning issues in the South African educational landscape is the replacement of the standardised literacy model by the pedagogy of multiliteracies, which incorporates, by default, the exploration of the technological text forms that are part of learners' everyday lives. It foresees learners as decoders, encoders, and manufacturers of their own futures, exploiting technological possibilities to constantly create and recreate meaning. As such, 21st-century learners will feel comfortable working with multimodal texts that are intrinsically part of their lives and, by doing so, become authors of their own learning experiences, while teachers may become agents supporting learners to discover their capacity to acquire new digital skills for the century of multiliteracies. The aim is transformed practice, where learners use their skills, ideas, and knowledge in new contexts. This paper reports on a research project on the integration of technology for language learning, based on the technological pedagogical content knowledge framework, conceptually founded in the theory of multiliteracies, and aimed at achieving communicative competence. The qualitative study uses the community of inquiry framework to answer the research question: How does the integration of technology transform the language teaching of preservice teachers?
Pre-service teachers in the Postgraduate Certificate in Education programme with English as methodology were purposively selected to source and evaluate apps for teaching and learning English. The participants collaborated online in a dedicated Blackboard module, using discussion threads to sift through applicable apps and develop interactive lessons using them. The selected apps were entered onto a predesigned Qualtrics form. Data from the online discussions, focus group interviews, and reflective journals were thematically and inductively analysed to determine the participants' perceptions and experiences when integrating technology in lesson design and the extent to which communicative competence was achieved when using these apps. Findings indicate transformed practice among participants and research team members alike, with better-than-average technology acceptance and integration. Participants found value in online collaboration to develop and improve their own teaching practice by directly experiencing the benefits of integrating e-learning into the teaching of languages. It could not, however, be clearly determined whether communicative competence was improved. The findings of the project may inform future e-learning activities, thus supporting student learning and development in follow-up cycles of the project.
Keywords: apps, communicative competence, English teaching, technology integration, technological pedagogical content knowledge
Procedia PDF Downloads 163
275 Multi-Scale Geographic Object-Based Image Analysis (GEOBIA) Approach to Segment Very High Resolution Images for Extraction of New Degraded Zones: Application to the Region of Mécheria in the South-West of Algeria
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
A considerable area of Algerian land is threatened by the phenomenon of wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of the increasing irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has become particularly accentuated. The extent of degradation in the arid region of the Algerian Mécheria department has generated a new situation characterized by the reduction of vegetation cover, the decrease of land productivity, and sand encroachment on urban development zones. In this study, we investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of ancient dune cords based on the numerical processing of PlanetScope PSB.SD sensor images acquired on September 29, 2021. As a second step, we explore the use of a multi-scale geographic object-based image analysis (GEOBIA) approach to segment the high spatial resolution images acquired over heterogeneous surfaces that vary according to human influence on the environment. We used the fractal net evolution approach (FNEA) algorithm to segment the images (Baatz & Schäpe, 2000). Multispectral data, a digital terrain model layer, ground truth data, a normalized difference vegetation index (NDVI) layer, and a first-order texture (entropy) layer were used to segment the multispectral images at three segmentation scales, with an emphasis on accurately delineating the boundaries and components of the sand accumulation areas (dunes, dune fields, nebkas, and barchans). It is important to note that each auxiliary dataset contributed to improving the segmentation at different scales. The silted areas were classified using a nearest neighbor approach over the Naâma area.
The classification of silted areas was successfully achieved over all study areas with an accuracy greater than 85%, although the results suggest that, overall, a higher degree of landscape heterogeneity may have a negative effect on segmentation and classification. Some areas suffered from the greatest over-segmentation and the lowest mapping accuracy (Kappa: 0.79), which was partially attributed to confounding a greater proportion of mixed siltation classes from both sandy areas and bare ground patches. This research has demonstrated a technique based on very high resolution images for mapping silted and degraded areas using GEOBIA, which can be applied to the study of other lands in the steppe areas of the northern countries of the African continent.
Keywords: land development, GIS, sand dunes, segmentation, remote sensing
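Of the auxiliary layers listed above, the NDVI layer is the simplest to reproduce: it is computed per pixel from the red and near-infrared reflectance bands as (NIR - Red) / (NIR + Red). A minimal sketch with invented reflectance values (assumed scaled to [0, 1]), not real PlanetScope data:

```python
import numpy as np

# NDVI auxiliary layer for segmentation: (NIR - Red) / (NIR + Red), per pixel.
# The 2x2 reflectance arrays are invented illustration values; low NDVI
# (near 0) suggests bare sand, high NDVI suggests vegetation cover.
red = np.array([[0.10, 0.30],
                [0.25, 0.05]])
nir = np.array([[0.50, 0.35],
                [0.30, 0.45]])

ndvi = (nir - red) / (nir + red)  # values fall in [-1, 1]
print(ndvi.round(2))
```

In a GEOBIA workflow this raster is stacked with the multispectral bands, the terrain model, and the texture layer before segmentation, so that object boundaries can follow vegetation as well as spectral contrasts.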
Procedia PDF Downloads 109
274 Consumers and Voters’ Choice: Two Different Contexts with a Powerful Behavioural Parallel
Authors: Valentina Dolmova
Abstract:
What consumers choose to buy and whom voters select on election days are two questions that have captivated the interest of both academics and practitioners for many decades. The importance of understanding what influences the behavior of these groups, and whether or not we can predict or control it, fuels a steady stream of research in a range of fields. Looking only at the past 40 years, more than 70 thousand scientific papers have been published in each field, consumer behavior and political psychology, respectively. From marketing, economics, and the science of persuasion to political and cognitive psychology, all have remained heavily engaged. Ever-evolving technology, inevitable socio-cultural shifts, global economic conditions, and much more play an important role in choice equations regardless of context. On one hand, this makes the research efforts always relevant and needed. On the other, the relatively low number of cross-field collaborations, which seem to be picking up only in more recent years, leaves the existing findings isolated in framed bubbles. By performing systematic research across both areas of psychology and building a parallel between theories and factors of influence, however, we find not only that there is definitive common ground between the behaviors of consumers and voters but also that we are moving towards a global model of choice. This means that the lines between contexts are fading, which has a direct implication for what we should focus on when predicting or navigating buyers' and voters' behavior. Internal and external factors in four main categories determine the choices we make as consumers and as voters. Together, the personal, psychological, social, and cultural categories create a holistic framework through which all stimuli in relation to a particular product or political party are filtered. The analogy “consumer-voter” solidifies further.
Leading academics suggest that this fundamental parallel is the key to successfully managing political and consumer brands alike. However, we distinguish four additional key stimuli that relate to those factor categories ((1) opportunity costs; (2) the memory of the past; (3) recognisable figures/faces; and (4) conflict), arguing that the level of expertise a person has determines the prevalence of particular factors or stimuli. Our efforts take into account global trends such as the establishment of “celebrity politics” and the image of “ethically concerned consumer brands”, which bridge the gap between contexts to an even greater extent. Scientists and practitioners are pushed to accept the transformative nature of both fields of social psychology. Existing blind spots, as well as the limited amount of research conducted outside American and European societies, open up space for more collaborative efforts in this highly demanding and lucrative field. A mixed method of research tests three main hypotheses: the first two are focused on the irrelevance of context when comparing voting and consumer behavior, from both the factor and stimulus lenses, and the third on determining whether or not the level of expertise in either field skews which prism we are more likely to choose when evaluating options.
Keywords: buyers’ behaviour, decision-making, voters’ behaviour, social psychology
Procedia PDF Downloads 154
273 An Architecture of Ingenuity and Empowerment
Authors: Timothy Gray
Abstract:
This paper will present work and discuss lessons learned during a semester-long travel study based in Southeast Asia, run first in the spring semester of 2019 and again in the summer of 2023. The first travel group consisted of fifteen students and the second of twelve, ranging from second-year to graduate level and majoring in either architecture or planning. Students worked in interdisciplinary teams, each team beginning its travel study by living together for over a month in (relatively) remote conditions in a separate small town in rural Thailand. Students became intimately familiar with these towns, forged strong personal relationships, and built reservoirs of knowledge one conversation at a time. Rather than impose external ideas and solutions, students were asked to learn from and be open to lessons from the people and the place. The following design statement was used as a point of departure for their investigations: It is our shared premise that architecture exists in the small villages and towns of Southeast Asia in the ingenuity of the people; that architecture exists in a shared language of making, modifying, and reusing. It is a modest but vibrant architecture, an architecture that is alive and evolving, an architecture that is small in scale, accessible, and one that emerges from the people. It is an architecture that can exist in a modified bicycle, a woven bamboo bridge, or a self-built community. Students were challenged to engage with existing conditions as design professionals, both empowering and lending coherence to the energies that already existed in the place. As one of the student teams noted in their design narrative: “During our field study, we had the unique opportunity to tour a number of informal settlements and meet and talk to residents through interpreters. We found that many of the residents work in nearby factories for dollars a day.
Others find employment in self-generated informal economies such as hand carving and textiles. Despite extreme poverty, we found these places to be vibrant and full of life as people navigate these challenging conditions to live lives with purpose and dignity.” Students worked together with local community members and colleagues to develop a series of varied proposals that emerged from their interrogations of place, partnering with community members and professional colleagues in the development of these proposals. Project partners included faculty and student colleagues at Yangon University, the mayor's office, planning department officials and religious leaders in Sawankhalok, Thailand, and community leaders in Natonchan, Thailand, to name a few. This paper will present a series of student community-based design projects that emerged from these conditions. The paper will also discuss this model of travel study as a way of building an architecture that uses social and cultural issues as a catalyst for design. Finally, the paper will discuss lessons relative to sustainable development that the Western students learned through their travels in Southeast Asia.
Keywords: travel study, CAPasia, architecture of empowerment, modular housing
Procedia PDF Downloads 47
272 Born in Limbo, Living in Limbo and Probably Will Die in Limbo
Authors: Betty Chiyangwa
Abstract:
The subject of second-generation migrant youth is under-researched in the context of South Africa, and their opinions and views have thus been marginalised in social science research. This paper addresses this gap by exploring the complexities of second-generation Mozambican migrant youth's lived experiences, in how they construct their identities and develop a sense of belonging in post-apartheid South Africa, specifically in Bushbuckridge. Bushbuckridge was among the earliest districts to accommodate Mozambican refugees in South Africa in the 1970s and remains associated with large numbers of Mozambicans. Drawing on Crenshaw's (1989) intersectionality approach, the study contributes to knowledge on South-to-South migration by demonstrating how this approach can be operationalised to understand the complex lived experiences of a disadvantaged group in life and possibly in death. In conceptualising the notion of identity among second-generation migrant youth, this paper explores the history and present of first- and second-generation Mozambican migrants in South Africa to reveal how being born to migrant parents and raised in a host country poses life-long complications for one's identity and sense of belonging. In the quest to form their identities and construct a sense of belonging, migrant youth employ precarious means to navigate the terrain. This is a case study informed by semi-structured interviews and narrative data gathered from 22 second-generation Mozambican migrant youth between 18 and 34 years of age, born to at least one Mozambican parent living in Bushbuckridge and raised in South Africa. The views of two key informants from the South African Department of Home Affairs and the local tribal authority provided additional perspectives on second-generation migrant youth's lived experiences in Bushbuckridge, which were explored thematically and narratively through Braun and Clarke's (2012) six-step framework for analysing qualitative data.
In exploring the interdependency and interconnectedness of social categories and social systems in Bushbuckridge, the findings revealed that participants' experiences of identity formation and development of a sense of belonging were marginalised in complex, intersectional, and precarious ways, as they constantly (re)negotiated daily experiences largely shaped by their paradoxical migrant status in a host country. This study found that, in the quest for belonging, migrant youths were not a perfectly integrated category but evolved through almost daily lived experiences of creating a living that gave them an identity and a sense of belonging in South Africa. The majority of them shared feelings of having lived in limbo since childhood and the fear of possibly dying in limbo, with no clear (solid) sense of belonging to either South Africa or Mozambique. This study concludes that there is a strong association between feelings of identity, sense of belonging, and levels of social integration. It recommends the development and adoption of a comprehensive multilayer model for understanding second-generation migrant youth identity and belonging in South Africa, one that encourages a collaborative effort among individual migrant youth, their family members, neighbours, society, and regional and national institutional structures for migrants to enhance and harness their capabilities and improve their wellbeing in South Africa.
Keywords: bushbuckridge, limbo, mozambican migrants, second-generation
Procedia PDF Downloads 70
271 The Effect of Ionic Liquid Anion Type on the Properties of TiO2 Particles
Authors: Marta Paszkiewicz, Justyna Łuczak, Martyna Marchelek, Adriana Zaleska-Medynska
Abstract:
In recent years, photocatalytic processes have been intensively investigated for the destruction of pollutants, hydrogen evolution, disinfection of water, air, and surfaces, and the construction of self-cleaning materials (tiles, glass, fibres, etc.). Titanium dioxide (TiO2) is the most popular material used in heterogeneous photocatalysis due to its excellent properties, such as high stability, chemical inertness, non-toxicity, and low cost. It is well known that the morphology and microstructure of TiO2 significantly influence its photocatalytic activity. These characteristics, as well as other physical and structural properties of photocatalysts, i.e., specific surface area or density of crystalline defects, can be controlled by the preparation route. In this regard, TiO2 particles can be obtained by sol-gel, hydrothermal, and sonochemical methods, by chemical vapour deposition and, alternatively, by ionothermal synthesis using ionic liquids (ILs). In TiO2 particle synthesis, ILs may play the role of a solvent, soft template, reagent, agent promoting reduction of the precursor, or particle stabilizer during the synthesis of inorganic materials. In this work, the effect of the IL anion type on the morphology and photoactivity of TiO2 is presented. The preparation of TiO2 microparticles with a spherical structure was successfully achieved by a solvothermal method, using tetrabutyl orthotitanate (TBOT) as the precursor. The reaction process was assisted by the ionic liquids 1-butyl-3-methylimidazolium bromide [BMIM][Br], 1-butyl-3-methylimidazolium tetrafluoroborate [BMIM][BF4], and 1-butyl-3-methylimidazolium hexafluorophosphate [BMIM][PF6]. Various molar ratios of each IL to TBOT (IL:TBOT) were chosen. For comparison, reference TiO2 was prepared using the same method without IL addition.
Scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), Brunauer-Emmett-Teller (BET) surface area analysis, NCHS elemental analysis, and FTIR spectroscopy were used to characterize the surface properties of the samples. The photocatalytic activity was investigated by means of phenol photodegradation in the aqueous phase as a model pollutant, as well as the formation of hydroxyl radicals based on detection of the fluorescent product of coumarin hydroxylation. The analysis showed that the TiO2 microspheres had a spherical structure with diameters ranging from 1 to 6 µm. The TEM micrographs gave a clear view of the samples, in which the particles were composed of inter-aggregated crystals. It could also be observed that the IL-assisted TiO2 microspheres are not hollow, which provides additional information about the possible formation mechanism. Application of the ILs resulted in an increase in both the photocatalytic activity and the BET surface area of TiO2 as compared to pure TiO2. The results for the formation of 7-hydroxycoumarin indicated that the increased amount of ·OH produced at the surface of excited TiO2 for the TiO2_IL samples correlated well with the more efficient degradation of phenol. NCHS analysis showed that the ionic liquids remained on the TiO2 surface, confirming the structure-directing role of these compounds.
Keywords: heterogeneous photocatalysis, IL-assisted synthesis, ionic liquids, TiO2
Procedia PDF Downloads 267
270 Climate Change Impact on Mortality from Cardiovascular Diseases: Case Study of Bucharest, Romania
Authors: Zenaida Chitu, Roxana Bojariu, Liliana Velea, Roxana Burcea
Abstract:
A number of studies show that extreme air temperature affects mortality related to cardiovascular diseases, particularly among elderly people. In Romania, summer thermal discomfort, expressed by the Universal Thermal Climate Index (UTCI), is highest in the southern part of the country, where Bucharest, the largest Romanian urban agglomeration, is also located. Urban characteristics such as high building density and reduced green areas enhance the increase of air temperature during summer. In Bucharest, as in many other large cities, the urban heat island effect is present and determines an increase of air temperature compared to surrounding areas. This increase is particularly important during summer heat wave periods. In this context, we performed a temperature-mortality analysis based on daily deaths related to cardiovascular diseases recorded between 2010 and 2019 in Bucharest. The temperature-mortality relationship was modeled by applying a distributed lag non-linear model (DLNM) that includes a bi-dimensional cross-basis function and flexible natural cubic spline functions, with three internal knots at the 10th, 75th, and 90th percentiles of the temperature distribution, for modelling both the exposure-response and lagged-response dimensions. This analysis was first applied to the present climate. Extrapolation of the exposure-response associations beyond the observed data then allowed us to estimate future effects on mortality due to temperature changes under climate change scenarios and specific assumptions. We used future projections of air temperature from five numerical experiments with regional climate models included in the EURO-CORDEX initiative under the relatively moderate (RCP 4.5) and pessimistic (RCP 8.5) concentration scenarios. The results of this analysis show, for RCP 8.5, an ensemble-averaged increase of 6.1% in the heat-attributable mortality fraction in the future in comparison with the present climate (2090-2100 vs. 2010-2019), corresponding to an increase of 640 deaths/year, while the mortality fraction due to cold conditions will be reduced by 2.76%, corresponding to a decrease of 288 deaths/year. When the mortality data are stratified according to age, the ensemble-averaged increase of the heat-attributable mortality fraction for elderly people (>75 years) in the future is even higher (6.5%). These findings reveal the necessity of carefully planning urban development in Bucharest to face the public health challenges raised by climate change. Paper Details: This work is financed by the project URCLIM, which is part of ERA4CS, an ERA-NET initiated by JPI Climate, and funded by the Ministry of Environment, Romania, with co-funding by the European Union (Grant 690462). Part of this work performed by one of the authors has received funding from the European Union's Horizon 2020 research and innovation programme through the project EXHAUSTION under grant agreement No 820655.
Keywords: cardiovascular diseases, climate change, extreme air temperature, mortality
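The heat-attributable mortality fraction reported above can be illustrated with a simplified calculation: once an exposure-response curve gives a relative risk RR for each day's temperature relative to the minimum-mortality temperature, that day's attributable fraction is (RR - 1)/RR, and the overall fraction is the deaths-weighted sum. The sketch below assumes a toy log-linear curve (slope 0.03 per degree above 22 degrees C) and invented daily data; the study itself fits a DLNM with spline cross-bases rather than this simple form.

```python
import math

# Assumed toy exposure-response curve, NOT the fitted DLNM from the study:
# log(RR) rises linearly with temperature above the minimum-mortality point.
MMT = 22.0   # assumed minimum-mortality temperature (deg C)
BETA = 0.03  # assumed log-RR increase per degree above MMT

def heat_af(temp_c):
    """Attributable fraction of deaths on a day with mean temperature temp_c."""
    if temp_c <= MMT:
        return 0.0  # no heat-attributable deaths at or below the MMT
    rr = math.exp(BETA * (temp_c - MMT))
    return (rr - 1.0) / rr

# Invented (mean temperature, daily deaths) records for three days.
days = [(30.0, 40), (25.0, 35), (20.0, 30)]
attributable = sum(heat_af(t) * d for t, d in days)
total = sum(d for _, d in days)
print(round(100 * attributable / total, 1))  # heat-attributable fraction, %
```

Summing such daily attributable deaths over present-day and projected future temperature series, and differencing the two, yields changes of the kind reported here (e.g. +6.1% under RCP 8.5).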
Procedia PDF Downloads 128
269 Production of Medicinal Bio-Active Amino Acid Gamma-Aminobutyric Acid in Dairy Sludge Medium
Authors: Farideh Tabatabaee Yazdi, Fereshteh Falah, Alireza Vasiee
Abstract:
Introduction: Gamma-aminobutyric acid (GABA) is a non-protein amino acid that is widely present in organisms. GABA is a pharmacologically and biologically active component with wide and useful applications. Several important physiological functions of GABA have been characterized, such as neurotransmission and induction of hypotension. GABA is also a strong secretagogue of insulin from the pancreas, effectively inhibits small airway-derived lung adenocarcinoma, and acts as a tranquilizer. Many microorganisms can produce GABA, and lactic acid bacteria have been a focus of research in recent years because they possess special physiological activities and are generally regarded as safe. Among them, Lb. brevis produces the highest amount of GABA. The major factors affecting GABA production have been characterized, including carbon sources and glutamate concentration. The use of food industry waste to produce valuable products such as amino acids appears to be a good way to reduce production costs and prevent the waste of food resources. In a dairy factory, a high volume of sludge is produced from the separator; this sludge contains useful compounds such as growth factors, carbon, nitrogen, and organic matter that can be used by microorganisms such as Lb. brevis as carbon and nitrogen sources. It is therefore a good substrate for GABA production. GABA is primarily formed by the irreversible α-decarboxylation of L-glutamic acid or its salts, catalysed by the glutamate decarboxylase (GAD) enzyme. In the present study, this aim was achieved through the fast growth of Lb. brevis and the production of GABA, using dairy industry sludge as the growth medium. Lactobacillus brevis strains obtained from the Microbial Type Culture Collection (MTCC) were used as model strains. To prepare the dairy sludge as a medium, it was sterilized at 121 °C for 15 minutes. Lb. brevis was inoculated into the sludge medium at pH 6 and incubated for 120 hours at 30 °C.
After fermentation, the culture was centrifuged, and the GABA produced in the supernatant was analyzed qualitatively by thin-layer chromatography (TLC) and quantitatively by high-performance liquid chromatography (HPLC). As the percentage of dairy sludge in the culture medium increased, the amount of GABA increased. Evaluation of bacterial growth in this medium also showed the positive effect of dairy sludge on the growth of Lb. brevis, which resulted in the production of more GABA. GABA-producing LAB offer the opportunity to develop naturally fermented, health-oriented products. Although some GABA-producing LAB have been isolated to find strains suitable for different fermentations, further screening of various GABA-producing strains of LAB, especially high-yielding strains, is necessary. The production of gamma-aminobutyric acid by lactic acid bacteria is safe and eco-friendly. The use of dairy industry waste enhances environmental safety and provides the possibility of producing valuable compounds such as GABA. In general, dairy sludge is a suitable medium for the growth of lactic acid bacteria and the production of this amino acid, and it can reduce the final cost by providing carbon and nitrogen sources.
Keywords: GABA, Lactobacillus, HPLC, dairy sludge
Procedia PDF Downloads 144
268 Hegemonic Salaryman Masculinity: Case Study of Transitional Male Gender Roles in Today's Japan
Authors: D. Norton
Abstract:
This qualitative study focuses on the lived experience and displacement of young white-collar masculinities in Japan. In recent years, the salaryman lifestyle has undergone significant disruption - increased competition for regular employment, a rise in non-regular structurings of labour across public/private sectors, and shifting role expectations within the home. Despite this, related scholarship hints at a continued reinforcement of the traditional male gender role - the salaryman remains a key benchmark of Japanese masculine identity. For those in structural proximity to these more ‘normative’ performativities, interest lies in their engagement with such narratives - how they make sense of their masculinity in response to the stated changes. In light of the historical emphasis on labour and breadwinning logics, the notions of respective security or precarity generated as a result remain unclear. Similarly, concern extends to developments within the private sphere - by what means young white-collar men construct ideas of singlehood and companionship according to traditional gender ideologies or more contemporary, flexible readings. The influence of these still-emergent status distinctions on the logics of the social group in question is yet to be explored in depth by gender scholars. This project, therefore, focuses on the salaryman archetype as hegemonic - its transformation amidst these changes and the socialising mechanisms that continue to legitimate unequal gender hierarchies. For data collection, a series of ethnographic interviews was held over a period of 12 months with university-educated, white-collar male employees from both Osaka and the Greater Tokyo Area. Findings suggest a modern salaryman ideal reflecting both continuities and shifts within white-collar employment. Whilst receptive to more contemporary workplace practices, the narratives of those interviewed remain imbued with logics supporting patterns of internal hegemony.
The regular/non-regular distinction emerged as the foremost variable for both material and discursive patterns of white-collar stratification, with variants of displacement for each social group. Despite the heightened valorisation of stable employment, regular workers articulated various concerns over a model of corporate masculinity seen to be incompatible with recent socioeconomic developments. Likewise, non-regular employees face detachment owing to a still-inflexible perception of their working masculinity as marginalized amidst economic precarity. In seeking to negotiate their respective challenges, those interviewed demonstrated an engagement with various concurrent social changes that would often accommodate, reinforce, or expand upon traditional role behaviours. Few of these narratives offered any notable transgression of said ideal, however, suggesting that within the spectre of white-collar employment in Japan for the near future, any substantive transformation of corporate masculinity remains dependent upon economic developments, less so the agency of those involved.
Keywords: gender ideologies, hegemonic masculinity, Japan, white-collar employment
Procedia PDF Downloads 125
267 The Impact of Trade on Stock Market Integration of Emerging Markets
Authors: Anna M. Pretorius
Abstract:
The emerging markets category for portfolio investment was introduced in 1986 in an attempt to promote capital market development in less developed countries. Investors traditionally diversified their portfolios by investing in different developed markets. However, high growth opportunities forced investors to consider emerging markets as well. Examples include the rapid growth of the “Asian Tigers” during the 1980s, growth in Latin America during the 1990s and the increased interest in emerging markets during the global financial crisis. As such, portfolio flows to emerging markets have increased substantially. In 2002, 7% of all equity allocations from advanced economies went to emerging markets; this increased to 20% in 2012. The stronger links between advanced and emerging markets led to increased synchronization of asset price movements. This increased level of stock market integration for emerging markets is confirmed by various empirical studies. Against the background of increased interest in emerging market assets and the increasing level of integration of emerging markets, this paper focuses on the determinants of stock market integration of emerging market countries. Various studies have linked the level of financial market integration with specific economic variables. These variables include: economic growth, local inflation, trade openness, local investment, budget surplus/deficit, market capitalization, domestic bank credit, the domestic institutional and legal environment, and world interest rates. The aim of this study is to empirically investigate to what extent trade-related determinants have an impact on stock market integration. The panel data sample includes data on 16 emerging market countries: Brazil, Chile, China, Colombia, Czech Republic, Hungary, India, Malaysia, Pakistan, Peru, Philippines, Poland, Russian Federation, South Africa, Thailand and Turkey for the period 1998-2011.
The integration variable for each emerging stock market is calculated as the explanatory power of a multi-factor model, where the factors are extracted from a large panel of global stock market returns. Trade-related explanatory variables include: exports as a percentage of GDP, imports as a percentage of GDP and total trade as a percentage of GDP. Other macroeconomic indicators – such as market capitalisation, the size of the budget deficit and the effectiveness of the regulation of the securities exchange – are included in the regressions as control variables. An initial analysis on a sample of developed stock markets could not identify any significant determinants of stock market integration. Thus the macroeconomic variables identified in the literature are much more significant in explaining stock market integration of emerging markets than of developed markets. The three trade variables are all statistically significant at the 5% level. The market capitalisation variable is also significant, while the regulation variable is only marginally significant. The global financial crisis has highlighted the urgency of better understanding the link between the financial and real sectors of the economy. This paper comes to the important finding that, apart from the level of market capitalisation (as a financial indicator), trade (representative of the real economy) is a significant determinant of stock market integration of countries not yet classified as developed economies.
Keywords: emerging markets, financial market integration, panel data, trade
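The integration measure described above can be illustrated with a small numerical sketch. This is not the authors' exact estimation procedure: the paper extracts several factors from a large panel of global returns, whereas the example below uses a single invented "world factor" so that the regression stays dependency-free. The integration score is the R-squared of a regression of local returns on the factor, i.e. the share of local return variance explained globally.

```python
import random

# Hedged sketch of an integration measure: R-squared of local returns
# regressed on a world factor. Single-factor OLS via the closed form;
# all return series below are simulated, not the study's data.
def integration_score(local, world):
    n = len(local)
    mx = sum(world) / n
    my = sum(local) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(world, local)) / \
           sum((x - mx) ** 2 for x in world)
    alpha = my - beta * mx
    ss_res = sum((y - (alpha + beta * x)) ** 2 for x, y in zip(world, local))
    ss_tot = sum((y - my) ** 2 for y in local)
    return 1.0 - ss_res / ss_tot  # share of variance explained, in [0, 1]

random.seed(1)
world = [random.gauss(0, 1) for _ in range(250)]              # world factor returns
integrated = [0.9 * w + random.gauss(0, 0.3) for w in world]  # tightly linked market
segmented = [random.gauss(0, 1) for _ in world]               # unrelated market
print(round(integration_score(integrated, world), 2),
      round(integration_score(segmented, world), 2))
```

A highly integrated market scores near one, a segmented one near zero, which is the intuition behind comparing the measure across the 16 countries in the panel.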
Procedia PDF Downloads 306
266 Deep Learning in Chest Computed Tomography to Differentiate COVID-19 from Influenza
Authors: Hongmei Wang, Ziyun Xiang, Ying Liu, Li Yu, Dongsheng Yue
Abstract:
Intro: COVID-19 (Coronavirus Disease 2019) has greatly changed the global economic, political and financial ecology. The mutation of the coronavirus in the UK in December 2020 has brought new panic to the world. Deep learning was performed on chest computed tomography (CT) scans of COVID-19 and influenza patients to describe their characteristics. The predominant feature of COVID-19 pneumonia was ground-glass opacification, followed by consolidation. Lesion density: most lesions appear as ground-glass shadows, and some lesions coexist with solid lesions. Lesion distribution: the lesions are located mainly on the dorsal side of the periphery of the lung, concentrated in the lower lobes, often close to the pleura. Other features are grid-like shadows in ground-glass lesions, thickening of diseased vessels, air bronchogram signs and halo signs. Severe disease involves both lungs in their entirety, showing white lung signs; air bronchograms can be seen, and there can be a small amount of pleural effusion in the bilateral chest cavity. At the same time, this year's flu season could be near its peak after surging throughout the United States for months. Chest CT of influenza infection is characterized by focal ground-glass shadows in the lungs, with or without patchy consolidation, and bronchiolar air bronchograms visible within the consolidation. There are patchy ground-glass shadows, consolidation, air bronchogram signs, mosaic lung perfusion, etc. The lesions are mostly fused and prominent near the hilum of both lungs. Grid-like shadows and small patchy ground-glass shadows are visible. Deep neural networks have great potential in image analysis and diagnosis that traditional machine learning algorithms do not. Method: Aiming at COVID-19 and influenza, the two major infectious diseases currently circulating in the world, chest CT scans of patients with the two diseases are classified and diagnosed using deep learning algorithms.
The residual network (ResNet) was proposed to solve the problem of network degradation when there are too many hidden layers in a deep neural network (DNN). ResNet is a milestone in the history of convolutional neural network (CNN) image models, as it solved the problem of training very deep CNN models; many visual tasks achieve excellent results by fine-tuning ResNet. Here, a pre-trained ResNet is introduced as a feature extractor, eliminating the need to design complex models and perform time-consuming training. Fastai is based on PyTorch, packages best practices for deep learning, and provides good default strategies for handling diagnosis problems. Based on Fastai's one-cycle training policy, the classification of lung CT scans for the two infectious diseases is realized, and a high recognition rate is obtained. Results: A deep learning model was developed to efficiently identify the differences between COVID-19 and influenza using chest CT.
Keywords: COVID-19, Fastai, influenza, transfer network
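The "residual" idea that makes very deep networks trainable can be shown with a toy sketch. This is a conceptual illustration only, not the study's pipeline (which fine-tunes a pretrained ResNet through fastai): a residual block computes F(x) + x, so when the learned residual F is near zero the block reduces to the identity, which is what avoids the degradation problem mentioned above.

```python
# Toy residual (skip-connection) block, pure Python for illustration.
# A real ResNet uses convolutions and batch norm; the structure is the same:
# output = activation(residual_branch(x) + x).
def relu(v):
    return [max(0.0, u) for u in v]

def matvec(W, v):
    return [sum(w * u for w, u in zip(row, v)) for row in W]

def residual_block(x, W1, W2):
    inner = relu(matvec(W1, x))                                   # residual branch F(x)
    return relu([f + xi for f, xi in zip(matvec(W2, inner), x)])  # F(x) + x

x = [1.0, 2.0, 3.0, 4.0]
zeros = [[0.0] * 4 for _ in range(4)]
# With zero weights the residual branch vanishes, so the block acts as the
# identity on non-negative inputs -- a deep stack of such blocks cannot
# degrade below the identity mapping.
print(residual_block(x, zeros, zeros))  # -> [1.0, 2.0, 3.0, 4.0]
```

This default-to-identity behaviour is why stacking many residual blocks trains stably where plain deep stacks degrade.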
Procedia PDF Downloads 142
265 A Novel Concept of Optical Immunosensor Based on High-Affinity Recombinant Protein Binders for Tailored Target-Specific Detection
Authors: Alena Semeradtova, Marcel Stofik, Lucie Mareckova, Petr Maly, Ondrej Stanek, Jan Maly
Abstract:
Recently, novel strategies based on so-called molecular evolution were shown to be effective for the production of various peptide ligand libraries with affinities to molecular targets of interest comparable to, or even better than, monoclonal antibodies. The major advantage of these peptide scaffolds is mainly their low molecular weight and simple structure. This study describes a new immunosensor based on high-affinity binding molecules, using a simple optical system with human serum albumin (HSA) as a model detection target. We present a comparison of two variants of recombinant binders based on the albumin-binding domain (ABD) of protein G, performed on a micropatterned glass chip. Binding domains may be tailored to any specific target of interest by molecular evolution. Micropatterned glass chips were prepared using UV photolithography on chromium-sputtered glass. The glass surface was modified with (3-aminopropyl)triethoxysilane and biotin-PEG-acid using EDC/NHS chemistry. Two variants of high-affinity binding molecules were used to detect the target molecule. The first variant is based on the ABD domain fused with a TolA chain; this molecule is biotinylated in vivo, and each molecule contains one biotin and one ABD domain. The second variant is based on a streptavidin molecule and contains four biotin-binding sites and four ABD domains. These high-affinity molecules were immobilized on the chip surface via biotin-streptavidin chemistry. To eliminate nonspecific binding, 1% bovine serum albumin (BSA) or 6% fetal bovine serum (FBS) was used in every step. For both variants, the range of measured concentrations of fluorescently labelled HSA was 0 – 30 µg/ml. As a control, we performed a simultaneous assay without high-affinity binding molecules. The fluorescent signal was measured using an Olympus IX 70 inverse fluorescence microscope with a CoolLED pE-4000 as the light source, the related filters, and a Retiga 2000R camera as the detector.
The fluorescent signal from the non-modified areas was subtracted from the signal of the fluorescent areas. Results were presented in graphs showing the dependence of the measured grayscale value on the log-scale HSA concentration. For the TolA variant, the limit of detection (LOD) of the optical immunosensor proposed in this study was calculated to be 0.20 µg/ml for HSA detection in 1% BSA and 0.24 µg/ml in 6% FBS. In the case of the streptavidin-based molecule, it was 0.04 µg/ml and 0.07 µg/ml, respectively. The dynamic range of the immunosensor could be estimated only for the TolA variant and was calculated to be 0.49 – 3.75 µg/ml and 0.73 – 1.88 µg/ml, respectively. In the case of the streptavidin-based variant, surface saturation was not reached even at a concentration of 480 µg/ml, so the upper limit of the dynamic range could not be estimated; the lower limit was calculated to be 0.14 µg/ml and 0.17 µg/ml, respectively. Based on the obtained results, it is clear that both variants are useful for creating the bio-recognition layer of immunosensors. For this particular system, the variant based on the streptavidin molecule is more useful for biosensing on planar glass surfaces: immunosensors based on this variant would exhibit a better limit of detection and a wider dynamic range.
Keywords: high affinity binding molecules, human serum albumin, optical immunosensor, protein G, UV-photolithography
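The LOD figures quoted above can be illustrated with a short calculation. Note the hedge: the abstract does not state which LOD convention was used; a common one is LOD = 3·sd(blank)/slope, with the slope taken from the linear part of the signal-versus-concentration calibration. All numbers below are invented for illustration and are not the study's data.

```python
import statistics

# Hedged sketch of a 3*sigma/slope limit-of-detection estimate from a
# fluorescence calibration curve. Invented readings, illustrative only.
def calibration_slope(conc, signal):
    """Ordinary least-squares slope of signal vs concentration."""
    mx = statistics.fmean(conc)
    my = statistics.fmean(signal)
    return sum((x - mx) * (y - my) for x, y in zip(conc, signal)) / \
           sum((x - mx) ** 2 for x in conc)

conc = [0.5, 1.0, 2.0, 4.0, 8.0]                # HSA, ug/ml (linear region)
signal = [110.0, 215.0, 430.0, 850.0, 1700.0]   # mean grayscale value
blanks = [4.8, 5.3, 5.1, 4.6, 5.2]              # replicate blank readings

slope = calibration_slope(conc, signal)
lod = 3.0 * statistics.stdev(blanks) / slope    # LOD = 3*sd_blank/slope
print(f"slope = {slope:.1f} a.u. per ug/ml, LOD = {lod:.3f} ug/ml")
```

The same arithmetic, applied per variant and per matrix (1% BSA vs 6% FBS), yields the kind of sub-µg/ml LOD values reported above.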
Procedia PDF Downloads 368
264 Design of an Automated Deep Learning Recurrent Neural Networks System Integrated with IoT for Anomaly Detection in Residential Electric Vehicle Charging in Smart Cities
Authors: Wanchalerm Patanacharoenwong, Panaya Sudta, Prachya Bumrungkun
Abstract:
The paper focuses on the development of a system that combines Internet of Things (IoT) technologies and deep learning algorithms for anomaly detection in residential Electric Vehicle (EV) charging in smart cities. With the increasing number of EVs, ensuring efficient and reliable charging systems has become crucial. The aim of this research is to develop an integrated IoT and deep learning system for detecting anomalies in residential EV charging and enhancing EV load profiling and event detection in smart cities. The approach utilizes IoT devices equipped with infrared cameras to collect thermal images, together with household EV charging profiles from the database of a Thai utility, subsequently transmitting this data to a cloud database for comprehensive analysis. The methodology includes the use of advanced deep learning techniques such as Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) algorithms, while feature-based Gaussian mixture models are used for EV load profiling and event detection. The research findings demonstrate the effectiveness of the developed system in detecting anomalies and critical profiles in EV charging behavior. The system provides timely alarms to users regarding potential issues and categorizes the severity of detected problems based on a health index for each charging device. The system also outperforms existing models in event detection accuracy. This research contributes to the field by showcasing the potential of integrating IoT and deep learning techniques in managing residential EV charging in smart cities. The system ensures operational safety and efficiency while also promoting sustainable energy management.
The data is collected using IoT devices equipped with infrared cameras and is stored in a cloud database, where it is analyzed using RNN, LSTM, and feature-based Gaussian mixture models. The approach covers both EV load profiling and event detection via the feature-based Gaussian mixture model; this comprehensive method aids in identifying unique power consumption patterns among EV owners and outperforms existing models in event detection accuracy. In summary, the research concludes that integrating IoT and deep learning techniques can effectively detect anomalies in residential EV charging and enhance EV load profiling and event detection accuracy. The developed system ensures operational safety and efficiency, contributing to sustainable energy management in smart cities.
Keywords: cloud computing framework, recurrent neural networks, long short-term memory, IoT, EV charging, smart grids
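The core anomaly-flagging step can be sketched in miniature. This is a greatly simplified stand-in for the paper's LSTM/Gaussian-mixture pipeline: a plain z-score threshold on charging-power readings, with all values invented for illustration. The idea is the same, i.e. readings that deviate strongly from the learned normal profile trigger an alarm.

```python
import statistics

# Simplified stand-in for the described anomaly detector: flag a charging
# power reading as anomalous when it lies more than z_thresh standard
# deviations from the series mean. Readings are invented, in kW.
def flag_anomalies(readings_kw, z_thresh=3.0):
    mean = statistics.fmean(readings_kw)
    sd = statistics.stdev(readings_kw)
    return [i for i, r in enumerate(readings_kw)
            if abs(r - mean) > z_thresh * sd]

# A stretch of steady home-charging readings with one overheating-like spike.
normal = [7.2, 7.1, 7.3, 7.0, 7.2, 7.1, 7.3, 7.2] * 10
readings = normal + [22.5] + normal
print(flag_anomalies(readings))  # -> [80], the index of the spike
```

A production system would, as the abstract describes, learn per-household profiles and grade severity into a health index rather than use one global threshold.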
Procedia PDF Downloads 64
263 “Laws Drifting Off While Artificial Intelligence Thriving” – A Comparative Study with Special Reference to Computer Science and Information Technology
Authors: Amarendar Reddy Addula
Abstract:
Definition of Artificial Intelligence: Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition, and machine vision. Artificial Intelligence (AI) is a foundational medium for digital business, according to a new report by Gartner. The last ten years represent an advance period in AI's development, spurred by a confluence of factors including the rise of big data, advancements in compute infrastructure, new machine learning techniques, the emergence of cloud computing, and the vibrant open-source ecosystem. AI is spreading to a broader set of use cases and users, and it is gaining popularity because this improves AI's versatility, effectiveness, and adaptability. Edge AI will enable digital moments by employing AI for real-time analytics closer to data sources. Gartner predicts that by 2025, more than 50% of all data analysis by deep neural networks will occur at the edge, up from less than 10% in 2021. Responsible AI is an umbrella term for making suitable business and ethical choices when adopting AI. It requires considering business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, accountability, safety, privacy, and regulatory compliance. Responsible AI is ever more significant amidst growing regulatory oversight, consumer expectations, and rising sustainability goals. Generative AI is the use of AI to generate new artifacts and produce innovative products. To date, generative AI efforts have concentrated on creating media content such as photorealistic images of people and objects, but it can also be used for code generation, creating synthetic data, and designing drugs and materials with specific properties. AI is the subject of a wide-ranging debate in which there is growing concern about its ethical and legal aspects.
Frequently, the two are mixed and confused despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, conceptual, relates to the idea and content of ethics; the second, functional, concerns its relationship with the law. Both set up models of social behaviour, but they are different in scope and nature. The juridical analysis is grounded in a non-formalistic scientific methodology. This means that it is essential to consider the nature and characteristics of AI as a step prior to the description of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence, and the question of the unitary or differentiated nature of AI. From that theoretical and practical base, the study of the legal system is carried out by examining its foundations, the governance model, and the regulatory bases. According to this analysis, throughout the work and in the conclusions, International Law is identified as the principal legal framework for the regulation of AI.
Keywords: artificial intelligence, ethics and human rights issues, laws, international laws
Procedia PDF Downloads 94
262 To Examine Perceptions and Associations of Shock Food Labelling and to Assess the Impact on Consumer Behaviour: A Quasi-Experimental Approach
Authors: Amy Heaps, Amy Burns, Una McMahon-Beattie
Abstract:
Shock and fear tactics have been used to encourage consumer behaviour change within the UK regarding lifestyle choices such as smoking and alcohol abuse, yet such measures have not been applied to food labels to encourage healthier purchasing decisions. Obesity levels are continuing to rise within the UK, despite efforts made by government and charitable bodies to encourage consumer behavioural changes that would have a positive influence on fat, salt, and sugar intake. We know that taking extreme measures to shock consumers into behavioural changes has worked previously; for example, the anti-smoking television adverts and new standardised cigarette and tobacco packaging have reduced the numbers of the UK adult population who smoke or encouraged those who are currently trying to quit. The USA has also introduced new front-of-pack labelling, which is clear, easy to read, and includes concise health warnings on products high in fat, salt, or sugar. This model has been successful, with consumers reducing purchases of products carrying these warning labels. Therefore, investigating whether shock labels would have an impact on UK consumer behaviour and purchasing decisions helps to fill a gap within this research field. This study aims to develop an understanding of consumers' initial responses to shock advertising, with an interest in the perceived long-term impact of shock advertising on consumer food purchasing decisions, behaviour, and attitudes, and achieves this through a mixed methodological approach with a sample of 25 participants aged between 22 and 60. Within this research, mock shock labels were developed, including a graphic image, a health warning, and get-help information. These labels were made for products (available within the UK) with large market shares which were high in either fat, salt, or sugar.
The results of online focus groups and mouse-tracking experiments helped to develop an understanding of consumers' initial responses to shock advertising, with an interest in the perceived long-term impact of shock advertising on consumer food purchasing decisions, behaviour, and attitudes. Preliminary results have shown that consumers believe that the use of graphic images, combined with a health warning, would encourage consumer behaviour change and influence their purchasing decisions regarding products which are high in fat, salt and sugar. Preliminary main findings show that graphic mock shock labels may have an impact on consumer behaviour and purchasing decisions, which will, in turn, encourage healthier lifestyles. Focus group results show that 72% of participants indicated that these shock labels would have an impact on their purchasing decisions. During the mouse-tracking trials, this increased to 80% of participants, showing that more exposure to shock labels may have a bigger impact on potential consumer behaviour and purchasing decision change. In conclusion, preliminary results indicate that graphic shock labels will impact consumer purchasing decisions. The findings allow for a deeper understanding of initial emotional responses to these graphic labels. However, more research is needed to test the longevity of these labels' effect on consumer purchasing decisions; this research exercise is demonstrably the foundation for future detailed work.
Keywords: consumer behavior, decision making, labelling legislation, purchasing decisions, shock advertising, shock labelling
Procedia PDF Downloads 67
261 Co-Culture with Murine Stromal Cells Enhances the In-vitro Expansion of Hematopoietic Stem Cells in Response to Low Concentrations of Trans-Resveratrol
Authors: Mariyah Poonawala, Selvan Ravindran, Anuradha Vaidya
Abstract:
Despite much progress in understanding the regulatory factors and cytokines that support the maturation of the various cell lineages of the hematopoietic system, the factors that govern the self-renewal and proliferation of hematopoietic stem cells (HSCs) remain a grey area of research. Hematopoietic stem cell transplantation (HSCT) has evolved over the years and gained tremendous importance in the treatment of both malignant and non-malignant diseases. However, factors such as graft rejection and multiple organ failure have challenged HSCT from time to time, underscoring the urgent need for the development of milder processes for successful hematopoietic transplantation. An emerging concept in the field of stem cell biology states that the interactions between the bone-marrow micro-environment and the hematopoietic stem and progenitor cells are essential for the regulation, maintenance, commitment and proliferation of stem cells. Understanding the role of mesenchymal stromal cells in modulating the functionality of HSCs is, therefore, an important area of research. Trans-resveratrol has been extensively studied for its various properties to combat and prevent cancer, diabetes, cardiovascular diseases, etc. The aim of the present study was to understand the effect of trans-resveratrol on HSCs using single- and co-culture systems. We used KG1a cells since they are a well-accepted hematopoietic stem cell model system. Our preliminary experiments showed that low concentrations of trans-resveratrol stimulated the HSCs to undergo proliferation, whereas high concentrations did not. We used a murine fibroblast cell line, M210B4, as a stromal feeder layer. On culturing the KG1a cells with M210B4 cells, we observed that the stimulatory and inhibitory effects of trans-resveratrol at low and high concentrations, respectively, were enhanced.
Our further experiments showed that low concentrations of trans-resveratrol reduced the generation of reactive oxygen species (ROS) and nitric oxide (NO), whereas high concentrations increased the oxidative stress in KG1a cells. We speculated that the oxidative stress was imposing the inhibitory effects at the high concentration, and this was confirmed by performing an apoptosis assay. Furthermore, cell cycle analysis and growth kinetics experiments provided evidence that a low concentration of trans-resveratrol reduced the doubling time of the cells. Our hypothesis is that at low concentrations of trans-resveratrol the cells get pushed into the G0/G1 phase and re-enter the cell cycle, resulting in their proliferation, whereas at high concentrations the cells are arrested at the G2/M phase or at cytokinesis and therefore undergo apoptosis. Liquid Chromatography-Quadrupole-Time of Flight-Mass Spectrometry (LC-Q-TOF MS) analyses indicated the presence of trans-resveratrol and its metabolite(s) in the supernatant of the co-cultured cells incubated with the high concentration of trans-resveratrol. We conjecture that the metabolites of trans-resveratrol may be responsible for the apoptosis observed at the high concentration. Our findings may shed light on the unsolved problems in the in vitro expansion of stem cells and may have implications for the ex vivo manipulation of HSCs for therapeutic purposes.
Keywords: co-culture system, hematopoietic micro-environment, KG1a cell line, M210B4 cell line, trans-resveratrol
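The doubling-time claim above rests on a standard calculation: assuming exponential growth between two cell counts N1 (at time t1) and N2 (at time t2), the doubling time is Td = (t2 − t1)·ln 2 / ln(N2/N1). The counts in the example below are illustrative, not the study's data.

```python
import math

# Standard exponential-growth doubling-time formula; illustrative numbers.
def doubling_time(n1, n2, t1_h, t2_h):
    """Doubling time in hours, assuming exponential growth between samples."""
    return (t2_h - t1_h) * math.log(2) / math.log(n2 / n1)

# e.g. a culture growing from 1e5 to 8e5 cells over 72 h has undergone
# three doublings, so Td is about 24 h.
print(doubling_time(1e5, 8e5, 0.0, 72.0))
```

A shorter Td at low trans-resveratrol concentration, computed this way from the growth-kinetics counts, is what the text above reports as a reduced doubling time.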
Procedia PDF Downloads 258
260 Effect of Non-Thermal Plasma, Chitosan and Polymyxin B on Quorum Sensing Activity and Biofilm of Pseudomonas aeruginosa
Authors: Alena Cejkova, Martina Paldrychova, Jana Michailidu, Olga Matatkova, Jan Masak
Abstract:
The increasing resistance of pathogenic microorganisms to many antibiotics poses a serious threat to the treatment of infectious diseases and to the disinfection of medical instruments. It should be added that the resistance of microbial populations growing in biofilms is often up to 1000 times higher compared to planktonic cells. Biofilm formation in a number of microorganisms is largely influenced by the quorum sensing regulatory mechanism. Finding external factors, such as natural substances or physical processes, that can interfere effectively with quorum sensing signal molecules should reduce the ability of the cell population to form biofilm and increase the effectiveness of antibiotics. The present work is devoted to the effect of chitosan, as a representative of natural substances with anti-biofilm activity, and non-thermal plasma (NTP), alone or in combination with polymyxin B, on biofilm formation of Pseudomonas aeruginosa. Particular attention was paid to the influence of these agents on the level of quorum sensing signal molecules (acyl-homoserine lactones) during planktonic and biofilm cultivations. Opportunistic pathogenic strains of Pseudomonas aeruginosa (DBM 3081, DBM 3777, ATCC 10145, ATCC 15442) were used as model microorganisms. Cultivations of planktonic and biofilm populations in 96-well microtiter plates on a horizontal shaker were used for the determination of the antibiotic and anti-biofilm activity of chitosan and polymyxin B. Biofilm-growing cells on a titanium alloy, which is used for the preparation of joint replacements, were exposed to non-thermal plasma generated by a cometary corona with a metallic grid for 15 and 30 minutes. Cultivation then continued in fresh LB medium with or without chitosan or polymyxin B for the next 24 h. Biofilms were quantified by the crystal violet assay.
The metabolic activity of the cells in the biofilm was measured using the MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium bromide) colorimetric test, based on the reduction of MTT into formazan by the dehydrogenase system of living cells. The activity of N-acyl homoserine lactones (AHLs), compounds involved in the regulation of biofilm formation, was determined using an Agrobacterium tumefaciens strain harboring a traG::lacZ/traR reporter gene responsive to AHLs. The experiments showed that both chitosan and non-thermal plasma reduce the AHL level and thus biofilm formation and stability. The effectiveness of both agents was somewhat strain dependent. During the eradication of P. aeruginosa DBM 3081 biofilm on titanium alloy induced by chitosan (45 mg/l), there was an 80% decrease in AHLs. Applying chitosan or NTP alone to the P. aeruginosa DBM 3777 biofilm did not cause a significant decrease in AHLs; however, the combination of both (chitosan at 55 mg/l and NTP for 30 min) resulted in a 70% decrease in AHLs. The combined application of NTP and polymyxin B allowed the antibiotic concentration to be reduced while achieving the same level of AHL inhibition in P. aeruginosa ATCC 15442. The results showed that non-thermal plasma and chitosan have considerable potential for the eradication of highly resistant P. aeruginosa biofilms, for example on medical instruments or joint implants.
Keywords: anti-biofilm activity, chitosan, non-thermal plasma, opportunistic pathogens
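The percent-decrease figures reported above follow a simple blank-corrected relative-change calculation, whether applied to AHL reporter activity or to crystal-violet biofilm absorbance. A hedged sketch with invented absorbance values (not the study's measurements):

```python
# Sketch of the standard quantification behind "% decrease" figures:
# inhibition is the relative drop in a blank-corrected reading versus the
# untreated control. Absorbance values below are invented for illustration.
def percent_inhibition(reading_control, reading_treated, reading_blank=0.0):
    control = reading_control - reading_blank
    treated = reading_treated - reading_blank
    return 100.0 * (control - treated) / control

# e.g. crystal violet OD570: untreated control 1.25, treated 0.29, blank 0.05
print(round(percent_inhibition(1.25, 0.29, 0.05), 1))  # -> 80.0
```

Applied to the AHL reporter signal, the same formula yields the 80% and 70% AHL decreases described in the text.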
Procedia PDF Downloads 200
259 Community Engagement: Experience from the SIREN Study in Sub-Saharan Africa
Authors: Arti Singh, Carolyn Jenkins, Oyedunni S. Arulogun, Mayowa O. Owolabi, Fred S. Sarfo, Bruce Ovbiagele, Enzinne Sylvia
Abstract:
Background: Stroke, the leading cause of adult-onset disability and the second leading cause of death, is a major public health concern, particularly pertinent in Sub-Saharan Africa (SSA), where nearly 80% of all global stroke mortalities occur. The Stroke Investigative Research and Education Network (SIREN) seeks to comprehensively characterize the genomic, sociocultural, economic, and behavioral risk factors for stroke and to build effective research teams to address and decrease the burden of stroke and other non-communicable diseases in SSA. One of the first steps towards this goal was to effectively engage the communities that suffer the high burden of disease in SSA. This study describes how the SIREN project engaged six sites in Ghana and Nigeria over the past three years, describing the community engagement activities that have arisen since inception. Aim: The aim of community engagement (CE) within SIREN is to elicit information about knowledge, attitudes, beliefs, and practices (KABP) regarding stroke and its risk factors from individuals of African ancestry in SSA, and to educate the community about stroke and ways to decrease disability and death from stroke using socioculturally appropriate messaging and messengers. Methods: Community Advisory Boards (CABs), Focus Group Discussions (FGDs) and community outreach programs. Results: 27 FGDs with 168 participants, including community heads, religious leaders, health professionals and individuals with stroke, among others, were conducted, and over 60 CE outreaches have been conducted within the SIREN performance sites. Over 5,900 individuals have received education on cardiovascular risk factors and about 5,000 have been screened for cardiovascular risk factors during the outreaches. FGDs and outreach programs indicate that knowledge of stroke, its risk factors and follow-up evidence-based care is limited and often comes late.
Other findings include: 1) Most recognize hypertension as a major risk factor for stroke. 2) About 50% report that stroke is hereditary, and about 20% do not know which organs are affected by stroke. 3) More than 95% are willing to participate in genetic testing research, and about 85% are willing to pay for testing and would recommend the test to others. 4) Almost all indicated that genetic testing could help health providers better treat stroke and help scientists better understand its causes. The CABs provided stakeholder input into SIREN activities and facilitated collaborations among investigators, community members, and stakeholders. Conclusion: The CE core within SIREN is a first-of-its-kind public outreach engagement initiative to evaluate and address perceptions about stroke and genomics among patients, caregivers, and local leaders in SSA, and it has implications as a model for assessment in other high-stroke-risk populations. SIREN's CE program uses best practices to build capacity for community-engaged research, accelerate the integration of research findings into practice, and strengthen dynamic community-academic partnerships within our communities. CE has had several major successes over the past three years, including our multi-site collaboration examining the KABP about stroke (symptoms, risk factors, burden) and genetic testing across SSA.
Keywords: community advisory board, community engagement, focus groups, outreach, SSA, stroke
Procedia PDF Downloads 428
258 Investigating the Thermal Comfort Properties of Mohair Fabrics
Authors: Adine Gericke, Jiri Militky, Mohanapriya Venkataraman
Abstract:
Mohair, obtained from the Angora goat, is a luxury fiber recognized as one of the best-quality natural fibers. Expanding the use of mohair into technical and functional textile products necessitates a better understanding of how the use of mohair in fabrics affects their thermo-physiological comfort properties. Despite its popularity, very little information is available quantifying the thermal and moisture management properties of mohair fabrics. This study investigated the effect of fibrous matter composition and fabric structural parameters on conductive and convective heat transfer to gain more information on the thermal comfort properties of mohair fabrics. Dry heat transfer through textiles may involve conduction through the fibrous phase, radiation through fabric interstices, and convection of air within the structure. Factors that play a major role in heat transfer by conduction are fabric areal density (g/m²) and derived quantities such as cover factor and porosity. Convective heat transfer through fabrics occurs in environmental conditions where there is wind flow or the object is moving (e.g., running or walking). The thermal comfort properties of mohair fibers were objectively evaluated first in comparison with other textile fibers and second in a variety of fabric structures. Two sample sets were developed for this purpose, with fiber content, yarn structure, and fabric design as the main variables. SEM and microscopic images were obtained to closely examine the physical structures of the fibers and fabrics. Thermal comfort properties such as thermal resistance and thermal conductivity, as well as fabric thickness, were measured on the well-known Alambeta test instrument. Clothing insulation (clo) was calculated from the above.
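The relationship between the measured quantities and the derived clo value can be sketched as follows. This is a minimal illustration of the standard conversions (R = thickness / conductivity, and 1 clo = 0.155 m²·K/W); the numeric values are hypothetical examples, not measurements from this study.

```python
# Illustrative sketch of the standard thermal-comfort conversions.
# Example inputs are hypothetical, not data from the Alambeta measurements.

def thermal_resistance(thickness_m: float, conductivity_w_mk: float) -> float:
    """Thermal resistance R (m^2*K/W) of a fabric layer: R = h / lambda."""
    return thickness_m / conductivity_w_mk

def to_clo(resistance_m2kw: float) -> float:
    """Convert thermal resistance to clothing insulation; 1 clo = 0.155 m^2*K/W."""
    return resistance_m2kw / 0.155

# Hypothetical fabric: 1.2 mm thick, effective conductivity 0.045 W/(m*K)
r = thermal_resistance(0.0012, 0.045)
print(f"R = {r:.4f} m^2*K/W, insulation = {to_clo(r):.3f} clo")
```

Note that the effective conductivity here is that of the fiber-air composite, which is why trapped air (conductivity ≈ 0.026 W/(m·K), far below that of the fibers) dominates the result, as discussed below.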
The thermal properties of fabrics under heat convection were evaluated using a laboratory model device developed at the Technical University of Liberec (referred to as the TP2 instrument). The effects of the different variables on fabric thermal comfort properties were analyzed statistically using TIBCO Statistica software. The results showed that fabric structural properties, specifically sample thickness, played a significant role in determining the thermal comfort properties of the fabrics tested. Regarding thermal resistance related to conductive heat flow, the effect of fiber type was not always statistically significant, probably as a result of the amount of trapped air within the fabric structure. The very low thermal conductivity of air, compared to that of the fibers, had a significant influence on the total conductivity and thermal resistance of the samples. This was confirmed by the high correlation of these factors with sample thickness. Regarding convective heat flow, the most important factor influencing the ability of the fabric to allow dry heat to move through the structure was again fabric thickness. However, it would be wrong to disregard entirely the effect of fiber composition on the thermal resistance of textile fabrics. In this study, the samples containing mohair or mohair/wool were consistently thicker than the others, even though weaving parameters were kept constant. This can be ascribed to the physical properties of the mohair fibers, which make them exceptionally effective at trapping air among fibers (in a yarn) as well as among yarns (inside a fabric structure). The thicker structures trap more air to provide higher thermal insulation, but also prevent the free flow of air that enables thermal convection.
Keywords: mohair fabrics, convective heat transfer, thermal comfort properties, thermal resistance
Procedia PDF Downloads 142