Search results for: culture and local wisdom knowledge
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14969

209 Becoming Vegan: The Theory of Planned Behavior and the Moderating Effect of Gender

Authors: Estela Díaz

Abstract:

This article aims to make three contributions. First, to build on the ethical decision-making literature by exploring factors that influence the intention to adopt veganism. Second, to study the superiority of extended models of the Theory of Planned Behavior (TPB) for understanding the process involved in forming the intention to adopt veganism. Third, to analyze the moderating effect of gender on TPB, given that attitudes and behavior towards animals are gender-sensitive. No study, to our knowledge, has examined these questions. Veganism is not a diet but a political and moral stand that excludes, for moral reasons, the use of animals. Although there is growing interest in studying veganism, it continues to be overlooked in empirical research, especially within the domain of social psychology. TPB has been widely used to study a broad range of human behaviors, including moral issues. Nonetheless, TPB has rarely been applied to examine ethical decisions about animals and, even less, to veganism. Hence, the validity of TPB in predicting the intention to adopt veganism remains an open question. A total of 476 non-vegan Spanish university students (55.6% female; mean age 23.26 years, SD = 6.1) responded to an online or pencil-and-paper self-report questionnaire based on previous studies. The extended TPB models incorporated two background factors: ‘general attitudes towards humanlike attributes ascribed to animals’ (AHA) (capacity to reason, feel emotions, and suffer; moral consideration; and affect towards animals) and ‘general attitudes towards 11 uses of animals’ (AUA). SPSS 22 and SmartPLS 3.0 were used for statistical analyses. The study constructed a second-order reflective-formative model and took a multi-group analysis (MGA) approach to study gender effects. Six models of TPB (the standard model and five competing models) were tested. No a priori hypotheses were formulated. The results gave partial support to TPB.
Attitudes (ATTV) (β = .207, p < .001), subjective norms (SNV) (β = .323, p < .001), and perceived behavioral control (PBC) (β = .149, p < .001) had significant direct effects on intention (INTV). This model accounted for 27.9% of the variance in intention (R²Adj = .275) and had small predictive relevance (Q² = .261). However, findings from this study reveal that, contrary to what TPB generally proposes, the effect of the background factors on intention was not fully mediated by the proximal constructs of intention. For instance, in the final model (Model #6), both factors had significant multiple indirect effects on INTV (β = .074, 95% CI = .030, .126 [AHA:INTV]; β = .101, 95% CI = .055, .155 [AUA:INTV]) and significant direct effects on INTV (β = .175, p < .001 [AHA:INTV]; β = .100, p = .003 [AUA:INTV]). Furthermore, adding direct paths from the background factors to intention improved the explained variance in intention (R² = .324; R²Adj = .317) and the predictive relevance (Q² = .300) over the base model. This supports the existing literature on the superiority of enhanced TPB models for predicting ethical issues, which suggests that moral behavior may add complexity to decision-making. Regarding the gender effect, MGA showed that gender moderated only the influence of AHA on ATTV (e.g., βWomen − βMen = .296, p < .001 [Model #6]). However, other observed gender differences (e.g., the explained variance of the model for intention was always higher for men than for women; for instance, R²Women = .298, R²Men = .394 [Model #6]) deserve further consideration, especially for developing more effective communication strategies.
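The partial-mediation finding reported above (significant indirect and direct effects of a background factor on intention) can be illustrated with a percentile-bootstrap test of an indirect effect. The sketch below is a minimal Python illustration on synthetic data, not the study's SmartPLS analysis; the path coefficients and variable names are assumed for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 476  # sample size taken from the abstract; the data here are synthetic

# Hypothetical model: background factor -> attitude -> intention,
# plus a direct path, mirroring the partial-mediation pattern.
background = rng.normal(size=n)
attitude = 0.5 * background + rng.normal(scale=0.8, size=n)
intention = 0.3 * attitude + 0.2 * background + rng.normal(scale=0.8, size=n)

def path(y, X):
    """OLS slope coefficients (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# a-path, then b- and c'-paths from the two-predictor model
a = path(attitude, background)[0]
b, c_prime = path(intention, np.column_stack([attitude, background]))

# Percentile bootstrap CI for the indirect effect a*b
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    a_i = path(attitude[idx], background[idx])[0]
    b_i = path(intention[idx], np.column_stack([attitude[idx], background[idx]]))[0]
    boot.append(a_i * b_i)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {a*b:.3f}, 95% CI = ({lo:.3f}, {hi:.3f}), direct = {c_prime:.3f}")
```

A CI excluding zero alongside a significant direct path is the partial-mediation signature the abstract describes.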

Keywords: veganism, Theory of Planned Behavior, background factors, gender moderation

Procedia PDF Downloads 347
208 PolyScan: Comprehending Human Polymicrobial Infections for Vector-Borne Disease Diagnostic Purposes

Authors: Kunal Garg, Louise Theusen Hermansan, Kanoktip Puttaraska, Oliver Hendricks, Heidi Pirttinen, Leona Gilbert

Abstract:

The Germ Theory (one infectious determinant equals one disease) has unarguably advanced our capability to diagnose and treat infectious diseases over the years. Nevertheless, the advent of technology, climate change, and volatile human behavior have brought about drastic changes in our environment, leading us to question the relevance of the Germ Theory today: will vector-borne disease (VBD) sufferers produce multiple immune responses when tested for multiple microbes? VBD patients producing multiple immune responses to different microbes would clearly suggest human polymicrobial infections (HPI). Current diagnostic tools are poorly equipped to incorporate the research findings that would aid in diagnosing patients with polymicrobial infections. This shortcoming has caused misdiagnosis at very high rates, consequently diminishing patients’ quality of life through inadequate treatment. Equipped with state-of-the-art scientific knowledge, PolyScan intends to address the pitfalls in current VBD diagnostics. PolyScan is a multiplex and multifunctional enzyme-linked immunosorbent assay (ELISA) platform that can test for numerous VBD microbes and allows simultaneous screening for multiple types of antibodies. To validate PolyScan, Lyme borreliosis (LB) and spondyloarthritis (SpA) patient groups (n = 54 each) were tested for Borrelia burgdorferi, Borrelia burgdorferi round body (RB), Borrelia afzelii, Borrelia garinii, and Ehrlichia chaffeensis against IgM and IgG antibodies. LB serum samples were obtained from Germany and SpA serum samples from Denmark under the relevant ethical approvals. The SpA group represented the chronic LB stage because reactive arthritis (an SpA subtype) in the form of Lyme arthritis is linked to LB. It was hypothesized that patients from both groups would produce multiple immune responses, which would in turn suggest HPI.
It was also hypothesized that the proportion of multiple immune responses in the SpA patient group would be significantly larger than in the LB patient group for both antibodies. It was observed that 26% of LB patients and 57% of SpA patients produced multiple immune responses, in contrast to 33% of LB patients and 30% of SpA patients who produced solitary immune responses, when tested against IgM. Similarly, 52% of LB patients and an astounding 73% of SpA patients produced multiple immune responses, in contrast to 30% of LB patients and 8% of SpA patients who produced solitary immune responses, when tested against IgG. Interestingly, IgM immune dysfunction was also recorded in both patient groups. Atypically, 6% of the 18% of LB patients who were unresponsive with the IgG antibody produced multiple immune responses with the IgM antibody. Similarly, 12% of the 19% of SpA patients who were unresponsive with the IgG antibody produced multiple immune responses with the IgM antibody. Thus, the results not only supported the hypotheses but also suggested that IgM may atypically persist longer than IgG. The PolyScan concept will aid clinicians in detecting early, persistent, late, polymicrobial, and immune-dysfunction conditions linked to different VBDs. PolyScan provides a paradigm shift for the VBD diagnostic industry that will drastically shorten patients’ time to receive adequate treatment.
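The group difference in multiple-response proportions reported above can be checked with a standard two-proportion z-test. The following Python sketch uses counts reconstructed from the reported IgM percentages (26% and 57% of n = 54 per group); it is an illustration of the statistical comparison, not part of the study's own analysis.

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions, pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Approximate counts from the reported percentages (n = 54 per group):
# IgM multiple responses in ~14 of 54 LB patients vs ~31 of 54 SpA patients
z, p = two_proportion_z(14, 54, 31, 54)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A clearly significant result here is consistent with the abstract's hypothesis that SpA (chronic-stage) patients show multiple immune responses more often than LB patients.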

Keywords: diagnostics, immune dysfunction, polymicrobial, TICK-TAG

Procedia PDF Downloads 327
207 Evaluation of Alternative Approaches for Additional Damping in Dynamic Calculations of Railway Bridges under High-Speed Traffic

Authors: Lara Bettinelli, Bernhard Glatz, Josef Fink

Abstract:

Planning engineers and researchers use various calculation models with different levels of complexity, calculation efficiency, and accuracy in dynamic calculations of railway bridges under high-speed traffic. When choosing a vehicle model to depict the dynamic loading on the bridge structure caused by passing high-speed trains, different goals are pursued: on the one hand, the selected vehicle models should allow the calculation of a bridge’s vibrations as realistically as possible; on the other hand, the computational efficiency and manageability of the models should preferably be high to enable a wide range of applications. The commonly adopted and straightforward vehicle model is the moving load model (MLM), which simplifies the train to a sequence of static axle loads moving at a constant speed over the structure. However, the MLM can significantly overestimate the structure’s vibrations, especially when resonance events occur. More complex vehicle models, which depict the train as a system of oscillating and coupled masses, can reproduce the interaction dynamics between the vehicle and the bridge superstructure to some extent and enable the calculation of more realistic bridge accelerations. At the same time, such multi-body models require significantly greater processing capacity and precise knowledge of various vehicle properties. The European standards allow the so-called additional damping method to be applied when simple load models, such as the MLM, are used in dynamic calculations. An additional damping factor depending on the bridge span, which is intended to account for the vibration-reducing benefits of the vehicle-bridge interaction, is assigned to the supporting structure in the calculations.
However, numerous studies show that when the current standard specifications are applied, the calculated bridge accelerations are in many cases still too high compared to the measured ones, while in other cases they are not on the safe side. A proposal to calculate the additional damping based on extensive dynamic calculations for a parametric field of simply supported bridges with a ballasted track was developed to address this issue. In this contribution, several different approaches to determining the additional damping of the supporting structure, considering the vehicle-bridge interaction when using the MLM, are compared with one another. Besides the standard specifications, this includes the approach mentioned above and two recently published alternative formulations derived from analytical approaches. For a catalogue of 65 existing bridges in Austria of steel, concrete, or composite construction, calculations are carried out with the MLM for two different high-speed trains and the different approaches for additional damping. The results are compared with those obtained by applying a more sophisticated multi-body model of the trains. The evaluation and comparison of the results allow the benefits of the different calculation concepts for additional damping to be assessed with regard to their accuracy and possible applications. The evaluation shows that by applying one of the recently published redesigned additional damping methods, the calculation results reflect the influence of the vehicle-bridge interaction on the design-relevant structural accelerations considerably more reliably than the normative specifications do.
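The moving load model discussed above can be sketched in a few lines: the first-mode response of a simply supported bridge to a sequence of constant axle loads, where increasing the modal damping ratio plays the role of the additional damping method. All parameter values below (span, mass, stiffness, train geometry, speed) are illustrative assumptions, not figures from the study.

```python
import numpy as np

# Minimal moving load model (MLM): first bending mode of a simply
# supported beam under a train of constant axle loads. Illustrative only.
L = 30.0        # span (m)
m = 15000.0     # mass per unit length (kg/m)
EI = 7.0e10     # bending stiffness (N m^2)
zeta = 0.015    # modal damping ratio (the additional damping method raises this)
P = 170e3       # axle load (N)
axles = np.arange(0, 8) * 26.0   # axle positions of an idealised train (m)
v = 70.0        # train speed (m/s)

w1 = (np.pi / L) ** 2 * np.sqrt(EI / m)   # first natural circular frequency

def modal_force(t):
    """Generalised first-mode force from all axles currently on the span."""
    f = 0.0
    for d in axles:
        x = v * t - d
        if 0.0 <= x <= L:
            f += P * np.sin(np.pi * x / L)
    return 2.0 * f / (m * L)

# Semi-implicit Euler stepping of q'' + 2*zeta*w1*q' + w1^2 * q = F(t)
dt = 1e-3
T = (axles[-1] + L) / v          # time for the whole train to cross
q, qd, amax = 0.0, 0.0, 0.0
for t in np.arange(0.0, T, dt):
    qdd = modal_force(t) - 2 * zeta * w1 * qd - w1 ** 2 * q
    qd += qdd * dt
    q += qd * dt
    amax = max(amax, abs(qdd))   # peak modal acceleration
f1 = w1 / (2 * np.pi)
print(f"f1 = {f1:.2f} Hz, peak modal acceleration ~ {amax:.2f} m/s^2")
```

Re-running the loop with a larger `zeta` shows directly how additional damping reduces the design-relevant acceleration, which is the effect the compared methods quantify.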

Keywords: Additional Damping Method, Bridge Dynamics, High-Speed Railway Traffic, Vehicle-Bridge-Interaction

Procedia PDF Downloads 161
206 Refurbishment Methods to Enhance Energy Efficiency of Brick Veneer Residential Buildings in Victoria

Authors: Hamid Reza Tabatabaiefar, Bita Mansoury, Mohammad Javad Khadivi Zand

Abstract:

The current energy and climate change impacts of the residential building sector in Australia are significant, and the Australian Government has therefore introduced more stringent regulations to improve building energy efficiency. In 2006, the Australian residential building sector consumed about 11% (around 440 petajoules) of the total primary energy, resulting in total greenhouse gas emissions of 9.65 million tonnes CO2-eq. Gas and electricity consumption in residential dwellings accounted for 30% and 52%, respectively, of the total primary energy used by this sector. Around 40 percent of the total energy consumption of Australian buildings goes to heating and cooling, due to the low thermal performance of the buildings. Thermal performance determines the amount of energy used for heating and cooling and thus profoundly influences energy efficiency. Employing sustainable design principles and using construction materials effectively can play a crucial role in improving the thermal performance of new and existing buildings. Even though awareness has been raised, the design phase of refurbishment projects is often problematic. One issue concerning the refurbishment of residential buildings is the consumer market, where most work consists of moderate refurbishment jobs, often without the assistance of an architect and partly without a building permit. This individual and often fragmented approach results in a lack of efficiency. Most importantly, the decisions taken in the early stages of the design determine the final result; however, the assessment of environmental performance only happens at the end of the design process, as a reflection of the design outcome. Finally, studies have identified a lack of knowledge, experience, and best-practice examples as barriers in refurbishment projects.
In the context of sustainable development and the need to reduce energy demand, refurbishing the ageing residential building stock constitutes a necessary action. Not only does it provide huge potential for energy savings, but it is also economically and socially relevant. Although the advantages have been identified, the existing guidelines come in the form of general suggestions that fail to address the diversity of individual projects. As a result, there is a strong need to develop guidelines for the optimised retrofitting of existing residential buildings in order to improve their energy performance. The current study investigates the effectiveness of different energy retrofitting techniques and examines the impact of employing those methods on the energy consumption of residential brick veneer buildings in Victoria (Australia). Proposing different remedial solutions for improving the energy performance of residential brick veneer buildings, the simulation stage carried out annual energy usage analyses to determine the heating and cooling energy consumption of the buildings for the different proposed retrofitting techniques. The results of the different retrofitting methods were then examined and compared in order to identify the most efficient and cost-effective remedial solution with respect to the climate conditions in Victoria and the construction materials of the studied benchmark building.
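The link between envelope thermal performance and heating energy that the abstract describes can be illustrated with the simple steady-state degree-day method. The sketch below compares an uninsulated brick veneer wall with an insulated one; the U-values, wall area, and degree-day figure are assumptions for illustration, not outputs of the study's annual simulations.

```python
# Steady-state estimate of annual conduction heat loss through a wall:
# Q [kWh/yr] = U [W/m^2.K] * A [m^2] * HDD [K.day] * 24 [h/day] / 1000.
def annual_heating_kwh(u_value, area_m2, hdd):
    return u_value * area_m2 * hdd * 24 / 1000

WALL_AREA = 120.0   # m^2 of external brick veneer wall (assumed)
HDD = 1800.0        # approximate heating degree-days for Melbourne (assumed)

before = annual_heating_kwh(u_value=1.8, area_m2=WALL_AREA, hdd=HDD)  # uninsulated cavity
after = annual_heating_kwh(u_value=0.4, area_m2=WALL_AREA, hdd=HDD)   # bulk insulation added
saving = 100 * (before - after) / before
print(f"wall heat loss: {before:.0f} -> {after:.0f} kWh/yr ({saving:.0f}% lower)")
```

This is only a single-element, heating-only view; the study's simulations capture whole-building heating and cooling, which is why the retrofit options must be compared on annual simulated consumption rather than on a hand calculation like this.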

Keywords: brick veneer residential buildings, building energy efficiency, climate change impacts, cost effective remedial solution, energy performance, sustainable design principles

Procedia PDF Downloads 291
205 Quantitative Analysis of Contract Variations Impact on Infrastructure Project Performance

Authors: Soheila Sadeghi

Abstract:

Infrastructure projects often encounter contract variations that can deviate significantly from the original tender estimates, leading to cost overruns, schedule delays, and financial implications. This research quantitatively assesses the impact of contract variations on project performance through an in-depth analysis of a comprehensive dataset from the Regional Airport Car Park project. The dataset includes tender budget, contract quantities, rates, claims, and revenue data, providing a unique opportunity to investigate the effects of variations on project outcomes. The study focuses on 21 specific variations identified in the dataset, which represent changes or additions to the project scope. The research methodology involves establishing a baseline for the project's planned cost and scope by examining the tender budget and contract quantities. Each variation is then analyzed in detail, comparing the actual quantities and rates against the tender estimates to determine its impact on project cost and schedule. The claims data are used to track the progress of work and identify deviations from the planned schedule. Statistical analysis is performed in R: time series analysis is applied to the claims data to track progress and detect departures from the planned schedule, and regression analysis is used to investigate the relationship between variations and project performance indicators, such as cost overruns and schedule delays. The findings highlight the significance of effective variation management in construction projects: variations can have a substantial impact on project cost, schedule, and financial outcomes.
The study identifies specific variations that had the most significant influence on the Regional Airport Car Park project's performance, such as PV03 (additional fill, road base gravel, spray seal, and asphalt), PV06 (extension to the commercial car park), and PV07 (additional box out and general fill). These variations contributed to increased costs, schedule delays, and changes in the project's revenue profile. The study also examines the effectiveness of project management practices in managing variations and mitigating their impact. The research suggests that proactive risk management, thorough scope definition, and effective communication among project stakeholders can help minimize the negative consequences of variations. The findings emphasize the importance of establishing clear procedures for identifying, assessing, and managing variations throughout the project lifecycle. The outcomes of this research contribute to the body of knowledge in construction project management by demonstrating the value of analyzing tender, contract, claims, and revenue data in variation impact assessment. However, the research acknowledges the limitations imposed by the dataset, particularly the absence of detailed contract and tender documents. This constraint restricts the depth of analysis possible in investigating the root causes and full extent of variations' impact on the project. Future research could build upon this study by incorporating more comprehensive data sources to further explore the dynamics of variations in construction projects.
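The regression step described above (relating variation size to performance indicators such as delay) can be sketched as follows. The study performed its analysis in R; this Python equivalent uses invented placeholder figures, not the project's actual variation data.

```python
import numpy as np

# Placeholder observations: each row is
# (variation value as % of contract sum, resulting schedule delay in days).
data = np.array([
    [1.2, 3], [4.5, 9], [0.8, 2], [6.1, 15], [2.3, 5],
    [8.4, 19], [3.0, 8], [5.2, 11], [0.5, 1], [7.3, 18],
])
x, y = data[:, 0], data[:, 1]

# Least-squares fit: delay = b0 + b1 * variation_size
b1, b0 = np.polyfit(x, y, 1)
r = np.corrcoef(x, y)[0, 1]
print(f"delay ~ {b0:.2f} + {b1:.2f} * variation%, r = {r:.2f}")
```

A strong positive slope and correlation of this kind is what would support the abstract's claim that larger variations (such as PV03, PV06, and PV07) drive cost and schedule impacts.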

Keywords: contract variation impact, quantitative analysis, project performance, claims analysis

Procedia PDF Downloads 40
204 Worldwide GIS Based Earthquake Information System/Alarming System for Microzonation/Liquefaction and It’s Application for Infrastructure Development

Authors: Rajinder Kumar Gupta, Rajni Kant Agrawal, Jaganniwas

Abstract:

One of the most frightening phenomena of nature is the occurrence of earthquakes, which have terrible and disastrous effects. Many earthquakes occur every day worldwide, so there is a need for knowledge of the trends in earthquake occurrence worldwide. The recording and interpretation of data obtained from the worldwide system of seismological stations has made this possible. From the analysis of recorded earthquake data, the earthquake parameters and source parameters can be computed and earthquake catalogues can be prepared. These catalogues provide information on the origin time, epicenter location (in terms of latitude and longitude), focal depth, magnitude, and other details of the recorded earthquakes, and they are used for seismic hazard estimation. Manual interpretation and analysis of these data are tedious and time-consuming. A geographical information system (GIS) is a computer-based system designed to store, analyze, and display geographic information. Integrated GIS technology permits rapid evaluation of complex inventory databases under a variety of earthquake scenarios and allows the user to view results interactively, almost immediately. GIS technology provides a powerful tool for displaying outputs and lets users see the graphical distribution of the impacts of different earthquake scenarios and assumptions. In the present study, an endeavor has been made to compile earthquake data for the whole world in Visual Basic on the ArcGIS platform so that it can easily be used for further analysis by earthquake engineers. The basic data on the time of occurrence, location, and size of earthquakes have been compiled for querying based on various parameters. A preliminary analysis tool is also provided in the user interface to interpret earthquake recurrence in a region.
The user interface also includes the seismic hazard information already worked out under the GSHAP program; the seismic hazard, in terms of probability of exceedance for definite return periods, is provided for the whole world. The seismic zones of the Indian region are included in the user interface from IS 1893-2002, the code on earthquake-resistant design of buildings. City-wise satellite images have been inserted into the map, and based on actual data the following information can be extracted in real time:
• Analysis of soil parameters and their effects
• Microzonation information
• Seismic hazard and strong ground motion
• Soil liquefaction and its effects in the surrounding area
• Impacts of liquefaction on buildings and infrastructure
• Occurrence of future earthquakes and their effects on existing soil
• Propagation of ground vibration due to the occurrence of an earthquake
A GIS-based earthquake information system has thus been prepared for the whole world in Visual Basic on the ArcGIS platform and further extended to the micro level based on actual soil parameters. Individual tools have been developed for liquefaction, earthquake frequency, etc. All of this information can be used for infrastructure development, i.e., multi-storey structures, irrigation dams and their components, hydropower facilities, etc., in real time, for the present and the future.
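The core catalogue query the interface performs (filtering recorded events by magnitude and region before mapping them) can be sketched as below. The event records are invented placeholders for illustration; the actual system stores a full worldwide catalogue.

```python
# Sketch of a catalogue query: filter events by magnitude and a
# latitude/longitude bounding box. Records here are placeholders.
catalog = [
    {"time": "2001-01-26", "lat": 23.4, "lon": 70.2, "depth_km": 16, "mag": 7.7},
    {"time": "2004-12-26", "lat": 3.3, "lon": 95.9, "depth_km": 30, "mag": 9.1},
    {"time": "2015-04-25", "lat": 28.2, "lon": 84.7, "depth_km": 8, "mag": 7.8},
    {"time": "2019-06-19", "lat": 35.0, "lon": 27.0, "depth_km": 70, "mag": 5.1},
]

def query(catalog, min_mag=0.0, bbox=(-90, 90, -180, 180)):
    """Events with mag >= min_mag inside (lat_min, lat_max, lon_min, lon_max)."""
    lat_min, lat_max, lon_min, lon_max = bbox
    return [e for e in catalog
            if e["mag"] >= min_mag
            and lat_min <= e["lat"] <= lat_max
            and lon_min <= e["lon"] <= lon_max]

# Example: major events (M >= 7) in a box covering the wider Indian region
india = query(catalog, min_mag=7.0, bbox=(5, 40, 65, 100))
print([e["time"] for e in india])
```

On top of such filtered subsets, the recurrence tool can count events per magnitude bin to estimate frequency, which is the "preliminary analysis" the abstract mentions.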

Keywords: GIS based earthquake information system, microzonation, analysis and real time information about liquefaction, infrastructure development

Procedia PDF Downloads 316
203 The Adolescent Vaping Crisis in Urban India

Authors: Arushi S. Goyal, Jo Aggarwal, Ravi Jasuja

Abstract:

Statement of the Problem: Vapes have long been marketed as safer alternatives to traditional cigarettes; however, research suggests that the perceived safety of e-cigarette use may be overstated. While the addictive properties of nicotine have garnered significant scientific interest, the adverse effects of ‘inert’ ingredients in vapes have only recently begun to be investigated. Seemingly harmless components of vapes, such as propylene glycol, have been shown to damage astrocytes and oligodendrocytes, and certain flavorings are causatively associated with neuroinflammation. With ease of concealment and varied aromas, vape usage among high school students continues unabated even in countries like India that have instituted comprehensive bans on e-cigarettes. Because of the overt government ban, there is a paucity of public data on the determinants of teenage vaping patterns and on parental engagement in curbing this debilitating dependency. Additionally, the large body of peer-reviewed studies on vaping has been conducted primarily in Western countries. Accordingly, the purpose of this study was to examine the factors affecting the causes of and attitudes towards vaping among adolescents in urban India, as well as the gaps in parental awareness. We posit that this study lays out a reusable framework for extending such studies to conservative societies where adolescents sustain vaping behavior despite strong governmental policies. Methodology & Theoretical Orientation: Two online surveys were used to collect data from participants at eight private schools in Bangalore. The first survey sampled adolescents aged 14-18, while the second surveyed the parents of children in the same age group from the same schools. Informed consent was obtained from all participants, and all data collected were anonymous. Results: We find substantial discordance between self-reported vape use by the adolescents and the parents’ knowledge of their child’s exposure to vaping.
Over one fifth of adolescent respondents (22.4%) reported using e-cigarettes, while only 5% of parents reported that their children used e-cigarettes. Even though over 70% of adolescents believe that vaping is addictive, only 22.8% of respondents were aware of its components or the extent of their impact. While peer pressure is often perceived to be the enabling factor, curiosity was reported as the primary reason for initiation. Adolescents who vape saw regulations on sales and marketing as the most effective deterrent; in contrast, parents and other students leaned on school infrastructure to intervene. Overall, there has been a significant increase in vaping and a substantial discordance between parental perceptions and adolescent vaping. Conclusion & Significance: Despite a complete ban, vapes continue to be easily accessible. The data suggest that an open discussion about the adverse health consequences of the untested, “seemingly inert” ingredients in these unregulated vape liquids would galvanize the student community by demystifying vaping. While increased regulation of the sale of vapes deters open use, increased parental involvement could enable open dialog with children and help reduce the prevalence of vaping. A reduction in vaping could have a considerable impact on the health and educational outcomes of the youth of India.

Keywords: adolescent, e-cigarettes, health consequences, India, parental awareness, vapes

Procedia PDF Downloads 24
202 Extremism among College and High School Students in Moscow: Diagnostics Features

Authors: Puzanova Zhanna Vasilyevna, Larina Tatiana Igorevna, Tertyshnikova Anastasia Gennadyevna

Abstract:

In this day and age, extremism in the various forms of its manifestation is a real threat to the world community, to the national security of a state and its territorial integrity, and to the constitutional rights and freedoms of citizens. Extremism is generally described as a commitment to extreme views and actions that radically deny the existing social norms and rules. In ideological and political struggles, supporters of extremism often adopt the methods and means of psychological warfare, appealing not to reason and logical argument but to people's emotions and instincts, to prejudices, biases, and a variety of mythological constructs. They are dissatisfied with the established order and aim to increase this dissatisfaction among the masses. Youth extremism holds a specific place among the existing forms and types of extremism. In this context, in 2015 we conducted a survey among Moscow college and high school students. The aim of this study was to determine how great or small the difference is between the two groups in their understanding of and attitudes towards manifestations of extremism, in their inclination and readiness to take part in extremist activities, and in what causes this predisposition, if it exists. We performed multivariate analysis to establish Russian college and high school students' opinions on the extremism and terrorism situation in our country, as well as their knowledge of these topics. Among other things, we showed that the level of aggressiveness of young people was not above the average for the whole population. The survey was conducted using the questionnaire method. The sample included college and high school students in Moscow (642 and 382, respectively), selected at random. The questionnaire was developed by specialists of the RUDN University Sociological Laboratory and included both original questions (projective questions, the incomplete-sentences technique) and the standard S. Dayhoff test
to determine the level of internal aggressiveness. As an experiment, the FACS and SPAFF techniques were also used to determine psychotypes and non-verbal manifestations of emotions. The study confirmed the hypothesis that, in the respondents' opinion, the level of aggression is higher today than it was a few years ago. Differences were found between the two age groups of young people in their understanding of, and attitudes towards, such social phenomena as extremism and terrorism, including their danger and their appeal. The theory of psychotypes, SPAFF (specific affect coding system), and FACS (facial action coding system) are considered additional techniques for diagnosing a tendency towards extreme views. Thus, it is established that diagnosing the acceptance of extreme views among young people is possible through the simultaneous use of knowledge from different fields of the socio-humanistic sciences. The results of the research can be used in a comparative context with other countries and as a starting point for further research in the field, given its high relevance.

Keywords: extremism, youth extremism, diagnostics of extremist manifestations, forecast of behavior, sociological polls, theory of psychotypes, FACS, SPAFF

Procedia PDF Downloads 337
201 Saline Aspiration Negative Intravascular Test: Mitigating Risk with Injectable Fillers

Authors: Marcelo Lopes Dias Kolling, Felipe Ferreira Laranjeira, Guilherme Augusto Hettwer, Pedro Salomão Piccinini, Marwan Masri, Carlos Oscar Uebel

Abstract:

Introduction: Injectable fillers are among the most common nonsurgical cosmetic procedures, with significant growth yearly. Knowledge of the rheological and mechanical characteristics of fillers, facial anatomy, and injection technique is essential for safety. Concepts such as the use of cannula versus needle, aspiration before injection, and facial danger zones have been well discussed. In the case of an accidental intravascular puncture, the pressure inside the vessel may not be sufficient to push blood into the syringe due to the characteristics of the filler product; this is especially true for calcium hydroxyapatite (CaHA) or hyaluronic acid (HA) fillers with high G’. Since the viscoelastic properties of normal saline are much lower than those of fillers, aspiration with saline prior to filler injection may decrease the risk of a false-negative aspiration and its subsequent catastrophic effects. We discuss a technique that adds a safety step to the procedure through saline aspiration prior to injection: a ‘‘reverse Seldinger’’ technique for intravascular access, which we term SANIT (Saline Aspiration Negative Intravascular Test). Objectives: To demonstrate the author’s (PSP) technique, which adds a safety step to the process of filler injection with both CaHA and HA in order to decrease the risk of intravascular injection. Materials and Methods: Normal skin cleansing and topical anesthesia with prilocaine/lidocaine cream are performed, and the facial subunits to be treated are marked. A 3 mL Luer lock syringe is filled with 2 mL of 0.9% normal saline and fitted with a 27G needle, which is turned one half rotation. When a cannula is to be used, the Luer lock syringe is attached to a 27G 4 cm single-hole disposable cannula. After skin puncture, the 3 mL syringe is advanced with the plunger pulled back (negative pressure) and progressed to the desired depth, aspirating all the while.
Once the desired location for filler injection is reached, the saline syringe is exchanged for the syringe containing the filler, grasping the hub of the needle securely and taking care not to dislodge the needle tip. Prior to this, 0.1 mL of filler is removed to leave space inside the syringe for aspiration. We again aspirate and then inject retrograde. SANIT is especially useful for CaHA, since its G’ is much higher than that of HA, and reflux of blood into the syringe is thus less likely to occur. Results: The technique has been used safely for the past two years with no adverse events; the increase in cost is negligible (only the cost of 2 mL of normal saline). Over 100 patients (over 300 syringes) have been treated with this technique. The risk of accidental intravascular puncture has been calculated to be between 1:6410 and 1:40882 syringes among expert injectors; however, the consequences of intravascular injection can be catastrophic even for board-certified physicians. Conclusions: While the risk of intravascular filler injection is low, the consequences can be disastrous. We believe that adding the SANIT technique can help further mitigate risk with no significant untoward effects, and it could be considered by all those performing injectable fillers. Further follow-up is ongoing.
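The cited per-syringe risk range (1:6410 to 1:40882) compounds over an injector's career, which is why a cheap extra safety step is attractive. The short sketch below computes the cumulative probability of at least one intravascular puncture over n syringes as 1 − (1 − p)^n; the career volume of 5000 syringes is an assumed figure for illustration.

```python
# Cumulative probability of at least one accidental intravascular
# puncture over a career, from the per-syringe risk range cited above.
def cumulative_risk(p_per_syringe, n_syringes):
    return 1.0 - (1.0 - p_per_syringe) ** n_syringes

n = 5000  # syringes injected over a career (assumed)
low = cumulative_risk(1 / 40882, n)
high = cumulative_risk(1 / 6410, n)
print(f"risk over {n} syringes: {low:.1%} to {high:.1%}")
```

Even at the optimistic end of the range, the career-level risk is far from negligible, supporting the case for routinely adding a low-cost step like SANIT.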

Keywords: injectable fillers, safety, saline aspiration, injectable filler complications, hyaluronic acid, calcium hydroxyapatite

Procedia PDF Downloads 150
200 Bio-Electro Chemical Catalysis: Redox Interactions, Storm and Waste Water Treatment

Authors: Michael Radwan Omary

Abstract:

Context: This innovation demonstrates effective desalination of surface water and groundwater by engineered organic-catalysis media. The author has developed a technology called “Storm-Water Ions Filtration Treatment” (SWIFTTM): cold reactor modules designed to retrofit typical urban street storm drains or catch basins. SWIFT triggers biochemical redox reactions with toxic total dissolved solids (TDS) embedded in the water stream, lowering its electrical conductivity (EC). The SWIFTTM catalyst media unlock sub-molecular bond energy, break down toxic chemical bonds, and neutralize toxic molecules, bacteria, and pathogens. Research Aim: This research aims to develop and design water desalination and disinfection systems with lower O&M cost, zero brine discharge, no energy input, and no chemicals. The objective is to provide an effective, resilient, and sustainable solution to urban storm-water and groundwater decontamination and disinfection. Methodology: We focused on the development of organic, non-chemical, non-polymer, and non-allergenic approaches for water and wastewater desalination and disinfection, with no plugs and no pumping. SWIFT modules operate by directing the water stream to flow freely through the electrically charged media of the cold reactor, generating weak interactions with water-dissolved electrically conductive molecules and resulting in the neutralization of toxic molecules. The system is powered by harvesting the energy embedded in sub-molecular bonds. Findings: Case studies of the SWIFTTM technology at CSU-CI and the CSU-Fresno Water Institute demonstrated consistently high reduction of all 40 detected wastewater pollutants, including pathogens, to levels below the State of California Department of Water Resources “Drinking Water Maximum Contaminant Levels”. The technology has proved effective in reducing pollutants such as arsenic, beryllium, mercury, selenium, glyphosate, benzene, and E. coli bacteria. 
The technology has also been successfully applied to the decontamination of dissolved chemicals, water pathogens, organic compounds, and radiological agents. Theoretical Importance: The development, design, engineering, and manufacturing of SWIFT technology offer a cutting-edge advancement: a clean-energy biocatalysis media solution for water and wastewater desalination and disinfection that requires no energy input. It is a significant contribution to institutions and municipalities seeking sustainable, lower-cost, clean-energy water desalination with zero brine and zero CO2 discharge. Data Collection and Analysis Procedures: The researchers collected data on the performance of the SWIFTTM technology in reducing the levels of various pollutants in water. The data were analyzed by comparing the reductions achieved by the SWIFTTM technology to the Drinking Water Maximum Contaminant Levels set by the State of California. The researchers also gave live oral presentations to showcase the applications of SWIFTTM technology in storm-water capture and decontamination, as well as in providing clean drinking water during emergencies. Conclusion: The SWIFTTM technology has demonstrated its capability to effectively reduce pollutants in water and wastewater to levels below regulatory standards. The technology offers a sustainable solution to groundwater and storm-water treatment. Further development and implementation of the SWIFTTM technology have the potential to treat storm water for reuse as a new source of drinking water and as an ambient source of clean and healthy local water for groundwater recharge.

Keywords: catalysis, bio electro interactions, water desalination, weak-interactions

Procedia PDF Downloads 67
199 Performance of the CALPUFF Dispersion Model for Investigating the Dispersion of Pollutants Emitted from an Industrial Complex, Daura Refinery, to an Urban Area in Baghdad

Authors: Ramiz M. Shubbar, Dong In Lee, Hatem A. Gzar, Arthur S. Rood

Abstract:

Air pollution is one of the biggest environmental problems in Baghdad, Iraq. The Daura refinery, located near the center of Baghdad, represents the largest industrial area and emits enormous amounts of pollutants; therefore, studying the gaseous pollutants and particulate matter is very important for the environment and for the health of refinery workers and of people living in areas around the refinery. Some studies investigated this area before, but they relied on the basic Gaussian equation implemented in simple computer programs. That work was useful and important at the time, but during the last two decades new large production units were added to the Daura refinery, such as PU_3 (Power Unit 3, Boilers 11 and 12), CDU_1 (Crude Distillation Unit 1, 70,000 barrels), and CDU_2 (Crude Distillation Unit 2, 70,000 barrels). It is therefore necessary to apply a new, advanced model to study air pollution in the region for recent years and to calculate the monthly emission rates of pollutants from the actual amounts of fuel consumed in each production unit; this may lead to accurate pollutant concentration values and a better picture of dispersion and transport in the study area. In this study, to the best of the authors’ knowledge, the CALPUFF model was used and examined for the first time in Iraq. CALPUFF, an advanced non-steady-state meteorological and air quality modeling system, was applied to investigate the concentrations of SO2, NO2, CO, and PM1-10μm in areas adjacent to the Daura refinery in the center of Baghdad, Iraq. The CALPUFF modeling system includes three main components: CALMET, a diagnostic 3-dimensional meteorological model; CALPUFF, an air quality dispersion model; and CALPOST, a post-processing package, together with an extensive set of preprocessing programs that interface the model to standard, routinely available meteorological and geophysical datasets. 
The targets of this work are the modeling and simulation of the four pollutants (SO2, NO2, CO, and PM1-10μm) emitted from the Daura refinery within one year. Emission rates of these pollutants were calculated for twelve units comprising thirty plants and 35 stacks, using the monthly average fuel consumption of these production units. A further aim is to assess the performance of the CALPUFF model in this setting and to determine whether it is appropriate and yields predictions of good accuracy compared with available pollutant observations. The CALPUFF model was investigated under three stability classes (stable, neutral, and unstable) to show the dispersion of the pollutants under different meteorological conditions. The simulations showed different kinds of dispersion of these pollutants in the region, depending on the stability conditions and the environment of the study area; monthly and annual averages of pollutants were used to display the dispersion in contour maps. High pollutant values were observed in this area; therefore, this study recommends further investigation and analysis of the pollutants, reducing emission rates by using modern techniques and natural gas, increasing the stack heights of the units, and increasing the exit gas velocity from the stacks.
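The monthly emission-rate step described in this abstract can be sketched as follows. The fuel amount and SO2 emission factor below are assumed for illustration only and are not the refinery's actual operating data:

```python
# Hedged sketch: average emission rate from monthly fuel consumption via an
# emission factor. All numeric inputs here are illustrative assumptions.
SECONDS_PER_MONTH = 30 * 24 * 3600  # simple 30-day month

def emission_rate_g_per_s(fuel_tonnes_per_month, factor_kg_per_tonne):
    """Average emission rate in g/s, given monthly fuel use (tonnes) and an
    emission factor (kg of pollutant per tonne of fuel burned)."""
    emitted_kg = fuel_tonnes_per_month * factor_kg_per_tonne
    return emitted_kg * 1000.0 / SECONDS_PER_MONTH  # kg -> g, spread over a month

# e.g. 2000 t of fuel at an assumed SO2 factor of 19 kg/t:
rate = emission_rate_g_per_s(2000, 19)
```

A rate of this form, one per stack and per month, is the kind of source input a dispersion model such as CALPUFF consumes.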

Keywords: CALPUFF, Daura refinery, Iraq, pollutants

Procedia PDF Downloads 197
198 Combined Civilian and Military Disaster Response: A Critical Analysis of the 2010 Haiti Earthquake Relief Effort

Authors: Matthew Arnaouti, Michael Baird, Gabrielle Cahill, Tamara Worlton, Michelle Joseph

Abstract:

Introduction: Over ten years after the 7.0-magnitude earthquake struck the capital of Haiti, impacting over three million people and leading to the deaths of over two hundred thousand, the multinational humanitarian response remains the largest disaster relief effort to date. This study critically evaluates the multi-sector and multinational disaster response to the earthquake, looking at how the lessons learned from this analysis can be applied to future disaster response efforts. We put particular emphasis on assessing the interaction between civilian and military sectors during this humanitarian relief effort, with the hope of highlighting how concrete guidelines are essential to improving future responses. Methods: An extensive scoping review of the relevant literature was conducted, in which library scientists performed reproducible, verified systematic searches of multiple databases. Grey literature and hand searches were utilised to identify additional unclassified military documents for inclusion in the study. More than 100 documents were included for data extraction and analysis. Key domains were identified, including: Humanitarian and Military Response, Communication, Coordination, Resources, Needs Assessment and Pre-Existing Policy. Corresponding information and lessons learned pertaining to these domains were then extracted, detailing the barriers and facilitators to an effective response. Results: Multiple themes were noted that spanned all identified domains, including the lack of adequate pre-existing policy and extensive ambiguity of actors’ roles. This ambiguity was continually influenced by the complex role the United States military played in the disaster response. At a deeper level, the effects of neo-colonialism and concern about infringements on Haitian sovereignty played a substantial role at all levels: setting the pre-existing conditions and determining the redevelopment efforts that followed. 
Furthermore, external factors significantly impacted the response, particularly the loss of life within the political and security sectors. This was compounded by the destruction of important infrastructure systems - particularly electricity supplies and telecommunication networks, as well as air and seaport capabilities. Conclusions: This study stands as one of the first and most comprehensive evaluations, systematically analysing the civilian and military response - including their collaborative efforts. This study offers vital information for improving future combined responses and provides a significant opportunity for advancing knowledge in disaster relief efforts - which remains a more pressing issue than ever. The categories and domains formulated serve to highlight interdependent factors that should be applied in future disaster responses, with significant potential to aid the effective performance of humanitarian actors. Further studies will be grounded in these findings, particularly the need for greater inclusion of the Haitian perspective in the literature, through additional qualitative research studies.

Keywords: civilian and military collaboration, combined response, disaster, disaster response, earthquake, Haiti, humanitarian response

Procedia PDF Downloads 127
197 Rethinking the Languages for Specific Purposes Syllabus in the 21st Century: Topic-Centered or Skills-Centered

Authors: A. Knezović

Abstract:

The 21st century has transformed the labor market landscape by posing new and different demands on university graduates as well as on university lecturers: the knowledge and academic skills students acquire in the course of their studies should be applicable and transferable from the higher education context to their future professional careers. In the context of the Languages for Specific Purposes (LSP) classroom, the teachers’ objective is not only to teach the language itself, but also to prepare students to use that language as a medium to develop generic skills and competences. These include media and information literacy, critical and creative thinking, problem-solving and analytical skills, effective written and oral communication, as well as collaborative work and social skills, all of which are necessary to make university graduates more competitive in everyday professional environments. On the other hand, due to limitations of time and large numbers of students in classes, the frequently topic-centered syllabus of LSP courses places considerable focus on acquiring the subject matter and specialist vocabulary instead of sufficiently developing the skills and competences required by students’ prospective employers. This paper explores some of those issues as viewed both by LSP lecturers and by business professionals in their respective surveys. The surveys were conducted among more than 50 LSP lecturers at higher education institutions in Croatia, more than 40 HR professionals, and more than 60 university graduates with degrees in economics and/or business working in management positions in mainly large and medium-sized companies in Croatia. Various elements of LSP course content have been taken into consideration in this research, including reading and listening comprehension of specialist texts, acquisition of specialist vocabulary and grammatical structures, as well as presentation and negotiation skills. 
The ability to hold meetings, conduct business correspondence, write reports, academic texts and case studies, and take part in debates was also taken into consideration, as well as informal business communication, business etiquette and core courses delivered in a foreign language. The results of the surveys conducted among LSP lecturers will be analyzed with reference to the extent to which those elements are included in their courses and how consistently and thoroughly they are evaluated according to their course requirements. Their opinions will be compared to the results of the surveys conducted among professionals from a range of industries in Croatia so as to examine how useful and important they perceive the same elements of LSP course content to be in their working environments. Such comparative analysis will thus show to what extent the syllabi of LSP courses meet the demands of the employment market when it comes to students’ language skills and competences, as well as transferable skills. Finally, the findings will also be compared to observations based on practical teaching experience and the relevant sources used in this research. In conclusion, the ideas and observations in this paper are merely open-ended questions that do not have conclusive answers, but they might prompt LSP lecturers to re-evaluate the content and objectives of their course syllabi.

Keywords: languages for specific purposes (LSP), language skills, topic-centred syllabus, transferable skills

Procedia PDF Downloads 308
196 Assessing Measures and Caregiving Experiences of Thai Caregivers of Persons with Dementia

Authors: Piyaorn Wajanatinapart, Diane R. Lauver

Abstract:

The number of persons with dementia (PWD) has increased. Informal caregivers are the main providers of care. They may perceive both gains and burdens; caregivers who report high perceived gains may report lower burdens and better health. Gaps in the caregiving literature included: psychometric properties not reported in some studies and unclear definitions of gains; most studies not guided by theory and conducted in Western countries; and relationships among caregiving variables (motivations, satisfaction with psychological needs, social support, gains, burdens, and physical and psycho-emotional health) not fully described. These gaps were addressed by assessing the psychometric properties of selected measures, clearly defining gains, using self-determination theory (SDT) to guide the study, and conducting the study in Thailand. The purposes of the study were to evaluate six measures for internal consistency reliability, content validity, and construct validity. This study also examined relationships among caregiving variables: motivations (controlled and autonomous), satisfaction with psychological needs (autonomy, competence, and relatedness), perceived social support, perceived gains, perceived burdens, and physical and psycho-emotional health. The study used a cross-sectional, correlational descriptive design with two convenience samples. Sample 1 comprised five Thai experts who assessed the content validity of the measures. Sample 2 comprised 146 Thai caregivers of PWD, used to assess construct validity, reliability, and relationships among caregiving variables. Experts rated the questionnaires and returned them via e-mail. Caregivers answered questionnaires at clinics of four Thai hospitals. Data analysis used descriptive statistics and bivariate and multivariate analyses with the composite indicator structural equation model to control for measurement errors. Most caregivers were female (82%), middle-aged (M = 51.1, SD = 11.9), and daughters (57%). 
They provided care for 15 hours/day, for an average of 4.6 years. The content validity indices of items and scales were .80 or higher for clarity and relevance. Experts suggested item revisions. Cronbach’s alphas were .63 to .93 for ten subscales of four measures and .26 to .57 for three subscales. The gain scale showed acceptable construct validity. Controlling for covariates, controlled motivations, satisfaction with the three psychological needs subscales, and perceived social support had positive relationships with physical and psycho-emotional health. Both the satisfaction-with-autonomy subscale and perceived social support had negative relationships with perceived burdens. The three psychological needs satisfaction subscales had positive relationships among themselves, and the physical and psycho-emotional health subscales had positive relationships with each other. Furthermore, perceived burdens had negative relationships with physical and psycho-emotional health. This study was the first to use SDT to describe relationships among caregiving variables in Thailand. Caregivers’ characteristics were consistent with the literature. Four measures were valid and reliable; two were not. The study provides broad knowledge about these relationships. Interpretation of the results requires caution because the same sample was used both to evaluate the psychometric properties of the measures and to examine relationships among caregiving variables. Researchers could use the four valid measures in further caregiving studies. Using a theory helps describe the concepts, propositions, and measures used. Researchers may also examine satisfaction with psychological needs as a mediator. Future studies collecting data from caregivers in communities are needed.
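As a hedged illustration of the internal-consistency statistic reported above, Cronbach's alpha can be computed from raw item scores; the item scores in the example below are invented for demonstration and are not the study's data:

```python
# Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
# where k is the number of items. Example data are made up.

def variance(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: k lists, each holding one item's scores across respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

# e.g. two perfectly consistent items give alpha = 1:
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]])
```

Subscale alphas of .63 to .93, as reported for ten of the subscales, would come out of exactly this kind of computation on the respondents' item scores.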

Keywords: caregivers, caregiving, dementia, measures

Procedia PDF Downloads 308
195 Finite Element Analysis of Human Tarsals, Metatarsals and Phalanges for Predicting Probable Locations of Fractures

Authors: Irfan Anjum Manarvi, Fawzi Aljassir

Abstract:

Human bones have long been a keen area of research in the field of biomechanical engineering. Medical professionals, as well as engineering academics and researchers, have investigated various bones using medical, mechanical, and materials approaches to build the available body of knowledge. Their major focus has been to establish the properties of these bones and ultimately to develop processes and tools either to prevent fractures or to repair the damage. The literature shows that mechanical professionals conducted a variety of tests for hardness, deformation, and strain field measurement to arrive at their findings. However, they considered the accuracy of these results insufficient due to various limitations of tools and test equipment and difficulties in the availability of human bones. They proposed the need for further studies, first overcoming inaccuracies in measurement methods, testing machines, and experimental errors and then carrying out experimental or theoretical studies. Finite element analysis is a technique that was developed for the aerospace industry due to the complexity of its designs and materials. Over time, however, it has found applications in many other industries due to its accuracy and its flexibility in the selection of materials and the types of loading that can be applied theoretically to an object under study. In the past few decades, the field of biomechanical engineering has also started to see its applicability. However, the work done on tarsals, metatarsals, and phalanges using this technique is very limited. Therefore, the present research has focused on using this technique for analysis of these critical bones of the human body. The technique requires a 3-dimensional geometric computer model of the object to be analyzed. In the present research, a 3D laser scanner was used for accurate geometric scans of individual tarsals, metatarsals, and phalanges from a typical human foot to create these computer geometric models. 
These were then imported into finite element analysis software, and a length-refinement process was carried out prior to analysis to ensure the computer models were true representatives of the actual bones. This was followed by analysis of each bone individually. A number of constraints and load conditions were applied to observe the stress and strain distributions in these bones under compressive loads, tensile loads, or their combination. Results were collected for deformations along various axes, and stress and strain distributions were observed to identify critical locations where fracture could occur. A comparative analysis of the failure properties of all three types of bones was carried out to establish which of them could fail earlier; the outcome is presented in this research. The results of this investigation could be used for further experimental studies by academics and researchers, as well as by industrial engineers, for the development of foot protection devices or tools for surgical operations and recovery treatment of these bones. Researchers could build on these models to carry out analysis of a complete human foot through finite element analysis under various loading conditions such as walking, marching, running, and landing after a jump.
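As a minimal, hypothetical sketch of the finite element idea underlying this kind of analysis, a long bone can be idealized as a one-dimensional two-element axial bar. The modulus, area, length, and load below are assumed illustration values, not measured bone properties from this study:

```python
# 1D finite element sketch: two equal axial elements, fixed at node 0,
# tensile load F applied at node 2. All numeric values are assumptions.
E = 17e9    # Young's modulus, Pa (order of magnitude of cortical bone)
A = 1e-4    # cross-sectional area, m^2 (assumed)
L = 0.02    # element length, m (assumed)
F = 500.0   # axial tensile load at the free end, N (assumed)

k = E * A / L  # axial stiffness of one element

# Global stiffness for nodes 0..2 (two elements in series):
#   [[ k, -k,  0],
#    [-k, 2k, -k],
#    [ 0, -k,  k]]
# With the boundary condition u0 = 0, the reduced system is
#   [[2k, -k], [-k, k]] [u1, u2]^T = [0, F]^T
det = 2 * k * k - k * k          # determinant of the reduced matrix (= k^2)
u1 = (k * F) / det               # mid-node displacement, m (= F/k)
u2 = (2 * k * F) / det           # tip displacement, m (= 2F/k)

strain = (u2 - u1) / L           # axial strain in the loaded element
stress = E * strain              # axial stress, Pa (equals F/A here)
```

Real bone models use thousands of 3D solid elements over the scanned geometry, but the assembly-and-solve step is the same idea scaled up.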

Keywords: tarsals, metatarsals, phalanges, 3D scanning, finite element analysis

Procedia PDF Downloads 329
194 External Program Evaluation: Impacts and Changes on Government-Assisted Refugee Mothers

Authors: Akiko Ohta, Masahiro Minami, Yusra Qadir, Jennifer York

Abstract:

The Home Instruction for Parents of Preschool Youngsters (HIPPY) is a home instruction program for mothers of children 3 to 5 years old. Using role-play as a method of teaching, the participating mothers work with their home visitors and learn how to deliver the HIPPY curriculum to their children. Building on HIPPY, the Reviving Hope and Home for High-Risk Refugee Mothers Program (RHH) was created to provide more personalized peer support and to respond to ongoing settlement challenges for isolated and vulnerable Government Assisted Refugee (GAR) mothers. GARs often have greater needs and vulnerabilities than other refugee groups. While support is available, they often face various challenges and barriers in starting their new lives in Canada, such as inadequate housing, low first-language literacy levels, low competency in English or French, and social isolation. The pilot project was operated by the Mothers Matter Centre (MMC) from January 2019 to March 2021 in partnership with the Immigrant Services Society of BC (ISSofBC). A formative evaluation was conducted by a research team at Simon Fraser University. In order to provide more suitable support for GAR mothers, RHH intended: to offer more flexibility in HIPPY delivery, supported by a home visitor, to meet the needs of refugee mothers facing various conditions and challenges; to have a pool of financial resources to be used for the RHH families when needed during the program period; to have another designated staff member, called a community navigator, assigned to facilitate the support system for the RHH families in their settlement; to have a portable device available for each RHH mother to navigate settlement support resources; and to provide other variations of the HIPPY curriculum as options for the RHH mothers, including a curriculum targeting pre-HIPPY-age children. 
Reflections on each program component were collected from RHH mothers and staff members of MMC and ISSofBC, including frontline workers and management staff, through individual interviews and focus group discussions. Each of the RHH program components was analyzed and evaluated by applying Moore’s four-domains framework to identify key information and generate new knowledge. To capture RHH mothers’ program experience in more depth through their own reflections, the photovoice method was used; some photos taken by the mothers will be shared to illustrate their RHH experience as part of their life stories. Over the period of the program, this evaluation observed how RHH mothers became more confident in various domains, such as communicating with others, taking public transportation alone, and teaching their own child(ren). One of the major factors behind this success was the home visitors’ flexibility and creativity in creating a meaningful and tailored approach for each mother, depending on her background and personal situation. The role of the community navigator was tested and improved during the program period; the community navigators took the key role of assessing the needs of the RHH families and connecting them with community resources. Both the home visitors and community navigators were immigrant mothers themselves, and owing to their dedicated care for the RHH mothers, they were able to gain trust and work closely and efficiently with the RHH mothers.

Keywords: refugee mothers, settlement support, program evaluation, Canada

Procedia PDF Downloads 171
193 Investigation of Attitude of Production Workers towards Job Rotation in Automotive Industry against the Background of Demographic Change

Authors: Franciska Weise, Ralph Bruder

Abstract:

Due to demographic change in Germany, with its declining birth rate and the increasing age of the population, the share of older people in society is rising. This development is also reflected in the workforce of German companies. Companies should therefore focus on improving ergonomics, especially in the area of age-related work design. The literature shows that studies on age-related work design have been carried out in the past, some of whose results have been put into practice. However, there is still a need for further research. One of the most important methods for taking into account the needs of an aging workforce is job rotation. This method aims at preventing or reducing health risks and inappropriate physical strain. It is conceived as a systematic change of workplaces within a group. The existing literature does not cover any methods for investigating employees’ attitudes towards job rotation. However, in order to evaluate job rotation, it is essential to know workers’ views on rotation. In addition to the investigation of attitudes, the design of the rotation plays a crucial role: the sequence of activities and the rotation frequency influence both the worker and the work result. The evaluation of preliminary talks on the shop floor showed that team speakers and foremen share a common understanding of job rotation. In practice, different varieties of job rotation exist. One important aspect is the frequency of rotation: workers may never rotate, rotate once or more per shift, rotate at every break, or rotate even more often than every break, depending on the opportunity to rotate whenever they want to. Some challenges can be derived from the preliminary talks. For example, rotation across the whole team is not possible if a team member still needs to be trained for a new task. 
In order to determine the relation between the design of job rotation and the attitude towards it, a questionnaire study is being carried out in vehicle manufacturing. The questionnaire is used to determine the different varieties of job rotation that exist in production, as well as workers’ attitudes towards the different frequencies of job rotation. In addition, younger and older employees are compared with regard to their rotation frequency and their attitudes towards rotation, using three age groups. Three questions are under examination. The first is whether older employees rotate less frequently than younger employees. The second is whether the frequency of job rotation and the attitude towards that frequency are interconnected. The third concerns the attitudes of the different age groups towards the frequency of rotation. So far, 144 employees, all working in production, have taken part in the survey: 36.8% were younger than thirty, 37.5% were between thirty and forty-four, and 25.7% were over forty-five years old. The data show no difference between the three age groups in the frequency of job rotation (N=139, median=4, Chi²=.859, df=2, p=.651). Most employees rotate between six and seven workplaces per day. In addition, there is a statistically significant correlation between the frequency of job rotation and the attitude towards that frequency (Spearman’s rho=.223, two-sided p=.008). Fewer than four workplaces per day are not enough for the employees. The third question, which differences can be found between older and younger people who rotate in different ways and hold different attitudes towards job rotation, cannot yet be answered. So far, the data show that younger people would like to rotate very often; for older people, no correlation with acceptable significance can be found. 
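The rank correlation used in the analysis above can be reproduced in principle with a generic tie-aware Spearman's rho; the example values below are invented and are not the survey data:

```python
# Spearman's rho = Pearson correlation of the rank-transformed data,
# with tied values sharing the average of their ranks.

def ranks(xs):
    """1-based ranks; tied values share the average of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i+1 .. j+1
        for m in range(i, j + 1):
            r[order[m]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Rank correlation between two equal-length samples."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# e.g. hypothetical rotation frequencies vs. attitude scores:
rho = spearman_rho([2, 4, 6, 7, 7], [3, 3, 4, 5, 4])
```

In practice a statistics package (e.g. `scipy.stats.spearmanr`) would also supply the two-sided p-value reported in the abstract.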
The results of the survey will be used to improve the current practice of job rotation. In addition, the discussions during the survey are expected to help sensitize employees to rotation issues and to contribute to optimizing rotation by means of qualification and an improved design of job rotation. Together with the employees, and based on the survey results, standards must be developed that show how to rotate in an ergonomic way while taking the attitude towards job rotation into account.

Keywords: job rotation, age-related work design, questionnaire, automotive industry

Procedia PDF Downloads 303
192 Effect of Rapeseed Press Cake on Extrusion System Parameters and Physical Pellet Quality of Fish Feed

Authors: Anna Martin, Raffael Osen

Abstract:

The demand for fish from aquaculture is constantly growing. Concurrently, due to a shortage of fishmeal caused by extensive overfishing, substituting fishmeal with plant proteins is becoming increasingly important for the production of sustainable aquafeed. Several research studies have evaluated the impact of plant protein meals, concentrates, or isolates on fish health and fish feed quality. However, these protein raw materials often require elaborate and expensive manufacturing, and their availability is limited. Rapeseed press cake (RPC), a side product of de-oiling processes, exhibits high potential as a plant-based fishmeal alternative in feed for carnivorous species due to its availability, low cost, and protein content. In order to produce aquafeed with RPC, it is important to systematically assess i) inclusion levels of RPC that yield pellet qualities similar to fishmeal-containing formulations and ii) how extrusion parameters can be adjusted to achieve targeted pellet qualities. However, the effect of RPC on extrusion system parameters and pellet quality has only scarcely been investigated. Therefore, the aim of this study was to evaluate the impact of feed formulation, extruder barrel temperature (90, 100, 110 °C), and screw speed (200, 300, 400 rpm) on extrusion system parameters and the physical properties of fish feed pellets. A co-rotating pilot-scale twin screw extruder was used to produce five iso-nitrogenous feed formulations: a fishmeal-based reference formulation including 16 g/100g fishmeal and four formulations in which fishmeal was substituted by RPC to 25, 50, 75 or 100 %. Extrusion system parameters, namely product temperature, pressure at the die, specific mechanical energy (SME), and torque, were monitored while samples were taken. After drying, pellets were analyzed with regard to optical appearance, sectional and longitudinal expansion, sinking velocity, bulk density, water stability, durability, and specific hardness. 
In our study, even minor amounts of RPC had a strong impact on pellet quality parameters, especially on expansion, but only marginally affected extrusion system parameters. Increasing amounts of RPC reduced sectional expansion, sinking velocity, bulk density and specific hardness, and increased longitudinal expansion compared to a reference formulation without RPC. Water stability and durability were almost unaffected by RPC addition. Moreover, pellets with rapeseed components showed a coarser structure than pellets containing only fishmeal. When the adjustment of barrel temperature and screw speed was investigated, increasing the extruder barrel temperature led to a slight decrease in SME and die pressure and an increased sectional expansion of the reference pellets, but had almost no effect on rapeseed-containing fish feed pellets. Changes in screw speed likewise had little effect on the physical properties of the pellets; however, with raised screw speed, the SME and the product temperature increased. In summary, a one-to-one substitution of fishmeal with RPC without adjustment of the extrusion process parameters does not result in fish feed of a designated quality. Therefore, deeper knowledge of raw materials and their behavior under the thermal and mechanical stresses applied during extrusion is required.
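For orientation, the specific mechanical energy (SME) reported above is commonly derived from net torque, screw speed and feed rate. The sketch below is a generic illustration of that relation with hypothetical operating values, not the study's instrument readout.

```python
import math

def specific_mechanical_energy(torque_nm, screw_speed_rpm, feed_rate_kg_h):
    """SME in kJ/kg from net torque (N*m), screw speed (rpm) and feed rate (kg/h)."""
    omega = 2 * math.pi * screw_speed_rpm / 60.0   # angular speed, rad/s
    power_w = torque_nm * omega                    # mechanical power input, W
    return power_w / (feed_rate_kg_h / 3600.0) / 1000.0

# hypothetical operating point: 50 N*m net torque at 300 rpm, 20 kg/h feed
sme = specific_mechanical_energy(50.0, 300.0, 20.0)   # about 283 kJ/kg
```

Comparing SME across formulations at matched feed rates is one common way to see whether a raw-material change (such as RPC substitution) alters the mechanical energy input.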

Keywords: extrusion, fish feed, press cake, rapeseed

Procedia PDF Downloads 148
191 Simulation Research of Innovative Ignition System of ASz62IR Radial Aircraft Engine

Authors: Miroslaw Wendeker, Piotr Kacejko, Mariusz Duk, Pawel Karpinski

Abstract:

The research in the field of aircraft internal combustion engines is currently driven by the need to decrease fuel consumption and CO2 emissions while maintaining the required level of safety. Currently, reciprocating aircraft engines are found in sports, emergency, agricultural and recreational aviation. Technically, they mostly remain at a pre-war level in terms of theory of operation, design and manufacturing technology, especially when compared to the high level of development of automotive engines. Typically, these engines are fed by carburetors of quite primitive construction. At present, due to environmental requirements and climate change, it is beneficial to develop aircraft piston engines and adopt the achievements of automotive engineering, such as computer-controlled low-pressure injection, electronic ignition control and biofuels. The paper describes simulation research of innovative power and control systems for a high-power radial aircraft engine. Installing an electronic ignition system in the radial aircraft engine is the fundamental innovative idea of this solution. Consequently, the required level of safety and better functionality compared to today's plug system can be guaranteed. In this framework, this research work focuses on describing a methodology for optimizing the electronically controlled ignition system. This approach can reduce emissions of toxic compounds as a result of lowered fuel consumption, optimized combustion and the engine's capability to burn ecological fuels efficiently. New, redundant elements of the control system can also improve aircraft safety.
The simulation research aimed to determine the sensitivity of the measured values (planned as the quantities recorded by the measurement systems) for determining the optimal ignition angle (the angle of maximum torque at a given operating point). The results covered: a) research in steady states; b) engine speed ranging from 1500 to 2200 rpm (in steps of 100 rpm); c) load ranging from propeller power to maximum power; d) altitude ranging, according to the International Standard Atmosphere, from 0 to 8000 m (in steps of 1000 m); e) fuel: automotive gasoline ES95. Three models of different types of ignition coil (different energy discharge) were studied. The analysis aimed at optimizing the design of the innovative ignition system for an aircraft engine, involving: a) optimization of the measurement systems; b) optimization of the actuator systems. The studies enabled research on the sensitivity of the signals used to control the ignition timing. Accordingly, the number and type of sensors were determined for the ignition system to achieve optimal performance. The results confirmed limited benefits in terms of fuel consumption; thus, including spark management in the optimization is necessary to decrease fuel consumption significantly. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
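The "angle of maximum torque" criterion mentioned above can be illustrated with a toy scan. The sketch below assumes a made-up quadratic torque response with a hypothetical peak at 24° before top dead center; it is not the engine model used in the study, only a picture of how a maximum-torque angle is located in a simulated map.

```python
def torque(theta_deg):
    """Toy quadratic torque response; peak placed at an assumed 24 deg BTDC."""
    return 480.0 - 0.15 * (theta_deg - 24.0) ** 2     # N*m, hypothetical values

angles = [a * 0.5 for a in range(0, 81)]              # scan 0-40 deg in 0.5 deg steps
mbt = max(angles, key=torque)                         # angle of maximum torque
```

In a real calibration, such a scan would be repeated for every speed/load/altitude point of the grid described above.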

Keywords: piston engine, radial engine, ignition system, CFD model, engine optimization

Procedia PDF Downloads 386
190 Satisfaction Among Preclinical Medical Students with Low-Fidelity Simulation-Based Learning

Authors: Shilpa Murthy, Hazlina Binti Abu Bakar, Juliet Mathew, Chandrashekhar Thummala Hlly Sreerama Reddy, Pathiyil Ravi Shankar

Abstract:

Simulation is defined as a technique that replaces or expands real experiences with guided experiences that interactively imitate real-world processes or systems. Simulation enables learners to train in a safe and non-threatening environment. For decades, simulation has been considered an integral part of clinical teaching and learning strategy in medical education. The several types of simulation used in medical education and the clinical environment can be applied using several models, including full-body mannequins, task trainers, standardized simulated patients, virtual or computer-generated simulation, or hybrid simulation, to facilitate learning. Simulation allows healthcare practitioners to acquire skills and experience while safeguarding patient safety. The recent COVID pandemic also led to an increase in simulation use, as there were limitations on medical student placements in hospitals and clinics. The learning is tailored to the educational needs of students to make the learning experience more valuable. Simulation in the pre-clinical years faces challenges with resource constraints, effective curricular integration, student engagement and motivation, and evidence of educational impact, to mention a few. As instructors, we may rely increasingly on simulation for pre-clinical students, while students' confidence levels and perceived competence remain to be evaluated. Our research question was whether the implementation of simulation-based learning positively influences preclinical medical students' confidence levels and perceived competence. This study was done to align the teaching activities with the students' learning experience, to introduce more low-fidelity simulation-based teaching sessions in the pre-clinical years, and to obtain students' input into curriculum development as part of inclusivity.
The study was carried out at the International Medical University, involving pre-clinical year (medical) students who began low-fidelity simulation-based medical education in their first semester and were gradually introduced to medium fidelity as well. The Student Satisfaction and Self-Confidence in Learning Scale questionnaire from the National League for Nursing was employed to collect the responses. The internal consistency reliability of the survey items was tested with Cronbach's alpha in Excel. IBM SPSS for Windows version 28.0 was used to analyze the data. Spearman's rank correlation was used to analyze the correlation between students' satisfaction and self-confidence in learning. The significance level was set at a p-value of less than 0.05. The results from this study have prompted the researchers to undertake a larger-scale evaluation, which is currently underway. The current results show that 70% of students agreed that the teaching methods used in the simulation were helpful and effective. The sessions depend on the learning materials provided and on how the facilitators engage the students and make the session more enjoyable. The feedback highlighted the following areas to focus on while designing simulations for pre-clinical students: quality learning materials, an interactive environment, motivating content, the skills and knowledge of the facilitator, and effective feedback.
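As a rough illustration of the two statistics named above (the authors used Excel and SPSS; this is a generic Python equivalent with invented Likert responses, not the study's data):

```python
import numpy as np

def ranks(x):
    """Average ranks (1-based), handling ties as Spearman's rho requires."""
    x = np.asarray(x, dtype=float)
    order = x.argsort()
    r = np.empty(len(x))
    r[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):            # average the ranks of tied values
        r[x == v] = r[x == v].mean()
    return r

def spearman_rho(a, b):
    """Spearman's rank correlation: Pearson correlation of the rank vectors."""
    ra, rb = ranks(a) - ranks(a).mean(), ranks(b) - ranks(b).mean()
    return float((ra * rb).sum() / np.sqrt((ra ** 2).sum() * (rb ** 2).sum()))

def cronbach_alpha(items):
    """Internal consistency; items is a (respondents x items) Likert matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(0, ddof=1).sum() / items.sum(1).var(ddof=1))

satisfaction = [4, 5, 3, 4, 5, 2, 4]   # invented responses for illustration
confidence = [3, 5, 3, 4, 4, 2, 5]
rho = spearman_rho(satisfaction, confidence)
```

A positive rho here would mirror the expected association between satisfaction and self-confidence; the significance test against p < 0.05 would be run on the real sample.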

Keywords: low-fidelity simulation, pre-clinical simulation, student satisfaction, self-confidence

Procedia PDF Downloads 77
189 Multimodal Integration of EEG, fMRI and Positron Emission Tomography Data Using Principal Component Analysis for Prognosis in Coma Patients

Authors: Denis Jordan, Daniel Golkowski, Mathias Lukas, Katharina Merz, Caroline Mlynarcik, Max Maurer, Valentin Riedl, Stefan Foerster, Eberhard F. Kochs, Andreas Bender, Ruediger Ilg

Abstract:

Introduction: So far, clinical assessments that rely on behavioral responses to differentiate coma states or even predict outcome in coma patients are unreliable, e.g., because of some patients' motor disabilities. The present study aimed to provide prognosis in coma patients using markers from the electroencephalogram (EEG), blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) and [18F]-fluorodeoxyglucose (FDG) positron emission tomography (PET). Unsupervised principal component analysis (PCA) was used for multimodal integration of the markers. Methods: With approval from the local ethics committee of the Technical University of Munich (Germany), 20 patients (aged 18-89) with severe brain damage were recruited through intensive care units at the Klinikum rechts der Isar in Munich and at the Therapiezentrum Burgau (Germany). On the day of the EEG/fMRI/PET measurement (date I), patients (<3.5 months in coma) were classified as being in the minimally conscious state (MCS) or vegetative state (VS) on the basis of their clinical presentation (coma recovery scale-revised, CRS-R). Follow-up assessment (date II) was also based on the CRS-R, in a period of 8 to 24 months after date I. At date I, 63-channel EEG (Brain Products, Gilching, Germany) was recorded outside the scanner, and subsequently simultaneous FDG-PET/fMRI was acquired on an integrated Siemens Biograph mMR 3T scanner (Siemens Healthineers, Erlangen, Germany). Power spectral densities, permutation entropy (PE) and symbolic transfer entropy (STE) were calculated in/between frontal, temporal, parietal and occipital EEG channels. PE and STE are based on symbolic time series analysis and have already been introduced as robust markers separating wakefulness from unconsciousness in EEG during general anesthesia.
While PE quantifies the regularity structure of the neighboring order of signal values (a surrogate of cortical information processing), STE reflects information transfer between two signals (a surrogate of directed connectivity in cortical networks). fMRI analysis was carried out using SPM12 (Wellcome Trust Centre for Neuroimaging, University College London, UK). Functional images were realigned, segmented, normalized and smoothed. PET was acquired for 45 minutes in list mode. For absolute quantification of the brain's glucose consumption rate in FDG-PET, kinetic modelling was performed with Patlak's plot method. BOLD signal intensity in fMRI and glucose uptake in PET were calculated in 8 distinct cortical areas. PCA was performed over all markers from EEG/fMRI/PET. Prognosis (persistent VS and deceased patients vs. recovery to MCS/awake from date I to date II) was evaluated using the area under the curve (AUC) including bootstrap confidence intervals (CI, *: p<0.05). Results: Prognosis was reliably indicated by the first component of the PCA (AUC=0.99*, CI=0.92-1.00), showing a higher AUC than the best single markers (EEG: AUC<0.96*, fMRI: AUC<0.86*, PET: AUC<0.60). The CRS-R did not show predictive value (AUC=0.51, CI=0.29-0.78). Conclusion: In a multimodal analysis of EEG/fMRI/PET in coma patients, PCA led to a reliable prognosis. The impact of this result is evident, as clinical estimates of prognosis are at times inadequate and could be supported by quantitative biomarkers from EEG, fMRI and PET. Due to the small sample size, further investigations are required, in particular allowing supervised learning instead of the basic approach of unsupervised PCA.
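The PCA-plus-AUC pipeline described above can be sketched generically. The example below uses synthetic marker data and invented labels (not the patient data) to show how a first-principal-component score is derived from a standardized marker matrix and then scored with a Mann-Whitney AUC.

```python
import numpy as np

rng = np.random.default_rng(0)
outcome = np.repeat([True, False], 10)            # hypothetical recovery labels
X = rng.normal(size=(20, 8))                      # synthetic EEG/fMRI/PET markers
X[outcome, 0] += 1.5                              # pretend marker 0 separates groups

# z-score the markers, then PCA via SVD; Vt[0] holds the PC1 loadings
Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
U, S, Vt = np.linalg.svd(Xz, full_matrices=False)
pc1 = Xz @ Vt[0]                                  # first principal component score

def auc(scores, labels):
    """Mann-Whitney AUC of scores against binary labels."""
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

a = max(auc(pc1, outcome), 1 - auc(pc1, outcome))  # PC1 sign is arbitrary
```

The bootstrap confidence intervals reported in the abstract would be obtained by resampling patients and recomputing this AUC.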

Keywords: coma states and prognosis, electroencephalogram, entropy, functional magnetic resonance imaging, machine learning, positron emission tomography, principal component analysis

Procedia PDF Downloads 339
188 Stakeholder Mapping and Requirements Identification for Improving Traceability in the Halal Food Supply Chain

Authors: Laila A. H. F. Dashti, Tom Jackson, Andrew West, Lisa Jackson

Abstract:

Traceability systems are important in the agri-food and halal food sectors for monitoring ingredient movements, tracking sources, and ensuring food integrity. However, designing a traceability system for the halal food supply chain is challenging due to diverse stakeholder requirements and the complexity of their needs (including varying food ingredients, different sources, destinations, supplier processes, certifications, etc.). Achieving a halal food traceability solution tailored to stakeholders' requirements within the supply chain necessitates prior knowledge of those needs. Although attempts have been made to address design-related issues in traceability systems, literature on stakeholder mapping and the identification of requirements specific to halal food supply chains is scarce. To address this gap, a pilot study was conducted to identify the objectives, requirements, and recommendations of stakeholders in the Kuwaiti halal food industry.
The paper presents insights gained from the pilot study, which utilized semi-structured interviews to collect data from a Kuwait-based international halal food manufacturer. The objective was to gain an in-depth understanding of stakeholders' objectives, requirements, processes, and concerns pertaining to the design of a traceability system in Kuwait's halal food sector. The stakeholder mapping results revealed that government entities, food manufacturers, retailers, and suppliers are key stakeholders in Kuwait's halal food supply chain. Lessons learned from this pilot study regarding requirement capture for traceability systems include the need to streamline communication, focus on communication at each level of the supply chain, leverage innovative technologies to enhance process structuring and operations, and reduce halal certification costs. The findings also emphasized the limitations of existing traceability solutions, such as limited cooperation and collaboration among stakeholders, high costs of implementing traceability systems without government support, lack of clarity regarding product routes, and disrupted communication channels between stakeholders. These findings contribute to a broader research program aimed at developing a stakeholder requirements framework that utilizes "business process modelling" to establish a unified model for traceable stakeholder requirements.

Keywords: supply chain, traceability system, halal food, stakeholders’ requirements

Procedia PDF Downloads 112
187 Interactions between Sodium Aerosols and Fission Products: A Theoretical Chemistry and Experimental Approach

Authors: Ankita Jadon, Sidi Souvi, Nathalie Girault, Denis Petitprez

Abstract:

Safety requirements for Generation IV nuclear reactor designs, especially the new-generation sodium-cooled fast reactors (SFRs), require a risk-informed approach to model severe accidents (SAs) and their consequences in case of an outside release. In SFRs, aerosols are produced during a core disruptive accident when primary-system sodium is ejected into the containment and burns in contact with the air, producing sodium aerosols. One of the key aspects of safety evaluation is the in-containment behavior of sodium aerosols and their interaction with fission products. The study of the effects of sodium fires is essential for safety evaluation, as the fire can both thermally damage the containment vessel and cause an overpressurization risk. Besides, during the fire, airborne fission products first dissolved in the primary sodium can be aerosolized or, as can be the case for some fission products, released in gaseous form. The objective of this work is to study the interactions between sodium aerosols and fission products (iodine, being toxic and volatile, is the primary concern). Sodium fires resulting from an SA would produce aerosols consisting of sodium peroxides, hydroxides, carbonates, and bicarbonates. In addition to being toxic (in oxide form), this aerosol will then become radioactive. If such aerosols leak into the environment, they can pose a danger to the ecosystem. Depending on the chemical affinity of these chemical forms for fission products, the radiological consequences of an SA leading to loss of containment leak-tightness will also be affected. This work is split into two phases. Firstly, a method is proposed to theoretically understand the kinetics and thermodynamics of the heterogeneous reactions between sodium aerosols and the fission products I2 and HI.
Ab initio density functional theory (DFT) calculations using the Vienna Ab initio Simulation Package (VASP) are carried out to develop an understanding of the surfaces of sodium carbonate (Na2CO3) aerosols and hence provide insight into their affinity towards iodine species. A comprehensive study of I2 and HI adsorption, as well as bicarbonate formation, on the calculated lowest-energy surface of Na2CO3 was performed, which provided adsorption energies and a description of the optimized configuration of the adsorbate on the stable surface. Secondly, the heterogeneous reaction between (I2)g and Na2CO3 aerosols was investigated experimentally. To study this, (I2)g was generated by heating a permeation tube containing solid I2 and passing the gas through a reaction chamber containing a Na2CO3 aerosol deposit. The concentration of iodine was then measured at the exit of the reaction chamber. Preliminary observations indicate that there is an effective uptake of (I2)g on the Na2CO3 surface, as suggested by our theoretical chemistry calculations. This work is a first step in addressing the gaps in knowledge of the in-containment and atmospheric source terms, which are essential aspects of the safety evaluation of SFR SAs. In particular, this study aims to determine and characterize the radiological and chemical source term. These results will then provide useful insights for the development of new models to be implemented in integrated computer simulation tools to analyze and evaluate SFR safety designs.
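The adsorption energies mentioned above follow the standard DFT convention of differencing total energies. A minimal sketch with hypothetical total energies (not the study's VASP results):

```python
def adsorption_energy(e_total, e_surface, e_adsorbate):
    """E_ads = E(surface+adsorbate) - E(surface) - E(adsorbate); negative = favourable binding."""
    return e_total - e_surface - e_adsorbate

# hypothetical DFT total energies in eV, for illustration only
e_ads = adsorption_energy(-250.75, -248.10, -1.85)   # -0.80 eV, i.e. exothermic adsorption
```

A more negative E_ads for I2 on the Na2CO3 surface would indicate a stronger affinity, consistent with the effective uptake observed experimentally.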

Keywords: iodine adsorption, sodium aerosols, sodium cooled reactor, DFT calculations, sodium carbonate

Procedia PDF Downloads 215
186 Resilience-Based Emergency Bridge Inspection Routing and Repair Scheduling under Uncertainty

Authors: Zhenyu Zhang, Hsi-Hsien Wei

Abstract:

Highway network systems play a vital role in disaster response for disaster-damaged areas. Damaged bridges in such networks can impede disaster response by disrupting the transportation of rescue teams or humanitarian supplies. Therefore, emergency inspection and repair of bridges, to quickly collect damage information and restore the functionality of highway networks, is of paramount importance to disaster response. A widely used measure of a network's capability to recover from disasters is resilience. To enhance highway network resilience, numerous studies have developed repair scheduling methods for the prioritization of bridge-repair tasks. These methods assume that repair activities are performed after the damage to a highway network is fully understood via inspection, although inspecting all bridges in a regional highway network may take days, leading to significant delays in repairing bridges. In reality, emergency repair activities can commence as soon as damage data for the bridges that are crucial to emergency response are obtained. Given that emergency bridge inspection and repair (EBIR) activities are executed simultaneously in the response phase, real-time interactions between these activities can occur – the blockage of highways due to repair activities can affect inspection routes, which in turn have an impact on emergency repair scheduling by providing real-time information on bridge damage. However, the impact of such interactions on the optimal emergency inspection routes (EIR) and emergency repair schedules (ERS) has not been discussed in prior studies. To overcome these deficiencies, this study develops a routing and scheduling model for EBIR that accounts for real-time inspection-repair interactions to maximize highway network resilience.
A stochastic, time-dependent integer program is proposed for the complex, real-time interacting EBIR problem, given multiple inspection and repair teams at locations set post-disaster. A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process is developed. Computational tests are performed using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that the simultaneous implementation of bridge inspection and repair activities can significantly improve highway network resilience. Moreover, the deployment of inspection and repair teams should be matched to each other, and network resilience will not improve once a unilateral increase in inspection teams or repair teams exceeds a certain level. This study contributes to both knowledge and practice. First, the developed mathematical model makes it possible to capture the impact of real-time inspection-repair interactions on inspection routing and repair scheduling and to efficiently derive optimal EIR and ERS on a large and complex highway network. Moreover, this study contributes to the organizational dimension of highway network resilience by providing optimal strategies for highway bridge management. With the decision support tool, disaster managers are able to identify the most critical bridges for disaster management and make decisions on proper inspection and repair strategies to improve highway network resilience.
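The hybrid idea (seeding a genetic algorithm with a heuristic solution to accelerate evolution) can be sketched on a toy repair-sequencing problem. The bridge data, the resilience surrogate, and the operators below are all invented for illustration and are far simpler than the paper's stochastic, time-dependent program.

```python
import random

# toy data: (bridge_id, repair_time, criticality) — hypothetical, not the Wenchuan network
bridges = [(0, 5, 9), (1, 2, 4), (2, 8, 10), (3, 3, 6), (4, 6, 7)]

def resilience(order):
    """Toy surrogate: critical bridges restored early contribute more."""
    t, score = 0, 0.0
    for i in order:
        t += bridges[i][1]
        score += bridges[i][2] / t
    return score

def heuristic_seed():
    """Greedy seed (the 'hybrid' part): highest criticality per repair hour first."""
    return sorted(range(len(bridges)), key=lambda i: -bridges[i][2] / bridges[i][1])

def ga(pop_size=30, gens=100, seed=1):
    random.seed(seed)
    n = len(bridges)
    pop = [heuristic_seed()] + [random.sample(range(n), n) for _ in range(pop_size - 1)]
    for _ in range(gens):
        pop.sort(key=resilience, reverse=True)
        elite = pop[:pop_size // 2]                   # keep the best schedules
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n)              # order crossover
            child = a[:cut] + [g for g in b if g not in a[:cut]]
            if random.random() < 0.2:                 # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
    return max(pop, key=resilience)

best = ga()
```

Because the elite set always retains the best schedule found so far, the GA's result can never be worse than the heuristic seed, which is the point of hybridizing.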

Keywords: disaster management, emergency bridge inspection and repair, highway network, resilience, uncertainty

Procedia PDF Downloads 109
185 Teaching Linguistic Humour Research Theories: Egyptian Higher Education EFL Literature Classes

Authors: O. F. Elkommos

Abstract:

“Humour studies” is a relatively recent interdisciplinary research area. It interests researchers from the disciplines of psychology, sociology, medicine, nursing, workplace studies, and gender studies, among others, and certainly teaching, language learning, linguistics, and literature. Linguistic theories of humour research are numerous, some of which are of interest to the present study. Although humour courses are now taught in universities around the world, in the Egyptian context they are not included. The purpose of the present study is two-fold: to review the state of the art and to show how linguistic theories of humour can be used as an art and craft of teaching and of learning in EFL literature classes. In the present study, linguistic theories of humour were applied to selected literary texts to interpret humour as an intrinsic artistic communicative competence challenge. Humour in the area of linguistics was seen as a fifth component of the communicative competence of the second language learner. In literature, it was studied as satire, irony, wit, or comedy. Linguistic theories of humour now describe its linguistic structure, mechanism, function, and linguistic deviance. The Semantic Script Theory of Verbal Humor (SSTH), the General Theory of Verbal Humor (GTVH), the Audience-Based Theory of Humor (ABTH), and their extensions and subcategories, as well as the pragmatic perspective, were employed in the analyses. This research analysed the linguistic semantic structure of humour, its mechanism, and how the audience reader (teacher or learner) becomes an interactive interpreter of the humour. This promotes humour competence together with linguistic, social, cultural, and discourse communicative competence. Studying humour as part of literary texts, and perceiving its function in the work, also brings its positive associations into class for educational purposes. Humour is by default a provoking, laughter-generating device.
Recognizing, perceiving and resolving incongruity is a cognitive mastery. This cognitive process involves a humour experience that lightens up the classroom and the mind, and it establishes connections necessary for the learning process. In this context, the study examined selected narratives to exemplify the application of the theories. It is, therefore, recommended that the theories be taught and applied to literary texts for a better understanding of the language. Students will then develop their language competence. Teachers in EFL/ESL classes will teach the theories, assist students in applying them to interpret texts, and in the process will also use humour, thus easing students' acquisition of the second language and making the classroom an enjoyable, cheerful, self-assuring, and self-illuminating experience for both themselves and their students. It is further recommended that courses in humour research studies become an integral part of higher education curricula in Egypt.

Keywords: ABTH, deviance, disjuncture, episodic, GTVH, humour competence, humour comprehension, humour in the classroom, humour in the literary texts, humour research linguistic theories, incongruity-resolution, isotopy-disjunction, jab line, longer text joke, narrative story line (macro-micro), punch line, six knowledge resources, SSTH, stacks, strands, teaching linguistics, teaching literature, TEFL, TESL

Procedia PDF Downloads 302
184 Transparency of Algorithmic Decision-Making: Limits Posed by Intellectual Property Rights

Authors: Olga Kokoulina

Abstract:

Today, algorithms are assuming a leading role in various areas of decision-making. Prompted by a promise of increased economic efficiency and of fuelling solutions to pressing societal challenges, algorithmic decision-making is often celebrated as an impartial and constructive substitute for human adjudication. But in the face of this implied objectivity and efficiency, the application of algorithms is also marred by mounting concerns about embedded biases, discrimination, and exclusion. In Europe, vigorous debates on the risks and adverse implications of algorithmic decision-making largely revolve around the potential of data protection laws to tackle some of the related issues. For example, one of the often-cited avenues for mitigating the impact of potentially unfair decision-making practices is the so-called 'right to explanation'. In essence, this right is derived from the provisions of the General Data Protection Regulation ('GDPR') ensuring the right of data subjects to access, and mandating the obligation of data controllers to provide, relevant information about the existence of automated decision-making and meaningful information about the logic involved. Taking the corresponding rights and obligations in the context of the GDPR's specific provision on automated decision-making, the debates mainly focus on the efficacy and exact scope of the 'right to explanation'. The underlying logic of the argued remedy lies in a transparency imperative: allowing data subjects to acquire as much knowledge as possible about the decision-making process means empowering individuals to take control of their data and take action. In other words, forewarned is forearmed. The related discussions and debates are ongoing, comprehensive, and often heated. However, they are also frequently misguided and isolated: embracing data protection law as the ultimate and sole lens is often not sufficient.
Mandating the disclosure of the technical specifications of employed algorithms in the name of transparency for, and empowerment of, data subjects potentially encroaches on the interests and rights of IPR holders, i.e., the business entities behind the algorithms. The study aims to push the boundaries of the transparency debate beyond the data protection regime. By systematically analysing legal requirements and current judicial practice, it assesses the limits posed on the transparency requirement and the right to access by intellectual property law, namely by copyright and trade secrets. It is asserted that trade secrets, in particular, present an often-insurmountable obstacle to realising the potential of the transparency requirement. In reaching that conclusion, the study explores the limits of the protection afforded by the European Trade Secrets Directive and contrasts them with the scope of the respective rights and obligations related to data access and portability enshrined in the GDPR. As shown, the far-reaching scope of protection under trade secrecy is evidenced both through the assessment of its subject matter and through the exceptions to such protection. As a way forward, the study scrutinises several possible legislative solutions, such as a flexible interpretation of the public interest exception in trade secrets law, as well as the introduction of a strict liability regime in cases of non-transparent decision-making.

Keywords: algorithms, public interest, trade secrets, transparency

Procedia PDF Downloads 124
183 On the Possibility of Real Time Characterisation of Ambient Toxicity Using Multi-Wavelength Photoacoustic Instrument

Authors: Tibor Ajtai, Máté Pintér, Noémi Utry, Gergely Kiss-Albert, Andrea Palágyi, László Manczinger, Csaba Vágvölgyi, Gábor Szabó, Zoltán Bozóki

Abstract:

To the best of the authors' knowledge, we here demonstrate experimentally, for the first time, a quantified correlation between the real-time measured optical features of ambient aerosol and off-line measured toxicity data. Using these correlations, we present a novel methodology for the real-time characterisation of ambient toxicity based on multi-wavelength aerosol-phase photoacoustic measurement. Ambient carbonaceous particulate matter is one of the most intensively studied atmospheric constituents in climate science today. Beyond its climatic impact, atmospheric soot also plays an important role as an air pollutant that harms human health. Moreover, according to the latest scientific assessments, ambient soot is the second most important anthropogenic emission source, while in health terms it is one of the most harmful atmospheric constituents as well. Despite its importance, a generally accepted standard methodology for the quantitative determination of ambient toxicity is not yet available. Ambient toxicity measurement is dominantly based on the posterior analysis of filter-accumulated aerosol with limited time resolution. Most toxicological studies are based on operational definitions using different measurement protocols; therefore, comprehensive analysis of the existing data set is severely limited in many cases. The situation is further complicated by the fact that, even during its relatively short residence time, the physicochemical features of the aerosol can be masked significantly by the actual ambient factors. Therefore, improving the time resolution of the existing methodology and developing real-time methodology for air quality monitoring are pressing issues in air pollution research.
During the last decades, many experimental studies have verified a relation between the chemical composition of carbonaceous particulate matter and its absorption features, quantified by the Absorption Angström Exponent (AAE). Although the scientific community broadly agrees that PhotoAcoustic Spectroscopy (PAS) is so far the only methodology that can measure light absorption by aerosol in an accurate and reliable way, multi-wavelength PAS instruments able to selectively characterise the wavelength dependency of absorption have become available only in the last decade. In this study, the first results of an intensive measurement campaign focusing on the physicochemical and toxicological characterisation of ambient particulate matter are presented. We demonstrate the complete microphysical characterisation of wintertime urban ambient aerosol, including optical absorption and scattering as well as size distribution, using our recently developed state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS), an integrating nephelometer (Aurora 3000), and a single mobility particle sizer with optical particle counter (SMPS+C). Beyond this on-line characterisation of the ambient aerosol, we also demonstrate the results of eco-, cyto- and genotoxicity measurements based on the posterior analysis of filter-accumulated aerosol with 6 h time resolution. We demonstrate a diurnal variation of toxicities and of AAE data deduced directly from the multi-wavelength absorption measurement results.
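The AAE mentioned above can be deduced from multi-wavelength absorption data as the negative slope of ln(b_abs) versus ln(λ). A minimal sketch of that calculation follows; the wavelengths and absorption values are illustrative placeholders, not campaign data, and the instrument's actual operating wavelengths may differ:

```python
import math

def absorption_angstrom_exponent(wavelengths_nm, b_abs):
    """AAE = negative least-squares slope of ln(b_abs) vs ln(wavelength)."""
    lx = [math.log(w) for w in wavelengths_nm]
    ly = [math.log(b) for b in b_abs]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
             / sum((x - mx) ** 2 for x in lx))
    return -slope

# Hypothetical four-wavelength absorption spectrum with b_abs ~ 1/wavelength,
# which by definition corresponds to AAE = 1 (typical of pure black carbon).
wl = [266.0, 355.0, 532.0, 1064.0]
b_abs = [10.0 * (1064.0 / w) for w in wl]
print(absorption_angstrom_exponent(wl, b_abs))  # -> 1.0
```

Chemical composition then enters through the departure of the fitted AAE from unity, which is the correlation the abstract exploits.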

Keywords: photoacoustic spectroscopy, absorption Angström exponent, toxicity, Ames-test

Procedia PDF Downloads 302
182 Implementing Urban Rainwater Harvesting Systems: Between Policy and Practice

Authors: Natàlia Garcia Soler, Timothy Moss

Abstract:

Despite the multiple benefits of sustainable urban drainage, as demonstrated in numerous case studies across the world, urban rainwater harvesting techniques are generally restricted to isolated model projects. The leap from niche to mainstream has, in most cities, proved an elusive goal. Why policies promoting rainwater harvesting are limited in their widespread implementation has seldom been subjected to systematic analysis. Much of the literature on the policy, planning and institutional contexts of these techniques focuses either on their potential benefits or on project design, but very rarely on a critical-constructive analysis of past experiences of implementation. Moreover, the vast majority of these contributions are restricted to single-case studies. There is a dearth of knowledge with respect to, firstly, policy implementation processes and, secondly, multi-case analysis. Insights from both, the authors argue, are essential to inform more effective rainwater harvesting in cities in the future. This paper presents preliminary findings from a research project, funded by the Swedish Research Foundation (Formas), that examines rainwater harvesting in cities from a social science perspective. This project – UrbanRain – is examining the challenges and opportunities of mainstreaming rainwater harvesting in three European cities. The paper addresses two research questions: firstly, what lessons can be learned on suitable policy incentives and planning instruments for rainwater harvesting from a meta-analysis of the relevant international literature and, secondly, how far these lessons are reflected in a study of past and ongoing rainwater harvesting projects in a European forerunner city. This two-tier approach frames the structure of the paper. We present, first, the results of the literature analysis on policy and planning issues of urban rainwater harvesting.
Here, we analyze quantitatively and qualitatively the literature of the past 15 years on this topic in terms of thematic focus, issues addressed and key findings, and draw conclusions on research gaps, highlighting the need for more studies on implementation factors, actor interests, institutional adaptation and multi-level governance. In a second step we focus on the experiences of rainwater harvesting in Berlin and present the results of a mapping exercise covering a wide variety of projects implemented there over the last 30 years. Here, we develop a typology to characterize the rainwater harvesting projects in terms of policy issues (what problems and goals are targeted), project design (which kinds of solutions are envisaged), project implementation (how and when they were implemented), location (whether they are in new or existing urban developments) and actors (which stakeholders are involved and how), paying particular attention to the shifting institutional framework in Berlin. Mapping and categorizing these projects is based on a combination of document analysis and expert interviews. The paper concludes by synthesizing the findings, identifying how far the goals, governance structures and instruments applied in the Berlin projects studied reflect the findings emerging from the meta-analysis of the international literature on policy and planning issues of rainwater harvesting, and what implications these findings have for mainstreaming such techniques in future practice.

Keywords: institutional framework, planning, policy, project implementation, urban rainwater management

Procedia PDF Downloads 287
181 Artificial Intelligence for Traffic Signal Control and Data Collection

Authors: Reggie Chandra

Abstract:

Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized, largely because agencies lack the resources to create and implement timing plans. In this work, we discuss the use of breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and to collect accurate traffic data 24/7/365 using a vehicle detection system. We discuss recent advances in AI technology, how AI works in vehicle, pedestrian, and bike data collection and in creating timing plans, and what the best workflow for this is. This paper also showcases how Artificial Intelligence makes signal timing affordable. We introduce a technology that uses Convolutional Neural Networks (CNN) and deep learning algorithms to detect vehicles, collect data, develop timing plans, and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes in the visual cortex. A neural net is modeled after the human brain: it consists of millions of densely connected processing nodes, and it is a form of machine learning in which the neural net learns to recognize vehicles through training, which is called deep learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans, and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but, in tasks such as classifying objects into fine-grained categories, can also outperform humans. Safety is of primary importance to traffic professionals, but they often lack the studies or data to support their decisions. Currently, one-third of transportation agencies do not collect pedestrian and bike data.
We discuss how the use of Artificial Intelligence for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, snapshots of limited handpicked data, and multiple systems that require additional adaptation work. The methodologies used and proposed in this research include a camera model identification method based on deep Convolutional Neural Networks. The proposed application was evaluated on our data sets, acquired under a variety of daily real-world road conditions, and compared with the performance of commonly used methods, which require collecting data by counting, evaluating and adapting it, running it through well-established algorithms, and then deploying the result to the field. This work explores how technologies powered by Artificial Intelligence can benefit a community and how to translate their complex and often overwhelming benefits into language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that Artificial Intelligence brings to traffic signal control and data collection are unsurpassed.
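The convolution, ReLU and pooling stages that a CNN-based detector stacks can be illustrated with a minimal, dependency-free sketch. This is a toy single-channel layer with a hand-picked edge-detecting kernel, shown only to make the mechanism concrete; a real vehicle detector learns its kernels from training data and stacks many such layers:

```python
def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image and
    sum the elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def relu(fmap):
    """Nonlinearity: clamp negative activations to zero."""
    return [[max(0.0, v) for v in row] for row in fmap]

def max_pool2(fmap):
    """2x2 max pooling with stride 2: keep the strongest local response."""
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# A bright/dark boundary (e.g. a vehicle edge against the road surface)
# lights up a vertical-edge kernel exactly at the boundary column.
image = [[1, 1, 0, 0]] * 4
vertical_edge = [[1, -1], [1, -1]]
features = relu(conv2d(image, vertical_edge))
print(max_pool2(features))  # -> [[2]]
```

Stacking many learned kernels of this kind, deeper layers respond to increasingly vehicle-like patterns, which is the "training" the abstract refers to.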

Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal

Procedia PDF Downloads 169
180 Qualitative Research on German Household Practices to Ease the Risk of Poverty

Authors: Marie Boost

Abstract:

Despite activation policies that force personal initiative to step out of unemployment, and despite a generally prosperous economic situation, poverty and financial hardship play a crucial role in the daily lives of many families in Germany. In 2015, approximately 16 million persons (20.2% of the German population) were at risk of poverty or social exclusion. This is illustrated by an unemployment rate of 13.3% in the research area, located in East Germany. Despite the high number of people living in vulnerable households, we know little about how they manage to stabilize their lives or even overcome poverty, apart from solely relying on welfare state benefits or entering a stable, well-paid job. Most of them struggle in precarious living circumstances, switching between one or several short-term, low-paid jobs, self-employment and unemployment, sometimes accompanied by welfare state benefits. Hence, insecurity and uncertain future expectations form a crucial part of their lives. Within the EU-funded project “RESCuE”, resilient practices of vulnerable households were investigated in nine European countries. Approximately 15 expert interviews with policy makers and representatives of welfare state agencies, NGOs and charity organizations, as well as 25 household interviews, were conducted in each country. The project aims to find out more about the chances and conditions of social resilience. The research is based on the triangulation of biographical narrative interviews followed by participatory photo interviews, asking the household members to portray their typical everyday life. The presentation focuses on the explanatory strength of this mixed-methods approach in order to show the potential of household practices to overcome financial hardship. The methodological combination allows an in-depth analysis of the families' and households' everyday living circumstances, including their poverty and employment situation, whether formal or informal.
Active household budgeting practices, such as saving and consumption practices, are based on subsistence or do-it-yourself work. Especially in the photo interviews, the importance of inherent cultural and tacit knowledge becomes obvious, as the pictures show typical practices such as cultivating and gathering fruits and vegetables or going fishing. One of the central findings is the multiple purposes of these practices: they help ease financial burdens through reduced consumption, and they strengthen social ties, as they are mostly conducted with close friends or family members. In general, non-commodified practices are found to be re-commodified and to contribute to easing financial hardship, e.g. through the use of commons, barter trade or simple mutual (gift) exchange. Such practices can substitute for external purchases and reduce expenses, or even generate a small income. Mixing different income sources is found to be the most likely way out of poverty within the context of a precarious labor market. But these resilient household practices take their toll, as they are highly preconditioned, and many persons put themselves at risk of overstressing themselves. The presentation therefore reflects on both the potentials and the risks of resilient household practices.

Keywords: consumption practices, labor market, qualitative research, resilience

Procedia PDF Downloads 221