Search results for: data-based tourist account
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3104

164 Hybrid Renewable Energy Systems for Electricity and Hydrogen Production in an Urban Environment

Authors: Same Noel Ngando, Yakub Abdulfatai Olatunji

Abstract:

Renewable energy micro-grids, such as those powered by solar or wind energy, are often intermittent in nature. This means that the amount of energy generated by these systems can vary depending on weather conditions or other factors, which can make it difficult to ensure a steady supply of power. To address this issue, energy storage systems have been developed to increase the reliability of renewable energy micro-grids. Battery systems have been the dominant energy storage technology for renewable energy micro-grids. Batteries can store large amounts of energy in a relatively small and compact package, making them easy to install and maintain in a micro-grid setting. Additionally, batteries can be quickly charged and discharged, allowing them to respond quickly to changes in energy demand. However, the process involved in recycling batteries is quite costly and difficult. An alternative energy storage system that is gaining popularity is hydrogen storage. Hydrogen is a versatile energy carrier that can be produced from renewable energy sources such as solar or wind. It can be stored in large quantities at low cost, making it suitable for long-distance mass storage. Unlike batteries, hydrogen does not degrade over time, so it can be stored for extended periods without the need for frequent maintenance or replacement, allowing it to be used as a backup power source when the micro-grid is not generating enough energy to meet demand. When hydrogen is needed, it can be converted back into electricity through a fuel cell. Energy consumption data were obtained from a residential area in Daegu, South Korea, and processed and analyzed. From the analysis, the total energy demand was calculated, and different hybrid energy system configurations were designed using HOMER Pro (Hybrid Optimization of Multiple Energy Resources) and MATLAB software. A techno-economic and environmental comparison and life cycle assessment (LCA) of the different configurations using battery and hydrogen as storage systems were carried out. The various scenarios included PV-hydrogen-grid, PV-hydrogen-grid-wind, PV-hydrogen-grid-biomass, PV-hydrogen-wind, PV-hydrogen-biomass, biomass-hydrogen, wind-hydrogen, PV-battery-grid-wind, PV-battery-grid-biomass, PV-battery-wind, PV-battery-biomass, and biomass-battery systems. From the analysis, the least-cost system for the location was the PV-hydrogen-grid system, with a net present cost of about USD 9,529,161. Even though all scenarios were environmentally friendly, once the recycling cost and pollution involved in battery systems are taken into account, all systems with hydrogen as the storage medium produced better results. In conclusion, hydrogen is becoming a very prominent energy storage solution for renewable energy micro-grids. It is easier to store than electric power, so it is suitable for long-distance mass storage. Hydrogen storage systems have several advantages over battery systems, including flexibility, long-term stability, and low environmental impact. The cost of hydrogen storage is still relatively high, but it is expected to decrease as more hydrogen production and storage infrastructure is built. With the growing focus on renewable energy and the need to reduce greenhouse gas emissions, hydrogen is expected to play an increasingly important role in the energy storage landscape.
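
For illustration, the core net-present-cost (NPC) comparison that HOMER-style tools perform can be sketched as follows; the component costs, discount rate, and lifetime are invented placeholders, not values from the Daegu case study.

```python
# Illustrative net present cost (NPC) comparison for two storage options.
# All cost figures, the discount rate, and the lifetime are hypothetical
# placeholders, not data from the study.

def crf(i: float, n: int) -> float:
    """Capital recovery factor for discount rate i over n years."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def net_present_cost(capital: float, annual_om: float, annual_grid: float,
                     i: float = 0.06, lifetime: int = 25) -> float:
    """NPC = capital cost + present value of recurring annual costs."""
    return capital + (annual_om + annual_grid) / crf(i, lifetime)

scenarios = {
    "PV-hydrogen-grid": net_present_cost(6.0e6, 1.5e5, 1.2e5),
    "PV-battery-grid-wind": net_present_cost(7.5e6, 2.0e5, 1.0e5),
}
for name, npc in sorted(scenarios.items(), key=lambda kv: kv[1]):
    print(f"{name}: NPC ≈ USD {npc:,.0f}")
```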

Keywords: renewable energy systems, microgrid, hydrogen production, energy storage systems

Procedia PDF Downloads 65
163 Evaluation of Redundancy Architectures Based on System on Chip Internal Interfaces for Future Unmanned Aerial Vehicles Flight Control Computer

Authors: Sebastian Hiergeist

Abstract:

It is a common view that Unmanned Aerial Vehicles (UAV) tend to migrate into the civil airspace. This trend challenges UAV manufacturers in many ways, as a lot of new requirements and functional aspects arise. On the higher application levels, this might be collision detection and avoidance and similar features, whereas all these functions only act as input for the flight control components of the aircraft. The flight control computer (FCC) is the central component when it comes to ensuring continued safe flight and landing. As these systems are flight-critical, they have to be built up redundantly to be able to provide fail-operational behavior. Recent architectural approaches of FCCs used in UAV systems are often based on very simple microprocessors in combination with proprietary Application-Specific Integrated Circuit (ASIC) or Field Programmable Gate Array (FPGA) extensions implementing the whole redundancy functionality. In the future, such simple microprocessors may not be available anymore, as they are more and more replaced by more sophisticated Systems on Chip (SoC). As the avionics industry cannot provide enough market power to significantly influence the development of new semiconductor products, the use of solutions from foreign markets is almost inevitable. Products stemming from the industrial market, developed according to IEC 61508, or automotive SoCs, developed according to ISO 26262, can be seen as candidates, as they have been developed for similar environments. Currently available SoCs from the industrial or automotive sector provide quite a broad selection of interfaces, e.g., Ethernet, SPI, or FlexRay, that might come into consideration for the implementation of a redundancy network. In this context, possible network architectures shall be investigated which could be established by using the interfaces stated above. Of importance here is the avoidance of any single point of failure, as well as a proper segregation into distinct fault containment regions. The performed analysis is supported by the use of guidelines, published by the aviation authorities (FAA and EASA), on the reliability of data networks. The main focus clearly lies on the reachable level of safety, but other aspects like performance and determinism also play an important role and are considered in the research. Due to the further increase in design complexity of recent and future SoCs, the risk of design errors, which might lead to common mode faults, also increases. Thus, in the context of this work, the aspect of dissimilarity will also be considered to limit the effect of design errors. To achieve this, the work is limited to broadly available interfaces found in products from the most common silicon manufacturers. The resulting work shall support the design of future UAV FCCs by giving a guideline on building up a redundancy network between SoCs, solely using on-board interfaces. Therefore, the author will provide a detailed usability analysis of available interfaces provided by recent SoC solutions, suggestions for possible redundancy architectures based on these interfaces, and an assessment of the most relevant characteristics of the suggested network architectures, e.g., safety or performance.

Keywords: redundancy, System-on-Chip, UAV, flight control computer (FCC)

Procedia PDF Downloads 190
162 Multi-Criteria Decision Making Network Optimization for Green Supply Chains

Authors: Bandar A. Alkhayyal

Abstract:

Modern supply chains are typically linear, transforming virgin raw materials into products for end consumers, who then discard them after use to landfills or incinerators. Nowadays, there are major efforts underway to create a circular economy to reduce non-renewable resource use and waste. One important aspect of these efforts is the development of Green Supply Chain (GSC) systems, which enable a reverse flow of used products from consumers back to manufacturers, where they can be refurbished or remanufactured, to both economic and environmental benefit. This paper develops novel multi-objective optimization models to inform GSC system design at multiple levels: (1) strategic planning of facility location and transportation logistics; (2) tactical planning of optimal pricing; and (3) policy planning to account for potential valuation of GSC emissions. First, physical linear programming was applied to evaluate GSC facility placement by determining the quantities of end-of-life products for transport from candidate collection centers to remanufacturing facilities while satisfying cost and capacity criteria. Second, disassembly and remanufacturing processes have received little attention in the industrial engineering and process cost modeling literature. The increasing scale of remanufacturing operations, worth nearly $50 billion annually in the United States alone, has made GSC pricing an important subject of research. A non-linear physical programming model for optimization of the pricing policy for remanufactured products, which maximizes total profit and minimizes product recovery costs, was examined and solved. Finally, a deterministic equilibrium model was used to determine the effects of internalizing a cost of GSC greenhouse gas (GHG) emissions into the optimization models. Changes in optimal facility use, transportation logistics, and pricing/profit margins were all investigated against a variable cost of carbon, using a case study system created from actual data for sites in the Boston area. As carbon costs increase, the optimal GSC system undergoes several distinct shifts in topology as it seeks new cost-minimal configurations. A comprehensive study of the quantitative evaluation and performance of the model has been done using orthogonal arrays. Results were compared to top-down estimates from economic input-output life cycle assessment (EIO-LCA) models, to contrast remanufacturing GHG emission quantities with those from original equipment manufacturing operations. Introducing a carbon cost of $40/t CO2e increases modeled remanufacturing costs by 2.7% but also increases original equipment costs by 2.3%. The assembled work advances the theoretical modeling of optimal GSC systems and presents a rare case study of remanufactured appliances.
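
For orientation, the strategic-level collection-to-remanufacturing transport problem can be written as a small linear program; the sketch below uses scipy with an invented cost matrix, supplies, and capacities, and is a simplification of the physical linear programming formulation actually used in the study.

```python
# Minimal transportation LP: ship end-of-life products from collection centres
# to remanufacturing facilities at minimum cost, respecting facility capacities.
# All numbers are illustrative placeholders.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],        # unit transport cost, centre i -> facility j
                 [5.0, 3.0, 7.0]])
supply = np.array([120.0, 80.0])         # end-of-life units at each collection centre
capacity = np.array([90.0, 90.0, 60.0])  # processing capacity of each facility

n_c, n_f = cost.shape
c = cost.ravel()                          # decision variables x[i, j], flattened row-wise

A_eq = np.zeros((n_c, n_c * n_f))         # each centre ships out all of its supply
for i in range(n_c):
    A_eq[i, i * n_f:(i + 1) * n_f] = 1.0

A_ub = np.zeros((n_f, n_c * n_f))         # each facility stays within capacity
for j in range(n_f):
    A_ub[j, j::n_f] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=supply, bounds=(0, None))
print(res.x.reshape(n_c, n_f), res.fun)
```

A carbon cost can then be internalized by adding an emissions term (tonne-km × emission factor × carbon price) to each entry of the cost matrix, which is the mechanism behind the topology shifts reported above.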

Keywords: circular economy, extended producer responsibility, greenhouse gas emissions, industrial ecology, low carbon logistics, green supply chains

Procedia PDF Downloads 139
161 Ultrasonic Atomizer for Turbojet Engines

Authors: Aman Johri, Sidhant Sood, Pooja Suresh

Abstract:

This paper suggests a new and more efficient method of atomizing fuel in the combustor nozzle of a high-bypass turbofan engine, using ultrasonic vibrations. Since atomization of the fuel just before the spray is injected into the combustion chamber is a crucial aspect of the functioning of a propulsion system, the technology suggested by this paper and the experimental analysis of the system components assist in complete and rapid combustion of the fuel in the combustor module of the engine. Current propulsion systems use carburetors, atomization nozzles and apertures in air intake pipes for atomization. The idea of this paper is to deploy new-age hybrid technology, namely the Ultrasound Field Effect (UFE), to effectively atomize fuel before it enters the combustion chamber, as a viable and effective method to increase efficiency and improve upon existing designs. The Ultrasound Field Effect is applied axially, on diametrically opposite ends of an atomizer tube that fits over the combustor nozzle, where the fuel enters and exits under a pre-defined pressure. The ultrasound energy vibrates the fuel particles to a breakup frequency. On reaching this frequency, the fuel particles start disintegrating into smaller-diameter particles, perpendicular to the axis of application of the field, from the parent boundary layer of fuel flowing over the baseplate. These broken-up fuel droplets then undergo the swirling effect as per the original nozzle design, with a higher breakup ratio than before. A significant reduction in the size of fuel particles eventually results in an increment in the propulsive efficiency of the engine. Moreover, the ultrasound atomizer operates within a controlled frequency range such that the effects of overheating and induced vibrations are least felt on the overall performance of the engine. The design of an electrical manifold for the multiple-nozzle system over a typical can-annular combustor is developed along with this study, such that the product can be installed and removed easily for maintenance and repair, allows easy access for inspections, and transmits the least amount of vibrational energy to the surface of the combustor. Since near-field ultrasound is used, the vibrations are easily controlled, thereby successfully reducing vibrations on the outer shell of the combustor. Experimental analysis is carried out on the effect of ultrasonic vibrations on flowing jet turbine fuel using an ultrasound generator probe, and an effective decrease in droplet size across a constant diameter, away from the boundary layer of flow, is noted visually by observing under ultraviolet light. The choice of material for the ultrasound inducer tube and crystal, along with the operating range of temperatures, pressures, and frequencies of the Ultrasound Field Effect, are also studied in this paper, while taking into account the losses incurred due to constant vibrations and thermal loads on the tube surface.
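
As a rough orientation on the droplet sizes involved, the Lang correlation relates the median droplet diameter from ultrasonic atomization to the capillary wavelength on the vibrating surface; the choice of correlation and the fluid properties below are assumptions for a kerosene-type jet fuel, not measurements or models taken from this study.

```python
# Lang-type estimate of ultrasonic atomization droplet size:
#   d ≈ 0.34 * (8 * pi * sigma / (rho * f^2)) ** (1/3)
# Property values are rough assumptions for a kerosene-like jet fuel.
import math

sigma = 0.025   # surface tension, N/m (assumed)
rho = 800.0     # liquid density, kg/m^3 (assumed)
f = 50e3        # ultrasonic drive frequency, Hz (assumed)

d = 0.34 * (8 * math.pi * sigma / (rho * f ** 2)) ** (1 / 3)
print(f"estimated median droplet diameter ≈ {d * 1e6:.1f} µm")   # ≈ 23 µm here
```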

Keywords: atomization, ultrasound field effect, titanium mesh, breakup frequency, parent boundary layer, baseplate, propulsive efficiency, jet turbine fuel, induced vibrations

Procedia PDF Downloads 216
160 Navigating States of Emergency: A Preliminary Comparison of Online Public Reaction to COVID-19 and Monkeypox on Twitter

Authors: Antonia Egli, Theo Lynn, Pierangelo Rosati, Gary Sinclair

Abstract:

The World Health Organization (WHO) defines vaccine hesitancy as the postponement or complete denial of vaccines and estimates a direct linkage to approximately 1.5 million avoidable deaths annually. This figure is not immune to public health developments, as has become evident since the global spread of COVID-19 from Wuhan, China, in early 2020. Since then, the proliferation of influential, but oftentimes inaccurate, outdated, incomplete, or false vaccine-related information on social media has impacted hesitancy levels to a degree described by the WHO as an infodemic. The COVID-19 pandemic and related vaccine hesitancy levels resulted in 2022 in the largest drop in childhood vaccinations of the 21st century, while the prevalence of online stigma towards vaccine-hesitant consumers continues to grow. Simultaneously, a second disease has risen to global importance: Monkeypox is an infection originating from west and central Africa and, due to racially motivated online hate, was in August 2022 set to be renamed by the WHO. To better understand public reactions towards two viral infections that became global threats to public health barely two years apart, this research examines user replies to threads published by the WHO on Twitter. Replies to two Tweets from the @WHO account declaring COVID-19 and Monkeypox as ‘public health emergencies of international concern’ on January 30, 2020, and July 23, 2022, are gathered using the Twitter application programming interface and the user mention timeline endpoint. The research methodology is unique in its analysis of stigmatizing, racist, and hateful content shared on social media within the vaccine discourse over the course of two disease outbreaks. Three distinct analyses are conducted to provide insight into (i) the most prevalent topics and sub-topics among user reactions, (ii) changes in sentiment towards the spread of the two diseases, and (iii) the presence of stigma, racism, and online hate. Findings indicate an increase in hesitancy to accept further vaccines and social distancing measures, the presence of stigmatizing content aimed primarily at anti-vaccine cohorts and racially motivated abusive messages, and a prevalent fatigue towards disease-related news overall. This research provides value to non-profit organizations and government agencies associated with vaccines and vaccination programs in emphasizing the need for public health communication fitted to consumers' vaccine sentiments, levels of health information literacy, and degrees of trust towards public health institutions. Considering the importance of addressing fears among the vaccine hesitant, the findings also illustrate the risk of alienation through stigmatization, guide future research in probing the relatively underexamined field of online, vaccine-related stigma, and discuss the potential effects of stigma towards vaccine-hesitant Twitter users on their decisions to vaccinate.
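
A minimal sketch of the reply-collection step is shown below, assuming a Twitter API v2 bearer token and the @WHO user ID (both placeholders) and using VADER as one possible sentiment scorer; the study's own topic modelling and stigma/racism classifiers are not reproduced.

```python
# Sketch: pull tweets from the @WHO mention timeline around an emergency
# declaration via the Twitter API v2, then score sentiment with VADER.
# BEARER_TOKEN and WHO_USER_ID are placeholders; VADER is an assumed choice.
import requests
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

BEARER_TOKEN = "..."          # placeholder
WHO_USER_ID = "<numeric id>"  # placeholder numeric ID of @WHO

def fetch_mentions(start_time: str, end_time: str, max_pages: int = 5) -> list:
    url = f"https://api.twitter.com/2/users/{WHO_USER_ID}/mentions"
    headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
    params = {"start_time": start_time, "end_time": end_time,
              "max_results": 100, "tweet.fields": "created_at,lang"}
    tweets, token = [], None
    for _ in range(max_pages):
        if token:
            params["pagination_token"] = token
        r = requests.get(url, headers=headers, params=params, timeout=30)
        r.raise_for_status()
        payload = r.json()
        tweets.extend(payload.get("data", []))
        token = payload.get("meta", {}).get("next_token")
        if not token:
            break
    return tweets

analyzer = SentimentIntensityAnalyzer()
covid_replies = fetch_mentions("2020-01-30T00:00:00Z", "2020-02-06T00:00:00Z")
scores = [analyzer.polarity_scores(t["text"])["compound"] for t in covid_replies]
print(len(scores), sum(scores) / max(len(scores), 1))
```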

Keywords: social marketing, social media, public health communication, vaccines

Procedia PDF Downloads 78
159 Challenges and Recommendations for Medical Device Tracking and Traceability in Singapore: A Focus on Nursing Practices

Authors: Zhuang Yiwen

Abstract:

The paper examines the challenges facing the Singapore healthcare system related to the tracking and traceability of medical devices. One of the major challenges identified is the lack of a standard coding system for medical devices, which makes it difficult to track them effectively. The paper suggests the use of the Unique Device Identifier (UDI) as a single standard for medical devices to improve tracking and reduce errors. The paper also explores the use of barcoding and image recognition to identify and document medical devices in nursing practices. In nursing practices, the use of barcodes for identifying medical devices is common. However, the information contained in these barcodes is often inconsistent, making it challenging to identify which segment contains the model identifier. Moreover, the use of barcodes may be improved with the use of UDI, but many subsidized accessories may still lack barcodes. The paper suggests that readiness for UDI and barcode standardization requires standardized information, fields, and logic in electronic medical record (EMR), operating theatre (OT), and billing systems, as well as barcode scanners that can read various formats and selectively parse barcode segments. Nursing workflow and data flow also need to be taken into account. The paper also explores the use of image recognition, specifically the Tesseract OCR engine, to identify and document implants in public hospitals due to limitations in barcode scanning. The study found that this solution requires an implant information database and checking of the output against that database. The solution also requires customization of the algorithm, cropping out objects affecting text recognition, and applying adjustments. It requires additional resources and costs for a mobile/hardware device, which may pose space constraints and require maintenance of sterile criteria. Integration with the EMR is also necessary, and the solution requires changes in the user's workflow. The paper further suggests the long-term use of Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) as a supporting terminology to improve clinical documentation and data exchange in healthcare. SNOMED CT provides a standardized way of documenting and sharing clinical information with respect to procedure, patient, and device documentation, which can facilitate interoperability and data exchange. In conclusion, the paper highlights the challenges facing the Singapore healthcare system related to the tracking and traceability of medical devices. The paper suggests the use of UDI and barcode standardization to improve tracking and reduce errors. It also explores the use of image recognition to identify and document medical devices in nursing practices. The paper emphasizes the importance of standardized information, fields, and logic in EMR, OT, and billing systems, as well as barcode scanners that can read various formats and selectively parse barcode segments. These recommendations could help the Singapore healthcare system improve the tracking and traceability of medical devices and ultimately enhance patient safety.
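
To make the barcode-segmentation problem concrete, the sketch below splits a GS1-formatted UDI string into its standard Application Identifier segments (device identifier plus production identifiers); the sample value is fabricated, and real scanner output varies, e.g. in whether FNC1 group separators are present.

```python
# Minimal GS1 UDI parser for a few common Application Identifiers (AIs):
# (01) GTIN / device identifier, (17) expiry date YYMMDD, (10) lot, (21) serial.
# The sample string below is fabricated for illustration.
GS = "\x1d"  # FNC1 / group separator emitted by many scanners

FIXED_LENGTH_AIS = {"01": 14, "17": 6}   # fixed-length AIs
VARIABLE_AIS = {"10", "21"}              # variable-length, terminated by GS or end of data

def parse_gs1_udi(data: str) -> dict:
    fields, i = {}, 0
    while i < len(data):
        ai = data[i:i + 2]
        i += 2
        if ai in FIXED_LENGTH_AIS:
            n = FIXED_LENGTH_AIS[ai]
            fields[ai] = data[i:i + n]
            i += n
        elif ai in VARIABLE_AIS:
            end = data.find(GS, i)
            end = len(data) if end == -1 else end
            fields[ai] = data[i:end]
            i = end + 1
        else:
            raise ValueError(f"unsupported AI {ai!r} at position {i - 2}")
    return fields

sample = "01" + "00889842000000" + "17" + "261231" + "10" + "LOT42" + GS + "21" + "SN0001"
print(parse_gs1_udi(sample))
# {'01': '00889842000000', '17': '261231', '10': 'LOT42', '21': 'SN0001'}
```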

Keywords: medical device tracking, unique device identifier, barcoding and image recognition, systematized nomenclature of medicine clinical terms

Procedia PDF Downloads 52
158 Family Firm Internationalization: Identification of Alternative Success Pathways

Authors: Sascha Kraus, Wolfgang Hora, Philipp Stieg, Thomas Niemand, Ferdinand Thies, Matthias Filser

Abstract:

In most countries, small and medium-sized enterprises (SME) are the backbone of the economy due to their impact on job creation, innovation and wealth creation. Moreover, the ongoing globalization makes it inevitable – even for SME that traditionally focused on their domestic markets – to internationalize their business activities in order to realize further growth and survive in international markets. Thus, internationalization has become one of the most common growth strategies for SME and has received increasing scholarly attention over the last two decades. On the downside, internationalization can also be regarded as the most complex strategy that a firm can undertake. Family firms in particular are often characterized by limited financial capital, a risk-averse nature and limited growth aspirations, so it could be argued that they are more likely to face greater challenges when taking the pathway to internationalization. Especially the triangulation of family, ownership, and management (so-called ‘familiness’) manifests in a unique behavior and decision-making process which is often characterized by the importance given to non-economic goals and distinguishes a family firm from other businesses. Taking this into account, the concept of socio-emotional wealth (SEW) has evolved to describe the behavior of family firms. In order to investigate how different internal and external firm characteristics shape the internationalization success of family firms, we drew on a sample of 297 small and medium-sized family firms from Germany, Austria, Switzerland, and Liechtenstein. We include SEW as an essential family firm characteristic and add the two major intra-organizational characteristics, entrepreneurial orientation (EO) and absorptive capacity (AC), as well as collaboration intensity (CI) and relational knowledge (RK) as the two major external network characteristics. Based on previous research, we assume that these characteristics are important for explaining the internationalization success of family-firm SME. Regarding the data analysis, we applied a fuzzy-set Qualitative Comparative Analysis (fsQCA), an approach that allows identifying configurations of firm characteristics and is specifically used to study complex causal relationships where traditional regression techniques reach their limits. Results indicate that several combinations of these family firm characteristics can lead to international success, with no permanently required key characteristic. Instead, there are many roads for family firms to walk down to achieve internationalization success. Consequently, our data indicate that family-owned SME are heterogeneous and that internationalization is a complex and dynamic process. Results further show that network-related characteristics occur in all sets and thus represent an essential element in the internationalization process of family-owned SME. The contribution of our study is twofold, as we investigate different forms of international expansion for family firms and how to improve them. First, we are able to broaden the understanding of the intersection between family firm and SME internationalization with respect to major intra-organizational and network-related variables. Second, from a practical perspective, we offer family firm owners a basis for setting up internal capabilities to achieve international success.
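
To make the fsQCA logic concrete, the short sketch below computes the standard consistency and coverage measures for one candidate configuration on toy fuzzy-set membership scores; the scores and the configuration are invented, and the study itself analysed the full 297-firm sample with dedicated fsQCA software.

```python
# Consistency and coverage of a sufficiency relation X => Y in fsQCA, where X is
# membership in a configuration (here: EO AND collaboration intensity) and Y is
# membership in "internationalization success". Toy data for illustration only.
import numpy as np

eo      = np.array([0.8, 0.6, 0.9, 0.2, 0.4])   # entrepreneurial orientation
network = np.array([0.7, 0.9, 0.6, 0.3, 0.5])   # collaboration intensity
success = np.array([0.9, 0.5, 0.8, 0.2, 0.6])   # internationalization success

x = np.minimum(eo, network)                     # fuzzy AND of the configuration

consistency = np.minimum(x, success).sum() / x.sum()
coverage    = np.minimum(x, success).sum() / success.sum()
print(f"consistency = {consistency:.2f}, coverage = {coverage:.2f}")
```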

Keywords: entrepreneurial orientation, family firm, fsQCA, internationalization, socio-emotional wealth

Procedia PDF Downloads 218
157 The Temperature Degradation Process of Siloxane Polymeric Coatings

Authors: Andrzej Szewczak

Abstract:

The study of the effect of high temperatures on polymer coatings represents an important field of research into their properties. Polymers, as materials with numerous advantages (chemical resistance, ease of processing and recycling, corrosion resistance, low density and weight), are currently among the most widely used modern building materials, among others in resin concrete, plastic parts, and hydrophobic coatings. Unfortunately, polymers also have disadvantages, one of which is decisive for their usage: low resistance to high temperatures and brittleness. This applies in particular to thin and flexible polymeric coatings applied to other materials, such as steel and concrete, which degrade under varying thermal conditions. Research aimed at improving this state includes methods of modifying the polymer composition, structure, conditioning conditions, and the polymerization reaction. At present, ways are sought to reflect the actual environmental conditions in which the coating will be operating after it has been applied to another material. These studies are difficult because of the need for adopting a proper model of the polymer's operation and determining the phenomena occurring at the time of temperature fluctuations. For this reason, alternative methods are being developed, taking into account rapid modeling and the simulation of the actual operating conditions of polymeric coating materials in real conditions. Temperature influence in the environment is typically of a prolonged nature, so studies usually involve the measurement of the variation of one or more physical and mechanical properties of such coatings over time. Based on these results, it is possible to determine the effects of temperature loading and develop methods for improving coatings' properties. This paper contains a description of stability studies of silicone coatings deposited on the surface of a ceramic brick. The brick's surface was hydrophobized with two types of inorganic polymers: a nano-polymer preparation based on dialkyl siloxanes (series 1-5) and an aqueous silicone solution (series 6-10). In order to enhance the stability of the film formed on the brick's surface and make it resistant to variable temperature and humidity loading, nano-silica was added to the polymer. The right combination of the polymer liquid phase and the solid nano-silica phase was obtained by disintegration of the mixture by sonication. The changes in viscosity and surface tension of the polymers were determined, as these are the basic rheological parameters affecting the state and durability of the polymer coating. The coatings created on the brick surfaces were then subjected to a temperature loading of 100 °C and to moisture by total immersion in water, in order to determine any water absorption changes caused by damage and degradation of the polymer film. The effect of moisture and temperature was determined by measuring (at a specified number of cycles) changes in the surface hardness (using the Vickers method) and the water absorption of individual samples. On the basis of the obtained results, the degradation process of the polymer coatings, related to changes in their durability over time, was determined.
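
The two response variables mentioned above can be obtained from the raw readings as shown in the sketch below; the indenter load, indentation diagonal, and sample masses are assumed example values, not measurements from the brick series.

```python
# Vickers surface hardness and gravimetric water absorption from raw readings.
# All numeric inputs are illustrative assumptions.
def vickers_hardness(load_kgf: float, mean_diagonal_mm: float) -> float:
    """HV = 1.8544 * F / d^2, with the load F in kgf and diagonal d in mm."""
    return 1.8544 * load_kgf / mean_diagonal_mm ** 2

def water_absorption_pct(mass_dry_g: float, mass_saturated_g: float) -> float:
    """Mass gain after immersion, as a percentage of the dry mass."""
    return (mass_saturated_g - mass_dry_g) / mass_dry_g * 100.0

print(vickers_hardness(load_kgf=0.2, mean_diagonal_mm=0.045))            # ≈ 183
print(water_absorption_pct(mass_dry_g=1250.0, mass_saturated_g=1378.0))  # ≈ 10.2
```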

Keywords: silicones, siloxanes, surface hardness, temperature, water absorption

Procedia PDF Downloads 221
156 Language Skills in the Emergent Literacy of Spanish-Speaking Children with Autism Spectrum Disorders

Authors: Adriana Salgado, Sandra Castaneda, Ivan Perez

Abstract:

Learning to read and write is a complex process involving several cognitive skills as well as contextual and cultural environments. The basis of this development is linguistic skills, such as the ability to name and understand vocabulary, retell a story, phonological awareness, and letter knowledge, among others. In children with autism spectrum disorder (ASD), one of the main concerns is related to language disorders. Nevertheless, most children with ASD are able to decode written information but have difficulties in reading comprehension. Research on these processes in the Spanish-speaking population is limited. However, the increasing prevalence of this diagnosis (1 in 115 children) in Mexico has implications at different levels. Educational research is an important area of interest in ASD children, including emergent literacy. Reading and writing expand the possibilities of access to academic, cultural, and social information. Taking this into account, the objective of this research was to identify the relationship between language skills, alphabet knowledge, phonological awareness, and early reading and writing in ASD Spanish-speaking children. The method was based on tasks that were selected, adapted and in some cases designed to measure initial reading and writing, as well as language skills (naming, receptive vocabulary, and narrative skills), phonological awareness (similar phonological word pairs, beginning sound awareness, and spelling) and letter knowledge, in a sample of 45 children (38 boys and 7 girls) with a prior diagnosis of ASD. Descriptive analyses, as well as bivariate correlations, cluster analysis, and canonical correspondence analysis, were obtained for the data. Results showed that variability was large; however, it was possible to characterize the sample into low, medium, and high score groups with regard to the children's performance. The low score group (46.7% of the sample) had null or deficient performance in language skills and phonological awareness; some could identify up to five letters of the alphabet, and they showed no early reading skills but could scribble. The middle score group was characterized by highly variable performance across tasks, with better language skills in receptive and naming vocabulary and some narrative, letter knowledge, and phonological awareness (beginning sound awareness) skills. The high score group (24.4% of the sample) had the best performance in language skills relative to the sample, as well as in the rest of the measured skills. Finally, scores were canonically correlated between naming, receptive vocabulary, narrative, phonological awareness, letter knowledge and initial learning of reading and writing skills for the high score group, and between letter knowledge, naming and receptive vocabulary for the lower score group, which is consistent with previous research in typical and ASD children. In conclusion, the obtained data are consistent with previous studies. Despite the large variability, it was possible to identify performance profiles and relations based on linguistic, phonological awareness, and letter knowledge skills. These skills were predictor variables of the initial development of reading and writing. The above has implications for the future development of programs and strategies that may benefit the acquisition of reading and writing in ASD children.

Keywords: autism, autism spectrum disorders, early literacy, emergent literacy

Procedia PDF Downloads 116
155 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland

Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski

Abstract:

PM10 is suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma, or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentration. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, Convolutional Neural Networks (CNN) were adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour by hour. The evaluation of the learning process for the investigated models was mostly based upon the mean square error criterion; however, during model validation, a number of other methods of quantitative evaluation were taken into account. The presented pollution prediction model has been verified against real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature, and wind information, as well as external forecasts of temperature and wind for the next 24 h, served as input data. Due to the specificity of the CNN-type network, these data are transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of the system is a vector of 24 elements containing the predicted PM10 concentration for each hour of the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study; the several that gave the best results were selected, and a comparison was then made with other models based on linear regression. The numerical tests carried out fully confirmed the positive properties of the presented method. These were carried out using real ‘big’ data. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than currently used methods based on linear regression. What is more, the use of neural networks increased Pearson's correlation coefficient (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hours of prediction, respectively.
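
A minimal PyTorch sketch of a network with the stated input/output structure (sensor and forecast channels stacked into a tensor, a 24-element hourly output vector) is given below; the channel count, history window, and layer sizes are illustrative choices, not the configuration selected from the 1000+ models tested.

```python
# Toy 1D CNN: maps a window of past sensor readings and external forecasts
# (channels x timesteps) to a 24-element vector of hourly PM10 predictions.
# Channel count, window length, and layer sizes are illustrative only.
import torch
import torch.nn as nn

N_CHANNELS = 6   # e.g. PM2.5, PM10, temperature, wind, forecast temperature, forecast wind
WINDOW = 48      # hours of history fed to the model (assumed)

class PM10Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (WINDOW // 4), 128), nn.ReLU(),
            nn.Linear(128, 24),              # one prediction per hour of the next day
        )

    def forward(self, x):                    # x: (batch, N_CHANNELS, WINDOW)
        return self.head(self.features(x))

model = PM10Net()
dummy = torch.randn(8, N_CHANNELS, WINDOW)
print(model(dummy).shape)                    # torch.Size([8, 24])
criterion = nn.MSELoss()                     # mean square error, as used for evaluation
```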

Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks

Procedia PDF Downloads 113
154 Health and Greenhouse Gas Emission Implications of Reducing Meat Intakes in Hong Kong

Authors: Cynthia Sau Chun Yip, Richard Fielding

Abstract:

High meat and especially red meat intakes are significantly and positively associated with a multiple burden of diseases and with high greenhouse gas (GHG) emissions. This study investigated population meat intake patterns in Hong Kong. It quantified the burden of disease and GHG emission outcomes by modeling the adjustment of Hong Kong population meat intakes to recommended healthy levels. It compared age- and sex-specific population meat, fruit and vegetable intakes obtained from a population survey among adults aged 20 years and over in Hong Kong in 2005-2007 against intake recommendations suggested in the Modelling System to Inform the Revision of the Australian Guide to Healthy Eating (AGHE-2011-MS) technical document. This study found that meat and meat alternative intakes, especially red meat intakes among Hong Kong males aged 20 years and over, are significantly higher than recommended. Red meat intakes among females aged 50-69 years and other meat and alternative intakes among those aged 20-59 years are also higher than recommended. Taking the 2005-07 age- and sex-specific population meat intakes as baselines, three counterfactual scenarios of adjusting Hong Kong adult population meat intakes to AGHE-2011-MS and pre-2011 AGHE recommendations by the year 2030 were established. Consequent energy intake gaps were substituted with additional legume, fruit and vegetable intakes. To quantify the GHG emission outcomes associated with Hong Kong meat intakes, cradle-to-ready-to-eat life cycle assessment emission outcome modelling was used. A comparative risk assessment burden-of-disease model was used to quantify the health outcomes. This study found that adjusting meat intakes to recommended levels could reduce Hong Kong's GHG emissions by 17%-44% when compared against baseline meat intake emissions, and prevent 2,519 to 7,012 premature deaths in males and 53 to 1,342 in females, as well as a multiple burden of diseases, when compared to the baseline meat intake scenario. Whereas previous co-benefit studies compared lump-sum meat intake reductions and outcome measures across the entire population and used emission factors and relative risks from individual studies, this study used age- and sex-specific input and output measures, emission factors and relative risks obtained from high-quality meta-analyses and meta-reviews respectively, and has taken government dietary recommendations into account. Hence, the evaluations in this study are of better quality and more reflective of real-life practices. Further to previous co-benefit studies, this study pinpointed age- and sex-specific population and meat-type-specific intervention points and leverages. When compared with similar studies in Australia, this study also showed that intervention points and leverages among populations with different geographic and cultural backgrounds could be different, and that globalization also globalizes meat consumption emission effects. More regional and culturally specific evaluations are recommended to promote more sustainable meat consumption and enhance global food security.
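
The comparative risk assessment step typically rests on the potential impact fraction (a scenario-based generalization of the population attributable fraction), sketched below with invented intake-category prevalences and relative risks rather than the meta-analytic values used in the study.

```python
# Potential impact fraction (PIF) for shifting meat intake from a baseline to a
# counterfactual scenario, using the categorical form:
#   PIF = (sum(p_i * RR_i) - sum(p'_i * RR_i)) / sum(p_i * RR_i)
# Prevalences, relative risks, and the death count are illustrative assumptions.
baseline_prev = {"high_red_meat": 0.40, "moderate": 0.45, "low": 0.15}
scenario_prev = {"high_red_meat": 0.10, "moderate": 0.45, "low": 0.45}
relative_risk = {"high_red_meat": 1.30, "moderate": 1.10, "low": 1.00}

def weighted_risk(prev: dict) -> float:
    return sum(prev[k] * relative_risk[k] for k in relative_risk)

pif = (weighted_risk(baseline_prev) - weighted_risk(scenario_prev)) / weighted_risk(baseline_prev)
deaths_baseline = 10_000   # assumed annual deaths from the disease of interest
print(f"PIF = {pif:.3f}, avoidable deaths ≈ {pif * deaths_baseline:.0f}")
```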

Keywords: burden of diseases, greenhouse gas emissions, Hong Kong diet, sustainable meat consumption

Procedia PDF Downloads 291
153 Challenges in Self-Managing Vitality: A Qualitative Study about Staying Vital at Work among Dutch Office Workers

Authors: Violet Petit-Steeghs, Jochem J. R. Van Roon, Jacqueline E. W. Broerse

Abstract:

Over the last decades, the retirement age in Europe has been gradually increasing. As a result, people have to continue working for a longer period of time. Health problems due to increased sedentary behavior and mental conditions like burn-out pose a threat to fulfilling employees' working lives. In order to stimulate the ability and willingness to work in the present and the future, it is important to stay vital. Vitality is regarded in the literature as a sense of energy, motivation and resilience. It is assumed that by increasing their vitality, employees will stay healthier and be more satisfied with their jobs, leading to more sustainable employment and less absenteeism in the future. The aim of this project is to obtain insights into the experiences and barriers of employees, and specifically office workers, with regard to their vitality. These insights are essential in order to develop appropriate measures in the future. To gain more insight into the experiences of office workers regarding their vitality, 8 focus group discussions were organized with 6-10 office workers each, from 4 different employers (a university, a national construction company and a large juridical and care service organization) in the Netherlands. The discussions were transcribed and analyzed via open coding. This project is part of the larger consortium project Provita2 and was conducted in collaboration with the University of Technology Eindhoven. Results showed that a range of interdependent factors form a complex network that influences office workers' vitality. These factors can be divided into three overarching groups: (1) personal, (2) organizational and (3) environmental factors. Personal, intrinsic factors relating to the office worker comprise someone's physical health, coping style, lifestyle, needs, and private life. Organizational factors, relating to the employer, are the workload, management style and the structure, vision and culture of the organization. Lastly, environmental factors consist of the air, light and temperature at the workplace and whether the workplace is inspiring and workable. Office workers experienced barriers to improving their own vitality due to a lack of autonomy: on the one hand, because most factors were not only intrinsic but also extrinsic, like the work atmosphere or the temperature in the room, and on the other hand, because office workers were restricted in adapting both intrinsic and extrinsic factors. Restrictions on, for instance, the flexibility of working times and the workload can set limitations on improving vitality through personal factors like physical activity and mental relaxation. In conclusion, a large range of interdependent factors influence the vitality of office workers. Office workers are often regarded as having a responsibility to improve their vitality, but they have limited autonomy in adapting these factors. Measures to improve vitality should therefore not only focus on increasing awareness among office workers, but also on empowering them to fulfill this responsibility. A holistic approach that takes the complex mutual dependencies between the different factors and actors (like managers, employees and HR personnel) into account is highly recommended.

Keywords: occupational health, perspectives office workers, sustainable employment, vitality at work, work & wellbeing

Procedia PDF Downloads 114
152 Vitamin B9 Separation by Synergic Pertraction

Authors: Blaga Alexandra Cristina, Kloetzer Lenuta, Bompa Amalia Stela, Galaction Anca Irina, Cascaval Dan

Abstract:

Vitamin B9 is an important member of the B-vitamin group, being a growth factor important for making genetic material such as DNA and RNA and red blood cells, and for building muscle tissue, especially during infancy, adolescence and pregnancy. Its production by biosynthesis is based on the high metabolic potential of mutant Bacillus subtilis and offers superior bioavailability compared to that obtained by chemical pathways. Pertraction, defined as extraction and transport through liquid membranes, consists of the transfer of a solute between two aqueous phases of different pH values, separated by a solvent layer of various sizes. Pertraction efficiency and selectivity can be significantly enhanced by adding a carrier to the liquid membrane, such as organophosphoric compounds, long-chain amines or crown ethers, the separation process then being called facilitated pertraction. The aim of this work is to determine the impact of the presence of two extractants/carriers in the bulk liquid membrane, i.e. di(2-ethylhexyl) phosphoric acid (D2EHPA) and lauryltrialkylmethylamine (Amberlite LA-2), on the transport kinetics of vitamin B9. The experiments were carried out using two pertraction setups for a free (bulk) liquid membrane. One pertraction cell consists of a U-shaped glass pipe (used for the dichloromethane membrane) and the second is an H-shaped glass pipe (used for n-heptane), with a 45 mm inner diameter and a total volume of 450 mL, the volume of each compartment being 150 mL. The aqueous solutions are independently mixed by means of double-blade stirrers of 6 mm diameter and 3 mm height, at a rotation speed of 500 rpm. In order to reach high diffusional rates through the solvent layer, the organic phase is mixed with a similar stirrer at the same rotation speed (500 rpm). The mass transfer area, both for extraction and for re-extraction, was 1.59×10⁻³ m². The study of facilitated pertraction with the mixture of the two carriers, namely D2EHPA and Amberlite LA-2, dissolved in two solvents of different polarities (n-heptane and dichloromethane), indicated the possibility of obtaining a synergic effect. The synergism was analyzed by considering the vitamin's initial and final mass flows, as well as the permeability factors through the liquid membrane. The synergic effect was observed at low D2EHPA concentrations and high Amberlite LA-2 concentrations, being more important for the low-polarity solvent (n-heptane). The results suggest that the mechanism of synergic pertraction consists of the reaction between the organophosphoric carrier and vitamin B9 at the interface between the feed and membrane phases, while the aminic carrier enhances the hydrophobicity of this compound by solvation. However, the formation of this complex reduced the re-extraction rate and, consequently, affected the synergism related to the final mass flows and permeability factor. To describe the influence of the carrier concentrations on the synergistic coefficients, equations have been proposed taking into account the vitamin mass flows or permeability factors, with average deviations between 4.85% and 10.73%.
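
One simple way to express the synergism reported here is to compare the permeability factor obtained with both carriers against the sum of the permeability factors obtained with each carrier alone; the abstract does not give the authors' own correlating equations, so the ratio below is a generic illustration with invented values.

```python
# Illustrative synergism check for facilitated pertraction: permeability with the
# mixed carriers versus the sum of single-carrier permeabilities. Values invented;
# the paper's own equations for the synergistic coefficients are not reproduced.
P_d2ehpa, P_la2, P_mixture = 1.1e-6, 0.9e-6, 2.6e-6   # permeability factors, m/s (assumed)

synergistic_coefficient = P_mixture / (P_d2ehpa + P_la2)
print(f"S = {synergistic_coefficient:.2f}  (S > 1 suggests synergism)")
```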

Keywords: pertraction, synergism, vitamin B9, Amberlite LA-2, di(2-ethylhexyl) phosphoric acid

Procedia PDF Downloads 246
151 Developing a GIS-Based Tool for the Management of Fats, Oils, and Grease (FOG): A Case Study of Thames Water Wastewater Catchment

Authors: Thomas D. Collin, Rachel Cunningham, Bruce Jefferson, Raffaella Villa

Abstract:

Fats, oils and grease (FOG) are by-products of food preparation and cooking processes. FOG enters wastewater systems through a variety of sources such as households, food service establishments, and industrial food facilities. Over time, if no source control is in place, FOG builds up on pipe walls, leading to blockages and potentially to sewer overflows, which are a major risk to the environment and human health. UK water utilities spend millions of pounds annually trying to control FOG. Despite UK legislation specifying that discharge of such material is against the law, it is often complicated for water companies to identify and prosecute offenders. This leads to uncertainty regarding the approach to take in terms of FOG management. Research is needed to seize the full potential of implementing current practices. The aim of this research was to undertake a comprehensive study to document the extent of FOG problems in sewer lines and reinforce existing knowledge. Data were collected to develop a model estimating the quantities of FOG available for recovery within Thames Water wastewater catchments. Geographical Information System (GIS) software was used in conjunction with the model to integrate the data with a geographical component. FOG was responsible for at least one third of sewer blockages in the Thames Water waste area. A waste-based approach was developed through an extensive review to estimate the potential for FOG collection and recovery. Three main sources were identified: residential, commercial and industrial. Commercial properties were identified as one of the major FOG producers. The total potential FOG generated was estimated for the 354 wastewater catchments. Additionally, raw and settled sewage were sampled and analysed for FOG (as hexane extractable material) monthly at 20 sewage treatment works (STW) for three years. A good correlation was found between the sampled FOG and population equivalent (PE). On average, a difference of 43.03% was found between the estimated FOG (waste-based approach) and the sampled FOG (raw sewage sampling). It was suggested that the approach undertaken could overestimate the FOG available, that the sampling could only capture a fraction of the FOG arriving at the STW, and/or that the difference could be accounted for by FOG accumulating in sewer lines. Furthermore, it was estimated that on average FOG could contribute up to 12.99% of the primary sludge removed. The model was further used to investigate the relationship between estimated FOG and the number of blockages: the higher the FOG potential, the higher the number of FOG-related blockages. The GIS-based tool was used to identify critical areas, i.e. areas with high FOG potential and a high number of FOG blockages. As reported in the literature, FOG was one of the main causes of sewer blockages. By identifying these critical areas, the model further explored the potential for source control in terms of ‘sewer relief’ and waste recovery, and hence helped target where the benefits from implementing management strategies could be highest. However, FOG is still likely to persist throughout the networks, and further research is needed to assess downstream impacts (i.e. at STW).
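
A simplified version of the waste-based estimate for a single catchment is sketched below; the per-source FOG production factors and the catchment figures are placeholders rather than the coefficients derived in the study, and the real tool joins such estimates to catchment polygons in GIS.

```python
# Waste-based estimate of FOG available for recovery in a catchment, summing
# residential, commercial (food service establishments, FSE) and industrial sources.
# All production factors (kg/year per unit) and counts are illustrative placeholders.
FOG_FACTORS = {
    "population_equivalent": 2.0,          # kg FOG per PE per year (assumed)
    "food_service_establishment": 500.0,   # kg FOG per FSE per year (assumed)
    "industrial_food_site": 5000.0,        # kg FOG per site per year (assumed)
}

def catchment_fog_potential(pe: int, fse_count: int, industrial_count: int) -> float:
    """Total FOG potentially generated in a catchment, kg/year."""
    return (pe * FOG_FACTORS["population_equivalent"]
            + fse_count * FOG_FACTORS["food_service_establishment"]
            + industrial_count * FOG_FACTORS["industrial_food_site"])

catchments = {
    "Catchment A": dict(pe=250_000, fse_count=800, industrial_count=3),
    "Catchment B": dict(pe=40_000, fse_count=90, industrial_count=0),
}
for name, sources in catchments.items():
    print(f"{name}: ≈ {catchment_fog_potential(**sources) / 1000:.0f} t FOG/year")
```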

Keywords: fat, FOG, GIS, grease, oil, sewer blockages, sewer networks

Procedia PDF Downloads 185
150 The Influence of Perinatal Anxiety and Depression on Breastfeeding Behaviours: A Qualitative Systematic Review

Authors: Khulud Alhussain, Anna Gavine, Stephen Macgillivray, Sushila Chowdhry

Abstract:

Background: Estimates show that by the year 2030, mental illness will account for more than half of the global economic burden attributable to non-communicable diseases. Often, the perinatal period is characterised by psychological ambivalence and a mixed anxiety-depressive condition. Maternal mental disorder is associated with perinatal anxiety and depression and affects breastfeeding behaviours. Studies also indicate that maternal mental health can considerably influence a baby's health in numerous aspects and can impact newborn health through a lack of adequate breastfeeding. However, studies reporting factors associated with breastfeeding behaviours are predominantly quantitative. Therefore, it is not clear what literature is available to understand the factors affecting breastfeeding and perinatal women's perspectives and experiences. Aim: This review aimed to explore the perceptions and experiences of women with perinatal anxiety and depression, as well as how these experiences influence their breastfeeding behaviours. Methods: A systematic literature review of qualitative studies was conducted in line with the Enhancing Transparency in Reporting the Synthesis of Qualitative Research (ENTREQ) guidelines. Four electronic databases (CINAHL, PsycINFO, Embase, and Google Scholar) were searched for relevant studies using a search strategy. The search was restricted to studies published in the English language between 2000 and 2022. Findings from the literature were screened using pre-defined screening criteria, and the quality of eligible studies was appraised using the Walsh and Downe (2006) checklist. Findings were extracted and synthesised based on Braun and Clarke. The review protocol was registered on PROSPERO (Ref: CRD42022319609). Results: A total of 4947 studies were identified from the four databases. Following duplicate removal and screening, 16 studies met the inclusion criteria. The studies included 87 pregnant and 302 post-partum women from 12 countries. The participants were from a variety of economic, regional, and religious backgrounds and were mainly aged 18 to 45 years. Three main themes were identified: barriers to breastfeeding, breastfeeding facilitators, and emotional disturbance and breastfeeding. Seven subthemes emerged from the data: expectation versus reality, uncertainty about maternal competencies, body image and breastfeeding, lack of sufficient breastfeeding support, family and caregivers' support as an influence on positive breastfeeding practices, breastfeeding education, and causes of mental strain among breastfeeding women. Breastfeeding duration is affected in women with mental health disorders, irrespective of their desire to breastfeed. Conclusion: There is significant empirical evidence that breastfeeding behaviour and perinatal mental disturbance are linked. However, there is a lack of evidence to apply the findings to Saudi women due to the lack of empirical qualitative information. To improve the psychological well-being of mothers, it is crucial to explore and recognise any concerns with their mental, physical, and emotional well-being. Therefore, robust research is needed so that breastfeeding intervention researchers and policymakers can focus specifically on what needs to be done to help mentally distressed perinatal women and their newborns.

Keywords: pregnancy, perinatal period, anxiety, depression, emotional disturbance, breastfeeding

Procedia PDF Downloads 63
149 South-Mediterranean Oaks Forests Management in Changing Climate Case of the National Park of Tlemcen-Algeria

Authors: K. Bencherif, M. Bellifa

Abstract:

The expected climatic changes in North Africa are an increase in both the intensity and frequency of summer droughts and a reduction in water availability during the growing season. The existing coppices and forest formations in the national park of Tlemcen are dominated by holm oak, zen oak and cork oak. These open, fragmented structures do not seem strong enough to offer durable protection against climate change. Given the observed climatic tendency, the objective is to analyze the climatic context and its evolution, taking into account, on the one hand, the likely behavior of the oak species during the next 20-30 years and, on the other hand, the landscape context in relation to the most suitable silvicultural models to choose and especially in relation to human activities. The study methodology is based on climatic synthesis and on floristic and spatial analysis. Meteorological data for the period 1989-2009 are used to characterize the current climate. Another approach, based on dendrochronological analysis of a 120-year-old Aleppo pine stem growing in the park, is used to analyze the climate evolution over one century. Results on climate evolution over the next 50 years, obtained through predictive climate models, are exploited to project the climate tendency in the park. Spatially, in each forest unit of the park, stratified sampling is carried out to reduce the degree of heterogeneity and to delineate the different stands easily using GPS. Results from a previous study are used to analyze the anthropogenic factor. According to the forecasts for the period 2025-2100, the number of warm days with a temperature over 25°C would increase from 30 to 70. The monthly mean maximum (M) and minimum (m) temperatures would rise from 30.5°C to 33°C and from 2.3°C to 4.8°C, respectively. With an average drop of 25%, precipitation would be reduced to 411.37 mm. These new data highlight the importance of the fire risk and the water stress which would affect the vegetation and the regeneration process. Spatial analysis highlights the forest and agricultural dimensions of the park compared to urban habitat and bare soils. Maps show both the state of fragmentation and the regression of forest area (50% of the total surface). At the level of the park, fires have already affected all types of cover, creating low structures of various densities. From a silvicultural point of view, zen oak forms pure stands in some places, and this expansion must be considered a natural tendency in which zen oak becomes the structuring species. Climate-related changes are minor compared to the real impact that South-Mediterranean forests are undergoing because of the human pressures they support. Nevertheless, the hardwood oak stands in the national park of Tlemcen will have to face unexpected climate changes such as a changing rainfall regime associated with a lengthening of the period of water stress, heavy rainfall and/or sudden cold snaps. Faced with these new conditions, management based on the mixed uneven-aged high forest method, promoting the more dynamic species, could be an appropriate measure.

Keywords: global warming, mediterranean forest, oak shrub-lands, Tlemcen

Procedia PDF Downloads 370
148 Exploring Digital Media’s Impact on Sports Sponsorship: A Global Perspective

Authors: Sylvia Chan-Olmsted, Lisa-Charlotte Wolter

Abstract:

With the continuous proliferation of media platforms, there have been tremendous changes in media consumption behaviors. From the perspective of sports sponsorship, while there is now a multitude of platforms on which to create brand associations, the changing media landscape and the shift of message control also mean that sports sponsors have to take into account the nature of, and consumer responses toward, these emerging digital media in order to devise effective marketing strategies. Utilizing the personal interview methodology, this study is qualitative and exploratory in nature. A total of 18 experts from European and American academia, the sports marketing industry, and sports leagues/teams were interviewed to address three main research questions: 1) What are the major changes in digital technologies that are relevant to sports sponsorship; 2) How have digital media influenced the channels and platforms of sports sponsorship; and 3) How have these technologies affected the goals, strategies, and measurement of sports sponsorship. The study found that sports sponsorship has moved from consumer engagement, engagement measurement, and consequences of engagement on brand behaviors to one-on-one micro-targeting, engagement by context, time, and space, and activation and leveraging based on tracking and databases. From the perspective of platforms and channels, the use of mobile devices is prominent during sports content consumption. Increasing multiscreen media consumption means that sports sponsors need to optimize their investment decisions in leagues, teams, or game-related content sources, as they need to go where the fans are most engaged. The study observed an imbalanced strategic leveraging of technology and digital infrastructure. While sports leagues have placed less emphasis on brand value management via technology, sports sponsors have been much more active in utilizing technologies like mobile/LBS tools, big data/user information, real-time and programmatic marketing, and social media activation. Regardless of the new media/platforms, the study found that integration and contextualization are the two essential means of improving sports sponsorship effectiveness through technology, that is, how sponsors effectively integrate social media/mobile/second screen into their existing legacy media sponsorship plan so that technology works for the experience/message instead of distracting fans. Additionally, technological advancement and the attention economy amplify the importance of consumer data gathering, but sports consumer data does not equate to loyalty or engagement. This study also affirms the benefit of digital media in offering viral and pre-event activations through storytelling well before the actual event, which is critical for leveraging brand associations before and after; that is, sponsors now have multiple opportunities and platforms to tell stories about their brands over a longer time period. In summary, digital media facilitate fan experience, access to the brand message, multiplatform/channel presentations, storytelling, and content sharing. Nevertheless, rather than focusing on technology and media, today's sponsors need to define what they want to focus on in terms of content themes that connect with their brands and then identify the channels/platforms. The big challenge for sponsors is to play to each venue's/medium's specificity and its fit with the target audience, and not to uniformly deliver the same message in the same format on different platforms/channels.

Keywords: digital media, mobile media, social media, technology, sports sponsorship

Procedia PDF Downloads 266
147 Consumers and Voters’ Choice: Two Different Contexts with a Powerful Behavioural Parallel

Authors: Valentina Dolmova

Abstract:

What consumers choose to buy and whom voters select on election day are two questions that have captivated the interest of both academics and practitioners for many decades. The importance of understanding what influences the behavior of those groups, and whether or not we can predict or control it, fuels a steady stream of research in a range of fields. Looking only at the past 40 years, more than 70 thousand scientific papers have been published in each field – consumer behavior and political psychology, respectively. From marketing, economics, and the science of persuasion to political and cognitive psychology, we have all remained heavily engaged. Ever-evolving technology, inevitable socio-cultural shifts, global economic conditions, and much more play an important role in choice equations regardless of context. On one hand, this makes the research efforts always relevant and needed. On the other, the relatively low number of cross-field collaborations, which seem to be picking up only in more recent years, keeps the existing findings isolated in framed bubbles. By performing systematic research across both areas of psychology and building a parallel between theories and factors of influence, however, we find that there is not only a definitive common ground between the behaviors of consumers and voters but that we are moving towards a global model of choice. This means that the lines between contexts are fading, which has a direct implication for what we should focus on when predicting or navigating buyers' and voters' behavior. Internal and external factors in four main categories determine the choices we make as consumers and as voters. Together, personal, psychological, social, and cultural factors create a holistic framework through which all stimuli relating to a particular product or political party are filtered. The "consumer-voter" analogy thus solidifies further. Leading academics suggest that this fundamental parallel is the key to successfully managing political and consumer brands alike. However, we distinguish four additional key stimuli that relate to those factor categories ((1) opportunity costs, (2) the memory of the past, (3) recognisable figures/faces, and (4) conflict), arguing that the level of expertise a person has determines the prevalence of particular factors or stimuli. Our efforts take into account global trends such as the establishment of "celebrity politics" and the image of "ethically concerned consumer brands", which bridge the gap between contexts to an even greater extent. Scientists and practitioners are pushed to accept the transformative nature of both fields in social psychology. Existing blind spots, as well as the limited amount of research conducted outside American and European societies, open up space for more collaborative efforts in this highly demanding and lucrative field. A mixed-method research design tests three main hypotheses: the first two focus on the irrelevance of context when comparing voting and consumer behavior, from both the factor and stimulus lenses, while the third determines whether or not the level of expertise in either field skews which prism we are more likely to adopt when evaluating options.

Keywords: buyers’ behaviour, decision-making, voters’ behaviour, social psychology

Procedia PDF Downloads 133
146 Dynamics of Protest Mobilization and Rapid Demobilization in Post-2001 Afghanistan: Facing Enlightening Movement

Authors: Ali Aqa Mohammad Jawad

Abstract:

Taking a relational approach, this paper analyzes the causal mechanisms associated with the successful mobilization and rapid demobilization of the Enlightening Movement in post-2001 Afghanistan. The movement emerged after the state-owned Da Afghan Bereshna Sherkat (DABS) decided to divert the route of the Turkmenistan-Uzbekistan-Tajikistan-Afghanistan-Pakistan (TUTAP) electricity project. The grid was initially planned to pass through the Hazara-inhabited province of Bamiyan, according to Afghanistan's Power Sector Master Plan. For the Hazara community, the reroute served as an aide-mémoire of historical subordination to other ethno-religious groups. It was also perceived as deprivation from post-2001 development projects financed by international aid. This ignited the accumulated grievances, which then gave birth to the Enlightening Movement. The movement mobilized successfully. However, it demobilized after losing much of its mobilizing capabilities through an amalgamation of external and internal relational factors. This successful mobilization yet rapid demobilization constitutes the puzzle of this paper. From a theoretical perspective, the paper is significant as it establishes the applicability of contentious politics theory to protest mobilizations that occurred in Afghanistan, a context-specific setting characterized by ethnic politics. Both primary and secondary data are utilized to address the puzzle. As primary sources, media coverage, interviews, reports, public statements of the movement involved in contentious performances, and data from Social Networking Services (SNS) are used. The period covered is 2001-2018. As secondary sources, published academic articles and books are used to give a historical account of contentious politics. For data analysis, a qualitative comparative historical method is utilized to uncover the causal mechanisms associated with the successful mobilization and rapid demobilization of the Movement. In this pursuit, both mobilization and demobilization are considered larger political processes that can be decomposed into constituent mechanisms. The Enlightening Movement's framing and campaigns are first studied to uncover the associated mechanisms. Then, to avoid introducing ad hoc mechanisms, the recurrence of mechanisms is checked against another case. Mechanisms qualify as robust if they are "recurrent" in different episodes of contention. Checking the recurrence of causal mechanisms is vital, as past contentious events tend to reinforce future events. The findings of this paper suggest that the public sphere in Afghanistan is drastically different from that of Western democracies, known as the birthplace of social movements. In Western democracies, when institutional politics did not respond, movement organizers occupied the public sphere, undermining the legitimacy of the government. In Afghanistan, the public sphere is ethnicized. Given the inter- and intra-relational dynamics of ethnic groups in Afghanistan, the movement was reduced to an erosive inter- and intra-ethnic conflict. This undermined the cohesiveness of the movement, which then kicked off its demobilization process.

Keywords: enlightening movement, contentious politics, mobilization, demobilization

Procedia PDF Downloads 166
145 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and comparing it to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance holds as in the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is to account for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI) problem. We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is being attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, and the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
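As a rough illustration of the upper-quantile computation described above, the following Python sketch approximates the upper quantile of the minimum of jointly Gaussian GIC statistics by Monte Carlo sampling from an assumed multivariate normal limit. The mean vector, covariance matrix, and nominal level are hypothetical placeholders rather than values from the study, which instead evaluates the corresponding multivariate Gaussian integrals exactly with the R package "mvtnorm".

import numpy as np

# Hypothetical asymptotic mean vector and covariance of the GIC statistics
# for three candidate models (placeholders, not values from the study).
mu = np.array([120.0, 121.5, 123.0])
sigma = np.array([[4.0, 2.0, 1.0],
                  [2.0, 4.0, 2.0],
                  [1.0, 2.0, 4.0]])

rng = np.random.default_rng(0)
draws = rng.multivariate_normal(mu, sigma, size=200_000)  # joint GIC draws
min_gic = draws.min(axis=1)                               # minimum over candidate models

# Upper quantile of the minimum, e.g. the 95th percentile, giving the top of
# an uncertainty band for the selected (minimum-GIC) criterion value.
upper_q = np.quantile(min_gic, 0.95)
print(f"Approximate 95% upper quantile of the minimum GIC: {upper_q:.2f}")

Models whose GIC values fall below such an upper quantile could then be retained as plausible, which is the spirit of the uncertainty band described in the abstract.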

Keywords: model selection inference, generalized information criteria, post-model selection inference, asymptotic theory

Procedia PDF Downloads 63
144 Overcoming the Challenges of Subjective Truths in the Post-Truth Age Through a Critical-Ethical English Pedagogy

Authors: Farah Vierra

Abstract:

Following the 2016 US presidential election and the advancement of the Brexit referendum, the concept of “post-truth”, defined by the Oxford Dictionary as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief”, came into prominent use in public, political and educational circles. What this essentially entails is that in this age, individuals are increasingly confronted with subjective perpetuations of truth in their discourse spheres that are informed by beliefs and opinions as opposed to any form of coherence to the reality of those whom these truth claims concern. In principle, a subjective delineation of truth is progressive and liberating – especially considering its potential to provide marginalised groups in the diverse communities of our globalised world with the voice to articulate truths that are representative of themselves and their experiences. However, any form of human flourishing that seems to be promised here collapses, as the tenets of subjective truths initially in place to liberate have been distorted through post-truth to allow individuals to purport selective and individualistic truth claims that further oppress and silence certain groups within society without due accountability. Evidence of this is prevalent in the conception of terms such as "alternative facts" and "fake news", which we observe individuals declare when their problematic truth claims are questioned. Considering the pervasiveness of post-truth and the ethical issues that accompany it, educators and scholars alike have increasingly noted the need to adapt educational practices and pedagogies to account for the diminishing objectivity of truth in the twenty-first century, especially because students, as digital natives, find themselves in the firing line of post-truth; engulfed in digital societies that proliferate post-truth through the surge of truth claims allowed on various media sites. In an attempt to equip students with the vital skills to navigate the post-truth age and oppose its proliferation of social injustices, English educators find themselves having to devise instructional strategies that not only teach students the ways they can critically and ethically scrutinise truth claims but also teach them to mediate the subjectivity of truth in a manner that does not undermine the voices of diverse communities. In hopes of providing educators with a roadmap to do so, this paper will first examine the challenges that confront students as a result of post-truth. Following this, the paper will elucidate the role English education can play in helping students overcome the complex ramifications of post-truth. Scholars have consistently touted the affordances of literary texts in providing students with imagined spaces to explore societal issues through a critical discernment of language and an ethical engagement with its narrative developments. Therefore, this paper will explain and demonstrate how literary texts, when used alongside a critical-ethical post-truth pedagogy that equips students with interpretive strategies informed by literary traditions such as literary and ethical criticism, can be effective in helping students develop the pertinent skills to comprehensively examine truth claims and overcome the challenges of the post-truth age.

Keywords: post-truth, pedagogy, ethics, English, education

Procedia PDF Downloads 48
143 Dynamic Thermomechanical Behavior of Adhesively Bonded Composite Joints

Authors: Sonia Sassi, Mostapha Tarfaoui, Hamza Benyahia

Abstract:

Composite materials are increasingly being used as a substitute for metallic materials in many technological applications, such as aeronautics, aerospace, marine, and civil engineering. For composite materials, the thermomechanical response evolves with the strain rate. The energy balance equation for anisotropic, elastic materials includes heat source terms that govern the conversion of part of the mechanical work into heat; the remainder contributes to the stored energy driving the damage process in the composite material. In this paper, we investigate the bulk thermomechanical behavior of adhesively-bonded composite assemblies to quantitatively assess the temperature rise that accompanies adiabatic deformations. In particular, adhesively bonded joints in glass/vinylester composite material are subjected to in-plane dynamic loads over a range of strain rates. The dynamic thermomechanical behavior of this material is investigated using compression Split Hopkinson Pressure Bars (SHPB) coupled with a high-speed infrared camera and a high-speed camera to measure in real time the dynamic behavior, the damage kinetics, and the temperature variation in the material. The interest of using a high-speed IR camera is to view in real time the evolution of heat dissipation in the material when damage occurs. However, this technique does not produce thermal values that can be correlated with the stress-strain curves of the composite material, because its response time is long compared with the duration of the dynamic test. For this reason, the authors revisit the application of small thermocouples placed on the surface of the material to ensure real thermal measurements under dynamic loading. Experiments with dynamically loaded material show that the thermocouples record temperature values with a short typical rise time as a result of the conversion of mechanical work into heat during the compression test. These results show that small thermocouples can provide an important complement to non-contact techniques such as the high-speed infrared camera. A significant temperature rise was observed in in-plane compression tests, especially at high strain rates. During the tests, it was noticed that a sudden temperature rise occurs when macroscopic damage occurs. This rise in temperature is linked to the rate of damage: the more severe the damage, the higher the localized temperature detected. This shows the strong relationship between the occurrence of damage and the induced heat dissipation. For the in-plane tests, the damage takes place more abruptly as the strain rate is increased. The difference observed in the thermomechanical response in in-plane compression is explained only by the difference in the damage process active during the compression tests. In this study, we highlighted the dependence of the thermomechanical response on the strain rate of the bonded specimens. The effect of heat dissipation in this material therefore cannot be ignored and should be taken into account when defining damage models for impact loading.
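A common simplified form of the energy balance invoked for such adiabatic, high-strain-rate tests is sketched below. This is a generic textbook relation given only for context, not an equation taken from the study, and the fraction beta is merely one conventional way of partitioning the dissipated work:

\rho\, c_p \frac{\partial T}{\partial t} = \beta\, \boldsymbol{\sigma} : \dot{\boldsymbol{\varepsilon}},

where \rho is the density, c_p the specific heat, \boldsymbol{\sigma} the stress tensor, \dot{\boldsymbol{\varepsilon}} the strain-rate tensor, and (1 - \beta) the fraction of mechanical work stored in the material, for example through damage mechanisms. Under adiabatic conditions, the measured surface temperature rise can thus serve as an indirect indicator of how much work is being dissipated by damage.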

Keywords: adhesively-bonded composite joints, damage, dynamic compression tests, energy balance, heat dissipation, SHPB, thermomechanical behavior

Procedia PDF Downloads 192
142 The Monitor for Neutron Dose in Hadrontherapy Project: Secondary Neutron Measurement in Particle Therapy

Authors: V. Giacometti, R. Mirabelli, V. Patera, D. Pinci, A. Sarti, A. Sciubba, G. Traini, M. Marafini

Abstract:

Particle therapy (PT) is a very modern technique of non-invasive radiotherapy mainly devoted to the treatment of tumours untreatable with surgery or conventional radiotherapy because they are localised close to organs at risk (OaR). Nowadays, PT is available in about 55 centres in the world, and only about 20% of them are able to treat with carbon ion beams. However, the efficiency of ion-beam treatments is so impressive that many new centres are under construction. The interest in this powerful technology lies in the main characteristic of PT: the high irradiation precision and conformity of the dose released to the tumour, with simultaneous preservation of the adjacent healthy tissue. However, the beam interactions with the patient produce a large component of secondary particles whose additional dose has to be taken into account during the definition of the treatment planning. Although the largest fraction of the dose is released to the tumour volume, a non-negligible amount is deposited in other body regions, mainly due to the scattering and nuclear interactions of the neutrons within the patient body. One of the main concerns in PT treatments is the possible occurrence of secondary malignant neoplasms (SMN). While SMNs can develop up to decades after the treatment, their incidence directly impacts the quality of life of cancer survivors, in particular pediatric patients. Dedicated Treatment Planning Systems (TPS) are used to predict the normal tissue toxicity, including the risk of late complications induced by the additional dose released by secondary neutrons. However, no precise measurement of the secondary neutron flux is available, nor of its energy and angular distributions: an accurate characterization is needed in order to improve TPS and reduce safety margins. The MONDO (MOnitor for Neutron Dose in hadrOntherapy) project is devoted to the construction of a secondary neutron tracker tailored to the characterization of that secondary neutron component. The detector, based on the tracking of the recoil protons produced in double elastic scattering interactions, is a matrix of thin scintillating fibres arranged in alternating x-y oriented layers. The final size of the detector is 10 x 10 x 20 cm³ (250 µm square scintillating fibres, double cladding). The readout of the fibres is carried out with a dedicated SPAD Array Sensor (SBAM) realised in CMOS technology by FBK (Fondazione Bruno Kessler). Both the detector and the SBAM sensor are under development, and the detector is expected to be fully constructed by the end of the year. MONDO will carry out data-taking campaigns at the TIFPA Proton Therapy Center in Trento, at CNAO (Pavia), and at HIT (Heidelberg) with carbon ions, in order to characterize the neutron component, predict the additional dose delivered to the patients with much more precision, and drastically reduce the current safety margins. Preliminary measurements with charged particle beams and Monte Carlo FLUKA simulations will be presented.
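As background for the recoil-proton tracking principle mentioned above, the non-relativistic kinematics of elastic neutron-proton scattering (a standard relation quoted here for context, not taken from the MONDO publications) links the recoil proton energy to the incoming neutron energy through the recoil angle:

E_p = E_n \cos^2\theta_p,

where \theta_p is the angle of the recoil proton with respect to the incoming neutron direction. With two successive scatterings reconstructed in the fibre tracker, the direction of the scattered neutron is given by the line joining the two vertices, so the energy and direction of the incident neutron can in principle be inferred from the measured proton tracks.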

Keywords: secondary neutrons, particle therapy, tracking detector, elastic scattering

Procedia PDF Downloads 206
141 Piezotronic Effect on Electrical Characteristics of Zinc Oxide Varistors

Authors: Nadine Raidl, Benjamin Kaufmann, Michael Hofstätter, Peter Supancic

Abstract:

If polycrystalline ZnO is properly doped and sintered under very specific conditions, it shows unique electrical properties which are indispensable for today's electronics industries, where it is used as the number one overvoltage protection material. Below a critical voltage, the polycrystalline bulk exhibits high electrical resistance, but it suddenly becomes up to twelve orders of magnitude more conductive if this voltage limit is exceeded (i.e., the varistor effect). It is known that these peerless properties have their origin in the grain boundaries of the material. Electric charge is accumulated in the boundaries, causing a depletion layer in their vicinity and forming potential barriers (so-called Double Schottky Barriers, or DSB), which are responsible for the highly non-linear conductivity. Since ZnO is a piezoelectric material, mechanical stresses induce polarisation charges that modify the DSB heights and, as a result, the global electrical characteristics (i.e., the piezotronic effect). In this work, a finite element method was used to simulate the stresses emerging on individual grains in the bulk. In addition, experimental efforts were made to verify a coherent model that could explain this influence. Electron backscatter diffraction was used to identify grain orientations. With the help of wet chemical etching, grain polarization was determined. Micro lock-in infrared thermography (MLIRT) was applied to detect current paths through the material, and a micro four-point probe system (M4PPS) was employed to investigate current-voltage characteristics between single grains. Bulk samples were tested under uniaxial pressure. It was found that the conductivity can increase by up to three orders of magnitude with increasing stress. Through in-situ MLIRT, it could be shown that this effect is caused by the activation of additional current paths in the material. Further, compressive tests were performed on miniaturized samples with grain paths containing only one or two grain boundaries. These tests evinced both an increase in conductivity, as observed for the bulk, and a decrease in conductivity. This phenomenon has been predicted theoretically and can be explained by piezotronically induced surface charges that have an impact on the DSB at the grain boundaries. Depending on grain orientation and stress direction, the DSB can be raised or lowered. The experiments also revealed that the conductivity within one single specimen can increase and decrease, depending on the current direction. This novel finding indicates the existence of asymmetric Double Schottky Barriers, which was furthermore confirmed by complementary methods. MLIRT studies showed that the intensity of heat generation within individual current paths depends on the direction of the stimulating current. M4PPS was used to study the relationship between the I-V characteristics of single grain boundaries and grain orientation, and revealed asymmetric behavior for very specific orientation configurations. A new model for the Double Schottky Barrier, taking into account this natural asymmetry and explaining the experimental results, will be given.
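For context, the highly non-linear conduction described above is commonly summarized by the empirical varistor relation (a standard textbook expression, not a result of this study):

I = k V^{\alpha}, \qquad \alpha = \frac{\log(I_2/I_1)}{\log(V_2/V_1)},

where k is a specimen-dependent constant and the nonlinearity coefficient \alpha, evaluated between two points (V_1, I_1) and (V_2, I_2) on the current-voltage curve, quantifies how sharply the device switches from the insulating to the conducting regime. Stress-induced changes of the Double Schottky Barrier heights effectively shift this characteristic, which is how the piezotronic effect manifests itself in the measured I-V curves.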

Keywords: asymmetric double Schottky barrier, piezotronic effect, varistor, zinc oxide

Procedia PDF Downloads 245
140 A Randomized, Controlled Trial to Test Habit Formation Theory for Low Intensity Physical Exercise Promotion in Older Adults

Authors: Patrick Louie Robles, Jerry Suls, Ciaran Friel, Mark Butler, Samantha Gordon, Frank Vicari, Joan Duer-Hefele, Karina W. Davidson

Abstract:

Physical activity guidelines focus on increasing moderate-intensity activity for older adults, but adherence to recommendations remains low, despite the fact that scientific evidence finds increasing physical activity to be positively associated with health benefits. Behavior change techniques (BCTs) have demonstrated some effectiveness in reducing sedentary behavior and promoting physical activity. This pilot study uses a personalized trials (N-of-1) design, delivered virtually, to evaluate the efficacy of using five BCTs to increase low-intensity physical activity (by 2,000 steps of walking per day) in adults aged 45-75 years. The five BCTs described in habit formation theory are goal setting, action planning, rehearsal, rehearsal in a consistent context, and self-monitoring. The study recruited health system employees in the target age range who had no mobility restrictions and expressed interest in increasing their daily activity by a minimum of 2,000 steps per day at least five days per week. Participants were sent a Fitbit Charge 4 fitness tracker with an established study account and password. Participants were recommended to wear the Fitbit device 24/7 but were required to wear it for a minimum of ten hours per day. Baseline physical activity was measured by Fitbit for two weeks. Participants then engaged remotely with a clinical research coordinator to establish a "walking plan" that included a time and day interval (e.g., between 7 am and 8 am, Monday-Friday), a location for the walk (e.g., a park), and how much time the plan would need to achieve a minimum of 2,000 steps over their baseline average step count (20 minutes). All elements of the walking plan were required to remain consistent throughout the study. In the 10-week intervention phase of the study, participants received all five BCTs in a single, time-sensitive text message. The text message was delivered 30 minutes prior to the established walk time and signaled participants to begin walking when the context (i.e., day of the week, time of day) they had pre-selected was encountered. Participants were asked to log both the start and the conclusion of their activity session by pressing a button on the Fitbit tracker. Within 30 minutes of the planned conclusion of the activity session, participants received a text message with a link to a secure survey, where they noted whether they engaged in the BCTs when prompted and completed an automaticity survey to identify how "automatic" their walking behavior had become. At the end of their trial, participants received a personalized summary of their step data over time, helping them learn more about their responses to the five BCTs. Whether the use of these five 'habit formation' BCTs in combination elicits a change in physical activity behavior among older adults will be reported. This study will inform the feasibility of a virtually delivered N-of-1 study design to effectively promote physical activity as a component of healthy aging.
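A minimal Python sketch of the step-count logic described above is given below; the daily step numbers are hypothetical placeholders, not study data, and the +2,000-step target over the two-week baseline average is the only element taken from the protocol.

import numpy as np

# Hypothetical daily step counts (illustrative only, not study data).
baseline_steps = np.array([6200, 5800, 7100, 6500, 6900, 6300, 6700,
                           5900, 6400, 6600, 7000, 6100, 6800, 6200])  # 2-week baseline
intervention_steps = np.array([8500, 9100, 7900, 8800, 9300, 8600, 9000])  # one intervention week

baseline_avg = baseline_steps.mean()
target = baseline_avg + 2000  # study goal: 2,000 steps above the baseline average

met_target = intervention_steps >= target
print(f"Baseline average: {baseline_avg:.0f} steps/day, daily target: {target:.0f}")
print(f"Days meeting target: {met_target.sum()} of {len(intervention_steps)}")

A summary of this kind, computed over the full 10-week intervention, is essentially what the personalized end-of-trial feedback would report back to each participant.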

Keywords: aging, exercise, habit, walking

Procedia PDF Downloads 113
139 Evaluation of Wheat Varieties for Water Use Efficiency under Staggering Sowing Times and Variable Irrigation Regimes under Timely and Late Sown Conditions

Authors: Vaibhav Baliyan, S. S. Parihar

Abstract:

With the rise in temperature during the reproductive phase and under moisture stress, winter wheat yields are likely to decrease because of limited plant growth, a higher rate of night respiration, higher spikelet sterility or a reduced number of grains per spike, and restricted embryo development, thereby reducing grain number. Crop management practices play a pivotal role in minimizing the adverse effects of terminal heat stress on wheat production. Amongst various agronomic management practices, adjusting the sowing date, crop cultivars, and irrigation scheduling have been realized to be simple yet powerful, implementable, and eco-friendly mitigation strategies to sustain yields under elevated temperature conditions. Taking into account the large variability in wheat production in space and time, a study was conducted to identify suitable wheat varieties under both early and late planting, together with a suitable irrigation schedule, for minimizing the terminal heat stress effect and thereby improving wheat production. Experiments were conducted at the research farms of the Indian Agricultural Research Institute, New Delhi, India, separately for timely and late sown conditions, with suitable varieties and staggered sowing dates from 1 November to 30 November for the timely sown condition and from 1 December to 31 December for the late sown condition. The irrigation schedules followed for both experiments were 100% of ETc (crop evapotranspiration), 80% of ETc, and 60% of ETc. Results of the timely sown experiment indicated that sowing on 1 November resulted in the highest grain yield, followed by 10 November. However, further delay in sowing resulted in a gradual decrease in yield, and the maximum reduction was noticed under the 30 November sowing. Amongst the varieties, HD3086 produced a higher grain yield than the other varieties. Irrigation applied at 100% of ETc gave a higher yield, comparable to that at 80% of ETc, but both were significantly higher than at 60% of ETc. It was further observed that even liberal irrigation at 100% of ETc could not compensate for the yield loss under delayed sowing, suggesting that the rise in temperature beyond January adversely affected the growth and development of the crop and forced maturity, resulting in a significant reduction of yield-attributing characters due to terminal heat stress. Similar observations were recorded in the late sown experiment. Planting on 1 December along with the 100% ETc irrigation schedule resulted in a significantly higher grain yield compared to the other dates and irrigation regimes. Further, it was observed that the reduction in yield under late sown conditions was significantly larger than under timely sown conditions, irrespective of the variety grown and the irrigation schedule followed. Delayed sowing reduced the crop growth period and forced maturity, which in turn led to significant deterioration in all the yield-attributing characters and thereby a reduction in yield, suggesting that terminal heat stress had a greater impact on the yield of the late sown crop than of the timely sown crop because the temperature rise coincided with the reproductive phase of the crop.
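The three irrigation regimes compared in the study are fractions of crop evapotranspiration (ETc); the short Python sketch below illustrates, under assumed values, how such regimes translate into irrigation depths using the standard FAO-56 relation ETc = Kc x ET0. The crop coefficient, reference evapotranspiration, and irrigation interval are hypothetical placeholders, not values from this experiment.

# Minimal sketch of irrigation depths at fractions of crop evapotranspiration (ETc).
kc = 1.15                 # assumed mid-season crop coefficient for wheat
et0_mm_per_day = 4.0      # assumed reference evapotranspiration (mm/day)
interval_days = 10        # assumed irrigation interval

etc_mm = kc * et0_mm_per_day * interval_days  # full crop water requirement over the interval

for fraction in (1.00, 0.80, 0.60):           # the three regimes compared in the study
    depth = fraction * etc_mm
    print(f"{int(fraction * 100)}% of ETc -> {depth:.1f} mm per irrigation")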

Keywords: climate, irrigation, mitigation, wheat

Procedia PDF Downloads 96
138 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire

Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan

Abstract:

Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated design of timber structures considering standard fire exposure, while modern principles of performance-based design enable the use of advanced non-standard fire curves. In Europe, the standard for the fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, which is the residual cross-section of uncharred timber reduced additionally by a so-called zero strength layer. In the case of standard fire exposure, Eurocode 5 gives a fixed value for the zero strength layer, i.e., 7 mm, while for non-standard parametric fires no additional comments or recommendations on the zero strength layer are given. Designers therefore often apply the adopted 7 mm rule to parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero strength layer can be on the unsafe side even for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam exposed to a large number of parametric fire curves. The zero strength layer and charring rates are determined from numerical simulations performed with a recently developed advanced two-step computational model. The first step comprises a hygro-thermal model which predicts the temperature, moisture, and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner's kinematically exact beam model and accounts for the membrane, shear, and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, according to Eurocode 5, assumed to have a fixed temperature of around 300°C. Based on the performed study and observations, improved charring rates and a new thickness of the zero strength layer in the case of parametric fires are determined. Thus, the reduced cross-section method is substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero strength layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and also experimental research in the future.
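For reference, the reduced cross-section method cited above works with an effective charring depth; the expression below summarizes the standard EN 1995-1-2 formulation for unprotected surfaces as general background and is not taken from the paper itself:

d_{\mathrm{ef}} = d_{\mathrm{char},n} + k_0 d_0, \qquad d_0 = 7\ \mathrm{mm}, \qquad k_0 = \min\!\left(\frac{t}{20\ \mathrm{min}},\, 1.0\right),

where d_{\mathrm{char},n} is the notional charring depth and t the time of fire exposure. In these terms, the study can be read as seeking values of d_0 (and of the charring rates entering d_{\mathrm{char},n}) calibrated for parametric fire curves, in place of the fixed 7 mm derived for standard fire exposure.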

Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer

Procedia PDF Downloads 136
137 Overcoming the Challenges of Subjective Truths in the Post-Truth Age Through a Critical-Ethical English Pedagogy

Authors: Farah Vierra

Abstract:

Following the 2016 US presidential election and the advancement of the Brexit referendum, the concept of “post-truth,” defined by the Oxford Dictionary as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief,” came into prominent use in public, political and educational circles. What this essentially entails is that in this age, individuals are increasingly confronted with subjective perpetuations of truth in their discourse spheres that are informed by beliefs and opinions as opposed to any form of coherence to the reality of those whom these truth claims concern. In principle, a subjective delineation of truth is progressive and liberating – especially considering its potential to provide marginalised groups in the diverse communities of our globalised world with the voice to articulate truths that are representative of themselves and their experiences. However, any form of human flourishing that seems to be promised here collapses as the tenets of subjective truths initially in place to liberate have been distorted through post-truth to allow individuals to purport selective and individualistic truth claims that further oppress and silence certain groups within society without due accountability. The evidence of this is prevalent in the conception of terms such as "alternative facts" and "fake news" that we observe individuals declare when their problematic truth claims are being questioned. Considering the pervasiveness of post-truth and the ethical issues that accompany it, educators and scholars alike have increasingly noted the need to adapt educational practices and pedagogies to account for the diminishing objectivity of truth in the twenty-first century, especially because students, as digital natives, find themselves in the firing line of post-truth; engulfed in digital societies that proliferate post-truth through the surge of truth claims allowed on various media sites. In an attempt to equip students with the vital skills to navigate the post-truth age and oppose its proliferation of social injustices, English educators find themselves having to contend with a complex question: how can the teaching of English equip students with the ability to critically and ethically scrutinise truth claims whilst also mediating the subjectivity of truth in a manner that does not undermine the voices of diverse communities? In order to address this question, this paper will first examine the challenges that confront students as a result of post-truth. Following this, the paper will elucidate the role English education can play in helping students overcome the complex demands of the post-truth age. Scholars have consistently touted the affordances of literary texts in providing students with imagined spaces to explore societal issues through a critical discernment of language and an ethical engagement with its narrative developments. Therefore, this paper will explain and demonstrate how literary texts, when used alongside a critical-ethical post-truth pedagogy that equips students with interpretive strategies informed by literary traditions such as literary and ethical criticism, can be effective in helping students develop the pertinent skills to comprehensively examine truth claims and overcome the challenges of the post-truth age.

Keywords: post-truth, pedagogy, ethics, English, education

Procedia PDF Downloads 43
136 Assessing P0.1 and Occlusion Pressures in Brain-Injured Patients on Pressure Support Ventilation: A Study Protocol

Authors: S. B. R. Slagmulder

Abstract:

Monitoring inspiratory effort and dynamic lung stress in patients on pressure support ventilation in the ICU is important for protecting against patient self-inflicted lung injury (P-SILI) and diaphragm dysfunction. Strategies to address the detrimental effects of respiratory drive and effort can lead to improved patient outcomes. Two non-invasive estimation methods, occlusion pressure (Pocc) and P0.1, have been proposed for achieving lung- and diaphragm-protective ventilation. However, their relationship and interpretation in neuro-ICU patients are not well understood. P0.1 is the airway pressure measured during a 100-millisecond occlusion of the inspiratory port. It reflects the neural drive from the respiratory centers to the diaphragm and respiratory muscles, indicating the patient's respiratory drive during the initiation of each breath. Occlusion pressure, measured during a brief inspiratory pause against a closed airway, provides information about the inspiratory muscles' strength and the system's total resistance and compliance. Research Objective: Understanding the relationship between Pocc and P0.1 in brain-injured patients can provide insights into the interpretation of these values during pressure support ventilation. This knowledge can contribute to determining extubation readiness and optimizing ventilation strategies to improve patient outcomes. The central goal is to assess a study protocol for determining the relationship between Pocc and P0.1 in brain-injured patients on pressure support ventilation and their ability to predict successful extubation. Additionally, comparing these values between brain-damaged and non-brain-damaged patients may provide valuable insights. Key Areas of Inquiry: 1. How do Pocc and P0.1 values correlate in brain-injured patients undergoing pressure support ventilation? 2. To what extent can Pocc and P0.1 values serve as predictive indicators for successful extubation in patients with brain injuries? 3. What differentiates the Pocc and P0.1 values of patients with brain injuries from those without? Methodology: P0.1 and occlusion pressures are standard measurements for pressure support ventilation patients, taken by attending doctors as per protocol. We utilize electronic patient records for existing data. An unpaired t-test will be conducted to compare P0.1 and Pocc values between the two study groups. Associations between P0.1, Pocc, and other study variables, such as extubation, will be explored with simple regression and correlation analysis. Depending on how the data evolve, subgroup analysis will be performed for patients with and without extubation failure. Results: While it is anticipated that neuro patients may exhibit a high respiratory drive, the linkage between such elevation, quantified by P0.1, and successful extubation remains unknown. The analysis will focus on determining the ability of these values to predict successful extubation and their potential impact on ventilation strategies. Conclusion: Further research is pending to fully understand the potential of these indices and their impact on mechanical ventilation in different patient populations and clinical scenarios. Understanding these relationships can aid in determining extubation readiness and tailoring ventilation strategies to improve patient outcomes in this specific patient population. Additionally, it is vital to account for the influence of sedatives, neurological scores, and BMI on respiratory drive and occlusion pressures to ensure a comprehensive analysis.
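As an illustration of the planned analysis, the Python sketch below runs an unpaired t-test between two groups and a correlation between P0.1 and Pocc using SciPy; the numerical values are hypothetical placeholders, not patient data, and the study protocol itself does not specify the software used.

import numpy as np
from scipy import stats

# Hypothetical example values (cmH2O); illustrative only, not patient data.
p01_brain_injured = np.array([2.1, 3.4, 1.8, 4.0, 2.7, 3.1])
p01_controls = np.array([1.5, 2.0, 1.2, 2.4, 1.8, 1.6])
pocc_brain_injured = np.array([-8.0, -12.5, -6.5, -14.0, -9.5, -11.0])

# Unpaired t-test comparing P0.1 between the two study groups.
t_stat, p_val = stats.ttest_ind(p01_brain_injured, p01_controls, equal_var=False)

# Correlation between P0.1 and occlusion pressure within the brain-injured group.
r, r_p = stats.pearsonr(p01_brain_injured, pocc_brain_injured)

print(f"t = {t_stat:.2f}, p = {p_val:.3f}; Pearson r = {r:.2f} (p = {r_p:.3f})")

Simple linear regression against outcome variables such as extubation success would follow the same pattern, for example with stats.linregress or a logistic model for the binary endpoint.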

Keywords: brain damage, diaphragm dysfunction, occlusion pressure, p0.1, respiratory drive

Procedia PDF Downloads 46
135 Comparing Radiographic Detection of Simulated Syndesmosis Instability Using Standard 2D Fluoroscopy Versus 3D Cone-Beam Computed Tomography

Authors: Diane Ghanem, Arjun Gupta, Rohan Vijayan, Ali Uneri, Babar Shafiq

Abstract:

Introduction: Ankle sprains and fractures often result in syndesmosis injuries. Unstable syndesmotic injuries result from relative motion between the distal ends of the tibia and fibula, an anatomic juncture which should otherwise be rigid, and warrant operative management. Clinical and radiological evaluation of intraoperative syndesmosis stability remains a challenging task, as traditional 2D fluoroscopy is limited to uniplanar translational displacement. The purpose of this pilot cadaveric study is to compare 2D fluoroscopy and 3D cone beam computed tomography (CBCT) measurements of stress-induced syndesmosis displacement. Methods: Three fresh-frozen lower legs underwent 2D fluoroscopy and 3D CIOS CBCT to measure syndesmosis position before dissection. Syndesmotic injury was simulated by resecting (1) the anterior inferior tibiofibular ligament (AITFL), then (2) the posterior inferior tibiofibular ligament (PITFL) and the inferior transverse ligament (ITL) simultaneously, followed by (3) the interosseous membrane (IOM). Manual external rotation and the Cotton stress test were performed after each of the three resections, and 2D and 3D images were acquired. Relevant 2D and 3D parameters included the tibiofibular overlap (TFO), tibiofibular clear space (TCS), relative rotation of the fibula, and anterior-posterior (AP) and medial-lateral (ML) translations of the fibula relative to the tibia. Parameters were measured by two independent observers. Inter-rater reliability was assessed by the intraclass correlation coefficient (ICC) to determine measurement precision. Results: Significant mismatches were found in the trends between the 2D and 3D measurements when assessing TFO, TCS, and AP translation across the different resection states. Using 3D CBCT, TFO was inversely proportional to the number of resected ligaments, while TCS was directly proportional to it across all cadavers and 'resection + stress' states. Using 2D fluoroscopy, this trend was not respected under the Cotton stress test. 3D AP translation did not show a reliable trend, whereas 2D AP translation of the fibula was positive under the Cotton stress test and negative under external rotation. 3D relative rotation of the fibula, assessed using the Tang et al. ratio method and the Beisemann et al. angular method, suggested slight overall internal rotation with complete resection of the ligaments, with a change of less than 2 mm, the threshold corresponding to the buffer commonly used to account for physiologic laxity as per the surgeon's clinical judgment. Excellent agreement (>0.90) was found between the two independent observers for each of the parameters in both 2D and 3D (overall ICC 0.9968, 95% CI 0.995-0.999). Conclusions: 3D CIOS CBCT appears to reliably depict the trends in TFO and TCS. This might be due to the additional detection of relevant rotational malpositions of the fibula, in comparison to standard 2D fluoroscopy, which is limited to a single-plane translation. A better understanding of 3D imaging may help surgeons identify the precise measurement planes needed to achieve better syndesmosis repair.
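A minimal sketch of the inter-rater reliability computation reported above is shown below using the pingouin package in Python; this is only one possible implementation (the study does not state its software), and the two-observer measurements are hypothetical placeholders rather than the study's data.

import pandas as pd
import pingouin as pg

# Hypothetical tibiofibular overlap (mm) measured by two observers on five
# specimen/stress states; illustrative only, not data from this study.
df = pd.DataFrame({
    "specimen": [1, 2, 3, 4, 5] * 2,
    "observer": ["A"] * 5 + ["B"] * 5,
    "tfo_mm": [8.1, 6.9, 5.4, 4.2, 3.0,
               8.0, 7.1, 5.3, 4.4, 3.1],
})

# Intraclass correlation coefficients (ICC) across the two raters.
icc = pg.intraclass_corr(data=df, targets="specimen", raters="observer", ratings="tfo_mm")
print(icc[["Type", "ICC", "CI95%"]])

An ICC close to 1 with a narrow confidence interval, as in the reported overall ICC of 0.9968, indicates that the two observers' measurements are nearly interchangeable.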

Keywords: 2D fluoroscopy, 3D computed tomography, image processing, syndesmosis injury

Procedia PDF Downloads 46