266 Industrial Waste Multi-Metal Ion Exchange
Authors: Thomas S. Abia II
Abstract:
Intel Chandler Site has internally developed its first-of-kind (FOK) facility-scale wastewater treatment system to achieve multi-metal ion exchange. The process was carried out using a serial process train of carbon filtration, pH/ORP adjustment, and cationic exchange purification to treat dilute metal wastewater (DMW) discharged from a substrate packaging factory. Over a trial period of 10 months, a total of 3,271 samples were collected and statistically analyzed (average baseline ± standard deviation) to evaluate the performance of a 95-gpm, multi-reactor continuous copper ion exchange treatment system that was subsequently retrofitted for manganese ion exchange to meet environmental regulations. The system is also equipped with an inline acid and hot caustic regeneration system to rejuvenate exhausted IX resins and occasionally remove surface crud. Findings from lab-scale studies were translated into system operating modifications through multiple trial-and-error experiments. Although the DMW treatment system failed to meet internal performance specifications for manganese output, it was observed to remove the cation notwithstanding the prevalence of copper in the waste stream. Accordingly, the average manganese output declined from 6.5 ± 5.6 mg·L⁻¹ at pre-pilot to 1.1 ± 1.2 mg·L⁻¹ post-pilot (83% baseline reduction). This milestone was achieved despite the average influent manganese to DMW increasing from 1.0 ± 13.7 mg·L⁻¹ at pre-pilot to 2.1 ± 0.2 mg·L⁻¹ post-pilot (110% baseline uptick). Likewise, the pre-trial and post-trial average influent copper values to DMW were 22.4 ± 10.2 mg·L⁻¹ and 32.1 ± 39.1 mg·L⁻¹, respectively (43% baseline increase). As a result, the pre-trial and post-trial average copper output values were 0.1 ± 0.5 mg·L⁻¹ and 0.4 ± 1.2 mg·L⁻¹, respectively (300% baseline uptick).
In conclusion, the operating pH range upstream of treatment (between 3.5 and 5) was shown to be the largest single point of influence for optimizing manganese uptake during multi-metal ion exchange. However, the high variability of the influent copper-to-manganese ratio was observed to adversely impact system functionality. This paper discusses the operating parameters, such as pH and oxidation-reduction potential (ORP), that were shown to significantly influence the functional versatility of the ion exchange system. It also discusses limitations of the treatment system, such as influent copper-to-manganese ratio variations, operational configuration, waste by-product management, and system recovery requirements, to provide a balanced assessment of the multi-metal ion exchange process. Overall, the paper assesses the feasibility of ion exchange for metals manufacturing facilities that lack the capability to expand hardware due to real estate restrictions, aggressive schedules, or budgetary constraints.
Keywords: copper, industrial wastewater treatment, multi-metal ion exchange, manganese
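The percentage figures in the abstract follow directly from the reported pre- and post-pilot means; a minimal sketch of the baseline-change arithmetic (the function name is illustrative):

```python
def percent_change(pre: float, post: float) -> float:
    """Percent change of the mean relative to the pre-pilot baseline."""
    return (post - pre) / pre * 100.0

# Mean values (mg/L) reported in the abstract
mn_out = percent_change(6.5, 1.1)    # manganese output: ~ -83% (the reduction)
mn_in  = percent_change(1.0, 2.1)    # influent manganese: +110% uptick
cu_in  = percent_change(22.4, 32.1)  # influent copper: ~ +43% increase
cu_out = percent_change(0.1, 0.4)    # copper output: +300% uptick
```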
Procedia PDF Downloads 143
265 Active Vibration Reduction for a Flexible Structure Bonded with Sensor/Actuator Pairs on Efficient Locations Using a Developed Methodology
Authors: Ali H. Daraji, Jack M. Hale, Ye Jianqiao
Abstract:
With the extensive use of high specific strength structures to optimise loading capacity and material cost in aerospace and most engineering applications, much effort has been expended to develop intelligent structures for active vibration reduction and structural health monitoring. These structures are highly flexible, have inherently low internal damping, and exhibit large vibration amplitudes with long decay times. Modifying such structures by adding lightweight piezoelectric sensors and actuators at efficient locations, integrated with an optimal control scheme, is considered an effective solution for structural vibration monitoring and control. The size and location of the sensors and actuators are important research topics, as they affect the level of vibration detection and reduction and the amount of energy demanded by a controller. Several methodologies have been presented to determine the optimal location of a limited number of sensors and actuators for small-scale structures. However, these studies have tackled the problem directly, measuring a fitness function based on eigenvalues and eigenvectors obtained for numerous combinations of sensor/actuator pair locations and converging on an optimal set using heuristic optimisation techniques such as genetic algorithms. This is computationally expensive for both small- and large-scale structures when a number of sensor/actuator (s/a) pairs must be optimised to suppress multiple vibration modes. This paper proposes an efficient method to determine optimal locations for a limited number of sensor/actuator pairs for active vibration reduction of a flexible structure based on the finite element method and Hamilton’s principle.
The current work takes the simplified approach of modelling a structure with sensors at all locations, subjecting it to an external force to excite the various modes of interest, and noting the locations of the sensors giving the largest average percentage effectiveness, computed by dividing each sensor's output voltage by the maximum output for each mode. The methodology was implemented for a cantilever plate under external force excitation to find the optimal distribution of six sensor/actuator pairs to suppress the first six modes of vibration. It is shown that the resulting optimal sensor locations agree well with published optimal locations, but with very much reduced computational effort and higher effectiveness. Furthermore, it is shown that collocated sensor/actuator pairs placed at these locations give very effective active vibration reduction using an optimal linear quadratic control scheme.
Keywords: optimisation, plate, sensor effectiveness, vibration control
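The sensor-effectiveness measure described above can be sketched as follows; the voltage matrix here is toy data standing in for the finite element output (shapes and values are illustrative, not the study's results):

```python
import numpy as np

def sensor_effectiveness(voltages: np.ndarray) -> np.ndarray:
    """voltages: (n_modes, n_locations) array of sensor output per mode.
    Normalises each mode by its maximum sensor output (as a percentage),
    then averages across modes to score each candidate location."""
    per_mode = voltages / voltages.max(axis=1, keepdims=True) * 100.0
    return per_mode.mean(axis=0)

# Toy data: 2 modes, 3 candidate locations
v = np.array([[2.0, 4.0, 1.0],
              [3.0, 3.0, 6.0]])
eff = sensor_effectiveness(v)   # average percentage effectiveness per location
best = np.argsort(eff)[::-1]    # most effective locations first, for s/a placement
```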
Procedia PDF Downloads 231
264 Condition Assessment and Diagnosis for Aging Drinking Water Pipeline According to Scientific and Reasonable Methods
Authors: Dohwan Kim, Dongchoon Ryou, Pyungjong Yoo
Abstract:
In public water facilities, drinking water distribution systems have played an important role along with water purification systems. The water distribution network is one of the most expensive components of water supply infrastructure. To increase the drinking rate of tap water, advanced water treatment processes such as granular activated carbon and membrane filtration have been used by water service providers in Korea. However, public distrust of tap water persists. Therefore, accurate diagnosis and condition assessment of water pipelines are required to supply clean water. Internal corrosion of water pipes increases over time, and the cross-sectional area of a pipe is reduced by rust, deposits, and tubercles. Water supply capacity therefore decreases, and greater hydraulic pump capacity is required to deliver the same amount of water as in the initial condition; otherwise, areas of poor supply occur as water pressure falls. To address these problems, water managers and engineers should routinely check the current status of water pipes, such as leakage and pipe damage, and should be able to respond rapidly and make accurate estimates when problems occur. In Korea, replacement and rehabilitation of aging drinking water pipes are carried out based simply on how long the pipes have been buried, so water distribution system management may not consider the entire pipeline network. The long-term design and upgrading of a water distribution network should address economic, social, environmental, health, hydraulic, and other technical issues. This is a multi-objective problem with a high level of complexity. In this study, the thickness of the old water pipes, corrosion levels of the inner and outer pipe surfaces, basic data research (i.e., pipe types, buried years, accident records, embedded environment, etc.), specific resistance of soil, ultimate tensile strength and elongation of metal pipes, sample characteristics, and chemical composition analysis were performed on aging drinking water pipes. The water pipe samples used in this study were cement mortar lining ductile cast iron pipe (CML-DCIP, diameter 100 mm) and epoxy lining steel pipe (diameters 65 and 50 mm). The buried years of the CML-DCIP and epoxy lining steel pipes were 32 and 23 years, respectively. The embedded environment has been a marine reclamation zone since the 1940s. The result of this study was that the CML-DCIP needed replacement, while the epoxy lining steel pipe was still serviceable.
Keywords: drinking water distribution system, water supply, replacement, rehabilitation, water pipe
Procedia PDF Downloads 258
263 Life-Cycle Assessment of Residential Buildings: Addressing the Influence of Commuting
Authors: J. Bastos, P. Marques, S. Batterman, F. Freire
Abstract:
Due to the demands of a growing urban population, it is crucial to manage urban development and its associated environmental impacts. While most environmental analyses have addressed buildings and transportation separately, both the design and the location of a building affect environmental performance, and focusing on one or the other can shift impacts and overlook improvement opportunities for more sustainable urban development. Recently, several life-cycle (LC) studies of residential buildings have integrated user transportation, focusing exclusively on primary energy demand and/or greenhouse gas emissions. Additionally, most papers considered only private transportation (mainly car). Although it is likely to have the largest share both in terms of use and associated impacts, exploring the variability associated with mode choice is relevant for comprehensive assessments and, eventually, for supporting decision-makers. This paper presents a life-cycle assessment (LCA) of a residential building in Lisbon (Portugal), addressing building construction, use, and user transportation (commuting with private and public transportation). Five environmental indicators or categories are considered: (i) non-renewable primary energy (NRE), (ii) greenhouse gas intensity (GHG), (iii) eutrophication (EUT), (iv) acidification (ACID), and (v) ozone layer depletion (OLD). In a first stage, the analysis addresses the overall life-cycle considering the statistical mode mix for commuting at the residence location. A comparative analysis then examines the different available transportation modes to address the influence of mode choice variability on the results. The results highlight the large contribution of transportation to the overall LC results in all categories.
NRE and GHG show strong correlation, as the three LC phases contribute with similar shares to both of them: building construction accounts for 6-9%, building use for 44-45%, and user transportation for 48% of the overall results. However, for other impact categories there is a large variation in the relative contribution of each phase. Transport is the most significant phase in OLD (60%); however, in EUT and ACID building use has the largest contribution to the overall LC (55% and 64%, respectively). In these categories, transportation accounts for 31-38%. A comparative analysis was also performed for four alternative transport modes for the household commuting: car, bus, motorcycle, and company/school collective transport. The car has the largest results in all impact categories. When compared to the overall LC with commuting by car, mode choice accounts for a variability of about 35% in NRE, GHG and OLD (the categories where transportation accounted for the largest share of the LC), 24% in EUT and 16% in ACID. NRE and GHG show a strong correlation because all modes have internal combustion engines. The second largest results for NRE, GHG and OLD are associated with commuting by motorcycle; however, for ACID and EUT this mode has better performance than bus and company/school transport. No single transportation mode performed best in all impact categories. Integrated assessments of buildings are needed to avoid shifts of impacts between life-cycle phases and environmental categories, and ultimately to support decision-makers.
Keywords: environmental impacts, LCA, Lisbon, transport
Procedia PDF Downloads 362
262 The Influence of Newest Generation Butyrate Combined with Acids, Medium Chain Fatty Acids and Plant Extract on the Performance and Physiological State of Laying Hens
Authors: Vilma Sasyte, Vilma Viliene, Asta Raceviciute-Stupeliene, Agila Dauksiene, Romas Gruzauskas, Virginijus Slausgalvis, Jamal Al-Saifi
Abstract:
The aim of the present study was to investigate the effect of a butyrate, acids, medium-chain fatty acids and plant extract mixture on the performance, blood, and gastrointestinal tract characteristics of laying hens. For a period of 8 weeks, 24 Hisex Brown laying hens were randomly assigned to 2 dietary treatments: 1) a control wheat-corn-soybean meal based diet (Control group), 2) the control diet supplemented with the mixture of butyrate, acids, medium chain fatty acids and plant extract (Lumance®) at the level of 1.5 g/kg of feed (Experimental group). Hens were fed a crumbled diet at 125 g per day. Housing and feeding conditions were the same for all groups and met the growth requirements for laying hens of the Hisex Brown strain. In the blood serum, total protein, bilirubin, cholesterol, HDL and LDL cholesterol, triglycerides, glucose, GGT, GOT, GPT, alkaline phosphatase, alpha amylase, C-reactive protein, uric acid, and lipase were analyzed. The development of the intestines and internal organs (intestinal length, intestinal weight, and the weights of the glandular and muscular stomach, pancreas, heart, and liver) was determined. The concentration of short chain fatty acids in caecal content was measured by HPLC. The results of the present study showed that supplementation of the feed additive at 1.5 g/kg affected egg production and the feed conversion ratio for the production of 1 kg of egg mass. Dietary supplementation of the analyzed additive increased the concentrations of triglycerides, GOT, and alkaline phosphatase and decreased uric acid content compared with the control group (P<0.05). No significant difference from the control was observed for the other blood indices. The addition of the feed additive to laying hens’ diets increased intestinal weight by 11% and liver weight by 14% compared with the control group (P<0.05).
The short chain fatty acids (propionic, acetic and butyric acids) in the caecum of laying hens in the experimental group decreased compared with the control group. The supplementation of the mixture of butyrate, acids, medium-chain fatty acids and plant extract at the level of 1.5 g/kg in laying hens’ diets affected the performance, some gastrointestinal tract functions, and blood parameters of laying hens.
Keywords: acids, butyrate, laying hens, MCFA, performance, plant extract, physiological state
Procedia PDF Downloads 296
261 Comparing the Knee Kinetics and Kinematics during Non-Steady Movements in Recovered Anterior Cruciate Ligament Injured Badminton Players against an Uninjured Cohort: Case-Control Study
Authors: Anuj Pathare, Aleksandra Birn-Jeffery
Abstract:
Background: The Anterior Cruciate Ligament (ACL) helps stabilize the knee joint by minimizing anterior tibial translation. ACL injury is common in racquet sports and often occurs due to sudden acceleration, deceleration, or changes of direction. In badminton, this mechanism most commonly occurs during landing after an overhead stroke. Knee biomechanics during dynamic movements such as walking, running, and stair negotiation do not return to normal for more than a year after an ACL reconstruction. This change in biomechanics may lead to re-injury while performing non-steady movements during sports, where these injuries are most prevalent. Aims: To compare whether the knee kinetics and kinematics in ACL injury recovered athletes return to the same level as those of an uninjured cohort during standard movements used for clinical assessment and badminton shots. Objectives: The objectives of the study were to determine: knee valgus during the single leg squat, vertical drop jump, net shot and drop shot; degree of internal or external rotation during the single leg squat, vertical drop jump, net shot and drop shot; and maximum knee flexion during the single leg squat, vertical drop jump and net shot. Methods: This case-control study included 14 participants: three ACL injury recovered athletes and 11 uninjured participants. The participants performed various functional tasks, including the vertical drop jump, single leg squat, forehand net shot, and forehand drop shot. The data were analysed using a two-way ANOVA, and reliability was evaluated using the intraclass correlation coefficient. Results: The data showed a significant decrease in the range of knee rotation in ACL injured participants as compared to the uninjured cohort (F₇,₅₅₆=2.37; p=0.021). There was also a decrease in the maximum knee flexion angles and an increase in knee valgus angles in ACL injured participants, although these were not statistically significant.
Conclusion: There was a significant decrease in the knee rotation angles of the ACL injured participants, which could be a potential cause of re-injury in these athletes in the future. Although the results for the decrease in maximum knee flexion angles and the increase in knee valgus angles were not significant, this may be due to the limited sample of ACL injured participants; there is potential for these to be identified as variables of interest in the rehabilitation of ACL injuries. These changes in knee biomechanics could be vital in the rehabilitation of ACL injured athletes in the future, and the inclusion of sport-based tasks, e.g., the net shot, alongside standard protocol movements for ACL assessment would provide a better measure of the athlete's rehabilitation.
Keywords: ACL, biomechanics, knee injury, racquet sport
Procedia PDF Downloads 174
260 In-Situ Formation of Particle Reinforced Aluminium Matrix Composites by Laser Powder Bed Fusion of Fe₂O₃/AlSi12 Powder Mixture Using Consecutive Laser Melting+Remelting Strategy
Authors: Qimin Shi, Yi Sun, Constantinus Politis, Shoufeng Yang
Abstract:
In-situ preparation of particle-reinforced aluminium matrix composites (PRAMCs) by laser powder bed fusion (LPBF) additive manufacturing is a promising strategy to strengthen traditional Al-based alloys. The laser-driven thermite reaction can be a practical mechanism for in-situ synthesis of PRAMCs. However, introducing oxygen through the addition of Fe₂O₃ makes the powder mixture highly prone to forming porosity and Al₂O₃ films during LPBF, bringing challenges to producing dense Al-based materials. Therefore, this work develops a processing strategy that combines consecutive high-energy laser melting scanning with low-energy laser remelting scanning to prepare PRAMCs from a Fe₂O₃/AlSi12 powder mixture. The powder mixture consists of 5 wt% Fe₂O₃ and the remainder AlSi12 powder; the 5 wt% Fe₂O₃ addition aims to achieve balanced strength and ductility. A high relative density (98.2 ± 0.55%) was successfully obtained by optimizing the laser melting surface energy density (Emelting) and laser remelting surface energy density (Eremelting) to Emelting = 35 J/mm² and Eremelting = 5 J/mm². Results further reveal the necessity of increasing Emelting to improve the molten metal's spreading and wetting by breaking up the Al₂O₃ films surrounding the molten pools; however, the high-energy laser melting produced considerable porosity, including H₂-, O₂- and keyhole-induced pores. The subsequent low-energy laser remelting could close the resulting internal pores, backfill open gaps, and smoothen solidified surfaces. As a result, the material was densified by repeating laser melting and laser remelting layer by layer. Even with two laser scans per layer, the microstructure still shows fine cellular Si networks with Al grains inside (grain size of about 370 nm) and in-situ nano-precipitates (Al₂O₃, Si, and Al-Fe(-Si) intermetallics).
Finally, the fine microstructure, nano-structured dispersion strengthening, and high level of densification strengthened the in-situ PRAMCs, which reached a yield strength of 426 ± 4 MPa and a tensile strength of 473 ± 6 MPa. Furthermore, the results are expected to provide valuable information for processing other powder mixtures with severe porosity/oxide-film formation potential, given the demonstrated contribution of the laser melting/remelting strategy to densifying the material and obtaining good mechanical properties during LPBF.
Keywords: densification, laser powder bed fusion, metal matrix composites, microstructures, mechanical properties
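The reported melting and remelting energy densities are consistent with the standard LPBF surface energy density relation E = P/(v·h); the laser power, scan speed, and hatch spacing below are illustrative assumptions, not the study's actual settings:

```python
def surface_energy_density(power_w: float, speed_mm_s: float, hatch_mm: float) -> float:
    """Surface energy density in J/mm^2: laser power divided by the product
    of scan speed and hatch spacing. Parameter values used below are
    hypothetical examples, not the parameters used in the study."""
    return power_w / (speed_mm_s * hatch_mm)

# e.g. a 350 W laser at 40 mm/s with 0.25 mm hatch gives 35 J/mm^2 (the
# reported E_melting); dropping power to 50 W gives the 5 J/mm^2 remelt pass.
e_melt   = surface_energy_density(350.0, 40.0, 0.25)
e_remelt = surface_energy_density(50.0, 40.0, 0.25)
```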
Procedia PDF Downloads 155
259 Comparison between the Quadratic and the Cubic Linked Interpolation on the Mindlin Plate Four-Node Quadrilateral Finite Elements
Authors: Dragan Ribarić
Abstract:
We employ the so-called problem-dependent linked interpolation concept to develop two cubic 4-node quadrilateral Mindlin plate finite elements with 12 external degrees of freedom. In problem-independent linked interpolation, the interpolation functions are independent of any problem material parameters and the rotation fields are not expressed in terms of the nodal displacement parameters. On the contrary, in problem-dependent linked interpolation, the interpolation functions depend on the material parameters and the rotation fields are expressed in terms of the nodal displacement parameters. Two cubic 4-node quadrilateral plate elements are presented, named Q4-U3 and Q4-U3R5. The first is modelled with one displacement and two rotation degrees of freedom at each of the four element nodes, while the second has five additional internal degrees of freedom, which achieve polynomial completeness of the cubic form and can be statically condensed within the element. Both elements pass the constant-bending patch test exactly, as well as the non-zero constant-shear patch test on the oriented regular mesh geometry in the case of cylindrical bending. For any mesh shape, the elements have the correct rank, and only the three eigenvalues corresponding to the rigid body motions are zero. There are no additional spurious zero-energy modes responsible for instability of the finite element models. In comparison with the problem-independent cubic linked interpolation implemented in Q9-U3, the nine-node plate element, significantly fewer degrees of freedom are employed in the model while retaining interpolation conformity between adjacent elements. The presented elements are also compared to the existing problem-independent quadratic linked-interpolation element Q4-U2 and to other known elements that use quadratic or cubic linked interpolation, by testing them on several benchmark examples.
A simple functional upgrade from quadratic to cubic linked interpolation, implemented in the Q4-U3 element, showed no significant improvement over the quadratic linked form of the Q4-U2 element. Only when the additional bubble terms are incorporated in the displacement and rotation function fields, completing the full cubic linked interpolation form, is a qualitative improvement achieved in the Q4-U3R5 element. Nevertheless, the locking problem persists for both presented elements, as in all pure displacement-based elements applied to very thin plates modelled by coarse meshes. However, good and even slightly better performance can be observed for the Q4-U3R5 element compared with elements from the literature when the meshes are moderately dense and the plate thickness is not extremely small. In some cases, it is comparable to or even better than the Q9-U3 element, which has as many as 12 more external degrees of freedom. A significant improvement is noticeable in particular when modelling very skew plates, models with singularities in the stress fields, and circular plates with distorted meshes.
Keywords: Mindlin plate theory, problem-independent linked interpolation, problem-dependent interpolation, quadrilateral displacement-based plate finite elements
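The rank check described in the abstract (exactly three zero eigenvalues for the rigid body motions, no spurious zero-energy modes) can be sketched numerically; the matrix below is a synthetic stand-in with a known nullspace, not an actual Mindlin element stiffness matrix:

```python
import numpy as np

def count_zero_energy_modes(K: np.ndarray, tol: float = 1e-9) -> int:
    """Count near-zero eigenvalues of a symmetric element stiffness matrix.
    A well-behaved plate bending element should show exactly three (the
    rigid body motions); any extra zeros indicate spurious mechanisms."""
    eigvals = np.linalg.eigvalsh(K)
    scale = max(abs(eigvals).max(), 1.0)       # relative zero threshold
    return int(np.sum(np.abs(eigvals) < tol * scale))

# Toy check: a 6x6 positive semi-definite matrix with nullity exactly 3
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))   # rank 3 (almost surely)
K = A.T @ A                        # symmetric PSD, nullspace dimension 3
n_zero = count_zero_energy_modes(K)
```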
Procedia PDF Downloads 312
258 The Adaptive Role of Negative Emotions in Optimal Functioning
Authors: Brianne Nichols, John A. Parkinson
Abstract:
Positive Psychology has provided a rich understanding of the beneficial effects of positive emotions in relation to optimal functioning, and research has been devoted to promoting states of positive feeling and thinking. While this is a worthwhile pursuit, positive emotions are not useful in all contexts; some situations may require the individual to make use of their negative emotions to reach a desired end state. To account for the potential value of a wider range of emotional experiences that are common to the human condition, Positive Psychology needs to expand its horizons and investigate how individuals achieve positive outcomes using varied means. The current research seeks to understand the positive psychology of fear of failure (FF), a commonly experienced negative emotion relevant to most life domains. On the one hand, this emotion has been linked with avoidance motivation and self-handicapping behaviours; on the other, FF has been shown to act as a drive that moves the individual forward. To fully capture the depth of this highly subjective emotional experience and understand the circumstances under which FF may be adaptive, this study adopted a mixed methods design using SenseMaker, a web-based tool that combines the richness of narratives with the objectivity of numerical data. Two hundred participants, consisting mostly of undergraduate university students, shared a story of a time in the recent past when they feared failing to achieve a valued goal. To avoid researcher bias in the interpretation of narratives, participants self-signified their stories in a tagging system based on the researchers’ aim to explore the role of past failures, the cognitive, emotional and behavioural profile of individuals high and low in FF, and the relationship between these factors. In addition, the roles of perceived personal control and self-esteem in relation to FF were investigated using self-report questionnaires.
Results from quantitative analyses indicated that individuals with high levels of FF, compared to low, were strongly influenced by past failures and preoccupied with their thoughts and emotions relating to the fear. This group also reported an unwillingness to accept their internal experiences, which in turn was associated with withdrawal from goal pursuit. Furthermore, self-esteem was found to mediate the relationship between perceived control and FF, suggesting that self-esteem, with or without control beliefs, may have the potential to buffer against high FF. It is hoped that the insights provided by the current study will inspire future research to explore the ways in which ‘acceptance’ may help individuals keep moving towards a goal despite the presence of FF, and whether cultivating a non-contingent self-esteem is the key to resilience in the face of failures.
Keywords: fear of failure, goal-pursuit, negative emotions, optimal functioning, resilience
Procedia PDF Downloads 195
257 Acrylamide Concentration in Cakes with Different Caloric Sweeteners
Authors: L. García, N. Cobas, M. López
Abstract:
Acrylamide, a probable carcinogen, is formed in food processed at high temperature (>120 °C) when the free amino acid asparagine reacts with reducing sugars, mainly glucose and fructose. Repeated heating of cane juice could potentially form acrylamide during brown sugar production. This study aims to determine whether using panela in yogurt cake preparation increases acrylamide formation. A secondary aim is to analyze the acrylamide concentration in four cake confections with different caloric sweetener ingredients: beet sugar (BS), cane sugar (CS), panela (P), and a panela and chocolate mix (PC). The doughs were obtained by combining ingredients in a planetary mixer. A model system made up of flour (25%), caloric sweetener (25%), eggs (23%), yogurt (15.7%), sunflower oil (9.4%), and brewer's yeast (2%) was applied to the BS, CS and P cakes. The ingredients of the PC cakes varied: flour (21.5%), panela chocolate (21.5%), eggs (25.9%), yogurt (18%), sunflower oil (10.8%), and brewer’s yeast (2.3%). The preparations were baked for 45 min at 180 °C. Moisture was estimated following AOAC methods. Protein was determined by the Kjeldahl method. Ash percentage was calculated by weight loss after pyrolysis (≈600 °C). Fat content was measured using liquid-solid extraction in hydrolyzed raw ingredients and final confections. Carbohydrates were determined by difference and total sugars by the Luff-Schoorl method, based on the iodometric determination of copper ions. Finally, acrylamide content was determined by LC-MS with an isocratic system (phase A: 97.5% water with 0.1% formic acid; phase B: 2.5% methanol), using an internal standard procedure. Statistical analysis was performed using SPSS v.23. One-way analysis of variance determined differences between acrylamide content and compositional analysis, with caloric sweetener as a fixed effect. Significance levels were determined by applying Duncan's t-test (p<0.05).
The P cakes showed a lower energy value than the other baked products; their sugar content was similar to that of BS and CS, with 6.1% mean crude protein. Acrylamide content in the caloric sweeteners was similar to previously reported values; however, P and PC showed significantly higher concentrations, probably explained by the production procedure applied. Acrylamide formation depends on both reducing sugar and asparagine concentration and availability. The beet sugar samples themselves did not present acrylamide concentrations above the detection and quantification limits; however, the highest acrylamide content among the cakes was measured in BS. This may be due to the higher concentration of reducing sugars and asparagine in the other raw ingredients. The cakes made with panela, cane sugar, or panela with chocolate did not differ in acrylamide content. The lack of asparagine measurements constitutes a limitation. Cakes made with panela showed lower acrylamide formation than products elaborated with beet or cane sugar.
Keywords: beet sugar, cane sugar, panela, yogurt cake
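Quantification against an internal standard, as referenced in the LC-MS methods, typically follows a single-point relation; the peak areas, spike concentration, and response factor below are illustrative assumptions, not the study's measurements:

```python
def internal_standard_conc(area_analyte: float, area_is: float,
                           conc_is: float, rrf: float = 1.0) -> float:
    """Single-point internal-standard quantification:
    C_analyte = (A_analyte / A_IS) * C_IS / RRF,
    where RRF is the relative response factor from calibration.
    All numeric values used below are hypothetical examples."""
    return area_analyte / area_is * conc_is / rrf

# e.g. an analyte peak half the internal-standard peak, with a 10 ug/kg
# IS spike and an RRF of 1.0, quantifies as 5 ug/kg
c = internal_standard_conc(5_000.0, 10_000.0, 10.0)
```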
Procedia PDF Downloads 66
256 Slope Stability and Landslides Hazard Analysis, Limitations of Existing Approaches, and a New Direction
Authors: Alisawi Alaa T., Collins P. E. F.
Abstract:
The analysis and evaluation of slope stability and landslide hazards are critically important in civil engineering projects and in broader considerations of safety. The level of slope stability risk should be identified because of its significant and direct financial and safety effects. Slope stability hazard analysis is performed considering static and/or dynamic loading circumstances. To reduce and/or prevent the failure hazard caused by landslides, a sophisticated and practical hazard analysis method using advanced constitutive modeling should be developed and linked to an effective solution that corresponds to the specific type of slope stability and landslide failure risk. Previous studies on slope stability analysis methods identify the failure mechanism and its corresponding solution. The commonly used approaches include limit equilibrium methods, empirical approaches for rock slopes (e.g., slope mass rating and Q-slope), finite element or finite difference methods, and distinct element codes. This study presents an overview and evaluation of these analysis techniques. Contemporary source materials are used to examine these various methods on the basis of their hypotheses, factor of safety estimation, soil types, load conditions, and analysis conditions and limitations. Limit equilibrium methods play a key role in assessing the level of slope stability hazard. The slope stability safety level can be defined by comparing the shear stress with the shear strength. The slope is considered stable when the forces resisting movement are greater than those driving the movement, with a factor of safety (the ratio of the resisting forces to the driving forces) greater than 1.00.
However, popular and practical methods, including limit equilibrium approaches, are not effective when the slope experiences complex failure mechanisms, such as progressive failure, liquefaction, internal deformation, or creep. The present study represents the first episode of an ongoing project that involves the identification of the types of landslide hazards, assessment of the level of slope stability hazard, development of a sophisticated and practical hazard analysis method, linkage of the failure type of specific landslide conditions to the appropriate solution, and application of an advanced computational method for mapping slope stability properties in the United Kingdom and elsewhere through a geographical information system (GIS) and the inverse distance weighted (IDW) spatial interpolation technique. This study investigates and assesses the different analysis and solution techniques to enhance knowledge of the mechanism of slope stability and landslide hazard analysis and to determine the available solutions for each potential landslide failure risk.Keywords: slope stability, finite element analysis, hazard analysis, landslides hazard
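The IDW spatial interpolation mentioned for mapping slope-stability properties can be sketched as follows; the sample coordinates and factor-of-safety values are hypothetical:

```python
def idw_interpolate(points, x, y, power=2.0):
    """Inverse distance weighted estimate at (x, y).

    points: iterable of (xi, yi, value) samples; weights fall off as
    distance**(-power), so nearer samples dominate the estimate.
    """
    numerator = denominator = 0.0
    for xi, yi, value in points:
        dist_sq = (x - xi) ** 2 + (y - yi) ** 2
        if dist_sq == 0.0:
            return value  # query lies exactly on a sample point
        weight = dist_sq ** (-power / 2.0)
        numerator += weight * value
        denominator += weight
    return numerator / denominator

# hypothetical factor-of-safety samples at four survey points (x, y, FoS)
samples = [(0, 0, 1.2), (1, 0, 1.8), (0, 1, 1.5), (1, 1, 2.0)]
fos_at_centre = idw_interpolate(samples, 0.5, 0.5)
```

At the centre of a symmetric sample grid the weights are equal, so the estimate reduces to the arithmetic mean of the sampled values, which is a quick sanity check for any IDW implementation.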
Procedia PDF Downloads 99255 Factors Influencing the Uptake of Vaccinations amongst Pregnant Women Following the COVID-19 Pandemic
Authors: Jo Parsons, Cath Grimley, Debra Bick, Sarah Hillman, Louise Clarke, Helen Atherton
Abstract:
The problem: Vaccinations are routinely offered to pregnant women in the UK for influenza (flu), pertussis (whooping cough), and COVID-19, yet the uptake of these vaccinations in pregnancy remains low. Pregnant women are at increased risk of hospitalisation, morbidity, and mortality from these preventable illnesses, which can also expose their unborn babies to an increased risk of serious complications, including in utero death. This research aims to explore how pregnant women feel about vaccinations offered during pregnancy (flu, whooping cough, and COVID-19), particularly following the COVID-19 pandemic. It also aims to examine factors influencing women’s decisions about vaccinations during pregnancy and how they feel about their health and vulnerabilities to illness arising from the COVID-19 pandemic. The approach: This is a qualitative study involving semi-structured interviews with pregnant women and midwives in the UK. Interviews with pregnant women explored their views since the COVID-19 pandemic about vaccinations offered during pregnancy and whether the pandemic has influenced perceptions of vulnerability to illness in pregnant women. Interviews with midwives explored vaccination discussions they routinely have with pregnant women and identified some of the barriers to vaccination that pregnant women discuss with them. Pregnant women were recruited via participating hospitals and community groups. Midwives were recruited via participating hospitals and midwife-specific social media groups. All interviews were conducted remotely (using telephone or Microsoft Teams) and analysed using thematic analysis. Findings: 43 pregnant women and 16 midwives were recruited and interviewed. The findings presented will focus on data from pregnant women. Pregnant women reported a wide range of views and vaccination behaviour, and identified several factors influencing their decision whether to accept vaccinations or not. 
These included internal factors (beliefs about susceptibility to illness, perceptions of immunity, fear, and feelings of responsibility), other influences (including the visibility of illness and external influences such as healthcare professional recommendations), vaccination-related factors (beliefs about the effectiveness and safety of vaccinations, their availability and accessibility, and preferences for alternative forms of protection), and COVID-19-specific factors (including COVID-19 vaccinations and COVID-19-specific influences). Implications: The findings identify some of the factors that affect pregnant women’s decisions about whether to have a vaccination and how these decisions have been influenced by COVID-19. They highlight areas where healthcare professional advice needs to focus, such as the provision of information about increased vulnerability to illnesses during pregnancy and consideration of opportunistic vaccination at hospital appointments to maximise uptake of vaccinations during pregnancy. The findings of this study will inform the development of an intervention to increase vaccination uptake amongst pregnant women.Keywords: vaccination, pregnancy, qualitative, interviews, COVID-19
Procedia PDF Downloads 96254 Diagnostic Performance of Mean Platelet Volume in the Diagnosis of Acute Myocardial Infarction: A Meta-Analysis
Authors: Kathrina Aseanne Acapulco-Gomez, Shayne Julieane Morales, Tzar Francis Verame
Abstract:
Mean platelet volume (MPV) is the most accurate measure of the size of platelets and is routinely measured by most automated hematological analyzers. Several studies have shown associations between MPV and cardiovascular risks and outcomes. Although its measurement may provide useful data, MPV remains a diagnostic tool that is yet to be included in routine clinical decision-making. The aim of this systematic review and meta-analysis is to determine summary estimates of the diagnostic accuracy of mean platelet volume for the diagnosis of myocardial infarction among adult patients with angina and/or its equivalents, in terms of sensitivity, specificity, diagnostic odds ratio, and likelihood ratios, and to determine the difference in mean MPV values between those with MI and non-MI controls. The primary search was done through the electronic databases PubMed, Cochrane Review CENTRAL, HERDIN (Health Research and Development Information Network), Google Scholar, Philippine Journal of Pathology, and Philippine College of Physicians Philippine Journal of Internal Medicine. The reference lists of original reports were also searched. Cross-sectional, cohort, and case-control articles studying the diagnostic performance of mean platelet volume in the diagnosis of acute myocardial infarction in adult patients were included in the study. Studies were included if: (1) CBC was taken upon presentation to the ER or upon admission (within 24 hours of symptom onset); (2) myocardial infarction was diagnosed with serum markers, ECG, or according to accepted guidelines by the Cardiology societies (American Heart Association (AHA), American College of Cardiology (ACC), European Society of Cardiology (ESC)); and (3) outcomes were measured as significant difference and/or sensitivity and specificity. The authors independently screened all the identified potential studies for inclusion. 
Eligible studies were appraised using well-defined criteria. Any disagreement between the reviewers was resolved through discussion and consensus. The overall mean MPV value of those with MI (9.702 fl; 95% CI 9.07 – 10.33) was higher than in those of the non-MI control group (8.85 fl; 95% CI 8.23 – 9.46). Interpretation of the calculated t-value of 2.0827 showed that there was a significant difference in the mean MPV values of those with MI and those of the non-MI controls. The summary sensitivity (Se) and specificity (Sp) for MPV were 0.66 (95% CI; 0.59 - 0.73) and 0.60 (95% CI; 0.43 – 0.75), respectively. The pooled diagnostic odds ratio (DOR) was 2.92 (95% CI; 1.90 – 4.50). The positive likelihood ratio of MPV in the diagnosis of myocardial infarction was 1.65 (95% CI; 1.20 – 22.27), and the negative likelihood ratio was 0.56 (95% CI; 0.50 – 0.64). The intended role for MPV in the diagnostic pathway of myocardial infarction would perhaps be best as a triage tool. With a DOR of 2.92, MPV values can discriminate between those who have MI and those without. For a patient with angina presenting with elevated MPV values, it is 1.65 times more likely that he has MI. Thus, it is implied that the decision to treat a patient with angina or its equivalents as a case of MI could be supported by an elevated MPV value.Keywords: mean platelet volume, MPV, myocardial infarction, angina, chest pain
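The reported summary statistics are mutually consistent; a minimal sketch of how the likelihood ratios and diagnostic odds ratio follow from the pooled sensitivity and specificity:

```python
def diagnostic_metrics(sensitivity, specificity):
    """Derive LR+, LR-, and the diagnostic odds ratio from Se and Sp.

    LR+ = Se / (1 - Sp): how much a positive test raises the odds of disease.
    LR- = (1 - Se) / Sp: how much a negative test lowers them.
    DOR = LR+ / LR-: overall discriminative ability of the test.
    """
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    dor = lr_pos / lr_neg
    return lr_pos, lr_neg, dor

# pooled estimates from the meta-analysis: Se = 0.66, Sp = 0.60
lr_pos, lr_neg, dor = diagnostic_metrics(0.66, 0.60)
```

Plugging in the pooled Se of 0.66 and Sp of 0.60 reproduces the abstract's LR+ of 1.65, LR- of 0.56, and DOR of 2.92 (to rounding), confirming the internal consistency of the reported figures.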
Procedia PDF Downloads 86253 Utilizing Extended Reality in Disaster Risk Reduction Education: A Scoping Review
Authors: Stefano Scippo, Damiana Luzzi, Stefano Cuomo, Maria Ranieri
Abstract:
Background: In response to the rise in natural disasters linked to climate change, numerous studies on Disaster Risk Reduction Education (DRRE) have emerged since the '90s, mainly using a didactic transmission-based approach. Effective DRRE should align with an interactive, experiential, and participatory educational model, which can be costly and risky. A potential solution is using simulations facilitated by eXtended Reality (XR). Research Question: This study aims to conduct a scoping review to explore educational methodologies that use XR to enhance knowledge among teachers, students, and citizens about environmental risks, natural disasters (including climate-related ones), and their management. Method: A search string of 66 keywords was formulated, spanning three domains: 1) education and target audience, 2) environment and natural hazards, and 3) technologies. On June 21st, 2023, the search string was used across five databases: EBSCOhost, IEEE Xplore, PubMed, Scopus, and Web of Science. After deduplication and removing papers without abstracts, 2,152 abstracts (published between 2013 and 2023) were analyzed and 2,062 papers were excluded, followed by the exclusion of 56 papers after full-text scrutiny. Excluded studies focused on unrelated technologies, non-environmental risks, and lacked educational outcomes or accessible texts. Main Results: The 34 reviewed papers were analyzed for context, risk type, research methodology, learning objectives, XR technology use, outcomes, and educational affordances of XR. Notably, since 2016, there has been a rise in scientific publications, focusing mainly on seismic events (12 studies) and floods (9), with a significant contribution from Asia (18 publications), particularly Japan (7 studies). Methodologically, the studies were categorized into empirical (26) and non-empirical (8). 
Empirical studies involved user or expert validation of XR tools, while non-empirical studies included systematic reviews and theoretical proposals without experimental validation. Empirical studies were further classified into quantitative, qualitative, or mixed-method approaches. Six qualitative studies involved small groups of users or experts, while 20 quantitative or mixed-method studies used seven different research designs, with most (17) employing a quasi-experimental, one-group post-test design, focusing on XR technology usability over educational effectiveness. Non-experimental studies had methodological limitations, making their results hypothetical and in need of further empirical validation. Educationally, the learning objectives centered on knowledge and skills for surviving natural disaster emergencies. All studies recommended XR technologies for simulations or serious games but did not develop comprehensive educational frameworks around these tools. XR-based tools showed potential superiority over traditional methods in teaching risk and emergency management skills. However, conclusions were more valid in studies with experimental designs; otherwise, they remained hypothetical without empirical evidence. The educational affordances of XR, mainly user engagement, were confirmed by the studies. Authors’ Conclusions: The analyzed literature lacks specific educational frameworks for XR in DRRE, focusing mainly on survival knowledge and skills. There is a need to expand educational approaches to include uncertainty education, developing competencies that encompass knowledge, skills, and attitudes like risk perception.Keywords: disaster risk reduction education, educational technologies, scoping review, XR technologies
Procedia PDF Downloads 24252 Connecting the Dots: Bridging Academia and National Community Partnerships When Delivering Healthy Relationships Programming
Authors: Nicole Vlasman, Karamjeet Dhillon
Abstract:
Over the past four years, the Healthy Relationships Program has been delivered in community organizations and schools across Canada. More than 240 groups have been facilitated in collaboration with 33 organizations. As a result, 2157 youth have been engaged in the programming. The purpose and scope of the Healthy Relationships Program are to offer sustainable, evidence-based skills through small group implementation to prevent violence and promote positive, healthy relationships in youth. The program development has included extensive networking at regional and national levels. The Healthy Relationships Program is currently being implemented, adapted, and researched within the Resilience and Inclusion through Strengthening and Enhancing Relationships (RISE-R) project. Alongside the project’s research objectives, the RISE-R team has worked to virtually share the ongoing findings of the project through a slow ontology approach. Slow ontology is a practice integrated into project systems and structures whereby slowing the pace and volume of outputs offers creative opportunities. Creative production reveals different layers of success and complements the project, the building blocks for sustainability. As a result of integrating a slow ontology approach, the RISE-R team has developed a Geographic Information System (GIS) that documents local landscapes through a Story Map feature, and more specifically, video installations. Video installations capture the cartography of space and place within the context of singular diverse community spaces (case studies). By documenting spaces via human connections, the project captures narratives, which further enhance the voices and faces of the community within the larger project scope. This GIS project aims to create a visual and interactive flow of information that complements the project's mixed-method research approach. 
Conclusively, creative project development in the form of a geographic information system can provide learning and engagement opportunities at many levels (i.e., within community organizations and educational spaces or with the general public). In each of these disconnected spaces, fragmented stories are connected through a visual display of project outputs. A slow ontology practice within the context of the RISE-R project documents activities on the fringes and within internal structures; primarily through documenting project successes as further contributions to the Centre for School Mental Health framework (philosophy, recruitment techniques, allocation of resources and time, and a shared commitment to evidence-based products).Keywords: community programming, geographic information system, project development, project management, qualitative, slow ontology
Procedia PDF Downloads 155251 Characterization of the MOSkin Dosimeter for Accumulated Dose Assessment in Computed Tomography
Authors: Lenon M. Pereira, Helen J. Khoury, Marcos E. A. Andrade, Dean L. Cutajar, Vinicius S. M. Barros, Anatoly B. Rozenfeld
Abstract:
With the increase of beam widths and the advent of multiple-slice and helical scanners, concerns related to the current dose measurement protocols and instrumentation in computed tomography (CT) have arisen. The current methodology of dose evaluation, which is based on the measurement of the integral of a single-slice dose profile using a 100 mm long cylindrical ionization chamber (Ca,100 and CPPMA,100), has been shown to be inadequate for wide beams, as it does not collect enough of the scatter tails to make an accurate measurement. In addition, a long ionization chamber does not offer a good representation of the dose profile when tube current modulation is used. An alternative approach has been suggested: translating smaller detectors through the beam plane and assessing the accumulated dose through the integral of the dose profile, which can be done for any arbitrary length in phantoms or in air. For this purpose, a MOSFET dosimeter of small dosimetric volume was used. One of its recently designed versions is known as the MOSkin, which was developed by the Centre for Medical Radiation Physics at the University of Wollongong and measures the radiation dose at a water-equivalent depth of 0.07 mm, allowing the evaluation of skin dose when placed at the surface, or internal point doses when placed within a phantom. Thus, the aim of this research was to characterize the response of the MOSkin dosimeter for X-ray CT beams and to evaluate its application for accumulated dose assessment. Initially, tests using an industrial X-ray unit were carried out at the Laboratory of Ionizing Radiation Metrology (LMRI) of the Federal University of Pernambuco, in order to investigate the sensitivity, energy dependence, angular dependence, and reproducibility of the dose response of the device for the standard radiation qualities RQT 8, RQT 9 and RQT 10. 
Finally, the MOSkin was used for the accumulated dose evaluation of scans using a Philips Brilliance 6 CT unit, with comparisons made against the CPPMA,100 value assessed with a pencil ionization chamber (PTW Freiburg TW 30009). Both dosimeters were placed in the center of a PMMA head phantom (diameter of 16 cm) and exposed in the axial mode with a collimation of 9 mm, 250 mAs and 120 kV. The results have shown that the MOSkin response was linear with dose in the CT range and reproducible (98.52%). The sensitivity for a single MOSkin in mV/cGy was as follows: 9.208, 7.691 and 6.723 for the RQT 8, RQT 9 and RQT 10 beam qualities, respectively. The energy dependence varied by up to a factor of ±1.19 among those energies, and the angular dependence was not greater than 7.78% within the angle range from 0 to 90 degrees. The accumulated dose and the CPPMA,100 value were 3.97 and 3.79 cGy, respectively, which were statistically equivalent within the 95% confidence level. The MOSkin was shown to be a good alternative for CT dose profile measurements and more than adequate to provide accumulated dose assessments for CT procedures.Keywords: computed tomography dosimetry, MOSFET, MOSkin, semiconductor dosimetry
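A minimal sketch of the translated-detector approach described above: MOSkin readings taken at steps along the scan axis are converted to dose with the calibrated sensitivity, and the resulting profile is integrated numerically. The readings and step size below are hypothetical; only the RQT 9 sensitivity (7.691 mV/cGy) comes from the study.

```python
def profile_dose_integral(readings_mv, sensitivity_mv_per_cgy, step_mm):
    """Convert translated-detector readings (mV) to dose (cGy) using the
    calibrated sensitivity, then integrate the dose profile with the
    trapezoidal rule. Returns the integral in cGy*mm."""
    doses = [r / sensitivity_mv_per_cgy for r in readings_mv]
    integral = 0.0
    for a, b in zip(doses, doses[1:]):
        integral += 0.5 * (a + b) * step_mm
    return integral

# hypothetical readings across a narrow beam at 1 mm steps,
# converted with the reported RQT 9 sensitivity of 7.691 mV/cGy
readings = [0.0, 3.8, 7.7, 3.8, 0.0]
dose_length_product = profile_dose_integral(readings, 7.691, 1.0)
```

Because the detector can be translated over any arbitrary length, the integration window is not limited to the 100 mm of a pencil chamber, which is the advantage the abstract highlights for wide beams and tube current modulation.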
Procedia PDF Downloads 311250 Passive Greenhouse Systems in Poland
Authors: Magdalena Grudzińska
Abstract:
Passive systems allow solar radiation to be converted into thermal energy thanks to appropriate building construction. Greenhouse systems are particularly worth attention, due to the low costs of their realization and strong architectural appeal. The paper discusses the energy effects of using passive greenhouse systems, such as glazed balconies, in an example residential building. The research was carried out for five localities in Poland, belonging to climatic zones that differ in terms of external air temperature and insolation: Koszalin, Poznań, Lublin, Białystok and Zakopane. The analysed apartment had a floor area of approximately 74 m². Three thermal zones were distinguished in the flat: the balcony, the room adjacent to it, and the remaining space, for which various internal conditions were defined. Calculations of the energy demand were made using a dynamic simulation program based on the control volume method. The climatic data were represented by Typical Meteorological Years, prepared on the basis of source data collected from 1971 to 2000. In each locality, the introduction of a passive greenhouse system led to a lower demand for heating in the apartment and a shortening of the heating season. The smallest effectiveness of passive solar energy systems was noted in Białystok, where demand for heating was reduced by 14.5% and the heating season remained the longest, due to low external air temperatures and small sums of solar radiation intensity. In Zakopane, energy savings came to 21% and the heating season was reduced to 107 days, thanks to the greatest insolation during winter. The introduction of greenhouse systems caused an increase in cooling demand in the warmer part of the year, but total energy demand declined in each of the discussed places. However, potential energy savings are smaller if the building's annual life cycle is taken into consideration, and range from 5.6% to 14%. 
Koszalin and Zakopane are the localities in which the greenhouse system allows the best energy results to be achieved. It should be emphasized that these favourable conditions for introducing greenhouse systems arise from different climatic circumstances. In the seaside area (Koszalin) they result from high temperatures in the heating season and the smallest insolation in the summer period, while in the mountainous area (Zakopane) they result from high insolation in the winter and low temperatures in the summer. In central and central-eastern Poland, active systems (such as solar energy collectors or photovoltaic panels) could be more beneficial, due to high insolation during summer. It is assessed that passive systems do not eliminate the need for traditional heating in Poland. They can, however, substantially contribute to lower use of non-renewable fuels and the shortening of the heating season. The calculations showed diversification in the effectiveness of greenhouse systems resulting from climatic conditions, and allowed the identification of the areas most suitable for the passive use of solar radiation.Keywords: solar energy, passive greenhouse systems, glazed balconies, climatic conditions
Procedia PDF Downloads 367249 Reflective Portfolio to Bridge the Gap in Clinical Training
Authors: Keenoo Bibi Sumera, Alsheikh Mona, Mubarak Jan Beebee Zeba Mahetaab
Abstract:
Background: Due to the busy schedule of the practicing clinicians at the hospitals, students may not always be attended to, which is to their detriment. The clinicians at the hospitals are also not always acquainted with teaching and/or supervising students on their placements. Additionally, there is a high student-patient ratio. Since they are prospective clinical doctors under training, they need to reach competence in clinical decision-making skills to be able to serve the healthcare system of the country and to be safe doctors. Aims and Objectives: A reflective portfolio was used to provide a means for students to learn by reflecting on their experiences and obtaining continuous feedback. This practice is an attempt to compensate for the scarcity of resources, that is, clinical placement supervisors and patients. It is also anticipated that it will provide learners with a continuous monitoring and learning gap analysis tool for their clinical skills. Methodology: A hardcopy reflective portfolio was designed and validated. The portfolio incorporated a mini clinical evaluation exercise (mini-CEX), direct observation of procedural skills, and reflection sections. Workshops were organized separately for the stakeholders, that is, the management, faculty and students, and the rationale of reflection was emphasized. Students were given samples of reflective writing. The portfolio was then implemented amongst the undergraduate medical students of years four, five and six during clinical clerkship. After 16 weeks of implementation of the portfolio, a survey questionnaire was introduced to explore how undergraduate students perceive the educational value of the reflective portfolio and its impact on their deep information processing. Results: The majority of the respondents were in MD Year 5. Out of 52 respondents, 57.7% were in the internal medicine clinical placement rotation, and 42.3% were in the otorhinolaryngology clinical placement rotation. 
The respondents believe that the implementation of a reflective portfolio helped them identify their weaknesses, supported their professional development by helping them identify areas where their knowledge is good, increased its learning value when used as a formative assessment, helped them relate material across different courses, and improved their professional skills. However, the portfolio did not necessarily improve respondents' self-esteem or help develop their critical thinking. The portfolio takes time to complete, and supervisor input was not always helpful; respondents had to chase supervisors for feedback. 53.8% of the respondents followed the Gibbs reflective model to write their reflections, whilst the others did not follow any guidelines. 48.1% said that the feedback was helpful, 17.3% preferred written feedback, whilst 11.5% preferred oral feedback. Most of them suggested more frequent feedback. 59.6% of respondents found the current portfolio user-friendly, and 28.8% thought it was too bulky. 27.5% suggested a mobile application version. Conclusion: The reflective portfolio, through reflection on their work and regular feedback from supervisors, has an overall positive impact on the learning process of undergraduate medical students during their clinical clerkship.Keywords: Portfolio, Reflection, Feedback, Clinical Placement, Undergraduate Medical Education
Procedia PDF Downloads 85248 The Relationship Between Military Expenditure and International Trade: A Selection of African Countries
Authors: Andre C Jordaan
Abstract:
The end of the Cold War and of the rivalry between superpowers changed the nature of military build-up in many countries. A call from international institutions like the United Nations, the International Monetary Fund and the World Bank to reduce the levels of military expenditure was the order of the day. However, this bid to cut military expenditure has not been straightforward. Active armed conflicts occurred in at least 46 states in 2021, with 8 in the Americas, 9 in Asia and Oceania, 3 in Europe, 8 in the Middle East and North Africa, and 18 in sub-Saharan Africa. Global military expenditure in 2022 was estimated at US$2.2 trillion, representing 2.2 per cent of global gross domestic product. Particularly sharp rises in military spending have occurred in African countries and the Middle East. Global military expenditure currently follows two divergent trends: a declining trend in the West, caused mainly by austerity, efforts to control budget deficits and the wrapping up of prolonged wars, and an increasing trend in other parts of the world, on the back of security concerns, geopolitical ambitions and internal political factors. Conflict-related fatalities in sub-Saharan Africa alone increased by 19 per cent between 2020 and 2021. The interaction between military expenditure (read conflict) and international trade is generally the cause of much debate. Some argue that countries’ fear of losing trade opportunities causes political decision-makers to refrain from engaging in conflict when important trading partners are involved. However, three main arguments are always present when discussing the relationship between military expenditure or conflict and international trade: free trade could promote peaceful cooperation, it could trigger tension between trading blocs and partners, or it could have no effect because conflict is based on issues that are more important. 
Military expenditure remains an important element of overall government expenditure in many African countries. On the other hand, numerous researchers perceive increased international trade to be one of the main factors promoting economic growth in these countries. The purpose of this paper is therefore to determine what effect, if any, exists between the level of military expenditure and international trade within a selection of 19 African countries. Applying an augmented gravity model to explore the relationship between military expenditure and international trade, evidence is found to confirm the existence of an inverse relationship between these two variables. The results are in line with the liberal school of thought, in which trade is seen as an instrument of conflict prevention: trade is perceived as a symptom of peace, not a cause thereof. In general, conflict or rumors of conflict tend to reduce trade. If conflict did not impede trade, economic agents would be indifferent to risk. Many claim that trade brings peace; however, it seems that it is rather peace that brings trade. From the results, it appears that trade reduces the risk of conflict and that conflict reduces trade.Keywords: African countries, conflict, international trade, military expenditure
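A predictive sketch of an augmented gravity specification of the kind described: bilateral trade scales with the partners' GDPs, falls with distance, and an added military-expenditure term captures the inverse relationship the paper reports. All coefficients and input values below are illustrative assumptions, not the paper's estimates.

```python
import math

def gravity_trade(gdp_i, gdp_j, distance_km, mil_share_i,
                  b0=-10.0, b_gdp=1.0, b_dist=-1.0, b_mil=-0.15):
    """Augmented gravity model in log-linear form:

    ln(T_ij) = b0 + b_gdp*ln(GDP_i) + b_gdp*ln(GDP_j)
               + b_dist*ln(D_ij) + b_mil*MilShare_i

    A negative b_mil encodes the inverse military-expenditure/trade
    relationship; coefficients here are placeholders, not estimates.
    """
    log_trade = (b0
                 + b_gdp * math.log(gdp_i)
                 + b_gdp * math.log(gdp_j)
                 + b_dist * math.log(distance_km)
                 + b_mil * mil_share_i)
    return math.exp(log_trade)

# hypothetical country pair: same GDPs and distance, military spending
# at 1% vs 3% of GDP for the exporter
base = gravity_trade(4.0e11, 1.1e11, 2500.0, 1.0)
high_mil = gravity_trade(4.0e11, 1.1e11, 2500.0, 3.0)
```

With a negative military-expenditure coefficient, raising the exporter's military share while holding GDPs and distance fixed lowers predicted trade, which is the qualitative pattern the abstract reports for the 19-country sample.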
Procedia PDF Downloads 65247 Recurrent Torsades de Pointes Post Direct Current Cardioversion for Atrial Fibrillation with Rapid Ventricular Response
Authors: Taikchan Lildar, Ayesha Samad, Suraj Sookhu
Abstract:
Atrial fibrillation with rapid ventricular response results in the loss of atrial kick and shortened ventricular filling time, which often leads to decompensated heart failure. Pharmacologic rhythm control is the treatment of choice, and patients frequently benefit from the restoration of sinus rhythm. When pharmacologic treatment is unsuccessful or a patient deteriorates hemodynamically, direct current cardioversion is indicated. Torsades de pointes, or “twisting of the points” in French, is a rare but under-appreciated risk of cardioversion therapy and accounts for a significant number of sudden cardiac deaths each year. A 61-year-old female with no significant past medical history presented to the Emergency Department with worsening dyspnea. An electrocardiogram showed atrial fibrillation with rapid ventricular response, and a chest X-ray was significant for bilateral pulmonary vascular congestion. Full-dose anticoagulation and diuresis were initiated, with moderate improvement in symptoms. A transthoracic echocardiogram revealed biventricular systolic dysfunction with a left ventricular ejection fraction of 30%. After consultation with an electrophysiologist, the consensus was to proceed with restoration of sinus rhythm, which would likely improve the patient’s heart failure symptoms and possibly the ejection fraction. A transesophageal echocardiogram was negative for left atrial appendage thrombus; the patient was treated with a loading dose of amiodarone and underwent successful direct current cardioversion with 200 Joules. The patient was placed on telemetry monitoring for 24 hours and was noted to have frequent premature ventricular contractions with subsequent degeneration to torsades de pointes. The patient was found unresponsive and pulseless; cardiopulmonary resuscitation was initiated with cardioversion, and return of spontaneous circulation to normal sinus rhythm was achieved after four minutes. 
Post-cardiac arrest electrocardiogram showed sinus bradycardia with a heart-rate corrected QT interval of 592 milliseconds. The patient continued to have frequent premature ventricular contractions and required two additional cardioversions, with intravenous magnesium and lidocaine, to achieve return of spontaneous circulation. An automatic implantable cardioverter-defibrillator was subsequently implanted for secondary prevention of sudden cardiac death. The backup pacing rate of the automatic implantable cardioverter-defibrillator was set higher than usual in an attempt to prevent premature ventricular contraction-induced torsades de pointes. The patient did not have any further ventricular arrhythmias after implantation of the automatic implantable cardioverter-defibrillator. Overdrive pacing is a method utilized to treat premature ventricular contraction-induced torsades de pointes by reducing a patient’s susceptibility to R-on-T-induced ventricular arrhythmias. Pacing at a rate of 90 beats per minute succeeded in controlling the arrhythmia without the need for traumatic cardiac defibrillation. In our patient, conversion of atrial fibrillation with rapid ventricular response to normal sinus rhythm resulted in a slower heart rate and an increased probability of a premature ventricular contraction occurring on the T-wave with ensuing ventricular arrhythmia. This case highlights direct current cardioversion for atrial fibrillation with rapid ventricular response resulting in persistent ventricular arrhythmia, requiring automatic implantable cardioverter-defibrillator placement with overdrive pacing to prevent recurrence.Keywords: refractory atrial fibrillation, atrial fibrillation, overdrive pacing, torsades de pointes
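The heart-rate corrected QT interval quoted (592 ms) is conventionally obtained with Bazett's formula. The sketch below assumes a hypothetical measured QT of 648 ms at a bradycardic rate of 50 bpm, chosen only because it reproduces a QTc near the reported value; the case report does not state the raw QT or heart rate.

```python
import math

def bazett_qtc_ms(qt_ms, heart_rate_bpm):
    """Heart-rate corrected QT interval (Bazett): QTc = QT / sqrt(RR),
    with the RR interval expressed in seconds."""
    rr_seconds = 60.0 / heart_rate_bpm
    return qt_ms / math.sqrt(rr_seconds)

# hypothetical sinus bradycardia at 50 bpm with a measured QT of 648 ms;
# the result is markedly prolonged (commonly flagged above ~460-480 ms)
qtc = bazett_qtc_ms(648.0, 50.0)
```

Note that at 60 bpm the RR interval is exactly 1 s, so QTc equals the measured QT; slower rates correct the raw QT downward is not the case here because Bazett divides by a square root greater than one's reciprocal, i.e., bradycardia shortens the correction factor and the raw QT is divided by a value above 1.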
246 Development and Total Error Concept Validation of Common Analytical Method for Quantification of All Residual Solvents Present in Amino Acids by Gas Chromatography-Head Space
Authors: A. Ramachandra Reddy, V. Murugan, Prema Kumari
Abstract:
Residual solvents in pharmaceutical samples are monitored using gas chromatography with headspace (GC-HS). Based on current regulatory and compendial requirements, measuring residual solvents is mandatory for all release testing of active pharmaceutical ingredients (API). Generally, isopropyl alcohol is used as the residual solvent in proline and tryptophan; methanol in cysteine monohydrate hydrochloride, glycine, methionine, and serine; ethanol in glycine and lysine monohydrate; and acetic acid in methionine. In order to have a single method for determining these residual solvents (isopropyl alcohol, ethanol, methanol, and acetic acid) in all seven amino acids, a sensitive and simple method was developed using the gas chromatography headspace technique with flame ionization detection. During development, poor reproducibility, retention-time variation, and poor peak shape were observed for acetic acid, due to the reaction of acetic acid with the stationary phase of the column (cyanopropyl dimethyl polysiloxane) and the dissociation of acetic acid in water (if used as diluent) while applying the temperature gradient. Therefore, dimethyl sulfoxide was used as the diluent to avoid these issues, whereas most published methods for acetic acid quantification by GC-HS use a derivatisation technique to protect acetic acid. As per the compendia, a risk-based approach was selected as appropriate to determine the degree and extent of the validation process and to assure the fitness of the procedure. Therefore, the total error concept was selected to validate the analytical procedure. An accuracy profile of ±40% was selected for the lower level (quantitation limit) and ±30% for the other levels, with a 95% confidence interval (risk profile 5%). The method was developed using a DB-WAXetr column manufactured by Agilent (internal diameter 530 µm, film thickness 2.0 µm, length 30 m). Helium was used as the carrier gas at a constant flow of 6.0 mL/min in constant make-up mode.
The present method is simple, rapid, and accurate, and is suitable for rapid analysis of isopropyl alcohol, ethanol, methanol, and acetic acid in amino acids. The range of the method is 50 ppm to 200 ppm for isopropyl alcohol, 50 ppm to 3000 ppm for ethanol, 50 ppm to 400 ppm for methanol, and 100 ppm to 400 ppm for acetic acid, which covers the specification limits provided in the European Pharmacopoeia. The accuracy profile and risk profile generated as part of validation were found to be satisfactory. Therefore, this method can be used for testing of residual solvents in amino acid drug substances.
Keywords: amino acid, head space, gas chromatography, total error
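The total error acceptance check described above can be illustrated with a simplified calculation. The replicate recoveries, the coverage factor, and the function name below are all hypothetical; a rigorous validation would compute β-expectation tolerance intervals from variance components rather than this rough approximation:

```python
# Simplified total-error check for one concentration level (illustrative
# sketch, not the validated procedure). Total error is approximated as
# |relative bias| + k * relative SD and compared to the acceptance limit.
from statistics import mean, stdev

def accuracy_profile(measured, nominal, limit_pct, coverage_k=2.0):
    """Return (relative bias %, relative SD %, total error %, passes)."""
    rel_bias = 100.0 * (mean(measured) - nominal) / nominal
    rel_sd = 100.0 * stdev(measured) / nominal
    total_error = abs(rel_bias) + coverage_k * rel_sd
    return rel_bias, rel_sd, total_error, total_error <= limit_pct

# Hypothetical recoveries (ppm) for methanol at the 50 ppm quantitation
# limit, checked against the ±40% lower-level acceptance limit
reps = [48.1, 52.3, 49.7, 51.0, 47.5, 50.9]
bias, sd, te, ok = accuracy_profile(reps, 50.0, limit_pct=40.0)
```

At the other levels the same check would be run with `limit_pct=30.0`, matching the ±30% profile stated above.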
245 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs) given source characteristics, source-to-site distance, and local site conditions for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as the statistical method in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data is available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method; in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data is available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.)
and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined limit states, and they therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intense numerical response-history analysis.
Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
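The comparison between a linear regression baseline and a tree-based learner can be sketched as below. This is not the authors' model or data: the features, coefficients, and noise level of the synthetic ln(PGA) records are invented purely for illustration:

```python
# Illustrative comparison (invented synthetic data, not the study's):
# linear regression vs. Random Forest for ground-motion prediction.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
magnitude = rng.uniform(4.0, 8.0, n)
distance = rng.uniform(5.0, 200.0, n)   # source-to-site distance, km
vs30 = rng.uniform(180.0, 760.0, n)     # site-condition proxy, m/s

# Synthetic ln(PGA): magnitude scaling, geometric attenuation, site term
ln_pga = (1.2 * magnitude - 1.6 * np.log(distance)
          - 0.4 * np.log(vs30 / 760.0) + rng.normal(0.0, 0.5, n))

X = np.column_stack([magnitude, distance, vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, ln_pga, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

r2_linear = r2_score(y_te, linear.predict(X_te))
r2_forest = r2_score(y_te, forest.predict(X_te))
```

Because the synthetic attenuation is logarithmic in distance while the linear model sees raw distance, the forest can exploit the nonlinearity without any pre-defined functional form, mirroring the advantage described in the abstract.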
244 Fuel Cells Not Only for Cars: Technological Development in Railways
Authors: Marita Pigłowska, Beata Kurc, Paweł Daszkiewicz
Abstract:
Railway vehicles are divided into two groups: traction (powered) vehicles and wagons. The traction vehicles include locomotives (line and shunting), railcars (sometimes referred to as railbuses), and multiple units (electric and diesel), consisting of several or a dozen carriages. In vehicles with diesel traction, fuel energy (petrol, diesel, or compressed gas) is converted into mechanical energy directly in the internal combustion engine or via electricity. In the latter case, the combustion engine generator produces electricity that is then used to drive the vehicle (diesel-electric drive or electric transmission). In Poland, such a solution dominates in both heavy line and shunting locomotives. The classic diesel drive is available for the lightest shunting locomotives, railcars, and passenger diesel multiple units. Vehicles with electric traction do not have their own source of energy; they use pantographs to obtain electricity from the traction network. To determine the competitiveness of the hydrogen propulsion system, it is essential to understand how it works. The basic elements of a railway vehicle drive system that uses hydrogen as a source of traction force are fuel cells, batteries, fuel tanks, traction motors, and the main and auxiliary converters. The compressed hydrogen is stored in tanks usually located on the roof of the vehicle. This resource is replenished using specialized infrastructure while the vehicle is stationary. Hydrogen is supplied to the fuel cell, where it is oxidized. The products of this chemical reaction are electricity and water (in two forms: liquid and vapor). Electricity is stored in batteries (so far, lithium-ion batteries are used). Electricity stored in this way is used to drive the traction motors and supply onboard equipment.
The current generated by the fuel cell passes through the main converter, whose task is to adjust it to the values required by the consumers, i.e., the batteries and the traction motor. The work will attempt to construct a fuel cell with unique electrodes. This research is a trend that connects industry with science. The first goal will be to obtain hydrogen on a large scale in tube furnaces, to thoroughly analyze the obtained structures (IR), and to apply the method in fuel cells. The second goal is to create a low-energy storage and distribution station for hydrogen and electric vehicles. The scope of the research includes obtaining a carbon variety and oxide systems on a large scale using a tubular furnace and then supplying vehicles. Acknowledgments: This work is supported by the Polish Ministry of Science and Education, project "The best of the best! 4.0", number 0911/MNSW/4968 (M.P.), and grant 0911/SBAD/2102 (B.K.).
Keywords: railway, hydrogen, fuel cells, hybrid vehicles
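To put the roof-mounted hydrogen tanks in perspective, a back-of-the-envelope energy budget can be sketched. All figures below (tank mass, stack efficiency, traction demand) are invented assumptions for illustration, not values from the paper:

```python
# Back-of-the-envelope sketch: runtime supported by a hydrogen store.
# All numbers are illustrative assumptions, not data from the study.
H2_LHV_MJ_PER_KG = 120.0      # lower heating value of hydrogen
FUEL_CELL_EFFICIENCY = 0.50   # assumed stack efficiency

def runtime_hours(h2_mass_kg, avg_traction_power_kw):
    """Hours of operation from a tank, given average electrical demand."""
    electrical_mj = h2_mass_kg * H2_LHV_MJ_PER_KG * FUEL_CELL_EFFICIENCY
    electrical_kwh = electrical_mj / 3.6  # 1 kWh = 3.6 MJ
    return electrical_kwh / avg_traction_power_kw

# e.g. 260 kg of compressed H2 at an average 400 kW traction demand
hours = runtime_hours(260.0, 400.0)
```

In practice the battery buffer described above smooths the load on the stack, so the average demand seen by the fuel cell is lower than peak traction power.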
243 CybeRisk Management in Banks: An Italian Case Study
Authors: E. Cenderelli, E. Bruno, G. Iacoviello, A. Lazzini
Abstract:
The financial sector is exposed to the risk of cyber-attacks like any other industrial sector. Furthermore, the topic of CybeRisk (cyber risk) has become particularly relevant given that Information Technology (IT) attacks have increased drastically in recent years and cannot be stopped by single organizations, requiring a response at the international and national levels. IT risk is never a matter purely for the IT manager, although he clearly plays a key role. A bank's risk management function requires a thorough understanding of the evolving risks as well as the tools and practical techniques available to address them. In response to European and national legislation regarding CybeRisk in the financial system, banks are therefore called upon to strengthen their operational model for CybeRisk management. This will require an important change, with more intense collaboration with the structures that deal with information security, for the development of an ad hoc system for the evaluation and control of this type of risk. The aim of the work is to propose a framework for the management and control of CybeRisk that will bridge the gap in the literature regarding the understanding and consideration of CybeRisk as an integral part of business management. The IT function has a strong relevance in the management of CybeRisk, which is perceived mainly as operational risk, but with a positive tendency on the part of risk management toward the identification of CybeRisk assessment methods that are increasingly complete, quantitative, and able to better describe the possible impacts on the business. The paper provides answers to the research questions: Is it possible to define a CybeRisk governance structure able to support the comparison between risk and security? How can the relationships between IT assets be integrated into a CybeRisk assessment framework to guarantee a system of protection and risk control?
From a methodological point of view, this research uses a case study approach. The choice of Monte dei Paschi di Siena was determined by the specific features of one of Italy's biggest lenders. An intensive research strategy was chosen: an in-depth study of reality. The case study methodology is an empirical approach to exploring a complex and current phenomenon that develops over time. The use of cases also has the advantage of allowing a deeper examination of the "how" and "why" of contemporary events, over which the scholar has little control. The research is based on quantitative data and qualitative information obtained through semi-structured, open-ended interviews and questionnaires addressed to directors, members of the audit committee, risk, IT, and compliance managers, and those responsible for the internal audit function and anti-money laundering. The added value of the paper can be seen in the development of a framework based on a mapping of IT assets from which it is possible to identify their relationships for the purposes of more effective management and control of cyber risk.
Keywords: bank, CybeRisk, information technology, risk management
242 The Effect of the Construction Contract System by Simulating the Comparative Costs of Capital to the Financial Feasibility of the Construction of Toll Bali Mandara
Authors: Mas Pertiwi I. G. AG Istri, Sri Kristinayanti Wayan, Oka Aryawan I. Gede Made
Abstract:
The government's ability to meet infrastructure investment needs is constrained by the size of budget commitments to other sectors. Another barrier is the complexity of the land acquisition process. Public-Private Partnership can help bridge the investment gap by bringing in funding from the private sector, shifting to it the responsibility for financing, construction of the asset, operation, and post-project design and care. In principle, implementation of a construction project always requires an investor to provide resources in the form of funding, which must be set out in an agreement in the form of a contract. In general, construction contracts in Indonesia consist of domestic contracts and international contracts. One source of funding used in the implementation of construction projects comes from collaboration between the government and the private sector, for example under the systems BLT (Build Lease Transfer), BOT (Build Operate Transfer), BTO (Build Transfer Operate), and BOO (Build Operate Own). Payment under a construction contract can take several forms: monthly payment, payment based on progress, and payment after project completion (turnkey). One of the tools used to analyze the feasibility of an investment is a financial model. The financial model describes the relationship between the different variables and assumptions used. From a financial model, the cash flow structure of the project can be determined, including revenues, expenses, liabilities to creditors, and the payment of taxes to the government. The net cash flow generated by the project is used as a basis for analyzing the feasibility of the investment. The source of project financing under a Public-Private Partnership can be equity or debt.
The proportion of funding by source is the ratio of investment funds originating from each source of financing to the total investment cost during the construction period; the selected contract system and the alternative financing ratios determined by source will each generate a different cash flow structure. The resulting cash flow structures were analyzed with a paired t-test to compare the contract systems under the various financing alternatives, in order to determine the effect of the contract system and the financing ratio on the feasibility of the toll road construction project over an economic life of 20 (twenty) years. This study uses the Bali Mandara toll road construction project as a case study, and the analysis covers only two contract systems, namely Build Operate Transfer and Turn Key. Investment feasibility was assessed through the NPV, BCR, and IRR of the Build Operate Transfer and Turn Key contract systems at interest rates of 9%, 12%, and 15%.
Keywords: contract system, financing, internal rate of return, net present value
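The three feasibility indicators named in the abstract (NPV, BCR, IRR) can be sketched as below. The cash flows are hypothetical round numbers for illustration, not figures from the Bali Mandara project:

```python
# Feasibility indicators for a hypothetical toll-road cash flow
# (invented figures, not the Bali Mandara project data).

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the year-0 flow."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def bcr(rate, costs, benefits):
    """Benefit-cost ratio: present value of benefits over that of costs."""
    return npv(rate, benefits) / npv(rate, costs)

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-8):
    """Internal rate of return via bisection (assumes one sign change)."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(mid, cash_flows) > 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Hypothetical: 1000 invested at year 0, 150 net revenue per year for 20 years
flows = [-1000.0] + [150.0] * 20
npv_at_12pct = npv(0.12, flows)
bcr_at_12pct = bcr(0.12, [1000.0] + [0.0] * 20, [0.0] + [150.0] * 20)
project_irr = irr(flows)
```

A project is conventionally considered feasible when NPV > 0, BCR > 1, and IRR exceeds the discount rate, which is why the study evaluates all three at 9%, 12%, and 15%.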
241 A Q-Methodology Approach for the Evaluation of Land Administration Mergers
Authors: Tsitsi Nyukurayi Muparari, Walter Timo De Vries, Jaap Zevenbergen
Abstract:
The nature of land administration accommodates diversity in terms of both spatial data handling activities and the expertise involved, which supposedly aims to satisfy the unpredictable demands for land data and the diverse demands of customers arising from the land. However, it is known that strategic restructuring decisions are in most cases repelled in favour of complex structures that strive to accommodate professional diversity and diverse roles in the field of land administration. Yet despite this widely accepted knowledge, there is scanty theoretical knowledge concerning the psychological methodologies that can extract the deeper perceptions of diverse spatial experts in order to explain the invisible control arm behind the polarised reception of ideas of change. This paper evaluates Q-methodology in the context of a cadastre and land registry merger (under one agency), using the Swedish cadastral system as a case study. Precisely, the aim of this paper is to evaluate the effectiveness of Q-methodology for modelling the diverse psychological perceptions of spatial professionals involved in the widely contested decision to merge the cadastre and land registry components of land administration. The empirical approach prescribed by Q-methodology starts with concourse development, followed by the design of the statements and the Q-sort instrument, selection of the participants, the Q-sorting exercise, factor extraction with PQMethod, and finally narrative development by the logic of abduction. The paper uses 36 statements developed from a dominant competing values theory noted for its reliability and validity, purposively selects 19 participants for the Q-sorting exercise, proceeds with factor extraction from the diversity using the varimax and judgemental rotation provided by PQMethod, and effects the narrative construction using the logic of abduction.
The findings from the diverse perceptions of cadastral professionals in the decision to merge the land registry and cadastre components in Sweden's mapping agency (Lantmäteriet) show that the focus is rather inclined toward perfecting the relationship between legal expertise and technical spatial expertise. There is much emphasis on tradition, loyalty, and communication attributes, which concern the organisation's internal environment, rather than on innovation and market attributes, which reveal customer behaviour and needs arising from changing humankind-land relations. It can be concluded that Q-methodology offers effective tools for a psychological approach to the evaluation and gradation of strategic change decisions through extracting the local perceptions of spatial experts.
Keywords: cadastre, factor extraction, land administration merger, land registry, q-methodology, rotation
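The varimax rotation step that PQMethod performs after factor extraction can be sketched in a few lines of linear algebra. The loading matrix below is random rather than real Q-sort data, and the implementation follows the standard SVD-based varimax iteration (Kaiser's criterion):

```python
# Illustrative varimax rotation of a factor-loading matrix (random data,
# not real Q-sorts); the same rotation PQMethod applies after extraction.
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-8):
    """Rotate loadings orthogonally to maximize the varimax criterion."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # Gradient of the varimax criterion (Kaiser's formulation)
        grad = loadings.T @ (
            rotated ** 3
            - (gamma / p) * rotated @ np.diag(np.sum(rotated ** 2, axis=0))
        )
        u, s, vt = np.linalg.svd(grad)
        rotation = u @ vt
        new_var = np.sum(s)
        if new_var - var < tol:
            break
        var = new_var
    return loadings @ rotation

rng = np.random.default_rng(1)
raw = rng.normal(size=(19, 3))   # e.g. 19 Q-sorts, 3 extracted factors
rotated = varimax(raw)
```

Because the rotation is orthogonal, the total communality of the solution is preserved; only the distribution of loadings across factors is simplified, which is what makes the rotated factors interpretable as shared viewpoints.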
240 Characterization of Thin Woven Composites Used in Printed Circuit Boards by Combining Numerical and Experimental Approaches
Authors: Gautier Girard, Marion Martiny, Sebastien Mercier, Mohamad Jrad, Mohamed-Slim Bahi, Laurent Bodin, Francois Lechleiter, David Nevo, Sophie Dareys
Abstract:
Reliability of electronic devices has always been of the highest interest for Aero-MIL and space applications. In any electronic device, the Printed Circuit Board (PCB), providing interconnection between components, is key for reliability. During the last decades, PCB technologies evolved to sustain and/or fulfill increased original equipment manufacturer (OEM) requirements and specifications: higher densities and better performance, faster time to market and longer lifetime, newer materials and mixed build-ups. From the very beginning of the PCB industry until recently, qualification, experiments, and trial and error were the most popular methods to assess system (PCB) reliability. Nowadays, OEMs, PCB manufacturers, and scientists are working together in a close relationship in order to develop predictive models for PCB reliability and lifetime. To achieve that goal, it is fundamental to characterize precisely the base materials (laminates, electrolytic copper, …) in order to understand failure mechanisms and simulate PCB aging under environmental constraints, by means of the finite element method for example. The laminates are woven composites and thus have an orthotropic behaviour. The in-plane properties can be measured by combining classical uniaxial testing and digital image correlation. Nevertheless, the out-of-plane properties cannot be evaluated in this way due to the thickness of the laminate (a few hundred microns). It has to be noted that knowledge of the out-of-plane properties is fundamental to investigating the lifetime of high-density printed circuit boards. A homogenization method combining analytical and numerical approaches has been developed in order to obtain the complete elastic orthotropic behaviour of a woven composite from its precise 3D internal structure and its experimentally measured in-plane elastic properties. Since the mechanical properties of the resin surrounding the fibres are unknown, an inverse method is proposed to estimate them.
The methodology has been applied to one laminate used in hyperfrequency space applications in order to obtain its elastic orthotropic behaviour at different temperatures in the range [-55°C; +125°C]. Next, numerical simulations of a plated through hole in a double-sided PCB are performed. Results show the major importance of the out-of-plane properties, and of their temperature dependency, for the lifetime of a printed circuit board. Acknowledgements: The support of the French ANR agency through the Labcom program ANR-14-LAB7-0003-01, and the support of CNES, Thales Alenia Space, and Cimulec, are acknowledged.
Keywords: homogenization, orthotropic behaviour, printed circuit board, woven composites
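The paper's combined analytical-numerical scheme is far more sophisticated, but the basic idea of homogenization (deriving effective laminate stiffness from constituent properties) can be illustrated with the classical Voigt and Reuss bounds. The fibre and resin moduli below are generic textbook values, not data from the study:

```python
# Classical Voigt (iso-strain) and Reuss (iso-stress) bounds that bracket
# the effective modulus of a fibre/resin mix. Illustrative only: this is
# not the authors' homogenization scheme, and the values are generic.

def voigt_modulus(e_fibre, e_resin, v_fibre):
    """Upper bound on the effective Young's modulus (loading along fibres)."""
    return v_fibre * e_fibre + (1.0 - v_fibre) * e_resin

def reuss_modulus(e_fibre, e_resin, v_fibre):
    """Lower bound on the effective Young's modulus (loading across fibres)."""
    return 1.0 / (v_fibre / e_fibre + (1.0 - v_fibre) / e_resin)

# Hypothetical glass fibre (72 GPa) in epoxy resin (3.5 GPa), 50% fibres
upper = voigt_modulus(72.0, 3.5, 0.5)   # GPa
lower = reuss_modulus(72.0, 3.5, 0.5)   # GPa
```

The wide gap between the two bounds is precisely why a detailed 3D model of the weave, as developed in the paper, is needed to pin down the out-of-plane properties.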
239 Tourism Policy Challenges in Post-Soviet Georgia
Authors: Merab Khokhobaia
Abstract:
Research on the challenges of Georgian tourism policy is important, as tourism can play an increasing role in economic growth and in improving the country's standard of living, even with scanty resources, through improved creative approaches. It is also important to make correct decisions at the macroeconomic level, which will accordingly be reflected in the successful functioning of travel companies and, finally, in the improvement of the country's economic indicators. In order to orient sectoral policy correctly, it is important to precisely determine its role in the economy. Development of the travel industry has been considered one of the priorities in Georgia; the country has a unique cultural heritage and traditions, as well as plenty of natural resources, which are a significant precondition for the development of tourism. Despite the factors mentioned above, the existing resources are not completely utilized and exploited. This work represents a study of the subjective, as well as objective, reasons for the ineffective functioning of the sector. During the years of transformation experienced by Georgia, the role of the travel industry in the economic development of the country was the subject of continual discussion. Such assessments were often biased, and they did not rest on specific calculations. This topic became especially prominent with the move to a market economy, because reliable statistical data have particular significance in the design of tourism policy. In order to study the aforementioned issue in depth, this paper analyzes monetary as well as non-monetary indicators. The research broadly covered the tourism indicators system; we analyzed the flaws in reporting the results of the tourism sector in Georgia. Existing defects are identified, and recommendations for their improvement are offered.
For stable development, tourism, like other economic sectors, needs a well-designed policy from the perspective of national, as well as local and regional, development. Tourism policy must be drawn up to efficiently achieve the goals established, in short-term and long-term dynamics, at the national or regional scale of a specific country. The article focuses on the role and responsibility of state institutions in the planning and implementation of tourism policy. The government has various tools and levers that may positively influence these processes. These levers are especially important in terms of international, as well as internal, tourism development. Within the framework of this research, the regulatory documents in force in relation to this industry were also analyzed. The main attention is given to their modernization and the necessity of their compliance with European standards. It is a current issue to direct the efforts of state policy toward supporting business by implementing infrastructural projects and by developing human resources, which may be made possible by supporting the relevant higher and vocational educational programs.
Keywords: regional development, tourism industry, tourism policy, transition
238 Smart Interior Design: A Revolution in Modern Living
Authors: Fatemeh Modirzare
Abstract:
Smart interior design represents a transformative approach to creating living spaces that integrate technology seamlessly into our daily lives, enhancing comfort, convenience, and sustainability. This paper explores the concept of smart interior design, its principles, benefits, challenges, and future prospects. It also highlights various examples and applications of smart interior design to illustrate its potential in shaping the way we live and interact with our surroundings. In an increasingly digitized world, the boundaries between technology and interior design are blurring. Smart interior design, also known as intelligent or connected interior design, involves the incorporation of advanced technologies and automation systems into residential and commercial spaces. This innovative approach aims to make living environments more efficient, comfortable, and adaptable while promoting sustainability and user well-being. Smart interior design seamlessly integrates technology into the aesthetics and functionality of a space, ensuring that devices and systems do not disrupt the overall design. Sustainable materials, energy-efficient systems, and eco-friendly practices are central to smart interior design, reducing environmental impact. Spaces are designed to be adaptable, allowing for reconfiguration to suit changing needs and preferences. Smart homes and spaces offer greater comfort through features like automated climate control, adjustable lighting, and customizable ambiance. Smart interior design can significantly reduce energy consumption through optimized heating, cooling, and lighting systems. Smart interior design integrates security systems, fire detection, and emergency response mechanisms for enhanced safety. Sustainable materials, energy-efficient appliances, and waste reduction practices contribute to a greener living environment. Implementing smart interior design can be expensive, particularly when retrofitting existing spaces with smart technologies. 
The increased connectivity raises concerns about data privacy and cybersecurity, requiring robust measures to protect user information. Rapid advancements in technology may lead to obsolescence, necessitating updates and replacements. Users must be familiar with smart systems to fully benefit from them, requiring education and ongoing support. Residential spaces incorporate features like voice-activated assistants, automated lighting, and energy management systems. Intelligent office design enhances productivity and employee well-being through smart lighting, climate control, and meeting room booking systems. Hospitals and healthcare facilities use smart interior design for patient monitoring, wayfinding, and energy conservation. Smart retail design includes interactive displays, personalized shopping experiences, and inventory management systems. The future of smart interior design holds exciting possibilities, including AI-powered design tools that create personalized spaces based on user preferences. Smart interior design will increasingly prioritize factors that improve physical and mental health, such as air quality monitoring and mood-enhancing lighting. Smart interior design is revolutionizing the way we interact with our living and working spaces. By embracing technology, sustainability, and user-centric design principles, smart interior design offers numerous benefits, from increased comfort and convenience to energy efficiency and sustainability. Despite challenges, the future holds tremendous potential for further innovation in this field, promising a more connected, efficient, and harmonious way of living and working.
Keywords: smart interior design, home automation, sustainable living spaces, technological integration, user-centric design
237 Benjaminian Translatability and Elias Canetti's Life Component: The Other German Speaking Modernity
Authors: Noury Bakrim
Abstract:
Translatability is one of Walter Benjamin's most influential notions; it in some ways represents the philosophy of language and history of what we have coined 'the other German-speaking modernity', which could be shaped as a parallel form of thought to the Marxian-Hegelian philosophy of history represented by the Frankfurt School. On the other hand, we should consider the influence of the plural German-speaking identity and the Nietzschean and Goethean heritage, the latter being focused on a positive will to power: the humanised human being. With the Benjaminian notion of translatability (Übersetzbarkeit) in perspective, defined as a permanent internal hermeneutical possibility as well as a phenomenological potential of a translation relation, we are in fact touching the very double limit of both historical and linguistic reason. By life component, we mean the changing conditions of genetic and neurolinguistic post-partum functions, to be grasped as an individuation beyond the historical determinism and teleology of an event. It is, so to speak, the retrospective/introspective Canettian auto-fiction, the Benjaminian crystallization of the language experience in the now-time of writing/transmission. Furthermore, it raises various questions when it comes to translatability; they relate to separate psycholinguistic poles, the fatherly Ladino Spanish and the motherly Vienna German, but more particularly to the permanent ontological quest of world loss/belonging. Another level of this quest is the status of Veza Canetti-Taubner Calderón, German-speaking author, Canetti's 'literary wife', the writer's love, his inverted logos, protective and yet controversial 'official private life partner', the permanence of the Jewish experience in the exiled German language.
It sheds light on the traumatic relation of an inadequate/possible language facing the reconstruction of an oral life, the unconscious split of the signifier, and above all the frustrating status of writing in Canetti's work: using a suffering/suffered written German to save his remembered acquisition of his tongue/mother tongue by saving the vanishing spoken multilingual experience. Canetti's only novel, 'Die Blendung', designates that fictional referential dynamics focusing on the Nazi worldless horizon: the figure of Kien is an onomastic signifier, the anti-Canetti figure, the misunderstood legacy of Kant, the system without thought. Our postulate is the double translatability of his auto-fiction inventing the bios oral signifier, based on the new praxemes created by Canetti's German, as observed in the English and French translations of his memory corpus. We aim at conceptualizing life component and translatability as two major features of a German-speaking modernity.
Keywords: translatability, language biography, presentification, bioeme, life Order