Search results for: kidney functions and chronic renal failure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6279


309 Influence of Recycled Concrete Aggregate Content on the Rebar/Concrete Bond Properties through Pull-Out Tests and Acoustic Emission Measurements

Authors: L. Chiriatti, H. Hafid, H. R. Mercado-Mendoza, K. L. Apedo, C. Fond, F. Feugeas

Abstract:

Substituting natural aggregate with recycled aggregate coming from concrete demolition represents a promising alternative to face the issues of both the depletion of natural resources and the congestion of waste storage facilities. However, the crushing process of concrete demolition waste, currently in use to produce recycled concrete aggregate, does not allow the complete separation of natural aggregate from a variable amount of adhered mortar. Given the physicochemical characteristics of the latter, the introduction of recycled concrete aggregate into a concrete mix modifies, to a certain extent, both fresh and hardened concrete properties. As a consequence, the behavior of recycled reinforced concrete members could likely be influenced by the specificities of recycled concrete aggregates. Beyond the mechanical properties of concrete, and as a result of the composite character of reinforced concrete, the bond characteristics at the rebar/concrete interface have to be taken into account in an attempt to describe accurately the mechanical response of recycled reinforced concrete members. Hence, a comparative experimental campaign, including 16 pull-out tests, was carried out. Four concrete mixes with different recycled concrete aggregate content were tested. The main mechanical properties (compressive strength, tensile strength, Young’s modulus) of each concrete mix were measured through standard procedures. A single 14-mm-diameter ribbed rebar, representative of the diameters commonly used in the domain of civil engineering, was embedded into a 200-mm-side concrete cube. The resulting concrete cover is intended to ensure a pull-out type failure (i.e. exceedance of the rebar/concrete interface shear strength). A pull-out test carried out on the 100% recycled concrete specimen was enriched with exploratory acoustic emission measurements. 
Acoustic event location was performed by means of eight piezoelectric transducers distributed over the whole surface of the specimen. The resulting map was compared to existing data related to natural aggregate concrete. Damage distribution around the reinforcement and the main features of the characteristic bond stress/free-end slip curve appeared to be similar to previous results obtained through comparable studies carried out on natural aggregate concrete. This seems to show that the usual bond mechanism sequence (chemical adhesion, mechanical interlocking, and friction) remains unchanged despite the addition of recycled concrete aggregate. However, the results also suggest that bond efficiency seems somewhat improved through the use of recycled concrete aggregate. This observation appears counter-intuitive given that the main concrete mechanical properties diminish with increasing recycled concrete aggregate content. As a consequence, the recycled concrete aggregate content appears to be an important factor affecting bond characteristics, one which should be taken into account and further explored in order to determine flexural parameters such as deflection and crack distribution.
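The characteristic bond stress measured in a pull-out test is conventionally taken as the pull-out force divided by the nominal rebar/concrete contact area. A minimal sketch of that computation, using an assumed peak load and an assumed embedment length of five bar diameters (neither value is taken from the study):

```python
# Hedged illustration: mean bond stress from a pull-out test,
# tau = F / (pi * d * l). Load and embedment length are assumptions.
import math

def mean_bond_stress(force_n: float, diameter_mm: float, embedment_mm: float) -> float:
    """Mean bond stress in MPa (N/mm^2): pull-out force over contact area."""
    contact_area_mm2 = math.pi * diameter_mm * embedment_mm
    return force_n / contact_area_mm2

# Example: assumed 60 kN peak load, 14 mm rebar, embedment of 5 diameters (70 mm)
tau = mean_bond_stress(60_000.0, 14.0, 5 * 14.0)
```

The embedment length of five diameters mirrors common pull-out test conventions; any real comparison would use the study's measured loads.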

Keywords: acoustic emission monitoring, high-bond steel rebar, pull-out test, recycled aggregate concrete

Procedia PDF Downloads 157
308 Development of a Novel Ankle-Foot Orthotic Using a User Centered Approach for Improved Satisfaction

Authors: Ahlad Neti, Elisa Arch, Martha Hall

Abstract:

Studies have shown that individuals who use Ankle-Foot-Orthoses (AFOs) have a high level of dissatisfaction regarding their current AFOs. Studies point to the focus on technical design with little attention given to the user perspective as a source of AFO designs that leave users dissatisfied. To design a new AFO that satisfies users and thereby improves their quality of life, the reasons for their dissatisfaction and their wants and needs for an improved AFO design must be identified. There has been little research into the user perspective on AFO use and desired improvements, so the relationship between AFO design and satisfaction in daily use must be assessed to develop appropriate metrics and constraints prior to designing a novel AFO. To assess the user perspective on AFO design, structured interviews were conducted with 7 individuals (average age of 64.29±8.81 years) who use AFOs. All interviews were transcribed and coded to identify common themes using Grounded Theory Method in NVivo 12. Qualitative analysis of these results identified sources of user dissatisfaction such as heaviness, bulk, and uncomfortable material and overall needs and wants for an AFO. Beyond the user perspective, certain objective factors must be considered in the construction of metrics and constraints to ensure that the AFO fulfills its medical purpose. These more objective metrics are rooted in a common medical device market and technical standards. Given the large body of research concerning these standards, these objective metrics and constraints were derived through a literature review. Through these two methods, a comprehensive list of metrics and constraints accounting for both the user perspective on AFO design and the AFO’s medical purpose was compiled. These metrics and constraints will establish the framework for designing a new AFO that carries out its medical purpose while also improving the user experience. 
The metrics can be categorized into several overarching areas for AFO improvement. Categories of user-perspective metrics include comfort, discreetness, aesthetics, ease of use, and compatibility with clothing. Categories of medical-purpose metrics include biomechanical functionality, durability, and affordability. These metrics were used to guide an iterative prototyping process. Six concepts were ideated and compared using system-level analysis. From these six concepts, two – the piano wire model and the segmented model – were selected to move forward into prototyping. Evaluation of non-functional prototypes of the piano wire and segmented models determined that the piano wire model better fulfilled the metrics by offering increased stability, longer durability, fewer points of failure, and a core component strong enough to allow a sock to cover the AFO while maintaining the overall structure. As such, the piano wire AFO has moved forward into the functional prototyping phase, and healthy-subject testing is being designed, with participants being recruited, to conduct design validation and verification.
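System-level comparison of design concepts against weighted metrics is often done with a Pugh-style scoring matrix. The sketch below is purely hypothetical, with invented weights and scores rather than the study's actual evaluation data, and simply shows the mechanics of such a comparison:

```python
# Hypothetical weighted decision matrix: weights and 1-5 scores are invented.
metrics = {"comfort": 0.30, "durability": 0.25, "stability": 0.25, "cost": 0.20}
scores = {
    "piano wire": {"comfort": 4, "durability": 5, "stability": 5, "cost": 3},
    "segmented":  {"comfort": 4, "durability": 3, "stability": 3, "cost": 4},
}

def weighted_total(concept: str) -> float:
    """Sum of metric weight times the concept's score on that metric."""
    return sum(metrics[m] * scores[concept][m] for m in metrics)

best = max(scores, key=weighted_total)
```

With these invented numbers the piano wire concept ranks higher, echoing the qualitative outcome described above; the real selection rested on prototype evaluation, not this arithmetic.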

Keywords: ankle-foot orthotic, assistive technology, human centered design, medical devices

Procedia PDF Downloads 135
307 The Subtle Influence of Hindu Doctrines on Film Industry: A Case Study of Movie Avatar

Authors: Cemil Kutlutürk

Abstract:

Hindu culture and religious doctrines such as caste, reincarnation, yoga, and nirvana have always proved a popular theme for the film industry. Analyzing these motifs in movies with a scientific approach enables individuals both to comprehend the messages and deep meanings of films and to understand others' religious belief systems and daily lives properly. The primary aim of this study is to examine the subtle influence of Hindu doctrines on the cinema industry by focusing on James Cameron's film Avatar and its relationship with the Hindu concept of avatara, with reference to the original Hindu sacred texts where this doctrine is clarified. The Sanskrit word avatara means to come down or to descend. Although an avatara is commonly considered an appearance of any deity on earth, the term primarily refers to Vishnu's descent to earth. When the movie Avatar and the avatara doctrine are compared, various noteworthy similarities emerge. Firstly, in the movie, Jake is chosen by Eywa to protect Pandora from evil at a time when jealousy and unrighteousness are on the rise. The same concept is found in the avatara doctrine: whenever righteousness (dharma) wanes and unrighteousness (adharma) increases, God incarnates himself as an avatara. In Hindu tradition, the ten avataras of Vishnu are the most popular. This standard list includes the Fish, the Tortoise, the Boar, the Man-Lion (Narasimha), the Dwarf, Parasurama, Rama, Krishna, the Buddha, and Kalki. In the movie, the avatar has a tail, eyes, nose, and ears similar to the Narasimha (half man, half lion) avatara. The use of bow and arrow by the Na'vi in the film evokes the Rama avatara, whose principal weapon is the same. The Na'vi fly on a dragon-like bird called an ikran and ride a horse-like quadruped.
The vehicle for the avatar's transformation in the movie also resembles Garuda, the great mythical bird ridden by Vishnu in Hindu mythology. In addition, according to the Puranas, the last avatara, Kalki, will appear on a white horse. The basic difference is that in Hinduism avatara means the descent of a God, whereas in the movie a human being named Jake Sully is manifested as a humanoid of another planet, and this is called an avatar. While in the movie the avatar manifests on another planet, Pandora, in Hinduism avataras descend into this world. Moreover, in Hindu scriptures there are many avataras, categorized according to their functions and attributes; these aspects of the avatara doctrine are not clearly reflected in the film. Despite these differences, the main hypothesis of this study is that the general character of the movie is similar to the avatara doctrine: instead of emphasizing one specific avatara, the film draws on qualities of different Vishnu avataras.

Keywords: film industry, Hinduism, incarnation, James Cameron, movie avatar

Procedia PDF Downloads 376
306 Exploration into Bio Inspired Computing Based on Spintronic Energy Efficiency Principles and Neuromorphic Speed Pathways

Authors: Anirudh Lahiri

Abstract:

Neuromorphic computing, inspired by the intricate operations of biological neural networks, offers a revolutionary approach to overcoming the limitations of traditional computing architectures. This research proposes the integration of spintronics with neuromorphic systems, aiming to enhance computational performance, scalability, and energy efficiency. Traditional computing systems, based on the Von Neumann architecture, struggle with scalability and efficiency due to the segregation of memory and processing functions. In contrast, the human brain exemplifies high efficiency and adaptability, processing vast amounts of information with minimal energy consumption. This project explores the use of spintronics, which utilizes the electron's spin rather than its charge, to create more energy-efficient computing systems. Spintronic devices, such as magnetic tunnel junctions (MTJs) manipulated through spin-transfer torque (STT) and spin-orbit torque (SOT), offer a promising pathway to reducing power consumption and enhancing the speed of data processing. The integration of these devices within a neuromorphic framework aims to replicate the efficiency and adaptability of biological systems. The research is structured into three phases: an exhaustive literature review to build a theoretical foundation, laboratory experiments to test and optimize the theoretical models, and iterative refinements based on experimental results to finalize the system. The initial phase focuses on understanding the current state of neuromorphic and spintronic technologies. The second phase involves practical experimentation with spintronic devices and the development of neuromorphic systems that mimic synaptic plasticity and other biological processes. The final phase focuses on refining the systems based on feedback from the testing phase and preparing the findings for publication. The expected contributions of this research are twofold. 
Firstly, it aims to significantly reduce the energy consumption of computational systems while maintaining or increasing processing speed, addressing a critical need in the field of computing. Secondly, it seeks to enhance the learning capabilities of neuromorphic systems, allowing them to adapt more dynamically to changing environmental inputs, thus better mimicking the human brain's functionality. The integration of spintronics with neuromorphic computing could revolutionize how computational systems are designed, making them more efficient, faster, and more adaptable. This research aligns with the ongoing pursuit of energy-efficient and scalable computing solutions, marking a significant step forward in the field of computational technology.

Keywords: material science, biological engineering, mechanical engineering, neuromorphic computing, spintronics, energy efficiency, computational scalability, synaptic plasticity

Procedia PDF Downloads 13
305 Evaluation of Coupled CFD-FEA Simulation for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham

Abstract:

Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture the complex fire behavior and its structural effects on its surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing polyisocyanurate (PIR) based sandwich panel products, to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies, one-way and two-way, in a replication of the physical experimental standards test LPS 1181-1 carried out by Tata Steel U.K. A one-way coupled analysis consists of importing thermal data from the CFD solver into the FEA solver. A two-way coupled analysis consists of continuously importing the updated thermal data, reflecting the fire's behavior, into the FEA solver throughout the simulation; likewise, the mechanical changes are passed back to the CFD solver so that geometric changes are included in the solution. For the CFD calculations, the Fire Dynamics Simulator (FDS) has been chosen because its numerical scheme is adapted to focus on fire problems, and validation of FDS applicability has been achieved in past benchmark cases.
In addition, the FEA solver ABAQUS has been chosen to model the structural response to the fire because its crushable foam plasticity model can accurately capture the compressibility of PIR foam. An open-source code called FDS-2-ABAQUS is used to couple the two solvers, employing several Python modules to complete the process, including failure checks. The coupling methodologies and the experimental data acquired from Tata Steel U.K. are compared using several variables, including gas temperatures, surface temperatures, and mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K. sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered.
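At its core, the one-way coupling step described above amounts to resampling the CFD solver's thermal output onto the FEA solver's time grid. The sketch below illustrates that step only; the temperature history is fabricated, and the function name and data layout are assumptions for illustration, not the FDS-2-ABAQUS API:

```python
# Hedged sketch of one-way CFD -> FEA thermal coupling: gas/surface
# temperatures from a CFD run are interpolated onto the FEA time grid.
# All numbers below are fabricated for illustration.
import numpy as np

def build_thermal_history(times_s, temps_c, fea_times_s):
    """Resample CFD surface temperatures onto the FEA solver's time grid."""
    return np.interp(fea_times_s, times_s, temps_c)

# Fabricated CFD output: surface temperature over 600 s of fire exposure
cfd_t = np.array([0.0, 60.0, 300.0, 600.0])
cfd_T = np.array([20.0, 350.0, 820.0, 950.0])   # degrees C

fea_t = np.linspace(0.0, 600.0, 7)              # FEA increments every 100 s
fea_T = build_thermal_history(cfd_t, cfd_T, fea_t)
```

In a two-way scheme this exchange would run inside the time loop, with updated geometry passed back to the CFD side after each increment.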

Keywords: fire engineering, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 71
304 Assessment of Taiwan Railway Occurrences Investigations Using Causal Factor Analysis System and Bayesian Network Modeling Method

Authors: Lee Yan Nian

Abstract:

Safety investigation differs from administrative investigation in that the former is conducted by an independent agency, and its purpose is to prevent future accidents, not to apportion blame or determine liability. Before October 2018, Taiwan railway occurrences were investigated by the local supervisory authority. A characteristic of this kind of investigation is that enforcement actions, such as administrative penalties, are usually imposed on the persons or units involved in an occurrence. On October 21, 2018, a Taiwan Railway accident caused 18 fatalities and injured another 267; it was quickly decided to establish an agency to independently investigate this catastrophic railway accident. The Taiwan Transportation Safety Board (TTSB) was then established on August 1, 2019 to take charge of investigating major aviation, marine, railway, and highway occurrences. The objective of this study is to assess the effectiveness of the safety investigations conducted by the TTSB. In this study, the major railway occurrence investigation reports published by the TTSB are used for modeling and analysis. According to the classification of railway occurrences investigated by the TTSB, accident types of Taiwan railway occurrences can be categorized into derailment, fire, Signal Passed at Danger, and others. A Causal Factor Analysis System (CFAS) developed by the TTSB is used to identify the influencing causal factors and their causal relationships in the investigation reports. All terminologies used in the CFAS are equivalent to those of the Human Factors Analysis and Classification System (HFACS), except for 'Technical Events', which was added to classify causal factors resulting from mechanical failure. Accordingly, the Bayesian network structure of each occurrence category is established based on the causal factors identified in the CFAS.
In the Bayesian networks, the prior probabilities of the identified causal factors are obtained from their occurrence counts in the investigation reports. The conditional probability table of each node is determined from domain experts' experience and judgement. The resulting networks are quantitatively assessed under different scenarios to evaluate their forward-prediction and backward-diagnostic capabilities. Finally, the established Bayesian network for derailment is assessed using investigation reports of the same accident, which was investigated by both the TTSB and the local supervisory authority. Based on the assessment results, the findings of the administrative investigation are more closely tied to errors of front-line personnel than to organizational factors. Safety investigation can identify not only the unsafe acts of individuals but also the in-depth causal factors of organizational influences. The results show that the proposed methodology can identify differences between safety investigation and administrative investigation. Therefore, effective intervention strategies in the associated areas can be better addressed for safety improvement and future accident prevention through safety investigation.
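The forward-prediction and backward-diagnosis queries can be illustrated with a minimal two-node network. The probabilities below are invented purely for illustration; in the study, priors come from occurrence counts and the conditional probability tables from expert judgement:

```python
# Toy two-node Bayesian network: an organizational causal factor influences
# the probability of a derailment. All probabilities are invented.
p_org = 0.30                        # prior: P(organizational factor present)
p_derail_given = {True: 0.60,       # P(derailment | factor present)
                  False: 0.10}      # P(derailment | factor absent)

# Forward prediction: marginal probability of a derailment
p_derail = (p_org * p_derail_given[True]
            + (1 - p_org) * p_derail_given[False])

# Backward diagnosis: P(factor present | derailment observed), by Bayes' rule
p_org_given_derail = p_org * p_derail_given[True] / p_derail
```

Observing the derailment raises the probability of the organizational factor from 0.30 to 0.72 here, which is the diagnostic direction the study uses to contrast safety and administrative investigations.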

Keywords: administrative investigation, Bayesian network, causal factor analysis system, safety investigation

Procedia PDF Downloads 99
303 An Engaged Approach to Developing Tools for Measuring Caregiver Knowledge and Caregiver Engagement in Juvenile Type 1 Diabetes

Authors: V. Howard, R. Maguire, S. Corrigan

Abstract:

Background: Type 1 Diabetes (T1D) is a chronic autoimmune disease, typically diagnosed in childhood. T1D puts an enormous strain on families; controlling blood glucose in children is difficult, and the consequences of poor control for patient health are significant. Successful illness management and better health outcomes can depend on the quality of caregiving. On diagnosis, parent-caregivers face a steep learning curve, as T1D care requires a significant level of knowledge to inform complex decision-making throughout the day. The majority of illness management is carried out in the home setting, independent of clinical health providers. Parent-caregivers vary in their level of knowledge and in their engagement in applying this knowledge in the practice of illness management. Enabling researchers to quantify these aspects of the caregiver experience is key to identifying targets for psychosocial support interventions, which are desirable for reducing stress and anxiety in this highly burdened cohort and for supporting better health outcomes in children. Currently, limited tools are available that are designed to capture this information; where tools do exist, they are not comprehensive and do not adequately capture the lived experience. Objectives: To develop quantitative tools, informed by lived experience, that enable researchers to gather data on parent-caregiver knowledge and engagement, accurately represent the experience of the cohort, and allow exploration of questions that are of real-world value to the cohort themselves. Methods: This research employed an engaged approach to the problem of quantifying two key aspects of caregiver diabetes management: knowledge and engagement. The research process was multi-staged and iterative.
Stage 1: Working from a constructivist standpoint, the literature was reviewed to identify relevant questionnaires, scales, and single-item measures of T1D caregiver knowledge and engagement, and to harvest candidate questionnaire items. Stage 2: Aggregated findings from the review were circulated among a PPI (patient and public involvement) expert panel of caregivers (n=6) for discussion and feedback. Stage 3: In collaboration with the expert panel, data were interpreted through the lens of lived experience to create a long-list of candidate items for the novel questionnaires, with items categorized as either 'knowledge' or 'engagement'. Stage 4: A Delphi-method process (iterative surveys) was used to prioritize question items and generate novel questions that further captured the lived experience. Stage 5: Both questionnaires were piloted to refine wording, increase accessibility, and limit socially desirable responding. Stage 6: The tools were piloted in an online survey deployed through an online peer-support group for caregivers of juveniles with T1D. Ongoing Research: 123 parent-caregivers completed the survey. Data analysis is ongoing to establish face and content validity, both qualitatively and through exploratory factor analysis. Reliability will be established using an alternative-form method, and Cronbach's alpha will assess internal consistency. Work will be completed by early 2024. Conclusion: These tools will enable researchers to gain deeper insights into caregiving practices among parents of juveniles with T1D. Their development was driven by lived experience, illustrating the value of engaged research at all levels of the research process.
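The internal-consistency check planned above uses Cronbach's alpha, α = k/(k−1) · (1 − Σ item variances / total-score variance). A minimal sketch of the computation with a fabricated response matrix (the study's actual data are not public):

```python
# Cronbach's alpha from a (respondents x items) score matrix.
# The response matrix below is fabricated purely to show the computation.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = questionnaire items."""
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()   # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)         # variance of total scores
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

responses = np.array([[4, 5, 4],
                      [3, 3, 2],
                      [5, 5, 5],
                      [2, 3, 3],
                      [4, 4, 5]])
alpha = cronbach_alpha(responses)
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, though the threshold depends on the instrument's purpose.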

Keywords: caregiving, engaged research, juvenile type 1 diabetes, quantified engagement and knowledge

Procedia PDF Downloads 42
302 One-Stage Conversion of Adjustable Gastric Band to One-Anastomosis Gastric Bypass versus Sleeve Gastrectomy: A Single-Center Experience with Short- and Mid-Term Follow-Up

Authors: Basma Hussein Abdelaziz Hassan, Kareem Kamel, Philobater Bahgat Adly Awad, Karim Fahmy

Abstract:

Background: The laparoscopic adjustable gastric band was one of the most commonly applied bariatric procedures in the last 8 years. However, the failure rate was very high, with approximately 60% of patients not achieving the desired weight loss, and most of these patients sought revisional surgery. We therefore compared two of the most common weight loss procedures performed nowadays: laparoscopic sleeve gastrectomy and laparoscopic one-anastomosis gastric bypass. Objective: To compare weight loss and postoperative outcomes among patients undergoing conversion to laparoscopic one-anastomosis gastric bypass (cOAGB) or laparoscopic sleeve gastrectomy (cSG) after a failed laparoscopic adjustable gastric band (LAGB). Patients and Methods: A prospective cohort study was conducted from June 2020 to June 2022 at a single medical center and included 77 patients undergoing single-stage conversion to cOAGB or cSG. Patients were reassessed for weight loss, comorbidity remission, and postoperative complications at 6, 12, and 18 months. Results: The study included 77 patients with failed LAGB: Group I comprised 43 patients who underwent cOAGB, and Group II comprised 34 patients who underwent cSG. The mean age was 38.58 in the cOAGB group and 39.47 in the cSG group (p=0.389). Of the 77 patients, 10 (12.99%) were male and 67 (87.01%) were female. The mean body mass index (BMI) was 41.06 in the cOAGB group and 40.5 in the cSG group (p=0.042). The two groups were compared postoperatively in terms of %EBWL, BMI, and comorbidity remission over 18 months of follow-up, with BMI calculated at three postoperative visits. After 6 months of follow-up, the mean BMI was 34.34 in the cOAGB group and 35.47 in the cSG group (p=0.229); at 12 months, it was 32.69 in the cOAGB group and 33.79 in the cSG group (p=0.2).
Finally, after 18 months of follow-up, the mean BMI was 30.02 in the cOAGB group and 31.79 in the cSG group (p=0.001). The differences between the groups were not statistically significant at 6 and 12 months (p=0.229 and p=0.2, respectively); however, patients who underwent cOAGB achieved a lower BMI at 18 months than those who underwent cSG, a statistically significant difference (p=0.005). Regarding %EBWL, there was a statistically significant difference between the two groups. After 6 months of follow-up, the mean %EBWL was 35.9% in the cOAGB group and 33.14% in the cSG group; at 12 months, it was 52.35 in the cOAGB group and 48.76 in the cSG group (p=0.045); and after 18 months, it was 62.06±8.68 in the cOAGB group and 55.58±10.87 in the cSG group (p=0.005). Regarding comorbidity remission: diabetes mellitus remission was found in 22 (88%) patients in the cOAGB group and 10 (71.4%) patients in the cSG group (p=0.225); hypertension remission in 20 (80%) and 14 (82.4%) patients, respectively (p=1); dyslipidemia remission in 27 (87%) and 17 (70%) patients (p=0.18); and GERD remission in 15 (88.2%) and 6 (60%) patients (p=0.47). There were no statistically significant differences between the two groups in postoperative outcomes. Conclusion: This study suggests that conversion of LAGB to either cOAGB or cSG can feasibly be performed as a single-stage operation. cOAGB achieved significantly better weight loss than cSG at mid-term follow-up, with no significant difference in postoperative complications or in the resolution of comorbidities.
Therefore, cOAGB could provide a reliable alternative but needs to be substantiated in future long-term studies.
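The %EBWL figures reported above follow the usual bariatric convention: weight lost as a percentage of the excess weight over an ideal weight, with the ideal weight commonly taken at BMI 25. A sketch of that arithmetic with an invented patient (the values are not from the study's cohort):

```python
# Percent excess body weight loss (%EBWL), a common bariatric outcome measure.
# Ideal weight at BMI 25 is a common convention, not a value from the study.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def ebwl_percent(initial_kg: float, current_kg: float, height_m: float,
                 ideal_bmi: float = 25.0) -> float:
    """%EBWL = 100 * (weight lost) / (excess weight over the ideal weight)."""
    ideal_kg = ideal_bmi * height_m ** 2
    return 100.0 * (initial_kg - current_kg) / (initial_kg - ideal_kg)

# Invented example: 1.70 m patient, 120 kg at surgery (BMI ~41.5), 95 kg later
loss = ebwl_percent(120.0, 95.0, 1.70)
```

A 25 kg loss for this hypothetical patient corresponds to roughly half the excess weight, in the same range as the 6-to-12-month figures reported above.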

Keywords: laparoscopic, gastric banding, one-anastomosis gastric bypass, sleeve gastrectomy, revisional surgery, weight loss

Procedia PDF Downloads 37
301 Teachers’ Instructional Decisions When Teaching Geometric Transformations

Authors: Lisa Kasmer

Abstract:

Teachers’ instructional decisions shape the structure and content of mathematics lessons and influence the mathematics that students are given the opportunity to learn. Therefore, it is important to better understand how teachers make instructional decisions and thus find new ways to help practicing and future teachers give their students a more effective and robust learning experience. Understanding the relationship between teachers’ instructional decisions and their goals, resources, and orientations (beliefs) is important given the heightened focus on geometric transformations in the middle school mathematics curriculum. This work is significant because the development and support of current and future teachers require more effective ways to teach geometry. The following research questions frame this study: (1) As middle school mathematics teachers plan and enact instruction related to teaching transformations, what thinking processes do they engage in to decide whether to teach transformations with or without a coordinate system? (2) How do the goals, resources, and orientations of these teachers impact their instructional decisions, and what do those decisions reveal about their understanding of teaching transformations? Teachers and students alike struggle with understanding transformations; many teachers skip transformations or teach them hurriedly at the end of the school year. However, transformations are an important mathematical topic, as they support students’ geometric and spatial reasoning. Geometric transformations are a foundational concept in mathematics, not only for understanding congruence and similarity but also for proofs, algebraic functions, and calculus. Geometric transformations also underpin the secondary mathematics curriculum, as features of transformations transfer to other areas of mathematics.
Teachers’ instructional decisions, along with the goals, orientations, and resources that support them, were analyzed using open coding. Open coding is recognized as an initial first step in qualitative analysis, in which comparisons are made and preliminary categories are considered. Initial codes and categories from current research on the thinking processes behind teachers' decisions while planning and reflecting on lessons were also noted. Ideas and additional themes common across teachers were surfaced, compared, and analyzed in search of patterns. Finally, attributes of teachers’ goals, orientations, and resources were identified in order to begin to build a picture of the reasoning behind their instructional decisions. These categories became the basis for the organization and conceptualization of the data. Preliminary results suggest that teachers often rely on their own orientations about teaching geometric transformations. These beliefs are underpinned by the teachers’ own mathematical knowledge related to teaching transformations. When teachers do not have a robust understanding of transformations, they are limited by this lack of knowledge, which restricts students’ opportunities to learn and thus disadvantages students' understanding of transformations. Teachers’ goals are also limited by their paucity of knowledge regarding transformations, as these goals do not fully represent the range of comprehension a teacher needs to teach this topic well.
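The with-or-without-a-coordinate-system distinction at the heart of the first research question is easiest to see on the coordinate plane, where each rigid motion reduces to a simple rule on (x, y). A toy sketch of three standard transformations, included only to make the mathematical content concrete:

```python
# Standard coordinate-plane rigid motions as rules on (x, y).
def translate(p, dx, dy):
    x, y = p
    return (x + dx, y + dy)

def reflect_x_axis(p):
    # Reflection over the x-axis: (x, y) -> (x, -y)
    x, y = p
    return (x, -y)

def rotate_90_ccw(p):
    # 90-degree counterclockwise rotation about the origin: (x, y) -> (-y, x)
    x, y = p
    return (-y, x)

triangle = [(1, 1), (4, 1), (1, 3)]
image = [rotate_90_ccw(p) for p in triangle]
```

Taught without coordinates, the same motions are reasoned about through distance and angle preservation, which is precisely the instructional choice the study examines.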

Keywords: coordinate plane, geometric transformations, instructional decisions, middle school mathematics

Procedia PDF Downloads 74
300 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering flows and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration extends to filter anisotropy to address its impact on SFS dynamics and LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in LES filters are evaluated.
The findings highlight the DDM's pro ficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as lter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as fi lter anisotropy intensify , the results of DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all fi lter-anisotropy scenarios. The fi ndings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for LES of turbulence.
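As a rough one-dimensional illustration of the approximate-deconvolution idea (a sketch, not the authors' implementation), a Gaussian filter and a few van Cittert iterations might look like the following; the filter width `delta`, grid spacing `dx`, and iteration count are illustrative assumptions:

```python
import numpy as np

def gaussian_filter_1d(u, delta, dx):
    """Apply a Gaussian filter of width delta via FFT on a periodic domain.
    Transfer function: G(k) = exp(-k^2 * delta^2 / 24)."""
    k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
    G = np.exp(-(k**2) * delta**2 / 24.0)
    return np.real(np.fft.ifft(G * np.fft.fft(u)))

def van_cittert_deconvolve(u_bar, delta, dx, n_iter=5):
    """Approximate inverse filtering: u* = sum_m (I - G)^m applied to u_bar,
    built up iteratively so the filtered field is progressively sharpened."""
    u_star = u_bar.copy()
    for _ in range(n_iter):
        u_star = u_star + (u_bar - gaussian_filter_1d(u_star, delta, dx))
    return u_star
```

The reconstructed field `u_star` would then enter an SFS stress estimate such as tau = filter(u* u*) - filter(u) filter(u); with an FGR well above 1, the deconvolution has enough resolved SFS content to work with, which mirrors the error-control role of the FGR described above.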

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 58
299 Persistent Ribosomal In-Frame Mis-Translation of Stop Codons as Amino Acids in Multiple Open Reading Frames of a Human Long Non-Coding RNA

Authors: Leonard Lipovich, Pattaraporn Thepsuwan, Anton-Scott Goustin, Juan Cai, Donghong Ju, James B. Brown

Abstract:

Two-thirds of human genes do not encode any known proteins. Aside from long non-coding RNA (lncRNA) genes with recently-discovered functions, the ~40,000 non-protein-coding human genes remain poorly understood, and a role for their transcripts as de-facto unconventional messenger RNAs has not been formally excluded. Ribosome profiling (Riboseq) predicts translational potential, but without independent evidence of proteins from lncRNA open reading frames (ORFs), ribosome binding of lncRNAs does not prove translation. Previously, we mass-spectrometrically documented translation of specific lncRNAs in human K562 and GM12878 cells. We now examined lncRNA translation in human MCF7 cells, integrating strand-specific Illumina RNAseq, Riboseq, and deep mass spectrometry in biological quadruplicates performed at two core facilities (BGI, China; City of Hope, USA). We excluded known-protein matches. UCSC Genome Browser-assisted manual annotation of imperfect (tryptic-digest-peptides)-to-(lncRNA-three-frame-translations) alignments revealed three peptides hypothetically explicable by 'stop-to-nonstop' in-frame replacement of stop codons by amino acids in two ORFs of the lncRNA MMP24-AS1. To search for this phenomenon genomewide, we designed and implemented a novel pipeline, matching tryptic-digest spectra to wildcard-instead-of-stop versions of repeat-masked, six-frame, whole-genome translations. Along with singleton putative stop-to-nonstop events affecting four other lncRNAs, we identified 24 additional peptides with stop-to-nonstop in-frame substitutions from multiple positive-strand MMP24-AS1 ORFs. Only UAG and UGA, never UAA, stop codons were impacted. All MMP24-AS1-matching spectra met the same significance thresholds as high-confidence known-protein signatures. Targeted resequencing of MMP24-AS1 genomic DNA and cDNA from the same samples did not reveal any mutations, polymorphisms, or sequencing-detectable RNA editing. 
This unprecedented apparent gene-specific violation of the genetic code highlights the importance of matching peptides to whole-genome, not known-genes-only, ORFs in mass-spectrometry workflows, and suggests a new mechanism enhancing the combinatorial complexity of the proteome. Funding: NIH Director’s New Innovator Award 1DP2-CA196375 to LL.
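The wildcard-instead-of-stop, six-frame translation step described above can be sketched as follows (a simplified illustration, not the authors' pipeline; the standard genetic code is assumed, stop codons are replaced by an 'X' wildcard, and repeat masking and spectrum matching are omitted):

```python
from itertools import product

# Standard genetic code enumerated in TCAG order; '*' marks stop codons
BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {"".join(c): aa for c, aa in zip(product(BASES, repeat=3), AMINO)}

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def six_frame_wildcard(seq):
    """Translate all six reading frames, replacing stops with the wildcard 'X'."""
    frames = []
    for strand in (seq, revcomp(seq)):
        for offset in range(3):
            codons = (strand[i:i + 3] for i in range(offset, len(strand) - 2, 3))
            frames.append("".join(CODON_TABLE[c] for c in codons).replace("*", "X"))
    return frames
```

Tryptic-digest spectra would then be matched against these wildcard translations, so a peptide spanning a UAG or UGA codon can still score, which is the essence of the stop-to-nonstop search.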

Keywords: genetic code, lncRNA, long non-coding RNA, mass spectrometry, proteogenomics, ribo-seq, ribosome, RNAseq

Procedia PDF Downloads 212
298 Benjaminian Translatability and Elias Canetti's Life Component: The Other German Speaking Modernity

Authors: Noury Bakrim

Abstract:

Translatability is one of Walter Benjamin’s most influential notions. In a sense, it represents the philosophy of language and history of what we call, and have indeed coined as, ‘the other German-speaking modernity’, which could be shaped as a form of thought parallel to the Marxian-Hegelian philosophy of history represented by the Frankfurt School. On the other hand, we should consider the influence of the plural German-speaking identity and of the Nietzschean and Goethean heritage, the latter focused on a positive will to power: the humanized human being. With the Benjaminian notion of translatability (Übersetzbarkeit) in perspective, defined as a permanent internal hermeneutical possibility as well as a phenomenological potential of a translation relation, we are in fact touching the double limit of both historical and linguistic reason. By life component, we mean the changing conditions of genetic and neurolinguistic post-partum functions, to be grasped as an individuation beyond the historical determinism and teleology of an event. It is, so to speak, the retrospective/introspective Canettian auto-fiction, the Benjaminian crystallization of the language experience in the now-time of writing/transmission. Furthermore, it raises various questions regarding translatability. These relate to separate psycholinguistic poles, the fatherly Ladino Spanish and the motherly Vienna German, but more particularly to the permanent ontological quest of a world loss/belonging. Another level of this quest is the status of Veza Canetti-Taubner Calderón, German-speaking author, Canetti’s ‘literary wife’, the writer’s love, his inverted logos, protective and yet controversial ‘official private life partner’, the permanence of the Jewish experience in the exiled German language.
It sheds light on the traumatic relation of an inadequate/possible language facing the reconstruction of an oral life, on the unconscious split of the signifier and, above all, on the frustrating status of writing in Canetti’s work: using a suffering/suffered written German to save his remembered acquisition of his tongue/mother tongue by saving the vanishing spoken multilingual experience. Canetti’s only novel, ‘Die Blendung’, designates that fictional referential dynamics focusing on the Nazi worldless horizon: the figure of Kien is an onomastic signifier, the anti-Canetti figure, the misunderstood legacy of Kant, the system without thought. Our postulate is the double translatability of his auto-fiction inventing the bios oral signifier, based on the new praxemes created by Canetti’s German, as observed in the English and French translations of his memory corpus. We aim at conceptualizing life component and translatability as two major features of a German-speaking modernity.

Keywords: translatability, language biography, presentification, bioeme, life order

Procedia PDF Downloads 414
297 Investigating the Governance of Engineering Services in the Aerospace and Automotive Industries

Authors: Maria Jose Granero Paris, Ana Isabel Jimenez Zarco, Agustin Pablo Alvarez Herranz

Abstract:

In the industrial sector, collaboration with suppliers is key to the development of process innovations. Access to resources and expertise that are not available within the business, obtaining a cost advantage, or reducing the time needed to carry out innovation are some of the benefits associated with the process. However, the success of this collaborative process is compromised when clear rules governing the relationship have not been established from the beginning. Abundant studies in the field of innovation emphasize the strategic importance of the concept of “governance”. Despite this, few papers have analyzed how the governance of the relationship must be designed and managed to ensure the success of the cooperation process. The lack of literature in this area reflects the wide diversity of contexts in which collaborative processes to innovate take place. Thus, in sectors such as the car industry there is a strong collaborative tradition between manufacturers and suppliers, who form part of the value chain. In this case, it is common to establish mechanisms and procedures that fix formal and clear objectives to regulate the relationship and establish the rights and obligations of each of the parties involved. By contrast, in other sectors collaborative relationships to innovate are not a common way of working, particularly when their aim is the development of process improvements. In these cases, the lack of mechanisms to establish and regulate the behavior of those involved can give rise to conflicts and to the failure of the cooperative relationship. For this reason, the present paper analyzes the similarities and differences in the governance of collaboration with providers of engineering R&D services in the European aerospace industry.
With these ideas in mind, the aims of our research are twofold: to understand the importance of governance as a key element of successful cooperation in the development of process innovations, and to establish the mechanisms and procedures that ensure the proper management of the cooperation process. Following the case study methodology, we analyze the way in which manufacturers and suppliers cooperate in the development of new processes in two industries with different levels of technological intensity and collaborative tradition: the automotive and aerospace industries. Identifying the elements that play a key role in establishing successful governance and relationship management, and understanding the mechanisms of regulation and control in place in the automotive sector, can be used to propose solutions to some of the conflicts that currently arise in the aerospace industry. The paper concludes by analyzing the strategic implications that the adoption of some practices traditionally used in other industrial sectors entails for the aerospace industry. Finally, it is important to highlight that this paper presents the first results of a research project currently in progress, describing a model of governance that explains how to manage engineering services outsourced to suppliers in the European aerospace industry, through the analysis of companies in the sector located in Germany, France and Spain.

Keywords: innovation management, innovation governance, managing collaborative innovation, process innovation

Procedia PDF Downloads 284
296 Investigation of Ground Disturbance Caused by Pile Driving: Case Study

Authors: Thayalan Nall, Harry Poulos

Abstract:

Piling is the most widely used foundation method for heavy structures in poor soil conditions. The geotechnical engineer can choose among a variety of piling methods, but in most cases driving piles by impact hammer is the most cost-effective alternative. Under unfavourable conditions, driving piles can cause environmental problems, such as noise, ground movements and vibrations, with the risk of ground disturbance leading to potential damage to proposed structures. In one of the project sites in which the authors were involved, three offshore container terminals, namely CT1, CT2 and CT3, were constructed over thick compressible marine mud. The seabed was around 6 m deep and the soft clay thickness within the project site varied between 9 m and 20 m. CT2 and CT3 were connected together, rectangular in shape, and 2600 m x 800 m in size. CT1 was 400 m x 800 m in size and was located to the south of CT2, towards its eastern end. CT1 was constructed first and, due to time and environmental limitations, it was supported on a “forest” of large-diameter driven piles. CT2 and CT3 are now under construction using a traditional dredging and reclamation approach, with ground improvement by surcharging with vertical drains. A few months after the installation of the CT1 piles, a 2600 m long sand bund rising to 2 m above mean sea level was constructed along the southern perimeter of CT2 and CT3 to contain the dredged mud that was expected to be pumped. The sand bund was constructed by sand spraying and pumping using a dredging vessel. About 2000 m of the sand bund in the west section was constructed without any major stability issues or noticeable distress. However, as the sand bund approached the section parallel to CT1, it underwent a series of deep-seated failures, causing the displaced soft clay to heave above the standing water level. The crest of the sand bund was about 100 m away from the last row of piles.
There were no plausible geological reasons to conclude that the marine mud across the CT1 region alone was weaker than over the rest of the site. Hence it was suspected that pile driving by impact hammer may have caused ground movements and vibrations, leading to the generation of excess pore pressures and cyclic softening of the marine mud. This paper investigates the probable cause of failure by reviewing: (1) all ground investigation data within the region; (2) soil displacement caused by pile driving, using theories similar to spherical cavity expansion; (3) transfer of stresses and vibrations through the entire system, including vibrations transmitted from the hammer to the pile, and the dynamic properties of the soil; and (4) generation of excess pore pressure due to ground vibration and the resulting cyclic softening. The evidence suggests that the problems encountered at the site were primarily caused by the “side effects” of the pile driving operations.
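As a rough back-of-the-envelope illustration of item (2), a plane-strain, incompressible-soil cavity-expansion estimate of the radial soil displacement around a single driven pile might look like this (an illustrative simplification, not the authors' analysis; the pile radius and offsets are assumed values):

```python
import math

def radial_displacement(r, pile_radius):
    """Incompressible plane-strain estimate: the pile cross-section
    pi*r0^2 is accommodated by pushing a ring of soil outward, so
    pi*((r + d)^2 - r^2) = pi*r0^2  =>  d = sqrt(r^2 + r0^2) - r."""
    return math.sqrt(r**2 + pile_radius**2) - r

# Illustrative values: 0.5 m pile radius, offsets measured from the pile axis
for r in (1.0, 10.0, 100.0):
    d_mm = radial_displacement(r, 0.5) * 1000.0
    print(f"r = {r:6.1f} m  ->  displacement ~ {d_mm:.2f} mm")
```

Under these crude assumptions the static displacement at the ~100 m offset of the sand bund crest is on the order of a millimetre, which is consistent with the paper's emphasis on transmitted vibrations and excess pore pressure rather than static displacement alone.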

Keywords: pile driving, ground vibration, excess pore pressure, cyclic softening

Procedia PDF Downloads 216
295 Nondestructive Monitoring of Atomic Reactions to Detect Precursors of Structural Failure

Authors: Volodymyr Rombakh

Abstract:

This article was written to substantiate the possibility of detecting the precursors of catastrophic destruction of a structure or device and stopping operation before it occurs. Damage to solids results from breaking the bonds between atoms, which requires energy. Modern theories of strength and fracture assume that this energy is due to stress. However, in a letter to W. Thomson (Lord Kelvin) dated December 18, 1856, J. C. Maxwell provided evidence that elastic energy cannot destroy solids. He proposed an equation for estimating a deformable body's energy, equal to the sum of two energies: the first term does not change under symmetrical compression, while the second term corresponds to distortion without compression. Both types of energy are represented in the equation as quadratic functions of strain, and Maxwell repeatedly wrote that it is strain, not stress. Furthermore, he noted that the nature of the energy causing the distortion was unknown to him. His article devoted to theories of elasticity was published in 1850. Maxwell tried to express mechanical properties with the help of optics, which became possible only after the creation of quantum mechanics. However, Maxwell's work on elasticity is not cited in the theories of strength and fracture, whose authors are still trying to describe the phenomena they observe on the basis of classical mechanics. The study of Faraday's experiments and of Maxwell's and Rutherford's ideas made it possible to discover a previously unknown region of electromagnetic radiation. The properties of the photons emitted in this reaction are fundamentally different from those of photons emitted in nuclear reactions or by the transitions of electrons in an atom. Such photons are released during all processes in the universe, including by plants and organs under natural conditions; their penetrating power in metal is millions of times greater than that of gamma rays, and yet they are non-invasive.
This apparent contradiction arises because the chaotic motion of protons is accompanied by chaotic radiation of photons in time and space; such photons are not coherent. The energy of a solitary photon is insufficient to break the bond between atoms, one of the stages of which is ionization. Photographs registered the deformation of a rail by 113 cars, while a Geiger counter did not. The author's studies show that the cause of damage to a solid is the breakage of bonds between a finite number of atoms due to the stimulated emission of metastable atoms. The guarantee of the reliability of a structure is the ratio of the energy dissipation rate to the energy accumulation rate, not the strength, which is not a physical parameter since it can be neither measured nor calculated. The possibility of continuous control of this ratio is due to the spontaneous emission of photons by metastable atoms. The article presents calculation examples and photographs of destruction due to the action of photons emitted during the atomic-proton reaction.

Keywords: atomic-proton reaction, precursors of man-made disasters, strain, stress

Procedia PDF Downloads 76
294 The Role of Intraluminal Endoscopy in the Diagnosis and Treatment of Fluid Collections in Patients With Acute Pancreatitis

Authors: A. Askerov, Y. Teterin, P. Yartcev, S. Novikov

Abstract:

Introduction: Acute pancreatitis (AP) is a socially significant public health problem and continues to be one of the most common causes of hospitalization of patients with pathology of the gastrointestinal tract. It is characterized by high mortality rates, reaching 62-65% in infected pancreatic necrosis. Aims & Methods: The study group included 63 patients who underwent transluminal drainage (TLD) of fluid collections (FC). All patients underwent transabdominal ultrasound, computed tomography of the abdominal cavity and retroperitoneal organs, and endoscopic ultrasound (EUS) of the pancreatobiliary zone. EUS was used as the final diagnostic method to determine the characteristics of the FC. The indications for TLD were: a distance between the wall of the hollow organ and the FC of not more than 1 cm, the absence of large vessels (more than 3 mm) on the puncture trajectory, and a size of the formation of more than 5 cm. When a homogeneous cavity with clear, even contours was detected, a plastic stent with rounded ends (“double pigtail”) was installed. The indication for the installation of a fully covered self-expanding stent was the detection of a nonhomogeneous anechoic FC with hyperechoic inclusions and cloudy purulent contents. In patients with necrotic forms, after drainage of the purulent cavity, a cystonasal drain with a diameter of 7 Fr was installed in its lumen under X-ray control to sanitize the cavity with a 0.05% aqueous solution of chlorhexidine. Endoscopic necrectomy was performed every 24-48 hours. The plastic stent was removed 6 months, and the fully covered self-expanding stent 1 month, after the patient was discharged from the hospital. Results: Endoscopic TLD was performed in 63 patients. An FC corresponding to interstitial edematous pancreatitis was detected in 39 (62%) patients, who underwent TLD with the installation of a plastic stent with rounded ends.
In 24 (38%) patients with necrotic forms of FC, a fully covered self-expanding stent was placed. Communication with the ductal system of the pancreas was found in 5 (7.9%) patients, who underwent pancreaticoduodenal stenting. A complicated postoperative period was noted in 4 (6.3%) cases, manifested by bleeding from the zone of pancreatogenic destruction. In 2 (3.1%) cases this required angiography and endovascular embolization of a. gastroduodenalis; in 1 (1.6%) case endoscopic hemostasis was performed by filling the cavity with 4 ml of Hemoblock hemostatic solution. The combination of both methods was used in 1 (1.6%) patient. There was no recurrent bleeding in these patients. A lethal outcome occurred in 4 patients (6.3%). In 3 (4.7%) patients the cause of death was multiple organ failure, and in 1 (1.6%) a severe nosocomial pneumonia that developed on the 32nd day after drainage. Conclusions: 1. EUS is not only the most important method for diagnosing FC in AP but also determines further tactics for their intraluminal drainage. 2. Endoscopic intraluminal drainage of fluid zones is, in 45.8% of cases, the final minimally invasive method of surgical treatment of large-focal pancreatic necrosis. Disclosure: Nothing to disclose.

Keywords: acute pancreatitis, fluid collection, endoscopy surgery, necrectomy, transluminal drainage

Procedia PDF Downloads 87
293 Cultural Identity of Mainland Chinese, Hongkonger and Taiwanese: A Glimpse from Hollywood Film Title Translation

Authors: Ling Yu Debbie Tsoi

Abstract:

After China surpassed the USA as the top Hollywood film market in 2018, Hollywood studios have been adapting taste, preferences, casting and even film title translation to resonate with the Chinese audience. Owing to the huge foreign demand, Hollywood film directors are paying closer attention to the translation of their products, as film titles are entry gates to the film and serve advertising, informative and aesthetic functions. Beyond film directors and studios, comments on the quality of film title translation also appear on various online clip-viewing platforms, online media, and magazines. In particular, netizens in mainland China, Hong Kong, and Taiwan seem to defend the film titles of their own region while despising those of the other two regions. In view of the endless debates and the lack of systematic analysis of film title translation in Greater China, this study aims at investigating the translation of Hollywood film titles (from English to Chinese) across Greater China based on Venuti’s (1991; 1995; 1998; 2001) concepts of domestication and foreignization. To offer a comparison over time, a mini-corpus was built comprising the top 70 most popular Hollywood film titles in 1987-1988, 1997-1998, 2007-2008 and 2017-2018 in Greater China. Altogether, 560 source texts and 1680 target texts from mainland China, Hong Kong, and Taiwan were compared against each other. The three regions are found to have distinctive styles and patterns of translation. For instance, a sizable number of film titles are foreignized in mainland China through literal translation and transliteration, whereas Hong Kong and Taiwan prefer domestication. Hong Kong tends to adopt a more vulgar style, using colloquial Cantonese slang and even swear words and associating characters with negative connotations. Also, English is used as a form of domestication in Hong Kong from 1987 to 2018; the use of English as a strategy of domestication was never found in the mainland or Taiwan.
On the contrary, Taiwanese target texts tend to adopt a cute and child-like style, using repetitive words and positive connotations; when English was used, it served as foreignization. As film titles represent cultural products of popular culture, it is suspected that Hongkongers seek to develop a cultural identity by adopting a style distinct from mainland China through vulgarization and negativity. Hongkongers also identify themselves as international cosmopolitans, leading to their identification with English. It is also suspected that, due to former Japanese colonial rule, Taiwan has adopted a popular culture similar to Japan's, with cute and childlike expressions.

Keywords: cultural identification, ethnic identification, Greater China, film title translation

Procedia PDF Downloads 130
292 Coupling Strategy for Multi-Scale Simulations in Micro-Channels

Authors: Dahia Chibouti, Benoit Trouette, Eric Chenier

Abstract:

With the development of micro-electro-mechanical systems (MEMS), understanding fluid flow and heat transfer at the micrometer scale is crucial. When the characteristic length scale of the flow narrows to around ten times the mean free path of the gas molecules, the classical fluid mechanics and energy equations are still valid in the bulk flow, but particular attention must be paid to the boundary conditions at the gas/solid interface. Indeed, in the vicinity of the wall, over a thickness of about one mean free path, called the Knudsen layer, the gas molecules are no longer in local thermodynamic equilibrium. Therefore, macroscopic models based on velocity, temperature and heat flux jump conditions must be applied at the fluid/solid interface to take this non-equilibrium into account. Although these macroscopic models are widely used, the assumptions on which they depend are not necessarily verified in realistic cases. To get rid of these assumptions, simulations at the molecular scale are carried out to study how the interaction of molecules with walls can change the fluid flow and heat transfer in the vicinity of the walls. The developed approach is based on a kind of heterogeneous multi-scale method: micro-domains overlap the continuous domain, and coupling is carried out through exchanges of information between the molecular and continuum approaches. In practice, molecular dynamics describes the fluid flow and heat transfer in the micro-domains, while the Navier-Stokes and energy equations are used at larger scales. In this framework, two kinds of micro-simulation are performed: i) in the bulk, to obtain the thermo-physical properties (viscosity, conductivity, ...) as well as the equation of state of the fluid; ii) close to the walls, to identify the relationships between the slip velocity and the shear stress, or between the temperature jump and the normal temperature gradient.
The coupling strategy relies on an implicit formulation of the quantities extracted from the micro-domains. Using the results of the molecular simulations, a Bayesian regression is performed in order to build continuous laws giving the behavior of the physical properties, the equation of state and the slip relationships, together with their uncertainties. The latter make it possible to set up a learning strategy that optimizes the number of micro-simulations. In the present contribution, the first results regarding this coupling associated with the learning strategy are illustrated through parametric studies of convergence criteria, the choice of basis functions and the noise of the input data. Non-isothermal flows of a Lennard-Jones fluid in micro-channels are finally presented.
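As a schematic illustration of the uncertainty-driven learning idea (a sketch, not the authors' formulation), a conjugate Bayesian linear fit of, say, slip velocity against shear stress could look like the following; the noise level, prior width, and linear form of the slip law are all illustrative assumptions:

```python
import numpy as np

def bayesian_linear_fit(x, y, sigma_noise=0.05, sigma_prior=10.0):
    """Posterior over (intercept, slope) for y ~ w0 + w1*x with Gaussian
    noise and an isotropic Gaussian prior (conjugate, closed form)."""
    X = np.column_stack([np.ones_like(x), x])
    precision = X.T @ X / sigma_noise**2 + np.eye(2) / sigma_prior**2
    cov = np.linalg.inv(precision)
    mean = cov @ X.T @ y / sigma_noise**2
    return mean, cov

def predictive_std(x_query, cov, sigma_noise=0.05):
    """Predictive uncertainty at query points; large values flag regions
    where running a new micro-simulation is most informative."""
    Xq = np.column_stack([np.ones_like(x_query), x_query])
    return np.sqrt(np.einsum("ij,jk,ik->i", Xq, cov, Xq) + sigma_noise**2)
```

The predictive standard deviation plays the role of the uncertainties mentioned above: new molecular-dynamics runs would be placed where it is largest, keeping the number of expensive micro-simulations small.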

Keywords: multi-scale, microfluidics, micro-channel, hybrid approach, coupling

Procedia PDF Downloads 153
291 DeepNIC: A Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs

Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.

Abstract:

Introduction: Deep Learning (DL) is a very powerful tool for analyzing image data, but for tabular data it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Will DL become the definitive tool for data classification? Current solutions all consist in repositioning the variables in a 2x2 matrix using their correlation proximity, thereby obtaining an image whose pixels are the variables. We implement a technology, DeepNIC, that offers the possibility of obtaining an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary and atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision tree, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which departs from Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in three dimensions: performance, complexity and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on 2 super-parameters used in the Neurops. By varying these 2 super-parameters, we obtain a 2x2 matrix of probabilities for each NIC. We can combine the 10 NICs with the functions AND, OR, and XOR. The total number of combinations is greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels.
The intensity of the pixels is proportional to the probability of the associated NIC, and the color depends on the associated NIC. This image actually contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the public GSE22513 data (an omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison across several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata.
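The probability-to-grey-level step could be sketched as follows (a minimal illustration under assumed tile and grid sizes, not the published DeepNIC code):

```python
import numpy as np

def nic_to_tile(nic_probs, tile=32):
    """Map a grid of NIC probabilities (values in [0, 1]) to 8-bit grey
    levels, expanding each cell into a tile x tile patch so that a small
    probability grid becomes an image a basic CNN can consume."""
    grey = np.clip(np.asarray(nic_probs, dtype=float), 0.0, 1.0) * 255.0
    return np.kron(grey, np.ones((tile, tile))).astype(np.uint8)
```

Per-variable tiles for the 10 NICs, and for their AND/OR/XOR combinations, would then be assembled side by side into the final per-variable image described above.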

Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification

Procedia PDF Downloads 96
290 Spexin and Fetuin A in Morbid Obese Children

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Spexin, expressed in the central nervous system, has attracted much interest for its roles in feeding behavior, obesity, diabetes, energy metabolism and cardiovascular function. Fetuin A is known as a negative acute-phase reactant synthesized in the liver and has so far been a major concern of many studies in numerous clinical states. The relationships between the concentrations of spexin as well as fetuin A and the risk for cardiovascular diseases (CVDs) have also been investigated. Eosinophils, suggested to be associated with the development of CVDs, are introduced as early indicators of cardiometabolic complications. Patients with an elevated platelet count, associated with a hypercoagulable state in the body, are also more liable to CVDs. In this study, the aim is to examine the profiles of spexin and fetuin A alongside the variations detected in eosinophil as well as platelet counts in morbidly obese children. Thirty-four children with normal body mass index (N-BMI) and fifty-one morbidly obese (MO) children participated in the study. Written informed consent forms were obtained prior to the study, and the institutional ethics committee approved the study protocol. Age- and sex-adjusted BMI percentile tables prepared by the World Health Organization were used to classify healthy and obese children. The mean ages ± SEM of the children were 9.3 ± 0.6 years and 10.7 ± 0.5 years in the N-BMI and MO groups, respectively. Anthropometric measurements of the children were taken, and body mass index values were calculated from weight and height. Blood samples were obtained after an overnight fast. Routine hematologic and biochemical tests were performed; within this context, fasting blood glucose (FBG), insulin (INS), triglyceride (TRG) and high density lipoprotein-cholesterol (HDL-C) concentrations were measured. Homeostatic model assessment for insulin resistance (HOMA-IR) values were calculated. Spexin and fetuin A levels were determined by enzyme-linked immunosorbent assay.
Data were evaluated statistically. Statistically significant differences were found between groups in terms of BMI, fat mass index, INS, HOMA-IR and HDL-C; in the MO group, all of these parameters increased as HDL-C decreased. Elevated counts were detected in the MO group for eosinophils (p<0.05) and platelets (p>0.05). Fetuin A levels decreased in the MO group (p>0.05), whereas the decrease in spexin levels in this group was statistically significant (p<0.05). In conclusion, these results suggest that increases in eosinophils and platelets behave as cardiovascular risk factors. Decreased fetuin A behaved as a risk factor consistent with the increased risk for cardiovascular problems associated with the severity of obesity. Along with increased eosinophils, increased platelets and decreased fetuin A, decreased spexin was the parameter that best reflects its possible participation in the early development of CVD risk in MO children.
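The HOMA-IR values mentioned above follow the standard formula (a well-known relation, not specific to this study): fasting glucose (mg/dL) x fasting insulin (µU/mL) / 405, equivalent to glucose in mmol/L x insulin / 22.5. A one-line sketch:

```python
def homa_ir(fbg_mg_dl, insulin_uU_ml):
    """Homeostatic model assessment of insulin resistance:
    HOMA-IR = fasting glucose (mg/dL) * fasting insulin (uU/mL) / 405."""
    return fbg_mg_dl * insulin_uU_ml / 405.0

# e.g. FBG of 90 mg/dL with fasting insulin of 9 uU/mL
print(homa_ir(90, 9))  # -> 2.0
```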

Keywords: cardiovascular diseases, eosinophils, fetuin A, pediatric morbid obesity, platelets, spexin

Procedia PDF Downloads 176
289 Emerging Issues for Global Impact of Foreign Institutional Investors (FII) on Indian Economy

Authors: Kamlesh Shashikant Dave

Abstract:

The global financial crisis is rooted in the sub-prime crisis in the U.S.A. During the boom years, mortgage brokers, attracted by big commissions, encouraged buyers with poor credit to accept housing mortgages with little or no down payment and without credit checks. A combination of low interest rates and a large inflow of foreign funds during the boom years helped the banks create easy credit conditions for many years. Banks lent money on the assumption that housing prices would continue to rise, and the real estate bubble encouraged the demand for houses as financial assets. Banks and financial institutions later repackaged these debts with other high-risk debts and sold them to worldwide investors, creating financial instruments called collateralized debt obligations (CDOs). With the rise in interest rates, mortgage payments rose and defaults among the subprime category of borrowers increased accordingly. Through the securitization of mortgage payments, a recession that developed in the housing sector was transmitted to the entire US economy and the rest of the world. The financial credit crisis moved the US and the global economy into recession. The Indian economy was also affected by the spillover effects of the global financial crisis. A strong saving habit among its people, strong fundamentals, and a strong, conservative regulatory regime saved the Indian economy from going out of gear, though significant parts of the economy slowed down. Industrial activity, particularly in the manufacturing and infrastructure sectors, decelerated. The service sector, too, slowed in the construction, transport, trade, communication, and hotels and restaurants sub-sectors. The financial crisis also had some adverse impact on the IT sector. Exports declined in absolute terms in October. Higher input costs and dampened demand dented corporate margins, while the uncertainty surrounding the crisis affected business confidence.
To summarize, reckless subprime lending, the loose monetary policy of the US, the expansion of financial derivatives beyond acceptable norms, and the greed of Wall Street have led to this exceptional global financial and economic crisis. Thus, the global credit crisis of 2008 highlights the need to redesign both the global and domestic financial regulatory systems, not only to properly address systemic risk but also to support their proper functioning (i.e., financial stability). Such a design requires: 1) well-managed financial institutions with effective corporate governance and risk management systems; 2) disclosure requirements sufficient to support market discipline; 3) proper mechanisms for resolving problem institutions; and 4) mechanisms to protect financial services consumers in the event of financial institution failure.

Keywords: FIIs, BSE, sensex, global impact

Procedia PDF Downloads 429
288 Starting the Hospitalization Procedure with a Medicine Combination in the Cardiovascular Department of the Imam Reza (AS) Mashhad Hospital

Authors: Maryamsadat Habibi

Abstract:

Objective: Pharmaceutical errors are avoidable occurrences that can result in inappropriate pharmaceutical use, patient harm, treatment failure, increased hospital costs and length of stay, and other outcomes that affect both the individual receiving treatment and the healthcare provider. This study aimed to perform a reconciliation of medications in the cardiovascular ward of Imam Reza Hospital in Mashhad, Iran, and to evaluate the prevalence of medication discrepancies between the best medication list created for the patient by the pharmacist and the medication order of the treating physician there. Materials & Methods: A cross-sectional study of 97 patients in the cardiovascular ward of the Imam Reza Hospital in Mashhad was conducted from June to September of 2021. After giving their informed consent and being admitted to the ward, all patients with at least one underlying condition and at least two medications being taken at home were included in the study. A medication reconciliation form was used to record patient demographics and medication histories during the first 24 hours of admission, and the information was compared with the doctors' orders. The physician then identified medication discrepancies between the two lists and double-checked them to separate the intentional from the accidental ones. Finally, using SPSS software version 22, the prevalence of medication discrepancies and the relationships between the different sorts of discrepancies and various variables were determined. Results: The average age of the participants in this study was 57.69 ± 15.84 years, with 57.7% men and 42.3% women. Of these patients, 95.9% encountered at least one medication discrepancy, and 58.9% suffered at least one unintentional drug cessation.
Out of the 659 medications registered in the study, 399 (60.54%) had discrepancies, of which 161 (40.35%) involved the intentional stopping of a medication, 123 (30.82%) involved the unintentional stopping of a medication, and 115 (28.82%) involved the continued use of a medication with a dose adjustment. Additionally, the cardiovascular and gastrointestinal medication categories were found to have the highest discrepancy rates in the current study. Furthermore, there was no correlation between the frequency of medication discrepancies and age (P=0.13), ward (P=0.61), date of visit (P=0.72), or type (P=0.82) and number (P=0.44) of underlying diseases. On the other hand, there was a statistically significant correlation between the prevalence of medication discrepancies and both the number of medications taken at home (P=0.037) and gender (P=0.029). The results of this study revealed that 96% of patients admitted to the cardiovascular unit at Imam Reza Hospital had at least one medication discrepancy, most commonly an intentional drug discontinuation. According to the study's findings, when the medication reconciliation method is used, there is great potential for identifying and correcting various medication discrepancies and for avoiding prescription errors among patients admitted to Imam Reza Hospital's cardiovascular ward. As a result, it is essential to carry out a precise assessment to achieve the best treatment outcomes and avoid unintended medication discontinuation, unwanted drug-related events, and drug interactions between the patient's home medications and those prescribed in the hospital.

Keywords: drug combination, drug side effects, drug incompatibility, cardiovascular department

Procedia PDF Downloads 67
287 Hydrological-Economic Modeling of Two Hydrographic Basins of the Coast of Peru

Authors: Julio Jesus Salazar, Manuel Andres Jesus De Lama

Abstract:

There are very few models that serve to analyze the use of water in the socio-economic process. On the supply side, the joint use of groundwater has been considered in addition to the simple limits on the availability of surface water. In addition, we have worked on waterlogging and the effects on water quality (mainly salinity). In this paper, a 'complex' water economy is examined: one in which demands grow differentially not only within but also between sectors, and one in which there are limited opportunities to increase consumptive use. In particular, the growth of high-value irrigated crop production within the case-study basins, together with rapidly growing urban areas, provides a rich context in which to examine the general problem of water management at the basin level. At the same time, long-term aridity has made the eco-environment of the basins located on the coast of Peru very vulnerable, and the exploitation and immediate use of water resources have further deteriorated the situation. The methodology presented is optimization with embedded simulation: basin-wide simulation of flows, water balances, and crop growth is embedded within the optimization of water allocation, reservoir operation, and irrigation scheduling. The modeling framework is developed from a network of river basins that includes multiple supply nodes (reservoirs, aquifers, water courses, etc.) and multiple demand sites along the river, including places of consumptive use for agricultural, municipal, and industrial purposes and uses of running water on the coast of Peru. The economic benefits associated with water use are evaluated for different demand management instruments, including water rights, based on the production and benefit functions of water use in the urban, agricultural, and industrial sectors.
This work represents a new effort to analyze the use of water at the regional level and to evaluate the modernization of the integrated management of water resources and socio-economic territorial development in Peru. It will also allow the establishment of policies to improve the implementation of integrated water resources management and development. Input-output analysis is essential here, as it presents a theory of the production process based on a particular type of production function. This work also presents a Computable General Equilibrium (CGE) version of the economic model for water resource policy analysis, specifically designed for analyzing large-scale water management. As the platform for CGE simulation, GEMPACK, a flexible system for solving CGE models, is used to formulate and solve the model through the percentage-change approach. GEMPACK automates the process of translating the model specification into a model solution program.
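The allocation logic at the heart of such a model can be illustrated with a deliberately simplified toy, not the authors' optimization: when per-unit benefits are linear and each demand site has a fixed cap, serving the highest-value demands first is the optimal allocation of a scarce supply. Site names and values below are invented for illustration.

```python
def allocate_water(supply, demands):
    """Greedy allocation of a limited water supply across demand sites.

    `demands` maps a site name to (max_demand, benefit_per_unit). With
    linear benefits and simple caps, serving sites in decreasing order
    of economic value per unit maximizes total benefit.
    """
    alloc = {}
    remaining = supply
    for site, (cap, value) in sorted(demands.items(), key=lambda kv: -kv[1][1]):
        alloc[site] = min(cap, remaining)  # give the site what it wants, if any is left
        remaining -= alloc[site]
    return alloc

demands = {"urban": (40, 5.0), "agriculture": (100, 1.5), "industry": (30, 3.0)}
print(allocate_water(80, demands))  # {'urban': 40, 'industry': 30, 'agriculture': 10}
```

The actual model described above is far richer: allocations interact with reservoir operation, groundwater, salinity, and the CGE production structure, so the full problem requires the embedded-simulation approach rather than a one-pass rule.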

Keywords: water economy, simulation, modeling, integration

Procedia PDF Downloads 134
286 Clinical Manifestations, Pathogenesis and Medical Treatment of Stroke Caused by Basic Mitochondrial Abnormalities (Mitochondrial Encephalopathy, Lactic Acidosis, and Stroke-like Episodes, MELAS)

Authors: Wu Liching

Abstract:

Aim: This case aims to discuss the pathogenesis, clinical manifestations, and medical treatment of strokes caused by mitochondrial gene mutations. Methods: Ischemic stroke caused by a mitochondrial gene defect was diagnosed by means of next-generation sequencing for mitochondrial DNA gene variants, imaging examination, neurological examination, and medical history; this study took cases diagnosed with acute cerebral infarction in the neurology ward of a medical center in northern Taiwan as its research subjects. Result: This case is a 49-year-old married woman with a rare disease, a mitochondrial gene mutation inducing ischemic stroke. She has severe hearing impairment requiring hearing aids and a history of diabetes. During the patient's hospitalization, blood tests showed a serum lactate of 7.72 mmol/L and a CSF lactate of 5.9 mmol/L. Through the collection of relevant medical history, neurological evaluation showed changes in consciousness and cognition and slow language expression, and brain magnetic resonance imaging showed subacute bilateral temporal lobe infarction, an atypical pattern of stroke. The mitochondrial DNA carries the known pathogenic variant m.3243A>G, at a heteroplasmy level of 24.6%. This variant is recorded in MITOMAP as a pathogenic locus for Mitochondrial Encephalopathy, Lactic Acidosis, and Stroke-like episodes (MELAS), Leigh Syndrome, and other diseases, and is recorded in ClinVar as Pathogenic (dbSNP: rs199474657); the patient was therefore diagnosed as a case of stroke caused by a rare mitochondrial gene mutation. After medical treatment, there were no further seizures during hospitalization. After interventional rehabilitation, the patient's limb weakness, poor language function, and cognitive impairment all improved significantly.
Conclusion: Mitochondrial disorders can also be associated with abnormalities in psychological, neurological, cerebral cortical, and autonomic functions, as well as with internal medical diseases. The differential diagnoses therefore cover a wide range and are not easy to make. After neurological evaluation, medical history collection, imaging, and rare-disease serological examination, atypical ischemic stroke caused by a rare mitochondrial gene mutation was diagnosed. We hope that through this case, the diagnosis of cerebral infarction caused by rare mitochondrial gene variants will become more familiar to clinical medical staff, and that this case report may help improve the clinical diagnosis and treatment of patients with similar clinical symptoms in the future.

Keywords: acute stroke, MELAS, lactic acidosis, mitochondrial disorders

Procedia PDF Downloads 53
285 Marginalisation of an Age Old Culture. The Case of Female Cultural Initiation in Some South African Cultural Groups

Authors: Lesibana Rafapa

Abstract:

Accounts exist of circumcision-anchored cultural initiation in central, East, Southern, North, and West Africa, straddling states like Botswana, Kenya, Lesotho, Malawi, Senegal, South Africa, Zambia, and Zimbabwe. This attests to the continent-wide spread of this cultural practice. In this paper, the writer relates the cultural aspect of circumcision-subsuming initiation among black African cultural groups across the continent to the notion that African cultures are varied yet subscribe to a common central concept. The premise of the paper is that the common practice of initiation, in which both male and female children are initiated by adults into the traditions and customs of a people, coincides with such a central concept. The practice of traditional initiation is broad enough to encompass aspects of spirituality, morality, and social organisation, in the nature of the central concept of which it is a trans-sectional part. Cultural initiation, sometimes referred to as traditional circumcision, constitutes culture-determined rites of passage for the initiates. The study's aim, the findings of which are presented in this paper, was to probe gender equality in the development and promotion of the cultural practice of initiation. The researcher intended to demonstrate how, in South Africa, female circumcision is treated equally or marginalised in the efforts of the democratic government to regulate and strengthen the practice of circumcision as part of its broader liberation programme, meant to reverse the politico-cultural bondage experienced during the apartheid rule that the present black government helped bring to an end. It is argued that the failure to regard female circumcision as equal to its male counterpart is a travesty of the black government's legislation and policies espousing equality and the protection and empowerment of vulnerable and previously marginalised population groups, which include black women.
The writer did a desktop study of the history and characteristics of female circumcision among the black Northern Sotho, VaTsonga, and VhaVenda cultural groups of the Limpopo Province, stretching north to South Africa's border with Zimbabwe, as well as of the literature on how political and other authorities exert efforts to preserve and empower the practice. The findings were that male initiation is foregrounded and totalised to represent the practice of initiation as a whole, at the expense of its female counterpart, which faces marginalisation and unequal regard. It is outlined in this paper how such impoverishment of an otherwise woman-empowering cultural practice deprives black cultures that suffered brutal repression during apartheid of a fuller recovery much needed in the democratic era. The writer applies some aspects of postcolonial theory and some tropes of feminism in discussing the uneven status of cultural circumcision at the hands of the present-day powers that be.

Keywords: African cultures, female circumcision, gender equality, women empowerment

Procedia PDF Downloads 140
284 Introducing Data-Driven Learning into Chinese Higher Education English for Academic Purposes Writing Instructional Settings

Authors: Jingwen Ou

Abstract:

Writing for academic purposes in a second or foreign language is one of the most important and most demanding skills to be mastered by non-native speakers. Traditionally, EAP writing instruction at the tertiary level encompasses the teaching of academic genre knowledge, more specifically, the disciplinary writing conventions, the rhetorical functions, and specific linguistic features. However, one of the main sources of challenge in English academic writing for L2 students at the tertiary level is still proficiency in academic discourse, especially vocabulary, academic register, and organization. Data-Driven Learning (DDL) is defined as “a pedagogical approach featuring direct learner engagement with corpus data”. Over the past two decades, the application of the DDL approach to EAP writing teaching has risen notably in popularity. Such a combination has not only transformed traditional pedagogy, aided by published DDL guidebooks for classroom use, but has also triggered global research on corpus use in EAP classrooms. This study delineates a systematic review of research at the intersection of DDL and EAP writing instruction, covering both indirect and direct DDL practice in EAP writing instructional settings in China. Furthermore, the review synthesizes significant findings from prior investigations of Chinese university students' perception of DDL and its subsequent impact on their academic writing performance following corpus-based training. Research papers were selected from Scopus-indexed journals and core journals from two main Chinese academic databases (CNKI and Wanfang), published in both English and Chinese over the last ten years, based on keyword searches.
Results indicated an insufficiency of empirical DDL research, despite a noticeable upward trend in corpus research on discourse analysis and in indirect corpus applications for material design by language teachers. Research on the direct use of corpora and corpus tools in DDL, particularly in combination with genre-based EAP teaching, remains a relatively small fraction of the body of research in Chinese higher education settings. Such scarcity is closely related to the prevailing absence of systematic training in English academic writing registers within most Chinese universities' EAP syllabi under the Chinese English-Medium Instruction policy, where only English-major students are required to submit English dissertations. Findings also revealed that Chinese learners still held mixed attitudes towards corpus tools, influenced by learner differences, limited access to language corpora, and insufficient pre-training in corpus theoretical concepts, despite improvements in their final academic writing performance.
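Direct DDL, as defined above, typically puts concordance lines in front of learners so they can induce usage patterns themselves. A minimal keyword-in-context (KWIC) sketch, with a two-sentence "corpus" and query invented for illustration, could look like:

```python
def kwic(corpus: str, keyword: str, width: int = 30):
    """Return keyword-in-context concordance lines for a query word."""
    lines = []
    text = corpus.lower()
    start = 0
    while True:
        i = text.find(keyword.lower(), start)
        if i == -1:
            break
        left = corpus[max(0, i - width):i]
        right = corpus[i + len(keyword):i + len(keyword) + width]
        # Right-align the left context so the keyword column lines up.
        lines.append(f"{left:>{width}} [{corpus[i:i + len(keyword)]}] {right}")
        start = i + len(keyword)
    return lines

corpus = ("The results suggest that the model is robust. "
          "These results indicate a significant effect.")
for line in kwic(corpus, "results"):
    print(line)
```

Real DDL classrooms would query a large reference corpus through a concordancer rather than a string, but the aligned-keyword display is the same artifact learners work with.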

Keywords: corpus linguistics, data-driven learning, EAP, tertiary education in China

Procedia PDF Downloads 34
283 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees

Authors: Alexandru-Ion Marinescu

Abstract:

There exists a plethora of methods in the scientific literature that tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc., and must output a binary response variable (i.e. “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence among financial institution databases, with the majority classified as “GOOD” clients (clients that respect the loan return calendar) alongside a small percentage of “BAD” clients. But it is the “BAD” clients we are interested in, since accurately predicting their behavior is crucial in preventing unwanted losses for loan providers. We add to this whole context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism, LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the title implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal.
A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of eight, among which we mention the well-known Australian and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under the curve, partial Gini index, H-measure, Brier score, and Kolmogorov-Smirnov statistic. Finally, we obtain encouraging results, which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
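The representation described in the abstract (operator/function interior nodes, constant or variable leaves, pre-order flattening for the genetic operators) can be sketched outside C#. The following is a minimal Python analogue, not the authors' implementation, with a hypothetical evolved formula over two client properties:

```python
import math

# Interior nodes are operators or mathematical functions;
# leaves are constants or client-property variables.
OPS = {"add": lambda a, b: a + b,
       "mul": lambda a, b: a * b,
       "sin": lambda a: math.sin(a)}

def evaluate(node, env):
    """Recursively evaluate an expression tree given variable bindings."""
    if isinstance(node, tuple):
        op, *children = node
        return OPS[op](*(evaluate(c, env) for c in children))
    return env.get(node, node)  # a variable name, or a numeric constant

def flatten(node, out=None):
    """Pre-order traversal: the flat list that mutation/crossover act on."""
    if out is None:
        out = []
    if isinstance(node, tuple):
        out.append(node[0])
        for child in node[1:]:
            flatten(child, out)
    else:
        out.append(node)
    return out

# Hypothetical evolved formula: score = age * 0.1 + loan_duration
tree = ("add", ("mul", "age", 0.1), "loan_duration")
print(round(evaluate(tree, {"age": 30, "loan_duration": 12}), 2))  # 15.0
print(flatten(tree))  # ['add', 'mul', 'age', 0.1, 'loan_duration']
```

A LINQ expression tree adds the crucial extra step of compiling such a tree to executable code at runtime; the Python sketch only interprets it, but the tree shape and the flattened form the genetic operators manipulate are analogous.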

Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution

Procedia PDF Downloads 103
282 The Architectural Conservation and Restoration Problems of Mevlevihanes

Authors: Zeynep Tanrıverdi, Ş. Barihüda Tanrıkorur

Abstract:

Mevlevihanes are the dervish lodges of the Mevlevi Sufi Order, which was founded on the teachings of Mevlâna Jalaluddin Rumi (1207-1273) in the late 13th century in the Anatolian city of Konya, from which it was administered until 1925, when its activities, together with those of all other sufi dervish orders, were legally prohibited after the founding of the Turkish Republic. At their closure in 1925, over 150 mevlevihane architectural complexes, which had functioned for over 600 years through the late Seljuk, Emirates, and Ottoman periods of Turkish history, were to be found in the geographic areas once occupied by the Ottoman Empire. Unfortunately, because of this history of prohibition and closure, the public developed confused, negative reactions towards sufi dervish orders, and within the Turkish Republic their buildings occupied a nebulous political status: their upkeep and restoration were neglected, and they were used for different, inappropriate functions or abandoned, until a more socially objective, educated viewpoint developed in the late 1970s and 80s. The declaration by UNESCO of the Mevlevi Ayin-i Şerifi (the Ritual Whirling Ceremony of the Mevlevi Dervish Order), with its complex composed music and sema (whirling movements) performance, as a Masterpiece of the Intangible Heritage of Humanity in 2005, and of 2007 as the International Year of Mevlâna, started an increase in studies about mevlevihanes and a wave of restorations, especially of their semahanes (the large assembly halls where the Mevlevi Ritual Whirling Ceremony was performed). However, due to inadequacies in legal procedures, socio-cultural changes, economic incapacity, negative environmental factors, and faulty repair practices, the studies and applications for the protection of mevlevihanes have not reached the desired level.
Within this historical perspective, this study aims to reveal the particular architectural conservation and restoration problems of mevlevihanes and to propose solutions for them. Firstly, the categorization and components of mevlevihane architecture were evaluated through their historical process. Secondly, their basic architectural characteristics were explained. Thirdly, by examining recently restored examples like the Manisa, Edirne, Bursa, Tokat, Gelibolu, and Çankırı Mevlevihanes, using archival documents, old maps, drawings, photos, and reports, together with the building survey method, mevlevihane architectural conservation and restoration application problems were analyzed. Finally, solutions were proposed for the problems that threaten the proper restoration of mevlevihanes. It is hoped that this study will contribute to the preservation of mevlevihanes, which have played an important role in the architectural and cultural heritage of Turkey, and that their authentic values will be properly transmitted to future generations.

Keywords: conservation, cultural heritage, mevlevihane architecture, restoration

Procedia PDF Downloads 61
281 Stability of a Natural Weak Rock Slope under Rapid Water Drawdowns: Interaction between Guadalfeo Viaduct and Rules Reservoir, Granada, Spain

Authors: Sonia Bautista Carrascosa, Carlos Renedo Sanchez

Abstract:

The effect of a rapid drawdown is a classical scenario to be considered in slope stability under submerged conditions. This situation arises when totally or partially submerged slopes experience a descent of the external water level, and it is a typical verification in the dam engineering discipline, as reservoir water levels commonly fluctuate noticeably between seasons and for operational reasons. Although the scenario is well known and predictable in general, site conditions can increase the complexity of its assessment, and unexpected external factors can reduce the stability of, or even fail, a slope under a rapid drawdown. The present paper describes and discusses the interaction between two different infrastructures, a dam and a highway, and the impact of the rapid drawdown of the Rules Dam, in the province of Granada (south of Spain), on the stability of a natural rock slope overlaid by the north abutment of a viaduct of the A-44 Highway. In the year 2011, with both infrastructures, the A-44 Highway and the Rules Dam, already constructed, delivered, and under operation, movements started to be recorded in the approach embankment and north abutment of the Guadalfeo Viaduct, which carries the highway across the tail of the reservoir. The embankment and abutment were founded on a low-angle natural rock slope formed by grey graphitic phyllites, distinctly weathered and intensely fractured, with pre-existing faults and weak planes. After the first filling of the reservoir to a relative level of 243 m, three consecutive drawdowns were recorded in the autumns of 2010, 2011, and 2012, to relative levels of 234 m, 232 m, and 225 m.
To understand the effect of these drawdowns on the strength and stability of the weak rock mass, a new geological model was developed after reviewing all the available ground investigations, updating the geological mapping of the area, and supplementing it with additional geotechnical and geophysical investigations. Together with all this information, rainfall and reservoir level evolution data have been reviewed in detail and incorporated into the monitoring interpretation. The analysis of the monitoring data and the new geological and geotechnical interpretation, supported by the limit equilibrium software Slide2, concludes that the movement follows the same direction as the schistosity of the phyllitic rock mass, coincident as well with the direction of the natural slope, indicating a deep-seated movement of the whole slope towards the reservoir. As part of these conclusions, the solutions considered to reinstate the highway infrastructure to the required factor of safety (FoS) are described, and the geomechanical characterization of these weak rocks is discussed, together with the influence of water level variations, not only on the water pressure regime but also on geotechnical behavior, through the modification of strength and deformability parameters.
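The mechanism behind the drawdown sensitivity can be illustrated with a much simpler model than the Slide2 analyses cited above: in an infinite-slope limit-equilibrium check, pore pressure that persists in the slope after the external water level drops reduces the effective normal stress, and with it the factor of safety. The parameters below are assumed for illustration only and are not taken from the case study.

```python
import math

def infinite_slope_fos(c, phi_deg, gamma, z, beta_deg, ru):
    """Infinite-slope factor of safety with pore-pressure ratio ru = u/(gamma*z).

    FoS = [c' + (gamma*z*cos^2(beta) - u)*tan(phi')] / [gamma*z*sin(beta)*cos(beta)]
    with c' in kPa, gamma in kN/m3, failure-plane depth z in m, slope angle beta.
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    u = ru * gamma * z  # pore pressure on the failure plane
    resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Assumed weak-phyllite-like parameters: c' = 10 kPa, phi' = 25 deg,
# gamma = 22 kN/m3, plane at 10 m depth, slope dipping 20 deg.
print(round(infinite_slope_fos(10, 25, 22, 10, 20, ru=0.0), 2))  # 1.42 (drained)
print(round(infinite_slope_fos(10, 25, 22, 10, 20, ru=0.4), 2))  # 0.84 (high pore pressure)
```

The drop from a stable to an unstable FoS when ru is raised mirrors, in cartoon form, why a rapid drawdown (external support removed, internal pore pressures not yet dissipated) is the critical scenario for this slope.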

Keywords: monitoring, rock slope stability, water drawdown, weak rock

Procedia PDF Downloads 149
280 Impact of Ethiopia's Productive Safety Net Program on Household Dietary Diversity and Child Nutrition in Rural Ethiopia

Authors: Tagel Gebrehiwot, Carolina Castilla

Abstract:

Food insecurity and child malnutrition are among the most critical issues in Ethiopia. Accordingly, different reform programs have been carried out to improve household food security. The Food Security Program (FSP), among others, was introduced to combat the persistent food insecurity problem in the country. The FSP includes a safety net component called the Productive Safety Net Program (PSNP), started in 2005. The goal of the PSNP is to offer multi-annual transfers, such as food, cash, or a combination of both, to chronically food insecure households to break the cycle of food aid. Food or cash transfers are the main elements of the PSNP. The case for cash transfers builds on Sen's analysis of 'entitlement to food', where he argues that restoring access to food by improving demand is a more effective and sustainable response to food insecurity than food aid. Cash-based schemes offer a greater choice in the use of the transfer and can allow a greater diversity of food choices. It has been shown that dietary diversity is positively associated with the key pillars of food security; thus, dietary diversity is considered a measure of a household's capacity to access a variety of food groups. Studies of dietary diversity among Ethiopian rural households are somewhat rare, and there is still a dearth of evidence on the impact of the PSNP on household dietary diversity. In this paper, we examine the impact of Ethiopia's PSNP on household dietary diversity and child nutrition using panel household surveys. We employed different methodologies for identification, exploiting the exogenous increase in kebeles' PSNP budgets to identify the effect of the change in the amount households received in transfers between 2012 and 2014 on the change in dietary diversity. We use three different approaches to identify this effect: two-stage least squares, reduced-form IV, and generalized propensity score matching using a continuous treatment.
The results indicate that the increase in PSNP transfers between 2012 and 2014 had no effect on household dietary diversity. Estimates for different household dietary indicators reveal that the effect of the change in the cash transfer received by the household is statistically and economically insignificant. This finding is robust to different identification strategies and to the inclusion of control variables that determine eligibility to become a PSNP beneficiary. To identify the effect of PSNP participation on children's height-for-age and stunting, we use a difference-in-differences approach. We use children between 2 and 5 years of age in 2012 as a baseline because by then long-term growth failure has already set in. The treatment group comprises children ages 2 to 5 in 2014 in PSNP participant households. While changes in height-for-age take time, two years of additional transfers among children who were not yet born, or under the age of 2-3, in 2012 have the potential to make a considerable impact on reducing the prevalence of stunting. The results indicate that participation in the PSNP had no effect on child nutrition measured as height-for-age or the probability of being stunted, suggesting that the PSNP should be designed in a more nutrition-sensitive way.
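The comparison underlying the child-nutrition analysis is the canonical 2x2 difference-in-differences: the change over time in the treated group, net of the change in the comparison group. A minimal sketch of that estimator follows; the height-for-age z-score values are purely illustrative inputs, not the survey data.

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Canonical 2x2 difference-in-differences estimate from group means.

    DiD = (treated post - treated pre) - (control post - control pre),
    which nets out any common time trend shared by both groups.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Illustrative height-for-age z-scores for the four cells:
haz_treat_2012 = [-2.1, -1.8, -2.4]
haz_treat_2014 = [-1.9, -1.7, -2.2]
haz_ctrl_2012 = [-1.6, -1.4, -1.5]
haz_ctrl_2014 = [-1.4, -1.2, -1.3]
print(round(diff_in_diff(haz_treat_2012, haz_treat_2014,
                         haz_ctrl_2012, haz_ctrl_2014), 2))  # -0.03
```

In the study itself, the estimator is of course run in a regression framework with controls and appropriate standard errors; an estimate near zero, as in this toy, is the kind of result the abstract reports.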

Keywords: continuous treatment, dietary diversity, impact, nutrition security

Procedia PDF Downloads 313