181 Maternal and Neonatal Outcome Analysis in Preterm Abdominal Delivery Underwent Umbilical Cord Milking Compared to Early Cord Clamping
Authors: Herlangga Pramaditya, Agus Sulistyono, Risa Etika, Budiono Budiono, Alvin Saputra
Abstract:
Preterm birth and anemia of prematurity are among the most common causes of morbidity and mortality in neonates, and anemia of preterm neonates has become a major issue. The timing of umbilical cord clamping after a baby is born determines the amount of blood transferred from the placenta to the fetus. Delayed Cord Clamping (DCC) has been proven to prevent anemia in neonates, but its use is constrained by concern about delaying neonatal resuscitation. Umbilical Cord Milking (UCM) could be an alternative cord-clamping method because it actively transfers blood from the placenta to the fetus. The aim of this study was to analyze the difference in maternal and neonatal outcomes between preterm abdominal deliveries that underwent UCM and those that underwent Early Cord Clamping (ECC). This was an experimental study with a randomized post-test-only control design. Maternal and neonatal outcomes were analyzed, with P < 0.05 considered significant; statistical comparison was carried out using the paired-samples t-test (α two-tailed, 0.05). The mean preoperative maternal hemoglobin in the UCM group compared to ECC was 10.9 ± 0.9 g/dL vs 10.4 ± 0.9 g/dL, and postoperative 11.1 ± 1.1 g/dL vs 10.5 ± 0.7 g/dL; the delta was 0.2 ± 0.7 vs 0.1 ± 0.6, showing no significant difference (P = 0.395 vs 0.627). The mean third-stage labor duration in the UCM group vs ECC was 20.5 ± 3.5 seconds vs 21.1 ± 3.3 seconds, an insignificant difference (P = 0.634). The amount of bleeding after delivery in the UCM group compared to ECC had a median of 190 cc (100-280 cc) vs 210 cc (150-330 cc), an insignificant difference (P = 0.083), and no incidence of post-partum bleeding was found. The mean neonatal hemoglobin, hematocrit and erythrocyte values in the UCM group compared to ECC were 19.3 ± 0.7 vs 15.9 ± 0.8 g/dL, 57.1 ± 3.6% vs 47.2 ± 2.8%, and 5.4 ± 0.4 vs 4.5 ± 0.3 (×10⁶/µL), showing significant differences (P < 0.0001).
No baby in the UCM group received a blood transfusion, while one baby in the ECC control group did. Umbilical Cord Milking was shown to increase the baby's blood components, such as hemoglobin, hematocrit and erythrocytes, 6 hours after birth, as well as to lower the incidence of blood transfusion. No maternal or neonatal morbidity was found. Umbilical Cord Milking is a cord-clamping practice that is more beneficial to the baby and has no adverse or negative effects on the mother.
Keywords: umbilical cord milking, early cord clamping, maternal and neonatal outcome, preterm, abdominal delivery
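The paired-samples t-test used for these comparisons can be sketched in a few lines; the hemoglobin values below are illustrative placeholders, not the study's data, and the critical value 2.262 is the two-tailed t threshold for α = 0.05 with 9 degrees of freedom.

```python
import math

def paired_t(before, after):
    """Paired-samples t statistic for matched measurements."""
    assert len(before) == len(after)
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample standard deviation of the differences (n - 1 denominator)
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return mean_d / (sd_d / math.sqrt(n))

# Illustrative pre/post maternal hemoglobin values (g/dL), NOT the study's data
pre  = [10.2, 10.8, 11.0, 10.5, 10.9, 10.4, 10.7, 11.1, 10.3, 10.6]
post = [10.4, 10.9, 11.2, 10.6, 11.0, 10.5, 10.8, 11.3, 10.4, 10.7]

t = paired_t(pre, post)
T_CRIT = 2.262  # two-tailed critical value, alpha = 0.05, df = 9 (from a t table)
print(f"t = {t:.2f}, significant: {abs(t) > T_CRIT}")
```

The abstract's non-significant maternal results correspond to |t| falling below the critical value for the study's sample size.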
Procedia PDF Downloads 241
180 Effects of Vegetable Oils Supplementation on in Vitro Rumen Fermentation and Methane Production in Buffaloes
Authors: Avijit Dey, Shyam S. Paul, Satbir S. Dahiya, Balbir S. Punia, Luciano A. Gonzalez
Abstract:
Methane emitted from ruminant livestock not only reduces the efficiency of feed energy utilization but also contributes to global warming. Vegetable oils, a source of polyunsaturated fatty acids, have the potential to reduce methane production and increase conjugated linoleic acid in the rumen. However, the characteristics of the oils, the level of inclusion and the composition of the basal diet influence their efficacy. Therefore, this study aimed to investigate the effects of sunflower (SFL) and cottonseed (CSL) oils on methanogenesis, volatile fatty acid composition and feed fermentation pattern by the in vitro gas production (IVGP) test. Four concentrations (0, 0.1, 0.2 and 0.4 ml/30 ml buffered rumen fluid) of each oil were used. Fresh rumen fluid was collected before morning feeding from two rumen-cannulated buffalo steers fed a mixed ration. In vitro incubation was carried out with sorghum hay (200 ± 5 mg) as substrate in 100 ml calibrated glass syringes following the standard IVGP protocol. After 24 h of incubation, gas production was recorded by displacement of the piston. Methane in the gas phase and volatile fatty acids in the fermentation medium were estimated by gas chromatography. Addition of oils resulted in an increase (p < 0.05) in total gas production and a decrease (p < 0.05) in methane production, irrespective of type and concentration. Although the increase in gas production was similar, methane production (ml/g DM) and its concentration (%) in headspace gas were lower (p < 0.01) for CSL than for SFL at corresponding doses. A linear decrease (p < 0.001) in DM degradability was evident with increasing doses of oils (0.2 ml onwards); these effects were more pronounced with SFL. Acetate production tended to decrease, while propionate and butyrate production increased (p < 0.05) with the addition of oils, irrespective of type and dose. The ratio of acetate to propionate was reduced (p < 0.01) with the addition of oils, but no difference between the oils was noted.
It is concluded that both oils can reduce methane production; however, feed degradability was also affected at higher doses. Cottonseed oil at a small dose (0.1 ml/30 ml buffered rumen fluid) exerted a greater inhibitory effect on methane production without impeding dry matter degradability. Further in vivo studies need to be carried out before practical application in animal rations.
Keywords: buffalo, methanogenesis, rumen fermentation, vegetable oils
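The two headline quantities of an IVGP run, methane as a share of head-space gas (and per gram of dry matter) and the acetate-to-propionate ratio, reduce to simple arithmetic. The readings below are invented for illustration, not the study's measurements.

```python
def methane_summary(total_gas_ml, ch4_ml, substrate_g_dm):
    """Express methane as a fraction of head-space gas and per gram of
    substrate dry matter, as read from a 24 h in vitro gas production run."""
    return {
        "ch4_percent": 100.0 * ch4_ml / total_gas_ml,
        "ch4_ml_per_g_dm": ch4_ml / substrate_g_dm,
    }

def acetate_propionate_ratio(acetate_mM, propionate_mM):
    """A:P ratio; a drop signals a shift toward propionate, which competes
    with methanogenesis for metabolic hydrogen."""
    return acetate_mM / propionate_mM

# Illustrative numbers (not the study's measurements): control vs oil-treated
control = methane_summary(total_gas_ml=40.0, ch4_ml=8.0, substrate_g_dm=0.2)
treated = methane_summary(total_gas_ml=42.0, ch4_ml=5.5, substrate_g_dm=0.2)
reduction = 100.0 * (1 - treated["ch4_ml_per_g_dm"] / control["ch4_ml_per_g_dm"])
print(f"CH4 reduction: {reduction:.1f}% (per g DM basis)")
print(f"A:P control {acetate_propionate_ratio(60, 20):.2f}, "
      f"treated {acetate_propionate_ratio(55, 25):.2f}")
```

Note how total gas can rise while methane falls, exactly the pattern the abstract reports.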
Procedia PDF Downloads 406
179 An Experimental Investigation of Chemical Enhanced Oil Recovery (CEOR) for Fractured Carbonate Reservoirs, Case Study: Kais Formation on Wakamuk Field
Authors: Jackson Andreas Theo Pola, Leksono Mucharam, Hari Oetomo, Budi Susanto, Wisnu Nugraha
Abstract:
About half of the world's oil reserves are located in carbonate reservoirs, where 65% of carbonate reservoirs are oil-wet and 12% intermediate-wet [1]. Oil recovery in oil-wet or mixed-wet carbonate reservoirs can be increased by dissolving surfactant in the injected water to change the rock wettability from oil-wet to more water-wet. The Wakamuk Field, operated by PetroChina International (Bermuda) Ltd. and PT. Pertamina EP in Papua, produces from the main reservoir of the Miocene Kais Limestone. First production commenced in August 2004, and the peak field production of 1456 BOPD occurred in August 2010. It was found to be a complex reservoir system, and until 2014 cumulative oil production was 2.07 MMBO, less than 9% of OOIP. This performance is indicative of the presence of secondary porosity, in addition to matrix porosity, which has a low average porosity of 13% and permeability of less than 7 mD. Implementing chemical EOR in this case is the best way to increase oil production. However, the selected chemical must be able to lower the interfacial tension (IFT), reduce oil viscosity, and alter the wettability; thus a special chemical treatment named SeMAR has been proposed. Numerous laboratory tests, such as phase behavior tests, core compatibility tests, mixture viscosity, contact angle measurement, IFT, imbibition tests and core flooding, were conducted on Wakamuk field samples. Based on the spontaneous imbibition results for Wakamuk field core, the SeMAR formulation with composition S12A gave an oil recovery of 43.94% at 1 wt% concentration and a maximum oil recovery of 87.3% at 3 wt% concentration. In addition, the first core flooding scenario gave an oil recovery of 60.32% at 1 wt% S12A, and the second scenario gave 96.78% oil recovery at 3 wt%.
The soaking time of the chemicals has a significant effect on recovery, and higher chemical concentrations alter wettability over larger areas, yielding higher oil recovery. The chemical that gives the best overall laboratory results will also be considered for Huff and Puff injection trials (pilot project) to increase oil recovery from Wakamuk Field.
Keywords: Wakamuk field, chemical treatment, oil recovery, viscosity
Procedia PDF Downloads 693
178 Inhibition Theory: The Development of Subjective Happiness and Life Satisfaction after Experiencing Severe, Traumatic Life Events (Paraplegia)
Authors: Tanja Ecken, Laura Fricke, Anika Steger, Maren M. Michaelsen, Tobias Esch
Abstract:
Studies and applied experience show that severe, traumatic accidents require not only physical rehabilitation and recovery but also psychological adaptation and reorganization to the changed living conditions. Neurobiological models underpinning the experience of happiness and satisfaction postulate that life shocks can enhance the experience of happiness and life satisfaction, i.e., posttraumatic growth (PTG). The present study aims to provide an in-depth understanding of the psychological processes underlying PTG and to outline its consequences for subjective happiness and life satisfaction. To explore this, Esch's (2022) ABC Model was used as guidance for the development of a questionnaire assessing changes in happiness and life satisfaction and for a schematic model postulating the development of PTG in the context of paraplegia. A two-stage qualitative interview procedure explored participants' experiences of paraplegia. Specifically, narrative, semi-structured interviews (N=28) focused on the time before and after the accident, the availability of supportive resources, and potential changes in the perception of happiness and life satisfaction. Qualitative analysis (Grounded Theory) indicated that an initial phase of reorganization was followed by a gradual psychological adaptation to novel, albeit reduced, opportunities in life. Participants reportedly experienced a 'compelled' slowing down and elements of mindfulness, subsequently instilling a sense of gratitude and joy in relation to life's presumed trivialities. Despite physical limitations and difficulties, participants reported an enhanced ability to relate to themselves and others and a reduction in perceived everyday nuisances. In conclusion, PTG can be experienced in response to severe, traumatic life events and has the potential to enrich the lives of affected persons in numerous, unexpected and yet challenging ways.
PTG appears to be a spectrum comprising an interplay of internal and external resources underpinned by neurobiological processes. Participants experienced PTG irrespective of age, gender, marital status, income or level of education.
Keywords: inhibition theory, posttraumatic growth, trauma, stress, life satisfaction, subjective happiness, traumatic life events, paraplegia
Procedia PDF Downloads 86
177 Inappropriate Prescribing Defined by START and STOPP Criteria and Its Association with Adverse Drug Events among Older Hospitalized Patients
Authors: Mohd Taufiq bin Azmy, Yahaya Hassan, Shubashini Gnanasan, Loganathan Fahrni
Abstract:
Inappropriate prescribing in older patients has been associated with resource utilization and adverse drug events (ADE) such as hospitalization, morbidity and mortality. Globally, there is a lack of published data on ADE induced by inappropriate prescribing. Our study is specific to an older population and aimed at identifying risk factors for ADE and developing a model linking ADE to inappropriate prescribing. The design of the study was prospective: the computerized medical records of 302 hospitalized elderly patients aged 65 years and above in 3 public hospitals in Malaysia (Hospital Serdang, Hospital Selayang and Hospital Sungai Buloh) were studied over a 7-month period from September 2013 until March 2014. Potentially inappropriate medications and potential prescribing omissions were determined using the published and validated START-STOPP criteria. Patients who had at least one inappropriate medication were included in Phase II of the study, where ADE were identified by a local expert consensus panel based on the published and validated Naranjo ADR probability scale. The panel also assessed whether ADE were causal or contributory to the current hospitalization. The association between inappropriate prescribing and ADE (hospitalization, mortality and adverse drug reactions) was determined by identifying whether or not the former was causal or contributory to the latter. The rate of ADE avoidability was also determined. Our findings revealed that the prevalence of potentially inappropriate prescribing was 58.6%. ADEs were detected in 31 of 105 patients (29.5%) when the STOPP criteria were used to identify potentially inappropriate medication; all 31 ADEs (100%) were considered causal or contributory to admission. Of the 31 ADEs, 28 (90.3%) were considered avoidable or potentially avoidable.
After adjusting for age, sex, comorbidity, dementia, baseline activities of daily living function, and number of medications, the likelihood of a serious avoidable ADE increased significantly when a potentially inappropriate medication was prescribed (odds ratio, 11.18; 95% confidence interval [CI], 5.014-24.93; p < .001). The medications identified by the STOPP criteria are significantly associated with avoidable ADE in older people that cause or contribute to urgent hospitalization, but contributed less towards morbidity and mortality. The findings of the study underscore the importance of preventing inappropriate prescribing.
Keywords: adverse drug events, appropriate prescribing, health services research
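An unadjusted odds ratio with a Wald confidence interval can be computed from a 2×2 table as sketched below; the counts are hypothetical, and the abstract's OR of 11.18 came from an adjusted multivariable model rather than a raw table.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
       exposed/event = a, exposed/no-event = b,
       unexposed/event = c, unexposed/no-event = d."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: PIM-exposed vs not, avoidable ADE vs none
or_, lo, hi = odds_ratio_ci(a=28, b=34, c=3, d=40)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The adjusted model in the abstract would replace this with logistic regression including the listed covariates.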
Procedia PDF Downloads 399
176 Early Age Behavior of Wind Turbine Gravity Foundations
Authors: Janet Modu, Jean-Francois Georgin, Laurent Briancon, Eric Antoinet
Abstract:
The current practice during the repowering phase of wind turbines is deconstruction of existing foundations and construction of new foundations, either to accept larger wind loads or once the foundations have reached the end of their service lives. The ongoing research project FUI25 FEDRE (Fondations d'Eoliennes Durables et REpowering) therefore serves to propose scalable wind turbine foundation designs that allow reuse of existing foundations. To undertake this research, numerical models and laboratory-scale models are currently being utilized and implemented in the GEOMAS laboratory at INSA Lyon, following instrumentation of a reference wind turbine situated in the northern part of France. Sensors placed within both the foundation and the underlying soil monitor the evolution of stresses from the foundation's early age to stresses during service. The results from the instrumentation form the basis of validation for both the laboratory and numerical work conducted throughout the project. The study currently focuses on the effect of coupled Thermo-Hydro-Mechanical-Chemical (THMC) mechanisms that induce stress during the early age of the reinforced concrete foundation, and on scale-factor considerations in the replication of the reference wind turbine foundation at laboratory scale. Using THMC 3D models in the COMSOL Multiphysics software, the numerical analysis performed on both the laboratory-scale and full-scale foundations simulates thermal deformation, hydration, shrinkage (desiccation and autogenous) and creep so as to predict the initial damage caused by internal processes during concrete setting and hardening. Results show a prominent effect of early-age properties on the damage potential in full-scale wind turbine foundations. However, a prediction of the damage potential at laboratory scale shows significant differences in early-age stresses in comparison to the full-scale model, depending on the spatial position in the foundation.
In addition to the well-known size effect phenomenon, these differences may contribute to inaccuracies encountered when predicting ultimate deformations of the on-site foundation using laboratory-scale models.
Keywords: cement hydration, early age behavior, reinforced concrete, shrinkage, THMC 3D models, wind turbines
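Early-age temperature histories of mass concrete are commonly converted to an equivalent age with the Arrhenius maturity function (as in ASTM C1074); the sketch below uses a typical activation energy and an invented temperature log, and is not part of the FEDRE THMC model itself.

```python
import math

R = 8.314       # J/(mol K), universal gas constant
EA = 40_000.0   # J/mol, typical apparent activation energy for cement hydration
T_REF = 293.15  # K, 20 C reference temperature

def equivalent_age(temps_c, dt_hours):
    """Arrhenius equivalent age (h) at 20 C for a curing temperature history."""
    t_eq = 0.0
    for t_c in temps_c:
        t_k = t_c + 273.15
        t_eq += math.exp(-EA / R * (1.0 / t_k - 1.0 / T_REF)) * dt_hours
    return t_eq

# Hypothetical core temperatures (C) logged hourly in a massive foundation:
# self-heating accelerates hydration, so equivalent age runs ahead of clock time
history = [20, 25, 32, 40, 45, 47, 46, 43, 39, 35, 30, 26]
print(f"clock time: {len(history)} h, "
      f"equivalent age: {equivalent_age(history, 1.0):.1f} h")
```

The gap between clock time and equivalent age is one reason early-age stresses differ between laboratory-scale and full-scale pours.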
Procedia PDF Downloads 175
175 Single and Sequential Extraction for Potassium Fractionation and Nano-Clay Flocculation Structure
Authors: Chakkrit Poonpakdee, Jing-Hua Tzen, Ya-Zhen Huang, Yao-Tung Lin
Abstract:
Potassium (K) is a known macronutrient and essential element for plant growth. Single leaching and modified sequential extraction schemes have been developed to estimate the relative phase associations of soil samples. The sequential extraction process is a step in analyzing the partitioning of metals as affected by environmental conditions, but it is not a tool for estimating K bioavailability. The traditional single leaching method, by contrast, has long been used to classify K speciation; it depends on K availability to plants and is used to set potash fertilizer recommendation rates. Clay minerals in soil are a factor controlling soil fertility. The change in the micro-structure of clay minerals under various environmental conditions (i.e., swelling or shrinking) is characterized using Transmission X-Ray Microscopy (TXM). The objectives of this study were to 1) compare the distribution of K speciation between the single leaching and sequential extraction processes and 2) determine the clay particle flocculation structure before/after suspension with K+ using TXM. Four tropical soil samples were selected: farming without K fertilizer (10 years), long-term applied K fertilizer (10 years; 168-240 kg K2O ha-1 year-1), red soil (450-500 kg K2O ha-1 year-1) and forest soil. The results showed that the amounts of K speciation by the single leaching method decreased in the order mineral K, HNO3 K, non-exchangeable K, NH4OAc K, exchangeable K and water-soluble K. The sequential extraction process indicated that most K fractions in soil were associated with the residual, organic matter, Fe or Mn oxide and exchangeable fractions, while the K fraction associated with carbonate was not detected in the tropical soil samples. The soil with long-term applied K fertilizer and the red soil had higher exchangeable K than the soil farmed without K fertilizer and the forest soil.
The results indicated that one way to increase available K (water-soluble K and exchangeable K) is to apply K fertilizer together with organic fertilizer. Two-dimensional TXM images of clay particles suspended with K+ show that the clay minerals aggregate into closed-void cellular networks. The porous cellular structure of soil aggregates in 1 M KCl solution had larger empty voids than in 0.025 M KCl and deionized water, respectively. TXM nanotomography is a new technique that can be useful in the field as a tool for better understanding clay mineral micro-structure.
Keywords: potassium, sequential extraction process, clay mineral, TXM
Procedia PDF Downloads 290
174 Simulation of Focusing of Diamagnetic Particles in Ferrofluid Microflows with a Single Set of Overhead Permanent Magnets
Authors: Shuang Chen, Zongqian Shi, Jiajia Sun, Mingjia Li
Abstract:
Microfluidics is a technology in which small amounts of fluids are manipulated using channels with dimensions of tens to hundreds of micrometers. This technology is required for several applications in fields including disease diagnostics, genetic engineering, and environmental monitoring. Among these, the manipulation of microparticles and cells in microfluidic devices, especially their separation, has attracted broad interest. In a magnetic field, separation methods include positive and negative magnetophoresis. By comparison, negative magnetophoresis is a label-free technology with many advantages, e.g., easy operation, low cost, and simple design. Before the separation of particles or cells, focusing them into a single tight stream is usually a necessary upstream operation. In this work, the focusing of diamagnetic particles in ferrofluid microflows with a single set of overhead permanent magnets is investigated numerically. The geometric model of the simulation is based on the configuration of previous experiments. The straight microchannel is 24 mm long and has a rectangular cross-section 100 μm wide and 50 μm deep. Spherical diamagnetic particles of 10 μm in diameter are suspended in the ferrofluid. The initial concentration of the ferrofluid c₀ is 0.096%, and the flow rate of the ferrofluid is 1.8 mL/h. The magnetic field is induced by five identical rectangular neodymium-iron-boron permanent magnets (1/8 × 1/8 × 1/8 in.) and is calculated by the equivalent charge source (ECS) method. The flow of the ferrofluid is governed by the Navier-Stokes equations. The trajectories of the particles are solved by the discrete phase model (DPM) in the ANSYS FLUENT program, and the positions of the diamagnetic particles are recorded by transient simulation. Consistent with the mentioned experiments, our simulation shows that diamagnetic particles are gradually focused in the ferrofluid under the magnetic field.
In addition, diamagnetic particle focusing is studied by varying the flow rate of the ferrofluid; in agreement with experiment, focusing improves as the flow rate increases. Furthermore, the influence of other factors on focusing is investigated, e.g., the width and depth of the microchannel, the concentration of the ferrofluid and the diameter of the diamagnetic particles.
Keywords: diamagnetic particle, focusing, microfluidics, permanent magnet
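The mechanism behind negative magnetophoresis can be illustrated with a one-dimensional, overdamped sketch: a particle less magnetizable than the surrounding ferrofluid feels a dipole force down the field gradient and drifts toward the field minimum. All parameters below (field strength, decay length, susceptibility contrast, viscosity) are invented for illustration and are far simpler than the paper's ECS/DPM model.

```python
import math

MU0 = 4e-7 * math.pi  # T m/A, vacuum permeability
ETA = 2.5e-3          # Pa s, illustrative ferrofluid viscosity
R_P = 5e-6            # m, particle radius (10 um diameter)
V_P = 4.0 / 3.0 * math.pi * R_P**3
DCHI = -0.01          # particle minus ferrofluid susceptibility (negative)
H0, L = 1e5, 50e-6    # A/m at the magnet-side wall; field decay length

def h_field(x):
    """Field magnitude decaying away from the magnet at x = 0 (a toy profile)."""
    return H0 * math.exp(-x / L)

def magnetophoretic_velocity(x):
    """Overdamped Stokes drift v = F / (6 pi eta r); the dipole force on a
    weakly magnetic particle in a magnetized medium is mu0 V dchi H dH/dx."""
    dh_dx = -h_field(x) / L
    force = MU0 * V_P * DCHI * h_field(x) * dh_dx
    return force / (6 * math.pi * ETA * R_P)

# Euler march: a diamagnetic particle starting near the magnet-side wall
# drifts toward the field minimum, i.e., away from the magnet
x, dt = 5e-6, 1e-4
for _ in range(1000):
    x += magnetophoretic_velocity(x) * dt
print(f"final position: {x * 1e6:.1f} um from the magnet-side wall")
```

With the susceptibility contrast negative, the force is positive (down-gradient), which is why all particles collect on the far side and form a focused stream.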
Procedia PDF Downloads 130
173 Trajectory Tracking of Fixed-Wing Unmanned Aerial Vehicle Using Fuzzy-Based Sliding Mode Controller
Authors: Feleke Tsegaye
Abstract:
The work in this thesis mainly focuses on trajectory tracking of a fixed-wing unmanned aerial vehicle (FWUAV) using a fuzzy-based sliding mode controller (FSMC) for surveillance applications. Unmanned Aerial Vehicles (UAVs) are general-purpose aircraft built to fly autonomously. This technology is applied in a variety of sectors, including the military, to improve defense, surveillance, and logistics. The model of a FWUAV is complex due to its high non-linearity and coupling effects. In this thesis, input decoupling is achieved by extracting the dominant inputs during the design of the controller and treating the remaining inputs as uncertainty. The proper and steady flight maneuvering of UAVs under uncertain and unstable circumstances is the most critical problem for researchers studying UAVs. An FSMC technique is suggested to tackle the complexity of FWUAV systems. The trajectory tracking control algorithm primarily uses the sliding-mode (SM) variable structure control method to address the system's control issue. In the SM control, a fuzzy logic control (FLC) algorithm is utilized in place of the discontinuous term of the SM controller to reduce the chattering impact. In the reaching and sliding stages of SM control, Lyapunov theory is used to assure finite-time convergence. A comparison between the conventional SM controller and the suggested controller is made with respect to the chattering effect as well as tracking performance. It is evident that chattering is effectively reduced, the suggested controller provides a quick response with a minimal steady-state error, and the controller is robust in the face of unknown disturbances. The designed control strategy is simulated with the nonlinear model of the FWUAV using the MATLAB®/Simulink® environment.
The simulation results show that the suggested controller operates effectively, maintains the aircraft's stability, and holds the aircraft's targeted flight path despite the presence of uncertainty and disturbances.
Keywords: fixed-wing UAVs, sliding mode controller, fuzzy logic controller, chattering, coupling effect, surveillance, finite-time convergence, Lyapunov theory, flight path
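The chattering-reduction idea can be demonstrated on a toy first-order plant: the discontinuous sign(s) term of classic SMC is replaced by a smooth switching law. Here a simple saturation boundary layer stands in for the fuzzy stage (only a crude surrogate for the paper's FLC), and the plant, gains and disturbance are all invented.

```python
import math

def simulate(smooth, k=5.0, phi=0.5, dt=1e-3, steps=4000):
    """Track x_ref(t) = sin(t) for dx/dt = u + d(t) with a bounded matched
    disturbance. smooth=False uses the discontinuous sign(s) switching term;
    smooth=True uses a saturation boundary layer (fuzzy-stage stand-in)."""
    x, u_prev, chatter, err = 0.0, 0.0, 0.0, 0.0
    for i in range(steps):
        t = i * dt
        x_ref, x_ref_dot = math.sin(t), math.cos(t)
        s = x - x_ref                               # sliding variable
        if smooth:
            sw = max(-1.0, min(1.0, s / phi))       # saturation inside +-phi
        else:
            sw = math.copysign(1.0, s) if s else 0.0
        u = x_ref_dot - k * sw                      # equivalent + switching term
        chatter += abs(u - u_prev)                  # total variation of control
        u_prev = u
        d = 0.8 * math.sin(7 * t)                   # disturbance, |d| < k
        x += (u + d) * dt
        err = abs(s)
    return err, chatter

err_sign, chat_sign = simulate(smooth=False)
err_sat, chat_sat = simulate(smooth=True)
print(f"sign: final err {err_sign:.3f}, control variation {chat_sign:.0f}")
print(f"sat : final err {err_sat:.3f}, control variation {chat_sat:.0f}")
```

Both variants keep the tracking error small, but the smooth switching law cuts the total variation of the control signal by orders of magnitude, which is the chattering reduction the abstract reports for the fuzzy stage.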
Procedia PDF Downloads 57
172 Being Authentic is the New "Pieces": A Mixed Methods Study on Authenticity among African Christian Millennials
Authors: Victor Counted
Abstract:
Staying true to oneself is complicated, and in most cases we might not fully come to terms with this reality. Like any journey, a self-discovery experience with the 'self' is like a rollercoaster ride. The researcher attempts to engage the reader in an empirical study on the authenticity tendencies of African Christian Millennials, thereby attempting the all-important question: what does it actually mean to be true to self for the African youth? This is a comprehensive, yet unfinished, undertaking that applies authenticity theory in its exploratory navigation to uncover the 'lived world' of the participants in this study. Using a mixed methods approach, the researcher gives an exhaustive account of the authenticity tendencies and experiences of the respondents, providing the reader with a unique narrative for understanding what it means to be true to oneself in Africa. In the quantitative phase, the participants recorded higher scores on the authentic living subscale of the Authenticity Scale (AS), with significant correlations among the subscales. Hypotheses tested in the quantitative phase statistically supported gender and church affiliation as possible predictors of the participants' authenticity orientations, while being a native Christian and race/ethnicity were not statistically significant factors. These results shaped the objectives of the qualitative study, in which fifteen participants high on AS authentic living were interviewed to understand why they scored high on authentic living and what it means to be authentic. The hallmark of the qualitative case study exploration was the common coping mechanism of splitting, adopted by the respondents to deal with their self-crisis as they tried to remain authentic to self while self-regulating and self-investing to discover the 'self'.
Specifically, the researcher observed the respondents' concurrent use of a kind of religious self to regulate their self-crisis, as they related to the self fragmenting through different splitting stages in hope of some kind of redemption. This explanation led to the conclusion that being authentic is the new 'pieces': authenticity is in fragments. This proposition led the researcher to introduce a hermeneutical support system to enable future researchers to engage more critically and responsibly with their 'living human documents' in order to inspire timely solutions to the concerns of authenticity and wellbeing among Millennials in Africa.
Keywords: authenticity, self, identity, self-fragmentation, weak self integration, postmodern self, splitting
Procedia PDF Downloads 521
171 Reliability of Dry Tissues Sampled from Exhumed Bodies in DNA Analysis
Authors: V. Agostini, S. Gino, S. Inturri, A. Piccinini
Abstract:
In cases of corpse identification or parental testing performed on the exhumed remains of an alleged father, we usually seek and acquire organic samples such as bones and/or bone fragments, teeth, nails and muscle fragments. DNA analysis of these cadaveric matrices usually leads to successful identification, but it often happens that the typing results are unsatisfactory, with highly degraded, partial or even non-interpretable genetic profiles. Aggravating the interpretative picture derived from the analysis of such 'classical' organic matrices is the long and laborious treatment of the sample, from mechanical fragmentation up to the protracted decalcification phase. These steps greatly increase the chance of sample contamination. In the present work, instead, we report the use of 'unusual' cadaveric matrices, demonstrating that their forensic genetic analysis can lead to better results in less time and with lower reagent costs. We present six case reports from on-field experience in which eye swabs and cartilage were sampled and analyzed, yielding clear single-source genetic profiles useful for identification purposes. In all cases we used standard DNA tissue extraction protocols (as reported in the manufacturers' user manuals, e.g., QIAGEN or Invitrogen-Thermo Fisher Scientific), thus bypassing the long and difficult phases of mechanical fragmentation and decalcification of bone samples. PCR was carried out using the PowerPlex® Fusion System kit (Promega), and capillary electrophoresis was carried out on an ABI PRISM® 310 Genetic Analyzer (Applied Biosystems®) with GeneMapper ID v3.2.1 (Applied Biosystems®) software. The software Familias (version 3.1.3) was employed for kinship analysis. The genetic results achieved proved to be much better than those from the analysis of bones or nails, both qualitatively and quantitatively and in terms of costs and timing.
In this way, using the standard procedure of DNA extraction from tissue, it is possible to obtain, in a shorter time and with maximum efficiency, an excellent genetic profile that is useful and easily interpreted for later paternity tests and/or identification of human remains.
Keywords: DNA, eye swabs and cartilage, identification human remains, paternity testing
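The kinship statistics that tools like Familias compute rest on per-locus paternity indices combined across loci. A minimal sketch, with hypothetical STR genotypes and allele frequencies (not from these cases):

```python
def paternity_index(af_genotype, paternal_allele, allele_freq):
    """Single-locus paternity index PI = X / Y: X is the probability the
    alleged father transmits the child's obligate paternal allele
    (1.0 homozygous, 0.5 heterozygous, 0 exclusion); Y is the probability
    a random man does, i.e., the allele's population frequency."""
    x = af_genotype.count(paternal_allele) / 2.0
    return x / allele_freq

def posterior_probability(pis, prior=0.5):
    """Combined paternity index (CPI) is the product of per-locus PIs;
    with a prior of 0.5, the probability of paternity W = CPI / (CPI + 1)."""
    cpi = 1.0
    for pi in pis:
        cpi *= pi
    odds = cpi * prior / (1.0 - prior)
    return odds / (1.0 + odds)

# Hypothetical STR loci: (alleged father genotype, obligate allele, frequency)
loci = [(("12", "14"), "14", 0.08),
        (("9", "9"), "9", 0.21),
        (("17", "20"), "17", 0.15)]
pis = [paternity_index(g, a, f) for g, a, f in loci]
w = posterior_probability(pis)
print(f"PIs: {[round(p, 2) for p in pis]}, W = {w:.5f}")
```

Real casework uses more than 20 loci (the PowerPlex® Fusion kit types 24), mutation models and population substructure corrections, which is why dedicated software such as Familias is used instead of this bare product rule.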
Procedia PDF Downloads 109
170 Design and Development of Permanent Magnet Quadrupoles for Low Energy High Intensity Proton Accelerator
Authors: Vikas Teotia, Sanjay Malhotra, Elina Mishra, Prashant Kumar, R. R. Singh, Priti Ukarde, P. P. Marathe, Y. S. Mayya
Abstract:
Bhabha Atomic Research Centre, Trombay is developing the Low Energy High Intensity Proton Accelerator (LEHIPA) as a pre-injector for a 1 GeV proton accelerator for an accelerator-driven sub-critical reactor system (ADSS). LEHIPA consists of an RFQ (Radio Frequency Quadrupole) and a DTL (Drift Tube Linac) as its major accelerating structures. The DTL is an RF resonator operating in the TM010 mode and provides a longitudinal E-field for the acceleration of charged particles. The RF design of the drift tubes of the DTL was carried out to maximize the shunt impedance; this demands that the diameter of the drift tubes (DTs) be as small as possible. The width of a DT is, however, determined by the particle β and a trade-off between the transit time factor and the effective accelerating voltage in the DT gap. The array of drift tubes inside the DTL shields the accelerated particles from the decelerating RF phase and provides transverse focusing to the charged particles, which otherwise tend to diverge due to Coulombic repulsion and the transverse E-field at the entry of the DTs. The magnetic lenses housed inside the DTs control the transverse emittance of the beam. Quadrupole magnets are preferred over solenoid magnets due to the relatively high focusing strength of the former over the latter. The small volume available inside the DTs for housing magnetic quadrupoles has motivated the use of permanent magnet quadrupoles (PMQs) rather than electromagnetic quadrupoles (EMQs). This provides another advantage, as Joule heating is avoided, which would otherwise have added a thermal load in the continuously operating accelerator. The beam dynamics requires the uniformity of the integral magnetic gradient to be better than ±0.5% of the nominal value of 2.05 tesla. The paper describes the magnetic design of the PMQ using Sm2Co17 rare earth permanent magnets, and discusses the fabrication and qualification of five pre-series prototype permanent magnet quadrupoles and of a full-scale DT developed with embedded PMQs.
The paper discusses the magnetic pole design for optimizing the integral Gdl uniformity and the values of the higher-order multipoles. A novel but simple method of tuning the integral Gdl is discussed.
Keywords: DTL, focusing, PMQ, proton, rare earth magnets
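The ±0.5% specification on the integrated gradient can be checked directly from an on-axis scan of the gradient G(z); the scan values below are hypothetical, chosen only so the fringe fields and plateau integrate to roughly the 2.05 T nominal value.

```python
def integral_gradient(gradients_t_per_m, dz_m):
    """Trapezoidal integral of the on-axis field gradient G(z) over the
    quadrupole length, giving the integrated gradient Gdl in tesla."""
    g = gradients_t_per_m
    return dz_m * (0.5 * g[0] + sum(g[1:-1]) + 0.5 * g[-1])

def within_tolerance(gdl, nominal, tol=0.005):
    """Beam dynamics spec: integrated gradient within +-0.5% of nominal."""
    return abs(gdl - nominal) / nominal <= tol

# Hypothetical Hall-probe scan: gradient (T/m) sampled every 2 mm along a
# PMQ, including the fringe-field roll-off at both ends
scan = [0.0, 15.0, 40.0] + [51.0] * 18 + [40.0, 15.0, 0.0]
gdl = integral_gradient(scan, dz_m=0.002)
print(f"Gdl = {gdl:.3f} T, within +-0.5% of 2.05 T: {within_tolerance(gdl, 2.05)}")
```

A tuning step of the kind the paper mentions would shim or shift magnet segments until every production quadrupole's Gdl passes this check.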
Procedia PDF Downloads 472
169 Nanoparticles-Protein Hybrid-Based Magnetic Liposome
Authors: Amlan Kumar Das, Avinash Marwal, Vikram Pareek
Abstract:
Liposomes play an important role in medical and pharmaceutical science, e.g., as nanoscale drug carriers. Liposomes are vesicles of varying size, generated in vitro, consisting of a spherical lipid bilayer and an aqueous inner compartment. Magnet-driven liposomes are used for the targeted delivery of drugs to organs and tissues [1]; these liposome preparations contain encapsulated drug components and finely dispersed magnetic particles. Liposomes are useful in terms of biocompatibility, biodegradability, and low toxicity, and their biodistribution can be controlled by changing their size, lipid composition, and physical characteristics [2]. Furthermore, liposomes can entrap both hydrophobic and hydrophilic drugs and are able to continuously release the entrapped substrate, making them useful drug carriers. Magnetic liposomes (MLs) are phospholipid vesicles that encapsulate magnetic or paramagnetic nanoparticles. They are applied as contrast agents for magnetic resonance imaging (MRI) [3]. The biological synthesis of nanoparticles using plant extracts plays an important role in the field of nanotechnology [4]. A green-synthesized magnetite nanoparticles-protein hybrid has been produced by treating iron(III)/iron(II) chloride with the leaf extract of Datura inoxia. The phytochemicals present in the leaf extract, which include flavonoids, phenolic compounds, cardiac glycosides, proteins and sugars, act as reducing as well as stabilizing agents, preventing agglomeration. The magnetite nanoparticles-protein hybrid has been trapped inside the aqueous core of liposomes prepared by the reversed-phase evaporation (REV) method using oleic and linoleic acid, and the vesicles were shown to be driven under a magnetic field, confirming the formation of magnetic liposomes (MLs). Chemical characterization of the stealth magnetic liposome was performed by breaking the liposome and releasing the magnetic nanoparticles.
The presence of iron has been confirmed by colour complex formation with KSCN and a UV-Vis study using a Cary 60 spectrophotometer (Agilent). This magnet-driven liposome using a nanoparticle-protein hybrid can serve as a smart vesicle for targeted drug delivery.
Keywords: nanoparticles-protein hybrid, magnetic liposome, medical, pharmaceutical science
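The colorimetric confirmation rests on Beer-Lambert arithmetic: the red Fe(SCN)2+ complex absorbs near 480 nm, and concentration follows from absorbance. A minimal sketch, in which the molar absorptivity, path length and absorbance reading are illustrative assumptions rather than values from this study:

```python
# Beer-Lambert estimate of Fe(III) concentration from the red
# Fe(SCN)2+ complex absorbance. EPSILON and the absorbance reading
# below are illustrative assumptions, not values from the study.

EPSILON = 4700.0   # L mol^-1 cm^-1, assumed for Fe(SCN)2+ near 480 nm
PATH_CM = 1.0      # standard cuvette path length

def iron_concentration(absorbance, epsilon=EPSILON, path_cm=PATH_CM):
    """Return molar concentration via c = A / (epsilon * l)."""
    return absorbance / (epsilon * path_cm)

c = iron_concentration(0.47)   # hypothetical absorbance reading
```

With these assumed values the hypothetical reading corresponds to a concentration on the order of 10^-4 mol/L.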
Procedia PDF Downloads 249
168 Wettability of Superhydrophobic Polymer Layers Filled with Hydrophobized Silica on Glass
Authors: Diana Rymuszka, Konrad Terpiłowski, Lucyna Hołysz, Elena Goncharuk, Iryna Sulym
Abstract:
Superhydrophobic surfaces exhibit extremely high water repellency. The commonly accepted basic criterion for such surfaces is a water contact angle larger than 150°, low contact angle hysteresis and a low sliding angle. These surfaces are of special interest because properties such as anti-sticking, anti-contamination and self-cleaning are expected. These properties are attractive for many applications, such as anti-sticking of snow for antennas and windows, anti-biofouling paints for boats, waterproof clothing, self-cleaning windshields for automobiles, dust-free coatings or metal refining. Over the last two decades, various methods for the preparation of superhydrophobic surfaces have been reported, such as phase separation, electrochemical deposition, template methods, plasma methods, chemical vapor deposition, wet chemical reaction, sol-gel processing, lithography and so on. The aim of the study was to investigate the influence of modified colloidal silica, used as a filler, on the hydrophobicity of a polymer film deposited on a glass support activated with plasma. On the prepared surfaces, water advancing (ӨA) and receding (ӨR) contact angles were measured, and the total apparent surface free energy was then determined using the contact angle hysteresis (CAH) approach. The structures of the deposited films were observed with an optical microscope. Topographies of selected films were also determined using an optical profilometer. It was found that plasma treatment influences the wetting and energetic properties of the glass surface, which is observed as higher adhesion between the polymer/filler film and the glass support. Using colloidal silica particles as a filler for a thin polymer film deposited on a glass support, it is possible to produce strongly adhering layers with superhydrophobic properties. The best superhydrophobic properties were obtained for glass/polymer + modified silica films with 89% and 100% coverage.
The advancing contact angles measured on these surfaces are above 150°, which leads to values of the apparent surface free energy below 2 mJ/m². Such films may have many practical applications, among others as dust-free coatings or anticorrosion protection.
Keywords: contact angle, plasma, superhydrophobic, surface free energy
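The contact angle hysteresis (CAH) approach mentioned above has a closed form (Chibowski's relation) linking the advancing and receding angles to the total apparent surface free energy. A minimal sketch, with hypothetical angles rather than the measured ones:

```python
import math

def cah_surface_free_energy(theta_a_deg, theta_r_deg, gamma_l=72.8):
    """Total apparent surface free energy (mJ/m^2) from the contact
    angle hysteresis (CAH) approach:
        gamma_s = gamma_l * (1 + cos(theta_a))**2
                  / (2 + cos(theta_r) + cos(theta_a))
    gamma_l defaults to the surface tension of water, 72.8 mJ/m^2."""
    ca = math.cos(math.radians(theta_a_deg))
    cr = math.cos(math.radians(theta_r_deg))
    return gamma_l * (1.0 + ca) ** 2 / (2.0 + cr + ca)

# Hypothetical superhydrophobic angles (not the paper's measurements):
gamma_s = cah_surface_free_energy(160.0, 155.0)
```

For angles in this range the formula indeed returns an apparent surface free energy below 2 mJ/m², matching the trend reported above.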
Procedia PDF Downloads 481
167 Investigating the Sloshing Characteristics of a Liquid by Using an Image Processing Method
Authors: Ufuk Tosun, Reza Aghazadeh, Mehmet Bülent Özer
Abstract:
This study puts forward a method to analyze the sloshing characteristics of liquid in a tuned sloshing absorber system by using image processing tools. Tuned sloshing vibration absorbers have recently attracted researchers' attention as seismic load dampers in constructions due to their practical and logistical convenience. The absorber is a liquid which sloshes and applies a force in opposite phase to the motion of the structure. Experimental characterization of the sloshing behavior can be utilized as a means of verifying the results of numerical analysis. It can also be used to identify the accuracy of assumptions related to the motion of the liquid. There are extensive theoretical and experimental studies in the literature related to the dynamical and structural behavior of tuned sloshing dampers. In most of these works there are efforts to estimate the sloshing behavior of the liquid, such as the free surface motion and the total force applied by the liquid to the wall of the container. For these purposes, the use of sensors such as load cells and ultrasonic sensors is prevalent in experimental works. Load cells are only capable of measuring the force and require conducting tests both with and without liquid to obtain the pure sloshing force. Ultrasonic level sensors give point-wise measurements and hence are not applicable to measuring the whole free surface motion. Furthermore, in the case of liquid splashing they may give incorrect data. In this work, a method for evaluating the sloshing wave height by using camera records and image processing techniques is presented. In this method, the motion of the liquid and its container, made of a transparent material, is recorded by a high speed camera which is aligned to the free surface of the liquid. The video captured by the camera is processed frame by frame using the MATLAB Image Processing Toolbox. The process starts with cropping the desired region.
By recognizing the regions containing liquid and eliminating noise and liquid splashing, the final picture depicting the free surface of the liquid is achieved. This picture is then used to obtain the height of the liquid along the length of the container. This process was verified by ultrasonic sensors that measured the fluid height at the liquid surface.
Keywords: fluid structure interaction, image processing, sloshing, tuned liquid damper
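The surface-extraction step described above can be sketched with a binary frame: after cropping and denoising, the free-surface profile is the topmost liquid pixel in each column. A toy example with a synthetic frame, not the authors' MATLAB code:

```python
import numpy as np

# Sketch of the surface-extraction step: given a binary frame
# (1 = liquid, 0 = background) after cropping/denoising, recover the
# free-surface height along the container length as the topmost liquid
# pixel in each column. The frame below is synthetic.

def surface_height(binary_frame):
    """Height (in pixels from the container bottom) per column."""
    rows, _ = binary_frame.shape
    # first row index containing liquid, per column; rows count downward
    top = np.argmax(binary_frame, axis=0)
    heights = rows - top
    heights[binary_frame.sum(axis=0) == 0] = 0   # columns with no liquid
    return heights

frame = np.zeros((6, 4), dtype=int)
frame[3:, 0] = 1    # column 0: liquid occupies bottom 3 rows
frame[2:, 1] = 1    # column 1: bottom 4 rows (wave crest)
frame[4:, 2] = 1    # column 2: bottom 2 rows (trough)
                    # column 3 stays empty
h = surface_height(frame)
```

The per-column heights trace the free-surface wave profile in pixel units, which a camera calibration would convert to millimetres.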
Procedia PDF Downloads 344
166 Process Performance and Nitrogen Removal Kinetics in Anammox Hybrid Reactor
Authors: Swati Tomar, Sunil Kumar Gupta
Abstract:
Anammox is a promising and cost-effective alternative to conventional treatment systems that facilitates direct oxidation of ammonium nitrogen under anaerobic conditions with nitrite as an electron acceptor, without the addition of any external carbon sources. The present study investigates the process kinetics of a laboratory-scale anammox hybrid reactor (AHR), which combines the dual advantages of attached and suspended growth. The performance and behaviour of the AHR were studied under varying hydraulic retention times (HRTs) and nitrogen loading rates (NLRs). The experimental unit consisted of four 5 L anammox hybrid reactors inoculated with a mixed seed culture containing anoxic and activated sludge. Pseudo steady state (PSS) ammonium and nitrite removal efficiencies of 90.6% and 95.6%, respectively, were achieved during the acclimation phase. After establishment of PSS, the performance of the AHR was monitored at seven different HRTs of 3.0, 2.5, 2.0, 1.5, 1.0, 0.5 and 0.25 d, with the NLR increasing from 0.4 to 4.8 kg N/m3d. The results showed that with the increase in NLR and decrease in HRT (3.0 to 0.25 d), the AHR registered an appreciable decline in nitrogen removal efficiency from 92.9% to 67.4%. The HRT of 2.0 d was considered optimal to achieve a substantial nitrogen removal of 89%, because on further decrease in HRT below 1.5 d a remarkable decline in nitrogen removal efficiency was observed. Analysis of the data indicated that the attached growth system contributes an additional 15.4% ammonium removal and reduced the sludge washout rate (an additional 29% reduction). This enhanced performance may be attributed to a 25% increase in sludge retention time due to the attached growth media. Three kinetic models, namely, first order, Monod and the Modified Stover-Kincannon model, were applied to assess the substrate removal kinetics of nitrogen removal in the AHR.
Validation of the models was carried out by comparing the experimental data set with the values predicted by the respective models. For substrate removal kinetics, model validation revealed that the Modified Stover-Kincannon model is the most precise (R2=0.943) and can suitably be applied to predict the kinetics of nitrogen removal in the AHR. The Lawrence and McCarty model described the kinetics of bacterial growth. The predicted values of the yield coefficient and decay constant were in line with the experimentally observed values.
Keywords: anammox, kinetics, modelling, nitrogen removal, sludge wash out rate, AHR
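The Modified Stover-Kincannon fit named above is usually obtained from the model's linearized form. A sketch with synthetic (noiseless) data, where the kinetic constants are assumed for illustration and are not the AHR estimates:

```python
import numpy as np

# Sketch of fitting the Modified Stover-Kincannon model, which relates
# the nitrogen removal rate R to the loading rate L:
#     R = Umax * L / (KB + L)
# and linearises as 1/R = (KB/Umax) * (1/L) + 1/Umax, so Umax and KB
# follow from a straight-line fit. The data below are synthetic, not
# the AHR measurements.

U_TRUE, KB_TRUE = 10.0, 12.0             # assumed "true" constants
L = np.array([0.4, 1.0, 2.0, 3.0, 4.8])  # loading rates, kg N/m3.d
R = U_TRUE * L / (KB_TRUE + L)           # noiseless removal rates

slope, intercept = np.polyfit(1.0 / L, 1.0 / R, 1)
u_max = 1.0 / intercept                  # saturation constant
k_b = slope * u_max                      # kinetic constant
```

On real data the same straight-line fit yields the R2 quoted in the abstract as a goodness-of-fit measure.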
Procedia PDF Downloads 317
165 Non-Canonical Beclin-1-Independent Autophagy and Apoptosis in Cell Death Induced by Rhus coriaria in Human Colon HT-29 Cancer Cells
Authors: Rabah Iratni, Husain El Hasasna, Khawlah Athamneh, Halima Al Sameri, Nehla Benhalilou, Asma Al Rashedi
Abstract:
Background: Cancer therapies have witnessed great advances in the recent past; however, cancer continues to be a leading cause of death, with colorectal cancer being the fourth cause of cancer-related deaths. Colorectal cancer affects both sexes equally, with a poor survival rate once it metastasizes. Phytochemicals, which are plant-derived compounds, have been on a steady rise as anti-cancer drugs due to the accumulation of evidence that supports their potential. Here, we investigated the anticancer effect of Rhus coriaria on colon cancer cells. Material and Method: The human colon cancer HT-29 cell line was used. Protein expression and protein phosphorylation were examined using Western blotting. Transcription activity was measured using quantitative RT-PCR. A human tumor clonogenic assay was used to assess cell survival. Senescence was assessed by the senescence-associated beta-galactosidase assay. Results: Rhus coriaria extract (RCE) was found to significantly inhibit the viability and colony growth of human HT-29 colon cancer cells. RCE induced senescence and cell cycle arrest at the G1 phase. These changes were concomitant with upregulation of p21 and p16, downregulation of cyclin D1, p27 and c-myc, and expression of senescence-associated β-galactosidase activity. Moreover, RCE induced non-canonical Beclin-1-independent autophagy and subsequent apoptotic cell death through activation of caspase 8 and caspase 7. The blocking of autophagy by 3-methyladenine (3-MA) or chloroquine (CQ) reduced RCE-induced cell death. Further, RCE induced DNA damage, reduced the mutant p53 protein level and downregulated phospho-AKT and phospho-mTOR, events that preceded autophagy. Mechanistically, we found that RCE inhibited the AKT and mTOR pathway, a regulator of autophagy, by promoting the proteasome-dependent degradation of both AKT and mTOR proteins.
Conclusion: Our findings provide strong evidence that Rhus coriaria possesses strong anti-colon cancer activity through induction of senescence and autophagic cell death, making it a promising alternative or adjunct therapeutic candidate against colon cancer.
Keywords: autophagy, proteasome degradation, senescence, mTOR, apoptosis, Beclin-1
Procedia PDF Downloads 262
164 The Incidence of Obesity among Adult Women in Pekanbaru City, Indonesia, Related to High Fat Consumption, Stress Level, and Physical Activity
Authors: Yudia Mailani Putri, Martalena Purba, B. J. Istiti Kandarina
Abstract:
Background: Obesity has been recognized as a global health problem. Individuals classified as overweight and obese are increasing at an alarming rate. This condition is associated with psychological and physiological problems. As a person reaches adulthood, somatic growth ceases. At this stage, the human body has developed fully, to a stable state. As the capital of Riau Province in Indonesia, Pekanbaru is dominated by a Malay ethnic population habitually consuming cholesterol-rich fatty foods as a daily menu, a trigger of the onset of obesity resulting in a high prevalence of degenerative diseases. Research objectives: The aim of this study is to elaborate the relationship between high-fat consumption pattern, stress level, physical activity and the incidence of obesity in adult women in Pekanbaru city. Research Methods: Among the combined research methods applied in this study, the first stage is a quantitative observational, analytical cross-sectional research design with adult women aged 20-40 living in Pekanbaru city. The sample consists of 200 women with BMI≥25. Sample data were processed with univariate, bivariate (correlation and simple linear regression) and multivariate (multiple linear regression) analysis. The second phase is a qualitative descriptive study with purposive sampling and in-depth interviews. Six participants withdrew from the study. Results: According to the results of the bivariate analysis, there are relationships between the incidence of obesity and the pattern of high-fat food consumption (energy intake (p≤0.000; r=0.536), protein intake (p≤0.000; r=0.307), fat intake (p≤0.000; r=0.416), carbohydrate intake (p≤0.000; r=0.430), frequency of fatty food consumption (p≤0.000; r=0.506) and frequency of viscera food consumption (p≤0.000; r=0.535)). There is a relationship between physical activity and the incidence of obesity (p≤0.000; r=-0.631). However, there is no relationship between the level of stress (p=0.741; r=-0.019) and the incidence of obesity.
Physical activity is the predominant factor in the incidence of obesity in adult women in Pekanbaru city. Conclusion: There are relationships between high-fat food consumption pattern, physical activity and the incidence of obesity in Pekanbaru city, with physical activity being the predominant factor in the occurrence of obesity, supported by the unchangeable pattern of high-fat food consumption.
Keywords: obesity, adult, high in fat, stress, physical activity, consumption pattern
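The bivariate (r, p) pairs reported above are Pearson correlations. A minimal sketch of the statistic itself, using made-up intake/BMI values rather than the study data:

```python
import numpy as np

# Illustrative Pearson correlation, the statistic behind the reported
# bivariate (r, p) pairs. The intake/BMI values below are made up.

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return (xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum())

energy_intake = [1800, 2100, 2400, 2600, 3000]  # kcal/day, hypothetical
bmi = [25.5, 26.8, 28.1, 29.0, 31.2]            # hypothetical
r = pearson_r(energy_intake, bmi)
```

A value of r near +1 indicates the strong positive association reported for energy intake; the study's negative r for physical activity would arise from an inverse trend.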
Procedia PDF Downloads 234
163 Characterisation of Human Attitudes in Software Requirements Elicitation
Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana
Abstract:
It is evident that there has been progress in the development and innovation of tools, techniques and methods in the development of software. Even so, there are few methodologies that include the human factor from the point of view of motivation, emotions and impact on the work environment; aspects that, when mishandled or not taken into consideration, increase the iterations in the requirements elicitation phase. This generates a broad number of changes in the characteristics of the system during its developmental process and an overinvestment of resources to obtain a final product that, often, does not live up to the expectations and needs of the client. Human factors such as emotions or personality traits are naturally associated with the process of developing software. However, most of these works are oriented towards the analysis of the final users of the software and do not take into consideration the emotions and motivations of the members of the development team. Given that, in industry, the strategies to select requirements engineers and/or analysts do not take said factors into account, it is important to identify and describe the characteristics or personality traits needed to elicit requirements effectively. This research describes the main personality traits associated with requirements elicitation tasks through an analysis of the existing literature on the topic and a compilation of our experiences as software development project managers in the academic and productive sectors, allowing for the characterisation of a suitable profile for this job. Moreover, a psychometric test is used as an information gathering technique and applied to the personnel of some local companies in the software development sector. Such information has become an important asset for making a comparative analysis between the degree of effectiveness in the way their software development teams are formed and the proposed profile.
The results show that, of the software development companies studied, 53.58% have selected the personnel for the task of requirements elicitation adequately, 35.71% possess some of the characteristics to perform the task, and 10.71% are inadequate. From this information, it is possible to conclude that 46.42% of the requirements engineers selected by the companies could perform other roles more adequately; a change which could improve the performance and competitiveness of the work team and, indirectly, the quality of the product developed. Likewise, the research allowed for the validation of the pertinence and usefulness of the psychometric instrument, as well as the accuracy of the characteristics of the proposed reference profile for the requirements engineer.
Keywords: emotions, human attitudes, personality traits, psychometric tests, requirements engineering
Procedia PDF Downloads 263
162 Creating an Enabling Learning Environment for Learners with Visual Impairments in Lesotho Rural Schools by Using Asset-Based Approaches
Authors: Mamochana A. Ramatea, Fumane P. Khanare
Abstract:
An enabling learning environment is a significant and adaptive means of navigating learners' educational challenges. However, research has indicated that quality provision of education in enabling environments, especially for learners with visual impairments (LVIs, hereafter) in rural schools, remains an ongoing challenge globally. Hence, LVIs often have a lower level of academic performance compared to their peers. To close this gap and fulfill learners' fundamental human right to receive an equal, quality education, appropriate measures and structures that make the learning environment a better place to learn must be better understood. This paper, therefore, intends to find possible means that rural schools of Lesotho can employ to make the learning environment for LVIs enabling. The present study aims to determine suitable assets that can be drawn on to make the learning environment for LVIs enabling. The study is informed by the transformative paradigm and situated within a qualitative research approach. Data were generated through focus group discussions with twelve teachers who were purposively selected from two rural primary schools in Lesotho. The generated data were then analyzed thematically using Braun and Clarke's six-phase framework. The findings of the study indicated that the participating teachers understand that rural schools boast assets (existing and hidden) that have a positive influence in responding to the special educational needs of LVIs. However, the participants also admitted that although their schools boast assets, they still have limited knowledge about the use of the existing assets and thus realized a need for improved collaboration, involvement of the existing assets, and enhancement of academic resources to make LVIs' learning environment enabling. The findings of this study highlight the significance of the effective use of assets.
Additionally, this coincides with literature showing that recognizing and tapping into existing assets enables learning for LVIs. In conclusion, the participants in the current study indicated that for LVIs' learning environment to be enabling, there has to be sufficient use of the existing assets. The researchers, therefore, note that appropriate use of assets is good but may not be sufficient if the existing assets are not adequately managed; otherwise, LVIs remain in a vicious cycle of vulnerability. It was thus recommended that adequate use of assets and teachers' engagement as active assets should always be considered to make the learning environment a better place for LVIs to learn in the future.
Keywords: assets, enabling learning environment, rural schools, learners with visual impairments
Procedia PDF Downloads 108
161 A Decision-Support Tool for Humanitarian Distribution Planners in the Face of Congestion at Security Checkpoints: A Real-World Case Study
Authors: Mohanad Rezeq, Tarik Aouam, Frederik Gailly
Abstract:
In times of armed conflict, various security checkpoints are placed by authorities to control the flow of merchandise into and within areas of conflict. The flow of humanitarian trucks added to the regular flow of commercial trucks, together with complex security procedures, creates congestion and long waiting times at the security checkpoints. This causes distribution costs to increase and shortages of relief aid for the affected people to occur. Our research proposes a decision-support tool to assist planners and policymakers in building efficient plans for the distribution of relief aid, taking into account congestion at security checkpoints. The proposed tool is built around a multi-item humanitarian distribution planning model, developed following a multi-phase design science methodology, whose objective is to minimize distribution and backordering costs subject to capacity constraints that reflect congestion effects through nonlinear clearing functions. Using the 2014 Gaza War as a case study, we illustrate the application of the proposed tool, model the underlying relief-aid humanitarian supply chain, estimate clearing functions at different security checkpoints, and conduct computational experiments. The decision-support tool generated a shipment plan that was compared to two benchmarks in terms of total distribution cost, average lead time and work in progress (WIP) at security checkpoints, and average inventory and backorders at distribution centers. The first benchmark is the shipment plan generated by the fixed-capacity model, and the second is the actual shipment plan implemented by the planners during the armed conflict. According to our findings, modeling and optimizing supply chain flows reduce total distribution costs, average truck wait times at security checkpoints, and average backorders when compared to the executed plan and the fixed-capacity model.
Finally, scenario analysis concludes that increasing capacity at security checkpoints can lower total operations costs by reducing the average lead time.
Keywords: humanitarian distribution planning, relief-aid distribution, congestion, clearing functions
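The nonlinear clearing functions mentioned above map checkpoint work in progress (WIP) to expected throughput. A common saturating form is c(W) = C·W/(k + W), which rises with WIP but levels off at the checkpoint capacity; the sketch below uses illustrative parameters, not the estimates from the case study:

```python
import numpy as np

# Sketch of a saturating (nonlinear) clearing function: expected
# checkpoint throughput as a function of work in progress (WIP).
#     c(W) = C * W / (k + W)
# Parameter values here are illustrative assumptions, not estimates
# from the Gaza case study.

def clearing(wip, capacity=120.0, k=15.0):
    """Expected trucks cleared per day at a checkpoint holding `wip`."""
    wip = np.asarray(wip, dtype=float)
    return capacity * wip / (k + wip)

throughput = clearing([5, 15, 60, 300])
```

The curve captures the congestion effect: doubling the queue far from saturation nearly doubles throughput, while near capacity extra WIP only adds waiting time, which is why the fixed-capacity benchmark overestimates achievable flow.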
Procedia PDF Downloads 82
160 Efficient Reuse of Exome Sequencing Data for Copy Number Variation Callings
Authors: Chen Wang, Jared Evans, Yan Asmann
Abstract:
With the quick evolution of next-generation sequencing techniques, whole-exome or exome-panel data have become a cost-effective way to detect small exonic mutations, but there has been a growing desire to accurately detect copy number variations (CNVs) as well. In order to address these research and clinical needs, we developed a sequencing coverage pattern-based method for copy number detection, data integrity checks, CNV calling, and visualization reporting. The developed methodologies include complete automation to increase usability, genome content-coverage bias correction, CNV segmentation, data quality reports, and publication-quality images. Poor-quality outlier samples were identified and removed automatically. Multiple experimental batches were routinely detected and reduced to a clean subset of samples before analysis. Algorithm improvements were also made to improve somatic CNV detection as well as germline CNV detection in trio families. Additionally, a set of utilities was included to facilitate producing CNV plots for focused genes of interest. We demonstrate the somatic CNV enhancements by accurately detecting CNVs in exome-wide data from The Cancer Genome Atlas cancer samples and a lymphoma case study with paired tumor and normal samples. We also show the efficient reuse of existing exome sequencing data for improved germline CNV calling in a trio family from phase III of the 1000 Genomes Project, detecting CNVs with various modes of inheritance. The performance of the developed method is evaluated by comparing CNV calling results with results from other orthogonal copy number platforms.
Through our case studies, the reuse of exome sequencing data for calling CNVs offers several notable benefits, including better quality control for exome sequencing data, improved joint analysis with single nucleotide variant calls, and novel genomic discovery from under-utilized existing whole-exome and custom exome-panel data.
Keywords: bioinformatics, computational genetics, copy number variations, data reuse, exome sequencing, next generation sequencing
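A coverage pattern-based CNV call of the kind described above can be illustrated in miniature: compare per-exon normalized coverage to a reference via log2 ratios and threshold for gains and losses. The coverage values and thresholds below are assumptions for illustration, not the authors' algorithm:

```python
import numpy as np

# Toy sketch of coverage-based CNV detection on exome data: compare a
# sample's per-exon normalized coverage to a reference baseline via
# log2 ratios, then flag exons beyond simple gain/loss thresholds.
# Coverage vectors and thresholds are illustrative assumptions.

def call_cnv(sample_cov, reference_cov, gain=0.4, loss=-0.6):
    """Return per-exon log2 ratios and gain/loss/neutral calls."""
    ratio = np.log2((sample_cov + 1.0) / (reference_cov + 1.0))
    calls = np.full(ratio.shape, "neutral", dtype=object)
    calls[ratio >= gain] = "gain"
    calls[ratio <= loss] = "loss"
    return ratio, calls

reference = np.array([100.0, 100.0, 100.0, 100.0])
sample = np.array([101.0, 52.0, 205.0, 98.0])  # exon 2 halved, exon 3 doubled
ratio, calls = call_cnv(sample, reference)
```

A production method would additionally correct coverage biases, segment adjacent exons, and model batch effects, which is what the described pipeline automates.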
Procedia PDF Downloads 257
159 Collaborative Data Refinement for Enhanced Ionic Conductivity Prediction in Garnet-Type Materials
Authors: Zakaria Kharbouch, Mustapha Bouchaara, F. Elkouihen, A. Habbal, A. Ratnani, A. Faik
Abstract:
Solid-state lithium-ion batteries have garnered increasing interest in modern energy research due to their potential for safer, more efficient, and sustainable energy storage systems. Among the critical components of these batteries, the electrolyte plays a pivotal role, with LLZO garnet-based electrolytes showing significant promise. Garnet materials offer intrinsic advantages such as high Li-ion conductivity, wide electrochemical stability, and excellent compatibility with lithium metal anodes. However, optimizing ionic conductivity in garnet structures poses a complex challenge, primarily due to the multitude of potential dopants that can be incorporated into the LLZO crystal lattice. The complexity of material design, influenced by numerous dopant options, requires a systematic method to find the most effective combinations. This study highlights the utility of machine learning (ML) techniques in the materials discovery process to navigate the complex range of factors in garnet-based electrolytes. Collaborators from the materials science and ML fields worked with a comprehensive dataset previously employed in a similar study and collected from various literature sources. This dataset served as the foundation for an extensive data refinement phase, where meticulous error identification, correction, outlier removal, and garnet-specific feature engineering were conducted. This rigorous process substantially improved the dataset's quality, ensuring it accurately captured the underlying physical and chemical principles governing garnet ionic conductivity. The data refinement effort resulted in a significant improvement in the predictive performance of the machine learning model. Originally starting at an accuracy of 0.32, the model underwent substantial refinement, ultimately achieving an accuracy of 0.88. 
This enhancement highlights the effectiveness of the interdisciplinary approach and underscores the substantial potential of machine learning techniques in materials science research.
Keywords: lithium batteries, all-solid-state batteries, machine learning, solid state electrolytes
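One representative refinement step, outlier removal, can be sketched with Tukey's IQR fences; the conductivity values and the 1.5 factor are illustrative assumptions, not the project's actual pipeline:

```python
import numpy as np

# Minimal sketch of one data-refinement step: dropping outlier
# conductivity records via Tukey's IQR fences before model training.
# The log-conductivity values below are made up.

def iqr_filter(values, factor=1.5):
    """Keep values inside [Q1 - factor*IQR, Q3 + factor*IQR]."""
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - factor * iqr, q3 + factor * iqr
    return values[(values >= lo) & (values <= hi)]

log_sigma = [-3.1, -3.4, -2.9, -3.2, -3.0, -9.8]  # last entry: gross outlier
clean = iqr_filter(log_sigma)
```

Removing such records (alongside error correction and feature engineering) is the kind of curation that drove the reported jump in model accuracy.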
Procedia PDF Downloads 61
158 Efficacy of Opicapone and Levodopa with Different Levodopa Daily Doses in Parkinson’s Disease Patients with Early Motor Fluctuations: Findings from the Korean ADOPTION Study
Authors: Jee-Young Lee, Joaquim J. Ferreira, Hyeo-il Ma, José-Francisco Rocha, Beomseok Jeon
Abstract:
The effective management of wearing-off is a key driver of medication changes for patients with Parkinson’s disease (PD) treated with levodopa (L-DOPA). While L-DOPA is well tolerated and efficacious, its clinical utility over time is often limited by the development of complications such as dyskinesia. Still, a common first-line option is adjusting the daily L-DOPA dose, followed by adjunctive therapies, usually accounted for via the L-DOPA equivalent daily dose (LEDD). The LEDD conversion formulae are a tool used to compare the equivalence of anti-PD medications. The aim of this work is to compare the effects of opicapone (OPC) 50 mg, a catechol-O-methyltransferase (COMT) inhibitor, and an additional 100 mg dose of L-DOPA in reducing off time in PD patients with early motor fluctuations receiving different daily L-DOPA doses. OPC has been found to be well tolerated and efficacious in the advanced PD population. This work utilized patients' home-diary data from a 4-week Phase 2 pharmacokinetic clinical study. The Korean ADOPTION study randomized (1:1) patients with PD and early motor fluctuations treated with up to 600 mg of L-DOPA given 3–4 times daily. The main endpoint was the change from baseline in off time in the subgroup of patients receiving 300–400 mg/day L-DOPA at baseline plus OPC 50 mg and in the subgroup receiving >300 mg/day L-DOPA at baseline plus an additional dose of L-DOPA 100 mg. Of the 86 patients included in this subgroup analysis, 39 received OPC 50 mg and 47 L-DOPA 100 mg. At baseline, both the L-DOPA total daily dose and the LEDD were lower in the L-DOPA 300–400 mg/day plus OPC 50 mg group than in the L-DOPA >300 mg/day plus L-DOPA 100 mg group. However, at Week 4, LEDD was similar between the two groups.
The mean (±standard error) reduction in off time was approximately three-fold greater for the OPC 50 mg group than for the L-DOPA 100 mg group, being -63.0 (14.6) minutes for patients treated with L-DOPA 300–400 mg/day plus OPC 50 mg, and -22.1 (9.3) minutes for those receiving L-DOPA >300 mg/day plus L-DOPA 100 mg. In conclusion, despite a similar LEDD, OPC demonstrated a significantly greater reduction in off time when compared to an additional 100 mg L-DOPA dose. The effect of OPC appears to be LEDD-independent, suggesting that caution should be exercised when employing LEDD to guide treatment decisions, as it does not take into account the timing of each dose, the onset and duration of the therapeutic effect, or individual responsiveness. Additionally, OPC could be used to keep the L-DOPA dose as low as possible for as long as possible to avoid the development of motor complications, which are a significant source of disability.
Keywords: opicapone, levodopa, pharmacokinetics, off-time
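The LEDD bookkeeping behind the comparison can be sketched as simple arithmetic. The conversion convention below (immediate-release L-DOPA at factor 1.0; COMT inhibition adding roughly 33% of the L-DOPA dose, following the commonly used Tomlinson et al. factors) is an illustrative assumption, not the formula used in this study:

```python
# Illustrative LEDD arithmetic. The conversion factors follow a widely
# used convention (Tomlinson et al., 2010) for immediate-release
# levodopa and COMT inhibition; they are assumptions for illustration,
# not taken from this study.

LDOPA_IR_FACTOR = 1.0
COMT_LDOPA_FACTOR = 0.33   # COMT inhibitor adds ~33% of the L-DOPA dose

def ledd_with_comt(ldopa_mg_per_day):
    """LEDD when a COMT inhibitor is added to a given L-DOPA dose."""
    return ldopa_mg_per_day * (LDOPA_IR_FACTOR + COMT_LDOPA_FACTOR)

def ledd_extra_ldopa(ldopa_mg_per_day, extra_mg=100.0):
    """LEDD when the daily L-DOPA dose is simply increased."""
    return (ldopa_mg_per_day + extra_mg) * LDOPA_IR_FACTOR

ledd_opc = ledd_with_comt(300.0)    # 300 mg/day + COMT inhibitor
ledd_ld = ledd_extra_ldopa(300.0)   # 300 mg/day + 100 mg L-DOPA
```

Under these factors the two regimens land at nearly identical LEDDs, which mirrors the similar Week 4 LEDDs reported above even though the clinical off-time benefit differed.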
Procedia PDF Downloads 62
157 Utilization of a Telepresence Evaluation Tool for the Implementation of a Distant Education Program
Authors: Theresa Bacon-Baguley, Martina Reinhold
Abstract:
Introduction: Evaluation and analysis are the cornerstones of any successful program in higher education. When developing a program at a distant campus, it is essential that the process of evaluation and analysis be orchestrated in a timely manner with tools that can identify both the positive and negative components of distant education. We describe the utilization of a newly developed tool to evaluate and analyze the successful expansion to a distant campus using Telepresence Technology. Like interactive television, Telepresence allows live interactive delivery but utilizes broadband cable. The tool developed is adaptable to any distant campus, as its framework was derived from a systematic review of the literature. Methodology: Because Telepresence is a relatively new delivery system, the evaluation tool was developed based on a systematic review of the literature in the areas of distant education and ITV. The literature review identified four potential areas of concern: 1) technology, 2) confidence in the system, 3) faculty delivery of the content, and 4) resources at each site. Each of the four areas included multiple sub-components. Benchmark values were set at 80% or greater positive responses for each of the four areas and the individual sub-components. The tool was administered each semester during the didactic phase of the curriculum. Results: Data obtained identified site-specific issues (i.e., technology access, student engagement, laboratory access, and resources), as well as issues common to both sites (i.e., projection screen size). More specifically, students at the parent location did not have adequate access to printers or laboratory space, and students at the distant campus did not have adequate access to library resources. The evaluation tool also identified that students at both sites requested larger screens for visualization of the faculty.
The deficiencies were addressed by replacing printers, providing additional orientation for students on library resources, and increasing the screen size of the Telepresence system. When analyzed over time, the issues identified in the tool as deficiencies were resolved. Conclusions: Utilizing the tool allowed adjustments of the Telepresence delivery system in a timely manner, resulting in the successful implementation of an entire curriculum at a distant campus.
Keywords: physician assistant, telepresence technology, distant education, assessment
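The 80% benchmark check described above reduces to a per-area positive-response rate. A minimal sketch with hypothetical area names and response counts:

```python
# Sketch of the 80% benchmark check applied to each evaluation area.
# The area names mirror the four areas of concern; the response counts
# are hypothetical.

BENCHMARK = 0.80

def below_benchmark(responses):
    """Return areas whose positive-response rate falls under 80%.
    `responses` maps area -> (positive, total)."""
    return [area for area, (pos, total) in responses.items()
            if pos / total < BENCHMARK]

survey = {
    "technology":        (42, 50),
    "system confidence": (45, 50),
    "faculty delivery":  (48, 50),
    "site resources":    (31, 50),   # e.g., printer/library complaints
}
deficits = below_benchmark(survey)
```

Areas flagged this way are exactly the ones that triggered corrective actions such as replacing printers and enlarging screens.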
Procedia PDF Downloads 124
156 Broadband Ultrasonic and Rheological Characterization of Liquids Using Longitudinal Waves
Authors: M. Abderrahmane Mograne, Didier Laux, Jean-Yves Ferrandis
Abstract:
Rheological characterization of complex liquids such as polymer solutions is of considerable scientific interest to researchers in many fields, including biology, the food industry, and chemistry. In order to establish master curves (elastic moduli vs. frequency), which can give information about microstructure, classical rheometers or viscometers (such as Couette systems) are used. For broadband characterization of the sample, the temperature is varied over a very large range, leading to equivalent frequency shifts through the Time-Temperature Superposition principle. For many liquids undergoing phase transitions, this approach is not applicable. That is why the development of broadband spectroscopic methods around room temperature has become a major concern. Many solutions have been proposed in the literature but, to our knowledge, there is no experimental bench giving the whole rheological characterization from a few Hz to hundreds of MHz. Consequently, our goal is to investigate rheological properties in a nondestructive way over a very broad frequency band (a few Hz to hundreds of MHz) using longitudinal ultrasonic waves (L waves), a single experimental bench, and a simple container for the liquid: a test tube. More specifically, we aim to estimate the three viscosities (longitudinal, shear, and bulk) and the complex elastic moduli M*, G*, and K* (longitudinal, shear, and bulk moduli, respectively). We use only L waves, conditioned in two ways: bulk L waves in the liquid or guided L waves in the test tube walls. In this paper, we present first results at very low frequencies using the ultrasonic tracking of a ball falling in the test tube. This leads to the estimation of shear viscosities from a few mPa·s to a few Pa·s. Corrections due to the small dimensions of the tube are applied and discussed with respect to the size of the falling ball.
The use of bulk L-wave propagation in the liquid, together with dedicated signal processing to assess longitudinal velocity and attenuation, then leads to the evaluation of the longitudinal viscosity in the MHz frequency range. Finally, first results concerning the generation, propagation, and processing of guided compressional waves in the test tube walls are discussed. All these approaches and results are compared to standard methods available and already validated in our lab.
Keywords: nondestructive measurement for liquid, piezoelectric transducer, ultrasonic longitudinal waves, viscosities
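The falling-ball measurement described in this abstract reduces, at low Reynolds number, to Stokes' law with a wall correction for the finite tube diameter. A minimal sketch, assuming illustrative ball and tube dimensions (not the authors' setup) and the first-order Ladenburg correction factor (1 + 2.104 d_ball/d_tube):

```python
def shear_viscosity(v_measured, r_ball, r_tube, rho_ball, rho_fluid, g=9.81):
    """Shear viscosity [Pa.s] from the terminal velocity of a falling ball.

    Stokes' law, corrected to first order for the finite tube diameter
    with the Ladenburg factor (1 + 2.104 * d_ball / d_tube).
    """
    # uncorrected Stokes estimate from the measured terminal velocity
    eta_stokes = 2.0 * g * r_ball**2 * (rho_ball - rho_fluid) / (9.0 * v_measured)
    # the wall slows the ball, so the naive estimate overshoots; divide it out
    wall_factor = 1.0 + 2.104 * (2.0 * r_ball) / (2.0 * r_tube)
    return eta_stokes / wall_factor

# hypothetical example: 1 mm steel ball falling at 0.10 m/s in a 16 mm tube
eta = shear_viscosity(v_measured=0.10, r_ball=1e-3, r_tube=8e-3,
                      rho_ball=7800.0, rho_fluid=1000.0)
```

For the dimensions above the correction is about 26%, which is why the abstract stresses that tube-size corrections must be discussed relative to the ball size.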
Procedia PDF Downloads 265
155 Application of Lattice Boltzmann Method to Different Boundary Conditions in a Two Dimensional Enclosure
Authors: Jean Yves Trepanier, Sami Ammar, Sagnik Banik
Abstract:
The Lattice Boltzmann Method is well suited to simulating complex boundary conditions and solving for fluid flow parameters through its streaming and collision processes. This paper studies three test cases in a confined domain using the Lattice Boltzmann model. 1. An SRT (Single Relaxation Time) Lattice Boltzmann model is used to simulate lid-driven cavity flow for different Reynolds numbers (100, 400, and 1000) with a domain aspect ratio of 1, i.e., a square cavity. A moment-based boundary condition is used for more accurate results. 2. A thermal lattice BGK (Bhatnagar-Gross-Krook) model is developed for Rayleigh-Benard convection for two test cases, horizontal and vertical temperature differences, considered separately for a Boussinesq incompressible fluid. The Rayleigh number is varied in both test cases (10^3 ≤ Ra ≤ 10^6) with the Prandtl number fixed at 0.71. A stability criterion with a precise forcing scheme is used for greater accuracy. 3. The phase change problem governed by the heat-conduction equation is studied using the enthalpy-based Lattice Boltzmann model with a single iteration per time step, thus reducing the computational time. A double distribution function approach with a D2Q9 (density) model and a D2Q5 (temperature) model is used for two test cases: conduction-dominated melting and convection-dominated melting. The solidification process is also simulated with the enthalpy-based method using a single distribution function with the D2Q5 model, to provide a better understanding of the heat transport phenomenon. The domain for the test cases has an aspect ratio of 2, with some exceptions for a square cavity. An appropriate velocity scale is chosen to ensure that the simulations remain within the incompressible regime. Parameters such as velocities, temperature, and Nusselt number are calculated for a comparative study against the existing literature.
The simulated results show excellent agreement with the existing benchmark solutions, within an error limit of ±0.05, which indicates the viability of this method for complex fluid flow problems.
Keywords: BGK, Nusselt, Prandtl, Rayleigh, SRT
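The streaming-and-collision structure of the SRT model described above can be sketched in a few lines. This is a generic D2Q9 BGK step on a periodic domain, shown only to illustrate the two-step update; it does not reproduce the authors' moment-based boundary condition or thermal coupling:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order truncated Maxwell-Boltzmann equilibrium for D2Q9."""
    cu = np.einsum('ai,xyi->xya', c, u)              # c_a . u at each node
    usq = np.einsum('xyi,xyi->xy', u, u)
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq[..., None])

def srt_step(f, tau):
    """One SRT (BGK) collision followed by periodic streaming."""
    rho = f.sum(axis=-1)                             # density moment
    u = np.einsum('xya,ai->xyi', f, c) / rho[..., None]  # velocity moment
    f = f - (f - equilibrium(rho, u)) / tau          # single-relaxation collision
    for a in range(9):                               # stream along each velocity
        f[..., a] = np.roll(np.roll(f[..., a], c[a, 0], axis=0), c[a, 1], axis=1)
    return f

# tiny periodic domain with a small density bump (illustrative parameters)
nx = ny = 16
rho0 = np.ones((nx, ny))
rho0[8, 8] += 0.01
f = equilibrium(rho0, np.zeros((nx, ny, 2)))
mass0 = f.sum()
for _ in range(50):
    f = srt_step(f, tau=0.8)
```

Because the BGK collision conserves the density and momentum moments and streaming only relocates populations, total mass stays constant to machine precision, a useful sanity check before adding the boundary treatments the paper studies.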
Procedia PDF Downloads 128
154 The Creation of Calcium Phosphate Coating on Nitinol Substrate
Authors: Kirill M. Dubovikov, Ekaterina S. Marchenko, Gulsharat A. Baigonakova
Abstract:
NiTi alloys are widely used as implants in medicine due to their unique properties, such as superelasticity, the shape memory effect, and biocompatibility. Despite these properties, one major problem is the release of nickel after prolonged use in the human body under dynamic stress. This occurs due to oxidation and cracking of NiTi implants, which promotes nickel segregation from the matrix to the surface and its release into living tissues. Nickel is a toxic element and can cause cancer, allergies, and other adverse effects. One of the most common ways to address this problem is to create a corrosion-resistant coating on NiTi. Many such coatings exist, but not all of them have the good biocompatibility that is essential for medical implants. Coatings based on calcium phosphate phases have excellent biocompatibility because Ca and P are the main constituents of the mineral part of human bone. This suggests that a Ca-P coating on NiTi can enhance osteogenesis and accelerate the healing process. The aim of this study is therefore to investigate the structure of Ca-P coatings on a NiTi substrate. Plasma-assisted radio frequency (RF) sputtering was used to deposit the films. This method was chosen because it allows the crystallinity and morphology of the Ca-P coating to be controlled via the sputtering parameters; three NiTi samples with Ca-P coatings were obtained under different sputtering conditions. XRD, AFM, SEM, and EDS were used to study the composition, structure, and morphology of the coatings. Scratch tests were carried out to evaluate the adhesion of the coating to the substrate, and wettability tests were used to investigate the hydrophilicity of the different coatings and to suggest which of them had the best biocompatibility. XRD showed that the coatings of all samples were hydroxyapatite, while the matrix consisted of TiNi intermetallic compounds such as B2, Ti2Ni, and Ni3Ti.
SEM showed that only one sample, sputtered for three hours, had a dense, defect-free coating. Wettability tests showed that this sample had the lowest contact angle, 40.2°, and the highest surface free energy, 57.17 mJ/m², which is predominantly dispersive. A scratch test investigating the adhesion of the coatings showed that all coatings were removed by a cohesive mechanism; however, at a load of 30 N the indenter reached the substrate in two of the three samples, the exception being the sample with the densest coating. It was concluded that the most promising sputtering mode was the third one, consisting of three hours of deposition, which produced a defect-free Ca-P coating with good wettability and adhesion.
Keywords: biocompatibility, calcium phosphate coating, NiTi alloy, radio frequency sputtering
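A surface free energy with a dispersive/polar split, as reported above, is typically derived from contact angles with two probe liquids via the Owens-Wendt (OWRK) model. A minimal sketch using commonly tabulated liquid constants; the diiodomethane angle below is a hypothetical input for illustration, not a value from the study:

```python
import math
import numpy as np

# probe liquids: (total, dispersive, polar) surface tension, mJ/m^2
# (commonly tabulated literature values)
LIQUIDS = {"water": (72.8, 21.8, 51.0), "diiodomethane": (50.8, 50.8, 0.0)}

def owrk_surface_energy(theta_water_deg, theta_dim_deg):
    """Owens-Wendt estimate of solid surface free energy.

    Solves the linear system in sqrt(gamma_s^d), sqrt(gamma_s^p) given by
    gamma_l (1 + cos theta) / 2 = sqrt(gd_s * gd_l) + sqrt(gp_s * gp_l)
    for two probe liquids. Returns (total, dispersive, polar) in mJ/m^2.
    """
    A, b = [], []
    for name, th in (("water", theta_water_deg),
                     ("diiodomethane", theta_dim_deg)):
        g, gd, gp = LIQUIDS[name]
        A.append([math.sqrt(gd), math.sqrt(gp)])
        b.append(g * (1.0 + math.cos(math.radians(th))) / 2.0)
    x, y = np.linalg.solve(np.array(A), np.array(b))
    return x * x + y * y, x * x, y * y

# hypothetical angles: 40.2 deg water (from the abstract), 35 deg DIM (assumed)
total, dispersive, polar = owrk_surface_energy(40.2, 35.0)
```

With these assumed inputs the dispersive component dominates, consistent in character (though not in numbers) with the "mostly dispersive" result quoted for the densest coating.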
Procedia PDF Downloads 72
153 Didactic Games for the Development of Reading and Writing: Proeduca Program
Authors: Andreia Osti
Abstract:
The context of the COVID-19 pandemic substantially changed the way children communicate and the way literacy teaching was carried out. According to the Brazilian Institute of Geography and Statistics, children who should have become literate were seriously impacted by the pandemic: the number of illiterate children increased from 1.4 million in 2019 to 2.4 million in 2021. In this context, this work presents partial results of an intervention project involving classroom monitoring of students in the literacy phase. Methodologically, pedagogical games were developed to work on specific reading and writing content: 1) games with direct regularities and 2) games with contextual regularities. The project involves the design and production of the games and their application by the classroom teacher. All of the work focuses on literacy and on improving students' understanding of grapheme-phoneme relationships, with the aim of raising reading and writing comprehension levels. The project, still under development, is carried out in two schools and supports 60 students. The teachers participate in the research, applying the games produced at the university and monitoring the children's learning process. The project is developed with financial support from FAPESP through the public education improvement program PROEDUCA. Initial results show that children are more engaged in playful activities, that the games provide better moments of interaction in the classroom, and that they result in effective learning, since they constitute a different way of approaching the content to be taught.
Notably, the pedagogical games produced are directly tied to the teaching and learning of curricular components, in this case reading and writing, which are basic components of elementary education; they constitute teaching methodologies, since specific, guided activities are planned within literacy methods. In this presentation, some of the materials developed will be shown, as well as the results of the assessments carried out with the students. The project is linked to Sustainable Development Goals (SDGs) 4 (Quality Education) and 10 (Reduced Inequalities). The research seeks to improve public education and to promote the articulation between theory and practice in the educational context, with a view to consolidating the tripod of teaching, research, and university extension and promoting a humanized education.
Keywords: didactic, teaching, games, learning, literacy
Procedia PDF Downloads 23
152 Compression and Air Storage Systems for Small Size CAES Plants: Design and Off-Design Analysis
Authors: Coriolano Salvini, Ambra Giovannelli
Abstract:
The use of renewable energy sources for electric power production reduces CO2 emissions and contributes to improving domestic energy security. On the other hand, the intermittency and unpredictability of their availability pose significant problems for meeting the load demand safely and cost-effectively over time. Significant benefits in terms of grid-system, end-use, and renewable applications can be achieved by introducing energy storage systems. Among the currently available solutions, CAES (Compressed Air Energy Storage) shows favorable features. Small and medium-size plants equipped with artificial air reservoirs can constitute an interesting option for efficient and cost-effective distributed energy storage. The present paper addresses the design and off-design analysis of the compression system of small-size CAES plants suited to absorbing electric power in the range of hundreds of kilowatts. The system of interest consists of an intercooled (and, where required, aftercooled) multi-stage reciprocating compressor and a man-made reservoir obtained by connecting large-diameter steel pipe sections. A specific methodology for preliminary system sizing and off-design modeling has been developed. Since during the charging phase the absorbed electric power has to vary over time according to the peculiar CAES requirements, and the pressure ratio increases continuously as the reservoir fills, the compressor has to operate at variable mass flow rate. To ensure an appropriately wide operating range, particular attention has been paid to the selection of the most suitable compressor capacity control device. Given the capacity regulation margin of the compressor and the actual level of charge of the reservoir, the proposed approach allows the instant-by-instant evaluation of the minimum and maximum electric power absorbable from the grid.
The developed tool gives useful information for appropriately sizing the compression system and managing it in the most effective way. Various cases characterized by different system requirements are analyzed, and the results are presented and discussed in detail.
Keywords: artificial air storage reservoir, compressed air energy storage (CAES), compressor design, compression system management
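As a rough orientation for the power levels involved, the specific work of an intercooled multistage compressor can be sketched with a textbook isentropic-stage estimate. This assumes perfect intercooling back to inlet temperature and equal stage pressure ratios; it is not the authors' sizing methodology, and all numbers are illustrative:

```python
def compression_power(m_dot, p_in, p_out, T_in,
                      n_stages=3, gamma=1.4, cp=1005.0, eta_s=0.80):
    """Shaft power [W] of an intercooled multistage air compressor.

    Assumes perfect intercooling back to T_in between stages, equal
    stage pressure ratios, and a common isentropic efficiency eta_s.
    """
    r = (p_out / p_in) ** (1.0 / n_stages)       # pressure ratio per stage
    k = (gamma - 1.0) / gamma
    w_stage = cp * T_in * (r**k - 1.0) / eta_s   # specific work per stage [J/kg]
    return m_dot * n_stages * w_stage

# illustrative charging point: 0.5 kg/s of air from 1 bar to 70 bar at 20 C
power = compression_power(m_dot=0.5, p_in=1e5, p_out=70e5, T_in=293.15)
```

For these assumed values the result is in the high-200 kW range, consistent with the "hundreds of kilowatts" scale the paper targets; as the reservoir pressure rises during charging, p_out and hence the absorbed power grow, which is exactly why variable mass flow operation is needed.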
Procedia PDF Downloads 229