Search results for: chattering function
694 An Experimental Determination of the Limiting Factors Governing the Operation of High-Hydrogen Blends in Domestic Appliances Designed to Burn Natural Gas
Authors: Haiqin Zhou, Robin Irons
Abstract:
The introduction of hydrogen into local networks may, in many cases, require the initial operation of those systems on natural gas/hydrogen blends, either because of a lack of sufficient hydrogen to allow a 100% conversion or because existing infrastructure imposes limitations on the percentage of hydrogen that can be burned before the end-use technologies are replaced. In many systems, the largest number of end-use technologies are small-scale but numerous appliances used for domestic and industrial heating and cooking. In such a scenario, it is important to understand exactly how much hydrogen can be introduced into these appliances before their performance becomes unacceptable and what imposes that limitation. This study seeks to explore a range of significantly higher hydrogen blends and a broad range of factors that might limit operability or environmental acceptability. We will present tests from a burner designed for space heating and optimized for natural gas as blends of increasing hydrogen content (starting from 25%) were burned, and explore the range of parameters that might govern the acceptability of operation. These include gaseous emissions (particularly NOx and unburned carbon), temperature, flame length, stability, and general operational acceptability. Results will show emissions, temperature, and flame length as a function of thermal load and percentage of hydrogen in the blend. The relevant application and regulation will ultimately determine the acceptability of these values, so it is important to understand the full operational envelope of the burners in question through the sort of extensive parametric testing we have carried out. The present dataset should represent a useful data source for designers interested in exploring appliance operability. In addition to this, we present data on two factors that may be absolutes in determining allowable hydrogen percentages. The first of these is flame blowback.
Our results show that, for our system, the threshold between acceptable and unacceptable performance lies between 60 and 65 mol% hydrogen. Another factor that may limit operation, and which would be important in domestic applications, is the acoustic performance of these burners. We will describe a range of operational conditions in which hydrogen blend burners produce a loud and invasive ‘screech’. It will be important for equipment designers and users to find ways to avoid or mitigate this if performance is to be deemed acceptable.
Keywords: blends, operational, domestic appliances, future system operation
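As a sanity check on what such blend percentages mean energetically, the hydrogen mole fraction can be converted into an energy fraction. The sketch below is our own illustration, not part of the study; the volumetric lower heating values are approximate textbook figures, with natural gas approximated as pure methane.

```python
# Illustrative only: share of delivered energy carried by hydrogen
# in a natural gas/hydrogen blend, as a function of mole fraction.
# Assumed volumetric lower heating values (approximate literature values):
LHV_H2 = 10.8   # MJ/Nm^3 for H2 (assumed)
LHV_CH4 = 35.8  # MJ/Nm^3 for CH4, standing in for natural gas (assumed)

def h2_energy_fraction(x_h2):
    """Fraction of blend energy contributed by hydrogen at mole fraction x_h2."""
    e_h2 = x_h2 * LHV_H2
    e_ch4 = (1.0 - x_h2) * LHV_CH4
    return e_h2 / (e_h2 + e_ch4)

# Around the 60-65 mol% blowback threshold reported above, hydrogen still
# supplies only roughly a third of the delivered energy:
for x in (0.25, 0.60, 0.65):
    print(f"{x:.0%} mol H2 -> {h2_energy_fraction(x):.1%} of energy")
```

This illustrates why high mole fractions are needed before hydrogen dominates the energy delivered by the appliance.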
Procedia PDF Downloads 316
693 Computational Simulations and Assessment of the Application of Non-Circular TAVI Devices
Authors: Jonathon Bailey, Neil Bressloff, Nick Curzen
Abstract:
Transcatheter Aortic Valve Implantation (TAVI) devices are stent-like frames with prosthetic leaflets on the inside, which are percutaneously implanted. The device in a crimped state is fed through the arteries to the aortic root, where the device frame is opened through either self-expansion or balloon expansion, revealing the prosthetic valve within. The frequency at which TAVI is used to treat aortic stenosis is rapidly increasing, and in time TAVI is likely to become the favoured treatment over Surgical Valve Replacement (SVR). Mortality after TAVI has been associated with severe Paravalvular Aortic Regurgitation (PAR). PAR occurs when the frame of the TAVI device does not make an effective seal against the internal surface of the aortic root, allowing blood to flow backwards around the valve. PAR is common and has been reported to some degree in as many as 76% of cases. Severe PAR (grade 3 or 4) has been reported in approximately 17% of TAVI patients, raising post-procedural mortality from 6.7% to 16.5%. TAVI devices, like SVR devices, are circular in cross-section, as the aortic root is often considered to be approximately circular in shape. In reality, however, the aortic root is often non-circular: the ascending aorta, aortic sinotubular junction, aortic annulus and left ventricular outflow tract have average ellipticity ratios of 1.07, 1.09, 1.29, and 1.49, respectively. An elliptical aortic root does not severely affect SVR, as the leaflets are completely removed during the surgical procedure. However, an elliptical aortic root can inhibit the ability of circular Balloon-Expandable (BE) TAVI devices to conform to the interior of the aortic root wall, which increases the risk of PAR.
Self-Expanding (SE) TAVI devices are considered better at conforming to elliptical aortic roots; however, the valve leaflets were not designed for elliptical function, and the incidence of PAR is in fact greater in SE devices than in BE devices (19.8% vs. 12.2%, respectively). If a patient’s aortic root is too severely elliptical, they will not be suitable for TAVI, narrowing the treatment options to SVR. It therefore follows that, in order to increase the population who can undergo TAVI and reduce the risk associated with it, non-circular devices should be developed. Computational simulations were employed to further advance our understanding of non-circular TAVI devices. Radial stiffness of the TAVI devices in multiple directions, frame bending stiffness, and resistance to balloon-induced expansion were all computationally simulated. Finally, a simulation has been developed that demonstrates the expansion of TAVI devices into a non-circular, patient-specific aortic root model in order to assess the alterations in deployment dynamics, PAR, and the stresses induced in the aortic root.
Keywords: TAVI, TAVR, FEA, PAR, FEM
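For reference, the ellipticity ratios quoted above are simple diameter ratios of an approximately elliptical cross-section. The helper below is our own illustration; the function name and example dimensions are hypothetical, not patient data.

```python
# Hypothetical helper: ellipticity ratio of an approximately elliptical
# aortic cross-section, as used in the abstract (1.0 = perfectly circular).

def ellipticity_ratio(d_max, d_min):
    """Ratio of the longest to the shortest diameter."""
    return d_max / d_min

# With the mean annulus ratio of 1.29 quoted above, a hypothetical 29 mm
# major-axis annulus would have a minor axis of about 22.5 mm:
minor = 29.0 / 1.29
```

A circular device sized to the major axis would then leave gaps along the minor axis, the geometric origin of the PAR risk discussed above.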
Procedia PDF Downloads 440
692 IL6/PI3K/mTOR/GFAP Molecular Pathway Role in COVID-19-Induced Neurodegenerative Autophagy, Impacts and Relatives
Authors: Mohammadjavad Sotoudeheian
Abstract:
COVID-19, which began in December 2019, uses the angiotensin-converting enzyme 2 (ACE2) receptor to enter and spread through the cells. ACE2 mRNA is present in almost every organ, including the nasopharynx and lung, as well as the brain. Ports of entry of SARS-CoV-2 into the central nervous system (CNS) may include the arterial circulation while viremia is present. Although it remains imperative to evaluate neurological symptoms and CSF findings in patients with COVID-19, theoretically, ACE2 receptors are expressed in cerebellar cells and may be a target for SARS-CoV-2 infection in the brain. Recent evidence agrees that SARS-CoV-2 can impact the brain through direct and indirect injury. Two biomarkers of CNS injury, glial fibrillary acidic protein (GFAP) and neurofilament light chain (NFL), have been detected in the plasma of patients with COVID-19. NFL, an axonal protein expressed in neurons, is related to axonal neurodegeneration, and GFAP is over-expressed in CNS inflammation. Cytoplasmic accumulation of GFAP causes Schwann cells to malfunction, which affects myelin generation, reduces neuroskeletal support of NFLs during CNS inflammation, and leads to axonal degeneration. Interleukin-6 (IL-6), which is extensively over-expressed during the cytokine storm of COVID-19 inflammation, regulates gene expression, including that of GFAP, through the STAT molecular pathway. IL-6 also acts on the phosphoinositide 3-kinase (PI3K)/STAT/Smads pathway. The PI3K/protein kinase B (Akt) pathway is the main modulator upstream of the mammalian target of rapamycin (mTOR), and alterations in this pathway are common in neurodegenerative diseases. Most neurodegenerative diseases show a disruption of autophagic function and display an abnormal increase in protein aggregation that promotes cellular death. Therefore, induction of autophagy has been recommended as a rational approach to help neurons clear abnormal protein aggregates and survive.
mTOR is a major regulator of the autophagic process and is itself regulated by cellular stressors. The mTORC1 pathway, together with mTORC2 as a complementary and important element in mTORC1 signaling, has become relevant in the regulation of the autophagic process and of cellular survival through the extracellular signal-regulated kinase (ERK) pathway.
Keywords: mTORC1, COVID-19, PI3K, autophagy, neurodegeneration
Procedia PDF Downloads 86
691 Performance of HVOF Sprayed Ni-20Cr and Cr₃C₂-NiCr Coatings on Fe-Based Superalloy in an Actual Industrial Environment of a Coal Fired Boiler
Authors: Tejinder Singh Sidhu
Abstract:
Hot corrosion has been recognized as a severe problem in steam-powered electricity generation plants and industrial waste incinerators, as it consumes the material at an unpredictably rapid rate. Consequently, the load-carrying ability of the components reduces quickly, eventually leading to catastrophic failure. The inability to either totally prevent hot corrosion or at least detect it at an early stage has resulted in several accidents, leading to loss of life and/or destruction of infrastructure. A number of countermeasures are currently in use or under investigation to combat hot corrosion, such as using inhibitors, controlling the process parameters, designing suitable industrial alloys, and depositing protective coatings. However, the protection system selected for a particular application must be practical, reliable, and economically viable. Due to the continuously rising cost of materials as well as increased material requirements, coating techniques have been given much more importance in recent times. Coatings can add value to products of up to 10 times the cost of the coating. Among the different coating techniques, thermal spraying has grown into a well-accepted industrial technology for applying overlay coatings onto the surfaces of engineering components to allow them to function under extreme conditions of wear, erosion-corrosion, high-temperature oxidation, and hot corrosion. In this study, the hot corrosion performance of Ni-20Cr and Cr₃C₂-NiCr coatings developed by the High Velocity Oxy-Fuel (HVOF) process has been studied. The coatings were deposited on a Fe-based superalloy, and experiments were performed in the actual industrial environment of a coal-fired boiler. The cyclic study was carried out around the platen superheater zone, where the temperature was around 1000°C. The study was conducted for 10 cycles, each cycle consisting of 100 hours of heating followed by 1 hour of cooling at ambient temperature.
Both coatings deposited on the Fe-based superalloy imparted better hot corrosion resistance than the uncoated alloy. The Ni-20Cr coated superalloy performed better than the Cr₃C₂-NiCr coated one in the actual working conditions of the coal-fired boiler. It is found that the formation of chromium oxide at the boundaries of the Ni-rich splats of the coating blocks the inward permeation of oxygen and other corrosive species to the substrate.
Keywords: hot corrosion, coating, HVOF, oxidation
Procedia PDF Downloads 85
690 Socio-Spatial Transformations in Obsolete Port Regions: A Case for Istanbul-Karaköy District
Authors: Umut Tuğlu Karslı
Abstract:
While the port function had a major role during antiquity and medieval times, it started to lose its significance in the 19th century. In many port cities, as heavy industrial functions and ports have been moved out of the former port districts, the resulting derelict spaces have been transformed into new waterfront quarters to accommodate commercial, tourism, cultural, residential and public uses. The primary aim of these operations is to revitalize abandoned spaces of historical potential and re-establish a relationship between the city and the coast. Karakoy Port, the field of this study, located on the Bosphorus, was surrounded by the city centre over time due to the transformation of urban functions, and as a result it lost its former significance. While Karakoy had lively, round-the-clock residential and commercial uses in earlier times, by the early 1980s it had become a district of mechanical, plumbing and electronic parts suppliers during the day and a place for the homeless at night. Today, activities for the revitalization of this region continue in two forms and scales. The first of these activities is the "planned transformation projects," the most important of which is the “Galataport” project; the second is "spontaneous transformation," which consists of individual interventions. The Galataport project, based on the idea of arranging the area specifically for tourists, was prepared in 2005 and became a topic of tremendous public debate. On the other hand, the "spontaneous transformation" observed in the Karakoy District started in 2004 with the foundation of the “Istanbul Modern Museum”. Istanbul Modern, the first contemporary arts museum of the city, allowed the cultural integration of the old naval warehouses of the port into daily life. Following this adaptive reuse intervention, the district started to accommodate numerous art galleries, studios, café-workshops and design stores.
In this context, this paper briefly examines revitalization studies in obsolete port regions, analyzes the planned and ongoing socio-spatial transformations in the specific case of Karakoy under the headings of "planned transformation projects" and "spontaneous transformation", and offers a critical review of the sustainability of the proposals on how to reinstate the district in the active life of Istanbul.
Keywords: port cities, socio-spatial transformation, urban regeneration, urban revitalization
Procedia PDF Downloads 459
689 Development of an Integrated Reaction Design for the Enzymatic Production of Lactulose
Authors: Natan C. G. Silva, Carlos A. C. Girao Neto, Marcele M. S. Vasconcelos, Luciana R. B. Goncalves, Maria Valderez P. Rocha
Abstract:
Galactooligosaccharides (GOS) are sugars with a prebiotic function that can be synthesized chemically or enzymatically, the latter promoted by the action of β-galactosidases. In addition to favoring the transgalactosylation reaction that forms GOS, these enzymes can also catalyze the hydrolysis of lactose. A highly studied type of GOS is lactulose, because it presents therapeutic properties and is a health promoter. Among the different raw materials that can be used to produce lactulose, whey stands out as the main by-product of cheese manufacturing, and its disposal is harmful to the environment due to the residual lactose present. Therefore, its use is a promising alternative to solve this environmental problem. Thus, lactose from whey is hydrolyzed into glucose and galactose by β-galactosidases. However, in order to favor the transgalactosylation reaction, the medium must contain fructose, since this sugar reacts with galactose to produce lactulose. The glucose-isomerase enzyme can be used for this purpose, since it promotes the isomerization of glucose into fructose. In this scenario, the aim of the present work was first to develop β-galactosidase biocatalysts from Kluyveromyces lactis and then to apply them in the integrated hydrolysis, isomerization (with the glucose-isomerase from Streptomyces murinus) and transgalactosylation reactions, using whey as a substrate. The immobilization of β-galactosidase on chitosan previously functionalized with 0.8% glutaraldehyde was evaluated using different enzymatic loads (2, 5, 7, 10, and 12 mg/g). Subsequently, the hydrolysis and transgalactosylation reactions were studied and conducted at 50°C and 120 RPM for 20 minutes. In parallel, the isomerization of glucose into fructose was evaluated at 70°C and 750 RPM for 90 min. Finally, the integration of the three processes for the production of lactulose was investigated.
Among the evaluated loads, 7 mg/g was chosen because it gave the best derivative activity (44.3 U/g), a parameter that is determinant for the reaction stages. The other immobilization parameters, yield (87.58%) and recovered activity (46.47%), were also satisfactory compared to the other conditions. Regarding the integrated process, 94.96% of the lactose was converted, yielding 37.56 g/L of glucose and 37.97 g/L of galactose. In the isomerization step, a conversion of 38.40% of the glucose was observed, giving a fructose concentration of 12.47 g/L. The stand-alone transgalactosylation reaction produced 13.15 g/L of lactulose after 5 min. In the integrated process, however, no lactulose was formed; other GOS were produced instead. The high galactose concentration in the medium probably favored the synthesis of these other GOS. Therefore, the integrated process proved feasible for the production of prebiotics in general. In addition, this process can be economically viable due to the use of an industrial residue as a substrate, but a more detailed investigation of the transgalactosylation reaction is necessary.
Keywords: beta-galactosidase, glucose-isomerase, galactooligosaccharides, lactulose, whey
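The yield and recovered-activity figures above follow standard enzyme-immobilization bookkeeping. The sketch below shows one common set of definitions (our assumption; the authors' exact protocol may differ), with hypothetical activity values chosen to reproduce the quoted percentages.

```python
# Illustrative definitions (common immobilization practice, assumed here):
# yield = fraction of offered activity removed from solution;
# recovered activity = fraction of the immobilized activity actually
# expressed by the derivative (support + enzyme).

def immobilization_yield(offered_U, supernatant_U):
    """% of offered enzyme activity that left the solution."""
    return 100.0 * (offered_U - supernatant_U) / offered_U

def recovered_activity(derivative_U, offered_U, yield_pct):
    """% of the immobilized activity expressed by the derivative."""
    immobilized_U = offered_U * yield_pct / 100.0
    return 100.0 * derivative_U / immobilized_U

# Hypothetical activity units reproducing the figures quoted above:
y = immobilization_yield(offered_U=100.0, supernatant_U=12.42)   # ~87.6%
r = recovered_activity(derivative_U=40.70, offered_U=100.0, yield_pct=y)
```

The gap between yield and recovered activity (87.58% vs. 46.47%) is typical: covalent attachment captures most of the enzyme but part of it loses activity on the support.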
Procedia PDF Downloads 142
688 Segmented Pupil Phasing with Deep Learning
Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan
Abstract:
Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows one to fit a large telescope within a reduced volume of space (JWST) or into an even smaller volume (standard CubeSat). CubeSats have tight constraints on the available computational budget and the allowed payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet, one of the challenges for wavefront sensing is the non-linearity between the image intensity and the phase aberrations. Moreover, for Earth observation, the object is unknown and not repeatable. Recently, several studies have suggested Neural Networks (NNs) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear and image-friendly problem solvers.
Aims: We study in this paper the prospect of using NNs to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without dedicated wavefront sensing.
Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. In order to reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope's focal-plane detector, used for imaging, will also serve as a wavefront sensor. In this work, we study a point source, i.e.
the Point Spread Function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification.
Results: This approach shows promising results (about 2 nm RMS of residual wavefront error, i.e. below lambda/50, for 40-100 nm RMS of input WFE) with a computation time of less than 30 ms, which translates into a small computational burden. These results motivate further study of higher aberrations and noise.
Keywords: wavefront sensing, deep learning, deployable telescope, space telescope
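The focal-plane sensing idea above rests on a simple forward model: the PSF is the squared magnitude of the Fourier transform of the complex pupil function. A toy NumPy version (our construction, not the authors' pipeline) with a single piston-aberrated half-pupil "segment" looks like this; a VGG-style regressor of the kind described would take such PSF images as input and predict the segment pistons.

```python
# Toy forward model: PSF of a piston-aberrated two-segment circular pupil.
import numpy as np

N = 128
y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]
pupil = (x**2 + y**2) <= 1.0           # circular aperture mask
segment = x > 0                         # right half-pupil "segment"

def psf(piston_rad):
    """PSF for a piston (in radians) applied to one segment of the pupil."""
    phase = np.where(segment, piston_rad, 0.0)
    field = pupil * np.exp(1j * phase)
    # Far-field intensity: squared magnitude of the 2-D Fourier transform.
    return np.abs(np.fft.fftshift(np.fft.fft2(field)))**2

# A Strehl-like check: a segment piston lowers the PSF peak, which is the
# non-linear intensity/phase coupling that the network has to invert.
peak0 = psf(0.0).max()
peak1 = psf(1.0).max()
```

Training data for the regressor can be generated by sampling random pistons, rendering the PSFs with this model, and labelling each image with its piston vector.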
Procedia PDF Downloads 106
687 Concept of Using an Indicator to Describe the Quality of Fit of Clothing to the Body Using a 3D Scanner and CAD System
Authors: Monika Balach, Iwona Frydrych, Agnieszka Cichocka
Abstract:
The objective of this research is to develop an algorithm, taking into account material type and body type, that will describe the fabric properties and the quality of fit of a garment to the body. One of the objectives is to develop a new algorithm to simulate cloth draping within CAD/CAM software, as existing virtual fitting does not accurately simulate fabric draping behaviour. Part of the research into virtual fitting will focus on the mechanical properties of fabrics. Material behaviour depends on many factors, including fibre, yarn, manufacturing process, fabric weight, textile finish, etc. For this study, several fabric types with very different mechanical properties will be selected and evaluated for all of the above characteristics. These include a thick woven cotton fabric, which is stiff and resists bending, and a woven fabric with elastic content, which stretches and drapes on the body. Within the virtual simulation, the following mechanical properties can be specified: shear, bending, weight, thickness, and friction. To help calculate these properties, the KES (Kawabata Evaluation System) can be used; this system was originally developed to measure the mechanical properties of fabric. In this research, the author will focus on three properties: bending, shear, and roughness. This study will consider current research using the KES system to understand and simulate fabric folding on the virtual body. Testing will help to determine which material properties have the largest impact on the fit of the garment. By developing an algorithm which factors in body type, material type, and clothing function, it will be possible to determine how a specific type of clothing made from a particular material will fit a specific body shape and size. A fit indicator will display areas of stress on the garment, such as the shoulders, chest, waist, and hips. From these data, CAD/CAM software can be used to develop garments that fit with a very high degree of accuracy.
This research, therefore, aims to provide an innovative solution for garment fitting which will aid in the manufacture of clothing. It will help the clothing industry by cutting the cost of the clothing manufacturing process and by reducing the cost of fitting. The manufacturing process can be made more efficient through virtual fitting of the garment before the real clothing sample is made. Fitting software could be integrated into clothing retailer websites, allowing customers to enter their biometric data and determine how a particular garment and material type would fit their body.
Keywords: 3D scanning, fabric mechanical properties, quality of fit, virtual fitting
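In its simplest form, the fit indicator described above could be a weighted combination of normalized KES properties. The sketch below is purely illustrative: the weights, the 0-1 normalization, and the function name are our assumptions, not a published model.

```python
# Purely illustrative fit-indicator sketch. Inputs are KES-derived
# properties normalized to [0, 1], where higher means a more restrictive
# fabric; the weights are hypothetical placeholders.

def fit_indicator(bending, shear, roughness, weights=(0.5, 0.3, 0.2)):
    """Combine normalized fabric properties into a stress-risk score in [0, 1]."""
    wb, ws, wr = weights
    return wb * bending + ws * shear + wr * roughness

# A stiff cotton should score higher (worse expected fit at curved body
# zones such as shoulders, chest, waist and hips) than an elastic weave:
stiff = fit_indicator(0.9, 0.7, 0.4)
elastic = fit_indicator(0.2, 0.3, 0.3)
```

In the proposed system, the weights themselves would be calibrated against 3D-scan measurements of real garments rather than fixed by hand.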
Procedia PDF Downloads 179
686 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging
Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati
Abstract:
Diffuse Optical Tomography (DOT) is an emergent medical imaging technique which employs NIR light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to the prevalent scattering of light in the tissue. In this contribution, we present our research in adopting hybrid knowledge-driven/data-driven approaches which exploit the existence of well-assessed physical models and build neural networks upon them, integrating the availability of data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution.
2. Materials and Methods
The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form

q* = argmin_q D(y, ỹ), (1)

where D is a loss function which typically contains a discrepancy measure (or data fidelity) term plus other possible ad-hoc designed terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q, given the set of observations y. Moreover, ỹ is the computable approximation of y, which may be obtained from a neural network but also in a classic way via the resolution of a PDE with given input coefficients (forward problem, Fig. 1). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder type networks.
This procedure forms priors on the solution (Fig. 1); ii) we use regularization procedures of the type q̂* = argmin_q D(y, ỹ) + R(q), where R(q) is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints in the overall optimization procedure (Fig. 1).
3. Discussion and Conclusion
DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and allows us to obtain improved results, especially with a restricted dataset and in the presence of variable sources of noise.
Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization
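A minimal numerical analogue of the regularized formulation q̂* = argmin_q D(y, ỹ) + R(q) can be written with a linear forward model, a least-squares data-fidelity term, and classical Tikhonov regularization R(q) = λ||q||² standing in for the learned priors described above. This is entirely our sketch, not the authors' implementation.

```python
# Tikhonov-regularized reconstruction sketch for an ill-posed linear
# inverse problem (more unknowns than measurements, as in DOT).
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 50))            # toy forward operator (ill-posed)
q_true = np.zeros(50)
q_true[[5, 17, 33]] = 1.0                # sparse "optical coefficient" map
y = A @ q_true + 0.01 * rng.normal(size=20)   # noisy observations

def reconstruct(A, y, lam):
    """Closed-form minimizer of ||A q - y||^2 + lam * ||q||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

q_reg = reconstruct(A, y, lam=1e-1)
```

In the hybrid scheme described above, the fixed penalty λ||q||² would be replaced by a learned functional, and q would be constrained to the range of an auto-decoder instead of all of R⁵⁰.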
Procedia PDF Downloads 75
685 Integrating Wearable-Textiles Sensors and IoT for Continuous Electromyography Monitoring
Authors: Bulcha Belay Etana, Benny Malengier, Debelo Oljira, Janarthanan Krishnamoorthy, Lieva Vanlangenhove
Abstract:
Electromyography (EMG) is a technique used to measure the electrical activity of muscles. EMG can be used to assess muscle function in a variety of settings, including clinical, research, and sports medicine. The aim of this study was to develop a wearable textile sensor for EMG monitoring. The sensor was designed to be soft, stretchable, and washable, making it suitable for long-term use. The sensor was fabricated using a conductive thread material that was embroidered onto a fabric substrate. The sensor was then connected to a microcontroller unit (MCU) and a Wi-Fi-enabled module. The MCU was programmed to acquire the EMG signal and transmit it wirelessly to the Wi-Fi-enabled module. The Wi-Fi-enabled module then sent the signal to a server, where it could be accessed by a computer or smartphone. The sensor was able to successfully acquire and transmit EMG signals from a variety of muscles. The signal quality was comparable to that of commercial EMG sensors. The development of this sensor has the potential to improve the way EMG is used in a variety of settings. The sensor is soft, stretchable, and washable, making it suitable for long-term use. This makes it ideal for use in clinical settings, where patients may need to wear the sensor for extended periods of time. The sensor is also small and lightweight, making it ideal for use in sports medicine and research settings. The data for this study was collected from a group of healthy volunteers. The volunteers were asked to perform a series of muscle contractions while the EMG signal was recorded. The data was then analyzed to assess the performance of the sensor. The EMG signals were analyzed using a variety of methods, including time-domain analysis and frequency-domain analysis. The time-domain analysis was used to extract features such as the root mean square (RMS) and average rectified value (ARV). The frequency-domain analysis was used to extract features such as the power spectrum. 
The question addressed by this study was whether a wearable textile sensor could be developed that is soft, stretchable, and washable, and that can successfully acquire and transmit EMG signals. The results demonstrate that such a sensor can be developed and that it meets all of these requirements. This sensor has the potential to improve the way EMG is used in a variety of settings.
Keywords: EMG, electrode position, smart wearable, textile sensor, IoT, IoT-integrated textile sensor
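The time- and frequency-domain features named above (RMS, ARV, power spectrum) are standard EMG measures. A sketch of their computation on a synthetic signal might look like the following; the study's own processing details are not given in the abstract, so the windowing and spectrum convention here are our assumptions.

```python
# Standard EMG feature extraction on a synthetic stand-in signal.
import numpy as np

def emg_features(x, fs):
    """Root mean square, average rectified value, and power spectrum."""
    rms = np.sqrt(np.mean(x**2))          # time-domain: signal energy
    arv = np.mean(np.abs(x))              # time-domain: rectified mean
    freqs = np.fft.rfftfreq(len(x), d=1.0/fs)
    power = np.abs(np.fft.rfft(x))**2 / len(x)   # simple periodogram
    return rms, arv, freqs, power

fs = 1000.0                               # typical surface-EMG sampling rate
t = np.arange(0, 1, 1/fs)
x = np.sin(2*np.pi*80*t)                  # stand-in for a band-limited burst
rms, arv, freqs, power = emg_features(x, fs)
```

On real recordings these features would be computed over sliding windows of the wirelessly streamed signal, e.g. to track fatigue via shifts in the spectrum.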
Procedia PDF Downloads 75
684 Learning Gains and Constraints Resulting from Haptic Sensory Feedback among Preschoolers' Engagement during Science Experimentation
Authors: Marios Papaevripidou, Yvoni Pavlou, Zacharias Zacharia
Abstract:
Embodied cognition and additional (touch) sensory channel theories indicate that physical manipulation is crucial to learning since it provides, among others, touch sensory input, which is needed for constructing knowledge. Given these theories, the use of Physical Manipulatives (PM) becomes a prerequisite for learning. On the other hand, empirical research on learning with Virtual Manipulatives (VM) (e.g., simulations) has provided evidence showing that the use of PM, and thus haptic sensory input, is not always a prerequisite for learning. In order to investigate which means of experimentation, PM or VM, is required for enhancing student science learning at the kindergarten level, an empirical study was conducted that sought to investigate the impact of haptic feedback on the conceptual understanding of pre-school students (n=44, mean age=5.7) in three science domains: beam balance (D1), sinking/floating (D2), and springs (D3). The participants were equally divided into two groups according to the type of manipulatives used (PM: presence of haptic feedback; VM: absence of haptic feedback) during a semi-structured interview for each of the domains. All interviews followed the Predict-Observe-Explain (POE) strategy and consisted of three phases: initial evaluation, experimentation, and final evaluation. The data collected through the interviews were analyzed qualitatively (open coding for identifying students’ ideas in each domain) and quantitatively (non-parametric tests). Findings revealed that haptic feedback enabled students to distinguish heavier from lighter objects when holding them during experimentation. In D1, haptic feedback did not differentiate PM and VM students’ conceptual understanding of the function of the beam as a means to compare the mass of objects. In D2, haptic feedback appeared to have a negative impact on PM students’ learning.
Feeling the weight of an object strengthened PM students’ misconception that heavier objects always sink, whereas the scientifically correct idea that the material of an object determines its sinking/floating behavior in water was significantly more prevalent among the VM students than among the PM ones. In D3, the PM students significantly outperformed the VM students with regard to the idea that the heavier an object is, the more the spring will expand, indicating that the haptic input experienced by the PM students served as an advantage to their learning. These findings point to the fact that PMs, and thus touch sensory input, might not always be a requirement for science learning and that VMs could be considered, under certain circumstances, a viable means for experimentation.
Keywords: haptic feedback, physical and virtual manipulatives, pre-school science learning, science experimentation
Procedia PDF Downloads 139
683 Algorithm Development of Individual Lumped Parameter Modelling for Blood Circulatory System: An Optimization Study
Authors: Bao Li, Aike Qiao, Gaoyang Li, Youjun Liu
Abstract:
Background: The lumped parameter model (LPM) is a common numerical model for hemodynamic calculation. An LPM uses circuit elements to simulate the human blood circulatory system, and physiological indicators and characteristics can be acquired through the model. However, because physiological indicators differ between individuals, the parameters in an LPM should be personalized in order to obtain convincing calculated results that reflect the individual's physiological state. This study aimed to develop an automatic and effective optimization method to personalize the parameters in an LPM of the blood circulatory system, which is of great significance for the numerical simulation of individual hemodynamics. Methods: A closed-loop LPM of the human blood circulatory system that is applicable to most individuals was established based on anatomical structures and physiological parameters. The patient-specific physiological data of 5 volunteers were non-invasively collected as the personalization objectives of the individual LPMs. In this study, the blood pressure and flow rate of the heart, brain, and limbs were the main concerns. The collected systolic blood pressure, diastolic blood pressure, cardiac output, and heart rate were set as objective data, and the waveforms of carotid artery flow and ankle pressure were set as objective waveforms. Aiming at the collected data and waveforms, a sensitivity analysis of each parameter in the LPM was conducted to determine the sensitive parameters that have an obvious influence on the objectives. Simulated annealing was adopted to iteratively optimize the sensitive parameters; the objective function during optimization was the root mean square error between the collected and simulated waveforms and data. Each parameter in the LPM was optimized over 500 iterations. Results: In this study, the sensitive parameters in the LPM were optimized according to the collected data of the 5 individuals. Results show a slight error between collected and simulated data.
The average relative root mean square errors over all optimization objectives for the 5 samples were 2.21%, 3.59%, 4.75%, 4.24%, and 3.56%, respectively. Conclusions: The slight errors demonstrate the effectiveness of the optimization. The individual modeling algorithm developed in this study can effectively achieve the individualization of an LPM of the blood circulatory system. After optimization, an LPM with individual parameters can output individual physiological indicators, which are applicable to the numerical simulation of patient-specific hemodynamics. Keywords: blood circulatory system, individual physiological indicators, lumped parameter model, optimization algorithm
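The optimization loop described above, simulated annealing minimizing the RMSE between simulated and collected indicators, can be sketched as follows. This is a minimal illustration, not the authors' code: `toy_lpm` is an invented stand-in for the real lumped parameter model, and the target values, perturbation size, and cooling schedule are assumptions.

```python
import math
import random

def rmse(simulated, observed):
    # Root mean square error between simulated and collected values.
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed)) / len(observed))

def toy_lpm(params):
    # Invented stand-in for the lumped parameter model: maps two
    # resistance/compliance-like parameters to pseudo outputs standing in
    # for systolic BP, diastolic BP, and cardiac output.
    r, c = params
    return [80.0 + 0.5 * r, 50.0 + 10.0 * c, 6.0 - 0.01 * r + 0.5 * c]

def simulated_annealing(objective, x0, steps=5000, t0=1.0, seed=42):
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9               # linear cooling schedule
        cand = [xi + rng.gauss(0.0, 0.5) for xi in x]   # perturb the sensitive parameters
        fc = objective(cand)
        # Always accept improvements; accept uphill moves with a
        # temperature-dependent probability to escape local minima.
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = list(x), fx
    return best, fbest

target = [120.0, 75.0, 5.5]   # assumed collected systolic BP, diastolic BP, cardiac output
initial = [40.0, 1.0]         # assumed population-average starting parameters
params, err = simulated_annealing(lambda p: rmse(toy_lpm(p), target), initial)
```

In the study the objective combines both scalar data and full waveforms; the same loop applies with a longer vector of observations.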
Procedia PDF Downloads 138
682 Window Seat: Examining Public Space, Politics, and Social Identity through Urban Public Transportation
Authors: Sabrina Howard
Abstract:
'Window Seat' uses public transportation as an entry point for understanding the relationship between public space, politics, and social identity construction. This project argues that by bringing people of different races, classes, and genders in 'contact' with one another, public transit operates as a site of exposure, as people consciously and unconsciously perform social identity within these spaces. These performances offer a form of freedom that we associate with being in urban spaces while simultaneously rendering certain racialized, gendered, and classed bodies vulnerable to violence. Furthermore, due to its exposing function, public transit operates as a site through which we, as urbanites and scholars, can read social injustice and reflect on the work that is necessary to become a truly democratic society. The major questions guiding this research are: How does using public transit as the entry point provide unique insights into the relationship between social identity, politics, and public space? What ideas do Americans hold about public space, and how might these ideas reflect a liberal yearning for a more democratic society? To address these research questions, 'Window Seat' critically examines ethnographic data collected on public buses and trains in Los Angeles, California, and online news media. It analyzes these sources through literature in socio-cultural psychology, sociology, and political science. It investigates the 'everyday urban hero' narrative, or popular news stories that feature an individual or group of people acting against discriminatory or 'Anti-American' behavior on public buses and trains. 'Window Seat' studies these narratives to assert that by circulating stories of civility in news media, United Statesians construct and maintain ideas of the 'liberal city,' which is characterized by ideals of freedom and democracy.
Furthermore, for those involved, these moments create an opportunity to perform the role of the Good Samaritan, an identity that is wrapped up in liberal beliefs in diversity and inclusion. This research expands conversations in urban studies by making a case for the political significance of urban public space. It demonstrates how these sites serve as spaces through which liberal beliefs are circulated and upheld through identity performance. Keywords: social identity, public space, public transportation, liberalism
Procedia PDF Downloads 206
681 Bioactivities and Phytochemical Studies of Acrocarpus fraxinifolius Bark Wight and Arn
Authors: H. M. El-Rafie, A. H. Abou Zeid, R. S. Mohammed, A. A. Sleem
Abstract:
Acrocarpus is a genus of flowering plants in the legume family Fabaceae, a large and economically important family. This study aimed to investigate the phytoconstituents of the petroleum ether extract (PEE) of Acrocarpus fraxinofolius bark by gas chromatography coupled with mass spectrometry (GC/MS) analysis of its fractions (fatty acids and unsaponifiable matter). In the unsaponifiable matter fraction, 52 compounds were identified, constituting 97.03% of the total composition. Cycloeucalenol was the major compound, representing 32.52%, followed by 4α,14α-dimethyl-Δ8,24(28)-ergostadien (26.50%) and β-sitosterol (13.74%); furthermore, gas liquid chromatography (GLC) analysis of the sterol fraction revealed cholesterol (7.22%), campesterol (13.30%), stigmasterol (10.00%), and β-sitosterol (69.48%). In the fatty acid fraction, 33 fatty acids were identified, representing 90.71% of the total fatty acid constituents; methyl 9,12-octadecadienoate (40.39%) followed by methyl hexadecanoate (23.64%) were the major compounds. In addition, column chromatography and thin layer chromatography (TLC) fractionation of the PEE separated the triterpenoids 21β-hydroxylup-20(29)-en-3-one and β-amyrin, which were structurally identified by spectroscopic analysis (NMR, MS, and IR). The PEE was biologically evaluated for (1) management of diabetes in alloxan-induced diabetic rats, (2) cytotoxic activity against four human tumor cell lines (cervix carcinoma [HeLa], breast carcinoma [MCF7], liver carcinoma [HepG2], and colon carcinoma [HCT-116]), and (3) hepatoprotective activity against CCl4-induced hepatotoxicity in rats, studied by assaying serum marker enzymes such as AST, ALT, and ALP. The anti-diabetic activity exhibited by 100 mg of the PEE was 74.38% relative to metformin (100% potency).
It also showed significant anti-proliferative activity against MCF-7 (IC50 = 2.35 µg), HeLa (IC50 = 3.85 µg), and HepG2 (IC50 = 9.54 µg) compared with doxorubicin as the reference drug. The hepatoprotective activity was evidenced by a significant decrease in the liver function enzymes AST, ALT, and ALP (by 29.18%, 28.26%, and 34.11%, respectively), using silymarin as the reference drug, compared to their concentration levels in an untreated group with CCl4-induced liver damage. This study was performed for the first time on the bark of this species. Keywords: Acrocarpus fraxinofolius, antidiabetic, cytotoxic, hepatoprotective
Procedia PDF Downloads 196
680 Study on Natural Light Distribution Inside the Room by Using Sudare as an Outside Horizontal Blind in Tropical Country of Indonesia
Authors: Agus Hariyadi, Hiroatsu Fukuda
Abstract:
In a tropical country like Indonesia, especially in Jakarta, most building energy consumption goes to the cooling system; the second largest share is electric lighting. One passive design strategy is to optimize the use of natural light from the sun, which in this region is available almost every day of the year. Natural light has several effects on a building: it can reduce the need for electric lighting, but it also increases the external load. Another consideration in the use of natural light is the visual comfort of occupants inside the room. Optimizing the effectiveness of natural light requires some modification of the façade design. An external shading device can minimize the external load introduced into the room, especially from direct solar radiation, which accounts for 80% of the external energy load introduced into the building. It can also control the distribution of natural light inside the room and minimize glare in the perimeter zone. One horizontal blind that can serve this purpose is the Sudare, a traditional Japanese blind that has long been used in Japanese houses, especially in summer. In its original function, the Sudare prevents direct solar radiation while still admitting natural ventilation. It has physical characteristics that can be utilized to optimize the effectiveness of natural light. In this research, different scales of Sudare will be simulated using the EnergyPlus and DAYSIM simulation software. EnergyPlus is a whole-building energy simulation program that models both energy consumption (for heating, cooling, ventilation, lighting, and plug and process loads) and water use in buildings, while DAYSIM is a validated, RADIANCE-based daylighting analysis software that models the annual amount of daylight in and around buildings. The modelling will be done with the Ladybug and Honeybee plugins.
These are two open-source plugins for Grasshopper and Rhinoceros 3D that help explore and evaluate environmental performance and connect directly to the EnergyPlus and DAYSIM engines. Using the same model maintains consistency between the geometries used in EnergyPlus and DAYSIM. The aim of this research is to find the façade configuration that best reduces the external load on the building, minimizing the energy needed for cooling, while maintaining the natural light distribution inside the room to maximize visual comfort for occupants and minimize electrical energy consumption. Keywords: façade, natural light, blind, energy
Procedia PDF Downloads 346
679 The Reflexive Interaction in Group Formal Practices: The Question of Criteria and Instruments for the Character-Skills Evaluation
Authors: Sara Nosari
Abstract:
In the research field of adult education, learning development projects have followed different itineraries: recently, adult transformation has been promoted through practices focused on reflexively oriented interaction. This perspective, which connects life stories and life-based methods, characterizes a transformative space between formal and informal education. Within this framework, in the Nursing Degree Courses of Turin University, a formal reflexive path on the professional identity of care work has been discussed and realized through group practices. This path confronted future care professionals with possible experiences staged by texts used as pre-tests: these texts, setting up real or believable professional situations, were intended to start a reflection on the different 'elements' of professional care work (relationship, the educational character of relationship, the relationship between different care roles; or even human identity, the aims and ultimate aim of care, ...). The transformative aspect of this kind of experience-test is that it is impossible to anticipate the process or the conclusion of the reflection, because they depend on two main conditions: personal sensitivity and the specific situation. The narrated experience is not a device; it does not include any trick for anticipating the answer. The text is not aimed at deepening knowledge but at being an active and creative force that leads the group to confront problematic figures. In fact, the experience-text does not have the purpose of explaining but of problematizing: it creates a space of suspension in which to question, discuss, research, and decide. It creates a space that is 'open' and 'in connection', where each one, in comparison with others, has the possibility to build his/her position.
In this space, everyone has the possibility to expose his/her own argumentation and to become aware of the points of view that emerge from others, aiming to research and find his/her own personal position. To define this position, however, it is necessary to learn to exercise character skills (conscientiousness, motivation, creativity, critical thinking, ...): while the importance of these non-cognitive skills is undisputed, how to evaluate them is far less evident. The paper will reflect on the epistemological limits of, and possibilities for, 'measuring' character skills, suggesting some evaluation criteria. Keywords: transformative learning, educational role, formal/informal education, character-skills
Procedia PDF Downloads 195
678 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that researchers can consider modeling the folding of a protein or even simulating an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data.
It illustrates the indispensability of HPC in meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. The article also indicates solutions to optimization problems and the benefits of Big Data for computational biology, and surveys the current state of the art and future generations of HPC computing with Big Data. Keywords: high performance, big data, parallel computation, molecular data, computational biology
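The "all-to-all comparisons" workload mentioned above can be sketched as follows; this is a toy illustration, not code from the paper. The `similarity` function is an invented stand-in for an expensive sequence comparison, and a thread pool stands in for cluster-scale parallelism (a real HPC pipeline would fan the same independent pairs out to MPI ranks or job-array tasks).

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def similarity(a, b):
    # Toy pairwise score standing in for an expensive sequence comparison:
    # fraction of positions where two equal-length "sequences" agree.
    return sum(x == y for x, y in zip(a, b)) / len(a)

seqs = ["ACGTACGT", "ACGTACGA", "TTGTACGT", "ACGGACGT"]

# All-to-all means n(n-1)/2 comparisons: polynomial, but enormous once
# n reaches the millions of sequences in public genomic databases.
pairs = list(combinations(range(len(seqs)), 2))
assert len(pairs) == len(seqs) * (len(seqs) - 1) // 2

# The comparisons are independent, so they parallelize trivially.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = pool.map(lambda ij: similarity(seqs[ij[0]], seqs[ij[1]]), pairs)
    scores = dict(zip(pairs, results))
```

The quadratic growth of `pairs` is exactly why such "well-behaved" algorithms still demand HPC at genomic scale.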
Procedia PDF Downloads 365
677 Using Genre Analysis to Teach Contract Negotiation Discourse Practices
Authors: Anthony Townley
Abstract:
Contract negotiation is fundamental to commercial law practice. For this study, genre and discourse analytical methodology was used to examine the legal negotiation of a Merger & Acquisition (M&A) deal undertaken by legal and business professionals in English across different jurisdictions in Europe. While some of the most delicate negotiations involved in this process were carried on face-to-face or over the telephone, these were generally progressed more systematically – and on the record – in the form of emails, email attachments, and as comments and amendments recorded in successive ‘marked-up’ versions of the contracts under negotiation. This large corpus of textual data was originally obtained by the author, in 2012, for the purpose of doctoral research. For this study, the analysis is particularly concerned with the use of emails and covering letters to exchange legal advice about the negotiations. These two genres help to stabilize and progress the negotiation process and account for negotiation activities. Swalesian analysis of functional Moves and Steps was able to identify structural similarities and differences between these text types and to identify certain salient discursive features within them. The analytical findings also indicate how particular linguistic strategies are more appropriately and more effectively associated with one legal genre rather than another. The concept of intertextuality is an important dimension of contract negotiation discourse and this study also examined how the discursive relationships between the different texts influence the way that texts are constructed. In terms of materials development, the research findings can contribute to more authentic English for Legal & Business Purposes pedagogies for students and novice lawyers and business professionals. The findings can first be used to design discursive maps that provide learners with a coherent account of the intertextual nature of the contract negotiation process. 
These discursive maps can then function as a framework in which to present detailed findings about the textual and structural features of the text types by applying the Swalesian genre analysis. Based on this acquired knowledge of the textual nature of contract negotiation, the authentic discourse materials can then be used to provide learners with practical opportunities to role-play negotiation activities and experience professional ways of thinking and using language in preparation for the written discourse challenges they will face in this important area of legal and business practice. Keywords: English for legal and business purposes, discourse analysis, genre analysis, intertextuality, pedagogical materials
Procedia PDF Downloads 150
676 Thoughts Regarding Interprofessional Work between Nurses and Speech-Language-Hearing Therapists in Cancer Rehabilitation: An Approach for Dysphagia
Authors: Akemi Nasu, Keiko Matsumoto
Abstract:
Rehabilitation for cancer requires setting up individual goals for each patient and an approach that properly fits the stage of cancer when put into practice. In order to cope with daily changes in patients' conditions, the establishment of a good cooperative relationship between nurses and the physiotherapists, occupational therapists, and speech-language-hearing therapists (therapists) becomes essential. This study focuses on the present situation of the cooperation between nurses and therapists, especially speech-language-hearing therapists, and aims to elucidate what develops there. A semi-structured interview was conducted with a speech-language-hearing therapist having practical experience of working in collaboration with nurses. The contents of the interview were transcribed and converted to data, and the data were encoded and categorized with sequentially increasing degrees of abstraction to conduct a qualitative explorative factor analysis. When providing ethical explanations, particular care was taken to ensure that participants would not be subjected to any disadvantages as a result of participating in the study. They were also informed that their privacy would be ensured, that they had the right to decline to participate, and that the results of the study would be announced publicly at an applicable nursing academic conference. This study was approved by the ethical committee of the university with which the researchers are affiliated. The survey participant is a female speech-language-hearing therapist in her forties.
As a result of the analysis, 6 categories were extracted: 'measures to address appetite and aspiration pneumonia prevention', 'the limitation of the care a therapist alone could provide', 'the all-inclusive patient-supportive care provided by nurses', 'expanding the beneficial cooperation with nurses', 'providing education for nurses on the swallowing function utilizing videofluoroscopic examination of swallowing', and 'enhancement of communication, including conferences'. In order to improve team performance, and for the teamwork competency necessary for the provision of safer care, mutual support is essential. As for the cooperation between nurses and therapists, this survey indicates that maturing the cooperation between professionals, in order to improve nursing professionals' knowledge and enhance communication, will lead to an improvement in the quality of rehabilitation for cancer. Keywords: cancer rehabilitation, nurses, speech-language-hearing therapists, interprofessional work
Procedia PDF Downloads 134
675 The Role of Medical Professionals in Imparting Drug Abuse Education to Secondary School Children
Authors: Hana Ashique, Florence Onabanjo
Abstract:
Objectives: Research on drug abuse education in secondary schools has highlighted the discrepancy between drug policies and practice. Drug abuse is closely associated with child mental health, and with drug overdose deaths in the UK approximately doubling in the last 30 years, it becomes important to revolutionise drug abuse education. Medical professionals from the University of Nottingham piloted a drug abuse workshop at a state school in Nottingham for children between 14 and 15 years of age. An interactive and educational approach was implemented, which explained addiction from a medical perspective. The workshop aimed to debunk misconceptions about drugs that children harboured and to support children in making informed choices. Methods: The sample group consisted of six cohorts of 30 children from year 10. The workshop was delivered in three segments to each cohort. In the first segment, the children were introduced to the physiological mechanisms behind drug dependence and reward pathways. The second segment consisted of interactive discussions between the children and medical professionals. This also involved conversations among the children about their perspectives on drug abuse, thereby co-creating knowledge. The third segment used art to incorporate storytelling from the perspective of a year ten child. This exercise investigated the causes that lead children to abuse drugs. A feedback questionnaire was distributed among the children to analyse the impact of the workshop. Results: The children answered eight questions. 56% agreed or strongly agreed that they found being taught by medical professionals effective. 50% disagreed, strongly disagreed, or felt neutral about having received sufficient education about drug abuse previously. Notably, 20% agreed that they would feel more likely to ask for help from a medical professional or organisation if they needed it.
Conclusion: The results highlighted the relevance of medical professionals functioning as peer educators in drug abuse education for secondary school children, which would build trust between children and the medical profession within the community. However, only a minority of children showed keenness to seek support from medical professionals or organisations for their mental health if they needed it, exposing the anxiety children have about coming forward to seek professional help. In order to work towards a child-centred approach, educational policies and practices need to align. Similar workshops and research may need to be conducted to expose different perspectives toward drug abuse education. Keywords: adolescent mental health, evidence-based teaching, drug abuse awareness, medical professional led workshops
Procedia PDF Downloads 22
674 The Extraction of Sage Essential Oil and the Improvement of Sleeping Quality for Female Menopause by Sage Essential Oil
Authors: Bei Shan Lin, Tzu Yu Huang, Ya Ping Chen, Chun Mel Lu
Abstract:
This research is divided into two parts. The first part adopts supercritical carbon dioxide fluid extraction to extract sage essential oil (Salvia officinalis), examines the differences when the procedure is run under different pressure conditions, and probes the composition of the extracted oil. The second part addresses the effect of aromatherapy with the extracted sage essential oil on improving sleep quality for women in menopause. The extracted sage substance was tested for DPPH radical inhibition to identify its antioxidant capacity, and its components were analyzed by gas chromatography-mass spectrometry. The extraction gives different results under the two pressure conditions. At 3000 psi, the extract has an IC50 of 180.94 mg/L, indicating stronger antioxidant activity than the IC50 of 657.43 mg/L obtained at 1800 psi. The yield at 3000 psi is 1.05%, higher than the 0.68% obtained at 1800 psi. From the experimental data, the researchers also conclude that the 3000 psi extract contains more materials than the 1800 psi one. The main overlapping materials are compounds of cyclic ethers, flavonoids, and terpenes. Cyclic ethers and flavonoids have soothing and calming functions; they can be applied to relieve cramps and to alleviate menopause disorders. The second part of the research applies the extracted sage essential oil in aromatherapy for women in menopause and discusses the resulting improvement in sleep quality. This research adopts Swedish upper back massage, evaluates sleep quality with the Pittsburgh Sleep Quality Index, and detects changes with a heart rate variability apparatus. The experimental group received aromatherapy with the extracted sage essential oil and showed better average results in SDNN, low frequency, and high frequency.
This performance was better than that of the control group. According to the statistical analysis of the Pittsburgh Sleep Quality Index, the intervention improved sleep quality, showing that the extracted sage essential oil has a significant effect on increasing parasympathetic nervous activity and is able to improve sleep quality for women in menopause. Keywords: supercritical carbon dioxide fluid extraction, Salvia officinalis, aromatherapy, Swedish massage, Pittsburgh sleep quality index, heart rate variability, parasympathetic nerves
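For readers unfamiliar with the HRV measures named above: SDNN is the standard deviation of the normal-to-normal (RR) intervals, a time-domain index of heart rate variability. A minimal sketch of how it is computed, with invented RR values rather than data from the study:

```python
import statistics

def sdnn(rr_intervals_ms):
    # SDNN: sample standard deviation of normal-to-normal (RR) intervals
    # in milliseconds; higher values are commonly read as greater
    # autonomic (including parasympathetic) modulation.
    return statistics.stdev(rr_intervals_ms)

def mean_hr(rr_intervals_ms):
    # Average heart rate in beats per minute derived from RR intervals.
    return 60000.0 / statistics.mean(rr_intervals_ms)

# Illustrative RR intervals in ms (invented, not from the study).
rr = [812, 790, 850, 805, 830, 795, 860, 815]
```

The low-frequency and high-frequency measures mentioned in the abstract are frequency-domain counterparts, obtained from a spectral analysis of the same RR series.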
Procedia PDF Downloads 121
673 Phytochemical Screening and Assessment of Hepatoprotective Activity of Geigeria alata Leaves Ethanolic Extract on Wistar Rats
Authors: Girgis Younan, Ikram Eltayeb
Abstract:
Geigeria alata, which belongs to the family Asteraceae, is a plant traditionally used in Sudan as a therapy for hepatic disease, as an antiepileptic and antispasmodic, and to treat cough and intestinal complaints. The liver is responsible for many critical functions within the body, and any liver disease or injury will result in the loss of those functions, leading to significant damage to the body. Liver diseases cause an increase in liver enzymes (AST, ALP, ALT) and total bilirubin and a decrease in total blood protein level. The objective of this study is to investigate the hepatoprotective activity of an ethanolic extract of Geigeria alata leaves. The plant leaves were extracted with 96% ethanol in a Soxhlet apparatus. The hepatoprotective effect was determined using 25 Wistar rats divided into 5 groups of 5 rats each: a normal control group receiving purified water; a negative control group, in which liver damage was induced by administering a 1:1 (v/v) mixture of CCl4 (1.25 ml/kg) and olive oil once on day four of the experiment; two groups receiving daily doses of the extract (400 mg/kg and 200 mg/kg) for 7 days; and a group receiving the standard drug silymarin (200 mg/kg) daily for 7 days, the latter three administered to CCl4-treated rats. The degree of hepatoprotective activity was evaluated by determining the hepatic marker enzymes AST, ALP, and ALT, total bilirubin, and total proteins (TP). Results showed that the extract of G. alata leaves reduced the levels of the liver enzymes ALT, AST, and ALP and of total bilirubin, and increased the level of total proteins. Since the levels of liver enzymes, bilirubin, and total protein are considered markers of liver function, the extract has proven to reduce the detrimental effects of CCl4-induced liver toxicity. The hepatoprotective effect of the extract was found to be dose dependent, with the 400 mg/kg dose exhibiting higher activity than the 200 mg/kg dose.
In addition, the effect of the higher dose (400 mg/kg) of the extract was found to be greater than that of the standard drug silymarin. In conclusion, G. alata leaf extract exhibits profound hepatoprotective activity, which justifies the traditional use of the plant for the treatment of hepatic diseases. Keywords: alata, extract, geigeria, hepatoprotective
Procedia PDF Downloads 235
672 Production Optimization under Geological Uncertainty Using Distance-Based Clustering
Authors: Byeongcheol Kang, Junyi Kim, Hyungsik Jung, Hyungjun Yang, Jaewoo An, Jonggeun Choe
Abstract:
It is important to determine reservoir properties for better production management. Due to limited information, there are geological uncertainties in very heterogeneous or channelized reservoirs. One solution is to generate multiple equi-probable realizations using geostatistical methods. However, some models have wrong properties and need to be excluded for simulation efficiency and reliability. We propose a novel model selection scheme based on distance-based clustering for reliable application of a production optimization algorithm. Distance is defined as a degree of dissimilarity between data. We calculate the Hausdorff distance to classify the models based on their similarity; the Hausdorff distance is useful for shape matching of reservoir models. We use multi-dimensional scaling (MDS) to describe the models in a two-dimensional space and group them by K-means clustering. Rather than simulating all models, we choose one representative model from each cluster and find the best model, the one whose production rates are most similar to the true values. From this process, we can select good reservoir models near the best model with high confidence. We generate 100 channel reservoir models using single normal equation simulation (SNESIM). Since oil and gas prefer to flow through the sand facies, it is critical to characterize the pattern and connectivity of the channels in the reservoir. After calculating Hausdorff distances and projecting the models by MDS, we can see that the models group according to their channel patterns. These channel distributions affect the operation controls of each production well, so the model selection scheme improves the management optimization process. For production optimization, we use particle swarm optimization (PSO), a useful global search algorithm. PSO is good at finding the global optimum of an objective function, but it takes much time because it uses many particles and iterations.
In addition, if we use multiple reservoir models, the simulation time for PSO soars. By using the proposed method, we can select good and reliable models that already match the production data. Considering the geological uncertainty of the reservoir, we can obtain well-optimized production controls for maximum net present value. The proposed method is a novel solution for selecting good cases among the various possibilities. The model selection scheme can be applied not only to production optimization but also to history matching and other ensemble-based methods for efficient simulations. Keywords: distance-based clustering, geological uncertainty, particle swarm optimization (PSO), production optimization
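The selection pipeline described above (pairwise Hausdorff distances, MDS projection, K-means, one representative per cluster) can be sketched in a self-contained form. This is an illustrative toy, not the authors' implementation: the "realizations" are invented random point sets rather than SNESIM channel models, and classical (Torgerson) MDS plus a plain Lloyd-iteration K-means stand in for library routines.

```python
import numpy as np

def hausdorff(a, b):
    # Symmetric Hausdorff distance between two point sets of shape (n, 2)
    # and (m, 2): the largest nearest-neighbour distance in either direction.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def classical_mds(dist, dims=2):
    # Classical (Torgerson) MDS: double-center the squared distance matrix
    # and keep the top eigenvectors as low-dimensional coordinates.
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (dist ** 2) @ j
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:dims]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

def kmeans(x, k, iters=50, seed=0):
    # Plain Lloyd iterations; empty clusters keep their previous center.
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(x[:, None] - centers[None], axis=-1), axis=1)
        centers = np.array([x[labels == c].mean(axis=0) if np.any(labels == c)
                            else centers[c] for c in range(k)])
    return labels

rng = np.random.default_rng(1)
# Toy "realizations": point sets tracing channels with two distinct patterns.
models = [rng.normal(loc, 0.3, size=(30, 2)) for loc in ([0, 0],) * 5 + ([4, 4],) * 5]
dist = np.array([[hausdorff(a, b) for b in models] for a in models])
labels = kmeans(classical_mds(dist), k=2)
# One representative per cluster would then be simulated instead of all models.
```

With real SNESIM realizations, the point sets would be the channel-facies cell locations, and each cluster representative feeds the PSO loop in place of the full ensemble.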
Procedia PDF Downloads 144
671 A Strategic Water and Energy Project as a Climate Change Adaptation Tool for Israel, Jordan and the Middle East
Authors: Doron Markel
Abstract:
Water availability in most of the Middle East (especially in Jordan) is among the lowest in the world and has been exacerbated even further by regional climatic change and reduced rainfall. The Araba Valley in Israel is disconnected from the national water system; on the other hand, the Araba Valley, in both Israel and Jordan, is an excellent area for harvesting solar energy. The Dead Sea (Israel and Jordan) is a hypersaline lake whose level declines at a rate of more than 1 m/y. The decline stems from the increasing use of all available freshwater resources that discharge into the Dead Sea and from decreasing natural precipitation due to climate change in the Middle East. As an adaptation tool for these human-made and climate-change effects, a comprehensive water-energy and environmental project was suggested: the Red Sea-Dead Sea Conveyance. It is planned to desalinate Red Sea water, supply the desalinated water to both Israel and Jordan, and convey the desalination brine to the Dead Sea to stabilize its water level. Accordingly, the World Bank led a multi-disciplinary feasibility study between 2008 and 2013 that mainly dealt with the mixing of seawater and Dead Sea water. The possible consequences of such mixing were precipitation and possible suspension of secondary gypsum, as well as blooming of the red alga Dunaliella. Using a comprehensive hydrodynamic-geochemical model of the Dead Sea, it was predicted that while conveying up to 400 million cubic meters per year of seawater or desalination brine to the Dead Sea, the latter would not be stratified as it was until 1979; hence gypsum precipitation and algal blooms would be negligible. Using another hydrodynamic-biological model of the Red Sea, it was predicted that seawater pumping from the Gulf of Eilat would not harm the ecological system of the gulf (including the sensitive coral reef), given a pumping depth of 120-160 m.
Based on these studies, a pipeline conveyance was recommended to convey desalination brine to the Dead Sea with the use of a hydropower plant, utilizing the 400 m elevation difference between the Red Sea and the Dead Sea. The complementary energy would come from solar panels coupled with innovative storage technology, needed to provide continuous energy for the proper functioning of the desalination plant. The paper will describe the proposed project as well as the feasibility study results. The possibility of utilizing this water-energy-environmental project as a climate change adaptation strategy for both Israel and Jordan will also be discussed.
Keywords: Red Sea, Dead Sea, water supply, hydro-power, Gypsum, algae
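As a sanity check on the conveyance figures above, the mean hydropower output implied by an annual brine volume and a given head can be estimated from P = ρ·g·Q·H·η. The brine density and turbine efficiency below are illustrative assumptions, not values from the feasibility study.

```python
# Back-of-the-envelope hydropower estimate for the Red Sea-Dead Sea
# conveyance. Density and turbine efficiency are assumed values for
# illustration, not figures from the World Bank feasibility study.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def hydropower_megawatts(annual_volume_m3, head_m,
                         density_kg_m3=1000.0, efficiency=0.85):
    """Mean electrical power P = rho * g * Q * H * eta, returned in MW."""
    g = 9.81  # gravitational acceleration, m/s^2
    flow_m3_s = annual_volume_m3 / SECONDS_PER_YEAR
    return density_kg_m3 * g * flow_m3_s * head_m * efficiency / 1e6

# 400 million cubic metres per year over the ~400 m elevation difference
print(round(hydropower_megawatts(400e6, 400.0), 1), "MW")
```

Under these assumptions the mean output is on the order of a few tens of megawatts; the denser Dead Sea-bound brine would raise the figure proportionally.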
Procedia PDF Downloads 114
670 Virtual Screening and in Silico Toxicity Property Prediction of Compounds against Mycobacterium tuberculosis Lipoate Protein Ligase B (LipB)
Authors: Junie B. Billones, Maria Constancia O. Carrillo, Voltaire G. Organo, Stephani Joy Y. Macalino, Inno A. Emnacen, Jamie Bernadette A. Sy
Abstract:
The drug discovery and development process is generally known to be very lengthy and labor-intensive. Therefore, in order to deliver prompt and effective responses to cure certain diseases, there is an urgent need to reduce the time and resources needed to design, develop, and optimize potential drugs. Computer-aided drug design (CADD) is able to alleviate this issue by applying computational power to streamline the whole drug discovery process, from target identification to lead optimization. This drug design approach can be predominantly applied to diseases that cause major public health concerns, such as tuberculosis. To date, there has been no definitive cure for this disease, especially with the continuing emergence of drug-resistant strains. In this study, CADD is employed for tuberculosis by first identifying a key enzyme in the mycobacterium's metabolic pathway that would make a good drug target. One such potential target is the lipoate protein ligase B enzyme (LipB), a key enzyme in the M. tuberculosis metabolic pathway involved in the biosynthesis of the lipoic acid cofactor. Its expression is considerably up-regulated in patients with multi-drug-resistant tuberculosis (MDR-TB), and it has no known back-up mechanism that can take over its function when inhibited, making it an extremely attractive target. Using cutting-edge computational methods, compounds from the AnalytiCon Discovery Natural Derivatives database were screened and docked against the LipB enzyme and ranked according to their binding affinities. Compounds with better binding affinities than LipB's known inhibitor, decanoic acid, were subjected to in silico toxicity evaluation using the ADMET and TOPKAT protocols. Out of the 31,692 compounds in the database, 112 showed better binding energies than decanoic acid. Furthermore, 12 of the 112 compounds showed highly promising ADMET and TOPKAT properties.
Future studies involving in vitro or in vivo bioassays may be conducted to further confirm the therapeutic efficacy of these 12 compounds, which may eventually lead to a novel class of anti-tuberculosis drugs.
Keywords: pharmacophore, molecular docking, lipoate protein ligase B (LipB), ADMET, TOPKAT
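The screening triage described above — rank docked compounds by binding energy and keep those that outscore the reference inhibitor — can be sketched as follows. Compound names and energies are invented placeholders, not values from the study.

```python
# Hypothetical sketch of the screening triage described above: keep docked
# compounds whose binding energy is better (more negative) than that of
# the reference inhibitor, decanoic acid. Names and scores are invented.

def better_than_reference(scores, reference_energy):
    """Return (name, energy) pairs scoring below the reference,
    sorted from strongest to weakest binder."""
    hits = [(name, e) for name, e in scores.items() if e < reference_energy]
    return sorted(hits, key=lambda pair: pair[1])

docked = {"compound_A": -9.4, "compound_B": -6.1, "compound_C": -8.2}
decanoic_acid_score = -7.0  # placeholder reference energy, kcal/mol

for name, energy in better_than_reference(docked, decanoic_acid_score):
    print(f"{name}: {energy} kcal/mol")
```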
Procedia PDF Downloads 425
669 Comparative Analysis of the Impact of Urbanization on Land Surface Temperature in the United Arab Emirates
Authors: A. O. Abulibdeh
Abstract:
The aim of this study is to investigate and compare changes in Land Surface Temperature (LST) as a function of urbanization, particularly land use/land cover changes, in three cities in the UAE, namely Abu Dhabi, Dubai, and Al Ain. The assessment will be carried out at the macro- and micro-levels. At the macro-level, a comparative assessment will be made between the three cities. At the micro-level, the study will compare the effects of different land use/land cover types on the LST. This will provide clear and quantitative city-specific information on the relationship between urbanization and local spatial intra-urban LST variation in the three cities. The main objectives of this study are 1) to investigate the development of LST at the macro- and micro-levels between and within the three cities over a two-decade period, and 2) to examine the impact of different types of land use/land cover on the spatial distribution of LST. Because these three cities face a harsh arid climate, it is hypothesized that (1) urbanization affects and is connected to the spatial changes in LST; (2) different land use/land cover types have different impacts on the LST; and (3) changes in the spatial configuration of land use and vegetation concentration over time would control the urban microclimate at the city scale and the macroclimate at the country scale. This study will be carried out over a 20-year period (1996-2016) and throughout the whole year. The study will compare two distinct periods with different thermal characteristics: the cool/cold period from November to March and the warm/hot period from April to October. The best-practice research method for this topic is to use remote sensing data to target different aspects of the impacts on natural and anthropogenic systems.
The project will follow classical remote sensing and mapping techniques to investigate the impact of urbanization, mainly changes in land use/land cover, on LST. The investigation will be performed in two stages. In stage one, remote sensing data will be used to investigate the impact of urbanization on LST at the macroclimate level, where the LST and Urban Heat Island (UHI) will be compared across the three cities using data from the past two decades. Stage two will investigate the impact at the microclimate scale by examining the LST and UHI for particular land use/land cover types. In both stages, LST and urban land cover maps will be generated over the study area. The outcome of this study should represent an important contribution to recent urban climate studies, particularly in the UAE. Based on the aim and objectives of this study, the expected outcomes are as follows: i) to determine the increase or decrease of LST as a result of urbanization in these three cities, ii) to determine the effect of different land uses/land covers on increasing or decreasing the LST.
Keywords: land use/land cover, global warming, land surface temperature, remote sensing
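One standard step in an LST workflow of the kind the study describes is converting at-sensor thermal radiance to brightness temperature via the inverse Planck relation. A minimal sketch, using the published Landsat 8 TIRS band 10 calibration constants and omitting the emissivity correction required for true LST:

```python
import math

# Converting at-sensor thermal radiance to brightness temperature with
# the inverse Planck relation T = K2 / ln(K1/L + 1). K1 and K2 are the
# published Landsat 8 TIRS band 10 calibration constants; the emissivity
# correction needed for true LST is omitted for brevity.

K1 = 774.8853   # W / (m^2 * sr * um)
K2 = 1321.0789  # Kelvin

def brightness_temperature_k(radiance):
    """At-sensor brightness temperature in Kelvin."""
    return K2 / math.log(K1 / radiance + 1.0)

# A plausible desert-surface radiance value, for illustration only
t_kelvin = brightness_temperature_k(10.5)
print(round(t_kelvin - 273.15, 1), "deg C")
```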
Procedia PDF Downloads 248
668 Mobile and Hot Spot Measurement with Optical Particle Counting Based Dust Monitor EDM264
Authors: V. Ziegler, F. Schneider, M. Pesch
Abstract:
With the EDM264, GRIMM offers a solution for mobile short- and long-term measurements in outdoor areas and at production sites, for research as well as for permanent areal observations, with near-reference quality. The model EDM264 features a powerful and robust measuring cell based on the optical particle counting (OPC) principle, with all the advantages that users of GRIMM's portable aerosol spectrometers are used to. The system is embedded in a compact weather-protection housing with all-weather sampling, a heated inlet system, a data logger, and a meteorological sensor. With TSP, PM10, PM4, PM2.5, PM1, and PMcoarse, the EDM264 provides all fine dust fractions in real time, valid for outdoor applications and calculated with the proven GRIMM enviro-algorithm, as well as six additional dust mass fractions (pm10, pm2.5, pm1, inhalable, thoracic, and respirable) for IAQ and workplace measurements. This highly versatile instrument performs real-time monitoring of particle number and particle size, and provides information on particle surface distribution as well as dust mass distribution. GRIMM's EDM264 has 31 equidistant size channels, which are PSL traceable. A high-end data logger enables data acquisition and wireless communication via LTE or WLAN, or wired communication via Ethernet. Backup copies of the measurement data are stored directly in the device. The rinsing air function, which protects the laser and detector in the optical cell, further increases the reliability and long-term stability of the EDM264 under different environmental and climatic conditions. The entire sample volume flow of 1.2 L/min is analyzed in full in the optical cell, which assures excellent counting efficiency at low and high concentrations and complies with the ISO 21501-1 standard for OPCs. With all these features, the EDM264 is a world-leading dust monitor for precise monitoring of particulate matter and particle number concentration.
This highly reliable instrument is an indispensable tool for many users who need to measure aerosol levels and air quality outdoors, on construction sites, or at production facilities.
Keywords: aerosol research, aerial observation, fence line monitoring, wild fire detection
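For intuition only, the conversion from an OPC's size-binned counts to a mass concentration can be sketched as below, assuming spherical particles of uniform density. GRIMM's proprietary enviro-algorithm is considerably more sophisticated, and all values here are invented:

```python
import math

# Illustrative conversion of size-binned OPC counts to a mass
# concentration, assuming spherical particles of uniform density.
# This only conveys the underlying idea; counts and density are invented.

def mass_concentration_ug_m3(counts_per_litre, diameters_um,
                             density_g_cm3=1.65):
    """Sum per-bin particle volume (pi/6 * d^3) times density.
    The two lists give, per size bin, the count and mean diameter."""
    total_ug_per_litre = 0.0
    for n, d_um in zip(counts_per_litre, diameters_um):
        volume_um3 = math.pi / 6.0 * d_um ** 3
        # um^3 * g/cm^3 -> ug per particle (1 um^3 = 1e-12 cm^3, 1 g = 1e6 ug)
        mass_ug = volume_um3 * density_g_cm3 * 1e-6
        total_ug_per_litre += n * mass_ug
    return total_ug_per_litre * 1000.0  # 1000 L per m^3

print(round(mass_concentration_ug_m3([5000, 300, 20], [0.3, 1.0, 2.5]), 2))
```

Note how strongly the d³ term weights the few large particles: the 20 particles in the 2.5 µm bin contribute as much mass as thousands of submicron ones.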
Procedia PDF Downloads 151
667 The Effect of Bisphenol A and Its Selected Analogues on Antioxidant Enzymes Activity in Human Erythrocytes
Authors: Aneta Maćczak, Bożena Bukowska, Jaromir Michałowicz
Abstract:
Bisphenols are among the most widely used chemical compounds worldwide. They are used in the manufacturing of polycarbonates, epoxy resins, and thermal paper, which are applied in plastic containers, bottles, cans, newspapers, receipts, and other products. Among these compounds, bisphenol A (BPA) is produced in the highest amounts. There are concerns about the endocrine impact of BPA and its other toxic effects on the human organism, including hepatotoxicity, neurotoxicity, and carcinogenicity. Moreover, BPA is suspected to increase the incidence of obesity, diabetes, and heart disease. For this reason, the use of BPA in the production of plastic infant feeding bottles and some other consumer products has been restricted in the European Union and the United States. Nowadays, BPA analogues like bisphenol F (BPF) and bisphenol S (BPS) have been developed as alternative compounds. The replacement of BPA with other bisphenols has increased the exposure of the human population to these substances. Toxicological studies have mainly focused on BPA. In contrast, only a small number of studies concerning the toxic effects of BPA analogues have been conducted, which makes it impossible to state whether these substitutes are safe for human health. Up to now, the mechanism of bisphenols' action on erythrocytes has not been elucidated. Therefore, the aim of this study was to assess the effect of BPA and its selected analogues, BPF and BPS, on the activity of antioxidant enzymes, i.e. catalase (EC 1.11.1.6), glutathione peroxidase (EC 1.11.1.9), and superoxide dismutase (EC 1.15.1.1), in human erythrocytes. Red blood cells, owing to their function (oxygen transport) and their well-developed enzymatic and non-enzymatic antioxidative systems, are a useful cellular model for assessing changes in redox balance. Erythrocytes were incubated with BPA, BPF, and BPS in concentrations ranging from 0.5 to 100 µg/ml for 24 h.
The activity of catalase was determined by the method of Aebi (1984). The activity of glutathione peroxidase was measured according to the method described by Rice-Evans et al. (1991), while the activity of superoxide dismutase was determined by the method of Misra and Fridovich (1972). The results showed that BPA and BPF caused changes in antioxidative enzyme activities. BPA decreased the activity of the examined enzymes at a concentration of 100 µg/ml. We also noted that BPF decreased the activity of catalase (5-100 µg/ml), glutathione peroxidase (50-100 µg/ml), and superoxide dismutase (25-100 µg/ml), while BPS did not cause statistically significant changes in the investigated parameters. The obtained results suggest that BPA and BPF disrupt the redox balance in human erythrocytes, but the observed changes may occur in the human organism only during occupational or subacute exposure to these substances.
Keywords: antioxidant enzymes, bisphenol A, bisphenol A analogues, human erythrocytes
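The Aebi (1984) catalase assay cited above derives activity from the first-order decay of H2O2 absorbance at 240 nm. A minimal sketch of that rate calculation, with invented absorbance readings:

```python
import math

# First-order rate calculation behind the Aebi (1984) catalase assay:
# activity is derived from the decay of H2O2 absorbance at 240 nm.
# The absorbance readings below are invented, for illustration only.

def catalase_rate_constant(a_start, a_end, interval_s):
    """First-order rate constant k = (1/t) * ln(A1/A2), in s^-1."""
    return math.log(a_start / a_end) / interval_s

# e.g. absorbance at 240 nm falling from 0.450 to 0.400 over 30 s
k = catalase_rate_constant(0.450, 0.400, 30.0)
print(f"k = {k:.5f} s^-1")
```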
Procedia PDF Downloads 471
666 Oleic Acid Enhances Hippocampal Synaptic Efficacy
Authors: Rema Vazhappilly, Tapas Das
Abstract:
Oleic acid is a cis-unsaturated fatty acid known to be a partially essential fatty acid due to its limited endogenous synthesis during pregnancy and lactation. Previous studies have demonstrated the role of oleic acid in neuronal differentiation and brain phospholipid synthesis. This evidence indicates a major role for oleic acid in learning and memory. Interestingly, oleic acid has been shown to enhance hippocampal long-term potentiation (LTP), the physiological correlate of long-term synaptic plasticity. However, the effect of oleic acid on short-term synaptic plasticity has not been investigated. Short-term potentiation (STP) is the physiological correlate of short-term synaptic plasticity, which is the key molecular mechanism underlying short-term memory and neuronal information processing. STP in the hippocampal CA1 region has been known to require the activation of N-methyl-D-aspartate receptors (NMDARs). NMDAR-dependent hippocampal STP as a potential mechanism for short-term memory has been a subject of intense interest for the past few years. Therefore, in the present study, the effect of oleic acid on NMDAR-dependent hippocampal STP was determined in mouse hippocampal slices (in vitro) using a multi-electrode array system. STP was induced by weak tetanic stimulation (one train of 100 Hz stimulation for 0.1 s) of the Schaffer collaterals of the CA1 region of the hippocampus in slices treated with different concentrations of oleic acid in the presence or absence of the NMDAR antagonist D-AP5 (30 µM). Oleic acid at 20 µM (mean increase in fEPSP amplitude = ~135% vs. control = 100%; P<0.001) and 30 µM (mean increase in fEPSP amplitude = ~280% vs. control = 100%; P<0.001) significantly enhanced the STP following weak tetanic stimulation. A lower oleic acid concentration of 10 µM did not modify the hippocampal STP induced by weak tetanic stimulation.
The hippocampal STP induced by weak tetanic stimulation was completely blocked by the NMDA receptor antagonist D-AP5 (30 µM) in both oleic acid-treated and control hippocampal slices. This led to the conclusion that the hippocampal STP elicited by weak tetanic stimulation and enhanced by oleic acid was NMDAR-dependent. Together, these findings suggest that oleic acid may enhance short-term memory and neuronal information processing through the modulation of NMDAR-dependent hippocampal short-term synaptic plasticity. In conclusion, this study suggests a possible role for oleic acid in preventing short-term memory loss and impaired neuronal function during development.
Keywords: oleic acid, short-term potentiation, memory, field excitatory post synaptic potentials, NMDA receptor
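fEPSP results of the kind reported above are conventionally expressed as a percentage of the pre-tetanus baseline, so an unchanged response reads 100%. A sketch of that normalisation with invented amplitude values:

```python
# Hypothetical sketch of the normalisation behind the fEPSP percentages
# above: post-tetanus amplitudes are expressed relative to the mean
# pre-tetanus baseline. All amplitude values below are invented.

def percent_of_baseline(baseline_amplitudes, post_amplitudes):
    base_mean = sum(baseline_amplitudes) / len(baseline_amplitudes)
    return [100.0 * amp / base_mean for amp in post_amplitudes]

baseline_mv = [0.42, 0.40, 0.41, 0.43]   # pre-tetanus fEPSP amplitudes
post_mv = [0.58, 0.55, 0.60]             # after weak tetanic stimulation
print([round(p) for p in percent_of_baseline(baseline_mv, post_mv)])
```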
Procedia PDF Downloads 336
665 Using Mathematical Models to Predict the Academic Performance of Students from Initial Courses in Engineering School
Authors: Martín Pratto Burgos
Abstract:
The Engineering School of the University of the Republic in Uruguay has offered an Introductory Mathematical Course since the second semester of 2019. This course has been designed to assist students in preparing for math courses that are essential for engineering degrees, namely Math1, Math2, and Math3 in this research. The research proposes to build a model that can accurately predict students' activity and academic progress based on their performance in the three essential mathematical courses. Additionally, there is a need for a model that can forecast the effect of the Introductory Mathematical Course on approval of the three essential courses during the first academic year. The techniques used are Principal Component Analysis and predictive modelling using the Generalised Linear Model. The dataset includes information from 5135 engineering students and 12 different characteristics based on activity and course performance. Two models are created in the R programming language for data that follow a binomial distribution. Model 1 retains variables whose p-values are less than 0.05, and Model 2 uses the stepAIC function to remove variables until the lowest AIC score is reached. After applying Principal Component Analysis, the main component represented on the y-axis is the approval of the Introductory Mathematical Course, and the x-axis represents the approval of the Math1 and Math2 courses as well as student activity three years after taking the Introductory Mathematical Course. Model 2, which considered students' activity, performed the best, with an AUC of 0.81 and an accuracy of 84%. According to Model 2, students' engagement in school activities will continue for three years after the approval of the Introductory Mathematical Course, because they have successfully completed the Math1 and Math2 courses. Passing the Math3 course does not have any effect on students' activity. Concerning academic progress, the best fit is Model 1.
It has an AUC of 0.56 and an accuracy rate of 91%. The model indicates that if students pass the three first-year courses, they will progress according to the timeline set by the curriculum. Both models show that the Introductory Mathematical Course does not directly affect students' activity and academic progress. The best model to explain the impact of the Introductory Mathematical Course on the three first-year courses was Model 1, with an AUC of 0.76 and 98% accuracy. This model shows that passing the Introductory Mathematical Course helps students pass the Math1 and Math2 courses without affecting their performance in the Math3 course. Combining the three predictive models: if students pass the Math1 and Math2 courses, they will stay active for three years after taking the Introductory Mathematical Course and will continue following the recommended engineering curriculum. Additionally, the Introductory Mathematical Course helps students to pass Math1 and Math2 when they start Engineering School. The models obtained in this research do not consider the time students took to pass the three math courses, but they can successfully assess courses in the university curriculum.
Keywords: machine-learning, engineering, university, education, computational models
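The AUC values used to compare the models can be computed directly from predicted probabilities and observed outcomes. The study itself fits binomial GLMs in R and prunes variables with stepAIC; this is a pure-Python sketch of the metric with invented data:

```python
# Pure-Python sketch of the ROC AUC metric used to compare the models
# above. Labels and predicted probabilities here are invented.

def auc(labels, scores):
    """Mann-Whitney form of ROC AUC: the probability that a randomly
    chosen positive outranks a randomly chosen negative (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# e.g. observed course approval vs. predicted approval probability
y_true = [1, 0, 1, 1, 0, 0]
p_hat = [0.9, 0.3, 0.7, 0.4, 0.5, 0.2]
print(round(auc(y_true, p_hat), 3))
```

An AUC of 0.5 corresponds to a model no better than chance, which puts the reported 0.56 for academic progress and 0.81 for activity in perspective despite their similar accuracies.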
Procedia PDF Downloads 99