Search results for: double nonlinear predictive controller
434 Evaluation of the Irritation Potential of Three Topical Formulations of Minoxidil 5% Using Patch Test
Authors: Sule Pallavi, Shah Priyank, Thavkar Amit, Mehta Suyog, Rohira Poonam
Abstract:
Minoxidil is used topically to promote hair growth in the treatment of male androgenetic alopecia. The objective of this study was to compare the irritation potential of three conventional formulations of minoxidil 5% topical solution in a human patch test. The study was a single-centre, double-blind, non-randomized controlled study in 56 healthy adult Indian subjects. An occlusive patch test for 24 hours was performed with three formulations of minoxidil 5% topical solution. Products tested included an aqueous-based minoxidil 5% (AnasureTM 5%, Sun Pharma, India – Brand A), an alcohol-based minoxidil 5% (Brand B) and an aqueous-based minoxidil 5% (Brand C). Isotonic saline 0.9% and 1% w/w sodium lauryl sulphate were included as negative and positive controls, respectively. Patches were applied and removed after 24 hours. The skin reaction was assessed and clinically scored 24 hours after the removal of the patches under a constant artificial daylight source using the Draize scale (a 0–4 point scale for erythema/wrinkles/dryness and for oedema). A combined mean score up to 2.0/8.0 indicates a product is “non-irritant”, a score between 2.0/8.0 and 4.0/8.0 indicates “mildly irritant”, and a score above 4.0/8.0 indicates “irritant”. Follow-up was scheduled after one week to confirm recovery from any reaction. The patch test procedure followed the principles outlined by the Bureau of Indian Standards (BIS) (IS 4011:2018; Methods of Test for Safety Evaluation of Cosmetics, 3rd revision). Fifty-six subjects with a mean age of 30.9 years (27 males and 29 females) participated in the study. The combined mean scores (± standard deviation) were: 0.13 ± 0.33 (Brand A), 0.39 ± 0.49 (Brand B), 0.22 ± 0.41 (Brand C), 2.91 ± 0.79 (positive control) and 0.02 ± 0.13 (negative control). The mean score of Brand A (Sun Pharma product) was significantly lower than that of Brand B (p=0.001) and comparable with that of Brand C (p=0.21). The combined mean erythema scores (± standard deviation) were: 0.09 ± 0.29 (Brand A), 0.27 ± 0.5 (Brand B), 0.18 ± 0.39 (Brand C), 2.02 ± 0.49 (positive control) and 0.0 ± 0.0 (negative control). The mean erythema score of Brand A was significantly lower than that of Brand B (p=0.01) and comparable with that of Brand C (p=0.16). Any reaction observed at 24 hours after patch removal subsided within a week. All three topical formulations of minoxidil 5% were non-irritant. Brand A of 5% minoxidil (Sun Pharma) was found to be the least irritant of the three brands based on the combined mean score and the mean erythema score in the human patch test as per BIS IS 4011:2018.
Keywords: erythema, irritation, minoxidil, patch test
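As a minimal illustration of the scoring rule quoted above, the sketch below (Python) classifies each brand from its combined mean score; the thresholds and scores are taken from the abstract, while the function and variable names are hypothetical:

```python
def classify_irritation(combined_mean_score):
    """Classify a product from its combined mean Draize score (out of 8.0),
    using the thresholds quoted in the abstract."""
    if combined_mean_score <= 2.0:
        return "non-irritant"
    elif combined_mean_score <= 4.0:
        return "mildly irritant"
    return "irritant"

# Combined mean scores reported in the study
scores = {"Brand A": 0.13, "Brand B": 0.39, "Brand C": 0.22, "Positive control": 2.91}
for brand, score in scores.items():
    print(brand, "->", classify_irritation(score))
```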
Procedia PDF Downloads 98
433 Epigenetic and Archeology: A Quest to Re-Read Humanity
Authors: Salma A. Mahmoud
Abstract:
Epigenetics, or alteration in gene expression influenced by extragenetic factors, has emerged as one of the most promising areas to address some of the gaps in our current knowledge in understanding patterns of human variation. In the last decade, research investigating epigenetic mechanisms in many fields has flourished and witnessed significant progress. It paved the way for a new era of integrated research, especially between anthropology/archeology and the life sciences. Skeletal remains are considered the most significant source of information for studying human variation across history, and by utilizing these valuable remains we can interpret past events, cultures and populations. In addition to their archeological, historical and anthropological importance, studying bones has great implications in other fields such as medicine and science. Bones can also hold within them the secrets of the future, as they can act as predictive tools for health, societal characteristics and dietary requirements. Bones in their basic form are composed of cells (osteocytes) that are affected by both genetic and environmental factors; genetics alone can only explain a small part of their variability. The primary objective of this project is to examine the epigenetic landscape/signature within the bones of archeological remains as a novel marker that could reveal new ways to conceptualize chronological events, gender differences, social status and ecological variations. We attempt here to address discrepancies in common variants such as the methylome, as well as novel epigenetic regulators such as chromatin remodelers, which to the best of our knowledge have not yet been investigated by anthropologists/paleoepigeneticists, using a plethora of techniques (biological, computational, and statistical). Moreover, extracting epigenetic information from bones will highlight the importance of osseous material as a vector to study human beings in several contexts (social, cultural and environmental), and strengthen its essential role in model systems that can be used to investigate and reconstruct various cultural, political and economic events. We also address all the steps required to plan and conduct an epigenetic analysis of bone materials (modern and ancient), as well as discussing the key challenges facing researchers aiming to investigate this field. In conclusion, this project will serve as a primer for bioarcheologists/anthropologists and human biologists interested in incorporating epigenetic data into their research programs. Understanding the roles of epigenetic mechanisms in bone structure and function will be very helpful for a better comprehension of bone biology, highlighting its essentiality as an interdisciplinary vector and a key material in archeological research.
Keywords: epigenetics, archeology, bones, chromatin, methylome
Procedia PDF Downloads 108
432 The Willingness to Pay of People in Taiwan for Flood Protection Standard of Regions
Authors: Takahiro Katayama, Hsueh-Sheng Chang
Abstract:
Due to global climate change, extreme rainfall has increased, leading to serious floods around the world. In recent years, urbanization and population growth have also tended to increase the number of impervious surfaces, resulting in significant loss of life and property during floods, especially in the urban areas of Taiwan. In the past, the primary governmental response to floods was structural flood control, and the only flood protection standards in use were design standards. However, these design standards for flood control facilities are generally calculated based on current hydrological conditions. In the face of future extreme events, there is a high possibility of surpassing existing design standards, causing direct and indirect damage to the public. To cope with the frequent occurrence of floods in recent years, it has been pointed out that a different standard, called the FPSR (Flood Protection Standard of Regions), is needed in Taiwan. The FPSR is mainly used for disaster reduction and to ensure that hydraulic facilities drain a regional flood immediately under a specific return period. The FPSR can convey a level of flood risk which is useful for land use planning and reflects the disaster situations that a region can bear. However, little has been reported on the FPSR and its impacts on the public in Taiwan. Hence, this study proposes a quantitative procedure to evaluate the FPSR. This study aimed to examine the FPSR of the region and public perceptions of and knowledge about the FPSR, as well as the public’s WTP (willingness to pay) for the FPSR. The research is conducted via a literature review and a questionnaire survey. Firstly, this study reviews the domestic and international research on the FPSR and provides its theoretical framework. Secondly, the CVM (Contingent Valuation Method) has been employed to conduct this survey; using a double-bounded dichotomous-choice, close-ended format, it elicits households’ WTP for raising the protection level, in order to understand the social costs. The respondents of this study are citizens living in Taichung City, Taiwan, and 700 respondents were chosen. In the end, this research will continue the survey work, determine which factors influence WTP, and provide recommendations for flood adaptation policies in the future.
Keywords: climate change, CVM (Contingent Valuation Method), FPSR (Flood Protection Standard of Regions), urban flooding
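Since the abstract names the double-bounded dichotomous-choice format, here is a minimal sketch of how a mean WTP could be estimated from such responses (Python; the bid values, sample data, and the normal-WTP assumption are illustrative, not taken from the study):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Each row: (first bid, follow-up bid, answer1, answer2); answers are 1=yes, 0=no.
# Hypothetical illustrative data; the study's actual bid design is not given here.
responses = [(500, 1000, 1, 0), (500, 250, 0, 1), (800, 1600, 1, 1), (800, 400, 0, 0)]

def neg_log_lik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                  # keep sigma positive
    ll = 0.0
    for b1, b2, a1, a2 in responses:
        cdf = lambda b: norm.cdf((b - mu) / sigma)
        if a1 and a2:   p = 1.0 - cdf(b2)      # yes-yes: WTP >= b2 (b2 > b1)
        elif a1:        p = cdf(b2) - cdf(b1)  # yes-no:  b1 <= WTP < b2
        elif a2:        p = cdf(b1) - cdf(b2)  # no-yes:  b2 <= WTP < b1 (b2 < b1)
        else:           p = cdf(b2)            # no-no:   WTP < b2
        ll += np.log(max(p, 1e-12))
    return -ll

fit = minimize(neg_log_lik, x0=[600.0, np.log(300.0)])
print("Estimated mean WTP:", fit.x[0])
```

Each response pattern pins the respondent's latent WTP into an interval, and the likelihood is the probability mass of that interval under the assumed WTP distribution.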
Procedia PDF Downloads 250
431 Prevalence of ESBL E. coli Susceptibility to Oral Antibiotics in Outpatient Urine Culture: Multicentric Analysis of Three Years' Data (2019-2021)
Authors: Mazoun Nasser Rashid Al Kharusi, Nada Al Siyabi
Abstract:
Objectives: The main aim of this study is to find the rate of susceptibility of ESBL E. coli causing UTI to oral antibiotics. Secondary objectives: to determine the prevalence of ESBL E. coli in community urine samples, to identify the best empirical oral antibiotics with the least resistance rate for UTI, and to identify alternative oral antibiotics for testing and utilization. Methods: This is a retrospective descriptive study of the last three years in five major hospitals in Oman (Khowla Hospital, AN’Nahdha Hospital, Rustaq Hospital, Nizwa Hospital, and Ibri Hospital), each equipped with a microbiologist. Inclusion criteria covered all eligible outpatient urine culture isolates, excluding isolates from admitted patients with hospital-acquired urinary tract infections. Data were collected through the MOH database. The MOH hospitals use different types of testing: automated methods like VITEK 2 and manual methods. The VITEK 2 machine uses the principle of a fluorogenic method for organism identification and a turbidimetric method for susceptibility testing. The manual method is done by double-disc diffusion for identifying ESBL, with the disc diffusion method for antibiotic susceptibility. All laboratories follow the Clinical and Laboratory Standards Institute (CLSI) guidelines. Analysis was done with the SPSS statistical package. Results: There were 23,048 urine cultures in total. E. coli grew in 11,637 (49.6%) of the urine cultures, of which 2,199 (18.8%) were confirmed as ESBL. As expected, the resistance rate to amoxicillin and cefuroxime was 100%. Moreover, the susceptibility of these ESBL-producing E. coli to nitrofurantoin, trimethoprim-sulfamethoxazole, ciprofloxacin and amoxicillin-clavulanate has been improving over the years; however, it is still low. ESBL E. coli was predominant in the female gender and in those aged 66-74 years throughout all the years. Other oral antibiotic options need to be explored and tested so that we add to the pool of oral antibiotics for ESBL E. coli causing UTI in the community. Conclusion: There is a high rate of ESBL E. coli in urine from the community. The high resistance rates to oral antibiotics highlight the need for alternative treatment options for UTIs caused by these bacteria. Further research is needed to identify new and effective treatments for UTIs caused by ESBL E. coli.
Keywords: UTI, ESBL, oral antibiotics, E. coli, susceptibility
Procedia PDF Downloads 93
430 The Representation of the Medieval Idea of Ugliness in Messiaen's Saint François d’Assise
Authors: Nana Katsia
Abstract:
This paper explores the ways both medieval and medievalist conceptions of ugliness might be linked to the physical and spiritual transformation of the protagonists, and how this is realised through specific musical rhythm, such as the dochmiac rhythm in the opera. As Eco and Henderson note, only one kind of ugliness could be represented in conformity with nature in the Middle Ages without destroying all aesthetic pleasure and, in turn, artistic beauty: namely, a form of ugliness which arouses disgust. Moreover, Eco explores the fact that the enemies of Christ who condemn, martyr, and crucify him are represented as wicked inside. In turn, the representation of inner wickedness and hostility toward God brings with it outward ugliness, coarseness, barbarity, and rage. Ultimately these result in the deformation of the figure. In all these regards, the non-beautiful is represented here as a necessary phase, which is not the case with classical (ancient Greek) concepts of beauty. As we can see, the understanding of disfigurement and ugliness in the Middle Ages was both varied and complex. In the Middle Ages, the disfigurement caused by leprosy (and other skin and bodily conditions) was interpreted, in a somewhat contradictory manner, as both a curse and a gift from God. Some saints’ lives even have the saint appealing to be inflicted with the disease as part of their mission toward true humility. We shall explore how this ‘different concept’ of ugliness (non-classical beauty) might be represented in Messiaen’s opera. According to Messiaen, the Leper and Saint François are the principal characters of the third scene, as both of them will be transformed, and a double miracle will take place in the process. Messiaen mirrors the idea of the true humility of the Saint’s life and positions Le Baiser au Lépreux as the culmination of the first act. The Leper’s character represents his physical and spiritual disfigurement, both of which are healed after the miracle. So the scene can be viewed as an encounter between beauty and ugliness, much of which is spent in a study of ugliness. The dochmiac rhythm is one of the most important compositional elements in the opera. It plays a crucial role in the process of creating a dramatic musical narrative and structure in the composition. As such, we shall explore how Messiaen represents the medieval idea of ugliness in the opera through particular musical elements linked to the main protagonists’ spiritual or physical ugliness, why Messiaen makes reference to the dochmiac rhythm, and how these create the musical and dramatic context in the opera for the medieval aesthetic category of ugliness.
Keywords: ugliness in music, medieval time, Saint François d’Assise, Messiaen
Procedia PDF Downloads 146
429 The Yield of Neuroimaging in Patients Presenting to the Emergency Department with Isolated Neuro-Ophthalmological Conditions
Authors: Dalia El Hadi, Alaa Bou Ghannam, Hala Mostafa, Hana Mansour, Ibrahim Hashim, Soubhi Tahhan, Tharwat El Zahran
Abstract:
Introduction: Neuro-ophthalmological emergencies require prompt assessment and management to avoid vision- or life-threatening sequelae. Some require neuroimaging, most commonly CT and MRI of the brain. These can be over-used when not indicated, and their yield remains dependent on multiple factors relating to the clinical scenario. Methods: A retrospective cross-sectional study was conducted by reviewing the electronic medical records of patients presenting to the Emergency Department (ED) with isolated neuro-ophthalmologic complaints. For each patient, data were collected on the clinical presentation, whether neuroimaging was performed (and which type), and the result of neuroimaging. The performed neuroimaging was analyzed and its yield determined. Results: A total of 211 patients were reviewed. The complaints or symptoms at presentation were: blurry vision, change in the visual field, transient vision loss, floaters, double vision, eye pain, eyelid droop, headache, dizziness, and others such as nausea or vomiting. In the ED, a total of 126 neuroimaging procedures were performed. Ninety-four imaging studies (74.6%) were normal, while 32 (25.4%) had relevant abnormal findings. Only two symptoms were significantly associated with abnormal imaging: blurry vision (p-value = 0.038) and visual field change (p-value = 0.014), while four physical exam findings were significantly associated with abnormal imaging: visual field defect (p-value = 0.016), abnormal pupil reactivity (p-value = 0.028), afferent pupillary defect (p-value = 0.018), and abnormal optic disc exam (p-value = 0.009). Conclusion: Risk indicators for abnormal neuroimaging in the setting of neuro-ophthalmological emergencies are blurred vision or changes in the visual field on history taking, while visual field irregularities, abnormal pupil reactivity with or without an afferent pupillary defect, or abnormal optic discs are risk factors related to the physical examination. These findings, when present, should sway the ED physician towards neuroimaging, but individualizing each case remains of utmost importance to prevent time-consuming, resource-draining, and sometimes unnecessary workup. In the end, the study suggests a well-structured, patient-centered algorithm to be followed by ED physicians.
Keywords: emergency department, neuro-ophthalmology, neuroimaging, risk indicators
Procedia PDF Downloads 179
428 Utilising Indigenous Knowledge to Design Dykes in Malawi
Authors: Martin Kleynhans, Margot Soler, Gavin Quibell
Abstract:
Malawi is one of the world’s poorest nations, and consequently the design of flood risk management infrastructure comes with a different set of challenges. There is a lack of hydromet data, in both spatial coverage and quality, and the challenge in the design of flood risk management infrastructure is compounded by the fact that maintenance is almost completely non-existent and that solutions have to be simple to be effective. Solutions should not require any further resources to remain functional after completion, they should be resilient, and they also have to be cost-effective. The Lower Shire Valley of Malawi suffers from frequent flood events. Various flood risk management interventions were designed across the valley during the course of the Shire River Basin Management Project – Phase I, and due to the data-poor environment, indigenous knowledge was relied upon to a great extent for hydrological and hydraulic model calibration and verification. However, indigenous knowledge comes with the caveat that it is ‘fuzzy’ and that it can be manipulated for political reasons. The experience in the Lower Shire Valley suggests that indigenous knowledge is unlikely to invent a problem where none exists, but that flood depths and extents may be exaggerated to secure prioritization of the intervention. Indigenous knowledge relies on the memory of a community and cannot foresee events that exceed past experience, that could occur differently from those that have occurred in the past, or where flood management interventions change the flow regime. This complicates communication of planned interventions to local inhabitants. Indigenous knowledge is, for the most part, intuitive, but flooding can sometimes be counter-intuitive, and the rural poor may have lower trust in technology. Due to a near-complete lack of maintenance of infrastructure, infrastructure has to be designed with no moving parts and no requirement for energy inputs. This precludes pumps, valves, flap gates and sophisticated warning systems. Dyke designs in this project included ‘flood warning spillways’, which double up as pedestrian and animal crossing points and provide residents with warning of impending dangerous water levels behind dykes before water levels that could cause a possible dyke failure are reached. Locally available materials and erosion protection using vegetation were used wherever possible to keep costs down.
Keywords: design of dykes in low-income countries, flood warning spillways, indigenous knowledge, Malawi
Procedia PDF Downloads 284
427 Towards Accurate Velocity Profile Models in Turbulent Open-Channel Flows: Improved Eddy Viscosity Formulation
Authors: W. Meron Mebrahtu, R. Absi
Abstract:
Velocity distribution in turbulent open-channel flows is organized in a complex manner, due to the large spatial and temporal variability of fluid motion resulting from the free-surface turbulent flow condition. The phenomenon is complicated further by the complex geometry of channels and the presence of transported solids. Thus, several efforts have been made to understand the phenomenon and obtain accurate mathematical models that are suitable for engineering applications. However, predictions are inaccurate because oversimplified assumptions are involved in modeling this complex phenomenon. Therefore, the aim of this work is to study velocity distribution profiles and obtain simple, more accurate, and predictive mathematical models. Particular focus is placed on acceptable simplifications of the general transport equations and an accurate representation of eddy viscosity. A wide rectangular open channel seems suitable to begin the study; the other assumptions are a smooth wall and sediment-free flow under steady and uniform flow conditions. These assumptions allow examining the effect of the bottom wall and the free surface only, which is a necessary step before dealing with more complex flow scenarios. For this flow condition, two ordinary differential equations are obtained for the velocity profiles: from the Reynolds-averaged Navier-Stokes (RANS) equations and from the equilibrium consideration between turbulent kinetic energy (TKE) production and dissipation. Different analytic models for eddy viscosity, TKE, and mixing length were then assessed. Computed velocity profiles were compared to experimental data for different flow conditions and to the well-known linear, log, and log-wake laws. Results show that the model based on the RANS equation provides more accurate velocity profiles. In the viscous sublayer and buffer layer, the method based on Prandtl’s eddy viscosity model and the Van Driest mixing length gives a more precise result. For the log layer and outer region, a mixing-length equation derived from von Karman’s similarity hypothesis provides the best agreement with measured data, except near the free surface, where an additional correction based on a damping function for eddy viscosity is used. This method allows more accurate velocity profiles with the same value of the damping coefficient, valid under different flow conditions. This work continues with the investigation of narrow channels, complex geometries, and the effect of solids transported in sewers.
Keywords: accuracy, eddy viscosity, sewers, velocity profile
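For reference, the classical wall laws mentioned above take the following standard forms (a sketch with the usual literature constants, kappa ≈ 0.41, B ≈ 5.0 and A+ ≈ 26; the abstract does not quote its own values):

```latex
% Log law of the wall and Van Driest mixing length (standard forms)
\begin{align}
  u^+ &= \frac{1}{\kappa}\ln y^+ + B,
  \qquad u^+ = \frac{u}{u_\tau},\quad y^+ = \frac{y\,u_\tau}{\nu},\\
  l_m &= \kappa\,y\left(1 - e^{-y^+/A^+}\right),
  \qquad \nu_t = l_m^2\left|\frac{\partial u}{\partial y}\right|.
\end{align}
```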
Procedia PDF Downloads 112
426 Localized and Time-Resolved Velocity Measurements of Pulsatile Flow in a Rectangular Channel
Authors: R. Blythman, N. Jeffers, T. Persoons, D. B. Murray
Abstract:
The exploitation of flow pulsation in micro- and mini-channels is a potentially useful technique for enhancing the cooling of high-end photonics and electronics systems. It is thought that pulsation alters the thickness of the hydrodynamic and thermal boundary layers, and hence affects the overall thermal resistance of the heat sink. Although the fluid mechanics and heat transfer are inextricably linked, it can be useful to decouple the parameters to better understand the mechanisms underlying any heat transfer enhancement. Using two-dimensional, two-component particle image velocimetry, the current work intends to characterize the heat transfer mechanisms in pulsating flow with a mean Reynolds number of 48 by experimentally quantifying the hydrodynamics of a generic liquid-cooled channel geometry. Flows circulated through the test section by a gear pump are modulated using a controller to achieve sinusoidal flow pulsations with Womersley numbers of 7.45 and 2.36 and an amplitude ratio of 0.75. It is found that the transient characteristics of the measured velocity profiles depend on the speed of oscillation, in accordance with the analytical solution for flow in a rectangular channel. A large velocity overshoot is observed close to the wall at high frequencies, resulting from the interaction of near-wall viscous stresses and the inertial effects of the main fluid body. The steep velocity gradients at the wall are indicative of augmented heat transfer, although the local flow reversal may reduce the upstream temperature difference in heat transfer applications. While unsteady effects remain evident at the lower frequency, the annular effect subsides and retreats from the wall. The shear rate at the wall is increased during the accelerating half-cycle and decreased during deceleration compared to steady flow, suggesting that the flow may experience both enhanced and diminished heat transfer during a single period. Hence, the thickness of the hydrodynamic boundary layer is reduced for positively moving flow during one half of the pulsation cycle at the investigated frequencies. It is expected that the size of the thermal boundary layer is similarly reduced during the cycle, leading to intervals of heat transfer enhancement.
Keywords: heat transfer enhancement, particle image velocimetry, localized and time-resolved velocity, photonics and electronics cooling, pulsating flow, Richardson’s annular effect
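The Womersley numbers quoted above follow the standard definition (the choice of the channel half-height as the characteristic length is an assumption; the abstract does not state which length scale was used):

```latex
% Womersley number: ratio of transient inertial forces to viscous forces
\begin{equation}
  \alpha = L_c\sqrt{\frac{\omega}{\nu}},
\end{equation}
% where L_c is a characteristic length (e.g., the channel half-height),
% \omega is the angular frequency of pulsation, and \nu the kinematic viscosity.
```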
Procedia PDF Downloads 348
425 Development of a Table-Top Composite Wire Fabrication System for Additive Manufacturing
Authors: Krishna Nand, Mohammad Taufik
Abstract:
Fused Filament Fabrication (FFF) is one of the most popular additive manufacturing (AM) technologies. In FFF technology, a wire-form material (filament) is fed into a heated chamber, where it is converted into a semi-solid form and extruded out of a nozzle to be deposited on the build platform to fabricate the part. FFF technology is expanding and covering the market at a very rapid rate, so the need for raw materials for 3D printing is also increasing. The cost of 3D printing is directly affected by filament cost. To make 3D printing more economical, a compact and portable filament/wire extrusion system is needed. Wire extrusion systems that extrude ordinary wire/filament made of a single material are available in the market; however, extrusion systems to make a composite wire/filament are not. Hence, in this study, initial efforts have been made to develop a table-top composite wire extruder. The developed system consists of mechanical parts, electronic parts, and a control system. A multiple-channel hopper, extrusion screw, melting chamber and nozzle, cooling zone, and spool winder are some of the mechanical parts, while motors, a heater, a temperature sensor, and cooling fans are some of the electronic parts used to develop the system. A control board is used to control the various process parameters, such as temperature and motor speed. For the production of composite wire/filament, two different materials can be fed through the two channels of the hopper, to be mixed and carried to the heated zone by the extrusion screw. The extrusion screw is rotated by a motor whose speed is controlled by the controller as per the required material extrusion rate. In the heated zone, the material melts with the help of a heating element and is extruded out of the nozzle in the form of wire. The developed system occupies less floor space due to the vertical orientation of its heating chamber. It is capable of extruding ordinary filament as well as composite filament compatible with the 3D printers available in the market. Further, the developed system could be employed in the research and development of materials, processing, and characterization for 3D printers. The developed system presented in this study could be a better choice for hobbyists and researchers dealing with the fused filament fabrication process, reducing the 3D printing cost significantly by recycling waste material into 3D printer feed material. Further, it could also be explored as an alternative for filament production at the commercial level.
Keywords: additive manufacturing, 3D printing, filament extrusion, pellet extrusion
Procedia PDF Downloads 169
424 Application of Lattice Boltzmann Method to Different Boundary Conditions in a Two Dimensional Enclosure
Authors: Jean Yves Trepanier, Sami Ammar, Sagnik Banik
Abstract:
The Lattice Boltzmann Method is advantageous for simulating complex boundary conditions and solving for fluid flow parameters through streaming and collision processes. This paper includes the study of three different test cases in a confined domain using the Lattice Boltzmann model. 1. An SRT (single relaxation time) approach in the Lattice Boltzmann model is used to simulate lid-driven cavity flow for different Reynolds numbers (100, 400 and 1000) with a domain aspect ratio of 1, i.e., a square cavity. A moment-based boundary condition is used for more accurate results. 2. A thermal lattice BGK (Bhatnagar-Gross-Krook) model is developed for Rayleigh-Bénard convection for two test cases, horizontal and vertical temperature difference, considered separately for a Boussinesq incompressible fluid. The Rayleigh number is varied for both test cases (10^3 ≤ Ra ≤ 10^6), keeping the Prandtl number at 0.71. A stability criterion with a precise forcing scheme is used for a greater level of accuracy. 3. The phase-change problem governed by the heat-conduction equation is studied using the enthalpy-based Lattice Boltzmann model with a single iteration for each time step, thus reducing the computational time. A double distribution function approach with a D2Q9 (density) model and a D2Q5 (temperature) model is used for two different test cases: conduction-dominated melting and convection-dominated melting. The solidification process is also simulated using the enthalpy-based method with a single distribution function using the D2Q5 model, to provide a better understanding of the heat transport phenomenon. The domain for the test cases has an aspect ratio of 2, with some exceptions for a square cavity. An approximate velocity scale is chosen to ensure that the simulations remain within the incompressible regime. Different parameters like velocities, temperature, Nusselt number, etc. are calculated for a comparative study with the existing literature. The simulated results demonstrate excellent agreement with the existing benchmark solutions within an error limit of ±0.05, which implies the viability of this method for complex fluid flow problems.
Keywords: BGK, Nusselt, Prandtl, Rayleigh, SRT
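To make the SRT-BGK update concrete, below is a minimal sketch of the D2Q9 collision-and-streaming loop on a fully periodic grid (Python; the lattice weights and equilibrium are the standard ones, while the grid size, relaxation time, and initial condition are illustrative assumptions, and the paper's moment-based boundary conditions are not reproduced here):

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights (standard)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

nx, ny, tau = 64, 64, 0.8          # grid and SRT relaxation time (illustrative)

def equilibrium(rho, ux, uy):
    """Second-order BGK equilibrium for each of the 9 directions."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy   # c_i . u
    usq = ux**2 + uy**2
    return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)

# initialise at rest with a small sinusoidal shear perturbation
rho = np.ones((nx, ny))
ux = 0.05*np.sin(2*np.pi*np.arange(ny)/ny)[None, :].repeat(nx, 0)
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for step in range(1000):
    rho = f.sum(axis=0)                                    # macroscopic moments
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau             # BGK collision
    for i in range(9):                                     # periodic streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
```

The shear wave decays viscously, with kinematic viscosity set by the relaxation time, nu = (tau - 0.5)/3 in lattice units.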
Procedia PDF Downloads 128
423 Parameter Estimation of Gumbel Distribution with Maximum-Likelihood Based on Broyden Fletcher Goldfarb Shanno Quasi-Newton
Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani
Abstract:
Extreme data in an observation can occur due to unusual circumstances. Such data can provide important information that cannot be provided by other data, so their existence needs to be further investigated. One method for obtaining extreme data is the block maxima method. The distribution of extreme data sets taken with the block maxima method is called the extreme value distribution; here it is the Gumbel distribution with two parameters. The exact parameter estimates of the Gumbel distribution under the maximum likelihood (ML) method are difficult to determine, so an approximate solution is necessary. The purpose of this study was to determine the parameter estimates of the Gumbel distribution with the quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method used for unconstrained nonlinear function optimization, so it can be used for parameter estimation of the Gumbel distribution, whose distribution function takes the form of a double exponential function. The quasi-Newton BFGS method is a development of Newton's method. Newton's method uses the second derivative to calculate the parameter value changes in each iteration; it is then modified with the addition of a step length to provide a guarantee of convergence when the second derivative requires complex calculations. In the quasi-Newton BFGS method, Newton's method is modified by updating an approximation of the second derivative in each iteration. The parameter estimation of the Gumbel distribution by a numerical approach using the quasi-Newton BFGS method is done by calculating the parameter values that maximize the likelihood function. In this method, we need the gradient vector and the Hessian matrix. This research is theoretical and applied, based on the study of several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and the parameter estimates of the Gumbel distribution. The estimation method is then applied to daily rainfall data in Purworejo District to estimate the distribution parameters. The results indicate that the high rainfall that occurred in Purworejo District decreased in intensity and that the range of rainfall that occurred decreased.
Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton
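As a minimal sketch of this estimation scheme (Python, with SciPy's BFGS implementation standing in for the algorithm described; the synthetic sample replaces the Purworejo rainfall data, which are not given in the abstract):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Illustrative block maxima; in the study these would be rainfall maxima.
x = rng.gumbel(loc=50.0, scale=12.0, size=200)

def neg_log_lik(params):
    """Negative log-likelihood of the Gumbel(mu, beta) distribution:
    log f(x) = -log(beta) - z - exp(-z), with z = (x - mu)/beta."""
    mu, log_beta = params
    beta = np.exp(log_beta)           # enforce beta > 0
    z = (x - mu) / beta
    return x.size * np.log(beta) + np.sum(z + np.exp(-z))

fit = minimize(neg_log_lik, x0=[x.mean(), np.log(x.std())], method="BFGS")
mu_hat, beta_hat = fit.x[0], np.exp(fit.x[1])
print("mu:", mu_hat, "beta:", beta_hat)
```

BFGS needs only gradient information (here obtained numerically) and builds up the Hessian approximation iteration by iteration, which is exactly what makes it suitable when the exact second derivatives are cumbersome.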
Procedia PDF Downloads 326
422 Evaluation of the Effect of Lactose Derived Monosaccharide on Galactooligosaccharides Production by β-Galactosidase
Authors: Yenny Paola Morales Cortés, Fabián Rico Rodríguez, Juan Carlos Serrato Bermúdez, Carlos Arturo Martínez Riascos
Abstract:
Numerous benefits of galactooligosaccharides (GOS) as prebiotics have motivated the study of enzymatic processes for their production. These processes have special complexities due to several factors that make high productivity difficult, such as enzyme type, reaction medium pH, substrate concentrations and the presence of inhibitors, among others. In the present work, the production of galactooligosaccharides (with different degrees of polymerization: two, three and four) from lactose was studied. The study considers the formulation of a mathematical model that predicts the production of GOS from lactose using the enzyme β-galactosidase. The effect of pH on the reaction was studied using a phosphate buffer, with which three pH values (6.0, 6.5 and 7.0) were evaluated. It was observed that at pH 6.0 the enzymatic activity was insignificant, while at pH 7.0 the enzymatic activity was approximately 27 times greater than at 6.5. The latter result differs from previously reported results. Therefore, pH 7.0 was chosen as the working pH. Additionally, the enzyme concentration was analyzed, which showed that its effect depends on the pH; the concentration was set at 0.272 mM for the following studies. Afterwards, experiments were performed varying the lactose concentration to evaluate its effects on the process and to generate the data for the adjustment of the mathematical model parameters. The mathematical model considers the reactions of lactose hydrolysis and transgalactosylation for the production of disaccharides and trisaccharides, with their inverse reactions. The production of tetrasaccharides was negligible and, because of that, was not included in the model. The reaction was monitored by HPLC, and for the quantitative analysis of the experimental data the Matlab programming language was used, including solvers for the integration of systems of differential equations (ode15s) and for nonlinear optimization problems (fminunc). The results confirm that the transgalactosylation and hydrolysis reactions are reversible; additionally, inhibition by glucose and galactose is observed in the production of GOS. In relation to the production process of galactooligosaccharides, the results show that it is necessary to have high initial lactose concentrations, considering that this favors the transgalactosylation reaction, while low concentrations favor the hydrolysis reactions.
Keywords: β-galactosidase, galactooligosaccharides, inhibition, lactose, Matlab, modeling
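The structure of such a model can be sketched as follows (Python with SciPy standing in for the Matlab ode15s/fminunc workflow named above; the mass-action rate laws, species set, and rate constants are illustrative assumptions, since the abstract does not give the actual kinetic expressions or inhibition terms):

```python
import numpy as np
from scipy.integrate import solve_ivp

# State: [lactose, glucose, galactose, GOS3] in mM.
def rates(t, y, k_h, k_t, k_r):
    lac, glc, gal, gos3 = y
    r_hyd = k_h * lac            # hydrolysis: lactose -> glucose + galactose
    r_fwd = k_t * lac * lac      # transgalactosylation: 2 lactose -> GOS3 + glucose
    r_rev = k_r * gos3 * glc     # inverse transgalactosylation
    return [-r_hyd - 2*r_fwd + 2*r_rev,   # lactose
            r_hyd + r_fwd - r_rev,        # glucose
            r_hyd,                        # galactose
            r_fwd - r_rev]                # trisaccharide GOS

def simulate(k, y0, t_eval):
    sol = solve_ivp(rates, (0.0, t_eval[-1]), y0, args=tuple(k),
                    t_eval=t_eval, method="LSODA")
    return sol.y                          # concentrations, shape (4, n)

# Least-squares objective against measured HPLC concentrations c_meas (4 x n)
def objective(log_k, y0, t_eval, c_meas):
    return np.sum((simulate(np.exp(log_k), y0, t_eval) - c_meas)**2)

t = np.linspace(0.0, 4.0, 9)
print(simulate([0.5, 0.01, 0.02], [100.0, 0.0, 0.0, 0.0], t)[3])  # GOS3 over time
```

Fitting the rate constants then amounts to minimizing `objective` over `log_k` (the log-parametrization keeps the constants positive), which mirrors the fminunc step in the study.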
Procedia PDF Downloads 358
421 Evaluation of Teaching Team Stress Factors in Two Engineering Education Programs
Authors: Kari Bjorn
Abstract:
Team learning has been studied and modeled as the double-loop model and its variations. Also, metacognition has been suggested as a concept to describe the nature of team learning as being more than a simple sum of the individual learning of the team members. Team learning has a positive correlation with both the individual motivation of its members and the collective factors within the team. The team learning of previously very independent members of two teaching teams is analyzed. Universities of applied sciences are training future professionals with ever more diversified and multidisciplinary skills, and the units of teaching and learning are increasingly larger for several reasons. First, multidisciplinary skill development requires more active learning and richer learning environments and learning experiences; this occurs in student teams. Secondly, the teaching of multidisciplinary skills requires multidisciplinary and team-based teaching from the teachers as well. Team formation phases have been identified and are widely accepted. Team role stress has been analyzed in project teams, which typically have a well-defined goal and organization. This paper explores the team stress of two teacher teams running two parallel course units in engineering education. The first is Industrial Automation Technology and the second is Development of Medical Devices. The courses have separate student groups, and they are on different campuses. Both run in parallel within an 8-week period, and both are taught by a group of four teachers with several years of teaching experience, but individually. The team role stress scale survey is administered to both teaching groups at the beginning and at the end of the course. The inventory of questions covers the factors of ambiguity, conflict, quantitative role overload and qualitative role overload. Some comparison to the study on project teams can be drawn. The team development stages of the two teaching groups are different. Relating the team role stress factors to the development stage of the group can reveal the potential of management actions to promote team building and aid understanding of the maturity of functional and well-established teams. Mature teams indicate higher job satisfaction and deliver higher performance. In particular, teaching teams who deliver highly intangible learning outcomes are sensitive to issues in job satisfaction and team conflicts. Because team teaching is increasing, the paper provides a review of the relevant theories and initial comparative and longitudinal results of the team role stress factors applied to teaching teams.
Keywords: engineering education, stress, team role, team teaching
Procedia PDF Downloads 225
420 Creatine Associated with Resistance Training Increases Muscle Mass in the Elderly
Authors: Camila Lemos Pinto, Juliana Alves Carneiro, Patrícia Borges Botelho, João Felipe Mota
Abstract:
Sarcopenia, a syndrome characterized by progressive and generalized loss of skeletal muscle mass and strength, currently affects over 50 million people and increases the risk of adverse outcomes such as physical disability, poor quality of life and death. The aim of this study was to examine the efficacy of creatine supplementation associated with resistance training on muscle mass in the elderly. A 12-week, double-blind, randomized, parallel-group, placebo-controlled trial was conducted. Participants were randomly allocated into one of the following groups: placebo with resistance training (PL+RT, n=14) and creatine supplementation with resistance training (CR+RT, n=13). The subjects in the CR+RT group received 5 g/day of creatine monohydrate, and the subjects in the PL+RT group were given the same dose of maltodextrin. Participants were instructed to ingest the supplement immediately after lunch on non-training days and immediately after resistance training sessions on training days, dissolved in a lemon-flavored beverage comprising 100 g of maltodextrin. Participants of both groups undertook a supervised exercise training program for 12 weeks (3 times per week). The subjects were assessed at baseline and after 12 weeks. The primary outcome was muscle mass, assessed by dual-energy X-ray absorptiometry (DXA). The secondary outcome included diagnosing participants with one of the three stages of sarcopenia (presarcopenia, sarcopenia and severe sarcopenia) by skeletal muscle mass index (SMI), handgrip strength and gait speed. The CR+RT group had a significant increase in SMI and muscle mass (p<0.0001), a significant decrease in android and gynoid fat (p=0.028 and p=0.035, respectively), and a tendency towards decreasing body fat (p=0.053) after the intervention. The PL+RT group only had a significant increase in SMI (p=0.007). The main finding of this clinical trial indicated that creatine supplementation combined with resistance training was capable of increasing muscle mass in our elderly cohort (p=0.02). In addition, the number of subjects diagnosed with one of the three stages of sarcopenia at baseline decreased in the creatine-supplemented group in comparison with the placebo group (CR+RT, n=-3; PL+RT, n=0). In summary, 12 weeks of creatine supplementation associated with resistance training resulted in increases in muscle mass. This is the first study with elderly participants of both sexes to show such an increase in muscle mass with a smaller quantity of creatine supplementation over a short period. Future long-term research should investigate the effects of these interventions in the sarcopenic elderly.
Keywords: creatine, dietetic supplement, elderly, resistance training
Procedia PDF Downloads 474
419 Virtual Metering and Prediction of Heating, Ventilation, and Air Conditioning Systems Energy Consumption by Using Artificial Intelligence
Authors: Pooria Norouzi, Nicholas Tsang, Adam van der Goes, Joseph Yu, Douglas Zheng, Sirine Maleej
Abstract:
In this study, virtual meters are designed and used for energy balance measurements of an air handling unit (AHU). The method aims to replace traditional physical sensors in heating, ventilation, and air conditioning (HVAC) systems with simulated virtual meters. Due to the inability to manage and monitor these systems, many HVAC systems have a high level of inefficiency and energy wastage. Virtual meters were implemented and applied in an actual HVAC system, and the results confirm the practicality of mathematical sensors for alternative energy measurement. While most residential buildings and offices are commonly not equipped with advanced sensors, adding, exploiting, and monitoring sensors and measurement devices in existing systems can cost thousands of dollars. The first purpose of this study is to provide an energy consumption rate based on available sensors and without any physical energy meters, proving the performance of virtual meters in HVAC systems as reliable measurement devices. To demonstrate this concept, mathematical models were created for AHU-07, located in building NE01 of the British Columbia Institute of Technology (BCIT) Burnaby campus. The models were created and integrated with the system's historical data and physical spot measurements, and the actual measurements were investigated to prove the models' accuracy. Based on preliminary analysis, the resulting mathematical models successfully plot energy consumption patterns, and it is concluded confidently that the results of the virtual meter will be close to the results that physical meters could achieve. In the second part of this study, the use of virtual meters is further assisted by artificial intelligence (AI) in the HVAC systems of buildings to improve energy management and efficiency. Through a data mining approach, virtual meter data are recorded as historical data, and HVAC system energy consumption prediction is also implemented in order to harness great energy savings and manage the demand and supply chain effectively. Energy prediction can lead to energy-saving strategies and considerations that open a window to predictive control in order to reach lower energy consumption. To solve these challenges, energy prediction can optimize the HVAC system and automate energy consumption to capture savings. This study also investigates the possibility of AI solutions for autonomous HVAC efficiency that will allow a quick and efficient response to energy consumption and cost spikes in the energy market.
Keywords: virtual meters, HVAC, artificial intelligence, energy consumption prediction
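As a minimal sketch of what such a virtual meter computes (Python; a sensible-only air-side energy balance with constant air properties is assumed here, since the abstract does not specify the actual model form or sensor set):

```python
RHO_AIR = 1.2    # kg/m^3, assumed constant air density
CP_AIR = 1005.0  # J/(kg K), specific heat of air at constant pressure

def coil_sensible_power_kw(airflow_m3s, t_in_c, t_out_c):
    """Virtual meter for an AHU coil: sensible heat rate inferred from the
    airflow and air temperatures across the coil, Q = rho * V * cp * dT, in kW."""
    return RHO_AIR * airflow_m3s * CP_AIR * (t_out_c - t_in_c) / 1000.0

# e.g. 4 m^3/s of air heated from 12 C to 20 C -> about 38.6 kW
print(coil_sensible_power_kw(4.0, 12.0, 20.0))
```

Evaluating such a function on the AHU's existing airflow and temperature sensors at each timestamp yields the virtual meter's time series, which can then serve as training data for the prediction models described above.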
Procedia PDF Downloads 106
418 Seismic Behavior and Loss Assessment of High–Rise Buildings with Light Gauge Steel–Concrete Hybrid Structure
Authors: Bing Lu, Shuang Li, Hongyuan Zhou
Abstract:
The steel–concrete hybrid structure has been extensively employed in high-rise and super high-rise buildings. The light gauge steel–concrete hybrid structure, comprising a light gauge steel structure and a concrete hybrid structure, is a new type of steel–concrete hybrid structure, which possesses some advantages of both. The seismic behavior and loss assessment of three high-rise buildings with three different concrete hybrid structures were investigated using finite element software. The three concrete hybrid structures are the reinforced concrete column–steel beam (RC‒S) hybrid structure, the concrete-filled steel tube column–steel beam (CFST‒S) hybrid structure, and the tubed concrete column–steel beam (TC‒S) hybrid structure. Nonlinear time-history analysis of the three high-rise buildings under 80 earthquakes was carried out. The simulations indicated that the seismic performance of the three high-rise buildings was superior: under extremely rare earthquakes, the maximum inter-storey drifts of all three high-rise buildings are significantly lower than 1/50. The inter-storey drift and floor acceleration of the high-rise building with the CFST‒S hybrid structure were larger than those of the high-rise building with the RC‒S hybrid structure, and smaller than those of the high-rise building with the TC‒S hybrid structure. Then, based on the time-history analysis results, the post-earthquake repair cost ratio and repair time of the three high-rise buildings were predicted through the economic performance analysis method proposed in the FEMA P-58 report. Under frequent earthquakes, basic earthquakes and rare earthquakes, the repair cost ratio and repair time of the three high-rise buildings were less than 5% and 15 days, respectively. Under extremely rare earthquakes, the repair cost ratio and repair time of the high-rise building with the TC‒S hybrid structure were the largest among the three high-rise buildings. Due to the advantages of the CFST‒S hybrid structure, it could be extensively employed in high-rise buildings subjected to earthquake excitations.
Keywords: seismic behavior, loss assessment, light gauge steel–concrete hybrid structure, high-rise building, time-history analysis
Procedia PDF Downloads 187
417 Analysis and Design of Offshore Met Mast Supported on Jacket Substructure
Authors: Manu Manu, Pardha J. Saradhi, Ramana M. V. Murthy
Abstract:
Wind energy is accepted as one of the most developed, cost-effective and proven renewable energy technologies for meeting increasing electricity demands in a sustainable manner. Preliminary assessment studies along the Indian coastline by the Ministry of New and Renewable Energy have indicated prospects for the development of offshore wind power along the Tamil Nadu coast, India. The commercial viability of a wind project mainly depends on the wind characteristics at the site. Hence, it is internationally recommended to perform a site-specific wind resource assessment based on a two-year wind profile as part of the feasibility study. Conventionally, guyed met masts are used onshore for the collection of the wind profile. Installation of a similar structure offshore requires a complex marine spread and is very expensive. In the present study, an attempt is made to develop a 120 m tall lattice tower supported on a jacket piled to the seabed at Rameshwaram, Tamil Nadu, India. Offshore met masts are subjected to combined wind and hydrodynamic loads, and these lateral loads should be safely transferred to the soil. The wind loads are estimated based on the gust factor method, and the hydrodynamic loads are estimated by Morison's equation along with a suitable wave theory. The soil is modeled as three nonlinear orthogonal springs based on API standards. The structural configuration and optimum member sizes are obtained for extreme cyclone events. The dynamic behavior of the mast under coupled wind and wave loads is also studied. The static responses of the mast with the jacket-type offshore platform have been studied using a frame model in SESAM. It is found from the study that the maximum displacement at the top of the mast is 0.003 m for random waves and 0.08 m for wind during the steady state. The dynamic analysis results indicate that the structure is safe against coupled wind and wave loading.
Keywords: offshore wind, mast, static, aerodynamic load, hydrodynamic load
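Morison's equation, referenced above for the hydrodynamic loads, takes the following standard form for a slender cylindrical member (a sketch; in practice the drag and inertia coefficients C_D and C_M would be taken from API recommendations, which the abstract does not quote):

```latex
% Morison's equation: in-line wave force per unit length on a slender cylinder
\begin{equation}
  f(t) = \tfrac{1}{2}\,\rho\,C_D\,D\,u\,|u| \;+\; \rho\,C_M\,\frac{\pi D^2}{4}\,\dot{u},
\end{equation}
% where \rho is the water density, D the member diameter, u the water particle
% velocity normal to the member, and \dot{u} the corresponding acceleration.
```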
Procedia PDF Downloads 217
416 Monolithic Integrated GaN Resonant Tunneling Diode Pair with Picosecond Switching Time for High-speed Multiple-valued Logic System
Authors: Fang Liu, JiaJia Yao, GuanLin Wu, ZuMaoLi, XueYan Yang, HePeng Zhang, ZhiPeng Sun, JunShuai Xue
Abstract:
The explosively increasing needs of data processing and information storage strongly drive the advancement from the binary logic system to the multiple-valued logic system. The inherent negative differential resistance characteristic, ultra-high-speed switching time, and robust anti-irradiation capability make the III-nitride resonant tunneling diode one of the most promising candidates for multi-valued logic devices. Here we report the monolithic integration of GaN resonant tunneling diodes in series to realize multiple negative differential resistance regions, obtaining at least three stable operating states. A multiply-by-three circuit is achieved by this combination, increasing the frequency of the input triangular wave from f0 to 3f0. The resonant tunneling diodes are grown by plasma-assisted molecular beam epitaxy on free-standing c-plane GaN substrates, comprising double barriers and a single quantum well, both at the atomic level. A device with a peak current density of 183 kA/cm² in conjunction with a peak-to-valley current ratio (PVCR) of 2.07 is observed, which is the best result reported for nitride-based resonant tunneling diodes. A microwave oscillation event at room temperature was discovered with a fundamental frequency of 0.31 GHz and an output power of 5.37 μW, verifying the high repeatability and robustness of our devices. The switching behavior measurement was successfully carried out, featuring rise and fall times on the order of picoseconds, which can be used in high-speed digital circuits. Limited by the measuring equipment and the layer structure, the switching time can be further improved. In general, this article presents a novel nitride device with multiple negative differential resistance regions driven by the resonant tunneling mechanism, which can be used in the high-speed multiple-valued logic field with reduced circuit complexity, demonstrating a new solution for nitride devices to break through the limitations of binary logic.
Keywords: GaN resonant tunneling diode, negative differential resistance, multiple-valued logic system, switching time, peak-to-valley current ratio
Procedia PDF Downloads 101
415 Exploring Faculty Attitudes about Grades and Alternative Approaches to Grading: Pilot Study
Authors: Scott Snyder
Abstract:
Grading approaches in higher education have not changed meaningfully in over 100 years. While there is variation in the types of grades assigned across countries, most use approaches based on simple ordinal scales (e.g., letter grades). While grades are generally viewed as an indication of a student's performance, challenges arise regarding the clarity, validity, and reliability of letter grades. Research about grading in higher education has primarily focused on grade inflation, student attitudes toward grading, the impacts of grades, and the benefits of plus-minus letter grade systems. Little research is available about alternative approaches to grading, the varying approaches used by faculty within and across colleges, and faculty attitudes toward grades and alternative approaches to grading. To begin to address these gaps, a survey was conducted of faculty in a sample of departments at three diverse colleges in a southeastern state in the US. The survey focused on faculty experiences with and attitudes toward grading, the degree to which faculty innovate in teaching and grading practices, and faculty interest in alternatives to the point-system approach to grading. Responses were received from 104 instructors (21% response rate). The majority reported that teaching accounted for 50% or more of their academic duties. Almost all (92%) of the respondents reported using point and percentage systems for their grading. While all respondents agreed that grades should reflect the degree to which objectives were mastered, half indicated that grades should also reflect effort or improvement. Over 60% felt that grades should be predictive of success in subsequent courses or real-life applications. Most respondents disagreed that grades should compare students to other students. About 42% worried about their own grade inflation and grade inflation in their college. Only 17% disagreed that grades mean different things based on the instructor, while 75% thought it would be good if there was agreement. Less than 50% of respondents felt that grades were directly useful for identifying students who should or should not continue, identifying strengths and weaknesses, predicting which students will be most successful, or contributing to program monitoring of student progress. Instructors were less willing to modify assessment than they were to modify instruction and curriculum. Most respondents (76%) were interested in learning about alternative approaches to grading (e.g., specifications grading). The factors most associated with willingness to adopt a new grading approach were clarity to students and simplicity of adoption. Follow-up studies are underway to investigate implementations of alternative grading approaches, expand the study to universities and departments not involved in the initial study, examine student attitudes about alternative approaches, and refine the measure of attitude toward adoption of alternative grading practices within the survey. Workshops about the challenges of using percentage and point systems for determining grades and workshops regarding alternative approaches to grading are being offered.
Keywords: alternative approaches to grading, grades, higher education, letter grades
Procedia PDF Downloads 96
414 Multi-Objective Optimization of the Thermal-Hydraulic Behavior for a Sodium Fast Reactor with a Gas Power Conversion System and a Loss of off-Site Power Simulation
Authors: Avent Grange, Frederic Bertrand, Jean-Baptiste Droin, Amandine Marrel, Jean-Henry Ferrasse, Olivier Boutin
Abstract:
CEA and its industrial partners are designing a gas power conversion system (PCS) based on a Brayton cycle for the ASTRID sodium-cooled fast reactor. Investigations of the control and regulation requirements to operate this PCS during operating, incidental and accidental transients are necessary to adapt core heat removal. To this aim, we developed a methodology to optimize the thermal-hydraulic behavior of the reactor during normal operations, incidents and accidents. This methodology consists of a multi-objective optimization for a specific sequence, whose aim is to increase component lifetime by simultaneously reducing several thermal stresses and to bring the reactor into a stable state. Furthermore, the multi-objective optimization complies with safety and operating constraints. Operating, incidental and accidental sequences use specific regulations to control the thermal-hydraulic behavior of the reactor; each regulation is defined by a setpoint, a controller and an actuator. In the multi-objective problem, the parameters used to solve the optimization are the setpoints and the settings of the controllers associated with the regulations included in the sequence. In this way, the methodology allows designers to define an optimized, sequence-specific control strategy for the plant and hence to adapt the PCS piloting at its best. The multi-objective optimization is performed by evolutionary algorithms coupled to surrogate models built on variables computed by the thermal-hydraulic system code CATHARE2. The methodology is applied to a loss of off-site power sequence. Three variables are controlled: the sodium outlet temperature of the sodium-gas heat exchanger, the turbomachine rotational speed and the water flow through the heat sink. These regulations are chosen in order to minimize the thermal stresses on the gas-gas heat exchanger, on the sodium-gas heat exchanger and on the vessel. The main results of this work are optimal setpoints for the three regulations. Moreover, proportional-integral-derivative (PID) controller settings are considered, and efficient actuators used in the controls are chosen through sensitivity analysis results. Finally, the optimized regulation system and the reactor control procedure provided by the optimization process are verified through a direct CATHARE2 calculation.
Keywords: gas power conversion system, loss of off-site power, multi-objective optimization, regulation, sodium fast reactor, surrogate model
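At the core of such a multi-objective search is the notion of Pareto dominance over the surrogate-predicted objectives. Below is a minimal sketch (Python; the dominance filter is standard, while the candidate parameters and the placeholder "surrogate" outputs are illustrative assumptions, not the CATHARE2-based surrogates of the study):

```python
import numpy as np

def pareto_front(costs):
    """Boolean mask of non-dominated points for a minimization problem;
    costs has shape (n_points, n_objectives)."""
    n = len(costs)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # point j dominates i if j is <= i in all objectives and < in at least one
        dominated = np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1)
        keep[i] = not dominated.any()
    return keep

rng = np.random.default_rng(1)
candidates = rng.uniform(size=(200, 3))   # normalized setpoints/PID gains (hypothetical)
# placeholder surrogate outputs: two conflicting "thermal stress" objectives
stresses = np.column_stack([candidates[:, 0]**2 + candidates[:, 1],
                            (1 - candidates[:, 0])**2 + candidates[:, 2]])
front = candidates[pareto_front(stresses)]
print(len(front), "non-dominated candidate control settings")
```

In the actual methodology, the evolutionary algorithm iteratively proposes new candidates, scores them on the surrogate models, and retains the non-dominated set, before the selected settings are verified with a direct system-code calculation.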
Procedia PDF Downloads 309
413 Broadband Optical Plasmonic Antennas Using Fano Resonance Effects
Authors: Siamak Dawazdah Emami, Amin Khodaei, Harith Bin Ahmad, Hairul A. Adbul-Rashid
Abstract:
The Fano resonance effect in plasmonic nanoparticle materials gives such materials a number of unique optical properties, with potential applicability to sensing, nonlinear devices and slow-light devices. A Fano resonance is a consequence of coherent interference between superradiant and subradiant hybridized plasmon modes. Incident light excites the superradiant mode, which in turn couples to the subradiant modes; the subradiant modes possess zero or vanishingly small net dipole moments and a correspondingly negligible direct coupling with light. This research work details the derivation of an electrodynamic coupling model for the interaction of dipolar transitions and radiation in plasmonic nanoclusters such as quadrimers, pentamers and heptamers. The directivity is calculated in order to quantify the redirection of emission. The geometry of a configured array of nanostructures strongly influenced the transmission and reflection properties, so the directivity of each antenna was related to the nanosphere size and the gap distances between the nanospheres in each model's structure. A well-separated configuration of nanospheres resulted in the structure behaving similarly to monomers, with the spectral peak of a broad superradiant mode centered in the vicinity of the 560 nm wavelength. Reducing the distance between ring nanospheres in pentamers and heptamers to 20-60 nm caused the coupling factor and charge distributions to increase and invoked a subradiant mode centered in the vicinity of 690 nm. Increasing the distance of the outer ring's nanospheres from the central nanospheres caused the coupling factor to decrease, the coupling factor being inversely proportional to the cube of the distance between nanospheres. This phenomenon led to a dramatic decrease of the superradiant mode at a 200 nm distance between the central nanosphere and the outer ring; effects from the superradiant mode vanished beyond a 240 nm distance between central and outer-ring nanospheres.
Keywords: fano resonance, optical antenna, plasmonic, nano-clusters
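The cubic dependence quoted above matches the quasi-static picture in which two nanosphere dipoles couple through their near fields, with coupling strength falling off as 1/d³. A standard way to see how a bright (superradiant) and a dark (subradiant) mode produce the asymmetric Fano lineshape is the classical two-coupled-oscillator toy model sketched below; all parameter values are illustrative and unrelated to the structures studied here.

```python
# Hedged sketch: two-coupled-oscillator toy model of a Fano resonance.
# A bright mode driven by light couples to a weakly damped dark mode;
# their interference carves the asymmetric dip in the extinction spectrum.
import numpy as np

omega = np.linspace(1.5, 3.5, 2000)          # driving frequency (arb. units)
w_b, g_b = 2.5, 0.30                         # bright mode: frequency, strong damping
w_d, g_d = 2.4, 0.01                         # dark mode: frequency, weak damping
kappa = 0.25                                 # near-field coupling (scales as 1/d^3)

# Susceptibilities of the uncoupled modes
chi_b = 1.0 / (w_b**2 - omega**2 - 1j * g_b * omega)
chi_d = 1.0 / (w_d**2 - omega**2 - 1j * g_d * omega)

# Bright-mode response with the dark mode folded in; only the bright
# mode couples directly to the incident field
amp_bright = chi_b / (1.0 - kappa**2 * chi_b * chi_d)
extinction = omega * np.imag(amp_bright)     # proportional to absorbed power

mask = (omega > 2.2) & (omega < 2.6)
print("Fano dip near the dark-mode frequency (arb. units):",
      omega[mask][np.argmin(extinction[mask])])
```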
Procedia PDF Downloads 430
412 Renewable Energy Storage Capacity Rating: A Forecast of Selected Load and Resource Scenario in Nigeria
Authors: Yakubu Adamu, Baba Alfa, Salahudeen Adamu Gene
Abstract:
As clean, renewable and sustainable energy generation is gradually reshaped by growing renewable penetration, energy storage has become an attractive solution for utilities looking to reduce transmission and capacity costs. Capacity resources therefore need to be adjusted so that renewable energy storage has the opportunity to substitute for retiring conventional energy systems with higher capacity factors. In the Nigerian scenario, over 80% of current primary energy consumption is met by petroleum, and electricity demand is set to more than double by mid-century relative to 2025 levels. With renewable energy penetration rapidly increasing, in particular biomass, hydro power, solar and wind energy, renewables are expected to account for the largest share of power output in the coming decades. Despite this rapid growth, the imbalance between load and resources has hindered the development of energy storage capacity; forecasting energy storage capacity will therefore play an important role in maintaining the balance between load and resources, including supply and demand. The degree to which this might occur, its timing and, more importantly, its sustainability are the subject matter of the current research. Here, we forecast the future energy storage capacity rating and evaluate the load and resource scenario in Nigeria. To do so, we use the scenario-based International Energy Agency models, and the projected energy demand and supply structure of the country through 2030 is presented and analysed. Overall, this shows that in high renewable (solar) penetration scenarios in Nigeria, energy storage with 4-6 h duration can obtain over 86% capacity rating, with storage comprising about 24% of peak load capacity. The general takeaway is that most power systems currently in use have the potential to support fairly large penetrations of 4-6 hour storage as capacity resources before a substantial reduction in capacity ratings sets in. The data presented in this paper are a crucial eye-opener for relevant government agencies towards developing these energy resources to tackle the present energy crisis in Nigeria. However, if the transformation of the Nigerian power system continues primarily through expansion of renewable generation, then longer-duration energy storage will be needed to qualify as a capacity resource. Hence, the analysis in the current survey will help to determine whether and when long-duration storage becomes an integral component of the capacity mix expected in Nigeria by 2030.
Keywords: capacity, energy, power system, storage
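The idea of a capacity rating for duration-limited storage can be made concrete with a small peak-shaving calculation. The sketch below is purely illustrative: the synthetic evening-peaked load profile, the storage fleet sized at 24% of peak, and the bisection tolerance are assumptions for the example, not the study's data or model.

```python
# Hedged sketch: capacity rating of duration-limited storage, estimated as
# the deepest peak shave its power and energy limits allow on a daily profile.
import numpy as np

hours = np.arange(24)
load = 800 + 250 * np.exp(-0.5 * ((hours - 19) / 2.5) ** 2)   # MW, evening-peaked

def capacity_rating(power_mw, duration_h, load):
    """Fraction of nameplate power that effectively reduces the peak."""
    energy = power_mw * duration_h                   # MWh available per day
    lo, hi = load.max() - power_mw, load.max()       # bracket the achievable new peak
    for _ in range(60):                              # bisect on the new peak level
        mid = 0.5 * (lo + hi)
        shaved_energy = np.clip(load - mid, 0.0, None).sum()   # MWh above the level
        if shaved_energy <= energy:
            hi = mid                                 # feasible: try a deeper shave
        else:
            lo = mid
    return (load.max() - hi) / power_mw

for duration in (2, 4, 6):
    rating = capacity_rating(power_mw=0.24 * load.max(), duration_h=duration, load=load)
    print(f"{duration}-hour storage at 24% of peak: capacity rating ≈ {rating:.0%}")
```

With a sufficiently narrow peak, 4-6 hour storage shaves nearly its full nameplate power off the peak, i.e. a capacity rating approaching 100%, while shorter durations saturate earlier, which is the qualitative behavior behind the figures quoted above.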
Procedia PDF Downloads 38
411 The Closed Cavity Façade (CCF): Optimization of CCF for Enhancing Energy Efficiency and Indoor Environmental Quality in Office Buildings
Authors: Michalis Michael, Mauro Overend
Abstract:
Buildings, in which we spend 87-90% of our time, act as a shelter protecting us from environmental conditions and weather phenomena. A building's overall performance depends significantly on the glazed part of its envelope, which is particularly critical as it is the part most vulnerable to heat gain and heat loss, and conventional glazing technologies have relatively poor thermo-optical characteristics. As a result, heat losses through the glazing are significant in winter, as are heat gains in summer. This study examines the contribution of an innovative glazing technology, the Closed Cavity Façade (CCF), to improving energy efficiency and IEQ in office buildings, aiming to optimize various design configurations of CCF. Using the EnergyPlus and IDA ICE packages, the performance of several CCF configurations and geometries was investigated for various climate types in order to identify the optimum solution. The model used for the simulations and the optimization process was MATELab, a recently constructed outdoor test facility at the University of Cambridge (UK), which had previously been experimentally calibrated. The study revealed that using CCF technology instead of conventional double or triple glazing brings important benefits. In particular, replacing the traditional glazing units used as the baseline with the optimal CCF configuration decreased energy consumption by 18-37% (depending on the location). This occurs mainly because integrating shading devices in the cavity and applying proper glass coatings and control strategies improve the thermal transmittance and g-value of the glazing. Since solar gain through the façade is the main contributor to energy consumption during cooling periods, a higher energy improvement was observed in cooling-dominated locations. Furthermore, it was shown that a suitable selection of the constituents of a closed cavity façade, such as the colour and type of shading devices and the type of coatings, leads to an additional improvement of its thermal performance, avoiding overheating, ensuring cavity temperatures below the critical value, and reducing radiant discomfort, thereby providing extra benefits in terms of Indoor Environmental Quality (IEQ).
Keywords: building energy efficiency, closed cavity façade, optimization, occupant comfort
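The mechanism behind these savings can be illustrated with a first-order, steady-state heat balance on the glazing, where the U-value governs conductive exchange and the g-value governs transmitted solar gain. The values below are illustrative textbook-range figures, not measurements from MATELab or the study's optimized configurations.

```python
# Hedged sketch: steady-state heat balance through glazing, comparing a
# conventional double-glazed unit with CCF-like illustrative properties.
def glazing_load(u_value, g_value, area_m2, t_in, t_out, solar_wm2):
    """Net heat flow into the room through the glazing, in watts."""
    transmission = u_value * area_m2 * (t_out - t_in)  # W, conductive/convective exchange
    solar_gain = g_value * area_m2 * solar_wm2         # W, transmitted solar radiation
    return transmission + solar_gain

A = 10.0  # m2 of glazed façade (assumed)
for label, u, g in [("conventional double glazing", 2.8, 0.60),
                    ("CCF with in-cavity shading", 1.0, 0.15)]:
    summer = glazing_load(u, g, A, t_in=24.0, t_out=32.0, solar_wm2=600.0)
    winter = glazing_load(u, g, A, t_in=21.0, t_out=0.0, solar_wm2=100.0)
    print(f"{label}: summer gain {summer:+.0f} W, winter balance {winter:+.0f} W")
```

The dominant term in summer is the solar gain, which is why lowering the g-value with in-cavity shading drives most of the benefit in cooling-dominated climates, consistent with the abstract's observation.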
Procedia PDF Downloads 65
410 The Predictive Utility of Subjective Cognitive Decline Using Item Level Data from the Everyday Cognition (ECog) Scales
Authors: J. Fox, J. Randhawa, M. Chan, L. Campbell, A. Weakely, D. J. Harvey, S. Tomaszewski Farias
Abstract:
Early identification of individuals at risk of conversion to dementia provides an opportunity for preventative treatment. Many older adults (30-60%) report specific subjective cognitive decline (SCD); however, previous research is inconsistent about which types of complaints predict future cognitive decline. The purpose of this study is to identify which specific complaints from the Everyday Cognition (ECog) scales, a measure of self-reported concerns about everyday abilities across six cognitive domains, are associated with: 1) conversion from a clinical diagnosis of normal to either MCI or dementia (categorical variable) and 2) progressive cognitive decline in memory and executive function (continuous variables). 415 cognitively normal older adults were monitored annually for an average of 5 years. Cox proportional hazards models were used to assess associations between self-reported ECog items and progression to impairment (MCI or dementia). A total of 114 individuals progressed to impairment; the mean time to progression was 4.9 years (SD=3.4 years, range=0.8-13.8). Follow-up models were run controlling for depression. A subset of individuals (n=352) underwent repeat cognitive assessments for an average of 5.3 years. For those individuals, mixed effects models with random intercepts and slopes were used to assess associations between ECog items and change in neuropsychological measures of episodic memory or executive function. Before controlling for depression, subjective concerns on five of the eight Everyday Memory items, three of the nine Everyday Language items, one of the seven Everyday Visuospatial items, two of the five Everyday Planning items, and one of the six Everyday Organization items were associated with subsequent diagnostic conversion (HR=1.25 to 1.59, p=0.003 to 0.03). After controlling for depression, however, only two specific complaints (remembering appointments, meetings, and engagements; and understanding spoken directions and instructions) remained associated with subsequent diagnostic conversion. Episodic memory in individuals reporting no concern on ECog items did not change significantly over time (p>0.4). More complaints on seven of the eight Everyday Memory items, three of the nine Everyday Language items, and three of the seven Everyday Visuospatial items were associated with a decline in episodic memory (interaction estimate=-0.055 to 0.001, p=0.003 to 0.04). Executive function in those reporting no concern on ECog items declined slightly (p<0.001 to 0.06). More complaints on three of the eight Everyday Memory items and three of the nine Everyday Language items were associated with a decline in executive function (interaction estimate=-0.021 to -0.012, p=0.002 to 0.04). These findings suggest that specific complaints across several cognitive domains are associated with diagnostic conversion, and that specific complaints in the Everyday Memory and Language domains are associated with decline in both episodic memory and executive function. Increased monitoring and treatment of individuals with these specific SCD may be warranted.
Keywords: alzheimer’s disease, dementia, memory complaints, mild cognitive impairment, risk factors, subjective cognitive decline
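The survival analysis described above has a standard shape: a Cox proportional hazards model relating a baseline ECog item score to time until diagnostic conversion, with depression as a covariate in the follow-up models. The sketch below uses simulated data and illustrative column names; it is not the study's code or its variable names.

```python
# Hedged sketch: Cox proportional hazards model of time to diagnostic
# conversion as a function of a baseline ECog item score. Data are simulated.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 415
ecog_item = rng.integers(1, 5, size=n)            # 1 = no concern ... 4 = much worse
depression = rng.normal(size=n)                   # standardized depression score

# Simulate conversion times whose hazard rises with the complaint score
hazard = 0.05 * np.exp(0.3 * (ecog_item - 1) + 0.2 * depression)
time_to_event = rng.exponential(1.0 / hazard)
follow_up = rng.uniform(3, 14, size=n)            # years of observation (censoring)
converted = (time_to_event <= follow_up).astype(int)

df = pd.DataFrame({
    "years": np.minimum(time_to_event, follow_up),
    "converted": converted,
    "ecog_item": ecog_item,
    "depression": depression,                     # covariate, as in the follow-up models
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="converted")
cph.print_summary()                               # hazard ratios per unit of item score
```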
Procedia PDF Downloads 80
409 Traumatic Events, Post-Traumatic Symptoms, Personal Resilience, Quality of Life, and Organizational Commitment Among Midwives: A Cross-Sectional Study
Authors: Kinneret Segal
Abstract:
The work of a midwife is emotionally challenging, both positively and negatively. Midwives share moments of joy when a baby is welcomed into the world, and also attend difficult events of loss and trauma. The relationship that develops with the birthing woman is the essence of the midwife's care and a fundamental source of motivation and professional satisfaction. This close relationship may, however, act as a double-edged sword in cases of exposure to traumatic events at birth. Birth complications, exposure to emergencies and traumatic events, and loss can affect the midwife's professional quality of life and compassion satisfaction. The issue of traumatic experiences in the work of midwives appears not to have been sufficiently explored. The present study examined the associations between exposure to traumatic events, personal resilience, post-traumatic symptoms, professional quality of life and organizational commitment among midwifery nurses in Israeli hospitals. 131 midwives from three hospitals in the center of Israel participated in the study. The data were collected during 2021 using a self-report questionnaire covering sociodemographic characteristics, the degree of exposure to traumatic events in the delivery room, personal resilience, post-traumatic symptoms, professional quality of life, and organizational commitment. The three traumatic events most difficult for the midwives were death or fear of death of a newborn, death or fear of death of a mother, and a quiet birth (stillbirth). The higher the frequency of exposure to traumatic events, the more numerous and intense the post-traumatic symptoms. The more numerous and powerful the post-traumatic symptoms, the higher the level of professional burnout and/or compassion fatigue, and the lower the level of compassion satisfaction. High levels of compassion satisfaction and/or low professional burnout were expressed in a heightened sense of organizational commitment. Personal resilience, country of birth, traumatic symptoms and organizational commitment predicted compassion satisfaction. Midwives are exposed to traumatic events associated with dissatisfaction and impairment of professional quality of life, accompanied by burnout and compassion fatigue. Exposure to traumatic events leads to the appearance of traumatic symptoms, a decrease in organizational commitment, and reduced psychological and mental well-being. The issue needs to be addressed by implementing training programs, organizational support, and policies to improve well-being and quality of care among midwives.
Keywords: traumatic experiences, midwives, quality of life, burnout, organizational commitment, personal resilience
Procedia PDF Downloads 87
408 A Validated Estimation Method to Predict the Interior Wall of Residential Buildings Based on Easy to Collect Variables
Authors: B. Gepts, E. Meex, E. Nuyts, E. Knaepen, G. Verbeeck
Abstract:
The importance of resource efficiency and environmental impact assessment has raised interest in knowing the amount of materials used in buildings. If no BIM model or energy performance certificate is available, material quantities can be obtained through an estimation or a time-consuming calculation. For the interior wall area, no validated estimation method exists. However, for environmental impact assessment or for evaluating the existing building stock as a future material bank, knowledge of the material quantities used in interior walls is indispensable. This paper presents a validated method for estimating the interior wall area of dwellings based on easy-to-collect building characteristics. A database of 4963 residential buildings spread all over Belgium is used. The data were collected through on-site measurements of the buildings during the construction phase (between mid-2010 and mid-2017). The interior wall area refers to the area of all interior walls in the building, including the inner leaf of exterior (party) walls, minus the area of windows and doors, unless mentioned otherwise. The two predictive modelling techniques used are 1) a (stepwise) linear regression and 2) a decision tree. The best estimation method is selected based on the best 5-fold cross-validated R² fit. The research shows that building volume is by far the most important variable for estimating the interior wall area. A stepwise regression based on building volume, building typology, and type of house provides the best fit, with k-fold (5) R² = 0.88. Although the best R² value is obtained when the parameters 'building typology' and 'type of house' are included, the contribution of these variables is statistically significant but practically irrelevant. Thus, if these parameters are not available, a simplified estimation method based on the building volume alone can also be applied (k-fold R² = 0.87). The robustness and precision of the method are validated in three ways. Firstly, the prediction of the interior wall area is checked against alternative calculations of the building volume and of the interior wall area, i.e. other definitions applied to the same data. Secondly, the output is tested on an extension of the database, i.e. the same definitions applied to other data. Thirdly, the output is checked against an unrelated database with other definitions and other data. The validation demonstrates that the methods remain accurate when the underlying data change. The method can support the environmental as well as the economic dimension of impact assessment, as it can be used in early design. As it allows predicting the amount of interior wall materials to be produced in the future or becoming available after demolition, the presented estimation method can be part of material flow analyses on input and on output.
Keywords: buildings as material banks, building stock, estimation method, interior wall area
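The reported modelling step can be mirrored with an ordinary least-squares fit scored by 5-fold cross-validated R². The sketch below runs on simulated data, since the Belgian database is not public; the coefficient linking wall area to volume and the typology coding are assumptions made for the example.

```python
# Hedged sketch: linear regression of interior wall area on building volume,
# scored with 5-fold cross-validated R², mirroring the reported analysis.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 4963                                             # matches the database size
volume = rng.uniform(250, 1500, size=n)              # building volume, m3
typology = rng.integers(0, 3, size=n)                # coded building typology (assumed)
# Assumed relation for the simulation: wall area roughly linear in volume
wall_area = 0.45 * volume + 15.0 * typology + rng.normal(0.0, 40.0, size=n)

for name, X in [("volume only", volume.reshape(-1, 1)),
                ("volume + typology", np.column_stack([volume, typology]))]:
    r2 = cross_val_score(LinearRegression(), X, wall_area, cv=5, scoring="r2")
    print(f"{name}: mean 5-fold R² = {r2.mean():.2f}")
```

As in the study, adding the secondary predictor nudges the cross-validated R² upward only marginally when volume already carries most of the signal.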
Procedia PDF Downloads 33
407 The Effect of Nanotechnology Structured Water on Lower Urinary Tract Symptoms in Men with Benign Prostatic Hyperplasia: A Double-Blinded Randomized Study
Authors: Ali Kamal M. Sami, Safa Almukhtar, Alaa Al-Krush, Ismael Hama-Amin Akha Weas, Ruqaya Ahmed Alqais
Abstract:
Introduction and objectives: Lower urinary tract symptoms (LUTS) are common among men with benign prostatic hyperplasia (BPH). The combination of 5-alpha-reductase inhibitors and alpha-blockers has been used as a conservative treatment of male LUTS secondary to BPH. Nanotechnology structured water (magnalife) is produced by modulators and specific frequency and energy fields that transform ordinary water into this nano-water. In this study, we evaluated the use of nano-water alongside the conservative treatment, to see whether it improves the outcome and gives better results in patients with LUTS/BPH.
Materials and methods: For a period of 3 months, 200 men with International Prostate Symptom Score (IPSS) ≥13, maximum flow rate (Qmax) ≤15 ml/s, and prostate volume between 30 and 80 cc were randomly divided into two groups. Group A (100 men) was given nano-water with tamsulosin-dutasteride, and group B (100 men) was given ordinary bottled water with tamsulosin-dutasteride. The water bottles were unlabeled and were given in a daily dose of 20 ml/kg body weight; dutasteride 0.5 mg and tamsulosin 0.4 mg were given in daily doses. Both groups were evaluated for IPSS, Qmax, residual urine (RU) and the International Index of Erectile Function-Erectile Function (IIEF-EF) domain at baseline and at the end of the 3 months.
Results: Of the 200 men with LUTS included in this study, 193 were followed and 7 dropped out for various reasons. In group A, which included 97 men with LUTS, IPSS decreased by 16.82 (from 20.47 to 6.65) (P<0.00001), Qmax increased by 5.73 ml/s (from 11.71 to 17.44) (P<0.00001), RU was <50 ml in 88% of patients (P<0.00001) and IIEF-EF increased to 26.65 (from 16.85) (P<0.00001). In group B (96 men with LUTS), IPSS decreased by 8.74 (from 19.59 to 10.85) (P<0.00001), Qmax increased by 4.67 ml/s (from 10.74 to 15.41) (P<0.00001), RU was <50 ml in 75% of patients (P<0.00001), and IIEF-EF increased to 21 (from 15.87) (P<0.00001). Group A had better results than group B: IPSS in group A decreased to 6.65 vs 10.85 in group B (P<0.00001), Qmax increased to 17.44 in group A vs 15.41 in group B (P<0.00001), group A had RU <50 ml in 88% of patients vs 75% in group B (P<0.00001), and group A had better IIEF-EF, which increased to 26.65 vs 21 in group B (P<0.00001). The differences between the baseline data of the two groups were not statistically significant.
Conclusion: The use of nanotechnology structured water (magnalife) gave better results in terms of LUTS and symptom scores in patients with BPH. The combination showed improvements in IPSS and even in erectile function in these men after 3 months.
Keywords: nano water, lower urinary tract symptoms, benign prostatic hypertrophy, erectile dysfunction
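The between-group comparisons reported above are of the kind an independent-samples t-test on change scores would produce. The sketch below simulates IPSS reductions centred on the reported group means purely for illustration; the raw trial data are not available, and the assumed standard deviation is arbitrary.

```python
# Hedged sketch: independent-samples t-test on simulated IPSS change scores
# whose means match the reported group reductions (SD is an assumption).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a = rng.normal(loc=16.82, scale=4.0, size=97)   # nano-water + tamsulosin-dutasteride
group_b = rng.normal(loc=8.74, scale=4.0, size=96)    # ordinary water + tamsulosin-dutasteride

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, two-sided p = {p_value:.3g}")
```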
Procedia PDF Downloads 73
406 Evaluation of the Irritation Potential of Three Topical Formulations of Minoxidil 5% + Finasteride 0.1% Using Patch Test
Authors: Joshi Rajiv, Shah Priyank, Thavkar Amit, Rohira Poonam, Mehta Suyog
Abstract:
A topical formulation containing minoxidil and finasteride helps hair growth in the treatment of male androgenetic alopecia. The objective of this study is to compare the irritation potential of three conventional formulations of minoxidil 5% + finasteride 0.1% topical solution in a human patch test. The study was a single-centre, double-blind, non-randomized controlled study in 53 healthy adult Indian subjects. An occlusive patch test for 24 hours was performed with three formulations of minoxidil 5% + finasteride 0.1% topical solution. Products tested included an aqueous-based minoxidil 5% + finasteride 0.1% (AnasureTM-F, Sun Pharma, India; Brand A), a lipid-based minoxidil 5% + finasteride 0.1% (Brand B) and an aqueous-based minoxidil 5% + finasteride 0.1% (Brand C). Isotonic saline 0.9% and 1% w/w sodium lauryl sulphate were included as negative and positive controls, respectively. Patches were applied and removed after 24 hours. The skin reaction was assessed and clinically scored 24 hours after removal of the patches under a constant artificial daylight source using the Draize scale (0-4 point scale for erythema/dryness/wrinkles and for oedema). Follow-up was scheduled after one week to confirm recovery from any reaction. A combined mean score up to 2.0/8.0 indicates a product is "non-irritant", a score between 2.0/8.0 and 4.0/8.0 indicates "mildly irritant" and a score above 4.0/8.0 indicates "irritant". The procedure of the patch test followed the principles outlined by the Bureau of Indian Standards (BIS) (IS 4011:2018; Methods of Test for Safety Evaluation of Cosmetics, 3rd revision). Fifty-three subjects with mean age 31.9 years (25 males and 28 females) participated in the study. The combined mean scores ± standard deviation were: 0.06 ± 0.23 (Brand A), 0.81 ± 0.59 (Brand B), 0.38 ± 0.49 (Brand C), 2.92 ± 0.47 (positive control) and 0.0 ± 0.0 (negative control). The mean score of Brand A (Sun Pharma product) was significantly lower than that of Brand B (p=0.001) and that of Brand C (p=0.001). The combined mean erythema scores ± standard deviation were: 0.06 ± 0.23 (Brand A), 0.81 ± 0.59 (Brand B), 0.38 ± 0.49 (Brand C), 2.09 ± 0.4 (positive control) and 0.0 ± 0.0 (negative control). The mean erythema score of Brand A was significantly lower than that of Brand B (p=0.001) and that of Brand C (p=0.001). Any reaction observed at 24 hours after patch removal subsided within a week. All three topical formulations of minoxidil 5% + finasteride 0.1% were non-irritant. Brand A of minoxidil 5% + finasteride 0.1% (Sun Pharma) was found to be the least irritant of the three, based on the combined mean score and mean erythema score in the human patch test as per BIS IS 4011:2018.
Keywords: erythema, finasteride, irritation, minoxidil, patch test
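The BIS interpretation rule quoted above maps a combined mean Draize score (out of 8) to an irritation category, which can be written as a small classifier. The scores fed to it below are the study's reported combined means; the function itself is just a restatement of the quoted thresholds.

```python
# Hedged sketch: the IS 4011:2018 scoring rule quoted in the abstract,
# expressed as a classifier over the combined mean Draize score (0-8).
def classify_irritation(combined_mean_score: float) -> str:
    """Map a combined mean patch-test score to the BIS irritation category."""
    if combined_mean_score <= 2.0:
        return "non-irritant"
    if combined_mean_score <= 4.0:
        return "mildly irritant"
    return "irritant"

for brand, score in [("Brand A", 0.06), ("Brand B", 0.81),
                     ("Brand C", 0.38), ("positive control", 2.92)]:
    print(f"{brand}: {score:.2f} -> {classify_irritation(score)}")
```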
Procedia PDF Downloads 85
405 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes
Authors: Angela U. Makolo
Abstract:
Protein-coding and non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that non-coding regions are important in disease progression and clinical diagnosis, yet existing bioinformatics tools have targeted protein-coding regions alone, creating challenges in gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both protein-coding and non-coding regions. Alignment-free techniques can overcome this limitation. This study was therefore designed to develop an efficient, sequence-alignment-free model for identifying both protein-coding and non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in the 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the protein-coding and non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was assessed in terms of F1 score, accuracy, sensitivity, and specificity, and its average generalization performance was determined using a benchmark of multi-species organisms. The generalization error for identifying protein-coding and non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over three iterations. The cost (the difference between the predicted and the actual outcome) likewise decreased from 1.446 to 0.842 and then to 0.718 over the first, second and third iterations. The iterations terminated at the 390th epoch, with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an area under the ROC curve of 0.97, indicating an improved predictive ability. The PNRI identified both protein-coding and non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. On 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying protein-coding and non-coding regions in transcriptomes. The developed Protein-coding and Non-coding Region Identifier model efficiently identified protein-coding and non-coding transcriptomic regions and could be used in genome annotation and in the analysis of transcriptomes.
Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation
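The core of the described model, logistic regression over six features fit by gradient descent on the negative log-likelihood followed by a dynamically chosen decision threshold, can be sketched as follows. The features and labels are simulated; the reported coefficient vector is reused only to generate plausible synthetic data, and the threshold rule (maximizing Youden's J) is an assumption, since the abstract does not specify the thresholding criterion.

```python
# Hedged sketch: logistic regression with a sigmoid activation, gradient
# descent on the mean negative log-likelihood, and a dynamic threshold.
import numpy as np

rng = np.random.default_rng(3)
n, d = 37_503, 6                                   # sample and feature counts from the paper
X = rng.normal(size=(n, d))
true_w = np.array([0.043, 0.519, 0.715, 0.878, 1.157, 2.575])  # reported coefficients, reused to simulate
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
learning_rate = 0.5
for epoch in range(390):                           # the paper reports termination at epoch 390
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / n                       # gradient of the mean negative log-likelihood
    w -= learning_rate * grad

# Dynamic thresholding: pick the cut-off maximizing sensitivity + specificity - 1
scores = sigmoid(X @ w)
best_threshold, best_j = 0.5, -1.0
for t in np.linspace(0.05, 0.95, 91):
    pred = scores >= t
    sensitivity = (pred & (y == 1)).sum() / (y == 1).sum()
    specificity = (~pred & (y == 0)).sum() / (y == 0).sum()
    if sensitivity + specificity - 1.0 > best_j:
        best_threshold, best_j = t, sensitivity + specificity - 1.0

print("learned coefficients:", np.round(w, 3))
print("chosen threshold:", round(best_threshold, 2))
```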
Procedia PDF Downloads 68