Search results for: dynamic motion
210 Benefits of The ALIAmide Palmitoyl-Glucosamine Co-Micronized with Curcumin for Osteoarthritis Pain: A Preclinical Study
Authors: Enrico Gugliandolo, Salvatore Cuzzocrea, Rosalia Crupi
Abstract:
Osteoarthritis (OA) is one of the most common chronic pain conditions in dogs and cats. OA pain is currently viewed as a mixed phenomenon involving both inflammatory and neuropathic mechanisms at the peripheral (joint) and central (spinal and supraspinal) levels. Oxidative stress has been implicated in OA pain. Although nonsteroidal anti-inflammatory drugs are commonly prescribed for OA pain, they should be used with caution in pets because of long-term adverse effects and controversial efficacy on neuropathic pain. An unmet need remains for safe and effective long-term treatments for OA pain. Palmitoyl-glucosamine (PGA) is an analogue of the ALIAmide palmitoylethanolamide, i.e., an endogenous endocannabinoid-like compound playing a sentinel role in nociception. PGA, especially in the micronized formulation, has been shown to be safe and effective against OA pain. The aim of this study was to investigate the effect of a co-micronized formulation of PGA with the natural antioxidant curcumin (PGA-cur) on OA pain. Ten male Sprague-Dawley rats were used for each treatment group. The University of Messina Review Board for the care and use of animals authorized the study. On day 0, rats were anesthetized (5.0% isoflurane in 100% O2) and received an intra-articular injection of MIA (3 mg in 25 μl saline) in the right knee joint, while the left knee was injected with an equal volume of saline. Starting on the third day after MIA injection, treatments were administered orally three times per week for 21 days at the following doses: PGA 20 mg/kg, curcumin 10 mg/kg, and PGA-cur (2:1 ratio) 30 mg/kg. On day 0 and on days 3, 7, 14 and 21 post-injection, mechanical allodynia was measured using a dynamic plantar von Frey aesthesiometer and expressed as paw withdrawal threshold (PWT) and latency (PWL). Motor functional recovery of the rear limb was evaluated at the same time points by walking track analysis using the sciatic functional index. On day 21 post-MIA injection, the concentrations of the following inflammatory and nociceptive mediators were measured in serum using commercial ELISA kits: tumor necrosis factor alpha (TNF-α), interleukin-1 beta (IL-1β), nerve growth factor (NGF) and matrix metalloproteinases 1, 3 and 9 (MMP-1, MMP-3, MMP-9). The results were analyzed by ANOVA followed by the Bonferroni post-hoc test for multiple comparisons. Micronized PGA reduced neuropathic pain, as shown by significantly higher PWT and PWL values compared to the vehicle group (p < 0.0001 for all the evaluated time points). The effect of PGA-cur was superior at all time points (p < 0.005). PGA-cur restored motor function as early as day 14 (p < 0.005), while micronized PGA was effective a week later (day 21). The MIA-induced increase in the serum levels of all the investigated mediators was inhibited by PGA-cur (p < 0.01). PGA was also effective, except on IL-1β and MMP-3. Curcumin alone was inactive in all the experiments at any time point. These encouraging results suggest that PGA-cur may represent a valuable option in OA pain management and warrant further confirmation in well-powered clinical trials.
Keywords: ALIAmides, curcumin, osteoarthritis, palmitoyl-glucosamine
209 Thermal Stress and Computational Fluid Dynamics Analysis of Coatings for High-Temperature Corrosion
Authors: Ali Kadir, O. Anwar Beg
Abstract:
Thermal barrier coatings are among the most popular methods for providing corrosion protection in high-temperature applications, including aircraft engine systems, external spacecraft structures and rocket chambers. Many different materials are available for such coatings, of which ceramics generally perform the best. Motivated by these applications, the current investigation presents detailed finite element simulations of coating stress analysis for a three-dimensional, three-layered model of a test sample representing a typical gas turbine component scenario. Structural steel is selected for the main inner layer, titanium (Ti) alloy for the middle layer and silicon carbide (SiC) for the outermost layer. The model dimensions are 20 mm (width), 10 mm (height) and three 1 mm deep layers. ANSYS software is employed to conduct three types of analysis: static structural analysis, thermal stress analysis and computational fluid dynamics erosion/corrosion analysis (via ANSYS FLUENT). The specified geometry, which corresponds exactly to the corrosion test samples, is discretized using a body-sizing meshing approach comprising mainly tetrahedral cells. Refinements were concentrated at the connection points between the layers to shift the focus towards the static effects dissipated between them. A detailed grid independence study is conducted to confirm the accuracy of the selected mesh densities. To recreate gas turbine scenarios, static loading and thermal environment conditions of up to 1000 N and 1000 K are imposed in the stress analysis simulations. The default solver was used to set the controls for the simulation, with one side of the model set as a fixed support while the opposite side was subjected to a tabular force of 500 and 1000 Newtons. Equivalent elastic strain, total deformation, equivalent stress and strain energy were computed for all cases. Each analysis was duplicated twice, removing one of the layers each time, to allow testing of the static and thermal effects with each of the coatings. An ANSYS FLUENT simulation was conducted to study the effect of corrosion on the model under similar thermal conditions. The momentum and energy equations were solved, and the viscous heating option was applied to better represent the thermal physics of heat transfer between the layers of the structures. A Discrete Phase Model (DPM) in ANSYS FLUENT was employed, which allows for the injection of continuous uniform air particles onto the model, thereby enabling the calculation of the corrosion factor caused by hot air injection (particles prescribed a 5 m/s velocity and 1273.15 K). Extensive visualization of results is provided. The simulations reveal interesting features associated with coating response to realistic gas turbine loading conditions, including significantly different stress concentrations with different coatings.
Keywords: thermal coating, corrosion, ANSYS FEA, CFD
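For orientation, the magnitude of the thermally induced stresses examined in such simulations can be estimated with the standard biaxial thermal-mismatch relation for a thin coating on a much thicker substrate; this is a hedged, textbook approximation, not the ANSYS result itself:

```latex
% Approximate in-plane stress in a thin coating constrained by a substrate
% (biaxial, elastic, textbook estimate):
\sigma_c \;\approx\; \frac{E_c\,(\alpha_s - \alpha_c)\,\Delta T}{1 - \nu_c}
```

where E_c and ν_c are the coating's Young's modulus and Poisson's ratio, α_s and α_c the thermal expansion coefficients of substrate and coating, and ΔT the imposed temperature change; the mismatch in α between steel, Ti alloy and SiC is what drives the different stress concentrations reported for the different coatings.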
208 Considering Aerosol Processes in Nuclear Transport Package Containment Safety Cases
Authors: Andrew Cummings, Rhianne Boag, Sarah Bryson, Gordon Turner
Abstract:
Packages designed for the transport of radioactive material must satisfy rigorous safety regulations specified by the International Atomic Energy Agency (IAEA). Higher Activity Waste (HAW) transport packages have to maintain containment of their contents during normal and accident conditions of transport (NCT and ACT). To ensure the containment criteria are satisfied, these packages are required to be leak-tight in all transport conditions to meet allowable activity release rates. Package design safety reports are the safety cases that provide the claims, evidence and arguments to demonstrate that packages meet the regulations; once these are approved by the competent authority (in the UK this is the Office for Nuclear Regulation), a licence to transport radioactive material is issued for the package(s). The standard approach to demonstrating containment in the RWM transport safety case is set out in BS EN ISO 12807. In this document, a method for measuring a leak rate from the package is explained by way of a small interspace test volume situated between two O-ring seals on the underside of the package lid. The interspace volume is pressurised and a pressure drop measured. A small interspace test volume makes the method more sensitive, enabling the measurement of smaller leak rates. By ascertaining the activity of the contents, identifying a releasable fraction of material and treating that fraction of material as a gas, allowable leak rates for NCT and ACT are calculated. This adherence to the basic safety principles of ISO 12807 is very pessimistic and is current practice in the demonstration of transport safety, which is accepted by the UK regulator. It is UK government policy that management of HAW will be through geological disposal. It is proposed that the intermediate-level waste be transported to the geological disposal facility (GDF) in large cuboid packages. This poses a challenge for containment demonstration because such packages will have long seals and therefore large interspace test volumes. There is also uncertainty in the releasable fraction of material within the package ullage space. This is because the waste may be in many different forms, which makes it difficult to define the fraction of material released by the waste package. Additionally, because of the large interspace test volume, measuring the calculated leak rates may not be achievable. For this reason, a justification for a lower releasable fraction of material is sought. This paper considers the use of aerosol processes to reduce the releasable fraction for both NCT and ACT. It reviews the basic coagulation and removal processes and applies the dynamic aerosol balance equation. The proposed solution includes only the most well-understood physical processes, namely Brownian coagulation and gravitational settling. Other processes have been eliminated either on the basis that they would serve to reduce the release to the environment further (pessimistically, in keeping with the essence of nuclear transport safety cases) or that they are not credible in the conditions of transport considered.
Keywords: aerosol processes, Brownian coagulation, gravitational settling, transport regulations
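For reference, the dynamic aerosol balance (general dynamic equation) restricted to the two processes retained above can be written schematically for the number density n(v, t) of particles of volume v inside an enclosure of settling height H; this is a textbook formulation, not a reproduction of the authors' exact model:

```latex
\frac{\partial n(v,t)}{\partial t} =
  \underbrace{\tfrac{1}{2}\int_0^{v} K(v-u,u)\,n(v-u,t)\,n(u,t)\,\mathrm{d}u
  - n(v,t)\int_0^{\infty} K(v,u)\,n(u,t)\,\mathrm{d}u}_{\text{Brownian coagulation}}
  \;-\; \underbrace{\frac{v_s(v)}{H}\,n(v,t)}_{\text{gravitational settling}},
\qquad
v_s(v) = \frac{\rho_p\, d(v)^2\, g\, C_c}{18\,\mu}
```

where K is the Brownian coagulation kernel, v_s the Stokes settling velocity, ρ_p the particle density, d(v) the particle diameter, C_c the Cunningham slip correction and μ the gas viscosity. Both terms only deplete the airborne (and hence releasable) fraction over time, which is why restricting the model to them remains pessimistic.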
207 Numerical Analyses of Dynamics of Deployment of PW-Sat2 Deorbit Sail Compared with Results of Experiment under Micro-Gravity and Low Pressure Conditions
Authors: P. Brunne, K. Ciechowska, K. Gajc, K. Gawin, M. Gawin, M. Kania, J. Kindracki, Z. Kusznierewicz, D. Pączkowska, F. Perczyński, K. Pilarski, D. Rafało, E. Ryszawa, M. Sobiecki, I. Uwarowa
Abstract:
The large amount of space debris now constitutes a real threat to operating spacecraft; therefore, the main purpose of the PW-Sat2 team was to create a system that could help clean the Earth's orbit after each small satellite's mission. After four years of development, a motorless, low-energy-consumption and low-weight system has been created. During a series of tests, the system has shown highly reliable efficiency. The PW-Sat2 deorbit system is a square-shaped sail which covers an area of 4 m². The sail surface is made of 6 μm aluminized Mylar film which is stretched across four diagonally placed arms, each consisting of two C-shaped flat springs and enveloped in Mylar sleeves. The sail is coiled using a special, custom-designed folding stand that provides automation and repeatability of the sail unwinding tests, and is placed in a container with an inner diameter of 85 mm. In the final configuration, the deorbit system weighs ca. 600 g and occupies 0.6U (in accordance with the CubeSat standard). The sail's release system requires a minimal amount of power and is based on a thermal knife that burns through the Dyneema wire holding the system before deployment. The sail is pushed out of the container to a safe distance (20 cm away) from the satellite. The energy for the deployment is provided entirely by the coiled C-shaped flat springs, which unfold the sail surface during release. To avoid dynamic effects on the satellite's structure, there is a rotational link between the sail and the satellite's main body. To obtain complete knowledge about the complex dynamics of the deployment, a number of experiments have been performed in varied environments. A numerical model of the dynamics of the sail's deployment has been built and is still under continuous development. Currently, the integration of the flight model and the deorbit sail is being performed. The launch is scheduled for February 2018. At the same time, in cooperation with the United Nations Office for Outer Space Affairs, sail models and the requested facilities are being prepared for the sail deployment experiment under micro-gravity and low pressure conditions at the Bremen Drop Tower, Germany. Results of those tests will provide comprehensive knowledge about deployment in the space environment to which the system will be exposed during its mission. Outcomes of the numerical model and the tests will afterwards be compared and will help the team in building a reliable and correct model of the very complex phenomenon of the deployment of four C-shaped flat springs with the sail surface attached. The verified model could be used, inter alia, to investigate whether the PW-Sat2 sail is scalable and how far it can be enlarged when creating systems for bigger satellites.
Keywords: cubesat, deorbitation, sail, space, debris
206 Identification and Understanding of Colloidal Destabilization Mechanisms in Geothermal Processes
Authors: Ines Raies, Eric Kohler, Marc Fleury, Béatrice Ledésert
Abstract:
In this work, the impact of clay minerals on the formation damage of sandstone reservoirs is studied to provide a better understanding of the problem of deep geothermal reservoir permeability reduction due to fine particle dispersion and migration. In some situations, despite the presence of filters in the geothermal loop at the surface, particles smaller than the filter size (<1 µm) may surprisingly generate significant permeability reduction, affecting the overall performance of the geothermal system in the long term. Our study is carried out on cores from a Triassic reservoir in the Paris Basin (Feigneux, 60 km northeast of Paris). Our first goal was to identify the clays responsible for clogging; a mineralogical characterization of these natural samples was carried out by coupling X-Ray Diffraction (XRD), Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray Spectroscopy (EDS). The results show that the studied stratigraphic interval contains mostly illite and chlorite particles. Moreover, the spatial arrangement of the clays in the rocks, as well as the morphology and size of the particles, suggests that illite is more easily mobilized than chlorite by the flow in the pore network. Thus, based on these results, illite particles were prepared and used in core flooding in order to better understand the factors leading to the aggregation and deposition of this type of clay particle in geothermal reservoirs under various physicochemical and hydrodynamic conditions. First, the stability of illite suspensions under geothermal conditions was investigated using different characterization techniques, including Dynamic Light Scattering (DLS) and Scanning Transmission Electron Microscopy (STEM). Various parameters such as the hydrodynamic radius (around 100 nm) and the morphology and surface area of the aggregates were measured. Then, core-flooding experiments were carried out using sand columns to mimic the permeability decline due to the injection of illite-containing fluids into sandstone reservoirs. In particular, the effects of ionic strength, temperature, particle concentration and flow rate of the injected fluid were investigated. When the ionic strength increases, a permeability decline of more than a factor of 2 could be observed for pore velocities representative of in-situ conditions. Further details of the retention of particles in the columns were obtained from Magnetic Resonance Imaging and X-ray Tomography techniques, showing that the particle deposition is nonuniform along the column. It is clearly shown that very fine particles as small as 100 nm can generate significant permeability reduction under specific conditions in high-permeability porous media representative of the Triassic reservoirs of the Paris Basin. These retention mechanisms are explained in the general framework of the DLVO theory.
Keywords: geothermal energy, reinjection, clays, colloids, retention, porosity, permeability decline, clogging, characterization, XRD, SEM-EDS, STEM, DLS, NMR, core flooding experiments
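As a pointer for readers unfamiliar with the framework invoked above, the DLVO interaction energy between two equal spherical particles of radius a at surface separation h is commonly approximated by the sum of a van der Waals and an electrical double-layer term; this is a generic textbook form, not the specific parameterization used in this study:

```latex
V_T(h) \;=\; V_{\mathrm{vdW}}(h) + V_{\mathrm{EDL}}(h)
\;\approx\; -\frac{A\,a}{12\,h}
\;+\; 2\pi\,\varepsilon_r \varepsilon_0\, a\, \psi_0^{2}\,\ln\!\left(1 + e^{-\kappa h}\right)
```

where A is the Hamaker constant, ψ_0 the surface potential and κ⁻¹ the Debye length. Increasing the ionic strength increases κ, screens the repulsive double-layer barrier and favours aggregation and deposition, which is consistent with the stronger permeability decline observed at high ionic strength.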
205 Brand Building in Higher Education: A Grounded Theory Investigation of the Impact of the ‘Positive-Visualization-Course in Brand Identity’ upon Freshmen Student's Perception
Authors: Maria Kountouridou, Dino Domic
Abstract:
Within an increasingly competitive and dynamic environment, the higher education sector is becoming more commodified, with the concept of branding becoming exceedingly imperative and an inextricable ingredient of a university's success. Branding in higher education has proven to be an effective strategy that has received considerable attention in recent years, and a growing number of articles have begun to appear in the literature. However, a clear void in the literature confirms that the concept of students' perceptions of a university's brand image has not been researched extensively. An investigation of this central concept is of paramount importance since it will facilitate the development of an inductively generated theoretical model concerning branding in higher education. This research focuses on examining the impact of the 'positive-visualization-course in brand identity' upon the perception of freshmen students towards a university's brand image. A grounded theory methodology has been selected, consisting of semi-structured interviews. Forty-two students participated in the research: twenty-five women and seventeen men. The sample was identified through the snowball sampling technique. The participants were divided into two groups (experimental and control group) after the researcher had taken into consideration the factor 'program of study', to eliminate any possible interaction between the participants of each group. An experiment was carried out in which a 'positive-visualization-course in brand identity' was conducted among the participants of the experimental group, while the participants of the control group were not exposed to the course. For the purpose of this research, the term 'positive-visualization-course in brand identity' refers to a course where brand history, past achievements/recognitions/awards, brand values, and brand mission are presented. Prior to the course implementation, face-to-face semi-structured interviews were carried out with the participants of both groups, with the aim of examining the freshmen students' perceptions of the university's brand image. One week after the course implementation, the researcher carried out semi-structured interviews with the participants of the experimental group only, in order to identify whether students' perceptions had been affected after the course completion. Four months after the course completion, semi-structured interviews were carried out with the participants of both groups. Eight months after the course completion, semi-structured interviews were conducted with the aim of identifying the freshmen students' updated perceptions. Data were analyzed using substantive coding (open and selective coding), theoretical coding, field memos, and constant comparative analysis. The findings strongly suggest that the 'positive-visualization-course in brand identity' can positively affect freshmen students' perceptions of a university's brand image. Additionally, other factors contribute to the formation of perception over the months. This study contributes to and expands upon the existing literature by presenting an inductively generated theoretical model to guide future research on the links between the 'positive-visualization-course in brand identity' and the perception of freshmen students towards a university's brand image.
Keywords: brand image, brand name, branding, higher education marketing, perception
204 Study on Changes of Land Use impacting the Process of Urbanization, by Using Landsat Data in African Regions: A Case Study in Kigali, Rwanda
Authors: Delphine Mukaneza, Lin Qiao, Wang Pengxin, Li Yan, Chen Yingyi
Abstract:
Human activities on land gradually change or transform the land cover. In this study, we examined the use of Landsat TM data to detect land use change in Kigali between 1987 and 2009 using remote sensing techniques, with data analysis performed in ENVI and the GIS software ArcGIS. Six different categories of land use were distinguished: bare soil, built-up land, wetland, water, vegetation, and others. With remote sensing techniques, we analyzed land use data for 1987, 1999 and 2009; changed areas were identified, revealing a dynamic land use situation in Kigali city during the 22 years studied. Based on the relevant Landsat data, the research focused on land use change in accordance with the role of remote sensing in the process of urbanization. The results show a rapid increase in built-up land between 1987 and 1999 and a large decrease in vegetation caused by the rebuilding of the city after the 1994 genocide, while in the period from 1999 to 2009 there was a reduction in built-up land and vegetation after the Kigali city authority established a Master Plan, under which all constructions outside the scope of the Master Plan were demolished. Through the expansion of its urban area, Rwanda's capital, Kigali City, is increasing the internal employment rate and attracting business investors and the service sector to improve the economy, which will increase population growth and provide a better life. The overall planning of the city of Kigali considers the environment, land use, infrastructure, cultural and socio-economic factors, economic development and population forecasts, urban development, and the specification of constraints. To achieve the above purpose, the Government has set out, for the overall planning of Kigali city, different stages of detailed design descriptions, strategies and action plans that will guide Kigali planners and members of the public in the future towards more detailed regional plans and practical measures. Thus, land use change significantly reflects the areas of active human activity in Kigali and plays an important role when the country takes certain decisions. Another aspect to take into account is the natural situation of Kigali city. Agriculture in the region does not occupy a dominant position, and with population growth and socio-economic development, the construction area will gradually rise and speed up the process of urbanization. As a developing country, Rwanda has a population that continues to grow and a low rate of land utilization, and urbanization remains low. As mentioned earlier, the 1994 genocide massacres, population growth and urbanization processes have been the factors driving the dramatic changes in land use. Further research would focus on the analysis of Rwanda's natural resources and the social and economic factors that could be the driving forces of land use change.
Keywords: land use change, urbanization, Kigali City, Landsat
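To make the change-detection step concrete, the sketch below cross-tabulates two classified Landsat rasters (e.g., 1987 and 2009) into a from-to change matrix with numpy; the class codes, array names and random stand-in data are illustrative assumptions, not values taken from the study itself.

```python
import numpy as np

# Illustrative class codes for the six categories used in the study
CLASSES = {0: "bare soil", 1: "built-up land", 2: "wetland",
           3: "water", 4: "vegetation", 5: "others"}

def change_matrix(classified_t1: np.ndarray, classified_t2: np.ndarray,
                  pixel_area_m2: float = 30.0 * 30.0) -> np.ndarray:
    """Cross-tabulate two classified rasters into a from-to area matrix (km^2)."""
    assert classified_t1.shape == classified_t2.shape
    n = len(CLASSES)
    # Count pixels for every (class at t1, class at t2) combination
    idx = classified_t1.ravel() * n + classified_t2.ravel()
    counts = np.bincount(idx, minlength=n * n).reshape(n, n)
    return counts * pixel_area_m2 / 1e6  # convert pixel counts to km^2

# Example with random data standing in for the 1987 and 2009 classifications
rng = np.random.default_rng(0)
lc_1987 = rng.integers(0, 6, size=(500, 500))
lc_2009 = rng.integers(0, 6, size=(500, 500))
matrix = change_matrix(lc_1987, lc_2009)
print("Vegetation converted to built-up land (km^2):", matrix[4, 1].round(2))
```

Row i, column j of the resulting matrix gives the area that moved from class i in the first year to class j in the second, which is the kind of summary underlying the built-up land and vegetation trends reported above.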
203 Dietary Intake and Nutritional Inadequacy Leading to Malnutrition among Children Residing in Shelter Home, Rural Tamil Nadu, India
Authors: Niraimathi Kesavan, Sangeeta Sharma, Deepa Jagan, Sridhar Sukumar, Mohan Ramachandran, Vidhubala Elangovan
Abstract:
Background: Childhood is a dynamic period for growth and development. Optimum nutrition during this period forms a strong foundation for growth, development, resistance to infections, long-term good health, cognition, educational achievement, and work productivity in a later phase of life. Underprivileged children living in resource-constrained settings like shelter homes are at high risk of malnutrition due to poor-quality diets and nutritional inadequacy. In low-income countries, underprivileged children are vulnerable to being deprived of nutritious food, which stands as a major challenge in the health sector. The present study aims to assess the dietary intake, nutritional status, and nutritional inadequacy and their association with malnutrition among children residing in shelter homes in rural Tamil Nadu. Methods: The study was a descriptive survey conducted among all the children aged between 8-18 years residing in two selected shelter homes (Anbu illam, a home for female children, and Amaidhi illam, a home for male children) in rural Tirunelveli, Tamil Nadu, India. A total of 57 children (18 boys and 39 girls) were recruited for the study. Dietary intake was measured using 24-hour recalls over seven days. The average nutrient intake was considered for further analysis. Results: Of the 57 children, about 60% (n=35) were undernourished. The mean daily energy intake was 1298 (SD 180) kcal for boys and 952 (SD 155) kcal for girls. The total calorie intake was 55-60% below the estimated average requirement (EAR) for adolescent boys and girls in the age groups 13-15 years and 16-18 years. Carbohydrates were the major source of energy (boys 53% and girls 51%), followed by fat (boys 31.5% and girls 34.5%) and protein (boys 14% and girls 12.9%). Dairy intake (<200 ml/day) was less than the recommendation (500 ml/day). Micronutrient-rich foods such as fruits, vegetables, and green leafy vegetables in the diet were <200 g/day, which was far less than the recommended dietary guidelines of 400-600 g/day for the age group of 7-18 years. Nearly 26% of girls reported experiencing menstrual problems. The majority (76.9%) of the children exhibited nutrient deficiency-related signs and symptoms. Conclusion: The total energy, mineral, and micronutrient intakes were inadequate and below the Recommended Dietary Allowance for children and adolescents. The diet predominantly consists of refined cereals, rice, semolina, and vermicelli. Consumption of whole grains, milk, fruits, vegetables, and leafy vegetables was far below the recommended dietary guidelines. Dietary inadequacies among these children pose a serious concern for their overall health status and its consequences in the later phase of life.
Keywords: adolescents, children, dietary intake, malnutrition, nutritional inadequacy, shelter home
202 Green Production of Chitosan Nanoparticles and their Potential as Antimicrobial Agents
Authors: L. P. Gomes, G. F. Araújo, Y. M. L. Cordeiro, C. T. Andrade, E. M. Del Aguila, V. M. F. Paschoalin
Abstract:
The application of nanoscale materials and nanostructures is an emerging area, since these materials may provide solutions to technological and environmental challenges while preserving the environment and natural resources. To reach this goal, the increasing demand must be accompanied by 'green' synthesis methods. Chitosan is a natural, nontoxic biopolymer derived by the deacetylation of chitin and has great potential for a wide range of applications in the biological and biomedical areas, due to its biodegradability, biocompatibility, non-toxicity and versatile chemical and physical properties. Chitosan also presents high antimicrobial activity against a wide variety of pathogenic and spoilage microorganisms. Ultrasonication is a common tool for the preparation and processing of polymer nanoparticles. It is particularly effective in breaking up aggregates and in reducing the size and polydispersity of nanoparticles. High-intensity ultrasonication has the potential to modify chitosan molecular weight and thus alter or improve chitosan functional properties. The aim of this study was to evaluate the influence of sonication intensity and time on the changes of commercial chitosan characteristics, such as molecular weight, and on its potential antibacterial activity against Gram-negative bacteria. The nanoparticles (NPs) were produced from two commercial chitosans from Sigma-Aldrich®, of medium molecular weight (CS-MMW) and low molecular weight (CS-LMW). These samples (2%) were solubilized in 100 mM sodium acetate pH 4.0, placed on ice and irradiated with a SONIC ultrasonic probe (model 750 W) equipped with a 1/2" microtip for 30 min at 4 °C. The probe was operated at a constant duty cycle and 40% amplitude with 1/1 s intervals. The ultrasonic degradation of CS-MMW and CS-LMW was followed by means of ζ-potential (Brookhaven Instruments, model 90Plus) and dynamic light scattering (DLS) measurements. After sonication, the concentrated samples were diluted 100 times and placed in fluorescence quartz cuvettes (Hellma 111-QS, 10 mm light path). The distributions of the colloidal particles were calculated from the DLS and ζ-potential measurements taken for the CS-MMW and CS-LMW solutions before and after sonication for 30 min (CS-MMW30 and CS-LMW30). Regarding the results for the chitosan samples, the major bands, centered at the hydrodynamic radius (Rh), showed different distributions for CS-MMW (Rh=690.0 nm, ζ=26.52±2.4), CS-LMW (Rh=607.4 and 2805.4 nm, ζ=24.51±1.29), CS-MMW30 (Rh=201.5 and 1064.1 nm, ζ=24.78±2.4) and CS-LMW30 (Rh=492.5 nm, ζ=26.12±0.85). The minimal inhibitory concentration (MIC) was determined using different chitosan sample concentrations. MIC values were determined against E. coli (10⁶ cells) harvested from LB medium (Luria-Bertani, BD™) after 18 h of growth at 37 °C. Subsequently, the cell suspension was serially diluted in saline solution (0.8% NaCl) and plated on solid LB at 37 °C for 18 h. Colony-forming units were counted. The samples showed different MICs against E. coli for CS-LMW (1.5 mg/mL), CS-MMW30 (1.5 mg/mL) and CS-LMW30 (1.0 mg/mL). The results demonstrate that the production of nanoparticles by modification of their molecular weight through ultrasonication is simple to perform and dispenses with the addition of acid solvents. Molecular weight modifications are enough to provoke changes in the antimicrobial potential of the nanoparticles produced in this way.
Keywords: antimicrobial agent, chitosan, green production, nanoparticles
201 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables
Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez
Abstract:
Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot's primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized, providing the crew members with the determination of the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as the test aircraft. According to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was then improved using the proposed methodology. To do that, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the current APM in order to minimize the error between the predicted data and the measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the average relative fuel flow prediction error from 12% to 0.3%. Similarly, the maximum error deviation of the FCOM prediction of engine fan speed was reduced from 5.0% to 0.2% after only ten flights.
Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X
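The in-flight correction loop described above can be illustrated with the following minimal sketch, assuming a 2-D lookup table of fuel-flow corrections indexed by altitude and Mach number and an exponential-smoothing update of the nearest grid cell; the grid, gain, stand-in baseline model and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative grid: altitude (ft) x Mach number, storing a multiplicative
# correction applied to the baseline (FCOM-derived) fuel-flow prediction.
ALT_GRID = np.arange(30_000, 45_001, 1_000)   # ft
MACH_GRID = np.arange(0.70, 0.871, 0.01)      # -
correction = np.ones((ALT_GRID.size, MACH_GRID.size))
GAIN = 0.2  # smoothing factor: how quickly the table adapts to new data

def baseline_fuel_flow(alt_ft: float, mach: float) -> float:
    """Stand-in for the FCOM-based APM prediction (kg/h)."""
    return 1500.0 - 0.01 * (alt_ft - 30_000) + 2000.0 * (mach - 0.70)

def predict(alt_ft: float, mach: float) -> float:
    """Baseline prediction scaled by the adaptive correction of the nearest cell."""
    i = np.abs(ALT_GRID - alt_ft).argmin()
    j = np.abs(MACH_GRID - mach).argmin()
    return baseline_fuel_flow(alt_ft, mach) * correction[i, j]

def update(alt_ft: float, mach: float, measured_fuel_flow: float) -> None:
    """Blend the observed ratio (measured / baseline) into the nearest cell."""
    i = np.abs(ALT_GRID - alt_ft).argmin()
    j = np.abs(MACH_GRID - mach).argmin()
    observed_ratio = measured_fuel_flow / baseline_fuel_flow(alt_ft, mach)
    correction[i, j] = (1 - GAIN) * correction[i, j] + GAIN * observed_ratio

# One simulated cruise sample: sensors report 3% more fuel flow than the model.
alt, mach = 37_000, 0.80
update(alt, mach, measured_fuel_flow=1.03 * baseline_fuel_flow(alt, mach))
print(f"corrected prediction: {predict(alt, mach):.1f} kg/h")
```

Repeating this update at every sampled cruise point gradually pulls the table toward the degraded aircraft's actual behaviour, which is the mechanism behind the error reductions reported above.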
200 Mistletoe Supplementation and Exercise Training on IL-1β and TNF-α Levels
Authors: Alireza Barari, Ahmad Abdi
Abstract:
Introduction: Plyometric training (PT) is popular among individuals involved in dynamic sports and is executed with the goal of improving muscular performance. Cytokines are considered immunoregulatory molecules that regulate immune function and other body responses. The pro-inflammatory cytokines TNF-α and IL-1β have been reported to increase during and after exercise. If the release of cytokines that cause responses such as inflammation in skeletal muscle cells can be avoided or limited by manipulating the training program or optimizing nutrition, the injuries caused by cytokine release may be prevented or limited. Mistletoe extracts have been shown to exert immune-modulating effects. Materials and methods: The present study investigated the effect of six weeks of PT with or without mistletoe supplementation (MS) (10 mg/kg) on cytokine responses and performance in male basketball players. This study is semi-experimental. The statistical population consisted of male student basketball players of Mahmoud Abad city. The sample comprised 32 basketball players with an age range of 14-17 years, selected randomly and divided into four groups of 8 individuals. Participants were randomly assigned to either an experimental group (E, n=16) that performed plyometric exercises with (n=8) or without (n=8) MS, or a control group that rested (C, n=16) with (n=8) or without (n=8) MS. Plants were collected in June from the Mazandaran forest in the north of Iran. They were then dried on a clean textile in open air, without any exposure to sunlight, until they lost their water content. Each subject consumed 10 mg/kg/day of extract over the six weeks of intervention. Pre- and post-testing were performed in the afternoon of the same day. Blood samples (10 ml) were collected from the intermediate cubital vein of the subjects. Serum concentrations of IL-1β and TNF-α were measured by the ELISA method. Data analysis was performed on pretest-to-posttest changes, which were assessed by t-tests for paired samples. The second blood samples were taken the day after the last plyometric training session. Group differences at baseline were evaluated using a one-way ANOVA with Tukey post-hoc test for the analysis and comparison of the groups' variables. Results: PT with or without MS improved the one-repetition maximum leg and chest press, the Sargent test, and power in the RAST (P < 0.05). However, there were no statistically significant differences between groups in VO2max measures (P > 0.05). PT resulted in a significant increase in plasma IL-1β concentration from 1.08±0.4 mg/ml pre-training to 1.68±0.18 mg/ml post-training (P=0.006), while MS significantly decreased the training-induced increment of IL-1β (P=0.007). In contrast, neither PT nor MS had any effect on TNF-α levels (P > 0.05). Discussion: The results of this investigation indicate that PT improves muscular performance and increases the IL-1β concentration. The increase in IL-1β after exercise in damaged skeletal muscle indicates the role of this cytokine in inflammation processes and in the repair of damaged skeletal muscle. However, mistletoe supplementation ameliorates the increment of IL-1β levels, indicating the beneficial effect of mistletoe on the immune response following plyometric training.
Keywords: mistletoe supplementation, training, IL-1β, TNF-α
199 Inflation and Deflation of Aircraft's Tire with Intelligent Tire Pressure Regulation System
Authors: Masoud Mirzaee, Ghobad Behzadi Pour
Abstract:
An aircraft tire is designed to tolerate extremely heavy loads for a short duration. The number of tires increases with the weight of the aircraft, as the load needs to be distributed more evenly. Generally, aircraft tires work at high pressure, up to 200 psi (14 bar; 1,400 kPa) for airliners and higher for business jets. Tire assemblies for most aircraft categories are recommended to be filled with compressed nitrogen, which supports the aircraft's weight on the ground, provides a means of controlling the aircraft during taxi, takeoff and landing, and supplies traction for braking. Accurate tire pressure is a key factor that enables tire assemblies to perform reliably under high static and dynamic loads. Concerning ambient temperature change, when the temperature differs between the origin and destination airports, tire pressure should be adjusted and the tires inflated to the specified operating pressure at the colder airport. This adjustment, which supersedes the normal tire over-inflation limit of 5 percent at constant ambient temperature, is required so that the inflation pressure remains adequate to support the load of a specified aircraft configuration. Without this adjustment, a tire assembly would be significantly under- or over-inflated at the destination. Due to the increase in human errors in the aviation industry, exorbitant costs are imposed on airlines for providing consumable parts such as aircraft tires. An intelligent system that adjusts the aircraft tire pressure based on weight, load, temperature, and weather conditions at the origin and destination airports could have a significant effect on reducing aircraft maintenance costs and fuel consumption, and could further mitigate the environmental issues related to air pollution. An intelligent tire pressure regulation system (ITPRS) contains a processing computer, a nitrogen bottle at 1800 psi, and distribution lines. The nitrogen bottle's inlet and outlet valves are installed in the main landing gear area and are connected through nitrogen lines to the main wheel and nose wheel assemblies. Control and monitoring of the nitrogen are performed by a computer, which adjusts the pressure according to calculations based on the received parameters, including the temperatures of the origin and destination airports, the weight of cargo and passengers, fuel quantity, and wind direction. Correct tire inflation and deflation are essential in assuring that tires can withstand the centrifugal forces and heat of normal operations, with an adequate margin of safety for unusual operating conditions such as rejected takeoffs and hard landings. ITPRS will increase the performance of the aircraft in all phases of takeoff, landing, and taxi. Moreover, this system will reduce human errors, material consumption, and the stresses imposed on the aircraft body.
Keywords: avionic system, improve efficiency, ITPRS, human error, reduced cost, tire pressure
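The temperature-driven adjustment described above follows directly from the ideal gas law at constant volume (Gay-Lussac's law). The sketch below, with illustrative numbers and function names of our own rather than any certified servicing procedure, shows how a pressure set at a warm origin airport translates to a colder destination; gauge pressures are converted to absolute before scaling.

```python
# Minimal sketch of the temperature correction behind the ITPRS logic
# (Gay-Lussac's law at constant volume); all numbers are illustrative.

P_ATM_PSI = 14.7  # standard atmospheric pressure, psi

def celsius_to_kelvin(t_c: float) -> float:
    return t_c + 273.15

def pressure_at_destination(p_gauge_psi: float, t_origin_c: float,
                            t_dest_c: float) -> float:
    """Gauge pressure the tire will show at the destination temperature,
    if it was serviced to p_gauge_psi at the origin temperature."""
    p_abs = p_gauge_psi + P_ATM_PSI
    p_abs_dest = p_abs * celsius_to_kelvin(t_dest_c) / celsius_to_kelvin(t_origin_c)
    return p_abs_dest - P_ATM_PSI

# Example: tire serviced to 200 psi gauge at 30 C, flying to a -10 C airport
p_cold = pressure_at_destination(200.0, t_origin_c=30.0, t_dest_c=-10.0)
shortfall = 200.0 - p_cold
print(f"pressure at cold airport: {p_cold:.1f} psi (shortfall {shortfall:.1f} psi)")
# An ITPRS-style controller would meter nitrogen in (or out) until the
# measured gauge pressure matches the specified operating pressure.
```

The computed shortfall of roughly 28 psi for this example illustrates why the colder airport governs the servicing pressure and why an automatic system can remove a recurring source of human error.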
198 Environmental Impact of Pallets in the Supply Chain: Including Logistics and Material Durability in a Life Cycle Assessment Approach
Authors: Joana Almeida, Kendall Reid, Jonas Bengtsson
Abstract:
Pallets are devices used for moving and storing freight and are nearly omnipresent in supply chains. The market is dominated by timber pallets, with plastic being a common alternative. Either option entails the use of important resources (oil, land, timber), the emission of greenhouse gases and additional waste generation in most supply chains. This study uses a dynamic approach to the life cycle assessment (LCA) of pallets. It demonstrates that what ultimately defines the environmental burden of pallets in the supply chain is the length of their lifespan, which depends on the durability of the material and on how pallets are utilized. This study proposes a life cycle assessment (LCA) of pallets in supply chains supported by an algorithm that estimates pallet durability as a function of material resilience and of logistics. The LCA runs from cradle to grave, including raw material provision, manufacture, transport and end of life. The scope is representative of timber and plastic pallets in the Australian and South-East Asian markets. The materials included in this analysis are: tropical mixed hardwood, unsustainably harvested in SE Asia; certified softwood, sustainably harvested; conventional plastic, a mix of virgin and scrap plastic; and recycled plastic pallets, made of 100% mixed plastic scrap, which are being pioneered by Re > Pal. The logistical model assumes that more complex supply chains and rougher handling subject pallets to higher stress loads. More stress shortens the lifespan of pallets as a function of their composition. Timber pallets can be repaired, extending their lifespan, while plastic pallets cannot. At the factory gate, softwood pallets have the lowest carbon footprint. Re > Pal follows closely due to its burden-free feedstock. Tropical mixed hardwood and plastic pallets have the highest footprints. Harvesting tropical mixed hardwood in SE Asia often leads to deforestation, resulting in emissions from land use change. The higher footprint of plastic pallets is due to the production of virgin plastic. Our findings show that manufacture alone does not determine the sustainability of pallets. Even though certified softwood pallets have a lower carbon footprint and their lifespan can be extended by repair, the need for re-supply of materials and disposal of waste timber offsets this advantage. It also leads to the most waste being generated among all pallets. In a supply chain context, Re > Pal pallets have the lowest footprint due to lower replacement and disposal needs. In addition, Re > Pal pallets are nearly 'waste neutral', because the waste that is generated throughout their life cycle is almost totally offset by the scrap uptake for production. The absolute results of this study can be confirmed by progressing the logistics model, improving data quality, and expanding the range of materials and utilization practices. Still, this LCA demonstrates that considering logistics, raw materials and material durability is central to sustainable decision-making on pallet purchasing, management and disposal.
Keywords: carbon footprint, life cycle assessment, recycled plastic, waste
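The role of the durability algorithm can be illustrated with a toy calculation: if the expected number of trips a pallet survives falls with handling stress at a material-specific rate, then the footprint per trip, rather than per pallet, becomes the deciding metric. All figures, the decay form and the repair bonus below are illustrative assumptions of our own, not the study's data or its actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Pallet:
    name: str
    manufacture_kgco2e: float   # cradle-to-gate footprint per pallet (illustrative)
    base_trips: float           # expected trips under gentle handling (illustrative)
    resilience: float           # 0 to 1; higher means lifespan degrades less with stress
    repairable: bool

def expected_trips(p: Pallet, stress: float) -> float:
    """Toy durability model: lifespan shrinks with handling stress (0 to 1),
    attenuated by material resilience; repairs stretch timber lifespans."""
    trips = p.base_trips * (1.0 - stress * (1.0 - p.resilience))
    return trips * 1.3 if p.repairable else trips   # crude repair bonus

def footprint_per_trip(p: Pallet, stress: float) -> float:
    return p.manufacture_kgco2e / expected_trips(p, stress)

pallets = [
    Pallet("certified softwood", 20.0, 40, 0.5, repairable=True),
    Pallet("recycled plastic",   22.0, 60, 0.8, repairable=False),
]
for stress in (0.2, 0.8):   # simple vs. complex/rough supply chain
    ranking = sorted(pallets, key=lambda p: footprint_per_trip(p, stress))
    best = ranking[0]
    print(f"stress={stress}: lowest kgCO2e per trip -> {best.name} "
          f"({footprint_per_trip(best, stress):.2f})")
```

Even with made-up numbers, the ranking can flip between low and high handling stress, which is the same dynamic that leads the study to conclude that factory-gate footprints alone do not determine the most sustainable pallet.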
197 Interplay of Material and Cycle Design in a Vacuum-Temperature Swing Adsorption Process for Biogas Upgrading
Authors: Federico Capra, Emanuele Martelli, Matteo Gazzani, Marco Mazzotti, Maurizio Notaro
Abstract:
Natural gas is a major energy source in the current global economy, contributing roughly 21% of total primary energy consumption. Producing natural gas from renewable energy sources is key to limiting the related CO2 emissions, especially for those sectors that rely heavily on natural gas. In this context, biomethane produced via biogas upgrading represents a good candidate for the partial substitution of fossil natural gas. The upgrading of biogas to biomethane consists of (i) the removal of pollutants and impurities (e.g. H2S, siloxanes, ammonia, water), and (ii) the separation of carbon dioxide from methane. Focusing on the CO2 removal process, several technologies can be considered: chemical or physical absorption with solvents (e.g. water, amines), membranes, and adsorption-based systems (PSA). However, none has emerged as the leading technology, because of (i) the heterogeneity in plant size, (ii) the heterogeneity in biogas composition, which is strongly related to the feedstock type (animal manure, sewage treatment, landfill products), (iii) the case-sensitive optimal tradeoff between purity and recovery of biomethane, and (iv) the destination of the produced biomethane (grid injection, CHP applications, transportation sector). With this contribution, we explore the use of a technology for biogas upgrading and compare the resulting performance with benchmark technologies. The proposed technology makes use of a chemical sorbent, engineered by RSE, which consists of di-ethanol-amine deposited on a solid support made of γ-alumina and chemically adsorbs the CO2 contained in the gas. The material is packed into fixed beds that cyclically undergo adsorption and regeneration steps. CO2 is adsorbed at low temperature and ambient pressure (or slightly above), while regeneration is carried out by pulling a vacuum and increasing the temperature of the bed (vacuum-temperature swing adsorption, VTSA). Dynamic adsorption tests were performed by RSE and were used to tune the mathematical model of the process, including material and transport parameters (i.e., Langmuir isotherm data and heat and mass transport). Based on this set of data, an optimal VTSA cycle was designed. The results enabled a better understanding of the interplay between material and cycle tuning. As an exemplary application, the upgrading of biogas for grid injection, produced by an anaerobic digester (60-70% CO2, 30-40% CH4), for an equivalent size of 1 MWel was selected. A plant configuration is proposed to maximize heat recovery and minimize the energy consumption of the process. The resulting performance is very promising compared to benchmark solutions, which makes the VTSA configuration a valuable alternative for biomethane production starting from biogas.
Keywords: biogas upgrading, biogas upgrading energetic cost, CO2 adsorption, VTSA process modelling
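For context, the Langmuir isotherm data mentioned above are typically fitted with a temperature-dependent Langmuir model of the following generic form; this is a standard textbook formulation, not necessarily the exact model tuned in this work:

```latex
q^{*}(p_{\mathrm{CO_2}},T) \;=\; q_{\max}\,
\frac{b(T)\,p_{\mathrm{CO_2}}}{1 + b(T)\,p_{\mathrm{CO_2}}},
\qquad
b(T) \;=\; b_0 \exp\!\left(\frac{-\Delta H_{\mathrm{ads}}}{R\,T}\right)
```

where q* is the equilibrium CO2 loading, q_max the saturation capacity, b(T) the affinity constant and ΔH_ads the (negative) heat of adsorption. The strong temperature dependence of b(T) is what makes a temperature swing effective for regenerating a chemisorbent, while the vacuum step lowers the CO2 partial pressure to reduce the equilibrium loading even further.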
196 LaeA/1-Velvet Interplay in Aspergillus and Trichoderma: Regulation of Secondary Metabolites and Cellulases
Authors: Razieh Karimi Aghcheh, Christian Kubicek, Joseph Strauss, Gerhard Braus
Abstract:
Filamentous fungi are of considerable economic and social significance for human health, nutrition and white biotechnology. These organisms are dominant producers of a range of primary metabolites such as citric acid, microbial lipids (biodiesel) and higher unsaturated fatty acids (HUFAs). In particular, they also produce important but structurally complex secondary metabolites with enormous therapeutic applications in the pharmaceutical industry, for example cephalosporin, penicillin, taxol, zeranol and ergot alkaloids. Several fungal secondary metabolites that are highly relevant to human health include not only antibiotics but also, e.g., lovastatin, a well-known antihypercholesterolemic agent produced by Aspergillus terreus, or aflatoxin, a carcinogen produced by A. flavus. In addition to their roles in human health and agriculture, some fungi are industrially and commercially important: species of the ascomycete genus Hypocrea (teleomorph of Trichoderma) have been demonstrated to be efficient producers of highly active cellulolytic enzymes. This trait makes them effective in disrupting and depolymerizing lignocellulosic materials and thus applicable tools in a number of biotechnological areas as diverse as clothes-washing detergents, animal feed, and pulp and fuel production. Fungal LaeA/LAE1 (Loss of aflR Expression A) homologs and their gene products act at the interface between secondary metabolism, cellulase production and development. Lack of the corresponding genes results in significant physiological changes, including loss of secondary metabolite production and of lignocellulose-degrading enzyme production. At the molecular level, the encoded proteins are presumably methyltransferases or demethylases which act directly or indirectly at heterochromatin and interact with velvet domain proteins. Velvet proteins bind to DNA and affect the expression of secondary metabolite (SM) genes and cellulases. The dynamic interplay between LaeA/LAE1, velvet proteins and additional interaction partners is the key to understanding the coordination of metabolic and morphological functions of fungi and is required for biotechnological control of the formation of desired bioactive products. Aspergilli and Trichoderma represent different biotechnologically significant species with significant differences in the LaeA/LAE1-velvet protein machinery and their target proteins. We therefore performed a comparative study of the interaction partners of this machinery and the dynamics of the various protein-protein interactions using our robust proteomic and mass spectrometry techniques. This enhances our knowledge about the fungal coordination of secondary metabolism, cellulase production and development and will thereby certainly improve recombinant fungal strain construction for the production of industrial secondary metabolites or lignocellulose-hydrolytic enzymes.
Keywords: cellulases, LaeA/1, proteomics, secondary metabolites
195 An Agent-Based Approach to Examine Interactions of Firms for Investment Revival
Authors: Ichiro Takahashi
Abstract:
One conundrum that macroeconomic theory faces is to explain how an economy can revive from a depression in which aggregate demand has fallen substantially below its productive capacity. This paper examines an autonomous stabilizing mechanism using an agent-based Wicksell-Keynes macroeconomic model. It focuses on the effects of the number of firms and the length of the gestation period for investment, both of which are often assumed to be one in mainstream macroeconomic models. The simulations found that the virtual economy was highly unstable, or more precisely, collapsing, when these parameters were fixed at one. This finding may even lead us to question the legitimacy of these common assumptions. A perpetual decline in capital stock will eventually encourage investment if the capital stock is short-lived, because inactive investment will result in insufficient productive capacity. However, for an economy characterized by a roundabout production method, a gradually declining productive capacity may not be able to fall below an aggregate demand that is also shrinking. Naturally, one would then ask: if our economy cannot rely on an external stimulus such as population growth and technological progress to revive investment, what factors would provide such buoyancy for stimulating investment? The current paper attempts to answer this question by employing the artificial macroeconomic model mentioned above. The baseline model has the following three features: (1) multi-period gestation for investment, (2) a large number of heterogeneous firms, and (3) demand-constrained firms. The instability is a consequence of the following dynamic interactions. (a) A multi-period gestation period means that once a firm starts a new investment, it continues to invest over some subsequent periods. During these gestation periods, the excess demand created by the investing firm will spill over to ignite new investment by other firms that are supplying investment goods: the presence of multi-period gestation for investment provides a field for investment interactions. Conversely, if the gestation period of investment is short, the excess demand for investment goods tends to fade away before it develops into a full-fledged boom. (b) Strong demand in the goods market tends to raise the price level, thereby lowering real wages. This reduction of real wages creates two opposing effects on aggregate demand through the following two channels: (1) a reduction in real labor income, and (2) an increase in labor demand due to the principle of equality between marginal labor productivity and the real wage (referred to as the Walrasian labor demand). If there is only a single firm, a lower real wage will increase its Walrasian labor demand, but the actual labor demand tends to be determined by the derived labor demand; thus, the second, positive effect would not work effectively. In contrast, in an economy with a large number of firms, Walrasian firms will increase employment. This interaction among heterogeneous firms is a key to stability.
Keywords: agent-based macroeconomic model, business cycle, demand constraint, gestation period, representative agent model, stability
194 The Politics of Identity: A Longitudinal Study of the Sociopolitical Development of Education Leaders
Authors: Shelley Zion
Abstract:
This study examines the longitudinal impact (10 years) of a course for education leaders designed to encourage the development of critical consciousness surrounding issues of equity, oppression, power, and privilege. The ability to resist and challenge oppression across social and cultural contexts can be acquired through the use of transformative pedagogies that create spaces that use the practice of exploration to make connections between pervasive structural and institutional practices and race and ethnicity. This study seeks to extend this understanding by exploring the longitudinal influence of participating in a course that utilizes transformative pedagogies, course materials, exercises, and activities to encourage the exploration of student experiences with racial and ethnic discrimination, with the end goal of providing them with the knowledge and skills that foster their ability to resist and challenge oppression and discrimination (critical action) in their lives. To this end, we use the explanatory power of the theories of critical consciousness development, sociopolitical development, and social identity construction, which view exploration as a crucial practice in understanding the role ethnic and racial differences play in creating opportunities or barriers in the lives of individuals. When educators use transformative pedagogies, they create a space where students collectively explore their experiences with racial and ethnic discrimination through course readings, in-class activities, and discussions. The end goal of this exploration is twofold: first, to encourage students' ability to understand how differences are identified, given meaning, and used to position them in specific places and spaces in their world; second, to scaffold students' ability to make connections between their individual and collective differences and particular institutional and structural practices that create opportunities or barriers in their lives. Studies have found that the formal exploration of students' individual and collective differences in relation to their experiences with racial and ethnic discrimination results in an understanding of the roles race and ethnicity play in their lives. To trace the role played by exploration in identity construction, we utilize an integrative approach informed by multiple theoretical frameworks grounded in cultural studies, social psychology, and sociology that understand socio-cultural, racial, and ethnic identities as dynamic and ever-changing based on context-specific environments. Stuart Hall refers to this practice as taking "symbolic detours through the past" while reflecting on the different ways individuals have been positioned based on their roots (group membership) and also how they, in turn, chose to position themselves through collective sense-making of the various meanings their differences carried through the routes they have taken. The practice of exploration in the construction of ethnic-racial identities has been found to be beneficial to sociopolitical development.
Keywords: political polarization, civic participation, democracy, education
193 Structural and Microstructural Analysis of White Etching Layer Formation by Electrical Arcing Induced on the Surface of Rail Track
Authors: Ali Ahmed Ali Al-Juboori, H. Zhu, D. Wexler, H. Li, C. Lu, J. McLeod, S. Pannila, J. Barnes
Abstract:
A number of studies have focused on the formation mechanics of the white etching layer (WEL) and its origin in railway operation. Until recently, the following hypotheses have been considered for the precise mechanics of WEL formation: (i) WELs are the result of a thermal process caused by wheel slip; (ii) WELs are mechanically induced by severe plastic deformation; (iii) WELs are caused by a combined thermo-mechanical process. The mechanisms discussed above lead to the occurrence of white etching layers in the area of wheel-rail contact. This is because the contact patch, the active point of the wheel on the rail, is exposed to the highest shear stresses, which result in localised severe plastic deformation, and to the highest heating rate, caused by wheel slip during excessive traction or braking effort. However, if WELs are found outside the running band, this would suggest another cause of WEL formation. In railway systems, particularly electrified railways, arcing has been occurring more often and more regularly on the rails. In an electrified railway, the current is delivered to the train traction motor via contact wires and then returned to the station via the contact between the wheel and the rail. If the contact between the wheel and the rail is temporarily lost, due to dynamic vibration, entrapped dirt or water, lubricant effects or oxidation, a high current can jump through the gap and result in arcing. Other sources of arcing include the wheel passing over an insulated joint and lightning striking a train during bad weather. During arcing, extensive heat is generated and spread over a large area of the top surface of the rail. Thus, arcing is considered another heat source in the rail head (rather than wheel slip) that results in microstructural changes and white etching layer formation. A head hardened (HH) rail steel, cut from a curved rail track, was used for the investigation. Samples were sectioned from a depth of 10 mm below the rail surface, where the material is considered to be still within the hardened layer but away from any microstructural changes on the top surface layer caused by train passage. These samples were subjected to electrical discharges using a Gas Tungsten Arc Welding (GTAW) machine. The arc current was controlled and moved along the sample surface in the direction of travel. Five different conditions were applied to the surface of the samples. Samples containing pre-existing WELs, taken from an ex-service rail surface, were also considered in this study for comparison. Both the simulated and ex-service WELs were characterised by advanced methods including SEM, TEM, TKD, EDS and XRD. Samples for TEM and TKD were prepared by Focused Ion Beam (FIB) milling. The results showed that both the WELs simulated by electrical arcing and the ex-service WELs comprise a similar microstructure. A brown etching layer was found alongside the WELs and was likely induced by a concurrent tempering process. This study provided a clear understanding of a new formation mechanism of WELs, which contributes to track maintenance procedures. Keywords: white etching layer, arcing, brown etching layer, material characterisation
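For orientation, the heat delivered by such an arc is commonly summarised with the standard welding heat-input relation Q = eta * V * I / v; the sketch below uses this textbook formula with assumed GTAW parameters, not values reported by the authors.

```python
# Standard GTAW heat-input relation (textbook formula, not from the paper);
# all parameter values below are assumed for illustration only.
eta = 0.6            # typical arc efficiency assumed for GTAW
voltage = 12.0       # V (assumed)
current = 150.0      # A (assumed)
travel_speed = 2.0   # mm/s, speed at which the arc is moved along the rail (assumed)

heat_input = eta * voltage * current / travel_speed   # J/mm deposited along the track
print(f"arc heat input ~ {heat_input:.0f} J/mm")
```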
Procedia PDF Downloads 122
192 Reconstruction of Age-Related Generations of Siberian Larch to Quantify the Climatogenic Dynamics of Woody Vegetation Close the Upper Limit of Its Growth
Authors: A. P. Mikhailovich, V. V. Fomin, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova
Abstract:
Woody vegetation at the upper limit of its habitat is a sensitive indicator of the reaction of biota to regional climate changes. Quantitative assessment of temporal and spatial changes in the distribution of trees and plant biocenoses calls for the development of new modeling approaches based upon selected data from ground-level measurements and ultra-resolution aerial photography. Statistical models were developed for the study area located in the Polar Urals. These models allow probabilistic estimates to be obtained for placing Siberian larch trees into one of three age intervals, namely 1-10, 11-40 and over 40 years, based on the Weibull distribution of the maximum horizontal crown projection. The authors developed a distribution map for larch trees with crown diameters exceeding twenty centimeters by deciphering aerial photographs taken by a UAV from an altitude of fifty meters. The total number of larches was 88608, distributed across the abovementioned intervals as follows: 16980, 51740, and 19889 trees. The results demonstrate that two processes can be observed over recent decades: first, the intensive forestation of previously barren or lightly wooded fragments of the study area located within patches of wood, woodlands, and sparse stands, and second, expansion into mountain tundra. The current expansion of the Siberian larch in the region replaced the depopulation process that occurred in the course of the Little Ice Age from the late 13ᵗʰ to the end of the 20ᵗʰ century. Using data from field measurements of Siberian larch specimen biometric parameters (including height, diameter at the root collar and at 1.3 meters, and the maximum projection of the crown in two orthogonal directions) and data on tree ages obtained at nine circular test sites, the authors developed an artificial neural network model comprising two layers with three and two neurons, respectively. The model allows quantitative assessment of a specimen's age based on height and maximum crown projection values. Tree height and crown diameters can be quantitatively assessed using data from aerial photographs and lidar scans. The resulting model can be used to assess the age of all Siberian larch trees. The proposed approach, after validation, can be applied to assessing the age of other tree species growing near the upper tree boundaries in other mountainous regions. This research was collaboratively funded by the Russian Ministry for Science and Education (project No. FEUG-2023-0002) and the Russian Science Foundation (project No. 24-24-00235) in the field of data modeling on the basis of artificial neural networks. Keywords: treeline, dynamic, climate, modeling
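A sketch of the described network topology (two hidden layers with three and two neurons) mapping tree height and maximum crown projection to an age estimate; the training data below are synthetic, and the scikit-learn API and all sample values are assumptions, since the field measurements themselves are not reproduced in the abstract.

```python
# Sketch (assumed data): small network with hidden layers of 3 and 2 neurons,
# as described, estimating tree age from height and maximum crown projection.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
height = rng.uniform(0.3, 8.0, 200)                       # m (assumed range)
crown = rng.uniform(0.2, 4.0, 200)                        # m (assumed range)
age = 5.0 * height + 8.0 * crown + rng.normal(0, 3, 200)  # synthetic ages

X = np.column_stack([height, crown])
model = MLPRegressor(hidden_layer_sizes=(3, 2), max_iter=5000, random_state=0)
model.fit(X, age)

print(model.predict([[2.5, 1.2]]))   # age estimate for one hypothetical tree
```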
Procedia PDF Downloads 86
191 Analysis of Electric Mobility in the European Union: Forecasting 2035
Authors: Domenico Carmelo Mongelli
Abstract:
The context is that of great uncertainty in the 27 countries belonging to the European Union, which has adopted an epochal measure: the elimination of internal combustion engines for the traction of road vehicles starting from 2035, with complete replacement by electric vehicles. While on the one hand there is great concern at various levels about unpreparedness for this change, on the other hand the scientific community has not yet produced comprehensive studies of the problem, as the scientific literature deals with single aspects of the issue and addresses it at the level of individual countries, losing sight of the global implications for the entire EU. The aim of the research is to fill these gaps: the technological, plant engineering, environmental, economic and employment aspects of the energy transition in question are addressed and connected to each other, comparing the current situation with the different scenarios that could exist in 2035 and in the following years until the total disposal of the internal combustion engine vehicle fleet across the entire EU. The methodology adopted consists of an analysis of the entire life cycle of electric vehicles and batteries, through the use of specific databases, and a dynamic simulation, using specific calculation codes, that applies the results of this analysis to the entire EU electric vehicle fleet from 2035 onwards. Energy balances will be drawn up (to evaluate the net energy saved), plant balances (to determine the surplus demand for power and electrical energy and the sizing of new renewable plants needed to cover electricity needs), economic balances (to determine the investment costs of this transition, the savings during the operation phase and the payback times of the initial investments), environmental balances (which, under the different energy mix scenarios anticipated for 2035, determine the reductions in CO2eq and the environmental effects resulting from the increase in lithium production for batteries), and employment balances (which estimate how many jobs will be lost and recovered in the reconversion of the automotive industry, related industries and the refining, distribution and sale of petroleum products, and how many will be created by technological innovation, the increased demand for electricity, and the construction and management of street charging columns). New algorithms for forecast optimization are developed, tested and validated. Compared to other published material, the research adds an overall picture of the energy transition, capturing the advantages and disadvantages of its different aspects and evaluating their extent and possible improvements within an organic overall view of the topic. The results achieved allow us to identify the strengths and weaknesses of the energy transition, to determine possible solutions to mitigate these weaknesses, and to simulate and then evaluate their effects, establishing the most suitable solutions to make this transition feasible. Keywords: engines, Europe, mobility, transition
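As an illustration of the economic-balance step only, a simple payback-time calculation is sketched below; every figure is a placeholder assumption, not a result of the study.

```python
# Illustration of the economic-balance step; all figures are assumed
# placeholders, not the study's estimates.
investment_cost = 1.2e12       # EUR, assumed upfront cost of the EU transition
annual_fuel_savings = 1.5e11   # EUR/year, assumed operation-phase savings
annual_grid_cost = 4.0e10      # EUR/year, assumed extra electricity and charging costs

net_annual_saving = annual_fuel_savings - annual_grid_cost
payback_years = investment_cost / net_annual_saving
print(f"simple payback time: {payback_years:.1f} years")
```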
Procedia PDF Downloads 63
190 Soft Pneumatic Actuators Fabricated Using Soluble Polymer Inserts and a Single-Pour System for Improved Durability
Authors: Alexander Harrison Greer, Edward King, Elijah Lee, Safa Obuz, Ruhao Sun, Aditya Sardesai, Toby Ma, Daniel Chow, Bryce Broadus, Calvin Costner, Troy Barnes, Biagio DeSimone, Yeshwin Sankuratri, Yiheng Chen, Holly Golecki
Abstract:
Although a relatively new field, soft robotics is experiencing a rise in applicability in the secondary school setting through The Soft Robotics Toolkit, shared fabrication resources and a design competition. Exposing students outside of university research groups to this rapidly growing field allows for development of the soft robotics industry in new and imaginative ways. Soft robotic actuators have remained difficult to implement in classrooms because of their relative cost or difficulty of fabrication. Traditionally, a two-part molding system is used; however, this configuration often results in delamination. In an effort to make soft robotics more accessible to young students, we aim to develop a simple, single-mold method of fabricating soft robotic actuators from common household materials. These actuators are made by embedding a soluble polymer insert into silicone. These inserts can be made from hand-cut polystyrene, 3D-printed polyvinyl alcohol (PVA) or acrylonitrile butadiene styrene (ABS), or molded sugar. The insert is then dissolved using an appropriate solvent such as water or acetone, leaving behind a negative form which can be pneumatically actuated. The resulting actuators are seamless, eliminating the instability of adhering multiple layers together. The benefit of this approach is twofold: it simplifies the process of creating a soft robotic actuator, and in turn, increases its effectiveness and durability. To quantify the increased durability of the single-mold actuator, it was tested against the traditional two-part mold. The single-mold actuator could withstand actuation at 20psi for 20 times the duration when compared to the traditional method. The ease of fabrication of these actuators makes them more accessible to hobbyists and students in classrooms. After developing these actuators, they were applied, in collaboration with a ceramics teacher at our school, to a glove used to transfer nuanced hand motions used to throw pottery from an expert artist to a novice. We quantified the improvement in the users’ pottery-making skill when wearing the glove using image analysis software. The seamless actuators proved to be robust in this dynamic environment. Seamless soft robotic actuators created by high school students show the applicability of the Soft Robotics Toolkit for secondary STEM education and outreach. Making students aware of what is possible through projects like this will inspire the next generation of innovators in materials science and robotics.Keywords: pneumatic actuator fabrication, soft robotic glove, soluble polymers, STEM outreach
Procedia PDF Downloads 134
189 Impact of Climate Change on Irrigation and Hydropower Potential: A Case of Upper Blue Nile Basin in Western Ethiopia
Authors: Elias Jemal Abdella
Abstract:
The Blue Nile River is an important shared resource of Ethiopia, Sudan and, because it is the major contributor of water to the main Nile River, Egypt. Despite the potential benefits of regional cooperation and integrated joint basin management, all three countries continue to pursue unilateral plans for development. In addition, there is great uncertainty about the likely impacts of climate change on water availability for existing as well as proposed irrigation and hydropower projects in the Blue Nile Basin. The main objective of this study is to quantitatively assess the impact of climate change on the hydrological regime of the upper Blue Nile basin, western Ethiopia. Three models were combined. A dynamic Coordinated Regional Climate Downscaling Experiment (CORDEX) regional climate model (RCM) was used to determine climate projections for the Upper Blue Nile basin under the Representative Concentration Pathway (RCP) 4.5 and 8.5 greenhouse gas emissions scenarios for the period 2021-2050. The outputs generated from a multimodel ensemble of four (4) CORDEX RCMs (i.e., rainfall and temperature) were used as input to a Soil and Water Assessment Tool (SWAT) hydrological model, which was set up, calibrated and validated with observed climate and hydrological data. The outputs from the SWAT model (i.e., projected river flows) were used as input to a Water Evaluation and Planning (WEAP) water resources model, which was used to determine the water resources implications of the changes in climate. The WEAP model was set up to simulate three development scenarios: the Current Development scenario represents the existing water resource development situation, the Medium-term Development scenario represents planned water resource development expected to be commissioned before 2025, and the Long-term Full Development scenario includes all planned water resource development likely to be commissioned before 2050. The projected mean annual temperature for the period 2021-2050 in most of the basin is warmer than the baseline (1982-2005) average by 1 to 1.4°C, implying an increase in evapotranspiration losses. Subbasins already distressed by drought may face even greater challenges in the future. Projected mean annual precipitation varies from subbasin to subbasin: in the eastern, north-eastern and south-western highlands of the basin, mean annual precipitation is likely to increase by up to 7%, whereas in the western lowland part of the basin it is projected to decrease by 3%. The water use simulation indicates that current irrigation demand in the basin is 1.29 Bm3y-1 for 122,765 ha of irrigated area. By 2025, with new schemes being developed, irrigation demand is estimated to increase to 2.5 Bm3y-1 for 277,779 ha. By 2050, irrigation demand in the basin is estimated to increase to 3.4 Bm3y-1 for 372,779 ha. The hydropower generation simulation indicates that 98% of the hydroelectricity potential could be produced if all planned dams are constructed. Keywords: Blue Nile River, climate change, hydropower, SWAT, WEAP
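A quick consistency check of the per-hectare irrigation demand implied by the figures quoted above; the scenario volumes and areas are taken from the abstract, and only the division is added here.

```python
# Per-hectare irrigation demand implied by the scenario figures above
# (1 Bm3 = 1e9 m3).
scenarios = {
    "current":            (1.29e9, 122_765),
    "medium-term (2025)": (2.5e9, 277_779),
    "long-term (2050)":   (3.4e9, 372_779),
}
for name, (demand_m3, area_ha) in scenarios.items():
    print(f"{name:20s} {demand_m3 / area_ha:7.0f} m3/ha/year")
```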
Procedia PDF Downloads 355
188 Cross-Country Mitigation Policies and Cross Border Emission Taxes
Authors: Massimo Ferrari, Maria Sole Pagliari
Abstract:
Pollution is a classic example of an economic externality: agents who produce it do not face direct costs from emissions. Therefore, there are no direct economic incentives for reducing pollution. One way to address this market failure would be to tax emissions directly. However, because emissions are global, governments might find it optimal to wait and let foreign countries tax emissions so that they can enjoy the benefits of lower pollution without facing its direct costs. In this paper, we first document the empirical relation between pollution and economic output with static and dynamic regression methods. We show that there is a negative relation between aggregate output and the stock of pollution (measured as the stock of CO₂ emissions). This relationship is also highly non-linear, increasing at an exponential rate. In the second part of the paper, we develop and estimate a two-country, two-sector model for the US and the euro area. With this model, we aim to analyze how the public sector should respond to higher emissions and what direct costs these policies might have. In the model, there are two types of firms: brown firms (which produce with a polluting technology) and green firms. Brown firms also produce an externality, CO₂ emissions, which has detrimental effects on aggregate output. As brown firms do not face direct costs from polluting, they have no incentive to reduce emissions. Notably, emissions in our model are global: the stock of CO₂ in the economy affects all countries, independently of where it is produced. This simplified economy captures the main trade-off between emissions and production, generating a classic market failure. According to our results, the current level of emissions reduces output by between 0.4% and 0.75%. Notably, these estimates lie at the upper bound of the distribution of those delivered by studies in the early 2000s. To address the market failure, governments should step in by introducing taxes on emissions. With the tax, brown firms pay a cost for polluting and hence face an incentive to move to green technologies. Governments, however, might also adopt a beggar-thy-neighbour strategy. Reducing emissions is costly, as it moves production away from the 'optimal' production mix of brown and green technology. Because emissions are global, a government could simply wait for the other country to tackle climate change, reaping the benefits without facing any costs. We study how this strategic game unfolds and show three important results: first, cooperation is first-best optimal from a global perspective; second, countries face incentives to deviate from the cooperative equilibrium; third, tariffs on imported brown goods (the only retaliation policy in case of deviation from the cooperation equilibrium) are ineffective because the exchange rate would move to compensate. We finally study monetary policy when the costs of climate change rise and show that the monetary authority should react more strongly to deviations of inflation from its target. Keywords: climate change, general equilibrium, optimal taxation, monetary policy
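The deviation incentive described above has the structure of a prisoner's dilemma; the sketch below uses purely hypothetical welfare payoffs (not the paper's estimates) to illustrate why cooperation is globally first-best yet each country prefers to free-ride.

```python
# Hypothetical welfare payoffs (illustrative numbers only): both taxing is
# globally first-best, yet each country's best response to the other taxing
# is not to tax, so the cooperative outcome is not self-enforcing.
payoffs = {  # (home welfare, foreign welfare)
    ("tax", "tax"):       (3.0, 3.0),
    ("tax", "no tax"):    (1.0, 4.0),
    ("no tax", "tax"):    (4.0, 1.0),
    ("no tax", "no tax"): (2.0, 2.0),
}
global_best = max(payoffs, key=lambda k: sum(payoffs[k]))
home_best_reply = max(["tax", "no tax"], key=lambda a: payoffs[(a, "tax")][0])
print("globally optimal profile:", global_best)                # ('tax', 'tax')
print("home's best reply if foreign taxes:", home_best_reply)  # 'no tax'
```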
Procedia PDF Downloads 161
187 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data
Authors: Kai Warsoenke, Maik Mackiewicz
Abstract:
To achieve a high quality of assembled car body structures, tolerancing is used to ensure the geometric accuracy of the individual car body parts. There are two main techniques to determine the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances needed to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example, in the manufacture of forming tools as operating equipment or at the higher level of car body assembly. As part of metrological process monitoring, manufactured individual parts and assemblies are recorded and the measurement results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes to the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not yet used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data. They offer the potential to extend tolerancing methods through data analysis and machine learning models. The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. For this reason, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. Subsequently, a database is created that is suitable for developing machine learning models. The objective is to create an intelligent way to determine the position and number of measurement points as well as the local tolerance range. For this, a number of different model types are compared and evaluated. The models with the best results are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, there are areas of the car body parts that behave more sensitively than the part as a whole, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently. Keywords: automotive production, machine learning, process optimization, smart tolerancing
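A sketch of the kind of learned mapping the approach aims at, predicting a local tolerance range from measurement-point features; the feature set, the synthetic data, and the choice of a random forest are assumptions for illustration, not the model types compared in the paper.

```python
# Sketch (synthetic data, assumed features): regress a local tolerance range
# from per-point features derived from historical inspection data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
# assumed features per measurement point: nominal x, y, z and local curvature
X = rng.normal(size=(500, 4))
# assumed target: observed spread (mm) at that point across historical parts
y = 0.3 + 0.1 * np.abs(X[:, 3]) + rng.normal(0, 0.02, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.predict(X[:3]))   # suggested local tolerance ranges (mm)
```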
Procedia PDF Downloads 117
186 Crafting Robust Business Model Innovation Path with Generative Artificial Intelligence in Start-up SMEs
Authors: Ignitia Motjolopane
Abstract:
Small and medium enterprises (SMEs) play an important role in economies by contributing to economic growth and employment. In the fourth industrial revolution, the convergence of technologies and the changing nature of work created pressures on economies globally. Generative artificial intelligence (AI) may support SMEs in exploring, exploiting, and transforming business models to align with their growth aspirations. SMEs' growth aspirations fall into four categories: subsistence, income, growth, and speculative. Subsistence-oriented firms focus on meeting basic financial obligations and show less motivation for business model innovation. SMEs focused on income, growth, and speculation are more likely to pursue business model innovation to support growth strategies. SMEs' strategic goals link to distinct business model innovation paths depending on whether SMEs are starting a new business, pursuing growth, or seeking profitability. Integrating generative artificial intelligence in start-up SME business model innovation enhances value creation, user-oriented innovation, and SMEs' ability to adapt to dynamic changes in the business environment. The existing literature may lack comprehensive frameworks and guidelines for effectively integrating generative AI in start-up reiterative business model innovation paths. This paper examines start-up business model innovation path with generative artificial intelligence. A theoretical approach is used to examine start-up-focused SME reiterative business model innovation path with generative AI. Articulating how generative AI may be used to support SMEs to systematically and cyclically build the business model covering most or all business model components and analyse and test the BM's viability throughout the process. As such, the paper explores generative AI usage in market exploration. Moreover, market exploration poses unique challenges for start-ups compared to established companies due to a lack of extensive customer data, sales history, and market knowledge. Furthermore, the paper examines the use of generative AI in developing and testing viable value propositions and business models. In addition, the paper looks into identifying and selecting partners with generative AI support. Selecting the right partners is crucial for start-ups and may significantly impact success. The paper will examine generative AI usage in choosing the right information technology, funding process, revenue model determination, and stress testing business models. Stress testing business models validate strong and weak points by applying scenarios and evaluating the robustness of individual business model components and the interrelation between components. Thus, the stress testing business model may address these uncertainties, as misalignment between an organisation and its environment has been recognised as the leading cause of company failure. Generative AI may be used to generate business model stress-testing scenarios. The paper is expected to make a theoretical and practical contribution to theory and approaches in crafting a robust business model innovation path with generative artificial intelligence in start-up SMEs.Keywords: business models, innovation, generative AI, small medium enterprises
Procedia PDF Downloads 72
185 Nurturing Scientific Minds: Enhancing Scientific Thinking in Children (Ages 5-9) through Experiential Learning in Kids Science Labs (STEM)
Authors: Aliya K. Salahova
Abstract:
Scientific thinking, characterized by purposeful knowledge-seeking and the harmonization of theory and facts, holds a crucial role in preparing young minds for an increasingly complex and technologically advanced world. This abstract presents a research study aimed at fostering scientific thinking in early childhood, focusing on children aged 5 to 9 years, through experiential learning in Kids Science Labs (STEM). The study utilized a longitudinal exploration design, spanning 240 weeks from September 2018 to April 2023, to evaluate the effectiveness of the Kids Science Labs program in developing scientific thinking skills. Participants in the research comprised 72 children drawn from local schools and community organizations. Through a formative psychology-pedagogical experiment, the experimental group engaged in weekly STEM activities carefully designed to stimulate scientific thinking, while the control group participated in daily art classes for comparison. To assess the scientific thinking abilities of the participants, a registration table with evaluation criteria was developed. This table included indicators such as depth of questioning, resource utilization in research, logical reasoning in hypotheses, procedural accuracy in experiments, and reflection on research processes. The data analysis revealed dynamic fluctuations in the number of children at different levels of scientific thinking proficiency. While the development was not uniform across all participants, a main leading factor emerged, indicating that the Kids Science Labs program and formative experiment exerted a positive impact on enhancing scientific thinking skills in children within this age range. The study's findings support the hypothesis that systematic implementation of STEM activities effectively promotes and nurtures scientific thinking in children aged 5-9 years. Enriching education with a specially planned STEM program, tailoring scientific activities to children's psychological development, and implementing well-planned diagnostic and corrective measures emerged as essential pedagogical conditions for enhancing scientific thinking abilities in this age group. The results highlight the significant and positive impact of the systematic-activity approach in developing scientific thinking, leading to notable progress and growth in children's scientific thinking abilities over time. These findings have promising implications for educators and researchers, emphasizing the importance of incorporating STEM activities into educational curricula to foster scientific thinking from an early age. This study contributes valuable insights to the field of science education and underscores the potential of STEM-based interventions in shaping the future scientific minds of young children.Keywords: Scientific thinking, education, STEM, intervention, Psychology, Pedagogy, collaborative learning, longitudinal study
Procedia PDF Downloads 62
184 Metal-Semiconductor Transition in Ultra-Thin Titanium Oxynitride Films Deposited by ALD
Authors: Farzan Gity, Lida Ansari, Ian M. Povey, Roger E. Nagle, James C. Greer
Abstract:
Titanium nitride (TiN) films have been widely used in a variety of fields due to their unique electrical, chemical, physical and mechanical properties, including low electrical resistivity, chemical stability, and high thermal conductivity. In microelectronic devices, thin continuous TiN films are commonly used as diffusion barriers and metal gate material. However, as the film thickness decreases below a few nanometers, the electrical properties of the film alter considerably. In this study, the physical and electrical characteristics of 1.5nm to 22nm thin films deposited by Plasma-Enhanced Atomic Layer Deposition (PE-ALD) using Tetrakis(dimethylamino)titanium(IV) (TDMAT) chemistry and Ar/N2 plasma on 80nm SiO2 capped in-situ by 2nm Al2O3 are investigated. The ALD technique allows uniformly thick films to be grown at the monolayer level in a highly controlled manner. The chemistry incorporates a low level of oxygen into the TiN films, forming titanium oxynitride (TiON). The thickness of the films is characterized by Transmission Electron Microscopy (TEM), which confirms their uniformity. The surface morphology of the films is investigated by Atomic Force Microscopy (AFM), indicating sub-nanometer surface roughness. Hall measurements are performed to determine parameters such as carrier mobility, type and concentration, as well as resistivity. The >5nm-thick films exhibit metallic behavior; however, we have observed that thin film resistivity is modulated significantly by film thickness, such that the sheet resistance at room temperature increases by more than 5 orders of magnitude when comparing the 5nm and 1.5nm films. Scattering effects at interfaces and grain boundaries could play a role in the thickness-dependent resistivity, in addition to quantum confinement effects that could occur in ultra-thin films: based on our measurements, the carrier concentration decreases from 1.5E22 1/cm3 to 5.5E17 1/cm3, while the mobility increases from < 0.1 cm2/V.s to ~4 cm2/V.s for the 5nm and 1.5nm films, respectively. Also, measurements at different temperatures indicate that the resistivity is relatively constant for the 5nm film, while for the 1.5nm film a reduction of more than 2 orders of magnitude has been observed over the range 220K to 400K. The activation energies of the 2.5nm and 1.5nm films are 30meV and 125meV, respectively, indicating that the ultra-thin TiON films exhibit semiconducting behaviour; we attribute this effect to a metal-semiconductor transition. By the same token, the contact is no longer Ohmic for the thinnest film (i.e., the 1.5nm-thick film); hence, a modified lift-off process was developed to selectively deposit thicker films, allowing us to perform electrical measurements with low contact resistance on the raised contact regions. Our atomic-scale simulations, based on molecular-dynamics-generated amorphous TiON structures with low oxygen content, confirm our experimental observations indicating highly n-type thin films. Keywords: activation energy, ALD, metal-semiconductor transition, resistivity, titanium oxynitride, ultra-thin film
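Activation energies of the kind quoted above follow from the standard Arrhenius analysis of resistance versus temperature; the sketch below shows that extraction with assumed resistance values, since the measured data are not reproduced here.

```python
# Standard Arrhenius extraction of an activation energy from resistance
# measurements at two temperatures (R ~ exp(+Ea/kT) for an activated film).
# The resistance values below are assumed, not the measured data.
import math

k_B = 8.617e-5          # Boltzmann constant, eV/K
T1, R1 = 220.0, 1.0e6   # K, ohm/sq (assumed)
T2, R2 = 400.0, 2.0e4   # K, ohm/sq (assumed)

Ea = k_B * math.log(R1 / R2) / (1.0 / T1 - 1.0 / T2)
print(f"activation energy ~ {Ea * 1000:.0f} meV")
```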
Procedia PDF Downloads 295
183 Supplementing Aerial-Roving Surveys with Autonomous Optical Cameras: A High Temporal Resolution Approach to Monitoring and Estimating Effort within a Recreational Salmon Fishery in British Columbia, Canada
Authors: Ben Morrow, Patrick O'Hara, Natalie Ban, Tunai Marques, Molly Fraser, Christopher Bone
Abstract:
Relative to commercial fisheries, recreational fisheries are often poorly understood and pose various challenges for monitoring frameworks. In British Columbia (BC), Canada, Pacific salmon are heavily targeted by recreational fishers while also being a key source of nutrient flow and crucial prey for a variety of marine and terrestrial fauna, including endangered Southern Resident killer whales (Orcinus orca). Although commercial fisheries were historically responsible for the majority of salmon retention, recreational fishing now comprises both greater effort and retention. The current monitoring scheme for recreational salmon fisheries involves aerial-roving creel surveys. However, this method has been identified as costly and having low predictive power as it is often limited to sampling fragments of fluid and temporally dynamic fisheries. This study used imagery from two shore-based autonomous cameras in a highly active recreational fishery around Sooke, BC, and evaluated their efficacy in supplementing existing aerial-roving surveys for monitoring a recreational salmon fishery. This study involved continuous monitoring and high temporal resolution (over one million images analyzed in a single fishing season), using a deep learning-based vessel detection algorithm and a custom image annotation tool to efficiently thin datasets. This allowed for the quantification of peak-season effort from a busy harbour, species-specific retention estimates, high levels of detected fishing events at a nearby popular fishing location, as well as the proportion of the fishery management area represented by cameras. Then, this study demonstrated how it could substantially enhance the temporal resolution of a fishery through diel activity pattern analyses, scaled monthly to visualize clusters of activity. This work also highlighted considerable off-season fishing detection, currently unaccounted for in the existing monitoring framework. These results demonstrate several distinct applications of autonomous cameras for providing enhanced detail currently unavailable in the current monitoring framework, each of which has important considerations for the managerial allocation of resources. Further, the approach and methodology can benefit other studies that apply shore-based camera monitoring, supplement aerial-roving creel surveys to improve fine-scale temporal understanding, inform the optimal timing of creel surveys, and improve the predictive power of recreational stock assessments to preserve important and endangered fish species.Keywords: cameras, monitoring, recreational fishing, stock assessment
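A sketch of the aggregation step behind such diel activity patterns, counting detections per month and hour of day; the timestamps below are synthetic stand-ins for the camera detections, and the detection pipeline itself is not reproduced.

```python
# Sketch (synthetic timestamps): aggregate vessel detections into counts per
# month and hour of day to build diel activity patterns.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
timestamps = pd.to_datetime("2021-06-01") + pd.to_timedelta(
    rng.integers(0, 90 * 24 * 3600, size=1000), unit="s")
df = pd.DataFrame({"timestamp": timestamps})

df["month"] = df["timestamp"].dt.month
df["hour"] = df["timestamp"].dt.hour
diel = df.groupby(["month", "hour"]).size().rename("detections").reset_index()
print(diel.head())
```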
Procedia PDF Downloads 123
182 Design and Integration of an Energy Harvesting Vibration Absorber for Rotating System
Authors: F. Infante, W. Kaal, S. Perfetto, S. Herold
Abstract:
In the last decade, the demand for wireless sensors and low-power electric devices for condition monitoring in mechanical structures has increased strongly. Networks of wireless sensors can potentially be applied in a huge variety of applications. Due to the reduction of both the size and the power consumption of electric components and the increasing complexity of mechanical systems, the interest in creating dense sensor networks with numerous nodes has grown considerably. Nevertheless, with the development of large sensor networks with numerous nodes, the critical problem of powering them is drawing more and more attention. Batteries are not a viable option when considering lifetime, size and the effort required to replace them. Among the possible alternatives for durable power sources usable in mechanical components, vibrations represent a suitable source for the amount of power required to feed a wireless sensor network. For this purpose, energy harvesting from structural vibrations has received much attention in the past few years. Suitable vibrations can be found in numerous mechanical environments, including moving automotive structures and household appliances, but also civil engineering structures like buildings and bridges. Similarly, a dynamic vibration absorber (DVA) is one of the most widely used devices to mitigate unwanted structural vibration. This device is used to transfer the primary structural vibration to an auxiliary system. Thus, the related energy is effectively localized in the secondary, less sensitive structure. The additional benefit of harvesting part of this energy can then be obtained by implementing dedicated components. This paper describes the design process of an energy harvesting tuned vibration absorber (EHTVA) for rotating systems using piezoelectric elements. The energy of the vibration is converted into electricity rather than dissipated. The proposed device is designed to mitigate torsional vibrations, as with a conventional rotational TVA, while harvesting energy as a power source for immediate use or storage. The resultant rotational multi-degree-of-freedom (MDOF) system is initially reduced to an equivalent single-degree-of-freedom (SDOF) system. Den Hartog's theory is used to evaluate the optimal mechanical parameters of the initial DVA for the SDOF system defined. The performance of the TVA is operationally assessed, and the vibration reduction at the original resonance frequency is measured. The design is then modified for the integration of active piezoelectric patches without detuning the TVA. In order to estimate the real power generated, a complex storage circuit is implemented. A DC-DC step-down converter is connected to the device through a rectifier to return a fixed output voltage. Using a large capacitor, the stored energy is measured at different frequencies. Finally, the electromechanical prototype is tested and validated, achieving vibration reduction and energy harvesting functions simultaneously. Keywords: energy harvesting, piezoelectricity, torsional vibration, vibration absorber
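The Den Hartog tuning referred to above reduces to two classical formulas for the optimal frequency and damping ratios; the sketch below applies them to an assumed reduced SDOF system, since the rotor data are not given in the abstract.

```python
# Classical Den Hartog tuning for a dynamic vibration absorber attached to
# the reduced SDOF system; the primary inertia, resonance frequency and
# mass ratio are assumed values for illustration.
import math

m_primary = 50.0   # kg-equivalent inertia of the reduced SDOF system (assumed)
f_primary = 25.0   # Hz, resonance to be mitigated (assumed)
mu = 0.05          # absorber-to-primary mass ratio (assumed)

m_abs = mu * m_primary
f_opt = f_primary / (1.0 + mu)                             # optimal tuning
zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))   # optimal damping ratio

k_abs = m_abs * (2.0 * math.pi * f_opt) ** 2               # absorber stiffness
c_abs = 2.0 * zeta_opt * m_abs * (2.0 * math.pi * f_opt)   # absorber damping
print(f"absorber: m = {m_abs:.2f} kg, k = {k_abs:.0f} N/m, c = {c_abs:.1f} N*s/m")
```

For a 5% mass ratio, these formulas place the absorber about 5% below the primary resonance with a damping ratio of roughly 13%, which is the classical Den Hartog result.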
Procedia PDF Downloads 148
181 Offshore Facilities Load Out: Case Study of Jacket Superstructure Loadout by Strand Jacking Skidding Method
Authors: A. Rahim Baharudin, Nor Arinee binti Mat Saaud, Muhammad Afiq Azman, Farah Adiba A. Sani
Abstract:
Objectives: This paper shares a case study on the engineering analysis, data analysis, and real-time data comparison used to qualify the strand wires' minimum breaking load and safe working load for the loadout operation of a new project and, at the same time, to eliminate the risk arising from discrepancies and misalignment between COMPANY Technical Standards and Industry Standards and Practices. This paper demonstrates “Lean Construction” for COMPANY’s Project by sustaining fit-for-purpose Technical Requirements for the Loadout Strand Wire Factor of Safety (F.S). The case study utilizes historical engineering data from a few loadout operations by the skidding method on different projects. It also demonstrates and qualifies the skidding wires' minimum breaking load and safe working load for use in future loadout operations of substructures and other facilities. Methods: Engineering analysis and data comparison were carried out with reference to international standards and internal COMPANY standard requirements. Data were taken from nine (9) previous projects for both topsides and jacket facilities executed at several local fabrication yards, where loadout was conducted by three (3) different service providers, with emphasis on four (4) basic elements: i) Industry Standards for Loadout Engineering and Operation Reference: the COMPANY internal standard referred to the superseded documents DNV-OS-H201 and DNV/GL 0013/ND; DNV/GL 0013/ND and DNVGL-ST-N001 do not mention any requirement for a Strand Wire F.S of 4.0 for Skidding / Pulling Operations. ii) Reference to past Loadout Engineering and Execution Packages: reference was made to projects delivered by three (3) major offshore facilities operators; the observed Strand Wire F.S ranges from 2.0 MBL (min) to 2.5 MBL (max), and no loadout operation using the requirement of 4.0 MBL was sighted in the references. iii) Strand Jack Equipment Manufacturer Datasheet Reference: referring to strand jack equipment manufacturer datasheets from different loadout service providers, the designed F.S for the equipment also ranges between 2.0 and 2.5; eight (8) strand jack datasheet models were referred to, ranging from 15 Mt to 850 Mt capacity, and no designed F.S of 4.0 was observed. iv) Site Monitoring of Actual Loadout Data and Parameters: the maximum load on a strand wire was captured during the 2nd breakout, under static conditions, at 12.9 Mt / strand wire (67.9% utilization); the maximum load on a strand wire under dynamic conditions, during Step 8 and Step 12, was 9.4 Mt / strand wire (49.5% utilization). Conclusion: This analysis demonstrated that the strand wires supplied by the service provider were technically sufficient in terms of strength, and the engineering analysis confirmed that the minimum breaking load and safe working load calculated and utilized for the projects were satisfied and operated safely. It is recommended from this study that COMPANY's technical requirements be revised for use in future projects. Keywords: construction, load out, minimum breaking load, safe working load, strand jacking, skidding
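The utilisation percentages quoted in item iv) are consistent with a safe working load of roughly 19 t per strand (SWL = MBL / F.S); the check below uses only the loads reported above plus that assumed SWL.

```python
# Consistency check of the utilisation figures in item iv); the 19 t safe
# working load per strand is an assumed value chosen to match the reported
# 67.9% static utilisation, not a figure stated in the abstract.
static_load = 12.9    # t per strand, max during 2nd breakout (from the abstract)
dynamic_load = 9.4    # t per strand, max during Steps 8 and 12 (from the abstract)
swl = 19.0            # t per strand (assumed)

for name, load in [("static", static_load), ("dynamic", dynamic_load)]:
    print(f"{name:7s} utilisation: {load / swl * 100:.1f} %")
```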
Procedia PDF Downloads 117