Search results for: optimal transverse shape
4273 Proinflammatory Response of Agglomerated TiO2 Nanoparticles in Human-Immune Cells
Authors: Vaiyapuri Subbarayn Periasamy, Jegan Athinarayanan, Ali A. Alshatwi
Abstract:
Titanium dioxide nanoparticles (TiO2-NPs) with widely varying physico-chemical properties (size, shape, surface chemistry, agglomeration state, etc.) are now found in many processed foods, agricultural chemicals, biomedical products, food packaging and food contact materials, personal care products, and other consumer products used in daily life. Growing evidence highlights the risk of property-dependent toxicity, with particular attention to the interaction between TiO2-NPs and the human immune system. Unfortunately, agglomeration and aggregation have frequently been ignored in immunotoxicological studies, even though they would be expected to affect nanotoxicity, since they change the size, shape, surface area, and other properties of the TiO2-NPs. In the present investigation, we assessed the immunotoxic effect of TiO2-NPs on human immune cells, including total WBC: lymphocytes (T cells (CD3+), T helper cells (CD3+/CD4+), suppressor/cytotoxic T cells (CD3+/CD8+), and NK cells (CD3-/CD16+ and CD56+)), monocytes (CD14+/CD3-), and B lymphocytes (CD19+/CD3-), in order to characterise the immunological response (IL1A, IL1B, IL2, IL4, IL5, IL6, IL10, IL12, IL13, IFN-γ, TGF-β, and TNF-α) and redox gene regulation (TNF, p53, BCL-2, CAT, GSTA4, CYP1A, POR, SOD1, GSTM3, GPX1, and GSR1), linking physico-chemical properties, with special reference to agglomeration, of TiO2-NPs. Our findings suggest that TiO2-NPs altered cytokine production, enhanced the phagocytic index, and induced metabolic stress through the expression of specific immune-regulatory genes in different WBC subsets, and may thereby contribute to a pro-inflammatory response. Although TiO2-NPs offer great advantages in personal care, biomedical, food, and agricultural products, their chronic and acute immunotoxicity still needs to be assessed carefully, with special reference to food and environmental safety.
Keywords: TiO2 nanoparticles, oxidative stress, cytokine, human immune cells
Procedia PDF Downloads 397
4272 An Exploration of Health Promotion Approach to Increase Optimal Complementary Feeding among Pastoral Mothers Having Children between 6 and 23 Months in Dikhil, Djibouti
Authors: Haruka Ando
Abstract:
Undernutrition of children is a critical issue, especially for people in the remote areas of the Republic of Djibouti, since household food insecurity, inadequate child caring and feeding practices, an unhealthy environment, lack of clean water, and insufficient maternal and child healthcare are the underlying causes that affect it. Nomadic pastoralists living in the Dikhil region (Dikhil) are socio-economically and geographically more vulnerable due to displacement, which in turn worsens child stunting. The high prevalence of inappropriate complementary feeding among pastoral mothers may be a significant barrier to child growth. This study aims to identify health promotion intervention strategies that would support an increase in optimal complementary feeding among pastoral mothers of children aged 6-23 months in Dikhil. There are four objectives: to explore and understand the existing practice of complementary feeding among pastoral mothers in Dikhil; to identify the barriers to appropriate complementary feeding among these mothers; to critically explore and analyse strategies for increasing complementary feeding; and to make pragmatic recommendations to address the barriers in Djibouti. This is an in-depth study utilizing a conceptual framework, the behaviour change wheel, to analyse the determinants of complementary feeding and to categorize health promotion interventions for increasing optimal complementary feeding among pastoral mothers living in Dikhil. The analytical tool was used to appraise strategies to mitigate the selected barriers to optimal complementary feeding. The data sources were secondary literature from both published and unpublished sources, collected systematically.
The determinants, including the barriers to optimal complementary feeding, were identified: heavy household workload, caring for multiple children under five, lack of education, cultural norms and traditional eating habits, lack of husbands' support, poverty and food insecurity, lack of clean water, low media coverage, insufficient health services on complementary feeding, fear, poor personal hygiene, and mothers' low decision-making ability and lack of motivation for food choice. To mitigate selected barriers, four intervention strategies based on interpersonal communication at the community level were chosen: scaling up mothers' support groups, nutrition education, a grandmother-inclusive approach, and training in complementary feeding counselling. The strategies were appraised against criteria of effectiveness and feasibility; scaling up mothers' support groups emerged as the best approach. Mid-term and long-term recommendations are suggested based on the situation analysis and the appraisal of intervention strategies. Mid-term recommendations include integrating complementary feeding promotion interventions into the healthcare service delivery system in Dikhil, and having donor agencies advocate and lobby the Ministry of Health Djibouti (MoHD) to increase budgetary allocation for complementary feeding promotion so that interventions can be implemented at the community level. They also include a community health management team in Dikhil training healthcare workers and mothers' support groups using complementary feeding communication guidelines, and monitoring behaviour change among pastoral mothers and the health outcomes of their children.
The long-term recommendation is that the MoHD develop complementary feeding guidelines that cover sector-wide collaboration to address multi-sectoral barriers.
Keywords: Afar, child food, child nutrition, complementary feeding, complementary food, developing countries, Djibouti, East Africa, hard-to-reach areas, Horn of Africa, nomad, pastoral, rural area, Somali, Sub-Saharan Africa
Procedia PDF Downloads 125
4271 Application of Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Multipoint Optimal Minimum Entropy Deconvolution in Railway Bearings Fault Diagnosis
Authors: Yao Cheng, Weihua Zhang
Abstract:
Although the measured vibration signal contains rich information on machine health condition, white noise interference and the discrete harmonics coming from the blades, shaft, and gear mesh make the fault diagnosis of rolling element bearings difficult. To overcome these interferences, a new fault diagnosis method combining Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Multipoint Optimal Minimum Entropy Deconvolution (MOMED) is proposed for the fault diagnosis of high-speed train bearings. First, the CEEMDAN technique is applied to adaptively decompose the raw vibration signal into a series of finite intrinsic mode functions (IMFs) and a residue. Compared with Ensemble Empirical Mode Decomposition (EEMD), CEEMDAN provides an exact reconstruction of the original signal and a better spectral separation of the modes, which improves the accuracy of fault diagnosis. An effective sensitivity index based on the Pearson correlation coefficients between the IMFs and the raw signal is adopted to select the sensitive IMFs that contain bearing fault information. The composite signal of the sensitive IMFs is then used for further fault identification. Next, in order to identify the fault information precisely, MOMED is utilized to enhance the periodic impulses in the composite signal. As a non-iterative method, MOMED has better deconvolution performance than classical deconvolution methods such as Minimum Entropy Deconvolution (MED) and Maximum Correlated Kurtosis Deconvolution (MCKD). Third, envelope spectrum analysis is applied to detect the existence of a bearing fault. Simulated bearing fault signals with white noise and discrete harmonic interference are used to validate the effectiveness of the proposed method. Finally, the superiority of the proposed method is further demonstrated on high-speed train bearing fault datasets measured on a test rig.
The analysis results indicate that the proposed method has strong practicability.
Keywords: bearing, complete ensemble empirical mode decomposition with adaptive noise, fault diagnosis, multipoint optimal minimum entropy deconvolution
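The IMF-selection step of the abstract above can be sketched in a few lines: compute the Pearson correlation of each IMF with the raw signal and keep only the sensitive ones. This is an illustrative sketch, not the paper's implementation; the 0.3 threshold and the synthetic "IMFs" are assumptions.

```python
import numpy as np

def select_sensitive_imfs(signal, imfs, threshold=0.3):
    """Keep IMFs whose Pearson correlation with the raw signal exceeds
    a threshold, and return their sum as the composite signal.
    (Illustrative sensitivity-index step; threshold is an assumption.)"""
    selected = []
    for imf in imfs:
        r = np.corrcoef(signal, imf)[0, 1]  # Pearson correlation coefficient
        if abs(r) > threshold:
            selected.append(imf)
    composite = np.sum(selected, axis=0)
    return composite, len(selected)

# Synthetic demonstration: a periodic "fault" component plus broadband noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
fault = np.sin(2 * np.pi * 50 * t)           # bearing-fault-like periodic component
noise = 0.1 * rng.standard_normal(t.size)    # white noise interference
raw = fault + noise
# Stand-ins for CEEMDAN output: one IMF carries the fault, one is noise.
imfs = [fault, noise]
composite, n = select_sensitive_imfs(raw, imfs)
# Only the fault-carrying IMF correlates strongly with the raw signal.
```

In the real pipeline the `imfs` list would come from a CEEMDAN decomposition, and `composite` would then be passed to MOMED and envelope spectrum analysis.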
Procedia PDF Downloads 374
4270 Optimization of Hepatitis B Surface Antigen Purification to Improve the Production of Hepatitis B Vaccines in Pichia pastoris
Authors: Rizky Kusuma Cahyani
Abstract:
Hepatitis B is an inflammatory liver disease caused by the hepatitis B virus (HBV). The infection can be prevented by vaccination with a vaccine containing the HBV surface protein (sHBsAg). However, vaccine supply is limited. Several attempts have been made to produce sHBsAg locally, but the degree of purity and the protein yield are still inadequate. Therefore, optimization of the HBsAg purification steps is required to obtain a high yield with a better purification fold. In this study, purification was optimized in two steps: precipitation, using various concentrations of NaCl (0.3 M, 0.5 M, 0.7 M) and PEG (3%, 5%, 7%), and ion exchange chromatography (IEC), using elution buffers of 300-500 mM NaCl. The bicinchoninic acid assay (BCA) and an enzyme-linked immunosorbent assay (ELISA) were used to quantify the HBsAg protein, which was visualized by SDS-PAGE analysis. Quantitative analysis showed that the optimal precipitation condition was 0.3 M NaCl with 3% PEG, while for ion exchange chromatography the optimum was elution with 500 mM NaCl. Sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) indicated the presence of HBsAg protein with molecular weights of 25 kDa (monomer) and 50 kDa (dimer). Under the optimum conditions, purification of sHBsAg produced in Pichia pastoris gave a yield of 47% and a 17-fold purification, which would make hepatitis B vaccine production more optimal.
Keywords: hepatitis B virus, HBsAg, hepatitis B surface antigen, Pichia pastoris, purification
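The two figures of merit quoted above (yield and purification fold) follow from simple mass-balance arithmetic. The sketch below shows the standard definitions; every input number is hypothetical, chosen only to reproduce a ~47% yield and ~17-fold purification, and is not taken from the paper.

```python
def purification_metrics(hbsag_start_mg, total_start_mg,
                         hbsag_final_mg, total_final_mg):
    """Yield = fraction of target protein recovered.
    Purification fold = ratio of specific content (target/total protein)
    after purification to that before. All per standard definitions."""
    yield_pct = 100.0 * hbsag_final_mg / hbsag_start_mg
    purity_start = hbsag_start_mg / total_start_mg   # specific content before
    purity_final = hbsag_final_mg / total_final_mg   # specific content after
    fold = purity_final / purity_start               # purification fold
    return yield_pct, fold

# Hypothetical masses from a crude lysate and the final IEC pool:
y, f = purification_metrics(hbsag_start_mg=10.0, total_start_mg=1000.0,
                            hbsag_final_mg=4.7, total_final_mg=27.6)
# y is 47.0 (%), f is about 17 (fold)
```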
Procedia PDF Downloads 151
4269 The Failure and Energy Mechanism of Rock-Like Material with Single Flaw
Authors: Yu Chen
Abstract:
This paper investigates the influence of a flaw on the failure process of rock-like material under uniaxial compression. In the laboratory, uniaxial compression tests were conducted on intact specimens and on a series of specimens containing a single flaw, with flaw inclination angles of 0°, 15°, 30°, 45°, 60°, 75°, and 90°. Based on the laboratory tests, corresponding numerical models were built and loaded in PFC2D. After analysing the crack initiation and failure modes, deformation field, and energy mechanism for both the laboratory tests and the numerical simulation, it can be concluded that the influence of a flaw on the failure process is determined by its inclination. The characteristic stresses generally increase with rising flaw angle. Tensile cracks develop from gently inclined flaws (α ≤ 30°), whereas shear cracks develop from the other flaws. The propagation of cracks changes during the failure process, and the failure mode of a specimen corresponds to the orientation of its flaw. A flaw has a significant influence on the transverse deformation field at the middle of the specimen, except for the 75° and 90° flaw samples. The input energy, strain energy, and dissipation energy of the specimens show approximately increasing trends with rising flaw angle, with large differences in the energy distribution.
Keywords: failure pattern, particle deformation field, energy mechanism, PFC
Procedia PDF Downloads 213
4268 Load-Enabled Deployment and Sensing Range Optimization for Lifetime Enhancement of WSNs
Authors: Krishan P. Sharma, T. P. Sharma
Abstract:
Wireless sensor nodes are resource-constrained, battery-powered devices, usually deployed in hostile and ill-disposed areas to cooperatively monitor physical or environmental conditions. Because of their limited power supply, the major challenge for researchers is to utilize battery power so as to enhance the lifetime of the whole network. Communication and sensing are the two major sources of energy consumption in sensor networks. In this paper, we propose a deployment strategy for enhancing the average lifetime of a sensor network by effectively utilizing communication and sensing energy while providing full coverage. The proposed scheme is based on the fact that, due to their heavy relaying load, sensor nodes near the sink drain energy at a much faster rate than other nodes in the network and consequently die much earlier. To counter this imbalance, the proposed scheme finds optimal communication and sensing ranges according to the effective load at each node and uses a non-uniform deployment strategy with a comparatively high density of nodes near the sink. The probable relaying load factor at each node is calculated, and the optimal communication distance and sensing range of each sensor node are adjusted accordingly. Thus, sensor nodes are placed at locations that optimize energy use during network operation. A formal mathematical analysis for calculating the optimized locations is reported in the present work.
Keywords: load factor, network lifetime, non-uniform deployment, sensing range
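The relaying-load imbalance that motivates the non-uniform deployment above can be illustrated with a simple corona model: nodes sit in concentric rings around the sink, each node generates one packet per round, and every ring forwards all traffic from the rings outside it. The ring model and node counts are illustrative assumptions, not the paper's exact formulation.

```python
def ring_load_factors(nodes_per_ring):
    """Per-node load of ring i = (own packets + packets relayed from all
    outer rings) / (number of nodes in ring i). Ring 0 is nearest the sink."""
    loads = []
    for i, n_i in enumerate(nodes_per_ring):
        traffic = sum(nodes_per_ring[i:])   # own traffic plus relayed traffic
        loads.append(traffic / n_i)         # per-node relaying load factor
    return loads

uniform = ring_load_factors([10, 10, 10, 10])    # equal density everywhere
weighted = ring_load_factors([40, 20, 10, 10])   # denser near the sink
# Uniform deployment: inner ring carries 4x the per-node load of the outer
# ring. The near-sink-dense deployment flattens the load to ~2x everywhere.
```

The same per-node load factor is what the paper uses to size communication and sensing ranges, so that energy drain is equalized across the network.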
Procedia PDF Downloads 383
4267 UV-Cured Thiol-ene Based Polymeric Phase Change Materials for Thermal Energy Storage
Authors: M. Vezir Kahraman, Emre Basturk
Abstract:
Energy storage technology offers new ways to meet the demand for efficient and reliable energy storage materials. Thermal energy storage systems provide the potential for energy savings, which in turn decreases the environmental impact of energy usage. For this purpose, phase change materials (PCMs), which work as 'latent heat storage units' that can store or release large amounts of energy, are preferred. PCMs absorb, store, and discharge thermal energy during melting-freezing cycles, converting from one phase to another. PCMs can generally be arranged into three classes: organic materials, salt hydrates, and eutectics. Many kinds of organic and inorganic PCMs and their blends have been examined as latent heat storage materials. PCMs have found diverse applications, such as solar energy storage and transfer, HVAC (heating, ventilating, and air conditioning) systems, thermal comfort in vehicles, passive cooling, temperature-controlled distribution, industrial waste heat recovery, underfloor heating systems, and modified fabrics in textiles. Ultraviolet (UV) curing technology has many advantages that have made it applicable in many different fields: low energy consumption, high speed, room-temperature operation, low processing costs, high chemical stability, and environmental friendliness. One of the principal advantages of UV-cured PCMs is that they prevent the interior PCM from leaking. A shape-stabilized PCM is prepared by blending the PCM with a supporting material, usually a polymer. In our study, the leakage problem is minimized by coating the fatty alcohols with a photo-cross-linked thiol-ene based polymeric system, in which the photo-cross-linked polymer acts as a matrix. The aim of this study is to introduce a novel thiol-ene based shape-stabilized PCM.
Photo-cross-linked thiol-ene based polymers containing fatty alcohols were prepared and characterized as phase change materials (PCMs). Different fatty alcohols were used in order to investigate their properties as shape-stable PCMs. The structure of the PCMs was confirmed by ATR-FTIR. The phase transition behaviour and thermal stability of the prepared photo-cross-linked PCMs were investigated by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). This work was supported by Marmara University, Commission of Scientific Research Project.
Keywords: differential scanning calorimetry (DSC), polymeric phase change material, thermal energy storage, UV-curing
Procedia PDF Downloads 228
4266 Determination of Unsaturated Soil Permeability Based on Geometric Factor Development of Constant Discharge Model
Authors: A. Rifa’i, Y. Takeshita, M. Komatsu
Abstract:
After the 2006 Yogyakarta earthquake, the main problem in the first yard of Prambanan Temple was the ponding that occurred after rainfall. To solve this problem, soil characterization needs to be carried out, in particular determining the permeability coefficient (k) in both saturated and unsaturated conditions. A more accurate and efficient field testing procedure is required to obtain permeability data that represent the field condition. One field permeability test for determining the permeability coefficient is the constant discharge procedure. Necessary adjustments to the constant discharge procedure need to be determined, especially the value of the geometric factor (F), to improve the resulting permeability coefficient. The value of k is correlated with the volumetric water content (θ) from the unsaturated to the saturated condition. In principle, the constant discharge model provides a constant flow into a permeameter tube, from which water flows into the ground until the water level in the tube becomes constant. This constant water level is highly dependent on the tube dimensions: every tube geometry has a shape factor, called the geometric factor, that affects the result of the test, and it is defined by the shape and radius of the tube. This research modified the geometric factor parameters by using an empty material tube method, so that the geometric factor changes. The saturation level was monitored with a soil moisture sensor, and the field test results were compared with laboratory tests for validation. The field and laboratory results of the empty tube material method differ on average by 3.33 x 10^-4 cm/sec. The test results show that the modified geometric factor provides more accurate data.
The improved constant discharge procedure provides more relevant results.
Keywords: constant discharge, geometric factor, permeability coefficient, unsaturated soils
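To make the role of the geometric factor concrete: for a constant-head borehole permeameter the permeability is commonly written as k = Q / (F · H), where Q is the steady discharge, H the constant head in the tube, and F the geometric (shape) factor with units of length. The sketch below uses a classic open-end-casing approximation, F = 5.5 r; both this F expression and all the numbers are illustrative assumptions, not the paper's calibrated values.

```python
def permeability_constant_head(Q_cm3_per_s, head_cm, radius_cm, shape_coeff=5.5):
    """k = Q / (F * H) with F = shape_coeff * r (open-end casing
    approximation; the paper calibrates F experimentally instead)."""
    F = shape_coeff * radius_cm           # geometric factor, in cm
    return Q_cm3_per_s / (F * head_cm)    # permeability coefficient, cm/s

# Hypothetical reading: 0.011 cm^3/s discharge, 40 cm head, 2.5 cm tube radius.
k = permeability_constant_head(Q_cm3_per_s=0.011, head_cm=40.0, radius_cm=2.5)
# k = 0.011 / (5.5 * 2.5 * 40) = 2.0e-5 cm/s
```

Changing the tube geometry changes F, and hence the inferred k, which is why the study's modified geometric factor directly affects the accuracy of the reported permeability.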
Procedia PDF Downloads 294
4265 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube
Authors: Nirjhar Dhang, S. Vinay Kumar
Abstract:
Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids, and the interfacial transition zone (ITZ) around the aggregates. Adopting these complex structures and material properties in numerical simulation would lead to a better understanding and design of concrete. In this work, a mesoscale model of concrete has been prepared from X-ray computerized tomography (CT) images. These images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under compressive displacement, and the effects of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength are investigated. The CT scan of a concrete cube consists of a series of two-dimensional slices: a total of 49 slices were obtained from a 150 mm cube, giving a slice interval of approximately 3 mm. Because CT scanning is non-destructive, the same cube can be scanned and later compressed in a universal testing machine (UTM) to find its strength. The image processing and the extraction of mortar and aggregates from the CT slices are performed by programming in Python. A digital colour image consists of red, green, and blue (RGB) pixels; the RGB image is converted to a black-and-white (BW) image, and the mesoscale constituents are identified by thresholding values between 0 and 255. A pixel matrix is created for modeling the mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale reflecting relative strength: zero is assigned to voids, 4-6 to mortar, and 7-9 to aggregates, while values between 1-3 identify the boundary between aggregates and mortar. In the next step, triangular and quadrilateral elements are generated for plane stress or plane strain models, depending on the option chosen.
Material properties, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacements, stresses, and damage are evaluated by importing the input file into ABAQUS. This simulation evaluates the compressive strengths of the 49 slices of the cube; each model is meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids, and the variation of ITZ layer thickness on load-carrying capacity, stress-strain response, and strain localization of concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used to find the shape and content of the aggregates in the concrete. This may further be compared with test results of concrete cores and can serve as an important tool for the strength evaluation of concrete.
Keywords: concrete, image processing, plane strain, interfacial transition zone
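The pixel-classification step described above (grayscale 0-255 normalized to a 0-9 constituent scale: 0 = void, 1-3 = ITZ boundary, 4-6 = mortar, 7-9 = aggregate) can be sketched as follows. The simple linear rescaling and the toy 2x3 slice are illustrative assumptions; the paper would use its own calibrated grayscale ranges.

```python
import numpy as np

def classify_slice(gray):
    """Map a grayscale CT slice (uint8 array, 0-255) to the 0-9 scale."""
    return np.rint(gray.astype(float) * 9.0 / 255.0).astype(int)

def constituent_name(code):
    """Interpret a 0-9 code per the scheme in the abstract."""
    if code == 0:
        return "void"
    if code <= 3:
        return "ITZ"        # boundary between aggregate and mortar
    if code <= 6:
        return "mortar"
    return "aggregate"

# Toy 2x3 "slice" of grayscale pixel values:
slice_ = np.array([[0, 80, 140], [200, 255, 100]], dtype=np.uint8)
codes = classify_slice(slice_)
names = [[constituent_name(c) for c in row] for row in codes]
```

In the actual workflow, each classified pixel matrix would then drive the generation of triangular or quadrilateral finite elements with the material properties of its constituent.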
Procedia PDF Downloads 240
4264 Characterising the Performance Benefits of a 1/7-Scale Morphing Rotor Blade
Authors: Mars Burke, Alvin Gatto
Abstract:
Rotary-wing aircraft serve as indispensable components in the advancement of aviation, valued for their ability to operate in diverse and challenging environments without conventional runways. This versatility makes them ideal for applications such as environmental conservation, precision agriculture, emergency medical support, and rapid-response operations in rugged terrain. However, although highly maneuverable, rotary-wing platforms generally have lower aerodynamic efficiency than fixed-wing aircraft. This study seeks to improve aerodynamic performance by examining a 1/7th-scale rotor blade model with a NACA0012 airfoil using CROTOR software. The analysis focuses on the optimal spanwise locations for separating the morphing and fixed blade sections, at 85%, 90%, and 95% of the blade radius (r/R), with up to +20 degrees of twist incorporated in the design. Key performance metrics assessed include the lift coefficient (CL), drag coefficient (CD), lift-to-drag ratio (CL/CD), Mach number, power, thrust coefficient, and Figure of Merit (FOM). Results indicate that the 0.90 r/R position is optimal for dividing the morphing and fixed sections, achieving an improvement of over 7% in both lift-to-drag ratio and FOM. These findings underscore the substantial impact on the overall performance of the rotor system and its rotational aerodynamics that geometric modification through a morphing capability can ultimately realise.
Keywords: rotary morphing, rotational aerodynamics, rotorcraft morphing, rotor blade, twist morphing
Procedia PDF Downloads 12
4263 A Family Development Approach to Understanding the Transfer of Family Business Ownership
Authors: Susan Lanz, Gary T. Burke, Omid Omidvar
Abstract:
The intention to transfer ownership control across family generations is acknowledged to be central to a theoretical understanding of how family businesses differ and are distinct as a business group. In practice, however, most business-owning families face challenges in transferring their business ownership from one family generation to the next. To date, researchers have paid little attention to how and when ownership is passed across family generations and to the dynamics of such transitions. This is primarily due to the prevailing assumption that ownership transfer is an unimportant, legalistic issue that occurs within a wider family management succession process. Yet the limited evidence available suggests that family ownership transfer occurs both inside and outside of the management succession process and is a difficult process for business-owning families to navigate. As a result, many otherwise viable family businesses are closing, leading to unnecessary losses of jobs and knowledge. This qualitative paper examines how family members understand and navigate the ownership transfer process. The study uses an inductive qualitative research design, conducted through in-depth interviews within eight business-owning families. It draws on family development theory and shows how a wide range of family-related events and dynamics outside of family business involvement underlie and shape the ownership transfer process. The findings extend theory on how these events trigger ownership transfer and how they shape the ownership meanings held within business-owning families. The study found that the meanings of ownership transfer extend beyond transferring the legal control and financial appropriation rights of shareholders, and it concludes that the ownership transfer process has three distinct stages: symbolic, re-balancing, and protectionist.
Each stage creates distinct family social constructions of the rights of family members to hold business ownership, and each stage occurs within a specific family development phase.
Keywords: business-owning family, family development theory, ownership transfer, process
Procedia PDF Downloads 154
4262 Distribution and Diversity of Pyrenocarpous Lichens in India with Special Reference to Forest Health
Authors: Gaurav Kumar Mishra, Sanjeeva Nayaka, Dalip Kumar Upreti
Abstract:
Nature exhibits a number of unique plants that can be used as indicators of the environmental condition of a particular place. Lichens are unique organisms with the ability to absorb not only organic, inorganic, and metallic substances but also radionuclides present in the environment. In the present study, pyrenocarpous lichens are used as indicators of good forest health. Pyrenocarpous lichens are simple crust-forming lichens with black dot-like perithecia, and they offer few characters for taxonomic segregation compared with their foliose and fruticose brethren. The colour and nature of the thallus and the presence or absence of a hypothallus are among the few thallus characters used to segregate pyrenocarpous taxa. The fruiting bodies of pyrenolichens, i.e. the ascocarps, are perithecia. The perithecia and their contents provide many important criteria for the segregation of pyrenocarpous lichen taxa: ascocarp morphology, arrangement, anatomy, shape, and colour, the perithecial wall, ostiole shape, position, and colour, the type of paraphyses, ascus shape and size, ascospore septation, the ascospore wall, and the periphyses are the valuable characters used to segregate different pyrenocarpous lichen taxa. India is represented by 350 species in 44 genera and eleven families. Among the genera, Pyrenula is dominant with 82 species, followed by Porina with 70 species. Recently, the systematics of the pyrenocarpous lichens has been revised by American and European lichenologists using phylogenetic methods. Still, the taxonomy of pyrenocarpous lichens is in flux, and the information generated by this study will play a vital role in settling the taxonomy of this peculiar group of lichens worldwide. The Indian Himalayan region exhibits a rich diversity of pyrenocarpous lichens.
The western Himalayan region shows a luxuriance of pyrenocarpous lichens owing to its unique topography and climatic conditions, while the eastern Himalayan region owes its rich diversity to its warmer and moister climate. This moist, warm climate supports forests dominated by evergreen trees. Pyrenocarpous lichen communities are good indicators of young and regenerating forest types, and their rich diversity clearly indicates that most of the forests within the eastern Himalayan region are in good health. The fast pace of urbanization and other developmental activities will certainly have adverse effects on the diversity and distribution of pyrenocarpous lichens in the different forest types, and the present distribution pattern will act as baseline data for future biomonitoring studies in the area.
Keywords: lichen diversity, indicator species, environmental factors, pyrenocarpous
Procedia PDF Downloads 147
4261 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture
Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger
Abstract:
3D woven textile composites continue to emerge as advanced materials for structural applications and composite manufacture due to their bespoke nature, through-thickness reinforcement, and near-net-shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. Because 3D weaving is a dry preforming technology, it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf), and consolidation. This compression during manufacture changes the preform's thickness and architecture, which can lead to under-performance of, or changes in, the 3D woven composite. Unlike traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures make it difficult to know exactly how each 3D preform will behave during processing. The focus of this study is therefore to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp (fibre waviness), and thickness, as well as to analyse the accuracy of available software in predicting how 3D woven preforms behave under compression. To achieve this, 3D preforms were modelled and their compression simulated in WiseTex for varying architectures of binder style, pick density, thickness, and tow size. These architectures were then woven, and the samples were dry compression tested to determine the compressibility of the preforms under various pressures. Additional preform samples were manufactured using resin transfer moulding (RTM) with varying compressive force. Composite samples were cross-sectioned, polished, and analysed using microscopy to investigate changes in architecture and crimp. The data from dry fabric compression and from the composite samples were then compared with the WiseTex models to determine the accuracy of the prediction and to identify the architectural parameters that affect preform compressibility and stability.
The results indicate that binder style, pick density, tow size, and thickness have a significant effect on the compressibility of 3D woven preforms, with lower pick density allowing greater compression and distortion of the architecture. It was further highlighted that binder style combined with pressure has a significant effect on changes to the preform architecture: under compression, orthogonal binders experienced the highest level of deformation but the highest overall stability, while layer-to-layer binders showed a reduction in binder crimp. In general, the simulations compared reasonably with the experimental results; however, deviations are evident due to assumptions present within the modelled results.
Keywords: 3D woven composites, compression, preforms, textile composites
4260 Optimal Pricing Based on Real Estate Demand Data
Authors: Vanessa Kummer, Maik Meusel
Abstract:
Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information—for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is common to use only complete records. Usually, however, the proportion of complete data is rather small, which leads to most of the information being neglected. In addition, the complete records may be strongly distorted, and the reason that data is missing might itself contain information, which is ignored by that approach. An interesting question is, therefore, whether for economic analyses such as the one at hand there is added value in using the whole data set with imputed missing values compared to using the usually small percentage of complete data (the baseline). It is also interesting to see how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, and neural network techniques, are applied. By training the model iteratively on the imputed data and, thereby, including the information of all data in the model, the distortion of the first training set—the complete data—vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data.
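The benchmarking step described above can be sketched in a few lines: hide a random subset of known values, impute them, and score the estimates against the truth. The k-means-style imputer and the synthetic data below are illustrative assumptions, not the authors' actual algorithms or data.

```python
import numpy as np

def mask_entries(X, frac, rng):
    """Randomly hide a fraction of entries to create a held-out test set."""
    mask = rng.random(X.shape) < frac
    X_miss = X.copy()
    X_miss[mask] = np.nan
    return X_miss, mask

def impute_column_mean(X_miss):
    """Baseline: replace each missing value with its column mean."""
    col_mean = np.nanmean(X_miss, axis=0)
    X_imp = X_miss.copy()
    rows, cols = np.where(np.isnan(X_imp))
    X_imp[rows, cols] = col_mean[cols]
    return X_imp

def impute_kmeans(X_miss, k=3, iters=20, seed=0):
    """Cluster rows (after mean-filling) and re-impute from cluster means."""
    rng = np.random.default_rng(seed)
    X_imp = impute_column_mean(X_miss)
    centers = X_imp[rng.choice(len(X_imp), k, replace=False)]
    miss = np.isnan(X_miss)
    for _ in range(iters):
        labels = np.argmin(((X_imp[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X_imp[labels == j].mean(axis=0)
        # refill the originally missing cells from the assigned cluster mean
        X_imp[miss] = centers[labels][miss]
    return X_imp

def rmse_on_masked(X_true, X_imp, mask):
    """Score an imputation only on the cells that were hidden."""
    return float(np.sqrt(((X_true[mask] - X_imp[mask]) ** 2).mean()))
```

On data with a clear cluster structure, the clustering-based imputer recovers the hidden values far better than the complete-data-agnostic column mean, which is the kind of gap the study measures per algorithm and parameter set.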
After having found the optimal parameter set for each algorithm, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on the imputed data sets do not differ significantly from each other; however, the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Also, demand estimates derived from the whole data set are much more accurate than the baseline estimation. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning
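As a toy illustration of the survival-function step, the empirical survival function S(p), i.e. the share of searchers whose stated (or imputed) maximum price is at least p, can be computed directly from subscription data; the log-normal sample below merely stands in for the Swiss platform data.

```python
import numpy as np

def survival_function(max_prices):
    """Empirical survival S(p): share of search subscriptions willing to pay >= p."""
    prices = np.asarray(max_prices, dtype=float)
    def S(p):
        return float(np.mean(prices >= p))
    return S

# Synthetic stated budgets (CHF/month) for one segment, e.g. 3-room flats in a region.
rng = np.random.default_rng(0)
budgets = rng.lognormal(mean=7.6, sigma=0.3, size=5000)
S = survival_function(budgets)
```

Comparing such curves across data sets (baseline vs. imputed) then amounts to evaluating S on a common grid of prices.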
4259 Mesoscopic Defects of Forming and Induced Properties on the Impact of a Composite Glass/Polyester
Authors: Bachir Kacimi, Fatiha Teklal, Arezki Djebbar
Abstract:
Forming processes induce residual deformations on the reinforcement and sometimes lead to mesoscopic defects, which are more recurrent than macroscopic defects during the manufacture of complex structural parts. This study deals with the influence of fabric shear and buckle defects, which appear during draping processes, on the impact behavior of a glass fiber reinforced polymer. To achieve this aim, we produced several specimens with different amplitudes of deformation (shear) and defects on the fabric using a specific bench. The specimens were manufactured using contact molding and tested at several impact energies. The results and measurements made on tested specimens were compared to those of the healthy material. The results showed that the buckle defects have a negative effect on the elastic parameters and revealed greater damage, with a significant out-of-plane mode, relative to the healthy composite material. This effect is the consequence of a local fiber impoverishment and a disorganization of the fibrous network, with a reorientation of the fibers following the out-of-plane buckling of the yarns, in the area where the defects are located. For the material with calibrated shear of the reinforcement, the increased local fiber rate due to the shear deformations and the contribution to stiffness of the transverse yarns led to an increase in mechanical properties.
Keywords: defects, forming, impact, induced properties, textiles
4258 Optimal 3D Deployment and Path Planning of Multiple UAVs for Maximum Coverage and Autonomy
Authors: Indu Chandran, Shubham Sharma, Rohan Mehta, Vipin Kizheppatt
Abstract:
Unmanned aerial vehicles are increasingly being explored as the most promising solution for disaster monitoring, assessment, and recovery. Current relief operations heavily rely on intelligent robot swarms to capture the damage caused, provide timely rescue, and create road maps for the victims. To perform these time-critical missions, efficient path planning that ensures quick coverage of the area is vital. This study aims to develop a technically balanced approach that provides maximum coverage of the affected area in minimum time using the optimal number of UAVs. A coverage trajectory is designed through area decomposition and task assignment. To perform an efficient and autonomous coverage mission, a solution to a TSP-based optimization problem using meta-heuristic approaches is designed to allocate waypoints to UAVs of different flight capacities. The study exploits multi-agent simulations like PX4-SITL and QGroundControl through the ROS framework and visualizes the dynamics of UAV deployment along different search paths in a 3D Gazebo environment. Through detailed theoretical analysis and simulation tests, we illustrate the optimality and efficiency of the proposed methodologies.
Keywords: area coverage, coverage path planning, heuristic algorithm, mission monitoring, optimization, task assignment, unmanned aerial vehicles
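The waypoint-allocation step can be illustrated with a deliberately simple stand-in for the paper's meta-heuristic TSP solver: a nearest-neighbour tour over the decomposed area's waypoints, split among UAVs in proportion to their flight capacities. The coordinates and capacity values are invented for illustration.

```python
import math

def nearest_neighbour_tour(waypoints, start=0):
    """Greedy TSP heuristic: repeatedly fly to the closest unvisited waypoint."""
    unvisited = set(range(len(waypoints))) - {start}
    tour, cur = [start], start
    while unvisited:
        cur = min(unvisited, key=lambda j: math.dist(waypoints[cur], waypoints[j]))
        tour.append(cur)
        unvisited.remove(cur)
    return tour

def allocate_to_uavs(tour, capacities):
    """Split the tour into consecutive legs, proportional to each UAV's capacity."""
    total, legs, i = sum(capacities), [], 0
    for c in capacities[:-1]:
        n = round(len(tour) * c / total)
        legs.append(tour[i:i + n])
        i += n
    legs.append(tour[i:])  # the last UAV takes the remainder
    return legs

# Example: a 4x4 grid of waypoints covered by two UAVs (capacity ratio 2:1).
waypoints = [(float(x), float(y)) for x in range(4) for y in range(4)]
tour = nearest_neighbour_tour(waypoints)
legs = allocate_to_uavs(tour, [2.0, 1.0])
```

A meta-heuristic (e.g. a genetic algorithm, as in the TSP literature) would refine such a constructed tour; the greedy version above only shows where the waypoint allocation fits in the pipeline.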
4257 Evaluation of the Power Generation Effect Obtained by Inserting a Piezoelectric Sheet in the Backlash Clearance of a Circular Arc Helical Gear
Authors: Barenten Suciu, Yuya Nakamoto
Abstract:
The power generation effect obtained by inserting a piezoelectric sheet in the backlash clearance of a circular arc helical gear is evaluated. This type of screw gear is preferred since, in comparison with the involute tooth profile, the circular arc profile leads to reduced stress-concentration effects and improved life of the piezoelectric film. Firstly, the geometry of the circular arc helical gear and the properties of the piezoelectric sheet are presented. Then, a description of the test rig, consisting of a right-hand thread gear meshing with a left-hand thread gear, and the voltage measurement procedure are given. After creating the three-dimensional (3D) model of the meshing gears in SolidWorks, they are 3D-printed in acrylonitrile butadiene styrene (ABS) resin. The variation of the generated voltage versus time, during a meshing cycle of the circular arc helical gear, is measured for various values of the center distance. Then, the change of the maximal, minimal, and peak-to-peak voltage versus the center distance is illustrated. The optimal center distance of the gear, which maximizes the voltage, is found and its significance is discussed. These results prove that the contact pressure of the meshing gears can be measured, and also that electrical power can be generated by employing the proposed technique.
Keywords: circular arc helical gear, contact problem, optimal center distance, piezoelectric sheet, power generation
4256 Study on Effect of Reverse Cyclic Loading on Fracture Resistance Curve of Equivalent Stress Gradient (ESG) Specimen
Authors: Jaegu Choi, Jae-Mean Koo, Chang-Sung Seok, Byungwoo Moon
Abstract:
Since massive earthquakes around the world have been reported recently, the safety of nuclear power plants under seismic loading has become a significant issue. Seismic loading is a reverse cyclic loading, consisting of repeated tension and compression induced by longitudinal and transverse waves. Up to now, studies on the characteristics of fracture toughness under reverse cyclic loading have been insufficient. Therefore, it is necessary to obtain the fracture toughness under reverse cyclic load for the integrity estimation of nuclear power plants under seismic load. Fracture resistance (J-R) curves, which are used for the determination of fracture toughness or integrity estimation in terms of elastic-plastic fracture mechanics, can be derived by the fracture resistance test using the single specimen technique. The objective of this paper is to study the effects of reverse cyclic loading on the fracture resistance curve of the ESG specimen, which has a stress gradient similar to that of the crack surface of a real pipe. For this, we carried out fracture toughness tests under reverse cyclic loading while changing the incremental plastic displacement. Test results showed that the J-R curves decreased with a decrease of the incremental plastic displacement.
Keywords: reverse cyclic loading, J-R curve, ESG specimen, incremental plastic displacement
4255 A Study on Improvement of Performance of Anti-Splash Device for Cargo Oil Tank Vent Pipe Using CFD Simulation and Artificial Neural Network
Authors: Min-Woo Kim, Ok-Kyun Na, Jun-Ho Byun, Jong-Hwan Park, Seung-Hwa Yang, Joon-Hong Park, Young-Chul Park
Abstract:
This study is focused on the comparative analysis and improvement of the flow characteristics of the Anti-Splash Device located under the P/V valve and of new concept design models, using CFD analysis and an Artificial Neural Network. The P/V valve, located on the upper deck to relieve the pressure rise and vacuum conditions in the inner tanks of liquid cargo ships, has been involved in oil outflow accidents caused by transverse and longitudinal sloshing forces. The Anti-Splash Device is fitted to prevent this problem in the shipbuilding industry, but oil outflow accidents are still reported by ship owners. Thus, four types of new design models are presented in this study, and a comparative analysis is conducted between the new models and the existing model. The key criterion for this problem is the flux at the outlet of the Anti-Splash Device. Therefore, the flow and velocity are obtained by transient analysis, and the optimum model and design parameters are then decided for model development. Subsequently, the Anti-Splash Device needs to be verified by flow tests using experimental equipment to obtain certification.
Keywords: anti-splash device, P/V valve, sloshing, artificial neural network
4254 Resistance and Sub-Resistances of RC Beams Subjected to Multiple Failure Modes
Authors: F. Sangiorgio, J. Silfwerbrand, G. Mancini
Abstract:
Geometric and mechanical properties all influence the resistance of RC structures and may, in certain combinations of property values, increase the risk of a brittle failure of the whole system. This paper presents a statistical and probabilistic investigation of the resistance of RC beams designed according to Eurocodes 2 and 8 and subjected to multiple failure modes, under both the natural variation of material properties and the uncertainty associated with cross-section and transverse reinforcement geometry. A full probabilistic model based on the JCSS Probabilistic Model Code is derived. Different beams are studied through material nonlinear analysis via Monte Carlo simulations. The resistance model is consistent with Eurocode 2. Both a multivariate statistical evaluation and a data clustering analysis of the outcomes are then performed. Results show that the ultimate load behaviour of RC beams subjected to flexural and shear failure modes is mainly influenced by the combination of the mechanical properties of both the longitudinal reinforcement and the stirrups, and the tensile strength of concrete, of which the latter appears to affect the overall response of the system in a nonlinear way. The model uncertainty of the resistance model used in the analysis undoubtedly plays an important role in interpreting the results.
Keywords: modelling, Monte Carlo simulations, probabilistic models, data clustering, reinforced concrete members, structural design
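The Monte Carlo idea, propagating random material and geometric properties through a resistance model, can be sketched with a deliberately simplified flexural resistance (a rectangular stress block rather than the paper's material nonlinear analysis); all distribution parameters below are illustrative assumptions, not JCSS values.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo samples

# Illustrative random variables (means/CoVs are assumptions, not JCSS values):
fy = rng.lognormal(np.log(500e6), 0.05, n)   # steel yield strength [Pa]
fc = rng.lognormal(np.log(30e6), 0.15, n)    # concrete compressive strength [Pa]
b  = rng.normal(0.30, 0.005, n)              # section width [m]
d  = rng.normal(0.45, 0.010, n)              # effective depth [m]
As = 1.0e-3                                  # steel area [m^2], deterministic here

# Flexural resistance via the rectangular stress block
a  = As * fy / (0.85 * fc * b)               # stress-block depth [m]
MR = As * fy * (d - a / 2)                   # bending resistance [N*m]

mean, std = MR.mean(), MR.std()
frac5 = np.quantile(MR, 0.05)                # 5% fractile, as used in design checks
```

The full study replaces this closed-form resistance with nonlinear section analysis and covers shear as well, but the statistical post-processing (fractiles, multivariate evaluation, clustering of outcomes) operates on exactly this kind of simulated resistance sample.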
4253 Hydrometallurgical Recovery of Cobalt, Nickel, Lithium, and Manganese from Spent Lithium-Ion Batteries
Authors: E. K. Hardwick, L. B. Siwela, J. G. Falconer, M. E. Mathibela, W. Rolfe
Abstract:
Lithium-ion battery (LiB) demand has increased with the advancement of technologies. Applications include electric vehicles, cell phones, laptops, and many more devices. Typical components of the cathodes include lithium, cobalt, nickel, and manganese. Recycling spent LiBs is necessary to reduce the ecological footprint of their production and use and to provide a secondary source of valuable metals. A hydrometallurgical method was investigated for the recovery of cobalt and nickel from LiB cathodes. The cathodes were leached using a chloride solution. Ion exchange was then used to recover the chloro-complexes of the metals. The aim of the research was to determine the efficiency of a chloride leach, as well as the ion exchange operating capacities that can be achieved for LiB recycling, and to establish the optimal operating conditions (ideal pH, temperature, leachate and eluant, flow rate, and reagent concentrations) for the recovery of the cathode metals. It was found that the leaching of the cathodes could be hindered by the formation of refractory metal oxides of cathode components. A reducing agent was necessary to improve the leaching rate and efficiency. Leaching was achieved using various chloride-containing solutions. The chloro-complexes were sorbed onto the ion exchange resin and eluted to produce concentrated cobalt, nickel, lithium, and manganese streams. Chromatographic separation of these elements was achieved. Further work is currently underway to determine the optimal operating conditions for the recovery by ion exchange.
Keywords: cobalt, ion exchange, leachate formation, lithium-ion batteries, manganese, nickel
4252 Systems Approach on Thermal Analysis of an Automatic Transmission
Authors: Sinsze Koo, Benjin Luo, Matthew Henry
Abstract:
In order to increase the performance of an automatic transmission, the automatic transmission fluid must be warmed up to an optimal operating temperature. In a conventional vehicle, cold starts result in friction losses in the gearbox and engine. The stop-and-go nature of city driving dramatically affects the warm-up of engine oil and automatic transmission fluid and delays the time needed to reach an optimal operating temperature. This temperature phenomenon impacts both engine and transmission performance and also increases fuel consumption and CO2 emissions. The aim of this study is to develop know-how of the thermal behavior in order to identify thermal impacts and functional principles in automatic transmissions. Thermal behavior was studied using models and simulations of one-dimensional thermal and flow transport, developed in GT-Suite. The powertrain of a conventional vehicle was modeled in order to emphasize the thermal phenomena occurring in the various components and how they impact automatic transmission performance. The simulation demonstrates the thermal model of a transmission fluid cooling system and its component parts during warm-up after a cold start. The results of these analyses will support future designs of transmission systems and components in an attempt to obtain better fuel efficiency and transmission performance. Therefore, these thermal analyses could identify ways to improve existing thermal management techniques, with priority on fuel efficiency.
Keywords: thermal management, automatic transmission, hybrid, systematic approach
4251 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods
Authors: Sohyoung Won, Heebal Kim, Dajeong Lim
Abstract:
Genomic prediction is an effective way to measure the abilities of livestock for breeding, based on genomic estimated breeding values, which are statistically predicted from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability of a quantitative trait locus being in strong linkage disequilibrium (LD) with the markers is higher. To use haplotypes efficiently in genomic prediction, optimal ways of defining haplotypes need to be found. In this study, 770K SNP chip data was collected from a Hanwoo (Korean cattle) population consisting of 2506 animals. Haplotypes were first defined in three different ways using the 770K SNP chip data: based on 1) the length of haplotypes (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, haplotypes defined by all methods were set to have comparable sizes; in each method, haplotypes defined to have an average of 5, 10, 20, or 50 SNPs were tested. A modified GBLUP method using haplotype alleles as predictor variables was implemented to test the prediction reliability of each haplotype set. The conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was also tested to evaluate the performance of the haplotype sets in genomic prediction. Carcass weight was used as the phenotype for testing. As a result, haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP. There were not many differences in reliability between the different haplotype-defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes including around 20 SNPs can be optimal markers for genomic prediction.
When the number of alleles generated by each haplotype-defining method was compared, clustering by LD generated the smallest number of alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables decreased when the LD-based method was used, while all three haplotype-defining methods showed similar performance. This suggests that defining haplotypes based on LD can reduce computational costs and allows efficient prediction. Finding optimal ways to define haplotypes and using haplotype alleles as markers can provide improved performance and efficiency in genomic prediction.
Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium
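The second haplotype definition, fixed windows of consecutive SNPs, and the conversion of window genotype strings into integer allele codes for a haplotype-based design matrix can be sketched as follows; this is a simplified illustration, not the authors' pipeline.

```python
def snp_count_windows(n_snps, window=20):
    """Partition SNP indices into consecutive windows of `window` SNPs each."""
    return [list(range(i, min(i + window, n_snps))) for i in range(0, n_snps, window)]

def haplotype_alleles(haplotypes, windows):
    """Code each window's SNP substring as a distinct integer allele.

    `haplotypes` is a list of phased haplotype strings over {'0', '1'};
    result[w][h] is the allele of haplotype h in window w.
    """
    coded = []
    for win in windows:
        seen, col = {}, []
        for hap in haplotypes:
            key = tuple(hap[i] for i in win)
            col.append(seen.setdefault(key, len(seen)))
        coded.append(col)
    return coded
```

Counting `len(set(col))` per window then gives the allele counts that the study compares across the three defining methods; the LD-clustering definition differs only in how the windows themselves are chosen.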
4250 A Comparative Soft Computing Approach to Supplier Performance Prediction Using GEP and ANN Models: An Automotive Case Study
Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari
Abstract:
In multi-echelon supply chain networks, optimal supplier selection significantly depends on the accuracy of suppliers’ performance prediction. Different multi-criteria decision-making methods, such as ANN, GA, fuzzy logic, and AHP, have previously been used to predict supplier performance, but the “black-box” characteristic of these methods is still a major concern to be resolved. Therefore, the primary objective of this paper is to implement an artificial intelligence-based gene expression programming (GEP) model and compare its prediction accuracy with that of an ANN. A full factorial design with a 95% confidence interval is initially applied to determine the appropriate set of criteria for supplier performance evaluation. A test-train approach is then utilized for the ANN and GEP exclusively. The training results are used to find the optimal network architecture, and the testing data determine the prediction accuracy of each method based on the root mean square error (RMSE) and the coefficient of determination (R²). The results of a case study conducted at Supplying Automotive Parts Co. (SAPCO), with more than 100 local and foreign supply chain members, revealed that, in comparison with ANN, gene expression programming is significantly preferable for predicting supplier performance, as indicated by the respective RMSE and R² values. Moreover, using GEP, a mathematical function was also derived, resolving the issue of the ANN black-box structure in modeling the performance prediction.
Keywords: supplier performance prediction, ANN, GEP, automotive, SAPCO
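The two accuracy measures used to rank GEP against the ANN can be computed as below; this is a plain-Python sketch, and the example numbers are made up.

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error between observed and predicted performance scores."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

On the held-out test set, the model with the lower RMSE and the higher R² is preferred, which is the comparison the case study reports in favour of GEP.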
4249 Experimental and CFD Simulation of the Jet Pump for Air Bubbles Formation
Authors: L. Grinis, N. Lubashevsky, Y. Ostrovski
Abstract:
A jet pump is a type of pump that accelerates the flow of a secondary fluid (driven fluid) by introducing a motive fluid with high velocity into a converging-diverging nozzle. Jet pumps are also known as eductors or ejectors depending on the motive phase: the ejector's motive fluid is gaseous, usually steam or air, while the eductor's motive fluid is a liquid, usually water. Jet pumps are devices that use air bubbles and are widely used in wastewater treatment processes. In this work, we discuss the characteristics of the jet pump and the computational simulation of this device. To find the optimal angle and depth for the air pipe, so as to achieve the maximal air volumetric flow rate, an experimental apparatus was constructed to ascertain the best geometrical configuration for this new type of jet pump. Using 3D printing technology, a series of jet pumps was printed and tested with the aim of maximizing the air flow rate as a function of the angle and depth of the air pipe insertion. The experimental results show a major difference of up to 300% in performance between the different pumps (ratio of air flow rate to supplied power), where the optimal geometric model has an insertion angle of 60° and an air pipe insertion depth ending at the center of the mixing chamber. The differences between the pumps were further explained by using CFD to better understand the factors that affect the air flow rate. The validity of the computational simulation and the corresponding assumptions has been proved experimentally, and the simulations showed a high degree of congruence with the results of the laboratory tests. This study demonstrates the potential of using the jet pump in many practical applications.
Keywords: air bubbles, CFD simulation, jet pump, applications
4248 Sustainability Assessment Tool for the Selection of Optimal Site Remediation Technologies for Contaminated Gasoline Sites
Authors: Connor Dunlop, Bassim Abbassi, Richard G. Zytner
Abstract:
Life cycle assessment (LCA) is a powerful tool established by the International Organization for Standardization (ISO) that can be used to assess the environmental impacts of a product or process from cradle to grave. Many studies utilize the LCA methodology within the site remediation field to compare various decontamination methods, including bioremediation, soil vapor extraction, or excavation and off-site disposal. However, to the authors' best knowledge, limited information is available in the literature on a sustainability tool that could be used to help with the selection of the optimal remediation technology. This tool, based on the LCA methodology, would consider site conditions as well as environmental, economic, and social impacts. Accordingly, this project was undertaken to develop a tool to assist with the selection of the optimal sustainable technology. Developing a proper tool requires a large amount of data. As such, data was collected from previous LCA studies looking at site remediation technologies. This step identified knowledge gaps and limitations within the project data. Next, utilizing the data obtained from the literature review and other organizations, an extensive LCA study is being completed following the ISO 14040 requirements. The initial technologies being compared include bioremediation, excavation with off-site disposal, and a no-remediation option for a generic gasoline-contaminated site. To complete the LCA study, the modelling software SimaPro is being utilized. A sensitivity analysis of the LCA results will also be incorporated to evaluate its impact on the overall results. Finally, the economic and social impacts associated with each option will then be reviewed to understand how they fluctuate at different sites. All the results will then be summarized, and an interactive tool using Excel will be developed to help select the best sustainable site remediation technology.
Preliminary LCA results show improved sustainability of decontamination for each technology compared to the no-remediation option for a gasoline-contaminated site. Sensitivity analyses are now being completed on site parameters, including soil type and transportation distances, to determine how the environmental impacts fluctuate at other contaminated gasoline locations as these parameters vary. Additionally, the social improvements and overall economic costs associated with each technology are being reviewed. Utilizing these results, the sustainability tool created to assist in the selection of the overall best option will be refined.
Keywords: life cycle assessment, site remediation, sustainability tool, contaminated sites
4247 Fast Generation of High-Performance Driveshafts: A Digital Approach to Automated Linked Topology and Design Optimization
Authors: Willi Zschiebsch, Alrik Dargel, Sebastian Spitzer, Philipp Johst, Robert Böhm, Niels Modler
Abstract:
In this article, we investigate an approach that digitally links individual development process steps, using the drive shaft of an aircraft engine as a representative example of a fiber polymer composite. Such high-performance, lightweight composite structures have many adjustable parameters that influence the mechanical properties, and only a combination of optimal parameter values can lead to energy-efficient lightweight structures. The development tools required for the Engineering Design Process (EDP) are often isolated solutions, and their compatibility with each other is limited. A digital framework is presented in this study which allows individual specialised tools to be linked via the generated data in such a way that automated optimization across programs becomes possible. This is demonstrated using the example of linking geometry generation with numerical structural analysis. The proposed framework demonstrates the feasibility of a complete digital approach to design optimization. The methodology shows promising potential for achieving optimal solutions in terms of mass, material utilization, eigenfrequency, and deformation under lateral load with less development effort. The development of such a framework is an important step towards promoting a more efficient design approach that can lead to stable and balanced results.
Keywords: digital linked process, composite, CFRP, multi-objective, EDP, NSGA-2, NSGA-3, TPE
4246 Tandem Concentrated Photovoltaic-Thermoelectric Hybrid System: Feasibility Analysis and Performance Enhancement Through Material Assessment Methodology
Authors: Shuwen Hu, Yuancheng Lou, Dongxu Ji
Abstract:
Photovoltaic (PV) power generation, as one of the most commercialized methods of utilizing solar power, can only convert a limited range of the solar spectrum into electricity, whereas the majority of the solar energy is dissipated as heat. To address this problem, a thermoelectric (TE) module is often integrated with the concentrated PV module for waste heat recovery and regeneration. In this research, a feasibility analysis is conducted for the tandem concentrated photovoltaic-thermoelectric (CPV-TE) hybrid system considering various operational parameters as well as TE material properties. Furthermore, the power output density of the CPV-TE hybrid system is maximized by selecting the optimal TE material through the application of a systematic assessment methodology. In the feasibility analysis, CPV-TE is found to be more advantageous than the sole CPV system, except under a high optical concentration ratio with a low cold-side convective coefficient. It is also shown that the effects of the TE material properties, including the Seebeck coefficient, thermal conductivity, and electrical resistivity, on the feasibility of CPV-TE interact with each other and might have opposite effects on the system performance under different operational conditions. In addition, the optimal TE material selected by the proposed assessment methodology can improve the system power output density by 227 W/m² under highly concentrated solar irradiance, hence broadening the feasible range of CPV-TE with respect to the optical concentration ratio.
Keywords: feasibility analysis, material assessment methodology, photovoltaic waste heat recovery, tandem photovoltaic-thermoelectric
4245 Cross-Country Mitigation Policies and Cross Border Emission Taxes
Authors: Massimo Ferrari, Maria Sole Pagliari
Abstract:
Pollution is a classic example of an economic externality: agents who produce it do not face direct costs from emissions. Therefore, there are no direct economic incentives for reducing pollution. One way to address this market failure would be to directly tax emissions. However, because emissions are global, governments might find it optimal to wait and let foreign countries tax emissions, so that they can enjoy the benefits of lower pollution without facing its direct costs. In this paper, we first document the empirical relation between pollution and economic output with static and dynamic regression methods. We show that there is a negative relation between aggregate output and the stock of pollution (measured as the stock of CO₂ emissions). This relationship is also highly non-linear, increasing at an exponential rate. In the second part of the paper, we develop and estimate a two-country, two-sector model for the US and the euro area. With this model, we aim at analyzing how the public sector should respond to higher emissions and what direct costs these policies might have. In the model, there are two types of firms: brown firms (which produce with a polluting technology) and green firms. Brown firms also produce an externality, CO₂ emissions, which has detrimental effects on aggregate output. As brown firms do not face direct costs from polluting, they have no incentive to reduce emissions. Notably, emissions in our model are global: the stock of CO₂ in the economy affects all countries, independently of where it is produced. This simplified economy captures the main trade-off between emissions and production, generating a classic market failure. According to our results, the current level of emissions reduces output by between 0.4 and 0.75%. Notably, these estimates lie at the upper bound of the distribution of those delivered by studies in the early 2000s.
To address the market failure, governments should step in by introducing taxes on emissions. With the tax, brown firms pay a cost for polluting and hence face an incentive to move to green technologies. Governments, however, might also adopt a beggar-thy-neighbour strategy. Reducing emissions is costly, as it moves production away from the 'optimal' mix of brown and green technology. Because emissions are global, a government could just wait for the other country to tackle climate change, reaping the benefits without facing any costs. We study how this strategic game unfolds and show three important results: first, cooperation is first-best optimal from a global perspective; second, countries face incentives to deviate from the cooperative equilibrium; third, tariffs on imported brown goods (the only retaliation policy in case of deviation from the cooperative equilibrium) are ineffective because the exchange rate moves to compensate. We finally study monetary policy when the costs of climate change rise and show that the monetary authority should react more strongly to deviations of inflation from its target.
Keywords: climate change, general equilibrium, optimal taxation, monetary policy
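The deviation incentive in the second result has the structure of a prisoner's dilemma, which a stylized two-country payoff sketch makes concrete; the numbers are purely illustrative, not model estimates.

```python
# Stylized two-country emission-tax game. Each country either taxes emissions
# (paying abatement cost c) or waits. Every taxing country lowers the global
# pollution stock, yielding benefit b to BOTH countries.
b, c = 4.0, 5.0  # per-taxer global benefit and private abatement cost (c > b)

def payoff(own_taxes: bool, other_taxes: bool) -> float:
    """One country's payoff given both countries' tax choices."""
    n_taxing = int(own_taxes) + int(other_taxes)
    return b * n_taxing - (c if own_taxes else 0.0)

# Cooperation (both tax) is first-best from a global perspective ...
world_coop = 2 * payoff(True, True)        # 2 * (2b - c)
world_wait = 2 * payoff(False, False)      # 0
# ... yet each country gains by unilaterally deviating to "wait":
deviation_gain = payoff(False, True) - payoff(True, True)   # c - b > 0
```

With c > b, waiting is a dominant strategy for each country even though joint taxation maximizes world welfare, which is exactly the free-riding tension the paper's strategic analysis formalizes.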
Procedia PDF Downloads 160
4244 A Numerical Hybrid Finite Element Model for Lattice Structures Using 3D/Beam Elements
Authors: Ahmadali Tahmasebimoradi, Chetra Mang, Xavier Lorang
Abstract:
Thanks to the additive manufacturing process, lattice structures are replacing traditional structures in the aeronautical and automobile industries. In order to evaluate the mechanical response of lattice structures, one has to resort to numerical techniques. Ansys is a globally well-known and trusted commercial software package that allows us to model lattice structures and analyze their mechanical responses using either solid or beam elements. In this software, a script may be used to systematically generate lattice structures of any size. On the one hand, solid elements allow us to correctly model the contact between the substrates (the supports of the lattice structure) and the lattice structure, the local plasticity, and the junctions of the microbeams. However, their computational cost increases rapidly with the size of the lattice structure. On the other hand, although beam elements reduce the computational cost drastically, they do not correctly model the contact between the lattice structure and the substrates or the junctions of the microbeams. In addition, local plasticity can no longer be captured, and the deformed shape of the lattice structure does not correspond to that obtained with 3D solid elements. In this work, motivated by the pros and cons of the 3D and beam models, a numerical hybrid model is presented for lattice structures to reduce the computational cost of the simulations while avoiding the aforementioned drawbacks of the beam elements. This approach uses solid elements for the junctions and beam elements for the microbeams connecting the corresponding junctions to each other. When the global response of the structure is linear, the results from the hybrid models are in good agreement with those from the 3D models for body-centered cubic with z-struts (BCCZ) and body-centered cubic without z-struts (BCC) lattice structures.
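The element-assignment idea behind the hybrid approach can be sketched as a simple geometric tagging rule. The function name, the spherical junction region, and the sample geometry below are illustrative assumptions, not the authors' Ansys implementation: sample points along a strut are tagged for solid elements when they lie inside a junction region around a lattice node, and for beam elements otherwise.

```python
import numpy as np

def tag_elements(points, junction_nodes, junction_radius):
    """Tag each sampling point along a microbeam: 'solid' near a
    junction node, 'beam' in the mid-span (hypothetical rule)."""
    tags = []
    for p in points:
        d = min(np.linalg.norm(p - n) for n in junction_nodes)
        tags.append("solid" if d <= junction_radius else "beam")
    return tags

# One BCC-style diagonal strut from (0,0,0) to (1,1,1), sampled at
# 5 points; the two strut ends are the junction nodes.
nodes = [np.zeros(3), np.ones(3)]
pts = [np.full(3, t) for t in np.linspace(0.0, 1.0, 5)]
tags = tag_elements(pts, nodes, junction_radius=0.5)
```

Only the mid-span point falls outside both junction spheres here, so the strut ends get solid elements and the middle gets a beam element, mirroring the hybrid layout described above.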
However, the hybrid models have difficulty converging when large deformations and local plasticity are significant in the BCCZ structures. Furthermore, the effect of the junction size on the hybrid models' results is investigated. For BCCZ lattice structures, the results are not affected by the junction size. This also holds for BCC lattice structures as long as the ratio of the junction size to the diameter of the microbeams is greater than 2. The hybrid model can also take geometric defects into account. As a demonstration, the point clouds of two lattice structures are parametrized in a platform called LATANA (LATtice ANAlysis) developed by IRT-SystemX. In this process, for each microbeam of the lattice structures, an ellipse is fitted to capture the effect of shape variation and roughness. Each ellipse is represented by three parameters: semi-major axis, semi-minor axis, and angle of rotation. Given the parameters of the ellipses, the lattice structures are constructed in SpaceClaim (ANSYS) using the geometrical hybrid approach. The results show a negligible discrepancy between the hybrid and 3D models, while the computational cost of the hybrid model is lower than that of the 3D model.
Keywords: additive manufacturing, Ansys, geometric defects, hybrid finite element model, lattice structure
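Extracting the three ellipse parameters (semi-major axis, semi-minor axis, angle of rotation) from a measured cross-section can be sketched with a PCA-based fit of the boundary points. LATANA's actual fitting procedure is not described in the abstract, so this is only an assumed approach; for points sampled uniformly in the parametric angle of an ellipse, the variance along each principal axis equals (semi-axis)²/2.

```python
import numpy as np

def ellipse_params(points_2d):
    """Fit (semi-major, semi-minor, angle) to a 2D cross-section via
    PCA of the centered points (assumed, illustrative method)."""
    centered = points_2d - points_2d.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending order
    order = np.argsort(eigvals)[::-1]           # largest axis first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # Variance along a semi-axis of length s is s**2 / 2 for points
    # uniform in the parametric angle, hence the factor of 2 below.
    semi_major, semi_minor = np.sqrt(2.0 * eigvals)
    angle = np.arctan2(eigvecs[1, 0], eigvecs[0, 0])
    return semi_major, semi_minor, angle
```

The recovered angle is only defined modulo pi (an ellipse's major axis has no preferred direction), which any downstream reconstruction step would need to account for.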
Procedia PDF Downloads 112