Search results for: incompressible flow simulation
5242 Vibration Analysis and Optimization Design of Ultrasonic Horn
Authors: Kuen Ming Shu, Ren Kai Ho
Abstract:
An ultrasonic horn amplifies vibration amplitude and reduces resonant impedance in an ultrasonic system. Its primary function is to amplify deformation or velocity during vibration and focus ultrasonic energy on a small area, making it a crucial component in the design of ultrasonic vibration systems. There are five common design methods for ultrasonic horns: the analytical method, the equivalent circuit method, equal mechanical impedance, the transfer matrix method, and the finite element method. In addition, the general optimization design process changes geometric parameters to improve a single performance measure; consequently, the relation between parameters and objectives cannot be identified. A good optimization design, however, must establish the relationship between input and output parameters so that the designer can trade off parameters according to different performance objectives and obtain an optimized design. In this study, an ultrasonic horn provided by Maxwide Ultrasonic Co., Ltd. was used as the baseline for the optimized horn. The ANSYS finite element analysis (FEA) software was used to simulate the distribution of horn amplitudes and the natural frequency. The simulated and measured frequencies were similar, verifying the accuracy of the simulation. ANSYS DesignXplorer was then used to perform response surface optimization, which reveals the relation between parameters and objectives. This method can therefore replace the traditional experience-based or trial-and-error design approach, reducing material costs and design cycles.
Keywords: horn, natural frequency, response surface optimization, ultrasonic vibration
Procedia PDF Downloads 117
5241 Internet of Things Networks: Denial of Service Detection in Constrained Application Protocol Using Machine Learning Algorithm
Authors: Adamu Abdullahi, On Francisca, Saidu Isah Rambo, G. N. Obunadike, D. T. Chinyio
Abstract:
The paper discusses the threat of Denial of Service (DoS) attacks on the Constrained Application Protocol (CoAP) in Internet of Things (IoT) networks. As billions of IoT devices are expected to be connected to the internet in the coming years, these devices are vulnerable to attacks that disrupt their functioning. This research tackles the issue by applying mixed qualitative and quantitative methods for feature selection and extraction, together with clustering algorithms, to detect DoS attacks on CoAP using a machine learning algorithm (MLA). The main objective is to enhance the security scheme for CoAP in the IoT environment by analyzing the nature of DoS attacks and identifying a new set of features for detecting them in the IoT network environment. The aim is to demonstrate the effectiveness of the MLA in detecting DoS attacks and to compare it with conventional intrusion detection systems for securing CoAP in the IoT environment. Findings: The research identifies the appropriate node at which to detect DoS attacks in the IoT network and demonstrates how to detect the attacks through the MLA. The k-means algorithm scored the highest detection accuracy in both the classification and network-simulation evaluations during training and testing, with the network-simulation platform achieving an overall accuracy of 99.93%. The work also reviews conventional intrusion detection systems for securing CoAP in the IoT environment and discusses the DoS security issues associated with CoAP.
Keywords: algorithm, CoAP, DoS, IoT, machine learning
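As an illustration of the clustering approach described above, a minimal k-means sketch can separate normal CoAP traffic from a flood-style DoS pattern. This is illustrative only: the feature set (requests/sec, mean payload bytes) and all values below are hypothetical, not the paper's dataset or implementation.

```python
import random

def kmeans(points, k=2, iters=50, seed=1):
    """Minimal k-means: returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centroids[c])))
        # Update step: recompute each centroid as the mean of its members.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, labels

# Hypothetical CoAP traffic features: (requests/sec, mean payload bytes).
normal = [(5, 40), (7, 38), (6, 42), (4, 41), (8, 39)]
attack = [(220, 12), (250, 10), (240, 11)]
points = normal + attack
_, labels = kmeans(points, k=2)
# Flag the cluster with the higher mean request rate as the DoS cluster.
rates = {c: [p[0] for p, l in zip(points, labels) if l == c] for c in set(labels)}
dos_cluster = max(rates, key=lambda c: sum(rates[c]) / len(rates[c]))
flags = [l == dos_cluster for l in labels]
```

In a real deployment the features would come from captured CoAP traffic at the detection node, and the flagged cluster would feed an alerting pipeline.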
Procedia PDF Downloads 80
5240 The Effect of the Flow Pipe Diameter on the Rheological Behavior of a Polymeric Solution (CMC)
Authors: H. Abchiche, M. Mellal
Abstract:
The aim of this work is to study the parameters that influence the rheological behavior of a complex fluid (a sodium carboxymethylcellulose solution) on a capillary rheometer. A test rig was built so that the diameter of the test conduits could be varied. The results show that the conduit diameter has a remarkable effect on the rheological response.
Keywords: Bingham fluid, CMC, cylindrical conduit, rheological behavior
Procedia PDF Downloads 332
5239 A Two-Stage Bayesian Variable Selection Method with the Extension of Lasso for Geo-Referenced Data
Authors: Georgiana Onicescu, Yuqian Shen
Abstract:
Due to the complex nature of geo-referenced data, multicollinearity among risk factors is a commonly encountered issue in public health spatial studies; it inflates the variance in the regression analysis and thus lowers parameter estimation accuracy. To address this issue, we propose a two-stage variable selection method that extends the least absolute shrinkage and selection operator (Lasso) to the Bayesian spatial setting, investigating the impact of risk factors on health outcomes. Specifically, in stage I, we performed variable selection using Bayesian Lasso and several other variable selection approaches. Then, in stage II, we performed model selection with only the variables selected in stage I and again compared the methods. To evaluate the performance of the two-stage variable selection methods, we conducted a simulation study with different distributions for the risk factors, using geo-referenced count data as the outcome and Michigan as the research region. We considered the cases when all candidate risk factors are independently normally distributed or follow a multivariate normal distribution with different correlation levels. Two other Bayesian variable selection methods, a binary indicator and the combination of a binary indicator with Lasso, were compared as alternatives. The simulation results indicate that the proposed two-stage Bayesian Lasso variable selection method performs best in both the independent and dependent cases considered. Compared with the one-stage approach and the two alternative methods, the two-stage Bayesian Lasso approach provides the highest estimation accuracy in all scenarios considered.
Keywords: Lasso, Bayesian analysis, spatial analysis, variable selection
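A toy version of the two-stage idea (stage I: Lasso shrinks and selects; stage II: refit on the selected variables only) can be sketched in the plain, non-spatial, non-Bayesian setting. This is a coordinate-descent Lasso on synthetic collinear data, not the paper's Bayesian spatial model; all data and the penalty value are illustrative assumptions.

```python
import numpy as np

def lasso_cd(X, y, lam, iters=200):
    """Lasso via cyclic coordinate descent with soft-thresholding."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual excluding j
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / (X[:, j] @ X[:, j])
    return beta

rng = np.random.default_rng(0)
n = 200
x0 = rng.normal(size=n)
x1 = 0.9 * x0 + 0.1 * rng.normal(size=n)   # nearly collinear with x0
x2 = rng.normal(size=n)                    # irrelevant noise variable
X = np.column_stack([x0, x1, x2])
y = 2.0 * x0 + 0.1 * rng.normal(size=n)

# Stage I: variable selection with Lasso.
beta = lasso_cd(X, y, lam=50.0)
selected = [j for j in range(X.shape[1]) if abs(beta[j]) > 1e-6]

# Stage II: refit an unpenalized least-squares model on the selected variables.
Xs = X[:, selected]
beta_ols, *_ = np.linalg.lstsq(Xs, y, rcond=None)
```

The two-stage refit removes the shrinkage bias that the Lasso penalty introduces in stage I, which is the same motivation the abstract gives for its Bayesian analogue.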
Procedia PDF Downloads 144
5238 The Immunosuppressive Effects of Silymarin with Rapamycin on the Proliferation and Apoptosis of T Cells
Authors: Nahid Eskandari, Marjan Ghagozolo, Ehsan Almasi
Abstract:
Introduction: Silymarin, a polyphenolic flavonoid derived from milk thistle (Silybum marianum), is known to have antioxidant, immunomodulatory, antiproliferative, antifibrotic, and antiviral effects. The goal of this study was to determine the immunosuppressive effect of silymarin on the proliferation and apoptosis of human T cells in comparison with rapamycin and FK506. Methods: Peripheral blood mononuclear cells (PBMCs) from healthy individuals were activated with Con A (5 µg/ml) and then treated with silymarin, rapamycin, and FK506 at various concentrations (0.001, 0.01, 0.1, 1, 10, 100 and 200 µM) for 5 days. PBMC proliferation was examined using the CFSE assay, and the concentration that inhibited 50% of cell proliferation (IC50) was determined for each treatment. For the apoptosis assay, PBMCs were activated with Con A and treated with the IC50 dose of silymarin, rapamycin, or FK506 for 5 days; cell apoptosis was then analysed by FITC-annexin V/PI staining and flow cytometry. The effects of silymarin, rapamycin, and FK506 on the activation of the PARP (poly ADP-ribose polymerase) pathway in PBMCs stimulated with Con A and treated with the IC50 dose of each drug for 5 days were evaluated using the PathScan cleaved PARP sandwich ELISA kit. Results: This study showed that silymarin inhibited T cell proliferation in vitro. Moreover, at 100 µM (P < 0.001) and 200 µM (P < 0.001), silymarin had a stronger inhibitory effect on T cell proliferation than FK506 and rapamycin. The effective doses (IC50) of silymarin, FK506, and rapamycin were 3×10⁻⁵ µM, 10⁻⁸ µM, and 10⁻⁶ µM, respectively. The inhibitory effect of silymarin, FK506, and rapamycin on T cell proliferation was not due to cytotoxicity, and none of these drugs at the IC50 concentration affected the level of cleaved PARP.
Conclusion: Silymarin could be a good candidate for immunosuppressive therapy in certain medical conditions, with superior efficacy and lower toxicity compared with other immunosuppressive drugs.
Keywords: silymarin, immunosuppressive effect, rapamycin, immunology
Procedia PDF Downloads 270
5237 A Techno-Economic Simulation Model to Reveal the Relevance of Construction Process Impact Factors for External Thermal Insulation Composite System (ETICS)
Authors: Virgo Sulakatko
Abstract:
The reduction of the energy consumption of the built environment has been one of the topics tackled by the European Commission during the last decade. Increased energy efficiency requirements have raised the renovation rate of apartment buildings covered with External Thermal Insulation Composite System (ETICS). Due to the fast and optimized application process, quality assurance depends to a large extent on the specific activities of artisans and is often not controlled. On-site degradation factors (DF) technically affect the façade and cause future costs to the owner. Besides thermal conductivity, the building envelope needs to ensure mechanical resistance and stability, fire, noise, corrosion and weather protection, and long-term durability. As the shortcomings of the construction phase become problematic only after some years, the overall value of the renovation is reduced. Previous work on the subject has identified and rated the relevance of DFs to the technical requirements and developed a method to reveal the economic value of repair works. The future costs can be traded off against increased quality assurance during the construction process. The proposed framework describes the joint simulation of the technical importance and economic value of the on-site DFs of ETICS. The model provides new knowledge for improving resource allocation during the construction process by identifying and diminishing the most relevant degradation factors, increasing the economic value to the owner.
Keywords: ETICS, construction technology, construction management, life cycle costing
Procedia PDF Downloads 419
5236 Evolutionary Swarm Robotics: Dynamic Subgoal-Based Path Formation and Task Allocation for Exploration and Navigation in Unknown Environments
Authors: Lavanya Ratnabala, Robinroy Peter, E. Y. A. Charles
Abstract:
This research paper addresses the challenges of exploration and navigation in unknown environments from an evolutionary swarm robotics perspective. Path formation plays a crucial role in enabling cooperative swarm robots to accomplish these tasks. The paper presents a method called sub-goal-based path formation, which establishes a path between two different locations by exploiting visually connected sub-goals. Simulation experiments conducted in the Argos simulator demonstrate the successful formation of paths in the majority of trials. Furthermore, the paper tackles the problem of inter-collision (traffic) among a large number of robots engaged in path formation, which negatively impacts the performance of the sub-goal-based method. To mitigate this issue, a task allocation strategy is proposed, leveraging local communication protocols and light signal-based communication. The strategy evaluates the distance between points and determines the required number of robots for the path formation task, reducing unwanted exploration and traffic congestion. The performance of the sub-goal-based path formation and task allocation strategy is evaluated by comparing path length, time, and resource reduction against the A* algorithm. The simulation experiments demonstrate promising results, showcasing the scalability, robustness, and fault tolerance characteristics of the proposed approach.
Keywords: swarm, path formation, task allocation, Argos, exploration, navigation, sub-goal
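For context on the A* baseline used in the comparison above, a minimal grid-based A* planner looks like the following. This is an illustrative sketch on a toy map, not the simulator setup or metric definitions from the paper.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; 0 = free, 1 = obstacle.
    Returns shortest path length in steps, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start)]          # (f = g + h, g, cell)
    best_g = {start: 0}
    while open_set:
        f, g, cur = heapq.heappop(open_set)
        if cur == goal:
            return g
        if g > best_g.get(cur, float("inf")):  # skip stale queue entries
            continue
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [
    [0, 0, 0, 0],
    [1, 1, 1, 0],   # wall with a single gap on the right
    [0, 0, 0, 0],
]
length = astar(grid, (0, 0), (2, 0))
```

The paper's comparison measures how close the swarm's emergent path comes to such a centrally computed optimum in length and time.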
Procedia PDF Downloads 42
5235 Beam Coding with Orthogonal Complementary Golay Codes for Signal to Noise Ratio Improvement in Ultrasound Mammography
Authors: Y. Kumru, K. Enhos, H. Köymen
Abstract:
In this paper, we report experimental results on using complementary Golay coded signals at 7.5 MHz to detect breast microcalcifications of 50 µm size. Simulations using complementary Golay coded signals show perfect consistency with the experimental results, confirming the improved signal-to-noise ratio for complementary Golay coded signals. To improve the detection of microcalcifications, orthogonal complementary Golay sequences, whose low cross-correlation minimizes interference, are used as coded signals and compared with a tone burst pulse of equal energy in terms of resolution under weak signal conditions. The measurements are conducted using an experimental ultrasound research scanner, the Digital Phased Array System (DiPhAS) with 256 channels, and a phased array transducer with a 7.5 MHz center frequency; the experimental results are validated with the Field II simulation software. In addition, to investigate the superiority of coded signals in terms of resolution, a multipurpose tissue-equivalent phantom containing a series of monofilament nylon targets, 240 µm in diameter, and cyst-like objects with an attenuation of 0.5 dB/(MHz·cm) is used in the experiments. We obtained ultrasound images of the monofilament nylon targets for the evaluation of resolution. Simulation and experimental results show that closely positioned small targets can be differentiated with increased success by using coded excitation under very weak signal conditions.
Keywords: coded excitation, complementary Golay codes, DiPhAS, medical ultrasound
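The defining property of a complementary Golay pair, which underlies the SNR gain of coded excitation described above, is that the sidelobes of the two sequences' aperiodic autocorrelations cancel exactly, leaving a single peak of height 2N at zero lag. This can be checked numerically with a generic sketch, independent of the paper's 7.5 MHz setup:

```python
def golay_pair(n):
    """Recursively build a complementary Golay pair of length 2**n
    (a -> a|b, b -> a|-b, starting from ([1], [1]))."""
    a, b = [1], [1]
    for _ in range(n):
        a, b = a + b, a + [-x for x in b]
    return a, b

def acorr(x, k):
    """Aperiodic autocorrelation of sequence x at lag k."""
    return sum(x[i] * x[i + k] for i in range(len(x) - k))

a, b = golay_pair(3)   # length-8 complementary pair
N = len(a)
sums = [acorr(a, k) + acorr(b, k) for k in range(N)]
# Complementary property: all sidelobes cancel, leaving 2N at zero lag only.
```

In pulse compression, the two coded transmissions are correlated with their own codes and summed, so the received point response inherits this sidelobe-free shape.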
Procedia PDF Downloads 263
5234 Sedimentology, Diagenesis and Evaluation of High-Quality Reservoirs of Coarse Clastic Rocks in Nearshore Deep Waters in the Dongying Sag, Bohai Bay Basin
Authors: Kouassi Louis Kra
Abstract:
The nearshore deep-water gravity flow deposits on the northern steep slope of the Dongying depression, Bohai Bay basin, have been acknowledged as important reservoirs in the rift lacustrine basin. These deep strata, termed coarse clastic sediments and deposited at the root of the slope, underwent complex depositional processes and a wide range of diagenetic events, which makes high-quality reservoir prediction difficult. The reservoir formation mechanism was deciphered through an integrated study of seismic interpretation, sedimentary analysis, petrography, core samples, wireline logging data, 3D seismic data, and lithological data. The Geoframe software was used to analyze the 3D seismic data, interpret the stratigraphy, and build a sequence stratigraphic framework. Thin-section identification and point counts were performed to assess the reservoir characteristics. Schlumberger's PetroMod 1D software was utilized for the simulation of the burial history. CL and SEM analyses were performed to reveal diagenesis sequences. Backscattered electron (BSE) images were recorded to define the textural relationships between diagenetic phases. The results show that the nearshore steep slope deposits mainly consist of conglomerate, gravel sandstone, pebbly sandstone, and fine sandstone interbedded with mudstone. The reservoir is characterized by low porosity and ultra-low permeability. The diagenetic reactions include compaction; precipitation of calcite, dolomite, kaolinite, and quartz cement; and dissolution of feldspars and rock fragments. The main types of reservoir space are primary intergranular pores, residual intergranular pores, intergranular dissolved pores, intragranular dissolved pores, and fractures. There are three distinct anomalous high-porosity zones in the reservoir. Overpressure and early hydrocarbon filling are the main reasons for the development of abnormal secondary pores.
Sedimentary facies control the formation of the high-quality reservoir, and oil and gas filling preserves secondary pores from late carbonate cementation.
Keywords: Bohai Bay, Dongying Sag, deep strata, formation mechanism, high-quality reservoir
Procedia PDF Downloads 135
5233 Combating Plastic from Kanpur City, Uttar Pradesh, India, Entering the Marine Environment
Authors: Arvind Kumar
Abstract:
The city of Kanpur is located in the terrestrial plain on the bank of the river Ganges and is the second largest city in the state of Uttar Pradesh. The city generates approximately 1400-1600 tons per day of municipal solid waste (MSW). Kanpur has been known as a major point- and non-point-source pollution hotspot for the river Ganges. The city is a major industrial hub, probably the largest in the state, catering to the manufacturing and recycling of plastic and other dry waste streams. There are 4 to 5 major drains flowing across the city, which receive a significant quantity of leaked waste that subsequently enters the Ganges flow and is carried to the Bay of Bengal. A river-to-sea flow approach has been established to account for waste leaked into urban drains, leading to the build-up of marine litter. Throughout its journey, the river accumulates plastic (macro, meso, and micro) from various sources and transports it towards the sea. The Ganges network forms the second-largest plastic-polluting catchment in the world, discharging over 0.12 million tonnes of plastic into marine ecosystems per year, and is among 14 continental rivers into which over a quarter of global waste is discarded. 3.150 kilotons of plastic waste is generated in Kanpur, of which 10%-13% leaks into the local drains and water flow systems. With the support of Kanpur Municipal Corporation, a 1 TPD-capacity material recovery facility (MRF) for drain waste management was established at Krishna Nagar, Kanpur, and a German startup, Plastic Fisher, was identified to provide a solution for capturing the drain waste and recycling it sustainably with a circular economy approach. The team at Plastic Fisher conducted joint surveys and identified locations on 3 drains in Kanpur using GIS maps developed during the survey. It suggested putting floating boom barriers made of a low-cost material across the drains, which reduced their cost to only 2000 INR per barrier.
The project was built on a self-sustaining financial model, developing and adopting a cost-efficient, socially inclusive approach. The project recommends the use of low-cost floating boom barriers for capturing waste from drains; these involve a one-time cost and no operational cost. Manpower is engaged in fishing out and capturing the immobilized waste, with salaries paid by Plastic Fisher. The captured material is sun-dried and transported to a designated place acting as the MRF, where the shed and power connection are provided by the city municipal corporation. Material aggregation, baling, and transportation costs to end-users are borne by Plastic Fisher as well.
Keywords: Kanpur, marine environment, drain waste management, plastic fisher
Procedia PDF Downloads 71
5232 Using Business Simulations and Game-Based Learning for Enterprise Resource Planning Implementation Training
Authors: Carin Chuang, Kuan-Chou Chen
Abstract:
An Enterprise Resource Planning (ERP) system is an integrated information system that supports the seamless integration of all the business processes of a company. Implementing an ERP system can increase efficiency and decrease costs while helping improve productivity. Many organizations, including large, medium, and small-sized companies, have adopted ERP systems for decades. Although an ERP system can bring competitive advantages to an organization, the lack of a proper training approach for ERP implementation remains a major concern. Organizations understand the importance of ERP training in adequately preparing managers and users. The low return on investment of ERP training, however, makes it difficult for knowledge workers to transfer what is learned in training to their jobs in the workplace. Inadequate and inefficient ERP training limits the value realization and success of an ERP system. There is thus a need for profound change and innovation in ERP training, both in industry workplaces and in Information Systems (IS) education in academia. An innovative ERP training approach can improve users' knowledge of business processes and hands-on skills in mastering an ERP system; it can also serve as educational material for IS students in universities. The purpose of this study is to examine the use of ERP simulation games via the ERPsim system to train IS students in ERP implementation. ERPsim is a business simulation game developed by the ERPsim Lab at HEC Montréal that runs on a real-life SAP (Systems, Applications and Products) ERP system. The training uses the ERPsim system as the tool for internet-based simulation games and is designed as online student competitions during class. The competitions involve student teams, facilitated by the instructor, and put the students' business skills to the test via intensive simulation games on a real-world SAP ERP system.
The teams run the full business cycle of a manufacturing company, interacting with suppliers, vendors, and customers by sending and receiving orders, delivering products, and completing the entire cash-to-cash cycle. To learn a range of business skills, each student adopts an individual business role and makes business decisions around the products and business processes. Experience gathered over rounds of business simulations shows that simulations reduce the risk associated with making mistakes, which helps learners build self-confidence in problem-solving. In addition, learners' reflections on their mistakes help uncover the root causes of the problems and further improve the efficiency of the training. ERP instructors teaching with the innovative approach report significant improvements in student evaluation, learner motivation, attendance, and engagement, as well as increased learner technology competency. The findings of the study can provide ERP instructors with guidelines for creating an effective learning environment and can be transferred to a variety of other educational fields in which trainers are migrating towards a more active learning approach.
Keywords: business simulations, ERP implementation training, ERPsim, game-based learning, instructional strategy, training innovation
Procedia PDF Downloads 139
5231 A Molecular-Level Study of Combining Waste Polymer and High-Concentration Waste Cooking Oil as an Additive for the Reclamation of Aged Asphalt Pavement
Authors: Qiuhao Chang, Liangliang Huang, Xingru Wu
Abstract:
In the United States, over 90% of roads are paved with asphalt. Aging is the most serious problem causing the deterioration of asphalt pavement. Waste cooking oils (WCOs) have been found to restore the properties of aged asphalt and promote the reuse of aged asphalt pavement. In our previous study, it was found that the optimal WCO concentration for restoring an aged asphalt sample lies in the range of 10-15 wt% of the aged asphalt sample. Once the WCO concentration exceeds 15 wt%, further increases weaken some important properties of the asphalt sample, such as cohesion energy density, surface free energy density, bulk modulus, and shear modulus. However, maximizing the utilization of WCO can create environmental and economic benefits. Therefore, this study proposes a new idea, not reported before, of using waste polymer as a second additive to restore WCO-modified asphalt containing a high concentration of WCO (15-25 wt%). In this way, both waste polymer and WCO can be utilized. Molecular dynamics simulation is used to study the effect of waste polymer on the properties of WCO-modified asphalt and to understand the corresponding mechanism at the molecular level. The radial distribution function, self-diffusion, cohesion energy density, surface free energy density, bulk modulus, shear modulus, and the adhesion energy between asphalt and aggregate are analyzed to validate the feasibility of combining waste polymer and WCO to restore aged asphalt. Finally, the optimal concentrations of waste polymer and WCO are determined.
Keywords: reclaim aged asphalt pavement, waste cooking oil, waste polymer, molecular dynamics simulation
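One of the analyses listed above, the radial distribution function, is built from a pair-distance histogram of the simulated configuration. As a minimal illustration of locating coordination shells (on a hypothetical simple cubic lattice, not the asphalt/WCO/polymer system simulated in the paper):

```python
import itertools
import math
from collections import Counter

# Toy configuration: a 4x4x4 simple cubic lattice with unit spacing.
atoms = list(itertools.product(range(4), repeat=3))

# All pairwise distances (the raw ingredient of an RDF before shell-volume
# normalization).
dists = [math.dist(p, q) for p, q in itertools.combinations(atoms, 2)]

shells = Counter(round(d, 3) for d in dists)   # bin distances into shells
first_shell = min(shells)                      # nearest-neighbor distance
nn_pairs = shells[first_shell]                 # pair count in the first shell
```

In a real MD analysis the histogram is normalized by the ideal-gas shell volume 4πr²Δr·ρ to give g(r); peaks in g(r) for WCO or polymer molecules around asphalt components indicate preferential association.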
Procedia PDF Downloads 220
5230 Technical and Economic Analysis of Smart Micro-Grid Renewable Energy Systems: An Applicable Case Study
Authors: M. A. Fouad, M. A. Badr, Z. S. Abd El-Rehim, Taher Halawa, Mahmoud Bayoumi, M. M. Ibrahim
Abstract:
Renewable energy-based micro-grids are presently attracting significant attention, and the smart grid is considered a reliable solution for the expected deficiency in the power required from future power systems. The purpose of this study is to determine the optimal component sizes of a micro-grid, investigating its technical and economic performance along with its environmental impact. The micro-grid load consists of two small factories; both on-grid and off-grid modes are considered. The micro-grid includes photovoltaic cells, a back-up diesel generator, wind turbines, and a battery bank. The estimated peak load is 76 kW. The system is modeled and simulated with the MATLAB/Simulink tool to identify the technical issues of the renewable power generation units. To evaluate the system economy, two criteria are used: the net present cost (NPC) and the cost of generated electricity (COE). The most feasible system components for the selected application are obtained, based on the required parameters, using the HOMER simulation package. The results show that a Wind/Photovoltaic (W/PV) on-grid system is more economical than a Wind/Photovoltaic/Diesel/Battery (W/PV/D/B) off-grid system, with a COE of 0.266 $/kWh and 0.316 $/kWh, respectively. When the cost of carbon dioxide emissions is considered, the off-grid system becomes competitive with the on-grid system, as the COE is found to be 0.256 $/kWh and 0.266 $/kWh for the on-grid and off-grid systems, respectively.
Keywords: renewable energy sources, micro-grid system, modeling and simulation, on/off grid system, environmental impacts
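The COE figures quoted above come from annualizing the net present cost over the project lifetime, as HOMER-style tools do. The arithmetic can be sketched as follows; the NPC values, discount rate, lifetime, and annual energy below are illustrative assumptions, since the abstract does not report them.

```python
def crf(i, n):
    """Capital recovery factor: converts a net present cost into an
    equivalent uniform annual cost over n years at discount rate i."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def cost_of_energy(npc, i, n, annual_energy_kwh):
    """COE ($/kWh) = annualized net present cost / annual energy served."""
    return npc * crf(i, n) / annual_energy_kwh

# Hypothetical figures (not from the paper): 25-year project, 6% discount rate.
npc_on_grid = 950_000       # $ net present cost, W/PV on-grid
npc_off_grid = 1_130_000    # $ net present cost, W/PV/D/B off-grid
energy = 280_000            # kWh served per year

coe_on = cost_of_energy(npc_on_grid, 0.06, 25, energy)
coe_off = cost_of_energy(npc_off_grid, 0.06, 25, energy)
```

Adding a carbon price shifts the comparison by increasing the NPC of whichever configuration emits more, which is how the off-grid system becomes competitive in the study's emission-cost scenario.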
Procedia PDF Downloads 270
5229 Optimal Design of Storm Water Networks Using Simulation-Optimization Technique
Authors: Dibakar Chakrabarty, Mebada Suiting
Abstract:
Rapid urbanization coupled with changes in land use pattern results in increasing peak discharge and a shortening of the catchment time of concentration. The consequence is floods, which often inundate roads and inhabited areas of cities and towns. Management of storm water resulting from rainfall has therefore become an important issue for municipal bodies. Proper management of storm water obviously includes adequate design of storm water drainage networks, which is a costly exercise. Least-cost design of storm water networks thus assumes significance, particularly when the available funds are limited. Optimal design of a storm water system is a difficult task, as it involves the design of various components such as open or closed conduits, storage units, and pumps. In this paper, a methodology for least-cost design of storm water drainage systems is proposed, consisting of coupling a storm water simulator with an optimization method. The simulator used in this study is EPA's Storm Water Management Model (SWMM), which is linked with a Genetic Algorithm (GA) optimization method. The model proposed here is a mixed-integer nonlinear optimization formulation that minimizes the sectional areas of the open conduits of storm water networks while satisfactorily conveying the runoff resulting from rainfall to the network outlet. Performance evaluations of the developed model show that the proposed method can be used for cost-effective design of open-conduit-based storm water networks.
Keywords: genetic algorithm (GA), optimal design, simulation-optimization, storm water network, SWMM
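A stripped-down version of the simulation-optimization loop can be sketched as follows: a one-line Manning full-flow capacity formula stands in for the SWMM simulation, a penalty enforces the hydraulic constraint, and a toy GA searches a single conduit diameter. All numbers and the cost model are illustrative assumptions, not the paper's formulation.

```python
import math
import random

def capacity(d, n=0.013, slope=0.001):
    """Full-flow capacity (m^3/s) of a circular conduit via Manning's equation."""
    area = math.pi * d * d / 4
    rh = d / 4                        # hydraulic radius of a full circular pipe
    return (1 / n) * area * rh ** (2 / 3) * math.sqrt(slope)

Q_DESIGN = 0.5                                   # m^3/s design runoff (illustrative)
cost = lambda d: 120 * d ** 2                    # $/m, toy cost-vs-size model
fitness = lambda d: cost(d) + (1e6 if capacity(d) < Q_DESIGN else 0.0)  # penalty

rng = random.Random(42)
pop = [rng.uniform(0.1, 3.0) for _ in range(30)]         # initial diameters, m
for _ in range(100):
    pop.sort(key=fitness)
    parents = pop[:10]                                   # elitist selection
    children = []
    while len(children) < 20:
        p1, p2 = rng.sample(parents, 2)
        child = (p1 + p2) / 2 + rng.gauss(0, 0.05)       # crossover + mutation
        children.append(min(max(child, 0.05), 3.0))
    pop = parents + children

best = min(pop, key=fitness)
# Analytic minimum feasible diameter for comparison (capacity grows as d**(8/3)).
d_min = (Q_DESIGN * 0.013 * 4 ** (2 / 3)
         / ((math.pi / 4) * math.sqrt(0.001))) ** (3 / 8)
```

In the actual methodology, each fitness evaluation would invoke a full SWMM run over the whole network instead of the closed-form capacity check.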
Procedia PDF Downloads 248
5228 Development of Vacuum Planar Membrane Dehumidifier for Air-Conditioning
Authors: Chun-Han Li, Tien-Fu Yang, Chen-Yu Chen, Wei-Mon Yan
Abstract:
The conventional dehumidification method in air-conditioning systems mostly uses a cooling coil to remove moisture from the air by cooling the supply air down below its dew point temperature. The supply air then needs to be reheated to meet the set indoor condition, which consumes a considerable amount of energy and degrades the coefficient of performance of the system. If the dehumidification and cooling processes are separated and operated independently, indoor conditions can be controlled more efficiently. Decoupling dehumidification and cooling in heating, ventilation and air conditioning systems by means of membrane dehumidification is therefore one of the key technologies for the next generation. The membrane dehumidification method has the advantages of low cost and low energy consumption: it exploits the pore size and hydrophilicity of the membrane to transfer water vapor by mass transfer, and the moisture in the supply air is removed by the driving force across the membrane. The process saves the latent load otherwise used to condense water, making energy use more efficient because no heat transfer is involved. In this work, performance measurements, including the permeability and selectivity of water vapor and air, were conducted for composite and commercial membranes. Based on the measured data, a suitable dehumidification membrane can be chosen for designing the flow channel length and components of the planar dehumidifier. The vacuum membrane dehumidification system was set up to examine the effects of temperature, humidity, vacuum pressure, flow rate, the coefficient of performance, and other parameters on the dehumidification efficiency. The results showed that the commercial Nafion membrane has better water vapor permeability and selectivity, making it suitable for separating water vapor from air.
Nafion membranes therefore hold promising potential for the dehumidification process.
Keywords: vacuum membrane dehumidification, planar membrane dehumidifier, water vapour and air permeability, air conditioning
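Permeance and selectivity, the quantities measured above, are simple ratios of measured fluxes to driving pressure differences. A sketch with hypothetical flux numbers (not the paper's measurements for Nafion or the composite membranes):

```python
def permeance(molar_flux, dp):
    """Permeance (mol m^-2 s^-1 Pa^-1) from measured molar flux and the
    partial-pressure difference driving it across the membrane."""
    return molar_flux / dp

# Hypothetical measurement values (illustrative only, not from the paper):
water_flux = 4.0e-3    # mol m^-2 s^-1 of water vapor permeating the membrane
air_flux = 2.0e-6      # mol m^-2 s^-1 of air leaking through
dp_water = 2000.0      # Pa water-vapor partial-pressure difference
dp_air = 5000.0        # Pa total air-pressure difference

P_w = permeance(water_flux, dp_water)
P_a = permeance(air_flux, dp_air)
selectivity = P_w / P_a    # dimensionless: how strongly water is favored
```

A high water/air selectivity is what lets the vacuum side pull moisture out of the supply air without also pumping away significant amounts of air, which would waste vacuum-pump work.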
Procedia PDF Downloads 147
5227 Impact of Different Fuel Inlet Diameters on the NOx Emissions in a Hydrogen Combustor
Authors: Annapurna Basavaraju, Arianna Mastrodonato, Franz Heitmeir
Abstract:
The Advisory Council for Aeronautics Research in Europe (ACARE) is creating awareness for the overall reduction of NOx emissions by 80% in its vision 2020. Hence this promotes the researchers to work on novel technologies, one such technology is the use of alternative fuels. Among these fuels hydrogen is of interest due to its one and only significant pollutant NOx. The influence of NOx formation due to hydrogen combustion depends on various parameters such as air pressure, inlet air temperature, air to fuel jet momentum ratio etc. Appropriately, this research is motivated to investigate the impact of the air to fuel jet momentum ratio onto the NOx formation in a hydrogen combustion chamber for aircraft engines. The air to jet fuel momentum is defined as the ratio of impulse/momentum of air with respect to the momentum of fuel. The experiments were performed in an existing combustion chamber that has been previously tested for methane. Premix of the reactants has not been considered due to the high reactivity of the hydrogen and high risk of a flashback. In order to create a less rich zone of reaction at the burner and to decrease the emissions, a forced internal recirculation flow has been achieved by integrating a plate similar to honeycomb structure, suitable to the geometry of the liner. The liner has been provided with an external cooling system to avoid the increase of local temperatures and in turn the reaction rate of the NOx formation. The injected air has been preheated to aim at so called flameless combustion. The air to fuel jet momentum ratio has been inspected by changing the area of fuel inlets and keeping the number of fuel inlets constant in order to alter the fuel jet momentum, thus maintaining the homogeneity of the flow. Within this analysis, promising results for a flameless combustion have been achieved. 
For a constant number of fuel inlets, it was seen that a reduction of the fuel inlet diameter resulted in a decrease of the air to fuel jet momentum ratio, in turn lowering the NOx emissions.
Keywords: combustion chamber, hydrogen, jet momentum, NOx emission
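As a rough illustration of the lever used in the experiments, the sketch below computes the air to fuel jet momentum ratio J = (m_air*v_air)/(m_fuel*v_fuel), with v = m_dot/(rho*A), for two fuel inlet diameters at a constant number of inlets. All flow values are hypothetical; the abstract does not report its operating point.

```python
import math

def inlet_area(diameter, n_inlets):
    """Total cross-sectional area of n circular fuel inlets."""
    return n_inlets * math.pi * (diameter / 2.0) ** 2

def jet_momentum(m_dot, rho, area):
    """Momentum flux of a jet, m_dot * v, with v = m_dot / (rho * area)."""
    return m_dot * (m_dot / (rho * area))

def momentum_ratio(m_air, rho_air, a_air, m_fuel, rho_fuel, a_fuel):
    """Air to fuel jet momentum ratio J."""
    return jet_momentum(m_air, rho_air, a_air) / jet_momentum(m_fuel, rho_fuel, a_fuel)

# Entirely hypothetical operating point: constant mass flows and a
# constant number (8) of fuel inlets, evaluated at two inlet diameters.
m_air, rho_air, a_air = 0.05, 1.2, 1.0e-3     # kg/s, kg/m^3, m^2
m_fuel, rho_fuel = 0.001, 0.85                # kg/s, kg/m^3 (pressurized H2)

for d in (2.0e-3, 1.0e-3):                    # fuel inlet diameter, m
    j = momentum_ratio(m_air, rho_air, a_air, m_fuel, rho_fuel, inlet_area(d, 8))
    print(f"d = {d * 1e3:.1f} mm -> J = {j:.1f}")
```

Halving the inlet diameter quarters the fuel area and quadruples the fuel jet velocity and momentum, so J drops, consistent with the finding that smaller inlet diameters lower the momentum ratio.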
Procedia PDF Downloads 292
5226 Simulation and Experimental Study on Dual Dense Medium Fluidization Features of Air Dense Medium Fluidized Bed
Authors: Cheng Sheng, Yuemin Zhao, Chenlong Duan
Abstract:
The air dense medium fluidized bed is a typical application of fluidization techniques for coal particle separation in arid areas, where it is costly to implement wet coal preparation technologies. In the last three decades, the air dense medium fluidized bed, as an efficient dry coal separation technique, has been studied in many aspects, including energy and mass transfer, hydrodynamics, bubbling behaviors, etc. Although numerous studies have been published, the fluidization features, especially dual dense medium fluidization features, have rarely been reported. In dual dense medium fluidized beds, different combinations of dense mediums play a significant role in fluidization quality variation, thus influencing coal separation efficiency. Moreover, to what extent different dense mediums mix, and to what extent the two-component particulate mixture affects the fluidization performance and quality, have remained open questions. The proposed work attempts to reveal the underlying mechanisms of generation and evolution of the two-component particulate mixture in the fluidization process. Based on computational fluid dynamics methods and discrete particle modelling, the movement and evolution of dual dense mediums in an air dense medium fluidized bed have been simulated. Dual dense medium fluidization experiments have been conducted. Electrical capacitance tomography was employed to investigate the distribution of the two-component mixture in the experiments. The underlying mechanisms of two-component particulate fluidization are projected to be demonstrated through the analysis and comparison of simulation and experimental results.
Keywords: air dense medium fluidized bed, particle separation, computational fluid dynamics, discrete particle modelling
Procedia PDF Downloads 382
5225 Knowledge Management in Public Sector Employees: A Case Study of Training Participants at National Institute of Management, Pakistan
Authors: Muhammad Arif Khan, Haroon Idrees, Imran Aziz, Sidra Mushtaq
Abstract:
The purpose of this study is to investigate the current level of knowledge mapping skills of public sector employees in Pakistan. The National Institute of Management is one of the premier public sector training organizations for mid-career public sector employees in Pakistan. This study was conducted on participants of a fourteen-week training course called the Mid-Career Management Course (MCMC), which is mandatory for public sector employees, in order to ascertain how to enhance their knowledge mapping skills. Methodology: The researchers used both qualitative and quantitative approaches to conduct this study. Primary data about the participants’ current understanding of knowledge mapping was collected through a structured questionnaire. Later on, the participant observation method was used, where the researchers acted as part of the group to gather data from the trainees during their performance in training activities and tasks. Findings: Respondents of the study were examined for skills and abilities to organize ideas, help groups develop a conceptual framework, identify critical knowledge areas of an organization, study large networks and identify the knowledge flow using nodes and vertices, visualize information, represent organizational structure, etc. Overall, the responses varied across skills depending on the performance and presentations. However, generally all participants demonstrated an average level of use of both IT and non-IT knowledge-mapping tools and techniques during simulation exercises, analysis paper de-briefing, case study reports, post-visit presentations, course reviews, current issue presentations, syndicate meetings, and daily synopses. Research Limitations: This study was conducted on a small population of 67 public sector employees nominated by the federal government to undergo the 14-week training program called MCMC (Mid-Career Management Course) at the National Institute of Management, Peshawar, Pakistan.
The results, however, reflect only a specific class of public sector employees, i.e. those working in grade 18 and having more than 5 years of service. Practical Implications: The research findings are useful for trainers, training agencies, government functionaries, and organizations working for capacity building of public sector employees.
Keywords: knowledge management, km in public sector, knowledge management and professional development, knowledge management in training, knowledge mapping
Procedia PDF Downloads 254
5224 Physico-Chemical Characteristics and Possibilities of Utilization of Elbasan Thermal Waters
Authors: Elvin Çomo, Edlira Tako, Albana Hasimi, Rrapo Ormeni, Olger Gjuzi, Mirela Ndrita
Abstract:
In Albania, only low enthalpy geothermal springs and wells are known; the temperatures of some of them are almost at the upper limit of low enthalpy, reaching over 60°C. These resources can be used to improve the country's energy balance, as well as for profitable economic purposes. The region of Elbasan has the greatest geothermal energy potential in Albania. This basin is one of the best known and most used in the country. The area contains a number of springs, located in the form of a chain in the sector between Llixha and Hidraj, and constitutes a thermo-mineral basin with stable discharge and high temperature. The Elbasan springs, with a current average thermo-mineral water flow of 12-18 l/s and a temperature of 55-65°C, have specific reserves of 39.6 GJ/m2 and a potential installable power of 2760 kW. For the assessment of physico-chemical parameters and heavy metals, water samples were taken at 5 monitoring stations throughout the year 2022. The levels of basic parameters were analyzed using ISO, EU and APHA 21st edition standard methods. This study presents the current state of the physico-chemical parameters of this thermal basin, the evaluation of these parameters for curative activities and for industrial processes, and the integrated utilization of geothermal energy. Possibilities are discussed for using the thermo-mineral waters for heating homes in the surrounding area or even further away, depending on the flow from the spring or geothermal well. The study also aims to sensitize Albanian investors, medical researchers and the community to the high economic and curative effectiveness of the integral use of geothermal energy in this area and to the development of the tourist sector. An analysis of the negative environmental impact of the use of thermal water is also provided.
Keywords: geothermal energy, Llixha, physico-chemical parameters, thermal water
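The quoted 2760 kW installable power is consistent with a simple heat-flow estimate P = m_dot * cp * dT. The sketch below uses the mean flow (15 l/s) and mean temperature (about 60°C) from the abstract; the roughly 16°C reference (rejection) temperature is an assumption chosen here for illustration.

```python
def thermal_power_kw(flow_l_per_s, t_source_c, t_reference_c,
                     cp=4186.0, rho=1000.0):
    """Recoverable thermal power of a geothermal flow, P = m_dot * cp * dT, in kW."""
    m_dot = flow_l_per_s * rho / 1000.0        # l/s of water -> kg/s
    return m_dot * cp * (t_source_c - t_reference_c) / 1000.0

# Mean values from the abstract: 15 l/s at ~60 degC; the 16 degC
# reference temperature is an assumption, not from the abstract.
print(round(thermal_power_kw(15.0, 60.0, 16.0)))   # -> 2763, close to the quoted 2760 kW
```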
Procedia PDF Downloads 138
5223 Growth and Characterization of Cuprous Oxide (Cu2O) Nanorods by Reactive Ion Beam Sputter Deposition (Ibsd) Method
Authors: Assamen Ayalew Ejigu, Liang-Chiun Chao
Abstract:
In recent semiconductor and nanotechnology research, quality material synthesis, proper characterization, and production are the big challenges. As cuprous oxide (Cu2O) is a promising semiconductor material for photovoltaic (PV) and other optoelectronic applications, this study aimed to grow and characterize high quality Cu2O nanorods for the improvement of the efficiency of thin film solar cells and other potential applications. In this study, well-structured cuprous oxide (Cu2O) nanorods were successfully fabricated using the IBSD method, in which the Cu2O samples were grown on silicon substrates at a substrate temperature of 400°C in an IBSD chamber at a pressure of 4.5 x 10-5 torr, using copper as the target material. Argon and oxygen were used as the sputter and reactive gases, respectively. The Cu2O nanorods (NRs) were characterized in comparison with a Cu2O thin film (TF) deposited with the same method but with different Ar:O2 flow rates. With an Ar:O2 ratio of 9:1, single phase pure polycrystalline Cu2O NRs with a diameter of ~500 nm and a length of ~4.5 µm were grown. Increasing the oxygen flow rate, a pure single phase polycrystalline Cu2O thin film (TF) was obtained at an Ar:O2 ratio of 6:1. Field emission scanning electron microscope (FE-SEM) measurements showed that both samples have smooth morphologies. X-ray diffraction and Raman scattering measurements reveal the presence of single phase Cu2O in both samples. The differences in the Raman scattering and photoluminescence (PL) bands of the two samples were also investigated, and the results showed that there are differences in intensities, in the number of bands, and in band positions. The Raman characterization shows that the Cu2O NRs sample has pronounced Raman band intensities and a higher number of Raman bands than the Cu2O TF, which has only one second-overtone Raman signal at 2 (217 cm-1).
The temperature dependent photoluminescence (PL) spectra measurements showed that the defect luminescence band centered at 720 nm (1.72 eV) is the dominant one for the Cu2O NRs, while the 640 nm (1.937 eV) band was the only PL band observed from the Cu2O TF. The difference in the optical and structural properties of the samples stems from the change in oxygen flow rate in the process window of the sample deposition. This provides a roadmap for further investigation of the electrical and other optical properties for the tunable fabrication of Cu2O nano/micro structured samples, for the improvement of the efficiency of thin film solar cells in addition to other potential applications. Finally, the novel morphology and the excellent structural and optical properties show that the grown Cu2O NRs sample has sufficient quality to be used in further research on nano/micro structured semiconductor materials.
Keywords: defect levels, nanorods, photoluminescence, Raman modes
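The quoted band energies follow directly from the photon-energy relation E = hc/lambda, with hc approximately 1239.84 eV·nm, as a quick check shows:

```python
HC_EV_NM = 1239.84  # h*c in eV*nm

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c / lambda of a PL band centred at wavelength_nm."""
    return HC_EV_NM / wavelength_nm

# 720 nm (defect band of the NRs) and 640 nm (dominant band of the TF)
for band_nm in (720, 640):
    print(f"{band_nm} nm -> {photon_energy_ev(band_nm):.3f} eV")
# -> 1.722 eV and 1.937 eV, matching the values quoted in the abstract
```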
Procedia PDF Downloads 241
5222 The Adoption of Leagility in Healthcare Services
Authors: Ana L. Martins, Luis Orfão
Abstract:
Healthcare systems have been subject to various research efforts aiming at process improvement under a lean approach. Another perspective, agility, has also been used, though on a smaller scale, to analyse the ability of different hospital services to adapt to demand uncertainties. Both perspectives have a common denominator: improving the effectiveness and efficiency of services in a healthcare context. Mixing the two approaches allows, on the one hand, streamlining the processes, and on the other hand, the flexibility required to deal with demand uncertainty in terms of both volume and variety. The present research aims to analyse the impact of combining both perspectives on the effectiveness and efficiency of a hospital service. The adopted methodology is based on a case study approach applied to the ambulatory surgery process of Hospital de Lamego. Data was collected from direct observations, formal interviews and informal conversations. The analyzed process was selected according to three criteria: relevance of the process to the hospital, presence of human resources, and presence of waste. The customer of the process was identified, as well as their perception of value. The process was mapped using a flow chart, from a process modeling perspective, as well as through the use of Value Stream Mapping (VSM) and Process Activity Mapping. The Spaghetti Diagram was also used to assess flow intensity. The use of the lean tools enabled the identification of three main types of waste: movement, resource inefficiencies and process inefficiencies. From the use of the lean tools, improvement suggestions were produced.
The results point out that leagility cannot be applied to the process as a whole, but the application of lean and agility in specific areas of the process would bring benefits in both efficiency and effectiveness, and contribute to value creation if improvements are introduced in the hospital’s human resources and facilities management.
Keywords: case study, healthcare systems, leagility, lean management
Procedia PDF Downloads 200
5221 Quantum Statistical Machine Learning and Quantum Time Series
Authors: Omar Alzeley, Sergey Utev
Abstract:
Minimizing a constrained multivariate function is fundamental to machine learning, and such algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of optimization, and this optimization is central to learning theory. One approach to complex systems, where the dynamics of the system is inferred by a statistical analysis of the fluctuations in time of some associated observable, is time series analysis. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. Firstly, we propose a quantum time series model (QTS). Although the Hamiltonian technique has become an established tool to detect deterministic chaos, other approaches are emerging. The quantum probabilistic technique is used to motivate the construction of our QTS model. The QTS model resembles the quantum dynamic model which was applied to financial data. Secondly, various statistical methods, including machine learning algorithms such as the Kalman filter, are applied to estimate and analyse the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo have been used to support our investigations. The proposed model has been examined using real and simulated data. We establish the relation between quantum statistical machine learning and quantum time series via random matrix theory. It is interesting to note that the primary focus of the application of QTS in the field of quantum chaos was to find a model that explains chaotic behaviour. This model may reveal further insight into quantum chaos.
Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series
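The abstract does not give the QTS equations, but the role of the Kalman filter in state estimation can be sketched on a minimal scalar example: an AR(1) state observed with noise. All parameter values below are illustrative, not from the paper.

```python
import random

def kalman_filter_ar1(observations, phi, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_t = phi*x_{t-1} + w_t, y_t = x_t + v_t,
    with process variance q and observation variance r."""
    x, p = x0, p0
    estimates = []
    for y in observations:
        # Predict step
        x_pred = phi * x
        p_pred = phi * phi * p + q
        # Update step
        gain = p_pred / (p_pred + r)          # Kalman gain
        x = x_pred + gain * (y - x_pred)
        p = (1.0 - gain) * p_pred
        estimates.append(x)
    return estimates

# Simulated data (illustrative only): AR(1) state observed with noise.
random.seed(1)
phi, q, r = 0.9, 0.05, 0.5
truth, ys, x = [], [], 0.0
for _ in range(200):
    x = phi * x + random.gauss(0.0, q ** 0.5)
    truth.append(x)
    ys.append(x + random.gauss(0.0, r ** 0.5))

est = kalman_filter_ar1(ys, phi, q, r)
mse_raw = sum((y - t) ** 2 for y, t in zip(ys, truth)) / len(truth)
mse_kf = sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth)
print(f"raw MSE {mse_raw:.3f} vs filtered MSE {mse_kf:.3f}")
```

The filtered estimates track the hidden state much more closely than the raw observations, which is the behaviour the filter is expected to deliver for parameter and state estimation.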
Procedia PDF Downloads 469
5220 Fault Tolerant Control of the Dynamical Systems Based on Internal Structure Systems
Authors: Seyed Mohammad Hashemi, Shahrokh Barati
Abstract:
The problem of fault-tolerant control (FTC) by the accommodation method is studied in this paper. A fault may occur in any system component, such as the actuators, the sensors or the internal structure of the system, and leads to loss of performance and instability of the system. When a fault occurs, the purpose of fault-tolerant control is to design a strategy that can keep the control loop stable and preserve system performance as far as possible without shutting down the system. Here, the fault detection and isolation (FDI) part of the system has been evaluated with regard to actuator faults. The design of a fault detection and isolation system for a multi-input multi-output (MIMO) system is done with an unknown input observer: the system is divided into several subsystems, with the effects of the other inputs treated as disturbances in the given system state equations. In this observer design method, the effect of these disturbances is weakened and the fault is detected on one specific input. The simulation results of this approach confirm the capability of the fault detection and isolation system design. After fault detection and isolation, it is necessary to redesign the controller based on a suitable modification. In this regard, after the use of unknown input observer theory to obtain and evaluate the residual signal, the PID controller parameters are redesigned iteratively. Stability of the closed loop system has been proved in the presence of this method. Also, in order to soften the volatility caused by the variations of the PID controller parameters, a sigma modification was used as an acceptable solution. Finally, the simulation results for the popular three-tank example confirm the accuracy of the performance.
Keywords: fault tolerant control, fault detection and isolation, actuator fault, unknown input observer
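The accommodation idea, retuning the controller once FDI has flagged a fault, can be sketched on a toy first-order plant (not the paper's three-tank system or its observer). Here a partial loss of actuator effectiveness plays the role of the fault, and doubling the PID gains restores the nominal closed loop; all numbers are illustrative.

```python
def pid_step(err, state, kp, ki, kd, dt):
    """One update of a discrete PID controller; state = (integral, previous error)."""
    integral, prev_err = state
    integral += err * dt
    deriv = (err - prev_err) / dt
    return kp * err + ki * integral + kd * deriv, (integral, err)

def simulate(kp, ki, kd, loss=1.0, steps=400, dt=0.05, setpoint=1.0):
    """Track a unit setpoint on a first-order plant x' = -x + loss*u.
    'loss' < 1 emulates a partial loss of actuator effectiveness (the fault).
    Returns the final state and the integrated absolute tracking error (IAE)."""
    x, state, iae = 0.0, (0.0, 0.0), 0.0
    for _ in range(steps):
        err = setpoint - x
        iae += abs(err) * dt
        u, state = pid_step(err, state, kp, ki, kd, dt)
        x += (-x + loss * u) * dt
    return x, iae

x_h, iae_h = simulate(2.0, 1.0, 0.1)               # healthy actuator
x_f, iae_f = simulate(2.0, 1.0, 0.1, loss=0.5)     # 50% effectiveness, original gains
x_a, iae_a = simulate(4.0, 2.0, 0.2, loss=0.5)     # gains retuned after FDI flags the fault
print(f"IAE healthy {iae_h:.3f}, faulty {iae_f:.3f}, accommodated {iae_a:.3f}")
```

With the retuned gains the loop behaves exactly as the healthy one, while the unaccommodated faulty loop accumulates a larger tracking error; in the paper the retuning is done iteratively with a sigma modification to smooth the parameter updates.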
Procedia PDF Downloads 452
5219 Application of Discrete-Event Simulation in Health Technology Assessment: A Cost-Effectiveness Analysis of Alzheimer’s Disease Treatment Using Real-World Evidence in Thailand
Authors: Khachen Kongpakwattana, Nathorn Chaiyakunapruk
Abstract:
Background: Decision-analytic models for Alzheimer’s disease (AD) have advanced to discrete-event simulation (DES), in which individual-level modelling of disease progression across continuous severity spectra and the incorporation of key parameters such as treatment persistence into the model become feasible. This study aimed to apply DES to perform a cost-effectiveness analysis of treatment for AD in Thailand. Methods: A dataset of Thai patients with AD, representing unique demographic and clinical characteristics, was bootstrapped to generate a baseline cohort of patients. Each patient was cloned and assigned to donepezil, galantamine, rivastigmine, memantine or no treatment. Throughout the simulation period, the model randomly assigned each patient to discrete events including hospital visits, treatment discontinuation and death. Correlated changes in cognitive and behavioral status over time were developed using patient-level data. Treatment effects were obtained from the most recent network meta-analysis. Treatment persistence, mortality and predictive equations for functional status, costs (Thai baht (THB) in 2017) and quality-adjusted life years (QALY) were derived from country-specific real-world data. The time horizon was 10 years, with a discount rate of 3% per annum. Cost-effectiveness was evaluated against the willingness-to-pay (WTP) threshold of 160,000 THB/QALY gained (4,994 US$/QALY gained) in Thailand. Results: Under a societal perspective, only the prescription of donepezil to AD patients of all disease-severity levels was found to be cost-effective. Compared to untreated patients, although the patients receiving donepezil incurred a discounted additional cost of 2,161 THB, they experienced a discounted gain of 0.021 QALY, resulting in an incremental cost-effectiveness ratio (ICER) of 138,524 THB/QALY (4,062 US$/QALY).
Moreover, providing early treatment with donepezil to mild AD patients further reduced the ICER to 61,652 THB/QALY (1,808 US$/QALY). However, the dominance of donepezil appeared to wane when delayed treatment was given to a subgroup of moderate and severe AD patients [ICER: 284,388 THB/QALY (8,340 US$/QALY)]. Introducing a treatment stopping rule when the Mini-Mental State Exam (MMSE) score drops below 10 in a mild AD cohort did not deteriorate the cost-effectiveness of donepezil at the current treatment persistence level. On the other hand, none of the AD medications was cost-effective when considered under a healthcare perspective. Conclusions: DES greatly enhances the real-world representativeness of decision-analytic models for AD. Under a societal perspective, treatment with donepezil improves patients’ quality of life and is considered cost-effective when used to treat AD patients of all disease-severity levels in Thailand. The optimal treatment benefits are observed when donepezil is prescribed from the early course of AD. Given healthcare budget constraints in Thailand, the implementation of donepezil coverage is most likely feasible if it starts with mild AD patients, together with the stopping rule described.
Keywords: Alzheimer's disease, cost-effectiveness analysis, discrete event simulation, health technology assessment
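The decision rule behind these results is simple: a strategy is cost-effective if its ICER, the ratio of incremental cost to incremental QALY, falls below the WTP threshold. The sketch below applies it to the rounded deltas from the abstract; the published ICER of 138,524 THB/QALY is computed from unrounded patient-level values, so the rough ratio differs.

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio versus a comparator."""
    return delta_cost / delta_qaly

def cost_effective(delta_cost, delta_qaly, wtp):
    """Decision rule: cost-effective if the ICER falls below the WTP threshold."""
    return icer(delta_cost, delta_qaly) <= wtp

WTP_THB = 160_000   # Thai threshold quoted in the abstract, THB/QALY gained

# Rounded deltas for donepezil vs. no treatment (societal perspective)
print(f"rough ICER: {icer(2161, 0.021):,.0f} THB/QALY")
print(cost_effective(2161, 0.021, WTP_THB))          # True: below threshold
print(cost_effective(284_388, 1.0, WTP_THB))         # delayed-treatment subgroup: False
```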
Procedia PDF Downloads 129
5218 Study on the Mechanism of CO₂-Viscoelastic Fluid Synergistic Oil Displacement in Tight Sandstone Reservoirs
Authors: Long Long Chen, Xinwei Liao, Shanfa Tang, Shaojing Jiang, Ruijia Tang, Rui Wang, Shu Yun Feng, Si Yao Wang
Abstract:
Tight oil reservoirs have poor physical properties, insufficient formation energy, and low natural productivity; it is necessary to effectively improve their crude oil recovery. CO₂ flooding is an important technical means of enhancing oil recovery and achieving effective CO₂ storage in tight oil reservoirs, but the strong heterogeneity of such reservoirs makes CO₂ flooding prone to gas channeling and poor recovery. Aiming at the problem of gas injection channeling, and taking advantage of the excellent performance of a low interfacial tension viscoelastic fluid (GOBTK), research on CO₂-low interfacial tension viscoelastic fluid synergistic oil displacement in tight reservoirs was carried out, and the synergistic oil displacement mechanism of CO₂ and the low interfacial tension viscoelastic fluid was discussed. Experiments show that GOBTK has good injectability in tight oil reservoirs (Kg = 0.141~0.793 mD). Under heterogeneous conditions (permeability gradient difference of 10), CO₂-0.4% GOBTK synergistic flooding improved the recovery factor of the low permeability layer (31.41%); this effect is better than that of CO₂ flooding (0.56%) and 0.4% GOBTK-water flooding (20.99%). The CO₂-GOBTK synergistic oil displacement mechanism includes: 1) the formation of CO₂ foam increases the flow resistance of the viscoelastic fluid, diverting the displacement fluid; 2) GOBTK can emulsify and disperse residual oil into small oil droplets, which pass smoothly through narrow pores to be produced; 3) CO₂ dissolved in GOBTK synergistically enhances the water wettability of the core, aiding the injected viscoelastic fluid in stripping residual oil; 4) the CO₂-GOBTK synergy superimposes multiple mechanisms, effectively improving the swept volume and oil washing efficiency of the injected fluid in the reservoir.
Keywords: tight oil reservoir, CO₂ flooding, low interfacial tension viscoelastic fluid flooding, synergistic oil displacement, EOR mechanism
Procedia PDF Downloads 183
5217 Investigating the Energy Harvesting Potential of a Pitch-Plunge Airfoil Subjected to Fluctuating Wind
Authors: Magu Raam Prasaad R., Venkatramani Jagadish
Abstract:
Recent studies in the literature have shown that randomly fluctuating wind flows can give rise to a distinct regime of pre-flutter oscillations called intermittency. Intermittency is characterized by the presence of sporadic bursts of high amplitude oscillations interspersed amidst low-amplitude aperiodic fluctuations. The focus of this study is on investigating the energy harvesting potential of these intermittent oscillations. The available literature has by and large devoted its attention to extracting energy from flutter oscillations. The possibility of harvesting energy from pre-flutter regimes has remained largely unexplored. However, extracting energy from violent flutter oscillations can be severely detrimental to the structural integrity of airfoil structures. Consequently, investigating the relatively stable pre-flutter responses for energy extraction applications is of practical importance. The present study is devoted to addressing these concerns. A pitch-plunge airfoil with cubic hardening nonlinearity in the plunge and pitch degrees of freedom is considered. The input flow fluctuations are modelled using a sinusoidal term with randomly perturbed frequencies. An electromagnetic coupling is added to the pitch-plunge equations such that energy from the wind-induced vibrations of the structure is extracted. With the mean flow speed as the bifurcation parameter, a fourth-order Runge-Kutta based time marching algorithm is used to solve the governing aeroelastic equations with electromagnetic coupling. The energy harnessed in the intermittency regime is presented, and the results are discussed in comparison to that obtained from the flutter regime. The insights from this study could be useful in the health monitoring of aeroelastic structures.
Keywords: aeroelasticity, energy harvesting, intermittency, randomly fluctuating flows
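The solution approach can be sketched in miniature: RK4 time marching of a single-degree-of-freedom oscillator with cubic hardening, sinusoidal forcing with a randomly perturbed frequency, and a quasi-static electromagnetic coupling (coil inductance neglected, i = theta*v/R). This is a simplified stand-in for the two-degree-of-freedom pitch-plunge model; every parameter value is hypothetical.

```python
import math
import random

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, [a + dt / 2 * b for a, b in zip(y, k1)])
    k3 = f(t + dt / 2, [a + dt / 2 * b for a, b in zip(y, k2)])
    k4 = f(t + dt, [a + dt * b for a, b in zip(y, k3)])
    return [a + dt / 6 * (k1i + 2 * k2i + 2 * k3i + k4i)
            for a, k1i, k2i, k3i, k4i in zip(y, k1, k2, k3, k4)]

random.seed(0)
m, c, k, k3c = 1.0, 0.05, 1.0, 0.5        # mass, damping, linear and cubic stiffness
theta, R = 0.3, 2.0                       # electromagnetic coupling, load resistance
omega = 1.0 + random.uniform(-0.1, 0.1)   # randomly perturbed forcing frequency

def rhs(t, s):
    h, v = s
    f_em = theta * (theta * v / R)        # electromagnetic reaction force
    acc = (math.sin(omega * t) - c * v - k * h - k3c * h ** 3 - f_em) / m
    return [v, acc]

dt, t, y, energy = 0.01, 0.0, [0.0, 0.0], 0.0
for _ in range(4000):                     # 40 s of simulated response
    y = rk4_step(rhs, t, y, dt)
    energy += (theta * y[1]) ** 2 / R * dt   # harvested power p = (theta*v)^2 / R
    t += dt
print(f"energy harvested over 40 s: {energy:.3f} J")
```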
Procedia PDF Downloads 186
5216 Quantitative Evaluation of Mitral Regurgitation by Using Color Doppler Ultrasound
Authors: Shang-Yu Chiang, Yu-Shan Tsai, Shih-Hsien Sung, Chung-Ming Lo
Abstract:
Mitral regurgitation (MR) is a heart disorder in which the mitral valve does not close properly when the heart pumps out blood. MR is the most common form of valvular heart disease in the adult population. The diagnostic echocardiographic finding of MR is straightforward due to the well-known clinical evidence. In determining MR severity, quantification of sonographic findings is useful for clinical decision making. Clinically, the vena contracta is a standard for MR evaluation. The vena contracta is the point in a blood stream where the diameter of the stream is the least and the velocity is the maximum. The quantification of the vena contracta, i.e. the vena contracta width (VCW) at the mitral valve, can provide a numeric measurement for severity assessment. However, manually delineating the VCW may not be accurate enough; the result highly depends on operator experience. Therefore, this study proposed an automatic method to quantify the VCW to evaluate MR severity. In color Doppler ultrasound, blood flowing toward the probe appears as a red or yellow area, with the corresponding brightness representing the flow rate. In the experiment, colors were first transformed into HSV (hue, saturation and value) to align closely with the way human vision perceives red and yellow. By fitting an ellipse to the high flow rate area in the left atrium, the angle between the mitral valve and the ultrasound probe was calculated to obtain the vertical shortest diameter as the VCW. Taking the manual measurement as the standard, the method achieved differences of only 0.02 (0.38 vs. 0.36) to 0.03 (0.42 vs. 0.45) cm. The results showed that the proposed automatic VCW extraction can be efficient and accurate for clinical use. The process also has the potential to reduce intra- or inter-observer variability in measuring subtle distances.
Keywords: mitral regurgitation, vena contracta, color doppler, image processing
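The HSV step can be sketched with the standard library: convert each RGB pixel to HSV, then keep pixels whose hue lies in the red-to-yellow range and whose value (brightness, the proxy for flow rate) is high enough. The hue and threshold values below are assumptions for illustration; the paper does not publish its thresholds.

```python
import colorsys

def is_flow_pixel(r, g, b, min_value=0.3):
    """Flag an RGB pixel (0-255 channels) as a candidate flow-toward-probe
    pixel: red-to-yellow hue, non-trivial saturation, sufficient brightness.
    Thresholds are illustrative assumptions, not from the paper."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    red_to_yellow = h <= 60 / 360 or h >= 330 / 360
    return red_to_yellow and s > 0.2 and v >= min_value

print(is_flow_pixel(220, 40, 30))    # bright red  -> True
print(is_flow_pixel(230, 210, 40))   # yellow      -> True
print(is_flow_pixel(40, 60, 200))    # blue (flow away from probe) -> False
```

A mask built this way over the left atrium is what the ellipse would then be fitted to in order to extract the VCW.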
Procedia PDF Downloads 370
5215 Flood Mapping and Inundation on Weira River Watershed (in the Case of Hadiya Zone, Shashogo Woreda)
Authors: Alilu Getahun Sulito
Abstract:
Exceptional floods are now prevalent in many places in Ethiopia, resulting in a large number of human deaths and property destruction. The Lake Boyo watershed, in particular, has traditionally been vulnerable to flash floods. The goal of this research is to create flood and inundation maps for the Boyo catchment. The integration of geographic information system (GIS) technology and the hydraulic model HEC-RAS was utilized to attain this objective. The peak discharge was determined using the Fuller empirical methodology for return periods of 5, 10, 15, and 25 years, and the results were 103.2 m3/s, 158 m3/s, 222 m3/s, and 252 m3/s, respectively. River geometry, boundary conditions, Manning's n values for the varying land cover, and the peak discharges at the various return periods were all entered into HEC-RAS, and an unsteady flow analysis was then performed. The results of the unsteady flow analysis demonstrate that the water surface elevation in the longitudinal profile rises as the return period increases. The flood inundation maps show that the greatest flood coverage on the right and left sides of the river was 15.418 km2 and 5.29 km2, respectively, for the 10-, 20-, 30-, and 50-year floods. High water depths typically occur along the main channel and progressively spread to the floodplains. The study also found that flood-prone areas were disproportionately affected on the river's right bank. As a result, combining GIS with hydraulic modelling to create a flood inundation map is a viable approach. The findings of this study can be used to protect the right bank of the Boyo River catchment near the Boyo Lake kebeles. Furthermore, it is critical to promote an early warning system in the kebeles so that people can be evacuated before a flood calamity happens.
Keywords: flood, Weira River, Boyo, GIS, HEC-GeoRAS, HEC-RAS, inundation mapping
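The abstract does not spell out which variant of Fuller's method was applied, and the quoted discharges cannot be reproduced from the return periods alone. Purely as an illustration, a common statement of Fuller's empirical formula relates the T-year peak to the mean annual flood as Q_T = Q_av(1 + 0.8 log10 T); the mean annual flood below is a hypothetical value.

```python
import math

def fuller_peak(q_av, t_years):
    """One common form of Fuller's empirical formula:
    Q_T = Q_av * (1 + 0.8 * log10(T)). Assumed here for illustration."""
    return q_av * (1.0 + 0.8 * math.log10(t_years))

q_av = 60.0   # hypothetical mean annual flood for the catchment, m^3/s
for t in (5, 10, 15, 25):
    print(f"T = {t:>2} yr -> Q = {fuller_peak(q_av, t):.1f} m^3/s")
```

The logarithmic growth with return period mirrors the rising water surface elevations reported for the longer return periods.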
Procedia PDF Downloads 48
5214 Simulation of the Visco-Elasto-Plastic Deformation Behaviour of Short Glass Fibre Reinforced Polyphthalamides
Authors: V. Keim, J. Spachtholz, J. Hammer
Abstract:
The importance of fibre reinforced plastics continually increases due to their excellent mechanical properties and low material and manufacturing costs combined with significant weight reduction. Today, components are usually designed and calculated numerically using finite element methods (FEM) to avoid expensive laboratory tests. These programs are based on material models that include material-specific deformation characteristics. In this research project, material models for short glass fibre reinforced plastics are presented to simulate the visco-elasto-plastic deformation behaviour. Prior to modelling, specimens of the material EMS Grivory HTV-5H1, consisting of a polyphthalamide matrix reinforced by 50 wt.-% of short glass fibres, are characterized experimentally in terms of the highly time dependent deformation behaviour of the matrix material. To minimize the experimental effort, the cyclic deformation behaviour under tensile and compressive loading (R = −1) is characterized by isothermal complex low cycle fatigue (CLCF) tests. By combining cycles at two strain amplitudes, strain rates within three orders of magnitude, and relaxation intervals into one experiment, the visco-elastic deformation is characterized. To identify visco-plastic deformation, monotonic tensile tests, either displacement controlled or strain controlled (CERT), are compared. All relevant modelling parameters for this complex superposition of simultaneously varying mechanical loadings are quantified by these experiments. Subsequently, two different material models are compared with respect to their accuracy in describing the visco-elasto-plastic deformation behaviour. First, an extended 12 parameter model (EVP-KV2) based on Chaboche is used to model cyclic visco-elasto-plasticity at two time scales.
The parameters of the model, which includes a total separation of elastic and plastic deformation, are obtained by computational optimization using an evolutionary algorithm, a genetic algorithm driven by a fitness function. Second, the 12 parameter visco-elasto-plastic material model by Launay is used. In detail, this model contains a different type of flow function, based on the definition of the visco-plastic deformation as a part of the overall deformation. The accuracy of the models is verified by corresponding experimental LCF testing.
Keywords: complex low cycle fatigue, material modelling, short glass fibre reinforced polyphthalamides, visco-elasto-plastic deformation
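The fitness-driven parameter identification can be sketched with a minimal real-coded genetic algorithm. The toy problem below (recovering the two parameters of an exponential decay from samples) merely stands in for fitting the 12 EVP-KV2 parameters to the CLCF data; the operators and settings are generic choices, not those of the paper.

```python
import math
import random

def genetic_minimize(fitness, bounds, pop_size=40, generations=80, seed=0):
    """Minimal real-coded genetic algorithm: elitism, uniform crossover,
    Gaussian mutation whose step size decays over the generations."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 4]                 # keep the best quarter
        sigma = 0.1 * 0.95 ** gen                    # decaying mutation scale
        children = list(elite)
        while len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            child = [min(max(g + rng.gauss(0.0, sigma * (hi - lo)), lo), hi)
                     for g, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = children
    return min(pop, key=fitness)

# Toy identification problem (illustrative only, not the EVP-KV2 model):
# recover (a, b) of y = a * exp(-b * x) from noise-free samples.
xs = [0.1 * i for i in range(20)]
ys = [2.0 * math.exp(-1.5 * x) for x in xs]

def sse(params):
    a, b = params
    return sum((a * math.exp(-b * x) - y) ** 2 for x, y in zip(xs, ys))

best = genetic_minimize(sse, bounds=[(0.0, 5.0), (0.0, 5.0)])
print(f"best (a, b) = ({best[0]:.2f}, {best[1]:.2f}), SSE = {sse(best):.4f}")
```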
Procedia PDF Downloads 215
5213 Modeling of Cold Tube Drawing with a Fixed Plug by Finite Element Method and Determination of Optimum Drawing Parameters
Authors: E. Yarar, E. A. Guven, S. Karabay
Abstract:
In this study, a comprehensive simulation was made of cold tube drawing with a fixed plug. The cold tube drawing process is preferred due to the high surface quality and high mechanical properties it produces. In drawing processes applied to materials with low plastic deformability, cracks can occur on the surfaces and the process efficiency decreases. The aim of the work is to investigate the effects of different drawing parameters on drawing forces and stresses. In the simulations, optimum conditions were investigated for four different materials: Ti64Al4V, AA5052, AISI4140, and C365. One of the most important parameters for the cold drawing process is the die angle. Three dies were designed for the analysis, with semi die angles of 5°, 10°, and 15°. Three different values were used for the friction coefficient between the die and the material. In the simulations, the reduction of area and the drawing speed were kept constant, and drawing was done in one pass. According to the simulation results, the highest drawing forces were obtained for Ti64Al4V. As the semi die angle increases, the drawing forces decrease; the change in semi die angle was most effective for Ti64Al4V. Increasing the coefficient of friction also increases the drawing forces and the drawing stresses. The increase in die angle also increased the drawing stress distribution for the three materials other than C365. According to the results of the analysis, the designed drawing die is found to be suitable for drawing. The lowest drawing stress distribution and drawing forces were obtained for AA5052. The drawing die parameters have a direct effect on the results. In addition, the lubricants used for drawing have a significant effect on the drawing forces.
Keywords: cold tube drawing, drawing force, drawing stress, semi die angle
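The trends reported by the FEM study can be illustrated with a classical slab-method estimate for drawing stress, sigma_d = Y * (1 + mu/tan(alpha)) * ln(A0/A1), which neglects redundant work. The friction term alone reproduces the decrease of drawing force with increasing semi die angle and the increase with friction coefficient; redundant work, not captured here, acts in the opposite direction with die angle. The inputs below are hypothetical (a 20% reduction of area and a flow stress of the order of Ti-6Al-4V), not the paper's values.

```python
import math

def drawing_stress(yield_stress, a0, a1, mu, semi_die_angle_deg):
    """Slab-method estimate of drawing stress, redundant work neglected:
    sigma_d = Y * (1 + mu / tan(alpha)) * ln(A0 / A1)."""
    alpha = math.radians(semi_die_angle_deg)
    return yield_stress * (1.0 + mu / math.tan(alpha)) * math.log(a0 / a1)

# Hypothetical inputs: 20% reduction of area, flow stress 900 MPa,
# the paper's three semi die angles, one assumed friction coefficient.
a0, a1, y = 100.0, 80.0, 900.0          # mm^2, mm^2, MPa
for alpha in (5, 10, 15):
    sd = drawing_stress(y, a0, a1, mu=0.1, semi_die_angle_deg=alpha)
    print(f"alpha = {alpha:>2} deg -> sigma_d = {sd:.0f} MPa")
```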
Procedia PDF Downloads 166