Search results for: joint load forces
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4742

2102 A Review Of Blended Wing Body And Slender Delta Wing Performance Utilizing Experimental Techniques And Computational Fluid Dynamics

Authors: Abhiyan Paudel, Maheshwaran M Pillai

Abstract:

This paper deals with the optimization and comparison of the slender delta wing and the blended wing body (BWB). The objective is to study the difference between the two wing types and analyze the various aerodynamic characteristics of both. The blended wing body is an aircraft configuration that has the potential to be more efficient than conventional large transport aircraft configurations with the same capability. The purported advantages of the BWB approach are efficient high-lift wings and a wide airfoil-shaped body. Similarly, symmetric separation vortices over a slender delta wing may become asymmetric as the angle of attack is increased beyond a certain value, causing asymmetric forces even at symmetric flight conditions. The transition of the vortex pattern from symmetric to asymmetric over symmetric bodies under symmetric flow conditions is a fascinating fluid dynamics problem and of major importance for the performance and control of high-maneuverability flight vehicles that favor the use of slender bodies. Star-CCM+ was used to analyze the flow properties of both configurations. The CL, CD and CM were investigated in steady-state CFD of the BWB at Mach 0.3 and through wind tunnel experiments on a 1/6th-scale model of the BWB at Mach 0.1. From the CFD analysis, pressure variation, Mach number contours and turbulence regions were observed.
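As an illustrative aside (not part of the paper), the standard reduction behind reported CL and CD values can be sketched in Python; the force readings, density, velocity and reference area below are made-up numbers, not the authors' wind-tunnel data:

```python
def aero_coefficients(lift_n, drag_n, rho, velocity, ref_area):
    """Non-dimensionalize measured lift/drag forces into CL and CD.

    q = 0.5 * rho * V^2 is the dynamic pressure; each coefficient is
    force / (q * S), the standard wind-tunnel data reduction.
    """
    q = 0.5 * rho * velocity ** 2      # dynamic pressure [Pa]
    cl = lift_n / (q * ref_area)
    cd = drag_n / (q * ref_area)
    return cl, cd

# Hypothetical readings from a low-speed test (~34 m/s at sea level)
cl, cd = aero_coefficients(lift_n=850.0, drag_n=60.0,
                           rho=1.225, velocity=34.0, ref_area=1.5)
```

The same reduction applies at both the CFD and the wind-tunnel conditions; only the reference quantities change with the model scale.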

Keywords: coefficient of lift, coefficient of drag, computational fluid dynamics (CFD), blended wing body (BWB), slender delta wing

Procedia PDF Downloads 531
2101 Uncloaking Priceless Pieces of Evidence: Psychotherapy with an Older New Zealand Man; Contributions to Understanding Hidden Historical Phenomena and the Trans-Generation Transmission of Silent and Un-Witnessed Trauma

Authors: Joanne M. Emmens

Abstract:

This paper makes use of the case notes of a single psychoanalytically informed psychotherapy of a now 72-year-old man over a four-year period to explore the potential of qualitative data to be incorporated into a research methodology that can contribute theory and knowledge to the wider professional community involved in mental health care. The clinical material arising out of any psychoanalysis provides a potentially rich source of clinical data that could contribute valuably to our historical understanding of both individual and societal traumata. As psychoanalysis is primarily an investigation, it is argued that clinical case material is a rich source of qualitative data which has relevance for sociological and historical understandings, and that it can potentially illuminate important ‘gaps’ and collective blind spots that manifest unconsciously and are a contributing factor in the transmission of trauma, silently, across generations. By attending to this case material, the hope is to illustrate the value of using a psychoanalytically centred methodology. It is argued that the study of individual defences, and the manner in which they come into consciousness, allows an insight into group defences and the unconscious forces that contribute to the silencing or un-noticing of important sources (or originators) of mental suffering.

Keywords: dream furniture (Bion) and psychotic functioning, reverie, screen memories, selected fact

Procedia PDF Downloads 199
2100 Cut-Off of CMV Cobas® Taqman® (CAP/CTM Roche®) for Introduction of Ganciclovir Pre-Emptive Therapy in Allogeneic Hematopoietic Stem Cell Transplant Recipients

Authors: B. B. S. Pereira, M. O. Souza, L. P. Zanetti, L. C. S. Oliveira, J. R. P. Moreno, M. P. Souza, V. R. Colturato, C. M. Machado

Abstract:

Background: The introduction of prophylactic or preemptive therapies has effectively decreased the CMV mortality rates after hematopoietic stem cell transplantation (HSCT). CMV antigenemia (pp65) or quantitative PCR are methods currently approved for CMV surveillance in pre-emptive strategies. Commercial assays are preferred as cut-off levels defined by in-house assays may vary among different protocols and in general show low reproducibility. Moreover, comparison of published data among different centers is only possible if international standards of quantification are included in the assays. Recently, the World Health Organization (WHO) established the first international standard for CMV detection. The real time PCR COBAS Ampliprep/ CobasTaqMan (CAP/CTM) (Roche®) was developed using the WHO standard for CMV quantification. However, the cut-off for the introduction of antiviral has not been determined yet. Methods: We conducted a retrospective study to determine: 1) the sensitivity and specificity of the new CMV CAP/CTM test in comparison with pp65 antigenemia to detect episodes of CMV infection/reactivation, and 2) the cut-off of viral load for introduction of ganciclovir (GCV). Pp65 antigenemia was performed and the corresponding plasma samples were stored at -20°C for further CMV detection by CAP/CTM. Comparison of tests was performed by kappa index. The appearance of positive antigenemia was considered the state variable to determine the cut-off of CMV viral load by ROC curve. Statistical analysis was performed using SPSS software version 19 (SPSS, Chicago, IL, USA.). Results: Thirty-eight patients were included and followed from August 2014 through May 2015. The antigenemia test detected 53 episodes of CMV infection in 34 patients (89.5%), while CAP/CTM detected 37 episodes in 33 patients (86.8%). AG and PCR results were compared in 431 samples and Kappa index was 30.9%. 
The median time to first AG detection was 42 (28-140) days, while CAP/CTM detected CMV a median of 7 days earlier (34 days, ranging from 7 to 110 days). The optimum cut-off value of CMV DNA was 34.25 IU/mL to detect positive antigenemia, with 88.2% sensitivity, 100% specificity and an AUC of 0.91. This cut-off value is below the limit of detection and quantification of the equipment, which is 56 IU/mL. According to the CMV recurrence definition, 16 episodes of CMV recurrence were detected by antigenemia (47.1%) and 4 (12.1%) by CAP/CTM. The duration of viremia as detected by antigenemia was shorter (60.5% of the episodes lasted ≤ 7 days) in comparison to CAP/CTM (57.9% of the episodes lasting 15 days or more). These data suggest that the use of antigenemia to define the duration of GCV therapy might prompt early interruption of antiviral treatment, which may favor CMV reactivation. The CAP/CTM PCR could possibly provide more reliable information concerning the duration of GCV therapy. As prolonged treatment may increase the risk of toxicity, this hypothesis should be confirmed in prospective trials. Conclusions: Even though CAP/CTM by Roche showed good qualitative correlation with the antigenemia technique, the fully automated CAP/CTM did not demonstrate increased sensitivity. The cut-off value below the limit of detection and quantification may result in delayed introduction of pre-emptive therapy.
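As an aside, the ROC-based cut-off selection described above can be sketched as a Youden-index search over candidate thresholds; the viral loads and antigenemia labels below are hypothetical toy values, not the study's data:

```python
def optimal_cutoff(values, positives):
    """Pick the viral-load threshold maximizing Youden's J = sens + spec - 1.

    values: CMV DNA loads (IU/mL); positives: matching booleans, True when
    the antigenemia test (the state variable) was positive.
    """
    best_j, best_cut = -1.0, None
    for cut in sorted(set(values)):
        tp = sum(1 for v, p in zip(values, positives) if p and v >= cut)
        fn = sum(1 for v, p in zip(values, positives) if p and v < cut)
        tn = sum(1 for v, p in zip(values, positives) if not p and v < cut)
        fp = sum(1 for v, p in zip(values, positives) if not p and v >= cut)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Hypothetical toy data: loads in IU/mL with antigenemia status
loads = [5, 10, 20, 34, 40, 60, 80, 120]
ag_pos = [False, False, False, True, True, True, True, True]
cut, j = optimal_cutoff(loads, ag_pos)
```

In practice this search is what an ROC analysis package performs internally when reporting the optimum operating point.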

Keywords: antigenemia, CMV COBAS/TAQMAN, cytomegalovirus, antiviral cut-off

Procedia PDF Downloads 191
2099 Fault Tolerant (n,k)-star Power Network Topology for Multi-Agent Communication in Automated Power Distribution Systems

Authors: Ning Gong, Michael Korostelev, Qiangguo Ren, Li Bai, Saroj K. Biswas, Frank Ferrese

Abstract:

This paper investigates the joint effect of the interconnected (n,k)-star network topology and Multi-Agent automated control on restoration and reconfiguration of power systems. With the increasing development of Multi-Agent control technologies applied to power system reconfiguration in the presence of faulty components or nodes, fault tolerance is becoming an important challenge in the design of distributed power system topologies. Since the reconfiguration of a power system is performed by agent communication, the (n,k)-star interconnected network topology is studied and modeled in this paper to optimize the process of power reconfiguration. We discuss the recently proposed (n,k)-star topology and examine its properties and advantages as compared to traditional multi-bus power topologies. We design and simulate the topology model for distributed power system test cases. A related lemma based on the fault tolerance and conditional diagnosability properties is presented and proved both theoretically and practically. The conclusion is reached that the (n,k)-star topology model has measurable advantages compared to standard bus power systems while exhibiting fault tolerance properties in power restoration, as well as showing efficiency when applied to power system route discovery.
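As a hedged illustration of the topology under discussion (not code from the paper), the (n,k)-star graph can be constructed directly from its definition and its regularity checked; its uniform degree n-1 is the quantity around which fault-tolerance and connectivity results of this kind revolve:

```python
from itertools import permutations

def nk_star_graph(n, k):
    """Build the (n,k)-star graph S_{n,k}.

    Vertices: k-permutations of {1..n}. Edges: (i) swap the first symbol
    with the symbol in position i (2 <= i <= k), and (ii) replace the
    first symbol with any symbol not present in the vertex.
    """
    verts = list(permutations(range(1, n + 1), k))
    adj = {v: set() for v in verts}
    for v in verts:
        for i in range(1, k):                 # star (swap) edges
            u = list(v)
            u[0], u[i] = u[i], u[0]
            adj[v].add(tuple(u))
        for s in range(1, n + 1):             # residual (replace) edges
            if s not in v:
                adj[v].add((s,) + v[1:])
    return adj

adj = nk_star_graph(4, 2)
num_vertices = len(adj)                        # n!/(n-k)! = 12 here
degrees = {len(nb) for nb in adj.values()}     # regular of degree n-1 = 3
```

The vertex count n!/(n-k)! and (n-1)-regularity are the two structural facts the sketch verifies for S_{4,2}.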

Keywords: (n, k)-star topology, fault tolerance, conditional diagnosability, multi-agent system, automated power system

Procedia PDF Downloads 512
2097 Microgrid Design Under Optimal Control With Batch Reinforcement Learning

Authors: Valentin Père, Mathieu Milhé, Fabien Baillon, Jean-Louis Dirion

Abstract:

Microgrids offer potential solutions to meet the need for local grid stability and to increase the autonomy of isolated networks with the integration of intermittent renewable energy production and storage facilities. In such a context, sizing production and storage for a given network is a complex task, depending heavily on input data such as the power load profile and renewable resource availability. This work aims at developing an operating-cost computation methodology for different microgrid designs based on deep reinforcement learning (RL) algorithms to tackle the optimal operation problem in stochastic environments. RL is a data-based sequential decision control method based on Markov decision processes that enables the consideration of random variables for control at a chosen time scale. Agents trained via RL constitute a promising class of Energy Management Systems (EMS) for the operation of microgrids with energy storage. Microgrid sizing (or design) is generally performed by minimizing investment costs and the operational costs arising from the EMS behavior. The latter might include economic aspects (power purchase, facilities aging), social aspects (load curtailment), and ecological aspects (carbon emissions). Sizing variables are related to major constraints on the optimal operation of the network by the EMS. In this work, an islanded-mode microgrid is considered. Renewable generation is provided by photovoltaic panels; an electrochemical battery ensures short-term electricity storage. The controllable unit is a hydrogen tank used as a long-term storage unit. The proposed approach focuses on the transfer of agent learning for near-optimal operating cost approximation with deep RL for each microgrid size. Like most data-based algorithms, the training step in RL requires considerable computation time.
The objective of this work is thus to study the potential of Batch-Constrained Q-learning (BCQ) for the optimal sizing of microgrids, and especially to reduce the computation time of operating cost estimation in several microgrid configurations. BCQ is an offline RL algorithm that is known to be data efficient and can learn better policies than online RL algorithms from the same buffer. The general idea is to use the learned policies of agents trained in similar environments to constitute a buffer. The latter is used to train BCQ, so the agent can learn without updates during interaction sampling. A comparison between online RL and the presented method is performed based on the score per environment and on the computation time.
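The batch (offline) RL idea, learning a policy from a fixed transition buffer with no environment interaction during training, can be illustrated with a tabular sketch; this is a minimal stand-in for BCQ, and the two-state battery example with its rewards is invented for illustration:

```python
def batch_q_learning(buffer, n_states, n_actions, gamma=0.95, alpha=0.1, sweeps=2000):
    """Tabular offline Q-learning: fit Q from a fixed transition buffer only.

    Each buffer entry is (state, action, reward, next_state); no environment
    interaction occurs during training, mirroring the batch-RL setting.
    """
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(sweeps):
        for s, a, r, s2 in buffer:
            target = r + gamma * max(q[s2])    # Bellman backup from the buffer
            q[s][a] += alpha * (target - q[s][a])
    return q

# Invented 2-state battery toy: state 0 = empty, 1 = charged;
# action 0 = charge (costs 1), action 1 = discharge (pays 2 when charged).
buffer = [(0, 0, -1.0, 1), (1, 1, 2.0, 0), (1, 0, -1.0, 1), (0, 1, 0.0, 0)]
q = batch_q_learning(buffer, n_states=2, n_actions=2)
policy = [max(range(2), key=lambda a: q[s][a]) for s in range(2)]  # greedy policy
```

The greedy policy recovered from the buffer alone is "charge when empty, discharge when charged", which is the optimal cycle for these rewards; BCQ adds a constraint that keeps the learned policy close to the actions seen in the buffer.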

Keywords: batch-constrained reinforcement learning, control, design, optimal

Procedia PDF Downloads 123
2096 An Approach for Modeling CMOS Gates

Authors: Spyridon Nikolaidis

Abstract:

A modeling approach for CMOS gates is presented based on the use of the equivalent inverter. A new model for the inverter has been developed using a simplified transistor current model which incorporates the nanoscale effects of the planar technology. Parametric expressions for the output voltage are provided, as well as the values of the output and supply current, to be compatible with the CCS technology. The model is parametric with respect to the input signal slew, output load, transistor widths, supply voltage, temperature and process. The transistor widths of the equivalent inverter are determined by HSPICE simulations, and parametric expressions are developed for them using a fitting procedure. Results for the NAND gate show that the proposed approach offers sufficient accuracy, with an average error in propagation delay of about 5%.
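The kind of fitting procedure mentioned above can be sketched as a least-squares fit of gate delay against output load; the sample points below are made up for illustration, not HSPICE results from the paper:

```python
def fit_linear_delay(loads, delays):
    """Least-squares fit of gate delay vs output load: t_d = t0 + k * C_load.

    Mirrors the general shape of fitting procedures used to build parametric
    timing models from simulation data (here with invented sample points).
    """
    n = len(loads)
    mx = sum(loads) / n
    my = sum(delays) / n
    k = sum((x - mx) * (y - my) for x, y in zip(loads, delays)) / \
        sum((x - mx) ** 2 for x in loads)
    t0 = my - k * mx                   # intrinsic (zero-load) delay
    return t0, k

# Hypothetical HSPICE-like samples: load in fF, delay in ps
loads = [1.0, 2.0, 4.0, 8.0]
delays = [12.0, 14.0, 18.0, 26.0]      # exactly t_d = 10 + 2 * C_load here
t0, k = fit_linear_delay(loads, delays)
```

Real cell characterization fits delay surfaces over both slew and load, but the per-axis reduction has this form.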

Keywords: CMOS gate modeling, inverter modeling, transistor current mode, timing model

Procedia PDF Downloads 423
2095 A Review on the Studies on Mechanical and Tribological Properties of Aluminum and Magnesium Alloys Welded by Friction Stir Welding

Authors: Sukhdeep Singh Gill, Gurbhinder Singh Brar

Abstract:

In recent years, friction stir welding (FSW) has attracted the attention of researchers, especially for the joining of nonferrous alloys like aluminum and magnesium, due to its advantages over other welding techniques. Friction stir welding is a solid-state welding process which is most suitable for the welding of nonferrous alloys, especially aluminum and magnesium alloys. Aluminum and magnesium alloys are widely used for structural applications in all types of automobiles due to their superior mechanical properties combined with low density. This paper presents a critical review of the different properties (tensile strength, microhardness, impact strength, corrosion resistance, and metallurgical investigation on SEM) obtained by FSW of aluminum and magnesium alloys. After a critical review of the existing published literature on the concerned topics, the properties of the welded joints are compared in tabulated form to optimize the selection of materials and FSW parameters according to mechanical and tribological properties. Different tool designs used for the FSW process are also thoroughly studied, and the influence of tool design on the different properties has been incorporated in this paper. It has been observed from the existing published literature that FSW is the most effective and practical technique for joining nonferrous alloys, especially aluminum and magnesium alloys, and that among the different FSW tools, the left-hand threaded tri-flute (LHTTF) tool is best for the welding of nonferrous alloys like aluminum and magnesium, as it gives superior mechanical properties to the welded joint.

Keywords: aluminum, friction stir welding, magnesium, structural applications, tool design

Procedia PDF Downloads 179
2094 Static Response of Homogeneous Clay Stratum to Imposed Structural Loads

Authors: Aaron Aboshio

Abstract:

A numerical study of the static response of a homogeneous clay stratum, considering a wide range of cohesion values and subject to foundation loads, is presented. The linear elastic-perfectly plastic constitutive relation with the von Mises yield criterion was utilised to develop a numerically cost-effective finite element model for the soil, while imposing a rigid-body constraint on the foundation footing. From the analyses carried out, estimates of the bearing capacity factor Nc as well as the ultimate load-carrying capacities of these soils, the effect of cohesion on foundation settlements, stress fields and failure propagation were obtained. These are consistent with other findings in the literature and hence can be a useful guide in the design of safe foundations in clay soils for buildings and other structures.
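For context, the analytical benchmark against which finite element estimates of Nc are usually checked is the Prandtl solution Nc = 2 + π ≈ 5.14 for a surface strip footing on undrained clay. A minimal sketch (the cohesion value and factor of safety below are illustrative, not from the paper):

```python
import math

def ultimate_bearing_pressure(cohesion_kpa, nc=2 + math.pi):
    """Ultimate bearing pressure of a surface strip footing on undrained clay.

    Classical Prandtl solution: q_ult = Nc * c_u with Nc = 2 + pi ~ 5.14,
    the analytical value FE estimates of Nc are compared against.
    """
    return nc * cohesion_kpa

def safe_bearing_pressure(cohesion_kpa, fos=3.0):
    """Safe bearing pressure after applying a factor of safety."""
    return ultimate_bearing_pressure(cohesion_kpa) / fos

q_ult = ultimate_bearing_pressure(50.0)   # illustrative c_u = 50 kPa
q_safe = safe_bearing_pressure(50.0)
```

An FE model that recovers Nc close to 5.14 for the weightless, surface-footing case gives confidence in the settlement and failure-propagation results for the other cases.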

Keywords: bearing capacity factors, finite element method, safe bearing pressure, structure-soil interaction

Procedia PDF Downloads 302
2093 Optimizing Approach for Sifting Process to Solve a Common Type of Empirical Mode Decomposition Mode Mixing

Authors: Saad Al-Baddai, Karema Al-Subari, Elmar Lang, Bernd Ludwig

Abstract:

Empirical mode decomposition (EMD), a data-driven method of time-series decomposition, has the advantage of not assuming that a time series is linear or stationary, as is implicitly assumed in Fourier decomposition. However, EMD suffers from the mode-mixing problem in some cases. The aim of this paper is to present a solution for a common type of signal that causes the EMD mode-mixing problem, namely a signal containing an intermittency. On an artificial example, the solution shows superior performance in coping with the EMD mode-mixing problem compared with conventional EMD and Ensemble Empirical Mode Decomposition (EEMD). Furthermore, the over-sifting problem is also completely avoided, and the computation load is reduced roughly six times compared with EEMD with an ensemble number of 50.
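A schematic version of the sifting step at the heart of EMD can be sketched as follows; note this toy uses piecewise-linear envelopes instead of the cubic splines used in practice, and the test signal (a sine plus a weak trend) is invented:

```python
import math

def local_extrema(x):
    """Indices of local maxima and minima of a sequence."""
    maxima, minima = [], []
    for i in range(1, len(x) - 1):
        if x[i - 1] < x[i] > x[i + 1]:
            maxima.append(i)
        elif x[i - 1] > x[i] < x[i + 1]:
            minima.append(i)
    return maxima, minima

def interp(indices, values, n):
    """Piecewise-linear envelope through (indices, values), length n,
    held flat at the edges (a crude stand-in for spline end conditions)."""
    env = [0.0] * n
    idx = [0] + indices + [n - 1]
    val = [values[0]] + values + [values[-1]]
    for a, b, va, vb in zip(idx, idx[1:], val, val[1:]):
        for i in range(a, b + 1):
            t = (i - a) / (b - a) if b > a else 0.0
            env[i] = va + t * (vb - va)
    return env

def sift_once(x):
    """One sifting step: subtract the mean of the upper/lower envelopes."""
    maxima, minima = local_extrema(x)
    upper = interp(maxima, [x[i] for i in maxima], len(x))
    lower = interp(minima, [x[i] for i in minima], len(x))
    return [xi - (u + l) / 2.0 for xi, u, l in zip(x, upper, lower)]

# Invented test signal: an oscillation riding on a weak linear trend
signal = [math.sin(2 * math.pi * i / 16) + 0.3 * i / 64 for i in range(64)]
h = sift_once(signal)
```

One sifting pass pulls the envelope mean (here, roughly the trend) out of the candidate IMF; repeated sifting, and the stopping rule that prevents over-sifting, are where the method's subtleties live.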

Keywords: empirical mode decomposition (EMD), mode mixing, sifting process, over-sifting

Procedia PDF Downloads 395
2092 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model

Authors: Nureni O. Adeboye, Dawud A. Agunbiade

Abstract:

This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous work on balanced panel data estimation within the context of fitting a PDRM for bank audit fees. The estimation of such a model was achieved through the derivation of a joint Lagrange Multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, as well as a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1000 times, and the assessment of the three estimators considered is based on the variance, absolute bias (ABIAS), mean square error (MSE) and root mean square error (RMSE) of the parameter estimates. Eighteen different models at different specified conditions were fitted, and the best-fitted model is that of the within estimator when heteroscedasticity is severe at either zero or positive serial correlation. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, establishing that banks' operations are severely heteroscedastic in nature with little or no periodicity effects.
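The Monte Carlo assessment logic (bias, MSE, RMSE of an estimator under a linear heteroscedasticity function) can be sketched on a deliberately simplified model; the OLS slope, sample size, replication count and heteroscedasticity parameter below are illustrative stand-ins, not the paper's panel design:

```python
import math
import random

def simulate_ols_slope(beta=2.0, n=50, reps=1000, het=0.5, seed=42):
    """Monte Carlo assessment of the OLS slope under linear heteroscedasticity.

    Errors have standard deviation proportional to (1 + het * x), a linear
    heteroscedasticity function; reports bias, MSE and RMSE of the slope
    estimates over `reps` replications.
    """
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        xs = [rng.uniform(0, 1) for _ in range(n)]
        ys = [beta * x + rng.gauss(0, 1 + het * x) for x in xs]
        mx = sum(xs) / n
        my = sum(ys) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        estimates.append(b)
    bias = sum(estimates) / reps - beta
    mse = sum((b - beta) ** 2 for b in estimates) / reps
    return bias, mse, math.sqrt(mse)

bias, mse, rmse = simulate_ols_slope()
```

The same bookkeeping (bias, MSE, RMSE per design cell) scales to the paper's 81 variations and to panel estimators in place of OLS.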

Keywords: audit fee, heteroscedasticity, Lagrange multiplier test, Monte Carlo scheme, periodicity

Procedia PDF Downloads 141
2091 Performance, Scalability and Reliability Engineering: Shift Left and Shift Right Approach

Authors: Jyothirmayee Pola

Abstract:

Ideally, test-driven development (TDD), agile, or any other process should be able to define and implement the performance, scalability, and reliability (PSR) of a product with a high quality of service (QoS), and should have the ability to fix any PSR issues at lower cost before the product hits production. Most PSR test strategies for new product introduction (NPI) include assumptions about production load requirements, but these are never accurate. New product enhancements (NPE) include assumptions for the new features being developed, whereas the workload distribution for older features can be derived by analyzing production transactions. This paper discusses how to shift PSR left towards the design phase of the release management process to get better QoS with respect to PSR for any product under development. It also explains the return on investment for future customer onboarding, both for Service-Oriented Architectures (SOA) and microservices architectures, and how to define PSR requirements.

Keywords: component PSR, performance engineering, performance tuning, reliability, return on investment, scalability, system PSR

Procedia PDF Downloads 75
2090 Geared Turbofan with Water Alcohol Technology

Authors: Abhinav Purohit, Shruthi S. Pradeep

Abstract:

In today’s world, aviation industries use turbofan engines (a combination of the turboprop and the turbojet) which meet the obligatory requirements of being fuel efficient and producing enough thrust to propel an aircraft. But one can imagine increasing the work output of this machine by reducing the input power. In striving to improve technologies, and especially to augment the efficiency of the engine, some adaptations can lead to new concepts by introducing a step change in turbofan engine development. One hopeful concept is to de-couple the fan, with the help of a reduction gearbox in a two-spool engine, from the rest of the machinery, to get more work output with maximum efficiency by reducing the load on the turbine shaft. By adopting this configuration, we gain an additional degree of freedom to better optimize each component at different speeds. Since the components run at different speeds, better component efficiencies can be achieved. Introducing a water-alcohol mixture to this concept would further help to improve the results.

Keywords: emissions, fuel consumption, more power, turbofan

Procedia PDF Downloads 435
2089 Mathematical Modelling of Slag Formation in an Entrained-Flow Gasifier

Authors: Girts Zageris, Vadims Geza, Andris Jakovics

Abstract:

Gasification processes are of great interest due to their generation of renewable energy in the form of syngas from biodegradable waste. It is, therefore, important to study the factors that play a role in the efficiency of gasification and the longevity of the machines in which gasification takes place. This study focuses on the latter, aiming to optimize an entrained-flow gasifier by reducing slag formation on its walls and thereby reduce maintenance costs. A CFD mathematical model for an entrained-flow gasifier is constructed: the geometry of an actual gasifier is rendered in 3D and appropriately meshed. Then, the turbulent gas flow in the gasifier is modeled with the realizable k-ε approach, taking devolatilization, combustion and coal gasification into account. Various such simulations are conducted, obtaining results for different air inlet positions and tracking particles of varying sizes undergoing devolatilization and gasification. The model identifies potential problem zones where most particles collide with the gasifier walls, indicating risk regions where ash deposits are most likely to form. In conclusion, the effects of air inlet positioning and of the particle size allowed in the main gasifier tank on the formation of an ash layer are discussed, and possible solutions for decreasing the number of undesirable deposits are proposed. Additionally, an estimate is given of the impact of different factors, such as temperature, gas properties and gas content, and of the different forces acting on the particles undergoing gasification.

Keywords: biomass particles, gasification, slag formation, turbulence k-ε modelling

Procedia PDF Downloads 286
2088 The Prevalence of Intubation Induced Dental Complications among Hospitalized Patients

Authors: Dorsa Rahi, Arghavan Tonkanbonbi, Soheila Manifar, Behzad Jafvarnejad

Abstract:

Background and Aim: Intraoral manipulation is performed during endotracheal intubation for general anesthesia, which can traumatize the soft and hard tissues of the oral cavity and cause postoperative pain and discomfort. Dental trauma is the most common complication of intubation. This study aimed to assess the prevalence of dental complications due to intubation in patients hospitalized in Imam Khomeini Hospital during 2018-2019. Materials and Methods: A total of 805 patients presenting to the Cancer Institute of Imam Khomeini Hospital for preoperative anesthesia consultation were randomly enrolled. A dentist interviewed the patients and performed a comprehensive clinical oral examination preoperatively. The patients underwent clinical oral examination by another dentist postoperatively. Results: No significant correlation was found between dental trauma (tooth fracture, tooth mobility, or soft tissue injury) after intubation and the age or gender of the patients. According to the Wilcoxon test and the McNemar-Bowker test, the rate of mobility before intubation was significantly different from that after intubation (P<0.001). The maxillary central incisors, maxillary left canine, and mandibular right and left central incisors had the highest rate of fracture. Conclusion: Mobile teeth before intubation are at higher risk of avulsion and aspiration during the procedure. Patients with primary temporomandibular joint disorders are more susceptible to post-intubation trismus.

Keywords: oral trauma, dental trauma, intubation, anesthesia

Procedia PDF Downloads 148
2087 2-Dimensional Kinematic Analysis on Sprint Start with Sprinting Performance of Novice Athletes

Authors: Satpal Yadav, Biswajit Basumatary, Arvind S. Sajwan, Ranjan Chakravarty

Abstract:

The purpose of the study was to assess the effect of selected 2D kinematical variables on sprint start and sprinting performance of novice athletes. Six athletes (3 National and 3 State level) of the Sports Authority of India, Guwahati, were selected for this study. The mean (M) and standard deviation (SD) of the sprinters were: age (17.44, 1.55), height (1.74 m, 0.84 m), weight (62.25 kg, 4.55), arm length (65.00 cm, 3.72) and leg length (96.35 cm, 2.71). The Biokin-2D motion analysis system V4.5 was used for acquiring two-dimensional kinematical data/variables on sprint start and sprinting performance. For the purpose of kinematic analysis, a standard motion-driven camera (a Sony handycam) with a frame rate of 60 frames per second was used. The photographic sequence was captured under controlled conditions, with the camera fixed 12 m away from the athletes at a height of 1.2 m. The results showed that National and State level athletes differed significantly in their knee trajectory, ankle trajectory, knee displacement, ankle displacement, knee linear velocity, ankle linear velocity, and ankle linear acceleration, whereas an insignificant difference was found between National and State level athletes in the linear acceleration of the knee joint on sprint start and sprinting performance. For all statistical tests, the level of significance was set at p<0.05.
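The basic reduction from digitized 2D joint coordinates at 60 frames/s to linear velocity and acceleration can be sketched with central differences; the marker track below is synthetic (constant acceleration), not the athletes' data:

```python
def kinematics_from_positions(positions, fps=60.0):
    """Central-difference linear velocity and acceleration from 2D joint
    coordinates digitized at a fixed frame rate (here 60 frames/s)."""
    dt = 1.0 / fps
    vel = [((x2 - x0) / (2 * dt), (y2 - y0) / (2 * dt))
           for (x0, y0), (x2, y2) in zip(positions, positions[2:])]
    acc = [((vx2 - vx0) / (2 * dt), (vy2 - vy0) / (2 * dt))
           for (vx0, vy0), (vx2, vy2) in zip(vel, vel[2:])]
    return vel, acc

# Synthetic ankle-marker track (metres): constant 2 m/s^2 acceleration in x
frames = [(0.5 * 2 * (i / 60.0) ** 2, 1.0) for i in range(10)]
vel, acc = kinematics_from_positions(frames)
```

Central differences are exact for quadratic motion, so the recovered acceleration here is exactly 2 m/s²; real digitized tracks additionally need smoothing before differentiation.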

Keywords: 2D kinematic analysis, sprinting performance, novice athletes, sprint start

Procedia PDF Downloads 323
2086 Lactic Acid Solution and Aromatic Vinegar Nebulization to Improve Hunted Wild Boar Carcass Hygiene at Game-Handling Establishment: Preliminary Results

Authors: Rossana Roila, Raffaella Branciari, Lorenzo Cardinali, David Ranucci

Abstract:

The wild boar (Sus scrofa) population has increased strongly across Europe in recent decades, causing severe fauna management issues. In central Italy, the wild boar is the main hunted wild game species, with approximately 40,000 animals killed per year in the Umbria region alone. Game meat is characterized by high nutritional value as well as a peculiar taste and aroma, largely appreciated by consumers. This type of meat, and products thereof, can meet the current consumer demand for higher-quality foodstuffs, not only from a nutritional and sensory point of view but also in relation to environmental sustainability, the non-use of chemicals, and animal welfare. The game meat production chain has some gaps from a hygienic point of view: the harvest process is usually conducted in a wild environment where animals can easily be contaminated during hunting and subsequent practices. The definition and implementation of a certified and controlled supply chain could ensure quality, traceability and safety for the final consumer and therefore promote game meat products. According to European legislation, in some animal species, such as bovines, the use of weak acid solutions for carcass decontamination is envisaged in order to ensure the maintenance of optimal hygienic characteristics. A preliminary study was carried out to evaluate the applicability of similar strategies to control the hygienic level of wild boar carcasses. The carcasses, harvested according to the selective method and processed in the game-handling establishment, were treated by nebulization with two different solutions: a 2% food-grade lactic acid solution and aromatic vinegar. Swab samples were taken before treatment and at different times after treatment of the carcass surfaces, and subsequently tested for total aerobic mesophilic load, total aerobic psychrophilic load, Enterobacteriaceae, Staphylococcus spp. and lactic acid bacteria.
The results obtained for the targeted microbial populations showed a positive effect of the application of the lactic acid solution on all the populations investigated, while aromatic vinegar showed a lower effect on bacterial growth. This study could lay the foundations for the optimization of the use of a lactic acid solution to treat wild boar carcasses, aiming to guarantee a good hygienic level and the safety of the meat.

Keywords: game meat, food safety, process hygiene criteria, microbial population, microbial growth, food control

Procedia PDF Downloads 158
2085 Dynamic Voltage Restorer Control Strategies: An Overview

Authors: Arvind Dhingra, Ashwani Kumar Sharma

Abstract:

Power quality is an important parameter for today’s consumers. Various custom power devices are in use to maintain a proper quality of the power supply. The Dynamic Voltage Restorer (DVR) is one such custom power device. The DVR is a static VAR device used for series compensation. It is a power electronic device that injects a voltage in series and in synchronism with the supply to compensate for a voltage sag. Inductive loads are a major source of power quality distortion. The induction furnace is one such typical load. A typical induction furnace is used for melting scrap or iron. At the start of the melting process, the power quality is distorted to a large extent, especially through the introduction of harmonics. The DVR is one approach to mitigate these harmonics. This paper is an attempt to overview the various control strategies followed for the control of power quality using a DVR. An overview of the control of harmonics using a DVR is also presented.
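The core series-compensation idea, injecting, in synchronism, the phasor difference between the pre-sag and sagged supply voltage, can be sketched as follows; the sag depth and phase jump are made-up numbers:

```python
import cmath
import math

def phasor(mag, deg):
    """Complex phasor from magnitude and angle in degrees."""
    return cmath.rect(mag, math.radians(deg))

def dvr_injection(v_presag, v_sag):
    """Series voltage a DVR must inject (as a phasor) to restore the
    pre-sag supply: V_inj = V_presag - V_sag, injected in synchronism."""
    return v_presag - v_sag

# Illustrative 30% sag with a 10-degree phase jump on a 230 V supply
v_ref = phasor(230.0, 0.0)
v_sag = phasor(161.0, -10.0)
v_inj = dvr_injection(v_ref, v_sag)
mag, ang = abs(v_inj), math.degrees(cmath.phase(v_inj))
```

The control strategies the paper surveys differ mainly in how this reference injection is derived (pre-sag, in-phase, or minimum-energy) and in how the converter tracks it.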

Keywords: DVR, power quality, harmonics, harmonic mitigation

Procedia PDF Downloads 378
2084 Solar Photovoltaic Pumping and Water Treatment Tools: A Case Study in Ethiopian Village

Authors: Corinna Barraco, Ornella Salimbene

Abstract:

This research involves the Ethiopian locality of Jeldi (East Africa), an area particularly affected by water shortage and in which the pumping and treatment of drinking water are extremely sensitive issues. The study aims to develop and apply low-cost tools for the design of solar water pumping and water purification systems in a developing country. Accordingly, two technical tools have been implemented in Excel: i) Solar photovoltaic Pumping (Spv-P) and ii) Water treatment (Wt). The Spv-P tool was applied to the existing well (depth 110 [m], dynamic water level 90 [m], static water level 53 [m], well yield 0.1728 [m³h⁻¹]) in the Jeldi area, where the estimated water demand is about 50 [m³d⁻¹]. Through the application of the tool, the water extraction system of the well was designed, yielding the number of pumps and solar panels necessary for pumping water from the well of Jeldi. The second tool, Wt, was applied in the subsequent phase of treating the extracted water. According to the chemical-physical parameters of the water, Wt returns as output the type of purification treatment(s) necessary to make the extracted water potable. In the case of the well of Jeldi, the tool identified a high criticality regarding the turbidity parameter (12 [NTU] vs 5 [NTU]), and a medium criticality regarding the exceeded limits of sodium concentration (234 [mg/L Na⁺] vs 200 [mg/L Na⁺]) and ammonia (0.64 [mg/L NH₃-N] vs 0.5 [mg/L NH₃-N]). To complete these tools, two specific manuals are provided for the users. The joint use of the two tools would help reduce problems related to access to water resources compared to the current situation and represents a simplified solution for the design of pumping systems and the analysis of purification treatments to be performed in undeveloped countries.
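The kind of sizing calculation the Spv-P tool performs can be sketched as follows. The hydraulic power formula is standard, but the pump efficiency, daily sun hours, and panel wattage below are illustrative assumptions, not values taken from the actual workbook.

```python
# Illustrative sizing logic for a solar PV pumping tool (assumed efficiencies
# and panel rating; the actual Spv-P workbook may use different values).
import math

RHO_G = 1000 * 9.81  # water density [kg/m3] x gravity [m/s2]

def hydraulic_power_w(flow_m3_h: float, head_m: float) -> float:
    """Hydraulic power [W] needed to lift `flow_m3_h` [m3/h] over `head_m` [m]."""
    flow_m3_s = flow_m3_h / 3600.0
    return RHO_G * flow_m3_s * head_m

def pv_panels_needed(daily_demand_m3: float, head_m: float,
                     sun_hours: float = 5.0, pump_eff: float = 0.45,
                     panel_w: float = 250.0) -> int:
    """Panels needed to meet `daily_demand_m3` [m3/day] (assumed parameters)."""
    flow_m3_h = daily_demand_m3 / sun_hours      # pump only during sun hours
    p_elec = hydraulic_power_w(flow_m3_h, head_m) / pump_eff
    return math.ceil(p_elec / panel_w)

# Jeldi-like figures from the abstract: 50 m3/day demand, ~90 m dynamic level.
print(pv_panels_needed(50.0, 90.0))
```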

Keywords: drinking water, Ethiopia, treatments, water pumping

Procedia PDF Downloads 156
2083 Probabilistic Study of Impact Threat to Civil Aircraft and Realistic Impact Energy

Authors: Ye Zhang, Chuanjun Liu

Abstract:

In-service aircraft are exposed to different types of threats, e.g. bird strike, ground vehicle impact, runway debris, or even lightning strike. To satisfy the aircraft damage tolerance design requirements, the designer has to understand the threat level for different types of aircraft structures, whether metallic or composite. Exposure to low-velocity impacts may produce very serious internal damage in composite structures, such as delaminations and matrix cracks, without leaving a visible mark on the impacted surfaces. This internal damage can cause a significant reduction in the load-carrying capacity of structures. The semi-probabilistic method provides a practical and proper approximation to establish the impact-threat-based energy cut-off level for the damage tolerance evaluation of aircraft components. Thus, the probabilistic distribution of impact threat and the realistic impact energy cut-off levels are the essential elements required for the certification of aircraft composite structures. A new survey of impact threat to in-service civil aircraft has recently been carried out based on field records covering around 500 civil aircraft (mainly single-aisle) and more than 4.8 million flight hours. In total, 1,006 damage events caused by low-velocity impacts were screened out from more than 8,000 records including impact dents, scratches, corrosion, delaminations, cracks, etc. The dependency of the impact threat on the location of the aircraft structures and the structural configuration was analyzed. Although the survey mainly focused on metallic structures, the resulting low-energy impact data are believed to be representative of civil aircraft in general, since the service environments and the maintenance operations are independent of the materials of the structures.
The probability of impact damage occurrence (Po) and of impact energy exceedance (Pe) are the two key parameters describing the statistical distribution of impact threat. From the impact damage events in the survey, Po can be estimated as 2.1×10⁻⁴ per flight hour. For the calculation of Pe, a numerical model was developed using the commercial FEA software ABAQUS to back-estimate the impact energy from the visible damage characteristics. The relationship between the visible dent depth and the impact energy was established and validated by drop-weight impact experiments. Based on the survey results, Pe was calculated and assumed to have a log-linear relationship with the impact energy. For the product of the two aforementioned probabilities, it is reasonable and conservative to assume Pa = Po×Pe = 10⁻⁵, which indicates that low-velocity impact events are about as likely as Limit Load events. Combining Pa with the two probabilities Po and Pe obtained from the field survey, the cutoff level of realistic impact energy was estimated as 34 J. In summary, a new survey of field records of civil aircraft was recently conducted to investigate the probabilistic distribution of impact threat. Based on the data, the two probabilities Po and Pe were obtained. Considering a conservative assumption for Pa, the cutoff level for the realistic impact energy has been determined, which is potentially applicable in the damage tolerance certification of future civil aircraft.
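The cutoff derivation can be sketched numerically. Only Po = 2.1×10⁻⁴ per flight hour and Pa = 10⁻⁵ come from the abstract; the slope of the log-linear exceedance law below is an illustrative value chosen so the result lands near the reported 34 J, since the fitted parameters of the survey are not given.

```python
# Sketch of inverting a log-linear exceedance law to find the energy cutoff.
import math

PO = 2.1e-4   # impact occurrence probability per flight hour (from the survey)
PA = 1.0e-5   # target combined probability, Pa = Po x Pe (assumed conservative)

def energy_cutoff(slope: float, intercept: float = 0.0) -> float:
    """Invert an assumed log-linear law log10(Pe) = intercept - slope * E
    to find the energy E [J] whose exceedance probability gives Pa = Po * Pe."""
    pe_required = PA / PO
    return (intercept - math.log10(pe_required)) / slope

# With an illustrative fitted slope of ~0.0389 per joule, the cutoff lands
# near the 34 J value reported in the abstract.
print(round(energy_cutoff(0.0389), 1))
```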

Keywords: composite structure, damage tolerance, impact threat, probabilistic

Procedia PDF Downloads 308
2082 Evaluation of Progressive Collapse of Transmission Tower

Authors: Jeong-Hwan Choi, Hyo-Sang Park, Tae-Hyung Lee

Abstract:

The transmission tower is one of the crucial lifeline structures in modern society, and it needs to be protected against extreme loading conditions. However, the transmission tower is a very complex structure, and it is therefore very difficult to simulate its actual damage and collapse behavior. In this study, the collapse behavior of the transmission tower under lateral loading conditions such as wind load is evaluated through computational simulation. For that, a progressive collapse procedure is applied: after running the simulation, if a member of the tower structure fails, the failed member is removed and the simulation is run again. A 154 kV transmission tower is selected for this study. The simulation is performed by a nonlinear static analysis procedure, namely pushover analysis, using OpenSees, an earthquake simulation platform. Three-dimensional finite element models of the tower are developed.
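The member-removal procedure described above can be sketched as a simple loop. The three callables here are placeholders standing in for the OpenSees pushover run and its failure checks, not actual OpenSees API calls.

```python
def progressive_collapse(members, run_pushover, is_failed, is_collapsed):
    """Sketch of the progressive collapse procedure: run the pushover
    analysis, remove any failed members, and repeat until the structure
    either stabilizes or a collapse mechanism forms. `run_pushover`,
    `is_failed`, and `is_collapsed` are placeholder callables standing
    in for the finite element model operations."""
    removed = []
    while True:
        results = run_pushover(members)
        failed = [m for m in members if is_failed(m, results)]
        if not failed:
            return removed, results          # structure stabilized
        for m in failed:
            members.remove(m)
            removed.append(m)
        if is_collapsed(members, results):
            return removed, results          # collapse mechanism formed
```

In practice, `run_pushover` would rebuild the tower model without the removed members and re-run the nonlinear static analysis at each iteration.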

Keywords: transmission tower, OpenSees, pushover, progressive collapse

Procedia PDF Downloads 357
2081 Scalable Performance Testing: Facilitating The Assessment Of Application Performance Under Substantial Loads And Mitigating The Risk Of System Failures

Authors: Solanki Ravirajsinh

Abstract:

In the software testing life cycle, failing to conduct thorough performance testing can result in significant losses for an organization due to application crashes and improper behavior under high user loads in production. Simulating large volumes of requests, such as 5 million within 5-10 minutes, is challenging without a scalable performance testing framework. Leveraging cloud services to implement a performance testing framework makes it feasible to handle 5-10 million requests in just 5-10 minutes, helping organizations ensure their applications perform reliably under peak conditions. Implementing a scalable performance testing framework using cloud services and tools like JMeter, EC2 instances (virtual machines), CloudWatch logs (for monitoring errors and logs), EFS (file storage), and security groups offers several key benefits for organizations. Creating a performance test framework with this approach optimizes resource utilization, enables effective benchmarking, increases reliability, and saves costs by resolving performance issues before the application is released. In performance testing, a master-slave framework facilitates distributed testing across multiple EC2 instances to emulate many concurrent users and efficiently handle high loads. The master node orchestrates the test execution by coordinating with multiple slave nodes to distribute the workload. Slave nodes execute the test scripts provided by the master node, with each node handling a portion of the overall user load and generating requests to the target application or service. By leveraging JMeter's master-slave framework in conjunction with cloud services like EC2 instances, EFS, CloudWatch logs, security groups, and command-line tools, organizations can achieve superior scalability and flexibility in their performance testing efforts. In this master-slave framework, JMeter must be installed on both the master and each slave EC2 instance.
The master EC2 instance functions as the "brain," while the slave instances operate as the "body parts." The master directs each slave to execute a specified number of requests. Upon completion of the execution, the slave instances transmit their results back to the master. The master then consolidates these results into a comprehensive report detailing metrics such as the number of requests sent, encountered errors, network latency, response times, server capacity, throughput, and bandwidth. Leveraging cloud services, the framework benefits from automatic scaling based on the volume of requests. Notably, integrating cloud services allows organizations to handle more than 5-10 million requests within 5 minutes, depending on the server capacity of the hosted website or application.
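The master's consolidation step can be sketched as follows. The per-sample field names (`elapsed` in milliseconds, `success`) follow JMeter's default CSV JTL output, but the summary fields and the nearest-rank percentile rule here are illustrative choices, not JMeter's own report format.

```python
# Sketch of the master node merging per-slave JMeter samples into a summary.
import math

def consolidate(slave_results):
    """Merge lists of per-slave samples (dicts with JTL-style 'elapsed' [ms]
    and 'success' fields) into aggregate metrics: request count, error count,
    mean response time, and a nearest-rank 95th-percentile response time."""
    samples = [s for slave in slave_results for s in slave]
    elapsed = sorted(s["elapsed"] for s in samples)
    errors = sum(1 for s in samples if not s["success"])
    n = len(samples)
    return {
        "requests": n,
        "errors": errors,
        "avg_ms": sum(elapsed) / n,
        "p95_ms": elapsed[min(n - 1, math.ceil(0.95 * n) - 1)],
    }

slaves = [
    [{"elapsed": 120, "success": True}, {"elapsed": 340, "success": False}],
    [{"elapsed": 200, "success": True}, {"elapsed": 180, "success": True}],
]
print(consolidate(slaves))
```

In the real framework, each slave's JTL results file would be read from EFS and parsed into such sample dicts before consolidation.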

Keywords: scalable performance testing, JMeter with cloud services, JMeter master-slave using cloud services, identifying application crashes under heavy load

Procedia PDF Downloads 27
2080 Performance of Stiffened Slender Built-Up Steel I-Columns

Authors: M. E. Abou-Hashem El Dib, M. K. Swailem, M. M. Metwally, A. I. El Awady

Abstract:

The present work presents a parametric study of the effect of stiffeners on the performance of slender built-up steel I-columns. To achieve the desired analysis, the finite element technique is used to develop nonlinear three-dimensional models representing the investigated columns. The finite element program ANSYS 13.0 is used as the calculation tool for the necessary nonlinear analysis. A validation of the obtained numerical results is carried out. The parameters considered in the study are the column slenderness ratio and the horizontal stiffener dimensions, as well as the number of stiffeners. The stiffener dimensions considered in the analysis are the stiffener width and the stiffener thickness. Numerical results indicate a considerable effect of stiffeners on the performance and failure load of slender built-up steel I-columns.

Keywords: columns, local buckling, slender, stiffener, thin walled section

Procedia PDF Downloads 319
2079 Lower Extremity Injuries and Landing Kinematics and Kinetics in University-Level Netball Players

Authors: Henriette Hammill

Abstract:

Background: Safe landing in netball is fundamental. Research on the biomechanics of multidirectional landings is lacking, especially among netball players. Furthermore, few studies have reported the associations between lower extremity injuries and landing kinematics and kinetics in university-level netball players. Objectives: The aim was to determine the relationships between lower extremity injuries and landing kinematics and kinetics in university-level netball players over a single season. Methods: This cross-sectional repeated-measures study consisted of ten university-level female netball players. Injury prevalence data were collected during the 2022 netball season. The kinematic and kinetic data were collected during multidirectional single-leg landing trials. Results: Generally, the ankle strength of the netball players was below average. There was evidence of negative correlations between ankle range of motion (ROM) and muscle activity amplitudes. A lack of evidence precluded the conclusion that lower extremity dominance was a predisposing factor for injury or that any specific body part was most likely to be injured among netball players. Conclusion: Landing forces and muscle activity are direction-dependent, especially for the dominant extremity. Lower extremity strength and neuromuscular control (NMC) across multiple jump-landing directions should be an area of focus for female netball players.

Keywords: netball players, landing kinetics, landing kinematics, lower extremity

Procedia PDF Downloads 47
2078 Selective Laser Melting (SLM) Process and Its Influence on the Machinability of TA6V Alloy

Authors: Rafał Kamiński, Joel Rech, Philippe Bertrand, Christophe Desrayaud

Abstract:

Titanium alloys are among the most important materials in the aircraft industry, due to their low density, high strength, and corrosion resistance. However, these alloys are considered difficult to machine because they have poor thermal properties and high reactivity with cutting tools. The Selective Laser Melting (SLM) process is becoming increasingly popular in industry since it enables the design of new complex components that cannot be manufactured by standard processes. However, the high temperature reached during the melting phase, as well as the several rapid heating and cooling phases due to the movement of the laser, induce complex microstructures. These microstructures differ from the conventional equiaxed ones obtained by casting and forging. Parts obtained by SLM have to be machined in order to calibrate the dimensions and the surface roughness of functional surfaces. The ball milling technique is widely applied to finish complex shapes. However, the machinability of titanium is strongly influenced by the microstructure. So the objective of this work is to investigate the influence of the SLM process, i.e. the microstructure, on the machinability of titanium, compared to conventional forming processes. The machinability is analyzed by measuring surface roughness, cutting forces, and cutting tool wear for a range of cutting conditions (depth of cut ap, feed per tooth fz, spindle speed N) in accordance with industrial practices.

Keywords: ball milling, microstructure, surface roughness, titanium

Procedia PDF Downloads 297
2077 Merit Order of Indonesian Coal Mining Sources to Meet the Domestic Power Plants Demand

Authors: Victor Siahaan

Abstract:

Coal remains the most important energy source for electricity generation, typically taking the biggest portion of a country's energy mix, as in Indonesia. The low cost of electricity generation and abundant resources make this energy source the first choice to cover the base load. To realize its significance for producing electricity, it is necessary to know the amount (volume) of coal needed to ensure that all coal power plants (CPP) in a country can operate properly. To secure this volume of coal, this study discusses the identification of coal mining sources in Indonesia, the classification of the typical coal quality from each mining source, and the determination of the port of loading. Using these data, the coal mining sources are then selected to feed each CPP based on the compatibility of the coal quality and the lowest transport cost.
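The selection step can be sketched as a greedy merit-order assignment: among the mines whose coal quality matches a plant's specification, supply is drawn in order of lowest delivered cost. The mine names, quality classes, cost figures, and capacities below are purely illustrative, not data from the study.

```python
def merit_order(sources, plant_spec, demand_tonnes):
    """Sketch of merit-order selection: from mines whose coal quality is
    compatible with the plant's specification, draw supply in order of
    lowest delivered (FOB + transport) cost until demand is met."""
    compatible = [s for s in sources if s["quality"] == plant_spec]
    plan, remaining = [], demand_tonnes
    for s in sorted(compatible, key=lambda s: s["fob"] + s["transport"]):
        take = min(remaining, s["capacity"])
        if take > 0:
            plan.append((s["mine"], take))
            remaining -= take
    return plan, remaining

# Illustrative mines (costs in $/tonne, capacity in tonnes):
sources = [
    {"mine": "A", "quality": "sub-bituminous", "fob": 40, "transport": 12, "capacity": 300},
    {"mine": "B", "quality": "sub-bituminous", "fob": 38, "transport": 18, "capacity": 500},
    {"mine": "C", "quality": "bituminous",     "fob": 55, "transport": 10, "capacity": 400},
]
print(merit_order(sources, "sub-bituminous", 600))
```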

Keywords: merit order, Indonesian coal mine, electricity, power plant

Procedia PDF Downloads 153
2076 Performance Assessment of PV Based Grid Connected Solar Plant with Varying Load Conditions

Authors: Kusum Tharani, Ratna Dahiya

Abstract:

This paper aims to analyze the power flow of a grid-connected 100-kW photovoltaic (PV) array connected to a 25-kV grid via a DC-DC boost converter and a three-phase three-level Voltage Source Converter (VSC). Maximum Power Point Tracking (MPPT) is implemented in the boost converter by means of a Simulink model using the 'Perturb & Observe' technique. First, related papers and technological reports were extensively studied and analyzed. Accordingly, the system is tested under various loading conditions. Power flow analysis is done using the Newton-Raphson method in the Matlab environment. Finally, the system is subjected to a single line-to-ground fault and a three-phase short circuit. The results are simulated under the grid-connected operating mode.
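The 'Perturb & Observe' rule mentioned above can be sketched in a few lines. This is the textbook hill-climbing algorithm with an illustrative parabolic P-V curve and step size, not the authors' Simulink implementation.

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One 'Perturb & Observe' MPPT step: if the last perturbation increased
    power, keep moving the operating voltage in the same direction; otherwise
    reverse. Returns the next voltage reference (step size is illustrative)."""
    direction = 1.0 if v >= v_prev else -1.0
    if p < p_prev:          # last move decreased power -> reverse direction
        direction = -direction
    return v + direction * step

# Walking up a simple illustrative P-V curve with its maximum at 30 V:
power = lambda v: -(v - 30.0) ** 2 + 900.0
v_prev, v = 20.0, 20.5
for _ in range(40):
    v, v_prev = perturb_and_observe(v, power(v), v_prev, power(v_prev)), v
print(round(v, 1))   # settles into a small oscillation around 30 V
```

The characteristic residual oscillation around the maximum power point is why fixed-step P&O is often refined with a variable step size in practice.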

Keywords: grid connected PV Array, Newton-Raphson Method, power flow analysis, three phase fault

Procedia PDF Downloads 553
2075 Mosque as a Sustainable Model in Iranian Traditional Urban Development: The Case Study of Vakil Mosque in Shiraz

Authors: Amir Hossein Ashari, Sedighe Erfan Manesh

Abstract:

When investigating Iranian traditional and historical urban development, such as that seen in Shiraz, our attention is drawn to mosques as focal points. The Vakil Mosque in Shiraz is completely consistent, coordinated and integrated with the bazaar, square and school; it is a significant example of traditional urban development. The position of the mosque at the most important urban junction near the bazaar, such that it is considered part of the bazaar structure, is a factor that has given it social, political, and economic roles in addition to its original religious role. These are among the characteristics of sustainable development. The mosque has had an important effect on the formation of the city because it is connected to the main gates. In terms of access, the mosque has different main and peripheral access paths from different parts of the city. The courtyard of the mosque was located next to the main elements of the city, so it was considered an urban open space, which made it a more active and more dynamic place. This study is carried out via library and field research with the purpose of finding strategies for taking advantage of useful features of the mosque in traditional urban development. These features include its role as a gathering center for people and the city in sustainable urban development. The mosque can be used as a center for enhancing social interactions and creating a sense of association that leads to a sustainable social space. This can act as a model leading us to cities that are sustainable in social and economic terms.

Keywords: mosque, traditional urban development, sustainable social space, Vakil Mosque, Shiraz

Procedia PDF Downloads 405
2074 Higher Education Internationalisation: The Case of Indonesia

Authors: Agustinus Bandur, Dyah Budiastuti

Abstract:

With the rapid development of information and communication technology (ICT) in the globalisation era, higher education (HE) internationalisation has become a worldwide phenomenon. However, even though various studies have been widely published in the existing literature, these studies were set in developed countries. Accordingly, the major purpose of this article is to explore the current trends in higher education internationalisation programs, with particular reference to the benefits and challenges confronted by participating staff and students. For these purposes, an ethnographic qualitative study was conducted, using NVivo 11 software for the coding, analysis, and visualization of non-numeric data gathered from interviews, videos, web contents, social media, and relevant documents. A purposive sampling technique was applied, covering a total of ten highly ranked accredited government and private universities in Indonesia. On the basis of thematic and cross-case analyses, this study indicates that while Australia has led other countries in dual-degree programs, partner universities from Japan and Korea have the most frequent collaboration on student exchange programs. Meanwhile, most visiting scholars who have collaborated with the universities in this study came from the US, the UK, Japan, Australia, the Netherlands, and China. Other European countries such as Germany, France, and Norway have also conducted joint research with the Indonesian universities involved in this study. This study suggests that further support through government policy and grants is required to overcome the challenges, as well as strategic leadership and management roles to achieve a high impact of such programs on higher education quality.

Keywords: higher education, internationalisation, challenges, Indonesia

Procedia PDF Downloads 270
2073 Eco-Innovation: Perspectives from a Theoretical Approach and Policy Analysis

Authors: Natasha Hazarika, Xiaoling Zhang

Abstract:

Eco-innovations, unlike regular innovations, are not self-enforcing and are associated with the double externality problem. Therefore, it is emphasized that eco-innovations need government intervention in the form of supportive policies as a priority. Of late, factors like consumer demand, technological advancement, and the competitiveness of firms have been considered equally important. However, the interaction among these driving forces has not been fully traced out. Also, eco-innovation theory is still at a nascent stage and, as it is traditionally studied under neo-classical economic theory, does not fully capture the dynamics of eco-innovation. Therefore, to begin with, insights for this research have been derived from the merits of neo-classical economics, the evolutionary approach, and the resource-based view, which reveal issues pertaining to technological system lock-ins and firm-based capacities that usually remain undefined in the neo-classical approach. This is followed by determining how policies (at the national level) and their instruments are designed to motivate firms to eco-innovate, by analyzing the innovation 'friendliness' of the policy style and the policy instruments as per the indicators provided in the innovation literature, by means of a document review (content analysis) of the relevant policies introduced by the Chinese government. The significance of the theoretical analysis lies in its ability to show why certain practices become dominant irrespective of gains or losses, and that of the policy analysis lies in its ability to demonstrate the credibility of the government's sticks, carrots, and sermons for eco-innovation.

Keywords: firm competency, eco-innovation, policy, theory

Procedia PDF Downloads 181