Search results for: downward force
1728 Measurement of Innovation Performance
Authors: M. Chobotová, Ž. Rylková
Abstract:
An era of rapid change, driven by globalization, tougher competition, shifting market structures and economic downturn, forces companies to reconsider their competitive advantages. Handled well, these changes can themselves become a source of competitive advantage and improve a company's position in the market. European Union policy focuses on fast-growing innovative companies that respond quickly to market demands and consequently increase their competitiveness. To meet these objectives, companies need the right conditions and the support of their state.
Keywords: innovation, performance, measurement metrics, indices
Procedia PDF Downloads 375
1727 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase
Authors: Antoine Lauvray, Fabien Poulhaon, Pierre Michaud, Pierre Joyot, Emmanuel Duc
Abstract:
Additive Friction Stir Manufacturing (AFSM) is a new industrial process that follows the emergence of friction-based processes. AFSM is a solid-state additive process that uses the energy produced by friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters such as the axial force, the rotation speed and the friction coefficient. The feed material is a metallic rod that flows through a hole in the tool. Unlike Friction Stir Welding (FSW), for which abundant literature addresses many aspects from process implementation to characterization and modeling, few research works have focused on AFSM, so there is still a lack of understanding of the physical phenomena taking place during the process. This work aims at a better understanding and implementation of the AFSM process through numerical simulation and experimental validation performed on a prototype effector. Such an approach is a promising way to study the influence of the process parameters and, ultimately, to identify a relevant process window. The deposition of material through the AFSM process takes place in several phases: in chronological order, the docking phase, the dwell time phase, the deposition phase, and the removal phase. The present work focuses on the dwell time phase, during which pure friction raises the temperature of the system composed of the tool, the filler material, and the substrate. Analytic modeling of friction-based heat generation considers the rotational speed and the contact pressure as the main parameters. The friction coefficient is also considered influential; it is assumed to vary because the system self-lubricates as the temperature rises and the roughness of the contacting materials smooths over time.
Through numerical modeling followed by experimental validation, this study investigates the influence of the various input parameters on the dwell time phase. Rotation speed, temperature, spindle torque, and axial force are the main parameters monitored during the experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool, as well as fluctuations of input parameters such as the axial force and rotational speed, strongly influence the temperature reached and/or the time required to reach the target temperature. The main outcome is the prediction of a process window, a key result for more efficient process implementation.
Keywords: numerical model, additive manufacturing, friction, process
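The analytic friction-heating model this abstract refers to can be sketched numerically. Assuming a flat circular tool face under uniform contact pressure (a common simplification in friction-based process modeling, not a detail confirmed by the study), integrating the local frictional power μ·p·ω·r over the disc gives the closed form Q = (2/3)·μ·F·ω·R. All numerical values below are illustrative, not taken from the paper:

```python
import math

def friction_heat_power(axial_force, tool_radius, rpm, mu):
    """Total frictional heating power for a flat circular contact under
    uniform pressure: Q = (2/3) * mu * F * omega * R, with omega in rad/s.
    (Integrating mu * p * omega * r over the disc gives this closed form.)"""
    omega = 2.0 * math.pi * rpm / 60.0  # rev/min -> rad/s
    return (2.0 / 3.0) * mu * axial_force * omega * tool_radius

# Illustrative inputs: 2 kN axial force, 10 mm tool radius, 800 rpm, mu = 0.3.
q = friction_heat_power(axial_force=2000.0, tool_radius=0.010, rpm=800.0, mu=0.3)
print(f"dwell-phase heating power = {q:.0f} W")
```

In this model the heating power scales linearly with both rotational speed and axial force, consistent with the abstract's observation that fluctuations in these inputs strongly affect the temperature rise; a temperature-dependent μ would make Q vary over the dwell phase.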
Procedia PDF Downloads 1471726 Comparison of Transforming Growth Factor-β1 Levels in the Human Gingival Sulcus during Canine Retraction Using Elastic Chain and Closed Coil Spring
Authors: Sri Suparwitri
Abstract:
When an orthodontic force is applied to a tooth, an inflammatory response is initiated that leads to a bone remodeling process, and this process accommodates tooth movement. One cytokine that plays a prominent role in bone remodeling is transforming growth factor-beta 1 (TGF-β1). The purpose of this study was to identify and compare changes in TGF-β1 in human gingival crevicular fluid during canine retraction using an elastic chain and a closed coil spring. Ten patients (mean age 20.7 ± 2.9 years) participated. The patients were entering the space-closure phase of fixed orthodontic treatment. An upper canine of each patient was retracted using an elastic chain, and the contralateral canine was retracted using a closed coil spring. Gingival crevicular fluid samples were collected from the canine teeth before and 7 days after the force was applied. TGF-β1 was quantified by enzyme-linked immunosorbent assay (ELISA). In both groups, TGF-β1 concentrations at 7 days were significantly higher than before canine retraction. In the between-group comparison, the difference before retraction was insignificant, whereas at 7 days significantly higher values were measured in the closed coil spring group than in the elastic chain group. The results suggest that TGF-β1 is associated with the bone remodeling that occurs during canine distalization. The closed coil spring gave higher TGF-β1 concentrations, implying more bone remodeling, and may therefore be considered the treatment of choice.
Keywords: closed coil spring, elastic chain, gingival crevicular fluid, TGF-β1
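Because each appliance was tested on contralateral canines of the same patient, the between-group comparison is naturally a paired design. A minimal sketch of the paired t statistic follows; the TGF-β1 concentrations are entirely hypothetical placeholders, since the study's raw values are not reported here:

```python
import math
from statistics import mean, stdev

def paired_t(sample_a, sample_b):
    """Paired t statistic, t = mean(d) / (stdev(d) / sqrt(n)),
    where d are the within-subject differences a_i - b_i."""
    diffs = [a - b for a, b in zip(sample_a, sample_b)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# Hypothetical day-7 TGF-beta1 concentrations (pg/ml) for 10 patients:
# closed coil spring canine vs. contralateral elastic chain canine.
coil  = [48.2, 51.0, 45.3, 52.8, 49.1, 47.5, 50.2, 46.9, 53.4, 48.8]
chain = [41.5, 44.2, 40.1, 45.0, 42.3, 39.8, 43.6, 41.0, 46.2, 42.7]
t_stat = paired_t(coil, chain)
print(f"paired t = {t_stat:.2f}, n = {len(coil)}")
```

The statistic would then be compared against the critical t for n-1 = 9 degrees of freedom (2.262 at α = 0.05, two-tailed) to declare significance.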
Procedia PDF Downloads 170
1725 Simple and Effective Method of Lubrication and Wear Protection
Authors: Buddha Ratna Shrestha, Jimmy Faivre, Xavier Banquy
Abstract:
Lubricating and protecting surfaces against wear using liquid lubricants is a great technological challenge. Until now, wear protection was usually imparted by surface coatings involving complex chemical modifications of the surface, while lubrication was provided by a separate lubricating fluid. Here we investigate a simple, effective and broadly applicable solution to this problem using the surface force apparatus (SFA). The SFA is a powerful technique with sub-angstrom resolution in distance and 10 nN/m resolution in interaction force during friction experiments; it gives direct insight into the interaction forces, the materials and the friction at the interface, and the exact contact area is always known. By precisely controlling the molecular interactions between anti-wear macromolecules and bottle-brush lubricating molecules in the solution state, we obtained a fluid with excellent lubricating and wear-protection capabilities. This synergistic behavior relies on the subtle interaction forces between the fluid components, which allow the confined macromolecules to sustain high loads under shear without rupture. The lowest friction coefficient obtained is 5×10⁻³, and the maximum pressure the fluid can sustain is 2.5 MPa, which is close to the physiological pressure.
Our results provide rational guides for designing such fluids for virtually any type of surface. Most importantly, this is a simple, effective and applicable method of lubrication and wear protection, whereas until now wear protection usually required surface coatings involving complex chemical modifications of the surface. So far, the frictional data obtained while sliding flat mica surfaces have been compared, confirming that one particular mixture surpasses all other combinations. We would therefore like to confirm that the lubrication and anti-wear protection remain the same by performing friction experiments on synthetic cartilage.
Keywords: bottle brush polymer, hyaluronic acid, lubrication, tribology
Procedia PDF Downloads 263
1724 The Role of Semi Open Spaces on Exploitation of Wind-Driven Ventilation
Authors: Paria Saadatjoo
Abstract:
Given that HVAC systems are major producers of carbon dioxide, developing ways to reduce dependence on these systems and to exploit natural resources is essential for achieving environmentally friendly buildings. A major part of a building's potential for using natural energy resources depends on its physical features, so architectural decisions at the first step of the design process can influence the building's energy efficiency significantly. The implementation of semi-open spaces in solid apartment blocks, inspired by the courtyard concept of ancient buildings as a passive cooling strategy, is currently enjoying great popularity. However, analyzing these features and their effect on wind behavior at the initial design steps is a difficult task for architects. The main objective of this research was to investigate the influence of the semi-open to closed space ratio on airflow patterns in and around midrise buildings and to identify the best ratio for harnessing natural ventilation. The approach of this paper was semi-experimental, and the research methodology was descriptive statistics. In the first step, by varying the terrace area, six models with different open to closed space ratios were created. These forms were then transferred to CFD software to calculate the primary indicators of natural ventilation potential, such as the wind force coefficient, airflow rate, and age-of-air distribution. The investigations indicated that modifying the terrace area, in other words the open to closed space ratio, influenced the wind force coefficient, the airflow rate, and the age-of-air distribution.
Keywords: natural ventilation, wind, midrise, open space, energy
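How the pressure coefficients extracted from CFD translate into ventilation can be illustrated with the standard orifice equation, Q = Cd·A·√(2ΔP/ρ), where the pressure difference across an opening follows from the wind speed and the pressure-coefficient difference. This is a generic textbook relation, not the study's own model, and all values below are illustrative assumptions:

```python
import math

RHO_AIR = 1.2  # air density, kg/m^3

def wind_driven_flow(cd, opening_area, wind_speed, delta_cp):
    """Orifice-equation estimate of wind-driven airflow through an opening:
    dP = 0.5 * rho * U^2 * dCp, and Q = Cd * A * sqrt(2 * dP / rho),
    which simplifies to Q = Cd * A * U * sqrt(dCp)."""
    delta_p = 0.5 * RHO_AIR * wind_speed ** 2 * delta_cp
    return cd * opening_area * math.sqrt(2.0 * delta_p / RHO_AIR)

# Illustrative assumptions: discharge coefficient 0.61, a 1.5 m^2 terrace
# opening, 4 m/s wind, and a pressure-coefficient difference of 0.6.
q = wind_driven_flow(cd=0.61, opening_area=1.5, wind_speed=4.0, delta_cp=0.6)
print(f"airflow rate = {q:.2f} m^3/s")
```

Because Q scales linearly with wind speed and with √ΔCp, geometry-dependent changes in the pressure coefficients (such as those caused by varying the terrace area) map directly onto achievable airflow rates.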
Procedia PDF Downloads 170
1723 Kinematical Analysis of Tai Chi Chuan Players during Gait and Balance Test and Implication in Rehabilitation Exercise
Authors: Bijad Alqahtani, Graham Arnold, Weijie Wang
Abstract:
Background—Tai Chi Chuan (TCC) is a type of traditional Chinese martial art and is considered beneficial to physical fitness. Advanced motion analysis techniques are routinely used in clinical assessment; however, so far, little research has been done on the biomechanical assessment of TCC players in terms of gait and balance using motion analysis. Objectives—The aim of this study was to investigate whether TCC improves lower limb condition and balance ability using state-of-the-art motion analysis technologies, i.e. a motion capture system, electromyography, and force platforms. Methods—Twenty TCC practitioners (9 male, 11 female), aged 42-77 years and weighing 56.2-119 kg, and eighteen age-matched Non-TCC participants (7 male, 11 female), aged 43-78 years and weighing 50-110 kg, were recruited as a control group. Their gait and balance were recorded using Vicon Nexus® to obtain the gait parameters and the kinematic parameters of the hip, knee, and ankle joints in three planes for both limbs. Participants stood on force platforms to perform a single-leg balance test and were then asked to walk along a 10 m walkway at their comfortable speed. Participants performed 5 trials of single-leg balance on the dominant side, 3 trials of the four square step balance test, and 10 trials of walking. From the recorded trials, three good ones were analyzed using the Vicon Plug-in-Gait model to obtain gait parameters, e.g. walking speed, cadence and stride length, and joint parameters, e.g. joint angles, forces and moments. Result—Comparing the temporal-spatial variables of the TCC subjects with those of the Non-TCC subjects, a significant difference (p < 0.05) was found between the groups.
Moreover, TCC participants showed significant differences in ankle, hip, and knee joint kinematics in the sagittal, coronal, and transverse planes, such as the ankle angle (19.90±19.54 deg for TCC vs. 15.34±6.50 deg for Non-TCC) and the knee angle (14.96±6.40 deg for TCC vs. 17.63±5.79 deg for Non-TCC) in the transverse plane. The results also showed a significant difference between groups in the single-leg balance test: TCC participants maintained single-leg stance for longer (20.85±10.53 s) than the Non-TCC group (13.39±8.78 s). In contrast, there was no significant difference between groups in the four square step balance test. Conclusion—Our results show significant differences between Tai Chi Chuan and Non-Tai Chi Chuan participants in various aspects of the gait analysis and balance tests. Based on biomechanical parameters such as joint kinematics, gait parameters and the single-leg stance balance test, these findings suggest that Tai Chi Chuan could improve lower limb condition and reduce the risk of falls in the elderly with ageing.
Keywords: gait analysis, kinematics, single leg stance, Tai Chi Chuan
Procedia PDF Downloads 127
1722 Reducing Ambulance Offload Delay: A Quality Improvement Project at Princess Royal University Hospital
Authors: Fergus Wade, Jasmine Makker, Matthew Jankinson, Aminah Qamar, Gemma Morrelli, Shayan Shah
Abstract:
Background: Ambulance offload delays (AODs) affect patient outcomes. At baseline, the average AOD at Princess Royal University Hospital (PRUH) was 41 minutes, in breach of the 15-minute target. Aims: By February 2023, we aimed to reduce the average AOD to 30 minutes, the percentage of AODs over 30 minutes (PA30) to 25%, and the percentage over 60 minutes (PA60) to 10%. Methods: Following a root-cause analysis, we implemented two Plan, Do, Study, Act (PDSA) cycles. PDSA-1, 'Drop-and-run': ambulances waiting >15 minutes for a handover left the patients in the Emergency Department (ED) and returned to the community. PDSA-2: booking in patients before the handover, allowing direct updates to online records and eliminating the need for handwritten notes. The outcome measures (AOD, PA30, and PA60) and process measures (total ambulances and patients in the ED) were recorded for 16 weeks. Results: In PDSA-1, all parameters increased slightly despite unvarying ED crowding. In PDSA-2, two shifts in the data were seen: initially, a sharp increase in the outcome measures consistent with increased ED crowding, followed by a downward shift when crowding returned to baseline (p<0.01). Within this interval, the AOD fell to 29.9 minutes, and PA30 and PA60 were 31.2% and 9.2%, respectively. Discussion/conclusion: PDSA-1 did not result in any significant changes; lack of compliance was a key cause. The initial upward shift in PDSA-2 is likely associated with NHS staff strikes. However, during the second interval, the AOD and the PA60 met our targets of 30 minutes and 10%, respectively, improving patient flow in the ED. This was sustained without further input and, if maintained, saves two paramedic shifts every three days.
Keywords: ambulance offload, district general hospital, handover, quality improvement
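The outcome measures used in this project (average AOD, PA30 and PA60) are straightforward to compute from a list of offload times. A minimal sketch with hypothetical data, not the PRUH figures:

```python
def aod_metrics(offload_minutes):
    """Average ambulance offload delay plus the percentage of delays
    breaching the 30-minute (PA30) and 60-minute (PA60) thresholds."""
    n = len(offload_minutes)
    avg = sum(offload_minutes) / n
    pa30 = 100.0 * sum(1 for m in offload_minutes if m > 30) / n
    pa60 = 100.0 * sum(1 for m in offload_minutes if m > 60) / n
    return avg, pa30, pa60

# Hypothetical week of offload times in minutes (not the PRUH data):
week = [12, 25, 33, 41, 18, 72, 29, 55, 64, 21, 35, 15]
avg, pa30, pa60 = aod_metrics(week)
print(f"AOD = {avg:.1f} min, PA30 = {pa30:.1f}%, PA60 = {pa60:.1f}%")
```

Tracking these three numbers week by week on a run chart is what makes the shifts described in the results (the strike-related rise, then the fall below target) visible.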
Procedia PDF Downloads 105
1721 The Quest for Institutional Independence to Advance Police Pluralism in Ethiopia
Authors: Demelash Kassaye Debalkie
Abstract:
The primary objective of this study is to report the attributes that significantly impede the Ethiopian police's ability to provide quality services to the people. Policing in Ethiopia started in the medieval period, but modern policing was introduced in place of vigilantism in the early 1940s. The progress made since the police were modernized is, however, contested when viewed from the standpoint of officers' development and 21st-century technologies. The police in Ethiopia struggle to free themselves from any form of political interference by the government and to remain loyal to impartiality, equity, and justice in enforcing the law. Moreover, the institutional competence of the police in Ethiopia as a legitimate enforcement agency, derived from the constitution, is currently being eroded by a political landscape that encourages ethnic-based politics. According to studies, the impact of ethnic politics has been a significant challenge for the police in controlling conflicts between ethnic groups. The study used qualitative techniques, and data were gathered from purposely selected key informants. The findings indicate that governments in past decades were skeptical about establishing a constitutional police force in the country. This has certainly been one of the challenges of pluralizing the police: building police-community relations based on trust. The study conducted to uncover these obstructions finally reports that the government's commitment to forming a non-partisan, functionally decentralized, and operationally demilitarized police force is minimal and appalling: governments mainly formulate the missions of the police in accordance with their own interests and political will to remain in power.
The study therefore urges policymakers, law enforcement officials, and the government in power to revise the policies and working procedures already in operation, in order to strengthen the police in Ethiopia on the basis of public participation and engagement.
Keywords: community, constitution, Ethiopia, law enforcement
Procedia PDF Downloads 86
1720 On-Ice Force-Velocity Modeling Technical Considerations
Authors: Dan Geneau, Mary Claire Geneau, Seth Lenetsky, Ming -Chang Tsai, Marc Klimstra
Abstract:
Introduction—Horizontal force-velocity profiling (HFVP) involves modeling an athlete's linear sprint kinematics to estimate valuable maximum force and velocity metrics. This approach to performance modeling has been used in field-based team sports and has recently been introduced to ice hockey as a forward-skating performance assessment. While preliminary data have been collected on ice, the distance constraints of the on-ice test restrict the athletes' ability to reach their maximal velocity, which limits the model's ability to estimate athlete performance effectively. This is especially true of more elite athletes. This report explores whether athletes on ice are able to reach a velocity plateau similar to what has been seen in overground trials. Fourteen male Major Junior ice-hockey players (BW = 83.87 ± 7.30 kg, height = 188 ± 3.4 cm, age = 18 ± 1.2 years) were recruited. For the on-ice sprints, participants completed a standardized warm-up consisting of skating and dynamic stretching and a progression of three skating efforts from 50% to 95%. Following the warm-up, participants completed three on-ice 45 m sprints, with three minutes of rest between trials. For the overground sprints, participants completed a similar dynamic warm-up followed by three 40 m overground sprint trials. For each trial (on-ice and overground), a radar gun (Stalker ATS II, Texas, USA) aimed at the participant's waist was used to collect instantaneous velocity. Sprint velocities were modeled with a custom Python (version 3.2) script using a mono-exponential function, similar to previous work. To determine whether the on-ice trials achieved a maximum velocity (plateau), the minimum acceleration values of the modeled data at the end of the sprint were compared between on-ice and overground trials using a paired t-test. Significant differences (P<0.001) between overground and on-ice minimum accelerations were observed.
On-ice trials consistently showed higher final acceleration values, indicating that a maintained maximum velocity (plateau) had not been reached. Based on these preliminary findings, it is suggested that reliable HFVP metrics cannot yet be collected from all ice-hockey populations using current methods. Elite male populations were not able to achieve a velocity plateau similar to that seen in overground trials, indicating the absence of a maximum velocity measure. Because current velocity and acceleration modeling techniques depend on a velocity plateau, these results indicate the potential for error in on-ice HFVP measures. These findings therefore suggest that a greater on-ice sprint distance, or a velocity modeling technique that does not require maximal velocity for a complete profile, may be needed.
Keywords: ice-hockey, sprint, skating, power
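The mono-exponential model referred to above, v(t) = vmax·(1 - e^(-t/τ)), makes the plateau problem easy to demonstrate: the modeled acceleration at the end of the sprint, a(t) = (vmax/τ)·e^(-t/τ), stays well above zero when the sprint distance is short relative to vmax and τ. The parameter values below are illustrative, not the study's estimates:

```python
import math

def velocity(t, vmax, tau):
    """Mono-exponential sprint model: v(t) = vmax * (1 - exp(-t / tau))."""
    return vmax * (1.0 - math.exp(-t / tau))

def acceleration(t, vmax, tau):
    """Analytical derivative of the model: a(t) = (vmax / tau) * exp(-t / tau)."""
    return (vmax / tau) * math.exp(-t / tau)

def time_to_cover(distance, vmax, tau, dt=0.001):
    """Numerically integrate v(t) until the given distance is covered."""
    t, x = 0.0, 0.0
    while x < distance:
        t += dt
        x += velocity(t, vmax, tau) * dt
    return t

# Hypothetical parameters: a 40 m overground sprint vs. a 45 m on-ice sprint
# by a faster skater whose velocity keeps building over a longer time constant.
t_ground = time_to_cover(40.0, vmax=9.0, tau=1.3)
t_ice = time_to_cover(45.0, vmax=11.0, tau=2.4)
a_ground = acceleration(t_ground, 9.0, 1.3)
a_ice = acceleration(t_ice, 11.0, 2.4)
print(f"final acceleration: overground {a_ground:.3f} m/s^2, on-ice {a_ice:.3f} m/s^2")
```

With these assumed parameters the on-ice sprint ends with several times the residual acceleration of the overground sprint, which is exactly the "no plateau" signature the study reports.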
Procedia PDF Downloads 100
1719 Effect of Velocity-Slip in Nanoscale Electroosmotic Flows: Molecular and Continuum Transport Perspectives
Authors: Alper T. Celebi, Ali Beskok
Abstract:
Electroosmotic (EO) slip flows in nanochannels are investigated using non-equilibrium molecular dynamics (MD) simulations, and the results are compared with the analytical solution of the Poisson-Boltzmann and Stokes (PB-S) equations with a slip contribution. The ultimate objective of this study is to show that the well-known continuum flow model can accurately predict EO velocity profiles in nanochannels using the slip lengths and apparent viscosities obtained from force-driven flow simulations performed at various liquid-wall interaction strengths. EO flow of an aqueous NaCl solution in silicon nanochannels is simulated under realistic electrochemical conditions within the validity region of the Poisson-Boltzmann theory. A physical surface charge density is determined for the nanochannels based on the dissociation of silanol functional groups on the channel surfaces at known salt concentration, temperature and local pH. First, we present density profiles and ion distributions from equilibrium MD simulations, ensuring that the desired thermodynamic state and ionic conditions are satisfied. Next, force-driven nanochannel flow simulations are performed to predict the apparent viscosity of the ionic solution between charged surfaces and the slip lengths. Parabolic velocity profiles obtained from the force-driven flow simulations are fitted to a second-order polynomial, and the viscosity and slip lengths are quantified by comparing the coefficients of the fitted equation with the continuum flow model. The presence of a charged surface increases the viscosity of the ionic solution while decreasing the velocity-slip at the wall. Afterwards, EO flow simulations are carried out under a uniform electric field for different liquid-wall interaction strengths. The velocity profiles present finite slip near the walls, followed by a conventional viscous flow profile in the electrical double layer that reaches a bulk flow region in the center of the channel.
The EO flow is enhanced by increased slip at the walls, which depends on the wall-liquid interaction strength and the surface charge. MD velocity profiles are compared with the predictions of the analytical solution of the slip-modified PB-S equation, where the slip length and apparent viscosity values are obtained from force-driven flow simulations in charged silicon nanochannels. Our MD results show good agreement with the analytical solutions at various slip conditions, verifying the validity of the PB-S equation in nanochannels as small as 3.5 nm. In addition, the continuum model normalizes the slip length with the Debye length instead of the channel height, which implies that the enhancement of EO flow is independent of the channel height. Further MD simulations performed at different channel heights confirm that the flow enhancement due to slip is independent of the channel height. This is important because slip-enhanced EO flow is then observable even in microchannel experiments, using a hydrophobic channel with large slip and a high-conductivity solution with a small Debye length. The present study provides an advanced understanding of EO flows in nanochannels. Correct characterization of nanoscale EO slip flow is crucial to establish the extent of validity of well-known continuum models, which is required for various applications spanning from ion separation to drug delivery and bio-fluidic analysis.
Keywords: electroosmotic flow, molecular dynamics, slip length, velocity-slip
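The slip enhancement described above can be sketched with the slip-modified Helmholtz-Smoluchowski estimate of the bulk EO velocity, u = -(εζ/μ)E·(1 + b/λ_D), in which the enhancement factor depends only on the ratio of slip length b to Debye length λ_D, not on the channel height. This is a simplified thin-double-layer estimate, not the full PB-S solution used in the study, and the parameter values below are illustrative assumptions:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def eo_velocity(zeta, e_field, viscosity, rel_perm, slip_length, debye_length):
    """Slip-modified Helmholtz-Smoluchowski estimate of the bulk
    electroosmotic velocity: u = -(eps * zeta / mu) * E * (1 + b / lambda_D)."""
    u_hs = -(rel_perm * EPS0 * zeta / viscosity) * e_field
    return u_hs * (1.0 + slip_length / debye_length)

# Illustrative assumptions: zeta = -25 mV, E = 5e7 V/m (an MD-scale field),
# water viscosity 1e-3 Pa.s, relative permittivity 78.5, b = lambda_D = 1 nm.
u_noslip = eo_velocity(-0.025, 5e7, 1e-3, 78.5, 0.0, 1e-9)
u_slip = eo_velocity(-0.025, 5e7, 1e-3, 78.5, 1e-9, 1e-9)
print(f"no-slip u = {u_noslip:.3f} m/s, slip-enhanced u = {u_slip:.3f} m/s")
```

With b = λ_D the bulk velocity doubles; because only the ratio b/λ_D enters, the same enhancement holds in a 3.5 nm nanochannel or a micron-scale channel, matching the height-independence noted in the abstract.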
Procedia PDF Downloads 157
1718 On the Question of Ideology: Criticism of the Enlightenment Approach and Theory of Ideology as Objective Force in Gramsci and Althusser
Authors: Edoardo Schinco
Abstract:
Studying the Marxist intellectual tradition, it is possible to identify numerous cases of philosophical regression, in which the important achievements of detailed studies have been replaced by naïve ideas and earlier misunderstandings: one of the most important examples of this tendency concerns the question of ideology. According to a common Enlightenment approach, ideology is essentially not a reality, i.e., not a factor capable of having an effect on reality itself; in other words, ideology is a mere error without specific historical meaning, due only to the ignorance or inability of subjects to understand the truth. From this point of view, the consequent and immediate practice against every form of ideology is rational dialogue, reasoning based on common sense, in order to dispel the obscurity of ignorance through the light of pure reason. The limits of this philosophical orientation are, however, both theoretical and practical: on the one hand, the Enlightenment criticism of ideology is not a historicist thought, since it cannot grasp the inner connection that ties a historical context and its peculiar ideology together; on the other hand, when the Enlightenment approach fails to release people from their illusions (e.g., when the ideology persists despite the explanation of its illusoriness), it usually becomes a racist or elitist thought. Unlike this first conception of ideology, Gramsci attempts to recover Marx's original thought and to valorize its dialectical methodology with respect to the reality of ideology. As Marx suggests, ideology, in the negative sense, is surely an error, a misleading knowledge, which aims to defend the current state of things and to conceal social, political or moral contradictions; but that is precisely why the ideological error is not casual: every ideology is mediately rooted in a particular material context, from which it takes its reason for being.
Gramsci avoids, however, any mechanistic interpretation of Marx and, for this reason, underlines the dialectical relation that exists between the material base and the ideological superstructure; in this way, a specific ideology is not only a passive product of the base but also an active factor that reacts on the base itself and modifies it. There is, therefore, a considerable revaluation of ideology's role in the maintenance of the status quo, and the consequent thematization both of ideology as an objective force, active in history, and of ideology as the cultural hegemony of the ruling class over subordinate groups. Among the Marxists, the French philosopher Louis Althusser also contributes to this crucial question; as a follower of Gramsci's thought, he develops the idea of ideology as an objective force through the notions of the Repressive State Apparatus (RSA) and the Ideological State Apparatuses (ISAs). In addition, his philosophy is characterized by the presence of structuralist elements, which must be studied, since they deeply change the theoretical foundation of his Marxist thought.
Keywords: Althusser, enlightenment, Gramsci, ideology
Procedia PDF Downloads 199
1717 Optimization Principles of Eddy Current Separator for Mixtures with Different Particle Sizes
Authors: Cao Bin, Yuan Yi, Wang Qiang, Amor Abdelkader, Ali Reza Kamali, Diogo Montalvão
Abstract:
The study of the electrodynamic behavior of non-ferrous particles in time-varying magnetic fields is a promising area of research with wide applications, including the recycling of non-ferrous metals, mechanical transmission, and space debris. The key technology for recovering non-ferrous metals is eddy current separation (ECS), which utilizes the eddy current force and torque to separate non-ferrous metals. ECS has several advantages, such as low energy consumption, large processing capacity, and no secondary pollution, making it suitable for processing various mixtures like electronic scrap, auto shredder residue, aluminum scrap, and incineration bottom ash. Improving the separation efficiency of mixtures with different particle sizes in ECS can create significant social and economic benefits. Our previous study investigated the influence of particle size on separation efficiency by combining numerical simulations and separation experiments. Pearson correlation analysis found a strong correlation between the eddy current force in the simulations and the repulsion distance in the experiments, which confirmed the effectiveness of our simulation model. The interaction effects between particle size and material type, rotational speed, and magnetic pole arrangement were examined, offering valuable insights for the design and optimization of eddy current separators. The mechanism underlying the effect of particle size on separation efficiency was uncovered by analyzing the eddy current and the field gradient. The results showed that the magnitude and distribution heterogeneity of the eddy current and the magnetic field gradient increase with particle size in eddy current separation. Based on this, we further found that increasing the curvature of the magnetic field lines within particles also increases the eddy current force, providing an optimized method for improving the separation efficiency of fine particles.
By combining the results of these studies, a more systematic and comprehensive set of optimization guidelines can be proposed for mixtures with different particle size ranges. The separation efficiency of fine particles can be improved by increasing the rotational speed, the curvature of the magnetic field lines, and the electrical conductivity/density of the materials, as well as by utilizing the eddy current torque. When designing an ECS, the particle size range of the target mixture should be investigated in advance, and suitable parameters for separating the mixture can then be chosen accordingly. In summary, these results can guide the design and optimization of ECS and also expand its application areas.
Keywords: eddy current separation, particle size, numerical simulation, metal recovery
Procedia PDF Downloads 89
1716 Effect of High-Intensity Core Muscle Exercises Training on Sport Performance in Dancers
Authors: Che Hsiu Chen, Su Yun Chen, Hon Wen Cheng
Abstract:
Traditional core stability, core endurance, and balance exercises are performed on a stable surface with isometric muscle actions, low loads, and multiple repetitions, which may not improve swimming and running economy. However, the effects of high-intensity core muscle exercise training on jump height, sprint, and aerobic fitness remain unclear. The purpose of this study was to examine whether high-intensity core muscle exercise training could improve sport performance in dancers. Thirty healthy university dance students (28 women and 2 men; age 20.0 years, height 159.4 cm, body mass 52.7 kg) voluntarily participated in this study, and each participant performed five suspension exercises (hip abduction in plank alternative, hamstring curl, 45-degree row, lunge, and oblique crunch). Each exercise was performed for 30 seconds, with 30 seconds of rest between exercises, two times per week for eight weeks, and each exercise bout was lengthened by 10 seconds every week. We measured agility, explosive force, and anaerobic and cardiovascular fitness before and after the eight weeks of training. The results showed that the 8-week high-intensity core muscle training significantly increased T-test agility (7.78%), explosive force of acceleration (3.35%), vertical jump height (8.10%), jump power (6.95%), lower extremity anaerobic ability (7.10%), and the oxygen uptake efficiency slope (4.15%). It can therefore be concluded that eight weeks of high-intensity core muscle exercise training improves not only agility, sprint ability and vertical jump ability but also anaerobic and cardiovascular fitness measures.
Keywords: balance, jump height, sprint, maximal oxygen uptake
Procedia PDF Downloads 407
1715 Corrosion of Concrete Reinforcing Steel Bars Tested and Compared Between Various Protection Methods
Authors: P. van Tonder, U. Bagdadi, B. M. D. Lario, Z. Masina, T. R. Motshwari
Abstract:
This paper analyses how concrete reinforcing steel bars corrode and how corrosion can be minimised through various protection methods, such as metal-based paint, alloying, cathodic protection and electroplating. Samples of carbon steel bars were protected using these four methods. Tests performed on the samples included durability, electrical resistivity and bond strength. Durability results indicated relatively low corrosion rates for alloying, cathodic protection, electroplating and metal-based paint. The resistivity results indicate that all samples experienced a downward trend, despite erratic fluctuations in the data, indicating an inverse relationship between electrical resistivity and corrosion rate. The results indicated lower bond strengths when the reinforced concrete was cured in seawater rather than in normal water. They also showed that higher design compressive strengths lead to higher bond strengths, which can be used to compensate for the loss of bond strength due to corrosion in a real-world application. In terms of implications, all protection methods have the potential to resist corrosion effectively in real-world applications, especially the alloying, cathodic protection and electroplating methods. The metal-based paint underperformed by comparison, most likely because paint in general can fade and chip away, exposing the steel samples to corrosion. For alloying, stainless steel is the suggested material of choice, and Y-bars are highly recommended since smooth bars have a much lower bond strength. Cathodic protection performed best of all in protecting the samples from corrosion; however, its real-world application would require significant evaluation of the feasibility of such a method.Keywords: protection methods, corrosion, concrete, reinforcing steel bars
Procedia PDF Downloads 173
1714 Correction Factors for Soil-Structure Interaction Predicted by Simplified Models: Axisymmetric 3D Model versus Fully 3D Model
Authors: Fu Jia
Abstract:
The effects of soil-structure interaction (SSI) are often studied using axisymmetric three-dimensional (3D) models to avoid the high computational cost of the more realistic, fully 3D models, which require 2-3 orders of magnitude more computer time and storage. This paper analyzes the error and presents correction factors for the system frequency, system damping, and peak amplitude of structural response computed by axisymmetric models embedded in a uniform or layered half-space. The results are compared with those for fully 3D rectangular foundations of different aspect ratios. Correction factors are presented for a range of the model parameters, such as fixed-base frequency, structure mass, height and length-to-width ratio, foundation embedment, and soil-layer stiffness and thickness. It is shown that the errors are larger for stiffer, taller and heavier structures, deeper foundations and deeper soil layers. For example, for a stiff structure like the Millikan Library (NS response; length-to-width ratio 1), the error is 6.5% in system frequency, 49% in system damping and 180% in peak amplitude. Analysis of a case study shows that the NEHRP-2015 provisions for reduction of base shear force due to SSI effects may be unsafe for some structures and need revision. The presented correction factor diagrams can be used in practical design and other applications.Keywords: 3D soil-structure interaction, correction factors for axisymmetric models, length-to-width ratio, NEHRP-2015 provisions for reduction of base shear force, rectangular embedded foundations, SSI system frequency, SSI system damping
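The finding that errors grow for stiffer and heavier structures is consistent with the classical single-degree-of-freedom picture of soil-structure interaction. Below is a minimal sketch of that relation (a Veletsos-type period-lengthening formula, not the authors' axisymmetric or fully 3D models); all stiffness, mass, and height values are illustrative assumptions:

```python
import math

# Hedged sketch: flexible-base (system) frequency of a SDOF structure
# from its fixed-base frequency and the foundation's translational and
# rocking stiffnesses. Every numerical value is an illustrative assumption.

def system_frequency(f_fixed, m, h, k_x, k_theta):
    """Flexible-base frequency of a structure of mass m at height h,
    on a foundation with horizontal stiffness k_x and rocking
    stiffness k_theta."""
    k = m * (2 * math.pi * f_fixed) ** 2          # fixed-base stiffness
    ratio = 1 + k / k_x + k * h ** 2 / k_theta    # (f_fixed / f_sys)^2
    return f_fixed / math.sqrt(ratio)

# Two structures on the same soil: the stiffer one shifts more.
f_soft = system_frequency(1.0, 5.0e6, 20.0, 5.0e9, 2.0e12)
f_stiff = system_frequency(3.0, 5.0e6, 20.0, 5.0e9, 2.0e12)
shift_soft = 1 - f_soft / 1.0      # relative frequency reduction
shift_stiff = 1 - f_stiff / 3.0
```

With these (assumed) foundation stiffnesses, the 3 Hz structure loses a much larger fraction of its fixed-base frequency than the 1 Hz one, mirroring the trend the correction factors quantify.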
Procedia PDF Downloads 266
1713 Numerical Analysis of Laminar Reflux Condensation from Gas-Vapour Mixtures in Vertical Parallel Plate Channels
Authors: Foad Hassaninejadafarahani, Scott Ormiston
Abstract:
Reflux condensation occurs in vertical channels and tubes when there is an upward core flow of vapour (or a gas-vapour mixture) and a downward flow of the liquid film. Understanding this condensation configuration is crucial in the design of reflux condensers and distillation columns, and in loss-of-coolant safety analyses in nuclear power plant steam generators. The unique feature of this flow is that the upward flow of the vapour-gas mixture (or pure vapour) retards the liquid flow via shear at the liquid-mixture interface. The present model solves the full, elliptic governing equations in both the film and the gas-vapour core flow. The computational mesh is non-orthogonal and adapts dynamically to the phase interface, thus producing a sharp and accurate interface. Shear forces and heat and mass transfer at the interface are accounted for fundamentally. This model is a significant step beyond current capabilities because it removes the limitations of previous reflux condensation models, which inherently cannot account for the detailed local balances of shear, mass, and heat transfer at the interface. Discretisation is based on a finite volume method and a co-located variable storage scheme. An in-house computer code was developed to implement the numerical solution scheme. Detailed results are presented for laminar reflux condensation from steam-air mixtures flowing in vertical parallel plate channels. The results include velocity and pressure profiles, as well as axial variations of film thickness, Nusselt number and interface gas mass fraction.Keywords: Reflux, Condensation, CFD-Two Phase, Nusselt number
Procedia PDF Downloads 363
1712 Experimental Research on Neck Thinning Dynamics of Droplets in Cross Junction Microchannels
Authors: Yilin Ma, Zhaomiao Liu, Xiang Wang, Yan Pang
Abstract:
Microscale droplets play an increasingly important role in various applications, including medical diagnostics, material synthesis, chemical engineering, and cell research, owing to their high surface-to-volume ratio and tiny scale, which can significantly improve reaction rates, enhance heat transfer efficiency, enable high-throughput parallel studies, and reduce reagent usage. As a mature technique for manipulating small amounts of liquid, droplet microfluidics achieves precise control of droplet parameters such as size, uniformity, and structure, and has thus been widely adopted in engineering and scientific research across multiple fields. Necking processes of the droplet in cross junction microchannels are experimentally and theoretically investigated, and the dynamic mechanisms of neck thinning in two different regimes are revealed. According to the evolutions of the minimum neck width and the thinning rate, the necking process is further divided into stages, and the main driving force during each stage is identified. Effects of the flow rates and the cross-sectional aspect ratio on the necking process, as well as on the neck profile at different stages, are provided in detail. The distinct features of the two regimes in the squeezing stage are well captured by theoretical estimates of the effective flow rate, and the variations of the actual flow rates in different channels are reasonably reflected by the channel width ratio. In the collapsing stage, a quantitative relation between the minimum neck width and the remaining time is constructed to identify the physical mechanism.Keywords: cross junction, neck thinning, force analysis, inertial mechanism
Procedia PDF Downloads 109
1711 Analysis of Surface Hardness, Surface Roughness and near Surface Microstructure of AISI 4140 Steel Worked with Turn-Assisted Deep Cold Rolling Process
Authors: P. R. Prabhu, S. M. Kulkarni, S. S. Sharma, K. Jagannath, Achutha Kini U.
Abstract:
In the present study, response surface methodology has been used to optimize the turn-assisted deep cold rolling process of AISI 4140 steel. A regression model is developed to predict surface hardness and surface roughness using response surface methodology and a central composite design. In the development of the predictive model, deep cold rolling force, ball diameter, initial roughness of the workpiece, and number of tool passes are considered as model variables. The rolling force and the ball diameter are the significant factors for surface hardness, while the ball diameter and the number of tool passes are found to be significant for surface roughness. The predicted surface hardness and surface roughness values, and the subsequent verification experiments under the optimal operating conditions, confirmed the validity of the predicted model. The absolute average error between the experimental and predicted values at the optimal combination of parameter settings is 0.16% for surface hardness and 1.58% for surface roughness. Using the optimal processing parameters, the hardness is improved from 225 to 306 HV, an increase in the near-surface hardness of about 36%, and the surface roughness is improved from 4.84 µm to 0.252 µm, a decrease of about 95%. The depth of compression is found to be more than 300 µm from the microstructure analysis, in agreement with the results obtained from the microhardness measurements. A Taylor Hobson Talysurf tester, a micro Vickers hardness tester, optical microscopy and an X-ray diffractometer are used to characterize the modified surface layer.Keywords: hardness, response surface methodology, microstructure, central composite design, deep cold rolling, surface roughness
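The response-surface step described above can be sketched in a few lines: fit a second-order polynomial in coded factors by least squares and locate the stationary point of the fitted surface. This is a minimal sketch, not the authors' model; the two factors (rolling force, ball diameter), the coefficients, and the data below are synthetic assumptions:

```python
import numpy as np

# Hypothetical sketch: fit y = b0 + b1*f + b2*d + b3*f^2 + b4*d^2 + b5*f*d
# to hardness responses over two coded factors f (force) and d (ball
# diameter). All design points, coefficients, and noise are synthetic.

rng = np.random.default_rng(0)

def design_matrix(X):
    f, d = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), f, d, f**2, d**2, f * d])

true_b = np.array([225.0, 30.0, 12.0, -8.0, -3.0, 2.0])   # synthetic "truth"
X = rng.uniform(-1.0, 1.0, size=(30, 2))   # coded factor levels in [-1, 1]
y = design_matrix(X) @ true_b + rng.normal(0.0, 0.5, 30)  # noisy responses

b, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)  # fitted coefficients

# Stationary point of the fitted quadratic surface (candidate optimum):
A = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
x_stat = np.linalg.solve(A, -b[1:3])
```

In a real central composite design the rows of `X` would be the factorial, axial, and center points rather than random levels, but the fitting and stationary-point algebra are the same.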
Procedia PDF Downloads 420
1710 Dynamic Determination of Spare Engine Requirements for Air Fighters Integrating Feedback of Operational Information
Authors: Tae Bo Jeon
Abstract:
The Korean Air Force is undertaking a major project to replace hundreds of aging fighters such as the F-4, F-5, and KF-16. The task is to develop and produce domestic fighters, each equipped with 2 complete-type engines. A large number of engines, however, will be purchased from a foreign engine maker. In addition to the fighters themselves, securing the proper number of spare engines plays a significant role in maintaining combat readiness and effectively managing the national defense budget, given their high cost. In this paper, we present a model that dynamically updates spare engine requirements. Currently, the military administration purchases all fighters, engines, and spare engines at the acquisition stage and has no additional procurement process during the life cycle of 30-40 years. Assuming that a procurement procedure during the operational stage is established, our model starts from an initial estimate of spare engine requirements based on limited information. The model then performs military missions and repair/maintenance work when necessary. During operation, detailed field information - aircraft repair and test, engine repair, planned maintenance, administration time, the transportation pipeline between base, field, and depot, etc. - should be considered for actual engine requirements. At the end of each year, the performance measure is recorded, and the model proceeds to the next year if the measure exceeds the set threshold. Otherwise, additional engines are bought and added to the current system. We repeat the process over the life cycle and compare the results. The proposed model is seen to generate far better results by adding spare engines when appropriate, thus avoiding possible undesirable situations. Our model may well be applied to future air force operations.Keywords: DMSMS, operational availability, METRIC, PRS
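The yearly feedback loop described above can be sketched as a simple simulation: draw monthly engine removals, track repair turnaround, record an availability measure each year, and procure an extra spare whenever the measure falls below a threshold. This is a hedged illustration only; the removal rate, repair time, fleet size, and threshold are all assumptions, not the paper's data or its METRIC-style pipeline model:

```python
import random

# Hedged sketch of the yearly update loop: engines removed for repair
# draw on a spare pool; aircraft-months without an engine degrade the
# availability measure; a poor year triggers procurement of one spare.

def simulate(years=30, fighters=100, spares=10, removal_rate=0.3,
             repair_months=4, target=0.95, seed=1):
    random.seed(seed)
    extra = 0              # spares bought through operational feedback
    history = []
    for _ in range(years):
        in_repair = []     # months remaining for each engine in the shop
        waiting = 0        # aircraft-months spent waiting for an engine
        for _ in range(12):
            in_repair = [m - 1 for m in in_repair]
            in_repair = [m for m in in_repair if m > 0]
            # each of the 2 engines per fighter may be removed this month
            removals = sum(random.random() < removal_rate / 12
                           for _ in range(fighters * 2))
            in_repair += [repair_months] * removals
            # engines in repair beyond the spare pool ground an aircraft
            waiting += max(0, len(in_repair) - (spares + extra))
        availability = 1 - waiting / (fighters * 12)
        history.append(availability)
        if availability < target:   # feedback: buy one more spare engine
            extra += 1
    return extra, history

extra, history = simulate()
```

Under these assumed rates, the initial pool of 10 spares is too small, so the loop procures additional spares over the early years until availability stays above the target.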
Procedia PDF Downloads 171
1709 Vibration Control of a Horizontally Supported Rotor System by Using a Radial Active Magnetic Bearing
Authors: Vishnu A., Ashesh Saha
Abstract:
The operation of high-speed rotating machinery in industry is accompanied by rotor vibrations due to many factors. One of the primary instability mechanisms in a rotor system is the centrifugal force induced by the eccentricity of the center of mass away from the center of rotation. These unwanted vibrations may lead to catastrophic fatigue failure, so there is a need to control them. In this work, control of rotor vibrations using a 4-pole Radial Active Magnetic Bearing (RAMB) as an actuator is analysed. A continuous rotor system model is considered for the analysis. Several important factors, like the gyroscopic effect and the rotary inertia of the shaft and disc, are incorporated into this model. The large deflection of the shaft and the restriction of axial motion of the shaft at the bearings result in nonlinearities in the governing equation of the system. The rotor system is modeled in such a way that the system dynamics can be related to the geometric and material properties of the shaft and disc. The mathematical model of the rotor system is developed by incorporating the control forces generated by the RAMB. A simple PD controller is used for the attenuation of system vibrations. Analytical expressions for the amplitude and phase equations are derived using the Method of Multiple Scales (MMS). The analytical results are verified against numerical results obtained using a built-in ODE solver in MATLAB. The control force is found to be effective in attenuating the system vibrations. The multi-valued solutions leading to the jump phenomenon are also eliminated with a proper choice of control gains. Most interestingly, the shape of the backbone curves can also be altered for certain values of the control parameters.Keywords: rotor dynamics, continuous rotor system model, active magnetic bearing, PD controller, method of multiple scales, backbone curve
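The control idea can be illustrated with a much simpler surrogate than the paper's continuous-rotor model: a two-degree-of-freedom Jeffcott-type rotor with mass unbalance, where a collocated PD force stands in for the 4-pole RAMB actuator. This is a hedged numerical sketch (no gyroscopic terms, no MMS), and every parameter value is an illustrative assumption:

```python
import numpy as np

# Minimal sketch: planar rotor of mass m on a shaft of stiffness k with
# light damping c, driven at its critical speed by a rotating unbalance
# force, with and without a PD control force u = -(kp*q + kd*q_dot).

m, c, k = 10.0, 5.0, 4.0e4        # mass [kg], damping, shaft stiffness
e = 1.0e-4                        # unbalance eccentricity [m]
omega = np.sqrt(k / m)            # run the rotor at the critical speed
kp, kd = 8.0e4, 4.0e2             # assumed PD gains of the magnetic bearing

def rhs(t, s, control):
    x, vx, y, vy = s
    fx = m * e * omega**2 * np.cos(omega * t)   # rotating unbalance force
    fy = m * e * omega**2 * np.sin(omega * t)
    ux = -(kp * x + kd * vx) if control else 0.0
    uy = -(kp * y + kd * vy) if control else 0.0
    return np.array([vx, (fx + ux - c * vx - k * x) / m,
                     vy, (fy + uy - c * vy - k * y) / m])

def whirl_amplitude(control, T=2.0, dt=2.0e-4):
    s, t, amp = np.zeros(4), 0.0, 0.0
    n = int(T / dt)
    for i in range(n):
        # classical 4th-order Runge-Kutta step
        k1 = rhs(t, s, control)
        k2 = rhs(t + dt / 2, s + dt / 2 * k1, control)
        k3 = rhs(t + dt / 2, s + dt / 2 * k2, control)
        k4 = rhs(t + dt, s + dt * k3, control)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        if i > n // 2:                # track the late-time whirl radius
            amp = max(amp, np.hypot(s[0], s[2]))
    return amp

amp_off = whirl_amplitude(False)
amp_on = whirl_amplitude(True)
```

The PD force both detunes the system away from resonance (through `kp`) and adds damping (through `kd`), so the controlled whirl radius is orders of magnitude below the uncontrolled one.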
Procedia PDF Downloads 79
1708 Fluid Structure Interaction Study between Ahead and Angled Impact of AGM 88 Missile Entering Relatively High Viscous Fluid for K-Omega Turbulence Model
Authors: Abu Afree Andalib, Rafiur Rahman, Md Mezbah Uddin
Abstract:
The main objective of this work is to analyse the various parameters of the AGM-88 missile using the FSI module in Ansys. Computational fluid dynamics is used to study the fluid flow pattern and fluidic phenomena such as drag, pressure force, energy dissipation and shockwave distribution in water. Using the finite element analysis module of Ansys, structural parameters such as stress and stress density, localization point, deflection, and force propagation are determined. A separate analysis of the structural parameters is done in Abaqus. A state-of-the-art coupling module is used for the FSI analysis. A fine mesh is used in every case, within the limits of available computing power, for better simulation results. The results for the above-mentioned parameters are analyzed and compared for the two phases using graphical representations, and the results from Ansys and Abaqus are also shown. Computational fluid dynamics and finite element analyses, and subsequently the fluid-structure interaction (FSI) technique, are considered: the finite volume method is used for modelling the fluid flow and the finite element method for the structural analysis. Feasible boundary conditions are also utilized in the research. A significant change in the interaction and interference pattern was found at impact; both theoretically and according to the simulation, the angled condition was found to produce the higher impact.Keywords: FSI (Fluid Surface Interaction), impact, missile, high viscous fluid, CFD (Computational Fluid Dynamics), FEM (Finite Element Analysis), FVM (Finite Volume Method), fluid flow, fluid pattern, structural analysis, AGM-88, Ansys, Abaqus, meshing, k-omega, turbulence model
Procedia PDF Downloads 467
1707 'Performance-Based' Seismic Methodology and Its Application in Seismic Design of Reinforced Concrete Structures
Authors: Jelena R. Pejović, Nina N. Serdar
Abstract:
This paper presents an analysis of the "Performance-Based" seismic design method, aiming to overcome the perceived disadvantages and limitations of the existing force-based seismic design approach in engineering practice. Bearing in mind the specificity of the earthquake as a load, and the fact that the seismic resistance of a structure depends solely on its behaviour in the nonlinear range, the traditional seismic design approach based on force and linear analysis is not adequate. The "Performance-Based" seismic design method is based on nonlinear analysis and can be used in everyday engineering practice. This paper presents the application of this method to an eight-storey reinforced concrete building with a combined structural system (a reinforced concrete frame system in one direction and a reinforced concrete ductile wall system in the other). The nonlinear time-history analysis is performed on a spatial model of the structure using the program Perform 3D, with the structure exposed to forty real earthquake records. For the considered building, a large number of results were obtained. It was concluded that, using this method, structural behaviour under earthquake can be evaluated with a high degree of reliability. Significant differences were obtained in the response of the structure to the various earthquake records. The analysis also showed that the frame structural system did not perform well under earthquake records on soils such as sand and gravel, while the ductile wall system behaved satisfactorily on different types of soil.Keywords: ductile wall, frame system, nonlinear time-history analysis, performance-based methodology, RC building
Procedia PDF Downloads 366
1706 Advanced Biosensor Characterization of Phage-Mediated Lysis in Real-Time and under Native Conditions
Authors: Radka Obořilová, Hana Šimečková, Matěj Pastucha, Jan Přibyl, Petr Skládal, Ivana Mašlaňová, Zdeněk Farka
Abstract:
Due to the spread of antimicrobial resistance, alternative approaches to combat superinfections are being sought, both in the field of lysing agents and in methods for studying bacterial lysis. A suitable alternative to antibiotics is phage therapy and enzybiotics, for which it is also necessary to study the mechanism of action. Biosensor-based techniques allow rapid detection of pathogens in real time, verification of sensitivity to commonly used antimicrobial agents, and selection of suitable lysis agents. The detection of lysis takes place on the surface of the biosensor with immobilized bacteria, which has the potential to be used to study biofilms. An example of such a biosensor is surface plasmon resonance (SPR), which records the kinetics of bacterial lysis based on a change in the resonance angle. The bacteria are immobilized on the surface of the SPR chip, and the action of the phage is monitored as mass loss after the typical lytic-cycle delay. Atomic force microscopy (AFM) is a technique for imaging samples on a surface. In contrast to electron microscopy, it has the advantage of real-time imaging under the native conditions of the nutrient medium. In our case, Staphylococcus aureus was lysed using the enzyme lysostaphin and phage P68 from the family Podoviridae at 37 °C. In addition to visualization, AFM was used to study changes in mechanical properties during lysis, which resulted in a reduction of Young's modulus (E) after disruption of the bacterial wall; changes in E reflect the stiffness of the bacterium. These advanced methods provide deeper insight into bacterial lysis and can help in the fight against bacterial diseases.Keywords: biosensors, atomic force microscopy, surface plasmon resonance, bacterial lysis, staphylococcus aureus, phage P68
Procedia PDF Downloads 134
1705 The Evolution of Strike and Intelligence Functions in Special Operations Forces
Authors: John Hardy
Abstract:
The expansion of special operations forces (SOF) in the twenty-first century is often discussed in terms of the size and disposition of SOF units. Research regarding the number of SOF personnel, the equipment SOF units procure, and the variety of roles and missions that SOF fulfill in contemporary conflicts paints a fascinating picture of changing expectations for the use of force. A strong indicator of the changing nature of SOF in contemporary conflicts is the fusion of strike and intelligence functions in the SOF of many countries. What were once more distinct roles on the kind of battlefield generally associated with conventional warfare have become intermingled in the era of persistent conflict that SOF face. This study presents a historical analysis of the co-evolution of the intelligence and direct action functions carried out by SOF in counterterrorism, counterinsurgency, and training and mentoring missions between 2004 and 2016. The study focuses primarily on innovation in the US military and the diffusion of key concepts first to US allies and then more broadly. The findings show that there were three key phases of evolution throughout the period of study, each coinciding with a process of innovation and doctrinal adaptation. The first phase was characterized by the fusion of intelligence at the tactical and operational levels. The second phase was characterized by the industrial counterterrorism campaigns waged by US SOF against irregular enemies in Iraq and Afghanistan. The third phase was characterized by increasing forward collection of actionable intelligence by SOF force elements in the course of direct action raids. The evolution of strike and intelligence functions in SOF operations between 2004 and 2016 was significantly influenced by reciprocity: intelligence fusion led to more effective targeting, which in turn increased intelligence collection.
Strike and intelligence functions were then enhanced by greater emphasis on intelligence exploitation during operations, which further increased the effectiveness of both strike and intelligence operations.Keywords: counterinsurgency, counterterrorism, intelligence, irregular warfare, military operations, special operations forces
Procedia PDF Downloads 268
1704 All-Optical Gamma-Rays and Positrons Source by Ultra-Intense Laser Irradiating an Al Cone
Authors: T. P. Yu, J. J. Liu, X. L. Zhu, Y. Yin, W. Q. Wang, J. M. Ouyang, F. Q. Shao
Abstract:
A strong electromagnetic field with E > 10¹⁵ V/m can be supplied by intense lasers such as ELI and HiPER in the near future. Exposed to such a strong laser field, laser-matter interaction enters the near-quantum-electrodynamics (QED) regime, and highly non-linear physics may occur. Recently, the multi-photon Breit-Wheeler (BW) process has attracted increasing attention because it is capable of producing abundant positrons and significantly enhances the positron generation efficiency. Here, we propose an all-optical scheme for the generation of bright gamma rays and dense positrons by irradiating a 10²² W/cm² laser pulse onto an Al cone filled with near-critical-density plasma. Two-dimensional (2D) QED particle-in-cell (PIC) simulations show that the radiation damping force becomes large enough to compensate for the Lorentz force in the cone, causing radiation-reaction trapping of a dense electron bunch in the laser field. The trapped electrons oscillate in the laser electric field and emit high-energy gamma photons in two ways: (1) nonlinear Compton scattering due to the oscillation of electrons in the laser fields, and (2) Compton backscattering resulting from the bunch colliding with the laser reflected by the cone tip. The multi-photon Breit-Wheeler process is thus initiated, and abundant electron-positron pairs are generated with a positron density of ~10²⁷ m⁻³. The scheme is finally demonstrated by full 3D PIC simulations, which indicate a positron flux of up to 10⁹. This compact gamma-ray and positron source may have promising applications in the future.Keywords: BW process, electron-positron pairs, gamma rays emission, ultra-intense laser
Procedia PDF Downloads 260
1703 The Simulation of Superfine Animal Fibre Fractionation: The Strength Variation of Fibre
Authors: Sepehr Moradi
Abstract:
This study investigates the contribution of individual Australian Superfine Merino Wool (ASFW) and Inner Mongolia Cashmere (IMC) fibre strength behaviour to the breaking force variation (CVBF) and minimum fibre diameter (CVₘFD) induced by actual single fibre lengths and by the combination of length and diameter groups. Mid-side samples were selected for the ASFW (n = 919) and IMC (n = 691), since they are assumed to represent the average of the whole fleece. The average (LₘFD) varied for ASFW and IMC by 36.6 % and 33.3 % from the shortest to the longest actual single fibre length, and by -21.2 % and -21.7 % between the longest-coarsest and shortest-finest groups, respectively. The tensile properties of single animal fibres were characterised using a Single Fibre Analyser (SIFAN 4). After normalising for diversity in fibre diameter at the position of breakage, the parameters that explain the strength behaviour within actual fibre lengths and the combined length-diameter groups were the Intrinsic Fibre Strength (IFS) (MPa), Min IFS (MPa), Max IFS (MPa) and Breaking force (BF) (cN). The average strength of single fibres varied extensively within actual length groups and within combined length-diameter groups. IFS ranged for ASFW and IMC from 419 to 355 MPa (-15.2 % range) and 353 to 319 MPa (-9.6 % range), and BF from 2.2 to 3.6 cN (63.6 % range) and 3.2 to 5.3 cN (65.6 % range) from the shortest to the longest groups, respectively. Single fibre properties showed no differences within actual length groups or within combined length-diameter groups, nor was there a strong interaction with single fibre strength (P > 0.05) between the remaining and removed length-diameter groups. Longer-coarser fibre fractionation had a significant effect on BF and IFS, and all of the length groups showed considerable variance in single fibre strength, which is accounted for by diversity in the diameter variation along the fibre.
There are many concepts for improving the stress-strain properties of animal fibres as a means of raising single fibre strength through simultaneous changes in fibre length and diameter. Fibre fractionation over a given length, applied directly for single fibre strength or using the variation traits of fibre diameter, is an important process for increasing the strength of the single fibre.Keywords: single animal fibre fractionation, actual length groups, strength variation, length-diameter groups, diameter variation along fibre
Procedia PDF Downloads 203
1702 Dogmatic Analysis of Legal Risks of Using Artificial Intelligence: The European Union and Polish Perspective
Authors: Marianna Iaroslavska
Abstract:
ChatGPT is becoming commonplace. However, few people think about the legal risks of using a Large Language Model in their daily work. The main dilemmas concern the following areas: who owns the copyright to what somebody creates through ChatGPT; what OpenAI can do with the prompts you enter; whether you can accidentally infringe another creator's rights through ChatGPT; and what happens to the data somebody enters into the chat. This paper will present these and other legal risks of using large language models at work, using dogmatic methods and case studies. The paper will present a legal analysis of AI risks against the background of European Union law and Polish law. This analysis will answer questions about how to protect data, how to make sure you do not violate copyright, and what is at stake under the AI Act, which recently came into force in the EU. If your work is related to the EU area and you use AI in your work, this paper will be a real goldmine for you. The copyright law in force in Poland does not protect your rights to a work that is created with the help of AI. So if you start selling such a work, you may face two main problems. First, someone may steal your work, and you will not be entitled to any protection, because a work created with AI does not have any legal protection. Second, the AI may have created the work by infringing on another person's copyright, so that person will be able to claim damages from you. In addition, the EU's current AI Act imposes a number of additional obligations related to the use of large language models. The AI Act divides artificial intelligence into four risk levels and imposes different requirements depending on the level of risk. The EU regulation is aimed primarily at those developing and marketing artificial intelligence systems in the EU market. In addition to the above obstacles, personal data protection comes into play, which is very strictly regulated in the EU.
If you disclose personal data by entering it into ChatGPT, you will be liable for the violation. When using AI within the EU, or in cooperation with entities located in the EU, you have to take many risks into account. This paper will highlight such risks and explain how they can be avoided.Keywords: EU, AI act, copyright, polish law, LLM
Procedia PDF Downloads 20
1701 Trends of Seasonal and Annual Rainfall in the South-Central Climatic Zone of Bangladesh Using Mann-Kendall Trend Test
Authors: M. T. Islam, S. H. Shakif, R. Hasan, S. H. Kobi
Abstract:
Investigation of rainfall trends is crucial considering climate change, food security, and the economy of a particular region. This research aims to study seasonal and annual precipitation trends, and their abrupt changes over time, in the south-central climatic zone of Bangladesh using monthly time series data over 50 years (1970-2019). A trend-free pre-whitening method has been employed to adjust for autocorrelation in the rainfall data. Trends in rainfall and their intensity have been assessed using the non-parametric Mann-Kendall test and the Theil-Sen estimator. Significant changes and fluctuation points in the data series have been detected using the sequential Mann-Kendall test at the 95% confidence limit. The study findings show that most of the rainfall stations in the study area have a decreasing precipitation pattern throughout all seasons. The maximum decline in rainfall intensity has been found for the Tangail station (-8.24 mm/year) during the monsoon. The Madaripur and Chandpur stations have shown slight positive trends in post-monsoon rainfall. In terms of annual precipitation, a negative trend has been identified at every station, with a maximum decrease of 14.48 mm/year at Chandpur. However, all the trends are statistically non-significant at the 95% confidence level, and their monotonic association with time ranges from very weak to weak. From the sequential Mann-Kendall test, the change points of the annual and seasonal downward precipitation trends occur mostly after the 1990s for the Dhaka and Barishal stations. For Chandpur, the fluctuation points arrive after the mid-1970s in most cases.Keywords: trend analysis, Mann-Kendall test, Theil-Sen estimator, sequential Mann-Kendall test, rainfall trend
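The trend machinery named above can be sketched in a few lines: the Mann-Kendall S statistic with its large-sample normal approximation, and the Theil-Sen slope as the median of pairwise slopes. This is a minimal sketch on synthetic data; it omits the tie correction and the trend-free pre-whitening step used in the study:

```python
import itertools
import numpy as np

# Hedged sketch of the Mann-Kendall test (no-ties variance, continuity
# correction) and the Theil-Sen slope estimator, on a synthetic series.

def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum(np.sign(x[j] - x[i])
            for i, j in itertools.combinations(range(n), 2))
    var_s = n * (n - 1) * (2 * n + 5) / 18        # variance, no ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, z          # |z| > 1.96 -> significant at the 95% level

def theil_sen(x):
    x = np.asarray(x, dtype=float)
    slopes = [(x[j] - x[i]) / (j - i)
              for i, j in itertools.combinations(range(len(x)), 2)]
    return float(np.median(slopes))   # robust median of pairwise slopes

# Synthetic 50-year "annual rainfall" series with an imposed decline
# of 5 mm/year (illustrative values, not the study's data):
rng = np.random.default_rng(0)
rain = 2000.0 - 5.0 * np.arange(50) + rng.normal(0.0, 30.0, 50)

s, z = mann_kendall(rain)
slope = theil_sen(rain)
```

With this strong imposed trend the test comes out significant; the study's real series, by contrast, gave monotonic trends too weak to pass the 95% threshold.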
Procedia PDF Downloads 80
1700 Research on Health Emergency Management Based on the Bibliometrics
Authors: Meng-Na Dai, Bao-Fang Wen, Gao-Pei Zhu, Chen-Xi Zhang, Jing Sun, Chang-Hai Tang, Zhi-Qiang Feng, Wen-Qiang Yin
Abstract:
Based on the analysis of literature in the health emergency management in China with recent 10 years, this paper discusses the Chinese current research hotspots, development trends and shortcomings in this field, and provides references for scholars to conduct follow-up research. CNKI(China National Knowledge Infrastructure), Weipu, and Wanfang were the databases of this literature. The key words during the database search were health, emergency, and management with the time from 2009 to 2018. The duplicate, non-academic, and unrelated documents were excluded. 901 articles were included in the literature review database. The main indicators of abstraction were, the number of articles published every year, authors, institutions, periodicals, etc. There are some research findings through the analysis of the literature. Overall, the number of literature in the health emergency management in China has shown a fluctuating downward trend in recent 10 years. Specifically, there is a lack of close cooperation between authors, which has not constituted the core team among them yet. Meanwhile, in this field, the number of high-level periodicals and quality literature is scarce. In addition, there are a lot of research hotspots, such as emergency management system, mechanism research, capacity evaluation index system research, plans and capacity-building research, etc. In the future, we should increase the scientific research funding of the health emergency management, encourage collaborative innovation among authors in multi-disciplinary fields, and create high-quality and high-impact journals in this field. The states should encourage scholars in this field to carry out more academic cooperation and communication with the whole world and improve the research in breadth and depth. 
Generally speaking, research on health emergency management in China is still insufficient and needs to be improved. Keywords: health emergency management, research situation, bibliometrics, literature
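The screening workflow described above (database search, de-duplication, exclusion of out-of-window records, and year-by-year counting) can be sketched in a few lines. This is an illustrative sketch, not the authors' actual pipeline; the record fields (`title`, `year`) and the sample data are hypothetical.

```python
from collections import Counter

def screen_and_count(records, start=2009, end=2018):
    """Keep unique, in-window records and count publications per year."""
    seen_titles = set()
    kept = []
    for rec in records:
        key = rec["title"].strip().lower()      # crude duplicate detection
        if key in seen_titles:
            continue                             # drop duplicates
        if not (start <= rec["year"] <= end):
            continue                             # outside the study window
        seen_titles.add(key)
        kept.append(rec)
    per_year = Counter(rec["year"] for rec in kept)
    return kept, per_year

# Hypothetical sample: one duplicate and one out-of-window record
sample = [
    {"title": "Health Emergency Plans", "year": 2010},
    {"title": "health emergency plans ", "year": 2010},   # duplicate
    {"title": "Capacity Evaluation Index System", "year": 2015},
    {"title": "Early SARS Response", "year": 2004},        # out of window
]
kept, per_year = screen_and_count(sample)
print(len(kept), dict(per_year))  # → 2 {2010: 1, 2015: 1}
```

In a real bibliometric study the records would come from CNKI, Weipu, and Wanfang exports, and duplicate detection would typically also compare authors and DOIs rather than normalized titles alone.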
Procedia PDF Downloads 137
1699 Comparison of Allowable Stress Method and Time History Response Analysis for Seismic Design of Buildings
Authors: Sayuri Inoue, Naohiro Nakamura, Tsubasa Hamada
Abstract:
Seismic design methods for buildings fall into two types: static design and dynamic design. Static design applies an equivalent static force to represent the seismic load; it is a relatively simple method built on roughly 100 years of experience with earthquake motions, and it is currently used for most Japanese buildings. Dynamic design mainly refers to time history response analysis, a more demanding method in which assumed earthquake motions are input into a building model and the response is examined; it is currently used only for skyscrapers and certain special buildings. Under the present Japanese design standard, either static or dynamic design may be used for medium- and high-rise buildings. In practice, however, when medium- and high-rise buildings are designed by both methods, the relatively simple static method satisfies the criteria while the more demanding dynamic method often does not. This is because the dynamic design method was developed for super high-rise buildings, for which higher safety is required than for ordinary buildings and the criteria are therefore stricter. The authors consider applying the dynamic design method to ordinary buildings that have so far been designed by the static method, since dynamic design is reasonable for buildings that depart from conventional structural forms, such as architecturally driven designs. To this end, it is important to compare design results with the criteria of both methods set side by side. In this study, time history response analysis was performed on medium-rise buildings that had actually been designed by the allowable stress method.
A quantitative comparison between static design and dynamic design was conducted, and the characteristics of both design methods were examined. Keywords: buildings, seismic design, allowable stress design, time history response analysis, Japanese seismic code
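As a minimal illustration of what a time history response analysis computes, the sketch below integrates a linear single-degree-of-freedom oscillator under a ground-acceleration record using the average-acceleration Newmark method. The mass, stiffness, damping ratio, and step-shaped ground motion are hypothetical demonstration values; an actual building analysis uses multi-degree-of-freedom (often nonlinear) models and recorded or synthetic earthquake motions, as in the study above.

```python
import math

def newmark_sdof(m, k, zeta, ground_acc, dt):
    """Peak displacement of a linear SDOF oscillator under ground
    acceleration, via the average-acceleration Newmark scheme."""
    c = 2.0 * zeta * math.sqrt(k * m)            # viscous damping coefficient
    u, v = 0.0, 0.0                              # initial displacement, velocity
    a = (-m * ground_acc[0] - c * v - k * u) / m # initial acceleration
    k_eff = m + c * dt / 2.0 + k * dt * dt / 4.0
    peak = 0.0
    for ag_next in ground_acc[1:]:
        # Solve dynamic equilibrium at the next step for acceleration a1
        rhs = (-m * ag_next
               - c * (v + dt / 2.0 * a)
               - k * (u + dt * v + dt * dt / 4.0 * a))
        a1 = rhs / k_eff
        u = u + dt * v + dt * dt / 4.0 * (a + a1)  # update before velocity
        v = v + dt / 2.0 * (a + a1)
        a = a1
        peak = max(peak, abs(u))
    return peak

# Hypothetical 1-second-period oscillator, 5% damping, under a sudden (step)
# ground acceleration: the dynamic peak approaches twice the static deflection.
m, k, zeta, dt = 1.0, (2.0 * math.pi) ** 2, 0.05, 0.005
ag = [1.0] * 400                                 # 2 s of constant acceleration
peak = newmark_sdof(m, k, zeta, ag, dt)
print(peak * k / m)                              # dynamic amplification ≈ 1.85
```

The factor of nearly 2 relative to the static deflection is one reason static and dynamic design criteria can disagree: an equivalent static force does not by itself capture the transient amplification that a time history analysis exposes.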
Procedia PDF Downloads 155