Search results for: structural dynamic modification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8754

894 Optimized Integration Of Bidirectional Charging Capacities As Mobile Energy Storages

Authors: Luzie Krings, Sven Liebehentze, Maximilian Gehring, Uwe Rüppel

Abstract:

The integration of renewable energy into the energy grid is essential for decarbonization, and leveraging electrified vehicles (EVs) as mobile storage units offers a pathway to address grid challenges. The decentralized nature of EVs and the intermittency of renewable energy sources, such as photovoltaic (PV) and wind power, complicate grid stability. Vehicle-to-Grid (V2G) technology presents a promising solution, enabling EVs to support grid stability through services like redispatch, congestion mitigation, and enhanced renewable energy utilization. Freight transport, contributing 38% of transport emissions, holds significant potential as its aggregated energy storage capacity can stabilize the grid and optimize renewable energy integration. This study introduces a risk-averse optimization model for marketing EV flexibilities in Germany’s energy markets, with a strong focus on improving grid stability and maximizing renewable energy potential. Using a linear optimization framework, the model incorporates technical, regulatory, and operational constraints to simulate EV fleets as scalable energy storage solutions. The integration of proprietary PV and wind energy systems is also modeled to evaluate benefits. Benchmarks compare bidirectional charging with unidirectional charging under dynamic tariffs. The methodology employs the Python-based energypilot tool to optimize participation in Day-Ahead, Intraday, and Redispatch markets, accounting for trading conditions and temporal offsets. Results demonstrate that redispatch utilization substantially supports grid stability, while bidirectional charging increased renewable energy integration by 15% and economic benefits by 20%. Longer charging cycles offered greater financial returns compared to fragmented cycles, emphasizing the potential of fleets with extended idle periods for storing renewable energy. This research highlights the critical role of EVs in stabilizing the grid and utilizing renewable energy effectively by expanding storage capacity. The optimization framework addresses key challenges in energy trading, offering a transferable methodology for broader energy storage applications. This supports the transition to a sustainable energy system by improving environmental outcomes and economic incentives.
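To make the optimization concrete, the following is a minimal sketch of the kind of linear program described above: a single vehicle's charging and discharging powers over an idle window are chosen to minimize charging cost minus discharging revenue against Day-Ahead prices, subject to battery and departure constraints. It is not the energypilot tool itself; the price series, battery limits and efficiency are hypothetical, and market mechanisms such as redispatch calls are omitted.

import numpy as np
from scipy.optimize import linprog

# Hypothetical hourly Day-Ahead prices (EUR/kWh) over an 8-hour idle window.
prices = np.array([0.30, 0.25, 0.12, 0.08, 0.10, 0.22, 0.35, 0.40])
T = len(prices)
p_max, eta = 11.0, 0.95                       # charger limit (kW) and one-way efficiency (assumed)
soc0, soc_min, soc_max = 30.0, 10.0, 70.0     # state of charge start and bounds (kWh)
soc_departure = 50.0                          # energy required when the vehicle leaves

# Decision vector x = [charge_1..T, discharge_1..T] in kW over 1-hour steps.
cost = np.concatenate([prices, -eta * prices])          # minimize charging cost minus discharging revenue

# State of charge after each hour: soc0 + cumsum(eta*charge - discharge/eta)
L = np.tril(np.ones((T, T)))
A_soc = np.hstack([eta * L, -L / eta])
A_ub = np.vstack([A_soc, -A_soc, -A_soc[-1:]])          # soc <= soc_max, soc >= soc_min, final soc >= departure need
b_ub = np.concatenate([np.full(T, soc_max - soc0),
                       np.full(T, soc0 - soc_min),
                       [soc0 - soc_departure]])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, p_max)] * 2 * T, method="highs")
charge, discharge = res.x[:T], res.x[T:]
print("net schedule (kW, +charge / -discharge):", np.round(charge - discharge, 2))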

Keywords: Electric Vehicles, Energy Grid, Energy Storages, Redispatch

Procedia PDF Downloads 11
893 Molecular Dynamics Simulations on Richtmyer-Meshkov Instability of Li-H2 Interface at Ultra High-Speed Shock Loads

Authors: Weirong Wang, Shenghong Huang, Xisheng Luo, Zhenyu Li

Abstract:

Material mixing processes and related dynamic issues at extreme compression conditions have attracted growing attention in the last ten years because of their engineering appeal in inertial confinement fusion (ICF) and hypervelocity aircraft development. However, models and methods that can handle fully coupled turbulent material mixing and complex fluid evolution under conditions of the high energy density regime are still lacking. In terms of macroscopic hydrodynamics, three numerical methods, namely direct numerical simulation (DNS), large eddy simulation (LES) and Reynolds-averaged Navier–Stokes equations (RANS), have obtained relatively acceptable consensus under the conditions of the low energy density regime. However, under the conditions of the high energy density regime, they cannot be applied directly due to the occurrence of dissociation, ionization, dramatic changes in the equation of state, thermodynamic properties, etc., which may make the governing equations invalid in some coupled situations. In the micro/meso-scale regime, however, methods based on Molecular Dynamics (MD) as well as the Monte Carlo (MC) model have proved to be promising and effective ways to investigate such issues. In this study, both classical MD and first-principle-based electron force field MD (eFF-MD) methods are applied to investigate the Richtmyer-Meshkov Instability of metal lithium and gas hydrogen (Li-H2) interface mixing at different shock loading speeds ranging from 3 km/s to 30 km/s. It is found that: 1) The classical MD method based on predefined potential functions has some limits in application to extreme conditions, since it cannot simulate the ionization process and its potential functions are not suitable for all conditions, while the eFF-MD method can correctly simulate the ionization process due to its ‘ab initio’ feature; 2) Due to computational cost, the eFF-MD results are also influenced by simulation domain dimensions, boundary conditions, relaxation time choices, etc., in the computations, and a series of tests has been conducted to determine the optimized parameters; 3) Ionization induced by strong shock compression has important effects on the Li-H2 interface evolution of RMI, indicating a new micromechanism of RMI under conditions of the high energy density regime.
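As an illustration of the classical (predefined-potential) branch of MD that the study contrasts with eFF-MD, the following is a minimal velocity-Verlet integrator with a Lennard-Jones pair potential in reduced units. It is a generic sketch only: the particle setup, potential and parameters are illustrative and are not the Li-H2 potentials, system sizes or boundary conditions used in this work.

import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    # Pairwise Lennard-Jones forces (reduced units); O(N^2) loop, fine for a tiny demo.
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = np.dot(rij, rij)
            inv6 = (sigma * sigma / r2) ** 3
            # F_i = 24*eps*(2*inv6^2 - inv6) * rij / r^2 (repulsive at short range)
            f = 24.0 * eps * (2.0 * inv6 * inv6 - inv6) / r2 * rij
            forces[i] += f
            forces[j] -= f
    return forces

def velocity_verlet(pos, vel, mass, dt, steps):
    # Classical MD time stepping with a predefined pair potential.
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass
        pos += dt * vel
        f = lj_forces(pos)
        vel += 0.5 * dt * f / mass
    return pos, vel

# 27 particles on a simple cubic lattice, spacing 1.2 sigma, with small random velocities.
grid = np.arange(3, dtype=float)
pos = 1.2 * np.array([[x, y, z] for x in grid for y in grid for z in grid])
vel = np.random.default_rng(0).normal(0.0, 0.1, size=pos.shape)
pos, vel = velocity_verlet(pos, vel, mass=1.0, dt=1e-3, steps=2000)
print("kinetic energy per particle:", 0.5 * np.mean(np.sum(vel**2, axis=1)))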

Keywords: first-principle, ionization, molecular dynamics, material mixture, Richtmyer-Meshkov instability

Procedia PDF Downloads 225
892 Sound Selection for Gesture Sonification and Manipulation of Virtual Objects

Authors: Benjamin Bressolette, Sébastien Denjean, Vincent Roussarie, Mitsuko Aramaki, Sølvi Ystad, Richard Kronland-Martinet

Abstract:

New sensors and technologies – such as microphones, touchscreens or infrared sensors – are currently making their appearance in the automotive sector, introducing new kinds of Human-Machine Interfaces (HMIs). The interactions with such tools might be cognitively expensive, thus unsuitable for driving tasks. It could for instance be dangerous to use touchscreens with a visual feedback while driving, as it distracts the driver’s visual attention away from the road. Furthermore, new technologies in car cockpits modify the interactions of the users with the central system. In particular, touchscreens are preferred to arrays of buttons for space improvement and design purposes. However, the buttons’ tactile feedback is no longer available to the driver, which makes such interfaces more difficult to manipulate while driving. Gestures combined with an auditory feedback might therefore constitute an interesting alternative to interact with the HMI. Indeed, gestures can be performed without vision, which means that the driver’s visual attention can be totally dedicated to the driving task. The auditory feedback can inform the driver both about the task performed on the interface and about the performed gesture, which might constitute a possible solution to the lack of tactile information. As audition is a relatively unused sense in automotive contexts, gesture sonification can contribute to reducing the cognitive load thanks to the proposed multisensory exploitation. Our approach consists of using a virtual object (VO) to sonify the consequences of the gesture rather than the gesture itself. This approach is motivated by an ecological point of view: Gestures do not make sound, but their consequences do. In this experiment, the aim was to identify efficient sound strategies to transmit dynamic information about VOs to users through sound. The swipe gesture was chosen for this purpose, as it is commonly used in current and new interfaces. We chose two VO parameters to sonify, the hand-VO distance and the VO velocity. Two kinds of sound parameters can be chosen to sonify the VO behavior: Spectral or temporal parameters. Pitch and brightness were tested as spectral parameters, and amplitude modulation as a temporal parameter. Performances showed a positive effect of sound compared to a no-sound situation, revealing the usefulness of sounds to accomplish the task.
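A minimal sketch of the sonification mapping described above is given below: the VO velocity drives a spectral parameter (pitch) and the hand-VO distance drives a temporal parameter (amplitude-modulation rate), producing a short feedback tone. The mapping ranges and the output file name are illustrative assumptions, not the strategies or values evaluated in the experiment.

import numpy as np
from scipy.io import wavfile

def sonify(distance, velocity, duration=0.5, fs=44100):
    # Map virtual-object state to sound: velocity -> pitch, distance -> AM rate.
    t = np.linspace(0.0, duration, int(fs * duration), endpoint=False)
    f0 = np.clip(220 + 880 * velocity, 220, 1760)           # spectral strategy: pitch rises with VO velocity
    am_rate = np.clip(2 + 20 * (1.0 - distance), 2, 30)     # temporal strategy: faster AM as the hand gets closer
    carrier = np.sin(2 * np.pi * f0 * t)
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * am_rate * t))
    return (0.8 * carrier * envelope).astype(np.float32)

# Example: a fast swipe (normalized velocity 0.7) with the hand close to the virtual object.
wavfile.write("feedback.wav", 44100, sonify(distance=0.2, velocity=0.7))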

Keywords: auditory feedback, gesture sonification, sound perception, virtual object

Procedia PDF Downloads 302
891 A Study on Computational Fluid Dynamics (CFD)-Based Design Optimization Techniques Using Multi-Objective Evolutionary Algorithms (MOEA)

Authors: Ahmed E. Hodaib, Mohamed A. Hashem

Abstract:

In engineering applications, a design has to be as close to perfect as possible for a given case. The designer has to overcome many challenges in order to reach the optimal solution to a specific problem. This process is called optimization. Generally, there is always a function called the “objective function” that is required to be maximized or minimized by choosing input parameters called “degrees of freedom” within an allowed domain called the “search space” and computing the values of the objective function for these input values. It becomes more complex when we have more than one objective for our design. An example of a Multi-Objective Optimization Problem (MOP) is a structural design that aims to minimize weight and maximize strength. In such a case, the Pareto Optimal Frontier (POF) is used, which is a curve plotting the two objective functions for the best cases. At this point, a designer should make a decision to choose a point on the curve. Engineers use algorithms or iterative methods for optimization. In this paper, we discuss Evolutionary Algorithms (EA), which are widely used for multi-objective optimization problems due to their robustness, simplicity, and suitability to be coupled and parallelized. Evolutionary algorithms are developed to guarantee the convergence to an optimal solution. An EA uses mechanisms inspired by Darwinian evolution principles. Technically, they belong to the family of trial-and-error problem solvers and can be considered global optimization methods with a stochastic optimization character. The optimization is initialized by picking random solutions from the search space, and then the solution progresses towards the optimal point by using operators such as Selection, Combination, Cross-over and/or Mutation. These operators are applied to the old solutions, the “parents”, so that new sets of design variables called “children” appear. The process is repeated until the optimal solution to the problem is reached. Reliable and robust computational fluid dynamics solvers are nowadays commonly utilized in the design and analysis of various engineering systems, such as aircraft, turbomachinery, and automobiles. Coupling of Computational Fluid Dynamics “CFD” and Multi-Objective Evolutionary Algorithms “MOEA” has become substantial in aerospace engineering applications, such as aerodynamic shape optimization and advanced turbomachinery design.
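A minimal sketch of the evolutionary loop outlined above (random initialization, selection of non-dominated parents, crossover and mutation, and extraction of the Pareto Optimal Frontier) is shown below on the standard ZDT1 test problem. It is a bare-bones illustration, not a production MOEA such as NSGA-II and not coupled to a CFD solver; the population size, mutation scale and generation count are arbitrary choices.

import numpy as np

def zdt1(x):
    # Standard two-objective benchmark (both objectives minimized), x in [0, 1]^n.
    f1 = x[0]
    g = 1.0 + 9.0 * np.mean(x[1:])
    return np.array([f1, g * (1.0 - np.sqrt(f1 / g))])

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

def pareto_indices(objs):
    # Indices of non-dominated solutions (the current approximation of the POF).
    return [i for i, oi in enumerate(objs)
            if not any(dominates(oj, oi) for j, oj in enumerate(objs) if j != i)]

rng = np.random.default_rng(1)
pop = rng.random((60, 5))                                  # random "degrees of freedom" in the search space
for _ in range(100):
    objs = np.array([zdt1(ind) for ind in pop])
    parents = pop[pareto_indices(objs)]                    # selection: keep the non-dominated "parents"
    children = []
    while len(children) < len(pop):
        p1, p2 = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(5) < 0.5, p1, p2)      # uniform crossover
        child = np.clip(child + rng.normal(0.0, 0.05, 5), 0.0, 1.0)  # Gaussian mutation
        children.append(child)
    pop = np.array(children)

objs = np.array([zdt1(ind) for ind in pop])
front = objs[pareto_indices(objs)]
print("approximate Pareto Optimal Frontier:\n", np.round(front[np.argsort(front[:, 0])], 3))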

Keywords: mathematical optimization, multi-objective evolutionary algorithms "MOEA", computational fluid dynamics "CFD", aerodynamic shape optimization

Procedia PDF Downloads 257
890 Assessment of Influence of Short-Lasting Whole-Body Vibration on the Proprioception of Lower Limbs

Authors: Sebastian Wójtowicz, Anna Mosiołek, Anna Słupik, Zbigniew Wroński, Dariusz Białoszewski

Abstract:

Introduction: In whole-body vibration (WBV), high-frequency mechanical stimuli are generated by a vibration plate and transferred through bone, muscle and connective tissues to the whole body. Research has shown that the implementation of vibration plate training over a long period of time leads to improvement of neuromuscular facilitation, especially in the afferent neural pathways, which are responsible for the conduction of vibration and proprioceptive stimuli, muscle function, balance, and proprioception. The vibration stimulus is suggested to briefly inhibit the conduction of afferent signals from proprioceptors and may hinder the maintenance of body balance. The purpose of this study was to evaluate the effect of a single set of exercises combined with whole-body vibration on proprioception. Material and Methods: The study enrolled 60 people aged 19-24 years. These individuals were divided into a test group (group A) and a control group (group B). Both groups consisted of 30 persons and performed the same set of exercises on a vibration plate. The following vibration parameters were used in group A: a frequency of 20 Hz and an amplitude of 3 mm. The vibration plate was turned off while the control group did their exercises. All participants performed six dynamic 30-second exercises with a 60-second resting period between them. Large muscle groups of the trunk, pelvis, and lower limbs were involved in the exercises. The results were measured before and immediately after the exercises. The proprioception of the lower limbs was measured in a closed kinematic chain using a Humac 360®. Participants were instructed to perform three squats with biofeedback in a defined range of motion. Then they performed three squats without biofeedback, which were measured. The final result was the average of three measurements. Statistical analysis was performed using Statistica 10.0 PL software. Results: There were no significant differences between the groups, both before and after the exercise (p > 0.05). Proprioception did not change in either group A or group B. Conclusions: 1. Deterioration in proprioception was not observed immediately after the vibration stimulus. This suggests that vibration-induced blockage of proprioceptive stimuli conduction can only have a short-lasting effect occurring only in the presence of the vibration stimulus. 2. Short-term use of vibration seems to be safe for patients with proprioceptive impairment due to the fact that the treatment does not decrease proprioception. 3. There is a need to supplement the results with an evaluation of proprioception while vibration stimuli are being applied. Moreover, the effects of the vibration parameters used in the exercises should be evaluated.

Keywords: joint position sense, proprioception, squat, whole body vibration

Procedia PDF Downloads 467
889 An Empirical Study for the Data-Driven Digital Transformation of the Indian Telecommunication Service Providers

Authors: S. Jigna, K. Nanda Kumar, T. Anna

Abstract:

Being a major contributor to the Indian economy and a critical facilitator for the country’s Digital India vision, the Indian telecommunications industry is also a major source of employment for the country. In the last few years, however, the Indian telecommunication service providers (TSPs) have been facing business challenges related to increasing competition, losses, debts, and decreasing revenue. The strategic use of digital technologies for a successful digital transformation has the potential to equip organizations to meet these business challenges. Despite an increased focus on digital transformation, telecom service providers globally, including Indian TSPs, have seen limited success so far. The purpose of this research was thus to identify the factors that are critical for digital transformation and to what extent they influence the successful digital transformation of the Indian TSPs. The literature review of more than 300 digital transformation-related articles, mostly from 2013-2019, demonstrated the lack of an empirical model consisting of factors for the successful digital transformation of the TSPs. This study theorizes a research framework grounded in multiple theories, and a research model consisting of 7 constructs that may influence business success during the digital transformation of the organization was proposed. A questionnaire survey of senior managers in the Indian telecommunications industry sought to validate the research model. Based on 294 survey responses, the validation of the structural equation model using the statistical tool ADANCO 2.1.1 was found to be robust. Results indicate that Digital Capabilities, Digital Strategy, and Corporate Level Data Strategy, in that order, have a strong influence on successful Business Performance, followed by IT Function Transformation, Digital Innovation, and Transformation Management, respectively. Even though Digital Organization did not have a direct significant effect on Business Performance outcomes, it had a strong influence on IT Function Transformation, thus affecting the Business Performance outcomes indirectly. Amongst the numerous practical and theoretical contributions of the study, the main contribution for the Indian TSPs is a validated reference for prioritizing the transformation initiatives in their strategic roadmap. The main contribution to theory is the possibility of using the research framework artifact of the present research for quantitative validation in different industries and geographies.
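As one illustration of the indirect path reported above (Digital Organization influencing Business Performance only through IT Function Transformation), the sketch below estimates direct and indirect (mediated) effects with ordinary least squares on simulated construct scores. It is not the ADANCO/PLS structural equation model used in the study, and the coefficients and data are hypothetical.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 294                                        # same sample size as the survey; scores simulated here
digital_org = rng.normal(size=n)               # hypothetical construct scores
it_transform = 0.6 * digital_org + rng.normal(scale=0.8, size=n)
performance = 0.5 * it_transform + 0.05 * digital_org + rng.normal(scale=0.8, size=n)

# Path a: Digital Organization -> IT Function Transformation
a = sm.OLS(it_transform, sm.add_constant(digital_org)).fit().params[1]

# Paths b (mediator -> outcome) and c' (direct effect) estimated jointly
X = sm.add_constant(np.column_stack([it_transform, digital_org]))
fit = sm.OLS(performance, X).fit()
b, c_direct = fit.params[1], fit.params[2]

print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_direct:.3f}")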

Keywords: corporate level data strategy, digital capabilities, digital innovation, digital strategy

Procedia PDF Downloads 130
888 Awarding Copyright Protection to Artificial Intelligence Technology for its Original Works: The New Way Forward

Authors: Vibhuti Amarnath Madhu Agrawal

Abstract:

Artificial Intelligence (AI) and Intellectual Property are two emerging concepts that are growing at a fast pace and have the potential of having a huge impact on the economy in the coming times. In simple words, AI is nothing but work done by a machine without any human intervention. It is coded software embedded in a machine, which, over a period of time, develops its own intelligence and begins to take its own decisions and judgments by studying various patterns of how people think, react to situations and perform tasks, among others. Intellectual Property, especially Copyright Law, on the other hand, protects the rights of individuals and companies in content creation that primarily deals with the application of intellect, originality and expression of the same in some tangible form. According to some of the reports shared by the media lately, ChatGPT, an AI-powered chatbot, has been involved in the creation of a wide variety of original content, including but not limited to essays, emails, plays and poetry. Besides, there have been instances wherein AI technology has given creative inputs for background, lights and costumes, among others, for films. Copyright Law offers protection to all of these different kinds of content and much more. Considering the two key parameters of Copyright – application of intellect and originality – the question therefore arises: will awarding Copyright protection to a person who has not directly invested his/her intellect in the creation of that content go against the basic spirit of Copyright laws? This study aims to analyze the current scenario and provide answers to the following questions: a. If the content generated by AI technology satisfies the basic criteria of originality and expression in a tangible form, why should such content be denied protection in the name of its creator, i.e., the specific AI tool/technology? b. Considering the increasing role and development of AI technology in our lives, should it be given the status of a ‘Legal Person’ in law? c. If yes, what should be the modalities of awarding protection to works of such a Legal Person and management of the same? Considering the current trends and the pace at which AI is advancing, it will not be long before AI starts functioning autonomously in the creation of new works. Current data and opinions on this issue globally are divided and lack uniformity. In order to fill in the existing gaps, data obtained from the Copyright offices of the top economies of the world have been analyzed. The role and functioning of various Copyright Societies in these countries has been studied in detail. This paper provides a roadmap that can be adopted to satisfy various objectives, constraints and dynamic conditions related to AI technology and its protection under Copyright Law.

Keywords: artificial intelligence technology, copyright law, copyright societies, intellectual property

Procedia PDF Downloads 71
887 Exploring the Impact of Domestic Credit Extension, Government Claims, Inflation, Exchange Rates, and Interest Rates on Manufacturing Output: A Financial Analysis

Authors: Ojo Johnson Adelakun

Abstract:

This study explores the long-term relationships between manufacturing output (MO) and several economic determinants, namely the interest rate (IR), inflation rate (INF), exchange rate (EX), credit to the private sector (CPSM), and gross claims on the government sector (GCGS), using monthly data from March 1966 to December 2023. Employing advanced econometric techniques, including Fully Modified Ordinary Least Squares (FMOLS), Dynamic Ordinary Least Squares (DOLS), and Canonical Cointegrating Regression (CCR), the analysis provides several key insights. The findings reveal a positive association between interest rates and manufacturing output, which diverges from traditional economic theory that predicts a negative correlation due to increased borrowing costs. This outcome is attributed to the financial resilience of large enterprises, allowing them to sustain investment in production despite higher interest rates. In addition, inflation demonstrates a positive relationship with manufacturing output, suggesting that stable inflation within target ranges creates a favourable environment for investment in productivity-enhancing technologies. Conversely, the exchange rate shows a negative relationship with manufacturing output, reflecting the adverse effects of currency depreciation on the cost of imported raw materials. The negative impact of CPSM underscores the importance of directing credit efficiently towards productive sectors rather than speculative ventures. Moreover, increased government borrowing appears to crowd out private sector credit, negatively affecting manufacturing output. Overall, the study highlights the need for a coordinated policy approach integrating monetary, fiscal, and financial sector strategies. Policymakers should account for the differential impacts of interest rates, inflation, exchange rates, and credit allocation on various sectors. Ensuring stable inflation, efficient credit distribution, and mitigating exchange rate volatility are critical for supporting manufacturing output and promoting sustainable economic growth. This research provides valuable insights into the economic dynamics influencing manufacturing output and offers policy recommendations tailored to South Africa’s economic context.
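As a small illustration of the long-run estimation described above, the sketch below runs an Engle-Granger cointegration check and a simple DOLS-style regression (the level regressor plus leads and lags of its first difference) on synthetic series. The data are simulated, only one regressor is used, and FMOLS and CCR as applied in the study are not reproduced.

import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
n = 400
ir = np.cumsum(rng.normal(size=n))                  # synthetic I(1) regressor, standing in for the interest rate
mo = 0.4 * ir + rng.normal(scale=0.5, size=n)       # synthetic manufacturing output sharing the stochastic trend

# Engle-Granger test for a long-run (cointegrating) relationship
t_stat, p_value, _ = coint(mo, ir)
print(f"Engle-Granger p-value: {p_value:.3f}")

# DOLS-style regression: the level regressor plus k leads and lags of its first difference
k = 2
d_ir = np.diff(ir, prepend=ir[0])
lead_lags = np.column_stack([np.roll(d_ir, s) for s in range(-k, k + 1)])
y = mo[k:-k]                                        # trim the ends contaminated by np.roll wrap-around
X = sm.add_constant(np.column_stack([ir, lead_lags])[k:-k])
beta = sm.OLS(y, X).fit().params[1]
print(f"estimated long-run coefficient: {beta:.3f}")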

Keywords: domestic credit, government claims, financial variables, manufacturing output, financial analysis

Procedia PDF Downloads 20
886 Biflavonoids from Selaginellaceae as Epidermal Growth Factor Receptor Inhibitors and Their Anticancer Properties

Authors: Adebisi Adunola Demehin, Wanlaya Thamnarak, Jaruwan Chatwichien, Chatchakorn Eurtivong, Kiattawee Choowongkomon, Somsak Ruchirawat, Nopporn Thasana

Abstract:

The epidermal growth factor receptor (EGFR) is a transmembrane glycoprotein involved in cellular signalling processes and, its aberrant activity is crucial in the development of many cancers such as lung cancer. Selaginellaceae are fern allies that have long been used in Chinese traditional medicine to treat various cancer types, especially lung cancer. Biflavonoids, the major secondary metabolites in Selaginellaceae, have numerous pharmacological activities, including anti-cancer and anti-inflammatory. For instance, amentoflavone induces a cytotoxic effect in the human NSCLC cell line via the inhibition of PARP-1. However, to the best of our knowledge, there are no studies on biflavonoids as EGFR inhibitors. Thus, this study aims to investigate the EGFR inhibitory activities of biflavonoids isolated from Selaginella siamensis and Selaginella bryopteris. Amentoflavone, tetrahydroamentoflavone, sciadopitysin, robustaflavone, robustaflavone-4-methylether, delicaflavone, and chrysocauloflavone were isolated from the ethyl-acetate extract of the whole plants. The structures were determined using NMR spectroscopy and mass spectrometry. In vitro study was conducted to evaluate their cytotoxicity against A549, HEPG2, and T47D human cancer cell lines using the MTT assay. In addition, a target-based assay was performed to investigate their EGFR inhibitory activity using the kinase inhibition assay. Finally, a molecular docking study was conducted to predict the binding modes of the compounds. Robustaflavone-4-methylether and delicaflavone showed the best cytotoxic activity on all the cell lines with IC50 (µM) values of 18.9 ± 2.1 and 22.7 ± 3.3 on A549, respectively. Of these biflavonoids, delicaflavone showed the most potent EGFR inhibitory activity with an 84% relative inhibition at 0.02 nM using erlotinib as a positive control. Robustaflavone-4-methylether showed a 78% inhibition at 0.15 nM. The docking scores obtained from the molecular docking study correlated with the kinase inhibition assay. Robustaflavone-4-methylether and delicaflavone had a docking score of 72.0 and 86.5, respectively. The inhibitory activity of delicaflavone seemed to be linked with the C2”=C3” and 3-O-4”’ linkage pattern. Thus, this study suggests that the structural features of these compounds could serve as a basis for developing new EGFR-TK inhibitors.

Keywords: anticancer, biflavonoids, EGFR, molecular docking, Selaginellaceae

Procedia PDF Downloads 198
885 Investigation of Mangrove Area Effects on Hydrodynamic Conditions of a Tidal Dominant Strait Near the Strait of Hormuz

Authors: Maryam Hajibaba, Mohsen Soltanpour, Mehrnoosh Abbasian, S. Abbas Haghshenas

Abstract:

This paper aims to evaluate the main role of mangrove forests in the unique hydrodynamic characteristics of the Khuran Strait (KS) in the Persian Gulf. Investigation of the hydrodynamic conditions of the KS is vital to predict and estimate sedimentation and erosion all over the protected areas north of Qeshm Island. The KS (or Tang-e-Khuran) is located between Qeshm Island and the Iranian mainland and has a minimum width of approximately two kilometers. The hydrodynamics of the strait is dominated by strong tidal currents of up to 2 m/s. The bathymetry of the area is dynamic and complicated, as 1) strong currents exist in the area, which lead to apparent sand dune movements in the middle and southern parts of the strait, and 2) a vast area with mangrove coverage lies next to the narrowest part of the strait. This is why ordinary modeling schemes with normal mesh resolutions are not capable of high-accuracy estimation of the current fields in the KS. A comprehensive set of measurements was carried out to investigate the hydrodynamics and morphodynamics of the study area, with several components including 1) vertical current profiling at six stations, 2) directional wave measurements at four stations, 3) water level measurements at six stations, 4) wind measurements at one station, and 5) sediment grab sampling at 100 locations. Additionally, a set of periodic hydrographic surveys was included in the program. The numerical simulation was carried out using the Delft3D Flow Module. Model calibration was done by comparing water levels and depth-averaged current velocities against the available observational data. The results clearly indicate that the observed data and the simulations only fit together if a realistic representation of the mangrove area is well captured by the model bathymetry data. An unstructured grid was generated using RGFGRID and QUICKIN, and the flow model was driven with water level time series at the open boundaries. Adopting the available field data, the key role of the mangrove area in the hydrodynamics of the study area can be studied. The results show that including the accurate geometry of the mangrove area and consideration of its sponge-like behavior are the key aspects through which a realistic current field can be simulated in the KS.

Keywords: Khuran Strait, Persian Gulf, tide, current, Delft3D

Procedia PDF Downloads 211
884 The Decline of Islamic Influence in the Global Geopolitics

Authors: M. S. Riyazulla

Abstract:

Since the dawn of the 21st century, there has been a perceptible decline in Islamic supremacy in world affairs, apart from the gradual waning of the amiable relations and relevance of Islamic countries in the international political arena. For a long time, Islamic countries have been marginalised by the superpowers in global conflicts. This was evident in the context of their recent invasions and interference in Afghanistan, Syria, Iraq, and Libya. The leading international Islamic organizations, like the Arab League, Organization of Islamic Cooperation, Gulf Cooperation Council, and Muslim World League, did not play any prominent role there in resolving the crises that ensued due to exogenous and endogenous causes. Hence, there is a need for Islamic countries to create a credible international Islamic organization that could dictate its terms and shape a new Islamic world order. The prominent Islamic countries are divided along ideological and religious fault lines. Their concord is indispensable to enhance their image and improve relations with other countries and communities. The massive boon of oil and gas could be synergistically utilised to exhibit their omnipotence and eminence in constructive ways. The prevailing menace of Islamophobia could be abated through syncretic messages, discussions, and deliberations by sagacious Islamic scholars with other community leaders. Presently, as Muslims are at a crossroads, a dynamic leadership could guide the agitated Muslim community on a constructive path and herald political stability around the world. The present political disorder, chaos, and economic challenges necessitate a paradigm shift in the approach to worldly affairs. This could also be accomplished through advancement in science and technology, particularly space exploration, for peaceful purposes. The Islamic world, in order to regain its lost preeminence, should rise to the occasion in promoting peace and tranquility in the world and should evolve a rational and human-centric solution to global disputes and concerns. As a splendid contribution to humanity and for amicable international relations, they should devote all their resources and scientific intellect towards space exploration and should safely transport man from the Earth to the nearest and most accessible cosmic body, the Moon, within one hundred years, as mankind is facing an existential threat on the planet.

Keywords: carboniferous period, Earth, extinction, fossil fuels, global leaders, Islamic glory, international order, life, marginalization, Moon, natural catastrophes

Procedia PDF Downloads 68
883 Model-Based Diagnostics of Multiple Tooth Cracks in Spur Gears

Authors: Ahmed Saeed Mohamed, Sadok Sassi, Mohammad Roshun Paurobally

Abstract:

Gears are important machine components that are widely used to transmit power and change speed in many rotating machines. Any breakdown of these vital components may cause severe disturbance to production and incur heavy financial losses. One of the most common causes of gear failure is the tooth fatigue crack. Early detection of tooth cracks is still a challenging task for engineers and maintenance personnel. So far, to analyze the vibration behavior of gears, different approaches have been tried based on theoretical developments, numerical simulations, or experimental investigations. The objective of this study was to develop a numerical model that could be used to simulate the effect of tooth cracks on the resulting vibrations and hence to permit early fault detection for gear transmission systems. Unlike the majority of published papers, where only one single crack has been considered, this work is more realistic, since it incorporates the possibility of multiple simultaneous cracks with different lengths. As cracks significantly alter the gear mesh stiffness, we performed a finite element analysis using SolidWorks software to determine the stiffness variation with respect to the angular position for different combinations of crack lengths. A simplified six-degrees-of-freedom non-linear lumped parameter model of a one-stage gear system is proposed to study the vibration of a pair of spur gears, with and without tooth cracks. The model takes several physical properties into account, including variable gear mesh stiffness and the effect of friction, but ignores the lubrication effect. The vibration simulation results of the gearbox were obtained via Matlab and Simulink. The results were found to be consistent with the results from previously published works. The effect of a single crack with different severity levels was studied, and the resulting changes in the total mesh stiffness and the vibration response were observed and found to be very similar to those reported in previous studies. The effect of the crack length on various statistical time domain parameters was considered, and the results show that these parameters were not equally sensitive to the crack percentage. Multiple cracks were then introduced at different locations, and the corresponding vibration response and statistical parameters were obtained.
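To make the modelling idea above concrete, the following is a heavily reduced single-degree-of-freedom sketch in which a time-varying mesh stiffness, with a periodic dip representing a cracked tooth, drives a spring-mass-damper through an ODE solver. The actual study uses a six-degrees-of-freedom non-linear model in Matlab/Simulink with FE-derived stiffness; the stiffness waveform, crack loss factor and all parameters below are illustrative.

import numpy as np
from scipy.integrate import solve_ivp

m, c = 0.5, 40.0                         # equivalent mass (kg) and damping (N·s/m), illustrative
k_mean, k_ripple = 2.0e8, 0.3e8          # mean mesh stiffness and tooth-passing variation (N/m)
f_mesh, crack_loss = 600.0, 0.25         # mesh frequency (Hz) and stiffness drop when the cracked tooth engages
n_teeth = 20

def mesh_stiffness(t):
    # Tooth-passing stiffness with a periodic dip once per revolution (one cracked tooth out of n_teeth).
    k = k_mean + k_ripple * np.sign(np.sin(2.0 * np.pi * f_mesh * t))
    cracked_tooth_engaged = (t * f_mesh / n_teeth) % 1.0 < 1.0 / n_teeth
    return k * (1.0 - crack_loss * cracked_tooth_engaged)

def rhs(t, y):
    x, v = y
    f_static = 500.0                     # transmitted load (N)
    return [v, (f_static - c * v - mesh_stiffness(t) * x) / m]

sol = solve_ivp(rhs, (0.0, 0.2), [0.0, 0.0], max_step=1e-5)
print(f"RMS of the relative mesh velocity: {np.sqrt(np.mean(sol.y[1] ** 2)):.4f} m/s")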

Keywords: dynamic simulation, gear mesh stiffness, simultaneous tooth cracks, spur gear, vibration-based fault detection

Procedia PDF Downloads 212
882 Character Development Outcomes: A Predictive Model for Behaviour Analysis in Tertiary Institutions

Authors: Rhoda N. Kayongo

Abstract:

As behavior analysts in education continue to debate on how higher institutions can continue to benefit from their social and academic related programs, higher education is facing challenges in the area of character development. This is manifested in the percentages of college completion rates, teen pregnancies, drug abuse, sexual abuse, suicide, plagiarism, lack of academic integrity, and violence among their students. Attending college is a perceived opportunity to positively influence the actions and behaviors of the next generation of society; thus colleges and universities have to provide opportunities to develop students’ values and behaviors. Prior studies were mainly conducted in private institutions and more so in developed countries. However, with the complexity of the nature of student body currently due to the changing world, a multidimensional approach combining multiple factors that enhance character development outcomes is needed to suit the changing trends. The main purpose of this study was to identify opportunities in colleges and develop a model for predicting character development outcomes. A survey questionnaire composed of 7 scales including in-classroom interaction, out-of-classroom interaction, school climate, personal lifestyle, home environment, and peer influence as independent variables and character development outcomes as the dependent variable was administered to a total of five hundred and one students of 3rd and 4th year level in selected public colleges and universities in the Philippines and Rwanda. Using structural equation modelling, a predictive model explained 57% of the variance in character development outcomes. Findings from the results of the analysis showed that in-classroom interactions have a substantial direct influence on character development outcomes of the students (r = .75, p < .05). In addition, out-of-classroom interaction, school climate, and home environment contributed to students’ character development outcomes but in an indirect way. The study concluded that in the classroom are many opportunities for teachers to teach, model and integrate character development among their students. Thus, suggestions are made to public colleges and universities to deliberately boost and implement experiences that cultivate character within the classroom. These may contribute tremendously to the students' character development outcomes and hence render effective models of behaviour analysis in higher education.

Keywords: character development, tertiary institutions, predictive model, behavior analysis

Procedia PDF Downloads 138
881 The Effect of Seated Distance on Muscle Activation and Joint Kinematics during Seated Strengthening in Patients with Stroke with Extensor Synergy Pattern in the Lower Limbs

Authors: Y. H. Chen, P. Y. Chiang, T. Sugiarto, I. Karsuna, Y. J. Lin, C. C. Chang, W. C. Hsu

Abstract:

Task-specific training with intense practice of functional tasks has been emphasized in approaches to motor rehabilitation in patients with hemiplegic stroke. Although reciprocal actions may increase demands on motor control during seated stepping exercise, motor control is not explicitly trained when emphasis and instruction are focused on traditional strengthening. Apart from cycling and the treadmill, various forms of seated exercisers are becoming available for lower extremity exercise. The benefits of seated exercisers have mainly been examined with respect to their effect on the cardiopulmonary system. Thus, the aim of the current study is to investigate the effect of seated distance on muscle activation during seated strengthening in patients with stroke with extensor synergy pattern in the lower extremities. Electrodes were placed on the surface of the lower limb muscles, including the rectus femoris (RF), vastus lateralis (VL), biceps femoris (BF) and gastrocnemius (GT) of both sides. Maximal voluntary contraction (MVC) of the muscles was obtained to normalize the EMG amplitude obtained during dynamic trials, with the analog raw data digitized at a sampling frequency of 2000 Hz, fully rectified, and linear enveloped. The movement cycle was separated into two phases: pushing (PP) and return (RP). Integral EMG (iEMG) was then used to quantify the level of activation during each of the phases. Subjects performed strengthening with moderate resistance at a speed of 60 rpm at two different seated distances, short (D1) and long (D2). The results showed greater iEMG in RF and smaller iEMG in VL and BF, with an obvious increase in the hip flexion range of motion, in the D1 condition. On the contrary, in the D2 condition, no significant involvement of RF was found during PP, while a greater level of muscular activation in VL and BF was found during RP. In addition, greater hip internal rotation was observed in the D2 condition. In patients with stroke with abnormal tone revealed by extensor synergy in the lower extremities, a shorter seated distance is suggested to facilitate hip flexor muscle activation while avoiding the induction of hyperextensor tone, which may prevent a smooth repetitive motion. Repetitive contraction exercise of the hip flexors may be helpful for further gait training, as it may assist hip flexion during the swing phase of walking.
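A minimal sketch of the EMG processing chain described above (full-wave rectification, linear envelope, MVC normalization, and phase-wise integration into iEMG for the PP and RP phases) is shown below on a synthetic signal. The envelope cut-off frequency, the MVC reference value and the 50/50 phase split are assumptions for illustration only.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 2000                                            # sampling frequency used in the study (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)                    # one synthetic 2-s movement cycle
raw = np.random.default_rng(3).normal(size=t.size) * (0.4 + 0.3 * np.sin(2 * np.pi * 0.5 * t))

def linear_envelope(emg, fs, cutoff=6.0):
    # Full-wave rectify, then low-pass filter to obtain the linear envelope.
    b, a = butter(2, cutoff, btype="low", fs=fs)
    return filtfilt(b, a, np.abs(emg))

env = linear_envelope(raw, fs)
mvc_envelope = 1.0                                   # envelope amplitude at maximal voluntary contraction (assumed)
env_norm = 100.0 * env / mvc_envelope                # expressed as %MVC

# Split the cycle into pushing (PP) and return (RP) phases and integrate (iEMG).
half = t.size // 2
iemg_pp = np.trapz(env_norm[:half], dx=1.0 / fs)
iemg_rp = np.trapz(env_norm[half:], dx=1.0 / fs)
print(f"iEMG PP = {iemg_pp:.1f} %MVC·s, iEMG RP = {iemg_rp:.1f} %MVC·s")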

Keywords: seated strengthening, patients with stroke, electromyography, synergy pattern

Procedia PDF Downloads 215
880 Application of Free Living Nitrogen Fixing Bacteria to Increase Productivity of Potato in Field

Authors: Govinda Pathak

Abstract:

In modern agriculture, the sustainable enhancement of crop productivity while minimizing environmental impacts remains a paramount challenge. Plant Growth Promoting Rhizobacteria (PGPR) have emerged as a promising solution to address this challenge. The rhizosphere, the dynamic interface between plant roots and soil, hosts intricate microbial interactions crucial for plant health and nutrient acquisition. PGPR, a subset of rhizospheric microorganisms, exhibit multifaceted beneficial effects on plants. Their abilities to stimulate growth, confer stress tolerance, enhance nutrient availability, and suppress pathogens make them invaluable contributors to sustainable agriculture. This work examines the pivotal role of free living nitrogen fixer in optimizing agricultural practices. We delve into the intricate mechanisms underlying PGPR-mediated plant-microbe interactions, encompassing quorum sensing, root exudate modulation, and signaling molecule exchange. Furthermore, we explore the diverse strategies employed by PGPR to enhance plant resilience against abiotic stresses such as drought, salinity, and metal toxicity. Additionally, we highlight the role of PGPR in augmenting nutrient acquisition and soil fertility through mechanisms such as nitrogen fixation, phosphorus solubilization, and mineral mobilization. Furthermore, we discuss the potential of PGPR in minimizing the reliance on chemical fertilizers and pesticides, thereby contributing to environmentally friendly agriculture. However, harnessing the full potential of PGPR requires a comprehensive understanding of their interactions with host plants and the surrounding microbial community. We also address challenges associated with PGPR application, including formulation, compatibility, and field efficacy. As the quest for sustainable agriculture intensifies, harnessing the remarkable attributes of PGPR offers a holistic approach to propel agricultural productivity while maintaining ecological balance. This work underscores the promising prospect of free living nitrogen fixer as a panacea for addressing critical agricultural challenges regarding chemical urea in an era of sustainable and resilient food production.

Keywords: PGPR, nitrogen fixer, quorum sensing, Rhizobacteria, pesticides

Procedia PDF Downloads 63
879 Parallels between Training Parameters of High-Performance Athletes Determining the Long-Term Adaptation of the Body in Various Sports: Case Study on Different Types of Training and Their Gender Conditioning

Authors: Gheorghe Braniste

Abstract:

The gender gap has always been in dispute when comparing records and has been a major factor influencing best performances in various sports. Consequently, our study registers the evolution of the difference between men's and women’s best performances within either cyclic or acyclic sports, considering the fact that the training sessions of high-performance athletes show both similarities and differences in the long-term adaptation of their bodies to stress and effort in breaking limits and records. Firstly, for a correct interpretation of the data and tables included in this paper, we must point out that intense muscular activity has a considerable impact on the structural organization of the organs and systems of the performer's body through the mechanism of motor-visceral reflexes, forming a high working capacity suitable for intense muscular activity. The opportunity to obtain high sports results during official competitions is due, on the one hand, to the genetic characteristics of the athlete's body and, on the other hand, to the fact that playing professional sports leaves its mark on vital morphological and functional parameters. The aim of our research is to study the landmark differences between male and female athletes in their physical development, together with their growing capacity to withstand functional training during the competitive period of their annual training cycle. In order to evaluate the physical development of the athletes, the data of the anthropometric screenings obtained at the Olympic Training Center of the selected teams of the Republic of Moldova were interpreted and rated. Physical development was studied in terms of body height and weight, vital capacity, thoracic excursion, maximum force (Fmax), and dynamometry of the hand and back, and physical development indices that allow a complex evaluation of physical development were also registered. The interdependence of the results obtained in performance sports with the morphological and functional particularities of the athletes' body is firmly determined and cannot be disputed. Nevertheless, the registered data proved that with the increase of training capacity, the morphological and functional abilities of the female body increase and, in some respects, approach and even slightly surpass those of men in certain sports.

Keywords: physical development, indices, parameters, active body weight, morphological maturity, physical performance

Procedia PDF Downloads 124
878 CyberSteer: Cyber-Human Approach for Safely Shaping Autonomous Robotic Behavior to Comply with Human Intention

Authors: Vinicius G. Goecks, Gregory M. Gremillion, William D. Nothwang

Abstract:

Modern approaches to train intelligent agents rely on prolonged training sessions, high amounts of input data, and multiple interactions with the environment. This restricts the application of these learning algorithms in robotics and real-world applications, in which there is low tolerance to inadequate actions, interactions are expensive, and real-time processing and action are required. This paper addresses this issue introducing CyberSteer, a novel approach to efficiently design intrinsic reward functions based on human intention to guide deep reinforcement learning agents with no environment-dependent rewards. CyberSteer uses non-expert human operators for initial demonstration of a given task or desired behavior. The trajectories collected are used to train a behavior cloning deep neural network that asynchronously runs in the background and suggests actions to the deep reinforcement learning module. An intrinsic reward is computed based on the similarity between actions suggested and taken by the deep reinforcement learning algorithm commanding the agent. This intrinsic reward can also be reshaped through additional human demonstration or critique. This approach removes the need for environment-dependent or hand-engineered rewards while still being able to safely shape the behavior of autonomous robotic agents, in this case, based on human intention. CyberSteer is tested in a high-fidelity unmanned aerial vehicle simulation environment, the Microsoft AirSim. The simulated aerial robot performs collision avoidance through a clustered forest environment using forward-looking depth sensing and roll, pitch, and yaw references angle commands to the flight controller. This approach shows that the behavior of robotic systems can be shaped in a reduced amount of time when guided by a non-expert human, who is only aware of the high-level goals of the task. Decreasing the amount of training time required and increasing safety during training maneuvers will allow for faster deployment of intelligent robotic agents in dynamic real-world applications.
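A minimal sketch of the intrinsic-reward idea described above is given below: a behavior-cloning network suggests an action, and the reinforcement learning agent receives a reward that grows with the similarity between the suggested and the executed action. The Gaussian similarity kernel, the scale factor, and the example roll/pitch/yaw-rate commands are illustrative assumptions rather than the authors' exact implementation.

import numpy as np

def intrinsic_reward(a_executed, a_suggested, scale=1.0):
    # High reward when the RL action matches the behavior-cloned suggestion, decaying with distance.
    diff = np.asarray(a_executed, dtype=float) - np.asarray(a_suggested, dtype=float)
    return float(np.exp(-scale * np.dot(diff, diff)))

# Illustrative attitude-rate commands (normalized to [-1, 1]): roll, pitch, yaw references.
suggested = np.array([0.10, -0.25, 0.00])     # output of the behavior-cloning network
executed = np.array([0.15, -0.20, 0.05])      # action chosen by the deep RL policy
print(f"intrinsic reward: {intrinsic_reward(executed, suggested):.3f}")   # near 1 when the actions agree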

Keywords: human-robot interaction, intelligent robots, robot learning, semisupervised learning, unmanned aerial vehicles

Procedia PDF Downloads 259
877 Reconceptualizing “Best Practices” in Public Sector

Authors: Eftychia Kessopoulou, Styliani Xanthopoulou, Ypatia Theodorakioglou, George Tsiotras, Katerina Gotzamani

Abstract:

Public sector managers frequently herald that implementing best practices as a set of standards, may lead to superior organizational performance. However, recent research questions the objectification of best practices, highlighting: a) the inability of public sector organizations to develop innovative administrative practices, as well as b) the adoption of stereotypical renowned practices inculcated in the public sector by international governance bodies. The process through which organizations construe what a best practice is, still remains a black box that is yet to be investigated, given the trend of continuous changes in public sector performance, as well as the burgeoning interest of sharing popular administrative practices put forward by international bodies. This study aims to describe and understand how organizational best practices are constructed by public sector performance management teams, like benchmarkers, during the benchmarking-mediated performance improvement process and what mechanisms enable this construction. A critical realist action research methodology is employed, starting from a description of various approaches on best practice nature when a benchmarking-mediated performance improvement initiative, such as the Common Assessment Framework, is applied. Firstly, we observed the benchmarker’s management process of best practices in a public organization, so as to map their theories-in-use. As a second step we contextualized best administrative practices by reflecting the different perspectives emerged from the previous stage on the design and implementation of an interview protocol. We used this protocol to conduct 30 semi-structured interviews with “best practice” process owners, in order to examine their experiences and performance needs. Previous research on best practices has shown that needs and intentions of benchmarkers cannot be detached from the causal mechanisms of the various contexts in which they work. Such causal mechanisms can be found in: a) process owner capabilities, b) the structural context of the organization, and c) state regulations. Therefore, we developed an interview protocol theoretically informed in the first part to spot causal mechanisms suggested by previous research studies and supplemented it with questions regarding the provision of best practice support from the government. Findings of this work include: a) a causal account of the nature of best administrative practices in the Greek public sector that shed light on explaining their management, b) a description of the various contexts affecting best practice conceptualization, and c) a description of how their interplay changed the organization’s best practice management.

Keywords: benchmarking, action research, critical realism, best practices, public sector

Procedia PDF Downloads 129
876 Linguistic Analysis of Argumentation Structures in Georgian Political Speeches

Authors: Mariam Matiashvili

Abstract:

Argumentation is an integral part of our daily communication - formal or informal. Argumentative reasoning, techniques, and language tools are used both in personal conversations and in the business environment. Verbalization of opinions requires the use of particular syntactic-pragmatic structures - arguments that add credibility to the statement. The study of argumentative structures allows us to identify the linguistic features that make a text argumentative. Knowing what elements make up an argumentative text in a particular language helps the users of that language improve their skills. Also, natural language processing (NLP) has become especially relevant recently. In this context, one of the main emphases is on the computational processing of argumentative texts, which will enable the automatic recognition and analysis of large volumes of textual data. The research deals with the linguistic analysis of the argumentative structures of Georgian political speeches - particularly the linguistic structure, characteristics, and functions of the parts of the argumentative text: claims, support, and attack statements. The research aims to describe the linguistic cues that give a sentence a judgmental/controversial character and help to identify the reasoning parts of the argumentative text. The empirical data come from the Georgian Political Corpus, particularly TV debates. Consequently, the texts are of a dialogical nature, representing a discussion between two or more people (most often between a journalist and a politician). The research uses the following approaches to identify and analyze the argumentative structures: Lexical Classification and Analysis - identifying lexical items that are relevant in the process of creating argumentative texts and building the lexicon of argumentation (groups of words gathered from a semantic point of view); Grammatical Analysis and Classification - grammatical analysis of the words and phrases identified on the basis of the arguing lexicon; and Argumentation Schemas - describing and identifying the argumentation schemes most likely used in Georgian political speeches. As a final step, we analyzed the relations between the above-mentioned components. For example, if an identified argument scheme is “Argument from Analogy”, the identified lexical items semantically express analogy too, and they are most likely adverbs in Georgian. As a result, we created a lexicon of words that play a significant role in creating Georgian argumentative structures. The linguistic analysis has shown that verbs play a crucial role in creating argumentative structures.

Keywords: georgian, argumentation schemas, argumentation structures, argumentation lexicon

Procedia PDF Downloads 74
875 Detection and Identification of Antibiotic Resistant Bacteria Using Infra-Red-Microscopy and Advanced Multivariate Analysis

Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel

Abstract:

Antimicrobial drugs have an important role in controlling illness associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global health-care problem. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for the optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing, like disk diffusion, are time-consuming, and other methods, including the E-test and genotyping, are relatively expensive. Fourier transform infrared (FTIR) microscopy is a rapid, safe, and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples, including bacteria. Modern infrared (IR) spectrometers with high spectral resolution enable the measurement of unprecedented biochemical information from cells at the molecular level. Moreover, the development of new bioinformatics analyses combined with IR spectroscopy yields a powerful technique, which enables the detection of structural changes associated with resistance. The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics within a time span of a few minutes. The bacterial samples, which were identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories of Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods. Our results, based on 550 E. coli samples, were promising and showed that by using the infrared spectroscopic technique together with multivariate analysis, it is possible to classify the tested bacteria as sensitive or resistant with a success rate higher than 85% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing the FTIR microscopy technique as a rapid and reliable method for identifying antibiotic susceptibility.
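The sketch below illustrates the kind of multivariate pipeline described above: spectra are reduced with PCA and classified as sensitive or resistant with a linear discriminant, with cross-validated accuracy as the success rate. The spectra here are simulated and the PCA/LDA pipeline is only one plausible choice; it is not the authors' exact feature set or classifier.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 550, 400                  # 550 samples, as in the study; spectra simulated here
labels = rng.integers(0, 2, n_samples)               # 0 = sensitive, 1 = resistant
wavenumbers = np.linspace(900, 1800, n_wavenumbers)
band = np.exp(-((wavenumbers - 1650) / 80.0) ** 2)   # a single amide-I-like band as a toy spectral feature
spectra = (1 + 0.05 * labels[:, None]) * band + 0.02 * rng.normal(size=(n_samples, n_wavenumbers))

model = make_pipeline(StandardScaler(), PCA(n_components=20), LinearDiscriminantAnalysis())
scores = cross_val_score(model, spectra, labels, cv=5)
print(f"cross-validated susceptibility classification accuracy: {scores.mean():.2f}")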

Keywords: antibiotics, E. coli, FTIR, multivariate analysis, susceptibility

Procedia PDF Downloads 266
874 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture

Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger

Abstract:

3D woven textile composites continue to emerge as an advanced material for structural applications and composite manufacture due to their bespoke nature, through thickness reinforcement and near net shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. As 3D weaving is a dry preforming technology it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf) and consolidation. This compression of the preform during manufacture results in changes to its thickness and architecture which can often lead to under-performance or changes of the 3D woven composite. Unlike traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures makes it difficult to know exactly how each 3D preform will behave during processing. Therefore, the focus of this study is to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp or fibre waviness and thickness as well as analysing the accuracy of available software to predict how 3D woven preforms behave under compression. To achieve this, 3D preforms are modelled and compression simulated in Wisetex with varying architectures of binder style, pick density, thickness and tow size. These architectures have then been woven with samples dry compression tested to determine the compressibility of the preforms under various pressures. Additional preform samples were manufactured using Resin Transfer Moulding (RTM) with varying compressive force. Composite samples were cross sectioned, polished and analysed using microscopy to investigate changes in architecture and crimp. Data from dry fabric compression and composite samples were then compared alongside the Wisetex models to determine accuracy of the prediction and identify architecture parameters that can affect the preform compressibility and stability. Results indicate that binder style/pick density, tow size and thickness have a significant effect on compressibility of 3D woven preforms with lower pick density allowing for greater compression and distortion of the architecture. It was further highlighted that binder style combined with pressure had a significant effect on changes to preform architecture where orthogonal binders experienced highest level of deformation, but highest overall stability, with compression while layer to layer indicated a reduction in fibre crimp of the binder. In general, simulations showed a relative comparison to experimental results; however, deviation is evident due to assumptions present within the modelled results.

Keywords: 3D woven composites, compression, preforms, textile composites

Procedia PDF Downloads 136
873 Green Wave Control Strategy for Optimal Energy Consumption by Model Predictive Control in Electric Vehicles

Authors: Furkan Ozkan, M. Selcuk Arslan, Hatice Mercan

Abstract:

Electric vehicles (EVs) are becoming increasingly popular as a sustainable alternative to traditional combustion engine vehicles. However, to fully realize the potential of EVs in reducing environmental impact and energy consumption, efficient control strategies are essential. This study explores the application of green wave control using model predictive control (MPC) for electric vehicles, coupled with energy consumption modeling using neural networks. The use of MPC allows for real-time optimization of the vehicle's energy consumption while considering dynamic traffic conditions. By leveraging neural networks for energy consumption modeling, the EV's performance can be further enhanced through accurate predictions and adaptive control. The integration of these advanced control and modeling techniques aims to maximize energy efficiency and range while navigating urban traffic scenarios. The findings of this research offer valuable insights into the potential of green wave control for electric vehicles and demonstrate the significance of integrating MPC and neural network modeling for optimizing energy consumption. This work contributes to the advancement of sustainable transportation systems and the widespread adoption of electric vehicles. To evaluate the effectiveness of the green wave control strategy in real-world urban environments, extensive simulations were conducted using a high-fidelity vehicle model and realistic traffic scenarios. The results indicate that the integration of model predictive control and energy consumption modeling with neural networks had a significant impact on the energy efficiency and range of electric vehicles. Through the use of MPC, the electric vehicle was able to adapt its speed and acceleration profile in real time to optimize energy consumption while maintaining travel time objectives. The neural network-based energy consumption modeling provided accurate predictions, enabling the vehicle to anticipate and respond to variations in traffic flow, further enhancing energy efficiency and range. Furthermore, the study revealed that the green wave control strategy not only reduced energy consumption but also improved the overall driving experience by minimizing abrupt acceleration and deceleration, leading to a smoother and more comfortable ride for passengers. These results demonstrate the potential of green wave control to revolutionize urban transportation by enhancing the performance of electric vehicles and contributing to a more sustainable and efficient mobility ecosystem.
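A toy receding-horizon sketch of the speed-planning idea is shown below; the quadratic energy proxy, the point-mass kinematics, and the traffic-light timing are assumptions standing in for the study's neural-network energy model and full vehicle model.

```python
# Toy receding-horizon sketch of green-wave speed planning.
# The energy proxy and light timing are assumptions, not the study's models.
import numpy as np
from scipy.optimize import minimize

dt, N = 1.0, 20                            # 1 s steps, 20-step horizon
d_light, t_green = 250.0, (14.0, 22.0)     # stop line 250 m ahead, green window (s)
v0 = 10.0                                  # current speed, m/s

def rollout(a):                            # simple point-mass kinematics
    v = np.clip(v0 + np.cumsum(a) * dt, 0.0, 20.0)
    x = np.cumsum(v) * dt
    return x, v

def cost(a):
    x, v = rollout(a)
    energy = np.sum(a**2) + 0.01 * np.sum(v**2)   # proxy for traction energy
    # Penalise being far from the stop line when the green window opens.
    k_green = int(t_green[0] / dt) - 1
    timing = 0.01 * (x[k_green] - d_light) ** 2
    return energy + timing

res = minimize(cost, np.zeros(N), bounds=[(-2.0, 2.0)] * N)
x_opt, v_opt = rollout(res.x)
print("planned speed profile (m/s):", np.round(v_opt[:10], 1))
```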

Keywords: electric vehicles, energy efficiency, green wave control, model predictive control, neural networks

Procedia PDF Downloads 55
872 A Modified QuEChERS Method Using Activated Carbon Fibers as r-DSPE Sorbent for Sample Cleanup: Application to Pesticides Residues Analysis in Food Commodities Using GC-MS/MS

Authors: Anshuman Srivastava, Shiv Singh, Sheelendra Pratap Singh

Abstract:

A simple, sensitive and effective gas chromatography tandem mass spectrometry (GC-MS/MS) method was developed for the simultaneous analysis of multiple pesticide residues (organophosphates, organochlorines, synthetic pyrethroids and herbicides) in food commodities using phenolic resin based activated carbon fibers (ACFs) as the reversed-dispersive solid phase extraction (r-DSPE) sorbent in a modified QuEChERS (Quick Easy Cheap Effective Rugged Safe) method. The acetonitrile-based QuEChERS technique was used for the extraction of the analytes from food matrices, followed by sample cleanup with ACFs instead of the traditionally used primary secondary amine (PSA). Different physico-chemical characterization techniques, such as Fourier transform infrared spectroscopy, scanning electron microscopy, X-ray diffraction and Brunauer-Emmett-Teller surface area analysis, were employed to investigate the engineering and structural properties of the ACFs. The recovery of pesticides and herbicides was tested at concentration levels of 0.02 and 0.2 mg/kg in different commodities such as cauliflower, cucumber, banana, apple, wheat and black gram. The recoveries of all twenty-six pesticides and herbicides were within the acceptable range (70-120%) according to the SANCO guideline, with relative standard deviation values < 15%. The limit of detection and limit of quantification of the method were in the ranges of 0.38-3.69 ng/mL and 1.26-12.19 ng/mL, respectively. In the traditional QuEChERS method, PSA used as the r-DSPE sorbent plays a vital role in the sample clean-up process and demonstrates good recoveries for multiclass pesticides. This study reports that ACFs remove co-extractives better than PSA without compromising the recoveries of multiple pesticides from food matrices. Further, ACFs eliminate the need for the charcoal that is added alongside PSA in the traditional QuEChERS method to remove pigments. The developed method is also cost-effective because ACFs are significantly cheaper than PSA. The proposed modified QuEChERS method is therefore more robust and effective and has better sample cleanup efficiency for multiclass, multi-pesticide residue analysis in different food matrices such as vegetables, grains and fruits.
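The validation figures quoted above (recovery, RSD, LOD, LOQ) follow standard definitions; the sketch below illustrates the arithmetic with assumed replicate and calibration values, not the study's data.

```python
# Illustrative calculation of the validation figures reported above
# (recovery, RSD, LOD, LOQ). All measured values are placeholders.
import numpy as np

spiked_level = 0.02                                  # mg/kg spiking level
measured = np.array([0.0185, 0.0192, 0.0178,
                     0.0201, 0.0188, 0.0195])        # replicate results (assumed)

recovery = measured.mean() / spiked_level * 100.0    # % recovery
rsd = measured.std(ddof=1) / measured.mean() * 100.0 # relative standard deviation

# LOD/LOQ from calibration: 3.3 and 10 times the response SD over the slope.
sd_response, slope = 0.8, 2.1                        # assumed calibration statistics
lod = 3.3 * sd_response / slope                      # ng/mL
loq = 10.0 * sd_response / slope                     # ng/mL

print(f"recovery = {recovery:.1f} %, RSD = {rsd:.1f} %")
print(f"LOD = {lod:.2f} ng/mL, LOQ = {loq:.2f} ng/mL")
```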

Keywords: QuEChERS, activated carbon fibers, primary secondary amine, pesticides, sample preparation, carbon nanomaterials

Procedia PDF Downloads 276
871 Biomechanical Modeling, Simulation, and Comparison of Human Arm Motion to Mitigate Astronaut Task during Extra Vehicular Activity

Authors: B. Vadiraj, S. N. Omkar, B. Kapil Bharadwaj, Yash Vardhan Gupta

Abstract:

During manned exploration of space, missions will require astronaut crewmembers to perform Extra Vehicular Activities (EVAs) for a variety of tasks. These EVAs take place after long periods of operations in space, and in and around unique vehicles, space structures and systems. Considering the remoteness and time spans in which these vehicles will operate, EVA system operations should utilize common worksites, tools and procedures as much as possible to increase the efficiency of training and proficiency in operations. All of these preparations need to be carried out based on studies of astronaut motions. Until now, development and training activities associated with the planned EVAs in Russian and U.S. space programs have relied almost exclusively on physical simulators. These experimental tests are expensive and time-consuming. During the past few years, a strong increase has been observed in the use of computer simulations due to rapid developments in computer hardware and simulation software. Based on this idea, an effort to develop a computational simulation system to model human dynamic motion for EVA was initiated. This study focuses on the simulation of an astronaut moving orbital replaceable units into worksites or removing them from worksites. Our physics-based methodology helps fill the gap in the quantitative analysis of astronaut EVA by providing a multisegment human arm model. The simulation work described in this study improves on the realism of previous efforts by incorporating joint stops to account for the physiological limits of the range of motion. To demonstrate the utility of this approach, the human arm model is simulated virtually using ADAMS/LifeMOD® software. The kinematics of the astronaut's task is studied through joint angles and torques. The simulation results are validated against a numerical simulation based on the principles of the Newton-Euler method. Torques determined using the mathematical model are compared among the subjects to assess the smoothness and consistency of the task performed. We conclude that, due to the uncertain nature of exploration-class EVA, a virtual model developed using a multibody dynamics approach offers significant advantages over traditional human modeling approaches.
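As a simplified illustration of the Newton-Euler validation step, the sketch below computes joint torques for a planar two-link arm from a prescribed joint state; the segment parameters and posture are assumed values, not the study's multisegment LifeMOD model.

```python
# Simplified inverse-dynamics sketch for a planar two-link arm.
# Segment masses, lengths and the joint state are illustrative only.
import numpy as np

m1, m2 = 2.0, 1.6            # kg, upper arm and forearm+hand (assumed)
l1 = 0.30                    # m, upper-arm length
lc1, lc2 = 0.15, 0.18        # m, centre-of-mass distances
I1, I2 = 0.020, 0.015        # kg m^2, moments of inertia about each CoM
g = 0.0                      # microgravity during EVA

def joint_torques(q, dq, ddq):
    """tau = M(q) ddq + C(q, dq) dq + g(q) for the planar two-link arm."""
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    M = np.array([
        [m1*lc1**2 + I1 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I2,
         m2*(lc2**2 + l1*lc2*c2) + I2],
        [m2*(lc2**2 + l1*lc2*c2) + I2,
         m2*lc2**2 + I2],
    ])
    h = -m2 * l1 * lc2 * s2                           # Coriolis/centrifugal factor
    C = np.array([[h*dq[1], h*(dq[0] + dq[1])],
                  [-h*dq[0], 0.0]])
    grav = np.array([(m1*lc1 + m2*l1)*g*np.cos(q[0]) + m2*lc2*g*np.cos(q[0]+q[1]),
                     m2*lc2*g*np.cos(q[0]+q[1])])
    return M @ ddq + C @ dq + grav

tau = joint_torques(q=np.array([0.4, 0.8]),
                    dq=np.array([0.5, -0.3]),
                    ddq=np.array([1.0, 0.5]))
print("shoulder / elbow torques (N m):", np.round(tau, 3))
```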

Keywords: extra vehicular activity, biomechanics, inverse kinematics, human body modeling

Procedia PDF Downloads 342
870 Experimental Study Analyzing the Similarity Theory Formulations for the Effect of Aerodynamic Roughness Length on Turbulence Length Scales in the Atmospheric Surface Layer

Authors: Matthew J. Emes, Azadeh Jafari, Maziar Arjomandi

Abstract:

Velocity fluctuations of shear-generated turbulence are largest in the atmospheric surface layer (ASL) of nominal 100 m depth, which can lead to dynamic effects such as galloping and flutter on small physical structures on the ground when the turbulence length scales and the characteristic length of the physical structure are of the same order of magnitude. Turbulence length scales are a measure of the average size of the energy-containing eddies and are widely estimated using two-point cross-correlation analysis, converting the temporal lag to a separation distance through Taylor's hypothesis that the convection velocity is equal to the mean velocity at the corresponding height. Profiles of turbulence length scales in the neutrally-stratified ASL, as predicted by Monin-Obukhov similarity theory in Engineering Sciences Data Unit (ESDU) 85020 for single-point data and ESDU 86010 for two-point correlations, are largely dependent on the aerodynamic roughness length. Field measurements have shown that longitudinal turbulence length scales show significant regional variation, whereas length scales of the vertical component show consistent Obukhov scaling from site to site because of the absence of low-frequency components. Hence, the objective of this experimental study is to compare the similarity theory relationships between the turbulence length scales and the aerodynamic roughness length with those calculated from the autocorrelations and cross-correlations of field measurement velocity data at two sites: the Surface Layer Turbulence and Environmental Science Test (SLTEST) facility in a desert ASL in Dugway, Utah, USA, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) wind tower in a rural ASL in Jemalong, NSW, Australia. The results indicate that the longitudinal turbulence length scales increase with increasing aerodynamic roughness length, as opposed to the relationships derived by similarity theory correlations in the ESDU models. However, the ratio of the turbulence length scales in the lateral and vertical directions to the longitudinal length scales is relatively independent of surface roughness, showing consistent inner-scaling between the two sites and the ESDU correlations. Further, the diurnal variation of wind velocity due to changes in atmospheric stability conditions has a significant effect on the turbulence structure of the energy-containing eddies in the lower ASL.
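A sketch of the single-point length-scale estimate is given below: the autocorrelation of the velocity fluctuations is integrated up to its first zero crossing and the resulting time scale is converted to a length via Taylor's hypothesis (L = U·T). The velocity record here is synthetic; field data from SLTEST or the CSIRO tower would replace it.

```python
# Single-point integral-length-scale estimate via autocorrelation and
# Taylor's hypothesis. The velocity record below is a synthetic stand-in.
import numpy as np

fs = 20.0                                     # sampling frequency, Hz
n = int(300 * fs)                             # 5-minute record
U_mean = 8.0                                  # mean wind speed at this height, m/s
rng = np.random.default_rng(1)
u = np.zeros(n)                               # AR(1) stand-in for measured u'
for i in range(1, n):
    u[i] = 0.98 * u[i - 1] + rng.normal(scale=0.3)

def integral_length_scale(u_fluct, fs, U):
    u_fluct = u_fluct - u_fluct.mean()
    acf = np.correlate(u_fluct, u_fluct, mode="full")[u_fluct.size - 1:]
    acf /= acf[0]                             # normalised autocorrelation
    zero = np.argmax(acf <= 0.0)              # index of first zero crossing
    zero = zero if zero > 0 else acf.size     # fall back to the full record
    T = np.trapz(acf[:zero], dx=1.0 / fs)     # integral time scale, s
    return U * T                              # Taylor's frozen-turbulence hypothesis

print(f"longitudinal length scale ~ {integral_length_scale(u, fs, U_mean):.1f} m")
```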

Keywords: aerodynamic roughness length, atmospheric surface layer, similarity theory, turbulence length scales

Procedia PDF Downloads 124
869 An Analysis of the Recent Flood Scenario (2017) of the Southern Districts of the State of West Bengal, India

Authors: Soumita Banerjee

Abstract:

The State of West Bengal is watered by numerous rivers, which differ in nature between the northern and southern parts of the state. The southern part of West Bengal is mainly drained by the river Bhagirathi-Hooghly, and its major distributaries and tributaries have divided this major river basin into many sub-basins, such as the Ichamati-Bidyadhari, Pagla-Bansloi, Mayurakshi-Babla, Ajay, Damodar and Kangsabati sub-basins, to name a few. These rivers drain the districts of Bankura, Burdwan, Hooghly, Nadia, Purulia, Birbhum, Midnapore, Murshidabad, North 24-Parganas, Kolkata, Howrah and South 24-Parganas. The southern part of the state has a large number of flood-prone blocks; the factors responsible for flooding are the shape and size of the catchment area, its steep gradient from plateau to flat terrain, river bank erosion and siltation, tidal conditions especially in the lower Ganga Basin, and the very poor maintenance of embankments, which are mostly used as communication links. Along with these factors, the DVC (Damodar Valley Corporation) plays an important role in both generating (through the release of water) and controlling flood situations. This year the whole of Gangetic West Bengal was flooded due to high-intensity, long-duration rainfall and the release of water from the Durgapur Barrage. As most of the rivers are interstate in nature, floods also occur at times with the release of water from the dams of neighbouring states such as Jharkhand. Other than embankments, there are no structural measures for combatting floods in West Bengal. This paper tries to analyse the reasons behind this year's flood situation, especially with the help of climatic data collected from the India Meteorological Department, flood-related data from the Irrigation and Waterways Department, West Bengal, and GPM (Global Precipitation Measurement) data for rainfall analysis. Based on a threshold value derived from the available past flood data, it is possible to predict flood events that may occur in the near future, and with the help of social media the warning can be spread within a very short span of time to alert the public. On a larger, governmental scale, raising the settlements situated on either bank of the river can yield better results than building embankments.
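The threshold-exceedance idea mentioned above can be sketched as follows, assuming a threshold derived from rainfall accumulations observed before past floods; all values are placeholders.

```python
# Minimal sketch of a rainfall threshold-exceedance flood alert.
# The pre-flood accumulations and the percentile choice are assumptions.
import numpy as np

# 3-day rainfall accumulations (mm) recorded before documented past floods
pre_flood_rain = np.array([210.0, 185.0, 240.0, 198.0, 260.0, 225.0])
threshold = np.percentile(pre_flood_rain, 25)   # conservative lower quartile

def flood_alert(current_3day_rain_mm: float) -> bool:
    """True when the current accumulation exceeds the historical threshold."""
    return current_3day_rain_mm >= threshold

for rain in (120.0, 205.0):
    print(f"{rain:.0f} mm over 3 days -> alert: {flood_alert(rain)}")
```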

Keywords: dam failure, embankments, flood, rainfall

Procedia PDF Downloads 226
868 Tagging a Corpus of Media Interviews with Diplomats: Challenges and Solutions

Authors: Roberta Facchinetti, Sara Corrizzato, Silvia Cavalieri

Abstract:

Increasing interconnection between data digitalization and linguistic investigation has given rise to unprecedented potentialities and challenges for corpus linguists, who need to master IT tools for data analysis and text processing, as well as to develop techniques for efficient and reliable annotation in specific mark-up languages that encode documents in a format that is both human- and machine-readable. In the present paper, the challenges emerging from the compilation of a linguistic corpus will be taken into consideration, focusing on the English language in particular. To do so, the case study of the InterDiplo corpus will be illustrated. The corpus, currently under development at the University of Verona (Italy), represents a novelty in terms both of the data included and of the tag set used for its annotation. The corpus covers media interviews and debates with diplomats and international operators conversing in English with journalists who do not share the same lingua-cultural background as their interviewees. To date, this appears to be the first tagged corpus of international institutional spoken discourse, and it will be an important database not only for linguists interested in corpus analysis but also for experts operating in international relations. In the present paper, special attention will be dedicated to the structural mark-up, parts-of-speech annotation, and tagging of discursive traits, which are the innovative parts of the project, being the result of a thorough study to find the best solution to suit the analytical needs of the data. Several aspects will be addressed, with special attention to the tagging of the speakers' identity, the communicative events, and anthropophagic. Prominence will be given to the annotation of question/answer exchanges to investigate the interlocutors' choices and how such choices impact communication. Indeed, the automated identification of questions, in relation to the expected answers, is functional to understanding how interviewers elicit information as well as how interviewees provide their answers to fulfill their respective communicative aims. A detailed description of the aforementioned elements will be given using the InterDiplo-Covid19 pilot corpus. The results of our preliminary analysis will highlight the viable solutions found in the construction of the corpus in terms of XML conversion, metadata definition, tagging system, and discursive-pragmatic annotation to be included via Oxygen.
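The kind of structural mark-up described above might look like the following sketch, built with Python's ElementTree; the element and attribute names are illustrative assumptions, not the actual InterDiplo tag set.

```python
# Illustrative XML mark-up of speaker identity plus question/answer pairing.
# Element and attribute names are assumptions for demonstration only.
import xml.etree.ElementTree as ET

interview = ET.Element("interview", id="interdiplo-covid19-001", lang="en")

q = ET.SubElement(interview, "utterance", who="journalist", type="question", n="1")
q.text = "How has the pandemic changed day-to-day diplomatic work?"

a = ET.SubElement(interview, "utterance", who="diplomat", type="answer", respondsTo="1")
a.text = "Most of our consultations moved online within a few weeks."

# Serialise for inspection or further processing in an XML editor such as Oxygen
print(ET.tostring(interview, encoding="unicode"))
```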

Keywords: spoken corpus, diplomats’ interviews, tagging system, discursive-pragmatic annotation, English linguistics

Procedia PDF Downloads 187
867 Engineering Topology of Ecological Model for Orientation Impact of Sustainability Urban Environments: The Spatial-Economic Modeling

Authors: Moustafa Osman Mohammed

Abstract:

The modeling of a spatial-economic database is crucial in relating economic network structure to social development. Sustainability within the spatial-economic model gives attention to green businesses that comply with Earth's systems. The natural exchange patterns of ecosystems have consistent and periodic cycles that preserve energy and materials flow in systems ecology. When network topology influences formal and informal communication to function in systems ecology, ecosystems are postulated to balance the basic level of spatial sustainability outcomes (i.e., project compatibility success). These instrumentalities impact various aspects of the second level of spatial sustainability outcomes (i.e., participant social security satisfaction). The sustainability outcomes are modeled as a composite structure based on a network analysis model to calculate the prosperity of panel databases for efficiency values from 2005 to 2025. The database models the spatial structure to represent the value-orientation impact and the corresponding complexity of sustainability issues (e.g., building a consistent database necessary to approach the spatial structure; constructing the spatial-economic-ecological model; developing a set of sustainability indicators associated with the model; allowing quantification of social, economic and environmental impacts; using value-orientation as a set of important sustainability policy measures) and to demonstrate the reliability of the spatial structure. The structure of the spatial-ecological model is established for management schemes from the perspective of pollutants from multiple sources through input-output criteria. These criteria evaluate the spillover effect by conducting Monte Carlo simulations and sensitivity analysis within a unique spatial structure. The balance within “equilibrium patterns,” such as collective biosphere features, has a composite index of many distributed feedback flows, which have a dynamic structure related to physical and chemical properties that gradually extend into incremental patterns. While these spatial structures argue from ecological modeling of resource savings, static loads are not decisive from an artistic/architectural perspective. The model attempts to unify analytic and analogical spatial structure for the development of urban environments in a relational database setting, using optimization software to integrate the spatial structure, where the process is based on the engineering topology of systems ecology.
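The Monte Carlo / input-output step referred to above can be illustrated generically with a small Leontief model whose final demand is randomly perturbed; the coefficients are placeholders, not the study's spatial-economic database.

```python
# Generic Monte Carlo sketch around a small Leontief input-output model,
# x = (I - A)^(-1) d, with +/-10% uncertainty on final demand (assumed values).
import numpy as np

A = np.array([[0.10, 0.20, 0.05],          # technical coefficients (assumed)
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])
d_nominal = np.array([100.0, 80.0, 60.0])  # final demand by sector (assumed)
leontief_inverse = np.linalg.inv(np.eye(3) - A)

rng = np.random.default_rng(42)
n_draws = 5000
outputs = np.empty((n_draws, 3))
for k in range(n_draws):
    d = d_nominal * rng.normal(1.0, 0.1, size=3)   # perturbed demand draw
    outputs[k] = leontief_inverse @ d              # sectoral gross output

mean = outputs.mean(axis=0)
p5, p95 = np.percentile(outputs, 5, axis=0), np.percentile(outputs, 95, axis=0)
print("sectoral output mean:", np.round(mean, 1))
print("5th-95th percentile band:", np.round(p5, 1), np.round(p95, 1))
```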

Keywords: ecological modeling, spatial structure, orientation impact, composite index, industrial ecology

Procedia PDF Downloads 69
866 Hansen Solubility Parameter from Surface Measurements

Authors: Neveen AlQasas, Daniel Johnson

Abstract:

Membranes for water treatment are an established technology that attracts great attention due to its simplicity and cost-effectiveness. However, membranes in operation suffer from the adverse effect of membrane fouling. Bio-fouling is a phenomenon that occurs at the water-membrane interface and is a dynamic process that is initiated by the adsorption of dissolved organic material, including biomacromolecules, on the membrane surface. After initiation, attachment of microorganisms occurs, followed by biofilm growth. The biofilm blocks the pores of the membrane and consequently reduces the water flux. Moreover, the presence of a fouling layer can have a substantial impact on the membrane separation properties. Understanding the mechanism of the initiation phase of biofouling is a key point in eliminating biofouling on membrane surfaces. The adhesion and attachment of different fouling materials are affected by the surface properties of the membrane materials. Therefore, the surface properties of different polymeric materials have been studied in terms of their surface energies and Hansen solubility parameters (HSP). The difference between the combined HSP parameters (the HSP distance) allows prediction of the affinity of two materials for each other. The possibility of measuring the HSP of different polymer films via surface measurements, such as contact angle, has been thoroughly investigated. Knowing the HSP of a membrane material and the HSP of a specific foulant facilitates the estimation of the HSP distance between the two, and therefore the strength of attachment to the surface. Contact angle measurements with fourteen different solvents on five different polymeric films were carried out using the sessile drop method. Solvents were ranked as good or bad using different ranking methods, and the rankings were used to calculate the HSP of each polymeric film. The results clearly indicate the absence of a direct relation between the contact angle values of each film and the HSP distance between each polymer film and the solvents used. Therefore, estimating HSP via contact angle alone is not sufficient. However, it was found that if the surface tensions and viscosities of the solvents used are taken into account in the analysis of the contact angle values, a prediction of the HSP from contact angle measurements is possible. This was carried out via the training of a neural network model. The trained neural network model has three inputs: the contact angle value, and the surface tension and viscosity of the solvent used. The model is able to predict the HSP distance between the solvent used and the tested polymer (material). The HSP distance prediction is further used to estimate the total and individual HSP parameters of each tested material. The results showed an accuracy of about 90% for all five studied films.
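The HSP distance mentioned above follows the standard Hansen relation Ra² = 4(δD1−δD2)² + (δP1−δP2)² + (δH1−δH2)²; the sketch below evaluates it for assumed membrane and foulant parameters (the study estimates these from contact-angle measurements rather than tabulated values).

```python
# Standard Hansen distance between two materials, each described by its
# dispersive, polar and hydrogen-bonding parameters (dD, dP, dH) in MPa^0.5.
# The parameter values below are illustrative assumptions.
from math import sqrt

def hsp_distance(hsp_a, hsp_b):
    """Hansen distance Ra between two materials given as (dD, dP, dH)."""
    dD1, dP1, dH1 = hsp_a
    dD2, dP2, dH2 = hsp_b
    return sqrt(4 * (dD1 - dD2) ** 2 + (dP1 - dP2) ** 2 + (dH1 - dH2) ** 2)

membrane_polymer = (18.0, 10.0, 7.0)   # assumed HSP of a membrane material
model_foulant = (17.0, 8.0, 15.0)      # assumed HSP of an organic foulant
print(f"Ra = {hsp_distance(membrane_polymer, model_foulant):.2f} MPa^0.5")
```

A smaller Ra indicates greater affinity between the two materials, which is why the distance between a membrane polymer and a foulant is used above as a proxy for the strength of attachment.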

Keywords: surface characterization, Hansen solubility parameter estimation, contact angle measurements, artificial neural network model, surface measurements

Procedia PDF Downloads 94
865 Grain Structure Evolution during Friction-Stir Welding of 6061-T6 Aluminum Alloy

Authors: Aleksandr Kalinenko, Igor Vysotskiy, Sergey Malopheyev, Sergey Mironov, Rustam Kaibyshev

Abstract:

From a thermo-mechanical standpoint, friction-stir welding (FSW) represents a unique combination of very large strains, high temperature and relatively high strain rate. The material behavior under such extreme deformation conditions is not well studied; thus, microstructural examinations of friction-stir welded materials are of essential academic interest. Moreover, a clear understanding of the microstructural mechanisms operating during FSW should improve our understanding of the microstructure-property relationship in FSWed materials and thus enable us to optimize their service characteristics. Despite extensive research in this field, the microstructural behavior of some important structural materials remains incompletely understood. In order to contribute to this important work, the present study was undertaken to examine grain structure evolution during the FSW of 6061-T6 aluminum alloy. To provide an in-depth insight into this process, the electron backscatter diffraction (EBSD) technique was employed. Microstructural observations were conducted using an FEI Quanta 450 Nova field-emission-gun scanning electron microscope equipped with TSL OIM™ software. A suitable surface finish for EBSD was obtained by electro-polishing in a solution of 25% nitric acid in methanol. A 15° criterion was employed to differentiate low-angle boundaries (LABs) from high-angle boundaries (HABs). In the entire range of the studied FSW regimes, the grain structure evolved in the stir zone was found to be dominated by nearly equiaxed grains with a relatively high fraction of low-angle boundaries and a moderate-strength B/-B {112}<110> simple-shear texture. In all cases, the grain-structure development was found to be dictated by the extensive formation of deformation-induced boundaries and their gradual transformation into high-angle grain boundaries. Accordingly, grain subdivision was concluded to be the key microstructural mechanism. Remarkably, a gradual suppression of this mechanism was observed at relatively high welding temperatures. This surprising result has been attributed to a reduction in dislocation density due to annihilation phenomena.
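The 15° criterion used above to separate low-angle from high-angle boundaries can be sketched as a simple threshold on boundary misorientation angles; the angles below are synthetic placeholders for the EBSD/OIM boundary data.

```python
# Sketch of the 15-degree LAB/HAB criterion applied to boundary misorientations.
# The misorientation angles are synthetic placeholders, not EBSD data.
import numpy as np

rng = np.random.default_rng(7)
# placeholder boundary misorientations (degrees); 62.8 deg is the cubic maximum
misorientations = rng.uniform(2.0, 62.8, size=10_000)

THRESHOLD_DEG = 15.0
lab_fraction = np.mean(misorientations < THRESHOLD_DEG)   # low-angle boundaries
hab_fraction = 1.0 - lab_fraction                          # high-angle boundaries

print(f"LAB fraction: {lab_fraction:.2%}, HAB fraction: {hab_fraction:.2%}")
```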

Keywords: electron backscatter diffraction, friction-stir welding, heat-treatable aluminum alloys, microstructure

Procedia PDF Downloads 237