Search results for: critical speed range
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13928

13298 Public-Private Partnership for Critical Infrastructure Resilience

Authors: Anjula Negi, D. T. V. Raghu Ramaswamy, Rajneesh Sareen

Abstract:

Road infrastructure is emphatically one of the most critical pieces of infrastructure for the Indian economy. The country's road network of around 3.3 million km is the second largest in the world. Nationwide statistics released by the Ministry of Road Transport and Highways reveal an accident every minute and a death every 3.7 minutes. This scale of harm is a matter of grave concern and economically represents a national loss of 3% of GDP. The Union Budget 2016-17 allocated USD 12 billion annually for the development and strengthening of roads, an increase of 56% over the previous year, highlighting the importance of roads as critical infrastructure. National highways represent only 1.7% of total road length, yet carry over 40% of traffic. Further, trends from 2002-2011 on national highways indicate that, in less than a decade, accidents increased by 22% but deaths by 68%. The paramount inference is that accident severity has increased with time. Over these years, many measures have been taken to increase road safety, lessen damage to physical assets, and reduce vulnerabilities, building up resilient road infrastructure. Under the national highway development programme, policy makers proposed implementing around 20% of such road length in PPP mode. These roads were taken up on high-density traffic considerations and for qualitative implementation. To understand the resilience impacts and safety parameters enshrined in the various PPP concession agreements executed with private sector partners, such highway-specific projects are appraised. This research paper attempts to assess the safety measures taken and the possible reasons behind the increase in accident severity through these PPP case-study projects. 
It delves further into the safety features to understand the policy measures adopted in these cases, and introspects on the reasons for severity: whether it is an outcome of increased speeds, faulty road design and geometrics, driver negligence, or a lack of lane discipline at increased speed. The assessment studies these aspects both prior to PPP and under post-PPP project structures, based on a literature review and opinion surveys with sectoral experts. Looking forward, the Ministry of Road Transport and Highways estimates that strengthening the national highway network will require USD 77 billion within the next five years. The outcome of this paper provides policy makers with an understanding of the resilience measures adopted and possible options for an accessible and safe road network and its expansion, for possible policy initiatives and funding allocation in securing critical infrastructure.
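The severity inference can be checked with one line of arithmetic: if accidents grew by 22% while deaths grew by 68% over 2002-2011, then deaths per accident rose by roughly 38%. A minimal sketch:

```python
# 2002-2011 growth factors on national highways (from the figures above)
accidents_growth = 1.22   # +22% accidents
deaths_growth = 1.68      # +68% deaths

# change in deaths per accident, i.e. accident severity
severity_change = deaths_growth / accidents_growth - 1
print(f"severity up by {severity_change:.1%}")   # roughly +38%
```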

Keywords: national highways, policy, PPP, safety

Procedia PDF Downloads 257
13297 Analyzing the Critical Factors Influencing Employees' Tacit and Explicit Knowledge Sharing Intentions for Sustainable Competitive Advantage: A Systematic Review and a Conceptual Framework

Authors: Made Ayu Aristyana Dewi

Abstract:

Due to the importance of knowledge in today's competitive world, an understanding of how to enhance employee knowledge sharing has become critical. This study distinguishes employees' knowledge sharing intentions according to the type of knowledge to be shared, whether tacit or explicit. It provides a critical and systematic review of the current literature on knowledge sharing, with a particular focus on the most critical factors influencing employees' tacit and explicit knowledge sharing intentions. The extant literature was identified through four electronic databases, covering 2006 to 2016. The findings of this review reveal that most previous studies focus only on individual and social factors as the antecedents of knowledge sharing intention. They therefore do not consider other potential factors, such as organizational and technological factors, that may hinder the progress of knowledge sharing processes. Based on the findings of the critical review, a conceptual framework is proposed, which presents the antecedents of employees' tacit and explicit knowledge sharing intentions and their impact on innovation and sustainable competitive advantage.

Keywords: antecedents, explicit knowledge, individual factors, innovation, intentions, knowledge sharing, organizational factors, social factors, sustainable competitive advantage, tacit knowledge, technological factors

Procedia PDF Downloads 319
13296 Electrospray Deposition Technique of Dye Molecules in the Vacuum

Authors: Nouf Alharbi

Abstract:

Electrospray deposition has become an important technique that enables fragile, nonvolatile molecules to be deposited in situ in high-vacuum environments. It is also considered one of the ways to close the gap between basic surface science and molecular engineering, representing a gradual broadening of the range of scientific research. This paper discusses one of the most important techniques developed to help further develop and characterize the electrospray, by providing data collected using an image charge detection instrument. Charge detection mass spectrometry (CDMS) is used to measure the speed and charge distributions of the molecular ions. In addition, data obtained using SIMION simulations of the energies and masses of the molecular ions through the system are included, in order to refine the mass-selection process.

Keywords: charge, deposition, electrospray, image, ions, molecules, SIMION

Procedia PDF Downloads 133
13295 Asymptotic Expansion of Double Oscillatory Integrals: Contribution of Non Stationary Critical Points of the Second Kind

Authors: Abdallah Benaissa

Abstract:

In this paper, we consider the problem of asymptotics of double oscillatory integrals in the case of critical points of the second kind, where the order of contact between the boundary and a level curve of the phase is even; the situation where the order of contact is odd will be studied on another occasion. Complete asymptotic expansions are derived, and the coefficient of the leading term is computed in terms of the original data of the problem. Many authors have studied this problem using a variety of methods, but only in the special case where the order of contact is minimal; the most cited papers are one by Jones and Kline and another by Chako. Such integrals are encountered in many areas of science, especially in problems of optical diffraction.
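The paper treats double integrals with second-kind critical points; as a far simpler one-dimensional illustration of the same leading-term idea (a textbook case, not the paper's result), the stationary phase approximation of I(λ) = ∫₋₁¹ e^{iλt²} dt is √(π/λ)·e^{iπ/4}, with boundary corrections of order 1/λ:

```python
import numpy as np

lam = 200.0
t = np.linspace(-1.0, 1.0, 200001)
dt = t[1] - t[0]
integrand = np.exp(1j * lam * t**2)

# trapezoidal rule, written out to stay library-version agnostic
numeric = np.sum((integrand[:-1] + integrand[1:]) * dt / 2.0)

# stationary phase leading term from the interior critical point t = 0
leading = np.sqrt(np.pi / lam) * np.exp(1j * np.pi / 4)

rel_err = abs(numeric - leading) / abs(leading)
print(f"relative error of the leading term: {rel_err:.3%}")
```

At λ = 200 the leading term alone is already within a few percent of the numerically integrated value, consistent with O(1/λ) boundary contributions.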

Keywords: asymptotic expansion, double oscillatory integral, critical point of the second kind, optics diffraction

Procedia PDF Downloads 350
13294 Realizing Teleportation Using Black-White Hole Capsule Constructed by Space-Time Microstrip Circuit Control

Authors: Mapatsakon Sarapat, Mongkol Ketwongsa, Somchat Sonasang, Preecha Yupapin

Abstract:

Preliminary tests of a designed space-time control circuit, using a two-level system circuit with a 4-5 cm diameter microstrip, have been demonstrated for realistic teleportation. The work begins by calculating the parameters that allow a circuit to use alternating current (AC) at a specified frequency as the input signal. A method that causes electrons to move along the circuit perimeter starting at the speed of light was found satisfactory on the basis of wave-particle duality. It is able to establish superluminal speed (faster than light) for the electron cloud in the middle of the circuit, creating a timeline and a propulsive force as well. The timeline is formed by the cancellation of time stretching and shrinking in the relativistic regime, in which absolute time has vanished. In fact, both black holes and white holes are created from time signals at the beginning, where the speed of the electrons travels close to the speed of light. They entangle together like a capsule until they reach the point where they collapse and cancel each other out, which is controlled by the frequency of the circuit. Therefore, this method can be applied to large-scale circuits such as potassium, from which the same method can be applied to form a system to teleport living things. In fact, the black hole is a hibernation-system environment that allows living things to live and travel to the teleportation destination, which can be controlled in position and time relative to the speed of light. When the capsule reaches its destination, increasing the frequency brings the black holes and white holes, cancelling each other out, to a balanced environment. Therefore, life can safely teleport to the destination. The same system must therefore exist at both the origin and the destination, which could form a network. Moreover, it can also be applied to space travel. 
The design will be tested on a small system using a microstrip circuit that can be created in the laboratory on a limited budget and used in both wired and wireless systems.

Keywords: quantum teleportation, black-white hole, time, timeline, relativistic electronics

Procedia PDF Downloads 75
13293 Neuro-Fuzzy Approach to Improve Reliability in Auxiliary Power Supply System for Nuclear Power Plant

Authors: John K. Avor, Choong-Koo Chang

Abstract:

The transfer of electrical loads at power generation stations from the Standby Auxiliary Transformer (SAT) to the Unit Auxiliary Transformer (UAT), and vice versa, is through a fast bus transfer scheme. Fast bus transfer is a time-critical application where the transfer process depends on various parameters, so transfer schemes apply advanced algorithms to ensure power supply reliability and continuity. In a nuclear power generation station, supply continuity is essential, especially for critical Class 1E electrical loads. Bus transfers must, therefore, be executed accurately within 4 to 10 cycles in order to achieve safety system requirements. However, the main problem is that there are instances where transfer schemes have misoperated due to inaccurate interpretation of key parameters and, consequently, have failed to transfer several critical loads from the UAT to the SAT during a main generator trip event. Although several techniques have been adopted to develop robust transfer schemes, the combination of Artificial Neural Networks and Fuzzy Systems (Neuro-Fuzzy) has not been extensively used. In this paper, we apply the Neuro-Fuzzy concept to determine the plant operating mode and to dynamically predict the appropriate bus transfer algorithm to be selected, based on the first cycle of voltage information. The performance of the Sequential Fast Transfer and Residual Bus Transfer schemes was evaluated through simulation and integration of the Neuro-Fuzzy system. The objective of adopting the Neuro-Fuzzy approach in the bus transfer scheme is to utilize the signal validation capabilities of the artificial neural network, specifically the back-propagation algorithm, which is very accurate in learning completely new systems. 
This research presents the combined effect of an artificial neural network and a fuzzy system to accurately interpret key bus transfer parameters, such as the magnitude of the residual voltage, its decay time, and the associated phase angle of the residual voltage, in order to determine the possibility of a high-speed bus transfer for a particular bus and the corresponding transfer algorithm. This demonstrates potential for general applicability to improve the reliability of the auxiliary power distribution system. The scheme is implemented on the APR1400 nuclear power plant auxiliary system.
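As an illustration of the decision such a scheme has to make, the sketch below applies the commonly cited 1.33 pu volts-per-hertz closing criterion (from ANSI C50.41) to a residual-voltage phasor; the threshold handling, simplified signal model and function names here are illustrative assumptions, not the paper's Neuro-Fuzzy algorithm:

```python
import cmath
import math

def resultant_volts_per_hertz(v_bus, ang_bus_deg, f_bus,
                              v_src=1.0, ang_src_deg=0.0, f_src=60.0):
    """Simplified resultant V/Hz (pu) across the breaker at closing.

    v_* are in per unit, angles in degrees, frequencies in Hz.  The
    resultant voltage is normalised by the bus frequency in per unit.
    """
    bus = v_bus * cmath.exp(1j * math.radians(ang_bus_deg))
    src = v_src * cmath.exp(1j * math.radians(ang_src_deg))
    dv = abs(src - bus)                 # resultant voltage magnitude, pu
    return dv / (f_bus / f_src)

def transfer_decision(v_bus, ang_bus_deg, f_bus, limit=1.33):
    """Permit a fast (high-speed) transfer while the assumed
    ANSI C50.41-style limit is met; otherwise wait for the residual
    voltage to decay (residual bus transfer)."""
    ok = resultant_volts_per_hertz(v_bus, ang_bus_deg, f_bus) <= limit
    return "fast transfer" if ok else "residual transfer"
```

For example, a bus still near nominal voltage and nearly in phase permits fast transfer, while a bus whose residual voltage has drifted far out of phase forces a residual transfer.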

Keywords: auxiliary power system, bus transfer scheme, fuzzy logic, neural networks, reliability

Procedia PDF Downloads 171
13292 Analysis and Modeling of the Building’s Facades in Terms of Different Convection Coefficients

Authors: Enes Yasa, Guven Fidan

Abstract:

Building simulation tools need to better evaluate convective heat exchanges between external air and wall surfaces. Previous analysis has demonstrated the significant effect of convective heat transfer coefficient values on the room energy balance. Some authors have pointed out that the large discrepancies observed between widely used building thermal models can be attributed to the different correlations used to calculate or impose the values of the convective heat transfer coefficients. Moreover, numerous researchers have made sensitivity calculations and shown that the choice of convective heat transfer coefficient values can lead to differences of 20% to 40% in energy demands. The thermal losses to the ambient from a building surface or a roof-mounted solar collector represent an important portion of the overall energy balance and depend heavily on wind-induced convection. In an effort to help designers make better use of the correlations available in the literature for the external convection coefficients due to wind, a critical discussion and a suitable tabulation are presented, on the basis of the algebraic form of the coefficients and their dependence upon characteristic length and wind direction, in addition to wind speed. Many research works have been conducted since the early eighties on convection heat transfer problems inside buildings. In this context, a Computational Fluid Dynamics (CFD) program has been used to predict external convective heat transfer coefficients at external building surfaces. For the building facade model, the effects of wind speed and of temperature differences between the surfaces and the external air have been analyzed, showing different heat transfer conditions and coefficients. 
In order to provide further information on external convective heat transfer coefficients, a numerical study is presented in this paper, using a commercial Computational Fluid Dynamics package (CFX) to predict convective heat transfer coefficients at external building surfaces.
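For orientation on the algebraic form such correlations take, one widely quoted flat-plate correlation (McAdams) makes the external coefficient linear in wind speed; this is an illustrative sketch of a classic literature correlation, not a result of the present CFD study:

```python
def h_mcadams(wind_speed_m_s: float) -> float:
    """External convective heat transfer coefficient (W/m^2.K) from the
    classic McAdams flat-plate correlation, h = 5.7 + 3.8 * V,
    commonly quoted for wind speeds up to about 5 m/s."""
    return 5.7 + 3.8 * wind_speed_m_s

# illustrative spread: between calm air and a 5 m/s wind the coefficient
# roughly quadruples, which is one reason energy-demand estimates are so
# sensitive to the chosen correlation
for v in (0.0, 1.0, 3.0, 5.0):
    print(f"V = {v:.0f} m/s -> h = {h_mcadams(v):.1f} W/m2K")
```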

Keywords: CFD in buildings, external convective heat transfer coefficients, building facades, thermal modelling

Procedia PDF Downloads 421
13291 Brain-Computer Interface System for Lower Extremity Rehabilitation of Chronic Stroke Patients

Authors: Marc Sebastián-Romagosa, Woosang Cho, Rupert Ortner, Christy Li, Christoph Guger

Abstract:

Neurorehabilitation based on Brain-Computer Interfaces (BCIs) shows important rehabilitation effects for patients after stroke. Previous studies have shown improvements for patients who are in a chronic stage and/or have severe hemiparesis, groups that are particularly challenging for conventional rehabilitation techniques. For this publication, seven stroke patients in the chronic phase with hemiparesis in the lower extremity were recruited. All of them participated in 25 BCI sessions, about 3 times a week. The BCI system was based on Motor Imagery (MI) of paretic ankle dorsiflexion and healthy wrist dorsiflexion, with Functional Electrical Stimulation (FES) and avatar feedback. Assessments were conducted to evaluate changes in motor function before, after and during the rehabilitation training. The primary measures used were the 10-meter walking test (10MWT), Range of Motion (ROM) of ankle dorsiflexion, and Timed Up and Go (TUG). Results show a significant increase in gait speed on the primary measure, the 10MWT at fast velocity, of 0.18 m/s (IQR = [0.12, 0.20]), P = 0.016. Speed on the TUG also increased significantly, by 0.1 m/s (IQR = [0.09, 0.11]), P = 0.031. Active ROM increased by 4.65° (IQR = [1.67, 7.40]) after the rehabilitation training, P = 0.029. These functional improvements persisted at least one month after the end of the therapy. These outcomes show the feasibility of this BCI approach for chronic stroke patients and further support the growing consensus that such tools might develop into a new paradigm of rehabilitation for stroke patients. However, the results come from only seven chronic stroke patients, so the authors believe this approach should be further validated in broader randomized controlled studies involving more patients. MI and FES-based non-invasive BCIs are showing improvement in the gait rehabilitation of patients in the chronic stage after stroke. 
This could have an impact on the rehabilitation techniques used for these patients, especially when they are severely impaired and their mobility is limited.
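For context on the reported P-values with n = 7: a two-sided exact sign test in which all seven paired differences are positive gives P = 2·(1/2)⁷ ≈ 0.016, matching the 10MWT result. A sketch of that arithmetic (illustrative, not necessarily the authors' chosen test):

```python
from math import comb

def sign_test_two_sided(n_pos: int, n: int) -> float:
    """Exact two-sided sign test P-value for n_pos positive differences
    out of n non-zero paired differences (binomial with p = 1/2)."""
    k = min(n_pos, n - n_pos)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2**n
    return min(1.0, 2 * tail)

p = sign_test_two_sided(7, 7)
print(round(p, 3))   # 0.016
```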

Keywords: neuroscience, brain-computer interfaces, rehabilitation, stroke

Procedia PDF Downloads 92
13290 Dynamic Test for Stability of Columns in Sway Mode

Authors: Elia Efraim, Boris Blostotsky

Abstract:

Testing of columns in sway mode is performed in order to determine the maximal allowable load, limited by plastic deformation of the columns or their end connections, and the critical load, limited by column stability. The motivation for determining an accurate value of the critical force is as follows: 
- the critical load is the maximal allowable load for a given column configuration and can be used as a criterion of perfection; 
- it is used in calculations prescribed by standards for the design of structural elements under the combined action of compression and bending; 
- it is used for verification of theoretical analyses of stability at various column end conditions. 
In the present work, a new non-destructive method for the determination of the critical buckling load of columns in sway mode is proposed. The method allows measurements to be performed during tests under loads that exceed the column's critical load without loss of stability. The possibility of such loading is achieved by the structure of the loading system. The system is built as a frame with a rigid girder; one of the columns is the tested column and the other is an additional two-hinged strut. Loading of the frame is carried out by a flexible traction element attached to the girder. The load applied to the tested column can reach values that exceed the critical load, by suitable choice of the parameters of the traction element and the additional strut. The system's lateral stiffness and the column's critical load are obtained by the dynamic method. The experiment planning and the comparison between the experimental and theoretical values were performed based on the developed dependency of the lateral stiffness of the system on the vertical load, taking into account the semi-rigid connections of the column's ends. Agreement between the obtained results was established. The method can be used for testing real full-size columns in industrial conditions.
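The dynamic method rests on the near-linear fall of the squared sway-mode natural frequency with vertical load, reaching zero at the critical load; measuring the frequency at a few safe load levels and extrapolating f² to zero therefore yields the critical load without ever buckling the column. A minimal sketch under that assumed linear dependency, with illustrative numbers:

```python
import numpy as np

# measured sway-mode natural frequencies (Hz) at several safe vertical loads (kN)
loads = np.array([0.0, 20.0, 40.0, 60.0])
freqs = np.array([4.00, 3.46, 2.83, 2.00])   # illustrative measurements

# assumed model: f^2 = f0^2 * (1 - P / Pcr), i.e. f^2 is linear in P
slope, intercept = np.polyfit(loads, freqs**2, 1)
P_cr = -intercept / slope     # load at which the extrapolated f^2 vanishes
print(f"estimated critical load: {P_cr:.1f} kN")
```

All test loads stay well below the estimated critical load, which is what makes the method non-destructive.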

Keywords: buckling, columns, dynamic method, end-fixity factor, sway mode

Procedia PDF Downloads 351
13289 Alternative Ways of Knowing and the Construction of a Department Around a Common Critical Lens

Authors: Natalie Delia

Abstract:

This academic paper investigates the transformative potential of incorporating alternative ways of knowing within the framework of Critical Studies departments. Traditional academic paradigms often prioritize empirical evidence and established methodologies, potentially limiting the scope of critical inquiry. In response, our research seeks to illuminate the benefits and challenges associated with integrating alternative epistemologies, such as indigenous knowledge systems, artistic expressions, and experiential narratives. Drawing upon a comprehensive review of literature and case studies, we examine how alternative ways of knowing can enrich and diversify the intellectual landscape of Critical Studies departments. By embracing perspectives that extend beyond conventional boundaries, departments may foster a more inclusive and holistic understanding of critical issues. Additionally, we explore the potential impact on pedagogical approaches, suggesting that alternative ways of knowing can stimulate alternative teaching methods and enhance student engagement. Our investigation also delves into the institutional and cultural shifts necessary to support the integration of alternative epistemologies within academic settings. We address concerns related to validation, legitimacy, and the potential clash with established norms, offering insights into fostering an environment that encourages intellectual pluralism. Furthermore, the paper considers the implications for interdisciplinary collaboration and the potential for cultivating a more responsive and socially engaged scholarship. By encouraging a synthesis of diverse perspectives, Critical Studies departments may be better equipped to address the complexities of contemporary issues, encouraging a dynamic and evolving field of study. In conclusion, this paper advocates for a paradigm shift within Critical Studies departments towards a more inclusive and expansive approach to knowledge production. 
By embracing alternative ways of knowing, departments have the opportunity to not only diversify their intellectual landscape but also to contribute meaningfully to broader societal dialogues, addressing pressing issues with renewed depth and insight.

Keywords: critical studies, alternative ways of knowing, academic department, Wallerstein

Procedia PDF Downloads 72
13288 Highly Accurate Target Motion Compensation Using Entropy Function Minimization

Authors: Amin Aghatabar Roodbary, Mohammad Hassan Bastani

Abstract:

One of the defects of stepped-frequency radar systems is their sensitivity to target motion. In such systems, target motion causes range cell shift, false peaks, Signal-to-Noise Ratio (SNR) reduction and range profile spreading, because the power spectrum of each range cell interferes with adjacent range cells; this distorts the High Resolution Range Profile (HRRP) and disrupts the target recognition process. Compensation of Target Motion Parameter (TMP) effects should therefore be employed. In this paper, such a method for estimating the TMPs (velocity and acceleration), and consequently eliminating or suppressing their unwanted effects on the HRRP, based on entropy minimization, is proposed. The method is carried out in two major steps: in the first step, a discrete search is performed over the whole acceleration-velocity lattice in a specified interval, seeking a less accurate minimum point of the entropy function. In the second step, a 1-D search over velocity is done in the locus of the minimum for several constant-acceleration lines, in order to enhance the accuracy of the minimum point found in the first step. The provided simulation results demonstrate the effectiveness of the proposed method.
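The two-step search can be sketched as follows; the stepped-frequency waveform parameters and the point-target echo model below are illustrative assumptions, not values from the paper:

```python
import numpy as np

c = 3e8
N = 128                        # pulses per stepped-frequency burst
f0, df, T = 10e9, 10e6, 1e-3   # carrier, frequency step, pulse repetition interval
n = np.arange(N)
fn = f0 + n * df
tn = n * T

def hrrp_entropy(phase_history):
    """Entropy of the HRRP formed from one compensated burst."""
    profile = np.abs(np.fft.ifft(phase_history))
    p = profile**2 / np.sum(profile**2)
    return -np.sum(p * np.log(p + 1e-16))

# simulated echo of a point target with range R0, velocity v, acceleration a
R0, v_true, a_true = 150.0, 30.0, 5.0
target_range = R0 + v_true * tn + 0.5 * a_true * tn**2
echo = np.exp(-1j * 4 * np.pi * fn * target_range / c)

def compensate(v, a):
    motion = v * tn + 0.5 * a * tn**2
    return echo * np.exp(1j * 4 * np.pi * fn * motion / c)

# step 1: coarse discrete search over the acceleration-velocity lattice
v_grid = np.linspace(20.0, 40.0, 11)     # 2 m/s steps
a_grid = np.linspace(0.0, 10.0, 11)      # 1 m/s^2 steps
_, v_coarse, a_hat = min((hrrp_entropy(compensate(v, a)), v, a)
                         for v in v_grid for a in a_grid)

# step 2: fine 1-D search over velocity along the constant-acceleration line
v_fine_grid = np.linspace(v_coarse - 2.0, v_coarse + 2.0, 161)
v_hat = v_fine_grid[np.argmin([hrrp_entropy(compensate(v, a_hat))
                               for v in v_fine_grid])]
```

With the simulated target at v = 30 m/s and a = 5 m/s², the coarse lattice search lands on the correct cell (where compensation collapses the profile to a single peak of minimal entropy) and the fine velocity search refines it.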

Keywords: automatic target recognition (ATR), high resolution range profile (HRRP), motion compensation, stepped frequency waveform technique (SFW), target motion parameters (TMPs)

Procedia PDF Downloads 152
13287 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications

Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon

Abstract:

The focus in the automotive industry is to reduce human operator and machine interaction, so manufacturing becomes more automated and safer. The aim is to lower part cost and construction time, as well as defects in the parts that sometimes occur due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes that are placed in position by a robotic deposition head, a process described as Automated Fibre Placement (AFP). AFP is limited by the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material together involves a splice to secure the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand stitch method which, as the name suggests, requires human input. This investigation explores three methods for automated splicing, namely adhesive, binding and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding uses the binding agent already impregnated onto the tape, through the application of heat. The stitching method is used as a baseline against which to compare the new splicing methods. As the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, the splices have to meet certain specifications: (a) the splice must be able to endure a load of 50 N in tension, applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlaps, alignment and splicing parameters, and were then tested in tension using a tensile testing machine. 
Initial analysis explored the use of the impregnated binding agent present on the tape, as in the binding splicing technique. It analysed the effect of temperature and overlap on the strength of the splice. It was found that the optimum splicing temperature was at the higher end of the activation range of the binding agent, 100 °C. The optimum overlap was found to be 25 mm; it was found that there was no improvement in bond strength from 25 mm to 30 mm overlap. The final analysis compared the different splicing methods to the baseline of a stitched bond. It was found that the addition of an adhesive was the best splicing method, achieving a maximum load of over 500 N compared to the 26 N load achieved by a stitching splice and 94 N by the binding method.

Keywords: analysis, automated fibre placement, high speed, splicing

Procedia PDF Downloads 155
13286 Understanding the Fundamental Driver of Semiconductor Radiation Tolerance with Experiment and Theory

Authors: Julie V. Logan, Preston T. Webster, Kevin B. Woller, Christian P. Morath, Michael P. Short

Abstract:

Semiconductors, as the base of critical electronic systems, are exposed to damaging radiation while operating in space, nuclear reactor, and particle accelerator environments. What innate property allows some semiconductors to sustain little damage while others accumulate defects rapidly with dose is, at present, poorly understood. This limits the extent to which radiation tolerance can be implemented as a design criterion. To address this problem of determining the driver of semiconductor radiation tolerance, the first step is to generate a dataset of the relative radiation tolerance of a large range of semiconductors (exposed to the same radiation damage and characterized in the same way). To accomplish this, Rutherford backscatter channeling experiments are used to compare the displaced lattice atom buildup in InAs, InP, GaP, GaN, ZnO, MgO, and Si as a function of step-wise alpha particle dose. With this experimental information on radiation-induced incorporation of interstitial defects in hand, hybrid density functional theory electron densities (and their derived quantities) are calculated, and their gradient and Laplacian are evaluated to obtain key fundamental information about the interactions in each material. It is shown that simple, undifferentiated values (which are typically used to describe bond strength) are insufficient to predict radiation tolerance. Instead, the curvature of the electron density at bond critical points provides a measure of radiation tolerance consistent with the experimental results obtained. This curvature and the associated forces surrounding bond critical points disfavor localization of displaced lattice atoms at these points, favoring their diffusion toward perfect lattice positions. With this criterion to predict radiation tolerance, simple density functional theory simulations can be conducted on potential new materials to gain insight into how they may operate in demanding high radiation environments.

Keywords: density functional theory, GaN, GaP, InAs, InP, MgO, radiation tolerance, Rutherford backscatter channeling

Procedia PDF Downloads 174
13285 Establishment and Validation of Correlation Equations to Estimate Volumetric Oxygen Mass Transfer Coefficient (KLa) from Process Parameters in Stirred-Tank Bioreactors Using Response Surface Methodology

Authors: Jantakan Jullawateelert, Korakod Haonoo, Sutipong Sananseang, Sarun Torpaiboon, Thanunthon Bowornsakulwong, Lalintip Hocharoen

Abstract:

Process scale-up is essential for biological processes to increase production capacity from bench-scale bioreactors to either pilot or commercial production. Scale-up based on a constant volumetric oxygen mass transfer coefficient (KLa) is most often used as the scale-up factor, since oxygen supply is one of the key limiting factors for cell growth. However, estimating the KLa of culture vessels operated under different conditions is time-consuming, since KLa is considerably influenced by many factors. To overcome this issue, this study aimed to establish correlation equations between KLa and operating parameters in 0.5 L and 5 L bioreactors equipped with pitched-blade impellers and gas spargers. Temperature, gas flow rate, agitation speed, and impeller position were selected as process parameters, and equations were created using response surface methodology (RSM) based on a central composite design (CCD). In addition, the effects of these parameters on KLa were investigated. Based on RSM, second-order polynomial models for the 0.5 L and 5 L bioreactors were obtained with acceptable determination coefficients (R²) of 0.9736 and 0.9190, respectively. These models were validated, and experimental values showed differences of less than 10% from the predicted values. Moreover, RSM revealed that gas flow rate is the most significant parameter, while temperature and agitation speed were also found to greatly affect KLa in both bioreactors. Nevertheless, impeller position was shown to influence KLa only in the 5 L system. In summary, these modeled correlations can be used to accurately predict KLa within the specified range of process parameters for two different sizes of bioreactors, for further scale-up application.
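The model-building step amounts to fitting a second-order polynomial to the CCD runs by least squares; below is a minimal two-factor sketch with a hypothetical response surface (the design size, coefficients and noise level are illustrative, not the paper's four-factor data):

```python
from itertools import product

import numpy as np

# coded levels for a 2-factor central composite design (alpha = sqrt(2))
alpha = np.sqrt(2)
factorial = list(product([-1.0, 1.0], repeat=2))
axial = [(-alpha, 0.0), (alpha, 0.0), (0.0, -alpha), (0.0, alpha)]
center = [(0.0, 0.0)] * 3
X_design = np.array(factorial + axial + center)

def features(X):
    """Second-order polynomial terms: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# hypothetical "true" KLa response surface (h^-1) plus measurement noise
rng = np.random.default_rng(0)
true_beta = np.array([45.0, 8.0, 5.0, -3.0, -2.0, 1.5])
y = features(X_design) @ true_beta + rng.normal(0.0, 0.5, len(X_design))

# least-squares fit of the second-order model and its R^2
beta, *_ = np.linalg.lstsq(features(X_design), y, rcond=None)
y_hat = features(X_design) @ beta
r2 = 1 - np.sum((y - y_hat)**2) / np.sum((y - np.mean(y))**2)
print(f"R^2 = {r2:.4f}")
```

The paper's four-factor models follow the same pattern with more linear, quadratic and interaction terms in the feature matrix.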

Keywords: response surface methodology, scale-up, stirred-tank bioreactor, volumetric oxygen mass transfer coefficient

Procedia PDF Downloads 207
13284 Liquid-Liquid Transitions in Strontium Tellurite Melts

Authors: Rajinder Kaur, Atul Khanna

Abstract:

Transparent glass-ceramic and crystalline samples of the system xSrO-(100-x)TeO2, x = 7.5 and 8.5 mol%, were prepared by quenching melts in the temperature range of 700 to 950 °C. A very interesting effect of temperature on the glass-forming ability (GFA) of strontium tellurite melts is observed: the melts produce transparent glass-ceramics when solidified from lower temperatures in the range of 700-750 °C; however, when the melts are cooled from higher temperatures in the range of 850-950 °C, the GFA is significantly reduced and anti-glass and/or crystalline phases are produced on solidification. The effect of temperature on the GFA of strontium tellurite melts is attributed to the short-range structural transformation TeO₄ → TeO₃, which proceeds to the right with an increase in temperature. This isomerization reaction lowers the melt viscosity and enhances the crystallization tendency. It is concluded that high-temperature strontium tellurite melts freeze faster into crystalline phases than melts at lower temperatures; the latter supercool and solidify into glassy phases.

Keywords: anti-glass, ceramic, supercooled liquid, Raman spectroscopy

Procedia PDF Downloads 83
13283 The Effects of Cardiovascular Risk on Age-Related Cognitive Decline in Healthy Older Adults

Authors: A. Badran, M. Hollocks, H. Markus

Abstract:

Background: Common risk factors for cardiovascular disease are associated with age-related cognitive decline. There has been much interest in treating modifiable cardiovascular risk factors in the hope of reducing cognitive decline. However, there is currently no validated neuropsychological test to assess the subclinical cognitive effects of vascular risk. The Brief Memory and Executive Test (BMET) is a clinical screening tool, which was originally designed to be sensitive and specific to Vascular Cognitive Impairment (VCI), an impairment characterised by decline in frontally-mediated cognitive functions (e.g. Executive Function and Processing Speed). Objective: To cross-sectionally assess the validity of the BMET as a measure of the subclinical effects of vascular risk on cognition, in an otherwise healthy elderly cohort. Methods: Data from 346 participants (57 ± 10 years) without major neurological or psychiatric disorders were included in this study, gathered as part of a previous multicentre validation study for the BMET. Framingham Vascular Age was used as a surrogate measure of vascular risk, incorporating several established risk factors. Principal Components Analysis of the subtests was used to produce common constructs: an index for Memory and another for Executive Function/Processing Speed. Univariate General Linear models were used to relate Vascular Age to performance on Executive Function/Processing Speed and Memory subtests of the BMET, adjusting for Age, Premorbid Intelligence and Ethnicity. Results: Adverse vascular risk was associated with poorer performance on both the Memory and Executive Function/Processing Speed indices, adjusted for Age, Premorbid Intelligence and Ethnicity (p=0.011 and p<0.001, respectively). Conclusions: Performance on the BMET reflects the subclinical effects of vascular risk on cognition, in age-related cognitive decline. 
Vascular risk is associated with decline in both Executive Function/Processing Speed and Memory groups of subtests. Future studies are needed to explore whether treating vascular risk factors can effectively reduce age-related cognitive decline.
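The adjustment idea behind the reported associations can be sketched as a linear model that regresses a cognitive index on a vascular-risk surrogate while controlling for covariates. All data below are synthetic and the model is a simplification of the study's univariate GLMs:

```python
import numpy as np

# Minimal sketch: cognitive index ~ vascular age + covariates.
# Sample size matches the abstract; everything else is invented.
rng = np.random.default_rng(1)
n = 346
age = rng.uniform(40, 75, n)
premorbid_iq = rng.normal(100, 15, n)
vascular_age = age + rng.normal(5, 8, n)   # surrogate risk measure

# Assumed data-generating model: higher vascular age -> lower score
exec_speed = (50 - 0.20 * vascular_age - 0.10 * age
              + 0.05 * premorbid_iq + rng.normal(0, 2, n))

# Ordinary least squares with an intercept and covariates
X = np.column_stack([np.ones(n), vascular_age, age, premorbid_iq])
beta, *_ = np.linalg.lstsq(X, exec_speed, rcond=None)
print(round(beta[1], 3))  # adjusted vascular-age effect (negative)
```

The fitted coefficient on vascular age is the "adjusted" effect: the association with cognition that remains after chronological age and premorbid intelligence are held fixed.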

Keywords: age-related cognitive decline, vascular cognitive impairment, subclinical cerebrovascular disease, cognitive aging

Procedia PDF Downloads 471
13282 Adaptive Envelope Protection Control for the below and above Rated Regions of Wind Turbines

Authors: Mustafa Sahin, İlkay Yavrucuk

Abstract:

This paper presents a wind turbine envelope protection control algorithm that protects Variable Speed Variable Pitch (VSVP) wind turbines from damage during operation throughout their below and above rated regions, i.e., from cut-in to cut-out wind speed. The proposed approach uses a neural network that can adapt to turbines and their operating points. An algorithm monitors instantaneous wind and turbine states, predicts the wind speed that would push the turbine to a pre-defined envelope limit, and, when necessary, realizes an avoidance action. Simulations are carried out using the MS Bladed Wind Turbine Simulation Model for the NREL 5 MW wind turbine equipped with baseline controllers. In all simulations, under the proposed algorithm, the turbine operates safely within the allowable limits throughout the below and above rated regions. Two example cases, namely adaptation to turbine operating points in the below and above rated regions and the resulting protections, are investigated in simulations to demonstrate the capability of the proposed envelope protection system (EPS) algorithm, which reduces excessive wind turbine loads and is expected to increase the turbine's service life.
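The monitor-predict-avoid loop described above can be sketched as follows. The linear load model, thresholds, and correction gain are invented placeholders, not the paper's neural-network predictor:

```python
# Conceptual sketch of limit detection and avoidance; all models
# and numbers are illustrative assumptions.
def predicted_limit_wind_gust(load_margin, sensitivity=0.5):
    """Wind-speed increase (m/s) that would exhaust the load margin,
    assuming load grows linearly with wind speed at the given rate."""
    return load_margin / sensitivity

def envelope_protection(load, load_limit, pitch_cmd,
                        sensitivity=0.5, horizon=2.0):
    """Return a (possibly corrected) pitch command.

    If the wind-speed increase that would push the load to its
    envelope limit lies within the look-ahead horizon, add
    corrective pitch to shed load (the avoidance action).
    """
    margin = load_limit - load
    dv_limit = predicted_limit_wind_gust(margin, sensitivity)
    if dv_limit < horizon:                        # limit reached soon
        pitch_cmd += 0.5 * (horizon - dv_limit)   # avoidance action
    return pitch_cmd

# Far from the envelope: command passes through unchanged
print(envelope_protection(load=3.0, load_limit=5.0, pitch_cmd=1.0))
# Near the envelope: pitch is increased to reduce the load
print(envelope_protection(load=4.8, load_limit=5.0, pitch_cmd=1.0))
```

The real EPS replaces the linear gust model with an adaptive neural-network prediction of the limiting wind speed for the current operating point.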

Keywords: adaptive envelope protection control, limit detection and avoidance, neural networks, ultimate load reduction, wind turbine power control

Procedia PDF Downloads 136
13281 Low-Noise Amplifier Design for Improvement of Communication Range for Wake-Up Receiver Based Wireless Sensor Network Application

Authors: Ilef Ketata, Mohamed Khalil Baazaoui, Robert Fromm, Ahmad Fakhfakh, Faouzi Derbel

Abstract:

The integration of wireless communication, e.g., in real- or quasi-real-time applications, involves many challenges such as energy consumption, communication range, latency, quality of service, and reliability. To minimize latency without increasing energy consumption, wake-up receiver (WuRx) nodes have been introduced in recent works. Low-noise amplifiers (LNAs) are introduced to improve WuRx sensitivity but increase the supply current severely. Different WuRx approaches exist, with always-on, power-gated, or duty-cycled receiver designs. This paper presents a comparative study on improving the communication range and decreasing the energy consumption of wireless sensor nodes.
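A quick way to see why LNA gain matters for range: under a free-space path-loss model, every extra dB of usable receiver sensitivity scales the achievable range by 10^(dB/20). This is a back-of-envelope sketch with illustrative numbers, not a result from the paper:

```python
# Free-space link-budget rule of thumb: path loss grows 20 dB per
# decade of distance, so range scales as 10^(extra_dB / 20).
def range_gain(extra_sensitivity_db):
    return 10 ** (extra_sensitivity_db / 20)

# A hypothetical LNA contributing 12 dB of usable sensitivity
# roughly quadruples the free-space range.
print(round(range_gain(12), 2))  # ~3.98
```

In practice the gain in range is smaller indoors (path-loss exponents above 2), and the added supply current must be weighed against it, which is exactly the trade-off the comparative study addresses.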

Keywords: wireless sensor network, wake-up receiver, duty-cycled, low-noise amplifier, envelope detector, range study

Procedia PDF Downloads 112
13280 Effect of Temperature on the Binary Mixture of Imidazolium Ionic Liquid with Pyrrolidin-2-One: Volumetric and Ultrasonic Study

Authors: T. Srinivasa Krishna, K. Narendra, K. Thomas, S. S. Raju, B. Munibhadrayya

Abstract:

The densities, speeds of sound, and refractive indices of the binary mixture of the ionic liquid (IL) 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide ([BMIM][Imide]) and pyrrolidin-2-one (PY) were measured at atmospheric pressure over the temperature range T = (298.15 to 323.15) K. The excess molar volume, excess isentropic compressibility, excess speed of sound, partial molar volumes, and isentropic partial molar compressibility were calculated from the experimental density and speed-of-sound values. From the experimental data, the excess thermal expansion coefficients and the isothermal pressure coefficient of excess molar enthalpy at 298.15 K were calculated. The results were analyzed and discussed from the point of view of structural changes. Excess properties were calculated and correlated by the Redlich-Kister and Legendre polynomial equations, and binary coefficients were obtained. Values of excess partial volumes at infinite dilution for the binary system at different temperatures were calculated from the adjustable parameters obtained from the Legendre polynomial and Redlich-Kister smoothing equations. The deviation in refractive index, ΔnD, and the deviation in molar refraction, ΔRm, were calculated from the measured refractive index values. Equations of state and several mixing rules were used to predict the refractive indices of the binary mixtures; the predictions were compared with the experimental values by means of the standard deviation and found to be in excellent agreement. Using the Prigogine-Flory-Patterson (PFP) theory, the above thermodynamic mixing functions were calculated, and the results obtained from this theory were compared with the experimental results.
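The Redlich-Kister correlation mentioned above fits an excess property as V^E = x1·x2·Σ A_k (x1 − x2)^k, with the A_k obtained by least squares. A minimal sketch with invented coefficients (not the paper's fitted values):

```python
import numpy as np

# Redlich-Kister basis: columns x1*x2*(x1-x2)^k for k = 0..order-1
def rk_matrix(x1, order=3):
    x2 = 1.0 - x1
    d = x1 - x2
    return np.column_stack([x1 * x2 * d**k for k in range(order)])

true_A = np.array([-2.1, 0.7, -0.3])      # assumed RK coefficients
x1 = np.linspace(0.05, 0.95, 19)          # IL mole fractions
VE = rk_matrix(x1) @ true_A               # synthetic excess volume data

A_fit, *_ = np.linalg.lstsq(rk_matrix(x1), VE, rcond=None)
print(np.round(A_fit, 3))
```

The same least-squares machinery applies to the Legendre-polynomial form; only the basis columns change, and the fitted parameters then yield smoothed values such as the excess partial volumes at infinite dilution.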

Keywords: density, refractive index, speeds of sound, Prigogine-Flory-Patterson theory

Procedia PDF Downloads 408
13279 Distributed Real-Time Range Query Approximation in a Streaming Environment

Authors: Simon Keller, Rainer Mueller

Abstract:

Continuous range queries are a common means to handle mobile clients in high-density areas. Most existing approaches focus on settings in which the range queries for location-based services are more or less static, while the mobile clients within the ranges move. We focus on a category called dynamic real-time range queries (DRRQ), assuming that both the clients requested by the query and the inquirers are mobile. In consequence, the query parameters and the query results change continuously. This leads to two requirements: the ability to deal with an arbitrarily high number of mobile nodes (scalability) and the real-time delivery of range query results. In this paper, we present adaptive quad streaming (AQS), a highly decentralized solution for the requirements of DRRQs. AQS approximates the query results in favor of controlled real-time delivery and guaranteed scalability. While prior works commonly optimize data structures on the involved servers, AQS relies on a highly distributed cell structure, without such server-side data structures, that automatically adapts to changing client distributions. Instead of the commonly used request-response approach, we apply a lightweight streaming method in which no bidirectional communication and no storage or maintenance of queries are required at all.
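The cell-adaptation idea can be illustrated with a toy quadtree: a cell splits into four quadrants once it holds more clients than a capacity threshold, so dense regions get finer cells. This is only a sketch of the adaptation principle, not the AQS protocol itself:

```python
# Toy adaptive quad cells: split a cell into four quadrants when it
# holds more client positions than `capacity`. Purely illustrative.
def build_quad(points, x0, y0, size, capacity=4, depth=0, max_depth=8):
    """Return a nested dict of cells; leaf cells keep their points."""
    if len(points) <= capacity or depth == max_depth:
        return {"bounds": (x0, y0, size), "points": points}
    half = size / 2.0
    children = []
    for dx in (0, 1):
        for dy in (0, 1):
            cx, cy = x0 + dx * half, y0 + dy * half
            sub = [p for p in points
                   if cx <= p[0] < cx + half and cy <= p[1] < cy + half]
            children.append(build_quad(sub, cx, cy, half,
                                       capacity, depth + 1, max_depth))
    return {"bounds": (x0, y0, size), "children": children}

pts = [(0.1, 0.1), (0.2, 0.1), (0.15, 0.2), (0.8, 0.8), (0.12, 0.12),
       (0.18, 0.16)]
tree = build_quad(pts, 0.0, 0.0, 1.0)
print("children" in tree)  # the dense corner forces a split
```

In AQS the cells additionally stream approximate per-cell results to inquirers rather than answering stored queries, which is what removes the request-response round trips.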

Keywords: approximation of client distributions, continuous spatial range queries, mobile objects, streaming-based decentralization in spatial mobile environments

Procedia PDF Downloads 146
13278 Development and Validation of Selective Methods for Estimation of Valaciclovir in Pharmaceutical Dosage Form

Authors: Eman M. Morgan, Hayam M. Lotfy, Yasmin M. Fayez, Mohamed Abdelkawy, Engy Shokry

Abstract:

Two simple, selective, economic, safe, accurate, precise, and environmentally friendly methods were developed and validated for the quantitative determination of valaciclovir (VAL) in the presence of its related substances R1 (acyclovir) and R2 (guanine) in bulk powder and in the commercial pharmaceutical product containing the drug. Method A is a colorimetric method in which VAL selectively reacts with ferric hydroxamate, and the developed color is measured at 490 nm over a concentration range of 0.4-2 mg/mL, with a percentage recovery of 100.05 ± 0.58 and a correlation coefficient of 0.9999. Method B is a reversed-phase ultra-performance liquid chromatography (UPLC) technique, which is considered superior to high-performance liquid chromatography with respect to speed, resolution, solvent consumption, time, and cost of analysis. Efficient separation was achieved on an Agilent Zorbax CN column using ammonium acetate (0.1%) and acetonitrile as the mobile phase in a linear gradient program. The elution time for the separation was less than 5 min, and ultraviolet detection was carried out at 256 nm over a concentration range of 2-50 μg/mL, with a mean percentage recovery of 100.11 ± 0.55 and a correlation coefficient of 0.9999. The proposed methods were fully validated as per International Conference on Harmonization specifications and effectively applied to the analysis of valaciclovir in pure form and in tablet dosage form. Statistical comparison of the results obtained by the proposed and official or reported methods revealed no significant difference in the performance of these methods regarding accuracy and precision, respectively.
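The linearity and recovery figures quoted above come from a straight-line calibration over the working range. A minimal sketch with synthetic detector responses (the real validation would use replicate measured areas):

```python
import numpy as np

# Synthetic calibration over the UPLC working range 2-50 ug/mL;
# the detector-response model (slope/intercept) is invented.
conc = np.array([2, 5, 10, 20, 30, 40, 50], dtype=float)  # ug/mL
area = 1.8 * conc + 0.05                                  # assumed response

slope, intercept = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]       # correlation coefficient

# Percentage recovery of a nominal 25 ug/mL sample
sample_area = 1.8 * 25 + 0.05
found = (sample_area - intercept) / slope
recovery = 100 * found / 25
print(round(r, 4), round(recovery, 2))
```

Reported values such as r = 0.9999 and recovery 100.11 ± 0.55% are exactly these two statistics computed on real replicate data.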

Keywords: hydroxamic acid, related substances, UPLC, valaciclovir

Procedia PDF Downloads 247
13277 To Determine the Effects of Regulatory Food Safety Inspections on the Grades of Different Categories of Retail Food Establishments across the Dubai Region

Authors: Shugufta Mohammad Zubair

Abstract:

This study explores the effect of the new food inspection system, also called the new inspection color card scheme, on the reduction of critical and major food safety violations in Dubai. Data were collected from all retail food service establishments located in two zones of the city. Each establishment was visited twice, once before the launch of the new system and once after. In each visit, the inspection checklist was used as the evaluation tool for recording critical and major violations. The old format of the inspection checklist assigned scores based on the violations, whereas the new checklist format for the color card scheme is divided into administrative, general, major, and critical sections, giving inspectors a better classification for identifying the critical and major violations of concern. The study found clearer marking of violations after the launch of the new inspection system, with inspectors able to mark and categorize violations effectively. There was a 10% decrease in the number of food establishments previously given an A grade, while B and C grades also dropped considerably, by 5%.

Keywords: food inspection, risk assessment, color card scheme, violations

Procedia PDF Downloads 324
13276 Resonant Fluorescence in a Two-Level Atom and the Terahertz Gap

Authors: Nikolai N. Bogolubov, Andrey V. Soldatov

Abstract:

Terahertz radiation occupies a range of frequencies from roughly 100 GHz to approximately 10 THz, between microwaves and infrared waves. This range of frequencies holds promise for many useful applications in experimental applied physics and technology. At the same time, reliable, simple techniques for the generation, amplification, and modulation of electromagnetic radiation in this range are far from being developed enough to meet the requirements of practical usage, especially in comparison to the technological capabilities already achieved in other domains of the electromagnetic spectrum. This relative underdevelopment of a potentially very important range of the electromagnetic spectrum is known as the 'terahertz gap.' Among other things, technological progress in the terahertz area has been impeded by the lack of compact, low-energy-consumption, easily controlled, continuously radiating terahertz sources. Therefore, the development of new techniques serving this purpose, as well as of devices based on them, is an obvious necessity. No doubt, it would be highly advantageous to employ the simplest suitable physical systems as the critical components of such techniques and devices. The purpose of the present research was to show, by means of conventional methods of non-equilibrium statistical mechanics and the theory of open quantum systems, that a thoroughly studied two-level quantum system, also known as a one-electron two-level 'atom', driven by an external classical monochromatic high-frequency (e.g., laser) field, can radiate continuously at a much lower (e.g., terahertz) frequency in the fluorescent regime if the transition dipole moment operator of this 'atom' possesses permanent, non-equal diagonal matrix elements. This contradicts the conventional assumption, routinely made in quantum optics, that only the non-diagonal matrix elements persist.
The conventional assumption is pertinent to natural atoms and molecules and stems from the spatial inversion symmetry of their eigenstates. At the same time, such an assumption is no longer justified for artificially manufactured quantum systems of reduced dimensionality, such as quantum dots, which are often nicknamed 'artificial atoms' due to the striking similarity of their optical properties to those of real atoms. Possible routes toward experimental observation and practical implementation of the predicted effect are also discussed.
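The key structural assumption can be written out explicitly. A sketch of the dipole operator with the permanent (diagonal) matrix elements the abstract refers to:

```latex
% Dipole moment operator of the two-level 'atom' in its energy basis,
% with permanent diagonal elements as assumed in the text:
\hat{d} = d_{11}\,|1\rangle\langle 1|
        + d_{22}\,|2\rangle\langle 2|
        + d_{12}\left(|1\rangle\langle 2| + |2\rangle\langle 1|\right),
\qquad d_{11} \neq d_{22}.
```

When d11 = d22 (the inversion-symmetric case), the mean dipole oscillates only near the optical transition frequency; when d11 ≠ d22, the driven dipole also acquires a slowly oscillating component at a much lower frequency, which is the mechanism that allows low-frequency (terahertz-range) fluorescence under high-frequency driving.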

Keywords: terahertz gap, two-level atom, resonant fluorescence, quantum dot

Procedia PDF Downloads 271
13275 Speed Power Control of Double Field Induction Generator

Authors: Ali Mausmi, Ahmed Abbou, Rachid El Akhrif

Abstract:

This research paper aims to reduce the chattering phenomenon arising from sliding mode control applied to a wind energy conversion system based on the doubly fed induction generator (DFIG). Our goal is to offset the effect of parametric uncertainties, to come as close as possible to the dynamic response demanded by the control law in the ideal case, and therefore to force the active and reactive power generated by the DFIG to accurately follow the reference values provided to it. The simulation results using Matlab/Simulink demonstrate the efficiency and performance of the proposed technique while maintaining the simplicity of first-order sliding mode control.
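A first-order sliding-mode law with a chattering-softening correction can be sketched in a few lines. The boundary-layer saturation used here is one common anti-chattering device; the surface, gains, and signals are invented placeholders, not the paper's corrected equivalent command:

```python
# Minimal sliding-mode sketch: equivalent command plus a corrective
# term, with sign() replaced by a boundary-layer saturation to soften
# chattering. All gains and signals are illustrative.
def sat(s, phi=0.1):
    """Continuous approximation of sign(s) inside boundary layer phi."""
    return max(-1.0, min(1.0, s / phi))

def smc_power_control(p_ref, p_meas, u_eq, k=0.8):
    s = p_ref - p_meas          # sliding surface: power tracking error
    return u_eq + k * sat(s)    # equivalent command + corrective term

u = smc_power_control(p_ref=1.0, p_meas=0.95, u_eq=0.5)
print(round(u, 3))
```

Inside the boundary layer the control varies continuously with the error instead of switching at infinite frequency, which is what suppresses the high-frequency chattering on the generated powers.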

Keywords: control of speed, correction of the equivalent command, induction generator, sliding mode

Procedia PDF Downloads 377
13274 Customized Temperature Sensors for Sustainable Home Appliances

Authors: Merve Yünlü, Nihat Kandemir, Aylin Ersoy

Abstract:

Temperature sensors are used in home appliances not only to monitor the basic functions of the machine but also to minimize energy consumption and ensure safe operation. In parallel with the development of smart home applications and IoT algorithms, these sensors produce important data such as the frequency of use of the machine and user preferences, and they supply critical data for diagnostic processes and fault detection throughout an appliance's operational lifespan. Commercially available thin-film resistive temperature sensors have a well-established manufacturing procedure that allows them to operate over a wide temperature range. However, these sensors are over-designed for white goods applications: their operating temperature range is between -70 °C and 850 °C, while home appliance applications require only 23 °C to 500 °C. To ensure the operation of commercial sensors over this wide temperature range, a platinum coating of approximately 1 micron thickness is usually applied to the wafer. However, the use of platinum and the high coating thickness extend the sensor production process and therefore increase sensor costs. In this study, an attempt was made to develop a low-cost temperature sensor design and production method that meets the technical requirements of white goods applications. For this purpose, a custom design was made, and the design parameters (length, width, trim points, and thin-film deposition thickness) were optimized using statistical methods to achieve the desired resistivity value. To fabricate the thin-film resistive temperature sensors, a single-side-polished sapphire wafer was used. To enhance adhesion and insulation, a 100 nm silicon dioxide layer was deposited by inductively coupled plasma chemical vapor deposition. The lithography step was performed with a direct laser writer.
The lift-off process was performed after e-beam evaporation of 10 nm titanium and 280 nm platinum layers. Standard four-point-probe sheet resistance measurements were made at room temperature. Resistivity was measured with a probe station before and after annealing at 600 °C in a rapid thermal processing machine, and the temperature dependence between 25 and 300 °C was also tested. As a result of this study, a temperature sensor has been developed that has a lower coating thickness than commercial sensors but produces reliable data over the white goods application temperature range. A relatively simple but optimized production method has also been developed to produce this sensor.
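The four-point-probe measurement mentioned above uses the standard collinear-probe relation R_s = (π/ln 2)·V/I for a thin film, with resistivity ρ = R_s·t for film thickness t. A quick sketch with illustrative voltage/current values:

```python
import math

# Standard four-point-probe relations for a thin film with
# collinear probes; the V and I values are illustrative.
def sheet_resistance(v, i):
    """Sheet resistance (ohm/square): (pi / ln 2) * V / I."""
    return (math.pi / math.log(2)) * v / i

def resistivity(r_sheet, thickness_m):
    """Bulk resistivity (ohm*m) of a film of known thickness."""
    return r_sheet * thickness_m

rs = sheet_resistance(v=1.0e-3, i=1.0e-3)   # 1 mV drop at 1 mA
rho = resistivity(rs, 280e-9)               # e.g. the 280 nm Pt layer
print(round(rs, 3))  # ~4.532 ohm/sq
```

Since ρ of the deposited stack is roughly fixed, this relation is what links the trimmed film geometry and thickness to the target resistance of the sensor element.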

Keywords: thin film resistive sensor, temperature sensor, household appliance, sustainability, energy efficiency

Procedia PDF Downloads 73
13273 SVM-DTC Using for PMSM Speed Tracking Control

Authors: Kendouci Khedidja, Mazari Benyounes, Benhadria Mohamed Rachid, Dadi Rachida

Abstract:

In recent years, direct torque control (DTC) has become an alternative to the well-known vector control, especially for the permanent magnet synchronous motor (PMSM). However, it suffers from flux linkage and torque ripple. To solve this problem, conventional DTC is combined with space vector pulse width modulation (SVPWM). This control approach has achieved great success in the control of PMSMs and has become a research hotspot. This paper gives an introduction to the DTC and SVPWM-DTC control theory of the PMSM; each part of the system has been simulated in Matlab/Simulink based on the mathematical model. The simulation results prove that the improved SVPWM-DTC of the PMSM has good dynamic and static performance.

Keywords: PMSM, DTC, SVM, speed control

Procedia PDF Downloads 389
13272 Dynamic Test for Sway-Mode Buckling of Columns

Authors: Boris Blostotsky, Elia Efraim

Abstract:

Testing of columns in sway mode is performed in order to determine the maximal allowable load, limited by plastic deformation of the columns or their end connections, and the critical load, limited by the columns' stability. The motivation for determining an accurate value of the critical force is its use as follows: the critical load is the maximal allowable load for a given column configuration and can be used as a criterion of perfection; it is used in calculations prescribed by standards for the design of structural elements under the combined action of compression and bending; and it is used for the verification of theoretical stability analyses at various end conditions of columns. In the present work, a new non-destructive method for the determination of a column's critical buckling load in sway mode is proposed. The method allows measurements to be performed during tests under loads that exceed the column's critical load without losing stability. The possibility of such loading is achieved by the structure of the loading system. The system is built as a frame with a rigid girder; one of the columns is the tested column and the other is an additional two-hinged strut. Loading of the frame is carried out by a flexible traction element attached to the girder. By choosing the parameters of the traction element and the additional strut, the load applied to the tested column can reach values that exceed the critical load. The lateral stiffness of the system and the critical load of the column are obtained by the dynamic method. The experiment planning and the comparison between the experimental and theoretical values were based on the developed dependency of the lateral stiffness of the system on the vertical load, taking into account the semi-rigid connections of the column's ends. Agreement between the obtained results was established. The method can be used for testing real full-size columns in industrial conditions.
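The dynamic-method idea can be sketched numerically: for a sway frame, lateral stiffness falls approximately linearly with vertical load, K(P) ≈ K0(1 − P/Pcr), so measuring K (e.g., from vibration frequencies) at a few sub-critical load levels lets one extrapolate to K = 0 and recover Pcr without ever reaching it. The numbers below are invented, and the exactly linear K(P) is an idealization:

```python
import numpy as np

# Hypothetical "measured" stiffness values at four load levels,
# generated from an assumed linear stiffness-load relation.
K0, Pcr_true = 2.0e6, 5.0e5                # N/m and N (assumed)
P = np.array([0.5e5, 1.5e5, 2.5e5, 3.5e5]) # applied vertical loads (N)
K = K0 * (1 - P / Pcr_true)                # lateral stiffness (N/m)

# Linear extrapolation: the load at which stiffness vanishes is Pcr
slope, intercept = np.polyfit(P, K, 1)
Pcr_est = -intercept / slope
print(round(Pcr_est / 1e5, 3))  # -> 5.0 (i.e. 5.0e5 N)
```

This is why the test is non-destructive: all measurement points stay below (or, with the auxiliary strut carrying part of the load, safely around) the critical state, and the critical load is obtained by extrapolation.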

Keywords: buckling, columns, dynamic method, semi-rigid connections, sway mode

Procedia PDF Downloads 313
13271 Cross-Sectional Study of Critical Parameters on RSET and Decision-Making of At-Risk Groups in Fire Evacuation

Authors: Naser Kazemi Eilaki, Ilona Heldal, Carolyn Ahmer, Bjarne Christian Hagen

Abstract:

Elderly people and people with disabilities are recognized as at-risk groups when it comes to egress from a hazard zone to a safe place. A disability can negatively influence a person's escape time, and this becomes even more important when people from this target group live alone. While earlier studies have frequently addressed quantitative measurements of at-risk groups' physical characteristics (e.g., their travel speed), this paper considers the influence of at-risk groups' characteristics on their decisions and on determining better escape routes. Most evacuation models are based on mapping people's movement and behaviour onto a timeline of summed times for common activity types. Usually, timeline models estimate the required safe egress time (RSET) as the sum of four timespans: detection, alarm, premovement, and movement time, and compare it with the available safe egress time (ASET) to determine what influences the margin of safety. This paper presents a cross-sectional study identifying the most critical items affecting RSET and people's decision-making, with the possibility of including safety knowledge regarding people with physical or cognitive functional impairments. The result will contribute to increased knowledge on considering at-risk groups and disabilities when designing and developing safe escape routes. The expected results can be an asset for predicting the probabilistic behavioural patterns of at-risk groups and the components necessary for a framework for understanding how stakeholders can consider various disabilities when determining the margin of safety for a safe escape route.
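The timeline model described above reduces to a simple sum and comparison. The timespans below are illustrative values, not study data:

```python
# RSET timeline model: sum of the four phases, compared against ASET.
def rset(detection, alarm, premovement, movement):
    """Required safe egress time (s): sum of the four timeline phases."""
    return detection + alarm + premovement + movement

def safety_margin(aset, *phases):
    """Positive margin = safe; negative = evacuation takes too long."""
    return aset - rset(*phases)

# Longer premovement and movement phases (e.g. reduced mobility or
# slower decision-making) erode the margin for the same ASET.
m_typical = safety_margin(300, 30, 15, 60, 90)
m_at_risk = safety_margin(300, 30, 15, 120, 150)
print(m_typical, m_at_risk)  # 105 -15
```

The study's point is visible in the second case: for at-risk groups the premovement and movement phases can grow enough to turn a comfortable margin negative, which is why their decision-making, not just their travel speed, matters.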

Keywords: fire safety, evacuation, decision-making, at-risk groups

Procedia PDF Downloads 105
13270 Identifying Critical Success Factors for Data Quality Management through a Delphi Study

Authors: Maria Paula Santos, Ana Lucas

Abstract:

Organizations support their operations and decision-making with the data they have at their disposal, so the quality of these data is remarkably important. Data Quality (DQ) is currently a relevant issue, and the literature is unanimous in pointing out that poor DQ can result in large costs for organizations. The literature review identified and described 24 Critical Success Factors (CSF) for Data Quality Management (DQM), which were presented to a panel of experts, who ordered them according to their degree of importance using the Delphi method with the Q-sort technique, based on an online questionnaire. The study shows that the five most important CSFs for DQM are: definition of appropriate policies and standards, control of inputs, definition of a strategic plan for DQ, an organizational culture focused on data quality, and obtaining top management commitment and support.
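The aggregation step of such a Delphi/Q-sort round can be sketched as mean-rank ordering: each expert ranks the factors, and a lower mean rank means higher importance. The factor names below come from the abstract; the expert rankings are invented for illustration:

```python
from statistics import mean

# Toy rank aggregation for a Delphi round: three hypothetical experts
# rank three of the factors (1 = most important).
rankings = {
    "policies and standards": [1, 2, 1],
    "control of inputs":      [2, 1, 3],
    "strategic plan for DQ":  [3, 3, 2],
}

ordered = sorted(rankings, key=lambda f: mean(rankings[f]))
print(ordered[0])  # factor with the lowest mean rank
```

Real Delphi studies iterate this over several rounds, feeding the aggregate ordering back to the panel until the rankings stabilize.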

Keywords: critical success factors, data quality, data quality management, Delphi, Q-Sort

Procedia PDF Downloads 217
13269 Design Consideration of a Plastic Shredder in Recycling Processes

Authors: Tolulope A. Olukunle

Abstract:

Plastic waste management has emerged as one of the greatest challenges facing developing countries. This paper describes the design of the various components of a plastic shredder, a machine widely used in industries and recycling plants. The introduction of a plastic shredder machine will promote the reduction of post-consumer plastic waste accumulation and serve as a system for wealth creation and empowerment through the conversion of waste into economically viable products. In this design, a 10 kW electric motor with a rotational speed of 500 rpm was chosen to drive the shredder. A 400 mm pulley is mounted on the electric motor shaft at a center distance of 1000 mm from the shredder pulley, and the shredder's rotational speed is 300 rpm.
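The figures above imply the size of the driven pulley via the belt-drive speed ratio d_driven = d_driver · n_driver / n_driven. A quick sizing check (the paper does not state the driven diameter, so the result is a derived illustration):

```python
# Belt-drive speed ratio: driven pulley diameter from the quoted
# motor pulley diameter and the two shaft speeds.
def driven_pulley_diameter(d_driver_mm, n_driver_rpm, n_driven_rpm):
    return d_driver_mm * n_driver_rpm / n_driven_rpm

d = driven_pulley_diameter(400, 500, 300)
print(round(d, 1))  # ~666.7 mm on the shredder shaft
```

The 500:300 rpm reduction also multiplies the available torque at the shredder shaft by the same 5:3 ratio (less belt losses), which is what lets a modest motor tear through plastic stock.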

Keywords: design, machine, plastic waste, recycling

Procedia PDF Downloads 321