Search results for: setup errors.
503 Effect of Deficit Irrigation on Barley Yield and Water Productivity through Field Experiment and Modeling at Koga Irrigation Scheme, Amhara Region, Ethiopia
Authors: Bekalu Melis Alehegn, Dagnenet Sultan Alemu
Abstract:
Water scarcity is the most severe constraint on the expansion of agriculture in arid and semi-arid areas. Deficit irrigation at different growth stages is an important strategy for advancing the yield and water productivity of barley in water-scarce areas. A field experiment was conducted at the Koga irrigation scheme in Ethiopia to examine the barley yield response to different irrigation regimes and to validate the AquaCrop model. The experimental setup comprised six randomized treatments (T) with three replications for one irrigation season because of financial limitations. The irrigation regimes applied 100%, 75%, and 50% of the gross irrigation requirement at different growth stages, using trial and error to select the optimal water application level. The treatments were: no stress at all (T1), 25% stressed during all crop stages (T2), 50% stressed at all stages (T3), 50% stressed at the development stage (T4), 50% stressed at mid-stage (T5), and 50% stressed at the initial and late season (T6). Agronomic parameters, including canopy cover, biomass, and grain yield, were collected to compare the ground-based crop yield with the AquaCrop model. The results showed that 50% stress at the initial and late stages and 25% stress through the whole season allowed deficit irrigation to be practiced without significant yield reduction. The highest (2.62 kg/m³) and the lowest (2.03 kg/m³) water productivity were found under T3 and T4, respectively. Stress of 50% at the mid-growth stage and 50% stress of the full irrigation water requirement at all growth stages significantly (α = 5%) affected canopy expansion, biomass, and yield production. The AquaCrop model performed well in simulating the yield of barley for most of the treatments (R² = 0.84 and RMSE = 0.7 t ha⁻¹).
Keywords: AquaCrop, barley, deficit irrigation, irrigation regimes, water productivity
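Water productivity in this context is grain yield per unit of irrigation water applied. A minimal sketch of that ratio, with purely hypothetical yield and irrigation figures (the abstract does not report the underlying values):

```python
def water_productivity(grain_yield_kg_per_ha: float, water_applied_m3_per_ha: float) -> float:
    """Water productivity (kg/m^3) = grain yield / irrigation water applied."""
    return grain_yield_kg_per_ha / water_applied_m3_per_ha

# Hypothetical example: 3,500 kg/ha of barley produced with 1,450 m^3/ha of water
print(round(water_productivity(3500, 1450), 2))  # ~2.41 kg/m^3
```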
Procedia PDF Downloads 33
502 Exploring the Connectedness of Ad Hoc Mesh Networks in Rural Areas
Authors: Ibrahim Obeidat
Abstract:
Reaching a fully-connected network of mobile nodes in rural areas has received great attention among network researchers. This attention arises from the complexity and high cost of setting up the infrastructure needed for these networks, in addition to the low transmission range the nodes have. Terranet technology, as an example, employs an ad-hoc mesh network where each node has a transmission range not exceeding one kilometer; this means that two nodes are able to communicate with each other if they are at most one kilometer apart, otherwise a third party plays the role of the “relay”. In Terranet, as an idea to reduce network setup cost, every node in the network is considered a router responsible for forwarding data between other nodes, which results in a decentralized collaborative environment. Most research on Terranet presents the idea of encouraging mobile nodes to become more cooperative by keeping their devices in the “ON” state as long as possible while accepting to play the role of relay (router). This research addresses the issue of finding the percentage of nodes in an ad-hoc mesh network within rural areas that should play the role of relay in every time slot, in relation to the actual area coverage of the nodes, in order for the network to reach full connectivity. To the best of our knowledge, no prior research has discussed this issue. The research is done by means of an implementation that builds an adjacency matrix as an indicator of the connectivity between network members. This matrix is continually updated until each value in it refers to the number of hops that should be followed to reach from one node to another. After repeating the algorithm on different area sizes, different coverage percentages for each size, and different relay percentages several times, the extracted results show that for area coverage less than 5% we need 40% of the nodes to be relays, whereas a 10% relay percentage is enough for areas with node coverage greater than 5%.
Keywords: ad-hoc mesh networks, network connectivity, mobile ad-hoc networks, Terranet, adjacency matrix, simulator, wireless sensor networks, peer to peer networks, vehicular ad hoc networks, relay
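The hop-count matrix described above can be sketched with a breadth-first search over randomly placed nodes, then checked for full connectivity. The area size, node count and 1 km range below are illustrative assumptions, not the paper's simulation parameters:

```python
import random
from collections import deque

def hop_matrix(positions, radius):
    """Shortest-hop counts between all node pairs; -1 means unreachable."""
    n = len(positions)
    adj = [[((positions[i][0] - positions[j][0]) ** 2 +
             (positions[i][1] - positions[j][1]) ** 2) ** 0.5 <= radius and i != j
            for j in range(n)] for i in range(n)]
    hops = [[-1] * n for _ in range(n)]
    for s in range(n):
        hops[s][s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if adj[u][v] and hops[s][v] == -1:
                    hops[s][v] = hops[s][u] + 1
                    queue.append(v)
    return hops

# Illustrative setup: 50 nodes scattered over a 10 km x 10 km area, 1 km radio range
random.seed(1)
nodes = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(50)]
h = hop_matrix(nodes, radius=1.0)
fully_connected = all(h[i][j] >= 0 for i in range(len(nodes)) for j in range(len(nodes)))
print("fully connected:", fully_connected)
```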
Procedia PDF Downloads 286
501 Transforming Health Information from Manual to Digital (Electronic) World: A Reference and Guide
Authors: S. Karthikeyan, Naveen Bindra
Abstract:
Introduction: The aim is to update ourselves and understand the latest electronic formats available to healthcare providers and how they could be used and developed according to standards. The idea is to relate the keeping and maintaining of patients' manual medical records to patients' electronic information in a healthcare setup and, furthermore, to adopt the right technology depending upon the organization so as to improve the quality and quantity of our healthcare-providing skills. Objective: The concept is to explain the terms Electronic Medical Record (EMR), Electronic Health Record (EHR), and Personal Health Record (PHR) and to select the best option among the available electronic sources and software before implementation. The goal is to guide the end users so that the technology can be used without doubts or difficulties, and to evaluate the uses and barriers of EMR, EHR, and PHR. Aim and Scope: The target is to enable healthcare providers such as physicians, nurses, therapists, medical bill reimbursement staff, insurers, and government to assess patient information in an easy and systematic manner without diluting the confidentiality of the patient's information. Method: Health information technology can be implemented with the help of organisations that provide legal guidelines and support the healthcare provider. The main objective is to select correct, embedded, and affordable database management software capable of generating large-scale data. A parallel need is to know the latest software available in the market. Conclusion: The question lies in implementing the electronic information system with healthcare providers and organisations. Clinicians are the main users of the technology and can lead us to 'go paperless'. Technological change today is rapid, and the systems in use must stay up to date. Basically, the idea is to show how to store data electronically in a safe and secure manner. All three formats exemplify the fact that an electronic format has its own benefits as well as barriers.
Keywords: medical records, digital records, health information, electronic record system
Procedia PDF Downloads 463
500 Integral Form Solutions of the Linearized Navier-Stokes Equations without Deviatoric Stress Tensor Term in the Forward Modeling for FWI
Authors: Anyeres N. Atehortua Jimenez, J. David Lambraño, Juan Carlos Muñoz
Abstract:
The Navier-Stokes equations (NSE), which describe the dynamics of a fluid, have an important application in modeling the waves used for data inversion techniques such as full waveform inversion (FWI). In this work, a linearized version of the NSE and its variables, neglecting the deviatoric terms of the stress tensor, is presented. In order to obtain a theoretical model of the pressure p(x,t) and the wave velocity profile c(x,t), a wave equation for a visco-acoustic medium (VAE) is written. A change of variables, p(x,t) = q(x,t)h(ρ), is made in the equation for the VAE, leading to the well-known Klein-Gordon equation (KGE) describing waves propagating in a variable-density medium (ρ) with a dispersive term α²(x). The KGE is reduced to a Poisson equation and solved by proposing a specific function for α²(x) that accounts for energy dissipation and dispersion. Finally, an integral-form solution is derived for p(x,t), c(x,t) and kinematic variables such as the particle velocity v(x,t), the displacement u(x,t), and the bulk modulus function k_b(x,t). Further, this visco-acoustic formulation is compared with another form broadly used in geophysics; it is argued that this formalism is more general and, given its integral form, may offer several advantages from the modern parallel computing point of view. Applications to minimizing modeling errors in FWI applied to oil resources in geophysics are discussed.
Keywords: Navier-Stokes equations, modeling, visco-acoustic, inversion FWI
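For orientation only, the change of variables described above typically leads to a Klein-Gordon-type equation of the generic structure sketched below; the abstract does not give the exact coefficients or the form of h(ρ), so this is an assumed schematic, not the paper's derivation:

```latex
% Schematic structure only; the paper's exact coefficients and h(\rho) are not given in the abstract
p(x,t) = q(x,t)\,h(\rho), \qquad
\frac{\partial^{2} q}{\partial t^{2}} \;-\; c^{2}(x)\,\nabla^{2} q \;+\; \alpha^{2}(x)\, q \;=\; 0 .
```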
Procedia PDF Downloads 523
499 Evaluating Language Loss Effect on Autobiographical Memory by Examining Memory Phenomenology in Bilingual Speakers
Authors: Anastasia Sorokina
Abstract:
Gradual language loss, or attrition, has been well documented in individuals who migrate and become immersed in a different language environment. This phenomenon of first language (L1) attrition is an example of non-pathological language loss (not due to trauma) and can manifest itself in frequent pauses, searching for words, or grammatical errors. While the widely experienced loss of one’s first language might seem harmless, there is convincing evidence from the disciplines of developmental psychology, bilingual studies, and even psychotherapy that language plays a crucial role in the memory of self. In fact, we remember, store, and share personal memories with the help of language. Dual-coding theory suggests that deterioration of the language memory code could lead to forgetting. Yet, no one has investigated a possible connection between language loss and memory. The present study aims to address this research gap by examining a corpus of 1,495 memories of Russian-English bilinguals who are on a continuum of L1 (first language) attrition. Since phenomenological properties capture how well a memory is remembered, the following descriptors were selected: vividness, ease of recall, emotional valence, personal significance, and confidence in the event. A series of linear regression analyses were run to examine the possible negative effects of L1 attrition on autobiographical memory. The results revealed that L1 attrition might compromise perceived vividness and confidence in the event, which is indicative of memory deterioration. These findings suggest the importance of heritage language maintenance in immigrant communities that might be forced to assimilate, as language loss might negatively affect the memory of self.
Keywords: L1 attrition, autobiographical memory, language loss, memory phenomenology, dual coding
Procedia PDF Downloads 120
498 Passively Q-Switched 914 nm Microchip Laser for LIDAR Systems
Authors: Marco Naegele, Klaus Stoppel, Thomas Dekorsy
Abstract:
Passively Q-switched microchip lasers enable the great potential for sophisticated LiDAR systems due to their compact overall system design, excellent beam quality, and scalable pulse energies. However, many near-infrared solid-state lasers show emitting wavelengths > 1000 nm, which are not compatible with state-of-the-art silicon detectors. Here we demonstrate a passively Q-switched microchip laser operating at 914 nm. The microchip laser consists of a 3 mm long Nd:YVO₄ crystal as a gain medium, while Cr⁴⁺:YAG with an initial transmission of 98% is used as a saturable absorber. Quasi-continuous pumping enables single pulse operation, and low duty cycles ensure low overall heat generation and power consumption. Thus, thermally induced instabilities are minimized, and operation without active cooling is possible while ambient temperature changes are compensated by adjustment of the pump laser current only. Single-emitter diode pumping at 808 nm leads to a compact overall system design and robust setup. Utilization of a microchip cavity approach ensures single-longitudinal mode operation with spectral bandwidths in the picometer regime and results in short laser pulses with pulse durations below 10 ns. Beam quality measurements reveal an almost diffraction-limited beam and enable conclusions concerning the thermal lens, which is essential to stabilize the plane-plane resonator. A 7% output coupler transmissivity is used to generate pulses with energies in the microjoule regime and peak powers of more than 600 W. Long-term pulse duration, pulse energy, central wavelength, and spectral bandwidth measurements emphasize the excellent system stability and facilitate the utilization of this laser in the context of a LiDAR system.Keywords: diode-pumping, LiDAR system, microchip laser, Nd:YVO4 laser, passively Q-switched
Procedia PDF Downloads 134
497 Tuning of Kalman Filter Using Genetic Algorithm
Authors: Hesham Abdin, Mohamed Zakaria, Talaat Abd-Elmonaem, Alaa El-Din Sayed Hafez
Abstract:
The Kalman filter algorithm is an estimator known as the workhorse of estimation. It has an important application in missile guidance, especially when accurate data on the target are lacking due to noise or uncertainty. In this paper, a Kalman filter is used as a tracking filter in a simulated target-interceptor scenario with noise. It estimates the position, velocity, and acceleration of the target in the presence of noise. These estimates are needed for both proportional navigation and differential geometry guidance laws. A Kalman filter performs well at low noise, but large noise causes considerable errors that lead to performance degradation. Therefore, a new technique is required to overcome this defect by using tuning factors that adapt the Kalman filter to increasing noise. The values of the tuning factors lie between 0.8 and 1.2; they take a specific value for the first half of the range and a different value for the second half, and they are multiplied by the estimated values. These factors have optimum values and are altered with the change of the target heading. A genetic algorithm updates these selections to increase the maximum effective range, which was previously reduced by noise. The results show that the selected factors have other benefits, such as decreasing the minimum effective range that was increased earlier due to noise. In addition, the selected factors decrease the miss distance for all ranges in this direction of the target and expand the effective range, which increases the probability of kill.
Keywords: proportional navigation, differential geometry, Kalman filter, genetic algorithm
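A minimal 1-D constant-velocity Kalman filter in which a hypothetical multiplicative tuning factor scales the estimate, in the spirit of the 0.8-1.2 factors described above; the process model, noise levels and factor value are illustrative assumptions, not the paper's tracker or the GA-selected values:

```python
import numpy as np

def tuned_kalman(measurements, dt=0.1, q=1e-2, r=1.0, factor=1.0):
    """1-D constant-velocity Kalman filter; 'factor' scales the position estimate
    as a crude stand-in for the tuning factors selected by the genetic algorithm."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])               # only position is measured
    Q = q * np.eye(2)                         # process noise covariance
    R = np.array([[r]])                       # measurement noise covariance
    x = np.zeros((2, 1))
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(factor * x[0, 0])    # tuning factor applied to the estimate
    return estimates

# Illustrative run: noisy position measurements of a target moving at 5 m/s
rng = np.random.default_rng(0)
true_pos = 5.0 * np.arange(100) * 0.1
meas = true_pos + rng.normal(0, 1.0, size=100)
est = tuned_kalman(meas, factor=1.05)
```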
Procedia PDF Downloads 514
496 Utility, Satisfaction and Necessity of Urban Parks: An Empirical Study of Two Suburban Parks of Kolkata Metropolitan Area, India
Authors: Jaydip De
Abstract:
Urban parks are open places, green fields, and riverside gardens usually maintained by public or private authorities, or eventually by both jointly, and utilized for multidimensional purposes by the citizens. These parks are indeed the lungs of urban centers. In the urban socio-environmental setup, parks are the nucleus of social integration, community building, and physical development. In contemporary cities, these green places seem to act as a panacea for congested, complex, and stressful urban life. The alarmingly increasing urban population and the resultant congestion of high-rises are making life wearisome in neo-liberal cities. This has made citizens constantly seek open space and fresh air. In such circumstances, the mere existence of parks is not capable of satisfying the growing aspirations. Therefore, in this endeavour, a structured attempt is made to empirically identify the utility, visitors’ satisfaction, and future needs through the cases of two urban parks of the Kolkata Metropolitan Area, India. This study is principally based upon primary information collected through a visitors’ perception survey conducted at the Chinsurah ground and the Chandernagore strand. The correlations between different utility categories are identified and analyzed systematically. At the same time, indices like the Weighted Satisfaction Score (WSS), the Facility-wise Satisfaction Index (FSI), the Urban Park Satisfaction Index (UPSI), and the Urban Park Necessity Index (UPNI) are advocated to quantify visitors’ satisfaction and future necessities. It is found that the most important utilities are passive in nature. Simultaneously, visitors’ satisfaction levels are average, and their requirements are centred on the daily needs of the next generation, i.e., the children. Further, considering the visitors’ opinions, planning measures are promulgated for the holistic development of urban parks to revitalize the sustainability of citified life.
Keywords: citified life, future needs, visitors’ satisfaction, urban parks, utility
Procedia PDF Downloads 183
495 Mathematical Modeling of the Operating Process and a Method to Determine the Design Parameters in an Electromagnetic Hammer Using Solenoid Electromagnets
Authors: Song Hyok Choe
Abstract:
This study presented a method to determine the optimum design parameters based on a mathematical model of the operating process in a manual electromagnetic hammer using solenoid electromagnets. The operating process of the electromagnetic hammer depends on the circuit scheme of the power controller. Mathematical modeling of the operating process was carried out by considering the energy transfer process in the forward and reverse windings and the electromagnetic force acting on the impact and brake pistons. Using the developed mathematical model, the initial design data of a manual electromagnetic hammer proposed in this paper are encoded and analyzed in Matlab. On the other hand, a measuring experiment was carried out by using a measurement device to check the accuracy of the developed mathematical model. The relative errors of the analytical results for measured stroke distance of the impact piston, peak value of forward stroke current and peak value of reverse stroke current were −4.65%, 9.08% and 9.35%, respectively. Finally, it was shown that the mathematical model of the operating process of an electromagnetic hammer is relatively accurate, and it can be used to determine the design parameters of the electromagnetic hammer. Therefore, the design parameters that can provide the required impact energy in the manual electromagnetic hammer were determined using a mathematical model developed. The proposed method will be used for the further design and development of the various types of percussion rock drills.Keywords: solenoid electromagnet, electromagnetic hammer, stone processing, mathematical modeling
Procedia PDF Downloads 52
494 Chassis Level Control Using Proportional Integrated Derivative Control, Fuzzy Logic and Deep Learning
Authors: Atakan Aral Ormancı, Tuğçe Arslantaş, Murat Özcü
Abstract:
This study presents the design and implementation of an experimental chassis-level system for various control applications. Specifically, the height level of the chassis is controlled using proportional integrated derivative, fuzzy logic, and deep learning control methods. Real-time data obtained from height and pressure sensors installed in a 6x2 truck chassis, in combination with pulse-width modulation signal values, are utilized during the tests. A prototype pneumatic system of a 6x2 truck is added to the setup, which enables the Smart Pneumatic Actuators to function as if they were in a real-world setting. To obtain real-time signal data from height sensors, an Arduino Nano is utilized, while a Raspberry Pi processes the data using Matlab/Simulink and provides the correct output signals to control the Smart Pneumatic Actuator in the truck chassis. The objective of this research is to optimize the time it takes for the chassis to level down and up under various loads. To achieve this, proportional integrated derivative control, fuzzy logic control, and deep learning techniques are applied to the system. The results show that the deep learning method is superior in optimizing time for a non-linear system. Fuzzy logic control with a triangular membership function as the rule base achieves better outcomes than proportional integrated derivative control. Traditional proportional integrated derivative control improves the time it takes to level the chassis down and up compared to an uncontrolled system. The findings highlight the superiority of deep learning techniques in optimizing the time for a non-linear system, and the potential of fuzzy logic control. The proposed approach and the experimental results provide a valuable contribution to the field of control, automation, and systems engineering.Keywords: automotive, chassis level control, control systems, pneumatic system control
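A minimal discrete PID loop driving a chassis height toward a setpoint through a PWM-like command illustrates the baseline controller compared above (the fuzzy-logic and deep-learning controllers are not sketched). The first-order plant and the gains are illustrative assumptions, not the identified dynamics of the 6x2 truck test rig:

```python
def pid_level_control(setpoint_mm, sim_time=10.0, dt=0.01, kp=2.0, ki=0.8, kd=0.1):
    """Discrete PID controlling a crude first-order height model (illustrative only)."""
    height, integral, prev_err = 0.0, 0.0, 0.0
    history = []
    for _ in range(int(sim_time / dt)):
        err = setpoint_mm - height
        integral += err * dt
        derivative = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * derivative   # PWM-like command
        u = max(-100.0, min(100.0, u))                   # saturate duty cycle
        height += 0.5 * u * dt                           # toy first-order pneumatic response
        prev_err = err
        history.append(height)
    return history

levels = pid_level_control(setpoint_mm=50.0)
print(f"final height: {levels[-1]:.1f} mm")
```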
Procedia PDF Downloads 84
493 An Historical Revision of Change and Configuration Management Process
Authors: Expedito Pinto De Paula Junior
Abstract:
Current systems such as artificial satellites, airplanes, automobiles, turbines, power systems, and air traffic control are becoming increasingly more complex and/or highly integrated, as defined in SAE-ARP-4754A (Society of Automotive Engineers – Certification Considerations for Highly-Integrated or Complex Aircraft Systems standard). Among other processes, the development of such systems requires careful Change and Configuration Management (CCM) to establish and maintain product integrity. Understanding the maturity of the CCM process from a historical perspective is crucial for better implementation in the hardware and software lifecycle. The sense of work organization, in all fields of development, is directly related to the order and interrelation of the parties, changes over time, and the recording of these changes. Generally, it is observed that engineers, administrators, and managers invest more time in technical activities than in the organization of work. Moreover, these professionals are focused on solving complex problems with a purely technical bias. The CCM process is fundamental for the development, production, and operation of new products, especially in safety-critical systems. The objective of this paper is to open a discussion about a historical revision of CCM based on standards around the world, in order to understand and reflect on its importance across the years, the contribution of this process to technological evolution, the maturity of organizations in the system lifecycle project, and the benefits of CCM in avoiding errors and mistakes during the product lifecycle.
Keywords: changes, configuration management, historical, revision
Procedia PDF Downloads 206
492 Reliability of the Estimate of Earthwork Quantity Based on 3D-BIM
Authors: Jaechoul Shin, Juhwan Hwang
Abstract:
When the BIM method is applied to civil engineering in the area of free-formed structures, a comparatively high rate of construction productivity can be expected, as in the building engineering area. In this research, we examined quantity calculation errors by applying BIM-based 3D modeling quantity surveys to earthwork and to bridge construction (e.g., a PSC-I type segmental girder bridge and an integrated bridge of steel I-girders and an inverted-Tee bent cap), NATM (New Austrian Tunneling Method) tunnel construction, retaining wall construction, and culvert construction. We confirmed the high reliability of the BIM-based method in structural work, in which errors occurred in the range of -6% to +5%. In particular, rock-type quantity calculation errors in the range of -14% to +13% of the earthwork quantity reveal the problems of the existing 2D-CAD-based quantity calculation and the room for its improvement, and demonstrate the benefit and applicability of the BIM method in civil engineering. In addition, the routine method for earthwork quantities has an error tolerance comparable to that of structural work; however, the significant errors in the calculated rock-type quantities show that the reliability of 2D-based volume calculation can be a problem. By estimating earthwork quantities based on 3D-BIM, the proposed method achieves better reliability than the routine method. Considering the benefits of integrating information at the design, construction, and maintenance levels, the effectiveness and the applicability of introducing BIM design in civil engineering were confirmed.
Keywords: BIM, 3D modeling, 3D-BIM, quantity of earthwork
Procedia PDF Downloads 448
491 Study of Side Effects of Myopia Contact Correction by Soft Lenses and Orthokeratology Lenses among Medical Students
Authors: K. Iu. Hrizhymalska, O. Ol. Andrushkova, I. Iu. Pshenychna
Abstract:
Aim. To study and compare the side effects of contact correction of myopia by soft lenses and orthokeratology lenses among medical students. Patients and methods: 34 students (68 eyes) with moderate and severe myopia, who had used contact correction of myopia for 2-4 years, were examined. Some of them used soft lenses, while others used orthokeratology lenses. The methods used were: biomicroscopy of the eye surface, Schirmer's test, Norn's test, and a survey regarding satisfaction with use. Results. Corneal vascularization along the limbus was noted in 4 (5%) eyes of the examined students. In 8 (11%) eyes, symptoms of mild dry eye disease were detected. 2 (3%) eyes showed signs of meibomitis. Allergic conjunctivitis was observed in 4 (5%) eyes, and a purulent corneal ulcer was present in 1 eye. The surveys showed that orthokeratology lenses, unlike soft lenses, do not limit everyday activity (sports, tourism, swimming, etc.); they also do not cause discomfort during temperature changes and reduce existing symptoms of dry eye disease. Conclusion. Thus, contact correction of myopia is one of the optimal options among students, as it allows expanded physical and mental activity. However, taking into account the frequency of side effects in users of soft contact lenses, it is necessary to carry out prevention and treatment of myopia in medical students, follow the recommendations for use, and instill preservative-free tear substitutes with trehalose when symptoms of dry eye appear. Also, when side reactions occur, contact correction with soft lenses should be changed to orthokeratology lenses.
Keywords: correction, myopia, soft lenses, orthokeratology, spectacles, cornea, dry eye, side effects, refractive errors
Procedia PDF Downloads 56
490 An Electrocardiography Deep Learning Model to Detect Atrial Fibrillation on Clinical Application
Authors: Jui-Chien Hsieh
Abstract:
Background: 12-lead electrocardiography (ECG) is one of the most frequently used tools in clinical practice to detect atrial fibrillation (AF), which can degenerate into life-threatening stroke. Based on this study, AF detection by the clinically used 12-lead ECG device has a positive predictive value (PPV) of only 0.73-0.77. Objective: There is great demand for a new algorithm to improve the precision of AF detection using 12-lead ECG. Given the progress in artificial intelligence (AI), we developed an ECG deep model that has the ability to recognize AF patterns and reduce false-positive errors. Methods: In this study, (1) 570 12-lead ECG reports whose computer interpretation by the ECG device was AF were collected as the training dataset. The ECG reports were interpreted by 2 senior cardiologists, who confirmed that the precision of AF detection by the ECG device is 0.73; (2) 88 12-lead ECG reports whose computer interpretation generated by the ECG device was AF were used as the test dataset. The cardiologists confirmed that 68 of the 88 reports were AF and the others were not AF; the precision of AF detection by the ECG device is about 0.77; (3) a parallel 4-layer 1-dimensional convolutional neural network (CNN) was developed to identify AF based on limb-lead ECGs and chest-lead ECGs. Results: The results indicated that this model performs better on AF detection than the traditional computer interpretation of the ECG device in the 88 test samples, with 0.94 PPV, 0.98 sensitivity, and 0.80 specificity. Conclusions: Compared to the clinical ECG device, this AI ECG model raises the precision of AF detection from 0.77 to 0.94 and can have an impact on clinical applications.
Keywords: 12-lead ECG, atrial fibrillation, deep learning, convolutional neural network
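A minimal PyTorch sketch of a parallel two-branch 1-D CNN, one branch for limb leads and one for chest leads, merged before a binary AF/non-AF classifier. The layer sizes, sequence length and channel split are assumptions for illustration, not the architecture reported in the study:

```python
import torch
import torch.nn as nn

class TwoBranchECGNet(nn.Module):
    """Illustrative parallel 1-D CNN: limb-lead branch + chest-lead branch."""
    def __init__(self):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv1d(in_ch, 16, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
        self.limb = branch(6)    # leads I, II, III, aVR, aVL, aVF
        self.chest = branch(6)   # leads V1-V6
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, limb_x, chest_x):
        a = self.limb(limb_x).squeeze(-1)    # (batch, 32)
        b = self.chest(chest_x).squeeze(-1)  # (batch, 32)
        return self.head(torch.cat([a, b], dim=1))

model = TwoBranchECGNet()
limb = torch.randn(4, 6, 5000)    # 4 records, 6 limb leads, 5000 samples each
chest = torch.randn(4, 6, 5000)
logits = model(limb, chest)        # (4, 2): AF vs non-AF scores
```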
Procedia PDF Downloads 117
489 Estimation of a Finite Population Mean under Random Non Response Using Improved Nadaraya and Watson Kernel Weights
Authors: Nelson Bii, Christopher Ouma, John Odhiambo
Abstract:
Non-response is a potential source of errors in sample surveys. It introduces bias and large variance in the estimation of finite population parameters. Regression models have been recognized as one of the techniques of reducing bias and variance due to random non-response using auxiliary data. In this study, it is assumed that random non-response occurs in the survey variable in the second stage of cluster sampling, assuming full auxiliary information is available throughout. Auxiliary information is used at the estimation stage via a regression model to address the problem of random non-response. In particular, the auxiliary information is used via an improved Nadaraya-Watson kernel regression technique to compensate for random non-response. The asymptotic bias and mean squared error of the estimator proposed are derived. Besides, a simulation study conducted indicates that the proposed estimator has smaller values of the bias and smaller mean squared error values compared to existing estimators of finite population mean. The proposed estimator is also shown to have tighter confidence interval lengths at a 95% coverage rate. The results obtained in this study are useful, for instance, in choosing efficient estimators of the finite population mean in demographic sample surveys.Keywords: mean squared error, random non-response, two-stage cluster sampling, confidence interval lengths
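For reference, the basic (unimproved) Nadaraya-Watson estimate weights observed responses by a kernel on the auxiliary variable; the improved weights proposed in the paper are not reproduced here. A minimal sketch, with a Gaussian kernel, an arbitrary bandwidth, and simulated data as illustrative assumptions:

```python
import numpy as np

def nadaraya_watson(x_query, x_obs, y_obs, bandwidth=1.0):
    """Basic estimate m(x) = sum(K_h(x - x_i) * y_i) / sum(K_h(x - x_i))."""
    u = (x_query[:, None] - x_obs[None, :]) / bandwidth
    weights = np.exp(-0.5 * u ** 2)            # Gaussian kernel (unnormalised)
    return (weights @ y_obs) / weights.sum(axis=1)

# Illustrative data: auxiliary variable x fully observed, survey variable y noisy
rng = np.random.default_rng(42)
x = np.sort(rng.uniform(0, 10, 200))
y = 2.0 + 0.5 * x + rng.normal(0, 0.5, 200)
x_new = np.linspace(0, 10, 50)
y_hat = nadaraya_watson(x_new, x, y, bandwidth=0.8)
```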
Procedia PDF Downloads 144
488 Learner's Difficulties Acquiring English: The Case of Native Speakers of Rio de La Plata Spanish Towards Justifying the Need for Corpora
Authors: Maria Zinnia Bardas Hoffmann
Abstract:
Contrastive Analysis (CA) is the systematic comparison between two languages. It stems from the notion that errors are caused by interference of the L1 system in the acquisition process of an L2. CA represents a useful tool to understand the nature of learning and acquisition. Also, this particular method promises a path to understand the nature of the underlying cognitive processes, even when other factors such as intrinsic motivation and teaching strategies were found to best explain students' problems in acquisition. The CA study is justified not only by the need to gain a deeper understanding of the nature of SLA, but also as an invaluable source of clues, at a cognitive level, to the general processes involved in rule formation and abstract thought. It is relevant for cross-disciplinary studies and the fields of computational thought, natural language processing, applied linguistics, cognitive linguistics, and math theory. That being said, this paper also intends to address its own set of constraints and limitations. Finally, this paper: (a) aims at identifying some of the difficulties students may find in their learning process due to the nature of their specific variety of L1, Rio de la Plata Spanish (RPS), and (b) represents an attempt to discuss the necessity for specific models to approach CA.
Keywords: second language acquisition, applied linguistics, contrastive analysis, applied contrastive analysis English language department, meta-linguistic rules, cross-linguistics studies, computational thought, natural language processing
Procedia PDF Downloads 155
487 Automatic Registration of Rail Profile Based Local Maximum Curvature Entropy
Authors: Hao Wang, Shengchun Wang, Weidong Wang
Abstract:
To address the influence of train vibration and environmental noise on the measurement of track wear, we propose a method for the automatic extraction of the circular arc on the inner or outer side of the rail waist and achieve high-precision registration of the rail profile. Firstly, a polynomial fitting method based on a truncated residual histogram is proposed to find the optimal fitting curve of the profile and reduce the influence of noise on profile curve fitting. Then, based on the curvature distribution characteristics of the fitting curve, an interval search algorithm based on a dynamic window's maximum curvature entropy is proposed to realize the automatic segmentation of the small circular arc. Finally, we fit the two circle centers as matching reference points based on the small circular arcs on both sides and realize the alignment of the measured profile to the standard designed profile. The static experimental results show that the mean and standard deviation of the method are controlled within 0.01 mm, with small measurement errors and high repeatability. The dynamic test also verified the repeatability of the method in the train-running environment, and the dynamic measurement deviation of rail wear is within 0.2 mm with high repeatability.
Keywords: curvature entropy, profile registration, rail wear, structured light, train-running
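One hedged reading of the pipeline above: fit a polynomial to the profile points, evaluate curvature along the fit, and compute an entropy of the curvature distribution in a sliding window to locate the arc segment. The sketch below assumes a plane curve y = f(x) and Shannon entropy over a normalised curvature histogram; it is not the paper's exact truncated-residual-histogram fitting or interval search:

```python
import numpy as np

def curvature_entropy(x, y, degree=6, window=40, bins=16):
    """Curvature kappa = |y''| / (1 + y'^2)^(3/2) of a polynomial fit, plus the
    Shannon entropy of the curvature histogram in each sliding window."""
    coeffs = np.polyfit(x, y, degree)
    d1 = np.polyval(np.polyder(coeffs, 1), x)
    d2 = np.polyval(np.polyder(coeffs, 2), x)
    kappa = np.abs(d2) / (1.0 + d1 ** 2) ** 1.5
    entropies = []
    for i in range(len(x) - window):
        hist, _ = np.histogram(kappa[i:i + window], bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        entropies.append(-(p * np.log(p)).sum())
    return kappa, np.array(entropies)
```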
Procedia PDF Downloads 266
486 A Benchmark System for Testing Medium Voltage Direct Current (MVDC-CB) Robustness Utilizing Real Time Digital Simulation and Hardware-In-Loop Theory
Authors: Ali Kadivar, Kaveh Niayesh
Abstract:
The integration of green energy resources is a major focus, and the role of Medium Voltage Direct Current (MVDC) systems is expanding exponentially. However, the protection of MVDC systems against DC faults is a challenge that can have consequences for reliable and safe grid operation. This challenge reveals the need for MVDC circuit breakers (MVDC CBs), which are still in the infancy of their development. There is therefore a lack of MVDC CB standards, including thresholds for acceptable power losses and operating speed. To establish a baseline for comparison purposes, a benchmark system for testing future MVDC CBs is vital. The literature gives only the timing sequence of each switch, and the emphasis is on the topology, without an in-depth study of the DCCB control algorithm, as the circuit breaker control system is not yet systematic. A digital testing benchmark is designed for the proof of concept of simulation studies using software models. It can validate studies based on real-time digital simulators and Transient Network Analyzer (TNA) models. The proposed experimental setup uses data acquisition from accurate sensors installed on the tested MVDC CB and, through general-purpose inputs/outputs (GPIO) from the microcontroller and PC, achieves prototype studies on laboratory-based models utilizing Hardware-in-the-Loop (HIL) equipment connected to real-time digital simulators. The improved control algorithm of the circuit breaker can reduce the peak fault current and avoid arc re-ignition, helping the coordination of the DCCB in relay protection. Moreover, several research gaps are identified regarding case studies and evaluation approaches.
Keywords: DC circuit breaker, hardware-in-the-loop, real time digital simulation, testing benchmark
Procedia PDF Downloads 84
485 Design of a Real Time Closed Loop Simulation Test Bed on a General Purpose Operating System: Practical Approaches
Authors: Pratibha Srivastava, Chithra V. J., Sudhakar S., Nitin K. D.
Abstract:
A closed-loop system comprises a controller, a response system, and an actuating system. The controller, which is the system under test for us, excites the actuators based on feedback from the sensors in a periodic manner. The sensors should provide the feedback to the System Under Test (SUT) within a deterministic time after the excitation of the actuators. Any delay or miss in the generation of the response or in the acquisition of excitation pulses may lead to control-loop computation errors, which can be catastrophic in certain cases. Such systems are categorised as hard real-time systems and need special strategies. The real-time operating systems available on the market may be the best solutions for this kind of simulation, but they pose limitations such as the availability of the X Window System, graphical interfaces, and other user tools. In this paper, we present strategies that can be used on a general-purpose operating system (bare Linux kernel) to achieve deterministic deadlines and hence gain the added advantages of a GPOS with real-time features. Techniques are discussed for making the time-critical application run with the highest priority in an uninterrupted manner, reducing network latency for distributed architectures, and handling real-time data acquisition, data storage and retrieval, user interactions, etc.
Keywords: real time data acquisition, real time kernel preemption, scheduling, network latency
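On a bare Linux kernel, one common step toward such deterministic deadlines is to give the time-critical loop a real-time scheduling class and run it on a fixed period. A minimal sketch using the POSIX SCHED_FIFO policy (Linux-specific, requires root or CAP_SYS_NICE); the 1 ms period and priority value are illustrative assumptions:

```python
import os
import time

def run_realtime_loop(period_s=0.001, priority=80, iterations=1000):
    """Pin the current process to SCHED_FIFO and run a fixed-period control loop."""
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
    except PermissionError:
        print("warning: SCHED_FIFO needs root/CAP_SYS_NICE, running best-effort")
    next_deadline = time.monotonic()
    worst_jitter = 0.0
    for _ in range(iterations):
        next_deadline += period_s
        # ... acquire sensor data, compute the control output, drive actuators ...
        sleep_for = next_deadline - time.monotonic()
        if sleep_for > 0:
            time.sleep(sleep_for)
        worst_jitter = max(worst_jitter, time.monotonic() - next_deadline)
    return worst_jitter

print(f"worst observed jitter: {run_realtime_loop() * 1e6:.0f} us")
```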
Procedia PDF Downloads 151
484 The Use of a Miniature Bioreactor as Research Tool for Biotechnology Process Development
Authors: Muhammad Zainuddin Arriafdi, Hamudah Hakimah Abdullah, Mohd Helmi Sani, Wan Azlina Ahmad, Muhd Nazrul Hisham Zainal Alam
Abstract:
Biotechnology process development demands numerous experiments. In the laboratory environment, this is typically carried out using a shake flask platform. This paper presents the design and fabrication of a miniature bioreactor system as an alternative research tool for bioprocessing. The working volume of the reactor is 100 ml, and it is made of plastic. The main features of the reactor include stirring control, temperature control via an electrical heater, an aeration strategy through a miniature air compressor, and online optical cell density (OD) sensing. All sensors and actuators integrated into the reactor are controlled using an Arduino microcontroller platform. In order to demonstrate the functionality of such a miniature bioreactor concept, a series of batch Saccharomyces cerevisiae fermentation experiments were performed under various glucose concentrations. Results obtained from the fermentation experiments were used to estimate the Monod equation constants, namely the saturation constant, Ks, and the cells' maximum growth rate, μmax, so as to further highlight the usefulness of the device. The mixing capacity of the reactor was also evaluated. It was found that the results obtained from the miniature bioreactor prototype were comparable to results achieved using a shake flask. The unique features of the device compared to the shake flask platform are that the reactor mixing condition is much more comparable to a lab-scale bioreactor setup and that the prototype is integrated with an online OD sensor, so no sampling is needed to monitor the progress of the reaction. Operating cost and medium consumption are also low, thus making it much more economical for biotechnology process development compared to lab-scale bioreactors.
Keywords: biotechnology, miniature bioreactor, research tools, Saccharomyces cerevisiae
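The Monod constants mentioned above can be estimated by fitting μ = μmax·S/(Ks + S) to growth rates measured at the different glucose concentrations. A minimal sketch with invented data points; the actual constants and substrate levels from the experiments are not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def monod(S, mu_max, Ks):
    """Specific growth rate as a function of substrate concentration S (g/L)."""
    return mu_max * S / (Ks + S)

# Hypothetical (substrate g/L, specific growth rate 1/h) pairs for illustration only
S = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
mu = np.array([0.12, 0.20, 0.27, 0.34, 0.37, 0.39])

(mu_max, Ks), _ = curve_fit(monod, S, mu, p0=[0.4, 1.0])
print(f"mu_max ~ {mu_max:.2f} 1/h, Ks ~ {Ks:.2f} g/L")
```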
Procedia PDF Downloads 121
483 Quality of Age Reporting from Tanzania 2012 Census Results: An Assessment Using Whipple’s Index, Myer’s Blended Index, and Age-Sex Accuracy Index
Authors: A. Sathiya Susuman, Hamisi F. Hamisi
Abstract:
Background: Many socio-economic and demographic data are attributed by age and sex. However, a variety of irregularities and misstatements are noted with respect to age-related data, and less so for sex data because of the biological differences between the genders. Noting the misstatement/misreporting of age data despite its significant importance in demographic and epidemiological studies, this study aims at assessing the quality of the 2012 Tanzania Population and Housing Census results. Methods: Data for the analysis were downloaded from the Tanzania National Bureau of Statistics. Age heaping and digit preference were measured using summary indices, viz., Whipple’s index, Myers’ blended index, and the age-sex accuracy index. Results: The recorded Whipple’s index for both sexes was 154.43; males had the lowest index of about 152.65, while females had the highest index of about 156.07. For Myers’ blended index, the preferences were for digits ‘0’ and ‘5’, while the avoidances were for digits ‘1’ and ‘3’ for both sexes. Finally, the age-sex accuracy index stood at 59.8, where the sex ratio score was 5.82 and the age ratio scores were 20.89 and 21.4 for males and females, respectively. Conclusion: The evaluation of the 2012 PHC data using demographic techniques has shown the data to be inaccurate as a result of systematic heaping and digit preferences/avoidances. Thus, innovative methods in data collection, along with measuring and minimizing errors using statistical techniques, should be used to ensure the accuracy of age data.
Keywords: age heaping, digit preference/avoidance, summary indices, Whipple’s index, Myer’s index, age-sex accuracy index
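As a concrete reference, Whipple's index for ages 23-62 compares the population reporting ages ending in 0 or 5 with one-fifth of the total population in that range (about 100 indicates no heaping, 500 maximal heaping). A minimal sketch over a single-year age distribution; the array values are placeholders, not the census counts:

```python
import numpy as np

def whipple_index(pop_by_single_age):
    """Whipple's index for heaping on terminal digits 0 and 5, ages 23-62.
    pop_by_single_age[a] is the population reporting exact age a."""
    ages = np.arange(23, 63)                # ages 23..62 inclusive
    total = pop_by_single_age[ages].sum()
    heaped_ages = np.arange(25, 61, 5)      # 25, 30, ..., 60
    heaped = pop_by_single_age[heaped_ages].sum()
    return 100.0 * heaped / (total / 5.0)

# Placeholder age distribution for ages 0..99 with artificial heaping injected
rng = np.random.default_rng(3)
pop = rng.integers(900, 1100, size=100).astype(float)
pop[np.arange(25, 61, 5)] *= 1.5
print(f"Whipple's index: {whipple_index(pop):.1f}")
```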
Procedia PDF Downloads 479
482 Implications of Climate Change and World Uncertainty for Gender Inequality: Global Evidence
Authors: Kashif Nesar Rather, Mantu Kumar Mahalik
Abstract:
The discourse surrounding climate change has gained considerable traction, with a discernible emphasis on its nuanced and consequential impact on gender inequality. Concurrently, escalating global tensions are contributing to heightened uncertainty, potentially exerting influence on gender disparities. Within this framework, this study attempts to empirically investigate the implications of climate change and world uncertainty on the gender inequality for a balanced panel of 100 economies between 1995 to 2021. The estimated models also control for the effects of globalisation, economic growth, and education expenditure. The panel cointegration tests establish a significant long-run relationship between the variables of the study. Furthermore, the PMG-ARDL (Panel mean group-Autoregressive distributed lag model) estimation technique confirms that both climate change and world uncertainty perpetuate the global gender inequalities. Additionally, the results establish that globalisation, economic growth, and education expenditure exert a mitigating influence on gender inequality, signifying their role in diminishing gender disparities. These findings are further confirmed by the FGLS (Feasible Generalized Least Squares) and DKSE (Driscoll-Kraay Standard Errors) regression methods. Potential policy implications for mitigating the detrimental gender ramifications stemming from climate change and rising world uncertainties are also discussed.Keywords: gender inequality, world uncertainty, climate change, globalisation., ecological footprint
Procedia PDF Downloads 43
481 Hazardous Effects of Metal Ions on the Thermal Stability of Hydroxylammonium Nitrate
Authors: Shweta Hoyani, Charlie Oommen
Abstract:
HAN-based liquid propellants are perceived as a potential substitute for hydrazine in space propulsion. Storage stability for a long service life in orbit is one of the key concerns for HAN-based monopropellants because of their reactivity with metallic and non-metallic impurities, which could be entrained from the surfaces of fuel tanks and tubes. The end result of this reactivity directly affects the handling, performance, and storability of the liquid propellant. Gaseous products resulting from the decomposition of the propellant can lead to deleterious pressure build-up in storage vessels. The partial loss of an energetic component can change the ignition and combustion behavior and alter the performance of the thruster. The effect of the most plausible metals, namely iron, copper, chromium, nickel, manganese, molybdenum, zinc, titanium, and cadmium, on the thermal decomposition mechanism of HAN has been investigated in this context. Studies involving different concentrations of metal ions and HAN at different preheat temperatures have been carried out. The effect of metal ions on the decomposition behavior of HAN has been studied earlier in the context of the use of HAN as a gun propellant; the current investigation, however, pertains to the decomposition mechanism of HAN in the context of its use as a monopropellant for space propulsion. The decomposition onset temperature, rate of weight loss, and heat of reaction were studied using DTA-TGA, and the total pressure rise and rate of pressure rise during decomposition were evaluated using an in-house built constant-volume batch reactor. In addition, the reaction mechanism and product profile were studied using a TGA-FTIR setup. Iron and copper displayed the maximum reactivity. Initial results indicate that iron and copper show a sensitizing effect at concentrations as low as 50 ppm with a 60% HAN solution at 80°C. On the other hand, 50 ppm zinc does not display any effect on the thermal decomposition of even a 90% HAN solution at 80°C.
Keywords: hydroxylammonium nitrate, monopropellant, reaction mechanism, thermal stability
Procedia PDF Downloads 426
480 Analysis of Cascade Control Structure in Train Dynamic Braking System
Authors: B. Moaveni, S. Morovati
Abstract:
In recent years, the increasing use of railway transportation, especially in developing countries, has drawn more attention to the control systems of railway vehicles. Consequently, designing and implementing modern control systems to improve the operating performance of trains and locomotives has become one of the main concerns of researchers. The dynamic braking system is an important safety system which controls the amount of braking torque generated by the traction motors in order to keep the adhesion coefficient between the wheel-sets and the rail within an optimal bound. The adhesion force plays an important role in controlling the braking distance and preventing the wheels from slipping during the braking process. The cascade control structure is one of the best control methods for a wide range of industrial plants in the presence of disturbances and errors. This paper presents a cascade control structure based on two simple forward controllers with two feedback loops to control the slip ratio and braking torque. In this structure, the inner loop controls the angular velocity and the outer loop controls the longitudinal velocity of the locomotive, whose dynamics are slower than those of the angular velocity. By controlling the torque of the DC traction motors, this control structure tries to track the desired velocity profile so as to achieve the predefined braking distance and to control the slip ratio. Simulation results are employed to show the effectiveness of the introduced methodology in the dynamic braking system.
Keywords: cascade control, dynamic braking system, DC traction motors, slip control
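A minimal sketch of the two-loop structure described above: an outer controller on longitudinal velocity produces the reference for an inner controller on wheel angular velocity, which sets the braking torque. The simple proportional gains and the toy vehicle/wheel/adhesion model are illustrative assumptions, not the paper's locomotive model:

```python
def cascade_braking(v0=20.0, dt=0.001, steps=20000, kp_outer=1.0, kp_inner=5000.0,
                    r=0.5, mass=5.0e4, inertia=60.0, c_slip=1.5e5):
    """Toy cascade braking loop: outer P-controller on train speed sets a
    wheel-speed reference; inner P-controller on wheel speed sets brake torque."""
    v, omega = v0, v0 / r                       # start rolling without slip
    for k in range(steps):
        v_ref = max(0.0, v0 - 0.5 * k * dt)     # commanded 0.5 m/s^2 deceleration
        # outer (slow) loop: longitudinal-velocity error -> wheel-speed reference
        omega_ref = max(0.0, v_ref + kp_outer * (v_ref - v)) / r
        # inner (fast) loop: angular-velocity error -> braking torque
        torque = max(0.0, kp_inner * (omega - omega_ref))
        # toy plant: creep force from slip decelerates the train, torque slows the wheel
        slip = (v - omega * r) / max(v, 0.1)
        force = c_slip * max(-0.2, min(0.2, slip))
        v = max(0.0, v + (-force / mass) * dt)
        omega = max(0.0, omega + ((force * r - torque) / inertia) * dt)
    return v, omega, slip

print(cascade_braking())
```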
Procedia PDF Downloads 367
479 Use of a New Multiplex Quantitative Polymerase Chain Reaction Based Assay for Simultaneous Detection of Neisseria Meningitidis, Escherichia Coli K1, Streptococcus agalactiae, and Streptococcus pneumoniae
Authors: Nastaran Hemmati, Farhad Nikkhahi, Amir Javadi, Sahar Eskandarion, Seyed Mahmuod Amin Marashi
Abstract:
Neisseria meningitidis, Escherichia coli K1, Streptococcus agalactiae, and Streptococcus pneumoniae cause 90% of bacterial meningitis cases. Almost all infected people die or suffer irreversible neurological complications. Therefore, it is essential to have a diagnostic kit with the ability to quickly detect these fatal infections. The project involved 212 patients from whom cerebrospinal fluid samples were obtained. After total genome extraction and multiplex quantitative polymerase chain reaction (qPCR), the presence or absence of each infectious agent was determined by comparison with standard strains. The specificity, sensitivity, positive predictive value, and negative predictive value calculated were 100%, 92.9%, 50%, and 100%, respectively. Due to the high specificity and sensitivity of the designed primers, they can be used instead of bacterial culture, which takes at least 24 to 48 hours. A remarkable benefit of this method is the speed (up to 3 hours) at which the procedure can be completed. It is also worth noting that this method can reduce the unintentional personnel errors which may occur in the laboratory. On the other hand, as this method simultaneously identifies four common agents that cause bacterial meningitis, it could be used as an auxiliary diagnostic technique in laboratories, particularly in emergency medicine.
Keywords: cerebrospinal fluid, meningitis, quantitative polymerase chain reaction, simultaneous detection, diagnosis testing
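The four reported figures are standard functions of the 2x2 table comparing the assay call against the reference result. A minimal sketch of those formulas; the counts below are placeholders, not the study's confusion matrix:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }

# Placeholder counts for illustration only (not the 212-sample study data)
print(diagnostic_metrics(tp=40, fp=5, fn=3, tn=152))
```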
Procedia PDF Downloads 123
478 Quantification and Evaluation of Tumors Heterogeneity Utilizing Multimodality Imaging
Authors: Ramin Ghasemi Shayan, Morteza Janebifam
Abstract:
Tumors are typically inhomogeneous. Regional variations in necrosis, metabolic activity, proliferation, and cellular composition are observed. There is growing evidence that solid tumors may contain subpopulations of cells with different genotypes and phenotypes. These distinct populations of cancer cells can interact in significant ways and may differ in their sensitivity to drugs. Most tumors show biological heterogeneity [1-3], including heterogeneity in genomic subtypes, variations in the expression of growth factors and pro- and anti-angiogenic factors [4-9], and variations in the tumoural microenvironment. These can present as differences between the tumors of different individuals. For instance, O6-methylguanine-DNA methyltransferase, a DNA repair enzyme, is silenced by methylation of the gene promoter in half of glioblastomas (GBM), contributing to chemosensitivity and improved survival. From the outset, there has been particular interest in the use of diffusion-weighted imaging (DWI) and dynamic contrast-enhanced MRI (DCE-MRI). DWI sensitizes MRI to water diffusion within the extravascular extracellular space (EES) and is governed by the size and configuration of the cell population. In addition, DCE-MRI uses dynamic acquisition of images during and after the injection of an intravenous contrast agent. Signal changes are then converted to absolute concentrations of contrast, permitting analysis using pharmacokinetic models. The PET modality provides unique biological specificity, allowing dynamic or static imaging of biological molecules labelled with positron-emitting isotopes (for example, 15O, 18F, 11C). The technique is associated with a considerable radiation dose, which limits repeated measurements, particularly when used together with computed tomography (CT). Finally, it is of great interest to measure the regional hemoglobin state, which could be combined with DCE-CT vascular physiology measurements to generate significant insights for understanding tumor hypoxia.
Keywords: heterogeneity, computerized tomography scan, magnetic resonance imaging, PET
Procedia PDF Downloads 155
477 Artificial Neural Network Modeling and Genetic Algorithm Based Optimization of Hydraulic Design Related to Seepage under Concrete Gravity Dams on Permeable Soils
Authors: Muqdad Al-Juboori, Bithin Datta
Abstract:
Hydraulic structures such as gravity dams are classified as essential structures, and have the vital role in providing strong and safe water resource management. Three major aspects must be considered to achieve an effective design of such a structure: 1) The building cost, 2) safety, and 3) accurate analysis of seepage characteristics. Due to the complexity and non-linearity relationships of the seepage process, many approximation theories have been developed; however, the application of these theories results in noticeable errors. The analytical solution, which includes the difficult conformal mapping procedure, could be applied for a simple and symmetrical problem only. Therefore, the objectives of this paper are to: 1) develop a surrogate model based on numerical simulated data using SEEPW software to approximately simulate seepage process related to a hydraulic structure, 2) develop and solve a linked simulation-optimization model based on the developed surrogate model to describe the seepage occurring under a concrete gravity dam, in order to obtain optimum and safe design at minimum cost. The result shows that the linked simulation-optimization model provides an efficient and optimum design of concrete gravity dams.Keywords: artificial neural network, concrete gravity dam, genetic algorithm, seepage analysis
Procedia PDF Downloads 226
476 Apollo Clinical Excellence Scorecard (ACE@25): An Initiative to Drive Quality Improvement in Hospitals
Authors: Anupam Sibal
Abstract:
Whatever is measured tends to improve. With a view to objectively measuring and improving clinical quality across the Apollo Group Hospitals, the initiative of ACE @ 25 (Apollo Clinical Excellence@25) was launched on Jan 09. ACE @ 25 is a clinically balanced scorecard incorporating 25 clinical quality parameters involving complication rates, mortality rates, one-year survival rates and average length of stay after major procedures like liver and renal transplant, CABG, TKR, THR, TURP, PTCA, endoscopy, large bowel resection and MRM covering all major specialties. Also included are hospital acquired infection rates, pain satisfaction and medication errors. Benchmarks have been chosen from the world’s best hospitals. There are weighted scores for outcomes color coded green, orange and red. The cumulative score is 100. Data is reported monthly by 43 Group Hospitals online on the Lighthouse platform. Action taken reports for parameters falling in red are submitted quarterly and reviewed by the board. An audit team audits the data at all locations every six months. Scores are linked to appraisal of the medical head and there is an “ACE @ 25” Champion Award for the highest scorer. Scores for different parameters were variable from green to red at the start of the initiative. Most hospitals showed an improvement in scores over the last four years for parameters where they had showed scores in red or orange at the start of the initiative. The overall scores for the group have shown an increase from 72 in 2010 to 81 in 2015.Keywords: benchmarks, clinical quality, lighthouse, platform, scores
Procedia PDF Downloads 308
475 Development of Residual Power Series Methods for Efficient Solutions of Stiff Differential Equations
Authors: Gebreegziabher Hailu
Abstract:
This paper presents the development of residual power series methods (RPSM) aimed at efficiently solving stiff differential equations, which pose significant challenges in numerical analysis due to their rapid changes in solution behavior. The RPSM is a numerical approach that generates polynomial-based approximate solutions without the need for linearization, discretization, or perturbation techniques, making it straightforward to implement and less prone to computational errors. We introduce an approach that utilizes power series expansions combined with residual minimization techniques to enhance convergence and stability. By analyzing the theoretical foundations of stiffness, we delve into the formulation of the residual power series method, detailing how it effectively captures the dynamics of stiff systems while maintaining computational efficiency. Numerical experiments demonstrate the method's superiority in terms of accuracy and computational cost when compared to traditional methods like implicit Runge-Kutta or multistep techniques. We also explore adaptive strategies within our framework to automatically adjust parameters based on the stiffness characteristics of the problem at hand. Ultimately, our findings contribute to the broader toolkit for tackling stiff differential equations, offering a robust alternative that promises to streamline computational workflows in various applied mathematics and engineering contexts.
Keywords: residual power series methods, stiff differential equations, numerical approach, Runge-Kutta methods
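A minimal symbolic sketch of the residual power series idea for a single first-order ODE: posit a truncated power series and fix each coefficient by forcing successive derivatives of the residual to vanish at the expansion point. The test equation y' = -50y + sin(t) (a classic mildly stiff example) and the truncation order are assumptions for illustration, not the paper's test problems:

```python
import sympy as sp

t = sp.Symbol("t")

def rpsm_first_order(f, y0, order=8):
    """Residual power series solution of y'(t) = f(t, y) with y(0) = y0.
    Coefficient c_k is fixed by requiring the (k-1)-th derivative of the
    residual to vanish at t = 0."""
    coeffs = [sp.Integer(0)] * (order + 1)
    coeffs[0] = sp.sympify(y0)
    for k in range(1, order + 1):
        c = sp.Symbol(f"c{k}")
        y_trial = sum(coeffs[i] * t**i for i in range(k)) + c * t**k
        residual = sp.diff(y_trial, t) - f(t, y_trial)
        condition = sp.diff(residual, t, k - 1).subs(t, 0)
        coeffs[k] = sp.solve(sp.Eq(condition, 0), c)[0]
    return sum(coeffs[i] * t**i for i in range(order + 1))

# Mildly stiff illustrative problem: y' = -50 y + sin(t), y(0) = 1
series = rpsm_first_order(lambda tt, yy: -50 * yy + sp.sin(tt), y0=1, order=8)
print(sp.expand(series))
```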
Procedia PDF Downloads 30
474 A Hybrid-Evolutionary Optimizer for Modeling the Process of Obtaining Bricks
Authors: Marius Gavrilescu, Sabina-Adriana Floria, Florin Leon, Silvia Curteanu, Costel Anton
Abstract:
Natural sciences provide a wide range of experimental data whose related problems require study and modeling beyond the capabilities of conventional methodologies. Such problems have solution spaces whose complexity and high dimensionality require correspondingly complex regression methods for proper characterization. In this context, we propose an optimization method which consists in a hybrid dual optimizer setup: a global optimizer based on a modified variant of the popular Imperialist Competitive Algorithm (ICA), and a local optimizer based on a gradient descent approach. The ICA is modified such that intermediate solution populations are more quickly and efficiently pruned of low-fitness individuals by appropriately altering the assimilation, revolution and competition phases, which, combined with an initialization strategy based on low-discrepancy sampling, allows for a more effective exploration of the corresponding solution space. Subsequently, gradient-based optimization is used locally to seek the optimal solution in the neighborhoods of the solutions found through the modified ICA. We use this combined approach to find the optimal configuration and weights of a fully-connected neural network, resulting in regression models used to characterize the process of obtained bricks using silicon-based materials. Installations in the raw ceramics industry, i.e., bricks, are characterized by significant energy consumption and large quantities of emissions. Thus, the purpose of our approach is to determine by simulation the working conditions, including the manufacturing mix recipe with the addition of different materials, to minimize the emissions represented by CO and CH4. Our approach determines regression models which perform significantly better than those found using the traditional ICA for the aforementioned problem, resulting in better convergence and a substantially lower error.Keywords: optimization, biologically inspired algorithm, regression models, bricks, emissions
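A minimal sketch of the dual-optimizer idea: a coarse elitist population search (a stand-in for the modified Imperialist Competitive Algorithm, which is not reproduced here) followed by numerical-gradient descent from the best candidates. The Rosenbrock test objective and all hyperparameters are illustrative assumptions:

```python
import numpy as np

def hybrid_optimize(objective, dim, bounds=(-5.0, 5.0), pop=60, generations=40,
                    n_local=3, lr=2e-4, local_steps=2000, eps=1e-6, seed=0):
    """Global population search (ICA stand-in) + local gradient refinement."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, dim))            # initial candidate pool
    for _ in range(generations):
        f = np.array([objective(x) for x in X])
        elite = X[np.argsort(f)[: pop // 4]]             # keep the fittest quarter
        # resample the rest around the elites (crude assimilation/revolution step)
        noise = rng.normal(0, 0.3, size=(pop - len(elite), dim))
        X = np.vstack([elite, elite[rng.integers(0, len(elite), pop - len(elite))] + noise])
    f = np.array([objective(x) for x in X])
    # local phase: numerical-gradient descent from the best global candidates
    best = None
    for x in X[np.argsort(f)[:n_local]]:
        x = x.copy()
        for _ in range(local_steps):
            grad = np.array([(objective(x + eps * e) - objective(x - eps * e)) / (2 * eps)
                             for e in np.eye(dim)])
            x -= lr * grad
        if best is None or objective(x) < objective(best):
            best = x
    return best, objective(best)

# Illustrative objective: 2-D Rosenbrock function (minimum at (1, 1))
rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
print(hybrid_optimize(rosen, dim=2))
```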
Procedia PDF Downloads 86