Search results for: discrete event simulation
4561 Screening Method of Distributed Cooperative Navigation Factors for Unmanned Aerial Vehicle Swarm
Authors: Can Zhang, Qun Li, Yonglin Lei, Zhi Zhu, Dong Guo
Abstract:
Aiming at the problem of factor screening in the distributed collaborative navigation of dense UAV swarms, an efficient distributed collaborative navigation factor screening method is proposed. The method considers the balance between computing load and positioning accuracy. The proposed algorithm uses a factor graph model to implement a distributed collaborative navigation algorithm. The UAV's own GNSS information and the ranging information between UAVs are used as the positioning factors. In this distributed scheme, a local factor graph is established for each UAV. The positioning factors of nodes with good geometric position distribution and small variance are selected to participate in the navigation calculation. Simulations and experiments in different scenarios are performed to demonstrate and verify the proposed method. Simulation results show that the proposed scheme achieves a good balance between computing load and positioning accuracy in the distributed cooperative navigation calculation of a UAV swarm. The proposed algorithm has important theoretical and practical value for both industry and academia.
Keywords: screening method, cooperative positioning system, UAV swarm, factor graph, cooperative navigation
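A minimal sketch of the kind of variance- and geometry-based factor screening the abstract describes. The greedy scoring rule, the weighting of variance against angular spread, and the toy data are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def screen_factors(bearings, variances, k=4):
    """Select k ranging factors balancing geometric spread and low variance.

    bearings  : angles (rad) from the UAV to each candidate neighbor
    variances : range-measurement variance of each candidate factor
    Greedy rule (illustrative): repeatedly add the candidate that maximizes
    angular spacing to the already-selected set, penalized by its variance.
    """
    selected = [int(np.argmin(variances))]          # start with most precise
    candidates = set(range(len(bearings))) - set(selected)
    while len(selected) < k and candidates:
        def score(i):
            gap = min(abs(np.angle(np.exp(1j * (bearings[i] - bearings[j]))))
                      for j in selected)            # smallest angular gap
            return gap - 0.5 * variances[i]         # assumed weighting
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# toy example: 8 candidate neighbors, keep 4 positioning factors
rng = np.random.default_rng(0)
print(screen_factors(rng.uniform(0, 2 * np.pi, 8), rng.uniform(0.1, 1.0, 8)))
```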
Procedia PDF Downloads 80
4560 Classification of Contexts for Mentioning Love in Interviews with Victims of the Holocaust
Authors: Marina Yurievna Aleksandrova
Abstract:
Research on the Holocaust retains value not only for history but also for sociology and psychology. One of the most important fields of study is how people coped during and after this traumatic event. The aim of this paper is to identify the main contexts of the topic of love and to determine which contexts are more characteristic of different groups of victims of the Holocaust (by gender, nationality, and age). In this research, transcripts of interviews with Holocaust victims collected in 1946 for the "Voices of the Holocaust" project were used as data. The main contexts were analyzed with methods of network analysis and latent semantic analysis and classified by gender, age, and nationality with a random forest. The results show that love is articulated and described significantly differently by male and female informants, whereas classification by nationality, as well as by age, yields lower values of the quality metrics.
Keywords: Holocaust, latent semantic analysis, network analysis, text-mining, random forest
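A minimal sketch of the described LSA-plus-random-forest pipeline using scikit-learn; the toy passages, labels, and hyperparameters are placeholders, not the study's data or settings:

```python
# LSA (TF-IDF + truncated SVD) feeding a random forest classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

passages = ["interview passage mentioning love of family before the war",
            "passage about love for a spouse lost in the camps",
            "passage about love of homeland and of neighbors"]
labels = ["female", "male", "female"]   # e.g., informant gender (toy labels)

model = make_pipeline(
    TfidfVectorizer(min_df=1),
    TruncatedSVD(n_components=2),       # latent semantic analysis step
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(passages, labels)
print(model.predict(["another passage mentioning love of family"]))
```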
Procedia PDF Downloads 181
4559 Economic Evaluation of an Offshore Wind Project under Uncertainty and Risk Circumstances
Authors: Sayed Amir Hamzeh Mirkheshti
Abstract:
Offshore wind energy, as a strategic renewable energy source, has been growing rapidly due to its availability, abundance, and clean nature. On the other hand, the budget of such a project is considerably higher than that of other renewable energy projects, and its duration is longer. Accordingly, precise estimation of time and cost is needed in order to promote awareness among developers and society and to convince them to develop this kind of energy despite its difficulties. Risks occurring during the project cause its duration and cost to change constantly. Therefore, to develop offshore wind power, it is critical to consider all potential risks that impact the project and to simulate their impact. Knowing these risks is useful for selecting the most effective response strategies, such as avoidance, transfer, and acceptance, in order to decrease their probability and impact. This paper presents an evaluation of the feasibility of a 500 MW offshore wind project in the Persian Gulf under uncertain resources and risk. The purpose of this study is to evaluate the time and cost of an offshore wind project under risk circumstances and uncertain resources by using Monte Carlo simulation. We analyzed each risk and activity along with its distribution function and its effect on the project.
Keywords: wind energy project, uncertain resources, risks, Monte Carlo simulation
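A minimal Monte Carlo sketch of the kind of time/cost evaluation described, assuming invented triangular activity distributions and a single illustrative risk event; the real project data are not given in the abstract:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                                   # simulation runs

# three illustrative activities: (min, mode, max) duration in months
acts = [(4, 6, 10), (8, 12, 20), (6, 9, 15)]
duration = sum(rng.triangular(lo, mode, hi, N) for lo, mode, hi in acts)

cost = rng.normal(1500, 150, N)               # base cost, M$ (assumed)
risk = rng.random(N) < 0.2                    # assumed 20% risk occurrence
cost += risk * rng.triangular(50, 120, 300, N)    # cost impact if it occurs
duration += risk * rng.triangular(1, 3, 8, N)     # schedule impact

for name, x in [("duration (months)", duration), ("cost (M$)", cost)]:
    p10, p50, p90 = np.percentile(x, [10, 50, 90])
    print(f"{name}: P10={p10:.1f}  P50={p50:.1f}  P90={p90:.1f}")
```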
Procedia PDF Downloads 352
4558 Wind Diesel Hybrid System without Battery Energy Storage Using Imperialist Competitive Algorithm
Authors: H. Rezvani, H. Monsef, A. Hekmati
Abstract:
Nowadays, the use of renewable energy sources has grown greatly because of rising costs and public demand for clean energy sources. One of the fastest growing sources is wind energy. In this paper, a Wind Diesel Hybrid System (WDHS) comprising a Diesel Generator (DG), a Wind Turbine Generator (WTG), the consumer load, a Battery-based Energy Storage System (BESS), and a Dump Load (DL) is used. Voltage is controlled by the Diesel Generator; the frequency is controlled by the BESS and DL. Eliminating the BESS is an efficient way to reduce maintenance cost and improve the dynamic response. Simulation results, with graphs of the power system frequency, active power, and battery power, are presented for load changes. The controlling parameters are optimized by using the Imperialist Competitive Algorithm (ICA). The simulation results for the BESS/no-BESS cases are compared. Results show that, with ICA tuning, frequency control in the no-BESS case is better than in the BESS case.
Keywords: renewable energy, wind diesel system, induction generator, energy storage, imperialist competitive algorithm
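A compact, simplified sketch of the Imperialist Competitive Algorithm used here for controller tuning; the sphere cost function stands in for the frequency-deviation index that the WDHS simulation would return, and the population sizes and coefficients are illustrative:

```python
import numpy as np

def ica(cost, dim, n_countries=50, n_imp=5, iters=200, beta=2.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_countries, dim))
    for _ in range(iters):
        costs = np.apply_along_axis(cost, 1, pos)
        pos = pos[np.argsort(costs)]          # best n_imp become imperialists
        for i in range(n_imp, n_countries):   # colonies assimilate
            imp = pos[i % n_imp]
            pos[i] += beta * rng.random(dim) * (imp - pos[i])
            if rng.random() < 0.1:            # revolution: random restart
                pos[i] = rng.uniform(-5, 5, dim)
        pos[-1] = pos[0] + 0.1 * rng.standard_normal(dim)  # competition
    costs = np.apply_along_axis(cost, 1, pos)
    best = pos[np.argmin(costs)]
    return best, float(cost(best))

best, f = ica(lambda x: float(np.sum(x**2)), dim=3)
print("best parameters:", best, "cost:", f)
```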
Procedia PDF Downloads 560
4557 The Development of an Anaesthetic Crisis Manual for Acute Critical Events: A Pilot Study
Authors: Jacklyn Yek, Clara Tong, Shin Yuet Chong, Yee Yian Ong
Abstract:
Background: While emergency manuals and cognitive aids (CA) have been used in high-hazard industries for decades, this has been a nascent field in healthcare. CAs can potentially offset the large cognitive load involved in crisis resource management and possibly facilitate the efficient performance of key steps in treatment. A crisis manual was developed based on local guidelines and the latest evidence-based information and introduced to a tertiary hospital setting in Singapore. Hence, the objective of this study is to evaluate the effectiveness of the crisis manual in guiding the response to and management of critical events. Methods: 7 surgical teams were recruited to participate in a series of simulated emergencies in a high-fidelity operating room simulator over the period of April to June 2018. All teams consisted of a surgical consultant and medical officer/registrar, an anesthesia consultant and medical officer/registrar, as well as a circulating, scrub, and anesthetic nurse. Each team performed a simulated operation in which one or more of the crisis events occurred. The teams were randomly assigned a scenario from the crisis manual, and all teams were deemed equal in experience and knowledge. Before the simulation, teams were instructed on proper checklist use, but use of the checklist was optional. Results: 7 simulation sessions were performed, consisting of the following scenarios: Airway Fire, Massive Transfusion Protocol, Malignant Hyperthermia, Eclampsia, and Difficult Airway. Out of the 7 surgical teams, 2 teams made use of the crisis manual, both of which had encountered the ‘Malignant Hyperthermia’ scenario. These team members reflected that the crisis manual helped them work as a team, in particular by enabling them to involve the surgical doctors, who were unfamiliar with the condition and its management. A run chart showed a possible upward trend, suggesting that with increasing awareness and training, staff would become more likely to initiate use of the crisis manual. Conclusion: Despite the high volume load in this tertiary hospital, certain crises remain rare, and clinicians are often caught unprepared. A crisis manual is an effective tool and an easy-to-use repository that can improve patient outcomes and encourage teamwork. With training, familiarity will allow clinicians to become increasingly comfortable with reaching for the crisis manual. More simulation training needs to be conducted to determine its effectiveness.
Keywords: crisis resource management, high fidelity simulation training, medical errors, visual aids
Procedia PDF Downloads 127
4556 Study on Planning of Smart Grid Using Landscape Ecology
Authors: Sunglim Lee, Susumu Fujii, Koji Okamura
Abstract:
Smart grid is a new approach for the electric power grid that uses information and communications technology to control it. Smart grid provides real-time control of the electric power grid, controlling the direction of power flow or the timing of the flow. Control devices are installed on the power lines of the electric power grid to implement the smart grid. The number of control devices should be determined in relation to the area one control device covers and the cost associated with the control devices. One approach to determining the number of control devices is to use data on the surplus power generated by home solar generators. In current implementations, the surplus power is sent all the way to the power plant, which may cause power loss. To reduce the power loss, the surplus power may be sent to a control device and then sent from the control device to where the power is needed. Under the assumption that the control devices are installed on a lattice of equal-size squares, our goal is to figure out the optimal spacing between the control devices, where the power-sharing area (the area covered by one control device) is kept small to avoid power loss, while at the same time being big enough to have no surplus power wasted. To achieve this goal, a simulation using a landscape ecology method is conducted on a sample area. First, an aerial photograph of the land of interest is turned into a mosaic map, where each area is colored according to the ratio of the amount of power production to the amount of power consumption in the area. The amount of power consumption is estimated according to the characteristics of the buildings in the area. The power production is calculated from the total roof area shown in the aerial photograph, assuming that solar panels are installed on all the roofs. The mosaic map is colored in three colors, each color representing producer, consumer, or neither. We started with a mosaic map with a 100 m grid size, and the grid size was grown until there were no red grid cells. One control device is installed in each grid cell, so that the cell is the area which the control device covers. As the result of this simulation, we obtained 350 m as the optimal spacing between the control devices that makes effective use of the surplus power for the sample area.
Keywords: landscape ecology, IT, smart grid, aerial photograph, simulation
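A minimal sketch of the grid-growing search described: aggregate a production/consumption raster into ever-larger blocks until every block is self-sufficient (interpreting the "red" cells as power-deficit cells, an assumption). The random 100 m base raster stands in for the mosaic map derived from the aerial photograph:

```python
import numpy as np

rng = np.random.default_rng(1)
prod = rng.uniform(0, 2.2, (64, 64))    # production per 100 m cell (assumed)
cons = rng.uniform(0, 2.0, (64, 64))    # consumption per 100 m cell (assumed)

def deficit_cells(prod, cons, f):
    """Aggregate f x f blocks of the base raster; count deficit blocks."""
    n = (prod.shape[0] // f) * f
    p = prod[:n, :n].reshape(n // f, f, n // f, f).sum(axis=(1, 3))
    c = cons[:n, :n].reshape(n // f, f, n // f, f).sum(axis=(1, 3))
    return int(np.sum(p < c))

f = 1
while deficit_cells(prod, cons, f) > 0:
    f += 1                               # grow the grid: 100 m, 200 m, ...
print(f"optimal spacing ~ {100 * f} m")
```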
Procedia PDF Downloads 444
4555 Development and Experimental Evaluation of a Semiactive Friction Damper
Authors: Juan S. Mantilla, Peter Thomson
Abstract:
Seismic events may result in discomfort for building occupants, structural damage, or even building collapse. Traditional design aims to reduce the dynamic response of structures by increasing stiffness, thus increasing the construction costs and the design forces. Structural control systems arise as an alternative to reduce these dynamic responses. Commonly used control systems in buildings are passive friction dampers, which add energy dissipation through damping mechanisms induced by sliding friction between their surfaces. Passive friction dampers are usually implemented on the diagonals of braced buildings, but such devices have the disadvantage that they are optimal only for a range of sliding forces, and outside that range their efficiency decreases. This implies that each passive friction damper is designed, built, and commercialized for a specific sliding/clamping force, at which the damper shifts from a locked state to a slip state, where it dissipates energy through friction. The risk of having the device's efficiency vary with the sliding force is that the dynamic properties of the building can change as a result of many factors, including damage caused by a seismic event. In this case, the expected forces in the building can change, considerably reducing the efficiency of a damper designed for a specific sliding force. It is also evident that when a seismic event occurs, the forces on each floor vary in time, which means that the damper's efficiency is not the best at all times. Semi-active friction devices adapt their sliding force, trying to keep the damper in the slipping phase as much as possible; because of this, the effectiveness of the device depends on the control strategy used. This paper deals with the development and performance evaluation of a low-cost Semiactive Variable Friction Damper (SAVFD), in reduced scale, to reduce vibrations of structures subjected to earthquakes. The SAVFD consists of (1) a hydraulic brake adapted to (2) a servomotor, which is controlled by (3) an Arduino board that acquires accelerations or displacements from (4) sensors on the immediately upper and lower floors, and (5) a power supply that can be a pair of common batteries. A test structure, based on a benchmark structure for structural control, was designed and constructed. The SAVFD and the structure were experimentally characterized. A numerical model of the structure and the SAVFD was developed based on the dynamic characterization. Decentralized control algorithms were modeled and later tested experimentally on a shaking table using earthquake and frequency-chirp signals. The controlled structure with the SAVFD achieved reductions greater than 80% in relative displacements and accelerations in comparison to the uncontrolled structure.
Keywords: earthquake response, friction damper, semiactive control, shaking table
Procedia PDF Downloads 378
4554 Finite Element Simulation of an Offshore Monopile Subjected to Cyclic Loading Using Hypoplasticity with Intergranular Strain Anisotropy (ISA) for the Soil
Authors: William Fuentes, Melany Gil
Abstract:
Numerical simulations of offshore wind turbines (OWTs) in shallow waters demand sophisticated models that consider the cyclic nature of the environmental loads. For an OWT founded on sand, rapid loading may cause a reduction of the effective stress of the soil surrounding the structure. This eventually leads to settlement, tilting, or other issues affecting serviceability. In this work, a 3D FE model of an OWT founded on sand is constructed and analyzed. Cyclic loading with different histories is applied at certain points of the tower to simulate environmental forces. The mechanical behavior of the soil is simulated through the recently proposed ISA-hypoplastic model for sands. Intergranular Strain Anisotropy (ISA) can be interpreted as an enhancement of the intergranular strain theory, often used to extend hypoplastic formulations for the simulation of cyclic loading. In contrast to previous formulations, the proposed constitutive model introduces an elastic range for small strain amplitudes, includes the cyclic mobility effect, and is able to capture the cyclic behavior of sands under a larger number of cycles. The model performance is carefully evaluated in the FE dynamic analysis of the OWT.
Keywords: offshore wind turbine, monopile, ISA, hypoplasticity
Procedia PDF Downloads 246
4553 Parametrical Simulation of Sheet Metal Forming Process to Control the Localized Thinning
Authors: Hatem Mrad, Alban Notin, Mohamed Bouazara
Abstract:
The sheet metal forming process has multiple successive steps, starting from sheet fixation to sheet evacuation. Often, after the forming operation, the sheet has defects requiring additional correction steps. For example, in the drawing process, the formed sheet may have several defects, such as springback, localized thinning, and bends. All these defects depend directly on the process, geometric, and material parameters. The prediction and elimination of these defects require control of the most sensitive parameters. The present study is concerned with a reliable parametric study of the deep forming process in order to control localized thinning. The proposed approach is based on the stochastic finite element method. In particular, a polynomial chaos expansion is used to establish a reliable relationship between the input variables (process, geometric, and material parameters) and the output variable (sheet thickness). The commercial software Abaqus is used to conduct the numerical finite element simulations. Automated parametric modification is provided by coupling a FORTRAN routine, a PYTHON script, and Abaqus input files.
Keywords: sheet metal forming, reliability, localized thinning, parametric simulation
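A minimal one-dimensional sketch of a polynomial chaos surrogate of the kind described, assuming a probabilists'-Hermite basis over a standardized Gaussian input; the stand-in "simulator" replaces the Abaqus runs, and with several inputs a tensorized multivariate basis would be used:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(0)
xi = rng.standard_normal(200)                     # standardized input
thickness = (1.0 - 0.05 * xi + 0.01 * xi**2
             + 0.002 * rng.standard_normal(200))  # stand-in for FE runs

deg = 3                                           # PCE order (illustrative)
basis = [He.hermeval(xi, np.eye(deg + 1)[k]) for k in range(deg + 1)]
Psi = np.stack(basis, axis=1)                     # design matrix of He_k(xi)
coef, *_ = np.linalg.lstsq(Psi, thickness, rcond=None)

mean = coef[0]                                    # PCE mean
var = sum(coef[k]**2 * math.factorial(k) for k in range(1, deg + 1))
print("coefficients:", np.round(coef, 4))
print("surrogate mean, variance:", mean, var)
```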
Procedia PDF Downloads 423
4552 Using Power Flow Analysis for Understanding UPQC’s Behaviors
Authors: O. Abdelkhalek, A. Naimi, M. Rami, M. N. Tandjaoui, A. Kechich
Abstract:
This paper deals with the active and reactive power flow analysis inside the unified power quality conditioner (UPQC) for several cases. The UPQC is a combination of shunt and series active power filters (APF). It is one of the best solutions for mitigating voltage sag and swell problems on a distribution network. This analysis can provide helpful information for understanding the interaction between the series filter, the shunt filter, the DC bus link, and the electrical network. The mathematical analysis is based on the active and reactive power flow through the shunt and series active power filters. The series APF can absorb or deliver active power to mitigate a voltage swell or sag; in both cases it absorbs a small quantity of reactive power, whereas the shunt APF absorbs or releases active power to stabilize the storage capacitor's voltage as well as to correct the power factor. Voltage sags and swells are usually interpreted through the DC bus voltage curves. These two phenomena are introduced in this paper with a new interpretation based on the active and reactive power flow analysis inside the UPQC. To simplify this study, a linear load is assumed in the digital simulation. The simulation results are carried out to confirm the analysis.
Keywords: UPQC, power flow analysis, shunt filter, series filter
Procedia PDF Downloads 572
4551 3D Numerical Investigation of Asphalt Pavements Behaviour Using Infinite Elements
Authors: K. Sandjak, B. Tiliouine
Abstract:
This article presents the main results of a three-dimensional (3-D) numerical investigation of asphalt pavement structure behaviour using a coupled Finite Element-Mapped Infinite Element (FE-MIE) model. The validation and numerical performance of this model are assessed by confronting critical pavement responses with Burmister’s solution and FEM simulation results for multi-layered elastic structures. The coupled model is then efficiently utilised to perform 3-D simulations of a typical asphalt pavement structure in order to investigate the impact of two tire configurations (conventional dual and new-generation wide-base tires) on critical pavement response parameters. The numerical results obtained show the effectiveness and accuracy of the coupled FE-MIE model. In addition, the simulation results indicate that, compared with a conventional dual tire assembly, a single wide-base tire causes slightly greater fatigue asphalt cracking and subgrade rutting potential, and can thus be utilised in view of its potential to provide numerous mechanical, economic, and environmental benefits.
Keywords: 3-D numerical investigation, asphalt pavements, dual and wide base tires, infinite elements
Procedia PDF Downloads 215
4550 Magnetic Navigation of Nanoparticles inside a 3D Carotid Model
Authors: E. G. Karvelas, C. Liosis, A. Theodorakakos, T. E. Karakasidis
Abstract:
Magnetic navigation of a drug inside the human vessels is a very important concept, since the drug is delivered to the desired area. Consequently, the quantity of the drug required to reach therapeutic levels is reduced, while the drug concentration at targeted sites is increased. Magnetic navigation of drug agents can be achieved with the use of magnetic nanoparticles, where anti-tumor agents are loaded on the surface of the nanoparticles. The magnetic field that is required to navigate the particles inside the human arteries is produced by a magnetic resonance imaging (MRI) device. The main factors which influence the efficiency of magnetic nanoparticles for biomedical applications in magnetic driving are the size and the magnetization of the biocompatible nanoparticles. In this study, a computational platform for the simulation of the optimal gradient magnetic fields for the navigation of magnetic nanoparticles inside a carotid artery is presented. For the propulsion model of the particles, seven major forces are considered, i.e., the magnetic force from the MRI's main static field as well as the magnetic field gradient force from the special propulsion gradient coils. The static field is responsible for the aggregation of nanoparticles, while the magnetic gradient contributes to the navigation of the agglomerates that are formed. Moreover, the contact forces among the aggregated nanoparticles and the wall, and the Stokes drag force on each particle, are considered, while only spherical particles are used in this study. In addition, the gravitational force and the force due to buoyancy are included. Finally, the Van der Waals force and Brownian motion are taken into account in the simulation. The OpenFOAM platform is used for the calculation of the flow field and the uncoupled equations of the particles' motion. To determine the optimal gradient magnetic fields, a covariance matrix adaptation evolution strategy (CMA-ES) is used to navigate the particles into the desired area. A desired trajectory, along which the particles are to be navigated, is inserted into the computational geometry. Initially, the CMA-ES optimization strategy provides the OpenFOAM program with random values of the gradient magnetic field. At the end of each simulation, the computational platform evaluates the distance between the particles and the desired trajectory. The present model can simulate the motion of particles navigated by the magnetic field produced by the MRI device. Under the influence of fluid flow, the model investigates the effect of different gradient magnetic fields in order to minimize the distance of the particles from the desired trajectory. The platform can navigate the particles along the desired trajectory with an efficiency of 80-90%. On the other hand, a small number of particles stick to the walls and remain there for the rest of the simulation.
Keywords: artery, drug, nanoparticles, navigation
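A minimal sketch of the optimization loop described, assuming the pycma package (the abstract names only CMA-ES); the objective function is a cheap stand-in for an OpenFOAM run returning the mean particle-to-trajectory distance:

```python
import cma
import numpy as np

def mean_distance_to_trajectory(gradients):
    # placeholder for: write the gradient field to the case, run OpenFOAM,
    # post-process particle positions, return mean distance to trajectory
    target = np.array([0.5, -0.2, 0.1])         # illustrative optimum
    return float(np.linalg.norm(np.asarray(gradients) - target))

es = cma.CMAEvolutionStrategy(3 * [0.0], 0.5)   # 3 gradient components
while not es.stop():
    candidates = es.ask()                       # sample gradient fields
    es.tell(candidates, [mean_distance_to_trajectory(g) for g in candidates])
print("best gradient field:", es.result.xbest)
```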
Procedia PDF Downloads 107
4549 A Crop Growth Subroutine for Watershed Resources Management (WRM) Model 1: Description
Authors: Kingsley Nnaemeka Ogbu, Constantine Mbajiorgu
Abstract:
Vegetation has a marked effect on runoff and has become an important component in hydrologic models. The Watershed Resources Management (WRM) model, a process-based, continuous, distributed-parameter simulation model developed for hydrologic and soil erosion studies at the watershed scale, lacks a crop growth component. As such, this model assumes constant parameter values for vegetation and hydraulic parameters throughout the duration of a hydrologic simulation. Our approach is to develop a crop growth algorithm based on the original plant growth model used in the Environmental Policy Integrated Climate (EPIC) model. This paper describes the development of a single crop growth model which has the capability of simulating all crops using unique parameter values for each crop. Simulated crop growth processes will reflect the vegetative seasonality of the natural watershed system. An existing model was employed for evaluating vegetative resistance by hydraulic and vegetative parameters incorporated into the WRM model. The improved WRM model will have the ability to evaluate the seasonal variation of the vegetative roughness coefficient with depth of flow and further enhance the hydrologic model's capability for accurate hydrologic studies.
Keywords: runoff, roughness coefficient, PAR, WRM model
Procedia PDF Downloads 378
4548 Automatic Classification of Periodic Heart Sounds Using Convolutional Neural Network
Authors: Jia Xin Low, Keng Wah Choo
Abstract:
This paper presents an automatic normal/abnormal heart sound classification model developed based on a deep learning algorithm. MITHSDB heart sound datasets obtained from the 2016 PhysioNet/Computing in Cardiology Challenge database were used in this research, with the assumption that the electrocardiograms (ECG) were recorded simultaneously with the heart sounds (phonocardiogram, PCG). The PCG time series are segmented per heartbeat, and each sub-segment is converted into a square intensity matrix and classified using convolutional neural network (CNN) models. This approach removes the need to provide handcrafted classification features for the supervised machine learning algorithm; instead, the features are determined automatically through training, from the time series provided. The results prove that the prediction model is able to provide reasonable and comparable classification accuracy despite its simple implementation. This approach can be used for real-time classification of heart sounds in the Internet of Medical Things (IoMT), e.g., in remote PCG monitoring applications.
Keywords: convolutional neural network, discrete wavelet transform, deep learning, heart sound classification
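A minimal PyTorch sketch of the approach: one per-beat PCG segment reshaped into a square intensity matrix and classified by a small CNN. The 64x64 matrix size and the network shape are illustrative; the abstract does not specify them:

```python
import torch
import torch.nn as nn

class HeartSoundCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, 2)   # normal / abnormal

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# one heartbeat segment of 4096 samples -> 64x64 intensity matrix
segment = torch.randn(4096)                 # stands in for a real PCG beat
image = segment.view(1, 1, 64, 64)
logits = HeartSoundCNN()(image)
print(logits.softmax(dim=1))                # class probabilities
```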
Procedia PDF Downloads 349
4547 Memorializing the Holocaust in the Present Century
Authors: Mehak Burza
Abstract:
As we pause to observe Holocaust Remembrance Day each year on 27 January, it becomes important to consider how the Holocaust is witnessed and how its education is perceived across the globe. The dissemination of knowledge of the Holocaust becomes more pertinent in countries that were not directly affected by it. Holocaust education is not widespread in Asian countries and is thus not mandatory as an academic discipline for school and university students. One such Asian country, which often considers the Holocaust an isolated event, is India. Though the struggle for freedom began with the 1857 mutiny (the first war of Indian independence), the freedom revolts gained momentum specifically during the years 1944-1947, when India was steeped in a battery of rebellions. However, freedom for the Indian subcontinent from the domination of the British Raj came at the cost of the partition of India, which resulted in widespread bloodshed and migration. For India, it is this backdrop of her freedom struggle that always outweighs the incidents of the Second World War, including the catastrophic event of the Holocaust. As a result, knowledge about the Holocaust is available mainly through secondary sources such as Holocaust documentaries and movies. Besides Anne Frank’s diary, knowledge about the Holocaust is disseminated through course readings in the universities. The most common literary acquaintances with the Jewish faith for university students come through the Jewish characters in their course readings: the Prioress’s Tale in Geoffrey Chaucer’s Canterbury Tales, the character of Shylock in William Shakespeare’s The Merchant of Venice, and the Jewish protagonist, Barabas, in Christopher Marlowe’s The Jew of Malta. Apart from this, school textbooks include a detailed chapter on the Holocaust and Hitler, which is an encouraging turn. However, there still exists a yawning gap between the dissemination of Holocaust education and sensitization to it, owing to different geographical locales. My paper presentation aims to trace the intersectional elements between India and the Holocaust that can serve as the pivotal springboard to foster sensitization towards Holocaust education in the Indian subcontinent. For instance, Maharaja Jam Saheb Digvijaysinhji Ranjitsinhji, the ruler of Nawanagar, a princely state in British India, helped save a thousand Polish Jewish children in 1945, at a time when India herself was steeped in her struggle for freedom. Famously known as the ‘Indian Oskar Schindler’, he has had a street named after him by the Polish government in Krakow, Poland. Another example that deserves mention is the spy princess, Noor Inayat Khan, a descendant of Tipu Sultan, who became the most celebrated British spy and fought against the Nazis. Additionally, by offering refuge to Jews, India proved to be a distant haven for them. Researching further the domain of Jewish refugees in India will not only illuminate a dull gray zone of investigation but also enable educators to provide appropriate entry points for introducing the subject of the Shoah/Holocaust in India, a subject which unfortunately hitherto is either seldom discussed or is equated with the Partition of India.
Keywords: awareness, dissemination, Holocaust, India
Procedia PDF Downloads 137
4546 Preparation and Evaluation of Zidovudine Nanoparticles
Authors: D. R. Rama Brahma Reddy, A. Vijaya Sarada Reddy
Abstract:
Nanoparticles represent a promising drug delivery system for controlled and targeted drug release. They are specially designed to release the drug in the vicinity of the target tissue. The aim of this study was to prepare and evaluate polymethacrylic acid nanoparticles containing Zidovudine, in different drug-to-polymer ratios, by the nanoprecipitation method. SEM indicated that the nanoparticles have a discrete spherical structure without aggregation. The average particle size was found to be 120 ± 0.02 to 420 ± 0.05 nm. The particle size of the nanoparticles gradually increased with an increase in the proportion of the polymethacrylic acid polymer. The drug content of the nanoparticles increased with increasing polymer concentration up to a particular concentration. No appreciable degradation of the product was observed during 60 days of storage of the nanoparticles at various temperatures. FT-IR studies indicated that there was no chemical interaction between drug and polymer and that the drug remained stable. The in-vitro release behavior of all the drug-loaded batches followed zero-order kinetics and provided sustained release over a period of 24 h. The developed formulation overcomes the drawbacks and limitations of Zidovudine sustained-release formulations and could possibly be advantageous in terms of increased bioavailability of Zidovudine.
Keywords: nanoparticles, zidovudine, biodegradable, polymethacrylic acid
Procedia PDF Downloads 626
4545 Set-Point Performance Evaluation of Robust Back-Stepping Control Design for a Nonlinear Electro-Hydraulic Servo System
Authors: Maria Ahmadnezhad, Seyedgharani Ghoreishi
Abstract:
Electro-hydraulic servo systems have been used in industry in a wide range of applications. Their dynamics are highly nonlinear and also subject to a large extent of model uncertainties and external disturbances. In this thesis, a robust back-stepping control (RBSC) scheme is proposed to effectively overcome the problem of disturbances and system uncertainties and to improve the set-point performance of EHS systems. In order to implement the proposed control scheme, the system uncertainties in EHS systems are considered as the total leakage coefficient and the effective oil volume. In addition, in order to obtain the virtual controls for stabilizing the system, the update rule for the system uncertainty term is induced by the Lyapunov control function (LCF). To verify the performance and robustness of the proposed control system, computer simulation of the proposed control system using Matlab/Simulink software is executed. From the computer simulation, it was found that the RBSC system produces the desired set-point performance and is robust to the disturbances and system uncertainties of EHS systems.
Keywords: electro-hydraulic servo system, back-stepping control, robust back-stepping control, Lyapunov redesign
Procedia PDF Downloads 1004
4544 Model Based Design of Fly-by-Wire Flight Controls System of a Fighter Aircraft
Authors: Nauman Idrees
Abstract:
Modeling and simulation during the conceptual design phase are the most effective means of system testing, resulting in time and cost savings compared to the testing of hardware prototypes, which are mostly not available during the conceptual design phase. This paper uses the model-based design (MBD) method in designing the fly-by-wire flight controls system of a fighter aircraft using Simulink. The process begins with system definition and layout, where modeling requirements and system components are identified, followed by a hierarchical system layout to identify the sequence of operation and the interfaces of the system with the external environment, as well as the internal interfaces between the components. In the second step, each component within the system architecture is modeled along with its physical and functional behavior. Finally, all modeled components are combined to form the fly-by-wire flight controls system of the fighter aircraft as per the system architecture developed. The system model developed using this method can be simulated using any simulation software to ensure that the desired requirements are met, even without the development of a physical prototype, resulting in time and cost savings.
Keywords: fly-by-wire, flight controls system, model based design, Simulink
Procedia PDF Downloads 118
4543 Detecting Overdispersion for Mortality AIDS in Zero-Inflated Negative Binomial Death Rate (ZINBDR) Co-Infection Patients in Kelantan
Authors: Mohd Asrul Affedi, Nyi Nyi Naing
Abstract:
Overdispersion is often present in count data; basically, when this phenomenon occurs, a Negative Binomial (NB) model is commonly used to replace the standard Poisson model. For the analysis of count data events, such as mortality cases, a Poisson regression model is basically appropriate. However, the model is not appropriate when excess zero values exist; in that case, the zero-inflated negative binomial model is appropriate. In this article, we modelled the mortality cases as a dependent variable by age category. The objective of this study is to determine existing overdispersion in the mortality data of AIDS co-infection patients in Kelantan.
Keywords: negative binomial death rate, overdispersion, zero-inflated negative binomial death rate, AIDS
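A minimal sketch of fitting a zero-inflated negative binomial model with statsmodels, the kind of overdispersion/zero-inflation check described; the counts and age categories are simulated placeholders for the Kelantan data:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(0)
n = 500
age_group = rng.integers(0, 4, n)                 # 4 age categories (toy)
mu = np.exp(0.2 + 0.3 * age_group)                # assumed age effect
deaths = rng.negative_binomial(2, 2 / (2 + mu))   # overdispersed counts
deaths[rng.random(n) < 0.3] = 0                   # inject excess zeros

X = sm.add_constant(age_group.astype(float))
zinb = ZeroInflatedNegativeBinomialP(deaths, X, exog_infl=np.ones((n, 1)))
res = zinb.fit(maxiter=200, disp=0)
print(res.summary())
print("alpha (dispersion):", res.params[-1])      # > 0 => overdispersion
```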
Procedia PDF Downloads 463
4542 Surface-Enhanced Raman Detection in Chip-Based Chromatography via a Droplet Interface
Authors: Renata Gerhardt, Detlev Belder
Abstract:
Raman spectroscopy has attracted much attention as a structurally descriptive and label-free detection method. It is particularly suited for chemical analysis, given that it is non-destructive and molecules can be identified via the fingerprint region of the spectra. In this work, possibilities are investigated for integrating Raman spectroscopy as a detection method for chip-based chromatography, making use of a droplet interface. A demanding task in lab-on-a-chip applications is the specific and sensitive detection of low-concentration analytes in small volumes. Fluorescence detection is frequently utilized but is restricted to fluorescent molecules; furthermore, no structural information is provided. Another often-applied technique is mass spectrometry, which enables the identification of molecules based on their mass-to-charge ratio. Additionally, the obtained fragmentation pattern gives insight into the chemical structure. However, it is only applicable as an end-of-the-line detection method, because analytes are destroyed during measurement. In contrast to mass spectrometry, Raman spectroscopy can be applied on-chip, and substances can be processed further downstream after detection. A major drawback of Raman spectroscopy is the inherent weakness of the Raman signal, which is due to the small cross-sections associated with the scattering process. Enhancement techniques, such as surface-enhanced Raman spectroscopy (SERS), are employed to overcome the poor sensitivity, even allowing detection at the single-molecule level. In SERS measurements, the Raman signal intensity is improved by several orders of magnitude if the analyte is in close proximity to nanostructured metal surfaces or nanoparticles. The main gain of lab-on-a-chip technology is the building-block-like ability to seamlessly integrate different functionalities, such as synthesis, separation, derivatization, and detection, on a single device. We intend to utilize this powerful toolbox to realize Raman detection in chip-based chromatography. By interfacing on-chip separations with a droplet generator, the separated analytes are encapsulated into numerous discrete containers. These droplets can then be injected with a silver nanoparticle solution and investigated via Raman spectroscopy. Droplet microfluidics is a sub-discipline of microfluidics which operates with segmented, rather than continuous, flow. Segmented flow is created by merging two immiscible phases (usually an aqueous phase and oil), thus forming small discrete volumes of one phase in the carrier phase. The study surveys different chip designs to realize the coupling of chip-based chromatography with droplet microfluidics. With regard to maintaining a sufficient flow rate for chromatographic separation and ensuring stable eluent flow over the column, different flow rates of the eluent and oil phases are tested. Furthermore, the detection of analytes in droplets with surface-enhanced Raman spectroscopy is examined. The compartmentalization of separated compounds preserves the analytical resolution, since the continuous phase restricts dispersion between the droplets. The droplets are ideal vessels for the insertion of silver colloids, thus making use of the surface enhancement effect and improving the sensitivity of the detection. The long-term goal of this work is the first realization of coupling chip-based chromatography with droplet microfluidics to employ surface-enhanced Raman spectroscopy as a means of detection.
Keywords: chip-based separation, chip LC, droplets, Raman spectroscopy, SERS
Procedia PDF Downloads 245
4541 Stabilization of Rotational Motion of Spacecrafts Using Quantized Two Torque Inputs Based on Random Dither
Authors: Yusuke Kuramitsu, Tomoaki Hashimoto, Hirokazu Tahara
Abstract:
The control problem of underactuated spacecraft has attracted a considerable amount of interest. A control method for a spacecraft equipped with fewer than three control torques is useful when one of the three control torques has failed. On the other hand, the quantized control of systems is one of the important research topics of recent years. The random dither quantization method, which transforms a given continuous signal into a discrete signal by adding artificial random noise to the continuous signal before quantization, has also attracted considerable interest. The objective of this study is to develop a control method, based on the random dither quantization method, for stabilizing the rotational motion of a rigid spacecraft with two control inputs. In this paper, the effectiveness of the random dither quantization control method for the stabilization of the rotational motion of spacecraft with two torque inputs is verified by numerical simulations.
Keywords: spacecraft control, quantized control, nonlinear control, random dither method
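A minimal sketch of random dither quantization as described: uniform noise is added to the continuous torque command before rounding to the quantizer grid, making the quantization error zero-mean on average; the torque signal and step size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
step = 0.1                                   # quantizer resolution (assumed)

def dither_quantize(u):
    """Random dither quantization: add uniform noise, then round to grid."""
    noise = rng.uniform(-step / 2, step / 2, np.shape(u))
    return step * np.round((u + noise) / step)

u = 0.5 * np.sin(np.linspace(0, 2 * np.pi, 1000))   # continuous torque
uq = dither_quantize(u)
print("mean quantization error:", np.mean(uq - u))  # ~0 thanks to dither
```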
Procedia PDF Downloads 180
4540 Near Shore Wave Manipulation for Electricity Generation
Authors: K. D. R. Jagath-Kumara, D. D. Dias
Abstract:
The sea waves carry thousands of GW of power globally. Although there are a number of different approaches to harnessing offshore energy, they are likely to be expensive, practically challenging, and vulnerable to storms. Therefore, this paper considers using near-shore waves for generating mechanical and electrical power. It introduces two new approaches: wave manipulation, for intercepting very wide wave fronts, and a variable-duct turbine, for coping with fluctuations of the wave height and the sea level. The first approach effectively allows capturing much more energy, yet with a much narrower turbine rotor. The second approach allows a rotor with a smaller radius to capture the energy of higher wave fronts at higher sea levels while preventing it from totally submerging. To illustrate the effectiveness of the approach, the paper contains a description and the simulation results of a scale model of a wave manipulator. It then includes the results of testing a physical model of the manipulator and a single-duct, axial-flow turbine in a wave flume in the laboratory. The paper also includes comparisons of theoretical predictions, simulation results, and wave flume tests with respect to the incident energy, loss in wave manipulation, minimal loss, brake torque, and angular velocity.
Keywords: near-shore sea waves, renewable energy, wave energy conversion, wave manipulation
Procedia PDF Downloads 483
4539 Progressive Type-I Interval Censoring with Binomial Removal: Estimation and Its Properties
Authors: Sonal Budhiraja, Biswabrata Pradhan
Abstract:
This work considers statistical inference based on progressive Type-I interval censored data with random removals. The scheme of progressive Type-I interval censoring with random removals can be described as follows. Suppose n identical items are placed on test at time T0 = 0, with k pre-fixed inspection times T1 < T2 < . . . < Tk, where Tk is the scheduled termination time of the experiment. At inspection time Ti, Ri of the remaining surviving units Si are randomly removed from the experiment. The removals follow a binomial distribution with parameters Si and pi for i = 1, . . . , k, with pk = 1. In this censoring scheme, the number of failures in each inspection interval and the number of randomly removed items at the pre-specified inspection times are observed. Asymptotic properties of the maximum likelihood estimators (MLEs) are established under some regularity conditions. A β-content, γ-level tolerance interval (TI) is determined for the two-parameter Weibull lifetime model using the asymptotic properties of the MLEs. The minimum sample size required to achieve the desired β-content, γ-level TI is determined. The performance of the MLEs and the TI is studied via simulation.
Keywords: asymptotic normality, consistency, regularity conditions, simulation study, tolerance interval
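A minimal sketch of simulating one progressive Type-I interval censored sample with binomial removals for a Weibull lifetime; the Weibull parameters, inspection times, and removal probabilities pi are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
shape, scale = 1.5, 10.0                 # Weibull lifetime model (assumed)
T = [2.0, 4.0, 6.0, 8.0]                 # inspection times T1 < ... < Tk
p = [0.2, 0.2, 0.2, 1.0]                 # removal probabilities, pk = 1

surviving = np.sort(scale * rng.weibull(shape, size=100))
t_prev, failures, removals = 0.0, [], []
for Ti, pi in zip(T, p):
    d = int(np.sum((surviving > t_prev) & (surviving <= Ti)))
    failures.append(d)                   # failures in (T_{i-1}, T_i]
    surviving = surviving[surviving > Ti]
    r = rng.binomial(len(surviving), pi) # binomial removal of survivors
    keep = rng.permutation(len(surviving))[r:]
    surviving = surviving[keep]
    removals.append(r)
    t_prev = Ti
print("failures per interval:", failures)
print("removals per inspection:", removals)
```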
Procedia PDF Downloads 250
4538 A Study on the False Alarm Rates of MEWMA and MCUSUM Control Charts When the Parameters Are Estimated
Authors: Umar Farouk Abbas, Danjuma Mustapha, Hamisu Idi
Abstract:
It is now a known fact that quality is an important issue in manufacturing industries. The control chart is an integral and powerful tool in statistical process control (SPC). The mean µ and standard deviation σ parameters are estimated from data. In general, the multivariate exponentially weighted moving average (MEWMA) and multivariate cumulative sum (MCUSUM) charts are used for the detection of small shifts in the joint monitoring of several correlated variables; the charts use information from past data, which makes them sensitive to small shifts. The aim of the paper is to compare the performance of the Shewhart x-bar, MEWMA, and MCUSUM control charts in terms of their false alarm rates when parameters are estimated under autocorrelation. A simulation was conducted in the R software to generate the average run length (ARL) values of each of the charts. The analysis shows that the MEWMA chart has lower false alarm rates than the MCUSUM chart at various levels of parameter estimation relative to the ARL0 (in-control) values. It was also noticed that the sample size has an adverse effect on the false alarm rates of the control charts.
Keywords: average run length, MCUSUM chart, MEWMA chart, false alarm rate, parameter estimation, simulation
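The study itself used R; the following Python sketch shows the same kind of ARL0 estimation by simulation for a MEWMA chart on in-control bivariate data. The smoothing constant, control limit, and number of runs are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
p, lam, h = 2, 0.1, 8.66          # dimension, smoothing, control limit

def run_length(max_n=10_000):
    """Steps until the MEWMA statistic first exceeds h on in-control data."""
    z = np.zeros(p)
    cov_z = lam / (2 - lam) * np.eye(p)   # asymptotic covariance of Z
    inv = np.linalg.inv(cov_z)
    for n in range(1, max_n + 1):
        x = rng.standard_normal(p)        # in-control observation
        z = lam * x + (1 - lam) * z
        if z @ inv @ z > h:               # MEWMA T^2 statistic
            return n
    return max_n

arl0 = np.mean([run_length() for _ in range(2000)])
print("estimated in-control ARL:", arl0)
```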
Procedia PDF Downloads 222
4537 Effect of Different Porous Media Models on Drug Delivery to Solid Tumors: Mathematical Approach
Authors: Mostafa Sefidgar, Sohrab Zendehboudi, Hossein Bazmara, Madjid Soltani
Abstract:
Based on findings from clinical applications, most drug treatments fail to eliminate malignant tumors completely, even though drug delivery through systemic administration may inhibit their growth. Therefore, a better understanding of tumor formation is crucial for developing more effective therapeutics. For this purpose, solid tumor modeling and simulation results are nowadays used to predict how therapeutic drugs are transported to tumor cells by blood flow through capillaries and tissues. The solid tumor is treated as a porous medium for the fluid flow simulation. Most studies use the Darcy model for the porous medium. In the Darcy model, fluid friction is neglected and a few simplifying assumptions are made. In this study, the effect of these assumptions is examined by considering the Brinkman model. A multi-scale mathematical method which calculates fluid flow in a solid tumor is used to investigate how neglecting fluid friction affects the solid tumor simulation. In this work, the mathematical model of our previous studies is developed by considering two momentum equations for the porous medium: Darcy and Brinkman. The mathematical method involves processes such as fluid flow through the solid tumor as a porous medium, extravasation of blood flow from vessels, blood flow through vessels, solute diffusion, and convective transport in the extracellular matrix. A sprouting angiogenesis model is used to generate the capillary network, and the fluid flow governing equations are then implemented to calculate blood flow through the tumor-induced capillary network. Finally, the two porous media models are used for modeling fluid flow in normal and tumor tissues for three different tumor shapes. Simulations of interstitial fluid transport in a solid tumor demonstrate that the simplifications used in the Darcy model affect the interstitial velocity: the Brinkman model predicts a lower interstitial velocity than the Darcy model does.
Keywords: solid tumor, porous media, Darcy model, Brinkman model, drug delivery
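For reference, a standard statement of the two momentum models compared here (the abstract does not write them out; the notation v for interstitial velocity, p for pressure, κ for permeability, μ for viscosity, and μₑ for the effective viscosity is assumed):

```latex
% Darcy's law: fluid friction (viscous shear within the fluid) is neglected
\mathbf{v} = -\frac{\kappa}{\mu}\,\nabla p
% Brinkman model: adds an effective viscous (friction) term
\nabla p = -\frac{\mu}{\kappa}\,\mathbf{v} + \mu_e \nabla^2 \mathbf{v}
```

The extra μₑ∇²v friction term is the resistance the Darcy model neglects, which is consistent with the lower interstitial velocity that the Brinkman model predicts.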
Procedia PDF Downloads 307
4536 Probabilistic Gathering of Agents with Simple Sensors: Distributed Algorithm for Aggregation of Robots Equipped with Binary On-Board Detectors
Authors: Ariel Barel, Rotem Manor, Alfred M. Bruckstein
Abstract:
We present a probabilistic gathering algorithm for agents that can only detect the presence of other agents in front of or behind them. The agents act in the plane and are identical and indistinguishable, oblivious, and lack any means of direct communication. They do not have a common frame of reference in the plane and choose their orientation (direction of possible motion) at random. The analysis of the gathering process assumes that the agents act synchronously in selecting random orientations that remain fixed during each unit time-interval. Two algorithms are discussed. The first one assumes discrete jumps based on the sensing results given the randomly selected motion direction, and in this case, extensive experimental results exhibit probabilistic clustering into a circular region with radius equal to the step-size, in time proportional to the number of agents. The second algorithm assumes agents with continuous sensing and motion, and in this case, we can prove gathering into a very small circular region in finite expected time.
Keywords: control, decentralized, gathering, multi-agent, simple sensors
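A minimal sketch of the first (discrete-jump) variant: each agent draws a random heading per round and, here, jumps one step only when its binary sensors report other agents in front and none behind. This jump rule is a simplified guess for illustration, not necessarily the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
n, step = 30, 0.1
pos = rng.uniform(-5, 5, (n, 2))            # anonymous agents in the plane

for _ in range(4000):                        # synchronous rounds
    theta = rng.uniform(0, 2 * np.pi, n)     # random orientations
    head = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    new_pos = pos.copy()
    for i in range(n):
        d = (pos - pos[i]) @ head[i]         # projections onto heading
        d[i] = 0.0                           # ignore self
        if np.any(d > 0) and not np.any(d < 0):
            new_pos[i] = pos[i] + step * head[i]   # everyone ahead: jump
    pos = new_pos

spread = np.linalg.norm(pos - pos.mean(axis=0), axis=1).max()
print("max distance from centroid:", round(float(spread), 3))
```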
Procedia PDF Downloads 164
4535 NanoSat MO Framework: Simulating a Constellation of Satellites with Docker Containers
Authors: César Coelho, Nikolai Wiegand
Abstract:
The advancement of nanosatellite technology has opened new avenues for cost-effective and faster space missions. The NanoSat MO Framework (NMF) from the European Space Agency (ESA) provides a modular and simpler approach to the development of flight software and the operation of small satellites. This paper presents a methodology using the NMF together with Docker for simulating constellations of satellites. By leveraging Docker containers, the software environment of an individual satellite can be easily replicated within a simulated constellation. This containerized approach allows for rapid deployment, isolation, and management of satellite instances, facilitating comprehensive testing and development in a controlled setting. By integrating the NMF lightweight simulator in the container, a comprehensive simulation environment is achieved. A significant advantage of using Docker containers is their inherent scalability, enabling the simulation of hundreds or even thousands of satellites with minimal overhead. Docker's lightweight nature ensures efficient resource utilization, allowing for deployment on a single host or across a cluster of hosts. This capability is crucial for large-scale simulations, such as mega-constellations, where multiple traditional virtual machines would be impractical due to their higher resource demands. This ability to scale horizontally with the number of simulated satellites provides tremendous flexibility for different mission scenarios. Our results demonstrate that leveraging Docker containers with the NanoSat MO Framework provides a highly efficient and scalable solution for simulating satellite constellations, offering significant benefits in terms of resource utilization and operational flexibility, and enabling the testing and validation of ground software for constellations. The findings underscore the importance of taking advantage of existing technologies in computer science to create new solutions for future satellite constellations in space.
Keywords: containerization, docker containers, NanoSat MO framework, satellite constellation simulation, scalability, small satellites
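A minimal sketch of spinning up such a simulated constellation with the Docker SDK for Python; the image name nmf-sim and the SAT_ID environment variable are hypothetical placeholders, since the abstract does not describe the actual container image or its configuration:

```python
import docker

client = docker.from_env()
N = 12                                   # constellation size

sats = [
    client.containers.run(
        "nmf-sim",                       # hypothetical NMF simulator image
        name=f"sat-{i:03d}",
        environment={"SAT_ID": str(i)},  # per-satellite identity (assumed)
        detach=True,
    )
    for i in range(N)
]
print([s.name for s in sats])

for s in sats:                           # tear down the constellation
    s.stop()
    s.remove()
```

Scaling the simulation up or down is then just a matter of changing N, which reflects the horizontal-scaling argument made above.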
Procedia PDF Downloads 50
4534 Design and Simulation of Low Threshold Nanowire Photonic Crystal Surface Emitting Lasers
Authors: Balthazar Temu, Zhao Yan, Bogdan-Petrin Ratiu, Sang Soon Oh, Qiang Li
Abstract:
Nanowire-based Photonic Crystal Surface Emitting Lasers (PCSELs) reported in the literature have been designed using triangular, square, or honeycomb patterns. The triangular and square patterns offer limited degrees of freedom for tuning the design parameters, which hinders the ability to design high quality factor (Q-factor) devices. Nanowire-based PCSELs designed using triangular and square patterns have been reported with lasing thresholds of 130 kW/cm² and 7 kW/cm², respectively. On the other hand, the honeycomb pattern gives more degrees of freedom in tuning the design parameters, which allows one to design high-Q-factor devices. A deformed-honeycomb-pattern device has been reported with a lasing threshold of 6.25 W/cm², corresponding to a simulated Q-factor of 5.84×10⁵. Despite this achievement, the design principles that can lead to the realization of even higher-Q-factor honeycomb-pattern PCSELs have not yet been investigated. In this work, we show that by deforming the honeycomb pattern and tuning the height and lattice constants of the nanowires, it is possible to achieve even higher-Q-factor devices. Considering three different band-edge modes, we investigate how the resonance wavelength changes as the device is deformed, which is useful for designing high-Q-factor devices in different wavelength bands. We eventually establish the design and simulation of honeycomb PCSELs operating around a wavelength of 960 nm, in the O and C bands, with Q-factors up to 7×10⁷. We also investigate the Q-factors of the undeformed device and establish that the mode at the band edge close to 960 nm attains the highest Q-factor of all the modes when the device is undeformed, and that the Q-factor degrades as the device is deformed. This work is a stepping stone towards the fabrication of very high-Q-factor, nanowire-based honeycomb PCSELs, which are expected to have very low lasing thresholds.
Keywords: designing nanowire PCSEL, designing PCSEL on silicon substrates, low threshold nanowire laser, simulation of photonic crystal lasers
Procedia PDF Downloads 11
4533 Numerical Simulation of Production of Microspheres from Polymer Emulsion in Microfluidic Device for Use in Drug Delivery Systems
Authors: Nizar Jawad Hadi, Sajad Abd Alabbas
Abstract:
Because of their ability to encapsulate and release drugs in a controlled manner, microspheres fabricated from polymer emulsions using microfluidic devices have shown promise for drug delivery applications. In this study, the effects of velocity, density, viscosity, surface tension, and channel diameter on microsphere generation were investigated using the Ansys Fluent software. The software was supplied with the physical properties of the polymer emulsion, such as density, viscosity, and surface tension. Simulations were then performed to predict fluid flow and microsphere production, and to improve the design for drug delivery applications based on changes in these parameters. The effects of the capillary and Weber numbers were also studied. The results of the study showed that the size of the microspheres can be controlled by adjusting the velocity and the diameter of the channel. Narrower channel widths and higher flow rates produced smaller microspheres, which could improve drug delivery efficiency, while lower interfacial surface tension also resulted in smaller microspheres. The viscosity and density of the polymer emulsion significantly affected the size of the microspheres, with higher viscosities and densities producing smaller microspheres. The loading and drug release properties of the microspheres created with the microfluidic technique were also predicted. The results showed that the microspheres can efficiently encapsulate drugs and release them in a controlled manner over a period of time. This is due to the high surface-area-to-volume ratio of the microspheres, which allows for efficient drug diffusion. The ability to tune the manufacturing process using factors such as velocity, density, viscosity, channel diameter, and surface tension offers a potential opportunity to design drug delivery systems with greater efficiency and fewer side effects.
Keywords: polymer emulsion, microspheres, numerical simulation, microfluidic device
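For reference, the two dimensionless groups the study examines, evaluated in a short sketch with water-like property values that are assumptions, not the paper's data:

```python
def capillary_number(mu, U, sigma):
    """Ca = mu*U/sigma; mu: Pa*s, U: m/s, sigma: N/m."""
    return mu * U / sigma

def weber_number(rho, U, L, sigma):
    """We = rho*U^2*L/sigma; rho: kg/m^3, L: channel diameter in m."""
    return rho * U**2 * L / sigma

mu, rho, sigma = 1.0e-3, 1000.0, 5.0e-3     # water-like emulsion (assumed)
U, L = 0.05, 100e-6                          # 5 cm/s in a 100 um channel
print("Ca =", capillary_number(mu, U, sigma))   # low Ca: dripping/squeezing
print("We =", weber_number(rho, U, L, sigma))   # inertia vs surface tension
```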
Procedia PDF Downloads 65
4532 The Influence of Fiber Volume Fraction on Thermal Conductivity of Pultruded Profile
Authors: V. Lukášová, P. Peukert, V. Votrubec
Abstract:
Thermal conductivity in the x, y, and z directions was measured on a pultruded profile manufactured by the pultrusion technology from glass fibers and a polyester matrix. The measurements of thermal conductivity showed considerable variability in the different directions. This variability in thermal conductivity was expected, due to variations in fiber volume fraction. The cross-section of the pultruded profile was scanned, and an image analysis revealed an uneven distribution of the fibers and the matrix in the cross-section. The distribution of these inequalities was processed into a Voronoi diagram over the observed area of the pultruded profile cross-section. In order to verify whether the variation of the fiber volume fraction in the pultruded profile can affect its thermal conductivity, numerical simulations in ANSYS Fluent were performed. The simulation was based on the geometry reconstructed from the image analysis, the aim being to quantify the thermal conductivity numerically. Above all, images with different volume fractions were chosen. The measured thermal conductivities were compared with the calculated ones. The evaluated data proved a strong correlation between volume fraction and thermal conductivity of the pultruded profile. Based on the presented results, a modification of the production technology may be proposed.
Keywords: pultrusion profile, volume fraction, thermal conductivity, numerical simulation
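A small sketch of why conductivity should track fiber volume fraction: the classical parallel/series (Voigt/Reuss) bounds for a two-phase composite, with textbook-style conductivities for E-glass and polyester that are assumptions, not the paper's measurements:

```python
k_fiber, k_matrix = 1.0, 0.2        # W/(m*K), assumed typical values

def k_parallel(vf):                 # along the fibers (upper bound)
    return vf * k_fiber + (1 - vf) * k_matrix

def k_series(vf):                   # across the fibers (lower bound)
    return 1.0 / (vf / k_fiber + (1 - vf) / k_matrix)

for vf in (0.4, 0.5, 0.6):
    print(f"Vf={vf:.1f}: k_par={k_parallel(vf):.3f}  k_ser={k_series(vf):.3f}")
```

Both bounds rise monotonically with the fiber fraction, so a locally varying volume fraction across the cross-section directly produces direction-dependent conductivity of the kind measured here.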
Procedia PDF Downloads 346