Search results for: method time measurement
33445 Toward a Characteristic Optimal Power Flow Model for Temporal Constraints
Authors: Zongjie Wang, Zhizhong Guo
Abstract:
While the regular optimal power flow model focuses on a single time instant, the optimization of power systems is typically intended for a time duration with respect to a desired objective function. In this paper, a temporal optimal power flow model for a time period is proposed. To reduce the computational burden of calculating the temporal optimal power flow, a characteristic optimal power flow model is proposed, which employs different characteristic load patterns to represent the objective function and security constraints. A numerical method based on the interior point method is also proposed for solving the characteristic optimal power flow model. Both the temporal optimal power flow model and the characteristic optimal power flow model can improve the system's desired objective function over the entire time period. Numerical studies are conducted on the IEEE 14- and 118-bus test systems to demonstrate the effectiveness of the proposed characteristic optimal power flow model.
Keywords: optimal power flow, time period, security, economy
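The per-instant optimization that the temporal model repeats across a time period can be illustrated with a deliberately simplified lossless economic dispatch, a hedged stand-in for the full interior-point OPF of the paper (the quadratic cost coefficients and function names below are hypothetical, not from the study):

```python
def economic_dispatch(a, b, demand):
    """Minimise sum(a_i*P_i^2 + b_i*P_i) s.t. sum(P_i) = demand.

    Lossless toy model without generator limits: at the optimum all
    units run at equal incremental cost, lambda = 2*a_i*P_i + b_i.
    """
    # lambda solves sum((lambda - b_i) / (2*a_i)) = demand
    s_inv = sum(1.0 / (2.0 * ai) for ai in a)
    s_b = sum(bi / (2.0 * ai) for ai, bi in zip(a, b))
    lam = (demand + s_b) / s_inv
    return [(lam - bi) / (2.0 * ai) for ai, bi in zip(a, b)]

def temporal_dispatch(a, b, load_profile):
    """Repeat the per-instant dispatch for each characteristic load pattern."""
    return [economic_dispatch(a, b, d) for d in load_profile]
```

Running the dispatch over a list of characteristic load levels mimics, in miniature, how the characteristic model replaces a full time series by a few representative patterns.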
Procedia PDF Downloads 449
33444 Investigation of Factors Affecting the Total Ionizing Dose Threshold of Electrically Erasable Read Only Memories for Use in Dose Rate Measurement
Authors: Liqian Li, Yu Liu, Karen Colins
Abstract:
The dose rate present in a seriously contaminated area can be indirectly determined by monitoring radiation damage to inexpensive commercial electronics, instead of deploying expensive radiation hardened sensors. EEPROMs (Electrically Erasable Read Only Memories) are a good candidate for this purpose because they are inexpensive and are sensitive to radiation exposure. When the total ionizing dose threshold is reached, an EEPROM chip will show signs of damage that can be monitored and transmitted by less susceptible electronics. The dose rate can then be determined from the known threshold dose and the exposure time, assuming the radiation field remains constant with time. Therefore, the threshold dose needs to be well understood before this method can be used. There are many factors affecting the threshold dose, such as the gamma ray energy spectrum, the operating voltage, etc. The purpose of this study was to experimentally determine how the threshold dose depends on dose rate, temperature, voltage, and duty factor. It was found that the duty factor has the strongest effect on the total ionizing dose threshold, while the effect of the other three factors that were investigated is less significant. The effect of temperature was found to be opposite to that expected to result from annealing and is yet to be understood.
Keywords: EEPROM, ionizing radiation, radiation effects on electronics, total ionizing dose, wireless sensor networks
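Under the constant-field assumption stated in the abstract, the dose-rate inference reduces to dividing the known threshold dose by the exposure time at which the EEPROM fails. A minimal sketch (units and function name are illustrative, not from the paper):

```python
def dose_rate_from_failure(threshold_dose_gy, exposure_time_h):
    """Estimate dose rate (Gy/h) from a known TID threshold and the
    exposure time at which the chip showed damage, assuming the
    radiation field is constant over the exposure."""
    if exposure_time_h <= 0:
        raise ValueError("exposure time must be positive")
    return threshold_dose_gy / exposure_time_h
```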
Procedia PDF Downloads 183
33443 Comparison of Loosely Coupled and Tightly Coupled INS/GNSS Architecture for Guided Rocket Navigation System
Authors: Rahmat Purwoko, Bambang Riyanto Trilaksono
Abstract:
This paper compares two INS/GNSS integration architectures, loosely coupled and tightly coupled, using hardware-in-the-loop simulation of the RKX-200 guided rocket model. The tightly coupled architecture requires the pseudo-range, pseudo-range rate, and the position and velocity of each satellite in the constellation from GPS (Global Positioning System) measurements, whereas the loosely coupled architecture uses the position and velocity estimated by the GNSS receiver. Both architectures also require angular rate and specific force measurements from an IMU (Inertial Measurement Unit). The loosely coupled architecture was designed with a 15-state Kalman filter and the tightly coupled architecture with a 17-state Kalman filter. The integration algorithm is computed in the ECEF frame, and the navigation system was implemented on a Zedboard All Programmable SoC.
Keywords: kalman filter, loosely coupled, navigation system, tightly coupled
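Both architectures rest on the same Kalman predict/update cycle; as a hedged, scalar illustration (the paper's filters carry 15 and 17 states, here reduced to a single hypothetical position state with a GNSS position fix playing the role of the loosely coupled measurement):

```python
def kf_predict(x, p, q):
    """Prediction with a random-walk process model:
    x_k = x_{k-1} + w, w ~ N(0, q)."""
    return x, p + q

def kf_update(x, p, z, r):
    """Update with a direct measurement model:
    z = x + v, v ~ N(0, r)."""
    k = p / (p + r)                       # Kalman gain
    return x + k * (z - x), (1.0 - k) * p
```

In a loosely coupled filter, z would be the receiver's position/velocity estimate; in a tightly coupled filter, the update instead processes pseudo-range and pseudo-range-rate measurements through a nonlinear measurement model.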
Procedia PDF Downloads 307
33442 Modeling Route Selection Using Real-Time Information and GPS Data
Authors: William Albeiro Alvarez, Gloria Patricia Jaramillo, Ivan Reinaldo Sarmiento
Abstract:
Understanding the behavior of individuals, and the human factors that influence their choices when they face a complex system such as transportation, is one of the most complicated aspects of route choice modeling, because various behaviors and driving modes directly or indirectly affect the choice. During the last two decades, with the development of information and communications technologies, new data collection techniques have emerged, such as GPS, geolocation with mobile phones, apps for choosing the route between origin and destination, and individual transport service applications, among others, generating interest in improving discrete choice models by incorporating these developments as well as the psychological factors that affect decision making. This paper implements a discrete choice model that proposes and estimates a hybrid model integrating route choice models and latent variables, based on observations of the routes of a sample of public taxi drivers from the city of Medellín, Colombia, in relation to their behavior, personality, socioeconomic characteristics, and driving mode. The set of choice options includes the routes generated by individual transport service applications versus the driver's own choice. The hybrid model consists of measurement equations that relate latent variables to measurement indicators and utilities to choice indicators, along with structural equations that link the observable characteristics of drivers to latent variables and explanatory variables to utilities.
Keywords: behavior choice model, human factors, hybrid model, real time data
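The utilities estimated by such discrete choice models are typically mapped to choice probabilities with a multinomial logit; a minimal sketch (the hybrid latent-variable structure of the paper is omitted, and the utilities are hypothetical):

```python
import math

def logit_probabilities(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j).
    The max is subtracted before exponentiating for numerical stability."""
    m = max(utilities)
    e = [math.exp(v - m) for v in utilities]
    s = sum(e)
    return [x / s for x in e]
```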
Procedia PDF Downloads 151
33441 Infusion Pump Historical Development, Measurement and Parts of Infusion Pump
Authors: Samuel Asrat
Abstract:
Infusion pumps have become indispensable tools in modern healthcare, allowing for precise and controlled delivery of fluids, medications, and nutrients to patients. This paper provides an overview of the historical development, measurement, and parts of infusion pumps. The historical development of infusion pumps can be traced back to the early 1960s when the first rudimentary models were introduced. These early pumps were large, cumbersome, and often unreliable. However, advancements in technology and engineering over the years have led to the development of smaller, more accurate, and user-friendly infusion pumps. Measurement of infusion pumps involves assessing various parameters such as flow rate, volume delivered, and infusion duration. Flow rate, typically measured in milliliters per hour (mL/hr), is a critical parameter that determines the rate at which fluids or medications are delivered to the patient. Accurate measurement of flow rate is essential to ensure the proper administration of therapy and prevent adverse effects. Infusion pumps consist of several key parts, including the pump mechanism, fluid reservoir, tubing, and control interface. The pump mechanism is responsible for generating the necessary pressure to push fluids through the tubing and into the patient's bloodstream. The fluid reservoir holds the medication or solution to be infused, while the tubing serves as the conduit through which the fluid travels from the reservoir to the patient. The control interface allows healthcare providers to program and adjust the infusion parameters, such as flow rate and volume. In conclusion, infusion pumps have evolved significantly since their inception, offering healthcare providers unprecedented control and precision in delivering fluids and medications to patients. Understanding the historical development, measurement, and parts of infusion pumps is essential for ensuring their safe and effective use in clinical practice. 
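The flow-rate arithmetic described above can be sketched directly; a common companion metric is the relative delivery error (function names are illustrative, not from the paper):

```python
def flow_rate_ml_per_hr(volume_ml, duration_min):
    """Average flow rate in mL/h from delivered volume and elapsed time."""
    return volume_ml / (duration_min / 60.0)

def flow_rate_error_pct(measured_ml_per_hr, programmed_ml_per_hr):
    """Relative delivery error (%), a common pump-accuracy metric."""
    return 100.0 * (measured_ml_per_hr - programmed_ml_per_hr) / programmed_ml_per_hr
```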
Procedia PDF Downloads 62
33440 Submarine Topography and Beach Survey of Gang-Neung Port in South Korea, Using Multi-Beam Echo Sounder and Shipborne Mobile Light Detection and Ranging System
Authors: Won Hyuck Kim, Chang Hwan Kim, Hyun Wook Kim, Myoung Hoon Lee, Chan Hong Park, Hyeon Yeong Park
Abstract:
We conducted a submarine topography and beach survey between December 2015 and January 2016 using a multi-beam echo sounder, the EM3001 (Kongsberg), and a Shipborne Mobile LiDAR System. Our survey area was the Anmok beach in Gangneung, South Korea. We built the Shipborne Mobile LiDAR System for this survey. It includes a LiDAR (RIEGL LMS-420i), an IMU (Inertial Measurement Unit, MAGUS Inertial+), and RTK-GNSS (Real Time Kinematic Global Navigation Satellite System, LEIAC GS 15 GS25) for beach measurement, LiDAR motion compensation, and precise positioning. The system scans the beach with a laser from the moving vessel; we mounted it on top of the vessel. Before the beach survey, we performed an eight-circle IMU calibration run to stabilize the IMU heading. The survey should be carried out as close to the beach as possible, but our vessel could not approach closer because of submerged obstacles in the water. At the same time, we conducted a submarine topography survey using the EM3001 multi-beam echo sounder, a device that observes and records submarine topography using sound waves. We mounted the multi-beam echo sounder on the left side of the vessel and equipped the vessel with a motion sensor, DGNSS (Differential Global Navigation Satellite System), and an SV (sound velocity) sensor for motion compensation, positioning, and measurement of the speed of sound in seawater. The Shipborne Mobile LiDAR System reduced the time consumed by the beach survey compared with previous conventional survey methods.
Keywords: Anmok, beach survey, Shipborne Mobile LiDAR System, submarine topography
Procedia PDF Downloads 427
33439 Review of Dielectric Permittivity Measurement Techniques
Authors: Ahmad H. Abdelgwad, Galal E. Nadim, Tarek M. Said, Amr M. Gody
Abstract:
The prime objective of this manuscript is to provide an intensive review of the techniques used for permittivity measurements. The measurement technique relevant for any desired application depends on the nature of the measured dielectric material, both electrically and physically, the degree of accuracy required, and the frequency of interest. Although many different kinds of instruments can be utilized, the devices of choice are those that provide reliable determination of the required electrical properties of the unknown material in the frequency range of interest. The challenge in making precise dielectric property or permittivity measurements lies in designing the material specimen holder for those measurements (in the RF and microwave frequency ranges) and adequately modeling the circuit for reliable computation of the permittivity from the electrical measurements. If the RF circuit parameters, such as the impedance or admittance, are estimated appropriately at a certain frequency, the material's permittivity at this frequency can be estimated from the equations that relate the dielectric properties of the material to the parameters of the circuit.
Keywords: dielectric permittivity, free space measurement, waveguide techniques, coaxial probe, cavity resonator
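As a hedged low-frequency illustration of inferring permittivity from a measured circuit parameter, consider the parallel-plate capacitance ratio; the RF and microwave techniques reviewed in the paper instead relate S-parameters or impedance to permittivity (all values and names below are hypothetical):

```python
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(eps_r, area_m2, gap_m):
    """Ideal parallel-plate capacitance C = eps_r * eps_0 * A / d
    (fringing fields neglected)."""
    return eps_r * EPS0 * area_m2 / gap_m

def relative_permittivity(c_filled, c_empty):
    """eps_r from the ratio of the capacitance with the specimen
    filling the gap to the empty (air) capacitance."""
    return c_filled / c_empty
```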
Procedia PDF Downloads 368
33438 Mixed Model Sequencing in Painting Production Line
Authors: Unchalee Inkampa, Tuanjai Somboonwiwat
Abstract:
The painting of automobiles and automobile parts is a continuous process based on EDP (electrodeposition paint), through which all workpieces are continuously sent to the painting process. The work can be divided into two groups based on running time, Painting Room 1 and Painting Room 2, which leads to continuous operation. The problem that arises is the workload waiting to enter a painting room; the sequencing of workpieces from EDP to the painting rooms is a major problem. Therefore, this paper aims to develop a production sequencing method applying EDP to the painting process. It applies fixed-rate launching for the painting rooms, earliest due date (EDD) sequencing for the EDP process, and pairwise interchange to minimize machine waiting time. The results show that the developed method improved painting by reducing waiting time, achieving on-time delivery, meeting customer wants, and improving the productivity of the painting unit.
Keywords: sequencing, mixed model lines, painting process, electrodeposition paint
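The EDD component of the proposed method can be sketched as follows; on a single machine, EDD ordering is known to minimize maximum lateness (the job data and names are hypothetical, not from the study):

```python
def edd_sequence(jobs):
    """Earliest-due-date order; jobs = [(name, processing_time, due_date)]."""
    return sorted(jobs, key=lambda j: j[2])

def max_lateness(seq):
    """Maximum lateness max_j(C_j - d_j) of a given sequence."""
    t, worst = 0, float("-inf")
    for _name, p, d in seq:
        t += p                 # completion time accumulates
        worst = max(worst, t - d)
    return worst
```

A pairwise-interchange improvement step, as used in the paper for waiting time, would repeatedly swap adjacent jobs whenever the swap lowers the chosen objective.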
Procedia PDF Downloads 418
33437 Development of In Situ Permeability Test Using Constant Discharge Method for Sandy Soils
Authors: A. Rifa’i, Y. Takeshita, M. Komatsu
Abstract:
The post-rain puddles that occur in the first yard of Prambanan Temple often disturb visitor activity. A puddle layer and a drainage system were built previously to avoid this problem, but puddles still appeared after rain. The permeability parameter needs to be determined by a simpler procedure to find an exact method of solution. Instrument modeling was proposed based on the development of a field permeability testing instrument. This experiment used the proposed constant discharge method, in which a tube is supplied with a constant water flow. The procedure was carried out from unsaturated to saturated soil conditions. The volumetric water content (θ) was monitored by a soil moisture measurement device. The result was a relationship between k and θ drawn by a numerical approach, the van Genuchten model. The optimum value of the parameter θr obtained from the test corresponded to very dry soil. The coefficient of permeability at a density of 19.8 kN/m³ under unsaturated conditions ranged from 3 x 10⁻⁶ cm/sec (Sr = 68%) to 9.98 x 10⁻⁴ cm/sec (Sr = 82%). The equipment and testing procedure developed in this research were quite effective, simple, and easy to implement for determining the field soil permeability coefficient of sandy soil. Using the constant discharge method in the proposed permeability test, the value of the permeability coefficient under unsaturated conditions can be obtained without establishing the soil water characteristic curve.
Keywords: constant discharge method, in situ permeability test, sandy soil, unsaturated conditions
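The k-θ relationship fitted in such studies commonly follows the van Genuchten-Mualem form; a hedged sketch (the parameter values in the test are illustrative, not the paper's fitted values):

```python
def van_genuchten_k(theta, theta_r, theta_s, k_s, n):
    """Mualem-van Genuchten unsaturated permeability from water content:
    Se = (theta - theta_r)/(theta_s - theta_r),  m = 1 - 1/n,
    K(Se) = K_s * Se^0.5 * (1 - (1 - Se^(1/m))^m)^2."""
    m = 1.0 - 1.0 / n
    se = (theta - theta_r) / (theta_s - theta_r)  # effective saturation
    se = min(max(se, 0.0), 1.0)
    if se == 0.0:
        return 0.0
    return k_s * se ** 0.5 * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2
```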
Procedia PDF Downloads 381
33436 Implementation of a Lattice Boltzmann Method for Pulsatile Flow with Moment Based Boundary Condition
Authors: Zainab A. Bu Sinnah, David I. Graham
Abstract:
The Lattice Boltzmann Method has been developed and used to simulate both steady and unsteady fluid flow problems such as turbulent flows, multiphase flow and flows in the vascular system. As an example, the study of blood flow and its properties can give a greater understanding of atherosclerosis and the flow parameters which influence this phenomenon. The blood flow in the vascular system is driven by a pulsating pressure gradient which is produced by the heart. As a very simple model of this, we simulate plane channel flow under periodic forcing. This pulsatile flow is essentially the standard Poiseuille flow except that the flow is driven by the periodic forcing term. Moment boundary conditions, where various moments of the particle distribution function are specified, are applied at solid walls. We used a second-order single relaxation time model and investigated grid convergence using two distinct approaches. In the first approach, we fixed both Reynolds and Womersley numbers and varied relaxation time with grid size. In the second approach, we fixed the Womersley number and relaxation time. The expected second-order convergence was obtained for the second approach. For the first approach, however, the numerical method converged, but not necessarily to the appropriate analytical result. An explanation is given for these observations.
Keywords: Lattice Boltzmann method, single relaxation time, pulsatile flow, moment based boundary condition
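Grid-convergence checks of the kind described above are commonly quantified by the observed order of accuracy computed from errors on two grids; a minimal sketch (not the authors' code):

```python
import math

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    """Observed convergence order p from errors on two grids:
    p = log(e_coarse / e_fine) / log(r),
    where r is the grid refinement ratio (h_coarse / h_fine)."""
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)
```

A second-order scheme halving the grid spacing should roughly quarter the error, giving p close to 2.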
Procedia PDF Downloads 230
33435 Progress in Accuracy, Reliability and Safety in Firedamp Detection
Authors: José Luis Lorenzo Bayona, Ljiljana Medic-Pejic, Isabel Amez Arenillas, Blanca Castells Somoza
Abstract:
The communication presents the results of a study carried out by the Official Laboratory J. M. Madariaga (LOM) of the Polytechnic University of Madrid to analyze the reliability of methane detection systems used in underground mining. Poor firedamp control in mine workings can cause anything from production stoppages to fatal accidents, and since there is currently a great variety of equipment with different functional characteristics, a study is needed to indicate which measurement principles offer the highest degree of confidence. For the development of the project, a series of fixed, transportable, and portable methane detectors with different measurement principles were selected and subjected to laboratory tests following the methods described in the applicable regulations. The test equipment was that usually used in the certification and calibration of these devices, subject to the LOM quality system, and the tests were carried out on detectors available on the market. The conclusions establish the main advantages and disadvantages of the equipment according to the measurement principle used: catalytic combustion, interferometry, and infrared absorption.
Keywords: ATEX standards, gas detector, methane meter, mining safety
Procedia PDF Downloads 136
33434 Design of Enhanced Adaptive Filter for Integrated Navigation System of FOG-SINS and Star Tracker
Authors: Nassim Bessaad, Qilian Bao, Zhao Jiangkang
Abstract:
The fiber optic gyroscope in the strap-down inertial navigation system (FOG-SINS) suffers from precision degradation due to the influence of random errors. In this work, an enhanced Allan variance (AV) stochastic modeling method, combined with discrete wavelet transform (DWT) for signal denoising, is implemented to estimate the random process in the FOG signal. Furthermore, we devise a measurement-based iterative adaptive Sage-Husa nonlinear filter with augmented states to integrate a star tracker sensor with the SINS. The proposed filter adapts the measurement noise covariance matrix based on the available data. Moreover, the enhanced stochastic modeling scheme is used to tune the process noise covariance matrix and the augmented-state Gauss-Markov process parameters. Finally, the effectiveness of the proposed filter is investigated using data collected under laboratory conditions. The results show the filter's improved accuracy in comparison with the conventional Kalman filter (CKF).
Keywords: inertial navigation, adaptive filtering, star tracker, FOG
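The Allan variance at the core of the stochastic modeling step can be sketched for rate samples as follows (a plain overlapping estimator; the paper's enhanced, DWT-denoised variant is not reproduced here):

```python
def allan_variance(y, m=1):
    """Overlapping Allan variance of equally spaced rate samples y
    at cluster size m (averaging time tau = m * t0):
    sigma^2(tau) = < (ybar_{k+m} - ybar_k)^2 > / 2."""
    n = len(y)
    # moving cluster averages of length m
    avg = [sum(y[i:i + m]) / m for i in range(n - m + 1)]
    diffs = [avg[i + m] - avg[i] for i in range(len(avg) - m)]
    return sum(d * d for d in diffs) / (2.0 * len(diffs))
```

Plotting the square root of this quantity against tau on log-log axes gives the familiar Allan deviation slopes from which angle random walk and bias instability are read off.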
Procedia PDF Downloads 79
33433 Time-Domain Expressions for Bridge Self-Excited Aerodynamic Forces by Modified Particle Swarm Optimizer
Authors: Hao-Su Liu, Jun-Qing Lei
Abstract:
This study introduces the theory of the modified particle swarm optimizer and its application to time-domain expressions for bridge self-excited aerodynamic forces. Based on the indicial function expression and the rational function expression of the time-domain form of bridge self-excited aerodynamic forces, the characteristics of the two methods, the modified particle swarm optimizer and a conventional search method, are compared in the flutter-derivative fitting process. Theoretical analysis and numerical results indicate that, whether adopting the indicial function expression or the rational function expression, the fitted flutter derivatives obtained by the modified particle swarm optimizer show better goodness of fit with those obtained from experiment. For the flutter derivatives with higher nonlinearity, the self-excited aerodynamic forces computed using the flutter derivatives fitted by the modified particle swarm optimizer are much closer to the experimental ones. The modified particle swarm optimizer was used to identify the parameters of the time-domain expressions for the flutter derivatives of an actual long-span highway-railway truss bridge with double decks at wind attack angles of 0°, -3°, and +3°. It was found that this method could effectively solve the boundedness problem of the attenuation coefficient in the conventional search method and had the ability to search an unbounded area. Accordingly, this study provides a method for the engineering industry to obtain the time-domain expressions for bridge self-excited aerodynamic forces frequently and efficiently.
Keywords: time-domain expressions, bridge self-excited aerodynamic forces, modified particle swarm optimizer, long-span highway-railway truss bridge
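A baseline (unmodified) particle swarm optimizer of the kind being enhanced here can be sketched as below, minimizing a generic test function in place of the flutter-derivative fitting residual (all parameters are conventional defaults, not the paper's):

```python
import random

def pso(f, dim, n_particles=20, iters=200, seed=0,
        w=0.7, c1=1.5, c2=1.5, span=5.0):
    """Minimal particle swarm optimiser on [-span, span]^dim.
    Returns (best position, best objective value)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-span, span) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]           # personal bests
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

In the paper's setting, f would be the misfit between the flutter derivatives implied by a candidate parameter set and the experimental ones.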
Procedia PDF Downloads 313
33432 Basic Modal Displacements (BMD) for Optimizing the Buildings Subjected to Earthquakes
Authors: Seyed Sadegh Naseralavi, Mohsen Khatibinia
Abstract:
In structural optimization through meta-heuristic algorithms, structures are analyzed many times. For this reason, performing the analyses in a time-saving way is precious. This point is even more important in time-history analyses, which take much time. To this end, peak-picking methods, also known as spectrum analyses, are generally utilized. However, such methods do not have the required accuracy, whether done by the square root of sum of squares (SRSS) or the complete quadratic combination (CQC) rule. This paper presents an efficient technique for evaluating the dynamic responses during the optimization process with high speed and accuracy. In the method, an initial design is first obtained using a static equivalent of the earthquake. Then, the displacements in the modal coordinates are computed. These displacements are herein called basic modal displacements (BMD). For each new design of the structure, the responses can be derived by suitably scaling each of the BMDs in time and amplitude and superposing them using the corresponding modal matrices. To illustrate the efficiency of the method, an optimization problem is studied. The results show that the proposed approach is a suitable replacement for the conventional time-history and spectrum analyses in such problems.
Keywords: basic modal displacements, earthquake, optimization, spectrum
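The scaling-and-superposition step can be illustrated on a two-degree-of-freedom system: mass-normalised mode shapes map modal (basic) displacements to physical ones and back (the matrices below are hypothetical, not from the paper):

```python
import numpy as np

M = np.diag([2.0, 1.0])                 # mass matrix
K = np.array([[6.0, -2.0],
              [-2.0, 4.0]])             # stiffness matrix

# symmetric reduction of the generalised eigenproblem K*phi = w^2*M*phi
m_half_inv = np.diag(1.0 / np.sqrt(np.diag(M)))
w2, v = np.linalg.eigh(m_half_inv @ K @ m_half_inv)
phi = m_half_inv @ v                    # mass-normalised mode shapes

def superpose(q):
    """Physical displacements from (scaled) modal displacements q."""
    return phi @ q

def to_modal(x):
    """Modal coordinates of a physical displacement vector
    (uses the orthonormality phi^T M phi = I)."""
    return phi.T @ M @ x
```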
Procedia PDF Downloads 358
33431 Numerical Evolution Methods of Rational Form for Diffusion Equations
Authors: Said Algarni
Abstract:
The purpose of this study was to investigate selected numerical methods that demonstrate good performance in solving PDEs. We adopted an alternative method that involves rational polynomials: the Padé time stepping (PTS) method, which is highly stable for the purposes of the present application and is associated with lower computational costs, was applied. Furthermore, PTS was modified for our study, which focused on diffusion equations. Numerical runs were conducted to obtain the optimal local error control threshold.
Keywords: Padé time stepping, finite difference, reaction diffusion equation, PDEs
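The lowest-order member of the Padé time-stepping family is Crank-Nicolson, the (1,1) Padé approximant of the matrix exponential; a hedged 1-D diffusion sketch (not the authors' modified PTS scheme):

```python
import numpy as np

def crank_nicolson_heat(u0, dx, dt, nsteps, diffusivity=1.0):
    """1-D heat equation u_t = D u_xx on a grid of interior nodes with
    homogeneous Dirichlet ends. Crank-Nicolson advances
    (I - dt/2 * A) u_{n+1} = (I + dt/2 * A) u_n,
    the (1,1) Pade approximant of exp(dt * A)."""
    n = len(u0)
    a = np.zeros((n, n))
    for i in range(n):                      # second-difference operator
        a[i, i] = -2.0
        if i > 0:
            a[i, i - 1] = 1.0
        if i < n - 1:
            a[i, i + 1] = 1.0
    a *= diffusivity / dx ** 2
    eye = np.eye(n)
    lhs = eye - 0.5 * dt * a
    rhs = eye + 0.5 * dt * a
    u = np.array(u0, dtype=float)
    for _ in range(nsteps):
        u = np.linalg.solve(lhs, rhs @ u)
    return u
```

Higher-order diagonal Padé approximants follow the same pattern with higher-degree rational polynomials in dt*A, which is the "rational form" referred to in the abstract.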
Procedia PDF Downloads 296
33430 Bright Light Effects on the Concentration and Diffuse Attention Reaction Time, Tension, Angry, Fatigue and Alertness among Shift Workers
Authors: Mohammad Imani, JabraeilNasl Seraji, Abolfazl Zakerian
Abstract:
Background: Reaction time is the amount of time it takes to respond to a stimulus, i.e., the time that passes between the introduction of a stimulus and the subject's reaction to it. The aim of this interventional study was to evaluate the effects of bright light on concentration and diffuse attention reaction time, tension, anger, fatigue, and alertness among shift workers. Several stimuli can shorten or lengthen reaction time; bright light, as an environmental factor, can reduce it. Material & Method: This cross-sectional descriptive study was conducted in 1391 on 88 subjects (44 fixed morning workers and 44 shift workers). After a randomly selected sample size calculation, a concentration and diffuse attention (reaction time) test was performed over 24 h (at 13, 16, 19, 22, 1, 4, 7, and 10 o'clock) under ordinary light conditions. After the intervention, using bright light (4500 lux), the reaction time test was performed again. After analysis by the ELISA method, the obtained data were analyzed with the statistical software SPSS 19 using the t-test and ANOVA. Results: Before the intervention, there was no significant relationship (95% CI, p > 0.05) between the average reaction times of fixed morning workers under ordinary light and shift workers. After the intervention and the use of bright light (4500 lux), there was a significant relationship (95% CI, p < 0.05) between the average concentration and diffuse attention reaction times of fixed morning workers under ordinary light exposure and shift workers under bright light exposure. Conclusion: At some times of the 24 h day under ordinary light exposure, the concentration and diffuse attention reaction time of shift workers changed. After the intervention, during bright light (4500 lux) exposure as a light shower, concentration and diffuse attention reaction time, tension, anger, and fatigue decreased.
Keywords: bright light, reaction time, tension, anger, fatigue, alertness
Procedia PDF Downloads 384
33429 A Framework of Dynamic Rule Selection Method for Dynamic Flexible Job Shop Problem by Reinforcement Learning Method
Authors: Rui Wu
Abstract:
In the volatile modern manufacturing environment, new orders occur randomly at any time, and pre-emptive methods are infeasible. This calls for a real-time scheduling method that can produce a reasonably good schedule quickly. The dynamic Flexible Job Shop problem is an NP-hard scheduling problem that hybridizes the dynamic Job Shop problem with the Parallel Machine problem. A Flexible Job Shop contains different work centres, each containing parallel machines that can process certain operations. Many algorithms, such as genetic algorithms or simulated annealing, have been proposed to solve static Flexible Job Shop problems. However, the time efficiency of these methods is low, and they are not feasible for a dynamic scheduling problem. Therefore, a dynamic rule selection scheduling system based on the reinforcement learning method is proposed in this research, in which the dynamic Flexible Job Shop problem is divided into several parallel machine problems to decrease its complexity. Firstly, features of the jobs, machines, work centres, and flexible job shop are selected to describe the status of the problem at each decision point in each work centre. Secondly, a reinforcement learning framework using a double-layer deep Q-learning network is applied to select proper composite dispatching rules based on the status of each work centre. Then, based on the selected composite dispatching rule, an available operation is selected from the waiting buffer and assigned to an available machine in each work centre. Finally, the proposed algorithm is compared with well-known dispatching rules on the objectives of mean tardiness, mean flow time, mean waiting time, and mean percentage of waiting time in the real-time Flexible Job Shop problem. The results of the simulations show that the proposed framework has reasonable performance and time efficiency.
Keywords: dynamic scheduling problem, flexible job shop, dispatching rules, deep reinforcement learning
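The rule-selection step can be sketched in tabular form as epsilon-greedy action selection with a Q-learning-style update, a deliberately simplified stand-in for the paper's double-layer deep Q-network (the rule names and constants are illustrative):

```python
import random

RULES = ["SPT", "EDD", "FIFO", "CR"]   # candidate dispatching rules

def select_rule(q_values, epsilon, rng=random):
    """Epsilon-greedy choice over dispatching-rule indices:
    explore with probability epsilon, otherwise exploit argmax Q."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def q_update(q_values, action, reward, q_next_max, alpha=0.1, gamma=0.95):
    """Tabular stand-in for the DQN target:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = reward + gamma * q_next_max
    q_values[action] += alpha * (target - q_values[action])
```

In the full system the Q-values would come from a neural network evaluated on the work-centre state features, rather than a table.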
Procedia PDF Downloads 106
33428 Optimisation of Metrological Inspection of a Developmental Aeroengine Disc
Authors: Suneel Kumar, Nanda Kumar J. Sreelal Sreedhar, Suchibrata Sen, V. Muralidharan
Abstract:
Fan technology is critical and crucial for any aero engine, and the fan disc forms a critical part of the fan module. It is an airworthiness requirement to have a metrologically qualified disc. The current study uses tactile probing and scanning on an articulated measuring machine (AMM), a bridge-type coordinate measuring machine (CMM), and metrology software for intermediate and final dimensional and geometrical verification during the prototype development of the disc, which is manufactured through forging and machining. The circumferential dovetails, manufactured through milling, are evaluated based on the analysed metrological process. To perform metrological optimization, a change of philosophy is needed, making quality measurements available as fast as possible to improve process knowledge and accelerate the process, while keeping the measurements accurate, precise, and traceable. Offline CMM programming for inspection and the optimisation of the CMM inspection plan are crucial portions of the study and are discussed. A dimensional measurement plan as per the ASME B89.7.2 standard is an important requirement for reaching an optimised CMM measurement plan and strategy. The effects of the probing strategy, stylus configuration, and approximation strategy on the measurements of the circumferential dovetails of the developmental prototype disc are discussed. The results are presented as enhancements of the R&R (repeatability and reproducibility) values, with uncertainty levels within the desired limits. The findings from the measurement strategy adopted for dovetail evaluation and inspection time optimisation are discussed with the help of various analyses and graphical outputs obtained from the verification process.
Keywords: coordinate measuring machine, CMM, aero engine, articulated measuring machine, fan disc
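A simplified reading of the R&R metrics mentioned above: repeatability as the pooled within-operator standard deviation and reproducibility as the between-operator standard deviation of the means (a hedged sketch, not the full ASME/AIAG gauge study procedure; the data are hypothetical):

```python
def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    """Sample variance with n-1 denominator."""
    mu = mean(xs)
    return sum((x - mu) ** 2 for x in xs) / (len(xs) - 1)

def gauge_rr(measurements):
    """measurements: {operator: [repeat readings of the same feature]}.
    Returns (repeatability, reproducibility) as standard deviations."""
    within = [variance(v) for v in measurements.values()]
    repeatability = (sum(within) / len(within)) ** 0.5  # pooled within-operator
    means = [mean(v) for v in measurements.values()]
    reproducibility = variance(means) ** 0.5            # between-operator
    return repeatability, reproducibility
```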
Procedia PDF Downloads 106
33427 Development, Optimization, and Validation of a Synchronous Fluorescence Spectroscopic Method with Multivariate Calibration for the Determination of Amlodipine and Olmesartan Implementing: Experimental Design
Authors: Noha Ibrahim, Eman S. Elzanfaly, Said A. Hassan, Ahmed E. El Gendy
Abstract:
Objectives: The purpose of the study is to develop a sensitive synchronous spectrofluorimetric method with multivariate calibration after studying and optimizing the different variables affecting the native fluorescence intensity of amlodipine and olmesartan, implementing an experimental design approach. Method: In the first step, a fractional factorial design was used to screen the independent factors affecting the fluorescence intensity of both drugs. The objective of the second step was to optimize the method performance using a central composite face-centred (CCF) design. The optimal experimental conditions obtained from this study were a temperature of 15°C ± 0.5, a solvent of 0.05 N HCl and methanol (90:10, v/v), a Δλ of 42, and the addition of 1.48% surfactant, providing a sensitive measurement of amlodipine and olmesartan. The resolution of the binary mixture with a multivariate calibration method was accomplished mainly by using a partial least squares (PLS) model. Results: The recovery percentages for amlodipine besylate and olmesartan in tablet dosage form were found to be 102 ± 0.24 and 99.56 ± 0.10, respectively. Conclusion: The method was validated according to International Conference on Harmonization (ICH) guidelines, proving to be linear over ranges of 200-300 and 500-1500 ng mL⁻¹ for amlodipine and olmesartan, respectively. The method was successful in estimating amlodipine besylate and olmesartan in bulk powder and pharmaceutical preparations.
Keywords: amlodipine, central composite face-centred design, experimental design, fractional factorial design, multivariate calibration, olmesartan
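Resolving a binary mixture from multi-wavelength signals can be illustrated with classical least squares as a deliberately simplified stand-in for the PLS model used in the paper (the spectra below are synthetic, not the authors' data):

```python
import numpy as np

def fit_pure_spectra(C_train, S_train):
    """Least-squares estimate of pure-component spectra K from the
    bilinear (Beer-Lambert-like) model S = C @ K, given training
    concentrations C_train and measured spectra S_train."""
    K, *_ = np.linalg.lstsq(C_train, S_train, rcond=None)
    return K

def predict_concentrations(K, spectrum):
    """Concentrations of an unknown mixture from its spectrum,
    solving K.T @ c = spectrum in the least-squares sense."""
    c, *_ = np.linalg.lstsq(K.T, spectrum, rcond=None)
    return c
```

PLS differs by regressing on latent components, which is more robust to collinearity and noise, but the calibrate-then-predict workflow is the same.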
Procedia PDF Downloads 148
33426 New Method for Determining the Distribution of Birefringence and Linear Dichroism in Polymer Materials Based on Polarization-Holographic Grating
Authors: Barbara Kilosanidze, George Kakauridze, Levan Nadareishvili, Yuri Mshvenieradze
Abstract:
A new method for determining the distribution of birefringence and linear dichroism in optical polymer materials is presented. The method is based on the use of a polarization-holographic diffraction grating that forms an orthogonal circular basis in the process of diffraction of a probing laser beam on the grating. The intensity ratio of the diffraction orders of this grating enables the values of birefringence and linear dichroism in the sample to be determined. The distribution of birefringence in the sample is determined by scanning with a circularly polarized beam at a wavelength far from the absorption band of the material. If the scanning is carried out with a probing beam at a wavelength near a maximum of the absorption band of the chromophore, the distribution of linear dichroism can be determined. An appropriate theoretical model of this method is presented. A laboratory setup was created for the proposed method, and its optical scheme is presented. The results of measurements in polymer films with a two-dimensional gradient distribution of birefringence and linear dichroism are discussed.
Keywords: birefringence, linear dichroism, graded oriented polymers, optical polymers, optical anisotropy, polarization-holographic grating
Procedia PDF Downloads 432
33425 Feasibility Study of Measurement of Turning Based-Surfaces Using Perthometer, Optical Profiler and Confocal Sensor
Authors: Khavieya Anandhan, Soundarapandian Santhanakrishnan, Vijayaraghavan Laxmanan
Abstract:
In general, measurement of surfaces is carried out using traditional methods such as contact-type stylus instruments. This prevalent approach is challenged by non-contact instruments such as optical profilers, co-ordinate measuring machines, laser triangulation sensors, machine vision systems, etc. Recently, the confocal sensor has begun to be used in the surface metrology field, and it is explored in this study to determine the surface roughness values of various turned surfaces. Turning is a crucial machining process used to manufacture products such as grooves, tapered domes, threads, tapers, etc. Turned surfaces with roughness values in the range 0.4-12.5 µm were taken for analysis. Three instruments were used, namely a perthometer, an optical profiler, and a confocal sensor. Among these, the confocal sensor is the least explored, despite its good resolution of about 5 nm. Thus, this high-precision sensor was used in this study to explore the possibility of measuring turned surfaces. Further, using these data, the measurement uncertainty was also studied.
Keywords: confocal sensor, optical profiler, surface roughness, turned surfaces
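The closing measurement-uncertainty study can be illustrated with a minimal Type A evaluation of repeated readings; the Ra values below are hypothetical, and instrument (Type B) contributions are omitted from this sketch.

```python
import statistics

def type_a_uncertainty(readings, coverage_k=2.0):
    """Type A standard uncertainty of the mean from repeated roughness
    readings, with an expanded uncertainty at coverage factor k = 2
    (~95 % confidence). Instrument (Type B) terms are omitted."""
    n = len(readings)
    mean = statistics.mean(readings)
    s = statistics.stdev(readings)   # sample standard deviation
    u = s / n ** 0.5                 # standard uncertainty of the mean
    return mean, u, coverage_k * u

# hypothetical repeated Ra readings (µm) on one turned surface
mean, u, U = type_a_uncertainty([3.21, 3.25, 3.19, 3.24, 3.22])
```

The reported result would then be quoted as mean ± U, with the coverage factor stated.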
Procedia PDF Downloads 132
33424 Enhanced Acquisition Time of a Quantum Holography Scheme within a Nonlinear Interferometer
Authors: Sergio Tovar-Pérez, Sebastian Töpfer, Markus Gräfe
Abstract:
The work proposes a technique that decreases the detection acquisition time of quantum holography schemes down to one-third, which opens the possibility of imaging moving objects. Since its invention, quantum holography with undetected photons has gained interest in the scientific community, mainly due to its ability to tailor the detected wavelengths according to the needs of the implementation. Yet while this wavelength flexibility grants the scheme a wide range of possible applications, an important matter had yet to be addressed. Since the scheme uses digital phase-shifting techniques to retrieve the object information from the interference pattern, it is necessary to acquire a set of at least four images of the interference pattern with well-defined phase steps to recover the full object information. Hence, the imaging method requires longer acquisition times to produce well-resolved images, and as a consequence, the measurement of moving objects remained out of reach. This work presents the use and implementation of a spatial light modulator along with a digital holographic technique called quasi-parallel phase shifting. This technique uses the spatial light modulator to build a structured phase image consisting of a chessboard pattern containing the different phase steps for digitally calculating the object information. Depending on the reduction in the number of needed frames, the acquisition time is reduced by a significant factor. This technique opens the door to applying the scheme to moving objects; in particular, its application to imaging living specimens comes one step closer.
Keywords: quasi-parallel phase shifting, quantum imaging, quantum holography, quantum metrology
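The chessboard pattern of interleaved phase steps can be sketched as follows; this is a generic illustration of quasi-parallel phase shifting with 2 × 2 macro-pixels, not the authors' actual SLM configuration.

```python
import math

def chessboard_phase_mask(rows, cols):
    """Build a phase mask for quasi-parallel phase shifting: the four
    phase steps 0, pi/2, pi, 3*pi/2 are interleaved in 2x2 macro-pixels,
    so a single camera frame carries all four phase steps at once
    instead of four sequential exposures."""
    steps = [[0.0, math.pi / 2], [math.pi, 3 * math.pi / 2]]
    return [[steps[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

mask = chessboard_phase_mask(4, 4)
```

Spatially demultiplexing the recorded frame by these four sub-grids then yields the four phase-shifted images needed for the digital reconstruction, at reduced lateral resolution.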
Procedia PDF Downloads 113
33423 Execution Time Optimization of Workflow Network with Activity Lead-Time
Authors: Xiaoping Qiu, Binci You, Yue Hu
Abstract:
The execution time of a workflow network has an important effect on the efficiency of the business process. In this paper, the activity execution time is divided into service time and waiting time, and the lead time is then extracted from the waiting time. The execution time formulas of the three basic structures in the workflow network are deduced based on the activity lead time. Taking the process of e-commerce logistics as an example, appropriate lead times are inserted for key activities using a Petri net, and an execution time optimization model is built to minimize the waiting time under time-cost constraints. A solution program written in VC++ 6.0 is then compiled to obtain the optimal solution, which reduces the waiting time of key activities in the workflow and verifies the role of lead time in the timeliness of e-commerce logistics.
Keywords: electronic business, execution time, lead time, optimization model, Petri net, time workflow network
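A minimal sketch of execution time formulas for the basic structures is given below, assuming that each activity's effective waiting time is its waiting time reduced by the inserted lead time; this decomposition is an assumption for illustration, and the paper's exact formulas may differ.

```python
def sequence_time(activities):
    """Sequential structure: service plus waiting time of each activity,
    where the waiting time shrinks by the inserted lead time (assumed)."""
    return sum(service + max(wait - lead, 0.0)
               for service, wait, lead in activities)

def parallel_time(branches):
    """Parallel (AND) structure: the slowest branch dominates."""
    return max(sequence_time(b) for b in branches)

def choice_time(branches_with_prob):
    """Choice (XOR) structure: probability-weighted branch times."""
    return sum(p * sequence_time(b) for p, b in branches_with_prob)

# hypothetical e-commerce logistics fragment: (service, wait, lead) hours
picking = [(1.0, 2.0, 1.5)]
packing = [(0.5, 1.0, 0.0)]
t = sequence_time(picking + packing)
```

An optimization model would then choose the lead times subject to cost constraints so as to minimize the total waiting time.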
Procedia PDF Downloads 171
33422 Quantum Decision Making with Small Sample for Network Monitoring and Control
Authors: Tatsuya Otoshi, Masayuki Murata
Abstract:
With the development and diversification of applications on the Internet, applications that require high responsiveness, such as video streaming, are becoming mainstream. Application responsiveness is not only a matter of communication delay but also a matter of the time required to grasp changes in network conditions. The tradeoff between accuracy and measurement time is a challenge in network control. People make countless decisions all the time, and our decisions seem to resolve tradeoffs between time and accuracy: when making decisions, people are known to make appropriate choices based on relatively small samples. Although there have been various studies on models of human decision-making, a model that integrates various cognitive biases, called 'quantum decision-making', has recently attracted much attention. However, the modeling of small samples has not been examined much so far. In this paper, we extend the model of quantum decision-making to model decision-making with a small sample. In the proposed model, the state is updated by value-based probability amplitude amplification. By analytically obtaining a lower bound on the number of samples required for decision-making, we show that decision-making with a small number of samples is feasible.
Keywords: quantum decision making, small sample, MPEG-DASH, Grover's algorithm
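The scaling behind amplitude amplification, which the sample-count lower bound builds on, can be illustrated by computing the iteration count that approximately maximizes Grover's success probability; this is standard Grover arithmetic, not the authors' decision-making model itself.

```python
import math

def grover_iterations(n_items, n_targets=1):
    """Number of amplitude-amplification (Grover) iterations that
    approximately maximises the success probability, and that
    probability. After k iterations the success probability is
    sin^2((2k + 1) * theta) with theta = asin(sqrt(M / N)), so the
    optimum is near pi / (4 * theta), i.e. O(sqrt(N)) iterations."""
    theta = math.asin(math.sqrt(n_targets / n_items))
    k = max(0, round(math.pi / (4 * theta) - 0.5))
    p_success = math.sin((2 * k + 1) * theta) ** 2
    return k, p_success

k, p = grover_iterations(1024)
```

The same square-root dependence is what makes a small number of "samples" (oracle queries) sufficient in amplification-based schemes.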
Procedia PDF Downloads 78
33421 Improving Load Frequency Control of Multi-Area Power System by Considering Uncertainty by Using Optimized Type 2 Fuzzy Pid Controller with the Harmony Search Algorithm
Authors: Mehrdad Mahmudizad, Roya Ahmadi Ahangar
Abstract:
This paper presents a method of designing type 2 fuzzy PID controllers to solve the problem of Load Frequency Control (LFC). The Harmony Search (HS) algorithm is used to tune the measurement factors and the uncertainty of the membership functions of the Interval Type 2 Fuzzy Proportional Integral Differential (IT2FPID) controllers in order to reduce the frequency deviation resulting from load oscillations. The simulation results show that the performance of the proposed IT2FPID LFC, in terms of error, settling time, and resistance against different load oscillations, is superior to that of PID and Type 1 Fuzzy Proportional Integral Differential (T1FPID) controllers.
Keywords: load frequency control, fuzzy PID controller, type 2 fuzzy system, harmony search algorithm
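A minimal Harmony Search loop can be sketched as below; it minimizes a toy quadratic objective standing in for the frequency-deviation cost, and the parameter values (HMCR, PAR, bandwidth) are illustrative, not those used in the paper.

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iterations=2000, seed=42):
    """Minimal Harmony Search sketch: a harmony memory of size hms,
    memory-considering rate hmcr, pitch-adjusting rate par, and
    bandwidth bw (as a fraction of each variable's range). A new
    harmony replaces the worst one whenever it scores better."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                       # memory consideration
                value = memory[rng.randrange(hms)][d]
                if rng.random() < par:                    # pitch adjustment
                    value += rng.uniform(-bw, bw) * (hi - lo)
            else:                                         # random selection
                value = rng.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        score = objective(new)
        worst = max(range(hms), key=scores.__getitem__)
        if score < scores[worst]:
            memory[worst], scores[worst] = new, score
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# toy objective standing in for the frequency-deviation cost
best, cost = harmony_search(lambda x: sum(v * v for v in x),
                            [(-10, 10)] * 3)
```

In the paper's setting, the decision vector would hold the IT2FPID gains and membership-function parameters, and the objective would evaluate the simulated frequency deviation.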
Procedia PDF Downloads 276
33420 Intelligent Technology for Real-Time Monitor and Data Analysis of the Aquaculture Toxic Water Concentration
Authors: Chin-Yuan Hsieh, Wei-Chun Lu, Yu-Hong Zeng
Abstract:
Mass fish die-offs are frequently caused by fish disease resulting from the deterioration of aquaculture water quality. Toxic ammonia is produced by the animals as a byproduct of protein metabolism. The system is designed with smart sensor technology and developed with a mathematical model to monitor the water parameters 24 hours a day and predict the relationships among twelve water quality parameters for monitoring water quality in aquaculture. All measured data are stored on a cloud server. In productive ponds, the daytime pH may be high enough to be lethal to the fish. A sudden change in the aquaculture conditions often results in an increase in the pH value of the water, a lack of dissolved oxygen, water quality deterioration, and yield reduction. In real measurements, the system successfully sent a message to the user's smartphone when water quality conditions were bad. In data comparisons between measurement and model simulation at a fish aquaculture site, the difference in parameters was less than 2% and the correlation coefficient was at least 98.34%. The solubility of oxygen decreases exponentially with the elevation of water temperature, with a correlation coefficient of 98.98%.
Keywords: aquaculture, sensor, ammonia, dissolved oxygen
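The exponential temperature dependence of oxygen solubility and the reported correlation coefficients can be illustrated with a small sketch; the model coefficients and pond readings below are hypothetical, not the site's fitted values.

```python
import math

def do_saturation(temp_c, a=14.6, k=0.022):
    """Hypothetical exponential model of dissolved-oxygen saturation
    (mg/L) versus water temperature (deg C): DO = a * exp(-k * T).
    The coefficients a and k are illustrative, not fitted site values."""
    return a * math.exp(-k * temp_c)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between measured and modeled series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

temps = [10, 15, 20, 25, 30]
measured = [11.7, 10.5, 9.4, 8.5, 7.6]   # hypothetical pond readings (mg/L)
modeled = [do_saturation(t) for t in temps]
r = pearson_r(measured, modeled)
```

The correlation coefficient computed this way is how agreement figures such as the quoted 98.98% would be obtained.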
Procedia PDF Downloads 281
33419 Shield Tunnel Excavation Simulation of a Case Study Using a So-Called 'Stress Relaxation' Method
Authors: Shengwei Zhu, Alireza Afshani, Hirokazu Akagi
Abstract:
Ground surface settlement induced by shield tunneling is attracting increasing attention as shield tunneling becomes a popular construction technique for tunnels in urban areas. This paper discusses a 2D longitudinal FEM simulation of a tunneling case study in Japan (Tokyo Metro Yurakucho Line). Tunneling-induced field data had already been collected and is used here for comparison and evaluation purposes. In this model, earth pressure, face pressure, backfill grouting, an elastic tunnel lining, and the Mohr-Coulomb failure criterion for soil elements are considered. A method called 'stress relaxation' is also exploited to simulate the gradual tunneling excavation. Ground surface settlements obtained from the numerical results using the introduced method are then compared with the measurement data.
Keywords: 2D longitudinal FEM model, tunneling case study, stress relaxation, shield tunneling excavation
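The 'stress relaxation' idea, releasing the excavation-equivalent boundary forces gradually over analysis steps rather than all at once, might be sketched as follows; the force values and release schedule are illustrative, not the case study's.

```python
def stress_relaxation_steps(initial_forces, release_fractions):
    """'Stress relaxation' sketch: the nodal forces equivalent to the
    excavated ground are released one fraction per analysis step, so
    the remaining support force decays toward zero. Each entry of the
    returned list is the force state after one step."""
    histories = []
    remaining = 1.0
    for fraction in release_fractions:
        remaining -= fraction
        histories.append([f * remaining for f in initial_forces])
    return histories

# hypothetical nodal forces (kN) and a 30/30/40 % release schedule
steps = stress_relaxation_steps([100.0, 250.0], [0.3, 0.3, 0.4])
```

In an actual FEM run, the solver would re-equilibrate the mesh after each partial release, so the settlement develops progressively as in real excavation.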
Procedia PDF Downloads 328
33418 Surface Morphology and Wetting Behavior of the Aspidiotus spp. Scale Covers
Authors: Meril Kate Mariano, Billy Joel Almarinez Divina Amalin, Jose Isagani Janairo
Abstract:
The scale insects Aspidiotus destructor and Aspidiotus rigidus exhibit notable scale covers made of wax, which provide protection against water loss and are capable of resisting wetting, making them a desirable model for biomimetic designs. Their waxy covers enable them to infest mainly the leaves of coconut trees despite harsh wind and rain. This study aims to describe and compare the micromorphological characters on the surfaces of their scale covers and, consequently, how these microstructures affect their wetting properties. A scanning electron microscope was used for the surface characterization, while an optical contact angle meter was employed for the wetting measurements. The scale cover of A. destructor is composed of multiple overlapping layers of wax arranged regularly, while that of A. rigidus is composed of a uniform layer of wax with much more prominent, irregularly arranged wax ribbons. The protrusions found on the two organisms are formed by the wax ribbons, which differ in arrangement, with heights of A. destructor (3.57 ± 1.29) < A. rigidus (4.23 ± 1.22) and densities of A. destructor (15 ± 2.94) < A. rigidus (18.33 ± 2.64). These morphological differences could affect the contact angle (CA, θ) measurements of A. destructor (102.66 ± 9.78°) < A. rigidus (102.77 ± 11.01°), indicating that the interaction of the liquid with the microstructures of the substrate is a large factor in the wetting properties of the insect scales. The calculated surface free energy of A. destructor (38.47 mJ/m²) > A. rigidus (31.02 mJ/m²) is inversely related to the contact angle measurements. The dispersive interaction between the surface and the liquid is more prevalent than the polar interaction for both Aspidiotus species, as observed using the Fowkes method. The results of this study could serve as a potential biomimetic design model for various industries such as textiles and coatings.
Keywords: Aspidiotus spp., biomimetics, contact angle, surface characterization, wetting behavior
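The Fowkes-type separation of surface free energy into dispersive and polar components can be sketched with the common two-liquid Owens-Wendt extension; the liquid data are standard literature values, while the contact angles in the example are illustrative, not the paper's measurements.

```python
import math

def surface_energy_owens_wendt(theta_water_deg, theta_diiodo_deg):
    """Solve for the dispersive and polar components of a solid's surface
    free energy (mJ/m^2) from water and diiodomethane contact angles via
    the Owens-Wendt extension of the Fowkes approach:
    gamma_l * (1 + cos theta) / 2 = sqrt(gd_s * gd_l) + sqrt(gp_s * gp_l).
    With two liquids this is linear in sqrt(gd_s) and sqrt(gp_s)."""
    liquids = [
        # (gamma_total, gamma_dispersive, gamma_polar, contact angle)
        (72.8, 21.8, 51.0, math.radians(theta_water_deg)),   # water
        (50.8, 50.8, 0.0, math.radians(theta_diiodo_deg)),   # diiodomethane
    ]
    rows = [(math.sqrt(gd), math.sqrt(gp), g * (1 + math.cos(th)) / 2)
            for g, gd, gp, th in liquids]
    (a1, b1, c1), (a2, b2, c2) = rows
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det      # sqrt of dispersive component
    y = (a1 * c2 - a2 * c1) / det      # sqrt of polar component
    gamma_d, gamma_p = x * x, y * y
    return gamma_d, gamma_p, gamma_d + gamma_p

# illustrative angles: ~102.7 deg for water (as on the scale covers), 60 deg assumed for diiodomethane
gd, gp, total = surface_energy_owens_wendt(102.7, 60.0)
```

For a waxy, hydrophobic surface like these scale covers, the dispersive component dominates, which matches the abstract's Fowkes-method observation.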
Procedia PDF Downloads 121
33417 Influence of Selected Finishing Technologies on the Roughness Parameters of Stainless Steel Manufactured by Selective Laser Melting Method
Authors: J. Hajnys, M. Pagac, J. Petru, P. Stefek, J. Mesicek, J. Kratochvil
Abstract:
The new progressive method of 3D metal printing, SLM (Selective Laser Melting), is increasingly being adopted in regular operation. As a result, greater demands are placed on the surface quality of the parts produced in this way. The article deals with research on selected finishing methods (tumbling, face milling, sandblasting, shot peening, and brushing) and their impact on the final surface roughness. Specimens of 20 × 20 × 7 mm, produced using SLM additive technology on the Renishaw AM400, were subjected to testing of these finishing methods with various parameter settings. The areal roughness parameters Sa and Sz were chosen as the evaluation criteria, and the profile parameters Ra and Rz were used as additional measurements. Optical measurement of surface roughness was performed on an Alicona Infinite Focus 5. The experiment conducted to optimize the surface roughness revealed, as expected, that the best roughness parameters were achieved by the face milling operation. Tumbling is particularly suitable for 3D-printed components, as tumbling media are able to reach even complex shapes and, after changing to polishing bodies, achieve a high surface gloss. Surface quality after tumbling depends on the process time. Other methods with satisfactory results are shot peening and tumbling, which should be the focus of further research.
Keywords: additive manufacturing, selective laser melting, SLM, surface roughness, stainless steel
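The profile parameters Ra and Rz used as additional measurements can be illustrated from a sampled height profile; this is a simplified sketch (standard Rz averages over five sampling lengths, and here it is taken as a single peak-to-valley height), with hypothetical heights.

```python
def roughness_params(profile):
    """Compute simplified profile roughness parameters from sampled
    heights (µm): Ra is the arithmetic mean deviation from the mean
    line, and Rz is taken here as the single peak-to-valley height of
    the mean-line-referenced profile."""
    n = len(profile)
    mean = sum(profile) / n
    deviations = [z - mean for z in profile]
    ra = sum(abs(d) for d in deviations) / n
    rz = max(deviations) - min(deviations)
    return ra, rz

# hypothetical sampled heights (µm) along one turned/finished profile
ra, rz = roughness_params([1.2, 1.8, 0.9, 1.5, 2.1, 0.7, 1.4, 1.9])
```

The areal parameters Sa and Sz named in the abstract generalize the same definitions from a line profile to a measured surface patch.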
Procedia PDF Downloads 130
33416 A Study on the Accelerated Life Cycle Test Method of the Motor for Home Appliances by Using Acceleration Factor
Authors: Youn-Sung Kim, Mi-Sung Kim, Jae-Kun Lee
Abstract:
This paper deals with an accelerated life cycle test method for motors for home appliances that demand high reliability. The life cycle of parts in home appliances should also be 10 years, because the life cycle of home appliances such as washing machines, refrigerators, and TVs is at least 10 years. In the case of washing machines, the motor life cycle test is conducted as a 3000-cycle test (1 cycle = 2 hours). However, the 3000-cycle test incurs losses in time and cost. The objectives of this study are to reduce the life cycle test time and the number of test samples, which can be realized by using an acceleration factor for the test time and a reduction factor for the number of samples.
Keywords: accelerated life cycle test, motor reliability test, motor for washing machine, BLDC motor
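One common way to obtain an acceleration factor is the Arrhenius temperature model; the sketch below is an assumption for illustration, since the abstract does not state which stress model the study uses, and the activation energy and temperatures are hypothetical.

```python
import math

def arrhenius_af(t_use_c, t_test_c, ea_ev=0.7):
    """Arrhenius acceleration factor between use and test temperatures:
    AF = exp((Ea / k) * (1 / T_use - 1 / T_test)), temperatures in K.
    The activation energy Ea (eV) is illustrative, not the study's."""
    k_boltzmann = 8.617e-5   # Boltzmann constant, eV/K
    t_use, t_test = t_use_c + 273.15, t_test_c + 273.15
    return math.exp(ea_ev / k_boltzmann * (1 / t_use - 1 / t_test))

def accelerated_cycles(nominal_cycles, af):
    """Test cycles needed once the acceleration factor is applied."""
    return math.ceil(nominal_cycles / af)

af = arrhenius_af(40, 85)          # hypothetical use vs. test temperature
cycles = accelerated_cycles(3000, af)
```

With an acceleration factor of this order, the nominal 3000-cycle test shrinks to a far shorter stressed test, which is the kind of reduction the study targets.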
Procedia PDF Downloads 633