Search results for: noise estimation
1382 Investigation of the Speckle Pattern Effect for Displacement Assessments by Digital Image Correlation
Authors: Salim Çalışkan, Hakan Akyüz
Abstract:
Digital image correlation (DIC) has become established as a versatile and efficient method for measuring displacements on specimen surfaces by comparing reference subsets in the undeformed image with the corresponding target subsets in the deformed image. Theoretical models indicate that the accuracy of DIC displacement data can be predicted from the variance of the image noise and the sum of the squares of the subset intensity gradients. The DIC procedure locates each subset of the reference image in the deformed image; the software then determines the displacement of each subset center, providing the full-field displacement measurement. In this paper, the effect of the speckle distribution on the measured out-of-plane displacement data is investigated as a function of the subset size. Nine groups of speckle patterns were used in this study: samples were sprayed randomly through pre-manufactured patterns of three different hole diameters, each with three coverage ratios, on a computer numerical control punch press. The resulting displacement values, referenced at the center of the subset, are evaluated as the average of the displacements of the pixels inside the subset.
Keywords: digital image correlation, speckle pattern, experimental mechanics, tensile test, aluminum alloy
1381 Ensemble of Deep CNN Architecture for Classifying the Source and Quality of Teff Cereal
Authors: Belayneh Matebie, Michael Melese
Abstract:
The study focuses on addressing the challenges in classifying and ensuring the quality of Eragrostis Teff, a small, round grain that is the smallest of the cereal grains. Traditional classification methods are difficult to apply because of its small size and the similarity of its environmental characteristics. To overcome this, this study employs a machine learning approach to develop a source and quality classification system for Teff cereal. Data were collected from various production areas in the Amhara region, considering two quality levels (high and low) across eight classes. A total of 5,920 images were collected, with 740 images for each class. Image enhancement techniques, including scaling, data augmentation, histogram equalization, and noise removal, were applied to preprocess the data. A Convolutional Neural Network (CNN) was then used to extract relevant features and reduce dimensionality. The dataset was split into 80% for training and 20% for testing. Different classifiers, including FVGG16, FINCV3, QSCTC, EMQSCTC, SVM, and RF, were employed for classification, achieving accuracy rates ranging from 86.91% to 97.72%. The ensemble of FVGG16, FINCV3, and QSCTC using the max-voting approach outperforms the individual algorithms.
Keywords: Teff, ensemble learning, max-voting, CNN, SVM, RF
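A minimal sketch of the max-voting combination described above, assuming each base model (FVGG16, FINCV3, QSCTC) outputs one class label per test image; the class labels below are fabricated for illustration:

```python
import numpy as np

def max_vote(predictions: np.ndarray) -> np.ndarray:
    """predictions: (n_models, n_samples) integer class labels.
    Returns the majority label per sample (ties resolved by the lowest label)."""
    n_classes = predictions.max() + 1
    votes = np.apply_along_axis(lambda col: np.bincount(col, minlength=n_classes),
                                axis=0, arr=predictions)   # (n_classes, n_samples)
    return votes.argmax(axis=0)

# Hypothetical per-model predictions for 6 test images over 8 classes (0..7).
preds_fvgg16 = np.array([0, 3, 5, 2, 7, 1])
preds_fincv3 = np.array([0, 3, 4, 2, 7, 1])
preds_qsctc  = np.array([0, 2, 5, 2, 6, 1])

ensemble = max_vote(np.vstack([preds_fvgg16, preds_fincv3, preds_qsctc]))
print(ensemble)  # -> [0 3 5 2 7 1]
```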
1380 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage
Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng
Abstract:
Accurate adult age estimation (AAE) is a significant and challenging task in the forensic and archaeological fields. Attempts have been made to explore optimal adult age metrics, and the rib is considered a potential age marker. The traditional approach is to extract age-related features designed by experts from macroscopic or radiological images, followed by classification or regression analysis. Such results still have not met the requirements of practice, and a limitation of feature design and manual extraction methods is the loss of information, since the features are not necessarily designed to capture age-relevant information. Deep learning (DL) has recently garnered much interest in image analysis and computer vision. It enables learning features that are important without a prior bias or hypothesis and could be supportive of AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to a manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and the dataset was randomly split into training and validation sets in a 4:1 ratio for each fold. Before feeding the inputs into the networks, all images were augmented with random rotation and vertical flip, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary evaluation metric. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method completely followed the prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies. CT data and VR images were used: the radiation density of the first costal cartilage was recorded from the CT data on the workstation, and the osseous and calcified projections of the first to seventh costal cartilages were scored from the VR images using an eight-stage staging technique. According to the results of the prior study, the optimal models were a decision tree regression model in males and a stepwise multiple linear regression equation in females. Predicted ages of the test set were calculated separately using different models by sex. A total of 2600 patients (training and validation sets, mean age = 45.19 ± 14.20 [SD] years; test set, mean age = 46.57 ± 9.66 years) were evaluated in this study. In ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. On the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, far better than the corresponding MAEs of 8.90 and 6.42 for the manual method. These results show that the ResNeXt DL model outperformed the manual method in AAE based on CT reconstructions of the costal cartilage, and the developed system may be a supportive tool for AAE.
Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning
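The following is a minimal sketch, not the authors' code, of the preprocessing and model setup the abstract describes (random rotation and vertical flip, 224×224 resizing, a ResNeXt backbone with a single regression output trained against an L1/MAE objective); the rotation range, normalization statistics, and the use of torchvision ≥ 0.13 are assumptions:

```python
import torch.nn as nn
from torchvision import transforms, models

train_transform = transforms.Compose([
    transforms.RandomRotation(degrees=15),          # rotation range is an assumption
    transforms.RandomVerticalFlip(p=0.5),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics, assumed
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnext50_32x4d(weights=None)        # ResNeXt backbone (torchvision >= 0.13)
model.fc = nn.Linear(model.fc.in_features, 1)       # regress a single age value
criterion = nn.L1Loss()                             # L1 loss corresponds to the MAE metric
```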
1379 Fault Diagnosis and Fault-Tolerant Control of Bilinear-Systems: Application to Heating, Ventilation, and Air Conditioning Systems in Multi-Zone Buildings
Authors: Abderrhamane Jarou, Dominique Sauter, Christophe Aubrun
Abstract:
Over the past decade, the growing demand for energy efficiency in buildings has attracted the attention of the control community. Failures in HVAC (heating, ventilation, and air conditioning) systems in buildings can have a significant impact on the desired and expected energy performance of buildings, as well as on user comfort. Fault-Tolerant Control (FTC) is a recent technology area that studies the adaptation of control algorithms to faulty operating conditions of a system, and its application to HVAC systems has gained attention over the last two decades. The objective is to keep the variations in system performance due to faults within an acceptable range with respect to the desired nominal behavior. This paper considers the so-called active approach, which is based on a fault detection and identification scheme combined with a control reconfiguration algorithm that determines a new set of control parameters so that the reconfigured performance is "as close as possible," in some sense, to the nominal performance. Thermal models of buildings and their HVAC systems are described by nonlinear (usually bilinear) equations. Most of the work carried out so far in FDI (fault detection and isolation) or FTC considers a linearized model of the studied system. However, such a model is only valid over a reduced range of variation. This study presents a new fault diagnosis (FD) algorithm based on a bilinear observer for the detection and accurate estimation of the magnitude of an HVAC system failure. The main contribution of the proposed FD algorithm is that, instead of using specific linearized models, it inherits the structure of the actual bilinear model of the building thermal dynamics. As an immediate consequence, the algorithm is applicable to a wide range of unpredictable operating conditions, i.e., weather dynamics, outdoor air temperature, and zone occupancy profile. A bilinear fault detection observer is proposed for a bilinear system with unknown inputs. The residual vector in the observer design is decoupled from the unknown inputs and, under certain conditions, is made sensitive to all faults. Sufficient conditions are given for the existence of the observer, and results are given for the explicit computation of the observer design matrices. Dedicated observer schemes (DOS) are considered for sensor FDI, while unknown-input bilinear observers are considered for actuator or system-component FDI. The proposed FTC strategy works as follows: at the first level, FDI algorithms are implemented, which also makes it possible to estimate the magnitude of the fault. Once the fault is detected, the fault estimate is used to feed the second level and reconfigure the control law so that the expected performance is recovered. The paper is organized as follows. A general structure for fault-tolerant control of buildings is first presented, and the building model under consideration is introduced. Then, the observer-based design for fault diagnosis of bilinear systems is studied. The FTC approach is developed in Section IV. Finally, a simulation example is given in Section V to illustrate the proposed method.
Keywords: bilinear systems, fault diagnosis, fault-tolerant control, multi-zone buildings
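A schematic toy example of the residual-based detection idea described above, using a scalar discrete-time bilinear model x[k+1] = a·x + n·x·u + b·u; the coefficients, observer gain, threshold, and fault magnitude are arbitrary assumptions, not the paper's design:

```python
a, n, b = 0.95, -0.02, 0.05        # scalar bilinear thermal model (assumed coefficients)
l_gain, threshold = 0.5, 0.05      # observer gain and residual threshold (assumed)

x, x_hat = 20.0, 18.0              # plant state and observer estimate (e.g., zone temperature)
for k in range(60):
    u = 1.0                        # heating input, held constant for simplicity
    fault = 2.0 if k >= 30 else 0.0   # additive actuator fault injected from k = 30
    residual = x - x_hat           # measured output minus observer output
    if abs(residual) > threshold and k > 5:   # ignore the initial observer transient
        print(f"k={k}: fault flagged, residual={residual:.3f}")
        break
    # plant and observer updates share the bilinear structure
    x = a * x + n * x * u + b * (u + fault)
    x_hat = a * x_hat + n * x_hat * u + b * u + l_gain * residual
```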
1378 Rural to Urban Migration and Mental Health Consequences in Urbanizing China
Authors: Jie Li, Nick Manning
Abstract:
The mass rural-to-urban migration in China associated with urbanization has significant implications for public health, an important yet under-researched area. The urban social and built environment, including noise, air pollution, high population density, and social segregation, has the potential to contribute to mental illness. In China, rural-urban migrants also face institutional discrimination tied to the hukou (household registration) system, through which they are denied full citizenship rights to basic social welfare and services; this may elevate the stress of urban living and exacerbate the risk of mental illness. This paper aims to link sociospatial exclusion and everyday life experiences to their mental health consequences for rural-to-urban migrants living in the mega-city of Shanghai. More specifically, it asks what the daily experience of being a migrant in Shanghai is actually like, particularly regarding sources of stress from housing, displacement, service accessibility, and cultural conflict, and whether these stresses affect mental health. Secondary data from a literature review on migration, urban studies, and epidemiological research, as well as primary data from preliminary field-trip observations and interviews, are used in the analysis.
Keywords: migration, urbanisation, mental health, China
1377 Density Determination of Liquid Niobium by Means of Ohmic Pulse-Heating for Critical Point Estimation
Authors: Matthias Leitner, Gernot Pottlacher
Abstract:
Experimental determination of critical point data such as critical temperature, critical pressure, critical volume, and critical compressibility of high-melting metals like niobium is very rare due to the outstanding experimental difficulties in reaching the necessary extreme temperature and pressure regimes. Experimental techniques to achieve such extreme conditions include diamond anvil devices, two-stage gas guns, or metal samples hit by explosively accelerated flyers. Electrical pulse-heating under increased pressure is another choice. This technique heats thin wire samples of 0.5 mm diameter and 40 mm length from room temperature to melting and then further to the end of the stable phase, the spinodal line, within several microseconds. When crossing the spinodal line, the sample explodes and reaches the gaseous phase. In our laboratory, pulse-heating experiments can be performed under variation of the ambient pressure from 1 to 5000 bar and allow a direct determination of critical point data for low-melting, but not for high-melting, metals. However, the critical point can also be estimated by extrapolating the liquid-phase density according to theoretical models. A reasonable prerequisite for the extrapolation is the existence of data that cover as much of the liquid phase as possible and at the same time exhibit small uncertainties. Ohmic pulse-heating was therefore applied to determine the thermal volume expansion, and from that the density, of niobium over the entire liquid phase. As a first step, experiments under ambient pressure were performed; the second step will be to perform experiments under high-pressure conditions. During the heating process, shadow images of the expanding sample wire were captured at a frame rate of 4 × 10⁵ fps to monitor the radial expansion as a function of time. Simultaneously, the sample radiance was measured with a pyrometer operating at a mean effective wavelength of 652 nm. To increase the accuracy of the temperature deduction, the spectral emittance in the liquid phase is also taken into account. Due to the high heating rates of about 2 × 10⁸ K/s, longitudinal expansion of the wire is inhibited, which implies an increased radial expansion. As a consequence, measuring the temperature-dependent radial expansion is sufficient to deduce density as a function of temperature. This is accomplished by evaluating the full widths at half maximum of the cup-shaped intensity profiles calculated from each shadow image of the expanding wire. Relating these diameters to the diameter obtained before the start of pulse-heating, the temperature-dependent volume expansion is calculated. With the help of the known room-temperature density, the volume expansion is then converted into density data. The liquid density behavior obtained in this way is compared to existing literature data and provides another independent source of experimental data. In this work, the newly determined off-critical liquid-phase density was, in a second step, utilized as input data for the estimation of niobium's critical point. The approach used heuristically takes into account the crossover from mean-field to Ising behavior, as well as the non-linearity of the phase diagram's diameter.
Keywords: critical point data, density, liquid metals, niobium, ohmic pulse-heating, volume expansion
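A short sketch of the density evaluation step: since longitudinal expansion is inhibited, the volume scales with the square of the measured wire diameter, so ρ(T) = ρ₀·(d₀/d(T))². The diameters and temperatures below are illustrative placeholders, not the measured data:

```python
import numpy as np

rho_0 = 8570.0           # kg/m^3, room-temperature density of niobium
d_0 = 0.5e-3             # m, wire diameter before pulse-heating

# hypothetical FWHM diameters extracted from shadow images at increasing temperature
temperatures_K = np.array([2750, 3000, 3500, 4000, 4500])
diameters_m = np.array([0.520, 0.527, 0.540, 0.554, 0.569]) * 1e-3

density = rho_0 * (d_0 / diameters_m) ** 2   # radial expansion only -> quadratic scaling
for T, rho in zip(temperatures_K, density):
    print(f"T = {T:4d} K  ->  rho ~ {rho:7.1f} kg/m^3")
```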
1376 Determining the Effectiveness of Radiation Shielding and Safe Time for Radiation Worker by Employing Monitoring of Accumulation Dose in the Operator Room of CT Scan
Authors: Risalatul Latifah, Bunawas Bunawas, Lailatul Muqmiroh, Anggraini D. Sensusiati
Abstract:
Along with the increasing frequency of the use of CT scans for radiodiagnostic purposes, it is necessary to study radiation protection. This study examined aspects of radiation protection of workers by using thermoluminescent dosimeters (TLDs) to evaluate the radiation shielding and estimate the safe working time during CT scan examinations. Six TLDs were placed on the door, wall, and window, inside and outside of the CT scan room, for one month. TLD monitoring showed how much radiation reached the operator room. The results showed that the effective dose at the door, window, and wall was 0.04 mSv, 0.05 mSv, and 0.04 mSv, respectively. With these values, the effectiveness of the radiation shielding of the door, glass, and wall was evaluated as 90.6%, 95.5%, and 92.2%, respectively. By applying the dose constraint and the estimate of the accumulated dose for one month, radiation workers were still safe to perform irradiation for 180 patients.
Keywords: CT scan room, TLD, radiation worker, dose constraint
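An illustrative calculation in the spirit of the abstract, taking shielding effectiveness as (1 − dose outside / dose inside) × 100; the inside doses below are back-calculated assumptions chosen only to reproduce the reported percentages:

```python
# Paired monthly TLD readings in mSv; outside values are quoted in the abstract,
# inside values are assumed for demonstration.
inside_dose = {"door": 0.425, "window": 1.111, "wall": 0.513}
outside_dose = {"door": 0.04, "window": 0.05, "wall": 0.04}

for location in inside_dose:
    effectiveness = (1.0 - outside_dose[location] / inside_dose[location]) * 100.0
    print(f"{location}: shielding effectiveness ~ {effectiveness:.1f} %")
# -> door ~ 90.6 %, window ~ 95.5 %, wall ~ 92.2 %
```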
1375 Estimation of Tensile Strength for Granitic Rocks by Using Discrete Element Approach
Authors: Aliakbar Golshani, Armin Ramezanzad
Abstract:
Tensile strength, an important rock parameter for engineering applications, is difficult to measure directly through physical experiments (i.e., the uniaxial tensile test). Therefore, indirect experimental methods such as the Brazilian test have been adopted, and some relations have been proposed to obtain the tensile strength of rocks indirectly. In this research, Particle Flow Code in three dimensions (PFC3D) software was used to calculate the tensile strength of granitic rocks numerically. First, uniaxial compression tests were simulated and the tensile strength was determined for Inada granite (from a quarry in Kasama, Ibaraki, Japan). Then, by simulating Brazilian test conditions for Inada granite, the tensile strength was again calculated indirectly. Results show that the numerically calculated tensile strength agrees well with the experimental results obtained from uniaxial tensile tests on Inada granite samples.
Keywords: numerical simulation, particle flow code, PFC, tensile strength, Brazilian test
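For reference, the Brazilian (indirect) tensile strength is commonly evaluated from the peak load as σt = 2P/(πDt); a small sketch with placeholder load and specimen dimensions (not the simulated Inada granite values):

```python
import math

def brazilian_tensile_strength(peak_load_N: float, diameter_m: float, thickness_m: float) -> float:
    """Indirect tensile strength of a disc specimen from the Brazilian test."""
    return 2.0 * peak_load_N / (math.pi * diameter_m * thickness_m)

P = 28.0e3   # N, hypothetical peak load at failure
D = 0.05     # m, disc diameter
t = 0.025    # m, disc thickness
sigma_t = brazilian_tensile_strength(P, D, t)
print(f"indirect tensile strength ~ {sigma_t / 1e6:.1f} MPa")   # ~ 14.3 MPa
```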
1374 Sub-Pixel Level Classification Using Remote Sensing For Arecanut Crop
Authors: S. Athiralakshmi, B.E. Bhojaraja, U. Pruthviraj
Abstract:
In agriculture, remote sensing is applied to monitor plant development and to evaluate physiological processes and growth conditions. Especially valuable are the spatio-temporal aspects of remotely sensed data in detecting crop-state differences and stress situations. In this study, Hyperion imagery is used to classify arecanut crops based on their age, so that the resulting maps can be used for crop yield estimation, irrigation planning, fertilizer application, etc. Traditional hard classifiers assign mixed pixels to the dominant classes. The proposed method uses a sub-pixel-level classifier, linear spectral unmixing, available in the ENVI software. It provides the relative abundance of surface materials and the context within a pixel, which may be a potential solution to effectively identifying the land-cover distribution. Validation is done with reference to field spectra collected using a spectroradiometer and ground control points obtained from GPS.
Keywords: FLAASH, hyperspectral remote sensing, linear spectral unmixing, spectral angle mapper classifier
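A compact sketch of linear spectral unmixing posed as a constrained least-squares problem; this is an equivalent formulation for illustration, not ENVI's implementation, and the endmember spectra and mixed pixel are synthetic:

```python
import numpy as np
from scipy.optimize import nnls

# columns = endmember spectra (e.g., arecanut age classes, soil), rows = spectral bands
E = np.array([[0.10, 0.40, 0.25],
              [0.15, 0.45, 0.30],
              [0.55, 0.30, 0.35],
              [0.60, 0.25, 0.40]])
pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]   # synthetic mixed pixel

abundances, _ = nnls(E, pixel)      # non-negative abundance estimates
abundances /= abundances.sum()      # enforce sum-to-one (fully constrained case)
print(np.round(abundances, 3))      # -> approximately [0.6, 0.3, 0.1]
```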
1373 An Integrated DANP-PROMETHEE II Approach for Air Traffic Controllers’ Workload Stress Problem
Authors: Jennifer Loar, Jason Montefalcon, Kissy Mae Alimpangog, Miriam Bongo
Abstract:
The demanding professional roles that air traffic controllers (ATCs) play in air transport operations provided the main motivation for this paper. As controllers' workload stress becomes more complex due to various stressors, overcoming them in order to improve controller efficiency and aircraft safety levels has become a relevant challenge. Therefore, to determine the main stressors and identify the best alternative, two widely known multi-criteria decision-making (MCDM) methods, DANP and PROMETHEE II, are applied. The proposed method is demonstrated in a case study at Mactan, Civil Aviation Authority of the Philippines (CAAP). The results showed that the main stressors are high air traffic volume, extraneous traffic, unforeseen events, limitations and reliability of equipment, noise/distracters, microclimate, bad posture, relations with supervisors and colleagues, private life conditions/relationships, and emotional conditions. In the outranking of alternatives, compartmentalization was found to be the most preferred alternative for overcoming controllers' workload stress, implying that compartmentalization can best be applied to reduce controller workload stress.
Keywords: air traffic controller, DANP, MCDM, PROMETHEE II, workload stress
1372 Research and Application of the Three-Dimensional Visualization Geological Modeling of Mine
Authors: Bin Wang, Yong Xu, Honggang Qu, Rongmei Liu, Zhenji Gao
Abstract:
Today's mining industry is gradually advancing toward digitalization and visualization. Three-dimensional visual geological modeling of a mine is the digital characterization of a mineral deposit and is one of the key technologies of the digital mine. Three-dimensional geological modeling is a technology that combines geological spatial information management, geological interpretation, geological spatial analysis and prediction, geostatistical analysis, entity content analysis, and graphic visualization in a three-dimensional environment with computer technology, and it is used in geological analysis. In this paper, a three-dimensional geological model of an iron mine is constructed using Surpac, the weighting difference between the inverse distance power method and ordinary kriging is studied, and the ore body volume and reserves are simulated and calculated using these two methods. Compared with the actual mine reserves, the results are relatively accurate, providing a scientific basis for mine resource assessment, reserve calculation, mining design, and so on.
Keywords: three-dimensional geological modeling, geological database, geostatistics, block model
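A minimal sketch of block grade estimation by the inverse distance power method contrasted with kriging in the abstract; sample coordinates, grades, and the power exponent are illustrative assumptions:

```python
import numpy as np

def idw_estimate(block_center, sample_xyz, sample_grade, power=2.0):
    """Inverse-distance-weighted grade estimate for one block centroid."""
    d = np.linalg.norm(sample_xyz - block_center, axis=1)
    if np.any(d < 1e-9):                     # block coincides with a sample point
        return float(sample_grade[d.argmin()])
    w = 1.0 / d ** power
    return float(np.sum(w * sample_grade) / np.sum(w))

samples = np.array([[10.0, 5.0, -20.0],
                    [14.0, 9.0, -22.0],
                    [ 6.0, 8.0, -25.0]])     # drillhole composite coordinates (m)
grades = np.array([32.5, 28.1, 35.7])        # % Fe at each composite
block = np.array([11.0, 7.0, -21.0])         # block model centroid
print(f"estimated block grade ~ {idw_estimate(block, samples, grades):.2f} % Fe")
```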
1371 Motion-Based Detection and Tracking of Multiple Pedestrians
Authors: A. Harras, A. Tsuji, K. Terada
Abstract:
Tracking of moving people has gained great importance due to rapid technological advancements in the field of computer vision. The objective of this study is to design a motion-based method for detecting and tracking multiple pedestrians walking randomly in different directions. In the proposed method, a Gaussian mixture model (GMM) is used to determine moving persons in image sequences; it adapts to changes that take place in the scene, such as varying illumination and objects that frequently start and stop moving. Background noise in the scene is eliminated by applying morphological operations, and the motion of tracked people is determined using the Kalman filter. The Kalman filter is applied to predict the tracked location in each frame and to determine the likelihood of each detection. A benchmark data set from a stationary side-wall camera was used for the evaluation. The scenes from the data set were taken on a street with up to eight people in front of the camera in two different scenes, with durations of 53 and 35 seconds, respectively. In the case of pedestrians walking in close proximity, the proposed method achieved a detection ratio of 87% and a tracking ratio of 77%. When the pedestrians are farther apart, the detection ratio increases to 90% and the tracking ratio to 79%.
Keywords: automatic detection, tracking, pedestrians, counting
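A schematic of the detection stage described above using OpenCV 4's Gaussian-mixture background subtractor with morphological cleaning; the video path, subtractor parameters, and area threshold are placeholders, and the per-track Kalman predict/update step is only indicated in outline:

```python
import cv2

cap = cv2.VideoCapture("street_scene.mp4")          # hypothetical benchmark clip
backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                             detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = backsub.apply(frame)                               # GMM foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # suppress background noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)    # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    detections = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 400]
    # each detection would then be associated with a per-pedestrian Kalman filter:
    # predict the track position, gate candidates by likelihood, correct with the match
cap.release()
```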
1370 New Segmentation of Piecewise Linear Regression Models Using Reversible Jump MCMC Algorithm
Authors: Suparman
Abstract:
Piecewise linear regression models are very flexible models for modeling data. When piecewise linear regression models are fitted to data, the parameters are generally unknown. This paper studies the problem of parameter estimation for piecewise linear regression models. The method used to estimate the parameters is the Bayesian method, but the Bayes estimator cannot be found analytically. To overcome this problem, the reversible jump MCMC algorithm is proposed. The reversible jump MCMC algorithm generates a Markov chain that converges to the limiting distribution of the posterior distribution of the parameters of the piecewise linear regression model. The resulting Markov chain is used to calculate the Bayes estimator for the parameters of the piecewise linear regression model.
Keywords: regression, piecewise, Bayesian, reversible jump MCMC
1369 Sperm Flagellum Center-Line Tracing in 4D Stacks Using an Iterative Minimal Path Method
Authors: Paul Hernandez-Herrera, Fernando Montoya, Juan Manuel Rendon, Alberto Darszon, Gabriel Corkidi
Abstract:
Intracellular calcium ([Ca2+]i) regulates sperm motility. The analysis of [Ca2+]i has traditionally been carried out in two dimensions, while the real movement of the cell takes place in three spatial dimensions. Due to optical limitations (high-speed cell movement and low light emission), important data concerning the three-dimensional movement of these flagellated cells have been neglected. Visualizing [Ca2+]i in 3D is not a simple matter, since it requires complex fluorescence microscopy techniques in which the resulting images have very low intensity and consequently a low SNR (signal-to-noise ratio). In 4D sequences, this problem is magnified since the flagellum (for human sperm) oscillates at an average frequency of at least 15 Hz. In this paper, a novel approach to extracting the flagellum's center-line in 4D stacks is presented. For this purpose, an iterative algorithm based on the fast-marching method is proposed to extract the flagellum's center-line. Quantitative and qualitative results are presented for a 4D stack to demonstrate the ability of the proposed algorithm to trace the flagellum's center-line. The method reached a precision and recall of 0.96 compared with a semi-manual method.
Keywords: flagellum, minimal path, segmentation, sperm
1368 Digitalization in Aggregate Quarries
Authors: José Eugenio Ortiz, Pierre Plaza, Josefa Herrero, Iván Cabria, José Luis Blanco, Javier Gavilanes, José Ignacio Escavy, Ignacio López-Cilla, Virginia Yagüe, César Pérez, Silvia Rodríguez, Jorge Rico, Cecilia Serrano, Jesús Bernat
Abstract:
The development of artificial intelligence services in mining processes, specifically in aggregate quarries, is facilitating automation and improving numerous aspects of operations. Ultimately, AI is transforming the mining industry by improving efficiency, safety, and sustainability. With the ability to analyze large amounts of data and make autonomous decisions, AI offers great opportunities to optimize mining operations and maximize the economic and social benefits of this vital industry. Within the framework of the European DIGIECOQUARRY project, various services were developed for automatic material quality identification, production estimation, anomaly detection, and prediction of consumption and production, with good results.
Keywords: aggregates, artificial intelligence, automatization, mining operations
1367 Energy Analysis of Seasonal Air Conditioning Demand of All Income Classes Using Bottom up Model in Pakistan
Authors: Saba Arif, Anam Nadeem, Roman Kalvin, Tanzeel Rashid, Burhan Ali, Juntakan Taweekun
Abstract:
The energy crisis is currently attracting serious attention. Globally, industry and buildings are the major consumers of energy; residential houses, markets, and commercial buildings account for 72% of total global energy consumption. Among appliances, air conditioners are major consumers of electricity: about 60% of household energy is used for cooling by HVAC units. Energy demand estimation helps determine what changes will be needed, whether it is estimating the energy required by households or instituting conservation measures. The bottom-up model is one of the most widely used forecasting methods. In the current research, a bottom-up model of air conditioners' energy consumption across all income classes is calculated with respect to seasonal variation and hourly consumption. By comparing the air-conditioning energy consumption of all income classes, the total actual demand can be set against the current availability.
Keywords: air conditioning, bottom-up model, income classes, energy demand
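A toy bottom-up aggregation in the spirit of the abstract: per-income-class air-conditioner ownership, unit power, and seasonal usage hours are multiplied up and summed to a total; all figures are illustrative assumptions, not the study's survey data:

```python
classes = {
    # households, AC units per household, kW per unit, cooling hours per day (all assumed)
    "low":    dict(households=4_000_000, units=0.05, kw=1.0, hours=4),
    "middle": dict(households=2_500_000, units=0.60, kw=1.2, hours=6),
    "high":   dict(households=500_000, units=1.80, kw=1.5, hours=10),
}

season_days = 120                                # assumed length of the cooling season
total_gwh = 0.0
for name, c in classes.items():
    gwh = c["households"] * c["units"] * c["kw"] * c["hours"] * season_days / 1e6
    total_gwh += gwh
    print(f"{name:>6}: {gwh:8.1f} GWh per cooling season")
print(f" total: {total_gwh:8.1f} GWh per cooling season")
```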
1366 Perspectives of Renewable Energy in 21st Century in India: Statistics and Estimation
Authors: Manoj Kumar, Rajesh Kumar
Abstract:
With its favourable geographical conditions, the Indian subcontinent is well suited to flourishing renewable energy. An increasing dependence on coal and other conventional sources is driving the world toward pollution and the depletion of resources. This paper presents statistics on energy consumption and generation in the Indian subcontinent, which show energy demand increasingly surpassing energy generation. With the growth in energy demand, the use of coal has increased, since the major portion of energy production in India comes from thermal power plants. The increased use of thermal power plants causes pollution and depletion of reserves; hence, a paradigm shift to renewable sources is inevitable. In this work, the capacity and potential of renewable sources in India are analyzed, and based on this analysis, the future potential of these sources is estimated.
Keywords: depletion of reserves, energy consumption and generation, emissions, global warming, renewable sources
1365 Shaking Force Balancing of Mechanisms: An Overview
Authors: Vigen Arakelian
Abstract:
The balancing of mechanisms is a well-known problem in the field of mechanical engineering because variable dynamic loads cause vibration, as well as noise, wear, and fatigue of machines. A mechanical system with unbalanced shaking force and shaking moment transmits substantial vibration to the frame. Therefore, the objective of balancing is to cancel or reduce the variable dynamic reactions transmitted to the frame. Solving this problem consists in balancing the shaking force and shaking moment, which can be done fully or partially, either by internal mass redistribution through the addition of counterweights or by modification of the mechanism's architecture through the addition of auxiliary structures. Balancing problems are of continuing interest to researchers: several laboratories around the world are very active in this area, and new results are published regularly. Despite its long history, the balancing theory of mechanisms continues to be developed, and new approaches and solutions are constantly being reported. Various surveys have been published that disclose the particularities of balancing methods. The author believes that this is an appropriate moment to present a state of the art of shaking force balancing studies, complemented by new research results. This paper presents an overview of methods devoted to the shaking force balancing of mechanisms, as well as the historical aspects of the origins and evolution of the balancing theory of mechanisms.
Keywords: inertial forces, shaking forces, balancing, dynamics, mechanism design
1364 Electromagnetic Simulation Based on Drift and Diffusion Currents for Real-Time Systems
Authors: Alexander Norbach
Abstract:
This paper describes the use of an advanced simulation environment based on electronic systems (microcontrollers, operational amplifiers, and FPGAs). The simulation may be used for all dynamic systems, including those with diffusion and ionisation behaviour. With an additional observer structure, the system performs parallel real-time simulation based on a diffusion model and a state-space representation of the other dynamics. The proposed model may be used for electrodynamic effects, including ionising effects and eddy current distribution. With the script and proposed method, it is possible to calculate the spatial distribution of the electromagnetic fields in real time; the spatial temperature distribution may be obtained as well. With such a system, uncertainties, unknown initial states, and disturbances may be determined. This provides a more precise estimation of the system states of the required system and, additionally, an estimation of the ionising disturbances that occur due to radiation effects. The results have shown that a system can also be developed and adapted specifically for space systems, with real-time calculation of the radiation effects alone. Electronic systems can be damaged by impacts with charged-particle flux in space or in a radiation environment. In order to react to these processes, the presence of ionising radiation and the dose must be calculated within a short time. All available sensors should be used to observe the spatial distributions; from the measured values and the known locations of the sensors, the entire distribution can be calculated retroactively or more accurately. From this reconstruction, the type of ionisation and its direct effect on the systems can be identified, and protective measures can be activated, up to shutdown. The results show the possibility of performing faster and higher-quality simulations independently of the kind of system, for space systems and radiation environments alike. The paper additionally gives an overview of diffusion effects and their mechanisms. For the modelling and derivation of the equations, the extended current equation is used; the quantity K represents the proposed drifting charge-density vector. The extended diffusion equation was derived; it exhibits a quantising character and has a form similar to the Klein-Gordon equation. These kinds of PDEs (partial differential equations) are analytically solvable given initial distribution conditions (Cauchy problem) and boundary conditions (Dirichlet boundary condition). For a simpler structure, transfer functions for the B- and E-fields were calculated analytically. With the known discretised responses g₁(k·Ts) and g₂(k·Ts), the electric current or voltage may be calculated using a convolution, where g₁ is the direct function and g₂ is a recursive function. The analytical results are accurate enough for the calculation of fields with diffusion effects. Within the scope of this work, a model for the consideration of the electromagnetic diffusion effects of arbitrary current waveforms has been developed. The advantage of the proposed diffusion calculation is its real-time capability, which is not really possible with the FEM programs available today. In the further course of this research, it makes sense to use these methods and to investigate them thoroughly.
Keywords: advanced observer, electrodynamics, systems, diffusion, partial differential equations, solver
1363 A Multigrid Approach for Three-Dimensional Inverse Heat Conduction Problems
Authors: Jianhua Zhou, Yuwen Zhang
Abstract:
A two-step multigrid approach is proposed to solve the inverse heat conduction problem in a 3-D object under laser irradiation. In the first step, the location of the laser center is estimated using a coarse, uniform grid system. In the second step, the front-surface temperature is recovered with good accuracy using a multiple-grid system in which a fine mesh is used at the laser spot center to capture the drastic temperature rise in this region, while a coarse mesh is employed in the peripheral region to reduce the total number of sensors required. The effectiveness of the two-step approach and the multiple-grid system is demonstrated by illustrative inverse solutions. If the measurement data for the temperature and heat flux on the back surface do not contain random error, the proposed multigrid approach can yield more accurate inverse solutions. When the back-surface measurement data contain random noise, accurate inverse solutions cannot be obtained if both temperature and heat flux are measured on the back surface.
Keywords: conduction, inverse problems, conjugated gradient method, laser
1362 Estimation of Seismic Deformation Demands of Tall Buildings with Symmetric Setbacks
Authors: Amir Alirezaei, Shahram Vahdani
Abstract:
This study estimates the seismic demands of tall buildings with centrally symmetric setbacks using nonlinear time-history analysis. Three setback structures, all 60 stories high with setbacks at three levels, are used for the evaluation. The effects of the irregularities caused by the setbacks are evaluated by determining the global drift, story displacement, and story drift. Story displacement is modified by the roof displacement and the first-story displacement, and story drift is modified by the global drift. All results are calculated at the center of mass in the x and y directions, and the absolute values of these quantities are determined. The results show that increasing the vertical irregularity increases the global drift of the structure and enlarges the deformations over the height of the structure. It is also observed that the effect of geometric irregularity on the seismic deformations of setback structures is greater than that of mass irregularity.
Keywords: deformation demand, drift, setback, tall building
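For reference, the drift quantities discussed above can be computed as below; the story displacements and story height are illustrative values, not results of the reported analyses:

```python
import numpy as np

story_height = 3.2                                                       # m, assumed uniform
displacements = np.array([0.000, 0.012, 0.027, 0.045, 0.066, 0.090])     # m, base to roof

story_drift = np.diff(displacements) / story_height                      # inter-story drift ratio
global_drift = displacements[-1] / (story_height * (len(displacements) - 1))  # roof drift ratio

print("story drift ratios:", np.round(story_drift, 4))
print(f"global drift ratio: {global_drift:.4f}")
```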
1361 Prediction of Mechanical Strength of Multiscale Hybrid Reinforced Cementitious Composite
Authors: Salam Alrekabi, A. B. Cundy, Mohammed Haloob Al-Majidi
Abstract:
Novel multiscale hybrid reinforced cementitious composites based on carbon nanotubes (MHRCC-CNT) and carbon nanofibers (MHRCC-CNF) are new types of cement-based material fabricated with micro steel fibers and nanofilaments, featuring superior strain hardening, ductility, and energy absorption. This study focused on establishing models to predict the compressive strength and the direct and splitting tensile strengths of the produced cementitious composites. The analysis was carried out based on the experimental data presented in the authors' previous study, regression analysis, and established models available in the literature. The obtained models showed small differences between predictions and target values, and the experimental verification indicated that the mechanical properties could be estimated with good accuracy.
Keywords: multiscale hybrid reinforced cementitious composites, carbon nanotubes, carbon nanofibers, mechanical strength prediction
1360 Ex-Post Export Data for Differentiated Products Revealing the Existence of Product Cycles
Authors: Ranajoy Bhattcharyya
Abstract:
We estimate international product cycles as shifting product spaces by using 1976 to 2010 UN Comtrade data on all differentiated tradable products in all countries. We use a product space approach to identify the representative product baskets of high-, middle and low-income countries and then use these baskets to identify the patterns of change in comparative advantage of countries over time. We find evidence of a product cycle in two senses: First, high-, middle- and low-income countries differ in comparative advantage, and high-income products migrate to the middle-income basket. A similar pattern is observed for middle- and low-income countries. Our estimation of the lag shows that middle-income countries tend to quickly take up the products of high-income countries, but low-income countries take a longer time absorbing these products. Thus, the gap between low- and middle-income countries is considerably higher than that between middle- and high-income nations.Keywords: product cycle, comparative advantage, representative product basket, ex-post data
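For illustration, the revealed comparative advantage (Balassa) index that underlies product-space analyses of shifting export baskets can be computed as follows; the two-country, three-product export matrix is fabricated purely to show the computation, not Comtrade data:

```python
import numpy as np

exports = np.array([[120.0,  40.0,  10.0],     # country A's exports by product
                    [ 30.0,  60.0, 200.0]])    # country B's exports by product

country_totals = exports.sum(axis=1, keepdims=True)
world_by_product = exports.sum(axis=0, keepdims=True)
world_total = exports.sum()

# RCA_cp = (x_cp / X_c) / (x_wp / X_w); values > 1 mark a country's representative basket
rca = (exports / country_totals) / (world_by_product / world_total)
print(np.round(rca, 2))
```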
1359 Digital Transformation, Financing Microstructures, and Impact on Well-Being and Income Inequality
Authors: Koffi Sodokin
Abstract:
Financing microstructures are increasingly seen as a means of financial inclusion and of improving overall well-being in developing countries. In practice, the digital transformation of finance can accelerate the optimal functioning of financing microstructures, such as household access to microfinance and microinsurance. Broad household access to finance can lead to a reduction in income inequality and an overall improvement in well-being. This paper explores the impact of access to digital finance and financing microstructures on household well-being and the reduction of income inequality. To this end, we use propensity score matching, double differences, and smooth instrumental quantile regression as estimation methods, with two periods of survey data. The paper uses the FinScope consumer data (2016) and the Harmonized Living Standards Measurement Study (2018) from Togo in a comparative perspective. The results indicate that access to digital finance, as a cultural game changer, and to financing microstructures improves overall household well-being and contributes significantly to reducing income inequality.
Keywords: financing microstructure, microinsurance, microfinance, digital finance, well-being, income inequality
1358 Urea Amperometric Biosensor Based on Entrapment Immobilization of Urease onto a Nanostructured Polypyrrol and Multi-Walled Carbon Nanotube
Authors: Hamide Amani, Afshin FarahBakhsh, Iman Farahbakhsh
Abstract:
In this paper, an amperometric biosensor based on surface-modified polypyrrole (PPy) has been developed for the quantitative estimation of urea in aqueous solutions. Urease (Urs) was incorporated by entrapment into a bipolymeric substrate consisting of PPy, which acts as the amperometric transducer in these biosensors. To increase the membrane conductivity, multi-walled carbon nanotubes (MWCNTs) were added to the PPy solution. The MWCNT-entrapped PPy film and the bipolymer layers were prepared for the construction of the Pt/PPy/MWCNT/Urs electrode. Two different configurations of working electrodes were evaluated to investigate the potential use of the modified membranes in biosensors. The evaluation suggested that the second configuration, composed of an electrode-mediator-(pyrrole and multi-walled carbon nanotube) structure and the enzyme, is the best candidate for biosensor applications.
Keywords: urea biosensor, polypyrrole, multi-walled carbon nanotube, urease
1357 Reducing Weight and Fuel Consumption of Civil Aircraft by EML
Authors: Luca Bertola, Tom Cox, Pat Wheeler, Seamus Garvey, Herve Morvan
Abstract:
Electromagnetic launch systems have been proposed for military applications to accelerate jet aircraft on aircraft carriers. This paper proposes the implementation of similar technology to assist civil aircraft take-off, which can provide significant economic, environmental, and technical benefits. Assisted launch has the potential to reduce ground noise and emissions near airports and to improve overall aircraft efficiency by reducing engine thrust requirements. This paper presents a take-off performance analysis for an Airbus A320-200 taking off with and without the assistance of an electromagnetic catapult. Assisted take-off allows a significant reduction in take-off field length, giving more capacity within existing airport footprints and reducing the footprint required for new airports, which both reduces costs and increases the number of suitable sites. The electromagnetic catapult may also allow the installation of smaller engines with lower rated thrust; the consequent reductions in fuel consumption and operating cost are estimated. The potential to reduce aircraft operating costs and the required runway length makes the electromagnetic launch system an attractive solution to air traffic growth at busy airports.
Keywords: electromagnetic launch, fuel consumption, take-off analysis, weight reduction
1356 Vector Quantization Based on Vector Difference Scheme for Image Enhancement
Authors: Biji Jacob
Abstract:
The vector quantization algorithm, which uses minimum-distance calculations for codebook generation performed on each pixel value, is time consuming and leads to computational complexity. The codebook is updated by comparing the distance of each vector to its centroid vector as a measure of closeness. In this paper, vector quantization is modified based on a vector difference algorithm for image enhancement purposes. In the proposed scheme, the differences between vectors are considered as the new generation vectors, or new codebook vectors. The codebook is updated by comparing each newly generated vector against a threshold value, retaining the one with minimum error with respect to the parent vector; the minimum error decides the fitness of each newly generated vector. Thus the codebook is generated in an adaptive manner, and the fitness value is determined for the suppression of the degraded portions of the image, thereby leading to enhancement of the image through the adaptive searching capability of vector quantization via the vector difference algorithm. Experimental results show that the vector difference scheme efficiently modifies the vector quantization algorithm for enhancing images, with peak signal-to-noise ratio (PSNR), mean square error (MSE), and Euclidean distance (E_dist) as the performance parameters.
Keywords: codebook, image enhancement, vector difference, vector quantization
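A minimal sketch of conventional vector quantization (minimum-distance codeword assignment with centroid updates), i.e., the baseline that the vector-difference scheme modifies; it is not the proposed algorithm itself, and the data and codebook size are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
blocks = rng.random((500, 4))                 # image split into 2x2 blocks -> 4-vectors
codebook = blocks[rng.choice(len(blocks), 8, replace=False)].copy()

for _ in range(10):                           # LBG / k-means style refinement
    d = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
    nearest = d.argmin(axis=1)                # minimum-distance assignment
    for j in range(len(codebook)):
        members = blocks[nearest == j]
        if len(members):
            codebook[j] = members.mean(axis=0)   # centroid update

quantized = codebook[nearest]                 # reconstructed (quantized) blocks
mse = float(np.mean((blocks - quantized) ** 2))
print(f"codebook MSE ~ {mse:.4f}")
```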
1355 Flow Control Optimisation Using Vortex Generators in Turbine Blade
Authors: J. Karthik, G. Vinayagamurthy
Abstract:
Aerodynamic flow control is achieved by the interaction of the flowing medium with the corresponding structure so that its natural flow state is disturbed, delaying the transition point. This paper explains the aerodynamic effect and optimized design of vortex generators on a turbine blade to achieve maximum flow control. The airfoil is chosen from the NREL (National Renewable Energy Laboratory) S-series airfoils, as they are characterized by good lift characteristics and lower noise. The vortex generators chosen are ogival, rectangular, triangular, and tapered-fin shapes attached near the leading edge, and they are distributed from the primary section to the tip of the blade. The design wind speed is taken as 6 m/s and the computational analysis is executed. The blade surface is simulated using the k-ɛ SST model and the results are compared with XFOIL results. The computational results are validated by wind tunnel testing of the blade at the design speed. The effect of the vortex generators on the flow characteristics is studied from the analysis results, and by comparing the computational and test results for all vortex generator shapes, the optimized design for effective flow control over the blade is obtained.
Keywords: flow control, vortex generators, design optimisation, CFD
1354 Kinetic Parameter Estimation from Thermogravimetry and Microscale Combustion Calorimetry
Authors: Rhoda Afriyie Mensah, Lin Jiang, Solomon Asante-Okyere, Xu Qiang, Cong Jin
Abstract:
Flammability analysis of extruded polystyrene (XPS) has become crucial due to its use as an insulation material for energy-efficient buildings. Using the Kissinger-Akahira-Sunose and Flynn-Wall-Ozawa methods, the degradation kinetics of two pure XPS materials from the local market, a red and a grey one, were obtained from thermogravimetric analysis (TG) and microscale combustion calorimetry (MCC) experiments performed at the same heating rates. The experiments showed that the red XPS released more heat than the grey XPS and that both materials exhibited two mass-loss stages. Consequently, the kinetic parameters for the red XPS were higher than those for the grey XPS. A comparative evaluation of the activation energies from MCC and TG showed an insignificant degree of deviation, signifying equivalent apparent activation energies from both methods. However, different activation energy profiles, resulting from different chemical pathways, were observed when the dependencies of the activation energies on the extent of conversion for TG and MCC were compared.
Keywords: flammability, microscale combustion calorimetry, thermogravimetric analysis, thermal degradation, kinetic analysis
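A sketch of the Kissinger-Akahira-Sunose evaluation: at a fixed extent of conversion, ln(β/T²) is regressed against 1/T across the heating rates, and the apparent activation energy follows from the slope (Eₐ = −slope·R); the temperatures below are synthetic, chosen only to yield a plausible number:

```python
import numpy as np

R = 8.314                                   # J/(mol*K)
beta = np.array([5.0, 10.0, 20.0, 40.0])    # heating rates, K/min
T = np.array([655.0, 668.0, 681.0, 695.0])  # K, temperatures at the same conversion (synthetic)

y = np.log(beta / T**2)                     # KAS ordinate
x = 1.0 / T                                 # KAS abscissa
slope, intercept = np.polyfit(x, y, 1)
E_a = -slope * R / 1000.0                   # kJ/mol
print(f"apparent activation energy ~ {E_a:.0f} kJ/mol")
```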
1353 Sparse Unmixing of Hyperspectral Data by Exploiting Joint-Sparsity and Rank-Deficiency
Authors: Fanqiang Kong, Chending Bian
Abstract:
In this work, we exploit two assumed properties of the abundances of the observed signatures (endmembers) in order to reconstruct the abundances from hyperspectral data. The first property is joint sparsity, which assumes that adjacent pixels can be expressed as different linear combinations of the same materials. The second property is rank deficiency: the number of endmembers present in the hyperspectral data is very small compared with the dimensionality of the spectral library, which means that the abundance matrix of the endmembers is a low-rank matrix. These assumptions lead to an optimization problem for the sparse unmixing model that requires minimizing a combined l2,p-norm and nuclear norm. We propose a variable splitting and augmented Lagrangian algorithm to solve the optimization problem. Experimental evaluation carried out on synthetic and real hyperspectral data shows that the proposed method outperforms state-of-the-art algorithms with better spectral unmixing accuracy.
Keywords: hyperspectral unmixing, joint-sparse, low-rank representation, abundance estimation