Search results for: delays resulting from two separate causes at the same time
19173 Development of Long and Short Range Ordered Domains in a High Specific Strength Steel
Authors: Nikhil Kumar, Aparna Singh
Abstract:
Microstructural development during annealing at different temperatures has been examined in a high-aluminum, high-manganese lightweight steel. The FCC matrix of the manganese (Mn)-rich and nickel (Ni)-rich areas in the studied Fe-Mn-Al-Ni-C lightweight steel has been found to contain anti-phase domains. In the Mn-rich region, short-range ordering of domains, manifested by diffuse scattering in the electron diffraction patterns, was observed. Domains in the Ni-rich region were found to be arranged periodically, as validated through lattice imaging. The nature of these domains can be tuned with annealing temperature, resulting in a profound influence on the mechanical properties.
Keywords: anti-phase domain boundaries, BCC, FCC, lightweight steel
Procedia PDF Downloads 141
19172 On Musical Information Geometry with Applications to Sonified Image Analysis
Authors: Shannon Steinmetz, Ellen Gethner
Abstract:
In this paper, a theoretical foundation is developed for patterned segmentation of audio using the geometry of music and statistical manifolds. We demonstrate image content clustering using conic space sonification. The algorithm takes a geodesic curve as a model estimator of the three-parameter Gamma distribution. The random variable is parameterized by musical centricity and centric velocity. Model parameters predict audio segmentation in the form of duration and frame count based on the likelihood of musical geometry transition. We provide an example using a database of randomly selected images, resulting in statistically significant clusters of similar image content.
Keywords: sonification, musical information geometry, image content extraction, automated quantification, audio segmentation, pattern recognition
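For reference, the three-parameter Gamma density underlying the geodesic model estimator can be written as follows; the location-scale-shape notation is a standard convention assumed here, not taken from the paper:

$$f(x;\alpha,\beta,\gamma)=\frac{(x-\gamma)^{\alpha-1}}{\beta^{\alpha}\,\Gamma(\alpha)}\,e^{-(x-\gamma)/\beta},\qquad x>\gamma,\ \alpha,\beta>0.$$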
Procedia PDF Downloads 238
19171 Spline Solution of Singularly Perturbed Boundary Value Problems
Authors: Reza Mohammadi
Abstract:
Using quartic splines, we develop a method for the numerical solution of singularly perturbed two-point boundary-value problems. The proposed method is fourth-order accurate and applicable to both singular and non-singular cases. The convergence analysis of the method is given. The resulting linear system of equations has been solved using a tri-diagonal solver. We applied the presented method to test problems that have been solved by other existing methods in the references, in order to compare it with those methods. Numerical results are given to illustrate the efficiency of our method.
Keywords: second-order ordinary differential equation, singularly perturbed, quartic spline, convergence analysis
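Tri-diagonal systems of the kind produced by such spline discretizations are commonly solved with the Thomas algorithm; the sketch below is a generic illustration, since the abstract does not specify which tri-diagonal solver was implemented:

```python
import numpy as np

def thomas(a, b, c, d):
    # Solve a tri-diagonal system: a = sub-diagonal (a[0] unused),
    # b = diagonal, c = super-diagonal (c[-1] unused), d = right-hand side.
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):           # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```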
Procedia PDF Downloads 295
19170 Simulation and Experimental Research on Pocketing Operation for Toolpath Optimization in CNC Milling
Authors: Rakesh Prajapati, Purvik Patel, Avadhoot Rajurkar
Abstract:
Nowadays, manufacturing industries augment their production lines with modern machining centers backed by CAM software. Several attempts are being made to cut down the programming time for machining complex geometries. Special programs/software have been developed to generate the digital numerical data and to prepare NC programs using suitable post-processors for different machines. After selecting the tools and the manufacturing process, tool paths are applied and the NC program is generated. More and more complex mechanical parts that were earlier cast and assembled/manufactured by other processes are now being machined. The majority of these parts require numerous pocketing operations and find their applications in dies and molds, turbomachinery, aircraft, nuclear, defense, etc. Pocketing operations involve the removal of a large quantity of material from the metal surface. In this work, the warm casting and clamping of a food-processing part were modeled using Pro-E and MasterCAM® software. The pocketing operation was specifically chosen for toolpath optimization. After applying the pocketing toolpath, multi-tool selection, and air-time reduction, the resulting software simulation times and experimental machining times are compared.
Keywords: toolpath, part program, optimization, pocket
Procedia PDF Downloads 288
19169 Development of the Maturity Sensor Prototype and Method of Its Placement in the Structure
Authors: Yelbek B. Utepov, Assel S. Tulebekova, Alizhan B. Kazkeyev
Abstract:
Maturity sensors are used to determine concrete strength by a non-destructive method. The method of placement of the maturity sensors determines their number required for a certain frame of a monolithic building. Previous studies describe this aspect only weakly, giving merely logical assumptions. This paper proposes a cheap prototype of an embedded wireless sensor for monitoring concrete structures, as well as an alternative strategy for placing sensors based on the transitional boundaries of the temperature distribution of concrete curing. These boundaries were determined by building a heat map of the temperature distribution, where unknown values are calculated by the inverse distance weighting method. The developed prototype can simultaneously measure temperature and relative humidity over a smartphone-controlled time interval. It implements the maturity method to assess the in-situ strength of concrete, which is considered an alternative to the traditional shock impulse and compression testing methods used in Kazakhstan. The prototype was tested in laboratory and field conditions. The tests were aimed at studying the effect of internal and external temperature and relative humidity on concrete's strength gain. Based on an experimentally poured concrete slab with randomly integrated maturity sensors, it was determined that the transitional boundaries form elliptical shapes. The temperature distribution over the largest diameter of the ellipses was plotted, resulting in upright and inverted parabolas. As a result, the distance between the closest opposite crossing points of the parabolas is accepted as the maximum permissible step for setting the maturity sensors. The proposed placement strategy can be applied to sensors that measure various continuous phenomena, such as relative humidity. Prototype testing also revealed the inconvenience of Bluetooth due to its weak signal and the inability to access multiple prototypes simultaneously. For this reason, further prototype upgrades are planned in future work.
Keywords: heat map, placement strategy, temperature and relative humidity, wireless embedded sensor
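The inverse distance weighting used to fill the unknown heat-map values can be sketched as follows; the power parameter p = 2 is a common default and an assumption here, as the abstract does not state it:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, p=2.0):
    # Inverse-distance-weighted estimate of z at each query point,
    # from sensor coordinates xy_known (N, 2) and readings z_known (N,).
    z = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d == 0):                    # query coincides with a sensor
            z[i] = z_known[np.argmin(d)]
        else:
            w = 1.0 / d**p                    # nearer sensors weigh more
            z[i] = np.sum(w * z_known) / np.sum(w)
    return z
```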
Procedia PDF Downloads 178
19168 Industrial Wastewater from Paper Mills Used for Biofuel Production and Soil Improvement
Authors: Karin M. Granstrom
Abstract:
Paper mills produce wastewater with a high content of organic substances. Treatment usually consists of sedimentation, biological treatment in activated sludge basins, and chemical precipitation. The resulting sludges are currently a waste problem, deposited in landfills or used as low-grade fuels for incineration. There is a growing awareness of the need for energy efficiency and environmentally sound management of sludge. A resource-efficient method would be to digest the wastewater sludges anaerobically to produce biogas, refine the biogas to biomethane for use in the transportation sector, and utilize the resulting digestate for soil improvement. The biomethane yield of pulp and paper wastewater sludge is comparable to that of straw or manure. As a bonus, the digestate has improved dewaterability compared to the feedstock biosludge. Limitations of this process are predominantly its weak economic viability - necessitating both sufficiently large-scale paper production for the necessary large amounts of produced wastewater sludge, and the resolution of remaining questions on the certifiability of the digestate and thus its sales price. A way to improve the practical and economic feasibility of using paper mill wastewater for biomethane production and soil improvement is to co-digest it with other feedstocks. In this study, pulp and paper sludge were co-digested with (1) silage and manure, (2) municipal sewage sludge, (3) food waste, or (4) microalgae. Biomethane yield analysis was performed in 500 ml batch reactors, using an Automatic Methane Potential Test System at thermophilic temperature, with a 20-day test duration. The results show that (1) the harvesting season of grass silage and manure collection was an important factor for methane production, with spring feedstocks producing much more than autumn feedstock, and pulp mill sludge benefitting the most from co-digestion; (2) pulp and paper mill sludge is a suitable co-substrate to add when a high nitrogen content causes impaired biogas production due to ammonia inhibition; (3) the combination of food waste and paper sludge gave a higher methane yield than either of the substrates digested separately; (4) pure microalgae gave the highest methane yield. In conclusion, although pulp and paper mills are an almost untapped resource for biomethane production, their wastewater is a suitable feedstock for such a process. Furthermore, through co-digestion, the pulp and paper mill wastewater and mill sludges can aid biogas production from more nutrient-rich waste streams from other industries. Such co-digestion also enhances the soil improvement properties of the residual digestate.
Keywords: anaerobic, biogas, biomethane, paper, sludge, soil
Procedia PDF Downloads 259
19167 Assessing the Social Comfort of the Russian Population with Big Data
Authors: Marina Shakleina, Konstantin Shaklein, Stanislav Yakiro
Abstract:
The digitalization of modern human life over the last decade has facilitated the acquisition, storage, and processing of data, which are used to detect changes in consumer preferences and to improve the internal efficiency of the production process. This emerging trend has attracted academic interest in the use of big data in research. The study focuses on modeling the social comfort of the Russian population for the period 2010-2021 using big data. Big data provides enormous opportunities for understanding human interactions at the scale of society, with plenty of space and time dynamics. One of the most popular big data sources is Google Trends. The methodology for assessing social comfort using big data involves several steps: 1. 574 words were selected based on the Harvard IV-4 Dictionary, adjusted to fit the reality of everyday Russian life. The set of keywords was further cleansed by excluding queries consisting of verbs and words with several lexical meanings. 2. Search queries were processed to ensure comparability of results: transformation of the data to a 10-point scale, elimination of popularity peaks, detrending, and deseasoning. The proposed methodology for keyword search and Google Trends processing was implemented as a script in the Python programming language. 3. Block and summary integral indicators of social comfort were constructed using the first modified principal component, resulting in weighting coefficient values for the block components. According to the study, social comfort is described by 12 blocks: ‘health’, ‘education’, ‘social support’, ‘financial situation’, ‘employment’, ‘housing’, ‘ethical norms’, ‘security’, ‘political stability’, ‘leisure’, ‘environment’, ‘infrastructure’. According to the model, the summary integral indicator increased by 54% to 4.631 points; the average annual growth rate was 3.6%, which is higher than the rate of economic growth by 2.7 p.p. The value of the indicator describing social comfort in Russia is determined 26% by ‘social support’, 24% by ‘education’, 12% by ‘infrastructure’, 10% by ‘leisure’, and the remaining 28% by the other blocks. Among the 25% most popular searches, 85% are negative in nature and are mainly related to the blocks ‘security’, ‘political stability’, and ‘health’, for example, ‘crime rate’ and ‘vulnerability’. Among the 25% least popular queries, 99% were positive and mostly related to the blocks ‘ethical norms’, ‘education’, and ‘employment’, for example, ‘social package’ and ‘recycling’. In conclusion, the introduction of the latent category ‘social comfort’ into the scientific vocabulary deepens the theory of the quality of life of the population in terms of studying the involvement of an individual in society and expands the subjective aspect of the measurement of various indicators. The integral assessment of social comfort demonstrates the overall picture of the development of the phenomenon over time and space and quantitatively evaluates ongoing socio-economic policy. The application of big data in the assessment of latent categories gives stable results, which opens up possibilities for their practical implementation.
Keywords: big data, Google Trends, integral indicator, social comfort
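The query-processing pipeline described in step 2 might look roughly like the following Python sketch; the pytrends client, the rolling-mean detrending, and the monthly deseasoning are assumptions for illustration, as the authors' script is not published:

```python
import pandas as pd
from pytrends.request import TrendReq  # assumed client; the paper's script is not published

def comfort_series(keyword, geo="RU", timeframe="2010-01-01 2021-12-31"):
    pytrends = TrendReq(hl="ru-RU", tz=180)
    pytrends.build_payload([keyword], timeframe=timeframe, geo=geo)
    s = pytrends.interest_over_time()[keyword].astype(float)
    s = s.clip(upper=s.quantile(0.99))    # flatten popularity peaks
    s = s / 10.0                          # rescale 0-100 to a 10-point scale
    trend = s.rolling(12, center=True, min_periods=1).mean()
    detrended = s - trend                 # remove trend
    seasonal = detrended.groupby(detrended.index.month).transform("mean")
    return detrended - seasonal           # remove seasonality
```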
Procedia PDF Downloads 200
19166 Analysis of Brain Signals Using Neural Networks Optimized by Co-Evolution Algorithms
Authors: Zahra Abdolkarimi, Naser Zourikalatehsamad
Abstract:
Up to 40 years ago, after the recognition of epilepsy, it was generally believed that these attacks occurred randomly and suddenly. However, thanks to advances in mathematics and engineering, such attacks can now be predicted within a few minutes or hours. In this way, various algorithms for long-term prediction of the time and frequency of the first attack have been presented. In this paper, considering the nonlinear nature of brain signals and dynamically recorded brain signals, an ANFIS model is presented to predict the brain signals, since, according to the physiologic structure of the onset of attacks, more complex neural structures can better model the signal during attacks. The contribution of this work is the co-evolution algorithm for the optimization of the ANFIS network parameters. Our objective is to predict brain signals based on time series obtained from brain signals of people suffering from epilepsy using ANFIS. Results reveal that, compared to other methods, this method has less sensitivity to uncertainties, such as the presence of noise and interruptions in the recorded brain signals, as well as more accuracy. The long-term prediction capacity of the model illustrates the potential of implanted systems for medication warnings and seizure prevention.
Keywords: co-evolution algorithms, brain signals, time series, neural networks, ANFIS model, physiologic structure, time prediction, epilepsy
Procedia PDF Downloads 283
19165 FreGsd: A Framework for Global Software Requirement Engineering
Authors: Alsahli Abdulaziz Abdullah, Hameed Ullah Khan
Abstract:
Software development nowadays increasingly uses global ways of development instead of the normal development environment where development occurs in one location. This paper aims to propose a requirement engineering framework to support the Global Software Development environment with regard to all requirement engineering activities, from elicitation to finally managing requirement change. The global software environment is gaining an increasingly better reputation in software development, with better quality resulting from developing in this environment, yet at lower cost. However, the failure rate when developing in this environment is high, due to inappropriate requirement development and management. This paper will add to the software engineering development environments discipline, and many developers in GSD will benefit from it.
Keywords: global software development environment, GSD, requirement engineering, FreGsd, computer engineering
Procedia PDF Downloads 549
19164 Dynamic Analysis of Submerged Floating Tunnel Subjected to Hydrodynamic and Seismic Loadings
Authors: Naik Muhammad, Zahid Ullah, Dong-Ho Choi
Abstract:
The submerged floating tunnel (SFT) is a new solution for transportation infrastructure through sea straits, fjords, and inland waters, and can be a good alternative to long-span suspension bridges. An SFT is a massive cylindrical structure that floats at a certain depth below the water surface and is subjected to extreme environmental conditions. The identification of the dominant structural response of the SFT becomes more important due to the environmental conditions intended for the design of the SFT. The time-domain dynamic problem of an SFT moored by vertical and inclined mooring cables/anchors is formulated. The dynamic time history analysis of the SFT subjected to hydrodynamic and seismic excitations is performed. The SFT is modeled by 3D finite beam elements, and the mooring cables are modeled by truss elements. Based on the dynamic time history analysis, the displacements and internal forces of the SFT were calculated. The response of the SFT is presented for hydrodynamic and seismic excitations. The transverse internal forces of the SFT were the maximum compared to the vertical direction for both the hydrodynamic and seismic cases; this indicates that the cable system provides very small stiffness in the transverse direction as compared to the vertical direction of the SFT.
Keywords: submerged floating tunnel, hydrodynamic analysis, time history analysis, seismic response
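A dynamic time history analysis of this kind typically integrates the semi-discrete equation of motion assembled from the beam and truss elements; the generic notation below is assumed, not quoted from the paper:

$$\mathbf{M}\,\ddot{\mathbf{u}}(t)+\mathbf{C}\,\dot{\mathbf{u}}(t)+\mathbf{K}\,\mathbf{u}(t)=\mathbf{F}_{\text{hydro}}(t)+\mathbf{F}_{\text{seismic}}(t),$$

where M, C, and K are the mass, damping, and stiffness matrices, u(t) is the nodal displacement vector, and the right-hand side collects the hydrodynamic and seismic load vectors.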
Procedia PDF Downloads 329
19163 Kalman Filter Gain Elimination in Linear Estimation
Authors: Nicholas D. Assimakis
Abstract:
In linear estimation, the traditional Kalman filter uses the Kalman filter gain in order to produce estimates and predictions of the n-dimensional state vector using the m-dimensional measurement vector. The computation of the Kalman filter gain requires the inversion of an m x m matrix in every iteration. In this paper, a variation of the Kalman filter eliminating the Kalman filter gain is proposed. In the time-varying case, the elimination of the Kalman filter gain requires the inversion of an n x n matrix and the inversion of an m x m matrix in every iteration. In the time-invariant case, the elimination of the Kalman filter gain requires the inversion of an n x n matrix in every iteration. The proposed Kalman filter gain elimination algorithm may be faster than the conventional Kalman filter, depending on the model dimensions.
Keywords: discrete time, estimation, Kalman filter, Kalman filter gain
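For context, the m x m inversion that the proposed variation eliminates appears in the gain step of the conventional filter; a minimal sketch, with standard textbook notation assumed:

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    # One conventional Kalman filter iteration, showing the m x m
    # inversion that the proposed variation eliminates (sketch only).
    x = F @ x                          # predict state
    P = F @ P @ F.T + Q                # predict covariance
    S = H @ P @ H.T + R                # m x m innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain: the m x m inversion
    x = x + K @ (z - H @ x)            # update state with measurement z
    P = (np.eye(len(x)) - K @ H) @ P   # update covariance
    return x, P
```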
Procedia PDF Downloads 196
19162 Experimental Study on the Heat Transfer Characteristics of the 200W Class Woofer Speaker
Authors: Hyung-Jin Kim, Dae-Wan Kim, Moo-Yeon Lee
Abstract:
The objective of this study is to experimentally investigate the heat transfer characteristics of 200 W class woofer speaker units with input voice signals. The temperature and heat transfer characteristics of the 200 W class woofer speaker unit were experimentally tested with several input voice signals, namely 1500 Hz, 2500 Hz, and 5000 Hz. From the experiments, it can be observed that the temperature of the woofer speaker unit, including the voice-coil part, increases with a decrease in the input voice signal frequency. Also, the temperature difference at the measured points of the voice coil increases with a decrease in the input voice signal frequency. In addition, the heat transfer of the woofer speaker with the input voice signal of 1500 Hz is 40% higher than that with the input voice signal of 5000 Hz at the measuring time of 200 seconds. It can be concluded from the experiments that initially the temperature of the voice coil increases rapidly with time, and after a certain period of time, it increases exponentially. Also, during this time-dependent temperature change, it can be observed that the high-frequency voice signal is more stable than the low-frequency one.
Keywords: heat transfer, temperature, voice coil, woofer speaker
Procedia PDF Downloads 360
19161 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images
Authors: U. Datta
Abstract:
The main objective of this study is to find a suitable approach to monitor land infrastructure growth over a period of time using multispectral satellite images. A bi-temporal change detection method is unable to indicate the continuous change occurring over a long period of time. To achieve this objective, the approach used here estimates a statistical model from a series of multispectral image data over a long period of time, assuming there is no considerable change during that time period, and then compares it with the multispectral image data obtained at a later time. The change is estimated pixel-wise. A statistical composite hypothesis technique is used for estimating pixel-based change detection in a defined region. The generalized likelihood ratio test (GLRT) is used to detect the changed pixels from the probabilistic estimated model of the corresponding pixels. A changed pixel is detected assuming that the images have been co-registered prior to estimation. To minimize error due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. The multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are different challenges in this method. The first and foremost challenge is to obtain a sufficiently large number of datasets for multivariate distribution modelling. A large number of images are always discarded due to cloud coverage. Due to imperfect modelling, there will be a high probability of false alarms. The overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given some promising results, which need to be pursued further.
Keywords: co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection
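A pixel-wise test of this kind can be sketched as below, assuming a Gaussian model per pixel as an illustrative simplification of the paper's composite-hypothesis formulation:

```python
import numpy as np

def glrt_change_map(history, new_img, threshold):
    # history: (T, H, W, B) stack of co-registered multispectral images
    # assumed change-free; new_img: (H, W, B) later acquisition.
    T, H, W, B = history.shape
    mu = history.mean(axis=0)                    # per-pixel mean spectrum
    change = np.zeros((H, W), dtype=bool)
    for i in range(H):
        for j in range(W):
            cov = np.cov(history[:, i, j, :], rowvar=False)
            cov += 1e-6 * np.eye(B)              # regularize the estimate
            d = new_img[i, j] - mu[i, j]
            stat = d @ np.linalg.solve(cov, d)   # Mahalanobis-type statistic
            change[i, j] = stat > threshold      # flag pixel as changed
    return change
```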
Procedia PDF Downloads 135
19160 A Heart Arrhythmia Prediction Using Machine Learning’s Classification Approach and the Concept of Data Mining
Authors: Roshani S. Golhar, Neerajkumar S. Sathawane, Snehal Dongre
Abstract:
Background and objectives: Cardiovascular illnesses are increasing and have become a leading cause of mortality worldwide, killing a large number of people each year. Arrhythmia is a type of cardiac illness characterized by a change in the regularity of the heartbeat. The goal of this study is to develop novel deep learning algorithms for successfully interpreting arrhythmia using a single one-second segment. Because the ECG signal indicates unique electrical heart activity across time, considerable changes between time intervals are detected. Such variances, as well as the limited number of learning data available for each arrhythmia, make standard learning methods difficult and so impede their generalization. Conclusions: The proposed method was able to outperform several state-of-the-art methods. The proposed technique is also an effective and convenient approach to deep learning for heartbeat interpretation that could probably be used in real-time healthcare monitoring systems.
Keywords: electrocardiogram, ECG classification, neural networks, convolutional neural networks, portable document format
Procedia PDF Downloads 69
19159 Interbrain Synchronization and Multilayer Hyperbrain Networks when Playing Guitar in Quartet
Authors: Viktor Müller, Ulman Lindenberger
Abstract:
Neurophysiological evidence suggests that the physiological states of the system are characterized by specific network structures and network topology dynamics, demonstrating a robust interplay between network topology and function. It is also evident that interpersonal action coordination or social interaction (e.g., playing music in duets or groups) requires strong intra- and interbrain synchronization, resulting in a specific hyperbrain network activity across two or more brains that supports such coordination or interaction. Such complex hyperbrain networks can be described as multiplex or multilayer networks that have a specific multidimensional or multilayer network organization characteristic of superordinate systems and their constituents. The aim of the study was to describe the multilayer hyperbrain networks and synchronization patterns of guitarists playing guitar in a quartet by using electroencephalography (EEG) hyperscanning (simultaneous EEG recording from multiple brains), followed by time-frequency decomposition and multilayer network construction, where within-frequency coupling (WFC) represents communication within different layers, and cross-frequency coupling (CFC) depicts communication between these layers. Results indicate that communication or coupling dynamics, both within and between the layers across the brains of the guitarists, play an essential role in action coordination and are particularly enhanced during periods of high demands on musical coordination. Moreover, the multilayer hyperbrain network topology and the dynamical structure of guitar sounds showed specific guitar-guitar, brain-brain, and guitar-brain causal associations, indicating multilevel dynamics with upward and downward causation, contributing to the superordinate system dynamics and hyperbrain functioning. It is concluded that the neuronal dynamics during interpersonal interaction are brain-wide and frequency-specific, with a fine-tuned balance between WFC and CFC, and can best be described in terms of multilayer multi-brain networks with specific network topology and connectivity strengths. Further sophisticated research is needed to deepen our understanding of these highly interesting and complex phenomena.
Keywords: EEG hyperscanning, intra- and interbrain coupling, multilayer hyperbrain networks, social interaction, within- and cross-frequency coupling
Procedia PDF Downloads 72
19158 In vitro Effects of Salvia officinalis on Bovine Spermatozoa
Authors: Eva Tvrdá, Boris Botman, Marek Halenár, Tomáš Slanina, Norbert Lukáč
Abstract:
In vitro storage and processing of animal semen represents a risk factor to spermatozoa vitality, potentially leading to reduced fertility. A variety of substances isolated from natural sources may exhibit protective or antioxidant properties on the spermatozoon, thus extending the lifespan of stored ejaculates. This study compared the effects of different concentrations of Salvia officinalis extract on the motility, mitochondrial activity, viability, and reactive oxygen species (ROS) production of bovine spermatozoa during different time periods (0, 2, 6 and 24 h) of in vitro culture. Spermatozoa motility was assessed using the Computer-Assisted Sperm Analysis (CASA) system. Cell viability was examined using the metabolic activity MTT assay, the eosin-nigrosin staining technique was used to evaluate sperm viability, and ROS generation was quantified using luminometry. The CASA analysis revealed that the motility in the experimental groups supplemented with 0.5-2 µg/mL Salvia extract was significantly lower in comparison with the control (P<0.05; Time 24 h). At the same time, long-term exposure of spermatozoa to concentrations ranging between 0.05 µg/mL and 2 µg/mL had a negative impact on the mitochondrial metabolism (P<0.05; Time 24 h). The viability staining revealed that 0.001-1 µg/mL Salvia extract had no effects on bovine male gametes; however, 2 µg/mL Salvia had a persistent negative effect on spermatozoa (P<0.05). Furthermore, 0.05-2 µg/mL Salvia exhibited an immediate ROS-promoting effect on the sperm culture (P>0.05; Time 0 h and 2 h), which remained significant throughout the entire in vitro culture (P<0.05; Time 24 h). Our results point to the necessity of examining the specific effects the biomolecules present in Salvia officinalis may have, individually or collectively, on in vitro sperm vitality and oxidative profile.
Keywords: bulls, CASA, MTT test, reactive oxygen species, sage, Salvia officinalis, spermatozoa
Procedia PDF Downloads 338
19157 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method
Authors: Jurriaan Gillissen
Abstract:
This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As an effect, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as is feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost-functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0, using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As an effect, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence
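The cost functional with the proposed regularizer can be sketched as follows; the L2 norms and the exact regularizer form are assumptions, since the text specifies only a term proportional to a high-order Laplacian of u0:

$$J(\mathbf{u}_0)=\tfrac{1}{2}\,\|\mathbf{u}_1(\mathbf{u}_0)-\mathbf{v}_1\|_2^2+\lambda\,\|\nabla^{2k}\mathbf{u}_0\|_2^2,$$

where u1(u0) is obtained by forward NSE integration, λ is the regularization weight, and the gradient ∂J/∂u0 is evaluated with the adjoint NSE.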
Procedia PDF Downloads 224
19156 Verification of Sr-90 Determination in Water and Spruce Needles Samples Using IAEA-TEL-2016-04 ALMERA Proficiency Test Samples
Authors: S. Visetpotjanakit, N. Nakkaew
Abstract:
Determination of 90Sr in environmental samples has been widely developed with several radioanalytical methods and radiation measurement techniques, since 90Sr is one of the most hazardous radionuclides produced from nuclear reactors. A liquid extraction technique using di-(2-ethylhexyl) phosphoric acid (HDEHP) to separate and purify 90Y, and Cherenkov counting using a liquid scintillation counter to determine 90Y in secular equilibrium with 90Sr, was developed and performed at our institute, the Office of Atoms for Peace. The approach is inexpensive, non-laborious, and fast for analysing 90Sr in environmental samples. To validate our analytical performance against the accuracy and precision criteria, determination of 90Sr using the IAEA-TEL-2016-04 ALMERA proficiency test samples was performed for statistical evaluation. The experiment used two spiked tap water samples and one naturally contaminated spruce needles sample from Austria, collected shortly after the Chernobyl accident. Results showed that all three analyses successfully passed both the accuracy and precision criteria, obtaining “Accepted” statuses. The two water samples obtained measured results of 15.54 Bq/kg and 19.76 Bq/kg, with relative biases of 5.68% and -3.63% against the Maximum Acceptable Relative Bias (MARB) of 15% and 20%, respectively. The spruce needles sample obtained a measured result of 21.04 Bq/kg, with a relative bias of 23.78% against the MARB of 30%. These results confirm our analytical performance in 90Sr determination in water and spruce needles samples using the same developed method.
Keywords: ALMERA proficiency test, Cerenkov counting, determination of 90Sr, environmental samples
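For reference, the relative bias quoted above follows the usual proficiency-test convention (assumed here, as the abstract does not spell it out):

$$\text{relative bias}=\frac{A_{\text{measured}}-A_{\text{target}}}{A_{\text{target}}}\times 100\%,$$

with a result "Accepted" for accuracy when the absolute relative bias does not exceed the MARB.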
Procedia PDF Downloads 232
19155 Application of Seasonal Autoregressive Integrated Moving Average Model for Forecasting Monthly Flows in Waterval River, South Africa
Authors: Kassahun Birhanu Tadesse, Megersa Olumana Dinka
Abstract:
Reliable future river flow information is basic for the planning and management of any river system. For a data-scarce river system having only river flow records, like the Waterval River, univariate time series models are appropriate for river flow forecasting. In this study, a univariate Seasonal Autoregressive Integrated Moving Average (SARIMA) model was applied for forecasting Waterval River flow using the GRETL statistical software. Mean monthly river flows from 1960 to 2016 were used for modeling. Different unit root tests and Mann-Kendall trend analysis were performed to test the stationarity of the observed flow time series. The time series was differenced to remove the seasonality. Using the correlogram of the seasonally differenced time series, different SARIMA models were identified, their parameters were estimated, and diagnostic check-up of model forecasts was performed using white noise and heteroscedasticity tests. Finally, based on the minimum Akaike Information (AIc) and Hannan-Quinn (HQc) criteria, SARIMA (3, 0, 2) x (3, 1, 3)12 was selected as the best model for Waterval River flow forecasting. Therefore, this model can be used to generate future river flow information for water resources development and management in the Waterval River system. The SARIMA model can also be used for forecasting other similar univariate time series with seasonality characteristics.
Keywords: heteroscedasticity, stationarity test, trend analysis, validation, white noise
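The selected specification can be reproduced in a few lines; the paper used GRETL, so the statsmodels sketch below, with a hypothetical input file, is an assumed equivalent rather than the authors' code:

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Fit the selected SARIMA(3,0,2)x(3,1,3)12 on a monthly flow series;
# "waterval_monthly.csv" is a hypothetical file name.
flows = pd.read_csv("waterval_monthly.csv",
                    index_col=0, parse_dates=True).squeeze()

model = SARIMAX(flows, order=(3, 0, 2), seasonal_order=(3, 1, 3, 12))
result = model.fit(disp=False)
print(result.aic)                       # compare candidate models by AIC
forecast = result.get_forecast(steps=24).predicted_mean  # 2-year outlook
```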
Procedia PDF Downloads 205
19154 Study of the Removal Efficiency of Azo-Dyes Using Xanthan as Sequestering Agent
Authors: Cedillo Ortiz Cesar Isaac, Marañón-Ruiz Virginia-Francisca, Lozano-Alvarez Juan Antonio, Jáuregui-Rincón Juan, Roger Chiu Zarate
Abstract:
Introduction: The contamination of water with azo-dyes is a worldwide problem, as wastewater treated in a municipal sewage system still contains a considerable amount of dyes. At present, there are different processes, denominated tertiary methods, by which it is possible to lower the concentration of the dye. One of these methods is adsorption onto various materials, which can be organic or inorganic. Xanthan is a biomaterial used as a removal agent to decrease the dye content in aqueous solution. The Zimm-Bragg model described the experimental isotherms obtained when this biopolymer was used in the removal of textile dyes. Nevertheless, it was not established whether a correlation between dye structure and removal efficiency exists. In this sense, the principal objective of this report is to propose a qualitative relationship between the structure of three azo-dyes (Congo Red (CR), Methyl Red (MR), and Methyl Orange (MO)) and their removal efficiency from the aqueous environment when xanthan is used as dye-sequestering agent. Methods: The dyes were subjected to different pH and ionic strength values to obtain the conditions of maximum dye removal. Afterward, these conditions were used to perform the adsorption isotherms, as reported in the previous study by our group. The Zimm-Bragg model was used to describe the experimental data, and the parameters of nucleation (Ku) and cooperativity (U) were obtained by optimization using the R statistical software. The spectra from UV-Visible (aqueous solution), infrared absorption, and Raman spectroscopies (dry samples) were obtained from the biopolymer-dye complex. Results: The removal percentages with xanthan for each dye are as follows: for CR, 99.98% at pH 12 and ionic strength 10.12; for MR, 84.79% at pH 9.5 and ionic strength 43; and finally, for MO, 30% at pH 4 and ionic strength 72. It can be seen that when xanthan is used to remove the dyes, a lower dependence between structure and removal efficiency exists. This may be due to the different tendency of each dye to form aggregates. This aggregation capacity, and the charge of each dye resulting from the pH and ionic strength values of the aqueous solutions, are key factors in the dye removal. The experimental isotherm of MR was the only one adequately described by the Zimm-Bragg model. Because CR had 100% removal, it was very difficult to obtain the experimental isotherm; finally, MO had fluctuating results, and it was therefore impossible to get accurate data. Conclusions: The study of the removal of three dyes with xanthan as dye-sequestering agent suggests that the aggregation capacity of the dyes and the charge resulting from structural characteristics, such as molecular weight and functional groups, have a relationship with the removal efficiency. Acknowledgements: We gratefully acknowledge support for this project by Consejo Nacional de Ciencia y Tecnología, México (CONACyT, Grant No. 632694).
Keywords: adsorption, azo dyes, xanthan gum, Zimm Bragg theory
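A common closed form of the Zimm-Bragg cooperative binding isotherm for polymer-dye systems is the Satake-Yang equation; whether the authors fitted exactly this form, and this mapping of their nucleation parameter Ku and cooperativity U, is an assumption:

$$\theta=\frac{1}{2}\left[1+\frac{K_u U c-1}{\sqrt{\left(1-K_u U c\right)^{2}+4K_u c}}\right],$$

where θ is the fraction of occupied binding sites on the biopolymer and c is the free dye concentration.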
Procedia PDF Downloads 280
19153 Systems for Air Renewal Inside Bus Bodies: Importance in the Prevention of Disease Transmission
Authors: Giovanni Matheus Rech, Gilberto Zan, Filipe P. Aguiar
Abstract:
The current pandemic scenario raises questions that many times would previously have gone unnoticed. One of these issues is the quality of the air we breathe in the most diverse environments in which we find ourselves every day. It is plausible to suppose that, at times like this, there is apprehension regarding the possibility of contamination by pathological agents such as viruses and bacteria through the airways. However, the renewal of indoor air, combined with a properly sanitized air conditioning system, is an important tool for the prevention of viral diseases, as is the case with COVID-19. The bus is an example of an environment where renewal is applied to improve the quality of indoor air, helping to reduce the possibility of spreading pathological agents. Together with other measures, such as alcohol gel dispensers, curtains to separate the passengers, more frequent cleaning of the environment, and mandatory use of masks, it helps to reduce the transmission of pathologies such as COVID-19. Knowing the reality of a large part of the population regarding the need for public transport, there are standards and devices dedicated to promoting air quality, ensuring greater comfort and safety for users. This paper presents such standards and recommendations to improve the quality of indoor air, as well as the equipment responsible for the renewal of the air in a bus body. Experimental measurement of the flow rates of the renewal devices present in the bus body allows quantifying the average volume of external air admitted into each type of body. In this way, it was possible to compare, in terms of airflow per person, the values of a bus in relation to a series of other environments, whose recommendations for air renewal are described in the Brazilian standard ABNT NBR 16401.
Keywords: air quality, air renewal, buses, Covid-19
Procedia PDF Downloads 151
19152 Some Integral Inequalities of Hermite-Hadamard Type on Time Scale and Their Applications
Authors: Artion Kashuri, Rozana Liko
Abstract:
In this paper, the authors establish an integral identity using delta differentiable functions. By applying this identity, some new results via a general class of convex functions with respect to two nonnegative functions on a time scale are given. Also, for suitable choices of the nonnegative functions, some special cases are deduced. Finally, in order to illustrate the efficiency of our main results, some applications to special means are obtained as well. We hope that the current work using our idea and technique will attract the attention of researchers working in mathematical analysis, mathematical inequalities, numerical analysis, special functions, fractional calculus, quantum mechanics, quantum calculus, physics, probability and statistics, differential and difference equations, optimization theory, and other related fields in pure and applied sciences.
Keywords: convex functions, Hermite-Hadamard inequality, special means, time scale
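For context, the classical Hermite-Hadamard inequality that such time-scale results generalize states that, for a convex function f on [a, b],

$$f\!\left(\frac{a+b}{2}\right)\le\frac{1}{b-a}\int_a^b f(x)\,dx\le\frac{f(a)+f(b)}{2}.$$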
Procedia PDF Downloads 150
19151 Electrochemical Studies of Si-, Si-Ge-, and Ge-Air Batteries
Authors: R. C. Sharma, Rishabh Bansal, Prajwal Menon, Manoj K. Sharma
Abstract:
The silicon-air battery is highly promising for electric vehicles due to its high theoretical energy density (8470 Wh kg⁻¹), and its discharge products are non-toxic. For the first time, pure silicon and germanium powders are used as anode materials. Nickel wire meshes embedded with charcoal and manganese dioxide powder serve as the cathode, and concentrated potassium hydroxide is used as the electrolyte. Voltage-time curves are presented in this study for pure silicon and germanium powders and for 5% and 10% germanium with silicon powder. The silicon powder cell assembly gives a stable voltage of 0.88 V for ~20 minutes, the Si-Ge cells provide a cell voltage of 0.80-0.76 V for ~10-12 minutes, and the pure germanium cell provides a cell voltage of 0.80-0.76 V for ~30 minutes. The cell voltage is higher for a concentrated (10%) sodium hydroxide solution (1.08 V), and it is stable for ~40 minutes. A sharp decrease in cell voltage beyond 40 min may be due to rapid corrosion.
Keywords: Silicon-air battery, Germanium-air battery, voltage-time curve, open circuit voltage, Anodic corrosion
Procedia PDF Downloads 238
19150 Time Series Simulation by Conditional Generative Adversarial Net
Authors: Rao Fu, Jie Chen, Shutian Zeng, Yiping Zhuang, Agus Sudjianto
Abstract:
Generative Adversarial Net (GAN) has proved to be a powerful machine learning tool in image data analysis and generation. In this paper, we propose to use Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN has the capability to learn different types of normal and heavy-tailed distributions, as well as dependent structures of different time series. It also has the capability to generate conditional predictive distributions consistent with training data distributions. We also provide an in-depth discussion on the rationale behind GAN and the neural networks as hierarchical splines to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of the market risk factors. We present a real data analysis including a backtesting to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis to calculate VaR. CGAN can also be applied in economic time series modeling and forecasting. In this regard, we have included an example of hypothetical shock analysis for economic models and the generation of potential CCAR scenarios by CGAN at the end of the paper.
Keywords: conditional generative adversarial net, market and credit risk management, neural network, time series
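A minimal conditional GAN skeleton of the kind described, conditioning both networks by concatenation, is sketched below; the layer sizes and architecture are illustrative assumptions, not the paper's network:

```python
import torch
import torch.nn as nn

# Conditional GAN sketch for 1-D time-series windows; sizes are assumed.
class Generator(nn.Module):
    def __init__(self, noise_dim=32, cond_dim=8, seq_len=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, 128), nn.ReLU(),
            nn.Linear(128, seq_len))

    def forward(self, z, cond):
        # condition the generator by concatenating noise and condition
        return self.net(torch.cat([z, cond], dim=1))

class Discriminator(nn.Module):
    def __init__(self, cond_dim=8, seq_len=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(seq_len + cond_dim, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1))                  # real/fake logit

    def forward(self, x, cond):
        return self.net(torch.cat([x, cond], dim=1))
```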
Procedia PDF Downloads 143
19149 Drug and Poison Information Centers: An Emergent Need of Health Care Professionals in Pakistan
Authors: Asif Khaliq, Sayeeda A. Sayed
Abstract:
Drug information centers provide drug-related information to requesters, who include physicians, pharmacists, nurses, and other allied health care professionals. The International Pharmaceutical Federation (FIP) describes the basic functions of a drug and poison information center as drug evaluation, therapeutic counseling, pharmaceutical advice, research, pharmacovigilance, and toxicology. Continuous advancement in the field of medicine has expanded the medical literature, which has increased the demand for drug and poison information centers for the guidance, support, and facilitation of physicians. The objective of the study is to determine the need for drug and poison information centers in public and private hospitals of Karachi, Pakistan. A cross-sectional study was conducted from July 2013 to April 2014 using a self-administered, multi-item questionnaire. Non-probability convenience sampling was used to select the study participants. A total of 307 physicians from public and private hospitals of Karachi participated in the study. The need for a 24/7 drug and poison information center was highlighted by 92% of physicians, and 67% of physicians suggested opening a drug information center at their hospital. It was reported that 70% of physicians take at least 15 minutes to search for information about a drug while managing a case. Regarding the management of poisoning cases, 52% of physicians complained about the unavailability of medicines in hospitals and mentioned the importance of medicines for the safe and timely management of patients. Although 73% of physicians attended continued medical education (CME) sessions, 92% of physicians insisted on the need for a 24/7 drug and poison information center. The scarcity of organized channels for obtaining information about drugs and poisons is one of the most crucial problems for healthcare workers in Pakistan. The drug and poison information center is an advisory body that assists health care professionals and patients in the provision of appropriate drug and hazardous substance information. A drug and poison information center is one of the integral needs for running an effective health care system. Provision of 24/7 drug and poison information centers with specialized staff offers multiple benefits to hospitals, reducing treatment delays, addressing awareness gaps of all stakeholders, and ensuring the provision of quality health care.
Keywords: drug and poison information centers, Pakistan, physicians, public and private hospitals
Procedia PDF Downloads 328
19148 An Exploratory Factor and Cluster Analysis of the Willingness to Pay for Last Mile Delivery
Authors: Maximilian Engelhardt, Stephan Seeck
Abstract:
The COVID-19 pandemic is accelerating the already growing field of e-commerce. The resulting urban freight transport volume leads to traffic and negative environmental impact. Furthermore, the service level of parcel logistics service providers is lagging far behind the expectations of consumers. These challenges can be solved by radically reorganizing the urban last mile distribution structure: parcels could be consolidated in a micro hub within the inner city and delivered within time windows by cargo bike. This approach leads to a significant improvement of consumer satisfaction with their overall delivery experience. However, this approach also leads to significantly increased costs per parcel. While there is a relevant share of online shoppers that are willing to pay for such a delivery service, there are no deeper insights about this target group available in the literature. Being aware of the importance of knowing target groups for businesses, the aim of this paper is to elaborate the most important factors that determine the willingness to pay for sustainable and service-oriented parcel delivery (factor analysis) and to derive customer segments (cluster analysis). In order to answer those questions, a data set is analyzed using quantitative methods of multivariate statistics. The data set was generated via an online survey in September and October 2020 within the five largest cities in Germany (n = 1,071). The data set contains socio-demographic, living-related, and value-related variables, e.g., age, income, city, living situation, and willingness to pay. In prior work of the authors, the data was analyzed applying descriptive and inferential statistical methods that only provided limited insights regarding the above-mentioned research questions. The exploratory analysis using factor and cluster analysis promises deeper insights into the relevant influencing factors and segments for user behavior of the mentioned parcel delivery concept. The analysis model is built and implemented with the help of the statistical software language R. The data analysis is currently being performed and will be completed in December 2021. It is expected that the results will show the most relevant factors determining user behavior for sustainable and service-oriented parcel deliveries (e.g., age, current service experience, willingness to pay) and give deeper insights into the characteristics of the segments that are more or less willing to pay for a better parcel delivery service. Based on the expected results, relevant implications and conclusions can be derived for startups that are about to change the way parcels are delivered: more customer-oriented through time-window delivery and parcel consolidation, and more environmentally friendly through cargo bikes. The results will give detailed insights regarding their target groups of parcel recipients. Further research can be conducted by exploring alternative revenue models (beyond the parcel recipient) that could compensate for the additional costs, e.g., online shops that increase their service level or municipalities that reduce traffic on their streets.
Keywords: customer segmentation, e-commerce, last mile delivery, parcel service, urban logistics, willingness-to-pay
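The factor-plus-cluster pipeline might look like the following; the authors implemented theirs in R, so this Python sketch with hypothetical column names is an assumed equivalent for illustration:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

# Hypothetical survey file and column names, for illustration only.
survey = pd.read_csv("survey_2020.csv")
X = StandardScaler().fit_transform(survey[["age", "income",
                                           "willingness_to_pay",
                                           "service_experience"]])
# Reduce correlated survey items to latent factors, then segment.
factors = FactorAnalysis(n_components=3, random_state=0).fit_transform(X)
segments = KMeans(n_clusters=4, random_state=0, n_init=10).fit_predict(factors)
survey["segment"] = segments            # derived customer segments
```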
Procedia PDF Downloads 108
19147 Measurement of Solids Concentration in Hydrocyclone Using ERT: Validation Against CFD
Authors: Vakamalla Teja Reddy, Narasimha Mangadoddy
Abstract:
Hydrocyclones are used to separate particles into different size fractions in the mineral processing, chemical, and metallurgical industries. High-speed video imaging, Laser Doppler Anemometry (LDA), and X-ray and gamma ray tomography have previously been used to measure the two-phase flow characteristics in the cyclone. However, investigation of the solids flow characteristics inside the cyclone is often impeded by the nature of the process, due to slurry opaqueness and solid metal wall vessels. In this work, a dual-plane high-speed electrical resistance tomography (ERT) system is used to measure hydrocyclone internal flow dynamics in situ. Experiments are carried out in a 3-inch hydrocyclone for feed solids concentrations varying in the range of 0-50%. The ERT data analysis, through optimized FEM mesh size and reconstruction algorithms applied to the air-core and solids concentration tomograms, is assessed. Results are presented in terms of the air-core diameter and solids volume fraction contours, obtained using Maxwell's equation, for various hydrocyclone operational parameters. It is confirmed by ERT that the area occupied by the air core and the wall solids conductivity levels decrease with increasing feed solids concentration. An algebraic slip mixture based multi-phase computational fluid dynamics (CFD) model is used to predict the air-core size and the solids concentrations in the hydrocyclone. Validation of the air-core size and mean solids volume fractions from ERT measurements against the CFD simulations is attempted.
Keywords: air-core, electrical resistance tomography, hydrocyclone, multi-phase CFD
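The conversion of conductivity tomograms to solids fraction via Maxwell's equation can be sketched, for non-conducting solids in a conducting liquid, as (the exact form used in the paper is assumed):

$$\phi=\frac{2\,(\sigma_l-\sigma_{mc})}{2\,\sigma_l+\sigma_{mc}},$$

where φ is the solids volume fraction, σl the conductivity of the continuous liquid, and σmc the measured mixture conductivity.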
Procedia PDF Downloads 379
19146 Optimal Sequential Scheduling of Imperfect Maintenance Last Policy for a System Subject to Shocks
Authors: Yen-Luan Chen
Abstract:
Maintenance has a great impact on the capacity of production and on the quality of the products, and therefore, it deserves continuous improvement. A maintenance procedure done before a failure is called preventive maintenance (PM). Sequential PM, which specifies that a system should be maintained at a sequence of intervals with unequal lengths, is one of the commonly used PM policies. This article proposes a generalized sequential PM policy for a system subject to shocks with imperfect maintenance and random working time. The shocks arrive according to a non-homogeneous Poisson process (NHPP) with varied intensity function in each maintenance interval. As a shock occurs, the system suffers two types of failures with number-dependent probabilities: type-I (minor) failure, which is rectified by a minimal repair, and type-II (catastrophic) failure, which is removed by a corrective maintenance (CM). The imperfect maintenance is carried out to improve the system failure characteristic due to the altered shock process. The sequential preventive maintenance-last (PML) policy is defined such that the system is maintained before any CM occurs, either at a planned time Ti or at the completion of a working time in the i-th maintenance interval, whichever occurs last. At the N-th maintenance, the system is replaced rather than maintained. This article is the first to take up the sequential PML policy with random working time and imperfect maintenance in reliability engineering. The optimal preventive maintenance schedule that minimizes the mean cost rate of a replacement cycle is derived analytically and determined in terms of its existence and uniqueness. The proposed models provide a general framework for analyzing maintenance policies in reliability theory.
Keywords: optimization, preventive maintenance, random working time, minimal repair, replacement, reliability
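The mean cost rate minimized in such models typically follows the renewal-reward theorem; the generic form below is a standard convention, not the paper's exact expression:

$$C(T_1,\dots,T_N)=\frac{\mathbb{E}[\text{cost incurred over a replacement cycle}]}{\mathbb{E}[\text{length of a replacement cycle}]},$$

minimized over the schedule T1, ..., TN and the replacement index N.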
Procedia PDF Downloads 275
19145 Distributed Cost-Based Scheduling in Cloud Computing Environment
Authors: Rupali, Anil Kumar Jaiswal
Abstract:
Cloud computing can be defined as one of the prominent technologies that lets a user change, configure, and access services online. It can be said that this is a prototype of computing that helps in saving cost and time for a user. Practically, the use of cloud computing can be found in various fields like education, health, banking, etc. Cloud computing is an internet-dependent technology; thus, it is the major responsibility of Cloud Service Providers (CSPs) to take care of the data stored by users at data centers. Scheduling in the cloud computing environment plays a vital role, as, to achieve maximum utilization and user satisfaction, cloud providers need to schedule resources effectively. Job scheduling for cloud computing is analyzed in the following work. CloudSim 3.0.3 is utilized to implement the task computation and the distributed scheduling methods. This research work discusses job scheduling for the distributed processing environment, and by exploring this issue, we find that it works with minimum time and lower cost. In this work, two load balancing techniques have been employed, 'Throttled stack adjustment policy' and 'Active VM load balancing policy', with two brokerage services, 'Advanced Response Time' and 'Reconfigure Dynamically', to evaluate the VM_Cost, DC_Cost, Response Time, and Data Processing Time. The proposed techniques are compared with the Round Robin scheduling policy.
Keywords: physical machines, virtual machines, support for repetition, self-healing, highly scalable programming model
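As a point of reference, the Round Robin baseline simply cycles tasks over the available virtual machines; a minimal sketch (illustrative Python, not CloudSim code):

```python
from itertools import cycle

def round_robin(cloudlets, vms):
    # Assign each cloudlet (task) to the next VM in a fixed rotation.
    assignment = {}
    vm_cycle = cycle(vms)
    for task in cloudlets:
        assignment[task] = next(vm_cycle)
    return assignment

# e.g. round_robin(["t1", "t2", "t3", "t4", "t5"], ["vm0", "vm1"])
# -> t1->vm0, t2->vm1, t3->vm0, t4->vm1, t5->vm0
```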
Procedia PDF Downloads 168
19144 A Non-Linear Damage Model for the Annulus of the Intervertebral Disc under Cyclic Loading, Including Recovery
Authors: Shruti Motiwale, Xianlin Zhou, Reuben H. Kraft
Abstract:
Military and sports personnel are often required to wear heavy helmets for extended periods of time. This leads to excessive cyclic loads on the neck and an increased chance of injury. Computational models offer one approach to understand and predict the time progression of disc degeneration under severe cyclic loading. In this paper, we have applied an analytic non-linear damage evolution model to estimate damage evolution in an intervertebral disc due to cyclic loads over decade-long time periods. We have also proposed a novel strategy for the inclusion of recovery in the damage model. Our results show that damage grows by only 20% in the initial 75% of the life, growing exponentially in the remaining 25% of the life. The analysis also shows that it is crucial to include recovery in a damage model.
Keywords: cervical spine, computational biomechanics, damage evolution, intervertebral disc, continuum damage mechanics
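A continuum-damage evolution law with recovery of the kind applied here can be sketched generically as follows; the specific growth and recovery terms are assumptions for illustration, not the authors' exact model:

$$\frac{dD}{dN}=f(D,\sigma)-r(D),\qquad 0\le D\le 1,$$

where D is the scalar damage variable, N the cycle count, f a non-linear, load-dependent growth rate (consistent with the reported slow-then-exponential growth), and r a recovery rate active during rest periods.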
Procedia PDF Downloads 568