Search results for: time series fractal analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 40640

40040 Performance of Partially Covered N Number of Photovoltaic Thermal (PVT) - Compound Parabolic Concentrator (CPC) Series Connected Water Heating System

Authors: Rohit Tripathi, Sumit Tiwari, G. N. Tiwari

Abstract:

In the present study, an approach is adopted in which a photovoltaic thermal flat plate collector is integrated with a compound parabolic concentrator. An analytical expression for the temperature-dependent electrical efficiency of N partially covered Photovoltaic Thermal (PVT) - Compound Parabolic Concentrator (CPC) water collectors connected in series has been derived with the help of basic thermal energy balance equations. The analysis has been carried out for winter weather conditions in Delhi, India. The energy and exergy performance of the N partially covered PVT-CPC water collector system has been compared for two cases: (i) 25% of the water collector area covered by PV modules, and (ii) 75% of the water collector area covered by PV modules. It is observed that case (i) is best suited for thermal performance, and case (ii) for electrical energy as well as overall exergy.
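
As a rough illustration of the kind of temperature-dependent electrical efficiency relation that such energy-balance models build on, the sketch below evaluates the widely used linear relation eta = eta_ref(1 - beta_ref(T_cell - T_ref)); the parameter values are illustrative assumptions, not the expression derived in the paper.

```python
# Standard linear temperature-dependent PV electrical efficiency relation,
# often used as the starting point of PVT energy-balance models.
# All numerical values below are illustrative assumptions, not the paper's data.

def electrical_efficiency(t_cell_c, eta_ref=0.12, beta_ref=0.0045, t_ref_c=25.0):
    """eta = eta_ref * (1 - beta_ref * (T_cell - T_ref))."""
    return eta_ref * (1.0 - beta_ref * (t_cell_c - t_ref_c))

# Example: efficiency drop as the cell warms from 25 C to 60 C.
for t_cell in (25.0, 40.0, 60.0):
    print(f"T_cell = {t_cell:5.1f} C  ->  eta = {electrical_efficiency(t_cell):.4f}")
```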

Keywords: compound parabolic concentrator, energy, photovoltaic thermal, temperature dependent electrical efficiency

Procedia PDF Downloads 401
40039 Roasting Degree of Cocoa Beans by Artificial Neural Network (ANN) Based Electronic Nose System and Gas Chromatography (GC)

Authors: Juzhong Tan, William Kerr

Abstract:

Roasting is a critical procedure in chocolate processing, in which special flavors are developed, moisture content is decreased, and better processing properties are obtained. Determination of the roasting degree of cocoa beans is therefore important for chocolate manufacturers to ensure the quality of chocolate products, and it also determines the commercial value of cocoa beans collected from cocoa farmers. Assessment of the roasting degree currently relies on human specialists, who are sometimes biased, and on chemical analysis, which takes a long time and is inaccessible to many manufacturers and farmers. In this study, a self-made electronic nose system consisting of gas sensors (TGS 800 and 2000 series) was used to detect the gases generated by cocoa beans with different roasting degrees (0 min, 20 min, 30 min, and 40 min), and the signals collected by the gas sensors were used to train a three-layer ANN. Chemical analysis of the graded beans was performed with a traditional GC-MS system, and the contents of volatile chemical compounds were used to train another ANN as a reference to the ANN trained on electronic nose signals. Both trained ANNs were used to predict the roasting degree of cocoa beans for validation. The best grading accuracy achieved by the ANN trained on electronic nose signals (using signals from TGS 813, 826, 820, 880, 830, 2620, 2602, and 2610) was 96.7%, whereas the GC-trained ANN achieved an accuracy of 83.8%.
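
A minimal sketch of the grading step described above, assuming a small sensor array and placeholder data: a three-layer ANN (one hidden layer) is trained on electronic-nose features to predict the roasting class. The sensor count, sample sizes, and data are assumptions, not the study's measurements.

```python
# Three-layer ANN (input -> hidden -> output) grading roasting degree from
# gas-sensor features. Data and dimensions are placeholder assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_sensors = 200, 8                      # e.g. eight TGS-series sensors
X = rng.normal(size=(n_samples, n_sensors))        # placeholder sensor responses
y = rng.integers(0, 4, size=n_samples)             # roasting classes: 0/20/30/40 min

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
print("grading accuracy:", ann.score(X_te, y_te))
```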

Keywords: artificial neural network, cocoa bean, electronic nose, roasting

Procedia PDF Downloads 230
40038 Analysis and Modeling of Vibratory Signals Based on LMD for Rolling Bearing Fault Diagnosis

Authors: Toufik Bensana, Slimane Mekhilef, Kamel Tadjine

Abstract:

The use of vibration analysis has been established as the most common and reliable method in the field of condition monitoring and diagnostics of rotating machinery. Rolling bearings are used in a broad range of rotary machines and play a crucial role in the modern manufacturing industry. Unfortunately, the vibration signals collected from a faulty bearing are generally non-stationary and nonlinear, with strong noise interference, so it is essential to extract the fault features correctly. In this paper, a novel numerical analysis method based on local mean decomposition (LMD) is proposed. LMD decomposes the signal into a series of product functions (PFs), each of which is the product of an envelope signal and a purely frequency-modulated (FM) signal. The envelope of a PF is the instantaneous amplitude (IA), and the derivative of the unwrapped phase of the purely frequency-modulated signal is the instantaneous frequency (IF). The fault characteristic frequency of the rolling bearing can then be extracted by performing spectrum analysis on the instantaneous amplitude of the PF component containing the dominant fault information. The results show the effectiveness of the proposed technique in fault detection and diagnosis of rolling element bearings.
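
A simplified stand-in for the final diagnostic step described above: instead of a full LMD implementation, the sketch below takes the instantaneous amplitude (envelope) of a synthetic fault signal via the Hilbert transform and inspects its spectrum for the fault characteristic frequency. All signal parameters are assumptions.

```python
# Envelope-spectrum sketch: periodic fault impulses excite a decaying resonance;
# the spectrum of the instantaneous amplitude reveals the fault frequency.
# (A Hilbert envelope is used here as a simplified stand-in for the IA of a PF.)
import numpy as np
from scipy.signal import hilbert

fs = 12_000                        # sampling rate [Hz] (assumed)
t = np.arange(0, 1.0, 1 / fs)
f_fault, f_res = 105.0, 3_000.0    # assumed fault and structural resonance frequencies

impulses = (np.sin(2 * np.pi * f_fault * t) > 0.999).astype(float)
ringing = np.exp(-t[:300] * 800.0) * np.sin(2 * np.pi * f_res * t[:300])
signal = np.convolve(impulses, ringing, mode="same")
signal += 0.1 * np.random.default_rng(0).normal(size=t.size)

envelope = np.abs(hilbert(signal))                       # instantaneous amplitude
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print("dominant envelope frequency [Hz]:", round(float(freqs[spectrum.argmax()]), 1))
```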

Keywords: fault diagnosis, local mean decomposition, rolling element bearing, vibration analysis

Procedia PDF Downloads 400
40037 A Data-Mining Model for Protection of FACTS-Based Transmission Line

Authors: Ashok Kalagura

Abstract:

This paper presents a data-mining model for fault-zone identification of a flexible AC transmission system (FACTS)-based transmission line, including a thyristor-controlled series compensator (TCSC) and a unified power-flow controller (UPFC), using ensemble decision trees. Given the randomness in the ensemble of decision trees stacked inside the random forests (RF) model, it provides an effective decision on the fault-zone identification. Half-cycle post-fault current and voltage samples from the fault inception are used as an input vector against a binary target output that distinguishes faults occurring after the TCSC/UPFC from faults occurring before it. The algorithm is tested on simulated fault data with wide variations in the operating parameters of the power system network, including a noisy environment, providing a reliability measure of 99% with a fast response time (three-quarters of a cycle from fault inception). The results of the presented approach using the RF model indicate reliable identification of the fault zone in FACTS-based transmission lines.
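
A minimal sketch of the classification step, assuming placeholder feature vectors: a random forest is trained on half-cycle post-fault voltage/current samples to label the fault zone. The channel count, sample sizes, and data are assumptions.

```python
# Random-forest fault-zone classifier: half-cycle post-fault samples in,
# before/after-device label out. Dimensions and data are placeholder assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_cases, samples_per_half_cycle, n_channels = 1000, 32, 6   # 3 voltage + 3 current channels
X = rng.normal(size=(n_cases, samples_per_half_cycle * n_channels))
y = rng.integers(0, 2, size=n_cases)     # 0: fault before the TCSC/UPFC, 1: fault after it

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
print("fault-zone identification accuracy:", rf.score(X_te, y_te))
```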

Keywords: distance relaying, fault-zone identification, random forests (RFs), support vector machine (SVM), thyristor-controlled series compensator (TCSC), unified power-flow controller (UPFC)

Procedia PDF Downloads 420
40036 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit transforms time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in “weight space”, where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences: two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and those from the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB, and 30 dB. The algorithm has a baseline classification accuracy of ~93%, averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains at ~93% at 0 dB SNR.
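
A compact sketch of the matching-pursuit step described above, under the assumption of a small Gabor-like dictionary and a synthetic signal: the signal is greedily decomposed into a sparse set of (atom index, amplitude weight) pairs, i.e. the weight-space representation. Dictionary size, atom parameters, and the signal are assumptions.

```python
# Matching pursuit over a small Gabor-like dictionary: greedy selection of the
# best-correlated atom, subtraction from the residual, repeat.
import numpy as np

def gabor_atom(n, center, freq, width):
    t = np.arange(n)
    g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
    return g / np.linalg.norm(g)

n = 256
dictionary = np.array([gabor_atom(n, c, f, 12.0)
                       for c in range(0, n, 16)
                       for f in np.linspace(0.01, 0.4, 16)])   # 256 atoms

rng = np.random.default_rng(2)
signal = dictionary[40] * 3.0 + dictionary[150] * 1.5 + 0.05 * rng.normal(size=n)

residual, atoms = signal.copy(), []
for _ in range(5):                                   # 5 matching-pursuit iterations
    scores = dictionary @ residual                   # correlations with all atoms
    k = int(np.abs(scores).argmax())
    atoms.append((k, scores[k]))                     # (atom index, amplitude weight)
    residual = residual - scores[k] * dictionary[k]

print("selected (index, weight) pairs:", [(k, round(w, 2)) for k, w in atoms])
```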

Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder

Procedia PDF Downloads 285
40035 Glacier Dynamics and Mass Fluctuations in Western Himalayas: A Comparative Analysis of Pir-Panjal and Greater Himalayan Ranges in Jhelum Basin, India

Authors: Syed Towseef Ahmad, Fatima Amin, Pritha Acharya, Anil K. Gupta, Pervez Ahmad

Abstract:

Glaciers, being sentinels of climate change, are the most visible evidence of global warming. Given the unavailability of observed field-based data, this study has focussed on the use of geospatial techniques to obtain information about the glaciers of the Pir-Panjal (PPJ) and Great Himalayan (GHR) regions of the Jhelum Basin. These glaciers need to be monitored in line with variations in climatic conditions because they contribute significantly to various sectors in the region. The main aim of this study is to map the glaciers in the two adjacent regions (PPJ and GHR) in the north-western Himalayas, which have different topographies, and to compare the changes in various glacial attributes over the period 1990-2020. During the last three decades, the PPJ and GHR regions have experienced deglaciation of around 36 and 26 percent, respectively. The mean elevation of GHR glaciers has increased from 4312 to 4390 masl, while that of PPJ glaciers has increased from 4085 to 4124 masl during the observation period. Using the accumulation area ratio (AAR) method, mean mass balances of -34.52 and -37.6 cm w.e. were recorded for the glaciers of GHR and PPJ, respectively. The difference in areal and mass loss of glaciers in these regions may be due to (i) the smaller size of PPJ glaciers, which are all smaller than 1 km² and are thus more responsive to climate change, (ii) the higher mean elevation of GHR glaciers, and (iii) local variations in climatic variables in these glaciated regions. Time series analysis of climate variables indicates that both the mean maximum and minimum temperatures of the Qazigund station (Tmax = 19.2, Tmin = 6.4) are comparatively higher than those of the Pahalgam station (Tmax = 18.8, Tmin = 3.2). Except for precipitation at Qazigund (slope = -0.3 mm a⁻¹), each climatic parameter has shown an increasing trend during these three decades, and with slopes of 0.04 and 0.03 °C a⁻¹, the positive trends in Tmin (Pahalgam) and Tmax (Qazigund) are observed to be statistically significant (p ≤ 0.05).
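
A small sketch of the trend test behind the last sentence, assuming a synthetic annual temperature series: a linear trend is fitted and its p-value checked against the 0.05 level. The series and the assumed slope are placeholders, not the station records.

```python
# Linear trend and significance test for an annual temperature series.
import numpy as np
from scipy.stats import linregress

years = np.arange(1990, 2021)
rng = np.random.default_rng(3)
tmin = 3.0 + 0.04 * (years - years[0]) + rng.normal(scale=0.3, size=years.size)  # synthetic Tmin

fit = linregress(years, tmin)
print(f"slope = {fit.slope:+.3f} degC/yr, p-value = {fit.pvalue:.4f}, "
      f"significant = {fit.pvalue <= 0.05}")
```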

Keywords: glaciers, climate change, Pir-Panjal, greater Himalayas, mass balance

Procedia PDF Downloads 77
40034 Numerical Methods versus Bjerksund and Stensland Approximations for American Options Pricing

Authors: Marasovic Branka, Aljinovic Zdravka, Poklepovic Tea

Abstract:

Numerical methods such as binomial and trinomial trees and finite difference methods can be used to price a wide range of option contracts for which no analytical solutions are known. American options are the best-known options of that kind. Besides numerical methods, American options can be valued with approximation formulas, such as the Bjerksund-Stensland formulas from 1993 and 2002. When the value of an American option is approximated by the Bjerksund-Stensland formulas, the computer time spent on the calculation is very short. The computer time spent using numerical methods can vary from less than one second to several minutes or even hours. However, to enable a comparative analysis of numerical methods and the Bjerksund-Stensland formulas, we limit the computation time of the numerical methods to less than one second. Therefore, we ask the question: which method will be most accurate at nearly the same computation time?
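
A minimal sketch of the lattice side of the comparison, assuming illustrative contract and market parameters: a Cox-Ross-Rubinstein binomial tree prices an American put by backward induction with an early-exercise check at every node.

```python
# Cox-Ross-Rubinstein binomial tree for an American put (illustrative parameters).
import numpy as np

def american_put_crr(s0, k, r, sigma, t, n_steps):
    dt = t / n_steps
    u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)               # risk-neutral up probability
    disc = np.exp(-r * dt)

    prices = s0 * u ** np.arange(n_steps, -1, -1) * d ** np.arange(0, n_steps + 1)
    values = np.maximum(k - prices, 0.0)             # payoff at maturity
    for step in range(n_steps - 1, -1, -1):          # backward induction
        prices = s0 * u ** np.arange(step, -1, -1) * d ** np.arange(0, step + 1)
        values = disc * (p * values[:-1] + (1 - p) * values[1:])
        values = np.maximum(values, k - prices)      # early-exercise check
    return values[0]

print("American put value:", round(american_put_crr(100, 100, 0.05, 0.2, 1.0, 500), 4))
```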

Keywords: Bjerksund and Stensland approximations, computational analysis, finance, options pricing, numerical methods

Procedia PDF Downloads 449
40033 Memory, Self, and Time: A Bachelardian Perspective

Authors: Michael Granado

Abstract:

The French philosopher Gaston Bachelard’s philosophy of time is articulated in his two works on the subject, The Intuition of the Instant (1932) and The Dialectic of Duration (1936). Both works present a systematic methodology predicated upon the assumption that our understanding of time has radically changed as a result of Einstein and subsequently needs to be reimagined. Bachelard makes a major distinction in his discussion of time: 1. time as it is (physical time), and 2. time as we experience it (phenomenological time). This paper focuses on the second, phenomenological time, and explores the connections between Bachelard’s work and contemporary psychology. Several aspects of Bachelard’s philosophy of time nicely complement our current understanding of memory and self and clarify how the self relates to experienced time. Two points in particular stand out; the first is the relative nature of subjective time, and the second is the implications of subjective time for the formation of the narrative self. Bachelard introduces two philosophical concepts to explain these points: rhythmanalysis and reverie. By exploring these concepts, it will become apparent that there is an undeniable link between memory, self, and time. Through the narrative self, the individual connects and links memories and time together to form a sense of personal identity.

Keywords: Gaston Bachelard, memory, self, time

Procedia PDF Downloads 157
40032 Cardio Autonomic Response during Mental Stress in the Wards of Normal and Hypertensive Parents

Authors: Sheila R. Pai, Rekha D. Kini, Amrutha Mary

Abstract:

Objective: To assess and compare cardiac autonomic activity after mental stress among the wards of normotensive and hypertensive parents. Methods: The study included 67 subjects: 30 had a parental history of hypertension, and the remaining 37 had normotensive parents. Subjects were divided into a control group (wards of normotensive parents) and a study group (wards of hypertensive parents). Height and weight were noted, and body mass index (BMI) was calculated. A mental stress test was carried out. Blood pressure (BP) and an electrocardiogram (ECG) were recorded during normal breathing and after the mental stress test. Heart rate variability (HRV) was recorded and analyzed by the time-domain method, using software version 1.1 (AIIMS, New Delhi). The data obtained were analyzed using Student’s t-test followed by the Mann-Whitney U-test, and P < 0.05 was considered significant. Results: There was no significant difference in systolic blood pressure or diastolic blood pressure (DBP) between the study and control groups following mental stress. In the time-domain analysis, the mean values of pNN50 and RMSSD in the study group were not significantly different from those in the control group after the mental stress test. Conclusion: The study thus concluded that there was no significant difference in HRV between the study and control groups following mental stress.
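
For reference, the two time-domain indices compared in the study can be computed directly from successive RR intervals; a small sketch with a synthetic RR series (in milliseconds) is given below.

```python
# Time-domain HRV indices: RMSSD and pNN50 from successive RR intervals (ms).
# The RR series below is synthetic, not the study's recordings.
import numpy as np

def rmssd(rr_ms):
    diff = np.diff(rr_ms)
    return float(np.sqrt(np.mean(diff ** 2)))     # root mean square of successive differences

def pnn50(rr_ms):
    diff = np.abs(np.diff(rr_ms))
    return float(100.0 * np.mean(diff > 50.0))    # % of successive differences > 50 ms

rr = 800.0 + np.random.default_rng(4).normal(scale=30.0, size=300)
print(f"RMSSD = {rmssd(rr):.1f} ms, pNN50 = {pnn50(rr):.1f} %")
```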

Keywords: heart rate variability, time domain analysis, mental stress, hypertensive

Procedia PDF Downloads 267
40031 Resource Constrained Time-Cost Trade-Off Analysis in Construction Project Planning and Control

Authors: Sangwon Han, Chengquan Jin

Abstract:

Time-cost trade-off (TCTO) analysis is one of the most significant parts of construction project management. Despite its significance, current TCTO analysis, based on the Critical Path Method (CPM), does not consider resource constraints and accordingly sometimes generates an impractical and/or infeasible schedule in terms of resource availability. Therefore, resource constraints need to be considered when performing TCTO analysis. In this research, a genetic algorithm (GA) based optimization model is created in order to find the optimal schedule. This model is used to compare four distinct scenarios (i.e., 1) the initial CPM schedule, 2) TCTO without resource constraints, 3) resource allocation after TCTO, and 4) TCTO with resource constraints) in terms of duration, cost, and resource utilization. The comparison results show that ‘TCTO with resource constraints’ generates the optimal schedule with respect to duration, cost, and resources. This verifies the need to consider resource constraints when performing TCTO analysis. It is expected that the proposed model will produce a more feasible and optimal schedule.
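
A highly simplified GA sketch of the idea, under strong assumptions (activities in series, a single crew-size cap, illustrative durations and costs): each chromosome selects one execution mode per activity, and the fitness penalises exceeding the deadline or the resource cap. This is not the paper's model, only an illustration of the GA machinery.

```python
# GA for a toy resource-constrained time-cost trade-off. All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(5)
modes = [                            # per activity: list of (duration, cost, crews)
    [(6, 100, 2), (4, 160, 3), (3, 230, 4)],
    [(8, 120, 2), (6, 190, 3), (5, 260, 5)],
    [(5,  90, 1), (4, 140, 2), (3, 210, 3)],
    [(7, 110, 2), (5, 180, 4), (4, 250, 5)],
]
DEADLINE, CREW_CAP = 22, 4

def fitness(chrom):
    sel = [modes[i][g] for i, g in enumerate(chrom)]
    duration = sum(d for d, _, _ in sel)             # activities assumed in series
    cost = sum(c for _, c, _ in sel)
    peak_crews = max(r for _, _, r in sel)
    penalty = 1000 * max(0, duration - DEADLINE) + 1000 * max(0, peak_crews - CREW_CAP)
    return cost + penalty

pop = rng.integers(0, 3, size=(30, len(modes)))      # initial population of mode choices
for _ in range(100):                                  # generations
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[:10]]            # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(0, 10, size=2)]
        cut = rng.integers(1, len(modes))
        child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
        if rng.random() < 0.2:                        # mutation
            child[rng.integers(0, len(modes))] = rng.integers(0, 3)
        children.append(child)
    pop = np.array(children)

best = min(pop, key=fitness)
print("best mode selection:", best, "fitness (cost + penalties):", fitness(best))
```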

Keywords: time-cost trade-off, genetic algorithms, critical path, resource availability

Procedia PDF Downloads 180
40030 Effect of Springback Analysis on Influences of the Steel Demoulding Using FEM

Authors: Byeong-Sam Kim, Jongmin Park

Abstract:

The present work is motivated by the industrial challenge of producing complex composite shapes cost-effectively. An anisotropic thermoviscoelastic model is analyzed with an implemented finite element solver. The stress relaxation is represented by a Prony series for the nonlinear thermoviscoelastic model. The relaxation of process-induced internal stresses during the cooling stage of the manufacturing cycle was calculated from the springback phenomenon observed in a part containing a cylindrical segment. The finite element results obtained from the present formulation are compared with experimental data, and the results show good correlation.
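
For reference, a Prony-series relaxation modulus has the form E(t) = E_inf + sum_i E_i exp(-t/tau_i); the sketch below evaluates it with an assumed two-term series, not the coefficients used in the paper.

```python
# Prony-series relaxation modulus E(t) = E_inf + sum_i E_i * exp(-t / tau_i).
# The number of terms and all coefficients are illustrative assumptions.
import numpy as np

def relaxation_modulus(t, e_inf, e_terms, tau_terms):
    t = np.asarray(t, dtype=float)
    return e_inf + sum(e_i * np.exp(-t / tau_i) for e_i, tau_i in zip(e_terms, tau_terms))

t = np.logspace(-2, 4, 7)                  # time [s]
E = relaxation_modulus(t, e_inf=1.2e9, e_terms=[2.0e9, 0.8e9], tau_terms=[10.0, 1.0e3])
for ti, Ei in zip(t, E):
    print(f"t = {ti:9.2f} s   E(t) = {Ei / 1e9:.3f} GPa")
```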

Keywords: thermoviscoelastic, springback phenomena, FEM analysis, thermoplastic composite structures

Procedia PDF Downloads 355
40029 New Series Input Parallel Output LLC DC/DC Converter with the Input Voltage Balancing Capacitor for the Electric System of Electric Vehicles

Authors: Kang Hyun Yi

Abstract:

This paper presents a new parallel output LLC DC/DC converter for electric vehicles. An electric vehicle has two batteries: a high voltage battery for the powertrain and a low voltage battery for the vehicle electric system. The low voltage battery is charged from the high voltage battery, so a DC/DC converter with a high voltage input and a high current output is needed. Therefore, a new LLC converter with input voltage compensation is proposed as the high-voltage-input, low-voltage-output DC/DC converter. The proposed circuit uses two LLC converters whose inputs are connected in series across the powertrain battery and whose low-voltage outputs are connected in parallel for the vehicle electric system, because both the powertrain battery voltage and the electric power demand of the vehicle are high. Also, an input series voltage compensation capacitor is used for balancing the input current in the two LLC converters. The proposed converter provides equal electrical stress on the semiconductor parts and reactive components, high efficiency, and good heat dissipation.

Keywords: electric vehicle, LLC DC/DC converter, input voltage balancing, parallel output

Procedia PDF Downloads 1045
40028 Serial Position Curves under Compressively Expanding and Contracting Schedules of Presentation

Authors: Priya Varma, Denis John McKeown

Abstract:

Psychological time, unlike physical time, is believed to be ‘compressive’ in the sense that the mental representations of a series of events may be internally arranged with ever-decreasing inter-event spacing (looking back from the most recently encoded event). If this is true, the record within immediate memory of recent events is severely temporally distorted. Although this notion of temporal distortion of the memory record is captured within some theoretical accounts of human forgetting, notably temporal distinctiveness accounts, the way in which the fundamental nature of the distortion underpins memory and forgetting more broadly is barely recognised, or at least rarely directly investigated. Our intention here was to manipulate the spacing of items for recall in order to ‘reverse’ this supposed natural compression within the encoding of the items. In Experiment 1, three schedules of presentation (expanding, contracting, and fixed irregular temporal spacing) were created using logarithmic spacing of the words, for both free and serial recall conditions. The results of recall of lists of 7 words showed statistically significant benefits of temporal isolation, and, more excitingly, the contracting word series (which we may think of as reversing the natural compression within the mental representation of the word list) showed the best performance. Experiment 2 tested for effects of active verbal rehearsal in the recall task; this reduced but did not remove the benefits of our temporal scheduling manipulation. Finally, a third experiment used the same design but with Chinese characters as memoranda, in a further attempt to subvert possible verbal maintenance of items. One change to the design here was to introduce a probe item following the sequence of items and record response times to this probe. Together, the outcomes of the experiments broadly support the notion of temporal compression within immediate memory.

Keywords: memory, serial position curves, temporal isolation, temporal schedules

Procedia PDF Downloads 209
40027 The Impact of Agricultural Product Export on Income and Employment in Thai Economy

Authors: Anucha Wittayakorn-Puripunpinyoo

Abstract:

The research objectives were 1) to study the situation and trend of agricultural product export in Thailand, 2) to study the impact of agricultural product export on income in the Thai economy, 3) to study the impact of agricultural product export on employment in the Thai economy, and 4) to provide recommendations for agricultural product export policy in Thailand. In this research, secondary data were collected as yearly time-series data from 1990 to 2016, covering 27 years, from the Bank of Thailand database. Primary data were collected from the stakeholders of agricultural product export policy in Thailand. Descriptive statistics such as the arithmetic mean and standard deviation were applied. Agricultural product exports were forecast using the Monte Carlo simulation technique as well as time-trend analysis. In addition, the impact of agricultural product export on income and employment was assessed with an econometric model whose parameters were estimated by the ordinary least squares technique. The research results revealed that 1) the agricultural product export value of Thailand from 1990 to 2016 was 338,959.5 million Thai baht, with a yearly growth rate of 4.984 percent; the forecast export value continues to increase, but its growth rate declines; 2) agricultural product export has a positive impact on income in the Thai economy: a 1 percent increase in exports would raise income by 0.0051 percent; 3) agricultural product export has a positive impact on employment in the Thai economy: a 1 percent increase in exports would raise employment by 0.079 percent; and 4) future agricultural product export policy should focus on finished or semi-finished agricultural products instead of raw materials, applying technology and innovation to add value to agricultural exports; public policy should also support private-sector exporters in order to encourage agricultural exporting in Thailand.
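
The elasticities reported in the results correspond to slope coefficients of a log-log regression; a minimal sketch with a synthetic yearly series (not the Bank of Thailand data) is shown below.

```python
# OLS elasticity estimate: slope of log(income) on log(exports).
# The yearly series below is synthetic and only illustrates the mechanics.
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(1990, 2017)
exports = 200_000 * 1.05 ** (years - 1990) * np.exp(rng.normal(scale=0.05, size=years.size))
income = 5_000_000 * exports ** 0.005 * np.exp(rng.normal(scale=0.02, size=years.size))

slope, intercept = np.polyfit(np.log(exports), np.log(income), 1)
print(f"estimated elasticity of income w.r.t. exports: {slope:.4f}")
```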

Keywords: agricultural product export, income, employment, Thai economy

Procedia PDF Downloads 302
40026 Quality Evaluation of Backfill Grout in Tunnel Boring Machine Tail Void Using Impact-Echo (IE): Short-Time Fourier Transform (STFT) Numerical Analysis

Authors: Ju-Young Choi, Ki-Il Song, Kyoung-Yul Kim

Abstract:

During Tunnel Boring Machine (TBM) tunnel excavation, backfill grout should be injected after the installation of the segment lining to ensure the stability of the tunnel and to minimize ground deformation. If grouting is not sufficient to fill the gap between the segments and the rock mass, hydraulic pressures occur in the void, which can negatively influence the stability of the tunnel. Recently, the tendency to use the TBM tunnelling method in place of the drill-and-blast (NATM) method has been increasing. However, there are only a few studies on the evaluation of backfill grout. This study evaluates the TBM tunnel backfill state using the Impact-Echo (IE) method. Three layers (segment, grout, and rock mass) are simulated with FLAC 2D, an FDM-based software. The signals obtained from the numerical analysis and the IE test are analyzed by the Short-Time Fourier Transform (STFT) in the time, frequency, and time-frequency domains. The results of this study can be used to evaluate the quality of backfill grouting in the tail void.
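
A small sketch of the STFT step, assuming a synthetic decaying impact-echo response rather than the FLAC 2D output: the signal is transformed with scipy's STFT and the dominant frequency over the record is extracted. Sampling rate and signal parameters are assumptions.

```python
# STFT of a synthetic impact-echo-like response (decaying tone plus noise).
import numpy as np
from scipy.signal import stft

fs = 500_000                                     # sampling rate [Hz] (assumed)
t = np.arange(0, 2e-3, 1 / fs)
signal = (np.sin(2 * np.pi * 15_000 * t) * np.exp(-t / 4e-4)
          + 0.05 * np.random.default_rng(7).normal(size=t.size))

f, tt, Z = stft(signal, fs=fs, nperseg=128, noverlap=96)
peak_f = f[np.abs(Z).max(axis=1).argmax()]       # strongest frequency over the record
print(f"dominant frequency: {peak_f / 1e3:.1f} kHz")
```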

Keywords: tunnel boring machine, backfill grout, impact-echo method, time-frequency domain analysis, finite difference method

Procedia PDF Downloads 256
40025 The Documentary Analysis of Meta-Analysis Research in Violence of Media

Authors: Proud Arunrangsiwed

Abstract:

The “future directions” section of a meta-analysis can provide valuable guidance for conducting future studies. This study, “The Documentary Analysis of Meta-Analysis Research in Violence of Media”, synthesizes the “future directions” of 10 meta-analysis papers. The purpose of this research is to identify an appropriate research design and methodology for future research on media violence. Further research should employ longitudinal and experimental designs and carefully consider age effects, time-spent effects, enjoyment effects, and the ordinary lifestyle of each media consumer.

Keywords: aggressive, future direction, meta-analysis, media, violence

Procedia PDF Downloads 403
40024 Resilient Analysis as an Alternative to Conventional Seismic Analysis Methods for the Maintenance of a Socioeconomical Functionality of Structures

Authors: Sara Muhammad Elqudah, Vigh László Gergely

Abstract:

Catastrophic events, such as earthquakes, are sudden, short, and devastating, threatening lives, demolishing futures, and causing huge economic losses. Current seismic analysis and design standards are based on life-safety levels, where only some residual strength and stiffness are left in the structure, leaving it beyond economical repair. Consequently, it has become necessary to introduce and implement the concept of resilient design. Resilient design is about designing for ductility over time by resisting, absorbing, and recovering from the effects of a hazard in an appropriate and time-efficient manner while maintaining the functionality of the structure in the aftermath of the incident. Resilient analysis is mainly based on fragility, vulnerability, and functionality curves, from which a resilience index is eventually generated; the higher this index, the better the performance of the structure. In this paper, the seismic performance of a simple two-story reinforced concrete building located in a moderate seismic region has been evaluated using the conventional seismic analysis methods (linear static analysis, response spectrum analysis, and pushover analysis), and the results of these methods are compared to those of the resilient analysis. Results highlight that the resilient analysis was the most suitable method for generating a more ductile and functional structure from a socio-economic perspective, compared to the standard seismic analysis methods.
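
One common way to turn a functionality curve into a resilience index is to take the normalised area under Q(t) over the control period; the sketch below illustrates this with a synthetic curve, as an assumption about the kind of index meant here rather than the paper's exact formulation.

```python
# Resilience index as the normalised area under the functionality curve Q(t).
# The event time, functionality drop, and recovery rate are assumptions.
import numpy as np

t = np.linspace(0.0, 100.0, 501)                    # days within the control period
q = np.where(t < 10.0, 1.0,                         # full functionality before the event at day 10
             np.minimum(1.0, 0.4 + 0.6 * (t - 10.0) / 60.0))  # drop to 40 %, linear recovery
resilience_index = float(q.mean())                  # normalised area under Q(t) on a uniform grid
print(f"resilience index = {resilience_index:.3f}")
```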

Keywords: conventional analysis methods, functionality, resilient analysis, seismic performance

Procedia PDF Downloads 105
40023 A Hybrid Algorithm Based on Greedy Randomized Adaptive Search Procedure and Chemical Reaction Optimization for the Vehicle Routing Problem with Hard Time Windows

Authors: Imen Boudali, Marwa Ragmoun

Abstract:

The Vehicle Routing Problem with Hard Time Windows (VRPHTW) is a basic distribution management problem that models many real-world problems. The objective of the problem is to serve a set of customers with known demands on minimum-cost vehicle routes while satisfying vehicle capacity and the customers' hard time windows. In this paper, we propose to deal with this optimization problem using a new hybrid stochastic algorithm based on two metaheuristics: Chemical Reaction Optimization (CRO) and the Greedy Randomized Adaptive Search Procedure (GRASP). The first method is inspired by the natural process of chemical reactions, which enables the transformation of unstable substances with excessive energy into stable ones. During this process, the molecules interact with each other through a series of elementary reactions to reach the minimum energy for their existence. This property is embedded in CRO to solve the VRPHTW. In order to enhance the population diversity throughout the search process, we integrated GRASP into our method. Simulation results on Solomon's benchmark instances show the very satisfactory performance of the proposed approach.

Keywords: benchmark problems, combinatorial optimization, vehicle routing problem with hard time windows, metaheuristics, hybridization, GRASP, CRO

Procedia PDF Downloads 408
40022 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images

Authors: U. Datta

Abstract:

The main objective of this study is to find a suitable approach to monitor land infrastructure growth over a period of time using multispectral satellite images. The bi-temporal change detection method is unable to indicate continuous change occurring over a long period of time. To achieve this objective, the approach used here estimates a statistical model from a series of multispectral images acquired over a long period of time, assuming there is no considerable change during that period, and then compares it with the multispectral image data obtained at a later time. The change is estimated pixel-wise. A statistical composite-hypothesis technique is used for pixel-based change detection in a defined region. The generalized likelihood ratio test (GLRT) is used to detect a changed pixel from the estimated probabilistic model of the corresponding pixel. The changed pixel is detected assuming that the images have been co-registered prior to estimation. To minimize error due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are different challenges in this method. The first and foremost challenge is obtaining a sufficiently large number of datasets for multivariate distribution modelling, since a large number of images are always discarded due to cloud coverage. Due to imperfect modelling, there will be a high probability of false alarms. The overall conclusion that can be drawn from this work is that the probabilistic method described in this paper gives promising results, which need to be pursued further.
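
A compact sketch of the pixel-wise test under a Gaussian assumption: the reference acquisitions of a pixel define a mean and covariance, and a later observation is flagged when its GLRT statistic (here the Mahalanobis distance to the reference model) exceeds a chi-squared threshold. Band count, sample sizes, and the threshold are assumptions.

```python
# Pixel-wise GLRT-style change test under a multivariate Gaussian reference model.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(8)
n_bands, n_ref = 4, 40                       # spectral bands, reference acquisitions
ref = rng.normal(loc=0.2, scale=0.02, size=(n_ref, n_bands))   # stable reference pixel stack

mu = ref.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(ref, rowvar=False))

def glrt_statistic(x):
    d = x - mu
    return float(d @ cov_inv @ d)            # Mahalanobis distance to the reference model

threshold = chi2.ppf(0.999, df=n_bands)      # low false-alarm probability (assumed level)
new_unchanged = rng.normal(loc=0.2, scale=0.02, size=n_bands)
new_changed = new_unchanged + 0.15           # e.g. new construction raises reflectance
print("unchanged pixel flagged:", glrt_statistic(new_unchanged) > threshold)
print("changed pixel flagged:  ", glrt_statistic(new_changed) > threshold)
```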

Keywords: co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection

Procedia PDF Downloads 129
40021 Selective Extraction of Couple Nickel(II) / Cobalt(II) by a Series of Schiff Bases in Sulfate Medium, in the Chloroform-Water System

Authors: N. Belhadj, M. Hadj Youcef, T. Benabdallah, Belbachir Ibtissem, N. Boceiri

Abstract:

This work deals with the synthesis, structural elucidation, and exploration of the extracting properties of a series of ortho-hydroxy Schiff bases in sulfate medium. After the synthesis and characterization of their structures, their behavior in solution was studied by pH-metric titration in different media (homogeneous and heterogeneous solution). This allowed us to explore and quantify, in each of these media, some of their properties in solution, such as their acid-base behavior (determination and comparison of pKa), their distribution powers (determination and comparison of logKd), and their thermodynamic constants (determination of ∆H°, ΔS°, and mean ∆G°), by optimizing both the temperature and the ionic strength. The extraction of nickel(II) and cobalt(II) was then studied separately in the aqueous-organic system chloroform-water. Different extraction parameters were optimized, such as the pH, the extractant concentration, and the ionic strength, and the extraction constants were established in each case. The extracted metal complexes were isolated and their spatial configurations elucidated. The selective extraction of the cobalt(II)/nickel(II) couple was finally performed with our series of Schiff bases in the chloroform-water system.

Keywords: selective extraction, Schiff base, distribution, cobalt(II), nickel(II)

Procedia PDF Downloads 455
40020 Congestion Mitigation on an Urban Arterial through Infrastructure Intervention

Authors: Attiq Ur Rahman Dogar, Sohaib Ishaq

Abstract:

Pakistan has experienced rapid motorization in the last decade. Due to the soft leasing schemes of banks and an increase in average household income, even the middle class can now afford cars. The public transit system is inadequate and sparse. For these reasons, traffic demand on urban arterials has increased manifold, and poor urban transit planning and aging transportation systems have resulted in traffic congestion. The focus of this study is to improve traffic flow on a section of the N-5 passing through the Rawalpindi downtown. The present effort analyzes traffic conditions on this section and investigates the impact of traffic signal coordination on travel time. In addition to signal coordination, we also examined the effect of different infrastructure improvements on travel time. After an economic analysis of the alternatives and discussions, an improvement plan for the Rawalpindi downtown urban arterial section is proposed for implementation.

Keywords: signal coordination, infrastructure intervention, infrastructure improvement, cycle length, fuel consumption cost, travel time cost, economic analysis, travel time, Rawalpindi, Pakistan, traffic signals

Procedia PDF Downloads 310
40019 The Aesthetics of Time in Thus Spoke Zarathustra: A Reappraisal of the Eternal Recurrence of the Same

Authors: Melanie Tang

Abstract:

According to Nietzsche, the eternal recurrence is his most important idea. However, it is perhaps his most cryptic and difficult to interpret. Early readings considered it a cosmological hypothesis about the cyclic nature of time. However, following Nehamas’s ‘Life as Literature’ (1985), it has become a widespread interpretation that the eternal recurrence never really had any theoretical dimensions and is not actually a philosophy of time, but a practical thought experiment intended to measure the extent to which we have mastered and perfected our lives. This paper endeavours to challenge this line of thought before it becomes scholarly consensus, and to carry out a more complex analysis of the eternal recurrence as it is presented in Thus Spoke Zarathustra. In its wider scope, this research proposes that Thus Spoke Zarathustra, as opposed to The Birth of Tragedy, be taken as the primary source for a study of Nietzsche’s aesthetics, due to its more intrinsic aesthetic qualities and expressive devices. The eternal recurrence is the central philosophy of a work that communicates its ideas in unprecedentedly experimental and aesthetic terms, and a more in-depth understanding of why Nietzsche chooses to present his conception of time in aesthetic terms is warranted. Through hermeneutical analysis of Thus Spoke Zarathustra and engagement with secondary sources such as those by Nehamas, Karl Löwith, and Jill Marsden, the present analysis challenges the ethics of self-perfection upon which current interpretations of the recurrence are based, as well as their reliance upon a linear conception of time. Instead, it finds the recurrence to be a cyclic interplay between the self and the world, rather than a metric pertaining solely to the self. In this interpretation, time is found to be composed of an intertemporal rather than linear multitude of will to power, which structures itself through tensional cycles into an experience of circular time that can be seen to have aesthetic dimensions. In putting forth this understanding of the eternal recurrence, this research hopes to reopen debate on this key concept in the field of Nietzsche studies.

Keywords: Nietzsche, eternal recurrence, Zarathustra, aesthetics, time

Procedia PDF Downloads 143
40018 Convergence Analysis of Cubic B-Spline Collocation Method for Time Dependent Parabolic Advection-Diffusion Equations

Authors: Bharti Gupta, V. K. Kukreja

Abstract:

A comprehensive numerical study is presented for the solution of time-dependent advection-diffusion problems using the cubic B-spline collocation method. A linear combination of cubic B-spline basis functions, taken as the approximating function, is evaluated at the zeros of shifted Chebyshev polynomials, used as collocation points in each element, to obtain the best approximation. A comparison with previous techniques, on the basis of efficiency and accuracy, confirms the superiority of the proposed method. An asymptotic convergence analysis of the technique is also discussed, and the method is found to be of order two. The theoretical analysis is supported with suitable examples showing second-order convergence of the technique. Different numerical examples are simulated using MATLAB, in which 3-D graphical presentations are given at different time steps as well as for different domains of interest.
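
A small sketch of the spatial set-up described above, under assumed mesh and collocation choices: a clamped cubic B-spline basis on a uniform partition of [0, 1] is evaluated at the zeros of the degree-2 shifted Chebyshev polynomial inside each element to build the collocation matrix (boundary conditions would complete the square system).

```python
# Cubic B-spline basis evaluated at shifted-Chebyshev collocation points.
# Mesh size and points per element are assumptions, not the paper's choices.
import numpy as np
from scipy.interpolate import BSpline

k, n_elem = 3, 8                                   # cubic splines, 8 elements on [0, 1]
interior = np.linspace(0.0, 1.0, n_elem + 1)
knots = np.concatenate([[0.0] * k, interior, [1.0] * k])   # clamped knot vector
n_basis = len(knots) - k - 1

# zeros of the degree-2 shifted Chebyshev polynomial on (0, 1), mapped into each element
cheb = 0.5 * (1.0 + np.cos((2 * np.arange(1, 3) - 1) * np.pi / 4))
coll_pts = np.concatenate([interior[i] + (interior[i + 1] - interior[i]) * cheb
                           for i in range(n_elem)])

# collocation matrix B[j, i] = N_i(x_j)
B = np.zeros((coll_pts.size, n_basis))
for i in range(n_basis):
    c = np.zeros(n_basis); c[i] = 1.0
    B[:, i] = BSpline(knots, c, k)(coll_pts)
print("collocation matrix shape:", B.shape)
```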

Keywords: cubic B-spline basis, spectral norms, shifted Chebyshev polynomials, collocation points, error estimates

Procedia PDF Downloads 216
40017 Manufacturing Anomaly Detection Using a Combination of Gated Recurrent Unit Network and Random Forest Algorithm

Authors: Atinkut Atinafu Yilma, Eyob Messele Sefene

Abstract:

Anomaly detection is one of the essential mechanisms for controlling and reducing production loss, especially in today's smart manufacturing. Quick anomaly detection helps reduce the cost of production by minimizing the possibility of producing defective products. However, developing an anomaly detection model that can rapidly detect a production change is challenging. This paper proposes a Gated Recurrent Unit (GRU) network combined with a Random Forest (RF) to quickly detect anomalies in the production process in real time. The GRU is used as a feature detector, and the RF as a classifier operating on the features produced by the GRU. The model was tested on various synthetic and real-world datasets against benchmark methods. The results show that the proposed GRU-RF outperforms the benchmark methods with the shortest time taken to detect anomalies in the production process. Based on this investigation, the proposed model can eliminate or reduce unnecessary production costs and bring a competitive advantage to manufacturing industries.
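
A minimal sketch of the GRU-RF pipeline under simplifying assumptions: an untrained GRU encodes each multivariate window into a fixed-length feature vector, and a random forest classifies it as normal or anomalous. Window length, feature sizes, labels, and data are placeholders; in the paper's setting the GRU would of course be trained.

```python
# GRU feature extraction followed by random-forest classification of windows.
import numpy as np
import torch
from sklearn.ensemble import RandomForestClassifier

torch.manual_seed(0)
n_windows, win_len, n_signals, hidden = 400, 50, 6, 32
gru = torch.nn.GRU(input_size=n_signals, hidden_size=hidden, batch_first=True)

X = torch.randn(n_windows, win_len, n_signals)         # placeholder sensor windows
with torch.no_grad():
    _, h_last = gru(X)                                  # final hidden state per window
features = h_last.squeeze(0).numpy()                   # GRU features, shape (n_windows, hidden)

y = np.random.default_rng(9).integers(0, 2, size=n_windows)   # 0 = normal, 1 = anomaly (placeholder)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features[:300], y[:300])
print("held-out detection accuracy:", rf.score(features[300:], y[300:]))
```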

Keywords: anomaly detection, multivariate time series data, smart manufacturing, gated recurrent unit network, random forest

Procedia PDF Downloads 105
40016 Lamb Wave-Based Blood Coagulation Measurement System Using Citrated Plasma

Authors: Hyunjoo Choi, Jeonghun Nam, Chae Seung Lim

Abstract:

Acoustomicrofluidics has gained much attention for clinical and biological applications due to advantages such as noninvasiveness and easy integration with other miniaturized systems. However, a limitation of acoustomicrofluidics is the complicated and costly fabrication process of electrodes. In this study, we propose a low-cost and lithography-free device using Lamb waves for blood analysis. Using a Lamb wave, calcium ion-removed blood plasma and coagulation reagents can be rapidly mixed for the blood coagulation test. During coagulation, the viscosity of the sample increases, and the viscosity change can be monitored through the internal acoustic streaming of microparticles suspended in the sample droplet. The moment at which the acoustic streaming of particles stops due to the viscosity increase is defined as the coagulation time. With the addition of calcium ions at 0-25 mM, the coagulation time was measured and compared with the conventional index for blood coagulation analysis, the prothrombin time, showing a high correlation (correlation coefficient of 0.94). Therefore, our simple and cost-effective Lamb wave-based blood analysis device has strong potential to be utilized in clinical settings.

Keywords: acoustomicrofluidics, blood analysis, coagulation, lamb wave

Procedia PDF Downloads 333
40015 A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. Also, by extrapolating back from its current state, the universe at its early times can be studied; this is the subject of the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion due to nuclear fusion led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating back further from this early state reaches a singularity, which cannot be explained by modern physics and at which the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion. However, highly accurate measurements reveal an equal temperature mapping across the universe, which is contradictory to the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy so that an equal maximum temperature can be achieved across the early universe. Also, the evidence of quantum fluctuations at this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing a state of energy called a "neutral state," possessing an energy level referred to as the "base energy." The governing principles of the base energy are discussed in detail in our second paper in the series, "A Conceptual Study for Addressing the Singularity of the Emerging Universe." To establish a complete picture, the origin of the base energy should be identified and studied. In this research paper, the mechanism which led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. Therefore, the concept proposed in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of the base energy being one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution

Procedia PDF Downloads 90
40014 Double Fourier Series Applied to Supraharmonic Determination: The Specific Cases of a Boost and an Interleaved Boost Converter Used as Active Power Factor Correctors

Authors: Erzen Muharemi, Emmanuel De Jaeger, Jos Knockaert

Abstract:

The work presented here investigates the modeling of power electronics converters in terms of their harmonic production. Specifically, it addresses high-frequency emissions in the range of 2-150 kHz, referred to as supraharmonics. This paper models a conventional converter, namely the boost converter used as an active power factor corrector (APFC). Furthermore, the modeling is extended to the case of the interleaved boost converter, which offers advantages such as halving the emissions. Finally, a comparison between the theoretical, numerical, and experimental results will be provided.

Keywords: APFC, boost converter, converter modeling, double Fourier series, supraharmonics

Procedia PDF Downloads 27
40013 Real-Time Observation of Concentration Distribution for Mix Liquids including Water in Micro Fluid Channel with Near-Infrared Spectroscopic Imaging Method

Authors: Hiroki Takiguchi, Masahiro Furuya, Takahiro Arai

Abstract:

In order to quantitatively comprehend thermal flow for industrial applications such as nuclear and chemical reactors, detailed measurements of temperature and material abundance (concentration) at high temporal and spatial resolution are required. Additionally, rigorous evaluation of the size effect is important for practical realization. This paper introduces a real-time spectroscopic imaging method at the micro scale, which visualizes the temperature and concentration distributions of a liquid or liquid mixture using the near-infrared (NIR) wavelength region. The imaging principle is based on absorption in a pre-selected narrow band around an absorption peak of the target liquid in the NIR region, or on the temperature dependence of that absorption. For example, water has a positive temperature sensitivity at a wavelength of 1905 nm; therefore, the temperature of water can be measured using this wavelength band. In the experiment, real-time imaging of the concentration distribution in a micro channel was demonstrated to investigate the applicability of the proposed method to micro-scale diffusion coefficient and temperature measurements. The effects of thermal diffusion and binary mutual diffusion were evaluated with time-series visualizations of the concentration distribution.
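
The conversion from measured band intensity to concentration typically follows the Beer-Lambert law; a per-pixel sketch with assumed absorptivity, path length, and synthetic intensity images is shown below as an illustration of the principle, not the paper's calibration.

```python
# Per-pixel Beer-Lambert conversion: absorbance A = -log10(I / I0), c = A / (eps * l).
# Molar absorptivity, path length, and intensity images are assumptions.
import numpy as np

epsilon = 1.2      # molar absorptivity at the selected band [L mol^-1 mm^-1] (assumed)
path_mm = 0.5      # micro-channel depth [mm] (assumed)

rng = np.random.default_rng(10)
true_c = rng.uniform(0.0, 0.5, size=(64, 64))      # simulated "true" concentration [mol/L]
i0 = np.full((64, 64), 4000.0)                     # reference (blank) intensity image
i = i0 * 10 ** (-epsilon * path_mm * true_c)       # transmitted intensity (Beer-Lambert)

absorbance = -np.log10(i / i0)
concentration = absorbance / (epsilon * path_mm)   # recovered concentration map
print("mean recovered concentration [mol/L]:", round(float(concentration.mean()), 3))
```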

Keywords: near-infrared spectroscopic imaging, micro fluid channel, concentration distribution, diffusion phenomenon

Procedia PDF Downloads 155
40012 Synthesis of Polystyrene Grafted Filler Nanoparticles: Effect of Grafting on Mechanical Reinforcement

Authors: M. Khlifa, A. Youssef, A. F. Zaed, A. Kraft, V. Arrighi

Abstract:

A series of PS nanoparticles were prepared by grafting PS from both aggregated silica and colloidal silica using atom transfer radical polymerisation (ATRP). The mechanical behaviour of the nanocomposites has been examined by differential scanning calorimetry (DSC) and dynamic mechanical thermal analysis (DMTA).

Keywords: ATRP, nanocomposites, polystyrene, reinforcement

Procedia PDF Downloads 616
40011 Factors Affecting Time Performance in Building Construction Projects

Authors: Ibraheem A. K. Mahameed

Abstract:

The aim of this study is to identify the risks affecting the time performance of building construction projects in the West Bank in Palestine from the contractors' viewpoint. Through a detailed literature review, 38 risks that might affect the time performance of building construction projects were defined. These risks were classified into 6 groups: project, managerial, consultant, financial, external, and construction items. A questionnaire survey was performed to rank the considered risks in terms of severity and frequency. The analysis of the survey indicated that the top five risks affecting time performance of building construction projects in Palestine are: awarding the project to the lowest price, the political situation, poor communication and coordination between construction parties, change orders, and the financial status of the contractor.

Keywords: delay, time performance, construction, building

Procedia PDF Downloads 460