Search results for: temporal sequences.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 587

107 Estimation of Forest Fire Emission in Thailand by Using Remote Sensing Information

Authors: A. Junpen, S. Garivait, S. Bonnet, A. Pongpullponsak

Abstract:

Forest fires in Thailand are an annual occurrence and a cause of air pollution. This study estimates the emissions from forest fires during 2005-2009 using the MODerate-resolution Imaging Spectro-radiometer (MODIS) sensor aboard the Terra and Aqua satellites, experimental data, and statistical data. The forest fire emissions are estimated using the equation established by Seiler and Crutzen in 1982. The spatial and temporal variation of forest fire emissions is analyzed and displayed in the form of grid density maps. The satellite data analysis suggests that between 2005 and 2009 there were 86,877 fire hotspots, with a significant majority (more than 80% of fire hotspots) occurring in deciduous forest. The peak period of forest fire activity is January to May. The estimated emissions from forest fires during 2005 to 2009 indicate that the amounts of CO, CO2, CH4, and N2O were about 3,133,845 tons, 47,610.337 tons, 204,905 tons, and 6,027 tons, respectively, or about 6,171,264 tons of CO2eq. The fires also emitted 256,132 tons of PM10. The year 2007 was found to be the year with the largest emissions, and March is the month with the maximum amount of forest fire emissions. The areas with a high density of forest fire emissions were the forests situated in the northern, western, and upper northeastern parts of the country.
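
The emission estimate described follows the widely used Seiler-and-Crutzen-type bookkeeping, in which burned area, fuel load, combustion completeness and a species-specific emission factor are multiplied. A minimal sketch assuming that general form (the function and all numbers below are illustrative, not the study's values):

    def species_emission(burned_area_ha, fuel_load_t_ha, combustion_factor, ef_g_per_kg):
        """Emitted mass (tons) of one species for one burned grid cell."""
        dry_matter_burned_t = burned_area_ha * fuel_load_t_ha * combustion_factor
        return dry_matter_burned_t * ef_g_per_kg / 1000.0   # g emitted per kg of dry matter burned

    # Illustrative numbers only: a hypothetical 100 ha deciduous-forest fire, CO emission factor 107 g/kg
    print(species_emission(100.0, 50.0, 0.4, 107.0))          # -> 214.0 tons of CO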

Keywords: Emissions, Forest fire, Remote sensing information.

106 Beam Coding with Orthogonal Complementary Golay Codes for Signal to Noise Ratio Improvement in Ultrasound Mammography

Authors: Y. Kumru, K. Enhos, H. Köymen

Abstract:

In this paper, we report experimental results on using complementary Golay coded signals at 7.5 MHz to detect breast microcalcifications of 50 µm size. Simulations using complementary Golay coded signals show perfect consistency with the experimental results, confirming the improved signal to noise ratio for complementary Golay coded signals. To improve the success in detecting the microcalcifications, orthogonal complementary Golay sequences with low cross-correlation for minimum interference are used as coded signals and compared to a tone burst pulse of equal energy in terms of resolution under weak signal conditions. The measurements are conducted using an experimental ultrasound research scanner, the Digital Phased Array System (DiPhAS) with 256 channels, and a phased array transducer with 7.5 MHz center frequency; the results obtained through experiments are validated by the Field II simulation software. In addition, to investigate the superiority of coded signals in terms of resolution, a multipurpose tissue-equivalent phantom containing a series of monofilament nylon targets, 240 µm in diameter, and cyst-like objects with attenuation of 0.5 dB/[MHz x cm] is used in the experiments. We obtained ultrasound images of the monofilament nylon targets for the evaluation of resolution. Simulation and experimental results show that it is possible to differentiate closely positioned small targets with increased success by using coded excitation under very weak signal conditions.
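
A minimal sketch of the property the coding scheme exploits: the aperiodic autocorrelations of a complementary Golay pair sum to zero at every non-zero lag, which is what yields the SNR gain after matched filtering. The recursive construction below is a standard one, not necessarily the authors' sequences:

    import numpy as np

    def golay_pair(m):
        """Recursively build a complementary Golay pair of length 2**m."""
        a, b = np.array([1.0]), np.array([1.0])
        for _ in range(m):
            a, b = np.concatenate([a, b]), np.concatenate([a, -b])
        return a, b

    a, b = golay_pair(3)                                       # length-8 pair
    acf = np.correlate(a, a, "full") + np.correlate(b, b, "full")
    print(acf)                                                 # 2N (=16) at zero lag, 0 at all other lags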

Keywords: Coded excitation, complementary Golay codes, DiPhAS, medical ultrasound.

105 Overloading Scheme for Cellular DS-CDMA using Quasi-Orthogonal Sequences and Iterative Interference Cancellation Receiver

Authors: Preetam Kumar, Saswat Chakrabarti

Abstract:

Overloading is a technique to accommodate more users than the spreading factor N. It is a bandwidth-efficient scheme for increasing the number of users in a fixed bandwidth. One efficient way to overload a CDMA system is to use two sets of orthogonal signal waveforms (O/O). The first set is assigned to the N users and the second set is assigned to the additional M users. An iterative interference cancellation technique is used to cancel interference between the two sets of users. In this paper, we evaluate the performance of an overloading scheme in which the first N users are assigned Walsh-Hadamard (WH) orthogonal codes and the extra users are assigned the same WH codes overlaid by a fixed (quasi) bent sequence [11]. This particular scheme is called the Quasi-Orthogonal Sequence (QOS) O/O scheme and is part of the cdma2000 standard [12] for providing overloading in the downlink using a single-user detector. The QOS scheme is a balanced O/O scheme, in which the correlations between any set-1 and set-2 users are equalized. The allowable overload of this scheme is investigated in the uplink on AWGN and Rayleigh fading channels, so that the uncoded performance with an iterative multistage interference cancellation detector remains close to the single-user bound. It is shown that this scheme provides 19% and 11% overloading with the SDIC technique for N = 16 and 64, respectively, with an SNR degradation of less than 0.35 dB compared to the single-user bound at a BER of 0.00001. On a Rayleigh fading channel, the channel overloading is 45% (29 extra users) at a BER of 0.0005, with an SNR degradation of about 1 dB compared to single-user performance for N = 64. This is a significant amount of channel overloading on a Rayleigh fading channel.
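
A minimal sketch of the code construction described, with a random +/-1 mask standing in for the bent/quasi-orthogonal sequence of the cdma2000 QOS construction (the actual mask is not reproduced here):

    import numpy as np
    from scipy.linalg import hadamard

    N = 16
    wh = hadamard(N)                              # rows are the N orthogonal Walsh-Hadamard set-1 codes
    rng = np.random.default_rng(0)
    mask = rng.choice([-1, 1], size=N)            # placeholder for the fixed bent/QOS masking sequence
    set2 = wh * mask                              # overlaid codes assigned to the extra (set-2) users

    # Normalised cross-correlation magnitudes between set-1 and set-2 codes
    xcorr = np.abs(wh @ set2.T) / N
    print(xcorr.max())                            # below 1: orthogonality is traded for overloading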

Keywords: DS-CDMA, Iterative Interference Cancellation, Orthogonal codes, Overloading.

104 Detection of Cyberattacks on the Metaverse Based on First-Order Logic

Authors: Sulaiman Al Amro

Abstract:

There are currently considerable challenges concerning data security and privacy, particularly in relation to modern technologies. This includes the virtual world known as the Metaverse, which consists of a virtual space that integrates various technologies and is therefore susceptible to cyber threats such as malware, phishing, and identity theft. This has led recent studies to propose the development of Metaverse forensic frameworks and the integration of advanced technologies, including machine learning for intrusion detection and security. In this context, the application of first-order logic offers a formal and systematic approach to defining the conditions of cyberattacks, thereby contributing to the development of effective detection mechanisms. In addition, formalizing the rules and patterns of cyber threats has the potential to enhance the overall security posture of the Metaverse and thus the integrity and safety of this virtual environment. The current paper focuses on the primary actions employed by avatars in potential attacks, using Interval Temporal Logic (ITL) and behavior-based detection to identify an avatar's abnormal activities within the Metaverse. The research established that the proposed framework attained an accuracy of 92.307%, with the experimental results demonstrating the efficacy of ITL, including its superior performance in addressing the threats posed by avatars within the Metaverse domain.
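
Purely as an illustration of the first-order-logic style of detection rule such a framework builds on, a toy sketch with hypothetical predicates, thresholds and log entries (not the paper's framework):

    # Rule (illustrative): exists t1 < t2 such that Impersonate(a, t1) and
    # RequestCredentials(a, t2) hold within a short window -> flag abnormal behaviour.

    def attempted_identity_theft(log, window=5.0):
        """Check the temporal first-order rule over an avatar's (time, action) log."""
        return any(a1 == "impersonate" and a2 == "request_credentials" and 0 < t2 - t1 <= window
                   for t1, a1 in log for t2, a2 in log)

    log = [(0.0, "enter_world"), (2.1, "impersonate"), (4.0, "request_credentials")]
    print(attempted_identity_theft(log))          # True: the abnormal action pattern is flagged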

Keywords: Cyberattacks, detection, first-order logic, Metaverse, privacy, security.

103 Multivariate Output-Associative RVM for Multi-Dimensional Affect Predictions

Authors: Achut Manandhar, Kenneth D. Morton, Peter A. Torrione, Leslie M. Collins

Abstract:

The current trends in affect recognition research are to consider continuous observations from spontaneous natural interactions in people, using multiple feature modalities, to represent affect in terms of continuous dimensions, to incorporate spatio-temporal correlation among affect dimensions, and to provide fast affect predictions. These research efforts have been propelled by a growing effort to develop affect recognition systems that can be implemented to enable seamless real-time human-computer interaction in a wide variety of applications. Motivated by these desired attributes of an affect recognition system, in this work a multi-dimensional affect prediction approach is proposed by integrating the multivariate Relevance Vector Machine (MVRVM) with the recently developed Output-associative Relevance Vector Machine (OARVM) approach. The resulting approach can provide fast continuous affect predictions by jointly modeling the multiple affect dimensions and their correlations. Experiments on the RECOLA database show that the proposed approach performs competitively with the OARVM while providing faster predictions during testing.

Keywords: Dimensional affect prediction, Output-associative RVM, Multivariate regression.

102 Interplay of Power Management at Core and Server Level

Authors: Jörg Lenhardt, Wolfram Schiffmann, Jörg Keller

Abstract:

As the feature sizes of recent Complementary Metal Oxide Semiconductor (CMOS) devices decrease, the influence of static power increasingly dominates their energy consumption. Thus, the power savings obtainable from Dynamic Voltage and Frequency Scaling (DVFS) are diminishing, and the temporary shutdown of cores or other microchip components becomes more worthwhile. A consequence of powering off unused parts of a chip is that the relative difference between idle and fully loaded power consumption increases. That means future chips and whole server systems gain more power-saving potential through power-aware load balancing, whereas in the past this approach had only limited effect and thus was not widely adopted. While powering off complete servers has been used to save energy, it will be superfluous in many cases once cores can be powered down. An important advantage that comes with this is a largely reduced time to respond to increased computational demand. We include the above developments in a server power model and quantify the advantage. Our conclusion is that strategies from data centers for deciding when to power off server systems might be used in the future at core level, while load balancing mechanisms previously used at core level might be used in the future at server level.
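
A minimal sketch of the kind of server power model the abstract refers to, assuming a static-plus-load-proportional form with the option to power idle cores off (all wattages are illustrative assumptions, not values from the paper):

    def server_power(load_per_core, p_core_static=5.0, p_core_max=20.0, p_platform=50.0):
        """Power (W) of one server given per-core utilisations in [0, 1]; cores at 0 are powered off."""
        active = [u for u in load_per_core if u > 0]
        return p_platform + sum(p_core_static + u * (p_core_max - p_core_static) for u in active)

    # Same total work (2.0 core-loads) spread over 8 cores vs. consolidated on 2 cores:
    print(server_power([0.25] * 8))               # spread: every core stays powered on
    print(server_power([1.0, 1.0] + [0.0] * 6))   # consolidated: 6 cores powered off, lower total power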

Keywords: Power efficiency, static power consumption, dynamic power consumption, CMOS.

101 Recognition Machine (RM) for On-line and Isolated Flight Deck Officer (FDO) Gestures

Authors: Deniz T. Sodiri, Venkat V S S Sastry

Abstract:

The paper presents an on-line recognition machine (RM) for continuous/isolated, dynamic and static gestures that arise in Flight Deck Officer (FDO) training. The RM is based on a generic pattern recognition framework. Gestures are represented as templates using summary statistics. The proposed recognition algorithm exploits temporal and spatial characteristics of gestures via dynamic programming and a Markovian process. The algorithm predicts the corresponding index of incremental input data in the templates in an on-line mode. Accumulated consistency in the sequence of predictions provides a similarity measurement (score) between the input data and the templates. The algorithm provides an intuitive mechanism for automatic detection of the start/end frames of continuous gestures. In the present paper, we consider isolated gestures. The performance of the RM is evaluated using four datasets: artificial (W TTest), hand motion (Yang) and FDO (tracker and vision-based). The RM achieves results comparable to and in agreement with other on-line and off-line algorithms such as the hidden Markov model (HMM) and dynamic time warping (DTW). The proposed algorithm has the additional advantage of providing timely feedback for training purposes.
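
For context, a minimal sketch of dynamic time warping (DTW), one of the off-line baselines the RM is compared against; this is not the proposed on-line algorithm itself:

    import numpy as np

    def dtw_distance(x, y):
        """DTW distance between two 1-D feature sequences."""
        n, m = len(x), len(y)
        d = np.full((n + 1, m + 1), np.inf)
        d[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(x[i - 1] - y[j - 1])
                d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
        return d[n, m]

    template = [0.0, 1.0, 2.0, 1.0, 0.0]
    gesture  = [0.0, 0.0, 1.0, 2.0, 2.0, 1.0, 0.0]   # same shape, different timing
    print(dtw_distance(gesture, template))            # small distance despite the time warping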

Keywords: On-line Recognition Algorithm, Isolated Dynamic/Static Gesture Recognition, On-line Markovian/Dynamic Programming, Training in Virtual Environments.

100 Human Action Recognition Using Variational Bayesian HMM with Dirichlet Process Mixture of Gaussian Wishart Emission Model

Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park

Abstract:

In this paper, we present a human action recognition method using a variational Bayesian HMM with a Dirichlet process mixture (DPM) of Gaussian-Wishart emission models (GWEM). First, we define the Bayesian HMM based on the Dirichlet process, which allows an infinite number of Gaussian-Wishart components to support continuous emission observations. Second, we consider an efficient variational Bayesian inference method that can be applied to derive the posterior distribution of hidden variables and model parameters for the proposed model based on training data, and we derive the predictive distribution that may be used to classify new actions. Third, the paper proposes a process for extracting appropriate spatial-temporal feature vectors that can be used to recognize a wide range of human behaviors from input video images. Finally, we conduct experiments to evaluate the performance of the proposed method. The experimental results show that the presented method is more effective for human action recognition than existing methods.

Keywords: Human action recognition, Bayesian HMM, Dirichlet process mixture model, Gaussian-Wishart emission model, Variational Bayesian inference, Prior distribution and approximate posterior distribution, KTH dataset.

99 Isolation and Screening of Laccase Producing Basidiomycetes via Submerged Fermentations

Authors: Mun Yee Chan, Sin Ming Goh, Lisa Gaik Ai Ong

Abstract:

Approximately 10,000 different types of dyes and pigments are used in various industrial applications yearly, including the textile and printing industries. However, these dyes are difficult to degrade naturally once they enter the aquatic system. Their high persistence in the natural environment poses a potential health hazard to all forms of life. Hence, there is a need for an alternative dye removal strategy in the environment via bioremediation. In this study, fungal laccase is investigated via commercial dye agar plates and submerged fermentation to explore its application in textile dye wastewater treatment. Two locally isolated basidiomycetes were screened for laccase activity using media supplemented with commercial dyes such as 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS), guaiacol and Remazol Brilliant Blue R (RBBR). Isolates TBB3 (1.70±0.06) and EL2 (1.78±0.08) gave the highest results on the ABTS plates, with the appearance of greenish halos around the isolates. Submerged fermentation of isolate TBB3 gave a productivity of 3.9067 U/ml/day, whereas the laccase activity of isolate EL2 was much lower (0.2097 U/ml/day). As isolate TBB3 showed higher laccase production, it was subjected to molecular characterization by DNA isolation, PCR amplification and sequencing of the ITS region of nuclear ribosomal DNA. After comparison with other sequences in the National Center for Biotechnology Information (NCBI) database, isolate TBB3 is probably of the species Trametes hirsutei. Further research can be performed on this isolate by scaling up laccase production in order to meet the requirement for higher enzyme titers for the bioremediation of textile dyes.

Keywords: Bioremediation, dyes, fermentation, laccase.

98 Development and Validation of the Response to Stressful Situations Scale in the General Population

Authors: C. Barreto Carvalho, C. da Motta, M. Sousa, J. Cabral, A. L. Carvalho, E. B. Peixoto

Abstract:

The aim of the current study was to develop and validate a Response to Stressful Situations Scale (RSSS) for the Portuguese population. This scale assesses the degree of stress experienced in scenarios that can constitute positive, negative and more neutral stressors, and also describes the physiological, emotional and behavioral reactions to those events according to their intensity. These scenarios include typical stressor scenarios relevant to patients with schizophrenia, which are currently absent from most scales, assessing specific risks that these stressors may pose to subjects, which may prove useful in non-clinical and clinical populations (i.e., patients with mood or anxiety disorders, or schizophrenia). Results from Principal Components Analysis and Confirmatory Factor Analysis of two adult samples from the general population allowed confirmation of a three-factor model with good fit indices: χ2(144) = 370.211, p = 0.000; GFI = 0.928; CFI = 0.927; TLI = 0.914; RMSEA = 0.055, P(RMSEA ≤ 0.005) = 0.096; PCFI = 0.781. Further data analysis of the scale revealed that the RSSS is an adequate assessment tool of stress response in adults to be used in further research and clinical settings, with good psychometric characteristics, adequate divergent and convergent validity, good temporal stability and high internal consistency.

Keywords: Assessment, stress events, stress response, stress vulnerability.

97 Fuzzy Sequential Algorithm for Discrimination and Decision Maker in Sporting Events

Authors: Mourad Moussa, Ali Douik, Hassani Messaoud

Abstract:

Event discrimination and decision making in the sport field are the subject of many interesting studies in computer vision and artificial intelligence. A large volume of research has been conducted on automatic semantic event detection and summarization of sports videos. The results of this research make a significant contribution both to television broadcasters and to football teams, since the outcome of a sporting event has economic consequences. In this paper, we propose a novel fuzzy sequential technique to discriminate events and characterize the technical tactics during the game. Neither a fuzzy system nor a sequential one alone is able to answer this question: the fuzzy process on its own is not sufficient because it does not respect the chronological order of the various events, while the sequential process on its own lacks flexibility regarding the parameters used in this study. The proposed technique assigns a membership degree to each parameter on the one hand and respects the sequencing of events in each frame on the other. The technique describes special events such as dribbling, headings, short sprints, rapid acceleration or deceleration, turning, jumping, kicking, ball occupation, and tackling according to the velocity vectors of the two players and the ball direction.

Keywords: Sequential process, Event detection, Soccer videos analysis, Fuzzy process, Spatio-temporal parameters.

96 Disaggregation the Daily Rainfall Dataset into Sub-Daily Resolution in the Temperate Oceanic Climate Region

Authors: Mohammad Bakhshi, Firas Al Janabi

Abstract:

High-resolution rainfall data are very important as input to hydrological models. Among the models for high-resolution rainfall data generation, temporal disaggregation was chosen for this study. The paper attempts to generate rainfall at three different resolutions (4-hourly, hourly and 10-minute) from daily data for a record period of around 20 years. The process was carried out with the DiMoN tool, which is based on a random cascade model and the method of fragments. Differences between the observed and simulated rainfall datasets are evaluated with a variety of statistical and empirical methods: the Kolmogorov-Smirnov test (K-S), usual statistics, and exceedance probability. The tool worked well at preserving the daily rainfall values on wet days; however, the generated rainfall is concentrated in shorter time periods, producing stronger storms. It is demonstrated that the difference between the generated and observed cumulative distribution function curves of the 4-hourly datasets passes the K-S test criteria, while for the hourly and 10-minute datasets the p-value has to be employed to show that their differences are reasonable. The results are encouraging considering the overestimation of the generated high-resolution rainfall data.
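
A minimal sketch of the evaluation step described, assuming SciPy's two-sample Kolmogorov-Smirnov test and synthetic placeholder arrays rather than the study's rainfall series:

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)
    observed_4h  = rng.gamma(shape=0.6, scale=3.0, size=500)   # stand-in for observed 4-hourly depths
    generated_4h = rng.gamma(shape=0.6, scale=3.2, size=500)   # stand-in for disaggregated depths

    stat, p_value = ks_2samp(observed_4h, generated_4h)
    print(f"K-S statistic = {stat:.3f}, p-value = {p_value:.3f}")
    # A large p-value means the two empirical CDFs are not significantly different at the chosen level.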

Keywords: DiMoN tool, disaggregation, exceedance probability, Kolmogorov-Smirnov Test, rainfall.

95 Constraint Based Frequent Pattern Mining Technique for Solving GCS Problem

Authors: G. M. Karthik, Ramachandra V. Pujeri

Abstract:

The Generalized Center String (GCS) problem is a generalization of the Common Approximate Substring and Common Substring problems. GCS is known to be NP-hard; the difficulty lies in the explosion of potential candidates, since the longest center string must be found without knowing in advance which sequences may not contain any motifs in a particular biological gene process. GCS can be solved by frequent pattern mining techniques and is known to be fixed-parameter tractable with respect to the input sequence length and symbol set size. Efficient methods known as Bpriori algorithms can solve GCS with reasonable time/space complexity; the Bpriori 2 and Bpriori 3-2 algorithms have been proposed to find center strings of any length and the positions of all their instances in the input sequences. In this paper, we reduce the time/space complexity of the Bpriori algorithm with a Constraint-Based Frequent Pattern mining (CBFP) technique, which integrates the ideas of constraint-based mining and FP-tree mining. The CBFP mining technique solves the GCS problem not only for center strings of any length, but also for the positions of all their mutated copies in the input sequences. The CBFP mining technique constructs a trie-like FP-tree to represent the mutated copies of center strings of any length, along with constraints to restrain the growth of the consensus tree. The complexity analysis for the constraint-based FP mining technique and the Bpriori algorithm is carried out for the worst-case and average-case approaches. The algorithm's correctness is demonstrated in a comparison with the Bpriori algorithm using artificial data.

Keywords: Constraint Based Mining, FP tree, Data mining, GCS problem, CBFP mining technique.

94 Comparison of Artificial Neural Network and Multivariate Regression Methods in Prediction of Soil Cation Exchange Capacity

Authors: Ali Keshavarzi, Fereydoon Sarmadian

Abstract:

Investigation of soil properties such as cation exchange capacity (CEC) plays an important role in environmental research, and the spatial and temporal variability of this property has led to the development of indirect methods for estimating this soil characteristic. Pedotransfer functions (PTFs) provide an alternative by estimating soil parameters from more readily available soil data. Seventy soil samples were collected from different horizons of 15 soil profiles located in the Ziaran region, Qazvin province, Iran. Multivariate regression and a neural network model (feed-forward back-propagation network) were then employed to develop a pedotransfer function for predicting CEC from the easily measurable characteristics of clay and organic carbon. The performance of the multivariate regression and neural network models was evaluated using a test data set, with the root mean square error (RMSE) as the evaluation criterion. The RMSE and R2 values derived by the ANN model for CEC were 0.47 and 0.94, respectively, while those for the multivariate regression model were 0.65 and 0.88. The results showed that an artificial neural network with seven neurons in the hidden layer had better performance in predicting soil cation exchange capacity than multivariate regression.
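
A minimal sketch of the comparison described, assuming scikit-learn's linear regression and a small feed-forward network on synthetic stand-in data (not the Ziaran soil samples; feature scaling is omitted for brevity):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_squared_error, r2_score

    rng = np.random.default_rng(0)
    X = rng.uniform([10, 0.2], [60, 3.0], size=(70, 2))           # clay (%), organic carbon (%)
    y = 0.3 * X[:, 0] + 4.0 * X[:, 1] + rng.normal(0, 1.5, 70)    # synthetic CEC values

    X_train, X_test, y_train, y_test = X[:55], X[55:], y[:55], y[55:]

    for name, model in [("multivariate regression", LinearRegression()),
                        ("ANN (7 hidden neurons)", MLPRegressor(hidden_layer_sizes=(7,),
                                                                max_iter=5000, random_state=0))]:
        model.fit(X_train, y_train)
        pred = model.predict(X_test)
        print(name, "RMSE:", round(float(np.sqrt(mean_squared_error(y_test, pred))), 2),
              "R2:", round(r2_score(y_test, pred), 2))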

Keywords: Easily measurable characteristics, Feed-forward back propagation, Pedotransfer functions, CEC.

93 Adaptive Block State Update Method for Separating Background

Authors: Youngsuck Ji, Youngjoon Han, Hernsoo Hahn

Abstract:

In this paper, we propose a robust moving object detection method that handles lighting effects in night street images, based on block-wise updating of the reference background model using block-state analysis. The experimental images are color video sequences acquired from a stationary camera. When artificial illumination such as street lights or sign lights suddenly appears, the reference background model is updated with this information. Natural illumination generally changes gradually over time, whereas artificial illumination appears suddenly. Therefore, to detect artificial illumination accurately, a two-stage process is used. The first stage compares the difference between the current image and the reference background block by block, identifying the changed blocks. The second stage computes the difference between the edge map of the current image and that of the reference background image, which makes it possible to estimate the illumination in any block. This information makes it possible to accurately detect objects and artificial illumination and to generate a cleaner reference background. Blocks are classified by block-state analysis into four states (i.e., transient, stationary, background, and artificial illumination); Fig. 1 shows the characteristics of each block state [1]. Experimental results show that the presented approach works well in the presence of illumination variance.

Keywords: Block-state, Edge component, Reference background, Artificial illumination.

92 Influence of Heterogeneous Traffic on the Roadside Fine (PM2.5 and PM1) and Coarse(PM10) Particulate Matter Concentrations in Chennai City, India

Authors: Srimuruganandam. B, S.M. Shiva Nagendra

Abstract:

In this paper, the influence of heterogeneous traffic on the temporal variation of ambient PM10, PM2.5 and PM1 concentrations at a busy arterial route (Sardar Patel Road) in Chennai city is analyzed. The hourly PM concentrations, traffic counts and average vehicle speeds were monitored at the study site for one week (19th-25th January 2009). Results indicated that the coarse (PM10) and fine (PM2.5 and PM1) PM concentrations at SP Road follow similar trends during peak and non-peak hours, irrespective of the day. The PM concentrations showed two daily peaks, corresponding to the morning (8 to 10 am) and evening (7 to 9 pm) peak-hour traffic flow. The PM10 concentration is dominated by fine particles (53% PM2.5 and 45% PM1). The high PM2.5/PM10 ratio indicates that the majority of PM10 particles originate from re-suspension of road dust. The analysis of traffic flow at the study site showed that 2W, 3W and 4W traffic has a diurnal trend similar to that of the PM concentrations. This confirms that 2W, 3W and 4W vehicles are the main emission sources contributing to the ambient PM concentration at SP Road. The speed measurements at SP Road showed that the average speeds of 2W, 3W, 4W, LCV and HCV were 38, 40, 38, 40 and 38 km/hr and 43, 41, 42, 40 and 41 km/hr, respectively, for weekdays and weekends.

Keywords: Particulate matter, heterogeneous traffic, fine particles, coarse particles, vehicle speed, weekend and weekday.

91 A New Intelligent, Dynamic and Real Time Management System of Sewerage

Authors: R. Tlili Yaakoubi, H. Nakouri, O. Blanpain, S. Lallahem

Abstract:

The current tools for real-time management of sewer systems are based on two types of software: weather forecasting software and hydraulic simulation software. The first is an important source of imprecision and uncertainty; the second requires long decision time steps because of its computation time. As a result, the results obtained generally differ from those expected. The main idea of this project is to change the basic paradigm by approaching the problem from the control ("automatic") side rather than from the hydrological side. The objective is to make it possible to run a large number of simulations in a very short time (a few seconds), replacing weather forecasts by directly using real-time measured pluviometric data. The aim is to reach a system in which decisions are made from reliable data and in which error correction is permanent. A first model of control laws was built and tested with rainfalls of different return periods. The gains obtained in discharged volume vary from 19 to 100%. A new algorithm was then developed to optimize calculation time and thus overcome the combinatorial problem encountered in our first approach. Finally, this new algorithm was tested with a 16-year rainfall series. The gains obtained are 40% of the total volume discharged to the natural environment and 65% in the number of discharge events.

Keywords: Automation, optimization, paradigm, RTC.

90 SolarSPELL Case Study: Pedagogical Quality Indicators to Evaluate Digital Library Resources

Authors: Lorena Alemán de la Garza, Marcela Georgina Gómez-Zermeño

Abstract:

This paper presents the SolarSPELL case study, which aims to generate information on the use of indicators that help evaluate the pedagogical quality of digital library resources. SolarSPELL is a solar-powered digital library with WiFi connectivity. It offers a variety of open educational resources selected for their potential for the digital transformation of educational practices and the achievement of the 2030 Agenda for Sustainable Development, adopted by all United Nations Member States. The case study employed a quantitative methodology, and the research instrument was applied to 55 teachers, directors and librarians. The results indicate that it is possible to strengthen the pedagogical quality of open educational resources through actions focused on improving temporal and technological parameters. They also reveal that users believe that SolarSPELL improves teaching-learning processes and motivates teachers to improve their professional development. This study provides valuable information on a tool that supports teaching-learning processes and, through connectivity powered by renewable energy, improves teacher training in active methodologies for ecosystem learning.

Keywords: Educational innovation, digital library, pedagogical quality, solar energy, teacher training, sustainable development.

89 Cellular Automata Based Robust Watermarking Architecture towards the VLSI Realization

Authors: V. H. Mankar, T. S. Das, S. K. Sarkar

Abstract:

In this paper, we propose a novel blind watermarking architecture oriented towards hardware implementation in VLSI. To facilitate this hardware realization, the cellular automata (CA) concept is introduced. CA have already been accepted as an attractive structure for VLSI implementation because of their modularity, parallelism, high performance and reliability. Hardware-realizable multiresolution spread spectrum watermarking techniques are very few in number, in spite of their excellent resiliency against signal impairments, because of the computational cost and complexity associated with their different filter banks and lifting techniques. The concept of cellular automata theory is incorporated to form a new transform domain technique, i.e., the Cellular Automata Transform (CAT). Since CA provide spreading sequences with very low cross-correlation, a CA-based pseudorandom sequence generator is considered in the present work. Considering the watermarking technique as a digital communication process, an error control coding (ECC) scheme must be incorporated in the data hiding scheme. Besides the hardware implementation of the entire CA-based data hiding technique, the individual CA-based blocks of the algorithm provide better results than some other methods, irrespective of hardware or software implementation. The Cellular Automata Transform, the CA-based PN sequence generator, and the CA-based ECC are the requisite blocks that are developed not only to meet the reliable hardware requirements but also to provide the basic spread spectrum watermarking features. The proposed algorithm shows statistical invisibility and resiliency against various common signal-processing operations. This algorithmic design utilizes the allocated bandwidth of the data transmission channel in a more efficient manner.
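
A minimal sketch of a cellular-automaton pseudo-noise (PN) generator of the kind relied on here, assuming an elementary CA (rule 30, chosen only for illustration) with its centre column tapped as the chip stream; this is not the paper's specific generator:

    import numpy as np

    def ca_pn_sequence(length, width=65, rule=30):
        """Centre-column bits of an elementary CA evolved from a single seed cell (periodic boundary)."""
        table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
        state = np.zeros(width, dtype=np.uint8)
        state[width // 2] = 1
        out = []
        for _ in range(length):
            out.append(int(state[width // 2]))
            idx = (np.roll(state, 1) << 2) | (state << 1) | np.roll(state, -1)
            state = table[idx]                                # synchronous rule-table update
        return np.array(out)

    chips = 1 - 2 * ca_pn_sequence(64).astype(int)            # map {0,1} -> {+1,-1} spreading chips
    print(chips[:16], chips.mean())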

Keywords: Cellular automata, watermarking, error control coding, PN sequence, VLSI.

88 Use of Time-Depend Effects for Mixing and Separation of the Two-Phase Flows

Authors: N. B. Fedosenko, A.A Iatcenko, S.A. Levanov

Abstract:

The paper shows how two-phase flows can be managed by exploiting unsteady effects. In the first case, we consider the conditions under which fragmentation of the interface between the two components leads to intensified mixing. The problem is solved for the case when the temporal and linear scales are too small for a developed mixing layer to appear. We show that there exist conditions on the unsteady flow velocity at the channel surface that lead to the creation and fragmentation of vortices at Reynolds numbers of order unity. We also show that Re is not a similarity criterion for this type of flow, but a criterion can be introduced that depends on both Re and the vortex splitting frequency. A feature of this situation is that the streamlines behave stably, while the interface between the components exhibits all the properties of an unstable flow. In the second problem, we consider the behavior of solid impurities in an extensive system of channels. An unsteady periodic flow modeling breathing is simulated, and the behavior of the particles is followed along their trajectories. It is shown that, depending on the mass and diameter of the particles, they can collect in a caustic on the channel walls, stop in a certain place, or fly back. Of interest is the distribution of particle velocity as a function of frequency. It turns out that by choosing the behavior of the carrier gas velocity field, the trajectories of individual particles can be influenced, including forcing them to fly back.

Keywords: Two-phase, mixing, separating, flow control

87 Multi-Criteria Decision Analysis in Planning of Asbestos-Containing Waste Management

Authors: E. Bruno, F. Lacarbonara, M. C. Placentino, D. Gramegna

Abstract:

Environmental decision making, particularly regarding hazardous waste management, is inherently exposed to a high potential for conflict, principally because of the trade-offs between socio-political, environmental, health and economic factors. The need to plan in complex contexts has led to an increasing demand for decision-analytic techniques to support the decision process. In this work, alternative systems for asbestos-containing waste (ACW) management in Puglia (Southern Italy) were explored by multi-criteria decision analysis. In particular, through the Analytic Hierarchy Process, five management alternatives were compared and ranked according to their performance and efficiency, taking into account environmental, health and socio-economic aspects. Separate evaluations were performed for different temporal scales. For the short term, the results showed a narrow deviation between the disposal alternatives "mono-material landfill in public quarry" and "dedicated cells in existing landfill", with the best performance for the first. For the long term, "treatment plant to eliminate the hazard from asbestos-containing waste" prevailed, although a high energy demand is required to achieve the change of crystalline structure. A comparison with results from a participative approach to the valuation process might be considered as a future development of the method's application to ACW management.
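
A minimal sketch of the Analytic Hierarchy Process step described, assuming a made-up 3x3 pairwise comparison matrix rather than the study's judgements: priority weights come from the principal eigenvector, and the consistency index checks the judgements.

    import numpy as np

    A = np.array([[1.0,   3.0, 5.0],
                  [1/3.0, 1.0, 2.0],
                  [1/5.0, 1/2.0, 1.0]])        # Saaty-scale pairwise comparisons (illustrative)

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                               # priority weights of the alternatives
    lam_max = eigvals.real[k]
    ci = (lam_max - len(A)) / (len(A) - 1)     # consistency index
    print("weights:", np.round(w, 3), "CI:", round(float(ci), 3))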

Keywords: Multi-criteria decision analysis, Hazardous waste management, Asbestos.

86 An Efficient Backward Semi-Lagrangian Scheme for Nonlinear Advection-Diffusion Equation

Authors: Soyoon Bak, Sunyoung Bu, Philsu Kim

Abstract:

In this paper, a backward semi-Lagrangian scheme combined with the second-order backward difference formula is designed to calculate numerical solutions of nonlinear advection-diffusion equations. The primary aims of this paper are to remove any iteration process and to obtain an efficient algorithm with second-order convergence in time. In order to achieve these objectives, we use a second-order central finite difference for the diffusion term and B-spline approximations of degree 2 and 3 for the spatial discretization. For the temporal discretization, the second-order backward difference formula is applied. To calculate the numerical solution at the starting point of the characteristic curves, we use the error correction methodology recently developed by the authors. The proposed algorithm turns out to be completely iteration-free, which resolves the main weakness of the conventional backward semi-Lagrangian method. The adaptability of the proposed method is also indicated by numerical simulations for Burgers’ equations. Throughout these numerical simulations, it is shown that the numerical results are in good agreement with the analytic solution and that the present scheme offers better accuracy in comparison with other existing numerical schemes.

Keywords: Semi-Lagrangian method, Iteration free method, Nonlinear advection-diffusion equation.

85 Continuous Feature Adaptation for Non-Native Speech Recognition

Authors: Y. Deng, X. Li, C. Kwan, B. Raj, R. Stern

Abstract:

The current speech interfaces in many military applications may be adequate for native speakers. However, the recognition rate drops considerably for non-native speakers (people with foreign accents). This is mainly because non-native speakers exhibit large temporal and intra-phoneme variations when they pronounce the same words. The problem is further complicated by the presence of strong environmental noise such as tank noise, helicopter noise, etc. In this paper, we propose a novel continuous acoustic feature adaptation algorithm for on-line accent and environmental adaptation. Implemented by incremental singular value decomposition (SVD), the algorithm captures local acoustic variation and runs in real time. This feature-based adaptation method is then integrated with the conventional model-based maximum likelihood linear regression (MLLR) algorithm. Extensive experiments have been performed on the NATO non-native speech corpus with a baseline acoustic model trained on native American English. The proposed feature-based adaptation algorithm improved the average recognition accuracy by 15%, while the MLLR model-based adaptation achieved an 11% improvement. The corresponding word error rate (WER) reductions were 25.8% and 2.73%, as compared to the system without adaptation. The combined adaptation achieved an overall recognition accuracy improvement of 29.5% and a WER reduction of 31.8%, as compared to the system without adaptation.

Keywords: Speaker adaptation, environment adaptation, robust speech recognition, SVD, non-native speech recognition.

84 Content and Resources based Mobile and Wireless Video Transcoding

Authors: Ashraf M. A. Ahmad

Abstract:

Delivering streaming video over wireless networks is an important component of many interactive multimedia applications running on personal wireless handset devices. Such personal devices have to be inexpensive, compact, and lightweight, but wireless channels have a high bit error rate and limited bandwidth. Delay variation of packets due to network congestion and the high bit error rate greatly degrade the quality of video at the handheld device. Mobile access to multimedia content therefore requires video transcoding functionality at the edge of the mobile network for interworking with heterogeneous networks and services, and to guarantee the quality of service (QoS) delivered to the mobile user, a robust and efficient transcoding scheme should be deployed in the mobile multimedia transport network. This paper examines the challenges and limitations that video transcoding schemes face in mobile multimedia transport networks. A mobile and wireless video transcoding scheme based on handheld resources, network conditions and content is then proposed to provide high-QoS applications. Exceptional performance is demonstrated in the experimental results. The experiments were designed to verify and prove the robustness of the proposed approach: extensive experiments have been conducted, and results for various video clips with different bit rates and frame rates are provided.

Keywords: Content, Object detection, Transcoding, Texture, Temporal, Video.

83 Evaluating Machine Learning Techniques for Activity Classification in Smart Home Environments

Authors: Talal Alshammari, Nasser Alshammari, Mohamed Sedky, Chris Howard

Abstract:

With the widespread adoption of Internet-connected devices and the prevalence of Internet of Things (IoT) applications, there is an increased interest in machine learning techniques that can provide useful and interesting services in the smart home domain. The areas that machine learning techniques can help advance are varied and ever-evolving; classifying smart home inhabitants’ Activities of Daily Living (ADLs) is one prominent example. The ability of a machine learning technique to find meaningful spatio-temporal relations in high-dimensional data is an important requirement as well. This paper presents a comparative evaluation of state-of-the-art machine learning techniques for classifying ADLs in the smart home domain. Forty-two synthetic datasets and two real-world datasets with multiple inhabitants are used to evaluate and compare the performance of the identified machine learning techniques. Our results show significant performance differences between the evaluated techniques, which include AdaBoost, the Cortical Learning Algorithm (CLA), decision trees, the Hidden Markov Model (HMM), the Multi-layer Perceptron (MLP), the structured perceptron and Support Vector Machines (SVM). Overall, neural network based techniques showed superiority over the other tested techniques.
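
A minimal sketch of the kind of comparison reported, assuming scikit-learn implementations of several of the listed classifiers and a synthetic stand-in for the smart-home feature matrix (CLA, HMM and the structured perceptron are omitted because they require dedicated libraries):

    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    # Synthetic multi-class "ADL" data: 4 activity classes, 20 sensor-derived features
    X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                               n_classes=4, random_state=0)

    models = {"AdaBoost": AdaBoostClassifier(random_state=0),
              "Decision tree": DecisionTreeClassifier(random_state=0),
              "MLP": MLPClassifier(max_iter=2000, random_state=0),
              "SVM": SVC()}

    for name, clf in models.items():
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name:14s} mean accuracy = {scores.mean():.3f}")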

Keywords: Activities of daily living, classification, internet of things, machine learning, smart home.

82 Evaluating Urban Land Expansion Using Geographic Information System and Remote Sensing in Kabul City, Afghanistan

Authors: Ahmad Sharif Ahmadi, Yoshitaka Kajita

Abstract:

With massive population expansion and fast economic development in the last decade, urban land in Kabul city has increasingly expanded and formed large areas of informal development. This paper investigates integrated urbanization trends in Kabul city since the formation of the basic structure of the present city, using GIS and remote sensing. The study explores the spatial and temporal differences in urban land expansion and land use categories between the time intervals 1964-1978 and 1978-2008. Furthermore, the goal of this paper is to understand the extent of urban land expansion and the factors driving it in Kabul city. Many factors, such as population expansion, the return of refugees from neighboring countries and the significant economic growth of the city, affected urban land expansion. Across the study area, the urban land expansion rate, population expansion rate and economic growth rate have been compared to analyze the relationship between the driving forces and urban land expansion. Based on urban land change data detected by interpreting land use maps, it was found that across the entire study area the urban territory expanded 14-fold between 1964 and 2008.

Keywords: GIS, Kabul city, land use, urban land expansion, urbanization.

81 Delamination Fracture Toughness Benefits of Inter-Woven Plies in Composite Laminates Produced through Automated Fibre Placement

Authors: Jayden Levy, Garth M. K. Pearce

Abstract:

An automated fibre placement method has been developed to build through-thickness reinforcement into carbon fibre reinforced plastic laminates during their production, with the goal of increasing delamination fracture toughness while circumventing the additional costs and defects imposed by post-layup stitching and z-pinning. Termed ‘inter-weaving’, the method uses custom placement sequences of thermoset prepreg tows to distribute regular fibre link regions in traditionally clean ply interfaces. Inter-weaving’s impact on mode I delamination fracture toughness was evaluated experimentally through double cantilever beam tests (ASTM standard D5528-13) on [±15°]9 laminates made from Park Electrochemical Corp. E-752-LT 1/4” carbon fibre prepreg tape. Unwoven and inter-woven automated fibre placement samples were compared to those of traditional laminates produced from standard uni-directional plies of the same material system. Unwoven automated fibre placement laminates were found to suffer a mostly constant 3.5% decrease in mode I delamination fracture toughness compared to flat uni-directional plies. Inter-weaving caused significant local fracture toughness increases (up to 50%), though these were offset by a matching overall reduction. These positive and negative behaviours of inter-woven laminates were respectively found to be caused by fibre breakage and matrix deformation at inter-weave sites, and the 3D layering of inter-woven ply interfaces providing numerous paths of least resistance for crack propagation.
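
A minimal sketch of the data reduction typically used with ASTM D5528 DCB tests (modified beam theory), assuming illustrative load, opening-displacement and geometry values rather than the paper's measurements:

    def g1_mbt(load_N, opening_m, width_m, crack_length_m, delta_corr_m=0.0):
        """Mode I strain energy release rate (J/m^2): G_I = 3*P*delta / (2*b*(a + |Delta|))."""
        return 3.0 * load_N * opening_m / (2.0 * width_m * (crack_length_m + abs(delta_corr_m)))

    # Hypothetical DCB data point: 60 N load, 4 mm opening, 20 mm wide specimen, 50 mm crack
    print(g1_mbt(load_N=60.0, opening_m=0.004, width_m=0.020, crack_length_m=0.050))  # -> 360 J/m^2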

Keywords: AFP, automated fibre placement, delamination, fracture toughness, inter-weaving.

80 Numerical Simulation of Inviscid Transient Flows in Shock Tube and its Validations

Authors: Al-Falahi Amir, Yusoff M. Z, Yusaf T

Abstract:

The aim of this paper is to develop a new two-dimensional, time-accurate Euler solver for shock tube applications. The solver was developed to study the performance of a newly built short-duration hypersonic test facility at Universiti Tenaga Nasional (UNITEN) in Malaysia. The facility has been designed, built, and commissioned for different diaphragm pressure ratios in order to obtain a wide range of Mach numbers. The developed solver uses second-order accurate cell-vertex finite volume spatial discretization and fourth-order accurate Runge-Kutta temporal integration, and it is designed to simulate the flow process for similar driver/driven gases (e.g., air-air as working fluids). The solver is validated against the analytical solution and experimental measurements in the high-speed flow test facility. Further investigations were made of the flow process inside the shock tube using the solver. The shock wave motion, reflection and interaction were investigated and their influence on the performance of the shock tube was determined. The results provide very good estimates for both the shock speed and the shock pressure obtained after diaphragm rupture. Detailed information on the gasdynamic processes over the full length of the facility is also available. The agreement obtained is reasonable.

Keywords: shock tunnel, shock tube, shock wave, CFD.

79 High Order Accurate Runge Kutta Nodal Discontinuous Galerkin Method for Numerical Solution of Linear Convection Equation

Authors: Faheem Ahmed, Fareed Ahmed, Yongheng Guo, Yong Yang

Abstract:

This paper deals with a high-order accurate Runge-Kutta Discontinuous Galerkin (RKDG) method for the numerical solution of the wave equation, one of the simplest cases of a linear hyperbolic partial differential equation. The nodal DG method is used for the finite element space discretization in x with discontinuous approximations. This method combines two key ideas based on the finite volume and finite element methods: the physics of wave propagation is accounted for by means of Riemann problems, and accuracy is obtained by means of high-order polynomial approximations within the elements. A high-order accurate Low Storage Explicit Runge-Kutta (LSERK) method is used for the temporal discretization in t, which allows the method to be nonlinearly stable regardless of its accuracy. The resulting RKDG methods are stable and high-order accurate. The L1, L2 and L∞ error norm analysis shows that the scheme is highly accurate and effective. Hence, the method is well suited to achieving high-order accurate solutions of the scalar wave equation and other hyperbolic equations.

Keywords: Nodal Discontinuous Galerkin Method, RKDG, Scalar Wave Equation, LSERK

78 FSM-based Recognition of Dynamic Hand Gestures via Gesture Summarization Using Key Video Object Planes

Authors: M. K. Bhuyan

Abstract:

The use of the human hand as a natural interface for human-computer interaction (HCI) serves as the motivation for research in hand gesture recognition. Vision-based hand gesture recognition involves visual analysis of hand shape, position and/or movement. In this paper, we use the concept of object-based video abstraction to segment the frames into video object planes (VOPs), as used in MPEG-4, with each VOP corresponding to one semantically meaningful hand position. Next, the key VOPs are selected on the basis of the amount of change in hand shape: for a given key frame in the sequence, the next key frame is the one in which the hand changes its shape significantly. Thus, an entire video clip is transformed into a small number of representative frames that are sufficient to represent a gesture sequence. Subsequently, we model a particular gesture as a sequence of key frames, each bearing information about its duration; these constitute a finite state machine. For recognition, the states of the incoming gesture sequence are matched with the states of all the different FSMs contained in the database of the gesture vocabulary. The core idea of the proposed representation is that redundant frames of the gesture video sequence bear only the temporal information of a gesture and hence are discarded for computational efficiency. Experimental results obtained demonstrate the effectiveness of the proposed scheme for key frame extraction, subsequent gesture summarization and finally gesture recognition.

Keywords: Hand gesture, MPEG-4, Hausdorff distance, finite state machine.
