Search results for: real output
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7112

5372 Monitoring of Spectrum Usage and Signal Identification Using Cognitive Radio

Authors: O. S. Omorogiuwa, E. J. Omozusi

Abstract:

The monitoring of spectrum usage and signal identification, using cognitive radio, is done to identify frequencies that are vacant for reuse. It has been established that 'internet of things' devices use secondary frequencies, which are free, and therefore face the challenge of interference from other users, even while some primary frequencies are not being utilised. The design was done by analysing a specific frequency spectrum, checking whether all the frequency stations in the range 87.5-108 MHz are presently being used in Benin City, Edo State, Nigeria. From the results, it was observed that, using Software Defined Radio/Simulink, we were able to identify vacant frequencies in the band under consideration. We were also able to apply an energy detection threshold to reuse the vacant spectrum: when the cognitive radio displays a zero output (decision H0), the channel is unoccupied. Hence, the analysis was able to find the spectrum holes and identify how they can be reused.
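
The energy-detection decision described above reduces to comparing measured band energy against a threshold and declaring H0 (vacant) or H1 (occupied). The snippet below is a minimal sketch with hypothetical signal and threshold values, not the authors' Simulink model.

```python
import numpy as np

def energy_detect(samples, threshold):
    """Return 0 (H0, channel vacant) or 1 (H1, channel occupied)
    by comparing the measured signal energy against a threshold."""
    energy = np.mean(np.abs(samples) ** 2)
    return int(energy > threshold)

# Hypothetical illustration: noise-only channel vs. an occupied FM channel
rng = np.random.default_rng(0)
noise = rng.normal(0, 1, 4096)                                    # vacant: noise only
signal = noise + 2.0 * np.sin(2 * np.pi * 0.1 * np.arange(4096))  # occupied

threshold = 2.0  # chosen above the expected noise energy (variance = 1)
print(energy_detect(noise, threshold))   # 0 -> decision H0, spectrum hole
print(energy_detect(signal, threshold))  # 1 -> decision H1, channel in use
```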

Keywords: spectrum, interference, telecommunication, cognitive radio, frequency

Procedia PDF Downloads 224
5371 Effects of Sprint Training on Athletic Performance Related Physiological, Cardiovascular, and Neuromuscular Parameters

Authors: Asim Cengiz, Dede Basturk, Hakan Ozalp

Abstract:

Practicing recurring resistance workouts such as sprint training may cause changes in human muscle. These changes may be due to a combination of several factors that determine physical fitness, so it is important to identify them. Several studies were reviewed to investigate these changes. The changes included positive modifications such as amplified maximal citrate synthase (CS) activity, increased capacity for pyruvate oxidation, improved molecular signaling related to human performance, amplified resting muscle glycogen and total GLUT4 protein content, and better health outcomes such as enhanced cardiorespiratory fitness. Sprint training also produces numerous long-term changes in the human body, such as better enzyme action and changes in muscle fiber composition and oxidative ability. This is important because stroke volume (SV) is the critical factor influencing maximal cardiac output, and therefore oxygen delivery and maximal aerobic power.

Keywords: sprint, training, performance, exercise

Procedia PDF Downloads 303
5370 Compact Low-Voltage Biomedical Instrumentation Amplifiers

Authors: Phanumas Khumsat, Chalermchai Janmane

Abstract:

A low-voltage instrumentation amplifier has been proposed for a 3-lead electrocardiogram measurement system. The circuit's interference rejection technique is based upon common-mode feed-forwarding, where the common-mode currents cancel each other at the output nodes. The common-mode current for cancellation is generated by means of common-mode sensing and an emitter or source follower with resistors, employing only one transistor. Simultaneously, this particular transistor also provides common-mode feedback to the patient's right/left leg to further reduce interference entering the amplifier. The proposed designs have been verified with simulations in a 0.18-µm CMOS process operating under a 1.0-V supply, with CMRR greater than 80 dB. Moreover, ECG signals have been experimentally recorded with the proposed instrumentation amplifiers implemented from discrete BJT (BC547, BC558) and MOSFET (ALD1106, ALD1107) transistors working with a 1.5-V supply.
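
The reported CMRR figure relates the differential gain to the common-mode gain as CMRR = 20·log10|Ad/Acm|. A small sketch with hypothetical gain values:

```python
import math

def cmrr_db(a_diff, a_cm):
    """Common-mode rejection ratio in dB from differential and common-mode gains."""
    return 20 * math.log10(abs(a_diff / a_cm))

# Hypothetical: a differential gain of 1000 with a common-mode gain of 0.05
print(f"CMRR = {cmrr_db(1000, 0.05):.1f} dB")  # ~86 dB, above the 80 dB reported
```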

Keywords: electrocardiogram, common-mode feedback, common-mode feedforward, communication engineering

Procedia PDF Downloads 384
5369 Bank, Stock Market Efficiency and Economic Growth: Lessons for ASEAN-5

Authors: Tan Swee Liang

Abstract:

This paper estimates the associations of bank and stock market efficiency with real per capita GDP growth by examining panel data across three different regions, using the Panel-Corrected Standard Errors (PCSE) regression developed by Beck and Katz (1995). Data from five economies in ASEAN (Singapore, Malaysia, Thailand, the Philippines, and Indonesia), five economies in Asia (Japan, China, Hong Kong SAR, South Korea, and India) and seven economies in the OECD (Australia, Canada, Denmark, Norway, Sweden, the United Kingdom, and the United States), between 1990 and 2017, are used. The empirical findings suggest: one, for Asia-5, a high bank net interest margin means greater bank profitability, hence spurring economic growth. Two, for OECD-7, low bank overhead costs (as a share of total assets) may reflect weak competition and weak investment in providing superior banking services, hence dampening economic growth. Three, the stock market turnover ratio has a negative association with OECD-7 economic growth but a positive association with Asia-5 growth, which suggests the relationship between liquidity and growth is ambiguous. Lastly, for ASEAN-5, high bank overhead costs (as a share of total assets) may suggest that expenses have not been channelled efficiently to income-generating activities. One practical implication of the findings is that policy makers should take necessary measures toward financial liberalisation policies that boost growth through the efficiency channel, so that funds are efficiently allocated through the financial system between the financial and real sectors.
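
A minimal numpy sketch of PCSE estimation for a balanced panel, in the spirit of Beck and Katz (1995): pooled OLS coefficients with a sandwich covariance built from the cross-sectional residual covariance. The data and dimensions below are hypothetical, not the study's dataset.

```python
import numpy as np

def pcse(X, y, n_units, n_periods):
    """Panel-corrected standard errors (Beck & Katz, 1995) for a balanced
    panel whose rows are ordered unit-by-unit, each unit spanning n_periods rows."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # pooled OLS
    e = (y - X @ beta).reshape(n_units, n_periods)
    sigma = e @ e.T / n_periods                          # cross-sectional residual covariance
    omega = np.kron(sigma, np.eye(n_periods))            # Omega = Sigma (x) I_T
    bread = np.linalg.inv(X.T @ X)
    cov = bread @ X.T @ omega @ X @ bread                # sandwich estimator
    return beta, np.sqrt(np.diag(cov))

# Hypothetical toy panel: 5 units x 28 years, 3 regressors plus a constant
rng = np.random.default_rng(1)
N, T, k = 5, 28, 3
X = np.column_stack([np.ones(N * T), rng.normal(size=(N * T, k))])
y = X @ np.array([0.5, 1.0, -0.3, 0.2]) + rng.normal(size=N * T)
beta, se = pcse(X, y, N, T)
print(beta, se)
```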

Keywords: financial development, banking system, capital markets, economic growth

Procedia PDF Downloads 139
5368 Piezoelectric and Dielectric Properties of Poly(Vinylidenefluoride-Hexafluoropropylene)/ZnO Nanocomposites

Authors: P. Hemalatha, Deepalekshmi Ponnamma, Mariam Al Ali Al-Maadeed

Abstract:

Poly(vinylidenefluoride-hexafluoropropylene) (PVDF-HFP)/zinc oxide (ZnO) nanocomposite films were successfully prepared by mixing fine ZnO particles into a PVDF-HFP solution, followed by film casting and sandwich techniques. The zinc oxide nanoparticles were synthesized by the hydrothermal method. Fourier transform infrared (FT-IR) spectroscopy, X-ray diffraction (XRD) and scanning electron microscopy (SEM) were used to characterize the structure and properties of the obtained nanocomposites. The dielectric properties of the PVDF-HFP/ZnO nanocomposites were analyzed in detail. In comparison with pure PVDF-HFP, the dielectric constant of the nanocomposite (1 wt% ZnO) was significantly improved. The piezoelectric coefficients of the nanocomposite films were measured. The experimental results revealed the influence of the filler on the properties of PVDF-HFP; the enhancement in output performance and dielectric properties reflects the material's potential for energy storage applications.

Keywords: dielectric constant, hydrothermal, nanoflowers, organic compounds

Procedia PDF Downloads 286
5367 Selective Extraction of Lithium from Native Geothermal Brines Using Lithium-ion Sieves

Authors: Misagh Ghobadi, Rich Crane, Karen Hudson-Edwards, Clemens Vinzenz Ullmann

Abstract:

Lithium is recognized as the critical energy metal of the 21st century, comparable in importance to coal in the 19th century and oil in the 20th century, and is often termed 'white gold'. Current global demand for lithium, estimated at 0.95-0.98 million metric tons (Mt) of lithium carbonate equivalent (LCE) annually in 2024, is projected to rise to 1.87 Mt by 2027 and 3.06 Mt by 2030. Despite anticipated short-term stability in supply and demand, meeting the forecasted 2030 demand will require the lithium industry to develop an additional capacity of 1.42 Mt of LCE annually, exceeding current planned and ongoing efforts. Brine resources constitute nearly 65% of global lithium reserves, underscoring the importance of exploring lithium recovery from underutilized sources, especially geothermal brines. However, conventional lithium extraction from brine deposits faces challenges due to its time-intensive process, low efficiency (30-50% lithium recovery), unsuitability for low lithium concentrations (<300 mg/l), and notable environmental impacts. Addressing these challenges, direct lithium extraction (DLE) methods have emerged as promising technologies capable of economically extracting lithium even from low-concentration brines (>50 mg/l) with high recovery rates (75-98%). However, most studies (70%) have focused predominantly on synthetic rather than native (natural) brines, with limited application of these approaches in real-world case studies or industrial settings. This study aims to bridge this gap by investigating a geothermal brine sample collected from a real case-study site in the UK. A Mn-based lithium-ion sieve (LIS) adsorbent was synthesized and employed to selectively extract lithium from the sample brine. Adsorbents with a Li:Mn molar ratio of 1:1 demonstrated superior lithium selectivity and adsorption capacity. Furthermore, the pristine Mn-based adsorbent was modified through transition-metal doping, resulting in enhanced lithium selectivity and adsorption capacity. The modified adsorbent exhibited a higher separation factor for lithium over major co-existing cations such as Ca, Mg, Na, and K, with separation factors exceeding 200. The adsorption behaviour was well described by the Langmuir model, indicating monolayer adsorption, and the kinetics followed a pseudo-second-order mechanism, suggesting chemisorption at the solid surface. Thermodynamically, negative ΔG° values and positive ΔH° and ΔS° values were observed, indicating the spontaneity and endothermic nature of the adsorption process.
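
A minimal scipy sketch of fitting the Langmuir isotherm named above; the equilibrium data here are hypothetical, not the study's measurements. The pseudo-second-order kinetic model (also included) can be fitted the same way.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: monolayer capacity qmax (mg/g), affinity KL (L/mg)."""
    return qmax * KL * Ce / (1 + KL * Ce)

def pseudo_second_order(t, qe, k2):
    """Pseudo-second-order kinetics, commonly taken to indicate chemisorption."""
    return k2 * qe**2 * t / (1 + k2 * qe * t)

# Hypothetical equilibrium data: Ce in mg/L, qe in mg/g (not the paper's data)
Ce = np.array([5, 10, 25, 50, 100, 200], dtype=float)
qe = np.array([8.2, 13.5, 21.0, 26.3, 30.1, 32.4])
(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[30, 0.05])
print(f"qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")
```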

Keywords: adsorption, critical minerals, DLE, geothermal brines, geochemistry, lithium, lithium-ion sieves

Procedia PDF Downloads 46
5366 Exergy Analysis of a Green Dimethyl Ether Production Plant

Authors: Marcello De Falco, Gianluca Natrella, Mauro Capocelli

Abstract:

CO₂ capture and utilization (CCU) is a promising approach to reduce GHG (greenhouse gas) emissions, and many technologies in this field have recently been attracting attention. However, since CO₂ is a very stable compound, its utilization as a reagent is energy intensive. As a consequence, it is unclear whether CCU processes allow for a net reduction of environmental impacts from a life cycle perspective and whether these solutions are sustainable. Among the tools for quantifying the real environmental benefits of CCU technologies, exergy analysis is the most rigorous from a scientific point of view. The exergy of a system is the maximum obtainable work during a process that brings the system into equilibrium with its reference environment through a series of reversible processes in which the system can only interact with that environment. In other words, exergy is an 'opportunity for doing work' and, in real processes, it is destroyed by entropy generation. Exergy-based analysis is useful for evaluating the thermodynamic inefficiencies of processes, for understanding and locating the main consumption of fuels or primary energy, for providing an instrument of comparison among different process configurations, and for detecting solutions that reduce the energy penalties of a process. In this work, the exergy analysis of a process for the production of Dimethyl Ether (DME) from green hydrogen, generated through an electrolysis unit, and pure CO₂, captured from flue gas, is performed. The model simulates the behavior of all units composing the plant (electrolyzer, carbon capture section, DME synthesis reactor, purification step), with the scope of quantifying performance indices based on the Second Law of Thermodynamics and identifying the entropy generation points. Then, a plant optimization strategy is proposed to maximize the exergy efficiency.
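
For a flowing stream with negligible kinetic and potential terms, the exergy definition above corresponds to ex = (h − h0) − T0(s − s0). A small sketch with steam-table values for one hypothetical stream:

```python
def flow_exergy(h, s, h0=104.9, s0=0.3672, T0=298.15):
    """Specific flow exergy ex = (h - h0) - T0*(s - s0), in kJ/kg, relative to a
    reference environment at T0 = 298.15 K. The default dead-state values roughly
    correspond to liquid water at 25 C (hypothetical reference choice)."""
    return (h - h0) - T0 * (s - s0)

# Hypothetical stream: superheated steam at ~400 C and 1 MPa (steam-table values)
print(f"{flow_exergy(h=3263.9, s=7.4651):.0f} kJ/kg")  # ~1043 kJ/kg of work potential
```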

Keywords: green DME production, exergy analysis, energy penalties, exergy efficiency

Procedia PDF Downloads 257
5365 Face Recognition Using Eigen Faces Algorithm

Authors: Shweta Pinjarkar, Shrutika Yawale, Mayuri Patil, Reshma Adagale

Abstract:

Face recognition is a technique that can be applied to a wide variety of problems, such as image and film processing, human-computer interaction, and criminal identification. This has motivated researchers to develop computational models to identify faces that are easy and simple to implement. This work demonstrates a face recognition system on an Android device using eigenfaces. The system can be used as the base for the development of human identity recognition. Test images and training images are taken directly with the camera of the Android device. The test results showed that the system produces high accuracy. The goal is to implement a model for a particular face and distinguish it from a large number of stored faces. The face recognition system detects faces in pictures taken by a web camera or digital camera, and these images are then checked against a training image dataset based on descriptive features. The algorithm can further be extended to recognize the facial expressions of a person. Recognition can be carried out under widely varying conditions, such as frontal view, scaled frontal view, and subjects with spectacles. The algorithm models real-time varying lighting conditions. The implemented system is able to perform real-time face detection and face recognition, and can give feedback by displaying a window with the subject's information from the database and sending an e-mail notification to interested institutions using the Android application.
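
A minimal numpy sketch of the eigenfaces idea underlying the system: project mean-centred face images onto the top principal components and match a probe face to the nearest stored weight vector. This is the generic algorithm, not the authors' Android implementation.

```python
import numpy as np

def train_eigenfaces(faces, n_components=20):
    """faces: (n_images, n_pixels) matrix of flattened grayscale face images.
    Returns the mean face and the top eigenfaces (principal components)."""
    mean = faces.mean(axis=0)
    A = faces - mean
    # Trick: eigendecompose the small (n_images x n_images) matrix A A^T
    evals, evecs = np.linalg.eigh(A @ A.T)
    order = np.argsort(evals)[::-1][:n_components]
    eigenfaces = (A.T @ evecs[:, order]).T
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    return mean, eigenfaces

def recognize(face, mean, eigenfaces, gallery_weights):
    """Project a probe face and return the index of the nearest stored face."""
    w = eigenfaces @ (face - mean)
    return int(np.argmin(np.linalg.norm(gallery_weights - w, axis=1)))

# Gallery weights are the projections of the training faces:
#   gallery_weights = (faces - mean) @ eigenfaces.T
```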

Keywords: face detection, face recognition, eigen faces, algorithm

Procedia PDF Downloads 361
5364 Low Cost Inertial Sensors Modeling Using Allan Variance

Authors: A. A. Hussen, I. N. Jleta

Abstract:

Micro-electromechanical system (MEMS) accelerometers and gyroscopes are suitable for the inertial navigation systems (INS) of many applications due to their low price, small dimensions and light weight. The main disadvantage, in comparison with classic sensors, is worse long-term stability. The estimation accuracy is mostly affected by the time-dependent growth of inertial sensor errors, especially the stochastic errors. In order to eliminate the negative effect of these random errors, they must be accurately modeled; the key to successful implementation is how well the noise statistics of the inertial sensors are characterized. In this paper, the Allan variance technique is used to model the stochastic errors of the inertial sensors. By performing a simple operation on the entire length of data, a characteristic curve is obtained whose inspection provides a systematic characterization of the various random errors contained in the inertial-sensor output data.
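
A minimal sketch of that 'simple operation on the entire length of data': average the signal over clusters of growing length, and take the Allan variance as half the mean squared difference of successive cluster means. The gyro data below are simulated, not a real sensor log.

```python
import numpy as np

def allan_deviation(data, fs, m_list):
    """Non-overlapping Allan deviation of a rate signal sampled at fs Hz,
    for a list of cluster sizes m (averaging times tau = m / fs)."""
    taus, adevs = [], []
    for m in m_list:
        n_clusters = len(data) // m
        if n_clusters < 2:
            break
        means = data[:n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(means) ** 2)  # sigma^2(tau)
        taus.append(m / fs)
        adevs.append(np.sqrt(avar))
    return np.array(taus), np.array(adevs)

# Hypothetical gyro output: white noise around a constant bias, sampled at 100 Hz
rng = np.random.default_rng(2)
gyro = 0.01 + 0.05 * rng.standard_normal(100_000)
taus, adevs = allan_deviation(gyro, fs=100, m_list=[2**k for k in range(1, 12)])
# On a log-log plot, a slope of -1/2 in this curve identifies angle random walk
```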

Keywords: Allan variance, accelerometer, gyroscope, stochastic errors

Procedia PDF Downloads 442
5363 Consequences of Transformation of Modern Monetary Policy during the Global Financial Crisis

Authors: Aleksandra Szunke

Abstract:

Monetary policy is an important pillar of the economy, directly affecting the condition of the banking sector. Depending on the strategy, it may both support the functioning of banking institutions and limit their excessively risky activities. The literature contains a large number of publications characterizing the initiatives implemented by central banks during the global financial crisis and the potential effects of the use of non-standard monetary policy instruments. However, the empirical evidence about their effects and real consequences for the financial markets is still not conclusive. Even before the escalation of instability, Bernanke, Reinhart, and Sack (2004) analyzed the effectiveness of various unconventional monetary tools in lowering long-term interest rates in the United States and Japan. The obtained results largely confirmed the effectiveness of the zero-interest-rate policy and Quantitative Easing (QE) in achieving the goal of reducing long-term interest rates. Japan, considered the precursor of QE policy, also conducted research on the consequences of the non-standard instruments implemented to restore the financial stability of the country. Although the literature about the effectiveness of Quantitative Easing in Japan is extensive, it does not uniquely specify whether it brought permanent effects. The main aim of the study is to identify the implications of the non-standard monetary policy implemented by selected central banks (the Federal Reserve System, the Bank of England and the European Central Bank), paying particular attention to the consequences in three areas: the size of the money supply, the financial markets, and the real economy.

Keywords: consequences of modern monetary policy, quantitative easing policy, banking sector instability, global financial crisis

Procedia PDF Downloads 478
5362 The Underestimation of Cultural Risk in the Execution of Megaprojects

Authors: Alan Walsh, Peter Walker, Michael Ellis

Abstract:

There is a real danger that both practitioners and researchers considering risks associated with megaprojects ignore or underestimate the impacts of cultural risk. The paper investigates the potential impacts of a failure to achieve cultural unity between the principal actors executing a megaproject. The principal relationships include those between the principal contractors and the project stakeholders, or between the project stakeholders and their principal advisors, Western consultants. This study confirms that cultural dissonance between these parties can delay or disrupt megaproject execution and examines why cultural issues should be prioritized as a significant risk factor in megaproject delivery. This paper addresses the practical impacts and potential mitigation measures that may reduce cultural dissonance in a megaproject's delivery. This information is retrieved from ongoing case studies in live infrastructure megaprojects in Europe and the Middle East's GCC states, from Western consultants' perspective. The collaborating researchers each have at least 30 years of construction experience and are engaged in architecture, project management and contracts management, dealing with megaprojects in Europe or the GCC. After examining the cultural interfaces they have observed during the execution of megaprojects, they conclude that, globally, culture significantly influences their efficient delivery. The study finds that cultural risk is ever-present where different nationalities co-manage megaprojects, and that cultural conflict poses a real threat to the timely delivery of megaprojects. The study indicates that the higher the cultural distance between the principal actors, the more pronounced the risk, with the risk of cultural dissonance more prominent in GCC megaprojects. The findings support a more culturally aware and cohesive team approach and recommend cross-cultural training to mitigate the effects of cultural disparity.

Keywords: cultural risk underestimation, cultural distance, megaproject characteristics, megaproject execution

Procedia PDF Downloads 106
5361 Branding: A Powerful Catalyst for Rural Economic Development

Authors: Mojtaba Borhani

Abstract:

By employing the unique characteristics of a region, its economy, climate, geography, and culture, rural communities can create distinctive products. This approach not only boosts economic opportunities but also promotes sustainable growth and preserves cultural heritage. A strategic focus on branding and intellectual property (IP) is essential. By developing strong brands, rural areas can differentiate their products, increase their market value, and build consumer loyalty. Moreover, IP protection safeguards the creative and innovative output of rural communities, incentivizing further development. Rural branding can serve as a cornerstone for community empowerment. It can help to prevent rural exodus by providing economic incentives and a strong sense of place. Additionally, by protecting traditional knowledge and cultural expressions, branding contributes to the long-term sustainability of rural livelihoods.

Keywords: intellectual property, regional branding, sustainable development, rural economy

Procedia PDF Downloads 25
5360 Adaptive Filtering in Subbands for Supervised Source Separation

Authors: Bruna Luisa Ramos Prado Vasques, Mariane Rembold Petraglia, Antonio Petraglia

Abstract:

This paper investigates MIMO (Multiple-Input Multiple-Output) adaptive filtering techniques for the application of supervised source separation in the context of convolutive mixtures. From the observation that there is correlation among the signals of the different mixtures, an improvement in the NSAF (Normalized Subband Adaptive Filter) algorithm is proposed in order to accelerate its convergence rate. Simulation results with mixtures of speech signals in reverberant environments show the superior performance of the proposed algorithm with respect to the performances of the NLMS (Normalized Least-Mean-Square) and conventional NSAF, considering both the convergence speed and SIR (Signal-to-Interference Ratio) after convergence.
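
For reference, a minimal sketch of the fullband NLMS baseline against which the proposed method is compared; NSAF applies the same normalized update within each subband after analysis filtering. The parameters below are illustrative.

```python
import numpy as np

def nlms(x, d, n_taps=64, mu=0.5, eps=1e-8):
    """Fullband NLMS adaptive filter: identifies the path from input x to
    desired signal d. NSAF applies this normalized update per subband."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]          # most recent samples first
        y[n] = w @ u
        e = d[n] - y[n]
        w += mu * e * u / (u @ u + eps)    # step normalized by input power
    return w, y
```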

Keywords: adaptive filtering, multi-rate processing, normalized subband adaptive filter, source separation

Procedia PDF Downloads 435
5359 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record

Authors: Raghavi C. Janaswamy

Abstract:

In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the Electronic Health Record (EHR). The data, in the form of clear text and images, are stored or processed in a relational format in most systems. However, the intrinsic structure restrictions and complex joins of relational databases limit their widespread utility. In this regard, the design and development of realistic mappings and deep connections as real-time objects offer unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. Patient conditions have been predicted as a node classification task using an open-source, graph-based EHR dataset, the Synthea database, stored in TigerGraph. The Synthea dataset is leveraged because it closely represents real-world data and is voluminous. The graph model is built from the heterogeneous EHR data using Python modules: pyTigerGraph to get nodes and edges from the TigerGraph database, PyTorch to tensorize the nodes and edges, and PyTorch Geometric (PyG) to train the Graph Neural Network (GNN), adopting self-supervised learning techniques with autoencoders to generate the node embeddings and eventually performing the node classification using those embeddings. The model predicts patient conditions ranging from common to rare situations. The outcome is deemed to open up opportunities for data querying toward better predictions and accuracy.
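
A minimal PyTorch Geometric sketch of the node-classification stage; the layer sizes and the use of GCNConv are illustrative assumptions, not the authors' exact architecture or their autoencoder-based embedding step.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class ConditionClassifier(torch.nn.Module):
    """Two-layer GCN that classifies patient nodes into condition labels."""
    def __init__(self, num_features, hidden, num_classes):
        super().__init__()
        self.conv1 = GCNConv(num_features, hidden)
        self.conv2 = GCNConv(hidden, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.5, training=self.training)
        return self.conv2(x, edge_index)

# Training-loop sketch, assuming `data` is a PyG Data object built from the
# TigerGraph export (data.x, data.edge_index, data.y, data.train_mask):
#   model = ConditionClassifier(data.num_features, 64, num_classes)
#   opt = torch.optim.Adam(model.parameters(), lr=0.01)
#   loss = F.cross_entropy(model(data.x, data.edge_index)[data.train_mask],
#                          data.y[data.train_mask])
```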

Keywords: electronic health record, graph neural network, heterogeneous data, prediction

Procedia PDF Downloads 86
5358 Reinforcement Learning the Born Rule from Photon Detection

Authors: Rodrigo S. Piera, Jailson Sales Araújo, Gabriela B. Lemos, Matthew B. Weiss, John B. DeBrota, Gabriel H. Aguilar, Jacques L. Pienaar

Abstract:

The Born rule was historically viewed as an independent axiom of quantum mechanics until Gleason derived it in 1957 by assuming the Hilbert space structure of quantum measurements [1]. In subsequent decades there have been diverse proposals to derive the Born rule starting from even more basic assumptions [2]. In this work, we demonstrate that a simple reinforcement-learning algorithm, having no pre-programmed assumptions about quantum theory, will nevertheless converge to a behaviour pattern that accords with the Born rule, when tasked with predicting the output of a quantum optical implementation of a symmetric informationally-complete measurement (SIC). Our findings support a hypothesis due to QBism (the subjective Bayesian approach to quantum theory), which states that the Born rule can be thought of as a normative rule for making decisions in a quantum world [3].
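
A toy simulation (an urn-style reinforcement update, far simpler than the paper's algorithm) showing how an agent with no quantum assumptions can converge on detection frequencies given by the Born rule p_i = |⟨e_i|ψ⟩|²; the state and measurement below are a hypothetical two-outcome stand-in for the SIC.

```python
import numpy as np

rng = np.random.default_rng(3)

# A real qubit state and a 2-outcome projective measurement (toy stand-in)
psi = np.array([np.cos(0.3), np.sin(0.3)])
born = psi ** 2                      # Born-rule outcome probabilities

# Simple reinforcement learner: weights bumped on each detected photon
w = np.ones(2)
for _ in range(20_000):
    outcome = rng.choice(2, p=born)  # simulated photon detection
    w[outcome] += 1                  # reinforce the observed outcome

print(w / w.sum())                   # -> approximately born = [0.913, 0.087]
```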

Keywords: quantum Bayesianism, quantum theory, quantum information, quantum measurement

Procedia PDF Downloads 109
5357 Discursivity and Creativity: Implementing Pigrum's Multi-Mode Transitional Practices in Upper Division Creative Production Courses

Authors: Michael Filimowicz, Veronika Tzankova

Abstract:

This paper discusses the practical implementation of Derek Pigrum's multi-mode model of transitional practices in the context of upper-division production courses in an interaction design curriculum. The notion of teaching creativity directly was connected to a general notion of 'discursivity', by which is meant students' overall ability to discuss, describe, and engage in dialogue about their creative work. We present a study of how Pigrum's transitional modes can be mapped onto a variety of course activities, and discuss the challenges and outcomes of directly engaging student discursivity in their creative output.

Keywords: teaching creativity, multi-mode transitional practices, discursivity, rich dialogue, art and design education, pedagogy

Procedia PDF Downloads 502
5356 Human-Centred Data Analysis Method for Future Design of Residential Spaces: Coliving Case Study

Authors: Alicia Regodon Puyalto, Alfonso Garcia-Santos

Abstract:

This article presents a method to analyze the use of indoor spaces based on data analytics obtained from in-built digital devices. The study uses the data generated by in-place devices, such as smart locks, Wi-Fi routers, and electrical sensors, to gain additional insights into space occupancy, user behaviour, and comfort. These devices, originally installed to facilitate remote operations, report data through the internet, which the research uses to analyze information on the real-time human use of spaces. Using an in-place Internet of Things (IoT) network enables a faster, more affordable, seamless, and scalable solution for analyzing building interior spaces without incorporating external data collection systems such as sensors. The methodology is applied to a real case study of coliving: a residential building of 3,000 m², 7 floors, and 80 users in the centre of Madrid. The case study applies the method to classify the IoT devices and to assess, clean, and analyze the collected data based on the analysis framework. The information is collected remotely through the different devices' platforms. The first step is to curate the data and understand what insights each device can provide according to the objectives of the study; this generates an analysis framework that can be scaled to future building assessments, even beyond the residential sector. The method adjusts the parameters to be analyzed to the dataset available in the IoT network of each building. The research demonstrates how human-centred data analytics can improve the future spatial design of indoor spaces.
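
A minimal pandas sketch of the curation step for one device stream; the file name, column names, and user types are hypothetical placeholders for a smart-lock platform export.

```python
import pandas as pd

# Hypothetical export from the smart-lock platform: one row per door event
events = pd.read_csv("smartlock_events.csv", parse_dates=["timestamp"])

# Curate: drop duplicates and maintenance accounts, keep resident users only
events = (events.drop_duplicates()
                .query("user_type == 'resident'")
                .set_index("timestamp")
                .sort_index())

# Occupancy insight: entries per floor per hour
occupancy = events.groupby("floor").resample("1h").size()
print(occupancy.head())
```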

Keywords: in-place devices, IoT, human-centred data-analytics, spatial design

Procedia PDF Downloads 197
5355 Solving a Micromouse Maze Using an Ant-Inspired Algorithm

Authors: Rolando Barradas, Salviano Soares, António Valente, José Alberto Lencastre, Paulo Oliveira

Abstract:

This article reviews Ant Colony Optimization, a nature-inspired algorithm, and its implementation in the Scratch/m-Block programming environment. Ant Colony Optimization belongs to the Swarm Intelligence-based algorithms, a subset of biologically-inspired algorithms. The starting point is a problem in which one has a maze and needs to find a path to its center and return to the starting position, much as an ant looks for a path to a food source and returns to its nest. Starting with the implementation of a simple wall-follower simulator, the proposed solution uses a dynamic graphical interface that allows young students to observe the ants' movement while the algorithm optimizes the routes to the maze's center. Details such as interface usability, data structures, and the conversion of algorithmic language to Scratch syntax were addressed during this implementation. This gives young students an easier way to understand the computational concepts of sequences, loops, parallelism, data, events, and conditionals, as they are used throughout the implemented algorithms. Future work includes simulations with real contest mazes and two different pheromone update methods, as well as a comparison with the optimized results of the winners of each edition of the contest. It will also include the creation of a Digital Twin relating the virtual simulator to a real micromouse in a full-size maze. The first test results show that the algorithm found the same optimized solutions as the winners of each edition of the Micromouse contest, making this a good solution for maze pathfinding.
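
A minimal sketch of the core Ant Colony Optimization loop on a maze graph: probabilistic path choice weighted by pheromone, then evaporation and deposit. The pheromone dict is assumed initialized to 1.0 for every directed edge; this is illustrative, not the Scratch/m-Block implementation.

```python
import random

def aco_step(graph, pheromone, start, goal, alpha=1.0, rho=0.5, Q=1.0):
    """One ant walk plus pheromone update on a maze graph given as an
    adjacency dict {cell: [neighbour, ...]}. Returns the path found."""
    path, node, visited = [start], start, {start}
    while node != goal and len(path) < 10_000:   # step cap keeps the sketch safe
        options = [n for n in graph[node] if n not in visited] or graph[node]
        weights = [pheromone[(node, n)] ** alpha for n in options]
        node = random.choices(options, weights=weights)[0]
        path.append(node)
        visited.add(node)
    for edge in pheromone:                       # evaporation
        pheromone[edge] *= (1 - rho)
    for a, b in zip(path, path[1:]):             # deposit: shorter paths get more
        pheromone[(a, b)] += Q / len(path)
    return path
```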

Keywords: nature inspired algorithms, scratch, micromouse, problem-solving, computational thinking

Procedia PDF Downloads 126
5354 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest, and any higher resolution is lost in this resampling. When the topographic features are computed through regression performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction takes only a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected so as to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic of the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. The relevant length scale is taken to be half of the window size of the window over which the minimum variance was achieved. The resulting process was evaluated on 1-meter DEM data and on artificial data constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm. The resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 images in ESRI ArcMap for each resolution. In summary, the proposed approach extracts the slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than with existing techniques.
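
A minimal numpy sketch of one building block: fitting a plane z = ax + by + c over each w×w window and returning the slope magnitude and residual variance, evaluated for doubling window sizes. The paper's algorithm additionally aggregates the regression sums additively (one pass per doubling) and selects, per location, the scale with minimal variance; that selection step is only indicated here.

```python
import numpy as np

def plane_slopes(dem, w, cell=1.0):
    """Fit z = a*x + b*y + c over each non-overlapping w x w window;
    return slope magnitude and residual variance per window."""
    n = dem.shape[0] // w
    z = (dem[:n * w, :n * w]
         .reshape(n, w, n, w).transpose(0, 2, 1, 3).reshape(n, n, w * w))
    yy, xx = np.mgrid[:w, :w] * cell
    G = np.column_stack([xx.ravel(), yy.ravel(), np.ones(w * w)])
    coef, *_ = np.linalg.lstsq(G, z.reshape(-1, w * w).T, rcond=None)
    a, b = coef[0].reshape(n, n), coef[1].reshape(n, n)
    resid = z - (G @ coef).T.reshape(n, n, w * w)
    return np.hypot(a, b), resid.var(axis=2)

# Doubling window sizes; a full implementation would map each fine window to
# its coarse parent and keep the slope whose window had minimal variance
dem = np.random.default_rng(4).normal(size=(256, 256)).cumsum(0).cumsum(1)
results = [plane_slopes(dem, w) for w in (2, 4, 8, 16)]
```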

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 129
5353 Mathematical Modeling and Optimization of Burnishing Parameters for 15NiCr6 Steel

Authors: Tarek Litim, Ouahiba Taamallah

Abstract:

The present paper is an investigation of the effect of burnishing on the surface integrity of a component made of 15NiCr6 steel. This work presents a statistical study based on regression; Taguchi's design has allowed the development of mathematical models to predict the output responses as a function of the technological parameters studied. The response surface methodology (RSM) showed the simultaneous influence of the burnishing parameters and allowed the optimal processing parameters to be identified. ANOVA validated the prediction models with coefficients of determination of 90.60% and 92.41% for roughness and hardness, respectively. Furthermore, a multi-objective optimization identified a regime characterized by P = 10 kgf, i = 3 passes, and f = 0.074 mm/rev, which favours minimum roughness and maximum hardness. The result was validated by desirability values of D = 0.99 and 0.95 for roughness and hardness, respectively.
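
A small sketch of the composite desirability calculation behind such a multi-objective optimum: individual desirabilities for a smaller-is-better response (roughness) and a larger-is-better response (hardness), combined as a geometric mean. The response values and limits below are hypothetical.

```python
import numpy as np

def d_min(y, lo, hi):
    """Desirability for a smaller-is-better response (e.g. roughness)."""
    return np.clip((hi - y) / (hi - lo), 0, 1)

def d_max(y, lo, hi):
    """Desirability for a larger-is-better response (e.g. hardness)."""
    return np.clip((y - lo) / (hi - lo), 0, 1)

# Hypothetical predicted responses at P = 10 kgf, i = 3 passes, f = 0.074 mm/rev
d_rough = d_min(y=0.42, lo=0.40, hi=2.0)   # -> ~0.99
d_hard  = d_max(y=495, lo=300, hi=505)     # -> ~0.95
D = (d_rough * d_hard) ** 0.5              # composite desirability
print(round(float(D), 2))
```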

Keywords: 15NiCr6 steel, burnishing, surface integrity, Taguchi, RSM, ANOVA

Procedia PDF Downloads 191
5352 Deep-Learning Based Approach to Facial Emotion Recognition through Convolutional Neural Network

Authors: Nouha Khediri, Mohammed Ben Ammar, Monji Kherallah

Abstract:

Recently, facial emotion recognition (FER) has become increasingly important for understanding the state of the human mind, yet accurately classifying emotion from the face is a challenging task. In this paper, we present a facial emotion recognition approach named CV-FER, benefiting from deep learning, especially CNN and VGG16. First, the data are pre-processed with data cleaning and data rotation. Then, we augment the data and proceed to our FER model, which contains five convolutional layers and five pooling layers. Finally, a softmax classifier is used in the output layer to recognize emotions. On this basis, the paper also reviews prior work on facial emotion recognition based on deep learning. Experiments show that our model outperforms the other methods using the same FER2013 database, yielding a recognition rate of 92%. We also put forward some suggestions for future work.
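
A minimal Keras sketch matching the stated structure (five convolutional layers, five pooling layers, softmax output) for 48×48 grayscale FER2013 images; the filter widths are illustrative assumptions, not the authors' exact CV-FER architecture.

```python
from tensorflow.keras import layers, models

def build_cv_fer(num_classes=7):
    """Five conv / five pool CNN sketch for 48x48 grayscale face images."""
    inputs = layers.Input(shape=(48, 48, 1))
    x = inputs
    for filters in (32, 64, 128, 256, 512):   # widths are assumptions
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2, padding="same")(x)
    x = layers.Flatten()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```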

Keywords: CNN, deep-learning, facial emotion recognition, machine learning

Procedia PDF Downloads 95
5351 A Super-Efficiency Model for Evaluating Efficiency in the Presence of Time Lag Effect

Authors: Yanshuang Zhang, Byungho Jeong

Abstract:

In many cases, there is a time lag between the consumption of inputs and the production of outputs. This time lag effect should be considered in evaluating the performance of organizations. Recently, a couple of DEA models were developed for considering the time lag effect in the efficiency evaluation of research activities. The multi-period input (MpI) and multi-period output (MpO) models are integrated models that calculate simple efficiency taking the time lag effect into account. However, these models cannot discriminate among efficient DMUs because of the nature of the basic DEA model, in which efficiency scores are limited to 1: efficient DMUs cannot be distinguished because their efficiency scores are the same. Thus, this paper suggests a super-efficiency model for efficiency evaluation under consideration of the time lag effect, based on the MpO model. A case example using a long-term research project is given to compare the suggested model with the MpO model.
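
For reference, a minimal scipy sketch of a single-period, input-oriented super-efficiency model in the Andersen-Petersen style: the DMU under evaluation is excluded from the reference set, so efficient DMUs receive scores above 1 and become distinguishable. The paper's model extends this idea to the MpO setting; the data below are toy values.

```python
import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y, o):
    """Input-oriented super-efficiency score of DMU o under constant returns.
    X: (m, n) inputs, Y: (s, n) outputs, columns are DMUs. DMU o is excluded
    from the reference set, so efficient units can score above 1."""
    m, n = X.shape
    s = Y.shape[0]
    peers = [j for j in range(n) if j != o]
    c = np.r_[1.0, np.zeros(n - 1)]                 # minimize theta
    A_in = np.c_[-X[:, [o]], X[:, peers]]           # sum(l*x_j) <= theta*x_o
    A_out = np.c_[np.zeros((s, 1)), -Y[:, peers]]   # sum(l*y_j) >= y_o
    res = linprog(c, A_ub=np.r_[A_in, A_out],
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * (n - 1))
    return res.fun

# Toy example: 4 DMUs, 2 inputs, 1 output; scores above 1 mark efficient units
X = np.array([[2.0, 4, 3, 5], [3.0, 1, 2, 4]])
Y = np.array([[1.0, 1, 1, 1]])
print([round(super_efficiency(X, Y, o), 3) for o in range(4)])
```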

Keywords: DEA, super-efficiency, time lag, multi-periods input

Procedia PDF Downloads 474
5350 Optimization Design of Single Phase Inverter Connected to the Grid

Authors: Linda Hassaine, Abdelhamid Mraoui, Mohamed Rida Bengourina

Abstract:

In grid-connected photovoltaic systems, significant improvements can be made in the design and implementation of inverters: reduction of harmonic distortion, elimination of the DC component injected into the grid, and the proposed control. This paper proposes a control strategy based on PWM switching patterns for an inverter connected to the grid in a photovoltaic system, in order to control the injected current. The injected current must be sinusoidal with reduced harmonic distortion. An additional filter is designed to reduce high-order harmonics on the output side. This strategy offers several advantages: simplicity, reduced harmonics, a smaller line filter, and reduced memory and computational requirements for the control.
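
A minimal sketch of sinusoidal PWM pattern generation, the generic mechanism behind such switching-pattern control: compare a sinusoidal reference with a triangular carrier. The frequencies and modulation index are illustrative, not the paper's design values.

```python
import numpy as np

def spwm(f_grid=50, f_carrier=5000, m=0.8, fs=1_000_000, cycles=1):
    """Sinusoidal PWM: compare a grid-frequency reference against a
    triangular carrier; the comparator output drives the inverter legs."""
    t = np.arange(0, cycles / f_grid, 1 / fs)
    ref = m * np.sin(2 * np.pi * f_grid * t)
    carrier = 2 * np.abs(2 * ((t * f_carrier) % 1) - 1) - 1  # triangle in [-1, 1]
    return t, (ref > carrier).astype(int)                    # switching pattern

t, gate = spwm()
print(f"duty over one cycle: {gate.mean():.2f}")  # ~0.5 for a symmetric sine
```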

Keywords: control, inverters, LCL filter, grid-connected photovoltaic system

Procedia PDF Downloads 325
5349 Temperature Distribution Control for Baby Incubator System Using Arduino AT Mega 2560

Authors: W. Widhiada, D. N. K. P. Negara, P. A. Suryawan

Abstract:

Technological advances in the field of health are very important, especially for the safety of babies. Many premature infant deaths are caused by poorly managed health facilities; in most cases, the death of a premature baby is caused by bacteria when the temperature around the baby is not normal. For this reason, incubator equipment is important, especially the way the temperature in the incubator is controlled. On/off control is used to regulate the temperature distribution in the incubator so that the desired temperature of 36 °C remains stable. The authors observed and analyzed the data to determine the temperature distribution in the incubator using MATLAB/Simulink. The output temperature distribution reaches 36 °C in 400 seconds using an Arduino AT Mega 2560. This incubator is able to maintain the ambient temperature, keep the baby's body temperature within normal limits, and keep the air humidity in accordance with the limit values required in an infant incubator.
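
A minimal simulation of on/off (bang-bang) control with a small hysteresis band around 36 °C on a first-order thermal model; the thermal constants are hypothetical, not identified from the authors' incubator.

```python
import numpy as np

def simulate_incubator(setpoint=36.0, band=0.2, t_amb=25.0, dt=1.0, steps=600):
    """On/off control of a first-order thermal model: the heater switches on
    below (setpoint - band) and off above (setpoint + band)."""
    temp, heater, history = t_amb, False, []
    k_loss, k_heat = 0.002, 0.05          # hypothetical thermal constants
    for _ in range(steps):
        if temp < setpoint - band:
            heater = True
        elif temp > setpoint + band:
            heater = False
        temp += dt * (k_heat * heater - k_loss * (temp - t_amb))
        history.append(temp)
    return np.array(history)

temps = simulate_incubator()
print(f"temperature after 400 s: {temps[399]:.1f} C")  # settles near 36 C
```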

Keywords: on/off control, distribution temperature, Arduino AT 2560, baby incubator

Procedia PDF Downloads 502
5348 Energy Harvesting with Zinc Oxide Based Nanogenerator: Design and Simulation Using Comsol-4.3 Software

Authors: Akanksha Rohit, Ujjwala Godavarthi, Anshua Mukherjee

Abstract:

Nanotechnology is one of the promising sustainable solutions in the era of miniaturization due to its multidisciplinary nature. The most interesting aspect of nanotechnology is its wide range of applications, from electronics to military and biomedical uses, and it tries to connect individuals more closely to the environment. In this paper, the concept of parasitic energy harvesting is used in designing nanogenerators with COMSOL 4.3 software. The output of the nanogenerator is optimized under the following constraints: ease of availability of the material, the fabrication process, and the cost of the material. The nanogenerator is optimized using ZnO-based nanowires, PMMA as the insulator, and aluminum and silicon as the metal electrodes. The energy harvested from the model can be used to power nanobots and several other biomedical sensors, and eventually to replace batteries. Advancements in this field can be very challenging, but it is the future of the nano era.

Keywords: zinc oxide, piezoelectric, PMMA, parasitic energy harvesting, renewable energy engineering

Procedia PDF Downloads 364
5347 Entropy Measures on Neutrosophic Soft Sets and Its Application in Multi Attribute Decision Making

Authors: I. Arockiarani

Abstract:

The focus of the paper is to furnish entropy measures for neutrosophic sets and neutrosophic soft sets, as measures of the uncertainty that permeates discourse and systems. Various characterizations of the entropy measures are derived. Further, we exemplify this concept by applying entropy to various real-time decision-making problems.

Keywords: entropy measure, Hausdorff distance, neutrosophic set, soft set

Procedia PDF Downloads 257
5346 Semiconductor Variable Wavelength Generator of Near-Infrared-to-Terahertz Regions

Authors: Isao Tomita

Abstract:

Power characteristics are obtained for laser beams of near-infrared and terahertz wavelengths when produced by difference-frequency generation with a quasi-phase-matched (QPM) waveguide made of gallium phosphide (GaP). A refractive-index change of the QPM GaP waveguide is included in computations with Sellmeier’s formula for varying input wavelengths, where optical loss is also included. Although the output power decreases with decreasing photon energy as the beam wavelength changes from near-infrared to terahertz wavelengths, the beam generation with such greatly different wavelengths, which is not achievable with an ordinary laser diode without the replacement of semiconductor material with a different bandgap one, can be made with the same semiconductor (GaP) by changing the QPM period, where a way of changing the period is provided.
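
The underlying arithmetic can be sketched directly: the idler wavelength follows from 1/λi = 1/λp − 1/λs, and the first-order QPM period from Λ = 2π/|Δk|. The refractive indices below are rough placeholders; an actual design would evaluate the GaP Sellmeier formula at each wavelength, as the abstract describes.

```python
def dfg_wavelength(lam_pump_um, lam_signal_um):
    """Difference-frequency generation: 1/lam_idler = 1/lam_pump - 1/lam_signal."""
    return 1.0 / (1.0 / lam_pump_um - 1.0 / lam_signal_um)

def qpm_period_um(lam_p, lam_s, lam_i, n_p, n_s, n_i):
    """First-order QPM period Lambda = 2*pi/|Delta_k| with
    Delta_k = 2*pi*(n_p/lam_p - n_s/lam_s - n_i/lam_i)."""
    return 1.0 / abs(n_p / lam_p - n_s / lam_s - n_i / lam_i)

lam_i = dfg_wavelength(1.030, 1.064)   # two near-IR pumps -> ~32 um (THz region)
# Placeholder indices; a design would use the GaP Sellmeier equation instead
print(f"idler: {lam_i:.1f} um, QPM period: "
      f"{qpm_period_um(1.030, 1.064, lam_i, 3.05, 3.05, 3.30):.0f} um")
```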

Keywords: difference-frequency generation, gallium phosphide, quasi-phase-matching, waveguide

Procedia PDF Downloads 116
5345 Simultaneous Removal of Phosphate and Ammonium from Eutrophic Water Using Dolochar Based Media Filter

Authors: Prangya Ranjan Rout, Rajesh Roshan Dash, Puspendu Bhunia

Abstract:

With the aim of enhancing nutrient (ammonium and phosphate) removal from eutrophic wastewater at reduced cost, a novel media-based multistage biofilter with a drop aeration facility was developed in this work. The biofilter was packed with a discarded by-product of the sponge iron industry, 'dolochar', primarily to remove phosphate via a physicochemical approach. In the multistage biofilter, drop aeration was achieved by the percolation of the gravity-fed wastewater through the filter media and the dropping of wastewater from stage to stage. Ammonium present in the wastewater was adsorbed by the filter media and the biomass grown on it, and was subsequently converted to nitrate through biological nitrification under the aerobic conditions realized by drop aeration. The performance of the biofilter in treating real eutrophic wastewater was monitored for a period of about 2 months. The influent phosphate concentration was in the range of 16-19 mg/L, and the ammonium concentration was in the range of 65-78 mg/L. The average nutrient removal efficiencies observed during the study period were 95.2% for phosphate and 88.7% for ammonium, with mean final effluent concentrations of 0.91 and 8.74 mg/L, respectively. Furthermore, the subsequent release of nutrients from the saturated filter media after completion of the treatment process was examined in this study; thin-layer funnel analytical tests reveal the slow nutrient-release nature of spent dolochar, thereby recommending its potential agricultural application. Thus, the biofilter displays immense prospects for treating real eutrophic wastewater, significantly decreasing the level of nutrients, keeping the effluent nutrient concentrations at par with the permissible limits and, more importantly, facilitating the conversion of waste materials into usable ones.

Keywords: ammonium removal, phosphate removal, multi-stage bio-filter, dolochar

Procedia PDF Downloads 194
5344 Numerical Modeling of Air Shock Wave Generated by Explosive Detonation and Dynamic Response of Structures

Authors: Michał Lidner, Zbigniew Szcześniak

Abstract:

The ability to properly estimate blast load overpressure plays an important role in the safe design of buildings. The issue of blast loading on structural elements has been explored for many years. However, in many literature reports the shock wave overpressure is estimated with a simplified triangular or exponential distribution in time, which introduces errors when comparing the real and numerical reactions of elements. Nonetheless, it is possible to set a function that follows the real blast load overpressure versus time more closely. The paper presents a method of numerical analysis of the phenomenon of air shock wave propagation. It uses the Finite Volume Method and takes into account energy losses due to heat transfer, with respect to the adiabatic process rule. A system of three equations (conservation of mass, momentum and energy) describes the flow of a volume of gaseous medium in an area remote from building compartments, which can inhibit the movement of gas. For validation, three cases of shock wave flow were analyzed: a free-field explosion, an explosion inside an insusceptible steel tube (the 1D case) and an explosion inside an insusceptible cube (the 3D case). The results of the numerical analysis were compared with literature reports; values of impulse and pressure, and their duration, were studied. Finally, an overall good convergence of the numerical results with experiments was achieved, and the most important parameters were well reflected. Additionally, analyses of the dynamic response of one of the considered structural elements were made.
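
A minimal 1D finite-volume sketch of the three conservation laws (mass, momentum, energy) with Lax-Friedrichs fluxes, using a shock-tube initial state as a toy stand-in for the blast case; this is illustrative and far simpler than the paper's solver.

```python
import numpy as np

gamma = 1.4  # ratio of specific heats for air

def flux(U):
    """Euler fluxes from conserved variables U = [rho, rho*u, E] per cell."""
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def lax_friedrichs_step(U, dx, dt):
    """Conservative update U_i -= dt/dx * (F_{i+1/2} - F_{i-1/2})
    with Lax-Friedrichs numerical fluxes at the cell interfaces."""
    F = flux(U)
    Fh = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * dx / dt * (U[:, 1:] - U[:, :-1])
    U[:, 1:-1] -= dt / dx * (Fh[:, 1:] - Fh[:, :-1])
    return U

# Shock-tube initial condition: high-pressure driver gas on the left (toy blast)
nx, dx, dt = 400, 1.0 / 400, 0.0002
rho = np.where(np.arange(nx) < nx // 2, 1.0, 0.125)
p = np.where(np.arange(nx) < nx // 2, 1.0, 0.1)
U = np.array([rho, np.zeros(nx), p / (gamma - 1)])
for _ in range(500):
    U = lax_friedrichs_step(U, dx, dt)
```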

Keywords: adiabatic process, air shock wave, explosive, finite volume method

Procedia PDF Downloads 192
5343 iSEA: A Mobile Based Learning Application for History and Culture Knowledge Enhancement for the ASEAN Region

Authors: Maria Visitacion N. Gumabay, Byron Joseph A. Hallar, Annjeannette Alain D. Galang

Abstract:

This study was intended to provide a more efficient and convenient way for mobile users to enhance their knowledge about ASEAN countries. The researchers evaluated the utility of the developed crossword puzzle application and assessed the general usability of its user interface for its intended purpose and audience. The descriptive qualitative research method was used for the research design, and the Mobile-D methodology was employed for the development of the software application output. With a generally favorable reception from its users, the researchers concluded that the iSEA Mobile Based Learning Application can be considered ready for general deployment and use. They also concluded that additional studies can be done to more completely assess the knowledge gained by its users before and after using the application.

Keywords: mobile learning, eLearning, crossword, ASEAN, iSEA

Procedia PDF Downloads 313