Search results for: input output linearization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3601


2431 An Advanced Method of Minimizing Unforeseen Disruptions within a Manufacturing System: A Case Study of Amico, South Africa

Authors: Max Moleke

Abstract:

Manufacturing industries are faced with different types of problems. One of the most important roles of controlling and monitoring a production process is to determine how to deal with unforeseen disruptions when they arise. A majority of manufacturers tend to spend huge amounts of money in order to meet their customers' requirements and demands, but due to instabilities within the manufacturing process, these objectives and goals are difficult to achieve. In this research, we have developed a feedback control system that can minimize instability within the manufacturing system in order to boost the system output and productivity.

Keywords: disruption, scheduling, manufacturing, instability

Procedia PDF Downloads 313
2430 Applying Biosensors’ Electromyography Signals through an Artificial Neural Network to Control a Small Unmanned Aerial Vehicle

Authors: Mylena McCoggle, Shyra Wilson, Andrea Rivera, Rocio Alba-Flores

Abstract:

This work introduces the use of EMG (electromyography) signals from muscle sensors to develop an Artificial Neural Network (ANN) for pattern recognition to control a small unmanned aerial vehicle. The objective of this endeavor is to demonstrate drone interfaces that go beyond direct manual control. The MyoWare Muscle Sensor contains three EMG electrodes (dual and single type) used to collect signals from the posterior (extensor) and anterior (flexor) forearm and the bicep. The raw voltages from each sensor were collected through an Arduino Uno, and a data processing algorithm was developed with the purpose of interpreting the voltage signals produced when flexing, resting, and moving the arm. Each sensor collected eight values over a two-second period for the duration of one minute, per assessment. During each two-second interval, the movements alternated between a resting reference class and an active motion class, resulting in controlling the motion of the drone with left and right movements. This paper further investigated adding up to three sensors to differentiate between hand gestures to control the principal motions of the drone (left, right, up, and land). The hand gestures chosen to execute these movements were: a resting position, a thumbs up, a hand swipe right motion, and a flexing position. The MATLAB software was utilized to collect, process, and analyze the signals from the sensors. A machine learning classification protocol was used to classify the hand gestures. To generate the input vector to the ANN, the mean, root mean square, and standard deviation were computed for every two-second interval of the hand gestures. The neuromuscular information was then trained using an artificial neural network with one hidden layer of 10 neurons to categorize the four targets, one for each hand gesture. Once the machine learning training was completed, the resulting network interpreted the processed inputs and returned the probabilities of each class. Based on the resulting probabilities, once an output was greater than or equal to an 80% match with a specific target class, the drone would perform the expected motion. Afterward, each movement command was sent from the computer to the drone through a Wi-Fi network connection. These procedures have been successfully tested and integrated into trial flights, where the drone has responded successfully in real time to predefined command inputs with the machine learning algorithm through the MyoWare sensor interface. The full paper will describe in detail the database of the hand gestures, the details of the ANN architecture, and the confusion matrix results.
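A minimal sketch of the window-level feature extraction described above (mean, root mean square, and standard deviation per two-second interval), assuming the raw EMG voltages are already available as arrays; the sensor count, sample counts, and function names are illustrative, not taken from the paper.

```python
import numpy as np

def window_features(window):
    """Return the three per-window features used as ANN inputs: mean, RMS, std."""
    window = np.asarray(window, dtype=float)
    mean = window.mean()
    rms = np.sqrt(np.mean(window ** 2))
    std = window.std()
    return np.array([mean, rms, std])

def build_input_vector(sensor_windows):
    """Concatenate per-sensor features into one ANN input vector.

    sensor_windows: list of 1-D arrays, one per MyoWare sensor, each holding
    the raw voltages of a single two-second interval.
    """
    return np.concatenate([window_features(w) for w in sensor_windows])

# Illustrative use: three sensors, each with 8 samples per two-second window.
example = [np.random.rand(8) for _ in range(3)]
x = build_input_vector(example)   # 9 features, fed to a 1-hidden-layer (10-neuron) ANN
print(x.shape)
```

The resulting vector would then be passed to the gesture classifier, with the 80% probability threshold deciding whether a drone command is issued.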

Keywords: artificial neural network, biosensors, electromyography, machine learning, MyoWare muscle sensors, Arduino

Procedia PDF Downloads 158
2429 Battery/Supercapacitor Emulator for Chargers Functionality Testing

Authors: S. Farag, A. Kuperman

Abstract:

In this paper, the design of a solid-state battery/supercapacitor emulator based on a dc-dc boost converter is described. The emulator mimics the charging behavior of any storage device based on a predefined behavior set by the user. The device is operated by a two-level control structure: a high-level emulating controller and a low-level input voltage controller. Simulation and experimental results are shown to demonstrate the emulator operation.
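A minimal sketch of the two-level control idea described above: a high-level "emulating" controller derives the terminal-voltage reference from a user-defined storage model, while a low-level loop steers the converter toward that reference. The storage curve, gains, and converter surrogate below are illustrative assumptions, not values or structures from the paper.

```python
def battery_voltage_model(state_of_charge, current):
    """User-defined charging behaviour: open-circuit voltage plus resistive drop."""
    v_oc = 3.0 + 1.2 * state_of_charge        # assumed linear OCV curve (V)
    return v_oc + 0.05 * current               # assumed 50 mOhm internal resistance

def low_level_controller(v_ref, v_meas, duty, kp=0.02):
    """Proportional correction of the converter duty cycle (placeholder loop)."""
    return min(max(duty + kp * (v_ref - v_meas), 0.0), 0.9)

soc, duty, v_out, dt = 0.2, 0.5, 3.1, 1e-3
capacity_as = 3600.0                           # assumed 1 Ah equivalent capacity
v_in = 2.0                                     # assumed dc input of the boost stage
for _ in range(1000):
    i_charge = 1.0                             # constant charging current (A), assumed
    soc = min(soc + i_charge * dt / capacity_as, 1.0)
    v_ref = battery_voltage_model(soc, i_charge)    # high-level emulating controller
    duty = low_level_controller(v_ref, v_out, duty) # low-level regulation loop
    v_out += (v_in / (1.0 - duty) - v_out) * 0.1    # first-order lag as crude plant model
print(round(v_ref, 3), round(v_out, 3))
```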

Keywords: battery, charger, energy, storage, super capacitor

Procedia PDF Downloads 386
2428 Optimizing the Use of Google Translate in Translation Teaching: A Case Study at Prince Sultan University

Authors: Saadia Elamin

Abstract:

The quasi-universal use of smart phones with internet connection available all the time makes it a reflex action for translation undergraduates, once they encounter the least translation problem, to turn to the freely available web resource: Google Translate. Like for other translator resources and aids, the use of Google Translate needs to be moderated in such a way that it contributes to developing translation competence. Here, instead of interfering with students’ learning by providing ready-made solutions which might not always fit into the contexts of use, it can help to consolidate the skills of analysis and transfer which students have already acquired. One way to do so is by training students to adhere to the basic principles of translation work. The most important of these is that analyzing the source text for comprehension comes first and foremost before jumping into the search for target language equivalents. Another basic principle is that certain translator aids and tools can be used for comprehension, while others are to be confined to the phase of re-expressing the meaning into the target language. The present paper reports on the experience of making a measured and reasonable use of Google Translate in translation teaching at Prince Sultan University (PSU), Riyadh. First, it traces the development that has taken place in the field of translation in this age of information technology, be it in translation teaching and translator training, or in the real-world practice of the profession. Second, it describes how, with the aim of reflecting this development onto the way translation is taught, senior students, after being trained on post-editing machine translation output, are authorized to use Google Translate in classwork and assignments. Third, the paper elaborates on the findings of this case study which has demonstrated that Google Translate, if used at the appropriate levels of training, can help to enhance students’ ability to perform different translation tasks. This help extends from the search for terms and expressions, to the tasks of drafting the target text, revising its content and finally editing it. In addition, using Google Translate in this way fosters a reflexive and critical attitude towards web resources in general, maximizing thus the benefit gained from them in preparing students to meet the requirements of the modern translation job market.

Keywords: Google Translate, post-editing machine translation output, principles of translation work, translation competence, translation teaching, translator aids and tools

Procedia PDF Downloads 453
2427 Training a Neural Network to Segment, Detect and Recognize Numbers

Authors: Abhisek Dash

Abstract:

This study used three neural networks, one for number segmentation, one for number detection and one for number recognition, all of which are coupled to one another. All networks were convolutional and were trained on the MNIST dataset. It was assumed that the images had a lighter background and a darker foreground. The segmentation network took 28x28 images as input and had sixteen outputs. Segmentation training starts when a dark pixel is encountered. Taking a 7x7 window over that pixel as the focus, the eight-neighborhood of the focus was checked for further dark pixels. The segmentation network was then trained to move in those directions which had dark pixels. To this end, its sixteen outputs were arranged as "go east", "don't go east", "go south-east", "don't go south-east", "go south", "don't go south" and so on with respect to the focus window. The focus window was resized into a 28x28 image and the network was trained to consider those neighborhoods which had dark pixels. The neighborhoods which had dark pixels were pushed into a queue in a particular order. The neighborhoods were then popped one at a time, stitched to the existing partial image of the number, and the network was trained on which neighborhoods to consider when the new partial image was presented. The above process was repeated until the image was fully covered by the 7x7 neighborhoods and there were no more uncovered dark pixels. During testing, the network scans the image and looks for the first dark pixel. From here on, the network predicts which neighborhoods to consider and segments the image. After this step, the group of neighborhoods is passed into the detection network. The detection network took 28x28 images as input and had two outputs denoting whether a number was detected or not. Since the ground truth of the bounds of a number was known during training, the detection network output "number not found" until the bounds were met, and vice versa. The recognition network was a standard CNN that also took 28x28 images and had 10 outputs for recognition of the numbers 0 to 9. This network was activated only when the detection network voted in favor of a number being detected. The above methodology could segment connected and overlapping numbers. Additionally, the recognition unit was only invoked when a number was detected, which minimized false positives. It also eliminated the need for rules of thumb, as segmentation is learned. The strategy can also be extended to other characters.
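A compact sketch of the queue-driven neighborhood traversal described above: the 7x7 focus window, the eight direction choices and the stopping rule follow the text, while the trained segmentation network is replaced by a simple dark-pixel check so the control flow stays self-contained; all names and the foreground encoding are illustrative assumptions.

```python
from collections import deque
import numpy as np

# Eight movement choices ("go east", "go south-east", ...) relative to the focus window.
DIRECTIONS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def has_dark_pixel(image, r, c, size=7, threshold=0.5):
    """Placeholder for the network's 'go / don't go' decision: check the 7x7
    neighborhood for any foreground pixel (foreground encoded as 1 here)."""
    h, w = image.shape
    r0, c0 = max(r, 0), max(c, 0)
    patch = image[r0:min(r + size, h), c0:min(c + size, w)]
    return patch.size > 0 and patch.max() > threshold

def segment_from_seed(image, seed, size=7):
    """Grow a set of 7x7 windows from the first dark pixel, queueing neighborhoods."""
    visited, queue, covered = set(), deque([seed]), []
    while queue:
        r, c = queue.popleft()
        if (r, c) in visited:
            continue
        visited.add((r, c))
        covered.append((r, c))
        for dr, dc in DIRECTIONS:
            nr, nc = r + dr * size, c + dc * size
            if (nr, nc) not in visited and has_dark_pixel(image, nr, nc, size):
                queue.append((nr, nc))
    return covered   # windows later stitched and passed to the detection network

# Illustrative use with a synthetic 28x28 image containing one foreground blob.
img = np.zeros((28, 28)); img[10:20, 8:18] = 1.0
windows = segment_from_seed(img, seed=(10, 8))
```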

Keywords: convolutional neural networks, OCR, text detection, text segmentation

Procedia PDF Downloads 142
2426 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning

Authors: Akeel A. Shah, Tong Zhang

Abstract:

Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions - many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT). The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and the correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map, and furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs the result can be an order of magnitude or more of speed-up. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in, for example, SchNet and MEGNet. The graph incorporates information regarding the numbers, types and properties of atoms; the types of bonds; and bond angles. The key to the accuracy in multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment and HOMO/LUMO levels.
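A minimal sketch of the delta-learning idea referred to above: the model is trained only on the difference between high- and low-fidelity outputs, so that a cheap low-fidelity calculation plus the learned correction approximates the expensive result. A generic regressor stands in for the graph convolutional network, and all data and names are synthetic placeholders rather than anything from the paper's data sets.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                      # molecular descriptors (illustrative)
y_low = X @ rng.normal(size=16)                     # surrogate low-fidelity property
y_high = y_low + 0.3 * np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)  # surrogate high fidelity

train = slice(0, 40)                                # only a few high-fidelity points are needed
delta_model = RandomForestRegressor(n_estimators=200, random_state=0)
delta_model.fit(X[train], (y_high - y_low)[train])  # learn only the correction term

y_pred = y_low + delta_model.predict(X)             # high fidelity = low fidelity + correction
print("RMSE:", np.sqrt(np.mean((y_pred - y_high) ** 2)))
```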

Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning

Procedia PDF Downloads 5
2425 Comparison between Two Software Packages GSTARS4 and HEC-6 about Prediction of the Sedimentation Amount in Dam Reservoirs and to Estimate Its Efficient Life Time in the South of Iran

Authors: Fatemeh Faramarzi, Hosein Mahjoob

Abstract:

Building dams on rivers for the utilization of water resources causes problems in the hydrodynamic equilibrium and results in leaving all or part of the sediments carried by the water in the dam reservoir. This phenomenon also has significant impacts on the water and sediment flow regime and in the long term can cause morphological changes in the environment surrounding the river, reducing the useful life of the reservoir, which threatens sustainable development through inefficient management of water resources. In the past, empirical methods were used to predict the sedimentation amount in dam reservoirs and to estimate their useful lifetime. But recently, mathematical and computational models are widely used as a suitable tool in sedimentation studies in dam reservoirs. These models usually solve the equations using the finite element method. This study compares the results from two software packages, GSTARS4 and HEC-6, in the prediction of the sedimentation amount in the Dez Dam, southern Iran. The model provides a one-dimensional, steady-state simulation of sediment deposition and erosion by solving the equations of momentum, flow and sediment continuity, and sediment transport. GSTARS4 (Generalized Sediment Transport Model for Alluvial River Simulation) is based on a one-dimensional mathematical model that simulates bed changes in both longitudinal and transverse directions by using flow tubes in a quasi-two-dimensional scheme; it was used to calibrate a period of 47 years and to forecast the next 47 years of sedimentation in the Dez Dam. This dam is among the highest dams in the world (with its 203 m height), irrigates more than 125,000 hectares of downstream lands and plays a major role in flood control in the region. The input data, including geometry, hydraulic and sedimentary data, span from 1955 to 2003 on a daily basis. To predict the future river discharge, in this research the time series data were assumed to be repeated after 47 years. Finally, the obtained result was very satisfactory in the delta region, so that the output from GSTARS4 was almost identical to the hydrographic profile in 2003. In the Dez Dam, due to the long (65 km) and large reservoir, vertical currents are dominant, causing the calculations by the above-mentioned method to be inaccurate. To solve this problem, we used the empirical reduction method to calculate the sedimentation in the downstream area, which led to very good results. Thus, we demonstrated that by combining these two methods a very suitable model for sedimentation in the Dez Dam for the study period can be obtained. The present study demonstrated successfully that the outputs of both methods are the same.

Keywords: Dez Dam, prediction, sedimentation, water resources, computational models, finite element method, GSTARS4, HEC-6

Procedia PDF Downloads 299
2424 Use of Artificial Neural Networks to Estimate Evapotranspiration for Efficient Irrigation Management

Authors: Adriana Postal, Silvio C. Sampaio, Marcio A. Villas Boas, Josué P. Castro

Abstract:

This study deals with the estimation of reference evapotranspiration (ET₀) in an agricultural context, focusing on efficient irrigation management to meet the growing interest in the sustainable management of water resources. Given the importance of water in agriculture and its scarcity in many regions, efficient use of this resource is essential to ensure food security and environmental sustainability. The methodology used involved the application of artificial intelligence techniques, specifically Multilayer Perceptron (MLP) Artificial Neural Networks (ANNs), to predict ET₀ in the state of Paraná, Brazil. The models were trained and validated with meteorological data from the Brazilian National Institute of Meteorology (INMET), together with data obtained from a producer's weather station in the western region of Paraná. Two optimizers (SGD and Adam) and different meteorological variables, such as temperature, humidity, solar radiation, and wind speed, were explored as inputs to the models. Nineteen configurations with different input variables were tested; amidst them, configuration 9, with 8 input variables, was identified as the most efficient of all. Configuration 10, with 4 input variables, was considered the most effective, considering the smallest number of variables. The main conclusions of this study show that MLP ANNs are capable of accurately estimating ET₀, providing a valuable tool for irrigation management in agriculture. Both configurations (9 and 10) showed promising performance in predicting ET₀. The validation of the models with cultivator data underlined the practical relevance of these tools and confirmed their generalization ability for different field conditions. The results of the statistical metrics, including Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Coefficient of Determination (R²), showed excellent agreement between the model predictions and the observed data, with MAE as low as 0.01 mm/day and 0.03 mm/day, respectively. In addition, the models achieved an R² between 0.99 and 1, indicating a satisfactory fit to the real data. This agreement was also confirmed by the Kolmogorov-Smirnov test, which evaluates the agreement of the predictions with the statistical behavior of the real data and yields values between 0.02 and 0.04 for the producer data. In addition, the results of this study suggest that the developed technique can be applied to other locations by using specific data from these sites to further improve ET₀ predictions and thus contribute to sustainable irrigation management in different agricultural regions. The study has some limitations, such as the use of a single ANN architecture and two optimizers, the validation with data from only one producer, and the possible underestimation of the influence of seasonality and local climate variability. An irrigation management application using the most efficient models from this study is already under development. Future research can explore different ANN architectures and optimization techniques, validate models with data from multiple producers and regions, and investigate the model's response to different seasonal and climatic conditions.
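A short sketch of the agreement metrics cited above (MAE, MSE, RMSE, R²) together with the two-sample Kolmogorov-Smirnov statistic used to compare the predictions against observed ET₀; the arrays are placeholders, not the study's data, and the function name is illustrative.

```python
import numpy as np
from scipy import stats

def evaluation_metrics(observed, predicted):
    """Standard MAE, MSE, RMSE, R² and the two-sample KS statistic."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    err = predicted - observed
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    ks = stats.ks_2samp(observed, predicted).statistic
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2, "KS": ks}

# Illustrative call with synthetic daily ET0 values (mm/day).
obs = np.array([3.1, 4.2, 5.0, 2.8, 3.9])
pred = np.array([3.0, 4.3, 5.1, 2.9, 3.8])
print(evaluation_metrics(obs, pred))
```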

Keywords: agricultural technology, neural networks in agriculture, water efficiency, water use optimization

Procedia PDF Downloads 29
2423 Design and Simulation Interface Circuit for Piezoresistive Accelerometers with Offset Cancellation Ability

Authors: Mohsen Bagheri, Ahmad Afifi

Abstract:

This paper presents a new method for the readout of piezoresistive accelerometer sensors. The circuit is based on an instrumentation amplifier and is useful for reducing offset in the Wheatstone bridge. The obtained gain is 645 with 1 μV/°C equivalent drift and 1.58 mW power consumption. A Schmitt trigger and a multiplexer circuit control the output node. A high-speed counter is designed in this work. The proposed circuit is designed and simulated in 0.18 μm CMOS technology with a 1.8 V power supply.

Keywords: piezoresistive accelerometer, zero offset, Schmitt trigger, bidirectional reversible counter

Procedia PDF Downloads 290
2422 Design of an Ultra High Frequency Rectifier for Wireless Power Systems by Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Ícaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

There is dispersed energy in Radio Frequencies (RF) that can be reused to power electronic circuits such as sensors, actuators, and identification devices, among other systems, without wire connections or a battery supply. In this context, there are different types of energy harvesting systems, including rectennas, coil systems, graphene and new materials. A secondary step of an energy harvesting system is the rectification of the collected signal, which may be carried out, for example, by the combination of one or more Schottky diodes connected in series or shunt. In the case of a rectenna-based system, for instance, the diode used must be able to receive low-power signals at ultra-high frequencies. Therefore, low values of series resistance, junction capacitance and potential barrier voltage are required. Due to this low-power condition, voltage multiplier configurations are used, such as voltage doublers or modified bridge converters. A low-pass filter (LPF) at the input, a DC output filter, and a resistive load are also commonly used in the rectifier design. Electronic circuit designs are commonly analyzed through simulation in a SPICE (Simulation Program with Integrated Circuit Emphasis) environment. Despite the remarkable potential of SPICE-based simulators for complex circuit modeling and the analysis of quasi-static electromagnetic field interactions, i.e., at low frequency, these simulators are limited and cannot properly model applications of microwave hybrid circuits in which there are both lumped and distributed elements. This work proposes, therefore, the electromagnetic modeling of electronic components in order to create models that satisfy the needs of simulations of circuits at ultra-high frequencies, with application in rectifiers coupled to antennas, as in energy harvesting systems, that is, in rectennas. For this purpose, the numerical method FDTD (Finite-Difference Time-Domain) is applied and SPICE computational tools are used for comparison. In the present work, the Ampere-Maxwell equation is first applied to the equations of current density and electric field within the FDTD method, together with its circuital relation to the voltage drop across the modeled component for the case of lumped parameters, using the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) formulation proposed in the literature for the passive components and the one proposed for the diode. Next, a rectifier is built with the essential requirements for operating rectenna energy harvesting systems, and the FDTD results are compared with experimental measurements.
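For reference, the sketch below shows the generic textbook form of the LE-FDTD coupling for a simple lumped resistor, not the specific diode model used in the paper: the Ampere-Maxwell equation is augmented with a lumped current density, the device voltage is tied to the local electric field, and the semi-implicit field update follows.

```latex
% Hedged illustration of the LE-FDTD coupling for a lumped resistor R occupying
% one cell (\Delta x \Delta y \Delta z); the diode case is analogous but nonlinear.
\begin{align}
\nabla \times \mathbf{H} &= \varepsilon \frac{\partial \mathbf{E}}{\partial t} + \mathbf{J}_L,
\qquad \mathbf{J}_L = \frac{I_L}{\Delta x\, \Delta y}\,\hat{\mathbf{z}},
\qquad V = E_z \,\Delta z, \qquad I_L = \frac{V}{R},\\[4pt]
E_z^{\,n+1} &= \frac{1 - \dfrac{\Delta t\, \Delta z}{2 R \varepsilon\, \Delta x \Delta y}}
                  {1 + \dfrac{\Delta t\, \Delta z}{2 R \varepsilon\, \Delta x \Delta y}}\, E_z^{\,n}
 + \frac{\dfrac{\Delta t}{\varepsilon}}
        {1 + \dfrac{\Delta t\, \Delta z}{2 R \varepsilon\, \Delta x \Delta y}}
   \left(\nabla \times \mathbf{H}\right)_z^{\,n+1/2}.
\end{align}
```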

Keywords: energy harvesting system, LE-FDTD, rectenna, rectifier, wireless power systems

Procedia PDF Downloads 114
2421 Inviscid Steady Flow Simulation Around a Wing Configuration Using MB_CNS

Authors: Muhammad Umar Kiani, Muhammad Shahbaz, Hassan Akbar

Abstract:

Simulation of a high speed inviscid steady ideal air flow around a 2D/axial-symmetry body was carried out by the use of mb_cns code. mb_cns is a program for the time-integration of the Navier-Stokes equations for two-dimensional compressible flows on a multiple-block structured mesh. The flow geometry may be either planar or axisymmetric and multiply-connected domains can be modeled by patching together several blocks. The main simulation code is accompanied by a set of pre and post-processing programs. The pre-processing programs scriptit and mb_prep start with a short script describing the geometry, initial flow state and boundary conditions and produce a discretized version of the initial flow state. The main flow simulation program (or solver as it is sometimes called) is mb_cns. It takes the files prepared by scriptit and mb_prep, integrates the discrete form of the gas flow equations in time and writes the evolved flow data to a set of output files. This output data may consist of the flow state (over the whole domain) at a number of instants in time. After integration in time, the post-processing programs mb_post and mb_cont can be used to reformat the flow state data and produce GIF or postscript plots of flow quantities such as pressure, temperature and Mach number. The current problem is an example of supersonic inviscid flow. The flow domain for the current problem (strake configuration wing) is discretized by a structured grid and a finite-volume approach is used to discretize the conservation equations. The flow field is recorded as cell-average values at cell centers and explicit time stepping is used to update conserved quantities. MUSCL-type interpolation and one of three flux calculation methods (Riemann solver, AUSMDV flux splitting and the Equilibrium Flux Method, EFM) are used to calculate inviscid fluxes across cell faces.

Keywords: steady flow simulation, processing programs, simulation code, inviscid flux

Procedia PDF Downloads 416
2420 Potential Opportunity and Challenge of Developing Organic Rankine Cycle Geothermal Power Plant in China Based on an Energy-Economic Model

Authors: Jiachen Wang, Dongxu Ji

Abstract:

Geothermal power generation is a mature technology with zero carbon emissions and stable power output, which could play a vital role as an optimum substitution for base-load technology in China's future decarbonized society. However, the development of geothermal power plants in China has stagnated for a decade due to the underestimation of geothermal energy and insufficient favoring policy. Lack of understanding of the potential value of base-load technology and of the environmental benefits is the critical reason for the disappointing policy support. This paper proposes a different energy-economic model to uncover the potential benefit of developing a geothermal power plant in Puer, including the value of base-load power generation and the environmental and economic benefits. Optimization of the Organic Rankine Cycle (ORC) for maximum power output and minimum levelized cost of electricity was first conducted. This process aimed at finding the optimum working fluid, turbine inlet pressure, pinch point temperature difference and superheat degrees. Then the optimal ORC model was fed into the energy-economic model to simulate the potential economic and environmental benefits. The impact of the geothermal power plant was tested under three scenarios: implementing a carbon trading market, a direct subsidy per unit of electricity generated, and no support. In addition, the requirements on geothermal reservoirs, including geothermal temperature and mass flow rate, for the technology to be competitive with other renewables were listed. The results indicate that the ORC power plant has significant economic and environmental benefits over other renewable power generation technologies when a carbon trading market or subsidy support is implemented. At the same time, developers must locate geothermal reservoirs with a minimum temperature of 130 degrees and a minimum mass flow rate of 50 m/s to guarantee a profitable project under the no-support scenario.
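A compact sketch of the levelized cost of electricity calculation that an energy-economic evaluation of this kind typically relies on; the figures and parameter names are placeholders, not values from the paper.

```python
def lcoe(capex, annual_opex, annual_energy_mwh, lifetime_years, discount_rate):
    """Levelized cost of electricity: discounted lifetime cost / discounted output."""
    cost = capex
    energy = 0.0
    for year in range(1, lifetime_years + 1):
        factor = (1.0 + discount_rate) ** year
        cost += annual_opex / factor
        energy += annual_energy_mwh / factor
    return cost / energy          # currency units per MWh

# Placeholder figures for a small ORC geothermal plant, purely illustrative.
print(lcoe(capex=20e6, annual_opex=0.6e6, annual_energy_mwh=40e3,
           lifetime_years=25, discount_rate=0.08))
```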

Keywords: geothermal power generation, optimization, energy model, thermodynamics

Procedia PDF Downloads 54
2419 Calculation of the Supersonic Air Intake with the Optimization of the Shock Wave System

Authors: Elena Vinogradova, Aleksei Pleshakov, Aleksei Yakovlev

Abstract:

During the flight of a supersonic aircraft under various conditions (altitude, Mach number, etc.), it becomes necessary to coordinate the operating modes of the air intake and the engine. On supersonic aircraft, this is done by changing various control factors (e.g., the angle of rotation of the wedge panels). This paper investigates the possibility of using modern optimization methods to determine the optimal position of the supersonic air intake wedge panels in order to maximize the total pressure recovery coefficient. Modern software allows us to conduct auto-optimization, which determines the optimal position of the control elements of the investigated product to achieve its maximum efficiency. In this work, the flow in a supersonic aircraft inlet has been investigated and the operation of the flaps of the supersonic inlet has been optimized in a 2-D setting. This work has been done using the ANSYS CFX software. The supersonic aircraft inlet is a flat adjustable external compression inlet. The braking surface is made in the form of a three-stage wedge. The IOSO NM software package was chosen for optimization. The position of the panels of the inlet device is changed by varying the angle between the first and second stages of the three-stage wedge. The position of the remaining panels is changed automatically. Within the framework of the presented work, the position of the moving air intake panel was optimized under fixed flight conditions of the aircraft for a certain engine operating mode. As a result of the numerical modeling, the distribution of total pressure losses was obtained for various cases of engine operation, depending on the incoming flow velocity and the flight altitude of the aircraft. The results make it possible to obtain the maximum total pressure recovery coefficient under the given conditions. The initial geometry was set with a certain angle between the first and second wedge panels. Having performed all the calculations, as well as the subsequent optimization of the aircraft inlet device, it can be concluded that the initial angle was set sufficiently close to the optimal angle.
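For context, the textbook gas-dynamic relations behind the total pressure recovery coefficient being maximized are sketched below (not the paper's own derivation): for each wedge stage, the normal Mach component determines the stage loss, and the overall coefficient is the product of the stage ratios.

```latex
% Standard oblique-shock total pressure relations for stage i with upstream
% Mach number M_i, shock angle \beta_i and ratio of specific heats \gamma:
\begin{align}
M_{n,i} &= M_i \sin\beta_i, \\
\frac{p_{0,i+1}}{p_{0,i}} &=
\left[\frac{(\gamma+1)\,M_{n,i}^{2}}{(\gamma-1)\,M_{n,i}^{2}+2}\right]^{\frac{\gamma}{\gamma-1}}
\left[\frac{\gamma+1}{2\gamma M_{n,i}^{2}-(\gamma-1)}\right]^{\frac{1}{\gamma-1}}, \\
\sigma &= \prod_{i} \frac{p_{0,i+1}}{p_{0,i}} .
\end{align}
```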

Keywords: optimal angle, optimization, supersonic air intake, total pressure recovery coefficient

Procedia PDF Downloads 222
2418 A Research on Determining the Viability of a Job Board Website for Refugees in Kenya

Authors: Prince Mugoya, Collins Oduor Ondiek, Patrick Kanyi Wamuyu

Abstract:

Refugee Job Board Website is a web-based application that provides a platform for organizations to post jobs specifically for refugees. Organizations upload job opportunities and refugees can view them on the website. The website also allows refugees to input their skills and qualifications. The methodology used to develop this system is the waterfall (traditional) methodology. Software development tools include Brackets, which will be used to code the website, and phpMyAdmin to manage the database in which all the data will be stored.

Keywords: information technology, refugee, skills, utilization, economy, jobs

Procedia PDF Downloads 146
2417 Computational Fluid Dynamics Simulations of Air Pollutant Dispersion: Validation of the Fire Dynamics Simulator against the CUTE Experiments of the COST ES1006 Action

Authors: Virginie Hergault, Siham Chebbah, Bertrand Frere

Abstract:

Following in-house objectives, the Central Laboratory of the Paris Police Prefecture conducted a general review of models and Computational Fluid Dynamics (CFD) codes used to simulate pollutant dispersion in the atmosphere. Starting from that review and considering the main features of Large Eddy Simulation, the Central Laboratory of the Paris Police Prefecture (LCPP) postulated that the Fire Dynamics Simulator (FDS) model, from the National Institute of Standards and Technology (NIST), should be well suited for air pollutant dispersion modeling. This paper focuses on the implementation and the evaluation of FDS in the frame of the European COST ES1006 Action. This action aimed at quantifying the performance of modeling approaches. In this paper, the CUTE dataset, collected in the city of Hamburg, and its mock-up have been used. We have performed a comparison of FDS results with wind tunnel measurements from the CUTE trials on the one hand, and, on the other, with the results of the models involved in the COST Action. The most time-consuming part of creating input data for the simulations is the transfer of obstacle geometry information to the format required by FDS. Thus, we have developed Python codes to automatically convert building and topographic data to the FDS input file. In order to evaluate the predictions of FDS against observations, statistical performance measures have been used. These metrics include the fractional bias (FB), the normalized mean square error (NMSE) and the fraction of predictions within a factor of two of the observations (FAC2). Like the CFD models tested in the COST Action, the FDS results demonstrate good agreement with the measured concentrations. Furthermore, the metrics assessment indicates that FB and NMSE meet the acceptable tolerances.
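A minimal sketch of the standard definitions of the validation metrics named above (FB, NMSE, FAC2); the arrays are placeholders for paired observed and predicted concentrations, not the CUTE measurements.

```python
import numpy as np

def dispersion_metrics(c_obs, c_pred):
    """Fractional bias, normalized mean square error and factor-of-two fraction."""
    c_obs, c_pred = np.asarray(c_obs, float), np.asarray(c_pred, float)
    fb = 2.0 * (c_obs.mean() - c_pred.mean()) / (c_obs.mean() + c_pred.mean())
    nmse = np.mean((c_obs - c_pred) ** 2) / (c_obs.mean() * c_pred.mean())
    ratio = c_pred / c_obs
    fac2 = np.mean((ratio >= 0.5) & (ratio <= 2.0))
    return {"FB": fb, "NMSE": nmse, "FAC2": fac2}

# Illustrative values only.
print(dispersion_metrics([1.0, 2.5, 0.8, 1.6], [1.2, 2.0, 0.7, 1.9]))
```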

Keywords: numerical simulations, atmospheric dispersion, cost ES1006 action, CFD model, cute experiments, wind tunnel data, numerical results

Procedia PDF Downloads 121
2416 A Corpus-based Study of Adjuncts in Colombian English as a Second Language (ESL) Argumentative Essays

Authors: E. Velasco

Abstract:

Meeting high standards of writing in a Second Language (L2) is extremely important for many students who wish to undertake studies at universities in both English and non-English speaking countries. University lecturers in English speaking countries continue to express dissatisfaction with the apparent poor quality of essay writing skills displayed by English as a Second Language (ESL) students, whose essays are often criticised for their lack of cohesion and coherence. These critiques have extended to contexts such as Colombia, where many ESL students are criticised for their inability to write high-quality academic texts in L2-English, particularly at the tertiary level. If Colombian ESL students are expected to meet high standards of writing when studying locally and abroad, it makes sense to carry out specific research that can perhaps lead to recommendations to support their quest for improving argumentative strategies. Employing Corpus Linguistics methods within a Learner Corpus Research framework, and a combination of Log-Likelihood and Bayes Factor measures, this paper investigated argumentative essays written by Colombian ESL students. The study specifically aimed to analyse conjunctive adjuncts in argumentative essays to find out how Colombian ESL students connect their ideas in discourse. Results suggest that a) Colombian ESL learners need explicit instruction on specific areas of conjunctive adjuncts to counteract overuse, underuse and misuse; b) underuse of endophoric and evidential adjuncts highlights gaps between IELTS-like essays and good quality tertiary-level essays and published papers, and these gaps are linked to prior knowledge brought into the writing task, rhetorical functions in writing, and research processes before writing takes place; c) both Colombian ESL learners and L1-English writers (in a reference corpus) overuse some adjuncts and underuse endophoric and evidential adjuncts, when compared to skilled L1-English and L2-English writers, so differences in frequencies of adjuncts have little to do with the writers' L1, and differences are rather linked to the types of essays writers produce (e.g. ESL vs. university essays). The pedagogical recommendations deriving from the study are that: a) Colombian ESL learners need to be shown that overuse is not the only way of giving cohesion to argumentative essays and there are other alternatives to cohesion (e.g., implicit adjuncts, lexical chains and collocations); b) syllabi and classroom input need to raise awareness of gaps in writing skills between IELTS-like and tertiary-level argumentative essays, and of how endophoric and evidential adjuncts are used to refer to anaphoric and cataphoric sections of essays, and to other people's work or ideas; c) syllabi and classroom input need to include essay-writing tasks based on previous research/reading which learners need to incorporate into their arguments, and tasks that raise awareness of referencing systems (e.g., APA); d) classroom input needs to include explicit instruction on the use of punctuation, functions and/or syntax with specific conjunctive adjuncts such as for example, for that reason, although, despite and nevertheless.
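As a point of reference, the sketch below shows the standard log-likelihood (G²) keyness calculation typically used in learner-corpus comparisons of this kind to decide whether an adjunct is over- or underused relative to a reference corpus; the counts are placeholder numbers, not figures from the study.

```python
import math

def log_likelihood(freq_learner, size_learner, freq_reference, size_reference):
    """Log-likelihood (G2) for comparing the frequency of one adjunct across
    a learner corpus and a reference corpus (standard formulation)."""
    a, b = freq_learner, freq_reference
    c, d = size_learner, size_reference
    e1 = c * (a + b) / (c + d)          # expected frequency in the learner corpus
    e2 = d * (a + b) / (c + d)          # expected frequency in the reference corpus
    g2 = 0.0
    if a > 0:
        g2 += a * math.log(a / e1)
    if b > 0:
        g2 += b * math.log(b / e2)
    return 2.0 * g2

# Illustrative counts: an adjunct appearing 120 times in 50,000 learner tokens
# versus 300 times in 500,000 reference tokens (placeholder numbers).
print(log_likelihood(120, 50_000, 300, 500_000))
```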

Keywords: argumentative essays, colombian english as a second language (esl) learners, conjunctive adjuncts, corpus linguistics

Procedia PDF Downloads 66
2415 An Approach to Wind Turbine Modeling for Increasing Its Efficiency

Authors: Rishikesh Dingari, Sai Kiran Dornala

Abstract:

In this paper, a simple method of achieving maximum power by a mechanical energy transmission device (METD) integrated with an induction generator is proposed. The functioning of the METD is explained and the dynamic response of the system to a step input is plotted. The induction generator is operated in self-excited mode with an excitation capacitor at the stator. Voltage and current are observed when it is linked to the METD.

Keywords: mechanical energy transmitting device (METD), self-excited induction generator, wind turbine, hydraulic actuators

Procedia PDF Downloads 332
2414 Matrix-Based Linear Analysis of Switched Reluctance Generator with Optimum Pole Angles Determination

Authors: Walid A. M. Ghoneim, Hamdy A. Ashour, Asmaa E. Abdo

Abstract:

In this paper, linear analysis of a Switched Reluctance Generator (SRG) model is applied to the most common configurations (4/2, 6/4 and 8/6) for both conventional short-pitched and fully-pitched designs, in order to determine the optimum stator/rotor pole angles at which the maximum output voltage is generated per unit excitation current. This study is focused on SRG analysis and design as a proposed solution for renewable energy applications, such as wind energy conversion systems. The world's potential to develop renewable energy technologies through dedicated scientific research was the motive behind this study, due to its positive impact on the economy and the environment. In addition, the problem of rare earth metals (permanent magnets) caused by mining limitations, banned exports by top producers and environmental restrictions leads to the unavailability of materials used for manufacturing rotating machines. This challenge gave the authors the opportunity to study, analyze and determine the optimum design of the SRG, which has the benefit of being free from permanent magnets and rotor windings, with a flexible control system, and is compatible with any application that requires variable-speed operation. In addition, the SRG has been proven to be very efficient and reliable in both low-speed and high-speed applications. Linear analysis was performed using MATLAB simulations based on the modified generalized matrix approach of the Switched Reluctance Machine (SRM). About 90 different pole angle combinations and excitation patterns were simulated through this study, and the optimum output results for each case were recorded and presented in detail. This procedure has been proved to be applicable for any SRG configuration, dimension and excitation pattern. The delivered results of this study provide evidence for using the 4-phase 8/6 fully-pitched SRG as the main optimum configuration for the same machine dimensions at the same angular speed.

Keywords: generalized matrix approach, linear analysis, renewable applications, switched reluctance generator

Procedia PDF Downloads 175
2413 Effectiveness of the Model in the Development of Teaching Materials for Malay Language in Primary Schools in Singapore

Authors: Salha Mohamed Hussain

Abstract:

As part of the review on the Malay Language curriculum and pedagogy in Singapore conducted in 2010, some recommendations were made to nurture active learners who are able to use the Malay Language efficiently in their daily lives. In response to the review, a new Malay Language teaching and learning package for primary school, called CEKAP (Cungkil – Elicit; Eksplorasi – Exploration; Komunikasi – Communication; Aplikasi – Application; Penilaian – Assessment), was developed from 2012 and implemented for Primary 1 in all primary schools from 2015. Resources developed in this package include the text book, activity book, teacher’s guide, big books, small readers, picture cards, flash cards, a game kit and Information and Communication Technology (ICT) resources. The development of the CEKAP package is continuous until 2020. This paper will look at a model incorporated in the development of the teaching materials in the new Malay Language Curriculum for Primary Schools and the rationale for each phase of development to ensure that the resources meet the needs of every pupil in the teaching and learning of Malay Language in the primary schools. This paper will also focus on the preliminary findings of the effectiveness of the model based on the feedback given by members of the working and steering committees. These members are academicians and educators who were appointed by the Ministry of Education to provide professional input on the soundness of pedagogical approach proposed in the revised syllabus and to make recommendations on the content of the new instructional materials. Quantitative data is derived from the interviews held with these members to gather their input on the model. Preliminary findings showed that the members provided positive feedback on the model and that the comprehensive process has helped to develop good and effective instructional materials for the schools. Some recommendations were also gathered from the interview sessions. This research hopes to provide useful information to those involved in the planning of materials development for teaching and learning.

Keywords: Malay language, materials development, model, primary school

Procedia PDF Downloads 98
2412 Design of a Novel Fractal Multiband Planar Antenna with a CPW-Feed

Authors: T. Benyetho, L. El Abdellaoui, J. Terhzaz, H. Bennis, N. Ababssi, A. Tajmouati, A. Tribak, M. Latrach

Abstract:

This work presents a new planar multiband antenna based on fractal geometry. The structure is optimized and validated in simulation by using CST-MW Studio. To feed this antenna we have used a CPW line, which makes it easy to incorporate with integrated circuits. The simulation results present good input impedance matching and radiation patterns in the GSM band at 900 MHz and the ISM band at 2.4 GHz. The final structure is a dual-band fractal antenna with a total area of 70 x 70 mm² on an FR4 substrate.

Keywords: Antenna, CPW, fractal, GSM, multiband

Procedia PDF Downloads 373
2411 2106 kA/cm² Peak Tunneling Current Density in GaN-Based Resonant Tunneling Diode with an Intrinsic Oscillation Frequency of ~260GHz at Room Temperature

Authors: Fang Liu, JunShuai Xue, JiaJia Yao, GuanLin Wu, ZuMaoLi, XueYan Yang, HePeng Zhang, ZhiPeng Sun

Abstract:

The terahertz spectral range has been in great demand for the last two decades for many photonic and electronic applications. The III-nitride resonant tunneling diode is one of the promising candidates for portable and compact THz sources. A room-temperature microwave oscillator based on a GaN/AlN resonant tunneling diode is reported in this work. The devices, grown by plasma-assisted molecular-beam epitaxy on free-standing c-plane GaN substrates, exhibit highly repeatable and robust negative differential resistance (NDR) characteristics at room temperature. To improve the interface quality in the active region of the RTD, indium surfactant assisted growth is adopted to enhance the surface mobility of metal atoms on the growing film front. Thanks to the lowered valley current associated with the suppression of threading dislocation scattering on the low-dislocation GaN substrate, a record-high peak current density of 2.1 MA/cm² in conjunction with a peak-to-valley current ratio (PVCR) of 1.2 is obtained, which is the best result reported for nitride-based RTDs up to now when the peak current density and PVCR values are considered simultaneously. When biased within the NDR region, microwave oscillations are measured with a fundamental frequency of 0.31 GHz, yielding an output power of 5.37 µW. Impedance mismatch results in the limited output power and oscillation frequency described above. The actual measured intrinsic capacitance is only 30 fF. Using a small-signal equivalent circuit model, the maximum intrinsic frequency of oscillation for these diodes is estimated to be ~260 GHz. This work demonstrates a microwave oscillator based on the resonant tunneling effect, which can meet the demands of terahertz spectral devices and, more importantly, provides guidance for the fabrication of complex nitride terahertz and quantum effect devices.

Keywords: GaN resonant tunneling diode, peak current density, microwave oscillation, intrinsic capacitance

Procedia PDF Downloads 117
2410 Radio Frequency Energy Harvesting Friendly Self-Clocked Digital Low Drop-Out for System-On-Chip Internet of Things

Authors: Christos Konstantopoulos, Thomas Ussmueller

Abstract:

Digital low drop-out (D-LDO) regulators, in contrast to their analog counterparts, provide an architecture for sub-1 V regulation with low power consumption, high power efficiency, and system integration. Towards optimized integration in an ultra-low-power system-on-chip Internet of Things architecture that is operated through a radio frequency energy harvesting scheme, the D-LDO regulator should constitute the main regulator that operates the master clock and the remaining loads of the SoC. In this context, we present a D-LDO with linear-search coarse regulation and asynchronous fine regulation, which incorporates an in-regulator clock generation unit that provides an autonomous, self-starting, and power-efficient D-LDO design. In contrast to contemporary D-LDO designs that employ a ring-oscillator architecture, whose start-up time is dependent on the frequency, this work presents a fast start-up burst oscillator based on a high-gain stage with a wake-up time independent of the coarse regulation frequency. The design is implemented in a 55-nm Global Foundries CMOS process. In order to validate the self-start-up capability of the presented D-LDO in the presence of ultra-low input power, an on-chip test bench with an RF rectifier is implemented as well, which provides the RF-to-DC operation and feeds the D-LDO. Power efficiency and load regulation curves of the D-LDO are presented as extracted from the RF-to-regulated-DC operation. The D-LDO regulator presents 83.6% power efficiency during the RF-to-DC operation with a 3.65 µA load current and a voltage-regulator-referred input power of -27 dBm. It achieves a maximum quiescent current of 486 nA with CL = 75 pF, a maximum current efficiency of 99.2%, and a 1.16x power efficiency improvement compared to its analog voltage regulator counterpart oriented to SoC IoT loads. Complementarily, the transient performance of the D-LDO is evaluated under the transient droop test, and the achieved figure-of-merit is compared with state-of-the-art implementations.

Keywords: D-LDO, Internet of Things, RF energy harvesting, voltage regulators

Procedia PDF Downloads 124
2409 Interlanguage Acquisition of a Postposition ‘e’ in Korean: Analysis of the Korean Novice Learners’ Output

Authors: Eunjung Lee

Abstract:

This study aims to analyze the sentences generated by the beginners who learn ‘e,’ a postposition in Korean and to find out the regularity of learners’ interlanguage upon investigating the usages of ‘e’ that appears by meanings and functions in their interlanguage, and conditions that ‘e’ is used. This study was conducted with mainly two assumptions; first, the learner’s language has the specific type of interlanguage; and second, there is the regularity of interlanguage when students produce ‘e’ under the specific conditions. Learners’ output has various values and can be used as the useful data to understand interlanguage. Therefore, all the sentences containing a postposition ‘e’ by English speaking learners were searched in ‘Learners’ corpus sharing center in The National Institute of Korean Language’ in Korea, and the data were collected upon limiting the levels of learners with Level 1 and 2. 789 sentences that were used with ‘e’ were selected as the final subjects of the analysis. First, to understand the environmental characteristics to be used with a postposition, ‘e’ after summarizing 13 meaning and functions of ‘e’ appeared in three books of Korean dictionary that summarized the Korean grammar, 1) meaning function of ‘e’ that were used in each sentence was classified; 2) the nouns that were combined with ‘e,’ keywords of the sentences, and the characteristics of modifiers, linkers, and predicates appeared in front of ‘e’ were analyzed; 3) the regularity by the novice learners’ meaning and functions were reviewed; and 4) the differences of the regularity by level 1 and 2 learners’ meaning and functions were found. Upon the study results, the novice learners showed 1) they used the nouns related to ‘time(시간), before(전), after(후), next(다음), the next(그다음), then(때), day of the week(요일), and season(계절)’ mainly in front of ‘e’ when they used ‘e’ as the meaning function of time; 2) they used mainly the verbs of ‘go(가다),’ ‘come(오다),’ and ‘go round(다니다)’ as the predicate to match with ‘e’ that was the meaning function of direction and destination; and 3) they used mainly the nouns related to ‘locations or countries’ in front of ‘e,’ a meaning function postposition of ‘place,’ used mainly the verbs ‘be(있다), not be(없다), live(살다), be many(많다)’ after ‘e,’ and ‘i(이) or ka(가)’ was combined mainly in the subject words in case of ‘be(있다), not be(없다)’ or ‘be many(많다),’ and ‘eun(은) or nun(는)’ was combined mainly in the subject words in front of ‘live at’ In addition, 4) they used ‘e’ which indicates ‘cause or reason’ in the form of ‘because( 때문에),’ and 5) used ‘e’ of the subjects as the predicates to match with the predicates such as ‘treat(대하다), like(들다), and catch(걸리다).’ From these results, ‘e’ usage patterns of the Korean novice learners demonstrated very differently by the meaning functions and the learners’ interlanguage regularity could be deducted. However, little difference was found in interlanguage regularity between level 1 and 2. This study has the meaning to try to understand the interlanguage system and regularity in the learners’ acquisition process of postposition ‘e’ and this can be utilized to lessen their errors.

Keywords: interlanguage, interlagnage anaylsis, postposition ‘e’, Korean acquisition

Procedia PDF Downloads 112
2408 A New Model to Perform Preliminary Evaluations of Complex Systems for the Production of Energy for Buildings: Case Study

Authors: Roberto de Lieto Vollaro, Emanuele de Lieto Vollaro, Gianluca Coltrinari

Abstract:

The building sector is responsible, in many industrialized countries, for about 40% of the total energy requirements, so it seems necessary to devote some efforts to this area in order to achieve a significant reduction of energy consumption and of greenhouse gas emissions. The paper presents a study aiming at providing a design methodology able to identify the best configuration of the building/plant system from a technical, economic and environmental point of view. Normally, the classical approach involves an analysis of the building's energy loads under steady-state conditions, and the subsequent selection of measures aimed at improving the energy performance, based on previous experience made by architects and engineers in the design team. Instead, the proposed approach uses a sequence of two well-known, scientifically validated calculation methods (TRNSYS and RETScreen) that allow quite a detailed feasibility analysis. To assess the validity of the calculation model, an existing historical building in Central Italy, which will be the object of restoration and preservative redevelopment, was selected as a case study. The building is made of a basement and three floors, with a total floor area of about 3,000 square meters. The first step has been the determination of the heating and cooling energy loads of the building in a dynamic regime by means of TRNSYS, which allows the real energy needs of the building to be simulated as a function of its use. Traditional methodologies, based as they are on steady-state conditions, cannot faithfully reproduce the effects of varying climatic conditions and of the inertial properties of the structure. With TRNSYS it is possible to obtain quite accurate and reliable results, which allow effective building-HVAC system combinations to be identified. The second step has consisted of using the output data obtained with TRNSYS as input to the calculation model RETScreen, which enables the comparison of different system configurations from the energy, environmental and financial points of view, with an analysis of investment and operation and maintenance costs, thus allowing the economic benefit of possible interventions to be determined. The classical methodology often leads to the choice of conventional plant systems, while RETScreen provides a financial-economic assessment for innovative energy systems with low environmental impact. Computational analysis can help in the design phase, particularly in the case of complex structures with centralized plant systems, by comparing the data returned by the calculation model RETScreen for different design options. For example, the analysis performed on the building taken as a case study found that the most suitable plant solution, taking into account technical, economic and environmental aspects, is the one based on a CCHP system (Combined Cooling, Heating, and Power) using an internal combustion engine.

Keywords: energy, system, building, cooling, electrical

Procedia PDF Downloads 561
2407 Adequate Dietary Intake to Improve Outcome of Urine: Urea Nitrogen with Balance Nitrogen and Total Lymphocyte Count

Authors: Mardiana Madjid, Nurpudji Astuti Taslim, Suryani As'ad, Haerani Rasyid, Agussalim Bukhari

Abstract:

A high level of Urine Urea Nitrogen (UUN) indicates that hypercatabolism occurs in hospitalized patients. A high Total Lymphocyte Count (TLC) indicates the condition of the immune system, adequate wound healing, and limited complications. Adequate dietary intake helps decrease the hypercatabolic status of hospitalized patients. Nitrogen Balance (NB) is simply the difference between nitrogen (N) intake and output. If nitrogen intake exceeds output, a positive NB (anabolism) occurs. This study aims to evaluate the effect of dietary intake on nitrogen balance and total lymphocyte count. Method: A total of 43 patients admitted to Wahidin Sudirohusodo Hospital between 2018 and 2019 and treated for 10 days are included. The inclusion criteria were patients who were treated for 10 days and received food from the hospital orally. Patients did not experience gastrointestinal disorders such as vomiting and diarrhea, did not have impaired kidney or liver function, and expressed approval to participate in this study. During hospitalization, food intake, UUN, serum albumin, nitrogen balance, and TLC were assessed twice, on day 1 and day 10. There was no Physician Clinical Nutrition intervention to correct food intake. UUN was measured from 24-hour urine collected on the second day after admission and on the tenth day. Statistical analysis used SPSS 24 with an observational cohort method. Result: Forty-three participants completed the follow-up (27 men and 18 women). Twenty-two participants were younger than 45 years, 16 were 45 to 60 years, and 4 were over 60 years. On day 1, SGA scores A, B, and C were 8, 32, and 3, respectively, and on day 10 they were 8, 31, and 4. According to the 24-hour dietary recalls, the energy intake during observation went from 522.5 ± 400.4 to 1011.9 ± 545.1 kcal/day (P < 0.05), protein intake from 20.07 ± 17.2 to 40.3 ± 27.3 g/day (P < 0.05), carbohydrates from 92.5 ± 71.6 to 184.8 ± 87.4 g/day, and fat from 5.5 ± 3.86 to 13.9 ± 13.9 g/day. The UUN during the observation went from 6.6 ± 7.3 to 5.5 ± 3.9 g/day, TLC decreased from 1622.9 ± 897.2 to 1319.9 ± 636.3/mm³ (target value 1800/mm³), serum albumin from 3.07 ± 0.76 to 2.9 ± 0.57 g/dL, and NB from -7.5 ± 7.2 to -3.1 ± 4.86. Conclusion: The high level of UUN needs to be corrected by adequate dietary intake to improve NB and TLC status in hospitalized patients.
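For reference, a commonly used clinical approximation of the nitrogen balance computed from protein intake and UUN is sketched below; the 6.25 protein-to-nitrogen conversion factor and the ~4 g/day allowance for non-urea and insensible losses are textbook values, not figures stated by this study.

```latex
% Common clinical approximation (textbook form, not necessarily the study's exact protocol):
\begin{equation}
\mathrm{NB}\ (\mathrm{g/day}) \;=\; \frac{\text{protein intake (g/day)}}{6.25}
\;-\; \bigl(\mathrm{UUN}\ (\mathrm{g/day}) + 4\bigr)
\end{equation}
```

With the day-10 averages reported above (protein 40.3 g/day, UUN 5.5 g/day), this gives roughly 40.3/6.25 − (5.5 + 4) ≈ −3.1 g/day, consistent with the balance reported.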

Keywords: adequate dietary intake, balance nitrogen, total lymphocyte count, urine urea nitrogen

Procedia PDF Downloads 102
2406 A Model to Assess Sustainability Using Multi-Criteria Analysis and Geographic Information Systems: A Case Study

Authors: Antonio Boggia, Luisa Paolotti, Gianluca Massei, Lucia Rocchi, Elaine Pace, Maria Attard

Abstract:

The aim of this paper is to present a methodology and a computer model for sustainability assessment based on the integration of Multi-Criteria Decision Analysis (MCDA) with a Geographic Information System (GIS). It presents the results of a study on the implementation of a model for measuring sustainability, intended to support policy actions for the improvement of sustainability at the territorial level. The aim is to rank areas in order to understand the specific technical and/or financial support that is required to develop sustainable growth. Assessing sustainable development is a multidimensional problem: economic, social, and environmental aspects have to be taken into account at the same time. The tool for a multidimensional representation is a proper set of indicators. The set of indicators must be integrated into a model, that is, an assessment methodology, to be used for measuring sustainability. The model, developed by the Environmental Laboratory of the University of Perugia, is called GeoUmbriaSUIT. It is a calculation procedure developed as a plugin working in the open-source GIS software QuantumGIS. The multi-criteria method used within GeoUmbriaSUIT is the TOPSIS algorithm (Technique for Order Preference by Similarity to Ideal Solution), which defines a ranking based on the distance from the worst point and the closeness to an ideal point, for each of the criteria used. For the sustainability assessment procedure, GeoUmbriaSUIT uses a geographic vector file where the graphic data represent the study area and the single evaluation units within it (the alternatives, e.g., the regions of a country or the municipalities of a region), while the alphanumeric data (attribute table) describe the environmental, economic, and social aspects related to the evaluation units by means of a set of indicators (criteria). The algorithm available in the plugin makes it possible to treat the indicators representing the three dimensions of sustainability individually and to compute three different indices: an environmental index, an economic index, and a social index. The graphic output of the model allows for an integrated assessment of the three dimensions, avoiding aggregation. The separate indices and the graphic output make GeoUmbriaSUIT a readable and transparent tool, since it does not produce, as the final result of the calculations, an aggregate sustainability index, which is often cryptic and difficult to interpret. In addition, it is possible to develop a "back analysis" able to explain the positions obtained by the alternatives in the ranking, based on the criteria used. The case study presented is an assessment of the level of sustainability in the six regions of Malta, an island state in the middle of the Mediterranean Sea and the southernmost member of the European Union. The results show that the integration of MCDA and GIS is an adequate approach for sustainability assessment. In particular, the implemented model is able to provide easy-to-understand results. This is a very important condition for a sound decision support tool, since most of the time decision makers are not experts and need understandable output. In addition, the evaluation path is traceable and transparent.
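The ranking step can be illustrated with a short, self-contained sketch of the generic TOPSIS algorithm (this is not the GeoUmbriaSUIT code; the decision matrix, weights, and benefit/cost flags below are invented for illustration).

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix  : (alternatives x criteria) raw scores
    weights : criterion weights summing to 1
    benefit : True where larger is better, False where smaller is better
    Returns closeness coefficients in [0, 1]; higher means better.
    """
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    b = np.asarray(benefit, dtype=bool)

    # Vector normalization of each criterion column, then weighting.
    v = (m / np.linalg.norm(m, axis=0)) * w

    # Ideal and anti-ideal points per criterion.
    ideal = np.where(b, v.max(axis=0), v.min(axis=0))
    worst = np.where(b, v.min(axis=0), v.max(axis=0))

    d_ideal = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_ideal + d_worst)

# Three hypothetical evaluation units scored on an environmental, an economic
# and a social indicator (the first two are benefit criteria, the last a cost).
scores = topsis([[0.8, 120, 5], [0.6, 150, 3], [0.9, 100, 8]],
                weights=[0.4, 0.4, 0.2],
                benefit=[True, True, False])
print(np.argsort(scores)[::-1])  # indices of units from best to worst
```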

Keywords: GIS, multi-criteria analysis, sustainability assessment, sustainable development

Procedia PDF Downloads 268
2405 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, the use of Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the current developing technology is that images are scarce, with little variation in the gestures presented to the recognition program, and often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a legacy AI system, pretrained for a generic object recognition task, with a corrector method used to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data even when covering a broad range of sign languages such as American Sign Language, British Sign Language, and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is regarded as an operator mapping an input from the set of images u ∈ U to an output in the set of predicted class labels q ∈ Q, where q represents the alphanumeric and the language it comes from. These inputs and outputs, along with the internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can then be applied by centering S and Y, i.e., subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
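The preprocessing pipeline of such a corrector can be sketched as follows. This is a generic illustration of centering, Kaiser-rule dimensionality reduction, whitening, and a single linear separating hyperplane on synthetic feature vectors; it is not the authors' implementation, and the data and decision rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the measurement set S: feature vectors from correct
# predictions (M) and from incorrect predictions (Y) of the legacy network.
M = rng.normal(0.0, 1.0, size=(500, 64))
Y = rng.normal(1.5, 1.0, size=(40, 64))
S = np.vstack([M, Y])

# 1) Centre the data on the mean of S.
mean_S = S.mean(axis=0)
S_c, Y_c = S - mean_S, Y - mean_S

# 2) Kaiser rule: keep principal components whose eigenvalue exceeds the
#    average eigenvalue of the covariance matrix.
cov = np.cov(S_c, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
keep = eigvals > eigvals.mean()
P, lam = eigvecs[:, keep], eigvals[keep]

# 3) Whiten: project onto the kept components and rescale to unit variance.
whiten = lambda X: (X @ P) / np.sqrt(lam)
Sw, Yw = whiten(S_c), whiten(Y_c)

# 4) One linear corrector: a hyperplane oriented along the error cluster,
#    with a midpoint-style threshold between the two projected means.
w = Yw.mean(axis=0)
w /= np.linalg.norm(w)
threshold = ((Sw @ w).mean() + (Yw @ w).mean()) / 2.0

flagged = (Sw @ w) > threshold  # measurements reported as likely errors
print(f"{flagged.sum()} of {len(Sw)} measurements flagged for correction")
```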

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 87
2404 Philippine Site Suitability Analysis for Biomass, Hydro, Solar, and Wind Renewable Energy Development Using Geographic Information System Tools

Authors: Jara Kaye S. Villanueva, M. Rosario Concepcion O. Ang

Abstract:

For the past few years, the Philippines has depended for most of its energy on oil, coal, and other fossil fuels. According to the Department of Energy (DOE), the dominance of coal in the energy mix will continue until the year 2020. The expanding energy needs in the country have led to increasing efforts to promote and develop renewable energy. This research is part of the government initiative in preparation for renewable energy development and expansion in the country. The Philippine Renewable Energy Resource Mapping from Light Detection and Ranging (LiDAR) Surveys is a three-year government project which aims to assess and quantify the renewable energy potential of the country and to present it in usable maps. This study focuses on the site suitability analysis of four renewable energy sources – biomass (coconut, corn, rice, and sugarcane), hydro, solar, and wind energy. The site assessment is a key component in determining and assessing the most suitable locations for the construction of renewable energy power plants. The method maximizes the use of technical resource assessment while also taking into account environmental, social, and accessibility aspects in identifying potential sites, by integrating two different approaches: the Multi-Criteria Decision Analysis (MCDA) method and Geographic Information System (GIS) tools. For the MCDA, the Analytic Hierarchy Process (AHP) is employed to determine the parameters needed for the suitability analysis. To structure these site suitability parameters, experts from different fields were consulted – scientists, policy makers, environmentalists, and industrialists. A well-represented group of consultees is needed to avoid bias in the resulting hierarchy levels and weight matrices. AHP pairwise matrix computation is used to derive weights per level from the experts' feedback. Threshold values derived from related literature, international studies, and government regulations were then validated with energy specialists from the DOE. Geospatial analysis using GIS tools translates these decision support outputs into visual maps. In particular, this study uses Euclidean distance to compute the distance values for each parameter, a fuzzy membership algorithm to normalize the Euclidean distance output, and the weighted overlay tool to aggregate the layers. Using the natural breaks algorithm, the suitability ratings of each map are classified into five discrete categories of suitability index: (1) not suitable, (2) least suitable, (3) suitable, (4) moderately suitable, and (5) highly suitable. In this method, classes are formed so that similar values are grouped together and class boundaries are placed where the differences between values are largest. Results show that, over the entire Philippine area of responsibility, biomass has the highest suitability rating, with rice as the most suitable at 75.76%, whereas wind has the lowest suitability at 10.28%. Solar and hydro fall between the two, with suitability values of 28.77% and 21.27%, respectively.
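The weight-derivation step can be illustrated with a short sketch of the standard AHP calculation (principal eigenvector of a pairwise comparison matrix plus the consistency ratio); the example matrix and criterion names are invented for illustration and do not come from the study.

```python
import numpy as np

# Saaty's random consistency indices for matrices of size n = 1..9.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(pairwise):
    """Return criterion weights and the consistency ratio of a pairwise matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)               # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                              # normalized priority vector
    ci = (eigvals[k].real - n) / (n - 1)      # consistency index
    cr = ci / RI[n] if RI[n] else 0.0         # consistency ratio (CR < 0.1 is acceptable)
    return w, cr

# Hypothetical 3x3 comparison of slope, distance to road, and distance to grid.
matrix = [[1,   3,   5],
          [1/3, 1,   2],
          [1/5, 1/2, 1]]
weights, cr = ahp_weights(matrix)
print(weights.round(3), round(cr, 3))
```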

Keywords: site suitability, biomass energy, hydro energy, solar energy, wind energy, GIS

Procedia PDF Downloads 133
2403 First Order Filter Based Current-Mode Sinusoidal Oscillators Using Current Differencing Transconductance Amplifiers (CDTAs)

Authors: S. Summart, C. Saetiaw, T. Thosdeekoraphat, C. Thongsopa

Abstract:

This article presents new current-mode oscillator circuits using CDTAs, designed from a block diagram approach. The proposed circuits consist of two CDTAs and two grounded capacitors. The condition of oscillation and the frequency of oscillation can be adjusted electronically. The circuits have high output impedance and use only grounded capacitors without any external resistors, which makes them very suitable for future development into an integrated circuit. The results of the PSPICE simulation correspond to the theoretical analysis.
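As a rough illustration of the electronic tuning claim, transconductance-based oscillators with two active blocks and two grounded capacitors commonly have an oscillation frequency of the form f₀ = (1/2π)·√(g_m1·g_m2/(C₁·C₂)), with each transconductance set by a bias current. This generic form is an assumption and not the exact expression for the circuits in the paper; the sketch below simply evaluates it.

```python
from math import pi, sqrt

def oscillation_frequency(gm1, gm2, c1, c2):
    """f0 = sqrt(gm1*gm2/(C1*C2)) / (2*pi).

    Assumed generic form for a two-CDTA, two-grounded-capacitor oscillator;
    the actual expression depends on the specific topology.
    """
    return sqrt(gm1 * gm2 / (c1 * c2)) / (2 * pi)

# Hypothetical values: transconductances of ~1 mA/V (set by the bias currents)
# and 10 nF grounded capacitors.
f0 = oscillation_frequency(gm1=1e-3, gm2=1e-3, c1=10e-9, c2=10e-9)
print(f"f0 ≈ {f0 / 1e3:.1f} kHz")  # doubling both bias currents roughly doubles f0
```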

Keywords: current-mode, quadrature oscillator, block diagram, CDTA

Procedia PDF Downloads 444
2402 Improving the Efficiency of Pelton Wheel and Cross-Flow Micro Hydro Power Plants

Authors: Loice K. Gudukeya, Charles Mbohwa

Abstract:

The research investigates hydropower plant efficiency with a view to improving the power output while keeping the overall project cost per kilowatt produced within an acceptable range. It reviews the Pelton and cross-flow turbines commonly employed in the region for micro-hydro power plants. Turbine parameters such as surface texture, materials used, and fabrication processes are addressed with the intention of increasing the efficiency of these micro hydro-power plants by 20 to 25 percent.
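To make the efficiency claim concrete, the standard hydropower relation P = η·ρ·g·Q·H can be used to estimate the extra output from a given efficiency gain. The flow, head, and efficiency figures below are illustrative assumptions, not data from the study.

```python
RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydro_power_kw(flow_m3s, head_m, efficiency):
    """Electrical output of a hydro turbine: P = eta * rho * g * Q * H, in kW."""
    return efficiency * RHO * G * flow_m3s * head_m / 1000.0

# Hypothetical micro-hydro site: 0.15 m^3/s of flow over a 25 m head.
baseline = hydro_power_kw(0.15, 25.0, efficiency=0.60)
improved = hydro_power_kw(0.15, 25.0, efficiency=0.60 * 1.20)  # 20% relative gain
print(f"baseline ≈ {baseline:.1f} kW, improved ≈ {improved:.1f} kW")
```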

Keywords: hydro, power plant, efficiency, manufacture

Procedia PDF Downloads 412