Search results for: input output linearization
2917 How Technology Import Improve the Enterprise's Innovation Capacity: The Mediating Role of Absorptive Capacity
Authors: Zhan Zheng-Qun, Li Min, Xie Yan
Abstract:
Technology plays a key role in determining productivity and economic development in a country. The process of enterprises' innovation can be seen as a process of knowledge management, including knowledge attainment, acquisition, conversion, and integration into new knowledge. This research analyzes the influencing factors and mechanism of the independent innovation of high-tech enterprises over the years 1995-2013. The results show that technology import has a significant positive effect on the innovation capacity of enterprises. The absorptive capacity, represented by research outlay input and research staff input, also has a significant positive effect on the innovation capacity of enterprises. Furthermore, the effect of technology import on the independent research capacity of high-tech enterprises is significantly and positively affected by their absorptive capacity.
Keywords: technology import, innovation capacity, absorptive capacity, high-tech industry
Procedia PDF Downloads 283
2916 Concrete Mix Design Using Neural Network
Authors: Rama Shanker, Anil Kumar Sachan
Abstract:
The basic ingredients of concrete are cement, fine aggregate, coarse aggregate and water. To produce a concrete of certain specified properties, optimum proportions of these ingredients are mixed. The important factors which govern the mix design are the grade of concrete, the type of cement, and the size, shape and grading of the aggregates. The concrete mix design method is based on experimentally evolved empirical relationships between the factors in the choice of mix design. The basic drawbacks of this method are that it does not always produce the desired strength, the calculations are cumbersome, and a number of tables have to be consulted to arrive at a trial mix proportion; moreover, the variation in attainment of the desired strength is uncertain below the target strength and may even fail. To solve this problem, a large number of cubes of standard grades were prepared and their 28-day strengths determined for different combinations of cement, fine aggregate, coarse aggregate and water. An artificial neural network (ANN) was prepared using these data. The inputs of the ANN were the grade of concrete, the type of cement, and the size, shape and grading of the aggregates, and the outputs were the proportions of the various ingredients. With the help of these inputs and outputs, the ANN was trained using a feed-forward back-propagation model. Finally, the trained ANN was validated, and it was seen that it gave results with a maximum error of 4 to 5%. Hence, a specific type of concrete can be prepared from given material properties, and the proportions of these materials can be quickly evaluated using the proposed ANN.
Keywords: aggregate proportions, artificial neural network, concrete grade, concrete mix design
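As a rough illustration of the mapping described in this abstract (mix-design factors in, ingredient proportions out), the following sketch trains a small feed-forward network with scikit-learn. The feature encoding, layer size and the three example rows are illustrative assumptions, not data from the paper.

```python
# A minimal sketch (not the authors' code) of the feed-forward network described above.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical training rows: [grade (MPa), cement type code, max aggregate size (mm),
# shape code, fineness modulus] -> [cement, fine agg., coarse agg., water] (kg/m^3)
X = np.array([[25, 1, 20.0, 0, 2.6],
              [30, 1, 20.0, 1, 2.8],
              [35, 2, 12.5, 0, 2.7]])
y = np.array([[320, 700, 1150, 186],
              [350, 680, 1140, 182],
              [380, 660, 1120, 178]])

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(10,), activation="relu",
                     solver="adam", max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)

# Predict mix proportions for a new specification
print(model.predict(scaler.transform([[28, 1, 20.0, 0, 2.7]])))
```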
Procedia PDF Downloads 389
2915 Time Parameter Based for the Detection of Catastrophic Faults in Analog Circuits
Authors: Arabi Abderrazak, Bourouba Nacerdine, Ayad Mouloud, Belaout Abdeslam
Abstract:
In this paper, a new test technique for analog circuits using time-mode simulation is proposed for the detection of single catastrophic faults in analog circuits. This test process is performed to overcome the problem of catastrophic faults escaping a DC-mode test applied to the inverter amplifier in previous research works. The circuit under test is a second-order low-pass filter constructed around this type of amplifier but performing a function that differs from that of the previous test. The test approach performed in this work is based on two key elements: the first concerns the unique square pulse signal selected as the input test vector to stimulate the fault effect at the circuit output response. The second element is the conversion of the filter response into a sequence of square pulses obtained from an analog comparator. This signal conversion is achieved through a fixed reference threshold voltage of the comparison circuit. The measurement of the durations of the first three response pulses is regarded as the fault detection parameter on one hand, and as a fault signature helping to fully establish an analog circuit fault diagnosis on the other. The results obtained so far are very promising since the approach has lifted the fault coverage ratio in both modes to over 90% and has revealed the harmful side of faults that had been masked in a DC-mode test.
Keywords: analog circuits, analog faults diagnosis, catastrophic faults, fault detection
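A minimal sketch of the measurement idea described above, thresholding the filter response like an analog comparator and timing the first three pulses; the decaying oscillation stands in for the real circuit response and is purely illustrative.

```python
import numpy as np

def first_pulse_durations(signal, t, threshold, n_pulses=3):
    """Convert an analog response into a square-pulse sequence by comparing it
    with a fixed threshold, then return the durations of the first n pulses."""
    high = signal > threshold                     # comparator output
    edges = np.diff(high.astype(int))
    rises = t[1:][edges == 1]
    falls = t[1:][edges == -1]
    falls = falls[falls > rises[0]]               # pair each rise with the next fall
    return falls[:n_pulses] - rises[:n_pulses]

# Illustrative example: a decaying oscillation in place of the filter response
t = np.linspace(0, 1e-3, 10000)
response = np.exp(-3000 * t) * np.sin(2 * np.pi * 10e3 * t)
print(first_pulse_durations(response, t, threshold=0.05))
```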
Procedia PDF Downloads 441
2914 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation
Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk
Abstract:
The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among the possible fields of utilization. In all these fields, the amount of collected data is increasing quickly, but with the increase of the data, computation speed becomes the critical factor. Data reduction is one of the solutions to this problem. Removing the redundancy in rough sets can be achieved with the reduct. Many algorithms for generating the reduct have been developed, but most of them are only software implementations and therefore have many limitations. A microprocessor uses a fixed word length and consumes a lot of time for both fetching and processing of instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process the data faster than software. A reduct is a subset of the condition attributes that provides the discernibility of the objects. For a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes; none of its elements can be removed without affecting the classification power of all condition attributes. Moreover, every reduct contains all the attributes from the core. In this paper, the hardware implementation of a two-stage greedy algorithm to find one reduct is presented. The decision table is used as the input. The output of the algorithm is the superreduct, which is the reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table. The algorithm described above has two disadvantages: i) it generates the superreduct instead of the reduct, ii) the additional first stage may be unnecessary if the core is empty. But for systems focused on fast computation of the reduct, the first disadvantage is not the key problem. The core calculation can be achieved with a combinational logic block, and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit for controlling the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC. The execution times of the reduct calculation in hardware and software were compared. The results show an increase in the speed of data processing.
Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set
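The two-stage greedy procedure described above can be sketched in software as follows. The toy decision table and the reading of "most common attributes" as the attributes most frequent in the still-undiscerned entries of the discernibility matrix are assumptions made for illustration, not the authors' hardware design.

```python
from itertools import combinations

def two_stage_superreduct(rows, decision):
    """Stage 1: core from singleton entries of the discernibility matrix.
    Stage 2: greedily add the most frequently occurring attributes until every
    pair of objects with different decisions is discerned (a superreduct)."""
    n_attr = len(rows[0])
    # Discernibility entries: attribute sets distinguishing pairs with different decisions
    entries = []
    for i, j in combinations(range(len(rows)), 2):
        if decision[i] != decision[j]:
            diff = {a for a in range(n_attr) if rows[i][a] != rows[j][a]}
            if diff:
                entries.append(diff)
    # Stage 1: core = attributes appearing as singleton entries
    reduct = {next(iter(e)) for e in entries if len(e) == 1}
    # Stage 2: add the attribute most frequent in the still-uncovered entries
    uncovered = [e for e in entries if not (e & reduct)]
    while uncovered:
        counts = {}
        for e in uncovered:
            for a in e:
                counts[a] = counts.get(a, 0) + 1
        best = max(counts, key=counts.get)
        reduct.add(best)
        uncovered = [e for e in uncovered if best not in e]
    return reduct

# Toy decision table: 4 condition attributes, binary decision
rows = [(1, 0, 1, 0), (1, 1, 1, 0), (0, 0, 1, 1), (0, 1, 0, 1)]
decision = [0, 0, 1, 1]
print(two_stage_superreduct(rows, decision))  # e.g. {0}
```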
Procedia PDF Downloads 219
2913 Noise and Thermal Analyses of Memristor-Based Phase Locked Loop Integrated Circuit
Authors: Naheem Olakunle Adesina
Abstract:
The memristor is considered one of the promising candidates for nanoelectronic engineering and applications. Owing to its high compatibility with CMOS, nanoscale size, and low power consumption, the memristor has been employed in the design of commonly used circuits such as the phase-locked loop (PLL). In this paper, we designed a memristor-based loop filter (LF) together with the other components of the PLL. Following this, we evaluated the noise-rejection feature of the loop filter by comparing the noise levels of the input and output signals of the filter. Our SPICE simulation results showed that the memristor behaves like a linear resistor at high frequencies. The results also showed that the loop filter blocks the high-frequency components from the phase frequency detector so as to provide a stable control voltage to the voltage-controlled oscillator (VCO). In addition, we examined the effects of temperature on the performance of the designed phase-locked loop circuit. A critical temperature, at which there is a frequency drift of the VCO as a result of variations in the control voltage, is identified. In conclusion, the memristor is a suitable choice for nanoelectronic systems owing to its small area, low power consumption, dense nature, high switching speed, and endurance. The proposed memristor-based loop filter, together with the other components of the phase-locked loop, can be designed using a memristive emulator and EDA tools in current CMOS technology and simulated.
Keywords: Fast Fourier Transform, hysteresis curve, loop filter, memristor, noise, phase locked loop, voltage controlled oscillator
Procedia PDF Downloads 186
2912 Execution of Optimization Algorithm in Cascaded H-Bridge Multilevel Inverter
Authors: M. Suresh Kumar, K. Ramani
Abstract:
This paper proposes harmonic elimination for a cascaded H-bridge multilevel inverter using the Selective Harmonic Elimination Pulse Width Modulation (SHE-PWM) method programmed with a Particle Swarm Optimization (PSO) algorithm. The PSO method efficiently determines the switching angles required to eliminate low-order harmonics up to the 11th order from the inverter output voltage waveform while keeping the magnitude of the fundamental at the desired value. Results demonstrate that the proposed method efficiently eliminates a great number of specific harmonics and that the output voltage exhibits minimum Total Harmonic Distortion. The results also show that the PSO algorithm reaches the global solution faster than other algorithms.
Keywords: multi-level inverter, Selective Harmonic Elimination Pulse Width Modulation (SHEPWM), Particle Swarm Optimization (PSO), Total Harmonic Distortion (THD)
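A minimal sketch of the SHE-PWM angle search described above, assuming a normalized fundamental target (modulation index), three eliminated harmonics (5th, 7th, 11th) and generic PSO constants; none of these values come from the paper.

```python
import numpy as np

def she_cost(theta, m=0.8, harmonics=(5, 7, 11)):
    """Cost for SHE-PWM: hit the target fundamental (modulation index m) and
    drive the selected low-order harmonics to zero. theta in radians, ascending."""
    if np.any(np.diff(theta) <= 0) or theta[0] <= 0 or theta[-1] >= np.pi / 2:
        return 1e6                                  # penalize infeasible angle sets
    fund = np.sum(np.cos(theta)) / len(theta)       # normalized fundamental
    cost = (fund - m) ** 2
    for k in harmonics:
        cost += (np.sum(np.cos(k * theta)) / len(theta)) ** 2
    return cost

def pso(n_angles=4, swarm=40, iters=400, seed=0):
    rng = np.random.default_rng(seed)
    x = np.sort(rng.uniform(0.01, np.pi / 2 - 0.01, (swarm, n_angles)), axis=1)
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([she_cost(p) for p in x])
    for _ in range(iters):
        g = pbest[np.argmin(pcost)]                 # global best particle
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        cost = np.array([she_cost(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
    return pbest[np.argmin(pcost)], pcost.min()

angles, cost = pso()
print(np.degrees(angles), cost)
```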
Procedia PDF Downloads 603
2911 Flow over an Exponentially Stretching Sheet with Hall and Cross-Diffusion Effects
Authors: Srinivasacharya Darbhasayanam, Jagadeeshwar Pashikanti
Abstract:
This paper analyzes the Soret and Dufour effects on mixed convection flow, heat and mass transfer from an exponentially stretching surface in a viscous fluid with the Hall effect. The governing partial differential equations are transformed into ordinary differential equations using similarity transformations. The nonlinear coupled ordinary differential equations are reduced to a system of linear differential equations using the successive linearization method, and the resulting linear system is then solved using the Chebyshev pseudo-spectral method. The numerical results for the velocity components, temperature and concentration are presented graphically. The obtained results are compared with previously published results and are found to be in excellent agreement. It is observed from the present analysis that the primary and secondary velocities and the concentration increase, and the temperature decreases, with an increase in the values of the Soret parameter. An increase in the Dufour parameter increases both the primary and secondary velocities and the temperature, and decreases the concentration.
Keywords: exponentially stretching sheet, Hall current, heat and mass transfer, Soret and Dufour effects
Procedia PDF Downloads 213
2910 Estimation of Chronic Kidney Disease Using Artificial Neural Network
Authors: Ilker Ali Ozkan
Abstract:
In this study, an artificial neural network model has been developed to estimate chronic kidney failure, which is a common disease. The patients' age, their blood and biochemical values, and various chronic diseases, forming 24 input features in total, are used for the estimation process. The input data were subjected to preprocessing because they contain both missing values and nominal values. The 147 patient records obtained after preprocessing were divided into 70% training and 30% testing data. As a result of the study, the artificial neural network model with 25 neurons in the hidden layer was found to be the model with the lowest error value. Chronic kidney failure could be estimated accurately at a rate of 99.3% using this artificial neural network model. The developed artificial neural network was found to be successful for the estimation of chronic kidney failure using clinical data.
Keywords: estimation, artificial neural network, chronic kidney failure disease, disease diagnosis
Procedia PDF Downloads 447
2909 A Low Phase Noise CMOS LC Oscillator with Tail Current-Shaping
Authors: Amir Mahdavi
Abstract:
In this paper, a circuit topology of voltage-controlled oscillators (VCO) suitable for ultra-low-phase-noise operation is introduced. To this end, a new low-phase-noise cross-coupled oscillator is designed by using the general topology of the cross-coupled oscillator and adding a differential stage for tail current shaping. In addition, a tail current shaping technique to improve phase noise in differential LC VCOs is presented. The tail current becomes large when the oscillator output voltage reaches its maximum or minimum value, where the sensitivity of the output phase to noise is smallest, and becomes small when the phase noise sensitivity is large. The proposed circuit does not use extra power or extra noisy active devices. Furthermore, this topology occupies a small area. Simulation results show an improvement in phase noise of 2.5 dB under the same conditions at a carrier frequency of 1 GHz for GSM applications. The power consumption of the proposed circuit is 2.44 mW, and a figure of merit (FOM) of -192.2 dBc/Hz is achieved for the new oscillator.
Keywords: LC oscillator, low phase noise, current shaping, diff mode
Procedia PDF Downloads 600
2908 Application on Metastable Measurement with Wide Range High Resolution VDL Circuit
Authors: Po-Hui Yang, Jing-Min Chen, Po-Yu Kuo, Chia-Chun Wu
Abstract:
This paper proposes a high-resolution Vernier Delay Line (VDL) measurement circuit with a coarse and fine detection mechanism, which improves the trade-off between high resolution and the number of delay cells in traditional VDL circuits. The measuring time of the proposed circuit also satisfies the high-resolution requirements. First, the range of the input signal is detected by the coarse-detection VDL. The delayed input signal is then passed to the fine-detection VDL to be measured with better accuracy. The circuit is implemented in a 0.18 μm process with an operating frequency of 100 MHz, and a resolution of 2.0 ps is achieved with only 16 delay stages. The test range is 170 ps wide, and 17% of the stages are saved compared with a traditional single delay line circuit.
Keywords: vernier delay line, D-type flip-flop, DFF, metastable phenomenon
Procedia PDF Downloads 597
2907 Economic Analysis of Coffee Cultivation in Kodagu District of Karnataka State, India
Authors: P. S. Dhananjaya Swamy, B. Chinnappa, G. B. Ramesh, Naveen P. Kumar
Abstract:
Kodagu district is one of the most densely forested districts in India, with around sixty-five per cent of its geographical area under tree cover. Nearly 53 per cent of the flora of Kodagu is endemic. The district is also a hotspot of endemic orchids found mainly in the Thadiandamol. Shade-grown, eco-friendly coffee farms are perhaps among the select few places on this planet where nature runs wild. Kodagu accounts for more than 8.8 per cent of the floral diversity of Karnataka state. Estimation of the unit cost of cultivation plays a vital role in shaping government programmes and market intervention policies. On average, planters incurred around Rs. 17041 per acre. The extent of production risk was highest among the small category of planters (66%) compared to the other two categories, exhibiting production instability. The results show that coffee productivity in medium plantations was 1051.2 kg per acre, as against 758.5 and 789.2 kg in the case of small and large plantations. Annual net return per acre was highest in the case of medium planters (Rs. 26109.3), as against Rs. 20566.7 and Rs. 18572.7 in the case of small and large planters. The cost of production was lowest in the case of small planters (Rs. 18.9 per kg of output), followed by medium planters (Rs. 21.2 per kg of output) and large planters (Rs. 22.5 per kg of output). The productivity of coffee is lower whenever it is grown under high shade and native tree cover, at around 6 quintals per acre compared with around 8.9 quintals per acre under low shade conditions, without a significant difference in the amount invested in growing coffee. Net gain was lower by Rs. 15.5 per kg for planters growing under high shade and native tree cover when compared with low shade and exotic tree cover.
Keywords: coffee, cultivation, economics, Kodagu
Procedia PDF Downloads 196
2906 The Development of an Automated Computational Workflow to Prioritize Potential Resistance Variants in HIV Integrase Subtype C
Authors: Keaghan Brown
Abstract:
The prioritization of drug resistance mutations impacting protein folding or protein-drug and protein-DNA interactions within macromolecular systems is critical to the success of treatment regimens. With a continual increase in computational tools to assess these impacts, the need for scalability and reproducibility has become an essential component of computational analysis and experimental research. Here we introduce a bioinformatics pipeline that combines several structural analysis tools in a simplified workflow, optimizing the available computational hardware and software to automatically ease the flow of data transformations. Utilizing pre-established software tools, it was possible to develop a pipeline with a set of pre-defined functions that automates mutation introduction into the HIV-1 Integrase protein structure, calculates the gain and loss of polar interactions, and calculates the change in protein folding energy. Additionally, an automated molecular dynamics analysis was implemented, which reduces the constant need for user input and output management. The resulting pipeline, Automated Mutation Introduction and Analysis (AMIA), is an open-source set of scripts designed to introduce and analyse the effects of mutations on the static protein structure as well as on the results of the multi-conformational states from molecular dynamic simulations. The workflow allows the user to visualize all outputs in a user-friendly manner, thereby enabling the prioritization of variant systems for experimental validation.
Keywords: automated workflow, variant prioritization, drug resistance, HIV Integrase
Procedia PDF Downloads 77
2905 Exploring the Impact of Input Sequence Lengths on Long Short-Term Memory-Based Streamflow Prediction in Flashy Catchments
Authors: Farzad Hosseini Hossein Abadi, Cristina Prieto Sierra, Cesar Álvarez Díaz
Abstract:
Predicting streamflow accurately in flashy catchments prone to floods is a major research and operational challenge in hydrological modeling. Recent advancements in deep learning, particularly Long Short-Term Memory (LSTM) networks, have shown promise in achieving accurate hydrological predictions at daily and hourly time scales. In this work, a multi-timescale LSTM (MTS-LSTM) network was applied to regional hydrological prediction at an hourly time scale in flashy catchments. The case study includes 40 catchments located in the Basque Country, in the north of Spain. We explore the impact of hyperparameters on the performance of streamflow predictions given by regional deep learning models through systematic hyperparameter tuning, in which optimal regional values for different catchments are identified. The results show that predictions are highly accurate, with Nash-Sutcliffe (NSE) and Kling-Gupta (KGE) metric values as high as 0.98 and 0.97, respectively. A principal component analysis reveals that a hyperparameter related to the length of the input sequence contributes most significantly to the prediction performance. The findings suggest that input sequence lengths have a crucial impact on model prediction performance. Moreover, catchment-scale analysis reveals distinct sequence lengths for individual basins, highlighting the necessity of customizing this hyperparameter based on each catchment's characteristics. This aligns with the well-known "uniqueness of place" paradigm. In prior research, tuning the length of the input sequence of LSTMs has received limited attention in the field of streamflow prediction. Initially it was set to 365 days to capture a full annual water cycle; later, limited systematic hyperparameter tuning using grid search suggested a modification to 270 days. However, despite the significance of this hyperparameter in hydrological predictions, studies have usually overlooked its tuning and fixed it at 365 days. This study, employing a simultaneous systematic hyperparameter tuning approach, emphasizes the critical role of input sequence length as an influential hyperparameter in configuring LSTMs for regional streamflow prediction. Proper tuning of this hyperparameter is essential for achieving accurate hourly predictions using deep learning models.
Keywords: LSTMs, streamflow, hyperparameters, hydrology
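A minimal sketch of how the input sequence length enters an LSTM streamflow setup. The synthetic forcing data, hidden size and the two candidate window lengths are placeholders chosen for illustration, not values from the study.

```python
import numpy as np
import torch
import torch.nn as nn

def make_windows(series, seq_len):
    """Turn a (time, features) array into (samples, seq_len, features) inputs and
    next-step streamflow targets; seq_len is the hyperparameter studied above."""
    X = np.stack([series[i:i + seq_len] for i in range(len(series) - seq_len)])
    y = series[seq_len:, 0]                       # assume column 0 is streamflow
    return torch.tensor(X, dtype=torch.float32), torch.tensor(y, dtype=torch.float32)

class StreamflowLSTM(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)  # predict flow at the next time step

# Illustrative comparison of two candidate input sequence lengths (untrained
# models; the loss values only demonstrate the data shapes involved)
series = np.random.rand(2000, 3)                  # placeholder forcing + flow data
for seq_len in (72, 270):
    X, y = make_windows(series, seq_len)
    model = StreamflowLSTM(n_features=3)
    loss = nn.MSELoss()(model(X), y)
    print(seq_len, float(loss))
```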
Procedia PDF Downloads 69
2904 Design and Thermal Analysis of Power Harvesting System of a Hexagonal Shaped Small Spacecraft
Authors: Mansa Radhakrishnan, Anwar Ali, Muhammad Rizwan Mughal
Abstract:
Many universities around the world are working on modular and low-budget architectures for small spacecraft to reduce the development cost of the overall system. This paper focuses on the design of a modular solar power harvesting system for a hexagonal-shaped small satellite. The designed solar power harvesting system is composed of solar panels and power converter subsystems. The solar panel is composed of solar cells mounted on the external face of a printed circuit board (PCB), while the electronic components for power conversion are mounted on the interior side of the same PCB. The solar panel, with dimensions of 16.5 cm × 99 cm, is composed of 36 solar cells (each 4 cm × 7 cm) divided into four parallel banks, where each bank consists of 9 solar cells. The output voltage of a single solar cell is 2.14 V, and the combined output voltage of 9 series-connected solar cells is around 19.3 V. The output voltage of the solar panel is boosted to the satellite power distribution bus voltage level (28 V) by a boost converter working on a constant-voltage maximum power point tracking (MPPT) technique. The solar panel module is an eight-layer PCB with an embedded coil in 4 internal layers. This coil is used to control the attitude of the spacecraft; it consumes power to generate a magnetic field and rotate the spacecraft. As the power converter and distribution subsystem components are mounted on the internal layer of the PCB, it is mandatory to perform a thermal analysis in order to ensure that the overall module temperature is within thermal safety limits. The main focus of the overall design is on compactness, miniaturization, and efficiency enhancement.
Keywords: small satellites, power subsystem, efficiency, MPPT
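The panel arithmetic quoted above can be checked with a short calculation; the per-cell current and converter efficiency below are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope check of the panel and boost-converter figures above.
cells_per_bank = 9
banks = 4
v_cell = 2.14                       # V, single solar cell
v_bank = cells_per_bank * v_cell    # ~19.3 V, matches the series-string figure above
i_cell = 0.5                        # A, assumed cell current at the operating point
p_panel = v_bank * i_cell * banks   # banks are in parallel, so their currents add

v_bus = 28.0                        # satellite power distribution bus voltage
eta_boost = 0.92                    # assumed boost-converter efficiency
duty = 1 - v_bank / v_bus           # ideal boost-converter duty cycle to reach 28 V
i_bus = p_panel * eta_boost / v_bus

print(f"bank voltage {v_bank:.1f} V, panel power {p_panel:.1f} W")
print(f"boost duty cycle {duty:.2f}, bus current {i_bus:.2f} A")
```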
Procedia PDF Downloads 74
2903 GA3C for Anomalous Radiation Source Detection
Authors: Chia-Yi Liu, Bo-Bin Xiao, Wen-Bin Lin, Hsiang-Ning Wu, Liang-Hsun Huang
Abstract:
In order to reduce the risk of radiation damage that personnel may suffer during operations in a radiation environment, the use of automated guided vehicles to assist or replace on-site personnel in the radiation environment has become a key technology and an important trend. In this paper, we demonstrate a proof of concept for an autonomous, self-learning radiation source searcher in an unknown environment without a map. The research uses the GPU version of the Asynchronous Advantage Actor-Critic network (GA3C), a deep reinforcement learning method, to search for radiation sources. The searcher network, based on the GA3C architecture, learned in a self-directed manner how to search for the anomalous radiation source, improving over 1 million training episodes in three simulation environments. In each training episode, the radiation source position, the radiation source intensity, and the starting position are all set randomly within one simulation environment. The input to the searcher network is the fused data from a 2D laser scanner and an RGB-D camera as well as the value of the radiation detector. The output actions are the linear and angular velocities. The searcher network is trained in a simulation environment to accelerate the learning process. The well-performing searcher network is deployed on a real unmanned vehicle, a Dashgo E2, which carries a YDLIDAR G4 LIDAR, an Intel D455 RGB-D camera, and a radiation detector made by the Institute of Nuclear Energy Research. In the field experiment, the unmanned vehicle is able to find an 18.5 MBq Na-22 radiation source by itself and avoid obstacles simultaneously without human interference.
Keywords: deep reinforcement learning, GA3C, source searching, source detection
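A minimal actor-critic sketch in the spirit of the searcher network described above; the observation size (fused LIDAR ranges, depth features and one detector value), hidden width and the single dummy update step are illustrative assumptions rather than the GA3C implementation used in the paper.

```python
import torch
import torch.nn as nn

class SearcherNet(nn.Module):
    """Maps a fused observation vector to a Gaussian policy over (linear velocity,
    angular velocity) plus a state-value estimate for the advantage."""
    def __init__(self, obs_dim=425, hidden=256):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, 2)            # mean linear and angular velocity
        self.log_std = nn.Parameter(torch.zeros(2))
        self.value = nn.Linear(hidden, 1)         # critic head

    def forward(self, obs):
        h = self.body(obs)
        return self.mu(h), self.log_std.exp(), self.value(h).squeeze(-1)

# One illustrative actor-critic update step on a dummy observation batch
net = SearcherNet()
obs = torch.randn(8, 425)
ret = torch.randn(8)                              # discounted returns from rollouts
mu, std, value = net(obs)
dist = torch.distributions.Normal(mu, std)
action = dist.sample()
advantage = ret - value.detach()
loss = -(dist.log_prob(action).sum(-1) * advantage).mean() \
       + 0.5 * (ret - value).pow(2).mean() - 0.01 * dist.entropy().sum(-1).mean()
loss.backward()
```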
Procedia PDF Downloads 114
2902 An Intelligent Scheme Switching for MIMO Systems Using Fuzzy Logic Technique
Authors: Robert O. Abolade, Olumide O. Ajayi, Zacheaus K. Adeyemo, Solomon A. Adeniran
Abstract:
Link adaptation is an important strategy for achieving robust wireless multimedia communications based on quality of service (QoS) demand. Scheme switching in multiple-input multiple-output (MIMO) systems is an aspect of link adaptation, and it involves selecting among different MIMO transmission schemes or modes so as to adapt to the varying radio channel conditions for the purpose of achieving QoS delivery. However, finding the most appropriate switching method in MIMO links is still a challenge, as existing methods are either computationally complex or not always accurate. This paper presents an intelligent switching method for a MIMO system consisting of two schemes - transmit diversity (TD) and spatial multiplexing (SM) - using a fuzzy logic technique. In this method, two channel quality indicators (CQI), namely the average received signal-to-noise ratio (RSNR) and the received signal strength indicator (RSSI), are measured and passed as inputs to the fuzzy logic system, which then gives a decision - an inference. The switching decision of the fuzzy logic system is fed back to the transmitter to switch between the TD and SM schemes. Simulation results show that the proposed fuzzy logic-based switching technique outperforms the conventional static switching technique in terms of bit error rate and spectral efficiency.
Keywords: channel quality indicator, fuzzy logic, link adaptation, MIMO, spatial multiplexing, transmit diversity
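A toy version of the fuzzy inference described above, with RSNR and RSSI in and a TD/SM decision out; the membership breakpoints and rules are assumed for illustration and are not the paper's rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def switch_scheme(rsnr_db, rssi_dbm):
    """Mamdani-style sketch: poor channel -> transmit diversity (TD),
    good channel -> spatial multiplexing (SM)."""
    snr_low, snr_high = tri(rsnr_db, -5, 5, 15), tri(rsnr_db, 10, 20, 30)
    rssi_low, rssi_high = tri(rssi_dbm, -100, -90, -80), tri(rssi_dbm, -85, -75, -65)
    # Rule strengths (min as AND): any "low" evidence votes for TD, both "high" for SM
    td = max(min(snr_low, rssi_low), min(snr_low, rssi_high), min(snr_high, rssi_low))
    sm = min(snr_high, rssi_high)
    # Defuzzify on a crisp 0 (TD) .. 1 (SM) axis by weighted average
    score = (td * 0.0 + sm * 1.0) / (td + sm + 1e-9)
    return "SM" if score > 0.5 else "TD"

print(switch_scheme(rsnr_db=22, rssi_dbm=-70))   # good channel -> SM
print(switch_scheme(rsnr_db=3, rssi_dbm=-95))    # poor channel -> TD
```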
Procedia PDF Downloads 152
2901 Optimization of Organic Rankine Cycle System for Waste Heat Recovery from Excavator
Authors: Young Min Kim, Dong Gil Shin, Assmelash Assefa Negash
Abstract:
This study describes the application of a single-loop organic Rankine cycle (ORC) for recovering waste heat from an excavator. In the case of waste heat recovery from an excavator, the heat of the hydraulic oil can be used in the ORC system together with the other waste heat sources, including the exhaust gas and engine coolant. The performances of four different cases of single-loop ORC systems were studied at the main operating condition, and critical design factors were studied to obtain the maximum power output from the given waste heat sources. The energy and exergy analyses of the cycles were performed with respect to the available heat source to determine the best fluid and system configuration. The analysis demonstrates that the ORC in the excavator increases the net power output by 14% at the main operating condition, with a simpler system configuration and a lower expander inlet temperature than in a conventional vehicle engine without the heat of the hydraulic oil.
Keywords: engine, excavator, hydraulic oil, organic Rankine cycle (ORC), waste heat recovery
Procedia PDF Downloads 306
2900 Design of Semi-Automatic Vent and Flash Remover
Authors: Inba Blesso P., Senthil Kumar P.
Abstract:
The main consideration in any tire manufacturing process is wear resistance. One of the factors that cause tire wear is improper removal of the vent and flash from the tire surface. The contact point between the tire surface and the vent is highly prone to wear. When the vehicle runs at higher speed with a heavy load, the tire vent and flash wear first, and they cause some of the tire surface material to wear along with them. Hence, provision must be made for efficient removal of the vent and flash, thereby reducing tire wear. Manual trimming of tire vents is time-consuming and produces inaccurate output, which leads to a reduction in production rate and profit. Thus, the development of an automated system can help attain minimum time consumption and provide a possible way to achieve profitable production. This study focuses on a semi-automated system that employs pneumatic actuators and sequencing circuits. By implementing this, one can achieve accurate results with a reduction in time and a profitable output.
Keywords: tire manufacturing, pneumatic system, vent and flash removal, engineering and technology
Procedia PDF Downloads 381
2899 Addressing Food Grain Losses in India: Energy Trade-Offs and Nutrition Synergies
Authors: Matthew F. Gibson, Narasimha D. Rao, Raphael B. Slade, Joana Portugal Pereira, Joeri Rogelj
Abstract:
Globally, India's population is among the most severely impacted by nutrient deficiency, yet millions of tonnes of food are lost before reaching consumers. Across food groups, grains represent the largest share of daily calories and overall losses by mass in India. If current losses remain unresolved and follow projected population rates, we estimate, by 2030, losses from grains for human consumption could increase by 1.3-1.8 million tonnes (Mt) per year against current levels of ~10 Mt per year. This study quantifies energy input to minimise storage losses across India, responsible for a quarter of grain supply chain losses. In doing so, we identify and explore a Sustainable Development Goal (SDG) triplet between SDG₂, SDG₇, and SDG₁₂ and provide insight for development of joined-up agriculture and health policy in the country. Analyzing rice, wheat, maize, bajra, and sorghum, we quantify one route to reduce losses in supply chains, by modelling the energy input to maintain favorable climatic conditions in modern silo storage. We quantify key nutrients (calories, protein, zinc, iron, vitamin A) contained within these losses and calculate roughly how much deficiency in these dietary components could be reduced if grain losses were eliminated. Our modelling indicates, with appropriate uncertainty, maize has the highest energy input intensity for storage, at 110 kWh per tonne of grain (kWh/t), and wheat the lowest (72 kWh/t). This energy trade-off represents 8%-16% of the energy input required in grain production. We estimate if grain losses across the supply chain were saved and targeted to India's nutritionally deficient population, average protein deficiency could reduce by 46%, calorie by 27%, zinc by 26%, and iron by 11%. This study offers insight for development of Indian agriculture, food, and health policy by first quantifying and then presenting benefits and trade-offs of tackling food grain losses.
Keywords: energy, food loss, grain storage, hunger, India, sustainable development goal, SDG
Procedia PDF Downloads 129
2898 Information Extraction for Short-Answer Question for the University of the Cordilleras
Authors: Thelma Palaoag, Melanie Basa, Jezreel Mark Panilo
Abstract:
Checking short-answer questions and essays, whether in paper or electronic form, is a tiring and tedious task for teachers. Evaluating a student's output requires knowledge across a wide array of domains, and scoring the work is often a critical task. Several attempts have been made in the past few years to create automated writing assessment software, but these have only received negative responses from teachers and students alike due to unreliable scoring, lack of feedback, and other shortcomings. This study aims to create an application that is able to check short-answer questions by incorporating information extraction. Information extraction is a subfield of Natural Language Processing (NLP) in which a chunk of text (technically known as unstructured text) is broken down to gather the necessary bits of data and/or keywords (structured text) to be further analyzed or utilized by query tools. The proposed system shall be able to extract keywords or phrases from an individual's answer and match them against a corpus of words (as defined by the instructor), which shall be the basis for evaluating the answer. The proposed system shall also enable the teacher to provide feedback and re-evaluate the student's output for writing elements which the computer cannot fully evaluate, such as creativity and logic. Teachers can formulate, design, and check short-answer questions efficiently by defining keywords or phrases as parameters and assigning weights for checking answers. With the proposed system, the teacher's time spent checking and evaluating student output is lessened, making the teacher more productive and the task easier.
Keywords: information extraction, short-answer question, natural language processing, application
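A minimal sketch of the weighted keyword/phrase matching the system describes; the dictionary rubric format and the example rubric are illustrative assumptions, not the system's actual data model.

```python
import re

def score_answer(answer, rubric):
    """Score a short answer against instructor-defined keywords/phrases with
    weights, and report which ones were matched."""
    text = answer.lower()
    earned, matched = 0.0, []
    for phrase, weight in rubric.items():
        if re.search(r"\b" + re.escape(phrase.lower()) + r"\b", text):
            earned += weight
            matched.append(phrase)
    total = sum(rubric.values())
    return earned / total, matched

rubric = {"photosynthesis": 2.0, "chlorophyll": 1.0, "sunlight": 1.0, "glucose": 1.0}
answer = "Plants use sunlight and chlorophyll to make glucose."
print(score_answer(answer, rubric))   # (0.6, ['chlorophyll', 'sunlight', 'glucose'])
```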
Procedia PDF Downloads 428
2897 The Influence of Thomson Effect on the Performance of N-Type Skutterudite Thermoelement
Authors: Anbang Liu, Huaqing Xie, Zihua Wu, Xiaoxiao Yu, Yuanyuan Wang
Abstract:
Due to the temperature dependence and mutual coupling of thermoelectric parameters, the Thomson effect, which derives from temperature gradients during thermoelectric conversion, always exists. The synergistic effect between the Thomson effect and the non-equilibrium heat transport of charge carriers leads to local heat absorption or release in thermoelements, thereby affecting their power generation performance and conversion efficiency. This study verified and analyzed the influence and mechanism of the Thomson effect on an N-type skutterudite thermoelement through quasi-steady-state testing under approximately vacuum conditions. The results indicate that the temperature rise/fall of the N-type thermoelement at any position is affected by Thomson heat release/absorption. Correspondingly, the Thomson effect contributes advantageously/disadvantageously to the output power of the N-type skutterudite thermoelement when the Thomson coefficients are positive/negative. In this work, the output power can be increased or decreased by as much as 27% or more due to the presence of Thomson heat when the absolute value of the Thomson coefficient is around 36 μV/℃.
Keywords: Thomson effect, heat transport, thermoelectric conversion, numerical simulation
Procedia PDF Downloads 67
2896 Adaptive Optimal Controller for Uncertain Inverted Pendulum System: A Dynamic Programming Approach for Continuous Time System
Authors: Dao Phuong Nam, Tran Van Tuyen, Do Trong Tan, Bui Minh Dinh, Nguyen Van Huong
Abstract:
In this paper, we investigate the adaptive optimal control law for continuous-time systems with input disturbances and unknown parameters. This paper extends previous works to obtain a robust control law for uncertain systems. Through theoretical analysis, an adaptive dynamic programming (ADP) based optimal control is proposed to stabilize the closed-loop system and ensure the convergence properties of the proposed iterative algorithm. Moreover, the global asymptotic stability (GAS) of the closed-loop system is also analyzed. The theoretical analysis for continuous-time systems and the simulation results demonstrate the performance of the proposed algorithm for an inverted pendulum system.
Keywords: approximate/adaptive dynamic programming, ADP, adaptive optimal control law, input state stability, ISS, inverted pendulum
Procedia PDF Downloads 194
2895 Evaluation the Financial and Social Efficiency of Microfinance Institutions Using Data Envelope Analysis - A Sample Study of Active Microfinance Institutions in India
Authors: Hiba Mezaache
Abstract:
The study aims to assess the financial and social efficiency of microfinance institutions in India for the period 2015-2019 by using two economies-of-scale models with an output orientation in the data envelopment analysis (DEA) method, drawing on the MIX Market database. The study concludes that microfinance institutions focus on achieving financial efficiency more than on achieving social efficiency in order to ensure their continuity in the market. There is convergence in the efficiency ratios that have been achieved, but the optimum ratios were achieved under changing economies of scale. Efficiency is affected by the depth of outreach to low-income groups, as serving this group raises costs and risks. The study also highlights the importance of lending to women in rural areas and raising their awareness to ensure their financial and social empowerment, and of making improvements in operating expenses, asset management, and loan personnel control in order to maximize output.
Keywords: microfinance, financial efficiency, social efficiency, mix market, microfinance institutions
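An output-oriented DEA model of the kind used in the study can be sketched as a small linear program; the toy inputs and outputs below are illustrative, not MIX Market data, and the constant-returns formulation is one of several possible scale assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_oriented(X, Y, o):
    """Output-oriented CCR (constant returns to scale) efficiency of DMU o.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns phi >= 1; 1/phi is the
    efficiency score."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = -1.0                                    # maximize phi (linprog minimizes)
    A_ub, b_ub = [], []
    for i in range(m):                             # sum_j lambda_j * x_ij <= x_io
        A_ub.append(np.concatenate(([0.0], X[:, i])))
        b_ub.append(X[o, i])
    for r in range(s):                             # phi * y_ro <= sum_j lambda_j * y_rj
        A_ub.append(np.concatenate(([Y[o, r]], -Y[:, r])))
        b_ub.append(0.0)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Toy data: inputs = (operating expense, staff), outputs = (loan portfolio, women borrowers)
X = np.array([[100.0, 20], [120, 25], [90, 18]])
Y = np.array([[300.0, 150], [280, 160], [310, 140]])
for o in range(len(X)):
    phi = dea_output_oriented(X, Y, o)
    print(f"MFI {o}: efficiency = {1 / phi:.3f}")
```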
Procedia PDF Downloads 156
2894 An Estimation of Rice Output Supply Response in Sierra Leone: A Nerlovian Model Approach
Authors: Alhaji M. H. Conteh, Xiangbin Yan, Issa Fofana, Brima Gegbe, Tamba I. Isaac
Abstract:
Rice is Sierra Leone's staple food, and the nation imports over 120,000 metric tons annually due to a shortfall in its cultivation. The insufficient level of the crop's cultivation in Sierra Leone is caused by many problems, and this has led to an ever-widening gap between supply and demand for the crop within the country. Consequently, this has prompted the government to spend heavily on the importation of grain that could otherwise have been cultivated domestically at a lower cost. Hence, this research attempts to explore the response of rice supply with respect to its demand in Sierra Leone within the period 1980-2010, applying the Nerlovian adjustment model to the Sierra Leone rice data set for that period. The estimated trend equations revealed that time had a significant effect on output, productivity (yield) and area (acreage) of rice within the period 1980-2010, generally at the 1% level of significance. The results showed that almost the entire growth in output was attributable to the increase in the area cultivated to the crop. The time trend variable that was included for government policy intervention showed an insignificant effect on all the variables considered in this research. Both the short-run and long-run price responses were inelastic, since all their values were less than one. From the findings above, immediate actions that will lead to productivity growth in rice cultivation are required. To achieve this, the responsible agencies should provide extension service schemes to farmers as well as motivate them to adopt modern rice varieties and technology in their rice cultivation ventures.
Keywords: Nerlovian adjustment model, price elasticities, Sierra Leone, trend equations
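A minimal sketch of the Nerlovian partial-adjustment estimation, using simulated data in place of the 1980-2010 Sierra Leone series; the coefficients of the data-generating process and the trend specification are illustrative assumptions.

```python
import numpy as np

# Log output A_t regressed on lagged log price, lagged output and a time trend.
rng = np.random.default_rng(1)
T = 31
price = np.cumsum(rng.normal(0, 0.05, T)) + 2.0       # simulated log price series
output = np.zeros(T)
output[0] = 5.0
for t in range(1, T):                                  # assumed data-generating process
    output[t] = 0.8 + 0.10 * price[t - 1] + 0.80 * output[t - 1] + rng.normal(0, 0.02)

# Regressors: constant, lagged price, lagged output, time trend
Xmat = np.column_stack([np.ones(T - 1), price[:-1], output[:-1], np.arange(1, T)])
y = output[1:]
beta, *_ = np.linalg.lstsq(Xmat, y, rcond=None)

short_run = beta[1]                                    # short-run price elasticity
long_run = beta[1] / (1 - beta[2])                     # long-run = short-run / (1 - gamma)
print(f"short-run elasticity {short_run:.3f}, long-run elasticity {long_run:.3f}")
```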
Procedia PDF Downloads 233
2893 Design and Implementation of Pseudorandom Number Generator Using Android Sensors
Authors: Mochamad Beta Auditama, Yusuf Kurniawan
Abstract:
A smartphone or tablet requires strong randomness to establish secure encrypted communication, encrypt files, etc. Therefore, random number generation is one of the main keys to providing secrecy. Android devices are equipped with hardware-based sensors, such as an accelerometer, gyroscope, etc. Each of these sensors provides a stochastic process which has the potential to be used as an extra randomness source, in addition to the /dev/random and /dev/urandom pseudorandom number generators. Android sensors can provide randomness automatically. To obtain randomness from Android sensors, each sensor is used to construct an entropy source. After all entropy sources are constructed, the outputs of these entropy sources are combined to provide more entropy. Then, a deterministic process is used to produce a sequence of random bits from the combined output. All of these processes are done in accordance with NIST SP 800-22 and the NIST SP 800-90 series. The operating conditions are: 1) the generator runs in Android user space, and 2) the Android device is placed motionless on a desk.
Keywords: Android hardware-based sensor, deterministic process, entropy source, random number generation/generators
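A minimal sketch of the two steps described above, conditioning combined sensor readings into a seed and then expanding it deterministically; it is written in the spirit of, but is not a certified implementation of, the NIST SP 800-90 constructions, and the sensor data layout is assumed.

```python
import hashlib
import hmac
import struct

def condition(sensor_samples):
    """Combine raw readings from several sensors (accelerometer, gyroscope, ...)
    into a single seed by hashing them together."""
    h = hashlib.sha256()
    for name, values in sorted(sensor_samples.items()):
        h.update(name.encode())
        for v in values:
            h.update(struct.pack("<d", v))
    return h.digest()

def drbg(seed, n_bytes):
    """Very small HMAC-based deterministic bit generator (illustrative only)."""
    key, counter, out = seed, 0, b""
    while len(out) < n_bytes:
        out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:n_bytes]

# Example: noisy sensor readings captured while the device lies still on a desk
samples = {"accelerometer": [0.012, -0.031, 9.807], "gyroscope": [0.0004, -0.0002, 0.0001]}
print(drbg(condition(samples), 16).hex())
```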
Procedia PDF Downloads 374
2892 Design and Analysis of a Piezoelectric-Based AC Current Measuring Sensor
Authors: Easa Ali Abbasi, Akbar Allahverdizadeh, Reza Jahangiri, Behnam Dadashzadeh
Abstract:
Electrical current measurement is a suitable method for determining the performance of electrical devices. There are two approaches to this measurement: contact and noncontact. The contact method has some disadvantages, such as requiring a direct connection with the wire, which may damage the system. Thus, in this paper, a bimorph piezoelectric cantilever beam with a permanent magnet on its free end is used to measure electrical current in a noncontact way. In the mathematical modeling, the governing equation of the cantilever beam is solved based on the Galerkin method, and the equation presenting the relation between the applied force and the beam's output voltage is derived. The magnetic force resulting from the current-carrying wire is considered the external excitation force of the system. The results are compared with other references in order to demonstrate the accuracy of the mathematical model. Finally, the effects of geometric parameters on the output voltage and natural frequency are presented.
Keywords: cantilever beam, electrical current measurement, forced excitation, piezoelectric
Procedia PDF Downloads 232
2891 A Low-Power, Low-Noise and High-Gain 58~66 GHz CMOS Receiver Front-End for Short-Range High-Speed Wireless Communications
Authors: Yo-Sheng Lin, Jen-How Lee, Chien-Chin Wang
Abstract:
A 60-GHz receiver front-end using standard 90-nm CMOS technology is reported. The receiver front-end comprises a wideband low-noise amplifier (LNA), and a double-balanced Gilbert cell mixer with a current-reused RF single-to-differential (STD) converter, an LO Marchand balun and a baseband amplifier. The receiver front-end consumes 34.4 mW and achieves LO-RF isolation of 60.7 dB, LO-IF isolation of 45.3 dB and RF-IF isolation of 41.9 dB at RF of 60 GHz and LO of 59.9 GHz. At IF of 0.1 GHz, the receiver front-end achieves maximum conversion gain (CG) of 26.1 dB at RF of 64 GHz and CG of 25.2 dB at RF of 60 GHz. The corresponding 3-dB bandwidth of RF is 7.3 GHz (58.4 GHz to 65.7 GHz). The measured minimum noise figure was 5.6 dB at 64 GHz, one of the best results ever reported for a 60 GHz CMOS receiver front-end. In addition, the measured input 1-dB compression point and input third-order inter-modulation point are -33.1 dBm and -23.3 dBm, respectively, at 60 GHz. These results demonstrate the proposed receiver front-end architecture is very promising for 60 GHz direct-conversion transceiver applications.
Keywords: CMOS, 60 GHz, direct-conversion transceiver, LNA, down-conversion mixer, marchand balun, current-reused
Procedia PDF Downloads 452
2890 Soft Computing Employment to Optimize Safety Stock Levels in Supply Chain Dairy Product under Supply and Demand Uncertainty
Authors: Riyadh Jamegh, Alla Eldin Kassam, Sawsan Sabih
Abstract:
In order to overcome uncertainty conditions and the inability to meet customers' requests due to these conditions, organizations tend to reserve a certain safety stock level (SSL). This level must be chosen carefully in order to avoid increased holding cost due to an excessive SSL or shortage cost due to a too-low SSL. This paper uses soft-computing fuzzy logic to identify the optimal SSL; the fuzzy model uses a dynamic concept to cope with a highly complex environment. The proposed model deals with three input variables, i.e., demand stability level, raw material availability level, and on-hand inventory level, by using dynamic fuzzy logic to obtain the best SSL as an output. In this model, demand stability, raw material, and on-hand inventory levels are described linguistically and then treated by the inference rules of the fuzzy model to extract the best level of safety stock. The aim of this research is to provide a dynamic approach for identifying the safety stock level, which can be implemented in different industries. A numerical case study in the dairy industry, using a 200 g yogurt cup product, is explained to validate the proposed model. The obtained results are compared with the current level of safety stock, which is calculated using the traditional approach. The importance of the proposed model has been demonstrated by the significant reduction in the safety stock level.
Keywords: inventory optimization, soft computing, safety stock optimization, dairy industries inventory optimization
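A toy Mamdani-style sketch of the three-input fuzzy model described above; the membership shapes, the two rules and the 0-1 scaling of the inputs and output are illustrative assumptions, not the paper's tuned rule base.

```python
import numpy as np

def trimf(x, abc):
    """Triangular/shoulder membership function."""
    a, b, c = abc
    left = np.where(b > a, (x - a) / (b - a + 1e-12), 1.0)
    right = np.where(c > b, (c - x) / (c - b + 1e-12), 1.0)
    return np.clip(np.minimum(left, right), 0, 1)

def safety_stock_level(demand_stability, material_availability, on_hand):
    """Three inputs on a 0-1 scale -> SSL as a fraction of cycle stock,
    defuzzified by the centroid of the aggregated output fuzzy set."""
    low, high = (0.0, 0.0, 0.5), (0.5, 1.0, 1.0)
    ssl = np.linspace(0, 1, 101)                 # output universe: SSL fraction
    ssl_small = trimf(ssl, (0.0, 0.0, 0.4))
    ssl_large = trimf(ssl, (0.4, 1.0, 1.0))
    # Rule 1: unstable demand OR scarce material OR low inventory -> large SSL
    w_large = max(trimf(demand_stability, low), trimf(material_availability, low),
                  trimf(on_hand, low))
    # Rule 2: stable demand AND available material AND high inventory -> small SSL
    w_small = min(trimf(demand_stability, high), trimf(material_availability, high),
                  trimf(on_hand, high))
    agg = np.maximum(np.minimum(w_large, ssl_large), np.minimum(w_small, ssl_small))
    return float(np.sum(agg * ssl) / (np.sum(agg) + 1e-9))   # centroid defuzzification

print(safety_stock_level(demand_stability=0.3, material_availability=0.6, on_hand=0.4))
```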
Procedia PDF Downloads 125
2889 Analysis of Automotive Sensor for Engine Knock System
Authors: Miroslav Gutten, Jozef Jurcik, Daniel Korenciak, Milan Sebok, Matej Kuceraa
Abstract:
This paper deals with the phenomenon of undesirable detonation combustion in internal combustion engines. The control unit of the engine monitors these detonations using piezoelectric knock sensors. By monitoring these sensors, the detonations can be measured objectively from outside the car. If this component provides only a small output voltage amplitude, detonation combustion in the ignition areas of the engine may go undetected. The paper deals with the design of a simple device for the detection of this fault. The construction of a testing device for the knock sensor, suitable for diagnostics of knock combustion in internal combustion engines, is presented. The output signal of the presented sensor is described by Bessel functions. Using the first voltage extremes of the characteristics, it is possible to create a reference for the evaluation of the polynomial residue. It should be taken into account that the velocity of sound in air is 330 m/s; this sound impinges on the walls of the combustion chamber and is detected by the sensor. The resonant frequency of engine knock is usually in the range from 5 kHz to 15 kHz. The sensor operates in the field up to 37 kHz, which should be taken into account with respect to the sensor's own resonance.
Keywords: diagnostics, knock sensor, measurement, testing device
Procedia PDF Downloads 447
2888 H∞ Fuzzy Integral Power Control for DFIG Wind Energy System
Authors: N. Chayaopas, W. Assawinchaichote
Abstract:
In order to maximize energy capture from the wind, the control of the doubly fed induction generator (DFIG) to obtain optimal power from the wind, along with generator speed and output electrical power control in the wind energy system, is of great importance due to the nonlinear behavior of wind velocities. This paper proposes the design of a control scheme for power control of a wind energy system via an H∞ fuzzy integral controller. Firstly, the nonlinear system is represented in terms of a TS fuzzy model, and a linear matrix inequality approach is used to derive the optimal controller achieving H∞ performance. The proposed control method extracts the maximum energy from the wind and overcomes the nonlinearity and disturbance problems of the wind energy system, giving good tracking performance and high-efficiency power output of the DFIG.
Keywords: doubly fed induction generator, H-infinity fuzzy integral control, linear matrix inequality, wind energy system
Procedia PDF Downloads 347