Search results for: low input farming
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2737

2257 Short Term Distribution Load Forecasting Using Wavelet Transform and Artificial Neural Networks

Authors: S. Neelima, P. S. Subramanyam

Abstract:

The major tool for distribution planning is load forecasting, i.e., the anticipation of the load ahead of time. Artificial neural networks have found wide application in load forecasting as a means of obtaining an efficient strategy for planning and management. In this paper, the application of neural networks to the design of short term load forecasting (STLF) systems is explored. Our work presents a pragmatic methodology for STLF using a proposed two-stage model combining the wavelet transform (WT) and an artificial neural network (ANN). In the first stage the input data are decomposed by the wavelet transform; in the second stage each decomposed component, together with other inputs, is used to train a separate neural network that forecasts the load. The forecasted load is obtained by reconstruction of the decomposed data. The hybrid model has been trained and validated using load data from the Telangana State Electricity Board.
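
A minimal sketch of the two-stage idea (illustrative only; the wavelet family, decomposition level, lag window, network sizes and the synthetic data are assumptions, not the paper's settings): the load series is split into additive wavelet components, one small network forecasts each component, and the load forecast is recovered by summing the component forecasts, which plays the role of the reconstruction step.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def mra_components(x, wavelet="db4", level=2):
    """Split x into additive multiresolution components (details + approximation)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):
        sel = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(sel, wavelet)[: len(x)])
    return comps  # their sum reconstructs x (up to boundary effects)

def forecast_next(load, lags=24):
    pred = 0.0
    for comp in mra_components(load):
        X = np.array([comp[i:i + lags] for i in range(len(comp) - lags)])
        y = comp[lags:]
        mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                           random_state=0).fit(X, y)
        pred += mlp.predict(comp[-lags:].reshape(1, -1))[0]  # reconstruction = sum
    return pred

hourly_load = 5.0 + np.sin(np.linspace(0, 40, 1024))          # stand-in load series
print(forecast_next(hourly_load))
```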

Keywords: electrical distribution systems, wavelet transform (WT), short term load forecasting (STLF), artificial neural network (ANN)

Procedia PDF Downloads 428
2256 A Contactless Capacitive Biosensor for Muscle Activity Measurement

Authors: Charn Loong Ng, Mamun Bin Ibne Reaz

Abstract:

As the elderly population grows globally, the percentage of people diagnosed with musculoskeletal disorders (MSD) increases proportionally. Electromyography (EMG) is an important biosignal that contributes to the clinical diagnosis of MSD and to the recovery process. Conventional conductive electrodes have many disadvantages for continuous EMG measurement. This research designed a new surface EMG biosensor based on the parallel-plate capacitive coupling principle. The biosensor is built on a double-sided PCB: one side carries the high-input-impedance circuitry, while the copper (Cu) plate on the other side serves as the biosignal-sensing plate. The metal plate is insulated with Kapton tape for contactless application. The results indicate that the capacitive biosensor is capable of continuously capturing the EMG signal without galvanic contact with the skin surface. However, noticeable noise is coupled into the measured signal, so post-processing is needed to obtain a clean and meaningful EMG signal. A complete design of a single-ended, non-contact, high-input-impedance, front-end EMG biosensor is presented in this paper.
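
The post-processing is not specified in the abstract; a common choice for surface EMG (an assumption here, not the authors' pipeline) is a 20-450 Hz band-pass combined with a power-line notch filter, as in this sketch.

```python
# Illustrative EMG clean-up only; sampling rate and cut-offs are assumed values.
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

fs = 2000.0                                    # assumed sampling rate [Hz]
b_bp, a_bp = butter(4, [20, 450], btype="bandpass", fs=fs)
b_n, a_n = iirnotch(w0=50.0, Q=30.0, fs=fs)    # 50 Hz power-line notch

def clean_emg(raw):
    """Zero-phase band-pass then notch filtering of a raw capacitive EMG trace."""
    return filtfilt(b_n, a_n, filtfilt(b_bp, a_bp, raw))

t = np.arange(0, 1, 1 / fs)
raw = np.random.randn(t.size) + 0.5 * np.sin(2 * np.pi * 50 * t)   # stand-in signal
emg = clean_emg(raw)
```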

Keywords: contactless, capacitive, biosensor, electromyography

Procedia PDF Downloads 444
2255 Improved Simultaneous Performance in the Time Domain and in the Frequency Domain

Authors: Azeddine Ghodbane, David Bensoussan, Maher Hammami

Abstract:

An innovative approach for controlling unstable and invertible systems has demonstrated superior performance compared to conventional controllers. It has been successfully applied to a levitation system and drone control. Simulations have yielded satisfactory performance when applied to a satellite antenna controller. This design method, based on sensitivity analysis, has also been extended to handle multivariable unstable and invertible systems that exhibit dominant diagonal characteristics at high frequencies, enabling decentralized control. Furthermore, this control method has been expanded to the realm of adaptive control. In this study, we introduce an alternative adaptive architecture that enhances both time and frequency performance, effectively mitigating the effects of disturbances at the plant input and external disturbances affecting the output. To facilitate superior performance in both the time and frequency domains, we have developed user-friendly interactive design methods using the GeoGebra platform.

Keywords: control theory, decentralized control, sensitivity theory, input-output stability theory, robust multivariable feedback control design

Procedia PDF Downloads 106
2254 Linking Milk Price and Production Costs with Greenhouse Gas Emissions of Luxembourgish Dairy Farms

Authors: Rocco Lioy, Tom Dusseldorf, Aline Lehnen, Romain Reding

Abstract:

A study concerning both the profitability and the ecological performance of dairy production in Luxembourg was carried out for the years 2017, 2018 and 2019. Data from 100 dairy farms, covering the greenhouse gas emissions (ecology) and the profitability (economy) of dairy production, were evaluated, and the averages were compared to the corresponding figures of 80 Luxembourgish dairy farms evaluated for the years 2014, 2015 and 2016. The ecological evaluation confirmed that farm efficiency (defined here primarily as the lowest ratio between feedstuff used and milk produced) is the key driver for significantly reducing the level of emissions on dairy farms. In both farm groups and in both periods, the efficient farms show almost the same level of emissions per kg ECM (1.17 kg CO2-eq) as the intensive farms (1.13 kg CO2-eq), and at the same time a far lower level of emissions related to the production area (9.9 vs. 13.9 t CO2-eq/ha). Concerning the economic performance, it was observed that in the years 2017, 2018 and 2019 the intensive farms (intensity being defined primarily in terms of milk produced per ha) reached a higher profit (income minus costs, with subsidies taken into account) than the efficient farms (4.8 vs. 2.6 €-cent/kg ECM), in contrast with the observations for the years 2014, 2015 and 2016 (1.5 vs. 3.7 €-cent/kg ECM). The most important reason for this divergent behavior was a change in the income and cost structure between the two periods. In the later period (2017, 2018 and 2019), the milk price was considerably higher than in the previous period, and the production costs were lower. This favored the intensive farms, which produce the largest quantity of milk with a high amount of production means. In the period 2014, 2015 and 2016, with lower milk prices but comparable production costs, the advantage lay with the efficient farms. In conclusion, we expect that in the near future, when production costs in particular will presumably be much higher than in recent years, the profitability of dairy farming will decrease. In this case, we assume that efficient farms will deliver not only an ecologically but also an economically better performance than production-intensive farms. High milk prices and low production costs are poor incentives for carbon-smart farming.

Keywords: efficiency, intensity, dairy, emissions, prices, costs

Procedia PDF Downloads 82
2253 A Multi-Regional Structural Path Analysis of Virtual Water Flows Caused by Coal Consumption in China

Authors: Cuiyang Feng, Xu Tang, Yi Jin

Abstract:

Coal is the most important primary energy source in China and exerts a significant influence on the country's rapid economic growth. However, water resources have become a constraint on coal industry development, on account of the reverse geographical distribution of coal and water. To ease the pressure of water shortage, the '3 Red Lines' water policies were announced by the Chinese government, and a 'water for coal' plan was added to those policies in 2013. This study utilized structural path analysis (SPA) based on the multi-regional input-output table to quantify the virtual water flows caused by coal consumption at different stages. Results showed that the direct water input (the first stage) was the largest among all stages of coal consumption, accounting for approximately 30% of the total virtual water content. The regional analysis demonstrated that virtual water trade alleviated the pressure on water use for coal consumption in water-scarce areas, but the imported virtual water did not come from areas that are rich in water. The sectoral analysis indicated that the direct inputs from the sectors 'production and distribution of electric power and heat power' and 'smelting and pressing of metals' accounted for the major virtual water flows, while the sectors 'chemical industry' and 'manufacture of non-metallic mineral products' were important indirect consumers of water. With population and economic growth in China, the water demand-and-supply gap associated with coal consumption will become more pronounced. In addition to water-efficiency improvement measures, the central government should adjust its virtual water trade strategies to address local water scarcity issues. Water resources, as a main constraint, should be given high consideration in coal policy to promote the sustainable development of the coal industry.
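
A toy illustration of how SPA splits embodied water by production stage: the Leontief inverse is expanded as a power series, so the stage-0 term is the direct water input singled out above. The three-sector table and water coefficients are made up, not the study's MRIO data.

```python
# Virtual water embodied in coal final demand, split by production stage via
# w (I - A)^-1 y = w y + w A y + w A^2 y + ...
import numpy as np

A = np.array([[0.10, 0.20, 0.05],        # technical coefficients (3 toy sectors)
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.02]])
w = np.array([0.8, 0.3, 0.1])            # direct water use per unit output
y = np.array([0.0, 1.0, 0.0])            # final demand: 1 unit of coal (sector 2)

total = w @ np.linalg.inv(np.eye(3) - A) @ y
for stage in range(5):                    # contribution of each production stage
    share = w @ np.linalg.matrix_power(A, stage) @ y
    print(f"stage {stage}: {share:.4f} ({share / total:.1%} of total {total:.4f})")
```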

Keywords: coal consumption, multi-regional input-output model, structural path analysis, virtual water

Procedia PDF Downloads 299
2252 The Analysis of Thermal Conductivity in Porcine Meat Due to Electricity by Finite Element Method

Authors: Orose Rugchati, Sarawut Wattanawongpitak

Abstract:

This research studied the thermal conductivity and heat transfer in porcine meat due to an electric current flowing between parallel electrode plates. A hot-boned pork sample was prepared as a 2 x 1 x 1 cm block. The finite element method, using the ANSYS Workbench program, was applied to simulate this heat transfer problem. In the thermal simulation, the input thermoelectric energy was calculated from the measured current flowing through the pork and the input voltage from the DC voltage source. Heat transfer in the pork was compared for two voltage sources: a 30 V DC source and a 60 V pulsed DC source (pulse width 50 milliseconds, 50% duty cycle). The results show that the thermal conductivity tends to become steady at temperatures of 40 °C and 60 °C, at around 1.39 W/m·°C and 2.65 W/m·°C for the 30 V DC source and the 60 V pulsed DC source, respectively. When the temperature increased to 50 °C at 5 minutes, the color of the porcine meat at the exposure point began to fade. This technique could be used to predict thermal conductivity from meat characteristics.
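
The input thermoelectric energy referred to above follows from the applied voltage, the measured current, and the exposure time; the sketch below uses a placeholder current value, since the abstract only states that the current was measured.

```python
# Input electric energy for the two excitation modes described above.
# The current value is a placeholder, not a measurement from the study.
V_dc, V_pulse = 30.0, 60.0        # applied voltages [V]
I_meas = 0.05                     # assumed measured current [A]
duty, t_total = 0.5, 5 * 60.0     # 50 % duty cycle, 5 min exposure [s]

E_dc = V_dc * I_meas * t_total                # continuous DC:  E = V * I * t
E_pulse = V_pulse * I_meas * t_total * duty   # pulsed DC: scaled by the duty cycle
print(E_dc, E_pulse)              # energy [J] fed to the FEM model as heat input
```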

Keywords: thermal conductivity, porcine meat, electricity, finite element method

Procedia PDF Downloads 134
2251 Analysis and Design of Dual-Polarization Antennas for Wireless Communication Systems

Authors: Vladimir Veremey

Abstract:

The paper describes the design and simulation of dual-polarization antennas that use the resonance and radiating properties of the H00 mode of open metal waveguides. The proposed antennas are formed by two orthogonal slots in a finite conducting ground plane. The slots are backed by metal screens connected to the ground plane, forming open waveguides. It has been shown that the antenna designs can be used efficiently in mm-wave bands. The antenna's single-mode operational bandwidth is higher than 10%. The antenna designs are very simple and low-cost. They allow flush installation and can be used efficiently in various communication and remote sensing devices on fast-moving carriers. Mutual coupling between antennas of the proposed design is very low. Thus, multiple antenna structures with the proposed antennas can be efficiently employed in multi-band and multiple-input-multiple-output (MIMO) systems.

Keywords: antenna, antenna arrays, Multiple-Input-Multiple-Output (MIMO), millimeter wave bands, slot antenna, flush installation, directivity, open waveguide, conformal antennas

Procedia PDF Downloads 161
2250 Climate Change and the Role of Foreign-Invested Enterprises

Authors: Xuemei Jiang, Kunfu Zhu, Shouyang Wang

Abstract:

In this paper, we select China as a case study and employ a time series of unique input-output tables distinguishing firm ownership and processing exports to evaluate the role of foreign-invested enterprises (FIEs) in China's rapid carbon dioxide emission growth. The results suggest that FIEs contributed 11.55% of the growth of China's economic output between 1992 and 2010, but accounted for only 9.65% of the growth of carbon dioxide emissions. In relative terms, FIEs still emitted much less than Chinese-owned enterprises (COEs) when producing the same amount of output up to 2010, although COEs experienced much faster technology upgrading. In an ideal scenario in which final demand remains unchanged and COEs completely mirror the advanced technologies of FIEs, more than 2000 Mt of carbon dioxide emissions would have been avoided in China in 2010. From a policy perspective, widespread FIEs are a very effective and efficient channel for encouraging technology transfer from developed to developing countries.
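
The "ideal scenario" can be illustrated with a toy ownership-distinguished input-output calculation: hold final demand and the production structure fixed, let COEs adopt FIE emission intensities, and compare total emissions. All numbers below are invented for illustration.

```python
import numpy as np

A = np.array([[0.15, 0.10],       # 2 toy "sectors": COE-owned and FIE-owned production
              [0.05, 0.20]])
y = np.array([100.0, 40.0])       # final demand
f = np.array([2.5, 1.4])          # CO2 per unit output: COE vs FIE intensity

L = np.linalg.inv(np.eye(2) - A)  # Leontief inverse
baseline = f @ L @ y

f_swap = np.array([f[1], f[1]])   # counterfactual: COEs mirror FIE technology
counterfactual = f_swap @ L @ y
print(baseline, counterfactual, baseline - counterfactual)   # avoided emissions
```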

Keywords: carbon dioxide emissions, foreign-invested enterprises, technology transfer, input–output analysis, China

Procedia PDF Downloads 393
2249 Studies on Microstructure and Mechanical Properties of Simulated Heat Affected Zone in a Micro Alloyed Steel

Authors: Sanjeev Kumar, S. K. Nath

Abstract:

Proper selection of welding parameters for obtaining an excellent weld is a challenge. HAZ simulation helps in identifying suitable welding parameters such as heating rate, cooling rate, peak temperature, and energy input. In this study, the weld thermal cycle of the heat affected zone (HAZ) is simulated for Submerged Arc Welding (SAW) using a Gleeble 3800 thermomechanical simulator. A micro-alloyed (MA) steel plate of 18 mm thickness with a yield strength of 450 MPa is used for making the test specimens. The mechanical properties of the weld-simulated specimens, including Charpy V-notch toughness and hardness, are determined. Peak temperatures of 1300°C, 1150°C, 1000°C, 900°C and 800°C, a heat energy input of 22 kJ/cm, and a preheat temperature of 30°C have been used with the Rykalin-3D simulation model. It is found that the impact toughness (75 J) is best for the simulated HAZ specimen at a peak temperature of 900°C. For the parent steel, the impact toughness is 26.8 J at -50°C in the transverse direction.

Keywords: HAZ simulation, mechanical properties, peak temperature, ship hull steel, weldability

Procedia PDF Downloads 560
2248 An Approach for Coagulant Dosage Optimization Using Soft Jar Test: A Case Study of Bangkhen Water Treatment Plant

Authors: Ninlawat Phuangchoke, Waraporn Viyanon, Setta Sasananan

Abstract:

The most important step in the water treatment plant process is coagulation using alum and polyaluminum chloride (PACl), for which the daily chemical cost is of the order of a hundred thousand baht. Therefore, determining the dosages of alum and PACl is essential to keep water production economical. This research applies an artificial neural network (ANN), trained with the Levenberg–Marquardt algorithm, to create a mathematical model (soft jar test) for predicting the chemical doses used in coagulation, i.e., alum and PACl. The input data consist of turbidity, pH, alkalinity, conductivity, and oxygen consumption (OC) of the Bangkhen Water Treatment Plant (BKWTP), Metropolitan Waterworks Authority. The data were collected from 1 January 2019 to 31 December 2019 and cover the changing seasons of Thailand. The ANN input data are divided into three groups: a training set, a test set, and a validation set. The best model performance gives a coefficient of determination and a mean absolute error of 0.73 and 3.18 for alum, and 0.59 and 3.21 for PACl, respectively.
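
A rough sketch of the soft-jar-test regression set-up on synthetic data: raw-water quality variables in, alum dose out, with a train/test/validation split and R²/MAE evaluation. Note that scikit-learn is used here for brevity and does not offer the Levenberg–Marquardt solver employed in the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_absolute_error

rng = np.random.default_rng(0)
X = rng.random((365, 5))                            # turbidity, pH, alkalinity, conductivity, OC
alum = 20 + 15 * X[:, 0] + rng.normal(0, 1, 365)    # stand-in dosage [mg/L]

X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, alum, test_size=0.3, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2:", r2_score(y_te, pred), "MAE:", mean_absolute_error(y_te, pred))
```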

Keywords: soft jar test, jar test, water treatment plant process, artificial neural network

Procedia PDF Downloads 159
2247 Missing Link Data Estimation with Recurrent Neural Network: An Application Using Speed Data of Daegu Metropolitan Area

Authors: JaeHwan Yang, Da-Woon Jeong, Seung-Young Kho, Dong-Kyu Kim

Abstract:

In intelligent transportation systems (ITS), information on link characteristics is an essential factor for planning and operation. In practice, however, not every link has sensors installed on it; a link without data is called a "missing link". The purpose of this study is to impute the data of these missing links. To obtain these data, this study applies machine learning, in particular deep learning, so that missing link data can be estimated from the data of instrumented links. For the deep learning process, this study uses a recurrent neural network (RNN) to handle time-series road data. As input data, Dedicated Short-Range Communications (DSRC) data from Dalgubul-daero in the Daegu Metropolitan Area were fed into the learning process. The network structure takes 17 links with available data as input and, through 2 hidden layers, estimates the data of 1 missing link. As a result, the estimated data for the target link show about 94% accuracy compared with the actual data.
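
A hedged Keras sketch of the described network: a time window of speeds from the 17 instrumented links is fed through two recurrent hidden layers to estimate the speed of the single missing link. The window length, layer widths and the synthetic data are assumptions, not the study's configuration.

```python
import numpy as np
import tensorflow as tf

window, n_links = 12, 17
rng = np.random.default_rng(0)
X = rng.random((2000, window, n_links)).astype("float32")   # stand-in DSRC speeds
y = X[:, -1, :3].mean(axis=1, keepdims=True)                # stand-in target link speed

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, n_links)),
    tf.keras.layers.SimpleRNN(32, return_sequences=True),   # hidden layer 1
    tf.keras.layers.SimpleRNN(16),                          # hidden layer 2
    tf.keras.layers.Dense(1),                               # missing-link estimate
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
```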

Keywords: data estimation, link data, machine learning, road network

Procedia PDF Downloads 507
2246 Optimal Economic Restructuring Aimed at an Optimal Increase in GDP Constrained by a Decrease in Energy Consumption and CO2 Emissions

Authors: Alexander Vaninsky

Abstract:

The objective of this paper is to find the economic restructuring - that is, the change in the shares of sectoral gross outputs - resulting in the maximum possible increase in the gross domestic product (GDP) combined with decreases in energy consumption and CO2 emissions. It uses an input-output model for the GDP and factorial models for energy consumption and CO2 emissions to determine the projections of the gradient of GDP and of the antigradients of energy consumption and CO2 emissions, respectively, on a subspace formed by the structure-related variables. Since the gradient (antigradient) provides the direction of the steepest increase (decrease) of the objective function, and their projections retain this property for the functions' restriction to the subspace, each of the three directional vectors solves a particular problem of optimal structural change. In the next step, a type of factor analysis is applied to find a convex combination of the projected gradient and antigradients having the maximal possible positive correlation with each of the three. This convex combination provides the desired direction of structural change. The national economy of the United States is used as an example of application.
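
A small numeric sketch of the directional search (toy vectors, not the U.S. data): project the GDP gradient and the two antigradients onto the structure subspace, then grid-search convex weights whose combination stays positively correlated with all three projected directions.

```python
import numpy as np
from itertools import product

S = np.random.default_rng(1).random((6, 3))          # structure-related subspace basis
P = S @ np.linalg.inv(S.T @ S) @ S.T                 # orthogonal projector onto span(S)

g_gdp = P @ np.array([1.0, 0.4, -0.2, 0.3, 0.1, 0.0])
g_energy = P @ -np.array([0.5, 0.2, 0.6, -0.1, 0.2, 0.3])   # antigradient
g_co2 = P @ -np.array([0.6, 0.1, 0.5, 0.0, 0.3, 0.2])       # antigradient
dirs = [g_gdp, g_energy, g_co2]

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Choose convex weights maximizing the smallest correlation with the three directions.
best = max(
    ((w1, w2, 1 - w1 - w2) for w1, w2 in product(np.linspace(0, 1, 101), repeat=2)
     if w1 + w2 <= 1),
    key=lambda w: min(cos(sum(wi * d for wi, d in zip(w, dirs)), d) for d in dirs),
)
print("convex weights:", best)
```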

Keywords: economic restructuring, input-output analysis, divisia index, factorial decomposition, E3 models

Procedia PDF Downloads 311
2245 Adaptive Backstepping Control of Uncertain Nonlinear Systems with Input Backlash

Authors: Ali Anwar, Hu Qinglei, Li Bo, Muhammad Taha Ali

Abstract:

In this paper, a generic model of perturbed nonlinear systems affected by a hard backlash nonlinearity at the input is considered. The nonlinearity is modelled by a dynamic differential equation which presents a more precise shape than the existing linear models and is compatible with nonlinear design techniques such as backstepping. Moreover, a novel backstepping-based nonlinear control law is designed which explicitly incorporates a continuous-time adaptive backlash inverse model. It provides significant flexibility to control engineers, who can use the estimated backlash spacing value specified for actuators such as gears in the adaptive backlash inverse model during the control design. It ensures not only global stability but also stringent transient performance with the desired precision. It is also robust to external disturbances, the bounds of which are taken as unknown, and it traverses the backlash spacing efficiently even with underestimated information about the actual value. The continuous-time backlash inverse model is distinguished in the sense that other models are either discrete-time or involve complex computations. Furthermore, numerical simulations are presented which not only illustrate the effectiveness of the proposed control law but also compare it with PID and other backstepping controllers.
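
For orientation, the sketch below simulates the classical static play (backlash) operator, which shows the dead-band behaviour an inverse has to compensate; it is not the dynamic differential backlash model proposed in the paper.

```python
# Classical play (backlash) operator, half-width r: the output only moves once
# the input has traversed the dead band. Textbook static model, NOT the paper's.
import numpy as np

def backlash(u, r=0.2, w0=0.0):
    w = np.empty_like(u)
    prev = w0
    for k, uk in enumerate(u):
        prev = min(max(prev, uk - r), uk + r)   # clamp output into [u - r, u + r]
        w[k] = prev
    return w

t = np.linspace(0, 4 * np.pi, 400)
u = np.sin(t)                       # actuator command
v = backlash(u)                     # actuator output lagging by the backlash spacing
```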

Keywords: adaptive control, hysteresis, backlash inverse, nonlinear system, robust control, backstepping

Procedia PDF Downloads 456
2244 Forecasting the Influences of Information and Communication Technology on the Structural Changes of Japanese Industrial Sectors: A Study Using Statistical Analysis

Authors: Ubaidillah Zuhdi, Shunsuke Mori, Kazuhisa Kamegai

Abstract:

The purpose of this study is to forecast the influence of Information and Communication Technology (ICT) on the structural changes of the Japanese economy based on Leontief input-output (IO) coefficients. This study establishes a statistical analysis to predict the future interrelationships among industries. We employ the Constrained Multivariate Regression (CMR) model to analyze the historical changes of the input-output coefficients. The statistical significance of the model is then tested by the Likelihood Ratio Test (LRT). In our model, ICT is represented by two explanatory variables, i.e., computers (including main parts and accessories) and telecommunications equipment. A previous study, which analyzed the influence of these variables on the structural changes of Japanese industrial sectors from 1985 to 2005, concluded that they had significant influence on the changing business circumstances of the Japanese commerce, business services and office supplies, and personal services sectors. The projected future Japanese economic structure based on the above forecast yields differentiated direct and indirect outcomes of ICT penetration.

Keywords: forecast, ICT, industrial structural changes, statistical analysis

Procedia PDF Downloads 369
2243 Hierarchical Scheme for Detection of Rotating MIMO Visible Light Communication Systems Using Mobile Phone Camera

Authors: Shih-Hao Chen, Chi-Wai Chow

Abstract:

A multiple-input and multiple-output (MIMO) scheme can extend the transmission capacity of a light-emitting-diode (LED) visible light communication (VLC) system. A MIMO VLC system using the popular mobile-phone camera as the optical receiver (Rx) to receive the MIMO signal from an n x n red-green-blue (RGB) LED array is desirable. The key step in decoding the received RGB LED array signals is detecting the orientation of the received array signals. If the LED transmitter (Tx) is rotated, the signal may not be received correctly, causing errors in the received signal. In this work, we propose and demonstrate a novel hierarchical transmission scheme which can reduce the computational complexity of rotation detection in an LED array VLC system. We use the n x n RGB LED array as the MIMO Tx. A novel two-dimensional Hadamard coding scheme is proposed and demonstrated. The detection correction rate is above 95% at indoor usage distances. Experimental results confirm the feasibility of the proposed scheme.
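
A simplified illustration of rotation detection with Hadamard codes (the coding details are assumptions, not the paper's exact 2D scheme): each LED row transmits a row of a Hadamard matrix, and the receiver tries the four possible rotations, keeping the one whose rows correlate perfectly with the known codewords.

```python
import numpy as np
from scipy.linalg import hadamard

H = hadamard(8)                         # +-1 codewords, mutually orthogonal
tx_pattern = H                          # row i of the LED array sends codeword i

rx = np.rot90(tx_pattern, k=3)          # camera sees the array rotated by 270 deg

scores = []
for k in range(4):                      # undo each candidate rotation and score it
    candidate = np.rot90(rx, k=-k)
    scores.append(np.trace(candidate @ H.T))   # equals 64 only for the true rotation
print("detected rotation:", int(np.argmax(scores)) * 90, "degrees")
```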

Keywords: Visible Light Communication (VLC), Multiple-input and multiple-output (MIMO), Red-Green-Blue (RGB), Hadamard coding scheme

Procedia PDF Downloads 412
2242 Protected Cultivation of Horticultural Crops: Increases Productivity per Unit of Area and Time

Authors: Deepak Loura

Abstract:

The most contemporary method of producing horticultural crops both qualitatively and quantitatively is protected cultivation, or greenhouse cultivation, which has gained widespread acceptance in recent decades. Protected farming, commonly referred to as controlled environment agriculture (CEA), is extremely productive, land- and water-efficient, and environmentally friendly. The technology entails growing horticultural crops in a controlled environment where variables such as temperature, humidity, light, soil, water, fertilizer, etc. are adjusted to achieve optimal output and enable a consistent supply even during the off-season. Over the past ten years, protected cultivation of high-value crops and cut flowers has demonstrated remarkable potential. More and more agricultural and horticultural crop production systems are moving to protected environments as a result of the growing demand for high-quality products by global markets. By covering the crop, it is possible to control the macro- and microenvironment, enhancing plant performance and allowing for longer production periods, earlier harvests, and higher yields of better quality. These covering structures alter the environment of the plant while also offering protection from wind, rain, and insects. Protected farming opens up hitherto unexplored opportunities in agriculture as the liberalised economy and improved agricultural technologies advance. Typically, the revenues from fruit, vegetable, and flower crops are 4 to 8 times higher than those from other crops. If any of these high-value crops are cultivated in protected environments such as greenhouses, net houses, tunnels, etc., this profit can be multiplied. Post-harvest losses of vegetables and cut flowers are extremely high (20–0%), but sheltered growing techniques and year-round cropping can greatly reduce post-harvest losses and enhance yield by 5–10 times. Seasonality and weather have a large impact on the production of vegetables and flowers, and the variability of produce results in significant price and quality fluctuations for vegetables. Achieving a balance between the year-round availability of vegetables and flowers, minimal environmental impact, and remaining competitive is a significant challenge for the application of current technology in crop production. Protected cultivation is the future of agriculture, since population growth is reducing the size of landholdings. Protected agriculture is a particularly profitable endeavor for small landholdings. Small greenhouses, net houses, nurseries, and low tunnel greenhouses can all be built by farmers to increase their income. The rise in biotic and abiotic stress factors also favours protected agriculture. As a result of the greater productivity levels, these technologies are opening up opportunities not only for producers with larger landholdings but also for those with smaller holdings. Protected cultivation can be thought of as a kind of precise, forward-looking, parallel agriculture that covers almost all aspects of farming, subject to further assessment of its technical applicability to local circumstances, farmer economics, and market economics.

Keywords: protected cultivation, horticulture, greenhouse, vegetable, controlled environment agriculture

Procedia PDF Downloads 70
2241 Sustainable Crop Production: Greenhouse Gas Management in Farm Value Chain

Authors: Aswathaman Vijayan, Manish Jha, Ullas Theertha

Abstract:

Climate change and global warming have become an issue for both developed and developing countries and are perhaps the biggest threat to the environment. We at ITC Limited believe that a company's performance must be measured by its Triple Bottom Line contribution to building economic, social and environmental capital. This Triple Bottom Line strategy focuses on embedding sustainability in business practices, investing in social development, and adopting a low-carbon growth path with a cleaner-environment approach. The Agri Business Division - ILTD operates in the tobacco-growing regions of the Andhra Pradesh and Karnataka states of India. The agri value chain of the company comprises two distinct phases: the first phase is the agricultural operations undertaken by ITC-trained farmers, and the second phase is the industrial operations, which include marketing and processing of the agricultural produce. This research work covers the greenhouse gas (GHG) management strategy of ITC in the agricultural operations undertaken by the farmers. The agriculture sector adds considerably to global GHG emissions through the use of carbon-based energy, the use of fertilizers, and other farming operations such as ploughing. In order to minimize the impact of farming operations on the environment, ITC has taken a big leap in implementing systems and processes to reduce the GHG impact in the farm value chain by partnering with the farming community. The company has undertaken a unique three-pronged approach to GHG management in the farm value chain: 1) GHG inventory of the farm value chain: the different sources of GHG emission in the farm value chain were identified and quantified for the baseline year, as per the IPCC guidelines for greenhouse gas inventories. The major sources of emission identified are emissions due to nitrogenous fertilizer application during seedling production and in the main field, emissions due to diesel usage for farm machinery, emissions due to fuel consumption, and emissions due to the burning of crop residues (a minimal illustrative calculation for this step is sketched below). 2) Identification and implementation of technologies to reduce GHG emissions: various methodologies and technologies were identified for each GHG emission source and implemented at the farm level. The identified methodologies are reducing the consumption of chemical fertilizer at the farm through site-specific nutrient recommendations, using a sharp shovel for land preparation to reduce diesel consumption, implementing energy conservation technologies to reduce fuel requirements, and avoiding the burning of crop residues by incorporating them in the main field. These methodologies were implemented at the farm level, and the GHG emissions were quantified to understand the reduction achieved. 3) Social and farm forestry for CO2 sequestration: in addition, the company encouraged social and farm forestry on waste lands to convert them into green cover. The plantations are carried out with fast-growing trees, viz. Eucalyptus, Casuarina, and Subabul, at the rate of 10,000 ha of land per year. The above approach minimized a considerable amount of GHG emissions in the farm value chain, benefiting farmers, the community, and the environment as a whole. In addition, the CO₂ stock created by the social and farm forestry programme has made the farm value chain environment-friendly.
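
A minimal illustrative calculation for the inventory step 1, in the spirit of an IPCC Tier-1 approach; the emission factors below are rounded placeholder values, not ITC's audited figures.

```python
# Minimal farm-level GHG inventory sketch. All factors are illustrative placeholders.
EF_N2O_N = 0.01          # kg N2O-N emitted per kg N applied (direct emissions)
GWP_N2O = 298            # CO2-equivalent factor for N2O
EF_DIESEL = 2.7          # kg CO2 per litre of diesel (approximate)

def farm_emissions(n_fertiliser_kg, diesel_litres):
    n2o = n_fertiliser_kg * EF_N2O_N * (44.0 / 28.0)   # convert N2O-N to N2O
    co2e = n2o * GWP_N2O + diesel_litres * EF_DIESEL
    return co2e                                         # kg CO2-eq per farm/season

print(farm_emissions(n_fertiliser_kg=120.0, diesel_litres=300.0))
```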

Keywords: CO₂ sequestration, farm value chain, greenhouse gas, ITC limited

Procedia PDF Downloads 289
2240 Fuzzy Inference System for Risk Assessment Evaluation of Wheat Flour Product Manufacturing Systems

Authors: Atrin Barzegar, Yas Barzegar, Stefano Marrone, Francesco Bellini, Laura Verde

Abstract:

The aim of this research is to develop an intelligent system to analyze the risk level of a wheat flour product manufacturing system. The model consists of five fuzzy inference systems arranged in two layers. The first layer consists of four fuzzy inference systems, each with three criteria; the outputs of the physical, chemical, biological, and environmental failure systems are the inputs of the final system. The proposed model, based on Mamdani fuzzy inference systems, gives a performance ranking of wheat flour product manufacturing systems. The first step is obtaining data to identify the failure modes from experts' opinions. The second step is the fuzzification process, which converts the crisp inputs into fuzzy sets; the IF-THEN fuzzy rules are then applied through the inference engine, and in the final step the defuzzification process converts the fuzzy output into real numbers.
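
A minimal single-FIS sketch with scikit-fuzzy showing the fuzzification, Mamdani inference and centroid defuzzification steps described above; the membership functions and rules are placeholders, not the study's expert-elicited rule base.

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

failure = ctrl.Antecedent(np.arange(0, 11, 1), "physical_failure")
risk = ctrl.Consequent(np.arange(0, 11, 1), "risk")
failure.automf(3, names=["low", "medium", "high"])   # auto membership functions
risk.automf(3, names=["low", "medium", "high"])

rules = [ctrl.Rule(failure["low"], risk["low"]),
         ctrl.Rule(failure["medium"], risk["medium"]),
         ctrl.Rule(failure["high"], risk["high"])]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["physical_failure"] = 7.5      # fuzzification of a crisp expert score
sim.compute()                            # Mamdani inference + centroid defuzzification
print(sim.output["risk"])
```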

Keywords: failure modes, fuzzy rules, fuzzy inference system, risk assessment

Procedia PDF Downloads 69
2239 Computational and Experimental Determination of Acoustic Impedance of Internal Combustion Engine Exhaust

Authors: A. O. Glazkov, A. S. Krylova, G. G. Nadareishvili, A. S. Terenchenko, S. I. Yudin

Abstract:

The presented materials concern the design of the exhaust system for a certain internal combustion engine. The exhaust system can be divided into two parts. The first comprises the engine exhaust manifold, turbocharger, and catalytic converters, which together are called the "hot part." The second part is the gas exhaust system, which contains elements intended exclusively for reducing exhaust noise (mufflers, resonators); its accepted designation is the "cold part." From the acoustic point of view, designing the exhaust system, that is, reducing the exhaust noise to a predetermined level, consists of working on the second part. Modern computer technology and software make it possible to design the "cold part" with high accuracy in a given frequency range, but only on condition that the input parameters are specified accurately, namely the amplitude spectrum of the input noise and the acoustic impedance of the noise source, i.e., the engine with its "hot part." Obtaining these data is a difficult problem: high temperatures, high exhaust gas velocities (turbulent flows), and high sound pressure levels (non-linear regime) do not allow calculated results to be applied with sufficient accuracy. The aim of this work is to obtain the most reliable acoustic output parameters of an engine with its "hot part" based on a set of computational and experimental studies. The presented methodology includes several parts. The first part is a finite element simulation of the "cold part" of the exhaust system (taking into account the acoustic radiation impedance of the outlet pipe into open space), with the result in the form of the input impedance of the "cold part". The second part is a finite element simulation of the "hot part" of the exhaust system (taking into account the acoustic characteristics of the catalytic units and the geometry of the turbocharger), with the result in the form of the input impedance of the "hot part". The third part of the technique consists of mathematical processing of the results according to the proposed formula for the convergence of the series summing the multiple reflections of the acoustic signal between the "cold part" and the "hot part". This is followed by a set of tests on an engine stand with two high-temperature pressure sensors measuring pulsations in the nozzle between the "hot part" and the "cold part" of the exhaust system, and subsequent processing of the test results according to a well-known technique in order to separate the "incident" and "reflected" waves. The final stage consists of mathematical processing of all calculated and experimental data to obtain the result in the form of the amplitude spectrum of the engine noise and its acoustic impedance.

Keywords: acoustic impedance, engine exhaust system, FEM model, test stand

Procedia PDF Downloads 50
2238 Efficient Energy Extraction Circuit for Impact Harvesting from High Impedance Sources

Authors: Sherif Keddis, Mohamed Azzam, Norbert Schwesinger

Abstract:

Harvesting mechanical energy from footsteps or other impacts is a possibility to enable wireless autonomous sensor nodes. These can be used for highly efficient control of connected devices such as lights, security systems, air conditioning systems or other smart home applications. They can also be used for accurate location or occupancy monitoring. Converting the mechanical energy into useful electrical energy can be achieved using the piezoelectric effect, which offers simple harvesting setups and low deflections. The challenge facing piezoelectric transducers is the small achievable energy per impact, in the lower mJ range, and the management of such low energies. Simple setups for energy extraction, such as a full-wave bridge connected directly to a capacitor, are problematic due to the mismatch between high-impedance sources and low-impedance storage elements. Efficient energy circuits for piezoelectric harvesters are commonly designed for vibration harvesters and require periodic input energies with predictable frequencies. Due to the sporadic nature of impact harvesters, such circuits are not well suited. This paper presents a self-powered circuit that avoids the impedance mismatch during energy extraction by disconnecting the load until the source reaches its charge peak. The switch is implemented with passive components and works independently of the input frequency. Therefore, this circuit is suited for impact harvesting and sporadic inputs. For the same input energy, this circuit stores 150% of the energy stored by a capacitor connected directly to a bridge rectifier. The total efficiency, defined as the ratio of the energy stored on a capacitor to the available energy measured across a matched resistive load, is 63%. Although the resulting energy is already sufficient to power certain autonomous applications, further optimizations of the circuit are still under investigation in order to improve the overall efficiency.

Keywords: autonomous sensors, circuit design, energy harvesting, energy management, impact harvester, piezoelectricity

Procedia PDF Downloads 144
2237 Comparative Study of Skeletonization and Radial Distance Methods for Automated Finger Enumeration

Authors: Mohammad Hossain Mohammadi, Saif Al Ameri, Sana Ziaei, Jinane Mounsef

Abstract:

Automated enumeration of the number of hand fingers is widely used in several motion gaming and distance control applications, and is discussed in several published papers as a starting block for hand recognition systems. The automated finger enumeration technique should not only be accurate but must also respond quickly to a moving-picture input. In motion games or distance control, high-rate video limits the program's overall speed, since image processing software such as Matlab needs to produce results at high computation speeds. Since automated finger enumeration with minimum error and processing time is desired, a comparative study between two finger enumeration techniques is presented and analyzed in this paper. In the pre-processing stage, various image processing functions were applied to a real-time video input to obtain the final cleaned, auto-cropped image of the hand to be used by the two techniques. The first technique uses the well-known morphological tool of skeletonization and counts the skeleton's endpoints as fingers. The second technique uses a radial distance method, which obtains a one-dimensional representation of the hand, to enumerate the fingers. For both methods, the different steps of the algorithms are explained. A comparative study then analyzes the accuracy and speed of both techniques. Through experimental testing in different background conditions, it was observed that the radial distance method was more accurate and more responsive to a real-time video input than the skeletonization method. All test results were generated in Matlab and were based on displaying a human hand in three different orientations on top of a plain-color background. Finally, the limitations surrounding the enumeration techniques are presented.
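
A compact sketch of the skeletonization technique on an already-segmented binary hand mask (the pre-processing stage is assumed done): skeleton pixels with exactly one 8-connected neighbour are counted as endpoints.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def count_endpoints(hand_mask):
    """Count skeleton endpoints of a binary hand mask (roughly fingertips + wrist)."""
    skel = skeletonize(hand_mask.astype(bool))
    kernel = np.ones((3, 3), dtype=int)
    kernel[1, 1] = 0
    neighbours = convolve(skel.astype(int), kernel, mode="constant")
    return int(np.sum(skel & (neighbours == 1)))

mask = np.zeros((64, 64), dtype=bool)
mask[10:50, 30:34] = True                 # toy "finger" blob; a real mask comes
print(count_endpoints(mask))              # from the segmented video frame
```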

Keywords: comparative study, hand recognition, fingertip detection, skeletonization, radial distance, Matlab

Procedia PDF Downloads 376
2236 An Approach to Maximize the Influence Spread in the Social Networks

Authors: Gaye Ibrahima, Mendy Gervais, Seck Diaraf, Ouya Samuel

Abstract:

In this paper, we consider influence maximization in social networks. Here we give importance to the initial diffusers, called the seeds. The goal is to efficiently find a subset of k elements in the social network that will begin and maximize the information diffusion process. A new approach, which pre-processes the social network before determining the seeds, is proposed. This pre-processing eliminates the information feedback toward an element considered as a seed by extracting an acyclic spanning social network. First, we propose two algorithm versions called the SCG-algorithm (v1 and v2) (Spanning Connected Graph algorithm). This algorithm takes as input a connected social network, directed or not. Finally, a generalization of the SCG-algorithm is proposed; it is called the SG-algorithm (Spanning Graph algorithm) and takes any graph as input. These two algorithms are effective, and each has polynomial complexity. To show the pertinence of our approach, two seed sets are determined, and those given by our approach yield better results. The performance of this approach is clearly visible in the simulations carried out with the R software and the igraph package.
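
The SCG/SG algorithms themselves are not reproduced here; the sketch below only illustrates the general pre-processing idea, in Python/networkx rather than the R/igraph used by the authors: extract an acyclic spanning subgraph and then pick the k most central nodes as seeds.

```python
import networkx as nx

G = nx.barabasi_albert_graph(50, 2, seed=1)        # stand-in social network
spanning = nx.bfs_tree(G, source=0)                # acyclic spanning subgraph (tree)

k = 3
centrality = nx.degree_centrality(spanning)        # any centrality measure could be used
seeds = sorted(centrality, key=centrality.get, reverse=True)[:k]
print("seed set:", seeds)
```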

Keywords: acyclic spanning graph, centrality measures, information feedback, influence maximization, social network

Procedia PDF Downloads 242
2235 Environmental Evaluation of Two Kind of Drug Production (Syrup and Pomade Form) Using Life Cycle Assessment Methodology

Authors: H. Aksas, S. Boughrara, K. Louhab

Abstract:

The goal of this study was to use the life cycle assessment (LCA) methodology to assess the environmental impact of pharmaceutical products (four kinds in syrup form and three kinds in pomade form) produced by a leading manufacturer in Algeria, the SAIDAL Company. The impacts generated were evaluated using SimaPro 7.1 with the CML92 method for the syrup form and EPD 2007 for the pomade form. All evaluated impacts were compared with each other, and the compound contributing to each impact was determined in each case. The data needed to conduct the life cycle inventory (LCI) came from this factory: theoretical data were collected from the responsible technicians and engineers of the company, while practical data resulted from assays of the pharmaceutical liquids obtained at the university laboratories. These data represent the different raw materials imported from European and Asian countries needed to formulate the drugs. For the inputs, the energy used comes from Algerian resources. The outputs are the result of analyses of the factory's effluents in their different forms (liquid, solid, and gas). All these input and output data constitute the ecobalance.

Keywords: pharmaceutical product, drug residues, LCA methodology, environmental impacts

Procedia PDF Downloads 243
2234 A Study on Long Life Hybrid Battery System Consists of Ni-63 Betavoltaic Battery and All Solid Battery

Authors: Bosung Kim, Youngmok Yun, Sungho Lee, Chanseok Park

Abstract:

Chemical and physical batteries impose limitations on power supply and operation in the space environment. Therefore, research on utilizing nuclear energy in space has been in progress since the 1950s in the major industrialized countries. In this study, a self-rechargeable battery having a long life relative to the half-life of the radioisotope is suggested. The hybrid system is composed of a betavoltaic battery, an all-solid battery and an energy harvesting board. The betavoltaic battery can produce electrical power for at least 10 years using the Ni-63 radioisotope and a silicon-based semiconductor. The electrical power generated by the betavoltaic battery is stored in the all-solid battery, and the stored power is used when necessary. The hybrid system board is composed of input terminals, a boost circuit, charging terminals and output terminals. The betavoltaic and all-solid batteries are connected to the input and output terminals, respectively. A current of 10 µA is applied to the system board using a high-resolution power simulator. The system efficiencies are measured for boost voltages of 1.8 V, 2.4 V and 3 V. As a result, the efficiency of the system board is about 75% after boosting the voltage from 1 V to 3 V.

Keywords: isotope, betavoltaic, nuclear, battery, energy harvesting

Procedia PDF Downloads 315
2233 Operator Optimization Based on Hardware Architecture Alignment Requirements

Authors: Qingqing Gai, Junxing Shen, Yu Luo

Abstract:

Due to hardware architecture characteristics, some operators tend to achieve better performance when the input/output tensor dimensions are aligned to a certain minimum granularity; convolution and deconvolution, commonly used in deep learning, are typical examples. If the requirements are not met, the general strategy is to pad with zeros to satisfy them, potentially leading to under-utilization of the hardware resources. Therefore, for convolutions and deconvolutions whose input and output channels do not meet the minimum granularity alignment, we propose to transfer W-dimensional data to the C-dimension for computation (W2C) so that the C-dimension meets the hardware requirements. This scheme also reduces the number of computations in the W-dimension. Although the scheme substantially increases the amount of computation, the operator's speed can improve significantly. It achieves remarkable speedups on multiple hardware accelerators, including Nvidia Tensor Cores, Qualcomm digital signal processors (DSPs), and Huawei neural processing units (NPUs). All that is needed is to modify the network structure and rearrange the operator weights offline, without retraining. At the same time, for some operators, such as the Reducemax, we observe that transferring C-dimensional data to the W-dimension (C2W) and replacing the Reducemax with a Maxpool can achieve acceleration under certain circumstances.
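
A NumPy illustration of the W2C rearrangement: part of the W axis is folded into the channel axis so that the channel count reaches an assumed alignment granularity. The shapes, the folding factor and the granularity of 32 are made up for the example.

```python
import numpy as np

x = np.random.rand(1, 8, 16, 64)          # NCHW tensor: C=8 misses a 32-channel alignment
factor = 4                                # move 4 W-columns into the channel axis

n, c, h, w = x.shape
assert w % factor == 0
x_w2c = (x.reshape(n, c, h, w // factor, factor)   # split W into (W/f, f)
           .transpose(0, 1, 4, 2, 3)               # bring the f block next to C
           .reshape(n, c * factor, h, w // factor))
print(x_w2c.shape)                        # (1, 32, 16, 16): channels now meet the granularity
```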

Keywords: convolution, deconvolution, W2C, C2W, alignment, hardware accelerator

Procedia PDF Downloads 98
2232 Method for Auto-Calibrate Projector and Color-Depth Systems for Spatial Augmented Reality Applications

Authors: R. Estrada, A. Henriquez, R. Becerra, C. Laguna

Abstract:

Spatial Augmented Reality is a variation of Augmented Reality where a Head-Mounted Display is not required. This variation of Augmented Reality is useful in cases where the need for a Head-Mounted Display is itself a limitation. To achieve this, Spatial Augmented Reality techniques substitute the technological elements of Augmented Reality; the virtual world is projected onto a physical surface. To create an interactive spatial augmented experience, the application must be aware of the spatial relations that exist between its core elements. In this case, the core elements are a projection system and an input system, and the process of achieving this spatial awareness is called system calibration. The Spatial Augmented Reality system is considered calibrated if the projected virtual-world scale matches the real-world scale, meaning that a virtual object maintains its perceived dimensions when projected into the real world. The input system is calibrated if the application knows the relative position of a point in the projection plane with respect to the RGB-depth sensor origin. Any kind of projection technology can be used (light-based projectors, close-range projectors, screens), as long as it complies with the defined constraints; the method was tested on different configurations. The proposed procedure does not rely on a physical marker, minimizing human intervention in the process. The tests were made using a Kinect V2 as the input sensor and several projection devices. In order to test the method, the defined constraints were applied to a variety of physical configurations; once the method was executed, several variables were obtained to measure its performance. It was demonstrated that the resulting method can handle different arrangements, giving the user a wide range of setup possibilities.
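
The marker-less auto-calibration procedure itself is not reproduced here; the sketch only shows a common closing step of projector-camera calibration, fitting a homography between projector pixels and the corresponding points seen by the RGB-depth sensor with OpenCV. The correspondences below are invented.

```python
import numpy as np
import cv2

projector_pts = np.float32([[0, 0], [1280, 0], [1280, 800], [0, 800], [640, 400]])
camera_pts = np.float32([[102, 95], [1180, 120], [1150, 905], [90, 880], [630, 510]])

H, mask = cv2.findHomography(projector_pts, camera_pts, cv2.RANSAC)

# Map an arbitrary projector pixel into sensor coordinates.
p = cv2.perspectiveTransform(np.float32([[[320, 200]]]), H)
print(p.ravel())
```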

Keywords: color depth sensor, human computer interface, interactive surface, spatial augmented reality

Procedia PDF Downloads 118
2231 Estimation of the Parameters of Muskingum Methods for the Prediction of the Flood Depth in the Moudjar River Catchment

Authors: Fares Laouacheria, Said Kechida, Moncef Chabi

Abstract:

The objective of the study was hydrological routing modelling for the continuous monitoring of the hydrological situation in the Moudjar river catchment, especially during floods, with the Hydrologic Engineering Center-Hydrologic Modelling System (HEC-HMS). HEC-GeoHMS was used to transfer data from a geographic information system (GIS) to HEC-HMS for delineating and modelling the catchment in order to estimate the runoff volume, which is used as input to the hydrological routing model. Two hydrological routing models were used, namely the Muskingum and Muskingum-Cunge routing models. In this study, a comparison between the parameters of the Muskingum and Muskingum-Cunge routing models in HEC-HMS was used for modelling flood routing in the Moudjar river catchment and for determining the relationship between these parameters and the physical characteristics of the river. The results indicate that the effects of input parameters such as the weighting factor X and the travel time K on the output results are significant, and that the Muskingum routing model was more sensitive to the input parameters than the Muskingum-Cunge routing model. This study can contribute to understanding and improving knowledge of the mechanisms of river floods, especially in ungauged river catchments.
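
For reference, the standard Muskingum routing recursion used by such models is O2 = C0*I2 + C1*I1 + C2*O1, with the coefficients expressed in terms of K, X and the time step; the K, X and hydrograph values below are examples, not the calibrated Moudjar catchment parameters.

```python
def muskingum_route(inflow, K=12.0, X=0.2, dt=6.0):
    """Route an inflow hydrograph through a reach with Muskingum parameters K, X."""
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom          # note c0 + c1 + c2 = 1
    outflow = [inflow[0]]                        # assume initial outflow = initial inflow
    for i1, i2 in zip(inflow[:-1], inflow[1:]):
        outflow.append(c0 * i2 + c1 * i1 + c2 * outflow[-1])
    return outflow

hydrograph = [10, 35, 80, 120, 90, 60, 35, 20, 12, 10]   # example inflow [m3/s]
print([round(q, 1) for q in muskingum_route(hydrograph)])
```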

Keywords: HEC-HMS, hydrological modelling, Muskingum routing model, Muskingum-Cunge routing model

Procedia PDF Downloads 265
2230 Finding a Redefinition of the Relationship between Rural and Urban Knowledge

Authors: Bianca Maria Rulli, Lenny Valentino Schiaretti

Abstract:

The considerable recent urbanization has increasingly sharpened environmental and social problems all over the world. In recent years, many answers to the alarming trends in modern cities have emerged: a drastic reduction in the rate of growth is becoming essential for future generations, and small-scale economies are considered more adaptive and sustainable. According to the concept of degrowth, cities should consider moving beyond the centralization of urban living by redefining the relationship between rural and urban knowledge; growing food in cities fundamentally contributes to increasing social and ecological resilience. Through an innovative approach, this research combines the benefits of urban agriculture (increase of biological diversity, shorter and thus more efficient supply chains, food security) and temporary land use. Together they stimulate collaborative practices to satisfy the changing needs of communities and stakeholders. The concept proposes a coherent strategy to create a sustainable development of urban spaces, introducing a productive green network to link specific areas of the city. By shifting the current relationship between architecture and landscape, the former process of ground consumption is deeply revised. Temporary modules can be used as concrete tools to create temporary areas of innovation, transforming vacant or marginal spaces into potential laboratories for the development of the city. The only permanent ground traces, such as foundations, are minimized in order to allow future land re-use. The aim is to describe a new mindset regarding the quality of space in the metropolis which allows, in a completely flexible way, green space and urban farming to be brought back into cities. The wide possibilities of the research are analyzed in two different case studies. The first is a regeneration/connection project designated for social housing; the second concerns the use of temporary modules to answer the potential needs of social structures. The intention of the productive green network is to link the different vacant spaces to each other as well as to the entire urban fabric. This also generates a potential improvement of the current situation of underprivileged and disadvantaged persons.

Keywords: degrowth, green network, land use, temporary building, urban farming

Procedia PDF Downloads 495
2229 Modeling Standpipe Pressure Using Multivariable Regression Analysis by Combining Drilling Parameters and a Herschel-Bulkley Model

Authors: Seydou Sinde

Abstract:

The aim of this paper is to formulate mathematical expressions that can be used to estimate the standpipe pressure (SPP). The developed formulas take into account the main factors that, directly or indirectly, affect the behavior of SPP values. Fluid rheology and well hydraulics are some of these essential factors. Mud plastic viscosity, yield point, flow power, consistency index, flow rate, and drillstring and annular geometries are represented by the frictional pressure (Pf), which is one of the independent input parameters and is calculated, in this paper, using the Herschel-Bulkley rheological model. Other independent input parameters include the rate of penetration (ROP), the applied load or weight on bit (WOB), bit revolutions per minute (RPM), bit torque (TRQ), and the hole inclination and direction coupled in the hole curvature or dogleg (DL). The technique of repeating parameters and the Buckingham Pi theorem are used to reduce the number of independent input parameters to the dimensionless revolutions per minute (RPMd), the dimensionless torque (TRQd), and the dogleg, which is already in the dimensionless form of radians. Multivariable linear and polynomial regression using PTC Mathcad Prime 4.0 is applied to determine the exact relationships between the dependent parameter, SPP, and the remaining three dimensionless groups. Three models proved sufficiently satisfactory for estimating the standpipe pressure: multivariable linear regression model 1, containing three regression coefficients, for vertical wells; multivariable linear regression model 2, containing four regression coefficients, for deviated wells; and a multivariable polynomial quadratic regression model, containing six regression coefficients, for both vertical and deviated wells. Although linear regression model 2 (with four coefficients) is relatively more complex and contains an additional term compared with linear regression model 1 (with three coefficients), the former did not add significant improvements over the latter except for some minor values. Thus, the effect of the hole curvature or dogleg is insignificant and can be omitted from the independent input parameters without a significant loss of accuracy. The polynomial quadratic regression model is considered the most accurate model due to its relatively higher accuracy in most cases. Data from nine wells in the Middle East were used to run the developed models, with satisfactory results provided by all of them, although the multivariable polynomial quadratic regression model gave the best and most accurate results. Developing these models is useful not only to monitor and predict the values of SPP with accuracy but also to control and check the integrity of the well hydraulics early and to take corrective actions should any unexpected problems appear, such as pipe washouts, jet plugging, excessive mud losses, fluid gains, kicks, etc.
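
The paper's regression coefficients are obtained in Mathcad; the sketch below only shows, on stand-in data, how a multivariable quadratic fit of SPP on the three dimensionless groups can be set up with scikit-learn.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((200, 3))                                   # [Pf term, RPMd, TRQd]
spp = 150 + 80 * X[:, 0] + 25 * X[:, 1] * X[:, 2] + rng.normal(0, 2, 200)

quad = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                     LinearRegression())
quad.fit(X, spp)
print("R^2:", quad.score(X, spp))
print("coefficients:", quad.named_steps["linearregression"].coef_)
```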

Keywords: standpipe, pressure, hydraulics, nondimensionalization, parameters, regression

Procedia PDF Downloads 79
2228 Purpose-Driven Collaborative Strategic Learning

Authors: Mingyan Hong, Shuozhao Hou

Abstract:

Collaborative Strategic Learning (CSL) teaches students to use learning strategies while working cooperatively. Student strategies include the following steps: defining the learning task and purpose; conducting ongoing negotiation of the learning materials by deciding "click" (I get it and I can teach it – green card; I get it – yellow card) or "clunk" (I don't get it – red card) at the end of each learning unit; "getting the gist" of the most important parts of the learning materials; and "wrapping up" key ideas. We show how to help students of mixed achievement levels apply learning strategies while learning content-area materials in small groups. The design of CSL is based on social constructivism and Vygotsky's best-known concept, the Zone of Proximal Development (ZPD). The ZPD is the distance between the actual acquisition level, as determined by individual problem solving, and the potential acquisition level, similar to Krashen (1980)'s i+1, as determined through problem solving under the facilitator's guidance or in group work with more capable members (Vygotsky, 1978). Vygotsky claimed that learners' ideal learning environment is in the ZPD. An ideal teacher or more knowledgeable other (MKO) should be able to recognize a learner's ZPD and facilitate their development beyond it. The MKO can then withdraw the support step by step until the learner can perform the task without aid. Stephen Krashen (1980) proposed the input hypothesis, including the i+1 hypothesis. The input hypothesis models are an application of the ZPD to second language acquisition and remain widely recognized today. Krashen's (2019) optimal language learning environment further developed the application of the ZPD and added the component of strategic group learning. Strategic group learning is composed of desirable learning materials that learners are motivated to learn and desirable group members who are more capable and are therefore able to offer meaningful input to the learners. The Purpose-driven Collaborative Strategic Learning model is a strategic integration of the ZPD, the i+1 hypothesis model, and the optimal language learning environment model. It is purpose-driven to ensure that group members are motivated. It is collaborative so that an optimal learning environment, in which meaningful input arises from meaningful conversation, can be generated. It is strategic because facilitators in the model strategically assign each member a meaningful and collaborative role (e.g., team leader, technician, problem solver, appraiser), provide a group learning instrument so that the learning process is structured, and integrate group learning with team building to ensure the holistic development of each participant. Using data collected from college year-one and year-two students' English courses, this presentation will demonstrate how the purpose-driven collaborative strategic learning model is implemented in the second/foreign language classroom, drawing on qualitative data from questionnaires and interviews. In particular, this presentation will show how second/foreign language learners grow from functioning with the aid of a facilitator or a more capable peer to performing without aid. The implication of this research is that the purpose-driven collaborative strategic learning model can be used not only in language learning but also in any subject area.

Keywords: collaborative, strategic, optimal input, second language acquisition

Procedia PDF Downloads 123