Search results for: cost prediction
Assessment of the Growth Enhancement Support Scheme in Adamawa State, Nigeria
Authors: Oto J. Okwu, Ornan Henry, Victor A. Otene
Abstract:
The agricultural sector contributes a great deal to the sustenance of Nigeria's food security and economy, with an attendant impact on rural development. In spite of the relatively high number of farmers in the country, self-sufficiency in food production is still a challenge. Farmers are faced with myriad problems which hinder their production efficiency, one of which is their access to the agricultural inputs required for optimum production. To meet the challenges faced by farmers, the government at the federal level has come up with many agricultural policies, one of which is the Agricultural Transformation Agenda (ATA). The Growth Enhancement Support Scheme (GESS) is one of the critical components of ATA, aimed at ensuring the effective distribution of agricultural inputs delivered directly to farmers at a regulated cost. About eight years after the launch of this policy, it is necessary to carry out an assessment of GESS and determine the impact it has made on rural farmers with respect to their access to farm inputs. This study was carried out to assess the Growth Enhancement Support Scheme (GESS) in Adamawa State, Nigeria. Crop farmers who registered under the GESS in Adamawa State, Nigeria, formed the population for the study. Primary data for the study were obtained through a survey using a structured questionnaire. A sample size of 167 respondents was selected using multi-stage, purposive, and random sampling techniques. The validity and reliability of the research instrument (questionnaire) were established through pilot testing and the test-retest method, respectively. The objectives of the study were to determine the difference in the level of access to agricultural inputs before and after GESS, to determine the difference in the cost of agricultural inputs before and after GESS, and to determine the challenges faced by rural farmers in accessing agricultural inputs through GESS. Both descriptive and inferential statistics were used in analyzing the collected data. Specifically, the Mann-Whitney test, Student's t-test, and factor analysis were used to test the stated hypotheses. Research findings revealed a significant difference in the level of access to farm inputs after the introduction of GESS (Z = 14.216). There was also a significant difference in the cost of agro-inputs after the introduction of GESS (Pr |T| > |t| = 0.0000). The challenges faced by respondents in accessing agro-inputs through GESS were administrative and technical in nature. Based on the findings, it is recommended that the government sustain the GESS, as it has significantly improved farmers' access to agricultural inputs and reduced the cost of agro-inputs; that the government address the administrative challenges respondents face in accessing inputs; and that extension agents assist farmers in overcoming the technical challenges they face.
Keywords: agricultural policy, agro-inputs, assessment, growth enhancement support scheme, rural farmers
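The two hypothesis tests named above can be reproduced with standard statistical libraries. The sketch below is illustrative only: the arrays are random placeholders standing in for the respondents' before-and-after access scores and input costs, not the survey data.

```python
# Illustrative sketch of the two hypothesis tests named above (SciPy).
# All arrays are random placeholders, not the study's survey data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
access_before = rng.integers(1, 4, size=167)    # ordinal access scores before GESS
access_after = rng.integers(2, 6, size=167)     # ordinal access scores after GESS
cost_before = rng.normal(9000, 800, size=167)   # input cost per respondent before GESS
cost_after = rng.normal(7500, 700, size=167)    # input cost per respondent after GESS

# Mann-Whitney test on the ordinal access scores
access_test = stats.mannwhitneyu(access_after, access_before, alternative="two-sided")
# Paired Student t-test on the cost of agro-inputs
cost_test = stats.ttest_rel(cost_before, cost_after)

print(f"Mann-Whitney: U = {access_test.statistic:.1f}, p = {access_test.pvalue:.4f}")
print(f"Paired t-test: t = {cost_test.statistic:.2f}, p = {cost_test.pvalue:.4f}")
```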
A Computational Model of the Thermal Grill Illusion: Simulating the Perceived Pain Using Neuronal Activity in Pain-Sensitive Nerve Fibers
Authors: Subhankar Karmakar, Madhan Kumar Vasudevan, Manivannan Muniyandi
Abstract:
Thermal Grill Illusion (TGI) elicits a strong and often painful sensation of burning when interlaced warm and cold stimuli, each individually non-painful, excite thermoreceptors beneath the skin. Among several theories of TGI, the "disinhibition" theory is the most widely accepted in the literature. According to this theory, TGI is the result of the disinhibition or unmasking of the pain-sensitive HPC (heat-pinch-cold) nerve fibers due to the inhibition of the cold-sensitive nerve fibers that are responsible for masking the HPC nerve fibers. Although researchers have focused on understanding TGI through experiments and models, none of them has investigated the prediction of TGI pain intensity through a computational model. Furthermore, the comparison of psychophysically perceived TGI intensity with neurophysiological models has not yet been studied. The prediction of pain intensity through a computational model of TGI can help in optimizing thermal displays and understanding pathological conditions related to temperature perception. The current study focuses on developing a computational model to predict the intensity of TGI pain and on experimentally observing the perceived TGI pain. The computational model is developed based on the disinhibition theory and by utilizing existing popular models of warm and cold receptors in the skin. The model aims to predict the neuronal activity of the HPC nerve fibers. With a temperature-controlled thermal grill setup, fifteen participants (ten males and five females) were presented with five temperature differences between warm and cold grills (each repeated three times). All the participants rated the perceived TGI pain sensation on a scale of one to ten. For the range of temperature differences, the experimentally observed perceived intensity of TGI is compared with the neuronal activity of the pain-sensitive HPC nerve fibers. The simulation results show a monotonically increasing relationship between the temperature differences and the neuronal activity of the HPC nerve fibers. Moreover, a similar monotonically increasing relationship is experimentally observed between temperature differences and the perceived TGI intensity. This shows the potential for comparing the TGI pain intensity observed through the experimental study with the neuronal activity predicted through the model. The proposed model intends to bridge the theoretical understanding of the TGI and the experimental results obtained through psychophysics. Further studies in pain perception are needed to develop a more accurate version of the current model.
Keywords: thermal grill illusion, computational modelling, simulation, psychophysics, haptics
Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of Simple Step-Stress Acceleration Life Testing Plan under Weibull Life Distribution
Authors: Saleem Z. Ramadan
Abstract:
This paper discusses the effects of using progressive Type-I right censoring on the design of simple step-stress accelerated life testing using a Bayesian approach for Weibull life products under the assumption of the cumulative exposure model. The optimization criterion used in this paper is to minimize the expected pre-posterior variance of the p-th percentile time to failure. The model variables are the stress changing time and the stress value for the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results have shown that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. Moreover, the results have shown that using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test without greatly affecting its precision. Moreover, the results have shown that using direct or indirect priors affects the precision of the test.
Keywords: reliability, accelerated life testing, cumulative exposure model, Bayesian estimation, progressive type-I censoring, Weibull distribution
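As a worked illustration of the cumulative exposure model named above, the sketch below simulates failure times for a simple two-step test with Weibull lifetimes by inverting the cumulative exposure CDF; the shape, scales, and stress-changing time are assumed values, not the paper's optimized design.

```python
# Illustrative simulation of failure times under the cumulative exposure model
# for a simple (two-step) step-stress test with Weibull lifetimes.
# All parameter values are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(1)
beta = 1.8                      # common Weibull shape
theta1, theta2 = 500.0, 120.0   # Weibull scales at stress steps 1 and 2 (assumed)
tau = 150.0                     # stress changing time (a design variable in the paper)
n = 1000

u = rng.uniform(size=n)
F_tau = 1.0 - np.exp(-(tau / theta1) ** beta)   # CDF value at the stress change
s = tau * theta2 / theta1                        # equivalent age carried into step 2
t = np.where(
    u <= F_tau,
    theta1 * (-np.log1p(-u)) ** (1.0 / beta),              # failures during step 1
    theta2 * (-np.log1p(-u)) ** (1.0 / beta) - s + tau,    # failures during step 2
)

# Progressive Type-I right censoring would additionally withdraw a proportion of
# surviving units at pre-set times; here we only report a simulated percentile.
print("simulated 10th percentile of failure time:", np.quantile(t, 0.10))
```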
The Effect of Proper Drainage on the Cost of Building and Repairing Roads
Authors: Seyed Abbas Tabatabaei, Saeid Amini, Hamid Reza Ghafouri
Abstract:
One of the most important factors in flexible pavement failure is the lack of proper drainage along roads; water within the pavement system is one of the main causes of failure. If water is discharged without delay, before it can damage the pavement, the lifetime of the pavement is considerably increased. In this study, the duration of water retention and the material properties of pavement systems, the effects of aggregate gradation and hydraulic conductivity on the drainage rate, and the effects of subsurface drainage systems on pavement lifetime have been studied. The study can be summarized as follows: the higher the hydraulic conductivity, the shorter the drainage time, and the use of a subsurface drainage system extends the pavement lifetime two to three times. In this research, the lifetimes of drained and undrained pavements were calculated by considering the effect of water and the drainage coefficient on the modulus of flexible materials, and the KENLAYER software was used to compare the present-value costs of these pavements over a 20-year design life. In this study, 14 pavement sections have been considered, of which 7 are drained and 7 are not. Results show that drained pavements have higher initial costs, but their failure severity is much lower and their lifetimes are longer; for a 20-year design life, the drained pavements prove to be the more economical option.
Keywords: drainage, base and sub-base, elasticity modulus, aggregation
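The present-value comparison described above can be illustrated with a few lines of code. The sketch below is a minimal example under assumed cash flows and an assumed discount rate; it is not the study's KENLAYER-based analysis.

```python
# Hedged sketch of a 20-year present-value comparison between a drained and an
# undrained pavement section; all cash flows, timings, and the discount rate are
# assumed values, not the results from the study.
def present_value(cash_flows, rate):
    """cash_flows: (year, cost) pairs; rate: annual discount rate."""
    return sum(cost / (1.0 + rate) ** year for year, cost in cash_flows)

rate = 0.06
undrained = [(0, 900_000), (7, 350_000), (14, 350_000)]   # lower initial cost, earlier overlays
drained = [(0, 1_150_000), (12, 250_000)]                 # higher initial cost, fewer repairs

print("PV of undrained section:", round(present_value(undrained, rate)))
print("PV of drained section:  ", round(present_value(drained, rate)))
```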
Square Concrete Columns under Axial Compression
Authors: Suniti Suparp, Panuwat Joyklad, Qudeer Hussain
Abstract:
It is a well-known fact that the actual lateral forces due to natural disasters, for example, earthquakes, floods, and storms, are difficult to predict accurately. Among these natural disasters, the highest numbers of deaths and injuries have so far been recorded for earthquakes all around the world. Therefore, there is always an urgent need to establish suitable strengthening methods for existing concrete and steel structures. This paper investigates the structural performance of square concrete columns strengthened using low-cost and easily available steel clamps. The salient features of these steel clamps are their comparatively low cost, easy availability, and ease of installation. To achieve the research objectives, a large-scale experimental program was established in which a total of 12 square concrete columns were constructed and tested under pure axial compression. Three square concrete columns were tested without any steel clamps to serve as reference specimens, whereas the remaining concrete columns were externally strengthened using steel clamps. The steel clamps were installed at different spacings to investigate the best configuration of the steel clamps. The experimental results indicate that steel clamps are very effective in enhancing the structural performance of the square concrete columns. The square concrete columns externally strengthened using steel clamps demonstrate higher load-carrying capacity and ductility as compared with the control specimens.
Keywords: concrete, strength, ductility, pre-stressed, steel, clamps, axial compression, columns, stress and strain
Evaluating the Performance of Passive Direct Methanol Fuel Cell under Varying Operating and Structural Conditions
Authors: Rahul Saraswat
Abstract:
More recently, focus has been given to replacing machined stainless steel metal flow fields with inexpensive wire mesh current collectors. The flow fields are based on simple woven wire mesh screens of various stainless steels, which are sandwiched between thin metal plates of the same material to create a bipolar plate/flow field configuration for use in a stack. Major advantages of using stainless steel wire screens include the elimination of expensive raw materials as well as machining and/or other special fabrication costs. The objective of the project is to improve the performance of the passive direct methanol fuel cell without increasing the cost of the cell and to make it as compact and light as possible. From the literature survey, it was found that very little has been done in this direction, and the following methodology was used: (1) the passive direct methanol fuel cell (DMFC) can be made more compact, lighter, and less costly by changing the material used in its construction; (2) controlling the fuel diffusion rate through the cell improves the performance of the cell. A passive liquid-feed direct methanol fuel cell (DMFC) was fabricated using a given MEA (membrane electrode assembly) and tested for different current collector structures. Mesh current collectors of different mesh densities, along with different support structures, were used, and the performance was found to be better. The methanol concentration was also varied. Optimisation of mesh size, support structure, and fuel concentration was achieved, and a cost analysis was also performed. From the performance analysis of the DMFC, the following can be concluded: (1) the area-specific resistance (ASR) of wire mesh current collectors is lower than the ASR of stainless steel current collectors, and the power produced by wire mesh current collectors is always more than that produced by stainless steel current collectors; (2) low or moderate methanol concentrations should be used for better and more stable DMFC performance; (3) wire mesh is a good substitute for stainless steel for the current collector plates of a passive DMFC because of its lower cost (by about 27%), flexibility, and light weight.
Keywords: direct methanol fuel cell, membrane electrode assembly, mesh, mesh size, methanol concentration, support structure
Central Energy Management for Optimizing Utility Grid Power Exchange with a Network of Smart Homes
Authors: Sima Aznavi, Poria Fajri, Hanif Livani
Abstract:
Smart homes are small energy systems which may be equipped with renewable energy sources, storage devices, and loads. The energy management strategy plays a main role in the efficient operation of smart homes. Effective energy scheduling of the renewable energy sources and storage devices guarantees efficient energy management in households while reducing the energy imports from the grid. Nevertheless, despite such strategies, independently planned day-ahead energy schedules for multiple households can cause undesired effects such as high power exchange with the grid at certain times of the day. Therefore, the interaction between the day-ahead energy projections of multiple smart homes is a challenging issue in a smart grid system, and if not managed appropriately, the imported energy from the power network can impose an additional burden on the distribution grid. In this paper, a central energy management strategy is proposed for a network consisting of multiple households, each equipped with renewable energy sources, storage devices, and plug-in electric vehicles (PEV). The decision-making strategy, alongside the smart home energy management system, minimizes the energy purchase cost of the end users while at the same time reducing the stress on the utility grid. In this approach, the smart home energy management system determines different operating scenarios based on the forecasted household daily load and the components connected to the household, with the objective of minimizing the end user's overall cost. Then, selected projections for each household that are within the same cost range are sent to the central decision-making system. The central controller then organizes the schedules to reduce the overall peak-to-average ratio of the total imported energy from the grid. To validate this approach, simulations are carried out for a network of five smart homes with different load requirements, and the results confirm that by applying the proposed central energy management strategy, the overall power demand from the grid can be significantly flattened. This is an effective approach to alleviate the stress on the network by distributing its energy to a network of multiple households over a 24-hour period.
Keywords: energy management, renewable energy sources, smart grid, smart home
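The coordination step can be pictured with a toy example: the sketch below greedily combines candidate household profiles to minimise the aggregate peak-to-average ratio. The greedy rule and all profiles are stand-ins; the paper's actual decision-making algorithm is not specified in the abstract.

```python
# Toy sketch of the central coordination step: each household submits a few candidate
# 24-hour import profiles (already filtered to a similar cost range), and the central
# controller picks one per household to flatten the aggregate grid demand.
# Profiles are random placeholders; a greedy rule stands in for the paper's method.
import numpy as np

def peak_to_average(profile):
    return profile.max() / profile.mean()

rng = np.random.default_rng(2)
households = [rng.uniform(0.5, 3.0, size=(3, 24)) for _ in range(5)]  # 3 candidates each

aggregate = np.zeros(24)
for candidates in households:
    # pick the candidate that keeps the running aggregate flattest
    best = min(candidates, key=lambda c: peak_to_average(aggregate + c))
    aggregate += best

print("final peak-to-average ratio:", round(peak_to_average(aggregate), 3))
```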
Microalgae Applied to the Reduction of Biowaste Produced by Fruit Fly Drosophila melanogaster
Authors: Shuang Qiu, Zhipeng Chen, Lingfeng Wang, Shijian Ge
Abstract:
Biowastes are a concern due to the large amounts of commercial food required for model animals during biomedical research. Searching for sustainable food alternatives with negligible physiological effects on animals is critical to solving or reducing this challenge. Microalgae have been demonstrated to be suitable for both human consumption and animal feed, in addition to biofuel and bioenergy applications. In this study, the possibility of using Chlorella vulgaris and Scenedesmus obliquus as a feed replacement for Drosophila melanogaster, one of the fly models commonly used in biomedical studies, was investigated by assessing the flies' locomotor activity, motor pattern, lifespan, and body weight. Compared to the control, flies fed on 60% or 80% (w/w) microalgae exhibited altered walking performance, including travel distance and apparent step size, and flies treated with 40% microalgae had shorter lifespans and decreased body weight. However, the 20% microalgae treatment showed no statistical differences from the control in any of the parameters tested. Partially including 20% microalgae in the standard food can reduce annual food waste (~202 kg) by 22.7% and save $7,200 in food costs, offering an environmentally superior and cost-effective food alternative without compromising physiological performance.
Keywords: animal feed, Chlorella vulgaris, Drosophila melanogaster, food waste, microalgae
Development of Coastal Inundation-Inland and River Flow Interface Module Based on 2D Hydrodynamic Model
Authors: Eun-Taek Sin, Hyun-Ju Jang, Chang Geun Song, Yong-Sik Han
Abstract:
Due to climate change, coastal urban areas repeatedly suffer loss of property and life from flooding. There are three main causes of inland submergence. First, when heavy rain of high intensity occurs, the water in inland areas cannot be drained into rivers because of the increase in impervious surfaces from land development and defects in pumps and storm sewers. Second, river inundation occurs when the water surface level surpasses the top of the levee. Finally, coastal inundation occurs due to rising seawater. However, previous studies ignored the complex mechanism of flooding and showed discrepancies and inadequacies due to the linear summation of separate analysis results. In this study, inland flooding and river inundation were analyzed together with the HDM-2D model. The Petrov-Galerkin stabilizing method and a flux-blocking algorithm were applied to simulate the inland flooding. In addition, sink/source terms with an exponential growth rate were added to the shallow water equations to include the inland flooding analysis module. The applications of the developed model gave satisfactory results and provided accurate predictions in comprehensive flooding analysis. To consider the coastal surge, another module was developed by adding seawater to the existing inland flooding-river inundation binding module for comprehensive flooding analysis. Based on the combined modules, the coastal inundation-inland and river flow interface was simulated by inputting flow rate and depth data for an artificial flume. Accordingly, it was possible to analyze the flood patterns of coastal cities over time. This study is expected to help identify the complex causes of flooding in coastal areas where complex flooding occurs and to assist in analyzing damage to coastal cities. Acknowledgements: This research was supported by a grant 'Development of the Evaluation Technology for Complex Causes of Inundation Vulnerability and the Response Plans in Coastal Urban Areas for Adaptation to Climate Change' [MPSS-NH-2015-77] from the Natural Hazard Mitigation Research Group, Ministry of Public Safety and Security of Korea.
Keywords: flooding analysis, river inundation, inland flooding, 2D hydrodynamic model
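For context, a common conservative form of the 2D shallow water equations with an added inland source/sink term is sketched below. The exponential form of S(t) mirrors the "exponential growth rate" attribute mentioned above, but the exact terms of the HDM-2D formulation are not given in the abstract, so this is only schematic.

```latex
% Schematic 2D shallow water equations with an added inland source/sink term S(t);
% h = water depth, (u, v) = depth-averaged velocities, z_b = bed elevation,
% \tau_{bx}, \tau_{by} = bed shear stresses, g = gravitational acceleration.
\begin{aligned}
\frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} + \frac{\partial (hv)}{\partial y} &= S(t), \qquad S(t) = S_0 e^{kt},\\
\frac{\partial (hu)}{\partial t} + \frac{\partial}{\partial x}\left(hu^2 + \tfrac{1}{2} g h^2\right) + \frac{\partial (huv)}{\partial y} &= -gh\frac{\partial z_b}{\partial x} - \frac{\tau_{bx}}{\rho},\\
\frac{\partial (hv)}{\partial t} + \frac{\partial (huv)}{\partial x} + \frac{\partial}{\partial y}\left(hv^2 + \tfrac{1}{2} g h^2\right) &= -gh\frac{\partial z_b}{\partial y} - \frac{\tau_{by}}{\rho}.
\end{aligned}
```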
Use of Low-Cost Hydrated Hydrogen Sulphate-Based Protic Ionic Liquids for Extraction of Cellulose-Rich Materials from Common Wheat (Triticum Aestivum) Straw
Authors: Chris Miskelly, Eoin Cunningham, Beatrice Smyth, John D. Holbrey, Gosia Swadzba-Kwasny, Emily L. Byrne, Yoan Delavoux, Mantian Li
Abstract:
Recently, the use of ionic liquids (ILs) for the preparation of lignocellulose-derived cellulosic materials as alternatives to petrochemical feedstocks has been the focus of considerable research interest. While the technical viability of IL-based lignocellulose treatment methodologies has been well established, the high cost of reagents inhibits commercial feasibility. This work aimed to assess the technoeconomic viability of the preparation of cellulose-rich materials (CRMs) using protic ionic liquids (PILs) synthesized from low-cost alkylamines and sulphuric acid. For this purpose, the tertiary alkylamines triethylamine and dimethylbutylamine were selected. The bulk-scale production cost of the synthesized PILs, triethylammonium hydrogen sulphate and dimethylbutylammonium hydrogen sulphate, was reported as $0.78 kg-1 to $1.24 kg-1. CRMs were prepared through the treatment of common wheat (Triticum aestivum) straw with these PILs. By controlling treatment parameters, CRMs with a cellulose content of ≥ 80 wt% were prepared. This was achieved using a T. aestivum straw to PIL loading ratio of 1:15 w/w, a treatment duration of 180 minutes, and ethanol as a cellulose antisolvent. Infrared spectral data and the decreased onset degradation temperature of CRMs (ΔTONSET ~ 70 °C) suggested the formation of cellulose sulphate esters during treatment. Chemical derivatisation can aid the dispersion of prepared CRMs in non-polar polymer/composite matrices but acts as a barrier to thermal processing at temperatures above 150 °C. It was also shown that treatment increased the crystallinity of CRMs (ΔCrI ~ 40%) without altering the native crystalline structure or crystallite size (~2.6 nm) of cellulose; peaks associated with the cellulose I crystalline planes (110), (200), and (004) were observed at Bragg angles of 16.0°, 22.5°, and 35.0°, respectively. This highlighted the inability of the assessed PILs to dissolve crystalline cellulose and was attributed to the high acidity (pKa ~ -1.92 to -6.42) of sulphuric acid-derived anions. Electron micrographs revealed that the stratified multilayer tissue structure of untreated T. aestivum straw was significantly modified during treatment. T. aestivum straw particles were disassembled during treatment, with prepared CRMs adopting a golden-brown film-like appearance. This work demonstrated the degradation of the non-cellulosic fractions of lignocellulose without dissolution of cellulose. It is the first to report on the derivatisation of cellulose during treatment with protic hydrogen sulphate ionic liquids and the potential implications of this with reference to biopolymer feedstock preparation.
Keywords: cellulose, extraction, protic ionic liquids, esterification, thermal stability, waste valorisation, biopolymer feedstock
A Low-Cost Long-Range 60 GHz Backhaul Wireless Communication System
Authors: Atabak Rashidian
Abstract:
In duplex backhaul wireless communication systems, two separate transmit and receive high-gain antennas are required if an antenna switch is not implemented. Although the switch loss, which is considerable and on the order of 1.5 dB at 60 GHz, is avoided, the large separate antenna systems make the design bulky and not cost-effective. To avoid two large reflectors for such a system, transmit and receive antenna feeds with a common phase center are required. The phase center should coincide with the focal point of the reflector to maximize the efficiency and gain. In this work, we present an ultra-compact design in which stacked patch antennas are used as the feeds for a 12-inch reflector. The transmit antenna is a 1 × 2 array, and the receive antenna is a single element located in the middle of the transmit antenna elements. The antenna elements are designed as stacked patches to provide the required impedance bandwidth for the four standard channels of WiGig™ applications. The design includes three metallic layers and three dielectric layers, in which the top dielectric layer is a 100 µm-thick protective layer. The top two metallic layers are dedicated to the main and parasitic patches. The bottom layer is essentially a ground plane with two circular openings (0.7 mm in diameter), each having a central through-via that connects the antennas to a single input/output SiGe BiCMOS transceiver chip. The reflection coefficient of the stacked patch antenna is fully investigated. The -10 dB impedance bandwidth is about 11%. Although the gap between the transmit and receive antennas is very small (g = 0.525 mm), the mutual coupling is less than -12 dB over the desired frequency band. The three-dimensional radiation patterns of the transmit and receive reflector antennas at 60 GHz are investigated over the impedance bandwidth. About 39 dBi of realized gain is achieved. Considering over 15 dBm of output power from the silicon chip on the transmit side, the EIRP should be over 54 dBm, which is good enough for multi-Gbps data communications over one kilometer. The performance of the reflector antenna over the bandwidth shows that the peak gain is 39 dBi and 40 dBi for the reflector antenna with the 2-element and single-element feed, respectively. This type of system design is cost-effective and efficient.
Keywords: antenna, integrated circuit, millimeter-wave, phase center
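The quoted EIRP figure follows from the usual link-budget sum of transmit power and realized antenna gain in decibel units (values taken from the abstract):

```latex
% Link-budget arithmetic behind the quoted EIRP figure:
\mathrm{EIRP}\ [\mathrm{dBm}] = P_{\mathrm{tx}}\ [\mathrm{dBm}] + G_{\mathrm{tx}}\ [\mathrm{dBi}]
\approx 15\ \mathrm{dBm} + 39\ \mathrm{dBi} = 54\ \mathrm{dBm}
```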
A Study of Effective Event Development and the Sustainability of Tourism Industry in Lagos State, Nigeria
Authors: Olajumoke Elizabeth Olawale-Olakunle
Abstract:
This research examined the effect of effective event development on the sustainability of tourism in Lagos State. The objectives were to ascertain the implications of effective event development for cost, environmental innovations, opportunities for participants, job creation, and working conditions. There was also a focus on employee participation and the sustainability of the tourism industry. The primary data were obtained via a structured questionnaire administered to the selected respondents. Simple random sampling was used to select the respondents, using the Taro Yamane formula. The formulated hypothesis was tested using analysis of variance (ANOVA) and the non-parametric chi-square test. From the tests conducted, the results showed that effective event development has helped to reduce costs, bring about environmental innovations, offer unique opportunities to event participants, create jobs, and promote better working conditions, and that its influence on employee participation affects the sustainability of the tourism industry. Based on these results, it was concluded that effective event development helps to achieve sustainability in the tourism industry by reducing costs, ensuring efficient use of tourism resources, and offering unique opportunities to event participants. It was therefore recommended that events be developed in such a way that they help to reduce costs and ease the financial burdens of participants and stakeholders, thereby achieving sustainability in the tourism industry.
Keywords: tourism, hospitality, industry, development
Evaluation of the Internal Quality for Pineapple Based on the Spectroscopy Approach and Neural Network
Authors: Nonlapun Meenil, Pisitpong Intarapong, Thitima Wongsheree, Pranchalee Samanpiboon
Abstract:
In Thailand, once pineapples are harvested, they must be classified into two classes based on their sweetness: sweet and unsweet. This paper studies and develops the assessment of the internal quality of pineapples using a low-cost compact spectroscopy sensor and a neural network (NN). During the experiments, Batavia pineapples were utilized, generating 100 samples. The extracted pineapple juice of each sample was used to determine the soluble solid content (SSC), which was used to label the samples as sweet or unsweet. In terms of experimental equipment, the sensor cover was specifically designed to hold the sensor and light source so as to read the reflectance at a five mm depth in the pineapple flesh. Using the spectroscopy sensor, visible and near-infrared reflectance (Vis-NIR) data were collected. The NN was used to classify the pineapple classes. Before the classification step, the preprocessing methods of class balancing, data shuffling, and standardization were applied. The 510 nm and 900 nm reflectance values of the middle parts of the pineapples were used as features of the NN. With a Sequential model and the ReLU activation function, 100% accuracy on the training set and 76.67% accuracy on the test set were achieved. These results show that a low-cost compact spectroscopy sensor can classify the sweetness of the two classes of pineapples with favorable accuracy.
Keywords: neural network, pineapple, soluble solid content, spectroscopy
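A minimal sketch of the pipeline described above follows, assuming a two-feature input (510 nm and 900 nm reflectance) and an illustrative network size; the data are random placeholders, not the 100 Batavia samples, and the exact layer sizes and training settings are assumptions.

```python
# Minimal sketch: standardize the two reflectance features (510 nm and 900 nm) and
# train a small Sequential network with ReLU hidden units to separate sweet from
# unsweet pineapples. Data, layer sizes, and training settings are assumed.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.rand(100, 2)              # placeholder reflectance at 510 nm and 900 nm
y = np.random.randint(0, 2, size=100)   # 1 = sweet, 0 = unsweet (from SSC labelling)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_train)               # standardization step
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=100, batch_size=8, verbose=0)
print("test accuracy:", model.evaluate(X_test, y_test, verbose=0)[1])
```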
Natural Gas Production Forecasts Using Diffusion Models
Authors: Md. Abud Darda
Abstract:
Different options for natural gas production in wide geographic areas may be described through diffusion-of-innovation models. This type of modeling approach provides an indirect estimate of the ultimately recoverable resource (URR), captures the quantitative effects of observed strategic interventions, and allows ex-ante assessments of future scenarios over time. In order to ensure a sustainable energy policy, it is important to forecast the availability of this natural resource. Considering a finite life cycle, in this paper we investigate the natural gas production of Myanmar and Algeria, two important natural gas providers in the world energy market. A number of homogeneous and heterogeneous diffusion models, with convenient extensions, have been used. Model validation has also been performed in terms of prediction capability.
Keywords: diffusion models, energy forecast, natural gas, nonlinear production
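As an illustration of the homogeneous diffusion models referred to above, the sketch below fits a Bass-type cumulative production curve whose saturation level plays the role of the URR; the "observed" series is synthetic, and the paper's exact model variants are not reproduced here.

```python
# Illustrative fit of a Bass-type diffusion curve to cumulative gas production; the
# saturation level m plays the role of the URR. The "observed" series is synthetic.
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    """Cumulative production: m = ultimate level (URR), p = innovation, q = imitation."""
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

years = np.arange(0, 25, dtype=float)
rng = np.random.default_rng(7)
observed = bass_cumulative(years, 1200.0, 0.02, 0.35) + rng.normal(0, 10, years.size)

(m, p, q), _ = curve_fit(bass_cumulative, years, observed, p0=(1000.0, 0.01, 0.3))
print(f"estimated URR ~ {m:.0f}, p ~ {p:.3f}, q ~ {q:.3f}")
```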
Integration of Microarray Data into a Genome-Scale Metabolic Model to Study Flux Distribution after Gene Knockout
Authors: Mona Heydari, Ehsan Motamedian, Seyed Abbas Shojaosadati
Abstract:
Prediction of perturbations after genetic manipulation (especially gene knockout) is one of the important challenges in systems biology. In this paper, a new algorithm is introduced that integrates microarray data into the metabolic model. The algorithm was used to study the change in the cell phenotype after knockout of the Gss gene in Escherichia coli BW25113. Implementation of the algorithm indicated that the gene deletion resulted in greater activation of the metabolic network. The growth yield was higher, and fewer regulating genes were identified for the mutant in comparison with the wild-type strain.
Keywords: metabolic network, gene knockout, flux balance analysis, microarray data, integration
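Flux balance analysis, listed in the keywords, is at heart a linear program: maximize a biomass objective subject to the steady-state balance S·v = 0 and flux bounds. The toy sketch below illustrates only that step; it is not the genome-scale model or the microarray-integration algorithm proposed in the paper.

```python
# Schematic flux balance analysis on a toy network using linear programming, before
# and after closing one reaction to mimic a gene knockout. Purely illustrative.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows: metabolites A, B, P; columns: reactions
# r0 uptake->A, r1 A->P, r2 A->B, r3 B->P, r4 P->biomass export).
S = np.array([
    [1, -1, -1,  0,  0],
    [0,  0,  1, -1,  0],
    [0,  1,  0,  1, -1],
])
c = np.zeros(5); c[4] = -1.0   # maximize biomass export (linprog minimizes, hence the sign)

def fba(bounds):
    res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
    return -res.fun

wild_type = [(0, 10), (0, 10), (0, 10), (0, 4), (0, 10)]   # the bypass r3 has lower capacity
knockout = list(wild_type); knockout[1] = (0, 0)           # "delete" reaction r1

print("wild-type growth flux:", fba(wild_type))
print("knockout growth flux: ", fba(knockout))
```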
A Study on Impact of Scheduled Preventive Maintenance on Overall Shelf Life as Well as Reduction of Operational Downtime of Critical Oil Field Mobile Equipment
Authors: Dipankar Deka
Abstract:
Exploration and production of oil and gas is a very challenging business on which a nation's energy security depends. The exploration and production of hydrocarbons is a very precise and time-bound process. The strike rate of hydrocarbons in a drilled well is so uncertain that the success rate was only 31% in 2021, as per Rigzone. Huge costs are involved in drilling as well as in the production of hydrocarbons from a well. For this very reason, no one can afford to lose a well because of faulty machines, which increase the non-productive time (NPT). Numerous activities involving manpower and machines, synchronized together, work in a precise way to complete the full cycle of exploration, rig movement, drilling, and production of crude oil. Several machines, both fixed and mobile, are used in the complete cycle. Most of these machines have a tight schedule of work, operating at various drilling sites that are being drilled simultaneously, providing a very narrow window for maintenance. The shutdown of any of these machines for even a small period of time delays the whole project and increases the cost of hydrocarbon production manifold. Moreover, these machines are custom designed exclusively for oil field operations, to be used only in the Mining Exploration Licensed (MEL) areas earmarked by the government, and are imported and very costly. The cost of some of these mobile units, like the well logging unit, coil tubing unit, and nitrogen pumping unit used for the well stimulation and activation process, exceeds 1 million USD per unit. So an increase in the shelf life of these units also generates huge revenues during the extended duration of their service. In this paper, we consider very critical mobile oil field equipment, like the well logging unit, coil tubing unit, well-killing unit, nitrogen pumping unit, MOL oil field truck, and hot oil circulation unit, and their extensive preventive maintenance in our auto workshop. This paper is the outcome of 10 years of structured automobile maintenance and minute documentation of each associated event, which allowed us to perform a comparative study between the new practice of preventive maintenance and the age-old practice of symptom-based corrective maintenance and its impact on the shelf life of the equipment.
Keywords: automobile maintenance, preventive maintenance, symptom based maintenance, workshop technologies
Comparative Economic Analysis of Floating Photovoltaic Systems Using a Synthesis Approach
Authors: Ching-Feng Chen
Abstract:
Floating photovoltaic (FPV) systems offer economic benefits and energy performance advantages with respect to carbon dioxide (CO₂) emissions. Due to land resource scarcity and the many neglected water territories, such as reservoirs, dams, and lakes, in Japan and Taiwan, both countries are actively developing FPV and responding to the pricing of emissions trading systems (ETS). This paper performs a case study through a synthesis approach to compare the economic indicators of the FPVs at Taiwan's Agongdian Reservoir and Japan's Yamakura Dam. The research results show that system capacity, installation costs, bank interest rates, and ETS and electricity bills affect FPV operating gains. In the post-Feed-In-Tariff (FIT) phase, investing in FPV in Japan is more profitable than in Taiwan. The former's positive net present value (NPV), high internal rate of return (IRR) of 11.6%, and benefit-cost ratio (BCR) above 1 (2.0) at a discount rate of 10% indicate that investing in FPV in Japan is more favorable than in Taiwan. In addition, the breakeven point is modest (about 61.3%). The methodology presented in the study helps investors evaluate a scheme's pros and cons and determine whether a decision is beneficial when funding PV or FPV projects.
Keywords: carbon border adjustment mechanism, floating photovoltaic, emissions trading systems, net present value, internal rate of return, benefit-cost ratio
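The three indicators compared in the study can be computed as in the hedged sketch below; all cash-flow figures are invented placeholders rather than the Agongdian or Yamakura data.

```python
# Hedged sketch of the three indicators compared in the study (NPV, IRR, BCR).
# The cash-flow figures are invented placeholders, not the study's project data.
def npv(rate, cash_flows):
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.9, hi=1.0, tol=1e-6):
    # bisection on NPV(rate) = 0 for a conventional cash-flow profile
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

capex = 2_000_000
annual_net_revenue = [300_000] * 20            # electricity sales plus ETS credits, 20 years
cash_flows = [-capex] + annual_net_revenue

rate = 0.10                                    # discount rate used in the abstract
print("NPV:", round(npv(rate, cash_flows)))
print("IRR:", round(irr(cash_flows), 3))
print("BCR:", round(npv(rate, [0] + annual_net_revenue) / capex, 2))
```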
Demographic Dividend Explained by Infrastructure Costs of Population Growth Rate, Distinct from Age Dependency
Authors: Jane N. O'Sullivan
Abstract:
Although it is widely believed that fertility decline has benefitted economic advancement, particularly in East and South-East Asian countries, the causal mechanisms for this stimulus are contested. Since the turn of this century, demographic dividend theory has been increasingly recognised, hypothesising that higher proportions of working-age people can contribute to economic expansion if conditions are met to employ them productively. Population growth rate, as a systemic condition distinct from age composition, has not received similar attention since the 1970s and has lacked a methodology for quantitative assessment. This paper explores conceptual and empirical quantification of the burden of expanding physical capital to accommodate a growing population. In proof-of-concept analyses of Australia and the United Kingdom, actual expenditure on gross fixed capital formation was compiled over four decades and apportioned to maintenance/turnover or to expansion to accommodate population growth, based on the lifespan of capital assets and the population growth rate. In both countries, capital expansion was estimated to cost 6.5-7.0% of GDP per 1% population growth rate. This opportunity cost impedes the improvement of per capita capacity needed to realise the potential of the working-age population. Economic modelling of demographic scenarios has to date omitted this channel of influence; the implications of its inclusion are discussed.
Keywords: age dependency, demographic dividend, infrastructure, population growth rate
Sustainable Manufacturing of Concentrated Latex and Ribbed Smoked Sheets in Sri Lanka
Authors: Pasan Dunuwila, V. H. L. Rodrigo, Naohiro Goto
Abstract:
Sri Lanka is one of the largest natural rubber (NR) producers in the world, where the NR industry is a major foreign exchange earner. Among the locally manufactured NR products, concentrated latex (CL) and ribbed smoked sheets (RSS) hold a significant position. Furthermore, these products form the foundation for many products utilized by people all over the world (e.g., gloves, condoms, tires). Processing of CL and RSS consumes a significant amount of material, energy, and labour. Against this background, both manufacturing lines are immensely challenged by waste, low productivity, a lack of cost efficiency, rising production costs, and many environmental issues. To face the above challenges, the adoption of sustainable manufacturing measures that use less energy, water, and material and produce less waste is imperative. However, these sectors lack comprehensive studies that shed light on such measures and thoroughly discuss their improvement potential from both environmental and economic points of view. Therefore, based on a study of three CL and three RSS mills in Sri Lanka, this study deploys sustainable manufacturing techniques and tools to uncover the underlying potential to improve performance in the CL and RSS processing sectors. This study comprises three steps: (1) quantification of average material waste, economic losses, and greenhouse gas (GHG) emissions via material flow analysis (MFA), material flow cost accounting (MFCA), and life cycle assessment (LCA) in each manufacturing process; (2) identification of improvement options with the help of Pareto and what-if analyses, field interviews, and the existing literature; and (3) validation of the identified improvement options via re-execution of MFA, MFCA, and LCA. With the help of this methodology, the economic and environmental hotspots and the degrees of improvement in both systems could be identified. Results highlighted that each process could be improved to have less waste, lower monetary losses, lower manufacturing costs, and fewer GHG emissions. In conclusion, the study's methodology and findings are believed to be beneficial for assuring sustainable growth not only in the Sri Lankan NR processing sector itself but also in the NR or any other industry rooted in other developing countries.
Keywords: concentrated latex, natural rubber, ribbed smoked sheets, Sri Lanka
Numerical Prediction of Wall Eroded Area by Cavitation
Authors: Ridha Zgolli, Ahmed Belhaj, Maroua Ennouri
Abstract:
This study presents a new method to predict the cavitation areas that may be eroded. It is based on the post-processing of URANS simulations of cavitating flows. Most RANS calculations under the incompressible assumption are based on a cavitation model using a mixture fluid whose density (ρm) is calculated as a function of the liquid density (ρliq), the vapour or gas density (ρvap), and the vapour or gas volume fraction α (ρm = αρvap + (1-α)ρliq). The calculations are performed on hydrofoil geometries and compared with experimental work concerning the flow characteristics (pocket size, pressure, velocity). We present here the cavitation model used and the approach followed to evaluate the value of α, which fixes the shape of the pocket near the wall before it collapses.
Keywords: flows, CFD, cavitation, erosion
Fourier Transform and Machine Learning Techniques for Fault Detection and Diagnosis of Induction Motors
Authors: Duc V. Nguyen
Abstract:
Induction motors are widely used in different industry areas and can experience various kinds of faults in stators and rotors. In general, fault detection and diagnosis techniques for induction motors can be based on measuring quantities such as noise, vibration, and temperature. The installation of mechanical sensors in order to assess the health condition of a machine is typically only done for expensive or load-critical machines, where the high cost of a continuous monitoring system can be justified. Nevertheless, induced current monitoring can be implemented inexpensively on machines of arbitrary size by using current transformers. In this regard, effective and low-cost fault detection techniques can be implemented, hence reducing the maintenance and downtime costs of motors. This work proposes a method for fault detection and diagnosis of induction motors which combines the classical fast Fourier transform and modern/advanced machine learning techniques. The proposed method is validated on real-world data and achieves a precision of 99.7% for fault detection and 100% for fault classification with minimal expert knowledge requirements. In addition, this approach allows users to optimize/balance risks and maintenance costs to achieve the highest benefit based on their requirements. These are the key requirements of a robust prognostics and health management system.
Keywords: fault detection, FFT, induction motor, predictive maintenance
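A minimal sketch of the kind of signal path described above follows: take the FFT of a stator-current window, keep low-order spectral magnitudes as features, and train a classifier. The synthetic signals, sideband frequencies, and the choice of a random forest are assumptions for illustration; the abstract does not name the exact learning algorithm.

```python
# Minimal sketch: FFT of a stator-current window, low-frequency spectral magnitudes
# as features, then a classifier. Signals and classifier choice are assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

fs, n = 2000, 2000                       # 1-second windows sampled at 2 kHz (assumed)
t = np.arange(n) / fs
rng = np.random.default_rng(3)

def current_window(faulty):
    x = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.normal(size=n)   # 50 Hz supply component
    if faulty:                           # fault signature modelled as sidebands
        x += 0.2 * np.sin(2 * np.pi * 44 * t) + 0.2 * np.sin(2 * np.pi * 56 * t)
    return np.abs(np.fft.rfft(x))[:200]  # low-frequency magnitude spectrum as features

X = np.array([current_window(i % 2 == 1) for i in range(400)])
y = np.array([i % 2 for i in range(400)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```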
Local Binary Patterns-Based Statistical Data Analysis for Accurate Soccer Match Prediction
Authors: Mohammad Ghahramani, Fahimeh Saei Manesh
Abstract:
Winning a soccer game is based on thorough and deep analysis of the ongoing match. On the other hand, giant gambling companies are in vital need of such analysis to reduce their losses to their customers. In this research work, we perform deep, real-time analysis of every soccer match around the world; our work is distinguished from others by its focus on particular seasons, teams, and partial analytics. Our contributions are presented in the platform called "Analyst Masters." First, we introduce various sources of information available for soccer analysis for teams around the world that helped us record live statistical data and information from more than 50,000 soccer matches a year. Our second and main contribution is to introduce our proposed in-play performance evaluation. The third contribution is developing new features from stable soccer matches. The statistics of soccer matches and their odds, before and in-play, are considered in image format versus time, including the halftime. Local binary patterns (LBP) are then employed to extract features from the image. Our analyses reveal incredibly interesting features and rules once a soccer match has reached enough stability. For example, our "8-minute rule" implies that if 'Team A' scores a goal and can maintain the result for at least 8 minutes, then a stable match would end in their favor. We could also make accurate pre-match predictions of scoring less/more than 2.5 goals. We benefit from gradient boosting trees (GBT) to extract highly related features. Once the features are selected from this pool of data, decision trees decide if the match is stable. A stable match is then passed to a post-processing stage that checks its properties, such as bettors' and punters' behavior and its statistical data, to issue the prediction. The proposed method was trained using 140,000 soccer matches and tested on more than 100,000 samples, achieving 98% accuracy in selecting stable matches. Our database of 240,000 matches shows that one can get over 20% betting profit per month using Analyst Masters. Such consistent profit outperforms human experts and shows the inefficiency of the betting market. Top soccer tipsters achieve 50% accuracy and 8% monthly profit on average, and only on regional matches. Both our collected database of more than 240,000 soccer matches since 2012 and our algorithm would greatly benefit coaches and punters seeking accurate analysis.
Keywords: soccer, analytics, machine learning, database
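The feature step can be sketched as below: an LBP histogram is computed from an image-shaped array of in-play statistics and odds over time and passed to gradient boosting trees. The arrays, labels, and LBP parameters are placeholders, not the Analyst Masters data.

```python
# Sketch of the feature step: compute a local binary pattern (LBP) histogram from an
# image-shaped array of in-play statistics/odds over time and feed it to gradient
# boosting. The input arrays and labels are random placeholders.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import GradientBoostingClassifier

def lbp_features(match_image, P=8, R=1.0):
    codes = local_binary_pattern(match_image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(4)
images = (rng.random((500, 64, 90)) * 255).astype("uint8")   # 90 "minutes" per match
X = np.array([lbp_features(img) for img in images])
y = rng.integers(0, 2, size=500)    # 1 = stable match, 0 = not (placeholder labels)

model = GradientBoostingClassifier().fit(X, y)
print("training accuracy on placeholder data:", model.score(X, y))
```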
On the Design of a Secure Two-Party Authentication Scheme for Internet of Things Using Cancelable Biometrics and Physically Unclonable Functions
Authors: Behnam Zahednejad, Saeed Kosari
Abstract:
The widespread deployment of the Internet of Things (IoT) has raised security and privacy issues in this environment. Designing a secure two-factor authentication scheme between the user and the server is still a challenging task. In this paper, we focus on Cancelable Biometrics (CB) as an authentication factor in IoT. We show that a previous CB-based scheme fails to provide real two-factor security and Perfect Forward Secrecy (PFS) and suffers from database attacks and traceability of the user. We then propose our improved scheme based on CB and Physically Unclonable Functions (PUF), which can provide real two-factor security, PFS, user unlinkability, and resistance to database attacks. In addition, Key Compromise Impersonation (KCI) resilience is achieved in our scheme. We also prove the security of our proposed scheme formally using both the Real-Or-Random (RoR) model and the ProVerif analysis tool. Regarding the usability of our scheme, we conducted a performance analysis and showed that our scheme has the lowest communication cost compared to the previous CB-based scheme. The computational cost of our scheme is also acceptable for the IoT environment.
Keywords: IoT, two-factor security, cancelable biometric, key compromise impersonation resilience, perfect forward secrecy, database attack, real-or-random model, ProVerif
The Extent of Virgin Olive-Oil Prices' Distribution Revealing the Behavior of Market Speculators
Authors: Fathi Abid, Bilel Kaffel
Abstract:
The olive tree, the olive harvest during the winter season, and the production of olive oil, better known to professionals as the crushing operation, have interested institutional traders such as olive-oil offices and private companies such as food-industry firms refining and extracting pomace olive oil, as well as public and private export-import companies specializing in olive oil. The major problem facing producers of olive oil each winter campaign, contrary to what is expected, is not whether the harvest will be good or not but whether the sale price will allow them to cover production costs and achieve a reasonable margin of profit. These questions are entirely legitimate judging by the importance of the issue and the heavy complexity of the uncertainty and competition, made tougher by a high level of indebtedness and by the experience and expertise of speculators and producers whose objectives are sometimes conflicting. The aim of this paper is to study the formation mechanism of olive oil prices in order to learn about speculators' behavior and expectations in the market, how they contribute through their industry knowledge and their financial alliances, and the size of the financial challenge that may be involved for them in building private information hoses globally to take advantage. The methodology used in this paper is based on two stages: in the first stage, we study econometrically the formation mechanisms of the olive oil price in order to understand market participants' behavior by implementing ARMA, SARMA, GARCH, and stochastic diffusion process models; the second stage is devoted to prediction purposes, where we use a combined wavelet-ANN approach. Our main findings indicate that olive oil market participants interact with each other in a way that promotes the formation of stylized facts. The unstable behavior of participants creates the volatility clustering, nonlinear dependence, and cyclicity phenomena. By imitating each other in some periods of the campaign, different participants contribute to the fat tails observed in the olive oil price distribution. The best prediction model for the olive oil price is based on a backpropagation artificial neural network approach with input information based on wavelet decomposition and recent past history.
Keywords: olive oil price, stylized facts, ARMA model, SARMA model, GARCH model, combined wavelet-artificial neural network, continuous-time stochastic volatility model
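The combined wavelet-ANN predictor identified above as the best model can be sketched as follows; the price series, window length, wavelet, and network size are assumptions for illustration only.

```python
# Sketch of a combined wavelet-ANN forecaster: decompose each window of recent prices
# with a discrete wavelet transform and feed the coefficients to a small neural
# network that predicts the next price. All data and settings are assumed.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
prices = np.cumsum(rng.normal(0, 1, 600)) + 100.0   # placeholder olive oil price series
window = 32

X, y = [], []
for i in range(len(prices) - window):
    coeffs = pywt.wavedec(prices[i:i + window], "db4", level=2)
    X.append(np.concatenate(coeffs))                 # wavelet features of the past window
    y.append(prices[i + window])                     # next price as the target
X, y = np.array(X), np.array(y)

split = int(0.8 * len(X))
ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
ann.fit(X[:split], y[:split])
print("out-of-sample R^2:", round(ann.score(X[split:], y[split:]), 3))
```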
Hydroinformatics of Smart Cities: Real-Time Water Quality Prediction Model Using a Hybrid Approach
Authors: Elisa Coraggio, Dawei Han, Weiru Liu, Theo Tryfonas
Abstract:
Water is one of the most important resources for human society. The world is currently undergoing a wave of urban growth, and pollution problems have a great impact. Monitoring water quality is a key task for the future of the environment and the human species. In recent times, researchers using Smart Cities technologies have been trying to mitigate the problems generated by population growth in urban areas. The availability of huge amounts of data collected by a pervasive urban IoT can increase the transparency of decision making. Several services have already been implemented in Smart Cities, but more and more services will be involved in the future. Water quality monitoring can successfully be implemented in the urban IoT. The combination of water quality sensors, cloud computing, smart city infrastructure, and IoT technology can lead to a bright future for environmental monitoring. In the past decades, a lot of effort has been put into monitoring and predicting water quality using traditional approaches based on manual collection and laboratory-based analysis, which are slow and laborious. The present study proposes a methodology for implementing a water quality prediction model using artificial intelligence techniques and comparing the results obtained with different algorithms. Furthermore, a 3D numerical model will be created using the software D-Water Quality, and simulation results will be used as a training dataset for the artificial intelligence algorithm. This study derives the methodology and demonstrates its implementation based on information and data collected at the floating harbour in the city of Bristol (UK). The city of Bristol is blessed with the Bristol-Is-Open infrastructure, which includes a Wi-Fi network and virtual machines; it was also named the UK's smartest city in 2017.
Keywords: artificial intelligence, hydroinformatics, numerical modelling, smart cities, water quality
Synthesis and Characterization of Hydroxyapatite from Biowaste for Potential Medical Application
Authors: M. D. H. Beg, John O. Akindoyo, Suriati Ghazali, Nitthiyah Jeyaratnam
Abstract:
Over time, several approaches have been undertaken to mitigate the challenges associated with bone regeneration. These include, but are not limited to, xenografts, allografts, and autografts, as well as artificial substitutes like bioceramics, synthetic cements, and metals. The former three techniques often come along with peculiar limitations and problems such as morbidity, availability, disease transmission, collateral site damage, or absolute rejection by the body, as the case may be. Synthetic routes remain the only feasible alternative option for the treatment of bone defects. Hydroxyapatite (HA) is very compatible and suitable for this application. However, most of the common methods for HA synthesis are either expensive, complicated, or environmentally unfriendly. Interestingly, extraction of HA from bio-wastes has been perceived not only to be cost-effective but also environmentally friendly. In this research, HA was synthesized from a bio-waste, namely bovine bones, through three different methods: hydrothermal chemical processing, ultrasound-assisted synthesis, and ordinary calcination. Structure and property analysis of the HA was carried out through different characterization techniques such as TGA, FTIR, and XRD. All the methods applied were able to produce HA with compositional properties similar to biomaterials found in human calcified tissues. The calcination process was, however, observed to be more efficient, as it eliminated all the organic components from the produced HA. The HA synthesized is unique for its minimal cost and environmental friendliness. It is also perceived to be suitable for tissue and bone engineering applications.
Keywords: hydroxyapatite, bone, calcination, biowaste
Demulsification of Oil from Produced Water Using Fibrous Coalescer
Authors: Nutcha Thianbut
Abstract:
In the petroleum drilling industry, besides oil and gas, water is also produced during petroleum production; this water, commonly referred to as produced water, contains oil droplets dispersed in it as an emulsion. Most industrial practice is to pump produced water back into wells or catchment areas because it cannot be utilized further, but the cost of pumping the water back each time is quite high. Surveys have also found that the amount of water from the petroleum production process has increased every year. In this research, we study the removal of oil from produced water by a coalescer device using fibers from agricultural waste as the medium, as an alternative for reducing the cost of water management in the petroleum drilling industry. The objectives of this research are (1) to study chemical pretreatment of the fibers for efficient oil-water separation, (2) to study and design a fiber-packed coalescer device to break the emulsion of crude oil in water, and (3) to study the operating conditions of coalescer devices for emulsion breaking using a fiber medium. The experiment was divided into two parts. The first part studies the absorbency of the fibers, comparing untreated fibers with alkali-treated fibers over time, as well as the effect of the amount of fiber on its absorbency. The second part studies the separation of oil from produced water by coalescer equipment using fiber as the medium, in order to determine the optimum conditions of the coalescer equipment for further development and industrial application.
Keywords: produced water, fiber, surface modification, coalescer
Production of Metal Powder Using Twin Arc Spraying Process for Additive Manufacturing
Authors: D. Chen, H. Daoud, C. Kreiner, U. Glatzel
Abstract:
Additive manufacturing (AM) provides promising opportunities to optimize and produce tooling by integrating near-contour tempering channels for more efficient cooling. To enhance the properties of tooling produced using additive manufacturing, prototypes should be produced within short periods. This requires small amounts of tailored powders, which either have a high production cost or are commercially unavailable. Hence, in this study, an arc spray atomization approach was proposed to produce tailored metal powder at a lower cost and even in small quantities, in comparison to conventional powder production methods. This approach involves converting commercially available metal wire into powder by modifying the wire arc spraying process. The influences of the spray medium and gas pressure on the powder properties were investigated. As a result, particles with smooth surfaces and lower porosity were obtained when non-oxidizing gases were used for thermal spraying. The particle size decreased with increasing gas pressure, and the particle sizes are in the range of 10 to 70 µm, which is desirable for selective laser melting (SLM). A comparison of the microstructure and mechanical behavior of SLM-generated parts using arc-sprayed powder (alloy: X5CrNiCuNb 16-4) and commercial powder (alloy: X5CrNiCuNb 16-4) was also conducted.
Keywords: additive manufacturing, arc spraying, powder production, selective laser melting
Analysis of Ferroresonant Overvoltages in Cable-Fed Transformers
Authors: George Eduful, Ebenezer A. Jackson, Kingsford A. Atanga
Abstract:
This paper investigates the impacts of cable length and transformer capacity on ferroresonant overvoltages in cable-fed transformers. The study was conducted by simulation using EMTP-RV. Results show that ferroresonance can cause dangerous overvoltages ranging from 2 to 5 per unit. These overvoltages impose stress on the insulation of transformers and cables and subsequently result in system failures. By undertaking basic multiple regression (BMR) analysis on the results obtained, a statistical model was derived in terms of cable length and transformer capacity. The model is useful for ferroresonance prediction and control in cable-fed transformers.
Keywords: ferroresonance, cable-fed transformers, EMTP RV, regression analysis
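The regression step can be illustrated as below; the data points are invented placeholders standing in for the EMTP-RV simulation results.

```python
# Sketch of the regression step: fit peak overvoltage (per unit) as a function of
# cable length and transformer capacity. The data points are invented placeholders,
# not the EMTP-RV simulation results reported in the paper.
import numpy as np
from sklearn.linear_model import LinearRegression

# columns: cable length (km), transformer capacity (kVA)
X = np.array([[0.5, 100], [0.5, 315], [1.0, 100], [1.0, 315],
              [2.0, 100], [2.0, 315], [3.0, 100], [3.0, 315]], dtype=float)
y = np.array([2.1, 2.4, 2.9, 3.3, 3.8, 4.2, 4.6, 5.0])   # peak overvoltage, per unit

model = LinearRegression().fit(X, y)
print("coefficients (length, capacity):", model.coef_)
print("predicted overvoltage at 1.5 km, 200 kVA:",
      round(float(model.predict([[1.5, 200.0]])[0]), 2), "p.u.")
```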
Efficient Utilization of Negative Half Wave of Regulator Rectifier Output to Drive Class D LED Headlamp
Authors: Lalit Ahuja, Nancy Das, Yashas Shetty
Abstract:
LED lighting has been increasingly adopted for vehicles in both domestic and foreign automotive markets. This miniaturized technology gives the best light output, low energy consumption, and cost-efficient solutions for driving, which is the need of the hour. In this paper, we present a methodology for driving the highest-class two-wheeler headlamp with the regulator and rectifier (RR) output. Unlike usual LED headlamps, which are battery driven, an RR-driven, low-cost, and highly efficient LED Driver Module (LDM) is proposed. The positive half of the magneto output is regulated and used to charge the battery that supplies various peripherals, while conventionally the negative half was used for operating bulb-based exterior lamps. But with advancements in LED-based headlamps, which are driven by the battery, this negative half pulse has remained unused in most vehicles. Our system uses the negative half-wave rectified DC output from the RR to provide constant light output at all RPMs of the vehicle. With the negative rectified DC output of the RR, we have the advantage of a pulsating DC input which periodically goes to zero, thus helping us to generate a constant DC output equivalent to the required LED load, and an additional active thermal bypass circuit helps us maintain the efficiency and limit the thermal rise as the RPM changes. The methodology uses the negative half-wave output of the RR along with a linear constant-current driver with significantly higher efficiency. Although the RR output has varying frequency and duty cycle at different engine RPMs, the driver is designed such that it provides constant current to the LEDs with minimal ripple. In LED headlamps, a DC-DC switching regulator is usually used, which is bulky. But with linear regulators, we eliminate bulky components and improve the form factor. Hence, this solution is both cost-efficient and compact. Presently, output ripple-free amplitude drivers with fewer components and less complexity are limited to lower-power LED lamps. The focus of current high-efficiency research is often on high-power LED applications. This paper presents a method of driving the LED load at both high beam and low beam using the negative half-wave rectified pulsating DC from the RR with minimum components, maintaining high efficiency within the thermal limitations. Linear regulators are significantly inefficient, with efficiencies typically about 40% and reaching as low as 14%, which leads to poor thermal performance. Although they don't require complex and bulky circuitry, powering high-power devices is difficult to realise with them. But with the input being negative half-wave rectified pulsating DC, this efficiency can be improved, as it helps us generate a constant DC output equivalent to the LED load, minimising the voltage drop across the linear regulator. Hence, losses are significantly reduced, and efficiency as high as 75% is achieved. With a change in RPM, the DC voltage increases, which can be managed by the active thermal bypass circuitry, resulting in better thermal performance. Hence, the use of bulky and expensive heat sinks can be avoided. The methodology thus utilizes the otherwise unused negative pulsating DC output of the RR to optimize the utilization of RR output power and provide a cost-efficient solution compared to costly DC-DC drivers.
Keywords: class D LED headlamp, regulator and rectifier, pulsating DC, low cost and highly efficient, LED driver module