Search results for: volumetric errors

887 Pattern of Refractive Error, Knowledge, Attitude and Practice about Eye Health among the Primary School Children in Bangladesh

Authors: Husain Rajib, K. S. Kishor, D. G. Jewel

Abstract:

Background: Uncorrected refractive error is a common cause of preventable visual impairment in the pediatric age group and can lead to blindness; early detection of visual impairment can reduce the problem, with benefits for education and greater involvement in social activities. Glasses are the cheapest and commonest form of correction of refractive errors. To achieve this, the patient must exhibit good compliance with spectacle wear. The patient's attitude and perception of glasses and eye health could affect compliance. Material and method: A prospective, community-based, cross-sectional study was designed to evaluate the knowledge, attitude and practices regarding refractive errors and eye health among primary school children. Result: Among 140 respondents, 72 were male and 68 were female. Fifty children were myopic (26 male, 24 female), 27 were hyperopic (14 male, 13 female) and 63 were astigmatic (32 male, 31 female). The level of knowledge and attitude was satisfactory; the cooperative attitude of students, teachers and parents facilitated cycloplegic refraction. Practice was not satisfactory, owing to social stigma and an information gap. Conclusion: Knowledge of refractive error and acceptance of glasses for the correction of uncorrected refractive error were generally satisfactory. Public awareness programs such as vision screening, eye camps and teacher training are beneficial for the prescribing and wearing of spectacles.

Keywords: refractive error, stigma, knowledge, attitude, practice

Procedia PDF Downloads 243
886 Pre-Implementation of Total Body Irradiation Using Volumetric Modulated Arc Therapy: Full Body Anthropomorphic Phantom Development

Authors: Susana Gonçalves, Joana Lencart, Anabela Gregório Dias

Abstract:

Introduction: In combination with chemotherapy, Total Body Irradiation (TBI) is most often used as part of the conditioning regimen prior to allogeneic hematopoietic stem cell transplantation. Conventional TBI techniques have long application times and non-conformal beam application, with no ability to individually spare organs at risk. Our institution intends to start using Volumetric Modulated Arc Therapy (VMAT) techniques to increase the homogeneity of the delivered radiation. As a first approach, a dosimetric plan was performed on a computed tomography (CT) scan of a Rando Alderson anthropomorphic phantom (head and torso), using a set of six arcs distributed along the phantom. However, a full body anthropomorphic phantom is essential to carry out technique validation and implementation. Our aim is to define the physical and chemical characteristics and the ideal manufacturing procedure of upper and lower limbs for our anthropomorphic phantom, in order to later validate TBI using VMAT. Materials and Methods: To study the best fit between our phantom and the limbs, a CT scan of the Rando Alderson anthropomorphic phantom was acquired. CT was performed on GE Healthcare equipment (model Optima CT580 W), with a slice thickness of 2.5 mm. This CT was also used to assess the electron density of soft tissue and bone through Hounsfield unit (HU) analysis. Results: CT images were analyzed and measurements were made for the ideal upper and lower limbs. Upper limbs should be built with a length of 43 cm and a diameter of 7 cm (next to the shoulder section); lower limbs should be built with a length of 79 cm and a diameter of 16.5 cm (next to the thigh section). As expected, soft tissue and bone have very different electron densities. This is important when choosing and analyzing different materials to better represent soft tissue and bone characteristics. The approximate HU values for soft tissue and bone should be 35 HU and 250 HU, respectively. Conclusion: At the moment, several compounds are being developed based on different types of resins and additives in order to control and mimic the various constituent densities of the tissues. Concurrently, several manufacturing techniques are being explored to make it possible to produce the upper and lower limbs in a simple and inexpensive way, so that a systematic and appropriate study of total body irradiation can finally be carried out. This preliminary study was a good starting point to demonstrate the feasibility of TBI with VMAT.

Keywords: TBI, VMAT, anthropomorphic phantom, tissue equivalent materials

Procedia PDF Downloads 58
885 Need for a National Newborn Screening Programme in India: Pilot Study Data

Authors: Sudheer Moorkoth, Leslie Edward Lewis, Pragna Rao

Abstract:

Newborn screening (NBS) is part of routine newborn care in many countries worldwide for the early detection of rare treatable conditions and inborn errors of metabolism (IEM). India has not started this program yet. In an attempt to understand the challenges in implementing a national newborn screening program in India, we initiated a pilot newborn screening project funded by the Government of Canada. Along with initiating newborn screening at Kasturba Hospital, Manipal, in South India for six disorders (Congenital Hypothyroidism (CH), Congenital Adrenal Hyperplasia (CAH), Galactosemia, Biotinidase deficiency, Glucose-6-Phosphate Dehydrogenase (G-6PD) deficiency and Phenylketonuria), we also studied the awareness of various stakeholders regarding newborn screening. In a period of nine months, from August 2017 to March 2018, we screened 1915 newborns (999 male and 916 female). Seven babies screened positive. This interim result points to an incidence rate of 1 in 270 children for these rare disorders collectively, comprising three confirmed cases of CH, two cases of G-6PD deficiency, and one case each of Galactosemia and CAH. A questionnaire-based study of awareness among various stakeholders revealed that there is little awareness among parents, adolescents and anganwadi workers (public health workers). The interim data point to the need for a national newborn screening programme in India. There is also an immediate need to undertake large-scale awareness programmes to create knowledge of NBS among the various stakeholders.

Keywords: awareness, inborn errors of metabolism (IEM), newborn screening, rare disease

Procedia PDF Downloads 221
884 Barriers and Opportunities for Implementing Electronic Prescription Software in Public Libyan Hospitals

Authors: Abdelbaset M. Elghriani, Abdelsalam M. Maatuk, Isam Denna, Amira Abdulla Werfalli

Abstract:

Electronic prescription software (e-prescribing) benefits patients and physicians by preventing handwriting errors and producing accurate prescriptions. E-prescribing allows prescriptions to be written and sent to pharmacies electronically instead of using handwritten notes. Significant factors that may affect the adoption of e-prescription systems include a lack of technical support, a lack of financial resources to operate the systems, and resistance to change among some clinicians, all of which have been identified as barriers to implementation. This study aims to explore the trends and opinions of physicians and pharmacists about e-prescriptions and to identify the obstacles to, and benefits of, the application of e-prescriptions in the health care system. A cross-sectional descriptive study was conducted at three Libyan public hospitals. Data were collected through a self-constructed questionnaire assessing opinions regarding potential constraining factors and benefits of implementing an e-prescribing system in hospitals. Data are presented as means, frequency distribution tables, cross-tabulations, and bar charts. Data analysis showed that technical, financial, and organizational obstacles are the most important barriers preventing the application of e-prescribing systems in Libyan hospitals. In addition, there was awareness of the benefits of e-prescribing, especially in reducing medication dispensing errors, and a desire among physicians and pharmacists to use electronic prescriptions.

Keywords: physicians, e-prescribing, health care system, pharmacists

Procedia PDF Downloads 104
883 Ghost Frequency Noise Reduction through Displacement Deviation Analysis

Authors: Paua Ketan, Bhagate Rajkumar, Adiga Ganesh, M. Kiran

Abstract:

Low gear noise is an important sound quality feature in modern passenger cars. Annoying gear noise from the gearbox is influenced by the gear design, gearbox shaft layout, manufacturing deviations in the components, assembly errors and the mounting arrangement of the complete gearbox. Geometrical deviations in the form of profile and lead errors are often present on the flanks of the inspected gears. Ghost frequencies of a gear are very challenging to identify in the standard gear measurement and analysis process due to the small wavelengths involved. In this paper, gear whine noise occurring at non-integral multiples of the gear mesh frequency of a passenger car gearbox is investigated and the root cause is identified using the displacement deviation analysis (DDA) method. The DDA method is applied to identify ghost frequency excitations on the flanks of gears arising from generation grinding. The frequency identified through DDA correlated with the frequency of vibration and noise in end-of-line machine as well as vehicle-level measurements. With the application of the DDA method along with standard lead and profile measurement, gears with ghost frequency geometry deviations were identified on the production line to eliminate defective parts and thereby eliminate ghost frequency noise from the vehicle. Further, displacement deviation analysis can be used in conjunction with manufacturing process simulation to arrive at suitable countermeasures for arresting the ghost frequency.

Keywords: displacement deviation analysis, gear whine, ghost frequency, sound quality

Procedia PDF Downloads 117
882 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks

Authors: Andrew N. Saylor, James R. Peters

Abstract:

Scoliosis is a complex 3D deformity of the thoracic and lumbar spine, clinically diagnosed by measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle made by the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered using X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects, divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. In order to normalize the input data, each image was resized using bi-linear interpolation to a size of 500 × 187 pixels, and the pixel intensities were scaled to be between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python Version 3.7.3 and TensorFlow 1.13.1. The activation functions (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The network that performed best used ReLU neurons, three hidden layers, and 100 neurons per layer. The average mean squared error of this network was 222.28 ± 30 degrees², and the average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using tanh neurons, one hidden layer, and 10 neurons per layer performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. From the results of this study, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects.
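
As a rough illustration of the type of network described above (not the authors' TensorFlow 1.13 implementation), the following Python sketch builds a fully connected regression network with the best-performing configuration reported: three hidden layers of 100 ReLU neurons, mean squared error loss, batch size 10, learning rate 0.01, and early stopping. It uses the modern Keras API, and the training arrays (x_train, y_train, x_val, y_val) are hypothetical placeholders for flattened, 0-1 scaled X-ray images and labelled Cobb angles.

```python
# Hedged sketch: dense ANN for Cobb-angle regression (assumptions noted above).
import tensorflow as tf

IMG_H, IMG_W = 500, 187          # resized coronal X-ray dimensions (per the abstract)

def build_model(hidden_layers=3, neurons=100):
    """Fully connected regression network: flattened image in, Cobb angle (degrees) out."""
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(IMG_H * IMG_W,)))
    for _ in range(hidden_layers):
        model.add(tf.keras.layers.Dense(neurons, activation="relu"))
    model.add(tf.keras.layers.Dense(1))  # single continuous output
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
                  loss="mse", metrics=["mae"])
    return model

model = build_model()
early_stop = tf.keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)
# x_train, y_train, x_val, y_val are hypothetical arrays of flattened, 0-1 scaled images
# and labelled Cobb angles (e.g. from the SpineWeb database).
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           batch_size=10, epochs=500, callbacks=[early_stop])
```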

Keywords: scoliosis, artificial neural networks, cobb angle, medical imaging

Procedia PDF Downloads 103
881 Single-Crystal Kerfless 2D Array Transducer for Volumetric Medical Imaging: Theoretical Study

Authors: Jurij Tasinkiewicz

Abstract:

The aim of this work is to present a theoretical analysis of a 2D ultrasound transducer comprised of crossed arrays of metal strips placed on both sides of a thin piezoelectric layer. Such a structure is capable of electronic beam-steering of the generated wave beam in both elevation and azimuth. In this paper, a semi-analytical model of the considered transducer is developed. It is based on a generalization of the well-known BIS-expansion method. Specifically, applying the electrostatic approximation, the electric field components on the surface of the layer are expanded into fast-converging series of double periodic spatial harmonics with corresponding amplitudes represented by properly chosen Legendre polynomials. The problem is reduced to the numerical solution of a certain system of linear equations for the unknown expansion coefficients.

Keywords: beamforming, transducer array, BIS-expansion, piezoelectric layer

Procedia PDF Downloads 404
880 A Comparative Study of Various Control Methods for Rendezvous of a Satellite Couple

Authors: Hasan Basaran, Emre Unal

Abstract:

Formation flying of satellites is a mission that involves relative position keeping of different satellites in a constellation. In this study, different control algorithms are compared with one another in terms of velocity increment (ΔV) and tracking error. Various control methods, covering continuous and impulsive approaches, are implemented and tested for satellites flying in low Earth orbit. Feedback linearization, sliding mode control, and model predictive control are designed and compared with an impulsive feedback law based on mean orbital elements. The feedback linearization and sliding mode control approaches use identical mathematical models that include second-order Earth oblateness effects. The model predictive control, on the other hand, does not include any perturbations and assumes a circular chief orbit. The comparison is carried out for four different initial errors and evaluated in terms of velocity increment, root mean square error, maximum steady-state error, and settling time. It was observed that the impulsive law consumed the least ΔV while producing the highest maximum steady-state error. The continuous control laws, however, consumed higher velocity increments and produced lower tracking errors. Finally, an inversely proportional relationship between tracking error and velocity increment was established.
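
For readers unfamiliar with how such comparisons are set up, the sketch below simulates relative motion with the Clohessy-Wiltshire equations for a circular chief orbit (no oblateness or other perturbations, unlike the models above) under a simple continuous PD feedback law, and accumulates the velocity increment and RMS tracking error used as comparison metrics. The orbit, gains, and initial errors are illustrative, not the study's values.

```python
# Hedged sketch: a continuous feedback law on Clohessy-Wiltshire relative dynamics,
# with Delta-V and RMS tracking error bookkeeping. All numbers are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

MU = 3.986004418e14            # Earth's gravitational parameter, m^3/s^2
A_CHIEF = 6_778_000.0          # chief semi-major axis (~400 km altitude), m
N = np.sqrt(MU / A_CHIEF**3)   # mean motion, rad/s
KP, KD = 1e-4, 2e-2            # illustrative PD gains

def control(state):
    """Simple PD law driving the deputy to the chief (the origin)."""
    r, v = state[:3], state[3:]
    return -KP * r - KD * v

def cw_dynamics(t, state):
    x, y, z, vx, vy, vz = state
    ux, uy, uz = control(state)
    ax = 3 * N**2 * x + 2 * N * vy + ux    # radial
    ay = -2 * N * vx + uy                  # along-track
    az = -(N**2) * z + uz                  # cross-track
    return [vx, vy, vz, ax, ay, az]

state0 = [500.0, -300.0, 200.0, 0.0, 0.0, 0.0]     # initial relative position error, m
sol = solve_ivp(cw_dynamics, (0.0, 2 * np.pi / N), state0, max_step=5.0)

u_hist = np.array([control(s) for s in sol.y.T])
dt = np.diff(sol.t)
delta_v = np.sum(np.linalg.norm(u_hist[:-1], axis=1) * dt)     # integral of |u| dt, m/s
rms_err = np.sqrt(np.mean(np.sum(sol.y[:3, :] ** 2, axis=0)))  # RMS position error, m
print(f"Delta-V = {delta_v:.2f} m/s, RMS tracking error = {rms_err:.1f} m")
```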

Keywords: chief-deputy satellites, feedback linearization, follower-leader satellites, formation flight, fuel consumption, model predictive control, rendezvous, sliding mode

Procedia PDF Downloads 79
879 Optical Variability of Faint Quasars

Authors: Kassa Endalamaw Rewnu

Abstract:

The variability properties of a quasar sample, spectroscopically complete to magnitude J = 22.0, are investigated on a time baseline of 2 years using three different photometric bands (U, J and F). The original sample was obtained using a combination of different selection criteria: colors, slitless spectroscopy and variability, based on a time baseline of 1 yr. The main goals of this work are two-fold: first, to derive the percentage of variable quasars on a relatively short time baseline; secondly, to search for new quasar candidates missed by the other selection criteria and thus to estimate the completeness of the spectroscopic sample. In order to achieve these goals, we have extracted all the candidate variable objects from a sample of about 1800 stellar or quasi-stellar objects with limiting magnitude J = 22.50 over an area of about 0.50 deg². We find that > 65% of all the objects selected as possible variables are either confirmed quasars or quasar candidates on the basis of their colors. This percentage increases even further if we exclude from our lists of variable candidates a number of objects equal to that expected on the basis of 'contamination' induced by our photometric errors. The percentage of variable quasars in the spectroscopic sample is also high, reaching about 50%. On the basis of these results, we can estimate that the incompleteness of the original spectroscopic sample is < 12%. We conclude that variability analysis of data with small photometric errors can be successfully used as an efficient and independent (or at least auxiliary) selection method in quasar surveys, even when the time baseline is relatively short. Finally, when corrected for the different intrinsic time lags corresponding to a fixed observed time baseline, our data do not show a statistically significant correlation between variability and either absolute luminosity or redshift.

Keywords: nuclear activity, galaxies, active quasars, variability

Procedia PDF Downloads 50
878 Reducing Diagnostic Error in Australian Emergency Departments Using a Behavioural Approach

Authors: Breanna Wright, Peter Bragge

Abstract:

Diagnostic error rates in healthcare are approximately 10% of cases. Diagnostic errors can cause patient harm due to inappropriate, inadequate or delayed treatment, and such errors contribute heavily to medical liability claims globally. Therefore, addressing diagnostic error is a high priority. In most cases, diagnostic errors are the result of faulty information synthesis rather than lack of knowledge. Specifically, the majority of diagnostic errors involve cognitive factors, and in particular, cognitive biases. Emergency Departments are an environment with heightened risk of diagnostic error due to time and resource pressures, a frequently chaotic environment, and patients arriving undifferentiated and with minimal context. This project aimed to develop a behavioural, evidence-informed intervention to reduce diagnostic error in Emergency Departments through co-design with emergency physicians, insurers, researchers, hospital managers, citizens and consumer representatives. The Forum Process was utilised to address this aim. This involves convening a small (4-6 member) expert panel to guide a focused literature and practice review; convening a 10-12 person citizens panel to gather the perspectives of laypeople, including those affected by misdiagnoses; and an 18-22 person structured stakeholder dialogue bringing together representatives of the aforementioned stakeholder groups. The process not only provides an in-depth analysis of the problem and associated behaviours, but brings together expertise and insight to facilitate identification of a behaviour change intervention. Informed by the literature and practice review, the Citizens Panel focused on eliciting the values and concerns of those affected or potentially affected by diagnostic error. Citizens were comfortable with diagnostic uncertainty if doctors were honest with them. They also emphasised the importance of open communication between doctors and patients and their families. Citizens expect more consistent standards across the state and better access for both patients and their doctors to patient health information, to avoid time-consuming re-taking of long patient histories and medication regimes when re-presenting at Emergency Departments and to reduce the risk of unintentional omissions. The structured Stakeholder Dialogue focused on identifying a feasible behavioural intervention to review diagnoses in Emergency Departments. This needed to consider the role of cognitive bias in medical decision-making; contextual factors (in Victoria, there is a legislated 4-hour maximum time between ED triage and discharge or hospital admission); resource availability; and the need to ensure the intervention could work in large metropolitan as well as small rural and regional ED settings across Victoria. The identified behavioural intervention will be piloted in approximately ten hospital EDs across Victoria, Australia. This presentation will detail the findings of all review and consultation activities, describe the behavioural intervention developed and present the results of the pilot trial.

Keywords: behavioural intervention, cognitive bias, decision-making, diagnostic error

Procedia PDF Downloads 105
877 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

Authors: M. A. Okezue, K. L. Clase, S. R. Byrn

Abstract:

The requirement to maintain data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human error. Quality control laboratories located in low-income economies may face barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of the 0.1 M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, different formulae were input into two spreadsheets to automate the calculations. Further checks were created within the automated system to ensure the validity of replicate analyses in the titrimetric procedures. Validations were conducted using five data sets of manually computed assay results, and the acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at 95% Confidence Interval) were obtained from Student's t-test evaluation of the mean values for manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and the principles of data integrity were enhanced by use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets. Human errors in calculations were minimized when procedures were automated in quality control laboratories. The assay procedure for the formulation was completed in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
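
The sketch below mirrors, in Python rather than a spreadsheet, the two automated calculation steps described above: EDTA standardization against a 1:1 primary standard and the complexometric zinc assay, plus a replicate-validity check. It uses generic 1:1 Zn-EDTA stoichiometry rather than the USP monograph text verbatim, and every numerical value is a hypothetical bench reading.

```python
# Hedged sketch of the calculations automated in the validated spreadsheets (illustrative only).
import statistics

ZN_ATOMIC_MASS = 65.38  # g/mol, elemental zinc

def edta_molarity(std_mass_g, std_molar_mass, titre_ml):
    """Standardization: EDTA molarity from titration of a 1:1 primary standard."""
    return (std_mass_g / std_molar_mass) / (titre_ml / 1000.0)

def zinc_per_tablet_mg(titre_ml, m_edta, sample_mass_g, avg_tablet_mass_g):
    """Assay: mg of elemental zinc per average tablet (1 mol EDTA binds 1 mol Zn)."""
    moles_zn = (titre_ml / 1000.0) * m_edta
    mg_zn_in_sample = moles_zn * ZN_ATOMIC_MASS * 1000.0
    return mg_zn_in_sample * (avg_tablet_mass_g / sample_mass_g)

def replicate_check(values, max_rsd_percent=2.0):
    """Validity check mirroring the spreadsheet's replicate test (illustrative RSD limit)."""
    rsd = 100.0 * statistics.stdev(values) / statistics.mean(values)
    return rsd <= max_rsd_percent, rsd

# Illustrative use with hypothetical bench values (e.g. a CaCO3 primary standard, 100.09 g/mol):
m_edta = edta_molarity(std_mass_g=0.20, std_molar_mass=100.09, titre_ml=19.85)
print(zinc_per_tablet_mg(titre_ml=10.2, m_edta=m_edta,
                         sample_mass_g=0.50, avg_tablet_mass_g=0.35))
print(replicate_check([10.2, 10.3, 10.1]))
```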

Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets

Procedia PDF Downloads 151
876 Analysis of the Level of Production Failures by Implementing New Assembly Line

Authors: Joanna Kochanska, Dagmara Gornicka, Anna Burduk

Abstract:

The article examines the process of implementing a new assembly line in a manufacturing enterprise in the household appliances industry. At the initial stages of the project, a decision was made that one of its foundations should be the concept of lean management; because of that, eliminating as many errors as possible in the first phases of the line's functioning was emphasized. During the start-up of the line, all production losses were identified and documented (from serious machine failures, through any unplanned downtime, to micro-stops and quality defects). Over the 6-week line start-up period, all errors resulting from problems in various areas were analyzed. These areas included, among others, production, logistics, quality, and organization. The aim of the work was to analyze the occurrence of production failures during the initial phase of starting up the line and to propose a method for determining their critical level during full operation. The repeatability of production losses in various areas and at different levels at such an early stage of implementation was examined using statistical process control methods. Based on a Pareto analysis, the weakest points were identified in order to focus improvement actions on them. The next step was to examine the effectiveness of the actions undertaken to reduce the level of recorded losses. Based on the obtained results, a method was proposed for determining the critical failure level in the studied areas. The developed coefficient can be used as an alarm in case of production imbalance caused by an increased failure level in production and production-support processes during standardized operation of the line.
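
As an illustration of the statistical process control and Pareto steps described above, the following sketch ranks loss categories by count with a cumulative percentage and computes c-chart control limits for daily failure counts; exceeding the upper limit plays the role of a critical-level alarm. The categories, counts, and limits are hypothetical and are not the enterprise's data or the authors' coefficient.

```python
# Hedged sketch: Pareto ranking of start-up losses and a c-chart limit for daily failure counts.
import numpy as np
import pandas as pd

losses = pd.Series({"micro-stops": 48, "quality defects": 31, "logistics delays": 17,
                    "machine failures": 9, "organizational issues": 5})

pareto = losses.sort_values(ascending=False).to_frame("count")
pareto["cum_percent"] = 100 * pareto["count"].cumsum() / pareto["count"].sum()
print(pareto)   # focus improvement on the categories covering roughly 80% of losses

# c-chart for daily failure counts during the start-up period (hypothetical data)
daily_failures = np.array([7, 9, 5, 12, 8, 6, 10, 11, 4, 9, 7, 8])
c_bar = daily_failures.mean()
ucl = c_bar + 3 * np.sqrt(c_bar)            # upper control limit
lcl = max(0.0, c_bar - 3 * np.sqrt(c_bar))  # lower control limit, floored at zero
alarm_days = np.where(daily_failures > ucl)[0]   # days exceeding the critical level
print(f"c-bar = {c_bar:.1f}, UCL = {ucl:.1f}, LCL = {lcl:.1f}, alarms on days {alarm_days}")
```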

Keywords: production failures, level of production losses, new production line implementation, assembly line, statistical process control

Procedia PDF Downloads 109
875 InSAR Times-Series Phase Unwrapping for Urban Areas

Authors: Hui Luo, Zhenhong Li, Zhen Dong

Abstract:

The analysis of multi-temporal InSAR (MTInSAR), such as persistent scatterer (PS) and small baseline subset (SBAS) techniques, usually relies on temporal/spatial phase unwrapping (PU). Unfortunately, PU often fails for two reasons: 1) spatial phase jumps between adjacent pixels larger than π, as in layover and highly discontinuous terrain; 2) temporal phase discontinuities, such as time-varying atmospheric delay. To overcome these limitations, a least-squares-based PU method is introduced in this paper, which incorporates baseline-combination interferograms and an adjacent phase gradient network. Firstly, permanent scatterers (PS) are selected for study. Starting with the linear baseline-combination method, we obtain equivalent 'small baseline interferograms' to limit the spatial phase difference. Then, phase differencing is performed between connected PSs (connected according to a specific networking rule) to suppress spatially correlated phase errors such as atmospheric artifacts. After that, the phase differences along arcs are estimated by the least-squares method and passed through an outlier detector to remove arcs with phase ambiguities. The unwrapped phase is then obtained by spatial integration. The proposed method is tested on real TerraSAR-X data, and the results are compared with those obtained by StaMPS (a software package with 3D PU capabilities). The comparison shows that the proposed method can successfully unwrap the interferograms in urban areas even when high discontinuities exist, while StaMPS fails. Finally, precise DEM errors can be obtained from the unwrapped interferograms.
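
The core estimation step, least-squares integration of phase differences along arcs of a PS network, can be sketched generically as below. This is not the authors' full pipeline (no baseline combination, atmospheric filtering, or outlier rejection); the network and phases are synthetic, and large residuals would be the signal used to discard ambiguous arcs.

```python
# Hedged sketch: least-squares integration of arc-wise phase differences over a tiny PS network.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

def wrap(phi):
    return np.angle(np.exp(1j * phi))

n_ps = 5
arcs = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (1, 3)]       # adjacent-PS network
true_phase = np.array([0.0, 0.8, 2.1, 3.5, 5.2])               # unknown unwrapped phases (synthetic)
obs = np.array([wrap(true_phase[j] - true_phase[i]) for i, j in arcs])  # wrapped arc differences

# Sparse incidence (design) matrix A such that A @ phase approximates the arc observations.
rows = np.repeat(np.arange(len(arcs)), 2)
cols = np.array([idx for arc in arcs for idx in arc])
vals = np.tile([-1.0, 1.0], len(arcs))
A = coo_matrix((vals, (rows, cols)), shape=(len(arcs), n_ps)).tocsr()

# Hold PS 0 fixed as the reference (datum) and solve the remaining phases in least squares.
phase_est = np.zeros(n_ps)
phase_est[1:] = lsqr(A[:, 1:], obs)[0]

residuals = A @ phase_est - obs   # large residuals would flag arcs with phase ambiguities
print(phase_est, np.abs(residuals).max())
```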

Keywords: phase unwrapping, time series, InSAR, urban areas

Procedia PDF Downloads 126
874 High Productivity Fed-Batch Process for Biosurfactant Production for Enhanced Oil Recovery Applications

Authors: G. A. Amin, A. D. Al-Talhi

Abstract:

The bacterium B. subtilis produced surfactin in conventional batch culture as a growth-associated product at a growth rate of 0.4 h⁻¹. A fed-batch process was developed in which the fermentative substrate and other nutrients were fed on an hourly basis according to the growth rate of the bacterium. Conversion of different quantities of Maldex-15 into surfactin was investigated in five different fermentation runs. In all runs, most of the Maldex-15 was consumed and converted into surfactin and cell biomass with appreciable efficiencies. The best results were obtained with the fermentation run supplied with 200 g Maldex-15. Up to 35.4 g·L⁻¹ of surfactin and a cell biomass of 30.2 g·L⁻¹ were achieved in 12 h. Also, a marked substrate yield of 0.269 g/g and a volumetric reactor productivity of 2.61 g·L⁻¹·h⁻¹ were obtained, confirming the establishment of a cost-effective commercial surfactin production process.

Keywords: Bacillus subtilis, biosurfactant, exponentially fed-batch fermentation, surfactin

Procedia PDF Downloads 507
873 Analysis of Solar Thermal Power Plant in Algeria

Authors: M. Laissaoui

Abstract:

The objective of the present work is the simulation of a hybrid solar combined-cycle power plant and its comparison with a conventional combined cycle (gas turbine and steam turbine). This type of power plant includes a solar tower (heliostat field and volumetric receiver) that supplies part of the thermal energy necessary for the operation of the gas turbine. The solar energy is used to preheat the combustion air of the gas turbine after it leaves the compressor and before it enters the combustion chamber. The plant was simulated for three different zones: Hassi R'mel, Bechar, and Messaad in the wilaya of El Djelfa. The radiometric and meteorological data are taken directly from the Meteonorm 7 software, and the simulation of the energy performance is carried out with the TRNSYS 16.1 software.

Keywords: concentrating solar power, heliostat, thermal, Algeria

Procedia PDF Downloads 447
872 A Review on Light Shafts Rendering for Indoor Scenes

Authors: Hatam H. Ali, Mohd Shahrizal Sunar, Hoshang Kolivand, Mohd Azhar Bin M. Arsad

Abstract:

Rendering light shafts is one of the important topics in computer gaming and interactive applications. The methods and models used to generate light shafts play a crucial role in making a scene more realistic in computer graphics. This article discusses image-based and geometry-based shadow methods that contribute to generating volumetric shadows and light shafts, based on ray tracing, radiosity, and ray marching techniques. The main aim of this study is to provide researchers with background on the progress of light scattering methods so that they can determine the technique best suited to their goals. It is also hoped that our classification helps researchers find solutions to the shortcomings of each method.

Keywords: shaft of lights, realistic images, image-based, and geometric-based

Procedia PDF Downloads 255
871 Volumetric Properties of Binary Mixtures of Glycerol +1-Butanol or +2-Butanol at Several Temperatures

Authors: Y. Chabouni, F. Amireche

Abstract:

Densities of glycerol + 1-butanol or 2-butanol mixtures were measured over the temperature range 293.15 to 303.15 K at atmospheric pressure, over the entire composition range, with a vibrating tube densimeter. Excess molar volumes, apparent and partial molar volumes of glycerol and butanol, thermal isobaric expansivities of the mixture and partial molar expansivities of the components were calculated. The excess molar volumes of the mixtures are negative at all temperatures, and deviations from ideality increase with increasing temperature. Excess molar volumes were fitted to the Redlich–Kister equation. Partial molar volumes of glycerol decrease with increasing butanol concentration.
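
A minimal sketch of the calculation and fitting workflow described above is given below: excess molar volumes are computed from mixture densities and fitted to a Redlich-Kister polynomial. The molar masses and pure-component densities are approximate literature-style values, and the mixture densities are placeholders rather than the measured data.

```python
# Hedged sketch: excess molar volume from densities and a Redlich-Kister fit (placeholder data).
import numpy as np
from scipy.optimize import curve_fit

M1, M2 = 92.09, 74.12           # g/mol: glycerol (1), 1-butanol (2)
rho1, rho2 = 1.258, 0.806       # g/cm^3: approximate pure-component densities near 298 K

x1 = np.array([0.1, 0.3, 0.5, 0.7, 0.9])             # mole fraction of glycerol
rho_mix = np.array([0.86, 0.97, 1.06, 1.14, 1.21])   # hypothetical mixture densities, g/cm^3

x2 = 1.0 - x1
Vm = (x1 * M1 + x2 * M2) / rho_mix                    # molar volume of the mixture, cm^3/mol
V_E = Vm - (x1 * M1 / rho1 + x2 * M2 / rho2)          # excess molar volume, cm^3/mol

def redlich_kister(x1, A0, A1, A2):
    """V_E = x1*x2 * sum_k A_k (x1 - x2)^k, truncated at three coefficients."""
    x2 = 1.0 - x1
    return x1 * x2 * (A0 + A1 * (x1 - x2) + A2 * (x1 - x2) ** 2)

coeffs, _ = curve_fit(redlich_kister, x1, V_E)
print("Redlich-Kister coefficients:", coeffs)
```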

Keywords: 1-Butanol, 2-Butanol, density, excess molar volume, glycerol, partial molar property, thermal isobaric expansivities

Procedia PDF Downloads 169
870 Prediction of Unsaturated Permeability Functions for Clayey Soil

Authors: F. Louati, H. Trabelsi, M. Jamei

Abstract:

Desiccation cracks develop following drainage-humidification cycles. With water loss, mainly due to evaporation, suction in the soil increases, producing volumetric shrinkage and tensile stress. When the tensile stress reaches the tensile strength, the soil cracks. Desiccation crack networks can directly control soil hydraulic properties. The aim of this study was to quantify the hydraulic properties (for example, the water retention curve, the saturated hydraulic conductivity, the unsaturated hydraulic conductivity function, and the shrinkage dynamics) of Tibar soil, a clay soil from northern Tunisia. A numerical simulation of the unsaturated hydraulic properties of a crack network was then attempted. The finite element code CODE_BRIGHT can be used to follow the hydraulic distribution in cracked porous media.
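
The abstract does not state which retention and conductivity model was fitted; a common choice in such work is the van Genuchten-Mualem formulation, sketched below with generic clay-like placeholder parameters.

```python
# Hedged sketch: van Genuchten retention curve and Mualem unsaturated conductivity function.
import numpy as np

THETA_R, THETA_S = 0.10, 0.45    # residual / saturated volumetric water content (placeholders)
ALPHA, N_VG = 0.8, 1.25          # van Genuchten parameters (1/m, -), clay-like placeholders
KS = 1e-7                        # saturated hydraulic conductivity, m/s
M_VG = 1.0 - 1.0 / N_VG

def effective_saturation(h):
    """Se(h) = [1 + (alpha*|h|)^n]^(-m) for suction head h (m)."""
    return (1.0 + (ALPHA * np.abs(h)) ** N_VG) ** (-M_VG)

def water_content(h):
    return THETA_R + (THETA_S - THETA_R) * effective_saturation(h)

def hydraulic_conductivity(h):
    """Mualem model: K = Ks * Se^0.5 * [1 - (1 - Se^(1/m))^m]^2."""
    se = effective_saturation(h)
    return KS * np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / M_VG)) ** M_VG) ** 2

suctions = np.array([0.1, 1.0, 10.0, 100.0])   # m of suction head
print(water_content(suctions), hydraulic_conductivity(suctions))
```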

Keywords: desiccation, cracks, permeability, unsaturated hydraulic flow, simulation

Procedia PDF Downloads 274
869 Stability of a Biofilm Reactor Able to Degrade a Mixture of the Organochlorine Herbicides Atrazine, Simazine, Diuron and 2,4-Dichlorophenoxyacetic Acid to Changes in the Composition of the Supply Medium

Authors: I. Nava-Arenas, N. Ruiz-Ordaz, C. J. Galindez-Mayer, M. L. Luna-Guido, S. L. Ruiz-López, A. Cabrera-Orozco, D. Nava-Arenas

Abstract:

Among the most important herbicides, the organochlorine compounds are of considerable interest due to their recalcitrance to chemical, biological, and photolytic degradation, their persistence in the environment, their mobility, and their bioaccumulation. The most widely used herbicides in North America are primarily 2,4-dichlorophenoxyacetic acid (2,4-D), the triazines (atrazine and simazine), and to a lesser extent diuron. The contamination of soils and water bodies frequently occurs through mixtures of these xenobiotics. For this reason, in this work, the operational stability of an aerobic biofilm reactor to changes in the composition of the supplied medium was studied. The reactor was packed with fragments of volcanic rock that retained a complex microbial film able to degrade a mixture of the organochlorine herbicides atrazine, simazine, diuron and 2,4-D, and whose members have microbial genes encoding the main catabolic enzymes atzABCD, tfdACD and puhB. To acclimate the attached microbial community, the biofilm reactor was fed continuously with a mineral minimal medium containing the herbicides (in mg·L⁻¹): diuron, 20.4; atrazine, 14.2; simazine, 11.4; and 2,4-D, 59.7, as carbon and nitrogen sources. Throughout the bioprocess, removal efficiencies of 92-100% for herbicides, 78-90% for COD, 92-96% for TOC and 61-83% for dehalogenation were reached. In the microbial community, the genes encoding the catabolic enzymes of the different herbicides, tfdACD, puhB and, occasionally, the genes atzA and atzC, were detected. After the acclimatization, the triazine herbicides were eliminated from the mixture formulation. Volumetric loading rates of the 2,4-D and diuron mixture were continuously supplied to the reactor (1.9-21.5 mg herbicides·L⁻¹·h⁻¹). Along the bioprocess, the removal efficiencies obtained were 86-100% for the mixture of herbicides, 63-94% for COD, 90-100% for TOC, and dehalogenation values of 63-100%. It was also observed that the genes encoding the enzymes involved in the catabolism of both herbicides, tfdACD and puhB, were consistently detected and, occasionally, atzA and atzC. Subsequently, the triazine herbicides atrazine and simazine were restored to the medium supply. Different volumetric loading rates of this mixture were continuously fed to the reactor (2.9 to 12.6 mg herbicides·L⁻¹·h⁻¹). During this new treatment process, removal efficiencies of 65-95% for the mixture of herbicides, 63-92% for COD, 66-89% for TOC and 73-94% for dehalogenation were observed. In this last case, the genes tfdACD, puhB and atzABC, encoding the enzymes involved in the catabolism of the distinct herbicides, were consistently detected. The atzD gene, encoding the cyanuric acid hydrolase enzyme, could not be detected, though it was determined that partial degradation of cyanuric acid occurred. In general, the community in the biofilm reactor showed catabolic stability, adapting to changes in the loading rates and composition of the herbicide mixture and preserving its ability to degrade the four herbicides tested, although there was a significant delay in the time needed to recover degradation of the herbicides.

Keywords: biodegradation, biofilm reactor, microbial community, organochlorine herbicides

Procedia PDF Downloads 412
868 Estimating Evapotranspiration Irrigated Maize in Brazil Using a Hybrid Modelling Approach and Satellite Image Inputs

Authors: Ivo Zution Goncalves, Christopher M. U. Neale, Hiran Medeiros, Everardo Mantovani, Natalia Souza

Abstract:

Multispectral and thermal infrared imagery from satellite sensors, coupled with climate and soil datasets, were used to estimate evapotranspiration and biomass in center pivots planted to maize in Brazil during the 2016 season. The hybrid remote-sensing-based model named Spatial EvapoTranspiration Modelling Interface (SETMI) was applied using multispectral and thermal infrared imagery from the Landsat Thematic Mapper instrument. Field data collected by the IRRIGER center pivot management company included daily weather information, such as maximum and minimum temperature, precipitation and relative humidity, for estimating reference evapotranspiration. In addition, soil water content data were obtained every 0.20 m in the soil profile down to 0.60 m depth throughout the season. Early-season soil samples were used to obtain water-holding capacity, wilting point, saturated hydraulic conductivity, initial volumetric soil water content, layer thickness, and saturated volumetric water content. Crop canopy development parameters and irrigation application depths were also model inputs. The modeling approach is based on the reflectance-based crop coefficient approach contained within the SETMI hybrid ET model, using relationships developed in Nebraska. The model was applied to several fields located in Minas Gerais State, Brazil, at approximate latitude -16.630434 and longitude -47.192876. The model provides estimates of actual crop evapotranspiration (ET), crop irrigation requirements and all soil water balance outputs, including biomass estimation, using multi-temporal satellite image inputs. An interpolation scheme based on the growing degree-day concept was used to model the periods between satellite inputs, filling the gaps between image dates and obtaining daily data. Actual and accumulated ET, accumulated cold temperature and water stress, and crop water requirements estimated by the model were compared with data measured at the experimental fields. The results indicate that the SETMI modeling approach using data assimilation showed reliable daily ET and crop water requirements for maize, interpolated between remote sensing observations, confirming the applicability of the SETMI model, with the new relationships developed in Nebraska, for estimating ET and water requirements in Brazil under tropical conditions.
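
As a simplified illustration of the gap-filling step described above, the sketch below computes daily growing degree-days (GDD), interpolates an image-date basal crop coefficient (Kcb) on cumulative GDD, and applies ETc = Kcb × ETo. The Kcb values, temperatures, and reference ET are synthetic; the actual SETMI reflectance-based Kcb relationships, soil water balance, and stress terms are omitted.

```python
# Hedged sketch: GDD-based interpolation of a basal crop coefficient between image dates.
import numpy as np

T_BASE = 10.0   # maize base temperature, deg C (common assumption)
rng = np.random.default_rng(3)

def daily_gdd(tmax, tmin, t_base=T_BASE):
    return np.maximum(0.0, (np.asarray(tmax) + np.asarray(tmin)) / 2.0 - t_base)

# Hypothetical season of 120 days: daily temperatures and reference ET (mm/day)
tmax = rng.uniform(26, 34, 120)
tmin = rng.uniform(14, 20, 120)
eto = rng.uniform(3.5, 6.5, 120)

cum_gdd = np.cumsum(daily_gdd(tmax, tmin))

# Kcb estimated on image dates (e.g. from a vegetation index) and interpolated on cumulative GDD;
# values outside the image-date range are clamped to the first/last Kcb.
image_days = np.array([10, 40, 70, 100])
kcb_on_images = np.array([0.20, 0.65, 1.10, 0.75])
kcb_daily = np.interp(cum_gdd, cum_gdd[image_days], kcb_on_images)

etc_daily = kcb_daily * eto            # crop ET, ignoring soil evaporation and water stress
print("Seasonal ETc (mm):", round(etc_daily.sum(), 1))
```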

Keywords: basal crop coefficient, irrigation, remote sensing, SETMI

Procedia PDF Downloads 123
867 Identifying Chaotic Architecture: Origins of Nonlinear Design Theory

Authors: Mohammadsadegh Zanganehfar

Abstract:

Since the modernism movement and the appearance of modern architecture, an aggressive desire for a general design theory has emerged in the theoretical works of architects, in the form of books and essays. Since Robert Venturi and Denise Scott Brown published Complexity and Contradiction in Architecture in 1966, the discourse of complexity and volumetric composition has been an important and controversial issue in the discipline. Various theories and essays have since been involved in this discourse. This paper attempts to identify chaos theory as a scientific model of complexity and its relation to architectural design theory by conducting a qualitative analysis with a multidisciplinary critical approach through architecture and basic sciences resources. As a result, we identify chaotic architecture as the correlation of chaos theory and architecture, an independent nonlinear design theory with specific characteristics and properties.

Keywords: architecture complexity, chaos theory, fractals, nonlinear dynamic systems, nonlinear ontology

Procedia PDF Downloads 349
866 An Adaptive Controller Method Based on Full-State Linear Model of Variable Cycle Engine

Authors: Jia Li, Huacong Li, Xiaobao Han

Abstract:

Because of the larger number of variable geometry parameters of the variable cycle aircraft engine (VCE), this paper presents an adaptive controller method based on a full-state linear model of the VCE and uses simulation to solve the multivariable controller design problem over the whole flight envelope. First, the static and dynamic behaviour of the bypass ratio and other state parameters caused by the variable geometric components is analyzed, and a nonlinear component-level model of the VCE is developed. Then, based on the component model, small-deviation linearization with respect to the main fuel flow (Wf), the tail nozzle throat area (A8) and the rear bypass ejector angle (A163) is used to set up multiple linear models in which the variable geometric parameters can be inputs. Second, adaptive controllers are designed for the VCE linear models at different nominal points; considering modeling uncertainties and external disturbances, the adaptive law is derived via a Lyapunov function. The simulation results showed that the adaptive controller method based on the full-state linear model, using the rear bypass ejector angle as an input, effectively solved the multivariable control problem of the VCE. The performance at all nominal points could track the desired closed-loop reference instructions. The settling time was less than 1.2 s and the system overshoot was less than 1%; at the same time, the steady-state errors were less than 0.5% and the dynamic tracking errors were less than 1%. In addition, the designed controller effectively suppressed disturbances and reached the desired commands under different external random noise signals.

Keywords: variable cycle engine (VCE), full-state linear model, adaptive control, by-pass ratio

Procedia PDF Downloads 294
865 Comparison between Some of Robust Regression Methods with OLS Method with Application

Authors: Sizar Abed Mohammed, Zahraa Ghazi Sadeeq

Abstract:

The classical least squares (OLS) method is used to estimate linear regression parameters when its assumptions are satisfied, in which case it has good properties such as unbiasedness, minimum variance and consistency. Alternative statistical techniques have been developed to estimate the parameters when the data are contaminated with outliers; these are robust (or resistant) methods. In this paper, three robust methods are studied: the maximum-likelihood-type estimator (M-estimator), the modified maximum-likelihood-type estimator (MM-estimator) and the least trimmed squares estimator (LTS-estimator), and their results are compared with the OLS method. These methods were applied to real data taken from the Duhok company for manufacturing furniture, and the results were compared using three criteria: mean squared error (MSE), mean absolute percentage error (MAPE) and mean sum of absolute errors (MSAE). Important conclusions of this study are as follows. In the furniture line, the number of typical values detected by the four methods was similar and very close to the data, which indicates that the distribution of the standard errors is close to normal; in the doors line data, however, the number of typical values detected using OLS was smaller than that detected by the robust methods, which means that the distribution of the standard errors departs far from normality. Another important conclusion is that, for the doors line, the parameter values estimated using OLS are very far from those estimated using the robust methods; the LTS estimator gave better results under the MSE criterion, the M-estimator gave better results under the MAPE criterion, and, under the MSAE criterion, the MM-estimator was better. The programs S-PLUS (version 8.0, Professional 2007), Minitab (version 13.2) and SPSS (version 17) were used to analyze the data.
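
A minimal sketch of this kind of comparison is shown below, contrasting OLS with a Huber M-estimator on synthetic outlier-contaminated data and scoring both with MSE and MAPE. The MM- and LTS-estimators used in the paper are not included (they are not part of statsmodels), and the data are synthetic rather than the furniture-company data.

```python
# Hedged sketch: OLS vs. a Huber M-estimator on data with injected outliers.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 60)
y = 2.0 + 1.5 * x + rng.normal(0, 1.0, 60)
y[:6] += 25.0                                   # inject a few gross outliers

X = sm.add_constant(x)
ols_fit = sm.OLS(y, X).fit()
m_fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()   # M-estimation (Huber)

def mse(resid):
    return np.mean(resid ** 2)

def mape(y_true, y_pred):
    return 100 * np.mean(np.abs((y_true - y_pred) / y_true))

for name, fit in [("OLS", ols_fit), ("M-estimator", m_fit)]:
    pred = fit.predict(X)
    print(f"{name}: params={np.round(fit.params, 3)}, "
          f"MSE={mse(y - pred):.2f}, MAPE={mape(y, pred):.1f}%")
```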

Keywords: robust, LTS, M-estimator, MSE

Procedia PDF Downloads 216
864 Nonlinear Estimation Model for Rail Track Deterioration

Authors: M. Karimpour, L. Hitihamillage, N. Elkhoury, S. Moridpour, R. Hesami

Abstract:

Rail transport authorities around the world have been facing a significant challenge in predicting rail infrastructure maintenance work over long periods of time. Generally, maintenance monitoring and prediction are conducted manually. With economic restrictions, rail transport authorities are in pursuit of improved modern methods that can provide precise predictions of rail maintenance times and locations. The expectation from such a method is to develop models that minimize the human error strongly associated with manual prediction. Such models will help them understand how track degradation occurs over time under changes in different conditions (e.g. rail load, rail type, rail profile). They need a well-structured technique to identify the precise time at which rail tracks fail in order to minimize the maintenance cost and time and to keep vehicles safe. The rail track characteristics that have been collected over the years will be used in developing rail track degradation prediction models. Since these data have been collected in large volumes, both electronically and manually, some errors are possible. Sometimes these errors make it impossible to use the data in prediction model development; this is one of the major drawbacks in rail track degradation prediction. An accurate model can play a key role in the estimation of the long-term behavior of rail tracks. Accurate models increase track safety and decrease the cost of maintenance in the long term. In this research, a short review of rail track degradation prediction models is given before estimating rail track degradation for the curve sections of the Melbourne tram track system using an Adaptive Network-based Fuzzy Inference System (ANFIS) model.

Keywords: ANFIS, MGT, prediction modeling, rail track degradation

Procedia PDF Downloads 296
863 Influence of Synergistic Modification with Tung Oil and Heat Treatment on Physicochemical Properties of Wood

Authors: Luxi He, Tianfang Zhang, Zhengbin He, Songlin Yi

Abstract:

Heat treatment has been widely recognized for its effectiveness in enhancing the physicochemical properties of wood, including hygroscopicity and dimensional stability. Nonetheless, the non-negligible volumetric shrinkage and loss of mechanical strength resulting from heat treatment may diminish the wood recovery and its product value. In this study, tung oil was used to alleviate heat-induced shrinkage and reduction in mechanical properties of wood during heat treatment. Tung oil was chosen as a modifier because it is a traditional Chinese plant oil that has been widely used for over a thousand years to protect wooden furniture and buildings due to its biodegradable and non-toxic properties. The effects of different heating media (air, tung oil) and their effective treatment parameters (temperature, duration) on the changes in the physical properties (morphological characteristics, pore structures, micromechanical properties), and chemical properties (chemical structures, chemical composition) of wood were investigated by using scanning electron microscopy, confocal laser scanning microscopy, atomic force microscopy, X-ray photoelectron spectroscopy, and dynamic vapor sorption. Meanwhile, the correlation between the mass changes and the color change, volumetric shrinkage, and hygroscopicity was also investigated. The results showed that the thermal degradation of wood cell wall components was the most important factor contributing to the changes in heat-induced shrinkage, color, and moisture adsorption of wood. In air-heat-treated wood samples, there was a significant correlation between mass change and heat-induced shrinkage, brightness, and moisture adsorption. However, the presence of impregnated tung oil in oil-heat-treated wood appears to disrupt these correlations among physical properties. The results of micromechanical properties demonstrated a significant decrease in elastic modulus following high-temperature heat treatment, which was mitigated by tung oil treatment. Chemical structure and compositional analyses indicated that the changes in chemical structure primarily stem from the degradation of hemicellulose and cellulose, and the presence of tung oil created an oxygen-insulating environment that slowed down this degradation process. Morphological observation results showed that tung oil permeated the wood structure and penetrated the cell walls through transportation channels, altering the micro-morphology of the cell wall surface, obstructing primary water passages (e.g., vessels and pits), and impeding the release of volatile degradation products as well as the infiltration and diffusion of water. In summary, tung oil treatment represents an environmentally friendly and efficient method for maximizing wood recovery and increasing product value. This approach holds significant potential for industrial applications in wood heat treatment.

Keywords: tung oil, heat treatment, physicochemical properties, wood cell walls

Procedia PDF Downloads 46
862 Numerical Investigation of Flow Behaviour Across a Trapezoidal Bluff Body at Low Reynolds Number

Authors: Zaaraoui Abdelkader, Kerfah Rabeh, Noura Belkheir, Matene Elhacene

Abstract:

The trapezoidal bluff body is a typical configuration of vortex shedding bodies. The aim of this work is to study flow behaviour over a trapezoidal cylinder at low Reynolds number. The geometry was constructed from a prototype device for measuring the volumetric flow rate by counting vortices. Simulations were run for this geometry under steady and unsteady flow conditions using finite volume discretization. Laminar flow was investigated in this model with rigid walls and a homogeneous incompressible Newtonian fluid. Calculations were performed for the Reynolds number range 5 ≤ Re ≤ 180, and several flow parameters were documented. The present computations are in good agreement with the experimental observations and the numerical calculations of several investigators.

Keywords: bluff body, confined flow, numerical calculations, steady and unsteady flow, vortex shedding flow meter

Procedia PDF Downloads 260
861 FT-NIR Method to Determine Moisture in Gluten Free Rice-Based Pasta during Drying

Authors: Navneet Singh Deora, Aastha Deswal, H. N. Mishra

Abstract:

Pasta is one of the most widely consumed food products around the world. Rapid determination of the moisture content in pasta will assist food processors in providing online quality control of pasta during large-scale production. A rapid Fourier transform near-infrared (FT-NIR) method was developed for determining the moisture content in pasta. A calibration set of 150 samples, a validation set of 30 samples and a prediction set of 25 samples of pasta were used. The diffuse reflection spectra of different types of pasta were measured with an FT-NIR analyzer in the 4,000-12,000 cm⁻¹ spectral range. Calibration and validation sets were designed for the development of the method and the evaluation of its adequacy in the moisture content range of 10 to 15 percent (wet basis). The prediction models, based on partial least squares (PLS) regression, were developed in the near-infrared region. Conventional criteria such as R², the root mean square error of cross-validation (RMSECV), the root mean square error of estimation (RMSEE) and the number of PLS factors were considered for selecting among three pre-processing methods (vector normalization, minimum-maximum normalization and multiplicative scatter correction). Spectra of the pasta samples were treated with different mathematical pre-treatments before being used to build models between the spectral information and moisture content. The moisture content in pasta predicted by the FT-NIR method had a very good correlation with the values determined via traditional methods (R² = 0.983), which clearly indicates that FT-NIR methods can be used as an effective tool for rapid determination of moisture content in pasta. The best calibration model was developed with min-max normalization (MMN) spectral pre-processing (R² = 0.9775). The MMN pre-processing method was found most suitable, and a maximum coefficient of determination (R²) value of 0.9875 was obtained for the calibration model developed.
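
The modelling workflow above can be sketched as follows: per-spectrum min-max normalization followed by PLS regression and a cross-validated RMSECV. The simulated spectra are stand-ins for the FT-NIR measurements, and the number of PLS components is an arbitrary choice rather than the study's optimum.

```python
# Hedged sketch: min-max normalized spectra + PLS regression of moisture, with RMSECV.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(1)
n_samples, n_points = 150, 800
wavenumbers = np.linspace(4000, 12000, n_points)            # cm^-1, per the abstract
moisture = rng.uniform(10, 15, n_samples)                   # % (wet basis)

# Simulated spectra: a constant reference band plus a moisture-dependent band, with
# per-sample multiplicative scatter and noise (stand-ins for real FT-NIR measurements).
ref_band = np.exp(-((wavenumbers - 8500) / 200.0) ** 2)
water_band = np.exp(-((wavenumbers - 6900) / 150.0) ** 2)
spectra = ref_band + 0.05 * moisture[:, None] * water_band
spectra = spectra * rng.normal(1.0, 0.05, (n_samples, 1)) + rng.normal(0, 0.01, spectra.shape)

# Min-max normalization of each spectrum (the pre-processing reported as most suitable above)
mins = spectra.min(axis=1, keepdims=True)
maxs = spectra.max(axis=1, keepdims=True)
spectra_mmn = (spectra - mins) / (maxs - mins)

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, spectra_mmn, moisture,
                         cv=KFold(n_splits=10, shuffle=True, random_state=0)).ravel()
rmsecv = np.sqrt(np.mean((moisture - pred) ** 2))
r2 = 1.0 - np.sum((moisture - pred) ** 2) / np.sum((moisture - moisture.mean()) ** 2)
print(f"RMSECV = {rmsecv:.3f} % moisture, cross-validated R2 = {r2:.3f}")
```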

Keywords: FT-NIR, pasta, moisture determination, food engineering

Procedia PDF Downloads 237
860 Prediction of Oxygen Transfer and Gas Hold-Up in Pneumatic Bioreactors Containing Viscous Newtonian Fluids

Authors: Caroline E. Mendes, Alberto C. Badino

Abstract:

Pneumatic reactors have been widely employed in various sectors of the chemical industry, especially where high heat and mass transfer rates are required. This study aimed to obtain correlations that allow the prediction of gas hold-up (ε) and the volumetric oxygen transfer coefficient (kLa), and to compare these values for three models of pneumatic reactors at two scales utilizing Newtonian fluids. Values of kLa were obtained using the dynamic pressure-step method, while ε was obtained using a newly proposed measurement. Comparing the three reactor models studied, it was observed that mass transfer was superior in the draft-tube airlift, reaching an ε of 0.173 and a kLa of 0.00904 s⁻¹. All correlations showed a good fit to the experimental data (R² ≥ 94%), and comparisons with correlations from the literature demonstrate the need for further similar studies due to the shortage of available data, mainly for airlift reactors and high-viscosity fluids.
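
For context, kLa is commonly estimated by fitting the dissolved-oxygen balance dC/dt = kLa·(C* − C) to a dynamic response; the sketch below does exactly that on synthetic data. It is a simplified gassing-in model, not necessarily the exact pressure-step variant used in the study, and the saturation concentration and data are assumed values.

```python
# Hedged sketch: fitting kLa from a dynamic dissolved-oxygen response (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

C_SAT = 7.5   # mg/L, assumed oxygen saturation concentration

def do_response(t, kla, c0):
    """Solution of dC/dt = kLa*(C_sat - C) with C(0) = c0."""
    return C_SAT - (C_SAT - c0) * np.exp(-kla * t)

t = np.linspace(0, 600, 31)                                     # s
true_kla, c0 = 0.00904, 1.0                                     # s^-1 (value reported above), mg/L
do_measured = do_response(t, true_kla, c0) + np.random.default_rng(2).normal(0, 0.05, t.size)

(kla_fit, c0_fit), _ = curve_fit(do_response, t, do_measured, p0=[0.005, 0.5])
print(f"Fitted kLa = {kla_fit:.5f} s^-1")
```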

Keywords: bubble column, internal loop airlift, gas hold-up, kLa

Procedia PDF Downloads 252
859 How Can Personal Protective Equipment Be Best Used and Reused: A Human Factors based Look at Donning and Doffing Procedures

Authors: Devin Doos, Ashley Hughes, Trang Pham, Paul Barach, Rami Ahmed

Abstract:

Over 115,000 Health Care Workers (HCWs) have died from COVID-19, and millions have been infected while caring for patients. HCWs have filed thousands of safety complaints concerning Personal Protective Equipment (PPE) shortages, including concerns around inadequate PPE and PPE reuse. Protocols for donning and doffing PPE remain ambiguous, lack an evidence base, and often result in wide deviations in practice. PPE donning and doffing protocol deviations commonly result in self-contamination but have not been thoroughly addressed. No evidence-driven protocols provide guidance on protecting HCWs during periods of PPE reuse. Objective: The aim of this study was to examine safety-related threats and risks to Health Care Workers (HCWs) due to the reuse of PPE among Emergency Department personnel. Method: We conducted a prospective observational study to examine the risks of reusing PPE. First, ED personnel were asked to don and doff PPE in a simulation lab. Each participant was asked to don and doff PPE five times, according to the maximum reuse recommendation set by the Centers for Disease Control and Prevention (CDC). Each participant was video recorded; video recordings were reviewed and coded independently by at least 2 of the 3 trained coders for safety behaviors and riskiness of actions. A third coder was brought in when agreement between the 2 coders could not be reached. Agreement between coders was high (81.9%), and all disagreements (100%) were resolved via consensus. A bowtie risk assessment chart was constructed analyzing the factors that contribute to the increased risks HCWs face due to PPE use and reuse. Agreement amongst content experts in the fields of Emergency Medicine, Human Factors, and Anesthesiology was used to select aspects of health care that both contribute to and mitigate risks associated with PPE reuse. Findings: Twenty-eight clinician participants completed five rounds of donning/doffing PPE, yielding 140 PPE donning/doffing sequences. Two emerging threats were associated with behaviors in donning, doffing, and re-using PPE: (i) direct exposure to contaminant, and (ii) transmission/spread of contaminant. Protective behaviors included: hand hygiene, not touching the patient-facing surface of PPE, and ensuring a proper fit and closure of all PPE materials. 100% of participants (n = 28) deviated from the CDC recommended order, and most participants (92.85%, n = 26) self-contaminated at least once during reuse. Other frequent errors included failure to tie all ties on the PPE (92.85%, n = 26) and failure to wash hands after a contamination event occurred (39.28%, n = 11). Conclusions: There is wide variation, and there are regular errors, in how HCWs don, doff, and reuse PPE, which led to self-contamination. Some errors were deemed "recoverable", such as hand washing after touching a patient-facing surface to remove the contaminant. Other errors, such as using a contaminated mask and accidentally spreading contaminant to the neck and face, can lead to compound risks that are unique to repeated PPE use. A more comprehensive understanding of the threats to HCW safety and a complete approach to mitigating the underlying risks, including visualization with risk management tools, may aid future PPE design and workflow and space solutions.

Keywords: bowtie analysis, health care, PPE reuse, risk management

Procedia PDF Downloads 64
858 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics

Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima

Abstract:

This study outlines how to develop a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering for defining the membership functions, and (3) the Adaptive-Neuro Fuzzy Inference System (ANFIS), a combination of fuzzy inference and an artificial neural network. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using solar irradiation, module efficiency, and performance ratio as inputs. The effects on the error between the calibration data and the model-generated outputs of using different fuzzy inference types (Sugeno or Mamdani) and of changing the number of input membership functions were also illustrated. The solution spaces of the three methods were consequently examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors compared to FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors that are slightly lower than those of the Mamdani type. While ANFIS is superior in terms of error minimization, it could generate solutions that are questionable, i.e. negative GWP values for the solar PV system when the inputs were all at the upper end of their range. This shows that the applicability of ANFIS models highly depends on the range of cases on which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point at which increasing the input values does not improve the GWP and LCOE any further. In the absence of data that could be used for calibration, conventional FIS provides a knowledge-based model that can be used for prediction. In the PV case study, conventional FIS generated errors only slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in the industry and policy-making sectors. While the methodology does not guarantee a more accurate result compared to those generated by the Life Cycle Methodology, it does provide a relatively simpler way of generating knowledge- and data-based estimates that can be used during the initial design of a system.
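
A minimal Mamdani-type FIS of the kind referred to above can be sketched with scikit-fuzzy as follows, relating two of the three inputs (solar irradiation and module efficiency) to GWP. The universes, membership functions, rules, and output range are illustrative placeholders, not the calibrated surrogate model from the study.

```python
# Hedged sketch: a small Mamdani FIS for GWP estimation (placeholder universes and rules).
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

irradiation = ctrl.Antecedent(np.arange(800, 2601, 10), "irradiation")   # kWh/m2/yr
efficiency = ctrl.Antecedent(np.arange(0.10, 0.26, 0.005), "efficiency")
gwp = ctrl.Consequent(np.arange(10, 101, 1), "gwp")                      # g CO2-eq/kWh

irradiation.automf(3)   # generates 'poor', 'average', 'good' membership functions
efficiency.automf(3)
gwp["low"] = fuzz.trimf(gwp.universe, [10, 10, 55])
gwp["medium"] = fuzz.trimf(gwp.universe, [10, 55, 100])
gwp["high"] = fuzz.trimf(gwp.universe, [55, 100, 100])

rules = [
    ctrl.Rule(irradiation["good"] & efficiency["good"], gwp["low"]),
    ctrl.Rule(irradiation["average"] | efficiency["average"], gwp["medium"]),
    ctrl.Rule(irradiation["poor"] | efficiency["poor"], gwp["high"]),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["irradiation"] = 1800
sim.input["efficiency"] = 0.18
sim.compute()
print(f"Estimated GWP: {sim.output['gwp']:.1f} g CO2-eq/kWh")
```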

Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks

Procedia PDF Downloads 140