Search results for: the COS method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18949

13669 Doped and Co-doped ZnO Based Nanoparticles and their Photocatalytic and Gas Sensing Property

Authors: Neha Verma, Manik Rakhra

Abstract:

Statement of the Problem: Nowadays, a tremendous increase in population and advanced industrialization augment the problems related to air and water pollution. Growing industries promote environmental hazards, which are an alarming threat to the ecosystem. To safeguard the environment, detection of perilous gases and of colored wastewater discharge, a cause of eutrophication, is required. Researchers around the globe are making their best efforts to save the environment. For this remediation, advanced oxidation processes are used for potential applications. ZnO is an important semiconductor photocatalyst with high photocatalytic and gas sensing activities. For efficient photocatalytic and gas sensing properties, it is necessary to prepare doped/co-doped ZnO compounds to decrease the electron-hole recombination rate. However, lanthanide doped and co-doped metal oxides are seldom studied for photocatalytic and gas sensing applications. The purpose of this study is to describe the best photocatalyst for the photodegradation of dyes and for gas sensing. Methodology & Theoretical Orientation: An economical framework has to be used for the synthesis of ZnO. Based on an in-depth literature survey, a simple combustion method is utilized for gas sensing and photocatalytic activities. Findings: Rare earth doped and co-doped ZnO nanoparticles were the best photocatalysts for photodegradation of organic dyes and for different gas sensing applications, obtained by varying factors such as pH, aging time, and the concentrations of doping and co-doping metals in ZnO. Complete degradation of the dye was observed within minutes. The gas sensing nanodevice showed a better response and quicker recovery time for doped/co-doped ZnO.
Conclusion & Significance: To prevent air and water pollution, well-crystalline ZnO nanoparticles were synthesized by a rapid and economical method and used as a photocatalyst for the photodegradation of organic dyes and in gas sensing applications to detect the release of hazardous gases into the environment.

Keywords: ZnO, photocatalyst, photodegradation of dye, gas sensor

Procedia PDF Downloads 155
13668 Resolution Method for Unforeseen Ground Condition Problem Case in Coal Fired Steam Power Plant Project Location Adipala, Indonesia

Authors: Andi Fallahi, Bona Ryan Situmeang

Abstract:

The construction industry is notoriously risky. Much of the preparatory paperwork that precedes a construction project can be viewed as the formulation of risk allocation between the Owner and the Contractor. The Owner takes the risk that his project will not get built on schedule, that it will not get built for what he has budgeted, and that it will not be of the quality he expected. The Contractor faces a multitude of risks; one of them is an unforeseen condition at the construction site, where the Owner usually has the upper hand if an unforeseen condition occurs. Site data contained in the ground investigation report are often of significant contractual importance in disputes related to unforeseen ground conditions, yet a ground investigation can never fully disclose all the details of the underground condition (the risk of an unknown ground condition can never be 100% eliminated). The Adipala Coal Fired Steam Power Plant (CSFPP) 1 x 660 project is one of the largest CSFPP projects in Indonesia based on an Engineering, Procurement, and Construction (EPC) contract. Responsibility for unforeseen ground conditions lies with the Contractor, as stipulated in the clauses of the contract. During implementation, an unforeseen ground condition was indicated at the Circulating Water Pump House (CWPH) area, which required the Contractor to change the method of work, with a large impact on the project's time of completion and cost. This paper tries to analyze the best way of allocating the risk between the Owner and the Contractor. Parties that allocate and share risk fairly can ultimately save time and money for all involved and get the job done on schedule for the least overall cost.

Keywords: unforeseen ground condition, coal fired steam power plant, circulating water pump house, Indonesia

Procedia PDF Downloads 328
13667 Pyrolysis of Dursunbey Lignite and Pyrolysis Kinetics

Authors: H. Sütçü, C. Efe

Abstract:

In this study, the pyrolysis characteristics of Dursunbey-Balıkesir lignite and its pyrolysis kinetics are examined. Pyrolysis experiments at three different heating rates are performed using the thermogravimetric method. Kinetic parameters are calculated with the Coats-Redfern kinetic model, and the degree of pyrolysis is determined for each heating rate.
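As an illustration of how the Coats-Redfern linearization yields kinetic parameters from thermogravimetric data, the sketch below fits the first-order form ln(g(α)/T²) = const − E/(RT) to synthetic conversion data; the model choice, temperature range, and activation energy are illustrative assumptions, not values from the study.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def coats_redfern_first_order(T, alpha):
    """Estimate the activation energy E (J/mol) from TGA conversion data
    via the Coats-Redfern linearization, assuming a first-order model
    g(alpha) = -ln(1 - alpha):
        ln(g(alpha)/T^2) = const - E/(R*T)
    A linear fit of y = ln(g/T^2) against x = 1/T gives slope = -E/R."""
    y = np.log(-np.log(1.0 - alpha) / T**2)
    x = 1.0 / T
    slope, intercept = np.polyfit(x, y, 1)
    return -slope * R, intercept

# Synthetic check: build conversion data from a known E and recover it.
E_true = 120e3                                  # J/mol (hypothetical)
T = np.linspace(500.0, 800.0, 50)               # K
g = np.exp(20.0 - E_true / (R * T)) * T**2 / 1e12  # arbitrary scaling
alpha = 1.0 - np.exp(-g)                        # so that -ln(1-alpha) = g
E_est, _ = coats_redfern_first_order(T, alpha)
print(round(E_est / 1000))  # ≈ 120 kJ/mol
```

In practice one would repeat the fit for each heating rate and candidate reaction model g(α), keeping the model with the best linearity.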

Keywords: lignite, thermogravimetric analysis, pyrolysis, kinetics

Procedia PDF Downloads 367
13666 Parametric Appraisal of Robotic Arc Welding of Mild Steel Material by Principal Component Analysis-Fuzzy with Taguchi Technique

Authors: Amruta Rout, Golak Bihari Mahanta, Gunji Bala Murali, Bibhuti Bhusan Biswal, B. B. V. L. Deepak

Abstract:

The use of industrial robots for performing welding operations is one of the chief signs of contemporary welding. Modeling of weld joint parameters and weld process parameters is one of the most crucial aspects of robotic welding. As weld process parameters affect the weld joint parameters differently, a multi-objective optimization technique has to be utilized to obtain the optimal setting of the weld process parameters. In this paper, a hybrid optimization technique, i.e., Principal Component Analysis (PCA) combined with fuzzy logic, is proposed to obtain the optimal setting of weld process parameters such as wire feed rate, welding current, gas flow rate, welding speed, and nozzle tip-to-plate distance. The weld joint parameters considered for optimization are the depth of penetration, yield strength, and ultimate strength. PCA is a very efficient multi-objective technique for converting correlated and dependent parameters, such as the weld joint parameters, into uncorrelated and independent variables. Moreover, in this approach there is no need to check the correlation among responses, as no individual weights are assigned to them. A fuzzy inference engine can efficiently incorporate these aspects into its internal hierarchy, thereby overcoming various limitations of existing optimization approaches. Finally, the Taguchi method is used to obtain the optimal setting of the weld process parameters. It is therefore concluded that the hybrid technique has its own advantages, which can be used for quality improvement in industrial applications.
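The decorrelation step described above can be sketched numerically: principal component scores of the standardized weld joint responses are mutually uncorrelated, which is why no subjective response weights are needed. The trial data below are hypothetical, not the study's measurements.

```python
import numpy as np

def pca_decorrelate(responses):
    """Convert correlated response columns (e.g. depth of penetration,
    yield strength, ultimate strength) into uncorrelated principal
    component scores via eigen-decomposition of the correlation matrix."""
    X = (responses - responses.mean(axis=0)) / responses.std(axis=0)
    corr = np.corrcoef(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]          # largest variance first
    return X @ eigvecs[:, order], eigvals[order]

# Hypothetical trial data: 9 runs x 3 strongly correlated responses.
rng = np.random.default_rng(0)
base = rng.normal(size=(9, 1))
data = np.hstack([base + 0.1 * rng.normal(size=(9, 1)) for _ in range(3)])
scores, variances = pca_decorrelate(data)
cov = np.cov(scores, rowvar=False)
print(np.allclose(cov - np.diag(np.diag(cov)), 0, atol=1e-8))  # True
```

The uncorrelated scores can then feed a fuzzy inference engine and a Taguchi signal-to-noise analysis without double-counting correlated responses.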

Keywords: robotic arc welding, weld process parameters, weld joint parameters, principal component analysis, fuzzy logic, Taguchi method

Procedia PDF Downloads 179
13665 Management of the Experts in the Research Evaluation System of the University: Based on National Research University Higher School of Economics Example

Authors: Alena Nesterenko, Svetlana Petrikova

Abstract:

Research evaluation is one of the most important elements of self-regulation and development of researchers, as it is an impartial and independent process of assessment. The method of expert evaluations, as a scientific instrument for solving complicated non-formalized problems, is firstly a scientifically sound way to conduct an assessment that maximizes the effectiveness of work at every step, and secondly the usage of quantitative methods for evaluating expert opinion and collectively processing the results. These two features distinguish the method of expert evaluations from the long-known expertise widespread in many areas of knowledge. Different typical problems require different types of expert evaluation methods. Several issues arise with these methods: expert selection, management of the assessment procedure, processing of the results, and remuneration of the experts. To address these issues, an on-line system was created with the primary purpose of developing a versatile application for many workgroups with matching approaches to scientific work management. The online documentation assessment and statistics system allows: - To realize within one platform the independent activities of different workgroups (e.g. expert officers, managers). - To establish different workspaces for the corresponding workgroups, where custom user databases can be created according to particular needs. - To form the required output documents for each workgroup. - To configure information gathering for each workgroup (forms of assessment, tests, inventories). - To create and operate personal databases of remote users. - To set up automatic notification through e-mail. The next stage is the development of quantitative and qualitative criteria to form a database of experts.
The inventory was made so that the experts may submit not only their personal data, place of work, and scientific degree but also keywords matching their expertise, academic interests, ORCID, ResearcherID, SPIN-code RSCI, Scopus AuthorID, knowledge of languages, and primary scientific publications. For each project, competition assessments are processed in accordance with the ordering party's demands in the form of appraised inventories, commentaries (50-250 characters), and an overall review (1500 characters) in which the expert states the absence of a conflict of interest. Evaluation is conducted as follows: as applications are added to the database, the expert officer selects experts, generally two persons per application. Experts are selected according to the keywords; this method proved effective, unlike the OECD classifier. At the last stage, the choice of experts is approved by the supervisor, and e-mails are sent to the experts inviting them to assess the project. An expert supervisor monitors the experts writing reports so that all formalities are in place (time frame, propriety, correspondence). If the difference in assessments exceeds four points, a third evaluation is appointed. As the expert finishes work on his expert opinion, the system shows a contract marked 'new'; the managers process the contract, and the expert receives an e-mail that the contract is formed and ready to be signed. Once all formalities are concluded, the expert receives remuneration for his work. The specifics of the interaction of the expert officer with the experts will be presented in the report.

Keywords: expertise, management of research evaluation, method of expert evaluations, research evaluation

Procedia PDF Downloads 208
13664 Important Factors Affecting the Effectiveness of Quality Control Circles

Authors: Sogol Zarafshan

Abstract:

The present study aimed to identify important factors affecting the effectiveness of quality control circles in a hospital and to rank them using a combination of fuzzy VIKOR and Grey Relational Analysis (GRA). The study population consisted of five academic members and five experts in the field of nursing working in a hospital, who were selected using a purposive sampling method. In addition, a sample of 107 nurses was selected through a simple random sampling method using their employee codes and a random-number table. The required data were collected using a researcher-made questionnaire covering 12 factors. The validity of this questionnaire was confirmed by the opinions of the experts and academic members who participated in the present study, as well as by confirmatory factor analysis. Its reliability was also verified (α=0.796). The collected data were analyzed using SPSS 22.0 and LISREL 8.8, as well as the VIKOR-GRA and IPA methods. The ranking of the factors affecting the effectiveness of quality control circles showed that the highest and lowest ranks were related to 'Managers' and supervisors' support' and 'Group leadership', respectively. The highest hospital performance was observed for factors such as 'Clear goals and objectives' and 'Group cohesiveness and homogeneity', and the lowest for 'Reward system' and 'Feedback system'. The results showed that although 'Training the members', 'Using the right tools', and 'Reward system' were factors of great importance, the organization's performance on them was poor. Therefore, these factors should receive more attention from the studied hospital's managers and should be improved as soon as possible.
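As a hedged illustration of the Grey Relational Analysis step, the sketch below ranks alternatives by their grey relational grade against an ideal reference series; the scores, criteria, and equal-weight choice are illustrative assumptions, not the study's survey data.

```python
import numpy as np

def grey_relational_grade(matrix, weights=None, rho=0.5):
    """Rank alternatives (rows) against benefit criteria (columns) by GRA:
    normalize each column to [0, 1], measure distance to the ideal
    reference series (all ones), convert distances to grey relational
    coefficients, and average them into a relational grade per row."""
    m = np.asarray(matrix, dtype=float)
    norm = (m - m.min(axis=0)) / (m.max(axis=0) - m.min(axis=0))
    delta = np.abs(1.0 - norm)                 # distance to the ideal
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    if weights is None:
        weights = np.full(m.shape[1], 1.0 / m.shape[1])
    return coeff @ np.asarray(weights)

# Hypothetical scores of 4 factors on 3 benefit criteria (higher = better).
scores = [[7, 8, 6], [9, 9, 8], [5, 6, 7], [8, 7, 9]]
grades = grey_relational_grade(scores)
print(int(np.argmax(grades)))  # → 1 (the second factor ranks best)
```

In a VIKOR-GRA hybrid, grades like these would be combined with VIKOR's group-utility and individual-regret measures before the final ranking.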

Keywords: Quality control circles, Fuzzy VIKOR, Grey Relational Analysis, Importance–Performance Analysis

Procedia PDF Downloads 136
13663 Delamination Fracture Toughness Benefits of Inter-Woven Plies in Composite Laminates Produced through Automated Fibre Placement

Authors: Jayden Levy, Garth M. K. Pearce

Abstract:

An automated fibre placement method has been developed to build through-thickness reinforcement into carbon fibre reinforced plastic laminates during their production, with the goal of increasing delamination fracture toughness while circumventing the additional costs and defects imposed by post-layup stitching and z-pinning. Termed ‘inter-weaving’, the method uses custom placement sequences of thermoset prepreg tows to distribute regular fibre link regions in traditionally clean ply interfaces. Inter-weaving’s impact on mode I delamination fracture toughness was evaluated experimentally through double cantilever beam tests (ASTM standard D5528-13) on [±15°]9 laminates made from Park Electrochemical Corp. E-752-LT 1/4” carbon fibre prepreg tape. Unwoven and inter-woven automated fibre placement samples were compared to those of traditional laminates produced from standard uni-directional plies of the same material system. Unwoven automated fibre placement laminates were found to suffer a mostly constant 3.5% decrease in mode I delamination fracture toughness compared to flat uni-directional plies. Inter-weaving caused significant local fracture toughness increases (up to 50%), though these were offset by a matching overall reduction. These positive and negative behaviours of inter-woven laminates were respectively found to be caused by fibre breakage and matrix deformation at inter-weave sites, and the 3D layering of inter-woven ply interfaces providing numerous paths of least resistance for crack propagation.

Keywords: AFP, automated fibre placement, delamination, fracture toughness, inter-weaving

Procedia PDF Downloads 184
13662 Regret-Regression for Multi-Armed Bandit Problem

Authors: Deyadeen Ali Alshibani

Abstract:

In the literature, the multi-armed bandit problem is treated as a statistical decision model of an agent trying to optimize his decisions while improving his information at the same time. Several algorithmic models and their applications to this problem exist. In this paper, we evaluate regret-regression by comparing it with the Q-learning method. A simulation on the determination of an optimal treatment regime is presented in detail.
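The cumulative-regret objective that such comparisons evaluate can be illustrated with a minimal simulation. The sketch below uses an epsilon-greedy baseline (not the paper's regret-regression or Q-learning estimators) on a Bernoulli bandit; the arm means, epsilon, and horizon are illustrative assumptions.

```python
import numpy as np

def run_epsilon_greedy(true_means, steps=5000, eps=0.1, seed=0):
    """Simulate an epsilon-greedy agent on a Bernoulli multi-armed bandit
    and track cumulative regret: the best arm's mean minus the pulled
    arm's mean, summed over all steps."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    counts = np.zeros(k)
    values = np.zeros(k)                 # running mean reward per arm
    regret = 0.0
    best = max(true_means)
    for _ in range(steps):
        if rng.random() < eps:
            arm = int(rng.integers(k))   # explore a random arm
        else:
            arm = int(np.argmax(values)) # exploit the current best estimate
        reward = float(rng.random() < true_means[arm])
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        regret += best - true_means[arm]
    return values, regret

values, regret = run_epsilon_greedy([0.2, 0.5, 0.8])
print(int(np.argmax(values)))  # expected to be 2 (the 0.8 arm)
```

A regret-regression or Q-learning policy would plug into the same loop, and the policies would be compared by their cumulative regret curves.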

Keywords: optimal, bandit problem, optimization, dynamic programming

Procedia PDF Downloads 453
13661 Optimum Dewatering Network Design Using Firefly Optimization Algorithm

Authors: S. M. Javad Davoodi, Mojtaba Shourian

Abstract:

A groundwater table close to the ground surface causes major problems in construction and mining operations. One of the methods to control groundwater in such cases is using pumping wells, which remove excess water from the project site and lower the water table to a desirable level. Although the efficiency of this method is acceptable, it is expensive to apply, meaning that even a small improvement in the design of the pumping wells can lead to substantial cost savings. In order to minimize the total cost of the pumping-well method, a simulation-optimization approach is applied. The proposed model integrates MODFLOW as the simulation model with the firefly algorithm as the optimizer: MODFLOW computes the drawdown due to pumping in an aquifer, and the firefly algorithm finds the optimum values of the design parameters, namely the number, pumping rates, and layout of the wells. The developed Firefly-MODFLOW model is applied to minimize the cost of the dewatering project for the ancient mosque of Kerman city in Iran. Repeated runs of the Firefly-MODFLOW model indicate that drilling two wells with a total pumping rate of 5503 m3/day solves the minimization problem. Results show that implementing the proposed solution leads to at least 1.5 m of drawdown in the aquifer beneath the mosque region, while the subsidence due to groundwater depletion is less than 80 mm. Sensitivity analyses indicate that the desired groundwater depletion has an enormous impact on the total cost of the project. Moreover, in a hypothetical aquifer, decreasing the hydraulic conductivity decreases the total water extraction needed for dewatering.
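The optimization side of such a simulation-optimization loop can be sketched with a generic firefly algorithm; here a simple quadratic stands in for the MODFLOW-derived cost, and the swarm size, iteration count, and attraction parameters are illustrative assumptions.

```python
import numpy as np

def firefly_minimize(f, bounds, n=20, iters=100, beta0=1.0, gamma=1.0,
                     alpha=0.2, seed=0):
    """Minimal firefly algorithm sketch: brighter (lower-cost) fireflies
    attract dimmer ones with distance-decaying strength beta0*exp(-gamma*r^2);
    a small random walk (alpha) keeps the swarm exploring."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n, len(lo)))
    cost = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:        # j is brighter: move i toward j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=len(lo))
                    X[i] = np.clip(X[i], lo, hi)
                    cost[i] = f(X[i])
    best = int(np.argmin(cost))
    return X[best], cost[best]

# Toy stand-in for the MODFLOW cost: a quadratic with minimum at (2, -1).
x_best, c_best = firefly_minimize(lambda x: (x[0] - 2) ** 2 + (x[1] + 1) ** 2,
                                  bounds=[(-5, 5), (-5, 5)])
```

In the coupled model, each cost evaluation would instead run MODFLOW for a candidate well configuration and return the dewatering cost subject to the drawdown constraint.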

Keywords: groundwater dewatering, pumping wells, simulation-optimization, MODFLOW, firefly algorithm

Procedia PDF Downloads 294
13660 Some Extreme Halophilic Microorganisms Produce Extracellular Proteases with Long Lasting Tolerance to Ethanol Exposition

Authors: Cynthia G. Esquerre, Amparo Iris Zavaleta

Abstract:

Extremophiles constitute a potentially valuable source of proteases for the development of biotechnological processes; however, the number of available studies in the literature is limited compared to their mesophilic counterparts. Therefore, in this study, Peruvian halophilic microorganisms were characterized in order to select suitable proteolytic strains that produce active proteases under demanding conditions. Proteolysis was screened using the streak plate method with gelatin or skim milk as substrates. The proteolytic microorganisms were then selected for phenotypic characterization and screened by a semi-quantitative proteolytic test using a modified agar diffusion method. Finally, proteolysis was evaluated using extracts partially purified by ice-cold ethanol precipitation and dialysis. All analyses were carried out over a wide range of NaCl concentrations, pH values, temperatures, and substrates. Of a total of 60 strains, 21 proteolytic strains were selected; of these, 19 were extreme halophiles and 2 were moderate. Most proteolytic strains showed differences in their biochemical patterns, particularly in sugar fermentation. A total of 14 microorganisms produced extracellular proteases; 13 were neutral and one was alkaline, showing activity up to pH 9.0. The proteases hydrolyzed gelatin as the most specific substrate. In general, the catalytic activity was efficient over a wide range of NaCl concentrations (1 to 4 M) and temperatures (37 to 55 °C), and after an ethanol exposition performed at -20 °C for 24 hours. In conclusion, this study reports 14 extremely halophilic candidates producing extracellular proteases that remain stable and active over a wide range of NaCl concentrations and temperatures, and even after long-lasting ethanol exposition.

Keywords: biotechnological processes, ethanol exposition, extracellular proteases, extremophiles

Procedia PDF Downloads 285
13659 Two Component Source Apportionment Based on Absorption and Size Distribution Measurement

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Gábor Szabó, Zoltán Bozóki

Abstract:

Beyond its climate- and health-related issues, ambient light-absorbing carbonaceous particulate matter (LAC) has recently also attracted great scientific interest in terms of its regulation. Recent studies have experimentally demonstrated that LAC is dominantly composed of traffic and wood burning aerosol, particularly under wintertime urban conditions, when photochemical and biological activities are negligible. Several methods have been introduced to quantitatively apportion the aerosol fractions emitted by wood burning and traffic, but most of them require costly and time-consuming off-line chemical analysis. As opposed to chemical features, microphysical properties of airborne particles such as optical absorption and size distribution can be easily measured on-line, with high accuracy and sensitivity, especially under highly polluted urban conditions. Recently a new method has been proposed for the apportionment of wood burning and traffic aerosols based on the spectral dependence of their absorption, quantified by the Aerosol Angström Exponent (AAE). In this approach the absorption coefficient is deduced from a transmission measurement on a filter-accumulated aerosol sample, and the conversion factor between the measured optical absorption and the corresponding mass concentration (the specific absorption cross section) is determined by on-site chemical analysis. The recently developed multi-wavelength photoacoustic instruments provide a novel, in-situ approach to the reliable and quantitative characterization of carbonaceous particulate matter, and therefore also open up novel possibilities for source apportionment through the measurement of light absorption.
In this study, we demonstrate an in-situ spectral characterization method of the ambient carbon fraction based on light absorption and size distribution measurements using our state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS) and a Scanning Mobility Particle Sizer (SMPS). The carbonaceous-particulate-selective source apportionment study was performed on ambient particulate matter in the city center of Szeged, Hungary, where the dominance of traffic and wood burning aerosol had been experimentally demonstrated earlier. The proposed model is based on the parallel, in-situ measurement of optical absorption and size distribution. AAEff and AAEwb were deduced from the measured data using the defined correlation between the AOC(1064nm)/AOC(266nm) and N100/N20 ratios. σff(λ) and σwb(λ) were determined with the help of the independently measured temporal mass concentrations in the PM1 mode. Furthermore, the proposed optical source apportionment is based on the assumption that the light-absorbing fraction of PM is exclusively related to traffic and wood burning. This assumption is indirectly confirmed here by the fact that the measured size distribution is composed of two unimodal size distributions identified as corresponding to traffic and wood burning aerosols. The method offers the possibility of replacing laborious chemical analysis with a simple in-situ measurement of aerosol size distribution data. The results of the proposed novel optical-absorption-based source apportionment method prove its applicability whenever measurements are performed at an urban site where traffic and wood burning are the dominant carbonaceous emission sources.
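For two wavelengths, a two-component AAE apportionment of this general kind reduces to solving a 2×2 linear system. The sketch below illustrates that algebra with assumed Ångström exponents, a hypothetical reference wavelength, and synthetic absorption values; the 266 nm and 1064 nm wavelengths echo the ratios mentioned above, but everything else is illustrative.

```python
import numpy as np

def apportion_two_component(b_abs, wavelengths, lam0, aae_ff, aae_wb):
    """Split absorption measured at two wavelengths into fossil-fuel (ff)
    and wood-burning (wb) contributions at reference wavelength lam0,
    assuming each component follows a power law
    b(lam) = b(lam0) * (lam/lam0)**(-AAE)."""
    l1, l2 = wavelengths
    A = np.array([[(l1 / lam0) ** -aae_ff, (l1 / lam0) ** -aae_wb],
                  [(l2 / lam0) ** -aae_ff, (l2 / lam0) ** -aae_wb]])
    return np.linalg.solve(A, np.asarray(b_abs, dtype=float))

# Synthetic check with assumed exponents AAE_ff = 1.0, AAE_wb = 2.0.
lam0, l1, l2 = 532.0, 266.0, 1064.0            # nm (lam0 is hypothetical)
b_ff0, b_wb0 = 10.0, 5.0                       # Mm^-1 at lam0 (hypothetical)
meas = [b_ff0 * (l / lam0) ** -1.0 + b_wb0 * (l / lam0) ** -2.0
        for l in (l1, l2)]
b_ff, b_wb = apportion_two_component(meas, (l1, l2), lam0, 1.0, 2.0)
print(round(b_ff, 6), round(b_wb, 6))  # recovers 10.0 and 5.0
```

With more than two wavelengths, as in a 4λ-PAS, the same model would be fitted by least squares instead of an exact solve.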

Keywords: absorption, size distribution, source apportionment, wood burning, traffic aerosol

Procedia PDF Downloads 228
13658 Most Recent Lifespan Estimate for the Itaipu Hydroelectric Power Plant Computed by Using Borland and Miller Method and Mass Balance in Brazil, Paraguay

Authors: Anderson Braga Mendes

Abstract:

The Itaipu Hydroelectric Power Plant is settled on the Paraná River, which forms a natural boundary between Brazil and Paraguay; thus, the facility is shared by both countries. Itaipu is the biggest hydroelectric generator in the world and provides a clean and renewable electrical energy supply for 17% and 76% of Brazil and Paraguay, respectively. The plant started generating in 1984. It counts on 20 Francis turbines and has an installed capacity of 14,000 MW. Its historic generation record occurred in 2016 (103,098,366 MWh), and from the beginning of its operation until the last day of 2016 the plant achieved a cumulative total of 2,415,789,823 MWh. The distinct sedimentologic aspects of the drainage area of the Itaipu Power Plant, from its stretch upstream (Porto Primavera and Rosana dams) to downstream (the Itaipu dam itself), were taken into account in order to best estimate the increase/decrease in the sediment yield using data from 2001 to 2016. Such data are collected through a network of 14 automatic sedimentometric stations managed by the company itself and operating on an hourly basis, covering an area of around 136,000 km² (92% of the incremental drainage area of the undertaking). Since 1972, a series of lifespan studies for the Itaipu Power Plant have been made, the first assessed by Hans Albert Einstein at the time of the feasibility studies for the enterprise. From that date onwards, eight further studies were made over the last 44 years, aiming to confer more precision upon the estimates based on more up-to-date data sets. From the analysis of each monitoring station, strong increasing tendencies in the sediment yield were clearly noticed over the last 14 years, mainly in the Iguatemi, Ivaí, São Francisco Falso, and Carapá Rivers, the last of which is situated in Paraguay, whereas the others lie entirely in Brazilian territory.
Five lifespan scenarios considering different sediment yield tendencies were simulated with the aid of the software tools SEDIMENT and DPOSIT, both developed by the author of the present work. These tools thoroughly follow the Borland & Miller methodology (the empirical area-reduction method). The soundest of the five scenarios under analysis indicated a lifespan forecast of 168 years, with the reservoir only 1.8% silted by the end of 2016, after 32 years of operation. Besides, the mass balance in the reservoir (water inflows minus outflows) between 1986 and 2016 shows that 2% of the whole Itaipu lake is silted nowadays. Owing to the convergence of both results, which were acquired using different methodologies and independent input data, it is worth concluding that the mathematical modeling is satisfactory and calibrated, thus lending credibility to this most recent lifespan estimate.

Keywords: Borland and Miller method, hydroelectricity, Itaipu Power Plant, lifespan, mass balance

Procedia PDF Downloads 274
13657 Research of Seepage Field and Slope Stability Considering Heterogeneous Characteristics of Waste Piles: A Less Costly Way to Reduce High Leachate Levels and Avoid Accidents

Authors: Serges Mendomo Meye, Li Guowei, Shen Zhenzhong, Gan Lei, Xu Liqun

Abstract:

Due to their high-heap, large-volume characteristics, the complex layering of waste, and the high water level of leachate, landfills easily produce environmental pollution and slope instability. It is therefore of great significance to research the heterogeneous seepage field and the stability of landfills. This paper focuses on the heterogeneous characteristics of landfill piles and analyzes the seepage field and slope stability of a landfill using statistical and numerical analysis methods. The calculated results are compared with field measurements and literature data to verify the reliability of the model, which may provide a basis for the design and the safe, eco-friendly operation of landfills. The main innovations are as follows: (1) The saturated-unsaturated seepage equation of heterogeneous soil is derived theoretically. The heterogeneous landfill is regarded as composed of infinitely many layers of homogeneous waste, and a method for establishing the heterogeneous seepage model is proposed. The formation law of the stagnant water level of heterogeneous landfills is then studied. It is found that the maximum stagnant water level is higher when the heterogeneous seepage characteristics are considered, which harms the stability of the landfill. (2) Considering the heterogeneous weight and strength characteristics of waste, a method of establishing a heterogeneous stability model is proposed and extended to a three-dimensional stability study. It is found that the distribution of heterogeneous characteristics has a great influence on the stability of the landfill slope. During the operation and management of the landfill, the reservoir bank should also be considered alongside the capacity of the landfill.

Keywords: heterogeneous characteristics, leachate levels, saturated-unsaturated seepage, seepage field, slope stability

Procedia PDF Downloads 252
13656 Reaching a Mobile and Dynamic Nose after Rhinoplasty: A Pilot Study

Authors: Guncel Ozturk

Abstract:

Background: Rhinoplasty is among the most commonly performed cosmetic operations in plastic surgery. Maneuvers used in rhinoplasty lead to a firm and stiff nasal tip in the early postoperative months. This unnatural stability of the nose may easily cause distortion of the reshaped nose after severe trauma. Moreover, a firm nasal tip may cause difficulties in activities such as touching, hugging, or kissing. Decreasing the stability and increasing the mobility of the nasal tip would help rhinoplasty patients avoid these small but relatively important problems. Methods: We used a delivery approach with closed rhinoplasty and changed the positions of the intranasal incisions to achieve a dynamic and mobile nose. A total of 203 patients who had undergone primary closed rhinoplasty in a private practice were inspected retrospectively. A posterior strut flap connected with the connective tissues in the caudal septum and the medial crura was formed. The cartilage of the posterior strut graft was left 2 mm thick in the distal part of the septum, it was cut vertically, and the connective tissue in the distal part was preserved. Results: The median patient age was 24 (range 17-42) years. The median follow-up period was 15.2 (range 12-26) months. Patient satisfaction was assessed with the 'Rhinoplasty Outcome Evaluation' (ROE) questionnaire. Twelve months after surgery, 87.5% of patients reported excellent outcomes according to the ROE. Conclusion: With this method, the soft tissue connections between that segment and the surrounding structures should be preserved to maintain tip support while keeping the tip mobile at the same time. These modifications give access to a mobile, non-stiff, and dynamic nasal tip in the early postoperative months. Further prospective studies should be performed to support this method.

Keywords: closed rhinoplasty, dynamic, mobile, tip

Procedia PDF Downloads 133
13655 Crop Leaf Area Index (LAI) Inversion and Scale Effect Analysis from Unmanned Aerial Vehicle (UAV)-Based Hyperspectral Data

Authors: Xiaohua Zhu, Lingling Ma, Yongguang Zhao

Abstract:

Leaf Area Index (LAI) is a key structural characteristic of crops and plays a significant role in precision agricultural management and farmland ecosystem modeling. However, LAI retrieved from data of different resolutions contains a scaling bias due to spatial heterogeneity and model non-linearity; that is, there is a scale effect in multi-scale LAI estimation. In this article, a typical farmland in the semi-arid regions of Chinese Inner Mongolia is taken as the study area. Based on the combination of the PROSPECT and SAIL models, a multi-dimensional look-up table (LUT) is generated for LAI estimation of multiple crops from unmanned aerial vehicle (UAV) hyperspectral data. Based on the Taylor expansion method and a computational geometry model, a scale transfer model considering both inter- and intra-class differences is constructed to analyze the scale effect of LAI inversion over an inhomogeneous surface. The results indicate that: (1) The LUT method based on classification and parameter sensitivity analysis is useful for LAI retrieval of corn, potato, sunflower, and melon on the typical farmland, with a correlation coefficient R² of 0.82 and a root mean square error (RMSE) of 0.43 m²/m². (2) The scale effect of LAI becomes more obvious as image resolution decreases, and the maximum scale bias is more than 45%. (3) The inter-class scale effect is higher than the intra-class one, and it can be corrected efficiently by the scale transfer model established on the basis of Taylor expansion and computational geometry. After correction, the maximum scale bias can be reduced to 1.2%.
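A LUT-based inversion of the kind described can be sketched as a nearest-spectrum search: simulate reflectance over a grid of LAI values, then pick the grid entry whose spectrum has the least RMSE against the measured one. The toy reflectance model below merely stands in for PROSPECT+SAIL and is purely illustrative.

```python
import numpy as np

def lut_invert(lut_spectra, lut_lai, measured):
    """Toy LUT inversion: return the LAI whose simulated spectrum is
    closest (least RMSE) to the measured hyperspectral reflectance."""
    rmse = np.sqrt(np.mean((lut_spectra - measured) ** 2, axis=1))
    return lut_lai[np.argmin(rmse)]

# Hypothetical LUT: 100 LAI values run through a made-up reflectance model
# (a saturating NIR plateau above 700 nm) instead of PROSPECT+SAIL.
lai_grid = np.linspace(0.1, 6.0, 100)
bands = np.linspace(400, 1000, 50)                    # nm
lut = np.array([0.4 * (1 - np.exp(-0.5 * lai)) * (bands > 700) + 0.05
                for lai in lai_grid])
true_lai = 3.0
measured = 0.4 * (1 - np.exp(-0.5 * true_lai)) * (bands > 700) + 0.05
lai_hat = lut_invert(lut, lai_grid, measured)
print(round(lai_hat, 2))  # ≈ 3.0 (nearest grid point)
```

A real implementation would additionally restrict the LUT by crop class and the sensitive parameters identified in the analysis before the search.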

Keywords: leaf area index (LAI), scale effect, UAV-based hyperspectral data, look-up-table (LUT), remote sensing

Procedia PDF Downloads 440
13654 Deep Learning Based Polarimetric SAR Image Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often constrained to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the property of rotational invariance to the geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data therefore aims to take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature.
Although the improvements achieved by the newly investigated reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability in vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated on real data sets and compared with a well-known, standard approach from the literature. The experiments show that the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
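The idea of a multi-term cost over polarimetric channels can be illustrated with a small sketch. The two terms and their weights below are hypothetical stand-ins chosen by us (an amplitude-fidelity term plus an inter-channel coherence term), not the loss published in the paper.

```python
import numpy as np

def composite_loss(pred, target, w_amp=1.0, w_coh=0.5):
    """Weighted sum of two terms over complex polarimetric channels:
    per-channel amplitude fidelity plus inter-channel coherence fidelity.
    pred/target: complex arrays of shape (channels, H, W); weights are illustrative."""
    amp = np.mean((np.abs(pred) - np.abs(target)) ** 2)
    coh = np.mean(np.abs(pred[0] * np.conj(pred[1])
                         - target[0] * np.conj(target[1])) ** 2)
    return w_amp * amp + w_coh * coh
```

Combining terms this way lets the training process trade off different scattering properties of the reconstructed channels, which is the mechanism the abstract describes.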

Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry

Procedia PDF Downloads 91
13653 Spray Drying: An Innovative and Sustainable Method of Preserving Fruits

Authors: Adepoju Abiola Lydia, Adeyanju James Abiodun, Abioye A. O.

Abstract:

Spray drying, an innovative and sustainable preservation method, is increasingly gaining recognition for its potential to enhance food security by extending the shelf life of fruits. This technique involves the atomization of fruit pulp into fine droplets, followed by rapid drying with hot air, resulting in a powdered product that retains much of the original fruit's nutritional value, flavor, and color. By encapsulating sensitive bioactive compounds within a dry matrix, spray drying mitigates nutrient degradation and extends product usability. This technology aligns with sustainability goals by reducing post-harvest losses, minimizing the need for preservatives, and lowering energy consumption compared to conventional drying methods. Furthermore, spray drying enables the use of imperfect or surplus fruits, contributing to waste reduction and providing a continuous supply of nutritious fruit-based ingredients regardless of seasonal variations. The powdered form enhances versatility, allowing incorporation into various food products, thus broadening the scope of fruit utilization. Innovations in spray drying, such as the use of novel carrier agents and optimization of processing parameters, enhance the quality and functionality of the final product. Moreover, the scalability of spray drying makes it suitable for both industrial applications and smaller-scale operations, supporting local economies and food systems. In conclusion, spray drying stands out as a key technology in enhancing food security by ensuring a stable supply of high-quality, nutritious food ingredients while fostering sustainable agricultural practices.

Keywords: spray drying, sustainable, process parameters, carrier agents, fruits

Procedia PDF Downloads 25
13652 Event Data Representation Based on Time Stamp for Pedestrian Detection

Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita

Abstract:

In association with the wave of electric vehicles (EVs), low energy consumption systems have become more and more important. One of the key technologies to realize low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution, which can achieve 1 Mframe/s, and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity; to be more specific, this sensor only captures pixels whose intensity changes. In other words, there is no signal in areas without any intensity change. That is to say, this sensor is more energy efficient than conventional sensors such as RGB cameras because redundant data are removed. On the other hand, the data are difficult to handle because the data format is completely different from an RGB image: acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1) and a timestamp; it does not include intensity such as RGB values. Therefore, as existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. In order to solve the difficulties caused by the data format differences, most prior art builds frame data and feeds it to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition purposes. However, even though the data can be fed this way, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of RGB pixel values, polarity information is clearly not rich enough. Considering this context, we proposed to use the timestamp information as the data representation fed to deep learning.
Concretely, we first build frame data divided by a certain time period and then assign an intensity value according to the timestamp within each frame; for example, a high value is given to a recent signal. We expected that this data representation could capture features, especially of moving objects, because the timestamp represents the movement direction and speed. Using this proposed method, we built our own dataset with a DVS fixed on a parked car to develop an application for a surveillance system that can detect persons around the car. We think the DVS is one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a static scene. For comparison purposes, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way as ours but feeds polarity information to the CNN. Then, we measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
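A minimal sketch of the timestamp-based frame representation described above. The array shape, the linear recency decay, and the per-pixel max rule are our own illustrative choices, not necessarily the authors' exact normalization.

```python
import numpy as np

def events_to_timestamp_frame(events, shape, window):
    """Collapse one time window of DVS events (x, y, polarity, t) into a frame
    whose pixel value encodes recency: the newest event maps to 1.0 and an
    event `window` time units old maps to 0.0."""
    frame = np.zeros(shape, dtype=np.float32)
    t_end = max(t for _, _, _, t in events)
    for x, y, _, t in events:
        # Keep the most recent (largest) recency value seen at each pixel.
        frame[y, x] = max(frame[y, x], 1.0 - (t_end - t) / window)
    return frame

# Two events 100 time units apart in a 100-unit window:
# the old pixel fades to 0, the new one is 1.
frame = events_to_timestamp_frame([(0, 0, 1, 0), (1, 0, 1, 100)], (2, 2), 100)
```

Because pixel values fade with age, a moving object leaves a gradient along its direction of motion, which is the cue the CNN can exploit.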

Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption

Procedia PDF Downloads 97
13651 Telemedicine in Physician Assistant Education: A Partnership with Community Agency

Authors: Martina I. Reinhold, Theresa Bacon-Baguley

Abstract:

A core challenge of physician assistant education is preparing professionals for lifelong learning. While this conventionally has encompassed scientific advances, students must also embrace new care delivery models and technologies. Telemedicine, the provision of care via two-way audio and video, is an example of a technological advance reforming health care. During a three-semester sequence of Hospital Community Experiences, physician assistant students were assigned experiences with Answer Health on Demand, a telemedicine collaborative. Preceding the experiences, the agency lectured on the application of telemedicine. Students were then introduced to the technology and partnered with a provider. Prior to observing the patient-provider interaction, patient consent was obtained. Afterwards, students completed a reflection paper on lessons learned and the potential impact of telemedicine on their careers. Thematic analysis was completed on the students’ reflection papers (n=13). Preceding the lecture and experience, over 75% of students (10/13) were unaware of telemedicine. Several stated they were 'skeptical' about the effectiveness of 'impersonal' health care appointments. After the experience, all students remarked that telemedicine will play a large role in the future of healthcare and will provide benefits by improving access in rural areas, decreasing wait time, and saving cost. More importantly, 30% of students (4/13) commented that telemedicine is a technology they can see themselves using in their future practice. Initial results indicate that collaborative interaction between students and telemedicine providers enhanced student learning and exposed students to technological advances in the delivery of care. Further, results indicate that students perceived telemedicine more favorably as a viable delivery method after the experience.

Keywords: collaboration, physician assistant education, teaching innovative health care delivery method, telemedicine

Procedia PDF Downloads 197
13650 Self-Supervised Attributed Graph Clustering with Dual Contrastive Loss Constraints

Authors: Lijuan Zhou, Mengqi Wu, Changyong Niu

Abstract:

Attributed graph clustering can utilize the graph topology and node attributes to uncover hidden community structures and patterns in complex networks, aiding in the understanding and analysis of complex systems. Utilizing contrastive learning for attributed graph clustering can effectively exploit meaningful implicit relationships between data. However, existing attributed graph clustering methods based on contrastive learning suffer from the following drawbacks: 1) Complex data augmentation increases computational cost, and inappropriate data augmentation may lead to semantic drift. 2) The selection of positive and negative samples neglects the intrinsic cluster structure learned from graph topology and node attributes. Therefore, this paper proposes a method called self-supervised Attributed Graph Clustering with Dual Contrastive Loss constraints (AGC-DCL). Firstly, Siamese Multilayer Perceptron (MLP) encoders are employed to generate two views separately to avoid complex data augmentation. Secondly, the neighborhood contrastive loss is introduced to constrain node representation using local topological structure while effectively embedding attribute information through attribute reconstruction. Additionally, clustering-oriented contrastive loss is applied to fully utilize clustering information in global semantics for discriminative node representations, regarding the cluster centers from two views as negative samples to fully leverage effective clustering information from different views. Comparative clustering results with existing attributed graph clustering algorithms on six datasets demonstrate the superiority of the proposed method.
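As a rough illustration of the clustering-oriented contrastive idea, matched cluster centers across the two views can serve as positives while all other centers act as negatives. The sketch below is our own simplification (an InfoNCE-style loss over centers only), not the exact AGC-DCL formulation.

```python
import numpy as np

def center_contrastive_loss(c1, c2, tau=0.5):
    """InfoNCE-style loss over cluster centers from two views: center k in view 1
    is pulled toward center k in view 2 and pushed away from all other view-2
    centers. c1, c2: (n_clusters, dim) arrays; tau is a temperature."""
    c1 = c1 / np.linalg.norm(c1, axis=1, keepdims=True)
    c2 = c2 / np.linalg.norm(c2, axis=1, keepdims=True)
    logits = (c1 @ c2.T) / tau          # cosine similarities, temperature-scaled
    pos = np.diag(logits)               # matched centers are the positives
    return float(np.mean(np.log(np.sum(np.exp(logits), axis=1)) - pos))
```

Misaligned centers across views raise the loss, so minimizing it encourages the two encoders to agree on the cluster structure.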

Keywords: attributed graph clustering, contrastive learning, clustering-oriented, self-supervised learning

Procedia PDF Downloads 54
13649 Effects of Ultraviolet Treatment on Microbiological Load and Phenolic Content of Vegetable Juice

Authors: Kubra Dogan, Fatih Tornuk

Abstract:

Due to increasing consumer demand for high-quality food products and awareness of the health benefits of different nutrients in food, minimal processing is becoming more popular in modern food preservation. To date, heat treatment is often used to inactivate spoilage microorganisms in foods. However, it may cause significant changes in the quality and nutritional properties of food. In order to overcome the detrimental effects of heat treatment, several alternative non-thermal microbial inactivation processes have been investigated. Ultraviolet (UV) inactivation is a promising and feasible alternative to heat treatment for better quality and longer shelf life, aiming to inhibit spoilage and pathogenic microorganisms and to inactivate enzymes in vegetable juice production. UV-C is a sub-class of UV treatment that shows the highest germicidal effect between 250-270 nm. The wavelength of 254 nm is used for the surface disinfection of certain liquid food products such as vegetable juice. The effects of UV-C treatment on the microbiological load and quality parameters of a vegetable juice mixed from celery, carrot, lemon and orange were investigated. Our results showed that, after three months of storage, UV-C-treated vegetable juice had a TMAB count reduced by 3.5 log cfu/g and a yeast-mold count reduced by 2 log cfu/g compared to the control sample. Total phenolic content was found to be 514.3 ± 0.6 mg gallic acid equivalent/L, with no significant difference compared to the control. The present work suggests that UV-C treatment is an alternative method for the disinfection of vegetable juice, since it enables adequate microbial inactivation and longer shelf life with minimal degradation of the quality parameters of the juice.
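The log reductions reported above follow the standard definition, log10 of the ratio of counts before and after treatment. A quick sketch (the example counts are illustrative, not the study's raw data):

```python
import math

def log_reduction(count_before, count_after):
    """Microbial log reduction: log10(N0 / N).
    A 3.5-log reduction means the count fell by a factor of 10**3.5 (~3162x)."""
    return math.log10(count_before / count_after)

# Illustrative: 1e6 cfu/g before treatment, ~316 cfu/g after.
reduction = log_reduction(1e6, 10 ** 2.5)
```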

Keywords: heat treatment, phenolic content, shelf life, ultraviolet (UV-C), vegetable juice

Procedia PDF Downloads 210
13648 The Design Method of Artificial Intelligence Learning Picture: A Case Study of DCAI's New Teaching

Authors: Weichen Chang

Abstract:

To create a guided teaching method for AI generative drawing design, this paper develops a set of teaching models for AI generative drawing (DCAI) that combines learning modes such as problem-solving, thematic inquiry, phenomenon-based learning, task-oriented learning, and DFC. Through guided programs and content for information-security AI picture book learning, participatory action research (PAR) and interviews were applied to explore how the dual knowledge of Context and ChatGPT (DCAI) guides the development of students' AI learning skills. In the interviews, the students highlighted five main learning outcomes (self-study, critical thinking, knowledge generation, cognitive development, and presentation of work) as well as the challenges of implementing the model. Through the use of DCAI, students enhance their consensus awareness of generative mapping analysis and group cooperation, and they gain knowledge that can enhance AI capabilities in DCAI inquiry and in future life. This paper's conclusions are: (1) good use of DCAI can assist students in exploring the value of their knowledge through the power of stories and in finding the meaning of knowledge communication; (2) context can be analyzed for the transformative power of the integrity and coherence of a story so as to achieve the tension of 'starting and ending'; (3) ChatGPT can be used to extract inspiration, arrange story compositions, and craft prompts that communicate with people and convey emotions. Therefore, new methods of knowledge construction will be among the effective methods for AI learning in the age of artificial intelligence, providing new thinking and new expressions for interdisciplinary design and design education practice.

Keywords: artificial intelligence, task-oriented, contextualization, design education

Procedia PDF Downloads 30
13647 Coupled Hydro-Geomechanical Modeling of Oil Reservoir Considering Non-Newtonian Fluid through a Fracture

Authors: Juan Huang, Hugo Ninanya

Abstract:

Oil has been used as a source of energy and as a raw material for products such as asphalt and rubber for many years, which is why new technologies have been implemented over time. However, research still needs to continue because engineers face new challenges every day, such as unconventional reservoirs. Various numerical methodologies have been applied in petroleum engineering as tools to optimize reservoir production before drilling a wellbore, although not all of them are equally efficient for studying fracture propagation. Analytical methods, like those based on linear elastic fracture mechanics, fail to give a reasonable prediction when simulating fracture propagation in ductile materials, whereas numerical methods based on the cohesive zone method (CZM) allow the elastoplastic behavior of a reservoir to be represented with a constitutive model; therefore, predictions in terms of displacements and pressure are more reliable. In this work, a coupled hydro-geomechanical model of horizontal wells in fractured rock was developed using ABAQUS; both the extended finite element method and cohesive elements were used to represent predefined fractures in a 2-D model. A power law representing the rheological behavior of the fluid (shear-thinning, power index < 1) through fractures, with a leak-off rate permeating into the matrix, was considered. Results are shown in terms of the aperture and length of the fracture, the pressure within the fracture, and fluid loss. A high infiltration rate into the matrix was observed as the power index decreased. A sensitivity analysis was finally performed to identify the most influential factors in fluid loss.
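The shear-thinning behavior assumed for the fracturing fluid follows the Ostwald-de Waele power law, where apparent viscosity drops as shear rate rises for power index n < 1. A tiny sketch (the consistency index K and power index n below are illustrative values, not the study's fluid parameters):

```python
def apparent_viscosity(shear_rate, K=1.0, n=0.5):
    """Power-law (Ostwald-de Waele) fluid: mu_app = K * gamma_dot**(n - 1).
    With n < 1 (shear-thinning), viscosity decreases as shear rate increases;
    n = 1 recovers a Newtonian fluid with constant viscosity K."""
    return K * shear_rate ** (n - 1.0)

low_shear = apparent_viscosity(1.0)     # viscosity at low shear rate
high_shear = apparent_viscosity(100.0)  # much lower viscosity at high shear rate
```

This is consistent with the abstract's observation that a lower power index (stronger shear-thinning) increases infiltration into the matrix.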

Keywords: fracture, hydro-geomechanical model, non-Newtonian fluid, numerical analysis, sensitivity analysis

Procedia PDF Downloads 206
13646 Agrowastes to Edible Hydrogels through Bio Nanotechnology Interventions: Bioactive from Mandarin Peels

Authors: Niharika Kaushal, Minni Singh

Abstract:

Citrus fruits contain an abundance of phytochemicals that can promote health. A substantial amount of agrowaste, primarily peels and seeds, is produced by the juice processing industries. This leftover agrowaste is a reservoir of nutraceuticals, particularly bioflavonoids, which render it antioxidant and potentially anticancerous. It is, therefore, favorable to utilize this biomass and contribute towards sustainability in a manner that allows value-added products, nutraceuticals in this study, to be derived from it. However, the pre-systemic metabolism of flavonoids in the gastric phase limits the effectiveness of these bioflavonoids derived from mandarin biomass. In this study, 'kinnow' mandarin (Citrus nobilis X Citrus deliciosa) biomass was explored for its flavonoid profile. This work entails supercritical fluid extraction and identification of bioflavonoids from mandarin biomass. Furthermore, to overcome the limitations of these flavonoids in the gastrointestinal tract, a double-layered vehicular mechanism comprising the fabrication of nanoconjugates and edible hydrogels was adopted. Total flavonoids in the mandarin peel extract were estimated by the aluminum chloride complexation method and were found to be 47.3±1.06 mg/ml rutin equivalents. Mass spectral analysis revealed an abundance of polymethoxyflavones (PMFs), with nobiletin and tangeretin as the major flavonoids in the extract, followed by hesperetin and naringenin. Furthermore, the antioxidant potential was analyzed by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) method, which showed an IC50 of 0.55 μg/ml. Nanoconjugates were fabricated via the solvent evaporation method and further impregnated into hydrogels. Additionally, the release characteristics of the nanoconjugate-laden hydrogels in a simulated gastrointestinal environment were studied. The PLGA-PMF nanoconjugates exhibited a particle size between 200-250 nm with a smooth and spherical shape, as revealed by FE-SEM.
The impregnated alginate hydrogels offered a dense network that ensured retention of the PLGA-PMF nanoconjugates, as confirmed by Cryo-SEM images. Rheological studies revealed the shear-thinning behavior of the hydrogels and their high resistance to deformation. Gastrointestinal studies showed a negligible 4.0% release of flavonoids in the gastric phase, followed by a sustained release over the following hours in the intestinal environment. Therefore, the enormous potential of recovering nutraceuticals from agro-processing wastes, further augmented by nanotechnological interventions that enhance the bioefficacy of these compounds, lays the foundation for the development of value-added products, thereby contributing towards the sustainable use of agrowaste.

Keywords: agrowaste, gastrointestinal, hydrogel, nutraceuticals

Procedia PDF Downloads 93
13645 Sequential and Combinatorial Pre-Treatment Strategy of Lignocellulose for the Enhanced Enzymatic Hydrolysis of Spent Coffee Waste

Authors: Rajeev Ravindran, Amit K. Jaiswal

Abstract:

Waste from the food-processing industry is produced in large amounts and contains high levels of lignocellulose. Its continuous accumulation throughout the year in large quantities creates a major environmental problem worldwide. The chemical composition of these wastes (up to 75% of which is contributed by polysaccharides) makes them an inexpensive raw material for the production of value-added products such as biofuels, bio-solvents, nanocrystalline cellulose and enzymes. In order to use lignocellulose as the raw material for microbial fermentation, the substrate is subjected to enzymatic treatment, which leads to the release of reducing sugars such as glucose and xylose. However, the inherent properties of lignocellulose, such as the presence of lignin, pectin, acetyl groups and crystalline cellulose, contribute to its recalcitrance. This leads to poor sugar yields upon enzymatic hydrolysis. A pre-treatment method is generally applied before enzymatic treatment to remove the recalcitrant components in biomass through structural breakdown. The present study was carried out to find the best pre-treatment method for the maximum liberation of reducing sugars from spent coffee waste (SPW). SPW was subjected to a range of physical, chemical and physico-chemical pre-treatments; a sequential, combinatorial pre-treatment strategy, combining two or more pre-treatments, was also applied to attain maximum sugar yield. All the pre-treated samples were analysed for total reducing sugar, followed by identification and quantification of individual sugars by HPLC coupled with an RI detector. Besides, the generation of inhibitory compounds such as furfural and hydroxymethylfurfural (HMF), which can hinder microbial growth and enzyme activity, was also monitored.
Results showed that ultrasound treatment (31.06 mg/L) was the best pre-treatment method based on total reducing sugar content, followed by dilute acid hydrolysis (10.03 mg/L), while galactose was found to be the major monosaccharide in the pre-treated SPW. Finally, the results were used to design a sequential lignocellulose pre-treatment protocol to decrease the formation of enzyme inhibitors and increase the sugar yield on enzymatic hydrolysis employing a cellulase-hemicellulase consortium. The sequential, combinatorial treatment performed better in terms of total reducing sugar yield and produced a lower content of inhibitory compounds, which could be because this mode of pre-treatment combines several mild treatments rather than a single harsh one. It eliminates the need for a detoxification step and has potential application in the valorisation of lignocellulosic food waste.

Keywords: lignocellulose, enzymatic hydrolysis, pre-treatment, ultrasound

Procedia PDF Downloads 366
13644 Bulk Transport in Strongly Correlated Topological Insulator Samarium Hexaboride Using Hall Effect and Inverted Resistance Methods

Authors: Alexa Rakoski, Yun Suk Eo, Cagliyan Kurdak, Priscila F. S. Rosa, Zachary Fisk, Monica Ciomaga Hatnean, Geetha Balakrishnan, Boyoun Kang, Myungsuk Song, Byungki Cho

Abstract:

Samarium hexaboride (SmB6) is a strongly correlated mixed-valence material and Kondo insulator. In the resistance-temperature curve, SmB6 exhibits activated behavior from 4-40 K after the Kondo gap forms. Below 4 K, however, the resistivity is temperature independent or only weakly temperature dependent due to the appearance of a topologically protected surface state. Current research suggests that the surface of SmB6 is conductive while the bulk is truly insulating, unlike conventional 3D topological insulators (TIs) such as Bi₂Se₃, which are plagued by bulk conduction due to impurities. To better understand why the bulk of SmB6 is so different from that of conventional TIs, this study employed a new method, called inverted resistance, to explore the lowest temperatures, as well as standard Hall measurements for the rest of the temperature range. In the inverted resistance method, current flows from an inner contact to an outer ring, and voltage is measured outside of this outer ring. This geometry confines the surface current and allows measurement of the bulk resistivity even when the conductive surface dominates transport (below 4 K). The results confirm that the bulk of SmB6 is truly insulating down to 2 K. Hall measurements on a number of samples show consistent bulk behavior from 4-40 K but widely varying behavior among samples above 40 K. This is attributed to a combination of the growth process and the purity of the starting material; the relationship between the high- and low-temperature behaviors is still being explored.

Keywords: bulk transport, Hall effect, inverted resistance, Kondo insulator, samarium hexaboride, topological insulator

Procedia PDF Downloads 160
13643 Analysis of Factors Affecting the Number of Infant and Maternal Mortality in East Java with Geographically Weighted Bivariate Generalized Poisson Regression Method

Authors: Luh Eka Suryani, Purhadi

Abstract:

Poisson regression is a non-linear regression model with a count response variable that follows the Poisson distribution. A pair of count variables that show high correlation can be modeled by bivariate Poisson regression. The numbers of infant deaths and maternal deaths are count data that can be analyzed in this way. Poisson regression assumes equidispersion, where the mean and variance are equal. However, actual count data may have a variance greater or less than the mean (overdispersion or underdispersion). Violations of this assumption can be overcome by applying Generalized Poisson Regression. The characteristics of each regency can affect the number of cases that occur; this issue can be addressed by a spatial analysis called geographically weighted regression. This study analyzes the numbers of infant and maternal deaths based on conditions in East Java in 2016 using the Geographically Weighted Bivariate Generalized Poisson Regression (GWBGPR) method. Modeling is done with adaptive bisquare kernel weighting, which produces 3 regency groups based on the infant mortality rate and 5 regency groups based on the maternal mortality rate. Variables that significantly influence the numbers of infant and maternal deaths are the percentages of pregnant women who visit health workers at least 4 times during pregnancy, pregnant women who receive Fe3 tablets, obstetric complications handled, clean and healthy household behavior, and married women whose first marriage occurred under the age of 18.
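The adaptive bisquare kernel weighting mentioned above has a standard closed form in geographically weighted regression: observations closer than the bandwidth get smoothly decaying weights, and those beyond it get zero. A small sketch (the bandwidth-selection helper and the example distances are our illustrative additions):

```python
import numpy as np

def bisquare_weights(d, bandwidth):
    """GWR bisquare kernel: w = (1 - (d/b)^2)^2 for d < b, else 0."""
    return np.where(d < bandwidth, (1.0 - (d / bandwidth) ** 2) ** 2, 0.0)

def adaptive_bandwidth(dists, k):
    """Adaptive variant: the bandwidth at a location is the distance
    to its k-th nearest neighboring regency, so dense areas get
    narrower kernels than sparse ones."""
    return np.sort(dists)[k - 1]

d = np.array([0.0, 1.0, 2.0, 3.0])          # distances from one regency to the others
w = bisquare_weights(d, bandwidth=adaptive_bandwidth(d, k=3))
```

These weights then enter the local likelihood of each regency's regression, which is how regency-specific coefficient groups arise.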

Keywords: adaptive bisquare kernel, GWBGPR, infant mortality, maternal mortality, overdispersion

Procedia PDF Downloads 160
13642 Optimal Operation of Bakhtiari and Roudbar Dam Using Differential Evolution Algorithms

Authors: Ramin Mansouri

Abstract:

Because the discharge regime of rivers contrasts with water demands, one of the best ways to use water resources is to regulate the natural flow of rivers and to construct dams to supply water needs. In the optimal utilization of reservoirs, considering multiple important goals at the same time is of very high importance. To study this method, statistical data of the Bakhtiari and Roudbar dams over 46 years (1955 to 2001) are used. Initially, an appropriate objective function was specified, and the rule curve was developed using the differential evolution (DE) algorithm. The operation policy using rule curves was then compared to the standard operation policy. The proposed method distributed the deficit across the whole year, so the lowest damage was inflicted on the system. The standard deviation of the monthly shortfall of each year with the proposed algorithm deviated less than with the other two methods. The results show that intermediate values of the F and Cr coefficients provide the optimum situation and prevent the DE algorithm from being trapped in a local optimum. The optimal values are 0.6 and 0.5 for the F and Cr coefficients, respectively. After finding the best combination of F and Cr values, the algorithm was examined for independent population sizes. For this purpose, populations of 4, 25, 50, 100, 500 and 1000 members were studied over two generation counts (G = 50 and 100). The results indicate that generation number 200 is suitable for optimization. Runtime increases almost linearly with population size, which indicates the effect of population size on the algorithm's runtime; hence, specifying a suitable population size to obtain optimal results is very important. The standard operation policy had a better reversibility percentage but inflicts severe vulnerability on the system. The results obtained in years of low rainfall were very good compared to the other comparative methods.
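The DE scheme with the reported F = 0.6 and Cr = 0.5 can be sketched as a standard DE/rand/1/bin loop. The sphere test function below stands in for the reservoir objective, which is not reproduced in the abstract; population size, generation count and bounds are illustrative.

```python
import numpy as np

def differential_evolution(f, lo, hi, pop_size=25, F=0.6, Cr=0.5, gens=100, seed=0):
    """Minimal DE/rand/1/bin: mutate with scale F, binomial crossover with rate Cr,
    then greedy selection between trial and parent."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < Cr
            cross[rng.integers(dim)] = True        # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                   # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])

# Sphere function as a stand-in objective; its minimum is 0 at the origin.
best_x, best_f = differential_evolution(lambda x: float(np.sum(x ** 2)),
                                        lo=np.array([-5.0, -5.0]),
                                        hi=np.array([5.0, 5.0]))
```

Too large an F or too small a Cr slows convergence, while too greedy a setting risks premature convergence, which is why the study tuned both coefficients before fixing the population size.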

Keywords: reservoirs, differential evolution, dam, optimal operation

Procedia PDF Downloads 78
13641 Impact of Ethnoscience-Based Teaching Approach: Thinking Relevance, Effectiveness and Learner Retention in Physics Concepts of Optics

Authors: Rose C. Anamezie, Mishack T. Gumbo

Abstract:

Physics learners' poor retention, which culminates in poor achievement due to teaching approaches unrelated to learners in non-Western cultures, warranted this study. The aim of the study was to determine the effectiveness of the ethnoscience-based teaching (EBT) approach on learners' retention of the Physics concepts of Optics in the Awka Education Zone of Anambra State, Nigeria. Two research questions and three null hypotheses tested at a 0.05 level of significance guided the study. A quasi-experimental design was adopted, specifically a non-equivalent control group design. The population for the study was 4,825 SS2 Physics learners in the zone, from which 160 SS2 learners were sampled using purposive and random sampling. The experimental group was taught the rectilinear propagation of light (RPL) using the EBT approach, while the control group was taught the same topic using the lecture method. The instrument for data collection was the 50-item Physics Retention Test (PRT), which was validated by three experts and tested for reliability using the Kuder-Richardson formula 20, yielding a coefficient of 0.81. The data were analysed using mean, standard deviation and analysis of covariance (p < .05). The results showed higher retention with the EBT approach than with the lecture method, while gender had no significant effect on learners' retention in Physics. It was recommended that the EBT approach, which bridged the gender gap in Physics retention, be adopted in secondary school teaching and learning, since it could transform science teaching, enhance learners' construction of new science concepts based on their existing knowledge, and bridge the gap between Western science and learners' worldviews.
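The reliability coefficient quoted above comes from the Kuder-Richardson formula 20 for dichotomously scored items. A compact sketch of one common form (using population variances throughout; the toy score matrices in the test are made-up, not the study's data):

```python
import numpy as np

def kr20(scores):
    """Kuder-Richardson formula 20 for dichotomous (0/1) item scores.
    scores: (n_examinees, k_items) array.
    KR-20 = (k / (k - 1)) * (1 - sum(p*q) / var(total scores)),
    where p is the proportion correct per item and q = 1 - p.
    Population variances (ddof=0) are used consistently here."""
    k = scores.shape[1]
    p = scores.mean(axis=0)                 # proportion correct per item
    q = 1.0 - p
    total_var = scores.sum(axis=1).var()    # variance of examinees' total scores
    return (k / (k - 1.0)) * (1.0 - (p * q).sum() / total_var)
```

A value near 1 (such as the 0.81 reported) indicates high internal consistency of the test items.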

Keywords: ethnoscience-based teaching, optics, rectilinear propagation of light, retention

Procedia PDF Downloads 83
13640 Sea Protection: Using Marine Algae as a Natural Method of Absorbing Dye Textile Waste

Authors: Ariana Kilic, Serena Arapyan

Abstract:

Water pollution is a serious concern in seas around the world, and one major cause is textile dye waste mixing with seawater. This common occurrence alters aquatic life, endangering organisms and deteriorating the water's quality. There is a significant need for a natural approach to reduce the amount of textile dye waste in seawater and ensure marine organisms' safety. Using marine algae is a viable solution, since they can eliminate the excess waste by absorbing the dye. Moreover, marine algae are non-vascular organisms that absorb water and nutrients directly, so using them as absorbers is a natural process that adds no inorganic matter to the seawater that could cause further pollution. To test the efficiency of this approach, the optical absorbance of seawater samples was measured before and after the addition of marine algae using colorimetry. A colorimeter determines the concentration of a chemical compound in a solution by measuring the compound's absorbance at a specific wavelength. Seawater samples of equal volume with textile dye added served as the constant variables. The absorbance of the water, the dependent variable, was measured before and after the addition of marine algae, the independent variable. A lower absorbance indicated a lower dye concentration, showing that the marine algae had absorbed the dye. The same experiment was repeated with the same amount of water but different concentrations of dye, in order to determine the maximum concentration of dye the marine algae can completely absorb. The diminished dye concentration demonstrated that pollution caused by factories' dye wastes could be reduced with this natural method.
The use of marine algae is thus an optimal, organic strategy for absorbing dye wastes in seas and curbing water pollution.
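The colorimetric measurement described in the abstract rests on the Beer-Lambert law, A = ε·l·c: absorbance A is proportional to concentration c, so the fractional drop in absorbance gives the fraction of dye removed. A minimal sketch (the molar absorptivity, path length and readings below are illustrative, not the study's values):

```python
def concentration_from_absorbance(absorbance, epsilon, path_cm=1.0):
    """Beer-Lambert law A = epsilon * l * c, solved for concentration c.

    epsilon: molar absorptivity (L mol^-1 cm^-1), dye- and wavelength-specific.
    path_cm: cuvette path length in cm (commonly 1 cm).
    """
    return absorbance / (epsilon * path_cm)

# Hypothetical colorimeter readings before and after adding marine algae.
epsilon = 25000.0                 # illustrative value for a textile dye
a_before, a_after = 0.75, 0.15
c_before = concentration_from_absorbance(a_before, epsilon)
c_after = concentration_from_absorbance(a_after, epsilon)
removal = 100 * (1 - c_after / c_before)
print(f"dye removed: {removal:.0f}%")  # → dye removed: 80%
```

Note that ε cancels in the removal percentage, so the before/after comparison needs only the two absorbance readings, which matches the experimental design described above.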

Keywords: water pollution, dye textile waste, marine algae, absorbance, colorimetry

Procedia PDF Downloads 22