Search results for: advanced encryption standard
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7278

6288 Advanced Exergetic Analysis: Decomposition Method Applied to a Membrane-Based Hard Coal Oxyfuel Power Plant

Authors: Renzo Castillo, George Tsatsaronis

Abstract:

High-temperature ceramic membranes for air separation represent an important option for reducing the significant efficiency drops incurred by state-of-the-art cryogenic air separation in the high-tonnage oxygen production required by oxyfuel power stations. This study focuses on the thermodynamic analysis of two power plant designs: the state-of-the-art supercritical 600 °C hard coal plant (reference power plant Nordrhein-Westfalen) and the membrane-based oxyfuel concept implemented in this reference plant. In the latter case, the oxygen is separated through a mixed-conducting hollow-fiber perovskite membrane unit in the three-end operation mode, which has been simulated under vacuum conditions on the permeate side and high-pressure conditions on the feed side. The thermodynamic performance of each plant concept is assessed by conventional exergetic analysis, which determines the location, magnitude, and sources of efficiency losses, and by advanced exergetic analysis, in which the endogenous/exogenous and avoidable/unavoidable parts of exergy destruction are calculated at the component and full-process levels. These calculations identify thermodynamic interdependencies among components and reveal the real potential for efficiency improvements. The endogenous and exogenous exergy destruction portions are calculated by the decomposition method, a recently developed straightforward methodology suitable for complex power stations with a large number of process components. Lastly, an improvement priority ranking for relevant components, as well as suggested changes in process layout, is presented for both power stations.

Keywords: exergy, carbon capture and storage, ceramic membranes, perovskite, oxyfuel combustion

Procedia PDF Downloads 185
6287 Prediction Study of a Corroded Pressure Vessel Using Evaluation Measurements and Finite Element Analysis

Authors: Ganbat Danaa, Chuluundorj Puntsag

Abstract:

The steel structures of the Oyu-Tolgoi mining concentrator plant corrode during operation, raising doubts about the continued use of several important structures and posing a problem for the plant's regular operation. The bottom section of a pressure vessel that plays an important role in the reliable operation of the concentrate filter-drying unit was heavily corroded, so it had to be studied through engineering calculations, modeling, and simulation using modern, advanced engineering software and methods. The purpose of this research is to determine whether the corroded part of the pressure vessel can continue to be used safely and to predict the remaining service life of the vessel on the basis of engineering calculations. Non-destructive testing detected that corrosion had thinned the bottom of the vessel by 0.5 mm; finite element analysis in ANSYS Workbench was then used to determine the mechanical stress, strain, and safety factor in the wall and bottom of the vessel under its 2.2 MPa working pressure, and conclusions were drawn on its future serviceability. Following the recommendations, sand-blast cleaning and anti-corrosion paint can ensure the normal, continuous, and reliable operation of the concentrator plant while avoiding the cost of ordering new pressure vessels and the associated installation downtime. This work can serve as a benchmark for assessing the corrosion condition of steel parts of pressure vessels and of other metallic and non-metallic structures operating under severe conditions of corrosion and static and dynamic loads, making it possible to analyze such structures and to evaluate and control their integrity and reliable operation.

Keywords: corrosion, non-destructive testing, finite element analysis, safety factor, structural reliability

Procedia PDF Downloads 67
6286 Development of Lodging Business Management Standards of Bang Khonthi Community in Samut Songkram Province

Authors: Poramet Saeng-On

Abstract:

This research aims to develop lodging business management practices for the Bang Khonthi community in Samut Songkram province that suit the cultural context of the community. Eight lodging business owners were interviewed. It was found that family-run lodging businesses must be operated with passion; with a correct understanding of oneself, the local culture, nature, and the Thai way of life; and with thoroughness, professional development, environmental concern, and partnerships with various networks at the community level as well as with the public sector and business cohorts. Public relations should use both traditional and modern media, such as websites and social networks, to offer customers convenience, security, happiness, knowledge, love, and value when they travel to Bang Khonthi. This will also help operators achieve business sustainability, in line with the ten Home Stay Standards of Thailand. Suggestions for operators are as follows: improve public relations work and use technologies such as the internet; improve management standards; arrange souvenir and local product shops on the premises; set product prices accordingly; join hands to help one another; raise the quality of business operations to meet the standards; and take educational measures to reduce the impact of tourism on the community, such as efforts to reduce energy consumption.

Keywords: homestay, lodging business, management, standard

Procedia PDF Downloads 449
6285 Optimization of Hemp Fiber Reinforced Concrete for Various Environmental Conditions

Authors: Zoe Chang, Max Williams, Gautham Das

Abstract:

The purpose of this study is to evaluate the incorporation of hemp fibers (HF) in concrete. Hemp fiber reinforced concrete (HFRC) is becoming more popular as an alternative to regular mix designs. This study evaluated the compressive strength of HFRC with respect to mix procedure. Hemp fibers were obtained from the manufacturer and hand-processed to ensure uniformity in width and length. The fibers were added to the concrete in both wet and dry mixes to investigate and optimize the mix design process. Results indicated that the dry mix had a compressive strength of 1157 psi compared to 985 psi for the wet mix. The dry-mix compressive strength was within range of the standard mix compressive strength of 1533 psi. Statistical analysis revealed that the mix design process needs further optimization and uniformity concerning the addition of HF. Regression analysis gave the standard mix design a coefficient of 0.9 compared to 0.375 for the dry mix, indicating variation in the mixing process. During the dry mix, the plain hemp fibers intertwined, creating lumps and inconsistency. During the wet mixing process, however, combining water and hemp fibers before incorporation allowed the fibers to disperse uniformly within the mix; hence the regression analysis indicated a better coefficient of 0.55. This study concludes that HFRC is a viable alternative to regular mixes; however, more research on its characteristics needs to be conducted.
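The regression coefficients quoted above are coefficients of determination; a minimal sketch of how such a value is computed follows (the strength readings are invented for illustration, not data from this study):

```python
def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical compressive-strength readings (psi) vs. a fitted model:
obs = [1500, 1533, 1560, 1520]
fit = [1505, 1530, 1555, 1525]
print(round(r_squared(obs, fit), 3))  # ≈ 0.956
```

A coefficient near 1 indicates a consistent mixing process; the lower dry-mix coefficient reported above reflects scatter that the model cannot explain.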

Keywords: hemp fibers, hemp reinforced concrete, wet & dry, freeze thaw testing, compressive strength

Procedia PDF Downloads 200
6284 Comparison of the Yumul Faces Anxiety Scale to the Categorization Scale, the Numerical Verbal Rating Scale, and the State-Trait Anxiety Inventory for Preoperative Anxiety Evaluation

Authors: Ofelia Loani Elvir Lazo, Roya Yumul, David Chernobylsky, Omar Durra

Abstract:

Background: It is crucial to detect a patient's existing anxiety in the perioperative setting, where anxiety is commonly caused by fear of surgical and anesthetic complications. However, the current gold standard for assessing patient anxiety, the STAI, is problematic to use preoperatively, given the duration and concentration required to complete the 40-item questionnaire. Our primary aim in the study is to investigate the correlation of the Yumul Visual Facial Anxiety Scale (VFAS) and the Numerical Verbal Rating Scale (NVRS) with the State-Trait Anxiety Inventory (STAI) to determine the optimal anxiety scale for the perioperative setting. Methods: A clinical study of patients undergoing various surgeries was conducted utilizing each of the preoperative anxiety scales. Inclusion criteria comprised patients undergoing elective surgeries, while exclusion criteria comprised patients with anesthesia contraindications, inability to comprehend instructions, impaired judgement, substance abuse history, and those pregnant or lactating. 293 patients were analyzed in terms of demographics, anxiety scale survey results, and anesthesia data, with Spearman coefficients, chi-squared analysis, and Fisher's exact test utilized for comparative analysis. Results: Statistical analysis showed that VFAS had a higher correlation with STAI than NVRS (rs=0.66, p<0.0001 vs. rs=0.64, p<0.0001). The combined VFAS-Categorization Scores showed the highest correlation with the gold standard (rs=0.72, p<0.0001). Subgroup analysis showed similar results. STAI evaluation time (247.7 ± 54.81 sec) far exceeds that of VFAS (7.29 ± 1.61 sec), NVRS (7.23 ± 1.60 sec), and the Categorization scale (7.29 ± 1.99 sec). Patients preferred VFAS (54.4%), Categorization (11.6%), and NVRS (8.8%). Anesthesiologists preferred VFAS (63.9%), NVRS (22.1%), and Categorization Scales (14.0%).
Of note, the top five causes of preoperative anxiety were determined to be waiting (56.5%), pain (42.5%), family concerns (40.5%), lack of information about the surgery (40.1%), or the anesthesia (31.6%). Conclusions: Both VFAS and the Categorization test take significantly less time than STAI, which is critical in the preoperative setting. The combined VFAS-Categorization Score (VCS) demonstrates the highest correlation with the gold standard, STAI. Among both patients and anesthesiologists, VFAS was the most preferred scale. This forms the basis of the Yumul Faces Anxiety Scale, designed for quick quantification and assessment in the preoperative setting while maintaining a high correlation with the gold standard. Additional studies using the formulated Yumul Faces Anxiety Scale are merited.
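For reference, the Spearman rank correlation (rs) used throughout the comparison above can be computed with a short, tie-aware pure-Python sketch (the sample data are invented, not the study's measurements):

```python
def _ranks(xs):
    """1-based ranks with average ranks assigned to tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rs: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented scores: a perfectly monotone relationship gives rs = 1.0
print(round(spearman([12, 31, 45, 22, 50], [110, 145, 160, 130, 170]), 2))  # 1.0
```

Because rs works on ranks, it rewards monotone agreement between two anxiety scales even when their numeric scales differ, which is exactly what a scale-to-scale validation needs.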

Keywords: numerical verbal anxiety scale, preoperative anxiety, state-trait anxiety inventory, visual facial anxiety scale

Procedia PDF Downloads 117
6283 Investigation of the Corroded Steel Beam

Authors: Hesamaddin Khoshnoodi, Ahmad Rahbar Ranji

Abstract:

Corrosion in steel structures is one of the most important issues to consider in design and construction. Corrosion reduces the cross section and load capacity of an element and leads to costly damage to structures. In this paper, corrosion has been modeled for moment stresses, and the steel beam has been modeled using the ABAQUS advanced finite element software. The results of this study demonstrate that the displacement of the analyzed composite steel girder bridge may increase.

Keywords: ABAQUS, corrosion, deformation, steel beam

Procedia PDF Downloads 354
6282 Network Word Discovery Framework Based on Sentence Semantic Vector Similarity

Authors: Ganfeng Yu, Yuefeng Ma, Shanliang Yang

Abstract:

Word discovery is a key problem in text information retrieval. New word discovery methods tend to be closely tied to known words because they generally obtain new-word results by analyzing words. With the popularity of social networks, individual netizens and online self-media have generated a wide range of network texts for the convenience of online life, including network words that are far from standard Chinese expression. How to detect network words is one of the important goals in the field of text information retrieval today. In this paper, we integrate a word embedding model and clustering methods to propose a network word discovery framework based on sentence semantic similarity (S³-NWD) that detects network words effectively from a corpus. The framework constructs sentence semantic vectors through a distributed representation model, uses the similarity of the sentence semantic vectors to determine the semantic relationship between sentences, and finally realizes network word discovery through semantic replacement between sentences. Experiments verify that the framework not only discovers network words rapidly but also recovers the standard-word meaning of each discovered network word, reflecting the effectiveness of our work.
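The sentence-vector similarity at the heart of such a framework can be sketched as follows; this is an illustrative stand-in, not the authors' implementation, and the toy embedding table is invented. The idea: if substituting a standard word for a candidate network word leaves the sentence vector nearly unchanged, the two words likely share a meaning.

```python
import math

def sentence_vector(tokens, embeddings, dim=2):
    """Average the word vectors of the tokens found in the embedding table."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def cosine(u, v):
    """Cosine similarity of two vectors; 0.0 if either is a zero vector."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy embedding table (invented): "awsm" is a candidate network word,
# "great" its standard-word substitute.
emb = {"great": [1.0, 0.2], "awsm": [0.9, 0.3], "movie": [0.1, 1.0]}
s1 = sentence_vector(["awsm", "movie"], emb)
s2 = sentence_vector(["great", "movie"], emb)
print(cosine(s1, s2) > 0.99)  # True
```

A real system would use learned distributed representations and cluster candidates across the corpus; the substitution test above is the core comparison.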

Keywords: text information retrieval, natural language processing, new word discovery, information extraction

Procedia PDF Downloads 95
6281 Comparative Analysis of Various Waste Oils for Biodiesel Production

Authors: Olusegun Ayodeji Olagunju, Christine Tyreesa Pillay

Abstract:

Biodiesel from waste sources is regarded as an economical and highly viable fuel alternative to depleting fossil fuels. In this work, biodiesel was produced by the transesterification method from three different sources of vegetable-based waste cooking oil collected from cafeterias. The free fatty acid content (% FFA) of the feedstocks was determined by the titration method; the results for sources 1, 2, and 3 were 0.86%, 0.54%, and 0.20%, respectively. The three process variables considered were temperature (50 °C – 70 °C), reaction time (30 min – 90 min), and catalyst concentration (0.5% – 1.5%). The produced biodiesel was characterized using ASTM standard methods for biodiesel property testing to determine the fuel properties, including kinematic viscosity, specific gravity, flash point, pour point, cloud point, and acid number. The results indicate that the biodiesel yield from source 3 was greater than that of the other sources. All fuel properties of the produced biodiesel are within the standard biodiesel fuel specification ASTM D6751. The optimum yields of 98.76%, 96.40%, and 94.53% were obtained from sources 3, 2, and 1, respectively, at the optimum operating conditions of 65 °C, 90 minutes reaction time, and 0.5 wt% potassium hydroxide catalyst.
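The %FFA titration reported above reduces to a standard formula; a small sketch, assuming the FFA is expressed as oleic acid (molar mass ≈ 282.46 g/mol, the usual convention for vegetable oils) and using invented sample numbers:

```python
def percent_ffa(titrant_ml, koh_normality, sample_mass_g, mw_acid=282.46):
    """% free fatty acid (expressed as oleic acid) from a KOH titration:
    %FFA = V(mL) * N * MW / (10 * m(g))."""
    return titrant_ml * koh_normality * mw_acid / (10.0 * sample_mass_g)

# e.g. 2.0 mL of 0.1 N KOH neutralizing a 10 g oil sample:
print(round(percent_ffa(2.0, 0.1, 10.0), 3))  # ≈ 0.565 %
```

A feedstock with low %FFA (like source 3 above, at 0.20%) needs less catalyst lost to soap formation, which is consistent with it giving the highest biodiesel yield.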

Keywords: waste cooking oil, biodiesel, free fatty acid content, potassium hydroxide catalyst, optimization analysis

Procedia PDF Downloads 77
6280 Determining Components of Deflection of the Vertical in Owerri West Local Government, Imo State Nigeria Using Least Square Method

Authors: Chukwu Fidelis Ndubuisi, Madufor Michael Ozims, Asogwa Vivian Ndidiamaka, Egenamba Juliet Ngozi, Okonkwo Stephen C., Kamah Chukwudi David

Abstract:

Deflection of the vertical is a quantity used to reduce geodetic measurements related to geoidal networks to the ellipsoidal plane, and it is essential in geoid modeling. Computing the deflection-of-the-vertical components of points in a given area makes it possible to evaluate the standard errors along the north-south and east-west directions. A combined approach to determining the components provides improved results but is labor-intensive without an appropriate method. The least squares method exploits redundant observations to model a given set of problems that obey certain geometric conditions. This research work aims to compute the deflection-of-the-vertical components for Owerri West Local Government Area of Imo State, Nigeria, using the geometric method as the field technique. Static-mode Global Positioning System observations and precise leveling were combined: the geodetic coordinates of points established within the study area were determined by GPS observation, and the orthometric heights by precise leveling. Using least squares in a MATLAB program, the estimated deflection-of-the-vertical components for the common station were -0.0286 and -0.0001 arc seconds for the north-south and east-west directions, respectively. The associated standard errors of the processed network vectors were 5.5911e-005 and 1.4965e-004 arc seconds for the north-south and east-west components, respectively. Including the derived deflection-of-the-vertical components in the ellipsoidal model will therefore yield higher observational accuracy, since a purely ellipsoidal model introduces too large an observational error for high-quality work, and it is important to include the determined components for Owerri West Local Government.
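The least squares adjustment described above, in which redundant observations are reduced through the normal equations, can be sketched in miniature (pure Python with invented observation equations; not the authors' MATLAB program):

```python
def solve(M, v):
    """Solve the square system M x = v by Gaussian elimination with pivoting."""
    n = len(v)
    aug = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= f * aug[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][c] * x[c] for c in range(r + 1, n))) / aug[r][r]
    return x

def least_squares(A, b):
    """Least squares solution of the overdetermined system A x ≈ b
    via the normal equations (A^T A) x = A^T b."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    return solve(AtA, Atb)

# Three redundant observations of a two-parameter model y = a + b*t:
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
b = [1.0, 3.0, 5.0]
print([round(p, 6) for p in least_squares(A, b)])  # [1.0, 2.0]
```

In the geodetic case the design matrix relates GPS/leveling observations to the two unknown deflection components, and the residual covariance yields the quoted standard errors.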

Keywords: deflection of vertical, ellipsoidal height, least square, orthometric height

Procedia PDF Downloads 209
6279 Analyzing Students' Writing in an English Code-Mixing Context in Nepali: An Ecological and Systematic Functional Approach

Authors: Binod Duwadi

Abstract:

This article examines the language and literacy practices of English code-mixing in Nepalese classrooms. Situating the study within an ecological framework, a systematic functional linguistics (SFL) approach was used to analyze students' writing in two Nepalese schools. Data collection included interviews with teachers, classroom observations, instructional materials, and focal students' writing samples. Data analyses revealed vastly different language ecologies between the schools owing to sharp socioeconomic stratification, the structural organization of the schools, and the pervasiveness of standard language ideology, which stigmatizes English code-mixing (ECM) and privileges Standard English in schools. Functional analysis of students' writing showed that the nature of the writing tasks at the schools created different affordances for exploiting lexicogrammatical choices for meaning making, enhancing them in the case of one school but severely restricting them in the case of the other, perpetuating the academic disadvantage of code-mixing speakers. Recommendations for structural and attitudinal changes through teacher training, and for implementing approaches that engage students' bidialectal competence for learning, are made as important first steps towards addressing educational inequities in Nepalese schools.

Keywords: code-mixing, ecological perspective, systematic functional approach, language and identity

Procedia PDF Downloads 124
6278 Assessment of Sperm Aneuploidy Using Advanced Sperm Fish Technique in Infertile Patients

Authors: Archana S., Usha Rani G., Anand Balakrishnan, Sanjana R., Solomon F., Vijayalakshmi J.

Abstract:

Background: There is evidence that male factors contribute to the infertility of up to 50% of couples who are evaluated and treated for infertility using advanced assisted reproductive technologies. Genetic abnormalities, including sperm chromosome aneuploidy and structural aberrations, are among the major causes of male infertility. Recent advances in technology expedite the evaluation of sperm aneuploidy. The purpose of the study was to determine the prevalence of sperm aneuploidy in infertile males and the degree of association between DNA fragmentation and sperm aneuploidy. Methods: In this study, 75 infertile men were included and divided into four abnormal groups (oligospermia, teratospermia, asthenospermia, and oligoasthenoteratospermia (OAT)). Normozoospermic men with children served as the control group. The fluorescence in situ hybridization (FISH) method was used to test for sperm aneuploidy, and the sperm chromatin dispersion assay (SCDA) was used to measure sperm DNA fragmentation. Spearman's correlation coefficient was used to evaluate the relationship between sperm aneuploidy and sperm DNA fragmentation along with age. P < 0.05 was regarded as significant. Results: The 75 participants' ages varied from 28 to 48 years (35.5±5.1). The percentages of spermatozoa bearing X and Y were 48.92% and 51.18%, CEP X X 1 – nucish (CEP XX 1) [100] and CEP Y X 1 – nucish (CEP Y X 1) [100], and the difference was statistically significant (p-value < 0.05). Compared to the rate of DNA fragmentation, infertile males were found to have a greater frequency of sperm aneuploidy. The asthenospermia and OAT groups were significantly correlated in sex chromosomal aneuploidy (p<0.05). Conclusion: Sperm FISH and SCDA assay results showed an increased sperm aneuploidy frequency and DNA fragmentation index in infertile men compared with fertile men.
There is a significant relationship between sperm aneuploidy and DNA fragmentation in OAT patients. When evaluating male-factor and idiopathic infertility, the sperm FISH screening method can be used as a valuable diagnostic tool.

Keywords: male infertility, DFI (DNA fragmentation index), SCD (sperm chromatin dispersion), ART (assisted reproductive technology), trisomy, aneuploidy, FISH (fluorescence in situ hybridization), OAT (oligoasthenoteratospermia)

Procedia PDF Downloads 54
6277 Modified CUSUM Algorithm for Gradual Change Detection in a Time Series Data

Authors: Victoria Siriaki Jorry, I. S. Mbalawata, Hayong Shin

Abstract:

The main objective in a change detection problem is to develop algorithms for efficient detection of gradual and/or abrupt changes in the parameter distribution of a process or time series data. In this paper, we present a modified cumulative sum (MCUSUM) algorithm to detect the start and end of a time-varying linear drift in the mean value of a time series, based on a likelihood ratio test procedure. The design, implementation, and performance of the proposed algorithm for linear drift detection are evaluated and compared to the existing CUSUM algorithm using different performance measures. An approach to accurately approximate the threshold of the MCUSUM is also provided. The performance of the MCUSUM for gradual change-point detection is compared to that of the standard cumulative sum (CUSUM) control chart, designed for abrupt shift detection, using Monte Carlo simulations. In terms of the expected time to detection, the MCUSUM procedure is found to perform better than a standard CUSUM chart for detecting a gradual change in mean. The algorithm is then applied to randomly generated time series data with a gradual linear trend in mean to demonstrate its usefulness.
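For contrast with the modified algorithm, the standard two-sided CUSUM for abrupt mean shifts can be sketched as follows (the reference value k and threshold h are in the units of the data; the parameters and series below are illustrative, not taken from the paper):

```python
def cusum(xs, target, k, h):
    """Two-sided CUSUM: returns the index of the first alarm, or None.
    k is the reference (allowance) value, h the decision threshold."""
    sp = sn = 0.0  # upper and lower cumulative sums
    for i, x in enumerate(xs):
        sp = max(0.0, sp + (x - target - k))  # accumulates upward shifts
        sn = max(0.0, sn + (target - x - k))  # accumulates downward shifts
        if sp > h or sn > h:
            return i
    return None

# A mean shift from 0 to 3 at index 20 is flagged one sample later:
data = [0.0] * 20 + [3.0] * 5
print(cusum(data, target=0.0, k=0.5, h=4.0))  # 21
```

The MCUSUM of the paper extends this idea to a time-varying linear drift rather than a step change, which is why its expected detection time for gradual changes is shorter.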

Keywords: average run length, CUSUM control chart, gradual change detection, likelihood ratio test

Procedia PDF Downloads 299
6276 Conceptualizing the Cyber Insecurity Risk in the Ethics of Automated Warfare

Authors: Otto Kakhidze, Hoda Alkhzaimi, Adam Ramey, Nasir Memon

Abstract:

This paper provides an alternative, cybersecurity-based conceptual framework for the ethics of automated warfare. The large body of work produced on fully or partially autonomous warfare systems tends to overlook malicious security factors, such as the possibility of technical attacks on these systems, when it comes to moral and legal decision-making. The argument provides a risk-oriented justification for why malicious technical risks cannot be dismissed in legal, ethical, and policy considerations when warfare models are implemented and deployed. The assumptions of the paper are supported by a broader model that incorporates the perspective of technological vulnerabilities through the lenses of game theory, just war theory, and standard and non-standard defense ethics. The paper argues that a conventional risk-benefit analysis that does not consider ethical factors is insufficient for making legal and policy decisions on automated warfare. This approach provides a substructure for security and defense experts as well as legal scholars, ethicists, and decision theorists to work towards common justificatory grounds that accommodate the technical security concerns overlooked in current legal and policy models.

Keywords: automated warfare, ethics of automation, inherent hijacking, security vulnerabilities, risk, uncertainty

Procedia PDF Downloads 357
6275 Determination of the Cooling Rate Dependency of High Entropy Alloys Using a High-Temperature Drop-on-Demand Droplet Generator

Authors: Saeedeh Imani Moqadam, Ilya Bobrov, Jérémy Epp, Nils Ellendt, Lutz Mädler

Abstract:

High entropy alloys (HEAs), which have adjustable properties and enhanced stability compared with intermetallic compounds, are solid solution alloys that contain more than five principal elements in almost equal atomic percentages. The concept of producing such alloys paves the way for developing advanced materials with unique properties. However, the synthesis of such alloys may require advanced processes with high cooling rates, depending on which alloying elements are used. In this study, HEA microspheres of different diameters were generated via a drop-on-demand droplet generator and subsequently solidified during free fall in an argon atmosphere. Such droplet generators can generate individual droplets with high reproducibility regarding droplet diameter, trajectory, and cooling while avoiding any interparticle momentum or thermal coupling. Metallography as well as X-ray diffraction investigations for each diameter of the generated metallic droplets were then carried out to obtain information about the microstructural state. To calculate the cooling rate of the droplets, a droplet cooling model was developed and validated using model alloys such as CuSn6 and AlCu4.5, for which the correlation of secondary dendrite arm spacing (SDAS) and cooling rate is well known. Droplets were generated from these alloys, and their SDAS was determined using quantitative metallography. The cooling rate was then determined from the SDAS and used to validate the cooling rates obtained from the droplet cooling model. The application of the model to the HEAs then yields the cooling rate dependency and hence the identification of process windows for the synthesis of these alloys. These process windows were then compared with the cooling rates obtained in processes such as powder production, spray forming, selective laser melting, and casting to predict whether synthesis is possible with these processes.
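The SDAS-to-cooling-rate correlation used for validation is conventionally a power law, SDAS = B·(cooling rate)^(-n); a sketch of inverting it follows, with placeholder constants B and n rather than values from the study:

```python
def cooling_rate_from_sdas(sdas_um, B=50.0, n=1.0 / 3.0):
    """Invert the power law SDAS = B * (cooling_rate)**(-n) to estimate
    the cooling rate (K/s) from a measured SDAS (µm).
    B and n are alloy-specific placeholder constants here."""
    return (sdas_um / B) ** (-1.0 / n)

# Round trip: a 100 K/s cooling rate implies SDAS = B * 100**(-n),
# and inverting recovers the rate.
sdas = 50.0 * 100.0 ** (-1.0 / 3.0)
print(round(cooling_rate_from_sdas(sdas), 6))  # ≈ 100.0 K/s
```

In practice B and n are fitted from the quantitative metallography of the model alloys, after which the relation converts measured SDAS in each droplet size into a validated cooling rate.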

Keywords: cooling rate, drop-on-demand, high entropy alloys, microstructure, single droplet generation, X-ray Diffractometry

Procedia PDF Downloads 211
6274 Methods Used to Achieve Airtightness of 0.07 Ach@50Pa for an Industrial Building

Authors: G. Wimmers

Abstract:

The University of Northern British Columbia needed a new laboratory building for the Master of Engineering in Integrated Wood Design program and its new civil engineering program. Since the university is committed to reducing its environmental footprint, and because the Master of Engineering program is actively involved in research on energy-efficient buildings, the decision was made to require the energy efficiency of the Passive House Standard in the request for proposals. The building is located in Prince George in northern British Columbia, a city at the northern edge of climate zone 6 with average lows between -8 °C and -10.5 °C in the winter months. The footprint of the building is 30 m × 30 m, with a height of about 10 m. The building consists of a large open space for the shop and laboratory, with a small portion of the floor plan split over two floors, allowing for a mezzanine level with a few offices as well as mechanical and storage rooms. The total net floor area is 1042 m² and the building's gross volume is 9686 m³. One key requirement of the Passive House Standard is an airtight envelope with an airtightness of < 0.6 ach@50Pa. In the past, we have seen that this requirement can be challenging to reach for industrial buildings. When testing for airtightness, it is important to test in both directions, pressurization and depressurization, since in reality air flows through all leakages of the building simultaneously in both directions. A specific detail or situation, such as overlapping but unsealed membranes, might be airtight in one direction due to a valve effect but open up when tested in the opposite direction. In this specific project, the advantage was the overall very compact envelope and the good volume-to-envelope-area ratio.
The building had to be very airtight, and the details for the window and door installations, all transitions from walls to roof and floor, the connections of the prefabricated wall panels, and all penetrations had to be carefully developed to allow for maximum airtightness. The biggest challenges were the specific components of this industrial building: the large bay door for semi-trucks and the dust extraction system for the wood processing machinery. The testing was carried out in accordance with EN 13829 (method A) as specified in the International Passive House Standard, and the volume calculation also followed the Passive House guideline, resulting in a net volume of 7383 m³, excluding all wall, floor, and suspended-ceiling volumes. This paper explores the details and strategies used to achieve an airtightness of 0.07 ach@50Pa, to the best of our knowledge the lowest value achieved in North America so far following the test protocol of the International Passive House Standard, and discusses the crucial steps throughout the project phases and the most challenging details.
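The headline airtightness figure can be sanity-checked from the reported net volume: air changes per hour at 50 Pa is simply the measured leakage flow divided by the net air volume. (The flow value below is back-calculated for illustration, not a figure from the actual test.)

```python
def ach50(q50_m3_per_h, net_volume_m3):
    """Air changes per hour at 50 Pa: leakage flow / net air volume."""
    return q50_m3_per_h / net_volume_m3

# With the building's net volume of 7383 m³, an airtightness of 0.07 ach@50Pa
# corresponds to a leakage flow of roughly 0.07 * 7383 ≈ 517 m³/h:
print(round(ach50(517.0, 7383.0), 3))  # ≈ 0.07
```

Note that the Passive House protocol uses the net air volume (walls, floors, and suspended ceilings excluded), so the same leakage flow divided by the 9686 m³ gross volume would give a smaller, non-comparable number.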

Keywords: air changes, airtightness, envelope design, industrial building, passive house

Procedia PDF Downloads 148
6273 Analytical Formulae for the Approach Velocity Head Coefficient

Authors: Abdulrahman Abdulrahman

Abstract:

Critical depth meters, such as a broad-crested weir, a Venturi flume, and a combined control flume, are standard devices for measuring flow in open channels. The discharge relation for these devices cannot be solved directly; it requires an iterative process to account for the approach velocity head. In this paper, an analytical solution was developed to calculate the discharge in a combined critical-depth meter, namely a hump combined with a lateral contraction in a rectangular channel with subcritical approach flow, including energy losses. Analytical formulae were also derived for the approach velocity head coefficient for different types of critical depth meters. The solution was derived by solving a standard cubic equation, considering energy loss, on the basis of a trigonometric identity. The advantage of this technique is that it avoids the iterative process otherwise adopted in measuring flow with these devices. Numerical examples are chosen to demonstrate the proposed solution.
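The trigonometric solution of the cubic alluded to above can be sketched for the depressed form t³ + pt + q = 0 in the three-real-root case (a generic illustration, not the paper's specific discharge equation):

```python
import math

def depressed_cubic_roots(p, q):
    """Real roots of t**3 + p*t + q = 0 via the trigonometric identity,
    valid in the three-real-root case (p < 0 and 4p^3 + 27q^2 <= 0)."""
    m = 2.0 * math.sqrt(-p / 3.0)
    theta = math.acos(3.0 * q / (p * m)) / 3.0
    return sorted(m * math.cos(theta - 2.0 * math.pi * k / 3.0) for k in range(3))

# t^3 - 7t + 6 = 0 factors as (t - 1)(t - 2)(t + 3):
print([round(t, 6) for t in depressed_cubic_roots(-7.0, 6.0)])  # [-3.0, 1.0, 2.0]
```

In a critical-flow problem, the physically meaningful root (e.g. the subcritical depth) is then selected from the three; the closed form is what lets the discharge be computed without iteration.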

Keywords: broad crested weir, combined control meter, control structures, critical flow, discharge measurement, flow control, hydraulic engineering, hydraulic structures, open channel flow

Procedia PDF Downloads 274
6272 Pre-Experimental Research to Investigate the Retention of Basic and Advanced Life Support Measures Knowledge and Skills by Qualified Nurses Following a Course in Professional Development in a Tertiary Teaching Hospital

Authors: Ram Sharan Mehta, Gayanandra Malla, Anita Gurung, Anu Aryal, Divya Labh, Hricha Neupane

Abstract:

Objectives: A lack of resuscitation skills among nurses and doctors in basic life support (BLS) and advanced life support (ALS) has been identified as a contributing factor to poor outcomes for cardiac arrest victims. The objective of this study was to examine the retention of life support (BLS/ALS) knowledge and skills by nurses following an educational intervention programme. Materials and Methods: A pre-experimental research design was used to conduct the study among the nurses working in the medical units of B.P. Koirala Institute of Health Sciences, where CPR is performed very commonly. Using a convenient sampling technique, a total of 20 nurses who agreed to participate and gave consent were included in the study. Theoretical sessions, demonstrations, and re-demonstrations involving trained doctors and nurses were arranged during a three-hour educational session. A post-test was carried out two weeks after the educational intervention. The 2010 BLS and ALS guidelines were used as the guide for the study content. The collected data were analyzed using SPSS-15 software. Results: There was a significant increase in knowledge after the educational intervention in the components of life support measures (BLS/ALS): the ratio of chest compressions to ventilations in BLS (p=0.001), the correct sequence of CPR (p<0.001), the rate of chest compressions in ALS (p=0.001), the depth of chest compressions in adult CPR (p<0.001), and the position of chest compressions in CPR (p=0.016). The nurses appreciated the programme and requested that it be continued for all nurses in future. Conclusions: Despite recent (2010) BLS/ALS courses, a significant number of nurses remain without any such training. Action is needed to ensure all nurses receive BLS training and practice this skill regularly in order to retain their knowledge.

Keywords: pre-experimental, basic and advanced life support, nurses, sampling technique

Procedia PDF Downloads 254
6271 Spatial Pattern of Farm Mechanization: A Micro Level Study of Western Trans-Ghaghara Plain, India

Authors: Zafar Tabrez, Nizamuddin Khan

Abstract:

Agriculture in India in the pre-green revolution period was mostly controlled by terrain, climate and edaphic factors. But after the introduction of innovative factors and technological inputs, the green revolution occurred and the agricultural scene witnessed great change. In the development of India’s agriculture, the speedy and extensive introduction of technological change is one of the crucial factors. The technological change consists of the adoption of farming techniques such as the use of fertilisers, pesticides and fungicides, improved varieties of seeds, modern agricultural implements, improved irrigation facilities, and contour bunding for the conservation of moisture and soil, which are developed through research and calculated to bring about diversification, increased production and greater economic returns to the farmers. The green revolution in India took place during the late 1960s, equipped with technological inputs like high-yielding variety seeds, assured irrigation, and modern machines and implements. Initially the revolution started in Punjab, Haryana and western Uttar Pradesh. With the efforts of the government, agricultural planners and policy makers, the modern technocratic agricultural development scheme was later also implemented in backward and marginal regions of the country. The agriculture sector occupies the centre stage of India’s social security and overall economic welfare. The country has attained self-sufficiency in food grain production and also holds a sufficient buffer stock. Our first Prime Minister, Jawaharlal Nehru, said ‘everything else can wait but not agriculture’. There is still continuous change in technological inputs and cropping patterns. Keeping these points in view, the authors attempt to investigate extensively the mechanization of agriculture and the change it has brought, selecting the western Trans-Ghaghara plain as the case study area and the block as the unit of study.
It includes the districts of Gonda, Balrampur, Bahraich and Shravasti, which incorporate 44 blocks. The study is based on secondary sources of data by blocks for the years 1997 and 2007. It may be observed that there is a wide range of variation and change in farm mechanization, i.e., agricultural machinery such as wooden and iron ploughs, advanced harrows and cultivators, advanced threshing machines, sprayers, advanced sowing implements, and tractors. It may be further noted that due to the continuous decline in the size of land holdings and the outflow of people to the same nature of work or to employment in non-agricultural sectors, the magnitude and direction of agricultural systems are affected in the study area, which is one of the marginalized regions of Uttar Pradesh, India.

Keywords: agriculture, technological inputs, farm mechanization, food production, cropping pattern

Procedia PDF Downloads 312
6270 A LED Warning Vest as Safety Smart Textile and Active Cooperation in a Working Group for Building a Normative Standard

Authors: Werner Grommes

Abstract:

The Institute of Occupational Safety and Health participates in a working group building a normative standard for illuminated warning vests and has carried out extensive experiments and measurements as groundwork for this cooperation. Intelligent car headlamps are able to suppress conventional warning vests with retro-reflective stripes, treating them as a disturbing light source. Illuminated warning vests are therefore required for occupational safety. However, they must not pose any danger to the wearer or other persons. Here, the risks of the batteries (lithium types), the maximum brightness (glare) and possible interference radiation from the electronics affecting implant wearers must be taken into account. All-around visibility, as well as the required range, plays an important role here. For the study, many luminance measurements of commercially available LED and electroluminescent warning vests, as well as measurements of their electromagnetic interference fields and aspects of electrical safety, were carried out. The results of this study showed that the LED lighting was in all cases far too bright and caused strong glare. The integrated controls with pulse modulation and switching regulators cause electromagnetic interference fields. Rechargeable lithium batteries can explode depending on the temperature range. Electroluminescence brings even more hazards. A test method was developed for the evaluation of visibility at distances of 50, 100, and 150 m, including interviews with test persons. A measuring method was developed for the detection of glare effects at close range, with the assignment of the maximum permissible luminance. The electromagnetic interference fields were tested in the time and frequency domains. A risk and hazard analysis was prepared for the use of lithium batteries. The range of values for luminance and the risk analysis for lithium batteries were discussed in the standards working group and will be integrated into the standard.
This paper gives a brief overview of the topic of illuminated warning vests, taking into account the risks and hazards for the vest wearer and others.
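The distance-based visibility tests mentioned above ultimately rest on the inverse-square law: the illuminance an observer receives from a small LED source falls with the square of the distance. The sketch below illustrates this relation only; the luminous intensity and the detection threshold are illustrative assumptions, not figures from the working group.

```python
# Hedged sketch: inverse-square falloff of illuminance from a small
# (point-like) light source, E = I / d^2.
# The 0.5 cd intensity and the 1e-5 lx threshold are assumed values.

def illuminance_lux(intensity_cd, distance_m):
    """Point-source approximation: illuminance E = I / d^2 [lx]."""
    return intensity_cd / distance_m ** 2

threshold_lux = 1e-5          # assumed detection threshold at the eye
for d in (50, 100, 150):      # the test distances used in the study
    e = illuminance_lux(0.5, d)
    print(f"{d:3d} m: E = {e:.2e} lx, above threshold: {e > threshold_lux}")
```

Such a calculation can give a first estimate of the required intensity per distance before the subjective evaluation with test persons.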

Keywords: illuminated warning vest, optical tests and measurements, risks, hazards, optical glare effects, LED, E-light, electric luminescent

Procedia PDF Downloads 113
6269 Onco@Home: Comparing the Costs, Revenues, and Patient Experience of Cancer Treatment at Home with the Standard of Care

Authors: Sarah Misplon, Wim Marneffe, Johan Helling, Jana Missiaen, Inge Decock, Dries Myny, Steve Lervant, Koen Vaneygen

Abstract:

The aim of this study was twofold. First, we investigated whether the current funding from the national health insurance (NHI) for home hospitalization (HH) of oncological patients is sufficient in Belgium. Second, we compared patients’ experiences of and preferences for HH versus the standard of care (SOC). Two HH models were examined in three Belgian hospitals and three home nursing organizations. In the first HH model, the blood draw and monitoring prior to intravenous therapy were performed by a trained home nurse at the patient’s home the day before the visit to the day hospital. In the second HH model, the administration of two subcutaneous treatments was partly provided at home instead of in the hospital. We therefore conducted (1) a bottom-up micro-costing study to compare the costs and revenues for the providers (hospitals and home care organizations), and (2) a cross-sectional survey to compare the experiences and preferences of patients in the SOC and HH groups. Our results show that HH patients prefer HH and that none of them wanted to return to SOC, although satisfaction was not significantly different between the two groups. At the same time, we find that the costs associated with HH are higher overall. Comparing revenues with costs, we conclude that the current funding from the NHI for HH of oncological patients is insufficient.

Keywords: cost analysis, health insurance, preference, home hospitalization

Procedia PDF Downloads 122
6268 Investigation of the Acoustic Properties of Recycled Felt Panels and Their Application in Classrooms and Multi-Purpose Halls

Authors: Ivanova B. Natalia, Djambova Т. Svetlana, Hristev S. Ivailo

Abstract:

The acoustic properties of recycled felt panels have been investigated using various methods. Experimentally, the sound insulation of the panels was evaluated for frequencies in the range of 600 Hz to 4000 Hz, utilizing a small-sized acoustic chamber. Additionally, the sound absorption coefficient for the frequency range of 63 Hz to 4000 Hz was measured according to the EN ISO 354 standard in a laboratory reverberation room. This research was deemed necessary after reverberation time measurements of a university classroom, conducted following the EN ISO 3382-2 standard, indicated values of 2.86 s at 500 Hz, 3.23 s at 1000 Hz, and 2.53 s at 2000 Hz, significantly exceeding the requirement set by the national regulatory framework (0.6 s) for such premises. For this reason, recycled felt panels were investigated in the laboratory, showing very good acoustic properties at high frequencies. To enhance performance at low frequencies, the influence of the panel mounting distance was examined. Furthermore, the sound insulation of the panels was studied to expand the possibilities of their application, both for the acoustic treatment of educational and multifunctional halls and for sound insulation purposes (e.g., a suspended ceiling with an air gap passing from room to room). In conclusion, a theoretical acoustic design of the classroom was carried out, with suggestions for improvements to achieve the necessary acoustic and aesthetic parameters for such rooms.
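The kind of theoretical acoustic design mentioned above is commonly based on Sabine's reverberation-time formula, RT60 = 0.161 V / A, where V is the room volume and A the total absorption. The sketch below applies it to a hypothetical classroom; all dimensions and absorption coefficients are illustrative assumptions, not the measured values from this study.

```python
# Hedged sketch: Sabine's reverberation-time estimate for a room
# before and after adding absorptive panels. All numbers are assumed.

def sabine_rt60(volume_m3, surfaces):
    """surfaces: list of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

volume = 10 * 8 * 3.5                           # hypothetical classroom [m^3]
walls = 2 * (10 + 8) * 3.5                      # wall area [m^2]
bare = [(80, 0.05), (80, 0.05), (walls, 0.05)]  # floor, ceiling, walls at 1 kHz
treated = bare + [(60, 0.80 - 0.05)]            # 60 m^2 of felt panels with an
                                                # assumed alpha of 0.80, covering
                                                # previously bare ceiling area
print(f"bare room:   RT60 ~ {sabine_rt60(volume, bare):.2f} s")
print(f"with panels: RT60 ~ {sabine_rt60(volume, treated):.2f} s")
```

Comparing the two values shows how much panel area an assumed absorption coefficient requires to approach a regulatory target such as 0.6 s.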

Keywords: acoustic panels, recycled felt, sound absorption, sound insulation, classroom acoustics

Procedia PDF Downloads 90
6267 A General Framework to Successfully Operate the Digital Transformation Process in the Post-COVID Era

Authors: Driss Kettani

Abstract:

In this paper, we shed light on “Digital Divide 2.0,” which we see as COVID-19’s version of the digital divide. We believe that fighting Digital Divide 2.0 requires a country to be seriously advanced in the global digital transformation, which is, naturally, a complex, delicate, costly and long-term process. We build an argument supporting our assumption and, from there, present the foundations of a computational framework to guide and streamline digital transformation at all levels.

Keywords: digital divide 2.0, digital transformation, ICTs for development, computational outcomes assessment

Procedia PDF Downloads 177
6266 Grid and Market Integration of Large Scale Wind Farms using Advanced Predictive Data Mining Techniques

Authors: Umit Cali

Abstract:

The integration of intermittent energy sources like wind farms into the electricity grid has become an important challenge for the utilization and control of electric power systems because of the fluctuating behaviour of wind power generation. Wind power predictions improve the economic and technical integration of large amounts of wind energy into the existing electricity grid. Trading, balancing, grid operation, controllability and safety issues increase the importance of predicting the power output of wind farms. Therefore, wind power forecasting systems have to be integrated into the monitoring and control systems of the transmission system operator (TSO) and of wind farm operators/traders. Wind forecasts are relatively precise only for a horizon of a few hours and are therefore most relevant to the spot and intraday markets. In this work, predictive data mining techniques are applied to identify a statistical or neural network model, or set of models, that can be used to predict the power output of large onshore and offshore wind farms. These advanced data analytic methods help us to amalgamate the information in very large meteorological, oceanographic and SCADA data sets into useful information and manageable systems. Accurate wind power forecasts are beneficial for wind plant operators, utility operators, and utility customers: an accurate forecast allows grid operators to schedule economically efficient generation to meet the demand of electricity customers. This study is also dedicated to an in-depth consideration of issues such as the comparison of day-ahead and short-term wind power forecasting results, determination of the accuracy of the wind power predictions, and evaluation of the energy-economic and technical benefits of wind power forecasting.
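Any statistical forecasting model of the kind described above is usually judged against simple baselines. The sketch below, on synthetic data (not real SCADA series), compares the persistence forecast ("tomorrow equals today") with a least-squares AR(1) model, two of the standard reference predictors in wind power forecasting.

```python
import random

# Hedged sketch: persistence vs. least-squares AR(1) baselines for a
# normalized wind power series. The series below is synthetic
# mean-reverting noise, clipped to [0, 1] (rated power), purely for
# illustration.

random.seed(0)
power = [0.0]
for _ in range(499):
    nxt = power[-1] + 0.1 * (0.5 - power[-1]) + random.gauss(0, 0.05)
    power.append(min(1.0, max(0.0, nxt)))

train, test = power[:400], power[400:]

# Fit AR(1) around the training mean: (x[t] - m) ~ a * (x[t-1] - m)
m = sum(train) / len(train)
num = sum((train[t-1] - m) * (train[t] - m) for t in range(1, len(train)))
den = sum((train[t-1] - m) ** 2 for t in range(1, len(train)))
a = num / den

def mae(pred, actual):
    """Mean absolute error between forecasts and observations."""
    return sum(abs(p - x) for p, x in zip(pred, actual)) / len(actual)

persist = test[:-1]                        # forecast = last observed value
ar1 = [m + a * (x - m) for x in test[:-1]] # forecast = one AR(1) step
print(f"persistence MAE: {mae(persist, test[1:]):.4f}")
print(f"AR(1) MAE:       {mae(ar1, test[1:]):.4f}")
```

A neural network or other data mining model would replace the AR(1) step here; its value is measured by how much it beats such baselines on held-out data.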

Keywords: renewable energy sources, wind power, forecasting, data mining, big data, artificial intelligence, energy economics, power trading, power grids

Procedia PDF Downloads 518
6265 The Development of Online-Class Scheduling Management System Conducted by the Case Study of Department of Social Science: Faculty of Humanities and Social Sciences Suan Sunandha Rajabhat University

Authors: Wipada Chaiwchan, Patcharee Klinhom

Abstract:

This research aims to develop an online class-scheduling management system and to improve it as a solution to a complex problem that must take various conditions and factors into consideration. In addition to the number of courses, the number of students and the timetable of study, the physical characteristics of each classroom and the regulations used in class scheduling must also be taken into account. The system is developed to assist management in class scheduling for convenience and efficiency. It allows several instructors to schedule classes simultaneously. Both lecturers and students can check and publish a timetable and other documents associated with the system online immediately. It is developed as a web-based application; PHP is used as the development tool and MySQL as the database management system. The tool used for efficiency testing of the system is a questionnaire, and the system was evaluated using black-box testing. The sample was composed of two groups: 5 experts and 100 general users. The average and standard deviation of results from the experts were 3.50 and 0.67; for the general users, they were 3.54 and 0.54. In summary, the results indicated that user satisfaction was at a good level. Therefore, this system could be implemented in an actual workplace and satisfy users’ requirements effectively.
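At the core of any such scheduling system is a feasibility check: a new class assignment is rejected if it overlaps in time with an existing one that shares the same room or instructor. The sketch below shows this check in Python with illustrative names; the actual system described here was implemented in PHP with MySQL.

```python
# Hedged sketch: the room/instructor conflict check behind class
# scheduling. Course codes, rooms and instructors are invented examples.

from collections import namedtuple

Slot = namedtuple("Slot", "course room instructor day start end")

def conflicts(a, b):
    """Two slots conflict if they overlap in time on the same day and
    share either the room or the instructor."""
    if a.day != b.day or a.end <= b.start or b.end <= a.start:
        return False
    return a.room == b.room or a.instructor == b.instructor

def try_add(timetable, slot):
    """Append the slot only if it conflicts with no scheduled slot."""
    if any(conflicts(slot, s) for s in timetable):
        return False
    timetable.append(slot)
    return True

tt = []
print(try_add(tt, Slot("SOC101", "R1", "Dr. A", "Mon", 9, 11)))   # True
print(try_add(tt, Slot("SOC102", "R1", "Dr. B", "Mon", 10, 12)))  # False: room clash
print(try_add(tt, Slot("SOC102", "R2", "Dr. B", "Mon", 10, 12)))  # True
```

Additional constraints from the abstract (classroom capacity, institutional regulations) would be extra clauses in `conflicts` or a separate validation pass.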

Keywords: timetable, schedule, management system, online

Procedia PDF Downloads 237
6264 The Security Trade-Offs in Resource Constrained Nodes for IoT Application

Authors: Sultan Alharby, Nick Harris, Alex Weddell, Jeff Reeve

Abstract:

The concept of the Internet of Things (IoT) has received much attention over the last five years. It is predicted that the IoT will influence every aspect of our lifestyles in the near future. Wireless sensor networks (WSNs) are one of the key enablers of the operation of the IoT, allowing data to be collected from the surrounding environment. However, due to limited resources, the nature of deployment and unattended operation, a WSN is vulnerable to various types of attack. Security is paramount for reliable and safe communication between IoT embedded devices, but it comes at a cost in resources. Nodes are usually equipped with small batteries, which makes energy conservation crucial for IoT devices. Nevertheless, the cost of security in terms of energy consumption has not been studied sufficiently. Previous research has used the security specification of IEEE 802.15.4 for IoT applications, but the energy cost of each security level and the impact on quality of service (QoS) parameters remain unknown. This research focuses on the cost of security at the IoT media access control (MAC) layer. It begins by studying the energy consumption of the IEEE 802.15.4 security levels, followed by an evaluation of the impact of security on data latency and throughput; it then presents the impact of transmission power on security overhead, and finally shows the effects of security on memory footprint. The results show that the security overhead in terms of energy consumption with a payload of 24 bytes ranges from 31.5% over non-secure packets at the minimum security level to 60.4% at the top security level of the 802.15.4 specification. They also show that the security cost has less impact at longer packet lengths and more with smaller packet sizes. In addition, the results depict a significant impact on data latency and throughput: overall, maximum-length authentication decreases throughput by almost 53%, and encryption and authentication together by almost 62%.
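The overhead percentages reported above are relative measures, computed against the energy cost of an unsecured packet. The sketch below shows that calculation; the raw per-packet energies are placeholders chosen so the percentages match the reported 31.5% and 60.4% figures, not measured values from the study.

```python
# Hedged sketch: relative energy overhead of a secured packet over a
# plain one. The per-packet energies (in arbitrary units) are assumed;
# only the resulting percentages mirror the paper's reported figures.

def overhead_pct(secure_energy, plain_energy):
    """Relative energy overhead, in percent, of security over no security."""
    return 100.0 * (secure_energy - plain_energy) / plain_energy

plain = 100.0  # assumed energy per unsecured 24-byte-payload packet
levels = {     # IEEE 802.15.4 security level names; energies assumed
    "MIC-32 (minimum)": 131.5,
    "ENC-MIC-128 (top)": 160.4,
}
for name, energy in levels.items():
    print(f"{name}: {overhead_pct(energy, plain):.1f}% overhead")
```

The same relation applies per bit rather than per packet, which is why the overhead shrinks as payload length grows: the fixed security fields are amortized over more data.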

Keywords: energy consumption, IEEE 802.15.4, IoT security, security cost evaluation

Procedia PDF Downloads 168
6263 Efficacy of Opicapone and Levodopa with Different Levodopa Daily Doses in Parkinson’s Disease Patients with Early Motor Fluctuations: Findings from the Korean ADOPTION Study

Authors: Jee-Young Lee, Joaquim J. Ferreira, Hyeo-il Ma, José-Francisco Rocha, Beomseok Jeon

Abstract:

The effective management of wearing-off is a key driver of medication changes for patients with Parkinson’s disease (PD) treated with levodopa (L-DOPA). While L-DOPA is well tolerated and efficacious, its clinical utility over time is often limited by the development of complications such as dyskinesia. Still, a common first-line option is adjusting the daily L-DOPA dose, followed by adjunctive therapies, usually accounted for in terms of the L-DOPA equivalent daily dose (LEDD). LEDD conversion formulae are a tool used to compare the equivalence of anti-PD medications. The aim of this work is to compare the effects of opicapone (OPC) 50 mg, a catechol-O-methyltransferase (COMT) inhibitor, and of an additional 100 mg dose of L-DOPA in reducing off time in PD patients with early motor fluctuations receiving different daily L-DOPA doses. OPC has been found to be well tolerated and efficacious in the advanced PD population. This work utilized patients’ home diary data from a 4-week Phase 2 pharmacokinetics clinical study. The Korean ADOPTION study randomized (1:1) patients with PD and early motor fluctuations treated with up to 600 mg of L-DOPA given 3–4 times daily. The main endpoint was the change from baseline in off time in the subgroup of patients receiving 300–400 mg/day L-DOPA at baseline plus OPC 50 mg and in the subgroup receiving >300 mg/day L-DOPA at baseline plus an additional dose of L-DOPA 100 mg. Of the 86 patients included in this subgroup analysis, 39 received OPC 50 mg and 47 received L-DOPA 100 mg. At baseline, both the L-DOPA total daily dose and the LEDD were lower in the L-DOPA 300–400 mg/day plus OPC 50 mg group than in the L-DOPA >300 mg/day plus L-DOPA 100 mg group. However, at Week 4, the LEDD was similar between the two groups.
The mean (±standard error) reduction in off time was approximately three-fold greater for the OPC 50 mg group than for the L-DOPA 100 mg group, being -63.0 (14.6) minutes for patients treated with L-DOPA 300–400 mg/day plus OPC 50 mg, and -22.1 (9.3) minutes for those receiving L-DOPA >300 mg/day plus L-DOPA 100 mg. In conclusion, despite similar LEDD, OPC demonstrated a significantly greater reduction in off time when compared to an additional 100 mg L-DOPA dose. The effect of OPC appears to be LEDD-independent, suggesting that caution should be exercised when employing LEDD to guide treatment decisions, as it does not take into account the timing of each dose, the onset and duration of the therapeutic effect, or individual responsiveness. Additionally, OPC could be used to keep the L-DOPA dose as low as possible for as long as possible, to avoid the development of motor complications, which are a significant source of disability.
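LEDD conversion formulae of the kind discussed above are simple weighted sums. The sketch below shows the general shape of such a calculation; the COMT-inhibitor factor used (concurrent L-DOPA dose × 0.5 for opicapone) follows commonly cited conversion conventions but should be treated as an illustrative assumption, not a value taken from this study.

```python
# Hedged sketch: assembling an L-DOPA equivalent daily dose (LEDD).
# The conversion factor below is an assumed, illustrative convention.

def ledd(ldopa_mg, comt_factor=0.0, other_leq_mg=0.0):
    """LEDD = L-DOPA dose scaled by any COMT-inhibitor factor, plus the
    L-DOPA-equivalent milligrams of any other anti-PD medications."""
    return ldopa_mg * (1.0 + comt_factor) + other_leq_mg

# Illustrative comparison mirroring the two study arms:
group_opc = ledd(400, comt_factor=0.5)  # 400 mg/day L-DOPA + OPC 50 mg
group_ldopa = ledd(400 + 100)           # 400 mg/day + extra 100 mg L-DOPA
print(group_opc, group_ldopa)           # 600.0 500.0
```

The abstract's central caution applies directly here: two regimens with similar LEDD totals can differ markedly in effect, because the sum ignores dose timing, onset and duration of effect, and individual responsiveness.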

Keywords: opicapone, levodopa, pharmacokinetics, off-time

Procedia PDF Downloads 62
6262 The Conflict of Grammaticality and Meaningfulness of the Corrupt Words: A Cross-lingual Sociolinguistic Study

Authors: Jayashree Aanand, Gajjam

Abstract:

The grammatical tradition in Sanskrit literature emphasizes the importance of the correct use of Sanskrit words or linguistic units (sādhu śabda), which brings meritorious value, denying the same religious merit to the incorrect use of Sanskrit words (asādhu śabda) or to vernacular or corrupt forms (apa-śabda or apabhraṁśa), even though they may help in communication. The current research, the culmination of doctoral research on sentence definition, studies the difference in the comprehension of correct and incorrect word forms in the Sanskrit and Marathi languages in India. Based on a total of 19 experiments (both web-based and classroom-controlled) on approximately 900 Indian readers, it is found that while incorrect forms in Sanskrit are comprehended with lower accuracy than correct word forms, no such difference is seen for the Marathi language. It is interpreted that incorrect word forms in the native language, or in a language spoken daily (such as Marathi), pose a smaller cognitive load than in a language that is not spoken daily but only read (such as Sanskrit). The theoretical basis for the research problem is as follows: among the three main schools of language science in ancient India, the Vaiyākaraṇas (Grammarians) hold that corrupt word forms have their own expressive power, since they convey meaning, whereas the Mimāṁsakas (the Exegesists) and the Naiyāyikas (the Logicians) believe that corrupt forms can only convey meaning indirectly, by recalling their association and similarity with the correct forms. The grammarians note that the vernaculars born of a speaker’s inability to speak proper Sanskrit were regarded as degenerate versions or fallen forms of the ‘divine’ Sanskrit language, while speakers who could use proper Sanskrit or the standard language were considered Śiṣṭa (‘elite’).
The different ideas of the different schools adhere strictly to their textual dispositions. For the last few years, sociolinguists have agreed that no variety of a language is inherently better than any other; they are all equal as long as they serve the needs of the people who use them. Although the standard form of a language may offer its speakers some advantages, the non-standard variety is considered the most natural style of speaking. This is visible in the results. If incorrect word forms triggered the recall of the corresponding correct forms in the reader, as the theory suggests, this would add one extra step to the process of sentential cognition, leading to a higher cognitive load and lower accuracy. This has not been the case for the Marathi language. Although speaking and listening to the vernaculars is common practice while reading them is not, Marathi readers readily and accurately comprehended the incorrect word forms in the sentences, in contrast to the Sanskrit readers. The primary reason is that Sanskrit is spoken and read only in the standard form, and vernacular forms of Sanskrit are not found in conversational data.

Keywords: experimental sociolinguistics, grammaticality and meaningfulness, Marathi, Sanskrit

Procedia PDF Downloads 126
6261 Investigating Effects of Vehicle Speed and Road PSDs on Response of a 35-Ton Heavy Commercial Vehicle (HCV) Using Mathematical Modelling

Authors: Amal G. Kurian

Abstract:

The use of mathematical modeling has seen a considerable boost in recent times with the development of many advanced algorithms and mathematical modeling capabilities. The advantages this method has over others are that it is much closer to standard physics theories and thus represents a better theoretical model, it takes less solving time, and it offers the ability to change various parameters for optimization, which is a big advantage, especially in the automotive industry. This thesis work focuses on a thorough investigation of the effects of vehicle speed and road roughness on the ride and structural dynamic responses of a heavy commercial vehicle. Since commercial vehicles are kept in operation continuously for long periods of time, it is important to study the effects of various physical conditions on the vehicle and its user. For this purpose, various experimental as well as simulation methodologies are adopted, ranging from experimental transfer path analysis to various road scenario simulations. To effectively investigate and eliminate several causes of unwanted responses, an efficient and robust technique is needed. Carrying forward this motivation, the present work focuses on the development of a mathematical model of a 4-axle configuration heavy commercial vehicle (HCV) capable of calculating the responses of the vehicle to different road PSD inputs and vehicle speeds. Outputs from the model include response transfer functions, response PSDs, and the wheel forces experienced. A MATLAB code is developed to implement the objectives in a robust and flexible manner, which can be exploited further in studies of responses due to various suspension parameters, loading conditions and vehicle dimensions. The thesis work resulted in quantifying the effect of various physical conditions on the ride comfort of the vehicle. An increase in discomfort is seen with increasing velocity; the road profile also has a considerable effect on the comfort of the driver.
Details of the dominant modes at each frequency are analysed and reported in the work. The reduction in ride height, i.e., the deflection of the tyres and suspension with loading, along with the load on each axle, is analysed; it is seen that the front axle supports a greater portion of the vehicle weight, while more of the payload weight is carried by the third and fourth axles. The deflection of the vehicle is seen to be well within acceptable limits.
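The transfer functions such a model produces can be illustrated with the standard 2-DOF quarter-car simplification of the full 4-axle model described above. The sketch below evaluates the road-to-sprung-mass displacement transfer function at a few frequencies; the parameter values are generic heavy-vehicle assumptions, not those of the thesis model.

```python
import math

# Hedged sketch: 2-DOF quarter-car frequency response |Xs/Xr|.
# Sprung mass ms on suspension (ks, cs) over unsprung mass mu on tyre
# stiffness kt, excited by road displacement Xr. Parameters are assumed.

ms, mu = 4500.0, 500.0   # sprung / unsprung mass per wheel [kg]
ks, cs = 4.0e5, 2.0e4    # suspension stiffness [N/m], damping [N s/m]
kt = 1.8e6               # tyre stiffness [N/m]

def sprung_gain(freq_hz):
    """|Xs/Xr| from the frequency-domain equations of motion."""
    s = 2j * math.pi * freq_hz
    a11 = ms * s**2 + cs * s + ks
    a12 = -(cs * s + ks)
    a22 = mu * s**2 + cs * s + ks + kt
    det = a11 * a22 - a12 * a12        # system matrix determinant
    xs = (-a12 * kt) / det             # Cramer's rule, RHS = [0, kt*Xr]
    return abs(xs)

for f in (0.5, 1.5, 10.0):
    print(f"{f:5.1f} Hz -> |Xs/Xr| = {sprung_gain(f):.3f}")
```

Multiplying the squared magnitude of such transfer functions by a road PSD gives the response PSDs the abstract refers to, from which ride-comfort metrics are computed.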

Keywords: mathematical modeling, HCV, suspension, ride analysis

Procedia PDF Downloads 258
6260 Reverse Engineering of a Secondary Structure of a Helicopter: A Study Case

Authors: Jose Daniel Giraldo Arias, Camilo Rojas Gomez, David Villegas Delgado, Gullermo Idarraga Alarcon, Juan Meza Meza

Abstract:

Reverse engineering processes are widely used in industry with the main goal of determining the materials and the manufacturing processes used to produce a component. Many characterization techniques and computational tools are used to obtain this information. A case study of reverse engineering applied to a secondary sandwich hybrid-type structure used in a helicopter is presented. The methodology used consists of five main steps, which can be applied to any other similar component: collecting information about the service conditions of the part, disassembly and dimensional characterization, functional characterization, material-property characterization, and manufacturing-process characterization, providing all the traceability documentation for the materials and processes of aeronautical products that ensures their airworthiness. A detailed explanation of each step is covered. The criticality and functionality of each part, state-of-the-art information, and information obtained from interviews with the technical groups of the helicopter operators were analyzed; 3D optical scanning, standard and advanced materials characterization techniques, and finite element simulation allowed all the characteristics of the materials used in the manufacture of the component to be obtained. It was found that most of the materials are quite common in the aeronautical industry, including Kevlar, carbon and glass fibres, aluminium honeycomb core, epoxy resin and epoxy adhesive. The stacking sequence and fibre volume fraction are critical issues for the mechanical behaviour; an acid digestion method was used for this purpose. This also helps in the determination of the manufacturing technique, which in this case was vacuum bagging. Samples of the material were manufactured and submitted to mechanical and environmental tests.
These results were compared with those obtained during reverse engineering, allowing the conclusion that the materials and manufacturing process were correctly determined. Tooling for manufacture was designed and fabricated according to the geometry and manufacturing-process requirements. The part was manufactured, and the required mechanical and environmental tests were performed. Finally, geometric characterization and non-destructive techniques allowed the quality of the part to be verified.
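The acid digestion step mentioned above yields the fibre volume fraction through the usual constituent-content relation (as standardized, for example, in ASTM D3171): the fibre mass remaining after the matrix is digested, divided by constituent densities. The masses and densities below are illustrative assumptions for a carbon/epoxy sample, not the study's measured values.

```python
# Hedged sketch: fibre volume fraction from an acid-digestion test,
# V_f = (m_f / rho_f) / (m_c / rho_c). All numeric inputs are assumed.

def fibre_volume_fraction(m_composite_g, m_fibre_g,
                          rho_composite_g_cm3, rho_fibre_g_cm3):
    """Fibre volume fraction of a composite sample, in percent."""
    v_fibre = m_fibre_g / rho_fibre_g_cm3          # fibre volume [cm^3]
    v_composite = m_composite_g / rho_composite_g_cm3
    return 100.0 * v_fibre / v_composite

# A 2.00 g sample (density 1.55 g/cm^3) leaving 1.30 g of carbon fibre
# (density 1.78 g/cm^3) after the epoxy matrix is digested:
vf = fibre_volume_fraction(2.00, 1.30, 1.55, 1.78)
print(f"fibre volume fraction ~ {vf:.1f}%")  # prints "~ 56.6%"
```

Together with the stacking sequence, this fraction constrains which manufacturing route (here, vacuum bagging) could plausibly have produced the part.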

Keywords: reverse engineering, sandwich-structured composite parts, helicopter, mechanical properties, prototype

Procedia PDF Downloads 418
6259 Upgraded Cuckoo Search Algorithm to Solve Optimisation Problems Using Gaussian Selection Operator and Neighbour Strategy Approach

Authors: Mukesh Kumar Shah, Tushar Gupta

Abstract:

An upgraded Cuckoo Search Algorithm is proposed here to solve optimization problems, building on the improvements made in earlier versions of the Cuckoo Search Algorithm. Shortcomings of the earlier versions, such as slow convergence and trapping in local optima, are addressed in the proposed version: solutions are randomly initialized using a suggested Improved Lambda Iteration Relaxation method, a Random Gaussian Distribution Walk improves local search, Greedy Selection accelerates convergence to the optimized solution, and a “Study Nearby Strategy” improves global search performance by avoiding trapping in local optima. It is further proposed to generate better solutions by a Crossover Operation. The strategy used in the proposed algorithm shows superiority in terms of convergence speed over several classical algorithms. Three standard algorithms were tested on a 6-generator standard test system, and the results presented clearly demonstrate the superiority of the proposed algorithm over other established algorithms. The algorithm is also capable of handling larger unit systems.
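To make concrete the baseline the paper upgrades, the sketch below implements a minimal standard Cuckoo Search with Lévy flights on a toy 6-dimensional objective. It is an illustrative baseline only; the paper's improvements (Gaussian walk, greedy selection, neighbour strategy, crossover) would replace or extend the marked steps, and the real application is economic dispatch with ramp-rate and prohibited-zone constraints.

```python
import math
import random

# Hedged sketch: minimal standard Cuckoo Search. Parameters and the
# sphere objective are illustrative, not the paper's test system.

random.seed(1)

def levy_step(beta=1.5):
    # Mantegna's algorithm for a Levy-distributed step length
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim=6, n_nests=15, iters=300, pa=0.25, lo=-10, hi=10):
    nests = [[random.uniform(lo, hi) for _ in range(dim)]
             for _ in range(n_nests)]
    best = min(nests, key=f)
    for _ in range(iters):
        for i in range(n_nests):
            # Levy flight toward/away from the best nest (global step);
            # the paper's Gaussian walk would modify this step.
            new = [min(hi, max(lo, x + 0.01 * levy_step() * (x - best[j])))
                   for j, x in enumerate(nests[i])]
            if f(new) < f(nests[i]):   # keep only improvements
                nests[i] = new
        # abandon the worst fraction pa of nests (local restart step)
        nests.sort(key=f)
        for i in range(int((1 - pa) * n_nests), n_nests):
            nests[i] = [random.uniform(lo, hi) for _ in range(dim)]
        best = min(nests + [best], key=f)
    return best

sphere = lambda x: sum(v * v for v in x)   # toy 6-variable objective
best = cuckoo_search(sphere)
print(f"best objective ~ {sphere(best):.3g}")
```

For economic dispatch, the sphere function would be replaced by the total fuel-cost function, with penalties or repairs enforcing power balance, ramp limits and prohibited operating zones.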

Keywords: economic dispatch, gaussian selection operator, prohibited operating zones, ramp rate limits

Procedia PDF Downloads 130