Search results for: distribution system and optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 23232


21942 Design-Analysis and Optimization of 10 MW Permanent Magnet Surface Mounted Off-Shore Wind Generator

Authors: Mamidi Ramakrishna Rao, Jagdish Mamidi

Abstract:

With advancing technology, the market environment for wind power generation systems has become highly competitive. The industry has been moving towards higher wind generator power ratings, in particular off-shore generator ratings. Current off-shore wind turbine generators are in the power range of 10 to 12 MW. Unlike traditional induction machines, slow-speed permanent magnet surface mounted (PMSM) high-power generators are relatively challenging and are designed differently. In this paper, PMSM generator design features are discussed and analysed, with the focus on armature windings, harmonics, and the permanent magnets. For the power ratings under consideration, the generator air-gap diameters are in the range of 8 to 10 meters, and the active material weighs about 60 tons or more; material weight therefore becomes one of the critical parameters. The Particle Swarm Optimization (PSO) technique is used for weight reduction and performance improvement. Four independent design variables are considered: air-gap diameter, stack length, magnet thickness, and winding current density. Suitable penalty functions are applied to account for core and tooth saturation, to prevent demagnetization due to short-circuit armature currents, and to maintain a minimum efficiency. To verify the performance of the optimized design, a detailed analysis and 2D flux plotting are carried out.
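A minimal sketch of the penalty-function PSO described in the abstract; the weight and efficiency formulas below are hypothetical surrogates standing in for the actual electromagnetic model, and the variable bounds are illustrative only.

```python
import random

def pso_minimize(objective, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over box-constrained variables."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * len(bounds) for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d, (lo, hi) in enumerate(bounds):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

def penalized_weight(x):
    """Hypothetical surrogate objective: active weight plus a penalty
    enforcing a minimum efficiency of 85% (stand-ins, not the paper's model)."""
    diameter, stack, magnet_t, current_j = x
    weight = diameter * stack * (1 + 0.1 * magnet_t)   # surrogate active weight
    efficiency = 0.9 - 0.01 * current_j                # surrogate efficiency
    return weight + 1e3 * max(0.0, 0.85 - efficiency)  # penalty if eff < 85%

bounds = [(8, 10), (1, 2), (0.02, 0.1), (2, 6)]        # illustrative ranges
best_x, best_f = pso_minimize(penalized_weight, bounds)
```

The penalty term turns the constrained design problem into an unconstrained one, which is the standard way to handle saturation, demagnetization, and efficiency limits inside PSO.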

Keywords: offshore wind generator, PMSM, PSO optimization, design optimization

Procedia PDF Downloads 149
21941 Consideration of Uncertainty in Engineering

Authors: A. Mohammadi, M. Moghimi, S. Mohammadi

Abstract:

Engineers need computational methods that provide solutions less sensitive to environmental effects, so techniques should be used that take uncertainty into account to control and minimize the risk associated with design and operation. To consider uncertainty in an engineering problem, the optimization problem should be solved over a suitable range of each uncertain input variable instead of a single estimated point. With deterministic optimization, a large computational burden is required to consider every possible and probable combination of uncertain input variables. Several methods have been reported in the literature to deal with problems under uncertainty. In this paper, different methods are presented and analyzed.
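The simplest of the approaches surveyed, Monte Carlo simulation, propagates sampled input uncertainty through the model; a minimal sketch with a hypothetical two-input response:

```python
import random
import statistics

def monte_carlo(model, samplers, n=10000, seed=1):
    """Propagate input uncertainty through a model by random sampling,
    returning the mean and standard deviation of the output."""
    rng = random.Random(seed)
    outputs = [model(*(draw(rng) for draw in samplers)) for _ in range(n)]
    return statistics.mean(outputs), statistics.stdev(outputs)

# Hypothetical design response y = a^2 + b with two uncertain inputs.
mean, std = monte_carlo(
    lambda a, b: a * a + b,
    [lambda r: r.gauss(2.0, 0.1),      # input a ~ N(2, 0.1)
     lambda r: r.uniform(0.9, 1.1)],   # input b ~ U(0.9, 1.1)
)
```

Instead of one deterministic evaluation, the output is characterized by its distribution, which is exactly the trade-off the abstract describes: robustness at the price of many model evaluations.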

Keywords: uncertainty, Monte Carlo simulation, stochastic programming, scenario method

Procedia PDF Downloads 409
21940 The Implementation of the Multi-Agent Classification System (MACS) in Compliance with FIPA Specifications

Authors: Mohamed R. Mhereeg

Abstract:

The paper discusses the implementation of the Multi-Agent Classification System (MACS) and its use to provide an automated and accurate classification of end users developing applications in the spreadsheet domain. Different technologies have been brought together to build MACS. The strength of the system is the integration of agent technology and the FIPA specifications with several other technologies: .NET Windows-service-based agents, Windows Communication Foundation (WCF) services, a Service Oriented Architecture (SOA), and Oracle Data Mining (ODM). Microsoft's .NET Windows-service-based agents were used to develop the monitoring agents of MACS, while the .NET WCF services together with the SOA approach allowed distribution of, and communication between, agents over the WWW. The Monitoring Agents (MAs) were configured to execute automatically to monitor Excel spreadsheet development activities by content. Data gathered by the Monitoring Agents from various resources over a period of time is collected and filtered by a Database Updater Agent (DUA) residing in the .NET client application of the system. This agent then transfers and stores the data in an Oracle server database via Oracle stored procedures for further processing, which leads to the classification of the end-user developers.

Keywords: MACS, implementation, multi-agent, SOA, autonomous, WCF

Procedia PDF Downloads 268
21939 Modeling of Polyethylene Particle Size Distribution in Fluidized Bed Reactors

Authors: R. Marandi, H. Shahrir, T. Nejad Ghaffar Borhani, M. Kamaruddin

Abstract:

In the present study, a steady-state population balance model was developed to predict the polymer particle size distribution (PSD) in gas-phase fluidized bed ethylene polymerization reactors. The multilayer polymeric flow model (MPFM) was used to calculate the growth rate of a single polymer particle under intraparticle heat and mass transfer resistance. Industrial plant data were used to calculate the growth rate of the polymer particles and the polymer PSD. Numerical simulations were carried out to describe the influence of the effective monomer diffusion coefficient, the polymerization rate, and the initial catalyst size on catalyst particle growth and the final polymer PSD. The results show that intraparticle heat and mass transfer limitations are important for ethylene polymerization, the particle growth rate, and the polymer PSD in the fluidized bed reactor. The effect of agglomeration on the PSD is also considered; the results show that the polymer particle size distribution becomes broader when agglomeration occurs.

Keywords: population balance, olefin polymerization, fluidized bed reactor, particle size distribution, agglomeration

Procedia PDF Downloads 325
21938 Targeting Mineral Resources of the Upper Benue Trough, Northeastern Nigeria Using Linear Spectral Unmixing

Authors: Bello Yusuf Idi

Abstract:

The Gongola arm of the Upper Benue Trough, Northeastern Nigeria, is predominantly covered by outcrops of limestone-bearing rocks in the form of sandstone with intercalations of carbonate clay, shale, basaltic, feldspathic, and migmatite rocks at subpixel dimension. In this work, a subpixel classification algorithm was used to classify data acquired by the Landsat 7 Enhanced Thematic Mapper (ETM+) satellite system, with the aim of producing fractional distribution images for the three most economically important solid minerals of the area: limestone, basalt, and migmatite. The Linear Spectral Unmixing (LSU) algorithm was used to produce fractional abundance images of the three mineral resources within a 100 km² portion of the area. The results show that the minerals occur in different proportions across the area. The fractional maps could therefore serve as a guide to the ongoing reconnaissance of the economic potential of the formation.
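The linear mixing model behind LSU treats each pixel spectrum as a weighted sum of endmember spectra and solves for the weights. A minimal sketch with hypothetical 4-band spectra (the real work uses Landsat ETM+ bands and field-derived endmembers):

```python
import numpy as np

def linear_unmix(pixel, endmembers):
    """Least-squares abundance estimate under the linear mixing model
    pixel = E @ fractions, with a sum-to-one constraint enforced softly
    by augmenting the system with a heavily weighted row of ones."""
    E = np.asarray(endmembers, dtype=float).T     # bands x endmembers
    p = np.asarray(pixel, dtype=float)
    w = 100.0                                     # weight on the sum-to-one row
    A = np.vstack([E, w * np.ones(E.shape[1])])
    b = np.concatenate([p, [w]])
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return fractions

# Hypothetical 4-band reflectance spectra for the three target lithologies.
limestone = [0.8, 0.7, 0.6, 0.5]
basalt    = [0.2, 0.2, 0.3, 0.3]
migmatite = [0.5, 0.4, 0.5, 0.6]
# A pixel mixed 50% limestone, 30% basalt, 20% migmatite:
pixel = [0.5 * a + 0.3 * b + 0.2 * c
         for a, b, c in zip(limestone, basalt, migmatite)]
fractions = linear_unmix(pixel, [limestone, basalt, migmatite])
```

Applied per pixel, the recovered fractions form exactly the kind of fractional abundance image the abstract describes.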

Keywords: linear spectral unmixing, Upper Benue Trough, Gongola arm, geological engineering

Procedia PDF Downloads 365
21937 A Group Setting of IED in Microgrid Protection Management System

Authors: Jyh-Cherng Gu, Ming-Ta Yang, Chao-Fong Yan, Hsin-Yung Chung, Yung-Ruei Chang, Yih-Der Lee, Chen-Min Chan, Chia-Hao Hsu

Abstract:

A number of distributed generators (DGs) are installed in a microgrid, which can produce diverse paths and directions of power flow or fault current. The overcurrent protection scheme of a traditional radial distribution system therefore no longer meets the needs of microgrid protection. Integrating intelligent electronic devices (IEDs) and a supervisory control and data acquisition (SCADA) system with the IEC 61850 communication protocol, the paper proposes a microgrid protection management system (MPMS) to protect the power system from faults. In the proposed method, the MPMS performs logic programming of each IED to coordinate their tripping sequence, and the GOOSE message defined in IEC 61850 is used as the transmission medium among IEDs. Moreover, to cope with the difference in microgrid fault current between grid-connected and islanded modes, the proposed MPMS applies the group-setting feature of the IEDs to protect the system with robust adaptability. Once the microgrid topology varies, the MPMS recalculates the fault currents and updates the group settings of the IEDs; when a fault occurs, the IEDs isolate it at once. Finally, the Matlab/Simulink and Elipse Power Studio software are used to simulate and demonstrate the feasibility of the proposed method.

Keywords: IEC 61850, IED, group setting, microgrid

Procedia PDF Downloads 453
21936 Distribution Patterns of the Renieramycin-M-Producing Blue Sponge, Xestospongia sp. (De Laubenfels, 1932) (Phylum: Porifera, Class: Demospongiae) in Puerto Galera, Oriental Mindoro, Philippines

Authors: Geminne Manzano, Clairecynth Yu, Lilibeth Salvador-Reyes, Viviene Santiago, Porfirio Aliño

Abstract:

The distribution and abundance patterns of many sessile marine organisms such as sponges vary among and within reefs. Determining the factors affecting distribution is essential, especially for organisms that produce secondary metabolites of pharmaceutical importance. In this study, the small-scale distribution patterns of the Philippine blue sponge, Xestospongia sp., were examined in relation to some ecological factors, and the relationship between renieramycin-M production and benthic attributes was determined. Ecological surveys were conducted at two stations of varying depth and exposure located in Oriental Mindoro, Philippines. Three 30 × 6 m belt transects were used to assess sponge abundance at each station, and the substratum of the sponges was characterized. Fish visual census observations were taken together with photo-transect benthic surveys. Sponge samples were also collected for the extraction of renieramycin-M and further chemical analysis. Varying distribution patterns were observed, attributable to a combination of ecological and environmental factors, and the amount of renieramycin-M produced also varied between stations. Common substrata for the blue sponge include hard and soft corals, as well as dead coral with algal patches. Blue sponges from the exposed habitat frequently grow associated with massive and branching corals of Porites sp., while the most frequent substrate in the sheltered habitat is the coral Pavona sp. Exploring the influence of ecological and environmental parameters on the abundance and distribution of sponge assemblages provides ecological insights and potential applications for pharmaceutical studies. The results of this study provide further impetus for studying the distribution patterns and processes of the Philippine blue sponge, Xestospongia sp., in relation to the chemical ecology of its secondary metabolites.

Keywords: distribution patterns, Porifera, Renieramycin-M, sponge assemblages, Xestospongia sp.

Procedia PDF Downloads 264
21935 The Optimization of Decision Rules in Multimodal Decision-Level Fusion Scheme

Authors: Andrey V. Timofeev, Dmitry V. Egorov

Abstract:

This paper introduces an original method for parametric optimization of the structure of a multimodal decision-level fusion scheme, which combines the partial classification results obtained from an assembly of mono-modal classifiers. As a result, a multimodal fusion classifier with the minimum total error rate is obtained.
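The general shape of decision-level fusion can be sketched as a weighted combination of per-classifier class scores, with the weights chosen to minimize the total error rate on labelled data. The grid search below is a toy stand-in for the paper's parametric optimization, and the sample data are hypothetical:

```python
from itertools import product

def fuse(scores, weights):
    """Decision-level fusion: weighted sum of per-classifier class scores,
    returning the index of the winning class."""
    n_classes = len(scores[0])
    fused = [sum(w * s[c] for w, s in zip(weights, scores)) for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__)

def optimize_weights(samples, labels, grid):
    """Choose fusion weights minimizing the total error rate on labelled
    data by exhaustive search over a coarse grid."""
    best_w, best_err = None, float("inf")
    for w in product(grid, repeat=len(samples[0])):
        err = sum(fuse(s, w) != y for s, y in zip(samples, labels)) / len(labels)
        if err < best_err:
            best_w, best_err = w, err
    return best_w, best_err

# Two mono-modal classifiers scoring two classes; classifier 1 is reliable.
samples = [([0.9, 0.1], [0.2, 0.8]),
           ([0.8, 0.2], [0.4, 0.6]),
           ([0.1, 0.9], [0.7, 0.3])]
labels = [0, 0, 1]
best_w, best_err = optimize_weights(samples, labels, (0.0, 0.5, 1.0))
```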

Keywords: classification accuracy, fusion solution, total error rate, multimodal fusion classifier

Procedia PDF Downloads 462
21934 Change Point Analysis in Average Ozone Layer Temperature Using Exponential Lomax Distribution

Authors: Amjad Abdullah, Amjad Yahya, Bushra Aljohani, Amani Alghamdi

Abstract:

Change point detection is an important part of data analysis. The presence of a change point refers to a significant change in the behavior of a time series. In this article, we examine the detection of multiple change points in the parameters of the exponential Lomax distribution, which is broad and flexible compared with other distributions when fitting data. We used the Schwarz information criterion and binary segmentation to detect multiple change points in publicly available data on the average temperature in the ozone layer. The change points were successfully located.
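Binary segmentation accepts a split wherever it lowers the information criterion, then recurses on each side. The sketch below uses a Gaussian SIC for a mean shift rather than the exponential Lomax likelihood of the paper, to keep the mechanics visible:

```python
import math

def sic(segment):
    """Schwarz information criterion for a Gaussian segment; the max(...)
    guard keeps the log finite for (near-)constant segments."""
    n = len(segment)
    mu = sum(segment) / n
    var = sum((x - mu) ** 2 for x in segment) / n
    return n * math.log(max(var, 1e-12)) + math.log(n)

def binary_segmentation(data, lo=0, hi=None, min_len=5, found=None):
    """Recursively split the series wherever a split lowers the SIC."""
    if found is None:
        found = []
    if hi is None:
        hi = len(data)
    seg = data[lo:hi]
    if len(seg) < 2 * min_len:
        return sorted(found)
    best_k, best_cost = None, sic(seg)        # cost of not splitting
    for k in range(min_len, len(seg) - min_len + 1):
        cost = sic(seg[:k]) + sic(seg[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    if best_k is not None:
        found.append(lo + best_k)
        binary_segmentation(data, lo, lo + best_k, min_len, found)
        binary_segmentation(data, lo + best_k, hi, min_len, found)
    return sorted(found)

# A series whose mean shifts at index 50:
series = [0.0] * 50 + [5.0] * 50
points = binary_segmentation(series)          # → [50]
```

Swapping the Gaussian segment cost for the exponential Lomax log-likelihood gives the procedure the abstract applies.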

Keywords: binary segmentation, change point, exponential Lomax distribution, information criterion

Procedia PDF Downloads 171
21933 Research on Optimization Strategies for the Negative Space of Urban Rail Transit Based on Urban Public Art Planning

Authors: Kexin Chen

Abstract:

As an important mode of transportation that addresses the supply-demand contradiction generated by rapid urbanization, urban rail transit has developed rapidly in China over the past ten years. During this rapid development, urban rail transit space has encountered many problems, such as spatial monotony, dull sensory experience, and poor regional identity. This paper focuses on the negative space of subway stations and on spatial softening, comparing and learning from cases at home and abroad. The article sorts out these cases, makes a comparative study of them, analyzes more diversified settings of public art, puts forward propositions on suitable types of public art for domestic urban rail transit space, and shows the relationship between spatial attributes and public art forms. On this foundation, it characterizes more diverse ways of setting public art; suggests three public art forms with corresponding properties, namely static presentation, dynamic imagery, and spatial softening; and identifies methods by which urban public art can optimize negative space.

Keywords: diversification, negative space, optimization strategy, public art planning

Procedia PDF Downloads 204
21932 Efficiency and Reliability Analysis of SiC-Based and Si-Based DC-DC Buck Converters in Thin-Film PV Systems

Authors: Elaid Bouchetob, Bouchra Nadji

Abstract:

This research paper compares the efficiency and reliability R(t) of SiC-based and Si-based DC-DC buck converters in thin-film PV systems with an AI-based MPPT controller. Using Simplorer/Simulink simulations, the study assesses their performance under varying conditions. Results show that the SiC-based converter outperforms the Si-based one in efficiency and cost-effectiveness, especially under high-temperature and low-irradiance conditions. It also exhibits superior reliability, particularly at high temperature and voltage. The reliability function R(t) is analyzed to assess system performance over time; accounting for factors such as component failure rates and system lifetime, the SiC-based converter demonstrates better reliability. The research focuses on the buck converter's role in charging a lithium battery within the PV system. By combining the SiC-based converter and the AI-based MPPT controller, higher charging efficiency, improved reliability, and cost-effectiveness are achieved. The SiC-based converter proves superior under challenging conditions, emphasizing its potential for optimizing PV system charging. These findings contribute insights into the efficiency and reliability of SiC-based and Si-based converters in PV systems. SiC technology's advantages, coupled with advanced control strategies, promote efficient and sustainable energy storage using lithium batteries. The research supports PV system design and optimization for reliable renewable energy utilization.
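A reliability comparison of this kind is often built on the constant-failure-rate exponential model; a minimal sketch with hypothetical failure rates (illustrative numbers, not the paper's data):

```python
import math

def series_reliability(failure_rates, hours):
    """Reliability of components in series with constant failure rates:
    R(t) = exp(-(λ1 + λ2 + ...)·t), the usual exponential model."""
    return math.exp(-sum(failure_rates) * hours)

# Hypothetical per-component failure rates (failures/hour) for the main
# converter parts (switch, diode, output capacitor) -- illustrative only.
sic_rates = [0.5e-6, 0.3e-6, 0.8e-6]   # assumed lower-stress SiC stack
si_rates  = [1.2e-6, 0.6e-6, 0.8e-6]   # assumed Si stack
r_sic = series_reliability(sic_rates, 50_000)   # 50,000-hour mission
r_si  = series_reliability(si_rates, 50_000)
```

Under this model a lower aggregate failure rate translates directly into a higher R(t) over the mission time, which is the comparison the abstract reports.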

Keywords: efficiency, reliability, artificial intelligence, SiC device, thin film, buck converter

Procedia PDF Downloads 55
21931 Teaching the Binary System via Beautiful Facts from Real Life

Authors: Salem Ben Said

Abstract:

In recent times, the decimal number system to which we are accustomed has received serious competition from the binary number system. In this note, an approach is suggested for teaching and learning the binary number system using examples from the real world. More precisely, we demonstrate the utility of the binary system in describing the optimal strategy to win the Chinese Nim game, and in telegraphy by decoding the hidden message on Perseverance's Mars parachute, written in the language of the binary system. Finally, we answer the question, “why do modern computers prefer the ternary number system instead of the binary system?”. All materials are provided in a format that is conducive to classroom presentation and discussion.
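The optimal Nim strategy mentioned above is a classic application of binary arithmetic: the player to move wins exactly when the XOR (nim-sum) of the heap sizes is nonzero, and the winning move reduces some heap so the nim-sum becomes zero. A short sketch:

```python
def nim_winning_move(heaps):
    """Optimal Nim strategy via binary XOR: the position is lost for the
    player to move iff the nim-sum (XOR of all heap sizes) is zero;
    otherwise, reduce some heap so that the nim-sum becomes zero."""
    nim_sum = 0
    for h in heaps:
        nim_sum ^= h
    if nim_sum == 0:
        return None                    # every move hands the win away
    for i, h in enumerate(heaps):
        target = h ^ nim_sum
        if target < h:                 # this heap can absorb the nim-sum
            return i, h - target       # (heap index, stones to remove)

move = nim_winning_move([3, 4, 5])     # → (0, 2): take 2 from the first heap
```

After taking 2 from the first heap the position [1, 4, 5] has nim-sum 1 ^ 4 ^ 5 = 0, so the opponent has no winning move.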

Keywords: binary number system, Nim game, telegraphy, computers prefer the ternary system

Procedia PDF Downloads 178
21930 Historical Metaphors in Insurance: A Journey

Authors: Anjuman Antil, Anuj Kapoor, Neha Saini

Abstract:

Purpose: The purpose of this paper is to study the evolution of insurance in India and the world. The paper also traces the historical basis of life insurance in the world and how insurance emerged as a major sector in India's economy. The promotional strategies and distribution channels of the top three companies in the Indian insurance sector are also discussed. Design/methodology/approach: The paper examines secondary data, including reports issued by the Insurance Regulatory and Development Authority of India, company websites, and books and journals relevant to the study. Findings: The paper argues the role and importance of insurance in an emerging economy. The challenges and opportunities of the insurance sector are briefly outlined, and the emerging areas of the sector in terms of promotional strategies and distribution channels are listed. Implications: The historical evolution can be studied by companies while formulating their strategies; it will help them analyse the insurance sector, how things have changed, and how to change with the changing times. Originality/value: This paper gives comprehensive data regarding the background of the insurance sector. Along with the historical perspective, marketing and distribution as well as current and future trends are discussed.

Keywords: insurance, evolution, life insurance, marketing, distribution channels

Procedia PDF Downloads 229
21929 Real-Time Observation of Concentration Distribution for Mix Liquids including Water in Micro Fluid Channel with Near-Infrared Spectroscopic Imaging Method

Authors: Hiroki Takiguchi, Masahiro Furuya, Takahiro Arai

Abstract:

In order to quantitatively comprehend thermal flows in industrial applications such as nuclear and chemical reactors, detailed measurements of the temperature and abundance (concentration) of materials at high temporal and spatial resolution are required. Additionally, rigorous evaluation of size effects is important for practical realization. This paper introduces a real-time spectroscopic imaging method at the micro scale, which visualizes the temperature and concentration distribution of a liquid or liquid mixture in the near-infrared (NIR) wavelength region. The imaging principle is based on the absorption of a pre-selected narrow band around an absorption peak of the target liquid in the NIR region, or on the temperature dependence of that absorption. For example, water has a positive temperature sensitivity at a wavelength of 1905 nm; the temperature of water can therefore be measured using this band. In the experiment, real-time imaging of the concentration distribution in a micro channel was demonstrated to investigate the applicability of the proposed method to micro-scale diffusion coefficient and temperature measurement. The effects of thermal diffusion and binary mutual diffusion were evaluated from time-series visualizations of the concentration distribution.

Keywords: near-infrared spectroscopic imaging, micro fluid channel, concentration distribution, diffusion phenomenon

Procedia PDF Downloads 155
21928 Mirror-Like Effect Based on Correlations among Atoms

Authors: Qurrat-ul-Ain Gulfam, Zbigniew Ficek

Abstract:

The novel idea of using single atoms as highly reflecting mirrors has recently gained much attention. Usually, to observe the reflective nature of an atom, the atom must be coupled to an external medium so that directional spontaneous emission can be realized. We propose an alternative way to achieve directional emission by considering a system of correlated atoms in free space. It is well known that mutually interacting atoms have a strong tendency to emit radiation along particular discrete directions. That relieves one of the stringent condition of coupling the atomic system to another medium and greatly facilitates experimental implementation. Moreover, realistic three-dimensional collective emission can be taken into account in the dynamics. Two interesting spatial setups are considered: one where a probe atom is confined in a linear cavity formed by two atomic mirrors, and another where a probe atom faces a chain of correlated atoms. We observe evidence of the mirror-like effect in a simple system of a chain of three atoms. The angular distribution of the radiation intensity observed in the far field is greatly affected by the atomic interactions; hence, suitable directions for enhanced reflectivity can be determined.

Keywords: atom-mirror effect, correlated system, dipole-dipole interactions, intensity

Procedia PDF Downloads 544
21927 Two-Dimensional CFD Simulation of the Behaviors of Ferromagnetic Nanoparticles in Channel

Authors: Farhad Aalizadeh, Ali Moosavi

Abstract:

This paper presents a two-dimensional computational fluid dynamics (CFD) simulation of steady particle tracking, the purpose of which is to study the effect of an applied magnetic field on the velocity distribution of magnetic nanoparticles. It is shown that the permeability of the particles determines the effect of the magnetic field on their deposition, and that deposition is inversely proportional to the Reynolds number. Using MHD and its properties, it is possible to control the flow velocity, remove fouling from the walls, and return the system to its original form. We consider a 2D channel geometry and solve for the resulting spatial distribution of particles. According to the results obtained, when a magnetic field is applied perpendicular to the flow, the local particle velocity is decreased due to the direct effect of the magnetic field, returning the system to its original form. In the envisaged method, the ferromagnetic particles are first covered with a gel-like chemical composition to avoid mixing with blood and are injected into the blood vessels; a magnetic field source at a specified distance from the vessel is then used to guide the particles to the affected area. The paper also presents a two-dimensional CFD simulation of the steady, laminar flow of an incompressible magnetorheological (MR) fluid between two fixed parallel plates in the presence of a uniform magnetic field, with the aim of developing a numerical tool able to simulate MR fluid flow in valve mode and to determine the effect of the applied magnetic field B0 on the flow velocities and pressure distributions.

Keywords: MHD, channel clots, magnetic nanoparticles, simulations

Procedia PDF Downloads 365
21926 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization

Authors: Aitor Bilbao, Dragos Axinte, John Billingham

Abstract:

The inverse problem in Energy Beam (EB) processes consists of defining the control parameters, in particular the 2D beam path (position and orientation of the beam as a function of time), that arrive at a prescribed solution (freeform surface). This inverse problem is well understood for conventional machining, because the cutting tool geometry is well defined and material removal is a time-independent process. In contrast, EB machining is achieved through the local interaction of a beam of particular characteristics (e.g. energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only does the beam vary with the dwell time, but any acceleration/deceleration of the machine/beam delivery system when performing raster paths will influence the actual geometry of the surface to be generated. Two different EB processes, Abrasive Waterjet Machining (AWJM) and Pulsed Laser Ablation (PLA), are studied. Even though they are considered independent technologies, both can be described as time-dependent processes. AWJM can be considered a continuous process, and the etched material depends on the feed speed of the jet at each instant during the process. On the other hand, PLA processes are usually described as discrete systems, and the total removed material is calculated by summation of the different pulses shot during the process. The overlapping of these shots depends on the feed speed and the frequency between two consecutive shots. However, if the feed speed is sufficiently slow compared with the frequency, then consecutive shots are close enough that the behaviour is similar to a continuous process. Using this approximation, a generic continuous model can be described for both processes.
The inverse problem is usually solved for this kind of process by simply controlling dwell time in proportion to the required depth of milling at each single pixel on the surface, using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are only valid when shallow surfaces are etched. The solution of the inverse problem is improved by using a discrete adjoint optimization algorithm; moreover, the calculation of the Jacobian matrix then consumes less computation time than finite difference approaches. The influence of the dynamics of the machine on the actual movement of the jet is also important and should be taken into account. When the parameters of the controller are not known or cannot be changed, a simple approximation is used for the choice of the slope of a step profile. Several experimental tests are performed for both technologies to show the usefulness of this approach.
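The gradient machinery underlying adjoint-based solution of such inverse problems can be sketched on a linearized model depth = A @ dwell, where applying the transpose (adjoint) of the forward map gives the gradient at the cost of one backward product. The footprint matrix below is hypothetical, and this is a sketch of the idea, not the paper's full adjoint scheme:

```python
import numpy as np

def solve_dwell_times(A, target, iters=500):
    """Projected gradient descent on 0.5-scaled ||A d - target||^2 is shown
    unscaled here: g = 2 Aᵀ(A d − target), with dwell times kept >= 0."""
    A = np.asarray(A, dtype=float)
    t = np.asarray(target, dtype=float)
    lr = 0.5 / np.linalg.norm(A, 2) ** 2       # safe step for this quadratic
    d = np.zeros(A.shape[1])
    for _ in range(iters):
        residual = A @ d - t                   # forward pass
        grad = 2.0 * (A.T @ residual)          # adjoint (backward) pass
        d = np.clip(d - lr * grad, 0.0, None)  # dwell times must stay >= 0
    return d

# Hypothetical beam footprint: each dwell position etches its own pixel
# fully and each neighbour at half strength.
A = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.5],
              [0.0, 0.5, 1.0]])
target = np.array([1.0, 1.5, 1.0])             # prescribed depth profile
dwell = solve_dwell_times(A, target)           # ≈ [0.5, 1.0, 0.5]
```

The point of the adjoint formulation is that this gradient costs one extra matrix-vector product per iteration, instead of one forward solve per unknown as in finite differences.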

Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation

Procedia PDF Downloads 274
21925 A Novel Approach of NPSO on Flexible Logistic (S-Shaped) Model for Software Reliability Prediction

Authors: Pooja Rani, G. S. Mahapatra, S. K. Pandey

Abstract:

In this paper, we propose a novel approach combining Neural Network and Particle Swarm Optimization methods for software reliability prediction. We first explain how to apply a compound function in a neural network to derive a Flexible Logistic (S-shaped) Growth Curve (FLGC) model. This model mathematically represents software failure as a random process and can be used to evaluate software development status during testing. To avoid becoming trapped in local minima, we apply the Particle Swarm Optimization method to train the proposed model using failure test data sets; we thus derive the model using computation-based intelligence modeling, and the proposed model becomes the Neuro-Particle Swarm Optimization (NPSO) model. We test the model with different inertia weights for the particle position and velocity updates, and obtain results based on the best inertia weight, compared with personal-best-oriented PSO (pPSO), which helps to choose the local best in the network neighborhood. The applicability of the proposed model is demonstrated through a real-time test data failure set. The results obtained from experiments show that the proposed model has a fairly accurate prediction capability in software reliability.
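The S-shaped growth curve at the heart of the model can be illustrated with a generic logistic form (the paper's FLGC parameterization may differ, and the parameters below are hypothetical):

```python
import math

def flgc(t, a, b, c):
    """A flexible logistic (S-shaped) cumulative-failure curve,
    m(t) = a / (1 + c·exp(−b·t)): a is the expected total number of
    failures, b the growth rate, and c shifts the inflection point."""
    return a / (1.0 + c * math.exp(-b * t))

# Hypothetical parameters: 100 expected total failures over the test period.
curve = [flgc(t, a=100, b=0.5, c=20) for t in range(0, 21, 5)]
```

The curve rises slowly at first, accelerates, then saturates near a, which is why it can summarize software development status during testing; PSO is then used to fit a, b, c to observed failure data.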

Keywords: software reliability, flexible logistic growth curve model, software cumulative failure prediction, neural network, particle swarm optimization

Procedia PDF Downloads 341
21924 An Evaluation of a Prototype System for Harvesting Energy from Pressurized Pipeline Networks

Authors: Nicholas Aerne, John P. Parmigiani

Abstract:

There is an increasing desire for renewable and sustainable energy sources to replace fossil fuels. This desire is the result of several factors. First is the role of fossil fuels in climate change. Scientific data clearly show that global warming is occurring, and it has been concluded that human activity, specifically the combustion of fossil fuels, is highly likely to be a major cause of this warming. Second, despite the current surplus of petroleum, fossil fuels are a finite resource and will eventually become scarce, and alternatives such as clean or renewable energy will be needed. Third, operations to obtain fossil fuels such as fracking, off-shore oil drilling, and strip mining are expensive and harmful to the environment. Given these impacts, there is a need to replace fossil fuels with renewable energy sources as a primary energy source. Various sources of renewable energy exist. Many familiar sources obtain renewable energy from the sun and the natural environments of the earth; common examples include solar, hydropower, geothermal heat, ocean waves and tides, and wind energy. Often, obtaining significant energy from these sources requires physically large, sophisticated, and expensive equipment (e.g., wind turbines, dams, solar panels, etc.). Other sources of renewable energy are found in the man-made environment. An example is municipal water distribution systems. The movement of water through the pipelines of these systems typically requires the reduction of hydraulic pressure through the use of pressure reducing valves, which are needed to reduce upstream supply-line pressures to levels suitable for downstream users. The energy associated with this reduction of pressure is significant but is currently not harvested and is simply lost. While the integrity of municipal water supplies is of paramount importance, one can certainly envision means by which this lost energy could be safely accessed.
This paper provides a technical description and analysis of one such means, proposed by the technology company InPipe Energy, to generate hydroelectricity by harvesting energy from pressure reducing valve stations in municipal water distribution systems. Specifically, InPipe Energy proposes to install hydropower turbines in parallel with existing pressure reducing valves. InPipe Energy, in partnership with Oregon State University, has evaluated this approach and built a prototype system at the O. H. Hinsdale Wave Research Lab. The Oregon State University evaluation showed that the prototype system rapidly and safely initiates, maintains, and ceases power production as directed. The outgoing water pressure remained constant at the specified set point throughout all testing. The system replicates the functionality of the pressure reducing valve and ensures accurate control of the downstream pressure. At a typical water-distribution-system pressure drop of 60 psi, the prototype, operating at an efficiency of 64%, produced approximately 5 kW of electricity. Based on the results of this study, the proposed method appears to offer a viable means of producing significant amounts of clean renewable energy from existing pressure reducing valves.
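The reported figures can be checked with the basic hydraulic power relation P = Q · Δp · η; the flow rate below is back-calculated from the reported 5 kW, 60 psi, and 64% (the paper does not state it directly, so it is an inferred value):

```python
PSI_TO_PA = 6894.76   # pascals per psi

def turbine_power_w(flow_m3s, drop_psi, efficiency):
    """Electrical power recovered from a pressure drop: P = Q · Δp · η."""
    return flow_m3s * drop_psi * PSI_TO_PA * efficiency

# Flow rate implied by the reported figures (~5 kW at a 60 psi drop, 64%):
flow = 5000.0 / (60 * PSI_TO_PA * 0.64)      # ≈ 0.019 m³/s (about 19 L/s)
power_w = turbine_power_w(flow, 60, 0.64)    # recovers the ~5 kW figure
```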

Keywords: pressure reducing valve, renewable energy, sustainable energy, water supply

Procedia PDF Downloads 199
21923 Optimal Design of Linear Generator to Recharge the Smartphone Battery

Authors: Jin Ho Kim, Yujeong Shin, Seong-Jin Cho, Dong-Jin Kim, U-Syn Ha

Abstract:

Due to the development of the information industry and its technologies, cellular phones must not only provide communication but also functions such as Internet access, e-banking, and entertainment; such phones are called smartphones. Because of these various functions, the performance of smartphones has improved and battery capacity has gradually increased. Recently, linear generators have been embedded in smartphones in order to recharge the smartphone's battery. In this study, optimization is performed and a change in the array of permanent magnets is examined in order to increase efficiency. We propose an optimal design using design of experiments (DOE) to maximize the generated induced voltage. The thicknesses of the pole shoe and permanent magnet (PM), the heights of the pole shoe and PM, and the thickness of the coil are chosen as design variables. We generated 25 sampling points using an orthogonal array over four design variables, and performed electromagnetic finite element analysis to predict the generated induced voltage using the commercial electromagnetic analysis software ANSYS Maxwell. We then built an approximate model using the Kriging algorithm and derived optimal values of the design variables using an evolutionary algorithm; the commercial optimization software PIAnO (Process Integration, Automation, and Optimization) was used with these algorithms. The result of the optimization shows that the generated induced voltage is improved.
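A 25-point plan over four five-level factors, like the one described, can be generated as an orthogonal array; the classical construction below uses linear forms over GF(5), and the level-to-value mapping is hypothetical (the paper's actual levels are not given here):

```python
from itertools import product

def oa25_four_factors():
    """Orthogonal array OA(25, 5^4) built from linear forms over GF(5):
    columns x, y, x+y, x+2y (mod 5). Every pair of columns contains each
    of the 25 level pairs exactly once (strength 2)."""
    return [(x, y, (x + y) % 5, (x + 2 * y) % 5)
            for x, y in product(range(5), repeat=2)]

rows = oa25_four_factors()

# Map the 5 levels of one factor to hypothetical design values,
# e.g. a magnet thickness sweep in mm (illustrative numbers only):
magnet_thickness_mm = [4.0 + 0.5 * r[2] for r in rows]
```

Each of the 25 rows defines one finite element run; the balanced pairwise coverage is what lets a Kriging surrogate be fitted from so few samples.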

Keywords: smartphone, linear generator, design of experiment, approximate model, optimal design

Procedia PDF Downloads 342
21922 Fertilizer Procurement and Distribution in Nigeria: Assessing Policy against Implementation

Authors: Jacob Msughter Gwa, Rhys Williams

Abstract:

It is widely known that food security is a major concern in Sub-Saharan Africa. In many regions, including Nigeria, this is due to an age-old agricultural problem: soil erosion beyond replacement levels. The use of fertilizer would seem an immediate solution, as it can boost agricultural productivity, and low agricultural productivity in Nigeria is attributed to low fertilizer use. The Government of Nigeria has been addressing the challenges of food shortage, but with limited success. A practical and efficient subsidy programme appears to be needed to address this issue. However, the problems of procurement and distribution change from one stage of a subsidy programme to another. This paper looks at the difference between the ideal and the actual implementation of agricultural fertilizer policies in Nigeria, which currently run the risk of meeting required standards on paper while missing the desired real outcomes, and recognises the need to close the gap between the paperwork and the realities on the ground.

Keywords: agricultural productivity, fertilizer distribution, fertilizer procurement, Nigeria

Procedia PDF Downloads 360
21921 Sparsity-Based Unsupervised Unmixing of Hyperspectral Imaging Data Using Basis Pursuit

Authors: Ahmed Elrewainy

Abstract:

Mixing in hyperspectral imaging occurs due to the low spatial resolution of the cameras used. The pure materials present in the scene, called “endmembers”, contribute to each pixel's spectrum in different proportions called “abundances”. Unmixing of the data cube, i.e., identifying the endmembers present in it, is an important task in the analysis of these images. Unsupervised unmixing is done with no prior information about the given data cube. Sparsity is one of the recent approaches used in source recovery and unmixing techniques. The l1-norm optimization problem “basis pursuit” can be used as a sparsity-based approach to this unmixing problem, where the endmembers are assumed to be sparse in an appropriate domain known as a dictionary. The optimization problem is solved using a proximal method, iterative thresholding. This l1-norm basis pursuit formulation was used as a sparsity-based unmixing technique to unmix real and synthetic hyperspectral data cubes.
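The iterative-thresholding solver can be sketched as follows. This is a generic ISTA implementation of the sparse recovery step on synthetic data, not the paper's exact dictionary or data cube; the dictionary size, sparsity pattern, and regularization weight are illustrative assumptions:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.01, n_iter=3000):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))          # dictionary (e.g., spectral library)
x_true = np.zeros(100)
x_true[[5, 50, 80]] = [1.0, -0.7, 0.5]      # sparse "abundances"
y = A @ x_true                              # observed mixed spectrum
x_hat = ista(A, y)
print(np.flatnonzero(np.abs(x_hat) > 0.1))  # recovered support
```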

Keywords: basis pursuit, blind source separation, hyperspectral imaging, spectral unmixing, wavelets

Procedia PDF Downloads 192
21920 A Comparative Study of the Proposed Models for the Components of the National Health Information System

Authors: M. Ahmadi, Sh. Damanabi, F. Sadoughi

Abstract:

The National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for the strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, a national health information system improves the quality of the health data, information, and knowledge used to support decision making at all levels and in all areas of the health sector. Since full identification of the components of this system seems necessary for better planning and management of the factors influencing its performance, this study comparatively explores different perspectives on its components. Methods: This is a descriptive, comparative study. The study material comprises printed and electronic documents describing the components of the national health information system in three parts: input, process, and output. Information was gathered through library resources and internet searches, and the data were analyzed using comparative tables and qualitative methods. Results: The findings showed that there are three different perspectives on the components of a national health information system: the Lippeveld, Sauerborn, and Bodart model of 2000, the Health Metrics Network (HMN) model of the World Health Organization of 2008, and Gattini’s 2009 model. In the input section (resources and structure), all three models require components of management and leadership; planning and program design; and the supply of staff, software and hardware facilities, and equipment. In the process section, all three models emphasize actions ensuring the quality of the health information system, and in the output section, all but the Lippeveld model consider information products and the use and distribution of information as components of the national health information system.
Conclusion: The results showed that all three models discuss the components of health information only briefly in the input section. The Lippeveld model, moreover, overlooks the components of a national health information system in the process and output sections. It therefore appears that the Health Metrics Network model presents the components of a health information system most comprehensively across all three sections: input, process, and output.

Keywords: National Health Information System, components of the NHIS, Lippeveld Model

Procedia PDF Downloads 418
21919 Identification of Flooding Attack (Zero Day Attack) at Application Layer Using Mathematical Model and Detection Using Correlations

Authors: Hamsini Pulugurtha, V.S. Lakshmi Jagadmaba Paluri

Abstract:

Distributed denial of service (DDoS) attacks are currently among the top-rated cyber threats. A DDoS attack drains victim server resources such as bandwidth and buffer size, preventing the server from supplying resources to legitimate clients. In this article, we propose a mathematical model of a DDoS attack and discuss its relevance to features such as the inter-arrival time, or rate of arrival, of the attacking clients accessing the server. We further analyze the attack model in the context of exhausting the bandwidth and buffer size of the victim server. The proposed technique uses an unsupervised learning technique, the self-organizing map, to build clusters of similar features. Finally, the approach applies mathematical correlation and the normal probability distribution to the clusters and analyzes their behavior to detect a DDoS attack. Networked systems today not only interconnect small devices exchanging personal data but also critical infrastructures reporting the status of facilities such as nuclear plants. Although this interconnection brings many benefits and advantages, it also creates new vulnerabilities and threats that can be exploited to mount attacks. In such complex interconnected systems, the ability to detect attacks as early as possible is of paramount importance.
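As a toy illustration of the statistical step only (not the authors' full SOM-based model), the sketch below flags a traffic window whose mean inter-arrival time deviates from a known-normal baseline by more than three standard errors under a normal approximation; the traffic rates are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical request inter-arrival times (seconds): baseline traffic is
# exponential with a moderate rate; a flooding attack drives the rate up.
baseline = rng.exponential(scale=0.10, size=2000)   # known-normal window
window = rng.exponential(scale=0.01, size=2000)     # suspect window

# By the central limit theorem, the baseline window mean is approximately
# normally distributed; flag windows whose mean deviates by > 3 sigma.
mu = baseline.mean()
sigma = baseline.std() / np.sqrt(len(baseline))     # standard error of the mean
z = (window.mean() - mu) / sigma
is_attack = abs(z) > 3.0
print(f"z = {z:.1f}, attack detected: {is_attack}")
```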

Keywords: application layer attack, bandwidth, buffer size, correlation, DDoS, flooding, intrusion prevention, normal probability distribution

Procedia PDF Downloads 216
21918 Through Integrated Project Management and Systems Engineering to Support System Design Development: A Project Management-based Systems Engineering Approach

Authors: Xiaojing Gao, James Njuguna

Abstract:

This paper emphasizes the importance of integrating project management and systems engineering for innovative system design and production development. The research highlights the need for a flexible approach that unifies these disciplines, as their isolation often leads to communication challenges and complexity within multidisciplinary teams. The paper aims to elucidate the intricate relationship between project management and systems engineering, recommending the consolidation of engineering disciplines into a single lifecycle for improved support of the design and development process. The research identifies a synergy between these disciplines, focusing on streamlining information communication during product design and development. The insights gained from this process can lead to product design optimization. Additionally, the paper introduces a proposed Project Management-Based Systems Engineering (PMBSE) framework, emphasizing effective communication, efficient processes, and advanced tools to enhance product development outcomes within the product lifecycle.

Keywords: system engineering, product design and development, project management, cross-disciplinary

Procedia PDF Downloads 66
21917 A Comparative Study of k-NN and MLP-NN Classifiers Using GA-kNN Based Feature Selection Method for Wood Recognition System

Authors: Uswah Khairuddin, Rubiyah Yusof, Nenny Ruthfalydia Rosli

Abstract:

This paper presents a comparative study of the k-Nearest Neighbour (k-NN) and Multi-Layer Perceptron Neural Network (MLP-NN) classifiers, using a Genetic Algorithm (GA) as the feature selector, for a wood recognition system. The features are extracted from the images using the Grey Level Co-Occurrence Matrix (GLCM). GA-based feature selection is used mainly to ensure that the database used for training the wood species pattern classifier consists of only optimized features. The feature selection process aims to select only the most discriminating features of the wood species, reducing confusion for the pattern classifier. The approach retains the ‘good’ features, those that minimize the intra-class distance and maximize the inter-class distance. A wrapper GA is used with the k-NN classifier as the fitness evaluator (GA-kNN). The results show that k-NN is the better choice of classifier because it uses a very simple distance calculation algorithm and classification can be done in a short time with good accuracy.
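The wrapper loop can be sketched as follows: a GA evolves binary feature masks, and leave-one-out k-NN accuracy on the selected features serves as the fitness. The data are synthetic stand-ins for the GLCM features, and the population size, generation count, and mutation rate are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for GLCM texture features: 2 informative + 6 noise dims.
X = rng.standard_normal((120, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def knn_loo_accuracy(feats, labels, k=3):
    """Leave-one-out k-NN accuracy, used as the wrapper fitness."""
    d = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)            # exclude the point itself
    nn = np.argsort(d, axis=1)[:, :k]      # k nearest neighbours
    pred = (labels[nn].mean(axis=1) > 0.5).astype(int)   # majority vote
    return (pred == labels).mean()

def fitness(mask):
    return knn_loo_accuracy(X[:, mask.astype(bool)], y) if mask.any() else 0.0

# Minimal GA: tournament selection, uniform crossover, bit-flip mutation.
pop = rng.integers(0, 2, size=(20, X.shape[1]))
for _ in range(15):
    scores = np.array([fitness(m) for m in pop])
    new = [pop[scores.argmax()].copy()]                  # elitism
    while len(new) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        a = pop[i] if scores[i] >= scores[j] else pop[j]  # tournament pick
        i, j = rng.integers(0, len(pop), 2)
        b = pop[i] if scores[i] >= scores[j] else pop[j]
        child = np.where(rng.random(len(a)) < 0.5, a, b)  # uniform crossover
        child ^= (rng.random(len(child)) < 0.1).astype(child.dtype)  # mutate
        new.append(child)
    pop = np.array(new)

best = pop[np.array([fitness(m) for m in pop]).argmax()]
print(best, fitness(best))
```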

Keywords: feature selection, genetic algorithm, optimization, wood recognition system

Procedia PDF Downloads 538
21916 Kerr Electric-Optic Measurement of Electric Field and Space Charge Distribution in High Voltage Pulsed Transformer Oil

Authors: Hongda Guo, Wenxia Sima

Abstract:

Transformer oil is widely used in power systems because of its excellent insulation properties. The accurate measurement of the electric field and space charge distribution in transformer oil under high voltage impulse has important theoretical and practical significance, but it remains challenging because of the oil's low Kerr constant. In this study, the continuous electric field and space charge distributions over time between parallel-plate electrodes in high-voltage pulsed transformer oil are directly measured, based on the Kerr effect, using a linear array photoelectric detector. Experimental results demonstrate the applicability and reliability of this method. This study provides a feasible approach for further studying space charge effects and breakdown mechanisms in transformer oil.
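For context, the textbook Kerr electro-optic relations (standard crossed-polarizer optics, not specific to this paper's apparatus) connecting the applied field to the measured light intensity are:

```latex
% Kerr-induced phase retardation over optical path length L in a field E,
% and the transmitted intensity between crossed polarizers:
\delta = 2\pi B L E^{2}, \qquad
I = I_{0}\,\sin^{2}\!\left(\frac{\delta}{2}\right)
```

where B is the Kerr constant. The small B of transformer oil makes the retardation, and hence the intensity modulation, weak, which is why this measurement is difficult.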

Keywords: electric field, Kerr, space charge, transformer oil

Procedia PDF Downloads 360
21915 Design and Development of High Strength Aluminium Alloy from Recycled 7xxx-Series Material Using Bayesian Optimisation

Authors: Alireza Vahid, Santu Rana, Sunil Gupta, Pratibha Vellanki, Svetha Venkatesh, Thomas Dorin

Abstract:

Aluminum is the preferred material for lightweight applications, and its alloys are constantly improving. The high-strength 7xxx alloys have been extensively used for structural components in the aerospace and automobile industries for the past 50 years. In the next decade, a great number of airplanes will be retired, providing an obvious source of valuable used metal and creating great demand for cost-effective methods to re-use these alloys. The design of aerospace alloys is primarily based on optimizing strength and ductility, both of which can be improved by controlling the alloying additions as well as the heat treatment conditions. In this project, we explore the design of high-performance alloys with 7xxx as the base material. These designed alloys must be optimized and improved to compare with modern 7xxx-series alloys and to remain competitive for aircraft manufacturing. Aerospace alloys are extremely complex, with multiple alloying elements and numerous processing steps, making optimization intensive and costly. In the present study, we used the Bayesian optimization algorithm, a well-known adaptive design strategy, to optimize this multi-variable system. An Al alloy was proposed, and the relevant heat treatment schedules were optimized using the tensile yield strength as the output to be maximized. The designed alloy has a maximum yield strength and ultimate tensile strength of more than 730 and 760 MPa, respectively, and is thus comparable to modern high-strength 7xxx-series alloys. The microstructure of this alloy was characterized by electron microscopy, indicating that the increased strength of the alloy is due to the presence of a high number density of refined precipitates.
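A minimal sketch of such a sequential loop follows, using a Gaussian-process surrogate with an upper-confidence-bound acquisition. The yield-strength response, variable ranges, and kernel settings are invented stand-ins for illustration, not the authors' actual model or data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical yield-strength response (MPa) over two normalized
# heat-treatment variables (e.g., aging temperature and time).
def strength(x):
    return 700 + 60 * np.exp(-8 * ((x[:, 0] - 0.6) ** 2 + (x[:, 1] - 0.3) ** 2))

def rbf(a, b, ls=0.3):
    """Squared-exponential covariance between two point sets."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

X = rng.uniform(0, 1, (5, 2))          # initial "experiments"
y = strength(X)
for _ in range(20):                    # sequential Bayesian optimization loop
    K = rbf(X, X) + 1e-6 * np.eye(len(X))
    Kinv = np.linalg.inv(K)
    cand = rng.uniform(0, 1, (500, 2))            # candidate schedules
    k = rbf(cand, X)
    mu = k @ Kinv @ (y - y.mean()) + y.mean()     # GP posterior mean
    var = np.clip(1.0 - np.einsum('ij,jk,ik->i', k, Kinv, k), 1e-9, None)
    ucb = mu + 2 * 60 * np.sqrt(var)              # upper confidence bound
    x_next = cand[ucb.argmax()][None, :]          # next experiment to run
    X = np.vstack([X, x_next])
    y = np.append(y, strength(x_next))
print(len(y), round(y.max(), 1))
```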

Keywords: aluminum alloys, Bayesian optimization, heat treatment, tensile properties

Procedia PDF Downloads 112
21914 The Effect of Initial Sample Size and Increment in Simulation Samples on a Sequential Selection Approach

Authors: Mohammad H. Almomani

Abstract:

In this paper, we examine the effect of the initial sample size and the increment in simulation samples on the performance of a sequential approach used to select the top m designs when the number of alternative designs is very large. The sequential approach consists of two stages. In the first stage, ordinal optimization is used to select a subset that, with high probability, overlaps with the set of the actual best k% of designs. In the second stage, optimal computing budget allocation is used to select the top m designs from the selected subset. We apply the selection approach to a generic example under several parameter settings, with different choices of initial sample size and increment in simulation samples, to explore the impact on the performance of this approach. The results show that the choice of initial sample size and increment in simulation samples does affect the performance of the selection approach.
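The two-stage procedure can be sketched as follows. The problem instance is synthetic, and the naive stage-2 allocation rule is only a stand-in for optimal computing budget allocation; n0 and delta play the roles of the initial sample size and the increment studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

true_means = np.linspace(0, 10, 50)   # 50 alternative designs (unknown to us)
n0, delta, m = 10, 5, 3               # initial sample size, increment, top-m

def simulate(i, n):
    """n noisy performance replications of design i."""
    return true_means[i] + rng.standard_normal(n)

# Stage 1: ordinal optimization -- crude screen with n0 samples per design.
est = np.array([simulate(i, n0).mean() for i in range(len(true_means))])
subset = np.argsort(est)[-10:]        # keep the observed top 20%

# Stage 2: spend the extra budget in increments of delta on the subset only
# (naive rule: refine the current top contenders each round).
samples = {i: list(simulate(i, n0)) for i in subset}
for _ in range(30):
    means = {i: np.mean(s) for i, s in samples.items()}
    for i in sorted(means, key=means.get)[-m - 2:]:
        samples[i].extend(simulate(i, delta))

top_m = sorted(samples, key=lambda i: np.mean(samples[i]))[-m:]
print(sorted(int(i) for i in top_m))
```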

Keywords: large scale problems, optimal computing budget allocation, ordinal optimization, simulation optimization

Procedia PDF Downloads 351
21913 Dynamic Two-Way FSI Simulation for a Blade of a Small Wind Turbine

Authors: Alberto Jiménez-Vargas, Manuel de Jesús Palacios-Gallegos, Miguel Ángel Hernández-López, Rafael Campos-Amezcua, Julio Cesar Solís-Sanchez

Abstract:

An optimal wind turbine blade design must be able to capture as much energy as possible from the wind resource available at the area of interest. Often, an optimal design implies large quantities of material and complicated processes that make the wind turbine more expensive and, therefore, less cost-effective. In the construction and installation of a wind turbine, the blades may account for up to 20% of the overall price, and they are all the more important because they form part of the rotor system, which transmits the energy from the wind to the power train and in which the static and dynamic design loads for the whole wind turbine are produced. The aim of this work is the development of a blade fluid-structure interaction (FSI) simulation that allows the identification of the major damage zones under normal production conditions, so that better design and optimization decisions can be taken. The simulation is a dynamic case, since a time-history wind velocity is used as the inlet condition instead of a constant wind velocity. The process begins with the freely available software NuMAD (NREL), used to model the blade and assign its material properties; the 3D model is then exported to the ANSYS Workbench platform where, before setting up the FSI system, a modal analysis is performed to identify the natural frequencies and mode shapes. The FSI analysis is carried out with the two-way technique, which begins with a CFD simulation to obtain the pressure distribution on the blade surface; these results are then used as boundary conditions for the FEA simulation to obtain the deformation levels for the first time step. For the second time step, the CFD simulation is reconfigured automatically with the next time step's inlet wind velocity and the deformation results from the previous time step. The analysis continues this iterative cycle, solving time step by time step, until the entire load case is completed.
This work is part of a set of projects managed by a national consortium called “CEMIE-Eólico” (Mexican Center in Wind Energy Research), created to strengthen technological and scientific capacities, promote the training of specialized human resources, and link academia with the private sector nationwide. The analysis belongs to the design of a rotor system for a 5 kW wind turbine intended for installation at the Isthmus of Tehuantepec, Oaxaca, Mexico.
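The two-way coupling loop described above can be sketched schematically. The one-degree-of-freedom "solvers" below are toy stand-ins for the coupled CFD and FEA codes, with purely illustrative constants; the point is the exchange pattern, not the physics:

```python
# Toy stand-ins for the coupled solvers: the real workflow exchanges data
# between ANSYS CFD and FEA; all constants here are illustrative only.

def cfd_pressure(wind_speed, tip_deflection):
    """Aerodynamic load on the 'blade'; deflection feeds back (two-way)."""
    rho, area = 1.225, 2.0                       # air density, reference area
    return 0.5 * rho * wind_speed**2 * area * (1 - 0.05 * tip_deflection)

def fea_deflection(load, stiffness=500.0):
    """Quasi-static structural response of a one-DOF 'blade'."""
    return load / stiffness

wind_history = [6.0, 8.0, 11.0, 9.0, 7.0]        # time-history inlet velocity (m/s)
deflection = 0.0
for t, v in enumerate(wind_history):             # march time step by time step
    for _ in range(20):                          # inner two-way exchange loop
        load = cfd_pressure(v, deflection)       # CFD with current geometry
        new_deflection = fea_deflection(load)    # FEA with the CFD loads
        if abs(new_deflection - deflection) < 1e-8:
            break                                # exchange converged this step
        deflection = new_deflection
    print(f"t={t}: U={v} m/s, load={load:.1f} N, "
          f"deflection={deflection * 1000:.2f} mm")
```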

Keywords: blade, dynamic, FSI, wind turbine

Procedia PDF Downloads 475