Search results for: Fault Coverage
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 531

51 Performance Based Design of Masonry Infilled Reinforced Concrete Frames for Near-Field Earthquakes Using Energy Methods

Authors: Alok Madan, Arshad K. Hashmi

Abstract:

Performance based design (PBD) is an iterative exercise: a preliminary trial design of the building structure is selected and, if it does not meet the desired performance objective, it is revised. In this context, the development of a fundamental approach for performance based seismic design of masonry infilled frames that requires a minimum number of trials is an important objective. The paper presents a plastic design procedure based on the energy balance concept for PBD of multi-story, multi-bay masonry infilled reinforced concrete (R/C) frames subjected to near-field earthquakes. The proposed energy based plastic design procedure was implemented for trial performance based seismic design of representative masonry infilled reinforced concrete frames with various practically relevant distributions of masonry infill panels over the frame elevation. Non-linear dynamic analyses of the trial PBDs of masonry infilled R/C frames were performed under the action of near-field earthquake ground motions. The results demonstrate that the proposed energy method is effective for performance based design of masonry infilled R/C frames under near-field as well as far-field earthquakes.
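
To make the energy balance concept concrete, the sketch below applies it to a single-degree-of-freedom idealization: an assumed input energy is equated to elastic plus plastic work to back out a design base shear. The equation form, parameter names, and all values are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def required_base_shear(mass, sv, drift_yield, drift_target, height, gamma=1.0):
    """Energy-balance estimate of design base shear (SDOF idealization).

    Equates an assumed input energy gamma * 1/2 * m * Sv^2 to the elastic
    work (0.5 * V * yield drift * H) plus the plastic work
    (V * plastic drift * H), then solves for V. Illustrative only.
    """
    e_input = gamma * 0.5 * mass * sv**2        # seismic input energy (J)
    theta_p = drift_target - drift_yield        # plastic drift demand
    return e_input / (height * (0.5 * drift_yield + theta_p))

# Hypothetical frame: 2000 t, Sv = 1.2 m/s, 0.5% yield / 2.0% target drift, H = 15 m
print(f"V = {required_base_shear(2.0e6, 1.2, 0.005, 0.02, 15.0):.0f} N")
```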

Keywords: Masonry Infilled Frame, Energy Methods, Near-fault Ground Motions, Pushover Analysis, Nonlinear Dynamic Analysis, Seismic Demand.

50 Multi-Line Flexible Alternating Current Transmission System (FACTS) Controller for Transient Stability Analysis of a Multi-Machine Power System Network

Authors: A.V.Naresh Babu, S.Sivanagaraju

Abstract:

Considerable progress has been achieved in transient stability analysis (TSA) with various FACTS controllers, but all of these controllers are associated with a single transmission line. This paper discusses a new approach, a multi-line FACTS controller, namely the interline power flow controller (IPFC), for TSA of a multi-machine power system network. A mathematical model of the IPFC, termed the power injection model (PIM), is presented and incorporated into the Newton-Raphson (NR) power flow algorithm. The reduced admittance matrix of a multi-machine power system network for a three-phase fault, without and with the IPFC, is then obtained, which is required to draw the machine swing curves. A general approach based on the L-index is also discussed to find the best location of the IPFC to reduce the proximity of a power system to instability. Numerical results are presented for two test systems, a 6-bus and an 11-bus system. A MATLAB program was written to plot the variation of generator rotor angle and speed difference curves without and with the IPFC for TSA, and a simple approach to evaluate the critical clearing time for the test systems is also presented. The results obtained without and with the IPFC are compared and discussed.
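
As a minimal illustration of the critical clearing time evaluation mentioned above, the sketch below integrates the swing equation for a single machine against an infinite bus and brackets the clearing time by bisection; it is not the paper's multi-machine, IPFC-equipped model, and every parameter is invented.

```python
import numpy as np

H, f0 = 5.0, 50.0                  # inertia constant (s), system frequency (Hz)
M = H / (np.pi * f0)               # per-unit inertia coefficient
Pm = 0.8                           # mechanical input power (pu)
Pmax_pre, Pmax_fault, Pmax_post = 1.8, 0.4, 1.5   # power-angle curve peaks

def stable_after_clearing(t_clear, dt=1e-4, t_end=3.0):
    delta = np.arcsin(Pm / Pmax_pre)        # pre-fault operating angle
    omega, t = 0.0, 0.0
    while t < t_end:
        Pmax = Pmax_fault if t < t_clear else Pmax_post
        omega += (Pm - Pmax * np.sin(delta)) / M * dt
        delta += omega * dt
        if delta > np.pi:                   # pole slip -> first-swing unstable
            return False
        t += dt
    return True

lo, hi = 0.0, 1.0                  # bisection on the clearing time
for _ in range(20):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if stable_after_clearing(mid) else (lo, mid)
print(f"critical clearing time ~ {lo:.3f} s")
```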

Keywords: Flexible alternating current transmission system (FACTS), first swing stability, interline power flow controller (IPFC), power injection model (PIM).

49 Design of QFT-Based Self-Tuning Deadbeat Controller

Authors: H. Mansor, S. B. Mohd Noor

Abstract:

This paper presents a design method for a self-tuning Quantitative Feedback Theory (QFT) controller using an improved deadbeat control algorithm. QFT is a technique to achieve robust control with pre-defined specifications, whereas deadbeat is an algorithm that can bring the output to steady state in a minimum number of steps. Nevertheless, there are usually large peaks in the deadbeat response. By integrating QFT specifications into the deadbeat algorithm, the large peaks can be tolerated. On the other hand, merging QFT with an adaptive element produces a robust controller with wider coverage of uncertainty. By combining the QFT-based deadbeat algorithm with an adaptive element, a superior controller, called the self-tuning QFT-based deadbeat controller, can be achieved. An output response that is fast, robust, and adaptive is expected. Using a grain dryer plant model as a pilot case study, the performance of the proposed method has been evaluated and analyzed. The grain drying process is very complex, with highly nonlinear behaviour and long delay, and it is affected by environmental changes and disturbances. Performance comparisons have been made between the proposed self-tuning QFT-based deadbeat, standard QFT, and standard deadbeat controllers. The test results prove the efficiency of the self-tuning QFT-based deadbeat controller: its parameters are updated online, and it shows a lower percentage overshoot and a shorter settling time, especially when there are variations in the plant.
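
For readers unfamiliar with the deadbeat idea, here is a minimal sketch for a discrete first-order plant; the plant parameters are invented, and the paper's QFT specifications and adaptive layer are only indicated in the closing comment.

```python
# Deadbeat control of y[k+1] = a*y[k] + b*u[k]: choosing
# u[k] = (r - a*y[k]) / b drives the output to the reference in one step.
a, b = 0.9, 0.5            # illustrative plant parameters
r, y = 1.0, 0.0            # reference and initial output
for k in range(5):
    u = (r - a * y) / b    # deadbeat control law
    y = a * y + b * u
    print(k, round(y, 6))  # output reaches r after the first step

# A self-tuning variant would re-estimate (a, b) online (e.g., with
# recursive least squares) and recompute the law every sample, with QFT
# bounds used to keep the resulting peaks within specification.
```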

Keywords: Deadbeat control, quantitative feedback theory (QFT), robust control, self-tuning control.

48 A Growing Neural Gas Approach for Evaluating Quality of Software Modules

Authors: Parvinder S. Sandhu, Sandeep Khimta, Kiranpreet Kaur

Abstract:

Predicting software quality during the development life cycle of a software project helps the development organization make efficient use of available resources to produce a product of the highest quality. A "whether a module is faulty or not" approach can be used to predict the quality of a software module. A number of software quality prediction models described in the literature are based upon genetic algorithms, artificial neural networks, and other data mining algorithms. One of the promising approaches to quality prediction is based on clustering techniques. Most quality prediction models based on clustering use the K-means, Mixture-of-Gaussians, Self-Organizing Map, Neural Gas, or fuzzy K-means algorithm. All of these techniques require a predefined structure: the number of neurons or clusters must be known before the clustering process starts. In the case of Growing Neural Gas, however, there is no need to predetermine the quantity of neurons or the topology of the structure; it starts with a minimal neuron structure that is incremented during training until it reaches a user-defined maximum number of clusters. Hence, in this work we use Growing Neural Gas as the underlying clustering algorithm to produce an initial set of labeled clusters from the training data set; this set of clusters is then used to predict the quality of the software modules in the test data set. The best testing results show 80% accuracy in evaluating the quality of software modules. The proposed technique can therefore be used by programmers to evaluate the quality of modules during software development.
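
The sketch below condenses the Growing Neural Gas algorithm (after Fritzke, 1995) that underlies this approach: nodes are inserted where accumulated error is largest until a user-defined maximum is reached. Node removal is omitted for brevity, all parameters are illustrative, and the fault-labelling step is only indicated in the closing comment.

```python
import numpy as np

rng = np.random.default_rng(0)

def gng(X, max_nodes=20, eps_b=0.05, eps_n=0.005, max_age=50,
        lam=100, alpha=0.5, decay=0.995, n_iter=5000):
    W = [X[rng.integers(len(X))].copy() for _ in range(2)]  # node weights
    E = [0.0, 0.0]                                          # accumulated errors
    edges = {}                                              # (i, j) -> age
    for it in range(n_iter):
        x = X[rng.integers(len(X))]
        dist = [float(np.sum((w - x) ** 2)) for w in W]
        s1, s2 = np.argsort(dist)[:2]              # two nearest nodes
        E[s1] += dist[s1]
        W[s1] += eps_b * (x - W[s1])               # move the winner
        for (i, j) in list(edges):                 # age winner's edges,
            if s1 in (i, j):                       # drag its neighbours
                edges[(i, j)] += 1
                k = j if i == s1 else i
                W[k] += eps_n * (x - W[k])
        edges[tuple(sorted((int(s1), int(s2))))] = 0   # refresh winner edge
        edges = {e: a for e, a in edges.items() if a <= max_age}
        if it % lam == 0 and len(W) < max_nodes:       # grow the network
            q = int(np.argmax(E))
            nb = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if nb:
                f = max(nb, key=lambda k: E[k])        # worst neighbour of q
                W.append(0.5 * (W[q] + W[f])); E.append(alpha * E[q])
                E[q] *= alpha; E[f] *= alpha
                r = len(W) - 1
                edges[tuple(sorted((q, r)))] = 0
                edges[tuple(sorted((f, r)))] = 0
                edges.pop(tuple(sorted((q, f))), None)
        E = [e * decay for e in E]
    return np.array(W)

X = rng.normal(size=(200, 3))   # stand-in for module metric vectors
print(len(gng(X)), "cluster prototypes")
# Each prototype would then be labelled faulty / not-faulty by majority vote
# of its nearest training modules; test modules inherit the nearest label.
```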

Keywords: Growing Neural Gas, data clustering, fault prediction.

47 Cloning and Functional Characterization of Promoter Elements of the D Hordein Gene from the Barley (Hordeum vulgare L.) by Bioinformatic Tools

Authors: Kobra Nalbandi, Bahram Baghban Kohnehrouz, Khalil Alami Saeed

Abstract:

The low expression level of foreign genes in transgenic plants is a key factor limiting plant genetic engineering. Because of their critical regulatory activity in gene transcription, promoters are studied extensively to improve the efficiency of plant transgenic systems. Strong constitutive promoters, such as the CaMV 35S promoter and the maize Ubiquitin 1 promoter, are usually used in plant biotechnology research. However, expression of foreign genes in all tissues is often undesirable, and using a strong seed-specific promoter to limit gene expression to the seed solves such problems. The purpose of this study was to isolate one of the seed-specific promoters of Hordeum vulgare. A common Iranian variety of Hordeum vulgare was selected, its genomic DNA was extracted, and the D-hordein promoter was amplified using specifically designed primers. The amplified fragment was cloned into an appropriate vector and transformed into E. coli, and the cloned fragments were sequenced to confirm their accuracy. Sequence analysis showed that the cloned fragment (DHP) contained motifs such as the TATA box, CAAT box, CCGTCC box, AMYBOX1, and E-box, which constitute the seed-specific promoter activity. The results were compared with sequences in public databases: the D-hordein promoter of the Alger accession showed 99% similarity at 100% coverage. The results also showed that the D-hordein promoter of barley and the HMW promoter of wheat are highly similar.
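
As a toy illustration of the kind of motif scan such bioinformatic characterization involves, the snippet below searches a made-up sequence for common motif patterns; only the motif definitions (TATA box, CAAT box, CCGTCC box, E-box CANNTG) follow standard conventions, and the sequence and coordinates are invented.

```python
import re

motifs = {
    "TATA-box": r"TATA[AT]A",
    "CAAT-box": r"CCAAT",
    "CCGTCC-box": r"CCGTCC",
    "E-box": r"CA[ACGT][ACGT]TG",   # CANNTG
}
promoter = "GGCCAATCTCCGTCCAACATGTGATATAAATAGGCC"  # illustrative only
for name, pattern in motifs.items():
    for m in re.finditer(pattern, promoter):
        print(f"{name} at position {m.start()}: {m.group()}")
```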

Keywords: Barley, Seed specific promoter, Hordein.

46 A Review on Application of Phase Change Materials in Textiles Finishing

Authors: Mazyar Ahrari, Ramin Khajavi, Mehdi Kamali Dolatabadi, Tayebeh Toliyat, Abosaeed Rashidi

Abstract:

Fabric, as the first and most common layer in permanent contact with human skin, is a very good interface for providing coverage as well as heat and cold insulation. Phase change materials (PCMs) are organic and inorganic compounds that can absorb and release noticeable amounts of latent heat during phase transitions between solid and liquid phases over a low temperature range. Because PCMs undergo phase changes (solid-liquid and liquid-solid transitions) while absorbing and releasing thermal energy, they must be encapsulated in polymeric shells, so-called microcapsules, to be usable over a long service life. Microencapsulation and nanoencapsulation methods have been developed to reduce the reactivity of a PCM with the outside environment, promote ease of handling, and decrease diffusion and evaporation rates. Methods of incorporating PCMs into textiles, such as electrospinning, and methods of determining their thermal properties are summarized. Paraffin waxes attract considerable attention owing to their high thermal storage density, repeatable phase change, thermal and chemical stability, small volume change during phase transition, non-toxicity, non-flammability, non-corrosiveness, and low cost, and they seem to play a key role in confronting climate change and global warming. In this article, we review the research concentrating on the characteristics of PCMs and on new materials and methods of microencapsulation.

Keywords: Thermoregulation, phase change materials, microencapsulation, thermal energy storage, nanoencapsulation.

45 An Application of Path Planning Algorithms for Autonomous Inspection of Buried Pipes with Swarm Robots

Authors: Richard Molyneux, Christopher Parrott, Kirill Horoshenkov

Abstract:

This paper aims to demonstrate how various algorithms can be implemented within swarms of autonomous robots to provide continuous inspection of underground pipeline networks. Current methods of fault detection within pipes are costly, time-consuming, and inefficient; as such, solutions tend toward a reactive approach, repairing faults rather than proactively seeking leaks and blockages. The paper presents an efficient inspection method, showing that autonomous swarm robotics is a viable way of monitoring underground infrastructure. Tailored adaptations of various Vehicle Routing Problems (VRPs) and path-planning algorithms provide a customised inspection procedure for complicated networks of underground pipes. The performance of multiple algorithms is compared to determine their effectiveness and feasibility. Notable inspirations come from ant colonies and stigmergy, graph theory, the k-Chinese Postman Problem (k-CPP), and traffic theory. Unlike most swarm behaviours, which rely on fast communication between agents, underground pipe networks are a highly challenging communication environment with extremely limited communication ranges, owing to the extreme variability in pipe conditions and the relatively high attenuation of the acoustic and radio waves with which robots would usually communicate. This paper illustrates how to optimise the inspection process and how to increase the frequency with which the robots pass each other, without compromising the routes they are able to take to cover the whole network.
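
As a single-robot baseline for the routing ideas discussed (the k-CPP then splits such a tour among k robots), the sketch below eulerizes a toy pipe network and walks an Euler circuit so that every pipe is traversed with minimal repetition; the network and edge lengths are invented.

```python
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([          # invented pipe segments (lengths in m)
    ("A", "B", 30), ("B", "C", 20), ("C", "D", 25),
    ("D", "A", 15), ("B", "D", 10),
])
# Duplicate edges until every node has even degree, then an Euler circuit
# inspects every pipe at least once with minimal repeated length.
H = nx.eulerize(G)
tour = list(nx.eulerian_circuit(H, source="A"))
print(" -> ".join(u for u, _ in tour) + " -> A")
```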

Keywords: Autonomous inspection, buried pipes, stigmergy, swarm intelligence, vehicle routing problem.

44 Removal of Malachite Green from Aqueous Solution using Hydrilla verticillata -Optimization, Equilibrium and Kinetic Studies

Authors: R. Rajeshkannan, M. Rajasimman, N. Rajamohan

Abstract:

In this study, the sorption of malachite green (MG) on Hydrilla verticillata biomass, a submerged aquatic plant, was investigated in a batch system. The effects of operating parameters such as temperature, adsorbent dosage, contact time, adsorbent size, and agitation speed on the sorption of malachite green were analyzed using response surface methodology (RSM). According to the ANOVA results, the proposed quadratic model for the central composite design (CCD) fitted the experimental data well enough to be used to navigate the design space. The optimum sorption conditions were determined as temperature 43.5 °C, adsorbent dosage 0.26 g, contact time 200 min, adsorbent size 0.205 mm (65 mesh), and agitation speed 230 rpm. The Langmuir and Freundlich isotherm models were applied to the equilibrium data. The maximum monolayer coverage capacity of Hydrilla verticillata biomass for MG was found to be 91.97 mg/g at an initial pH of 8.0, indicating that this is the optimum initial pH for sorption. The external and intra-particle diffusion models were also applied to the sorption data, and it was found that both external and intra-particle diffusion contribute to the actual sorption process. The pseudo-second-order kinetic model described the MG sorption process with a good fit.
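
For concreteness, the snippet below fits the Langmuir isotherm mentioned above to a small set of invented equilibrium points; only the functional form qe = qmax·KL·Ce/(1 + KL·Ce) is standard.

```python
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([5.0, 10, 25, 50, 100, 200])   # equilibrium conc. (mg/L), invented
qe = np.array([20.0, 34, 55, 70, 82, 88])    # uptake (mg/g), invented

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1 + KL * Ce)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=(90, 0.05))
print(f"qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")
```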

Keywords: Response surface methodology, Hydrilla verticillata, malachite green, adsorption, central composite design.

43 To Cloudify or Not to Cloudify

Authors: Laila Yasir Al-Harthy, Ali H. Al-Badi

Abstract:

As an emerging business model, cloud computing has been introduced to satisfy the needs of organizations and to position Information Technology as a utility. The shift to the cloud has changed the way Information Technology departments are traditionally managed and has raised many concerns for both the public and private sectors.

The purpose of this study is to investigate the possibility of cloud computing services replacing services provided traditionally by IT departments. Therefore, it aims to 1) explore whether organizations in Oman are ready to move to the cloud; 2) identify the deciding factors leading to the adoption or rejection of cloud computing services in Oman; and 3) provide two case studies, one for a successful Cloud provider and another for a successful adopter.

This paper draws on multiple research methods, including a set of interviews with cloud service providers and current cloud users in Oman, and questionnaire data collected from experts in the field and from potential users of cloud services.

Despite the limited bandwidth capacity and Internet coverage in Oman, which create a challenge for cloud adoption, it was found that many information technology professionals are keen to move to the cloud, while a few are resistant to change.

The recent launch of a new Omani cloud service provider and the entrance of other international cloud service providers in the Omani market make this research extremely valuable as it aims to provide real-life experience as well as two case studies on the successful provision of cloud services and the successful adoption of these services.

Keywords: Cloud computing, cloud deployment models, cloud service models and deciding factors.

42 Evaluating Efficiency of Nina Distribution Company Using Window Data Envelopment Analysis and Malmquist Index

Authors: Hossein Taherian Far, Ali Bazaee

Abstract:

Achieving continuous, sustained economic growth and the economic development that follows is a target for every country that seeks it. In this regard, the distribution industry plays an important role in the growth and development of any nation, so estimating the efficiency and productivity of this industry and identifying the factors influencing it is very necessary. The objective of the present study is to measure the efficiency and productivity of seven branches of the Nina Distribution Company using window data envelopment analysis and the Malmquist productivity index from spring 2013 to summer 2015. Fixed assets, payroll personnel, operating costs, and the receivables collection period were selected as inputs, while personnel, net sales, gross profit, and percentage of customer coverage were selected as outputs. Window data envelopment analysis was then performed, and productivity change was measured using the Malmquist index. The results indicate that the average technical efficiency in the window Data Envelopment Analysis (DEA) model fluctuates but is sustained. The average management efficiency in the window DEA model, however, shows negative growth (a decline) of about 13%. The mean scale efficiency shows 18% growth relative to the first window in all windows except the second, where it is 8%. On the other hand, the mean change in total factor productivity across all branches shows an average negative growth (decrease) of 12%, the result of a negative change in technology.
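
The building block behind window DEA is a linear program solved per decision-making unit (DMU); the sketch below solves the input-oriented CCR model for a few invented branches (window DEA repeats this with each window's pooled observations).

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 3.0, 5.0, 6.0, 4.5],        # inputs: rows = input, cols = branch
              [2.0, 1.0, 3.0, 2.0, 2.0]])
Y = np.array([[60.0, 40.0, 70.0, 95.0, 65.0]])  # outputs: rows = output

def ccr_efficiency(k):
    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]                 # variables: [theta, lambda_1..n]
    A = np.vstack([
        np.hstack([-X[:, [k]], X]),             # X @ lam <= theta * x_k
        np.hstack([np.zeros((Y.shape[0], 1)), -Y]),  # Y @ lam >= y_k
    ])
    b = np.r_[np.zeros(X.shape[0]), -Y[:, k]]
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                              # theta: 1.0 means efficient

for k in range(X.shape[1]):
    print(f"branch {k + 1}: CCR efficiency = {ccr_efficiency(k):.3f}")
```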

Keywords: Nina Distribution Company branches, window data envelopment analysis, Malmquist productivity index.

41 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images

Authors: U. Datta

Abstract:

The main objective of this study is to find a suitable approach to monitoring land infrastructure growth over a period of time using multispectral satellite images. Bi-temporal change detection methods are unable to indicate continuous change occurring over a long period. To achieve the objective, the approach used here estimates a statistical model from a series of multispectral images acquired over a long period of time, assuming there is no considerable change during that period, and then compares it with multispectral image data obtained at a later time. The change is estimated pixel-wise. A statistical composite hypothesis technique is used to estimate pixel-based change in a defined region: the generalized likelihood ratio test (GLRT) detects changed pixels against the probabilistic model estimated for each corresponding pixel. Change is detected assuming the images have been co-registered prior to estimation; to minimize error due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are several challenges in this method. The first and foremost is obtaining a large enough number of data sets for multivariate distribution modelling, as a large number of images are always discarded due to cloud coverage. Due to imperfect modelling, there will also be a high probability of false alarm. The overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given some promising results, which need to be pursued further.
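
In the Gaussian case, the GLRT for a new pixel against its modelled history reduces to a Mahalanobis distance compared with a chi-square quantile; the sketch below shows this pixel-wise test on simulated band values (not actual Sentinel-2/Landsat-8 data).

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
history = rng.normal([0.30, 0.25, 0.20, 0.40], 0.02, size=(40, 4))  # 40 dates, 4 bands
mu = history.mean(axis=0)
cov = np.cov(history, rowvar=False)

def changed(pixel, alpha=1e-3):
    d = pixel - mu
    m2 = d @ np.linalg.solve(cov, d)       # squared Mahalanobis distance
    return m2 > chi2.ppf(1 - alpha, df=d.size)

print(changed(mu + 0.01))   # False: within normal variability
print(changed(mu + 0.20))   # True: flagged as change
```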

Keywords: Co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection.

40 The Coverage of the Object-Oriented Framework Application Class-Based Test Cases

Authors: Jehad Al Dallal, Paul Sorenson

Abstract:

An application framework provides a reusable design and implementation for a family of software systems. Frameworks are introduced to reduce the cost of a product line (i.e., a family of products that share common features). Software testing is a time-consuming and costly ongoing activity during the application software development process. Generating reusable test cases for framework applications at the framework development stage, and providing and using these test cases to test part of a framework application whenever the framework is used, reduces application development time and cost considerably. Framework Interface Classes (FICs) are classes introduced by the framework hooks to be implemented at the application development stage. They can have reusable test cases generated at the framework development stage and provided with the framework to test the implementations of the FICs at the application development stage. In this paper, we conduct a case study using thirteen applications developed with three frameworks: one domain-oriented and two application-oriented. The results show that, in general, the percentage of FICs in applications developed using domain frameworks is, on average, greater than the percentage in applications developed using application frameworks. Consequently, the reduction in application unit-testing time achieved with the reusable test cases generated for domain frameworks is, in general, greater than that achieved with the reusable test cases generated for application frameworks.

Keywords: FICs, object-oriented framework, object-oriented framework application, software testing.

39 Analysis and Remediation of Fecal Coliform Bacteria Pollution in Selected Surface Water Bodies of Enugu State of Nigeria

Authors: Chime Charles C., Ikechukwu Alexander Okorie, Ekanem E.J., Kagbu J. A.

Abstract:

An assessment of surface waters in the Enugu metropolis for fecal coliform bacteria was undertaken. Enugu urban was divided into three areas (A1, A2, and A3), and fecal coliform bacteria were analysed in the surface waters found in these areas over four years (2005-2008), using the plate count method. The data generated were subjected to statistical tests involving normality tests, homogeneity of variance tests, correlation tests, and tolerance limit tests. The influence of seasonality and pollution trends were investigated using time series plots. Results from the tolerance limit test at 95% coverage with 95% confidence, with respect to the EU maximum permissible concentration, show that all three areas suffer from fecal coliform pollution. To this end, a remediation procedure was studied involving the use of sawdust extracts from three woods, namely Chlorophora excelsa (C. excelsa), Khaya senegalensis (K. senegalensis), and Erythrophylum ivorensis (E. ivorensis), to control the coliforms. The results show that a mixture of the acetone extracts of the woods exhibits the most effective antibacterial inhibitory activity (26.00 mm zone of inhibition) against E. coli. The methanol extract mixture of the three woods gave the best inhibitory activity (26.00 mm zone of inhibition) against S. aureus, and 25.00 mm zones of inhibition against E. aerogenes. The aqueous extract mixture gave acceptable zones of inhibition against the three bacterial organisms.

Keywords: Coliform bacteria, pollution, remediation, sawdust.

38 Development of a Smart System for Measuring Strain Levels of Natural Gas and Petroleum Pipelines on Earthquake Fault Lines in Türkiye

Authors: Ahmet Yetik, Seyit Ali Kara, Cevat Özarpa

Abstract:

Load changes occur on natural gas and oil pipelines as a result of natural disasters. The displacement of the soil around natural gas and oil pipes due to events that cause erosion, such as earthquakes, landslides, and floods, is the source of this load change. Exposure of natural gas and oil pipes to variable loads causes deformation, cracks, and breaks in the pipes, which can cause significant damage to people and the environment, including the risk of explosions. Inspections made after natural disasters readily show which pipes have sustained more damage in the quake-affected regions, and it has been determined that earthquakes in Türkiye have caused permanent damage to pipelines. This project was initiated in response to the identification of cracks and gas leaks at the insulation gaskets placed in the pipelines, especially at junction points. In this study, a SCADA (Supervisory Control and Data Acquisition) application has been developed to monitor load changes caused by natural disasters. The application monitors the changes in the x, y, and z axes of the stresses occurring in the pipes with the help of strain gauge sensors placed on them. During the fieldwork, test setups conforming to the relevant standards were created, integrated into the SCADA system, and monitored. Thanks to the SCADA system developed with the field application, load changes occurring on natural gas and oil pipes are monitored instantly, accumulations that may load the pipes and their surroundings are addressed immediately, and newly arising risks are prevented. The system has contributed to energy supply security, asset management, holistic pipeline management, and overall sustainability in the industry.
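
A minimal sketch of the alarm logic such a SCADA layer might apply is shown below; the section identifier, threshold, and readings are invented, and a real system would first convert bridge voltage to microstrain via the gauge factor.

```python
THRESHOLD_USTRAIN = 500.0   # illustrative per-axis alert level (microstrain)

def check_pipe_section(section_id, strains):
    """strains: dict of microstrain readings for the x, y, and z axes."""
    alerts = [ax for ax, v in strains.items() if abs(v) > THRESHOLD_USTRAIN]
    if alerts:
        print(f"[ALERT] {section_id}: axes {alerts} exceed the limit")
    return alerts

check_pipe_section("KP-112+300", {"x": 120.0, "y": 640.0, "z": 80.0})
```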

Keywords: Earthquake, natural gas pipes, oil pipes, voltage measurement, landslide.

37 Hypogenic Karstification and Conduit System Controlling by Tectonic Pattern in Foundation Rocks of the Salman Farsi Dam in South-Western Iran

Authors: Mehran Koleini, Jan Louis Van Rooy, Adam Bumby

Abstract:

The Salman Farsi dam project is constructed on the Ghareh Aghaj River, about 140 km south of Shiraz city in the Zagros Mountains of southwestern Iran. This tectonic province of south-western Iran is characterized by a simply folded sedimentary sequence. The dam foundation rocks comprise the Asmari Formation of Oligo-Miocene age and generally consist of a variety of karstified carbonate rocks varying from strong to weak. Most of the rocks exposed at the dam site show a primary porosity due to incomplete diagenetic recrystallization and compaction. In addition to these primary dispositions to weathering, the layering conditions (frequency and orientation of bedding) and the subvertical tectonic discontinuities preferentially channelled infiltration by deep-seated hydrothermal solutions. Consequently, the porosity is enlarged by dissolution, and the rocks are expected to be karstified and to develop cavities along bedding, major joint planes, and fault zones. This kind of karst, associated with ascending warm solutions, is termed hypogenic karst. Field observations indicate strong karstification and vuggy intercalations, especially in the middle part of the Asmari succession. The largest karst feature on the dam axis identified by speleological investigations is the Golshany Cave, with a volume of about 150,000 m³. The tendency of the Asmari limestone toward strong dissolution raises concern about seepage from the reservoir in the area of the dam locality.

Keywords: Asmari Limestone, Karstification, Salman Farsi Dam, Tectonic Pattern.

36 Behaviour of Base-Isolated Structures with High Initial Isolator Stiffness

Authors: Ajay Sharma, R.S. Jangid

Abstract:

The analytical seismic response of a multi-story building supported on a base isolation system is investigated under real earthquake motion. The superstructure is idealized as a shear-type flexible building with a lateral degree of freedom at each floor. The force-deformation behaviour of the isolation system is modelled by bi-linear behaviour, which can effectively represent all isolation systems in practice. The governing equations of motion of the isolated structural system are derived, and the response of the system is obtained numerically by a step-by-step method under three real recorded earthquake motions and the pulse motions associated with near-fault earthquakes. The variation of top floor acceleration, inter-story drift, base shear, and bearing displacement of the isolated building is studied for different initial stiffnesses of the bi-linear isolation system. It was observed that a high initial stiffness of the isolation system excites higher modes in the base-isolated structure and generates higher floor accelerations and story drifts. Such behaviour of the base-isolated building, especially when supported on sliding-type isolation systems, can be detrimental to sensitive equipment installed in the building. On the other hand, the bearing displacement and base shear were found to reduce marginally with increasing initial stiffness of the isolation system. The above behaviour of the base-isolated building was further confirmed for different bearing parameters (i.e., post-yield stiffness and characteristic strength) and earthquake motions (i.e., real time histories as well as pulse-type motions).
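
The bi-linear force-deformation model is easy to reproduce for a single degree of freedom; the sketch below integrates it with central differences under a pulse-like input. The clipped "bounding line" formulation and every parameter are illustrative assumptions, not the paper's multi-story model.

```python
import numpy as np

m = 1.0e5                        # superstructure mass (kg)
k1, k2 = 2.0e7, 2.0e6            # initial / post-yield stiffness (N/m)
Fy = 1.0e5                       # yield force (N)
Qd = Fy * (1 - k2 / k1)          # characteristic strength (N)
dt, n = 1e-3, 5000
t = np.arange(n) * dt
ag = 3.0 * np.sin(2 * np.pi * t) * (t < 1.0)   # near-fault-like pulse (m/s^2)

x = np.zeros(n)                  # bearing displacement history
f = 0.0                          # hysteretic isolator force
for i in range(1, n - 1):
    # elastic force increment, clipped to the post-yield bounding lines
    f = np.clip(f + k1 * (x[i] - x[i - 1]),
                k2 * x[i] - Qd, k2 * x[i] + Qd)
    acc = -ag[i] - f / m
    x[i + 1] = 2.0 * x[i] - x[i - 1] + acc * dt * dt

print(f"peak bearing displacement: {np.abs(x).max():.4f} m")
```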

Keywords: base isolation, base shear, bi-linear, earthquake, floor accelerations, inter-story drift, multi-story building, pulse motion, stiffness ratio.

35 Heuristics Analysis for Distributed Scheduling using MONARC Simulation Tool

Authors: Florin Pop

Abstract:

Simulation is a very powerful method for high-performance, high-quality design in distributed systems, and considering the heterogeneity, complexity, and cost of distributed systems, it is currently perhaps the only one. In Grid environments, for example, it is hard and even impossible to evaluate scheduler performance in a repeatable and controllable manner, as resources and users are distributed across multiple organizations with their own policies. In addition, Grid test-beds are limited, and creating an adequately sized test-bed is expensive and time-consuming. Scalability, reliability, and fault-tolerance become important requirements for distributed systems that are to support distributed computation; a distributed system with such characteristics is called dependable. Large environments, like Cloud, offer unique advantages, such as low cost and dependability, and satisfy QoS for all users. Resource management in large environments requires performant scheduling algorithms guided by QoS constraints. This paper presents the performance evaluation of scheduling heuristics guided by different optimization criteria. The algorithms for distributed scheduling are analyzed so as to satisfy user constraints while considering the independent capabilities of resources; this analysis acts as a profiling step for algorithm calibration. The performance evaluation is based on simulation, using MONARC, a powerful tool for simulating large-scale distributed systems. The novelty of this paper consists in synthetic analysis results that offer guidelines for scheduler service configuration and support empirically based decisions. The results could be used in decisions regarding optimizations to existing Grid DAG scheduling and for selecting the proper algorithm for DAG scheduling in various actual situations.
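
As one concrete example of the kind of heuristic such a study profiles, here is a min-min scheduling sketch for independent tasks; the task costs and resource speeds are invented, and MONARC itself is not involved.

```python
tasks = {"t1": 4.0, "t2": 2.0, "t3": 6.0, "t4": 3.0}   # base execution cost
speeds = {"r1": 1.0, "r2": 2.0}                         # resource speed factors

ready = dict.fromkeys(speeds, 0.0)   # time at which each resource frees up
unscheduled = set(tasks)
while unscheduled:
    # min-min: pick the task whose best completion time is globally smallest
    finish, t, r = min((ready[r] + tasks[t] / speeds[r], t, r)
                       for t in unscheduled for r in speeds)
    ready[r] = finish
    unscheduled.remove(t)
    print(f"{t} -> {r}, finishes at {finish}")
print("makespan:", max(ready.values()))
```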

Keywords: Scheduling, Simulation, Performance Evaluation, QoS, Distributed Systems, MONARC.

34 Optimization of Doubly Fed Induction Generator Equivalent Circuit Parameters by Direct Search Method

Authors: Mamidi Ramakrishna Rao

Abstract:

The doubly-fed induction generator (DFIG) is currently the choice for many wind turbines. These generators, when connected to the grid through a converter, are subjected to varied power system conditions such as voltage variation, frequency variation, and short-circuit faults. Further, many countries, such as Canada, Germany, the UK, and Scotland, have distinct grid codes for wind turbines; accordingly, following network faults, wind turbines have to supply a definite reactive current. To satisfy these requirements, including reactive current capability, an optimum electrical design becomes mandatory for the DFIG. This paper optimizes the equivalent circuit parameters of an electrical design for satisfactory DFIG performance, using the direct search method. The variables selected include the electromagnetic core dimensions (diameters and stack length), slot dimensions, radial air gap between stator and rotor, and winding copper cross-section area. Optimization for a 2 MW DFIG has been executed separately for three objective functions: maximum reactive power capability (Case I), maximum efficiency (Case II), and minimum weight (Case III). In the optimization analysis program, voltage variations (10%), leading and lagging power factor (0.95), and speeds corresponding to slips of -0.3 to +0.3 have been considered. The optimum designs obtained for the three objective functions were compared. It can be concluded that the direct search method of optimization helps in determining an optimum electrical design for each objective function, whether efficiency, reactive power capability, or weight minimization.
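
A compass (pattern) search, one member of the direct search family, is sketched below on a two-variable toy objective standing in for the full equivalent-circuit evaluation; the variables, objective, and step schedule are all illustrative assumptions.

```python
import numpy as np

def objective(p):
    airgap, slot_area = p                  # hypothetical design variables
    return (airgap - 1.2) ** 2 + 0.5 * (slot_area - 3.0) ** 2  # toy cost

x = np.array([2.0, 1.0])                   # initial design point
step = 0.5
while step > 1e-4:
    for d in ([1, 0], [-1, 0], [0, 1], [0, -1]):   # compass directions
        trial = x + step * np.array(d)
        if objective(trial) < objective(x):
            x = trial
            break
    else:
        step *= 0.5                        # no improvement: shrink the pattern
print(x.round(4), round(objective(x), 8))
```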

Keywords: Direct search, DFIG, equivalent circuit parameters, optimization.

33 Frank Norris’ McTeague: An Entropic Melodrama

Authors: Mohsen Masoomi, Fazel Asadi Amjad, Monireh Arvin

Abstract:

According to Naturalistic principles, human destiny in the form of blind chance and determinism entraps the individual, so man is a defenceless creature unable to escape the ruthless paws of a stoical universe. In Naturalism, nonetheless, melodrama mirrors a conscious alternative with a peculiar function. A typical American Naturalistic character thus cannot be a subject for social criticism of American society, since such characters are victims not of the ongoing virtual slavery, the capitalist system, or a ruined milieu, but of their own volition and, more importantly, their own frailty of character. Through a Postmodern viewpoint, each Naturalistic work can encompass entropic trends and changes culminating in entire failure and devastation. Frank Norris in McTeague displays the futile struggles of ordinary men and how they end up brutes. McTeague encompasses intoxication, abuse, violation, and ruthless homicides. Norris' depictions of the falling individual as a demon represent the entropic dimension of Naturalistic novels. McTeague's defeat is largely his own fault, the result of his own blunders and resolutions, not of sheer accident. Throughout the novel, each character is a kind of insane quester indicating McTeague's decadence and, by inference, the decadence of Western civilisation. McTeague seems to signal Norris' solicitude for a community fabricated by the elements of negative human demeanours and conduct, bearing acute symptoms of infectious dehumanisation. The aim of this article is to illustrate how one specific negative human disposition can gradually, like a running fire, spread everywhere and burn everything in its path. The author applies the concept of entropy metaphorically to describe the individual devolutions that collectively comprise community entropy in McTeague, a dying universe.

Keywords: Animal imagery, entropy, Gypsy, melodrama.

32 Cost Benefit Analysis: Evaluation among the Millimetre Wavebands and SHF Bands of Small Cell 5G Networks

Authors: Emanuel Teixeira, Anderson Ramos, Marisa Lourenço, Fernando J. Velez, Jon M. Peha

Abstract:

This article discusses the cost-benefit analysis of millimetre wavebands (mmWaves) and the Super High Frequency (SHF) band. The decay of the carrier-to-noise-plus-interference ratio with coverage distance is assessed by considering two different path loss models: the two-slope urban micro Line-of-Sight (UMiLoS) model for the SHF band and the modified Friis propagation model for frequencies above 24 GHz. The equivalent supported throughput is estimated at the 5.62, 28, 38, 60, and 73 GHz frequency bands, and the influence of the carrier-to-noise-plus-interference ratio on the radio and network optimization process is explored. Mostly owing to the attenuation behaviour of the two-slope propagation model in the SHF band, the supported throughput at this band exceeds that of the millimetre wavebands only for the longest cell lengths. The cost-benefit analysis of these pico-cellular networks was carried out for regular cellular topologies, considering unlicensed spectrum. For the shortest distances, an optimum of the revenue in percentage terms occurs at cell lengths R ≈ 10 m for the millimetre wavebands, while for longer distances an optimum occurs at R ≈ 550 m for the 5.62 GHz band. For the 5.62 GHz band, the profit is slightly lower than for the millimetre wavebands at the shortest values of R, and it starts to increase for cell lengths approximately equal to the ratio between the break-point distance and the co-channel reuse factor, achieving a maximum for R approximately equal to 550 m.
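
The two propagation models can be compared directly; the sketch below contrasts a Friis-style loss at 28 GHz with a two-slope model at 5.62 GHz. The free-space formula is textbook, while the break-point distance and the second slope of the two-slope model are illustrative rather than the paper's exact parameters.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def friis_db(d, f_hz):
    """Free-space path loss in dB."""
    return 20 * np.log10(4 * np.pi * d * f_hz / C)

def two_slope_db(d, f_hz, d_bp=160.0, n2=4.0):
    """Exponent 2 up to the break point d_bp, then exponent n2 (illustrative)."""
    return np.where(d <= d_bp,
                    friis_db(d, f_hz),
                    friis_db(d_bp, f_hz) + 10 * n2 * np.log10(d / d_bp))

d = np.array([10.0, 100.0, 550.0])
print("5.62 GHz two-slope:", two_slope_db(d, 5.62e9).round(1))
print("28 GHz Friis      :", friis_db(d, 28e9).round(1))
```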

Keywords: 5G, millimetre wavebands, super high-frequency band, SINR, signal-to-interference-plus-noise ratio, cost benefit analysis.

31 CRYPTO COPYCAT: A Fashion Centric Blockchain Framework for Eliminating Fashion Infringement

Authors: Magdi Elmessiry, Adel Elmessiry

Abstract:

The fashion industry represents a significant portion of the global gross domestic product; however, it is plagued by cheap imitators that infringe on trademarks, destroying the fashion industry's hard work and investment. While the copycats are eventually found and stopped, by then the damage has already been done: sales are missed and direct and indirect jobs are lost. The infringer thrives on two main facts: the time it takes to discover them, and the lack of tracking technologies that can help the consumer distinguish them. Blockchain is a new emerging technology that provides a distributed, encrypted, immutable, and fault-resistant ledger, and so presents a ripe technology for resolving the infringement epidemic facing the fashion industry. The significance of this study is that a new approach, leveraging state-of-the-art blockchain technology coupled with artificial intelligence, is used to create a framework addressing the fashion infringement problem. It transforms the current focus on legal enforcement, which is difficult at best, into consumer awareness, which is far more effective. The framework, Crypto CopyCat, creates an immutable digital asset representing the actual product, empowering the customer with a near-real-time query system. This combination emphasizes the consumer's awareness and appreciation of the product's authenticity, while providing real-time feedback to the producer regarding fake replicas. The main finding of this study is that implementing this approach can delay the penetration of fake products into the original product's market, allowing the original product time to take advantage of the market. The resulting shift in fake adoption reduces returns, which impedes the copycat market and moves the emphasis to original product innovation.
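
The "immutable digital asset" idea can be illustrated with a toy hash-chained record, shown below; the field names are invented, and a production system would anchor these records on an actual blockchain rather than an in-memory list.

```python
import hashlib, json, time

chain = []   # stand-in for a distributed ledger

def add_record(product_id, event, prev_hash):
    record = {"product_id": product_id, "event": event,
              "ts": time.time(), "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return record["hash"]

h = add_record("SKU-001", "woven at certified mill", "0" * 64)
h = add_record("SKU-001", "sold to retailer", h)
print(chain[-1]["hash"][:16], "...")
# A consumer query replays the chain, re-hashing each record to verify
# that no link has been tampered with.
```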

Keywords: Fashion, infringement, Blockchain, artificial intelligence, textiles supply.

30 Evaluation of the IMERG Product Performance at Estimating the Rainfall Properties in a Semi-Arid Region of Mexico

Authors: Eric Muñoz de la Torre, Julián González Trinidad, Efrén González Ramírez

Abstract:

Rain varies greatly in its duration, intensity, and spatial coverage, so it is important to have sub-daily rainfall data for various applications, including risk prevention; however, ground measurements are limited by the low and irregular density of rain gauges. An alternative is satellite precipitation products (SPPs), such as IMERG, which use passive microwave and infrared sensors to estimate rainfall; these SPPs, however, have to be validated before application. The aim of this study is to evaluate the performance of the IMERG (Integrated Multi-satellitE Retrievals for Global Precipitation Measurement, final run, V06B) SPP in a semi-arid region of Mexico, using sub-daily data from four rain gauges for October 2019 and June to September 2021. The Minimum inter-event Time (MIT) criterion, with a dry period of 10 h, was used to separate unique rain events for the purpose of evaluating the rainfall properties (depth, duration, and intensity). Point-to-pixel analysis and continuous, categorical, and volumetric statistical metrics were used. The results show that IMERG is capable of estimating rainfall depth with a slight overestimation, but is unable to identify the true duration and intensity of rain events, showing moderate overestimations and underestimations, respectively. The study zone presented 80 to 85% convective rain events, the rest being stratiform rain events, classified by the depth magnitude variation of IMERG pixels and rain gauges. IMERG showed poorer performance at detecting the former, but performed well at estimating the stratiform rain events originated by cold fronts.
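
The MIT criterion itself is a few lines of code; the sketch below splits invented tip timestamps into events using the 10 h dry period and reports depth, duration, and mean intensity per event (a crude duration floor is applied to single-observation events).

```python
import numpy as np

MIT_H = 10.0                                       # minimum inter-event time (h)
t = np.array([0.0, 0.5, 1.2, 14.0, 14.3, 40.0])    # obs. times (h), invented
depth = np.array([2.0, 1.0, 3.5, 4.0, 1.5, 6.0])   # depth per obs. (mm), invented

events, current = [], [0]
for i in range(1, len(t)):
    if t[i] - t[i - 1] >= MIT_H:                   # a dry spell ends the event
        events.append(current)
        current = []
    current.append(i)
events.append(current)

for e in events:
    d = depth[e].sum()
    dur = max(t[e[-1]] - t[e[0]], 0.5)             # crude floor for single tips
    print(f"depth={d:.1f} mm, duration={dur:.1f} h, intensity={d / dur:.2f} mm/h")
```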

Keywords: IMERG, rainfall, rain gauge, remote sensing, statistical evaluation.

29 A Settlement Strategy for Health Facilities in Emerging Countries: A Case Study in Brazil

Authors: Domenico Chizzoniti, Monica Moscatelli, Letizia Cattani, Piero Favino, Luca Preis

Abstract:

A settlement strategy should anticipate and respond to the needs of existing and future communities through the provision of primary health care facilities in marginalized areas. Access to a health care network is important for improving healthcare coverage, which is often lacking in developing countries. The study shows that a good health system strategy for rural contexts brings advantages to an existing settlement: improved transport, communication, water, and social facilities. The objective of this paper is to define a possible methodology for implementing primary health care facilities in disadvantaged areas of emerging countries. We analyze the case study of Lauro de Freitas, a municipality in the Brazilian state of Bahia and part of the Metropolitan Region of Salvador, with an area of 57.662 km² and 194,641 inhabitants. The health localization system in Lauro de Freitas is an integrated process that involves not only geographical aspects but also a set of factors: population density, epidemiological data, allocation of services, road networks, and more. Data were also collected using semi-structured interviews and questionnaires administered to the local population. The synthesized data suggest that, moving away from the coast, where the concentration of population and services is greatest, a network of primary health care facilities is able to improve the living conditions of small, dispersed communities. Based on the health service needs of populations, we have developed a methodological approach that is particularly useful in rural and remote contexts in emerging countries.

Keywords: Primary health care, developing countries, policy health planning, settlement strategy.

28 The Nexus between Migration and Human Security: The Case of Ethiopian Female Migration to Sudan

Authors: Anwar Hassen Tsega

Abstract:

International labor migration is an integral part of the modern globalized world, although the phenomenon has its roots in earlier periods of human history. This paper discusses the relatively new phenomenon of female labor migration in Africa. In the past, African women migrants were only spouses or dependent family members; but as modernity swept most African societies, with rising unemployment rates, there is evidence everywhere in Africa that female labor migration is a growing phenomenon that deserves to be understood in the context of human security research. This work explores these issues, focusing on the experience of Ethiopian women labor migrants to Sudan. The migration of Ethiopian people to Sudan is historical; nevertheless, labor migration mainly started with the discovery and subsequent exploitation of oil in Sudan. While the paper is concerned with the human security aspect of migrant work, the migration process must be assessed against whether it provides a decent wage, good working conditions, the necessary social security coverage, and labor protection as a whole. Migration to Sudan is not always safe, and female migrants become subject to violence at the hands of brokers, employers, and migration officials. The paper therefore argues that identifying the vulnerable stages and the major problems facing female migrant workers at the various stages of migration is a prerequisite for combating the problem and securing the lives of migrant workers. The major problems female migrants face include heightened gender-based violence, underpayment, verbal, physical, and sexual abuse, and other mistreatment including beatings and slaps. This peculiar situation can be attributed to the fact that most of these women are irregular migrants and fall under the category of unskilled and/or illiterate migrants.

Keywords: Labor migration, human security, trafficking, smuggling, Ethiopia, Sudan.

27 Seismic Fragility Assessment of Strongback Steel Braced Frames Subjected to Near-Field Earthquakes

Authors: Mohammadreza Salek Faramarzi, Touraj Taghikhany

Abstract:

In this paper, the seismic fragility of a recently developed hybrid structural system, known as the strongback system (SBS), is assessed. In this system, an elastic vertical truss is formed to mitigate the occurrence of a soft-story mechanism and improve the distribution of story drifts over the height of the structure. The strengthened members of the braced span are designed to remain substantially elastic during levels of excitation where soft-story mechanisms are likely to occur, imposing a nearly uniform story drift distribution. Due to the distinctive characteristics of near-field ground motions, it is necessary to study the effect of such records on the seismic performance of the SBS. To this end, a set of 56 near-field ground motion records suggested by the FEMA P695 methodology is used. For the fragility assessment, nonlinear dynamic analyses are carried out in OpenSees based on the procedure recommended in the HAZUS technical manual. Four damage states are considered: slight, moderate, extensive, and complete damage (collapse). To evaluate each damage state, the inter-story drift ratio and floor acceleration are used as engineering demand parameters; to extend the evaluation of the collapse state, a further collapse criterion suggested in FEMA P695 is applied. It is concluded that the SBS can significantly increase the collapse capacity and consequently decrease the collapse risk of the structure during its lifetime. Comparing the observed mean annual frequency (MAF) of exceedance of each damage state against the allowable values presented in performance-based design methods, it is found that using the elastic vertical truss improves the structural response effectively.
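
Fragility curves of this kind are usually summarized by a lognormal CDF fitted by maximum likelihood to exceedance counts at each intensity level; the sketch below shows such a fit on invented stripe counts (these are not the paper's results).

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

im = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.4])   # Sa levels (g), invented
n_rec = np.full(im.size, 56)                    # records analyzed per stripe
n_exc = np.array([1, 5, 16, 30, 44, 54])        # exceeding the damage state

def neg_loglik(p):
    theta, beta = p                              # median and dispersion
    pf = np.clip(norm.cdf(np.log(im / theta) / beta), 1e-9, 1 - 1e-9)
    return -np.sum(n_exc * np.log(pf) + (n_rec - n_exc) * np.log(1 - pf))

theta, beta = minimize(neg_loglik, x0=(0.7, 0.4), method="Nelder-Mead").x
print(f"median = {theta:.2f} g, dispersion = {beta:.2f}")
```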

Keywords: Strongback System, Near-fault, Seismic fragility, Uncertainty, IDA, Probabilistic performance assessment.

26 Achievements of Healthcare Services Vis-À-Vis the Millennium Development Goals Targets: Evidence from Pakistan

Authors: Saeeda Batool, Ather Maqsood Ahmed

Abstract:

This study investigates the impact of public healthcare facilities and socio-economic circumstances on the status of child health in Pakistan. The analysis is carried out with reference to the fourth and sixth Millennium Development Goals (MDGs), and the health variables chosen are drawn from the target indicators of these goals. Trends in the Human Opportunity Index (HOI), for both health inequalities and coverage, are analyzed using the Pakistan Social and Living Standards Measurement (PSLM) data sets for 2001-02 to 2012-13 at the national and provincial levels. To reveal the relative importance of each circumstance in achieving the targeted values for child health, a Shorrocks decomposition is applied to the HOI. The annual average growth rate of the HOI is used to project the time required to achieve the MDG targets, and universal access as well. The results indicate an improvement in the HOI for the reduction of child mortality rates, from 52.1% in 2001-02 to 67.3% in 2012-13, which confirms the availability of healthcare opportunities to a larger segment of society. Similarly, immunization against measles and other diseases (diphtheria, polio, hepatitis), as well as Bacillus Calmette-Guerin (BCG) vaccination, has also registered an improvement, from 51.6% to 69.9% at the national level over the period of study. On a positive note, no gender disparity has been found in the child health indicators; health outcomes are mostly affected by parental and geographical features and the availability of health infrastructure. However, the study finds that this achievement has been uneven across provinces. Disappointingly, Pakistan is not only lagging behind its health goals but, at the current rate of healthcare provision, will take many additional years to achieve its targets.
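
For reference, the HOI combines a coverage rate with a dissimilarity index, HOI = C·(1 − D); the sketch below computes it for invented population groups (the circumstance groups and rates are not the paper's data).

```python
import numpy as np

share = np.array([0.3, 0.4, 0.2, 0.1])     # population share per group, invented
cov = np.array([0.80, 0.65, 0.50, 0.40])   # coverage rate per group, invented

C = np.sum(share * cov)                            # overall coverage
D = 0.5 * np.sum(share * np.abs(cov - C)) / C      # dissimilarity index
HOI = C * (1 - D)
print(f"coverage = {C:.3f}, D = {D:.3f}, HOI = {HOI:.3f}")
```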

Keywords: Socio-economic circumstances, unmet MDGs, public healthcare services, child and infant mortality.

25 Optimization of Acid Treatments by Assessing Diversion Strategies in Carbonate and Sandstone Formations

Authors: Ragi Poyyara, Vijaya Patnana, Mohammed Alam

Abstract:

When acid is pumped into damaged reservoirs for damage removal/stimulation, a distorted inflow of acid into the formation occurs, because acid preferentially travels into highly permeable regions over low-permeability regions, or, in general, along the path of least resistance. This can lead to poor zonal coverage and hence warrants diversion to achieve effective placement of the acid. Diversion is, desirably, a reversible technique of temporarily reducing the permeability of high-permeability zones, thereby forcing the acid into lower-permeability zones. The uniqueness of each reservoir can pose several challenges to engineers attempting to devise optimum and effective diversion strategies. Diversion techniques include mechanical placement and/or chemical diversion of treatment fluids, further sub-classified into ball sealers, bridge plugs, packers, particulate diverters, viscous gels, crosslinked gels, relative permeability modifiers (RPMs), and foams, and/or the use of placement techniques such as coiled tubing (CT) and the maximum pressure difference and injection rate (MAPDIR) methodology. It is not always realized that the effectiveness of diverters greatly depends on reservoir properties, such as formation type, temperature, reservoir permeability, and heterogeneity, and on physical well characteristics (e.g., completion type, well deviation, length of the treatment interval, multiple intervals, etc.). This paper reviews the mechanisms by which each variety of diverter functions and discusses the effect of various reservoir properties on the efficiency of diversion techniques. Guidelines are recommended to help enhance productivity from zones of interest by choosing the best method of diversion while pumping an optimized amount of treatment fluid. The success of an overall acid treatment often depends on the effectiveness of the diverting agents.

Keywords: Acid treatment, carbonate, diversion, sandstone.

24 Main Control Factors of Fluid Loss in Drilling and Completion in Shunbei Oilfield by Unmanned Intervention Algorithm

Authors: Peng Zhang, Lihui Zheng, Xiangchun Wang, Xiaopan Kou

Abstract:

Existing quantitative research on the main control factors of lost circulation considers few factors and relies on single data sources. The Unmanned Intervention Algorithm used here to find the main control factors of lost circulation adopts all measurable parameters. The degree of lost circulation is characterized by the loss rate, taken as the objective function. Geological, engineering, and fluid data are used as layers, and 27 factors, such as wellhead coordinates and Weight on Bit (WOB), are used as dimensions. Data classification is implemented to determine the independent variables of the function. A mathematical equation relating the loss rate to the 27 influencing factors is established by the multiple regression method, and the undetermined coefficient method is used to solve for the coefficients of the equation. Only three factors have t-test values greater than the test value of 40, and the F-test value is 96.557%, indicating that the correlation of the model is good. The funnel viscosity, final shear force, and drilling time were selected as the main control factors by the elimination method, the contribution rate method, and the functional method. The calculated values for the two wells used for verification differ from the actual values by -3.036 m³/h and -2.374 m³/h, errors of 7.21% and 6.35%. The influence of engineering factors on the loss rate is greater than that of funnel viscosity and final shear force, and the influence of these three factors is less than that of geological factors. The best combination of funnel viscosity, final shear force, and drilling time is obtained through quantitative calculation, giving a minimum loss rate of 10 m³/h for lost-circulation wells in the Shunbei area. It can be seen that the man-made main control factors can only slow down the leakage, not fundamentally eliminate it, which is consistent with the characteristics of the karst caves and fractures in the Shunbei fault-solution oil and gas reservoir.
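
The regression-plus-t-test workflow described above is easy to reproduce in miniature; the sketch below fits a loss-rate model against the three retained factors on synthetic data (the coefficients and noise are invented, not Shunbei measurements).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 80
funnel_visc = rng.uniform(40, 90, n)    # funnel viscosity (s), synthetic
final_shear = rng.uniform(3, 15, n)     # final shear force (Pa), synthetic
drill_time = rng.uniform(5, 60, n)      # drilling time (h), synthetic
loss_rate = (2 + 0.30 * funnel_visc + 0.80 * final_shear
             + 0.15 * drill_time + rng.normal(0, 2, n))

X = np.column_stack([np.ones(n), funnel_visc, final_shear, drill_time])
beta, *_ = np.linalg.lstsq(X, loss_rate, rcond=None)
resid = loss_rate - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
print("coefficients:", beta.round(3))
print("t-statistics:", (beta / se).round(2))   # screen factor significance
```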

Keywords: Drilling fluid, loss rate, main controlling factors, Unmanned Intervention Algorithm.

23 Ports and Airports: Gateways to Vector-Borne Diseases in Portugal Mainland

Authors: Maria C. Proença, Maria T. Rebelo, Maria J. Alves, Sofia Cunha

Abstract:

Vector-borne diseases are transmitted to humans by mosquitoes, sandflies, bugs, ticks, and other vectors. Some are passed on to new vectors when an infected human is bitten again while infection levels are high. The vector is infected for life and can transmit infectious diseases not only between humans but also from animals to humans. Some vector-borne diseases are very disabling and account for more than one million deaths worldwide. The mosquitoes of the Culex pipiens s.l. complex are the most abundant in Portugal, and a data set is now available from the surveillance program that has been carried out across the country since 2006. All mosquito species are included, but the wide coverage of Culex pipiens s.l. and its importance for public health make this vector an interesting candidate for assessing the risk of disease amplification. This work focuses on ports and airports, identified as key areas of high vector density. Mosquitoes being ectothermic organisms, the main factor for vector survival and pathogen development is temperature. Minimum and maximum local air temperatures for each area of interest are averaged by month from data gathered daily at the national network of meteorological stations and interpolated in a geographic information system (GIS). The temperature ranges ideal for several pathogens are known, and this work shows how to use them with the meteorological data for each port and airport facility to focus an efficient implementation of countermeasures, simultaneously reducing transmission risk and mitigation costs. The results show a heightened alert level with decreasing latitude, corresponding to higher minimum and maximum temperatures and a smaller amplitude of the daily temperature range.
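
The temperature-window check at the core of this approach can be sketched as below: a site's monthly minima and maxima are compared with a pathogen's development range. The range and the station values are illustrative, not the surveillance data.

```python
PATHOGEN_RANGE_C = (14.0, 35.0)   # illustrative development window (°C)

def months_at_risk(monthly_tmin, monthly_tmax):
    lo, hi = PATHOGEN_RANGE_C
    pairs = zip(monthly_tmin, monthly_tmax)
    # flag months whose whole daily cycle sits inside the window
    return [m for m, (tmin, tmax) in enumerate(pairs, start=1)
            if tmin >= lo and tmax <= hi]

tmin = [8, 9, 11, 13, 15, 18, 20, 21, 19, 16, 12, 9]     # port station, °C
tmax = [15, 16, 18, 20, 23, 27, 30, 31, 28, 24, 19, 16]
print("months at risk:", months_at_risk(tmin, tmax))     # [5, 6, 7, 8, 9]
```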

Keywords: Human health, risk assessment, risk management, vector-borne diseases.

22 Verifying the Supremacy of Volume Modulated Arc Therapy Over Intensity Modulated Radiation Therapy: Pelvis Malignancies’ Perspective

Authors: M. Umar Farooq, T. Ahmad Afridi, M. Zia-Ul-Islam Arsalan, U. Hussain Haider, S. Ullah

Abstract:

Cancer, a leading fatal disease worldwide, can be treated with various techniques, including radiation therapy, which uses ionizing radiation to target cancer cells. On the basis of source placement, radiation therapy is of two types: brachytherapy and external beam radiotherapy (EBRT). EBRT has evolved from 2-D conventional therapy to 3-D conformal radiotherapy (3D-CRT) and then to intensity-modulated radiotherapy (IMRT), which improves dose conformity and the sparing of organs at risk. Volumetric modulated arc therapy (VMAT) is a modern technique that delivers treatment in arcs with rotation of the gantry. In this report, a dosimetric comparison was performed between IMRT and VMAT. The study was conducted in the Radiotherapy Department of the Institute of Nuclear Medicine and Oncology Lahore (INMOL). Ten patients with prostate carcinoma were selected to compare the methods. Simulation of these patients was done with the help of a CT simulator, and all target volumes and organs were delineated by the oncologists. Suitable fields/arcs covering the volumes effectively were then applied, followed by optimization of the plans for both techniques for every patient. Finally, the evaluation parameters, e.g., Conformity Index (CI), volume coverage, Homogeneity Index (HI), organ doses, and MUs (monitor units), were compared. We obtained better target conformity with VMAT (CI = 1.16) than with IMRT (CI = 1.24), and VMAT was better at organ sparing too. VMAT also required fewer MUs (733) than IMRT (2149). From this study, it is concluded that VMAT is a better treatment technique than IMRT: it enhances treatment efficiency, as it takes less time to obtain the required results, and delivers a much lower scatter dose to the patient.
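
The two indices compared above have simple definitions, sketched below with invented volumes and doses; note that conventions vary — here CI is the RTOG ratio and HI follows ICRU Report 83.

```python
def conformity_index(v_prescription_isodose, v_target):
    """RTOG CI: prescription isodose volume / target volume (1.0 is ideal)."""
    return v_prescription_isodose / v_target

def homogeneity_index(d2, d98, d50):
    """ICRU 83 HI: (D2% - D98%) / D50%; smaller is more homogeneous."""
    return (d2 - d98) / d50

print(round(conformity_index(87.0, 75.0), 2))         # invented volumes (cc)
print(round(homogeneity_index(78.5, 72.0, 76.0), 3))  # invented doses (Gy)
```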

Keywords: 2-D Conventional Radiotherapy, 3-D Conformal Radiotherapy, Intensity Modulated Radiotherapy, Prostate Carcinoma, Radiotherapy, Volumetric Modulated Arc Therapy.
