Search results for: Geometric inverse source problem
505 Investigating Iraqi EFL University Students' Productive Knowledge of Grammatical Collocations in English
Authors: Adnan Z. Mkhelif
Abstract:
Grammatical collocations (GCs) are word combinations containing a preposition or a grammatical structure, such as an infinitive (e.g. smile at, interested in, easy to learn, etc.). Such collocations tend to be difficult for Iraqi EFL university students (IUSs) to master. To help address this problem, it is important to identify the factors causing it. This study aims at investigating the effects of L2 proficiency, frequency of GCs and their transparency on IUSs’ productive knowledge of GCs. The study involves 112 undergraduate participants with different proficiency levels, learning English in formal contexts in Iraq. The data collection instruments include (but are not limited to) a productive knowledge test (designed by the researcher using the British National Corpus (BNC)), as well as the grammar part of the Oxford Placement Test (OPT). The study findings have shown that all the above-mentioned factors have significant effects on IUSs’ productive knowledge of GCs. In addition to establishing evidence of which factors of L2 learning might be relevant to learning GCs, it is hoped that the findings of the present study will contribute to more effective methods of teaching that can better address and help overcome the problems IUSs encounter in learning GCs. The study is thus hoped to have significant theoretical and pedagogical implications for researchers, syllabus designers as well as teachers of English as a foreign/second language.
Keywords: Corpus linguistics, frequency, grammatical collocations, L2 vocabulary learning, productive knowledge, proficiency, transparency.
504 Microbial Fuel Cells and Their Applications in Electricity Generating and Wastewater Treatment
Authors: Shima Fasahat
Abstract:
This experimental research investigated microbial fuel cells (MFCs) for electricity generation and wastewater treatment. Finding new, clean and sustainable ways of supplying energy is very important today, and many researchers around the world are therefore studying new and sustainable energy sources such as solar cells, wind turbines, geothermal energy and fuel cells. Fuel cells come in different types, one of which is the microbial fuel cell. In this research, an MFC was built in order to study how it can be used for electricity generation and wastewater treatment. The microbial fuel cell used in this research is a reactor with two tanks containing a catalyst solution; the chemical reaction in microbial fuel cells is a redox reaction. The MFC in this research is a two-chamber cell: the anode chamber is anaerobic (an ABR reactor) and the other is the cathode chamber. The anode chamber contains stabilized sludge, which is the source of the microorganisms that carry out the redox reaction; the main microorganisms are Propionibacterium and Clostridium. The anode electrodes are graphite plates. The cathode chamber consists of graphite plate electrodes and catalysts such as O2, KMnO4 and C6N6FeK4. The membrane separating the chambers is Nafion 117; the reason for choosing this membrane is explained in the complete paper. The main goals of this research are to generate electricity and to treat wastewater. It was found that using electron acceptor compounds such as O2, KMnO4 and C6N6FeK4 speeds up electron transfer, so that a higher current is achieved in less time, and that the best compounds for this purpose are those containing iron in their chemical formula. It is also important to pay attention to the amount of nutrients entering the bacteria chamber; in some cases, adding extra nutrients reverses the result. By using the ABR, the chemical oxygen demand decreases each day until it reaches a stable value.
Keywords: Anaerobic baffled reactor, bioenergy, electrode, energy efficient, microbial fuel cell, renewable chemicals, sustainable.
503 Street Begging and Its Psychosocial Effects in Ibadan Metropolis, Oyo State, Nigeria
Authors: Temitope M. Ojo, Titilayo A. Benson
Abstract:
This study investigated street begging and its psychosocial effects in Ibadan Metropolis, Oyo State, Nigeria. In carrying out this study, four research questions were used. The instrument used for data collection was a face-to-face, self-developed questionnaire. The results revealed a high level of awareness of the causes of street begging among the respondents, who also mentioned several factors contributing to street begging. However, respondents disagreed that lack of education is a factor contributing to street begging in Nigeria. The psychosocial effects of street begging, as identified by the respondents, are the development of an inferiority complex, lack of social interaction, loss of self-respect and dignity, an increased mindset of poverty and loss of self-confidence. Solutions to street begging identified by the respondents include the provision of rehabilitation centers, provision of food for students in Islamic schools and a monthly survival allowance. Specific policies and other legislative frameworks are needed in terms of age, sex, disability, and family-related issues to effectively address the begging problem. Therefore, it is recommended that policy planners adopt multi-faceted, multi-targeted, and multi-tiered approaches if they are to have any impact on the lives of street beggars in all four categories. In this regard, both preventative and responsive interventions are needed instead of rehabilitative solutions for each category of street beggars.
Keywords: Beggars, begging, psychosocial effect, respondents, street begging.
502 An Efficient Biometric Cryptosystem using Autocorrelators
Authors: R. Bremananth, A. Chitra
Abstract:
Cryptography provides a secure manner of information transmission over an insecure channel. It authenticates messages based on the key but not on the user, and it requires a lengthy key to encrypt and decrypt the messages being sent and received. However, such keys can be guessed or cracked. Moreover, maintaining and sharing lengthy, random keys for the enciphering and deciphering process is a critical problem in cryptography systems. A new approach is described for generating a crypto key, which is acquired from a person's iris pattern. In the biometric field, a template created by a biometric algorithm can only be authenticated by the same person. Among biometric templates, iris features can efficiently distinguish individuals and produce fewer false positives in a larger population. This type of iris code distribution provides low intra-class variability, which aids the cryptosystem in confidently decrypting messages with an exact matching of the iris pattern. In the proposed approach, the iris features are extracted using multi-resolution wavelets, producing 135-bit iris codes from each subject that are used for encrypting/decrypting the messages. Autocorrelators are used to recall original messages from the partially corrupted data produced by the decryption process. The approach intends to resolve the repudiation and key management problems. Results were analyzed for both a conventional iris cryptography system (CIC) and a non-repudiation iris cryptography system (NRIC), and show that this new approach provides considerably high authentication in the enciphering and deciphering processes.
Keywords: Autocorrelators, biometrics cryptography, iris patterns, wavelets.
501 Detection and Classification of Faults on Parallel Transmission Lines Using Wavelet Transform and Neural Network
Authors: V. S. Kale, S. R. Bhide, P. P. Bedekar, G. V. K. Mohan
Abstract:
The protection of parallel transmission lines has been a challenging task due to mutual coupling between the adjacent circuits of the line. This paper presents a novel scheme for detection and classification of faults on parallel transmission lines. The proposed approach uses a combination of wavelet transform and neural network to solve the problem. While the wavelet transform is a powerful mathematical tool that can be employed as a fast and very effective means of analyzing power system transient signals, an artificial neural network has the ability to classify non-linear relationships between measured signals by identifying different patterns of the associated signals. The proposed algorithm consists of time-frequency analysis of fault-generated transients using the wavelet transform, followed by pattern recognition using an artificial neural network to identify the type of the fault. MATLAB/Simulink is used to generate fault signals and verify the correctness of the algorithm. The adaptive discrimination scheme is tested by simulating different types of fault and varying fault resistance, fault location and fault inception time on a given power system model. The simulation results show that the proposed scheme for fault diagnosis is able to classify all the faults on the parallel transmission line rapidly and correctly.
Keywords: Artificial neural network, fault detection and classification, parallel transmission lines, wavelet transform.
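A minimal sketch of the kind of wavelet-based feature extraction such a scheme relies on (illustrative only, not the authors' code; the synthetic fault transient and the band-energy features are assumptions):

```python
# Illustrative sketch, not the authors' code: energy of each DWT detail band,
# the kind of feature vector that is then fed to a neural-network classifier.
import numpy as np
import pywt

def wavelet_energy_features(signal, wavelet="db4", level=4):
    """Return the energy of each detail band of a multilevel DWT."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs[1:]])  # skip the approximation band

# Hypothetical fault-like transient: a decaying high-frequency burst on a 50 Hz wave
t = np.linspace(0, 0.2, 2000)
healthy = np.sin(2 * np.pi * 50 * t)
faulted = healthy + 0.5 * np.exp(-40 * t) * np.sin(2 * np.pi * 800 * t)

print(wavelet_energy_features(healthy))
print(wavelet_energy_features(faulted))  # detail-band energies rise under the fault
```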
500 Expectation-Confirmation Model of Information System Continuance: A Meta-Analysis
Authors: Hui-Min Lai, Chin-Pin Chen, Yung-Fu Chang
Abstract:
The expectation-confirmation model (ECM) is one of the most widely used models for evaluating information system continuance, and this model has been extended to other study backgrounds or expanded with other theoretical perspectives. However, combining the ECM with other theories or investigating the background problem may produce some disparities, thus generating inaccurate conclusions. Habit is considered to be an important factor that influences the user’s continuance behavior. This paper thus critically examines seven pairs of relationships from the original ECM and the habit variable. A meta-analysis was used to track the development of ECM research over the last 10 years from a range of journals and conference papers published in 2005–2014. Forty-six journal articles and 19 conference papers were selected for analysis. The results confirm our prediction that a high effect size for the seven pairs of relationships was obtained (ranging from r=0.386 to r=0.588). Furthermore, meta-analytic structural equation modeling was performed to simultaneously test all relationships. The results show that habit had a significant positive effect on continuance intention at p<=0.05 and that the six other pairs of relationships were significant at p<0.10. Based on the findings, we refined our original research model and an alternative model was proposed for understanding and predicting information system continuance. Some theoretical implications are also discussed.
Keywords: Expectation-confirmation theory, expectation-confirmation model, meta-analysis, meta-analytic structural equation modeling.
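A minimal sketch of fixed-effect pooling of correlation effect sizes via the Fisher z-transform, the standard machinery behind meta-analytic estimates of this kind (illustrative only; the correlations and sample sizes below are hypothetical):

```python
# Illustrative sketch: fixed-effect pooling of correlations via the Fisher z-transform.
import numpy as np

def pooled_correlation(rs, ns):
    """Pool correlations r_i from studies with sample sizes n_i (fixed-effect model)."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)        # Fisher z-transform of each correlation
    w = ns - 3.0              # inverse-variance weights, since Var(z) = 1/(n - 3)
    z_bar = np.sum(w * z) / np.sum(w)
    return np.tanh(z_bar)     # back-transform to the correlation scale

# Hypothetical effect sizes from three studies
print(pooled_correlation([0.45, 0.52, 0.60], [120, 340, 85]))
```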
499 Effect on the Performance of the Nano-Particulate Graphite Lubricant in the Turning of AISI 1040 Steel under Variable Machining Conditions
Authors: S. Srikiran, Dharmala Venkata Padmaja, P. N. L. Pavani, R. Pola Rao, K. Ramji
Abstract:
Technological advancements in the development of cutting tools and coolant/lubricant chemistry have enhanced the machining capabilities of hard materials under higher machining conditions. Generation of high temperatures at the cutting zone during machining is one of the most important and pertinent problems which adversely affect the tool life and surface finish of the machined components. Generally, cutting fluids and solid lubricants are used to overcome the problem of heat generation, but they do not address it effectively. With technological advancements in the field of tribology, nano-level particulate solid lubricants are nowadays being used in machining operations, especially in turning and grinding. The present investigation analyses the effect of using nano-particulate graphite powder as a lubricant in the turning of AISI 1040 steel under variable machining conditions and studies its effect on cutting forces, tool temperature and surface roughness of the machined component. Experiments revealed that, as the size of the nano-particulate graphite powder used as lubricant decreases, the cutting forces and tool temperature increase, resulting in a decrease in surface quality.
Keywords: Solid lubricant, graphite, minimum quantity lubrication, nanoparticles.
498 Finite Element Study on Corono-Radicular Restored Premolars
Authors: Sandu L., Topală F., Porojan S.
Abstract:
Restoration of endodontically treated teeth is a common problem in dentistry, related to the fractures occurring in such teeth and to the concentration of forces; little information regarding the effect of variations in basic preparation guidelines on stress distribution has been available. To date, there is still no agreement in the literature about which material or technique can optimally restore endodontically treated teeth. The aim of the present study was to evaluate the influence of the core height and restoration materials on a corono-radicular restored upper first premolar. The first step of the study was to create 3D models in order to analyze the teeth, dowel and core restorations and overlying full ceramic crowns. The FEM model was obtained by importing the solid model into the ANSYS finite element analysis software. An occlusal load of 100 N was applied, and the stresses occurring in the restorations and tooth structures were calculated. Numerical simulations provide a biomechanical explanation for stress distribution in prosthetically restored teeth. Within the limitations of the present study, it was found that the core height has no important influence on the stress generated in corono-radicular restored premolars. It can be concluded that the cervical regions of the teeth and restorations were subjected to the highest stress concentrations.
Keywords: 3D models, finite element analysis, dowel and core restoration, full ceramic crown, premolars, structural simulations.
497 Steady State Power Flow Calculations with STATCOM under Load Increase Scenario and Line Contingencies
Authors: A. S. Telang, P. P. Bedekar
Abstract:
Flexible AC transmission system (FACTS) controllers play an important role in controlling the line power flow and in improving voltage profiles of the power system network. They can be used to increase the reliability and efficiency of transmission and distribution systems. The modeling of these FACTS controllers in power flow calculations has become a challenging research problem. This paper presents a simple and systematic approach for steady state power flow calculations of a power system with a STATCOM (Static Synchronous Compensator). It shows how a STATCOM can be systematically incorporated in conventional power flow calculations. The main contribution of this paper is to investigate this approach for two special conditions, i.e. consideration of a load increase pattern incorporating load change (active, reactive, and both active and reactive) at all load buses simultaneously, and line contingencies under such load change. Such investigation proves to be relevant for determining a strategy for the optimal placement of the STATCOM to enhance voltage stability. The performance has been evaluated on many standard IEEE test systems. The results for the standard IEEE 30-bus test system are presented here.
Keywords: Load flow analysis, Newton-Raphson (N-R) power flow, Flexible AC transmission system, FACTS, Static synchronous compensator, STATCOM, voltage profile.
496 Accurate Visualization of Graphs of Functions of Two Real Variables
Authors: Zeitoun D. G., Thierry Dana-Picard
Abstract:
The study of a real function of two real variables can be supported by visualization using a Computer Algebra System (CAS). One type of constraint of such systems is due to the algorithms implemented, which yield continuous approximations of the given function by interpolation. This often masks discontinuities of the function and can produce strange plots that are not compatible with the mathematics. In recent years, point-based geometry has gained increasing attention as an alternative surface representation, both for efficient rendering and for flexible geometry processing of complex surfaces. In this paper we present different artifacts created by mesh surfaces near discontinuities and propose a point-based method that controls and reduces these artifacts. A least squares penalty method for automatic generation of a mesh that controls the behavior of the chosen function is presented. The special feature of this method is the ability to improve the accuracy of the surface visualization near a set of interior points where the function may be discontinuous. The present method is formulated as a minimax problem and the non-uniform mesh is generated using an iterative algorithm. Results show that for large, poorly conditioned matrices, the new algorithm gives more accurate results than the classical preconditioned conjugate gradient algorithm.
Keywords: Function singularities, mesh generation, point allocation, visualization, collocation least squares method, Augmented Lagrangian method, Uzawa's Algorithm, Preconditioned Conjugate Gradient.
495 Evaluation of Coastal Erosion in the Jurisdiction of the Municipalities of Puerto Colombia and Tubará, Atlántico, Colombia in Google Earth Engine with Landsat and Sentinel 2 Images
Authors: Francisco Javier Reyes Salazar, Héctor Mauricio Ramírez
Abstract:
The coastal zones are home to mangrove swamps, coral reefs, and seagrass ecosystems, which are among the most biodiverse and fragile on the planet. These areas support a great diversity of marine life; they are also extraordinarily important for humans in the provision of food, water, wood, and other associated goods and services, and they contribute to climate regulation. The lack of an automated model that generates information on the dynamics of shoreline change and coastal erosion is identified as the central problem. In this paper, coastlines were determined from 1984 to 2020 on the Google Earth Engine platform from Landsat and Sentinel images. The Modified Normalized Difference Water Index (MNDWI) was then computed and the Digital Shoreline Analysis System (DSAS) v5.0 was used. Starting from the 2020 coastline, the 10-year prediction (year 2031) indicates an erosion of 238.32 hectares and an accretion of 181.96 hectares, while the 20-year prediction (year 2041) indicates an erosion of 544.04 hectares and an accretion of 133.94 hectares. The erosion and accretion of Playa Muelle in the municipality of Puerto Colombia were established; it will register the highest value of erosion. The land cover that presented the greatest change was artificialized territories.
Keywords: Coastline, coastal erosion, MNDWI, Google Earth Engine, Colombia.
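A minimal sketch of the MNDWI computation used to separate water from land before coastline extraction (illustrative only; it assumes band arrays already loaded as NumPy arrays, e.g. Sentinel-2 B3 as green and B11 as SWIR, rather than the Google Earth Engine API):

```python
# Illustrative sketch: MNDWI = (Green - SWIR) / (Green + SWIR); pixels above a
# threshold are treated as water when extracting the coastline.
import numpy as np

def mndwi(green, swir):
    green, swir = green.astype(float), swir.astype(float)
    return (green - swir) / (green + swir + 1e-10)  # small term avoids division by zero

def water_mask(green, swir, threshold=0.0):
    return mndwi(green, swir) > threshold

# Tiny synthetic scene: left half "water" (high green, low SWIR), right half land
green = np.array([[0.30, 0.30, 0.10, 0.10]])
swir = np.array([[0.05, 0.05, 0.25, 0.25]])
print(water_mask(green, swir))
```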
494 A TIPSO-SVM Expert System for Efficient Classification of TSTO Surrogates
Authors: Ali Sarosh, Dong Yun-Feng, Muhammad Umer
Abstract:
Fully reusable spaceplanes do not exist as yet. This implies that design qualification for the optimized, highly integrated forebody-inlet configuration of a booster-stage vehicle cannot be based on archival data of other spaceplanes. Therefore, this paper proposes a novel TIPSO-SVM expert system methodology. A non-trivial problem related to optimization and classification of the hypersonic forebody-inlet configuration in conjunction with the mass model of the two-stage-to-orbit (TSTO) vehicle is solved. The hybrid-heuristic machine learning methodology is based on the two-step improved particle swarm optimizer (TIPSO) algorithm and a two-step support vector machine (SVM) data classification method. The efficacy of the method is tested by first evolving an optimal configuration for the hypersonic compression system using the TIPSO algorithm and thereafter classifying the results using the two-step SVM method. In the first step, extensive but non-classified mass-model training data for multiple optimized configurations are segregated and pre-classified for learning of the SVM algorithm. In the second step, the TIPSO-optimized mass-model data are classified using the SVM classification. Results showed remarkable improvement in configuration and mass model along with sizing parameters.
Keywords: TIPSO-SVM expert system, TIPSO algorithm, two-step SVM method, aerothermodynamics, mass-modeling, TSTO vehicle.
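A minimal sketch of a plain particle swarm optimizer minimizing a toy objective (illustrative only; it is not the authors' two-step TIPSO variant, and all parameter values and the objective are placeholders):

```python
# Illustrative sketch of a plain PSO (not the two-step TIPSO variant of the paper).
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5, 5)):
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))  # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(n_particles, dim), np.random.rand(n_particles, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Toy objective standing in for the aerothermodynamic figure of merit
print(pso(lambda p: np.sum(p ** 2), dim=4))
```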
493 Sorting Primitives and Genome Rearrangement in Bioinformatics: A Unified Perspective
Authors: Swapnoneel Roy, Minhazur Rahman, Ashok Kumar Thakur
Abstract:
Bioinformatics and computational biology involve the use of techniques from applied mathematics, informatics, statistics, computer science, artificial intelligence, chemistry, and biochemistry to solve biological problems, usually on the molecular level. Research in computational biology often overlaps with systems biology. Major research efforts in the field include sequence alignment, gene finding, genome assembly, protein structure alignment, protein structure prediction, prediction of gene expression and protein-protein interactions, and the modeling of evolution. Various global rearrangements of permutations, such as reversals and transpositions, have recently become of interest because of their applications in computational molecular biology. A reversal is an operation that reverses the order of a substring of a permutation. A transposition is an operation that swaps two adjacent substrings of a permutation. The problem of determining the smallest number of reversals required to transform a given permutation into the identity permutation is called sorting by reversals. Similar problems can be defined for transpositions and other global rearrangements. In this work we perform a study of some genome rearrangement primitives. We show how a genome is modelled by a permutation, introduce some of the existing primitives and the lower and upper bounds on them, and then provide a comparison of the introduced primitives.
Keywords: Sorting Primitives, Genome Rearrangements, Transpositions, Block Interchanges, Strip Exchanges.
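A minimal sketch of the permutation model and the two primitives described above, together with a breakpoint count of the kind used in lower-bound arguments (illustrative only, not from the paper):

```python
# Illustrative sketch, not from the paper: a genome as a permutation, the two
# primitives described above, and a breakpoint count used in bound arguments.
def reversal(perm, i, j):
    """Reverse the segment perm[i:j] (the 'reversal' primitive)."""
    return perm[:i] + perm[i:j][::-1] + perm[j:]

def transposition(perm, i, j, k):
    """Swap the adjacent segments perm[i:j] and perm[j:k] (the 'transposition' primitive)."""
    return perm[:i] + perm[j:k] + perm[i:j] + perm[k:]

def breakpoints(perm):
    """Count adjacent pairs that are not consecutive integers; zero only for the identity."""
    extended = [0] + list(perm) + [len(perm) + 1]
    return sum(1 for a, b in zip(extended, extended[1:]) if abs(a - b) != 1)

p = [3, 1, 2, 4]
print(reversal(p, 0, 3))          # [2, 1, 3, 4]
print(transposition(p, 0, 1, 3))  # [1, 2, 3, 4] -- one transposition sorts p
print(breakpoints(p))             # 3
```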
492 Automatic Motion Trajectory Analysis for Dual Human Interaction Using Video Sequences
Authors: Yuan-Hsiang Chang, Pin-Chi Lin, Li-Der Jeng
Abstract:
Advances in image and video processing techniques have enabled the development of intelligent video surveillance systems. This study aimed to automatically detect moving human objects and to analyze events of dual human interaction in a surveillance scene. Our system was developed in four major steps: image preprocessing, human object detection, human object tracking, and motion trajectory analysis. Adaptive background subtraction and image processing techniques were used to detect and track moving human objects. To solve the occlusion problem during the interaction, the Kalman filter was used to retain a complete trajectory for each human object. Finally, the motion trajectory analysis was developed to distinguish between interaction and non-interaction events based on derivatives of the trajectories related to the speed of the moving objects. Using a database of 60 video sequences, our system achieved a classification accuracy of 80% for interaction events and 95% for non-interaction events. In summary, we have explored a system for the automatic classification of interaction and non-interaction events using surveillance cameras. Ultimately, this system could be incorporated in an intelligent surveillance system for the detection and/or classification of abnormal or criminal events (e.g., theft, snatching, fighting, etc.).
Keywords: Motion detection, motion tracking, trajectory analysis, video surveillance.
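A minimal sketch of a constant-velocity Kalman filter that keeps a position estimate through occluded frames, the role the filter plays in the tracking step (illustrative only; the noise covariances and the measurements are assumed values):

```python
# Illustrative sketch: a constant-velocity Kalman filter that bridges occluded
# frames (z is None) while tracking a person's (x, y) position.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)  # motion model
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)  # only position is observed
Q = 0.01 * np.eye(4)  # process noise (assumed)
R = 1.00 * np.eye(2)  # measurement noise (assumed)

def kalman_step(x, P, z=None):
    x, P = F @ x, F @ P @ F.T + Q                      # predict
    if z is not None:                                  # update only when a detection exists
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x, P = x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), 10.0 * np.eye(4)
for z in [np.array([0.0, 0.0]), np.array([1.1, 0.9]), None, np.array([3.0, 3.2])]:
    x, P = kalman_step(x, P, z)
    print(x[:2])  # position estimate, including at the occluded (None) frame
```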
491 Obtaining High-Dimensional Configuration Space for Robotic Systems Operating in a Common Environment
Authors: U. Yerlikaya, R. T. Balkan
Abstract:
In this research, a method is developed to obtain the high-dimensional configuration space for path planning problems. In typical cases, path planning problems are solved directly in the 3-dimensional (3D) workspace. However, this is inefficient in handling robots with various geometrical and mechanical restrictions. To overcome these difficulties, path planning may be formalized and solved in a new space called the configuration space. The number of dimensions of the configuration space equals the number of degrees of freedom of the system of interest. The method can be applied in two ways. In the first way, the point clouds of all the bodies of the system and their interactions are used. The second way is performed via the clearance function of simulation software, where the minimum distances between the surfaces of the bodies are measured simultaneously. A double-turret system is considered in this study, and its 4-D configuration space is obtained in these two ways. As a result, the difference between the two methods is around 1%, depending on the density of the point cloud, and this disparity steadily decreases as the point cloud density increases. At the end of the study, in order to verify the obtained 4-D configuration space, the 4-D path planning problem was treated as 2-D + 2-D and a sample path planning was carried out using the A* algorithm. The accuracy of the configuration space was then proved using the obtained paths on the simulation model of the double-turret system.
Keywords: A* Algorithm, autonomous turrets, high-dimensional C-Space, manifold C-Space, point clouds.
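A minimal sketch of grid-based A*, the kind of planner run on the 2-D + 2-D decomposition of the 4-D configuration space (illustrative only; the occupancy grid and the Manhattan heuristic are assumptions):

```python
# Illustrative sketch: A* over a small occupancy grid; 1 marks an obstacle.
import heapq, itertools

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()
    open_set = [(h(start), 0, next(tie), start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, _, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue
        came_from[node] = parent
        if node == goal:                       # reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                ng = g + 1
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    heapq.heappush(open_set, (ng + h(nb), ng, next(tie), nb, node))
    return None

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```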
490 UF as Pretreatment of RO for Tertiary Treatment of Biologically Treated Distillery Spentwash
Authors: Pinki Sharma, Himanshu Joshi
Abstract:
Distillery spentwash contains high chemical oxygen demand (COD), biological oxygen demand (BOD), color, total dissolved solids (TDS) and other contaminants even after biological treatment. The effluent cannot be discharged as such into surface water bodies or onto land without further treatment. Reverse osmosis (RO) treatment plants have been installed at the tertiary level in many distilleries in India, but they are not working properly due to fouling, which is caused by the presence of high concentrations of organic matter and other contaminants in biologically treated spentwash. In order to make membrane treatment a proven and reliable technology, proper pre-treatment is mandatory. In the present study, ultra-filtration (UF) as pre-treatment of RO at the tertiary stage has been performed. Operating parameters, namely initial pH (pHo: 2–10), trans-membrane pressure (TMP: 4–20 bar) and temperature (T: 15–43°C), were used for conducting experiments with the UF system. Experiments were optimized at different operating parameters in terms of COD, color, TDS and TOC removal by using response surface methodology (RSM) with a central composite design. The results showed that the removal of COD, color and TDS was 62%, 93.5% and 75.5%, respectively, with UF at the optimized conditions, with the permeate flux increased from 17.5 l/m2/h (RO) to 38 l/m2/h (UF-RO). The performance of the RO system was greatly improved both in terms of pollutant removal and water recovery.
Keywords: Bio-digested distillery spentwash, reverse osmosis, response surface methodology, ultra-filtration.
489 Fault-Tolerant Control Study and Classification: Case Study of a Hydraulic-Press Model Simulated in Real-Time
Authors: Jorge Rodriguez-Guerra, Carlos Calleja, Aron Pujana, Iker Elorza, Ana Maria Macarulla
Abstract:
Society demands more reliable manufacturing processes capable of producing high quality products in shorter production cycles. New control algorithms have been studied to satisfy this paradigm, in which Fault-Tolerant Control (FTC) plays a significant role. It is suitable for detecting, isolating and adapting a system when a harmful or faulty situation appears. In this paper, a general overview of FTC characteristics is given, highlighting the properties a system must ensure to be considered faultless. In addition, research to identify the main FTC techniques and a classification based on their characteristics is presented in two main groups: Active Fault-Tolerant Controllers (AFTCs) and Passive Fault-Tolerant Controllers (PFTCs). AFTC encompasses the techniques capable of re-configuring the process control algorithm after the fault has been detected, while PFTC comprises the algorithms robust enough to bypass the fault without further modifications. The mentioned re-configuration requires two stages, one focused on detection, isolation and identification of the fault source and the other in charge of re-designing the control algorithm by two approaches: fault accommodation and control re-design. From the algorithms studied, one has been selected and applied to a case study based on an industrial hydraulic press. The developed model has been embedded in a real-time validation platform, which allows testing the FTC algorithms and analysing how the system responds when a fault arises under conditions similar to those a machine would experience on the factory floor. One AFTC approach has been chosen as the methodology the system will follow in the fault recovery process. In a first instance, the fault is detected, isolated and identified by means of a neural network. In a second instance, the control algorithm is re-configured to overcome the fault and continue working without human interaction.
Keywords: Fault-tolerant control, electro-hydraulic actuator, fault detection and isolation, control re-design, real-time.
488 The Applicability of the Zipper Strut to Seismic Rehabilitation of Steel Structures
Authors: G. R. Nouri, H. Imani Kalesar, Zahra Ameli
Abstract:
Chevron frames (inverted-V-braced or V-braced frames) have seismic disadvantages: they do not exhibit good force redistribution capability, and the compression brace buckles immediately. Researchers have developed new design provisions to increase both the ductility and the lateral resistance of these structures in seismic areas. One of these new methods is the addition of zipper columns, as proposed by Khatib et al. (1988) [2]. Zipper columns are vertical members connecting the intersection points of the braces above the first floor. In this paper, the applicability of the suspended zipper system to the seismic rehabilitation of steel structures is investigated. The models are 3-, 6-, 9-, and 12-story inverted-V-braced frames, and it is assumed that the structures must be rehabilitated. For the rehabilitation of the structures, zipper columns are used. The results showed that the suspended zipper system is effective for the 3-, 6-, and 9-story inverted-V-braced frames and increases the lateral resistance of the structure up to the life safety level. In the case of high-rise buildings (such as the 12-story frame), however, it does not show good performance. To solve this problem, the braced bay can consist of small "units" over the height of the entire structure, each of which is a zipper-braced bay of a few stories. By using this method, the lateral resistance of the 12-story inverted-V-braced frame is increased up to the life safety level.
Keywords: Chevron-braced frames, suspended zipper frames, zipper frames, zipper columns.
487 Impact of Disposed Drinking Water Sachets in Damaturu, Yobe State, Nigeria
Authors: Meeta Ratawa Tiwary
Abstract:
Damaturu is the capital of Yobe State in northeastern Nigeria, where civic amenities and facilities are not adequate even after 24 years of the state's existence; the volatile security and political situations are the most significant causes of this. Basic facilities for citizens in terms of drinking water and electricity are not available. For drinking water, they have to rely on personal boreholes or on filtered borehole water available in packaged sachets in the market. The present study is concerned with the environmental impact of the indiscriminate disposal of synthetic polythene drinking water sachets in Damaturu. The sachet water is popularly called “pure water”, but its purity is questionable. Increased production and consumption of sachet water have led to indiscriminate dumping and disposal of empty sachets, creating a serious environmental threat. The evidence of this is seen in sachets littering the streets and in drainages blocked by ‘blocks’ of water sachet waste. Sachet water gained much popularity in Nigeria because the product is convenient to use, affordable and economically viable. The present study aims to find a solution to this environmental problem. The field-based study has identified some significant factors that cause environmental and socio-economic effects. Some recommendations have been made based on the research findings regarding sustainable waste management and the recycling and re-use of non-biodegradable products in society.
486 Application of GA Optimization in Analysis of Variable Stiffness Composites
Authors: Nasim Fallahi, Erasmo Carrera, Alfonso Pagani
Abstract:
Variable angle tow (VAT) describes fibres that are curvilinearly steered in a composite lamina, which significantly enlarges the stiffness tailoring freedom of a VAT composite laminate. Composite structures with curvilinear fibres have been shown to improve the buckling load carrying capability in contrast with straight-fibre laminates. However, the optimal design and analysis of VAT face high computational efforts due to the increasing number of variables. In this article, an efficient optimum solution is used in combination with 1D Carrera’s Unified Formulation (CUF) to investigate the optimum fibre orientation angles for buckling analysis. Particular emphasis is placed on the LE-based CUF models, which provide Lagrange expansions to address a layerwise description of the problem unknowns. The first critical buckling load has been considered under simply supported boundary conditions. Special attention is paid to the sensitivity of the buckling load to the fibre orientation angle in comparison with the results obtained through the Genetic Algorithm (GA) optimization frame; an Artificial Neural Network (ANN) is then applied to investigate the accuracy of the optimized model. As a result, the numerical CUF approach with an optimal solution demonstrates the robustness and computational efficiency of the proposed optimum methodology.
Keywords: Beam structures, layerwise, optimization, variable angle tow, neural network.
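A minimal sketch of a plain genetic algorithm over ply orientation angles (illustrative only; the fitness function below is a smooth stand-in, not the CUF buckling model, and all GA settings are placeholders):

```python
# Illustrative sketch of a plain GA over ply angles; the fitness is a smooth
# stand-in (maximum at 45 degrees per ply), not the CUF buckling analysis.
import numpy as np

rng = np.random.default_rng(0)

def toy_fitness(angles):
    return -np.sum((np.asarray(angles) - 45.0) ** 2)

def ga(fitness, n_plies=4, pop_size=30, gens=100, pmut=0.2):
    pop = rng.uniform(-90, 90, (pop_size, n_plies))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_plies)                          # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            if rng.random() < pmut:                                 # Gaussian mutation
                child[rng.integers(n_plies)] += rng.normal(0, 10)
            children.append(np.clip(child, -90, 90))
        pop = np.vstack([parents, children])
    best = max(pop, key=fitness)
    return best, fitness(best)

print(ga(toy_fitness))
```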
485 Eliciting and Confirming Data, Information, Knowledge and Wisdom in a Specialist Health Care Setting: The WICKED Method
Authors: S. Impey, D. Berry, S. Furtado, M. Galvin, L. Grogan, O. Hardiman, L. Hederman, M. Heverin, V. Wade, L. Douris, D. O'Sullivan, G. Stephens
Abstract:
Healthcare is a knowledge-rich environment. This knowledge, while valuable, is not always accessible outside the borders of individual clinics. This research aims to address part of this problem (at a study site) by constructing a maximal data set (knowledge artefact) for motor neurone disease (MND). This data set is proposed as an initial knowledge base for a concurrent project to develop an MND patient data platform. It represents the domain knowledge at the study site for the duration of the research (12 months). A knowledge elicitation method was also developed from the lessons learned during this process - the WICKED method. WICKED is an anagram of the words: eliciting and confirming data, information, knowledge, wisdom. But it is also a reference to the concept of wicked problems, which are complex and challenging, as is eliciting expert knowledge. The method was evaluated at a second site, and benefits and limitations were noted. Benefits include that the method provided a systematic way to manage data, information, knowledge and wisdom (DIKW) from various sources, including healthcare specialists and existing data sets. Limitations surrounded the time required and how the data set produced only represents DIKW known during the research period. Future work is underway to address these limitations.
Keywords: Healthcare, knowledge acquisition, maximal data sets, action design science.
484 Optimum Surface Roughness Prediction in Face Milling of High Silicon Stainless Steel
Authors: M. Farahnakian, M.R. Razfar, S. Elhami-Joosheghan
Abstract:
This paper presents an approach for the determination of the optimal cutting parameters (spindle speed, feed rate, depth of cut and engagement) leading to minimum surface roughness in face milling of high silicon stainless steel by coupling a neural network (NN) and the Electromagnetism-like Algorithm (EM). In this regard, the advantages of statistical experimental design techniques, experimental measurements, artificial neural networks, and Electromagnetism-like optimization are exploited in an integrated manner. To this end, numerous experiments on this stainless steel were conducted to obtain surface roughness values. A predictive model for surface roughness was created using a back-propagation neural network, and the optimization problem was then solved using EM optimization. Additional experiments were performed to validate the optimum surface roughness value predicted by the EM algorithm. A good agreement is observed between the values predicted by EM coupled with the feed-forward neural network and the experimental measurements. The obtained results show that the EM algorithm coupled with a back-propagation neural network is an efficient and accurate method for approaching the global minimum of surface roughness in face milling.
Keywords: Cutting parameters, face milling, surface roughness, artificial neural network, Electromagnetism-like algorithm.
483 An AI-Based Dynamical Resource Allocation Calculation Algorithm for Unmanned Aerial Vehicle
Authors: Zhou Luchen, Wu Yubing, Burra Venkata Durga Kumar
Abstract:
As networks become larger and more complex, the density of user devices is also increasing. Unmanned Aerial Vehicle (UAV) networks are able to collect and transfer data in an efficient way by using software-defined networking (SDN) technology. This paper proposes a three-layer, distributed and dynamic cluster architecture that manages UAVs with an AI-based resource allocation calculation algorithm to address the network overloading problem. By separating the services of each UAV, the hierarchical UAV cluster system performs the main function of reducing the network load and transferring user requests, with three sub-tasks: data collection, communication channel organization, and data relaying. In each cluster, a head node and a vice head node UAV are selected considering the CPU, RAM and ROM memory of the devices, battery charge, and capacity. The vice head node acts as a backup that stores all the data in the head node. The k-means clustering algorithm is used to detect high-load regions and form the layered UAV clusters. The whole process of detecting high-load areas, forming and selecting UAV clusters, and moving the selected UAV cluster to that area is proposed as the offloading traffic algorithm.
Keywords: k-means, resource allocation, SDN, UAV network, unmanned aerial vehicles.
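A minimal sketch of the k-means step that detects high-load regions from user-device positions (illustrative only; the device coordinates and cluster count are hypothetical):

```python
# Illustrative sketch: k-means over hypothetical device positions to locate
# high-load regions, i.e. candidate deployment points for UAV clusters.
import numpy as np

rng = np.random.default_rng(1)

def kmeans(points, k=3, iters=50):
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign every device to its nearest centre
        labels = np.argmin(np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
        # move each centre to the mean of its assigned devices (keep it if empty)
        centers = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

# Two dense hotspots plus background traffic (all coordinates hypothetical)
devices = np.vstack([rng.normal([2, 2], 0.3, (40, 2)),
                     rng.normal([8, 7], 0.3, (40, 2)),
                     rng.uniform(0, 10, (20, 2))])
centers, labels = kmeans(devices, k=3)
print(centers)
```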
482 Procedure Model for Data-Driven Decision Support Regarding the Integration of Renewable Energies into Industrial Energy Management
Authors: M. Graus, K. Westhoff, X. Xu
Abstract:
Climate change is driving change in all aspects of society. While the expansion of renewable energies proceeds, general studies about the potential of demand-side management have not been enough to convince industry to embed smart grid considerations in its operational business. In this article, a procedure model for case-specific, data-driven decision support for industrial energy management, based on a holistic data analytics approach, is presented. The model is demonstrated on an example strategic decision problem: integrating renewable energies into industrial energy management. This question arises from considerations of changing the electricity contract model from a standard rate to volatile energy prices corresponding to the energy spot market, which is increasingly affected by renewable energies. The procedure model corresponds to a data analytics process consisting of data modeling, analysis, simulation and optimization steps. This procedure helps to quantify the potential of sustainable production concepts based on data from a factory. The model is validated with data from a printer, in analogy to a simple production machine. The overall goal is to establish smart grid principles for industry via the transformation from knowledge-driven to data-driven decisions within manufacturing companies.
Keywords: Data analytics, green production, industrial energy management, optimization, renewable energies, simulation.
481 Objects Extraction by Cooperating Optical Flow, Edge Detection and Region Growing Procedures
Abstract:
The image segmentation method described in this paper has been developed as a pre-processing stage to be used in methodologies and tools for video/image indexing and retrieval by content. This method solves the problem of extracting whole objects from the background, producing images of single complete objects from videos or photos. The extracted images are used for calculating the object visual features necessary for both the indexing and retrieval processes. The segmentation algorithm is based on the cooperation among an optical flow evaluation method, edge detection and region growing procedures. The optical flow estimator belongs to the class of differential methods. It permits the detection of motions ranging from a fraction of a pixel to a few pixels per frame, achieving good results in the presence of noise without the need for a filtering pre-processing stage, and includes a specialised model for moving object detection. The first task of the presented method exploits the cues from motion analysis for detecting moving areas. Objects and background are then refined using edge detection and seeded region growing procedures, respectively. All the tasks are performed iteratively until objects and background are completely resolved. The method has been applied to a variety of indoor and outdoor scenes where objects of different type and shape are represented on variously textured backgrounds.
Keywords: Image segmentation, motion detection, object extraction, optical flow.
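A minimal sketch of flagging moving areas from dense optical flow between two frames (illustrative only; OpenCV's Farneback estimator is used as a stand-in for the paper's own differential estimator, and the magnitude threshold is an assumption):

```python
# Illustrative sketch: dense optical flow between two frames flags moving areas;
# OpenCV's Farneback estimator stands in for the paper's own differential method.
import cv2
import numpy as np

def moving_area_mask(prev_bgr, next_bgr, mag_threshold=1.0):
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)      # per-pixel motion magnitude
    return (magnitude > mag_threshold).astype(np.uint8)

# Hypothetical usage with two consecutive frames of a video:
# cap = cv2.VideoCapture("scene.avi"); _, f1 = cap.read(); _, f2 = cap.read()
# mask = moving_area_mask(f1, f2)   # 1 where the scene is moving
```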
480 The Response of Winter Wheat to Flooding
Authors: M. E. Ghobadi, M. Ghobadi, A. Zebarjadi
Abstract:
Flooding can be a serious problem for wheat farmers, even under dryland conditions. The amount of flooding damage depends on the duration of flooding, the developmental stage, and the wheat type and variety. Therefore, a factorial experiment in a randomized complete design was carried out at the Faculty of Agriculture, Razi University, Kermanshah, Iran, with winter bread wheat cultivars (Pishtaz, Marvdasht, Shiraz, Zarin, Shahriar, C-81-4, Sardari, Agosta seed, FGS and Azar2) and flooding treatments (no flooding stress, and flooding at the tillering and stem elongation stages for 15 days). During flooding, the soil environment of the plant roots was water-saturated. Analysis of variance showed that flooding had a significant effect on the number of grains per spike, grain weight per spike and single grain weight. Flooding reduced the number of grains per spike by 27.1 to 42.5 percent, grain weight per spike by 34.7 to 54.4 percent and single grain weight by 12.1 to 15.1 percent. Flooding at the tillering stage reduced the studied traits more than flooding at the stem elongation stage. The results also showed that flooding at the tillering stage delayed spikelet primordia and floret development. Differences among the wheat cultivars were significant for these traits, but the cultivars reacted differently: "Shiraz", "Zarin" and "Shahriar" had the highest number of grains per spike, while "Zarin" and "Sardari" had the highest grain weight per spike and single grain weight, respectively. The interaction between the start of flooding and cultivar was also significant.
Keywords: Flooding, winter wheat, yield components.
479 3D Numerical Studies on Jets Acoustic Characteristics of Chevron Nozzles for Aerospace Applications
Authors: R. Kanmaniraja, R. Freshipali, J. Abdullah, K. Niranjan, K. Balasubramani, V. R. Sanal Kumar
Abstract:
The present environmental issues have made aircraft jet noise reduction a crucial problem in aero-acoustics research. Acoustic studies reveal that the addition of chevrons to the nozzle reduces the sound pressure level reasonably, with an acceptable reduction in performance. In this paper, comprehensive numerical studies on the acoustic characteristics of different types of chevron nozzles have been carried out with non-reacting flows for the shape optimization of chevrons in supersonic nozzles for aerospace applications. The numerical studies have been carried out using a validated steady 3D density-based k-ε turbulence model. Chevrons with sharp, flat, round and U-type edges are selected for the jet acoustic characterization of the supersonic nozzles. We observed that, compared to the base model, a round-edged chevron nozzle could reduce the acoustic level by 4.13% with a 0.6% thrust loss. We concluded that prudent selection of the chevron shape will enable an appreciable reduction of aircraft jet noise without compromising its overall performance. It is evident from the present numerical simulations that the k-ε model can predict reasonably well the acoustic level of chevron supersonic nozzles for their shape optimization.
Keywords: Supersonic nozzle, Chevron, Acoustic level, Shape Optimization of Chevron Nozzles, Jet noise suppression.
478 The Automated Soil Erosion Monitoring System (ASEMS)
Authors: George N. Zaimes, Valasia Iakovoglou, Paschalis Koutalakis, Konstantinos Ioannou, Ioannis Kosmadakis, Panagiotis Tsardaklis, Theodoros Laopoulos
Abstract:
Advancements in technology allow the development of a new system that can continuously measure surface soil erosion. Continuous soil erosion measurements are required in order to understand erosional processes and propose effective and efficient conservation measures to mitigate surface erosion. Mitigating soil erosion, especially in Mediterranean countries such as Greece, is essential in order to maintain environmental and agricultural sustainability. In this paper, we present the Automated Soil Erosion Monitoring System (ASEMS), which measures surface soil erosion along with other factors that impact the erosional process. Specifically, the system measures ground level changes (surface soil erosion), rainfall, air temperature, soil temperature, and soil moisture. Another important innovation is that the data are collected by remote communication. In addition, stakeholder awareness is a key factor in helping to reduce any environmental problem, and the different dissemination activities that were utilized are described. The overall outcome is a new, innovative system that can measure erosion very accurately. The data from this system help in studying the process of erosion and finding the best possible methods to reduce it. The dissemination activities enhance stakeholders' and the public's awareness of surface soil erosion problems and will lead to the adoption of more effective soil erosion conservation practices in Greece.
Keywords: Soil management, climate change, new technologies, conservation practices.
477 Nine-Level Shunt Active Power Filter Associated with a Photovoltaic Array Coupled to the Electrical Distribution Network
Authors: Zahzouh Zoubir, Bouzaouit Azzeddine, Gahgah Mounir
Abstract:
The use of more and more electronic power switches with nonlinear behavior generates non-sinusoidal currents in distribution networks, which causes damage to domestic and industrial equipment. The multi-level shunt active power filter is shown to be an adequate solution to this problem. Nevertheless, the difficulty of regulating the DC supply voltage of the active filter requires another technology to ensure it. In this article, a photovoltaic generator is associated with the DC bus terminals of the active filter. The proposed system consists of a field of solar panels, three multi-level voltage inverters connected to the power grid and a non-linear load consisting of a six-diode rectifier bridge supplying a resistive-inductive load. Current control techniques for active and reactive power are used to compensate for both harmonic currents and reactive power, as well as to inject active solar power into the distribution network. A maximum power point tracking algorithm of the Perturb and Observe type is applied. Simulation results of the proposed system under the MATLAB/Simulink environment show that the control scheme ensures solar power injection into the network, harmonic current compensation and power factor correction.
Keywords: MPPT, active power filter, PV array, perturb and observe algorithm, PWM-control.
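A minimal sketch of the Perturb and Observe rule referred to above (illustrative only; the step size and the toy photovoltaic curve are placeholders, not the modelled array):

```python
# Illustrative sketch of the Perturb and Observe rule; step size and PV curve are placeholders.
def perturb_and_observe(v, p, v_prev, p_prev, v_ref, step=0.5):
    """Return the next PV voltage reference from present/previous voltage and power."""
    if p > p_prev:
        v_ref += step if v > v_prev else -step   # power rose: keep the same direction
    else:
        v_ref -= step if v > v_prev else -step   # power fell: reverse the direction
    return v_ref

def pv_power(v):
    """Toy PV curve with its maximum power point near 30 V."""
    return max(0.0, -0.2 * (v - 30.0) ** 2 + 180.0)

v_prev, p_prev, v_ref = 20.0, pv_power(20.0), 21.0
for _ in range(100):
    v, p = v_ref, pv_power(v_ref)
    v_ref = perturb_and_observe(v, p, v_prev, p_prev, v_ref)
    v_prev, p_prev = v, p
print(round(v_prev, 1))  # oscillates around the 30 V maximum power point
```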
476 Modelling Hydrological Time Series Using Wakeby Distribution
Authors: Ilaria Lucrezia Amerise
Abstract:
The statistical modelling of precipitation data for a given portion of territory is fundamental for the monitoring of climatic conditions and for Hydrogeological Management Plans (HMP). This modelling is rendered particularly complex by the changes taking place in the frequency and intensity of precipitation, presumably attributable to global climate change. This paper applies the Wakeby distribution (with 5 parameters) as a theoretical reference model. The number and the quality of its parameters indicate that this distribution may be an appropriate choice for interpolating hydrological variables; moreover, the Wakeby is particularly suitable for describing phenomena producing heavy tails. The proposed estimation methods for determining the values of the Wakeby parameters are the same as those used for density functions with heavy tails. The commonly used procedure is the classic method of probability weighted moments (PWM), although this has often shown difficulty of convergence, or rather convergence to a configuration of inappropriate parameters. In this paper, we analyze the problem of likelihood estimation for a random variable expressed through its quantile function. The method of maximum likelihood is, in this case, more demanding than in more usual estimation situations. The reasons for this lie in the sampling and asymptotic properties of the maximum likelihood estimators, which improve the estimates by providing indications of their variability and, therefore, of their accuracy and reliability. These features are highly appreciated in contexts where poor decisions, attributable to an inefficient or incomplete information base, can cause serious damage.
Keywords: Generalized extreme values (GEV), likelihood estimation, precipitation data, Wakeby distribution.
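A minimal sketch of the 5-parameter Wakeby quantile function and inverse-transform sampling from it (illustrative only; the parameter values are placeholders, not fitted ones):

```python
# Illustrative sketch: the Wakeby distribution defined through its quantile
# function, sampled by the inverse-transform method; parameters are placeholders.
import numpy as np

def wakeby_quantile(F, xi, alpha, beta, gamma, delta):
    """x(F) = xi + (alpha/beta)[1 - (1-F)^beta] - (gamma/delta)[1 - (1-F)^(-delta)]"""
    F = np.asarray(F, float)
    return (xi
            + (alpha / beta) * (1.0 - (1.0 - F) ** beta)
            - (gamma / delta) * (1.0 - (1.0 - F) ** (-delta)))

rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, 10000)                 # uniform probabilities
sample = wakeby_quantile(u, xi=0.0, alpha=5.0, beta=2.0, gamma=1.0, delta=0.3)
print(np.percentile(sample, [50, 90, 99]))       # the heavy right tail shows in the upper quantiles
```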