Search results for: Multiple Input Multiple Outputs (MIMO)

39 A Framework for an Automated Decision Support System for Selecting Safety-Conscious Contractors

Authors: Rawan A. Abdelrazeq, Ahmed M. Khalafallah, Nabil A. Kartam

Abstract:

Selection of competent contractors for construction projects is usually accomplished through competitive bidding or negotiated contracting, in which the contract bid price is the basic criterion for selection. The evaluation of a contractor’s safety performance is still not a typical criterion in the selection process, despite the existence of various safety prequalification procedures. There is a critical need for practical and automated systems that enable owners and decision makers to evaluate contractor safety performance, among other important contractor selection criteria. These systems should ultimately favor the selection of safety-conscious contractors, by virtue of their past good safety records and current safety programs. This paper presents an exploratory sequential mixed-methods approach to develop a framework for an automated decision support system that evaluates contractor safety performance based on a multitude of indicators and metrics identified through a comprehensive review of construction safety research and a survey distributed to domain experts. The framework is developed in three phases: (1) determining the indicators that depict contractor current and past safety performance; (2) soliciting input from construction safety experts regarding the identified indicators, their metrics, and relative significance; and (3) designing a decision support system using relational database models to integrate the identified indicators and metrics into a system that assesses and rates the safety performance of contractors. The proposed automated system is expected to hold several advantages, including: (1) reducing the likelihood of selecting contractors with poor safety records; (2) enhancing the odds of completing the project safely; and (3) encouraging contractors to exert more effort to improve their safety performance and practices in order to increase their bid-winning opportunities, which can lead to significant safety improvements in the construction industry. This should prove useful to decision makers and researchers alike, and should help improve the safety record of the construction industry.

Keywords: Construction safety, contractor selection, decision support system, relational database.
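To make the rating mechanism of phase (3) concrete, here is a minimal sketch of a relational model and a weighted safety-rating query using Python's built-in sqlite3 module; the table layout, indicator names and weights are illustrative assumptions, not the authors' actual schema.

```python
import sqlite3

# Illustrative schema (hypothetical names): contractors, safety indicators
# with expert-assigned weights, and per-contractor normalized metric scores.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE contractor (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE indicator  (id INTEGER PRIMARY KEY, name TEXT, weight REAL);
CREATE TABLE score      (contractor_id INTEGER REFERENCES contractor(id),
                         indicator_id  INTEGER REFERENCES indicator(id),
                         value REAL);  -- normalized metric value in [0, 1]
""")
conn.executemany("INSERT INTO contractor VALUES (?, ?)",
                 [(1, "Alpha Co"), (2, "Beta Ltd")])
conn.executemany("INSERT INTO indicator VALUES (?, ?, ?)",
                 [(1, "Past safety record", 0.4), (2, "Current safety program", 0.6)])
conn.executemany("INSERT INTO score VALUES (?, ?, ?)",
                 [(1, 1, 0.9), (1, 2, 0.7), (2, 1, 0.5), (2, 2, 0.8)])

# Weighted safety rating per contractor: sum(weight * value), best first.
for name, rating in conn.execute("""
    SELECT c.name, SUM(i.weight * s.value)
    FROM score s JOIN contractor c ON c.id = s.contractor_id
                 JOIN indicator  i ON i.id = s.indicator_id
    GROUP BY c.name ORDER BY 2 DESC"""):
    print(f"{name}: {rating:.2f}")
```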

38 Effect of Good Agriculture Management Practices and Constraints on Grape Farming: A Case Study in Mirbachakot, Kalakan and Shakardara Districts Kabul, Afghanistan

Authors: Mohammad Mirwais Yusufi

Abstract:

Skillful management is one of the most important success factors for today’s farms. When a farm is well managed, it can generate funds for its sustainability. Grape is one of the most widely grown fruits in the world and one of the most important cash crops in Afghanistan, with high production potential. While several organizations are intervening to improve this cash crop, its quality and quantity are still not satisfactory for producers and external markets, and the situation has not changed over the years. Therefore, a questionnaire-based survey of 60 grape growers was conducted in 2017 in the Mirbachakot, Kalakan and Shakardara districts of Kabul province. The purpose was to gain an understanding of the current socio-demographic characteristics of farmers, management methods, constraints, farm size, yield and the contribution of grape farming to household income. Findings indicate that grape farming was predominantly male (83.3% male, 16.6% female) and that small-scale farmers were the main grape producers, with 60% cultivating less than 1 ha of grapes. Likewise, 50% had more than 10 years and 33.3% between 1 and 5 years of experience in grape farming. The high level of illiteracy and the prevalence of diseases had a significant effect on the growth, yield and quality of grapes. The results showed that vineyard management operations to protect grapes from mechanical damage are very poor or completely absent. In developed countries, table grape is one of the fruits with the highest input of technology, while in developing countries the cost of labor is low but the purchase of equipment is very expensive owing to the financial situation. Hence, the low quality and quantity of grapes are influenced by poor management methods, such as the non-availability of experts and the lack of technical guidance at the study site. The study therefore suggests that improved agricultural extension services and managerial skills could contribute to addressing these problems.

Keywords: Efficient resources use, management skills, constraints factors, Kabul.

37 Comparison of Multivariate Adaptive Regression Splines and Random Forest Regression in Predicting Forced Expiratory Volume in One Second

Authors: P. V. Pramila, V. Mahesh

Abstract:

Pulmonary function tests are important non-invasive diagnostic tests to assess respiratory impairments and provide quantifiable measures of lung function. Spirometry is the most frequently used measure of lung function and plays an essential role in the diagnosis and management of pulmonary diseases. However, the test requires considerable patient effort and cooperation, markedly related to the age of patients, resulting in incomplete data sets. This paper presents nonlinear models built using multivariate adaptive regression splines (MARS) and random forest (RF) regression to predict the missing spirometric features. Random forest based feature selection is used to enhance both the generalization capability and the interpretability of the models. In the present study, flow-volume data are recorded for N = 198 subjects. The ranked order of the feature importance index calculated by the random forest model shows that the spirometric features FVC, FEF25, PEF, FEF25-75, FEF50 and the demographic parameter height are the important descriptors. A comparison of the performance of both models shows that the prediction ability of MARS with the top two ranked features, namely FVC and FEF25, is higher, yielding model fits of R² = 0.96 and R² = 0.99 for normal and abnormal subjects, respectively. The root mean square error analysis of the RF model and the MARS model also shows that the latter is capable of predicting the missing values of FEV1 with notably lower error values of 0.0191 (normal subjects) and 0.0106 (abnormal subjects) with the aforementioned input features. It is concluded that combining feature selection with a prediction model provides a minimum subset of predominant features to train the model, as well as yielding better prediction performance. This analysis can assist clinicians, as an intelligent decision support system, in medical diagnosis and the improvement of clinical care.

Keywords: FEV1, Multivariate Adaptive Regression Splines, Pulmonary Function Test, Random Forest.
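A minimal sketch of the feature-ranking step with scikit-learn follows; the feature names mirror the abstract, but the synthetic data and hyperparameters are assumptions, and since plain MARS is not in scikit-learn (it would need a third-party package such as py-earth), an RF regressor stands in for the final model here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

features = ["FVC", "FEF25", "PEF", "FEF25-75", "FEF50", "height"]
rng = np.random.default_rng(0)
X = rng.normal(size=(198, len(features)))  # stand-in for the N = 198 flow-volume records
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=198)  # synthetic FEV1

# Rank the spirometric descriptors by random-forest importance.
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
ranking = sorted(zip(features, rf.feature_importances_), key=lambda t: -t[1])
print(ranking)

# Retrain on the top two ranked features only (FVC and FEF25 in the paper).
top2 = [features.index(name) for name, _ in ranking[:2]]
rf_top2 = RandomForestRegressor(n_estimators=500, random_state=0).fit(X[:, top2], y)
print("R^2 on training data:", rf_top2.score(X[:, top2], y))
```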

36 Some Studies on Temperature Distribution Modeling of Laser Butt Welding of AISI 304 Stainless Steel Sheets

Authors: N. Siva Shanmugam, G. Buvanashekaran, K. Sankaranarayanasamy

Abstract:

In this research work, investigations are carried out on a continuous wave (CW) Nd:YAG laser welding system, after preliminary experimentation, to understand the influencing parameters associated with laser welding of AISI 304. The experimental procedure involves a series of laser welding trials on AISI 304 stainless steel sheets with various combinations of process parameters such as beam power, welding speed and beam incident angle. An industrial 2 kW CW Nd:YAG laser system, available at the Welding Research Institute (WRI), BHEL Tiruchirappalli, is used for conducting the welding trials in this research. After proper tuning of the laser beam, laser welding experiments are conducted on AISI 304 grade sheets to evaluate the influence of the various input parameters on the weld bead geometry, i.e. bead width (BW) and depth of penetration (DOP). From the laser welding results, it is noticed that the beam power and welding speed are the two parameters with the strongest influence on the depth and width of the bead. A three-dimensional finite element simulation of the high-density heat source has been performed using the finite element code ANSYS to predict the temperature profile of the laser beam heat source on AISI 304 stainless steel sheets. The temperature-dependent material properties of AISI 304 stainless steel are taken into account in the simulation, as they have a great influence on the computed temperature profiles. The latent heat of fusion is accounted for through the thermal enthalpy of the material in the calculation of the phase transition problem. A Gaussian distribution of heat flux from a moving heat source with a conical shape is used for analyzing the temperature profiles. Experimental and simulated weld bead profiles are analyzed for the stainless steel material for different beam powers, welding speeds and beam incident angles. The results obtained from the simulation are compared with the experimental data, and it is observed that the results of the numerical analysis (FEM) are in good agreement with the experimental results, with an overall percentage error estimated to be within ±6%.

Keywords: Laser welding, Butt weld, 304 SS, FEM.
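For reference, a common form of the moving Gaussian surface heat flux used in such simulations is sketched below; the normalization constant, absorption efficiency and beam parameters vary between studies, so this is an illustrative assumption rather than the authors' exact source term (their conical depth distribution is also omitted).

```python
import numpy as np

def gaussian_moving_flux(x, y, t, P=2000.0, eta=0.7, r0=0.5e-3, v=10e-3):
    """Surface heat flux [W/m^2] of a moving Gaussian laser source.

    P: beam power [W], eta: absorption efficiency, r0: beam radius [m],
    v: welding speed [m/s]. One common normalization (assumed here):
    q(r) = (3*eta*P / (pi*r0^2)) * exp(-3*r^2 / r0^2), with r measured
    from the instantaneous beam center (v*t, 0).
    """
    r2 = (x - v * t) ** 2 + y ** 2
    return 3.0 * eta * P / (np.pi * r0 ** 2) * np.exp(-3.0 * r2 / r0 ** 2)

# Peak flux directly under the beam center at t = 0:
print(f"{gaussian_moving_flux(0.0, 0.0, 0.0):.3e} W/m^2")
```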

35 Design and Modeling of Human Middle Ear for Harmonic Response Analysis

Authors: Shende Suraj Balu, A. B. Deoghare, K. M. Pandey

Abstract:

The human middle ear (ME) is a delicate and vital organ. It has a complex structure that performs various functions such as receiving sound pressure, producing vibrations of the eardrum and propagating them to the inner ear. It consists of the tympanic membrane (TM), three auditory ossicles, and various ligament structures and muscles. Incidents such as trauma, infections, ossification of the ossicular structures and other pathologies may damage the ME organs. These conditions can be surgically treated by employing prostheses; however, the suitability of a prosthesis needs to be examined in advance, prior to surgery. A few decades ago, this issue was addressed and analyzed by developing an equivalent representation, either as a spring-mass system, as an electrical system using an R-L-C circuit, or as an approximated CAD model. Nowadays, however, a three-dimensional ME model can be constructed from micro X-ray computed tomography (μCT) scan data, and patient-specific concerns pertaining to the disease can be examined well in advance. The current research work focuses on developing the ME model from stacks of μCT images, which are used as the input to the MIMICS Research 19.0 (Materialise Interactive Medical Image Control System) software. The stack of CT images is converted into a geometrical surface model to build an accurate morphology of the ME. The work is further extended to understand the dynamic behaviour, namely the harmonic response of the stapes footplate and umbo, for different sound pressure levels applied at the lateral side of the eardrum, using a finite element approach. The pathological condition cholesteatoma of the ME is investigated to obtain the peak-to-peak displacements of the stapes footplate and umbo. Apart from this condition, other pathologies, mainly changes in the stiffness of the stapedial ligament, TM thickness, and ossicular chain separation and fixation, are also explored. The developed model of the ME under pathologies is validated by comparing the results with those available in the literature and also with the results of a normal ME, to calculate the percentage loss in hearing capability.

Keywords: Computed tomography, human middle ear, harmonic response, pathologies, tympanic membrane.
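For context, a finite element harmonic response analysis of this kind solves, at each forcing frequency, the standard frequency-domain equation of structural dynamics (a textbook form, not taken from the paper):

```latex
\left(\mathbf{K} + i\omega\,\mathbf{C} - \omega^{2}\mathbf{M}\right)\mathbf{u}(\omega) = \mathbf{F}(\omega)
```

where M, C and K are the mass, damping and stiffness matrices, F(ω) is the sound-pressure loading on the eardrum, and the peak-to-peak displacements of the stapes footplate and umbo follow from the amplitude of u(ω).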

34 Submicron Laser-Induced Dot, Ripple and Wrinkle Structures and Their Applications

Authors: P. Slepicka, N. Slepickova Kasalkova, I. Michaljanicova, O. Nedela, Z. Kolska, V. Svorcik

Abstract:

Polymers exposed to laser or plasma treatment, or modified with different wet methods that enable the introduction of nanoparticles or biologically active species such as amino acids, may find many applications as biocompatible or anti-bacterial materials; conversely, they can be applied to decrease the number of cells on the treated surface, which opens applications in single-cell units. For the experiments, two types of materials were chosen: polyethersulphone (PES) as a representative of non-biodegradable polymers, and polyhydroxybutyrate (PHB) as a biodegradable material. Exposure of a solid substrate to laser radiation well below the ablation threshold can lead to the formation of various surface structures. The ripples have a period roughly comparable to the wavelength of the incident laser radiation, and their dimensions depend on many factors, such as the chemical composition of the polymer substrate, the laser wavelength and the angle of incidence. Biopolymers, on the other hand, may significantly change their surface roughness and thus influence cell compatibility. The focus was on the surface treatment of PES and PHB by a pulsed KrF excimer laser with a wavelength of 248 nm. The changes in physicochemical properties, surface morphology, surface chemistry and ablation of the exposed polymers were studied for both PES and PHB. Several analytical methods, involving atomic force microscopy, gravimetry, scanning electron microscopy and others, were used for the analysis of the treated surfaces. It was found that the combination of certain input parameters leads not only to the formation of an optimal narrow pattern, but also to the combination of a ripple and a wrinkle-like structure, which could be an optimal candidate for cell attachment. The interactions of different types of cells with the laser-exposed surface were studied. It was found that laser treatment is a major factor in changing wettability/contact angle. The combination of optimal laser energy and pulse number was used for the construction of a surface with an anti-cellular response. With this simple laser treatment, we were able to prepare a biopolymer surface with higher roughness and thus significantly influence the area of growth of different types of cells (U-2 OS cells).

Keywords: Polymer treatment, laser, periodic pattern, cell response.

33 Pose-Dependency of Machine Tool Structures: Appearance, Consequences, and Challenges for Lightweight Large-Scale Machines

Authors: S. Apprich, F. Wulle, A. Lechler, A. Pott, A. Verl

Abstract:

Large-scale machine tools for the manufacturing of large workpieces, e.g. blades, casings or gears for wind turbines, feature pose-dependent dynamic behavior. Small structural damping coefficients lead to long decay times for structural vibrations that have negative impacts on the production process. Typically, these vibrations are handled by increasing the stiffness of the structure by adding mass. This is counterproductive to the needs of sustainable manufacturing, as it leads to higher resource consumption both in material and in energy. Recent research activities have achieved higher resource efficiency through radical mass reduction based on control-integrated active vibration avoidance and damping methods. These control methods depend on information describing the dynamic behavior of the controlled machine tool in order to tune the avoidance or reduction method parameters according to the current state of the machine. This paper presents the appearance, consequences and challenges of the pose-dependent dynamic behavior of lightweight large-scale machine tool structures in production. It starts with a theoretical introduction to the challenges of lightweight machine tool structures resulting from reduced stiffness. The statement of the pose-dependent dynamic behavior is corroborated by the results of an experimental modal analysis of a lightweight test structure. Afterwards, the consequences of the pose-dependent dynamic behavior of lightweight machine tool structures for the use of active control and vibration reduction methods are explained. Based on the state of the art of pose-dependent dynamic machine tool models and the modal investigation of an FE model of the lightweight test structure, the criteria for a pose-dependent model for use in vibration reduction are derived. The paper closes with an outlook on an approach for a general pose-dependent model of the dynamic behavior of large lightweight machine tools, which would provide the necessary input for the aforementioned vibration avoidance and reduction methods to properly tackle machine vibrations.

Keywords: Dynamic behavior, lightweight, machine tool, pose-dependency.
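One compact way to state the pose dependency discussed here (a generic second-order structural form, not the authors' specific model) is to let the structural matrices depend on the axis configuration q:

```latex
\mathbf{M}(q)\,\ddot{\mathbf{x}} + \mathbf{D}(q)\,\dot{\mathbf{x}} + \mathbf{K}(q)\,\mathbf{x} = \mathbf{f}(t)
```

so that eigenfrequencies and mode shapes, and hence the tuning parameters of any vibration avoidance or damping controller, change as the tool center point moves through the workspace.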

32 The Experimental and Numerical Analysis of the Joining Processes for Air Conditioning Systems

Authors: M.St. Węglowski, D. Miara, S. Błacha, J. Dworak, J. Rykała, K. Kwieciński, J. Pikuła, G. Ziobro, A. Szafron, P. Zimierska-Nowak, M. Richert, P. Noga

Abstract:

In this paper, the results of the welding of car air-conditioning elements are presented. These systems have mainly been based on environmentally unfriendly refrigerants; car producers will therefore have to stop using traditional refrigerants and change to carbon dioxide (R744), which is environmentally friendly. However, it should be noted that an air-conditioning system working with the R744 refrigerant operates at high temperature (up to 150 °C) and high pressure (up to 130 bar), both much higher than for other refrigerants. Thus, new materials, designs and joining technologies are strongly needed for these systems. AISI 304 and 316L steels as well as 5xxx aluminium alloys are ranked among the prospective materials. As joining processes, laser welding, plasma welding, electron beam welding as well as high rotary friction welding can be applied. In the study, metallographic examination based on light microscopy as well as SEM was applied to assess the quality of the welded joints. The analysis of the welds was supported by numerical modelling based on the Sysweld software. The results indicated that, using laser, plasma and electron beam welding, it is possible to obtain welds of proper quality in stainless steel. Moreover, high rotary friction welding makes it possible to guarantee metallic continuity in the aluminium welded area. The metallographic examination revealed that grain growth in the heat-affected zone (HAZ) of the laser and electron beam welded joints was not observed, owing to the low heat input and short welding time. Grain growth and subgrains can be observed at room temperature when the solidification mode is austenitic, which causes only small microstructural changes during solidification. A columnar grain structure was found in the weld metal, while equiaxed grains were detected at the interface. The numerical modelling of the laser welding process allowed the temperature profile in the welded joint to be estimated and the dimensions of the welds to be predicted. Good agreement between the FEM analysis and the experimental data was achieved.

Keywords: Car air-conditioning, microstructure, numerical modelling, welding.

31 Development of a Feedback Control System for a Lab-Scale Biomass Combustion System Using Programmable Logic Controller

Authors: Samuel O. Alamu, Seong W. Lee, Blaise Kalmia, Marc J. Louise Caballes, Xuejun Qian

Abstract:

The application of combustion technologies for the thermal conversion of biomass and solid wastes to energy has long been a major solution for the effective handling of wastes. Lab-scale biomass combustion systems have been observed to be economically viable and socially acceptable, but major concerns are the environmental impacts of the process and deviations of the temperature distribution within the combustion chamber. Both high and low combustion chamber temperatures may affect the overall combustion efficiency and gaseous emissions. Therefore, there is an urgent need to develop a control system that measures the deviations of the chamber temperature from set target values, feeds these deviations (which generate disturbances in the system) back as an input signal, and adjusts the operating conditions to correct the errors. In this research study, the major components of the feedback control system were determined, assembled, and tested. In addition, control algorithms were developed to actuate the operating conditions (e.g., air velocity, fuel feeding rate) using ladder logic functions embedded in the programmable logic controller (PLC). The developed control algorithm, with the chamber temperature as the feedback signal, is integrated into the lab-scale swirling fluidized bed combustor (SFBC) to investigate the temperature distribution at different heights of the combustion chamber for various operating conditions. The air blower rates and the fuel feeding rates obtained from automatic control operations were correlated with manual inputs. There was no observable difference in the correlated results, indicating that the written PLC program functions were adequate for the design of the experimental study of the lab-scale SFBC. The experimental results were analyzed to study the effects of air velocities of 222-273 ft/min and fuel feeding rates of 60-90 rpm on the chamber temperature. The developed temperature-based feedback control system was shown to be adequate in controlling the airflow and the fuel feeding rate for the overall biomass combustion process, as it helps to minimize the steady-state error.

Keywords: Air flow, biomass combustion, feedback control system, fuel feeding, ladder logic, programmable logic controller, temperature.
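A minimal sketch of the temperature feedback idea follows, written in Python for readability (the actual controller is PLC ladder logic); the setpoint, the proportional gain and the assumed control policy are illustrative, while the actuator windows are the 222-273 ft/min and 60-90 rpm ranges reported above.

```python
def control_step(t_meas, t_set=850.0, kp=0.5,
                 air_range=(222.0, 273.0), feed_range=(60.0, 90.0)):
    """One proportional correction of blower speed and fuel feed rate.

    t_meas: measured chamber temperature [deg C]; t_set: hypothetical target.
    Returns (air velocity [ft/min], feeder speed [rpm]) clamped to the
    operating windows reported in the abstract.
    """
    error = t_set - t_meas                     # feedback signal
    air = air_range[0] + kp * max(error, 0.0)  # more air when too cold (assumed policy)
    feed = feed_range[0] + kp * max(error, 0.0) / 10.0
    clamp = lambda v, low, high: max(low, min(high, v))
    return (clamp(air, *air_range), clamp(feed, *feed_range))

print(control_step(780.0))   # below setpoint -> actuators pushed up
print(control_step(900.0))   # above setpoint -> actuators at lower bound
```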

30 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements

Authors: Henok Hailemariam, Frank Wuttke

Abstract:

Collapsible soils are weak soils that appear to be stable in their natural, normally dry state, but rapidly deform under saturation (wetting), thus generating large and unexpected settlements which often have disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for soil subsidence prediction, this model does not require the moisture content as an input parameter, thus providing the opportunity to obtain an accurate estimate of the relative subsidence of collapsible soils from dielectric measurements only. The prediction model is developed based on an existing relative subsidence prediction model (which is dependent on the soil moisture condition) and an advanced theoretical frequency- and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original relative subsidence prediction model). For large-scale sub-surface soil exploration purposes, spatial sub-surface soil dielectric data over wide areas and great depths of weak (collapsible) soil deposits can be obtained using non-destructive high-frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small-scale in-situ measurements, techniques such as an open-ended coaxial line with widely applicable time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small- or large-scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement. Some of the resulting benefits are the preservation of the undisturbed nature of the soil as well as a reduction in the investigation costs and analysis time in the identification of weak (problematic) soils. The accuracy of prediction of the presented model is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions, and a good match between the model predictions and the experimental results is obtained.

Keywords: Collapsible soil, relative subsidence, dielectric permittivity, moisture content.

29 Robust Batch Process Scheduling in Pharmaceutical Industries: A Case Study

Authors: Tommaso Adamo, Gianpaolo Ghiani, Antonio D. Grieco, Emanuela Guerriero

Abstract:

Batch production plants give rise to a wide range of scheduling problems. In pharmaceutical industries, a batch process is usually described by a recipe, consisting of an ordering of tasks to produce the desired product. In this research work, we focused on pharmaceutical production processes requiring the culture of a microorganism population (e.g. bacteria or yeasts, as in antibiotics production). Several sources of uncertainty may influence the yield of the culture processes, including (i) low performance and quality of the cultured microorganism population or (ii) microbial contamination. For these reasons, robustness is a valuable property in the considered application context. In particular, a robust schedule will not collapse immediately when a microorganism culture has to be thrown away due to microbial contamination. Indeed, a robust schedule should change locally and in small proportions, and the overall performance measure (e.g. makespan, lateness) should change little, if at all. In this research work, we formulated a constraint programming optimization (COP) model for the robust planning of antibiotics production. We developed a discrete-time model with a multi-criteria objective, ordering the different criteria and performing a lexicographic optimization. A feasible solution of the proposed COP model is a schedule of a given set of tasks onto the available resources. The schedule has to satisfy task precedence constraints, resource capacity constraints and time constraints; in particular, the time constraints model task due dates and resource availability time windows. To improve the schedule robustness, we modeled the concept of (a, b) super-solutions, where (a, b) are input parameters of the COP model. An (a, b) super-solution is one in which, if a variables (i.e. the completion times of a culture tasks) lose their values (i.e. the cultures are contaminated), the solution can be repaired by assigning new values to these variables (i.e. the completion times of a backup culture tasks) and to at most b other variables (i.e. delaying the completion of at most b other tasks). The efficiency and applicability of the proposed model are demonstrated by solving instances taken from a real-life pharmaceutical company. Computational results showed that the determined super-solutions are near-optimal.

Keywords: Constraint programming, super-solutions, robust scheduling, batch process, pharmaceutical industries.
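To make the scheduling substrate concrete, here is a minimal constraint-programming sketch using Google OR-Tools CP-SAT (our choice of solver for illustration; the durations are hypothetical, and the paper's (a, b) super-solution constraints and lexicographic multi-criteria objective are not reproduced):

```python
from ortools.sat.python import cp_model

durations = [3, 2, 4]                  # hypothetical culture/processing tasks
horizon = sum(durations)
model = cp_model.CpModel()
starts, ends, intervals = [], [], []
for i, d in enumerate(durations):
    s = model.NewIntVar(0, horizon, f"s{i}")
    e = model.NewIntVar(0, horizon, f"e{i}")
    intervals.append(model.NewIntervalVar(s, d, e, f"iv{i}"))
    starts.append(s)
    ends.append(e)

model.Add(starts[1] >= ends[0])        # recipe precedence: task 0 before task 1
model.AddNoOverlap(intervals)          # one unary resource (e.g., a single fermenter)

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)               # first criterion of a lexicographic objective

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print("makespan:", solver.Value(makespan),
          "starts:", [solver.Value(s) for s in starts])
```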

28 Achieving Implementable Nature-Based Solutions While Reshaping Architectural Education: A Case Study of URBiNAT and BUILD Solutions

Authors: C. Farinea, A. Conserva, F. Demeur

Abstract:

Nature has often been something humans have fought against. However, with the changing climate and urban challenges such as air pollution and food shortages, to name but a few, it has never been more crucial to work with nature to find solutions that can help us adapt to the current planetary situation and mitigate the challenges that we will continue to face in the future. Nature-based solutions (NBS) have been gaining ground as one strategy that can help create more sustainable solutions for our planet while simultaneously providing several ecosystem services. As designers, there are many insights that can be extracted and gained from nature. However, nature is a complex and sometimes difficult-to-predict system, and its implementation in cities requires multidisciplinary knowledge. To keep up with these solutions and equip future generations of architects and designers with the skills to implement NBS, educational systems also have to adapt with the times. Architecture is no longer solely about drawing buildings with beautiful forms, and it is no longer discipline-bound. With input from different disciplines, the implementation of NBS can be significantly more successful. Transdisciplinary strategies can encourage architects and designers to think beyond their discipline and ensure the success and realization of NBS. The paper demonstrates how transdisciplinary teaching methodologies, including participatory processes with experts as a way of gathering local knowledge, can be implemented with architectural master’s students to achieve implementable NBS. Through two projects co-funded by the European Union, strategies such as participatory co-design and transdisciplinary start-ups were implemented in seminars that focused on the development of NBS with a transdisciplinary approach. Within the “Design with Living Systems” seminar, students took part in participatory co-design strategies with experts to design solutions that will be implemented in Porto as part of a healthy corridor and that respond to the needs of the users and site. Within the “Design for Living Systems” seminar, on the other hand, the transdisciplinary start-up approach created start-ups with students of architecture, business and biology, focusing on identifying a problem and designing an NBS as a product. Both seminars proved successful in achieving implementable NBS through strategies of transdisciplinary education, and gave the students the skill sets to work with nature in their future careers.

Keywords: Architectural higher education, digital fabrication, nature-based solutions, transdisciplinary approaches.

27 Thermodynamic Analyses of Information Dissipation along the Passive Dendritic Trees and Active Action Potential

Authors: Bahar Hazal Yalçınkaya, Bayram Yılmaz, Mustafa Özilgen

Abstract:

Brain information transmission in the neuronal network occurs in the form of electrical signals. The neural network transmits information between neurons, or between neurons and target cells, by moving charged particles in a voltage field; a fraction of the energy utilized in this process is dissipated via entropy generation. Exergy loss and entropy generation models demonstrate the inefficiencies of communication along the dendritic trees. In this study, neurons of four different animals were analyzed with a one-dimensional cable model with N = 6 identical dendritic trees and M = 3 orders of symmetrical branching. Each branch bifurcates symmetrically in accordance with the 3/2 power law into an equivalent infinitely long cylinder, with the usual core conductor assumptions and with the membrane potential conserved in the core conductor at all branching points. In the model, exergy loss and entropy generation rates are calculated for each branch of the equivalent cylinders, for electrotonic lengths (L) ranging from 0.1 to 1.5 and for four different dendritic branches: the input branch (BI), the sister branch (BS) and two cousin branches (BC-1 and BC-2). Thermodynamic analysis with data from two different cat motoneuron studies shows that in both experiments nearly the same amount of exergy is lost while nearly the same amount of entropy is generated. The guinea pig vagal motoneuron loses twice as much exergy as the cat models, and the squid’s exergy loss and entropy generation were nearly tenfold those of the guinea pig vagal motoneuron model. The analysis shows that the energy dissipated in the dendritic trees, the exergy loss and the entropy generation are directly proportional to the electrotonic length. Entropy generation and exergy loss vary not only between vertebrates and invertebrates but also within the same class. Concurrently, the Na+ ion load of a single action potential, the metabolic energy utilization and their thermodynamic aspects were evaluated for the squid giant axon and mammalian motoneuron models. The energy demand is supplied to the neurons in the form of adenosine triphosphate (ATP), and the exergy destruction and entropy generation upon ATP hydrolysis are calculated. ATP utilization, exergy destruction and entropy generation differed in each model, depending on the variations in ion transport along the channels.

Keywords: ATP utilization, entropy generation, exergy loss, neuronal information transmittance.
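The 3/2 power law invoked above is Rall's branching condition, under which a symmetrically branching tree collapses to an equivalent cylinder (standard cable-theory relations, not specific to this paper):

```latex
d_{\mathrm{parent}}^{3/2} = \sum_{j} d_{j}^{3/2}, \qquad
\lambda = \sqrt{\frac{R_m\, d}{4\, R_i}}, \qquad L = \frac{\ell}{\lambda}
```

where the d are branch diameters, R_m is the specific membrane resistance, R_i the axial resistivity, λ the space constant, and L the electrotonic length of a branch of physical length ℓ.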

26 Nondestructive Electrochemical Testing Method for Prestressed Concrete Structures

Authors: Tomoko Fukuyama, Osamu Senbu

Abstract:

Prestressed concrete is widely used in infrastructure such as roads and bridges. However, poor grout filling and corrosion of the prestressing (PC) steel are currently major issues for prestressed concrete structures. One of the problems with nondestructive corrosion detection of PC steel is the plastic pipe that covers the steel: the insulating property of the pipe makes nondestructive diagnosis difficult, and therefore a practical technology to detect these defects is necessary for the maintenance of infrastructure. The goal of this research is the development of an electrochemical technique that enables internal defects to be detected nondestructively from the surface of prestressed concrete. Ideally, the measurements should be conducted from the surface of the structural members. In the present experiment, a prestressed concrete member is simplified as a layered specimen to simulate a current path between an input and an output electrode on the member surface. Specimens layered from mortar and the constituent materials of prestressed concrete (steel, polyethylene, stainless steel, or galvanized steel plates) were subjected to alternating current impedance measurement. The magnitude of the applied signal was 0.01 V or 1 V, and the frequency ranged from 10^6 Hz to 10^-2 Hz. The frequency spectra of the impedance, which relate to the charge reactions activated by the electric field, were measured to clarify the effects of the material configurations and properties. In the civil engineering field, the Nyquist diagram is popular for analyzing impedance, and the shape of the plot is a good way to grasp electrical relaxation; however, it is not well suited to showing the influence of the measurement frequency, which is the reciprocal of the reaction time. Hence, the Bode diagram is also used to describe the charge reactions in the present paper. From the experimental results, the alternating current impedance method appears to be applicable to measurements on insulating materials and, eventually, to prestressed concrete diagnosis. At the same time, the frequency spectra of the impedance reflect the differences in material configuration. This is because the charge mobility reflects the variety of substances, and the measuring frequency of the electric field determines the migration length of the charges under its influence. However, the technique could not distinguish differences in material thickness, from which the difficulty of identifying the size of an air void or a layer of corrosion product in prestressed concrete diagnosis is inferred.

Keywords: Prestressed concrete, electric charge, impedance, phase shift.
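To make the measurement concrete, a minimal sketch computing such a spectrum for a simple equivalent circuit over the reported 10^6 to 10^-2 Hz range is given below; the circuit topology and component values are illustrative assumptions, not a fitted model of the specimens.

```python
import numpy as np

f = np.logspace(6, -2, 9)                 # measurement frequencies [Hz]
w = 2 * np.pi * f
Rs, Rct, C = 50.0, 5e3, 1e-6              # series R, charge-transfer R, capacitance (assumed)
Z = Rs + 1.0 / (1.0 / Rct + 1j * w * C)   # Rs in series with Rct parallel to C

for fi, zi in zip(f, Z):
    # Bode representation: |Z| and phase; a Nyquist plot would use (Re Z, -Im Z).
    print(f"f={fi:9.2e} Hz  |Z|={abs(zi):10.1f} ohm  "
          f"phase={np.degrees(np.angle(zi)):7.2f} deg")
```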

25 A Self Supervised Bi-directional Neural Network (BDSONN) Architecture for Object Extraction Guided by Beta Activation Function and Adaptive Fuzzy Context Sensitive Thresholding

Authors: Siddhartha Bhattacharyya, Paramartha Dutta, Ujjwal Maulik, Prashanta Kumar Nandi

Abstract:

A multilayer self-organizing neural network (MLSONN) architecture for binary object extraction, guided by a beta activation function and characterized by backpropagation of errors estimated from the linear indices of fuzziness of the network output states, is discussed. Since the MLSONN architecture is designed to operate in a single-point, fixed/uniform thresholding scenario, it does not take into account the heterogeneity of image information in the extraction process. The performance of the MLSONN architecture with representative values of the threshold parameters of the beta activation function is also studied. A three-layer bidirectional self-organizing neural network (BDSONN) architecture comprising fully connected neurons, for the extraction of objects from a noisy background and capable of incorporating the underlying image context heterogeneity through variable and adaptive thresholding, is proposed in this article. The input layer of the network architecture represents the fuzzy membership information of the image scene to be extracted. The second layer (the intermediate layer) and the final layer (the output layer) of the network architecture deal with the self-supervised object extraction task by bidirectional propagation of the network states. Each layer except the output layer is connected to the next layer following a neighborhood-based topology. The output layer neurons are, in turn, connected to the intermediate layer following a similar topology, thus forming a counter-propagating architecture with the intermediate layer. The novelty of the proposed architecture is that the assignment/updating of the inter-layer connection weights is done using the relative fuzzy membership values at the constituent neurons in the different network layers. Another interesting feature of the network lies in the fact that the processing capabilities of the intermediate and output layer neurons are guided by a beta activation function that uses image-context-sensitive adaptive thresholding arising out of the fuzzy cardinality estimates of the different network neighborhood fuzzy subsets, rather than resorting to fixed, single-point thresholding. An application of the proposed architecture to object extraction is demonstrated using a synthetic and a real-life image. The extraction efficiency of the proposed network architecture is evaluated by a proposed system transfer index characteristic of the network.

Keywords: Beta activation function, fuzzy cardinality, multilayer self organizing neural network, object extraction.
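For reference, the linear index of fuzziness used here as the error measure has a standard definition from fuzzy set theory; a minimal sketch follows, with illustrative membership values.

```python
import numpy as np

def linear_index_of_fuzziness(mu):
    """nu(A) = (2/n) * sum_i min(mu_i, 1 - mu_i) over membership values mu_i.

    It is 0 for a crisp set (all mu_i in {0, 1}) and maximal (1) when
    every mu_i = 0.5, i.e. when the output states are most ambiguous.
    """
    mu = np.asarray(mu, dtype=float)
    return 2.0 / mu.size * np.minimum(mu, 1.0 - mu).sum()

print(linear_index_of_fuzziness([0.0, 1.0, 0.5, 0.9]))  # the 0.5 entry dominates
```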

24 Performance Management of Tangible Assets within the Balanced Scorecard and Interactive Business Decision Tools

Authors: Raymond K. Jonkers

Abstract:

The present study investigated approaches and techniques to enhance strategic management governance and decision making within the framework of a performance-based balanced scorecard. A review of best practices from strategic, program, process, and systems engineering management provided for a holistic approach toward effective outcome-based capability management. One technique, based on factorial experimental design methods, was used to develop an empirical model. This model predicts the degree of capability effectiveness as a function of controlled system input variables and their weightings. These variables represent business performance measures captured within a strategic balanced scorecard. The weighting of these measures enhances the ability to quantify causal relationships within balanced scorecard strategy maps. The focus in this study was on the performance of tangible assets within the scorecard, rather than on the traditional approach of assessing the performance of intangible assets such as knowledge and technology. Tangible assets are represented in this study as physical systems, which may be thought of as being aboard a ship or within a production facility. The measures assigned to these systems include project funding for upgrades against demand, system certifications achieved against those required, preventive-to-corrective maintenance ratios, and material support personnel capacity against that required for supporting the respective systems. The resultant scorecard is viewed as complementary to the traditional balanced scorecard for program and performance management. The benefits of these scorecards are realized through the quantified state of operational capabilities or outcomes. These capabilities are also weighted in terms of priority for each distinct system measure, and aggregated and visualized in terms of the overall state of capabilities achieved. This study proposes the use of interactive controls within the scorecard as a technique to enhance the development of alternative solutions in decision making. These interactive controls include those for assigning capability priorities and for adjusting system performance measures, thus providing for what-if scenarios and options in strategic decision making. In this holistic approach to capability management, several cross-functional processes were highlighted as relevant amongst the different management disciplines. In terms of assessing an organization’s ability to adopt this approach, consideration was given to the P3M3 management maturity model.

Keywords: Outcome based management, performance management, lifecycle costs, balanced scorecard.

23 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances

Authors: P. Mounnarath, U. Schmitz, Ch. Zhang

Abstract:

Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of expansion joints according to the various bridge design codes is largely inconsistent, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm and 350 mm) are designed by following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference; it uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. A nonlinear time history analysis is performed. Artificial ground motion sets with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g, at increments of 0.05 g, are taken as input. The soil-structure interaction and P-Δ effects are also included in the analysis. The component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that, in the component fragility analysis, the reference bridge model exhibits more severe vulnerability than the more sophisticated bridge models for all damage states. In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states, but then exhibit higher fragility than the other curves at larger PGA levels; in the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analyses, the same trend is found: bridge models with smaller clearances exhibit smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.

Keywords: Expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis.
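Component fragility curves of this kind are conventionally expressed as lognormal cumulative distributions (a standard form in seismic fragility analysis; the abstract does not state its fitting function explicitly):

```latex
P\!\left(DS \ge ds \mid \mathrm{PGA} = x\right) = \Phi\!\left(\frac{\ln\!\left(x/\theta\right)}{\beta}\right)
```

where Φ is the standard normal CDF, θ the median capacity and β the logarithmic standard deviation; the system curves then combine the component probabilities, e.g. under a series-system assumption.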

22 Contributions of Natural and Human Activities to Urban Surface Runoff with Different Hydrological Scenarios (Orléans, France)

Authors: Mohammed Al-Juhaishi, Mikael Motelica-Heino, Fabrice Muller, Audrey Guirimand-Dufour, Christian Défarge

Abstract:

This study aims at improving knowledge of the urban hydrological cycle of the Orléans agglomeration (France) and at understanding the relationship between the physical and chemical parameters of urban surface runoff and the hydrological conditions. In particular, water quality parameters such as pH, conductivity, total dissolved solids, major dissolved cations and anions, and chemical and biological oxygen demands were monitored for three types of urban water discharges (wastewater treatment plant (WWTP) output, storm overflow and stormwater outfall) under two hydrological scenarios (dry and wet weather). The first results were obtained over a period of five months. Each investigated outfall (Ormes, l’Egoutier and La Corne) represents an urban runoff source that receives water from road runoff, gutters, garden irrigation and other overland flows draining its catchment, and carries it to the Loire River. In wet weather conditions, there is rainwater runoff plus an additional input from the roof gutters that enters the stormwater system during rainfall. For comparison, La Chilesse, a storm overflow located upstream of the WWTP, was selected in our study as a potential source of wastewater. The comparison of the physical-chemical parameters (total dissolved solids, turbidity, pH, conductivity, dissolved organic carbon (DOC), concentrations of major cations and anions) together with the chemical oxygen demand (COD) and biological oxygen demand (BOD) helped to characterize the sources of runoff waters in the different watersheds. It also helped to highlight the infiltration of wastewater into some stormwater systems that discharge directly into the Loire River. The conductivity values measured at the Ormes outflow were always higher than those measured at the other two outlets. The results showed a temporal variation of conductivity for the Ormes outfall, from 1465 μS cm⁻¹ in dry weather flow to 650 μS cm⁻¹ in wet weather flow, and also a spatial variation in wet weather flow, from 650 μS cm⁻¹ at the Ormes outfall to 281 μS cm⁻¹ at the l’Egouttier outfall. The ultimate BOD (BOD28) showed a significant decrease at the La Corne outfall, from 181 mg L⁻¹ in wet weather flow to 95 mg L⁻¹ in dry weather flow, because of the nutrient load transported by the runoff.

Keywords: BOD, COD, the Loire River, urban hydrology, urban dry and wet weather discharges, macronutrients.

21 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

Authors: M. A. Okezue, K. L. Clase, S. R. Byrn

Abstract:

The requirement to maintain data integrity in laboratory operations is critical for regulatory compliance, and automation of procedures reduces the incidence of human error. Quality control laboratories located in low-income economies may face barriers in their attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population, and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: the standardization of the 0.1 M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets, and for each step in the process, the relevant formulae were input into two spreadsheets to automate the calculations. Further checks were created within the automated system to ensure the validity of replicate analyses in the titrimetric procedures. Validations were conducted using five data sets of manually computed assay results, and the acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at a 95% confidence interval) were obtained from Student’s t-test evaluations of the mean values for the manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and the principles of data integrity were enhanced by the use of the validated spreadsheet calculators in the titrimetric evaluation of ZnSO4 tablets. Human errors in calculations were minimized when the procedures were automated in quality control laboratories, and the assay procedure for the formulation was achieved in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.

Keywords: Data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets.
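A minimal sketch of the two automated calculations follows; the function and variable names are hypothetical, the example figures are chosen only so the arithmetic is easy to check (molar mass of zinc 65.38 g/mol, anhydrous ZnSO4 161.47 g/mol), and USP-specific factors such as blank corrections and water-of-hydration adjustments are deliberately omitted.

```python
def edta_molarity(std_mass_g, std_molar_mass, titre_ml):
    """Standardization step: titrant molarity from a primary standard.

    Assumes a 1:1 EDTA:metal complexation stoichiometry (hypothetical setup):
    moles of standard consumed divided by litres of titrant delivered.
    """
    return (std_mass_g / std_molar_mass) / (titre_ml / 1000.0)

def assay_percent(titre_ml, molarity, analyte_mw, expected_mg):
    """Complexometric assay: analyte found in the aliquot [mg],
    expressed as a percentage of the expected (label-claim) amount."""
    mg_found = titre_ml / 1000.0 * molarity * analyte_mw * 1000.0
    return 100.0 * mg_found / expected_mg

# 0.6538 g of zinc standard in 100.0 mL of titre -> 0.1 M EDTA.
M = edta_molarity(std_mass_g=0.6538, std_molar_mass=65.38, titre_ml=100.0)
print(f"EDTA molarity: {M:.4f} M")
# 12.4 mL x 0.1 M x 161.47 g/mol = 200.2 mg found against 200 mg expected.
print(f"Assay: {assay_percent(12.4, M, 161.47, 200.0):.1f}% of label claim")
```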

20 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting

Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas

Abstract:

The paper presents the results and industrial applications of production setup period estimation based on industrial data from the field of polymer cutting. The literature on polymer cutting is very limited in terms of the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the participating industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation and painting industry branches. Typically, 20% of these parts are new work, which means that almost the entire product portfolio is replaced every five years in this low-series manufacturing environment. Consequently, a flexible production system is required, in which the estimation of the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters were studied and grouped to create an adequate training information set for an artificial neural network as a basis for the estimation of the individual setup periods. The first group collects product information, such as the product name and the number of items. The second group contains material data, such as material type and colour. The third group collects surface quality and tolerance information, including the finest surface and the tightest (or narrowest) tolerance. The fourth group contains setup data, such as machine type and work shift. One source of these parameters is the manufacturing execution system (MES), but some data were also collected from computer-aided design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partner’s estimations were previously based. The artificial neural network model was trained on several thousand real industrial records. The mean estimation accuracy of the setup period lengths was improved by 30%, and at the same time the deviation of the prognosis was improved by 50%. Furthermore, the influence of the mentioned parameter groups with respect to the manufacturing order was also investigated. The paper also highlights the experiences of the manufacturing introduction and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week, more than 100 real industrial setup events take place and the related data are collected.

Keywords: Artificial neural network, low series manufacturing, polymer cutting, setup period estimation.
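A minimal sketch of such an estimator using scikit-learn's MLPRegressor follows; the encoded feature columns mirror the four parameter groups of the abstract, but the synthetic data and the network shape are assumptions, as the abstract does not specify the architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns (assumed encoding): items, material code, finest surface,
# tightest tolerance, number of tools, work shift.
X = rng.normal(size=(2000, 6))                   # stand-in for encoded MES/CAD data
y = 20 + 5 * X[:, 4] + 2 * X[:, 2] + rng.normal(scale=2, size=2000)  # setup minutes

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8),
                                   max_iter=2000, random_state=0))
model.fit(X[:1500], y[:1500])                    # train on the first 1500 events
print("hold-out R^2:", model.score(X[1500:], y[1500:]))
```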

19 Sphere in Cube Grid Approach to Modelling of Shale Gas Production Using Non-Linear Flow Mechanisms

Authors: Dhruvit S. Berawala, Jann R. Ursin, Obrad Slijepcevic

Abstract:

Shale gas is one of the most rapidly growing forms of natural gas. Unconventional natural gas deposits are difficult to characterize overall, but in general they are often lower in resource concentration and dispersed over large areas. Moreover, gas is densely packed into the matrix through adsorption, which accounts for a large volume of the gas reserves. Gas production from tight shale deposits is made possible by extensive and deep well fracturing, which contacts large fractions of the formation. The conventional reservoir modelling and production forecasting methods, which rely on fluid-flow processes dominated by viscous forces, have proved to be very pessimistic and inaccurate. This paper presents a new approach to forecasting shale gas production through detailed modeling of gas desorption, diffusion and non-linear flow mechanisms, in combination with a statistical representation of these processes. The model is represented by a cube as a porous medium in which free gas is present, and a sphere inside it (SiC: Sphere in Cube model) where gas is adsorbed onto the kerogen or organic matter. Further, the sphere is considered to consist of many layers of adsorbed gas in an onion-like structure. With pressure decline, the gas desorbs first from the outermost layer of the sphere, causing a decrease in its molecular concentration. The newly available surface area and the change in concentration trigger the diffusion of gas from the kerogen. The process continues until all the gas present internally has diffused out of the kerogen, adsorbed onto the available surface area and then desorbed into the nanopores and micro-fractures in the cube. Each SiC idealizes a gas pathway and is characterized by the sphere diameter and the length of the cube: the diameter allows gas storage, diffusion and desorption to be modeled, while the cube length accounts for the flow pathway in nanopores and micro-fractures. Many of these representative but general cells of the reservoir are put together and linked to a well or hydraulic fracture. The paper quantitatively describes these processes and clarifies the geological conditions under which successful shale gas production can be expected. A numerical model has been derived and implemented in FORTRAN to develop a simulator for shale gas production, with the spheres treated as a source term in each of the grid blocks. By applying SiC to field data, we demonstrate that the model provides an effective way to quickly assess gas production rates from shale formations. We also examine the effect of the model input properties on gas production.

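The pressure dependence of the adsorbed gas that drives the desorption step is conventionally described by a Langmuir isotherm (the standard shale-gas storage relation; the authors' layered-sphere treatment adds desorption and diffusion kinetics on top of such storage behavior):

```latex
V(p) = V_L \, \frac{p}{p + p_L}
```

where V_L is the Langmuir volume (the maximum adsorbed gas content) and p_L the Langmuir pressure at which half of V_L is adsorbed; as the reservoir pressure p declines, the difference between the initial and current V(p) is released from the kerogen surface.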

18 Sustainability Impact Assessment of Construction Ecology to Engineering Systems and Climate Change

Authors: Moustafa Osman Mohammed

Abstract:

The construction industry, as one of the main contributors to the depletion of natural resources, influences climate change. This paper discusses the incremental and evolutionary development of the proposed models for optimizing a life-cycle analysis into an explicit strategy for evaluation systems. The main categories inevitably introduce uncertainties; the approach therefore adopts a composite structure model (CSM), in the manner of environmental management systems (EMSs), as a practical science for evaluating small and medium-sized enterprises (SMEs). The model simplifies complex systems so as to reflect how the inputs, outputs and outcomes of natural systems influence the “framework measures”, and gives a maximum likelihood estimate of how the elements are simulated over the composite structure. Traditional modeling knowledge is based on physical dynamic and static patterns of the parameters that influence the environment. The paper unifies methods to demonstrate how construction systems ecology is interrelated from a management perspective, reflecting the effects of engineering systems on ecology as ultimately unified technologies whose range extends well beyond the impact of construction itself, for example to energy systems. Sustainability broadens the socioeconomic parameters into a practical science that meets recovery performance, while engineering reflects the generic control of protective systems. When the environmental model is employed properly, management decision processes in governments or corporations can address policy for accomplishing strategic plans precisely. The management and engineering focus is on autocatalytic control, as in a closed cellular system, to naturally balance anthropogenic insertions or aggregated structural systems toward equilibrium as steady, stable conditions. Thereby, construction systems ecology incorporates an engineering and management scheme, as a midpoint stage between biotic and abiotic components, to predict the impact of constructions. The resulting theory of environmental obligation suggests procedures, methods or techniques that are realized in the sustainability impact of construction system ecology (SICSE), ultimately as a relative mitigation measure for deviation control.

Keywords: Sustainability, construction ecology, composite structure model, design structure matrix, environmental impact assessment, life cycle analysis, climate change.

17 A Grid Synchronization Method Based on Adaptive Notch Filter for SPV System with Modified MPPT

Authors: Priyanka Chaudhary, M. Rizwan

Abstract:

This paper presents a grid synchronization technique based on an adaptive notch filter for an SPV (Solar Photovoltaic) system, along with MPPT (Maximum Power Point Tracking) techniques. An efficient grid synchronization technique offers proficient detection of grid-signal components such as phase and frequency, and also acts as a barrier against harmonics and other disturbances in the grid signal. The grid synchronization technique provides a reference phase signal, synchronized with the grid voltage, to bring the system into compliance with grid codes and power quality standards; hence, the grid synchronization unit plays an important role in grid-connected SPV systems. Because the output of a PV array fluctuates with meteorological parameters such as irradiance, temperature and wind, MPPT control is required to track the maximum power point of the PV array and thereby maintain a constant DC voltage at the VSC (Voltage Source Converter) input. In this work, a variable-step-size P&O (Perturb and Observe) MPPT technique with a DC/DC boost converter is used at the first stage of the system. The algorithm divides the dPpv/dVpv curve of the PV panel into three separate zones, i.e., zone 0, zone 1 and zone 2: a fine tracking step size is used in zone 0, while zones 1 and 2 require a large step size in order to obtain high tracking speed. Further, an adaptive notch filter (ANF) based control technique is proposed for the VSC in the PV generation system. The ANF approach is used to synchronize the interfaced PV system with the grid, maintaining the amplitude, phase and frequency parameters while improving power quality. The technique offers compensation of harmonic currents and reactive power with both linear and nonlinear loads. To maintain a constant DC-link voltage, a PI controller is also implemented and presented in this paper. The complete system has been designed, developed and simulated using the SimPowerSystems and Simulink toolboxes of MATLAB. The performance analysis of the three-phase grid-connected solar photovoltaic system has been carried out on the basis of various parameters, such as PV output power, PV voltage, PV current, DC-link voltage, PCC (Point of Common Coupling) voltage, grid voltage, grid current, voltage source converter current, and power supplied by the voltage source converter. The results obtained from the proposed system are found satisfactory.
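A minimal sketch of the three-zone variable-step P&O idea is given below; the zone boundary and the fine/coarse step values are assumed for illustration, not the tuned parameters of the paper.

    def variable_step_po(v_pv, i_pv, v_prev, p_prev,
                         step_fine=0.1, step_coarse=1.0, zone0_band=0.5):
        """One iteration of a three-zone variable-step Perturb & Observe MPPT.
        The zone boundary (zone0_band, in W/V) and both step sizes are
        assumed values, not the paper's tuned parameters."""
        p_pv = v_pv * i_pv
        dp, dv = p_pv - p_prev, v_pv - v_prev
        slope = dp / dv if abs(dv) > 1e-9 else 0.0   # dPpv/dVpv estimate

        # Zone 0: near the MPP (flat slope) -> fine step, low oscillation.
        # Zones 1 and 2: steep slope on either side -> coarse step for speed.
        step = step_fine if abs(slope) < zone0_band else step_coarse

        # Classic P&O direction rule: keep perturbing the same way if the
        # last perturbation increased power, otherwise reverse.
        direction = 1.0 if (dp >= 0) == (dv >= 0) else -1.0
        return v_pv + direction * step, p_pv          # new V reference, power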

Keywords: Solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique.

16 A New Model to Perform Preliminary Evaluations of Complex Systems for the Production of Energy for Buildings: Case Study

Authors: Roberto de Lieto Vollaro, Emanuele de Lieto Vollaro, Gianluca Coltrinari

Abstract:

The building sector is responsible, in many industrialized countries, for about 40% of total energy requirements, so it seems necessary to devote some effort to this area in order to achieve a significant reduction in energy consumption and greenhouse gas emissions. The paper presents a study aiming to provide a design methodology able to identify the best configuration of the building/plant system from a technical, economic and environmental point of view. Normally, the classical approach involves analysing a building's energy loads under steady-state conditions and subsequently selecting measures aimed at improving the energy performance, based on the previous experience of the architects and engineers in the design team. Instead, the proposed approach uses a sequence of two well-known, scientifically validated calculation methods (TRNSYS and RETScreen) that allow quite a detailed feasibility analysis. To assess the validity of the calculation model, an existing historical building in Central Italy, which will be the object of restoration and preservative redevelopment, was selected as a case study. The building consists of a basement and three floors, with a total floor area of about 3,000 square meters. The first step was the determination of the heating and cooling energy loads of the building in a dynamic regime by means of TRNSYS, which allows simulating the real energy needs of the building as a function of its use. Traditional methodologies, based as they are on steady-state conditions, cannot faithfully reproduce the effects of varying climatic conditions or of the inertial properties of the structure; with this model it is possible to obtain quite accurate and reliable results that allow identifying effective building-HVAC system combinations. The second step consisted of using the output data thus obtained as input to RETScreen, which enables comparing different system configurations from the energy, environmental and financial points of view, with an analysis of investment and of operation and maintenance costs, thus allowing determination of the economic benefit of possible interventions. The classical methodology often leads to the choice of conventional plant systems, while our calculation model provides a financial-economic assessment for innovative energy systems with low environmental impact. Computational analysis can help in the design phase, particularly in the case of complex structures with centralized plant systems, by comparing the data returned by the calculation model for different design options.
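As an illustration of the second, financial step, the sketch below ranks hypothetical building/plant configurations by the net present value of their energy savings; the configuration names, costs, energy price and discount rate are invented for the example, and RETScreen's own models are not reproduced.

    def npv(cash_flows, rate):
        """Net present value of yearly cash flows, year 0 first."""
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

    def rank_configurations(configs, energy_price=0.22, rate=0.04, years=20):
        """Rank building/plant options by NPV of their energy savings.
        configs maps a name to (investment, annual kWh saved, annual O&M);
        all figures here are hypothetical."""
        scores = {}
        for name, (capex, kwh_saved, o_and_m) in configs.items():
            annual = kwh_saved * energy_price - o_and_m
            scores[name] = npv([-capex] + [annual] * years, rate)
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Hypothetical alternatives for a case-study building.
    print(rank_configurations({
        "condensing boiler + chiller": (80_000, 45_000, 1_500),
        "ground-source heat pump": (150_000, 95_000, 2_000),
    }))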

Keywords: Energy, Buildings, Systems, Evaluation.

15 Power and Delay Optimized Graph Representation for Combinational Logic Circuits

Authors: Padmanabhan Balasubramanian, Karthik Anantha

Abstract:

Structural representation and technology mapping of a Boolean function is an important problem in the design of non-regenerative digital logic circuits (also called combinational logic circuits). Library-aware function manipulation offers a solution to this problem. Compact multi-level representations of binary networks based on simple circuit structures, such as AND-Inverter Graphs (AIG) [1] [5], NAND Graphs, OR-Inverter Graphs (OIG), AND-OR Graphs (AOG), AND-OR-Inverter Graphs (AOIG), AND-XOR-Inverter Graphs and Reduced Boolean Circuits [8], exist in the literature. In this work, we discuss a novel and efficient graph realization for combinational logic circuits, represented using a NAND-NOR-Inverter Graph (NNIG), which is composed of only two-input NAND (NAND2), NOR (NOR2) and inverter (INV) cells. The networks are constructed on the basis of irredundant disjunctive and conjunctive normal forms, after factoring, comprising terms with minimum support. Construction of an NNIG for a non-regenerative function in normal form is straightforward, whereas for the complementary phase it is developed by considering a virtual instance of the function. The choice of the best NNIG for a given function is based on the literal count, cell count and DAG node count of the implementation at the technology-independent stage; in case of a tie, the final decision is made after extracting the physical design parameters. We have considered the AIG representation for the reduced disjunctive normal form and the best of OIG/AOG/AOIG for the minimized conjunctive normal forms. This is necessitated by the nature of certain functions, such as Achilles-heel functions. NNIGs are found to exhibit a 3.97% lower node count than AIGs and OIG/AOG/AOIGs, and consume 23.74% and 10.79% fewer library cells than AIGs and OIG/AOG/AOIGs, respectively, for the various samples considered. We compare the power efficiency and delay improvement achieved by optimal NNIGs over minimal AIGs and OIG/AOG/AOIGs for various case studies. In comparison with functionally equivalent, irredundant and compact AIGs, NNIGs report mean savings in power and delay of 43.71% and 25.85%, respectively, after technology mapping with a 0.35 micron TSMC CMOS process. In comparison with OIG/AOG/AOIGs, NNIGs demonstrate average savings in power and delay of 47.51% and 24.83%. With respect to the device count needed for implementation in static CMOS logic style, NNIGs utilize 37.85% and 33.95% fewer transistors than their AIG and OIG/AOG/AOIG counterparts, respectively.
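A minimal sketch of an NNIG-style data structure is shown below, assuming a simple DAG over two-input NAND, two-input NOR and inverter cells; the construction and optimization flow described in the abstract is not reproduced.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Node:
        kind: str            # 'VAR', 'INV', 'NAND2' or 'NOR2'
        fanins: tuple = ()   # one child for INV, two for NAND2/NOR2
        name: str = ""       # variable name for 'VAR' leaves

    def evaluate(node, assignment):
        """Evaluate the Boolean function rooted at node."""
        if node.kind == "VAR":
            return assignment[node.name]
        vals = [evaluate(f, assignment) for f in node.fanins]
        if node.kind == "INV":
            return not vals[0]
        if node.kind == "NAND2":
            return not (vals[0] and vals[1])
        if node.kind == "NOR2":
            return not (vals[0] or vals[1])
        raise ValueError(node.kind)

    # Example: f = a AND b realized with NNIG cells as INV(NAND2(a, b)).
    a, b = Node("VAR", name="a"), Node("VAR", name="b")
    f = Node("INV", fanins=(Node("NAND2", fanins=(a, b)),))
    assert evaluate(f, {"a": True, "b": True}) is True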

Keywords: AND-Inverter Graph, OR-Inverter Graph, Directed Acyclic Graph, Low power design, Delay optimization.

14 The Development and Testing of a Small Scale Dry Electrostatic Precipitator for the Removal of Particulate Matter

Authors: Derek Wardle, Tarik Al-Shemmeri, Neil Packer

Abstract:

This paper presents a small tube/wire type electrostatic precipitator (ESP). In the ESP's present form, the particle charging and collecting voltages and the airflow rates were individually varied throughout 200 ambient-temperature test runs, ranging from 10 to 30 kV in increments of 5 kV and from 0.5 m/s to 1.5 m/s, respectively. It was repeatedly observed that, at input air velocities of between 0.5 and 0.9 m/s and voltage settings of 20 kV to 30 kV, the collection efficiency remained above 95%. The outcomes of preliminary tests at combustion flue temperatures are, at present, inconclusive, although indications are that there is little or no drop in comparable performance under ideal test conditions. A limited set of similar tests was carried out in which the collecting electrode was grounded, having been disconnected from the static generator; the collection efficiency fell significantly, and for that reason this approach was not pursued further. The collection efficiencies during the ambient-temperature tests were determined by a mass balance between incoming and outgoing dry PM. The efficiencies of the combustion-temperature runs were determined by analysing the difference in opacity of the flue gas at inlet and outlet compared to a reference light source. In addition, an array of Leit tabs (carbon-coated, electrically conductive adhesive discs) was placed at the inlet and outlet for a number of four-day continuous ambient-temperature runs. Analysis of the discs' contamination, carried out using scanning electron microscopy and the ImageJ computer software, confirmed collection efficiencies of over 99%, giving unequivocal support to all the previous tests; the average efficiency for these runs was 99.409%. Emissions collected from a woody biomass combustion unit, classified to a diameter of 100 µm, were used in all ambient-temperature test runs apart from two, which collected airborne dust from within the laboratory. Sawdust and wood pellets were chosen for the laboratory and field combustion trials. Video recordings were made of three ambient-temperature test runs in which the smoke from a wood smoke generator was drawn through the precipitator. Although these runs were visual indicators only, with no objective other than demonstration, they provided a strong argument for the device's claimed efficiency, as no emissions were visible at the exit when the unit was energised. The theoretical performance of ESPs, when applied to the geometry and configuration of the tested model, was compared to the actual performance and was shown to be in good agreement with it.
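For the theoretical performance comparison, a common choice is the classical Deutsch-Anderson model; the paper does not name the model it used, so the sketch below is an assumption, and the numbers are illustrative only.

    import math

    def deutsch_anderson_efficiency(w, area, flow):
        """Fractional collection efficiency, eta = 1 - exp(-w * A / Q).
        w: effective particle migration velocity (m/s)
        area: collecting electrode surface area (m^2)
        flow: volumetric gas flow rate (m^3/s)"""
        return 1.0 - math.exp(-w * area / flow)

    # Illustrative numbers only: 0.1 m/s migration velocity, 1.2 m^2 of
    # collecting surface and 0.02 m^3/s of flow give about 99.75%.
    print(f"{deutsch_anderson_efficiency(0.1, 1.2, 0.02):.3%}")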

Keywords: Electrostatic precipitators, air quality, particulate emissions, electron microscopy, ImageJ.

13 A World Map of Seabed Sediment Based on 50 Years of Knowledge

Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès

Abstract:

Production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches for aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This original approach had already been initiated a century earlier, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments along the French coasts, and then sediment maps of the continental shelves of Europe and North America. The current world sediment map presented here was initiated from UNESCO's general map of the deep ocean floor. This map was adapted using a unique sediment classification to present all types of sediments, from beaches to the deep seabed and from glacial deposits to tropical sediments. In order to allow good visualization and to suit the different applications, only the granularity of the sediments is represented. Published seabed maps are studied; if they are of interest, the nature of the seabed is extracted from them, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large deep-ocean hydrographic surveys, which allow very high-quality mapping of areas that until then had been represented as homogeneous. The third and principal source of data comes from the integration of regional maps produced specifically for this project. These regional maps are produced using all the bathymetric and sedimentary data of a region; this step makes it possible to produce a regional synthesis map, with generalizations applied where the data are over-precise. Eighty-six regional maps of the Atlantic Ocean, the Mediterranean Sea and the Indian Ocean have been produced and integrated into the world sedimentary map. This work is ongoing, and a new digital version is issued every two years with the integration of new maps. This article describes the choices made in terms of sediment classification, the scale of the source data, and the zonation of the variability of the quality. This map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million point and polygon data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress made in seabed characterization over the last decades. Thus, the arrival of new seafloor classification systems has improved the recent seabed maps, and the compilation of these new maps with those previously published allows a gradual enrichment of the world sedimentary map. However, much work remains to enhance some regions, which are still based on data acquired more than half a century ago.
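As an illustration of a granularity-only classification, the sketch below maps a median grain size to a class using Wentworth-style boundaries; the actual classification adopted for the world map is not named in the abstract and may differ.

    # Wentworth-style grain-size boundaries (upper limit in mm, class name).
    GRADES = [
        (0.004, "clay"),
        (0.063, "silt"),
        (2.0, "sand"),
        (64.0, "gravel"),
        (256.0, "cobble"),
    ]

    def classify_grain_size(d_mm):
        """Map a median grain size in millimetres to a granularity class."""
        for upper, name in GRADES:
            if d_mm < upper:
                return name
        return "boulder"

    assert classify_grain_size(0.2) == "sand"
    assert classify_grain_size(10.0) == "gravel"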

Keywords: Marine sedimentology, seabed map, sediment classification, World Ocean.

12 Case Study Analysis of 2017 European Railway Traffic Management Incident: The Application of System for Investigation of Railway Interfaces Methodology

Authors: Sanjeev Kumar Appicharla

Abstract:

This paper presents the results of the modelling and analysis of a European Railway Traffic Management System (ERTMS) safety-critical incident on the Cambrian Railway in the UK, using Report RAIB 17/2019 as the primary input, in order to raise awareness of biases in the systems engineering process. The RAIB, the UK's independent accident investigator, published Report RAIB 17/2019 giving the details of its investigation of the focal event in the form of the immediate cause, causal factors and underlying factors, together with recommendations to prevent a repeat of the safety-critical incident on the Cambrian Line. The System for Investigation of Railway Interfaces (SIRI) is the methodology used to model and analyse the incident. The SIRI methodology uses the Swiss Cheese Model to model the incident and identifies latent failure conditions (potentially less-than-adequate conditions) by means of the Management Oversight and Risk Tree (MORT) technique. The benefits of the SIRI methodology are threefold. First, it incorporates the "heuristics and biases" approach into the Management Oversight and Risk Tree technique to identify systematic errors: civil engineering and programme management railway professionals are aware of the role "optimism bias" plays in programme cost overruns, and of the bow-tie (fault and event tree) model-based safety risk modelling technique, but the role of systematic errors due to heuristics and biases is not yet widely appreciated; this overcomes the omission of human and organisational factors from accident analysis. Second, the scope of the investigation includes all levels of the socio-technical system (government, regulators, railway safety bodies, duty holders, signalling firms, transport planners and front-line staff), so that lessons are learned at the decision-making and implementation levels as well. Third, the author's past accident case studies are supplemented with evidence drawn from practitioners' and academic researchers' publications, in order to discuss the role of systems thinking in improving decision-making and risk management processes and practices in the ISO/IEC 15288 systems engineering standard and in industrial contexts such as GB railways and Artificial Intelligence (AI).

Keywords: Accident analysis, AI algorithm internal audit, bounded rationality, Byzantine failures, heuristics and biases approach.

11 Numerical and Experimental Investigation of Air Distribution System of Larder Type Refrigerator

Authors: Funda Erdem Şahnali, Ş. Özgür Atayılmaz, Tolga N. Aynur

Abstract:

Almost all domestic refrigerators operate on the principle of the vapor-compression refrigeration cycle, and heat removal from the refrigerator cabinet is done via one of two methods: natural convection or forced convection. In this study, the airflow and temperature distributions inside a 375 L no-frost larder cabinet, in which cooling is provided by forced convection, are evaluated both experimentally and numerically. Airflow rate, compressor capacity and temperature distribution in the cooling chamber are known to be among the most important factors affecting the cooling performance and energy consumption of a refrigerator. The objective of this study is to evaluate the original temperature distribution in the larder cabinet and to investigate system optimizations that could provide a more uniform temperature distribution throughout the refrigerator domain. Flow visualization and airflow velocity measurements inside the original refrigerator are performed via Stereoscopic Particle Image Velocimetry (SPIV). In addition, the airflow and temperature distributions are investigated numerically with Ansys Fluent. To study the heat transfer inside the refrigerator, forced-convection theory is applied to a closed rectangular cavity representing the refrigerating compartment: the cavity volume is discretized into finite volume elements and solved computationally with the appropriate momentum and energy equations (the Navier-Stokes equations). The 3D model is analyzed as transient, with the k-ε turbulence model and SIMPLE pressure-velocity coupling for the turbulent flow. The results obtained from the 3D numerical simulations are in quite good agreement with the experimental airflow measurements obtained using the SPIV technique. After Computational Fluid Dynamics (CFD) analysis of the baseline case, the effects of three parameters, compressor capacity, fan rotational speed and shelf type (glass or wire), on energy consumption, pull-down time and the temperature distribution in the cabinet are studied. For each case, the energy consumption is calculated from experimental results. After the analysis, the main parameters affecting the temperature distribution inside the cabinet and the energy consumption are determined from the CFD simulations, and the simulation results are supplied to a Design of Experiments (DOE) study as input data for optimization. The best configuration, with minimum energy consumption and minimum temperature difference between the shelves inside the cabinet, is determined.
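A minimal sketch of the DOE stage is given below as a full-factorial design over the three parameters named in the abstract; the factor levels are assumed for illustration, and run_cfd_case is a hypothetical hook standing in for the CFD runs that supply the responses.

    from itertools import product

    # Assumed levels for the three factors named in the abstract; the real
    # test matrix is not given in the paper.
    levels = {
        "compressor_capacity_W": [120, 150, 180],
        "fan_speed_rpm": [1200, 1800, 2400],
        "shelf_type": ["glass", "wire"],
    }

    def run_cfd_case(case):
        """Hypothetical hook: one CFD run returning the responses of
        interest (energy consumption, pull-down time, shelf-to-shelf dT)."""
        raise NotImplementedError("attach to the CFD solver")

    # Full-factorial design: every combination of factor levels.
    design = [dict(zip(levels, combo)) for combo in product(*levels.values())]
    print(len(design), "cases")   # 3 x 3 x 2 = 18 runs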

Keywords: Air distribution, CFD, DOE, energy consumption, larder cabinet, refrigeration, uniform temperature.

10 Humic Acid and Azadirachtin Derivatives for the Management of Crop Pests

Authors: R. S. Giraddi, C. M. Poleshi

Abstract:

Organic cultivation of crops is gaining importance as consumer awareness of pesticide-residue-free foodstuffs increases globally. This is also because of the high costs of synthetic fertilizers and pesticides, which make conventional farming non-remunerative. In India, organic manures (such as vermicompost) are an important input in organic agriculture. Though vermicompost obtained through earthworm- and microbe-mediated processes is known to contain most of the crop nutrients, they are present in small amounts, necessitating enrichment so that crop nourishment is complete. Another characteristic of organic manures is that pest infestations are kept in check by the induced resistance put up by the crop plants. In the present investigation, de-oiled neem cake containing azadirachtin, copper ore tailings (COT) as a source of micro-nutrients, and microbial consortia were added to enrich the vermicompost. Neem cake is a by-product obtained during oil extraction from neem plant seeds. Three enriched vermicompost blends were prepared using vermicompost (at 70, 65 and 60%), de-oiled neem cake (25, 30 and 35%), microbial consortia and COT wastes (5%). The enriched vermicompost was thoroughly mixed, moistened (25±5%), packed and incubated for 15 days at room temperature. In the crop response studies, field trials on chili (Capsicum annuum var. longum) and soybean (Glycine max cv. JS 335) were conducted during Kharif 2015 at the Main Agricultural Research Station, UAS, Dharwad, Karnataka, India. The vermicompost blend enriched with neem cake (known to possess higher amounts of nutrients) and plain vermicompost were applied to the crops at two dosages and at two intervals of the crop cycle (at sowing and 30 days after sowing), as per the treatment plan, along with 50% of the recommended dose of fertilizer (RDF). Ten plants selected randomly in each plot were studied for pest density and plant damage. At maturity, the crops were harvested, the yields were recorded for each treatment, and the data were analyzed using appropriate statistical tools and procedures. In both chili and soybean, crop nourishment with neem-enriched vermicompost reduced insect density and plant damage significantly compared to the other treatments. These treatments registered as much yield (16.7 to 19.9 q/ha) as that realized with conventional chemical control (18.2 q/ha) in soybean, while 72 to 77 q/ha of green chili was harvested in the same treatments, comparable to the chemical control (74 q/ha). The yield superiority of the treatments was in the order: neem-enriched vermicompost > conventional chemical control > neem cake > vermicompost > untreated control. The significant features of the result are that it reduces the use of inorganic fertilizers by 50% and synthetic chemical insecticides by 100%.

Keywords: Humic acid, azadirachtin, vermicompost, insect-pest.
