Search results for: Columns
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 357

57 Methylene Blue Removal Using NiO nanoparticles-Sand Adsorption Packed Bed

Authors: Nedal N. Marei, Nashaat Nassar

Abstract:

Many treatment techniques have been used to remove soluble pollutants, such as dyes and metal ions, from wastewater; these pollutants are found in large amounts in the effluents of the textile and tannery industries. The effluents from these industries are complex, containing a wide variety of dyes and other contaminants, such as dispersants, acids, bases, salts, detergents, humectants, oxidants, and others. Treatment techniques can be divided into physical, chemical, and biological methods. Adsorption has been developed as an efficient method for the removal of heavy metals from contaminated water and soil, and it is now recognized as an effective method for the removal of both organic and inorganic pollutants from wastewater. Nanosized materials are new functional materials that offer high surface area and have emerged as effective adsorbents. Nano alumina is one of the most important ceramic materials, widely used as an electrical insulator, presenting exceptionally high resistance to chemical agents, and giving excellent performance as a catalyst for many chemical reactions, in microelectronics, in membrane applications, and in water and wastewater treatment. In this study, methylene blue (MB) dye was used as a model textile dye to prepare a synthetic MB wastewater. NiO nanoparticles were added in small percentages to sand packed-bed adsorption columns to remove the MB from the synthetic textile wastewater. Moreover, different parameters were evaluated: the flow rate of the synthetic wastewater, pH, bed height, and the percentage of NiO relative to sand in the packed material. Different mathematical models were employed to find the model that best describes the experimental data and helps to analyze the mechanism of MB adsorption. This study provides a good understanding of dye adsorption using metal oxide nanoparticles in a classical sand bed.
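
The abstract does not name which column models were fitted; as a hedged illustration only, packed-bed dye breakthrough data of this kind are often described with the Thomas model. The sketch below uses hypothetical breakthrough data and assumed bed parameters (m, Q, C0), not the authors' measurements.

```python
# Minimal sketch (not the authors' code): fitting the Thomas breakthrough model
# to a hypothetical C/C0 vs. time curve measured at the column outlet.
import numpy as np
from scipy.optimize import curve_fit

def thomas(t, k_th, q0, m=5.0, Q=0.01, C0=10.0):
    """Thomas model: C/C0 = 1 / (1 + exp(k_th*q0*m/Q - k_th*C0*t)).
    k_th [L/(mg*min)], q0 [mg/g], m = adsorbent mass [g] (assumed),
    Q = flow rate [L/min] (assumed), C0 = inlet concentration [mg/L] (assumed)."""
    return 1.0 / (1.0 + np.exp(k_th * q0 * m / Q - k_th * C0 * t))

# Hypothetical breakthrough data (time in min, outlet C/C0)
t_data = np.array([0, 30, 60, 90, 120, 150, 180, 240])
c_data = np.array([0.01, 0.05, 0.15, 0.35, 0.60, 0.80, 0.92, 0.99])

popt, _ = curve_fit(thomas, t_data, c_data, p0=[0.003, 2.0])
print("k_Thomas = %.4g L/(mg*min), q0 = %.4g mg/g" % tuple(popt))
```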

Keywords: adsorption, column, nanoparticles, methylene blue

Procedia PDF Downloads 236
56 Trinary Affinity—Mathematic Verification and Application (1): Construction of Formulas for the Composite and Prime Numbers

Authors: Liang Ming Zhong, Yu Zhong, Wen Zhong, Fei Fei Yin

Abstract:

Trinary affinity is a description of existence: every object exists as it is known and spoken of, in a system of 2 differences (denoted dif₁, dif₂) and 1 similarity (Sim), equivalently expressed as dif₁ / Sim / dif₂ and kn / 0 / tkn (kn = the known, tkn = the 'to be known', 0 = the zero point of knowing). They are mathematically verified and illustrated in this paper by arranging all integers into 3 columns, where each number exists as a difference in relation to another number as another difference, the 2 difs arbitrated by a third number as the Sim, resulting in a trinary affinity or trinity of 3 numbers, of which one is the known, another the 'to be known', and the third the zero (0) from which both the kn and tkn are measured and specified. Consequently, any number is horizontally specified either as 3n, or as '3n – 1' or '3n + 1', and vertically as 'Cn + c', so that any number occurs at the intersection of its X and Y axes and is represented by its X and Y coordinates, as any point on Earth's surface is by its latitude and longitude. Technically, i) primes are viewed and treated as progenitors, and composites as descending from them, forming families of composites, each capable of being measured and specified from its own zero, called in this paper the realistic zero (denoted 0r, as contrasted with the mathematic zero, 0m), which corresponds to the constant c and whose nature separates the composite and prime numbers; and ii) any number is considered as having a magnitude as well as a position, so that a number is verified as a prime first by referring to its descriptive formula and then by making sure that no composite number can possibly occur at its position, by dividing it with factors provided by the composite number formulas. The paper consists of 3 parts: 1) a brief explanation of the trinary affinity of things, 2) the 8 formulas that represent ALL the primes, and 3) families of composite numbers, each represented by a formula. A composite number family is described as 3n + f₁‧f₂. Since there are infinitely many composite number families, to verify the primality of a large probable prime, we have to divide it by several or many an f₁ from a range of composite number formulas, a procedure that is as laborious as it is the surest way to verify a great number's primality. (Thus, it is possible to substitute planned division for trial division.)
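
The following sketch is only one possible reading of the three-column arrangement described above, not the authors' own formulas: integers are classified by their column (3n, 3n – 1, or 3n + 1), and a candidate in the outer columns is tested by dividing it only by factors that are not multiples of 3, since products of such factors generate the composites occupying those columns.

```python
# Illustrative interpretation only (assumed scheme, not the paper's 8 formulas).
def column_of(x):
    r = x % 3
    return {0: "3n", 1: "3n + 1", 2: "3n - 1"}[r]

def is_prime_planned(x):
    if x < 2:
        return False
    if x in (2, 3):
        return True
    if x % 2 == 0 or x % 3 == 0:
        return False
    f = 5                      # candidate factors 5, 7, 11, 13, ... (skipping multiples of 3)
    while f * f <= x:
        if x % f == 0 or x % (f + 2) == 0:
            return False
        f += 6
    return True

for x in (97, 221, 331):
    print(x, column_of(x), "prime" if is_prime_planned(x) else "composite")
```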

Keywords: trinary affinity, difference, similarity, realistic zero

Procedia PDF Downloads 171
55 Vulnerability of Steel Moment-Frame Buildings with Pinned and, Alternatively, with Semi-Rigid Connections

Authors: Daniel Llanes, Alfredo Reyes, Sonia E. Ruiz, Federico Valenzuela Beltran

Abstract:

Steel frames have been used in building construction for more than one hundred years. Beams may be connected to columns using either stiffened or unstiffened angles at the top and bottom beam flanges. Designers often assume that these assemblies act as "pinned" connections for gravity loads and that the stiffened connections act as "fixed" connections for lateral loads. Observation of damage sustained by buildings during the 1994 Northridge earthquake indicated that, contrary to the intended behavior, in many cases, brittle fractures initiated within the connections at very low levels of plastic demand, and in some cases while the structures remained essentially elastic. Due to the damage observed in these buildings, other types of alternative connections have been proposed. According to research funded by the Federal Emergency Management Agency (FEMA), screwed connections perform better when subjected to cyclic loads, but at the same time, these connections have some degree of flexibility. For this reason, some researchers ventured into the study of semi-rigid connections. In the present study, three steel buildings consisting of regular frames are analyzed. Two types of connections are considered: pinned and semi-rigid connections. With the aim of estimating their structural capacity, a number of incremental dynamic analyses are performed using 3D structural models. The seismic ground motions were recorded at sites near Los Angeles, California, where the structures are assumed to be located. The vulnerability curves of the buildings are obtained in terms of maximum inter-story drifts. The vulnerability curves corresponding to the models with the two different types of connections are compared, and the implications for structural design and performance are discussed.

Keywords: steel frame buildings, vulnerability curves, semi-rigid connections, pinned connections

Procedia PDF Downloads 200
54 Numerical Analysis of Laminar Reflux Condensation from Gas-Vapour Mixtures in Vertical Parallel Plate Channels

Authors: Foad Hassaninejadafarahani, Scott Ormiston

Abstract:

Reflux condensation occurs in vertical channels and tubes when there is an upward core flow of vapor (or gas-vapor mixture) and a downward flow of the liquid film. An understanding of this condensation configuration is crucial in the design of reflux condensers and distillation columns, and in loss-of-coolant safety analyses of nuclear power plant steam generators. The unique feature of this flow is the upward flow of the vapor-gas mixture (or pure vapor) that retards the liquid flow via shear at the liquid-mixture interface. The present model solves the full, elliptic governing equations in both the film and the gas-vapor core flow. The computational mesh is non-orthogonal and adapts dynamically to the phase interface, thus producing a sharp and accurate interface. Shear forces and heat and mass transfer at the interface are accounted for fundamentally. This modeling is a significant step beyond current capabilities because it removes the limitations of previous reflux condensation models, which inherently cannot account for the detailed local balances of shear, mass, and heat transfer at the interface. Discretisation is based on a finite volume method and a co-located variable storage scheme. An in-house computer code was developed to implement the numerical solution scheme. Detailed results are presented for laminar reflux condensation from steam-air mixtures flowing in vertical parallel plate channels. The results include velocity and pressure profiles, as well as axial variations of film thickness, Nusselt number, and interface gas mass fraction.

Keywords: reflux, condensation, CFD two-phase, Nusselt number

Procedia PDF Downloads 335
53 Comparison of Various Landfill Ground Improvement Techniques for Redevelopment of Closed Landfills to Cater Transport Infrastructure

Authors: Michael D. Vinod, Hadi Khabbaz

Abstract:

Construction of infrastructure above or adjacent to landfills is becoming more common to capitalize on the limited space available within urban areas. However, development above landfills is a challenging task due to large voids, the presence of organic matter, the heterogeneous nature of waste, and the ambiguity surrounding landfill settlement prediction. Prior to the construction of infrastructure above landfills, ground improvement techniques are employed to improve the geotechnical properties of the landfill material. Although ground improvement has little impact on long-term biodegradation and creep-related landfill settlement, a variety of techniques have shown notable short-term success, together with methods for verifying their level of effectiveness. This paper provides geotechnical and landfill engineers with a guideline for the selection of landfill ground improvement techniques and their suitability to project-specific sites. Ground improvement methods assessed and compared in this paper include concrete injected columns (CIC), dynamic compaction, rapid impact compaction (RIC), preloading, high energy impact compaction (HEIC), vibro compaction, vibro replacement, chemical stabilization, and the inclusion of geosynthetics such as geocells. For each ground improvement technique, a summary of the existing theory, benefits, limitations, suitable modern monitoring methods, applicability to landfills, and supporting case studies is provided. The authors highlight the importance of implementing cost-effective monitoring techniques to allow observation and, where necessary, remediation of the subsidence effects associated with long-term landfill settlement. These ground improvement techniques are considered primarily for construction above closed landfills to cater for transport infrastructure loading.

Keywords: closed landfills, ground improvement, monitoring, settlement, transport infrastructure

Procedia PDF Downloads 183
52 In Vitro Antibacterial Activity of Selected Tanzania Medicinal Plants

Authors: Mhuji Kilonzo, Patrick Ndakidemi, Musa Chacha

Abstract:

Objective: To evaluate the antibacterial activity of four selected medicinal plants, namely Mystroxylon aethiopicum, Lonchocarpus capassa, Albizia anthelmentica, and Myrica salicifolia, used for the management of bacterial infections in Tanzania. Methods: The Minimum Inhibitory Concentration (MIC) of the plant extracts against the tested bacterial species was determined using the 96-well microdilution method. In this method, 50 μL of nutrient broth were loaded into each well, followed by 50 μL of extract (100 mg/mL) to make a final volume of 100 μL. Subsequently, 50 μL were transferred from the first row of wells to the second row, and the process was repeated down the columns to the last wells, from which 50 μL were discarded. Thereafter, 50 μL of the selected bacterial suspension were added to each well, making a final volume of 100 μL. The lowest concentration that showed no bacterial growth was taken as the MIC. Results: L. capassa leaf ethyl acetate extract exhibited antibacterial activity against Salmonella kisarawe and Salmonella typhi, with MIC values of 0.39 and 0.781 mg/mL, respectively. Likewise, L. capassa root bark ethyl acetate extracts inhibited the growth of S. typhi and E. coli, with MIC values of 0.39 and 0.781 mg/mL, respectively. The M. aethiopicum leaf and root bark chloroform extracts displayed antibacterial activity against S. kisarawe and S. typhi, respectively, with an MIC value of 0.781 mg/mL. The M. salicifolia stem bark ethyl acetate extract exhibited antibacterial activity against P. aeruginosa with an MIC value of 0.39 mg/mL, whereas the methanolic stem and root bark extracts of the same plant inhibited the growth of Proteus mirabilis and Klebsiella pneumoniae with an MIC value of 0.781 mg/mL. Conclusion: M. aethiopicum, L. capassa, A. anthelmentica, and M. salicifolia are potential sources of antibacterial agents. Further studies to establish the structures of the antibacterial compounds and to evaluate the active ingredients are recommended.
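
As a check on the dilution arithmetic implied by the protocol above (volumes as described; the worksheet itself is an assumption), the sketch below reproduces the two-fold concentration ladder; it lands exactly on the 0.781 and 0.39 mg/mL MIC values reported.

```python
# Serial two-fold microdilution ladder: 50 uL broth per well, 50 uL of
# 100 mg/mL extract into the first well, 50 uL carried down each step,
# then 50 uL of bacterial inoculum added to every well (halving once more).
stock = 100.0          # mg/mL extract loaded into the first well
conc = stock / 2.0     # after mixing with 50 uL broth in well 1
for well in range(1, 9):          # 8 wells per column of a 96-well plate
    final = conc / 2.0            # adding 50 uL inoculum halves it again
    print(f"well {well}: {final:.3f} mg/mL")
    conc = conc / 2.0             # 50 uL transferred into the next well's 50 uL broth
# -> well 6 gives 0.781 mg/mL and well 7 gives 0.391 mg/mL.
```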

Keywords: Albizia anthelmentica, Lonchocarpus capassa, Mystroxylon aethiopicum, Myrica salicifolia

Procedia PDF Downloads 192
51 3D Modeling Approach for Cultural Heritage Structures: The Case of Virgin of Loreto Chapel in Cusco, Peru

Authors: Rony Reátegui, Cesar Chácara, Benjamin Castañeda, Rafael Aguilar

Abstract:

Nowadays, heritage building information modeling (HBIM) is considered an efficient tool to represent and manage information on cultural heritage (CH). The basis of this tool relies on a 3D model generally obtained from a cloud-to-BIM procedure. There are different methods to create an HBIM model, ranging from manual modeling based on the point cloud to the automatic detection of shapes and the creation of objects. The selection of these methods depends on the desired level of development (LOD), level of information (LOI), and grade of generation (GOG), as well as on the availability of commercial software. This paper presents the 3D modeling of a stone masonry chapel using Recap Pro, Revit, and the Dynamo interface, following a three-step methodology. The first step consists of the manual modeling of simple structural (e.g., regular walls, columns, floors, wall openings) and architectural (e.g., cornices, moldings, and other minor details) elements using the point cloud as a reference. Then, Dynamo is used for generative modeling of complex structural elements such as vaults, infills, and domes. Finally, semantic information (e.g., materials, typology, state of conservation) and pathologies are added within the HBIM model as text parameters and generic model families, respectively. The application of this methodology allows the documentation of CH following a relatively simple process that ensures adequate LOD, LOI, and GOG levels. In addition, the easy implementation of the method, as well as the fact that only one BIM software package with its respective plugin is used for the scan-to-BIM modeling process, means that this methodology can be adopted by a larger number of users with intermediate knowledge and limited resources, since the BIM software used has a free student license.

Keywords: cloud-to-BIM, cultural heritage, generative modeling, HBIM, parametric modeling, Revit

Procedia PDF Downloads 117
50 Analysis of Dynamics Underlying the Observation Time Series by Using a Singular Spectrum Approach

Authors: O. Delage, H. Bencherif, T. Portafaix, A. Bourdier

Abstract:

The main purpose of time series analysis is to learn about the dynamics behind some time-ordered measurement data. Two approaches are used in the literature to gain better knowledge of the dynamics contained in observation data sequences. The first of these approaches concerns time series decomposition, which is an important analysis step allowing patterns and behaviors to be extracted as components, providing insight into the mechanisms producing the time series. In many cases, time series are short, noisy, and non-stationary. To provide components that are physically meaningful, methods such as Empirical Mode Decomposition (EMD), Empirical Wavelet Transform (EWT) or, more recently, Empirical Adaptive Wavelet Decomposition (EAWD) have been proposed. The second approach is to reconstruct the dynamics underlying the time series as a trajectory in state space by mapping the time series into a set of Rᵐ lag vectors using the method of delays (MOD). Takens proved that the trajectory obtained with the MOD technique is equivalent to the trajectory representing the dynamics behind the original time series. This work introduces singular spectrum decomposition (SSD), which is a new adaptive method for decomposing non-linear and non-stationary time series into narrow-banded components. This method takes its origin from singular spectrum analysis (SSA), a nonparametric spectral estimation method used for the analysis and prediction of time series. As the first step of SSD is to construct a trajectory matrix by embedding a one-dimensional time series into a set of lagged vectors, SSD can also be seen as a reconstruction method like MOD. We first give a brief overview of the existing decomposition methods (EMD, EWT, EAWD). The SSD method is then described in detail and applied to experimental time series of observations resulting from total column ozone measurements. The results obtained are compared with those provided by the previously mentioned decomposition methods. We also compare the reconstruction quality of the observed dynamics obtained from the SSD and MOD methods.
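
The embedding step shared by SSA, SSD and the method of delays is sketched below on a toy series; the window length L and the synthetic "ozone-like" signal are assumptions for illustration, not the authors' data or code.

```python
# Minimal sketch of the first SSA/SSD step: embed a 1-D series into a Hankel
# trajectory matrix and take its singular value decomposition; components are
# then grouped and reconstructed by diagonal averaging (not shown here).
import numpy as np

def trajectory_matrix(x, L):
    """Stack K = N - L + 1 lagged windows of length L as columns."""
    N = len(x)
    K = N - L + 1
    return np.column_stack([x[i:i + L] for i in range(K)])

rng = np.random.default_rng(0)
t = np.arange(500)
x = np.sin(2 * np.pi * t / 60) + 0.3 * rng.standard_normal(500)  # toy series

X = trajectory_matrix(x, L=120)          # embedding window (problem-dependent choice)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
print("leading singular values:", np.round(s[:5], 2))
```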

Keywords: time series analysis, adaptive time series decomposition, wavelet, phase space reconstruction, singular spectrum analysis

Procedia PDF Downloads 78
49 Computational Fluid Dynamics Modeling of Physical Mass Transfer of CO₂ by N₂O Analogy Using One Fluid Formulation in OpenFOAM

Authors: Phanindra Prasad Thummala, Umran Tezcan Un, Ahmet Ozan Celik

Abstract:

Removal of CO₂ by MEA (monoethanolamine) in structured packing columns depends strongly on the gas-liquid interfacial area and film thickness (liquid load). CFD (computational fluid dynamics) can be used to determine the interfacial area, the film thickness, and their impact on mass transfer in gas-liquid flow in any column geometry. In general, modeling approaches used in CFD derive mass transfer parameters from standard correlations based on penetration or surface renewal theories. In order to avoid the effect of the assumptions involved in deriving these correlations and to model the mass transfer based solely on fluid properties, state-of-the-art approaches such as the one fluid formulation are useful. In this work, the one fluid formulation was implemented and evaluated for modeling the physical mass transfer of CO₂ by the N₂O analogy in the OpenFOAM CFD software. The N₂O analogy avoids the effect of chemical reactions on absorption and allows studying the amount of CO₂ physical mass transfer possible in a given geometry. The computational domain in the current study was a flat plate with gas and liquid flowing in the countercurrent direction. The effects of operating parameters such as flow rate, MEA concentration, and angle of inclination on the physical mass transfer are studied in detail. Liquid-side mass transfer coefficients obtained from the simulations are compared to correlations available in the literature, and it was found that the one fluid formulation effectively captures the effects of interface surface instabilities on the mass transfer coefficient with higher accuracy. The high mesh refinement required near the interface region was found to be a limiting factor for utilizing this approach in large-scale simulations. Overall, the one fluid formulation is found to be promising for CFD studies involving CO₂ mass transfer.
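
For context, the N₂O analogy relied on in such studies is commonly written as He_CO₂,solution = He_N₂O,solution × (He_CO₂,water / He_N₂O,water) and D_CO₂,solution = D_N₂O,solution × (D_CO₂,water / D_N₂O,water), i.e., the solubility (Henry's constant) and diffusivity of CO₂ in the amine solution are inferred from measurable N₂O properties. The exact property correlations used in this work are not stated in the abstract, so these relations are given only as the standard form of the analogy.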

Keywords: one fluid formulation, CO₂ absorption, liquid mass transfer coefficient, OpenFOAM, N₂O analogy

Procedia PDF Downloads 195
48 Minimizing the Drilling-Induced Damage in Fiber Reinforced Polymeric Composites

Authors: S. D. El Wakil, M. Pladsen

Abstract:

Fiber reinforced polymeric (FRP) composites are finding widespread industrial applications because of their exceptionally high specific strength and specific modulus of elasticity. Nevertheless, ready-to-use components or products made of FRP composites are very seldom obtained directly. Secondary processing by machining, particularly drilling, is almost always required to make holes for fastening components together to produce assemblies. That creates problems, since FRP composites are neither homogeneous nor isotropic. Some of the problems encountered include damage in the region around the drilled hole and drilling-induced delamination of the plies, which occurs both at the entrance and the exit planes of the workpiece. Evidently, the functionality of the workpiece would be detrimentally affected. The current work was carried out with the aim of eliminating, or at least minimizing, the workpiece damage associated with drilling of FRP composites. Each test specimen was a woven graphite fiber/epoxy composite having a thickness of 12.5 mm (0.5 inch). A large number of test specimens were subjected to drilling operations with different combinations of feed rates and cutting speeds. The drilling-induced damage was taken as the absolute value of the difference between the drilled hole diameter and the nominal one, expressed as a percentage of the nominal diameter. This damage measure was determined for each combination of feed rate and cutting speed, and a matrix comprising those values was established, where the columns indicate varying feed rate and the rows indicate varying cutting speed. Next, the analysis of variance (ANOVA) approach was employed using Minitab software in order to obtain the combination that would minimize the drilling-induced damage. Experimental results show that low feed rates coupled with low cutting speeds yielded the best results.
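
The sketch below mirrors the described workflow with hypothetical numbers: the damage metric is the absolute diameter deviation as a percentage of an assumed nominal diameter, and a two-way ANOVA over feed rate and cutting speed is run in Python (statsmodels standing in for the Minitab analysis used by the authors).

```python
# Hedged sketch with made-up measurements, not the paper's data.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

d_nominal = 6.0  # mm, assumed nominal hole diameter
rows = [
    # feed [mm/rev], speed [rpm], measured hole diameter [mm] (hypothetical)
    (0.05, 500, 6.02), (0.05, 1500, 6.05), (0.05, 2500, 6.09),
    (0.15, 500, 6.06), (0.15, 1500, 6.10), (0.15, 2500, 6.15),
    (0.30, 500, 6.12), (0.30, 1500, 6.18), (0.30, 2500, 6.25),
]
df = pd.DataFrame(rows, columns=["feed", "speed", "d_hole"])
df["damage_pct"] = (df["d_hole"] - d_nominal).abs() / d_nominal * 100  # damage metric

model = smf.ols("damage_pct ~ C(feed) + C(speed)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # significance of feed and speed effects
```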

Keywords: drilling of composites, dimensional accuracy of holes drilled in composites, delamination and charring, graphite-epoxy composites

Procedia PDF Downloads 366
47 The Current Practices of Analysis of Reinforced Concrete Panels Subjected to Blast Loading

Authors: Palak J. Shukla, Atul K. Desai, Chentankumar D. Modhera

Abstract:

For any country in the world, protecting critical infrastructure from the looming risks of terrorism has become a priority. In any infrastructure system, structural elements like lower floors, exterior columns, and walls are the elements most susceptible to damage due to blast load. The present study revisits the state of the art in the design and analysis of reinforced concrete panels subjected to blast loading. Various aspects associated with blast loading on structures, i.e., estimation of blast load, experimental work carried out previously, numerical simulation tools, various material models, etc., are considered to explore the current practices adopted worldwide. Various parametric studies investigating the effect of reinforcement ratio, slab thickness, charge weight, and standoff distance are also discussed. It was observed that, for the simulation of blast load, the CONWEP blast function or equivalent numerical equations were successfully employed by many researchers. The literature indicates that the research was carried out using experimental work and numerical simulation with well-known general-purpose finite element codes, i.e., LS-DYNA, ABAQUS, and AUTODYN. Many researchers recommended using a concrete damage model to represent concrete and a plastic kinematic material model to represent steel under the action of blast loads in most numerical simulations. Most of the studies reveal that increases in reinforcement ratio, slab thickness, and standoff distance resulted in better blast resistance of the reinforced concrete panel. The study summarizes the various research results and appends the present state of knowledge for structures exposed to blast loading.
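
As background to the CONWEP-type load functions mentioned above, these are built on Hopkinson-Cranz scaling: the scaled distance Z = R / W^(1/3), with R the standoff distance and W the equivalent TNT charge mass, is used to look up peak overpressure, arrival time, and impulse from the Kingery-Bulmash curves. This is the quantity that ties together the charge weight and standoff distance parametric studies discussed in the reviewed works.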

Keywords: blast phenomenon, experimental methods, material models, numerical methods

Procedia PDF Downloads 132
46 Paradigm Shift of the World Is Globalization: Identity Crisis, Violence and Cultural War

Authors: Shahla Bukhtair

Abstract:

A paradigm presents a consensus view of a particular or collective community, accepted by the members of that community, either consciously pronounced or, more likely, simply assumed and not intentionally acknowledged. A paradigm shift is based on the behavioral attitude of the community. Change is inexorable. The world is suffering from the innovative creation of globalization, and the media have boosted this paradigm shift all over the world. Globalization is a vigorous process which impacts differentially on various cultures around the world. The outcome of globalization is that it permeates cultural boundaries and, in the process, results in the spread of Western ideologies and values across the world. The term flourished in the 20th century. Globalization is regarded as having a substantial impact on such crises through its encouragement of conflict rather than conciliation; through opportunities for expression, various groups benefit from it. Identity crisis refers to an inflexible mechanism, i.e., cultural and political conflicts among polarized groups which struggle with each other over the definition of a national identity. Violence is not only physical but also psychological. Due to an identity crisis, a person experiences fear, anxiety, and a lack of security. Everything has negative and positive aspects. Newspaper columns, magazine articles, films, made-for-TV movies, television special reports, and talk shows are all public arenas where images serving particular political agendas are constructed, debated, and reproduced. From these resources, individuals construct their own conceptions of what is normal and acceptable. This bias affects images in the media and, in turn, has a negative effect on public development in a society. This paper investigates the relationship between globalization and cultural war, identity crisis, and the role of violence. Objectives: - To determine which type of media plays an important role in negatively shaping the perceptions and attitudes of the public; - To analyze the impact of globalization on identity crisis, violence, and global culture (positive and negative).

Keywords: paradigm shift, globalization, identity crisis, cultural war

Procedia PDF Downloads 331
45 Non-Linear Static Analysis of Screwed Moment Connections in Cold-Formed Steel Frames

Authors: Jikhil Joseph, Satish Kumar S R.

Abstract:

Cold-formed steel frames are preferable for framed construction due to their low seismic weight, which results in low seismic forces; on the other hand, significant lateral deflections are expected under seismic/wind loading. The various factors affecting the lateral stiffness of steel frames are the stiffness of the connections, beams, and columns. So, increasing the stiffness of the beams and columns and making the connections rigid will enhance the lateral stiffness. The present study focuses on structural elements made of rectangular hollow sections fastened with screwed in-plane moment connections for building frames. The self-drilling screws can easily be drilled on either side of the connection area with the help of gusset plates. The strength of screwed connections can be made 1.2 times that of the connected elements. However, achieving high stiffness in connections is also a challenging job. Hence, in addition to the beam and column stiffnesses, the connection stiffness is also going to be a governing parameter in the lateral deflections of the frames. Non-linear static analysis in SAP2000 has been planned to study the seismic behavior of the steel frames. The SAP model will consist of a nonlinear spring model for the connections, to account for their semi-rigid behavior, and nonlinear hinges will be assigned to the beam and column sections according to FEMA 273 guidelines. Reliable spring and hinge parameters will be assigned based on an experimental and analytical database. The non-linear static analysis is mainly focused on the identification of the various hinge formations and the estimation of lateral deflection, and these will serve as inputs for direct displacement-based seismic design. The research outputs from this study are modelling techniques and suitable design guidelines for the performance-based seismic design of cold-formed steel frames.

Keywords: buckling, cold formed steel, nonlinear static analysis, screwed connections

Procedia PDF Downloads 143
44 An Exploratory Study of E-Learning Stakeholders’ Experiences of Developing, Implementing and Enhancing E-Courses in One Saudi University

Authors: Zahra Alqahtani

Abstract:

The use of e-learning technologies is gaining momentum in all educational institutions of the world, including Saudi universities. In the e-learning context, there is a growing need and concern among Saudi universities to improve and enhance quality assurance for e-learning systems. Practicing quality assurance activities and applying quality standards in e-learning in Saudi universities is thought to reduce the negative viewpoints of some stakeholders and to ensure that stakeholders' satisfaction and needs are met. As a contribution to improving the quality of e-learning methods in Saudi universities, the main purpose of this study is to explore and investigate strategies for the development of quality assurance in e-learning in one university in Saudi Arabia, which is considered a good reference university applying the best and ongoing practices in e-learning systems among Saudi universities. In order to ensure the quality of its e-learning methods, this university has adopted the Quality Matters Standards as a controlling guide for the quality of its blended and full e-courses. Furthermore, quality assurance can be further improved if a variety of perspectives are taken into consideration from the comprehensive viewpoints of faculty members, administrative staff, and students. This qualitative research involved the use of different types of interviews, as well as documents that contain data related to e-learning methods in the Saudi university environment. This exploratory case study was undertaken, from the perspectives of various participants, to understand the phenomenon of quality assurance using an inductive technique. The results revealed six main supportive factors that assist in ensuring the quality of e-learning in the Saudi university environment. Essentially, these factors are institutional support, faculty member support, evaluation of faculty, quality of e-course design, technology support, and student support, which together have a remarkable positive effect on quality, forming intrinsic columns connected by bricks leading to quality e-learning. Quality Matters standards are considered to have a strong impact on improving faculty members' skills and on the development of high-quality blended and full e-courses.

Keywords: E-learning, quality assurance, quality matters standards, KKU-supportive factors

Procedia PDF Downloads 88
43 Modelling and Simulation of Natural Gas-Fired Power Plant Integrated to a CO2 Capture Plant

Authors: Ebuwa Osagie, Chet Biliyok, Yeung Hoi

Abstract:

The regeneration energy requirement and ways to reduce it are the main focus of most CO2 capture research currently being performed, and thus the post-combustion carbon capture (PCC) option is identified as the most suitable for natural gas-fired power plants. From current research and development (R&D) activities worldwide, two main areas are being examined in order to reduce the regeneration energy requirement of amine-based PCC, namely: (a) development of new solvents with better overall performance than 30 wt% monoethanolamine (MEA) aqueous solution, which is considered the baseline solvent for solvent-based PCC, and (b) integration of the PCC plant with the power plant. In scaling up a PCC pilot plant to the size required for a commercial-scale natural gas-fired power plant, process modelling and simulation are essential. In this work, an integrated process made up of a 482 MWe natural gas-fired power plant and an MEA-based PCC plant has been developed, validated, modelled, and simulated. The PCC plant has four absorber columns and a single stripper column, and the modelling and simulation were performed with Aspen Plus® V8.4. The gas turbine, the heat recovery steam generator, and the steam cycle were modelled based on a 2010 US DOE report, while the MEA-based PCC plant was modelled as a rate-based process. The scaling of the amine plant was performed using a rate-based calculation, in preference to the equilibrium-based approach, for 90% CO2 capture. The power plant was integrated with the PCC plant in three ways: (i) the flue gas stream from the power plant is divided equally into four streams, and each stream is fed into one of the four absorbers in the PCC plant; (ii) steam drawn off from the IP/LP cross-over pipe in the steam cycle of the power plant is used to regenerate the solvent in the reboiler; (iii) condensate returns from the reboiler to the power plant. The integration of the PCC plant with the NGCC plant resulted in a reduction of the power plant output by 73.56 MWe, and the net efficiency of the integrated system is reduced by 7.3 percentage points. A secondary aim of this study is the parametric studies which have been performed to assess the impacts of natural gas on the overall performance of the integrated process, and this is achieved through investigation of the capture efficiencies.

Keywords: natural gas-fired, power plant, MEA, CO2 capture, modelling, simulation

Procedia PDF Downloads 405
42 Diagnosis of the Hydrological and Hydrogeological Potential in the Mancomojan Basin for Estimations of Supply and Demand

Authors: J. M. Alzate, J. Baena

Abstract:

This work presents the final results of the 'Diagnosis of the hydrological and hydrogeological potential in the Mancomojan basin for estimations of supply and demand', carried out with the purpose of obtaining domestic supply solutions for the communities of the study zone. Population projections for the villages were made under three different scenarios. The highest total water demand appears under scenario 3, with a total demand for the year 2050 of 59,275 m³/year (1.88 l/s), the village of San Francisco being the one that exerts the greatest pressure on the resource, with a demand for the same year of the order of 31,189 m³/year (0.99 l/s). As for the hydrogeological potential of the zone as an alternative supply for the studied communities, the stratigraphic columns obtained from the geophysical soundings do not show water-saturated strata that could be considered a potential source of supply for the communities. The groundwater registered in the geophysical tests presents very low resistivities, which indicates that it contains dissolved ions; this water is held in the interstices of very fine-grained rock, which indicates that it is water of constitution, and its flow towards more permeable granulometries is negligible or limited. The groundwater registered both in the vertical electrical soundings (VES) and in the tomography, saturating fine-grained rocks (clays and silts), was shown to have a high ion content, which is consistent with the abundant presence of gypsum and the marine genesis, with transition to continental conditions, of the geological units in the zone. The predominant rocks are sedimentary, mainly medium-grained sandstones; in lesser proportion, coarse-grained to conglomeratic sandstones with clasts of quartz, chert, and siltstone of the Mess Formation were also observed, as well as sandstones (of fine, medium, and coarse grain) alternating with conglomerate beds whose thickness is, in general, between 5 and 15 cm; sandstone nodules with the same composition as the sandstones that contain them are frequent, in some cases calcareous and with cross stratification, belonging to the Sincelejo Formation, Morroa Member.
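
As a check on the reported figures: 59,275 m³/year × 1,000 L/m³ ÷ (365 × 86,400 s/year ≈ 3.15 × 10⁷ s) ≈ 1.88 L/s, and 31,189 m³/year converts in the same way to ≈ 0.99 L/s, consistent with the values quoted for 2050.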

Keywords: hydrological, hydrogeological potential, geotomography, vertical electrical sounding (VES)

Procedia PDF Downloads 231
41 Effect of Dynamic Loading by Cyclic Triaxial Tests on Sand Stabilized with Cement

Authors: Priyanka Devi, Mohammad Muzzaffar Khan, G. Kalyan Kumar

Abstract:

Liquefaction of saturated soils due to dynamic loading is an important and interesting area in the field of geotechnical earthquake engineering. When the soil liquefies, the structures built on it develop uneven settlements, thereby producing cracks in the structure and weakening the foundation. The 1964 Alaskan Good Friday earthquake, the 1989 San Francisco earthquake, and the 2011 Tōhoku earthquake are some examples in which liquefaction occurred due to an earthquake. To mitigate the effect of liquefaction, several methods such as the use of stone columns, increasing the vertical stress, compaction, and removal of liquefiable soil are practiced. Grouting is one of the methods used to increase the strength of the foundation and develop resistance to soil liquefaction without affecting the superstructure. In the present study, an attempt has been made to investigate the undrained cyclic behavior of locally available soil stabilized with cement, in order to mitigate seismically induced soil liquefaction. Specimens of 75 mm diameter and 150 mm height were reconstituted in the laboratory using the water sedimentation technique. A series of strain-controlled cyclic triaxial tests was performed on saturated soil samples following consolidation. The effects of amplitude, confining pressure, and relative density on the dynamic behavior of the sand were studied for soil samples with varying cement content. The results obtained from the present study on loose and medium dense specimens indicate that (i) the higher the relative density, the higher the liquefaction resistance; (ii) with an increase in effective confining pressure, a decrease in the development of excess pore water pressure during cyclic loading was observed; and (iii) sand specimens treated with cement showed reduced excess pore pressures and increased liquefaction resistance, suggesting this as one of the mitigation methods.
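
For reference, the pore pressure ratio listed in the keywords is conventionally defined as r_u = Δu / σ′₃, the excess pore water pressure generated during cyclic loading normalized by the initial effective confining stress, with initial liquefaction usually associated with r_u approaching 1.0. The exact definition used by the authors is not stated in the abstract, so this is offered only as the standard convention.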

Keywords: cyclic triaxial test, liquefaction, soil-cement stabilization, pore pressure ratio

Procedia PDF Downloads 271
40 A Damage-Plasticity Concrete Model for Damage Modeling of Reinforced Concrete Structures

Authors: Thanh N. Do

Abstract:

This paper addresses the modeling of two critical behaviors of concrete material in reinforced concrete components: (1) the increase in strength and ductility due to confining stresses from surrounding transverse steel reinforcements, and (2) the progressive deterioration in strength and stiffness due to high strain and/or cyclic loading. To improve the state-of-the-art, the author presents a new 3D constitutive model of concrete material based on plasticity and continuum damage mechanics theory to simulate both the confinement effect and the strength deterioration in reinforced concrete components. The model defines a yield function of the stress invariants and a compressive damage threshold based on the level of confining stresses to automatically capture the increase in strength and ductility when subjected to high compressive stresses. The model introduces two damage variables to describe the strength and stiffness deterioration under tensile and compressive stress states. The damage formulation characterizes well the degrading behavior of concrete material, including the nonsymmetric strength softening in tension and compression, as well as the progressive strength and stiffness degradation under primary and follower load cycles. The proposed damage model is implemented in a general purpose finite element analysis program allowing an extensive set of numerical simulations to assess its ability to capture the confinement effect and the degradation of the load-carrying capacity and stiffness of structural elements. It is validated against a collection of experimental data of the hysteretic behavior of reinforced concrete columns and shear walls under different load histories. These correlation studies demonstrate the ability of the model to describe vastly different hysteretic behaviors with a relatively consistent set of parameters. The model shows excellent consistency in response determination with very good accuracy. Its numerical robustness and computational efficiency are also very good and will be further assessed with large-scale simulations of structural systems.
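
Although the abstract does not give the governing equations, damage-plasticity models of this family are typically built around an effective-stress relation of the generic form σ = (1 − d) σ̄, where σ̄ is the stress computed by the plasticity model on the undamaged material and d is a scalar damage variable, with separate variables d_t and d_c driving the tensile and compressive strength and stiffness degradation. This is given only as generic context for the two damage variables mentioned above, not as the author's exact formulation.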

Keywords: concrete, damage-plasticity, shear wall, confinement

Procedia PDF Downloads 139
39 A Sustainable and Low-Cost Filter to Treat Pesticides in Water

Authors: T. Abbas, J. McEvoy, E. Khan

Abstract:

Pesticide contamination of the water supply is a common environmental problem in rural agricultural communities. Advanced water treatment processes such as membrane filtration and adsorption on activated carbon only remove pesticides from water without degrading them into less toxic or easily degradable compounds, leaving behind contaminated brine and activated carbon that need to be managed. Rural communities, which normally cannot afford expensive water treatment technologies, need an economical and sustainable filter which not only removes pesticides from water but also degrades them into benign products. In this study, iron turning waste was tested as a potential point-of-use filtration medium for the removal/degradation of a mixture of six chlorinated pesticides (lindane, heptachlor, endosulfan, dieldrin, endrin, and DDT) in water. As a common and traditional medium for water filtration, sand was also tested along with the iron turning waste. The iron turning waste was characterized using scanning electron microscopy and an energy dispersive X-ray analyzer. Four glass columns with different filter media layer configurations were set up: (1) only sand, (2) only iron turning, (3) sand and iron turning (two separate layers), and (4) sand, iron turning, and sand (three separate layers). The initial pesticide concentration and flow rate were 2 μg/L and 10 mL/min. Results indicate that sand filtration was effective only for the removal of DDT (100%) and endosulfan (94-96%). The iron turning filtration column effectively removed endosulfan, endrin, and dieldrin (85-95%), whereas the lindane and DDT removals were 79-85% and 39-56%, respectively. The removal efficiencies for heptachlor, endosulfan, endrin, dieldrin, and DDT were 90-100% when sand and iron turning waste (two separate layers) were used. However, better removal efficiencies (93-100%) for five out of six pesticides were achieved when sand, iron turning, and sand (three separate layers) were used as the filtration media. Moreover, the effects of water pH, the amounts of media, and minerals present in water, such as magnesium, sodium, calcium, and nitrate, on the removal of pesticides were examined. Results demonstrate that iron turning waste efficiently removed all the pesticides under the studied parameters. It also completely dechlorinated all the pesticides studied, and, based on the detection of by-products, degradation mechanisms for all six pesticides were proposed.

Keywords: pesticide contamination, rural communities, iron turning waste, filtration

Procedia PDF Downloads 222
38 A Computerized Tool for Predicting Future Reading Abilities in Pre-Readers Children

Authors: Stephanie Ducrot, Marie Vernet, Eve Meiss, Yves Chaix

Abstract:

Learning to read is a key topic of debate today, both in terms of its implications for school failure and illiteracy and regarding which teaching methods are best to develop. It is estimated today that four to six percent of school-age children suffer from specific developmental disorders that impair learning. Findings from people with dyslexia and from typically developing readers suggest that the problems children experience in learning to read are related to the preliteracy skills that they bring with them from kindergarten. Most tools available to professionals are designed for the evaluation of child language problems. In comparison, there are very few tools for assessing the relations between visual skills and the process of learning to read. The recent literature reports that visual-motor skills and visual-spatial attention in preschoolers are important predictors of reading development. The main goal of this study was therefore to improve screening for future reading difficulties in preschool children. We used a prospective, longitudinal approach in which oculomotor processes (assessed with the DiagLECT test) were measured in pre-readers, and the impact of these skills on future reading development was explored. The DiagLECT test specifically measures the time taken to name numbers arranged irregularly in horizontal rows (horizontal time, HT) and the time taken to name numbers arranged in vertical columns (vertical time, VT). A total of 131 preschoolers took part in this study. At Time 0 (kindergarten), the mean VT, HT, and errors were recorded. One year later, at Time 1, the reading level of the same children was evaluated. Firstly, this study allowed us to provide normative data for a standardized evaluation of oculomotor skills in 5- and 6-year-old children. The data also revealed that 25% of our sample of preschoolers showed oculomotor impairments (without any clinical complaints). Finally, the results of this study assessed the validity of the DiagLECT test for predicting reading outcomes: the better a child's oculomotor skills are, the better his/her reading abilities will be.

Keywords: vision, attention, oculomotor processes, reading, preschoolers

Procedia PDF Downloads 120
37 Adaptation of Projection Profile Algorithm for Skewed Handwritten Text Line Detection

Authors: Kayode A. Olaniyi, Tola. M. Osifeko, Adeola A. Ogunleye

Abstract:

Text line segmentation is an important step in document image processing. It represents a labeling process that assigns the same label, using a distance metric probability, to spatially aligned units. Text line detection techniques have been successfully implemented mainly for printed documents. However, processing of handwritten texts, especially unconstrained documents, has remained a key problem. This is because unconstrained handwritten text lines are often not uniformly skewed, and the spaces between text lines may not be obvious, complicated by the nature of handwriting and the overlapping ascenders and/or descenders of some characters. Hence, text line detection and segmentation represent a leading challenge in handwritten document image processing. Text line detection methods that rely on the traditional global projection profile of the text document cannot efficiently deal with the problem of variable skew angles between different text lines, so the formulation of a horizontal line as a separator is often not efficient. This paper presents a technique to segment a handwritten document into distinct lines of text. The proposed algorithm starts by partitioning the initial text image across its width into vertical strips of about 5% each. For each vertical strip, the histogram of horizontal runs is projected. We have worked with the assumption that text appearing within a single strip is almost parallel. The algorithm then slides a window through the first vertical strip on the left side of the page and runs through it to identify each new minimum corresponding to a valley in the projection profile. Each valley represents the starting point of an orientation line, and the ending point is the minimum point on the projection profile of the next vertical strip. The derived text lines traverse around any obstructing handwritten connected component by associating it with either the line above or the line below. The decision to associate such a connected component is made from the probability obtained from a distance metric. The technique outperforms the global projection profile for text line segmentation and is robust in handling skewed documents and those with lines running into each other.
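
The core of the strip-wise projection idea can be sketched as follows; the 5% strip width, the smoothing length, and the use of scipy's peak finder are assumptions for illustration, not the authors' implementation (which additionally links valleys across strips and resolves obstructing components probabilistically).

```python
# Minimal sketch of a piecewise projection profile: split the binarized page
# into vertical strips, project each strip horizontally, and take the valleys
# of each strip's profile as candidate text-line separators.
import numpy as np
from scipy.signal import find_peaks

def strip_valleys(binary_img, strip_frac=0.05, smooth=9):
    """binary_img: 2-D array with ink pixels = 1. Returns a list of
    (strip_start_col, strip_end_col, valley_rows) tuples."""
    h, w = binary_img.shape
    strip_w = max(1, int(w * strip_frac))
    kernel = np.ones(smooth) / smooth
    results = []
    for x0 in range(0, w, strip_w):
        strip = binary_img[:, x0:x0 + strip_w]
        profile = strip.sum(axis=1).astype(float)            # horizontal projection
        profile = np.convolve(profile, kernel, mode="same")   # light smoothing
        valleys, _ = find_peaks(-profile)                     # minima of the profile
        results.append((x0, min(x0 + strip_w, w), valleys.tolist()))
    return results

# Toy example: two "text lines" of ink separated by white space
img = np.zeros((60, 200), dtype=int)
img[10:20, :] = 1
img[35:45, :] = 1
for x0, x1, valleys in strip_valleys(img)[:3]:
    print(f"strip {x0}-{x1}: separator rows {valleys}")
```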

Keywords: connected-component, projection-profile, segmentation, text-line

Procedia PDF Downloads 93
36 Adsorptive Media Selection for Bilirubin Removal: An Adsorption Equilibrium Study

Authors: Vincenzo Piemonte

Abstract:

The liver is a complex, large-scale biochemical reactor which plays a unique role in human physiology. When the liver ceases to perform its physiological activity, a functional replacement is required. At present, liver transplantation is the only clinically effective method of treating severe liver disease. However, this therapeutic approach is hampered by the disparity between organ availability and the number of patients on the waiting list. In order to overcome this critical issue, research activities have focused on liver support device systems (LSDs) designed to bridge patients to transplantation or to keep them alive until the recovery of native liver function. In recirculating albumin dialysis devices, such as MARS (Molecular Adsorbent Recirculating System), adsorption is one of the fundamental steps in albumin-dialysate regeneration. Among the albumin-bound toxins that must be removed from blood during liver-failure therapy, bilirubin and tryptophan can be considered representative of two different toxin classes: the first not water soluble at physiological blood pH and strongly bound to albumin, the second loosely bound to albumin and partially water soluble at pH 7.4. Fixed bed units are normally used for this task, and the design of such units requires information on both toxin adsorption equilibrium and kinetics. The most common adsorptive media used in LSDs are activated carbon, non-ionic polymeric resins, and anionic resins. In this paper, bilirubin adsorption isotherms on different adsorptive media, such as polymeric resin, albumin-coated resin, anionic resin, activated carbon, and alginate beads with entrapped albumin, are presented. By comparing all the results, it can be stated that the adsorption capacity for bilirubin of the five media increases in the following order: alginate beads < polymeric resin < albumin-coated resin < activated carbon < anionic resin. The main focus of this paper is to provide useful guidelines for the optimization of liver support devices which implement adsorption columns to remove albumin-bound toxins from albumin dialysate solutions.
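
As generic context (the abstract does not state which isotherm model was fitted), adsorption equilibrium data of this kind are commonly reduced to a form such as the Langmuir isotherm, q = q_max·b·C / (1 + b·C), where q is the bilirubin uptake per unit mass of medium, C the equilibrium bilirubin concentration in solution, q_max the monolayer capacity, and b the affinity constant; ranking the media then amounts to comparing the fitted q_max (and b) values.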

Keywords: adsorptive media, adsorption equilibrium, artificial liver devices, bilirubin, mathematical modelling

Procedia PDF Downloads 234
35 Self-Energy Sufficiency Assessment of the Biorefinery Annexed to a Typical South African Sugar Mill

Authors: M. Ali Mandegari, S. Farzad, J. F. Görgens

Abstract:

Sugar is one of the main agricultural industries in South Africa, and the livelihoods of approximately one million South Africans are indirectly dependent on the sugar industry, which is economically struggling and needs to re-invent itself in order to ensure long-term sustainability. A second generation biorefinery is defined as a process that uses waste fibrous material for the production of biofuel, chemicals, animal feed, and electricity. Bioethanol is by far the most widely used biofuel for transportation worldwide, and many of the challenges facing bioethanol production have been solved. A biorefinery annexed to an existing sugar mill for the production of bioethanol and electricity is proposed to the sugar industry and is addressed in this study. Since flowsheet development is the key element of the bioethanol process, in this work a biorefinery (bioethanol and electricity production) annexed to a typical South African sugar mill, considering 65 ton/h of dry sugarcane bagasse and tops/trash as feedstock, was simulated. Aspen Plus™ V8.6 was applied as the simulator, and a realistic simulation development approach was followed to reflect the practical behaviour of the plant. The latest results of other researchers concerning pretreatment, hydrolysis, fermentation, enzyme production, bioethanol production, and other supplementary units such as evaporation, water treatment, boiler, and steam/electricity generation units were adopted to establish a comprehensive biorefinery simulation. Steam explosion with SO2 was selected for pretreatment due to minimum inhibitor production, and a simultaneous saccharification and fermentation (SSF) configuration was adopted for enzymatic hydrolysis and fermentation of the cellulose and hydrolysate. Bioethanol purification was simulated by two distillation columns with side streams, and fuel grade bioethanol (99.5%) was achieved using a molecular sieve in order to minimize the capital and operating costs. The boiler and steam/power generation units were also modelled using industrial design data. Results indicate that the annexed biorefinery can be self-energy sufficient when 35% of the feedstock (tops/trash) bypasses the biorefinery process and is loaded directly to the boiler to produce sufficient steam and power for the sugar mill and the biorefinery plant.

Keywords: biorefinery, self-energy sufficiency, tops/trash, bioethanol, electricity

Procedia PDF Downloads 515
34 Flow Sheet Development and Simulation of a Bio-refinery Annexed to a Typical South African Sugar Mill

Authors: M. Ali Mandegari, S. Farzad, J. F. Görgens

Abstract:

Sugar is one of the main agricultural industries in South Africa, and the livelihoods of approximately one million South Africans are indirectly dependent on the sugar industry, which is economically struggling and needs to re-invent itself in order to ensure long-term sustainability. A second generation bio-refinery is defined as a process that uses waste fibrous material for the production of bio-fuel, chemicals, animal feed, and electricity. Bio-ethanol is by far the most widely used bio-fuel for transportation worldwide, and many of the challenges facing bio-ethanol production have been solved. A bio-refinery annexed to an existing sugar mill for the production of bio-ethanol and electricity is proposed to the sugar industry and is addressed in this study. Since flow-sheet development is the key element of the bio-ethanol process, in this work a bio-refinery (bio-ethanol and electricity production) annexed to a typical South African sugar mill, considering 65 ton/h of dry sugarcane bagasse and tops/trash as feedstock, was simulated. Aspen Plus™ V8.6 was applied as the simulator, and a realistic simulation development approach was followed to reflect the practical behavior of the plant. The latest results of other researchers concerning pretreatment, hydrolysis, fermentation, enzyme production, bio-ethanol production, and other supplementary units such as evaporation, water treatment, boiler, and steam/electricity generation units were adopted to establish a comprehensive bio-refinery simulation. Steam explosion with SO2 was selected for pretreatment due to minimum inhibitor production, and a simultaneous saccharification and fermentation (SSF) configuration was adopted for enzymatic hydrolysis and fermentation of the cellulose and hydrolysate. Bio-ethanol purification was simulated by two distillation columns with side streams, and fuel grade bio-ethanol (99.5%) was achieved using a molecular sieve in order to minimize the capital and operating costs. The boiler and steam/power generation units were also modelled using industrial design data. Results indicate that 256.6 kg of bio-ethanol per ton of feedstock and 31 MW of surplus power were attained from the bio-refinery, while the process consumes 3.5, 3.38, and 0.164 GJ per ton of feedstock of hot utility, cold utility, and electricity, respectively. The developed simulation is a starting point for a variety of analyses and developments in further studies.

Keywords: bio-refinery, bagasse, tops, trash, bio-ethanol, electricity

Procedia PDF Downloads 495
33 Engineering Analysis for Fire Safety Using Computational Fluid Dynamic (CFD)

Authors: Munirajulu M, Srikanth Modem

Abstract:

A large cricket stadium with the capacity to accommodate several thousand spectators has a seating arena consisting of a two-tier arrangement with an upper and a lower bowl and an intermediate concourse podium level for pedestrian movement to access the bowls. The uniqueness of the stadium is that spectators have an unobstructed view from all around the podium towards the field of play. The upper and lower bowls are connected by stairs. The stair landing is a precast slab supported by cantilevered steel beams. These steel beams are fixed to the precast columns supporting the stadium structure. The stair slabs are precast concrete supported on a landing slab and cantilevered steel beams. In the event of a fire at the podium level between two staircases, the fire resistance of the steel beams is very critical to life safety. If a steel beam loses its strength due to a lack of fire resistance, it will be weak in supporting the stair slabs and may lead to a hazard in evacuating occupants from the upper bowl to the lower bowl. In this study, to ascertain the fire rating and life safety, a performance-based design using CFD analysis is used to evaluate the fire resistance of the steel beams. A fire size of 3.5 MW (convective heat output of the fire) with a wind speed of 2.57 m/s is considered for the fire and smoke simulation. The CFD results show that the smoke temperature near and around the staircase does not exceed 150 °C for the fire duration considered. The surface temperature of the cantilevered steel beams is found to be less than or equal to 150 °C. Since this temperature is much less than the critical failure temperature of steel (520 °C), it is concluded that the design of the structural steel supports of the staircase is adequate and does not need additional fire protection such as a fire-resistant coating. The CFD analysis provided an engineering basis for the performance-based design of the steel structural elements and an opportunity to optimize fire protection requirements. Thus, performance-based design using CFD modeling and simulation of fire and smoke is an innovative way to evaluate fire rating requirements, ascertain life safety, and optimize the design with regard to fire protection of structural steel elements.

Keywords: fire resistance, life safety, performance-based design, CFD analysis

Procedia PDF Downloads 162
32 Quantum Chemical Investigation of Hydrogen Isotopes Adsorption on Metal Ion Functionalized Linde Type A and Faujasite Type Zeolites

Authors: Gayathri Devi V, Aravamudan Kannan, Amit Sircar

Abstract:

In the inner fuel cycle system of a nuclear fusion reactor, the Hydrogen Isotopes Removal System (HIRS) plays a pivotal role. It enables the effective extraction of the hydrogen isotopes from the breeder purge gas, which helps to maintain the tritium breeding ratio and sustain the fusion reaction. As one of the components of the HIRS, Cryogenic Molecular Sieve Bed (CMSB) columns with zeolite adsorbents are considered for the physisorption of hydrogen isotopes at 1 bar and 77 K. Even though zeolites have good thermal stability and reduced activation properties, making them ideal for use in nuclear reactor applications, their modest capacity for hydrogen isotope adsorption is a cause of concern. In order to enhance the adsorbent capacity in an informed manner, it is helpful to understand the adsorption phenomena at the quantum electronic structure level. Physicochemical modification of the adsorbent material enhances the adsorption capacity through the incorporation of active sites, which may be accomplished by incorporating suitable metal ions into the zeolite framework. In this work, the adsorption of molecular hydrogen isotopes on the active sites of functionalized zeolites is investigated in detail using a Density Functional Theory (DFT) study. This involves a hybrid Generalized Gradient Approximation (GGA) exchange-correlation functional with dispersion correction. The electronic energies, adsorption enthalpy, adsorption free energy, and Highest Occupied Molecular Orbital (HOMO) and Lowest Unoccupied Molecular Orbital (LUMO) energies are computed for the stable 8T zeolite clusters as well as for the periodic structure functionalized with different active sites. The characteristics of the dihydrogen bond with the active metal sites and the isotopic effects are also studied in detail. Validation studies with DFT will also be presented for the adsorption of hydrogen on metal-ion-functionalized zeolites. The ab initio screening analysis gave insights into the mechanism of hydrogen interaction with the zeolites under study and into the effect of the metal ion on adsorption. This detailed study provides guidelines for selecting the appropriate metal ions to incorporate into the zeolite framework for effective adsorption of hydrogen isotopes in the HIRS.
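
As a sketch of how the reported quantities fit together, the snippet below assembles the adsorption energy, enthalpy, and free energy from DFT electronic energies and thermal corrections; the function and all numerical values are placeholders and are not taken from the study.

```python
# Hedged sketch: assembling adsorption enthalpy/free energy from DFT outputs.
# E_* are electronic energies; H_corr/G_corr are thermal corrections from a
# frequency calculation at the adsorption temperature (77 K in the HIRS case).
# All numbers below are placeholders, not results from the study.

HARTREE_TO_KJ_MOL = 2625.4996

def adsorption_energetics(E_complex, E_zeolite, E_h2,
                          H_corr_complex=0.0, H_corr_zeolite=0.0, H_corr_h2=0.0,
                          G_corr_complex=0.0, G_corr_zeolite=0.0, G_corr_h2=0.0):
    """Return (dE, dH, dG) of adsorption in kJ/mol from energies in Hartree."""
    dE = E_complex - E_zeolite - E_h2
    dH = (E_complex + H_corr_complex) - (E_zeolite + H_corr_zeolite) - (E_h2 + H_corr_h2)
    dG = (E_complex + G_corr_complex) - (E_zeolite + G_corr_zeolite) - (E_h2 + G_corr_h2)
    return tuple(x * HARTREE_TO_KJ_MOL for x in (dE, dH, dG))

# Placeholder energies (Hartree) for a metal-exchanged 8T cluster + H2 complex:
dE, dH, dG = adsorption_energetics(E_complex=-1234.5678,
                                   E_zeolite=-1233.3900,
                                   E_h2=-1.1700)
print(f"dE_ads = {dE:.1f} kJ/mol, dH_ads = {dH:.1f} kJ/mol, dG_ads = {dG:.1f} kJ/mol")
```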

Keywords: adsorption enthalpy, functionalized zeolites, hydrogen isotopes, nuclear fusion, physisorption

Procedia PDF Downloads 153
31 Isolation, Characterization, and Antibacterial Evaluation of Antimicrobial Peptides and Derivatives from Fly Larvae Sarconesiopsis magellanica (Diptera: Calliphoridae)

Authors: A. Díaz-Roa, P. I. Silva Junior, F. J. Bello

Abstract:

Sarconesiopsis magellanica (Diptera: Calliphoridae) is a medically important necrophagous fly used for establishing the post-mortem interval. Dipterous maggots release diverse proteins and peptides in their larval excretion and secretion (ES) products, which play a key role in digestion. The most important mechanism for combating infection during larval therapy depends on larval ES. The larvae are protected against infection by a diverse spectrum of antimicrobial peptides (AMPs), one of which, lucifensin, is already known. Special interest in these peptides has also arisen regarding their role in wound healing, since they degrade necrotic tissue and kill different bacteria during larval therapy. The action of larvae on wounds occurs through three mechanisms: removal of necrotic tissue, stimulation of granulation tissue, and the antibacterial action of larval ES. Components of the ES include calcium, urea, allantoin, and ammonium bicarbonate, which reduce the viability of Gram-positive and Gram-negative bacteria. Larvae of the fly Lucilia sericata have been the most widely used; however, new species that could be as effective or more effective need to be evaluated. This study was therefore aimed at identifying and characterizing, for the first time, S. magellanica AMPs contained in ES products and comparing them with those of the commonly used L. sericata. The products were obtained from third-instar larvae taken from a previously established colony. For the first analysis, ES fractions were separated on Sep-Pak C18 disposable columns (first step). The material obtained was fractionated by RP-HPLC using a Jupiter C18 semi-preparative column. The products were then lyophilized, and their antimicrobial activity was characterized by incubation with different bacterial strains. The first chromatographic analysis of ES from L. sericata yielded 6 fractions with antimicrobial activity against the Gram-positive bacterium Micrococcus luteus and 3 fractions with activity against the Gram-negative bacterium Pseudomonas aeruginosa, while the analysis of S. magellanica ES yielded 1 fraction active against M. luteus and 4 against P. aeruginosa. One of these fractions may correspond to the peptide already known from L. sericata. These results support further experiments aimed at validating the use of S. magellanica in larval therapy. Whether new molecules are present still needs to be determined by mass spectrometry and de novo sequencing, and further studies are necessary to identify and characterize them in order to better understand their function.

Keywords: antimicrobial peptides, larval therapy, Lucilia sericata, Sarconesiopsis magellanica

Procedia PDF Downloads 340
30 Investigation on Behaviour of Reinforced Concrete Beam-Column Joints Retrofitted with CFRP

Authors: Ehsan Mohseni

Abstract:

The aim of this thesis is to provide numerical analyses of reinforced concrete beam-column joints with and without CFRP (Carbon Fibre Reinforced Polymer) in order to achieve a better understanding of the behaviour of strengthened beam-column joints. A comprehensive literature survey prior to this study revealed that published studies are limited to a handful only; the results are inconclusive and some are even contradictory. To improve on this situation, following that review, a numerical study was designed and performed as presented in this thesis. For the numerical study, the dimensions, end supports, and characteristics of the beam and column models were the same as those chosen in a previous experimental investigation in which ten beam-column joints were tested to failure. Finite element analysis is a useful tool in cases where analytical methods cannot solve the problem because of its complexity, and the cyclic behaviour of FRP-strengthened reinforced concrete beam-column joints is such a case. The interaction of steel (longitudinal bars and stirrups), concrete, and FRP; yielding of steel bars and stirrups; cracking of concrete; the redistribution of stresses as some elements unload due to crushing or yielding; and the confinement of concrete due to the presence of FRP are some of the issues that introduce complexity into the problem. Numerical solutions, however, can provide further information about the behaviour in lieu of costly experiments or complex closed-form solutions. This thesis presents the results of a numerical study on beam-column joints subjected to cyclic loads and strengthened with CFRP wraps or strips in a variety of configurations. The analyses are performed with the Abaqus finite element program and are calibrated against the experiments. A range of issues in beam-column joints, including the cracking load, the ultimate load, and the lateral load-displacement curves of the joints, is investigated. The numerical results for the different strengthening configurations are compared, and the computed results are compared with those obtained from the experiments. The cracking load, ultimate load, and lateral load-displacement curves obtained from the numerical analysis for all joints were in very good agreement with the corresponding experimental ones. The results obtained from the numerical analysis in most cases imply that the method is conservative and can therefore be used in design applications with confidence.
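
A minimal sketch of the model-versus-experiment comparison step described above is given below: it interpolates a numerical load-displacement curve onto the experimental displacement points and reports errors on the key quantities; the arrays are placeholder data, not results from the thesis.

```python
# Hedged sketch of a numerical-vs-experimental comparison for load-displacement
# curves. Placeholder data only; not results from the thesis.

import numpy as np

# displacement (mm), load (kN) -- placeholder curves
exp_disp = np.array([0.0, 2.0, 4.0, 8.0, 12.0, 16.0])
exp_load = np.array([0.0, 18.0, 31.0, 47.0, 55.0, 58.0])
num_disp = np.array([0.0, 1.0, 3.0, 6.0, 10.0, 14.0, 16.0])
num_load = np.array([0.0, 10.0, 25.0, 40.0, 52.0, 57.0, 59.5])

# Interpolate the numerical curve at the experimental displacement points
num_at_exp = np.interp(exp_disp, num_disp, num_load)

ultimate_error = (num_load.max() - exp_load.max()) / exp_load.max() * 100.0
rms_error = np.sqrt(np.mean((num_at_exp[1:] - exp_load[1:]) ** 2))

print(f"Ultimate load: exp {exp_load.max():.1f} kN, num {num_load.max():.1f} kN "
      f"({ultimate_error:+.1f}%)")
print(f"RMS deviation over the curve: {rms_error:.1f} kN")
```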

Keywords: numerical analysis, strengthening, CFRP, reinforced concrete joints

Procedia PDF Downloads 322
29 Building Data Infrastructure for Public Use and Informed Decision Making in Developing Countries-Nigeria

Authors: Busayo Fashoto, Abdulhakeem Shaibu, Justice Agbadu, Samuel Aiyeoribe

Abstract:

Data has gone from just rows and columns to being an infrastructure in itself. Traditionally, data infrastructure has been managed by individuals in different industries and saved on personal work tools such as the laptop. This hinders data sharing and works against Sustainable Development Goal (SDG) 9 on sustainable infrastructure across all countries and regions. At the same time, there has been constant demand for data from different agencies and ministries by investors and decision-makers. The rapid development and adoption of open-source technologies that promote the collection and processing of data in new ways and in ever-increasing volumes are creating new data infrastructure in sectors such as land and health, among others. This paper examines the process of developing data infrastructure and, by extension, a data portal to provide baseline data for sustainable development and decision making in Nigeria. The paper employs the FAIR principles (Findable, Accessible, Interoperable, and Reusable) of data management, using open-source technology tools to develop data portals for public use. eHealth Africa, an organization that uses technology to drive public health interventions in Nigeria, developed a data portal that is a typical data infrastructure serving as a repository for various datasets on administrative boundaries, points of interest, settlements, social infrastructure, amenities, and others. The portal makes it possible for users to access datasets of interest at any time at no cost. The skeletal infrastructure of the data portal is built on open-source technologies such as a PostgreSQL database, GeoServer, GeoNetwork, and CKAN. These tools make the infrastructure sustainable, thus promoting the achievement of SDG 9 (Industry, Innovation, and Infrastructure). As of 6th August 2021, 8,192 user accounts had been created, 2,262 datasets had been downloaded, and 817 maps had been created on the platform. This paper shows how the rapid development and adoption of technologies facilitates data collection, processing, and publishing in new ways and in ever-increasing volumes. In addition, the paper is explicit about new data infrastructure in sectors such as health, social amenities, and agriculture. Furthermore, it reveals the importance of cross-sectional data infrastructures for planning and decision making, which in turn can form a central data repository for sustainable development across developing countries.
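
Because the portal is built on CKAN, its datasets can be queried programmatically through CKAN's standard Action API; the sketch below shows such a query, with a placeholder portal URL rather than the actual eHealth Africa endpoint.

```python
# Hedged sketch: querying a CKAN-backed data portal through CKAN's documented
# Action API. The portal URL below is a placeholder, not the actual endpoint.

import requests

PORTAL_URL = "https://data.example.org"  # placeholder portal address

def search_datasets(query, rows=5):
    """Search public datasets on a CKAN portal and return (title, name) pairs."""
    resp = requests.get(
        f"{PORTAL_URL}/api/3/action/package_search",
        params={"q": query, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    payload = resp.json()
    return [(pkg["title"], pkg["name"]) for pkg in payload["result"]["results"]]

if __name__ == "__main__":
    for title, name in search_datasets("administrative boundaries"):
        print(f"{title} ({PORTAL_URL}/dataset/{name})")
```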

Keywords: data portal, data infrastructure, open source, sustainability

Procedia PDF Downloads 63
28 Use and Effects of Kanban Board from the Aspects of Brothers Furniture Limited

Authors: Kazi Rizvan, Yamin Rekhu

Abstract:

Due to high competitiveness in industries throughout the world, every industry is trying hard to utilize all its resources to keep productivity as high as possible. Many tools are used to ensure a smoother flow of operations, to balance tasks, to maintain proper schedules and sequences for tasks, and to reduce unproductive time; all of them serve to augment productivity within an industry. The Kanban board is one of them and one of the important tools of the lean production system. A Kanban board is a visual depiction of the status of tasks: it shows their actual status and conveys their progress and issues as well. Using a Kanban board, tasks can be distributed among workers and operation targets can be represented to them visually. In this paper, an example of a Kanban board from Brothers Furniture Limited is presented: how the Kanban board system was implemented, how the board was designed, and how it was made easily perceivable for less literate or illiterate workers. The Kanban board was designed for the packing section of Brothers Furniture Limited. It was implemented to represent the task flow to the workers and to mitigate the time wasted while workers wondered what task to start after finishing one. The board comprised seven columns, including a column for comments where any problem that occurred while working on a task could be noted. The Kanban board was helpful for the workers, as it showed the urgency of the tasks. It was also helpful for the store section, which could see which products, and how many of them, could be delivered to the store at any given time. The Kanban board centralized all the information, which paced up the work flow and minimized idle time. Even though many workers were illiterate or less literate, the board remained intelligible to them because the Kanban cards were coloured: the significance of colours is conveniently interpretable, so the coloured cards helped a great deal, and the workers did not have to spend time wondering about the meaning of the cards. Even when they were not told the significance of the coloured cards, they could develop a feeling for it, as colours trigger the mind to perceive the situation. As a result, the board made clear to the workers what to do, when to do it, and what to do next. The Kanban board alleviated excessive time between tasks by setting a day plan for targeted tasks, and it also reduced time during tasks, as the workers knew the forthcoming tasks for the day. Being very specific about tasks, the Kanban board helped the workers become more focused and do their jobs with more precision. As a result, the Kanban board helped achieve an 8.75% increase in productivity over the level before it was implemented.
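
A minimal data-structure sketch of such a board (seven columns plus colour-coded cards) is given below; the column names and colour meanings are illustrative assumptions, not the ones used at Brothers Furniture Limited.

```python
# Hedged sketch of a minimal Kanban board model mirroring the setup described
# above: seven columns plus colour-coded cards. Column names and colour
# meanings are illustrative assumptions.

from dataclasses import dataclass, field

COLUMNS = ["Backlog", "Today's Plan", "Cutting", "Assembly",
           "Packing", "Ready for Store", "Comments"]

# Illustrative colour coding for urgency
COLOUR_MEANING = {"red": "urgent", "yellow": "due today", "green": "normal"}

@dataclass
class Card:
    task: str
    colour: str            # one of COLOUR_MEANING keys
    assignee: str = ""
    note: str = ""

@dataclass
class KanbanBoard:
    columns: dict[str, list[Card]] = field(
        default_factory=lambda: {name: [] for name in COLUMNS})

    def add(self, column: str, card: Card) -> None:
        self.columns[column].append(card)

    def move(self, task: str, src: str, dst: str) -> None:
        card = next(c for c in self.columns[src] if c.task == task)
        self.columns[src].remove(card)
        self.columns[dst].append(card)

board = KanbanBoard()
board.add("Today's Plan", Card("Pack order #A-102", colour="red", assignee="Line 3"))
board.move("Pack order #A-102", "Today's Plan", "Packing")
print({col: [c.task for c in cards] for col, cards in board.columns.items()})
```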

Keywords: color, Kanban Board, Lean Tool, literacy, packing, productivity

Procedia PDF Downloads 209