Search results for: large scale calorimeter

11110 Scaling Strategy of a New Experimental Rig for Wheel-Rail Contact

Authors: Meysam Naeimi, Zili Li, Rolf Dollevoet

Abstract:

A new small-scale test rig has been developed for rolling contact fatigue (RCF) investigations in wheel-rail materials. This paper presents the scaling strategy of the rig based on dimensional analysis and mechanical modelling. The new experimental rig is essentially a spinning frame structure with multiple wheel components running over a fixed rail-track ring, capable of simulating continuous wheel-rail contact at laboratory scale. This paper describes the dimensional design of the rig, derives its overall scaling strategy, and determines the specifications of its key elements. Finite element (FE) modelling is used to simulate the mechanical behavior of the rig with two sample scale factors of 1/5 and 1/7. The results of the FE models are compared with the actual railway system to assess the effectiveness of the chosen scales. The mechanical properties of the components and the variables of the system are finally determined through the design process.

Keywords: new test rig, rolling contact fatigue, rail, small scale

Procedia PDF Downloads 469
11109 Uses for Closed Coal Mines: Construction of Underground Pumped Storage Hydropower Plants

Authors: Javier Menéndez, Jorge Loredo

Abstract:

Large-scale energy storage systems (LSESS) such as pumped-storage hydropower (PSH) are required in the current energy transition towards a low-carbon economy based on green energies that produce low levels of greenhouse gas (GHG) emissions. Coal mines are currently being closed in the European Union, and their underground facilities may be used to build PSH plants. However, the development of these projects requires the excavation of a network of tunnels and a large cavern that would be used as a powerhouse to install the Francis turbine and motor-generator. The technical feasibility of excavating the powerhouse cavern has been analyzed in the north of Spain. Three-dimensional numerical models have been built to analyze the stability considering shale and sandstone rock masses. Total displacements and the thickness of plastic zones were examined considering different support systems. Systematic grouted rock bolts and fibre-reinforced shotcrete were applied at the cavern walls and roof. The results obtained show that the construction of the powerhouse is feasible if proper support systems are applied.

Keywords: closed mines, mine water, numerical model, pumped-storage, renewable energies

Procedia PDF Downloads 89
11108 Strategies for the Optimization of Ground Resistance in Large Scale Foundations for Optimum Lightning Protection

Authors: Oibar Martinez, Clara Oliver, Jose Miguel Miranda

Abstract:

In this paper, we discuss the standard improvements which can be made to reduce the earth resistance in difficult terrains for optimum lightning protection, what the practical limitations are, and how the modelling can be refined for accurate diagnostics and ground resistance minimization. Ground resistance minimization can be achieved via three different approaches: burying vertical electrodes connected in parallel, burying horizontal conductive plates or meshes, or modifying the terrain itself, either by replacing the terrain material in a large volume or by adding earth-enhancing compounds. The use of vertical electrodes connected in parallel poses several practical limitations. In order to prevent loss of effectiveness, it is necessary to keep a minimum distance between electrodes, typically around five times the electrode length. Otherwise, the overlapping of the local equipotential lines around each electrode reduces the efficiency of the configuration. The addition of parallel electrodes reduces the resistance and facilitates the measurement, but the basic parallel resistor formula of circuit theory will always underestimate the final resistance. Numerical simulation of the equipotential lines around the electrodes overcomes this limitation. The resistance of a single electrode will always be proportional to the soil resistivity. The electrodes are usually installed with a backfilling material of high conductivity, which increases the effective diameter. However, the improvement is marginal, since the electrode diameter enters the estimate of the ground resistance only through a logarithmic function. Substances used for efficient chemical treatment must be environmentally friendly and must feature stability, high hygroscopicity, low corrosivity, and high electrical conductivity. A number of earth-enhancement materials are commercially available. Many are composed of carbon-based materials or clays like bentonite. These materials can also be used as backfilling materials to reduce the resistance of an electrode. Chemical treatment of soil has environmental issues. Some products contain copper sulfate or other copper-based compounds, which may not be environmentally friendly. Carbon-based compounds are relatively inexpensive and have very low resistivities, but they also present corrosion issues: typically, the carbon can corrode and destroy a copper electrode in around five years. These compounds also raise potential environmental concerns. Some earthing enhancement materials contain cement, which, after installation, acquires properties very close to those of concrete; this prevents the earthing enhancement material from leaching into the soil. After analyzing different configurations, we conclude that a buried conductive ring with vertical electrodes connected periodically should be the optimum baseline solution for the grounding of a large structure installed on high-resistivity terrain. To show this, a practical example is presented in which we simulate the ground resistance of a conductive ring buried in a terrain with a resistivity in the range of 1 kOhm·m.
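For orientation, the logarithmic dependence and the parallel-combination caveat mentioned above can be made explicit with a commonly used first-order estimate (Dwight's formula for a single driven rod); the abstract does not state which expression the authors use, so the following is illustrative only:

```latex
% Approximate resistance of a single vertical rod of length L and diameter d
% driven into soil of resistivity rho (Dwight's formula)
R_{\mathrm{rod}} \approx \frac{\rho}{2\pi L}\left[\ln\!\left(\frac{8L}{d}\right) - 1\right]
% The naive circuit-theory combination of n identical rods is only a lower bound:
% mutual coupling between closely spaced rods raises the actual value
R_{n,\,\mathrm{actual}} \;\geq\; \frac{R_{\mathrm{rod}}}{n}
```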

Keywords: grounding improvements, large scale scientific instrument, lightning risk assessment, lightning standards

Procedia PDF Downloads 130
11107 An Optimization Model for the Arrangement of Assembly Areas Considering Time Dynamic Area Requirements

Authors: Michael Zenker, Henrik Prinzhorn, Christian Böning, Tom Strating

Abstract:

Large-scale products are often assembled according to the job-site principle, meaning that during assembly the product remains at a fixed position while the area requirements are constantly changing. On the one hand, the product itself grows with each assembly step; on the other hand, varying areas for storage, machines, or working space are temporarily required. This is an important factor when arranging the products to be assembled within the factory. Currently, it is common to reserve a fixed area for each product to avoid overlaps or collisions with the other assemblies. Since it must be large enough to contain the product and all adjacent areas, this reserved area corresponds to the superposition of the maximum extents of all required areas of the product. With this procedure, the reserved area is usually poorly utilized over the course of the entire assembly process; instead, a large part of it remains unused. If the available area is a limited resource, a systematic arrangement of the products that complies with the dynamic area requirements will lead to increased area utilization and productivity. This paper presents the results of a study on the arrangement of assembly objects under dynamic, competing area requirements. First, the problem situation is explained in detail, and existing research on associated topics is described and evaluated for its potential adaptation. Then, a newly developed mathematical optimization model is introduced. This model allows an optimal arrangement of dynamic areas, considering logical and practical constraints. Finally, in order to quantify the potential of the developed method, test series results are presented, showing the possible increase in area utilization.

Keywords: dynamic area requirements, facility layout problem, optimization model, product assembly

Procedia PDF Downloads 224
11106 Superconductor-Insulator Transition in Disordered Spin-1/2 Systems

Authors: E. Cuevas, M. Feigel'man, L. Ioffe, M. Mezard

Abstract:

The origin of a continuous energy spectrum in large disordered interacting quantum systems is one of the key unsolved problems in quantum physics. While small quantum systems with discrete energy levels are noiseless and stay coherent forever in the absence of any coupling to the external world, most large-scale quantum systems are able to produce a thermal bath, thermal transport, and excitation decay. This intrinsic decoherence is manifested by a broadening of energy levels, which acquire a finite width. The important question is: what is the driving force and mechanism of the transition(s) between the two types of many-body systems, with and without decoherence and thermal transport? Here, we address this question via two complementary approaches applied to the same model of a quantum spin-1/2 system with XY-type exchange interaction and a random transverse field. Namely, we develop an analytical theory for this spin model on a Bethe lattice and implement a numerical study of exact level statistics for the same spin model on a random graph. This spin model is relevant to the study of pseudogapped superconductivity and the superconductor-insulator (S-I) transition in some amorphous materials.
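For readers unfamiliar with this model class, a generic Hamiltonian of the type described above (XY exchange plus a random transverse field) can be written as follows; the precise couplings and field distribution used by the authors are not specified in the abstract:

```latex
H \;=\; \sum_{i} \varepsilon_i \, S_i^{z}
\;-\; \sum_{\langle i j \rangle} J_{ij}\left( S_i^{x} S_j^{x} + S_i^{y} S_j^{y} \right)
```

Here the on-site fields ε_i are random and the J_ij are the XY exchange couplings defined on the Bethe lattice or random graph.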

Keywords: strongly correlated electrons, quantum phase transitions, superconductor, insulator

Procedia PDF Downloads 575
11105 Mathematical Modeling of the Fouling Phenomenon in Ultrafiltration of Latex Effluent

Authors: Amira Abdelrasoul, Huu Doan, Ali Lohi

Abstract:

An efficient and well-planned ultrafiltration process is becoming a necessity for monetary returns in industrial settings. The aim of the present study was to develop a mathematical model for accurate prediction of ultrafiltration membrane fouling by latex effluent, applied to homogeneous and heterogeneous membranes with uniform and non-uniform pore sizes, respectively. Models were also developed for accurate prediction of power consumption that can be applied at large scale. The model incorporated the fouling attachments as well as the chemical and physical factors in membrane fouling for accurate prediction and scale-up application. Both polycarbonate and polysulfone flat membranes, with a pore size of 0.05 µm and a molecular weight cut-off of 60,000, respectively, were used under a constant feed flow rate and a cross-flow mode in ultrafiltration of the simulated paint effluent. Furthermore, hydrophilic Ultrafilic and hydrophobic PVDF membranes with an MWCO of 100,000 were used to test the reliability of the models. Monodisperse particles of 50 nm and 100 nm in diameter, and a latex effluent with a wide range of particle size distributions, were utilized to validate the models. The aggregation and the sphericity of the particles had a significant effect on membrane fouling.

Keywords: membrane fouling, mathematical modeling, power consumption, attachments, ultrafiltration

Procedia PDF Downloads 466
11104 Failure Simulation of Small-Scale Walls with Chases Using the Lattice Discrete Element Method

Authors: Karina C. Azzolin, Luis E. Kosteski, Alisson S. Milani, Raquel C. Zydeck

Abstract:

This work aims to numerically reproduce tests carried out experimentally on reduced-scale walls with horizontal and inclined chases, using the Lattice Discrete Element Method (LDEM) implemented in the Abaqus/Explicit environment. The cuts were made at depths of 20%, 30%, and 50% in walls subjected to centered and eccentric loading. The parameters used to evaluate the numerical model are its strength, the failure mode, and the in-plane and out-of-plane displacements.

Keywords: structural masonry, wall chases, small scale, numerical model, lattice discrete element method

Procedia PDF Downloads 171
11103 Precise CNC Machine for Multi-Tasking

Authors: Haroon Jan Khan, Xian-Feng Xu, Syed Nasir Shah, Anooshay Niazi

Abstract:

CNC machines are not only used on a large scale but have also become a prominent necessity among households and smaller businesses. Manufacturing printed circuit boards (PCBs) by the chemical process is not only risky and unsafe but also expensive and time-consuming. A 3-axis precise CNC machine has been developed that not only fabricates PCBs but can also be used for multiple tasks simply by changing the material and tool, making it versatile. The advanced CNC machine takes data from CAM software. The TB-6560 controller is used in the CNC machine to drive motion along the X, Y, and Z axes. The advanced machine performs automatic drilling, engraving, and cutting efficiently.
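To illustrate the kind of toolpath data such a machine consumes, the sketch below generates a minimal G-code drilling program for a list of hole positions. It is a hypothetical example, not the authors' software: the function name, feed rate, depth, spindle speed, and safe height are placeholder values.

```python
def drill_program(holes, depth=-1.6, safe_z=5.0, feed=100):
    """Emit a minimal G-code drilling program for (x, y) hole positions.

    Values are illustrative only (millimetre units, arbitrary feed/depth).
    """
    lines = [
        "G21",        # millimetre units
        "G90",        # absolute positioning
        "M3 S10000",  # spindle on, placeholder speed
        f"G00 Z{safe_z}",
    ]
    for x, y in holes:
        lines.append(f"G00 X{x:.3f} Y{y:.3f}")      # rapid move above the hole
        lines.append(f"G01 Z{depth:.3f} F{feed}")   # plunge at feed rate
        lines.append(f"G00 Z{safe_z}")              # retract
    lines += ["M5", "M30"]  # spindle off, program end
    return "\n".join(lines)

if __name__ == "__main__":
    print(drill_program([(10.0, 10.0), (20.0, 10.0), (20.0, 20.0)]))
```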

Keywords: CNC, G-code, CAD, CAM, Proteus, FLATCAM, Easel

Procedia PDF Downloads 150
11102 Managing HR Knowledge in a Large Privately Owned Enterprise: An Empirical Case Analysis

Authors: Cindy Wang-Cowham, Judy Ningyu Tang

Abstract:

The paper contributes towards the development of the scarce literature on HR knowledge management. Drawing on the knowledge management literature, the authors define the meaning of HR knowledge and propose that there are social mechanisms in organizations that facilitate the management and sharing of HR knowledge. Instead of investigating the subject in large multinational corporations, the present paper examines it in a large Chinese privately owned enterprise with an international standing. The main finding of the case analysis is that communication and feedback play a pivotal role in managing HR knowledge. Social mechanisms can stimulate communication and feedback between employees and thus facilitate knowledge exchange.

Keywords: HR knowledge, knowledge management, large privately owned enterprises, China

Procedia PDF Downloads 522
11101 Flood Monitoring in the Vietnamese Mekong Delta Using Sentinel-1 SAR with Global Flood Mapper

Authors: Ahmed S. Afifi, Ahmed Magdy

Abstract:

Satellite monitoring is an essential tool to study, understand, and map large-scale environmental changes that affect humans, climate, and biodiversity. The Sentinel-1 Synthetic Aperture Radar (SAR) instrument provides a rich collection of data with all-weather capability, short revisit time, and high spatial resolution that can be used effectively in flood management. Floods occur when an overflow of water submerges normally dry land, and flooded areas must be distinguished from dry land. In this study, we use the Global Flood Mapper (GFM), a new Google Earth Engine application that allows users to quickly map floods using Sentinel-1 SAR. The GFM enables users to manually adjust the flood-map parameters, e.g., the threshold on the Z-value for the VV and VH bands and the elevation and slope mask thresholds. The composite R:G:B image obtained by combining the Sentinel-1 bands (VH:VV:VH) reduces false classification to a large extent compared to using a single band (e.g., the VH polarization band). The flood-mapping algorithm in the GFM and Otsu thresholding are compared against Sentinel-2 optical data, and the results show that the GFM algorithm can overcome the misclassification of a flooded area in An Giang, Vietnam.
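To make the thresholding step concrete, the sketch below implements Otsu's method on a SAR backscatter array with NumPy. This is a generic illustration, not the GFM source code; the synthetic VH values and the assumption that low backscatter marks open water are illustrative assumptions.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return the threshold that maximises between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist.astype(float) / hist.sum()

    w0 = np.cumsum(p)              # cumulative weight of the "below" class
    w1 = 1.0 - w0                  # weight of the "above" class
    mu = np.cumsum(p * centers)    # cumulative mean
    mu_total = mu[-1]

    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_total * w0 - mu) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return centers[np.argmax(between)]

# Example: flag low VH backscatter (in dB) as water on a synthetic scene
vh_db = np.concatenate([np.random.normal(-22, 1.5, 5000),    # open water
                        np.random.normal(-12, 2.0, 15000)])  # land
t = otsu_threshold(vh_db)
water_mask = vh_db < t
print(f"Otsu threshold: {t:.1f} dB, water fraction: {water_mask.mean():.2f}")
```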

Keywords: SAR backscattering, Sentinel-1, flood mapping, disaster

Procedia PDF Downloads 97
11100 Towards a Large Scale Deep Semantically Analyzed Corpus for Arabic: Annotation and Evaluation

Authors: S. Alansary, M. Nagi

Abstract:

This paper presents an approach to the semantic annotation of an Arabic corpus using the Universal Networking Language (UNL) framework. UNL is intended to be a promising strategy for providing a large collection of semantically annotated texts with formal, deep semantics rather than shallow ones. The result constitutes a semantic resource (semantic graphs) that is editable and that integrates various phenomena, including predicate-argument structure, scope, tense, thematic roles, and rhetorical relations, into a single semantic formalism for knowledge representation. The paper also presents the Interactive Analysis tool for automatic semantic annotation (IAN). In addition, the cornerstones of the proposed methodology, the disambiguation and transformation rules, are presented. Semantic annotation using UNL has been applied to a corpus of 20,000 Arabic sentences representing the most frequent structures in the Arabic Wikipedia. The representation at different linguistic levels is illustrated, starting from the morphological level, passing through the syntactic level, until the semantic representation is reached. The output has been evaluated using the F-measure and is 90% accurate. This demonstrates how powerful the formal environment is, as it enables intelligent text processing and search.
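For completeness, the evaluation metric quoted above is the standard balanced F-measure, computed from the precision P and recall R of the automatic annotations against a gold standard:

```latex
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}, \qquad
F = \frac{2\,P\,R}{P + R} \approx 0.90
```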

Keywords: semantic analysis, semantic annotation, Arabic, universal networking language

Procedia PDF Downloads 578
11099 Triggering Supersonic Boundary-Layer Instability by Small-Scale Vortex Shedding

Authors: Guohua Tu, Zhi Fu, Zhiwei Hu, Neil D Sandham, Jianqiang Chen

Abstract:

Tripping boundary layers from laminar to turbulent flow, which may be necessary in specific practical applications, requires high-amplitude disturbances to be introduced into the boundary layers without large drag penalties. As a possible improvement on fixed trip devices, a technique based on vortex shedding for enhancing supersonic flow transition is demonstrated in the present paper for a Mach 1.5 boundary layer. The compressible Navier-Stokes equations are solved directly using a high-order (fifth-order in space and third-order in time) finite difference method for small-scale cylinders suspended transversely near the wall. For cylinders with a proper diameter and mounting location, asymmetric vortices shed within the boundary layer are capable of tripping laminar-turbulent transition. Full three-dimensional simulations showed that transition was enhanced. A parametric study of the size and mounting location of the cylinder is carried out to identify the most effective setup. It is also found that the vortex shedding can be suppressed by some factors, such as the wall effect.
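The abstract does not name the exact spatial scheme; for illustration, one commonly used fifth-order upwind-biased finite-difference approximation of the streamwise derivative (for a positive advection velocity) is:

```latex
\left.\frac{\partial f}{\partial x}\right|_{i} \approx
\frac{-2 f_{i-3} + 15 f_{i-2} - 60 f_{i-1} + 20 f_{i} + 30 f_{i+1} - 3 f_{i+2}}{60\,\Delta x}
```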

Keywords: boundary layer instability, boundary layer transition, vortex shedding, supersonic flows, flow control

Procedia PDF Downloads 358
11098 The Derivation of a Four-Strain Optimized Mohr's Circle for Use in Experimental Reinforced Concrete Research

Authors: Edvard P. G. Bruun

Abstract:

One of the best ways of improving our understanding of reinforced concrete is through large-scale experimental testing. The gathered information is critical in making inferences about structural mechanics and deriving the mathematical models that are the basis for finite element analysis programs and design codes. An effective way of measuring the strains across a region of a specimen is by using a system of surface-mounted Linear Variable Differential Transformers (LVDTs). While a single LVDT can only measure the linear strain in one direction, by combining several measurements at known angles, a Mohr's circle of strain can be derived for the whole region under investigation. This paper presents a method, usable by researchers, which improves the accuracy and removes experimental bias in the calculation of the Mohr's circle by using four rather than three independent strain measurements. Obtaining high-quality strain data is essential, since the angular deviation (shear strain) and the angle of principal strain in the region are important properties in characterizing the governing structural mechanics. For example, the Modified Compression Field Theory (MCFT), developed at the University of Toronto, is a rotating crack model that requires knowing the direction of the principal stress and strain and then calculates the average secant stiffness in this direction. But since LVDTs can only measure average strains across a plane (i.e., between discrete points), the localized cracking and spalling that typically occur in reinforced concrete can lead to unrealistic results. To build in redundancy and improve the quality of the data gathered, the typical experimental setup for a large-scale shell specimen has four independent directions (X, Y, H, and V) that are instrumented. The question then becomes: which three should be used? The most common approach is to simply discard one of the measurements. The problem is that this can produce drastically different answers depending on the three strain values that are chosen. To overcome this experimental bias, and to avoid discarding valuable data, a more rigorous approach is to make use of all four measurements. This paper presents the derivation of a method to draw what is effectively a Mohr's circle of 'best fit', which optimizes the circle by using all four independent strain values. The four-strain optimized Mohr's circle approach has been used to process data from recent large-scale shell tests at the University of Toronto (Ruggiero, Proestos, and Bruun), where analysis of the test data has shown that the traditional three-strain method can lead to widely different results. This paper presents the derivation of the method and shows its application in the context of two reinforced concrete shells tested in pure torsion. In general, the constitutive models and relationships that characterize reinforced concrete are only as good as the experimental data that are gathered; ensuring that a rigorous and unbiased approach exists for calculating the Mohr's circle of strain during an experiment is of utmost importance to the structural research community.
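A minimal sketch of the four-strain least-squares idea is given below, assuming for illustration that the X, Y, H, and V gauges lie at 0°, 90°, 45°, and 135°; the paper's actual derivation, gauge layout, and readings may differ.

```python
import numpy as np

def mohr_circle_from_strains(readings_deg):
    """Least-squares fit of (eps_x, eps_y, gamma_xy) from normal strains
    measured at several known angles, then the best-fit Mohr's circle.

    readings_deg: list of (angle_in_degrees, measured_normal_strain)
    """
    th = np.radians([a for a, _ in readings_deg])
    eps = np.array([e for _, e in readings_deg])
    # eps(theta) = eps_x*cos^2(th) + eps_y*sin^2(th) + gamma_xy*sin(th)*cos(th)
    A = np.column_stack([np.cos(th)**2, np.sin(th)**2, np.sin(th) * np.cos(th)])
    (eps_x, eps_y, gamma_xy), *_ = np.linalg.lstsq(A, eps, rcond=None)

    center = 0.5 * (eps_x + eps_y)
    radius = np.hypot(0.5 * (eps_x - eps_y), 0.5 * gamma_xy)
    theta_p = 0.5 * np.degrees(np.arctan2(gamma_xy, eps_x - eps_y))
    return (eps_x, eps_y, gamma_xy), (center + radius, center - radius, theta_p)

# Hypothetical gauge layout X=0, Y=90, H=45, V=135 degrees (strains in milli-strain)
state, principal = mohr_circle_from_strains(
    [(0, 1.20), (90, -0.40), (45, 0.95), (135, -0.10)])
print("eps_x, eps_y, gamma_xy:", state)
print("eps_1, eps_2, principal angle (deg):", principal)
```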

Keywords: reinforced concrete, shell tests, Mohr’s circle, experimental research

Procedia PDF Downloads 232
11097 Computer Modeling and Plant-Wide Dynamic Simulation for Industrial Flare Minimization

Authors: Sujing Wang, Song Wang, Jian Zhang, Qiang Xu

Abstract:

Flaring emissions during abnormal operating conditions such as plant start-ups, shut-downs, and upsets in the chemical process industries (CPI) are usually significant. Flare minimization can help CPI plants save raw material and energy and improve local environmental sustainability. In this paper, a systematic methodology based on plant-wide dynamic simulation is presented for CPI plant flare minimization under abnormal operating conditions. Since off-specification emission sources are inevitable during abnormal operating conditions, to significantly reduce flaring emissions in a CPI plant they must be either recycled to the upstream process for online reuse or stored temporarily for future reprocessing once the plant returns to stable operation. Thus, the off-spec products can be reused instead of being flared. This can be achieved through the identification of viable design and operational strategies during normal and abnormal operations via plant-wide dynamic scheduling, simulation, and optimization. The proposed study includes three stages of simulation work: (i) developing and validating a steady-state model of a CPI plant; (ii) transferring the obtained steady-state plant model to the dynamic modeling environment, then refining and validating the plant dynamic model; and (iii) developing flare minimization strategies for abnormal operating conditions of a CPI plant via the validated plant-wide dynamic model. This cost-effective methodology has two main merits: (i) it employs large-scale dynamic modeling and simulation for industrial flare minimization, involving various unit models for modeling hundreds of CPI plant facilities; (ii) it deals with critical abnormal operating conditions of CPI plants, such as plant start-up and shut-down. Two virtual case studies on flare minimization for the start-up operation (over 50% emission savings) and shut-down operation (over 70% emission savings) of an ethylene plant demonstrate the efficacy of the proposed study.

Keywords: flare minimization, large-scale modeling and simulation, plant shut-down, plant start-up

Procedia PDF Downloads 314
11096 Pricing Effects on Equitable Distribution of Forest Products and Livelihood Improvement in Nepalese Community Forestry

Authors: Laxuman Thakuri

Abstract:

Despite the large number of in-depth case studies focused on policy analysis, institutional arrangements, and collective action in common property resource management, the questions of how local institutions make pricing decisions for forest products in community forest management, and what effects these decisions produce, remain largely unanswered among policy-makers and researchers alike. The study examined how local institutions make pricing decisions for forest products in the lowland community forestry of Nepal and how these decisions affect the equitable distribution of benefits and livelihood improvement, which are also objectives of Nepalese community forestry. The study assumes that forest product pricing decisions have multiple effects on equitable distribution and livelihood improvement in areas with heterogeneous socio-economic conditions. The work was carried out in four community forests of lowland Nepal characterized by high-value species, mature experience of community forest management, and a good record-keeping system for forest product production, pricing, and distribution. A questionnaire survey, individual and group discussions, and direct field observation were applied for data collection in the field, and the Lorenz curve, Gini coefficient, χ²-test, and SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis were performed for data analysis and interpretation of results. The study demonstrates that the low-pricing strategy for high-value forest products was assumed to be crucial for increasing the access of socio-economically weak households to, and their control over, important forest products such as timber, but proved counterproductive, as the strategy increased the access of socio-economically better-off households at a higher rate. In addition, the strategy works against collecting a large community fund and carrying out livelihood improvement activities in line with the community forestry objectives. A crucial finding is that, despite the low-pricing strategy, timber alone contributed a large part of the community fund. The results reveal a close relation between pricing decisions and livelihood objectives. The action research results show that positive price discrimination can slightly reduce the prevailing inequality and increase the fund; however, it still fails to harness the full price of forest products and to collect a large community fund. For broader outcomes of common property resource management in terms of resource sustainability, equity, and livelihood opportunity, the study suggests that local institutions harness the full price of forest products with respect to the local market.
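As a pointer to the inequality measures named above, the sketch below computes a Gini coefficient from household-level benefit values using the mean-absolute-difference formula; the household figures are invented for illustration and are not the study's data.

```python
import numpy as np

def gini(values):
    """Gini coefficient via mean absolute difference: G = sum|xi - xj| / (2 n^2 mu)."""
    x = np.asarray(values, dtype=float)
    n, mu = x.size, x.mean()
    diff_sum = np.abs(x[:, None] - x[None, :]).sum()
    return diff_sum / (2.0 * n * n * mu)

# Hypothetical annual forest-product benefits for ten households (arbitrary units)
benefits = [1200, 800, 650, 3000, 500, 450, 2500, 700, 900, 5200]
print(f"Gini coefficient: {gini(benefits):.3f}")   # 0 = perfect equality, 1 = maximal inequality
```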

Keywords: community, equitable, forest, livelihood, socioeconomic, Nepal

Procedia PDF Downloads 530
11095 Multiscale Computational Approach to Enhance the Understanding, Design and Development of CO₂ Catalytic Conversion Technologies

Authors: Agnieszka S. Dzielendziak, Lindsay-Marie Armstrong, Matthew E. Potter, Robert Raja, Pier J. A. Sazio

Abstract:

Reducing carbon dioxide, CO₂, is one of the greatest global challenges. Conversion of CO₂ for utilisation across the synthetic fuel, pharmaceutical, and agrochemical industries offers a promising option, yet requires significant research to understand the complex multiscale processes involved. Experimentally understanding and optimizing such processes at the catalytic sites, and exploring the impact of the process at reactor scale, is too expensive. Computational methods offer significant insight and flexibility but require a more detailed multi-scale approach, which is a significant challenge in itself. This work introduces a computational approach which incorporates detailed catalytic models, taken from experimental investigations, into a larger-scale computational fluid dynamics framework. The reactor-scale species transport approach is modified near the catalytic walls to determine the influence of catalytic clustering regions. This coupling approach enables more accurate modelling of velocities, pressures, temperatures, species concentrations, and near-wall surface characteristics, which will ultimately enable the impact of overall reactor design on chemical conversion performance to be assessed.

Keywords: catalysis, CCU, CO₂, multi-scale model

Procedia PDF Downloads 245
11094 A Polynomial Approach for a Graph-Based Integrated Production and Transport Scheduling with Capacity Restrictions

Authors: M. Ndeley

Abstract:

The performance of global manufacturing supply chains depends on the interaction of production and transport processes. Currently, the scheduling of these processes is done separately without considering mutual requirements, which leads to suboptimal solutions. An integrated scheduling of both processes enables the improvement of supply chain performance. The integrated production and transport scheduling problem (PTSP) is NP-hard, so heuristic methods are necessary to efficiently solve large problem instances, as in the case of global manufacturing supply chains. This paper presents a heuristic scheduling approach which handles the integration of flexible production processes with intermodal transport, incorporating flexible land transport. The method is based on a graph that allows a reformulation of the PTSP as a shortest path problem for each job, which can be solved in polynomial time. The proposed method is applied to a supply chain scenario with a manufacturing facility in South Africa and shipments of finished product to customers within the country. The results obtained show that the approach is suitable for the scheduling of large-scale problems and can be flexibly adapted to different scenarios.
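The polynomial-time core of the approach is a shortest-path computation per job on the constructed graph. The sketch below shows a standard Dijkstra implementation of that step; the toy graph, node names, and weights are placeholders rather than the paper's actual formulation.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path costs from source on a weighted digraph given as
    {node: [(neighbour, weight), ...]}.  Returns {node: cost}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy job graph: nodes are hypothetical (operation, resource/transport-mode) states
job_graph = {
    "start": [("op1@lineA", 3.0), ("op1@lineB", 2.5)],
    "op1@lineA": [("ship@road", 4.0)],
    "op1@lineB": [("ship@road", 5.0), ("ship@rail", 3.0)],
    "ship@road": [("delivered", 1.0)],
    "ship@rail": [("delivered", 2.0)],
}
print(dijkstra(job_graph, "start")["delivered"])   # cheapest schedule cost for this job
```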

Keywords: production and transport scheduling problem, graph based scheduling, integrated scheduling

Procedia PDF Downloads 468
11093 Lipid Extraction from Microbial Cell by Electroporation Technique and Its Influence on Direct Transesterification for Biodiesel Synthesis

Authors: Abu Yousuf, Maksudur Rahman Khan, Ahasanul Karim, Amirul Islam, Minhaj Uddin Monir, Sharmin Sultana, Domenico Pirozzi

Abstract:

Traditional biodiesel feedstocks like edible or plant oils, animal fats, and waste cooking oil have been replaced by microbial oil in recent biodiesel synthesis research. The well-known community of microbial oil producers includes microalgae, oleaginous yeasts, and seaweeds. Conventional transesterification of microbial oil to produce biodiesel is slow, energy-consuming, cost-ineffective, and environmentally unfriendly. The process follows several steps: microbial biomass drying, cell disruption, oil extraction, solvent recovery, oil separation, and transesterification. Therefore, direct transesterification for biodiesel synthesis has been studied in recent years. It combines all the steps in a single reactor and eliminates the steps of biomass drying, oil extraction, and separation from solvent. It appears to be a cost-effective and faster process, but a number of difficulties need to be solved to make it applicable at large scale. The main challenges are disrupting microbial cells in bulk volume and speeding up the esterification reaction, because the water content of the medium slows the reaction rate. Several methods have been proposed, but none of them is ready to be implemented at large scale. It is still a great challenge to extract the maximum lipid from microbial cells (yeast, fungi, algae) while investing minimum energy. The electroporation technique results in a significant increase in cell conductivity and permeability caused by the application of an external electric field. Electroporation is required to alter the size and structure of the cells to increase their porosity and to disrupt the microbial cell walls within a few seconds so that the intracellular lipid leaks out into the solution. Therefore, incorporating electroporation contributes to the direct transesterification of microbial lipids by increasing the biodiesel production rate.

Keywords: biodiesel, electroporation, microbial lipids, transesterification

Procedia PDF Downloads 274
11092 Green Processing of PS/Nanoparticle Fibers and Study of Their Morphology and Properties

Authors: M. Kheirandish, S. Borhani

Abstract:

In this experiment, polystyrene/zinc oxide (PS/ZnO) nanocomposite fibers were produced by the electrospinning technique using limonene as a green solvent. First, the morphology of electrospun pure polystyrene (PS) and PS/ZnO nanocomposite fibers was investigated by SEM. The results showed that the PS fiber diameter decreased with increasing concentration of zinc oxide nanoparticles (ZnO NPs). Thermogravimetric analysis (TGA) results showed that the thermal stability of the nanocomposites increased with increasing ZnO NP content in the PS electrospun fibers. Differential scanning calorimetry (DSC) thermograms of the electrospun PS fibers indicated that the introduction of ZnO NPs into the fibers reduces the glass transition temperature (Tg). Also, the UV protection properties of the nanocomposite fibers improved with increasing ZnO concentration. Evaluating the effect of the metal oxide NP content on the mechanical properties of the electrospun layer showed that the tensile strength and elastic modulus of the electrospun PS layer increased with the addition of ZnO NPs. The X-ray diffraction (XRD) patterns of the nanocomposite fibers confirmed the presence of NPs in the samples.

Keywords: electrospinning, nanoparticle, polystyrene, ZnO

Procedia PDF Downloads 233
11091 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings

Authors: Jude K. Safo

Abstract:

Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning (relative to image, text, and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we also observe in Large Language Models. Notable attempts such as TransE, TransR, and other prominent industry standards have shown peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited in scope to next-node/link prediction. Traditional linear methods like Tucker, CP, PARAFAC, and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for a linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures. We demonstrate the performance this model achieves on the WN18 benchmark. This model does not rely on Large Language Models (LLMs), though the applications are certainly relevant there as well.
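For context on the TransE baseline mentioned above, it scores a candidate triple (h, r, t) by how well the relation vector translates the head embedding onto the tail embedding, and it is trained with a margin-based ranking loss over corrupted triples:

```latex
f(h, r, t) = \lVert \mathbf{h} + \mathbf{r} - \mathbf{t} \rVert_{p}, \qquad
\mathcal{L} = \sum_{(h, r, t)} \sum_{(h', r, t')}
\max\bigl(0,\ \gamma + f(h, r, t) - f(h', r, t')\bigr)
```

Here p is 1 or 2, γ is the margin, and (h', r, t') are corrupted (negative) triples.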

Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics

Procedia PDF Downloads 63
11090 Satellite Imagery Classification Based on Deep Convolution Network

Authors: Zhong Ma, Zhuping Wang, Congxin Liu, Xiangzeng Liu

Abstract:

Satellite imagery classification is a challenging problem with many practical applications. In this paper, we designed a deep convolutional neural network (DCNN) to classify satellite imagery. The contributions of this paper are twofold. First, to cope with the large-scale variance in satellite images, we introduced the inception module, which has multiple filters of different sizes at the same level, as the building block of our DCNN model. Second, we proposed a genetic algorithm based method to efficiently search for the best hyper-parameters of the DCNN in a large search space. The proposed method is evaluated on a benchmark database. The results of the proposed hyper-parameter search method show that it guides the search towards better regions of the parameter space. Based on the found hyper-parameters, we built our DCNN models and evaluated their performance on satellite imagery classification; the results show that the classification accuracy of the proposed models outperforms the state-of-the-art method.
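A minimal sketch of a genetic-algorithm hyper-parameter search of the kind described above follows; the search space, GA settings, and the fitness function (which in practice would train and validate the DCNN on the satellite-imagery benchmark) are illustrative placeholders, not the authors' configuration.

```python
import random

SPACE = {                       # hypothetical DCNN hyper-parameter space
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [32, 64, 128],
    "inception_blocks": [2, 3, 4, 5],
    "dropout": [0.3, 0.4, 0.5],
}

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(ind):
    """Placeholder: in practice, train the DCNN with these settings and
    return the validation accuracy."""
    return random.random()

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(ind, rate=0.2):
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

def evolve(pop_size=12, generations=10, elite=2):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)   # best first
        parents = scored[:pop_size // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - elite)]
        pop = scored[:elite] + children                   # elitism + offspring
    return max(pop, key=fitness)

print(evolve())
```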

Keywords: satellite imagery classification, deep convolution network, genetic algorithm, hyper-parameter optimization

Procedia PDF Downloads 292
11089 Two Efficient Heuristic Algorithms for the Integrated Production Planning and Warehouse Layout Problem

Authors: Mohammad Pourmohammadi Fallah, Maziar Salahi

Abstract:

In the literature, a mixed-integer linear programming model for the integrated production planning and warehouse layout problem has been proposed. To solve the model, the authors proposed a Lagrangian relax-and-fix heuristic that takes a significant amount of time and stops with gaps above 5% for large-scale instances. Here, we present two heuristic algorithms to solve the problem. In the first one, we use a greedy approach that allocates warehouse locations with lower reservation costs, and also lower transportation costs from the production area to the locations and from the locations to the output point, to items with higher demands; then a smaller model is solved. In the second heuristic, we first sort items in descending order according to the ratio of the sum of the demands for that item in the time horizon plus the maximum demand for that item in the time horizon, to the sum of all its demands in the time horizon. Then we categorize the sorted items into groups of 3, 4, or 5 and solve a small-scale optimization problem for each group, hoping to improve the solution of the first heuristic. Our preliminary numerical results show the effectiveness of the proposed heuristics.

Keywords: capacitated lot-sizing, warehouse layout, mixed-integer linear programming, heuristic algorithms

Procedia PDF Downloads 190
11088 Thermal and Flammability Properties of Paraffin/Nanoclay Composite Phase Change Materials Incorporated in Building Materials for Thermal Energy Storage

Authors: Awni H. Alkhazaleh, Baljinder K. Kandola

Abstract:

In this study, a form-stable paraffin/nanoclay (PA-NC) composite has been prepared by absorbing PA into porous particles of NC for use in low-temperature latent heat thermal energy storage. A leakage test shows that the maximum mass fraction of PA that can be incorporated in NC without leakage is 60 wt.%. Differential scanning calorimetry (DSC) has been used to measure the thermal properties of the PA and PA-NC both before and after incorporation in plasterboard (PL). The mechanical performance of the samples has been evaluated in flexural mode. The thermal energy storage performance has been studied using a small test chamber (100 mm × 100 mm × 100 mm) made from 10 mm thick PL, with temperatures measured using thermocouples. The flammability of the PL + PA-NC has been assessed using a cone calorimeter. The results indicate that the form-stable composite PA has good potential for use as a thermal energy storage material in building applications.

Keywords: building materials, flammability, phase change materials, thermal energy storage

Procedia PDF Downloads 326
11087 Evaluating the Challenges of Large Scale Urban Redevelopment Projects for Central Government Employee Housing in Delhi

Authors: Parul Kapoor, Dheeraj Bhardwaj

Abstract:

Delhi and other Indian cities accommodate thousands of Central Government employees in housing complexes called 'General Pool Residential Accommodation' (GPRA), located on prime parcels of the city. These residential colonies are now undergoing redevelopment at a massive scale, significantly impacting the ecology of the surrounding areas. Essentially, these colonies were low-rise, low-density planned developments with a dense tree cover and minimal parking requirements. But with increasing urbanisation and a spike in parking demand, the proposed built form is an aggregate of high-rise gated complexes redefining the skyline of the city, a huge departure from the modest setup of low-rise walk-up apartments. The complexity of these developments is further aggravated by the need for parking, which necessitates cutting a huge number of trees to accommodate multiple layers of parking beneath the structures, thus sidelining the authentic character of these areas, which is defined by their dense tree cover. The aftermath of this whole process is a huge carbon footprint imposed on the surrounding areas, which goes unaccounted for in planning and design practice. These developments are currently planned as mixed-use compounds with large commercial built-up spaces that have additional parking requirements over and above the residential parking. Also, they are perceived as gated complexes rather than as neighborhood units, and thus project isolated images of high-rise, dense systems with little relation to their surroundings. The paper analyzes case studies of GPRA Redevelopment projects in Delhi and the lack of relevant development control regulations, which have led to abnormalities and complications in the entire redevelopment process. It also suggests policy guidelines that can establish comprehensive codes for the effective planning of these settlements.

Keywords: gated complexes, GPRA Redevelopment projects, increased densities, huge carbon footprint, mixed-use development

Procedia PDF Downloads 120
11086 Development of Wide Bandgap Semiconductor Based Particle Detector

Authors: Rupa Jeena, Pankaj Chetry, Pradeep Sarin

Abstract:

The study of fundamental particles and the forces governing them has always been an attractive field of theoretical study. With the advancement and development of new technologies and instruments, it is now possible to perform particle physics experiments on a large scale for the validation of theoretical predictions. These experiments are generally carried out in a highly intense beam environment. This, in turn, requires the development of a detector prototype possessing properties like radiation tolerance, thermal stability, and fast timing response. Semiconductors like silicon, germanium, diamond, and gallium nitride (GaN) have been widely used for particle detection applications. Silicon and germanium, being narrow-bandgap semiconductors, require pre-cooling to suppress the noise from thermally generated intrinsic charge carriers. The application of diamond in large-scale experiments is rare owing to its high cost of fabrication, while GaN is one of the most extensively explored potential candidates. We aim to introduce another wide-bandgap semiconductor into this active area of research, taking all of these requirements into account. We have made an attempt to exploit the wide bandgap of rutile titanium dioxide (TiO2), along with its other properties, for particle detection purposes. The thermal evaporation-oxidation technique (in a PID furnace) is used for the deposition of the film, and the metal-semiconductor-metal (MSM) electrical contacts are made using titanium+gold (Ti+Au) (20/80 nm). Characterization comprising X-ray diffraction (XRD), atomic force microscopy (AFM), ultraviolet (UV)-visible spectroscopy, and laser Raman spectroscopy (LRS) has been performed on the film to get detailed information about the surface morphology. On the other hand, electrical characterizations, such as current-voltage (IV) measurements in dark and light conditions and tests with a laser, are performed to better understand the working of the detector prototype. All these preliminary tests of the detector will be presented.

Keywords: particle detector, rutile titanium dioxide, thermal evaporation, wide bandgap semiconductors

Procedia PDF Downloads 73
11085 Pilot Scale Sub-Surface Constructed Wetland: Evaluation of Performance of Bed Vegetated with Water Hyacinth in the Treatment of Domestic Sewage

Authors: Abdul-Hakeem Olatunji Abiola, A. E. Adeniran, A. O. Ajimo, A. B. Lamilisa

Abstract:

Introduction: Conventional wastewater treatment technology has been found to fail in developing countries because it is expensive to construct, operate, and maintain. Constructed wetlands are nowadays considered a low-cost alternative for effective wastewater treatment, especially where suitable land is available. This study aims to evaluate the performance of a constructed wetland vegetated with water hyacinth (Eichhornia crassipes) for the treatment of wastewater. Methodology: The sub-surface flow wetland used for this study was an experimental-scale constructed wetland consisting of four beds A, B, C, and D. Beds A, B, and D were vegetated, while bed C, which was used as a control, was non-vegetated. The present study presents the results from bed B, vegetated with water hyacinth (Eichhornia crassipes), and the non-vegetated control bed C. The influent of the experimental-scale wetland had been pre-treated by sedimentation, screening, and an anaerobic chamber before being fed into the experimental-scale wetland. Results: pH and conductivity were reduced more, effluent colour improved more, nitrate, iron, phosphate, and chromium were removed more, and dissolved oxygen improved more in the water hyacinth bed than in the control bed. Manganese, nickel, cyanuric acid, and copper, in contrast, were removed more in the control bed than in the water hyacinth bed. Conclusion: The performance of the experimental-scale constructed wetland bed planted with water hyacinth (Eichhornia crassipes) is better than that of the control bed. It is therefore recommended that plain beds without any plants not be used.

Keywords: constructed experimental scale wetland, domestic sewage, treatment, water hyacinth

Procedia PDF Downloads 127
11084 Rapid Identification of Thermophilic Campylobacter Species from Retail Poultry Meat Using Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry

Authors: Graziella Ziino, Filippo Giarratana, Stefania Maria Marotta, Alessandro Giuffrida, Antonio Panebianco

Abstract:

In Europe, North America, and Japan, campylobacteriosis is one of the leading food-borne bacterial illnesses, often related to the consumption of poultry meat and/or by-products. The aim of this study was to evaluate the Campylobacter contamination of poultry meat marketed in Sicily (Italy) using both traditional methods and matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS). MALDI-TOF MS is considered a promising rapid (less than 1 hour) identification method for food-borne pathogenic bacteria. One hundred chicken and turkey meat preparations (68 hamburgers, 21 raw sausages, 4 meatballs, and 7 meat rolls) were taken from different butcher's shops and large-scale retailers and submitted to detection/enumeration of Campylobacter spp. according to EN ISO 10272-1:2006 and EN ISO 10272-2:2006. Campylobacter spp. was detected, at generally low counts, in 44 samples (44%), of which 30 were from large-scale retailers and 14 from butcher's shops. Chicken meat was significantly more contaminated than turkey meat. Among the preparations, Campylobacter spp. was found in 85.71% of meat rolls, 50% of meatballs, 44.12% of hamburgers, and 28.57% of raw sausages. A total of 100 strains, 2-3 from each positive sample, were isolated for identification by phenotypic, biomolecular, and MALDI-TOF MS methods. C. jejuni was the predominant species (63%), followed by C. coli (33%) and C. lari (4%). MALDI-TOF MS correctly identified 98% of the strains at the species level; only 1% of the tested strains were not identified. In the remaining 1%, two different species were mixed in the same sample, and MALDI-TOF MS correctly identified at least one of the strains. Considering the importance of the rapid identification of pathogens in food matrices, this method is highly recommended for the identification of suspected Campylobacter colonies.

Keywords: Campylobacter spp., food microbiology, matrix-assisted laser desorption ionization-time of flight mass spectrometry, rapid microbial identification

Procedia PDF Downloads 281
11083 Factors Affecting the Mental and Physical Health of Nurses during the Outbreak of COVID-19: A Case Study of a Hospital in Mashhad

Authors: Ghorbanali Mohammadi

Abstract:

Background: Due to the widespread outbreak of the COVID-19 virus, a large number of people become infected with the disease every day and go to hospitals. The acute form of this disease has caused the death of many people. Since all the stages of treatment for these patients take place in hospitals, nurses are at the forefront of the fight against this virus. This causes nurses to suffer from physical and mental health problems. Methods: Physical and mental problems in nurses were assessed using the Depression, Anxiety and Stress Scale (DASS-42) of Lovibond (1995) and the Nordic questionnaire. Results: 90 nurses from the emergency, intensive care, and coronary care units were examined, and a total of 180 questionnaires were collected and evaluated. It was found that 37.78%, 47.78%, and 21.11% of the nurses have symptoms of depression, anxiety, and stress, respectively, and 40% of the nurses had physical problems. In total, 65.17% of them suffered from one or more mental or physical illnesses. Conclusions: In the three units surveyed (intensive care, emergency, and coronary care), nurses worked more than ten hours a day. Examining the interaction of physical and mental health problems indicated that physical problems can aggravate mental problems.

Keywords: depression anxiety and stress scale of Lovibond, nordic questionnaire, mental health of nurses, physical health problems in nurses

Procedia PDF Downloads 114
11082 A New Social Vulnerability Index for Evaluating Social Vulnerability to Climate Change at the Local Scale

Authors: Cuong V Nguyen, Ralph Horne, John Fien, France Cheong

Abstract:

Social vulnerability to climate change is increasingly being acknowledged, and proposals to measure and manage it are emerging. Building upon this work, this paper proposes an approach to social vulnerability assessment using a new mechanism to aggregate and account for causal relationships among components of a Social Vulnerability Index (SVI). To operationalize this index, the authors propose a means of developing an appropriate primary dataset through the application of a specifically designed household survey questionnaire. The data collection and analysis, including calibration and calculation of the SVI, are demonstrated through application in a case study city in central coastal Vietnam. The calculation of the SVI at the fine-grained local neighbourhood scale provides high resolution in vulnerability assessment and also obviates the need for secondary data, which may be unavailable or problematic, particularly at the local scale in developing countries. The SVI household survey is underpinned by the results of a Delphi survey, in-depth interviews, and focus group discussions with local environmental professionals and community members. The research reveals inherent limitations of existing SVIs but also indicates their potential for use in assessing social vulnerability and making decisions associated with responding to climate change at the local scale.

Keywords: climate change, local scale, social vulnerability, social vulnerability index

Procedia PDF Downloads 426
11081 The Relationship between Fluctuation of Biological Signal: Finger Plethysmogram in Conversation and Anthropophobic Tendency

Authors: Haruo Okabayashi

Abstract:

Human biological signals (pulse waves, brain waves, etc.) have a rhythm which shows fluctuations. This study investigates the relationship between the fluctuation of a biological signal, shown by a finger plethysmogram (i.e., finger pulse wave) recorded during conversation, and anthropophobic tendency, and examines whether the fluctuation could be an index of mental health. 32 college students participated in the experiment. The finger plethysmogram of each subject was measured in the following conversation situations: a fun-memory talking/listening situation and a regrettable-memory talking/listening situation, for three minutes each. Lyspect 3.5 was used to collect the finger plethysmogram data. Since Lyspect calculates the Lyapunov spectrum, it is possible to obtain the largest Lyapunov exponent (LLE). The LLE is an indicator of fluctuation and shows the degree to which a trajectory moves away from nearby trajectories in a dynamical system. Before the finger plethysmogram experiment, each participant completed the psychological questionnaire 'Anthropophobic Scale'. The scale measures the social phobia tendency close to the consciousness of social phobia. A remarkable relationship was revealed between the fluctuation of the finger plethysmogram and the anthropophobic tendency scale when talking about a regrettable story in conversation: the participants (N=15) who have a low anthropophobic tendency show significantly more fluctuation of the finger pulse wave than the participants (N=17) who have a high anthropophobic tendency (F(1, 31)=5.66, p<0.05). That is, participants with a low anthropophobic tendency converse flexibly, using large fluctuation of the biological signal; on the other hand, participants with a high anthropophobic tendency constrain the conversation because of small fluctuation. Therefore, fluctuation is not an error but an important drive towards better relationships with others and the development of interaction. In considering mental health, the fluctuation of biological signals would be an important indicator.
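For reference, the largest Lyapunov exponent taken from the Lyapunov spectrum quantifies the average exponential rate at which two initially close trajectories of the reconstructed pulse-wave dynamics separate; in this context, a larger value corresponds to larger fluctuation of the signal:

```latex
\lambda_{\max} \;=\; \lim_{t \to \infty} \frac{1}{t}
\ln \frac{\lVert \delta \mathbf{Z}(t) \rVert}{\lVert \delta \mathbf{Z}(0) \rVert}
```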

Keywords: anthropophobic tendency, finger plethysmogram, fluctuation of biological signal, LLE

Procedia PDF Downloads 234