Search results for: parallel particle swarm optimization
3750 Proposition of an Intelligent System Based on the Augmented Reality for Warehouse Logistics
Authors: Safa Gharbi, Hayfa Zgaya, Nesrine Zoghlami, Slim Hammadi, Cyril De Barbarin, Laurent Vinatier, Christiane Coupier
Abstract:
Increasing productivity and quality of service, improving working comfort, and ensuring the efficiency of all processes are important challenges for every warehouse. Order picking is recognized as the most important and costly activity of all warehouse processes. This paper presents a new approach using Augmented Reality (AR) in the field of logistics. It aims to create a Head-Up Display (HUD) interface with a Warehouse Management System (WMS), using AR glasses. Integrating AR technology allows the optimization of order picking by reducing picking time, increasing efficiency, and delivering quickly. The picker is able to access immediately all the information needed for his tasks. All the information is displayed when needed in the field of vision (FOV) of the operator, without any action required from him. This research is part of the industrial project RASL (Réalité Augmentée au Service de la Logistique), which gathers two major partners: LAGIS (Laboratory of Automatics, Computer Engineering and Signal Processing in Lille, France) and Genrix Group, a European leader in warehouse logistics, which provided its software and its logistics expertise for the implementation.
Keywords: Augmented Reality (AR), logistics and optimization, Warehouse Management System (WMS), Head-Up Display (HUD)
Procedia PDF Downloads 481
3749 Design and Optimization for a Compliant Gripper with Force Regulation Mechanism
Authors: Nhat Linh Ho, Thanh-Phong Dao, Shyh-Chour Huang, Hieu Giang Le
Abstract:
This paper presents the design and optimization of a compliant gripper. The gripper is constructed based on the concept of a compliant mechanism with flexure hinges. A passive force regulation mechanism is presented to control the grasping force on a micro-sized object instead of using a force sensor. The force regulation mechanism is designed using planar springs. The gripper is expected to obtain a large range of displacement to handle objects of various sizes. First, the statics and dynamics of the gripper are investigated using finite element analysis in ANSYS software. Then, the design parameters of the gripper are optimized via the Taguchi method. An L9 orthogonal array is used to establish the experimental matrix. Subsequently, the signal-to-noise ratio is analyzed to find the optimal solution. Finally, the response surface methodology is employed to model the relationship between the design parameters and the output displacement of the gripper. The design of experiments method is then used to analyze the sensitivity so as to determine the effect of each parameter on the displacement. The results showed that the compliant gripper can move with a large displacement of 213.51 mm and that the force regulation mechanism is expected to be used for high-precision positioning systems.
Keywords: flexure hinge, compliant mechanism, compliant gripper, force regulation mechanism, Taguchi method, response surface methodology, design of experiment
Procedia PDF Downloads 329
3748 Assignment of Airlines Technical Members under Disruption
Authors: Walid Moudani
Abstract:
The Crew Reserve Assignment Problem (CRAP) considers the assignment of crew members to a set of reserve activities covering all the scheduled flights in order to ensure a continuous plan, so that operating costs are minimized while the solution meets hard constraints resulting from the safety regulations of Civil Aviation as well as from the airlines' internal agreements. The problem considered in this study is of the highest interest for airlines and may have important consequences on service quality and on the economic return of operations. In this communication, a new mathematical formulation for the CRAP is proposed which takes into account the regulations and the internal agreements. While current solutions make use of Artificial Intelligence techniques run on mainframe computers, a low-cost approach is proposed to provide efficient on-line solutions to face perturbed operating conditions. The proposed solution method uses a dynamic programming approach for the duties scheduling problem and, when applied to the case of a medium airline, provides efficient solutions and shows good potential acceptability by the operations staff. This optimization scheme can then be considered as the core of an on-line Decision Support System for crew reserve assignment operations management.
Keywords: airlines operations management, combinatorial optimization, dynamic programming, crew scheduling
Procedia PDF Downloads 353
3747 A Comparative Analysis of Heuristics Applied to Collecting Used Lubricant Oils Generated in the City of Pereira, Colombia
Authors: Diana Fajardo, Sebastián Ortiz, Oscar Herrera, Angélica Santis
Abstract:
Currently, a problem is arising in Colombia related to the collection of used lubricant oils, which are generated by the growth of the vehicle fleet. This situation does not allow a proper disposal of this type of waste, which in turn results in a negative impact on the environment. Therefore, through the comparative analysis of various heuristics, the best solution to the VRP (Vehicle Routing Problem) was selected by comparing costs and times for the collection of used lubricant oils in the city of Pereira, Colombia, since there are no management companies engaged in the direct administration of the collection of this pollutant. To achieve this aim, six two-phase solution proposals were discussed. First, the group of generator points of the residue (previously identified) was assigned: proposals one and four are based on the closeness of points, proposals two and five use the scanning method, and proposals three and six consider the capacity restriction of the collection vehicle. Subsequently, the routes were developed, in the first three proposals by the Clarke and Wright savings algorithm and in the following proposals by the Traveling Salesman optimization mathematical model. After applying these techniques, a comparative analysis of the results was performed, and it was determined which of the proposals presented the best values in terms of distance, cost, and travel time.
Keywords: heuristics, optimization model, savings algorithm, used vehicular oil, V.R.P.
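As an illustrative aside, the savings computation at the heart of the Clarke and Wright algorithm mentioned in this abstract can be sketched in a few lines of Python; the distance matrix and depot index below are assumed toy values, not data from the Pereira study.

```python
# Minimal sketch of the Clarke & Wright savings computation.
# The distance matrix and depot index are illustrative assumptions.
import itertools

def savings_list(dist, depot=0):
    """Return (saving, i, j) tuples sorted in descending order of saving.
    dist is a symmetric matrix; saving(i, j) = d(depot,i) + d(depot,j) - d(i,j)."""
    n = len(dist)
    customers = [k for k in range(n) if k != depot]
    savings = [
        (dist[depot][i] + dist[depot][j] - dist[i][j], i, j)
        for i, j in itertools.combinations(customers, 2)
    ]
    return sorted(savings, reverse=True)

# Toy example: depot plus three collection points.
d = [
    [0, 4, 6, 8],
    [4, 0, 3, 7],
    [6, 3, 0, 2],
    [8, 7, 2, 0],
]
print(savings_list(d))  # merge the pair with the largest saving first
```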
Procedia PDF Downloads 412
3746 Preparation and in vivo Assessment of Nystatin-Loaded Solid Lipid Nanoparticles for Topical Delivery against Cutaneous Candidiasis
Authors: Rawia M. Khalil, Ahmed A. Abd El Rahman, Mahfouz A. Kassem, Mohamed S. El Ridi, Mona M. Abou Samra, Ghada E. A. Awad, Soheir S. Mansy
Abstract:
Solid lipid nanoparticles (SLNs) have gained great attention for the topical treatment of skin-associated fungal infections as they facilitate the skin penetration of loaded drugs. Our work deals with the preparation of nystatin-loaded solid lipid nanoparticles (NystSLNs) using the hot homogenization and ultrasonication method. The prepared NystSLNs were characterized in terms of entrapment efficiency, particle size, zeta potential, transmission electron microscopy, differential scanning calorimetry, rheological behavior, and in vitro drug release. A stability study for 6 months was performed. A microbiological study was conducted in male rats infected with Candida albicans, by counting the colonies and examining the histopathological changes induced on the skin of infected rats. The results showed that the SLN dispersions are spherical in shape, with particle sizes ranging from 83.26±11.33 to 955.04±1.09 nm. The entrapment efficiencies range from 19.73±1.21 to 72.46±0.66%, with zeta potentials ranging from -18.9 to -38.8 mV and shear-thinning rheological behavior. The stability studies done for 6 months showed that nystatin (Nyst) is a good candidate for topical SLN formulations. The lowest number of colony-forming units per ml (cfu/ml) was recorded for the selected NystSLN compared to the drug solution and the commercial Nystatin® cream present on the market. It can be concluded from this work that SLNs provide a good skin-targeting effect and may represent a promising carrier for topical delivery of Nyst, offering sustained release and maintaining a localized effect, resulting in an effective treatment of cutaneous fungal infection.
Keywords: candida infections, hot homogenization, nystatin, solid lipid nanoparticles, stability, topical delivery
Procedia PDF Downloads 390
3745 Optimization the Multiplicity of Infection for Large Produce of Lytic Bacteriophage pAh6-C
Authors: Sang Guen Kim, Sib Sankar Giri, Jin Woo Jun, Saekil Yun, Hyoun Joong Kim, Sang Wha Kim, Jung Woo Kang, Se Jin Han, Se Chang Park
Abstract:
With the emergence of superbacteria, bacteriophages are considered an alternative to antibiotics. As the demand for phages increases, economical, large-scale production of phages is becoming one of the critical points. For therapeutic use, what is important is to eradicate the pathogenic bacteria as fast as possible, so a higher concentration of phages is generally needed for an effective therapeutic function. On the contrary, for maximum production, the bacteria work as a phage-producing factory. As a microbial cell factory, the bacteria need to last longer, producing phages without being eradicated. Consequently, killing the bacteria fast has a negative effect on large-scale production. In this study, the Multiplicity of Infection (MOI) was manipulated based on the initial bacterial inoculation, using phage pAh-6C, which has a therapeutic effect against Aeromonas hydrophila. 1, 5, and 10 percent of an overnight bacterial culture were inoculated, and each bacterial culture was co-cultured with the phage at an MOI of 0.01, 0.0001, and 0.000001, respectively. Simply changing the initial MOI as well as the bacterial inoculation concentration regulated the production quantity of the phage without any other changes to the culture conditions. It is anticipated that this result can be used as foundational data for the mass production of lytic bacteriophages, which can be used as therapeutic bio-control agents.
Keywords: bacteriophage, multiplicity of infection, optimization, Aeromonas hydrophila
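For clarity, the multiplicity-of-infection arithmetic behind such dilutions can be sketched as follows; the culture size and phage stock titre below are assumed values, not the authors' measurements.

```python
# Illustrative sketch of the multiplicity-of-infection (MOI) arithmetic.
# The titres below are assumed values, not the study's data.
def phage_volume_ml(target_moi, n_bacteria, phage_titre_pfu_per_ml):
    """Volume of phage stock needed so that phages added / bacteria = target MOI."""
    phages_needed = target_moi * n_bacteria
    return phages_needed / phage_titre_pfu_per_ml

n_cells = 1e8   # bacteria in the culture (assumed)
stock = 1e10    # phage stock titre in PFU/ml (assumed)
for moi in (0.01, 0.0001, 0.000001):
    print(f"MOI {moi:g}: add {phage_volume_ml(moi, n_cells, stock):.2e} ml of stock")
```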
Procedia PDF Downloads 306
3744 Dust Particle Removal from Air in a Self-Priming Submerged Venturi Scrubber
Authors: Manisha Bal, Remya Chinnamma Jose, B.C. Meikap
Abstract:
Dust particles suspended in air are a major source of air pollution. A self-priming submerged venturi scrubber, proven very effective in handling nuclear power plant accidents, is an efficient device to remove dust particles from the air and thus aids in pollution control. Venturi scrubbers are compact, have a simple mode of operation and no moving parts, are easy to install and maintain compared to other pollution control devices, and can handle high temperatures as well as corrosive and flammable gases and dust particles. In the present paper, fly ash, recognized as a major air pollutant emitted mostly from thermal power plants, is considered as the dust particle. Exposure to it through skin contact, inhalation, and ingestion can lead to health risks and in severe cases can even lead to lung cancer. The main focus of this study is the removal of fly ash particles from polluted air using a self-priming venturi scrubber under submerged conditions, with water as the scrubbing liquid. The venturi scrubber, comprising three sections (converging section, throat, and diverging section), is submerged inside a water tank. The liquid enters the throat due to the pressure difference composed of the hydrostatic pressure of the liquid and the static pressure of the gas. The high-velocity dust particles atomize the liquid droplets at the throat, and this interaction leads to their absorption into the water and thus the removal of fly ash from the air. A detailed investigation of the scrubbing of fly ash has been carried out in this study. Experiments were conducted at different throat gas velocities, water levels, and fly ash inlet concentrations to study the fly ash removal efficiency. From the experimental results, the highest fly ash removal efficiency of 99.78% is achieved at a throat gas velocity of 58 m/s and a water level of 0.77 m, with a fly ash inlet concentration of 0.3 x 10⁻³ kg/Nm³ in the submerged condition. The effects of throat gas velocity, water level, and fly ash inlet concentration on the removal efficiency have also been evaluated. Furthermore, the experimental results for removal efficiency are validated with the developed empirical model.
Keywords: dust particles, fly ash, pollution control, self-priming venturi scrubber
Procedia PDF Downloads 161
3743 Optimization of Fermentation Parameters for Bioethanol Production from Waste Glycerol by Microwave Induced Mutant Escherichia coli EC-MW (ATCC 11105)
Authors: Refal Hussain, Saifuddin M. Nomanbhay
Abstract:
Glycerol is a valuable raw material for the production of industrially useful metabolites. Among the many promising applications for the use of glycerol is its bioconversion to high value-added compounds, such as bioethanol, through microbial fermentation. Bioethanol is an important industrial chemical with emerging potential as a biofuel to replace vanishing fossil fuels. The yield of liquid fuel in this process is greatly influenced by various parameters, viz. temperature, pH, glycerol concentration, organic source concentration, and agitation speed. The present study was undertaken to investigate the optimum parameters for bioethanol production from raw glycerol by a mutant Escherichia coli (E. coli) (ATCC11505) strain immobilized on chitosan cross-linked with glutaraldehyde, optimized by the Taguchi statistical method in shake flasks. The initial parameters were each set at four levels, and the orthogonal array layout L16 (4⁵) was conducted. The important controlling parameters of the optimized operational fermentation were a temperature of 38 °C, a medium pH of 6.5, an initial glycerol concentration of 250 g/l, and an organic source concentration of 5 g/l. Fermentation with the optimized parameters was carried out in a custom-fabricated shake flask. The predicted value of bioethanol production under the optimized conditions was 118.13 g/l. Immobilized cells are mainly used for the economic benefits of continuous production or repeated use in continuous as well as batch mode.
Keywords: bioethanol, Escherichia coli, immobilization, optimization
Procedia PDF Downloads 652
3742 Alkali Activated Materials Based on Natural Clay from Raciszyn
Authors: Michal Lach, Maria Hebdowska-Krupa, Justyna Stefanek, Artur Stanek, Anna Stefanska, Janusz Mikula, Marek Hebda
Abstract:
Limited resources of raw materials determine the necessity of obtaining materials from other sources. In this area, the best-known and most widespread are recycling processes, which are mainly focused on the reuse of material. Another possible solution used in various companies to achieve improvement in sustainable development is waste-free production. It involves production exclusively from materials whose waste belongs to the group of renewable raw materials. This means that they can: (i) be recycled directly during the manufacturing process of further products or (ii) serve as raw material obtained by other companies for the production of alternative products. The article presents the possibility of using post-production clay from the Jurassic limestone deposit "Raciszyn II" as a raw material for the production of alkali activated materials (AAM). Such products are currently increasingly used, mostly in various building applications. However, their final properties depend significantly on many factors; the most important of them are: the chemical composition of the raw material, particle size, specific surface area, type and concentration of the activator, and the temperature range of the heat treatment. The mineralogical and chemical analyses of clay from the "Raciszyn II" deposit confirmed that this material, due to its high content of aluminosilicates, can be used as a raw material for the production of AAM. In order to obtain a product with the best properties, the optimization of the clay calcination process was also carried out. Based on the obtained results, it was found that this process should occur between 750 °C and 800 °C. Using a lower temperature yields a raw material with a low metakaolin content, which is the main component of materials suitable for alkaline activation processes. On the other hand, higher heat treatment temperatures cause thermal dissociation of large amounts of calcite, which is associated with the release of large amounts of CO2 and the formation of calcium oxide. This compound significantly accelerates the binding process, which consequently often prevents the correct formation of the geopolymer mass. The effect of using various activators: (i) NaOH, (ii) KOH, and (iii) a mixture of KOH and NaOH in ratios of 10%, 25%, and 50% by volume, on the compressive strength of the AAM was also analyzed. The obtained results, depending on the activator used, were in the range from 25 MPa to 40 MPa. These values are comparable with the results obtained for materials produced on the basis of Portland cement, which is one of the most popular building materials.
Keywords: alkaline activation, aluminosilicates, calcination, compressive strength
Procedia PDF Downloads 152
3741 Light and Scanning Electron Microscopic Studies on Corneal Ontogeny in Buffalo
Authors: M. P. S. Tomar, Neelam Bansal
Abstract:
Histomorphological, histochemical, and scanning electron microscopic observations were recorded in the developing cornea of buffalo fetuses. The samples of fetal cornea were collected in appropriate fixative from the slaughterhouse and the Veterinary Clinics, GADVASU, Ludhiana. The microscopic slides were stained for detailed histomorphological and histochemical studies. The scanning electron microscopic studies were performed at the Electron Microscopy & Nanobiology Lab, PAU Ludhiana. In the present study it was observed that, in the 36-day (d) fetus, the corneal epithelium was a well-marked single-layered structure placed on the stromal mesenchyme. The cornea appeared as the continuation of the developing sclera. The thickness of the cornea and its epithelium increased, and the epithelium started becoming double-layered at the corneo-scleral junction in the 47d fetus. The corneal thickness at this stage increased suddenly and was thus easily distinguished from the developing sclera. The separation of the corneal endothelium from the stroma was evident as a single-layered epithelium. The stroma possessed numerous fibroblasts in the 49d-stage eye. Descemet's membrane appeared at the 52d stage. The limbus area was separated by a depression from the developing cornea at the 61d stage. At the 65d stage, the Bowman's layer was more developed. Fibroblasts were arranged parallel to each other as well as parallel to the surface of the developing cornea in the superficial layers. These fibroblasts and fibers were arranged in a wavy pattern in the center of the stroma. The corneal epithelium started to stratify, as a double-layered epithelium was present at this fetal age. In group II (>120 days), the corneal epithelium was stratified towards a well-marked irido-corneal angle. The stromal fibroblasts followed a completely parallel arrangement throughout the entire thickness. In full-term fetuses, a well-developed cornea was observed. It was a fibrous layer with five distinct layers, which from outside to inside were: the outermost 7-8-layered corneal epithelium, the subepithelial basement membrane (Bowman's membrane), the substantia propria or stroma, the posterior limiting membrane (Descemet's membrane), and the posterior epithelium (corneal endothelium). The corneal thickness and connective tissue elements continued to increase: the thickness was 121.39 ± 3.73 µm at the 36d stage and increased to 518.47 ± 4.98 µm in group III fetuses. In fetal life, the basement membranes of the corneal epithelium and endothelium depicted a strong to intense periodic acid-Schiff (PAS) reaction. At the irido-corneal angle, the endothelium of the blood vessels was also positive for PAS activity. However, the cornea was only mildly positive for the alcian blue reaction. The developing cornea showed a strong reaction for basic proteins in the outer epithelium and the inner endothelium layers. Under low-magnification scanning electron microscopy, the cornea showed two types of cells, viz. light cells and dark cells. The light cells were smaller in size and had fewer microvilli on their surface than the dark cells. Despite these surface differences between light and dark cells, the corneal surface showed the same general pattern of microvilli studding all exposed surfaces out to the cell margins. The microvilli were long (of variable height), slightly tortuous and slender, and possessed a microvillus shaft with a very prominent knob.
Keywords: buffalo, cornea, eye, fetus, ontogeny, scanning electron microscopy
Procedia PDF Downloads 149
3740 Optimization of Pregelatinized Taro Boloso-I Starch as a Direct Compression Tablet Excipient
Authors: Tamrat Balcha Balla
Abstract:
Background: Tablets are still the most preferred means of drug delivery. The search for new and improved direct compression tablet excipients is an area of research focus. Taro Boloso-I is a variety of Colocasia esculenta (L. Schott) yielding 67% more than the other varieties (Godare) in Ethiopia. This study aimed to enhance the flowability while keeping the compressibility and compactibility of pregelatinized Taro Boloso-I starch. Methods: A central composite design was used for the optimization of two factors, the temperature and duration of pregelatinization, against five responses. The responses were the angle of repose, Hausner ratio, Kawakita compressibility index, mean yield pressure, and tablet breaking force. Results and Discussion: An increase in both temperature and time resulted in a decrease in the angle of repose. The increase in temperature was shown to decrease the Hausner ratio and the Kawakita compressibility index. The mean yield pressure was observed to increase with increasing levels of both temperature and time. The pregelatinized (optimized) Taro Boloso-I starch showed the desired flow property and compressibility. Conclusions: Pregelatinized Taro Boloso-I starch can be regarded as a potential direct compression excipient in terms of flowability, compressibility, and compactibility.
Keywords: starch, compression, pregelatinization, Taro Boloso-I
Procedia PDF Downloads 112
3739 Measurement and Simulation of Axial Neutron Flux Distribution in Dry Tube of KAMINI Reactor
Authors: Manish Chand, Subhrojit Bagchi, R. Kumar
Abstract:
A new dry tube (DT) has been installed in the tank of the KAMINI research reactor, Kalpakkam, India. This tube will be used for neutron activation analysis of small to large samples and for testing of neutron detectors. The DT is 375 cm in height and 7.5 cm in diameter, located 35 cm away from the core centre. The experimental thermal flux at various axial positions inside the tube has been measured by irradiating a flux monitor (¹⁹⁷Au) at 20 kW reactor power. The measured activity of ¹⁹⁸Au and the thermal cross section of the ¹⁹⁷Au(n,γ)¹⁹⁸Au reaction were used for the experimental thermal flux measurement. The flux inside the tube varies from 10⁹ to 10¹⁰ n cm⁻²s⁻¹, and the maximum flux was (1.02 ± 0.023) x 10¹⁰ n cm⁻²s⁻¹ at 36 cm from the bottom of the tube. Au and Zr foils, without and with a cadmium cover of 1 mm thickness, were irradiated at the maximum flux position in the DT to find out the irradiation-specific input parameters, such as the sub-cadmium to epithermal neutron flux ratio (f) and the epithermal neutron flux shape factor (α). The f value was 143 ± 5, indicating about a 99.3% thermal neutron component, and the α value was -0.2886 ± 0.0125, indicating a hard epithermal neutron spectrum due to insufficient moderation. The measured flux profile has been validated using a theoretical model of the KAMINI reactor through the Monte Carlo N-Particle code (MCNP). In MCNP, the complex geometry of the entire reactor is modelled in 3D, ensuring minimum approximations for all the components. Continuous-energy cross-section data from ENDF/B-VII.1 as well as S(α, β) thermal neutron scattering functions are considered. The neutron flux has been estimated at the corresponding axial locations of the DT using a mesh tally. The thermal flux obtained from the experiment shows good agreement with the values theoretically predicted by MCNP, within ± 10%. It can be concluded that this MCNP model can be utilized for calculating other important parameters like neutron spectra, dose rate, etc., and multi-elemental analysis can be carried out by irradiating samples at the maximum flux position using the measured f and α parameters with k₀-NAA standardization.
Keywords: neutron flux, neutron activation analysis, neutron flux shape factor, MCNP, Monte Carlo N-Particle Code
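As a rough illustration of how a thermal flux is inferred from the activity of an irradiated ¹⁹⁷Au monitor, a simplified sketch is given below; the foil mass, timing, and measured activity are assumed numbers, and self-shielding, the epithermal contribution, and counting corrections are ignored.

```python
# Simplified sketch of inferring a thermal flux from an activated Au-197 foil.
# Input numbers are assumed; corrections beyond saturation and decay are ignored.
import math

N_A = 6.022e23
M_AU = 196.967               # g/mol
SIGMA_TH = 98.65e-24         # Au-197(n,g) thermal cross section, cm^2
HALF_LIFE = 2.6947 * 86400   # Au-198 half-life, s
LAMBDA = math.log(2) / HALF_LIFE

def thermal_flux(activity_bq, foil_mass_g, t_irr_s, t_decay_s):
    """phi = A / (N * sigma * (1 - exp(-lambda*t_irr)) * exp(-lambda*t_decay))"""
    n_atoms = foil_mass_g / M_AU * N_A
    saturation = 1.0 - math.exp(-LAMBDA * t_irr_s)
    decay = math.exp(-LAMBDA * t_decay_s)
    return activity_bq / (n_atoms * SIGMA_TH * saturation * decay)

print(f"{thermal_flux(2.0e5, 0.010, 3600, 1800):.3e} n cm^-2 s^-1")
```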
Procedia PDF Downloads 160
3738 Wood Dust and Nanoparticle Exposure among Workers during a New Building Construction
Authors: Atin Adhikari, Aniruddha Mitra, Abbas Rashidi, Imaobong Ekpo, Jefferson Doehling, Alexis Pawlak, Shane Lewis, Jacob Schwartz
Abstract:
Building constructions in the US involve numerous wooden structures. Wood is routinely used in walls, floor framing, stair framing, and the making of landings in building construction. Cross-laminated timbers are currently being used as construction materials for tall buildings. Numerous workers are involved in these timber-based constructions, and wood dust is one of the most common occupational exposures for them. Wood dust is a complex substance composed of cellulose, polyoses, and other substances. According to US OSHA, exposure to wood dust is associated with a variety of adverse health effects among workers, including dermatitis, allergic respiratory effects, mucosal and nonallergic respiratory effects, and cancers. The amount and size of particles released as wood dust differ according to the operations performed on the wood. For example, shattering of wood during sanding operations produces finer particles than does chipping in sawing and milling industries. To our knowledge, how the shattering, cutting, and sanding of wood and wood slabs during new building construction release fine particles and nanoparticles is largely unknown. The general belief is that the dust generated during timber cutting and sanding tasks consists mostly of large particles. Consequently, little attention has been given to the generated submicron ultrafine and nanoparticles and their exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated the cytotoxicity of nanoparticles on lung epithelial cells. The above-described knowledge gaps were addressed in this study by a novel, newly developed nanoparticle monitor and conventional particle counters. This study was conducted at a large new building construction site in southern Georgia, primarily during the framing of wooden side walls, inner partition walls, and landings. Exposure levels of nanoparticles (n = 10) were measured by a newly developed nanoparticle counter (TSI NanoScan SMPS Model 3910) at four different distances (5, 10, 15, and 30 m) from the work location. Other airborne particles (number of particles/m³), including PM2.5 and PM10, were monitored using a 6-channel (0.3, 0.5, 1.0, 2.5, 5.0, and 10 µm) particle counter at 15 m, 30 m, and 75 m distances in both upwind and downwind directions. Mass concentrations of PM2.5 and PM10 (µg/m³) were measured by using a DustTrak Aerosol Monitor. Temperature and relative humidity levels were recorded. Wind velocity was measured by a hot wire anemometer. Concentration ranges of nanoparticles of 13 particle sizes were: 11.5 nm: 221 – 816/cm³; 15.4 nm: 696 – 1735/cm³; 20.5 nm: 879 – 1957/cm³; 27.4 nm: 1164 – 2903/cm³; 36.5 nm: 1138 – 2640/cm³; 48.7 nm: 938 – 1650/cm³; 64.9 nm: 759 – 1284/cm³; 86.6 nm: 705 – 1019/cm³; 115.5 nm: 494 – 1031/cm³; 154 nm: 417 – 806/cm³; 205.4 nm: 240 – 471/cm³; 273.8 nm: 45 – 92/cm³; and 365.2 nm:
3737 Energy Conservation Strategies of Buildings in Hot, Arid Region: Al-Khobar, Saudi Arabia
Authors: M. H. Shwehdi, S. Raja Mohammad
Abstract:
Recently, energy savings have become more pronounced as a result of the world financial crises as well as unstable oil prices. Certainly, all entities need to adopt energy conservation and management strategies due to the high monthly consumption of their spread-out locations and the advancements of their telecom systems. These system improvements necessitate the establishment of more exchange centers and also provide opportunities for energy savings. This paper investigates the impact of HVAC system characteristics and operational strategies, the impact of envelope thermal characteristics, and energy conservation measures. These are classified under three types of measures, i.e., zero-investment, low-investment, and high-investment energy conservation measures. The study shows that the energy conservation measures (ECMs) pertaining to the HVAC system characteristics and operation represent the highest potential for energy reduction; attention should be given to window thermal and solar radiation characteristics when large window areas are used. The type of glazing system needs to be carefully considered in the early design phase of future buildings. The paper will present the thermal optimization of exchange centers of different sizes in the hot-dry and hot-humid Saudi Arabian city of Al-Khobar, Eastern Province.
Keywords: energy conservation, optimization, thermal design, intermittent operation, exchange centers, hot-humid climate, Saudi Arabia
Procedia PDF Downloads 448
3736 Maintenance Objective-Based Asset Maintenance Maturity Model
Authors: James M. Wakiru, Liliane Pintelon, Peter Muchiri, Peter Chemweno
Abstract:
The fast-changing business and operational environment is forcing organizations to adopt asset performance management strategies, not only to reduce costs but also to maintain operational and production policies while addressing demand. To attain optimal asset performance management, a framework is essential that ensures a continuous and systematic approach to analyzing an organization's current maturity level and expected improvement regarding asset maintenance processes, strategies, technologies, capabilities, and systems. Moreover, while addressing maintenance-intensive organizations, this framework should consider the diverse (and often dynamic) business, operational, and technical context an organization is in, and realistically prescribe or relate the appropriate tools and systems the organization can potentially employ at the respective level to improve and attain its maturity goals. This paper proposes an asset maintenance maturity model to assess the current capabilities, strengths, and weaknesses of the maintenance processes an organization is using and to analyze gaps for improvement via a structured set of achievement levels. At the epicentre of the proposed framework is the utilization of the maintenance objectives selected by an organization for various maintenance optimization programs. The framework adapts the Capability Maturity Model for assessing the maintenance process maturity levels in the organization.
Keywords: asset maintenance, maturity models, maintenance objectives, optimization
Procedia PDF Downloads 224
3735 SPARK: An Open-Source Knowledge Discovery Platform That Leverages Non-Relational Databases and Massively Parallel Computational Power for Heterogeneous Genomic Datasets
Authors: Thilina Ranaweera, Enes Makalic, John L. Hopper, Adrian Bickerstaffe
Abstract:
Data are the primary asset of biomedical researchers and the engine for both discovery and research translation. As the volume and complexity of research datasets increase, especially with new technologies such as large single nucleotide polymorphism (SNP) chips, so too does the requirement for software to manage, process, and analyze the data. Researchers often need to execute complicated queries and conduct complex analyses of large-scale datasets. Existing tools to analyze such data, and other types of high-dimensional data, unfortunately suffer from one or more major problems. They typically require a high level of computing expertise, are too simplistic (i.e., do not fit realistic models that allow for complex interactions), are limited by computing power, do not exploit the computing power of large-scale parallel architectures (e.g., supercomputers, GPU clusters, etc.), or are limited in the types of analysis available, compounded by the fact that integrating new analysis methods is not straightforward. Solutions to these problems, such as those developed and implemented on parallel architectures, are currently available to only a relatively small portion of medical researchers with access and know-how. The past decade has seen a rapid expansion of data management systems for the medical domain. Much attention has been given to systems that manage phenotype datasets generated by medical studies. The introduction of heterogeneous genomic data for research subjects that reside in these systems has highlighted the need for substantial improvements in software architecture. To address this problem, we have developed SPARK, an enabling and translational system for medical research that leverages existing high-performance computing resources and analysis techniques currently available or being developed. It builds these into The Ark, an open-source web-based system designed to manage medical data. SPARK provides a next-generation biomedical data management solution based upon a novel microservice architecture and Big Data technologies. The system serves to demonstrate the applicability of microservice architectures for the development of high-performance computing applications. When applied to high-dimensional medical datasets such as genomic data, relational data management approaches with normalized data structures suffer from unfeasibly high execution times for basic operations such as insert (i.e., importing a GWAS dataset) and the queries that are typical of the genomics research domain. SPARK resolves these problems by incorporating non-relational NoSQL databases that have been driven by the emergence of Big Data. SPARK provides researchers across the world with user-friendly access to state-of-the-art data management and analysis tools while eliminating the need for high-level informatics and programming skills. The system will benefit health and medical research by eliminating the burden of large-scale data management, querying, cleaning, and analysis. SPARK represents a major advancement in genome research technologies, vastly reducing the burden of working with genomic datasets and enabling cutting-edge analysis approaches that have previously been out of reach for many medical researchers.
Keywords: biomedical research, genomics, information systems, software
Procedia PDF Downloads 269
3734 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)-Graphics Processing Unit (GPU) Heterogeneous Computing
Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou
Abstract:
The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, embedded systems-on-a-chip (SoC) contain coarse-granularity multi-core CPUs (central processing units) and mobile GPUs (graphics processing units) that can be used as general-purpose accelerators. The motivation is that algorithms with various parallel characteristics can be efficiently mapped to the heterogeneous architecture coupling these three processors. The CPU and GPU offload some computationally intensive tasks from the FPGA to reduce resource consumption and lower the overall cost of the system. However, in common present-day scenarios, applications utilize only one type of accelerator because the development approach supporting the collaboration of the heterogeneous processors faces challenges. Therefore, a systematic approach is needed that takes advantage of write-once-run-anywhere portability and the high execution performance of the modules mapped to various architectures, and that facilitates the exploration of the design space. In this paper, a servant-execution-flow model is proposed for abstracting the cooperation of the heterogeneous processors, which supports task partitioning, communication, and synchronization. At its first run, the intermediate language represented by the data flow diagram can generate executable code for the target processor or can be converted into high-level programming languages. Instantiation parameters efficiently control the relationship between the modules and the computational units, including the mapping of two hierarchical processing units and the adjustment of data-level parallelism. An embedded system for a three-dimensional waveform oscilloscope is selected as a case study. The performance of algorithms such as contrast stretching, etc., is analyzed with implementations on various combinations of these processors. The experimental results show that the heterogeneous computing system, using less than 35% of the resources, achieves performance similar to the pure-FPGA implementation and comparable energy efficiency.
Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation
Procedia PDF Downloads 117
3733 Cloud Design for Storing Large Amount of Data
Authors: M. Strémy, P. Závacký, P. Cuninka, M. Juhás
Abstract:
The main goal of this paper is to introduce our design of a private cloud for storing large amounts of data, especially pictures, and to provide a good technological backend for data analysis based on parallel processing and business intelligence. We have tested hypervisors, cloud management tools, storage for storing all data, and Hadoop to provide data analysis of unstructured data. Providing high availability, virtual network management, logical separation of projects, and also rapid deployment of physical servers to our environment was also needed.
Keywords: cloud, glusterfs, hadoop, juju, kvm, maas, openstack, virtualization
Procedia PDF Downloads 351
3732 Optimization Financial Technology through E-Money PayTren Application: Reducing Poverty in Indonesia with a System Direct Sales Tiered Sharia
Authors: Erwanda Nuryahya, Aas Nurasyiah, Sri Yayu Ninglasari
Abstract:
Indonesia is the fourth most populous country and still faces many troubles in its development. One of the most important unresolved problems is poverty. Limited job opportunity is one of its unresolved causes to this day. The purpose of this scientific paper is to assess the benefits of the E-Money PayTren application, owned by the company Veritra Sentosa International, in enhancing its partners' income. The methodology used here is the quantitative and qualitative descriptive method with a case study approach. The data used are primary and secondary data. The primary data were obtained from interviews with and observation of the company Veritra Sentosa International and the distribution of 400 questionnaires to PayTren partners. The secondary data were obtained from a literature study and documentation. The result is that PayTren, with a sharia tiered direct sales system, is proven able to enhance its partners' income. Therefore, the optimization of financial technology through the E-Money PayTren application should be utilized by Indonesians, because it is proven able to increase the income of the partners. The PayTren application is thus very useful for the government, the sharia financial industry, and society in reducing poverty in Indonesia.
Keywords: e-money PayTren application, financial technology, poverty, direct sales tiered Sharia
Procedia PDF Downloads 137
3731 Optimization of a Convolutional Neural Network for the Automated Diagnosis of Melanoma
Authors: Kemka C. Ihemelandu, Chukwuemeka U. Ihemelandu
Abstract:
The incidence of melanoma has been increasing rapidly over the past two decades, making melanoma a current public health crisis. Unfortunately, even as screening efforts continue to expand in an effort to ameliorate the death rate from melanoma, there is a need to improve diagnostic accuracy to decrease misdiagnosis. Artificial intelligence (AI), a new frontier in patient care, has the ability to improve the accuracy of melanoma diagnosis. The convolutional neural network (CNN), a form of deep neural network most commonly applied to analyze visual imagery, has been shown to outperform the human brain in pattern recognition. However, there are noted limitations to the accuracy of CNN models. Our aim in this study was the optimization of convolutional neural network algorithms for the automated diagnosis of melanoma. We hypothesized that optimal selection of the momentum and batch hyperparameters increases model accuracy. Our most successful model developed during this study showed that optimal selection of a momentum of 0.25 and a batch size of 2 led to superior performance and a faster model training time, with an accuracy of ~83% after nine hours of training. We did notice a lack of diversity in the dataset used, with a noted class imbalance favoring lighter vs. darker skin tones. Training set image transformations did not result in superior model performance in our study.
Keywords: melanoma, convolutional neural network, momentum, batch hyperparameter
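To illustrate where the two tuned hyperparameters enter a training setup, a minimal Keras-style sketch is shown below; the architecture, image size, and dataset are placeholders and do not represent the authors' model.

```python
# Hedged sketch of wiring the tuned hyperparameters (momentum = 0.25, batch size = 2)
# into a small Keras CNN. Architecture and data are illustrative placeholders only.
import tensorflow as tf

def build_model(input_shape=(224, 224, 3)):
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # melanoma vs. benign
    ])
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.25),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model

model = build_model()
# model.fit(train_images, train_labels, batch_size=2, epochs=50,
#           validation_data=(val_images, val_labels))
```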
Procedia PDF Downloads 99
3730 Methodology of Preliminary Design and Performance of a Axial-Flow Fan through CFD
Authors: Ramiro Gustavo Ramirez Camacho, Waldir De Oliveira, Eraldo Cruz Dos Santos, Edna Raimunda Da Silva, Tania Marie Arispe Angulo, Carlos Eduardo Alves Da Costa, Tânia Cristina Alves Dos Reis
Abstract:
This paper presents a preliminary design methodology for an axial fan based on the lift wing theory and the potential vortex hypothesis. The literature considers a study combining acoustic and engineering expertise to model a fan with low noise. Axial fans with inadequate intake geometry often suffer from poor flow conditions at the entrance, varying from spatially asymmetric velocity profiles to swirl that fluctuates with time; this produces random forces acting on the blades. This produces broadband gust noise, which in most cases triggers the tonal noise. The analysis of the axial-flow fan will be conducted through the solution of the Navier-Stokes equations and turbulence models in steady and transient (RANS - URANS) 3-D form, in order to find an efficient aerodynamic design with low noise, suitable for industrial installation. Therefore, the process will require the use of computational optimization methods, aerodynamic design methodologies, and numerical methods such as CFD (Computational Fluid Dynamics). The objective is the development of a methodology for axial fan construction, the design of the blade geometry, and the evaluation of aerodynamic performance.
Keywords: axial fan design, CFD, preliminary design, optimization
Procedia PDF Downloads 394
3729 Crack Growth Life Prediction of a Fighter Aircraft Wing Splice Joint Under Spectrum Loading Using Random Forest Regression and Artificial Neural Networks with Hyperparameter Optimization
Authors: Zafer Yüce, Paşa Yayla, Alev Taşkın
Abstract:
There are numerous analytical methods to estimate the crack growth life of a component. Soft computing methods show an increasing trend in predicting fatigue life. Their ability to build complex relationships and their capability to handle huge amounts of data are motivating researchers and industry professionals to employ them for challenging problems. This study focuses on soft computing methods, especially random forest regressors and artificial neural networks with hyperparameter optimization algorithms such as grid search and random grid search, to estimate the crack growth life of an aircraft wing splice joint under variable amplitude loading. The TensorFlow and Scikit-learn libraries of Python are used to build the machine learning models for this study. The material considered in this work is 7050-T7451 aluminum, which is commonly preferred as a structural element in the aerospace industry, and regarding the crack type, a corner crack is used. A finite element model is built for the joint to calculate fastener loads and stresses on the structure. Since the finite element model results are validated with analytical calculations, the findings of the finite element model are fed to the AFGROW software to calculate analytical crack growth lives. Based on the Fighter Aircraft Loading Standard for Fatigue (FALSTAFF), 90 unique fatigue loading spectra are developed for various load levels, and these spectra are then used as inputs to the artificial neural network and random forest regression models for predicting crack growth life. Finally, the crack growth life predictions of the machine learning models are compared with the analytical calculations. According to the findings, a good correlation is observed between the analytical and predicted crack growth lives.
Keywords: aircraft, fatigue, joint, life, optimization, prediction
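A minimal sketch of the random forest regression with grid search portion of such a workflow is given below; the feature matrix and target values are synthetic placeholders standing in for the FALSTAFF-derived spectrum descriptors and AFGROW-computed lives used in the paper.

```python
# Sketch of random forest regression with grid-search hyperparameter tuning.
# The synthetic data stand in for spectrum descriptors and crack growth lives.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(90, 6))   # 90 spectra x 6 loading descriptors (assumed)
y = 1e5 * np.exp(-3 * X[:, 0]) * (1 + 0.2 * rng.normal(size=90))  # pseudo lives

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

grid = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 20]},
    cv=5,
    scoring="neg_mean_absolute_error",
)
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))
```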
Procedia PDF Downloads 175
3728 Optimization of Cutting Parameters on Delamination Using Taguchi Method during Drilling of GFRP Composites
Authors: Vimanyu Chadha, Ranganath M. Singari
Abstract:
Drilling composite materials is a frequently practiced machining process during assembly in various industries such as automotive and aerospace. However, drilling of glass fiber reinforced plastic (GFRP) composites is significantly affected by the damage tendency of these materials under cutting forces such as thrust force and torque. The aim of this paper is to investigate the influence of various cutting parameters, such as cutting speed and feed rate, and subsequently also the influence of the number of layers, on the delamination produced while drilling a GFRP composite. A plan of experiments based on Taguchi techniques was instituted, considering drilling with prefixed cutting parameters in a hand lay-up GFRP material. The damage induced while drilling the GFRP composites was measured. Moreover, Analysis of Variance (ANOVA) was performed to minimize the delamination as influenced by the drilling parameters and the number of layers. The optimum drilling factor combination was obtained by using the analysis of the signal-to-noise ratio. The conclusion revealed that feed rate was the most influential factor on the delamination. The best delamination results were obtained with composites with a greater number of layers at lower cutting speeds and feed rates.
Keywords: analysis of variance, delamination, design optimization, drilling, glass fiber reinforced plastic composites, Taguchi method
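For reference, the smaller-the-better signal-to-noise ratio typically used to rank parameter levels in a Taguchi analysis of delamination can be sketched as follows; the delamination-factor values are illustrative only, not the study's measurements.

```python
# Sketch of the smaller-the-better signal-to-noise ratio used in Taguchi analysis.
# The delamination-factor values below are illustrative assumptions.
import numpy as np

def sn_smaller_the_better(y):
    """S/N = -10 * log10(mean(y^2)); larger S/N means less delamination."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Delamination factor measured for repeated holes at one orthogonal-array trial (assumed).
trial = [1.12, 1.08, 1.15]
print(f"S/N ratio: {sn_smaller_the_better(trial):.2f} dB")
```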
Procedia PDF Downloads 254
3727 Assessing the Mass Concentration of Microplastics and Nanoplastics in Wastewater Treatment Plants by Pyrolysis Gas Chromatography−Mass Spectrometry
Authors: Yanghui Xu, Qin Ou, Xintu Wang, Feng Hou, Peng Li, Jan Peter van der Hoek, Gang Liu
Abstract:
The level and removal of microplastics (MPs) in wastewater treatment plants (WWTPs) have been well evaluated in terms of particle number, while the mass concentrations of MPs and especially nanoplastics (NPs) remain unclear. In this study, microfiltration, ultrafiltration, and hydrogen peroxide digestion were used to extract MPs and NPs in different size ranges (0.01−1, 1−50, and 50−1000 μm) across the whole treatment schemes of two WWTPs. By identifying specific pyrolysis products, pyrolysis gas chromatography−mass spectrometry was used to quantify the mass concentrations of six selected polymer types (i.e., polymethyl methacrylate (PMMA), polypropylene (PP), polystyrene (PS), polyethylene (PE), polyethylene terephthalate (PET), and polyamide (PA)). The mass concentrations of total MPs and NPs decreased from 26.23 and 11.28 μg/L in the influent to 1.75 and 0.71 μg/L in the effluent, with removal rates of 93.3 and 93.7% in plants A and B, respectively. Among them, PP, PET, and PE were the dominant polymer types in wastewater, while PMMA, PS, and PA only accounted for a small part. The mass concentrations of NPs (0.01−1 μm) were much lower than those of MPs (>1 μm), accounting for 12.0−17.9 and 5.6−19.5% of the total MPs and NPs, respectively. Notably, the removal efficiency differed with polymer type and size range. The low-density MPs (e.g., PP and PE) had lower removal efficiency than the high-density PET in both plants. Since particles with smaller sizes can pass the tertiary sand filter or membrane filter more easily, the removal efficiency of NPs was lower than that of MPs with larger particle sizes. Based on the annual wastewater effluent discharge, it is estimated that about 0.321 and 0.052 tons of MPs and NPs, respectively, are released into the river each year. Overall, this study investigated the mass concentrations of MPs and NPs within a wide size range of 0.01−1000 μm in wastewater, which provides valuable information regarding the pollution level and distribution characteristics of MPs, and especially NPs, in WWTPs. However, there are limitations and uncertainties in the current study, especially regarding the sample collection and MP/NP detection. The plastic items used (e.g., sampling buckets, ultrafiltration membranes, centrifugal tubes, and pipette tips) may introduce potential contamination. Additionally, the proposed method caused loss of MPs, especially NPs, which can lead to an underestimation of MPs/NPs. Further studies are recommended to address these challenges regarding MPs/NPs in wastewater.
Keywords: microplastics, nanoplastics, mass concentration, WWTPs, Py-GC/MS
Procedia PDF Downloads 278
3726 Strategic Mine Planning: A SWOT Analysis Applied to KOV Open Pit Mine in the Democratic Republic of Congo
Authors: Patrick May Mukonki
Abstract:
The KOV pit (Kamoto Oliveira Virgule) is located 10 km from Kolwezi town, one of the mineral-rich towns in the Lualaba province of the Democratic Republic of Congo. The KOV pit is currently operated by Katanga Mining Limited (KML), a Glencore-Gecamines (a state-owned company) joint venture. Recently, the mine optimization process provided a life of mine of approximately 10 years with nine pushbacks using the Datamine NPV Scheduler software. In previous KOV pit studies, we outlined the impact of the accuracy of the geological information on a long-term mine plan for a big copper mine such as the KOV pit. The approach taken discussed three main scenarios and outlined some weaknesses on the geological information side; now, in this paper, we highlight, as an overview, those weaknesses, strengths, and opportunities in a global SWOT analysis. The approach we take here is essentially descriptive in terms of the steps taken to optimize the KOV pit, and, at every step, we categorized the challenges we faced to obtain a better trade-off between what we called strengths and what we called weaknesses. The same logic is applied in terms of the opportunities and threats. The SWOT analysis conducted in this paper demonstrates that, despite a generally poor ore body definition and very harsh groundwater conditions, there is room for improvement for such a high-grade ore body.
Keywords: mine planning, mine optimization, mine scheduling, SWOT analysis
Procedia PDF Downloads 224
3725 Flood Planning Based on Risk Optimization: A Case Study in Phan-Calo River Basin in Vinh Phuc Province, Vietnam
Authors: Nguyen Quang Kim, Nguyen Thu Hien, Nguyen Thien Dung
Abstract:
Flood disasters are increasing worldwide in both frequency and magnitude. Every year in Vietnam, floods cause great damage to people and property and lead to environmental degradation. The flood risk management policy in Vietnam is currently being updated, and the planning of flood mitigation strategies is being reviewed in order to decide how to reach sustainable flood risk reduction. This paper discusses a basic approach in which the flood protection measures are chosen based on minimizing the present value of expected monetary expenses, the total residual risk, and the costs of the flood control measures. This approach is proposed and demonstrated in a case study of flood risk management in the Vinh Phuc province of Vietnam. The research also proposes a framework for finding the optimal protection level and the optimal flood control measures. It provides an explicit economic basis for flood risk management plans and the interactive effects of options for flood damage reduction. The results of the case study are demonstrated and discussed; they provide a set of actions that helps decision makers choose flood risk reduction investment options.
Keywords: drainage plan, flood planning, flood risk, residual risk, risk optimization
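A toy sketch of the "minimize investment cost plus expected residual damage" trade-off described in this abstract is shown below; the cost and damage curves are invented for illustration and do not come from the Phan-Calo study.

```python
# Toy sketch of choosing a protection level that minimizes total expected cost.
# All numbers are invented for illustration.
import numpy as np

protection_levels = np.array([10, 20, 50, 100, 200])   # return period, years
investment = np.array([1.0, 1.8, 3.2, 5.0, 8.0])       # present-value cost (assumed)
expected_damage = np.array([7.5, 5.0, 2.6, 1.4, 0.9])  # expected residual damage (assumed)

total_cost = investment + expected_damage
best = protection_levels[np.argmin(total_cost)]
print(f"Optimal protection level: 1-in-{best}-year flood "
      f"(total expected cost {total_cost.min():.1f})")
```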
Procedia PDF Downloads 241
3724 Optimization of Surface Roughness in Additive Manufacturing Processes via Taguchi Methodology
Authors: Anjian Chen, Joseph C. Chen
Abstract:
This paper studies a case in which the targeted surface roughness of a fused deposition modeling (FDM) additive manufacturing process is improved. The process is designed to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for an FDM additive manufacturing process. The baseline Cp is 0.274 and the baseline Cpk is 0.654. This research utilizes the Taguchi methodology to eliminate defects and improve the process. The Taguchi method is used to optimize the additive manufacturing process and the printing parameters that affect the targeted surface roughness of FDM additive manufacturing. The Taguchi L9 orthogonal array is used to organize the study of the parameters' effectiveness (four controllable parameters and one non-controllable parameter) on the FDM additive manufacturing process. The four controllable parameters are nozzle temperature [°C], layer thickness [mm], nozzle speed [mm/s], and extruder speed [%]. The non-controllable parameter is the environmental temperature [°C]. After the optimization of the parameters, a confirmation print was made to prove that the results can reduce the number of defects and improve the process capability index Cp from 0.274 to 1.605 and the Cpk from 0.654 to 1.233 for the FDM additive manufacturing process. The final results confirmed that the Taguchi methodology is sufficient to improve the surface roughness of the FDM additive manufacturing process.
Keywords: additive manufacturing, fused deposition modeling, surface roughness, six-sigma, Taguchi method, 3D printing
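For reference, the process capability indices quoted above can be computed as sketched below; the surface-roughness specification limits and sample values are assumed, not the study's data.

```python
# Sketch of the Cp and Cpk process capability indices.
# The specification limits and roughness sample are assumed values.
import numpy as np

def cp_cpk(samples, lsl, usl):
    """Cp = (USL-LSL)/(6*sigma); Cpk = min(USL-mu, mu-LSL)/(3*sigma)."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

roughness = np.array([6.1, 6.4, 5.9, 6.2, 6.0, 6.3])   # Ra in micrometres (assumed)
print(cp_cpk(roughness, lsl=4.0, usl=8.0))
```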
Procedia PDF Downloads 391
3723 Potential Energy Expectation Value for Lithium Excited State (1s2s3s)
Authors: Khalil H. Al-Bayati, G. Nasma, Hussein Ban H. Adel
Abstract:
The purpose of the present work is to calculate the expectation value of potential energy
Keywords: lithium excited state, potential energy, 1s2s3s, mathematical physics
Procedia PDF Downloads 487
3722 Removal of Chromium (VI) from Aqueous Solution by Teff (Eragrostis Teff) Husk Activated Carbon: Optimization, Kinetics, Isotherm, and Practical Adaptation Study Using Response Surface Methodology
Authors: Tsegaye Adane Birhan
Abstract:
Recently, rapid industrialization has led to the excessive release of heavy metals such as Cr (VI) into the environment. Exposure to chromium (VI) can cause kidney and liver damage, depressed immune systems, and a variety of cancers. Therefore, treatment of Cr (VI)-containing wastewater is mandatory. This study aims to optimize the removal of Cr (VI) from an aqueous solution using a locally available Teff husk activated carbon adsorbent. The laboratory-based study was conducted on the optimization of the Cr (VI) removal efficiency of Teff husk activated carbon from aqueous solution. A central composite design was used to examine the effect of the interaction of process parameters and to optimize the process, using Design Expert version 7.0 software. The optimized removal efficiency of the Teff husk activated carbon (95.597%) was achieved at a pH of 1.92, an initial concentration of 87.83 mg/L, an adsorbent dose of 20.22 g/L, and a contact time of 2.07 hours. The adsorption of Cr (VI) on Teff husk activated carbon was found to be best fitted by pseudo-second-order kinetics and the Langmuir isotherm model of adsorption. Teff husk activated carbon can be used as an efficient adsorbent for the removal of chromium (VI) from contaminated water. Column adsorption needs to be studied in the future.
Keywords: batch adsorption, chromium (VI), teff husk activated carbon, response surface methodology, tannery wastewater
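A brief sketch of fitting the Langmuir isotherm reported as the best-fitting model is given below; the equilibrium data points are illustrative, not the measured Cr (VI) values from the study.

```python
# Sketch of fitting the Langmuir isotherm to equilibrium adsorption data.
# The data points below are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qm, kl):
    """qe = qm * KL * Ce / (1 + KL * Ce)"""
    return qm * kl * ce / (1.0 + kl * ce)

ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])   # equilibrium conc., mg/L (assumed)
qe = np.array([1.8, 3.4, 4.9, 6.1, 6.9])      # uptake, mg/g (assumed)

(qm, kl), _ = curve_fit(langmuir, ce, qe, p0=(8.0, 0.1))
print(f"q_max = {qm:.2f} mg/g, K_L = {kl:.3f} L/mg")
```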
Procedia PDF Downloads 4
3721 Optrix: Energy Aware Cross Layer Routing Using Convex Optimization in Wireless Sensor Networks
Authors: Ali Shareef, Aliha Shareef, Yifeng Zhu
Abstract:
Energy minimization is of great importance in wireless sensor networks for extending battery lifetime. One of the key activities of nodes in a WSN is communication and the routing of their data to a centralized base station or sink. Routing using the shortest path to the sink is not the best solution, since it will cause nodes along this path to fail prematurely. We propose a cross-layer energy-efficient routing protocol, Optrix, that utilizes a convex formulation to maximize the lifetime of the network as a whole. We further propose Optrix-BW, a novel convex formulation with a bandwidth constraint that allows the channel conditions to be accounted for in routing. By considering this key channel parameter, we demonstrate that Optrix-BW is capable of congestion control. Optrix is implemented in TinyOS, and we demonstrate that a relatively large topology of 40 nodes can converge to within 91% of the optimal routing solution. We describe the pitfalls and issues related to utilizing a continuous-form technique such as convex optimization with discrete packet-based communication systems as found in WSNs. We propose a routing controller mechanism that allows for this transformation. We compare Optrix against the Collection Tree Protocol (CTP), and we found that Optrix performs better than CTP in terms of convergence to an optimal routing solution, load balancing, and network lifetime maximization.
Keywords: wireless sensor network, energy efficient routing
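A toy linear-programming formulation in the spirit of lifetime-maximizing routing is sketched below with cvxpy; the topology, energy figures, and data rates are invented, and this is not the Optrix model itself.

```python
# Toy convex (LP) formulation of lifetime maximization for a tiny WSN topology.
# Topology, energies, and rates are invented for illustration only.
import cvxpy as cp

nodes = [0, 1, 2]               # sensor nodes; node 3 is the sink
links = [(0, 1), (0, 3), (1, 3), (2, 1), (2, 3)]
e_tx, e_rx = 50e-9, 30e-9       # J per bit (assumed)
battery = 100.0                 # J per node (assumed)
rate = 250.0                    # bits generated per second per node (assumed)

t = cp.Variable(nonneg=True)                       # network lifetime, seconds
f = {l: cp.Variable(nonneg=True) for l in links}   # total bits carried over each link

constraints = []
for i in nodes:
    out_flow = sum(f[l] for l in links if l[0] == i)
    in_flow = sum(f[l] for l in links if l[1] == i)
    constraints.append(out_flow - in_flow == rate * t)               # flow conservation
    constraints.append(e_tx * out_flow + e_rx * in_flow <= battery)  # energy budget

problem = cp.Problem(cp.Maximize(t), constraints)
problem.solve()
print(f"max lifetime ~ {t.value:.0f} s")
```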
Procedia PDF Downloads 390