Search results for: assembly scheduling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 963

603 Achieving Product Robustness through Variation Simulation: An Industrial Case Study

Authors: Narendra Akhadkar, Philippe Delcambre

Abstract:

In power protection and control products, assembly process variation due to individual parts manufactured from single- or multi-cavity tooling is a major problem. The dimensional and geometrical variations on the individual parts, in the form of manufacturing tolerances and assembly tolerances, are sources of clearance in the kinematic joints, polarization effects in the joints, and tolerance stack-up. All these variations adversely affect product quality, functionality, cost, and time-to-market. Variation simulation analysis may be used in the early product design stage to predict such uncertainties. Usually, variations exist in both manufacturing processes and materials. In tolerance analysis, the effect of the dimensional and geometrical variations of the individual parts on the functional characteristics (conditions) of the final assembled product is studied. A functional characteristic of the product may be affected by a set of interrelated dimensions (functional parameters) that usually form a geometrical closure in a 3D chain. In power protection and control products, the prerequisite is that when a fault occurs in the electrical network, the product must respond quickly to break the circuit and clear the fault; the response time is usually in milliseconds. Any failure in clearing the fault may result in severe damage to the equipment or network, and human safety is at stake. In this article, we investigate two important functional characteristics associated with the robust performance of the product. Experimental data obtained at the Schneider Electric laboratory demonstrate the very good prediction capability of variation simulation performed with CETOL (tolerance analysis software) in an industrial context. In particular, this study allows design engineers to better understand which critical parts of the product need to be manufactured to tight, capable tolerances. Conversely, some parts are not critical to the functional characteristics (conditions) of the product, and relaxing their tolerances may reduce manufacturing cost while still ensuring robust performance. Capable tolerancing is one of the most important aspects of product and manufacturing process design. In the case of a miniature circuit breaker (MCB), product quality and robustness are mainly affected by two aspects: (1) the allocation of design tolerances between the components of a mechanical assembly and (2) the manufacturing tolerances in the intermediate machining steps of component fabrication.
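The trade-off between tight tolerances on critical parts and relaxed tolerances elsewhere can be illustrated with a simple stack-up calculation. The sketch below is a generic illustration, not the CETOL analysis used in the study, and all tolerance values are hypothetical; it compares a worst-case stack with a statistical root-sum-square (RSS) stack for a 1D chain of four contributing tolerances:

```python
import math

def stackup(tolerances):
    """Compare worst-case and RSS (root-sum-square) stack-up of a
    1D tolerance chain, as used in early-stage tolerance analysis."""
    worst_case = sum(abs(t) for t in tolerances)          # all extremes coincide
    rss = math.sqrt(sum(t * t for t in tolerances))       # statistical combination
    return worst_case, rss

# Hypothetical chain of four part tolerances (mm) contributing to one gap.
wc, rss = stackup([0.10, 0.05, 0.08, 0.12])
print(f"worst-case: ±{wc:.2f} mm, statistical (RSS): ±{rss:.3f} mm")
```

The RSS result is markedly tighter than the worst case, which is why a statistical view can justify relaxing tolerances on non-critical contributors.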

Keywords: geometrical variation, product robustness, tolerance analysis, variation simulation

Procedia PDF Downloads 161
602 Covalently Conjugated Gold–Porphyrin Nanostructures

Authors: L. Spitaleri, C. M. A. Gangemi, R. Purrello, G. Nicotra, G. Trusso Sfrazzetto, G. Casella, M. Casarin, A. Gulino

Abstract:

Hybrid molecular–nanoparticle materials, obtained with a bottom-up approach, are suitable for the fabrication of functional nanostructures showing structural control and well-defined optical, electronic, or catalytic properties, with a view to applications in different fields of nanotechnology. Gold nanoparticles (Au NPs) exhibit important chemical, electronic, and optical properties due to their size, shape, and electronic structure. Au NPs containing no more than 30-40 atoms are only luminescent, because they can be considered large molecules with discrete energy levels, while nano-sized Au NPs show only the surface plasmon resonance. Hence, gold nanoparticles can be either luminescent or plasmonic, and this represents a severe constraint for their use as an optical material. The aim of this work was the fabrication of a nanoscale assembly of Au NPs covalently anchored to each other by novel bifunctional porphyrin molecules that work as bridges between different gold nanoparticles. This functional architecture shows a strong surface plasmon due to the Au nanoparticles and a strong luminescence signal coming from the porphyrin molecules, thus behaving like an artificial organized plasmonic and fluorescent network. The self-assembly geometry of this porphyrin on the Au NPs was studied by investigating the conformational properties of the porphyrin derivative at the DFT level. The morphology, electronic structure, and optical properties of the conjugated Au NPs–porphyrin system were investigated by TEM, XPS, UV–vis absorption, and luminescence spectroscopy. The present nanostructures can be used for plasmon-enhanced fluorescence, photocatalysis, nonlinear optics, etc., under atmospheric conditions, since our system is reactive to neither air nor water and does not need to be stored under vacuum or inert gas.

Keywords: gold nanoparticle, porphyrin, surface plasmon resonance, luminescence, nanostructures

Procedia PDF Downloads 149
601 Design Optimization of Miniature Mechanical Drive Systems Using Tolerance Analysis Approach

Authors: Eric Mxolisi Mkhondo

Abstract:

Geometrical deviations and the interaction of mechanical parts influence the performance of miniature systems. These deviations tend to cause costly problems during assembly due to imperfections of components that are invisible to the naked eye. They also tend to cause unsatisfactory performance during operation due to deformation caused by environmental conditions. One of the effective tools to manage the deviations and interaction of parts in a system is tolerance analysis, a quantitative tool for predicting the tolerance variations that are defined during the design process. Traditional tolerance analysis assumes that the assembly is static and that deviations come from manufacturing discrepancies, overlooking the functionality of the whole system and the deformation of parts due to environmental conditions. This paper presents an integrated tolerance analysis approach for a miniature system in operation. In this approach, a computer-aided design (CAD) model is developed from the system's specification. The CAD model is then used to specify the geometrical and dimensional tolerance limits (upper and lower) that vary component geometries and sizes while conforming to functional requirements. Worst-case tolerances are analyzed to determine the influence of dimensional changes due to operating temperatures. The method is used to evaluate the nominal condition and the worst-case conditions at the maximum and minimum dimensions of the assembled components. These three conditions are evaluated at specific operating temperatures (-40°C, -18°C, 4°C, 26°C, 48°C, and 70°C). A case study on the mechanism of a zoom lens system is used to illustrate the effectiveness of the methodology.
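The evaluation of nominal, minimum, and maximum dimensions across operating temperatures can be sketched with a linear thermal expansion model. This is a hypothetical illustration of the idea, not the paper's CAD-based procedure; the part size, tolerance, and expansion coefficient are all assumed values:

```python
def dimension_at_temperature(nominal, tol, alpha, t_ref, t_op):
    """Return (min, nominal, max) of a dimension after linear thermal
    expansion from reference temperature t_ref to operating t_op."""
    scale = 1 + alpha * (t_op - t_ref)
    return ((nominal - tol) * scale, nominal * scale, (nominal + tol) * scale)

# Hypothetical 20 mm aluminium lens barrel dimension,
# alpha ~ 23e-6 /degC, tolerance +/-0.02 mm, referenced to 20 degC.
for t in (-40, -18, 4, 26, 48, 70):
    lo, nom, hi = dimension_at_temperature(20.0, 0.02, 23e-6, 20.0, t)
    print(f"{t:4d} degC: {lo:.4f} / {nom:.4f} / {hi:.4f} mm")
```

Running the three conditions over the abstract's six temperatures gives the table of dimensional envelopes that a worst-case analysis would then combine across the assembly.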

Keywords: geometric dimensioning, tolerance analysis, worst-case analysis, zoom lens mechanism

Procedia PDF Downloads 162
600 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea

Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng

Abstract:

During the last decades, research interest in planning, scheduling, and control of emergency response operations, especially the rescue and evacuation of people from the dangerous zone of marine accidents, has increased dramatically. Any delay until the survivors (called 'targets') are found and saved may cause loss or damage whose extent depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of saved people and/or minimize the search cost under restrictions on the number of people saved within the allowable response time. We consider a special situation in which the autonomous mobile robots (AMRs), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, being guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that in unknown environments the AMR's search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, where a target object is not discovered ('overlooked') by the AMR's sensors even though the AMR is in its close neighborhood, and (ii) a 'false-positive' detection error, also known as a 'false alarm', in which a clean place or area is wrongly classified by the AMR's sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding local-optimal strategies.
A specificity of the considered operational research problem, in comparison with the traditional Kadane-DeGroot-Stone search models, is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before any next location is selected. We provide a fast approximation algorithm for finding the AMR route that adopts a greedy search strategy: at each step, the on-board computer computes a current search effectiveness value for each location in the zone and moves to the location with the highest value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and the corresponding algorithm.
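A minimal sketch of such a greedy strategy, assuming a discretized search zone with per-location detection probabilities and costs, and a Bayesian posterior update after each unsuccessful look (so the history of failed searches influences later choices, as in the abstract); all numbers are hypothetical and this is not the authors' algorithm:

```python
def greedy_search(prior, p_detect, cost, steps):
    """Greedy route sketch: at each step visit the location with the
    highest search effectiveness (posterior probability of the target
    being there, times detection probability, per unit cost), then
    apply a Bayesian update for an unsuccessful, false-negative-prone look."""
    p = list(prior)  # posterior that the target is in each location
    route = []
    for _ in range(steps):
        scores = [p[i] * p_detect[i] / cost[i] for i in range(len(p))]
        best = scores.index(max(scores))
        route.append(best)
        # Unsuccessful look: the target is overlooked with probability
        # (1 - p_detect[best]) if it is actually there.
        miss = p[best] * (1 - p_detect[best])
        total = miss + (1.0 - p[best])
        p = [(miss if j == best else p[j]) / total for j in range(len(p))]
    return route

# Three hypothetical locations, three looks.
route = greedy_search([0.5, 0.3, 0.2], [0.8, 0.9, 0.7], [1.0, 1.0, 2.0], 3)
print(route)
```

Note how the route revisits the first location once its posterior, deflated by the earlier unsuccessful look, grows back past the alternatives.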

Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea

Procedia PDF Downloads 168
599 Unveiling the Self-Assembly Behavior and Salt-Induced Morphological Transition of Double PEG-Tailed Unconventional Amphiphiles

Authors: Rita Ghosh, Joykrishna Dey

Abstract:

PEG-based amphiphiles are of tremendous importance for their widespread applications in pharmaceutics, household products, and drug delivery. Previously, a number of single PEG-tailed amphiphiles having significant applications were reported by our group. It was therefore of immense interest to explore the properties and application potential of PEG-based double-tailed amphiphiles. Herein, for the first time, two novel double PEG-tailed amphiphiles having different PEG chain lengths have been developed. The self-assembly behavior of the newly developed amphiphiles in aqueous buffer (pH 7.0) was thoroughly investigated at 25 °C by a number of techniques, including ¹H-NMR, steady-state and time-dependent fluorescence spectroscopy, dynamic light scattering, transmission electron microscopy, atomic force microscopy, and isothermal titration calorimetry. Despite having two polar PEG chains, both molecules were found to have a strong tendency to self-assemble in aqueous buffered solution above a very low concentration. Surprisingly, the amphiphiles were shown to form stable vesicles spontaneously at room temperature without any external stimuli. The results of calorimetric measurements showed that vesicle formation is driven by the hydrophobic effect (positive entropy change), which is associated with the helix-to-random-coil transition of the PEG chain. The spectroscopic data confirmed that the bilayer membrane of the vesicles is constituted by the PEG chains of the amphiphilic molecules. Interestingly, the vesicles were also found to exhibit structural transitions upon addition of salts to the solution. These properties make the vesicles potential candidates for drug delivery.

Keywords: double-tailed amphiphiles, fluorescence, microscopy, PEG, vesicles

Procedia PDF Downloads 116
598 Development of Microsatellite Markers for Dalmatian Pyrethrum Using Next-Generation Sequencing

Authors: Ante Turudic, Filip Varga, Zlatko Liber, Jernej Jakse, Zlatko Satovic, Ivan Radosavljevic, Martina Grdisa

Abstract:

Microsatellites (SSRs) are highly informative repetitive sequences of 2-6 base pairs and are the most widely used molecular markers in assessing the genetic diversity of plant species. Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir./ Sch. Bip.) is an outcrossing diploid (2n = 18) endemic to the eastern Adriatic coast and the source of the natural insecticide pyrethrin. Due to the high repetitiveness and large size of the genome (haploid genome size of 9.58 pg), previous attempts to develop microsatellite markers using standard methods were unsuccessful. A next-generation sequencing (NGS) approach was therefore applied to genomic DNA extracted from fresh leaves of Dalmatian pyrethrum. Sequencing was conducted on a NovaSeq 6000 Illumina sequencer, yielding almost 400 million high-quality paired-end reads with a read length of 150 base pairs. Short reads were assembled by combining two approaches: (1) de novo assembly and (2) joining of overlapping paired-end reads. In total, 6,909,675 contigs were obtained, with an average contig length of 249 base pairs. Of the resulting contigs, 31,380 contained one or more microsatellite sequences; in total, 35,556 microsatellite loci were identified. Among the detected microsatellites, dinucleotide repeats were the most frequent, accounting for more than half of all microsatellites identified (21,212; 59.7%), followed by trinucleotide repeats (9,204; 25.9%). Tetra-, penta-, and hexanucleotide repeats had similar frequencies of 1,822 (5.1%), 1,472 (4.1%), and 1,846 (5.2%), respectively. Contigs containing microsatellites were further filtered by SSR pattern type, transposon occurrences, assembly characteristics, GC content, and the number of occurrences against the previously published draft genome of T. cinerariifolium. After the selection process, 50 microsatellite loci were used for primer design. The designed primers were tested on samples from five distinct populations, and 25 of them showed a high degree of polymorphism.
The selected loci were then genotyped on 20 samples belonging to one population, resulting in 17 microsatellite markers. The availability of codominant SSR markers will significantly improve knowledge of the population genetic diversity and structure, as well as the complex genetics and biochemistry, of this species. Acknowledgment: This work has been fully supported by the Croatian Science Foundation under the project 'Genetic background of Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir/ Sch. Bip.) insecticidal potential' (PyrDiv) (IP-06-2016-9034).
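The microsatellite-detection step can be illustrated with a simple regular-expression scan for perfect 2-6 bp repeat motifs. This is a generic sketch, not the pipeline used in the study; the minimum repeat count is an assumed parameter, and real SSR-mining tools (e.g. MISA-style screens) apply motif-length-specific thresholds:

```python
import re

def find_ssrs(seq, min_repeats=5):
    """Locate perfect microsatellites (2-6 bp motifs repeated at least
    min_repeats times); returns (start, motif, repeat_count) tuples.
    The lazy quantifier prefers the shortest motif, so AC x 6 is not
    reported as ACAC x 3."""
    pattern = re.compile(rf"([ACGT]{{2,6}}?)\1{{{min_repeats - 1},}}")
    hits = []
    for m in pattern.finditer(seq):
        motif = m.group(1)
        hits.append((m.start(), motif, len(m.group(0)) // len(motif)))
    return hits

# Toy sequence with one dinucleotide and one trinucleotide locus.
print(find_ssrs("TTACACACACACACGGGATCATCATCATCATCGG"))
```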

Keywords: genome assembly, NGS, SSR, Tanacetum cinerariifolium

Procedia PDF Downloads 128
597 Biophysical Consideration in the Interaction of Biological Cell Membranes with Virus Nanofilaments

Authors: Samaneh Farokhirad, Fatemeh Ahmadpoor

Abstract:

Biological membranes are constantly in contact with various filamentous soft nanostructures that either reside on their surface or are transported between the cell and its environment. In particular, viral infections are determined by the interaction of viruses (such as filoviruses) with cell membranes; membrane protein organization (such as cytoskeletal proteins and actin filament bundles) has been proposed to influence the mechanical properties of lipid membranes; and the adhesion of filamentous nanoparticles influences their delivery yield into target cells or tissues. The goal of this research is to integrate the rapidly increasing but still fragmented experimental observations on the adhesion and self-assembly of nanofilaments (including filoviruses, actin filaments, and natural and synthetic nanofilaments) on cell membranes into a general, rigorous, and unified knowledge framework. The global outbreak of the coronavirus disease in 2020, which has persisted for over three years, highlights the crucial role that nanofilament-based delivery systems play in human health. This work will unravel the role of a unique property of all cell membranes, namely flexoelectricity, and the significance of nanofilament flexibility in the adhesion and self-assembly of nanofilaments on cell membranes. This will be achieved using continuum mechanics, statistical mechanics, and molecular dynamics and Monte Carlo simulations. The findings will help address the societal need to understand the biophysical principles that govern the attachment of filoviruses and flexible nanofilaments onto living cells and will provide guidance on the development of nanofilament-based vaccines for a range of diseases, including infectious diseases and cancer.

Keywords: virus nanofilaments, cell mechanics, computational biophysics, statistical mechanics

Procedia PDF Downloads 88
596 Factory Communication System for Customer-Based Production Execution: An Empirical Study on the Manufacturing System Entropy

Authors: Nyashadzashe Chiraga, Anthony Walker, Glen Bright

Abstract:

The manufacturing industry is currently experiencing a paradigm shift into the Fourth Industrial Revolution, in which customers are increasingly at the epicentre of production. The high degree of production customization and personalization requires a flexible manufacturing system that can rapidly respond to the dynamic and volatile changes driven by the market. There is a gap in technology that allows for the optimal flow of information and optimal manufacturing operations on the shop floor regardless of rapid changes in fixture and part demands. Information is the reduction of uncertainty; it gives meaning and context on the state of each cell. The amount of information needed to describe a cellular manufacturing system is investigated by two measures: structural entropy and operational entropy. Structural entropy is the expected amount of information needed to describe the scheduled states of a manufacturing system, while operational entropy is the amount of information describing the states that actually occur during manufacturing operation. Using the AnyLogic simulator, a typical manufacturing job shop was set up with a cellular manufacturing configuration comprising a material handling cell, a 3D printer cell, an assembly cell, a manufacturing cell, and a quality control cell. The factory provides manufactured parts to a number of clients; there are substantial variations in part configurations, and new part designs are continually being introduced into the system. Based on the normal expected production schedule, schedule adherence was calculated from the structural entropy and operational entropy while varying the amount of information communicated in simulated runs. The structural entropy denotes a system that is in control: the necessary real-time information is readily available to the decision maker at any point in time.
For comparative analysis, different out-of-control scenarios were run, in which changes in the manufacturing environment were not effectively communicated, resulting in deviations from the original predetermined schedule. The operational entropy was calculated from the actual operations. The results of the empirical study show that increasing the efficiency of a factory communication system increases the degree of adherence of a job to the expected schedule. The performance of downstream production flow, fed from the parallel upstream flow of information on the factory state, was also increased.
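Both entropy measures are instances of Shannon entropy over the distribution of cell states: the structural measure uses the scheduled state distribution, the operational measure the distribution actually observed. A minimal sketch with hypothetical state probabilities for one cell (not the paper's AnyLogic data):

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits: the expected amount of information
    needed to describe which state a manufacturing cell is in."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Hypothetical probabilities over three cell states (busy/idle/blocked).
scheduled = [0.9, 0.05, 0.05]   # structural: states mostly as scheduled
observed = [0.5, 0.3, 0.2]      # operational: states seen in operation
print(f"structural entropy:  {entropy(scheduled):.3f} bits")
print(f"operational entropy: {entropy(observed):.3f} bits")
```

The gap between the two values is one way to see an out-of-control system: the operation carries more uncertainty than the schedule predicted.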

Keywords: information entropy, communication in manufacturing, mass customisation, scheduling

Procedia PDF Downloads 241
595 Aseismic Stiffening of Architectural Buildings as Preventive Restoration Using Unconventional Materials

Authors: Jefto Terzovic, Ana Kontic, Isidora Ilic

Abstract:

In the proposed design concept, laminated glass and laminated plexiglass, as "unconventional materials", are considered as infill in a steel frame, on which they overlap through an intermediate rubber layer, thereby forming a composite assembly. In this way, vertical stiffening elements are formed, capable of receiving seismic forces and integrated into the structural system of the building. The applicability of such a system was verified by experiments under laboratory conditions, in which experimental models based on laminated glass and laminated plexiglass were exposed to cyclic loads simulating the seismic force. In this way, the load capacity of the composite assemblies was tested under dynamic loads parallel to the assembly plane. Thus, the stress intensity to which the composite systems might be exposed was determined, as well as the range of structural stiffening with respect to the observed deformation, along with the advantages of each type of infill compared to the other. Using specialized software based on the finite element method, a computer model of the structure was created and processed as a case study; the same computer model was used for analyzing the problem in the first phase of the design process. The stiffening system based on the composite assemblies tested in the laboratory is implemented in the computer model. The results of the modal analysis and seismic calculation from the computer model with the stiffeners applied showed the efficacy of such a solution, thus rounding out the design procedure for aseismic stiffening using unconventional materials.

Keywords: laminated glass, laminated plexiglass, aseismic stiffening, experiment, laboratory testing, computer model, finite element method

Procedia PDF Downloads 76
594 Adjusting Electricity Demand Data to Account for the Impact of Loadshedding in Forecasting Models

Authors: Migael van Zyl, Stefanie Visser, Awelani Phaswana

Abstract:

The electricity landscape in South Africa is characterized by frequent occurrences of loadshedding, a measure implemented by Eskom to manage electricity generation shortages by curtailing demand. Loadshedding, classified into stages ranging from 1 to 8 based on severity, involves the systematic rotation of power cuts across municipalities according to predefined schedules. However, this practice introduces distortions in recorded electricity demand, posing challenges to the accurate forecasting essential for budgeting, network planning, and generation scheduling. Addressing this challenge requires a methodology to quantify the impact of loadshedding and integrate it back into the metered electricity demand data. Fortunately, comprehensive records of loadshedding impacts are maintained in a database, enabling the alignment of loadshedding effects with hourly demand data. This adjustment ensures that forecasts accurately reflect true demand patterns, independent of loadshedding's influence, thereby enhancing the reliability of electricity supply management in South Africa. This paper presents a methodology for determining the hourly impact of loadshedding and subsequently adjusting historical demand data to account for it. Furthermore, two forecasting models are developed: one utilizing the original dataset and the other the adjusted data. A comparative analysis is conducted to evaluate the forecast accuracy improvements resulting from the adjustment process. By implementing this methodology, stakeholders can make more informed decisions regarding electricity infrastructure investments, resource allocation, and operational planning, contributing to the overall stability and efficiency of South Africa's electricity supply system.
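The core adjustment step amounts to adding the recorded hourly curtailment back onto the metered demand before training a forecaster. A minimal sketch, assuming hour-aligned series; the megawatt values are hypothetical and this is not Eskom's actual procedure or data:

```python
def adjust_for_loadshedding(metered_mw, curtailed_mw):
    """Reconstruct true hourly demand by adding back the load curtailed
    during loadshedding, so forecasts are trained on undistorted data."""
    if len(metered_mw) != len(curtailed_mw):
        raise ValueError("series must be aligned hour by hour")
    return [m + c for m, c in zip(metered_mw, curtailed_mw)]

# Hypothetical 4-hour window; curtailment comes from the loadshedding database.
metered = [30500.0, 29800.0, 28900.0, 30200.0]
curtailed = [0.0, 1000.0, 2000.0, 1000.0]
print(adjust_for_loadshedding(metered, curtailed))
```

One model would then be fitted to `metered` and the other to the adjusted series, and their forecast errors compared.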

Keywords: electricity demand forecasting, load shedding, demand side management, data science

Procedia PDF Downloads 56
593 A Framework for Incorporating Non-Linear Degradation of Conductive Adhesive in Environmental Testing

Authors: Kedar Hardikar, Joe Varghese

Abstract:

Conductive adhesives have found wide-ranging applications in the electronics industry, from fixing a defective conductor on a printed circuit board (PCB) and attaching an electronic component in an assembly, to protecting electronic components by forming a "Faraday cage." The reliability requirements for a conductive adhesive vary widely depending on the application and expected product lifetime. While the conductive adhesive is required to maintain structural integrity, the electrical performance of the associated sub-assembly can be affected by degradation of the adhesive, which in turn depends on the highly varied use case. The conventional approach to assessing the reliability of the sub-assembly is to subject it to standard environmental test conditions such as high temperature and high humidity, thermal cycling, and high-temperature exposure, to name a few. In order to project test data and observed failures to field performance, systematic development of an acceleration factor between test conditions and field conditions is crucial. Common acceleration factor models, such as the Arrhenius model, are based on rate kinetics and typically rely on an assumption of degradation that is linear in time for a given condition and test duration. The application of interest in this work involves a conductive adhesive used in the electronic circuit of a capacitive sensor. The degradation of the conductive adhesive in a high-temperature, high-humidity environment is quantified by the capacitance values. Under such conditions, the use of established models such as the Hallberg-Peck model or the Eyring model to predict time to failure in the field typically relies on a linear degradation rate. In this particular case, the degradation is nonlinear in time and exhibits a square-root-of-time (√t) dependence.
It is also shown that, for the mechanism of interest, the presence of moisture is essential, and the dominant mechanism driving the degradation is the diffusion of moisture. In this work, a framework is developed to incorporate the nonlinear degradation of the conductive adhesive into the development of an acceleration factor. The method can be extended to applications where nonlinearity in the degradation rate can be adequately characterized in tests. It is shown that, depending on the expected product lifetime, the conventional linear degradation approach can overestimate or underestimate field performance. This work provides guidelines on the suitability of the linear degradation approximation for such varied applications.
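The consequence of √t degradation for the acceleration factor can be sketched as follows: if degradation follows D(t) = k·tᵖ with a Peck-style humidity/temperature rate k, the time to reach a fixed failure threshold scales as (k_test/k_field)^(1/p), so for p = 0.5 the time acceleration factor is the square of the linear-case rate ratio. This is a generic illustration, not the paper's framework; the activation energy and humidity exponent below are assumed illustrative values, not fitted parameters:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def peck_rate(rh, temp_c, ea=0.7, n=2.7):
    """Peck-style degradation-rate factor: k proportional to
    RH^n * exp(-Ea / kT). Ea and n here are assumed, not measured."""
    return rh ** n * math.exp(-ea / (K_B * (temp_c + 273.15)))

def time_acceleration(rh_test, t_test, rh_field, t_field, power=0.5):
    """Acceleration factor in time for degradation D(t) = k * t**power.
    power=1 recovers the conventional linear-rate ratio; power=0.5
    (sqrt-t degradation) yields that ratio squared."""
    k_ratio = peck_rate(rh_test, t_test) / peck_rate(rh_field, t_field)
    return k_ratio ** (1.0 / power)

# 85 degC / 85% RH test vs. a hypothetical 40 degC / 60% RH field condition.
af_linear = time_acceleration(0.85, 85.0, 0.60, 40.0, power=1.0)
af_sqrt = time_acceleration(0.85, 85.0, 0.60, 40.0, power=0.5)
print(f"linear AF: {af_linear:.1f}, sqrt-t AF: {af_sqrt:.1f}")
```

The squared relationship is exactly why assuming linear degradation can badly misestimate field life when the true mechanism is diffusion-limited.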

Keywords: conductive adhesives, nonlinear degradation, physics of failure, acceleration factor model

Procedia PDF Downloads 128
592 De novo Transcriptome Assembly of Lumpfish (Cyclopterus lumpus L.) Brain Towards Understanding their Social and Cognitive Behavioural Traits

Authors: Likith Reddy Pinninti, Fredrik Ribsskog Staven, Leslie Robert Noble, Jorge Manuel de Oliveira Fernandes, Deepti Manjari Patel, Torstein Kristensen

Abstract:

Understanding fish behavior is essential to improving animal welfare in aquaculture research. Behavioral traits can have a strong influence on fish health and habituation. To identify the genes and biological pathways responsible for lumpfish behavior, we performed an experiment to understand the interspecies relationship (mutualism) between lumpfish and salmon. We also tested the correlation between gene expression data and observational/physiological data to identify the key genes that trigger stress and swimming behavior in lumpfish. After de novo assembly of the brain transcriptome, all samples were individually mapped to the available lumpfish (Cyclopterus lumpus L.) primary genome assembly (fCycLum1.pri, GCF_009769545.1). Out of ~16,749 genes expressed in the brain samples, 267 were statistically significant (P < 0.05), found in the odor vs. control (1), model vs. control (41), and salmon vs. control (225) comparisons. However, only eight genes had |logFC| ≥ 0.5; these are considered differentially expressed genes (DEGs). Thus, we were unable to identify genes related to behavioral traits from the RNA-Seq data analysis alone. We therefore correlated the gene expression data with the observational/physiological data: serotonin (5-HT), dopamine (DA), 3,4-dihydroxyphenylacetic acid (DOPAC), 5-hydroxyindoleacetic acid (5-HIAA), and noradrenaline (NORAD). We found 2,495 genes to be significant (P < 0.05), and among these, 1,587 genes are positively correlated with the noradrenaline (NORAD) group. This suggests that noradrenaline triggers the change in pigmentation and skin color in lumpfish. Genes related to behavioral traits, such as rhythmic, locomotory, feeding, visual, pigmentation, and stress behavior, response to other organisms, taxis, and dopamine and other neurotransmitter synthesis, were obtained from the correlation analysis.
In the KEGG pathway enrichment analysis, we found important pathways, such as the calcium signaling pathway and adrenergic signaling in cardiomyocytes, both involved in cell signaling, behavior, emotion, and stress. Calcium is an essential signaling molecule in brain cells and could affect the behavior of fish. Our results suggest that changes in calcium homeostasis and adrenergic receptor binding activity lead to changes in fish behavior during stress.
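The gene-hormone correlation step can be sketched as a plain Pearson correlation between per-sample expression values and a physiological measurement such as the noradrenaline level. The data below are hypothetical, not the study's measurements:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between per-sample gene
    expression and a physiological measurement."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical expression of one gene vs. NORAD levels in five fish.
expression = [2.1, 3.4, 2.8, 4.0, 3.1]
norad = [10.0, 14.5, 12.0, 16.2, 13.1]
print(f"r = {pearson(expression, norad):.3f}")
```

Repeating this per gene against each hormone (with a significance test and multiple-testing correction) yields the sets of positively correlated genes reported in the abstract.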

Keywords: behavior, De novo, lumpfish, salmon

Procedia PDF Downloads 170
591 Accounting for Downtime Effects in Resilience-Based Highway Network Restoration Scheduling

Authors: Zhenyu Zhang, Hsi-Hsien Wei

Abstract:

Highway networks play a vital role in post-disaster recovery of disaster-damaged areas. Damaged bridges in such networks can disrupt recovery activities by impeding the transportation of people, cargo, and reconstruction resources. Rapid restoration of damaged bridges is therefore of paramount importance to long-term disaster recovery. In the post-disaster recovery phase, the key to restoration scheduling for a highway network is the prioritization of bridge-repair tasks. Resilience is widely used as a measure of the ability of a network to return to its pre-disaster level of functionality. In practice, highways are temporarily blocked during the downtime of bridge restoration, decreasing highway-network functionality; failing to take downtime effects into account can therefore lead to overestimation of network resilience. Additionally, post-disaster recovery of highway networks is generally divided into emergency bridge repair (EBR) in the response phase and long-term bridge repair (LBR) in the recovery phase, and the two differ in restoration objectives, duration, budget, etc. Distinguishing between these two phases is important in order to precisely quantify highway network resilience and generate suitable restoration schedules for highway networks in the recovery phase. To address these issues, this study proposes a novel resilience quantification method for the optimization of long-term bridge repair schedules (LBRS), taking into account the impact of EBR activities and restoration downtime on a highway network's functionality. A time-dependent integer program with recursive functions is formulated for optimally scheduling LBR activities. Moreover, since uncertainty always exists in the LBRS problem, the optimization model is extended from the deterministic case to the stochastic case.
A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process is developed. The proposed methods are tested using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that, in this case, neglecting the bridge restoration downtime can lead to approximately 15% overestimation of highway network resilience. Moreover, accounting for the impact of EBR on network functionality can help to generate a more specific and reasonable LBRS. The theoretical and practical contributions are as follows. First, the proposed network recovery curve contributes to comprehensive quantification of highway network resilience by accounting for the impact of both restoration downtime and EBR activities on the recovery curve. Second, this study can improve highway network resilience along the organizational dimension by providing bridge managers with optimal LBR strategies.
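A common way to quantify the resilience discussed above is the normalized area under the functionality-recovery curve over the control horizon; restoration downtime appears as a dip in the curve while a repair blocks a highway link. A minimal sketch with hypothetical daily functionality values (not the Wenchuan case-study data):

```python
def resilience(functionality):
    """Resilience as the normalized area under the functionality-recovery
    curve (daily samples): 1.0 means full pre-disaster functionality was
    maintained over the whole control horizon."""
    return sum(functionality) / len(functionality)

# Hypothetical daily network functionality (fraction of pre-disaster level).
# The second curve dips because links are blocked while bridges are repaired.
ignoring_downtime = [0.6, 0.7, 0.8, 0.9, 1.0, 1.0]
with_downtime = [0.6, 0.65, 0.7, 0.85, 1.0, 1.0]
r1, r2 = resilience(ignoring_downtime), resilience(with_downtime)
print(f"resilience ignoring downtime: {r1:.3f}, with downtime: {r2:.3f}")
```

The gap between the two values is the overestimation the abstract warns about when downtime effects are neglected.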

Keywords: disaster management, highway network, long-term bridge repair schedule, resilience, restoration downtime

Procedia PDF Downloads 147
590 Solid State Drive End to End Reliability Prediction, Characterization and Control

Authors: Mohd Azman Abdul Latif, Erwan Basiron

Abstract:

A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. It is therefore important to ensure the required quality of each individual component through qualification testing specified by standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team drawn from all the key stakeholders embarks on reliability prediction from the beginning of new product development: it identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustained in mass production. This paper discusses a comprehensive development framework that covers the SSD end to end, from design to assembly, in-line inspection, and in-line testing, and that can predict and validate product reliability at the early stage of new product development. During the design stage, the SSD goes through intense reliability-margin investigation with a focus on assembly process attributes, process equipment control, in-process metrology, and the forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build a reliability prediction model. Next, for design validation, a reliability prediction tool, specifically a solder joint simulator, is established. The SSDs are stratified into non-operating and operating tests with a focus on solder joint reliability and connectivity/component latent failures: prevention through design intervention and containment through the Temperature Cycle Test (TCT). Some of the SSDs are subjected to physical solder joint analyses known as Dye and Pry (DP) and cross-section analysis. 
The results are fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven working, it enters the monitoring phase, in which the Design for Assembly (DFA) rules are updated. At this stage, the design changes and the process and equipment parameters are under control. Predictable product reliability early in product development enables on-time sample qualification delivery to the customer, optimizes product development validation and development resources, and avoids forced late investment to patch end-of-life product failures. Understanding the critical-to-reliability parameters earlier allows a focus on increasing the product margin, which increases customer confidence in product reliability.

Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control

Procedia PDF Downloads 168
589 Compensation for Victims of Crime and Abuse of Power in Nigeria

Authors: Kolawole Oyekan Jamiu

Abstract:

In Nigerian criminal law, the victim of an offence plays little or no role in the prosecution of the offender. The state concentrates only on imposing punishment on the offender, while victims of crime and of abuse of power by security agencies are abandoned without any compensation from either the State or the offender. It has been said that the victim of crime is the forgotten man in our criminal justice system: he sets the criminal law in motion but then goes into oblivion. Our present criminal law recognises neither the right of the victim to take part in the prosecution of the case nor his right to compensation; the victim is merely a witness in a case between the State and the offender. This paper examines the meaning of the phrase ‘the victims of crime and abuse of power’; it should be noted that neither category of victim is defined in any statute in Nigeria. The paper also considers the United Nations Declaration of Basic Principles of Justice for Victims of Crime and Abuse of Power, adopted by the United Nations General Assembly on 25 November 1985. The Declaration contains copious provisions on compensation for victims of crime and abuse of power. Unfortunately, it is not in itself a legally binding instrument and has been given little or no attention since its adoption in 1985. This paper examines the role of the judiciary in ensuring that victims of crime and abuse of power in Nigeria are compensated: while some judges have found it difficult to award damages to victims of abuse of power, others have given landmark rulings and awarded substantial damages. The Criminal Justice (Victim’s Remedies) Bill is also examined; the Bill comprises 74 sections and spells out the procedures for compensating victims of crime and abuse of power in Nigeria. 
Finally, the paper examines the practicability of awarding damages to victims of crime whether or not the offender is convicted and, in addition, the possibility of granting all the equitable remedies available in civil cases to victims of crime and abuse of power, so that victims may be restored to the position they occupied before the crime.

Keywords: compensation, damages, restitution, victims

Procedia PDF Downloads 720
588 Optimisation Model for Maximising Social Sustainability in Construction Scheduling

Authors: Laura Florez

Abstract:

The construction industry is labour intensive, and the behaviour and management of workers have a direct impact on the performance of construction projects. One of the issues the industry currently faces is how to recruit and retain its workers. Construction is known as an industry in which workers face short employment durations, frequent layoffs, and periods of unemployment between jobs. These challenges not only create pressure on the workers; project managers also have to constantly train new workers, face skills shortages, and cope with uncertainty about the quality of the workers the industry will attract. To address workers’ needs and project managers’ expectations, one practice that can be implemented is to schedule construction projects so as to maintain a stable workforce. This paper proposes a mixed-integer programming (MIP) model to schedule projects with the objective of maximising the social sustainability of construction projects, that is, maximising labour stability. Aside from the social objective, the model accounts for the equipment and financial resources required by the projects during the construction phase. To illustrate how the solution strategy works, a construction programme comprising ten projects is considered. The projects are scheduled to maximise labour stability while simultaneously minimising time and cost. The trade-off among time, cost, and labour stability allows project managers to weigh their preferences and identify which solution best suits their needs. Additionally, the model determines the optimal starting times for each project, working patterns for the workers, and labour costs. The model shows that construction projects can be scheduled not only to benefit the project manager but also to benefit current workers and help attract new workers to the industry. 
Due to its practicality, the model can be a valuable tool to support decision making and assist construction stakeholders in developing schedules that include social sustainability factors.
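
The labour-stability objective can be illustrated with a minimal brute-force stand-in for the MIP (all project data are hypothetical and this is not the paper's formulation): start times are chosen so that the weekly headcount stays within a crew limit while week-to-week workforce changes are minimised.

```python
from itertools import product

# Hypothetical projects: (duration_weeks, workers_needed)
PROJECTS = [(3, 5), (2, 4), (4, 3)]
HORIZON = 8     # planning horizon in weeks
MAX_CREW = 9    # workers available at any one time

def workforce_profile(starts):
    """Weekly headcount implied by the chosen start week of each project."""
    profile = [0] * HORIZON
    for (dur, crew), s in zip(PROJECTS, starts):
        for t in range(s, s + dur):
            profile[t] += crew
    return profile

def instability(profile):
    """Sum of week-to-week headcount changes: lower = more stable workforce."""
    return sum(abs(a - b) for a, b in zip(profile, profile[1:]))

def best_schedule():
    """Enumerate start times (an MIP solver would do this implicitly) and
    keep the feasible schedule with the most stable workforce."""
    best, best_cost = None, None
    domains = [range(HORIZON - dur + 1) for dur, _ in PROJECTS]
    for starts in product(*domains):
        profile = workforce_profile(starts)
        if max(profile) > MAX_CREW:
            continue                      # resource constraint violated
        cost = instability(profile)
        if best_cost is None or cost < best_cost:
            best, best_cost = starts, cost
    return best, best_cost
```

In the paper's model, time and cost objectives would be traded off against this stability measure rather than optimised for stability alone.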

Keywords: labour stability, mixed-integer programming (MIP), scheduling, workforce management

Procedia PDF Downloads 251
587 Building Information Modeling Implementation for Managing an Extra Large Governmental Building Renovation Project

Authors: Pornpote Nusen, Manop Kaewmoracharoen

Abstract:

In recent years, there has been an observable shift in fully developed countries from constructing new buildings to modifying existing ones. Although an effective instrument like BIM (Building Information Modeling) is well developed for constructing new buildings, it is not widely used to renovate old ones. BIM is accepted as an effective means to overcome common managerial problems such as project delay, cost overrun, and poor quality across the project life cycle. It was recently introduced in Thailand and is rarely used in renovation projects. Today, in Thailand, BIM is mostly used for creating aesthetic 3D models and for quantity takeoff, though it can also serve as a project management tool for planning and scheduling. The governmental sector in Thailand is beginning to recognize the uses of BIM for managing construction projects, but knowledge about BIM implementation in governmental construction projects is underdeveloped, and further studies are needed to maximize its advantages for the governmental sector. An extra large governmental educational building of 17,000 square meters, currently under a two-year renovation, was used in this research. BIM models of the building’s exterior and interior areas were created for all five floors. A 4D BIM model, combining the 3D BIM with time, was then created for planning and scheduling. Three focus groups were held with the executive committee, the contractors, and the officers of the building to discuss the possibility and usefulness of the BIM approach compared with the traditional process. Several positive aspects were discussed, especially foreseen problems such as inadequate access ways, altered ceiling levels, the impracticality of the construction plan created through the traditional approach, and the lack of constructability information. 
However, for some parties the cost of BIM implementation was a concern, though, this study believes, its benefits outweigh the cost.

Keywords: building information modeling, extra large building, governmental building renovation, project management, renovation, 4D BIM

Procedia PDF Downloads 144
586 Fluorescence Resonance Energy Transfer in a Supramolecular Assembly of Luminescent Silver Nanoclusters and Cucurbit[8]uril Based Host-Guest System

Authors: Srikrishna Pramanik, Sree Chithra, Saurabh Rai, Sameeksha Agrawal, Debanggana Shil, Saptarshi Mukherjee

Abstract:

The understanding of interactions between organic chromophores and biologically useful luminescent noble metal nanoclusters (NCs) leading to an energy transfer process with applications in light-harvesting materials is still in its nascent stage. This work describes a photoluminescent supramolecular assembly, built in two stages, employing an energy transfer process between silver (Ag) NCs as the donor and a host-guest system as the acceptor, which can find potential applications in diverse fields. Initially, we explored the host-guest chemistry between a cationic guest, Ethidium Bromide, and the macrocyclic host Cucurbit[8]uril using spectroscopic and calorimetric techniques to decipher their interaction mechanism in modulating the photophysical properties of the chromophore. Next, we synthesized a series of blue-emitting AgNCs using different templates such as protein, peptides, and cyclodextrin. The as-prepared AgNCs were characterized by various spectroscopic techniques. We have established that these AgNCs can be employed as donors in the FRET process with the above acceptor for FRET-based emission color tuning. Our in-depth studies revealed that surface ligands play a key role in modulating FRET efficiency. Overall, by employing a non-covalent strategy, we have developed FRET pairs using blue-emitting NCs and a host-guest complex, which could find potential applications in constructing advanced white-light-emitting and anti-counterfeiting materials and in developing biosensors.
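
For readers unfamiliar with the distance dependence exploited in such FRET pairs, the standard Förster relation can be computed directly; this is the generic textbook formula, not a value or parameter from this work.

```python
def fret_efficiency(r_nm, r0_nm):
    """Förster energy-transfer efficiency for donor-acceptor distance r and
    Förster radius R0 (both in nm): E = 1 / (1 + (r/R0)^6).
    Efficiency is 50% at r = R0 and falls off steeply with distance."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)
```

The sixth-power fall-off is why modulating donor-acceptor distance through surface ligands and host-guest binding so strongly tunes the observed emission.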

Keywords: absorption spectroscopy, cavities, energy transfer, fluorescence, fluorescence resonance energy transfer

Procedia PDF Downloads 37
585 AquaCrop Model Simulation for Water Productivity of Teff (Eragrostis tef): A Case Study in the Central Rift Valley of Ethiopia

Authors: Yenesew Mengiste Yihun, Abraham Mehari Haile, Teklu Erkossa, Bart Schultz

Abstract:

Teff (Eragrostis tef) is a staple food in Ethiopia. Local and international demand for the crop is ever increasing, pushing the current price to five times its 2006 level. Meeting this escalating demand requires increasing production, including through irrigation. Optimum application of irrigation water, especially in semi-arid areas, is profoundly important. Application of the AquaCrop model to irrigation scheduling and the simulation of water productivity helps both irrigation planners and agricultural water managers. This paper presents simulation and evaluation of the AquaCrop model in optimizing the yield and biomass response to variation in the timing and rate of irrigation water application. Canopy expansion, canopy senescence, and harvest index are the key physiological processes sensitive to water stress. For the full-irrigation treatment, there was a strong relationship between the measured and simulated canopy and biomass, with r² and d values of 0.87 and 0.96 for canopy and 0.97 and 0.74 for biomass, respectively. However, the model underestimated yield and biomass at higher water-stress levels. For the treatment receiving full irrigation, the harvest index obtained was 29%; the harvest index generally shows a decreasing trend under water stress. Calibration and validation of AquaCrop using the dry-season field experiments of 2010/2011 and 2011/2012 show that the model adequately simulates the yield response to different irrigation water scenarios. We conclude that the AquaCrop model can be used for irrigation scheduling and for optimizing the water productivity of teff grown under water-scarce, semi-arid conditions.
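
The goodness-of-fit statistics reported above, r² and Willmott's index of agreement d, can be computed from paired observed/simulated series as follows. These are the standard definitions of both statistics, not code from the study.

```python
def r_squared(obs, sim):
    """Square of the Pearson correlation between observed and simulated values."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = sum((o - mo) ** 2 for o in obs)
    vs = sum((s - ms) ** 2 for s in sim)
    return cov * cov / (vo * vs)

def willmott_d(obs, sim):
    """Willmott's index of agreement d in [0, 1], 1 = perfect agreement:
    d = 1 - sum((O-P)^2) / sum((|P-Omean| + |O-Omean|)^2)."""
    mo = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((abs(s - mo) + abs(o - mo)) ** 2 for o, s in zip(obs, sim))
    return 1 - num / den
```

Unlike r², which measures only linear association, d penalises systematic bias, which is why both are reported together.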

Keywords: AquaCrop, climate smart agriculture, simulation, teff, water security, water stress regions

Procedia PDF Downloads 398
584 Mathematical Model and Algorithm for the Berth and Yard Resource Allocation at Seaports

Authors: Ming Liu, Zhihui Sun, Xiaoning Zhang

Abstract:

This paper studies a deterministic container transportation problem, jointly optimizing berth allocation, quay crane assignment, and yard storage allocation at container ports. The problem is formulated as an integer program to coordinate the decisions. Because of its large scale, it is then transformed into a set partitioning formulation, and a branch-and-price framework is provided to solve it.
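
A set partitioning formulation of this kind can be illustrated on a toy instance (hypothetical vessels, plans, and costs): each column is a feasible berth/crane plan serving a subset of vessels, and the goal is to pick columns that serve every vessel exactly once at minimum cost. In a branch-and-price scheme the columns would be generated on demand by a pricing subproblem rather than enumerated as here.

```python
from itertools import combinations

VESSELS = {"V1", "V2", "V3"}
# Hypothetical columns: (vessels served by one berth-crane plan, plan cost)
COLUMNS = [
    ({"V1"}, 4), ({"V2"}, 5), ({"V3"}, 3),
    ({"V1", "V2"}, 7), ({"V2", "V3"}, 6), ({"V1", "V3"}, 8),
]

def best_partition():
    """Brute-force the set partitioning problem: choose a set of columns
    covering each vessel exactly once with minimum total cost."""
    best, best_cost = None, None
    for k in range(1, len(COLUMNS) + 1):
        for combo in combinations(COLUMNS, k):
            served = [v for subset, _ in combo for v in subset]
            if sorted(served) == sorted(VESSELS):   # exact cover, no overlap
                cost = sum(c for _, c in combo)
                if best_cost is None or cost < best_cost:
                    best, best_cost = combo, cost
    return best, best_cost
```

The exponential blow-up of this enumeration is exactly what motivates column generation within branch-and-price at port scale.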

Keywords: branch-and-price, container terminal, joint scheduling, maritime logistics

Procedia PDF Downloads 290
583 Modelling, Assessment, and Optimisation of Rules for Selected Umgeni Water Distribution Systems

Authors: Khanyisile Mnguni, Muthukrishnavellaisamy Kumarasamy, Jeff C. Smithers

Abstract:

Umgeni Water is a water board that supplies most parts of KwaZulu-Natal with bulk potable water. Currently, Umgeni Water runs its distribution system based on required reservoir levels and demands and does not consider the energy cost at different times of the day, the number of pump switches, or background leakage. Including these constraints can reduce operational cost, energy usage, and leakage, and increase performance. Optimising pump schedules can reduce energy usage and costs while adhering to hydraulic and operational constraints. Umgeni Water has installed online hydraulic software, WaterNet Advisor, that allows different operational scenarios to be run prior to implementation in order to optimise the distribution system. This study will investigate operational scenarios using optimisation techniques and WaterNet Advisor for a local water distribution system. Based on studies reported in the literature, introducing pump-scheduling optimisation can reduce energy usage by approximately 30% without any change in infrastructure; including tariff structures in the optimisation problem can reduce pumping costs by 15%, while including leakage decreases cost by 10%, and pressure drop in the system can be up to 12 m. Genetic optimisation algorithms are widely used because of their ability to solve nonlinear, non-convex, and mixed-integer problems; other methods, such as branch-and-bound linear programming, have also been used successfully. A suitable optimisation method will be chosen based on its efficiency. The objective of the study is to reduce energy usage, operational cost, and leakage, and the feasibility of the optimal solution will be checked using WaterNet Advisor. This study will provide an overview of the optimisation of hydraulic networks and the progress made to date in multi-objective optimisation for a selected sub-system operated by Umgeni Water.
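
The tariff-aware pump-scheduling idea can be sketched with a toy brute-force search; all tariffs, demands, and reservoir figures below are hypothetical, and a real network would use a genetic algorithm or a mathematical-programming solver over a hydraulic model rather than this enumeration.

```python
from itertools import product

TARIFF = [0.5, 0.5, 1.5, 1.5, 2.0, 2.0, 1.0, 1.0]  # cost per kWh, 8 periods
DEMAND = [2, 2, 3, 3, 4, 4, 3, 3]                   # ML drawn per period
PUMP_RATE = 6      # ML pumped per on-period
PUMP_KWH = 100     # energy per on-period
RES_MIN, RES_MAX, RES_START = 2, 12, 6              # reservoir limits (ML)

def cost(schedule):
    """Energy cost of an on/off pump schedule, or None if the reservoir
    level ever leaves its operating band."""
    level, total = RES_START, 0.0
    for on, tariff, demand in zip(schedule, TARIFF, DEMAND):
        level += (PUMP_RATE if on else 0) - demand
        if not RES_MIN <= level <= RES_MAX:
            return None                 # hydraulic constraint violated
        total += on * PUMP_KWH * tariff
    return total

# Enumerate all 2^8 schedules and keep the cheapest feasible one
best = min((s for s in product([0, 1], repeat=8) if cost(s) is not None),
           key=cost)
```

The optimiser naturally shifts pumping toward cheap-tariff periods while the reservoir bounds enforce supply security, which is the trade-off the study targets.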

Keywords: energy usage, pump scheduling, WaterNet Advisor, leakages

Procedia PDF Downloads 90
582 Self-Assembly of TaC@Ta Core-Shell-Like Nanocomposite Film via Solid-State Dewetting: Toward Superior Wear and Corrosion Resistance

Authors: Ping Ren, Mao Wen, Kan Zhang, Weitao Zheng

Abstract:

The improvement of the comprehensive properties of transition metal carbide/nitride (TMC(N)) films, including hardness, toughness, wear, and corrosion resistance, and especially the avoidance of the trade-off between hardness and toughness, is strongly required for various applications. Although incorporating a ductile metal (DM) phase into the TMC(N) via thermally induced phase separation has emerged as an effective approach to toughening TMC(N)-based films, the DM has been limited to a few soft ductile metals (i.e., Cu, Ag, Au) that are immiscible with the TMC(N). Moreover, hardness is highly sensitive to the soft DM content and can be significantly degraded. Hence, a novel preparation method is needed to broaden the DM selection and assemble a more ordered nanocomposite structure with improved comprehensive properties. Here, we provide a new strategy, activating solid-state dewetting during layered deposition, to accomplish the self-assembly of an ordered TaC@Ta core-shell-like nanocomposite film consisting of TaC nanocrystallites encapsulated by a thin pseudocrystalline Ta tissue. The result is a superhard film (~45.1 GPa), dominated by the Orowan strengthening mechanism, with high toughness attributed to an indenter-induced phase transformation from pseudocrystalline to body-centered cubic Ta, together with drastically enhanced wear and corrosion resistance. Furthermore, the very thin pseudocrystalline Ta encapsulating layer (~1.5 nm) in the TaC@Ta core-shell-like structure helps promote the formation of a lubricious TaOₓ Magnéli phase during sliding, further lowering the coefficient of friction. Solid-state dewetting may thus provide a new route to construct ordered TMC(N)@TM core-shell-like nanocomposites combining superhardness, high toughness, low friction, and superior wear and corrosion resistance.

Keywords: corrosion, nanocomposite film, solid-state dewetting, tribology

Procedia PDF Downloads 132
581 Study of University Course Scheduling for Crowd Gathering Risk Prevention and Control in the Context of Routine Epidemic Prevention

Authors: Yuzhen Hu, Sirui Wang

Abstract:

As training bases for intellectual talent, universities host large numbers of students. Teaching is the primary activity in universities, and during the teaching process large numbers of people gather both inside and outside the teaching buildings, posing a strong risk of close contact. The class schedule is the fundamental basis for teaching activities and plays a crucial role in the management of teaching order; different class schedules lead to varying degrees of indoor gathering and different trajectories of class attendees. In recent years, highly contagious diseases have frequently occurred worldwide, and how to reduce the risk of infection has always been a hot issue related to public safety. "Reducing gatherings" is one of the core measures in epidemic prevention and control, and in specific environments it can be achieved through scientific scheduling. The prevention and control goal can therefore be pursued by reducing the risk of excessive gathering when arranging the course schedule. Firstly, we address the issue of crowds gathering along the various pathways on campus and establish a nonlinear mathematical model with the goals of minimizing congestion and maximizing teaching effectiveness. Next, we design an improved genetic algorithm that incorporates real-time evacuation operations based on tracking search and multidimensional positive-gradient cross-mutation operations, considering the characteristics of outdoor crowd evacuation. Finally, we apply undergraduate course data from a university in Harbin in a case study, comparing and analyzing the effects of the algorithm improvements on the optimization of gathering, and exploring the impact of path blocking on the degree of gathering on other pathways.
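
At toy scale, the gathering-minimisation objective reduces to balancing section enrolments across timeslots. The exhaustive search below, over hypothetical sections and slots, stands in for the paper's genetic algorithm, which becomes necessary once realistic instance sizes make enumeration infeasible.

```python
from itertools import product

# Hypothetical course sections: (course_name, enrolled_students)
SECTIONS = [("Calc", 120), ("Phys", 90), ("Chem", 80), ("Prog", 110)]
SLOTS = 2   # available teaching periods

def peak_gathering(assignment):
    """Largest number of students gathered in any one timeslot under the
    given section-to-slot assignment."""
    load = [0] * SLOTS
    for (_, students), slot in zip(SECTIONS, assignment):
        load[slot] += students
    return max(load)

# Exhaustive search over all slot assignments; a GA would search this
# space heuristically for real timetables.
best = min(product(range(SLOTS), repeat=len(SECTIONS)), key=peak_gathering)
```

With 400 students and two slots, the optimum balances the load at 200 per slot; the paper's model additionally weighs pathway congestion and teaching effectiveness.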

Keywords: the university timetabling problem, risk prevention, genetic algorithm, risk control

Procedia PDF Downloads 83
580 Employing Remotely Sensed Soil and Vegetation Indices and Predicting by Long Short-Term Memory to Irrigation Scheduling Analysis

Authors: Elham Koohikerade, Silvio Jose Gumiere

Abstract:

In this research, irrigation is highlighted as crucial for improving both the yield and quality of potatoes, given their high sensitivity to changes in soil moisture. The study presents a hybrid Long Short-Term Memory (LSTM) model aimed at optimizing irrigation scheduling in potato fields in Quebec City, Canada. The model integrates model-based and satellite-derived datasets to simulate soil moisture content, addressing the limitations of field data. Developed following Food and Agriculture Organization (FAO) guidance, the simulation approach compensates for the lack of direct soil sensor data, enhancing the LSTM model's predictions. The model was calibrated using indices such as Surface Soil Moisture (SSM), the Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), and the Normalized Multi-band Drought Index (NMDI) to forecast soil moisture reductions. Understanding soil moisture and plant development is crucial for assessing drought conditions and determining irrigation needs. This study validated the spectral characteristics of vegetation and soil using ECMWF Reanalysis v5 (ERA5) and Moderate Resolution Imaging Spectroradiometer (MODIS) data from 2019 to 2023, collected from agricultural areas in Dolbeau and Peribonka, Quebec. Parameters such as surface volumetric soil moisture (0-7 cm), NDVI, EVI, and NMDI were extracted from these images. A regional four-year dataset of soil and vegetation moisture was developed using a machine learning approach combining model-based and satellite-based datasets. The LSTM model predicts soil moisture dynamics hourly across different locations and times, with its accuracy verified through cross-validation and comparison with existing soil moisture datasets. The model effectively captures temporal dynamics, making it valuable for applications requiring soil moisture monitoring over time, such as anomaly detection and memory analysis. 
By identifying typical peak soil moisture values and observing distribution shapes, irrigation can be scheduled to maintain soil moisture within volumetric soil moisture (VSM) values of 0.25 to 0.30 m³/m³, avoiding both under- and over-watering. The strong correlations between parcels suggest that a uniform irrigation strategy might be effective across multiple parcels, with adjustments based on specific parcel characteristics and historical data trends. The application of the LSTM model to predict soil moisture and vegetation indices yielded mixed results: while the model effectively captures the central tendency and temporal dynamics of soil moisture, it struggles to accurately predict EVI, NDVI, and NMDI.
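
The gating mechanics that let an LSTM carry soil-moisture "memory" across time steps can be sketched with a single scalar cell in pure Python. The weights and readings below are toy values for illustration, not the trained model from the study.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One forward step of a scalar LSTM cell. w holds the (input, recurrent,
    bias) weights for the forget, input, output and candidate gates."""
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate state
    c = f * c_prev + i * g        # cell state blends old memory with new input
    h = o * math.tanh(c)          # hidden state: the exposed estimate
    return h, c

# Run a toy soil-moisture-like series (VSM-style readings) through the cell
weights = {k: 0.5 for k in
           ["wf", "uf", "bf", "wi", "ui", "bi", "wo", "uo", "bo", "wg", "ug", "bg"]}
h, c = 0.0, 0.0
for x in [0.30, 0.28, 0.27, 0.25]:
    h, c = lstm_step(x, h, c, weights)
```

The forget gate is what allows the network to retain slow-moving moisture trends while discounting transient spikes, the property the abstract describes as capturing temporal dynamics.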

Keywords: irrigation scheduling, LSTM neural network, remotely sensed indices, soil and vegetation monitoring

Procedia PDF Downloads 38
579 Characterising the Dynamic Friction in the Staking of Plain Spherical Bearings

Authors: Jacob Hatherell, Jason Matthews, Arnaud Marmier

Abstract:

Anvil staking is a cold-forming process used in the assembly of plain spherical bearings into a rod-end housing. The process ensures that the bearing outer lip conforms to the chamfer in the matching rod end to produce a lightweight mechanical joint with sufficient strength to meet the push-out load requirement of the assembly. Finite element (FE) analysis is used extensively to predict the behaviour of metal flow in cold-forming processes in support of industrial manufacturing and product development. Ongoing research aims to validate FE models across a wide range of bearing and rod-end geometries by systematically isolating and understanding the uncertainties caused by variations in material properties, load-dependent friction coefficients, and strain-rate sensitivity. The improved confidence in these models aims to eliminate the costly and time-consuming experimental trials otherwise required when introducing new bearing designs. Previous literature has shown that friction coefficients do not remain constant during cold-forming operations; however, the understanding of this phenomenon varies significantly, and it is rarely implemented in FE models. In this paper, a new approach to evaluating the relationship between normal contact pressure and friction coefficient is outlined, using friction calibration charts generated via iterative FE models and ring compression tests. Compared with previous research, this new approach greatly improves the prediction of the formed geometry and the forming load during the staking operation. The paper also aims to standardise the FE approach to modelling ring compression tests and determining friction calibration charts.
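
In an FE implementation, a pressure-dependent friction coefficient of this kind is typically applied as a lookup on the calibration chart. A minimal piecewise-linear version, with entirely hypothetical chart values, might look like:

```python
# Hypothetical calibration-chart data: (normal contact pressure in MPa, mu),
# ordered by increasing pressure. Real values come from the ring compression
# tests and iterative FE runs described above.
CHART = [(50, 0.25), (150, 0.18), (300, 0.12), (600, 0.08)]

def friction_coefficient(pressure):
    """Piecewise-linear interpolation of the friction coefficient at the
    given contact pressure, clamped at the chart ends."""
    if pressure <= CHART[0][0]:
        return CHART[0][1]
    for (p0, m0), (p1, m1) in zip(CHART, CHART[1:]):
        if pressure <= p1:
            return m0 + (m1 - m0) * (pressure - p0) / (p1 - p0)
    return CHART[-1][1]
```

Evaluating this lookup at each contact point, instead of assuming a single constant coefficient, is what lets the model reflect the pressure dependence reported in the literature.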

Keywords: anvil staking, finite element analysis, friction coefficient, spherical plain bearing, ring compression tests

Procedia PDF Downloads 204
578 Laboratory-Based Monitoring of Hepatitis B Virus Vaccination Status in North Central Nigeria

Authors: Nwadioha Samuel Iheanacho, Abah Paul, Odimayo Simidele Michael

Abstract:

Background: Through the Global Health Sector Strategy on viral hepatitis, the World Health Assembly calls for the elimination of viral hepatitis as a public health threat by 2030. All hands are on deck to actualize this goal through effective, active vaccination and monitoring tools. Aim: To combine epidemiologic and laboratory Hepatitis B virus vaccination monitoring tools. Method: Laboratory results of subjects recruited during World Hepatitis Week from July 2020 to July 2021 were analysed in the Medical Microbiology Laboratory of Benue State University Teaching Hospital, Nigeria, after obtaining epidemiologic data on Hepatitis B virus risk factors. Result: A total of 500 subjects were recruited, comprising 60.0% males (n=300/500) and 40.0% females (n=200/500). A fifty-three percent majority were in the age range of 26 to 36 years. Serologic profiles were as follows: 15.0% (n=75/500) HBsAg; 7.0% (n=35/500) HBeAg; 8.0% (n=40/500) Anti-HBe; 20.0% (n=100/500) Anti-HBc; and 38.0% (n=190/500) Anti-HBs. Immune responses to vaccination were as follows: 47.0% (n=235/500) immune naïve {no serologic marker + normal ALT}; 33.0% (n=165/500) immunity by vaccination {Anti-HBs + normal ALT}; 5.0% (n=25/500) immunity due to previous infection {Anti-HBs, Anti-HBc, +/- Anti-HBe + normal ALT}; 8.0% (n=40/500) carriers {HBsAg, Anti-HBc, Anti-HBe + normal ALT}; and 7.0% (n=35/500) Anti-HBe-negative infections {HBsAg, HBeAg, Anti-HBc + elevated ALT}. Conclusion: The present 33.0% immunity-by-vaccination coverage in Central Nigeria is much lower than the 41.0% national peak in 2013 and a far cry from the global expectation of eliminating viral hepatitis as a public health threat by 2030 under the Global Health Sector Strategy. Therefore, more creative ideas and collective effort are needed to attain this goal of the World Health Assembly.
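
The profile-to-status rules listed in the results can be encoded directly as a small classifier. This is a sketch of the interpretation logic only, not a clinical tool; the marker names and rules follow the abstract.

```python
def classify(markers: set, alt_normal: bool) -> str:
    """Map a set of positive HBV serologic markers plus ALT status to the
    vaccination-monitoring categories used in the study."""
    if not markers and alt_normal:
        return "immune naive"
    if "HBsAg" in markers:
        if "HBeAg" in markers and "Anti-HBc" in markers and not alt_normal:
            return "Anti-HBe-negative infection"
        if "Anti-HBc" in markers and "Anti-HBe" in markers and alt_normal:
            return "carrier"
    elif "Anti-HBs" in markers and alt_normal:
        if "Anti-HBc" in markers:          # Anti-HBe may or may not be present
            return "immunity by previous infection"
        return "immunity by vaccination"
    return "indeterminate"
```

Encoding the rules this way makes the laboratory monitoring tool reproducible across sites, which matters in resource-limited settings.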

Keywords: Hepatitis B, vaccination status, laboratory tools, resource-limited settings

Procedia PDF Downloads 71
577 Creation of S-Box in Blowfish Using AES

Authors: C. Rekha, G. N. Krishnamurthy

Abstract:

This paper develops a different approach to the key scheduling algorithm that uses both the Blowfish and AES algorithms. The main drawback of the Blowfish algorithm is that it takes considerable time to create the S-box entries. To overcome this, the process of S-box creation in Blowfish is replaced by key-dependent S-box creation derived from AES, without affecting the basic operation of Blowfish. The method proposed in this paper combines good features of Blowfish and AES, and the paper demonstrates the performance of Blowfish and the new algorithm by considering different aspects of security, namely encryption quality, key sensitivity, and the correlation of horizontally adjacent pixels in an encrypted image.
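
To make the idea of a key-dependent S-box concrete, the sketch below derives a 256-entry permutation from a key using a Fisher-Yates shuffle driven by SHA-256 output. This only illustrates key dependence and key sensitivity; it is not the AES-based construction of the paper, nor Blowfish's own expensive derivation by repeated encryption.

```python
import hashlib

def key_dependent_sbox(key: bytes, size: int = 256):
    """Illustrative key-dependent S-box: a permutation of 0..size-1 built by
    a Fisher-Yates shuffle whose swap indices come from SHA-256 of the key
    concatenated with a counter (a simple keyed byte stream)."""
    sbox = list(range(size))
    stream, counter = b"", 0
    for i in range(size - 1, 0, -1):
        if not stream:   # refill the keyed byte stream on demand
            stream = hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        j = int.from_bytes(stream[:2], "big") % (i + 1)
        stream = stream[2:]
        sbox[i], sbox[j] = sbox[j], sbox[i]
    return sbox
```

Because every swap index depends on the key, flipping a single key bit yields a very different permutation, which is the property the paper's key-sensitivity metric measures.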

Keywords: AES, blowfish, correlation coefficient, encryption quality, key sensitivity, s-box

Procedia PDF Downloads 222
576 Field Management Solutions Supporting Foreman Executive Tasks

Authors: Maroua Sbiti, Karim Beddiar, Djaoued Beladjine, Romuald Perrault

Abstract:

Productivity in construction is declining compared with the manufacturing industry. The sector appears to suffer from organizational problems and low maturity regarding technological advances. High international competition due to the growing context of globalization, complex projects, and shorter deadlines increase these challenges. Field employees are more exposed to coordination problems than design officers, so collaboration during execution is a major issue that can threaten the cost, time, and quality of project completion. This paper first identifies field professionals' requirements in order to address weaknesses in the building management process, such as unreliable scheduling, erratic monitoring and inspection processes, inaccurate project indicators, inconsistent building documents, and haphazard logistics management. We then focus on providing solutions to improve the scheduling, inspection, and hours-tracking processes, using emerging lean tools and field mobility applications that bring new perspectives on cooperation. These tools have shown a great ability to connect the various field teams and to make information visual and accessible, enabling accurate planning and the elimination of potential defects at the source. In addition to software-as-a-service, adopting the human resource module of an Enterprise Resource Planning system allows meticulous time accounting and thus faster decision-making. The next step is to integrate external data sources received from, or destined for, design engineers, logisticians, and suppliers in a holistic system. Creating a monolithic system that consolidates planning, quality, procurement, and resource management modules should be the ultimate target for building the construction industry supply chain.

Keywords: lean, last planner system, field mobility applications, construction productivity

Procedia PDF Downloads 113
575 Thermal Hydraulic Analysis of Sub-Channels of Pressurized Water Reactors with Hexagonal Array: A Numerical Approach

Authors: Md. Asif Ullah, M. A. R. Sarkar

Abstract:

This paper illustrates 2-D and 3-D simulations of the sub-channels of a Pressurized Water Reactor (PWR) with a hexagonal array of fuel rods. At steady state, the temperature of the outer surface of the fuel rod cladding is kept at about 1200°C, and this isothermal surface is taken as a boundary condition for the simulation. Water at 290°C is supplied as coolant to the primary circuit, which is pressurized up to 157 bar; the turbulent flow of pressurized water removes the heat. In the 2-D model, the temperature, velocity, pressure, and Nusselt number distributions are simulated in a vertical sectional plane through the sub-channels of a hexagonal fuel rod assembly. The temperature, Nusselt number, and y-component of the convective heat flux along a line in this plane near the end of the fuel rods are plotted for different Reynolds numbers, and the x- and y-components of the convective heat flux in this vertical plane are compared. A hexagonal fuel rod assembly has three types of sub-channels according to geometrical shape, with correspondingly different boundary conditions. In the 3-D model, the temperature, velocity, pressure, Nusselt number, and total heat flux magnitude distributions for all three sub-channel types are studied for a suitable Reynolds number. A horizontal sectional plane is taken from each of the three sub-channels to study the temperature, velocity, pressure, Nusselt number, and convective heat flux distributions within it. Greater values of temperature, Nusselt number, and y-component of convective heat flux are found at greater Reynolds numbers. The x-component of the convective heat flux is non-zero near the bottom of the fuel rod and zero near its end, indicating that near the outlet the convective heat transfer occurs entirely along the flow direction. Because the length-to-radius ratio of the sub-channels is very high, the simulations are performed over a short length of the sub-channels for computational convenience. 
For the simulations, the Turbulent Flow (k-ε) module and the Heat Transfer in Fluids (ht) module of COMSOL Multiphysics 5.0 are used.
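The abstract's finding that the Nusselt number grows with Reynolds number can be illustrated with the classical Dittus-Boelter correlation for fully developed turbulent pipe flow. This is only a hedged sketch, not the COMSOL model used in the study; the Reynolds and Prandtl values below are illustrative assumptions, not values from the paper.

```python
# Illustrative Nusselt number estimate for turbulent coolant flow,
# using the Dittus-Boelter correlation Nu = 0.023 * Re^0.8 * Pr^0.4
# (exponent 0.4 for heating of the fluid). Property values are rough
# assumptions for pressurized water, not taken from the paper.

def dittus_boelter_nu(re, pr):
    """Nusselt number for fully developed turbulent flow (heating case)."""
    return 0.023 * re**0.8 * pr**0.4

re = 1.0e5   # assumed Reynolds number in the sub-channel
pr = 1.0     # assumed Prandtl number for hot pressurized water

nu = dittus_boelter_nu(re, pr)
print(f"Nu = {nu:.0f}")
```

Because Nu scales as Re^0.8, doubling the Reynolds number raises the predicted Nusselt number, consistent with the trend reported in the abstract.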

Keywords: sub-channels, Reynolds number, Nusselt number, convective heat transfer

Procedia PDF Downloads 357
574 Modeling and Design of a Solar Thermal Open Volumetric Air Receiver

Authors: Piyush Sharma, Laltu Chandra, P. S. Ghoshdastidar, Rajiv Shekhar

Abstract:

Metal processing operations such as melting and heat treatment are energy-intensive, requiring temperatures greater than 500°C. The desired temperature in these industrial furnaces is attained by circulating electrically heated air, and in most of them the electricity comes from captive coal-based thermal power plants. Solar thermal energy could be a viable heat source in these furnaces. A retrofitted solar convective furnace (SCF) concept, which uses solar thermal generated hot air, has been proposed. Critical to the success of an SCF is the design of an open volumetric air receiver (OVAR) that can heat air in excess of 800°C. The OVAR is placed on top of a tower and receives concentrated solar radiation from a heliostat field. The absorbers, the mixer assembly, and the return air flow chamber (RAFC) are the major components of an OVAR. The absorber is a porous structure that transfers heat from the concentrated solar radiation to ambient air, referred to as primary air. The mixer ensures a uniform air temperature at the receiver exit, and the flow of the relatively cooler return air in the RAFC keeps the absorbers from failing by overheating. In an earlier publication, the detailed design basis, fabrication, and characterization of a 2 kWth OVAR-based laboratory solar air tower simulator were presented. The major objective of this investigation has been the development of an experimentally validated, CFD-based mathematical model that can ultimately be used for the design and scale-up of an OVAR. In contrast to the published literature, where flow and heat transfer have been modeled primarily in a single absorber module, the present study models the entire receiver assembly, including the RAFC. Flow and heat transfer calculations have been carried out in ANSYS using the local thermal non-equilibrium (LTNE) model. The complex return air flow pattern in the RAFC requires complicated meshes and is computationally intensive and time-consuming.
Hence, a simple, realistic 1-D mathematical model, which circumvents the need for detailed flow and heat transfer calculations, has also been proposed. Several important results emerged from this investigation. Circumferential electrical heating of the absorbers mimics frontal heating by concentrated solar radiation reasonably well when testing and characterizing the performance of an OVAR; circumferential heating therefore obviates the need for expensive high-concentration solar simulators. Predictions suggest that the ratio of power on aperture (POA) to mass flow rate of air (MFR) is a normalizing parameter for characterizing the thermal performance of an OVAR: increasing POA/MFR raises the maximum air temperature but decreases the thermal efficiency. Predictions of the 1-D mathematical model are within 5% of the ANSYS predictions, while the computation time is reduced from ~5 hours to a few seconds.
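Why POA/MFR acts as a normalizing parameter can be seen from a minimal lumped energy balance on the receiver: for a fixed receiver efficiency, the air temperature rise depends only on the ratio of absorbed power to mass flow rate. The sketch below is not the authors' validated 1-D model; the efficiency and specific-heat values are assumptions for illustration.

```python
# Minimal lumped energy balance: T_out = T_in + eta * POA / (MFR * cp).
# Two operating points with the same POA/MFR ratio therefore yield the
# same outlet temperature. eta (assumed receiver efficiency) and cp are
# illustrative values, not results from the paper.

CP_AIR = 1005.0  # J/(kg*K), approximate specific heat of air

def air_outlet_temp(t_in_c, poa_w, mfr_kg_s, eta=0.7):
    """Outlet air temperature (deg C) from an energy balance on the receiver."""
    return t_in_c + eta * poa_w / (mfr_kg_s * CP_AIR)

# Same POA/MFR ratio at two scales gives the same outlet temperature.
t1 = air_outlet_temp(30.0, 2000.0, 0.002)   # 2 kWth on aperture, 2 g/s of air
t2 = air_outlet_temp(30.0, 4000.0, 0.004)   # doubled power and flow, same ratio
print(f"t1 = {t1:.1f} C, t2 = {t2:.1f} C")
```

In this balance, raising POA/MFR increases the outlet temperature linearly; the efficiency penalty reported in the abstract comes from losses (radiation, convection) that grow with absorber temperature and are not captured by this constant-eta sketch.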

Keywords: absorbers, mixer assembly, open volumetric air receiver, return air flow chamber, solar thermal energy

Procedia PDF Downloads 192