Search results for: simulation of metal spinning
536 Vehicles Analysis, Assessment and Redesign Related to Ergonomics and Human Factors
Authors: Susana Aragoneses Garrido
Abstract:
Every day, roads are the scene of numerous accidents involving vehicles, producing thousands of deaths and serious injuries all over the world. Investigations have revealed that Human Factors (HF) are one of the main causes of road accidents in modern societies. Distracted driving (involving aspects both external and internal to the vehicle), which is considered a human factor, is a serious and emerging risk to road safety. Consequently, further analysis of this issue is essential given its significance for today's society. The objectives of this investigation are the detection and assessment of HF in order to provide solutions (including better vehicle design) that might mitigate road accidents. The methodology of the project is divided into several phases. First, a statistical analysis of public databases from Spain and the UK is provided. Second, the data are classified in order to analyse the major causes involved in road accidents. Third, a simulation across different paths and vehicles is presented, and the causes related to HF are assessed by Failure Mode and Effects Analysis (FMEA). Fourth, different car models are evaluated using Rapid Upper Limb Assessment (RULA). Additionally, the JACK Siemens PLM tool is used to evaluate the Human Factor causes and to support the redesign of the vehicles. Finally, improvements in car design are proposed with the intention of reducing the implication of HF in traffic accidents. The results from the statistical analysis, the simulations and the evaluations confirm that accidents are an important issue in today's society, especially accidents caused by HF such as distractions. The results explore the reduction of external and internal HF through a global risk analysis of vehicle accidents.
Moreover, the evaluation of the different car models using the RULA method and the JACK Siemens PLM tool proves the importance of a well-adjusted driver's seat in order to avoid harmful postures and, therefore, distractions. For this reason, a car redesign is proposed so that the driver can acquire the optimum position, consequently reducing the role of human factors in road accidents.
Keywords: vehicle analysis, assessment, ergonomics, car redesign
Procedia PDF Downloads 336
535 Objective Assessment of the Evolution of Microplastic Contamination in Sediments from a Vast Coastal Area
Authors: Vanessa Morgado, Ricardo Bettencourt da Silva, Carla Palma
Abstract:
The environmental pollution caused by microplastics is well recognized. Microplastics have already been detected in various matrices from distinct environmental compartments worldwide, including some from remote areas. Various methodologies and techniques have been used to determine microplastics in such matrices, for instance, sediment samples from the ocean bottom. To determine microplastics in a sediment matrix, the sample is typically sieved through a 5 mm mesh, digested to remove the organic matter, and density-separated to isolate microplastics from the denser part of the sediment. The physical analysis of microplastics consists of visual analysis under a stereomicroscope to determine particle size, colour, and shape. The chemical analysis is performed with an infrared spectrometer coupled to a microscope (micro-FTIR), allowing identification of the chemical composition of the microplastic, i.e., the type of polymer. Creating legislation and policies to control and manage (micro)plastic pollution is essential to protect the environment, namely coastal areas; such regulation is defined from the known relevance and trends of the pollution. This work discusses the assessment of contamination trends in a 700 km² oceanic area, taking into account contamination heterogeneity, sampling representativeness, and the uncertainty of the analysis of collected samples. The methodology developed consists of objectively identifying meaningful variations in microplastic contamination by Monte Carlo simulation of all uncertainty sources. This work allowed us to conclude unequivocally that the contamination level of the studied area did not vary significantly between two consecutive years (2018 and 2019) and that PET microplastics are the major polymer type. The comparison of contamination levels was performed at a 99% confidence level.
The developed know-how is crucial for the objective and binding determination of microplastic contamination in relevant environmental compartments.
Keywords: measurement uncertainty, micro-ATR-FTIR, microplastics, ocean contamination, sampling uncertainty
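The significance test described in this abstract (Monte Carlo propagation of sampling and analytical uncertainty, compared at a 99% confidence level) can be sketched as follows. All numeric values below are illustrative placeholders, not the study's data, and the normal model for the combined uncertainties is an assumption:

```python
import random

def simulate_difference(mean_a, sd_a, mean_b, sd_b, n=100_000, seed=42):
    """Monte Carlo simulation of the difference between two measured
    contamination levels, each modelled as a normal distribution that
    combines sampling and analytical uncertainty."""
    rng = random.Random(seed)
    return [rng.gauss(mean_a, sd_a) - rng.gauss(mean_b, sd_b) for _ in range(n)]

def is_significant(diffs, confidence=0.99):
    """The variation is deemed meaningful at the given confidence level if
    the central interval of the simulated differences excludes zero."""
    diffs = sorted(diffs)
    alpha = 1.0 - confidence
    lo = diffs[int(alpha / 2 * len(diffs))]
    hi = diffs[int((1 - alpha / 2) * len(diffs)) - 1]
    return not (lo <= 0.0 <= hi)

# Illustrative values: mean microplastic load (items/kg dry sediment)
# in 2018 vs 2019, with combined standard uncertainties
diffs = simulate_difference(mean_a=120.0, sd_a=30.0, mean_b=110.0, sd_b=28.0)
print(is_significant(diffs))  # overlapping uncertainties: no significant change
```

With large uncertainties relative to the year-to-year difference, the interval straddles zero, mirroring the paper's conclusion that contamination did not vary significantly between 2018 and 2019.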
Procedia PDF Downloads 92
534 Zero Energy Buildings in Hot-Humid Tropical Climates: Boundaries of the Energy Optimization Grey Zone
Authors: Nakul V. Naphade, Sandra G. L. Persiani, Yew Wah Wong, Pramod S. Kamath, Avinash H. Anantharam, Hui Ling Aw, Yann Grynberg
Abstract:
Achieving zero-energy targets in existing buildings is known to be a difficult task, requiring deep cuts in building energy consumption that in many cases clash with the functional necessities of the building wherever on-site energy generation is unable to match the overall consumption. Between the building's consumption optimization limit and the energy target stretches a case-specific optimization grey zone, which requires tailored intervention and enhanced commitment from users. In view of the future adoption of more stringent energy-efficiency targets in hot-humid tropical climates, this study aims to define the energy optimization grey zone by assessing the energy-efficiency limit of state-of-the-art, typical mid- and high-rise fully air-conditioned (AC) office buildings through the integration of currently available technologies. Energy models of two code-compliant generic office-building typologies were developed as a baseline: a 20-storey 'high-rise' and a 7-storey 'mid-rise'. Design iterations carried out on the energy models with advanced, market-ready technologies in lighting, envelope, plug-load management, and ACMV systems and controls led to a representative energy model of the current maximum technical potential. The simulations showed that ZEB targets could be achieved in fully AC buildings of, on average, seven floors or fewer, and only by compromising on energy-intense amenities (such as full AC, unlimited power supply, and standard user behaviour). This paper argues that drastic changes must be made in tropical buildings to span the energy optimization grey zone and achieve zero energy. Fully air-conditioned areas must be rethought, while smart technologies must be integrated with aggressive involvement and motivation of the users to synchronize with the new system's energy-saving goals.
Keywords: energy simulation, office building, tropical climate, zero energy buildings
Procedia PDF Downloads 185
533 Exploring the Design of Prospective Human Immunodeficiency Virus Type 1 Reverse Transcriptase Inhibitors through a Comprehensive Approach of Quantitative Structure Activity Relationship Study, Molecular Docking, and Molecular Dynamics Simulations
Authors: Mouna Baassi, Mohamed Moussaoui, Sanchaita Rajkhowa, Hatim Soufi, Said Belaaouad
Abstract:
The objective of this paper is to address the challenging task of targeting Human Immunodeficiency Virus type 1 Reverse Transcriptase (HIV-1 RT) in the treatment of AIDS. Reverse Transcriptase inhibitors (RTIs) have limitations due to the development of Reverse Transcriptase mutations that lead to treatment resistance. In this study, a combination of statistical analysis and bioinformatics tools was adopted to develop a mathematical model that relates the structure of compounds to their inhibitory activities against HIV-1 Reverse Transcriptase. Our approach was based on a series of compounds recognized for their HIV-1 RT enzymatic inhibitory activities. These compounds were designed in silico, with their descriptors computed using multiple tools. The most statistically promising model was chosen, and its domain of applicability was ascertained. Furthermore, compounds exhibiting biological activity comparable to existing drugs were identified as potential inhibitors of HIV-1 RT. The compounds were evaluated on their absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties and on their adherence to Lipinski's rule. Molecular docking techniques were employed to examine the interaction between Reverse Transcriptase (wild type and mutant type) and the ligands, including a known drug available on the market. Molecular dynamics simulations were also conducted to assess the stability of the RT-ligand complexes. Our results reveal several of the new compounds as promising candidates for effectively inhibiting HIV-1 Reverse Transcriptase, matching the potency of the established drug; this necessitates further experimental validation. Beyond its immediate results, this study provides a methodological foundation for future endeavors aiming to discover and design new inhibitors targeting HIV-1 Reverse Transcriptase.
Keywords: QSAR, ADMET properties, molecular docking, molecular dynamics simulation, reverse transcriptase inhibitors, HIV type 1
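The Lipinski screen mentioned in the abstract is a simple, well-known rule of five; a minimal sketch is shown below. The descriptor values in the example are illustrative, not those of the study's compounds:

```python
def passes_lipinski(mol_weight, logp, h_bond_donors, h_bond_acceptors):
    """Classic Lipinski rule of five: a compound is flagged as likely
    orally bioavailable if it violates at most one of the four criteria."""
    violations = sum([
        mol_weight > 500,       # molecular weight <= 500 Da
        logp > 5,               # octanol-water partition coefficient <= 5
        h_bond_donors > 5,      # hydrogen-bond donors <= 5
        h_bond_acceptors > 10,  # hydrogen-bond acceptors <= 10
    ])
    return violations <= 1

# Illustrative candidate descriptors (not taken from the study)
print(passes_lipinski(mol_weight=420.5, logp=3.2,
                      h_bond_donors=2, h_bond_acceptors=6))  # True
```

In practice the descriptors would come from the same computational tools used for the ADMET evaluation, and the screen would be applied before docking.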
Procedia PDF Downloads 93
532 Forming-Free Resistive Switching Effect in ZnₓTiᵧHfzOᵢ Nanocomposite Thin Films for Neuromorphic Systems Manufacturing
Authors: Vladimir Smirnov, Roman Tominov, Vadim Avilov, Oleg Ageev
Abstract:
The creation of a new generation of micro- and nanoelectronic elements opens up unlimited possibilities for improving the parameters of electronic devices, as well as for developing neuromorphic computing systems. Interest in the latter grows every year, which is explained by the need to solve problems related to the classification of unstructured data, the construction of self-adaptive systems, and pattern recognition. However, its technical implementation requires that a number of conditions be fulfilled for the basic parameters of electronic memory: non-volatility, multi-bit capability, high integration density, and low power consumption. Several types of memory are presented in the electronics industry (MRAM, FeRAM, PRAM, ReRAM), among which non-volatile resistive memory (ReRAM) is especially distinguished due to its multi-bit property, which is necessary for manufacturing neuromorphic systems. ReRAM is based on the resistive switching effect: a change in the resistance of an oxide film between a low-resistance state (LRS) and a high-resistance state (HRS) under an applied electric field. One method for the technical implementation of neuromorphic systems is the cross-bar structure, in which ReRAM cells are interconnected by crossing data buses. Such a structure imitates the architecture of the biological brain, which contains low-power computing elements (neurons) connected by special channels (synapses). The choice of the ReRAM oxide film material is an important task that determines the characteristics of the future neuromorphic system. An analysis of the literature showed that many metal oxides (TiO2, ZnO, NiO, ZrO2, HfO2) exhibit a resistive switching effect. It is worth noting that manufacturing nanocomposites based on these materials makes it possible to combine the advantages and mitigate the disadvantages of each material.
Therefore, the ZnₓTiᵧHfzOᵢ nanocomposite was chosen as the basis for manufacturing the neuromorphic structures. It is also worth noting that the ZnₓTiᵧHfzOᵢ nanocomposite does not require electroforming, a process that degrades the parameters of the formed ReRAM elements. This material is not yet well studied; therefore, investigating the resistive switching effect in the forming-free ZnₓTiᵧHfzOᵢ nanocomposite is an important task and the goal of this work. A forming-free nanocomposite ZnₓTiᵧHfzOᵢ thin film was grown by pulsed laser deposition (Pioneer 180, Neocera Co., USA) on a SiO2/TiN (40 nm) substrate. Electrical measurements were carried out using a semiconductor characterization system (Keithley 4200-SCS, USA) with W probes; during the measurements, the TiN film was grounded. Analysis of the obtained current-voltage characteristics showed resistive switching from HRS to LRS at +1.87±0.12 V and from LRS to HRS at -2.71±0.28 V. An endurance test over 100 measurements showed an HRS of 283.21±32.12 kΩ and an LRS of 1.32±0.21 kΩ, corresponding to an HRS/LRS ratio of about 214.55 at a reading voltage of 0.6 V. These results can be useful for applying forming-free nanocomposite ZnₓTiᵧHfzOᵢ films in the manufacturing of neuromorphic systems. This work was supported by RFBR, research project № 19-29-03041 mk. The results were obtained using the equipment of the Research and Education Center «Nanotechnologies» of Southern Federal University.
Keywords: nanotechnology, nanocomposites, neuromorphic systems, RRAM, pulsed laser deposition, resistive switching effect
Procedia PDF Downloads 132
531 Lateral Torsional Buckling: Tests on Glued Laminated Timber Beams
Authors: Vera Wilden, Benno Hoffmeister, Markus Feldmann
Abstract:
Glued laminated timber (glulam) is a preferred choice for long-span girders, e.g., for gyms or storage halls. While the material provides sufficient strength to resist the bending moments, large spans lead to increased slenderness of such members and to a higher susceptibility to stability issues, in particular to lateral torsional buckling (LTB). Rules for the determination of the ultimate LTB resistance are provided by Eurocode 5. The verification of the resistance may be performed using the so-called equivalent member method or by means of second-order theory calculations (direct method), considering equivalent imperfections. Both methods have significant limitations concerning their applicability: the equivalent member method is limited to rather simple cases, while the direct method lacks detailed provisions regarding imperfections and requirements for numerical modeling. In this paper, the results of a test series on slender glulam beams in three- and four-point bending are presented. The tests were performed in an innovative, newly developed testing rig allowing for a very precise definition of loading and boundary conditions. The load was introduced by a hydraulic jack, which followed the lateral deformation of the beam by means of a servo-controller coupled with the tested member, keeping the load direction vertical. The deformation-controlled tests allowed for the identification of the ultimate limit state (governed by elastic stability) and the corresponding deformations. Prior to the tests, the structural and geometrical imperfections were determined and later used in the numerical models. After the stability tests, the nearly undamaged members were tested again in pure bending until reaching the ultimate moment resistance of the cross-section.
These results, accompanied by numerical studies, were compared to resistance values obtained using both methods according to Eurocode 5.
Keywords: experimental tests, glued laminated timber, lateral torsional buckling, numerical simulation
Procedia PDF Downloads 240
530 Designed Purine Molecules and in-silico Evaluation of Aurora Kinase Inhibition in Breast Cancer
Authors: Pooja Kumari, Anandkumar Tengli
Abstract:
The aurora kinase enzyme is a protein whose overexpression leads to metastasis, making it extremely important for women's health in terms of prevention and treatment. In creating a targeted technique, the aim of this work is to design purine molecules that inhibit the aurora kinase enzyme and help suppress breast cancer. Purine molecules attached to an amino acid in DNA block protein synthesis or halt the replication and metastasis caused by the aurora kinase enzyme. Various proteins related to the overexpression of the aurora protein were docked with purine molecules using the Biovia Drug Discovery software. Various parameters, such as the X-ray crystallographic structure, the presence of a ligand, the Ramachandran plot, and resolution, were taken into consideration in selecting the target protein. Molecules with higher negative binding scores were taken forward for simulation studies. According to the available research and computational analyses, purine compounds may be powerful enough to demonstrate a high affinity for the aurora target. Although purines are already in clinical use, here they are intended to fight breast cancer by inhibiting the aurora kinase enzyme. In in-silico studies, it is observed that purine compounds have moderate to high potency compared to other molecules, and our review of the literature revealed that purine molecules carry a lower risk of side effects. The research involves the design, synthesis, and identification of active purine molecules against breast cancer. Purines are structurally similar to the normal metabolites adenine and guanine; hence, they interfere or compete with protein synthesis and suppress the abnormal proliferation of cells and tissues.
As a result, purines target metastatic cells and stop kinase-driven growth; purine derivatives bind to DNA and the aurora protein, which may stop protein synthesis or inhibit replication, halting the metastasis driven by the overexpressed aurora kinase enzyme.
Keywords: aurora kinases, in silico studies, medicinal chemistry, combination therapies, chronic cancer, clinical translation
Procedia PDF Downloads 86
529 Renovate to nZEB of an Existing Building in the Mediterranean Area: Analysis of the Use of Renewable Energy Sources for the HVAC System
Authors: M. Baratieri, M. Beccali, S. Corradino, B. Di Pietra, C. La Grassa, F. Monteleone, G. Morosinotto, G. Puglisi
Abstract:
The energy renovation of existing buildings represents an important opportunity to increase the decarbonization and sustainability of urban environments. In this context, the work carried out has the objective of demonstrating the technical and economic feasibility of the energy renovation of a public office building located on the island of Lampedusa in the Mediterranean Sea. By applying the Italian transpositions of European Directives 2010/31/EU and 2009/28/EC, the building's energy requirement has been reduced from the current 111.7 kWh/m² to 16.4 kWh/m². The result achieved classifies the building as an nZEB (nearly Zero Energy Building) according to the Italian national definition. The analysis was carried out using, in parallel, a quasi-stationary software package normally used in the professional field and a dynamic simulation model often used in the academic world. The proposed interventions cover the components of the building's envelope, the heating-cooling system, and the supply of energy from renewable sources. On this last point, the analysis focused on assessing two aspects that affect the supply of renewable energy. The first concerns the use of advanced logic control systems for the air-conditioning units in order to increase photovoltaic self-consumption; with these adjustments, a considerable increase in photovoltaic self-consumption and a decrease in the electricity exported to the island's electricity grid were obtained. The second concerns the evaluation of the building's energy classification considering the real efficiency of the heating-cooling plant: energy plants normally operate at a lower efficiency than designed for multiple reasons, and the resulting decrease in the building's energy classification has been quantified.
This study represents an important example of the evaluation of the best interventions for the energy renovation of buildings in the Mediterranean climate and a good description of the correct methodology to evaluate the resulting improvements.
Keywords: heat pumps, HVAC systems, nZEB renovation, renewable energy sources
Procedia PDF Downloads 453
528 A Molecular Dynamics Simulation Study to Explore the Role of Chain Length in Predicting Useful Characteristic Properties of Commodity and Engineering Polymers
Authors: Lokesh Soni, Sushanta Kumar Sethi, Gaurav Manik
Abstract:
This work uses molecular simulations to create equilibrated structures of a range of commercially used polymers. Equilibrated structures generated for polyvinyl acetate (isotactic), polyvinyl alcohol (atactic), polystyrene, polyethylene, polyamide 66, polydimethylsiloxane, polycarbonate (bisphenol-A), polyethylene oxide, polyamide 12, natural rubber, polyurethane, and polyethylene terephthalate are employed to estimate the chain length that correctly predicts the chain parameters and properties. Further, the equilibrated structures are used to predict properties such as density, solubility parameter, cohesive energy density, surface energy, and the Flory-Huggins interaction parameter. The simulated densities for polyvinyl acetate, polyvinyl alcohol, polystyrene, polypropylene, and polycarbonate are 1.15 g/cm³, 1.125 g/cm³, 1.02 g/cm³, 0.84 g/cm³, and 1.223 g/cm³, respectively, in good agreement with available literature estimates. However, the critical numbers of repeating units (degrees of polymerization) after which the solubility parameter showed saturation were 15, 20, 25, 10, and 20, respectively. This indicates that such properties, which dictate the miscibility of two or more polymers in their blends, depend strongly on the chosen polymer and its characteristic properties. An attempt has been made to correlate such properties with polymer characteristics such as the Kuhn length, the free volume, and the energy term that plays a vital role in predicting the mentioned properties. These results help us to screen and propose a useful library that research groups may use when estimating polymer properties from molecular simulations of chains with the predicted critical lengths.
The library shall help obviate the need for researchers to spend effort finding the critical chain length needed for simulating the mentioned polymer properties.
Keywords: Kuhn length, Flory-Huggins interaction parameter, cohesive energy density, free volume
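The notion of a critical chain length, the degree of polymerization beyond which a simulated property such as the solubility parameter stops changing, can be expressed as a simple saturation test. The tolerance and data below are illustrative assumptions, not the study's values:

```python
def critical_chain_length(chain_lengths, solubility_params, tol=0.01):
    """Return the first chain length after which the simulated solubility
    parameter stays within a relative tolerance `tol` of the final,
    saturated value (i.e. the plateau has been reached)."""
    final = solubility_params[-1]
    for n, delta in zip(chain_lengths, solubility_params):
        if abs(delta - final) / abs(final) <= tol:
            return n
    return chain_lengths[-1]

# Illustrative data: solubility parameter (MPa^0.5) vs number of repeat units
lengths = [5, 10, 15, 20, 25, 30]
deltas = [21.5, 20.2, 19.6, 19.42, 19.40, 19.39]
print(critical_chain_length(lengths, deltas))  # 20: plateau within 1% from here
```

Applied per polymer, such a screen would reproduce the kind of per-material critical degrees of polymerization (15, 20, 25, 10, 20) reported in the abstract.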
Procedia PDF Downloads 195
527 Development of Power System Stability by Reactive Power Planning in Wind Power Plants with Doubly Fed Induction Generators
Authors: Mohammad Hossein Mohammadi Sanjani, Ashknaz Oraee, Oriol Gomis Bellmunt, Vinicius Albernaz Lacerda Freitas
Abstract:
The use of distributed and renewable sources in power systems has grown significantly in recent years. Among the most popular sources are wind farms, which have grown massively. However, when wind farms are connected to the grid, problems can arise such as reduced voltage stability, frequency fluctuations, and reduced dynamic stability. Variable-speed (asynchronous) generators, especially Doubly Fed Induction Generators (DFIGs), are used because of the uncontrollability of wind speed. The most important disadvantage of DFIGs is their sensitivity to voltage drops: in the case of faults, a large volume of reactive power is demanded; therefore, FACTS devices such as the SVC and STATCOM are suitable for improving system output performance. They increase the capacity of lines and also help the system ride through network fault conditions. In this paper, in addition to modeling the reactive power control system in a DFIG with a converter, FACTS devices have been used in a DFIG wind turbine to improve the stability of a power system containing two synchronous sources. Optimal control systems have been designed for the employed FACTS devices to minimize fluctuations caused by system disturbances. For this purpose, a method for selecting the nine parameters of the phase compensators of the reactive power compensators is proposed. The design algorithm is formulated as an optimization problem searching for the optimal controller parameters. Simulation results show that the proposed controller improves the stability of the network and that fluctuations are damped at the desired speed.
Keywords: renewable energy sources, wind power plant optimization, stability, reactive power compensator, doubly fed induction generator, optimal control, genetic algorithm
Procedia PDF Downloads 96
526 Pavement Management for a Metropolitan Area: A Case Study of Montreal
Authors: Luis Amador Jimenez, Md. Shohel Amin
Abstract:
Pavement performance models are based on projections of observed traffic loads, which makes long-term funding strategies uncertain if history does not repeat itself. Neural networks can be used to estimate deterioration rates, but the learning rate and momentum have not been properly investigated; in addition, economic development could change traffic flows. This study addresses both issues through a case study for the roads of Montreal that simulates traffic for a period of 50 years and deals with the measurement error of the pavement deterioration model. Travel demand models are applied to simulate annual average daily traffic (AADT) every 5 years. Accumulated equivalent single axle loads (ESALs) are calculated from the predicted AADT and locally observed truck distributions combined with truck factors. A back-propagation neural network (BPN) with a Generalized Delta Rule (GDR) learning algorithm is applied to estimate pavement deterioration models capable of overcoming measurement errors. Linear programming for lifecycle optimization is applied to identify M&R strategies that ensure good pavement condition while minimizing the budget. It was found that CAD 150 million is the minimum annual budget needed to keep arterial and local roads in Montreal in good condition. Montreal drivers prefer public transportation for work and education purposes. Vehicle traffic is expected to double within 50 years, and the number of ESALs is expected to double every 15 years. Roads on the island of Montreal need to undergo a stabilization period of about 25 years, after which a steady state seems to be reached.
Keywords: pavement management system, traffic simulation, backpropagation neural network, performance modeling, measurement errors, linear programming, lifecycle optimization
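The Generalized Delta Rule named in the abstract augments plain gradient descent with a momentum term, which helps a back-propagation network cope with noisy (error-laden) condition data. A minimal sketch of one weight update follows; the weights, gradients, and hyperparameters are toy values, not the paper's network:

```python
def gdr_update(weights, gradients, prev_updates, learning_rate=0.1, momentum=0.9):
    """One generalized-delta-rule step: each weight change combines the
    current negative gradient with a momentum fraction of the previous
    change, smoothing learning in the presence of measurement noise."""
    updates = [learning_rate * (-g) + momentum * u
               for g, u in zip(gradients, prev_updates)]
    new_weights = [w + u for w, u in zip(weights, updates)]
    return new_weights, updates

w = [0.5, -0.3]           # toy weights
grads = [0.2, -0.1]       # dE/dw from back-propagation
prev = [0.0, 0.0]         # no previous update on the first step
w, prev = gdr_update(w, grads, prev)
print(w)  # approximately [0.48, -0.29]
```

On subsequent steps the stored `prev` updates carry momentum, so persistent gradient directions accelerate while oscillating ones partially cancel.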
Procedia PDF Downloads 461
525 Improved Morphology in Sequential Deposition of the Inverted Type Planar Heterojunction Solar Cells Using Cheap Additive (DI-H₂O)
Authors: Asmat Nawaz, Ceylan Zafer, Ali K. Erdinc, Kaiying Wang, M. Nadeem Akram
Abstract:
Hybrid halide perovskites with the general formula ABX₃, where X = Cl, Br, or I, are considered ideal candidates for the preparation of photovoltaic devices. The most commonly and successfully used hybrid halide perovskite for photovoltaic applications is CH₃NH₃PbI₃ and its analogue prepared from lead chloride, commonly symbolized as CH₃NH₃PbI₃₋ₓClₓ. Some research groups are using lead-free (with Sn replacing Pb) and mixed-halide perovskites for the fabrication of devices. Both mesoporous and planar structures have been developed. Compared with the mesoporous structure, in which the perovskite material infiltrates a mesoporous metal oxide scaffold, the planar architecture is much simpler and easier to fabricate. In a typical perovskite solar cell, a perovskite absorber layer is sandwiched between hole and electron transport layers. Upon irradiation, carriers are created in the absorber layer and travel through the hole and electron transport layers and the interfaces between them. We fabricated an inverted planar heterojunction solar cell with the structure ITO/PEDOT/perovskite/PCBM/Al via a two-step spin-coating method, also called the sequential deposition method. A small amount of the cheap additive H₂O was added to the PbI₂/DMF to form a homogeneous solution. We prepared four different solutions (without H₂O, 1% H₂O, 2% H₂O, 3% H₂O). After preparation, stirring overnight at 60 ℃ was essential for homogeneous precursor solutions. We observed that the solution with 1% H₂O was much more homogeneous at room temperature than the others, while the solution with 3% H₂O precipitated immediately at room temperature. Four different PbI₂ films were formed on PEDOT substrates by spin coating, and immediately afterwards (before the PbI₂ dried) the substrates were immersed in a methylammonium iodide solution (prepared in isopropanol) to complete the desired perovskite film.
After obtaining the desired films, the substrates were rinsed with isopropanol to remove excess methylammonium iodide and finally dried on a hot plate for only 1-2 minutes. In this study, we added H₂O to the PbI₂/DMF precursor solution. The concept of additives is widely used in bulk-heterojunction solar cells to manipulate the surface morphology, leading to enhanced photovoltaic performance. There are two important criteria for the selection of additives: (a) a boiling point higher than that of the host material, and (b) good interaction with the precursor materials. We observed that the film morphology improved, yielding denser, more uniform films with fewer cavities and almost full surface coverage, but only for the precursor solution with 1% H₂O. Therefore, we fabricated the complete perovskite solar cell by the sequential deposition technique using the precursor solution with 1% H₂O. We conclude that, by adding additives to the precursor solutions, the morphology of the perovskite film can easily be manipulated. In the sequential deposition method, the thickness of the perovskite film is on the micrometre scale, while the charge diffusion length in PbI₂ is on the nanometre scale; therefore, by controlling the thickness using other deposition methods for the fabrication of solar cells, better efficiency can be achieved.
Keywords: methylammonium lead iodide, perovskite solar cell, precursor composition, sequential deposition
Procedia PDF Downloads 246
524 Preliminary Composite Overwrapped Pressure Vessel Design for Hydrogen Storage Using Netting Analysis and American Society of Mechanical Engineers Section X
Authors: Natasha Botha, Gary Corderely, Helen M. Inglis
Abstract:
With the move to cleaner energy applications, the transport industry is working towards on-board hydrogen- or compressed-natural-gas-fuelled vehicles. A popular storage method is the composite overwrapped pressure vessel (COPV), owing to its high strength-to-weight ratio. COPVs must be designed according to international standards; this study aims to provide a preliminary design for a 350 bar Type IV COPV (i.e., a polymer liner with a composite overwrap). Netting analysis, a popular analytical approach, is used as a first step to generate an initial design concept for the composite winding. This design is then improved upon by following the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code, Section X: Fibre-Reinforced Plastic Pressure Vessels. A design program based on these two approaches is developed in Python. Numerical models of a burst test simulation are developed based on the two approaches and compared. The results indicate that netting analysis provides a good preliminary design, while the ASME-based design is more robust and accurate, as it includes a better approximation of the material behaviour. Netting analysis is an easy method for producing an initial concept design of the composite winding when not all material characteristics are known. Once these characteristics have been fully defined through experimental testing, an ASME-based design should always be followed to ensure that all designs conform to international standards and practices. Future work entails more detailed numerical testing of the design for improvement, including the boss design. Once finalised, prototype manufacturing and experimental testing will be conducted, and the results used to improve the COPV design.
Keywords: composite overwrapped pressure vessel, netting analysis, design, American Society of Mechanical Engineers Section X, fibre-reinforced, hydrogen storage
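The classical netting analysis for the cylindrical section of a COPV assumes the fibers carry all the load and the matrix none; it yields required helical- and hoop-layer thicknesses directly from the pressure-vessel equilibrium equations. A minimal sketch follows; the design pressure, safety factor, radius, delivered fiber strength, and winding angle are illustrative assumptions, not the study's values:

```python
import math

def netting_thicknesses(pressure_mpa, radius_mm, fiber_strength_mpa, helical_angle_deg):
    """Netting analysis for a cylindrical COPV wound at +/- alpha.
    The helical layer carries the axial load P*r/2; the hoop layer carries
    the remainder of the hoop load P*r not taken by the helical fibers.
    Returns required helical- and hoop-layer thicknesses (mm)."""
    a = math.radians(helical_angle_deg)
    pr = pressure_mpa * radius_mm
    t_helical = pr / (2 * fiber_strength_mpa * math.cos(a) ** 2)
    t_hoop = pr * (2 - math.tan(a) ** 2) / (2 * fiber_strength_mpa)
    return t_helical, t_hoop

# Illustrative inputs: 35 MPa (350 bar) service pressure with an assumed
# burst safety factor of 2.25, 175 mm liner radius, 2550 MPa delivered
# fiber strength, 15 degree helical winding angle
t_h, t_th = netting_thicknesses(35 * 2.25, 175.0, 2550.0, 15.0)
print(round(t_h, 2), round(t_th, 2))  # a few mm of helical and hoop winding
```

Because netting analysis ignores the matrix and transverse fiber stiffness, results like these are a starting point only, which is consistent with the paper's conclusion that the ASME Section X route should govern the final design.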
Real-Time Kinetic Analysis of Labor-Intensive Repetitive Tasks Using Depth-Sensing Camera
Authors: Sudip Subedi, Nipesh Pradhananga
Abstract:
Musculoskeletal disorders (MSDs) are common among construction workers. MSDs include lower back injuries, knee injuries, spinal injuries, and joint injuries, among others. Since most construction tasks are still manual, construction workers often need to perform repetitive, labor-intensive tasks, staying in the same or an awkward posture for extended periods. This induces significant stress on the joints and spine, increasing the risk of developing MSDs. Manual monitoring of such tasks is virtually impossible with the handful of safety managers on a construction site. This paper proposes a methodology for performing kinetic analysis of working postures during such tasks in real-time. The skeletons of different workers will be tracked using a depth-sensing camera while performing the task, to create training data for identifying the best posture. For this, kinetic analysis will be performed using a human musculoskeletal model in an open-source software system (OpenSim) to visualize the stress induced at essential joints. The "safe posture" inducing the lowest stress on essential joints will be computed for the different actions involved in the task. The identified "safe posture" will serve as a basis for real-time monitoring and identification of awkward and unsafe postural behaviors of construction workers. In addition, temporal simulation will be carried out to find the associated long-term effects of repetitive exposure to the observed postures. This will help create awareness among workers about potential future health hazards and encourage them to work safely. Furthermore, the collected individual data can then be used to provide need-based, personalized training to construction workers.
Keywords: construction workers' safety, depth-sensing camera, human body kinetics, musculoskeletal disorders, real-time monitoring, repetitive labor-intensive tasks
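As an illustration of the kind of posture metric that can be derived from depth-camera skeleton data (the joint names and coordinates here are hypothetical, not the authors' actual pipeline), the angle at a joint can be computed from three tracked 3-D points:

```python
import math

def joint_angle_deg(a, b, c):
    """Angle (degrees) at joint b formed by segments b->a and b->c,
    where a, b, c are (x, y, z) positions from a depth-sensing camera."""
    v1 = tuple(ai - bi for ai, bi in zip(a, b))
    v2 = tuple(ci - bi for ci, bi in zip(c, b))
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for float safety
    return math.degrees(math.acos(cos_t))

# e.g. a knee flexion angle from hypothetical hip, knee and ankle positions (metres)
knee = joint_angle_deg((0.0, 1.0, 0.0), (0.0, 0.5, 0.1), (0.0, 0.0, 0.0))
```

Streams of such joint angles over time are the kind of input a musculoskeletal model (such as OpenSim) or a simple threshold rule could use to flag awkward postures.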
Application of Hydrological Engineering Centre – River Analysis System (HEC-RAS) to Estuarine Hydraulics
Authors: Julia Zimmerman, Gaurav Savant
Abstract:
This study aims to evaluate the efficacy of applying the U.S. Army Corps of Engineers' River Analysis System (HEC-RAS) to modeling the hydraulics of estuaries. HEC-RAS has been broadly used for a variety of riverine applications; however, it has not been widely applied to the study of circulation in estuaries. This report details the development and validation of a combined 1D/2D unsteady flow hydraulic model using HEC-RAS for estuaries and their associated tidally influenced rivers. Two estuaries, Galveston Bay and Delaware Bay, were used as case studies. Galveston Bay, a bar-built, vertically mixed estuary, was modeled for the 2005 calendar year. Delaware Bay, a drowned river valley estuary, was modeled from October 22, 2019, to November 5, 2019. Water surface elevation was used to validate both models by comparing simulation results to NOAA's Center for Operational Oceanographic Products and Services (CO-OPS) gauge data. Simulations were run using the Diffusion Wave Equations (DW), the Shallow Water Equations with the Eulerian-Lagrangian Method (SWE-ELM), and the Shallow Water Equations with the Eulerian Method (SWE-EM), and compared for both accuracy and computational resources required. In general, the Diffusion Wave Equations results were found to be comparable to those of the two Shallow Water Equations sets while requiring less computational power. The 1D/2D combined approach was valid for study areas within the 2D flow area, with the 1D flow serving mainly as an inflow boundary condition. Within the Delaware Bay estuary, the HEC-RAS DW model ran in 22 minutes and had an average R² value of 0.94 within the 2D mesh. The Galveston Bay HEC-RAS DW model ran in 6 hours and 47 minutes and had an average R² value of 0.83 within the 2D mesh. The longer run time and lower R² for Galveston Bay can be attributed to the greater length of the modeled time frame and the greater complexity of the estuarine system.
The models did not accurately capture tidal effects within the 1D flow area.
Keywords: Delaware Bay, estuarine hydraulics, Galveston Bay, HEC-RAS, one-dimensional modeling, two-dimensional modeling
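The gauge-versus-model comparison behind the reported R² values can be reproduced with a coefficient-of-determination calculation; the sketch below uses one common definition of R² (1 minus the ratio of residual to total sum of squares) and hypothetical hourly water surface elevations, not the study's data:

```python
def r_squared(observed, simulated):
    """Coefficient of determination of simulated values against observations
    (1.0 = perfect agreement)."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# hypothetical water surface elevations (m) at a CO-OPS gauge vs. the model
obs = [0.10, 0.35, 0.62, 0.48, 0.20]
sim = [0.12, 0.33, 0.60, 0.50, 0.22]
score = r_squared(obs, sim)
```

Averaging such scores across the gauges inside the 2D mesh gives a single model-skill number of the kind quoted for each bay.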
Effect of Geometric Imperfections on the Vibration Response of Hexagonal Lattices
Authors: P. Caimmi, E. Bele, A. Abolfathi
Abstract:
Lattice materials are cellular structures composed of a periodic network of beams. They offer high weight-specific mechanical properties and lend themselves to numerous weight-sensitive applications. The periodic internal structure responds to external vibrations through characteristic frequency bandgaps, making these materials suitable for the reduction of noise and vibration. However, the deviation from architectural homogeneity, due to, e.g., manufacturing imperfections, has a strong influence on the mechanical properties and vibration response of these materials. In this work, we present results on the influence of geometric imperfections on the vibration response of hexagonal lattices. Three classes of geometrical variables are used: the characteristics of the architecture (relative density, ligament length/cell size ratio), imperfection type (degree of non-periodicity, cracks, hard inclusions) and defect morphology (size, distribution). Test specimens with controlled size and distribution of imperfections are manufactured through selective laser sintering. The Frequency Response Functions (FRFs) in the form of accelerance are measured, and the modal shapes are captured through a high-speed camera. The finite element method is used to provide insights on the extension of these results to semi-infinite lattices. An updating procedure is conducted to increase the reliability of numerical simulation results compared to experimental measurements. This is achieved by updating the boundary conditions and material stiffness. Variations in FRFs of periodic structures due to changes in the relative density of the constituent unit cell are analysed. The effects of geometric imperfections on the dynamic response of periodic structures are investigated. 
These findings open up opportunities for tailoring lattice materials to achieve optimal amplitude attenuation at specific frequency ranges.
Keywords: lattice architectures, geometric imperfections, vibration attenuation, experimental modal analysis
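The accelerance FRFs measured above relate acceleration response to input force; as a minimal illustration of the quantity (a single-degree-of-freedom idealization, not the lattice model itself, with invented parameters):

```python
def accelerance(omega, m, k, c):
    """Accelerance A(w) = -w^2 / (k - m*w^2 + i*c*w) of a viscously damped
    single-degree-of-freedom system: acceleration per unit harmonic force."""
    return -omega ** 2 / complex(k - m * omega ** 2, c * omega)

# illustrative parameters: natural frequency sqrt(k/m) = 10 rad/s
mag_at_resonance = abs(accelerance(10.0, 1.0, 100.0, 0.5))
```

In a periodic lattice the measured accelerance is a superposition of many such modes, and imperfections shift or split the resonant peaks and the bandgaps between them.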
A Strategy to Oil Production Placement Zones Based on Maximum Closeness
Authors: Waldir Roque, Gustavo Oliveira, Moises Santos, Tatiana Simoes
Abstract:
Increasing the oil recovery factor of an oil reservoir has long been a concern of the oil industry. Usually, the production placement zones are defined after analysis of geological and petrophysical parameters, with rock porosity, permeability, and oil saturation being of fundamental importance. In this context, the determination of hydraulic flow units (HFUs) is an important step in the process of reservoir characterization, since it may identify specific regions in the reservoir with similar petrophysical and fluid flow properties and, in particular, provide techniques supporting the placement of production zones that favour the tracing of directional wells. An HFU is defined as a representative volume of the total reservoir rock in which petrophysical and fluid flow properties are internally consistent and predictably distinct from those of other reservoir rocks. Technically, an HFU is characterized as a rock region whose flow zone indicator (FZI) points lie on a straight line of unit slope. The goal of this paper is to provide a trustworthy indication of oil production placement zones for the best-fit HFUs. The FZI cloud of points can be obtained from the reservoir quality index (RQI), a function of effective porosity and permeability. Considering log and core data, the HFUs are identified and, using the discrete rock type (DRT) classification, a set of connected cell clusters can be found; by means of a graph centrality metric, the maximum closeness (MaxC) cell is obtained for each cluster. Considering the MaxC cells as production zones, an extensive analysis based on several oil recovery factor and cumulative oil production simulations was done for the SPE Model 2 and UNISIM-I-D synthetic fields, where the latter was built from public data available from the actual Namorado Field, Campos Basin, Brazil.
The results show that the MaxC cells are technically feasible and highly reliable as high-performance production placement zones.
Keywords: hydraulic flow unit, maximum closeness centrality, oil production simulation, production placement zone
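The RQI and FZI quantities that underpin the HFU classification follow standard definitions; a small sketch (permeability in mD, effective porosity as a fraction — the example values are illustrative):

```python
import math

def flow_zone_indicator(porosity, permeability_md):
    """FZI (micrometres) from effective porosity (fraction) and
    permeability (mD): RQI = 0.0314*sqrt(k/phi), phi_z = phi/(1-phi),
    FZI = RQI/phi_z."""
    rqi = 0.0314 * math.sqrt(permeability_md / porosity)
    phi_z = porosity / (1.0 - porosity)
    return rqi / phi_z

# cells with (approximately) equal FZI belong to the same hydraulic flow unit
fzi = flow_zone_indicator(porosity=0.20, permeability_md=100.0)
```

On a log-log plot of RQI versus phi_z, points of one HFU fall on a unit-slope line whose intercept is the FZI; in the HFU literature a discrete rock type label is then often derived from FZI (e.g. by rounding a logarithmic transform of it), which is how the connected DRT cell clusters above arise.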
Transboundary Pollution after Natural Disasters: Scenario Analyses for Uranium at Kyrgyzstan-Uzbekistan Border
Authors: Fengqing Li, Petra Schneider
Abstract:
Failure of tailings management facilities (TMFs) holding radioactive residues is an enormous challenge worldwide and can result in major catastrophes. Particularly in transboundary regions, such failure is likely to lead to international conflict. This risk occurs in Kyrgyzstan and Uzbekistan, where the current major challenge is the quantification of impacts due to pollution from uranium legacy sites, especially the impact on river basins after natural hazards (i.e., landslides). By means of GoldSim, a probabilistic simulation model, the amount of tailings material that flows into the river networks of Mailuu Suu in Kyrgyzstan after pond failure was simulated for three scenarios, namely 10%, 20%, and 30% of material inputs. Based on the Muskingum-Cunge flood routing procedure, the peak value of the uranium flood wave along the river network was simulated. Among the 23 TMFs, 19 ponds are close to the river networks. The spatiotemporal distributions of uranium along the river networks were then simulated for all 19 ponds under the three scenarios. Taking as one example TP7, which is 30 km from the Kyrgyzstan-Uzbekistan border, the uranium concentration decreased continuously along the longitudinal gradient of the river network; uranium was observed at the border 45 min after the pond failure, and the highest value was detected after 69 min. The highest concentrations of uranium at the border were 16.5, 33, and 47.5 mg/L under the scenarios of 10%, 20%, and 30% of material inputs, respectively. In comparison to the guideline value of uranium in drinking water (i.e., 30 µg/L) provided by the World Health Organization, the observed concentrations of uranium at the border were 550‒1583 times higher. In order to mitigate the transboundary impact of a radioactive pollutant release, an integrated framework consisting of three major strategies was proposed.
Among these, the short-term strategy can be applied in case of an emergency event, the medium-term strategy allows both countries to handle the TMFs efficiently based on the benefit-sharing concept, and the long-term strategy aims to rehabilitate the site through the relocation of all TMFs.
Keywords: Central Asia, contaminant transport modelling, radioactive residue, transboundary conflict
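The reported exceedance factors follow directly from the unit conversion between the simulated peak concentrations (mg/L) and the WHO drinking-water guideline (µg/L):

```python
WHO_URANIUM_GUIDELINE_UG_L = 30.0  # WHO guideline value for uranium in drinking water

def guideline_exceedance(conc_mg_per_l, guideline_ug_per_l=WHO_URANIUM_GUIDELINE_UG_L):
    """How many times a concentration given in mg/L exceeds a guideline given in ug/L."""
    return conc_mg_per_l * 1000.0 / guideline_ug_per_l

# peak uranium concentrations at the border for the 10%, 20% and 30% scenarios
factors = [guideline_exceedance(c) for c in (16.5, 33.0, 47.5)]
```

This reproduces the 550‒1583× range quoted in the abstract for the three material-input scenarios.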
Surface-Enhanced Raman Detection in Chip-Based Chromatography via a Droplet Interface
Authors: Renata Gerhardt, Detlev Belder
Abstract:
Raman spectroscopy has attracted much attention as a structurally descriptive and label-free detection method. It is particularly suited for chemical analysis, as it is non-destructive and molecules can be identified via the fingerprint region of the spectra. In this work, possibilities are investigated for integrating Raman spectroscopy as a detection method for chip-based chromatography, making use of a droplet interface. A demanding task in lab-on-a-chip applications is the specific and sensitive detection of low-concentration analytes in small volumes. Fluorescence detection is frequently utilized but is restricted to fluorescent molecules; furthermore, no structural information is provided. Another often-applied technique is mass spectrometry, which enables the identification of molecules based on their mass-to-charge ratio. Additionally, the obtained fragmentation pattern gives insight into the chemical structure. However, it is only applicable as an end-of-the-line detection method, because analytes are destroyed during measurement. In contrast to mass spectrometry, Raman spectroscopy can be applied on-chip, and substances can be processed further downstream after detection. A major drawback of Raman spectroscopy is the inherent weakness of the Raman signal, which is due to the small cross-sections associated with the scattering process. Enhancement techniques, such as surface-enhanced Raman spectroscopy (SERS), are employed to overcome the poor sensitivity, even allowing detection at the single-molecule level. In SERS measurements, Raman signal intensity is improved by several orders of magnitude if the analyte is in close proximity to nanostructured metal surfaces or nanoparticles. The main gain of lab-on-a-chip technology is the building-block-like ability to seamlessly integrate different functionalities, such as synthesis, separation, derivatization, and detection, on a single device.
We intend to utilize this powerful toolbox to realize Raman detection in chip-based chromatography. By interfacing on-chip separations with a droplet generator, the separated analytes are encapsulated into numerous discrete containers. These droplets can then be injected with a silver nanoparticle solution and investigated via Raman spectroscopy. Droplet microfluidics is a sub-discipline of microfluidics which operates with segmented rather than continuous flow. Segmented flow is created by merging two immiscible phases (usually an aqueous phase and an oil), thus forming small discrete volumes of one phase in the carrier phase. The study surveys different chip designs to realize the coupling of chip-based chromatography with droplet microfluidics. With regard to maintaining a sufficient flow rate for chromatographic separation and ensuring stable eluent flow over the column, different flow rates of the eluent and oil phases are tested. Furthermore, the detection of analytes in droplets with surface-enhanced Raman spectroscopy is examined. The compartmentalization of separated compounds preserves the analytical resolution, since the continuous phase restricts dispersion between the droplets. The droplets are ideal vessels for the insertion of silver colloids, thus making use of the surface-enhancement effect and improving the sensitivity of the detection. The long-term goal of this work is the first realization of coupling chip-based chromatography with droplet microfluidics to employ surface-enhanced Raman spectroscopy as the means of detection.
Keywords: chip-based separation, chip LC, droplets, Raman spectroscopy, SERS
Digital Architectural Practice as a Challenge for Digital Architectural Technology Elements in the Era of Digital Design
Authors: Ling Liyun
Abstract:
In the field of contemporary architecture, complex forms of architectural works continue to emerge around the world, along with new terminology: digital architecture, parametric design, algorithmic generation, building information modeling, CNC construction, and so on. Architects have gradually mastered new skills of mathematical logic in form exploration, virtual simulation, and the design and coordination of the entire construction process. Digital construction technology offers a greater degree of control over construction and ensures its accuracy, creating a series of new construction techniques. As a result, the use of digital technology is an improvement and expansion of the practice of the digital architectural design revolution. We worked by reading and analyzing information about the development process of digital architecture, a large number of cases, and architectural design and construction as a whole process. Current developments are thus introduced and discussed in our paper, such as architectural discourse, design theory, digital design models and techniques, material selection, and artificial intelligence space design. Our paper also examines in detail three representative cases of digital design and construction experiments, to expound high informatization, highly reliable intelligence, and advanced technique in constructing humane spaces that cope with the rapid development of urbanization. We conclude that the opportunities and challenges of the shift exist in architectural paradigms, such as the cooperation methods, theories, models, technologies, and techniques currently employed in digital design research and digital praxis. We also find that the innovative use of space can gradually change the way people learn, talk, and control information.
Over the past two decades, digital technology has radically broken the technological constraints of industrial products and dissolved the emphasis on any particular architectural style (the doctrine of an era). People should not have to adapt to the machine; rather, the machine should be made to work for its users.
Keywords: artificial intelligence, collaboration, digital architecture, digital design theory, material selection, space construction
Analysis of the Contribution of Coastal and Marine Physical Factors to Oil Slick Movement: Case Study of Misrata, Libya
Authors: Abduladim Maitieg, Mark Johnson
Abstract:
Developing a coastal oil spill management plan for the Misratah coast is the motivating factor for building a database of coastal and marine systems and energy resources. Wind direction and speed, currents, bathymetry, coastal topography, and offshore dynamics influence oil spill deposition in coastal water. Therefore, oceanographic and climatological data can be used to understand oil slick movement, potential oil deposits in the shoreline area, and the behaviour of oil spill trajectories on the sea surface. The purpose of this study is to investigate the effects of coastal and marine physical factors, under strong wave conditions and various bathymetric and coastal topography gradients in the western coastal area of Libya, on the movement of oil slicks. The movement of oil slicks was computed using the GNOME simulation model based on current and wind speed/direction. The results in this paper show that: (1) An oil slick might reach the Misratah shoreline area in two days in both the summer and winter seasons. (2) The north coast of Misratah is the potential oil deposit area on the Misratah coast. (3) Tarball pollution was observed along the north coast of Misratah. (4) Two scenarios, for the summer and the winter seasons, were run along the western coast of Libya. (5) The eastern coast is at lower potential risk due to the influence of wind and current energy in the Gulf of Sidra. (6) The Misratah coastline is more vulnerable to oil slick movement in the summer than in the winter season. (7) An oil slick takes 2 to 5 days to reach the saltmarsh on the eastern Misratah coast. (8) An oil slick moves 300 km in 30 days from the spill source location near the Libyan western border to the Misratah coast. (9) Bathymetric features have a profound effect on oil spill movement. (10) Oil dispersion simulations using GNOME are carried out taking into account high-resolution wind and current data.
Keywords: oil spill movement, coastal and marine physical factors, coastal area, Libya
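Trajectory models such as GNOME advect surface oil with the ambient current plus a small fraction of the wind speed (a "windage" conventionally taken around 3%); the sketch below illustrates that idea with invented velocity values, not the study's data:

```python
def slick_displacement_km(current_kmh, wind_kmh, hours, windage=0.03):
    """Along-track displacement of a surface slick advected by the current
    plus a windage fraction of the wind speed (scalar, straight-line sketch)."""
    return (current_kmh + windage * wind_kmh) * hours

# a slick travelling ~300 km in 30 days implies a mean drift of ~0.42 km/h,
# here decomposed into an illustrative 0.30 km/h current plus windage on a
# 4 km/h wind
dist = slick_displacement_km(current_kmh=0.30, wind_kmh=4.0, hours=30 * 24)
```

A full trajectory model applies this velocity vector at each time step with spatially varying wind and current fields, plus random diffusion, which is what produces the seasonal differences reported above.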
Experimental and Theoretical Characterization of Supramolecular Complexes between 7-(Diethylamino)Quinoline-2(1H)-One and Cucurbit[7]uril
Authors: Kevin A. Droguett, Edwin G. Pérez, Denis Fuentealba, Margarita E. Aliaga, Angélica M. Fierro
Abstract:
Supramolecular chemistry is a field of growing interest. In particular, studying the formation of host-guest complexes between macrocycles and dyes is highly attractive due to their potential applications, examples of which are drug delivery, catalytic processes, and sensing, among others. Different dyes are of interest in the literature; one example is the quinolinone derivatives. These molecules have good optical properties and chemical and thermal stability, making them suitable for developing fluorescent probes. Several macrocycles also appear in the literature; one example is the cucurbiturils, a water-soluble macromolecule family with a hydrophobic cavity and two identical carbonyl portals. Additionally, the thermodynamic analysis of such supramolecular systems can help in understanding the affinity between host and guest, their interactions, and the main stabilization energy of the complex. In this work, two 7-(diethylamino)quinolin-2(1H)-one derivatives (QD1-2) and their interaction with cucurbit[7]uril (CB[7]) were studied from experimental and in-silico points of view. Experimentally, the complexes showed a 1:1 stoichiometry by HRMS-ESI and isothermal titration calorimetry (ITC). The inclusion of the derivatives in the macrocycle leads to an increase in fluorescence intensity, and the pKa value of QD1-2 exhibits almost no variation after formation of the complex. The thermodynamics of the inclusion complexes was investigated using ITC; the results demonstrate a non-classical hydrophobic effect with a minimal contribution from the entropy term and binding constants on the order of 10⁶ for both ligands. Additionally, molecular dynamics studies were carried out over 300 ns in an explicit solvent at NTP conditions. Our findings show that the complex remains stable during the simulation (RMSD ~1 Å) and that hydrogen bonds contribute to the stabilization of the systems.
Finally, thermodynamic parameters from MMPBSA calculations were obtained to generate new computational insights for comparison with the experimental results.
Keywords: host-guest complexes, molecular dynamics, quinolin-2(1H)-one derivative dyes, thermodynamics
Performance of AquaCrop Model for Simulating Maize Growth and Yield Under Varying Sowing Dates in Shire Area, North Ethiopia
Authors: Teklay Tesfay, Gebreyesus Brhane Tesfahunegn, Abadi Berhane, Selemawit Girmay
Abstract:
Adjusting the sowing date of a crop at a particular location under a changing climate is an essential management option to maximize crop yield. However, determining the optimum sowing date for rainfed maize production through field experimentation requires repeated trials over many years under different weather conditions and crop management practices. To avoid such long-term experimentation, crop models such as AquaCrop are useful for determining the optimum sowing date. Therefore, the overall objective of this study was to evaluate the performance of the AquaCrop model in simulating maize productivity under varying sowing dates. A field experiment was conducted for two consecutive cropping seasons by deploying four maize sowing dates in a randomized complete block design with three replications. The input data required to run the model are stored as climate, crop, soil, and management files in the AquaCrop database and adjusted through the user interface. Observed data from separate field experiments were used to calibrate and validate the model. The AquaCrop model was validated for its performance in simulating the green canopy and aboveground biomass of maize for the varying sowing dates based on the calibrated parameters. The results of the present study showed good agreement, in terms of the overall R², Ef, d, and RMSE statistics, between measured and simulated values of canopy cover and biomass yield. Considering the overall values of the statistical test indicators, the performance of the model in predicting maize growth and biomass yield was successful, making it a valuable tool to support decision-making. Hence, this calibrated and validated model is suggested for determining the optimum maize sowing date under climate and soil conditions similar to those of the study area, instead of conducting long-term experimentation.
Keywords: AquaCrop model, calibration, validation, simulation
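The agreement statistics named above (RMSE, Nash-Sutcliffe efficiency Ef, and Willmott's index of agreement d) can be computed as follows; the observed/simulated series here are placeholders, since the study's actual values are not reproduced in the abstract:

```python
import math

def agreement_stats(obs, sim):
    """RMSE, Nash-Sutcliffe efficiency (Ef) and Willmott's d index of a
    simulated series against observations."""
    n = len(obs)
    mean_obs = sum(obs) / n
    sq_err = sum((s - o) ** 2 for o, s in zip(obs, sim))
    rmse = math.sqrt(sq_err / n)
    ef = 1.0 - sq_err / sum((o - mean_obs) ** 2 for o in obs)  # 1 = perfect
    d = 1.0 - sq_err / sum(
        (abs(s - mean_obs) + abs(o - mean_obs)) ** 2 for o, s in zip(obs, sim)
    )  # Willmott's index, also 1 = perfect
    return rmse, ef, d

# hypothetical canopy-cover (%) series: observed vs. AquaCrop-simulated
rmse, ef, d = agreement_stats([10, 35, 70, 85, 60], [12, 33, 72, 80, 62])
```

RMSE has the units of the variable (here, % cover), while Ef and d are dimensionless and approach 1 for a perfect model.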
Desulfurization of Crude Oil Using Bacteria
Authors: Namratha Pai, K. Vasantharaj, K. Haribabu
Abstract:
Our team is developing an innovative, cost-effective biological technique to desulfurize crude oil. Sulphur is present in crude oil samples at 0.05%–13.95%, and its elimination by current industrial methods is expensive. Materials required: Alicyclobacillus acidoterrestris, potato dextrose agar, oxygen, pyrogallol, and an inert gas (nitrogen). Method adapted and proposed: 1) The growth and energy needs of the bacteria are studied. 2) Compatibility with crude oil is assessed. 3) The reaction rate of the bacteria is studied and optimized. 4) The reaction is developed by computer simulation. 5) The simulated design is tested by building the reactor. The method being developed uses the bacterium Alicyclobacillus acidoterrestris, an acidothermophilic, heterotrophic, soil-dwelling, aerobic sulphur bacterium. The bacteria are fed to the crude oil in a unique manner: they are coated onto potato dextrose agar beads, cultured for 24 hours (the growth time coincides with the time when they begin reacting), and fed into the reactor. The beads are replenished with O₂ by passing them through a jacket around the reactor that has an O₂ supply; O₂ cannot be supplied directly, because crude oil is inflammable. The beads are made to move around based on the fluidized bed reactor concept. By controlling the velocity of the inert gas pumped in, the beads are made to settle down when exhausted of O₂; they are recycled through the jacket, where O₂ is re-fed, and beads that were inside the ring substitute for the exhausted ones. The crude oil is maintained between 1 atm and 270 MPa pressure at 45°C and treated with tartaric acid (whose pH supports bacterial growth) for optimum output. The bacteria, being of the oxidising type, react with the sulphur in the crude oil and liberate SO₄²⁻ and no gas. The SO₄²⁻ is absorbed into H₂O. NaOH is fed in once the reaction is complete, and the beads are separated. The crude oil is thus separated from SO₄²⁻, and thereby from sulphur, tartaric acid, and other acids, which are removed.
Bio-corrosion is taken care of by internal wall painting (phenol-epoxy paints). Earlier methods included the use of Pseudomonas and Rhodococcus species; they were found to be inefficient, time- and energy-consuming, and to reduce the fuel value, as they fed on the carbon skeleton.
Keywords: Alicyclobacillus acidoterrestris, potato dextrose agar, fluidized bed reactor principle, reaction time for bacteria, compatibility with crude oil
Carbon Sequestration Modeling in the Implementation of REDD+ Programmes in Nigeria
Authors: Oluwafemi Samuel Oyamakin
Abstract:
The forest in Nigeria is currently estimated to extend to around 9.6 million hectares, but it used to expand over central and southern Nigeria decades ago. The forest estate is shrinking due to long-term human exploitation for agricultural development, fuelwood demand, uncontrolled forest harvesting, and urbanization, amongst other factors, compounded by population growth in rural areas. Nigeria has lost more than 50% of its forest cover since 1990, and currently less than 10% of the country is forested. The current deforestation rate is estimated at 3.7%, one of the highest in the world. Reducing Emissions from Deforestation and forest Degradation, plus conservation, sustainable management of forests, and enhancement of forest carbon stocks, constitutes what is referred to as REDD+. This study evaluated some of the existing ways of computing carbon stocks using eight indigenous tree species: Mansonia, Shorea, Bombax, Terminalia superba, Khaya grandifolia, Khaya senegalensis, Pines, and Gmelina arborea. While these components are the essential elements of the REDD+ programme, they can be brought under a broader framework of systems analysis designed to arrive at optimal solutions for future predictions through the statistical distribution pattern of carbon sequestered by various tree species. Available data on the height and diameter of trees in Ibadan were studied, and their respective carbon sequestration potentials were assessed and subjected to tests to determine the statistical distribution that best describes the carbon sequestration pattern of the trees. The results of this study suggest a reasonable statistical distribution for carbon sequestered in simulation studies and hence allow planners and government to determine resource forecasts for sustainable development, especially where experiments with real-life systems are infeasible.
Sustainable forest management can then be achieved by projecting the future condition of forests under different management regimes, thereby supporting conservation and REDD+ programmes in Nigeria.
Keywords: REDD+, carbon, climate change, height and diameter
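Tree-level carbon stocks of the kind analysed here are typically estimated from height and diameter through an allometric biomass equation; the sketch below uses a pantropical allometry of the Chave et al. (2014) form and the commonly used ~0.47 carbon fraction of dry biomass as illustrative choices, not the study's actual model, and all measurement values are invented:

```python
def tree_carbon_kg(dbh_cm, height_m, wood_density_g_cm3, carbon_fraction=0.47):
    """Above-ground carbon (kg) from diameter at breast height (cm),
    height (m) and wood density (g/cm^3), via a pantropical allometric
    biomass equation of the form AGB = a * (rho * D^2 * H)^b."""
    agb_kg = 0.0673 * (wood_density_g_cm3 * dbh_cm ** 2 * height_m) ** 0.976
    return carbon_fraction * agb_kg

# e.g. an illustrative Gmelina arborea stem
carbon = tree_carbon_kg(dbh_cm=30.0, height_m=20.0, wood_density_g_cm3=0.45)
```

Repeating this over an inventory of height-diameter pairs gives the per-species carbon samples to which candidate statistical distributions can then be fitted.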
Comparison of Receiver Operating Characteristic Curve Smoothing Methods
Authors: D. Sigirli
Abstract:
The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal scale results that aim to predict the probability of the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values which discriminate the subjects as diseased or non-diseased. There is an infinite number of possible decision thresholds along the continuum of the test results. The ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it suffers from variability, and since it is a step function, there can be different false positive rates for one true positive rate value and vice versa. Moreover, because the estimated ROC curve is jagged while the true ROC curve is smooth, it underestimates the true ROC curve. Since the true ROC curve is assumed to be smooth, several smoothing methods have been explored. These include using kernel estimates, using log-concave densities, fitting parameters of a specified density function to the data via maximum-likelihood fitting of univariate distributions, or creating a probability distribution by fitting the specified distribution to the data and using smooth versions of the empirical distribution functions. In the present paper, we propose a smooth ROC curve estimate based on a boundary-corrected kernel function and compare the performances of ROC curve smoothing methods for diagnostic test results coming from different distributions and different sample sizes.
We performed a simulation study to compare the performances of the different methods in different scenarios, with 1000 repetitions. The performance of the proposed method was typically better than that of the empirical ROC curve, and only slightly worse than that of the binormal model when the underlying samples were in fact generated from a normal distribution.
Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve
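As context for the smoothing comparison, the summary of the empirical ROC curve most often reported, its area, can be computed directly from the two samples of test results via the Mann-Whitney form; a minimal sketch with hypothetical scores:

```python
def empirical_auc(nondiseased, diseased):
    """Area under the empirical ROC curve: the probability that a diseased
    subject's test result exceeds a non-diseased subject's (ties count 1/2)."""
    total = 0.0
    for d in diseased:
        for nd in nondiseased:
            if d > nd:
                total += 1.0
            elif d == nd:
                total += 0.5
    return total / (len(diseased) * len(nondiseased))

# hypothetical continuous test results for the two groups
auc = empirical_auc([0.8, 1.1, 1.5, 2.0], [1.4, 2.2, 2.6, 3.1])
```

Kernel or parametric smoothing changes the shape of the curve between these empirical step points, which is what the simulation study above evaluates.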
Application of Alumina-Aerogel in Post-Combustion CO₂ Capture: Optimization by Response Surface Methodology
Authors: S. Toufigh Bararpour, Davood Karami, Nader Mahinpey
Abstract:
The dependence of the global economy on fossil fuels has led to large growth in the emission of greenhouse gases (GHGs). Among the various GHGs, carbon dioxide is the main contributor to the greenhouse effect due to its huge emission amount. To mitigate the threatening effect of CO₂, carbon capture and sequestration (CCS) technologies have been studied widely in recent years. For combustion processes, three main CO₂ capture techniques have been proposed: post-combustion, pre-combustion, and oxyfuel combustion. Post-combustion is the most commonly used CO₂ capture process, as it can be readily retrofitted into existing power plants. Multiple advantages have been reported for post-combustion capture by solid sorbents, such as high CO₂ selectivity, high adsorption capacity, and low required regeneration energy. Chemical adsorption of CO₂ over alkali-metal-based solid sorbents such as K₂CO₃ is a promising method for the selective capture of dilute CO₂ from the huge amount of nitrogen in the flue gas. To improve the CO₂ capture performance, K₂CO₃ is supported on a stable and porous material. Al₂O₃ has been commonly employed as the support and has enhanced the cyclic CO₂ capture efficiency of K₂CO₃. Different phases of alumina can be obtained by setting the calcination temperature of boehmite at 300, 600 (γ-alumina), 950 (δ-alumina), and 1200 °C (α-alumina). Increasing the calcination temperature increases the regeneration capacity of alumina, while the surface area decreases. However, sorbents with lower surface areas also have lower CO₂ capture capacity (except for sorbents prepared with hydrophilic support materials). To resolve this issue, a highly efficient alumina-aerogel support was synthesized with a BET surface area of over 2000 m²/g and then calcined at a high temperature. The synthesized alumina-aerogel was impregnated on K₂CO₃ at 50 wt% support/K₂CO₃, which resulted in a sorbent with remarkable CO₂ capture performance.
The effect of synthesis conditions such as the type of alcohol, the solvent-to-co-solvent ratio, and the aging time on the performance of the support was investigated. The best support was synthesized using methanol as the solvent, after five days of aging, and at a solvent-to-co-solvent (methanol-to-toluene) ratio (v/v) of 1/5. Response surface methodology was used to investigate the effect of operating parameters such as carbonation temperature and H₂O-to-CO₂ flowrate ratio on the CO₂ capture capacity. The maximum CO₂ capture capacity, at the optimum values of the operating parameters, was 7.2 mmol CO₂ per gram of K₂CO₃. The cyclic behavior of the sorbent was examined over 20 carbonation and regeneration cycles. The alumina-aerogel-supported K₂CO₃ showed great performance compared to unsupported K₂CO₃ and γ-alumina-supported K₂CO₃. Fundamental performance analyses and long-term thermal and chemical stability tests will be performed on the sorbent in the future. The applicability of the sorbent in a bench-scale process will be evaluated, and a corresponding process model will be established. The fundamental material knowledge and the respective process development will be delivered to industrial partners for the design of a pilot-scale testing unit, thereby facilitating the industrial application of alumina-aerogel.
Keywords: alumina-aerogel, CO₂ capture, K₂CO₃, optimization
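The response-surface approach described above can be sketched in a few lines: a full quadratic model is fitted to (temperature, H₂O-to-CO₂ ratio, capacity) observations and the fitted surface is searched for its maximum. The data points below are hypothetical placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical carbonation runs: temperature (°C), H2O-to-CO2 flowrate
# ratio, and measured capacity (mmol CO2 per g K2CO3). Placeholder data only.
T = np.array([50, 50, 70, 70, 60, 60, 60, 45, 75], dtype=float)
R = np.array([0.5, 1.5, 0.5, 1.5, 1.0, 0.3, 1.7, 1.0, 1.0])
y = np.array([5.1, 5.9, 5.6, 6.2, 7.1, 5.4, 5.8, 5.0, 5.3])

# Full quadratic response-surface model:
# y = b0 + b1*T + b2*R + b3*T^2 + b4*R^2 + b5*T*R
X = np.column_stack([np.ones_like(T), T, R, T**2, R**2, T * R])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(t, r):
    return beta @ np.array([1.0, t, r, t * t, r * r, t * r])

# Search the fitted surface for its maximum inside the tested region.
ts = np.linspace(45, 75, 61)
rs = np.linspace(0.3, 1.7, 57)
grid = np.array([[predict(t, r) for r in rs] for t in ts])
i, j = np.unravel_index(np.argmax(grid), grid.shape)
print(f"optimum ~ {ts[i]:.1f} °C at ratio {rs[j]:.2f}: "
      f"{grid[i, j]:.2f} mmol CO2 / g K2CO3")
```

In a real study the factor levels come from a designed experiment (e.g., a central composite design), and the stationary point is checked against the model's lack-of-fit statistics before being reported as an optimum.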
509 A Posterior Predictive Model-Based Control Chart for Monitoring Healthcare
Authors: Yi-Fan Lin, Peter P. Howley, Frank A. Tuyl
Abstract:
Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. 
Further, the BC chart’s performance may be improved by using Bayesian parameter estimation of the underlying CI rate.
Keywords: average run length (ARL), Bernoulli CUSUM (BC) chart, beta-binomial posterior predictive (BBPP) distribution, clinical indicator (CI), healthcare organization (HCO), highest posterior density (HPD) interval
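A minimal sketch of BBPP chart limits with an HPD construction, assuming a Beta(1, 1) prior and a plug-in design (the authors' exact chart may differ): historical data update the prior, and the smallest set of future counts whose predictive probability reaches the target coverage defines the limits.

```python
from math import exp, lgamma

def betabinom_pmf(k, n, a, b):
    """P(K = k) for K ~ BetaBinomial(n, a, b), via log-gamma for stability."""
    logp = (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + lgamma(k + a) + lgamma(n - k + b) - lgamma(n + a + b)
            + lgamma(a + b) - lgamma(a) - lgamma(b))
    return exp(logp)

def bbpp_hpd_limits(x_hist, n_hist, m, a0=1.0, b0=1.0, coverage=0.99):
    """HPD control limits from the beta-binomial posterior predictive.

    x_hist events in n_hist historical cases update a Beta(a0, b0) prior;
    the predictive for the next m cases is BetaBinomial(m, a, b). The HPD
    region is the smallest set of counts whose probability reaches coverage.
    """
    a, b = a0 + x_hist, b0 + n_hist - x_hist
    ranked = sorted(((betabinom_pmf(k, m, a, b), k) for k in range(m + 1)),
                    reverse=True)  # highest-density counts first
    total, region = 0.0, []
    for p, k in ranked:
        region.append(k)
        total += p
        if total >= coverage:
            break
    return min(region), max(region)

# Example: 40 events in 1000 historical cases; monitor batches of 100 cases.
lcl, ucl = bbpp_hpd_limits(x_hist=40, n_hist=1000, m=100)
print(f"signal if a batch of 100 has fewer than {lcl} or more than {ucl} events")
```

Because the predictive pmf is asymmetric for low event rates, the HPD region is shorter than a central interval of the same coverage, which is consistent with the abstract's finding that HPD-based limits give better out-of-control run lengths.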
508 Design and Development of an Innovative MR Damper Based on Intelligent Active Suspension Control of a Malaysia's Model Vehicle
Authors: L. Wei Sheng, M. T. Noor Syazwanee, C. J. Carolyna, M. Amiruddin, M. Pauziah
Abstract:
This paper presents alternatives to the classical passive suspension system, revised to improve comfort and handling performance. An active magnetorheological (MR) suspension system is proposed to explore active suspension and enhance performance, given its freedom to independently specify the characteristics of load carrying, handling, and ride quality. A Malaysian quarter-car model with two degrees of freedom (2DOF) is designed and constructed to simulate the actions of an active vehicle suspension system. The structure of a conventional twin-tube shock absorber is modified both internally and externally to accommodate the active suspension system. The peripheral structure of the shock absorber is altered to enable assembly and disassembly of the damper through a non-permanent joint, and the stress in the designed joint is simulated using Finite Element Analysis. Simulation of the internal part, where an electrified copper coil of 24 AWG is wound, is performed using Finite Element Method Magnetics to measure the magnetic flux density inside the MR damper. The primary purpose of this approach is to reduce the vibration transmitted from road surface irregularities while maintaining solid manoeuvrability. The aim of this research is to develop an intelligent control system for a consecutive damping automotive suspension system. Ride quality is improved by reducing the vertical body acceleration experienced by the car body when it is disturbed by a speed bump or random road roughness. Findings from this research are expected to enhance ride quality, which in turn can prevent the deteriorating effect of vibration on the vehicle condition as well as the passengers’ well-being.
Keywords: active suspension, FEA, magneto rheological damper, Malaysian quarter car model, vibration control
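The effect of damping on vertical body acceleration can be illustrated with a simple 2DOF quarter-car simulation over a speed bump; the masses, stiffnesses, and bump profile below are generic placeholders, not the Malaysian model's actual parameters.

```python
import math

def quarter_car(cs, t_end=3.0, dt=1e-4):
    """Simulate a 2DOF quarter-car crossing a half-sine speed bump.

    Placeholder parameters: sprung mass 250 kg, unsprung mass 40 kg,
    suspension spring 15 kN/m, tyre 150 kN/m. cs is the suspension
    damping (N·s/m); an MR damper would adjust this value online.
    Returns the peak vertical body acceleration (m/s^2).
    """
    ms, mu, ks, kt = 250.0, 40.0, 15e3, 150e3
    zs = vs = zu = vu = 0.0
    peak = 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        # 0.05 m half-sine bump traversed between t = 0.5 s and 0.7 s
        zr = 0.05 * math.sin(math.pi * (t - 0.5) / 0.2) if 0.5 <= t <= 0.7 else 0.0
        a_s = (-ks * (zs - zu) - cs * (vs - vu)) / ms          # sprung mass
        a_u = (ks * (zs - zu) + cs * (vs - vu) - kt * (zu - zr)) / mu  # unsprung
        vs += a_s * dt; zs += vs * dt   # semi-implicit Euler step
        vu += a_u * dt; zu += vu * dt
        peak = max(peak, abs(a_s))
    return peak

for cs in (500.0, 1500.0, 3000.0):
    print(f"cs = {cs:6.0f} N·s/m -> peak body accel {quarter_car(cs):.2f} m/s^2")
```

Sweeping the damping coefficient this way shows the classic trade-off a semi-active controller exploits: low damping isolates the body from high-frequency roughness, while high damping limits the transient response to a bump.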
507 Defining a Framework for Holistic Life Cycle Assessment of Building Components by Considering Parameters Such as Circularity, Material Health, Biodiversity, Pollution Control, Cost, Social Impacts, and Uncertainty
Authors: Naomi Grigoryan, Alexandros Loutsioli Daskalakis, Anna Elisse Uy, Yihe Huang, Aude Laurent (Webanck)
Abstract:
In response to the building and construction sectors accounting for a third of all energy demand and emissions, the European Union has introduced new laws and regulations for the construction sector that emphasize material circularity, energy efficiency, biodiversity, and social impact. Existing design tools assess sustainability in early-stage design for products or buildings; however, there is no standardized methodology for measuring the circularity performance of building components. Existing assessment methods for building components focus primarily on carbon footprint but lack the comprehensive analysis required to design for circularity. The research conducted in this paper covers the parameters needed to assess sustainability in the design process of architectural products such as doors, windows, and facades, and maps a framework for a tool that assists designers with real-time sustainability metrics. Considering the life cycle of building components such as facades, windows, and doors involves the life cycle stages applied to product design and many of the methods used in the life cycle analysis of buildings. The current industry standard of sustainability assessment for metal building components follows cradle-to-grave life cycle assessment (LCA), tracks Global Warming Potential (GWP), and documents the parameters used for an Environmental Product Declaration (EPD). Developed by the Ellen MacArthur Foundation, the Material Circularity Indicator (MCI) is a methodology that uses data from LCA and EPDs to rate circularity, yielding a value between 0 and 1 where higher values indicate higher circularity. 
Expanding on the MCI with additional indicators such as the Water Circularity Index (WCI), the Energy Circularity Index (ECI), the Social Circularity Index (SCI), and Life Cycle Economic Value (EV), and calculating biodiversity risk and uncertainty, the assessment of an architectural product's impact can be targeted more specifically to product requirements, performance, and lifespan. Broadening the scope of LCA for products to incorporate aspects of building design allows product designers to account for the disassembly of architectural components. For example, the Material Circularity Indicator for architectural products such as windows and facades is typically low due to the impact of glass, as 70% of glass ends up in landfills because of damage during disassembly. The low MCI can be countered by expanding beyond cradle-to-grave assessment and focusing the design process on disassembly, recycling, and repurposing with the help of real-time assessment tools. Design for Disassembly and Urban Mining have been integrated within the construction field on small scales as project-based exercises that do not address the entire supply chain of architectural products. By adopting more comprehensive sustainability metrics and incorporating uncertainty calculations, building components can be assessed more accurately with decarbonization and disassembly in mind, addressing the large-scale commercial construction markets that are among the most significant contributors to climate change.
Keywords: architectural products, early-stage design, life cycle assessment, material circularity indicator
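A simplified sketch of the MCI calculation (omitting the methodology's detailed recycling-efficiency and collection-rate terms) shows why heavy landfill losses, as with damaged glass, drag the score down. All numbers are illustrative.

```python
def material_circularity_indicator(mass, virgin_frac, landfill_frac,
                                   lifetime_ratio=1.0, intensity_ratio=1.0):
    """Simplified Material Circularity Indicator in the spirit of the Ellen
    MacArthur Foundation methodology (sketch only; the full method also
    models recycling efficiencies and collection rates).

    virgin_frac   - fraction of feedstock from virgin sources
    landfill_frac - fraction of end-of-life mass lost to landfill/energy
    lifetime_ratio, intensity_ratio - product lifetime and use intensity
                    relative to an industry-average product
    """
    V = virgin_frac * mass                # virgin material input (kg)
    W = landfill_frac * mass              # unrecoverable waste (kg)
    lfi = (V + W) / (2.0 * mass)          # linear flow index, 0 (circular) to 1
    X = lifetime_ratio * intensity_ratio  # utility factor
    return max(0.0, 1.0 - lfi * (0.9 / X))

# A window unit where 70% of the material reaches landfill scores low:
mci = material_circularity_indicator(mass=30.0, virgin_frac=0.8,
                                     landfill_frac=0.7)
print(f"MCI = {mci:.3f}")  # -> MCI = 0.325
```

Raising `lifetime_ratio` (a longer-lived facade) or cutting `landfill_frac` (design for disassembly) both lift the score, which is exactly the design lever the proposed real-time tool is meant to expose.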