Search results for: building energy simulation
1303 The Effect of Finding and Development Costs and Gas Price on Basins in the Barnett Shale
Authors: Michael Kenomore, Mohamed Hassan, Amjad Shah, Hom Dhakal
Abstract:
Shale gas reservoirs have taken on greater importance than shale oil reservoirs since 2009, and given the current nature of the oil market, understanding the technical and economic performance of shale gas reservoirs is important. Using the Barnett shale as a case study, an economic model was developed to quantify the effect of finding and development costs and gas prices on the basins in the Barnett shale, using net present value (NPV) as the evaluation parameter. A rate of return of 20% and a payback period of 60 months or less were used as the investment hurdle in the model. The Barnett was split into four basins (Strawn Basin, Ouachita Folded Belt, Fort Worth Syncline and Bend Arch Basin), with analysis conducted on each basin to provide a holistic outlook. The dataset consisted only of horizontal wells that started production between 2008 and at most 2015, with 1835 wells from the Strawn Basin, 137 wells from the Ouachita Folded Belt, 55 wells from the Bend Arch Basin and 724 wells from the Fort Worth Syncline. The data was analyzed initially in Microsoft Excel to determine the estimated ultimate recovery (EUR). The EUR ranges from each basin were loaded into the Palisade Risk software, and a log-normal distribution typical of Barnett shale wells was fitted to the dataset. Monte Carlo simulation was then carried out over 1000 iterations to obtain a cumulative distribution plot showing the probabilistic distribution of EUR for each basin. From the cumulative distribution plot, the P10, P50 and P90 EUR values for each basin were used in the economic model. Gas production from an individual well with an EUR similar to the calculated EUR was chosen and rescaled to fit the calculated EUR values for each basin at the respective percentiles, i.e. P10, P50 and P90.
The rescaled production was entered into the economic model to determine the effect of the finding and development cost and the gas price on the net present value (10% discount rate per year), as well as to determine the scenarios that satisfied the proposed investment hurdle. The finding and development costs used in this paper (assumed to consist only of drilling and completion costs) were £1 million, £2 million and £4 million, while the gas price was varied from $2/MCF to $13/MCF based on Henry Hub spot prices from 2008 to 2015. The major findings of this study were that wells in the Bend Arch Basin were the least economic, that higher gas prices are needed in basins containing non-core counties, and that 90% of the Barnett shale wells were not economic at any of the finding and development costs, irrespective of the gas price, in all the basins. This study helps to determine the percentage of wells that are economic at different ranges of costs and gas prices, the basins that are most economic, and the wells that satisfy the investment hurdle.
Keywords: shale gas, Barnett shale, unconventional gas, estimated ultimate recoverable
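The probabilistic workflow described above (sampling a log-normal EUR distribution, reading off P10/P50/P90, and screening NPV at a 10% annual discount rate) can be sketched in a few lines. All decline, cost, and price parameters below are illustrative assumptions, not values from the paper, and a simple exponential decline stands in for the rescaled well production:

```python
import math
import random

def npv(cash_flows, annual_rate=0.10):
    """Discount a list of monthly cash flows (month 0 first) at a 10%/year rate."""
    monthly = (1 + annual_rate) ** (1 / 12) - 1
    return sum(cf / (1 + monthly) ** t for t, cf in enumerate(cash_flows))

def well_npv(eur_mmcf, gas_price, fd_cost, months=120):
    """NPV of a single well: exponential decline scaled so that cumulative
    production over `months` equals the EUR (decline rate is an assumption)."""
    decline = 0.03  # monthly decline fraction (assumed)
    total = sum(math.exp(-decline * t) for t in range(months))
    q0 = eur_mmcf * 1000 / total  # first-month rate in MCF
    flows = [-fd_cost] + [q0 * math.exp(-decline * t) * gas_price
                          for t in range(months)]
    return npv(flows)

random.seed(1)
# Sample a log-normal EUR distribution (median and spread are assumptions)
eurs = sorted(random.lognormvariate(math.log(1500), 0.8) for _ in range(1000))
p90, p50, p10 = eurs[99], eurs[499], eurs[899]  # P90 = low-case convention
for label, eur in (("P90", p90), ("P50", p50), ("P10", p10)):
    print(f"{label}: EUR = {eur:.0f} MMCF, NPV = ${well_npv(eur, 3.0, 2e6):,.0f}")
```

A full screening would repeat this for each basin's fitted distribution and each cost/price pair, then count the wells clearing the 20% rate-of-return and 60-month payback hurdle.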
Procedia PDF Downloads 302
1302 Bio-Mimetic Foam Fractionation Technology for the Treatment of Per- and PolyFluoroAlkyl Substances (PFAS) in Contaminated Water
Authors: Hugo Carronnier, Wassim Almouallem, Eric Branquet
Abstract:
Per- and polyfluoroalkyl substances (PFAS) are a group of man-made refractory compounds that have been widely used in a variety of industrial and commercial products since the 1940s, leading to contamination of groundwater and surface water systems. They are persistent, bioaccumulative and toxic chemicals. Foam fractionation is a potential remedial technique for treating PFAS-contaminated water, taking advantage of their high surface activity to remove them from solution by adsorption onto the surface of air bubbles. Nevertheless, traditional foam fractionation technology developed for PFAS is challenging and has been found to be ineffective in treating the less surface-active compounds. Various chemicals have been investigated as amendments to achieve better removal; however, most amendments are toxic, expensive and complicated to use. In this situation, a patent-pending PFAS technology overcomes these challenges by instead using biological amendments. The first laboratory trial produced remarkable results using a simple and cheap BioFoam Fractionation (BioFF) process based on biomimetics. The study showed that the BioFF process is effective in removing greater than 99% of PFOA (C8), PFOS (C8), PFHpS (C7) and PFHxS (C6) from PFAS-contaminated water. For other PFAS such as PFDA (C10) and 6:2 FTAB, a slightly less stable removal of between 94% and 96% was achieved, while removal efficiencies between 34% and 73% were observed for PFBA (C4), PFBS (C4), PFHxA (C6), and Gen-X. In sum, the advantages of BioFF, namely low waste production, cost- and energy-efficient operation and the use of a biodegradable amendment requiring no separation step after treatment, coupled with these first findings, suggest that the BioFF process is a highly applicable treatment technology for PFAS-contaminated water.
Additional investigations are currently being carried out to optimize the process and establish a promising strategy for on-site PFAS remediation.
Keywords: PFAS, treatment, foam fractionation, contaminated amendments
Procedia PDF Downloads 78
1301 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System
Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal
Abstract:
The evolution of Unmanned Aerial Systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments which, due to their cost and complexity, were previously beyond the reach of many institutions. In this study, the aerodynamic behavior of a UAS through its deceleration stage, after an initial free-fall phase (where the microgravity effect is generated), is studied using Computational Fluid Dynamics (CFD). Because the payload is analyzed under a microgravity environment, and given the nature of the payload itself, the speed of the UAS must be reduced smoothly. Moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and the vehicle during the landing stage. The UAS model consists of a study pod, control surfaces with fixed and mobile sections, landing gear and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the S1091 airfoil has its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and drag forces (Fd) are obtained from the CFD analysis. A simplified 3D model of the vehicle is analyzed using Ansys Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized using an unstructured mesh based on tetrahedral elements. The mesh is refined by defining an element size of 0.004 m on the wing and control surfaces, in order to resolve the fluid behavior in the most important zones and to obtain accurate approximations of the Cd. The k-epsilon turbulence model is selected to solve the governing equations of the fluid, while monitors are placed on both the wing and the whole vehicle body to visualize the variation of the coefficients during the simulation process.
Employing a response surface methodology (a statistical approximation), the case study is parametrized with the AoA of the wing as the input parameter and Cd and Fd as output parameters. Based on a Central Composite Design (CCD), Design Points (DP) are generated so that the Cd and Fd for each DP can be estimated. Applying a second-degree polynomial approximation, the drag coefficients for every AoA are determined. Using these values, the terminal speed at each position is calculated considering the specific Cd. Additionally, the distance required to reach the terminal velocity at each AoA is calculated, so that the minimum distance for the entire deceleration stage without compromising the payload can be determined. The maximum Cd of the vehicle is 1.18, so its maximum drag will be almost like the drag generated by a parachute. This guarantees that the vehicle can be braked aerodynamically, so it could be utilized for several missions, allowing repeatability of microgravity experiments.
Keywords: microgravity effect, response surface, terminal speed, unmanned system
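The terminal-speed step above follows from the force balance between weight and drag, v_t = sqrt(2mg/(ρ·A·Cd)). A minimal sketch of that calculation; the vehicle mass and reference area below are assumed placeholder values, not the vehicle's actual properties, and only the maximum Cd of 1.18 comes from the text:

```python
import math

def terminal_speed(mass_kg, cd, area_m2, rho=1.225, g=9.81):
    """Terminal speed where drag equals weight: v_t = sqrt(2 m g / (rho Cd A))."""
    return math.sqrt(2 * mass_kg * g / (rho * cd * area_m2))

# Illustrative vehicle: 5 kg mass, 0.5 m^2 reference area (assumed values)
for cd in (0.3, 0.7, 1.18):  # 1.18 is the reported maximum Cd
    print(f"Cd = {cd}: v_t = {terminal_speed(5.0, cd, 0.5):.1f} m/s")
```

Raising the AoA raises Cd, so the computed terminal speed falls accordingly, which is the mechanism the parametric study exploits to brake the vehicle.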
Procedia PDF Downloads 173
1300 Oxidation and Reduction Kinetics of Ni-Based Oxygen Carrier for Chemical Looping Combustion
Authors: J. H. Park, R. H. Hwang, K. B. Yi
Abstract:
Carbon Capture and Storage (CCS) is one of the important technologies for reducing CO₂ emissions from large stationary sources such as power plants. Among the carbon capture technologies for power plants, chemical looping combustion (CLC) has attracted much attention due to its higher thermal efficiency and lower cost of electricity. A CLC process consists of a fuel reactor and an air reactor, which are interconnected fluidized bed reactors. In the fuel reactor, an oxygen carrier (OC) is reduced by a fuel gas such as CH₄, H₂ or CO. The OC is then sent to the air reactor and oxidized by air or O₂ gas. The oxidation and reduction reactions of the OC occur repeatedly between the two reactors. In the CLC system, a high concentration of CO₂ can easily be obtained by steam condensation from the fuel reactor alone. It is very important to understand the oxidation and reduction characteristics of the oxygen carrier in the CLC system in order to determine the solids circulation rate between the air and fuel reactors and the amount of solid bed material. In this study, we conducted experiments and interpreted the oxidation and reduction reaction characteristics by observing the weight change of a Ni-based oxygen carrier in a TGA while varying gas concentration and temperature. Characterization of the oxygen carrier was carried out with BET and SEM. The reaction rate increased with increasing temperature and increasing inlet gas concentration. We also compared the experimental results with a basic reaction kinetic model (the JMA model). The JMA model is one of the nucleation and nuclei growth models, and it can explain the delay time in the early part of the reaction. As a result, the model data and experimental data agree over the examined conversion and time ranges, with an overall variance (R²) greater than 98%.
We also calculated the activation energy, pre-exponential factor, and reaction order through an Arrhenius plot and compared them with previous Ni-based oxygen carriers.
Keywords: chemical looping combustion, kinetic, nickel-based, oxygen carrier, spray drying method
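The JMA (Johnson-Mehl-Avrami) model referenced above has the closed form X(t) = 1 − exp(−(kt)ⁿ), with the rate constant following Arrhenius behavior, k = A·exp(−Ea/RT). A minimal sketch of these two relations; all kinetic parameters are assumed for illustration rather than taken from the TGA fits:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def jma_conversion(t, k, n):
    """JMA (Avrami) conversion X = 1 - exp(-(k t)^n); n > 1 reproduces
    the delay (induction) period in the early part of the reaction."""
    return 1.0 - math.exp(-((k * t) ** n))

def arrhenius_k(A, Ea, T):
    """Arrhenius rate constant k = A * exp(-Ea / (R T))."""
    return A * math.exp(-Ea / (R * T))

# Illustrative parameters (assumed): A in 1/s, Ea in J/mol, Avrami exponent n
A, Ea, n = 50.0, 8.0e4, 2.0
for T in (973, 1073, 1173):  # temperatures in K
    k = arrhenius_k(A, Ea, T)
    print(f"T = {T} K: k = {k:.3e} 1/s, X(60 s) = {jma_conversion(60, k, n):.3f}")
```

Fitting ln(k) against 1/T for the rate constants extracted at each temperature gives the Arrhenius plot from which the activation energy and pre-exponential factor are read off.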
Procedia PDF Downloads 209
1299 Human Behavioral Assessment to Derive Land-Use for Sustenance of River in India
Authors: Juhi Sah
Abstract:
Habitat is characterized by the inter-dependency of environmental elements. An anthropocentric development approach increases our vulnerability to natural hazards; hence, man-made interventions should show a higher level of sensitivity towards their natural settings. Sensitivity towards the environment can be assessed through the behavior of the stakeholders involved. This led to the establishment of a hypothesis: there exists a legitimate relationship between the behavioral sciences, land use evolution and environmental conservation in the planning process. An attempt has been made to establish this relationship by reviewing the existing body of knowledge and case examples pertaining to the three disciplines under inquiry. Recognizing the scarce and deteriorating nature of the earth's fresh-water reserves, and to test the above concept, the river flood plain of a growing urban center in a developing economy, India, is selected as a case study. Cases of urban flooding in Chennai, Delhi and other mega cities of India impose a high risk on the unauthorized settlements on the floodplains of the rivers. The issue addressed here is the encroachment of floodplains, approached through psychological enlightenment and modification through knowledge building. The reaction of an individual or a society can be compared to a cognitive process. This study documents all the stakeholders' behaviors and perceptions of their immediate natural environment (a water body), and produces various land uses suitable along a river in an urban settlement as per different stakeholders' perceptions. To assess and induce morally responsible behavior in a community (small or large scale), tools of psychological inquiry are used for qualitative analysis. The analysis deals with varied data sets from two sectors, namely the river and its geology, and land use planning and regulation.
A distinctive pattern has been identified in built-up growth, river ecology degradation, and human behavior by handling a large quantum of data from diverse sectors, with comments on the availability of relevant data and its implications. Along the whole river stretch, the condition and usage of its banks vary; hence, stakeholder-specific survey questionnaires have been prepared to accurately map the responses and habits of the rational inhabitants. A conceptual framework has been designed to move forward with the empirical analysis. The classical principle of virtues says "virtue of a human depends on its character", but another concept holds that behavior is a derivative of situations, and that to bring about a behavioral change one needs to introduce a disruption in the situation or environment. Owing to present trends, blindly following the results of data analytics and using them to construct policy is not proving to be in favor of planned development and natural resource conservation. Thus a behavioral assessment of the rational inhabitants of the planet is also required, as their activities and interests have a large impact on the earth's pre-set systems and its sustenance.
Keywords: behavioral assessment, flood plain encroachment, land use planning, river sustenance
Procedia PDF Downloads 117
1298 Low Cost LiDAR-GNSS-UAV Technology Development for PT Garam’s Three Dimensional Stockpile Modeling Needs
Authors: Mohkammad Nur Cahyadi, Imam Wahyu Farid, Ronny Mardianto, Agung Budi Cahyono, Eko Yuli Handoko, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan
Abstract:
Unmanned aerial vehicle (UAV) technology offers advantages in cost efficiency and data retrieval time. Technologies such as UAV, GNSS, and LiDAR are combined into an integrated system in which each covers the others' deficiencies. This integration aims to increase the accuracy of calculating the volume of the salt stockpiles of PT. Garam (a salt company). UAV applications are used to obtain geometric data and capture textures that characterize the structure of objects. This study uses the Taror 650 Iron Man drone with four propellers, which can fly for 15 minutes. The image acquisitions processed in software can be classified utilizing photogrammetry and Structure from Motion point cloud principles. LiDAR can perform data acquisition that enables the creation of point clouds, three-dimensional models, Digital Surface Models, contours, and orthomosaics with high accuracy. LiDAR has a drawback in that its coordinate data are positioned in a local reference frame. Therefore, the researchers use GNSS, LiDAR, and drone multi-sensor technology to map the stockpiles of salt on open land and in warehouses, which PT. Garam carries out twice a year; the previous process used terrestrial methods and manual calculations with sacks. LiDAR needs to be combined with a UAV to overcome data acquisition limitations, because on its own it only passes along the right and left sides of the object, especially when applied to a salt stockpile. The UAV is flown to assist data acquisition with wide coverage, with the help of an integrated 200-gram LiDAR system so that the flying angle can be kept optimal during the flight. Using LiDAR for low-cost mapping surveys will make it easier for surveyors and academics to obtain fairly accurate data at a more economical price. As a survey tool, LiDAR is available at a low price, around USD 999, and this device can produce detailed data.
Therefore, to minimize the operational costs of using LiDAR, surveyors can use low-cost LiDAR, GNSS, and UAV at a price of around USD 638. The data generated by this sensor take the form of a three-dimensional visualization of an object's shape. This study aims to combine low-cost GPS measurements with low-cost LiDAR, processed using free software. The low-cost GPS produces latitude and longitude coordinates for position determination, which yield X, Y, and Z values to help georeference the detected objects. The LiDAR in turn detects objects, including the heights across the entire environment at that location. The resulting data are calibrated with pitch, roll, and yaw to obtain the vertical heights of the existing contours. The study conducted an experiment on the roof of a building with a radius of approximately 30 meters.
Keywords: LiDAR, unmanned aerial vehicle, low-cost GNSS, contour
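The pitch/roll/yaw calibration step described above amounts to rotating each sensor-frame LiDAR point by the platform attitude and translating it by the GNSS position. A minimal sketch; the rotation convention (Z-Y-X, i.e. yaw-pitch-roll) and the sample coordinates are assumptions for illustration:

```python
import math

def rpy_matrix(roll, pitch, yaw):
    """Rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll), angles in radians."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,               cp * cr],
    ]

def georeference(point, roll, pitch, yaw, gnss_xyz):
    """Rotate a sensor-frame point by the platform attitude, then
    translate by the GNSS-derived position to get global coordinates."""
    R = rpy_matrix(roll, pitch, yaw)
    rotated = [sum(R[i][j] * point[j] for j in range(3)) for i in range(3)]
    return [rotated[i] + gnss_xyz[i] for i in range(3)]

# A point 10 m ahead of the sensor, platform level, heading rotated 90 degrees
p = georeference([10.0, 0.0, 0.0], 0.0, 0.0, math.pi / 2, [500000.0, 4000000.0, 30.0])
print([round(c, 2) for c in p])  # -> [500000.0, 4000010.0, 30.0]
```

Applying this to every return in a scan produces the georeferenced point cloud from which contours and stockpile volumes are derived.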
Procedia PDF Downloads 94
1297 Modelling of Damage as Hinges in Segmented Tunnels
Authors: Gelacio Juárez-Luna, Daniel Enrique González-Ramírez, Enrique Tenorio-Montero
Abstract:
Frame elements coupled with spring elements are used for modelling the development of hinges in segmented tunnels; the spring elements model the rotational, transverse and axial failure modes. These spring elements are equipped with constitutive models to include the moment, shear force and axial force independently. The constitutive models are formulated based on damage mechanics and experimental tests reported in the literature. The mesh of the segmented tunnels was discretized in the software GID, and the nonlinear analyses were carried out in the finite element software ANSYS. These analyses provide the capacity curves of the primary and secondary linings of a segmented tunnel. Two numerical examples of segmented tunnels show the capability of the spring elements to release energy through the development of hinges. The first example is a segmental concrete lining discretized with frame elements and loaded until hinges occurred in the lining. The second example is a tunnel with primary and secondary linings, discretized with a double-ring frame model: the outer ring simulates the segmental concrete lining and the inner ring simulates the secondary cast-in-place concrete lining. Spring elements also model the joints between the segments in the circumferential direction and the ring joints, which connect parallel adjacent rings. The computed load vs. displacement curves are congruent with numerical and experimental results reported in the literature. It is shown that modelling a tunnel with primary and secondary linings with frame elements and springs provides reasonable results and saves computational cost compared with 2D or 3D models equipped with smeared crack models.
Keywords: damage, hinges, lining, tunnel
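The damage-mechanics idea behind such a rotational spring can be sketched as a moment-rotation law that is elastic up to a yield rotation and then softens linearly, with the damage variable recovered as d = 1 − M/(kθ). The stiffness and rotation limits below are illustrative placeholders, not the calibrated values from the cited tests:

```python
def hinge_moment(theta, k=1.0e5, theta_y=0.01, theta_u=0.05):
    """Rotational-spring hinge with damage: elastic up to theta_y (rad),
    then linear softening to zero moment at theta_u (illustrative law)."""
    t = abs(theta)
    if t <= theta_y:
        m = k * t                                   # undamaged elastic branch
    elif t < theta_u:
        m_y = k * theta_y
        m = m_y * (theta_u - t) / (theta_u - theta_y)  # softening branch
    else:
        m = 0.0                                     # fully developed hinge
    return m if theta >= 0 else -m

def damage(theta, k=1.0e5, theta_y=0.01, theta_u=0.05):
    """Damage variable d = 1 - M / (k * theta), bounded in [0, 1]."""
    t = abs(theta)
    if t == 0:
        return 0.0
    return 1.0 - hinge_moment(t, k, theta_y, theta_u) / (k * t)

for th in (0.005, 0.01, 0.02, 0.05):
    print(f"theta = {th}: M = {hinge_moment(th):.1f}, d = {damage(th):.3f}")
```

As the rotation grows past the yield value, d climbs from 0 to 1 and the moment released by the hinge drops to zero, which is the energy-release mechanism exercised in the two numerical examples.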
Procedia PDF Downloads 390
1296 Water Droplet Impact on Vibrating Rigid Superhydrophobic Surfaces
Authors: Jingcheng Ma, Patricia B. Weisensee, Young H. Shin, Yujin Chang, Junjiao Tian, William P. King, Nenad Miljkovic
Abstract:
Water droplet impact on surfaces is a ubiquitous phenomenon in both nature and industry. The transfer of mass, momentum and energy can be influenced by the time of contact between droplet and surface. In order to reduce the contact time, we study the influence of substrate motion prior to impact on the dynamics of droplet recoil. Using optical high-speed imaging, we investigated the impact dynamics of macroscopic water droplets (~2 mm) on rigid nanostructured superhydrophobic surfaces vibrating at 60-300 Hz and amplitudes of 0-3 mm. In addition, we studied the influence of the phase of the substrate at the moment of impact on the total contact time. We demonstrate that substrate vibration can alter droplet dynamics and decrease the total contact time by as much as 50% compared to impact on stationary rigid superhydrophobic surfaces. Impact analysis revealed that the vibration frequency mainly affected the maximum contact time, while the amplitude of vibration had little direct effect on the contact time. Through mathematical modeling, we show that the oscillation amplitude influences the probability density function of droplet impact at a given phase, and thus indirectly influences the average contact time. We also observed more vigorous droplet splashing and breakup during impact at larger amplitudes. Through semi-empirical mathematical modeling, we describe the relationship between contact time and the vibration frequency, phase, and amplitude of the substrate. We also show that the maximum acceleration during the impact process is better suited as a threshold parameter for the onset of splashing than a Weber-number criterion. This study not only provides new insights into droplet impact physics on vibrating surfaces, but also develops guidelines for the rational design of surfaces to achieve controllable droplet wetting in applications utilizing vibration.
Keywords: contact time, impact dynamics, oscillation, pear-shape droplet
Procedia PDF Downloads 454
1295 Solid State Drive End to End Reliability Prediction, Characterization and Control
Authors: Mohd Azman Abdul Latif, Erwan Basiron
Abstract:
A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified using standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team, drawn from all the eminent stakeholders, embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that the product reliability is sustainable in mass production. This paper discusses a comprehensive development framework, comprehending the SSD end to end from design to assembly, in-line inspection and in-line testing, that is able to predict and validate product reliability at the early stages of new product development. During the design stage, the SSD goes through an intense reliability margin investigation focused on assembly process attributes, process equipment control and in-process metrology, while also comprehending the forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build up a reliability prediction model. Next, for the design validation process, a reliability prediction tool, specifically a solder joint simulator, is established. The SSDs are stratified into non-operating and operating tests, with a focus on solder joint reliability and connectivity/component latent failures, addressed by prevention through design intervention and containment through the Temperature Cycle Test (TCT). Some of the SSDs are subjected to physical solder joint analyses, namely Dye and Pry (DP) and cross-section analysis.
The results are fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven to work, it is subjected to the monitor phase, whereby the Design for Assembly (DFA) rules are updated. At this stage, the design changes and the process and equipment parameters are under control. Predictable product reliability early in product development enables on-time qualification sample delivery to the customer, optimizes product development validation and the effective use of development resources, and avoids forced late investment to bandage end-of-life product failures. Understanding the critical-to-reliability parameters earlier allows a focus on increasing the product margin, which increases customer confidence in product reliability.
Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control
Procedia PDF Downloads 174
1294 The Significance of Cultural Risks for Western Consultants Executing Gulf Cooperation Council Megaprojects
Authors: Alan Walsh, Peter Walker
Abstract:
Differences in commercial, professional and personal cultural traditions between western consultants and project sponsors in the Gulf Cooperation Council (GCC) region are potentially significant in the workplace, and this can impact project outcomes. These cultural differences can, for example, result in conflict amongst senior managers, which can negatively impact the megaproject. New entrants to the GCC often experience ‘culture shock’ as they attempt to integrate into their unfamiliar environments. Megaprojects are unique ventures with individual project characteristics, which need to be considered when managing their associated risks. Megaproject research to date has mostly ignored the significance of the absence of cultural congruence in the GCC, which is surprising considering the large volume of megaprojects in various stages of construction in the GCC. An initial step in dealing with cultural issues is to acknowledge culture as a significant risk factor (SRF). This paper seeks to understand how critical it is for western consultants to address these risks. It considers the cultural barriers that exist between GCC sponsors and western consultants and examines the cultural distance between the key actors. Initial findings suggest the presence, to a certain extent, of ethnocentricity. Other cultural clashes arise out of a lack of appreciation of the customs, practices and traditions of ‘the Other’, such as the need to avoid public humiliation and the significance of hierarchical rankings. The concept and significance of culture shock as part of the integration process for new arrivals are considered. Culture shock describes the state of anxiety and frustration resulting from immersion in a culture distinctly different from one's own. There are potentially substantial project risks associated with underestimating the process of cultural integration.
This paper examines two distinct but intertwined issues: the societal and professional culture differences associated with expatriate assignments. A case study examines the cultural congruences between GCC sponsors and American, British and German consultants over a ten-year cycle. This provides indicators of which nationalities encountered the most profound cultural issues and of their nature. GCC megaprojects are typically intensive, fast-track, demanding ventures where consultant turnover is high. The study finds that building trust-filled relationships is key to successful project team integration and, therefore, to successful megaproject execution. Findings indicate that both professional and social inclusion processes have steep learning curves. Traditional risk management practice is to approach any uncertainty in a structured way to mitigate the potential impact on project outcomes. This research highlights cultural risk as a significant factor in the management of GCC megaprojects. The risks arising from high staff turnover typically include loss of project knowledge, delays to the project, and the cost and disruption of replacing staff. This paper calls for cultural risk to be recognised as an SRF, as the first step towards developing risk management strategies and reducing staff turnover for western consultants in GCC megaprojects.
Keywords: western consultants in megaprojects, national culture impacts on GCC megaprojects, significant risk factors in megaprojects, professional culture in megaprojects
Procedia PDF Downloads 133
1293 Carbon Footprint of Road Project for Sustainable Development: Lessons Learnt from Traffic Management of a Developing Urban Centre
Authors: Sajjad Shukur Ullah, Syed Shujaa Safdar Gardezi
Abstract:
Road infrastructure plays a vital role in the economic activities of any economy. Besides the benefits derived from these facilities, the utilization of extensive energy resources, fuels, and materials results in a negative impact on the environment in terms of carbon footprint; the carbon footprint is the overall amount of greenhouse gas (GHG) generated by any activity. However, this aspect of the environmental impact of road structures is not seriously considered during such developments, thus undermining a critical factor of sustainable development that usually remains unaddressed, especially in developing countries. The current work investigates the carbon footprint impact of a small road project (0.8 km, dual carriageway) initiated for traffic management in an urban centre. Life cycle assessment (LCA) with cradle-to-site boundary conditions has been adopted. Only the construction phase of the life cycle has been assessed at this stage. An impact of 10 kilotons of CO2 (6260 tons CO2/km) has been assessed. The rigid pavement dominated the contributions compared to the flexible component. Among the structural elements, the underpass works contributed the major portion. Among the materials, the concrete and steel utilized for the various structural elements accounted for more than 90% of the impact. Earth-moving equipment dominated the operational carbon. The results highlight that road infrastructure projects pose serious threats to the environment during their construction, which need to be considered during the approval stages. This work provides a guideline for supporting sustainable development, which can only be ensured when such endeavours are properly assessed by industry professionals and environmentally conscious alternative solutions are weighed for the future.
Keywords: construction waste management, kiloton, life cycle assessment, rigid pavement
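The cradle-to-site tally behind such an assessment is a weighted sum of material quantities and emission factors, plus equipment fuel burn as operational carbon. The quantities and factors below are illustrative assumptions, not the project's bill of quantities; they simply reproduce the structure of the calculation, with embodied materials dominating as the abstract reports:

```python
# Illustrative cradle-to-site carbon tally (all numbers are assumptions)
materials = {                 # quantity (t), emission factor (t CO2 per t)
    "concrete": (40000, 0.15),
    "steel":    (2500, 1.85),
    "asphalt":  (3000, 0.06),
}
equipment_fuel_l = 150000     # diesel consumed by earth-moving plant (L)
diesel_ef = 0.00268           # t CO2 per litre of diesel

embodied = sum(qty * ef for qty, ef in materials.values())
operational = equipment_fuel_l * diesel_ef
total = embodied + operational
lane_km = 0.8 * 2             # 0.8 km dual carriageway, counted per carriageway
print(f"total = {total:.0f} t CO2, intensity = {total / lane_km:.0f} t CO2/km")
```

Swapping in the actual bill of quantities and published emission factors (for cement type, steel grade, fuel, etc.) turns this skeleton into the project-level figure reported above.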
Procedia PDF Downloads 99
1292 Adhesive Bonded Joints Characterization and Crack Propagation in Composite Materials under Cyclic Impact Fatigue and Constant Amplitude Fatigue Loadings
Authors: Andres Bautista, Alicia Porras, Juan P. Casas, Maribel Silva
Abstract:
The Colombian aeronautical industry has stimulated research into the mechanical behavior of materials under the different loading conditions to which aircraft are generally exposed during operation. The Calima T-90 is the first military aircraft built in the country, used for the primary flight training of Colombian Air Force pilots; therefore, it may be exposed to adverse operating situations such as hard landings, which cause impact loads on the aircraft that might produce the impact fatigue phenomenon. The Calima T-90 structure is mainly manufactured from composite materials, forming assemblies and subassemblies of its different components. The main method of bonding these components is adhesive joints. Each type of adhesive bond must be studied on its own, since its performance depends on the conditions of the manufacturing process and the operating characteristics. This study aims to characterize the typical adhesive joints of the aircraft under usual loads. To this end, the effect of adhesive thickness on the mechanical performance of the joint under quasi-static loading, constant amplitude fatigue and cyclic impact fatigue will be evaluated using single lap joint specimens. Additionally, using a double cantilever beam specimen, the influence of adhesive thickness on the crack growth rate for mode I delamination failure, as a function of the critical energy release rate, will be determined. Finally, an analysis of the fracture surfaces of the test specimens, considering the mechanical interaction between the substrate (composite) and the adhesive, provides insights into the magnitude of the damage, the type of failure mechanism that occurs, and its correlation with the way the crack propagates under the proposed loading conditions.
Keywords: adhesive, composites, crack propagation, fatigue
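The mode I crack-growth characterization can be related to a Paris-type law written in terms of the energy release rate, da/dN = C·G(a)^m. The sketch below integrates that law numerically, using a fixed-load DCB-like scaling G ∝ a²; all constants are illustrative assumptions, not fitted values from any test campaign:

```python
def cycles_to_grow(a0, af, C=1e-15, m=3.0, B=5.0e5, steps=10000):
    """Cycles for a crack to grow from a0 to af (m) under da/dN = C * G(a)^m,
    with G(a) = B * a^2 standing in for fixed-load DCB behavior (illustrative)."""
    da = (af - a0) / steps
    n, a = 0.0, a0
    for _ in range(steps):
        g = B * (a + 0.5 * da) ** 2  # midpoint energy release rate (J/m^2)
        n += da / (C * g ** m)       # cycles spent on this crack increment
        a += da
    return n

print(f"{cycles_to_grow(0.02, 0.04):.3e} cycles from 20 mm to 40 mm")
```

Because G rises with crack length, growth accelerates as the crack extends; repeating the fit for each adhesive thickness would show how the constants C and m, and hence the life, shift with bondline thickness.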
Procedia PDF Downloads 204
1291 Studies on Distribution of the Doped Pr3+ Ions in the LaF3 Based Transparent Oxyfluoride Glass-Ceramic
Authors: Biswajit Pal, Amit Mallik, Anil K. Barik
Abstract:
Recent years have witnessed phenomenal growth in research on rare earth-doped transparent host materials, the essential components in optoelectronics that meet the increasing demand for fabricating high-quality optical devices, especially in telecommunication systems. The combination of low phonon energy (because of the fluoride environment) and high chemical durability with superior mechanical stability (due to the oxide environment) makes oxyfluoride glass-ceramics promising and useful materials in optoelectronics. The present work reports on undoped and doped (1 mol% Pr2O3) glass-ceramics of composition 16.52Al2O3•1.5AlF3•12.65LaF3•4.33Na2O•64.85SiO2 (mol%), prepared initially by a melting technique followed by annealing at 450 ºC for 1 h. The glass samples so obtained were heat treated at a constant 600 ºC with a varying heat treatment schedule (10-80 h). TEM techniques were employed to structurally characterize the glass samples. Pr2O3 affects the phase separation in the glass and delays the onset of crystallization in the glass-ceramic. The modified crystallization mechanism is established from the analysis of advanced STEM/EDXS results. The phase-separated droplets turn, after annealing, into 10-20 nm LaF3 nanocrystals which, upon scrutiny, are found to be dotted with the doped Pr3+ ions within the crystals themselves. The EDXS results also suggest that the inner LaF3 crystal core is enveloped by an Al-enriched layer, which in turn is surrounded by a Si-enriched outer shell. This greatly increases the viscosity at the periphery of the crystals, restricting further crystal growth and accounting for the formation of nano-sized crystals.
Keywords: advanced STEM/EDXS, crystallization mechanism, nano crystals, Pr3+ ion doped glass and glass ceramic, structural characterization
Procedia PDF Downloads 185
1290 High Throughput Virtual Screening against NS3 Helicase of Japanese Encephalitis Virus (JEV)
Authors: Soma Banerjee, Aamen Talukdar, Argha Mandal, Dipankar Chaudhuri
Abstract:
Japanese Encephalitis is a major infectious disease, with nearly half the world’s population living in areas where it is prevalent. Currently, its management involves only supportive care and symptom relief, with prevention relying on vaccination. Due to the lack of antiviral drugs against Japanese Encephalitis Virus (JEV), the quest for such agents remains a priority. For these reasons, simulation studies of drug targets against JEV are important. Towards this purpose, docking experiments of kinase inhibitors were performed against the chosen target, NS3 helicase, as it is a nucleoside-binding protein. Previous efforts in computational drug design against JEV revealed some lead molecules by virtual screening using public-domain software. To be more specific and accurate in finding leads, this study used the proprietary software Schrödinger GLIDE. Druggability of the pockets in the NS3 helicase crystal structure was first calculated with SITEMAP. The sites were then screened for compatibility with ATP, and the site most compatible with ATP was selected as the target. Virtual screening was performed using GLIDE on ligands acquired from the KinaseSARfari, KinaseKnowledgebase and Published Inhibitor Set databases. The 25 ligands with the best docking scores from each database were re-docked in XP mode. Protein structure alignment of NS3 was performed using VAST against MMDB, and similar human proteins were docked against all the best-scoring ligands: ligands scoring low against the human proteins were retained for further study, while those scoring high were screened out. Seventy-three ligands were shortlisted as the best scorers after performing HTVS. Protein structure alignment of NS3 revealed three human proteins with RMSD values less than 2 Å. Docking results with these three proteins revealed the inhibitors that could interfere with and inhibit the human proteins; those inhibitors were screened out.
Among the ligands left, those with docking scores worse than a threshold value were also removed to obtain the final hits. Analysis of the docked complexes through 2D interaction diagrams revealed the amino acid residues essential for ligand binding within the active site. Interaction analysis will help to find a strongly interacting scaffold among the hits. This experiment yielded 21 hits with the best docking scores, which could be investigated further for their drug-like properties. Aside from producing suitable leads, specific NS3 helicase-inhibitor interactions were identified. Selection of target modification strategies complementing the docking methodology, which can result in better lead compounds, is in progress; these enhanced leads can then proceed to in vitro testing.
Keywords: antivirals, docking, GLIDE, high-throughput virtual screening, Japanese encephalitis, NS3 helicase
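The funnel described in this abstract — keep the best scorers against the target, discard ligands that also dock strongly to the similar human proteins, then apply a final score cutoff — amounts to a few simple set operations. The function, cutoffs and score values below are hypothetical illustrations, not the study’s actual GLIDE data (docking scores are negative; more negative means stronger predicted binding):

```python
def screening_funnel(target_scores, human_scores,
                     keep_top=25, human_cutoff=-7.0, final_cutoff=-8.0):
    """Sketch of a virtual-screening filtering funnel on made-up scores.
    target_scores: ligand -> docking score against the viral target.
    human_scores: ligand -> best score against the similar human proteins."""
    # 1. keep the best scorers against the target (most negative first)
    top = sorted(target_scores, key=target_scores.get)[:keep_top]
    # 2. drop ligands that also bind the human off-targets strongly
    selective = [lig for lig in top if human_scores.get(lig, 0.0) > human_cutoff]
    # 3. apply the final docking-score threshold to get the hits
    return [lig for lig in selective if target_scores[lig] <= final_cutoff]
```

The same three-stage logic applies regardless of docking engine; only the score conventions and cutoffs would change.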
Procedia PDF Downloads 230
1289 Recent Progress in the Uncooled Mid-Infrared Lead Selenide Polycrystalline Photodetector
Authors: Hao Yang, Lei Chen, Ting Mei, Jianbang Zheng
Abstract:
Uncooled PbSe photodetectors for the mid-infrared range (2-5 μm) with sensitization technology currently extract more photoelectric response than traditional ones and enable room-temperature (300 K) photodetection with high detectivity, which has attracted wide attention in many fields. This technology generally combines film fabrication by vapor phase deposition (VPD) with a sensitizing process based on doping with oxygen and iodine. Many works presented in recent years employ a high-temperature activation method with oxygen/iodine vapor diffusion, which reveals that oxygen or iodine plays an important role in the sensitization of PbSe material. In this paper, we present our latest experimental results and discussions of the stoichiometry of oxygen and iodine and its influence on the polycrystalline structure and photoresponse. The experimental results reveal that the crystal orientation was transformed from (200) to (420) by sensitization, and a responsivity of 5.42 A/W was obtained at the optimal stoichiometry of oxygen and iodine, with a molecular density of I2 of ~1.51×10¹² mm⁻³ and an oxygen pressure of ~1 MPa. We verified that I2 plays a role in transporting oxygen into the crystal lattice, which is, however, not its major role. XPS data reveal that sensitization with iodine reduces the atomic proportion of Pb from 34.5% to 25.0% compared with samples without iodine, resulting in a proportion of about 1:1 between Pb and Se atoms through sublimation of PbI2 during the sensitization process; the Pb/Se atomic proportion is controlled by the I/O atomic proportion in the polycrystalline grains, a very important factor for improving the responsivity of uncooled PbSe photodetectors. Moreover, a novel sensitization and dopant activation method is proposed using oxygen ion implantation with a low ion energy of <500 eV and a beam current of ~120 μA/cm².
These results may be helpful for understanding the sensitization mechanism of polycrystalline lead salt materials.
Keywords: polycrystalline PbSe, sensitization, transport, stoichiometry
Procedia PDF Downloads 349
1288 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design
Authors: Mohammad Bagher Anvari, Arman Shojaei
Abstract:
Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the dynamic behavior of the bridge during each step of the launching process proves tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by other load cases such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution for modeling bridge construction stages effectively. This paper presents a novel Finite Element (FE) model focused on the static behavior of bridges during the launching process, together with a simple method to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to their underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction, which have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. Furthermore, a new and simplified model, termed the "semi-infinite beam" model, is developed to address this oversight. By utilizing this model alongside a simple optimization approach, optimal values for launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for practical usage.
In conclusion, this paper introduces a comprehensive Finite Element model for studying the static behavior of bridges during incremental launching. The proposed model addresses limitations found in previous approaches and offers practical solutions to enhance accuracy. The study emphasizes the importance of considering the initial stages and introduces the "semi-infinite beam" model. Through the developed model and optimization approach, optimal specifications for launching nose configurations are determined. This research holds significant practical implications and contributes to the optimization of incrementally launched bridges, benefiting both the construction industry and bridge designers.
Keywords: incremental launching, bridge construction, finite element model, optimization
Procedia PDF Downloads 102
1287 God, The Master Programmer: The Relationship Between God and Computers
Authors: Mohammad Sabbagh
Abstract:
Anyone who reads the Torah or the Quran learns that GOD created everything that is around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what are known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands by words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything in six days, just as we can program a virtual world on the computer. GOD did mention in the Quran that one day, where GOD’s throne is, is 1000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6000 years of what we count, and gave everything its function, attributes, class, methods and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because for any input, whether physical, spiritual or by thought, that is outputted by any of HIS creatures, the answer has already been programmed. Any path, any thought, any idea has already been laid out with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as The Fastest Accountant in The Quran; the Arabic word that was used is close to processor or calculator.
If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, then in 2022 you are going to require one of the strongest, fastest, most capable supercomputers in the world, with a theoretical speed of 50 petaFLOPS, to accomplish that — where a petaFLOPS is the ability to perform one quadrillion (10¹⁵) floating-point operations per second, a number a human cannot even fathom. To put it more in perspective, GOD is calculating while the computer is going through those 50 petaFLOPS of calculations per second, and HE is also calculating all the physics of every atom, and of what is smaller than that, in the whole actual explosion, and it is all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand it, and whatever programming or science you deduce from whatever event you observed can relate to other similar events. That is why GOD might have said in The Quran that it is the people of knowledge, scholars, or scientists who fear GOD the most! One thing that is essential for us, to keep up with what the computer is doing and to track our progress along with any errors, is that we incorporate logging mechanisms and backups. GOD in The Quran said that ‘WE used to copy what you used to do’. Essentially, as the world is running, think of it as an interactive movie being played out in front of you, in a fully immersive, non-virtual reality setting. GOD is recording it, from every angle to every thought, to every action. This brings the idea of how scary the Day of Judgment will be, when one might realize that it is going to be a fully immersive video when we will be receiving and reading our book.
Keywords: programming, the Quran, object orientation, computers and humans, GOD
Procedia PDF Downloads 107
1286 Formulation and Characterization of NaCS-PDMDAAC Capsules with Immobilized Chlorella vulgaris for Phycoremediation of Palm Oil Mill Effluent
Authors: Quin Emparan, Razif Harun, Dayang R. A. Biak, Rozita Omar, Michael K. Danquah
Abstract:
Cultivation of immobilized microalgae cells is on the rise for biotechnological applications. In this study, cultivation of Chlorella vulgaris was carried out both as a suspended free-cell system and as an immobilized-cell system. NaCS-PDMDAAC capsules were used to immobilize C. vulgaris. Initially, the synthesized NaCS with C. vulgaris culture was prepared at various concentrations of 5-20% (w/v) using a 6% hardening solution (PDMDAAC) to investigate the capsules' gel stability and suitability for microalgae cell growth. The capsules produced from 15% NaCS with C. vulgaris culture were then further investigated using 5%, 10%, and 15% (w/v) PDMDAAC solutions. The capsules' gel stability was evaluated through dissolution time and loss of uniform spherical shape, while suitability for microalgae cell growth was evaluated through the optical density of the microalgae. In this study, the 15% NaCS-10% PDMDAAC capsules were found to be the most suitable for sustaining both gel stability and microalgae cell growth in MLA. For that reason, the C. vulgaris immobilized in the 15% NaCS-10% PDMDAAC capsules was further characterized by physicochemical analysis: morphological, carbon (C), hydrogen (H) and nitrogen (N), Fourier transform-infrared (FT-IR), scanning electron microscopy-energy dispersive X-ray (SEM-EDX), zeta potential and Brunauer-Emmett-Teller (BET) analyses. The results revealed the presence of sulfonates in the synthesized NaCS and in the NaCS-PDMDAAC capsules, with and without C. vulgaris, proving that the cellulose alcohol group was successfully bonded by the sulfo group. Besides that, immobilized microalgae cells have a smaller cell size of 6.29 ± 1.09 µm and a zeta potential of -11.93 ± 0.91 mV compared with the suspended free-cell microalgae culture. It can be summarized that immobilization of C. vulgaris in the 15% NaCS-10% PDMDAAC capsules is relevant as a bioremediator for wastewater treatment purposes, owing to the suitable size of the pores and capsules as well as their structural and compositional properties.
Keywords: biological capsules, immobilized cultivation, microalgae, physico-chemical analysis
Procedia PDF Downloads 172
1285 Film Dosimetry – An Asset for Collaboration Between Cancer Radiotherapy Centers at Established Institutions and Those Located in Low- and Middle-Income Countries
Authors: A. Fomujong, P. Mobit, A. Ndlovu, R. Teboh
Abstract:
Purpose: Film’s unique qualities, such as tissue equivalence, high spatial resolution, near energy independence and comparatively low cost, ought to make it the preferred and widely used dosimeter in radiotherapy centers in low- and middle-income countries (LMICs). This, however, is not always the case, as other factors often taken for granted in advanced radiotherapy centers remain a challenge in LMICs. We explored the unique qualities of film dosimetry that can enable one institution to benefit from another’s protocols via collaboration. Methods: For simplicity, two institutions were considered in this work. We used a single batch of films (EBT-XD) and established a calibration protocol, including scan protocols and calibration curves, on the radiotherapy delivery system at Institution A. We then performed patient-specific QA for patients treated on system A (PSQA-A-A). Films from the same batch were then sent to a remote center for PSQA on radiotherapy delivery system B. Irradiations were done at Institution B, and the films were returned to Institution A for processing and analysis (PSQA-B-A). The following points were observed throughout the process: (a) a reference film was irradiated to a known dose on the same system irradiating the PSQA film; (b) for calibration, we utilized the one-scan protocol and maintained the same scan orientation for the calibration, PSQA and reference films. Results: Gamma index analysis using a dose threshold of 10% and 3%/2 mm criteria showed a gamma passing rate of 99.8% and 100% for the PSQA-A-A and PSQA-B-A, respectively.
Conclusion: This work demonstrates that established film dosimetry protocols at one institution, e.g., an advanced radiotherapy center, can be applied with similar accuracy to irradiations performed at another institution, e.g., a center located in an LMIC, thus encouraging collaboration between the two for worldwide patient benefit.
Keywords: collaboration, film dosimetry, LMIC, radiotherapy, calibration
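The gamma analysis quoted in this abstract scores each reference point against all evaluated points, combining dose difference and distance-to-agreement into one metric. The following is a minimal 1-D sketch of a global gamma computation, not the authors' actual QA software; the default criteria mirror the 10% threshold and 3%/2 mm settings above, and the profile data in the usage are illustrative:

```python
import numpy as np

def gamma_index_1d(ref_dose, eval_dose, positions,
                   dose_crit=0.03, dist_crit=2.0, threshold=0.10):
    """Global 1-D gamma analysis in the style of Low et al.
    ref_dose, eval_dose: dose profiles sampled on `positions` (mm).
    dose_crit: dose-difference criterion as a fraction of max reference dose.
    dist_crit: distance-to-agreement criterion (mm).
    threshold: exclude points below this fraction of the max dose."""
    d_max = ref_dose.max()
    gammas = []
    for x_r, d_r in zip(positions, ref_dose):
        if d_r < threshold * d_max:
            continue                      # low-dose region is not scored
        dist = (positions - x_r) / dist_crit
        ddiff = (eval_dose - d_r) / (dose_crit * d_max)
        # gamma = minimum combined dose/distance metric over all points
        gammas.append(np.sqrt(dist**2 + ddiff**2).min())
    gammas = np.asarray(gammas)
    passing_rate = 100.0 * (gammas <= 1.0).mean()
    return gammas, passing_rate
```

Real film QA is 2-D and includes the calibration, reference-film normalization and registration steps described above, all omitted here.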
Procedia PDF Downloads 75
1284 Effect of Cryogenic Pre-stretching on the Room Temperature Tensile Behavior of AZ61 Magnesium Alloy and Dominant Grain Growth Mechanisms During Subsequent Annealing
Authors: Umer Masood Chaudry, Hafiz Muhammad Rehan Tariq, Chung-soo Kim, Tea-sung Jun
Abstract:
This study explored the influence of pre-stretching temperature on the microstructural characteristics and deformation behavior of AZ61 magnesium alloy and its implications for grain growth during subsequent annealing. AZ61 alloy was stretched to 5% plastic strain along the rolling (RD) and transverse (TD) directions at room temperature (RT) and cryogenic temperature (-150 °C, CT), followed by annealing at 320 °C for 1 h, to investigate twinning and dislocation evolution and their consequent effects on flow stress, plastic strain and strain hardening rate. Compared to RT-stretched samples, a significant improvement in yield stress and strain hardening rate and a moderate reduction in elongation to failure were witnessed for CT-stretched samples along both RD and TD. Subsequent EBSD analysis revealed an increased fraction of fine {10-12} twins and the nucleation of multiple {10-12} twin variants, caused by higher local stress concentration at the grain boundaries in CT-stretched samples, as manifested by the kernel average misorientation. This higher twin fraction and twin-twin interaction imposed strengthening by restricting the mean free path of dislocations, leading to higher flow stress and strain hardening rate. During annealing of the RT/CT-stretched samples, the residual strain energy and twin boundaries decreased due to static recovery, leading to a coarse-grained, twin-free microstructure. Strain-induced boundary migration (SIBM) was found to be the predominant mechanism governing grain growth during annealing, via the movement of high-angle grain boundaries.
Keywords: magnesium, twinning, twinning variant selection, EBSD, cryogenic deformation
Procedia PDF Downloads 67
1283 Study of Secondary Metabolites of Sargassum Algae: Anticorrosive and Antibacterial Activities
Authors: Prescilla Lambert, Christophe Roos, Mounim Lebrini
Abstract:
For several years, the Caribbean islands and West Africa have had to deal with the massive arrival of the brown seaweed Sargassum. Overall, this macroalga, which constitutes a habitat for a great diversity of marine organisms, is also an additional stress factor for the marine environment (e.g., coral reefs). In addition, the accumulation and subsequent massive decomposition of Sargassum spp. biomass on the coast releases toxic gases (H₂S and NH₃), disrupting the economic, health and tourist life of the island and the other affected territories. These algal blooms originate from the eutrophication of the oceans, accentuated by global warming, and scientists unfortunately predict significant recurrences of these Sargassum strandings for years to come. It is therefore more than necessary to find solutions by putting in place a sustainable management plan for this phenomenon. Martinique, a small island in the Caribbean arc, is one of the many areas impacted by Sargassum seaweed strandings. Since 2011, there has been a constant increase in the degradation of materials in this region, largely due to the toxic/corrosive gases released by algal decomposition. In order to protect structures and vulnerable building materials while limiting, as much as possible, the use of synthetic or petroleum-based molecules, research is being conducted on molecules of natural origin. Thanks to a chemical composition comprising molecules with interesting properties, algae such as Sargassum could potentially help solve many issues. This study therefore focuses on the green extraction and characterization of molecules from the species Sargassum fluitans and Sargassum natans present in Martinique. The secondary metabolites found in these extracts showed variability in yield rates due to local climatic conditions.
The tests carried out shed light on the anticorrosive and antibacterial potential of the algae; these extracts can thus be described as natural inhibitors. The effect of varying inhibitor concentration was tested electrochemically using electrochemical impedance spectroscopy and polarization curves. The analysis of electrochemical results obtained by direct immersion in the extracts and by self-assembled molecular layers (SAMs) for the Sargassum fluitans III, Sargassum natans I and VIII species was conclusive in both acid and alkaline environments. The excellent results obtained reveal an inhibitory efficacy of 88% at 50 mg/L for the crude extract of Sargassum fluitans III, and efficacies greater than 97% for the chemical families of Sargassum fluitans III. Similarly, microbiological tests also suggest a bactericidal character: results for Sargassum fluitans III crude extract show a minimum inhibitory concentration (MIC) of 0.005 mg/mL on Gram-negative bacteria and a MIC greater than 0.6 mg/mL on Gram-positive bacteria. These results make it possible to address local and international issues while valorizing a biomass rich in biodegradable molecules. The next step in this study will therefore be the evaluation of the toxicity of Sargassum spp.
Keywords: Sargassum, secondary metabolites, anticorrosive, antibacterial, natural inhibitors
Procedia PDF Downloads 72
1282 Bi- and Tri-Metallic Catalysts for Hydrogen Production from Hydrogen Iodide Decomposition
Authors: Sony, Ashok N. Bhaskarwar
Abstract:
Production of hydrogen from a renewable raw material without co-synthesis of harmful greenhouse gases is the current need for sustainable energy solutions. The sulfur-iodine (SI) thermochemical cycle, using intermediate chemicals, is an efficient process for producing hydrogen at a much lower temperature than that required for the direct splitting of water, and no net byproduct forms in the cycle. Hydrogen iodide (HI) decomposition is a crucial reaction in this cycle, as the product, hydrogen, forms only in this step. It is an endothermic, reversible, and equilibrium-limited reaction; the theoretical equilibrium conversion at 550°C is a meagre 24%. There is growing interest, therefore, in enhancing HI conversion to near-equilibrium values at lower reaction temperatures, and in possibly improving the rate. The reaction is relatively slow without a catalyst, and hence catalytic decomposition of HI has gained much significance. Bi-metallic Ni-Co, Ni-Mn and Co-Mn and tri-metallic Ni-Co-Mn catalysts over zirconia support were tested for the HI decomposition reaction. The catalysts were synthesized via a sol-gel process wherein Ni was 3 wt% in all the samples, and Co and Mn had equal weight ratios in the Co-Mn catalyst. Powder X-ray diffraction and Brunauer-Emmett-Teller surface area characterizations indicated the polycrystalline nature and well-developed mesoporous structure of all the samples. The experiments were performed in a vertical laboratory-scale packed bed reactor made of quartz, and HI (55 wt%) was fed along with nitrogen at a WHSV of 12.9 hr⁻¹. Blank experiments at 500°C suggested an HI conversion of less than 5%. The activities of all the catalysts were checked at 550°C, and the highest conversion of 23.9% was obtained with the tri-metallic 3Ni-Co-Mn-ZrO₂ catalyst. The decreasing order of catalyst performance was: 3Ni-Co-Mn-ZrO₂ > 3Ni-2Co-ZrO₂ > 3Ni-2Mn-ZrO₂ > 2.5Co-2.5Mn-ZrO₂.
The tri-metallic catalyst remained active for 360 min at 550°C without any observable drop in its activity/stability. Among the explored catalyst compositions, the tri-metallic catalyst clearly performs better for HI conversion than the bi-metallic ones. Owing to their low cost and ease of preparation, these tri-metallic catalysts could be used for large-scale hydrogen production.
Keywords: sulfur-iodine cycle, hydrogen production, hydrogen iodide decomposition, bi- and tri-metallic catalysts
Procedia PDF Downloads 187
1281 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language
Authors: Wenjun Hou, Marek Perkowski
Abstract:
The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measure based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover’s algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time transforms the decision problem into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equal superposition by applying the Hadamard gate on each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, all created using only quantum Toffoli gates, including its special forms, the Feynman (CNOT) and Pauli X gates.
The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be reduced further. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language
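The "optimal number of times" to apply the oracle follows from Grover's amplitude amplification: with M marked states among N, each iteration rotates the state by 2θ, where sin θ = √(M/N), so roughly (π/4)√(N/M) iterations maximize the success probability. A classical sketch of this amplitude bookkeeping (not the paper's Q# circuit, and the example parameters in the usage are illustrative) is:

```python
import math

def grover_success_probability(n_qubits, n_marked, iterations=None):
    """Classical bookkeeping of Grover amplitude amplification.
    With M marked states out of N = 2**n_qubits, each Grover iteration
    rotates the state vector by 2*theta, where sin(theta) = sqrt(M/N);
    after k iterations the probability of measuring a marked state is
    sin((2k+1)*theta)**2."""
    N = 2 ** n_qubits
    theta = math.asin(math.sqrt(n_marked / N))
    if iterations is None:
        iterations = math.floor(math.pi / (4.0 * theta))  # near-optimal count
    prob = math.sin((2 * iterations + 1) * theta) ** 2
    return prob, iterations
```

For example, with 10 qubits and a single marked state, about 25 iterations push the success probability above 99%, whereas zero iterations leave it at the uniform 1/1024.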
Procedia PDF Downloads 190
1280 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice
Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer
Abstract:
The ice accretion of salt water on cold substrates creates brine-spongy ice, a mixture of pure ice and liquid brine. A real case of the creation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during transient heat conduction increases the salinity of the brine pockets until a local equilibrium state is reached. In this process, heat passing through the medium does not merely change the sensible heat of the ice and brine pockets; latent heat plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer together with partial solidification and melting. Properties of brine-spongy ice are obtained from the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium into a set of ordinary differential equations. Boundary conditions are chosen from one of the applicable cases of this type of ice: one side is considered a thermally isolated surface, and the other side is assumed to be suddenly subjected to a constant-temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are conducted using different salinities from 5 to 60 ppt. Time steps and space intervals are chosen appropriately to maintain the most stable and fast solution. The variation of temperature, brine volume fraction and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can create a wide range of brine-pocket salinities, from the initial salinity up to 180 ppt.
The rate of variation of temperature is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation and decreases as time passes. Brine pockets are smaller in portions closer to the colder side than to the warmer side. At the start of the solution, the numerical scheme tends to become unstable because of the sharp variation of temperature at the start of the process; adjusting the intervals resolves this instability. The analytical model, solved with a numerical scheme, is capable of predicting the thermal behavior of brine-spongy ice. This model and its numerical solutions are important for modeling the freezing of salt water and ice accretion on cold structures.
Keywords: method of lines, brine-spongy ice, heat conduction, salt water
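The Method of Lines setup described in this abstract — spatial discretization into ODEs, one insulated boundary, one suddenly imposed constant-temperature boundary — can be sketched for the simpler constant-property case as follows. This is an illustrative toy, not the authors' model: it omits the brine-pocket phase change and latent heat that are the paper's focus, and all numerical values (grid size, diffusivity, temperatures) are assumptions:

```python
import numpy as np

def conduction_mol(n=51, length=0.1, alpha=1.2e-6,
                   T_init=-5.0, T_left=-20.0, t_end=600.0):
    """Method-of-Lines toy for 1-D transient conduction in a slab:
    left face suddenly held at T_left, right face insulated.
    Constant properties only; the paper's brine-pocket phase change,
    latent heat and salinity evolution are NOT modelled. Illustrative
    SI units: m, s, m^2/s, degrees C."""
    dx = length / (n - 1)
    dt = 0.25 * dx**2 / alpha          # r = 0.25 keeps explicit Euler stable
    T = np.full(n, T_init, dtype=float)
    T[0] = T_left                      # sudden constant-temperature boundary
    for _ in range(int(t_end / dt)):
        dTdt = np.zeros(n)
        # interior nodes: centred second difference for the Laplacian
        dTdt[1:-1] = alpha * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
        # insulated right face via a mirror (ghost) node
        dTdt[-1] = alpha * 2.0 * (T[-2] - T[-1]) / dx**2
        T += dt * dTdt
        T[0] = T_left                  # re-impose the Dirichlet condition
    return T
```

The spatial semi-discretization is the Method-of-Lines step; here the resulting ODE system is advanced with simple explicit Euler, whereas a production code would hand it to a stiff ODE integrator and add the phase-change source terms.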
Procedia PDF Downloads 217
1279 The Effects of Nanoemulsions Based on Commercial Oils for the Quality of Vacuum-Packed Sea Bass at 2±2°C
Authors: Mustafa Durmuş, Yesim Ozogul, Esra Balıkcı, Saadet Gokdoğan, Fatih Ozogul, Ali Rıza Köşker, İlknur Yuvka
Abstract:
Food scientists and researchers have paid attention to developing new ways of improving the nutritional value of foods. The application of nanotechnology techniques to the food industry may allow the modification of food texture, taste, sensory attributes, coloring strength, processability, and stability during the shelf life of products. In this research, the effects of nanoemulsions based on commercial oils on vacuum-packed sea bass fillets stored at 2±2°C were investigated in terms of sensory, chemical (total volatile basic nitrogen (TVB-N), thiobarbituric acid (TBA), peroxide value (PV), free fatty acids (FFA), pH, and water holding capacity (WHC)), and microbiological qualities (total anaerobic bacteria and total lactic acid bacteria). Physical properties of the emulsions (viscosity, droplet particle size, thermodynamic stability, refractive index, and surface tension) were determined. The nanoemulsion preparation method was based on the high-energy principle, using an ultrasonic homogenizer. Sensory analyses of the raw fish showed that the demerit points of the control group were higher than those of the treated groups. The sensory scores (odour, taste, and texture) of the cooked fillets decreased with storage time, especially in the control. Results obtained from chemical and microbiological analyses also showed that nanoemulsions significantly (p<0.05) decreased the values of biochemical parameters and the growth of bacteria during the storage period, thus improving the quality of vacuum-packed sea bass. Keywords: quality parameters, nanoemulsion, sea bass, shelf life, vacuum packing
Procedia PDF Downloads 459
1278 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality
Authors: Qian Yi Ooi
Abstract:
At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, with some slight deviations arising from scale differences. However, insufficient parameters or poor surface mesh quality is likely to occur if these small deviations are carried over to a future civil aircraft whose size differs greatly from conventional aircraft, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, this study examines the geometric similarity of airfoil parameters and the surface mesh quality in CFD calculations, in order to determine how different parameterization methods perform at different airfoil scales. The research objects are three airfoil scales, corresponding to the wing root and wingtip of a conventional civil aircraft and the wing root of the giant hybrid wing, each represented by three parameterization methods to compare the calculation differences between airfoil sizes. In this study, the constants are NACA 0012, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, this study also uses different numbers of edge mesh divisions and the same bias factor in the CFD simulation. The results show that, as the airfoil scale changes, the parameterization method, the number of control points, and the number of mesh divisions should be adjusted to maintain the accuracy of the wing's aerodynamic performance.
When the airfoil scale increases, the most basic point cloud parameterization method requires more and larger datasets to support the accuracy of the airfoil's aerodynamic performance, which runs up against the limits of computer capacity. When using the B-spline curve method, the number of control points and mesh divisions should be set appropriately to obtain higher accuracy; however, this quantitative balance cannot be defined directly and must be found iteratively by adding and removing points. Lastly, when using the CST method, a limited number of control points is enough to accurately parameterize the larger-sized wing, and a high degree of accuracy and stability can be obtained even on a lower-performance computer. Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality
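As a rough illustration of the CST method discussed above, the sketch below evaluates Kulfan's class/shape function transformation for an airfoil-like surface, using class-function exponents N1 = 0.5 and N2 = 1.0 (round nose, sharp trailing edge) and a Bernstein-polynomial shape function. The weight values are illustrative placeholders, not a fitted NACA 0012 result.

```python
from math import comb  # binomial coefficient, Python 3.8+

def cst_surface(x, weights, n1=0.5, n2=1.0, dz_te=0.0):
    """Evaluate a CST (class/shape function transformation) surface.
    x: chordwise station in [0, 1]; weights: Bernstein coefficients."""
    n = len(weights) - 1
    class_fn = x**n1 * (1.0 - x)**n2            # class fn: round nose, sharp TE
    shape_fn = sum(w * comb(n, i) * x**i * (1.0 - x)**(n - i)
                   for i, w in enumerate(weights))
    return class_fn * shape_fn + x * dz_te      # optional trailing-edge thickness

# A handful of weights already defines a smooth upper surface
# (values are illustrative, not a fitted NACA 0012 result).
weights = [0.1718, 0.1530, 0.1624]
ys = [cst_surface(x / 10, weights) for x in range(11)]
```

The compactness of this representation is what allows a large wing to be parameterized with few control points: each weight shapes the whole surface rather than one local point, unlike the point cloud method.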
Procedia PDF Downloads 222
1277 Microstructure and Mechanical Properties of Low Alloy Steel with Double Austenitizing Tempering Heat Treatment
Authors: Jae-Ho Jang, Jung-Soo Kim, Byung-Jun Kim, Dae-Geun Nam, Uoo-Chang Jung, Yoon-Suk Choi
Abstract:
Low alloy steels are widely used for pressure vessels, spent fuel storage, and steam generators in nuclear power plants, which are required to withstand internal pressure and prevent unexpected failure and may suffer embrittlement from high levels of radiation and heat over long periods. It is therefore important to improve the mechanical properties of low alloy steels at an early stage of fabrication to ensure the integrity of structural materials. Recently, a double austenitizing and tempering (DAT) process was shown to significantly improve strength and toughness through the refinement of prior austenite grains. In this study, the mechanism by which mechanical properties improve with the microstructural changes induced by the second full-austenitizing temperature of the DAT process was investigated for a low alloy steel requiring structural integrity. Compared to the conventional single austenitizing and tempering (SAT) process, the tensile elongation improved by about 5%, the ductile-brittle transition temperatures (DBTTs) showed a shift of about -65 ℃, and the grain size decreased by about 50% under the DAT process conditions. Grain refinement interferes with crack propagation owing to the increase in grain boundaries and the amount of energy absorbed at low temperatures. The higher the first austenitizing temperature in the DAT process, the more the spheroidized carbides increase and the stronger the strengthening effect of fine precipitates in the ferrite grains. The area ratio of dimples in the transition region increased in proportion to the effect of the spheroidized carbides. These may be the primary mechanisms that improve low-temperature toughness and elongation while maintaining similar hardness and strength. Keywords: double austenitizing, ductile-brittle transition temperature, grain refinement, heat treatment, low alloy steel, low-temperature toughness
Procedia PDF Downloads 510
1276 Concepts of Modern Design: A Study of Art and Architecture Synergies in Early 20ᵗʰ Century Europe
Authors: Stanley Russell
Abstract:
Until the end of the 19th century, European painting dealt almost exclusively with the realistic representation of objects and landscapes, as can be seen in the work of realist artists like Gustave Courbet. Architects of the day typically made reference to and recreated historical precedents in their designs. The curriculum of the first architecture school in Europe, the École des Beaux-Arts, based on the study of classical buildings, had a profound effect on the profession. Painting exhibited an increasing level of abstraction from the late 19th century, with Impressionism, and the trend continued into the early 20th century, when Cubism had an explosive effect, sending shock waves through the art world that also extended into the realm of architectural design. The architect and painter Le Corbusier, with "Purism," was one of the first to integrate abstract painting and building design theory in works that were equally shocking to the architecture world. The interrelationship of the arts, including architecture, was institutionalized in the Bauhaus curriculum, which sought to find commonality between diverse art disciplines. The renowned painter and Bauhaus instructor Vassily Kandinsky was one of the first artists to make a semi-scientific analysis of the elements of "non-objective" painting while also drawing parallels between painting and architecture in his book Point and Line to Plane. The Russian Constructivists made abstract compositions with simple geometric forms, and like the De Stijl group of the Netherlands, they also experimented with full-scale constructions and spatial explorations. Based on the study of historical accounts and original artworks of Impressionism, Cubism, the Bauhaus, De Stijl, and Russian Constructivism, this paper begins with a thorough explanation of the art theory and several key works from these important art movements of the late 19th and early 20th century.
Similarly, based on written histories and first-hand experience of built and drawn works, the author continues with an analysis of the theories and architectural works generated by the same groups, all of which actively pursued continuity between their art and architectural concepts. Through images of specific works, the author shows how the trend toward abstraction and geometric purity in painting coincided with a similar trend in architecture that favored simple, unornamented geometries. Using examples like the Villa Savoye, the Schroeder House, the Dessau Bauhaus, and unbuilt designs by the Russian architect Chernikov, the author gives detailed examples of how the intersection of trends in art and architecture led to a unique and fruitful period of creative synergy, when the same concepts that artists used to generate paintings were also used by architects in the making of objects, space, and buildings. In conclusion, this article examines the pivotal period in art and architecture history from the late 19th to the early 20th century, when the confluence of art and architectural theory led to many painted, drawn, and built works that continue to inspire architects and artists to this day. Keywords: modern art, architecture, design methodologies, modern architecture
Procedia PDF Downloads 127
1275 Territorial Brand as a Means of Structuring the French Wood Industry
Authors: Laetitia Dari
Abstract:
The brand constitutes a source of differentiation between competitors: it highlights specific characteristics that create value for the enterprise. Today the concept of a brand is not limited to products; it can also apply to territories. Competition between territories, for tourism, research, jobs, etc., leads them to develop territorial brands to bring out their identity and specificity. Some territorial brands are based on natural resources or products characteristic of a territory. In the French wood sector, we can observe the emergence of many territorial brands. Supported by the inter-professional organization, these brands have the main objective of showcasing wood as a source of local solutions for construction and energy. The implementation of these collective projects raises the question of how relations between companies are structured and coordinated. The central question of our work is to understand how the territorial brand promotes the structuring of a sector and the construction of collective relations between actors. In other words, we are interested in the conditions for the emergence of the territorial brand and the way in which it becomes a means of mobilizing actors around a common project. The objectives of the research are (1) to understand in which context a territorial brand emerges, (2) to analyze the way in which the territorial brand structures collective relations between actors, and (3) to give actors keys to developing this type of project successfully. Our research is thus based on a qualitative methodology, with semi-structured interviews conducted with the main territorial brands in France. The research answers both academic and empirical questions. From an academic point of view, it contributes to understanding how a collective project is constructed and how its governance operates.
From an empirical point of view, the interest of our work is to bring out the key success factors in the development of a territorial brand and to show how the brand can become a source of value for a territory. Keywords: brand, marketing, strategy, territory, third party stakeholder, wood
Procedia PDF Downloads 67
1274 The Structure and Function Investigation and Analysis of the Automatic Spin Regulator (ASR) in the Powertrain System of Construction and Mining Machines with the Focus on Dump Trucks
Authors: Amir Mirzaei
Abstract:
The powertrain system is one of the most basic and essential components of a machine; motion is practically impossible without it. Power generated by the engine is transmitted by the powertrain system to the wheels, the last parts of the system. The powertrain system has different components depending on the application and design. When the force generated by the engine reaches the wheels, the frictional force between the tire and the ground determines how much traction is available and how much slip occurs. On surfaces such as icy, muddy, and snow-covered ground, the friction coefficient between the tire and the ground drops dramatically, which in turn increases the loss of force and drastically reduces the vehicle's traction. This condition is caused by the phenomenon of slipping, which, in addition to wasting the energy produced, causes premature wear of the driving tires. It also raises the transmission oil temperature excessively, degrading and contaminating the oil and reducing the useful life of the clutch disks and plates inside the transmission. This issue is far more important in road construction and mining machinery than in passenger vehicles and is always one of the most significant considerations in design. One method of overcoming it is the automatic spin regulator system, abbreviated as ASR. The importance of this method, and its structure and function, have solved one of the biggest challenges of the powertrain system in the field of construction and mining machinery.
This research examines that system. Keywords: automatic spin regulator, ASR, methods of reducing slipping, methods of preventing the reduction of the useful life of clutch disks and plates, methods of preventing premature contamination of transmission oil, methods of preventing the reduction of the useful life of tires
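As a purely hypothetical illustration of the slip-limiting idea behind an ASR system, the sketch below computes a longitudinal slip ratio and proportionally reduces the requested engine torque when slip exceeds a target. The function names, thresholds, and gain are invented for illustration and do not describe any specific machine's implementation.

```python
# Hypothetical sketch of an ASR-style slip limiter. All names, thresholds,
# and gains are illustrative, not taken from any real controller.

def slip_ratio(wheel_speed, vehicle_speed):
    """Longitudinal slip ratio during traction (0 = pure rolling)."""
    if wheel_speed <= 0.0:
        return 0.0
    return (wheel_speed - vehicle_speed) / wheel_speed

def asr_torque_command(torque_request, wheel_speed, vehicle_speed,
                       slip_target=0.15, gain=2.0):
    """Scale back requested torque in proportion to excess slip."""
    excess = slip_ratio(wheel_speed, vehicle_speed) - slip_target
    if excess <= 0.0:
        return torque_request            # within limits: pass through
    reduction = min(1.0, gain * excess)  # proportional cut, capped at 100%
    return torque_request * (1.0 - reduction)

# Wheel spinning at 12 m/s while the machine moves at 6 m/s -> slip = 0.5,
# so the 1000 N·m request is cut back.
cmd = asr_torque_command(1000.0, 12.0, 6.0)
```

By trimming torque before the wheel spins freely, such a limiter preserves traction on low-friction ground and avoids the tire wear, oil overheating, and clutch damage described in the abstract.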
Procedia PDF Downloads 79