Search results for: dynamic database
1448 Rheological and Computational Analysis of Crude Oil Transportation
Authors: Praveen Kumar, Satish Kumar, Jashanpreet Singh
Abstract:
Transportation of unrefined crude oil from the production unit to a refinery or large storage area by pipeline is difficult due to the different properties of crude in various areas. Thus, the design of a crude oil pipeline is a very complex and time-consuming process when all the various parameters are considered. Three very important parameters play a significant role in transportation and processing pipeline design: the viscosity profile, the temperature profile and the velocity profile of waxy crude oil through the pipeline. Knowledge of the rheological computational technique is required to better understand the flow behavior and predict the flow profile in a crude oil pipeline. From these profile parameters, the material and the emulsion best suited for crude oil transportation can be predicted. The rheological computational fluid dynamic technique is a fast method for designing the flow profile in a crude oil pipeline with the help of computational fluid dynamics and rheological modeling. With this technique, the effect of fluid properties, including the shear rate range with temperature variation, degree of viscosity, elastic modulus and viscous modulus, was evaluated under different conditions in a transport pipeline. In this paper, two crude oil samples were used, as well as a prepared emulsion with natural and synthetic additives at concentrations ranging from 1,000 ppm to 3,000 ppm. The rheological properties were then evaluated over a temperature range of 25 to 60 °C, and the additive best suited for the transportation of crude oil was determined. A commercial computational fluid dynamics (CFD) code was used to generate the flow, velocity and viscosity profiles of the emulsions for flow behavior analysis in a crude oil transportation pipeline. This rheological CFD design can be further applied to the development of pipeline designs in the future. Keywords: surfactant, natural, crude oil, rheology, CFD, viscosity
Procedia PDF Downloads 455
1447 Impact and Risk Assessment of Climate Change on Water Quality: A Study in the Errer River Basin, Taiwan
Authors: Hsin-Chih Lai, Yung-Lung Lee, Yun-Yao Chi, Ching-Yi Horng, Pei-Chih Wu, Hsien-Chang Wang
Abstract:
Taiwan, a climatically challenged island, has always been keen on the issue of water resource management due to its limitations in water storage. Since water resource management has been the focal point of many adaptations to climate change, there has been a lack of attention to another issue: water quality. This study chooses the Errer River Basin as the experimental focus for water quality in Taiwan. With the Errer River Basin being one of the most polluted rivers in Taiwan, this study observes the effects of climate change on the river over a period of time. Taiwan is also targeted by multiple typhoons every year; the heavy rainfall and strong winds carry pollution to different river segments and into the ocean. This study aims to create an impact and risk assessment for the Errer River Basin, to show the connection from climate change to potential extreme events, which in turn could influence water quality and ultimately human health. Using dynamic downscaling, this study narrows the information from a global scale to a resolution of 1 km x 1 km. Then, through interpolation, the resolution is further refined to 200 m x 200 m to analyze past, present, and future extreme events. According to different climate change scenarios, this study designs an assessment index for the vulnerability of the Errer River Basin. Through this index, Errer River inhabitants can access advice on adaptation to climate change and act accordingly. Keywords: climate change, adaptation, water quality, risk assessment
Procedia PDF Downloads 352
1446 Optimization of Bifurcation Performance on Pneumatic Branched Networks in next Generation Soft Robots
Authors: Van-Thanh Ho, Hyoungsoon Lee, Jaiyoung Ryu
Abstract:
Efficient pressure distribution within soft robotic systems, specifically to the pneumatic artificial muscle (PAM) regions, is essential to minimize energy consumption. This optimization involves adjusting reservoir pressure, pipe diameter, and branching network layout to reduce flow speed and pressure drop while enhancing flow efficiency. The outcome of this optimization is a lightweight power source and reduced mechanical impedance, enabling extended wear and movement. To achieve this, a branching network system was created by combining pipe components and intricate cross-sectional area variations, employing the principle of minimal work based on a complete virtual human exosuit. The results indicate that modifying the cross-sectional area of the branching network, gradually decreasing it, reduces velocity and enhances momentum compensation, preventing flow disturbances at separation regions. These optimized designs achieve uniform velocity distribution (uniformity index > 94%) prior to entering the connection pipe, with a pressure drop of less than 5%. The design must also consider the length-to-diameter ratio for fluid dynamic performance and production cost. This approach can be utilized to create a comprehensive PAM system, integrating well-designed tube networks and complex pneumatic models.Keywords: pneumatic artificial muscles, pipe networks, pressure drop, compressible turbulent flow, uniformity flow, murray's law
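Murray's law, cited in the keywords above, gives a quick first-pass check on how a parent supply pipe should branch before the full compressible-flow optimization is run. The snippet below is an illustrative sketch, not taken from the paper: the branch count, parent diameter, flow rate and lengths are assumed, and laminar Hagen-Poiseuille losses stand in for the turbulent model used by the authors.

```python
import math

def murray_daughter_diameter(parent_d: float, n_branches: int) -> float:
    """Murray's law: d_parent^3 = sum of d_daughter^3 for equal daughters."""
    return parent_d / n_branches ** (1.0 / 3.0)

def hagen_poiseuille_dp(flow_m3s: float, length_m: float, diameter_m: float,
                        mu_pa_s: float = 1.8e-5) -> float:
    """Laminar pressure drop (Pa); a simplification of the turbulent case."""
    return 128.0 * mu_pa_s * length_m * flow_m3s / (math.pi * diameter_m ** 4)

# Assumed values for illustration only
parent_d = 0.010          # 10 mm supply pipe
total_flow = 2.0e-4       # m^3/s of air delivered to the PAM region
branches = 2

d_daughter = murray_daughter_diameter(parent_d, branches)
dp_parent = hagen_poiseuille_dp(total_flow, 0.5, parent_d)
dp_daughter = hagen_poiseuille_dp(total_flow / branches, 0.3, d_daughter)

print(f"daughter diameter: {d_daughter * 1e3:.2f} mm")
print(f"pressure drop parent/daughter: {dp_parent:.1f} / {dp_daughter:.1f} Pa")
```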
Procedia PDF Downloads 84
1445 Haptic Robotic Glove for Tele-Exploration of Explosive Devices
Authors: Gizem Derya Demir, Ilayda Yankilic, Daglar Karamuftuoglu, Dante Dorantes
Abstract:
Nowadays, terror attacks are, unfortunately, a more common threat around the world. Therefore, safety measures have become much more essential. An alternative way of providing safety and saving human lives is to use robots for tasks such as the disassembly and disposal of bombs. In this article, remote exploration and manipulation of potential explosive devices from a safe distance is addressed by designing a novel, simple and ergonomic haptic robotic glove. SolidWorks® Computer-Aided Design, computerized dynamic simulation, and MATLAB® kinematic and static analysis were used for the haptic robotic glove and finger design. Angle control of the servo motors was implemented using ARDUINO® IDE code on a Makeblock® MegaPi control card. Simple grasping dexterity solutions for the fingers were obtained using one linear soft sensor and one angle sensor for each finger, and six servo motors are used in total to remotely control a slave multi-tooled robotic hand. This project is still ongoing, and current results are presented. Future research steps are also presented. Keywords: dexterity, exoskeleton, haptics, position control, robotic hand, teleoperation
Procedia PDF Downloads 178
1444 Information Overload, Information Literacy and Use of Technology by Students
Authors: Elena Krelja Kurelović, Jasminka Tomljanović, Vlatka Davidović
Abstract:
The development of web technologies and mobile devices makes creating, accessing, using and sharing information, or communicating with each other, simpler every day. However, while the amount of information is constantly increasing, it is becoming harder to effectively organize and find quality information despite the availability of web search engines, filtering and indexing tools. Although digital technologies have an overall positive impact on students’ lives, frequent use of these technologies and of digital media enriched with dynamic hypertext and hypermedia content, as well as multitasking and distractions caused by notifications, calls or messages, can decrease the attention span and make thinking, memorizing and learning more difficult, which can lead to stress and mental exhaustion. This is referred to as “information overload”, “information glut” or “information anxiety”. The objective of this study is to determine whether students show signs of information overload and to identify the possible predictors. Research was conducted using a questionnaire developed for the purpose of this study. The results show that students frequently use technology (computers, gadgets and digital media), while they show a moderate level of information literacy. They have sometimes experienced symptoms of information overload. According to the statistical analysis, a higher frequency of technology use and a lower level of information literacy are correlated with greater information overload. The multiple regression analysis confirmed that the combination of these two independent variables has statistically significant predictive capacity for information overload. Therefore, information science teachers should pay attention to improving the level of students’ information literacy and educate them about the risks of excessive technology use. Keywords: information overload, computers, mobile devices, digital media, information literacy, students
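As an illustration of the kind of multiple regression described above, the sketch below fits an ordinary least-squares model with two predictors (frequency of technology use and information literacy score) on synthetic data; the variable names, scales and coefficients are assumed, not taken from the study's questionnaire.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic questionnaire scores (assumed 1-5 scales, for illustration only)
tech_use = rng.uniform(1, 5, n)          # 1 = rarely ... 5 = very frequently
info_literacy = rng.uniform(1, 5, n)     # 1 = low ... 5 = high
overload = 1.0 + 0.8 * tech_use - 0.6 * info_literacy + rng.normal(0, 0.5, n)

# Ordinary least squares: overload ~ b0 + b1*tech_use + b2*info_literacy
X = np.column_stack([np.ones(n), tech_use, info_literacy])
beta, *_ = np.linalg.lstsq(X, overload, rcond=None)

pred = X @ beta
r2 = 1 - np.sum((overload - pred) ** 2) / np.sum((overload - overload.mean()) ** 2)
print("intercept, b_tech_use, b_info_literacy:", np.round(beta, 3))
print("R^2:", round(r2, 3))
```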
Procedia PDF Downloads 279
1443 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists for the first time to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC) due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates the indispensability of HPC in meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC that provides sufficient capability for evaluating or solving more limited but meaningful instances. The article also indicates solutions to optimization problems and the benefits of Big Data for computational biology, and it illustrates the current state of the art and the future generation of HPC computing with Big Data in biology. Keywords: high performance, big data, parallel computation, molecular data, computational biology
Procedia PDF Downloads 364
1442 4D Modelling of Low Visibility Underwater Archaeological Excavations Using Multi-Source Photogrammetry in the Bulgarian Black Sea
Authors: Rodrigo Pacheco-Ruiz, Jonathan Adams, Felix Pedrotti
Abstract:
This paper introduces the applicability of underwater photogrammetric survey within challenging conditions as the main tool to enhance and enrich the process of documenting archaeological excavation through the creation of 4D models. Photogrammetry was being attempted on underwater archaeological sites at least as early as the 1970s, and today the production of traditional 3D models is becoming a common practice within the discipline. Underwater photogrammetry is more often implemented to record exposed underwater archaeological remains and less so as a dynamic interpretative tool. Therefore, it tends to be applied in bright environments and when underwater visibility is > 1 m, reducing its implementation on most submerged archaeological sites in more turbid conditions. Recent years have seen significant development of better digital photographic sensors and the improvement of optical technology, ideal for darker environments. Such developments, in tandem with powerful processing computing systems, have allowed underwater photogrammetry to be used by this research as a standard recording and interpretative tool. Using multi-source photogrammetry (five GoPro Hero5 Black cameras), this paper presents the accumulation of daily (4D) underwater surveys carried out in the Early Bronze Age (3,300 BC) to Late Ottoman (17th century AD) archaeological site of Ropotamo in the Bulgarian Black Sea under challenging conditions (< 0.5 m visibility). It proves that underwater photogrammetry can and should be used as one of the main recording methods even in low light and poor underwater conditions as a way to better understand the complexity of the underwater archaeological record. Keywords: 4D modelling, Black Sea Maritime Archaeology Project, multi-source photogrammetry, low visibility underwater survey
Procedia PDF Downloads 236
1441 Co-Alignment of Comfort and Energy Saving Objectives for U.S. Office Buildings and Restaurants
Authors: Lourdes Gutierrez, Eric Williams
Abstract:
Post-occupancy research shows that only 11% of commercial buildings met the ASHRAE thermal comfort standard. Many buildings are too warm in winter and/or too cool in summer, wasting energy and not providing comfort. In this paper, potential energy savings in U.S. offices and restaurants if thermostat settings are calculated according the updated ASHRAE 55-2013 comfort model that accounts for outdoor temperature and clothing choice for different climate zones. eQUEST building models are calibrated to reproduce aggregate energy consumption as reported in the U.S. Commercial Building Energy Consumption Survey. Changes in energy consumption due to the new settings are analyzed for 14 cities in different climate zones and then the results are extrapolated to estimate potential national savings. It is found that, depending on the climate zone, each degree increase in the summer saves 0.6 to 1.0% of total building electricity consumption. Each degree the winter setting is lowered saves 1.2% to 8.7% of total building natural gas consumption. With new thermostat settings, national savings are 2.5% of the total consumed in all office buildings and restaurants, summing up to national savings of 69.6 million GJ annually, comparable to all 2015 total solar PV generation in US. The goals of improved comfort and energy/economic savings are thus co-aligned, raising the importance of thermostat management as an energy efficiency strategy.Keywords: energy savings quantifications, commercial building stocks, dynamic clothing insulation model, operation-focused interventions, energy management, thermal comfort, thermostat settings
Procedia PDF Downloads 302
1440 Change Detection Analysis on Support Vector Machine Classifier of Land Use and Land Cover Changes: Case Study on Yangon
Authors: Khin Mar Yee, Mu Mu Than, Kyi Lint, Aye Aye Oo, Chan Mya Hmway, Khin Zar Chi Winn
Abstract:
The dynamic changes of Land Use and Land Cover (LULC) in Yangon have generally resulted in the improvement of human welfare and economic development over the last twenty years. Mapping LULC is crucially important for the sustainable development of the environment. However, it is difficult to obtain exact data on how environmental factors influence the LULC situation at various scales because the natural environment is composed of non-homogeneous surface features, so the features in the satellite data also contain mixed pixels. The main objective of this study is the calculation of accuracy based on change detection of LULC changes by Support Vector Machines (SVMs). For this research work, the main data were satellite images from 1996, 2006 and 2015. Change detection statistics were computed to compile a detailed tabulation of changes between two classification images, and the Support Vector Machines (SVMs) process was applied with a soft approach at the allocation as well as the testing stage to achieve higher accuracy. The results of this paper showed that vegetation and cultivated area decreased (by an average total of 29% from 1996 to 2015) because of conversion to built-up area, which more than doubled (average total 30% from 1996 to 2015). The error matrix and confidence limits led to the validation of the result for LULC mapping. Keywords: land use and land cover change, change detection, image processing, support vector machines
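A minimal sketch of SVM-based per-pixel classification followed by a change-detection cross-tabulation, in the spirit of the workflow above; the band values, class labels and training samples are synthetic stand-ins, not the Yangon imagery.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
classes = ["vegetation", "cultivated", "built-up", "water"]

def synthetic_pixels(n, shift=0.0):
    """Fake 3-band reflectances per class; 'shift' mimics change between dates."""
    X, y = [], []
    for k, _ in enumerate(classes):
        centre = np.array([0.2 * k, 0.5 - 0.1 * k, 0.3 + 0.05 * k]) + shift
        X.append(centre + 0.03 * rng.standard_normal((n, 3)))
        y.append(np.full(n, k))
    return np.vstack(X), np.concatenate(y)

X_train, y_train = synthetic_pixels(200)
svm = SVC(kernel="rbf", C=10.0, probability=True).fit(X_train, y_train)  # soft outputs

X_1996, _ = synthetic_pixels(500)
X_2015, _ = synthetic_pixels(500, shift=0.02)   # slight spectral drift between dates
map_1996 = svm.predict(X_1996)
map_2015 = svm.predict(X_2015)

# Change-detection statistics: counts of "from class i (1996) to class j (2015)"
change = np.zeros((len(classes), len(classes)), dtype=int)
for a, b in zip(map_1996, map_2015):
    change[a, b] += 1
print(change)
```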
Procedia PDF Downloads 139
1439 Modeling Battery Degradation for Electric Buses: Assessment of Lifespan Reduction from In-Depot Charging
Authors: Anaissia Franca, Julian Fernandez, Curran Crawford, Ned Djilali
Abstract:
A methodology to estimate the state-of-charge (SOC) of battery electric buses, including degradation effects, for a given driving cycle is presented to support long-term techno-economic analysis integrating electric buses and charging infrastructure. The degradation mechanisms, characterized by both capacity and power fade with time, have been modeled using an electrochemical model for Li-ion batteries. Iterative changes in the negative electrode film resistance and the decrease in available lithium as a function of utilization are simulated for every cycle. The cycles are formulated to follow typical transit bus driving patterns. The power and capacity decay resulting from the degradation model are introduced as inputs to a longitudinal chassis dynamics analysis that calculates the power consumption of the bus for a given driving cycle to find the state-of-charge of the battery as a function of time. The method is applied to an in-depot charging scenario, for which the bus is charged exclusively at the depot, overnight and to its full capacity. This scenario is run both with and without degradation effects over time to illustrate the significant impact of degradation mechanisms on bus performance when doing feasibility studies for a fleet of electric buses. The impact of battery degradation on battery lifetime is also assessed. The modeling tool can be further used to optimize component sizing and charging locations for electric bus deployment projects. Keywords: battery electric bus, E-bus, in-depot charging, lithium-ion battery, battery degradation, capacity fade, power fade, electric vehicle, SEI, electrochemical models
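The coupling of capacity fade with a driving-cycle SOC calculation can be sketched as below; the pack size, fade rate and power-demand profile are assumed placeholders rather than outputs of the electrochemical model used in the paper.

```python
import numpy as np

nominal_capacity_kwh = 350.0     # assumed pack size for a transit bus
capacity_fade_per_cycle = 2e-4   # assumed fractional capacity loss per daily cycle

def soc_over_cycle(power_kw, dt_h, capacity_kwh):
    """Integrate power demand over a driving cycle, starting from 100% SOC."""
    energy_used = np.cumsum(power_kw) * dt_h
    return 1.0 - energy_used / capacity_kwh

# A crude one-day power profile (kW), sampled every 6 minutes (assumed shape)
t = np.arange(0, 12, 0.1)                        # 12 h of service
power_kw = 20 + 10 * np.sin(2 * np.pi * t / 4)   # demand oscillating with the route

for day in (1, 365, 2 * 365):                    # after 1 day, 1 year, 2 years
    capacity = nominal_capacity_kwh * (1 - capacity_fade_per_cycle) ** day
    soc = soc_over_cycle(power_kw, dt_h=0.1, capacity_kwh=capacity)
    print(f"day {day:4d}: end-of-day SOC = {soc[-1] * 100:5.1f} %")
```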
Procedia PDF Downloads 325
1438 Factors Affecting the Success of Premarital Screening Service in Middle Eastern Islamic Countries
Authors: Wafa Al Jabri
Abstract:
Background: In Middle Eastern Islamic Countries (MEICs), there is a high prevalence of genetic blood disorders (GBDs), particularly sickle cell disease and thalassemia. GBDs are considered a major public health concern, especially with the increase in affected populations along with the associated psychological, social, and financial costs of management. Premarital screening services (PSS) are available that aim to identify asymptomatic carriers of GBDs and provide genetic counseling to couples in order to reduce the prevalence of these diseases; yet the success rate of PSS is very low due to religious and socio-cultural concerns. Purpose: This paper aims to highlight the factors that affect the success of PSS in MEICs. Methods: A literature review of articles located in CINAHL, PubMed, SCOPUS, and MEDLINE was carried out using the following terms: “premarital screening,” “success,” “effectiveness,” and “genetic blood disorders.” Second, a hand search of the reference lists and Google searches were conducted to find studies that did not appear in the primary database searches. Only studies conducted in MEICs and published in the last five years were included. Studies that were not published in English were excluded. Results: Fourteen articles were included in the review. The results showed that PSS in most of the MEICs was successful in achieving its objective of identifying high-risk marriages; however, the service failed to meet its ultimate goal of reducing the prevalence of GBDs. Various factors seem to hinder the success of PSS, including poor public awareness, late timing of the screening, culture and social stigma, religious beliefs, availability of prenatal diagnosis and therapeutic abortion, emotional factors, and availability of genetic counseling services. However, poor public awareness, late timing of the screening, and unavailability of adequate counseling services were the most common barriers identified. Conclusion: Overcoming the identified barriers by providing effective health education programs, offering the screening test to young adults at an earlier stage, and tailoring the genetic counseling would be crucial steps towards providing a framework for effective PSS in MEICs. Keywords: premarital screening, success, effectiveness, and genetic blood disorders
Procedia PDF Downloads 99
1437 Hydrodynamic Modeling of the Hydraulic Threshold El Haouareb
Authors: Sebai Amal, Massuel Sylvain
Abstract:
Groundwater is the key element in the development of most semi-arid areas, where water resources are increasingly scarce due to irregular precipitation on the one hand and increasing demand on the other. This is the case for the Merguellil watershed in central Tunisia, the object of the present study, which focuses on the implementation of a hydrodynamic model of underground flows to understand the recharge processes of the Kairouan plain groundwater from the boundary aquifers through the hydraulic threshold of El Haouareb. The construction of a conceptual 3D geological model with the HydroGeoBuilder software has led to a definition of the aquifer geometry in the studied area, thanks to data acquired from the analysis of geological sections of boreholes and piezometers crossing the formations partially or completely. Overall analyses of the piezometric records of the different piezometers located at the dam indicate that the influence of the dam is felt especially in the carbonate aquifer, which confirms that the dynamics of this aquifer are highly correlated with the dam’s dynamics. Groundwater maps for high-water and low-water dam conditions show a flow that moves towards the threshold of El Haouareb, discharging the waters of Ain El Beidha towards the plain of Kairouan. Steady-state hydrodynamic modeling with the FEFLOW 5.2 software simulates the hydraulic threshold at the El Haouareb dam in a satisfactory manner. However, the sensitivity study of the different parameters shows equivalence problems and the need to calibrate the permeability of the limestones. This work could be improved by refining the steady-state calibration and amending the representation of the limestones in the model. Keywords: hydrodynamic modeling, lithological modeling, hydraulic, semi-arid, Merguellil, central Tunisia
Procedia PDF Downloads 764
1436 Hysteresis Modeling in Iron-Dominated Magnets Based on a Deep Neural Network Approach
Authors: Maria Amodeo, Pasquale Arpaia, Marco Buzio, Vincenzo Di Capua, Francesco Donnarumma
Abstract:
Different deep neural network architectures have been compared and tested to predict magnetic hysteresis in the context of pulsed electromagnets for experimental physics applications. Modelling quasi-static or dynamic major and especially minor hysteresis loops is one of the most challenging topics for computational magnetism. Recent attempts at mathematical prediction in this context using Preisach models could not attain better than percent-level accuracy. Hence, this work explores neural network approaches and shows that the architecture that best fits the measured magnetic field behaviour, including the effects of hysteresis and eddy currents, is the nonlinear autoregressive exogenous neural network (NARX) model. This architecture aims to achieve a relative RMSE of the order of a few 100 ppm for complex magnetic field cycling, including arbitrary sequences of pseudo-random high field and low field cycles. The NARX-based architecture is compared with the state-of-the-art, showing better performance than the classical operator-based and differential models, and is tested on a reference quadrupole magnetic lens used for CERN particle beams, chosen as a case study. The training and test datasets are a representative example of real-world magnet operation; this makes the good result obtained very promising for future applications in this context.Keywords: deep neural network, magnetic modelling, measurement and empirical software engineering, NARX
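A toy NARX-style regression on a synthetic hysteretic signal can be put together as below, using lagged inputs and lagged outputs as exogenous/autoregressive features and a small multilayer perceptron as the nonlinear map. The lag depths, network size and the surrogate "current vs. field" data are all assumed; the actual study uses dedicated recurrent training on measured magnet data rather than this one-shot fit.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Surrogate excitation (current) and a crude memory-bearing "field" response
t = np.linspace(0, 20, 2000)
current = np.sin(t) + 0.3 * np.sin(3.1 * t)
field = np.zeros_like(current)
for k in range(1, len(t)):
    field[k] = 0.98 * field[k - 1] + 0.05 * np.tanh(2 * current[k])  # fake lagged response

def narx_features(u, y, nu=3, ny=3):
    """Stack [u_k, ..., u_{k-nu+1}, y_{k-1}, ..., y_{k-ny}] for each sample."""
    rows = []
    for k in range(max(nu, ny), len(u)):
        rows.append(np.r_[u[k - nu + 1:k + 1], y[k - ny:k]])
    return np.array(rows)

X = narx_features(current, field)
y = field[3:]                          # targets start at k = max(nu, ny)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

pred = model.predict(X)
rel_rmse = np.sqrt(np.mean((pred - y) ** 2)) / (field.max() - field.min())
print(f"relative RMSE on training data: {rel_rmse:.2e}")
```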
Procedia PDF Downloads 130
1435 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases
Authors: Mohammad A. Bani-Khaled
Abstract:
In this work, we use the Discrete Proper Orthogonal Decomposition transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical databases obtained from finite element simulations. The outcomes of this work will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams have widespread usage in modern engineering applications, both in large-scale structures (aeronautical structures) and in nano-structures (nanotubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to involve computational mechanics to numerically simulate the dynamics. In using numerical computational techniques, it is not necessary to oversimplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space. These numerical databases contain information on the properties of the coupled dynamics. In order to extract the system dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The time Proper Orthogonal Decomposition transform is a powerful tool for processing databases of the dynamics. It will be used to study the coupled dynamics of basic thin-walled structures. These structures are ideal to form a basis for a systematic study of coupled dynamics in structures of complex geometry. Keywords: coupled dynamics, geometric complexity, proper orthogonal decomposition (POD), thin walled beams
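Snapshot POD of a simulation database reduces, in essence, to an SVD of the mean-subtracted snapshot matrix; the sketch below illustrates this on a synthetic two-mode field, with the grid size and snapshot count assumed rather than taken from the finite element models described above.

```python
import numpy as np

rng = np.random.default_rng(3)
n_points, n_snapshots = 400, 120          # assumed spatial DOFs and time samples

# Synthetic "coupled" response: two spatial modes with distinct time dynamics
x = np.linspace(0, 1, n_points)
t = np.linspace(0, 10, n_snapshots)
phi1, phi2 = np.sin(np.pi * x), np.sin(2 * np.pi * x)
snapshots = (np.outer(phi1, np.cos(2 * t)) +
             0.3 * np.outer(phi2, np.sin(5 * t)) +
             0.01 * rng.standard_normal((n_points, n_snapshots)))

# Snapshot POD: subtract the temporal mean, then take the SVD
mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

energy = s ** 2 / np.sum(s ** 2)
print("energy captured by first 3 POD modes:", np.round(energy[:3], 4))
# U[:, :2] are the dominant spatial modes; s[:2, None] * Vt[:2] their time coefficients
```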
Procedia PDF Downloads 418
1434 Assessing the Feasibility of Italian Hydrogen Targets with the Open-Source Energy System Optimization Model TEMOA - Italy
Authors: Alessandro Balbo, Gianvito Colucci, Matteo Nicoli, Laura Savoldi
Abstract:
Hydrogen is expected to become a game changer in the energy transition, especially by enabling sector coupling possibilities and the decarbonization of hard-to-abate end-uses. The Italian National Recovery and Resilience Plan identifies hydrogen as one of the key elements of the ecological transition to meet international decarbonization objectives, also including it in several pilot projects for early development in Italy. This matches the European energy strategy, which aims to make hydrogen a leading energy carrier of the future, setting ambitious goals to be accomplished by 2030. The huge efforts needed to achieve the announced targets require a careful investigation of their feasibility in terms of economic expenditure and technical aspects. In order to quantitatively assess the hydrogen potential within the Italian context and the feasibility of the planned investments and projects, this work uses the TEMOA-Italy energy system model to study pathways to meet the strict objectives cited above. The possible hydrogen development has been studied on both the supply side and the demand side of the energy system, also including storage options and distribution chains. The assessment covers alternative hydrogen production technologies competing in the market, reflecting the several possible investments outlined by the Italian National Recovery and Resilience Plan to boost the development and spread of this infrastructure, including the sector coupling potential with natural gas through the currently existing infrastructure and CO2 capture for the production of synfuels. On the other hand, the hydrogen end-use phase covers a wide range of consumption alternatives, from fuel-cell vehicles, for which both road and non-road transport categories are considered, to steel and chemical industry uses and cogeneration for residential and commercial buildings. The model includes both high- and low-TRL technologies in order to provide a consistent outcome for the future decades as it does for the present day, and since it is developed through the use of an open-source code instance and database, transparency and accessibility are fully granted. Keywords: decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA
Procedia PDF Downloads 101
1433 Modification of the Risk for Incident Cancer with Changes in the Metabolic Syndrome Status: A Prospective Cohort Study in Taiwan
Authors: Yung-Feng Yen, Yun-Ju Lai
Abstract:
Background: Metabolic syndrome (MetS) is reversible; however, the effect of changes in MetS status on the risk of incident cancer has not been extensively studied. We aimed to investigate the effects of changes in MetS status on incident cancer risk. Methods: This prospective, longitudinal study used data from Taiwan’s MJ cohort of 157,915 adults recruited from 2002–2016 who had repeated MetS measurements 5.2 (±3.5) years apart and were followed up for the new onset of cancer over 8.2 (±4.5) years. A new diagnosis of incident cancer in study individuals was confirmed by their pathohistological reports. The participants’ MetS status included MetS-free (n=119,331), MetS-developed (n=14,272), MetS-recovered (n=7,914), and MetS-persistent (n=16,398). We used the Fine-Gray sub-distribution method, with death as the competing risk, to determine the association between MetS changes and the risk of incident cancer. Results: During the follow-up period, 7,486 individuals had new development of cancer. Compared with the MetS-free group, MetS-persistent individuals had a significantly higher risk of incident cancer (adjusted hazard ratio [aHR], 1.10; 95% confidence interval [CI], 1.03-1.18). Considering the effect of dynamic changes in MetS status on the risk of specific cancer types, MetS persistence was significantly associated with a higher risk of incident colon and rectum, kidney, pancreas, uterus, and thyroid cancer. The risk of kidney, uterus, and thyroid cancer in MetS-recovered individuals was higher than in those who remained MetS but lower than MetS-persistent individuals. Conclusions: Persistent MetS is associated with a higher risk of incident cancer, and recovery from MetS may reduce the risk. The findings of our study suggest that it is imperative for individuals with pre-existing MetS to seek treatment for this condition to reduce the cancer risk.Keywords: metabolic syndrome change, cancer, risk factor, cohort study
Procedia PDF Downloads 78
1432 Financial Markets Integration between Morocco and France: Implications on International Portfolio Diversification
Authors: Abdelmounaim Lahrech, Hajar Bousfiha
Abstract:
This paper examines equity market integration between Morocco and France and its consequent implications for international portfolio diversification. In the absence of stock market linkages, Morocco can act as a diversification destination for European investors, allowing higher returns at a level of risk comparable to that of developed markets. In contrast, this attractiveness is limited if both financial markets show significant linkage. The research empirically measures financial market integration by capturing the conditional correlation between the two markets using the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model. Then, the research uses the Dynamic Conditional Correlation (DCC) model of Engle (2002) to track the correlations. The research findings show that there is no significant increase over the years in the correlation between the Moroccan and the French equity markets, even though France is considered Morocco’s first trading partner. Failing to find evidence of stock index linkage between the two countries, the volatility series of each market were assumed to change over time separately. Yet, the study reveals that despite the important historical and economic linkages between Morocco and France, there is no evidence that the equity markets follow. The small correlations and their stationarity over time show that over the 10 years studied, correlations were fluctuating around a stable mean with no significant change in their level. Different explanations can be attributed to the absence of market linkage between the two equity markets. Keywords: equity market linkage, DCC GARCH, international portfolio diversification, Morocco, France
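For readers unfamiliar with the DCC step, the correlation recursion of Engle (2002) applied to GARCH-standardized returns can be sketched as follows; the returns are synthetic and the DCC parameters (a, b) are fixed by assumption rather than estimated by maximum likelihood as in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 1000
# Synthetic standardized residuals for the two markets (true correlation 0.2)
z = rng.multivariate_normal([0, 0], [[1.0, 0.2], [0.2, 1.0]], size=T)

a, b = 0.03, 0.95                 # assumed DCC parameters (a + b < 1)
Q_bar = np.corrcoef(z.T)          # unconditional correlation target
Q = Q_bar.copy()
dyn_corr = np.empty(T)
dyn_corr[0] = Q_bar[0, 1]

for t in range(1, T):
    eps = z[t - 1][:, None]
    Q = (1 - a - b) * Q_bar + a * (eps @ eps.T) + b * Q
    d = np.sqrt(np.diag(Q))
    R = Q / np.outer(d, d)        # rescale so the diagonal is exactly 1
    dyn_corr[t] = R[0, 1]

print("mean dynamic correlation:", round(dyn_corr.mean(), 3))
print("std of dynamic correlation:", round(dyn_corr.std(), 3))
```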
Procedia PDF Downloads 442
1431 Measuring the Embodied Energy of Construction Materials and Their Associated Cost Through Building Information Modelling
Authors: Ahmad Odeh, Ahmad Jrade
Abstract:
Energy assessment is an evidently significant factor when evaluating the sustainability of structures, especially at the early design stage. Today, design practices revolve around the selection of materials that reduce operational energy and yet meet their disciplinary needs. Operational energy represents a substantial part of the building lifecycle energy usage, but the fact remains that embodied energy is an important aspect unaccounted for in the carbon footprint. At the moment, little or no consideration is given to embodied energy, mainly due to the complexity of calculation and the various factors involved. The equipment used, the fuel needed, and the electricity required for each material vary with location, and thus the embodied energy will differ for each project. Moreover, the method and the technique used in manufacturing, transporting and putting in place will have a significant influence on the materials’ embodied energy. This anomaly has made it difficult to calculate or even benchmark the usage of such energies. This paper presents a model aimed at helping designers select construction materials based on their embodied energy. Moreover, this paper presents a systematic approach that uses an efficient method of calculation and ultimately provides new insight into construction material selection. The model is developed in a BIM environment targeting the quantification of embodied energy for construction materials through the three main stages of their life: manufacturing, transportation and placement. The model contains three major databases, each of which contains a set of the most commonly used construction materials. The first dataset holds information about the energy required to manufacture any type of material, the second includes information about the energy required for transporting the materials, while the third stores information about the energy required by tools and cranes needed to place an item in its intended location. The model provides designers with sets of all available construction materials and their associated embodied energies to use for selection during the design process. Through geospatial data and dimensional material analysis, the model will also be able to automatically calculate the distance between the factories and the construction site. To remain within the sustainability criteria set by LEED, a final database is created and used to calculate the overall construction cost based on RS Means cost data and then automatically recalculate the costs for any modifications. Design criteria including both operational and embodied energies will cause designers to re-evaluate the current material selection for cost, energy, and most importantly sustainability. Keywords: building information modelling, energy, life cycle analysis, sustainability
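The three-stage accounting described above (manufacturing, transportation, placement) reduces to a simple summation once the unit factors are known; the sketch below uses invented embodied-energy coefficients, masses and distances purely to show the structure of such a calculation, not values from the model's databases.

```python
# Illustrative embodied-energy tally; all coefficients below are assumed.
MATERIALS = {
    # MJ/kg to manufacture, MJ/(kg*km) to transport, MJ/kg to place on site
    "concrete": {"manufacture": 1.1, "transport": 0.002, "placement": 0.03},
    "steel":    {"manufacture": 20.0, "transport": 0.002, "placement": 0.05},
}

def embodied_energy(material: str, mass_kg: float, distance_km: float) -> float:
    f = MATERIALS[material]
    return mass_kg * (f["manufacture"] + f["transport"] * distance_km + f["placement"])

bill_of_quantities = [           # (material, mass in kg, factory-to-site distance in km)
    ("concrete", 250_000, 40),
    ("steel", 30_000, 320),
]

total_mj = sum(embodied_energy(m, kg, km) for m, kg, km in bill_of_quantities)
for m, kg, km in bill_of_quantities:
    print(f"{m:8s}: {embodied_energy(m, kg, km) / 1000:10.1f} GJ")
print(f"total   : {total_mj / 1000:10.1f} GJ")
```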
Procedia PDF Downloads 269
1430 Molecular Dynamics Simulation for Vibration Analysis at Nanocomposite Plates
Authors: Babak Safaei, A. M. Fattahi
Abstract:
Polymer/carbon nanotube nanocomposites have a wide range of promising applications due to their enhanced properties. In this work, free vibration analysis of single-walled carbon nanotube-reinforced composite plates is conducted in which carbon nanotubes are embedded in an amorphous polyethylene. The rule of mixture based on various types of plate models, namely classical plate theory (CLPT), first-order shear deformation theory (FSDT), and higher-order shear deformation theory (HSDT), was employed to obtain the fundamental frequencies of the nanocomposite plates. The generalized differential quadrature (GDQ) method was used to discretize the governing differential equations along with the simply supported and clamped boundary conditions. The material properties of the nanocomposite plates were evaluated using molecular dynamics (MD) simulation corresponding to both short-(10,10) SWCNT and long-(10,10) SWCNT composites. Then the results obtained directly from MD simulations were fitted with those calculated by the rule of mixture to extract appropriate values of carbon nanotube efficiency parameters accounting for the scale-dependent material properties. Selected numerical results are presented to address the influences of nanotube volume fraction and edge supports on the value of the fundamental frequency of carbon nanotube-reinforced composite plates corresponding to both long- and short-nanotube composites. Keywords: nanocomposites, molecular dynamics simulation, free vibration, generalized differential quadrature (GDQ) method
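To make the rule-of-mixture step concrete, the sketch below combines an assumed CNT efficiency parameter with placeholder matrix/nanotube properties and feeds the result into the classical thin-plate (CLPT) fundamental-frequency formula for a simply supported plate; none of the property values are the MD-derived ones from the study.

```python
import math

# Assumed constituent properties (placeholders, not the MD-simulation values)
E_m, rho_m, nu = 1.0e9, 950.0, 0.3       # polyethylene matrix: Pa, kg/m^3, Poisson ratio
E_cnt, rho_cnt = 600.0e9, 1400.0         # effective (10,10) SWCNT values
V_cnt, eta = 0.07, 0.14                  # volume fraction and efficiency parameter

# Rule of mixture for the effective in-plane modulus and density
E_eff = eta * V_cnt * E_cnt + (1.0 - V_cnt) * E_m
rho_eff = V_cnt * rho_cnt + (1.0 - V_cnt) * rho_m

# CLPT fundamental frequency of a simply supported a x b x h plate
a, b, h = 0.2, 0.2, 0.002                # plate dimensions in metres (assumed)
D = E_eff * h ** 3 / (12.0 * (1.0 - nu ** 2))
omega_11 = math.pi ** 2 * ((1.0 / a) ** 2 + (1.0 / b) ** 2) * math.sqrt(D / (rho_eff * h))

print(f"effective modulus: {E_eff / 1e9:.2f} GPa")
print(f"fundamental frequency: {omega_11 / (2 * math.pi):.1f} Hz")
```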
Procedia PDF Downloads 329
1429 A Comparative Analysis on Survival in Patients with Node Positive Cutaneous Head and Neck Squamous Cell Carcinoma as per TNM 7th and TNM 8th Editions
Authors: Petr Daniel Edward Kovarik, Malcolm Jackson, Charles Kelly, Rahul Patil, Shahid Iqbal
Abstract:
Introduction: Recognition of the presence of extra capsular spread (ECS) has been a major change in the TNM 8th edition published by the American Joint Committee on Cancer in 2018. Irrespective of the size or number of lymph nodes, the presence of ECS makes N3b disease a stage IV disease. The objective of this retrospective observational study was to conduct a comparative analysis of survival outcomes in patients with lymph node-positive cutaneous head and neck squamous cell carcinoma (CHNSCC) based on their TNM 7th and TNM 8th edition classifications. Materials and Methods: From January 2010 to December 2020, 71 patients with CHNSCC were identified from our centre’s database who were treated with radical surgery and adjuvant radiotherapy. All histopathological reports were reviewed, and comprehensive nodal mapping was performed. The data were collected retrospectively, and survival outcomes were compared using the TNM 7th and 8th editions. Results: The median age of the whole group of 71 patients was 78 years, range 54 – 94 years; 63 were male and 8 female. In total, 2246 lymph nodes were analysed; 195 were positive for cancer. ECS was present in 130 lymph nodes, which led to a change in TNM staging. The details of the N-stage as per the TNM 7th edition were as follows: pN1 = 23, pN2a = 14, pN2b = 32, pN2c = 0, pN3 = 2. After incorporating the TNM 8th edition criterion (presence of ECS), the details of the N-stage were as follows: pN1 = 6, pN2a = 5, pN2b = 3, pN2c = 0, pN3a = 0, pN3b = 57. This showed an increase in overall stage. According to the TNM 7th edition, 23 patients had stage III disease and the remaining 48 patients had stage IV. As per the TNM 8th edition, there were only 6 patients with stage III compared to 65 patients with stage IV. For all patients, 2-year disease-specific survival (DSS) and overall survival (OS) were 70% and 46%. 5-year DSS and OS rates were 66% and 20%, respectively. Comparing the survival between stage III and stage IV of the two cohorts using both the TNM 7th and 8th editions, there is an obviously greater survival difference between the stages if TNM 8th staging is used. However, meaningful statistics were not possible as the majority of patients (n = 65) had stage IV and only 6 patients had stage III in the TNM 8th cohort. Conclusion: Our study provides a comprehensive analysis of lymph node data mapping in this specific patient population. It shows a better differentiation between stage III and stage IV in the TNM 8th edition as compared to the TNM 7th; however, meaningful statistics were not possible due to the imbalance of patients in the sub-cohorts of the groups. Keywords: cutaneous head and neck squamous cell carcinoma, extra capsular spread, neck lymphadenopathy, TNM 7th and 8th editions
Procedia PDF Downloads 107
1428 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation
Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar
Abstract:
The modernization of computer technology and commercial computational fluid dynamics (CFD) simulation has given more detailed results compared to experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. Evaluation of pipeline erosion is a complex phenomenon to solve by numerical arithmetic techniques, whereas CFD simulation is an easy tool for resolving that type of problem. Erosion wear behaviour due to a solid–liquid mixture in the slurry pipeline has been investigated using the commercial CFD code FLUENT. A multi-phase Euler-Lagrange model was adopted to predict solid particle erosion wear in a 22.5° pipe bend for the flow of a bottom ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with the standard k-ε turbulence model and a discrete phase model, and evaluates the erosion wear rate for velocities varying from 2 to 4 m/s. The results show that the velocity of the solid-liquid mixture is found to be the dominant parameter compared to solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to low inertia and the gravitational effect on the solid particulate, which leads to high erosion at the bottom side of the pipeline. Keywords: computational fluid dynamics (CFD), erosion, slurry transportation, k-ε Model
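A common way to post-process such CFD results is an empirical erosion correlation of the form ER = K·F(θ)·Vⁿ evaluated at each wall impact; the sketch below uses an assumed angle function and assumed constants (in the spirit of the DNV/E-CRC family of models), not FLUENT's built-in erosion coefficients.

```python
import numpy as np

def erosion_rate(v_impact, theta_deg, K=2.0e-9, n=2.6):
    """Empirical erosion ratio (kg eroded per kg of impacting particles).
    K, n and the angle function below are assumed illustrative values,
    not the coefficients of a specific published model."""
    theta = np.radians(theta_deg)
    f_theta = np.sin(theta) * (1.0 + 0.5 * np.cos(theta))   # assumed ductile-type shape
    return K * f_theta * v_impact ** n

velocities = np.array([2.0, 3.0, 4.0])      # m/s, matching the study's range
for v in velocities:
    er = erosion_rate(v, theta_deg=22.5)
    print(f"V = {v:.1f} m/s -> erosion ratio ~ {er:.2e} kg/kg")
# Doubling the velocity from 2 to 4 m/s scales the erosion ratio by 2**2.6, about 6.1x
```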
Procedia PDF Downloads 408
1427 Perforation Analysis of the Aluminum Alloy Sheets Subjected to High Rate of Loading and Heated Using Thermal Chamber: Experimental and Numerical Approach
Authors: A. Bendarma, T. Jankowiak, A. Rusinek, T. Lodygowski, M. Klósak, S. Bouslikhane
Abstract:
An analysis of the mechanical characteristics and dynamic behavior of aluminum alloy sheets in perforation tests, based on experimental tests coupled with numerical simulation, is presented. The impact problems (penetration and perforation) of metallic plates have been of interest for a long time. Experimental, analytical as well as numerical studies have been carried out to analyze the perforation process in detail. Based on these approaches, the ballistic properties of the material have been studied. A laser sensor for initial and residual velocities is used during the experiments to obtain the ballistic curve and the ballistic limit. The energy balance is also reported, together with the energy absorbed by the aluminum, including the ballistic curve and ballistic limit. The high-speed camera helps to estimate the failure time and to calculate the impact force. A wide range of initial impact velocities from 40 up to 180 m/s has been covered during the tests. The mass of the conical nose shaped projectile is 28 g, its diameter is 12 mm, and the thickness of the aluminum sheet is equal to 1.0 mm. The ABAQUS/Explicit finite element code has been used to simulate the perforation processes. The ballistic curve obtained numerically was verified experimentally, and the failure patterns are presented using the optimal mesh densities which provide stability of the results. A good agreement between the numerical and experimental results is observed. Keywords: aluminum alloy, ballistic behavior, failure criterion, numerical simulation
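The ballistic curve mentioned above is commonly fitted with a Recht-Ipson/Lambert-type relation, V_r = a(V_iᵖ − V_blᵖ)^(1/p); the sketch below fits that form to invented impact/residual velocity pairs with scipy, since the measured data from the tests are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def recht_ipson(v_impact, a, v_bl, p):
    """Residual velocity model; returns 0 below the ballistic limit v_bl."""
    v = np.atleast_1d(np.asarray(v_impact, dtype=float))
    out = np.zeros_like(v)
    above = v > v_bl
    out[above] = a * (v[above] ** p - v_bl ** p) ** (1.0 / p)
    return out

# Invented test points (m/s) standing in for the measured ballistic data
v_i = np.array([60, 80, 100, 120, 140, 160, 180], dtype=float)
v_r = np.array([22, 59, 84, 106, 129, 150, 171], dtype=float)

popt, _ = curve_fit(recht_ipson, v_i, v_r, p0=[1.0, 50.0, 2.0], maxfev=10000)
a, v_bl, p = popt
print(f"fitted a = {a:.2f}, ballistic limit = {v_bl:.1f} m/s, p = {p:.2f}")
print("predicted V_r at 150 m/s:", round(float(recht_ipson(150.0, *popt)[0]), 1), "m/s")
```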
Procedia PDF Downloads 312
1426 Structural Damage Detection via Incomplete Model Data Using Output Data Only
Authors: Ahmed Noor Al-qayyim, Barlas Özden Çağlayan
Abstract:
Structural failure is caused mainly by damage that often occurs on structures. Many researchers focus on obtaining very efficient tools to detect the damage in structures in the early state. In the past decades, a subject that has received considerable attention in literature is the damage detection as determined by variations in the dynamic characteristics or response of structures. This study presents a new damage identification technique. The technique detects the damage location for the incomplete structure system using output data only. The method indicates the damage based on the free vibration test data by using “Two Points - Condensation (TPC) technique”. This method creates a set of matrices by reducing the structural system to two degrees of freedom systems. The current stiffness matrices are obtained from optimization of the equation of motion using the measured test data. The current stiffness matrices are compared with original (undamaged) stiffness matrices. High percentage changes in matrices’ coefficients lead to the location of the damage. TPC technique is applied to the experimental data of a simply supported steel beam model structure after inducing thickness change in one element. Where two cases are considered, the method detects the damage and determines its location accurately in both cases. In addition, the results illustrate that these changes in stiffness matrix can be a useful tool for continuous monitoring of structural safety using ambient vibration data. Furthermore, its efficiency proves that this technique can also be used for big structures.Keywords: damage detection, optimization, signals processing, structural health monitoring, two points–condensation
Procedia PDF Downloads 365
1425 Building Data Infrastructure for Public Use and Informed Decision Making in Developing Countries-Nigeria
Authors: Busayo Fashoto, Abdulhakeem Shaibu, Justice Agbadu, Samuel Aiyeoribe
Abstract:
Data has gone from just rows and columns to being an infrastructure itself. The traditional medium of data infrastructure has been managed by individuals in different industries and saved on personal work tools, one such tool being the laptop. This hinders data sharing and Sustainable Development Goal (SDG) 9 on infrastructure sustainability across all countries and regions. However, there has been a constant demand for data across different agencies and ministries by investors and decision-makers. The rapid development and adoption of open-source technologies that promote the collection and processing of data in new ways and in ever-increasing volumes are creating new data infrastructure in sectors such as land and health, among others. This paper examines the process of developing data infrastructure and, by extension, a data portal to provide baseline data for sustainable development and decision making in Nigeria. This paper employs the FAIR principles (Findable, Accessible, Interoperable, and Reusable) of data management, using open-source technology tools to develop data portals for public use. eHealth Africa, an organization that uses technology to drive public health interventions in Nigeria, developed a data portal, a typical data infrastructure that serves as a repository for various datasets on administrative boundaries, points of interest, settlements, social infrastructure, amenities, and others. This portal makes it possible for users to have access to datasets of interest at any point in time at no cost. The skeletal infrastructure of this data portal encompasses the use of open-source technologies such as the Postgres database, GeoServer, GeoNetwork, and CKAN. These tools made the infrastructure sustainable, thus promoting the achievement of SDG 9 (Industry, Innovation, and Infrastructure). As of 6th August 2021, accounts had been created for a wide cross-section of 8,192 users, 2,262 datasets had been downloaded, and 817 maps had been created from the platform. This paper shows the use of rapid development and adoption of technologies that facilitate data collection, processing, and publishing in new ways and in ever-increasing volumes. In addition, the paper is explicit on new data infrastructure in sectors such as health, social amenities, and agriculture. Furthermore, this paper reveals the importance of cross-sectional data infrastructures for planning and decision making, which in turn can form a central data repository for sustainable development across developing countries. Keywords: data portal, data infrastructure, open source, sustainability
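Because the portal stack includes CKAN, its datasets can be queried programmatically through CKAN's standard action API; the sketch below shows a package_search call, with the portal URL left as a placeholder since the actual endpoint is not given here.

```python
import requests

# Placeholder base URL; substitute the actual portal address
CKAN_BASE = "https://example-data-portal.org"

def search_datasets(query: str, rows: int = 5):
    """Query a CKAN instance via its standard action API (package_search)."""
    resp = requests.get(
        f"{CKAN_BASE}/api/3/action/package_search",
        params={"q": query, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    payload = resp.json()
    return payload["result"]["results"]  # list of dataset (package) dicts

if __name__ == "__main__":
    for pkg in search_datasets("settlements"):
        print(pkg["name"], "-", pkg.get("title", ""))
```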
Procedia PDF Downloads 98
1424 Comparison of Different Methods of Microorganism's Identification from a Copper Mining in Pará, Brazil
Authors: Louise H. Gracioso, Marcela P.G. Baltazar, Ingrid R. Avanzi, Bruno Karolski, Luciana J. Gimenes, Claudio O. Nascimento, Elen A. Perpetuo
Abstract:
Introduction: Higher copper concentrations exert a selection pressure on organisms such as plants, fungi and bacteria, which allows only the organisms resistant to the contaminated site to survive. This selective pressure keeps only the organisms most resistant to a specific condition and subsequently increases their bioremediation potential. Despite the importance of bacteria for biosphere maintenance, it is estimated that only a small fraction of living microbial species has been described and characterized. Due to developments in molecular biology, tools based on the analysis of 16S ribosomal RNA or another specific gene are creating a new scenario for the characterization and identification of microorganisms in the environment. New methods for the identification of microorganisms have also emerged, such as Biotyper (MALDI/TOF); this mass spectrometry method relies on the recognition of spectral patterns of conserved and characteristic proteins for different microbial species. In view of this, this study aimed to isolate bacteria resistant to the copper present in a copper processing area (Sossego Mine, Canaan, PA) and to identify them with two different methods: a recent one (mass spectrometry) and a conventional one. The aim is to use them for future bioremediation of this mining area. Material and Methods: Samples were collected at fifteen different sites over five periods of time. Microorganisms were isolated from mining wastes by the culture enrichment technique; this procedure was repeated 4 times. The isolates were inoculated into MJS medium containing different concentrations of copper chloride (1 mM, 2.5 mM, 5 mM, 7.5 mM and 10 mM) and incubated on plates for 72 h at 28 ºC. These isolates were subjected to identification by mass spectrometry (Biotyper – MALDI/TOF) and 16S gene sequencing. Results: A total of 105 strains were isolated in this area; bacterial identification by the mass spectrometry method (MALDI/TOF) achieved 74% agreement with the conventional identification method (16S), 31% were unsuccessful in MALDI-TOF, and 2% did not obtain an identification sequence for the 16S. These results show that Biotyper can be a very useful tool in the identification of bacteria isolated from environmental samples, since it has better value for money (cheap and simple sample preparation, and MALDI plates are reusable). Furthermore, this technique is more cost-effective because it saves time and has high performance (the mass spectra are compared to the database and it takes less than 2 minutes per sample). Keywords: copper mining area, bioremediation, microorganisms, identification, MALDI/TOF, RNA 16S
Procedia PDF Downloads 378
1423 Event Related Brain Potentials Evoked by Carmen in Musicians and Dancers
Authors: Hanna Poikonen, Petri Toiviainen, Mari Tervaniemi
Abstract:
Event-related potentials (ERPs) evoked by simple tones in the brain have been extensively studied. However, in reality the music surrounding us is spectrally and temporally complex and dynamic. Thus, the research using natural sounds is crucial in understanding the operation of the brain in its natural environment. Music is an excellent example of natural stimulation, which, in various forms, has always been an essential part of different cultures. In addition to sensory responses, music elicits vast cognitive and emotional processes in the brain. When compared to laymen, professional musicians have stronger ERP responses in processing individual musical features in simple tone sequences, such as changes in pitch, timbre and harmony. Here we show that the ERP responses evoked by rapid changes in individual musical features are more intense in musicians than in laymen, also while listening to long excerpts of the composition Carmen. Interestingly, for professional dancers, the amplitudes of the cognitive P300 response are weaker than for musicians but still stronger than for laymen. Also, the cognitive P300 latencies of musicians are significantly shorter whereas the latencies of laymen are significantly longer. In contrast, sensory N100 do not differ in amplitude or latency between musicians and laymen. These results, acquired from a novel ERP methodology for natural music, suggest that we can take the leap of studying the brain with long pieces of natural music also with the ERP method of electroencephalography (EEG), as has already been made with functional magnetic resonance (fMRI), as these two brain imaging devices complement each other.Keywords: electroencephalography, expertise, musical features, real-life music
Procedia PDF Downloads 484
1422 Three Dimensional Large Eddy Simulation of Blood Flow and Deformation in an Elastic Constricted Artery
Authors: Xi Gu, Guan Heng Yeoh, Victoria Timchenko
Abstract:
In the current work, a three-dimensional geometry of a 75% stenosed blood vessel is analysed. Large eddy simulation (LES) with the help of a dynamic subgrid scale Smagorinsky model is applied to model the turbulent pulsatile flow. The geometry, the transmural pressure and the properties of the blood and the elastic boundary were based on clinical measurement data. For the flexible wall model, a thin solid region is constructed around the 75% stenosed blood vessel. The deformation of this solid region was modelled as a deforming boundary to reduce the computational cost of the solid model. Fluid-structure interaction is realised via a two-way coupling between the blood flow modelled via LES and the deforming vessel. The information of the flow pressure and the wall motion was exchanged continually during the cycle by an arbitrary lagrangian-eulerian method. The boundary condition of current time step depended on previous solutions. The fluctuation of the velocity in the post-stenotic region was analysed in the study. The axial velocity at normalised position Z=0.5 shows a negative value near the vessel wall. The displacement of the elastic boundary was concerned in this study. In particular, the wall displacement at the systole and the diastole were compared. The negative displacement at the stenosis indicates a collapse at the maximum velocity and the deceleration phase.Keywords: Large Eddy Simulation, Fluid Structural Interaction, constricted artery, Computational Fluid Dynamics
Procedia PDF Downloads 293
1421 Power Production Performance of Different Wave Energy Converters in the Southwestern Black Sea
Authors: Ajab G. Majidi, Bilal Bingölbali, Adem Akpınar
Abstract:
This study aims to investigate the amount of energy (economic wave energy potential) that can be obtained from existing wave energy converters in the high-wave-energy-potential region of the Black Sea, and their performance at different depths in the region. The data needed for this purpose were obtained using the calibrated nested layered SWAN wave modeling program version 41.01AB, which was forced with Climate Forecast System Reanalysis (CFSR) winds from 1979 to 2009. The wave dataset at a time interval of 2 hours was accumulated for a sub-grid domain around Karaburun beach in Arnavutkoy, a district of Istanbul. The annual sea state characteristic matrices for the five different depths along a line perpendicular to the coastline were calculated for 31 years. According to the power matrices of different wave energy converter systems and the characteristic matrices for each possible installation depth, the probability distribution tables of the specified mean wave period or wave energy period and significant wave height were calculated. Then, by using the relationship between these distribution tables, according to the present wave climate, the energy that the wave energy converter systems at each depth can produce was determined. Thus, the economically feasible potential of the relevant coastal zone was revealed, and the effect of different depths on energy converter systems is presented. The Oceantic at 50, 75 and 100 m depths and the Oyster at 5 and 25 m depths present the best performance. Within the 31-year period, 1998 was the most dynamic year and 1989 the least. Keywords: annual power production, Black Sea, efficiency, power production performance, wave energy converter
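The annual production estimate described above boils down to weighting a converter's power matrix by the sea-state occurrence table; the sketch below does exactly that for a small invented scatter diagram and power matrix (the real matrices for the converters studied and the Karaburun wave climate are not reproduced).

```python
import numpy as np

# Assumed significant wave height (m) and energy period (s) bin centres
hs_bins = np.array([0.5, 1.0, 1.5, 2.0])
te_bins = np.array([3.0, 4.0, 5.0, 6.0])

# Occurrence table: fraction of the year spent in each (Hs, Te) bin (sums to 1)
occurrence = np.array([
    [0.20, 0.15, 0.05, 0.01],
    [0.10, 0.15, 0.08, 0.02],
    [0.02, 0.08, 0.06, 0.02],
    [0.01, 0.02, 0.02, 0.01],
])

# Converter power matrix in kW for the same bins (invented numbers)
power_matrix = np.array([
    [2,   5,   8,  10],
    [8,  18,  28,  35],
    [18, 40,  60,  75],
    [30, 70, 110, 140],
])

hours_per_year = 8766.0
annual_energy_mwh = np.sum(occurrence * power_matrix) * hours_per_year / 1000.0
capacity_factor = annual_energy_mwh * 1000 / (power_matrix.max() * hours_per_year)

print(f"annual energy production ~ {annual_energy_mwh:.0f} MWh")
print(f"capacity factor ~ {capacity_factor:.2f}")
```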
Procedia PDF Downloads 133
1420 Process Optimization for 2205 Duplex Stainless Steel by Laser Metal Deposition
Authors: Siri Marthe Arbo, Afaf Saai, Sture Sørli, Mette Nedreberg
Abstract:
This work aims to establish a reliable approach for optimizing a Laser Metal Deposition (LMD) process for a critical maritime component, based on the material properties and structural performance required by the maritime industry. The component of interest is a water jet impeller, for which specific requirements for material properties are defined. The developed approach is based on the assessment of the effects of LMD process parameters on microstructure and material performance of standard AM 2205 duplex stainless steel powder. Duplex stainless steel offers attractive properties for maritime applications, combining high strength, enhanced ductility and excellent corrosion resistance due to the specific amounts of ferrite and austenite. These properties are strongly affected by the microstructural characteristics in addition to microstructural defects such as porosity and welding defects, all strongly influenced by the chosen LMD process parameters. In this study, the influence of deposition speed and heat input was evaluated. First, the influences of deposition speed and heat input on the microstructure characteristics, including ferrite/austenite fraction, amount of porosity and welding defects, were evaluated. Then, the achieved mechanical properties were evaluated by standard testing methods, measuring the hardness, tensile strength and elongation, bending force and impact energy. The measured properties were compared to the requirements of the water jet impeller. The results show that the required amounts of ferrite and austenite can be achieved directly by the LMD process without post-weld heat treatments. No intermetallic phases were observed in the material produced by the investigated process parameters. A high deposition speed was found to reduce the ductility due to the formation of welding defects. An increased heat input was associated with reduced strength due to the coarsening of the ferrite/austenite microstructure. The microstructure characterizations and measured mechanical performance demonstrate the great potential of the LMD process and generate a valuable database for the optimization of the LMD process for duplex stainless steels.Keywords: duplex stainless steel, laser metal deposition, process optimization, microstructure, mechanical properties
Procedia PDF Downloads 218
1419 Crashworthiness Optimization of an Automotive Front Bumper in Composite Material
Authors: S. Boria
Abstract:
In recent years, it has become possible to improve the crashworthiness of an automotive body structure from the beginning of the design stage, thanks to the development of specific optimization tools. It is well known how finite element codes can help the designer investigate the crash performance of structures under dynamic impact. Therefore, by coupling nonlinear mathematical programming procedures and statistical techniques with FE simulations, it is possible to optimize the design with a reduced number of analytical evaluations. In engineering applications, many optimization methods which are based on statistical techniques and utilize estimated models, called meta-models, are quickly spreading. A meta-model is an approximation of a detailed simulation model based on a dataset of inputs identified by the design of experiments (DOE); the number of simulations needed to build it depends on the number of variables. Among the various types of meta-modeling techniques, the Kriging method seems to be excellent in accuracy, robustness and efficiency compared to others when applied to crashworthiness optimization. Therefore, such a meta-model was used in this work in order to improve the structural optimization of a composite bumper for a racing car subjected to frontal impact. The specific energy absorption represents the objective function to maximize, and the geometrical parameters, subject to some design constraints, are the design variables. The LS-DYNA code was interfaced with the LS-OPT tool in order to find the optimized solution through the use of a domain reduction strategy. With the use of the Kriging meta-model, the crashworthiness characteristics of the composite bumper were improved. Keywords: composite material, crashworthiness, finite element analysis, optimization
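A Kriging (Gaussian-process) surrogate of the kind used above can be prototyped in a few lines; here the DOE points and the specific-energy-absorption response are synthetic, and scikit-learn's GaussianProcessRegressor stands in for LS-OPT's Kriging implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(5)

def sea_simulator(x):
    """Stand-in for an LS-DYNA crash run: returns a fake specific energy
    absorption (kJ/kg) as a function of two assumed geometric design variables."""
    t, w = x[:, 0], x[:, 1]          # e.g. wall thickness (mm), section width (mm)
    return 30 - (t - 2.2) ** 2 - 0.002 * (w - 80) ** 2 + 0.2 * rng.standard_normal(len(x))

# Small space-filling DOE over assumed variable bounds
doe = np.column_stack([rng.uniform(1.0, 4.0, 20), rng.uniform(50, 120, 20)])
sea = sea_simulator(doe)

surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=0.05, normalize_y=True)
surrogate.fit(doe, sea)

# Exploit the surrogate on a dense grid and pick the predicted optimum
grid = np.column_stack([g.ravel() for g in np.meshgrid(
    np.linspace(1.0, 4.0, 60), np.linspace(50, 120, 60))])
pred, std = surrogate.predict(grid, return_std=True)
best = grid[np.argmax(pred)]
print(f"predicted optimum: thickness ~ {best[0]:.2f} mm, width ~ {best[1]:.1f} mm")
print(f"predicted SEA ~ {pred.max():.1f} kJ/kg (+/- {std[np.argmax(pred)]:.1f})")
```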
Procedia PDF Downloads 256