Search results for: Matlab efficiency simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11286

756 Recommendations to Improve Classification of Grade Crossings in Urban Areas of Mexico

Authors: Javier Alfonso Bonilla-Chávez, Angélica Lozano

Abstract:

In North America, more than 2,000 people die annually in accidents related to railroad tracks. In 2020, collisions at grade crossings were the main cause of deaths related to railway accidents in Mexico. Railway networks constantly interact with motor transport users, cyclists, and pedestrians, mainly at grade crossings, where vulnerability and the risk of accidents are greatest. Usually, accidents at grade crossings are directly related to risky behavior and non-compliance with regulations by motorists, cyclists, and pedestrians, especially in developing countries. Around the world, countries classify these crossings in different ways. In Mexico, according to their dangerousness (high, medium, or low), types A, B, and C have been established, with a different type of audible and visual signaling and gates, as well as horizontal and vertical signaling, recommended for each. This classification is based on a weighting, but regrettably, it is not explained how the weight values were obtained. A review of the variables and the current approach for grade crossing classification is required, since it is inadequate for some crossings. In contrast, North America (USA and Canada) and European countries consider a broader classification, so that attention to each crossing is addressed more precisely and equipment costs are adjusted. Lack of a proper classification could lead to cost overruns in equipment and deficient operation. To exemplify the lack of a good classification, six crossings are studied, three located in rural areas of Mexico and three in Mexico City. These cases show the need to improve the current regulations, improve the existing infrastructure, and implement technological systems, including informative signs with the nomenclature of the crossing involved and a direct telephone line for reporting emergencies. This implementation is unaffordable for most municipal governments. Also, an inventory of the most dangerous grade crossings in urban and rural areas must be compiled. An approach for improving the classification of grade crossings is then suggested. This approach must be based on criteria design, characteristics of adjacent roads or intersections which can influence traffic flow through the crossing, accidents related to motorized and non-motorized vehicles, land use and land management, type of area, and the services and economic activities in the zone where the grade crossing is located. An expanded classification of grade crossings in Mexico could reduce accidents and improve the efficiency of the railroad.
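
To make the kind of weighting-based scoring discussed above concrete, the following minimal sketch assigns a dangerousness class from weighted criterion scores. All criterion names, weights, and thresholds are hypothetical placeholders; as noted, the weight values actually used in the Mexican standard are not documented.

```python
# Illustrative weighted-score classification of a grade crossing.
# Criterion names, weights, and thresholds are hypothetical; the
# Mexican standard does not explain how its weight values were obtained.

def classify_crossing(scores: dict[str, float]) -> str:
    """Return a dangerousness class (A, B, or C) from criterion scores in [0, 1]."""
    weights = {                    # hypothetical weights
        "train_traffic": 0.30,
        "road_traffic": 0.30,
        "accident_history": 0.25,
        "sight_distance": 0.15,
    }
    total = sum(weights[k] * scores[k] for k in weights)
    if total >= 0.66:              # hypothetical thresholds
        return "A"                 # high dangerousness: gates plus audible/visual signals
    if total >= 0.33:
        return "B"                 # medium dangerousness
    return "C"                     # low dangerousness

print(classify_crossing({"train_traffic": 0.8, "road_traffic": 0.7,
                         "accident_history": 0.9, "sight_distance": 0.4}))  # -> A
```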

Keywords: accidents, grade crossing, railroad, traffic safety

Procedia PDF Downloads 106
755 Mathematical Model to Simulate Liquid Metal and Slag Accumulation, Drainage and Heat Transfer in Blast Furnace Hearth

Authors: Hemant Upadhyay, Tarun Kumar Kundu

Abstract:

It is of utmost importance for a blast furnace operator to understand the mechanisms governing liquid flow, accumulation, drainage, and heat transfer between the various phases in the blast furnace hearth for a stable and efficient blast furnace operation. Abnormal drainage behavior may lead to a high liquid build-up in the hearth. Operational problems such as pressurization, low wind intake, and lower material descent rates are normally encountered if the liquid levels in the hearth exceed a critical limit, at which the hearth coke and deadman start to float. Similarly, hot metal temperature is an important parameter to be controlled in BF operation; it should be kept at an optimal level to obtain the desired product quality and a stable BF performance. Direct measurement of the above is not possible due to the hostile conditions in the hearth, with chemically aggressive hot liquids. The objective here is to develop a mathematical model to simulate the variation in hot metal and slag accumulation and temperature during tapping of the blast furnace, based on the computed drainage rate, production rate, mass balance, and heat transfer between metal and slag, metal and solids, and slag and solids, as well as among the various zones of the metal and slag themselves. For modeling purposes, the BF hearth is considered a pressurized vessel filled with solid coke particles. Liquids trickle down into the hearth from the top and accumulate in the voids between the coke particles, which are assumed thermally saturated. A set of generic mass balance equations gives the amount of metal and slag intake into the hearth. A small drainage opening (tap hole) is situated at the bottom of the hearth, and the flow rate of liquids from the tap hole is computed taking into account the amount of both phases accumulated, their levels in the hearth, the pressure from gases in the furnace, and the erosion behavior of the tap hole itself. Heat transfer equations describe the exchange of heat between the various layers of liquid metal and slag, and the heat loss to the cooling system through the refractories. Based on all this information, a dynamic simulation is carried out which provides real-time information on liquid accumulation in the hearth before and during tapping and on the drainage rate and its variation, predicts critical event timings during tapping, and gives the expected tapping temperature of metal and slag at preset time intervals. The model is in use at BF-II of JSPL, India, and its output is regularly cross-checked against actual tapping data, with which it is in good agreement.
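
A minimal sketch of the accumulation/drainage balance described above is given below. The geometry, production rate, and tap-hole law are illustrative placeholders, not the calibrated BF-II model; only the structure (inflow from production, Torricelli-type outflow through the tap hole, storage in the coke-bed voids) follows the description.

```python
# Sketch of the hearth liquid-level mass balance: liquids accumulate in
# the voids of the coke bed and drain through the tap hole once it is
# opened. All parameter values are hypothetical.

import math

AREA = 80.0         # hearth cross-sectional area, m^2 (hypothetical)
POROSITY = 0.3      # void fraction of the coke bed (hypothetical)
PROD_METAL = 0.012  # metal production rate, m^3/s (hypothetical)
CD = 0.8            # tap-hole discharge coefficient (hypothetical)
A_TAP = 0.0025      # tap-hole cross-section, m^2 (hypothetical)
G = 9.81

def drainage_rate(level: float, tap_open: bool) -> float:
    """Torricelli-type outflow driven by the liquid head above the tap hole."""
    if not tap_open or level <= 0.0:
        return 0.0
    return CD * A_TAP * math.sqrt(2.0 * G * level)

level, dt = 0.5, 1.0  # initial metal level (m) and time step (s)
for step in range(7200):          # two hours of operation
    tap_open = step >= 3600       # tapping starts after one hour
    net_flow = PROD_METAL - drainage_rate(level, tap_open)
    level += net_flow * dt / (AREA * POROSITY)  # storage only in the voids

print(f"metal level after 2 h: {level:.2f} m")
```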

Keywords: blast furnace, hearth, deadman, hot metal

Procedia PDF Downloads 183
754 Localization of Radioactive Sources with a Mobile Radiation Detection System Using Profit Functions

Authors: Luís Miguel Cabeça Marques, Alberto Manuel Martinho Vale, José Pedro Miragaia Trancoso Vaz, Ana Sofia Baptista Fernandes, Rui Alexandre de Barros Coito, Tiago Miguel Prates da Costa

Abstract:

The detection and localization of hidden radioactive sources are of significant importance in countering the illicit traffic of Special Nuclear Materials (SNM) and other radioactive sources and materials. Radiation portal monitors are commonly used at airports, seaports, and international land borders for inspecting cargo and vehicles. However, this equipment can be expensive and is not available at all checkpoints. Consequently, the localization of SNM and other radioactive sources often relies on handheld equipment, which can be time-consuming. The current study presents the advantages of real-time analysis of gamma-ray count rate data from a mobile radiation detection system, based on simulated data and field tests. The incorporation of profit functions and decision criteria to optimize the detection system's path significantly enhances the radiation field information and reduces survey time during cargo inspection. For source position estimation, a maximum likelihood estimation algorithm is employed, and confidence intervals are derived using the Fisher information. The study also explores the impact of uncertainties, baselines, and thresholds on the performance of the profit function. The proposed detection system, utilizing a plastic scintillator with silicon photomultiplier sensors, offers several benefits, including cost-effectiveness, high geometric efficiency, compactness, and lightweight design. This versatility allows for seamless integration into any mobile platform, be it air, land, maritime, or hybrid, and it can also serve as a handheld device. Furthermore, the integration of the detection system into drones, particularly multirotors, and its affordability enable the automation of source search and a substantial reduction in survey time, particularly when deploying a fleet of drones. While the primary focus is on inspecting maritime container cargo, the methodologies explored in this research can be applied to the inspection of other infrastructures, such as nuclear facilities or vehicles.
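
The source-position estimation step described above can be sketched as follows: Poisson-distributed count rates from an inverse-square detector response, a maximum likelihood estimate over candidate positions, and a confidence interval from the observed Fisher information. Geometry, source strength, and background are invented for illustration; the paper's detector model and profit functions are not reproduced.

```python
# Hedged sketch of MLE source localization from gamma-ray count rates,
# with a Fisher-information confidence interval. All numbers are
# illustrative placeholders.

import numpy as np

rng = np.random.default_rng(1)
xs_detector = np.linspace(0.0, 10.0, 21)        # detector stops along a 1-D path, m
x_true, strength, background = 4.3, 500.0, 5.0  # hypothetical values

def expected_counts(x_src):
    r2 = (xs_detector - x_src) ** 2 + 1.0  # 1 m standoff avoids the singularity
    return background + strength / r2      # inverse-square response plus baseline

counts = rng.poisson(expected_counts(x_true))

# Maximum likelihood estimate by dense grid search (Poisson log-likelihood).
grid = np.linspace(0.1, 9.9, 1961)
loglik = np.array([np.sum(counts * np.log(expected_counts(x)) - expected_counts(x))
                   for x in grid])
i = int(np.argmax(loglik))
x_hat = grid[i]

# Observed Fisher information from the log-likelihood curvature -> approx. 95% CI.
h = grid[1] - grid[0]
fisher = -(loglik[i + 1] - 2.0 * loglik[i] + loglik[i - 1]) / h**2
sigma = 1.0 / np.sqrt(fisher)
print(f"x_hat = {x_hat:.2f} m, 95% CI = [{x_hat - 1.96*sigma:.2f}, {x_hat + 1.96*sigma:.2f}] m")
```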

Keywords: plastic scintillators, profit functions, path planning, gamma-ray detection, source localization, mobile radiation detection system, security scenario

Procedia PDF Downloads 112
753 Mature Field Rejuvenation Using Hydraulic Fracturing: A Case Study of a Tight Mature Oilfield with the Reveal Simulator

Authors: Amir Gharavi, Mohamed Hassan, Amjad Shah

Abstract:

The main characteristics of unconventional reservoirs include low to ultra-low permeability and low-to-moderate porosity. As a result, hydrocarbon production from these reservoirs requires different extraction technologies than conventional resources. An unconventional reservoir must be stimulated to produce hydrocarbons at an acceptable flow rate in order to recover commercial quantities. Permeability of unconventional reservoirs is mostly below 0.1 mD, and reservoirs with permeability above 0.1 mD are generally considered conventional. The hydrocarbon held in these formations will not naturally move towards producing wells at economic rates without the aid of hydraulic fracturing, which is the only technique for accessing production from these tight reservoirs. Horizontal wells with multi-stage fracturing are the key technique for maximizing stimulated reservoir volume and achieving commercial production. The main objective of this research paper is to investigate development options for a tight mature oilfield, including multistage hydraulic fracturing and fracture spacing, by building reservoir models in the Reveal simulator to model potential development options based on sidetracking the existing vertical well. An existing Petrel geological model was used to build the static parts of these models. A flowing bottom-hole pressure (FBHP) limit of 40 bar was assumed to take into account pump operating limits and to maintain the reservoir pressure above the bubble point. Lateral lengths of 300 m, 600 m, and 900 m were modelled, in conjunction with 4, 6, and 8 frac stages. Simulation results indicate that higher initial recoveries and peak oil rates are obtained with longer well lengths as well as with more fracs and wider spacing. For a 25-year forecast, the ultimate recovery ranges from 0.4% to 2.56% for the 300 m and 1000 m laterals, respectively. The 900 m lateral with 8 fracs at 100 m spacing gave the highest peak rate of 120 m³/day, with the 600 m and 300 m cases giving initial peak rates of 110 m³/day. Similarly, the recovery factor for the 900 m lateral with 8 fracs and 100 m spacing was the highest at 2.65% after 25 years; the corresponding values for the 300 m and 600 m laterals were 2.37% and 2.42%. Therefore, the study suggests that longer laterals with 8 fracs and 100 m spacing provide the optimal recovery, and this design is recommended as the basis for further study.

Keywords: unconventional resource, hydraulic fracturing

Procedia PDF Downloads 297
752 Pressure-Robust Approximation for Rotational Fluid Flow Problems

Authors: Medine Demir, Volker John

Abstract:

Fluid equations in a rotating frame of reference have a broad class of important applications in meteorology and oceanography, especially in the large-scale flows of the ocean and atmosphere, as well as many physical and industrial applications. The Coriolis and centripetal forces, resulting from the rotation of the earth, play a crucial role in such systems. For such applications, it may be required to solve the system in complex three-dimensional geometries. In recent years, the Navier–Stokes equations in a rotating frame have been investigated in a number of papers using the classical inf-sup stable mixed methods, like Taylor–Hood pairs, to contribute to the analysis and the accurate and efficient numerical simulation. Numerical analysis reveals that these classical methods introduce a pressure-dependent contribution in the velocity error bounds that is proportional to some inverse power of the viscosity. Hence, these methods are optimally convergent, but small velocity errors might not be achieved for complicated pressures and small viscosity coefficients. Several approaches have been proposed for improving the pressure-robustness of pairs of finite element spaces. In this contribution, a pressure-robust space discretization of the incompressible Navier–Stokes equations in a rotating frame of reference is considered. The discretization employs divergence-free, $H^1$-conforming mixed finite element methods like Scott–Vogelius pairs. This approach, however, might come with a modification of the meshes, like the use of barycentric-refined grids in the case of Scott–Vogelius pairs. Such a strategy requires the finite element code to have control over the mesh generator, which is not realistic in many engineering applications and might also conflict with the solver for the linear system. An error estimate for the velocity is derived that tracks the dependency of the error bound on the coefficients of the problem, in particular on the angular velocity. Numerical examples illustrate the theoretical results. The idea of pressure-robust methods could be carried over to different types of flow problems, which will be considered in future studies. As another future research direction, to avoid a modification of the mesh, one may use a very simple parameter-dependent modification of the Scott–Vogelius element, the pressure-wired Stokes element, such that the inf-sup constant is independent of nearly-singular vertices.
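
For reference, the model under consideration admits a standard statement; the following is the textbook form of the incompressible Navier–Stokes equations in a rotating frame, with the centripetal contribution absorbed into the pressure (a common convention that may differ in detail from the paper's exact setup):

```latex
\partial_t \boldsymbol{u} - \nu \Delta \boldsymbol{u}
  + (\boldsymbol{u}\cdot\nabla)\boldsymbol{u}
  + 2\,\boldsymbol{\omega}\times\boldsymbol{u} + \nabla p = \boldsymbol{f},
\qquad
\nabla\cdot\boldsymbol{u} = 0,
```

where $\nu$ is the viscosity and $2\,\boldsymbol{\omega}\times\boldsymbol{u}$ the Coriolis force with angular velocity $\boldsymbol{\omega}$; the divergence constraint is exactly what divergence-free Scott–Vogelius discretizations enforce pointwise.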

Keywords: Navier–Stokes equations in a rotating frame of reference, Coriolis force, pressure-robust error estimate, Scott–Vogelius pairs of finite element spaces

Procedia PDF Downloads 61
751 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements

Authors: Alexander Buhr, Klaus Ehrenfried

Abstract:

Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers, it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving-model experiments is lower than at full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head, so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving-model rig. Small rectangular roughness shapes are used to obtain a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high acceleration and deceleration forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been carried out in the moving-model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness, as well as the momentum thickness and the form factor, are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness, and momentum thickness are increased by using larger roughness, especially when applied at a height close to the measuring plane. The roughness elements also cause high fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
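
The integral quantities evaluated along the model follow their standard boundary-layer definitions, stated here for reference (with $u(y)$ the streamwise velocity profile, $U_e$ the edge velocity, and $\delta$ the boundary layer thickness):

```latex
\delta^{*} = \int_{0}^{\delta}\left(1-\frac{u}{U_e}\right)\mathrm{d}y,
\qquad
\theta = \int_{0}^{\delta}\frac{u}{U_e}\left(1-\frac{u}{U_e}\right)\mathrm{d}y,
\qquad
H = \frac{\delta^{*}}{\theta},
```

where $\delta^{*}$ is the displacement thickness, $\theta$ the momentum thickness, and $H$ the form factor whose approach to a constant value serves above as the equilibrium indicator.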

Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements

Procedia PDF Downloads 304
750 Evaluation of Health Services after Emergency Decrees in Turkey

Authors: Sengul Celik, Alper Ketenci

Abstract:

Article 56 of the Turkish Constitution addresses health care: everyone has the right to live in a healthy and balanced environment. It is the duty of the state and of citizens to improve the environment, protect environmental health, and prevent environmental pollution. The state ensures that everyone lives their lives in physical and mental health; it organizes the planning and delivery of health services from a single source in order to realize cooperation by increasing savings and efficiency in human and material resources. The state fulfills this task by utilizing and supervising health and social institutions in the public and private sectors. General health insurance can be established by law for the widespread delivery of health services. Access to health care is one of the basic rights of patients. After the coup attempt in July 2016, the Government of Turkey announced a state of emergency and issued numerous emergency decrees. Under these decrees, many people were dismissed from their jobs and lost some of their basic social rights, and violations have occurred across social life. One of the most common observations is discrimination by the government in the health care system. This study aims to document the violation of human rights in the health care system in Turkey against people placed in a discriminated position by an emergency decree. The study is a case study based on nine interviews with people, or relatives of people, who lost their jobs under an emergency decree in Turkey. No personally identifiable information was obtained, for the safety of the individuals, and no questions regarding the identity of individuals were asked. The interviews were conducted through internet calling applications. The data were analyzed against the requirements of the regular health care system in Turkey. The interviews reveal that these people, or their relatives, lost their right to regular health care. They have to pay extra amounts both for clinical services and for medication. The patients' right to quality medical care without prejudice is violated. It was assessed that people affected by an emergency decree, and their relatives, are discriminated against by the government and deprived of regular medical care and supervision. Although international legal arrangements and the legal responsibilities of the state have been set out in Article 56, they are violated in practice. To prevent such violations, measures should be taken against deprivation in the health care system, especially towards people discriminated against by an emergency decree.

Keywords: emergency decree in Turkey, health care, discriminated people, patients' rights

Procedia PDF Downloads 107
749 Assessment of Microclimate in Abu Dhabi Neighborhoods: On the Utilization of Native Landscape in Enhancing Thermal Comfort

Authors: Maryam Al Mheiri, Khaled Al Awadi

Abstract:

Urban population is continuously increasing worldwide, and the speed at which cities urbanize poses major challenges, particularly in terms of creating sustainable urban environments. Rapid urbanization often leads to negative environmental impacts and changes in urban microclimates. Moreover, when rapid urbanization is paired with limited landscape elements, the effects on human health, due to increased pollution, and on thermal comfort, due to Urban Heat Island effects, are amplified. The Urban Heat Island (UHI) effect describes the increase of temperatures in urban areas in comparison to their rural surroundings and, as we discuss in this paper, it impacts pedestrian comfort, reducing the number of walking trips and the use of public space. It is thus necessary to investigate the quality of outdoor built environments in order to improve the quality of life in cities. The main objective of this paper is to address the morphology of Emirati neighborhoods, setting a quantitative baseline by which to assess and compare the spatial characteristics and microclimate performance of existing typologies in Abu Dhabi. This morphological mapping and analysis will help to understand the built landscape of Emirati neighborhoods in this city, whose form has changed and evolved across different periods. This will eventually help to model the use of different design strategies, such as landscaping, to mitigate UHI effects and enhance outdoor urban comfort. Further, the impact of different native plant types and species in reducing UHI effects and enhancing outdoor urban comfort will be modeled, allowing for the assessment of the impact of increasing landscaped areas in these neighborhoods. This study uses ENVI-met, an analytical, three-dimensional, high-resolution microclimate modeling software. This micro-scale urban climate model will be used to evaluate existing conditions and generate scenarios in different residential areas, with different vegetation surfaces and landscaping, and to examine their impact on surface temperatures during summer and autumn. In parallel to these simulations, field measurements will be included to calibrate the ENVI-met model. This research therefore takes an experimental approach, using simulation software, and a case study strategy for the evaluation of a sample of residential neighborhoods. A comparison of the results of these scenarios constitutes a first step towards making recommendations about what constitutes sustainable landscapes for Abu Dhabi neighborhoods.

Keywords: landscape, microclimate, native plants, sustainable neighborhoods, thermal comfort, urban heat island

Procedia PDF Downloads 309
748 The Use of Information and Communication Technology within and between Emergency Medical Teams during a Disaster: A Qualitative Study

Authors: Badryah Alshehri, Kevin Gormley, Gillian Prue, Karen McCutcheon

Abstract:

In a disaster event, sharing patient information between pre-hospital Emergency Medical Services (EMS) and hospital Emergency Departments (ED) is a complex process during which important information may be altered or lost due to poor communication. The aim of this study was to critically discuss the current evidence base on communication between pre-hospital EMS and ED professionals through the use of Information and Communication Technology (ICT). This study followed a systematic approach: six electronic databases (CINAHL, Medline, Embase, PubMed, Web of Science, and IEEE Xplore Digital Library) were comprehensively searched in January 2018, and a second search was completed in April 2020 to capture more recent publications. The study selection process was undertaken independently by the study authors. Both qualitative and quantitative studies were chosen that focused on factors positively or negatively associated with coordinated communication between pre-hospital EMS and ED teams in a disaster event. These studies were assessed for quality, and the data were analyzed according to the key screening themes which emerged from the literature search. Twenty-two studies were included. Eleven studies employed quantitative methods, seven studies used qualitative methods, and four studies used mixed methods. Four themes emerged on communication between EMTs (pre-hospital EMS and ED staff) in a disaster event using ICT. (1) Disaster preparedness plans and coordination. This theme reported that disaster plans are in place in hospitals and, in some cases, there are interagency agreements with pre-hospital services and relevant stakeholders. However, the findings showed that the disaster plans highlighted in these studies lacked information regarding coordinated communication within and between the pre-hospital and hospital settings. (2) Communication systems used in disasters. This theme highlighted that although various communication systems are used between and within hospitals and pre-hospital services, technical issues have influenced communication between teams during disasters. (3) Integrated information management systems. This theme suggested the need for an integrated health information system that can help pre-hospital and hospital staff to record patient data and ensure the data are shared. (4) Disaster training and drills. While some studies analyzed disaster drills and training, the majority of these studies focused on hospital departments other than EMTs. These studies suggest the need for simulation-based disaster training and drills that include EMTs. This review demonstrates that considerable gaps remain in the understanding of communication between EMS and ED hospital staff in relation to disaster response. The review shows that although different types of ICT are used, various issues remain which affect coordinated communication among the relevant professionals.

Keywords: emergency medical teams, communication, information and communication technologies, disaster

Procedia PDF Downloads 124
747 Scaling out Sustainable Land Use Systems in Colombia: Some Insights and Implications from Two Regional Case Studies

Authors: Martha Lilia Del Rio Duque, Michelle Bonatti, Katharina Loehr, Marcos Lana, Tatiana Rodriguez, Stefan Sieber

Abstract:

Nowadays, most agricultural practices can reduce the ability of ecosystems to provide goods and services. To enhance environmentally friendly food production and to maximize social and economic benefits, sustainable land use systems (SLUS) are one of the most critical strategies increasingly promoted by donor organizations, international agencies, and policymakers. This process involves the question of how SLUS can be scaled out to large-scale landscapes rather than remaining isolated experiments. As SLUS are context-specific strategies, the diffusion and replication of successful SLUS in Colombia requires the identification of the main factors that facilitate this scaling out process. We applied a case study approach to investigate the scaling out process of SLUS in the cocoa and livestock sectors within peacebuilding territories in Colombia, specifically in the Cesar and Caqueta regions. These two regions are contrasting, but both show a current trend of increasing land degradation. Presently, Caqueta is one of the most deforested departments in Colombia, and Cesar has some of the most degraded soils. Following a qualitative research approach, 19 semi-structured interviews and 2 focus groups were conducted with agroforestry experts in both regions to analyze (1) what a sustainable land use system means for cocoa or livestock, specifically in Caqueta or Cesar, and (2) the key elements, along the biophysical, economic and profitability, market, social, and policy and institutional dimensions, that can explain how and why SLUS are replicated and spread among more producers. The interviews were coded and analyzed using MAXQDA to identify, analyze, and report patterns (themes) within the data. As the results show, key themes, among them premium markets, solid regional markets and price stability, water availability and management, generational renewal, land use knowledge and diversification, and producer organizations and certifications, are crucial to understanding how SLUS can have an impact across large-scale landscapes and how the scaling out process can best be set up to be successful across different contexts. The analysis further reveals which key factors might affect SLUS efficiency.

Keywords: agroforestry, cocoa sector, Colombia, livestock sector, sustainable land use system

Procedia PDF Downloads 158
746 Integration of the Electro-Activation Technology for Soy Meal Valorization

Authors: Natela Gerliani, Mohammed Aider

Abstract:

Nowadays, interest in using sustainable technologies for protein extraction from underutilized oilseeds is growing. A major disposal problem for the oil industry is currently posed by the by-products of plant food processing, such as soybean meal. Valorization of soybean meal is therefore important for the oil industry, since it contains high-quality proteins and other valuable components. Generally, soybean meal is used in livestock and poultry feed but is rarely used in human food, even though its chemical composition can compensate for nutritional deficiencies and be used to balance protein in the human diet. Regarding the efficiency of soybean meal valorization, extraction is a key process for obtaining an enriched protein ingredient which can be incorporated into the food matrix. However, the extraction of most food components, such as proteins, from oilseed by-products implies the use of organic and inorganic chemicals (e.g., acids, bases, TCA-acetone) with a significant environmental impact. In the context of sustainable production, electro-activation technology seems to be a good alternative. Indeed, electro-activation requires only water, food-grade salt, and electricity as its main materials. Moreover, this innovative technology avoids the need for special equipment and worker-safety training, as well as the transport and storage of hazardous materials. Electro-activation is a technology based on applied electrochemistry for the generation of acidic and alkaline solutions on the basis of the oxidation-reduction reactions that occur in the vicinity of the electrode/solution interfaces. It is an eco-friendly process that can be used to replace conventional acidic and alkaline extraction. In this research, electro-activation for protein extraction from soybean meal was carried out in an electro-activation reactor. This reactor consists of three compartments separated by cation and anion exchange membranes that allow the creation of non-contacting acidic and basic solutions. Different current intensities (150 mA, 300 mA, and 450 mA) and treatment durations (10 min, 30 min, and 50 min) were tested. The results showed that the extracts obtained by the electro-activation method have good quality in comparison to conventional extracts. For instance, the extractability obtained with the electro-activation method was 55%, whereas with the conventional method it was only 36%. Moreover, a maximum protein content of 48% in the extract was obtained with the electro-activation technology, compared to a maximum of 41% obtained by conventional extraction. Hence, the environmentally sustainable electro-activation technology seems to be a promising approach to protein extraction that can replace conventional extraction technology.

Keywords: by-products, eco-friendly technology, electro-activation, soybean meal

Procedia PDF Downloads 226
745 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm, combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when, at the time, they performed better than the best known classical algorithm for Max-Cut graphs. While classical algorithms have improved since then and have returned to being faster and more efficient, this was a huge milestone for quantum computing, and that work is often used as a benchmark and a foundation for exploring variants of QAOA. This, alongside other famous algorithms like Grover's or Shor's, highlights to the world the potential that quantum computing holds. It also points to the possibility of a real quantum advantage where, if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate when creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization. The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOAs with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, which is based on linear approximations, and in some instances it can even produce a better Max-Cut. While the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
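
A minimal sketch of the hybrid loop is given below. The QAOA expectation value is replaced by a smooth toy surrogate so the example runs self-contained; in the actual algorithm that function would evaluate the parametrized circuit on a simulator or hardware. The EA operates only on the real-valued angle vector (gammas, betas), which is the gradient-free aspect motivated above, and the fitness evaluations are independent and therefore parallelizable.

```python
# Sketch of an evolutionary search over QAOA angles. `qaoa_expected_cut`
# is a placeholder for the circuit evaluation <C>; here it is a toy
# landscape so the script is runnable on its own.

import numpy as np

P_LAYERS = 2                  # QAOA depth p -> 2p angles (gammas, betas)
rng = np.random.default_rng(0)

def qaoa_expected_cut(angles: np.ndarray) -> float:
    """Placeholder for the QAOA expectation value; to be maximized."""
    gammas, betas = angles[:P_LAYERS], angles[P_LAYERS:]
    return float(np.sum(np.sin(2.0 * betas) * np.sin(gammas)))  # toy surrogate

def evolve(pop_size=30, generations=60, sigma=0.3, elite_frac=0.2):
    dim = 2 * P_LAYERS
    pop = rng.uniform(0.0, np.pi, size=(pop_size, dim))
    n_elite = max(1, int(elite_frac * pop_size))
    for _ in range(generations):
        # fitness evaluation: independent per individual, hence parallelizable
        fitness = np.array([qaoa_expected_cut(ind) for ind in pop])
        elite = pop[np.argsort(fitness)[-n_elite:]]
        # next generation: keep elites, mutate randomly chosen elite parents
        parents = elite[rng.integers(n_elite, size=pop_size - n_elite)]
        children = parents + rng.normal(0.0, sigma, size=parents.shape)
        pop = np.vstack([elite, children])
    fitness = np.array([qaoa_expected_cut(ind) for ind in pop])
    best = int(np.argmax(fitness))
    return pop[best], float(fitness[best])

best_angles, best_value = evolve()
print("best surrogate <C> =", round(best_value, 3))  # approaches 2.0 here
```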

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 58
744 Long-Term Economic-Ecological Assessment of Optimal Local Heat-Generating Technologies for the German Unrefurbished Residential Building Stock on the Quarter Level

Authors: M. A. Spielmann, L. Schebek

Abstract:

In order to reach the long-term national climate goals of the German government for the building sector, substantial energetic measures have to be executed. Historically, those measures were primarily energy efficiency measures at the buildings' shells. Advanced technologies for the on-site generation of heat (or other types of energy) are often not feasible at the small spatial scale of a single building. Therefore, the present approach uses the spatially larger dimension of a quarter. The main focus of the present paper is the long-term economic-ecological assessment of available decentralized heat-generating technologies (CHP power plants and electrical heat pumps) at the quarter level for the German unrefurbished residential building stock. Three distinct terms have to be described methodologically: i) the quarter approach, ii) the economic assessment, and iii) the ecological assessment. The quarter approach is used to enable synergies and scaling effects beyond a single building. For the present study, generic quarters are used that are differentiated according to significant parameters concerning their heat demand, the core differentiation being the construction period of the buildings. The economic assessment, as the second crucial element, is executed with the following structure: full costs are quantified for each technology combination and quarter. The investment costs are analyzed on an annual basis and are modeled with the acquisition of debt; annuity loans are assumed. Consequently, for each generic quarter, an optimal technology combination for decentralized heat generation is provided for each year within the temporal boundaries (2016-2050). The ecological assessment elaborates a life cycle assessment for each technology combination and each quarter. The impact category measured is GWP 100. The technology combinations for heat production can therefore be compared against each other concerning their long-term climatic impacts. Core results of the approach can be differentiated into an economic and an ecological dimension. With an annual resolution, the investment and running costs of different energetic technology combinations are quantified. For each quarter, an optimal technology combination for local heat supply and/or energetic refurbishment of the buildings within the quarter is provided. Coherently with the economic assessment, the climatic impacts of the technology combinations are quantified and compared against each other.
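
The annuity-loan assumption corresponds to the standard annuity formula A = P r / (1 - (1 + r)^-n); the sketch below applies it with illustrative figures, not values from the study.

```python
# Annual annuity payment for a debt-financed investment:
# A = P * r / (1 - (1 + r)**-n).

def annuity(principal: float, rate: float, years: int) -> float:
    """Constant annual payment repaying `principal` over `years` at interest `rate`."""
    if rate == 0.0:
        return principal / years
    return principal * rate / (1.0 - (1.0 + rate) ** -years)

# Illustrative numbers only: a 500,000 EUR CHP investment financed
# over 20 years at 3% interest.
print(f"annual payment: {annuity(500_000, 0.03, 20):,.0f} EUR")  # ~33,608 EUR
```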

Keywords: building sector, economic-ecological assessment, heat, LCA, quarter level

Procedia PDF Downloads 223
743 The Impact of Roof Thermal Performance on Indoor Thermal Comfort in a Naturally Ventilated Building Envelope in Hot Climates

Authors: J. Iwaro, A. Mwasha, K. Ramsubhag

Abstract:

Global warming has become a threat of our time. It poses challenges to the existence of beings on earth, the built environment, and the natural environment, and has made a clear impact on the level of energy and water consumption. An increase in ambient temperature raises the indoor and outdoor temperature levels of buildings, which brings about greater use of energy and mechanical air-conditioning systems. In addition, in view of the increased modernization and economic growth in developing countries, a significant amount of energy is being used, especially in countries with hot climatic conditions. Since modernization in developing countries is rising rapidly, more pressure is being placed on buildings and energy resources to satisfy indoor comfort requirements. This paper presents a sustainable passive roof solution as a means of reducing cooling energy loads for satisfying human comfort requirements in a hot climate. Based on field study data, it discusses roof design strategies for indoor thermal comfort in a hot climate by investigating the impact of roof thermal performance on indoor thermal comfort in naturally ventilated, small-scale building envelope structures. In this respect, a traditional concrete flat roof, a corrugated galvanised iron roof, and a pre-painted standing seam roof were used. The experiment made use of three identical small-scale physical models constructed and sited on the roof of a building at the University of the West Indies. The results show that the use of insulation in traditional roofing systems significantly reduces heat transfer between the internal and ambient environments, thus reducing the energy demand of the structure and its relative carbon footprint per unit area over its lifetime. Also, the flat slab concrete roofing system showed the best performance compared with the alternative metal roof sheeting systems. In addition, it has been shown experimentally through this study that a sustainable passive roof solution, such as an insulated flat concrete roof, in a hot, dry climate has better cooling capability and can provide building occupants with better thermal comfort, conducive indoor conditions, and energy efficiency.
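
The heat-gain mechanism behind these comparisons can be sketched with the steady-state conduction relation Q = U·A·ΔT. The U-values below are generic, textbook-order figures chosen only to illustrate the ranking, not the measured properties of the three test roofs.

```python
# Steady-state heat gain through a roof: Q = U * A * dT.
# U-values, area, and temperature difference are illustrative only.

ROOFS_U = {                               # W/(m^2 K), hypothetical
    "corrugated galvanised iron": 5.5,
    "uninsulated concrete slab": 3.0,
    "insulated concrete slab": 0.6,
}

AREA = 12.0  # roof area, m^2 (hypothetical)
DT = 15.0    # outdoor-to-indoor temperature difference, K (hypothetical)

for roof, u_value in ROOFS_U.items():
    q = u_value * AREA * DT               # heat flow into the space, W
    print(f"{roof:28s} Q = {q:6.0f} W")
```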

Keywords: building envelope, roof, energy consumption, thermal comfort

Procedia PDF Downloads 269
742 A Critical Discourse Analysis of Corporate Annual Reports in a Cross-Cultural Perspective: Views from Grammatical Metaphor and Systemic Functional Linguistics

Authors: Antonio Piga

Abstract:

The study of language strategies in financial and corporate discourse has always been vital for understanding how companies manage to communicate effectively with a wider customer base, and it offers new perspectives on how companies interact with key stakeholders, not only to convey transparency and an image of trustworthiness but also to create affiliation and attract investment. In the light of Systemic Functional Linguistics, the purpose of this study is to examine and analyse the annual reports of Asian and Western joint-stock companies involved in oil refining and power generation from the point of view of the functions and frequency of grammatical metaphors. More specifically, grammatical metaphor - through the lens of Critical Discourse Analysis (CDA) - is used as a theoretical tool for a synchronic cross-cultural analysis of the communicative strategies adopted by Asian and Western companies to communicate social and environmental sustainability and to showcase their ethical values, performance, and competitiveness to local and global communities and key stakeholders. According to Systemic Functional Linguistics, grammatical metaphor can be divided into two broad areas: ideational and interpersonal. This study focuses on the first type, ideational grammatical metaphor (IGM), which includes de-adjectival and de-verbal nominalisation. The dominant and most effective grammatical tropes used by Asian and Western corporations in their annual reports were examined from both a qualitative and a quantitative perspective. The aim was to categorise and explain how ideational grammatical metaphor is constructed cross-culturally and presented through structural language patterns involving re-mapping between semantics and lexico-grammatical features. The results show that although there seem to be more differences than similarities in the categorisation of the ideational grammatical metaphors conceptualised in the two case studies analysed, there are more similarities than differences in the occurrence, the congruence of process types, and the role and function of IGM. Through the immediacy and essentialism of compacting and condensing information, IGM appears to be an important linguistic strategy adopted in the rhetoric of corporate annual reports, contributing to the ideologies and actions of companies in reporting and promoting efficiency, profit, and social and environmental sustainability, thus encouraging the engagement and investment of key stakeholders.

Keywords: corporate annual reports, cross-cultural perspective, ideational grammatical metaphor, rhetoric, systemic functional linguistics

Procedia PDF Downloads 47
741 Advanced Separation Process of Hazardous Plastics and Metals from End-Of-Life Vehicles Shredder Residue by Nanoparticle Froth Flotation

Authors: Srinivasa Reddy Mallampati, Min Hee Park, Soo Mim Cho, Sung Hyeon Yoon

Abstract:

One of the issues in promoting End-of-Life Vehicle (ELV) recycling is technology for the appropriate treatment of automotive shredder residue (ASR). Owing to its high heterogeneity and variable composition (plastics (23–41%), rubber/elastomers (9–21%), metals (6–13%), glass (10–20%), and dust (soil/sand), etc.), ASR can be classified as 'hazardous waste' on the basis of the presence of heavy metals (HMs), PCBs, BFRs, mineral oils, etc. Considering their relevant concentrations, these metals and plastics should be properly recovered for recycling purposes before ASR residues are disposed of. Brominated flame retardant additives in ABS/HIPS and PVC may generate dioxins and furans at elevated temperatures. Moreover, these BFR additives present in plastic materials may leach into the environment during landfilling operations. Thermal processing of ASR removes some of the organic material but concentrates the heavy metals and POPs present in the residues. In the present study, Fe/Ca/CaO nanoparticle-assisted ozone treatment has been found to selectively hydrophilize the surfaces of ABS/HIPS and PVC plastics, enhancing their wettability and thereby promoting their separation from the other ASR plastics by means of froth flotation. The water contact angles of ABS, HIPS, and PVC in ASR decreased by about 18.7°, 18.3°, and 17.9°, respectively. Under froth flotation conditions at 50 rpm, about 99.5% and 99.5% of the ABS and HIPS in the ASR samples sank, resulting in purities of 98% and 99%, respectively. Furthermore, at 150 rpm, 100% PVC separation was achieved in the settled fraction, with 98% purity in ASR. The total recovery of non-ABS/HIPS and PVC plastics reached nearly 100% in the floating fraction. This process improved the quality of recycled ASR plastics by removing surface contaminants or impurities. Further, a hybrid process of ball-milling with Fe/Ca/CaO nanoparticle additives followed by froth flotation was established for the recovery of HMs from ASR. After ball-milling with the Fe/Ca/CaO nanoparticle additives, the flotation efficiency increased to about 55 wt%, and the HM recovery also increased to about 90% for the 0.25 mm size fractions of ASR. Coating with Fe/Ca/CaO nanoparticles, combined with subsequent microbubble froth flotation, allowed the air bubbles to attach firmly to the HMs. SEM-EDS maps showed that the amounts of HMs were significant on the surface of the floating ASR fraction. This result, along with the low HM concentration in the settled fraction, was confirmed by elemental spectra and semi-quantitative SEM-EDS analysis. The developed hybrid process for the preferential separation of hazardous plastics and metals from ASR is a simple, highly efficient, and sustainable procedure.

Keywords: end of life vehicles shredder residue, hazardous plastics, nanoparticle froth flotation, separation process

Procedia PDF Downloads 275
740 Comparisons between Students' Learning Achievements and Their Problem Solving Skills on the Stoichiometry Issue with the Think-Pair-Share Model and STEM Education Method

Authors: P. Thachitasing, N. Jansawang, W. Rakrai, T. Santiboon

Abstract:

The aim of this study is to compare instructional design models, the Think-Pair-Share and conventional learning (5E inquiry model) processes, for enhancing students' learning achievements and their problem-solving skills on the stoichiometry issue. The two instructional methods were investigated with a sample of 80 students in two classes at the 11th grade level in Chaturaphak Phiman Ratchadaphisek School, selected with the cluster random sampling technique from students with different learning outcomes in chemistry classes. The instructional designs assigned a 40-student experimental group to the Think-Pair-Share process and a 40-student control group to the conventional learning (5E inquiry model) method. These learning groups were assessed using five instruments: the five lesson plans for the Think-Pair-Share and STEM education methods, and pretest and posttest assessments of students' learning achievements and of their problem-solving skills; students' outcomes under the Think-Pair-Share Model (TPSM) and the STEM education method were then compared. Statistically significant differences between posttest and pretest scores, assessed with the paired t-test and F-test, were found for the whole sample of students in the chemistry classes. Associations between students' learning outcomes in chemistry under the two methods and their learning achievements and problem-solving skills were also found. The use of the two methods reveals that students in the different groups perceive their learning achievements and problem-solving skills differently, which guides practical improvements in chemistry classrooms and assists teachers in implementing effective instructional approaches. The mean learning achievement scores of the control group taught with the Think-Pair-Share Model (TPSM) were significantly lower than those of the experimental group taught with the STEM education method. The E1/E2 process revealed values of 82.56/80.44 and 83.02/81.65, which, based on the criteria and the IOC, are higher than the 80/80 standard level. The predictive efficiency (R²) values indicate that 61% and 67% of the variances in students' learning achievements on the posttest, and 63% and 67% of the variances in students' problem-solving skills on the stoichiometry issue, were attributable to the different learning outcomes under the TPSM and STEM education instructional methods.
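
The pretest/posttest comparison reported above rests on a paired t-test; a minimal sketch of that analysis step is given below. The score arrays are random placeholders, not the study's data.

```python
# Paired t-test of posttest vs. pretest scores, as used in the study.
# The scores below are random placeholders, not the study's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pretest = rng.normal(20.0, 4.0, size=40)            # placeholder pretest scores
posttest = pretest + rng.normal(5.0, 3.0, size=40)  # placeholder learning gain

t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```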

Keywords: comparisons, students' learning achievements, think-pair-share model (TPSM), STEM education, problem solving skills, chemistry classes, stoichiometry issue

Procedia PDF Downloads 248
739 Predictive Modelling of Curcuminoid Bioaccessibility as a Function of Food Formulation and Associated Properties

Authors: Kevin De Castro Cogle, Mirian Kubo, Maria Anastasiadi, Fady Mohareb, Claire Rossi

Abstract:

Background: The bioaccessibility of bioactive compounds is a critical determinant of the nutritional quality of various food products. Despite its importance, there are few comprehensive studies assessing how the composition of a food matrix influences the bioaccessibility of a compound of interest. This knowledge gap has prompted a growing need to investigate the intricate relationship between food matrix formulations and the bioaccessibility of bioactive compounds. One class of bioactive compounds that has attracted considerable attention is curcuminoids. These naturally occurring phytochemicals, extracted from the roots of Curcuma longa, have gained popularity owing to their purported health benefits and are also well known for their poor bioaccessibility. Project aim: The primary objective of this research project is to systematically assess the influence of matrix composition on the bioaccessibility of curcuminoids. Additionally, this study aims to develop a series of predictive models for bioaccessibility, providing valuable insights for optimising the formulation of functional foods and providing more descriptive nutritional information to potential consumers. Methods: Food formulations enriched with curcuminoids were subjected to in vitro digestion simulation, and their bioaccessibility was characterized with chromatographic and spectrophotometric techniques. The resulting data served as the foundation for the development of predictive models capable of estimating bioaccessibility based on specific physicochemical properties of the food matrices. Results: One striking finding of this study was the strong correlation observed between the concentration of macronutrients within the food formulations and the bioaccessibility of curcuminoids. In fact, macronutrient content emerged as a very informative explanatory variable of bioaccessibility and was used, alongside other variables, as a predictor in a Bayesian hierarchical model that predicted curcuminoid bioaccessibility accurately (optimisation performance of 0.97 R²) for the majority of cross-validated test formulations (LOOCV of 0.92 R²). These preliminary results open the door to further exploration, enabling researchers to investigate a broader spectrum of food matrix types and additional properties that may influence bioaccessibility. Conclusions: This research sheds light on the intricate interplay between food matrix composition and the bioaccessibility of curcuminoids. This study lays a foundation for future investigations, offering a promising avenue for advancing our understanding of bioactive compound bioaccessibility and its implications for the food industry and informed consumer choices.
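
The leave-one-out cross-validated R² quoted above can be computed as sketched below. A plain linear regression stands in for the paper's Bayesian hierarchical model, and the feature matrix is a random placeholder for formulation properties such as macronutrient contents.

```python
# Sketch of leave-one-out cross-validated R^2 for a bioaccessibility
# predictor. X and y are random placeholders; a linear model stands in
# for the Bayesian hierarchical model used in the study.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(30, 4))   # placeholder formulation features
y = X @ np.array([0.5, -0.3, 0.2, 0.1]) + rng.normal(0.0, 0.05, size=30)

y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
print(f"LOOCV R^2 = {r2_score(y, y_loo):.2f}")
```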

Keywords: bioactive bioaccessibility, food formulation, food matrix, machine learning, probabilistic modelling

Procedia PDF Downloads 66
738 Assessment of Sediment Control Characteristics of Notches in Different Sediment Transport Regimes

Authors: Chih Ming Tseng

Abstract:

Landslides during typhoons generate substantial amounts of sediment, and subsequent rainfall can trigger various types of sediment transport regimes, such as debris flows, high-concentration sediment-laden flows, and typical river sediment transport. This study aims to investigate the sediment control characteristics of natural notches within different sediment transport regimes. High-resolution digital terrain models were used to establish the relationship between slope gradients and catchment areas, which was then used to delineate distinct sediment transport regimes and analyze the sediment control characteristics of notches within these regimes. The results indicate that the catchment areas of Aiyuzi Creek, Hossa Creek, and Chushui Creek in the study region can be clearly categorized into three sediment transport regimes based on the slope-area relationship curves: frequent-collapse headwater areas, debris flow zones, and high-concentration sediment-laden flow zones. The threshold for transitioning from the collapse zone to the debris flow zone in the Aiyuzi Creek catchment is lower than in Hossa Creek and Chushui Creek, suggesting that the active collapse processes in the upper reaches of Aiyuzi Creek continuously supply a significant sediment source, making it more susceptible to subsequent debris flow events. Moreover, the analysis of sediment trapping efficiency at notches within different sediment transport regimes reveals that as the notch constriction ratio increases, the sediment accumulation per unit area also increases. The accumulation thickness per unit area in high-concentration sediment-laden flow zones is greater than in debris flow zones, indicating differences in sediment deposition characteristics among the various sediment transport regimes. Regarding sediment control rates at notches, there is a generally positive correlation with the notch constriction ratio. During Typhoon Morakot in 2009, the substantial sediment supply from slope failures in the upstream catchment led to an oversupplied sediment transport condition in the river channel. Consequently, sediment control rates were more pronounced during medium and small sediment transport events between 2010 and 2015. However, there were no significant differences in sediment control rates among the different sediment transport regimes at notches. Overall, this research provides valuable insights into the sediment control characteristics of notches under various sediment transport conditions, which can aid in the development of improved sediment management strategies in watersheds.
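
The slope-area delineation described above is conventionally based on power-law segments of the form S = c·A^(-θ) in the slope versus contributing-area plot. The sketch below classifies points with hypothetical coefficients, not the thresholds fitted for the three creeks.

```python
# Sketch of slope-area regime delineation: a point (A, S) is classified
# by where it falls relative to power-law thresholds S = c * A**(-theta).
# The coefficients are hypothetical, not the fitted values for Aiyuzi,
# Hossa, or Chushui Creek.

def classify_cell(area_km2: float, slope: float) -> str:
    if slope >= 0.6 * area_km2 ** -0.15:   # hypothetical collapse threshold
        return "frequent-collapse headwater area"
    if slope >= 0.2 * area_km2 ** -0.30:   # hypothetical debris-flow threshold
        return "debris flow zone"
    return "high-concentration sediment-laden flow zone"

for area, slope in [(0.05, 1.0), (0.5, 0.25), (5.0, 0.05)]:
    print(f"A = {area:4.2f} km^2, S = {slope:.2f} -> {classify_cell(area, slope)}")
```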

Keywords: landslide, debris flow, notch, sediment control, DTM, slope–area relation

Procedia PDF Downloads 26
737 Radar Cross Section Modelling of Lossy Dielectrics

Authors: Ciara Pienaar, J. W. Odendaal, J. Joubert, J. C. Smit

Abstract:

The radar cross section (RCS) of dielectric objects plays an important role in many applications, such as low-observability technology development, drone detection and monitoring, and coastal surveillance. Various materials are used to construct the targets of interest, such as metal, wood, composite materials, radar absorbent materials, and other dielectrics. Since simulated datasets are increasingly being used to supplement infield measurements, as simulation is more cost-effective and a larger variety of targets can be simulated, it is important to have a high level of confidence in the predicted results. Confidence can be attained through validation. Various computational electromagnetic (CEM) methods are capable of predicting the RCS of dielectric targets. This study extends previous studies by validating full-wave and asymptotic RCS simulations of dielectric targets against measured data. The paper provides measured RCS data for a number of canonical dielectric targets exhibiting different material properties; these measurements are used to validate numerous CEM methods. The dielectric properties are accurately characterized to reduce the uncertainties in the simulations. Finally, an analysis of the sensitivity of oblique and normal incidence scattering predictions to material characteristics is also presented. In this paper, the ability of several CEM methods, including the method of moments (MoM) and physical optics (PO), to calculate the RCS of dielectrics was validated with measured data. A few dielectrics, exhibiting different material properties, were selected, and several canonical targets, such as flat plates and cylinders, were manufactured. The RCS of these dielectric targets was measured in a compact range at the University of Pretoria, South Africa, over a frequency range of 2 to 18 GHz and a 360° azimuth angle sweep. This study also investigated the effect of slight variations in the material properties on the calculated RCS results, by varying the material properties within a realistic tolerance range and comparing the calculated RCS results. Interesting measured and simulated results have been obtained. Large discrepancies were observed between the different methods as well as against the measured data. It was also observed that the accuracy of the RCS data of the dielectrics can be frequency and angle dependent. The simulated RCS for some of these materials also exhibits high sensitivity to variations in the material properties. Comparison graphs between the measured and simulated RCS datasets will be presented, and their validation will be discussed. Finally, the effect that small tolerances in the material properties have on the calculated RCS results will be shown. Thus, the importance of accurate dielectric material properties for validation purposes will be discussed.
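
As a point of reference for the canonical flat-plate targets, the physical-optics peak RCS of a flat plate at normal incidence is σ = 4πA²/λ². The sketch below evaluates this textbook formula across the measured band; it holds for a perfectly conducting plate, so it only provides an upper reference for the dielectric cases discussed here.

```python
# Physical-optics peak RCS of a flat plate at normal incidence:
# sigma = 4 * pi * A**2 / lambda**2 (perfectly conducting plate).
# A reference point only, not the paper's dielectric results.

import math

C = 299_792_458.0   # speed of light, m/s
SIDE = 0.30         # plate side length, m (hypothetical)
AREA = SIDE * SIDE

for freq_ghz in (2.0, 10.0, 18.0):   # span of the measured 2-18 GHz band
    lam = C / (freq_ghz * 1e9)
    sigma = 4.0 * math.pi * AREA**2 / lam**2
    print(f"{freq_ghz:4.1f} GHz: sigma = {sigma:8.2f} m^2 "
          f"({10.0 * math.log10(sigma):5.1f} dBsm)")
```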

Keywords: asymptotic, CEM, dielectric scattering, full-wave, measurements, radar cross section, validation

Procedia PDF Downloads 236
736 Communication Skills Training in Continuing Nursing Education: Enabling Nurses to Improve Competency and Performance in Communication

Authors: Marzieh Moattari, Mitra Abbasi, Masoud Mousavinasab, Poorahmad

Abstract:

Background: Nurses in their daily practice need to communicate with patients and their families as well as with members of the health professional team. Effective communication contributes to patient satisfaction, which is a fundamental outcome of nursing practice. There is some evidence of patients' dissatisfaction with nurses' performance in the communication process. Therefore, improving nurses' communication skills is a necessity for nursing scholars and nursing administrators. Objective: The aim of the present study was to evaluate the effect of a 2-day workshop on nurses' competencies and performance in communication in a central hospital located in the south of Iran. Materials and Method: This is a randomized controlled trial comprising a convenience sample of 70 eligible nurses working in a central hospital. They were randomly divided into experimental and control groups. Nurses' competencies were measured by an Objective Structured Clinical Examination (OSCE), and their performance was measured by asking eligible patients hospitalized in the nurses' work setting during a one-month period to evaluate the nurses' communication skills before and 2 months after the intervention. The experimental group participated in a 2-day workshop on communication skills. The content of this workshop included the importance of communication (verbal and non-verbal) and basic communication skills such as initiating communication, active listening, and questioning techniques. Other subjects were patient teaching, problem solving, decision making, cross-cultural communication, and breaking bad news. Appropriate teaching strategies such as brief didactic sessions, small group discussion, and reflection were applied to enhance participants' learning. The data were analyzed using SPSS 16. Result: A significant between-group difference was found in nurses' communication skills competencies and performance in the posttest. The mean scores of the experimental group were higher than those of the control group in the total score of the OSCE as well as in all stations of the OSCE (p<0.003). The overall posttest mean scores of patient satisfaction with nurses' communication skills and all of its four dimensions differed significantly between the two groups of the study (p<0.001). Conclusion: This study shows that educating nurses in communication skills improves their competencies and performance. Measurement of nurses' communication skills, as a central component of an efficient nurse-patient relationship, by valid and reliable methods of evaluation is recommended. It is also necessary to integrate the teaching of communication skills into continuing nursing education programs. Trial Registration Number: IRCT201204042621N11

Keywords: communication skills, simulation, performance, competency, objective structured clinical evaluation

Procedia PDF Downloads 216
735 A Hybrid Artificial Intelligence and Two Dimensional Depth Averaged Numerical Model for Solving Shallow Water and Exner Equations Simultaneously

Authors: S. Mehrab Amiri, Nasser Talebbeydokhti

Abstract:

Modeling sediment transport processes by means of a numerical approach often poses severe challenges. Accordingly, a number of techniques have been suggested to solve flow and sediment equations in decoupled, semi-coupled, or fully coupled forms. Furthermore, in order to capture flow discontinuities, a number of techniques, like artificial viscosity and shock fitting, have been proposed for solving these equations, most of which require careful calibration. In this research, a numerical scheme for solving the shallow water and Exner equations in fully coupled form is presented. The First-Order Centered (FORCE) scheme is applied for producing the required numerical fluxes, and the reconstruction process is carried out using the Monotonic Upstream-centered Scheme for Conservation Laws (MUSCL) to achieve a high-order scheme. In order to satisfy the C-property of the scheme in the presence of bed topography, the Surface Gradient Method is employed. Combining the presented scheme with a fourth-order Runge-Kutta algorithm for time integration yields a competent numerical scheme. In addition, to handle non-prismatic channel problems, the Cartesian Cut Cell Method is employed. A trained Multi-Layer Perceptron Artificial Neural Network of the Feed Forward Back Propagation (FFBP) type estimates the sediment flow discharge in the model, rather than the usual empirical formulas. The hydrodynamic part of the model is tested to show its capability in simulating flow discontinuities, transcritical flows, wetting/drying conditions, and non-prismatic channel flows. To this end, dam-break flow onto a locally non-prismatic converging-diverging channel with initially dry bed conditions is modeled. The morphodynamic part of the model is verified by simulating a dam break on a dry movable bed and bed level variations at an alluvial junction. The results show that the model is capable of capturing flow discontinuities, solving wetting/drying problems even in non-prismatic channels, and producing proper results for movable bed situations. It can also be deduced that applying an Artificial Neural Network, instead of common empirical formulas, for estimating the sediment flow discharge leads to more accurate results.
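
A minimal sketch of the flux computation at the core of such a scheme, assuming a simple Grass-type bedload law in place of the paper's trained neural network: the FORCE numerical flux is the arithmetic mean of the Lax-Friedrichs and two-step Lax-Wendroff fluxes, here written for the coupled 1D shallow water-Exner system (the bed-slope source term and MUSCL reconstruction are omitted for brevity).

```python
import numpy as np

G = 9.81                  # gravity (m/s^2)
A_G = 0.001               # Grass bedload coefficient (illustrative)
XI = 1.0 / (1.0 - 0.4)    # 1/(1 - bed porosity), porosity assumed 0.4

def phys_flux(U):
    """Flux of the coupled 1D shallow water-Exner system, U = [h, hu, zb].
    A Grass bedload law q_b = A_G * u**3 stands in for the trained ANN
    that the paper uses for the sediment flow discharge."""
    h, hu, _ = U
    u = hu / max(h, 1e-8)
    return np.array([hu, hu * u + 0.5 * G * h * h, XI * A_G * u ** 3])

def force_flux(UL, UR, dx, dt):
    """FORCE numerical flux: the arithmetic mean of the Lax-Friedrichs
    and (two-step) Lax-Wendroff fluxes at a cell interface."""
    FL, FR = phys_flux(UL), phys_flux(UR)
    f_lf = 0.5 * (FL + FR) - 0.5 * (dx / dt) * (UR - UL)
    u_lw = 0.5 * (UL + UR) - 0.5 * (dt / dx) * (FR - FL)
    return 0.5 * (f_lf + phys_flux(u_lw))

# Interface flux for a dam-break-like jump over an initially flat movable bed
UL = np.array([1.0, 0.0, 0.0])   # h = 1 m, at rest
UR = np.array([0.2, 0.0, 0.0])   # h = 0.2 m downstream
print(force_flux(UL, UR, dx=0.1, dt=0.005))
```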

Keywords: artificial neural network, morphodynamic model, sediment continuity equation, shallow water equations

Procedia PDF Downloads 185
734 Genome-Wide Mining of Potential Guide RNAs for Streptococcus pyogenes and Neisseria meningitidis CRISPR-Cas Systems for Genome Engineering

Authors: Farahnaz Sadat Golestan Hashemi, Mohd Razi Ismail, Mohd Y. Rafii

Abstract:

Clustered regularly interspaced short palindromic repeats (CRISPR) and CRISPR-associated protein (Cas) systems can facilitate targeted genome editing in organisms. Dual or single guide RNA (gRNA) can program the Cas9 nuclease to cut target DNA in particular areas, thus introducing precise mutations either via error-prone non-homologous end-joining repair or via incorporating foreign DNAs by homologous recombination between donor DNA and the target area. In spite of the high demand for such a promising technology, developing a well-organized procedure for the reliable mining of potential target sites for gRNAs in large genomic data is still challenging. Hence, we aimed to perform high-throughput detection of target sites by specific PAMs not only for the common Streptococcus pyogenes (SpCas9) system but also for the Neisseria meningitidis (NmCas9) CRISPR-Cas system. Previous research confirmed the successful application of such RNA-guided Cas9 orthologs for effective gene targeting and, subsequently, genome manipulation. However, Cas9 orthologs need their particular PAM sequence for DNA cleavage activity. Activity levels are based on the sequence of the protospacer and specific combinations of favorable PAM bases. Therefore, based on the specific length and sequence of the PAM followed by a constant length of the target site for the two orthologs of the Cas9 protein, we created a reliable procedure to explore possible gRNA sequences. To mine CRISPR target sites, four different searching modes of sgRNA binding to the target DNA strand were applied: i) coding strand searching, ii) anti-coding strand searching, iii) both strand searching, and iv) paired-gRNA searching. Finally, a complete list of all potential gRNAs, along with their locations, strands, and PAM sequence orientations, can be provided for SpCas9 as well as for the other potential Cas9 ortholog (NmCas9). The computational design of potential gRNAs in a genome of interest can accelerate functional genomic studies. Consequently, the application of such a novel genome editing tool (CRISPR/Cas technology) will be enhanced by its increased versatility and efficiency.
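
A minimal sketch of such PAM-driven target mining, assuming the canonical NGG PAM for SpCas9, the NNNNGATT PAM for NmCas9, and a fixed 20-nt protospacer immediately upstream of the PAM; the "both strand searching" mode is implemented by also scanning the reverse complement. The toy sequence and helper names are illustrative only.

```python
import re

PAMS = {"SpCas9": "[ACGT]GG", "NmCas9": "[ACGT]{4}GATT"}
SPACER_LEN = 20  # constant protospacer length assumed upstream of the PAM

def revcomp(seq):
    """Reverse complement of an upper-case DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def mine_grnas(genome, system):
    """Return (spacer, pam, position, strand) for every candidate target.
    A zero-width lookahead keeps overlapping PAM hits; scanning the
    reverse complement implements 'both strand searching'."""
    pam_re = re.compile(f"(?=({PAMS[system]}))")
    hits = []
    for strand, seq in (("+", genome), ("-", revcomp(genome))):
        for m in pam_re.finditer(seq):
            start = m.start()
            if start >= SPACER_LEN:
                spacer = seq[start - SPACER_LEN:start]
                hits.append((spacer, m.group(1), start, strand))
    return hits

dna = "ATGGCC" * 12 + "TTTTGATTACGT"   # toy sequence
for system in PAMS:
    print(system, len(mine_grnas(dna, system)), "candidate sites")
```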

Keywords: CRISPR/Cas9 genome editing, gRNA mining, SpCas9, NmCas9

Procedia PDF Downloads 257
733 Spatial Direct Numerical Simulation of Instability Waves in Hypersonic Boundary Layers

Authors: Jayahar Sivasubramanian

Abstract:

Understanding the laminar-turbulent transition process in hypersonic boundary layers is crucial for designing viable high-speed flight vehicles. The study of transition becomes particularly important in the high-speed regime due to the effect of transition on aerodynamic performance and heat transfer. However, even after many years of research, the transition process in hypersonic boundary layers is still not understood. This lack of understanding of the physics of the transition process is a major impediment to the development of reliable transition prediction methods. Towards this end, spatial Direct Numerical Simulations are conducted to investigate the instability waves generated by a localized disturbance in a hypersonic flat-plate boundary layer. In order to model a natural transition scenario, the boundary layer was forced by a short-duration (localized) pulse through a hole on the surface of the flat plate. The pulse disturbance developed into a three-dimensional instability wave packet which consisted of a wide range of disturbance frequencies and wave numbers. First, the linear development of the wave packet was studied by forcing the flow with a low amplitude (0.001% of the free-stream velocity). The dominant waves within the resulting wave packet were identified as two-dimensional second-mode disturbance waves. Hence, the wall-pressure disturbance spectrum exhibited a maximum at the spanwise mode number k = 0. The spectrum broadened in the downstream direction, and the lower-frequency first-mode oblique waves were also identified in the spectrum. However, the peak amplitude remained at k = 0 and shifted to lower frequencies in the downstream direction. In order to investigate the nonlinear transition regime, the flow was forced with a higher-amplitude disturbance (5% of the free-stream velocity). The developing wave packet grows linearly at first before reaching the nonlinear regime. The wall-pressure disturbance spectrum confirmed that the wave packet developed linearly at first. The response of the flow to the high-amplitude pulse disturbance indicated the presence of a fundamental resonance mechanism. Lower-amplitude secondary peaks were also identified in the disturbance wave spectrum at approximately half the frequency of the high-amplitude frequency band, which would be an indication of a sub-harmonic resonance mechanism. The disturbance spectrum indicates, however, that fundamental resonance is much stronger than sub-harmonic resonance.
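
A sketch of how such a wall-pressure disturbance spectrum could be computed, with synthetic data standing in for DNS output: a 2D FFT over time and span yields the frequency/spanwise-mode-number spectrum, in which the k = 0 column corresponds to two-dimensional (second-mode-like) waves. Grid sizes, frequencies, and amplitudes are arbitrary illustrative values.

```python
import numpy as np

def wall_pressure_spectrum(p_wall, dt):
    """Frequency / spanwise-mode-number amplitude spectrum of wall-pressure
    disturbance data p_wall[nt, nz] at one streamwise station. Returns the
    positive temporal frequencies and |P(f, k)|; column k = 0 holds the
    two-dimensional wave content."""
    nt, _ = p_wall.shape
    P = np.fft.rfft2(p_wall - p_wall.mean())  # FFT in time (axis 0) and span
    freqs = np.fft.fftfreq(nt, dt)[: nt // 2]
    return freqs, np.abs(P[: nt // 2, :])

# Synthetic packet: a dominant 2D wave plus a weaker oblique (mode-4) wave
nt, nz, dt, dz = 256, 64, 1e-6, 1e-3
t = np.arange(nt)[:, None] * dt
z = np.arange(nz)[None, :] * dz
p = (np.cos(2 * np.pi * 1.25e5 * t)
     + 0.3 * np.cos(2 * np.pi * 6.25e4 * t) * np.cos(2 * np.pi * 4 * z / (nz * dz)))
freqs, amp = wall_pressure_spectrum(p, dt)
i_f, i_k = np.unravel_index(amp.argmax(), amp.shape)
print(f"peak at f = {freqs[i_f]:.3e} Hz, spanwise mode k = {i_k}")  # expect k = 0
```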

Keywords: boundary layer, DNS, hyper sonic flow, instability waves, wave packet

Procedia PDF Downloads 182
732 Modeling of Anode Catalyst against CO in Fuel Cell Using Material Informatics

Authors: M. Khorshed Alam, H. Takaba

Abstract:

The catalytic properties of a metal usually change upon intermixing with another metal in polymer electrolyte fuel cells. Pt-Ru alloy is one of the most widely discussed alloys for enhancing CO oxidation. In this work, we have investigated the CO coverage on Pt2Ru3 nanoparticles with different atomic conformations of Pt and Ru, using a combination of materials informatics and computational chemistry. Density functional theory (DFT) calculations were used to describe the adsorption strength of CO and H for different conformations of the Pt:Ru ratio on the Pt2Ru3 slab surface. Then, through Monte Carlo (MC) simulations, we examined the segregation behaviour of Pt as a function of the surface atom ratio, subsurface atom ratio, and particle size of the Pt2Ru3 nanoparticle. We constructed a regression equation so as to reproduce the results of DFT from the structural descriptors alone. The following descriptors were selected for the regression equation: xa-b indicates the number of bonds between a targeted atom a and a neighboring atom b in the same layer (a, b = Pt or Ru); the terms xa-H2 and xa-CO represent the number of atoms a binding H2 and CO molecules, respectively; xa-S is the number of atoms a on the surface; and xa-b′ is the number of bonds between atom a and a neighboring atom b located outside the layer. Surface segregation in alloy nanoparticles is influenced by their component elements, composition, crystal lattice, shape, size, the nature of the adsorbates and their pressure, temperature, etc. Simulations were performed on nanoparticles of different sizes (2.0 nm, 3.0 nm), mixing Pt and Ru atoms in different conformations, at a temperature of 333 K. In addition to the Pt2Ru3 alloy, we also considered pure Pt and Ru nanoparticles to compare the surface coverage by adsorbates (H2, CO). The pure and Pt-Ru alloy nanoparticles were assumed to have an fcc crystal structure and a cubo-octahedron shape, bounded by (111) and (100) facets. Simulations were performed for up to 50 million MC steps. The MC results show that, in the presence of gases (H2, CO), the surfaces are occupied by the gas molecules, and in the equilibrium structure the coverage of H and CO depends on the nature of the surface atoms. In the initial structure, the Pt/Ru ratios on the surfaces for different cluster sizes were in the range of 0.50-0.95. MC simulation was employed with partial pressures of H2 (PH2) and CO (PCO) of 70 kPa and 100-500 ppm, respectively. The Pt/Ru ratio decreases as the CO concentration increases, with little exception only for the small nanoparticle. The adsorption strength of CO on the Ru site is higher than on the Pt site, which would be one of the reasons for the decreasing Pt/Ru ratio on the surface. Therefore, our study identifies that controlling the nanoparticle size, composition, conformation of the alloying atoms, and the concentration and chemical potential of the adsorbates has an impact on the stability of nanoparticle alloys and ultimately on the overall catalytic performance during operation.
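
A minimal sketch of the descriptor-regression step described above: ordinary least squares reproduces adsorption energies from bond-count descriptors of the xa-b type. Both the bond counts and the DFT energies below are fabricated purely for illustration, and with six samples and six parameters the fit is essentially exact; in practice many more DFT configurations than descriptors would be used.

```python
import numpy as np

# Descriptor columns in the spirit of the paper's xa-b counts:
# [x_Pt-Pt, x_Pt-Ru, x_Ru-Ru, x_Pt-CO, x_Ru-CO] per configuration
X = np.array([
    [6, 2, 1, 1, 0],
    [4, 4, 2, 0, 1],
    [5, 3, 2, 1, 1],
    [3, 5, 3, 0, 2],
    [6, 1, 2, 2, 0],
    [2, 6, 3, 1, 2],
], dtype=float)
E_dft = np.array([-1.42, -1.78, -1.60, -1.95, -1.35, -2.01])  # fictitious eV

# Ordinary least squares with an intercept: E ~ w0 + sum_i w_i * x_i
A = np.hstack([np.ones((len(X), 1)), X])
w, *_ = np.linalg.lstsq(A, E_dft, rcond=None)
E_fit = A @ w

print("coefficients:", np.round(w, 3))
print("max abs error (eV):", float(np.abs(E_fit - E_dft).max()))
```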

Keywords: anode catalysts, fuel cells, material informatics, Monte Carlo

Procedia PDF Downloads 191
731 Achieving Net Zero Energy Building in a Hot Climate Using Integrated Photovoltaic and Parabolic Trough Collectors

Authors: Adel A. Ghoneim

Abstract:

In most existing buildings in hot climates, cooling loads lead to high primary energy consumption and consequently high CO2 emissions. These can be substantially decreased with integrated renewable energy systems. Kuwait is characterized by a dry, hot, long summer and a short, warm winter. Kuwait receives an annual total radiation of more than 5280 MJ/m2 with approximately 3347 h of sunshine. A solar energy system consisting of PV modules and parabolic trough collectors is considered to satisfy the electricity consumption, domestic water heating, and cooling loads of an existing building. This paper presents the results of an extensive program of energy conservation and energy generation using integrated photovoltaic (PV) modules and parabolic trough collectors (PTC). The program was conducted on an existing institutional building with the intention of converting it into a Net-Zero Energy Building (NZEB) or a near net-Zero Energy Building (nNZEB). The program consists of two phases: the first phase is concerned with energy auditing and energy conservation measures at minimum cost, and the second phase considers the installation of photovoltaic modules and parabolic trough collectors. The 2-storey building under consideration is the Applied Sciences Department at the College of Technological Studies, Kuwait. Single-effect lithium bromide-water absorption chillers are implemented to provide the air conditioning load of the building. A numerical model is developed to evaluate the performance of parabolic trough collectors in the Kuwait climate. The transient simulation program TRNSYS is adapted to simulate the performance of the different solar system components. In addition, a numerical model is developed to assess the environmental impacts of building-integrated renewable energy systems. Results indicate that efficient energy conservation can play an important role in converting existing buildings into NZEBs, as it saves a significant portion of the annual energy consumption of the building. The first phase results in energy conservation of about 28% of the building consumption. In the second phase, the integrated PV completely covers the lighting and equipment loads of the building. On the other hand, parabolic trough collectors with an optimum area of 765 m2 can satisfy a significant portion of the cooling load, i.e., about 73% of the total building cooling load. The annual avoided CO2 emission is evaluated at the optimum conditions to assess the environmental impacts of the renewable energy systems. The total annual avoided CO2 emission is about 680 metric tons/year, which confirms the environmental benefits of these systems in Kuwait.
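
A back-of-envelope sketch of the avoided-CO2 bookkeeping behind such a result. Only the ~680 t/year total and the 73% cooling fraction come from the abstract; the generation figures and the grid emission factor below are placeholders chosen to make the arithmetic come out consistently.

```python
# Avoided CO2 = displaced grid electricity * grid emission factor
pv_generation_mwh = 450.0      # assumed annual PV output (lighting + equipment)
cooling_saved_mwh = 520.0      # assumed electricity displaced by the
                               # PTC-driven absorption chillers (~73% of cooling)
grid_factor_t_per_mwh = 0.7    # assumed grid emission factor, t CO2 per MWh

avoided_t = (pv_generation_mwh + cooling_saved_mwh) * grid_factor_t_per_mwh
print(f"avoided CO2: {avoided_t:.0f} t/yr")   # ~680 t/yr with these inputs
```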

Keywords: building integrated renewable systems, Net-Zero energy building, solar fraction, avoided CO2 emission

Procedia PDF Downloads 608
730 Exploring the Intersection Between the General Data Protection Regulation and the Artificial Intelligence Act

Authors: Maria Jędrzejczak, Patryk Pieniążek

Abstract:

The European legal reality is on the eve of significant change. In European Union law, there is talk of a “fourth industrial revolution”, which is driven by massive data resources linked to powerful algorithms and powerful computing capacity. The above is closely linked to technological developments in the area of artificial intelligence, which has prompted an analysis covering both the legal environment and the economic and social impact, also from an ethical perspective. The discussion on the regulation of artificial intelligence is one of the most serious and widely held at both European Union and Member State level. The literature expects legal solutions to guarantee security for fundamental rights, including privacy, in artificial intelligence systems. There is no doubt that personal data have been increasingly processed in recent years. It would be impossible for artificial intelligence to function without processing large amounts of data (both personal and non-personal). The main driving force behind the current development of artificial intelligence is advances in computing, but also the increasing availability of data. High-quality data are crucial to the effectiveness of many artificial intelligence systems, particularly when using techniques involving model training. The use of computers and artificial intelligence technology allows for an increase in the speed and efficiency of the actions taken, but also creates security risks of an unprecedented magnitude for the data processed. The proposed regulation in the field of artificial intelligence requires analysis in terms of its impact on the regulation of personal data protection. It is necessary to determine the mutual relationship between these regulations and which areas of the personal data protection regulation are particularly important for the processing of personal data in artificial intelligence systems. The adopted axis of consideration is a preliminary assessment of two issues: 1) which data protection principles should apply, in particular, to the processing of personal data in artificial intelligence systems, and 2) how liability for personal data breaches in such systems should be regulated. The need to change the regulations regarding the rights and obligations of data subjects and entities processing personal data cannot be excluded. It is possible that changes will be required in the provisions regarding the assignment of liability for a breach of personal data protection processed in artificial intelligence systems. The research process in this case concerns the identification of areas in the field of personal data protection that are particularly important (and may require re-regulation) due to the introduction of the proposed legal regulation regarding artificial intelligence. The main question the authors want to answer is how the European Union regulation against data protection breaches in artificial intelligence systems is shaping up. The answer to this question will include examples illustrating the practical implications of these legal regulations.

Keywords: data protection law, personal data, AI law, personal data breach

Procedia PDF Downloads 63
729 Influence of Confinement on Phase Behavior in Unconventional Gas Condensate Reservoirs

Authors: Szymon Kuczynski

Abstract:

Poland is characterized by the presence of numerous sedimentary basins and hydrocarbon provinces. Since 2006, exploration for hydrocarbons in Poland has gradually become more focused on new unconventional targets, particularly the shale gas potential of the Upper Ordovician and Lower Silurian in the Baltic-Podlasie-Lublin Basin. The first forecast, prepared by the US Energy Information Administration in 2011, indicated 5.3 Tcm of natural gas. In 2012, the Polish Geological Institute presented its own forecast, which estimated maximum reserves at 1.92 Tcm. The difference in the estimates was caused by problems with calculating the initial amount of adsorbed, as well as free, gas trapped in shale rocks (GIIP - Gas Initially in Place). This value depends on sorption capacity, gas saturation, and mutual interactions between gas, water, and rock. Determining the reservoir type in the initial exploration phase brings essential knowledge, which has an impact on decisions related to production. Studying the impact of porosity on the phase-envelope shift eliminates errors and improves production profitability. The confinement phenomenon affects flow characteristics, fluid properties, and phase equilibrium. The thermodynamic behavior of confined fluids in porous media is subject to basic considerations for industrial applications such as hydrocarbon production. In particular, knowledge of the phase equilibrium and the critical properties of the confined fluid is essential for the design and optimization of such processes. In pores with a small diameter (nanopores), as occur in shale formations, the effect of the wall interaction with the fluid particles becomes significant. The nanopore size is similar to the diameter of the fluid particles, and the region in which particles flow without interacting with the pore wall is almost equal in extent to the region where this interaction occurs. Molecular simulation studies have shown an effect of confinement on the pseudo-critical properties. Therefore, at the nanoscale, the critical pressure and temperature as well as the flow characteristics of hydrocarbons are strongly influenced by the interaction of the fluid particles with the pore wall. It can be concluded that the size of individual pores is crucial at the nanoscale, because it is there that the above-described effect becomes possible. Nanoporosity makes it difficult to predict the flow of reservoir fluid. Research is conducted to explain the mechanisms of fluid flow in nanopores and gas extraction from porous media by desorption.
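
One common way to quantify the confinement effect sketched above is a pore-size-dependent shift of the pseudo-critical properties. The sketch below uses a correlation of the Zarragoicoechea-Kuz type as quoted in the shale-PVT literature; the coefficients and the methane parameters are cited from memory and should be verified against the original source before any real use.

```python
def confined_critical_shift(T_c, p_c, sigma_nm, r_pore_nm):
    """Shifted pseudo-critical temperature and pressure in a nanopore,
    using the Zarragoicoechea-Kuz type correlation
        dTc/Tc = dpc/pc = 0.9409*(sigma/r) - 0.2415*(sigma/r)**2,
    where sigma is the Lennard-Jones diameter and r the pore radius.
    Coefficients quoted from the literature; verify before real use."""
    x = sigma_nm / r_pore_nm
    shift = 0.9409 * x - 0.2415 * x ** 2
    return T_c * (1 - shift), p_c * (1 - shift)

# Methane (Tc = 190.6 K, pc = 4.60 MPa, LJ diameter ~0.38 nm) in a 2 nm pore
Tc_p, pc_p = confined_critical_shift(190.6, 4.60, 0.38, 2.0)
print(f"confined Tc = {Tc_p:.1f} K, pc = {pc_p:.2f} MPa")
```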

Keywords: adsorption, capillary condensation, phase envelope, nanopores, unconventional natural gas

Procedia PDF Downloads 336
728 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach

Authors: Kristina Pflug, Markus Busch

Abstract:

Being able to predict polymer properties and processing behavior based on the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, i.e. its melt flow behavior, is determined as a function of the previously determined polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscometry and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement of experimental and modelled results was found to be extraordinary, especially taking into consideration that the applied multi-scale modelling approach does not involve parameter fitting to the data. This validates the suggested approach and proves its universality at the same time. In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analyses for systematically varied process conditions are easily feasible. The developed multi-scale modelling approach finally gives the opportunity to predict and design LDPE processing behavior simply on the basis of process conditions such as feed streams and inlet temperatures and pressures.
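
A toy version of the stochastic microstructure step, assuming branch points are placed independently along the backbone at given short- and long-chain branching frequencies. In the actual hybrid Monte Carlo reconstruction these frequencies follow from the kinetic model and full chain connectivity is tracked; here they are free parameters.

```python
import random

def sample_branched_chain(n_monomers, lcb_per_1000c, scb_per_1000c, rng):
    """Toy Monte Carlo sampling of a single LDPE molecule's branch points.

    Each backbone carbon independently carries a long- or short-chain
    branch with probability set by the branching frequency per 1000
    backbone carbons -- a simplification of the paper's hybrid
    Monte Carlo reconstruction of the polymeric microstructure."""
    lcb_sites, scb_sites = [], []
    for carbon in range(n_monomers * 2):        # 2 backbone C per ethylene
        u = rng.random()
        if u < lcb_per_1000c / 1000.0:
            lcb_sites.append(carbon)
        elif u < (lcb_per_1000c + scb_per_1000c) / 1000.0:
            scb_sites.append(carbon)
    return lcb_sites, scb_sites

rng = random.Random(42)
lcb, scb = sample_branched_chain(5000, lcb_per_1000c=2.0, scb_per_1000c=20.0, rng=rng)
print(f"{len(lcb)} long and {len(scb)} short branches on a 5000-mer")
```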

Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology

Procedia PDF Downloads 123
727 A First-Principles Investigation of Magnesium-Hydrogen System: From Bulk to Nano

Authors: Paramita Banerjee, K. R. S. Chandrakumar, G. P. Das

Abstract:

Bulk MgH2 has drawn much attention for the purpose of hydrogen storage because of its high hydrogen storage capacity (~7.7 wt %) as well as its low cost and abundant availability. However, its practical usage has been hindered by its high hydrogen desorption enthalpy (~0.8 eV/H2 molecule), which results in an undesirable desorption temperature of 300 °C at 1 bar H2 pressure. To surmount the limitations of bulk MgH2 for the purpose of hydrogen storage, a detailed first-principles density functional theory (DFT) based study on the structure and stability of neutral (Mgm) and positively charged (Mgm+) Mg nanoclusters of different sizes (m = 2, 4, 8 and 12), as well as their interaction with molecular hydrogen (H2), is reported here. It has been found that, due to the absence of d-electrons in the Mg atoms, hydrogen remains in molecular form even after its interaction with the neutral and charged Mg nanoclusters. Interestingly, the H2 molecules do not enter the interstitial positions of the nanoclusters. Rather, they remain on the surface, ornamenting these nanoclusters and forming new structures with a gravimetric density higher than 15 wt %. Our observation is that the inclusion of Grimme’s DFT-D3 dispersion correction in this weakly interacting system has a significant effect on the binding of the H2 molecules to these nanoclusters. The dispersion-corrected interaction energy (IE) values (0.1-0.14 eV/H2 molecule) fall in the energy window that is ideal for hydrogen storage. These IE values are further verified using high-level coupled-cluster calculations with non-iterative triples corrections, i.e. CCSD(T), which is considered a highly accurate quantum chemical method, thereby confirming the accuracy of our dispersion-corrected DFT calculations. The significance of the polarization and dispersion energies in the binding of the H2 molecules is confirmed by performing an energy decomposition analysis (EDA). A total of 16, 24, 32 and 36 H2 molecules can be attached to the neutral and charged nanoclusters of size m = 2, 4, 8 and 12, respectively. Ab-initio molecular dynamics (AIMD) simulation shows that the outermost H2 molecules are desorbed at a rather low temperature, viz. 150 K (-123 °C), which is expected. However, complete dehydrogenation of these nanoclusters occurs at around 100 °C. Most importantly, the host nanoclusters remain stable up to ~500 K (227 °C). All these results on the adsorption and desorption of molecular hydrogen with neutral and charged Mg nanocluster systems indicate the possibility of reducing the dehydrogenation temperature of bulk MgH2 by designing new Mg-based nanomaterials that adsorb molecular hydrogen via this weak Mg-H2 interaction, rather than the strong Mg-H bonding. Notwithstanding the fact that, in practical applications, these interactions will be further complicated by the effect of substrates as well as interactions with other clusters, the present study has implications for our fundamental understanding of this problem.
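
A trivial sketch of the interaction-energy bookkeeping behind the quoted IE window, using the usual convention IE = [E(cluster) + n*E(H2) - E(cluster+nH2)] / n, so that positive values mean favorable physisorption. The total energies below are fictitious; only the 0.1-0.14 eV/H2 target range comes from the abstract.

```python
def interaction_energy_per_h2(e_complex, e_cluster, e_h2, n_h2):
    """Average interaction energy per adsorbed H2 molecule:
    IE = [E(cluster) + n*E(H2) - E(cluster + n H2)] / n.
    Energies are whatever the electronic-structure code returns
    (here: fictitious values in eV)."""
    return (e_cluster + n_h2 * e_h2 - e_complex) / n_h2

# Fictitious total energies for a cluster decorated with 24 H2 molecules
ie = interaction_energy_per_h2(e_complex=-405.48, e_cluster=-375.0,
                               e_h2=-1.16, n_h2=24)
print(f"IE = {ie:.3f} eV/H2")   # ~0.11 eV/H2, inside the quoted window
```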

Keywords: density functional theory, DFT, hydrogen storage, molecular dynamics, molecular hydrogen adsorption, nanoclusters, physisorption

Procedia PDF Downloads 412