Search results for: urban green space network
805 Application of the Building Information Modeling Planning Approach to the Factory Planning
Authors: Peggy Näser
Abstract:
Factory planning is a systematic, objective-oriented process for planning a factory, structured into a sequence of phases, each of which depends on the preceding phase and makes use of particular methods and tools, extending from the setting of objectives to the start of production. The digital factory, on the other hand, is the generic term for a comprehensive network of digital models, methods, and tools – including simulation and 3D visualisation – integrated by a continuous data management system. Its aim is the holistic planning, evaluation and ongoing improvement of all the main structures, processes and resources of the real factory in conjunction with the product. Digital factory planning has already become established in factory planning. The application of Building Information Modeling (BIM), by contrast, has not yet been established in factory planning; it has been used predominantly in the planning of public buildings. Furthermore, this concept is limited to the planning of the buildings and does not include the planning of the factory equipment (machines, technical equipment) and its interfaces to the building. BIM is a cooperative method of working in which the information and data relevant to a building's lifecycle are consistently recorded, managed and exchanged in transparent communication between the involved parties on the basis of digital models of the building. Both approaches, the planning approach of Building Information Modeling and the methodical approach of the digital factory, are based on the use of a comprehensive data model. It is therefore necessary to examine how the BIM approach can be extended in the context of factory planning in such a way that equipment planning, as well as building planning, can be integrated in a common digital model. For this, a number of different perspectives have to be investigated: the equipment perspective, including the tools used to implement a comprehensive digital planning process; the communication perspective between the planners of different fields; the legal perspective, concerning legal certainty in each country; and the quality perspective, which defines the quality criteria against which the planning is evaluated. The individual perspectives are examined and illustrated in the article. An approach model for the integration of factory planning into the BIM approach is developed, in particular for the integrated planning of equipment and buildings and for continuous digital planning. For this purpose, the individual factory planning phases are detailed with respect to the integration of the BIM approach. A comprehensive software concept is presented for the tools involved, and the prerequisites required for this integrated planning are described. With the help of the newly developed approach, better coordination between equipment and buildings is to be achieved, the continuity of digital factory planning and the data quality are improved, and expensive errors are avoided during implementation.
Keywords: building information modeling, digital factory, digital planning, factory planning
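As an illustration of the kind of common digital model the abstract argues for, the sketch below queries both building elements and factory equipment from a single IFC file. It is a minimal example assuming the open-source ifcopenshell library and a hypothetical file factory.ifc in which equipment is modeled as IfcBuildingElementProxy entities; it is not the authors' software concept.

```python
import ifcopenshell  # open-source IFC toolkit

# Hypothetical combined model of building and equipment (not from the paper)
model = ifcopenshell.open("factory.ifc")

# Building-side elements
walls = model.by_type("IfcWall")
slabs = model.by_type("IfcSlab")

# Equipment is assumed here to be modeled as proxies; a richer model
# could use IfcDistributionElement or custom property sets instead
equipment = model.by_type("IfcBuildingElementProxy")

print(f"{len(walls)} walls, {len(slabs)} slabs, {len(equipment)} equipment items")
for item in equipment:
    # GlobalId is the shared key that building and equipment planners
    # can use to exchange data about the same object
    print(item.GlobalId, item.Name)
```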
Procedia PDF Downloads 264
804 A Double-Blind, Randomized, Controlled Trial on N-Acetylcysteine for the Prevention of Acute Kidney Injury in Patients Undergoing Allogeneic Hematopoietic Stem Cell Transplantation
Authors: Sara Ataei, Molouk Hadjibabaie, Amirhossein Moslehi, Maryam Taghizadeh-Ghehi, Asieh Ashouri, Elham Amini, Kheirollah Gholami, Alireza Hayatshahi, Mohammad Vaezi, Ardeshir Ghavamzadeh
Abstract:
Acute kidney injury (AKI) is one of the complications of hematopoietic stem cell transplantation and is associated with increased mortality. N-acetylcysteine (NAC) is a thiol compound with antioxidant and vasodilatory properties that has been investigated for the prevention of AKI in several clinical settings. In the present study, we evaluated the effects of intravenous NAC on the prevention of AKI in allogeneic hematopoietic stem cell transplantation patients. A double-blind, randomized, placebo-controlled trial was conducted, and 80 patients were recruited to receive 100 mg/kg/day NAC or placebo as an intermittent intravenous infusion from day -6 to day +15. AKI, determined on the basis of the Risk-Injury-Failure-Loss-Endstage renal disease (RIFLE) and AKI Network criteria, was the primary outcome. We assessed urine neutrophil gelatinase-associated lipocalin (uNGAL) on days -6, -3, +3, +9, and +15 as the secondary outcome. Moreover, transplant-related outcomes and NAC adverse reactions were evaluated during the study period. Statistical analysis was performed using appropriate parametric and non-parametric methods, including Kaplan–Meier analysis for AKI and generalized estimating equations for uNGAL. At the end of the trial, data from 72 patients were analyzed (NAC: 33 patients; placebo: 39 patients). Baseline characteristics did not differ between the groups. AKI was observed in 18% of NAC recipients and 15% of placebo group patients, and the occurrence pattern was not significantly different (p = 0.73). Moreover, no significant difference was observed between groups for uNGAL measures (p = 0.10). Transplant-related outcomes were similar for both groups, and all patients had successful engraftment. Three patients did not tolerate NAC because of abdominal pain, shortness of breath, and rash with pruritus, and were dropped from the intervention group before transplantation. However, the frequency of adverse reactions was not significantly different between groups. In conclusion, our findings could not show any clinical benefit from high-dose NAC, particularly for AKI prevention, in allogeneic hematopoietic stem cell transplantation patients.
Keywords: acute kidney injury, N-acetylcysteine, hematopoietic stem cell transplantation, urine neutrophil gelatinase-associated lipocalin, randomized controlled trial
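To illustrate the time-to-event comparison the abstract describes, here is a minimal sketch of a Kaplan–Meier analysis with a log-rank test using the Python lifelines package. The arrays are hypothetical stand-ins for the trial data, which are not published in the abstract.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up times (days) and AKI event indicators (1 = AKI)
t_nac = np.array([21, 15, 9, 21, 21, 18, 21, 12])
e_nac = np.array([0, 1, 1, 0, 0, 1, 0, 1])
t_plc = np.array([21, 21, 14, 21, 10, 21, 21, 16])
e_plc = np.array([0, 0, 1, 0, 1, 0, 0, 1])

kmf = KaplanMeierFitter()
kmf.fit(t_nac, event_observed=e_nac, label="NAC")
print(kmf.survival_function_)

# Compare AKI occurrence patterns between arms, as in the trial
res = logrank_test(t_nac, t_plc, event_observed_A=e_nac, event_observed_B=e_plc)
print("log-rank p-value:", res.p_value)
```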
Procedia PDF Downloads 433
803 On the Other Side of Shining Mercury: In Silico Prediction of Cold Stabilizing Mutations in Serine Endopeptidase from Bacillus lentus
Authors: Debamitra Chakravorty, Pratap K. Parida
Abstract:
Cold-adapted proteases enhance wash performance in low-temperature laundry, resulting in reduced energy consumption and wear of textiles, and are also used in the dehairing process in leather industries. A possible drawback of cold-adapted proteases, however, is their instability at higher temperatures. Therefore, proteases with broad temperature stability are required. Because wild-type cold-adapted proteases are unstable at higher temperatures and thus have short shelf lives, previous attempts were made to engineer cold-adapted proteases by directed evolution and random mutagenesis. The drawback of these approaches is that the time, capital, and labour required to obtain such variants are very demanding. Therefore, rational engineering for cold stability without compromising an enzyme's optimum pH and temperature for activity is the current requirement. In this work, mutations were rationally designed with the aid of a high-throughput computational methodology of network analysis, evolutionary conservation scores, and molecular dynamics simulations for Savinase from Bacillus lentus, with the intention of rendering the mutants cold stable without affecting their temperature and pH optima for activity. Further, an attempt was made to rationally incorporate a mutation into the most stable mutant obtained by this method to introduce oxidative stability; such enzymes are desired in detergents with bleaching agents. In silico analysis, performing 300 ns molecular dynamics simulations at 5 different temperatures, revealed that three mutants had better cold stability than the wild-type Savinase from Bacillus lentus. Conclusively, this work shows that cold adaptation without losing optimum temperature and pH stability, with additional stability against oxidative damage, can be rationally designed by in silico enzyme engineering. The key findings of this work were, first, that the in silico data for H5 (cold-stable Savinase), used as a control in this work, corroborated its reported wet-lab temperature stability data; secondly, three cold-stable mutants of Savinase from Bacillus lentus were rationally identified; lastly, a mutation that stabilizes Savinase against oxidative damage was additionally identified.
Keywords: cold stability, molecular dynamics simulations, protein engineering, rational design
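As a sketch of the kind of trajectory analysis used to compare mutant stability across temperatures, the snippet below computes backbone RMSD from a molecular dynamics trajectory with the Python MDAnalysis package. Topology and trajectory filenames are hypothetical; the paper's actual simulation setup and analysis pipeline are not specified in the abstract.

```python
import MDAnalysis as mda
from MDAnalysis.analysis import rms

# Hypothetical files: one mutant simulated at one of the five temperatures
u = mda.Universe("savinase_mutant.pdb", "mutant_278K.xtc")
ref = mda.Universe("savinase_mutant.pdb")  # starting structure as reference

# Backbone RMSD over the trajectory; low, flat RMSD suggests a stable fold
R = rms.RMSD(u, ref, select="backbone")
R.run()

for frame, time_ps, rmsd in R.results.rmsd:
    print(f"t = {time_ps:8.1f} ps  RMSD = {rmsd:5.2f} A")
```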
Procedia PDF Downloads 136
802 Effect of Pollutions on Mangrove Forests of Nayband National Marine Park
Authors: Esmaeil Kouhgardi, Elaheh Shakerdargah
Abstract:
The mangrove ecosystem is a complex of various inter-related elements in the land-sea interface zone which is linked with other natural systems of the coastal region such as corals, sea-grass, coastal fisheries and beach vegetation. The mangrove ecosystem consists of water, muddy soil, trees, shrubs, and their associated flora, fauna and microbes. It is a very productive ecosystem sustaining various forms of life. Its waters are nursery grounds for fish, crustaceans, and mollusks and also provide habitat for a wide range of aquatic life, while the land supports a rich and diverse flora and fauna; pollution, however, may affect these characteristics. Although Iran has the lowest share of Persian Gulf pollution among the eight littoral states, environmental experts are still deeply concerned about the serious consequences of the pollution in the oil-rich gulf. Prolongation of critical conditions in the Persian Gulf has endangered its aquatic ecosystem. Water purification equipment, refineries, wastewater emitted by onshore installations, especially petrochemical plants, urban sewage, population density and the extensive oil operations of Arab states are factors contaminating Persian Gulf waters. Population density has been the major cause of pollution and environmental degradation in the Persian Gulf. The Persian Gulf is a closed marine environment connected to open waters through only one waterway, and it usually takes between three and four years for the gulf's water to be completely replaced. Therefore, any pollution entering the water will remain there for a relatively long time. Presently, the high temperature and excessive salt level in the water have exposed marine creatures to extra threats, which means they have to survive very tough conditions. The natural environment of the Persian Gulf is very rich, with good fishing grounds, extensive coral reefs and abundant pearl oysters, but it has come increasingly under pressure due to heavy industrialization and, in particular, the repeated major oil spillages associated with the various recent wars fought in the region. Pollution may cause the mortality of mangrove forests through its effects on the roots, leaves and soil of the area. The study showed a high correlation between industrial pollution and mangrove forest health in the south of Iran; the increase of population, coupled with economic growth, has inevitably caused the use of mangrove lands for various purposes such as the construction of roads, ports and harbors, industries and urbanization.
Keywords: mangrove forest, pollution, Persian Gulf, population, environment
Procedia PDF Downloads 398
801 Geoinformation Technology of Agricultural Monitoring Using Multi-Temporal Satellite Imagery
Authors: Olena Kavats, Dmitry Khramov, Kateryna Sergieieva, Vladimir Vasyliev, Iurii Kavats
Abstract:
Geoinformation technologies for space agromonitoring support operative decision making in managing the agricultural sector of the economy. Existing technologies use satellite images in the optical range of the electromagnetic spectrum. Time series of optical images often contain gaps due to the presence of clouds and haze. A geoinformation technology is created that fills gaps in time series of optical images (Sentinel-2, Landsat-8, PROBA-V, MODIS) with radar survey data (Sentinel-1) and uses information about the agrometeorological conditions of the growing season for individual monitoring years. The technology performs crop classification and mapping for the spring-summer (winter and spring crops) and autumn-winter (winter crops) periods of vegetation, monitors the dynamics of seasonal changes in crop state, and forecasts crop yield. Crop classification is based on supervised classification algorithms and takes into account the peculiarities of crop growth at different vegetation stages (dates of sowing, emergence, active vegetation, and harvesting) and agricultural land state characteristics (row spacing, seedling density, etc.). A catalog of samples of the main agricultural crops (Ukraine) is created, and crop spectral signatures are calculated with the preliminary removal of row spacing, cloud cover, and cloud shadows in order to construct time series of crop growth characteristics. The obtained data are used in grain crop growth tracking and in the timely detection of deviations of growth trends from reference samples of a given crop for a selected date. Statistical models of crop yield forecasting are created in the form of linear and nonlinear relationships between crop yield indicators and crop state characteristics (temperature, precipitation, vegetation indices, etc.). Predicted values of grain crop yield are estimated with an accuracy of up to 95%. The developed technology was used for monitoring agricultural areas in a number of regions of Great Britain and Ukraine using the EOS Crop Monitoring Platform (https://crop-monitoring.eos.com). The obtained results allow us to conclude that the joint use of Sentinel-1 and Sentinel-2 images improves the separation of winter crops (rapeseed, wheat, barley) in the early stages of vegetation (October-December). It also successfully separates soybean, corn, and sunflower sowing areas, which are quite similar in their spectral characteristics.
Keywords: geoinformation technology, crop classification, crop yield prediction, agricultural monitoring, EOS Crop Monitoring Platform
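A minimal sketch of the supervised-classification step, assuming per-field feature vectors built from multi-temporal optical indices and SAR backscatter; the abstract names the approach but not the classifier or feature layout, so both are illustrative here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical features: NDVI at several dates plus Sentinel-1 backscatter,
# one row per field sample; labels come from the crop sample catalog
rng = np.random.default_rng(42)
X = rng.random((600, 8))                      # 6 NDVI dates + 2 SAR bands
y = rng.choice(["wheat", "rapeseed", "barley"], size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```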
Procedia PDF Downloads 454
800 Detection and Classification Strabismus Using Convolutional Neural Network and Spatial Image Processing
Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson
Abstract:
Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using a Haar cascade, facial landmark estimation, face alignment, aligned face landmark detection, segmentation of the eye region, and detection of strabismus using a VGG-16 convolutional neural network. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using facial landmarks, the eye region is segmented from the aligned face and fed into the VGG-16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies the type of strabismus (exotropia, esotropia, and vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of pupil center coordinates using a Mask R-CNN deep neural network. Then, the distance between the pupil coordinates and eye landmarks is calculated, along with the angle that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. This model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced a TPR of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviations, respectively, with an FPR of 5.26%, 5.55%, and 0%, respectively. The addition of one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes).
Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation
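A minimal sketch of two pieces of the pipeline: Haar-cascade face detection and the distance/angle computation between a pupil center and an eye landmark. OpenCV's bundled cascade is real; the pupil and landmark coordinates here are hypothetical placeholders for the Mask R-CNN and landmark-estimation outputs, which the abstract does not detail.

```python
import cv2
import numpy as np

# Stage 1, step 1: face detection with OpenCV's bundled Haar cascade
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
img = cv2.imread("child.jpg")  # hypothetical input photograph
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print("faces found:", len(faces))

# Stage 2 geometry: distance and angle between pupil center and eye corner
pupil = np.array([312.0, 201.0])   # placeholder for Mask R-CNN output
corner = np.array([290.0, 204.0])  # placeholder for a landmark point

d = np.linalg.norm(pupil - corner)
angle_h = np.degrees(np.arctan2(pupil[1] - corner[1], pupil[0] - corner[0]))
print(f"distance = {d:.1f} px, angle vs horizontal = {angle_h:.1f} deg")
```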
Procedia PDF Downloads 92
799 The 2017 Summer Campaign for Night Sky Brightness Measurements on the Tuscan Coast
Authors: Andrea Giacomelli, Luciano Massetti, Elena Maggi, Antonio Raschi
Abstract:
The presentation will report the activities carried out during the summer of 2017 by a team composed of staff from a university department, a National Research Council institute, and an outreach NGO, collecting measurements of night sky brightness and other information on artificial lighting in order to characterize light pollution issues on portions of the Tuscan coast, in Central Italy. These activities combine measurements collected by the principal scientists, citizen science observations led by students, and outreach events targeting a broad audience. This campaign aggregates the efforts of three actors: the BuioMetria Partecipativa project, which started collecting light pollution data on a national scale in 2008 with an environmental engineering and free/open source GIS core team; the Institute of Biometeorology of the National Research Council, with ongoing studies on light and urban vegetation and a consolidated track record in environmental education and citizen science; and the Department of Biology of the University of Pisa, which started experiments to assess the impact of light pollution in coastal environments in 2015. While the core of the activities concerns in situ data, the campaign will also account for remote sensing data, thus considering heterogeneous data sources. The aim of the campaign is twofold: (1) to test actions of citizen and student engagement in monitoring sky brightness; (2) to collect night sky brightness data and test a protocol for application to studies on the ecological impact of light pollution, with a special focus on marine coastal ecosystems. The collaboration of an interdisciplinary team in the study of artificial lighting issues is not a common case in Italy, and the possibility of undertaking the campaign in Tuscany has the added value of operating in one of the territories where it is possible to observe both sites with extremely high lighting levels and areas with extremely low light pollution, especially in the southern part of the region. Combining environmental monitoring and communication actions, the campaign will contribute to the promotion of good-quality night skies as an important asset for the sustainability of coastal ecosystems, as well as to increased citizen awareness through star gazing, night photography and active participation in field campaign measurements.
Keywords: citizen science, light pollution, marine coastal biodiversity, environmental education
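Night sky brightness in such campaigns is commonly reported in magnitudes per square arcsecond (e.g., from Sky Quality Meters). The abstract does not state the instrument or units, but a conversion often used in the light pollution literature to turn that logarithmic scale into luminance is:

```latex
L\,[\mathrm{cd\,m^{-2}}] \approx 10.8\times10^{4}\cdot 10^{-0.4\,m_{\mathrm{SQM}}},
\qquad m_{\mathrm{SQM}} \text{ in } \mathrm{mag\,arcsec^{-2}}
```

so a dark rural sky at m ≈ 21.5 corresponds to roughly 2.7 × 10⁻⁴ cd/m², while a bright urban sky at m ≈ 18 is about 6.8 × 10⁻³ cd/m².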
Procedia PDF Downloads 172
798 Development of a 3D Model of Real Estate Properties in Fort Bonifacio, Taguig City, Philippines Using Geographic Information Systems
Authors: Lyka Selene Magnayi, Marcos Vinas, Roseanne Ramos
Abstract:
As the real estate industry continually grows in the Philippines, Geographic Information Systems (GIS) provide advantages in generating spatial databases for the efficient delivery of information and services. The real estate sector not only provides qualitative data about real estate properties but also utilizes various spatial aspects of these properties for different applications such as hazard mapping and assessment. In this study, a three-dimensional (3D) model and a spatial database of real estate properties in Fort Bonifacio, Taguig City are developed using GIS and SketchUp. Spatial datasets include political boundaries, buildings, the road network, a digital terrain model (DTM) derived from an Interferometric Synthetic Aperture Radar (IFSAR) image, Google Earth satellite imagery, and hazard maps. Multiple model layers were created based on property listings by a partner real estate company, including existing and future property buildings. Actual building dimensions, building facades, and building floorplans are incorporated in these 3D models for geovisualization. Hazard model layers are determined through spatial overlays, and different hazard scenarios are also presented in the models. Animated maps and walkthrough videos were created for company presentation and evaluation. Model evaluation was conducted through client surveys requiring scores for the appropriateness, information content, and design of the 3D models. Survey results show very satisfactory ratings, with the highest average evaluation score equivalent to 9.21 out of 10. The output maps and videos obtained passing rates based on the criteria and standards set by the intended users of the partner real estate company. The methodologies presented in this study were found useful and have remarkable advantages in the real estate industry. This work may be extended to automated mapping and the creation of online spatial databases for better storage of and access to real property listings, and to an interactive platform using web-based GIS.
Keywords: geovisualization, geographic information systems, GIS, real estate, spatial database, three-dimensional model
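The hazard layers in the study are derived through spatial overlays. A minimal sketch of that operation with the Python geopandas library is shown below; the shapefile names and the hazard attribute are hypothetical, as the study's actual data schema is not given.

```python
import geopandas as gpd

# Hypothetical inputs: building footprints and a flood-hazard polygon layer
buildings = gpd.read_file("fort_bonifacio_buildings.shp")
flood = gpd.read_file("flood_hazard.shp")  # e.g. column "hazard_lvl"

# Keep both layers in the same coordinate reference system before overlay
flood = flood.to_crs(buildings.crs)

# Intersection keeps only the building parts that fall inside hazard zones
exposed = gpd.overlay(buildings, flood, how="intersection")
print(exposed["hazard_lvl"].value_counts())
```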
Procedia PDF Downloads 157
797 Methods Used to Achieve Airtightness of 0.07 Ach@50Pa for an Industrial Building
Authors: G. Wimmers
Abstract:
The University of Northern British Columbia needed a new laboratory building for the Master of Engineering in Integrated Wood Design Program and its new Civil Engineering Program. Since the University is committed to reducing its environmental footprint, and because the Master of Engineering Program is actively involved in research on energy-efficient buildings, the decision was made to require the energy efficiency of the Passive House Standard in the Request for Proposals. The building is located in Prince George in Northern British Columbia, a city at the northern edge of climate zone 6 with average lows between -8 and -10.5 °C in the winter months. The footprint of the building is 30 m x 30 m with a height of about 10 m. The building consists of a large open space for the shop and laboratory, with a small portion of the floorplan being two floors, allowing for a mezzanine level with a few offices as well as mechanical and storage rooms. The total net floor area is 1042 m² and the building's gross volume 9686 m³. One key requirement of the Passive House Standard is an airtight envelope with an airtightness of < 0.6 ach@50Pa. In the past, we have seen that this requirement can be challenging to reach for industrial buildings. When testing for airtightness, it is important to test in both directions, pressurization and depressurization, since the airflow through all leakages of the building will, in reality, happen simultaneously in both directions. A specific detail or situation, such as overlapping but unsealed membranes, might be airtight in one direction due to the valve effect, but open up when tested in the opposite direction. In this specific project, the advantage was the overall very compact envelope and the good volume-to-envelope-area ratio. The building had to be very airtight, and the details for the window and door installations, all transitions from walls to roof and floor, the connections of the prefabricated wall panels, and all penetrations had to be carefully developed to allow for maximum airtightness. The biggest challenges were the specific components of this industrial building: the large bay door for semi-trucks and the dust extraction system for the wood processing machinery. The testing was carried out in accordance with EN 13829 (method A), as specified in the International Passive House Standard, and the volume calculation also followed the Passive House guideline, resulting in a net volume of 7383 m³, excluding all wall, floor and suspended-ceiling volumes. This paper will explore the details and strategies used to achieve an airtightness of 0.07 ach@50Pa, to the best of our knowledge the lowest value achieved in North America so far following the test protocol of the International Passive House Standard, and discuss the crucial steps throughout the project phases and the most challenging details.
Keywords: air changes, airtightness, envelope design, industrial building, passive house
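For context on what the reported result means in absolute terms, the air-change rate at 50 Pa is the measured leakage flow divided by the net air volume; using the figures given in the abstract:

```latex
n_{50} = \frac{\dot{V}_{50}}{V_{\mathrm{net}}}
\;\Rightarrow\;
\dot{V}_{50} = 0.07\,\mathrm{h^{-1}} \times 7383\,\mathrm{m^3}
\approx 517\,\mathrm{m^3\,h^{-1}}
```

compared with roughly 4430 m³/h that the 0.6 ach@50Pa Passive House limit would allow for the same volume.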
Procedia PDF Downloads 147
796 Using Lean-Six Sigma Philosophy to Enhance Revenues and Improve Customer Satisfaction: Case Studies from Leading Telecommunications Service Providers in India
Authors: Senthil Kumar Anantharaman
Abstract:
Providing telecommunications-based network services in a developing country like India, which has a population of 1.5 billion people, so that these services reach every individual, is one of the greatest challenges the country has been facing in its journey towards economic growth and development. With a growing number of telecommunications service providers in the country, a constant challenge these providers face is delivering not only a quality but also a delightful customer experience while simultaneously generating enhanced revenues and profits. Thus, the role played by process improvement methodologies like Six Sigma cannot be undermined; in telecom service provider operations specifically, it has delivered substantial benefits, comparable to its applications and advantages in other sectors such as manufacturing, financial services, information technology-based services and healthcare services. One of the key reasons this methodology has been able to reap great benefits in the telecommunications sector is that it has been combined with many competing process improvement techniques, such as Theory of Constraints, Lean and Kaizen, to give the maximum benefit to the service providers, thereby creating a winning combination of organized process improvement methods for operational excellence and, in turn, business excellence. This paper discusses some of the key projects and areas in the end-to-end 'Quote to Cash' process at three big Indian telecommunication companies that have benefited greatly from applying Six Sigma along with other process improvement techniques. While the telecommunication companies we have considered are primarily in India and run by both private operators and government-based setups, the methodology can be applied equally well in any other developing country around the world with a similar context. This study also examines the enhanced revenues that can arise out of appropriate opportunities in emerging market scenarios when Six Sigma, as a philosophy and methodology, is applied with vigour and robustness. Finally, the paper presents a winning framework combining the Six Sigma methodology with Kaizen, Lean and Theory of Constraints that will enhance both the top line and the bottom line while providing customers a delightful experience.
Keywords: emerging markets, lean, process improvement, six sigma, telecommunications, theory of constraints
Procedia PDF Downloads 163
795 The Study of Adsorption of RuP onto TiO₂ (110) Surface Using Photoemission Deposited by Electrospray
Authors: Tahani Mashikhi
Abstract:
Countries worldwide rely on electric power as a critical factor for economic growth and progress. Renewable energy sources, often referred to as alternative energy sources, such as wind, solar energy, geothermal energy, biomass, and hydropower, have garnered significant interest in response to the rising consumption of fossil fuels. Dye-sensitized solar cells (DSSCs) are a highly promising alternative for energy production, as they possess numerous advantages compared to traditional silicon solar cells and thin-film solar cells. These include their low cost, high flexibility, straightforward preparation methodology, ease of production, low toxicity, different colors, semi-transparent quality, and high power conversion efficiency. A solar cell, also known as a photovoltaic cell, is a device that converts the energy of sunlight into electrical energy through the photovoltaic effect. The Gratzel cell was the initial dye-sensitized solar cell, made from colloidal titanium dioxide. The operational mechanism of DSSCs relies on several key elements: a layer composed of wide-band-gap semiconducting oxide materials (e.g., titanium dioxide [TiO₂]); a photosensitizer or dye that absorbs sunlight to inject electrons into the conduction band; an electrolyte that uses the triiodide/iodide redox pair (I⁻/I₃⁻) to regenerate dye molecules; and a counter electrode made of carbon or platinum that facilitates the movement of electrons across the circuit. Electrospray deposition permits the deposition of fragile, non-volatile molecules in a vacuum environment, including dye sensitizers, complex molecules, nanoparticles, and biomolecules. Surface science techniques, particularly X-ray photoelectron spectroscopy, are employed to examine dye-sensitized solar cells. This study investigates the possible application of electrospray deposition to build high-quality layers in situ in a vacuum. Two distinct categories of dyes can be employed as sensitizers in DSSCs: organometallic semiconductor sensitizers and purely organic dyes. Most organometallic dyes, including Ru533, RuC, and RuP, contain a ruthenium atom, which is a rare element; this ruthenium atom enhances the efficiency of DSSCs. These dyes are characterized by their high cost and typically appear as dark purple powders. On the other hand, organic dyes, such as SQ2, RK1, D5, SC4, and R6, exhibit reduced efficacy due to the lack of a ruthenium atom; these appear as green, red, orange, and blue powders. This study concentrates specifically on metal-organic dyes. Dye molecules were adsorbed onto the rutile TiO₂ (110) surface, deposited in situ under ultra-high vacuum conditions by combining an electrospray deposition method with X-ray photoelectron spectroscopy. The X-ray photoelectron spectroscopy (XPS) technique examines chemical bonds and interactions between the molecules and TiO₂ surfaces. The dyes were deposited for varying times, from 5 minutes to 40 minutes, to achieve distinct coverages categorized as sub-monolayer, monolayer, few layers, or multilayer. Based on the O 1s photoelectron spectra, it can be observed that the monolayer establishes a strong chemical bond with the Ti atoms of the oxide substrate by deprotonating the carboxylic acid groups through 2M-bidentate bridging anchors. The C 1s and N 1s photoelectron spectra indicate that the molecule remains intact at the surface. This can be attributed to the presence of all functional groups and a ruthenium atom, with the binding energy of Ru 3d consistent with Ru²⁺.
Keywords: deposit, dye, electrospray, TiO₂, XPS
Procedia PDF Downloads 43
794 Diagenesis of the Permian Ecca Sandstones and Mudstones, in the Eastern Cape Province, South Africa: Implications for the Shale Gas Potential of the Karoo Basin
Authors: Temitope L. Baiyegunhi, Christopher Baiyegunhi, Kuiwu Liu, Oswald Gwavava
Abstract:
Diagenesis is the most important factor affecting reservoir properties. Although published data provide a vast amount of information on the geology, sedimentology and lithostratigraphy of the Ecca Group in the Karoo Basin of South Africa, little is known of the diagenesis of the potentially feasible shales and sandstones of the Ecca Group. This study aims to provide a general account of the diagenesis of the sandstones and mudstones of the Ecca Group. Twenty-five diagenetic textures and structures are identified and grouped into three regimes or stages: eogenesis, mesogenesis and telogenesis. Clay minerals are the most common cementing materials in the Ecca sandstones and mudstones. Smectite, kaolinite and illite are the major clay minerals that act as pore-lining rims and pore-filling cement. Most of the clay minerals and detrital grains were seriously attacked and replaced by calcite. Calcite precipitated locally in pore spaces and partly or completely replaced feldspar and quartz grains, mostly at their margins. Precipitation of cements, formation of pyrite and authigenic minerals, and minor lithification occurred during eogenesis. This regime was followed by mesogenesis, which brought about an increase in the tightness of grain packing, loss of pore spaces and thinning of beds due to the weight of overlying sediments, as well as selective dissolution of framework grains. Compaction, mineral overgrowths, mineral replacement, clay-mineral authigenesis, deformation and pressure solution structures occurred during mesogenesis. During telogenesis, the rocks were uplifted, weathered and unroofed by erosion, which resulted in additional grain fracturing, decementation and oxidation of iron-rich volcanic fragments and ferromagnesian minerals. The rocks of the Ecca Group were subjected to moderate to intense mechanical and chemical compaction during their progressive burial. Intergranular pores, matrix micropores, and secondary intragranular, dissolution and fracture pores are the observed pore types. The presence of fracture and dissolution pores tends to enhance reservoir quality. However, the isolated nature of the pores makes them unfavourable producers of hydrocarbons, which at best would require stimulation. Understanding the distribution of diagenetic processes in these rocks in space and time will allow the development of predictive models of their quality, which may contribute to reducing the risks involved in their exploration.
Keywords: diagenesis, reservoir quality, Ecca Group, Karoo Supergroup
Procedia PDF Downloads 147
793 Emerging VC Industry and the Important Role of Marketing Expectations in Project Selection: Evidence on Russian Data
Authors: I. Rodionov, A. Semenov, E. Gosteva, O. Sokolova
Abstract:
Venture capital is becoming an increasingly advanced and effective source of financing for innovation projects, which are connected with a high level of risk. In developed countries, it plays a key role in transforming innovation projects into successful businesses and creating the prosperity of the modern economy. In Russia, many of the necessary preconditions for the creation of an effective venture investment system exist: a network of public institutes for innovation financing operates, and there is a significant number of small and medium-sized enterprises capable of selling production with good market potential. However, the current system does not demonstrate the necessary level of efficiency in practice, which can be substantially explained by the absence of an accurate plan of action to form the national venture model and by the lack of experience of successful venture deals with profitable exits in the Russian economy. This paper studies the influence of various factors on venture industry development using the example of the IT sector in Russia. The choice of sector is based on the fact that this segment is the main driver of venture capital market growth in Russia, and the necessary data exist. The size of the second-round investment is used as the dependent variable. To analyse the influence of the previous round, the volume of first-round investment is used as a determinant. A dummy variable is also included in the regression to examine whether the participation of an investor with a high reputation and experience in the previous round influences the size of the next investment round. The regression analysis of short-term interrelations between the studied variables reveals the prevailing influence of the volume of first-round investment on the volume of second-round venture investment. According to the results, the participation of investors with a first-class reputation has only a small impact on the value of second-round investment. The expected positive dependence of second-round investment on the forecasted market growth rate at the time of the deal is also rejected. Thus, the most important determinant of the value of second-round investment is the value of first-round investment, meaning that the most competitive start-up teams in the Russian market are those that can attract more money at the start, while target market growth is not a factor of crucial importance.
Keywords: venture industry, venture investment, determinants of the venture sector development, IT-sector
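A minimal sketch of the regression design the abstract describes, using Python's statsmodels; the column names and the sample frame are hypothetical, since the paper's dataset is not published in the abstract.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical deal-level data: log investment sizes, a reputation dummy,
# and the forecasted market growth rate at the time of the deal
df = pd.DataFrame({
    "log_round2":    [2.1, 3.4, 2.8, 4.0, 3.1, 2.5],
    "log_round1":    [1.5, 2.9, 2.2, 3.6, 2.7, 1.9],
    "top_investor":  [0, 1, 0, 1, 1, 0],
    "market_growth": [0.12, 0.30, 0.18, 0.25, 0.22, 0.15],
})

model = smf.ols(
    "log_round2 ~ log_round1 + top_investor + market_growth", data=df
).fit()
print(model.summary())  # the coefficient on log_round1 carries the main effect
```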
Procedia PDF Downloads 352
792 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method
Authors: Jurriaan Gillissen
Abstract:
This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable numerical reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as is feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J, which measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0 using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence
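A schematic way to write the optimization the abstract describes (the exact norms, the Laplacian order k, and the step size α are assumptions here, not taken from the paper):

```latex
J(u_0) = \tfrac{1}{2}\left\lVert u(\cdot,1;u_0) - v_1 \right\rVert_2^2
        + \lambda \left\lVert \nabla^{2k} u_0 \right\rVert_2^2,
\qquad
u_0^{(n+1)} = u_0^{(n)} - \alpha\, \nabla_{u_0} J\!\left(u_0^{(n)}\right)
```

where u(·,1;u₀) is the forward NSE solution at t = 1, the second term is the high-order Laplacian regularizer that damps the gradients of u₀, and ∇_{u₀}J is evaluated by one backward sweep of the adjoint NSE per iteration.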
Procedia PDF Downloads 222
791 Computational Code for Solving the Navier-Stokes Equations on Unstructured Meshes Applied to the Leading Edge of the Brazilian Hypersonic Scramjet 14-X
Authors: Jayme R. T. Silva, Paulo G. P. Toro, Angelo Passaro, Giannino P. Camillo, Antonio C. Oliveira
Abstract:
An in-house C++ code has been developed at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics of the Institute of Advanced Studies (Brazil) to estimate the aerothermodynamic properties around the Hypersonic Vehicle Integrated to the Scramjet. In the future, this code will be applied to the design of the Brazilian Scramjet Technological Demonstrator 14-X B. The first step towards accomplishing this objective is to apply the in-house C++ code to the leading edge of a flat plate, simulating the leading edge of the 14-X Hypersonic Vehicle and making it possible to analyze the wave phenomena of the oblique shock and the boundary layer. The development of modern hypersonic space vehicles requires knowledge of the characteristics of hypersonic flows in the vicinity of the leading edge of lifting surfaces. The strong interaction between a shock wave and a boundary layer in a high supersonic Mach number 4 viscous flow, close to the leading edge of the plate and considering the no-slip condition, is numerically investigated; the small slip region is neglected. The study consists of solving the fluid flow equations on unstructured meshes, applying the SIMPLE algorithm within the Finite Volume Method. Unstructured meshes are generated by the in-house software 'Modeler', developed at the Virtual Engineering Laboratory of the Institute of Advanced Studies, initially for Finite Element problems and adapted in this work to the solution of the Navier-Stokes equations using a Finite Volume, SIMPLE pressure-correction scheme for all-speed flows. The in-house C++ code is based on the two-dimensional Navier-Stokes equations, considering non-steady flow, with no body forces, no volumetric heating, and no mass diffusion. Air is considered a calorically perfect gas, with a constant Prandtl number and Sutherland's law for the viscosity. Solutions of the flat plate problem for Mach number 4 include pressure, temperature, density and velocity profiles as well as 2-D contours. The boundary layer thickness, boundary conditions, and mesh configurations are also presented. The same problem has been solved with an academic license of the software Ansys Fluent and with another in-house C++ code, which solves the fluid flow equations on structured meshes applying the MacCormack Finite Difference Method, and the results will be compared.
Keywords: boundary-layer, scramjet, simple algorithm, shock wave
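For readers unfamiliar with the SIMPLE pressure-correction scheme named above, one iteration can be summarized as follows (a textbook-style sketch after Patankar, not the specific discretization of the in-house code):

```latex
\begin{aligned}
&\text{1. Solve momentum with guessed pressure } p^{*} \text{ for } u^{*}.\\
&\text{2. Solve the pressure correction } p' \text{ from continuity: }
\nabla\cdot\big(d\,\nabla p'\big) = \nabla\cdot u^{*}.\\
&\text{3. Correct: } u = u^{*} - d\,\nabla p', \qquad p = p^{*} + \alpha_{p}\, p'.\\
&\text{4. Set } p^{*}\leftarrow p \text{ and repeat until the mass and momentum residuals converge,}
\end{aligned}
```

where d collects the discretized momentum coefficients and α_p is an under-relaxation factor.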
Procedia PDF Downloads 487
790 Approach on Conceptual Design and Dimensional Synthesis of the Linear Delta Robot for Additive Manufacturing
Authors: Efrain Rodriguez, Cristhian Riano, Alberto Alvares
Abstract:
In recent years, robot manipulators with parallel architectures have been used in additive manufacturing processes – 3D printing. These robots have advantages, such as speed and lightness, that make them suitable for improving the efficiency and productivity of these processes. Consequently, interest in the development of parallel robots for additive manufacturing applications has increased. This article deals with the conceptual design and dimensional synthesis of the linear delta robot for additive manufacturing. Firstly, a methodology based on structured processes for the development of products through the phases of informational design, conceptual design and detailed design is adopted: a) In the informational design phase, the Mudge diagram and the QFD matrix are used to derive a set of technical requirements and to define the form, functions and features of the robot. b) In the conceptual design phase, the functional modeling of the system is performed through an IDEF0 diagram, and the solution principles for the requirements are formulated using a morphological matrix. This phase includes the description of the mechanical, electro-electronic and computational subsystems that constitute the general architecture of the robot. c) In the detailed design phase, a digital model of the robot is drawn in CAD software. A list of commercial and manufactured parts is detailed. Tolerances and adjustments are defined for some parts of the robot structure. The necessary manufacturing processes and tools are also listed, including milling, turning and 3D printing. Secondly, a dimensional synthesis method applied to the design of the linear delta robot is presented. One of the most important factors in the design of a parallel robot is the useful workspace, which strongly depends on the joint space, the dimensions of the mechanism bodies and the possible interferences between these bodies. The objective function is based on the verification of the kinematic model for a prescribed cylindrical workspace, considering geometric constraints that could possibly lead to singularities of the mechanism. The aim is to determine the minimum dimensional parameters of the mechanism bodies for the proposed workspace. A method based on genetic algorithms was used to solve this problem. The method uses a cloud of points with the cylindrical shape of the workspace and checks the kinematic model for each of the points within the cloud. The evolution of the population provides the optimal parameters for the design of the delta robot. The development process of the linear delta robot with optimal dimensions for additive manufacturing is presented. The dimensional synthesis enabled the design of the delta robot mechanism as a function of the prescribed workspace. Finally, the implementation of the robotic platform, developed based on a linear delta robot, in an additive manufacturing application using the Fused Deposition Modeling (FDM) technique is presented.
Keywords: additive manufacturing, delta parallel robot, dimensional synthesis, genetic algorithms
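A minimal sketch of the dimensional-synthesis idea: a simple evolutionary loop (a stand-in for the paper's genetic algorithm, whose operators and parameters are not given) searches for the smallest rail/arm dimensions for which a simplified linear-delta inverse kinematics reaches every point of a cylindrical workspace. All numbers and the kinematic simplifications are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Cylindrical target workspace: radius 150 mm, height 300 mm (illustrative)
n_pts = 300
r = 150 * np.sqrt(rng.random(n_pts))
th = 2 * np.pi * rng.random(n_pts)
pts = np.column_stack([r * np.cos(th), r * np.sin(th), 300 * rng.random(n_pts)])

TOWER_ANGLES = np.radians([90.0, 210.0, 330.0])  # three vertical rails

def feasible(params):
    """Simplified linear-delta inverse kinematics over the point cloud."""
    R, L, rail = params  # base radius, arm length, rail travel (mm)
    towers = np.column_stack([R * np.cos(TOWER_ANGLES), R * np.sin(TOWER_ANGLES)])
    # squared horizontal distance of every point to every tower: (n_pts, 3)
    h2 = ((pts[:, None, :2] - towers[None, :, :]) ** 2).sum(axis=-1)
    d2 = L ** 2 - h2
    if (d2 <= 0).any():            # arm cannot span the horizontal offset
        return False
    c = pts[:, 2:3] + np.sqrt(d2)  # required carriage height on each rail
    return bool(((c >= 0) & (c <= rail)).all())

def cost(params):
    # Infeasible designs get a large penalty; otherwise minimize total size
    return params.sum() if feasible(params) else 1e9

# Plain evolutionary loop: keep the 10 best, mutate them into 20 children
pop = rng.uniform([100, 200, 300], [400, 600, 800], size=(30, 3))
for _ in range(100):
    order = np.argsort([cost(ind) for ind in pop])
    parents = pop[order[:10]]
    children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 10, (20, 3))
    pop = np.vstack([parents, np.clip(children, 50, 1000)])

best = min(pop, key=cost)
print("minimal feasible (base radius, arm length, rail travel):", best)
```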
Procedia PDF Downloads 189
789 Gains and Pitfalls of Participating on International Staff Exchange Programs: Individual Experiences of Academic Staff of Makerere University, Uganda
Authors: David Onen
Abstract:
Staff exchanges between different work organizations are a growing international phenomenon. In higher education in particular, it is not only staff who participate in international exchange programs, but their students as well. The practice of exchanging staff is premised on the belief that participating members of staff not only get the chance to network with colleagues from partner institutions but also gain the opportunity for knowledge sharing and skills development. As a result, exchanges benefit not only the participating individual staff but their institutions too. However, in practice, staff exchange programs everywhere are not all 'a bed of roses'. In fact, some of the programs seem to be laden with unapparent sources of trouble or danger for the participating staff. This paper reports on an on-going study investigating the experiences of members of academic staff of Makerere University in Uganda who have participated in international staff exchange programs. The study is aimed at documenting individual experiences in order to stimulate not only a debate, but practical ways of enriching the experiences of staff who take part in well-meant international staff exchange programs. The study employed an exploratory survey research design in which a self-administered questionnaire and an interview guide are being used to collect data from university academic staff respondents selected through snowball and purposive sampling techniques. Data have been analysed with appropriate descriptive and inferential statistics as well as content analysis techniques. Preliminary findings reveal that the majority of the respondents (95.5%) were, to a large extent, fully satisfied with their participation in the staff exchange programs. Many attested to gaining new experience (97%), networking (75%), gaining new knowledge (94%), and acquiring new skills (88%), and therefore to bringing something 'new' and 'beneficial' to their institutions. However, a reasonably large percentage (57%) of the participants also expressed dissatisfaction with the institutional support that Makerere University gave them during their participation in the exchange programs. Some respondents reported the 'unfriendly welcome' they received upon returning 'home' because colleagues detested how they were chosen to participate in such programs. The researcher thus concluded that international staff exchange programs are truly beneficial to both the participating staff and their institutions, though with pitfalls. The researcher recommends mutual and preferably equal engagement of the participating institutions in staff exchange programs if such programs are to benefit both the participating staff and institutions. Besides, exchange programs require clear terms of cooperation, including how staff are selected and facilitated and what is expected of the sending and host institutions as well as of the concerned staff.
Keywords: gains, exchange programs, higher education, pitfalls
Procedia PDF Downloads 343
788 Risk Assessment and Haloacetic Acids Exposure in Drinking Water in Tunja, Colombia
Authors: Bibiana Matilde Bernal Gómez, Manuel Salvador Rodríguez Susa, Mildred Fernanda Lemus Perez
Abstract:
Haloacetic acids have been identified in chlorinated drinking water and are classified as disinfection byproducts originating from the reaction between natural organic matter and/or bromide ions in water sources. These byproducts can also be generated through a variety of chemical and pharmaceutical processes. The term 'Total Haloacetic Acids' (THAAs) describes the cumulative concentration of dichloroacetic acid, trichloroacetic acid, monochloroacetic acid, monobromoacetic acid, and dibromoacetic acid in water samples, which is usually measured to evaluate water quality. The chronic presence of these acids in drinking water poses a cancer risk to humans. THAAs were detected for the first time in 15 municipalities of Boyacá in 2023. The aim is to describe the correlation between the levels of THAAs and digestive cancer in Tunja, a Colombian city with higher rates of digestive cancer, and to compare the risk across the 15 towns, taking into account factors such as water quality. A research project was conducted with the aim of comparing water sources based on the geographical features of each town, describing the disinfection process in the 15 municipalities, and exploring physical properties such as water temperature and pH. The project also involved a study of contact time based on habits documented through a survey, and a comparison of socioeconomic factors and lifestyle, in order to assess the personal risk of exposure. Data on the levels of THAAs were obtained after characterizing the water quality in urban sectors over eight months of 2022, based on the protocol described in the Stage 2 Disinfectants and Disinfection Byproducts Rule of the United States Environmental Protection Agency (USEPA, 2006), which takes into account the size of the population supplied. A cancer risk assessment was conducted to evaluate the likelihood of an individual developing cancer due to exposure to THAAs. The assessment considered exposure routes such as oral ingestion, dermal absorption, and inhalation. The chronic daily intake (CDI) for these exposure routes was calculated using specific equations, and the lifetime cancer risk (LCR) was determined by adding the cancer risks from the three exposure routes for each HAA. The risk assessment process involved four phases: exposure assessment, toxicity evaluation, data gathering and analysis, and risk definition and management. The results indicate a cumulatively higher risk of digestive cancer due to THAA exposure in drinking water.
Keywords: haloacetic acids, drinking water, water quality, cancer risk assessment
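The abstract does not reproduce the equations, but the standard USEPA-style forms for ingestion-route chronic daily intake and lifetime cancer risk (symbol meanings assumed here) are:

```latex
CDI_{\mathrm{oral}} = \frac{C_w \cdot IR \cdot EF \cdot ED}{BW \cdot AT},
\qquad
LCR = \sum_{r \,\in\, \{\mathrm{oral},\,\mathrm{dermal},\,\mathrm{inhalation}\}} CDI_r \cdot SF_r
```

where C_w is the THAA concentration in water (mg/L), IR the daily ingestion rate (L/day), EF the exposure frequency (days/year), ED the exposure duration (years), BW the body weight (kg), AT the averaging time (days), and SF_r the route-specific cancer slope factor ((mg/kg·day)⁻¹).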
Procedia PDF Downloads 56
787 Integrating Circular Economy Framework into Life Cycle Analysis: An Exploratory Study Applied to Geothermal Power Generation Technologies
Authors: Jingyi Li, Laurence Stamford, Alejandro Gallego-Schmid
Abstract:
Renewable electricity has become an indispensable contributor to achieving net zero by mid-century to tackle climate change. Unlike solar, wind, or hydro, geothermal electricity production was stagnant in its development for decades. However, with the significant breakthroughs made in recent years, especially the implementation of enhanced geothermal systems (EGS) in various regions globally, geothermal electricity could play a pivotal role in alleviating greenhouse gas emissions. Life cycle assessment has been applied to analyze specific geothermal power generation technologies and has proposed ways to optimize their environmental performance; for instance, selecting a region with a high heat gradient enables a higher flow rate from the production well and extends the technical lifespan. Although such process-level improvements have been made, the competitiveness of geothermal power generation technologies has not yet been explicitly demonstrated on a broader horizon. Therefore, this review-based study integrates a circular economy framework into life cycle assessment, clarifying the underlying added values for geothermal power plants to complete the sustainability profile. The derived results provide an enlarged platform on which to discuss geothermal power generation technologies: (i) recover the heat and electricity from the process to reduce fossil fuel requirements; (ii) recycle the construction materials, such as copper, steel, and aluminum, for future projects; (iii) extract lithium ions from geothermal brine so that the geothermal reservoir becomes a potential supplier to the lithium battery industry; (iv) repurpose abandoned oil and gas wells to build geothermal power plants; (v) integrate geothermal energy with other available renewable energies (e.g., solar and wind) to provide heat and electricity as a hybrid system in different weather conditions; (vi) rethink the fluids used in the stimulation process (EGS only), replacing water with CO₂ to achieve negative emissions from the system. These results provide a new perspective for researchers, investors, and policymakers to rethink the role of geothermal in the energy supply network.
Keywords: climate, renewable energy, R strategies, sustainability
Procedia PDF Downloads 135
786 RNA-Seq Analysis of the Wild Barley (H. spontaneum) Leaf Transcriptome under Salt Stress
Authors: Ahmed Bahieldin, Ahmed Atef, Jamal S. M. Sabir, Nour O. Gadalla, Sherif Edris, Ahmed M. Alzohairy, Nezar A. Radhwan, Mohammed N. Baeshen, Ahmed M. Ramadan, Hala F. Eissa, Sabah M. Hassan, Nabih A. Baeshen, Osama Abuzinadah, Magdy A. Al-Kordy, Fotouh M. El-Domyati, Robert K. Jansen
Abstract:
Wild salt-tolerant barley (Hordeum spontaneum) is the ancestor of cultivated barley (Hordeum vulgare or H. vulgare). Although the cultivated barley genome is well studied, little is known about the genome structure and function of its wild ancestor. In the present study, RNA-Seq analysis was performed on young leaves of wild barley treated with salt (500 mM NaCl) at four different time intervals. Transcriptome sequencing yielded 103 to 115 million reads for all replicates of each treatment, corresponding to over 10 billion nucleotides per sample. Of the total reads, between 74.8 and 80.3% could be mapped, and 77.4 to 81.7% of the transcripts were found in the H. vulgare unigene database (unigene-mapped). The unmapped wild barley reads for all treatments and replicates were assembled de novo, and the resulting contigs were used as a new reference genome. This resulted in 94.3 to 95.3% of the unmapped reads mapping to the new reference. The number of differentially expressed transcripts was 9277, 3861 of which were unigene-mapped. The annotated unigene- and de novo-mapped transcripts (5100) were utilized to generate expression clusters across the time course of salt stress treatment. Two-dimensional hierarchical clustering classified the differential expression profiles into nine expression clusters, four of which were selected for further analysis. Differentially expressed transcripts were assigned to the main functional categories; the most important groups were 'response to external stimulus' and 'electron-carrier activity'. Highly expressed transcripts are involved in several biological processes, including electron transport and exchanger mechanisms, flavonoid biosynthesis, reactive oxygen species (ROS) scavenging, ethylene production, signaling networks and protein refolding. The comparisons demonstrated that mRNA-Seq is an efficient method for the analysis of differentially expressed genes and biological processes under salt stress.
Keywords: electron transport, flavonoid biosynthesis, reactive oxygen species, RNA-Seq
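A minimal sketch of the hierarchical-clustering step with SciPy, using a random matrix as a stand-in for the 5100-transcript expression profiles (the study's distance metric and linkage method are not stated in the abstract, so Ward linkage is an assumption):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Stand-in expression matrix: 5100 transcripts x 4 salt-stress time points
rng = np.random.default_rng(1)
expr = rng.normal(size=(5100, 4))  # e.g. log2 fold-changes vs. control

Z = linkage(expr, method="ward")                    # hierarchical clustering
clusters = fcluster(Z, t=9, criterion="maxclust")   # cut into nine clusters

for k in range(1, 10):
    print(f"cluster {k}: {np.sum(clusters == k)} transcripts")
```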
785 Applying Image Schemas and Cognitive Metaphors to Teaching/Learning Italian Preposition a in Foreign/Second Language Context
Authors: Andrea Fiorista
Abstract:
The learning of prepositions is quite a problematic aspect of foreign language instruction, and Italian is certainly no exception. In their prototypical function, prepositions express schematic relations between two entities in a highly abstract, typically image-schematic way. In other words, prepositions encode concepts such as directionality, the collocation of objects in space and time and, in Cognitive Linguistics’ terms, the position of a trajector with respect to a landmark. Learners with different native languages may conceptualize these relations differently, implying that they must recategorize (or create new categories) to fit the target language. However, most current Italian Foreign/Second Language handbooks and didactic grammars do not help learners carry out this task, as they tend to provide partial and idiosyncratic descriptions, leaving learners to memorize them, most of the time without success. In their prototypical meaning, prepositions specify precise topographical positions in the physical environment, which become less and less accurate as they radiate out from what might be termed a concrete prototype. Accordingly, the present study aims to elaborate a cognitive and conceptually well-grounded analysis of some extensive uses of the Italian preposition a, in order to propose effective pedagogical solutions for the teaching/learning process. Image schemas, cognitive metaphors and embodiment are efficient cognitive tools for such a task. Indeed, while learning the merely spatial use of the preposition a (e.g. Sono a Roma = I am in Rome; vado a Roma = I am going to Rome, …) is quite straightforward, matters are more complex when a appears in constructions such as verb of motion + a + infinitive (e.g. Vado a studiare = I am going to study), the inchoative periphrasis (e.g. Tra poco mi metto a leggere = In a moment I will read), and the causative construction (e.g. Lui mi ha mandato a lavorare = He sent me to work). The study reports data from a Focus on Form teaching intervention, in which a basic cognitive schema is used to help teachers explain, and students understand, the extensive uses of a. The educational material employed translates Cognitive Linguistics’ theoretical assumptions, such as image schemas and cognitive metaphors, into simple images or proto-scenes easily comprehensible to learners. Illustrative material, indeed, is supposed to make metalinguistic content more accessible. Moreover, the concept of embodiment is applied pedagogically through activities involving motion and learners’ bodily involvement. It is expected that replacing rote learning with a methodology that gives grammatical elements a proper meaning will make the learning process more effective in both the short and long term.
Keywords: cognitive approaches to language teaching, image schemas, embodiment, Italian as FL/SL
Procedia PDF Downloads 86
784 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization
Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller
Abstract:
The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures for this include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for the realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires restricting the solution space to discrete choices of modernization measures, such as the sizing of heating systems. After the operation of different energy systems is calculated in simulation models in a first stage, yielding the resulting final energy demands, these results serve as input for a second-stage MILP optimization, in which the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures owing to the efficiency of MILP solvers, but it necessitates simplifying the operation of the building energy system. Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio so as to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from those of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization
Procedia PDF Downloads 32
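To make the single-stage idea concrete, here is a minimal sketch of a pathway MILP in Python with PuLP: binary variables choose, per building and year, which modernization measure to install, subject to a yearly budget, minimizing summed portfolio emissions over the horizon. All buildings, measures, costs, and savings are invented placeholders, and the model is far simpler than the formulation the authors compare.

```python
# Minimal pathway-MILP sketch (not the authors' model): choose modernization
# measures per building and year under a yearly budget, minimizing the
# portfolio's summed emissions over the planning horizon. All data invented.
import pulp

buildings = ["B1", "B2", "B3"]
measures = {"insulation": (40_000, 8.0), "heat_pump": (60_000, 15.0)}  # cost [EUR], saving [tCO2/a]
years = [2025, 2026, 2027]
budget_per_year = 70_000
baseline = {b: 30.0 for b in buildings}  # baseline emissions [tCO2/a]

prob = pulp.LpProblem("modernization_pathway", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (buildings, list(measures), years), cat="Binary")

# Objective: yearly emissions = baseline minus savings of measures already installed.
prob += pulp.lpSum(
    baseline[b]
    - pulp.lpSum(measures[m][1] * x[b][m][y2] for m in measures for y2 in years if y2 <= y)
    for b in buildings
    for y in years
)

# Each measure is installed at most once per building over the horizon.
for b in buildings:
    for m in measures:
        prob += pulp.lpSum(x[b][m][y] for y in years) <= 1

# Yearly investment budget.
for y in years:
    prob += (
        pulp.lpSum(measures[m][0] * x[b][m][y] for b in buildings for m in measures)
        <= budget_per_year
    )

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for b in buildings:
    for m in measures:
        for y in years:
            if x[b][m][y].value() == 1:
                print(f"{b}: install {m} in {y}")
```

A real pathway model would add heat-generator sizing, envelope variants, and emission factors per energy carrier, but the binary-choice-under-budget core stays the same.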
783 Listening Children Through Storytelling
Authors: Catarina Cruz, Ana Breda
Abstract:
In the early years, before children enter elementary school, educators stimulate them, through rich and attractive contexts, to explore and develop skills in different domains, from the socio-emotional to the cognitive. Many of these contexts evoke real or imaginary situations, familiar or not, through resources or pedagogical practices that incite children's curiosity, questioning, expression of ideas or emotions, social interaction, among others. Later, when children enter elementary school, their activity becomes more focused on developing skills in the cognitive domain, namely acquiring learning in different subject areas, such as Mathematics, Natural Sciences and History. That is, to ensure that children develop the standardized learning recommended in the guiding curriculum documents, they spend much of their time applying formulas, memorizing information, following instructions, and so on, leaving little time to listen to children and to learn about their interests and likes, as well as their perspectives on and questions about the surrounding world. In elementary school, especially in the 1st Cycle, children are naturally curious; however, this curiosity is sometimes subtly conditioned by adults. Curious children learn more, since they have an intrinsic desire to know more, especially about what is unknown. When children think about subjects or themes that interest them or arouse their curiosity, they attribute more meaning to this learning and retain it for longer. It is therefore important to address subjects in the classroom that captivate children's attention, trigger their curiosity, and allow their ideas to be heard. There are several resources, strategies and pedagogical practices for awakening children's curiosity, exploring their knowledge, understanding their perspectives and ways of thinking, getting to know a little more about their personality and providing space for dialogue. Storytelling, with the exploration and interpretation of its narrative, is one of these pedagogical practices. Children's literature, about real or imaginary subjects, stimulates children's insights grounded in their experiences, emotions, learning and personality, and creates opportunities for children to express their feelings and thoughts freely. This work focuses on a session developed with children in the 3rd year of schooling at a Portuguese 1st Cycle Basic School, in which the story "From the Outside In and From the Inside Out" was presented. The story's presentation was mainly centred on the children's activity: they read excerpts and interpreted/explored them through a dialogue led by one of the authors. The study presented here intends to show an example of how the exploration of a children's story can trigger ideas, thoughts, emotions or attitudes in children in the 3rd year of elementary school. To answer the research question, this work aimed to: identify ideas, thoughts, emotions or attitudes that emerged from the exploration of the story; and analyse aspects of the story and of the orchestration/conduct of the dialogue with and between children that facilitated or inhibited the emergence of these ideas, thoughts, emotions or attitudes.
Keywords: storytelling, children’s perspectives, soft skills, non-formal learning contexts, orchestration
Procedia PDF Downloads 21
782 A Comparative Study of Environmental, Social and Economic Cross-Border Cooperation in Post-Conflict Environments: The Israel-Jordan Border
Authors: Tamar Arieli
Abstract:
Cross-border cooperation has long been hailed as a means of stabilizing and normalizing relations between former enemies. Cooperation in problem-solving and in realizing local interests in post-conflict environments can indeed serve as a basis for developing dialogue and meaningful relations between neighbors across borders. Hence the potential for formerly sealed borders to serve as a basis for generating local and national perceptions of interdependence and as a buffer against the resumption of conflict. Central questions for policy-makers and third parties are how to facilitate cross-border cooperation and which areas of cooperation best serve to normalize post-conflict border regions. The Israel-Jordan border functions as a post-conflict border: it has been peaceful since the 1994 Israel-Jordan peace treaty, yet cross-border relations are defined by the highly securitized nature of the border region and the ongoing Arab-Israeli regional conflict. This case study is based on long-term qualitative research carried out in the border regions of both Israel and Jordan, which mapped and analyzed cross-border cooperation in a wide range of activities – social interactions sponsored by peace-facilitating NGOs, government-sponsored agricultural cooperation, municipally initiated emergency planning in continuous cross-border urban settings, private cross-border business ventures and various environmental cooperative initiatives. These cooperative initiatives are evaluated through multiple interviews carried out with initiators of and partners in cross-border cooperation, as well as through analysis of documentation, funding and media. The cooperative interactions are compared on the basis of levels of local and official cross-border awareness and involvement, as well as sustainability over time. This research identifies environmental cooperation as the most sustainable area of cross-border cooperation and as the most conducive to generating perceptions of regional interdependence. This is a variation on the ‘New Middle East’ vision of business-based cooperation leading to conflict amelioration and regional stability. Environmental cooperation serving the public good rather than personal profit enjoys social legitimization even in the face of the widespread anti-normalization sentiments common in the post-conflict environment. This insight is examined in light of philosophical and social aspects of the natural environment and its social perceptions. The research has theoretical implications for better understanding the dynamics of cooperation and conflict, as well as practical ramifications for practitioners in border region policy and management.
Keywords: borders, cooperation, post-conflict, security
Procedia PDF Downloads 314
781 An Ecological Approach to Understanding Student Absenteeism in a Suburban, Kansas School
Authors: Andrew Kipp
Abstract:
Student absenteeism is harmful to both the school and the absentee student. One approach to reducing student absenteeism is to target contextual factors within the students' learning environment. However, contemporary literature has not taken an ecological agency approach to understanding student absenteeism. Ecological agency is a theoretical framework that magnifies the interplay between the environment and the actions of people within it. To elaborate, a person's history and aspirations together with the environmental conditions provide potential outlets for, or restrictions on, their intended actions. The framework provides a unique perspective for understanding absentee students' decision-making through the affordances and constraints found in their learning environment. To that effect, the study was guided by the question, "Why do absentee students decide to engage in absenteeism in a suburban Kansas school?" A case study methodology was used to answer the research question. Four suburban Kansas high school absentee students in the 2020-2021 school year were selected for the study. The fall 2020 semester took place in a remote learning setting, and the spring 2021 semester in an in-person learning setting. The study captured their decision-making with respect to school attendance through semi-structured interviews, prolonged observations, drawings, and concept maps. The data was analyzed through thematic analysis. The findings revealed that peer socialization opportunities, methods of instruction, shifts in cultural beliefs due to COVID-19, manifestations of anxiety and the lack of space to escape it, social media bullying, and the inability to receive academic tutoring motivated the participants' daily decision to either attend or miss school. The findings provided a basis for improving several institutional and classroom practices. These practices included more student-led and less teacher-led instruction in both in-person and remote learning environments, promoting socialization through classroom collaboration and clubs based on emerging student interests, reducing instances of bullying through prosocial education, safe spaces for students to escape the classroom to manage their anxiety, and more opportunities for one-on-one tutoring to improve grades. The study provides an example of using the ecological agency approach to better understand the personal and environmental factors that lead to absenteeism. The study also informs educational policies and classroom practices to better promote student attendance. Further research should investigate other school contexts using the ecological agency theoretical framework to better understand the influence of the school environment on student absenteeism.
Keywords: student absenteeism, ecological agency, classroom practices, educational policy, student decision-making
Procedia PDF Downloads 141
780 An Analysis of LoRa Networks for Rainforest Monitoring
Authors: Rafael Castilho Carvalho, Edjair de Souza Mota
Abstract:
As the largest contributor to the biogeochemical functioning of the Earth system, the Amazon Rainforest has the greatest biodiversity on the planet, harboring about 15% of the world's flora. Its recognition and preservation are the focus of research that seeks to mitigate drastic changes, especially anthropic ones, which irreversibly affect this biome. Functional and low-cost monitoring alternatives to reduce these impacts are a priority, such as those based on Low Power Wide Area Network (LPWAN) technologies. Promising, reliable, secure and energy-efficient, LPWAN can connect thousands of IoT devices; in particular, LoRa is considered one of the most successful solutions for forest monitoring applications. Despite this, the forest environment, and the Amazon Rainforest in particular, is challenging for these technologies, requiring work to identify and validate their use in a real environment. To investigate the feasibility of deploying LPWAN for remote water quality monitoring of rivers in the Amazon region, a LoRa-based test bed consisting of a LoRa transmitter and a LoRa receiver was set up; both parts were implemented with Arduino and the LoRa chip SX1276. The experiment was carried out at the Federal University of Amazonas, which contains one of the largest urban forests in Brazil. There are several springs inside the forest, and the main goal is to collect water quality parameters and transmit the data through the forest in real time to the gateway at the university. In all, there are nine water quality parameters of interest. Even with a high collection frequency, the amount of information that must be sent to the gateway is small. However, for this application, the battery of the transmitter device is a concern, since in the real application the device must run without maintenance for long periods of time. With these constraints in mind, parameters such as Spreading Factor (SF) and Coding Rate (CR), different antenna heights, and distances were tuned to improve the connectivity quality, measured in terms of RSSI and loss rate. A handheld RF Explorer spectrum analyzer was used to obtain the RSSI values. Distances exceeding 200 m soon proved difficult for establishing communication due to the dense foliage and high humidity. The optimal SF-CR combinations were 8-5 and 9-5, showing the lowest packet loss rates, 5% and 17%, respectively, with a signal strength of approximately -120 dBm, these being the best settings for this study so far. Rain and climate changes imposed limitations on the equipment, and more tests are already being conducted. Subsequently, the range of the LoRa deployment must be extended using a mesh topology, especially because at least three different collection points in the same water body are required.
Keywords: IoT, LPWAN, LoRa, coverage, loss rate, forest
Procedia PDF Downloads 84
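Since SF and CR trade range against airtime, and airtime drives the transmitter's energy budget, it may help to see the standard LoRa time-on-air calculation from the Semtech SX127x datasheet applied to the two best-performing settings above. The 125 kHz bandwidth, 8-symbol preamble, and 36-byte payload are assumptions for illustration; the study does not state them.

```python
import math

def lora_airtime(payload_bytes, sf, cr_denominator, bw_hz=125_000,
                 preamble_symbols=8, explicit_header=True, crc=True):
    """Time on air (seconds) for one LoRa packet, per the SX127x datasheet.

    cr_denominator is the 5..8 in coding rates 4/5..4/8. Low data rate
    optimization is assumed on for SF11/SF12 at 125 kHz.
    """
    t_sym = (2 ** sf) / bw_hz
    de = 1 if (sf >= 11 and bw_hz == 125_000) else 0
    ih = 0 if explicit_header else 1
    cr = cr_denominator - 4
    n_payload = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 * crc - 20 * ih)
                  / (4 * (sf - 2 * de))) * (cr + 4),
        0,
    )
    t_preamble = (preamble_symbols + 4.25) * t_sym
    return t_preamble + n_payload * t_sym

# Nine water-quality readings at, say, 4 bytes each -> 36-byte payload (assumed).
for sf, cr in [(8, 5), (9, 5)]:
    print(f"SF{sf} CR4/{cr}: {lora_airtime(36, sf, cr) * 1000:.1f} ms on air")
```

Under these assumptions, SF9 nearly doubles the airtime of SF8 for the same payload, which is exactly the energy cost that the abstract's battery concern points at.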
779 Cumulative Pressure Hotspot Assessment in the Red Sea and Arabian Gulf
Authors: Schröde C., Rodriguez D., Sánchez A., Abdul Malak, Churchill J., Boksmati T., Alharbi, Alsulmi H., Maghrabi S., Mowalad, Mutwalli R., Abualnaja Y.
Abstract:
Formulating a strategy for sustainable development of the Kingdom of Saudi Arabia's coastal and marine environment is at the core of the "Marine and Coastal Protection Assessment Study for the Kingdom of Saudi Arabia Coastline (MCEP)", which was set up in the context of the Saudi Arabian government's Vision 2030 and aimed at providing a first comprehensive ‘Status Quo Assessment’ of the Kingdom's marine environment to inform a sustainable development strategy and serve as a baseline for future monitoring activities. This baseline assessment relied on scientific evidence of the drivers, the pressures and their impact on the environments of the Red Sea and Arabian Gulf. A key element of the assessment was the cumulative pressure hotspot analysis developed for the Kingdom's national waters in both seas, following the principles of the Driver-Pressure-State-Impact-Response (DPSIR) framework and using the cumulative pressure and impact assessment (CPIA) methodology. The ultimate goals of the analysis were to map and assess the main hotspots of environmental pressure and to identify priority areas for further field surveillance and for urgent management actions. The study identified maritime transport, fisheries, aquaculture, oil, gas, energy, coastal industry, coastal and maritime tourism, and urban development as the main drivers of pollution in Saudi Arabian marine waters. For each of these drivers, pressure indicators were defined to spatially assess the potential influence of the driver on the coastal and marine environment. The assessment identified 90 hotspot locations; grouped spatially, this list could be reduced to 10 hotspot areas, two in the Arabian Gulf and eight in the Red Sea. The hotspot mapping revealed clear spatial patterns of drivers, pressures and hotspots within the marine environment of the waters under the Kingdom's maritime jurisdiction in the Red Sea and Arabian Gulf. The cascading assessment approach based on the DPSIR framework ensured that the root causes of the hotspot patterns, i.e., the human activities and other drivers, could be identified. The adapted CPIA methodology allowed the available data to be combined into a consistent spatial assessment of cumulative pressure, and the most critical hotspots to be identified by determining the overlap of cumulative pressure with areas of sensitive biodiversity. Further improvements are expected from enhancing the data sources for drivers and pressure indicators, fine-tuning the decay factors and distances of the pressure indicators, and including trans-boundary pressures across the regional seas.
Keywords: Arabian Gulf, DPSIR, hotspot, red sea
Procedia PDF Downloads 138
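For readers unfamiliar with CPIA-style scoring, the sketch below shows the core cumulative-impact computation on a toy grid: each cell's score sums normalized pressure layers weighted by habitat presence and a pressure-habitat vulnerability matrix, and the top percentile of cells is flagged as hotspots. The grids, layer choices, and weights are invented placeholders, not the study's data or exact method.

```python
# Minimal sketch of cumulative pressure and impact scoring on a grid,
# in the spirit of the CPIA methodology the study adapts:
#   I[cell] = sum_i sum_j  D_i[cell] * e_j[cell] * mu[i, j]
# where D_i are normalized pressure layers, e_j mark habitat presence and
# mu[i, j] weights the vulnerability of habitat j to pressure i.
# All layers and weights below are invented placeholders.
import numpy as np

rng = np.random.default_rng(1)
ny, nx = 50, 80                      # toy grid over the study area
n_pressures, n_habitats = 3, 2       # e.g. shipping, fisheries, urban; coral, seagrass

D = rng.random((n_pressures, ny, nx))        # normalized pressure intensity in [0, 1]
e = rng.random((n_habitats, ny, nx)) > 0.5   # habitat presence/absence
mu = np.array([[3.0, 1.0],                   # vulnerability weights mu[i, j]
               [2.0, 2.5],
               [1.5, 0.5]])

impact = np.einsum("iyx,jyx,ij->yx", D, e.astype(float), mu)

# Hotspots: cells in the top 5% of cumulative impact.
threshold = np.percentile(impact, 95)
hotspots = impact >= threshold
print(f"{hotspots.sum()} hotspot cells above the 95th percentile")
```

A real assessment would replace the random layers with normalized, distance-decayed pressure maps and expert-elicited vulnerability weights, as the abstract's mention of decay factors suggests.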
778 Thermodynamic Analyses of Information Dissipation along the Passive Dendritic Trees and Active Action Potential
Authors: Bahar Hazal Yalçınkaya, Bayram Yılmaz, Mustafa Özilgen
Abstract:
Brain information transmission in the neuronal network occurs in the form of electrical signals. The network transmits information between neurons, or between neurons and target cells, by moving charged particles in a voltage field; a fraction of the energy utilized in this process is dissipated via entropy generation. Exergy loss and entropy generation models demonstrate the inefficiencies of communication along the dendritic trees. In this study, neurons of four different animals were analyzed with a one-dimensional cable model with N=6 identical dendritic trees and symmetrical branching of order M=3. Each branch bifurcates symmetrically in accordance with the 3/2 power law into an infinitely long cylinder under the usual core conductor assumptions, where the membrane potential is conserved in the core conductor at all branching points. In the model, exergy loss and entropy generation rates are calculated for each branch of equivalent cylinders of electrotonic length (L) ranging from 0.1 to 1.5, for four different dendritic branches: the input branch (BI), the sister branch (BS) and two cousin branches (BC-1 and BC-2). Thermodynamic analysis with data from two different cat motoneuron studies shows that in both experiments nearly the same amount of exergy is lost while nearly the same amount of entropy is generated. The guinea pig vagal motoneuron loses twofold more exergy than the cat models, and the squid's exergy loss and entropy generation were nearly tenfold those of the guinea pig vagal motoneuron model. The thermodynamic analysis shows that the energy dissipated in the dendritic trees, the exergy loss and the entropy generation are directly proportional to the electrotonic length. Entropy generation and exergy loss vary not only between vertebrates and invertebrates but also within the same class. In addition, the Na+ ion load of a single action potential, the metabolic energy utilization and its thermodynamic aspects were evaluated for the squid giant axon and a mammalian motoneuron model. Energy is supplied to the neurons in the form of adenosine triphosphate (ATP). Exergy destruction and entropy generation upon ATP hydrolysis are calculated. ATP utilization, exergy destruction and entropy generation differed in each model, depending on the variations in ion transport along the channels.
Keywords: ATP utilization, entropy generation, exergy loss, neuronal information transmittance
Procedia PDF Downloads 393
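As a feel for the kind of accounting the last sentences describe, here is a back-of-envelope sketch of the single-spike energy cost: the Na+ load sets the ATP demand via the pump's 3:1 stoichiometry, and dissipating the Gibbs energy of ATP hydrolysis at body temperature generates entropy. Every number below is an illustrative assumption in the spirit of Attwell-Laughlin-style estimates, not a value from this study.

```python
# Back-of-envelope sketch of single-action-potential energy accounting:
# Na+ ions entering during one spike must be pumped out by the Na+/K+-ATPase
# (3 Na+ per ATP); the Gibbs energy of ATP hydrolysis is dissipated,
# generating entropy at body temperature (entropy = dissipated energy / T,
# a simplification assuming full dissipation at constant T).
AVOGADRO = 6.022e23          # 1/mol
NA_LOAD = 1.0e9              # assumed Na+ ions entering per action potential
NA_PER_ATP = 3               # Na+/K+ pump stoichiometry
DELTA_G_ATP = 57e3           # J/mol, assumed physiological Gibbs energy of ATP hydrolysis
T_BODY = 310.0               # K

atp_molecules = NA_LOAD / NA_PER_ATP
energy_dissipated = (atp_molecules / AVOGADRO) * DELTA_G_ATP   # J per spike
entropy_generated = energy_dissipated / T_BODY                 # J/K per spike

print(f"ATP per spike:     {atp_molecules:.2e} molecules")
print(f"Energy dissipated: {energy_dissipated:.2e} J")
print(f"Entropy generated: {entropy_generated:.2e} J/K")
```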
777 How Does Paradoxical Leadership Enhance Organizational Success?
Authors: Wageeh A. Nafei
Abstract:
This paper explores the role of Paradoxical Leadership (PL) in enhancing Organizational Success (OS) at private hospitals in Egypt. It is based on data collected from employees in private hospitals (doctors, nursing staff, and administrative staff); the researcher adopted a sampling method to collect the data for the study. Appropriate statistical methods, such as the Alpha Correlation Coefficient (ACC), Confirmatory Factor Analysis (CFA), and Multiple Regression Analysis (MRA), are used to analyze the data and test the hypotheses. The research reached a number of results, the most important of which are: (1) There is a statistical relationship between the independent variable, represented by PL, and the dependent variable, represented by OS. The paradoxical leader encourages employees to express their opinions and builds a work environment characterized by flexibility and independence. The paradoxical leader also supports specialized work teams, which leads to the creation of new ideas on the one hand and contributes to the achievement of outstanding performance on the other. (2) The mentality of the paradoxical leader is flexible and capable of absorbing suggestions from all employees. The paradoxical leader is also interested in enhancing cooperation among employees and provides opportunities to transfer experience and increase knowledge-sharing. Such knowledge-sharing creates the diversity that helps the organization obtain rich external information and enables it to deal with a rapidly changing environment. (3) The PL approach helps in facing the paradoxical demands of employees. A paradoxical leader plays an important role in reducing feelings of instability in the work environment and of lacking job security, reducing employees' negative feelings, restoring balance in the work environment, improving employees' well-being, and increasing their degree of job satisfaction in the organization. The study offers a number of recommendations, the most important of which are: (1) Organizational leaders must listen to employees' views and needs and move away from formal methods of control. The leader should give employees sufficient freedom to participate in decision-making and maintain enough space for them; relations between leaders and employees should be based on friendliness. (2) Organizational leaders need to pay attention to knowledge-sharing among employees through training courses. The leader should make sure that every piece of information provided by an employee is valuable and useful and can be used to solve problems that colleagues may face at work. (3) Organizational leaders need to pay attention to knowledge-sharing among employees through brainstorming sessions. The leader should ensure that employees obtain knowledge from their colleagues and share ideas and information among themselves, in addition to motivating employees to complete their work in new, creative ways so that they do not feel bored repeating the same routine procedures in the organization.
Keywords: paradoxical leadership, organizational success, human resource, management
Procedia PDF Downloads 57
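The methods sentence above names the Alpha Correlation Coefficient among the reliability checks. As a hedged illustration only (random placeholder responses and an assumed 8-item scale, not the study's survey or results), a minimal Cronbach's alpha computation looks like this:

```python
# Sketch of a scale-reliability check of the kind the methods mention:
# Cronbach's alpha for a k-item questionnaire,
#   alpha = k/(k-1) * (1 - sum(item variances) / variance of total score).
# The response matrix is random placeholder data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x questionnaire items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(42)
latent = rng.normal(size=(200, 1))                     # shared trait across items
responses = latent + 0.8 * rng.normal(size=(200, 8))   # 8 correlated items
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```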
776 Neural Networks Underlying the Generation of Neural Sequences in the HVC
Authors: Zeina Bou Diab, Arij Daou
Abstract:
The neural mechanisms of sequential behaviors are intensively studied, with songbirds a focus for learned vocal production. We are studying the premotor nucleus HVC, which sits at a nexus of multiple pathways contributing to song learning and production. The HVC consists of multiple classes of neuronal populations, each with its own cellular, electrophysiological and functional properties. During singing, a large subset of the motor-cortex-analog-projecting HVCRA neurons emit a single 6-10 ms burst of spikes at the same time during each rendition of song, and a large subset of the basal-ganglia-projecting HVCX neurons fire 1 to 4 bursts that are similarly time-locked to vocalizations, while HVCINT interneurons fire tonically at a high average frequency throughout song, with prominent modulations whose timing in relation to song remains unresolved. This opens the opportunity to define models relating explicit HVC circuitry to how these neurons work cooperatively to control learning and singing. We developed conductance-based Hodgkin-Huxley models for the three classes of HVC neurons (based on the ion channels previously identified from in vitro recordings) and connected them in several physiologically realistic networks (based on the known synaptic connectivity and the specific glutamatergic and GABAergic pharmacology) via different architecture patterning scenarios, with the aim of replicating the in vivo firing behaviors. Through these networks, we are able to reproduce the in vivo behavior of each class of HVC neurons, as shown by the experimental recordings. The different network architectures developed highlight different mechanisms that might contribute to the propagation of sequential neural activity (continuous or punctate) in the HVC and to the distinctive firing patterns that each class exhibits during singing. Examples of such possible mechanisms include: (1) post-inhibitory rebound in HVCX neurons and their population patterns during singing, (2) different subclasses of HVCINT neurons interacting via inhibitory-inhibitory loops, (3) monosynaptic HVCX-to-HVCRA excitatory connectivity, and (4) structured many-to-one inhibitory synapses from interneurons to projection neurons, among others. Replication is only a preliminary step that must be followed by model prediction and testing.
Keywords: computational modeling, neural networks, temporal neural sequences, ionic currents, songbird
Procedia PDF Downloads 69
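As background for the conductance-based models described in the HVC abstract above, here is a minimal single-compartment Hodgkin-Huxley sketch using the classic squid-axon parameters. The authors' models use HVC-cell-type-specific ion channels and synaptic connections, so this is only the generic building block, not their implementation.

```python
# Minimal single-compartment Hodgkin-Huxley neuron, integrated with forward
# Euler. Classic squid-axon parameters; a constant current drives spiking.
import numpy as np

# Maximal conductances (mS/cm^2), reversal potentials (mV), capacitance (uF/cm^2)
g_na, g_k, g_l = 120.0, 36.0, 0.3
e_na, e_k, e_l = 50.0, -77.0, -54.4
c_m = 1.0

def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * np.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * np.exp(-(v + 65.0) / 80.0)

dt, t_end = 0.01, 50.0                 # ms
v, m, h, n = -65.0, 0.05, 0.6, 0.32    # resting initial conditions
i_ext = 10.0                           # injected current (uA/cm^2)

spikes = 0
for _ in range(int(t_end / dt)):
    i_na = g_na * m**3 * h * (v - e_na)
    i_k = g_k * n**4 * (v - e_k)
    i_l = g_l * (v - e_l)
    dv = (i_ext - i_na - i_k - i_l) / c_m
    m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
    h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
    n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
    prev_v, v = v, v + dt * dv
    if prev_v < 0.0 <= v:              # crude upward zero-crossing spike count
        spikes += 1

print(f"{spikes} spikes in {t_end:.0f} ms at {i_ext} uA/cm^2")
```

Network models of the kind the abstract describes couple many such compartments with synaptic currents (excitatory and inhibitory conductances with their own kinetics) in place of the constant drive used here.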