Search results for: equivalent circuit models
4699 The Optimal Irrigation in the Mitidja Plain
Authors: Gherbi Khadidja
Abstract:
In the Mediterranean region, water resources are limited and very unevenly distributed in space and time. The main objective of this project is the development of a wireless network for the management of water resources in the Mitidja plain of northern Algeria, helping farmers to irrigate in the most optimized way and addressing the water shortage in the region. We therefore develop a decision-aid tool that can modernize and replace some traditional techniques, driven by the real needs of the crops, the soil conditions and the climatic conditions (soil moisture, precipitation, characteristics of the unsaturated zone). These data are collected in real time by sensors, analyzed by an algorithm and displayed on a mobile application and a website. The results are essential information and alerts with recommendations for action, helping farmers ensure the sustainability of the agricultural sector under water shortage conditions. In the first part, we set up a wireless sensor network for precise management of water resources, using equipment that measures the water content of the soil, such as a Watermark probe connected through an acquisition card to an Arduino Uno. The Arduino collects the captured data and transmits them via a GSM module to a website, where they are stored in a database for later study. In the second part, we display the results on a website or a mobile application backed by this database to remotely manage the smart irrigation system, giving growers the possibility to access the field conditions and the irrigation operation remotely, via wireless communication, from home or the office. The tool to be developed will also be based on satellite imagery as regards land use and soil moisture.
These tools will make it possible to follow the evolution of crop water needs over time, and also to predict the impact on water resources. According to the references consulted, such a tool can reduce irrigation volumes by up to 40%, which represents more than 100 million m3 of savings per year for the Mitidja, a volume equivalent to a medium-size dam.
Keywords: optimal irrigation, soil moisture, smart irrigation, water management
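The sensor-driven irrigation advice described above can be illustrated with a minimal sketch, assuming a Watermark-style soil-tension reading in centibars and an invented crop threshold; none of these values come from the authors' system:

```python
# Hypothetical sketch of the irrigation decision logic: a Watermark probe
# reports soil matric tension in centibars (cb); drier soil gives a higher
# reading, so irrigation is recommended once the tension exceeds a
# crop-specific threshold. All numeric values below are assumptions.

def irrigation_advice(tension_cb, threshold_cb=40, rain_forecast_mm=0.0):
    """Return a recommendation string from a soil-tension reading.

    tension_cb: Watermark reading in centibars (0 = saturated soil).
    threshold_cb: crop/soil-specific trigger point (assumed value).
    rain_forecast_mm: skip irrigation if significant rain is expected.
    """
    if rain_forecast_mm >= 5.0:          # assumed cutoff for "enough rain"
        return "hold: rain expected"
    if tension_cb >= threshold_cb:
        return "irrigate"
    return "no irrigation needed"

# Example readings as they might arrive from the acquisition card
readings = [12, 35, 48, 60]
advice = [irrigation_advice(t) for t in readings]
```

In a real deployment this rule would run server-side on the data the GSM module uploads, so the threshold can be tuned per crop and per soil without touching the field hardware.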
Procedia PDF Downloads 109
4698 Development of a Tilt-Rotor Aircraft Model Using System Identification Technique
Authors: Ferdinando Montemari, Antonio Vitale, Nicola Genito, Giovanni Cuciniello
Abstract:
The introduction of tilt-rotor aircraft into the existing civilian air transportation system will provide beneficial effects, owing to the tilt-rotor's capability to combine the characteristics of a helicopter and a fixed-wing aircraft in one vehicle. The availability of reliable tilt-rotor simulation models supports the development of such vehicles. Indeed, simulation models are required to design automatic control systems that increase safety, reduce the pilot's workload and stress, and ensure the optimal aircraft configuration with respect to flight envelope limits, especially during the most critical flight phases, such as the conversion from helicopter to aircraft mode and vice versa. This article presents a process to build a simplified tilt-rotor simulation model, derived from the analysis of flight data. The model aims to reproduce the complex dynamics of the tilt-rotor during the in-flight conversion phase. It uses a set of scheduled linear transfer functions to relate the autopilot reference inputs to the most relevant rigid-body state variables. The model also computes information about the rotor flapping dynamics, which is useful to evaluate the aircraft control margin in terms of rotor collective and cyclic commands. The rotor flapping model is derived through a mixed theoretical-empirical approach, which includes physical analytical equations (applicable to the helicopter configuration) and parametric corrective functions. The latter are introduced to best fit the actual rotor behavior and account for the differences between helicopter and tilt-rotor during flight. Time-domain system identification from flight data is exploited to optimize the model structure and to estimate the model parameters. The presented model-building process was applied to simulated flight data of the ERICA tilt-rotor, generated by using a high-fidelity simulation model implemented in the FlightLab environment.
The validation of the obtained model was highly satisfactory, confirming the validity of the proposed approach.
Keywords: flapping dynamics, flight dynamics, system identification, tilt-rotor modeling and simulation
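The idea of scheduled linear transfer functions can be illustrated with a minimal sketch, assuming a single first-order channel whose gain and time constant are interpolated over the nacelle tilt angle; the numbers are invented and are not the identified ERICA model:

```python
import numpy as np

# Illustrative sketch (not the authors' identified model): a scheduled
# first-order transfer function K(beta) / (tau(beta) s + 1) whose
# parameters are interpolated over the nacelle tilt angle beta, mimicking
# how scheduled linear models can cover the conversion corridor.
# The parameter values below are invented for illustration.

betas = np.array([0.0, 45.0, 90.0])     # deg: helicopter -> airplane mode
gains = np.array([1.2, 1.0, 0.8])       # assumed DC gains per flight mode
taus = np.array([0.8, 0.5, 0.3])        # assumed time constants [s]

def step_response(beta, t):
    """First-order step response with parameters scheduled on beta."""
    k = np.interp(beta, betas, gains)
    tau = np.interp(beta, betas, taus)
    return k * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 3.0, 301)
y_heli = step_response(0.0, t)     # helicopter mode
y_conv = step_response(45.0, t)    # mid-conversion
```

In a full model there would be one such scheduled channel per autopilot input/state pair, with the parameters estimated from flight data rather than assumed.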
Procedia PDF Downloads 199
4697 Applications of Greenhouse Data in Guatemala in the Analysis of Sustainability Indicators
Authors: Maria A. Castillo H., Andres R. Leandro, Jose F. Bienvenido B.
Abstract:
In 2015, Guatemala officially adopted the Sustainable Development Goals (SDG) according to the 2030 Agenda agreed by the United Nations. In 2016, these objectives and goals were reviewed, and the National Priorities were established within the K'atún 2032 National Development Plan. In 2019 and 2021, progress was evaluated with 120 defined indicators, the need to improve the quality and availability of the statistical data necessary for the analysis of sustainability indicators was detected, and the values to be reached in 2024 and 2032 were adjusted accordingly. The need for greater agricultural technology is one of the priorities established within SDG 2, "Zero Hunger". Within this area, protected agricultural production provides greater productivity throughout the year, reduces the use of chemical products to control pests and diseases, reduces the negative impact of climate and improves product quality. During the crisis caused by Covid-19, there was an increase in exports of fruits and vegetables produced in greenhouses in Guatemala; however, this information was not considered in the 2021 revision of the Plan. The objective of this study is to evaluate the information available on greenhouse agricultural production and its integration into the sustainability indicators for Guatemala. The study was carried out in four phases: 1. analysis of the goals established for SDG 2 and the indicators included in the K'atún Plan; 2. analysis of environmental, social and economic indicator models; 3. definition of territorial levels at two geographic scales, departments and municipalities; 4. diagnosis of the available data on technological agricultural production, with emphasis on greenhouses, at the two geographic scales. A summary of the results is presented for each phase, and finally some recommendations for future research are added.
The main contribution of this work is to improve the available data so as to allow the incorporation of agricultural technology indicators into the established goals, to evaluate their impact on food security and nutrition, employment and investment, poverty, and the use of water and natural resources, and to provide a methodology applicable to other production models and other geographical areas.
Keywords: greenhouses, protected agriculture, sustainable indicators, Guatemala, sustainability, SDG
Procedia PDF Downloads 85
4696 From Industry 4.0 to Agriculture 4.0: A Framework to Manage Product Data in Agri-Food Supply Chain for Voluntary Traceability
Authors: Angelo Corallo, Maria Elena Latino, Marta Menegoli
Abstract:
The agri-food value chain involves various stakeholders with different roles, all of whom abide by national and international rules and leverage marketing strategies to advance their products. Food products and their processing phases carry with them a large volume of data that are often not used to inform the final customer. Some of these data, if fittingly identified and used, can enhance the single company and/or the whole supply chain, creating a match between marketing techniques and voluntary traceability strategies. Moreover, buying models have lately changed: customers are attentive to wellbeing and food quality. Food citizenship and food democracy have emerged, leveraging transparency, sustainability and the need for food information. The Internet of Things (IoT) and analytics, some of the innovative technologies of Industry 4.0, have a significant impact on the market and will act as a main thrust towards a genuine '4.0 change' for agriculture. However, realizing a traceability system is not simple because of the complexity of the agri-food supply chain, the many actors involved, different business models, environmental variations impacting products and/or processes, and extraordinary climate changes. In order to support companies on a traceability path, a Framework to Manage Product Data in Agri-Food Supply Chain for Voluntary Traceability was conceived, starting from business model analysis and the related business processes. Studying each process task and leveraging modeling techniques makes it possible to identify the information held by different actors along the agri-food supply chain. IoT technologies for data collection and analytics techniques for data processing supply information useful to increase intra-company efficiency and competitiveness in the market.
The whole set of recovered information can be shown through IT solutions and mobile applications, made accessible to the company, the entire supply chain and the consumer, with a view to guaranteeing transparency and quality.
Keywords: agriculture 4.0, agri-food supply chain, industry 4.0, voluntary traceability
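The kind of per-task product data such a framework collects can be sketched as a simple record type; the field names and helper below are hypothetical illustrations, not the authors' data model:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical sketch of a traceability record captured at each
# supply-chain task; field names are assumptions for illustration.

@dataclass
class TraceEvent:
    actor: str                    # e.g. grower, processor, distributor
    process_task: str             # task identified from the business process
    product_batch: str
    timestamp: datetime
    sensor_data: dict = field(default_factory=dict)  # IoT readings, if any

def trace_chain(events, batch):
    """Return the ordered history of one batch across the supply chain."""
    return sorted(
        (e for e in events if e.product_batch == batch),
        key=lambda e: e.timestamp,
    )

events = [
    TraceEvent("processor", "canning", "B1", datetime(2020, 3, 2)),
    TraceEvent("grower", "harvest", "B1", datetime(2020, 3, 1),
               {"temperature_c": 18.5}),
]
history = trace_chain(events, "B1")
```

Ordering the records by timestamp is what lets the same data serve both internal efficiency analysis and the consumer-facing transparency view described above.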
Procedia PDF Downloads 147
4695 Development of Transmission and Packaging for Parallel Hybrid Light Commercial Vehicle
Authors: Vivek Thorat, Suhasini Desai
Abstract:
The hybrid electric vehicle is widely accepted as a promising short- to mid-term technical solution due to noticeably improved efficiency and low emissions at competitive costs. Retrofitting hybrid components into a conventional vehicle to achieve better performance is the best solution so far, but retrofitting usually involves major modifications to the vehicle at a high cost. This paper focuses on the development of a P3x hybrid prototype: a rear-wheel-drive parallel hybrid electric Light Commercial Vehicle (LCV) with minimal, low-cost modifications. This diesel hybrid LCV differs from other hybrids with regard to the powertrain: the additional powertrain consists of a continuous-contact helical gear pair followed by a chain and sprocket as a coupler for the traction motor, and is designed for the intended high-speed application. This work targets the design, development, and packaging of this unique parallel diesel-electric vehicle, which builds on the advantages of multimode hybrids. To demonstrate the practical applicability of this transmission with the P3x hybrid configuration, one concept prototype vehicle has been built integrating the transmission. The hybrid system makes it easy to retrofit an existing vehicle because the changes required to the vehicle chassis are minimal. The additional system is designed for five main modes of operation: engine-only mode, electric-only mode, hybrid power mode, engine-charging-battery mode and regenerative braking mode. The driving performance, fuel economy and emissions are measured, and the results are analyzed over a given drive cycle. Finally, the output results achieved by the first vehicle prototype during experimental testing on a chassis dynamometer, using the MIDC driving cycle, are reported. The results showed that the prototype hybrid vehicle is about 27% faster than the equivalent conventional vehicle.
The fuel economy is increased by approximately 20-25% compared to the conventional powertrain.
Keywords: P3x configuration, LCV, hybrid electric vehicle, ROMAX, transmission
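The five operating modes listed in the abstract can be illustrated with a minimal supervisory rule; the thresholds on speed, state of charge and demand are invented for illustration, not the prototype's actual control law:

```python
# Hypothetical supervisory sketch of the five operating modes; all
# thresholds (speed, battery state of charge, pedal demand) are invented.

def select_mode(speed_kmh, soc, demand, braking=False):
    """Pick an operating mode for a P3x parallel hybrid.

    soc: battery state of charge in [0, 1]; demand: traction power fraction.
    """
    if braking:
        return "regenerative braking"
    if demand > 0.8:
        return "hybrid power"             # engine and motor together
    if soc < 0.3:
        return "engine charging battery"  # engine drives and recharges
    if speed_kmh < 40:
        return "electric only"            # low speed, healthy battery
    return "engine only"

modes = [
    select_mode(30, 0.8, 0.2),                # low-speed cruise
    select_mode(90, 0.8, 0.9),                # hard acceleration
    select_mode(90, 0.2, 0.4),                # depleted battery
    select_mode(60, 0.8, 0.0, braking=True),  # deceleration
]
```

A real supervisory controller would add hysteresis and rate limits so the system does not chatter between modes near a threshold.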
Procedia PDF Downloads 255
4694 A Study on Reinforced Concrete Beams Enlarged with Polymer Mortar and UHPFRC
Authors: Ga Ye Kim, Hee Sun Kim, Yeong Soo Shin
Abstract:
Many studies have been done so far on repair and strengthening methods for concrete structures. The traditional retrofit method was to attach fiber sheets such as CFRP (Carbon Fiber Reinforced Polymer), GFRP (Glass Fiber Reinforced Polymer) and AFRP (Aramid Fiber Reinforced Polymer) to the concrete structure. However, this method has notable downsides: there is a risk of debonding and an increase in displacement due to a shortage of structural section. Therefore, enlarging the structural member with polymer mortar or Ultra-High Performance Fiber Reinforced Concrete (UHPFRC) is an effective means of strengthening concrete structures. This paper investigates the structural performance of reinforced concrete (RC) beams enlarged with polymer mortar and compares the experimental results with analytical results. Nonlinear finite element analyses were conducted to reproduce the experimental results and to predict the structural behavior of retrofitted RC beams accurately without a costly experimental process. In addition, this study compares the retrofit materials, the commonly used polymer mortar and the more recently adopted UHPFRC, by conducting nonlinear finite element analyses. In the first part of this paper, RC beams having different cover types were fabricated for the experiment; each beam was 250 millimeters in depth, 150 millimeters in width and 2800 millimeters in length. To verify the experiment, nonlinear finite element models were generated using the commercial software ABAQUS 6.10-3. In this part of the study, both experimental and analytical results demonstrated a good strengthening effect on the RC beams and showed similar tendencies, so the proposed analytical method can be used in future work to predict the effect of strengthening RC beams. In the second part of the study, the main parameter was the type of retrofit material.
The same nonlinear finite element models were generated to compare the polymer mortar with UHPFRC. The two types of retrofit material were evaluated, and the retrofit effect was verified by the analytical results.
Keywords: retrofit material, polymer mortar, UHPFRC, nonlinear finite element analysis
Procedia PDF Downloads 418
4693 Visco-Hyperelastic Finite Element Analysis for Diagnosis of Knee Joint Injury Caused by Meniscal Tearing
Authors: Eiji Nakamachi, Tsuyoshi Eguchi, Sayo Yamamoto, Yusuke Morita, H. Sakamoto
Abstract:
In this study, we aim to reveal the relationship between meniscal tearing and articular cartilage injury of the knee joint by using the dynamic explicit finite element (FE) method. Meniscal injuries reduce the meniscus's functional ability and consequently increase the load on the articular cartilage of the knee joint. In order to prevent the induction of osteoarthritis (OA) caused by meniscal injuries, many medical treatment techniques, such as artificial meniscus replacement and meniscal regeneration, have been developed. However, it is reported that these treatments are not comprehensive. In order to reveal the fundamental mechanism of OA induction, the mechanical characterization of the meniscus in normal and injured states is carried out by using FE analyses. First, an FE model of the human knee joint in the normal ('intact') state was constructed using magnetic resonance (MR) tomography images and the image construction code Materialize Mimics. Next, two types of meniscal injury model, with radial tears of the medial and lateral menisci, were constructed. In the FE analyses, a linear elastic constitutive law was adopted for the femur and tibia bones, a visco-hyperelastic constitutive law for the articular cartilage, and a visco-anisotropic hyperelastic constitutive law for the meniscus. The material properties of the articular cartilage and meniscus were identified using the stress-strain curves obtained from our compressive and tensile tests. The numerical results under a normal walking condition revealed how and where the maximum compressive stress occurred on the articular cartilage. The maximum compressive stress and its occurrence point varied among the intact and the two meniscal tear models. These compressive stress values can be used to establish the threshold value for pathological change for diagnosis.
In this study, FE analyses of the knee joint were carried out to reveal the influence of meniscal injuries on cartilage injury. The following conclusions are obtained. 1. A 3D FE model consisting of the femur, tibia, articular cartilage and menisci was constructed based on MR images of a human knee joint, processed with the image-processing code Materialize Mimics and meshed with tetrahedral FE elements. 2. A visco-anisotropic hyperelastic constitutive equation was formulated by adopting the generalized Kelvin model. The material properties of the meniscus and articular cartilage were determined by curve fitting against experimental results. 3. Stresses on the articular cartilage and menisci were obtained for the intact case and for the two radial tears of the medial and lateral menisci. Compared with the intact knee joint, the two tear models show almost the same stress values as each other and higher values than the intact one. It was shown that both meniscal tears induce stress localization in both the medial and lateral regions. It is confirmed that our newly developed FE analysis code has the potential to become a new diagnostic system to evaluate the effect of meniscal damage on the articular cartilage through mechanical functional assessment.
Keywords: finite element analysis, hyperelastic constitutive law, knee joint injury, meniscal tear, stress concentration
Procedia PDF Downloads 246
4692 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference
Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade
Abstract:
In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion whose limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance metrics hold as for the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics.
Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is accounting for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
Keywords: model selection inference, generalized information criteria, post-model selection, asymptotic theory
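The quantile computation described above can be sketched by Monte Carlo instead of exact multivariate Gaussian integration with "mvtnorm"; the mean vector and correlation below are invented for illustration:

```python
import numpy as np

# Illustrative sketch: the GIC values of M candidate models are
# (asymptotically) jointly Gaussian, and an upper quantile of their
# minimum is what the uncertainty band needs. Here the multivariate
# Gaussian integral is approximated by Monte Carlo; the mean vector
# and covariance are assumed example values, not estimated quantities.

rng = np.random.default_rng(0)

mu = np.array([0.0, 0.5, 1.0])               # assumed GIC means (3 models)
rho = 0.5                                    # assumed common correlation
cov = rho * np.ones((3, 3)) + (1 - rho) * np.eye(3)

draws = rng.multivariate_normal(mu, cov, size=200_000)
minima = draws.min(axis=1)                   # distribution of the minimum GIC

def upper_quantile(level=0.95):
    """Monte Carlo upper quantile of the minimum-GIC distribution."""
    return float(np.quantile(minima, level))

q95 = upper_quantile(0.95)
```

Any candidate model whose observed GIC falls below this quantile would stay inside the uncertainty band; the exact-integration route simply replaces the Monte Carlo step with a deterministic multivariate normal probability.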
Procedia PDF Downloads 89
4691 Cognitive Models of Health Marketing Communication in the Digital Era: Psychological Factors, Challenges, and Implications
Authors: Panas Gerasimos, Kotidou Varvara, Halkiopoulos Constantinos, Gkintoni Evgenia
Abstract:
As a result of growing technology and the information available online, users resort to the internet and only subsequently to the opinion of an expert. In many cases, they take control of their health into their own hands and make decisions without the contribution of a doctor. Accordingly, this essay intends to analyze the confidence placed in searching health issues on the internet. For this study, a survey was conducted among doctors in order to find out the reasons a patient uses the internet for their health problems, as well as the consequences that such health information searches could lead to. Specifically, the results regarding the users demonstrate that: a) the majority of users make use of the internet for health issues once or twice a month; b) individuals with a chronic disease search for health information on the internet more frequently; c) the most important topics that the majority of users search for are pathological and dietary issues, along with issues associated with doctors and hospitals, although topic search varies depending on the users' age; d) the most common source of information remains direct contact with doctors, which the majority of users prefer over electronic forms of briefing; and e) there is a large lack of knowledge about e-health services.
From the doctors' point of view, the following conclusions emerge: a) almost all doctors use the internet as their main source of information; b) the internet has a great influence on doctors' relationships with their patients; c) in many cases, a patient first visits the internet and then the doctor; d) the internet has a significant psychological impact on patients as they reach a decision; e) the most important reason users choose the internet instead of a health professional is economic; f) the main negative consequence that emerges is inaccurate information; g) the positive consequences are the possibility of online contact with the doctor and easier comprehension of the doctor's advice. Generally, it is observed on both sides that the use of the internet for health issues is intense, which indicates that the new means doctors have at their disposal create the conditions for radical changes in the way services are provided and in the doctor-patient relationship.
Keywords: cognitive models, health marketing, e-health, psychological factors, digital marketing, e-health services
Procedia PDF Downloads 206
4690 Modeling the Impact of Time Pressure on Activity-Travel Rescheduling Heuristics
Authors: Jingsi Li, Neil S. Ferguson
Abstract:
Time pressure can influence productivity, the quality of decision making, and the efficiency of problem-solving. This insight stems mostly from cognitive research and the psychological literature; however, scant discussion has been held in adjacent transport fields. It is conceivable that in many activity-travel contexts, time pressure is a potentially important factor, since an excessive amount of decision time may incur the risk of late arrival at the next activity. Activity-travel rescheduling behavior is commonly explained by the costs and benefits of factors such as activity engagements, personal intentions, social requirements, etc. This paper hypothesizes that an additional factor, perceived time pressure, could affect travelers' rescheduling behavior, and thus have an impact on travel demand management. Time pressure may arise in different ways and is assumed here to be essentially incurred because travelers plan their schedules without expecting unforeseen elements, e.g., transport disruption. In addition to a linear-additive utility-maximization model, less computationally demanding, non-compensatory heuristic models are considered as an alternative means of simulating travelers' responses. The paper contributes to travel behavior modeling research by investigating the following questions: how can time pressure be measured properly in an activity-travel day plan context? How do travelers reschedule their plans to cope with time pressure? How does the importance of the activity affect travelers' rescheduling behavior? What behavioral model best describes the process of making activity-travel rescheduling decisions? How do the identified coping strategies affect the transport network? In this paper, a Mixed Heuristic Model (MHM) is employed to identify the presence of different choice heuristics through a latent class approach.
The data on travelers' activity-travel rescheduling behavior are collected via a web-based interactive survey in which a fictitious scenario comprising multiple uncertain events on the activity or travel side is created. The experiments are conducted in order to gain a realistic picture of activity-travel rescheduling under time pressure. The identified behavioral models are then integrated into a multi-agent transport simulation model to investigate the effect of the rescheduling strategy on the transport network. The results show that an increased proportion of travelers use simpler, non-compensatory choice strategies instead of compensatory methods to cope with time pressure. Specifically, satisficing, one of the heuristic decision-making strategies, is commonly adopted, since travelers tend to abandon the less important activities and keep the important ones. Furthermore, the importance of the activity is found to increase the weight of negative information when making trip-related decisions, especially route choices. When the identified non-compensatory decision-making heuristic models are incorporated into the agent-based transport model, the simulation results imply that neglecting the effect of perceived time pressure may result in an inaccurate forecast of choice probability and overestimate the responsiveness to policy changes.
Keywords: activity-travel rescheduling, decision making under uncertainty, mixed heuristic model, perceived time pressure, travel demand management
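The contrast between compensatory utility maximization and the satisficing heuristic can be sketched as follows; the alternatives, weights and aspiration levels are invented for illustration, not the survey's estimated models:

```python
# Minimal sketch of the two decision rules discussed above: a compensatory
# utility-maximizing traveler weighs all attributes, while a satisficing
# traveler under time pressure keeps the first alternative that clears an
# aspiration threshold. Attribute names and values are assumptions.

def utility(alt, weights):
    return sum(weights[a] * v for a, v in alt["attrs"].items())

def compensatory_choice(alts, weights):
    """Pick the alternative with maximum weighted utility."""
    return max(alts, key=lambda a: utility(a, weights))["name"]

def satisficing_choice(alts, aspiration):
    """Pick the first alternative meeting every aspiration level."""
    for alt in alts:
        if all(alt["attrs"][a] >= level for a, level in aspiration.items()):
            return alt["name"]
    return alts[0]["name"]  # fall back to the first option considered

alts = [
    {"name": "reschedule", "attrs": {"importance": 0.4, "punctuality": 0.9}},
    {"name": "keep_plan",  "attrs": {"importance": 0.9, "punctuality": 0.5}},
]
weights = {"importance": 0.7, "punctuality": 0.3}
aspiration = {"importance": 0.3, "punctuality": 0.8}

full_eval = compensatory_choice(alts, weights)     # weighs everything
quick_eval = satisficing_choice(alts, aspiration)  # stops at "good enough"
```

The two rules can pick different alternatives for the same traveler, which is precisely the behavioral shift under time pressure that the latent class approach is designed to detect.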
Procedia PDF Downloads 113
4689 Critical Appraisal, Smart City Initiative: China vs. India
Authors: Suneet Jagdev, Siddharth Singhal, Dhrubajyoti Bordoloi, Peesari Vamshidhar Reddy
Abstract:
There is no universally accepted definition of what constitutes a Smart City: it means different things to different people, and the definition varies from place to place depending on the level of development and the willingness of people to change and reform. A Smart City tries to improve the quality of resource management and service provision for the people living in the city. Smart City is an urban development vision to integrate multiple information and communication technology (ICT) solutions in a secure fashion to manage the assets of a city, but most of these projects are misinterpreted as being technology projects only. Due to urbanization, a lot of informal as well as government-funded settlements have come up during the last few decades, increasing the consumption of the limited resources available. The people of each city have their own definition of a Smart City; in the imagination of any city dweller in India, a Smart City contains a wish list of infrastructure and services that describes his or her level of aspiration. The research involved a comparative study of the Smart City models in India and in China. Behavioral changes experienced by the people living in the pilot (first-ever) smart cities have been identified and compared. This paper discusses the target quality of life for people in India and in China and how well it could be realized with the facilities included in these Smart City projects. Logical and comparative analyses have been performed on important data collected from government sources, government papers and research papers by various experts on the topic. Existing cities with historically grown infrastructure and administration systems will require a more moderate, step-by-step approach to modernization. The models were compared using many different motivators, with data collected from past journals, interaction with the people involved, videos and past submissions.
In conclusion, we have identified how these projects could be combined with ongoing small-scale initiatives by local people or small groups of individuals, and what the outcome might be if these existing practices were implemented on a bigger scale.
Keywords: behavior change, mission monitoring, pilot smart cities, social capital
Procedia PDF Downloads 289
4688 Technical and Practical Aspects of Sizing an Autonomous PV System
Authors: Abdelhak Bouchakour, Mustafa Brahami, Layachi Zaghba
Abstract:
The use of photovoltaic energy offers an inexhaustible supply of energy that is also clean and non-polluting, which is a definite advantage. The geographical location of Algeria promotes the development of the use of this energy, given the intensity of the radiation received and the duration of sunshine. For this reason, the objective of our work is to develop a software tool for the calculation and optimization of the dimensioning of photovoltaic installations. Our optimization approach is based on mathematical models which, amongst other things, describe the operation of each part of the installation, the energy production, the storage and the consumption of energy.
Keywords: solar panel, solar radiation, inverter, optimization
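The kind of dimensioning arithmetic such a software tool performs can be sketched as follows; the load, panel wattage, sun hours and loss figures are assumed example values, not outputs of the authors' tool:

```python
import math

# Minimal sketch of stand-alone PV sizing; every numeric input below is
# an assumed example value (e.g. for an Algerian site), not a result
# from the authors' software.

def size_pv_array(daily_load_wh, panel_w, peak_sun_hours, system_eff=0.75):
    """Number of panels needed to cover the daily load.

    system_eff lumps inverter, wiring and temperature losses (assumed).
    """
    energy_per_panel = panel_w * peak_sun_hours * system_eff  # Wh/day
    return math.ceil(daily_load_wh / energy_per_panel)

def size_battery(daily_load_wh, autonomy_days=2, dod=0.5, volt=48):
    """Battery capacity in Ah for the required days of autonomy.

    dod: allowed depth of discharge; volt: battery bank voltage.
    """
    return math.ceil(daily_load_wh * autonomy_days / (dod * volt))

n_panels = size_pv_array(daily_load_wh=5000, panel_w=300, peak_sun_hours=5.5)
battery_ah = size_battery(daily_load_wh=5000)
```

An optimization tool would iterate this arithmetic over panel, battery and inverter options to minimize cost while keeping the loss-of-load probability acceptable.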
Procedia PDF Downloads 608
4687 Multi-Scale Modelling of the Cerebral Lymphatic System and Its Failure
Authors: Alexandra K. Diem, Giles Richardson, Roxana O. Carare, Neil W. Bressloff
Abstract:
Alzheimer's disease (AD) is the most common form of dementia, and although it has been researched for over 100 years, there is still no cure or preventive medication. Its onset and progression are closely related to the accumulation of the neuronal metabolite Aβ. This raises the question of how metabolites and waste products are eliminated from the brain, as the brain does not have a traditional lymphatic system. In recent years, the rapid uptake of Aβ into cerebral artery walls and its clearance along those arteries towards the lymph nodes in the neck has been suggested and confirmed in mouse studies, which has led to the hypothesis that interstitial fluid (ISF), in the basement membranes in the walls of cerebral arteries, provides the pathways for the lymphatic drainage of Aβ. This mechanism, however, requires a net reverse flow of ISF inside the blood vessel wall relative to the blood flow, and the driving forces for such a mechanism remain unknown. While possible driving mechanisms have been studied using mathematical models in the past, a mechanism for net reverse flow has not been discovered yet. Here, we aim to address the question of the driving force of this reverse lymphatic drainage of Aβ (also called perivascular drainage) by using multi-scale numerical and analytical modelling. The numerical simulation software COMSOL Multiphysics 4.4 is used to develop a fluid-structure interaction model of a cerebral artery, which models blood flow and displacements in the artery wall due to blood pressure changes. An analytical model of a layer of basement membrane inside the wall governs the flow of ISF and, therefore, solute drainage, based on the pressure changes and wall displacements obtained from the cerebral artery model. The findings suggest that the components of the basement membrane play an active role in facilitating a reverse flow and that stiffening of the artery wall with age is a major risk factor for the impairment of brain lymphatics.
Additionally, our model supports the hypothesis of a close association between cerebrovascular diseases and the failure of perivascular drainage.
Keywords: Alzheimer's disease, artery wall mechanics, cerebral blood flow, cerebral lymphatics
Procedia PDF Downloads 526
4686 A Survey of Domain Name System Tunneling Attacks: Detection and Prevention
Authors: Lawrence Williams
Abstract:
As the mechanism which converts domain names to Internet Protocol (IP) addresses, the Domain Name System (DNS) is an essential part of internet usage. It was not designed with security in mind and can be subject to attacks. DNS attacks have become more frequent and sophisticated, and detecting and preventing them is increasingly important for the modern network. DNS tunneling attacks are one type of attack, primarily used for distributed denial-of-service (DDoS) attacks and data exfiltration. Different techniques to detect and prevent DNS tunneling attacks are discussed, covering the methods, models, experiments, and data for each technique. The feasibility of these techniques is assessed, and future research on these topics is proposed.
Keywords: DNS, tunneling, exfiltration, botnet
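The survey does not fix a single detection technique, but one heuristic commonly discussed in this literature is flagging query names whose subdomain labels carry unusually high character entropy, since tunnels encode payloads into labels. The sketch below is a minimal illustration of that idea only; the length and entropy thresholds are illustrative assumptions, not values from the survey.

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits of entropy per character of a DNS label."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_tunnel(qname: str, threshold: float = 3.5) -> bool:
    """Flag a query name whose subdomain labels look like encoded payloads:
    long labels with high entropy. Drops the registered domain and TLD,
    then tests the remaining labels."""
    labels = qname.rstrip(".").split(".")[:-2]
    return any(len(l) > 12 and shannon_entropy(l) > threshold for l in labels)
```

For example, `looks_like_tunnel("www.example.com")` is false, while a base64-like subdomain such as `aGVsbG8gd29ybGQgc2VjcmV0IGRhdGE.evil.com` trips the detector. Real systems combine this with traffic-volume and timing features to reduce false positives.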
Procedia PDF Downloads 75
4685 Ownership and Shareholder Schemes Effects on Airport Corporate Strategy in Europe
Authors: Dimitrios Dimitriou, Maria Sartzetaki
Abstract:
In the early days of civil aviation, airports were totally state-owned companies under the control of national authorities or regional governmental bodies. Since then the picture has changed completely: airport privatisation and airport business commercialisation have become key success factors to stimulate air transport demand, generate revenues and attract investors, linked to the reliability and resilience of the air transport system. Nowadays, an airport's corporate strategy deals with policies and actions that essentially affect the business plans, the financial targets and the economic footprint in the regional economy it serves. Therefore, exploring airport corporate strategy is essential to support decisions in business planning, management efficiency, sustainable development and investment attractiveness on the one hand, and to define policies towards traffic development, revenue generation, capacity expansion, cost efficiency and corporate social responsibility on the other. This paper explores key outputs in airport corporate strategy for different ownership schemes. The airport corporations are grouped in three major schemes: (a) public, in which the public airport operator acts as part of the government administration or as a corporatised public operator; (b) mixed, in which the majority of the shares and the corporate strategy is driven by either the private or the public sector; and (c) private, in which the airport strategy is driven by the key aspects of globalisation and liberalisation of the aviation sector. Through a systemic approach, the key drivers in corporate strategy for modern airport business structures are defined. Key objectives are to identify the key strategic opportunities and challenges and to assess the corporate goals and risks towards sustainable business development for each scheme. The analysis is based on an extensive cross-sectional dataset for a sample of busy European airports, providing results on corporate strategy priorities, risks and business models.
The results highlight key messages for authorities, institutes and professionals on airport corporate strategy trends and directions.
Keywords: airport corporate strategy, airport ownership, airports business models, corporate risks
Procedia PDF Downloads 304
4684 Exploration of Hydrocarbon Unconventional Accumulations in the Argillaceous Formation of the Autochthonous Miocene Succession in the Carpathian Foredeep
Authors: Wojciech Górecki, Anna Sowiżdżał, Grzegorz Machowski, Tomasz Maćkowski, Bartosz Papiernik, Michał Stefaniuk
Abstract:
The article shows results of a project which aims at evaluating possibilities of effective development and exploitation of natural gas from argillaceous series of the Autochthonous Miocene in the Carpathian Foredeep. To achieve this objective, the research team developed a unique, world-trend-based methodology of processing and interpretation, adjusted to the data, local variations and petroleum characteristics of the area. In order to determine the zones in which maximum volumes of hydrocarbons might have been generated and preserved as shale gas reservoirs, as well as to identify the most preferable well sites where the largest gas accumulations are anticipated, a number of tasks were accomplished. Evaluation of petrophysical properties and hydrocarbon saturation of the Miocene complex is based on laboratory measurements as well as interpretation of well logs and archival data. The studies apply mercury intrusion porosimetry (MICP), micro-CT and nuclear magnetic resonance imaging (using the Rock Core Analyzer). For a prospective location (e.g. the central part of the Carpathian Foredeep, the Brzesko-Wojnicz area), reprocessing and reinterpretation of detailed seismic survey data with the use of integrated geophysical investigations has been carried out. Construction of quantitative, structural and parametric models for selected areas of the Carpathian Foredeep is performed on the basis of integrated, detailed 3D computer models. Modelling is carried out with Schlumberger's Petrel software. Finally, prospective zones are spatially contoured in the form of a regional 3D grid, which will be the framework for generation modelling and comprehensive parametric mapping, allowing for spatial identification of the most prospective zones of unconventional gas accumulation in the Carpathian Foredeep.
Preliminary results of the research indicate a potentially prospective area for the occurrence of unconventional gas accumulations in the Polish part of the Carpathian Foredeep.
Keywords: autochthonous Miocene, Carpathian foredeep, Poland, shale gas
Procedia PDF Downloads 228
4683 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood
Authors: Randa Alharbi, Vladislav Vyshemirsky
Abstract:
Systems biology is an important field of science which focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their function, and their interactions. A well-designed model requires selecting a suitable mechanism which can capture the main features of the system, defining the essential components of the system, and representing an appropriate law that can describe the interactions between its components. Complex biological systems exhibit stochastic behaviour, so probabilistic models are suitable to describe and analyse them. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model: it describes the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time is governed by the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet inference in such a complex system is challenging, as it requires the evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation is a common approach for tackling inference, relying on simulation from the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper we discuss the efficiency and possible practical issues of each method, taking into account their computational time.
We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)
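The core of approximate Bayesian computation described above can be sketched in a few lines: draw a parameter from the prior, simulate the stochastic model, and accept the draw when a summary statistic of the simulation is close to the observed one. The sketch below uses a trivial pure-birth (Poisson) process rather than the Repressilator, and the prior range, horizon and tolerance are illustrative assumptions.

```python
import random

def simulate_poisson_count(rate: float, horizon: float, rng: random.Random) -> int:
    """Gillespie-style simulation of a pure birth process: draw exponential
    waiting times until the time horizon is passed; return the event count."""
    t, count = 0.0, 0
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return count
        count += 1

def abc_rejection(observed: int, horizon: float, n_trials: int,
                  tol: int, rng: random.Random) -> list:
    """ABC rejection sampler: accept a prior draw when the simulated
    summary statistic (event count) is within `tol` of the observed one."""
    accepted = []
    for _ in range(n_trials):
        theta = rng.uniform(0.05, 2.0)          # uniform prior on the rate
        if abs(simulate_poisson_count(theta, horizon, rng) - observed) <= tol:
            accepted.append(theta)
    return accepted

rng = random.Random(0)
observed = simulate_poisson_count(1.0, 50.0, rng)   # synthetic data, true rate 1.0
posterior = abc_rejection(observed, 50.0, 2000, 3, rng)
```

The accepted draws in `posterior` approximate the posterior over the rate; the cost noted in the abstract is visible even here, since every trial requires a full simulation of the model.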
Procedia PDF Downloads 204
4682 The Collaboration between Resident and Non-resident Patent Applicants as a Strategy to Accelerate Technological Advance in Developing Nations
Authors: Hugo Rodríguez
Abstract:
Migrations of researchers, scientists, and inventors are a widespread phenomenon in modern times. In some cases, migrants stay linked to research groups in their countries of origin, either out of their own conviction or because of government policies. We examine different linear models of technological development (using the Ordinary Least Squares (OLS) technique) in eight selected countries and find that collaborations between resident and non-resident patent applicants correlate with different levels of performance of the technological policies in three different scenarios. Therefore, the reinforcement of that link must be considered a powerful tool for technological development.
Keywords: development, collaboration, patents, technology
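The OLS technique named in the abstract can be illustrated with a minimal linear fit. The data below are entirely hypothetical (a collaboration share against a generic technological-performance indicator); the abstract does not disclose its variables or coefficients, so this is only a sketch of the estimation step.

```python
import numpy as np

# Hypothetical country-level data (illustration only): share of patent
# applications filed jointly by resident and non-resident applicants (x)
# vs. a technological-performance indicator (y).
x = np.array([0.05, 0.10, 0.15, 0.22, 0.30, 0.41])
y = np.array([1.20, 1.90, 2.60, 3.58, 4.70, 6.24])

X = np.column_stack([np.ones_like(x), x])     # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
intercept, slope = beta
```

A positive estimated slope would correspond to the correlation the authors report between resident/non-resident collaboration and technological performance.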
Procedia PDF Downloads 127
4681 Study on the Model Predicting Post-Construction Settlement of Soft Ground
Authors: Pingshan Chen, Zhiliang Dong
Abstract:
In order to estimate post-construction settlement more objectively, a power-polynomial model is proposed which can reflect the trend of settlement development based on observed settlement data. The model was demonstrated on an actual embankment case history. Compared with three other prediction models, the power-polynomial model estimates the post-construction settlement more accurately and with a simpler calculation.
Keywords: prediction, model, post-construction settlement, soft ground
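The abstract does not state the exact form of the power-polynomial model, so as an illustration only, assume the simplest power-law trend S(t) = a·t^b fitted to observed settlements by linearising in log space; the synthetic data below are hypothetical.

```python
import math

def fit_power_model(times, settlements):
    """Least-squares fit of S(t) = a * t**b via the log-log linearisation
    ln S = ln a + b ln t (closed-form simple linear regression)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(s) for s in settlements]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical settlement record: S = 12 * t**0.35 (mm), t in days.
times = [30, 60, 90, 180, 365]
obs = [12 * t ** 0.35 for t in times]
a, b = fit_power_model(times, obs)
```

Extrapolating the fitted curve beyond the last observation gives the post-construction settlement estimate; the authors' model presumably adds polynomial terms to capture deviations from a pure power law.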
Procedia PDF Downloads 425
4680 Conceptual Model for Logistics Information System
Authors: Ana María Rojas Chaparro, Cristian Camilo Sarmiento Chaves
Abstract:
Given the growing importance of logistics as a discipline for the efficient management of material and information flows, the adoption of tools that support decision-making from a global perspective of the system under study has become essential. The article shows how a concept-based model makes it possible to organize and represent reality in an appropriate way, presenting accurate and timely information; these features make this kind of model an ideal component to support an information system, recognizing that such information is essential to establish the particularities that allow better performance in the evaluated sector.
Keywords: system, information, conceptual model, logistics
Procedia PDF Downloads 497
4679 Evaluation of the Spatial Regulation of Hydrogen Sulphide Producing Enzymes in the Placenta during Labour
Authors: F. Saleh, F. Lyall, A. Abdulsid, L. Marks
Abstract:
Background: Labour in humans is a complex biological process that involves interactions of neurological, hormonal and inflammatory pathways, with the placenta being a key regulator of these pathways. It is known that uterine contractions and labour pain cause physiological changes in gene expression in maternal and fetal blood, and in the placenta during labour. Oxidative and inflammatory stress pathways are implicated in labour, and they may cause alteration of placental gene expression. Additionally, in placental tissues, labour increases the expression of genes involved in placental oxidative stress, inflammatory cytokines, angiogenic regulators and apoptosis. Recently, hydrogen sulphide (H2S) has been considered an endogenous gaseous mediator which promotes vasodilation and exhibits cytoprotective anti-inflammatory properties. Endogenous H2S is synthesised predominantly by two enzymes: cystathionine β-synthase (CBS) and cystathionine γ-lyase (CSE). As the H2S pathway has anti-oxidative and anti-inflammatory characteristics, we hypothesised that the expression of CBS and CSE in placental tissues would alter during labour. Methods: CBS and CSE expression was examined using western blotting and RT-PCR in inner, middle and outer placental zones in placentas obtained from healthy non-labouring women who delivered by caesarean section. These were compared with the equivalent zones of placentas obtained from women who had uncomplicated labour and delivered vaginally. Results: No differences in CBS and CSE mRNA or protein levels were found between the different sites within placentas in either the labour or non-labour group. There were no significant differences in either CBS or CSE expression between the two groups at the inner site or the middle site. However, at the outer site there was a highly significant decrease in CBS protein expression in the labour group when compared to the non-labour group (p = 0.002).
Conclusion: To the best of the authors' knowledge, this is the first report to suggest that CBS is expressed in a spatial manner within the human placenta. Further work is needed to clarify the precise function and mechanism of this spatial regulation, although it is likely that inflammatory pathway regulation is a complex process in which this plays a role.
Keywords: anti-inflammatory, hydrogen sulphide, labour, oxidative stress
Procedia PDF Downloads 243
4678 Electromagnetic Tuned Mass Damper Approach for Regenerative Suspension
Authors: S. Kopylov, C. Z. Bo
Abstract:
This study explores the possibility of energy recovery through the suppression of vibrations. The article describes the design of an electromagnetic dynamic damper. The magnetic part of the device performs the function of a tuned mass damper, thereby providing both energy regeneration and damping to the protected mass. Equations of the mathematical models were obtained according to the theory of the tuned mass damper. Then, for the given properties of the current system, the amplitude-frequency response was investigated. On this basis, the main ideas and methods for further research were defined.
Keywords: electromagnetic damper, oscillations with two degrees of freedom, regeneration systems, tuned mass damper
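The amplitude-frequency response the abstract refers to follows from the classic two-degree-of-freedom tuned-mass-damper equations: solving the complex 2×2 system for harmonic forcing gives the steady-state amplitude of the protected mass. The sketch below uses textbook equations with illustrative parameter values (M, K, m, k, c are assumptions, not the authors' design values).

```python
import numpy as np

def main_mass_amplitude(omega, M=10.0, K=4000.0, m=1.0, k=400.0, c=5.0, F0=1.0):
    """Steady-state displacement amplitude |X1| of the protected mass in the
    two-DOF tuned-mass-damper model under a harmonic force F0*cos(wt).
    The absorber (m, k, c) is tuned to sqrt(k/m) = sqrt(K/M) = 20 rad/s."""
    A = np.array([
        [K + k - M * omega**2 + 1j * c * omega, -(k + 1j * c * omega)],
        [-(k + 1j * c * omega),                 k - m * omega**2 + 1j * c * omega],
    ])
    F = np.array([F0, 0.0])
    X = np.linalg.solve(A, F)   # complex displacement amplitudes
    return abs(X[0])
```

Sweeping `omega` shows the characteristic anti-resonance: at the tuned frequency (20 rad/s here) the protected mass barely moves, while the undamped single mass alone would resonate, which is exactly the property an electromagnetic TMD exploits to harvest vibration energy.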
Procedia PDF Downloads 209
4677 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer), with an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction.
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists amongst accuracy, computing resources and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
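The k-mer representation at the heart of the approach is simple to sketch: slide a window of length k along the sequence and count each substring, producing a fixed-alphabet frequency vector that can feed a standard classifier. This is a generic illustration of the representation, not the authors' full pipeline.

```python
from collections import Counter

def kmer_profile(sequence: str, k: int) -> Counter:
    """Count all overlapping k-mers in a DNA sequence. The counts over the
    4**k possible k-mers form the feature vector used for classification."""
    sequence = sequence.upper()
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

profile = kmer_profile("ATGCGATGA", 3)   # 7 overlapping 3-mers; ATG occurs twice
```

For whole genomes, k = 10 (as selected in the study) yields up to 4¹⁰ ≈ 10⁶ features, which is precisely where the computing and explainability trade-offs discussed above arise.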
Procedia PDF Downloads 167
4676 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Darlington Mapiye, Mpho Mokoatle, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer), with an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction.
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists amongst accuracy, computing resources and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
Procedia PDF Downloads 159
4675 Revolutionizing Legal Drafting: Leveraging Artificial Intelligence for Efficient Legal Work
Authors: Shreya Poddar
Abstract:
Legal drafting and revising are recognized as highly demanding tasks for legal professionals. This paper introduces an approach to automate and refine these processes through the use of advanced Artificial Intelligence (AI). The method employs Large Language Models (LLMs), with a specific focus on 'Chain of Thoughts' (CoT) and knowledge injection via prompt engineering. This approach differs from conventional methods that depend on comprehensive training or fine-tuning of models with extensive legal knowledge bases, which are often expensive and time-consuming. The proposed method incorporates knowledge injection directly into prompts, thereby enabling the AI to generate more accurate and contextually appropriate legal texts. This approach substantially decreases the necessity for thorough model training while preserving high accuracy and relevance in drafting. Additionally, the concept of guardrails is introduced. These are predefined parameters or rules established within the AI system to ensure that the generated content adheres to legal standards and ethical guidelines. The practical implications of this method for legal work are considerable. It has the potential to markedly lessen the time lawyers allocate to document drafting and revision, freeing them to concentrate on more intricate and strategic facets of legal work. Furthermore, this method makes high-quality legal drafting more accessible, possibly reducing costs and expanding the availability of legal services. This paper will elucidate the methodology, providing specific examples and case studies to demonstrate the effectiveness of 'Chain of Thoughts' and knowledge injection in legal drafting. The potential challenges and limitations of this approach will also be discussed, along with future prospects and enhancements that could further advance legal work. The impact of this research on the legal industry is substantial. 
The adoption of AI-driven methods by legal professionals can lead to enhanced efficiency, precision, and consistency in legal drafting, thereby altering the landscape of legal work. This research adds to the expanding field of AI in law, introducing a method that could significantly alter the nature of legal drafting and practice.
Keywords: AI-driven legal drafting, legal automation, future of legal work, large language models
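Knowledge injection via prompt engineering, as described above, amounts to assembling a prompt that carries the relevant legal facts, worked examples, and an explicit instruction to reason step by step before drafting. The sketch below shows only this prompt-assembly step; the function name, prompt wording, and guardrail phrasing are illustrative assumptions, not the authors' implementation.

```python
def build_drafting_prompt(task: str, injected_knowledge: list[str],
                          examples: list[str]) -> str:
    """Assemble a drafting prompt that (a) injects domain knowledge directly
    into the context instead of fine-tuning, (b) provides worked examples,
    and (c) asks for Chain-of-Thought reasoning before the draft."""
    knowledge = "\n".join(f"- {fact}" for fact in injected_knowledge)
    shots = "\n\n".join(examples)
    return (
        "You are a legal drafting assistant. Stay within the legal "
        "standards listed below (guardrails).\n\n"
        f"Relevant legal knowledge:\n{knowledge}\n\n"
        f"Worked examples:\n{shots}\n\n"
        "Think step by step: identify the parties, obligations and "
        "applicable clauses, then produce the draft.\n"
        f"Task: {task}\n"
    )
```

The resulting string would be sent to an LLM; because the knowledge lives in the prompt, updating it requires editing text rather than retraining, which is the cost advantage the paper emphasises.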
Procedia PDF Downloads 65
4674 Investigating the Relationship between Moral Hazard and Corporate Governance with Earning Forecast Quality in the Tehran Stock Exchange
Authors: Fatemeh Rouhi, Hadi Nassiri
Abstract:
Earnings forecasts are a key element in economic decisions, but conflicts of interest in financial reporting, complexity, and a lack of direct access to information have led to information asymmetry between individuals within the organization and external investors and creditors. This gives rise to adverse selection and moral hazard in investors' decisions and makes it difficult for users to assess the data directly. In this regard, the role of corporate governance disclosure is crystallized: it includes controls and procedures to ensure that management does not act in its own interests but moves in the direction of maximizing shareholder and company value. Given the importance of companies' earnings forecasts in the capital market and the need to identify the factors influencing them, this study attempts to establish the relationship between moral hazard and corporate governance and the earnings forecast quality of companies operating in the capital market. Drawing on the theoretical basis of the research, two main hypotheses and several sub-hypotheses are presented, which have been examined on the basis of available models using the panel-data method; conclusions are drawn at a confidence level of 95% according to the significance of the model and each independent variable. In examining the models, the Chow test was first used to determine whether the panel-data or the pooled method should be used; the Hausman test was then applied to choose between random effects and fixed effects. The findings show that most of the variables associate moral hazard positively with earnings forecast quality: with increasing moral hazard, the earnings forecast quality of companies listed on the Tehran Stock Exchange increases.
Among the corporate governance variables, board independence has a significant relationship with earnings forecast accuracy and earnings forecast bias, but the relationship between board size and earnings forecast quality is not statistically significant.
Keywords: corporate governance, earning forecast quality, moral hazard, financial sciences
Procedia PDF Downloads 322
4673 Modelling the Effect of Alcohol Consumption on the Accelerating and Braking Behaviour of Drivers
Authors: Ankit Kumar Yadav, Nagendra R. Velaga
Abstract:
Driving under the influence of alcohol impairs driving performance and increases crash risks worldwide. The present study investigated the effect of different Blood Alcohol Concentrations (BAC) on the accelerating and braking behaviour of drivers with the help of driving simulator experiments. Eighty-two licensed Indian drivers drove in the rural road environment designed in the driving simulator at BAC levels of 0.00%, 0.03%, 0.05%, and 0.08%, respectively. Driving performance was analysed with the help of vehicle control performance indicators such as the mean acceleration and mean brake pedal force of the participants. Preliminary analysis reported an increase in mean acceleration and mean brake pedal force with increasing BAC levels. Generalized linear mixed models were developed to quantify the effect of different alcohol levels and explanatory variables such as the driver's age, gender and other driver characteristics on the driving performance indicators. Alcohol use was a significant factor affecting the accelerating and braking performance of the drivers. The acceleration model results indicated that the mean acceleration of the drivers increased by 0.013 m/s², 0.026 m/s² and 0.027 m/s² for the BAC levels of 0.03%, 0.05% and 0.08%, respectively. Results of the brake pedal force model reported that the mean brake pedal force of the drivers increased by 1.09 N, 1.32 N and 1.44 N for the BAC levels of 0.03%, 0.05% and 0.08%, respectively. Age was a significant factor in both models, where a one-year increase in driver age resulted in a 0.2% reduction in mean acceleration and a 19% reduction in mean brake pedal force. This shows that driving experience could compensate for the negative effects of alcohol to some extent while driving. Female drivers were found to accelerate more slowly and brake harder than male drivers, which confirmed that female drivers are more conscious of their safety while driving.
It was observed that drivers who exercised regularly had better control of the accelerator pedal than non-regular exercisers during drunken driving. The findings of the present study reveal that drivers tend to be more aggressive and impulsive under the influence of alcohol, which deteriorates their driving performance. A drunk driving state can be differentiated from a sober driving state by observing the accelerating and braking behaviour of drivers. The conclusions may serve as a reference for countermeasures against drinking and driving and contribute to traffic safety.
Keywords: alcohol, acceleration, braking behaviour, driving simulator
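The reported fixed-effect estimates can be restated as a simple lookup of the predicted shift relative to sober driving. This sketch only tabulates the increments quoted in the abstract; it does not reproduce the full generalized linear mixed models with age, gender and other driver-characteristic terms.

```python
# Fixed-effect shifts reported in the abstract, relative to BAC 0.00%:
# change in mean acceleration (m/s^2) and mean brake pedal force (N).
ACCEL_DELTA = {0.00: 0.0, 0.03: 0.013, 0.05: 0.026, 0.08: 0.027}
BRAKE_DELTA = {0.00: 0.0, 0.03: 1.09, 0.05: 1.32, 0.08: 1.44}

def predicted_shift(bac: float) -> tuple[float, float]:
    """Return the model-estimated (acceleration, brake force) shift for one
    of the tested BAC levels; raises KeyError for untested levels."""
    return ACCEL_DELTA[bac], BRAKE_DELTA[bac]
```

Note how acceleration nearly plateaus between 0.05% and 0.08% while brake force keeps rising, one of the behavioural signatures that could separate drunk from sober driving.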
Procedia PDF Downloads 146
4672 The Feasibility and Usability of Antennas Silence Zone for Localization and Path Finding
Authors: S. Malebary, W. Xu
Abstract:
Antennas are important components that enable transmitting and receiving signals over the air (wireless). The radiation pattern of an omni-directional (i.e., dipole) antenna reflects the variation of the power radiated by the antenna as a function of direction when transmitting. As the performance of an antenna is the same in transmitting and receiving, it also reflects the sensitivity of the antenna in different directions when receiving. The main observation when dealing with omni-directional antennas, regardless of the application, is that they radiate power equally in all directions with reference to the Equivalent Isotropically Radiated Power (EIRP). Disseminating radio-frequency signals in an omni-directional manner forms a doughnut-shaped field with a cone in the middle of the elevation plane (when the antenna is mounted vertically). In this paper, we investigate the existence of this physical phenomenon, namely the silence cone zone (the zone where the radiated power is nulled). First, we give an overview of antenna types and the properties that have the major impact on the shape of the electromagnetic field. Then we model various off-the-shelf dipoles in Matlab based on antenna features (dimensions, gain, operating frequency, etc.) and compare the resulting radiation patterns. After that, we validate the existence of the null zone in omni-directional antennas by conducting experiments and generating waveforms (using USRP1 and USRP2) at various frequencies, using different types of antennas and gains, indoors and outdoors. We capture the generated waveforms around the antennas' null zone in the reactive, near, and far fields with a spectrum analyzer mounted on a drone, using various off-the-shelf antennas. We analyze the captured signals in RF-Explorer and plot the impact on received power and signal amplitude inside and around the null zone.
Finally, the evaluation and measurements confirm the existence of null zones in omni-directional antennas. We plan to extend this work in the near future to investigate the usability of the null zone for applications such as localization and path finding.
Keywords: antennas, amplitude, field regions, frequency, FSPL, omni-directional, radiation pattern, RSSI, silence zone cone
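The silence cone can be seen directly in the textbook far-field pattern of a half-wave dipole, P(θ) ∝ [cos((π/2)·cosθ)/sinθ]², where θ is measured from the dipole axis: the pattern nulls along the axis and peaks broadside. The abstract models dipoles in Matlab; the sketch below is an equivalent minimal Python illustration of the same standard formula, not the authors' simulation.

```python
import math

def dipole_gain(theta_deg: float) -> float:
    """Normalised far-field power pattern of a half-wave dipole,
    P(theta) = [cos((pi/2) cos theta) / sin theta]**2, with theta measured
    from the dipole axis. theta = 0 is the on-axis null (the silence cone
    above a vertically mounted antenna); theta = 90 is the broadside peak."""
    theta = math.radians(theta_deg)
    if abs(math.sin(theta)) < 1e-9:   # on the axis: exact null
        return 0.0
    e_field = math.cos(math.pi / 2 * math.cos(theta)) / math.sin(theta)
    return e_field * e_field
```

Sweeping θ from 0° to 90° reproduces the doughnut cross-section described above: zero gain on the axis rising monotonically to the broadside maximum, which is what the drone-mounted spectrum analyzer measurements probe in practice.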
Procedia PDF Downloads 303
4671 The Role of Disturbed Dry Afromontane Forest of Ethiopia for Biodiversity Conservation and Carbon Storage
Authors: Mindaye Teshome, Nesibu Yahya, Carlos Moreira Miquelino Eleto Torres, Pedro Manuel Villaa, Mehari Alebachew
Abstract:
Arbagugu forest is one of the remnant dry Afromontane forests under severe anthropogenic disturbance in central Ethiopia. Despite this fact, up-to-date information about the status of the forest and its role in climate change mitigation is lacking. In this study, we evaluated the woody species composition, structure, biomass, and carbon stock of this forest. We employed a systematic random sampling design and established fifty-three sample plots (20 × 100 m) to collect the vegetation data. A total of 37 woody species belonging to 25 families were recorded. The densities of seedlings, saplings, and mature trees were 1174, 101, and 84 stems ha⁻¹, respectively. The total basal area of trees with DBH (diameter at breast height) ≥ 2 cm was 21.3 m² ha⁻¹. The characteristic trees of dry Afromontane forest, such as Podocarpus falcatus, Juniperus procera, and Olea europaea subsp. cuspidata, exhibited a fair regeneration status. In contrast, the least abundant species Lepidotrichilia volkensii, Canthium oligocarpum, Dovyalis verrucosa, Calpurnia aurea, and Maesa lanceolata exhibited good regeneration status. Some tree species, such as Polyscias fulva, Schefflera abyssinica, Erythrina brucei, and Apodytes dimidiata, lack regeneration. The total carbon stored in the forest ranged between 6.3 Mg C ha⁻¹ and 835.6 Mg C ha⁻¹, equivalent to 639.6 Mg C ha⁻¹. The forest had a very low woody species composition and diversity. The regeneration study also revealed that a significant number of tree species had an unsatisfactory regeneration status. In addition, the forest had a lower carbon stock density compared with other dry Afromontane forests.
This implies the urgent need for forest conservation and restoration activities by the local government, conservation practitioners, and other concerned bodies to maintain the forest and sustain the various ecosystem goods and services provided by the Arbagugu forest.
Keywords: aboveground biomass, forest regeneration, climate change, biodiversity conservation, restoration
Procedia PDF Downloads 110
4670 Performance of Reinforced Concrete Wall with Opening Using Analytical Model
Authors: Alaa Morsy, Youssef Ibrahim
Abstract:
Earthquakes are among the most catastrophic events, causing enormous damage to property and human life. As part of a safe building design, reinforced concrete walls are provided in structures to reduce lateral displacements under seismic load. Shear walls are also used to resist the lateral loads that may be induced by wind. Reinforced concrete walls in residential buildings may contain openings required for windows in exterior walls or doors in interior walls, or openings of other shapes for architectural purposes. The size, position, and area of these openings may vary from an engineering perspective. Shear walls can sustain damage around the corners of doors and windows because of stress concentrations that develop under vertical or lateral loads. Openings reduce shear wall capacity and may adversely affect both the stiffness of a reinforced concrete wall and the seismic response of the structure. The finite element method, implemented here with the software package ANSYS ver. 12, has become an essential approach for analyzing civil engineering problems numerically: many models with different parameters can be built in a short time, whereas doing so experimentally consumes considerable time and money. A finite element modeling approach has been conducted to study the effect of opening shape, size, and position in RC walls of different thicknesses under axial and lateral static loads. The proposed finite element approach has been verified against an experimental programme from the literature and validated with its variables. Very good agreement has been observed between the model and the experimental results, including load capacity, failure mode, and lateral displacement. A parametric study is then applied to investigate the effect of opening size, shape, and position for different reinforced concrete wall thicknesses. 
The results may be useful for improving existing design models and can be applied in practice, as they satisfy both architectural and structural requirements.
Keywords: ANSYS, concrete walls, openings, out-of-plane behavior, seismic, shear wall
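As a loose illustration of the kind of parametric grid such a study sweeps over (not the authors' ANSYS model, which involves full finite element analysis), the sketch below enumerates combinations of opening size and wall thickness and reports two simple geometric quantities per case: the opening-to-elevation area ratio and the net horizontal cross-section remaining at the opening level. All dimensions, the wall geometry, and the metrics themselves are assumptions for illustration.

```python
# Hedged sketch of a parametric study over wall openings: purely geometric
# bookkeeping, not a finite element analysis. All dimensions are assumed.
from itertools import product

WALL_LENGTH_M = 3.0  # wall length (horizontal), assumed
WALL_HEIGHT_M = 3.0  # storey height, assumed


def opening_metrics(opening_w_m: float, opening_h_m: float, thickness_m: float):
    """Return (opening-area ratio, net horizontal section area at opening level)."""
    gross_elevation = WALL_LENGTH_M * WALL_HEIGHT_M          # wall elevation area, m^2
    ratio = (opening_w_m * opening_h_m) / gross_elevation    # dimensionless
    net_section = (WALL_LENGTH_M - opening_w_m) * thickness_m  # m^2 of concrete left
    return ratio, net_section


# Parametric grid: door- and window-like openings x candidate wall thicknesses
openings = [(0.9, 2.1), (1.2, 1.2), (1.5, 1.5)]  # (width, height) in m
thicknesses = [0.12, 0.15, 0.20]                 # wall thickness in m

for (w, h), t in product(openings, thicknesses):
    ratio, net = opening_metrics(w, h, t)
    print(f"opening {w}x{h} m, t={t} m: ratio={ratio:.2%}, net section={net:.3f} m^2")
```

In the study itself, each grid point would correspond to one ANSYS model, with capacity, failure mode, and lateral displacement extracted from the analysis rather than from geometry alone.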
Procedia PDF Downloads 169