Search results for: modeling and prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5714

734 Vulnerability Assessment of Groundwater Quality Deterioration Using PMWIN Model

Authors: A. Shakoor, M. Arshad

Abstract:

The utilization of groundwater resources for irrigation has increased significantly during the last two decades due to constrained canal water supplies. More than 70% of the farmers in Punjab, Pakistan, depend directly or indirectly on groundwater to meet their crop water demands, and this unchecked paradigm shift has resulted in aquifer depletion and deterioration. Therefore, comprehensive research was carried out in central Punjab, Pakistan, on the spatiotemporal variation in groundwater level and quality. The Processing MODFLOW for Windows (PMWIN) and MT3D (solute transport) models were used to simulate current conditions and to predict groundwater level and quality up to 2030. A comprehensive data set of aquifer lithology, canal network, groundwater level, groundwater salinity, evapotranspiration, groundwater abstraction, recharge, etc. was used in developing the PMWIN model. The model was successfully calibrated and validated with respect to groundwater level for the periods 2003 to 2007 and 2008 to 2012, respectively. The coefficient of determination (R2) and model efficiency (MEF) were calculated as 0.89 and 0.98, respectively, indicating a high level of agreement between the calculated and measured data. For the solute transport model (MT3D), values of the advection and dispersion parameters were used. The model was then run for future scenarios up to 2030, assuming no significant change in climate and a gradually increasing groundwater abstraction rate. The predicted results revealed that the groundwater level would decline at rates of 0.0131 to 1.68 m/year during 2013 to 2030, with the maximum decline on the lower side of the study area, where the canal network is sparse. This lowering of the groundwater level would likely increase tubewell installation and pumping costs.
Similarly, the predicted total dissolved solids (TDS) of the groundwater would increase at rates of 6.88 to 69.88 mg/L/year during 2013 to 2030, with the maximum increase on the lower side. It was found that by 2030, the share of good-quality water would decrease by 21.4%, while marginal- and hazardous-quality water would increase by 19.28% and 2%, respectively. The simulated results showed that the salinity of the study area had increased due to the intrusion of salts. The deterioration of groundwater quality would cause soil salinity and ultimately a reduction in crop productivity. It was concluded from the predicted results of the groundwater model that groundwater quality deteriorates with the depth of the water table, i.e., TDS increases as the groundwater level declines. It is recommended that agronomic and engineering practices, i.e., land leveling, rainwater harvesting, skimming wells, ASR (aquifer storage and recovery) wells, etc., be integrated to improve groundwater management for higher crop production in salt-affected soils.
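The calibration statistics quoted above (R2 and model efficiency) can be reproduced from paired observed and simulated heads. A minimal sketch, assuming MEF refers to the Nash-Sutcliffe efficiency; the well data below are hypothetical, not from the study:

```python
import numpy as np

def r_squared(observed, simulated):
    """Coefficient of determination as the squared Pearson correlation."""
    r = np.corrcoef(observed, simulated)[0, 1]
    return r ** 2

def model_efficiency(observed, simulated):
    """Nash-Sutcliffe model efficiency (MEF): 1 indicates a perfect fit."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    residual = np.sum((observed - simulated) ** 2)
    variance = np.sum((observed - np.mean(observed)) ** 2)
    return 1.0 - residual / variance

# Hypothetical groundwater levels (m) at a few observation wells
obs = [12.1, 13.4, 15.0, 16.2, 18.3]
sim = [12.0, 13.6, 14.8, 16.5, 18.1]
print(round(r_squared(obs, sim), 3), round(model_efficiency(obs, sim), 3))
```

Both statistics are computed per calibration and validation period from the same observed/simulated pairs.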

Keywords: groundwater quality, groundwater management, PMWIN, MT3D model

Procedia PDF Downloads 369
733 Modeling the Effects of Temperature on Air Pollutant Concentration

Authors: Mustapha Babatunde, Bassam Tawabini, Ole John Nielson

Abstract:

Air dispersion (AD) models such as AERMOD are important tools for estimating the environmental impacts of air pollutant emissions into the atmosphere from anthropogenic sources. The outcome of these models is strongly linked to climate conditions such as air temperature, which is expected to differ in the future due to global warming. Given scientific projections of impending changes to the future climate of Saudi Arabia, especially an anticipated temperature rise, there is a potential direct impact on the dispersion patterns predicted by AD models. To our knowledge, no similar studies have been carried out in Saudi Arabia to investigate such an impact. Therefore, this research investigates the effects of temperature change on air quality in the Dammam Metropolitan Area, Saudi Arabia, using AERMOD coupled with station data, with sulphur dioxide (SO2) as a model air pollutant. The research uses the AERMOD model to predict SO2 dispersion trends in the surrounding area. Emissions from five (5) industrial stacks at twenty-eight (28) receptors in the study area were considered for the baseline climate period (2010-2019) and the mid-century future period (2040-2060) under different elevated temperature scenarios (+1 °C, +3 °C and +5 °C) across averaging periods of 1 hr, 4 hr and 8 hr. Results showed that SO2 levels at the receiving sites under current and simulated future climatic conditions fall within the allowable limits of the WHO and KSA air quality standards. Results also revealed that the projected rise in temperature would cause only a mild increase in SO2 concentration levels: the average increases were 0.04%, 0.14%, and 0.23% for temperature increases of 1, 3, and 5 degrees, respectively.
In conclusion, the outcome of this work elucidates the degree to which global warming and climate change affect air quality, and can help policymakers in their decision-making, given the significant health challenges associated with ambient air pollution in Saudi Arabia.

Keywords: air quality, sulphur dioxide, global warming, air dispersion model

Procedia PDF Downloads 126
732 Combining the Fictitious Stress Method and Displacement Discontinuity Method in Solving Crack Problems in Anisotropic Material

Authors: Bahatti̇n Ki̇mençe, Uğur Ki̇mençe

Abstract:

In this study, influence functions of the displacement discontinuity in an anisotropic elastic medium are obtained in order to produce the boundary element equations. A Displacement Discontinuity Method (DDM) formulation is presented with the aim of modeling two-dimensional elastic fracture problems. The formulation is derived by analytical integration of the fundamental solution along a straight-line crack. For this purpose, Kelvin's fundamental solutions for anisotropic media on an infinite plane are used to form dipoles from singular loads, and various combinations of these dipoles yield the influence functions of displacement discontinuity. The study introduces a technique for coupling the Fictitious Stress Method (FSM) and the DDM, and applies it to several examples to demonstrate the effectiveness of the proposed coupling. Displacement discontinuity equations are obtained by using dipole solutions derived from known singular force solutions in an anisotropic medium. The displacement discontinuity method obtained from the solutions of these equations is combined with the fictitious stress method and compared on various examples. Problems with one or more cracks of various geometries in rectangular plates, in finite and infinite regions, under tensile stress were examined with the coupled FSM and DDM in the anisotropic setting, and the effectiveness of the coupled method was demonstrated. Since crack problems can be modeled more easily with the DDM, its use has increased recently. In obtaining the displacement discontinuity equations, Papkovich functions were used following Crouch, and harmonic functions were chosen to satisfy various boundary conditions. A comparison is made between the two indirect boundary element formulations, the DDM and an extension of the FSM, for solving problems involving cracks.
Several numerical examples are presented, and the outcomes are compared with existing analytical or reference results.

Keywords: displacement discontinuity method, fictitious stress method, crack problems, anisotropic material

Procedia PDF Downloads 71
731 Provisional Settlements and Urban Resilience: The Transformation of Refugee Camps into Cities

Authors: Hind Alshoubaki

Abstract:

The world is now confronting a widespread urban phenomenon: refugee camps, which have mostly been established in ‘rushing mode’ to afford refugees temporary settlements that provide minimum levels of safety, security and protection from harsh weather within a very short time period. In fact, these emergency settlements are transforming into permanent ones, since time is a decisive factor in terms of construction and camp age. These factors play an essential role in transforming their temporary character into a permanent one, generating deep modifications to the city’s territorial structure, shaping a new identity and creating a contentious change in the city’s form and history. To achieve a better understanding of the transformation of refugee camps, this study is based on a mixed-methods approach: the qualitative component explores different refugee camps and analyzes their transformation process in terms of population density and changes to the city’s territorial structure and urban features. The quantitative component employs a statistical regression analysis of refugees’ satisfaction within the Zaatari camp in order to predict its future transformation. Refugees’ perceptions of their current conditions affect their satisfaction, which plays an essential role in transforming emergency settlements into permanent cities over time. The analysis covers five main themes: the access and readiness of schools, the dispersion of clinics and shopping centers, the camp infrastructure, the construction materials, and the street networks. The statistical analysis showed that Syrian refugees were not satisfied with their current conditions inside the Zaatari refugee camp and had started implementing changes according to their needs, desires, and aspirations, because they are conscious of the fact that their stay in this settlement will be prolonged.
The case study analyses also showed that neglecting the fact that construction takes time leads to settlements being created with below-minimum standards; these deteriorate into ‘slums,’ which increase crime rates, suicide, drug use and disease, and deeply affect cities’ urban tissue. For this reason, recognizing the ‘temporary-eternal’ character of these settlements is fundamental: refugee camps should be considered from the beginning as permanent cities. This is the key factor in minimizing the trauma of displacement for both refugees and the hosting countries, since providing emergency settlements within a short time period does not have to mean using temporary materials, having a provisional character or creating ‘makeshift cities.’

Keywords: refugee, refugee camp, temporary, Zaatari

Procedia PDF Downloads 122
730 Shape Management Method of Large Structure Based on Octree Space Partitioning

Authors: Gichun Cha, Changgil Lee, Seunghee Park

Abstract:

The objective of this study is to construct a shape management method contributing to the safety of large structures. In Korea, research on shape management is scarce because the technology has only recently been attempted. Terrestrial Laser Scanning (TLS) is used for measurements of large structures. TLS provides an efficient way to actively acquire accurate point clouds of object surfaces or environments. The point clouds provide a basis for rapid modeling in industrial automation, architecture, construction, and the maintenance of civil infrastructure. However, TLS produces a huge amount of point cloud data, and registration, extraction and visualization require the processing of a massive amount of scan data. The octree can be applied to the shape management of large structures because it reduces the size of the scan data while maintaining its attributes. Octree space partitioning generates voxels of 3D space, and each voxel is recursively subdivided into eight sub-voxels. The point cloud of the scan data was converted to voxels and sampled. The experimental site is located at Sungkyunkwan University, and the scanned structure is a steel-frame bridge. The TLS used was a Leica ScanStation C10/C5. The scan data was reduced by 92%, and the octree model was constructed at a resolution of 2 mm. This study presents octree space partitioning for handling point clouds and provides a basis for the shape management of large structures such as double-deck tunnels, buildings and bridges. The research is expected to improve the efficiency of structural health monitoring and maintenance. This work is financially supported by the 'U-City Master and Doctor Course Grant Program' and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2015R1D1A1A01059291).
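The voxel sampling step described above can be sketched as a fixed-depth octree pass: points are snapped to a regular grid and one centroid is kept per occupied cell. This is an illustrative simplification (a full octree subdivides cells recursively and adaptively), and the point cloud below is synthetic:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Snap points to a regular voxel grid and keep one centroid per
    occupied voxel, i.e. sample an octree at a fixed depth."""
    points = np.asarray(points, dtype=float)
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel key and average each group
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    n_voxels = int(inverse.max()) + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

# A dense synthetic cloud collapses to a handful of voxel centroids
rng = np.random.default_rng(0)
cloud = rng.random((10000, 3))           # points in a 1 m cube
reduced = voxel_downsample(cloud, 0.2)   # 0.2 m voxels -> at most 125 cells
print(len(cloud), "->", len(reduced))
```

The reduction ratio depends on the voxel size, which plays the role of the octree resolution (2 mm in the study).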

Keywords: 3D scan data, octree space partitioning, shape management, structural health monitoring, terrestrial laser scanning

Procedia PDF Downloads 292
729 Using Hierarchical Modelling to Understand the Role of Plantations in the Abundance of Koalas, Phascolarctos cinereus

Authors: Kita R. Ashman, Anthony R. Rendall, Matthew R. E. Symonds, Desley A. Whisson

Abstract:

Forest cover is decreasing globally, chiefly due to the conversion of forest to agricultural landscapes. In contrast, the area under plantation forestry is increasing significantly. For wildlife occupying landscapes where native forest is the dominant land cover, plantations generally represent a lower value habitat; however, plantations established on land formerly used for pasture may benefit wildlife by providing temporary forest habitat and increasing connectivity. This study investigates the influence of landscape, site, and climatic factors on koala population density in far south-west Victoria where there has been extensive plantation establishment. We conducted koala surveys and habitat characteristic assessments at 72 sites across three habitat types: plantation, native vegetation blocks, and native vegetation strips. We employed a hierarchical modeling framework for estimating abundance and constructed candidate multinomial N-mixture models to identify factors influencing the abundance of koalas. We detected higher mean koala density in plantation sites (0.85 per ha) than in either native block (0.68 per ha) or native strip sites (0.66 per ha). We found five covariates of koala density and using these variables, we spatially modeled koala abundance and discuss factors that are key in determining large-scale distribution and density of koala populations. We provide a distribution map that can be used to identify high priority areas for population management as well as the habitat of high conservation significance for koalas. This information facilitates the linkage of ecological theory with the on-ground implementation of management actions and may guide conservation planning and resource management actions to consider overall landscape configuration as well as the spatial arrangement of plantations adjacent to the remnant forest.
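A minimal sketch of the N-mixture idea underlying the abundance models above, using the simpler binomial (Royle) variant rather than the multinomial form fitted in the study; the repeat-count data and the `n_max` truncation are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

def nmix_nll(params, counts, n_max=100):
    """Negative log-likelihood of a basic binomial N-mixture model:
    N_i ~ Poisson(lam), y_ij | N_i ~ Binomial(N_i, p)."""
    lam = np.exp(params[0])                 # abundance rate, log link
    p = 1.0 / (1.0 + np.exp(-params[1]))    # detection prob, logit link
    ns = np.arange(n_max + 1)
    log_prior = poisson.logpmf(ns, lam)
    nll = 0.0
    for site in counts:                     # marginalise the latent N_i
        log_lik_n = log_prior + sum(binom.logpmf(y, ns, p) for y in site)
        nll -= np.logaddexp.reduce(log_lik_n)
    return nll

# Hypothetical repeat counts (3 visits) at 5 sites
counts = [[2, 1, 2], [0, 1, 0], [3, 2, 3], [1, 1, 0], [2, 2, 1]]
fit = minimize(nmix_nll, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
lam_hat = np.exp(fit.x[0])
print("estimated abundance rate per site:", round(lam_hat, 2))
```

Covariates of density (as in the study) would enter through a log-linear model on `lam` per site rather than a single shared rate.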

Keywords: abundance modelling, arboreal mammals, plantations, wildlife conservation

Procedia PDF Downloads 110
728 Optimizing Detection Methods for THz Bio-imaging Applications

Authors: C. Bolakis, I. S. Karanasiou, D. Grbovic, G. Karunasiri, N. Uzunoglu

Abstract:

A new approach for efficient detection of THz radiation in biomedical imaging applications is proposed. A double-layered absorber consisting of a 32 nm thick aluminum (Al) metallic layer on a glass medium (SiO2) of 1 mm thickness was fabricated and used to design a fine-tuned absorber through a theoretical and finite element modeling process. The results indicate that the proposed low-cost, double-layered absorber can be tuned via the metal layer's sheet resistance and the thickness of various glass media, taking advantage of the diverse absorption of metal films in the desired THz domain (6 to 10 THz). It was found that the composite absorber could absorb up to 86% (exceeding the 50% previously shown to be the highest achievable with a single thin metal layer) and reflect less than 1% of the incident THz power. This approach will enable monitoring of the transmission coefficient (the THz transmission ‘fingerprint’) of the biosample with high accuracy, while also making the proposed double-layered absorber a good candidate for the active element of a microbolometer pixel. Based on these promising results, a more sophisticated and effective double-layered absorber is under development. The glass medium has been substituted by diluted poly-Si, with a twofold result: an absorption factor of 96% was reached and high TCR properties were obtained. In addition, these results and properties were generalized over the active frequency spectrum. Specifically, through the development of a theoretical equation taking as input any frequency in the IR spectrum (0.3 to 405.4 THz) and giving as output the appropriate thickness of the poly-Si medium, the double-layered absorber retains the ability to absorb 96% and reflect less than 1% of the incident power.
As a result, through that post-optimization process and the spread spectrum frequency adjustment, the microbolometer detector efficiency could be further improved.

Keywords: bio-imaging, fine-tuned absorber, fingerprint, microbolometer

Procedia PDF Downloads 336
727 First-Trimester Screening of Preeclampsia in a Routine Care

Authors: Tamar Grdzelishvili, Zaza Sinauridze

Abstract:

Introduction: Preeclampsia is a complication of the second trimester of pregnancy characterized by high morbidity and multiorgan damage; many complex pathogenic mechanisms are now implicated in this disease (1). Preeclampsia is one of the leading causes of maternal mortality worldwide: about 100,000 women die of preeclampsia every year. It occurs in 3-14% of pregnant women (varying significantly with racial or ethnic origin and geographical region), in a mild form in 75% of cases and in a severe form in 25%. In severe preeclampsia-eclampsia, perinatal mortality increases 5-fold and stillbirth 9.6-fold. Since the only treatment is to end the pregnancy, timely diagnosis and prevention of the disease are essential. Identifying pregnant women at high risk of PE and giving prophylaxis would reduce the incidence of preterm PE. The first-trimester screening model developed by the Fetal Medicine Foundation (FMF), which uses Bayes' theorem to combine maternal characteristics and medical history with measurements of mean arterial pressure, uterine artery pulsatility index, and serum placental growth factor, has been proven effective, with screening performance superior to the traditional risk-factor-based approach for the prediction of PE (2). Methods: Retrospective single-center screening study. The study population consisted of women from the Tbilisi maternity hospital “Pineo medical ecosystem” who met the following criteria: they spoke Georgian, English, or Russian and agreed to participate in the study after discussing informed consent and answering questions. Prior to the study, informed consent forms approved by the Institutional Review Board were obtained from the study subjects. Early assessment of preeclampsia was performed between 11 and 13 weeks of pregnancy.
The following were evaluated: anamnesis, dopplerography of the uterine artery, mean arterial blood pressure, and a biochemical parameter, pregnancy-associated plasma protein A (PAPP-A). Individual risk assessment was performed with the Fast Screen 3.0 software (Thermo Fisher Scientific). Results: A total of 513 women were recruited; over the course of the study, 51 women were diagnosed with preeclampsia (34.5% of the high-risk pregnant women, 6.5% of the low-risk pregnant women; P<0.0001). Conclusions: First-trimester screening combining maternal factors with uterine artery Doppler, blood pressure, and pregnancy-associated plasma protein A is useful for predicting PE in a routine care setting. More patient studies are needed for final conclusions; the research is still ongoing.
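The FMF competing-risks model referenced above is proprietary and considerably more elaborate, but its Bayesian core (prior odds from maternal history, updated by marker likelihood ratios) can be sketched as follows; all numbers below are hypothetical, for illustration only:

```python
def posterior_risk(prior_risk, likelihood_ratios):
    """Bayes update of a prior risk with independent marker likelihood
    ratios: posterior odds = prior odds * product of the LRs."""
    odds = prior_risk / (1.0 - prior_risk)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Hypothetical: a-priori risk of 1 in 100 from maternal history, then LRs
# derived from MAP, uterine artery PI and PAPP-A multiples of the median
risk = posterior_risk(0.01, [2.5, 1.8, 1.4])
print(f"posterior risk: 1 in {round(1 / risk)}")
```

A woman is then classified as high risk if the posterior risk exceeds a chosen cut-off.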

Keywords: first-trimester, preeclampsia, screening, pregnancy-associated plasma protein

Procedia PDF Downloads 67
726 Monitoring of Serological Test of Blood Serum in Indicator Groups of the Population of Central Kazakhstan

Authors: Praskovya Britskaya, Fatima Shaizadina, Alua Omarova, Nessipkul Alysheva

Abstract:

The planned preventive vaccination carried out in the Republic of Kazakhstan has promoted a sustained decrease in the incidence of measles and viral hepatitis B (VHB). People of young, working age predominate among VHB patients. Monitoring of infectious incidence, monitoring of immunization coverage, and random serological surveillance of immunity enable timely identification of pathogen circulation and of the effectiveness of the measures taken, as well as forecasting. Serological blood analysis was conducted in indicator groups of the population of Central Kazakhstan to determine antibody titres for vaccine-preventable infections (measles, viral hepatitis B). Measles antibodies were determined by enzyme-linked assay (ELA) with the "VektoKor" IgG test system ('Vektor-Best' JSC). Antibodies to the HBs antigen of the hepatitis B virus in blood serum were identified by enzyme-linked assay (ELA) with the VektoHBsAg antibody test system ('Vektor-Best' JSC). A result was considered positive if the concentration of IgG to measles virus in the studied sample was 0.18 IU/ml or more; the protective level of anti-HBsAg is 10 mIU/ml. The study of postvaccinal measles immunity showed that seropositive people made up 87.7% of the total number surveyed. The level of postvaccinal immunity to measles differs across age groups: among people older than 56, the percentage of seropositive people was 95.2%; among those aged 15-25, 87.0% were seropositive, and at 36-45, 86.6%. In the 25-35 and 36-45 age groups, the share of seropositive people was approximately the same, 88.5% and 88.8% respectively. People seronegative to the measles virus made up 12.3%, with the largest shares among those aged 36-45 (13.4%) and 15-25 (13.0%).
Analysis of the examined people for postvaccinal immunity to viral hepatitis B showed that only 33.5% of all surveyed had the protective anti-HBsAg level of 10 mIU/ml or more. The biggest share of people protected from the VHB virus was observed in the 36-45 age group, at 60%. In the indicator group above 56, seropositive people made up only 4.8%. A high percentage of seronegative people was observed in all studied age groups, from 40.0% to 95.2%. The group least protected from contracting VHB is people above 56 (95.2% seronegative); the probability of contracting VHB is also high among young people aged 25-35, where the percentage of seronegative people was 80%. Thus, the results of this research testify to the need for serological monitoring of postvaccinal immunity for operational assessment of the epidemiological situation, early identification of changes, and prediction of approaching danger.

Keywords: antibodies, blood serum, immunity, immunoglobulin

Procedia PDF Downloads 245
725 Exploring the Intersection of Accounting, Business, and Economics: Bridging Theory and Practice for Sustainable Growth

Authors: Stephen Acheampong Amoafoh

Abstract:

In today's dynamic economic landscape, businesses face multifaceted challenges that demand strategic foresight and informed decision-making. This abstract explores the pivotal role of financial analytics in driving business performance amidst evolving market conditions. By integrating accounting principles with economic insights, organizations can harness the power of data-driven strategies to optimize resource allocation, mitigate risks, and capitalize on emerging opportunities. This presentation will delve into the practical applications of financial analytics across various sectors, highlighting case studies and empirical evidence to underscore its efficacy in enhancing operational efficiency and fostering sustainable growth. From predictive modeling to performance benchmarking, attendees will gain invaluable insights into leveraging advanced analytics tools to drive profitability, streamline processes, and adapt to changing market dynamics. Moreover, this abstract will address the ethical considerations inherent in financial analytics, emphasizing the importance of transparency, integrity, and accountability in data-driven decision-making. By fostering a culture of ethical conduct and responsible stewardship, organizations can build trust with stakeholders and safeguard their long-term viability in an increasingly interconnected global economy. Ultimately, this abstract aims to stimulate dialogue and collaboration among scholars, practitioners, and policymakers, fostering knowledge exchange and innovation in the realms of accounting, business, and economics. Through interdisciplinary insights and actionable recommendations, participants will be equipped to navigate the complexities of today's business environment and seize opportunities for sustainable success.

Keywords: financial analytics, business performance, data-driven strategies, sustainable growth

Procedia PDF Downloads 40
724 Sustainable Development of Adsorption Solar Cooling Machine

Authors: N. Allouache, W. Elgahri, A. Gahfif, M. Belmedani

Abstract:

Solar radiation is by far the world's largest, most abundant, clean and permanent energy source. The amount of solar radiation intercepted by the Earth is much higher than annual global energy use; the energy available from the sun was about 5,200 times the world's need in 2006. In recent years, many promising technologies have been developed to harness the sun's energy. These technologies help in environmental protection, energy savings, and sustainable development, which are major issues of the 21st century. One of these important technologies is solar cooling systems, which use either absorption or adsorption technology. Solar adsorption cooling systems are a good alternative since they operate with environmentally benign refrigerants that are natural and free from CFCs, and therefore have zero ozone depletion potential (ODP). This study undertakes a numerical analysis of the thermal and solar performance of an adsorption solar refrigerating system using different adsorbent/adsorbate pairs, such as activated carbon AC35/methanol and activated carbon BPL/ammonia. The modeling of the adsorption cooling machine requires the resolution of the equations describing energy and mass transfer in the tubular adsorber, the most important component of the machine. The Wilson and Dubinin-Astakhov models of the solid-adsorbate equilibrium are used to calculate the adsorbed quantity. The porous medium is contained in the annular space, and the adsorber is heated by solar energy. The effects of key parameters on the adsorbed quantity and on the thermal and solar performance are analysed and discussed. The performance of the system depends on the incident global irradiance over the day and on the weather conditions, namely the condenser and evaporator temperatures. The AC35/methanol pair outperforms BPL/ammonia in terms of system performance.
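The Dubinin-Astakhov equilibrium model mentioned above gives the adsorbed volume directly from temperature and pressure. A minimal sketch; the AC35/methanol parameters below are hypothetical placeholders, not the values used in the study:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def dubinin_astakhov(T, P, P_sat, W0, E, n):
    """Adsorbed volume per unit mass of adsorbent from the
    Dubinin-Astakhov equation: W = W0 * exp(-(A/E)**n), with the
    adsorption potential A = R*T*ln(P_sat/P)."""
    A = R * T * math.log(P_sat / P)
    return W0 * math.exp(-((A / E) ** n))

# Hypothetical AC35/methanol parameters (illustrative only)
W0, E, n = 0.425e-3, 7000.0, 2.0       # m3/kg, J/mol, heterogeneity exponent
uptake = dubinin_astakhov(T=300.0, P=5.0e3, P_sat=18.0e3, W0=W0, E=E, n=n)
print(f"adsorbed volume: {uptake * 1e3:.3f} L/kg")
```

Uptake approaches the limiting volume `W0` as the pressure approaches saturation, which is the behaviour the cycle exploits between the adsorption and desorption phases.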

Keywords: activated carbon-methanol pair, activated carbon-ammonia pair, adsorption, performance coefficients, numerical analysis, solar cooling system

Procedia PDF Downloads 66
723 The Mediating Role of Artificial Intelligence (AI) Driven Customer Experience in the Relationship Between AI Voice Assistants and Brand Usage Continuance

Authors: George Cudjoe Agbemabiese, John Paul Kosiba, Michael Boadi Nyamekye, Vanessa Narkie Tetteh, Caleb Nunoo, Mohammed Muniru Husseini

Abstract:

The smartphone industry continues to experience massive growth, evidenced by expanding markets and an increasing number of brands, models and manufacturers. As technology advances rapidly, smartphone manufacturers consistently introduce new innovations to keep up with evolving industry trends and customer demand for more modern devices. This study assessed the influence of artificial intelligence (AI) voice assistants (VAs) on improving customer experience, and thereby on the continued use of mobile brands. Specifically, this article assesses the role of the hedonic, utilitarian, and social benefits provided by AI VAs in customer experience and the intention to continue using mobile phone brands. Using a primary data collection instrument, a quantitative approach was adopted to examine the study's variables. Data from 348 valid responses were analysed using structural equation modeling (SEM) with AMOS version 23. Three main factors were identified that influence customer experience and result in continued usage of mobile phone brands: social benefits, hedonic benefits, and utilitarian benefits. In conclusion, a significant and positive relationship exists between these factors, customer experience, and continued usage of mobile phone brands. The study concludes that mobile brands that invest in delivering positive user experiences are better positioned to increase usage of and preference for their brands. The study recommends that mobile brands research their prospects' and customers' social, hedonic, and utilitarian needs in order to provide the desired products and experiences.

Keywords: artificial intelligence, continuance usage, customer experience, smartphone industry

Procedia PDF Downloads 68
722 Effect of Size and Soil Characteristic on Contribution of Side and Tip Resistance of the Drilled Shafts Axial Load Carrying Capacity

Authors: Mehrak Zargaryaeghoubi, Masood Hajali

Abstract:

Drilled shafts are among the most popular deep foundations because a single shaft can carry the entire load of a large column from a bridge or tall building. A drilled shaft may be an economical alternative to pile foundations because no pile cap is needed, which not only reduces cost but also leaves a rough interface between soil and concrete that mobilizes additional axial resistance. Owing to their large construction sizes, drilled shafts have excellent axial load carrying capacity. Part of the axial capacity of a drilled shaft is resisted by the soil below the tip of the shaft, which is the tip resistance, and the remainder is resisted by the friction developed around the shaft, which is the side resistance. The condition at the bottom of the excavation can affect the end bearing capacity of the drilled shaft, while the type of soil and the size of the shaft can affect the frictional resistance. The main loads applied on drilled shafts are axial compressive loads. It is important to know what percentage of the maximum applied load is shed through side friction and how much is transferred to the base. The axial capacity of a drilled shaft foundation is influenced by the size of the shaft and the soil characteristics. In this study, the effect of size and soil characteristics on the contributions of side resistance and end-bearing capacity is investigated. The study also presents a three-dimensional finite element model of a drilled shaft subjected to axial load using ANSYS. The top displacement and settlement of the drilled shaft are verified against analytical results. The soil profile is given in Table 1, and for a drilled shaft with a 7 ft diameter and 95 ft length the stresses in the z-direction are calculated through the length of the shaft.
From the z-direction stresses along the shaft, the side resistance can be calculated, and from the z-direction stress at the tip, the tip resistance can be calculated. The side and tip resistances for this drilled shaft are compared with analytical results.
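The load split described above can be sketched numerically: the axial force at each depth is the z-direction stress times the cross-sectional area, the drop in force between depths is the load shed through side friction, and the force remaining at the base is the tip resistance. The stress values below are invented for illustration; only the 7 ft diameter and 95 ft length come from the abstract.

```python
import math

def side_and_tip_resistance(depths_ft, sigma_z_psf, diameter_ft):
    """Split an axial load into side and tip components from the
    z-direction stress profile along a drilled shaft."""
    area = math.pi * diameter_ft ** 2 / 4.0          # shaft cross-section, ft^2
    forces = [s * area for s in sigma_z_psf]         # axial force Q(z), lb
    # load shed through skin friction on each segment between stations
    segment_side = [forces[i] - forces[i + 1] for i in range(len(forces) - 1)]
    side = sum(segment_side)                         # total side resistance
    tip = forces[-1]                                 # load reaching the base
    return side, tip

# Hypothetical stress profile for a 7 ft diameter, 95 ft long shaft:
depths = [0, 25, 50, 75, 95]                # ft below top of shaft
sigma = [26000, 21000, 15000, 9000, 4000]   # psf, decreasing with depth
side, tip = side_and_tip_resistance(depths, sigma, 7.0)
```

By construction the two components sum to the applied load at the shaft head, mirroring the load-transfer interpretation in the abstract.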

Keywords: drilled shaft foundation, size and soil characteristic, axial load capacity, finite element

Procedia PDF Downloads 372
721 Risk Assessment of Flood Defences by Utilising Condition Grade Based Probabilistic Approach

Authors: M. Bahari Mehrabani, Hua-Peng Chen

Abstract:

Management and maintenance of coastal defence structures during the expected life cycle have become a real challenge for decision makers and engineers. Accurate evaluation of the current condition and future performance of flood defence structures is essential for effective practical maintenance strategies on the basis of available field inspection data. Moreover, as coastal defence structures age, it becomes more challenging to implement maintenance and management plans to avoid structural failure. Therefore, condition inspection data are essential for assessing damage and forecasting deterioration of ageing flood defence structures in order to keep the structures in an acceptable condition. The inspection data for flood defence structures are often collected using discrete visual condition rating schemes. In order to evaluate future condition of the structure, a probabilistic deterioration model needs to be utilised. However, existing deterioration models may not provide a reliable prediction of performance deterioration for a long period due to uncertainties. To tackle the limitation, a time-dependent condition-based model associated with a transition probability needs to be developed on the basis of condition grade scheme for flood defences. This paper presents a probabilistic method for predicting future performance deterioration of coastal flood defence structures based on condition grading inspection data and deterioration curves estimated by expert judgement. In condition-based deterioration modelling, the main task is to estimate transition probability matrices. The deterioration process of the structure related to the transition states is modelled according to Markov chain process, and a reliability-based approach is used to estimate the probability of structural failure. Visual inspection data according to the United Kingdom Condition Assessment Manual are used to obtain the initial condition grade curve of the coastal flood defences. 
The initial curves are then modified to develop transition probabilities through non-linear regression-based optimisation algorithms. Monte Carlo simulations are then used to evaluate the future performance of the structure on the basis of the estimated transition probabilities. Finally, a case study is given to demonstrate the applicability of the proposed method under no-maintenance and medium-maintenance scenarios. Results show that the proposed method can provide an effective predictive model for various situations in terms of available condition grading data. The proposed model also provides useful information on the time-dependent probability of failure in coastal flood defences.
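The core of the condition-based approach can be sketched as a Markov chain over discrete condition grades plus Monte Carlo sampling. The four-grade scheme and transition probabilities below are hypothetical placeholders, not the values fitted from the UK condition assessment data.

```python
import random

# Illustrative 4-state condition-grade scheme (0 = as new ... 3 = failed).
# These annual transition probabilities are invented for demonstration.
P = [
    [0.90, 0.10, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00],
    [0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 1.00],   # failed state is absorbing
]

def simulate(start_state, years, rng):
    """One Monte Carlo realisation of the Markov deterioration process."""
    state = start_state
    for _ in range(years):
        r, cum = rng.random(), 0.0
        for j, p in enumerate(P[state]):
            cum += p
            if r < cum:
                state = j
                break
    return state

def prob_of_failure(years, runs=20000, seed=1):
    """Estimate the time-dependent probability of reaching the failed grade."""
    rng = random.Random(seed)
    failures = sum(simulate(0, years, rng) == 3 for _ in range(runs))
    return failures / runs
```

A maintenance scenario would be represented by extra transitions back toward better grades; the no-maintenance case above is the simplest configuration.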

Keywords: condition grading, flood defense, performance assessment, stochastic deterioration modelling

Procedia PDF Downloads 225
720 An Effective Modification to Multiscale Elastic Network Model and Its Evaluation Based on Analyses of Protein Dynamics

Authors: Weikang Gong, Chunhua Li

Abstract:

Dynamics plays an essential role in the function exertion of proteins. The elastic network model (ENM), a harmonic-potential-based and cost-effective computational method, is a valuable and efficient tool for characterizing the intrinsic dynamical properties encoded in biomacromolecular structures and has been widely used to detect the large-amplitude collective motions of proteins. The Gaussian network model (GNM) and the anisotropic network model (ANM) are the two most often used ENM models. In recent years, many ENM variants have been proposed. Here, we propose a small but effective modification (denoted as modified mENM) to the multiscale ENM (mENM), in which the least-squares fitting of the weights of the Kirchhoff/Hessian matrices is modified, since the original fit neglects the details of pairwise interactions. We then compare it with the original mENM, the traditional ENM, and the parameter-free ENM (pfENM) on reproducing the dynamical properties of six representative proteins whose molecular dynamics (MD) trajectories are available at http://mmb.pcb.ub.es/MoDEL/. In the results, for B-factor prediction, mENM achieves the best performance among the four ENM models. Additionally, it is noted that with the weights of the multiscale Kirchhoff/Hessian matrices modified, interestingly, the modified mGNM/mANM still performs much better than the corresponding traditional ENM and pfENM models. As to the dynamical cross-correlation map (DCCM) calculation, taking the data obtained from MD trajectories as the standard, mENM performs the worst, while the results produced by the modified mENM and pfENM models are close to those from the MD trajectories, with the latter a little better than the former. Generally, the ANMs perform better than the corresponding GNMs, except for the mENM. Thus, pfANM and the modified mANM, especially the former, have an excellent performance in the dynamical cross-correlation calculation.
Compared with the GNMs (except for mGNM), the corresponding ANMs can capture quite a number of positive correlations for residue pairs separated by nearly the largest distances, which may be due to the anisotropy considered in the ANMs. Furthermore, and encouragingly, the modified mANM displays the best performance in capturing the functional motional modes, followed by the pfANM and traditional ANM models, while mANM fails in all the cases. This suggests that the consideration of long-range interactions is critical for ANM models to produce protein functional motions. Based on these analyses, the modified mENM is a promising method for capturing the multiple dynamical characteristics encoded in protein structures. This work is helpful for strengthening the understanding of the elastic network model and provides a valuable guide for researchers to utilize the model to explore protein dynamics.
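The multiscale construction at the heart of mENM can be illustrated for the GNM case: single-cutoff Kirchhoff matrices are combined as a weighted sum, with the weights obtained (in the paper) from a least-squares fit against experimental B-factors. The sketch below uses arbitrary cutoffs and weights standing in for the fitted ones.

```python
import math

def kirchhoff(coords, cutoff):
    """Standard GNM Kirchhoff (connectivity) matrix for one cutoff:
    off-diagonal -1 for residue pairs within the cutoff, diagonal set
    so each row sums to zero."""
    n = len(coords)
    k = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(coords[i], coords[j]) <= cutoff:
                k[i][j] = k[j][i] = -1.0
    for i in range(n):
        k[i][i] = -sum(k[i][j] for j in range(n) if j != i)
    return k

def multiscale_kirchhoff(coords, cutoffs, weights):
    """Weighted sum of single-cutoff Kirchhoff matrices (the mENM idea);
    the modification discussed above concerns how these weights are fitted."""
    n = len(coords)
    total = [[0.0] * n for _ in range(n)]
    for c, w in zip(cutoffs, weights):
        k = kirchhoff(coords, c)
        for i in range(n):
            for j in range(n):
                total[i][j] += w * k[i][j]
    return total
```

B-factors would then follow from the diagonal of the pseudoinverse of this matrix, which is omitted here to keep the sketch dependency-free.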

Keywords: elastic network model, ENM, multiscale ENM, molecular dynamics, parameter-free ENM, protein structure

Procedia PDF Downloads 113
719 Genomic Resilience and Ecological Vulnerability in Coffea Arabica: Insights from Whole Genome Resequencing at Its Center of Origin

Authors: Zewdneh Zana Zate

Abstract:

The study focuses on the evolutionary and ecological genomics of both wild and cultivated Coffea arabica L. at its center of origin, Ethiopia, aiming to uncover how this vital species may withstand future climate changes. Utilizing bioclimatic models, we project the future distribution of Arabica under varied climate scenarios for 2050 and 2080, identifying potential conservation zones and immediate risk areas. Through whole-genome resequencing of accessions from Ethiopian gene banks, this research assesses genetic diversity and divergence between wild and cultivated populations. It explores relationships, demographic histories, and potential hybridization events among Coffea arabica accessions to better understand the species' origins and its connection to parental species. This genomic analysis also seeks to detect signs of natural or artificial selection across populations. Integrating these genomic discoveries with ecological data, the study evaluates the current and future ecological and genomic vulnerabilities of wild Coffea arabica, emphasizing necessary adaptations for survival. We have identified key genomic regions linked to environmental stress tolerance, which could be crucial for breeding more resilient Arabica varieties. Additionally, our ecological modeling predicted a contraction of suitable habitats, urging immediate conservation actions in identified key areas. This research not only elucidates the evolutionary history and adaptive strategies of Arabica but also informs conservation priorities and breeding strategies to enhance resilience to climate change. By synthesizing genomic and ecological insights, we provide a robust framework for developing effective management strategies aimed at sustaining Coffea arabica, a species of profound global importance, in its native habitat under evolving climatic conditions.

Keywords: Coffea arabica, climate change adaptation, conservation strategies, genomic resilience

Procedia PDF Downloads 31
718 Machine Learning for Exoplanetary Habitability Assessment

Authors: King Kumire, Amos Kubeka

Abstract:

The synergy of machine learning and advances in astronomical technology is giving rise to the new space age, which is marked by better habitability assessments. To initiate this discussion, it should be recorded for definition purposes that the symbiotic relationship between astronomy and improved computing has been code-named the Cis-Astro gateway concept. The cosmological fate of this phrase has been unashamedly borrowed from the cis-lunar gateway template and its associated Lagrange points, which act as an orbital bridge from our planet Earth to the moon. However, for this study, the scientific audience is invited to bridge toward the discovery of new habitable planets. It is imperative to state that cosmic probes of this magnitude can be utilized as the starting nodes of the astrobiological search for galactic life. This research can also assist by acting as the navigation system for future space telescope launches through the delimitation of target exoplanets. The findings and the associated platforms can be harnessed as building blocks for the modeling of climate change on planet Earth. The notion that the human genus may exhaust the resources of planet Earth, or that some catastrophe may render the Earth uninhabitable for humans, explains the need to find an alternative planet to inhabit. The scientific community, through interdisciplinary discussions of the International Astronautical Federation, so far has the common position that engineers can reduce space mission costs by constructing a stable cis-lunar orbit infrastructure for refueling and carrying out other associated in-orbit servicing activities. Similarly, the Cis-Astro gateway can be envisaged as a budget optimization technique that models extra-solar bodies and can facilitate the scoping of future mission rendezvous. It should be registered as well that this broad and voluminous catalog of exoplanets shall be narrowed along the way using machine learning filters.
The gist of this topic revolves around the indirect economic rationale of establishing a habitability scoping platform.

Keywords: machine-learning, habitability, exoplanets, supercomputing

Procedia PDF Downloads 81
717 Machine Learning for Exoplanetary Habitability Assessment

Authors: King Kumire, Amos Kubeka

Abstract:

The synergy of machine learning and advances in astronomical technology is giving rise to the new space age, which is marked by better habitability assessments. To initiate this discussion, it should be recorded for definition purposes that the symbiotic relationship between astronomy and improved computing has been code-named the Cis-Astro gateway concept. The cosmological fate of this phrase has been unashamedly borrowed from the cis-lunar gateway template and its associated Lagrange points, which act as an orbital bridge from our planet Earth to the moon. However, for this study, the scientific audience is invited to bridge toward the discovery of new habitable planets. It is imperative to state that cosmic probes of this magnitude can be utilized as the starting nodes of the astrobiological search for galactic life. This research can also assist by acting as the navigation system for future space telescope launches through the delimitation of target exoplanets. The findings and the associated platforms can be harnessed as building blocks for the modeling of climate change on planet Earth. The notion that the human genus may exhaust the resources of planet Earth, or that some catastrophe may render the Earth uninhabitable for humans, explains the need to find an alternative planet to inhabit. The scientific community, through interdisciplinary discussions of the International Astronautical Federation, so far has the common position that engineers can reduce space mission costs by constructing a stable cis-lunar orbit infrastructure for refueling and carrying out other associated in-orbit servicing activities. Similarly, the Cis-Astro gateway can be envisaged as a budget optimization technique that models extra-solar bodies and can facilitate the scoping of future mission rendezvous. It should be registered as well that this broad and voluminous catalog of exoplanets shall be narrowed along the way using machine learning filters.
The gist of this topic revolves around the indirect economic rationale of establishing a habitability scoping platform.

Keywords: exoplanets, habitability, machine-learning, supercomputing

Procedia PDF Downloads 103
716 Investigating Role of Novel Molecular Players in Forebrain Roof-Plate Midline Invagination

Authors: Mohd Ali Abbas Zaidi, Meenu Sachdeva, Jonaki Sen

Abstract:

In the vertebrate embryo, the forebrain anlagen develops from the anterior-most region of the neural tube which is the precursor of the central nervous system (CNS). The roof plate located at the dorsal midline region of the forebrain anlagen, acts as a source of several secreted molecules involved in patterning and morphogenesis of the forebrain. One such key morphogenetic event is the invagination of the forebrain roof plate which results in separation of the single forebrain vesicle into two cerebral hemispheres. Retinoic acid (RA) signaling plays a key role in this process. Blocking RA signaling at the dorsal forebrain midline inhibits dorsal invagination and results in the absence of certain key features of this region, such as thinning of the neuroepithelium and a lowering of cell proliferation. At present we are investigating the possibility of other signaling pathways acting in concert with RA signaling to regulate this process. We have focused on BMP signaling, which we found to be active in a mutually exclusive domain to that of RA signaling within the roof plate. We have also observed that there is a change in BMP signaling activity on modulation of RA signaling indicating an antagonistic relationship between the two. Moreover, constitutive activation of BMP signaling seems to completely inhibit thinning and partially affect invagination, leaving the lowering of cell proliferation in the midline unaffected. We are employing in-silico modeling as well as molecular manipulations to investigate the relative contribution if any, of regional differences in rates of cell proliferation and thinning of the neuroepithelium towards the process of invagination. We have found expression of certain cell adhesion molecules in forebrain roof-plate whose mRNA localization across the thickness of neuroepithelium is influenced by Bmp and RA signaling, giving regional rigidity to roof plate and assisting invagination. 
We also found expression of certain cytoskeleton modifiers in small, localized domains of the invaginating forebrain roof plate, suggesting that midline invagination is under the control of multiple factors.

Keywords: bone morphogenetic signaling, cytoskeleton, cell adhesion molecules, forebrain roof plate, retinoic acid signaling

Procedia PDF Downloads 145
715 Evaluation of Bucket Utility Truck In-Use Driving Performance and Electrified Power Take-Off Operation

Authors: Robert Prohaska, Arnaud Konan, Kenneth Kelly, Adam Ragatz, Adam Duran

Abstract:

In an effort to evaluate the in-use performance of electrified power take-off (PTO) usage on bucket utility trucks operating under real-world conditions, data from 20 medium- and heavy-duty vehicles operating in California, USA were collected, compiled, and analyzed by the National Renewable Energy Laboratory's (NREL) Fleet Test and Evaluation team. In this paper, duty-cycle statistical analyses of class 5 medium-duty quick-response trucks and class 8 heavy-duty material handler trucks are performed to examine and characterize vehicle dynamics trends and relationships based on collected in-use field data. With more than 100,000 kilometers of driving data collected over 880+ operating days, researchers have developed a robust methodology for identifying PTO operation from in-field vehicle data. Researchers apply this unique methodology to evaluate the performance and utilization of the conventional and electric PTO systems. Researchers also created custom representative drive cycles for each vehicle configuration and performed modeling and simulation activities to evaluate the potential fuel and emissions savings from hybridization of the tractive driveline on these vehicles. The results of these analyses statistically and objectively define the vehicle dynamic and kinematic requirements for each vehicle configuration and show the potential for further system optimization through driveline hybridization. Results are presented in both graphical and tabular formats, illustrating a number of key relationships between parameters observed within the data set that relate specifically to medium- and heavy-duty utility vehicles operating under real-world conditions.
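One simple way to flag candidate PTO events in field data, sketched here as an assumption rather than NREL's actual methodology, is to look for extended periods in which the engine is running but the vehicle is essentially stationary (bucket work happens with the truck parked).

```python
def pto_candidate_windows(time_s, speed_mps, engine_on,
                          speed_eps=0.1, min_duration_s=120.0):
    """Return (start, end) time windows where the engine is on but the
    vehicle is essentially stationary for at least min_duration_s.
    Thresholds are illustrative, not the study's actual values."""
    windows, start = [], None
    for t, v, on in zip(time_s, speed_mps, engine_on):
        idle = on and v <= speed_eps
        if idle and start is None:
            start = t                       # stationary period begins
        elif not idle and start is not None:
            if t - start >= min_duration_s:
                windows.append((start, t))  # long enough: candidate PTO event
            start = None
    if start is not None and time_s[-1] - start >= min_duration_s:
        windows.append((start, time_s[-1]))
    return windows
```

In practice a real classifier would also use PTO-engagement or hydraulic-system signals where available; vehicle speed alone will include ordinary long idles.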

Keywords: drive cycle, heavy-duty (HD), hybrid, medium-duty (MD), PTO, utility

Procedia PDF Downloads 384
714 A Web Service Based Sensor Data Management System

Authors: Rose A. Yemson, Ping Jiang, Oyedeji L. Inumoh

Abstract:

The deployment of wireless sensor networks has increased rapidly; however, the increased capacity and diversity of sensors, with applications ranging from biological and environmental to military, generate a tremendous volume of data, and more attention has been placed on distributed sensing than on how to manage, analyze, retrieve, and understand the data generated. This makes it quite difficult to process live sensor data and to run concurrent control and updates, because sensor data are heavyweight, complex, and slow to process. This work focuses on developing a web service platform for automatic detection of sensors, acquisition of sensor data, storage of sensor data in a database, and processing of sensor data using reconfigurable software components. This work also creates a web service based sensor data management system to monitor the physical movement of an individual wearing a wireless network sensor (SunSPOT). The sensor detects the movement of the individual by sensing the acceleration along the X, Y, and Z axes and then sends the sensed readings to a database interfaced with an internet platform. The collected data determine the posture of the person, such as standing, sitting, or lying down. The system is designed using the Unified Modeling Language (UML) and implemented using Java, JavaScript, HTML, and MySQL. The system allows an individual to be closely monitored in real time and their physical activity details to be obtained without being physically present for in-situ measurement, which enables remote monitoring instead of time-consuming in-person checks. These details can help in evaluating an individual's physical activity and generating feedback on medication. They can also help in keeping track of any mandatory physical activities the individual is required to perform. These evaluations and feedback can help maintain a better health status for the individual and provide improved health care.
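Posture determination from three-axis acceleration can be sketched with a simple gravity-direction rule: when the wearer is still, the accelerometer reads roughly 1 g along whichever body axis points down. The axis conventions and thresholds below are illustrative assumptions, and the actual system is implemented in Java rather than Python.

```python
import math

def classify_posture(ax, ay, az):
    """Toy posture rule for a sensor worn upright on the torso.
    Assumed conventions: y = body's vertical axis, z = chest-back axis."""
    mag = math.sqrt(ax * ax + ay * ay + az * az)
    if mag < 1e-6:
        return "unknown"            # no signal (e.g. free fall artifact)
    # normalise so each component is the cosine of the angle with gravity
    ny, nz = ay / mag, az / mag
    if abs(ny) > 0.8:               # gravity along the body's vertical axis
        return "standing"
    if abs(nz) > 0.8:               # gravity through the chest-back axis
        return "lying down"
    return "sitting"                # intermediate torso tilt
```

A production classifier would average over a short window and handle motion (non-gravity acceleration) explicitly; this static rule only illustrates the mapping from axis readings to the three postures named in the abstract.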

Keywords: HTML, Java, JavaScript, MySQL, SunSPOT, UML, web-based, wireless network sensor

Procedia PDF Downloads 205
713 Numerical Simulation of Convective and Transport Processes in the Nocturnal Atmospheric Surface Layer

Authors: K. R. Sreenivas, Shaurya Kaushal

Abstract:

After sunset, under calm and clear-sky nocturnal conditions, the air layer near the surface containing aerosols cools through radiative processes to the upper atmosphere. Due to this cooling, the surface air-layer temperature can fall 2-6 °C below the ground-surface temperature. This unstable convection layer is capped on top by a stable inversion layer. Radiative divergence, along with the convection within the surface layer, governs the vertical transport of heat and moisture. The microphysics in this layer has implications for the occurrence and growth of the fog layer. This particular configuration, featuring a convective mixed layer beneath a stably stratified inversion layer, exemplifies a classic case of penetrative convection. In this study, we conduct numerical simulations of penetrative convection in the nocturnal atmospheric surface layer and elucidate its relevance to the dynamics of fog layers. We employ field and laboratory measurements of aerosol number density to model the strength of the radiative cooling. Our analysis encompasses horizontally averaged vertical profiles of temperature, density, and heat flux. The energetic incursion of air from the mixed layer into the stable inversion layer across the interface results in entrainment and growth of the mixed layer, the modeling of which is the key focus of our investigation. In our research, we ascertain the appropriate length scale to employ in the Richardson number correlation, which allows us to estimate the entrainment rate and model the growth of the mixed layer. Our analysis of the mixed layer and the entrainment zone reveals close alignment with previously reported laboratory experiments on penetrative convection. Additionally, we demonstrate how the aerosol number density influences the growth or decay of the mixed layer.
Furthermore, our study suggests that the presence of fog near the ground surface can induce extensive vertical mixing, a phenomenon observed in field experiments.
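The Richardson-number entrainment closure mentioned above can be sketched as follows, using the common parameterisation w_e = A·w*/Ri with the convective velocity scale w* = (B₀h)^(1/3) and a bulk Richardson number Ri = N²h²/w*². All parameter values are illustrative, not those fitted in the study.

```python
def mixed_layer_growth(h0, hours, dt=60.0,
                       B0=2e-4,   # surface buoyancy flux, m^2/s^3 (assumed)
                       N2=1e-4,   # stratification of the inversion, s^-2
                       A=0.2):    # entrainment coefficient (assumed)
    """Grow a convective mixed layer of depth h into a stable inversion
    with the closure dh/dt = w_e = A * w_star / Ri (explicit Euler)."""
    h, t = h0, 0.0
    while t < hours * 3600.0:
        wstar = (B0 * h) ** (1.0 / 3.0)      # convective velocity scale
        ri = N2 * h * h / (wstar * wstar)    # bulk Richardson number
        h += A * wstar / ri * dt             # entrainment deepens the layer
        t += dt
    return h
```

This closure reduces to dh/dt = A·B₀/(N²h), so the depth follows h² = h₀² + 2AB₀t/N², and the numerical result can be checked against that form.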

Keywords: inversion layer, penetrative convection, radiative cooling, fog occurrence

Procedia PDF Downloads 60
712 Process Evaluation for a Trienzymatic System

Authors: C. Müller, T. Ortmann, S. Scholl, H. J. Jördening

Abstract:

Multienzymatic catalysis can be used as an alternative to chemical synthesis or hydrolysis of polysaccharides for the production of high-value oligosaccharides from cheap resources such as sucrose. However, the development of multienzymatic processes is complex, especially with respect to conditions suitable for enzymes originating from different organisms. Furthermore, an optimal configuration of the catalysts in a reaction cascade has to be found. These challenges can be approached by design of experiments. The system investigated in this study is a trienzymatically catalyzed reaction producing laminaribiose from sucrose, comprising covalently immobilized sucrose phosphorylase (SP), glucose isomerase (GI) and laminaribiose phosphorylase (LP). Operational windows determined with design of experiments and kinetic data of the enzymes were used to optimize the enzyme ratio for maximum product formation and minimal production of byproducts. After adjustment of the enzyme activity ratio to 1:1.74:2.23 (SP:LP:GI), different process options were investigated in silico. The options considered included substrate dependency, the use of glucose as a co-substrate, and substitution of glucose isomerase by glucose addition. Modeling of batch operation in a stirred tank reactor led to yields of 44.4%, whereas operation in a continuous stirred tank reactor resulted in product yields of 22.5%. The maximum yield in a bienzymatic system comprising sucrose phosphorylase and laminaribiose phosphorylase was 67.7%, with sucrose and different amounts of glucose as substrate. The experimental data were in good agreement with the process model for batch operation. Continuous operation will be investigated in further studies. Simulation of the process options enabled us to compare various operational modes with regard to aspects such as cost efficiency, with a minimum of expensive and time-consuming practical experiments.
This gives us more flexibility in process implementation and allows us, for example, to change the production goal from laminaribiose to higher oligosaccharides.
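The in-silico screening of process options can be illustrated with a toy batch model of the cascade (SP splits sucrose into glucose-1-phosphate and fructose, GI isomerises fructose to glucose, LP condenses glucose-1-phosphate and glucose to laminaribiose) using Michaelis-Menten kinetics. The rate constants are invented and the bi-substrate synthesis step is crudely approximated, so this is a sketch of the modeling approach, not the calibrated process model.

```python
def simulate_cascade(s0=100.0, hours=24.0, dt=1.0,
                     v1=2.0, k1=20.0,   # sucrose phosphorylase (assumed)
                     v3=3.0, k3=15.0,   # glucose isomerase (assumed)
                     v2=2.5, k2=25.0):  # laminaribiose phosphorylase (assumed)
    """Explicit-Euler batch simulation of the trienzymatic cascade.
    Concentrations in arbitrary units (e.g. mM); returns final state."""
    suc, g1p, fru, glc, lam = s0, 0.0, 0.0, 0.0, 0.0
    t = 0.0
    while t < hours:
        r1 = v1 * suc / (k1 + suc)                        # sucrose phosphorolysis
        r3 = v3 * fru / (k3 + fru)                        # fructose -> glucose
        lim = min(g1p, glc)                               # crude bi-substrate limit
        r2 = v2 * lim / (k2 + lim)                        # laminaribiose synthesis
        suc -= r1 * dt
        g1p += (r1 - r2) * dt
        fru += (r1 - r3) * dt
        glc += (r3 - r2) * dt
        lam += r2 * dt
        t += dt
    return {"sucrose": suc, "laminaribiose": lam, "yield": lam / s0}
```

Changing the enzyme ratio amounts to scaling v1, v2, v3, which is how the different activity ratios and process options could be compared numerically.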

Keywords: design of experiments, enzyme kinetics, multi-enzymatic system, in silico process development

Procedia PDF Downloads 327
711 Numerical Analysis of Gas-Particle Mixtures through Pipelines

Authors: G. Judakova, M. Bause

Abstract:

The ability to model and numerically simulate natural gas flow in pipelines has become highly important for the design of pipeline systems. Understanding the formation of hydrate particles and their dynamical behavior is of particular interest, since these processes govern the operating properties of the systems and are responsible for system failures through clogging of the pipelines under certain conditions. Mathematically, natural gas flow can be described by multiphase flow models. Using the two-fluid modeling approach, the gas phase is modeled by the compressible Euler equations and the particle phase by the pressureless Euler equations. The numerical simulation of compressible multiphase flows is an important research topic. It is well known that for nonlinear fluxes, even for smooth initial data, discontinuities in the solution are likely to occur in finite time; they are called shock waves or contact discontinuities. For hyperbolic and singularly perturbed parabolic equations, the standard application of the Galerkin finite element method (FEM) leads to spurious oscillations (the Gibbs phenomenon). In our approach, we use a stabilized FEM, the streamline upwind Petrov-Galerkin (SUPG) method, in which artificial diffusion acting only in the direction of the streamlines is added, together with a special treatment of the boundary conditions in the inviscid convective terms. Numerical experiments show that the numerical solution obtained and stabilized by SUPG captures discontinuities or steep gradients of the exact solution in layers. However, within these layers the approximate solution may still exhibit overshoots or undershoots. To suitably reduce these artifacts, we add a discontinuity-capturing (shock-capturing) term. The performance properties of our numerical scheme are illustrated for a two-phase flow problem.
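The stabilization issue can be illustrated in one dimension: for steady advection-diffusion at a cell Peclet number above 1, a central discretization (which mimics standard Galerkin) oscillates, while adding streamline-type artificial diffusion, here via first-order upwinding as a simpler stand-in for SUPG, restores a monotone solution.

```python
def solve_advection_diffusion(n=20, u=1.0, eps=0.005, upwind=False):
    """Steady 1-D advection-diffusion u*phi' = eps*phi'' on [0,1] with
    phi(0)=0, phi(1)=1, discretized on n cells and solved with the
    Thomas algorithm. Central differencing oscillates when the cell
    Peclet number u*h/(2*eps) exceeds 1; upwinding adds artificial
    diffusion in the flow direction, as SUPG does for FEM."""
    h = 1.0 / n
    a, b, c, d = [], [], [], []              # sub, diag, super, rhs
    for _ in range(n - 1):                   # interior nodes 1..n-1
        if upwind:
            a.append(-(eps / h**2 + u / h))
            b.append(2 * eps / h**2 + u / h)
            c.append(-eps / h**2)
        else:
            a.append(-(eps / h**2 + u / (2 * h)))
            b.append(2 * eps / h**2)
            c.append(-(eps / h**2 - u / (2 * h)))
        d.append(0.0)
    d[-1] -= c[-1] * 1.0                     # impose phi(1) = 1
    for i in range(1, n - 1):                # Thomas forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    phi = [0.0] * (n - 1)                    # back substitution
    phi[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        phi[i] = (d[i] - c[i] * phi[i + 1]) / b[i]
    return [0.0] + phi + [1.0]
```

With the defaults the cell Peclet number is 5, so the central solution alternates in sign near the outflow layer while the upwind solution is monotone, which is exactly the behavior SUPG stabilization addresses (at higher order).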

Keywords: two-phase flow, gas-particle mixture, inviscid two-fluid model, Euler equations, finite element method, streamline upwind Petrov-Galerkin, shock capturing

Procedia PDF Downloads 301
710 Prediction of Pile-Raft Responses Induced by Adjacent Braced Excavation in Layered Soil

Authors: Linlong Mu, Maosong Huang

Abstract:

For excavations in urban areas, the soil deformation induced by the excavation often causes damage to the surrounding structures. Displacement control therefore becomes a critical indicator in foundation design for protecting the surrounding structures. Evaluating the damage potential of the surrounding structures induced by excavations usually depends on the finite element method (FEM) because of the complexity of the excavation and the variety of the surrounding structures. Moreover, evaluating the influence of an excavation on surrounding structures is a three-dimensional problem, and it is now well recognized that the small-strain behaviour of the soil influences the responses of the excavation significantly. Three-dimensional FEM considering the small-strain behaviour of the soil is very complex and hard for engineers to use. Thus, it is important to obtain a simplified method for engineers to predict the influence of excavations on the surrounding structures. Based on large-scale finite element calculations with a small-strain soil model coupled with inverse analysis, an empirical method is proposed to calculate the three-dimensional soil movement induced by a braced excavation. The empirical method is able to capture the small-strain behaviour of the soil and is suitable for use in layered soil. The free-field soil movement is then applied to the pile to calculate the responses of the pile in both the vertical and horizontal directions. The asymmetric solutions for problems in a layered elastic half-space are employed to solve the interactions between soil points. Both vertical and horizontal pile responses are solved through the finite difference method based on elastic theory. Interactions among the nodes along a single pile, pile-pile interactions, pile-soil-pile interactions and soil-soil interactions are counted to improve the calculation accuracy of the method.
For passive piles, the shadow effects are also calculated in the method. Finally, the restrictions of the raft on the piles and the soils are summarized as follows: (1) the summation of the internal forces between the elements of the raft and the elements of the foundation, including piles and soil surface elements, is equal to 0; (2) the deformations of the pile heads or of the soil surface elements are the same as the deformations of the corresponding elements of the raft. Validation is carried out by comparing the results from the proposed method with results from model tests, FEM and the existing literature. From the comparisons, it can be seen that the results from the proposed method fit the results from the other methods very well. The method proposed herein is suitable for predicting the responses of a pile-raft foundation induced by braced excavation in layered soil, in both the vertical and horizontal directions, when the deformation is small. However, more data are needed to verify the method before it can be used in practice.
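As a toy illustration of the free-field input to such an analysis, the sketch below evaluates a Gaussian-shaped settlement trough behind the wall, of the kind often used in empirical ground-movement methods. The shape and all parameter values are assumptions for demonstration, not the empirical formula calibrated in the paper.

```python
import math

def surface_settlement(x, delta_max=0.03, x_peak=5.0, width=10.0):
    """Illustrative free-field surface settlement (m) at distance x (m)
    behind a braced excavation: a Gaussian trough peaking at x_peak.
    All parameters are hypothetical."""
    return delta_max * math.exp(-((x - x_peak) ** 2) / (2.0 * width ** 2))

# greenfield movement imposed on a pile located 8 m behind the wall
movement_at_pile = surface_settlement(8.0)
```

In the full method this free-field movement (in three dimensions, and varying with depth) is imposed on the pile-raft system, whose response is then solved by finite differences with the interaction terms described above.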

Keywords: excavation, pile-raft foundation, passive piles, deformation control, soil movement

Procedia PDF Downloads 222
709 Influence of Intra-Yarn Permeability on Mesoscale Permeability of Plain Weave and 3D Fabrics

Authors: Debabrata Adhikari, Mikhail Matveev, Louise Brown, Andy Long, Jan Kočí

Abstract:

A good understanding of the mesoscale permeability of complex architectures in fibrous porous preforms is of particular interest for achieving efficient and cost-effective resin impregnation in liquid composite molding (LCM). Fabrics used in structural reinforcements are typically woven or stitched. 3D fabric reinforcement is of particular interest because of the versatility of the weaving pattern, with binder yarn and in-plane yarn arrangements that make it possible to manufacture thick composite parts, overcome limitations related to delamination, and improve toughness. To predict permeability based on the available pore spaces between yarns, unit-cell-based computational fluid dynamics models employing the Stokes-Darcy formulation have been used. Typically, the preform consists of an arrangement of yarns with spacing on the order of millimeters, wherein each yarn consists of thousands of filaments with spacing on the order of micrometers. During infusion, the fluid flow exchanges mass between the intra-yarn and inter-yarn channels, meaning there is no dead end of flow between the mesopores in the inter-yarn space and the micropores within the yarn. Several studies have employed the Brinkman equation to account for flow through dual-scale porosity reinforcements when estimating their permeability. Furthermore, to reduce the computational effort of modeling dual-scale flow, a scale separation criterion based on the ratio of yarn permeability to yarn spacing has been proposed to distinguish the dual-scale flow regime from the regime where micro-scale flow is negligible for the prediction of mesoscale permeability. In the present work, the influence of intra-yarn permeability on mesoscale permeability has been investigated through a systematic study of weft and warp yarn spacing in plain weave fabric, as well as of the binder yarn position and the number of in-plane yarn layers in 3D weave fabric.
The permeability tensor has been estimated using an OpenFOAM-based model for the various weave patterns, with idealized yarn geometry implemented using the open-source software TexGen. Additionally, a scale separation criterion has been established based on various configurations of yarn permeability for the 3D fabric, with both isotropic and anisotropic yarns described by Gebart's model. It was observed that the mesoscale permeability Kxx varies within 30% when isotropic porous yarns are considered for a 3D fabric with binder yarn. The permeability model developed in this study will be used for multi-objective optimization of the preform mesoscale geometry in terms of yarn spacing, binder pattern, and number of layers, with the aim of improving permeability and reducing void content during the LCM process.
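Since the yarns are themselves treated as porous media, their intra-yarn permeability can be estimated analytically. As a rough illustrative sketch (not the authors' implementation), Gebart's model for idealized fiber packings gives the axial and transverse permeabilities of a yarn from its fiber radius and fiber volume fraction; the packing constants below follow Gebart's original quadratic and hexagonal arrangements:

```python
import math

def gebart_permeability(r_fiber, vf, packing="quadratic"):
    """Axial and transverse permeability of a fiber bundle (Gebart's model).

    r_fiber : fiber radius in m
    vf      : fiber volume fraction within the yarn (0 < vf < vf_max)
    """
    if packing == "quadratic":
        c = 57.0                                        # axial shape factor
        c1 = 16.0 / (9.0 * math.pi * math.sqrt(2.0))    # transverse constant
        vf_max = math.pi / 4.0                          # max packing fraction
    else:  # hexagonal packing
        c = 53.0
        c1 = 16.0 / (9.0 * math.pi * math.sqrt(6.0))
        vf_max = math.pi / (2.0 * math.sqrt(3.0))
    k_axial = (8.0 * r_fiber**2 / c) * (1.0 - vf) ** 3 / vf**2
    k_trans = c1 * (math.sqrt(vf_max / vf) - 1.0) ** 2.5 * r_fiber**2
    return k_axial, k_trans

# e.g. 5 um fibers at 50% packing: transverse flow is much harder than axial
ka, kt = gebart_permeability(5e-6, 0.5)
```

For an anisotropic yarn, k_axial and k_trans would populate the diagonal of the local yarn permeability tensor handed to the Brinkman solver; an isotropic yarn would use a single scalar value instead.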

Keywords: permeability, 3D fabric, dual-scale flow, liquid composite molding

Procedia PDF Downloads 87
708 Energy Efficient Building Design in Nigeria: An Assessment of the Effect of the Sun on Energy Consumption in Residential Buildings

Authors: Ekele T. Ochedi, Ahmad H. Taki, Birgit Painter

Abstract:

The effect of the sun and its path on thermal comfort and energy consumption in residential buildings in tropical climates is a serious concern for designers, building owners, and users. Passive design approaches based on the sun and its path have been identified as a means of reducing energy consumption and enhancing thermal comfort in buildings worldwide; hence, a thorough understanding of the sun path is key to achieving this. This is particularly necessary in Nigeria due to growing energy needs, poor energy supply and distribution, energy poverty, and over-dependence on electric generators for power supply. These challenges call for a change in the approach to energy-related issues, especially with regard to buildings. The aim of this study is to explore the influence of building orientation, glazing, and the use of shading devices on residential buildings in Nigeria, in order to provide data that will guide designers in the design of energy-efficient residential buildings. The paper used EnergyPlus to analyze a typical semi-detached residential building in Lokoja, Nigeria, using hourly weather data for a period of 10 years. Building performance was studied, along with possible improvements from different orientations, glazing types, and shading devices. The simulation results show reductions in energy consumption in response to changes in building orientation, glazing type, and the use of shading devices. The results indicate a 29.45% reduction in solar gains and a 1.90% reduction in annual operative temperature using natural ventilation only. This shows significant potential to reduce energy consumption and improve occupants' well-being through proper building orientation, glazing, and appropriate shading devices on the building envelope. The study concludes that for a significant reduction in the total energy consumption of residential buildings, the design should consider multiple design options rather than concentrating on one or a few building elements.
Moreover, the investigation confirms that energy performance modeling can be used by building designers to take advantage of the sun and to evaluate various design options.
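The sun-path reasoning behind the orientation study can be sketched with standard solar geometry. The snippet below is an illustrative approximation (not part of the EnergyPlus workflow) that uses Cooper's declination formula to estimate the solar-noon altitude for Lokoja, whose latitude is assumed here to be about 7.8°N:

```python
import math

def solar_declination(day_of_year):
    """Cooper's approximation of the solar declination, in degrees."""
    return 23.45 * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year)))

def noon_altitude(latitude_deg, day_of_year):
    """Solar altitude angle at solar noon, in degrees."""
    decl = solar_declination(day_of_year)
    return 90.0 - abs(latitude_deg - decl)

# Lokoja sits close to the equator, so the noon sun stays high all year,
# which is why glazing orientation and shading dominate the solar-gain picture.
alt_equinox = noon_altitude(7.8, 81)    # around the March equinox
alt_june = noon_altitude(7.8, 172)      # around the June solstice
```

A designer can sweep such angles over the year to decide which facades need shading devices before committing to a full dynamic simulation.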

Keywords: energy consumption, energy-efficient buildings, glazing, thermal comfort, shading devices, solar gains

Procedia PDF Downloads 202
707 Non-linear Model of Elasticity of Compressive Strength of Concrete

Authors: Charles Horace Ampong

Abstract:

Non-linear models have been found useful in modeling the elasticity (a measure of the degree of responsiveness) of a dependent variable with respect to a set of independent variables, ceteris paribus. This constant-elasticity principle was applied to the dependent variable (compressive strength of concrete, in MPa), which was found to be non-linearly related to the independent variable (water-cement ratio, in kg/m3) for given ages of concrete in days (3, 7, 28) at different levels of the admixtures Superplasticizer (in kg/m3), Blast Furnace Slag (in kg/m3), and Fly Ash (in kg/m3). The levels of the admixtures were categorized as: S1 = some Superplasticizer added and S0 = none added; B1 = some Blast Furnace Slag added and B0 = none added; F1 = some Fly Ash added and F0 = none added. The number of observations (samples) used for the research was one hundred and thirty-two (132) in all. For Superplasticizer, compressive strength of concrete was more elastic with regard to water-cement ratio at the S1 level than at the S0 level for concrete ages of 3, 7, and 28 days. For Blast Furnace Slag, compressive strength with regard to water-cement ratio was more elastic at the B0 level than at the B1 level for concrete ages of 3, 7, and 28 days. For Fly Ash, compressive strength with regard to water-cement ratio was more elastic at the F0 level than at the F1 level for ages of 3, 7, and 28 days. The research also tested different combinations of the levels of Superplasticizer, Blast Furnace Slag, and Fly Ash. Compressive strength elasticity with regard to water-cement ratio was lowest (elasticity = -1.746) with the combination of S0, B0, and F0 for a concrete age of 3 days, followed by an elasticity of -1.611 with the same combination for a concrete age of 7 days; the highest was an elasticity of -1.414 with the combination of S0, B0, and F0 for a concrete age of 28 days.
Based on the preceding outcomes, three (3) non-linear model equations were formulated for predicting the output elasticity of compressive strength of concrete (in %) or the value of compressive strength of concrete (in MPa) with regard to the water-cement ratio. The model equations correspond to the three ages of concrete under investigation, namely 3, 7, and 28 days. The three models showed that higher elasticity translates into higher compressive strength, and they revealed a trend of increasing concrete strength from 3 to 28 days for a given water-cement ratio. Using the models, an increasing modulus of elasticity from 3 to 28 days was deduced.
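The constant-elasticity specification behind such models is the standard log-log form ln(fc) = a + b·ln(w/c), where the slope b is the elasticity. A minimal sketch on synthetic data (with a made-up elasticity of -1.6 rather than the study's fitted values) shows how the elasticity is recovered by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic water-cement ratios and strengths with a known elasticity of -1.6
true_a, true_b = 4.5, -1.6
wc = rng.uniform(0.4, 0.8, size=132)              # 132 samples, as in the study
log_fc = true_a + true_b * np.log(wc) + rng.normal(0, 0.02, size=132)

# constant-elasticity model: ln(fc) = a + b * ln(w/c); the slope b IS the elasticity
b, a = np.polyfit(np.log(wc), log_fc, 1)

# interpretation: a 1% increase in w/c changes strength by roughly b percent
strength_pred = np.exp(a) * wc ** b               # back-transformed prediction
```

Fitting this form separately for the 3-, 7-, and 28-day subsets yields one elasticity per age, mirroring the study's three model equations.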

Keywords: concrete, compressive strength, elasticity, water-cement

Procedia PDF Downloads 286
706 Seismic Performance of Highway Bridges with Partially Self-Centering Isolation Bearings against Near-Fault Ground Motions

Authors: Shengxin Yu

Abstract:

Earthquakes can cause varying degrees of damage to building and bridge structures. Traditional laminated natural rubber bearings (NRBs) exhibit inadequate energy dissipation and restraint, particularly under near-fault ground motions, resulting in excessive displacements of the superstructure. This paper presents a composite natural rubber bearing (NFUD-NRB) incorporating two types of shape memory alloy (SMA) U-shaped dampers (UDs). The bearing exhibits adjustable behavior, predominantly characterized by partial self-centering and multi-level energy dissipation, provided by nickel-titanium-based SMA (NiTi-SMA) and iron-based SMA (Fe-SMA) UDs. The hysteresis characteristics of the NFUD-NRB can be tailored by adjusting the configuration of the NiTi-SMA and Fe-SMA UDs. First, the geometric configuration and working principle of the proposed bearing are introduced. The modeling strategy for the bearing is then validated against existing experimental results, and parameterized numerical simulations are performed to investigate the partially self-centering behavior of the NFUD-NRB. The findings indicate that the NFUD-NRB attains the anticipated nonlinear behavior and delivers adequate energy dissipation. Finally, the contribution of the NFUD-NRB to the seismic resilience of highway bridges is examined using the OpenSees software, with particular emphasis on its performance under near-fault ground motions. System-level analysis reveals that bridge systems equipped with NFUD-NRBs exhibit satisfactory residual deformations and higher energy dissipation than those equipped with traditional NRBs. Moreover, the NFUD-NRB markedly mitigates the detrimental effects of near-fault ground motions on the main structure of bridges.
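A common way to quantify the energy-dissipation comparison made in the system-level analysis is the equivalent viscous damping extracted from a bearing's force-displacement loop. The sketch below is a generic post-processing utility (not the authors' OpenSees model): it integrates a closed hysteresis loop with the shoelace formula and converts the dissipated energy into an equivalent damping ratio:

```python
import math

def equivalent_damping(disp, force):
    """Equivalent viscous damping ratio of one closed hysteresis loop.

    disp, force : sequences tracing the loop in order (last point joins first)
    """
    n = len(disp)
    # shoelace formula: the enclosed loop area is the energy dissipated per cycle
    e_d = 0.5 * abs(sum(disp[i] * force[(i + 1) % n] - disp[(i + 1) % n] * force[i]
                        for i in range(n)))
    # elastic strain energy at peak response
    e_s = 0.5 * max(abs(d) for d in disp) * max(abs(f) for f in force)
    return e_d / (4.0 * math.pi * e_s)

# idealized rectangular loop: the upper bound of hysteretic damping
xi = equivalent_damping([-1, 1, 1, -1], [-1, -1, 1, 1])   # -> 2/pi, about 0.637
```

A partially self-centering flag-shaped loop encloses less area than the rectangle but returns near zero residual displacement, which is the trade-off the NFUD-NRB is designed to balance.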

Keywords: partially self-centering behavior, energy dissipation, natural rubber bearing, shape memory alloy, U-shaped damper, numerical investigation, near-fault ground motion

Procedia PDF Downloads 46
705 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other; this is also referred to as between-class imbalance. Traditional classifiers fail to classify minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters and these sub-clusters contain different numbers of examples, also deteriorates classifier performance. Many methods have previously been proposed for handling the imbalanced data set problem; they can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods, and ensembles of classifiers. Data preprocessing techniques have shown great potential, as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the number of minority class examples or by decreasing the number of majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class exhibits absolute rarity, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address between-class and within-class imbalance simultaneously. In this paper, we propose a method that handles both simultaneously for binary classification problems. Removing between-class and within-class imbalance simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the sub-clusters or sub-concepts present in the data set, and the number of examples oversampled in each sub-cluster is determined by the complexity of that sub-cluster. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid to increase classifier accuracy. In this study, a neural network is used as the classifier, since it minimizes the total error, and removing between-class and within-class imbalance simultaneously helps such a classifier give equal weight to all sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus, the proposed method can serve as a good alternative for handling imbalanced data sets in problem domains such as credit scoring, customer churn prediction, and financial distress prediction.
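The core idea of oversampling within sub-clusters can be sketched in a few lines. The version below is a simplified illustration, not the paper's method: it substitutes plain k-means for model-based clustering, uses a fixed equal-share target per sub-cluster instead of the complexity-based allocation, and omits the Lowner-John ellipsoid step; all function names are illustrative.

```python
import numpy as np

def simple_kmeans(X, k, iters=50, seed=0):
    """Plain k-means; stands in for the paper's model-based clustering."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

def oversample_minority(X_min, n_target, k=2, seed=0):
    """Grow each minority sub-cluster toward an equal share of n_target
    by SMOTE-style interpolation between points of the same sub-cluster."""
    rng = np.random.default_rng(seed)
    labels = simple_kmeans(X_min, k, seed=seed)
    synthetic = [X_min]
    for j in range(k):
        members = X_min[labels == j]
        if len(members) == 0:
            continue
        deficit = n_target // k - len(members)
        for _ in range(max(deficit, 0)):
            p = members[rng.integers(len(members))]
            q = members[rng.integers(len(members))]
            t = rng.random()                # interpolate inside the sub-cluster
            synthetic.append((p + t * (q - p))[None, :])
    return np.vstack(synthetic)
```

Because smaller sub-clusters have larger deficits, they receive more synthetic examples, which is what prevents the bigger sub-clusters from dominating the classifier's total error.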

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 408