Search results for: building’s screens modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7751

5231 Adding a Degree of Freedom to Opinion Dynamics Models

Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle

Abstract:

Within agent-based modeling, opinion dynamics is the field that focuses on modeling people's opinions. In this prolific field, most of the literature is dedicated to exploring two 'degrees of freedom' and how they impact a model's properties (e.g., the average final opinion, the number of final clusters, etc.). These degrees of freedom are (1) the interaction rule, which determines how agents update their own opinion, and (2) the network topology, which defines the possible interactions among agents. In this work, we show that a third degree of freedom exists. It can be used to change a model's output by up to 100% of its initial value or to transform two models (both from the literature) into each other. Since opinion dynamics models are representations of the real world, it is fundamental to understand how people's opinions can be measured. Even for abstract models (i.e., not intended for fitting real-world data), it is important to understand whether the numerical representation of opinions is unique and, if it is not, how the model dynamics change under different representations. The process of measuring opinions is non-trivial, as it requires transforming a real-world opinion (e.g., supporting most liberal ideals) into a number. Such a process is usually not discussed in the opinion dynamics literature, but it has been studied intensively in a subfield of psychology called psychometrics. In psychometrics, opinion scales can be converted into each other, similarly to how meters can be converted to feet; indeed, psychometrics routinely uses both linear and non-linear transformations of opinion scales. Here, we analyze how such transformations affect opinion dynamics models, first through mathematical modeling and then by validating the analysis with agent-based simulations. Firstly, we study the case of perfect scales.
In this way, we show that scale transformations affect a model's dynamics up to the qualitative level. This means that two researchers using the same opinion dynamics model, and even the same dataset, could make totally different predictions just because they followed different renormalization processes. A similar situation appears if two different scales are used to measure opinions on the same population. This effect may be as strong as introducing an uncertainty of 100% on the simulation's output (i.e., all results are possible). Still, using perfect scales, we show that scale transformations can be used to transform one model exactly into another; we test this using two models from the standard literature. Finally, we test the effect of scale transformation in the case of finite precision using a 7-point Likert scale. In this way, we show how a relatively small scale transformation introduces changes both at the qualitative level (i.e., the most shared opinion at the end of the simulation) and in the number of opinion clusters. Thus, scale transformation appears to be a third degree of freedom of opinion dynamics models. This result deeply impacts both theoretical research on model properties and the application of models to real-world data.
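The effect the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the authors' model: a bounded-confidence (Deffuant-style) simulation run twice on the same agents, once on a raw [0, 1] opinion scale and once after a hypothetical non-linear rescaling (x → x²). The interaction rule, parameters, and transform are assumptions chosen only to show that the choice of scale changes the dynamics.

```python
import random

def deffuant_step(opinions, eps=0.3, mu=0.5):
    # Pick two random agents; if their opinions are within the
    # confidence bound eps, move each toward the other by factor mu.
    i, j = random.sample(range(len(opinions)), 2)
    if abs(opinions[i] - opinions[j]) < eps:
        shift = mu * (opinions[j] - opinions[i])
        opinions[i] += shift
        opinions[j] -= shift

def simulate(transform, n=50, steps=5000, seed=0):
    # Same seed, so both runs see the same sequence of interactions;
    # only the numerical representation of the opinions differs.
    random.seed(seed)
    opinions = [transform(k / (n - 1)) for k in range(n)]
    for _ in range(steps):
        deffuant_step(opinions)
    return opinions

identity = simulate(lambda x: x)        # opinions measured on the raw scale
squashed = simulate(lambda x: x ** 2)   # same opinions under a non-linear rescaling
print(max(identity) - min(identity), max(squashed) - min(squashed))
```

With identical seeds and interaction sequences, the two runs still produce different final opinion distributions, which is the "third degree of freedom" at work.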

Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics

Procedia PDF Downloads 119
5230 Modeling of Drug Distribution in the Human Vitreous

Authors: Judith Stein, Elfriede Friedmann

Abstract:

The injection of a drug into the vitreous body for the treatment of retinal diseases like wet age-related macular degeneration (AMD) is the most common medical intervention worldwide. We develop mathematical models for drug transport in the vitreous body of a human eye to analyse the impact of different rheological models of the vitreous on drug distribution. In addition to the convection-diffusion equation characterizing the drug spreading, we use porous media modeling for the healthy vitreous with its dense collagen network and include the steady permeating flow of the aqueous humor, described by Darcy's law and driven by a pressure drop. The vitreous body in a healthy human eye behaves like a viscoelastic gel, through the collagen fibers suspended in the network of hyaluronic acid, and acts as a drug depot for the treatment of retinal diseases. In a completely liquefied vitreous, we couple the drug diffusion with the classical Navier-Stokes flow equations. We prove the global existence and uniqueness of the weak solution of the developed initial-boundary value problem describing the drug distribution in the healthy vitreous, considering the permeating aqueous humor flow in a realistic three-dimensional setting. In particular, for the drug diffusion equation, results from the literature are extended from homogeneous Dirichlet boundary conditions to our mixed boundary conditions describing the eye, using Galerkin's method together with the Cauchy-Schwarz inequality and the trace theorem. Because there is only a small effective drug concentration range and higher concentrations may be toxic, the ability to model the drug transport could improve the therapy by accounting for individual patient differences and give a better understanding of the physiological and pathological processes in the vitreous.
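The transport equation at the core of such models can be illustrated with a one-dimensional explicit finite-difference sketch. This is not the paper's three-dimensional coupled system: the geometry, coefficients, and zero-Dirichlet boundaries (drug cleared at both ends) are assumptions chosen purely to show the convection-diffusion mechanics.

```python
# Explicit finite-difference sketch of 1D convection-diffusion:
#   dc/dt = D * d2c/dx2 - v * dc/dx,   c = 0 at both boundaries.
# All parameters are illustrative assumptions, not the paper's values.
D, v = 1e-3, 5e-4            # diffusion coefficient, permeation velocity
nx, dx, dt = 101, 0.01, 0.01
assert dt * D / dx**2 <= 0.5 and dt * v / dx < 1.0  # explicit-scheme stability

def step(c):
    new = c[:]
    for i in range(1, nx - 1):
        diffusion = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2
        convection = v * (c[i] - c[i - 1]) / dx  # upwind difference, v > 0
        new[i] = c[i] + dt * (diffusion - convection)
    return new

c = [1.0 if 40 <= i <= 60 else 0.0 for i in range(nx)]  # initial drug bolus
for _ in range(2000):
    c = step(c)
print(f"peak concentration after transport: {max(c):.3f}")
```

The upwind/explicit combination is monotone under the stability check above, so the sketch respects the maximum principle that the paper's weak-solution analysis guarantees for the full problem.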

Keywords: coupled PDE systems, drug diffusion, mixed boundary conditions, vitreous body

Procedia PDF Downloads 137
5229 Numerical Tools for Designing Multilayer Viscoelastic Damping Devices

Authors: Mohammed Saleh Rezk, Reza Kashani

Abstract:

Auxiliary damping has gained popularity in recent years, especially in structures such as mid- and high-rise buildings. Distributed damping systems (typically viscous and viscoelastic) and reactive damping systems (such as tuned mass dampers) are the two damping choices for such structures. Distributed VE dampers are normally configured as braces or damping panels, which are engaged through the relatively small movements between structural members when the structure sways under wind or earthquake loading. In addition to being used as stand-alone dampers in distributed damping applications, VE dampers can also be incorporated into the suspension element of tuned mass dampers (TMDs). In this study, analytical and numerical tools for the modeling and design of multilayer viscoelastic damping devices, to be used in damping the vibration of large structures, are developed. Considering the limitations of analytical models for the synthesis and analysis of realistic, large, multilayer VE dampers, the emphasis of the study has been on numerical modeling using the finite element method. To verify the finite element models, a two-layer VE damper using ½ inch synthetic viscoelastic urethane polymer was built and tested, and the measured parameters were compared with the numerically predicted ones. The numerically predicted and experimentally evaluated damping and stiffness of the test VE damper were in very good agreement. The effectiveness of VE dampers in adding auxiliary damping to larger structures is demonstrated by incorporating one such damper, via chevron bracing, into the numerical model of a massive frame subjected to an abrupt lateral load. A comparison of the responses of the frame to this load, without and with the VE damper, clearly shows the efficacy of the damper in lowering the extent of frame vibration.
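The benefit of added damping can be shown with a single-degree-of-freedom sketch, which is far simpler than the paper's finite element model: the same frame mass and stiffness are simulated once with a low inherent damping ratio and once with a hypothetical added VE damping ratio; all numbers are assumptions for illustration.

```python
import math

def peak_response(zeta, m=1e5, k=4e6, v0=0.5, t_end=10.0, dt=1e-3):
    """Largest displacement of a damped SDOF oscillator after an
    impulsive lateral load (modeled as an initial velocity v0)."""
    c = 2 * zeta * math.sqrt(k * m)   # viscous damping coefficient
    x, v, peak = 0.0, v0, 0.0
    for _ in range(int(t_end / dt)):
        a = -(k * x + c * v) / m
        v += a * dt                   # semi-implicit (symplectic) Euler
        x += v * dt
        peak = max(peak, abs(x))
    return peak

bare = peak_response(zeta=0.02)    # inherent structural damping (assumed)
damped = peak_response(zeta=0.10)  # with added VE damping (assumed)
print(f"peak displacement reduced by {100 * (1 - damped / bare):.0f}%")
```

The added damping reduces the first peak only modestly but suppresses the subsequent oscillation cycles rapidly, which is the mechanism by which distributed VE dampers lower the extent of frame vibration.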

Keywords: viscoelastic, damper, distributed damping, tuned mass damper

Procedia PDF Downloads 107
5228 Asset Pricing Puzzle and GDP-Growth: Pre and Post Covid-19 Pandemic Effect on Pakistan Stock Exchange

Authors: Mohammad Azam

Abstract:

This work is an endeavor to empirically investigate Gross Domestic Product growth as a mediating variable between various factors and portfolio returns, using a broad sample of 522 financial and non-financial firms listed on the Pakistan Stock Exchange between January 1993 and June 2022. The study employs structural equation modeling and ordinary least squares regression to determine the findings before and during the COVID-19 epidemiological situation, which has not received due attention from researchers. The analysis reveals that the market and investment factors are redundant, whereas size and value show significant results, and GDP growth exerts a significant mediating impact over the whole time frame. For the pre-COVID-19 period, the results reveal that market, value, and investment are redundant, but size, profitability, and GDP growth are significant. During COVID-19, the statistics indicate that market and investment are redundant, size and GDP growth are highly significant, and value and profitability are moderately significant. The ordinary least squares regression shows that market and investment are statistically insignificant, whereas size is highly significant and value and profitability are marginally significant. Using the GDP-growth-augmented model, a slight growth in R-squared is observed. The size, value, and profitability factors are recommended to investors in the Pakistan Stock Exchange. Conclusively, in the Pakistani market, GDP growth exerts only a feeble mediating effect between risk premia and portfolio returns.

Keywords: asset pricing puzzle, mediating role of GDP-growth, structural equation modeling, COVID-19 pandemic, Pakistan stock exchange

Procedia PDF Downloads 73
5227 Integrating Computational Modeling and Analysis with in Vivo Observations for Enhanced Hemodynamics Diagnostics and Prognosis

Authors: Shreyas S. Hegde, Anindya Deb, Suresh Nagesh

Abstract:

Computational biomechanics is developing rapidly as a non-invasive tool to assist the medical fraternity in both the diagnosis and prognosis of human body related issues such as injuries, cardiovascular dysfunction, and atherosclerotic plaque. Any system that helps properly diagnose such problems or assists prognosis would be a boon to doctors and the medical community in general. Recently, a lot of work has been focused in this direction, including, but not limited to, various finite element analyses related to dental implants, skull injuries, and orthopedic problems involving bones and joints. Such numerical solutions are helping medical practitioners come up with alternate solutions for these problems and in most cases have also reduced the trauma on patients. Some work has also been done on the use of computational fluid mechanics to understand the flow of blood through the human body, the area of hemodynamics. Since cardiovascular diseases are one of the main causes of loss of human life, understanding blood flow with and without constraints (such as blockages), and providing alternate methods of prognosis and further solutions for issues related to blood flow, would help save the valuable lives of such patients. This project is an attempt to use computational fluid dynamics (CFD) to solve specific problems related to hemodynamics. The hemodynamics simulation is used to gain a better understanding of functional, diagnostic, and theoretical aspects of blood flow. Because many fundamental issues of blood flow, like phenomena associated with pressure and viscous force fields, are still not fully understood or entirely described through mathematical formulations, the characterization of blood flow remains a challenging task.
Computational modeling of the blood flow, and of the mechanical interactions that strongly affect flow patterns, based on medical data and imaging represents the most accurate analysis of the complex behavior of blood flow. In this project, the mathematical modeling of blood flow in arteries in the presence of successive blockages has been analyzed using the CFD technique. Different cases of blockages, in terms of percentages, were modeled using the commercial software CATIA V5R20 and simulated using the commercial software ANSYS 15.0 to study the effect of varying wall shear stress (WSS) values and of other parameters such as an increase in Reynolds number. The concept of fluid-structure interaction (FSI) has been used to solve such problems. The model simulation results were validated using in vivo measurement data from the existing literature.
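The qualitative link between blockage severity and wall shear stress can be previewed analytically, without CFD, from fully developed Poiseuille flow, where WSS = 4μQ/(πR³). This back-of-the-envelope sketch is not the project's simulation; the viscosity, flow rate, and radius are typical assumed values, and it treats a blockage simply as a reduced cross-sectional area at constant flow rate.

```python
import math

MU = 3.5e-3   # Pa*s, assumed blood viscosity
Q = 5e-6      # m^3/s, assumed volumetric flow rate
R0 = 2e-3     # m, assumed unobstructed artery radius

def wss(radius):
    """Wall shear stress for Poiseuille flow in a tube: 4*mu*Q / (pi*R^3)."""
    return 4 * MU * Q / (math.pi * radius ** 3)

for blockage in (0.0, 0.25, 0.5, 0.75):
    # An area blockage of fraction p leaves an effective radius R0*sqrt(1-p).
    r = R0 * math.sqrt(1 - blockage)
    print(f"{blockage:.0%} area blockage -> WSS = {wss(r):.1f} Pa")
```

Because WSS scales as R⁻³, the stress grows as (1 − p)^(−3/2) with blockage fraction p, which is why successive stenoses produce the sharply elevated WSS values that the CFD study examines.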

Keywords: computational fluid dynamics, hemodynamics, blood flow, results validation, arteries

Procedia PDF Downloads 408
5226 Service Life Study of Polymers Used in Renovation of Heritage Buildings and Other Structures

Authors: Parastou Kharazmi

Abstract:

Degradation of building materials, particularly pipelines, causes environmental damage, and their renovation or replacement is a time-consuming and costly process. Rehabilitation with polymer composites is a solution for renovating degraded pipelines in heritage buildings and other structures that is less costly, faster, and less damaging to the environment; however, it is still not clear how long these materials can perform as expected under field and working conditions. To study their service life, two types of composites, based on epoxy and polyester resins, have been evaluated by accelerated exposure and field exposure. The primary degradation agent used in accelerated exposure was temperature cycling, with half of the tests performed in the presence of water. The thin films of material used in accelerated testing were prepared in the laboratory using the same amount of material, and the same multi-layer application technique, used in the majority of field installations. Extreme intensity levels of degradation agents were used only to evaluate material properties and, as also noted in ISO 15686, are not directly correlated with the degradation mechanisms that would be experienced in service. In the field exposure study, the focus was to identify possible failure modes, causes, and effects. In field exposure, it was observed that other degradation agents are present that merit further investigation, such as contaminants and rust present before application, which prevent the formation of a uniform polymer layer, or incompatibility between dissimilar materials. This part of the study also highlighted the importance of the quality of the materials' application in the field for achieving the expected performance and service life.
Results from extended accelerated exposure and field exposure can help in choosing inspection techniques and establishing the primary degradation agents, and can be used in ageing exposure programs by clarifying the relationship between different exposure periods and sites.

Keywords: building, renovation, service life, pipelines

Procedia PDF Downloads 189
5225 Assessing the Nutritional Characteristics and Habitat Modeling of the Comorian’s Yam (Dioscorea comorensis) in a Fragmented Landscape

Authors: Mounir Soule, Hindatou Saidou, Razafimahefa, Mohamed Thani Ibouroi

Abstract:

High levels of habitat fragmentation and loss are the main drivers of plant species extinction. They reduce habitat quality, a determining factor in the reproduction of plant species, and generate strong selective pressures for habitat selection, with impacts on the reproduction and survival of individuals. The Comorian yam (Dioscorea comorensis) is one of the most threatened plant species of the Comoros archipelago. The species faces one of the highest rates of habitat loss worldwide (9.3% per year) and is classified as Endangered on the IUCN Red List. Despite the nutritional potential of this tuber, cultivation of the Comorian yam remains neglected by local populations, probably owing to a lack of knowledge of its nutritional importance and of the factors driving its spatial distribution and development. In this study, we assessed the nutritional characteristics of Dioscorea comorensis and the drivers of its spatial distribution and abundance, in order to propose conservation measures and improve crop yields. To determine the nutritional characteristics, the Kjeldahl method, the Soxhlet method, and Atwater's specific calorific coefficients were applied to analyze proteins, lipids, and caloric energy, respectively. In addition, atomic absorption spectrometry was used to measure mineral content. By combining species occurrences with ecological (habitat types), climatic (temperature, rainfall, etc.), and physicochemical (soil types and quality) variables, we assessed the habitat suitability and spatial distribution of the species, and the factors explaining its origin, persistence, distribution, and competitive capacity, using a species distribution modeling (SDM) method. The results showed that the species contains 83.37% carbohydrates, 6.37% protein, and 0.45% lipids.
Per 100 grams, the quantities of calcium, sodium, zinc, iron, copper, potassium, phosphorus, magnesium, and manganese are respectively 422.70, 599.41, 223.11, 252.32, 332.20, 780.41, 444.17, 287.71, and 220.73 mg. Its PRAL index is negative (-9.80 mEq/100 g), and its Ca/P (0.95) and Na/K (0.77) ratios are less than 1. The species provides an energy value of 357.46 kcal per 100 g, thanks to its carbohydrates and minerals, and is distinguished from other yams by its high protein content, offering benefits for cardiovascular health. According to our SDM, the species has a very limited distribution, restricted to forests with higher biomass, humidity, and clay content. Our findings highlight how distribution patterns are related to ecological and environmental factors; they also emphasize the nutritional quality of the Comorian yam. These results represent baseline knowledge that will help scientists and decision-makers develop conservation strategies and improve crop yields.
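The reported indices can be recomputed directly from the figures above. The sketch below uses the general Atwater factors (4/4/9 kcal/g) rather than the study's specific calorific coefficients, which is why its energy estimate (≈363 kcal) differs slightly from the reported 357.46 kcal; the ratios, however, reproduce exactly.

```python
# Reported composition of Dioscorea comorensis, per 100 g.
minerals_mg = {"Ca": 422.70, "Na": 599.41, "Zn": 223.11, "Fe": 252.32,
               "Cu": 332.20, "K": 780.41, "P": 444.17, "Mg": 287.71,
               "Mn": 220.73}
carb, protein, lipid = 83.37, 6.37, 0.45  # g per 100 g

ca_p = minerals_mg["Ca"] / minerals_mg["P"]   # calcium/phosphorus ratio
na_k = minerals_mg["Na"] / minerals_mg["K"]   # sodium/potassium ratio

# General Atwater factors: 4 kcal/g for carbohydrate and protein, 9 for lipid.
energy = 4 * carb + 4 * protein + 9 * lipid

print(f"Ca/P = {ca_p:.2f}, Na/K = {na_k:.2f}, energy ~ {energy:.0f} kcal/100 g")
```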

Keywords: Dioscorea comorensis, nutritional characteristics, species distribution modeling, conservation strategies, crop yields improvement

Procedia PDF Downloads 31
5224 Mathematical Modeling to Reach Stability Condition within Rosetta River Mouth, Egypt

Authors: Ali Masria, Abdelazim Negm, Moheb Iskander, Oliver C. Saavedra

Abstract:

Estuaries play an important role in exchanging water and providing navigational pathways for ships. These zones are very sensitive and vulnerable to any intervention in coastal dynamics, and most such inlets experience coastal problems such as severe erosion and accretion. The Rosetta promontory, Egypt, is an example of this environment: it suffers from erosion along the coastline and siltation inside the inlet, owing to the lack of water and sediment resources that followed the construction of the Aswan High Dam. The shoaling of the inlet hinders the navigation of fishing boats, degrades estuarine and salt marsh habitat, and reduces the capacity of the cross section to discharge flow to the sea during emergencies. This paper aims to reach a new stability condition for the Rosetta promontory by using coastal measures to control the sediment that enters, and causes shoaling inside, the inlet. These measures include modifying the inlet cross section with centered jetties and suppressing the coastal dynamics at the entrance with boundary jetties. This target is pursued using the hydrodynamic model Coastal Modeling System (CMS). Extensive field data (hydrographic surveys, wave data, tide data, and bed morphology) were collected to build and calibrate the model. About 20 scenarios were tested to reach a suitable solution that mitigates the coastal problems at the inlet. The results show that a 360 m jetty on the eastern bank, with a system of sand bypassing from the lee side of the jetty, can stabilize the estuary.

Keywords: Rosetta promontory, erosion, sedimentation, inlet stability

Procedia PDF Downloads 587
5223 Modeling and Characterization of Organic LED

Authors: Bouanati Sidi Mohammed, N. E. Chabane Sari, Mostefa Kara Selma

Abstract:

It is well known that organic light-emitting diodes (OLEDs) are attracting great interest in the display technology industry due to their many advantages, such as low manufacturing cost, large-area electroluminescent displays, and various emission colors, including white light. Recently, there has been much progress in understanding the device physics of OLEDs and their basic operating principles. In OLEDs, light emission results from the recombination of electrons and holes in the light-emitting layer, injected from the cathode and anode, respectively. To improve luminescence efficiency, holes and electrons must be supplied abundantly and in equal measure, and must recombine swiftly in the emitting layer. The aim of this paper is to model a polymer LED and an OLED made with small molecules in order to study their electrical and optical characteristics. The first simulated structure is a monolayer device, typically consisting of the poly(2-methoxy-5-(2'-ethyl)hexoxy-phenylenevinylene) (MEH-PPV) polymer sandwiched between an anode, usually an indium tin oxide (ITO) substrate, and a cathode such as Al. In the second structure, we replace MEH-PPV with tris(8-hydroxyquinolinato)aluminum (Alq3). We chose MEH-PPV because of its solubility in common organic solvents, in conjunction with a low operating voltage for light emission and relatively high conversion efficiency, and Alq3 because it is one of the most important host materials used in OLEDs. In this simulation, the Poole-Frenkel-like mobility model and the Langevin bimolecular recombination model were used as the transport and recombination mechanisms. These models are enabled in the ATLAS-SILVACO software. The influence of doping and thickness on the I(V) characteristics and luminescence is reported.
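The two physical models named above have simple functional forms that can be evaluated directly. The sketch below is not the ATLAS-SILVACO implementation; the zero-field mobility, characteristic field, and layer permittivity are assumed illustrative constants, not fitted OLED parameters.

```python
import math

MU0 = 1e-9             # m^2/(V*s), assumed zero-field mobility
E0 = 4e7               # V/m, assumed characteristic field
Q = 1.602e-19          # C, elementary charge
EPS = 3.0 * 8.854e-12  # F/m, assumed permittivity of an organic layer

def pf_mobility(field):
    """Poole-Frenkel-like field dependence: mu(E) = mu0 * exp(sqrt(E/E0))."""
    return MU0 * math.exp(math.sqrt(field / E0))

def langevin_gamma(mu_n, mu_p):
    """Langevin bimolecular recombination coefficient:
    gamma = q * (mu_n + mu_p) / eps."""
    return Q * (mu_n + mu_p) / EPS

for field in (1e6, 1e7, 1e8):
    mu = pf_mobility(field)
    print(f"E = {field:.0e} V/m -> mu = {mu:.2e} m^2/(V*s), "
          f"gamma = {langevin_gamma(mu, mu):.2e} m^3/s")
```

The exponential field dependence of the mobility feeds directly into the recombination rate, which is why transport and recombination must be solved together in the device simulation.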

Keywords: organic light emitting diode, polymer light emitting diode, organic materials, hexoxy-phenylenevinylene

Procedia PDF Downloads 554
5222 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment

Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee

Abstract:

Over the past decade, there have been promising developments in natural language processing (NLP), with several investigations of approaches to recognizing textual entailment (RTE). These include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches: RNNs are well suited to sequence modeling, whilst CNNs are suited to extracting n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine these strengths of RNNs and CNNs in a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representations of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations, from which the first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a bidirectional long short-term memory (Bi-LSTM) network to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used, in the same fashion as an attention mechanism over the Bi-LSTM outputs, to yield the final sentence representations for classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.
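Two of the ingredients above can be sketched in plain NumPy, without a deep learning framework: a convolutional filter sliding over word embeddings to produce phrase vectors, and a relation vector built from a sentence pair. This is a toy, not the paper's architecture; the embedding size, the single trigram filter, mean pooling, and the |a − b| / a·b combination (a common choice for sentence-pair relations) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_phrases(sent_emb, filt):
    """Slide one filter of width w over the word embeddings to
    produce a sequence of n-gram phrase vectors."""
    n, _ = sent_emb.shape
    w = filt.shape[0]
    return np.stack([np.tanh((sent_emb[i:i + w] * filt).sum(axis=0))
                     for i in range(n - w + 1)])

def relation_vector(a, b):
    """Encode the relation between two sentence encodings as the
    concatenation of their absolute difference and elementwise product."""
    return np.concatenate([np.abs(a - b), a * b])

d = 8                                  # embedding size (assumption)
premise = rng.normal(size=(6, d))      # 6-word premise, random embeddings
hypothesis = rng.normal(size=(5, d))   # 5-word hypothesis
filt = rng.normal(size=(3, d))         # one trigram filter

p_enc = conv_phrases(premise, filt).mean(axis=0)     # pooled phrase vector
h_enc = conv_phrases(hypothesis, filt).mean(axis=0)
rel = relation_vector(p_enc, h_enc)
print(rel.shape)  # → (16,)
```

In the full model, such relation vectors are what steer the attention over the Bi-LSTM outputs before the entailment classifier.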

Keywords: deep neural models, natural language inference, recognizing textual entailment (RTE), sentence-to-sentence relation

Procedia PDF Downloads 348
5221 Exploring the Challenges of Post-conflict Peacebuilding in the Border Districts of Eastern Zone of Tigray Region

Authors: Gebreselassie Sebhatleab

Abstract:

According to the Global Peace Index report (GPI, 2023), global peacefulness has deteriorated by more than 0.42%. Old and new conflicts, COVID-19, and political and cultural polarization are the main drivers of conflict in the world, and 2022 was the deadliest year for armed conflict in the history of the GPI. In Ethiopia, over half a million people died in the Tigray war, the largest conflict death event since the 1994 Rwandan genocide. In total, 84 countries recorded an improvement, while 79 countries recorded a deterioration in peacefulness across the globe. The Russia-Ukraine war and its consequences were the main drivers of the global deterioration in peacefulness: both Russia and Ukraine are now ranked amongst the ten least peaceful countries, and Ukraine had the largest deterioration of any country in the 2023 GPI. In the same year, the global impact of violence on the economy was equivalent to 10.9% of global GDP. The brutal conflict in Tigray that started in November 2020 claimed more than half a million lives and displaced nearly 3 million people, and the accompanying widespread human rights violations and sexual violence have left deep damage on the population. The displaced people are still unable to return home because the western, southern, and eastern parts of Tigray are occupied by Eritrean and Amhara forces, despite the Pretoria Agreement. Currently, armed conflicts in the Amhara and Oromia regions have intensified, and human rights violations are being reported in both regions. Meanwhile, protests have been held by war-injured TDF members, IDPs, and teachers in the Tigray region. Hence, the general objective of this project is to explore the challenges of the peacebuilding process in the border woredas of the Eastern Zone of the Tigray Region. Methodologically, the project will employ exploratory qualitative research designs to gather and analyze qualitative data.
A purposive sampling technique will be applied to gather pertinent information from the key stakeholders. Open-ended interview questions will be prepared to gather relevant information about the challenges and perceptions of peacebuilding in the study area. Data will be analyzed using qualitative methods such as content analysis, narrative analysis, and phenomenological analysis to investigate in depth the challenges of peacebuilding in the study woredas. The findings of this research project will inform program interventions to promote sustainable peace in the study area.

Keywords: peace building, conflict and violence, political instability, insecurity

Procedia PDF Downloads 39
5220 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning

Authors: Shayla He

Abstract:

Background and Purpose: According to Chamie (2017), it is estimated that no fewer than 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades; in New York City, the sheltered homeless population increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend of the homeless population is crucial for helping states and cities make affordable housing plans and other community service plans ahead of time, to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on homeless populations and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using linear regression, random forest, and a recurrent neural network (RNN), respectively, to predict the future trend of the homeless population. Each model was trained and tuned on the dataset from New York City, with its accuracy measured by mean squared error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using the data from Seattle, which was not part of the model training and tuning process in Phase 3. Results: Compared to the linear regression based model used by HUD et al. (2019), HP-RNN significantly improved the prediction metrics, raising the coefficient of determination (R2) from -11.73 to 0.88 and reducing MSE by 99%.
HP-RNN was then validated on the data from Seattle, WA, showing a peak error of 14.5% between the actual and predicted counts. Finally, the model was used to predict the trend during the COVID-19 pandemic; it shows a good correlation between the actual and predicted homeless populations, with a peak error of less than 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model time series of homelessness-related data, and the model shows a close correlation between the actual and predicted homeless populations. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services. Moreover, the prediction can serve as a reference for policymakers and legislators as they seek to make changes that may impact the factors closely associated with future homeless population trends.
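The metrics quoted above (including the strongly negative R² of −11.73 for the baseline) follow directly from their definitions: R² drops below zero whenever a model predicts worse than simply guessing the mean. A minimal sketch with hypothetical numbers:

```python
def mse(y_true, y_pred):
    """Mean squared error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res/SS_tot.
    Negative whenever the model is worse than predicting the mean."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

actual = [100, 120, 150, 200]   # hypothetical yearly counts
good = [105, 118, 148, 195]     # a model tracking the trend
bad = [200, 60, 300, 90]        # a model missing it entirely
print(r2(actual, good), r2(actual, bad))
```

Here the tracking model scores near 1 while the poor one scores far below 0, mirroring the gap between HP-RNN's 0.88 and the baseline's −11.73.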

Keywords: homeless, prediction, model, RNN

Procedia PDF Downloads 121
5219 Universal Design Implementation in a Private University; Investment, Decision Making, Perceptions and the Value of Social Capital

Authors: Sridara Tipian, Henry Skates Jr., Antika Sawadsri

Abstract:

It is widely recognized that universal design should be implemented as broadly as possible to benefit as many groups and subgroups of people within a society as possible. In Thailand, public buildings such as public universities are obvious places where the benefits of universal design principles are easily appreciated and applied, but there are other building types, such as private universities, where the benefits may not be as obvious and where implementation is not always achieved. Many reasons are given for this, among which is the perceived additional cost of implementation. This paper argues that social capital should be taken into consideration when such decisions are being made. The paper investigates the background, principles, and theories pertaining to universal design and, using a case study of a private university, examines the implementation of universal design against the background of current legislation and the perceptions of the university's administrators. The study examines the physical facilities of the case study university in the context of current theories and principles of universal design, alongside the corresponding legal requirements. A survey of building users evaluates knowledge of, and attitudes toward, universal design. The research shows that although administrators perceive the initial cost of investment to be prohibitive in the short term, societal values in relation to social inclusiveness are changing in the long term, and the social capital of investing in universal design should not be underestimated. The results of this study should provide greater incentive for the enforcement of the legal requirements for universal design in Thailand.

Keywords: public buildings, physical facilities, social capital, private university, investment, decision making, value, enforcement, legal requirements

Procedia PDF Downloads 275
5218 Design, Synthesis and Pharmacological Investigation of Novel 2-Phenazinamine Derivatives as a Mutant BCR-ABL (T315I) Inhibitor

Authors: Gajanan M. Sonwane

Abstract:

Nowadays, the entire pharmaceutical industry is facing the challenge of increasing efficiency and innovation. The major hurdles are the growing cost of research and development and a concurrently stagnating number of new chemical entities (NCEs). Hence, the challenge is to select the most druggable targets and to search for the corresponding drug-like compounds, which must also possess the specific pharmacokinetic and toxicological properties that allow them to be developed as drugs. The present research work includes studies on developing new anticancer heterocycles using molecular modeling techniques. Heterocycles synthesized through such a methodology are more effective because various physicochemical parameters have already been studied and the structure has been optimized for its best fit in the receptor. Hence, on the basis of the literature survey and considering the need to develop newer anticancer agents, new phenazinamine derivatives were designed by subjecting the nucleus to molecular modeling, viz., GQSAR analysis and docking studies. Simultaneously, these designed derivatives were subjected to in silico prediction of biological activity through PASS studies and then to in silico toxicity risk assessment studies. In the PASS studies, it was found that all the derivatives exhibited a good spectrum of biological activities, confirming their anticancer potential. The toxicity risk assessment studies revealed that all the derivatives obey Lipinski's rule. Among these series, compounds 4c, 5b and 6c were found to possess logP and drug-likeness values comparable with the standard imatinib (used for anticancer activity studies) and also with the standard drug methotrexate (used for antimitotic activity studies). One of the most notable mutations is the threonine-to-isoleucine mutation at codon 315 (T315I), which is known to be resistant to all currently available TKIs (tyrosine kinase inhibitors). An enzyme assay is planned to confirm target-selective activity.

Keywords: drug design, tyrosine kinases, anticancer, phenazinamine

Procedia PDF Downloads 116
5217 Recurrent Neural Networks for Complex Survival Models

Authors: Pius Marthin, Nihal Ata Tutkun

Abstract:

Survival analysis has become one of the paramount procedures for modeling time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, owing to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on deep learning approaches to survival modeling; however, their application to complex survival problems remains limited. In addition, existing models do not fully address the complexity of the data structure and are susceptible to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external auto-encoder (ExternalAE) as a feature selector to extract the complex characteristics among the set of covariates responsible for the cause-specific events. We train our model on synthetic and real data sets and employ metrics appropriate for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.
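The weighted cumulative incidence function mentioned in the abstract builds on the ordinary empirical cumulative incidence function (CIF) for competing risks. A minimal sketch of that underlying quantity, assuming (for simplicity only) complete follow-up with no censoring; the event times and causes below are invented:

```python
import numpy as np

def empirical_cif(times, causes, cause, grid):
    """Empirical cumulative incidence function for one competing cause.

    With no censoring, CIF_k(t) is simply the fraction of subjects that
    experienced an event of cause k at or before time t.
    """
    times = np.asarray(times, dtype=float)
    causes = np.asarray(causes)
    n = len(times)
    return np.array([np.sum((times <= t) & (causes == cause)) / n for t in grid])

# Two competing causes: at any time, CIF_1(t) + CIF_2(t) <= 1.
times = [2.0, 3.0, 3.5, 5.0, 7.0, 8.0]
causes = [1, 2, 1, 1, 2, 1]
grid = [1.0, 4.0, 10.0]
cif1 = empirical_cif(times, causes, 1, grid)
cif2 = empirical_cif(times, causes, 2, grid)
print(cif1)  # values 0, 1/3, 2/3 at the three grid times
```

In the paper's setting the contributions are additionally weighted by the learned risk information weights and censoring must be handled; this sketch shows only the estimand those weights modify.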

Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayer perceptrons (MLPs)

Procedia PDF Downloads 90
5216 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

SPICE-based simulators are robust and widely used for the simulation of electronic circuits; their algorithms support linear and non-linear lumped components, and they can handle a large number of encapsulated elements. Despite their great potential in the analysis of quasi-static electromagnetic field interaction, that is, at low frequency, these simulators are limited when applied to microwave hybrid circuits that contain both lumped and distributed elements. Usually, the spatial discretization of the FDTD (Finite-Difference Time-Domain) method is done according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum temporal discretization accepted for that spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability conditions for the leapfrogging of the Yee algorithm; however, it is known that for the field update, the stability of the complete FDTD procedure depends on factors other than the stability of the Yee algorithm alone, because the FDTD program needs other algorithms in order to be useful in engineering problems. Examples of such algorithms are Absorbing Boundary Conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulation at ultra-wide frequencies.
The models of the resistive source, the resistor, the capacitor, the inductor, and the diode are evaluated, among the mathematical models for lumped components in the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) method, through a parametric analysis of the size of the Yee cells that discretize the lumped components. The aim is to find an ideal cell size for which the FDTD analysis agrees most closely with the expected circuit behavior while maintaining the stability conditions of the method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the models are implemented computationally in the Matlab® environment. The Mur condition is used as the absorbing boundary of the FDTD method. The models are validated by comparing the electric field values and component currents obtained by the FDTD method with analytical results based on circuit parameters.
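The Courant criterion referred to above fixes the maximum stable time step once the cell dimensions are chosen. A minimal sketch; the 1 mm cell is an illustrative value, not one taken from the paper:

```python
import math

def courant_dt_max(dx, dy, dz, c=299792458.0):
    """Maximum stable time step for the 3-D Yee leapfrog scheme:

        dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2))
    """
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

# A 1 mm uniform cell, as might discretize a small lumped component:
dt = courant_dt_max(1e-3, 1e-3, 1e-3)
print(f"dt_max = {dt:.3e} s")  # roughly 1.93 ps
```

Shrinking the Yee cell to resolve a lumped component therefore shrinks the allowable time step proportionally, which is exactly why the cell-size parametric study interacts with stability.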

Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis

Procedia PDF Downloads 153
5215 Approaches to Tsunami Mitigation and Prevention: Explaining Architectural Strategies for Reducing Urban Risk

Authors: Hedyeh Gamini, Hadi Abdus

Abstract:

A tsunami, as a natural disaster, is composed of waves that are usually caused by severe movements of the sea floor. Although a tsunami and its consequences cannot be prevented, by examining past tsunamis, extracting key points on how these events were dealt with, and learning from them, a positive step can be taken to reduce the vulnerability of human settlements and the risk of this phenomenon in architecture and urbanism. The method is a review that examines written documents and valid internet sources related to managing and reducing the vulnerability of human settlements in the face of tsunamis. This paper explores the tsunamis in Indonesia (2004), Sri Lanka (2004) and Japan (2011); one of the study objectives was to understand how these events were handled and to extract key points and lessons from them for reducing the vulnerability of human settlements. Finally, strategies to prevent and reduce the vulnerability of communities at risk of tsunamis are offered in terms of architecture and urban planning. According to what was learned from the study of the recent tsunamis, the authorities' quality of response, the crisis management and the manner of construction, it can be concluded that to reduce the vulnerability of human settlements against tsunamis there are generally four approaches: 1) construction of tall buildings with openings on the first floor so that water can flow easily underneath, with the building oriented so that water passes easily along its sides; 2) construction of multi-purpose centers that can serve for vertical evacuation during incidents; 3) construction of buildings in core forms with a diagonal orientation to the coastline; and 4) building physical barriers (natural and artificial) such as water dams, earth mounds, sea walls and coastal forests.

Keywords: tsunami, architecture, reducing vulnerability, human settlements, urbanism

Procedia PDF Downloads 395
5214 An Approach to Building a Recommendation Engine for Travel Applications Using Genetic Algorithms and Neural Networks

Authors: Adrian Ionita, Ana-Maria Ghimes

Abstract:

A lack of features, poor design and the failure to promote an integrated booking application are some of the reasons why most online travel platforms only offer automation of old booking processes, being limited to the integration of a small number of services without addressing the user experience. This paper presents a practical study of how to improve travel applications by creating user profiles through data mining based on neural networks and genetic algorithms. Choices made by users and by their 'friends' in the 'social' network context can be considered input data for a recommendation engine. The purpose of using these algorithms and this design is to improve the user experience and to deliver more features to users. The paper aims to highlight a broader range of improvements that could be applied to travel applications in terms of design and service integration, while the main scientific contribution remains the technical implementation of the neural network solution. The choice of technologies is also motivated by the initiative of some online booking providers that have publicly stated that they use neural-network-related designs. These companies use similar Big Data technologies to provide recommendations for hotels, restaurants and cinemas, with a neural-network-based recommendation engine building a user 'DNA profile'. This implementation of the 'profile', a collection of neural networks trained on previous user choices, can improve the usability and design of any type of application.
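As a sketch of the genetic-algorithm side of such an engine, the toy below evolves a preference-weight vector toward a fixed target 'DNA profile'. The target vector, population size and mutation rate are illustrative assumptions, not values from the paper; a real engine would score fitness against logged user choices rather than a known target:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target "DNA profile": preference weights a user's past
# choices would imply (dimensionality and values are illustrative).
target = np.array([0.9, 0.1, 0.6, 0.3, 0.8])

def fitness(pop):
    # Higher is better: negative squared distance to the target profile.
    return -np.sum((pop - target) ** 2, axis=1)

pop = rng.random((30, target.size))           # random initial population
for generation in range(200):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-10:]]        # keep the 10 fittest
    # Uniform crossover between randomly paired parents
    i = rng.integers(0, 10, size=30)
    j = rng.integers(0, 10, size=30)
    mask = rng.random((30, target.size)) < 0.5
    pop = np.where(mask, parents[i], parents[j])
    pop += rng.normal(0, 0.02, pop.shape)     # Gaussian mutation

best = pop[np.argmax(fitness(pop))]
print(np.round(best, 2))  # close to the target profile
```

In the hybrid design described above, the evolved vector would seed or tune the neural-network profile rather than stand alone.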

Keywords: artificial intelligence, big data, cloud computing, DNA profile, genetic algorithms, machine learning, neural networks, optimization, recommendation system, user profiling

Procedia PDF Downloads 163
5213 The Use of Rotigotine to Improve Hemispatial Neglect in Stroke Patients at the Charing Cross Neurorehabilitation Unit

Authors: Malab Sana Balouch, Meenakshi Nayar

Abstract:

Hemispatial neglect is a common disorder primarily associated with right hemispheric stroke, in the acute phase of which it can occur up to 82% of the time. Affected individuals fail to acknowledge or respond to people and objects in their left field of vision due to deficits in attention and awareness. Persistent hemispatial neglect significantly impedes post-stroke recovery, leading to longer hospital stays, increased functional dependency, longer-term disability in activities of daily living (ADLs) and increased risk of falls. Recently, evidence has emerged for the use of the dopamine agonist rotigotine in neglect. The aim of our Quality Improvement Project (QIP) is to evaluate and improve the current protocols and practice in the assessment, documentation and management of neglect and rotigotine use at the Neurorehabilitation Unit at Charing Cross Hospital (CNRU). In addition, it highlights rotigotine use in the management of hemispatial neglect and paves the way for future research in the field. Our QIP was based in the CNRU. All patients admitted to the CNRU with a right-sided stroke from 2 February 2018 to 2 February 2021 were included in the project. Each patient's multidisciplinary team report and hospital notes were searched for information, including biodata, fulfilment of the inclusion criteria (having hemispatial neglect) and data related to rotigotine use. This included whether or not the drug was administered, any contraindications in patients who did not receive it, and any therapeutic benefit (subjective or objective improvement in neglect) in those who did receive rotigotine. Data were entered into an Excel sheet and further statistical analysis was performed in SPSS 20.0. Of 80 patients with right-sided strokes, 72.5% were infarcts and 27.5% were hemorrhagic strokes, with the vast majority of both types occurring in the middle cerebral artery (MCA) territory.
A total of 31 (38.8%) of our patients were noted to have hemispatial neglect, with the highest number of cases associated with MCA strokes; almost half of our patients with MCA strokes suffered from neglect, and neglect was more common in male patients. Of the 31 patients with visuospatial neglect, only 16% actually received rotigotine; 80% of these were noted to have an objective improvement on their neglect tests and 20% showed subjective improvement. After thoroughly reviewing the neglect-associated documentation, the following recommendations were put in place for the future. We plan to liaise with the occupational therapy team at our rehabilitation unit to set a battery of tests to be performed on all patients presenting with neglect, and we recommend clear documentation of the outcomes of each neglect screen. We also plan to create two proformas: one for the therapy team, to aid systematic documentation of neglect screens performed before and after rotigotine administration, and a second for the medical team, with clear documentation of rotigotine use, its benefits and any contraindications if it is not administered.

Keywords: hemispatial neglect, right hemispheric stroke, rotigotine, neglect, dopamine agonist

Procedia PDF Downloads 73
5212 Modeling Geogenic Groundwater Contamination Risk with the Groundwater Assessment Platform (GAP)

Authors: Joel Podgorski, Manouchehr Amini, Annette Johnson, Michael Berg

Abstract:

One-third of the world’s population relies on groundwater for its drinking water. Natural geogenic arsenic and fluoride contaminate ~10% of wells. Prolonged exposure to high levels of arsenic can result in various internal cancers, while high levels of fluoride are responsible for the development of dental and crippling skeletal fluorosis. In poor urban and rural settings, the provision of drinking water free of geogenic contamination can be a major challenge. In order to efficiently apply limited resources in the testing of wells, water resource managers need to know where geogenically contaminated groundwater is likely to occur. The Groundwater Assessment Platform (GAP) fulfills this need by providing state-of-the-art global arsenic and fluoride contamination hazard maps as well as enabling users to create their own groundwater quality models. The global risk models were produced by logistic regression of arsenic and fluoride measurements using predictor variables of various soil, geological and climate parameters. The maps display the probability of encountering concentrations of arsenic or fluoride exceeding the World Health Organization’s (WHO) stipulated concentration limits of 10 µg/L or 1.5 mg/L, respectively. In addition to a reconsideration of the relevant geochemical settings, these second-generation maps represent a great improvement over the previous risk maps due to a significant increase in data quantity and resolution. For example, there is a 10-fold increase in the number of measured data points, and the resolution of predictor variables is generally 60 times greater. These same predictor variable datasets are available on the GAP platform for visualization as well as for use with a modeling tool. The latter requires that users upload their own concentration measurements and select the predictor variables that they wish to incorporate in their models. 
In addition, users can upload additional predictor variable datasets either as features or coverages. Such models can represent an improvement over the global models already supplied, since (a) users may be able to use their own, more detailed datasets of measured concentrations and (b) the various processes leading to arsenic and fluoride groundwater contamination can be isolated more effectively on a smaller scale, thereby resulting in a more accurate model. All maps, including user-created risk models, can be downloaded as PDFs. There is also the option to share data in a secure environment as well as the possibility to collaborate in a secure environment through the creation of communities. In summary, GAP provides users with the means to reliably and efficiently produce models specific to their region of interest by making available the latest datasets of predictor variables along with the necessary modeling infrastructure.
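The hazard maps described above rest on logistic regression of exceedance (above/below the WHO limit) against environmental predictors. A minimal sketch with synthetic data; the two predictors, their weights and the fitted probabilities are invented for illustration, not taken from the GAP models:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic training data: two standardized predictors (say, a soil and a
# climate variable -- purely illustrative) and a binary label indicating
# whether measured arsenic exceeds the WHO limit of 10 ug/L.
X = rng.normal(size=(500, 2))
true_w, true_b = np.array([2.0, -1.5]), 0.3
y = (X @ true_w + true_b + rng.normal(0, 0.5, 500) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit by plain gradient descent on the log-loss.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Probability that a new location exceeds the 10 ug/L limit:
p_new = sigmoid(np.array([1.0, -0.5]) @ w + b)
print(f"P(As > 10 ug/L) = {p_new:.2f}")
```

Mapping this probability over a predictor-variable grid yields exactly the kind of exceedance-probability surface the platform displays.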

Keywords: arsenic, fluoride, groundwater contamination, logistic regression

Procedia PDF Downloads 348
5211 The Impact of Sedimentary Heterogeneity on Oil Recovery in Basin-plain Turbidite: An Outcrop Analogue Simulation Case Study

Authors: Bayonle Abiola Omoniyi

Abstract:

In turbidite reservoirs with volumetrically significant thin-bedded turbidites (TBTs), thin-pay intervals may be underestimated during calculation of reserve volume due to poor vertical resolution of conventional well logs. This paper demonstrates the strong control of bed-scale sedimentary heterogeneity on oil recovery using six facies distribution scenarios that were generated from outcrop data from the Eocene Itzurun Formation, Basque Basin (northern Spain). The variable net sand volume in these scenarios serves as a primary source of sedimentary heterogeneity impacting sandstone-mudstone ratio, sand and shale geometry and dimensions, lateral and vertical variations in bed thickness, and attribute indices. The attributes provided input parameters for modeling the scenarios. The models are 20-m (65.6 ft) thick. Simulation of the scenarios reveals that oil production is markedly enhanced where degree of sedimentary heterogeneity and resultant permeability contrast are low, as exemplified by Scenarios 1, 2, and 3. In these scenarios, bed architecture encourages better apparent vertical connectivity across intervals of laterally continuous beds. By contrast, low net-to-gross Scenarios 4, 5, and 6, have rapidly declining oil production rates and higher water cut with more oil effectively trapped in low-permeability layers. These scenarios may possess enough lateral connectivity to enable injected water to sweep oil to production well; such sweep is achieved at a cost of high-water production. It is therefore imperative to consider not only net-to-gross threshold but also facies stack pattern and related attribute indices to better understand how to effectively manage water production for optimum oil recovery from basin-plain reservoirs.

Keywords: architecture, connectivity, modeling, turbidites

Procedia PDF Downloads 24
5210 Shape Management Method for Safety Evaluation of Bridge Based on Terrestrial Laser Scanning Using Least Squares

Authors: Gichun Cha, Dongwan Lee, Junkyeong Kim, Aoqi Zhang, Seunghee Park

Abstract:

Around the world, the construction technology of double deck tunnels is being studied in response to increasing urban traffic demands and environmental changes. Advanced countries already possess construction technology for double deck tunnel structures, but domestic research has only recently begun. Construction technologies are important, but safety evaluation of the structure is also necessary to prevent possible accidents during construction; thus, the double deck tunnel requires shape management of its middle slabs. The domestic country is preparing the construction of a double deck tunnel for an alternate route and a pleasant urban environment. There has been no research on shape management of double deck tunnels because it is a newly attempted technology. At present, the most closely related work concerns shape management of bridge structures, in which a shape model of the bridge is implemented using terrestrial laser scanning (TLS). We therefore study bridge slabs, since the bridge is structurally similar to the double deck tunnel. In this study, we develop a shape management method for bridge slabs using TLS. We selected a test bed as the measurement site: a bridge on the Sungkyunkwan University Natural Sciences Campus, with a total length of 34 m and a vertical height of 8.7 m from the ground, connecting Engineering Building #1 and Engineering Building #2. Point cloud data for shape management were acquired with a TLS; we used the Leica ScanStation C10/C5 model. We identify the maximum displacement area of the middle slabs using least-squares fitting. We expect to improve the stability of double deck tunnels through shape management of the middle slabs.
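The least-squares step can be sketched as fitting a reference plane to the slab point cloud and taking the largest residual as the maximum displacement. The slab geometry, noise level and deflected region below are synthetic stand-ins for real TLS data, not measurements from the test bed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic TLS point cloud of a nominally planar slab: z = ax + by + c
# plus measurement noise, with one locally deflected region.
n = 2000
x, y = rng.uniform(0, 34, n), rng.uniform(0, 8, n)    # slab footprint (m)
z = 0.002 * x - 0.001 * y + 1.0 + rng.normal(0, 0.001, n)
sag = (x - 17) ** 2 + (y - 4) ** 2 < 4.0              # mid-span deflection
z[sag] -= 0.01                                        # 10 mm downward sag

# Least-squares plane fit: solve A [a, b, c]^T = z.
A = np.column_stack([x, y, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)

residuals = z - A @ coef
worst = np.argmin(residuals)                  # largest downward deviation
print(f"max displacement {-residuals[worst] * 1000:.1f} mm "
      f"at x={x[worst]:.1f} m, y={y[worst]:.1f} m")
```

A real pipeline would first register and segment the scans; the fitting and residual-screening step itself is as short as shown.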

Keywords: bridge slabs, least squares, safety evaluation, shape management method, terrestrial laser scanning

Procedia PDF Downloads 241
5209 Energy Retrofitting Application Research to Achieve Energy Efficiency in Hot-Arid Climates in Residential Buildings: A Case Study of Saudi Arabia

Authors: A. Felimban, A. Prieto, U. Knaack, T. Klein

Abstract:

This study aims to present an overview of recent research in building energy-retrofitting strategy applications and to analyze it within the context of hot-arid climate regions, represented in this case study by the Kingdom of Saudi Arabia. The main goal of this research is an analytical study of recent research approaches, showing where the primary gap in knowledge exists and outlining which strategies are available for future research. The paper focuses on energy retrofitting strategies at the building envelope level and is limited to specific measures within the hot-arid climate region. Scientific articles were carefully chosen according to search criteria such as retrofitting, energy retrofitting, hot-arid, energy efficiency and residential buildings, which helped narrow the research scope. The papers were then explored through descriptive analysis and the results were interpreted within the Saudi context in order to draw an overview of opportunities in the field over the last two decades. The analysis of recent research confirmed a shortage of studies investigating actual applications and testing of newly introduced energy-efficiency measures, a lack of energy-cost feasibility studies, and a lack of public awareness. In terms of research methods, simulation software was found to be the major instrument used in energy retrofitting application research. The main knowledge gaps identified include the need for research on actual application testing, on the feasibility of applying energy retrofitting strategies, on the order in which strategies should be applied, and on user acceptance of the developed scenarios.

Keywords: energy efficiency, energy retrofitting, hot arid, Saudi Arabia

Procedia PDF Downloads 122
5208 Efficacy of Conservation Strategies for Endangered Garcinia gummi gutta under Climate Change in Western Ghats

Authors: Malay K. Pramanik

Abstract:

Climate change is continuously affecting ecosystems, species distributions and global biodiversity. The assessment of a species' potential distribution and of the spatial changes under various climate change scenarios is a significant step towards conservation and the mitigation of habitat shifts, species loss and vulnerability. In this context, the present study aimed to predict the influence of current and future climate on an ecologically vulnerable medicinal species, Garcinia gummi-gutta, of the southern Western Ghats using Maximum Entropy (MaxEnt) modeling. The future projections were made for 2050 and 2070 under RCP (Representative Concentration Pathways) scenarios 4.5 and 8.5, using 84 species occurrence records and climatic variables from three different models of the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment. The contributions of the climatic variables were assessed using a jackknife test, and an AUC value of 0.888 indicates that the model performs with high accuracy. The major influencing variables are annual precipitation, precipitation of the coldest quarter, precipitation seasonality, and precipitation of the driest quarter. The model results show that the current high-potential distribution of the species covers about 1.90% of the study area, 7.78% is good potential, and about 90.32% is of moderate to very low potential for species suitability. Finally, all model results indicate that there will be a drastic decline in suitable habitat by 2050 and 2070 under all RCP scenarios. The study suggests that MaxEnt modeling can be an efficient tool for ecosystem management, biodiversity protection, and species rehabilitation planning under climate change.
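The AUC figure quoted above can be computed directly from model scores at presence versus background points. A minimal sketch with synthetic scores; the score distributions are invented stand-ins, not the study's MaxEnt output:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic.

    Assumes continuous scores (no ties), as with MaxEnt suitability values.
    """
    scores = np.concatenate([scores_pos, scores_neg])
    ranks = scores.argsort().argsort() + 1          # 1-based ranks
    n_pos, n_neg = len(scores_pos), len(scores_neg)
    rank_sum = ranks[:n_pos].sum()
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(7)
# Suitability scores at 84 presence points vs. background points.
presence = rng.normal(0.75, 0.10, 84)
background = rng.normal(0.45, 0.15, 500)
print(f"AUC = {auc(presence, background):.3f}")  # well above the 0.5 of chance
```

An AUC near 1 means presence points are almost always scored above background points; 0.5 is no better than chance.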

Keywords: Garcinia gummi-gutta, maximum entropy modeling, medicinal plants, climate change, Western Ghats, MaxEnt

Procedia PDF Downloads 392
5207 Family Firms Performance: Examining the Impact of Digital and Technological Capabilities using Partial Least Squares Structural Equation Modeling and Necessary Condition Analysis

Authors: Pedro Mota Veiga

Abstract:

This study comprehensively evaluates the repercussions of innovation, digital advancements, and technological capabilities on the operational performance of companies across fifteen European Union countries following the initial wave of the COVID-19 pandemic. Drawing insights from longitudinal data sourced from the 2019 World Bank business surveys and the subsequent 2020 World Bank COVID-19 follow-up business surveys, our examination involves a diverse sample of 5763 family businesses. In exploring the relationships between these variables, we adopt a nuanced approach to assess the impact of innovation and of digital and technological capabilities on performance. This analysis unfolds along two distinct perspectives: one rooted in necessity and the other in sufficiency. The methodological framework integrates partial least squares structural equation modeling (PLS-SEM) with necessary condition analysis (NCA), providing a robust foundation for drawing meaningful conclusions. The findings underscore a positive influence of both technological capabilities and digital advancements on the performance of family firms. Furthermore, it is pertinent to highlight the indirect contribution of innovation to enhanced performance, operating through its impact on digital capabilities. This research contributes valuable insights to the broader understanding of how innovation, coupled with digital and technological capabilities, can serve as a pivotal factor in shaping the post-COVID-19 landscape for businesses across the European Union. The analysis of family businesses, in particular, adds depth to the comprehension of the dynamics at play in diverse economic contexts within the European Union.
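The NCA side of such a design can be sketched with a CE-FDH (ceiling-envelope free disposal hull) effect size: how much of the scatter plot's upper-left corner stays empty because the condition caps the outcome. The data below are synthetic, and real NCA software additionally reports regression-based (CR-FDH) ceilings and significance tests:

```python
import numpy as np

def ce_fdh_effect_size(x, y):
    """Necessity effect size with a CE-FDH (step-function) ceiling.

    The ceiling at x is the highest y observed at any x' <= x; the effect
    size d is the empty 'ceiling zone' area divided by the scope area.
    """
    order = np.argsort(x)
    xs, ys = np.asarray(x)[order], np.asarray(y)[order]
    ceiling = np.maximum.accumulate(ys)       # running max = FDH ceiling
    y_max = ys.max()
    widths = np.diff(xs)                      # piecewise-constant integration
    zone = np.sum(widths * (y_max - ceiling[:-1]))
    scope = (xs[-1] - xs[0]) * (y_max - ys.min())
    return zone / scope

rng = np.random.default_rng(3)
# Synthetic data where digital capability (x) is a necessary condition for
# performance (y): y is capped by x, so high y never occurs at low x.
x = rng.uniform(0, 1, 300)
y = rng.uniform(0, 1, 300) * x
print(f"effect size d = {ce_fdh_effect_size(x, y):.2f}")
```

A large d (conventionally above about 0.3) signals that the predictor acts as a bottleneck for the outcome, the complementary question to the average-effect paths PLS-SEM estimates.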

Keywords: digital capabilities, technological capabilities, family firms performance, innovation, NCA, PLS-SEM

Procedia PDF Downloads 63
5206 The Roles of Muslims Scholars in Minifying Religious Extremism for Religious Tolerance and Peace Building in Nigeria

Authors: Mukhtar Sarkin-Kebbi

Abstract:

Insurgency, religious extremism and other related religious crises have become hydra-headed in Nigeria, causing the destruction of human lives and of property worth billions of naira. As a result, millions of people have been displaced and a million children are out of school, most of them from the Muslim community. The wrong teaching and misinterpretation of Islam by some in the Muslim community fuel the spread of extremist ideology and hatred among Muslim sects and non-Muslims, and the emergence of extremist groups such as Boko Haram. For a multi-religious country like Nigeria to realise its development in all human aspects, there must be unity and religious tolerance. Many agree that changing the ideologies of insurgents and religious extremists will require an intellectual role with a vigorous campaign. Muslim scholars can play a vital role in promoting social reform and peaceful coexistence. This paper discusses the importance of unity among the Muslim community and religious tolerance in light of the Qur'an and the Hadith. The paper also reviews the relationship between Muslims and non-Muslims during the lifetime of the Prophet (S.A.W.) in order to serve as an exemplary model. Contemporary issues such as religious extremism, sectarianism, intolerance and their consequences are examined. To minify religious intolerance and extremism, the paper identifies the roles to be played by Muslim scholars, with references from the Qur'an and Sunnah. The paper concludes that to realise overall human development and eternal salvation, Muslims should shun religious crises and embrace unity and religious tolerance. Finally, the paper recommends, among other things, that only pious and learned scholars should be allowed to preach in any religious gathering, and that Muslims should exercise patience and tolerance in dealing with Muslims and non-Muslims alike, living by the example of the teaching of the Qur'an and the Sunnah of the Prophet (S.A.W.).

Keywords: Muslim scholars, peace building, religious extremism, religious tolerance

Procedia PDF Downloads 213
5205 The Role of Urban Agriculture in Enhancing Food Supply and Export Potential: A Case Study of Neishabour, Iran

Authors: Mohammadreza Mojtahedi

Abstract:

Rapid urbanization presents multifaceted challenges, including environmental degradation and public health concerns. As urban sprawl continues, it becomes essential to devise strategies that alleviate its pressure on natural ecosystems and elevate socio-economic benchmarks within cities. This research investigates urban agriculture's economic contributions, emphasizing its pivotal role in food provisioning and export potential. Adopting a descriptive-analytical approach, field survey data were primarily collected via questionnaires. The tool's validity was affirmed by expert opinions, and its reliability was secured by achieving a Cronbach's alpha score above 0.70 from 30 preliminary questionnaires. The research covers Neishabour's population of 264,375, from which a sample size of 384 was extracted via Cochran's formula. Findings reveal the significance of urban agriculture in food supply and its potential for exports, underlined by a p-value < 0.05. Neishabour's urban farming can augment the export of organic commodities, fruits, vegetables and ornamental plants, and foster product branding. Moreover, it supports the provision of fresh produce, bolstering dietary quality. Urban agriculture further impacts urban development metrics, enhancing environmental quality, job opportunities, income levels and aesthetics, while promoting rainwater utilization. Popular cultivations include peaches, Damask roses, and poultry, tailored to available spaces. Structural equation modeling indicates urban agriculture's overarching influence, accounting for 56% of the variance, predominantly in food sufficiency and export proficiency.

Keywords: urban agriculture, food supply, export potential, urban development, environmental health, structural equation modeling

Procedia PDF Downloads 56
5204 Vulnerability of Steel Moment-Frame Buildings with Pinned and, Alternatively, with Semi-Rigid Connections

Authors: Daniel Llanes, Alfredo Reyes, Sonia E. Ruiz, Federico Valenzuela Beltran

Abstract:

Steel frames have been used in building construction for more than one hundred years. Beams may be connected to columns using either stiffened or unstiffened angles at the top and bottom beam flanges. Designers often assume that these assemblies act as “pinned” connections for gravity loads and that the stiffened connections act as “fixed” connections for lateral loads. Observation of damage sustained by buildings during the 1994 Northridge earthquake indicated that, contrary to the intended behavior, in many cases brittle fractures initiated within the connections at very low levels of plastic demand and, in some cases, while the structures remained essentially elastic. Due to the damage presented by these buildings, other types of connections have been proposed. According to research funded by the Federal Emergency Management Agency (FEMA), bolted connections perform better when subjected to cyclic loads, but at the same time these connections have some degree of flexibility. For this reason, some researchers have ventured into the study of semi-rigid connections. In the present study, three steel buildings composed of regular frames are analyzed. Two types of connections are considered: pinned and semi-rigid. To estimate their structural capacity, a number of incremental dynamic analyses are performed on 3D structural models. The seismic ground motions were recorded at sites near Los Angeles, California, where the structures are assumed to be located. The vulnerability curves of the buildings are obtained in terms of maximum inter-story drift. The vulnerability curves corresponding to the models with the two types of connections are compared, and their implications for structural design and performance are discussed.
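Vulnerability curves of this kind are commonly summarized as a lognormal CDF over the demand measure (here, maximum inter-story drift); a purely illustrative Python sketch, in which the median drifts and dispersion are hypothetical placeholders, not the paper's fitted values:

```python
import math

def exceedance_prob(drift, median, beta):
    """Lognormal fragility: P(demand exceeds capacity) at a given inter-story drift."""
    return 0.5 * (1.0 + math.erf(math.log(drift / median) / (beta * math.sqrt(2.0))))

# Hypothetical median drift capacities for the two connection models (illustrative only)
models = {"pinned": 0.025, "semi-rigid": 0.030}
for name, median in models.items():
    curve = [exceedance_prob(d, median, beta=0.4) for d in (0.01, 0.02, 0.03, 0.04)]
    print(name, [round(p, 3) for p in curve])
```

Comparing two such curves, as the authors do, amounts to comparing exceedance probabilities at each drift level: the model with the larger median capacity sits to the right, i.e., is less vulnerable.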

Keywords: steel frame buildings, vulnerability curves, semi-rigid connections, pinned connections

Procedia PDF Downloads 225
5203 CFD Modeling of Air Stream Pressure Drop inside Combustion Air Duct of Coal-Fired Power Plant with and without Airfoil

Authors: Pakawhat Khumkhreung, Yottana Khunatorn

Abstract:

The flow pattern inside the rectangular intake air duct of a 300 MW lignite coal-fired power plant is investigated in order to analyze and reduce the overall inlet system pressure drop. The system consists of a 45-degree inlet elbow, the flow instrument, a 90-degree mitered elbow, and fans, in that order. The energy loss in each section can be determined from Bernoulli’s equation and the ASHRAE standard tables. Hence, computational fluid dynamics (CFD) is used in this study, based on the Navier-Stokes equations and the standard k-epsilon turbulence model. The input boundary condition is a 175 kg/s mass flow rate through the duct's 11 m² cross-section. At this inlet air flow rate, the Reynolds number of the airstream is 2.7x10⁶ (based on the hydraulic duct diameter), so the flow is turbulent. The numerical results are validated against real operating data. It is found that the numerical results agree well with the operating data, and that the dominant loss occurs at the flow rate measurement device. Normally, the air flow rate is measured by the airfoil, which induces a high pressure drop inside the duct. To overcome this problem, the airfoil is planned to be replaced with another type of measuring instrument, such as the average pitot tube, which generates a low pressure drop in the airstream. The numerical result for the average pitot tube case shows that the pressure drop inside the inlet airstream duct decreases significantly. It should be noted that the energy consumption of the inlet air system is reduced as well.
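The quoted Reynolds number can be sanity-checked from the stated mass flow rate and duct area; a rough Python sketch, assuming standard air properties and a square cross-section (neither is specified in the abstract):

```python
import math

# Assumed standard air properties (not given in the abstract)
rho = 1.2       # density, kg/m^3
mu = 1.8e-5     # dynamic viscosity, Pa*s

m_dot = 175.0   # mass flow rate, kg/s (from the abstract)
area = 11.0     # duct cross-sectional area, m^2 (from the abstract)

velocity = m_dot / (rho * area)   # bulk velocity, ~13 m/s
d_h = math.sqrt(area)             # hydraulic diameter of a square duct: 4A/P = sqrt(A)
reynolds = rho * velocity * d_h / mu
print(f"V = {velocity:.1f} m/s, Re = {reynolds:.1e}")
```

The result lands in the same range as the quoted 2.7x10⁶; the exact value depends on the actual duct aspect ratio and air temperature, which the abstract does not give.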

Keywords: airfoil, average pitot tube, combustion air, CFD, pressure drop, rectangular duct

Procedia PDF Downloads 157
5202 Conceptualizing a Strategic Facilities Management Decision Framework for Heritage Building Maintenance Management

Authors: Adegoriola Mayowa I., Lai Joseph H. K., Yung Esther H. K., Chan Edwin H. K.

Abstract:

Heritage buildings (HBs) are structures with historical and architectural relevance that form an integral part of contemporary society. These buildings deserve to be protected for as long as possible to retain their significance. Therefore, the need to prioritize heritage building maintenance management (HBMM) is pertinent. However, the decision-making process of HBMM can be relatively daunting. The decision-making challenge may be attributed to the multiple stakeholders' expectations and requirements that need to be met. To this end, professionals in the built environment have identified the need to apply the strategic concept of facilities management (FM) in decision making. Furthermore, the different maintenance dimensions have been applied to the maintenance management of residential, commercial, and health facilities. Unfortunately, these different maintenance approaches, such as FM, sustainable FM, urban FM, green FM, and strategic FM, are yet to be fully explored in the decision-making process of HBMM. To bridge this gap, this study focuses on developing a framework for strategic decision making in HBMM, which helps achieve HBMM sustainability. At the study's inception, a review of relevant literature in the domains of HBMM and FM was conducted. This review helped in identifying contemporary maintenance practices and their applicability to HBMM. Afterward, a conceptual framework to aid decision making in HBMM was developed. This framework integrates the FM scope (people, place, process, and technology) while ensuring that decisions are made at the strategic, tactical, and operational levels. The different characteristics of HBs and stakeholders' requirements were also considered in the framework. The conceptual framework presents a holistic guide for professionals in HBMM to ensure that decision processes and outcomes are practical and efficient. It also contributes to the existing body of knowledge on the integration of FM in HBMM.
Furthermore, it will serve as a basis for future studies applying the conceptualized framework to actual cases.
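One way to read the framework described above is as a two-axis decision grid (FM scope crossed with decision level); a purely illustrative Python sketch, in which all names, fields, and validation rules are assumptions for exposition, not the authors' implementation:

```python
from dataclasses import dataclass, field

LEVELS = ("strategic", "tactical", "operational")
FM_SCOPE = ("people", "place", "process", "technology")

@dataclass
class MaintenanceDecision:
    """A single HBMM decision located on the framework's two axes."""
    description: str
    level: str                            # strategic / tactical / operational
    scope: str                            # people / place / process / technology
    stakeholders: list = field(default_factory=list)

    def __post_init__(self):
        # Reject decisions that fall outside the framework's grid
        if self.level not in LEVELS:
            raise ValueError(f"unknown decision level: {self.level}")
        if self.scope not in FM_SCOPE:
            raise ValueError(f"unknown FM scope: {self.scope}")

d = MaintenanceDecision("schedule quarterly facade inspection", "operational", "place",
                        stakeholders=["conservation officer", "facility manager"])
print(d.level, d.scope)
```

Classifying each maintenance decision this way makes it straightforward to check that every cell of the grid, at every decision level, has been addressed for a given heritage building.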

Keywords: decision-making, facility management, strategy, sustainability, heritage building, maintenance

Procedia PDF Downloads 138