Search results for: location-allocation models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6795


4575 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint

Authors: Semachew M. Kassa, Africa M Geremew, Tezera F. Azmatch, Nandyala Darga Kumar

Abstract:

Because landslides can seriously harm both the environment and society, methods such as the frequency ratio (FR) and the analytical hierarchy process (AHP) have been developed from past landslide failure points to produce landslide susceptibility maps. However, it is still difficult to select the most efficient method and to correctly identify the main driving factors for a particular region. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, namely Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict landslide susceptibility at a 12.5 m spatial resolution. According to the classification results based on inventory landslide points, the performance of the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods was significantly better than that of the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) methods. The findings also showed that around 35% of the study region consists of areas with high or very high landslide risk (susceptibility greater than 0.5). The very-high-risk locations were found primarily in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns of landslide susceptibility. The areas with the highest landslide risk include the western and northern parts of Amhara Saint Town and the St. Gebreal Church villages, with mean susceptibility values greater than 0.5. The primary contributing factors to landslide vulnerability varied slightly among the five models; however, rainfall, distance to road, and slope were typically among the leading factors for most villages. Decision-makers and policy planners can use the information from our study to make informed decisions and establish policies. It also suggests that different places should take different safeguards to reduce or prevent serious damage from landslide events.
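As a toy illustration (not the authors' code), the two metrics used to rank the five classifiers can be computed from inventory points and model susceptibility scores as follows; the labels and scores below are made up:

```python
# Hypothetical landslide (1) vs. non-landslide (0) inventory points.

def f1_score(y_true, y_pred):
    """F1 = 2*precision*recall / (precision + recall) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def auc(y_true, scores):
    """AUC via the rank (Mann-Whitney U) formulation."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]          # model susceptibility scores
y_pred = [1 if s > 0.5 else 0 for s in scores]   # susceptibility > 0.5 => "high risk"
print(round(f1_score(y_true, y_pred), 2), round(auc(y_true, scores), 2))  # -> 0.67 0.89
```

The same threshold of 0.5 is what the abstract uses to delimit high and very high risk areas.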

Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine

Procedia PDF Downloads 84
4574 Liposome Sterile Filtration Fouling: The Impact of Transmembrane Pressure on Performance

Authors: Hercules Argyropoulos, Thomas F. Johnson, Nigel B Jackson, Kalliopi Zourna, Daniel G. Bracewell

Abstract:

Lipid encapsulation has become essential in drug delivery, notably for mRNA vaccines during the COVID-19 pandemic. However, sterile filtration of these vehicles poses challenges due to the risk of deformation, filter fouling, and product loss from adsorption onto the membrane. Choosing the right filtration membrane is crucial to maintain sterility and integrity while minimizing product loss. The objective of this study is to develop a rigorous analytical framework utilizing confocal microscopy and filtration blocking models to elucidate the fouling mechanisms of liposomes, as a model system for this class of delivery vehicle, during sterile filtration, particularly in response to variations in transmembrane pressure (TMP) during the filtration process. Experiments were conducted using fluorescent Lipoid S100 PC liposomes formulated by microfluidization and characterized by multi-angle dynamic light scattering. Dual-layer PES/PES and PES/PVDF membranes with 0.2 μm pores were used for filtration under constant pressure, cycling from 30 psi to 5 psi and back to 30 psi, with 5-, 6-, and 5-minute intervals. Cross-sectional membrane samples were prepared by microtome slicing and analyzed with confocal microscopy. Liposome characterization revealed a particle size range of 100-140 nm and an average concentration of 2.93x10¹¹ particles/mL. Goodness-of-fit analysis of flux decline data at varying TMPs identified the intermediate blocking model as most accurate at 30 psi and the cake filtration model at 5 psi. Membrane resistance analysis showed atypical behavior compared to therapeutic proteins: resistance remained below 1.38×10¹¹ m⁻¹ at 30 psi, increased over fourfold at 5 psi, and then decreased to 1-1.3-fold of the initial value when pressure was returned to 30 psi. This suggests that increased flow/shear deforms liposomes, enabling them to navigate membrane pores more effectively. Confocal microscopy indicated that liposome fouling mainly occurred in the upper parts of the dual-layer membrane.
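The goodness-of-fit comparison can be sketched as follows. This is not the authors' analysis; the model forms are the classical constant-pressure blocking laws (Hermia-type), the flux data are synthetic, and the fouling constant k is fitted by a coarse grid search:

```python
import math

J0 = 1.0  # normalized initial flux

# Classical constant-pressure blocking laws, flux J(t) as a function of time.
models = {
    "complete":     lambda t, k: J0 * math.exp(-k * t),
    "intermediate": lambda t, k: J0 / (1 + k * t),
    "cake":         lambda t, k: J0 / math.sqrt(1 + 2 * k * t),
}

times = [0, 1, 2, 4, 8, 16]
data = [1 / (1 + 0.3 * t) for t in times]  # synthetic flux decline (intermediate blocking)

def sse(model, k):
    """Sum of squared errors of a blocking law against the flux data."""
    return sum((model(t, k) - d) ** 2 for t, d in zip(times, data))

best = {}
for name, model in models.items():
    k_best = min((i * 0.01 for i in range(1, 101)), key=lambda k: sse(model, k))
    best[name] = sse(model, k_best)

print(min(best, key=best.get))  # the law with the lowest SSE -> "intermediate"
```

In the study the same kind of comparison selects intermediate blocking at 30 psi and cake filtration at 5 psi.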

Keywords: sterile filtration, membrane resistance, microfluidization, confocal microscopy, liposomes, filtration blocking models

Procedia PDF Downloads 24
4573 A New Paradigm to Make Cloud Computing Greener

Authors: Apurva Saxena, Sunita Gond

Abstract:

Demand for computation and large-scale data storage is increasing rapidly day by day. Cloud computing technology fulfills today's computational demand, but this leads to high power consumption in cloud data centers. Green IT initiatives try to reduce this power consumption and its adverse environmental impacts. This paper also focuses on various green computing techniques, proposed models, and efficient ways to make the cloud greener.

Keywords: virtualization, cloud computing, green computing, data center

Procedia PDF Downloads 556
4572 An Efficient Hardware/Software Workflow for Multi-Cores Simulink Applications

Authors: Asma Rebaya, Kaouther Gasmi, Imen Amari, Salem Hasnaoui

Abstract:

In recent years, applications such as telecommunications, signal processing, and digital communication with advanced features (multi-antenna, equalization, etc.) have undergone rapid evolution, accompanied by increasing user requirements in terms of latency, computational power, and so on. To satisfy these requirements, the use of hardware/software systems is a common solution, where the hardware is composed of multiple cores and the software is represented by a model of computation, for instance a synchronous data flow (SDF) graph. Moreover, most embedded system designers use Simulink for modeling. The issue is how to simplify C code generation, for a multi-core platform, of an application modeled in Simulink. To overcome this problem, we propose a workflow that automatically transforms the Simulink model into an SDF graph and provides an efficient schedule that optimizes the number of cores and minimizes latency. This workflow starts from a Simulink application and a hardware architecture described in the IP-XACT language. Based on the synchronous and hierarchical behavior of both models, the Simulink block diagram is automatically transformed into an SDF graph. Once this process is successfully achieved, the scheduler calculates the optimal number of cores needed by minimizing the maximum density of the whole application. A core is then chosen to execute a specific graph task in a specific order and, subsequently, compatible C code is generated. To implement this proposal, we extend Preesm, a rapid prototyping tool, to take the Simulink model as input and to support the optimal schedule. We then compared our results with Preesm's, using a simple illustrative application. The comparison shows that our results strictly dominate Preesm's in terms of number of cores and latency: where Preesm needs m processors and achieves latency L, our workflow needs fewer processors and achieves a latency L' < L.
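A first step in any static SDF scheduling flow like the one described is solving the balance equations r[u]·prod = r[v]·cons for every edge, yielding the repetition vector. The sketch below (a toy three-actor graph, not the paper's tool) illustrates that step:

```python
from fractions import Fraction
from math import lcm

# edge: (producer, consumer, tokens produced per firing, tokens consumed per firing)
edges = [("A", "B", 2, 3), ("B", "C", 1, 2)]

rates = {"A": Fraction(1)}       # fix one actor's rate, propagate the rest
changed = True
while changed:
    changed = False
    for u, v, p, c in edges:
        if u in rates and v not in rates:
            rates[v] = rates[u] * p / c   # balance: r[u]*p = r[v]*c
            changed = True
        elif v in rates and u not in rates:
            rates[u] = rates[v] * c / p
            changed = True

# Scale to the smallest integer firing counts.
scale = lcm(*(r.denominator for r in rates.values()))
repetition = {n: int(r * scale) for n, r in rates.items()}
print(repetition)  # -> {'A': 3, 'B': 2, 'C': 1}
```

Once the repetition vector is known, tasks can be ordered and assigned to cores, which is where a scheduler such as the extended Preesm minimizes core count and latency.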

Keywords: hardware/software system, latency, modeling, multi-cores platform, scheduler, SDF graph, Simulink model, workflow

Procedia PDF Downloads 271
4571 Comparison of Different Reanalysis Products for Predicting Extreme Precipitation in the Southern Coast of the Caspian Sea

Authors: Parvin Ghafarian, Mohammadreza Mohammadpur Panchah, Mehri Fallahi

Abstract:

Synoptic patterns from the surface up to the tropopause are very important for forecasting weather and atmospheric conditions, and many tools exist to prepare and analyze such maps. Reanalysis data, the outputs of numerical weather prediction models, satellite images, meteorological radar, and weather station data are all used in world forecasting centers to predict the weather. Forecasting extreme precipitation on the southern coast of the Caspian Sea (CS) is a major issue due to the complex topography, and these areas also contain different climate types. In this research, we used two reanalysis datasets, the ECMWF Reanalysis 5th Generation (ERA5) and the National Centers for Environmental Prediction / National Center for Atmospheric Research (NCEP/NCAR) reanalysis, for verification of the numerical model. ERA5 is the latest ECMWF reanalysis; its temporal resolution is hourly, while that of NCEP/NCAR is six-hourly. Atmospheric parameters such as mean sea level pressure, geopotential height, relative humidity, wind speed and direction, and sea surface temperature were selected and analyzed, for precipitation events of different types (rain and snow). The results showed that NCEP/NCAR better captures the intensity of the atmospheric systems, whereas ERA5 is more suitable for extracting parameter values at a specific point and is appropriate for analyzing snowfall events over the CS (snow cover and snow depth). Sea surface temperature plays the main role in generating instability over the CS, especially when cold air passes over it; the NCEP/NCAR sea surface temperature product has low resolution near the coast. Both datasets were able to detect the meteorological synoptic patterns that led to heavy rainfall over the CS; however, due to their time lag they are not suitable for forecast centers, and their application lies in research and in the verification of meteorological models. Finally, ERA5 has better resolution than the NCEP/NCAR reanalysis, but NCEP/NCAR data are available from 1948 and are appropriate for long-term research.
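The point-extraction task for which the abstract finds ERA5 better suited (its grid, at roughly 0.25°, is much finer than the 2.5° NCEP/NCAR grid) amounts to a nearest-grid-point lookup. A minimal sketch with a hypothetical grid, not the study's data:

```python
def nearest_value(lats, lons, field, lat, lon):
    """Value of a gridded field at the grid point nearest to (lat, lon)."""
    i = min(range(len(lats)), key=lambda k: abs(lats[k] - lat))
    j = min(range(len(lons)), key=lambda k: abs(lons[k] - lon))
    return field[i][j]

# Hypothetical temperature grid over the southern Caspian coast.
lats = [36.0, 36.25, 36.5, 36.75]
lons = [49.0, 49.25, 49.5, 49.75]
temp = [[10 + i + 0.1 * j for j in range(4)] for i in range(4)]

print(nearest_value(lats, lons, temp, 36.6, 49.6))  # station at 36.6N, 49.6E
```

With a coarse grid the nearest point can lie far from the station, which is exactly why the coarse NCEP/NCAR sea surface temperature behaves poorly near the coast.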

Keywords: synoptic patterns, heavy precipitation, reanalysis data, snow

Procedia PDF Downloads 124
4570 Multi-Criteria Decision Making Network Optimization for Green Supply Chains

Authors: Bandar A. Alkhayyal

Abstract:

Modern supply chains are typically linear, transforming virgin raw materials into products for end consumers, who then discard them after use to landfills or incinerators. Nowadays, there are major efforts underway to create a circular economy to reduce non-renewable resource use and waste. One important aspect of these efforts is the development of Green Supply Chain (GSC) systems, which enable a reverse flow of used products from consumers back to manufacturers, where they can be refurbished or remanufactured, to both economic and environmental benefit. This paper develops novel multi-objective optimization models to inform GSC system design at multiple levels: (1) strategic planning of facility location and transportation logistics; (2) tactical planning of optimal pricing; and (3) policy planning to account for potential valuation of GSC emissions. First, physical linear programming was applied to evaluate GSC facility placement by determining the quantities of end-of-life products to transport from candidate collection centers to remanufacturing facilities while satisfying cost and capacity criteria. Second, disassembly and remanufacturing processes have received little attention in the industrial engineering and process cost modeling literature, and the increasing scale of remanufacturing operations, worth nearly $50 billion annually in the United States alone, has made GSC pricing an important subject of research. A non-linear physical programming model for optimizing the pricing policy for remanufactured products, maximizing total profit and minimizing product recovery costs, was formulated and solved. Finally, a deterministic equilibrium model was used to determine the effects of internalizing a cost of GSC greenhouse gas (GHG) emissions into the optimization models. 
Changes in optimal facility use, transportation logistics, and pricing/profit margins were all investigated against a variable cost of carbon, using a case study system created from actual data from sites in the Boston area. As carbon costs increase, the optimal GSC system undergoes several distinct shifts in topology as it seeks new cost-minimal configurations. A comprehensive quantitative evaluation of the model's performance has been carried out using orthogonal arrays. Results were compared to top-down estimates from economic input-output life cycle assessment (EIO-LCA) models to contrast remanufacturing GHG emission quantities with those from original equipment manufacturing operations. Introducing a carbon cost of $40/t CO2e increases modeled remanufacturing costs by 2.7% but also increases original equipment costs by 2.3%. The assembled work advances the theoretical modeling of optimal GSC systems and presents a rare case study of remanufactured appliances.
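The strategic-level model has the shape of a transportation problem: ship end-of-life products from collection centers to remanufacturing facilities at minimum cost subject to supply and capacity constraints. A toy brute-force illustration with hypothetical data (not the Boston case study, and enumeration instead of the paper's physical linear programming):

```python
from itertools import product

supply = [4, 3]        # units held at collection centers C0, C1
capacity = [5, 4]      # capacity of remanufacturing facilities F0, F1
cost = [[2, 5],        # cost[i][j]: shipping one unit from Ci to Fj
        [4, 1]]

best, best_cost = None, float("inf")
# x[i][j] = units shipped Ci -> Fj; enumerate all integer plans (tiny instance only).
for x00, x01, x10, x11 in product(range(6), repeat=4):
    x = [[x00, x01], [x10, x11]]
    if [sum(row) for row in x] != supply:
        continue  # each center ships out exactly its supply
    if any(x[0][j] + x[1][j] > capacity[j] for j in range(2)):
        continue  # facility capacity respected
    c = sum(cost[i][j] * x[i][j] for i in range(2) for j in range(2))
    if c < best_cost:
        best, best_cost = x, c

print(best, best_cost)  # -> [[4, 0], [0, 3]] 11
```

A carbon cost would enter as an extra per-unit term in `cost`, which is how the topology shifts reported in the abstract arise as the carbon price varies.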

Keywords: circular economy, extended producer responsibility, greenhouse gas emissions, industrial ecology, low carbon logistics, green supply chains

Procedia PDF Downloads 161
4569 The Reliability Analysis of Concrete Chimneys Due to Random Vortex Shedding

Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta

Abstract:

Chimneys are generally tall and slender structures with circular cross-sections, which makes them highly prone to wind forces. Wind exerts pressure on the wall of a chimney, producing unwanted forces, and vortex-induced oscillation is one such excitation that can lead to failure. Vortex-induced oscillation of chimneys is therefore of great concern to researchers and practitioners, since many chimney failures due to vortex shedding have occurred in the past. As a consequence, extensive research has taken place on the subject over the decades. Many laboratory experiments have been performed to verify the theoretical models proposed to predict vortex-induced forces, including aero-elastic effects; comparatively few prototype measurement data have been recorded for the same purpose. For this reason, theoretical models developed with the help of experimental laboratory data are used to analyze chimneys for vortex-induced forces, which calls for a reliability analysis of the predicted responses of chimneys to the vortex shedding phenomenon. Although a considerable literature exists on the vortex-induced oscillation of chimneys, including code provisions, reliability analysis of chimneys against failure caused by vortex shedding is scanty. The present study carries out a reliability analysis of chimneys against vortex shedding failure, assuming the uncertainty in the vortex shedding phenomenon to be significantly greater than the other uncertainties, which are therefore ignored. The vortex shedding is modeled as a stationary random process and is represented by a power spectral density function (PSDF). It is assumed that the vortex shedding forces are perfectly correlated and act over the top one-third height of the chimney. The PSDF of the tip displacement of the chimney is obtained by performing a frequency domain spectral analysis using a matrix approach. 
For this purpose, both the chimney and the random wind forces are discretized over a number of points along the height of the chimney. The method of analysis duly accounts for aero-elastic effects. The double-barrier threshold crossing level, as proposed by Vanmarcke, is used for determining the probability of crossing different threshold levels of the tip displacement. Assuming the annual distribution of the mean wind velocity to be a Gumbel type-I distribution, the fragility curve denoting the variation of the annual probability of threshold crossing against different threshold levels of tip displacement is determined, and the reliability estimate is derived from the fragility curve. A 210 m tall concrete chimney with a base diameter of 35 m, a top diameter of 21 m, and a wall thickness of 0.3 m is taken as an illustrative example; the terrain condition is assumed to be that of a city center. The expression for the PSDF of the vortex shedding force is taken from Vickery and Basu. The results of the study show that the threshold crossing reliability of the tip displacement is significantly influenced by the assumed structural damping and the Gumbel distribution parameters. Further, the aero-elastic effect influences the reliability estimate to a great extent for small structural damping.
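The Gumbel ingredient of such a fragility curve can be sketched as follows. The location and scale parameters below are hypothetical, not the paper's values; the idea is that the annual probability of exceeding the critical wind velocity associated with a displacement threshold gives one point of the fragility curve:

```python
import math

def gumbel_cdf(v, mu, beta):
    """Type-I (Gumbel) CDF of the annual maximum mean wind velocity."""
    return math.exp(-math.exp(-(v - mu) / beta))

mu, beta = 25.0, 4.0  # hypothetical location/scale parameters (m/s)

# Fragility-style curve: annual exceedance probability vs. critical velocity
# (the velocity at which a given tip-displacement threshold is crossed).
for v_cr in (25.0, 30.0, 35.0):
    p_exceed = 1.0 - gumbel_cdf(v_cr, mu, beta)
    print(f"v_cr = {v_cr} m/s  ->  annual exceedance {p_exceed:.3f}")
```

The sensitivity to the Gumbel parameters noted in the abstract is visible directly: changing mu or beta shifts every exceedance probability on the curve.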

Keywords: chimney, fragility curve, reliability analysis, vortex-induced vibration

Procedia PDF Downloads 166
4568 An Advanced Image-Based Intelligent System for Enhancing Construction Site Safety Monitoring and Analysis

Authors: Hijratullah Sharifzada, You Wang, Said Ikram Sadat, Hamza Javed, Khalid Akhunzada, Sidra Javed, Sadiq Khan

Abstract:

In the construction industry, safety is of paramount importance given the complex and dynamic nature of construction sites, which are prone to various hazards like falls from heights, being hit by falling objects, and structural collapses. Traditional safety management strategies such as manual inspections and safety training have shown significant limitations. This study presents an intelligent monitoring and analysis system for construction site safety based on an image dataset. A specifically designed Construction Site Safety Image Dataset, comprising 10 distinct classes of objects commonly found on sites, is utilized and divided into training, validation, and test subsets. InceptionV3 and MobileNetV2 are chosen as pre-trained models for feature extraction and are modified through truncation and compression to better suit the task. A novel Feature Fusion architecture is introduced, integrating these modified models along with a Squeeze-and-Excitation block. Experimental results demonstrate that the proposed model achieves a mean Average Precision (mAP) of 0.81 at an IoU threshold of 0.5, with high accuracies for classes like "Safety Cone" (91%) and "Machinery" (93%) but relatively lower accuracy for "Vehicle" (57%). The training process exhibits smooth convergence, and compared to prior methods such as YOLOv4 and SSD, the proposed framework shows superiority in precision and recall. Despite its achievements, the system has limitations, including reliance on visual data and dataset imbalance. Future research directions involve incorporating multi-modal data, conducting real-world deployments, and optimizing for edge deployment, aiming to further enhance construction site safety.
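The mAP figure reported at an IoU threshold of 0.5 rests on the intersection-over-union criterion, which can be illustrated with hypothetical boxes (not the dataset's annotations):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

gt = (10, 10, 50, 50)     # ground-truth "Safety Cone" box (illustrative)
pred = (20, 20, 60, 60)   # model detection
score = iou(gt, pred)
print(round(score, 3), score >= 0.5)  # -> 0.391 False
```

A detection counts toward precision and recall only when its IoU with a ground-truth box reaches the threshold, so mAP@0.5 averages precision over recall levels under exactly this test, per class.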

Keywords: construction site safety, intelligent monitoring system, image dataset, InceptionV3, MobileNetV2, feature fusion, squeeze-and-excitation block, mean average precision, object detection

Procedia PDF Downloads 0
4567 Hybrid Method for Smart Suggestions in Conversations for Online Marketplaces

Authors: Yasamin Rahimi, Ali Kamandi, Abbas Hoseini, Hesam Haddad

Abstract:

Online/offline chat is a convenient feature of electronic markets for second-hand products, where potential customers would like more information about the products to fill the information gap between buyers and sellers. Online peer-to-peer markets are trying to create artificial-intelligence-based systems that help customers ask more informative questions more easily. In this article, we introduce the method behind the question/answer system that we developed for Divar, the top-ranked electronic market in Iran. With second-hand products, incomplete product information in a purchase results in a loss to the buyer, and one way to balance buyer and seller information is to help the buyer ask more informative questions when purchasing. A short time to start the conversation and reach its desired outcome was also one of our main goals, and this was achieved according to A/B test results. In this paper, we propose and evaluate a method for suggesting questions and answers in the messaging platform of the e-commerce website Divar. Such systems help users gather knowledge about the product more easily and quickly, all from the Divar database. We collected a dataset of around 2 million messages in colloquial Persian; for each product category, we gathered 500K messages, of which only 2K were tagged, so semi-supervised methods were used. To deploy the proposed model to production, it must be fast enough to process 10 million messages daily on CPU processors; to reach that speed, in many subtasks faster, simpler models are preferred over deep neural models. 
The proposed method, which requires only a small amount of labeled data, is currently used in Divar production on CPU processors; 15% of buyer and seller messages in conversations are chosen directly from our model's output, and more than 27% of buyers have used the model's suggestions in at least one daily conversation.
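In the spirit of the fast, non-deep models the abstract prefers for CPU throughput, a suggestion step can be as simple as matching an incoming message against template questions by bag-of-words cosine similarity. The templates and keyword lists below are hypothetical, not Divar's system:

```python
from collections import Counter
import math

# Hypothetical suggestion templates with associated keyword bags.
templates = {
    "Is the price negotiable?": "price negotiable discount",
    "Is it still available?": "available still sold",
    "What is the condition?": "condition used scratches broken",
}

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest(message: str) -> str:
    """Return the template question most similar to the incoming message."""
    bag = Counter(message.lower().split())
    return max(templates, key=lambda q: cosine(bag, Counter(templates[q].split())))

print(suggest("is this phone still available"))  # -> Is it still available?
```

Counting-based scoring like this runs comfortably at millions of messages per day on CPUs, which is the constraint the abstract cites for rejecting deep models in many subtasks.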

Keywords: smart reply, spell checker, information retrieval, intent detection, question answering

Procedia PDF Downloads 188
4566 Comparative Evaluation of Root Uptake Models for Developing Moisture Uptake Based Irrigation Schedules for Crops

Authors: Vijay Shankar

Abstract:

In an era of water scarcity, effective use of water via irrigation requires good methods for determining crop water needs. Implementation of irrigation scheduling programs requires an accurate estimate of water use by the crop, since moisture depletion from the root zone represents the consequent crop evapotranspiration (ET). A numerical model for simulating soil water depletion in the root zone has been developed by taking into consideration soil physical properties, crop parameters, and climatic parameters. The governing differential equation for unsaturated flow of water in the soil is solved numerically using the fully implicit finite difference technique, and the water uptake by plants is simulated using three different sink functions. The non-linear model predictions are in good agreement with field data, so irrigations can be scheduled more effectively. The present paper describes irrigation scheduling based on moisture depletion from the different layers of the root zone, obtained using the different sink functions, for three cash, oil, and forage crops: cotton, safflower, and barley, respectively. The soil is considered to be at a moisture level equal to field capacity prior to planting. Two soil moisture regimes are then imposed for the irrigated treatment: in one, irrigation is applied whenever the soil moisture content is reduced to 50% of available soil water; in the other, whenever it is reduced to 75% of available soil water. For both soil moisture regimes, the model incorporating a non-linear sink function, which provides the best agreement between computed root zone moisture depletion and field data, is found to be most effective in scheduling irrigations. 
Simulation runs with this moisture uptake function save 27.3-45.5% and 18.7-37.5% irrigation water for cotton, 12.5-25% and 16.7-33.3% for safflower, and 16.7-33.3% and 20-40% for barley, under the 50% and 75% moisture depletion regimes respectively, compared with the other moisture uptake functions considered in the study. The simulation developed can be used for optimized irrigation planning for different crops, choosing a suitable soil moisture regime depending upon irrigation water availability and crop requirements.
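The scheduling rule itself reduces to a depletion threshold on the available soil water between field capacity and wilting point. A hedged sketch with hypothetical soil numbers, not the paper's simulations:

```python
FIELD_CAPACITY = 0.30   # volumetric moisture content at field capacity (illustrative)
WILTING_POINT = 0.10    # permanent wilting point (illustrative)
AVAILABLE = FIELD_CAPACITY - WILTING_POINT  # available soil water

def needs_irrigation(theta, depletion_fraction):
    """True once the allowed fraction of available water has been depleted."""
    trigger = FIELD_CAPACITY - depletion_fraction * AVAILABLE
    return theta <= trigger

# Root-zone moisture drawn down day by day by evapotranspiration (illustrative).
for theta in (0.28, 0.24, 0.19, 0.14):
    print(theta, needs_irrigation(theta, 0.50), needs_irrigation(theta, 0.75))
```

The two regimes in the study correspond to `depletion_fraction` values of 0.50 and 0.75; the 75% regime lets the soil dry further before each irrigation, which is why it generally uses less water.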

Keywords: irrigation water, evapotranspiration, root uptake models, water scarcity

Procedia PDF Downloads 333
4565 150 KVA Multifunction Laboratory Test Unit Based on Power-Frequency Converter

Authors: Bartosz Kedra, Robert Malkowski

Abstract:

This paper provides a description and presentation of a laboratory test unit built on a 150 kVA power frequency converter and the Simulink Real-Time platform. The assumptions, based on criteria determining which load and generator types may be simulated using the discussed device, are presented, as well as the control algorithm structure. As the laboratory setup contains a transformer with a thyristor-controlled tap changer, a wider scope of setup capabilities is presented. Information about the communication interface, the data maintenance and storage solution, and the Simulink Real-Time features used is given, together with a list and description of all measurements, and the potential for modifications of the laboratory setup is evaluated. For the purposes of Rapid Control Prototyping, a dedicated environment, Simulink Real-Time, was used; the load model Functional Unit Controller is therefore based on a PC with I/O cards and Simulink Real-Time software. Simulink Real-Time was used to create real-time applications directly from Simulink models. In the next step, the applications were loaded onto a target computer connected to the physical devices, which provided the opportunity to perform Hardware-in-the-Loop (HIL) tests as well as the mentioned Rapid Control Prototyping process. With Simulink Real-Time, Simulink models were extended with I/O card driver blocks that made it possible to automatically generate real-time applications and perform interactive or automated runs on a dedicated target computer equipped with a real-time kernel, multicore CPU, and I/O cards. Results of the performed laboratory tests are presented: different load configurations are described and experimental results given, including simulation of under-frequency load shedding, frequency- and voltage-dependent characteristics of groups of load units, time characteristics of groups of different load units in a chosen area, and arbitrary active and reactive power regulation based on a defined schedule.

Keywords: MATLAB, power converter, Simulink Real-Time, thyristor-controlled tap changer

Procedia PDF Downloads 325
4564 Flux-Linkage Performance of DFIG Under Different Types of Faults and Locations

Authors: Mohamed Moustafa Mahmoud Sedky

Abstract:

The doubly-fed induction generator (DFIG) wind turbine has recently received great attention. The steady-state performance and response of DFIG-based wind turbines are now well understood. This paper presents an analysis of the operation of the stator and rotor flux-linkage dq models of the DFIG under different fault types and at different fault locations.
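For reference, the stator and rotor flux linkages of the dq model take the standard textbook form (not reproduced from the paper; $L_s$ and $L_r$ are the stator and rotor self-inductances and $L_m$ the magnetizing inductance):

```latex
\begin{aligned}
\lambda_{ds} &= L_s i_{ds} + L_m i_{dr}, &\qquad \lambda_{qs} &= L_s i_{qs} + L_m i_{qr},\\
\lambda_{dr} &= L_r i_{dr} + L_m i_{ds}, &\qquad \lambda_{qr} &= L_r i_{qr} + L_m i_{qs}.
\end{aligned}
```

A fault at a given location perturbs the stator voltage and hence these linkages, which is what the analysis in the paper tracks across fault types and locations.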

Keywords: double fed induction motor, wind energy, flux linkage, short circuit

Procedia PDF Downloads 520
4563 Indirect Intergranular Slip Transfer Modeling Through Continuum Dislocation Dynamics

Authors: A. Kalaei, A. H. W. Ngan

Abstract:

In this study, a mesoscopic continuum dislocation dynamics (CDD) approach is applied to simulate the intergranular slip transfer. The CDD scheme applies an efficient kinematics equation to model the evolution of the “all-dislocation density,” which is the line-length of dislocations of each character per unit volume. As the consideration of every dislocation line can be a limiter for the simulation of slip transfer in large scales with a large quantity of participating dislocations, a coarse-grained, extensive description of dislocations in terms of their density is utilized to resolve the effect of collective motion of dislocation lines. For dynamics closure, namely, to obtain the dislocation velocity from a velocity law involving the effective glide stress, mutual elastic interaction of dislocations is calculated using Mura’s equation after singularity removal at the core of dislocation lines. The developed scheme for slip transfer can therefore resolve the effects of the elastic interaction and pile-up of dislocations, which are important physics omitted in coarser models like crystal plasticity finite element methods (CPFEMs). Also, the length and timescales of the simulation are considerably larger than those in molecular dynamics (MD) and discrete dislocation dynamics (DDD) models. The present work successfully simulates that, as dislocation density piles up in front of a grain boundary, the elastic stress on the other side increases, leading to dislocation nucleation and stress relaxation when the local glide stress exceeds the operation stress of dislocation sources seeded on the other side of the grain boundary. More importantly, the simulation verifies a phenomenological misorientation factor often used by experimentalists, namely, the ease of slip transfer increases with the product of the cosines of misorientation angles of slip-plane normals and slip directions on either side of the grain boundary. 
Furthermore, to investigate the effects of the critical stress-intensity factor of the grain boundary, dislocation density sources are seeded at different distances from the grain boundary, and the critical applied stress to make slip transfer happen is studied.
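The phenomenological misorientation factor the simulations verify is geometric and easy to compute: the product of the cosines of the angles between the slip-plane normals and between the slip directions on either side of the boundary (the Luster-Morris form). The vectors below are illustrative, not taken from the paper:

```python
import math

def unit(v):
    """Normalize a 3-vector."""
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def transfer_factor(n1, d1, n2, d2):
    """m' = cos(angle between plane normals) * cos(angle between slip directions)."""
    return dot(unit(n1), unit(n2)) * dot(unit(d1), unit(d2))

# Perfectly aligned slip systems transmit easily (m' = 1) ...
print(round(transfer_factor((1, 1, 1), (1, -1, 0), (1, 1, 1), (1, -1, 0)), 3))
# ... while misoriented ones are harder (m' < 1).
print(round(transfer_factor((1, 1, 1), (1, -1, 0), (1, 1, -1), (1, 0, -1)), 3))
```

A larger m' means the incoming and outgoing slip systems are better aligned, which is the trend the CDD pile-up simulations reproduce.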

Keywords: grain boundary, dislocation dynamics, slip transfer, elastic stress

Procedia PDF Downloads 124
4562 The System-Dynamic Model of Sustainable Development Based on the Energy Flow Analysis Approach

Authors: Inese Trusina, Elita Jermolajeva, Viktors Gopejenko, Viktor Abramov

Abstract:

Global challenges require a transition from the existing linear economic model to one that considers nature as a life-support system for development toward social well-being, in the frame of the ecological economics paradigm. The objective of the article is to present the results of an analysis of socio-economic systems in the context of sustainable development, using the method of analyzing changes in system power (energy flows) and Kaldor's structural model of GDP. In accordance with the principles of life's development and the ecological concept, the tasks of sustainable development of open, non-equilibrium, stable socio-economic systems were formalized using the energy flow analysis method. The methodology for monitoring sustainable development and the standard of living was considered in the study of interactions in the 'human - society - nature' system, using the theory of a unified system of space-time measurements. Based on the results of the analysis, time series of energy consumption and an economic structural model were formulated for the level, degree, and tendencies of sustainable development of the system, and the conditions of growth, degrowth, and stationarity were formalized. During the research, the authors calculated and used a system of universal indicators of sustainable development in an invariant coordinate system expressed in energy units. In order to design the future state of socio-economic systems, a concept was formulated, and the first models of energy flows in systems were created using the tools of system dynamics. In the context of the proposed approach and methods, universal sustainable development indicators were calculated as development models for the USA and China. 
The calculations used data from the World Bank database for the period from 1960 to 2019. Main results: 1) In accordance with the proposed approach, the heterogeneous energy resources of the countries were reduced to universal power units, summarized, and expressed as a single number. 2) The values of universal indicators of the standard of living were obtained and compared with generally accepted similar indicators. 3) The system of indicators, in accordance with the requirements of sustainable development, can be considered a basis for monitoring development trends. This work can make a significant contribution to overcoming the difficulties of forming socio-economic policy, which are largely due to the lack of information that would give an idea of the course and trends of socio-economic processes. Existing monitoring methods do not fully meet this requirement, since their indicators have different units of measurement from different areas and, as a rule, reflect socio-economic systems' reactions to actions already taken, with a time shift moreover. Currently, the inconsistency of measures across heterogeneous social, economic, environmental, and other systems is the reason that social systems are managed in isolation from the general laws of living systems, which can ultimately lead to a systemic crisis.

Keywords: sustainability, system dynamics, power, energy flows, development

Procedia PDF Downloads 60
4561 Angiogenic, Cytoprotective, and Immunosuppressive Properties of Human Amnion and Chorion-Derived Mesenchymal Stem Cells

Authors: Kenichi Yamahara, Makiko Ohshima, Shunsuke Ohnishi, Hidetoshi Tsuda, Akihiko Taguchi, Toshihiro Soma, Hiroyasu Ogawa, Jun Yoshimatsu, Tomoaki Ikeda

Abstract:

We have previously reported the therapeutic potential of rat fetal membrane (FM)-derived mesenchymal stem cells (MSCs) using various rat models, including hindlimb ischemia, autoimmune myocarditis, glomerulonephritis, renal ischemia-reperfusion injury, and myocardial infarction. In this study, 1) we isolated and characterized MSCs from human amnion and chorion; 2) we examined differences in their expression profiles of growth factors and cytokines; and 3) we investigated the therapeutic potential and differences of these MSCs using murine hindlimb ischemia and acute graft-versus-host disease (GVHD) models. Isolated MSCs from both the amnion and chorion layers of FM showed similar morphological appearance, multipotency, and cell-surface antigen expression. Conditioned media obtained from amnion- and chorion-derived MSCs inhibited cell death caused by serum starvation or hypoxia in endothelial cells and cardiomyocytes. Amnion and chorion MSCs secreted significant amounts of angiogenic factors, including HGF, IGF-1, VEGF, and bFGF, although differences in the cellular expression profiles of these soluble factors were observed. Transplantation of human amnion or chorion MSCs significantly increased blood flow and capillary density in a murine hindlimb ischemia model. In addition, compared to human chorion MSCs, human amnion MSCs markedly reduced T-lymphocyte proliferation with enhanced secretion of PGE2, and improved the pathology in a mouse model of GVHD. Our results highlight that human amnion- and chorion-derived MSCs, which differ in their soluble factor secretion and angiogenic/immunosuppressive functions, could be ideal cell sources for regenerative medicine.

Keywords: amnion, chorion, fetal membrane, mesenchymal stem cells

Procedia PDF Downloads 418
4560 The Effect of Values on Social Innovativeness in Nursing and Medical Faculty Students

Authors: Betül sönmez, Fatma Azizoğlu, S. Bilge Hapçıoğlu, Aytolan Yıldırım

Abstract:

Background: Social innovativeness encompasses the procurement of a sustainable benefit for a range of problems, from working conditions to education, social development, and health, and from environmental control to climate change, as well as the development of new social products and services. Objectives: This study was conducted to determine the correlation between the social innovation tendency of nursing and medical faculty students and value types. Methods and participants: The population of this correlational study consisted of third-year students studying at a medical faculty and a nursing faculty in a public university in Istanbul. Ethics committee approval and permission from the school administrations were obtained, and voluntary participation of the students was ensured. 524 questionnaires were returned, a total return rate of 57.1% (65.0% among nursing students and 52.1% among medical students). The data were collected using the Portrait Values Questionnaire and a questionnaire containing the Social Innovativeness Scale. Results: The subscale scores of the Portrait Values Questionnaire explained 26.6% of the variance in the total score of the Social Innovativeness Scale. In this significant model (F=37.566; p<0.01), the largest effect was observed for the universalism subscale. The subscale scores of the Portrait Values Questionnaire, together with age, gender, and number of siblings, explained 25% of the variance in social innovativeness among nursing students and 30.8% among medical faculty students. In both models, which were significant (p<0.01), the values with the largest effects were power, universalism, and benevolence for the nursing students, and self-direction, stimulation, hedonism, and universalism for the medical faculty students. 
Conclusions: Universalism is the value with the highest effect on social innovativeness in both groups, an expected result given the nature of these professions. The effect of the values of independent thinking and self-direction, as well as openness to change involving a quest for novelty (stimulation), observed in the medical faculty students, also supports the literature on innovative behavior. These results are thought to guide educators and administrators in developing socially innovative behaviors.

Keywords: social innovativeness, portrait values questionnaire, nursing students, medical faculty students

Procedia PDF Downloads 322
4559 Similar Correlation of Meat and Sugar to Global Obesity Prevalence

Authors: Wenpeng You, Maciej Henneberg

Abstract:

Background: Sugar consumption has been overwhelmingly advocated as a major dietary offender in obesity prevalence. Meat intake has been hypothesized as an obesity contributor in previous publications, but many dietary guidelines still suggest including a moderate amount of meat in the daily diet. Comparable sugar and meat exposure data were obtained to assess the difference in the relationships between the two major food groups and obesity prevalence at the population level. Methods: Population-level estimates of obesity and overweight rates; per capita per day exposure to major food groups (meat, sugar, starch crops, fibers, fats, and fruits) and total calories; per capita per year GDP; urbanization; and physical inactivity prevalence rates were extracted and matched for statistical analysis. Comparisons of correlation coefficients (Pearson and partial) with Fisher's r-to-z transformation, and the overlap of β ranges (β ± 2 SE) in multiple linear regression (Enter and Stepwise), were used to examine potential differences between the relationships of obesity prevalence to sugar exposure and to meat exposure. Results: Pearson and partial correlation analyses (controlled for total calories, physical inactivity prevalence, GDP, and urbanization) revealed that sugar and meat exposures correlated significantly with obesity and overweight prevalence. Fisher's r-to-z transformation showed no statistically significant difference between sugar and meat exposure in either the Pearson correlation coefficients with obesity prevalence (z=-0.53, p=0.5961) or the partial correlation coefficients (z=-0.04, p=0.9681). Both the Enter and Stepwise models in multiple linear regression analysis showed that sugar and meat exposure were the most significant predictors of obesity prevalence. Substantial overlap of the β ranges in the Enter (0.289-0.573) and Stepwise (0.294-0.582) models indicated that sugar and meat exposure correlated with obesity without statistically significant difference. 
Conclusion: Worldwide, sugar and meat exposure correlated with obesity prevalence to the same extent. Like sugar, minimal meat exposure should also be suggested in dietary guidelines.
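The correlation comparison reported above follows the standard Fisher r-to-z procedure; a minimal sketch (with illustrative correlations and sample sizes, not the study's data) is:

```python
import math

def fisher_z(r):
    """Fisher r-to-z transformation of a Pearson correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def compare_correlations(r1, n1, r2, n2):
    """z statistic for the difference between two independent correlations,
    each transformed to the z scale and weighted by its sample size."""
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return (fisher_z(r1) - fisher_z(r2)) / se
```

A |z| below about 1.96 means the two correlations do not differ at the 5% level, which is how the reported z=-0.53 (p=0.5961) is read.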

Keywords: meat, sugar, obesity, energy surplus, meat protein, fats, insulin resistance

Procedia PDF Downloads 308
4558 Development of a Tilt-Rotor Aircraft Model Using System Identification Technique

Authors: Ferdinando Montemari, Antonio Vitale, Nicola Genito, Giovanni Cuciniello

Abstract:

The introduction of tilt-rotor aircraft into the existing civilian air transportation system will provide beneficial effects due to the tilt-rotor's capability to combine the characteristics of a helicopter and a fixed-wing aircraft in one vehicle. The availability of reliable tilt-rotor simulation models supports the development of such vehicles. Indeed, simulation models are required to design automatic control systems that increase safety, reduce the pilot's workload and stress, and ensure the optimal aircraft configuration with respect to flight envelope limits, especially during the most critical flight phases, such as conversion from helicopter to aircraft mode and vice versa. This article presents a process to build a simplified tilt-rotor simulation model derived from the analysis of flight data. The model aims to reproduce the complex dynamics of the tilt-rotor during the in-flight conversion phase. It uses a set of scheduled linear transfer functions to relate the autopilot reference inputs to the most relevant rigid-body state variables. The model also computes information about the rotor flapping dynamics, which is useful to evaluate the aircraft control margin in terms of rotor collective and cyclic commands. The rotor flapping model is derived through a mixed theoretical-empirical approach, which includes physical analytical equations (applicable to the helicopter configuration) and parametric corrective functions. The latter are introduced to best fit the actual rotor behavior and to balance the differences between helicopter and tilt-rotor in flight. Time-domain system identification from flight data is exploited to optimize the model structure and to estimate the model parameters. The presented model-building process was applied to simulated flight data of the ERICA tilt-rotor, generated using a high-fidelity simulation model implemented in the FlightLab environment. 
The validation of the obtained model was very satisfactory, confirming the validity of the proposed approach.
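Time-domain identification of linear models from input/output records can be illustrated with a least-squares ARX fit. The sketch below is generic (the model orders and function name are assumptions, not the authors' ERICA model structure):

```python
import numpy as np

def identify_arx(u, y, na=2, nb=2):
    """Least-squares fit of an ARX model
    y[k] = -a1*y[k-1] - ... - a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb].
    Returns (a, b) coefficient arrays estimated from input u and output y."""
    n = max(na, nb)
    rows = []
    for k in range(n, len(y)):
        # Regressor: past outputs (negated) then past inputs, newest first.
        rows.append(np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]]))
    theta, *_ = np.linalg.lstsq(np.array(rows), y[n:], rcond=None)
    return theta[:na], theta[na:]
```

Fitting such a model at several points of the conversion corridor and interpolating the coefficients gives the kind of scheduled linear description used above.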

Keywords: flapping dynamics, flight dynamics, system identification, tilt-rotor modeling and simulation

Procedia PDF Downloads 203
4557 Applications of Greenhouse Data in Guatemala in the Analysis of Sustainability Indicators

Authors: Maria A. Castillo H., Andres R. Leandro, Jose F. Bienvenido B.

Abstract:

In 2015, Guatemala officially adopted the Sustainable Development Goals (SDG) according to the 2030 Agenda agreed by the United Nations. In 2016, these objectives and goals were reviewed, and the National Priorities were established within the K'atún 2032 National Development Plan. In 2019 and 2021, progress was evaluated with the 120 defined indicators, the need to improve the quality and availability of the statistical data necessary for the analysis of sustainability indicators was detected, and the values to be reached in 2024 and 2032 were adjusted. The need for greater agricultural technology is one of the priorities established within SDG 2, "Zero Hunger". Within this area, protected agricultural production provides greater productivity throughout the year, reduces the use of chemical products to control pests and diseases, reduces the negative impact of the climate, and improves product quality. During the crisis caused by Covid-19, there was an increase in exports of fruits and vegetables produced in greenhouses in Guatemala. However, this information was not considered in the 2021 revision of the Plan. The objective of this study is to evaluate the information available on greenhouse agricultural production and its integration into the sustainability indicators for Guatemala. This study was carried out in four phases: 1. Analysis of the goals established for SDG 2 and the indicators included in the K'atún Plan. 2. Analysis of environmental, social, and economic indicator models. 3. Definition of territorial levels at two geographic scales: departments and municipalities. 4. Diagnosis of the available data on technological agricultural production, with emphasis on greenhouses, at the two geographic scales. A summary of the results is presented for each phase, and finally some recommendations for future research are added. 
The main contribution of this work is to improve the available data that allow the incorporation of some agricultural technology indicators in the established goals, to evaluate their impact on Food Security and Nutrition, Employment and Investment, Poverty, the use of Water and Natural Resources, and to provide a methodology applicable to other production models and other geographical areas.

Keywords: greenhouses, protected agriculture, sustainable indicators, Guatemala, sustainability, SDG

Procedia PDF Downloads 86
4556 From Industry 4.0 to Agriculture 4.0: A Framework to Manage Product Data in Agri-Food Supply Chain for Voluntary Traceability

Authors: Angelo Corallo, Maria Elena Latino, Marta Menegoli

Abstract:

The agri-food value chain involves various stakeholders with different roles. All of them abide by national and international rules and leverage marketing strategies to advance their products. Food products and their processing phases carry a large amount of data that are often not used to inform the final customer. Some of these data, if fittingly identified and used, can benefit the single company and/or the whole supply chain, creating a match between marketing techniques and voluntary traceability strategies. Moreover, buying models have changed of late: customers are attentive to wellbeing and food quality. Food citizenship and food democracy were born, leveraging transparency, sustainability, and food information needs. The Internet of Things (IoT) and analytics, among the innovative technologies of Industry 4.0, have a significant impact on the market and will act as a main thrust towards a genuine '4.0 change' for agriculture. However, realizing a traceability system is not simple because of the complexity of the agri-food supply chain, the many actors involved, different business models, environmental variations impacting products and/or processes, and extraordinary climate changes. In order to support companies along a traceability path, a Framework to Manage Product Data in the Agri-Food Supply Chain for Voluntary Traceability was conceived, starting from business model analysis and the related business processes. Studying each process task and leveraging modeling techniques makes it possible to identify the information held by different actors along the agri-food supply chain. IoT technologies for data collection and analytics techniques for data processing provide information useful to increase intra-company efficiency and competitiveness in the market. 
All the information recovered can be shown through IT solutions and mobile applications, making it accessible to the company, the entire supply chain, and the consumer, with a view to guaranteeing transparency and quality.

Keywords: agriculture 4.0, agri-food supply chain, industry 4.0, voluntary traceability

Procedia PDF Downloads 148
4555 A Study on Reinforced Concrete Beams Enlarged with Polymer Mortar and UHPFRC

Authors: Ga Ye Kim, Hee Sun Kim, Yeong Soo Shin

Abstract:

Many studies have been done so far on methods of repairing and strengthening concrete structures. The traditional retrofit method is to attach fiber sheets such as CFRP (Carbon Fiber Reinforced Polymer), GFRP (Glass Fiber Reinforced Polymer), and AFRP (Aramid Fiber Reinforced Polymer) to the concrete structure. However, this method has notable downsides: a risk of debonding and an increase in displacement due to an insufficient structural section. Therefore, enlarging the structural member with polymer mortar or Ultra-High Performance Fiber Reinforced Concrete (UHPFRC) is an effective means of strengthening concrete structures. This paper investigates the structural performance of reinforced concrete (RC) beams enlarged with polymer mortar and compares the experimental results with analytical results. Nonlinear finite element analyses were conducted to compare with the experimental results and to predict the structural behavior of retrofitted RC beams accurately without a costly experimental process. In addition, this study compares the retrofit materials, a commonly used material (polymer mortar) and a recently introduced one (UHPFRC), by conducting nonlinear finite element analyses. In the first part of this paper, RC beams with different cover types were fabricated for the experiment; the beams were 250 millimeters in depth, 150 millimeters in width, and 2800 millimeters in length. To verify the experiment, nonlinear finite element models were generated using the commercial software ABAQUS 6.10-3. Both experimental and analytical results demonstrated a good strengthening effect on the RC beams and showed similar tendencies. The proposed analytical method can therefore be used to predict the effect of strengthening on RC beams. In the second part of the study, the main parameter was the type of retrofit material. 
The same nonlinear finite element models were generated to compare the polymer mortar with UHPFRC. The two types of retrofit material were evaluated, and the retrofit effect was verified by the analytical results.

Keywords: retrofit material, polymer mortar, UHPFRC, nonlinear finite element analysis

Procedia PDF Downloads 419
4554 Visco-Hyperelastic Finite Element Analysis for Diagnosis of Knee Joint Injury Caused by Meniscal Tearing

Authors: Eiji Nakamachi, Tsuyoshi Eguchi, Sayo Yamamoto, Yusuke Morita, H. Sakamoto

Abstract:

In this study, we aim to reveal the relationship between meniscal tearing and articular cartilage injury of the knee joint by using the dynamic explicit finite element (FE) method. Meniscal injuries reduce the meniscus's functional ability and consequently increase the load on the articular cartilage of the knee joint. In order to prevent the induction of osteoarthritis (OA) caused by meniscal injuries, many medical treatment techniques, such as artificial meniscus replacement and meniscal regeneration, have been developed. However, it is reported that these treatments are not comprehensive. In order to reveal the fundamental mechanism of OA induction, the mechanical characterization of the meniscus in normal and injured states was carried out using FE analyses. First, an FE model of the human knee joint in the normal ('intact') state was constructed using magnetic resonance (MR) images and the image construction code Materialise Mimics. Next, two meniscal injury models, with radial tears of the medial and lateral menisci, were constructed. In the FE analyses, a linear elastic constitutive law was adopted for the femur and tibia bones, a visco-hyperelastic constitutive law for the articular cartilage, and a visco-anisotropic hyperelastic constitutive law for the meniscus. Material properties of the articular cartilage and meniscus were identified using the stress-strain curves obtained from our compressive and tensile tests. The numerical results under the normal walking condition revealed how and where the maximum compressive stress occurred on the articular cartilage. The maximum compressive stress and its location varied between the intact and the two meniscal tear models. These compressive stress values can be used to establish the threshold value for pathological change for diagnosis. 
In this study, FE analyses of the knee joint were carried out to reveal the influence of meniscal injuries on cartilage injury. The following conclusions are obtained. 1. A 3D FE model consisting of the femur, tibia, articular cartilage, and meniscus was constructed from MR images of the human knee joint; the image processing code Materialise Mimics was used, and the model was meshed with tetrahedral FE elements. 2. A visco-anisotropic hyperelastic constitutive equation was formulated by adopting the generalized Kelvin model. The material properties of the meniscus and articular cartilage were determined by curve fitting with experimental results. 3. Stresses on the articular cartilage and menisci were obtained for the intact case and for radial tears of the medial and lateral menisci. Compared with the intact knee joint, the two tear models show almost the same stress values as each other, but higher values than the intact one. Both meniscal tears were shown to induce stress localization in both the medial and lateral regions. It is confirmed that our newly developed FE analysis code has the potential to become a new diagnostic system to evaluate meniscal damage to the articular cartilage through mechanical functional assessment.

Keywords: finite element analysis, hyperelastic constitutive law, knee joint injury, meniscal tear, stress concentration

Procedia PDF Downloads 248
4553 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion whose limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverage close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance metrics hold as in the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. 
Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is accounting for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time with a small number of candidate models, this works well, although the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
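The central computation, an upper quantile of the minimum of a jointly Gaussian vector of GIC statistics, can be approximated by Monte Carlo as a cross-check on the exact multivariate Gaussian integration; the means and covariances below are illustrative placeholders, not quantities from the paper:

```python
import numpy as np

def min_gic_upper_quantile(mean, cov, alpha=0.95, n_draws=200_000, seed=1):
    """Monte Carlo estimate of the alpha-quantile of min_j Z_j, where
    Z ~ N(mean, cov) collects the candidate models' GIC statistics."""
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(mean, cov, size=n_draws)
    return np.quantile(draws.min(axis=1), alpha)
```

The exact approach described above replaces this sampling step with multivariate Gaussian integrals (as in the R package "mvtnorm"); the Monte Carlo version is useful for checking coverage on simulated data.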

Keywords: model selection inference, generalized information criteria, post-model selection, asymptotic theory

Procedia PDF Downloads 90
4552 Cognitive Models of Health Marketing Communication in the Digital Era: Psychological Factors, Challenges, and Implications

Authors: Panas Gerasimos, Kotidou Varvara, Halkiopoulos Constantinos, Gkintoni Evgenia

Abstract:

As technology advances and the internet becomes a routine source of information, users turn to the internet and subsequently to the opinion of an expert. In many cases, they take their health into their own hands and make decisions without the contribution of a doctor. Accordingly, this study intends to analyze users' confidence in searching for health issues on the internet. To this end, a survey was conducted among doctors in order to find out the reasons a patient uses the internet for their health problems, as well as the consequences that searching for health information on the internet could lead to. Specifically, the results regarding the users demonstrate: a) the majority of users search the internet for health issues once or twice a month; b) individuals with a chronic disease search for health information on the internet more frequently; c) the most important topics that the majority of users search are pathological and dietary issues, and issues associated with doctors and hospitals, although topic search varies depending on the users' age; d) the most common source of information remains direct contact with doctors, which the majority of users prefer over electronic forms of briefing; and e) a large lack of knowledge about e-health services was observed. 
From the doctors' point of view, the following conclusions emerge: a) almost all doctors use the internet as their main source of information; b) the internet has great influence over doctors' relationships with patients; c) in many cases a patient first visits the internet and then the doctor; d) the internet has a significant psychological impact on patients in reaching a decision; e) the most important reason users choose the internet over a health professional is economic; f) the main negative consequence is inaccurate information; g) the positive consequences are the possibility of online contact with the doctor and easier comprehension of the doctor's advice. Generally, both sides observe that the use of the internet for health issues is intense, which indicates that the new means at doctors' disposal create the conditions for radical changes in the way services are provided and in the doctor-patient relationship.

Keywords: cognitive models, health marketing, e-health, psychological factors, digital marketing, e-health services

Procedia PDF Downloads 208
4551 Modeling the Impact of Time Pressure on Activity-Travel Rescheduling Heuristics

Authors: Jingsi Li, Neil S. Ferguson

Abstract:

Time pressure can influence productivity, the quality of decision making, and the efficiency of problem solving. This insight stems mostly from cognitive research and the psychological literature; discussion in transport-adjacent fields, however, remains scarce. It is conceivable that in many activity-travel contexts, time pressure is a potentially important factor, since an excessive amount of decision time may incur the risk of late arrival at the next activity. Activity-travel rescheduling behavior is commonly explained by the costs and benefits of factors such as activity engagements, personal intentions, and social requirements. This paper hypothesizes that an additional factor, perceived time pressure, could affect travelers' rescheduling behavior, thus leading to an impact on travel demand management. Time pressure may arise in different ways and is assumed here to be essentially incurred by travelers planning their schedules without expecting unforeseen elements, e.g., transport disruption. In addition to a linear-additive utility-maximization model, computationally simpler, non-compensatory heuristic models are considered as an alternative to simulate travelers' responses. The paper contributes to travel behavior modeling research by investigating the following questions: How can time pressure be measured properly in an activity-travel day-plan context? How do travelers reschedule their plans to cope with time pressure? How does the importance of the activity affect travelers' rescheduling behavior? What behavioral model can be identified to describe the process of making activity-travel rescheduling decisions? How do the identified coping strategies affect the transport network? In this paper, a Mixed Heuristic Model (MHM) is employed to identify the presence of different choice heuristics through a latent class approach. 
The data on travelers' activity-travel rescheduling behavior are collected via a web-based interactive survey in which a fictitious scenario comprising multiple uncertain events on the activity or travel is created. The experiments are conducted in order to gain a realistic picture of activity-travel rescheduling under time pressure. The identified behavioral models are then integrated into a multi-agent transport simulation model to investigate the effect of the rescheduling strategies on the transport network. The results show that an increased proportion of travelers use simpler, non-compensatory choice strategies instead of compensatory methods to cope with time pressure. Specifically, satisficing, one of the heuristic decision-making strategies, is commonly adopted, since travelers tend to abandon the less important activities and keep the important ones. Furthermore, the importance of the activity is found to increase the weight of negative information when making trip-related decisions, especially route choices. When the identified non-compensatory decision-making heuristic models are incorporated into the agent-based transport model, the simulation results imply that neglecting the effect of perceived time pressure may result in an inaccurate forecast of choice probability and overestimate responsiveness to policy changes.
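The contrast between a satisficing heuristic and compensatory utility maximization can be sketched in a few lines; the alternatives, scores, and aspiration level below are purely illustrative:

```python
def satisficing_choice(alternatives, aspiration, score):
    """Non-compensatory satisficing: accept the first alternative whose score
    meets the aspiration level, without evaluating the rest."""
    for alt in alternatives:
        if score(alt) >= aspiration:
            return alt
    return None  # no acceptable alternative found

def utility_maximizing_choice(alternatives, score):
    """Compensatory benchmark: evaluate every alternative and pick the best."""
    return max(alternatives, key=score)
```

Under time pressure the first rule terminates early on an acceptable option, which illustrates why simulations that assume full utility maximization can misforecast choice probabilities.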

Keywords: activity-travel rescheduling, decision making under uncertainty, mixed heuristic model, perceived time pressure, travel demand management

Procedia PDF Downloads 115
4550 Critical Appraisal, Smart City Initiative: China vs. India

Authors: Suneet Jagdev, Siddharth Singhal, Dhrubajyoti Bordoloi, Peesari Vamshidhar Reddy

Abstract:

There is no universally accepted definition of what constitutes a Smart City. It means different things to different people, and the definition varies from place to place depending on the level of development and the willingness of people to change and reform. A Smart City tries to improve the quality of resource management and service provision for the people living in cities. Smart City is an urban development vision to integrate multiple information and communication technology (ICT) solutions in a secure fashion to manage the assets of a city, yet most of these projects are misinterpreted as being technology projects only. Due to urbanization, many informal as well as government-funded settlements have appeared during the last few decades, increasing the consumption of the limited resources available. The people of each city have their own definition of a Smart City: in the imagination of any city dweller in India, the picture of a Smart City contains a wish list of infrastructure and services that describes his or her level of aspiration. The research involved a comparative study of the Smart City models in India and China. Behavioral changes experienced by the people living in the pilot (first-ever) smart cities were identified and compared. This paper discusses the target quality of life for the people in India and in China and how well it could be realized with the facilities being included in these Smart City projects. Logical and comparative analyses were performed on data collected from government sources, government papers, and research papers by various experts on the topic. Existing cities with historically grown infrastructure and administration systems will require a more moderate, step-by-step approach to modernization. The models were compared using many different motivators, with data collected from past journals, interactions with the people involved, videos, and past submissions. 
In conclusion, we have identified how these projects could be combined with the ongoing small scale initiatives by the local people/ small group of individuals and what might be the outcome if these existing practices were implemented on a bigger scale.

Keywords: behavior change, mission monitoring, pilot smart cities, social capital

Procedia PDF Downloads 291
4549 Technical and Practical Aspects of Sizing an Autonomous PV System

Authors: Abdelhak Bouchakour, Mustafa Brahami, Layachi Zaghba

Abstract:

Photovoltaic energy offers an inexhaustible as well as clean and non-polluting supply of energy, which is a definite advantage. The geographical location of Algeria favors the development of this energy source, given the intensity of the radiation received and the duration of sunshine. For this reason, the objective of our work is to develop a software tool for calculating and optimizing the sizing of photovoltaic installations. Our optimization approach is based on mathematical models that describe, among other things, the operation of each part of the installation, energy production, energy storage, and energy consumption.
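As an illustration of the kind of calculation such a sizing tool automates, the sketch below estimates the required PV array peak power and battery capacity from a daily load. The function name, parameter names, and default values (derating factor, days of autonomy, depth of discharge) are illustrative assumptions, not taken from the authors' software:

```python
def size_pv_system(daily_load_wh, peak_sun_hours, derating_factor=0.75,
                   autonomy_days=2, battery_voltage=48, max_depth_of_discharge=0.6):
    """Rough sizing of a stand-alone PV system (illustrative sketch only)."""
    # Required array peak power (Wp): daily load divided by the equivalent
    # full-sun hours and an overall derating factor covering inverter,
    # wiring, temperature, and soiling losses.
    array_wp = daily_load_wh / (peak_sun_hours * derating_factor)
    # Battery bank sized for the desired days of autonomy, limited by the
    # allowable depth of discharge, then converted to amp-hours.
    battery_wh = daily_load_wh * autonomy_days / max_depth_of_discharge
    battery_ah = battery_wh / battery_voltage
    return array_wp, battery_ah
```

For a 2.4 kWh/day load with 4 equivalent sun hours, this yields an 800 Wp array; a real tool would additionally optimize component selection against cost and local irradiation data.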

Keywords: solar panel, solar radiation, inverter, optimization

Procedia PDF Downloads 611
4548 Multi-Scale Modelling of the Cerebral Lymphatic System and Its Failure

Authors: Alexandra K. Diem, Giles Richardson, Roxana O. Carare, Neil W. Bressloff

Abstract:

Alzheimer's disease (AD) is the most common form of dementia, and although it has been researched for over 100 years, there is still no cure or preventive medication. Its onset and progression are closely related to the accumulation of the neuronal metabolite Aβ. This raises the question of how metabolites and waste products are eliminated from the brain, as the brain does not have a traditional lymphatic system. In recent years, the rapid uptake of Aβ into cerebral artery walls and its clearance along those arteries towards the lymph nodes in the neck has been suggested and confirmed in studies in mice, leading to the hypothesis that interstitial fluid (ISF) in the basement membranes of cerebral artery walls provides the pathways for the lymphatic drainage of Aβ. This mechanism, however, requires a net flow of ISF inside the blood vessel wall in the direction opposite to the blood flow, and the driving forces for such a mechanism remain unknown. While possible driving mechanisms have been studied using mathematical models in the past, a mechanism producing net reverse flow has not yet been discovered. Here, we address the question of the driving force of this reverse lymphatic drainage of Aβ (also called perivascular drainage) using multi-scale numerical and analytical modelling. The numerical simulation software COMSOL Multiphysics 4.4 is used to develop a fluid-structure interaction model of a cerebral artery, which models blood flow and displacements in the artery wall due to blood pressure changes. An analytical model of a layer of basement membrane inside the wall governs the flow of ISF and, therefore, solute drainage based on the pressure changes and wall displacements obtained from the cerebral artery model. The findings suggest that the components of the basement membrane play an active role in facilitating a reverse flow and that stiffening of the artery wall with age is a major risk factor for the impairment of brain lymphatics.
Additionally, our model supports the hypothesis of a close association between cerebrovascular diseases and the failure of perivascular drainage.
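Why purely pulsatile forcing cannot by itself drive net drainage can be seen with a toy lubrication-theory calculation (our simplified illustration, not the authors' COMSOL or analytical model): for plane Poiseuille flow in a thin rigid layer, the flux per unit width is q(t) = -h³/(12μ)·∂p/∂x, so a purely oscillatory pressure gradient, such as that from the cardiac cycle, averages to zero net flux:

```python
import math

def mean_flux_poiseuille(h, mu, grad_p, t_grid):
    """Time-averaged volumetric flux per unit width for plane Poiseuille flow
    q(t) = -h**3 / (12*mu) * dp/dx(t) in a thin rigid layer (lubrication theory)."""
    fluxes = [-h**3 / (12.0 * mu) * grad_p(t) for t in t_grid]
    return sum(fluxes) / len(fluxes)

# A purely sinusoidal pressure gradient over one cycle gives zero mean flux:
# some additional mechanism (e.g. a valve-like role of basement-membrane
# components, as the findings above suggest) is needed for net reverse flow.
t_grid = [k / 1000 for k in range(1000)]
q_mean = mean_flux_poiseuille(1e-6, 1e-3, lambda t: math.sin(2 * math.pi * t), t_grid)
```

The layer thickness and viscosity values here are placeholders for scale only; the conclusion (zero mean flux under symmetric oscillatory forcing in a rigid channel) is independent of them.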

Keywords: Alzheimer's disease, artery wall mechanics, cerebral blood flow, cerebral lymphatics

Procedia PDF Downloads 528
4547 Environmental Conditions Simulation Device for Evaluating Fungal Growth on Wooden Surfaces

Authors: Riccardo Cacciotti, Jiri Frankl, Benjamin Wolf, Michael Machacek

Abstract:

Moisture fluctuations govern the occurrence of fungi-related problems in buildings, which may pose significant health risks for users and even lead to structural failures. Several numerical engineering models attempt to capture the complexity of mold growth on building materials. Real-life observations show that in cases with suppressed daily variations of boundary conditions, e.g. in crawl spaces, mold growth model predictions correspond well with the observed mold growth. On the other hand, in cases with substantial diurnal variations of boundary conditions, e.g. in the ventilated cavity of a cold flat roof, the mold growth predicted by the models is significantly overestimated. This study, funded by the Grant Agency of the Czech Republic (GAČR 20-12941S), aims at gaining a better understanding of mold growth behavior on solid wood under varying boundary conditions. In particular, the experimental investigation focuses on the response of mold to changing conditions in the boundary layer and their influence on heat and moisture transfer across the surface. The main result is the design and construction, at the facilities of ITAM (Prague, Czech Republic), of an innovative device for simulating changing environmental conditions in buildings. It consists of a closed circuit of square section with overall dimensions of roughly 200 × 180 cm and a cross section of roughly 30 × 30 cm. The circuit is thermally insulated and equipped with an electric fan to control the air flow inside the tunnel and a heat and humidity exchange unit to control the internal relative humidity and temperature variations. Several measuring points, including an anemometer, temperature and humidity sensors, and a load cell in the test section for recording mass changes, are provided to monitor the parameters during the experiments. The research is ongoing and is expected to deliver the final results of the experimental investigation at the end of 2022.

Keywords: moisture, mold growth, testing, wood

Procedia PDF Downloads 134
4546 A Survey of Domain Name System Tunneling Attacks: Detection and Prevention

Authors: Lawrence Williams

Abstract:

As the mechanism that converts domain names to Internet Protocol (IP) addresses, the Domain Name System (DNS) is an essential part of internet usage. It was not designed with security in mind and can be subject to attacks. DNS attacks have become more frequent and sophisticated, and detecting and preventing them has become more important for the modern network. DNS tunneling attacks are one type of attack, primarily used for distributed denial-of-service (DDoS) attacks and data exfiltration. Different techniques to detect and prevent DNS tunneling attacks are discussed, covering the methods, models, experiments, and data for each technique. The feasibility of the approaches is assessed, and future research on these topics is proposed.
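As a concrete example of one widely used detection heuristic (our illustration, not a specific technique from this survey), tunneled or exfiltrated data typically appears as long, high-entropy labels in DNS query names, which can be flagged with a simple Shannon-entropy check. The thresholds below are illustrative assumptions:

```python
import math
from collections import Counter

def shannon_entropy(label):
    """Shannon entropy in bits per character of a DNS label."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_tunnel(qname, entropy_threshold=3.5, length_threshold=40):
    """Flag query names whose leftmost label is unusually long or has high
    entropy -- a common heuristic for spotting encoded tunnel payloads."""
    label = qname.split(".")[0]
    return len(label) > length_threshold or shannon_entropy(label) > entropy_threshold
```

In practice such per-query scoring is combined with traffic-level features (query rate, record types such as TXT/NULL, response sizes) to reduce false positives on legitimate long names like CDN hostnames.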

Keywords: DNS, tunneling, exfiltration, botnet

Procedia PDF Downloads 76