Search results for: converting models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7016

4526 A New Paradigm to Make Cloud Computing Greener

Authors: Apurva Saxena, Sunita Gond

Abstract:

The demand for computation and large-scale data storage is increasing rapidly day by day. Cloud computing technology fulfills today's computational demand, but this leads to high power consumption in cloud data centers. Green IT initiatives try to reduce power consumption and its adverse environmental impacts. The paper also focuses on various green computing techniques, proposed models, and efficient ways to make the cloud greener.

Keywords: virtualization, cloud computing, green computing, data center

Procedia PDF Downloads 552
4525 An Efficient Hardware/Software Workflow for Multi-Cores Simulink Applications

Authors: Asma Rebaya, Kaouther Gasmi, Imen Amari, Salem Hasnaoui

Abstract:

Over the last few years, applications such as telecommunications, signal processing, and digital communication with advanced features (multi-antenna, equalization, etc.) have witnessed a rapid evolution, accompanied by an increase in user requirements in terms of latency and computational power. To satisfy these requirements, the use of hardware/software systems is a common solution, where the hardware consists of multiple cores and the software is represented by models of computation, for instance the synchronous data flow (SDF) graph. Moreover, most embedded system designers use Simulink for modeling. The issue is how to simplify the generation of C code, for a multi-core platform, from an application modeled in Simulink. To overcome this problem, we propose a workflow that performs an automatic transformation from the Simulink model to the SDF graph and provides an efficient schedule that optimizes the number of cores and minimizes latency. The workflow starts from a Simulink application and a hardware architecture described in the IP-XACT language. Based on the synchronous and hierarchical behavior of both models, the Simulink block diagram is automatically transformed into an SDF graph. Once this process is successfully achieved, the scheduler calculates the optimal number of cores needed by minimizing the maximum density of the whole application. Then, a core is chosen to execute a specific graph task in a specific order and, subsequently, compatible C code is generated. To implement this proposal, we extend Preesm, a rapid prototyping tool, to take the Simulink model as input and to support the optimal schedule. Afterward, we compared our results to those of this tool, using a simple illustrative application. The comparison shows that our results strictly dominate the Preesm results in terms of number of cores and latency: if Preesm needs m processors and latency L, our workflow needs fewer processors and a latency L' < L.
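
To illustrate the kind of scheduling problem such a workflow solves (this is not the authors' Preesm-based implementation), a minimal Python sketch of list-scheduling an SDF-derived task graph onto a given number of cores, reporting the resulting latency, might look as follows; the task names, durations, and dependencies are hypothetical.

```python
# Minimal illustrative sketch (not the authors' implementation): greedy list
# scheduling of an SDF-derived task graph onto a fixed number of cores.
# Task names, durations and dependencies below are hypothetical.

tasks = {              # task -> (duration, predecessors)
    "src":   (2, set()),
    "fir1":  (4, {"src"}),
    "fir2":  (4, {"src"}),
    "equal": (6, {"fir1", "fir2"}),
    "sink":  (1, {"equal"}),
}

def list_schedule(tasks, n_cores):
    """Assign each task to a core as soon as its predecessors have finished."""
    finish = {}                     # task -> finish time
    core_free = [0.0] * n_cores     # next free instant of each core
    schedule = []                   # (task, core, start, end)
    remaining = dict(tasks)
    while remaining:
        # tasks whose predecessors are all already scheduled
        ready = [t for t, (_, preds) in remaining.items() if preds.issubset(finish)]
        task = min(ready, key=lambda t: max((finish[p] for p in tasks[t][1]), default=0.0))
        dur, preds = remaining.pop(task)
        earliest = max((finish[p] for p in preds), default=0.0)
        core = min(range(n_cores), key=lambda c: max(core_free[c], earliest))
        start = max(core_free[core], earliest)
        finish[task] = start + dur
        core_free[core] = finish[task]
        schedule.append((task, core, start, finish[task]))
    return schedule, max(finish.values())

for cores in (1, 2, 3):
    _, latency = list_schedule(tasks, cores)
    print(f"{cores} core(s): latency = {latency}")
```

Running the sketch shows the latency dropping as cores are added until the graph's critical path is reached, which is the core-count versus latency trade-off that the workflow's scheduler optimizes.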

Keywords: hardware/software system, latency, modeling, multi-cores platform, scheduler, SDF graph, Simulink model, workflow

Procedia PDF Downloads 266
4524 Soybean Oil Based Phase Change Material for Thermal Energy Storage

Authors: Emre Basturk, Memet Vezir Kahraman

Abstract:

In many developing countries, with rapid economic growth, energy shortage and environmental issues have become serious problems. It has therefore become critical to improve the efficiency of energy use and to protect the environment. Thermal energy storage is an essential approach to matching thermal energy demand and supply. Thermal energy can be stored by heating, cooling, or melting a material, the energy becoming accessible when the process is reversed. Thermal energy storage techniques are generally classified into latent heat and sensible heat storage. Among these methods, latent heat storage is the most effective way of collecting thermal energy. Latent heat thermal energy storage depends on the storage material emitting or absorbing heat as it undergoes a solid-to-liquid, solid-to-solid, or liquid-to-gas phase change, or vice versa. Phase change materials (PCMs) are promising materials for latent heat storage applications due to their capacity to store a large amount of latent heat per unit volume through a phase change at an almost constant temperature. PCMs absorb, store, and release thermal energy during the cycle of melting and freezing, converting from one phase to another. PCMs can generally be arranged into three classes: organic materials, salt hydrates, and eutectics. Many kinds of organic and inorganic PCMs and their blends have been examined as latent heat storage materials. Organic PCMs are rather expensive, have average latent heat storage per unit volume, and have low density. Most organic PCMs are combustible in nature and cover a wide range of melting points. Organic PCMs can be categorized into two major groups: non-paraffinic and paraffin materials. Paraffin materials have been used extensively due to their high latent heat and favorable thermal characteristics, such as minimal supercooling, a wide range of available phase change temperatures, low vapor pressure while melting, good chemical and thermal stability, and self-nucleating behavior. Ultraviolet (UV)-curing technology has been widely used because it has many advantages, such as low energy consumption, high speed, high chemical stability, room-temperature operation, low processing costs, and environmental friendliness. For many years, PCMs have been used for heating and cooling in industrial applications including textiles, refrigerators, construction, transportation packaging for temperature-sensitive products, some solar energy based systems, and biomedical and electronic materials. In this study, UV-curable, fatty alcohol containing soybean oil based phase change materials (PCMs) were obtained and characterized. The phase transition behaviors and thermal stability of the prepared UV-cured biobased PCMs were analyzed by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). The heating process phase change enthalpy is measured between 30 and 68 J/g, and the freezing process phase change enthalpy is found between 18 and 70 J/g. The decomposition of the UV-cured PCMs started at 260 ºC and reached a maximum at 430 ºC.

Keywords: fatty alcohol, phase change material, thermal energy storage, UV curing

Procedia PDF Downloads 380
4523 Comparison of Different Reanalysis Products for Predicting Extreme Precipitation in the Southern Coast of the Caspian Sea

Authors: Parvin Ghafarian, Mohammadreza Mohammadpur Panchah, Mehri Fallahi

Abstract:

Synoptic patterns from the surface up to the tropopause are very important for forecasting weather and atmospheric conditions, and there are many tools for preparing and analyzing these maps. Reanalysis data, the outputs of numerical weather prediction models, satellite images, meteorological radar, and weather station data are used in forecasting centers around the world to predict the weather. Forecasting extreme precipitation on the southern coast of the Caspian Sea (CS) is a major challenge due to the complex topography, and there are different climate types in these areas. In this research, we used two reanalysis datasets, the ECMWF Reanalysis 5th Generation (ERA5) and the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis, for verification of the numerical model. ERA5 is the latest ECMWF reanalysis; its temporal resolution is hourly, while that of NCEP/NCAR is six-hourly. Atmospheric parameters such as mean sea level pressure, geopotential height, relative humidity, wind speed and direction, and sea surface temperature were selected and analyzed, and different types of precipitation (rain and snow) were considered. The results showed that NCEP/NCAR is better able to represent the intensity of the atmospheric systems, while ERA5 is suitable for extracting parameter values at specific points. ERA5 is also appropriate for analyzing snowfall events over the CS (snow cover and snow depth). Sea surface temperature plays the main role in generating instability over the CS, especially when cold air passes over it, and the sea surface temperature of the NCEP/NCAR product has low resolution near the coast. Nevertheless, both datasets were able to detect the meteorological synoptic patterns that led to heavy rainfall over the CS. However, due to the time lag, they are not suitable for forecast centers; the application of these two datasets is for research and for verification of meteorological models. Finally, ERA5 has better resolution than the NCEP/NCAR reanalysis data, but the NCEP/NCAR data are available from 1948 and are appropriate for long-term research.

Keywords: synoptic patterns, heavy precipitation, reanalysis data, snow

Procedia PDF Downloads 122
4522 Multi-Criteria Decision Making Network Optimization for Green Supply Chains

Authors: Bandar A. Alkhayyal

Abstract:

Modern supply chains are typically linear, transforming virgin raw materials into products for end consumers, who then discard them after use to landfills or incinerators. Nowadays, major efforts are underway to create a circular economy in order to reduce non-renewable resource use and waste. One important aspect of these efforts is the development of Green Supply Chain (GSC) systems, which enable a reverse flow of used products from consumers back to manufacturers, where they can be refurbished or remanufactured, to both economic and environmental benefit. This paper develops novel multi-objective optimization models to inform GSC system design at multiple levels: (1) strategic planning of facility location and transportation logistics; (2) tactical planning of optimal pricing; and (3) policy planning to account for potential valuation of GSC emissions. First, physical linear programming was applied to evaluate GSC facility placement by determining the quantities of end-of-life products to transport from candidate collection centers to remanufacturing facilities while satisfying cost and capacity criteria. Second, disassembly and remanufacturing processes have received little attention in the industrial engineering and process cost modeling literature, yet the increasing scale of remanufacturing operations, worth nearly $50 billion annually in the United States alone, has made GSC pricing an important subject of research. A non-linear physical programming model for optimizing the pricing policy of remanufactured products, which maximizes total profit and minimizes product recovery costs, was examined and solved. Finally, a deterministic equilibrium model was used to determine the effects of internalizing a cost of GSC greenhouse gas (GHG) emissions into the optimization models. Changes in optimal facility use, transportation logistics, and pricing/profit margins were all investigated against a variable cost of carbon, using a case study system created from actual data from sites in the Boston area. As carbon costs increase, the optimal GSC system undergoes several distinct shifts in topology as it seeks new cost-minimal configurations. A comprehensive study of the quantitative evaluation and performance of the model has been done using orthogonal arrays. Results were compared to top-down estimates from economic input-output life cycle assessment (EIO-LCA) models to contrast remanufacturing GHG emission quantities with those from original equipment manufacturing operations. Introducing a carbon cost of $40/t CO2e increases modeled remanufacturing costs by 2.7% but also increases original equipment costs by 2.3%. The assembled work advances the theoretical modeling of optimal GSC systems and presents a rare case study of remanufactured appliances.
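
As a toy illustration of how a carbon cost can be internalized into a network optimization of this kind (a deliberately simplified linear program, not the paper's physical programming or equilibrium models; all costs, capacities, and emission factors below are hypothetical):

```python
# Toy sketch: minimize transport plus carbon cost of shipping end-of-life
# products from collection centers to remanufacturing facilities.  This is a
# plain linear program, not the paper's models; all numbers are hypothetical.
import numpy as np
from scipy.optimize import linprog

transport_cost = np.array([[4.0, 6.0],   # $ per unit, center i -> facility j
                           [5.0, 3.0]])
emissions = np.array([[0.08, 0.02],      # t CO2e per unit shipped
                      [0.02, 0.06]])
supply = np.array([100.0, 80.0])         # units collected at each center
capacity = np.array([120.0, 90.0])       # remanufacturing capacity per facility

# decision variables x = [x00, x01, x10, x11] (flow from center i to facility j)
A_eq = np.array([[1.0, 1.0, 0.0, 0.0],   # everything collected must be shipped
                 [0.0, 0.0, 1.0, 1.0]])
A_ub = np.array([[1.0, 0.0, 1.0, 0.0],   # facility capacity limits
                 [0.0, 1.0, 0.0, 1.0]])

def optimal_flows(carbon_price):
    c = (transport_cost + carbon_price * emissions).ravel()
    res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=supply,
                  bounds=[(0, None)] * 4, method="highs")
    return res.x.reshape(2, 2), res.fun

for price in (0.0, 40.0, 200.0):         # $ per t CO2e
    flows, cost = optimal_flows(price)
    print(f"carbon price {price:6.1f}: total cost {cost:8.1f}\n{flows}")
```

Raising the carbon price in the sketch re-routes flows toward lower-emission lanes, which is the kind of topology shift the case study reports as the cost of carbon increases.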

Keywords: circular economy, extended producer responsibility, greenhouse gas emissions, industrial ecology, low carbon logistics, green supply chains

Procedia PDF Downloads 159
4521 The Reliability Analysis of Concrete Chimneys Due to Random Vortex Shedding

Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta

Abstract:

Chimneys are generally tall and slender structures with circular cross-sections, which makes them highly prone to wind forces. Wind exerts pressure on the wall of a chimney, producing unwanted forces, and vortex-induced oscillation is one such excitation that can lead to failure. Vortex-induced oscillation of chimneys is therefore of great concern to researchers and practitioners, since many failures of chimneys due to vortex shedding have occurred in the past. As a consequence, extensive research has taken place on the subject over decades. Many laboratory experiments have been performed to verify the theoretical models proposed to predict vortex-induced forces, including aero-elastic effects, whereas comparatively few prototype measurement data have been recorded for verification. For this reason, the theoretical models developed with the help of experimental laboratory data are used to analyze chimneys for vortex-induced forces. This calls for a reliability analysis of the predicted responses of chimneys to vortex shedding. Although a considerable literature exists on the vortex-induced oscillation of chimneys, including code provisions, reliability analysis of chimneys against failure caused by vortex shedding is scarce. In the present study, the reliability analysis of chimneys against vortex shedding failure is presented, assuming the uncertainty in the vortex shedding phenomenon to be significantly greater than the other uncertainties, which are therefore ignored. The vortex shedding is modeled as a stationary random process and is represented by a power spectral density function (PSDF). It is assumed that the vortex shedding forces are perfectly correlated and act over the top one-third height of the chimney. The PSDF of the tip displacement of the chimney is obtained by performing a frequency domain spectral analysis using a matrix approach. For this purpose, both the chimney and the random wind forces are discretized over a number of points along the height of the chimney. The method of analysis duly accounts for the aero-elastic effects. The double barrier threshold crossing level, as proposed by Vanmarcke, is used for determining the probability of crossing different threshold levels of the tip displacement of the chimney. Assuming the annual distribution of the mean wind velocity to be a Gumbel Type-I distribution, the fragility curve denoting the variation of the annual probability of threshold crossing against different threshold levels of the tip displacement is determined, and the reliability estimate is derived from the fragility curve. A 210 m tall concrete chimney with a base diameter of 35 m, a top diameter of 21 m, and a wall thickness of 0.3 m has been taken as an illustrative example. The terrain condition is assumed to be that corresponding to a city center. The expression for the PSDF of the vortex shedding force is taken as that used by Vickery and Basu. The results of the study show that the threshold crossing reliability of the tip displacement of the chimney is significantly influenced by the assumed structural damping and the Gumbel distribution parameters. Further, the aero-elastic effect influences the reliability estimate to a great extent for small structural damping.
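
For reference, the Gumbel Type-I model assumed for the annual mean wind velocity and the way a fragility curve translates into an annual crossing probability can be written in the standard form below (supplied here for clarity, not quoted from the paper):

```latex
% Gumbel Type-I distribution assumed for the annual (extreme) mean wind velocity
F_V(v) = \exp\!\left[-\exp\!\left(-\frac{v-u}{\alpha}\right)\right],
\qquad u,\ \alpha :\ \text{location and scale parameters}

% Annual probability that the tip displacement exceeds a threshold level b,
% obtained by weighting the conditional crossing probability by the wind model
P_f(b) = \int_{0}^{\infty} P\!\left[\, |x_{\mathrm{tip}}| > b \mid v \,\right] f_V(v)\, \mathrm{d}v
```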

Keywords: chimney, fragility curve, reliability analysis, vortex-induced vibration

Procedia PDF Downloads 157
4520 Hybrid Method for Smart Suggestions in Conversations for Online Marketplaces

Authors: Yasamin Rahimi, Ali Kamandi, Abbas Hoseini, Hesam Haddad

Abstract:

Online/offline chat is a convenient feature in electronic markets for second-hand products, in which potential customers would like to have more information about the products to fill the information gap between buyers and sellers. Online peer-to-peer markets are trying to create artificial intelligence-based systems that help customers ask more informative questions more easily. In this article, we introduce a method for the question/answer system that we have developed for the top-ranked electronic market in Iran, called Divar. When it comes to second-hand products, incomplete product information in a purchase results in a loss to the buyer, and one way to balance buyer and seller information about a product is to help the buyer ask more informative questions when purchasing. Shortening the time needed to start a conversation and reach its desired outcome was also one of our main goals, which was achieved according to A/B test results. In this paper, we propose and evaluate a method for suggesting questions and answers in the messaging platform of the e-commerce website Divar. The purpose of such systems is to help users gather knowledge about the product more easily and quickly, all from the Divar database. We collected a dataset of around 2 million messages in colloquial Persian; for each product category we gathered 500K messages, of which only 2K were tagged, so semi-supervised methods were used. In order to deploy the proposed model to production, it must be fast enough to process 10 million messages daily on CPU processors. To reach that speed, in many subtasks faster and simpler models are preferred over deep neural models. The proposed method, which requires only a small amount of labeled data, is currently used in Divar production on CPU processors; 15% of buyer and seller messages in conversations are chosen directly from our model's output, and more than 27% of buyers have used the model's suggestions in at least one daily conversation.

Keywords: smart reply, spell checker, information retrieval, intent detection, question answering

Procedia PDF Downloads 186
4519 Comparative Evaluation of Root Uptake Models for Developing Moisture Uptake Based Irrigation Schedules for Crops

Authors: Vijay Shankar

Abstract:

In an era of water scarcity, effective use of water via irrigation requires good methods for determining crop water needs. Implementation of irrigation scheduling programs requires an accurate estimate of water use by the crop, and moisture depletion from the root zone represents the consequent crop evapotranspiration (ET). A numerical model for simulating soil water depletion in the root zone has been developed by taking into consideration soil physical properties, crop parameters, and climatic parameters. The governing differential equation for unsaturated flow of water in the soil is solved numerically using the fully implicit finite difference technique. The water uptake by plants is simulated using three different sink functions. The non-linear model predictions are in good agreement with field data, and thus it is possible to schedule irrigations more effectively. The present paper describes irrigation scheduling based on moisture depletion from the different layers of the root zone, obtained using the different sink functions, for three cash, oil, and forage crops: cotton, safflower, and barley, respectively. The soil is considered to be at a moisture level equal to field capacity prior to planting. Two soil moisture regimes are then imposed for the irrigated treatment: one wherein irrigation is applied whenever the soil moisture content is reduced to 50% of available soil water, and the other wherein irrigation is applied whenever the soil moisture content is reduced to 75% of available soil water. For both soil moisture regimes, it has been found that the model incorporating a non-linear sink function, which provides the best agreement of computed root zone moisture depletion with field data, is most effective in scheduling irrigations. Simulation runs with this moisture uptake function save 27.3 to 45.5% & 18.7 to 37.5%, 12.5 to 25% & 16.7 to 33.3%, and 16.7 to 33.3% & 20 to 40% irrigation water for cotton, safflower, and barley, respectively, under the 50% & 75% moisture depletion regimes, compared with the other moisture uptake functions considered in the study. The simulation developed can be used for optimized irrigation planning for different crops, choosing a suitable soil moisture regime depending on irrigation water availability and crop requirements.
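
A minimal sketch of the irrigation-triggering rule described above, written as a simple daily root-zone water balance (a bucket model for illustration only, not the authors' implicit finite difference solution of the unsaturated flow equation; the soil constants and daily ET value are hypothetical):

```python
# Illustrative daily root-zone water balance with irrigation triggered when a
# chosen fraction of the available soil water has been depleted.  This is a
# simple bucket model, not the authors' finite difference simulation; all
# values below (field capacity, wilting point, daily ET) are hypothetical.

FIELD_CAPACITY = 120.0   # mm of water held in the root zone at field capacity
WILTING_POINT = 40.0     # mm below which plants cannot extract water
AVAILABLE = FIELD_CAPACITY - WILTING_POINT   # available soil water (mm)

def schedule_irrigation(daily_et, depletion_trigger=0.5, days=60):
    """Return irrigation days and total water applied over one season."""
    storage = FIELD_CAPACITY      # soil starts at field capacity, as in the paper
    events, total_applied = [], 0.0
    for day in range(1, days + 1):
        storage -= daily_et        # crop water uptake depletes the root zone
        depleted = (FIELD_CAPACITY - storage) / AVAILABLE
        if depleted >= depletion_trigger:
            applied = FIELD_CAPACITY - storage   # refill to field capacity
            total_applied += applied
            events.append(day)
            storage = FIELD_CAPACITY
    return events, total_applied

for trigger in (0.5, 0.75):
    days_irrigated, water = schedule_irrigation(daily_et=5.0,
                                                depletion_trigger=trigger)
    print(f"trigger {int(trigger * 100)}%: irrigate on days {days_irrigated}, "
          f"total {water:.0f} mm")
```

Replacing the constant daily ET with layer-wise uptake computed from a chosen sink function would reproduce the kind of comparison the paper makes between uptake models.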

Keywords: irrigation water, evapotranspiration, root uptake models, water scarcity

Procedia PDF Downloads 330
4518 150 KVA Multifunction Laboratory Test Unit Based on Power-Frequency Converter

Authors: Bartosz Kedra, Robert Malkowski

Abstract:

This paper describes and presents a laboratory test unit built on a 150 kVA power-frequency converter and the Simulink Real-Time platform. The assumptions, based on the criteria determining which load and generator types may be simulated using the device, are presented, as well as the structure of the control algorithm. As the laboratory setup contains a transformer with a thyristor-controlled tap changer, a wider scope of setup capabilities is presented. Information about the communication interface, the data maintenance and storage solution, and the Simulink Real-Time features used is provided, together with a list and description of all measurements, and the potential for modifying the laboratory setup is evaluated. For the purposes of Rapid Control Prototyping, the dedicated Simulink Real-Time environment was used; the load model Functional Unit Controller is therefore based on a PC with I/O cards and Simulink Real-Time software. Simulink Real-Time was used to create real-time applications directly from Simulink models. In the next step, the applications were loaded onto a target computer connected to physical devices, which provided the opportunity to perform Hardware-in-the-Loop (HIL) tests as well as the aforementioned Rapid Control Prototyping. With Simulink Real-Time, the Simulink models were extended with I/O card driver blocks that made it possible to generate real-time applications automatically and to perform interactive or automated runs on a dedicated target computer equipped with a real-time kernel, a multicore CPU, and I/O cards. Results of the laboratory tests are presented. Different load configurations are described and experimental results are given, including simulation of under-frequency load shedding, frequency- and voltage-dependent characteristics of groups of load units, time characteristics of groups of different load units in a chosen area, and arbitrary active and reactive power regulation based on a defined schedule.

Keywords: MATLAB, power converter, Simulink Real-Time, thyristor-controlled tap changer

Procedia PDF Downloads 321
4517 Flux-Linkage Performance of DFIG Under Different Types of Faults and Locations

Authors: Mohamed Moustafa Mahmoud Sedky

Abstract:

The double-fed induction generator (DFIG) wind turbine has recently received great attention. The steady-state performance and response of DFIG-based wind turbines are now well understood. This paper presents an analysis of the operation of the stator and rotor flux-linkage dq models of the DFIG under different fault types and at different fault locations.
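
For context, the stator and rotor flux-linkage dq models referred to here are usually built on the standard machine relations below (textbook form, not reproduced from the paper):

```latex
% Stator and rotor flux linkages of a DFIG in the dq reference frame
\psi_{ds} = L_s\, i_{ds} + L_m\, i_{dr}, \qquad \psi_{qs} = L_s\, i_{qs} + L_m\, i_{qr}
\psi_{dr} = L_r\, i_{dr} + L_m\, i_{ds}, \qquad \psi_{qr} = L_r\, i_{qr} + L_m\, i_{qs}
% L_s, L_r : stator and rotor self-inductances;  L_m : magnetizing inductance
```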

Keywords: double fed induction motor, wind energy, flux linkage, short circuit

Procedia PDF Downloads 516
4516 Indirect Intergranular Slip Transfer Modeling Through Continuum Dislocation Dynamics

Authors: A. Kalaei, A. H. W. Ngan

Abstract:

In this study, a mesoscopic continuum dislocation dynamics (CDD) approach is applied to simulate intergranular slip transfer. The CDD scheme applies an efficient kinematics equation to model the evolution of the "all-dislocation density," which is the line length of dislocations of each character per unit volume. Since tracking every dislocation line individually becomes a limitation when simulating slip transfer at large scales with a large number of participating dislocations, a coarse-grained, extensive description of dislocations in terms of their density is utilized to resolve the effect of the collective motion of dislocation lines. For dynamics closure, namely, to obtain the dislocation velocity from a velocity law involving the effective glide stress, the mutual elastic interaction of dislocations is calculated using Mura's equation after singularity removal at the core of the dislocation lines. The developed scheme for slip transfer can therefore resolve the effects of the elastic interaction and pile-up of dislocations, which are important physics omitted in coarser models such as crystal plasticity finite element methods (CPFEM). Also, the length and time scales of the simulation are considerably larger than those in molecular dynamics (MD) and discrete dislocation dynamics (DDD) models. The present work successfully simulates that, as dislocation density piles up in front of a grain boundary, the elastic stress on the other side increases, leading to dislocation nucleation and stress relaxation when the local glide stress exceeds the operation stress of dislocation sources seeded on the other side of the grain boundary. More importantly, the simulation verifies a phenomenological misorientation factor often used by experimentalists, namely, that the ease of slip transfer increases with the product of the cosines of the misorientation angles of the slip-plane normals and slip directions on either side of the grain boundary. Furthermore, to investigate the effects of the critical stress-intensity factor of the grain boundary, dislocation density sources are seeded at different distances from the grain boundary, and the critical applied stress at which slip transfer occurs is studied.
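
The phenomenological misorientation factor mentioned above is commonly written as the product of two cosines (often called the Luster-Morris parameter); the notation below is supplied for clarity and is not quoted from the paper:

```latex
% Geometric slip-transfer factor across a grain boundary
m' = \cos\psi \,\cos\kappa
% \psi  : angle between the slip-plane normals of the incoming and outgoing slip systems
% \kappa: angle between their slip directions; transfer becomes easier as m' -> 1
```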

Keywords: grain boundary, dislocation dynamics, slip transfer, elastic stress

Procedia PDF Downloads 122
4515 The System-Dynamic Model of Sustainable Development Based on the Energy Flow Analysis Approach

Authors: Inese Trusina, Elita Jermolajeva, Viktors Gopejenko, Viktor Abramov

Abstract:

Global challenges require a transition from the existing linear economic model to a model that considers nature as a life-support system for development towards social well-being within the ecological economics paradigm. The objective of the article is to present the results of an analysis of socio-economic systems in the context of sustainable development, using the method of analyzing changes in system power (energy flows) and Kaldor's structural model of GDP. In accordance with the principles of the development of life and the ecological concept, the tasks of sustainable development of open, non-equilibrium, stable socio-economic systems were formalized using the energy flow analysis method. The methodology for monitoring sustainable development and the level of life was considered while studying the interactions in the system 'human - society - nature' and using the theory of a unified system of space-time measurements. Based on the results of the analysis, time series of energy consumption and an economic structural model were formulated for the level, degree, and tendencies of sustainable development of the system, and the conditions of growth, degrowth, and stationarity were formalized. In order to design the future state of socio-economic systems, a concept was formulated and the first models of energy flows in systems were created using the tools of system dynamics. During the research, the authors calculated and used a system of universal indicators of sustainable development in an invariant coordinate system expressed in energy units. In the context of the proposed approach and methods, universal sustainable development indicators were calculated as models of development for the USA and China, using data from the World Bank database for the period from 1960 to 2019. Main results: 1) in accordance with the proposed approach, the heterogeneous energy resources of countries were reduced to universal power units, summarized, and expressed as a single number; 2) the values of universal indicators of the level of life were obtained and compared with generally accepted similar indicators; 3) the system of indicators, in accordance with the requirements of sustainable development, can be considered a basis for monitoring development trends. This work can make a significant contribution to overcoming the difficulties of forming socio-economic policy, which are largely due to the lack of information that would give an idea of the course and trends of socio-economic processes. The existing monitoring methods do not fully meet this requirement, since the indicators have different units of measurement from different areas and, as a rule, reflect the reaction of socio-economic systems to actions already taken, moreover with a time shift. Currently, the incommensurability of measures of heterogeneous social, economic, environmental, and other systems is the reason that social systems are managed in isolation from the general laws of living systems, which can ultimately lead to a systemic crisis.

Keywords: sustainability, system dynamic, power, energy flows, development

Procedia PDF Downloads 58
4514 Vision and Challenges of Developing VR-Based Digital Anatomy Learning Platforms and a Solution Set for 3D Model Marking

Authors: Gizem Kayar, Ramazan Bakir, M. Ilkay Koşar, Ceren U. Gencer, Alperen Ayyildiz

Abstract:

Anatomy classes are crucial to the general education of medical students, yet learning anatomy is quite challenging and requires memorization of thousands of structures. In traditional teaching methods, learning materials are still based on books, anatomy mannequins, or videos, which results in many important structures being forgotten after several years. More interactive teaching methods such as virtual reality, augmented reality, gamification, and motion sensors are becoming more popular, since such methods ease the way we learn and keep the knowledge in mind for longer. In this study, we designed a virtual reality-based digital head anatomy platform to investigate whether a fully interactive anatomy platform is effective for learning anatomy and to understand the level of teaching and learning optimization. The head is one of the most complicated human anatomical structures, with thousands of tiny, unique parts, which makes head anatomy one of the most difficult subjects to understand during class sessions. We therefore developed a fully interactive digital tool with 3D model marking, quiz structures, 2D/3D puzzle structures, and VR support, so as to combine the power of VR and gamification. The project was developed in the Unity game engine with the HTC Vive Cosmos VR headset. The head anatomy 3D model was selected with full skeletal, muscular, integumentary, teeth, lymph, and vein systems. The biggest issue during development was the complexity of the model and its marking in the 3D world. 3D model marking requires access to each unique structure in the aforementioned subsystems, which means hundreds of markings need to be made. Some parts of our 3D head model were monolithic, so we worked on dividing such parts into subparts, which is very time-consuming: subdividing monolithic parts requires an external modeling tool, and such tools generally come with steep learning curves while seamless division is not ensured. The second option was to attach tiny colliders to all unique items for mouse interaction; however, outer colliders that cover inner trigger colliders overlap, and these colliders repel each other. The third option is raycasting; however, due to its view-based nature, raycasting has some inherent problems, since as the model rotates, the view direction changes very frequently and the directional computations become even harder. This is why we finally settled on the local coordinate system. Taking the pivot point of the model into consideration (the back of the nose), each sub-structure is marked with its own local coordinate with respect to the pivot. After converting the mouse position to a world position and checking its relation to the corresponding structure's local coordinate, we were able to mark all points correctly. The advantage of this method is its applicability and accuracy for all types of monolithic anatomical structures.
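
A minimal sketch of the local-coordinate marking idea described above, in plain Python rather than the project's Unity/C# code; the pivot position, rotation, and structure coordinates are made-up examples:

```python
# Illustrative sketch of marking anatomical structures via local coordinates:
# a clicked world-space point is transformed into the model's local frame
# (relative to the pivot at the back of the nose) and matched against the
# stored local coordinate of each structure.  Values are hypothetical; the
# actual project uses Unity/C# with an HTC Vive Cosmos headset.
import numpy as np

# stored local coordinates of a few structures, relative to the pivot
structures = {
    "frontal_bone":   np.array([0.00, 0.09, 0.06]),
    "mandible":       np.array([0.00, -0.07, 0.05]),
    "left_zygomatic": np.array([-0.05, 0.02, 0.04]),
}

def world_to_local(p_world, pivot_world, rotation):
    """Inverse-transform a world point into the model's pivot-centred frame."""
    return rotation.T @ (np.asarray(p_world) - np.asarray(pivot_world))

def pick_structure(p_world, pivot_world, rotation, tolerance=0.02):
    """Return the structure whose stored local coordinate is closest to the click."""
    p_local = world_to_local(p_world, pivot_world, rotation)
    name, dist = min(((n, np.linalg.norm(p_local - c)) for n, c in structures.items()),
                     key=lambda item: item[1])
    return name if dist <= tolerance else None

# example: model rotated 90 degrees about the vertical (y) axis
theta = np.pi / 2
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
pivot = np.array([1.0, 1.5, 0.5])
click = pivot + R @ structures["mandible"]      # a click landing on the mandible
print(pick_structure(click, pivot, R))          # -> "mandible"
```

Because the comparison happens in the pivot-centred local frame, the matching stays correct however the model is rotated, which is the advantage over the view-dependent raycasting option.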

Keywords: anatomy, e-learning, virtual reality, 3D model marking

Procedia PDF Downloads 99
4513 Performance Optimization of Polymer Materials Thanks to Sol-Gel Chemistry for Fuel Cells

Authors: Gondrexon, Gonon, Mendil-Jakani, Mareau

Abstract:

Proton Exchange Membrane Fuel Cells (PEMFCs) are promising devices for converting hydrogen into electricity. A PEMFC is made of a Membrane Electrode Assembly (MEA) composed of a Proton Exchange Membrane (PEM) sandwiched between two catalytic layers. Nowadays, specific performance targets must be met in order to ensure the long-term expansion of this technology. The polymers currently used (perfluorinated, such as Nafion®) are unsuitable for the high-temperature range because they lose their mechanical properties. To overcome this issue, sulfonated polyaromatic polymers appear to be a good alternative since they have very good thermomechanical properties; however, their proton conductivity and chemical stability (oxidative resistance to the H2O2 formed during fuel cell (FC) operation) are very low. In our team, we patented an original concept of hybrid membranes able to fulfill the specific requirements for PEMFCs. The idea is based on improving a commercial polymer membrane via an easy and processable stabilization using sol-gel (SG) chemistry with judiciously embedded chemical functions. This strategy thus breaks with traditional approaches (design of new copolymers, use of inorganic charges/additives). In 2020, we presented the elaboration and functional properties of a first generation of hybrid membranes with promising performance and durability. The latter was made by self-condensing a SG phase from 3-(mercaptopropyl)trimethoxysilane (MPTMS) inside a commercial sPEEK host membrane. The successful in-situ condensation reactions of the MPTMS were demonstrated by mass uptake measurements, FTIR spectroscopy (presence of aliphatic C-H), and 29Si solid-state NMR (T2 and T3 signals of self-condensation products). The ability of the SG phase to prevent the oxidative degradation of the sPEEK phase (thanks to its thiol functions) was then proved with accelerated H2O2 aging tests and FC operating tests. A second generation made of thiourea-functionalized SG precursors (named HTU and TTU) was prepared afterwards. By analyzing in depth the morphologies of these different hybrids in direct space (AFM/SEM/TEM) and in reciprocal space (SANS/SAXS/WAXS), we showed that both the SG phase morphology and its localization in the host have a strong impact on the PEM functional properties observed; this relationship also depends on the embedded chemical function. The hybrids obtained have shown very good chemical resistance during aging tests (exposure to H2O2) compared to the commercial sPEEK. However, the chemical function used is considered "sacrificial" and cannot react indefinitely with H2O2. Thus, we are now working on a third generation carrying both sacrificial and regenerative chemical functions, which are expected to inhibit the chemical aging of sPEEK more efficiently. With this work, we are confident of reaching a predictive understanding of the key parameters governing the final properties.

Keywords: fuel cells, ionomers, membranes, sPEEK, chemical stability

Procedia PDF Downloads 70
4512 Angiogenic, Cytoprotective, and Immunosuppressive Properties of Human Amnion and Chorion-Derived Mesenchymal Stem Cells

Authors: Kenichi Yamahara, Makiko Ohshima, Shunsuke Ohnishi, Hidetoshi Tsuda, Akihiko Taguchi, Toshihiro Soma, Hiroyasu Ogawa, Jun Yoshimatsu, Tomoaki Ikeda

Abstract:

We have previously reported the therapeutic potential of rat fetal membrane (FM)-derived mesenchymal stem cells (MSCs) using various rat models, including hindlimb ischemia, autoimmune myocarditis, glomerulonephritis, renal ischemia-reperfusion injury, and myocardial infarction. In this study, 1) we isolated and characterized MSCs from human amnion and chorion; 2) we examined differences in their expression profiles of growth factors and cytokines; and 3) we investigated the therapeutic potential and differences of these MSCs using murine hindlimb ischemia and acute graft-versus-host disease (GVHD) models. MSCs isolated from both the amnion and chorion layers of the FM showed similar morphological appearance, multipotency, and cell-surface antigen expression. Conditioned media obtained from amnion- and chorion-derived MSCs inhibited cell death caused by serum starvation or hypoxia in endothelial cells and cardiomyocytes. Amnion and chorion MSCs secreted significant amounts of angiogenic factors, including HGF, IGF-1, VEGF, and bFGF, although differences in the cellular expression profiles of these soluble factors were observed. Transplantation of human amnion or chorion MSCs significantly increased blood flow and capillary density in a murine hindlimb ischemia model. In addition, compared to human chorion MSCs, human amnion MSCs markedly reduced T-lymphocyte proliferation with enhanced secretion of PGE2 and improved the pathology in a mouse model of GVHD. Our results highlight that human amnion- and chorion-derived MSCs, which showed differences in their soluble factor secretion and angiogenic/immunosuppressive functions, could be ideal cell sources for regenerative medicine.

Keywords: amnion, chorion, fetal membrane, mesenchymal stem cells

Procedia PDF Downloads 413
4511 Fischer Tropsch Synthesis in Compressed Carbon Dioxide with Integrated Recycle

Authors: Kanchan Mondal, Adam Sims, Madhav Soti, Jitendra Gautam, David Carron

Abstract:

Fischer-Tropsch (FT) synthesis is a complex series of heterogeneous reactions between the CO and H2 molecules present in syngas on the surface of an active catalyst (Co, Fe, Ru, Ni, etc.) to produce gaseous, liquid, and waxy hydrocarbons composed of paraffins, olefins, and oxygenated compounds. The key challenge in applying the Fischer-Tropsch process to produce transportation fuels is to make the capital and production costs economically feasible relative to the cost of existing petroleum resources. To meet this challenge, it is imperative to enhance CO conversion while maximizing carbon selectivity towards the desired liquid hydrocarbon ranges (i.e., reducing CH4 and CO2 selectivities) at high throughputs. At the same time, it is equally essential to increase catalyst robustness and longevity without sacrificing catalyst activity. This paper focuses on process development to achieve the above. It describes the influence of operating parameters on Fischer-Tropsch synthesis (FTS) from coal-derived syngas in supercritical carbon dioxide (ScCO2). In addition, recycle of the unreacted gas and solvent was incorporated, and the effect of the unreacted feed recycle was evaluated; it was expected that with the recycle, the feed rate could be increased. The increase in conversion and liquid selectivity, accompanied by a narrower carbon number distribution in the product, suggests that higher flow rates can and should be used when incorporating exit gas recycle. The process was capable of enhancing the hydrocarbon selectivity (nearly 98% CO conversion), improving the carbon efficiency from 17% to 51% in a once-through configuration and further converting 16% of the CO2 to liquid with integrated recycle of the product gas stream, and increasing the life of the catalyst. The enhanced catalyst robustness is attributed to the absorption of the heat of reaction by the compressed CO2, which reduced the formation of hotspots, and to the dissolution of waxes by the CO2 solvent, which reduced the blinding of active sites. In addition, recycling the product gas stream reduced the reactor footprint to one-fourth of the once-through size, and product fractionation utilizing the solvent effects of supercritical CO2 was realized. Besides the negative CO2 selectivities, methane production was also inhibited and was limited to less than 1.5%. The effect of the process conditions on the life of the catalysts will also be presented. Fe-based catalysts are known to have a high proclivity for producing CO2 during FTS; data on the product spectrum and selectivity of Co and Fe-Co based catalysts, as well as those obtained from commercial sources, will also be presented. The measurable decision criteria were the increase in CO conversion at an H2:CO ratio of 1:1 (as commonly found in coal gasification product streams) in the supercritical phase as compared to the gas-phase reaction, the decrease in CO2 and CH4 selectivity, the overall liquid product distribution, and finally an increase in the life of the catalysts.

Keywords: carbon efficiency, Fischer Tropsch synthesis, low GHG, pressure tunable fractionation

Procedia PDF Downloads 236
4510 The Effect of Values on Social Innovativeness in Nursing and Medical Faculty Students

Authors: Betül Sönmez, Fatma Azizoğlu, S. Bilge Hapçıoğlu, Aytolan Yıldırım

Abstract:

Background: Social innovativeness encompasses the procurement of a sustainable benefit for a range of problems, from working conditions to education, social development, health, and from environmental control to climate change, as well as the development of new social products and services. Objectives: This study was conducted to determine the correlation between the social innovation tendency of nursing and medical faculty students and value types. Methods and participants: The population of this correlational study consisted of third-year students studying at a medical faculty and a nursing faculty of a public university in Istanbul. Ethics committee approval and permission from the school administrations were obtained in order to conduct the study, and voluntary participation of the students was ensured. 524 questionnaires were obtained, with a total return rate of 57.1% (65.0% among nursing students and 52.1% among medical students). The data were collected using the Portrait Values Questionnaire and a questionnaire containing the Social Innovativeness Scale. Results: The effect of the subscale scores of the Portrait Values Questionnaire on the total score of the Social Innovativeness Scale was 26.6%. In this significant model (F=37.566; p<0.01), the highest effect was observed for the universalism subscale. The effect of the subscale scores of the Portrait Values Questionnaire, together with age, gender, and number of siblings, on social innovativeness was 25% in nursing students and 30.8% in medical faculty students. In both significant models (p<0.01), the values of power, universalism, and kindness showed the highest effect for nursing students, whereas the values of self-direction, stimulation, hedonism, and universalism showed the highest effect for medical faculty students. Conclusions: Universalism is the value with the highest effect on social innovativeness in both groups, which is an expected result given the nature of the professions. The effect of the values of independent thinking and self-direction, as well as of openness to change involving a quest for innovation (stimulation), observed in medical faculty students also supports the literature on innovative behavior. These results are thought to guide educators and administrators in developing socially innovative behaviors.

Keywords: social innovativeness, portrait values questionnaire, nursing students, medical faculty students

Procedia PDF Downloads 320
4509 Similar Correlation of Meat and Sugar to Global Obesity Prevalence

Authors: Wenpeng You, Maciej Henneberg

Abstract:

Background: Sugar consumption has been overwhelmingly blamed as a major dietary offender in obesity prevalence. Meat intake has been hypothesized as an obesity contributor in previous publications, but many dietary guidelines still suggest including a moderate amount of meat in the daily diet. Comparable sugar and meat exposure data were obtained to assess the difference in the relationships between the two major food groups and obesity prevalence at the population level. Methods: Population-level estimates of obesity and overweight rates, per capita per day exposure to major food groups (meat, sugar, starch crops, fibers, fats, and fruits) and total calories, per capita per year GDP, urbanization, and physical inactivity prevalence rates were extracted and matched for statistical analysis. Comparisons of correlation coefficients (Pearson and partial) using Fisher's r-to-z transformation, and of β ranges (β ± 2 SE) and their overlap in multiple linear regression (Enter and Stepwise), were used to examine potential differences between the relationships of obesity prevalence with sugar exposure and with meat exposure. Results: Pearson and partial correlation analyses (controlled for total calories, physical inactivity prevalence, GDP, and urbanization) revealed that sugar and meat exposure correlated significantly with obesity and overweight prevalence. Fisher's r-to-z transformation showed no statistically significant difference between the Pearson correlation coefficients (z=-0.53, p=0.5961) or the partial correlation coefficients (z=-0.04, p=0.9681) of obesity prevalence with sugar exposure and with meat exposure. Both the Enter and Stepwise models in the multiple linear regression analysis showed that sugar and meat exposure were the most significant predictors of obesity prevalence. The large overlap of the β ranges in the Enter (0.289-0.573) and Stepwise (0.294-0.582) models indicated that sugar and meat exposure correlated with obesity without a statistically significant difference. Conclusion: Worldwide, sugar and meat exposure correlated with obesity prevalence to the same extent. Like sugar, minimal meat exposure should also be suggested in dietary guidelines.
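
For reference, the Fisher r-to-z comparison used here follows the standard form (n_1 and n_2 are the sample sizes behind the two correlations being compared):

```latex
% Fisher's r-to-z transformation and the statistic for comparing two correlations
z_i = \operatorname{arctanh}(r_i) = \tfrac{1}{2}\,\ln\!\frac{1+r_i}{1-r_i}, \qquad i = 1, 2
Z = \frac{z_1 - z_2}{\sqrt{\dfrac{1}{n_1 - 3} + \dfrac{1}{n_2 - 3}}}
```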

Keywords: meat, sugar, obesity, energy surplus, meat protein, fats, insulin resistance

Procedia PDF Downloads 304
4508 Development of a Tilt-Rotor Aircraft Model Using System Identification Technique

Authors: Ferdinando Montemari, Antonio Vitale, Nicola Genito, Giovanni Cuciniello

Abstract:

The introduction of tilt-rotor aircraft into the existing civilian air transportation system will provide beneficial effects due to the tilt-rotor's capability to combine the characteristics of a helicopter and a fixed-wing aircraft into one vehicle. The availability of reliable tilt-rotor simulation models supports the development of such vehicles. Indeed, simulation models are required to design automatic control systems that increase safety, reduce the pilot's workload and stress, and ensure the optimal aircraft configuration with respect to flight envelope limits, especially during the most critical flight phases such as conversion from helicopter to aircraft mode and vice versa. This article presents a process to build a simplified tilt-rotor simulation model, derived from the analysis of flight data. The model aims to reproduce the complex dynamics of the tilt-rotor during the in-flight conversion phase. It uses a set of scheduled linear transfer functions to relate the autopilot reference inputs to the most relevant rigid-body state variables. The model also computes information about the rotor flapping dynamics, which is useful to evaluate the aircraft control margin in terms of rotor collective and cyclic commands. The rotor flapping model is derived through a mixed theoretical-empirical approach, which includes physical analytical equations (applicable to the helicopter configuration) and parametric corrective functions; the latter are introduced to best fit the actual rotor behavior and to compensate for the differences between helicopter and tilt-rotor behavior in flight. Time-domain system identification from flight data is exploited to optimize the model structure and to estimate the model parameters. The presented model-building process was applied to simulated flight data of the ERICA tilt-rotor, generated using a high-fidelity simulation model implemented in the FlightLab environment. The validation of the obtained model was very satisfactory, confirming the validity of the proposed approach.
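
As an illustration of what "scheduled linear transfer functions" means in this context (a generic low-order form, not the identified model from the paper), each channel from an autopilot reference input to a rigid-body state can be written with coefficients scheduled on the conversion parameter, here denoted mu for the nacelle tilt:

```latex
% Generic scheduled first-order transfer function; K and tau are identified
% from flight data and scheduled on the conversion parameter \mu (nacelle tilt)
\frac{Y(s)}{U(s)} = G(s;\mu) = \frac{K(\mu)}{\tau(\mu)\, s + 1}
```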

Keywords: flapping dynamics, flight dynamics, system identification, tilt-rotor modeling and simulation

Procedia PDF Downloads 197
4507 Applications of Greenhouse Data in Guatemala in the Analysis of Sustainability Indicators

Authors: Maria A. Castillo H., Andres R. Leandro, Jose F. Bienvenido B.

Abstract:

In 2015, Guatemala officially adopted the Sustainable Development Goals (SDGs) of the 2030 Agenda agreed by the United Nations. In 2016, these objectives and goals were reviewed, and national priorities were established within the K'atún 2032 National Development Plan. In 2019 and 2021, progress was evaluated against 120 defined indicators; a need to improve the quality and availability of the statistical data required for the analysis of sustainability indicators was detected, so the target values for 2024 and 2032 were adjusted. The need for greater agricultural technology is one of the priorities established within SDG 2, "Zero Hunger". In this area, protected agricultural production provides greater productivity throughout the year, reduces the use of chemical products to control pests and diseases, reduces the negative impact of climate, and improves product quality. During the crisis caused by Covid-19, exports of fruits and vegetables produced in greenhouses in Guatemala increased; however, this information was not considered in the 2021 revision of the Plan. The objective of this study is to evaluate the information available on greenhouse agricultural production and its integration into the sustainability indicators for Guatemala. The study was carried out in four phases: 1. analysis of the goals established for SDG 2 and the indicators included in the K'atún Plan; 2. analysis of environmental, social, and economic indicator models; 3. definition of territorial levels at two geographic scales, departments and municipalities; 4. diagnosis of the available data on technological agricultural production, with emphasis on greenhouses, at the two geographic scales. A summary of the results is presented for each phase, and finally some recommendations for future research are added. The main contribution of this work is to improve the available data that allow the incorporation of agricultural technology indicators into the established goals, to evaluate their impact on food security and nutrition, employment and investment, poverty, and the use of water and natural resources, and to provide a methodology applicable to other production models and other geographical areas.

Keywords: greenhouses, protected agriculture, sustainable indicators, Guatemala, sustainability, SDG

Procedia PDF Downloads 83
4506 From Industry 4.0 to Agriculture 4.0: A Framework to Manage Product Data in Agri-Food Supply Chain for Voluntary Traceability

Authors: Angelo Corallo, Maria Elena Latino, Marta Menegoli

Abstract:

The agri-food value chain involves various stakeholders with different roles, all of whom abide by national and international rules and leverage marketing strategies to advance their products. Food products and the related processing phases carry with them a large amount of data that are often not used to inform the final customer. Some of these data, if suitably identified and used, can benefit a single company and/or the whole supply chain, creating a match between marketing techniques and voluntary traceability strategies. Moreover, buying models have recently changed: customers care about wellbeing and food quality, and food citizenship and food democracy have emerged, leveraging transparency, sustainability, and food information needs. The Internet of Things (IoT) and analytics, two of the innovative technologies of Industry 4.0, have a significant impact on the market and will act as a main thrust towards a genuine "4.0 change" for agriculture. However, realizing a traceability system is not simple because of the complexity of the agri-food supply chain, the many actors involved, different business models, environmental variations impacting products and/or processes, and extraordinary climate changes. In order to support companies engaged in a traceability path, a Framework to Manage Product Data in the Agri-Food Supply Chain for Voluntary Traceability was conceived, starting from business model analysis and the related business processes. Studying each process task and leveraging modeling techniques made it possible to identify the information held by the different actors along the agri-food supply chain. IoT technologies for data collection and analytics techniques for data processing provide information useful for increasing intra-company efficiency and market competitiveness. All the information recovered can be presented through IT solutions and mobile applications and made accessible to the company, the entire supply chain, and the consumer, with a view to guaranteeing transparency and quality.

Keywords: agriculture 4.0, agri-food supply chain, industry 4.0, voluntary traceability

Procedia PDF Downloads 146
4505 A Study on Reinforced Concrete Beams Enlarged with Polymer Mortar and UHPFRC

Authors: Ga Ye Kim, Hee Sun Kim, Yeong Soo Shin

Abstract:

Many studies have been done so far on methods for repairing and strengthening concrete structures. The traditional retrofit method is to attach fiber sheets such as CFRP (Carbon Fiber Reinforced Polymer), GFRP (Glass Fiber Reinforced Polymer), and AFRP (Aramid Fiber Reinforced Polymer) to the concrete structure. However, this method has several downsides: there is a risk of debonding and an increase in displacement because the structural section is not enlarged. Therefore, enlarging the structural member with polymer mortar or Ultra-High Performance Fiber Reinforced Concrete (UHPFRC) is an effective means of strengthening a concrete structure. This paper investigates the structural performance of reinforced concrete (RC) beams enlarged with polymer mortar and compares the experimental results with analytical results. Nonlinear finite element analyses were conducted to reproduce the experimental results and to predict the structural behavior of retrofitted RC beams accurately without a costly experimental process. In addition, this study compares a commonly used retrofit material (polymer mortar) with a more recently used material (UHPFRC) by means of nonlinear finite element analyses. In the first part of the paper, RC beams with different cover types were fabricated for the experiment; each beam was 250 millimeters deep, 150 millimeters wide, and 2800 millimeters long. To verify the experiment, nonlinear finite element models were generated using the commercial software ABAQUS 6.10-3. Both the experimental and analytical results demonstrated a good strengthening effect on the RC beams and showed similar tendencies, so the proposed analytical method can be used in the future to predict the effect of strengthening RC beams. In the second part of the study, the main parameter was the type of retrofit material. The same nonlinear finite element models were generated to compare polymer mortar with UHPFRC. The two types of retrofit material were evaluated, and the retrofit effect was verified by the analytical results.

Keywords: retrofit material, polymer mortar, UHPFRC, nonlinear finite element analysis

Procedia PDF Downloads 416
4504 Visco-Hyperelastic Finite Element Analysis for Diagnosis of Knee Joint Injury Caused by Meniscal Tearing

Authors: Eiji Nakamachi, Tsuyoshi Eguchi, Sayo Yamamoto, Yusuke Morita, H. Sakamoto

Abstract:

In this study, we aim to reveal the relationship between meniscal tearing and articular cartilage injury of the knee joint by using the dynamic explicit finite element (FE) method. Meniscal injuries reduce the functional ability of the meniscus and consequently increase the load on the articular cartilage of the knee joint. In order to prevent the induction of osteoarthritis (OA) caused by meniscal injuries, many medical treatment techniques, such as artificial meniscus replacement and meniscal regeneration, have been developed; however, it is reported that these treatments are not comprehensive. In order to reveal the fundamental mechanism of OA induction, the mechanical characterization of the meniscus in the normal and injured states is carried out using FE analyses. First, an FE model of the human knee joint in the normal, 'intact' state was constructed using magnetic resonance (MR) tomography images and the image construction code Materialize Mimics. Next, two meniscal injury models with radial tears of the medial and lateral menisci were constructed. In the FE analyses, a linear elastic constitutive law was adopted for the femur and tibia bones, a visco-hyperelastic constitutive law for the articular cartilage, and a visco-anisotropic hyperelastic constitutive law for the meniscus. The material properties of the articular cartilage and meniscus were identified using stress-strain curves obtained from our compressive and tensile tests. The numerical results under a normal walking condition revealed how and where the maximum compressive stress occurred on the articular cartilage; the maximum compressive stress and its location varied between the intact model and the two meniscal tear models. These compressive stress values can be used to establish the threshold value causing pathological change, for diagnostic purposes. In this study, FE analyses of the knee joint were carried out to reveal the influence of meniscal injuries on cartilage injury, and the following conclusions were obtained. 1. A 3D FE model consisting of the femur, tibia, articular cartilage, and menisci was constructed from MR images of the human knee joint, using the image processing code Materialize Mimics and tetrahedral FE elements. 2. A visco-anisotropic hyperelastic constitutive equation was formulated by adopting the generalized Kelvin model, and the material properties of the meniscus and articular cartilage were determined by curve fitting to experimental results. 3. Stresses on the articular cartilage and menisci were obtained for the intact case and for radial tears of the medial and lateral menisci. Compared with the intact knee joint, the two tear models showed almost the same stress values as each other and higher values than the intact case, and both meniscal tears induced stress localization in the medial and lateral regions. It is confirmed that our newly developed FE analysis code has the potential to become a new diagnostic system for evaluating the effect of meniscal damage on the articular cartilage through mechanical functional assessment.
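
For orientation, a generalized Kelvin model (Kelvin-Voigt elements in series) is often summarized by its creep compliance; the expression below is the textbook isotropic form, not the visco-anisotropic hyperelastic formulation developed in the paper:

```latex
% Creep compliance of a generalized Kelvin chain with N Kelvin-Voigt elements
J(t) = J_0 + \sum_{i=1}^{N} J_i \left(1 - e^{-t/\tau_i}\right), \qquad \tau_i = \eta_i / E_i
```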

Keywords: finite element analysis, hyperelastic constitutive law, knee joint injury, meniscal tear, stress concentration

Procedia PDF Downloads 243
4503 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion whose limit in probability is identical to that of the normalized log-likelihood; this includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile of this distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus similarly able to calculate upper quantiles of its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance holds as in the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is accounting for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI) problem. We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, and the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
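A minimal Python sketch of the core quantile computation described above: the CDF of the minimum of jointly Gaussian GIC statistics evaluated through multivariate Gaussian integrals, here using SciPy's multivariate normal CDF as a stand-in for the R package "mvtnorm". The mean vector, covariance matrix, and quantile level are illustrative placeholders, not values from the paper.

```python
# Minimal sketch (not the authors' implementation): upper quantile of the minimum of
# jointly Gaussian GIC statistics via multivariate Gaussian integration.
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import brentq

mu = np.array([0.0, 0.4, 0.9])           # asymptotic means of the candidate-model GICs
Sigma = np.array([[1.0, 0.6, 0.3],
                  [0.6, 1.0, 0.5],
                  [0.3, 0.5, 1.0]])       # asymptotic covariance (placeholder)

def cdf_min(t):
    """P(min_i X_i <= t) = 1 - P(X_i > t for all i), with X ~ N(mu, Sigma)."""
    # P(X_i > t for all i) equals the lower-orthant probability of -X evaluated at -t.
    upper_orthant = multivariate_normal(mean=-mu, cov=Sigma).cdf(np.full(mu.size, -t))
    return 1.0 - upper_orthant

# Upper alpha-quantile of the minimum, found by root-finding on the CDF
alpha = 0.95
q = brentq(lambda t: cdf_min(t) - alpha, -10.0, 10.0)
print(f"{alpha:.0%} upper quantile of the minimum GIC: {q:.3f}")
```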

Keywords: model selection inference, generalized information criteria, post model selection, asymptotic theory

Procedia PDF Downloads 86
4502 Cognitive Models of Health Marketing Communication in the Digital Era: Psychological Factors, Challenges, and Implications

Authors: Panas Gerasimos, Kotidou Varvara, Halkiopoulos Constantinos, Gkintoni Evgenia

Abstract:

As a result of growing technology and the information available online, users turn to the internet before, and sometimes instead of, the opinion of an expert. In many cases, they take control of their health into their own hands and make decisions without the contribution of a doctor. Accordingly, this essay analyzes the confidence users place in searching for health issues on the internet. For this study, a survey was conducted among doctors to find out why patients use the internet for their health problems and what consequences searching for health information on the internet can have. Specifically, the results regarding the users show that: a) the majority of users search the internet about health issues once or twice a month, b) individuals with a chronic disease search for health information on the internet more frequently, c) the most important topics that the majority of users search for are pathological and dietary issues and information associated with doctors and hospitals, although topic searches vary depending on the users' age, d) the most common source of information remains direct contact with doctors, which the majority of users prefer over electronic sources for their briefing, and e) there is a large lack of knowledge about e-health services. From the doctors' point of view, the following conclusions emerge: a) almost all doctors use the internet as their main source of information, b) the internet has a great influence on doctors' relationships with their patients, c) in many cases a patient first consults the internet and then visits the doctor, d) the internet has a significant psychological impact on patients as they reach a decision, e) the most important reason users choose the internet instead of a health professional is economic, f) the main negative consequence is inaccurate information, g) and the positive consequences are the possibility of online contact with the doctor and easier comprehension of the doctor's advice. Overall, it is observed on both sides that the use of the internet for health issues is intense, which indicates that the new means at doctors' disposal create the conditions for radical changes in the way services are provided and in the doctor-patient relationship.

Keywords: cognitive models, health marketing, e-health, psychological factors, digital marketing, e-health services

Procedia PDF Downloads 205
4501 Modeling the Impact of Time Pressure on Activity-Travel Rescheduling Heuristics

Authors: Jingsi Li, Neil S. Ferguson

Abstract:

Time pressure can influence productivity, the quality of decision making, and the efficiency of problem solving. This insight has mostly stemmed from cognitive research and the psychological literature; however, there has been conspicuously little discussion of it in transport-related fields. It is conceivable that in many activity-travel contexts time pressure is a potentially important factor, since an excessive amount of decision time may incur the risk of late arrival at the next activity. Activity-travel rescheduling behavior is commonly explained by the costs and benefits of factors such as activity engagements, personal intentions, and social requirements. This paper hypothesizes that an additional factor, perceived time pressure, could affect travelers' rescheduling behavior and thus have an impact on travel demand management. Time pressure may arise in different ways and is assumed here to be incurred essentially because travelers plan their schedules without anticipating unforeseen elements, e.g., transport disruption. In addition to a linear-additive utility-maximization model, computationally simpler non-compensatory heuristic models are considered as an alternative way to simulate travelers' responses. The paper contributes to travel behavior modeling research by investigating the following questions: how can time pressure be measured properly in an activity-travel day-plan context? How do travelers reschedule their plans to cope with time pressure? How does the importance of an activity affect travelers' rescheduling behavior? What behavioral model best describes the process of making activity-travel rescheduling decisions? How do the identified coping strategies affect the transport network? In this paper, a Mixed Heuristic Model (MHM) is employed to identify the presence of different choice heuristics through a latent class approach. Data on travelers' activity-travel rescheduling behavior are collected via a web-based interactive survey presenting a fictitious scenario comprising multiple uncertain events affecting activities or travel. The experiments are conducted to obtain a realistic picture of activity-travel rescheduling under time pressure. The identified behavioral models are then integrated into a multi-agent transport simulation model to investigate the effect of the rescheduling strategies on the transport network. The results show that an increased proportion of travelers use simpler, non-compensatory choice strategies instead of compensatory methods to cope with time pressure. Specifically, satisficing - one of the heuristic decision-making strategies - is commonly adopted, since travelers tend to abandon less important activities and keep the important ones. Furthermore, the importance of the activity is found to increase the weight of negative information when making trip-related decisions, especially route choices. When the identified non-compensatory decision-making heuristic models are incorporated into the agent-based transport model, the simulation results imply that neglecting the effect of perceived time pressure may result in inaccurate forecasts of choice probability and an overestimate of responsiveness to policy changes.
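To make the distinction between compensatory and satisficing rescheduling rules concrete, here is a minimal illustrative sketch (not the Mixed Heuristic Model itself) in which an agent chooses which activity to drop after a disruption; the options, attributes, weights and threshold are invented for illustration.

```python
# Illustrative contrast between a compensatory linear-additive rule and a satisficing
# heuristic for choosing which activity to drop when a disruption leaves too little time.
# Attribute values, weights and the threshold are invented placeholders.

# Each rescheduling option: importance retained in the day plan and extra travel time incurred
options = {
    "drop_shopping":  {"importance": 0.9, "extra_travel_min": 12.0},
    "drop_gym":       {"importance": 0.7, "extra_travel_min": 5.0},
    "drop_work_task": {"importance": 0.3, "extra_travel_min": 2.0},
}

def compensatory_choice(opts, w_importance=1.0, w_time=-0.05):
    """Linear-additive utility: trades retained importance off against extra travel time."""
    utilities = {name: w_importance * v["importance"] + w_time * v["extra_travel_min"]
                 for name, v in opts.items()}
    return max(utilities, key=utilities.get)

def satisficing_choice(opts, importance_threshold=0.6):
    """Satisficing: accept the first option that keeps importance above a threshold,
    evaluating options in the (possibly arbitrary) order they are encountered."""
    for name, v in opts.items():
        if v["importance"] >= importance_threshold:
            return name
    return list(opts)[-1]  # fall back to the last option if none is satisfactory

print("Compensatory choice:", compensatory_choice(options))
print("Satisficing choice: ", satisficing_choice(options))
```

With these placeholder numbers the two rules pick different options, which is the kind of behavioral divergence the latent class approach is meant to detect.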

Keywords: activity-travel rescheduling, decision making under uncertainty, mixed heuristic model, perceived time pressure, travel demand management

Procedia PDF Downloads 111
4500 Fuels and Platform Chemicals Production from Lignocellulosic Biomass: Current Status and Future Prospects

Authors: Chandan Kundu, Sankar Bhattacharya

Abstract:

A significant disadvantage of fossil fuel energy production is the considerable amount of carbon dioxide (CO₂) released, which is one of the contributors to climate change. Apart from environmental concerns, fluctuating fossil fuel prices have also pushed society gradually towards renewable energy sources in recent years. Biomass is a plentiful and renewable resource and a source of carbon, and recent years have seen increased research interest in generating fuels and chemicals from it. Unlike fossil-based resources, lignocellulosic biomass does not contribute to the increase in atmospheric CO₂ over the longer term. These considerations underpin the current move of the chemical industry from non-renewable feedstocks to renewable biomass. This presentation focuses on the generation of bio-oil and two major platform chemicals with the potential to benefit the environment. Thermochemical processes such as pyrolysis are considered viable methods for producing bio-oil and biomass-based platform chemicals, and fluidized bed reactors are known to boost bio-oil yields during pyrolysis because of their superior mixing and heat transfer characteristics as well as their scalability. This review and the associated experimental work focus on the thermochemical conversion of biomass to bio-oil and two high-value platform chemicals, levoglucosenone (LGO) and 5-chloromethyl furfural (5-CMF), in a fluidized bed reactor. These two reactive molecules with distinct features are potentially useful monomers in the chemical and pharmaceutical industries, since they are well suited to the manufacture of biologically active products. The process involved several steps. First, the biomass was delignified using a peracetic acid pretreatment; because of its complicated structure, biomass must be pretreated to remove the lignin, which increases access to the carbohydrate components so that they can be converted to platform chemicals. The biomass was then characterized in the laboratory by thermogravimetric analysis, synchrotron-based THz spectroscopy, and in-situ DRIFTS. Based on these results, a continuously fed fluidized bed reactor system was constructed to generate platform chemicals from the pretreated biomass using hydrogen chloride acid-gas as a catalyst. The procedure also yields biochar, which has a number of potential applications, including soil remediation, wastewater treatment, electrode production, and use as an energy resource; consequently, this research also includes a preliminary experimental evaluation of the biochar's prospective applications, in which the biochar obtained was assessed for its CO₂ and steam reactivity. The outline of the presentation comprises the following: 1. biomass pretreatment for effective delignification; 2. a mechanistic study of the thermal and thermochemical conversion of biomass; 3. thermochemical conversion of untreated and pretreated biomass in the presence of an acid catalyst to produce LGO and CMF; 4. a thermo-catalytic process for the production of LGO and 5-CMF in a continuously fed fluidized bed reactor, with efficient separation of the chemicals; and 5. use of the biochar generated from platform chemicals production through gasification.
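As a small illustration of the kind of kinetic description that underlies the TGA-based mechanistic study mentioned above, the sketch below integrates an assumed single-step first-order devolatilization model over a linear heating ramp; the kinetic parameters are placeholders and this is not the kinetic model developed in the work.

```python
# Minimal sketch (assumed single-step kinetics, not the authors' model): first-order
# devolatilization, dalpha/dT = (A/beta) * exp(-E/(R*T)) * (1 - alpha), of the kind
# often fitted to TGA mass-loss curves for lignocellulosic biomass.
import numpy as np
from scipy.integrate import solve_ivp

A = 1.0e10        # pre-exponential factor, 1/s (placeholder)
E = 150e3         # activation energy, J/mol (placeholder)
R = 8.314         # gas constant, J/(mol K)
beta = 10.0 / 60  # heating rate, K/s (10 K/min)

def dalpha_dT(T, alpha):
    """Conversion rate with respect to temperature for a linear heating ramp."""
    return (A / beta) * np.exp(-E / (R * T)) * (1.0 - alpha)

# Integrate conversion from 400 K to 900 K
sol = solve_ivp(dalpha_dT, (400.0, 900.0), [0.0], dense_output=True, max_step=1.0)
for T in (500.0, 600.0, 700.0):
    print(f"T = {T:.0f} K, predicted conversion alpha = {sol.sol(T)[0]:.3f}")
```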

Keywords: biomass, pretreatment, pyrolysis, levoglucosenone

Procedia PDF Downloads 139
4499 Critical Appraisal, Smart City Initiative: China vs. India

Authors: Suneet Jagdev, Siddharth Singhal, Dhrubajyoti Bordoloi, Peesari Vamshidhar Reddy

Abstract:

There is no universally accepted definition of what constitutes a Smart City; it means different things to different people, and the definition varies from place to place depending on the level of development and the willingness of people to change and reform. A Smart City tries to improve the quality of resource management and service provision for the people living in the city. Smart City is an urban development vision that integrates multiple information and communication technology (ICT) solutions in a secure fashion to manage the assets of a city, but most of these projects are misinterpreted as being technology projects only. Due to urbanization, many informal as well as government-funded settlements have sprung up during the last few decades, increasing the consumption of the limited resources available. The people of each city have their own definition of a Smart City: in the imagination of any city dweller in India, a Smart City contains a wish list of infrastructure and services that describes his or her level of aspiration. The research involved a comparative study of the Smart City models in India and in China. Behavioral changes experienced by the people living in the pilot, or first-ever, smart cities were identified and compared. This paper discusses the target quality of life for people in India and in China and how well it can be realized with the facilities included in these Smart City projects. Logical and comparative analyses were carried out on data collected from government sources, government papers, and research papers by various experts on the topic. Existing cities with historically grown infrastructure and administration systems will require a more moderate, step-by-step approach to modernization. The models were compared using many different motivators, with data collected from past journals, interactions with the people involved, videos, and past submissions. In conclusion, we identify how these projects could be combined with the ongoing small-scale initiatives of local people and small groups of individuals, and what the outcome might be if these existing practices were implemented on a bigger scale.

Keywords: behavior change, mission monitoring, pilot smart cities, social capital

Procedia PDF Downloads 288
4498 Technical and Practical Aspects of Sizing an Autonomous PV System

Authors: Abdelhak Bouchakour, Mustafa Brahami, Layachi Zaghba

Abstract:

The use of photovoltaic energy offers an inexhaustible supply of energy that is also clean and non-polluting, which is a definite advantage. The geographical location of Algeria favors the development of this energy, given the intensity of the radiation received and the duration of sunshine. For this reason, the objective of our work is to develop a software tool for calculating and optimizing the sizing of photovoltaic installations. Our optimization approach is based on mathematical models that describe, among other things, the operation of each part of the installation, the energy production, the storage, and the consumption of energy.
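A minimal sketch of the kind of sizing calculation such a tool performs, using common textbook relations for a stand-alone system; this is an assumption-laden illustration, not the authors' software, and all input figures are placeholders.

```python
# Minimal sketch (assumed textbook sizing relations, not the authors' software): sizing the
# PV array and battery bank of a stand-alone system from a daily load.
# All input values below are illustrative placeholders.

def size_standalone_pv(daily_load_wh, peak_sun_hours, performance_ratio,
                       autonomy_days, system_voltage, depth_of_discharge,
                       battery_efficiency):
    """Return the required PV array peak power (Wp) and battery capacity (Ah)."""
    # PV array: it must supply the daily load given local irradiation and system losses.
    pv_peak_w = daily_load_wh / (peak_sun_hours * performance_ratio)
    # Battery bank: it must cover the load for the chosen number of days of autonomy.
    battery_ah = (daily_load_wh * autonomy_days) / (
        system_voltage * depth_of_discharge * battery_efficiency)
    return pv_peak_w, battery_ah

# Example: a 2.4 kWh/day household load at a sunny site (placeholder figures)
pv_wp, batt_ah = size_standalone_pv(
    daily_load_wh=2400, peak_sun_hours=5.5, performance_ratio=0.75,
    autonomy_days=2, system_voltage=24, depth_of_discharge=0.6,
    battery_efficiency=0.85)
print(f"PV array: ~{pv_wp:.0f} Wp, battery bank: ~{batt_ah:.0f} Ah at 24 V")
```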

Keywords: solar panel, solar radiation, inverter, optimization

Procedia PDF Downloads 606
4497 Multi-Scale Modelling of the Cerebral Lymphatic System and Its Failure

Authors: Alexandra K. Diem, Giles Richardson, Roxana O. Carare, Neil W. Bressloff

Abstract:

Alzheimer's disease (AD) is the most common form of dementia and, although it has been researched for over 100 years, there is still no cure or preventive medication. Its onset and progression are closely related to the accumulation of the neuronal metabolite Aβ. This raises the question of how metabolites and waste products are eliminated from the brain, as the brain does not have a traditional lymphatic system. In recent years, the rapid uptake of Aβ into cerebral artery walls and its clearance along those arteries towards the lymph nodes in the neck has been suggested and confirmed in mouse studies, which has led to the hypothesis that interstitial fluid (ISF) in the basement membranes of the walls of cerebral arteries provides the pathways for the lymphatic drainage of Aβ. This mechanism, however, requires a net flow of ISF inside the blood vessel wall in the direction opposite to the blood flow, and the driving forces for such a mechanism remain unknown. While possible driving mechanisms have been studied using mathematical models in the past, a mechanism for net reverse flow has not been discovered yet. Here, we aim to address the question of the driving force of this reverse lymphatic drainage of Aβ (also called perivascular drainage) by using multi-scale numerical and analytical modelling. The numerical simulation software COMSOL Multiphysics 4.4 is used to develop a fluid-structure interaction model of a cerebral artery, which models blood flow and displacements in the artery wall due to blood pressure changes. An analytical model of a layer of basement membrane inside the wall governs the flow of ISF and, therefore, solute drainage, based on the pressure changes and wall displacements obtained from the cerebral artery model. The findings suggest that the components of the basement membrane play an active role in facilitating a reverse flow, and that stiffening of the artery wall with age is a major risk factor for the impairment of brain lymphatics. Additionally, our model supports the hypothesis of a close association between cerebrovascular diseases and the failure of perivascular drainage.
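As a toy illustration of why wall pulsation matters for net drainage, the sketch below evaluates the cycle-averaged lubrication-theory flux in a thin fluid layer whose thickness and axial pressure gradient oscillate with a phase lag; it is a deliberately simplified stand-in, not the authors' analytical basement-membrane model, and every parameter value is a placeholder.

```python
# Toy illustration (not the authors' analytical model): cycle-averaged ISF flux in a thin
# basement-membrane-like layer, using the lubrication relation q = -h^3/(12*mu) * dp/dx.
# If the layer thickness h(t), set by the pulsating wall, and the axial pressure gradient
# G(t) oscillate with a phase lag, the time-averaged flux can be non-zero even though G
# averages to zero over a cycle. All parameter values are placeholders.
import numpy as np

mu = 1.0e-3          # ISF viscosity, Pa s (placeholder, water-like)
h0 = 100e-9          # mean layer thickness, m (placeholder)
eps = 0.1            # relative thickness oscillation from wall pulsation
G0 = 50.0            # amplitude of axial pressure gradient, Pa/m (placeholder)
omega = 2 * np.pi    # cardiac frequency, rad/s (1 Hz)

t = np.linspace(0.0, 1.0, 10_000)  # one cardiac cycle, uniformly sampled
for phi in (0.0, np.pi / 2, np.pi):
    h = h0 * (1.0 + eps * np.sin(omega * t))   # oscillating layer thickness
    G = G0 * np.sin(omega * t + phi)           # oscillating pressure gradient
    q = -(h ** 3) / (12.0 * mu) * G            # instantaneous flux per unit width
    print(f"phase lag {phi:4.2f} rad -> mean flux {q.mean():+.3e} m^2/s")
```

The sign of the mean flux changes with the phase lag, which is one simple way to see how wall motion coupled to pressure changes can, in principle, drive a directed flow.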

Keywords: Alzheimer's disease, artery wall mechanics, cerebral blood flow, cerebral lymphatics

Procedia PDF Downloads 523