Search results for: traffic modeling
3351 The Investigation and Analysis of Village Remains in Jinzhong Prefecture of Shanxi Province, China
Authors: Zhang Yu
Abstract:
Shanxi is a province with a long history in China, and the historical character of Jinzhong Prefecture in Shanxi Province is especially prominent. This research draws on extensive field investigation and on the analysis of a large body of documents to summarize the formation and characteristics of villages in Jinzhong Prefecture; the remains in many areas, however, have not yet been systematically surveyed and analyzed. The study finds that villages formed for natural, cultural, traffic, and economic reasons, chief among them water, mountains, and the flourishing merchant culture of the Ming and Qing Dynasties. By analyzing the evolution of each period, the characteristics and remains of the existing villages are explained in detail. These relics mainly include courtyards, fortresses, and exchange shops. This study can provide systematic guidance for the future protection of village remains.
Keywords: Jinzhong Prefecture, village, features, remains
Procedia PDF Downloads 147
3350 ADP Approach to Evaluate the Blood Supply Network of Ontario
Authors: Usama Abdulwahab, Mohammed Wahab
Abstract:
This paper presents the application of uncapacitated facility location problems (UFLP) and 1-median problems to support decision making in blood supply chain networks. A plethora of factors makes blood supply chains a complex yet vital problem for a regional blood bank: rapidly increasing demand, the criticality of the product, strict storage and handling requirements, and the vastness of the theater of operations. As in the UFLP, facilities can be opened at any of m predefined locations with given fixed costs, and clients must be allocated to the open facilities. In classical location models the allocation cost is the distance between a client and an open facility; in this model the costs comprise allocation, transportation, and inventory costs. To address this problem, the median algorithm is used to analyze inventory, evaluate supply chain status, monitor performance metrics at different levels of granularity, and detect potential problems and opportunities for improvement. Euclidean distance data for several Ontario cities (demand nodes) are used to test the developed algorithm. Sitation software, a Lagrangian relaxation algorithm, and branch-and-bound heuristics are used to solve the model. Computational experiments confirm the efficiency of the proposed approach: compared to existing modeling and solution methods, the median algorithm not only provides a more general modeling framework but also leads to efficient solution times in general.
Keywords: approximate dynamic programming, facility location, perishable product, inventory model, blood platelet, p-median problem
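As a rough illustration of the 1-median idea this abstract builds on, the sketch below picks the candidate site that minimizes the total demand-weighted Euclidean distance. The coordinates and demands are made up for illustration; they are not the Ontario data, and the full model's transportation and inventory costs are omitted.

```python
import math

# Illustrative demand nodes (coordinates, demand); NOT the actual Ontario data.
clients = [((0.0, 0.0), 12), ((4.0, 0.0), 8), ((0.0, 3.0), 10), ((5.0, 5.0), 6)]
candidates = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0), (5.0, 5.0), (2.0, 2.0)]

def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def one_median(candidates, clients):
    """Pick the candidate site minimizing total demand-weighted distance."""
    return min(candidates,
               key=lambda site: sum(d * euclidean(site, c) for c, d in clients))

site = one_median(candidates, clients)
cost = sum(d * euclidean(site, c) for c, d in clients)
print(site, round(cost, 2))
```

The p-median generalization (opening p sites and assigning each client to its nearest open site) is what the Lagrangian relaxation and branch-and-bound solvers mentioned above tackle at realistic scale.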
Procedia PDF Downloads 510
3349 A Language Training Model for Pilots in Training
Authors: Aysen Handan Girginer
Abstract:
This study analyzes possible causes of miscommunication between pilots and air traffic controllers by examining variables such as pronunciation, L1 interference, and the use of non-standard vocabulary. Its purpose is to enhance the knowledge of aviation LSP instructors and to apply this knowledge to the design of a new curriculum. A 16-item questionnaire, consisting of 7 open-ended and 9 Likert-scale questions, was administered to 60 Turkish pilots who work for commercial airlines in Turkey. The analysis of the data shows certain pitfalls that may cause communication problems for pilots and that can be avoided through proper English language training. The findings of this study are expected to contribute to the development of new materials and of a language training model tailored to the needs of students in the flight training department of the Faculty of Aeronautics and Astronautics. The results are beneficial not only to instructors but also to new pilots in training. Specific suggestions for aviation students' training will be made during the presentation.
Keywords: curriculum design, materials development, LSP, pilot training
Procedia PDF Downloads 354
3348 Analysis, Evaluation and Optimization of Food Management: Minimization of Food Losses and Food Wastage along the Food Value Chain
Authors: G. Hafner
Abstract:
A method developed at the University of Stuttgart, 'Analysis, Evaluation and Optimization of Food Management', will be presented. A major focus is the quantification of food losses and food waste, as well as their classification and evaluation with a view to system optimization through waste prevention. Quantification and accounting of food, food losses, and food waste along the food chain require a clear definition of core terms at the outset, including their methodological classification and demarcation within the sectors of the food value chain. The food chain is divided into agriculture, industry and crafts, trade, and consumption (at home and out of home). To align these core terms, the authors cooperated with relevant stakeholders in Germany toward holistic, agreed definitions for the whole food chain. This includes modeling of subsystems within the food value chain, definition of terms, differentiation between food losses and food wastage, and methodological approaches. 'Food losses' and 'food waste' are assigned to individual sectors of the food chain, together with a description of the respective methods. The method for analysis, evaluation, and optimization of food management systems consists of the following parts: Part I: Terms and Definitions; Part II: System Modeling; Part III: Procedure for Data Collection and Accounting; Part IV: Methodological Approaches for Classification and Evaluation of Results; Part V: Evaluation Parameters and Benchmarks; Part VI: Measures for Optimization; Part VII: Monitoring of Success. The method will be demonstrated with an investigation of food losses and food wastage in the Federal State of Bavaria, including an extrapolation of the results to quantify food wastage in Germany.
Keywords: food losses, food waste, resource management, waste management, system analysis, waste minimization, resource efficiency
Procedia PDF Downloads 410
3347 Bi-Directional Impulse Turbine for Thermo-Acoustic Generator
Authors: A. I. Dovgjallo, A. B. Tsapkova, A. A. Shimanov
Abstract:
The paper is devoted to one type of engine with external heating: the thermoacoustic engine. In a thermoacoustic engine, heat energy is converted to acoustic energy; the acoustic energy of the oscillating gas flow must then be converted to mechanical energy, and this in turn to electric energy. The most widely used way of transforming acoustic energy into electric energy is a linear generator or a conventional generator with a crank mechanism; in both cases a piston is used. The main disadvantages of pistons are friction losses, lubrication problems, and working-fluid pollution, which reduce engine power and ecological efficiency. The use of a bidirectional impulse turbine as the energy converter is suggested instead. The distinctive feature of this kind of turbine is that the shock wave of the oscillating gas flow passing through the turbine is reflected and passes through the turbine again in the opposite direction, while the direction of turbine rotation does not change. Different types of bidirectional impulse turbines for thermoacoustic engines are analyzed. The Wells turbine is the simplest and least efficient of them; a radial impulse turbine has a more complicated design and is more efficient. The most appropriate type was chosen: an axial impulse turbine, which has a simpler design than a radial turbine and similar efficiency. The peculiarities of the method for calculating an impulse turbine are discussed, including the changes in gas pressure and velocity as functions of time during the generation of shock waves of the oscillating gas flow in a thermoacoustic system. In a thermoacoustic system, the pressure changes continuously according to a certain law due to the generation of acoustic waves; the peak values of pressure are the amplitude, which determines the acoustic power.
The gas flowing in a thermoacoustic system periodically changes direction, so its mean velocity is zero, but its peak values can be used to drive a bidirectional turbine. In contrast to a feed turbine, the described turbine operates on unsteady oscillating flows with direction changes, which significantly influences the calculation algorithm. The calculated power output is 150 W at a rotational speed of 12,000 r/min and a pressure amplitude of 1.7 kPa. 3D modeling and a numerical study of the impulse turbine were then carried out; as a result, the main parameters of the working fluid in the turbine were obtained. On the basis of the theoretical and numerical data, a model of the impulse turbine was made on a 3D printer, and an experimental unit was designed to verify the numerical results, with an acoustic speaker used as the acoustic wave generator. Analysis of the acquired data shows that the use of the bidirectional impulse turbine is advisable: as a converter it is comparable in its characteristics with linear electric generators, but its lifetime will be longer and the engine itself smaller, owing to the rotary motion of the turbine.
Keywords: acoustic power, bi-directional pulse turbine, linear alternator, thermoacoustic generator
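For orientation, the acoustic power available in a plane wave can be estimated from the quoted 1.7 kPa pressure amplitude as P = p_a^2 * A / (2 * rho * c); the duct area and gas properties below are assumed illustrative values (air at ambient conditions), not figures from the paper.

```python
# Plane-wave acoustic power estimate from the 1.7 kPa amplitude quoted above.
# Duct area and gas properties are assumed illustrative values, NOT the
# paper's working-gas conditions.
p_a = 1.7e3       # pressure amplitude, Pa (from the abstract)
area = 0.01       # duct cross-section, m^2 (assumed)
rho = 1.2         # gas density, kg/m^3 (air, assumed)
c = 343.0         # speed of sound, m/s (air, assumed)

power = p_a ** 2 * area / (2 * rho * c)
print(round(power, 1), "W carried by the acoustic wave")
```

This only shows the order of magnitude of wave power a converter at that amplitude could draw on; the reported 150 W output corresponds to the paper's own geometry and gas.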
Procedia PDF Downloads 380
3346 Adding a Degree of Freedom to Opinion Dynamics Models
Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle
Abstract:
Within agent-based modeling, opinion dynamics is the field that focuses on modeling people's opinions. In this prolific field, most of the literature is dedicated to exploring two 'degrees of freedom' and how they impact a model's properties (e.g., the average final opinion, the number of final clusters, etc.). These degrees of freedom are (1) the interaction rule, which determines how agents update their own opinion, and (2) the network topology, which defines the possible interactions among agents. In this work, we show that a third degree of freedom exists. It can be used to change a model's output by up to 100% of its initial value, or to transform two models (both from the literature) into each other. Since opinion dynamics models are representations of the real world, it is fundamental to understand how people's opinions can be measured. Even for abstract models (i.e., those not intended for fitting real-world data), it is important to understand whether the numerical representation of opinions is unique and, if it is not, how the model dynamics change under different representations. The process of measuring opinions is non-trivial, as it requires transforming a real-world opinion (e.g., supporting most liberal ideals) into a number. This process is usually not discussed in the opinion dynamics literature, but it has been studied intensively in a subfield of psychology called psychometrics. In psychometrics, opinion scales can be converted into each other, much as meters can be converted to feet; indeed, psychometrics routinely uses both linear and non-linear transformations of opinion scales. Here, we analyze how such transformations affect opinion dynamics models, using mathematical modeling and then validating the analysis with agent-based simulations. Firstly, we study the case of perfect scales.
In this way, we show that scale transformations affect a model's dynamics up to the qualitative level. This means that two researchers using the same opinion dynamics model, and even the same dataset, could make totally different predictions simply because they followed different renormalization processes. A similar situation arises if two different scales are used to measure opinions in the same population. This effect can be as strong as an uncertainty of 100% on the simulation's output (i.e., all results are possible). Still, using perfect scales, we show that scale transformations can be used to transform one model exactly into another; we test this on two models from the standard literature. Finally, we test the effect of scale transformation in the case of finite precision, using a 7-point Likert scale. We show that even a relatively small scale transformation introduces changes both at the qualitative level (i.e., in the most widely shared opinion at the end of the simulation) and in the number of opinion clusters. Scale transformation thus appears to be a third degree of freedom of opinion dynamics models. This result deeply impacts both theoretical research on model properties and the application of models to real-world data.
Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics
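A minimal sketch of the effect described above, assuming a Deffuant-style bounded-confidence update and an arbitrary monotone rescaling f(x) = x^2 (neither is claimed to be the authors' exact model): the same pair of agents merges on one scale but fails to interact on the rescaled one, changing the number of final clusters.

```python
import math

def deffuant(opinions, pairs, eps=0.25, mu=0.5):
    """Bounded-confidence update: pairs closer than eps move toward each other."""
    x = list(opinions)
    for i, j in pairs:
        if abs(x[i] - x[j]) < eps:
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    return x

raw = [0.6, 0.8]
pairs = [(0, 1)]

# On the raw scale the pair is within the confidence bound and merges:
print([round(v, 3) for v in deffuant(raw, pairs)])   # [0.7, 0.7] -> one cluster

# After the monotone rescaling f(x) = x**2 the pair no longer interacts:
rescaled = [v ** 2 for v in raw]
back = [math.sqrt(v) for v in deffuant(rescaled, pairs)]
print([round(v, 3) for v in back])                   # [0.6, 0.8] -> two clusters
```

The transformation is a legitimate psychometric rescaling of the same underlying opinions, yet the qualitative outcome (one cluster vs. two) differs, which is the third degree of freedom in miniature.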
Procedia PDF Downloads 121
3345 Modeling of Drug Distribution in the Human Vitreous
Authors: Judith Stein, Elfriede Friedmann
Abstract:
The injection of a drug into the vitreous body for the treatment of retinal diseases such as wet age-related macular degeneration (AMD) is among the most common medical interventions worldwide. We develop mathematical models for drug transport in the vitreous body of a human eye to analyse the impact of different rheological models of the vitreous on drug distribution. In addition to the convection-diffusion equation characterizing the drug spreading, we use porous-media modeling for the healthy vitreous with its dense collagen network and include the steady permeating flow of the aqueous humor, described by Darcy's law and driven by a pressure drop. The vitreous body in a healthy human eye behaves like a viscoelastic gel, through the collagen fibers suspended in the network of hyaluronic acid, and acts as a drug depot for the treatment of retinal diseases. For a completely liquefied vitreous, we couple the drug diffusion with the classical Navier-Stokes flow equations. We prove the global existence and uniqueness of the weak solution of the developed initial-boundary value problem describing drug distribution in the healthy vitreous, considering the permeating aqueous humor flow, in a realistic three-dimensional setting. In particular, for the drug diffusion equation, results from the literature are extended from homogeneous Dirichlet boundary conditions to our mixed boundary conditions describing the eye, using Galerkin's method, the Cauchy-Schwarz inequality, and the trace theorem. Because the effective drug concentration range is small and higher concentrations may be toxic, the ability to model drug transport could improve therapy by accounting for individual patient differences and give a better understanding of the physiological and pathological processes in the vitreous.
Keywords: coupled PDE systems, drug diffusion, mixed boundary conditions, vitreous body
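The convection-diffusion transport described above can be sketched with a toy 1D explicit scheme; the parameters, grid, and boundary conditions below are illustrative assumptions, not the paper's three-dimensional vitreous model.

```python
# A 1D finite-difference toy for the convection-diffusion equation
# dc/dt = D * d2c/dx2 - v * dc/dx, with a first-order upwind advection term.
# All parameters are illustrative assumptions, not the paper's eye geometry.
D, v = 1e-3, 5e-3            # diffusivity and permeating-flow speed
dx, dt, steps = 0.1, 1.0, 300
n = 50
c = [0.0] * n
c[10] = 1.0                  # initial bolus ("injection")

for _ in range(steps):
    new = c[:]               # endpoints held at zero concentration
    for i in range(1, n - 1):
        diffusion = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2
        advection = -v * (c[i] - c[i - 1]) / dx   # upwind, valid for v > 0
        new[i] = c[i] + dt * (diffusion + advection)
    c = new

peak = max(range(n), key=lambda i: c[i])
print(peak)                  # the bolus has drifted downstream from cell 10
```

The drift of the peak is the permeating-flow (Darcy) contribution, while the spreading is diffusion; in a liquefied vitreous the velocity field would instead come from a Navier-Stokes solve.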
Procedia PDF Downloads 139
3344 Numerical Tools for Designing Multilayer Viscoelastic Damping Devices
Authors: Mohammed Saleh Rezk, Reza Kashani
Abstract:
Auxiliary damping has gained popularity in recent years, especially in structures such as mid- and high-rise buildings. Distributed damping systems (typically viscous or viscoelastic) and reactive damping systems (such as tuned mass dampers) are the two damping choices for such structures. Distributed VE dampers are normally configured as braces or damping panels, which are engaged through the relatively small movements between structural members when the structure sways under wind or earthquake loading. In addition to being used as stand-alone dampers in distributed damping applications, VE dampers can also be incorporated into the suspension element of tuned mass dampers (TMDs). In this study, analytical and numerical tools are developed for the modeling and design of multilayer viscoelastic damping devices to be used in damping the vibration of large structures. Considering the limitations of analytical models for the synthesis and analysis of realistic, large, multilayer VE dampers, the emphasis of the study has been on numerical modeling using the finite element method. To verify the finite element models, a two-layer VE damper using ½-inch synthetic viscoelastic urethane polymer was built and tested, and the measured parameters were compared with the numerically predicted ones; the predicted and experimentally evaluated damping and stiffness of the test damper were in very good agreement. The effectiveness of VE dampers in adding auxiliary damping to larger structures is demonstrated numerically by incorporating one such damper, as a chevron brace, into the model of a massive frame subjected to an abrupt lateral load. A comparison of the frame's responses to this load, without and with the VE damper, clearly shows the efficacy of the damper in lowering the extent of frame vibration.
Keywords: viscoelastic, damper, distributed damping, tuned mass damper
Procedia PDF Downloads 110
3343 Asset Pricing Puzzle and GDP-Growth: Pre and Post Covid-19 Pandemic Effect on Pakistan Stock Exchange
Authors: Mohammad Azam
Abstract:
This work empirically investigates Gross Domestic Product growth as a mediating variable between various factors and portfolio returns, using a broad sample of 522 financial and non-financial firms listed on the Pakistan Stock Exchange between January 1993 and June 2022. The study employs structural equation modeling and ordinary least squares (OLS) regression to obtain findings before and during the Covid-19 epidemiological situation, a comparison that has not received due attention from researchers. The analysis reveals that the market and investment factors are redundant, whereas size and value show significant results, and GDP growth exerts a significant mediating effect over the whole time frame. For the pre-Covid-19 period, the results reveal that market, value, and investment are redundant, but size, profitability, and GDP growth are significant. During Covid-19, the statistics indicate that market and investment are redundant, size and GDP growth are highly significant, and value and profitability are moderately significant. The OLS regression shows that market and investment are statistically insignificant, size is highly significant, and value and profitability are marginally significant. Using the GDP-growth-augmented model, a slight increase in R-squared is observed. The size, value, and profitability factors are recommended to investors in the Pakistan Stock Exchange. Conclusively, in the Pakistani market, GDP growth exhibits a feeble mediating effect between risk premia and portfolio returns.
Keywords: asset pricing puzzle, mediating role of GDP-growth, structural equation modeling, COVID-19 pandemic, Pakistan stock exchange
Procedia PDF Downloads 79
3342 Integrating Computational Modeling and Analysis with in Vivo Observations for Enhanced Hemodynamics Diagnostics and Prognosis
Authors: Shreyas S. Hegde, Anindya Deb, Suresh Nagesh
Abstract:
Computational biomechanics is developing rapidly as a non-invasive tool to assist the medical fraternity in both the diagnosis and the prognosis of problems of the human body, such as injuries, cardiovascular dysfunction, and atherosclerotic plaque. Any system that helps properly diagnose such problems, or assists prognosis, is a boon to doctors and to the medical community in general. A lot of recent work has focused in this direction, including, but not limited to, finite element analyses of dental implants, skull injuries, and orthopedic problems involving bones and joints. Such numerical solutions are helping medical practitioners come up with alternate solutions to these problems and, in most cases, have also reduced the trauma on patients. Work has also been done on the use of computational fluid mechanics to understand the flow of blood through the human body, the area of hemodynamics. Since cardiovascular diseases are among the main causes of loss of human life, understanding blood flow with and without constraints (such as blockages), and providing alternate methods of prognosis and further solutions for blood flow issues, would help save the valuable lives of such patients. This project is an attempt to use computational fluid dynamics (CFD) to solve specific problems in hemodynamics. Hemodynamics simulation is used to gain a better understanding of functional, diagnostic, and theoretical aspects of blood flow. Because many fundamental issues of blood flow, such as the phenomena associated with pressure and viscous force fields, are still not fully understood or entirely described by mathematical formulations, the characterization of blood flow remains a challenging task.
Computational modeling of the blood flow, and of the mechanical interactions that strongly affect flow patterns, based on medical data and imaging, represents the most accurate analysis of the complex behavior of blood flow. In this project, the mathematical modeling of blood flow in arteries in the presence of successive blockages has been analyzed using the CFD technique. Different cases of blockage, in terms of percentage occlusion, were modeled using the commercial software CATIA V5R20 and simulated using the commercial software ANSYS 15.0 to study the effect of varying wall shear stress (WSS) values as well as other parameters, such as the effect of an increase in Reynolds number. The concept of fluid-structure interaction (FSI) has been used to solve such problems. The model simulation results were validated using in vivo measurement data from the existing literature.
Keywords: computational fluid dynamics, hemodynamics, blood flow, results validation, arteries
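As a back-of-envelope companion to the WSS and Reynolds-number discussion, the sketch below uses idealized Poiseuille-flow formulas for a straight artery segment; the property values are typical literature numbers, not the paper's patient-specific FSI setup, and the 1/R^3 scaling only roughly indicates why a stenosis raises WSS.

```python
import math

# Idealized Poiseuille estimates for an artery segment. Values are typical
# literature numbers, NOT the paper's patient-specific FSI model.
rho = 1060.0        # blood density, kg/m^3
mu = 3.5e-3         # dynamic viscosity, Pa.s
Q = 5e-6            # volumetric flow rate, m^3/s (~300 mL/min)
R = 2e-3            # vessel radius, m

def wall_shear_stress(mu, Q, R):
    """Poiseuille wall shear stress: tau_w = 4*mu*Q / (pi*R^3)."""
    return 4 * mu * Q / (math.pi * R ** 3)

def reynolds(rho, mu, Q, R):
    """Re = rho*v*D/mu with mean velocity v = Q/(pi*R^2)."""
    v = Q / (math.pi * R ** 2)
    return rho * v * (2 * R) / mu

tau = wall_shear_stress(mu, Q, R)
re = reynolds(rho, mu, Q, R)             # well below ~2300: laminar regime

# A stenosis narrowing the radius by 40% raises WSS sharply (1/R^3 scaling):
tau_stenosed = wall_shear_stress(mu, Q, 0.6 * R)
print(round(tau, 2), round(re), round(tau_stenosed, 2))
```

In the actual study these quantities come out of the 3D CFD/FSI solution rather than closed-form formulas, but the scaling explains why successive blockages dominate the WSS field.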
Procedia PDF Downloads 411
3341 Simulation Research of City Bus Fuel Consumption during the CUEDC Australian Driving Cycle
Authors: P. Kacejko, M. Wendeker
Abstract:
The fuel consumption of a city bus depends on a number of factors that characterize the technical properties of the bus and the driver, as well as traffic conditions. This parameter, related to greenhouse gas emissions, is regulated by law in many countries; this applies to both fuel consumption and exhaust emissions. Simulation studies are a way to reduce the costs of optimization studies. The paper describes simulation research on the fuel consumption of city bus driving. The parameters of the developed model are based on experimental results obtained on a chassis dynamometer test stand and in road tests. The object of the study was a city bus equipped with a compression-ignition engine. The verified model was applied to simulate the behavior of the bus during the CUEDC Australian Driving Cycle. The results of the calculations showed a direct influence of driving dynamics on fuel consumption.
Keywords: Australian Driving Cycle, city bus, diesel engine, fuel consumption
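A minimal backward-facing sketch of how fuel use follows driving dynamics over a speed trace: the short trace and all vehicle parameters below are assumed illustrative values, not the CUEDC cycle or the paper's validated bus model.

```python
# Toy backward-facing fuel model over a short 1 Hz speed trace. The trace and
# every parameter are assumed illustrative values, NOT the CUEDC cycle or the
# paper's validated bus model.
mass = 14000.0       # bus mass, kg
crr = 0.008          # rolling-resistance coefficient
cd_a = 6.0           # drag coefficient * frontal area, m^2
rho_air = 1.2        # air density, kg/m^3
g = 9.81
eff = 0.35           # assumed mean engine + driveline efficiency
lhv = 42.7e6         # diesel lower heating value, J/kg

speeds = [0, 2, 5, 8, 10, 12, 12, 12, 10, 6, 2, 0]   # m/s, 1 s apart

fuel_kg = 0.0
for v0, v1 in zip(speeds, speeds[1:]):
    v = 0.5 * (v0 + v1)                  # mean speed over the second
    a = v1 - v0                          # acceleration, dt = 1 s
    force = mass * a + crr * mass * g + 0.5 * rho_air * cd_a * v ** 2
    power = max(force * v, 0.0)          # no fuel credit while braking
    fuel_kg += power / (eff * lhv)       # energy over 1 s -> fuel mass
print(round(fuel_kg * 1000, 1), "g of diesel for the trace")
```

Even in this crude model the acceleration term dominates, which is the "direct influence of driving dynamics on fuel consumption" the simulation study quantifies properly.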
Procedia PDF Downloads 125
3340 Resource Allocation Scheme for IEEE 802.16 Networks
Authors: Elmabruk Laias
Abstract:
IEEE Standard 802.16 provides QoS (Quality of Service) for applications such as Voice over IP, video streaming, and high-bandwidth file transfer. With the broadband wireless access capability of an IEEE 802.16 system, a WiMAX TDD frame contains one downlink subframe and one uplink subframe. The capacity allocated to each subframe is a system parameter that should be determined based on the expected traffic conditions, so a proper resource allocation scheme for packet transmissions is imperatively needed. In this paper, we present a new resource allocation scheme, called additional bandwidth yielding (ABY), to improve the transmission efficiency of an IEEE 802.16-based network. The proposed scheme can be adopted along with existing scheduling algorithms and the multi-priority scheme without any change. Experimental results show that with ABY the packet queuing delay can be significantly reduced, especially for service flows of higher-priority classes.
Keywords: IEEE 802.16, WiMAX, OFDMA, resource allocation, uplink-downlink mapping
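The downlink/uplink capacity split the abstract refers to can be illustrated with a toy symbol-allocation calculation; the frame size and traffic ratio below are generic assumptions for illustration, not parameters from the paper or from the 802.16 standard.

```python
# Toy TDD frame split: allocate OFDMA symbols between the downlink and
# uplink subframes in proportion to expected traffic. Frame parameters are
# generic assumptions, NOT values from the paper or the standard.
symbols_per_frame = 48               # usable OFDMA symbols per frame (assumed)
dl_traffic, ul_traffic = 3.0, 1.0    # expected DL:UL traffic ratio (assumed)

dl_symbols = round(symbols_per_frame * dl_traffic / (dl_traffic + ul_traffic))
ul_symbols = symbols_per_frame - dl_symbols
print(dl_symbols, ul_symbols)        # 36 downlink, 12 uplink
```

A scheme like ABY then operates within such a split, letting under-used allocations yield bandwidth to queued higher-priority flows.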
Procedia PDF Downloads 478
3339 Solving Definition and Relation Problems in English Navigation Terminology
Authors: Ayşe Yurdakul, Eckehard Schnieder
Abstract:
Because of growing multidisciplinarity and multilinguality, communication problems in different technical fields grow more and more common. Each technical field therefore has its own specific language and terminology, characterised by differing definitions of terms. In addition to definition problems, there are also relation problems between terms, including synonymy, antonymy, hypernymy/hyponymy, ambiguity, risk of confusion, and translation problems. The terminology management system iglos of the Institute for Traffic Safety and Automation Engineering of the Technische Universität Braunschweig therefore aims to solve these problems through a methodological standardisation of term definitions, with the aid of the iglos sign model and iglos relation types. The focus of this paper is on solving definition and relation problems between terms in English navigation terminology.
Keywords: iglos, iglos sign model, methodological resolutions, navigation terminology, common language, technical language, positioning, definition problems, relation problems
Procedia PDF Downloads 336
3338 Integration of Building Information Modeling Framework for 4D Constructability Review and Clash Detection Management of a Sewage Treatment Plant
Authors: Malla Vijayeta, Y. Vijaya Kumar, N. Ramakrishna Raju, K. Satyanarayana
Abstract:
The global AEC (architecture, engineering, and construction) industry has been described as one of the domains most resistant to embracing technology. Although this digital era is inundated with software tools such as CAD, STAAD, CANDY, Microsoft Project, and Primavera, the key stakeholders have been working in silos and processes remain fragmented. Unlike the simpler project delivery methods of past years, current projects are fast-track, complex, risky, multidisciplinary, subject to stakeholder influence and statutory regulation, and so pose extensive bottlenecks that prevent the timely completion of projects. At this juncture, a paradigm shift has surfaced in the construction industry: Building Information Modeling (BIM) has been a panacea for bolstering the cooperative and collaborative work of multidisciplinary teams, leading to productive, sustainable, and leaner project outcomes. Building information modeling is an integrative, stakeholder-engaging, and centralized approach that provides a common platform of communication. A common misconception in the Indian construction industry is that BIM can be used only for building/high-rise projects, whereas this paper discusses the implementation of BIM processes and methodologies in the water and wastewater industry. It elucidates BIM 4D planning and constructability reviews of a Sewage Treatment Plant in India. Conventional construction planning and logistics management involve a blend of experience coupled with imagination; even though the judgments and lessons learnt of veterans might be predictive and helpful, the uncertainty factor persists. This paper delves into the case study of a real-time implementation of BIM 4D planning protocols for one of the Sewage Treatment Plants of the Dravyavati River Rejuvenation Project in India and develops a TimeLiner to identify logistics planning and clash detection.
With these BIM processes, we find a significant reduction in the duplication of tasks and in rework. A further benefit is better visualization and workarounds during the conception stage, enabling the early involvement of stakeholders in the project life cycle of Sewage Treatment Plant construction. Moreover, we have also taken an opinion poll on the benefits accrued from utilizing BIM processes versus traditional paper-based communication with 2D and 3D CAD tools. The paper concludes with a BIM framework for Sewage Treatment Plant construction that achieves optimal construction coordination advantages, such as 4D construction sequencing, interference checking, and clash detection and resolution, through the primary engagement of all key stakeholders, thereby identifying potential risks and subsequently creating risk response strategies. However, certain hiccups, such as hesitancy in the adoption of BIM technology by novice users and the limited availability of proficient BIM trainers in India, pose a phenomenal impediment. Hence, nurturing BIM processes from conception, through construction, commissioning, operation, and maintenance, to the deconstruction of a project's life cycle is essential for the Indian construction industry in this digital era.
Keywords: integrated BIM workflow, 4D planning with BIM, building information modeling, clash detection and visualization, constructability reviews, project life cycle
Procedia PDF Downloads 124
3337 Internet of Things: Route Search Optimization Applying Ant Colony Algorithm and Theory of Computer Science
Authors: Tushar Bhardwaj
Abstract:
The Internet of Things (IoT) possesses a dynamic network in which the network nodes (mobile devices) are added and removed constantly and randomly; hence the traffic distribution in the network is quite variable and irregular. A basic but very important task in any network is route searching. There are many conventional route-searching algorithms, such as link-state and distance-vector algorithms, but they are restricted to static point-to-point network topologies. In this paper we propose a model that uses the Ant Colony Algorithm for route searching. It is dynamic in nature and has a positive feedback mechanism well suited to route searching. We have also embedded the concept of Non-Deterministic Finite Automata (NDFA) minimization to reduce the network size and increase performance. Results show that the Ant Colony Algorithm gives the shortest path from the source to the destination node, and that NDFA minimization reduces the broadcast storm effectively.
Keywords: routing, ant colony algorithm, NDFA, IoT
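A minimal sketch of ant-colony route search on a toy weighted graph. The graph, parameters, and best-so-far bookkeeping are illustrative choices, not the paper's IoT model, and the NDFA-minimization step is omitted.

```python
import random

random.seed(42)

# Toy weighted graph (adjacency dict); illustrative, not an IoT topology.
graph = {
    0: {1: 1.0, 2: 4.0, 3: 5.0},
    1: {0: 1.0, 2: 1.0, 3: 1.0},
    2: {0: 4.0, 1: 1.0, 3: 1.0},
    3: {0: 5.0, 1: 1.0, 2: 1.0},
}

def ant_colony(graph, src, dst, n_ants=20, n_iter=30,
               alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    """Best-so-far ant colony search for a short src -> dst path."""
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}   # pheromone
    best_path, best_cost = None, float("inf")
    for _ in range(n_iter):
        walks = []
        for _ in range(n_ants):
            node, path = src, [src]
            while node != dst:
                choices = [v for v in graph[node] if v not in path]
                if not choices:
                    break                        # dead end: abandon this ant
                weights = [tau[(node, v)] ** alpha *
                           (1.0 / graph[node][v]) ** beta for v in choices]
                node = random.choices(choices, weights=weights)[0]
                path.append(node)
            if node == dst:
                cost = sum(graph[a][b] for a, b in zip(path, path[1:]))
                walks.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost
        # evaporation, then deposit proportional to path quality
        tau = {e: (1 - rho) * t for e, t in tau.items()}
        for path, cost in walks:
            for a, b in zip(path, path[1:]):
                tau[(a, b)] += q / cost
                tau[(b, a)] += q / cost
    return best_path, best_cost

path, cost = ant_colony(graph, 0, 3)
print(path, cost)
```

The positive feedback the abstract mentions is the pheromone deposit: short paths receive more pheromone, which biases later ants toward them while evaporation lets the colony adapt when the topology changes.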
Procedia PDF Downloads 446
3336 Assessing the Nutritional Characteristics and Habitat Modeling of the Comorian’s Yam (Dioscorea comorensis) in a Fragmented Landscape
Authors: Mounir Soule, Hindatou Saidou, Razafimahefa, Mohamed Thani Ibouroi
Abstract:
High levels of habitat fragmentation and loss are the main drivers of plant species extinction. They reduce habitat quality, a determining factor for the reproduction of plant species, and generate strong selective pressures for habitat selection, with impacts on the reproduction and survival of individuals. The Comorian's yam (Dioscorea comorensis) is one of the most threatened plant species of the Comoros archipelago: it faces one of the highest rates of habitat loss worldwide (9.3% per year) and is classified as Endangered on the IUCN Red List. Despite the nutritional potential of this tuber, its cultivation remains neglected by local populations, probably due to a lack of knowledge of its nutritional importance and of the factors driving its spatial distribution and development. In this study, we assessed the nutritional characteristics of Dioscorea comorensis and the drivers of its spatial distribution and abundance, in order to propose conservation measures and improve crop yields. To determine the nutritional characteristics, the Kjeldahl method, the Soxhlet method, and Atwater's specific calorific coefficients were applied for analyzing proteins, lipids, and caloric energy, respectively; in addition, atomic absorption spectrometry was used to measure mineral content. By combining species occurrences with ecological (habitat type), climatic (temperature, rainfall, etc.), and physicochemical (soil type and quality) variables, we assessed the habitat suitability and spatial distribution of the species, and the factors explaining its origin, persistence, distribution, and competitive capacity, using a species distribution modeling (SDM) method. The results showed that the species contains 83.37% carbohydrates, 6.37% protein, and 0.45% lipids.
Per 100 g, the quantities of calcium, sodium, zinc, iron, copper, potassium, phosphorus, magnesium, and manganese are 422.70, 599.41, 223.11, 252.32, 332.20, 780.41, 444.17, 287.71, and 220.73 mg, respectively. Its PRAL index is negative (-9.80 mEq/100 g), and its Ca/P (0.95) and Na/K (0.77) ratios are less than 1. The species provides an energy value of 357.46 kcal per 100 g, thanks to its carbohydrates and minerals, and is distinguished from others by its high protein content, offering benefits for cardiovascular health. According to our SDM, the species has a very limited distribution, restricted to forests with higher biomass, humidity, and clay content. Our findings highlight how distribution patterns are related to ecological and environmental factors, and they emphasize the nutritional quality of the Comorian's yam. These results represent baseline knowledge that will help scientists and decision-makers develop conservation strategies and improve crop yields.
Keywords: Dioscorea comorensis, nutritional characteristics, species distribution modeling, conservation strategies, crop yields improvement
Procedia PDF Downloads 40
3335 Mathematical Modeling to Reach Stability Condition within Rosetta River Mouth, Egypt
Authors: Ali Masria , Abdelazim Negm, Moheb Iskander, Oliver C. Saavedra
Abstract:
Estuaries play an important role in exchanging water and providing navigational pathways for ships. These zones are very sensitive and vulnerable to any intervention in coastal dynamics. Most of these inlets experience coastal problems such as severe erosion and accretion. The Rosetta promontory, Egypt, is an example of this environment. It suffers from many coastal problems, including erosion along the coastline and siltation inside the inlet, due to the lack of water and sediment resources that followed the construction of the Aswan High Dam. The shoaling of the inlet hinders the navigation of fishing boats, negatively impacts estuarine and salt marsh habitats, and reduces the capacity of the cross section to convey flow to the sea during emergencies. This paper aims to reach a new stability condition for the Rosetta promontory by using coastal measures to control the sediment that enters, and causes shoaling inside, the inlet. These coastal measures include modifying the inlet cross section with centered jetties and eliminating the coastal dynamics at the entrance with boundary jetties. This target is achieved using the hydrodynamic Coastal Modeling System (CMS). Extensive field data (hydrographic surveys, wave data, tide data, and bed morphology) were collected and used to build and calibrate the model. About 20 scenarios were tested to reach a suitable solution that mitigates the coastal problems at the inlet. The results show that a 360 m jetty on the eastern bank, combined with a sand bypass system from the lee side of the jetty, can stabilize the estuary.
Keywords: Rosetta promontory, erosion, sedimentation, inlet stability
Procedia PDF Downloads 591
3334 Modeling and Characterization of Organic LED
Authors: Bouanati Sidi Mohammed, N. E. Chabane Sari, Mostefa Kara Selma
Abstract:
It is well known that organic light emitting diodes (OLEDs) are attracting great interest in the display technology industry due to their many advantages, such as low manufacturing cost, large-area electroluminescent displays, and various emission colors including white light. Recently, there has been much progress in understanding the device physics of OLEDs and their basic operating principles. In OLEDs, light emission results from the recombination of electrons and holes in the light-emitting layer, injected from the cathode and anode, respectively. To improve luminescence efficiency, hole and electron pairs must be abundant and balanced and must recombine swiftly in the emitting layer. The aim of this paper is to model polymer LEDs and OLEDs made with small molecules in order to study their electrical and optical characteristics. The first simulated structure in this paper is a monolayer device, typically consisting of the poly(2-methoxy-5(2’-ethyl)hexoxy-phenylenevinylene) (MEH-PPV) polymer sandwiched between an anode, usually an indium tin oxide (ITO) substrate, and a cathode such as Al. In the second structure we replace MEH-PPV with tris(8-hydroxyquinolinato) aluminum (Alq3). We chose MEH-PPV because of its solubility in common organic solvents, in conjunction with a low operating voltage for light emission and relatively high conversion efficiency, and Alq3 because it is one of the most important host materials used in OLEDs. In this simulation, the Poole-Frenkel-like mobility model and the Langevin bimolecular recombination model have been used as the transport and recombination mechanisms. These models are enabled in the ATLAS-SILVACO software. The influence of doping and thickness on the I(V) characteristics and luminescence is reported.
Keywords: organic light emitting diode, polymer light emitting diode, organic materials, hexoxy-phenylenevinylene
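For reference, the two carrier models named above are commonly written as follows; this is a sketch of the standard forms, with generic symbols and parameters rather than the values used in this simulation:

```latex
\mu(E) = \mu_0 \exp\left(\gamma \sqrt{E}\right), \qquad
R_{\mathrm{Langevin}} = \frac{q\,(\mu_n + \mu_p)}{\varepsilon}\left(np - n_i^2\right)
```

Here \mu_0 is the zero-field mobility, \gamma the Poole-Frenkel factor, \mu_n and \mu_p the electron and hole mobilities, \varepsilon the permittivity, and n, p, n_i the electron, hole, and intrinsic carrier concentrations.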
Procedia PDF Downloads 556
3333 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment
Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee
Abstract:
Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches focusing on Recognizing Textual Entailment (RTE). These include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches. RNNs are known to be well suited for sequence modeling, whilst CNNs are suited to the extraction of n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine the strengths of RNNs and CNNs as stated above to present a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representation of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations for each sentence, from which the first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short Term Memory (Bi-LSTM) to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used, in the same fashion as an attention mechanism, over the Bi-LSTM outputs to yield the final sentence representations for the classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.
Keywords: deep neural models, natural language inference, recognizing textual entailment (RTE), sentence-to-sentence relation
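The relation-vector idea can be sketched roughly as follows; the abstract does not spell out the exact operations, so the concatenation of difference and product features below is a common heuristic from SNLI models, not necessarily the authors' formulation:

```python
import numpy as np

def relation_vector(u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Combine two sentence encodings into one relation vector using
    element-wise difference and product (a common heuristic)."""
    return np.concatenate([u, v, np.abs(u - v), u * v])

def attention_weights(relation: np.ndarray, states: np.ndarray) -> np.ndarray:
    """Score each Bi-LSTM output state against the relation vector
    (dot product) and normalize the scores with a softmax."""
    scores = states @ relation
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

# Toy example: two 2-d sentence encodings, 3 encoder states
u, v = np.array([1.0, 0.5]), np.array([0.5, 1.0])
rel = relation_vector(u, v)        # length 8 = 4 blocks of size 2
states = np.random.rand(3, 8)      # 3 Bi-LSTM output states
w = attention_weights(rel, states)  # one weight per state
```

The weights `w` would then pool the Bi-LSTM states into the final representation used by the classifier.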
Procedia PDF Downloads 351
3332 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning
Authors: Shayla He
Abstract:
Background and Purpose: According to Chamie (2017), it is estimated that no less than 150 million people, or about 2 percent of the world’s population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population has increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend of the homeless population is crucial to helping states and cities make affordable housing plans and other community service plans ahead of time to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on homeless populations and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and Recurrent Neural Network (RNN), respectively, to predict the future trend of society's homeless population. Each model was trained and tuned on the dataset from New York City, with accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using data from Seattle that was not part of the model training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al. (2019), HP-RNN significantly improved the Coefficient of Determination (R2) from -11.73 to 0.88 and reduced MSE by 99%.
HP-RNN was then validated on the data from Seattle, WA, which showed a peak error of 14.5% between the actual and the predicted counts. Finally, the model was used to predict the trend during the COVID-19 pandemic. It shows a good correlation between the actual and the predicted homeless population, with a peak error of less than 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model time series of homeless-related data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services to better prepare for the future. Moreover, this prediction can serve as a reference to policy makers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
Keywords: homeless, prediction, model, RNN
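The two metrics used to compare the models, MSE and the R² that can go negative for a poor baseline, can be computed as below; the arrays are illustrative, not the study's data:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error between actual and predicted counts."""
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def r2(y_true, y_pred):
    """Coefficient of determination; negative whenever the model
    predicts worse than the constant mean of y_true."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Illustrative monthly sheltered counts vs. two hypothetical models
y_true = [60000, 61000, 62000, 62679]
good = [60100, 60900, 62100, 62500]   # close to the actuals
bad = [50000, 70000, 45000, 80000]    # worse than the mean
```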
Procedia PDF Downloads 123
3331 Enhance Construction Visual As-Built Schedule Management Using BIM Technology
Authors: Shu-Hui Jan, Hui-Ping Tserng, Shih-Ping Ho
Abstract:
Construction project control attempts to obtain real-time as-built schedule information and to eliminate project delays by effectively enhancing dynamic schedule control and management. Suitable platforms for enhancing an as-built schedule visually during the construction phase are necessary and important for general contractors. As the application of building information modeling (BIM) becomes more common, schedule management integrated with the BIM approach becomes essential to enhance visual construction management implementation for the general contractor during the construction phase. To enhance visualization of the updated as-built schedule for the general contractor, this study presents a novel system called the Construction BIM-assisted Schedule Management (ConBIM-SM) system for general contractors in
Keywords: building information modeling (BIM), construction schedule management, as-built schedule management, BIM schedule updating mechanism
Procedia PDF Downloads 378
3330 Logistics Hub Location and Scheduling Model for Urban Last-Mile Deliveries
Authors: Anastasios Charisis, Evangelos Kaisar, Steven Spana, Lili Du
Abstract:
Logistics play a vital role in the prosperity of today’s cities, but current urban logistics practices are proving problematic, causing negative effects such as traffic congestion and environmental impacts. This paper proposes an alternative urban logistics system that leases hubs inside cities for designated time intervals and uses handcarts for last-mile deliveries. A mathematical model is developed for selecting the locations of hubs and allocating customers, while also scheduling the optimal times of day for leasing the hubs. The proposed model is compared to current delivery methods requiring door-to-door truck deliveries. It is shown that truck-traveled distances decrease by more than 60%. In addition, the analysis shows that under certain conditions the approach can be economically competitive and successfully applied to real problems.
Keywords: hub location, last-mile, logistics, optimization
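The hub-selection core of such a model can be sketched as a brute-force uncapacitated facility location search; the distances and fixed costs below are invented for illustration, and the paper's full model also schedules the leasing intervals:

```python
from itertools import combinations

def best_hubs(dist, fixed_cost, k):
    """Pick k hub sites minimizing fixed leasing costs plus each
    customer's distance to its nearest open hub (uncapacitated,
    each customer allocated to one hub)."""
    sites = range(len(dist[0]))
    best = (float("inf"), None)
    for combo in combinations(sites, k):
        cost = sum(fixed_cost[s] for s in combo)
        cost += sum(min(row[s] for s in combo) for row in dist)
        best = min(best, (cost, combo))
    return best

# dist[c][s]: distance from customer c to candidate hub site s
dist = [[2, 9, 6], [8, 3, 5], [7, 4, 1], [3, 8, 6]]
fixed_cost = [4, 5, 3]
cost, hubs = best_hubs(dist, fixed_cost, 2)
```

Enumerating all site subsets is only viable for small instances; the paper's mixed-integer formulation scales further.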
Procedia PDF Downloads 199
3329 Optimization and Operation of Charging and Discharging Stations for Hybrid Cars and their Effects on the Electricity Distribution Network
Authors: Ali Heydarimoghim
Abstract:
In this paper, the optimal placement of charging and discharging stations is performed to determine the location and capacity of the stations, reducing the cost of electric vehicle owners' losses, the cost of distribution system losses, and the costs associated with the stations. The permissible limits on bus voltage, on station capacity, and on the distance between stations are considered as constraints of the problem. Given the traffic situation in different areas of a city, we estimate the amount of energy required to charge, and the amount of energy provided by discharging, electric vehicles in each area. We then introduce the electricity distribution system of the city in question, followed by the problem scenarios and the objective and constraint functions. Finally, the simulation results for the different scenarios are compared.
Keywords: charging & discharging stations, hybrid vehicles, optimization, replacement
Procedia PDF Downloads 141
3328 Design, Synthesis and Pharmacological Investigation of Novel 2-Phenazinamine Derivatives as a Mutant BCR-ABL (T315I) Inhibitor
Authors: Gajanan M. Sonwane
Abstract:
Nowadays, the entire pharmaceutical industry is facing the challenge of increasing efficiency and innovation. The major hurdles are the growing cost of research and development and a concurrently stagnating number of new chemical entities (NCEs). Hence, the challenge is to select the most druggable targets and to search for the corresponding drug-like compounds that also possess the specific pharmacokinetic and toxicological properties that allow them to be developed as drugs. The present research work includes studies toward developing new anticancer heterocycles using molecular modeling techniques. Heterocycles synthesized through such a methodology are highly effective, as various physicochemical parameters have already been studied and the structure has been optimized for its best fit in the receptor. Hence, on the basis of the literature survey and considering the need to develop newer anticancer agents, new phenazinamine derivatives were designed by subjecting the nucleus to molecular modeling, viz., GQSAR analysis and docking studies. Simultaneously, these designed derivatives were subjected to in silico prediction of biological activity through PASS studies and then to in silico toxicity risk assessment studies. In the PASS studies, it was found that all the derivatives exhibited a good spectrum of biological activities, confirming their anticancer potential. The toxicity risk assessment studies revealed that all the derivatives obey Lipinski’s rule. Amongst this series, compounds 4c, 5b and 6c were found to possess logP and drug-likeness values comparable with the standard Imatinib (used for the anticancer activity studies) and also with the standard drug methotrexate (used for the antimitotic activity studies). One of the most notable mutations is the threonine to isoleucine mutation at codon 315 (T315I), which is known to be resistant to all currently available TKIs.
An enzyme assay is planned to confirm target-selective activity.
Keywords: drug design, tyrosine kinases, anticancer, phenazinamine
Procedia PDF Downloads 120
3327 Recurrent Neural Networks for Complex Survival Models
Authors: Pius Marthin, Nihal Ata Tutkun
Abstract:
Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on deep learning approaches to survival modeling; however, their application to complex survival problems still needs improvement. In addition, existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external auto-encoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.
Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayers perceptrons (MLPs)
Procedia PDF Downloads 94
3326 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain
Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende
Abstract:
SPICE-based simulators are quite robust and widely used for the simulation of electronic circuits: their algorithms support linear and non-linear lumped components, and they can handle a large number of encapsulated elements. Despite the great potential of SPICE-based simulators for analyzing quasi-static electromagnetic field interactions, that is, at low frequency, they are limited when applied to microwave hybrid circuits in which there are both lumped and distributed elements. Usually, the spatial discretization of the FDTD (Finite-Difference Time-Domain) method is done according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum temporal discretization accepted for that spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability conditions for the leapfrogging of the Yee algorithm; however, it is known that the stability of the complete FDTD procedure depends on factors other than just the stability of the Yee algorithm, because an FDTD program needs other algorithms in order to be useful in engineering problems. Examples of these algorithms are absorbing boundary conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors, and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulation at ultra-wide frequencies.
The models of the resistive source, resistor, capacitor, inductor, and diode are evaluated, among the mathematical models for lumped components in the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) method, through a parametric analysis of the size of the Yee cells that discretize the lumped components. In this way, we seek an ideal cell size such that the FDTD analysis agrees more closely with the expected circuit behavior while maintaining the stability conditions of the method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the models are implemented computationally in the Matlab® environment. The Mur boundary condition is used as the absorbing boundary of the FDTD method. The model is validated by comparing the electric field values and component currents obtained with the FDTD method against analytical results using circuit parameters.
Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis
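The Courant limit referred to above can be evaluated directly; a minimal sketch for a 3-D Yee grid (the cell sizes are illustrative):

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def courant_dt(dx, dy, dz, v=C0):
    """Maximum stable time step for the 3-D Yee leapfrog scheme:
    dt <= 1 / (v * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2))."""
    return 1.0 / (v * math.sqrt(1/dx**2 + 1/dy**2 + 1/dz**2))

# A 1 mm uniform cell; refining the mesh shrinks the allowed dt,
# which is why shrinking cells around a lumped element is costly
dt_coarse = courant_dt(1e-3, 1e-3, 1e-3)
dt_fine = courant_dt(0.5e-3, 0.5e-3, 0.5e-3)
```

This is only the Yee-scheme bound; as the abstract notes, the complete LE-FDTD procedure can impose stricter conditions once lumped-element updates are added.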
Procedia PDF Downloads 155
3325 Modeling Geogenic Groundwater Contamination Risk with the Groundwater Assessment Platform (GAP)
Authors: Joel Podgorski, Manouchehr Amini, Annette Johnson, Michael Berg
Abstract:
One-third of the world’s population relies on groundwater for its drinking water. Natural geogenic arsenic and fluoride contaminate ~10% of wells. Prolonged exposure to high levels of arsenic can result in various internal cancers, while high levels of fluoride are responsible for the development of dental and crippling skeletal fluorosis. In poor urban and rural settings, the provision of drinking water free of geogenic contamination can be a major challenge. In order to efficiently apply limited resources in the testing of wells, water resource managers need to know where geogenically contaminated groundwater is likely to occur. The Groundwater Assessment Platform (GAP) fulfills this need by providing state-of-the-art global arsenic and fluoride contamination hazard maps as well as enabling users to create their own groundwater quality models. The global risk models were produced by logistic regression of arsenic and fluoride measurements using predictor variables of various soil, geological and climate parameters. The maps display the probability of encountering concentrations of arsenic or fluoride exceeding the World Health Organization’s (WHO) stipulated concentration limits of 10 µg/L or 1.5 mg/L, respectively. In addition to a reconsideration of the relevant geochemical settings, these second-generation maps represent a great improvement over the previous risk maps due to a significant increase in data quantity and resolution. For example, there is a 10-fold increase in the number of measured data points, and the resolution of predictor variables is generally 60 times greater. These same predictor variable datasets are available on the GAP platform for visualization as well as for use with a modeling tool. The latter requires that users upload their own concentration measurements and select the predictor variables that they wish to incorporate in their models. 
In addition, users can upload additional predictor variable datasets either as features or coverages. Such models can represent an improvement over the global models already supplied, since (a) users may be able to use their own, more detailed datasets of measured concentrations and (b) the various processes leading to arsenic and fluoride groundwater contamination can be isolated more effectively on a smaller scale, thereby resulting in a more accurate model. All maps, including user-created risk models, can be downloaded as PDFs. There is also the option to share data in a secure environment as well as the possibility to collaborate in a secure environment through the creation of communities. In summary, GAP provides users with the means to reliably and efficiently produce models specific to their region of interest by making available the latest datasets of predictor variables along with the necessary modeling infrastructure.
Keywords: arsenic, fluoride, groundwater contamination, logistic regression
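The hazard values on such maps are logistic-regression probabilities; the mapping from predictor values to an exceedance probability has the following form, where the coefficients and predictor names are invented for illustration, not GAP's fitted values:

```python
import math

def exceedance_probability(features, coef, intercept):
    """Logistic-regression probability that a well exceeds the WHO
    limit (10 ug/L arsenic or 1.5 mg/L fluoride), given predictors."""
    z = intercept + sum(c * x for c, x in zip(coef, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical standardized predictors: [aridity, soil_pH, slope]
coef = [0.9, 1.2, -0.5]
p_dry_alkaline = exceedance_probability([1.5, 1.0, -0.2], coef, -1.0)
p_humid_acidic = exceedance_probability([-1.0, -1.5, 0.5], coef, -1.0)
```

A model built in GAP's modeling tool fits the coefficients to the user's uploaded concentration measurements; the formula itself stays the same.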
Procedia PDF Downloads 350
3324 The Impact of Sedimentary Heterogeneity on Oil Recovery in Basin-plain Turbidite: An Outcrop Analogue Simulation Case Study
Authors: Bayonle Abiola Omoniyi
Abstract:
In turbidite reservoirs with volumetrically significant thin-bedded turbidites (TBTs), thin-pay intervals may be underestimated during calculation of reserve volumes due to the poor vertical resolution of conventional well logs. This paper demonstrates the strong control of bed-scale sedimentary heterogeneity on oil recovery using six facies distribution scenarios generated from outcrop data from the Eocene Itzurun Formation, Basque Basin (northern Spain). The variable net sand volume in these scenarios serves as a primary source of sedimentary heterogeneity, impacting the sandstone-mudstone ratio, sand and shale geometry and dimensions, lateral and vertical variations in bed thickness, and attribute indices. The attributes provided input parameters for modeling the scenarios. The models are 20 m (65.6 ft) thick. Simulation of the scenarios reveals that oil production is markedly enhanced where the degree of sedimentary heterogeneity, and the resultant permeability contrast, is low, as exemplified by Scenarios 1, 2, and 3. In these scenarios, the bed architecture encourages better apparent vertical connectivity across intervals of laterally continuous beds. By contrast, the low net-to-gross Scenarios 4, 5, and 6 have rapidly declining oil production rates and higher water cut, with more oil effectively trapped in low-permeability layers. These scenarios may possess enough lateral connectivity to enable injected water to sweep oil to the production well; such a sweep is achieved at the cost of high water production. It is therefore imperative to consider not only the net-to-gross threshold but also the facies stack pattern and related attribute indices to better understand how to effectively manage water production for optimum oil recovery from basin-plain reservoirs.
Keywords: architecture, connectivity, modeling, turbidites
Procedia PDF Downloads 31
3323 Investigating Causes of Pavement Deterioration in Khartoum State, Sudan
Authors: Magdi Mohamed Eltayeb Zumrawi
Abstract:
It is quite essential to investigate the causes of pavement deterioration in order to select the proper maintenance technique. The objective of this study was to identify the factors that cause deterioration of recently constructed roads in Khartoum state. A comprehensive literature review concerning the factors of road deterioration and common road defects and their causes was carried out. Three major road projects with different deterioration causes were selected for this study. The investigation involved field surveys and laboratory testing on those projects to examine the existing pavement conditions. The results revealed that the roads investigated experienced severe failures in the form of cracks, potholes, and rutting in the wheel path. The causes of those failures were found to be linked mainly to poor drainage, traffic overloading, expansive subgrade soils, and the use of low-quality materials in construction. Based on the results, recommendations were provided to help highway engineers select the most effective repair techniques for specific kinds of distresses.
Keywords: pavement, deterioration, causes, failures
Procedia PDF Downloads 357
3322 An Embarrassingly Simple Semi-supervised Approach to Increase Recall in Online Shopping Domain to Match Structured Data with Unstructured Data
Authors: Sachin Nagargoje
Abstract:
Complete labeled data is often difficult to obtain in practical scenarios. Even if one manages to obtain the data, its quality is always in question. In the shopping vertical, offers are the input data, provided by advertisers with or without good-quality information. In this paper, the author investigated the possibility of using a very simple semi-supervised learning approach to increase the recall of unhealthy offers (offers with a badly written title or partial product details) in the shopping vertical domain. The author found that the semi-supervised learning method improved the recall in the Smart Phone category by 30% in A/B testing on 10% of traffic and increased the year-over-year (YoY) number of impressions per month by 33% in production. This also brought a significant increase in revenue, but that cannot be publicly disclosed.
Keywords: semi-supervised learning, clustering, recall, coverage
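A minimal self-training loop of the kind described might look as follows; the nearest-centroid base classifier and the fixed class set are stand-ins, since the paper does not name its base model:

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, rounds=3):
    """Fit class centroids on labeled data, then repeatedly
    pseudo-label the unlabeled points by nearest centroid and
    refit -- the simplest form of self-training."""
    X, y = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
        d = np.linalg.norm(X_unlab[:, None, :] - centroids[None], axis=2)
        pseudo = d.argmin(axis=1)          # nearest-centroid labels
        X = np.vstack([X_lab, X_unlab])    # grow the training set
        y = np.concatenate([y_lab, pseudo])
    return y[len(y_lab):]  # pseudo-labels for the unlabeled pool

# Two well-separated clusters; only one point labeled per class
X_lab = np.array([[0.0, 0.0], [10.0, 10.0]])
y_lab = np.array([0, 1])
X_unlab = np.array([[0.5, 0.2], [9.5, 9.8], [0.1, 0.4]])
labels = self_train(X_lab, y_lab, X_unlab)
```

The pseudo-labeled offers would then be added to the training set of the unhealthy-offer classifier, which is how such a loop can raise recall without new manual labels.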
Procedia PDF Downloads 124