Search results for: assembly sequence planning
1467 Experimental Investigation, Analysis and Optimization of Performance and Emission Characteristics of Composite Oil Methyl Esters at 160 bar, 180 bar and 200 bar Injection Pressures by Multifunctional Criteria Technique
Authors: Yogish Huchaiah, Chandrashekara Krishnappa
Abstract:
This study considers the optimization and validation of experimental results using the Multi-Functional Criteria Technique (MFCT). MFCT is concerned with structuring and solving decision and planning problems involving multiple variables. Biodiesel was produced from Composite Oil Methyl Esters (COME) of Jatropha and Pongamia oils mixed in various proportions via a two-step transesterification process, tested for various physico-chemical properties, and ascertained to be within the limits proposed by ASTM. These methyl esters were blended with petrodiesel in various proportions and coded. The blends were used as fuels in a computerized CI DI engine to investigate performance and emission characteristics. From the analysis of the results, it was found that the 180MEM4B20 blend had the maximum performance and minimum emissions. To validate the experimental results, MFCT was used. Characteristics such as Fuel Consumption (FC), Brake Power (BP), Brake Specific Fuel Consumption (BSFC), Brake Thermal Efficiency (BTE), Carbon Dioxide (CO2), Carbon Monoxide (CO), Hydrocarbons (HC) and Nitrogen Oxides (NOx) were considered as dependent variables. Application of this method showed that the optimized combination of Injection Pressure (IP), mix and blend is 178MEM4.2B24. The overall variation between the optimization and experimental results was found to be 7.45%.
Keywords: COME, IP, MFCT, optimization, PI, PN, PV
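As an illustration of the kind of multi-criteria ranking described above, the following is a minimal sketch that scores blends by a normalized weighted sum; the blend codes, measurements, and weights are invented for illustration and are not the study's data or the exact MFCT procedure.

```python
import numpy as np

# Hypothetical responses per blend: [BP (kW), BTE (%), BSFC (kg/kWh), NOx (ppm)]
# Values and weights are illustrative only, not the study's data.
blends = {
    "160MEM4B20": [3.2, 26.5, 0.34, 640.0],
    "180MEM4B20": [3.5, 28.2, 0.31, 605.0],
    "200MEM4B20": [3.4, 27.4, 0.33, 655.0],
}
benefit = np.array([True, True, False, False])  # maximize BP/BTE, minimize BSFC/NOx
weights = np.array([0.3, 0.3, 0.2, 0.2])

data = np.array(list(blends.values()))
lo, hi = data.min(axis=0), data.max(axis=0)
norm = (data - lo) / (hi - lo)                  # rescale each criterion to 0..1
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]     # flip the cost criteria

scores = norm @ weights
for name, s in sorted(zip(blends, scores), key=lambda t: -t[1]):
    print(f"{name}: composite score = {s:.3f}")
```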
Procedia PDF Downloads 210
1466 Mixed Mode Fracture Analyses Using Finite Element Method of Edge Cracked Heavy Annulus Pulley
Authors: Bijit Kalita, K. V. N. Surendra
Abstract:
The pulley works under both compressive loading, due to the contacting belt in tension, and a central torque that causes rotation. In a power transmission system, the belt pulley assembly presents a contact problem in the form of two mating cylindrical parts. In this work, we modeled a pulley as a heavy two-dimensional circular disk. Stress analysis due to contact loading in the pulley mechanism is performed. Finite element analysis (FEA) is conducted for a pulley to investigate the stresses experienced on its inner and outer periphery. Belt drives are among the most frequently used mechanisms to transmit power in heavy-duty applications such as automotive engines and industrial machines, and very heavy circular disks are usually used as pulleys. A pulley can be described as a drum and may have a groove between two flanges around the circumference; a rope, belt, cable or chain can be the driving element of a pulley system that runs over the pulley inside the groove. A pulley experiences normal and shear tractions on its contact region in the process of motion transmission; the region may be the belt-pulley contact surface or the pulley-shaft contact surface. In the late nineteenth century, Hertz solved the elastic contact problem for point contact and line contact of an ideal smooth object, and this theory is generally utilized for computing the actual contact zone. Detailed stress analysis in the contact region of such pulleys is quite necessary to prevent early failure. In this paper, the results of finite element analyses carried out on the compressed disk of a belt pulley arrangement using fracture mechanics concepts are shown. Based on the literature on contact stress problems in a wide field of applications, the stress distribution generated on the shaft-pulley and belt-pulley interfaces due to the application of high tension and torque was evaluated in this study using FEA concepts. The results obtained from ANSYS (APDL) were then compared with Hertzian contact theory. The study is mainly focused on the fatigue life estimation of a rotating part as a component of an engine assembly using the well-known Paris equation. Digital Image Correlation (DIC) analyses have been performed using open-source software. From the displacements computed using images acquired at the minimum and maximum force, the displacement field amplitude is computed; from these fields, the crack path is defined and the stress intensity factors and crack tip position are extracted. A non-linear least-squares projection is used for the estimation of fatigue crack growth. Further study will be extended to various applications of rotating machinery, such as rotating flywheel disks, jet engines, compressor disks, roller disk cutters, etc., where Stress Intensity Factor (SIF) calculation plays a significant role in the accuracy and reliability of a safe design. Additionally, this study will be extended to predict crack propagation in the pulley using the maximum tangential stress (MTS) criterion for mixed mode fracture.
Keywords: crack-tip deformations, contact stress, stress concentration, stress intensity factor
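The fatigue life estimation mentioned above rests on the Paris equation, da/dN = C(ΔK)^m. A minimal sketch of the corresponding life integration, under assumed constants rather than the pulley's actual material data or geometry factor, is:

```python
import numpy as np

# Illustrative Paris-law fatigue life estimate: da/dN = C * (dK)^m.
# Constants, stress range, and geometry factor are assumed for illustration.
C, m = 3.0e-12, 3.0        # Paris constants (dK in MPa*sqrt(m), a in m)
d_sigma = 80.0             # cyclic stress range at the contact region, MPa
Y = 1.12                   # geometry factor, taken as constant here
a0, ac = 0.5e-3, 10.0e-3   # initial and critical crack lengths, m

def delta_K(a):
    """Stress intensity factor range for a crack of length a."""
    return Y * d_sigma * np.sqrt(np.pi * a)

# Integrate dN = da / (C * dK^m) from a0 to ac with the trapezoidal rule
a = np.linspace(a0, ac, 20001)
f = 1.0 / (C * delta_K(a) ** m)
N = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(a))
print(f"Estimated fatigue life: {N:,.0f} cycles")
```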
Procedia PDF Downloads 122
1465 A Study of Taiwanese Students' Language Use in the Primary International Education via Video Conferencing Course
Authors: Chialing Chang
Abstract:
Language and culture are critical foundations of international mobility. However, students who are limited to their local environment may have restricted learning outcomes and global perspectives. Video conferencing has proven an economical medium for students to communicate with international students around the world. In Taiwan, the National Development Commission advocated the 2030 bilingual nation policy to enhance national competitiveness and foster English proficiency, and fully launched bilingual activation of the education system. Globalization is closely related to the development of Taiwan's education. Therefore, the teacher conducted an integrated lesson through interdisciplinary learning. This study aims to investigate how the teacher helps develop students' global and language core competencies in an international education class. The methodology comprises four stages: lesson planning, class observation, learning data collection, and speech analysis. Grice's conversational maxims are adopted to analyze the students' conversation in the video conferencing course. This is action research arising from the teacher's reflection on approaches to developing students' language learning skills. The study lays a foundation for the teacher's professional development in international education and for improving teaching quality and effectiveness, as a reference for future instruction.
Keywords: international education, language learning, Grice's conversational maxims, video conferencing course
Procedia PDF Downloads 119
1464 Executive Function in Youth With ADHD and ASD: A Systematic Review and Meta-analysis
Authors: Parker Townes, Prabdeep Panesar, Chunlin Liu, Soo Youn Lee, Dan Devoe, Paul D. Arnold, Jennifer Crosbie, Russell Schachar
Abstract:
Attention-deficit hyperactivity disorder (ADHD) and autism spectrum disorder (ASD) are impairing childhood neurodevelopmental disorders with problems in executive functions. Executive functions are higher-level mental processes essential for daily functioning and goal attainment. There is genetic and neural overlap between ADHD and ASD. The aim of this meta-analysis was to evaluate if pediatric ASD and ADHD have distinct executive function profiles. This review was completed following Cochrane guidelines. Fifty-eight articles were identified through database searching, followed by a blinded screening in duplicate. A meta-analysis was performed for all task performance metrics evaluated by at least two articles. Forty-five metrics from 24 individual tasks underwent analysis. No differences were found between youth with ASD and ADHD in any domain under direct comparison. However, individuals with ASD and ADHD exhibited deficient attention, flexibility, visuospatial abilities, working memory, processing speed, and response inhibition compared to controls. No deficits in planning were noted in either disorder. Only 11 studies included a group with comorbid ASD+ADHD, making it difficult to determine whether common executive function deficits are a function of comorbidity. Further research is needed to determine if comorbidity accounts for the apparent commonality in executive function between ASD and ADHD.
Keywords: autism spectrum disorder, ADHD, neurocognition, executive function, youth
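As a sketch of the pooling step behind a meta-analysis of this kind, the following applies DerSimonian-Laird random-effects weighting to invented effect sizes; the values are not the review's data.

```python
import numpy as np

# Invented per-study effect sizes (Hedges g) and variances, for illustration
g = np.array([0.45, 0.62, 0.30, 0.51])
v = np.array([0.020, 0.030, 0.015, 0.025])

w = 1.0 / v                                   # fixed-effect weights
g_fe = np.sum(w * g) / np.sum(w)
Q = np.sum(w * (g - g_fe) ** 2)               # heterogeneity statistic
df = len(g) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (v + tau2)                       # random-effects weights
g_re = np.sum(w_re * g) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled g = {g_re:.2f}, 95% CI = [{g_re - 1.96*se:.2f}, {g_re + 1.96*se:.2f}]")
```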
Procedia PDF Downloads 72
1463 Intercultural Urbanism: Interpreting Cultural Inclusion in Traditional Precincts of Contemporary Cities: A Case of Mattancherry
Authors: Amrutha Jayan
Abstract:
Cities are attractors of human population, offering opportunities for economic activities to different linguistic, cultural, and ethnic groups. The urban form and design of the city impact the lives of these people. Social and cultural exclusion results in spatial segregation and gentrification. The spaces provided in cities must be inclusive for all these communities so that they feel part of the city and contribute to society. Intercultural urbanism is a theory and practice of city building, planning, and design of urban spaces and architectures that is cognizant of the social impact of the built environment. The postulate acknowledges cultural differences and opportunities for cultural exchange. Literature on intercultural urbanism, culture and space, spatial justice, and cultural inclusion is analyzed to identify parameters contributing to intercultural placemaking. A qualitative study of Mattancherry shows how the precinct has sustained itself throughout the years, with different communities living together within a radius of 5 km, creating a diverse and vibrant environment. The research identifies the urban elements that contribute to intercultural interactions and maintain the synergy between these communities: the public spaces, porous edges, built form, streets, and accessibility contribute to chance encounters and intercultural interactivity. The research seeks to find the factors that contribute to intercultural placemaking.
Keywords: intercultural urbanism, cultural inclusion, spatial justice, public space
Procedia PDF Downloads 219
1462 A Multilevel Approach of Reproductive Preferences and Subsequent Behavior in India
Authors: Anjali Bansal
Abstract:
Reproductive preferences mainly deal with two questions: when a couple wants children and how many they want. Questions related to these desires are often included in fertility surveys, as they can provide relevant information on subsequent behavior. The aim of the study is to observe whether respondents' responses to these questions changed over time. We also tried to identify socio-economic and demographic factors associated with the stability (or instability) of fertility preferences. For this purpose, we used data from IHDS1 (2004-05) and the follow-up survey IHDS2 (2011-12) and applied bivariate, multivariate, and multilevel repeated-measures analyses to find the consistency between responses. From the analysis, we found that women's preferences change over time: the bivariate analysis showed that 52% of women are not consistent in their desired family size, and large inconsistencies were found in the desire to continue childbearing. To get a better overview of these inconsistencies, we computed the Intraclass Correlation (ICC), which captures the consistency of individuals' fertility responses across the two time periods. We also explored how the husband's desire for an additional child, specifically a male offspring, contributes to these variations. Our findings lead us to the conclusion that in India, individuals' fertility preferences changed over the seven-year period, as the intraclass correlation comes out to be very small, reflecting the variation among individuals. Concerted efforts should therefore be made to educate people and conduct motivational programs to promote family planning for family welfare.
Keywords: change, consistency, preferences, over time
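A minimal sketch of the intraclass correlation computed here, assuming a one-way random-effects ICC over two survey waves with invented responses:

```python
import numpy as np

# Invented desired family sizes for six respondents at two waves
wave1 = np.array([2.0, 3.0, 2.0, 4.0, 3.0, 2.0])
wave2 = np.array([2.0, 2.0, 3.0, 4.0, 2.0, 3.0])
data = np.stack([wave1, wave2], axis=1)       # respondents x waves

n, k = data.shape
grand = data.mean()
ms_between = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
ms_within = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))

# One-way random-effects ICC(1): consistency of responses across waves
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC(1) = {icc:.2f}  (small values -> preferences shift between waves)")
```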
Procedia PDF Downloads 165
1461 Development of Academic Software for Medial Axis Determination of Porous Media from High-Resolution X-Ray Microtomography Data
Authors: S. Jurado, E. Pazmino
Abstract:
Determination of the medial axis of a porous media sample is a non-trivial problem of interest for several disciplines, e.g., hydrology, fluid dynamics, contaminant transport, filtration, oil extraction, etc. However, the computational tools available to researchers are limited and restricted. The primary aim of this work was to develop a series of algorithms to extract porosity, medial axis structure, and pore-throat size distributions from porous media domains. A complementary objective was to provide the algorithms as free computational software available to the academic community of researchers and students interested in 3D data processing. The burn algorithm was tested on porous media data obtained from High-Resolution X-Ray Microtomography (HRXMT) and on idealized computer-generated domains. The real data and idealized domains were discretized into voxel domains of 550³ elements and binarized to denote solid and void regions and to determine porosity. The algorithm then identifies the layer of void voxels next to the solid boundaries, and an iterative process removes or 'burns' void voxels layer by layer until all the void space is characterized. Multiple strategies were tested to optimize execution time and use of computer memory, i.e., segmentation of the overall domain into subdomains, vectorization of operations, and extraction of single burn-layer data during the iterative process. The medial axis was determined by identifying regions where burnt layers collide. The final medial axis structure was refined to avoid concave-grain effects and utilized to determine the pore-throat size distribution. Graphical user interface software was developed to encompass all these algorithms, including the generation of idealized porous media domains. The software allows input of HRXMT data to calculate porosity, medial axis, and pore-throat size distribution and provides output in tabular and graphical formats. Preliminary tests of the software developed during this study achieved medial axis, pore-throat size distribution, and porosity determination of 100³, 320³, and 550³ voxel porous media domains in 2, 22, and 45 minutes, respectively, on a personal computer (Intel i7 processor, 16 GB RAM). These results indicate that the software is a practical and accessible tool for postprocessing HRXMT data in the academic community.
Keywords: medial axis, pore-throat distribution, porosity, porous media
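A minimal sketch of the burn procedure described above, using a toy volume and scipy's morphological erosion; the local-maximum medial-axis step is a crude stand-in for the paper's refined collision-based extraction.

```python
import numpy as np
from scipy import ndimage

# Toy binarized volume: True = void (pore space), False = solid. A real run
# would load the 550^3 HRXMT voxel domain instead.
rng = np.random.default_rng(0)
void = ndimage.binary_opening(rng.random((64, 64, 64)) > 0.45)
porosity = void.mean()

# Burn algorithm sketch: peel void voxels layer by layer from the solid
# boundary; the iteration at which a voxel is removed is its burn number.
burn = np.zeros(void.shape, dtype=int)
remaining = void.copy()
layer = 0
while remaining.any():
    layer += 1
    eroded = ndimage.binary_erosion(remaining)
    burn[remaining & ~eroded] = layer
    remaining = eroded

# Crude medial-axis proxy: void voxels whose burn number is a local maximum
mx = ndimage.maximum_filter(burn, size=3)
medial = void & (burn == mx)
print(f"porosity = {porosity:.3f}, burn layers = {layer}, "
      f"medial-axis candidates = {medial.sum()}")
```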
Procedia PDF Downloads 114
1460 Design Practices, Policies and Guidelines towards Implementing Architectural Passive Cooling Strategies in Public Library Buildings in Temperate Climates
Authors: Lesley Metibogun, Regan Potangaroa
Abstract:
Some existing sustainable public libraries in New Zealand now depend on air conditioning systems for cooling. This seems completely contradictory to sustainable building initiatives. A sustainable building should be 'self-sufficient' and must aim at optimising the use of natural ventilation, wind, and daylight, while avoiding excessive summer heat penetration into the building, to save energy and enhance occupants' comfort. This paper demonstrates that, with appropriate architectural passive design input, public libraries do not require air conditioning. Following a brief outline of how our dependence on air conditioning has spread over the full range of building types and climatic zones, this paper focuses on public libraries in temperate climates, where passive cooling should be feasible for long periods of mild outside temperature. It was found that current design policies, regulations, and guidelines and current building design practices militate against passive cooling strategies. Perceived association with prestige, inflexibility of the design process, rigid planning regulations, and sustainability rating systems were identified as key factors forcing the need for air conditioning. Recommendations are made on how to further encourage development in this direction from the perspective of architectural design. This paper highlights how architectural passive cooling design strategies should be implemented in government-initiated policies and regulations to develop more sustainable public libraries.
Keywords: public library, sustainable design, temperate climate, passive cooling, air conditioning
Procedia PDF Downloads 248
1459 Analysis of a Movie about Juvenile Delinquency
Authors: Guliz Kolburan
Abstract:
Juvenile delinquency studies have a special place and importance in criminality research. Young adolescents have not reached psychological, mental, and physical maturity, and they may not fully understand their roles and duties in society. If such an adolescent turns into a crime machine as a gang leader, he bears the least responsibility for this outcome: all institutions, like family, school, community, and the state as a whole, have duties and responsibilities in this regard. When planning studies on the prevention of juvenile delinquency, all institutions involved in the development of children should be placed at the center of the study, because effective goals for prevention studies can be determined only in this way. Most youths who commit homicide feel no attachment to anybody or society except themselves. Children who committed homicide generally developed defense mechanisms around their guilt, sadness, fear, and anger. For this reason, treatment of these children should be based on awareness of these feelings and coping with them. In the movie, events making the youth realize his own feelings and responsibilities were studied from a theoretical perspective. In this study, some of the dialogues and scenes in the movie were analyzed, and the factors causing the young gang leader to be drawn into crime were evaluated in terms of the science of psychology. The aim of this study is to analyze, from a theoretical perspective, the process of the youth being drawn into criminal behavior in terms of social and emotional developmental phases, via the movie produced in 2005 (94 min). The method of this study is discourse analysis.
Keywords: crime, child, evaluation (development), psychology
Procedia PDF Downloads 444
1458 The Roman Fora in North Africa: Towards a Supportive Protocol for the Decision on Morphological Restitution
Authors: Dhouha Laribi Galalou, Najla Allani Bouhoula, Atef Hammouda
Abstract:
This research delves into the fundamental question of the morphological restitution of built archaeology in order to place it in its paradigmatic context and to seek answers to it. Indeed, the understanding of the object of study, its analysis, and the methodology for solving the morphological problem posed are manageable only by means of a thoughtful strategy that draws on well-defined epistemological scaffolding. In this stream, the crisis of natural reasoning in archaeology has generated multiple changes in the field, ranging from the use of new tools to the integration of archaeological information systems, where urbanization involves the interplay of several disciplines. The built archaeological object is also an architectural and morphological object: a set of articulated elementary data whose understanding is approached from a logicist point of view. Morphological restitution is no exception to the rule, and the exchange between the different disciplines uses the capacity of each to frame reflection on the incomplete elements of a given architecture or on its different phases and multiple states of existence. The logicist sequence is furnished by the set of scattered or destroyed elements found, but also by what can be called a rule base, which contains the set of rules for the architectural construction of the object. The knowledge base, built from the archaeological literature, also provides a reference that enters into the search for forms and articulations. The choice of the Roman forum in North Africa is justified by the great urban and architectural characteristics of this entity. Research on the forum draws on a fairly large knowledge base and also provides the researcher with material to study, from a morphological and architectural point of view, from the scale of the city down to the architectural detail. The experimentation of the knowledge deduced at the paradigmatic level, as well as the deduction of an analysis model, is then carried out on the basis of a well-defined context that grounds the experimentation, from the elaboration of the morphological information container attached to the rule base and the knowledge base. The use of logicist analysis and artificial intelligence has allowed us first to question the aspects already known in order to measure the credibility of our system, which remains above all a decision-support tool for the morphological restitution of Roman fora in North Africa. This paper presents a first experimentation of the model elaborated during this research, a model framed by a paradigmatic discussion that positions the research in relation to existing paradigmatic and experimental knowledge on the issue.
Keywords: classical reasoning, logicist reasoning, archaeology, architecture, Roman forum, morphology, calculation
Procedia PDF Downloads 145
1457 Systematic Identification of Noncoding Cancer Driver Somatic Mutations
Authors: Zohar Manber, Ran Elkon
Abstract:
Accumulation of somatic mutations (SMs) in the genome is a major driving force of cancer development. Most SMs in the tumor's genome are functionally neutral; however, some damage critical processes and provide the tumor with a selective growth advantage (termed cancer driver mutations). Current research on the functional significance of SMs is mainly focused on finding alterations in protein-coding sequences. However, the exome comprises only 3% of the human genome, and thus SMs in the noncoding genome significantly outnumber those that map to protein-coding regions. Although our understanding of noncoding driver SMs is very rudimentary, it is likely that disruption of regulatory elements in the genome is an important, yet largely underexplored, mechanism by which somatic mutations contribute to cancer development. The expression of most human genes is controlled by multiple enhancers, and therefore it is conceivable that regulatory SMs are distributed across different enhancers of the same target gene. Yet, to date, most statistical searches for regulatory SMs have considered each regulatory element individually, which may reduce statistical power. The first challenge in considering the cumulative activity of all the enhancers of a gene as a single unit is to map enhancers to their target promoters. Such mapping defines for each gene its set of regulating enhancers (termed "set of regulatory elements" (SRE)). Considering the multiple enhancers of each gene as one unit holds great promise for enhancing the identification of driver regulatory SMs. However, the success of this approach greatly depends on the availability of comprehensive and accurate enhancer-promoter (E-P) maps. To date, the discovery of driver regulatory SMs has been hindered by insufficient sample sizes and by statistical analyses that often considered each regulatory element separately. In this study, we analyzed more than 2,500 whole-genome sequence (WGS) samples provided by The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) in order to identify such driver regulatory SMs. Our analyses took into account the combinatorial aspect of gene regulation by considering all the enhancers that control the same target gene as one unit, based on E-P maps from three genomics resources. The identification of candidate driver noncoding SMs is based on their recurrence: we searched for SREs of genes that are "hotspots" for SMs (that is, they accumulate SMs at a significantly elevated rate). To test the statistical significance of the recurrence of SMs within a gene's SRE, we used both global and local background mutation rates. Using this approach, we detected numerous "hotspots" for SMs in seven different cancer types. To support the functional significance of these recurrent noncoding SMs, we further examined their association with the expression level of their target gene (using gene expression data provided by the ICGC and TCGA for samples that were also analyzed by WGS).
Keywords: cancer genomics, enhancers, noncoding genome, regulatory elements
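A minimal sketch of the recurrence ("hotspot") test described above, assuming a simple Poisson background model; the counts, lengths, and rates below are invented for illustration, not cohort values.

```python
from scipy import stats

# Hotspot test sketch for one gene's set of regulatory elements (SRE).
observed = 14           # somatic mutations found across the gene's enhancers
sre_length = 12_000     # summed SRE length, bp (assumed)
n_genomes = 2_500       # whole genomes analyzed
bg_rate = 2.0e-7        # background mutation rate per bp per genome (assumed)

expected = sre_length * n_genomes * bg_rate   # expectation under background

# One-sided Poisson test: P(X >= observed) under the background model
p_value = stats.poisson.sf(observed - 1, expected)
print(f"expected = {expected:.1f}, observed = {observed}, p = {p_value:.2e}")
```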
Procedia PDF Downloads 102
1456 Embedded Test Framework: A Solution Accelerator for Embedded Hardware Testing
Authors: Arjun Kumar Rath, Titus Dhanasingh
Abstract:
Embedded product development requires software to test hardware functionality during development and to find issues during manufacturing in larger quantities. As components get integrated, devices are tested for their full functionality using advanced software tools, and benchmarking tools are used to measure and compare the performance of product features. At present, these tests are based on a variety of methods involving varying hardware and software platforms. Typically, they are custom built for every product and remain unusable for other variants; a majority of the tests go undocumented, are not updated, and become unusable once the product is released. To bridge this gap, a solution accelerator in the form of a framework can address these issues by running all these tests from one place, using an off-the-shelf test library in a continuous integration environment. There are many open-source test frameworks and tools (Fuego, LAVA, Autotest, KernelCI, etc.) designed for testing embedded system devices, each with several unique features, but no single tool or framework can satisfy all the testing needs of embedded systems; hence the need for an extensible framework integrating a multitude of tools. Embedded product testing includes board bring-up testing, testing during manufacturing, firmware testing, application testing, and assembly testing. Traditional test methods involve developing test libraries and support components for every new hardware platform, even within the same domain and with identical hardware architecture. This approach has drawbacks such as non-reusability (platform-specific libraries cannot be reused), the need to maintain source infrastructure for individual hardware platforms, and, most importantly, the time taken to re-develop test cases for new hardware platforms. These limitations create challenges in environment setup for testing, scalability, and maintenance. A desirable strategy is certainly one that is focused on maximizing reusability, continuous integration, and leveraging artifacts across the complete development cycle, during all phases of testing and across a family of products. To overcome the stated challenges of the conventional method and deliver the benefits of embedded testing, an embedded test framework (ETF), a solution accelerator, is designed that can be deployed in embedded-system-related products with minimal customization and maintenance to accelerate hardware testing. The embedded test framework supports testing of different hardware, including microprocessors and microcontrollers. It offers benefits such as (1) time-to-market: it accelerates board bring-up time with prepackaged test suites supporting all necessary peripherals, which can speed up the design and development stages (board bring-up, manufacturing, and device drivers); (2) reusability: framework components isolated from platform-specific hardware initialization and configuration make the adaptation of test cases across various platforms quick and simple; (3) an effective build and test infrastructure with multiple test interface options, pre-integrated with the Fuego framework; and (4) continuous integration: pre-integration with Jenkins enables continuous testing and an automated software update feature. Applying the embedded test framework accelerator throughout the design and development phase enables the development of well-tested systems before functional verification and improves time to market to a large extent.
Keywords: board diagnostics software, embedded system, hardware testing, test frameworks
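A minimal sketch of the reusability idea described above: test cases written against an abstract board interface so that platform-specific bring-up lives in one class. The class and method names are hypothetical, not the ETF's actual API.

```python
import abc

class Board(abc.ABC):
    """Hypothetical board interface: all platform-specific bring-up and
    transport (serial, JTAG, SSH) is isolated behind this one class."""

    @abc.abstractmethod
    def read_register(self, addr: int) -> int: ...

class SimulatedBoard(Board):
    """Stand-in target so the test case runs anywhere."""
    def read_register(self, addr: int) -> int:
        return 0xA5  # a real subclass would issue a bus/debug transaction

def test_register_default(board: Board) -> bool:
    # Reusable test case: it knows nothing about how the board is reached,
    # so it works unchanged across any platform implementing Board.
    return board.read_register(0x1000) == 0xA5

if __name__ == "__main__":
    result = test_register_default(SimulatedBoard())
    print("register default:", "PASS" if result else "FAIL")
```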
Procedia PDF Downloads 143
1455 Assessing the Impact of Human Behaviour on Water Resource Systems Performance: A Conceptual Framework
Authors: N. J. Shanono, J. G. Ndiritu
Abstract:
The poor performance of water resource systems (WRS) has been reportedly linked not only to climate variability and water demand dynamics but also to human behaviour-driven unlawful activities. Some of the unlawful activities that have been adversely affecting the water sector include unauthorized water abstraction, water wastage behaviour, refusal of water re-use measures, excessive operational losses, discharging untreated or improperly treated wastewater, over-application of chemicals by agricultural users, and fraudulent WRS operation. Despite advances in WRS planning, operation, and analysis, incorporating such undesirable human activities to quantitatively assess their impact on WRS performance remains elusive. This study was therefore inspired by the need to develop a methodological framework for WRS performance assessment that integrates the impact of human behaviour into WRS performance analysis. We propose a conceptual framework for assessing the impact of human behaviour on WRS performance using the concept of socio-hydrology. The framework identifies and couples four major sources of WRS-related values (water values, water systems, water managers, and water users) through three missing links between humans and water in the management of WRS (interactions, outcomes, and feedbacks). The framework is to serve as a basis for choosing relevant social and hydrological variables and for understanding the intrinsic relations between the selected variables when studying a specific human-water problem in the context of WRS management.
Keywords: conceptual framework, human behaviour, socio-hydrology, water resource systems
Procedia PDF Downloads 133
1454 Using Multiple Strategies to Improve the Nursing Staff Edwards Lifesciences Hemodynamic Monitoring Correctness of Operation
Authors: Hsin-Yi Lo, Huang-Ju Jiun, Yu-Chiao Chu
Abstract:
Hemodynamic monitoring is important in the intensive care unit. With advances in medical technology in recent years and increasing diversification of intensive care equipment, many kinds of instruments are available for monitoring hemodynamics; Edwards Lifesciences Hemodynamic Monitoring (FloTrac) is one of them. In recent medical safety incidents, parameters changed but nurses did not notify the doctor in time; it is therefore hoped to analyze the current problems and find effective improvement strategies. In August 2021, a survey found that FloTrac operation correctness was only 74.0%. Reasons included lack of education, an operation manual that is difficult to read, lack of an audit mechanism, nurses not knowing which numerical changes require notifying the doctor, omissions due to busy workloads, unfamiliarity with the operation, and omissions in the many nursing records. Improvement methods included planning professional nursing education, formulating the 'secret arts' of FloTrac operation, enacting an audit mechanism, establishing FloTrac action learning, making a 'follow the sun' care map, holding simulated training, and establishing automatic upload of monitoring data to the nursing records. After improvement, FloTrac operation correctness increased to 98.8%. The results are good and have been implemented in the ICUs of the hospital.
Keywords: hemodynamic monitoring, Edwards Lifesciences hemodynamic monitoring, multiple strategies, intensive care
Procedia PDF Downloads 80
1453 Folding of β-Structures via the Polarized Structure-Specific Backbone Charge (PSBC) Model
Authors: Yew Mun Yip, Dawei Zhang
Abstract:
Proteins are the biological machinery that executes specific vital functions in every cell of the human body by folding into their 3D structures. When a protein misfolds from its native structure, the machinery malfunctions, leading to misfolding diseases. Although in vitro experiments are able to conclude that mutations of the amino acid sequence lead to incorrectly folded protein structures, these experiments are unable to decipher the folding process. Therefore, molecular dynamics (MD) simulations are employed to simulate the folding process, so that an improved understanding of it will enable us to contemplate better treatments for misfolding diseases. MD simulations make use of force fields to simulate the folding process of peptides. Secondary structures are formed via the hydrogen bonds between the backbone atoms (C, O, N, H). It is important that the hydrogen bond energy computed during the MD simulation is accurate in order to direct the folding process toward the native structure. Since the atoms involved in a hydrogen bond possess very dissimilar electronegativities, the more electronegative atom attracts electron density from the less electronegative atom; this is known as the polarization effect. Since the polarization effect changes the electron density of the two atoms in close proximity, the atomic charges of the two atoms should also vary with the strength of the polarization effect. However, the fixed atomic charge scheme in force fields does not account for this effect. In this study, we introduce the polarized structure-specific backbone charge (PSBC) model. The PSBC model accounts for the polarization effect in MD simulation by updating the atomic charges of the backbone hydrogen-bond atoms according to equations, derived from quantum-mechanical calculations, that relate the amount of charge transferred to the atom to the length of the hydrogen bond. Compared to other polarizable models, the PSBC model does not require quantum-mechanical calculations of the simulated peptide at every time-step and maintains the dynamic update of atomic charges, thereby reducing computational cost and time while accounting for the polarization effect dynamically. The PSBC model is applied to two different β-peptides: the Beta3s/GS peptide, a de novo designed three-stranded β-sheet whose folded structure has been studied in vitro by NMR, and the trpzip peptides, double-stranded β-sheets in which a correlation is found between the type of amino acids that constitute the β-turn and the β-propensity.
Keywords: hydrogen bond, polarization effect, protein folding, PSBC
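A minimal sketch of the charge-update idea, assuming a simple linear relation between transferred charge and hydrogen-bond length; the functional form and all constants below are invented for illustration, whereas the PSBC model derives its equations from quantum-mechanical calculations.

```python
import numpy as np

def charge_transfer(r, r_off=2.5, r_on=1.5, dq_max=0.08):
    """Charge (in e) shifted toward the acceptor O for an O...H distance r (A).
    Linear ramp: zero beyond r_off, maximal at r_on (assumed form)."""
    t = np.clip((r_off - r) / (r_off - r_on), 0.0, 1.0)
    return dq_max * t

q_O, q_H = -0.51, 0.31     # fixed backbone charges (AMBER-like, assumed)
r = 1.9                    # current O...H hydrogen-bond length, Angstrom

dq = charge_transfer(r)
q_O_new, q_H_new = q_O - dq, q_H + dq   # total charge is conserved
print(f"O: {q_O:+.2f} -> {q_O_new:+.2f} e, H: {q_H:+.2f} -> {q_H_new:+.2f} e")
```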
Procedia PDF Downloads 269
1452 Environmental Life Cycle Assessment of Circular, Bio-Based and Industrialized Building Envelope Systems
Authors: N. Cihan Kayaçetin, Stijn Verdoodt, Alexis Versele
Abstract:
The construction industry accounts for one-third of all waste generated in European Union (EU) countries. The Circular Economy Action Plan of the EU aims to tackle this issue and aspires to enhance the sustainability of the construction industry by adopting more circular principles and bio-based material use. The Interreg Circular Bio-Based Construction Industry (CBCI) project was conceived to research how this adoption can be facilitated. For this purpose, an approach is developed that integrates technical, legal, and social aspects and provides business models for circular design and building with bio-based materials. Within the scope of the project, the research outputs are displayed in a real-life setting by constructing a demo terraced single-family house, the living lab (LL), located in Ghent (Belgium). The realization of the LL is conducted in a step-wise approach that includes iterative processes for the design, description, criteria definition, and multi-criteria assessment of building components. The essence of the research lies in the exploratory approach to state-of-the-art building envelope and technical system options for achieving an optimum combination for circular and bio-based construction. For this purpose, nine preliminary designs (PDs) for the building envelope were generated, comprising three basic construction methods: masonry, lightweight steel construction, and wood framing construction, supplemented with bio-based construction methods like cross-laminated timber (CLT) and massive wood framing. A comparative analysis of the PDs was conducted utilizing several complementary tools to assess circularity. This paper focuses on the life cycle assessment (LCA) approach for evaluating the environmental impact of the LL Ghent. The adoption of an LCA methodology was considered critical for providing a comprehensive set of environmental indicators. The PDs were developed at the component level, in particular for the (i) inclined roof, (ii-iii) front and side facades, (iv) internal walls, and (v-vi) floors. The assessment was conducted on two levels, component and building: the options for each component were compared in a first iteration, and then the PDs, as assemblies of components, were further analyzed. The LCA was based on a functional unit of one square meter of each component, and CEN indicators were utilized for impact assessment over a reference study period of 60 years. A total of 54 building components composed of 31 distinct materials were evaluated in the study. The results indicate that wood framing construction supplemented with bio-based construction methods performs environmentally better than the masonry or steel construction options. An analysis of the correlation between the total weight of components and environmental impact was also conducted: masonry structures display high environmental impact and weight, steel structures display low weight but relatively high environmental impact, and wood framing constructions display low weight and environmental impact. The study provided valuable outputs on two levels: (i) several improvement options at the component level, through substitution of materials with critical weight and/or impact per unit, and (ii) feedback on environmental performance for the decision-making process during the design phase of a circular single-family house.
Keywords: circular and bio-based materials, comparative analysis, life cycle assessment (LCA), living lab
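A minimal sketch of the component-level aggregation per functional unit described above; the layers, masses, and impact factors are invented for illustration, not the living lab's inventory.

```python
# Component-level LCA aggregation per functional unit (1 m2 of a wall
# build-up). All values below are assumed, not CBCI data.
wall_layers = [
    # (material, kg per m2, kg CO2-eq per kg)
    ("timber stud",       9.0, -1.20),   # biogenic storage shown as negative
    ("wood fibre board", 12.0,  0.30),
    ("gypsum board",     10.5,  0.25),
]

weight = sum(mass for _, mass, _ in wall_layers)
gwp = sum(mass * factor for _, mass, factor in wall_layers)
print(f"1 m2 wall: {weight:.1f} kg, {gwp:+.2f} kg CO2-eq (assumed A1-A3 scope)")
```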
Procedia PDF Downloads 182
1451 Physicians’ Knowledge and Perception of Gene Profiling in Malaysia: A Pilot Study
Authors: Farahnaz Amini, Woo Yun Kin, Lazwani Kolandaiveloo
Abstract:
The availability of different genetic tests after completion of the Human Genome Project increases physicians' responsibility to keep themselves up to date on the potential implementation of these genetic tests in their daily practice. However, due to a number of barriers, many physicians are either not aware of these tests or are not willing to offer or refer their patients for genetic tests. This study conducted an anonymous, cross-sectional, mail-based survey to develop primary data on Malaysian physicians' level of knowledge and perception of gene profiling. The questionnaire had 29 questions. Total scores on selected questions were used to assess the level of knowledge; the highest possible score was 11. Descriptive statistics, one-way ANOVA, and the chi-squared test were used for statistical analysis. Sixty-three completed questionnaires were returned, by 27 general practitioners (GPs) and 36 medical specialists. Respondents' ages ranged from 24 to 55 years (mean 30.2 ± 6.4). About 40% of the participants rated themselves as having a poor level of knowledge of genetics in general, whilst 60% believed they had a fair level of knowledge. However, almost half (46%) of the respondents felt that they were not knowledgeable about available genetic tests. A majority (94%) of the respondents were not aware of any lab or company offering gene profiling services in Malaysia. Only 4% of participants were aware of using gene profiling for determining the dosage of some drugs. Respondents perceived greater utility of gene profiling for breast cancer (38%) compared to familial colorectal cancer (3%). Knowledge scores ranged from 2 to 8 (mean 4.38 ± 1.67). No significant difference between the knowledge scores of GPs and specialists was observed (4.19 and 4.58, respectively), and there was no significant association between any demographic factor and level of knowledge; however, those who graduated between 2001 and 2005 had a higher level of knowledge. Overall, 83% of participants showed a relatively high level of perception of the value of gene profiling to detect a patient's risk of disease. However, low perception was observed both for using gene profiling in the general population to alter lifestyle (25%) and for having the full sequence of a patient's genome for the purpose of determining the best match for treatment (18%). The lack of clinical guidelines, limited provider knowledge and awareness, lack of time and resources to educate patients, lack of evidence-based clinical information, and the cost of tests were the barriers to ordering gene profiling most mentioned by physicians. In conclusion, the Malaysian physicians who participated in this study had a mediocre level of knowledge and awareness of gene profiling. Low exposure to genetic questions and problems might be a key predictor of this lack of awareness and knowledge of available genetic tests. Educational and training workshops might help Malaysian physicians incorporate gene profiling into practice for eligible patients.
Keywords: gene profiling, knowledge, Malaysia, physician
Procedia PDF Downloads 324
1450 Data Refinement Enhances The Accuracy of Short-Term Traffic Latency Prediction
Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong
Abstract:
Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost, which yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than the one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
Keywords: data refinement, machine learning, mutual information, short-term latency prediction
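A minimal sketch of the mutual-information ranking and 15-minute-lag baseline described above, on synthetic stand-in data; the feature names and the generating process are assumed, not the Taiwan Freeway dataset.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

# Synthetic stand-in for the freeway data: three candidate inputs for the
# latency to be predicted.
rng = np.random.default_rng(1)
n = 5000
past_latency = rng.gamma(9.0, 1.0, n)                  # median latency 15 min ago
accumulation = 50 + 10 * past_latency + rng.normal(0, 20, n)
entrance_rate = rng.normal(30, 5, n)                   # uninformative by design
target = past_latency + 0.02 * accumulation + rng.normal(0, 0.5, n)

X = np.column_stack([past_latency, accumulation, entrance_rate])
mi = mutual_info_regression(X, target, random_state=0)
for name, score in zip(["past_latency", "accumulation", "entrance_rate"], mi):
    print(f"{name:14s} MI = {score:.3f}")

# The paper's baseline: take the latency 15 minutes ago as the prediction
print(f"baseline MSE = {np.mean((target - past_latency) ** 2):.3f}")
```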
Procedia PDF Downloads 167
1449 Campus Signage and Wayfinding Design Guidelines: Challenges of Visual Literacy in University of Port Harcourt
Authors: Kasi Jockeil-Ojike
Abstract:
The purpose of signage and wayfinding design guidelines is to provide consistent, coherent, and comprehensive guidance for all types of signage that may be applied to guide persons from the freeway onto campus and to specific buildings. As the world becomes more complex and the population increases, people increasingly rely on signage and wayfinding systems to navigate built environments such as university campuses. This paper demonstrates and discusses signage and wayfinding, and the importance of visual literacy on university campuses. It discusses the processes of wayfinding and signage, how poor signage and wayfinding systems affect people when navigating, and why wayfinding is more than just signage. Hence, this paper examines design guidelines that primarily address the signage and wayfinding system to improve visual literacy across the University of Port Harcourt's multiple campuses. In doing this, the paper explores environmental graphic design's sensori-emotional values and communicative information theories that take the subjectivity of the observer into account. By making these connections, the paper also determines what the University of Port Harcourt needs to focus on to be counted in the global trends, using visual communication guidelines developed from previous studies and professional concepts. In conclusion, it is recommended that the physical structures (buildings and waypaths) on the University of Port Harcourt's multiple campuses be branded in a self-communicative manner, using signage and wayfinding design as an integral part of its physical planning policy.
Keywords: campus-signage, movement, visual-literacy, wayfinding-guidelines
Procedia PDF Downloads 449
1448 Time of Death Determination in Medicolegal Death Investigations
Authors: Michelle Rippy
Abstract:
Medicolegal death investigation has historically been a field that does not receive much research attention or advancement, as all of its subjects are deceased. Public health threats, drug epidemics, and contagious diseases are typically recognized in decedents first, and thorough, accurate death investigations can assist in epidemiological research and prevention programs. One vital component of medicolegal death investigation is determining the decedent's time of death. An accurate time of death can assist in corroborating alibis, determining the sequence of death in multiple-casualty circumstances, and providing vital facts in civil matters. Popular television portrays an unrealistic forensic ability to provide the exact time of death, to the minute, for someone found deceased with no witnesses present. In reality, the time of death of an unattended decedent can generally only be narrowed to a 4-6 hour window. In the mid- to late-20th century, liver temperatures were an invasive means by which death investigators determined the decedent's core temperature, which was then entered into an equation to approximate the time of death. Due to many inconsistencies with the placement of the thermometer and other variables, the accuracy of liver temperatures was dispelled and this once commonplace practice lost scientific support. Currently, medicolegal death investigators rely on three major post-mortem changes at a death scene. Many factors are considered in the subjective determination of the time of death, including the cooling of the decedent, stiffness of the muscles, internal settling of blood, clothing, ambient temperature, disease, and recent exercise. Current research is utilizing non-invasive, hospital-grade tympanic thermometers to measure the temperature in each of the decedent's ears; used at the scene in conjunction with scene indicators, this may provide a more accurate time of death. The research is significant to investigations and can bring accuracy to a historically inaccurate determination, considerably improving criminal and civil death investigations. The goal of the research is to provide a scientific basis for the time of death in unwitnessed deaths, instead of the art that the determination currently is. The research is in progress, with expected completion in December 2018. There are currently 15 completed case studies with vital information including ambient temperature, decedent height/weight/sex/age, layers of clothing, found position, whether medical intervention occurred, and whether the death was witnessed. These data will be analyzed with the multiple variables studied and will be available for presentation in January 2019.
Keywords: algor mortis, forensic pathology, investigations, medicolegal, time of death, tympanic
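As a generic illustration of the temperature-equation approach mentioned above (not the tympanic protocol under study), a textbook Newton's-law-of-cooling estimate looks like this, with all constants assumed:

```python
import numpy as np

# Newton's-law-of-cooling estimate of the post-mortem interval.
T_normal = 37.0    # deg C, assumed ante-mortem core temperature
T_ambient = 20.0   # deg C, scene temperature
T_measured = 29.0  # deg C, measured body temperature
k = 0.12           # 1/h, cooling constant (depends on clothing, build, airflow)

# T(t) = T_ambient + (T_normal - T_ambient) * exp(-k*t), solved for t:
t = -np.log((T_measured - T_ambient) / (T_normal - T_ambient)) / k
print(f"Estimated post-mortem interval: {t:.1f} h")
```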
Procedia PDF Downloads 118
1447 Cell-free Bioconversion of n-Octane to n-Octanol via a Heterogeneous and Bio-Catalytic Approach
Authors: Shanna Swart, Caryn Fenner, Athanasios Kotsiopoulos, Susan Harrison
Abstract:
Linear alkanes are produced as by-products of the increasing use of gas-to-liquid fuel technologies for synthetic fuel production and offer great potential for value addition. Their current use as low-value fuels and solvents does not maximize this potential. Therefore, attention has been drawn towards direct activation of these aliphatic alkanes to more useful products such as alcohols, aldehydes, carboxylic acids, and derivatives. Cytochrome P450 monooxygenases (P450s) can be used for activation of these aliphatic alkanes using whole-cell or cell-free systems. Limitations of whole-cell systems include reduced mass transfer, limited stability, and possible side reactions. Since P450 systems are little studied as cell-free systems, they form the focus of this study. Challenges of a cell-free system include co-factor regeneration, substrate availability, and enzyme stability. Enzyme immobilization offers a positive outlook on this dilemma, as it may enhance the stability of the enzyme. In the present study, two different P450s (CYP153A6 and CYP102A1) as well as the relevant accessory enzymes required for electron transfer (ferredoxin and ferredoxin reductase) and co-factor regeneration (glucose dehydrogenase) were expressed in E. coli and purified by metal affinity chromatography. Glucose dehydrogenase (GDH) was used as a model enzyme to assess the potential of various enzyme immobilization strategies, including surface attachment on MagReSyn® microspheres with various functionalities and on electrospun nanofibers; self-assembly-based methods forming cross-linked enzymes (CLEs), cross-linked enzyme aggregates (CLEAs), and spherezymes; and immobilization in a sol-gel. The nanofibers were synthesized by electrospinning, which required the building of an electrospinning machine. The nanofiber morphology has been analyzed by SEM, and binding will be further verified by FT-IR. Covalent attachment-based methods showed limitations: only ferredoxin reductase and GDH retained activity after immobilization, which was largely attributed to insufficient electron transfer and inactivation caused by the crosslinkers (60% and 90% relative activity loss for the free enzyme when using 0.5% glutaraldehyde and glutaraldehyde/ethylenediamine (1:1 v/v), respectively). So far, initial experiments with GDH have shown the most potential when immobilized via its His-tag onto the surface of MagReSyn® microspheres functionalized with Ni-NTA. It was found that crude GDH could be simultaneously purified and immobilized with sufficient activity retention. Immobilized pure and crude GDH could be recycled 9 and 10 times, respectively, with approximately 10% activity remaining. The immobilized GDH was also more stable than the free enzyme after storage for 14 days at 4˚C. This immobilization strategy will also be applied to the P450s and optimized with regard to enzyme loading and immobilization time, as well as characterized and compared with the free enzymes. It is anticipated that the proposed immobilization set-up will offer enhanced enzyme stability (as well as reusability and easy recovery) and minimal mass transfer limitation, with continuous co-factor regeneration and minimal enzyme leaching, all of which provide a positive outlook for this robust multi-enzyme system for efficient activation of linear alkanes, as well as the potential for immobilizing multiple enzymes, including multimeric enzymes, for different biocatalytic applications beyond alkane activation.
Keywords: alkane activation, cytochrome P450 monooxygenase, enzyme catalysis, enzyme immobilization
Procedia PDF Downloads 225
1446 Land Use/Land Cover Mapping Using Landsat 8 and Sentinel-2 in a Mediterranean Landscape
Authors: Moschos Vogiatzis, K. Perakis
Abstract:
Spatially explicit and up-to-date land use/land cover information is fundamental for spatial planning, land management, sustainable development, and sound decision-making. In the last decade, many satellite-derived land cover products at different spatial, spectral, and temporal resolutions have been developed, such as the European Copernicus Land Cover product. However, more efficient and detailed land use/land cover information is required at the regional and local scales. A typical Mediterranean basin with a complex landscape comprising various forest types, crops, artificial surfaces, and wetlands was selected to test and develop our approach. In this study, we investigate the improvement of the Copernicus Land Cover product (CLC2018) using Landsat 8 and Sentinel-2 pixel-based classification based on all available existing geospatial data (forest maps, LPIS, Natura 2000 habitats, cadastral parcels, etc.). We examined and compared the performance of the Random Forest classifier for land use/land cover mapping. In total, 10 land use/land cover categories were recognized in Landsat 8 and 11 in Sentinel-2A. A comparison of the overall classification accuracies for 2018 shows that Landsat 8 classification accuracy was slightly higher than that of Sentinel-2A (82.99% vs. 80.30%). We conclude that the main land use/land cover types of CLC2018, even within a heterogeneous area, can be successfully mapped and updated according to the CLC nomenclature. Future research should be oriented toward integrating spatiotemporal information from seasonal bands and spectral indices into the classification process.
Keywords: classification, land use/land cover, mapping, random forest
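A minimal sketch of the pixel-based Random Forest classification described above, on synthetic stand-in pixels; the band names, labels, and generating rule are assumed, not the study's training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labeled pixels: rows = pixels, columns = band
# reflectances (think Sentinel-2 B2/B3/B4/B8); labels = LULC classes.
rng = np.random.default_rng(42)
X = rng.random((3000, 4))
y = (X[:, 3] > X[:, 2] + 0.1).astype(int)   # crude vegetation-vs-other rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(f"overall accuracy = {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```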
Procedia PDF Downloads 123
1445 Finite Element Analysis of Mini-Plate Stabilization of Mandible Fracture
Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski
Abstract:
The aim of the presented investigation is to recognize possible mechanical issues of the mini-plate connection used to treat mandible fractures and to check the impact of different factors on the stresses and displacements within the bone-stabilizer system. The mini-plate osteosynthesis technique is a common type of internal fixation using metal plates connected to the fractured bone parts by a set of screws. Two types of plate application methodology used by maxillofacial surgeons, differing in the location and number of plates, were investigated in this work. The bone geometry was modeled on the basis of computed tomography scans of a hospitalized patient taken just after mini-plate application. The solid volume geometry, consisting of cortical and cancellous bone, was created from the obtained point cloud. The temporomandibular joint and muscle system were simulated to imitate the real behavior of the masticatory system. Finite element meshing and analysis were performed with ANSYS software. To simulate realistic connection behavior, nonlinear contact conditions were used between the connecting elements and bones. The influence of initial compression of the connected bone parts, or of a gap between them, was analyzed. Nonlinear material properties of the bone tissues and an elastic-plastic model of the titanium alloy were used. Three loading cases were investigated, each assuming a force of magnitude 100 N acting on the left molars, the right molars, or the incisors. The stress distribution within the connecting plate shows that compression of the bone parts in the connection results in high stress concentration in the plate and the screws; however, the maximum stress levels do not exceed the yield limit of the material (titanium). There are no significant differences between the negative offset (gap) and no-offset conditions. The location of the external force influences the magnitude of stresses around both the plate and the bone parts. The two-plate system generally gives lower von Mises stress under the same loading than the one-plate approach. The von Mises stress distribution within the cortical bone shows a reduction of the high-stress field for the cases without compression (neutral initial contact). With initial prestressing, there is a visible, significant stress increase around the fixing holes at the bottom mini-plate due to the assembly stress; this local stress concentration may be the reason for bone destruction in those regions. The performed calculations show that the bone-mini-plate system is able to properly stabilize the fractured mandible bone. There is a strong dependency between mini-plate location and the stress distribution within the stabilizer structure and the surrounding bone tissue. The results (stresses within the bone tissues and the devices, relative displacements of the bone parts at the interface) corresponding to different models of the connection provide a basis for mechanical optimization of mini-plate connections. The results of the numerical simulations were compared with clinical observation and provide information helpful for better understanding of load transfer in the mandible with the stabilizer and for improving stabilization techniques.
Keywords: finite element modeling, mandible fracture, mini-plate connection, osteosynthesis
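For reference, the von Mises equivalent stress reported in such analyses derives from the deviatoric part of the stress tensor; a minimal sketch with an assumed stress state (not values from this study):

```python
import numpy as np

def von_mises(stress):
    """Von Mises equivalent stress from a 3x3 Cauchy stress tensor (MPa)."""
    dev = stress - np.trace(stress) / 3.0 * np.eye(3)   # deviatoric part
    return np.sqrt(1.5 * np.sum(dev * dev))

# Illustrative stress state near a screw hole (values assumed, MPa)
sigma = np.array([[120.0, 35.0,  0.0],
                  [ 35.0, 60.0,  0.0],
                  [  0.0,  0.0, 10.0]])

yield_ti = 880.0   # MPa, typical yield strength of a Ti alloy (assumed)
sv = von_mises(sigma)
print(f"von Mises = {sv:.0f} MPa -> {'below' if sv < yield_ti else 'above'} yield")
```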
Procedia PDF Downloads 244
1444 Disaster Adaptation Mechanism and Disaster Prevention Adaptation Planning Strategies for Industrial Parks in Response to Climate Change and Different Socio-Economic Disasters
Authors: Jen-Te Pai, Jao-Heng Liu, Shin-En Pai
Abstract:
The impact of climate change has intensified in recent years, causing Taiwan to face more frequent and more serious natural disasters. It is therefore imperative for industrial park manufacturers to promote adaptation policies in response to climate change. On the other hand, with the rise of international terrorism, a terrorist attack would attract domestic and international media attention, especially given the strategic and economic status of the science parks. It is thus necessary to formulate adaptation and mitigation strategies for both climate change and socio-economic disasters. After reviewing the literature on climate change, urban disaster prevention, vulnerability assessment, and risk communication, the study selected the 62 industrial parks administered by the Industrial Bureau of the Ministry of Economic Affairs of Taiwan as the research objects. The study assessed the vulnerability and the disaster prevention and relief functions of these industrial parks in the face of natural and socio-economic disasters. Furthermore, it examined planned adaptation by the industrial park management sections and autonomous adaptation by the corporate institutions within each park. The study concludes that Taiwanese industrial parks with a higher vulnerability to natural and socio-economic disasters should employ positive adaptive behaviours.
Keywords: adaptive behaviours, analytic network process, vulnerability, industrial parks
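The keywords indicate that criteria were weighted with the analytic network process. Below is a minimal sketch of the principal-eigenvector weighting step that AHP and ANP share; the pairwise comparison matrix and the three vulnerability criteria named in the comments are hypothetical, not taken from the study.

```python
# Eigenvector weighting step common to AHP/ANP, with a hypothetical pairwise
# comparison matrix for three vulnerability criteria.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])  # exposure vs. sensitivity vs. response capacity

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()  # normalized priority weights

# Consistency ratio CR = CI / RI, with RI = 0.58 for a 3x3 matrix
ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
print("weights:", np.round(w, 3), " CR:", round(ci / 0.58, 3))
```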
Procedia PDF Downloads 143
1443 Carbon Emission Reduction by Compact City Construction in Toyama, Japan
Authors: Benyan Jiang, Dawei Xia, Yong Li
Abstract:
Compact city construction is considered an effective measure for reducing carbon emissions in urban life. Toyama City started its compact city strategy in 2000 and was selected as a Japanese Environmental Model City in 2008 for its achievements. This paper takes Toyama as a case study, aiming to determine how city policies affected people's lifestyles and reduced carbon emissions. The main materials used in this study are first-hand documents, such as urban planning materials, government annual reports, and statistical data from the transportation association. It was found that the main measures taken by Toyama City include the construction of light rail transit, increasing the frequency of buses, and building park-and-ride lots. In addition to hardware facilities, the city also offers flexible policies, such as passenger coupons for senior citizens and free use of parking lots upon purchase of shopping vouchers. Furthermore, Toyama City encourages citizens to live within 500 meters of public transportation: people who buy an apartment near public transportation receive 500,000 Japanese yen. These measures have proven effective. Compared with 2005, the transportation sector in 2014 had reduced CO₂ emissions by 2.35 million tons, or 13.6%. This reduction is related both to the increased use of public transport and to fuel improvements.
Keywords: Toyama, compact city, public transportation, CO₂ reduction
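As a quick plausibility check, the reduction and the percentage quoted above together imply the 2005 baseline; the sketch below only rearranges the reported numbers and adds nothing from outside the abstract.

```python
# Implied baseline from the figures above: a 2.35 Mt CO2 cut equal to 13.6%.
reduction_mt = 2.35      # million tons of CO2
reduction_frac = 0.136   # 13.6%

baseline_2005 = reduction_mt / reduction_frac
emissions_2014 = baseline_2005 - reduction_mt
print(f"implied 2005 baseline: {baseline_2005:.1f} Mt CO2")    # ~17.3 Mt
print(f"implied 2014 emissions: {emissions_2014:.1f} Mt CO2")  # ~14.9 Mt
```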
Procedia PDF Downloads 142
1442 An Application of Contingent Valuation Method in Valuing Protected Area: A Case Study of Pulau Kukup National Parks
Authors: A. Mukrimah, M. Mohd Parid, H. F. Lim
Abstract:
Wetland ecosystems hold valuable resources that contribute to national income generation and public well-being, either directly through resources that have a market value or indirectly through resources that have no market value. An economic approach is used to evaluate these resources and determine the best use of wetland resources, and it should be emphasized in policy development and planning to prevent imbalances in the allocation of resources and welfare benefits. A case study was conducted in 2016 to assess the economic value of wetland ecosystem services at Pulau Kukup National Parks (PKNP). The study applied a dichotomous-choice survey design under the Contingent Valuation Method (CVM) to investigate empirically the public's willingness to pay (WTP). The study interviewed 400 household respondents in Pontian, Johor. The analysis showed that 81% of the households interviewed were willing to contribute to the Wetland Conservation Trust Fund, and that on average a household was willing to pay RM87 annually. Taking into account the 21,664 households in Pontian district in 2016, the public's contribution to conserving the wetland ecosystem at PKNP was calculated to be RM1,884,334. The public's interest in contributing to the conservation of wetland ecosystem services at PKNP indicates that a more concerted effort is needed by both the federal and state governments to conserve and rehabilitate the mangrove ecosystem in Malaysia.
Keywords: environmental economy, economic valuation, choice experiment, Pulau Kukup national parks
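In dichotomous-choice CVM, mean WTP is typically recovered from a logit model of bid acceptance, with mean WTP = -alpha/beta for a linear specification. The sketch below uses synthetic responses calibrated loosely to the RM87 figure; the bid levels, latent-WTP distribution, and sample are assumptions, not the study's actual survey design.

```python
# Minimal mean-WTP estimation for a single-bounded dichotomous-choice CVM
# survey, as used above. Synthetic data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
bids = rng.choice([20, 50, 80, 110, 140], size=400)  # hypothetical bids, RM/year
true_wtp = rng.normal(87, 40, size=400)              # latent WTP (assumed)
yes = (true_wtp >= bids).astype(int)                 # respondent accepts bid?

X = sm.add_constant(bids.astype(float))
logit = sm.Logit(yes, X).fit(disp=0)
alpha, beta = logit.params

mean_wtp = -alpha / beta  # mean WTP under the linear logit model
print(f"estimated mean WTP: RM{mean_wtp:.0f} per household per year")
print(f"aggregate over 21,664 households: RM{mean_wtp * 21_664:,.0f}")
```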
Procedia PDF Downloads 190
1441 Energy Efficient Plant Design Approaches: Case Study of the Sample Building of the Energy Efficiency Training Facilities
Authors: Idil Kanter Otcu
Abstract:
Nowadays, due to the growing problems of energy supply and the drastic reduction of natural non-renewable resources, new applications in the energy sector and steps towards greater efficiency in energy consumption are required. Buildings account for a large share of energy consumption, and increasing structural density drives this consumption further upward. Energy efficiency approaches to building design, together with the integration of new systems using emerging technologies, therefore become necessary in order to curb this consumption. As new systems for the productive use of generated energy are developed, buildings that require less energy to operate and that use resources rationally also need to be developed. One solution for reducing the energy requirements of buildings is landscape planning, design, and application. Requirements such as heating, cooling, and lighting can be met with lower energy consumption through planting design, which helps achieve a more efficient and rational use of resources. Within this context, planting design should consider not only the ecological and aesthetic features of plants but also spatial organization, taking into account the relationship between the site and its open spaces with respect to climatic elements. In this way, the planting design can serve an additional purpose. In this study, a landscape design that takes into consideration location, local climate morphology, and solar angle is illustrated on a sample building project.
Keywords: energy efficiency, landscape design, plant design, xeriscape landscape
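Because the design hinges on the solar angle, a small worked example may clarify how shading elements can be checked against the sun's path. The sketch below uses the standard solar altitude formula with Cooper's declination approximation; the latitude and dates are hypothetical examples, not the project's actual site data.

```python
# Solar altitude at local solar noon: a high summer sun can be blocked by
# deciduous shade trees while the low winter sun passes beneath the canopy.
import math

def solar_altitude(latitude_deg, day_of_year, solar_hour):
    """Solar altitude angle in degrees at local solar time."""
    decl = 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
    hour_angle = 15.0 * (solar_hour - 12.0)  # degrees, negative before noon
    lat, dec, ha = map(math.radians, (latitude_deg, decl, hour_angle))
    return math.degrees(math.asin(
        math.sin(lat) * math.sin(dec)
        + math.cos(lat) * math.cos(dec) * math.cos(ha)))

# Midsummer vs. midwinter noon sun at a hypothetical latitude of 49 deg N
print(round(solar_altitude(49.0, 172, 12.0), 1))  # ~64.4 deg (June 21)
print(round(solar_altitude(49.0, 355, 12.0), 1))  # ~17.6 deg (Dec 21)
```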
Procedia PDF Downloads 257
1440 Reducing Hazardous Materials Releases from Railroad Freights through Dynamic Trip Plan Policy
Authors: Omar A. Abuobidalla, Mingyuan Chen, Satyaveer S. Chauhan
Abstract:
Railroad transportation of hazardous materials freight is important to the North American economy and supports the national supply chain. This paper introduces several extensions of the dynamic hazardous materials trip plan problem. The problem captures most of the operational features of real-world railroad transportation systems, which dynamically initiate a set of blocks and assign each shipment to a single block path or multiple block paths. The dynamic hazardous materials trip plan policies have the distinguishing feature of integrating the blocking plan and the block activation decisions. We also present a non-linear mixed integer programming formulation for each variant and offer managerial insights based on a hypothetical railroad network. The computational results reveal that the dynamic car scheduling policies are not only able to take advantage of the capacity of the network but are also capable of diminishing population and environmental risks by rerouting the active blocks along the least risky train services, without sacrificing the cost advantage of the railroad. The empirical results of this research illustrate that the issue of integrating the blocking plan and the train makeup of hazardous materials freight must receive closer attention.
Keywords: dynamic car scheduling, planning and scheduling hazardous materials freights, airborne hazardous materials, Gaussian plume model, integrated blocking and routing plans, box model
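The keywords point to the Gaussian plume model for estimating population exposure to an airborne release along a route. Below is a minimal sketch of the standard ground-level plume equation with ground reflection; the Briggs open-country class-D dispersion coefficients and all input values are illustrative assumptions, not parameters from the paper.

```python
# Ground-level concentration of an airborne release under the Gaussian plume
# model, the risk-estimation ingredient named in the keywords above.
import math

def plume_concentration(Q, u, x, y, H):
    """Ground-level concentration (kg/m^3) at downwind x, crosswind y [m]."""
    sigma_y = 0.08 * x / math.sqrt(1 + 0.0001 * x)  # Briggs, class D, rural
    sigma_z = 0.06 * x / math.sqrt(1 + 0.0015 * x)
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = 2 * math.exp(-H**2 / (2 * sigma_z**2))  # z = 0, with reflection
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical 5 kg/s release at 10 m effective height, 3 m/s wind, 500 m downwind
print(f"{plume_concentration(5.0, 3.0, 500.0, 0.0, 10.0):.2e} kg/m^3")
```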
Procedia PDF Downloads 204
1439 Best Practices and Recommendations for CFD Simulation of Hydraulic Spool Valves
Authors: Jérémy Philippe, Lucien Baldas, Batoul Attar, Jean-Charles Mare
Abstract:
The proposed communication deals with the research and development of a rotary direct-drive servovalve for aerospace applications. A key challenge of the project is to downsize the electromagnetic torque motor by reducing the torque required to drive the rotary spool. It is intended to optimize the spool and sleeve geometries by combining a Computational Fluid Dynamics (CFD) approach with commercial optimization software. The present communication addresses an important phase of the project, which consists first of gaining confidence in the simulation results. It is well known that the force needed to pilot a sliding spool valve comes from several physical effects: hydraulic forces, friction, and the inertia/mass of the moving assembly. Among them, the flow force is usually a major contributor to the steady-state (or root mean square) driving torque. In recent decades, CFD has gradually become a standard simulation tool for studying fluid-structure interactions. However, in the particular case of high-pressure valve design, the authors have found that the calculated overall hydraulic force depends on the parameterization and options used to build and run the CFD model. To address this issue, the authors selected the standard case of the linear spool valve, which is addressed in detail in numerous scientific references (analytical models, experiments, CFD simulations). The first CFD simulations run by the authors showed that the evolution of the equivalent discharge coefficient vs. Reynolds number at the metering orifice corresponds well to the values predicted by the classical analytical models. In contrast, the simulated flow force was found to be quite different from the value calculated analytically. This drove the authors to investigate in detail the influence of the studied domain and the settings of the CFD simulation. It was first shown that the flow recirculates in the inlet and outlet channels if their length is insufficient relative to their hydraulic diameter. The dead volume on the uncontrolled-orifice side also plays a significant role. These examples highlight the influence of the geometry of the fluid domain considered. The second action was to investigate the influence of the type of mesh, the turbulence models and near-wall approaches, and the numerical solver and discretization scheme order. Two approaches were used to determine the overall hydraulic force acting on the moving spool. First, the force was deduced from the momentum balance on a control domain delimited by the valve inlet and outlet and the spool walls. Second, the overall hydraulic force was calculated from the integral of the pressure and shear forces acting at the boundaries of the fluid domain. This underlined the significant contribution of the viscous forces acting on the spool between the inlet and outlet orifices, which are generally not considered in the literature. It also emphasized the influence of the choices made in the implementation of the CFD calculation and the analysis of results. With the step-by-step process adopted to increase confidence in the CFD simulations, the authors propose a set of best practices and recommendations for the efficient use of CFD in the design of high-pressure spool valves.
Keywords: computational fluid dynamics, hydraulic forces, servovalve, rotary servovalve
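The classical analytical benchmark mentioned above is the steady-state flow force F = 2·Cd·Cv·A·ΔP·cos(θ), with a jet angle of roughly 69° for a sharp-edged spool orifice. A minimal sketch follows; the discharge and velocity coefficients, port geometry, and pressure drop are illustrative assumptions, not the valve studied in the paper.

```python
# Classical analytical estimate of the steady-state axial flow force on a
# sliding spool, against which CFD results are commonly benchmarked.
import math

def steady_flow_force(dp, opening_area, cd=0.61, cv=0.98, jet_angle_deg=69.0):
    """Axial steady flow force [N] on a sharp-edged spool metering orifice."""
    return 2.0 * cd * cv * opening_area * dp * math.cos(
        math.radians(jet_angle_deg))

# Hypothetical 0.5 mm opening of a 2 mm-wide rectangular port under 210 bar
area = 0.5e-3 * 2e-3  # m^2
dp = 210e5            # Pa
print(f"flow force: {steady_flow_force(dp, area):.1f} N")
```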
Procedia PDF Downloads 42
1438 Julia-Based Computational Tool for Composite System Reliability Assessment
Authors: Josif Figueroa, Kush Bubbar, Greg Young-Morris
Abstract:
The reliability evaluation of composite generation and bulk transmission systems is crucial for ensuring a reliable supply of electrical energy to significant system load points. However, evaluating adequacy indices using probabilistic methods such as sequential Monte Carlo simulation can be computationally expensive. It is nevertheless necessary when time-varying and interdependent resources, such as renewables and energy storage systems, are involved. Recent advances in solving power network optimization problems and in parallel computing have improved runtime performance while maintaining solution accuracy. This work introduces CompositeSystems, an open-source composite system reliability evaluation tool developed in Julia™, to address the current deficiencies of commercial and non-commercial tools. It presents the tool's design, validation, and effectiveness, including an analysis of two different formulations of the Optimal Power Flow problem. The simulations demonstrate excellent agreement with existing published studies while improving replicability and reproducibility. Overall, the proposed tool can provide valuable insights into the performance of transmission systems, making it an important addition to the existing toolbox for power system planning.
Keywords: open-source software, composite system reliability, optimization methods, Monte Carlo methods, optimal power flow
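To make the sequential Monte Carlo idea concrete, the sketch below estimates a loss-of-load expectation (LOLE) for a toy three-unit system by sampling exponential up/down cycles per generator. It is written in Python purely for illustration (the actual tool is in Julia), and the unit capacities, failure/repair times, and flat load level are invented, not drawn from the paper.

```python
# Toy sequential Monte Carlo adequacy estimate: sample chronological up/down
# states per unit and count hours in which online capacity falls below load.
import numpy as np

rng = np.random.default_rng(1)
caps = np.array([100.0, 100.0, 60.0])  # unit capacities, MW (hypothetical)
mttf, mttr = 1500.0, 50.0              # mean time to failure / repair, hours
load, horizon, years = 180.0, 8760, 200

lole_hours = []
for _ in range(years):
    available = np.zeros((horizon, len(caps)), dtype=bool)
    for j in range(len(caps)):
        t, up = 0, True
        while t < horizon:
            dur = int(rng.exponential(mttf if up else mttr)) + 1
            available[t:t + dur, j] = up   # chronological up/down cycle
            t, up = t + dur, not up
    online_capacity = available.astype(float) @ caps
    lole_hours.append(int((online_capacity < load).sum()))

print(f"LOLE ~ {np.mean(lole_hours):.1f} hours/year")  # adequacy index estimate
```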
Procedia PDF Downloads 72