Search results for: visual simulation
838 Effects of Matrix Properties on Surfactant Enhanced Oil Recovery in Fractured Reservoirs
Authors: Xiaoqian Cheng, Jon Kleppe, Ole Torsæter
Abstract:
The properties of the rock matrix affect the efficiency of surfactant flooding. One objective of this study is to analyze the effects of rock properties (permeability, porosity, initial water saturation) on surfactant spontaneous imbibition at the laboratory scale. The other objective is to evaluate existing upscaling methods and establish a modified upscaling method. A core is placed in a container filled with surfactant solution, and it is assumed that there is no gap between the bottom of the core and the container. The core is modelled as a cuboid matrix with a length of 3.5 cm, a width of 3.5 cm, and a height of 5 cm. The initial matrix, brine, and oil properties are set to those of the Ekofisk Field. The simulation results for matrix permeability show that the oil recovery rate has a strong positive linear relationship with matrix permeability: higher oil recovery is obtained from the matrix with higher permeability. One existing upscaling method is verified by this model. The study of matrix porosity shows that the relationship between oil recovery rate and matrix porosity is a negative power function, whereas the relationship between ultimate oil recovery and matrix porosity is a positive power function. The initial water saturation of the matrix has negative linear relationships with ultimate oil recovery and enhanced oil recovery. However, the relationship between oil recovery and initial water saturation becomes more complicated as imbibition time increases, because the dominating force transitions from capillary force to gravity force. Modified upscaling methods are established. The work can serve as a reference for surfactant applications in fractured reservoirs, and the description of the relationships between matrix properties, oil recovery rate, and ultimate oil recovery helps to improve upscaling methods.
Keywords: initial water saturation, permeability, porosity, surfactant EOR
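As an illustration of the power-function relationships reported above, the short Python sketch below fits y = a·φ^b to a handful of made-up (porosity, ultimate recovery) pairs; the data, initial guess, and fitted values are placeholders and not results from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(phi, a, b):
    """Power-law relation y = a * phi**b between porosity and a recovery metric."""
    return a * phi**b

# Hypothetical (porosity, ultimate recovery) pairs -- placeholders, not study data.
porosity = np.array([0.15, 0.20, 0.25, 0.30, 0.35])
ultimate_recovery = np.array([0.28, 0.33, 0.37, 0.41, 0.44])

params, _ = curve_fit(power_law, porosity, ultimate_recovery, p0=(1.0, 0.5))
a, b = params
print(f"fitted exponent b = {b:.3f}  (positive -> recovery grows with porosity)")
```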
837 Effect of Thermal Radiation and Chemical Reaction on MHD Flow of Blood in Stretching Permeable Vessel
Authors: Binyam Teferi
Abstract:
In this paper, a theoretical analysis of blood flow in the presence of thermal radiation and chemical reaction under the influence of a time-dependent magnetic field intensity is presented. The unsteady nonlinear partial differential equations of blood flow consider a time-dependent stretching velocity, the energy equation accounts for a time-dependent vessel wall temperature, and the concentration equation includes a time-dependent blood concentration. The governing nonlinear partial differential equations of motion, energy, and concentration are converted into ordinary differential equations using similarity transformations and solved numerically with ode45 in MATLAB. The effects of the physical parameters, viz. the permeability parameter, unsteadiness parameter, Prandtl number, Hartmann number, thermal radiation parameter, chemical reaction parameter, and Schmidt number, on the flow variables, viz. the velocity of blood flow in the vessel and the temperature and concentration of the blood, are analyzed and discussed graphically. From the simulation study, the following important results are obtained: the velocity of blood flow increases with increments of both the permeability and unsteadiness parameters; the temperature of the blood at the vessel wall increases as the Prandtl number and Hartmann number increase; and the concentration of the blood decreases as the time-dependent chemical reaction parameter and Schmidt number increase.
Keywords: stretching velocity, similarity transformations, time dependent magnetic field intensity, thermal radiation, chemical reaction
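The abstract reports that the similarity-transformed equations were integrated with ode45 in MATLAB. The sketch below shows the analogous workflow in Python with SciPy's RK45 integrator on an assumed boundary-layer-type momentum equation; the equation form, the Hartmann-like parameter M, and the guessed initial curvature are illustrative and are not the governing equations of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

M = 1.0   # assumed Hartmann-number-like parameter (illustrative value)

def rhs(eta, y):
    # y = [f, f', f'']; an illustrative boundary-layer momentum equation
    f, fp, fpp = y
    return [fp, fpp, -f * fpp + fp**2 + M * fp]

# Assumed boundary data: f(0) = 0, f'(0) = 1 (stretching wall), guessed f''(0)
y0 = [0.0, 1.0, -1.2]
sol = solve_ivp(rhs, (0.0, 6.0), y0, method="RK45", dense_output=True)
print("f'(eta = 6) is approximately", sol.y[1, -1])  # should decay toward 0 if the guess is good
```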
836 Malaysian ESL Writing Process: A Comparison with England’s
Authors: Henry Nicholas Lee, George Thomas, Juliana Johari, Carmilla Freddie, Caroline Val Madin
Abstract:
Research in comparative and international education often provides value-laden views of an education system within and in between other countries. These views are frequently used by policy makers or educators to explore similarities and differences for, among others, benchmarking purposes. In this study, a comparison is made between Malaysia and England, focusing on the process of writing children went through to create a text, using a multimodal theoretical framework to analyse this comparison. The main purpose is political in nature as it served as an answer to Malaysia’s call for benchmarking of best practices for language learning. Furthermore, the focus on writing in this study adds into more empirical findings about early writers’ writing development and writing improvement, especially for children at the ages of 5-9. In research, comparative studies in English as a Second Language (ESL) writing pedagogy – particularly in Malaysia since the introduction of the Standard- based English Language Curriculum (KSSR) in 2011 as a draft and its full implementation in 2017; reviewed 2018 KSSR-CEFR aligned – has not been done comparatively. In theory, a multimodal theoretical framework somehow allows a logical comparison between first language and ESL which would provide useful insights to illuminate the writing process between Malaysia and England. The comparisons are not representative because of the different school systems in both countries. So far, the literature informs us that the curriculum for language learning is very much emphasised on children’s linguistic abilities, which include their proficiency and mastery of the language, its conventions, and technicalities. However, recent empirical findings suggested that literacy in its concepts and characters need change. In view of this suggestion, the comparison will look at how the process of writing is implemented through the five modes of communication: linguistic, visual, aural, spatial, and gestural. This project draws on data from Malaysia and England, involving 10 teachers, 26 classroom observations, 20 lesson plans, 20 interviews, and 20 brief conversations with teachers. The research focused upon 20 primary children of different genders aged 5-9, and in addition to primary data descriptions, 40 children’s works, 40 brief classroom conversations, 30 classroom photographs, and 30 school compound photographs were undertaken to investigate teachers and children’s use of modes and semiotic resources to design a text. The data were analysed by means of within-case analysis, cross-case analysis, and constant comparative analysis, with an initial stage of data categorisation, followed by general and specific coding, which clustered the data into thematic groups. The study highlights the importance of teachers’ and children’s engagement and interaction with various modes of communication, an adaptation from the English approaches to teaching writing within the KSSR framework and providing ‘voice’ to ESL writers to ensure that both have access to the knowledge and skills required to make decisions in developing multimodal texts and artefacts.Keywords: comparative education, early writers, KSSR, multimodal theoretical framework, writing development
835 Optimization of Solar Rankine Cycle by Exergy Analysis and Genetic Algorithm
Authors: R. Akbari, M. A. Ehyaei, R. Shahi Shavvon
Abstract:
Nowadays, solar energy is used for energy purposes such as supplying thermal energy for domestic, industrial, and power applications, as well as converting sunlight into electricity with photovoltaic cells. In this study, a thermodynamic simulation of the solar Rankine cycle with a phase change material (paraffin) was first carried out, and energy and exergy analyses were then performed. For optimization, single- and multi-objective genetic algorithms were used to maximize thermal and exergy efficiency. The parameters discussed in this paper include the effects of the turbine inlet pressure, the turbine inlet mass flow, the surface area of the converters, and the collector angle on thermal and exergy efficiency. In the organic Rankine cycle, where solar energy is the input energy, fluid selection is a decisive factor for reliable and efficient operation; therefore, silicone oil is selected as the working fluid for the high-temperature cycle and water for the low-temperature cycle. The results showed that increasing the mass flow to turbines 1 and 2 increases thermal efficiency, while it reduces the exergy efficiency of turbine 1 and increases that of turbine 2. Increasing the inlet pressure to turbine 1 decreases the thermal and exergy efficiency, whereas increasing the inlet pressure to turbine 2 increases both. Increasing the collector angle also increases thermal and exergy efficiency. The thermal efficiency of the system was 22.3%, which improved to 33.2% and 27.2% in the single-objective and multi-objective optimizations, respectively. The exergy efficiency of the system was 1.33%, which improved to 1.719% and 1.529% in the single-objective and multi-objective optimizations, respectively. These results show that the thermal and exergy efficiencies obtained by single-objective optimization are greater than those of the multi-objective optimization.
Keywords: exergy analysis, genetic algorithm, rankine cycle, single and multi-objective function
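To make the genetic-algorithm step concrete, the following Python sketch evolves three design variables (turbine inlet pressure, mass flow, collector angle) against a placeholder efficiency surrogate; the objective function, bounds, and GA settings are assumptions standing in for the thermodynamic model used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def thermal_efficiency(x):
    """Placeholder objective: a smooth surrogate of cycle efficiency vs.
    (turbine inlet pressure [bar], mass flow [kg/s], collector angle [deg]).
    The real study would evaluate the thermodynamic model here."""
    p, m, theta = x
    return (0.30 - 0.002 * (p - 20.0) ** 2
                 - 0.010 * (m - 1.5) ** 2
                 + 0.001 * np.cos(np.radians(theta - 30.0)))

bounds = np.array([[10.0, 40.0], [0.5, 3.0], [0.0, 60.0]])
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 3))   # initial population

for gen in range(100):
    fitness = np.array([thermal_efficiency(ind) for ind in pop])
    # Binary tournament selection
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(fitness[idx[:, 0]] > fitness[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # Uniform crossover followed by Gaussian mutation, clipped to the bounds
    mask = rng.random(pop.shape) < 0.5
    children = np.where(mask, parents, rng.permutation(parents))
    children += rng.normal(0.0, 0.02, pop.shape) * (bounds[:, 1] - bounds[:, 0])
    pop = np.clip(children, bounds[:, 0], bounds[:, 1])

best = pop[np.argmax([thermal_efficiency(ind) for ind in pop])]
print("best design (pressure, mass flow, collector angle):", best)
```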
834 Energy Retrofitting Application Research to Achieve Energy Efficiency in Hot-Arid Climates in Residential Buildings: A Case Study of Saudi Arabia
Authors: A. Felimban, A. Prieto, U. Knaack, T. Klein
Abstract:
This study aims to present an overview of recent research in building energy-retrofitting strategy applications and analyzing them within the context of hot arid climate regions which is in this case study represented by the Kingdom of Saudi Arabia. The main goal of this research is to do an analytical study of recent research approaches to show where the primary gap in knowledge exists and outline which possible strategies are available that can be applied in future research. Also, the paper focuses on energy retrofitting strategies at a building envelop level. The study is limited to specific measures within the hot arid climate region. Scientific articles were carefully chosen as they met the expression criteria, such as retrofitting, energy-retrofitting, hot-arid, energy efficiency, residential buildings, which helped narrow the research scope. Then the papers were explored through descriptive analysis and justified results within the Saudi context in order to draw an overview of future opportunities from the field of study for the last two decades. The conclusions of the analysis of the recent research confirmed that the field of study had a research shortage on investigating actual applications and testing of newly introduced energy efficiency applications, lack of energy cost feasibility studies and there was also a lack of public awareness. In terms of research methods, it was found that simulation software was a major instrument used in energy retrofitting application research. The main knowledge gaps that were identified included the need for certain research regarding actual application testing; energy retrofitting strategies application feasibility; the lack of research on the importance of how strategies apply first followed by the user acceptance of developed scenarios.Keywords: energy efficiency, energy retrofitting, hot arid, Saudi Arabia
833 Integer Programming: Domain Transformation in Nurse Scheduling Problem.
Authors: Geetha Baskaran, Andrzej Barjiela, Rong Qu
Abstract:
Motivation: Nurse scheduling is a complex combinatorial optimization problem that is known to be NP-hard. It needs efficient re-scheduling to minimize a trade-off among measures of violation, obtained by relaxing selected constraints into soft constraints whose violations are measured. Problem Statement: In this paper, we extend our novel approach to solving the nurse scheduling problem by transforming it through information granulation. Approach: The approach satisfies the rules of a typical hospital environment based on a standard benchmark problem. Generating good work schedules has a great influence on nurses' working conditions, which are strongly related to the quality of health care. Domain transformation, which combines the strengths of operations research and artificial intelligence, is proposed for the solution of the problem. Compared to conventional methods, our approach involves judicious grouping (information granulation) of shift types, which transforms the original problem into a smaller solution domain. The schedules obtained in the smaller problem domain are then converted back into the original problem domain by taking into account the constraints that could not be represented in the smaller domain. An Integer Programming (IP) package is used to solve the transformed scheduling problem by employing the branch and bound algorithm; GNU Octave for Windows was used for this purpose. Results: The scheduling problem has been solved in the proposed formalism, resulting in a high-quality schedule. Conclusion: Domain transformation represents a departure from the conventional one-shift-at-a-time scheduling approach. It offers the advantage of efficient and easily understandable solutions as well as deterministic reproducibility of the results. We note, however, that it does not guarantee the global optimum.
Keywords: domain transformation, nurse scheduling, information granulation, artificial intelligence, simulation
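A minimal integer-programming sketch of the rostering idea is given below, written in Python with the PuLP modelling package rather than the GNU Octave IP package used in the paper; the staff list, shift demand, and soft-coverage formulation are invented for illustration only.

```python
import pulp

nurses = ["N1", "N2", "N3", "N4"]
days = range(7)
shifts = ["early", "late", "night"]
demand = {"early": 1, "late": 1, "night": 1}   # assumed coverage required per day

prob = pulp.LpProblem("nurse_rostering", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (nurses, days, shifts), cat="Binary")
# Slack variables turn coverage into a soft constraint, mirroring the violation-measure idea
under = pulp.LpVariable.dicts("under", (days, shifts), lowBound=0)

prob += pulp.lpSum(under[d][s] for d in days for s in shifts)   # minimise total violations

for n in nurses:
    for d in days:
        prob += pulp.lpSum(x[n][d][s] for s in shifts) <= 1      # at most one shift per day
for d in days:
    for s in shifts:
        prob += pulp.lpSum(x[n][d][s] for n in nurses) + under[d][s] >= demand[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))                         # branch and bound via CBC
print("total uncovered demand:", pulp.value(prob.objective))
```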
832 Fluidised Bed Gasification of Multiple Agricultural Biomass-Derived Briquettes
Authors: Rukayya Ibrahim Muazu, Aiduan Li Borrion, Julia A. Stegemann
Abstract:
Biomass briquette gasification is regarded as a promising route for efficient briquette use in energy generation, fuels and other useful chemicals, however, previous research work has focused on briquette gasification in fixed bed gasifiers such as updraft and downdraft gasifiers. Fluidised bed gasifier has the potential to be effectively sized for medium or large scale. This study investigated the use of fuel briquettes produced from blends of rice husks and corn cobs biomass residues, in a bubbling fluidised bed gasifier. The study adopted a combination of numerical equations and Aspen Plus simulation software to predict the product gas (syngas) composition based on briquette's density and biomass composition (blend ratio of rice husks to corn cobs). The Aspen Plus model was based on an experimentally validated model from the literature. The results based on a briquette size of 32 mm diameter and relaxed density range of 500 to 650 kg/m3 indicated that fluidisation air required in the gasifier increased with an increase in briquette density, and the fluidisation air showed to be the controlling factor compared with the actual air required for gasification of the biomass briquettes. The mass flowrate of CO2 in the predicted syngas composition, increased with an increase in the air flow rate, while CO production decreased and H2 was almost constant. The H2/CO ratio for various blends of rice husks and corn cobs did not significantly change at the designed process air, but a significant difference of 1.0 for H2/CO ratio was observed at higher air flow rate, and between 10/90 to 90/10 blend ratio of rice husks to corn cobs. This implies the need for further understanding of biomass variability and hydrodynamic parameters on syngas composition in biomass briquette gasification.Keywords: aspen plus, briquettes, fluidised bed, gasification, syngas
831 Enhancement to Green Building Rating Systems for Industrial Facilities by Including the Assessment of Impact on the Landscape
Authors: Lia Marchi, Ernesto Antonini
Abstract:
The impact of industrial sites on people's living environment involves both detrimental effects on the ecosystem and perceptual-aesthetic interference with the scenery. These, in turn, affect the economic and social value of the landscape, as well as the wellbeing of workers and local communities. Given the diffusion of the phenomenon and the relevance of its effects, the need emerges for a joint approach to assess, and thus mitigate, the impact of factories on the landscape, the latter being understood as the result of the action and interaction of natural and human factors. However, the impact assessment tools suitable for this purpose are quite heterogeneous and mostly monodisciplinary. On the one hand, green building rating systems (GBRSs) are increasingly used to evaluate the performance of manufacturing sites, mainly through quantitative indicators focused on environmental issues. On the other hand, methods to detect the visual and social impact of factories on the landscape are gradually emerging in the literature, but they generally adopt only qualitative gauges. This research addresses the integration of environmental impact assessment with the perceptual-aesthetic interference of factories on the landscape. The GBRS model is taken as a reference since it can simultaneously investigate the different topics that affect sustainability and return a global score. A critical analysis of GBRSs relevant to industrial facilities led to the selection of the U.S. GBC LEED protocol as the most suitable for this scope. A revision of LEED v4 Building Design+Construction has then been produced by including specific indicators to measure the interference of manufacturing sites with the perceptual-aesthetic and social aspects of the territory. To this end, a new impact category was defined, namely 'PA - Perceptual-aesthetic aspects', comprising eight new credits specifically designed to assess how well buildings harmonize with their surroundings: they investigate, for example, the morphological and chromatic harmonization of the facility with the scenery, or the site's receptiveness and attractiveness. The credit weighting table was consequently revised according to the LEED points allocation system. As with all LEED credits, each new PA credit is thoroughly described in a sheet setting out its aim, its requirements, and the available options to gauge the interference and obtain a score. Lastly, each credit is related to mitigation tactics drawn from a catalogue of exemplary case studies, also developed within the research. The result is a modified LEED scheme that includes compatibility with the landscape within the sustainability assessment of industrial sites. The whole system consists of 10 evaluation categories containing 62 credits in total. Finally, a test of the tool on an Italian factory was performed, allowing the comparison of three mitigation scenarios with increasing levels of compatibility. The study proposes a holistic and viable approach to the environmental impact assessment of factories through a tool that integrates the multiple aspects involved within a worldwide recognized rating protocol.
Keywords: environmental impact, GBRS, landscape, LEED, sustainable factory
830 Reinforcement-Learning Based Handover Optimization for Cellular Unmanned Aerial Vehicles Connectivity
Authors: Mahmoud Almasri, Xavier Marjou, Fanny Parzysz
Abstract:
The demand for services provided by Unmanned Aerial Vehicles (UAVs) is increasing pervasively across several sectors including potential public safety, economic, and delivery services. As the number of applications using UAVs grows rapidly, more and more powerful, quality of service, and power efficient computing units are necessary. Recently, cellular technology draws more attention to connectivity that can ensure reliable and flexible communications services for UAVs. In cellular technology, flying with a high speed and altitude is subject to several key challenges, such as frequent handovers (HOs), high interference levels, connectivity coverage holes, etc. Additional HOs may lead to “ping-pong” between the UAVs and the serving cells resulting in a decrease of the quality of service and energy consumption. In order to optimize the number of HOs, we develop in this paper a Q-learning-based algorithm. While existing works focus on adjusting the number of HOs in a static network topology, we take into account the impact of cells deployment for three different simulation scenarios (Rural, Semi-rural and Urban areas). We also consider the impact of the decision distance, where the drone has the choice to make a switching decision on the number of HOs. Our results show that a Q-learning-based algorithm allows to significantly reduce the average number of HOs compared to a baseline case where the drone always selects the cell with the highest received signal. Moreover, we also propose which hyper-parameters have the largest impact on the number of HOs in the three tested environments, i.e. Rural, Semi-rural, or Urban.Keywords: drones connectivity, reinforcement learning, handovers optimization, decision distance
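The tabular Q-learning update at the core of such a handover controller can be sketched as follows; the state/action discretisation, reward shaping, and hyper-parameter values below are assumptions, with a placeholder environment standing in for the cellular simulator.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 50, 5           # e.g. discretised (position, serving cell) x candidate cells
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1     # learning rate, discount, exploration (assumed values)

def step(state, action):
    """Placeholder environment: returns (next_state, reward).
    A real simulator would penalise each handover and reward link quality."""
    handover_penalty = -1.0 if action != state % n_actions else 0.0
    next_state = (state + 1) % n_states
    return next_state, handover_penalty + rng.normal(0.5, 0.1)

state = 0
for _ in range(10_000):
    # Epsilon-greedy action selection
    action = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Standard Q-learning update
    Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
    state = next_state
```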
829 Reading Comprehension in Profound Deaf Readers
Authors: S. Raghibdoust, E. Kamari
Abstract:
Research shows that reduced functional hearing has a detrimental influence on an individual's ability to establish proper phonological representations of words, since phonological representations are claimed to mediate the conceptual processing of written words. Word processing efficiency is therefore expected to decrease with a decrease in functional hearing; in other words, hearing individuals are predicted to be more capable of word processing than individuals with hearing loss, since their functional hearing works normally. Studies also demonstrate that the quality of functional hearing affects reading comprehension via its effect on word processing skills. In other words, better hearing facilitates the development of phonological knowledge and can promote enhanced strategies for the recognition of written words, which in turn positively affect the higher-order processes underlying reading comprehension. The aims of this study were to investigate and compare the effect of deafness on the participants' abilities to process written words at the lexical and sentence levels, using two online tasks and one offline reading comprehension test. The performance of a group of 8 profoundly deaf male students (ages 8-12) was compared with that of a control group of normal-hearing male students. All the participants had normal IQ and visual status and came from an average socioeconomic background. None were diagnosed with a particular learning or motor disability, and the language spoken in the homes of all participants was Persian. Two word processing tests were developed and presented to the participants using the OpenSesame software, in order to measure the speed and accuracy of their performance at the perceptual and conceptual levels. In the third, offline test of reading comprehension, which comprised semantically plausible and semantically implausible subject relative clauses, the participants had to select the correct answer out of two choices. The statistical analysis, performed with the SPSS software, indicated that hearing and deaf participants had similar word processing performance in terms of both speed and accuracy of responses. The results also showed no significant difference between the performance of the deaf and hearing participants in comprehending semantically plausible sentences (p > 0.05). However, a significant difference between the performances of the two groups was observed with respect to their comprehension of semantically implausible sentences (p < 0.05). In sum, the findings revealed that the seriously impoverished sentence reading ability characterizing the profoundly deaf participants of the present research reflects their reliance on reading strategies based on insufficient or deviant structural knowledge, particularly in processing semantically implausible sentences, rather than a failure to efficiently process written words at the lexical level. This conclusion does not mean that deaf individuals never experience deficits at the word processing level that impede their understanding of written texts. However, as stated in previous research, it is reasonable to assume that the more familiar deaf individuals become with written words, the better they can recognize them, despite a profound phonological weakness.
Keywords: deafness, reading comprehension, reading strategy, word processing, subject and object relative sentences
828 Numerical Evaluation of Deep Ground Settlement Induced by Groundwater Changes During Pumping and Recovery Test in Shanghai
Authors: Shuo Wang
Abstract:
The hydrogeological parameters of the engineering site and the hydraulic connection between the aquifers can be obtained by the pumping test. Through the recovery test, the characteristics of water level recovery and the law of surface subsidence recovery can be understood. The above two tests can provide the basis for subsequent engineering design. At present, the deformation of deep soil caused by pumping tests is often neglected. However, some studies have shown that the maximum settlement subject to groundwater drawdown is not necessarily on the surface but in the deep soil. In addition, the law of settlement recovery of each soil layer subject to water level recovery is not clear. If the deformation-sensitive structure is deep in the test site, safety accidents may occur. In this study, the pumping test and recovery test of a confined aquifer in Shanghai are introduced. The law of measured groundwater changes and surface subsidence are analyzed. In addition, the fluid-solid coupling model was established by ABAQUS based on the Biot consolidation theory. The models are verified by comparing the computed and measured results. Further, the variation law of water level and the deformation law of deep soil during pumping and recovery tests under different site conditions and different times and spaces are discussed through the above model. It is found that the maximum soil settlement caused by pumping in a confined aquifer is related to the permeability of the overlying aquitard and pumping time. There is a lag between soil deformation and groundwater changes, and the recovery rate of settlement deformation of each soil layer caused by the rise of water level is different. Finally, some possible research directions are proposed to provide new ideas for academic research in this field.Keywords: coupled hydro-mechanical analysis, deep ground settlement, numerical simulation, pumping test, recovery test
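For intuition about pressure dissipation and the resulting settlement, the sketch below solves a much simpler 1-D Terzaghi-type consolidation equation by explicit finite differences; it is not the coupled Biot model built in ABAQUS for the study, and all parameter values are assumed.

```python
import numpy as np

cv = 1e-7                        # consolidation coefficient [m^2/s] (assumed)
H, nz = 10.0, 101                # layer thickness [m] and grid points
dz = H / (nz - 1)
dt = 0.4 * dz**2 / cv            # respects the explicit stability limit dt <= 0.5 dz^2/cv
u = np.full(nz, 50.0)            # initial excess pore pressure [kPa] from drawdown (assumed)
u[0] = 0.0                       # drained boundary at the pumped aquifer interface

for _ in range(20_000):
    # FTCS update of du/dt = cv * d2u/dz2 on interior nodes
    u[1:-1] += cv * dt / dz**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[0] = 0.0                   # keep the drained boundary
    u[-1] = u[-2]                # impermeable boundary at the far end

# Settlement follows from the dissipated pressure via an assumed compressibility m_v
mv = 2e-4                        # volume compressibility [1/kPa] (assumed)
settlement = np.sum(mv * (50.0 - u) * dz)
print(f"settlement after the simulated period is roughly {settlement * 1000:.1f} mm")
```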
827 Communicating Safety: A Digital Ethnography Investigating Social Media Use for Workplace Safety
Authors: Kelly Jaunzems
Abstract:
Social media is a powerful instrument of communication, enabling the presentation of information in multiple forms and modes, amplifying the interactions between people, organisations, and stakeholders, and increasing the range of communication channels available. Younger generations are highly engaged with social media and more likely to use this channel than any other to seek information. Given this, it may appear extraordinary that occupational safety and health professionals have yet to seriously engage with social media for communicating safety messages to younger audiences who, in many industries, are statistically more likely to encounter workplace harm or injury. Millennials, defined as those born between 1981 and 2000, have distinctive characteristics that also shape their interaction patterns, rendering many traditional occupational safety and health communication channels sub-optimal or near obsolete. Accustomed to immediate responses, 280-character communication, shares, likes, and visual imagery, millennials struggle to take seriously the low-tech, top-down communication channels such as safety noticeboards, toolbox meetings, and passive tick-box online inductions favoured by traditional OSH professionals. This paper draws upon well-established communication findings, which argue that it is important to know a target audience and reach them through their preferred communication pathways, particularly if the aim is to influence attitudes and behaviours. Health practitioners have adopted social media as a communication channel with great success, yet safety practitioners have failed to follow this lead. Using a digital ethnography approach, this paper examines seven organisations' Facebook posts from two one-month periods one year apart, one in 2018 and one in 2019. Each year informs organisation-based case studies. Comparing, contrasting, and drawing upon these case studies, the paper discusses and evaluates the (non) use of social media for communicating safety information in terms of user engagement, shareability, and overall appeal. The success of health practitioners' use of social media provides a compelling template for the implementation of social media into organisations' safety communication strategies. Highly visible content such as that found on social media allows an organisation to become more responsive and engage in two-way conversations with its audience, creating more engaged and participatory conversations around safety. Further, using social media to address younger audiences with a range of tonal qualities (for example, the use of humour) can achieve cut-through in a way that grim statistics fail to do. On the basis of 18 months of interviews, field work, and data analysis, the paper concludes with recommendations for communicating safety information via social media. It proposes exploring a social media communication formula that, when utilised by safety practitioners, may create an effective social media presence. It is anticipated that such social media use will increase engagement, expand the number of followers, and reduce the likelihood and severity of safety-related incidents. The tools offered may provide a path for safety practitioners to reach a disengaged generation of workers and build a cohesive and inclusive conversation around ways to keep people safe at work.
Keywords: social media, workplace safety, communication strategies, young workers
826 Life Cycle Assessment of Residential Buildings: A Case Study in Canada
Authors: Venkatesh Kumar, Kasun Hewage, Rehan Sadiq
Abstract:
Residential buildings consume significant amounts of energy and produce large amounts of emissions and waste. However, there is substantial potential for energy savings in this sector, which needs to be evaluated over the life cycle of residential buildings. Life Cycle Assessment (LCA) methodology has been employed to study the primary energy use and associated environmental impacts of the different phases (i.e., product, construction, use, end of life, and beyond building life) of residential buildings. Four different residential building alternatives in Vancouver (BC, Canada) with a 50-year lifespan have been evaluated: a High Rise Apartment (HRA), a Low Rise Apartment (LRA), a Single family Attached House (SAH), and a Single family Detached House (SDH). The life cycle performance of the buildings is evaluated in terms of embodied energy, embodied environmental impacts, operational energy, operational environmental impacts, total life-cycle energy, and total life-cycle environmental impacts. Estimation of operational energy and the LCA are performed using the DesignBuilder software and the Athena Impact Estimator software, respectively. The results revealed that, over the life span of the buildings, the relationship between energy use and environmental impacts is identical. The LRA is found to be the best alternative in terms of embodied energy use and embodied environmental impacts, while the HRA shows the best life-cycle performance in terms of minimum energy use and environmental impacts. A sensitivity analysis has also been carried out to study the influence of building service lifespans of 50, 75, and 100 years on the relative significance of embodied energy and total life-cycle energy. The life-cycle energy requirement of the SDH is found to be the most significant among the four types of residential buildings. Overall, the results disclose that the primary operations of these buildings account for 90% of the total life-cycle energy, which far outweighs the minor differences in embodied effects between the buildings.
Keywords: building simulation, environmental impacts, life cycle assessment, life cycle energy analysis, residential buildings
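The life-cycle energy bookkeeping behind such comparisons can be illustrated with a few lines of Python; the embodied and operational figures below are invented magnitudes used only to show how the embodied share shifts across the 50, 75, and 100-year service lifespans examined in the sensitivity analysis.

```python
# Illustrative life-cycle energy bookkeeping (GJ); all magnitudes are made up.
buildings = {
    "HRA": {"embodied": 9_000, "operational_per_year": 1_400},
    "LRA": {"embodied": 6_500, "operational_per_year": 1_600},
    "SAH": {"embodied": 7_200, "operational_per_year": 1_900},
    "SDH": {"embodied": 8_000, "operational_per_year": 2_300},
}

for lifespan in (50, 75, 100):                     # the sensitivity cases in the study
    for name, b in buildings.items():
        total = b["embodied"] + lifespan * b["operational_per_year"]
        share = b["embodied"] / total
        print(f"{name}, {lifespan} yr: total {total:>7} GJ, embodied share {share:5.1%}")
```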
825 Development of a Miniature Laboratory Lactic Goat Cheese Model to Study the Expression of Spoilage by Pseudomonas Spp. In Cheeses
Authors: Abirami Baleswaran, Christel Couderc, Loubnah Belahcen, Jean Dayde, Hélène Tormo, Gwénaëlle Jard
Abstract:
Cheeses are often reported to be spoiled by Pseudomonas spp., which is responsible for defects in appearance, texture, taste, and smell, preventing their sale and even leading to their destruction. Despite preventive actions, problems linked to Pseudomonas spp. are difficult to control because of the lack of knowledge and control of these contaminants during cheese manufacturing. Lactic goat cheese producers are not spared by this problem and are looking for solutions to decrease the number of spoiled cheeses. Exploring the different hypotheses requires experiments; however, cheese-making experiments at the pilot scale are expensive and time-consuming. Thus, there is a real need to develop a miniature cheese model system under controlled conditions. In a previous study, several miniature cheese models corresponding to different types of commercial cheese were developed for different purposes. Such models have been used, for example, to study the influence of milk, starter cultures, pathogen-inhibiting additives, enzymatic reactions, microflora, and freezing processes on cheese. Nevertheless, no miniature model has been described for lactic goat cheese. The aim of this work was to develop a miniature cheese model system under controlled laboratory conditions that resembles commercial lactic goat cheese, in order to study Pseudomonas spp. spoilage during the manufacturing and ripening process. First, a protocol for the preparation of miniature cheeses (3.5 times smaller than a commercial one) was designed based on the cheese factory manufacturing process. The process was adapted from the "Rocamadour" technology and involves maturation of pasteurized milk, coagulation, removal of whey by centrifugation, moulding, and ripening in a small-scale cellar. Microbiological (total bacterial count, yeasts, moulds) and physicochemical (pH, salt in moisture, moisture on a fat-free basis) analyses were performed at four key stages of the process (before salting, after salting, first day of ripening, and end of ripening). The volatilomes of factory and miniature cheeses were also obtained after full-scan SIFT-MS analysis. Then, Pseudomonas spp. strains isolated from contaminated cheeses were selected on the basis of their origin, their ability to produce pigments, and their enzymatic activities (proteolytic, lecithinasic, and lipolytic). Factory and miniature curds were inoculated by spotting the selected strains on the cheese surface. The expression of cheese spoilage was evaluated by counting the level of Pseudomonas spp. during ripening and by visual observation, including under a UV lamp. The physicochemical and microbiological compositions of the miniature cheeses made it possible to establish that the miniature process resembles the factory process. As expected, differences in volatilomes were observed, probably because the miniature cheeses are made using pasteurized milk to better control the microbiological conditions, and also because the small format of the cheese probably induced differences during ripening, even though the humidity and temperature in the cellar were quite similar. The spoilage expression of Pseudomonas spp. was observed in both miniature and factory cheeses, confirming that the proposed model is suitable for preparing miniature cheese specimens for the study of Pseudomonas spp. spoilage in lactic cheeses. This kind of model could be deployed for other applications and other types of cheese.
Keywords: cheese, miniature, model, pseudomonas spp, spoilage
824 Attention Treatment for People With Aphasia: Language-Specific vs. Domain-General Neurofeedback
Authors: Yael Neumann
Abstract:
Attention deficits are common in people with aphasia (PWA). Two treatment approaches address these deficits: domain-general methods like Play Attention, which focus on cognitive functioning, and domain-specific methods like Language-Specific Attention Treatment (L-SAT), which use linguistically based tasks. Research indicates that L-SAT can improve both attentional deficits and functional language skills, while Play Attention has shown success in enhancing attentional capabilities among school-aged children with attention issues compared to standard cognitive training. This study employed a randomized controlled cross-over single-subject design to evaluate the effectiveness of these two attention treatments over 25 weeks. Four PWA participated, undergoing a battery of eight standardized tests measuring language and cognitive skills. The treatments were counterbalanced. Play Attention used EEG sensors to detect brainwaves, enabling participants to manipulate items in a computer game while learning to suppress theta activity and increase beta activity. An algorithm tracked changes in the theta-to-beta ratio, allowing points to be earned during the games. L-SAT, on the other hand, involved hierarchical language tasks that increased in complexity, requiring greater attention from participants. Results showed that for language tests, Participant 1 (moderate aphasia) aligned with existing literature, showing L-SAT was more effective than Play Attention. However, Participants 2 (very severe) and 3 and 4 (mild) did not conform to this pattern; both treatments yielded similar outcomes. This may be due to the extremes of aphasia severity: the very severe participant faced significant overall deficits, making both approaches equally challenging, while the mild participant performed well initially, leaving limited room for improvement. In attention tests, Participants 1 and 4 exhibited results consistent with prior research, indicating Play Attention was superior to L-SAT. Participant 2, however, showed no significant improvement with either program, although L-SAT had a slight edge on the Visual Elevator task, measuring switching and mental flexibility. This advantage was not sustained at the one-month follow-up, likely due to the participant’s struggles with complex attention tasks. Participant 3's results similarly did not align with prior studies, revealing no difference between the two treatments, possibly due to the challenging nature of the attention measures used. Regarding participation and ecological tests, all participants showed similar mild improvements with both treatments. This limited progress could stem from the short study duration, with only five weeks allocated for each treatment, which may not have been enough time to achieve meaningful changes affecting life participation. In conclusion, the performance of participants appeared influenced by their level of aphasia severity. The moderate PWA’s results were most aligned with existing literature, indicating better attention improvement from the domain-general approach (Play Attention) and better language improvement from the domain-specific approach (L-SAT).Keywords: attention, language, cognitive rehabilitation, neurofeedback
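The quantity tracked by a Play-Attention-style neurofeedback loop, the theta-to-beta power ratio, can be estimated from an EEG segment roughly as in the Python sketch below; the sampling rate, band limits, and the synthetic signal are assumptions for illustration.

```python
import numpy as np
from scipy.signal import welch

fs = 256                                               # sampling rate in Hz (assumed)
eeg = np.random.default_rng(2).normal(size=fs * 10)    # stand-in for 10 s of one EEG channel

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)          # power spectral density estimate

def band_power(lo, hi):
    """Integrate the PSD over a frequency band."""
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

theta = band_power(4.0, 8.0)
beta = band_power(13.0, 30.0)
print("theta/beta ratio:", theta / beta)   # the quantity the neurofeedback loop would track
```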
823 A Geometrical Multiscale Approach to Blood Flow Simulation: Coupling 2-D Navier-Stokes and 0-D Lumped Parameter Models
Authors: Azadeh Jafari, Robert G. Owens
Abstract:
In this study, a geometrical multiscale approach, coupling the 2-D Navier-Stokes equations and constitutive equations with 0-D lumped parameter models, is investigated. A multiscale approach suggests a natural way of coupling detailed local models (in the flow domain) with coarser models able to describe the dynamics over a large part, or even the whole, of the cardiovascular system at acceptable computational cost. In this study we introduce a new velocity correction scheme to decouple the velocity computation from the pressure computation. To evaluate the capability of the new scheme, a comparison has been performed between the results obtained with Neumann outflow boundary conditions on the velocity and Dirichlet outflow boundary conditions on the pressure, and those obtained using coupling with the lumped parameter model. Comprehensive studies have been carried out on the sensitivity of the numerical scheme to the initial conditions, the elasticity, and the number of spectral modes. Improvement of the computational algorithm with stable convergence has been demonstrated for at least moderate Weissenberg numbers. We comment on the mathematical properties of the reduced model, its limitations in yielding realistic and accurate numerical simulations, and its contribution to a better understanding of microvascular blood flow. We discuss the sophistication and reliability of multiscale models for computing correct boundary conditions at the outflow boundaries of a section of the cardiovascular system of interest. In this respect, the geometrical multiscale approach can be regarded as a new method for solving a class of biofluid problems whose application goes significantly beyond the one addressed in this work.
Keywords: geometrical multiscale models, haemorheology model, coupled 2-D navier-stokes 0-D lumped parameter modeling, computational fluid dynamics
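A common concrete choice for the 0-D lumped parameter model at an outflow boundary is a three-element Windkessel; the Python sketch below integrates one such model driven by an assumed outlet flow waveform. The RCR values and the waveform are illustrative and are not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Three-element Windkessel: a common 0-D surrogate for the downstream vasculature.
R1, R2, C = 0.05, 1.0, 1.5           # proximal/distal resistance, compliance (assumed units)

def inflow(t):
    """Assumed periodic flow rate Q(t) delivered by the 2-D domain at the outlet."""
    return 400.0 * max(np.sin(2 * np.pi * t), 0.0) ** 2

def windkessel(t, y):
    p_distal = y[0]
    q = inflow(t)
    dp = (q - p_distal / R2) / C      # dP_c/dt from flow conservation at the node
    return [dp]

sol = solve_ivp(windkessel, (0.0, 5.0), [80.0], max_step=1e-3)
# Pressure fed back to the 2-D outlet = distal pressure plus drop over the proximal resistance
outlet_pressure = sol.y[0] + R1 * np.array([inflow(t) for t in sol.t])
print("outlet pressure at the final time:", outlet_pressure[-1])
```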
822 Sensor and Actuator Fault Detection in Connected Vehicles under a Packet Dropping Network
Authors: Z. Abdollahi Biron, P. Pisu
Abstract:
Connected vehicles are one of the promising technologies for future Intelligent Transportation Systems (ITS). A connected vehicle system is essentially a set of vehicles communicating through a network to exchange their information with each other and the infrastructure. Although this interconnection of the vehicles can be potentially beneficial in creating an efficient, sustainable, and green transportation system, a set of safety and reliability challenges come out with this technology. The first challenge arises from the information loss due to unreliable communication network which affects the control/management system of the individual vehicles and the overall system. Such scenario may lead to degraded or even unsafe operation which could be potentially catastrophic. Secondly, faulty sensors and actuators can affect the individual vehicle’s safe operation and in turn will create a potentially unsafe node in the vehicular network. Further, sending that faulty sensor information to other vehicles and failure in actuators may significantly affect the safe operation of the overall vehicular network. Therefore, it is of utmost importance to take these issues into consideration while designing the control/management algorithms of the individual vehicles as a part of connected vehicle system. In this paper, we consider a connected vehicle system under Co-operative Adaptive Cruise Control (CACC) and propose a fault diagnosis scheme that deals with these aforementioned challenges. Specifically, the conventional CACC algorithm is modified by adding a Kalman filter-based estimation algorithm to suppress the effect of lost information under unreliable network. Further, a sliding mode observer-based algorithm is used to improve the sensor reliability under faults. The effectiveness of the overall diagnostic scheme is verified via simulation studies.Keywords: fault diagnostics, communication network, connected vehicles, packet drop out, platoon
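The packet-loss-tolerant estimation idea can be sketched as a Kalman filter that predicts every control cycle and only performs the measurement update when a V2V packet arrives; the vehicle model, noise covariances, and 30% drop rate below are assumed values, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model of the preceding vehicle
H = np.array([[1.0, 0.0]])               # only position is broadcast over V2V
Q = 0.05 * np.eye(2)                     # process noise covariance (assumed)
R = np.array([[0.5]])                    # measurement noise covariance (assumed)

x_true = np.array([0.0, 20.0])           # true [position m, speed m/s]
x_hat, P = np.array([0.0, 18.0]), np.eye(2)

for k in range(200):
    x_true = F @ x_true + rng.multivariate_normal([0, 0], Q)
    # Prediction runs every control cycle, so CACC always has a spacing estimate
    x_hat = F @ x_hat
    P = F @ P @ F.T + Q
    if rng.random() > 0.3:               # packet received (30 % drop rate assumed)
        z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_hat = x_hat + K @ (z - H @ x_hat)
        P = (np.eye(2) - K @ H) @ P
    # On a dropped packet the filter simply coasts on the prediction

print("final estimation error [m, m/s]:", x_true - x_hat)
```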
821 Golden Dawn's Rhetoric on Social Networks: Populism, Xenophobia and Antisemitism
Authors: Georgios Samaras
Abstract:
New media such as Facebook, YouTube, and Twitter introduced the world to a new era of instant communication, an era in which online interactions can replace many offline actions. Technology can create a mediated environment in which participants communicate (one-to-one, one-to-many, and many-to-many) both synchronously and asynchronously and participate in reciprocal message exchanges. Currently, social networks attract academic attention similar to that received by the internet after its mainstream implementation into public life. Websites and platforms are seen as being at the forefront of a new political change. There is a significant backdrop of previous methodologies employed to research the effects of social networks, and new approaches are being developed to adapt to the growth of social networks and the invention of new platforms. Golden Dawn was the first openly neo-Nazi party post World War II to win seats in the parliament of a European country. Its racist rhetoric and violent tactics on social networks were rewarded by its supporters, who saw in Golden Dawn's leaders a 'new dawn' in Greek politics. Mainstream media banned the party's leaders and members indefinitely after Ilias Kasidiaris attacked Liana Kanelli, a member of the Greek Communist Party, on live television. This media ban was seen as a treasonous move by a significant percentage of voters, who believed that the system was desperately trying to censor Golden Dawn in order to favor mainstream parties. The shocking attack on live television received international coverage, and while European countries were condemning this newly emerged neo-Nazi rhetoric, almost 7 percent of the Greek electorate rewarded Golden Dawn with 18 seats in the Greek parliament. Many seem to think that Golden Dawn mobilised its voters online and that this approach played a significant role in spreading its message and appealing to wider audiences. No strict online censorship existed back in 2012, and although Golden Dawn openly used neo-Nazi symbolism, it was allowed to use social networks without serious restrictions until 2017. This paper used qualitative methods to investigate Golden Dawn's rise on social networks from 2012 to 2019. The content analysis focused on three social networking platforms, Facebook, Twitter, and YouTube, while the existence of Golden Dawn's website, which was used as a news-sharing hub, was also taken into account. The content analysis included text and visual analyses that sampled content from the party's social networking pages to interpret its political messaging through an ideological lens focused on extreme-right populism. The absence of hate speech regulations on social network platforms in 2012 allowed the free expression of the heavily ultranationalist and populist views employed by Golden Dawn in the Greek political scene. On YouTube, Facebook, and Twitter, the influence of this rhetoric was particularly strong; official channels and MPs' profiles were investigated to explore the messaging in depth and understand its ideological elements.
Keywords: populism, far-right, social media, Greece, golden dawn
820 A Case Report on Cognitive-Communication Intervention in Traumatic Brain Injury
Authors: Nikitha Francis, Anjana Hoode, Vinitha George, Jayashree S. Bhat
Abstract:
The interaction between cognition and language, referred as cognitive-communication, is very intricate, involving several mental processes such as perception, memory, attention, lexical retrieval, decision making, motor planning, self-monitoring and knowledge. Cognitive-communication disorders are difficulties in communicative competencies that result from underlying cognitive impairments of attention, memory, organization, information processing, problem solving, and executive functions. Traumatic brain injury (TBI) is an acquired, non - progressive condition, resulting in distinct deficits of cognitive communication abilities such as naming, word-finding, self-monitoring, auditory recognition, attention, perception and memory. Cognitive-communication intervention in TBI is individualized, in order to enhance the person’s ability to process and interpret information for better functioning in their family and community life. The present case report illustrates the cognitive-communicative behaviors and the intervention outcomes of an adult with TBI, who was brought to the Department of Audiology and Speech Language Pathology, with cognitive and communicative disturbances, consequent to road traffic accident. On a detailed assessment, she showed naming deficits along with perseverations and had severe difficulty in recalling the details of the accident, her house address, places she had visited earlier, names of people known to her, as well as the activities she did each day, leading to severe breakdowns in her communicative abilities. She had difficulty in initiating, maintaining and following a conversation. She also lacked orientation to time and place. On administration of the Manipal Manual of Cognitive Linguistic Abilities (MMCLA), she exhibited poor performance on tasks related to visual and auditory perception, short term memory, working memory and executive functions. She attended 20 sessions of cognitive-communication intervention which followed a domain-general, adaptive training paradigm, with tasks relevant to everyday cognitive-communication skills. Compensatory strategies such as maintaining a dairy with reminders of her daily routine, names of people, date, time and place was also recommended. MMCLA was re-administered and her performance in the tasks showed significant improvements. Occurrence of perseverations and word retrieval difficulties reduced. She developed interests to initiate her day-to-day activities at home independently, as well as involve herself in conversations with her family members. Though she lacked awareness about her deficits, she actively involved herself in all the therapy activities. Rehabilitation of moderate to severe head injury patients can be done effectively through a holistic cognitive retraining with a focus on different cognitive-linguistic domains. Selection of goals and activities should have relevance to the functional needs of each individual with TBI, as highlighted in the present case report.Keywords: cognitive-communication, executive functions, memory, traumatic brain injury
819 Effects of Inlet Filtration Pressure Loss on Single and Two-Spool Gas Turbine
Authors: Enyia James Diwa, Dodeye Ina Igbong, Archibong Archibong Eso
Abstract:
Gas turbine operators have faced dramatic financial setbacks resulting from compressor fouling. In a highly deregulated power industry with stiff market competition, it has become imperative to find ways of reducing maintenance costs in order to yield maximum profit. Compressor fouling results from the deposition of contaminants, in the presence of oil and moisture, on the compressor blade or annulus surfaces, which leads to a loss in flow capacity and compressor efficiency. These combined effects reduce power output, increase heat rate, and cause creep life reduction. This paper also presents models of two gas turbine engines built with the Cranfield University software TURBOMATCH, a simulation tool for assessing engine fouling rates. The modelled engines have different configurations and capacities and operate in two modes, constant output power and constant turbine inlet temperature (TET), with two- and three-stage inlet filter systems. The aim is to investigate which filtration system is more economically viable for gas turbine users on the basis of performance alone. The results demonstrate that the two-spool engine is slightly more beneficial than the single-spool engine. This is a result of the higher pressure ratio of the two-spool machine, as well as the deceleration of the high-pressure compressor and high-pressure turbine speed at constant TET. Meanwhile, the inlet filtration system was properly designed and balanced with a well-timed and economical compressor washing regime to control compressor fouling. The different technologies of inlet air filtration and compressor washing are considered, and an attempt is made at optimization with respect to the cost of a combination of both control measures.
Keywords: inlet filtration, pressure loss, single spool, two spool
818 Navigating Complex Communication Dynamics in Qualitative Research
Authors: Kimberly M. Cacciato, Steven J. Singer, Allison R. Shapiro, Julianna F. Kamenakis
Abstract:
This study examines the dynamics of communication among researchers and participants who have various levels of hearing, use multiple languages, have various disabilities, and who come from different social strata. This qualitative methodological study focuses on the strategies employed in an ethnographic research study examining the communication choices of six sets of parents who have Deaf-Disabled children. The participating families varied in their communication strategies and preferences including the use of American Sign Language (ASL), visual-gestural communication, multiple spoken languages, and pidgin forms of each of these. The research team consisted of two undergraduate students proficient in ASL and a Deaf principal investigator (PI) who uses ASL and speech as his main modes of communication. A third Hard-of-Hearing undergraduate student fluent in ASL served as an objective facilitator of the data analysis. The team created reflexive journals by audio recording, free writing, and responding to team-generated prompts. They discussed interactions between the members of the research team, their evolving relationships, and various social and linguistic power differentials. The researchers reflected on communication during data collection, their experiences with one another, and their experiences with the participating families. Reflexive journals totaled over 150 pages. The outside research assistant reviewed the journals and developed follow up open-ended questions and prods to further enrich the data. The PI and outside research assistant used NVivo qualitative research software to conduct open inductive coding of the data. They chunked the data individually into broad categories through multiple readings and recognized recurring concepts. They compared their categories, discussed them, and decided which they would develop. The researchers continued to read, reduce, and define the categories until they were able to develop themes from the data. The research team found that the various communication backgrounds and skills present greatly influenced the dynamics between the members of the research team and with the participants of the study. Specifically, the following themes emerged: (1) students as communication facilitators and interpreters as barriers to natural interaction, (2) varied language use simultaneously complicated and enriched data collection, and (3) ASL proficiency and professional position resulted in a social hierarchy among researchers and participants. In the discussion, the researchers reflected on their backgrounds and internal biases of analyzing the data found and how social norms or expectations affected the perceptions of the researchers in writing their journals. Through this study, the research team found that communication and language skills require significant consideration when working with multiple and complex communication modes. The researchers had to continually assess and adjust their data collection methods to meet the communication needs of the team members and participants. In doing so, the researchers aimed to create an accessible research setting that yielded rich data but learned that this often required compromises from one or more of the research constituents.Keywords: American Sign Language, complex communication, deaf-disabled, methodology
817 Nuclear Fuel Safety Threshold Determined by Logistic Regression Plus Uncertainty
Authors: D. S. Gomes, A. T. Silva
Abstract:
Analysis of the uncertainty quantification related to the nuclear safety margins applied to a nuclear reactor is an important step toward preventing future radioactive accidents. Nuclear fuel performance codes typically rely on tolerance levels determined by traditional deterministic models, which produce acceptable results for burnup cycles under 62 GWd/MTU. The behavior of nuclear fuel can be simulated by applying a series of material properties under irradiation together with physics models to calculate the safety limits. In this study, theoretical predictions of nuclear fuel failure under transient conditions are investigated for extended irradiation cycles up to 75 GWd/MTU, considering the behavior of fuel rods in light-water reactors under reactivity-initiated accident conditions. The fuel pellet can melt due to the rapid increase of reactivity during a transient. Large power excursions in the reactor are the subject of interest, leading to a treatment known as the Fuchs-Hansen model. The point kinetics neutron equations exhibit the characteristics of non-linear differential equations. In this investigation, multivariate logistic regression is employed for the probabilistic forecast of fuel failure, and the agreement between the computational simulation and the experimental results was acceptable. The experiments used pre-irradiated fuel rods subjected to a rapid energy pulse, which reproduces the behavior expected during a nuclear accident. The propagation of uncertainty utilizes the Wilks formulation. The variables chosen as essential to failure prediction were the fuel burnup, the applied peak power, the pulse width, the oxidation layer thickness, and the cladding type.
Keywords: logistic regression, reactivity-initiated accident, safety margins, uncertainty propagation
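A minimal version of the multivariate logistic-regression failure forecast might look like the Python sketch below; the synthetic predictor ranges, the rule generating the labels, and the example 75 GWd/MTU case are fabricated purely to exercise the model and carry no physical meaning.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 500
# Synthetic records of the predictors named in the abstract (units are indicative only)
burnup = rng.uniform(20, 80, n)          # GWd/MTU
peak_power = rng.uniform(50, 200, n)     # pulse energy deposition
pulse_width = rng.uniform(10, 80, n)     # ms
oxide = rng.uniform(5, 100, n)           # oxidation layer thickness, micrometres
X = np.column_stack([burnup, peak_power, pulse_width, oxide])

# Synthetic failure labels from an assumed underlying rule, only to exercise the model
logit = -12 + 0.06 * burnup + 0.04 * peak_power - 0.02 * pulse_width + 0.03 * oxide
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
case = np.array([[75.0, 150.0, 30.0, 60.0]])      # a hypothetical 75 GWd/MTU candidate case
print("predicted failure probability:", model.predict_proba(case)[0, 1])
```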
Procedia PDF Downloads 289816 Seismic Perimeter Surveillance System (Virtual Fence) for Threat Detection and Characterization Using Multiple ML Based Trained Models in Weighted Ensemble Voting
Authors: Vivek Mahadev, Manoj Kumar, Neelu Mathur, Brahm Dutt Pandey
Abstract:
Perimeter guarding and protection of critical installations require prompt intrusion detection and assessment to take effective countermeasures. Currently, visual and electronic surveillance are the primary methods used for perimeter guarding. These methods can be costly and complicated, requiring careful planning according to the location and terrain. Moreover, these methods often struggle to detect stealthy and camouflaged insurgents. The objective of the present work is to devise a surveillance technique using seismic sensors that overcomes the limitations of existing systems. The aim is to improve intrusion detection, assessment, and characterization by utilizing seismic sensors. Most comparable systems can only distinguish between two types of intrusion, viz., human or vehicle. In our work, intrusions could be further categorized into activities such as walking, running, group walking, fence jumping, tunnel digging, and vehicular movements. A virtual fence of 60 meters at GCNEP, Bahadurgarh, Haryana, India, was created by installing four underground geophones at intervals of 15 meters. The signals received from these geophones are then processed to find unique seismic signatures called features. Various feature optimization and selection methodologies, such as LightGBM, Boruta, Random Forest, Logistics, Recursive Feature Elimination, Chi-2 and Pearson Ratio, were used to identify the best features for training the machine learning models. The models were trained using supervised algorithms such as the support vector machine (SVM) classifier, kNN, Decision Tree, Logistic Regression, Naïve Bayes, and Artificial Neural Networks. These models were then used to predict the category of events, employing weighted ensemble voting to analyze and combine their results. The models were trained on 1940 training events, and the results were evaluated on 831 test events. It was observed that weighted ensemble voting increased the prediction efficiency. In this study, we successfully developed and deployed the virtual fence using geophones. Since these sensors are passive, do not radiate any energy, and are installed underground, it is very difficult for intruders to locate and nullify them. Their flexibility, quick and easy installation, low costs, hidden deployment, and unattended surveillance make such systems especially suitable for critical installations and remote facilities with difficult terrain. This work demonstrates the potential of utilizing seismic sensors for creating better perimeter guarding and protection systems using multiple machine learning models in weighted ensemble voting. In this study, the virtual fence achieved an intruder detection efficiency of over 97%.Keywords: geophone, seismic perimeter surveillance, machine learning, weighted ensemble method
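As a rough illustration of the weighted ensemble voting step, the sketch below combines the classifier families named in the abstract with soft voting over placeholder features; the feature matrix, class count, and voting weights are assumptions, not the deployed GCNEP system.

```python
# Illustrative sketch: weighted soft-voting ensemble over seismic-style features.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder feature matrix standing in for geophone-derived features
# (the study used 1940 training and 831 test events; six activity classes assumed).
X, y = make_classification(n_samples=2771, n_features=20, n_informative=12,
                           n_classes=6, n_clusters_per_class=1, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=831, random_state=1)

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("knn", KNeighborsClassifier()),
        ("tree", DecisionTreeClassifier()),
        ("logreg", LogisticRegression(max_iter=1000)),
        ("nb", GaussianNB()),
        ("ann", MLPClassifier(max_iter=1000)),
    ],
    voting="soft",
    weights=[3, 2, 1, 2, 1, 3],  # hypothetical weights, e.g. from validation accuracy
)
ensemble.fit(X_tr, y_tr)
print("test accuracy:", ensemble.score(X_te, y_te))
```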
Procedia PDF Downloads 78815 Numerical Simulation of Axially Loaded to Failure Large Diameter Bored Pile
Authors: M. Ezzat, Y. Zaghloul, T. Sorour, A. Hefny, M. Eid
Abstract:
The ultimate capacity of large diameter bored piles is usually determined from pile loading tests, as recommended by several international codes and foundation design standards. However, piles of this type are seldom loaded until apparent failure is reached. In this paper, numerical analyses are carried out to simulate a load test of a large diameter bored pile performed at the location of the Alzey highway bridge project (Germany). Test results of the pile load-settlement relationship up to failure, as well as the measured base and shaft resistances, are available. Apparent failure was indicated in this test by the significant increase of the induced settlement during the last load increment applied to the pile head. Measurements of this pile load test are used to assess the quality of the numerical models investigated. Three different soil constitutive models are implemented in the analyses: Mohr-Coulomb (MC), Soft Soil (SS), and Modified Mohr-Coulomb (MMC). Very good agreement is obtained between the field-measured settlement and the settlement calculated using the MMC model. The analysis results also show that the MMC constitutive model is superior to the MC and SS models in predicting the ultimate base and shaft resistances of the large diameter bored pile. After calibrating the numerical model, the behavior of large diameter bored piles under axial loads is discussed, and the formation of the plastic zone around the pile is explored. The results show that the plastic zone below the base of the pile at failure extends laterally to about four times the pile diameter and vertically to about three times the pile diameter.Keywords: ultimate capacity, large diameter bored piles, plastic zone, failure, pile load test
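A small sketch of the kind of post-processing implied by the abstract is given below: comparing a computed load-settlement curve with field measurements, flagging apparent failure from the jump in settlement per load increment, and expressing the plastic-zone extents as multiples of the pile diameter. All numbers are illustrative, not the Alzey test data.

```python
# Minimal sketch with illustrative numbers only (not the Alzey measurements).
import numpy as np

# Hypothetical pile-head load steps (kN) and settlements (mm).
load = np.array([0, 1000, 2000, 3000, 4000, 5000])
s_measured = np.array([0.0, 2.1, 5.0, 9.4, 16.8, 32.5])
s_computed = np.array([0.0, 2.4, 5.3, 9.0, 15.9, 30.8])   # e.g. from an MMC run

rmse = np.sqrt(np.mean((s_computed - s_measured) ** 2))
print(f"RMSE of settlement prediction: {rmse:.2f} mm")

# Apparent failure flagged at the load step with the largest settlement increment.
increments = np.diff(s_measured)
failure_step = int(np.argmax(increments)) + 1
print("apparent failure at load step:", load[failure_step], "kN")

# Plastic-zone extents reported at failure (abstract: ~4D lateral, ~3D vertical).
D = 1.5  # pile diameter in metres (assumed value for illustration)
print("lateral extent ~", 4 * D, "m; vertical extent below base ~", 3 * D, "m")
```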
Procedia PDF Downloads 142814 Mathematical Modelling, Simulation and Prototype Designing of Potable Water System on Basis of Forward Osmosis
Authors: Ridhish Kumar, Sudeep Nadukkandy, Anirban Roy
Abstract:
Reverse osmosis was developed in 1960. Over the years, this technique has been widely adopted all over the world for applications ranging from seawater desalination to municipal water treatment. Forward osmosis (FO) is one of the foremost low-energy technologies for water purification. In this study, we have carried out a detailed analysis of the selection, design, and pricing of a prototype potable water system for purifying water in emergency situations. The portable, lightweight purification system is envisaged to be driven by FO. This pouch will serve as an emergency water filtration device. The current effort employs a model to understand the interplay of membrane permeability and area on the rate of purification of water from an impure or brackish source. The draw solution for the FO pouch is a combination of salt and sugar, such that its dilution yields an oral rehydration solution (ORS), a boon for dehydrated patients. The effort goes a step further to estimate the cost and pricing of such a prototype. While the mathematical model yields the best membrane combination in terms of permeability and area (compositions are taken from the literature), the pricing analysis considers the feasibility of offering such a solution as a retail item. The product is envisaged as a market competitor to the combination of packaged drinking water and ORS (costing around $0.5 combined) and thus, to be feasible, has to be priced in the same range with margins large enough to support distribution. A business and production plan has therefore been formulated so that the product can serve as a feasible solution in calamities and emergency situations.Keywords: forward osmosis, water treatment, oral rehydration solution, prototype
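The permeability-area interplay can be sketched with a simple solution-diffusion flux model, J_w = A(π_draw − π_feed), with van't Hoff osmotic pressures for the salt-sugar draw. The parameter values below are assumed for illustration and ignore concentration polarization, so they are not the prototype's design figures.

```python
# Minimal FO sketch: flux, production rate, and time to dilute the ORS draw.
R, T = 0.08314, 298.0          # L*bar/(mol*K), K

def osmotic_pressure(molarities, vant_hoff_factors):
    """van't Hoff estimate: pi = sum(i * M) * R * T, in bar."""
    return sum(i * M for i, M in zip(vant_hoff_factors, molarities)) * R * T

pi_draw = osmotic_pressure([0.3, 0.4], [2.0, 1.0])   # NaCl + glucose draw (assumed)
pi_feed = osmotic_pressure([0.02], [2.0])            # brackish feed (assumed)

A_membrane = 1.5     # water permeability, L/(m^2*h*bar) (assumed literature-range value)
area = 0.05          # pouch membrane area, m^2 (assumed)

flux = A_membrane * (pi_draw - pi_feed)   # L/(m^2*h), ideal (no polarization)
rate = flux * area                        # L/h drawn into the ORS pouch
print(f"flux = {flux:.1f} LMH, production rate = {rate:.2f} L/h")
print(f"time to produce 1 L of diluted ORS: {1.0 / rate:.1f} h")
```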
Procedia PDF Downloads 182813 Evaluation of Green Infrastructure with Different Woody Plants Practice and Benefit Using the Stormwater Management-HYDRUS Model
Authors: Bei Zhang, Zhaoxin Zhang, Lidong Zhao
Abstract:
Green infrastructures (GIs) for rainwater management can directly meet the multiple purposes of urban greening and non-point source pollution control. To reveal the overall layout rules of GIs dominated by typical woody plants and their impact on the urban environment, we constructed a coupled HYDRUS-1D and Storm Water Management Model (SWMM) to simulate the response of urban hydrology to planting methods of woody plants with typical root systems. The results showed that the coupled model simulated the urban surface runoff control effect well under different woody plant planting methods (NSE ≥ 0.64 and R² ≥ 0.71). For the design rainfall event with a 2-year recurrence interval, the average runoff reduction rate of the GIs increased from 60% to 71% as the planting area increased from 5% to 25%. The reduction for Sophora japonica, which has tap roots, was slightly higher than that of the no-plant control and of Malus baccata (M. baccata), which has fibrous roots. A comprehensive benefit evaluation system for rainwater utilization technology was constructed using the analytic hierarchy process. The coupled model was used to evaluate the comprehensive environmental, economic, and social benefits of woody plants with different planting areas in the study area. The comprehensive benefit value of planting 15% M. baccata was the highest, making it the first choice for woody plant planting in the study area. This study can provide a scientific basis for decision-making on green facility layouts with woody plants.Keywords: green infrastructure, comprehensive benefits, runoff regulation, woody plant layout, coupling model
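The adequacy metrics and the runoff reduction rate quoted above can be reproduced with a few lines of post-processing; the sketch below uses synthetic hydrographs rather than the study's SWMM-HYDRUS output.

```python
# Illustrative sketch: NSE, R^2, and runoff reduction rate from hydrograph pairs.
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

def runoff_reduction_rate(runoff_no_gi, runoff_with_gi):
    """Fraction of total runoff volume removed by the green infrastructure."""
    return 1.0 - np.sum(runoff_with_gi) / np.sum(runoff_no_gi)

# Synthetic 2-year design-storm hydrographs (m^3/s), for illustration only.
obs = np.array([0.0, 0.4, 1.2, 2.0, 1.5, 0.8, 0.3, 0.1])
sim = np.array([0.0, 0.5, 1.1, 1.9, 1.6, 0.7, 0.3, 0.1])
no_gi = obs
with_gi = 0.35 * obs   # e.g. a hypothetical planting scheme cutting runoff volume

print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
print(f"R^2 = {np.corrcoef(obs, sim)[0, 1] ** 2:.2f}")
print(f"runoff reduction rate = {runoff_reduction_rate(no_gi, with_gi):.0%}")
```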
Procedia PDF Downloads 68812 Auto Calibration and Optimization of Large-Scale Water Resources Systems
Authors: Arash Parehkar, S. Jamshid Mousavi, Shoubo Bayazidi, Vahid Karami, Laleh Shahidi, Arash Azaranfar, Ali Moridi, M. Shabakhti, Tayebeh Ariyan, Mitra Tofigh, Kaveh Masoumi, Alireza Motahari
Abstract:
Water resource systems modelling has been a constant challenge throughout history. As methodological innovation evolves alongside computer science, researchers are likely to confront larger and more complex water resources systems owing to new challenges regarding increased water demands, climate change and human interventions, socio-economic concerns, and environmental protection and sustainability. In this research, an automatic calibration scheme has been applied to the Gilan large-scale water resource model using mathematical programming. The calibration is developed in order to estimate unknown water return flows from demand sites in the complex Sefidroud irrigation network and other related areas. The calibration procedure is validated by comparing several historical gauged river outflows from the system with model results. The calibration results are reasonable and provide a rational insight into the system. Subsequently, the optimized parameters were used in a basin-scale linear optimization model capable of evaluating the system's performance against a reduced-inflow scenario in the future. Results showed an acceptable match between predicted and observed outflows from the system at selected hydrometric stations. Moreover, an efficient operating policy was determined for the Sefidroud dam, leading to minimum water shortage in the reduced-inflow scenario.Keywords: auto-calibration, Gilan, large-scale water resources, simulation
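A conceptual sketch of the calibration idea, reduced to a toy reach mass balance: unknown return-flow fractions from demand sites are estimated by least squares so that simulated outflows match gauged outflows. The actual study solves this with mathematical programming on the full Gilan model; the data and two-site structure below are assumptions.

```python
# Conceptual sketch: calibrating return-flow fractions against gauged outflows.
import numpy as np
from scipy.optimize import least_squares

# Toy monthly data: inflow to the reach and diversions to two demand sites (MCM).
inflow = np.array([120.0, 95.0, 80.0, 60.0, 150.0, 200.0])
diversion = np.array([[40.0, 35.0, 30.0, 25.0, 45.0, 50.0],   # site 1
                      [20.0, 18.0, 15.0, 12.0, 22.0, 25.0]])  # site 2
gauged_outflow = np.array([92.0, 72.0, 61.0, 45.0, 116.0, 157.0])

def simulated_outflow(return_fractions):
    # Outflow = inflow - diversions + return flows from each demand site.
    returns = (return_fractions[:, None] * diversion).sum(axis=0)
    return inflow - diversion.sum(axis=0) + returns

def residuals(return_fractions):
    return simulated_outflow(np.asarray(return_fractions)) - gauged_outflow

result = least_squares(residuals, x0=[0.3, 0.3], bounds=(0.0, 1.0))
print("calibrated return-flow fractions:", np.round(result.x, 2))
print("max absolute outflow error (MCM):", float(np.abs(result.fun).max()))
```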
Procedia PDF Downloads 333811 CFD Study of Subcooled Boiling Flow at Elevated Pressure Using a Mechanistic Wall Heat Partitioning Model
Authors: Machimontorn Promtong, Sherman C. P. Cheung, Guan H. Yeoh, Sara Vahaji, Jiyuan Tu
Abstract:
The wide range of industrial applications involving boiling flows makes it necessary to establish fundamental knowledge of boiling flow phenomena. For this purpose, a number of experimental and numerical studies have been performed to elucidate the underlying physics of this flow. In this paper, improved wall boiling models, implemented in ANSYS CFX 14.5, are introduced to study subcooled boiling flow at elevated pressure. At the heated wall boundary, the fractal model, the force balance approach, and the mechanistic frequency model are used to predict the nucleation site density, bubble departure diameter, and bubble departure frequency, respectively. The wall heat flux partitioning closures were modified to consider the influence of bubble sliding along the wall before lift-off, which usually occurs in flow boiling. The simulation was performed using the two-fluid model, with the SST k-ω model selected for turbulence modelling. Existing experimental data at around 5 bar were chosen to evaluate the accuracy of the presented mechanistic approach. The predicted void fraction and interfacial area concentration (IAC) are in good agreement with the experimental data. However, the bubble velocity and Sauter mean diameter (SMD) are over-predicted. This over-prediction may be caused by considering only dispersed, spherical bubbles in the simulations. In future work, important bubble mechanisms such as merging and shrinking during sliding on the heated wall will be incorporated into this mechanistic model to enhance its capability over a wider range of flow conditions.Keywords: subcooled boiling flow, computational fluid dynamics (CFD), mechanistic approach, two-fluid model
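For readers unfamiliar with wall heat flux partitioning, the sketch below evaluates the baseline RPI-style split into evaporation, quenching, and convection terms that the paper's modified closures build on. The closure values (nucleation site density, departure diameter, frequency) are placeholders rather than the fractal, force-balance, and mechanistic-frequency predictions, and the bubble-sliding extension is not included.

```python
# Minimal sketch of a baseline RPI-style wall heat flux partitioning.
import math

# Water/steam properties near 5 bar (approximate, for illustration).
rho_v = 2.7          # vapour density, kg/m^3
h_fg = 2.108e6       # latent heat, J/kg
rho_l, cp_l, k_l = 915.0, 4310.0, 0.68   # liquid density, heat capacity, conductivity

# Placeholder wall-boiling closures (the paper predicts these mechanistically).
N_a = 5.0e5          # nucleation site density, sites/m^2
d_bw = 0.6e-3        # bubble departure (lift-off) diameter, m
f = 80.0             # bubble departure frequency, 1/s
h_conv = 9.0e3       # single-phase convective coefficient, W/(m^2*K)
dT_wall = 12.0       # wall superheat above bulk liquid, K
K_infl = 4.0         # bubble influence factor (commonly taken as 4)

# Area fraction influenced by bubbles (capped at 1), then the three components.
A_b = min(1.0, K_infl * math.pi / 4.0 * d_bw ** 2 * N_a)
q_evap = math.pi / 6.0 * d_bw ** 3 * f * N_a * rho_v * h_fg
# Quenching term with waiting time approximated as 1/f.
q_quench = A_b * 2.0 * f * dT_wall * math.sqrt(k_l * rho_l * cp_l / (math.pi * f))
q_conv = (1.0 - A_b) * h_conv * dT_wall

print(f"q_evap   = {q_evap/1e3:8.1f} kW/m^2")
print(f"q_quench = {q_quench/1e3:8.1f} kW/m^2")
print(f"q_conv   = {q_conv/1e3:8.1f} kW/m^2")
print(f"q_total  = {(q_evap + q_quench + q_conv)/1e3:8.1f} kW/m^2")
```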
Procedia PDF Downloads 317810 Designing an Exhaust Gas Energy Recovery Module Following Measurements Performed under Real Operating Conditions
Authors: Jerzy Merkisz, Pawel Fuc, Piotr Lijewski, Andrzej Ziolkowski, Pawel Czarkowski
Abstract:
The paper presents preliminary results of the development of an automotive exhaust gas energy recovery module. The aim of the analyses was to select a heat exchanger geometry that would ensure the highest possible heat transfer with minimum heat flow losses. The starting point for the analyses was a straight portion of the pipe from which the exhaust system of the tested vehicle was made. The heat exchanger design had a cylindrical cross-section, was 300 mm long, and was fitted with a diffuser and a confusor. The modelling work was performed for this geometry using the finite volume method in the Ansys CFX v12.1 and v14 software. This method consists in dividing the system into small control volumes, for which the exhaust gas velocity and pressure are calculated using the Navier-Stokes equations. The heat exchange in the system was modeled based on the enthalpy balance. The temperature rise resulting from viscous dissipation was not taken into account. The heat transfer at the fluid/solid boundary in the wall layer with turbulent flow was modeled based on an arbitrarily adopted dimensionless temperature. The boundary conditions adopted in the analyses included the convective condition of heat transfer on the outer surface of the heat exchanger and the mass flow and temperature of the exhaust gas at the inlet. The mass flow and temperature of the exhaust gas were assumed based on the measurements performed in actual traffic using portable PEMS analyzers. The research object was a passenger vehicle fitted with a 1.9 dm³ 85 kW diesel engine. The tests were performed in city traffic conditions.Keywords: waste heat recovery, heat exchanger, CFD simulation, PEMS
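The recoverable heat implied by the enthalpy balance can be estimated directly from PEMS-type inputs (exhaust mass flow and temperature); the sketch below uses assumed city-traffic operating points and an assumed exchanger outlet temperature, not the measured data.

```python
# Back-of-the-envelope sketch: Q = m_dot * cp * (T_in - T_out) at a few operating points.
exhaust_points = [
    # (mass flow kg/s, exhaust inlet temperature C) - assumed city-traffic points
    (0.010, 180.0),
    (0.025, 320.0),
    (0.040, 450.0),
]
cp_exhaust = 1090.0   # J/(kg*K), approximate for diesel exhaust gas
T_out = 120.0         # assumed exhaust temperature after the exchanger, C
T_coolant_in = 80.0   # assumed cold-side inlet temperature, C

for m_dot, T_in in exhaust_points:
    q_recovered = m_dot * cp_exhaust * (T_in - T_out)      # W
    q_max = m_dot * cp_exhaust * (T_in - T_coolant_in)     # W (ideal limit)
    effectiveness = q_recovered / q_max if q_max > 0 else 0.0
    print(f"m_dot={m_dot:.3f} kg/s, T_in={T_in:.0f} C -> "
          f"Q={q_recovered/1000:.2f} kW, effectiveness={effectiveness:.2f}")
```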
Procedia PDF Downloads 573809 An Industrial Steady State Sequence Disorder Model for Flow Controlled Multi-Input Single-Output Queues in Manufacturing Systems
Authors: Anthony John Walker, Glen Bright
Abstract:
The challenge faced by manufacturers when producing custom products is that each product needs exact components. This can cause work-in-process instability due to component matching constraints imposed on assembly cells. Clearing-type flow control policies have been used extensively to mediate server access between multiple arrival processes. Although the stability and performance of clearing policies have been well formulated and studied in the literature, the growth in arrival-to-departure sequence disorder for each arriving job, across a serving resource, is still an area for further analysis. In this paper, a closed-form industrial model has been formulated that characterizes arrival-to-departure sequence disorder through stable manufacturing systems under a clearing-type flow control policy. Specifically addressed are the effects of sequence disorder imposed on a downstream assembly cell in terms of work-in-process instability induced through component matching constraints. Results from a simulated manufacturing system show that steady-state average sequence disorder in parallel upstream processing cells can be balanced in order to decrease downstream assembly system instability. Simulation results also show that the closed-form model accurately describes the growth and limiting behavior of average sequence disorder between parts arriving at and departing from a manufacturing system flow-controlled via a clearing policy.Keywords: assembly system constraint, custom products, discrete sequence disorder, flow control
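To make the notion of arrival-to-departure sequence disorder concrete, the toy simulation below feeds jobs from several upstream cells to a single server operating under a clearing policy and measures the mean displacement between each job's arrival rank and its departure position. It is an illustrative discrete-event sketch, not the paper's closed-form model.

```python
# Toy sketch: sequence disorder under a clearing policy for a multi-input, single-output queue.
import random
from statistics import mean

random.seed(42)
NUM_CELLS, JOBS_PER_CELL = 3, 200

# Global arrival order: jobs tagged with their upstream cell, interleaved at random.
jobs = [(cell,) for cell in range(NUM_CELLS) for _ in range(JOBS_PER_CELL)]
random.shuffle(jobs)
arrivals = [(rank, cell) for rank, (cell,) in enumerate(jobs)]

# Clearing policy: the server repeatedly selects one upstream queue and clears
# it completely (FIFO within the queue) before switching to another queue.
queues = {cell: [] for cell in range(NUM_CELLS)}
departure_order, pointer = [], 0
while pointer < len(arrivals) or any(queues.values()):
    # A small random batch of new arrivals joins their upstream queues.
    for _ in range(random.randint(1, 10)):
        if pointer < len(arrivals):
            rank, cell = arrivals[pointer]
            queues[cell].append(rank)
            pointer += 1
    # Clear the currently longest queue.
    cell = max(queues, key=lambda c: len(queues[c]))
    departure_order.extend(queues[cell])
    queues[cell] = []

disorder = mean(abs(dep_pos - arr_rank)
                for dep_pos, arr_rank in enumerate(departure_order))
print(f"average arrival-to-departure sequence disorder: {disorder:.1f} positions")
```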
Procedia PDF Downloads 177