Search results for: space velocity
665 A Study on ZnO Nanoparticles Properties: An Integration of Rietveld Method and First-Principles Calculation
Authors: Kausar Harun, Ahmad Azmin Mohamad
Abstract:
Zinc oxide (ZnO) has been extensively used in optoelectronic devices, with recent interest as a photoanode material in dye-sensitized solar cells. Numerous methods have been employed to synthesize ZnO experimentally, while others model it theoretically. Both approaches provide information on ZnO properties, but theoretical calculation has proved to be more accurate and time-effective. Thus, integration between these two methods is essential to closely resemble the properties of synthesized ZnO. In this study, experimentally-grown ZnO nanoparticles were prepared by the sol-gel storage method with zinc acetate dihydrate and methanol as precursor and solvent. A 1 M sodium hydroxide (NaOH) solution was used as stabilizer. The optimum time to produce ZnO nanoparticles was recorded as 12 hours. Phase and structural analysis showed that single-phase ZnO was produced with a wurtzite hexagonal structure. Further work on quantitative analysis was done via the Rietveld-refinement method to obtain structural and crystallite parameters such as lattice dimensions, space group, and atomic coordination. The lattice dimensions were a=b=3.2498 Å and c=5.2068 Å, which were later used as the main input in first-principles calculations. By applying density-functional theory (DFT) embedded in the CASTEP computer code, the structure of synthesized ZnO was built and optimized using several exchange-correlation functionals. The generalized-gradient approximation functional with Perdew-Burke-Ernzerhof and Hubbard U corrections (GGA-PBE+U) showed the structure with the lowest energy and lattice deviations. In this study, emphasis was also given to the modification of valence electron energy levels to overcome the underestimation in DFT calculations. The Zn and O valence energies were fixed at Ud=8.3 eV and Up=7.3 eV, respectively. Hence, the electronic and optical properties of the synthesized ZnO were calculated based on the GGA-PBE+U functional within the ultrasoft-pseudopotential method. In conclusion, the incorporation of Rietveld analysis into first-principles calculation was valid as the resulting properties were comparable with those reported in the literature. The time taken to evaluate certain properties via physical testing was then eliminated as the simulation could be done through computational methods.
Keywords: density functional theory, first-principles, Rietveld-refinement, ZnO nanoparticles
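As a quick arithmetic check on the Rietveld-refined lattice parameters quoted above (a = b = 3.2498 Å, c = 5.2068 Å), the short Python sketch below computes the c/a ratio and the hexagonal unit-cell volume; it is a standalone illustration and not part of the authors' CASTEP workflow.

import math

# Rietveld-refined lattice parameters for wurtzite ZnO (from the abstract)
a = 3.2498  # angstrom
c = 5.2068  # angstrom

# c/a ratio; the ideal wurtzite value is sqrt(8/3) ~= 1.633
c_over_a = c / a

# Volume of a hexagonal unit cell: V = (sqrt(3)/2) * a^2 * c
volume = (math.sqrt(3) / 2.0) * a**2 * c

print(f"c/a ratio: {c_over_a:.4f} (ideal wurtzite ~1.633)")
print(f"Unit-cell volume: {volume:.3f} A^3")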
Procedia PDF Downloads 308
664 Emily Dickinson's Green Aesthetics: Mode Gakuen Cocoon Tower as the Anthropomorphic Architectural Representation in the Age of Anthropocene
Authors: Chia-Wen Kuo
Abstract:
Jesse Curran states that there is a "breath awareness" that "facilitates a present-minded capability" to catalyse an "epistemological rupture" in Emily Dickinson's poetry, particularly in the age of the Anthropocene. In Dickinson's "Nature", non-humans are subjectified as nature ceases to be subordinated to human interests, and Dickinson's eco-humility drives us, readers, to mimic nature for the making of a better world. In terms of sustainable architecture, Norman Foster is among the representatives who utilise BIM to reduce architectural waste while satiating users' aesthetic craving for a spectacular skyline, notably with the Gherkin, 30 St. Mary Axe in east-end London. In 2019, Foster and his team aspired to savour the London skyline with a new design, the Tulip, which has been certified by LEED as a legitimate green building as well as a complementary extension of the Gherkin. However, Foster's proposition was denied numerous times by the mayor Sadiq Khan and the city council, as the Tulip cannot blend into the surrounding public space while its observatory functions like a surveillance platform. The Tulip, aesthetic idiosyncrasy aside, fails to serve the public good beyond being another ostentatious tourist attraction in London. The architectural team for the Mode Gakuen Cocoon Tower, completed in 2008, intended to honour Nature with the symbolism in the building's aesthetic design. It serves as an architectural cocoon that nurtures the students of the "Special Technology and Design College" inside. The building itself turns into a Dickinsonian anthropomorphism, where humans are made humble to learn from entomological beings for self-betterment in the age of the Anthropocene. Despite its resemblance to a tulip and its LEED credential, Norman Foster's Tulip merely pays tribute to Nature in a relatively superficial manner, without constructing an apparatus that substantially benefits Londoners; all green cities should embrace Emily Dickinson's "breath awareness" and be built and treated as an extensive as well as expansive form of biomimicry.
Keywords: green city, sustainable architecture, London, Tokyo
Procedia PDF Downloads 153
663 Multiphase Flow Regime Detection Algorithm for Gas-Liquid Interface Using Ultrasonic Pulse-Echo Technique
Authors: Serkan Solmaz, Jean-Baptiste Gouriet, Nicolas Van de Wyer, Christophe Schram
Abstract:
The efficiency of the cooling process for cryogenic propellant boiling in engine cooling channels in space applications is strongly affected by the phase change that occurs during boiling. The effectiveness of the cooling process depends strongly on the type of boiling regime, such as nucleate or film boiling. Geometric constraints, like a non-transparent cooling channel, preclude the use of visualization methods. The ultrasonic (US) technique, as a non-destructive testing (NDT) method, has therefore been applied in almost every engineering field for different purposes. Basically, discontinuities emerge at boundaries between mediums, such as those among different phases. The sound wave emitted by the US transducer is both transmitted and reflected at a gas-liquid interface, which makes it possible to detect different phases. Due to thermal and structural concerns, it is impractical to sustain direct contact between the US transducer and the working fluid. Hence the transducer should be located outside of the cooling channel, which results in additional interfaces and creates ambiguities about the applicability of the present method. In this work, an exploratory study was conducted to determine the detection ability and applicability of the US technique for the cryogenic boiling process in a cooling cycle where the US transducer is placed outside of the channel. Boiling of cryogenics is a complex phenomenon which brings several hindrances to the experimental protocol because of the thermal properties involved. Thus substitute materials were purposefully selected based on such parameters to simplify the experiments. Aside from that, the nucleate and film boiling regimes emerging during the boiling process were simply simulated using non-deformable stainless steel balls, air-bubble injection apparatuses and air clearances instead of conducting a real-time boiling process. A versatile detection algorithm was subsequently developed based on these exploratory studies. According to the algorithm developed, the phases can be distinguished with 99% accuracy as no-phase, air-bubble, and air-film presences. The results show the detection ability and applicability of the US technique for an exploratory purpose.
Keywords: ultrasound, ultrasonic, multiphase flow, boiling, cryogenics, detection algorithm
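The abstract does not spell out the detection algorithm itself; the following Python sketch is a purely hypothetical illustration of how pulse-echo amplitudes could be thresholded into the three reported states (no-phase, air-bubble, air-film). The function name, thresholds, and synthetic traces are assumptions for illustration only, not the algorithm developed at the facility.

import numpy as np

def classify_echo(echo_signal, bubble_threshold=0.2, film_threshold=0.6):
    """Classify a pulse-echo trace into one of three states.

    The reflection coefficient at a gas-liquid interface is close to 1,
    so a strong persistent echo suggests a continuous gas film, a weaker
    echo suggests a passing bubble, and a very weak echo suggests
    single-phase liquid. Threshold values here are placeholders.
    """
    peak = np.max(np.abs(echo_signal))
    if peak < bubble_threshold:
        return "no-phase"
    elif peak < film_threshold:
        return "air-bubble"
    return "air-film"

# Example with synthetic echo traces (normalized amplitude)
rng = np.random.default_rng(0)
t = np.arange(1000)
liquid_only = 0.05 * rng.standard_normal(1000)
bubble_trace = liquid_only + 0.4 * np.exp(-((t - 500) / 20.0) ** 2)
film_trace = liquid_only + 0.9 * np.exp(-((t - 500) / 20.0) ** 2)

for name, trace in [("liquid", liquid_only), ("bubble", bubble_trace), ("film", film_trace)]:
    print(name, "->", classify_echo(trace))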
Procedia PDF Downloads 169
662 Influence of Water Physicochemical Properties and Vegetation Type on the Distribution of Schistosomiasis Intermediate Host Snails in Nelson Mandela Bay
Authors: Prince S. Campbell, Janine B. Adams, Melusi Thwala, Opeoluwa Oyedele, Paula E. Melariri
Abstract:
Schistosomiasis is an infectious water-borne disease that holds substantial medical and veterinary importance and is transmitted by Schistosoma flatworms. The transmission and spread of the disease are geographically and temporally confined to water bodies (rivers, lakes, lagoons, dams, etc.) inhabited by its obligate intermediate host snails and to human water contact. Human infection with the parasite occurs via skin penetration subsequent to exposure to water infested with schistosome cercariae. Environmental factors play a crucial role in the spread of the disease, as the survival of intermediate host snails is dependent on favourable conditions. These factors include physical and chemical components of water, including pH, salinity, temperature, electrical conductivity, dissolved oxygen, turbidity, water hardness, total dissolved solids, and velocity, as well as biological factors such as predator-prey interactions, competition, food availability, and the presence and density of aquatic vegetation. This study evaluated the physicochemical properties of the water bodies, vegetation type, distribution, and habitat presence of the snail intermediate host. A quantitative cross-sectional research design was employed in this study. Eight sampling sites were selected based on their proximity to residential areas. Snails and water physicochemical properties were collected over different seasons for 9 months. A simple dip method was used for surface water samples, and measurements were done using multiparameter meters. Snails were captured using a 300 µm mesh scoop net, and predominant plant species were gathered and transported to experts for identification. Vegetation composition and cover were visually estimated and recorded at each sampling point. Data were analysed using R software (version 4.3.1). A total of 844 freshwater snails were collected, with the Physa genus accounting for 95.9% of the snails. Bulinus and Biomphalaria snails, which serve as intermediate hosts for the disease, accounted for 0.9% and 0.6%, respectively. Indicator macrophytes such as Eichhornia crassipes, Stuckenia pectinata, Typha capensis, and floating macroalgae were found in several water bodies. A negative and weak correlation existed between the number of snails and physicochemical properties such as electrical conductivity (r=-0.240), dissolved oxygen (r=-0.185), hardness (r=-0.210), pH (r=-0.235), salinity (r=-0.242), temperature (r=-0.273), and total dissolved solids (r=-0.236). There was no correlation between the number of snails and turbidity (r=-0.070). Moreover, there was a negative and weak correlation between snails and vegetation coverage (r=-0.127). Findings indicated that snail abundance marginally declined with rising physicochemical concentrations, and the majority of snails were located in regions with less vegetation cover. The reduction in Bulinus and Biomphalaria snail populations may also be attributed to other factors, such as competition among the snails. Snails of the Physa genus were abundant due to their noteworthy resilience in difficult environments. These snails have the potential to function as biological control agents in areas where the disease is endemic, as they outcompete other snails, including schistosomiasis intermediate host snails.
Keywords: intermediate host snails, physicochemical properties, schistosomiasis, vegetation type
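The correlations above were computed in R (version 4.3.1); as an illustration of the same kind of calculation, the Python sketch below evaluates a Pearson correlation with SciPy on hypothetical snail-count and conductivity values. The numbers are placeholders, not the study's data.

import numpy as np
from scipy import stats

# Hypothetical paired observations: snail counts vs. a physicochemical variable
snail_counts = np.array([120, 85, 60, 40, 30, 22, 15, 10])
conductivity = np.array([250, 310, 380, 430, 500, 560, 610, 680])  # e.g. uS/cm

r, p_value = stats.pearsonr(snail_counts, conductivity)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
# In the study the reported correlations are weak and negative
# (e.g. r = -0.240 for electrical conductivity), indicating snail
# abundance declines only slightly as the measured concentration rises.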
Procedia PDF Downloads 20
661 Didactic Suitability and Mathematics Through Robotics and 3D Printing
Authors: Blanco T. F., Fernández-López A.
Abstract:
Nowadays, education, motivated by the new demands of the 21st century, acquires a dimension that converts the skills that new generations may need into a huge and uncertain set of knowledge too broad to be entirely covered. Within this set, and as tools to reach them, we find Learning and Knowledge Technologies (LKT). Thus, in order to prepare students for an ever-changing society in which the technological boom involves everything, it is essential to develop digital competence. Nevertheless, LKT seems not to have found its place in the educational system. This work aims to go a step further in the research on the most appropriate procedures and resources for technological integration in the classroom. The main objective of this exploratory study is to analyze the didactic suitability (epistemic, cognitive, affective, interactional, mediational and ecological) of teaching and learning processes of mathematics with robotics and 3D printing. The analysis carried out is drawn from a STEAM (Science, Technology, Engineering, Art and Mathematics) project that has the Pilgrimage Way to Santiago de Compostela as a common thread. The sample is made up of 25 Primary Education students (10 and 11 years old). A qualitative design research methodology has been followed, and the sessions have been distributed according to the type of technology applied. Robotics has been focused towards learning two-dimensional mathematical notions, while 3D design and printing have been oriented towards three-dimensional concepts. The data collection instruments used are evaluation rubrics, recordings, field notebooks and participant observation. Indicators of didactic suitability proposed by Godino (2013) have been used for the analysis of the data. In general, the results show a medium-high level of didactic suitability. Above these, high mediational and cognitive suitability stands out, which led to a better understanding of the positions and relationships of three-dimensional bodies in space and the concept of angle. With regard to the other indicators of didactic suitability, it should be noted that the interactional suitability would require more attention and the affective suitability a deeper study. In conclusion, the research has revealed great expectations around the combination of teaching-learning processes of mathematics and LKT, although there is still a long way to go in terms of the provision of means and teacher training.
Keywords: 3D printing, didactic suitability, educational design, robotics
Procedia PDF Downloads 102
660 Exploration of the Protection Theory of Chinese Scenic Heritage Based on Local Chronicles
Authors: Mao Huasong, Tang Siqi, Cheng Yu
Abstract:
The cognition and practice of Chinese landscapes have a distinct uniqueness. The intergenerational inheritance of urban and rural landscapes is a common objective fact which has created a unique type of heritage in China - scenic heritage. The current generalization of the concept of scenic heritage has contributed to a lack of innovation in corresponding protection practices. Therefore, clarifying the concepts and connotations of scenery and scenic heritage, and clarifying the protection objects of scenic heritage and the methods and approaches to intergenerational inheritance, can provide theoretical support for the practice of Chinese scenic heritage and contribute Chinese wisdom to the transformation of world heritage sites. Taking ancient Shaoxing, which has a long time span and rich descriptions of scenic types and quantities, as the research object and using local chronicles as the basic research material, and drawing on text analysis, word frequency analysis, case statistics, and historical-geographical spatial annotation methods, this study traces ancient scenic practices and conducts in-depth descriptions in both text and space. It constructs a scenic heritage identification method based on the basic connotation characteristics and morphological representation characteristics of natural and cultural correlations, combined with the intergenerational and representative characteristics of scenic heritage; it summarizes the bidirectional integration of "scenic spots" and "form scenic spots", and of "outstanding people" and "local spirits", in the formation process of scenic heritage; in inheritance, it is guided by Confucian values of education; in communication, the cultural interpretation constructed by scenery and the way of landscape life are used to strengthen the intergenerational inheritance of natural and artificial material elements and intangible spirits. As a unique type of heritage in China, scenic heritage should improve its standards, values, and connotations in current protection practices and actively absorb historical experience.
Keywords: scenic heritage, heritage protection, cultural landscape, Shaoxing, Chinese landscape
Procedia PDF Downloads 67
659 Study on Aerosol Behavior in Piping Assembly under Varying Flow Conditions
Authors: Anubhav Kumar Dwivedi, Arshad Khan, S. N. Tripathi, Manish Joshi, Gaurav Mishra, Dinesh Nath, Naveen Tiwari, B. K. Sapra
Abstract:
In a nuclear reactor accident scenario, a large number of fission products may be released into the piping system of the primary heat transport circuit. The released fission products, mostly in the form of aerosol, get deposited on the inner surface of the piping system mainly due to gravitational settling and thermophoretic deposition. The removal processes in the complex piping system are controlled to a large extent by thermal-hydraulic conditions like temperature, pressure, and flow rates. These parameters generally vary with time and therefore must be carefully monitored to predict the aerosol behavior in the piping system. The removal process of aerosol depends on the size of the particles, which determines how many particles get deposited or travel across the bends and reach the other end of the piping system. The released aerosol gets deposited onto the inner surface of the piping system by various mechanisms like gravitational settling, Brownian diffusion, thermophoretic deposition, and other deposition mechanisms. To quantify a correct estimate of deposition, the identification and understanding of the aforementioned deposition mechanisms are of great importance. These mechanisms are significantly affected by different flow and thermodynamic conditions. Thermophoresis also plays a significant role in particle deposition. In the present study, a series of experiments were performed in the piping system of the National Aerosol Test Facility (NATF), BARC, using metal aerosols (zinc) in dry environments to study the spatial distribution of particle mass and number concentration, and their depletion due to various removal mechanisms in the piping system. The experiments were performed at two different carrier gas flow rates. The commercial CFD software FLUENT is used to determine the distribution of temperature, velocity, pressure, and turbulence quantities in the piping system. In addition to the in-built models for turbulence, heat transfer and flow in the commercial CFD code (FLUENT), a new sub-model, PBM (population balance model), is used to describe the coagulation process and to compute the number concentration along with the size distribution at different sections of the piping. In the sub-model, coagulation kernels are incorporated through a user-defined function (UDF). The experimental results are compared with the CFD modeled results. It is found that most of the Zn particles (more than 35%) deposit near the inlet of the plenum chamber, and a low deposition is obtained in the piping sections. The mass median aerodynamic diameter (MMAD) decreases along the length of the test assembly, which shows that large particles get deposited or removed in the course of the flow, and only fine particles travel to the end of the piping system. The effect of a bend is also observed, and it is found that the relative loss in mass concentration at bends is greater in the case of a high flow rate. The simulation results show that the thermophoresis and depositional effects are more dominant for the small and larger sizes as compared to the intermediate particle sizes. Both SEM and XRD analysis of the collected samples show that the samples are highly agglomerated, non-spherical, and composed mainly of ZnO. The coupled model framed in this work could be used as an important tool for predicting the size distribution and concentration of other aerosols released during a reactor accident scenario.
Keywords: aerosol, CFD, deposition, coagulation
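The coagulation sub-model mentioned above solves a population balance; as a language-neutral illustration of the underlying balance equation, the Python sketch below integrates the discrete Smoluchowski coagulation equation with a constant kernel. The kernel value, bin count, time step, and initial concentration are assumptions for illustration, not the FLUENT/UDF settings used in the study.

import numpy as np

def smoluchowski_step(n, kernel, dt):
    """One explicit Euler step of the discrete Smoluchowski coagulation equation.

    n[k] is the number concentration of particles made of (k+1) monomers:
        dn_k/dt = 0.5 * sum_{i+j=k} K*n_i*n_j - n_k * sum_j K*n_j
    """
    bins = len(n)
    dndt = np.zeros_like(n)
    for k in range(bins):
        gain = 0.5 * sum(kernel * n[i] * n[k - 1 - i] for i in range(k))
        loss = n[k] * kernel * n.sum()
        dndt[k] = gain - loss
    return n + dt * dndt

# Illustrative run: monodisperse start, constant coagulation kernel
n = np.zeros(20)
n[0] = 1.0e12          # monomers per m^3 (assumed)
kernel = 1.0e-15       # m^3/s (assumed constant kernel)
dt = 0.1               # s

for _ in range(1000):
    n = smoluchowski_step(n, kernel, dt)

print("Total number concentration after 100 s:", n.sum())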
Procedia PDF Downloads 142
658 Muslims in Diaspora Negotiating Islam through Muslim Public Sphere and the Role of Media
Authors: Sabah Khan
Abstract:
The idea of universal Islam tends to exaggerate the extent of homogeneity in Islamic beliefs and practices across Muslim communities. In the age of migration, various Muslim communities are in diaspora. The immediate implication of this is: what happens to Islam in diaspora? How does Islam get represented in new forms? Such pertinent questions need to be dealt with. This paper draws on the idea of religious transnationalism, primarily transnational Islam. There are multiple ways to conceptualize the transnational phenomenon with reference to Islam, in terms of flows of people, transnational organizations and networks, Ummah-oriented solidarity, and the new Muslim public sphere. This paper specifically deals with the new Muslim public sphere. It primarily refers to the space and networks enabled by new media and communication technologies, whereby Muslim identity and Islamic normativity are rehearsed and debated by people in different locales. A new sense of public is emerging across Muslim communities, which needs to be contextualized. This paper uses both primary and secondary data: primary data elicited through content analysis of audio-visuals on social media, and secondary sources of information ranging from books to articles and journals. The basic aim of the paper is to focus on the emerging Muslim public sphere and the role of media in expanding the public spheres of Islam. It also explores how Muslims in diaspora negotiate Islam and Islamic practices through media and the new Muslim public sphere. This paper cogently weaves in discussions, firstly, of the re-intellectualization of Islamic discourse in the public sphere, in other words, how Muslims have come to reimagine their collective identity and critically look at fundamental principles and authoritative tradition; and secondly, of the alternative forms of Islam emerging among young Muslims in diaspora, in other words, how young Muslims search for unorthodox ways and media for religious articulation, including music, clothing and TV. This includes the transmission and distribution of Islam in diaspora in terms of emerging ‘media Islam’ or ‘soundbite Islam’. The new Muslim public sphere has offered an arena to a large number of participants to critically engage with Islam, which leads not only to a critical engagement with traditional forms of Islamic authority but also to emerging alternative forms of Islam and Islamic practices.
Keywords: Islam, media, Muslims, public sphere
Procedia PDF Downloads 269
657 Evaluation of Public Library Adult Programs: Use of Servqual and Nippa Assessment Standards
Authors: Anna Ching-Yu Wong
Abstract:
This study aims to identify the quality and effectiveness of the adult programs provided by a public library using the ServQUAL method and the National Library Public Programs Assessment guidelines (NIPPA, June 2019). ServQUAL covers several variables, namely: tangibles, reliability, responsiveness, assurance, and empathy. The NIPPA guidelines focus on program characteristics, particularly on the outcomes, that is, the level of satisfaction of program participants. The population reached consisted of adults who participated in library adult programs at a small-town public library in Kansas. This study was designed as quantitative evaluative research which analyzed the quality and effectiveness of the library adult programs by analyzing the role of each factor based on ServQUAL and the NIPPA library program assessment guidelines. Data were collected from November 2019 to January 2020 using a questionnaire with a Likert scale. The data obtained were analyzed in a descriptive quantitative manner. This research can provide information about the quality and effectiveness of existing programs, which can be used as input to develop strategies for future adult programs. Overall, the ServQUAL measurement indicates very good quality, but areas in each variable still need improvement and emphasis: the tangibles variable still needs improvement in the indicators of the temperature and space of the meeting room; the reliability variable still needs improvement in the timely delivery of the programs; the responsiveness variable still needs improvement in terms of the ability of the presenters to convey trust and confidence to participants; the assurance variable still needs improvement in the indicator of knowledge and skills of program presenters; and the empathy variable still needs improvement in terms of the presenters' willingness to provide extra assistance. The results of the program outcomes measurement based on the NIPPA guidelines are very positive. Over 96% of participants indicated that the programs were informative and fun. They learned new knowledge and new skills and would recommend the programs to their friends and families. They believed that, together, the library and participants build stronger and healthier communities.
Keywords: ServQUAL model, ServQUAL in public libraries, library program assessment, NIPPA library programs assessment
Procedia PDF Downloads 95
656 Digital Game Fostering Spatial Abilities for Children with Special Needs
Authors: Pedro Barros, Ana Breda, Eugenio Rocha, M. Isabel Santos
Abstract:
As visual and spatial awareness develops, children's apprehension of the concepts of direction, (relative) distance and (relative) location materializes. Here we present the educational inclusive digital game ORIESPA, under development by the Thematic Line Geometrix, for children aged between 6 and 10 years old, aiming to improve their visual and spatial awareness. Visual-spatial abilities are of crucial importance for succeeding in many everyday life tasks. Unavoidable in the technological age we are living in, they are essential in many fields of study, for instance, mathematics. The game, set in a 2D/3D environment, focuses on tasks/challenges in the following categories: (1) static orientation of the subject and object, requiring an understanding of the notions of up–down, left–right, front–back, higher-lower or nearer-farther; (2) interpretation of perspectives of three-dimensional objects, requiring the understanding of 2D and 3D representations of three-dimensional objects; and (3) orientation of the subject in real space, requiring the reading and interpreting of itineraries. In ORIESPA, simpler tasks are based on a quadrangular grid, where the front-back and left-right directions and the rotations of 90º, 180º and 270º are the main requirements. The more complex ones are produced on a cubic grid, adding the up and down movements. In the first levels, the game's mechanics regarding reading and interpreting maps (from point A to point B) is based on map routes following a given set of instructions. In higher levels, the player must produce a list of instructions taking the game character to the desired destination while avoiding obstacles. Being an inclusive game, ORIESPA lets the user interact through the mouse (point and click with a single button), the keyboard (a small set of well-recognized keys) or a Kinect device (using simple gesture moves). Character control requires acting on buttons corresponding to movements in 2D and 3D environments. Buttons and instructions are also complemented with text, sound and sign language.
Keywords: digital game, inclusion, itinerary, spatial ability
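The higher game levels described above ask the child to produce a list of instructions that takes the character to a destination while avoiding obstacles. The Python sketch below is a minimal interpreter for such itineraries on a 2D quadrangular grid; the instruction names, grid size, and obstacle handling are assumptions for illustration, not ORIESPA's actual commands.

# Minimal interpreter for 2D grid itineraries: the character has a position
# and a heading, and understands "forward", "back", and 90-degree turns.
HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # north, east, south, west

def run_itinerary(instructions, start=(0, 0), heading=0, grid_size=8, obstacles=()):
    x, y = start
    for step in instructions:
        if step == "turn_right":
            heading = (heading + 1) % 4
        elif step == "turn_left":
            heading = (heading - 1) % 4
        elif step in ("forward", "back"):
            dx, dy = HEADINGS[heading]
            sign = 1 if step == "forward" else -1
            nx, ny = x + sign * dx, y + sign * dy
            # Stay on the grid and do not enter obstacle cells
            if 0 <= nx < grid_size and 0 <= ny < grid_size and (nx, ny) not in obstacles:
                x, y = nx, ny
    return (x, y)

# Example: reach (2, 3) while avoiding an obstacle at (1, 1)
route = ["forward", "forward", "forward", "turn_right", "forward", "forward"]
print(run_itinerary(route, obstacles={(1, 1)}))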
Procedia PDF Downloads 178
655 Pull-Out Analysis of Composite Loops Embedded in Steel Reinforced Concrete Retaining Wall Panels
Authors: Pierre van Tonder, Christoff Kruger
Abstract:
Modular concrete elements are used for retaining walls to provide lateral support. Depending on the retaining wall layout, these precast panels may be interlocking and may be tied into the soil backfill via geosynthetic strips. This study investigates the increase in ultimate pull-out load that is possible by adding supplementary reinforcement of varied diameter through the anchor loops embedded in concrete retaining wall panels. Full-scale panels used in practice have four embedded anchor points; however, only one anchor loop was embedded in the center of the experimental panels. The experimental panels had the same thickness but a smaller footprint (600 mm x 600 mm x 140 mm) than the full-sized panels to accommodate the space limitations of the laboratory and the experimental setup. The experimental panels were also cast without any bending reinforcement, as would typically be present in the full-scale panels. These reinforcements were purposefully excluded to evaluate the impact of a single bar reinforcement through the center of the anchor loops. The reinforcement bars had diameters of 8 mm, 10 mm, 12 mm, and 12 mm. 30 samples of concrete panels with embedded anchor loops were tested. The panels were supported on the edges, and the anchor loops were subjected to an increasing tensile force using an Instron piston. The failures that occurred were loop failures, panel failures, and a mixture thereof. The ultimate load increased with increasing diameter, as expected, but this relationship only persisted until the reinforcement diameter exceeded 10 mm. For diameters larger than 10 mm, the ultimate failure load starts to decrease due to the dependency of the reinforcement bond strength on the concrete matrix. Overall, the reinforced panels showed a 14 to 23% increase in the factor of safety. Using anchor loops of 66 kN ultimate load together with Y10 steel reinforcement with bent ends showed the most promising results in reducing concrete panel pull-out failure. The Y10 reinforcement showed, on average, a 24% increase in ultimate load achieved. Previous research has investigated supplementary reinforcement around the anchor loops. This paper extends this investigation by evaluating supplementary reinforcement placed through the panel anchor loops.
Keywords: supplementary reinforcement, anchor loops, retaining panels, reinforced concrete, pull-out failure
Procedia PDF Downloads 194
654 Fire Safety Assessment of At-Risk Groups
Authors: Naser Kazemi Eilaki, Carolyn Ahmer, Ilona Heldal, Bjarne Christian Hagen
Abstract:
Older people and people with disabilities are recognized as at-risk groups when it comes to egress and travel from a hazard zone to safe places. A person's disability can negatively influence her or his escape time, and this becomes even more important when people from this target group live alone. This research deals with the fire safety of the buildings housing such people by means of probabilistic methods. For this purpose, fire safety is addressed by modeling the egress of our target group from a hazardous zone to a safe zone. A common type of detached house with a prevalent plan has been chosen for the safety analysis, and a limit state function has been developed according to the time-line evacuation model, which is based on a two-zone smoke development model. An analytical computer model (B-Risk) is used to consider smoke development. Since most of the parameters involved in the fire development model carry uncertainty, an appropriate probability distribution function has been considered for each of the variables with an indeterministic nature. To assess safety and reliability for the at-risk groups, the fire safety index method has been chosen to define the probability of failure (casualties) and the safety index (beta index). An improved harmony search meta-heuristic optimization algorithm has been used to determine the beta index. Sensitivity analysis has been done to identify the most important and effective parameters for the fire safety of the at-risk group. Results showed that the area of openings and the distances to egress exits are more important in buildings, and the safety of people improves with increasing dimensions of the occupant space (building). Fire growth is more critical than other parameters in a home without a detector and fire extinguishing system, but in a home equipped with these facilities, it is less important. The type of disability has a great effect on the safety level of people who live in the same home layout, and people with visual impairment encounter a greater risk of being trapped compared to those with movement disabilities.
Keywords: fire safety, at-risk groups, zone model, egress time, uncertainty
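The reliability calculation described above compares available and required escape times through a limit state function. The Python sketch below is a minimal Monte Carlo version of that calculation, with assumed lognormal distributions and parameter values chosen purely for illustration; it does not reproduce the B-Risk model or the harmony search optimization.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 200_000

# Assumed distributions (illustrative only):
# ASET: available safe egress time from the two-zone smoke model [s]
# RSET: required safe egress time of the occupant (detection + response + travel) [s]
aset = rng.lognormal(mean=np.log(180), sigma=0.25, size=n)
rset = rng.lognormal(mean=np.log(120), sigma=0.35, size=n)

# Limit state function g = ASET - RSET; failure when g < 0
g = aset - rset
p_failure = np.mean(g < 0)

# Safety (beta) index corresponding to the failure probability
beta = -norm.ppf(p_failure)

print(f"P(failure) ~= {p_failure:.4f}, beta ~= {beta:.2f}")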
Procedia PDF Downloads 101
653 Heat Transfer Phenomena Identification of a Non-Active Floor in a Stack-Ventilated Building in Summertime: Empirical Study
Authors: Miguel Chen Austin, Denis Bruneau, Alain Sempey, Laurent Mora, Alain Sommier
Abstract:
An experimental study in a Plus Energy House (PEH) prototype was conducted in August 2016. It aimed to highlight the energy charge and discharge of a concrete-slab floor subjected to day-night-cycle heat exchanges in the southwestern part of France and to identify the heat transfer phenomena that take place in both processes: charge and discharge. The main features of this PEH, significant to this study, are the following: (i) a non-active slab covering the major part of the entire floor surface of the house, which includes a concrete layer 68 mm thick as the upper layer; (ii) solar window shades located on the north and south facades, along with a large eave facing south; (iii) large double-glazed windows covering the majority of the south facade; and (iv) a natural ventilation system (NVS) composed of ten automatized openings with different dimensions: four located on the south facade, four on the north facade and two on the shed roof (north-oriented). To highlight the energy charge and discharge processes of the non-active slab, heat flux and temperature measurement techniques were implemented, along with airspeed measurements. Ten “measurement-poles” (MP) were distributed all over the concrete-floor surface. Each MP represented a zone of measurement, where air and surface temperatures, and convection and radiation heat fluxes, were intended to be measured. The airspeed was measured only at two points over the slab surface, near the south facade. To identify the heat transfer phenomena that take part in the charge and discharge processes, some relevant dimensionless parameters were used, along with statistical analysis; heat transfer phenomena were identified based on this analysis. Experimental data, after processing, showed that two periods could be identified at a glance: charge (heat gain, positive values) and discharge (heat losses, negative values). During the charge period, on the floor surface, radiation heat exchanges were significantly higher compared with convection. On the other hand, convection heat exchanges were significantly higher than radiation in the discharge period. Spatially, both convection and radiation heat exchanges are higher near the natural ventilation openings and smaller far from them, as expected. Experimental correlations have been determined using a linear regression model, showing the relation of the Nusselt number with relevant parameters: the Peclet, Rayleigh, and Richardson numbers. This has led to the determination of the convective heat transfer coefficient and its comparison with the convective heat transfer coefficient resulting from measurements. Results have shown that forced and natural convection coexist during the discharge period; more accurate correlations with the Peclet number than with the Rayleigh number have been found. This may suggest that forced convection is stronger than natural convection. Yet, the airspeed levels encountered suggest that it is natural convection that should take place rather than forced convection. Despite this, the Richardson number values encountered indicate otherwise. During the charge period, air-velocity levels might indicate that no air motion occurs, which might lead to heat transfer by diffusion instead of convection.
Keywords: heat flux measurement, natural ventilation, non-active concrete slab, plus energy house
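The experimental correlations mentioned above relate the Nusselt number to the Peclet, Rayleigh, and Richardson numbers through linear regression. The Python sketch below shows the usual way a power-law correlation of the form Nu = a*Pe^b is fitted by linear regression in log-log space; the data points are synthetic placeholders, not the PEH measurements.

import numpy as np

# Synthetic (placeholder) data: Peclet and Nusselt numbers from the discharge period
pe = np.array([1.2e3, 2.5e3, 5.0e3, 9.0e3, 1.8e4, 3.5e4])
nu = np.array([14.0, 19.5, 27.0, 35.0, 48.0, 66.0])

# Fit Nu = a * Pe^b  <=>  log(Nu) = log(a) + b*log(Pe), linear in log-log space
b, log_a = np.polyfit(np.log(pe), np.log(nu), deg=1)
a = np.exp(log_a)

print(f"Nu ~= {a:.3f} * Pe^{b:.3f}")
# The fitted prefactor and exponent can then be compared against the convective
# heat transfer coefficient obtained directly from the heat flux and
# temperature measurements.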
Procedia PDF Downloads 413
652 Physical Planning Trajectories for Disaster Mitigation and Preparedness in Coastal and Seismic Regions: Capital Region of Andhra Pradesh, Vijayawada in India
Authors: Timma Reddy, Srikonda Ramesh
Abstract:
India has been traditionally vulnerable to natural disasters such as floods, droughts, cyclones, earthquakes and landslides. These have become a recurrent phenomenon, as observed over the last five decades. Surveys indicate that about 60% of the landmass is prone to earthquakes of various intensities; over 40 million hectares are prone to floods; about 8% of the total area is prone to cyclones, and 68% of the area is susceptible to drought. Climate change is likely to be perceived through the experience of extreme weather events. There is growing societal concern about climate change, given the potential impacts of associated natural hazards such as cyclones, flooding, earthquakes, landslides, etc.; hence it is essential and crucial to strengthen our settlements to respond to such calamities. The focus of this research paper is therefore to analyze an effective planning strategy/mechanism to integrate disaster mitigation measures in coastal regions in general and the Capital Region of Andhra Pradesh in particular. The basic hypothesis is that appropriate spatial planning considerations would facilitate an organized way of protecting life and property from natural disasters, and further that integrating infrastructure planning with conscious direction would provide effective mitigation measures. Vijayawada city has been planned and analyzed with conscious land use planning with reference to a space syntax trajectory, in accordance with required social infrastructure such as health facilities, institutional areas, and recreational and other open spaces. It has been identified that, with geographically ideal locations chosen with reference to population densities based on GIS tools, preparedness strategies can be effectively integrated to protect life and save property by reducing the damage/impact of natural disasters in general, and earthquakes, cyclones or floods in particular.
Keywords: modular, trajectories, social infrastructure, evidence based syntax, drills and equipments, GIS, geographical micro zoning, high resolution satellite image
Procedia PDF Downloads 217
651 A Post-Occupancy Evaluation of Urban Landscape Greenway – A Case Study of the Taiyuan Greenway in Taichung City
Authors: A. Yu-Chen Chien, B. Ying-Ju Su
Abstract:
A greenway is a type of linear park which links planar parklands and connects open spaces. In the urban environment, in addition to providing open spaces with a recreational function and effectively improving the appearance of the surrounding environment, greenways and parklands also create social and psychological benefits for people. In 2014, statistics from The Ministry of Home Affairs show that citizens in Taichung enjoy green area at an average of 4.27 square kilometers per person. How to use the existing green space system effectively and enhance the quality of leisure life has thus become a major issue today. This study points out that greenways, parklands and other open spaces are closely related to the daily life of urban residents. Whether the operation could be executed in accordance with the design is our major concern. To explore this issue, we implemented a Post-Occupancy Evaluation of the Taiyuan Greenway in Taichung City. In 1956, Taichung city carried out an urban plan according to Howard's concept of the "Garden City" and built the Taiyuan greenway to restrain urban expansion. Fifty years later, due to population growth and new demands, the government started to reconstruct it. Since 2009, this has been a three-stage modification project under "The Townspace Renaissance Project in Taiwan", of which greenway construction is the main point. In this research, we mainly focus on the third stage of this program to investigate users' preferences and degree of satisfaction based on a Post-Occupancy Evaluation of the finished, unfinished, and under-construction sectors as well as facilities. We collected and analyzed the data based on questionnaires and explored the possible factors that might have affected the degree of satisfaction with the greenway modification project based on the chi-square test. We hope to inspect the purpose of the demonstration projects and provide a reference to the Taichung government for modification planning and greenway design in the future.
Keywords: greenway, landscape greenway, post-occupancy evaluation, Taichung city
Procedia PDF Downloads 327
650 Harnessing Environmental DNA to Assess the Environmental Sustainability of Commercial Shellfish Aquaculture in the Pacific Northwest United States
Authors: James Kralj
Abstract:
Commercial shellfish aquaculture makes significant contributions to the economy and culture of the Pacific Northwest United States. The industry faces intense pressure to minimize environmental impacts as a result of federal policies like the Magnuson-Stevens Fisheries Conservation and Management Act and the Endangered Species Act. These policies demand the protection of essential fish habitat and declare several salmon species endangered. Consequently, numerous projects related to the protection and rehabilitation of eelgrass beds, a crucial ecosystem for countless fish species, have been proposed at both state and federal levels. Both eelgrass beds and commercial shellfish farms occupy the same physical space, and therefore understanding the effects of shellfish aquaculture on eelgrass ecosystems has become a top ecological and economic priority of both government and industry. This study evaluates the organismal communities that eelgrass and oyster aquaculture habitats support. Water samples were collected from Willapa Bay, Washington; Tillamook Bay, Oregon; Humboldt Bay, California; and Sammish Bay, Washington to compare species diversity in eelgrass beds, oyster aquaculture plots, and boundary edges between these two habitats. Diversity was assessed using a novel technique: environmental DNA (eDNA). All organisms constantly shed small pieces of DNA into their surrounding environment through the loss of skin, hair, tissues, and waste. In the marine environment, this DNA becomes suspended in the water column, allowing it to be easily collected. Once extracted and sequenced, this eDNA can be used to paint a picture of all the organisms that live in a particular habitat, making it a powerful technology for environmental monitoring. Industry professionals and government officials should consider these findings to better inform future policies regulating eelgrass beds and oyster aquaculture. Furthermore, the information collected in this study may be used to improve the environmental sustainability of commercial shellfish aquaculture while simultaneously enhancing its growth and profitability in the face of ever-changing political and ecological landscapes.
Keywords: aquaculture, environmental DNA, shellfish, sustainability
Procedia PDF Downloads 245
649 Towards Learning Query Expansion
Authors: Ahlem Bouziri, Chiraz Latiri, Eric Gaussier
Abstract:
The steady growth in the size of textual document collections is a key progress-driver for modern information retrieval techniques, whose effectiveness and efficiency are constantly challenged. Given a user query, the number of retrieved documents can be overwhelmingly large, hampering their efficient exploitation by the user. In addition, retaining only relevant documents in a query answer is of paramount importance for an effective meeting of the user's needs. In this situation, the query expansion technique offers an interesting solution for obtaining a complete answer while preserving the quality of retained documents. This mainly relies on an accurate choice of the terms added to an initial query. Interestingly enough, query expansion takes advantage of large text volumes by extracting statistical information about index term co-occurrences and using it to make user queries better fit the real information needs. In this respect, a promising track consists in the application of data mining methods to extract dependencies between terms, namely a generic basis of association rules between terms. The key feature of our approach is a better trade-off between the size of the mining result and the conveyed knowledge. Thus, faced with the huge number of derived association rules, and in order to select the optimal combination of query terms from the generic basis, we propose to model the problem as a classification problem and solve it using a learning algorithm such as SVM or k-means. For this purpose, we first generate a training set using a genetic-algorithm-based approach that explores the association rules space in order to find an optimal set of expansion terms, improving the MAP of the search results. The experiments were performed on the SDA 95 collection, a data collection for information retrieval. It was found that the results were better in terms of both MAP and NDCG. The main observation is that hybridizing text mining techniques and query expansion in an intelligent way allows us to incorporate the good features of all of them. As this is a preliminary attempt in this direction, there is large scope for enhancing the proposed method.
Keywords: supervised learning, classification, query expansion, association rules
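A minimal Python sketch of the idea described above: mine term co-occurrence association rules from a toy collection, describe each candidate expansion term by its support and confidence, and let an SVM decide which candidates to add to the query. The documents, labels, and feature choice are illustrative assumptions, not the SDA 95 setup; in the paper, the training labels come from the genetic-algorithm search over MAP.

import numpy as np
from sklearn.svm import SVC

# Toy collection: which terms occur in which documents (binary incidence)
docs = [
    {"space", "velocity", "orbit"},
    {"space", "orbit", "satellite"},
    {"velocity", "flow", "turbulence"},
    {"space", "satellite", "launch"},
    {"flow", "turbulence", "reynolds"},
]
query_term = "space"
candidates = ["orbit", "satellite", "velocity", "turbulence", "launch"]

def support(term):
    return sum(term in d for d in docs) / len(docs)

def confidence(antecedent, consequent):
    # conf(antecedent -> consequent) = P(consequent | antecedent)
    both = sum(antecedent in d and consequent in d for d in docs)
    ante = sum(antecedent in d for d in docs)
    return both / ante if ante else 0.0

# Features for each candidate expansion term: [support, confidence(query -> term)]
X = np.array([[support(t), confidence(query_term, t)] for t in candidates])

# Hypothetical relevance labels (1 = term improved retrieval for past queries)
y = np.array([1, 1, 0, 0, 0])

clf = SVC(kernel="linear").fit(X, y)
selected = [t for t, keep in zip(candidates, clf.predict(X)) if keep]
print("Expanded query:", [query_term] + selected)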
Procedia PDF Downloads 322
648 Exponential Stabilization of a Flexible Structure via a Delayed Boundary Control
Authors: N. Smaoui, B. Chentouf
Abstract:
The boundary stabilization problem of the rotating disk-beam system is a topic of interest in research studies. This system involves a flexible beam attached to the center of a disk, and the control and stabilization of this system have been extensively studied. This research focuses on the case where the center of mass is fixed in an inertial frame and the rotation of the center is non-uniform. The system is represented by a set of nonlinear coupled partial differential equations and ordinary differential equations. The boundary stabilization problem of this system via a delayed boundary control is considered. We assume that the boundary control is either a force-type control or a moment-type control and is subject to the presence of a constant time delay. The aim of this research is threefold: first, we demonstrate that the rotating disk-beam system is well-posed in an appropriate functional space; then, we establish the exponential stability property of the system; finally, we provide numerical simulations that illustrate the theoretical findings. The research utilizes semigroup theory to establish the well-posedness of the system. The resolvent method is then employed to prove the exponential stability property. Finally, the finite element method is used to demonstrate the theoretical results through numerical simulations. The research findings indicate that the rotating disk-beam system can be stabilized using a boundary control with a time delay. The proof of stability is based on the resolvent method and a variation of constants formula. The numerical simulations further illustrate the theoretical results. The findings have potential implications for the design and implementation of control strategies in similar systems. In conclusion, this research demonstrates that the rotating disk-beam system can be stabilized using a boundary control with time delay. The well-posedness and exponential stability properties are established through theoretical analysis, and these findings are further supported by numerical simulations. The research contributes to the understanding and practical application of control strategies for flexible structures, providing insights into the stability of rotating disk-beam systems.
Keywords: rotating disk-beam, delayed force control, delayed moment control, torque control, exponential stability
Procedia PDF Downloads 75
647 Constructivism and Situational Analysis as Background for Researching Complex Phenomena: Example of Inclusion
Authors: Radim Sip, Denisa Denglerova
Abstract:
It’s impossible to capture complex phenomena, such as inclusion, with reductionism. The most common form of reductionism is the objectivist approach, where processes and relationships are reduced to entities and clearly outlined phases, with a consequent search for relationships between them. Constructivism as a paradigm and situational analysis as a methodological research portfolio represent a way to avoid the dominant objectivist approach. They work with a situation, i.e. with the essential blending of actors and their environment. Primary transactions take place between actors and their surroundings. Researchers create constructs based on their need to solve a problem. Concepts therefore do not describe reality, but rather a complex of real needs in relation to the available options for how such needs can be met. For the examination of a complex problem, corresponding methodological tools and an overall design of the research are necessary. Using original research on inclusion in the Czech Republic as an example, this contribution demonstrates that inclusion is not a substance easily described, but rather a relationship field changing its forms in response to its actors’ behaviour and current circumstances. Inclusion consists of a dynamic relationship between an ideal, real circumstances, and ways to achieve such an ideal under the given circumstances. Such achievement has many shapes and thus cannot be captured by a description of objects. It can be expressed in relationships in a situation defined by time and space. Situational analysis offers tools to examine such phenomena. It understands a situation as a complex of dynamically changing aspects and prefers relationships and positions in the given situation over a clear and final definition of actors, entities, etc. Situational analysis assumes the creation of constructs as a tool for solving a problem at hand. It emphasizes the meanings that arise in the process of coordinating human actions, and the discourses through which these meanings are negotiated. Finally, it offers “cartographic tools” (situational maps, social worlds/arenas maps, positional maps) that are able to capture the complexity in other than linear-analytical ways. This approach allows inclusion to be described as a complex of phenomena taking place with a certain historical preference, a complex that can be overlooked if analyzed with a more traditional approach.
Keywords: constructivism, situational analysis, objective realism, reductionism, inclusion
Procedia PDF Downloads 146
646 Factors Impacting Geostatistical Modeling Accuracy and Modeling Strategy of Fluvial Facies Models
Authors: Benbiao Song, Yan Gao, Zhuo Liu
Abstract:
Geostatistical modeling is the key technique for reservoir characterization, and the quality of geological models greatly influences the prediction of reservoir performance, but few studies have been done to quantify the factors impacting geostatistical reservoir modeling accuracy. In this study, 16 fluvial prototype models were established to represent different levels of geological complexity, and 6 cases ranging from 16 to 361 wells were defined to reproduce all 16 prototype models by different methodologies, including SIS, object-based and MPFS algorithms, accompanied by different constraint parameters. A modeling accuracy ratio was defined to quantify the influence of each factor, and ten realizations were averaged to represent each accuracy ratio under the same modeling conditions and parameter association. In total, 5760 simulations were run to quantify the relative contribution of each factor to the simulation accuracy, and the results can be used as a strategy guide for facies modeling under similar conditions. It is found that data density, geological trend and geological complexity have a great impact on modeling accuracy. Modeling accuracy may reach up to 90% when channel sand width reaches 1.5 times the well spacing under any condition, by the SIS and MPFS methods. When well density is low, the contribution of the geological trend may increase the modeling accuracy from 40% to 70%, while the use of a proper variogram may make a very limited contribution for the SIS method. It can be implied that when well data are dense enough to cover simple geobodies, little effort is needed to construct an acceptable model; when geobodies are complex and data are insufficient, it is better to construct a set of robust geological trends than to rely on a variogram function. For the object-based method, the modeling accuracy does not increase as obviously as for the SIS method with increasing data density, but the models keep a rational appearance when data density is low. MPFS methods show a similar trend to the SIS method, but the use of a proper geological trend together with a rational variogram may give better modeling accuracy than the MPFS method alone. This implies that the geological modeling strategy for a real reservoir case needs to be optimized by evaluating the dataset, geological complexity, geological constraint information and the modeling objective.
Keywords: fluvial facies, geostatistics, geological trend, modeling strategy, modeling accuracy, variogram
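The abstract does not give the exact definition of the modeling accuracy ratio; one plausible, simplified reading is the fraction of grid cells whose simulated facies matches the prototype model, averaged over the ten realizations. The Python sketch below implements that reading on a toy grid and should be taken as an assumption for illustration, not the authors' formula.

import numpy as np

def accuracy_ratio(prototype, realizations):
    """Fraction of cells matching the prototype facies, averaged over realizations."""
    ratios = [np.mean(real == prototype) for real in realizations]
    return float(np.mean(ratios))

rng = np.random.default_rng(1)

# Toy prototype: a 100x100 facies grid with a channel-sand body (code 1) in shale (code 0)
prototype = np.zeros((100, 100), dtype=int)
prototype[:, 40:55] = 1  # a 15-cell-wide "channel"

# Ten toy realizations: the prototype with random perturbations standing in
# for conditional simulations (SIS / object-based / MPFS in the paper)
realizations = []
for _ in range(10):
    noise = rng.random(prototype.shape) < 0.15   # 15% of cells flipped
    realizations.append(np.where(noise, 1 - prototype, prototype))

print(f"Average accuracy ratio over 10 realizations: {accuracy_ratio(prototype, realizations):.2%}")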
Procedia PDF Downloads 262
645 OpenFOAM Based Simulation of High Reynolds Number Separated Flows Using Bridging Method of Turbulence
Authors: Sagar Saroha, Sawan S. Sinha, Sunil Lakshmipathy
Abstract:
The Reynolds-averaged Navier-Stokes (RANS) model is the popular computational tool for the prediction of turbulent flows. Being computationally less expensive than direct numerical simulation (DNS), RANS has received wide acceptance in industry and in the research community as well. However, for high Reynolds number flows, the traditional RANS approach based on the Boussinesq hypothesis is incapable of capturing all the essential flow characteristics, and thus its performance is restricted in high Reynolds number flows of practical interest. RANS performance turns out to be inadequate in regimes like flow over curved surfaces, flows with rapid changes in the mean strain rate, duct flows involving secondary streamlines and three-dimensional separated flows. In the recent decade, the partially averaged Navier-Stokes (PANS) methodology has gained acceptability among seamless bridging methods of turbulence, placed between DNS and RANS. The PANS methodology, being a scale-resolving bridging method, is inherently more suitable than RANS for simulating turbulent flows. The superior ability of the PANS method has been demonstrated for some cases, like swirling flows, high-speed mixing environments, and high Reynolds number turbulent flows. In our work, we intend to evaluate PANS for the case of separated turbulent flows past bluff bodies, which is of broad aerodynamic research and industrial interest. The PANS equations, being derived from base RANS, continue to inherit the inadequacies of the parent RANS model based on the linear eddy-viscosity model (LEVM) closure. To enhance PANS' capabilities for simulating separated flows, the shortcomings of the LEVM closure need to be addressed. The inabilities of LEVMs have inspired the development of non-linear eddy viscosity models (NLEVM). To explore the potential improvement in PANS performance, in our study we evaluate PANS behavior in conjunction with NLEVM. Our work can be categorized into three significant steps: (i) extraction of the PANS version of NLEVM from the RANS model, (ii) testing the model in a homogeneous turbulence environment and (iii) application and evaluation of the model in the canonical case of a separated non-homogeneous flow field (flow past prismatic bodies and bodies of revolution at high Reynolds number). The PANS version of NLEVM shall be derived and implemented in OpenFOAM, an open-source solver. The homogeneous flow evaluation will comprise the study of the influence of the PANS filter-width control parameter on the turbulent stresses, the homogeneous analysis performed over typical velocity fields, and an asymptotic analysis of the Reynolds stress tensor. The non-homogeneous flow case will include the study of mean integrated quantities and various instantaneous flow field features, including wake structures. The performance of PANS + NLEVM shall be compared against LEVM-based PANS and LEVM-based RANS. This assessment will contribute to a significant improvement of the predictive ability of computational fluid dynamics (CFD) tools in massively separated turbulent flows past bluff bodies.
Keywords: bridging methods of turbulence, high Re-CFD, non-linear PANS, separated turbulent flows
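For context on the filter-width control parameter mentioned above, the Python sketch below evaluates the PANS relation commonly quoted in the literature (Girimaji-type k-epsilon PANS) for the modified destruction coefficient; this relation and the standard k-epsilon constants come from the general PANS literature, not from this abstract, so treat it as background rather than the authors' formulation.

def pans_ce2_star(f_k, f_eps=1.0, c_e1=1.44, c_e2=1.92):
    """Modified destruction coefficient in the unresolved-dissipation equation.

    Commonly quoted PANS relation:
        C_e2* = C_e1 + (f_k / f_eps) * (C_e2 - C_e1)
    where f_k = k_u/k is the unresolved-to-total kinetic energy ratio and
    f_eps = eps_u/eps the corresponding dissipation ratio; f_k = 1 recovers RANS.
    """
    return c_e1 + (f_k / f_eps) * (c_e2 - c_e1)

# Sweeping the filter-width control parameter from RANS-like toward more resolving
for f_k in (1.0, 0.7, 0.4, 0.2):
    print(f"f_k = {f_k:.1f} -> C_e2* = {pans_ce2_star(f_k):.3f}")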
Procedia PDF Downloads 144
644 Regional Low Gravity Anomalies Influencing High Concentrations of Heavy Minerals on Placer Deposits
Authors: T. B. Karu Jayasundara
Abstract:
Regions of low gravity and gravity anomalies both influence heavy mineral concentrations in placer deposits. Economically important heavy minerals are likely to have higher levels of deposition in low gravity regions of placer deposits. This can be observed in the coastal regions of Southern Asia, particularly in Sri Lanka and Peninsular India, areas located in the lowest gravity region of the world. An area along about 70 kilometers of the east coast of Sri Lanka is covered by a high percentage of ilmenite deposits, and the southwest coast of the island consists of a monazite placer deposit. These deposits are among the largest placer deposits in the world. In India, the heavy mineral industry has a good market. On the other hand, based on the coastal placer deposits recorded, the high gravity region located around Papua New Guinea has no such heavy mineral deposits. In low gravity regions, with the help of other depositional environmental factors, the grains have more time and space to float in the sea; this helps bring high concentrations of heavy mineral deposits to the coast. The effect of low and high gravity can be demonstrated by using heavy mineral separation devices. The Wilfley heavy mineral separating table is one of these; it is extensively used in industry and in laboratories for heavy mineral separation. The horizontally oscillating Wilfley table helps to separate heavy and light mineral grains into different fractions, with the use of water. In this experiment, the low and high angles of the Wilfley table represent low and high gravity, respectively. A sample mixture of heavy and light mineral grains of grain size <0.85 mm has been used for this experiment. The high and low angles of the table were 60 and 20, respectively, for this experiment. The fractions separated on the table are again separated into heavy and light minerals with the use of a heavy liquid with a specific gravity of 2.85. The fractions of separated heavy and light minerals have been used for drawing two-dimensional graphs. The graphs show that the low gravity stage has a higher percentage of heavy minerals collected in the upper area of the table than the high gravity stage. The results of the experiment can be used for the comparison of regional low gravity and high gravity levels of heavy minerals. If there are any heavy mineral deposits in the high gravity regions, these deposits will occur far away from the coast, within the continental shelf.
Keywords: anomaly, gravity, influence, mineral
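To make the role of the gravitational acceleration explicit, the Python sketch below evaluates the Stokes settling velocity of a fine heavy-mineral grain (ilmenite) and a light grain (quartz) in seawater for two slightly different values of g. The densities, viscosity, grain size, and g values are textbook-style numbers chosen for illustration, not measurements from this study.

def stokes_settling_velocity(d, rho_particle, rho_fluid=1025.0, mu=1.07e-3, g=9.81):
    """Stokes terminal settling velocity (m/s) for a small sphere:
    v = (rho_p - rho_f) * g * d^2 / (18 * mu)
    Valid at low particle Reynolds numbers (fine grains)."""
    return (rho_particle - rho_fluid) * g * d**2 / (18.0 * mu)

d = 100e-6             # 100 micron grain (assumed)
rho_ilmenite = 4700.0  # kg/m^3, heavy mineral
rho_quartz = 2650.0    # kg/m^3, light mineral

for g in (9.78, 9.83):  # illustrative low- and high-gravity values of g
    v_heavy = stokes_settling_velocity(d, rho_ilmenite, g=g)
    v_light = stokes_settling_velocity(d, rho_quartz, g=g)
    print(f"g = {g} m/s^2: ilmenite {v_heavy*1000:.2f} mm/s, quartz {v_light*1000:.2f} mm/s")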
Procedia PDF Downloads 196643 Interruption Overload in an Office Environment: Hungarian Survey Focusing on the Factors that Affect Job Satisfaction and Work Efficiency
Authors: Fruzsina Pataki-Bittó, Edit Németh
Abstract:
On the one hand, new technologies and communication tools improve employee productivity and accelerate information and knowledge transfer, while on the other hand, information overload and continuous interruptions make it even harder to concentrate at work. It is a great challenge for companies to find the right balance, while there is also an ongoing demand to recruit and retain talented employees who are able to adopt the modern work style and effectively use modern communication tools. For this reason, this research does not focus on objective measures of office interruptions, but aims to find those disruption factors which influence the comfort and job satisfaction of employees, and how they feel generally at work. The focus of this research is on how employees feel about the different types of interruptions: which ones they themselves identify as hindering factors, and which they experience as stress factors. By identifying and then reducing these destructive factors, job satisfaction can reach a higher level and employee turnover can be reduced. During the research, we collected information from in-depth interviews and questionnaires asking about the work environment, communication channels used in the workplace, individual communication preferences, factors considered as disruptions, and individual steps taken to avoid interruptions. The questionnaire was completed by 141 office workers from several types of workplaces based in Hungary. Even though 66 respondents work at Hungarian offices of multinational companies, the research is about the characteristics of the Hungarian labor force. The most important result of the research shows that while more than one third of the respondents consider office noise a disturbing factor, personal inquiries are welcome and considered useful, even if in such cases the work environment is not convenient for solving tasks requiring concentration. Analyzing the sizes of the offices, in an open-space environment the rate of those who consider office noise a disturbing factor is surprisingly lower than in smaller office rooms. Opinions are more diverse regarding information communication technologies. In addition to the interruption factors affecting the employees' job satisfaction, the research also focuses on the role of offices in the 21st century.Keywords: information overload, interruption, job satisfaction, office environment, work efficiency
Procedia PDF Downloads 226642 Satellite Photogrammetry for DEM Generation Using Stereo Pair and Automatic Extraction of Terrain Parameters
Authors: Tridipa Biswas, Kamal Pandey
Abstract:
A Digital Elevation Model (DEM) is a simple representation of a surface in three-dimensional space, with elevation as the third dimension along with the X and Y coordinates of a rectangular coordinate system. DEMs have wide applications in various fields such as disaster management, hydrology and watershed management, geomorphology, urban development, map creation, and resource management. Cartosat-1, or IRS P5 (Indian Remote Sensing Satellite), is a state-of-the-art remote sensing satellite built by ISRO and launched on May 5, 2005, which is mainly intended for cartographic applications. Cartosat-1 is equipped with two panchromatic cameras capable of simultaneously acquiring images of 2.5 meters spatial resolution. One camera looks +26 degrees forward while the other looks -5 degrees backward to acquire stereoscopic imagery with a base-to-height ratio of 0.62. The time difference between the acquisition of the stereo-pair images is approximately 52 seconds. The high-resolution stereo data have great potential to produce high-quality DEMs. The high-resolution Cartosat-1 stereo image data are expected to have a significant impact on topographic mapping and watershed applications. The objective of the present study is the generation of a high-resolution DEM, quality evaluation in different elevation strata, generation of an ortho-rectified image, and the associated accuracy assessment from CARTOSAT-1 data using ground control points (GCPs) for the Aglar watershed (Tehri-Garhwal and Dehradun districts, Uttarakhand, India). The present study reveals that the generated DEMs (10 m and 30 m) derived from the CARTOSAT-1 stereo pair are more accurate than the existing DEMs (ASTER and CARTO DEM), and that terrain parameters such as slope, aspect, drainage, and watershed boundaries derived from the generated DEMs also show better accuracy and results than those derived from the other two (ASTER and CARTO) DEMs.Keywords: ASTER-DEM, CARTO-DEM, CARTOSAT-1, digital elevation model (DEM), ortho-rectified image, photogrammetry, RPC, stereo pair, terrain parameters
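To give a feel for the stereo geometry quoted above, the sketch below applies the standard photogrammetric rule of thumb relating vertical accuracy to the base-to-height ratio and the parallax measurement error, sigma_h ~ (H/B) * sigma_p. Treating fractions of the 2.5 m ground pixel as the parallax uncertainty is an assumption made only for illustration; it is not an accuracy figure reported by the study.

# Illustrative estimate of expected DEM vertical accuracy from stereo geometry:
# sigma_h ~ (H/B) * sigma_p, with B/H = 0.62 (Cartosat-1) and sigma_p expressed
# in ground pixels of 2.5 m, purely for illustration.
base_to_height = 0.62      # Cartosat-1 stereo base-to-height ratio
pixel_size_m   = 2.5       # panchromatic ground sample distance in metres

for parallax_error_px in (0.5, 1.0, 2.0):      # assumed parallax accuracy, in pixels
    sigma_p = parallax_error_px * pixel_size_m
    sigma_h = sigma_p / base_to_height
    print(f"parallax error {parallax_error_px:.1f} px -> vertical accuracy ~ {sigma_h:.1f} m")

Magnitudes of a few metres per pixel of parallax error are broadly consistent with producing and evaluating DEMs at 10 m and 30 m grid spacing, as described above.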
Procedia PDF Downloads 306641 Mathematical Modelling of Spatial Distribution of Covid-19 Outbreak Using Diffusion Equation
Authors: Kayode Oshinubi, Brice Kammegne, Jacques Demongeot
Abstract:
The use of mathematical tools like partial differential equations and ordinary differential equations has become very important for predicting the evolution of a viral disease in a population in order to take preventive and curative measures. In December 2019, a novel variety of coronavirus (SARS-CoV-2) was identified in Wuhan, Hubei Province, China, causing a severe and potentially fatal respiratory syndrome, i.e., COVID-19. Since then, it has become a pandemic, declared by the World Health Organization (WHO) on March 11, 2020, which has spread around the globe. A reaction-diffusion system is a mathematical model that describes the evolution of a phenomenon subjected to two processes: a reaction process, in which different substances are transformed, and a diffusion process, which causes a distribution in space. This article provides a mathematical study of the Susceptible, Exposed, Infected, Recovered, and Vaccinated population model of the COVID-19 pandemic by means of reaction-diffusion equations. Local and global asymptotic stability conditions for the disease-free and endemic equilibria are determined using Lyapunov functions, and the endemic equilibrium point exists and is stable if it satisfies the Routh–Hurwitz criteria. Also, adequate conditions for the existence and uniqueness of the solution of the model have been proved. We show the spatial distribution of the model compartments when the basic reproduction number satisfies $\mathcal{R}_0 < 1$ and $\mathcal{R}_0 > 1$, and sensitivity analysis is performed in order to determine the most sensitive parameters in the proposed model. We demonstrate the model's effectiveness by performing numerical simulations. We investigate the impact of vaccination and the significance of spatial distribution parameters in the spread of COVID-19. The findings indicate that reducing contact with infected persons and increasing the proportion of susceptible people who receive high-efficacy vaccination will lessen the burden of COVID-19 in the population. To public health policymakers, we offer a better understanding of COVID-19 management.Keywords: COVID-19, SEIRV epidemic model, reaction-diffusion equation, basic reproduction number, vaccination, spatial distribution
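Since the abstract does not reproduce its governing equations, the sketch below shows only a generic one-dimensional reaction-diffusion SEIRV system integrated with an explicit finite-difference scheme. The compartment couplings, parameter values, diffusion coefficient, and boundary treatment are illustrative assumptions, not the authors' calibrated model.

# Illustrative 1-D reaction-diffusion SEIRV sketch (explicit Euler in time,
# central differences in space); parameters are arbitrary demonstration values.
import numpy as np

nx, L = 100, 1.0                 # grid points, domain length
dx = L / (nx - 1)
dt = 0.01                        # time step (D*dt/dx^2 << 0.5 here, so stable)
beta, sigma, gamma, nu = 0.5, 0.2, 0.1, 0.05   # transmission, incubation, recovery, vaccination
D = 1e-4                         # common diffusion coefficient for all compartments

S = np.ones(nx); E = np.zeros(nx); I = np.zeros(nx)
R = np.zeros(nx); V = np.zeros(nx)
I[nx // 2] = 0.01; S[nx // 2] -= 0.01          # localised initial outbreak

def laplacian(u):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = lap[1]; lap[-1] = lap[-2]          # crude zero-flux boundary treatment
    return lap

for _ in range(5000):
    new_inf = beta * S * I
    dS = D * laplacian(S) - new_inf - nu * S
    dE = D * laplacian(E) + new_inf - sigma * E
    dI = D * laplacian(I) + sigma * E - gamma * I
    dR = D * laplacian(R) + gamma * I
    dV = D * laplacian(V) + nu * S
    S += dt * dS; E += dt * dE; I += dt * dI; R += dt * dR; V += dt * dV

print(f"peak infected fraction across space: {I.max():.4f}")

Increasing nu (the assumed vaccination rate) or decreasing beta in this toy setting lowers the infected peak, which mirrors the qualitative conclusion of the study about contact reduction and high-efficacy vaccination.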
Procedia PDF Downloads 122640 Quantitative Analysis of Three Sustainability Pillars for Water Tradeoff Projects in Amazon
Authors: Taha Anjamrooz, Sareh Rajabi, Hasan Mahmmud, Ghassan Abulebdeh
Abstract:
Water availability, as well as water demand, is not uniformly distributed in time and space. Numerous extra-large water diversion projects have been launched in Amazon to alleviate water scarcities. This research utilizes statistical analysis to examine the temporal and spatial features of 40 extra-large water diversion projects in Amazon. Using a network analysis method, the correlation between seven major basins is measured, while an impact analysis method is employed to explore the associated economic, environmental, and social impacts. The study finds that the development of water diversion in Amazon has passed through four stages, from a preliminary or initial period to a phase of rapid development. It is observed that the length of water diversion channels and the quantity of water transferred have grown significantly in the past five decades. As of 2015, in Amazon, more than 75 billion m³ of water was transferred through more than 12,000 km of channels. These projects extend over half of the Amazon Area. River Basin E is currently the most significant source of transferred water. Through inter-basin water diversions, Amazon gains the opportunity to enhance its Gross Domestic Product (GDP) by 5%. Nevertheless, the construction costs exceed 70 billion US dollars, which is higher than in any other country. The average cost of transferred water per unit has increased with time and scale but decreases from western to eastern Amazon. Additionally, annual total energy consumption for pumping exceeds 40 billion kilowatt-hours, while the associated greenhouse gas emissions are assessed to be 35 million tons. It is noteworthy that ecological problems initiated by water diversion affect River Basin B and River Basin D. Due to water diversion, more than 350 thousand individuals have been relocated away from their homes. In order to enhance water diversion sustainability, four categories of innovative measures are proposed for decision-makers: development of water tradeoff project strategies, improvement of integrated water resource management, formation of water-saving incentives and pricing approaches, and application of ex-post assessment.Keywords: sustainability, water trade-off projects, environment, Amazon
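As a purely arithmetic illustration, the aggregate figures quoted above can be combined into rough intensity indicators. The back-of-the-envelope ratios below are assumptions stacked on the abstract's headline numbers, not results reported by the study.

# Back-of-the-envelope check using only the aggregate figures quoted above;
# the ratios below are illustrative, not findings of the study.
annual_transfer_m3    = 75e9   # > 75 billion m3 of water transferred (as of 2015)
construction_cost_usd = 70e9   # > 70 billion US dollars construction cost
pumping_energy_kwh    = 40e9   # > 40 billion kWh of annual pumping energy
ghg_emissions_t       = 35e6   # ~ 35 million tonnes of associated emissions

print(f"construction cost per m3 of annual transfer: "
      f"{construction_cost_usd / annual_transfer_m3:.2f} USD/m3")
print(f"pumping energy per m3 transferred: "
      f"{pumping_energy_kwh / annual_transfer_m3:.2f} kWh/m3")
print(f"emission intensity of pumping energy: "
      f"{1000 * ghg_emissions_t / pumping_energy_kwh:.2f} kg CO2-eq/kWh")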
Procedia PDF Downloads 128639 Water Problems, Social Mobilization and Migration: A Case Study of Lake Urmia
Authors: Fatemeh Dehghan Khangahi, Hakan Gunes
Abstract:
The transformation of a public necessity into a commercial commodity becomes more and more evident as time goes on, and it is one of the issues underlying water shortage. Countries' development projects consume water and waterbeds in various forms, ignoring concepts such as sustainability and the negative effects they impose on the environment, polluting watercourses and changing their paths. Throughout these processes, water basins and all the vital environments around them can sometimes suffer irreparable damage. In this context, the case of Lake Urmia, located in the northwest of Iran and left alone to face drought, has been researched. The lake, which is on the list of UNESCO's biosphere reserves, is now exposed to the danger of desiccation. If the desiccation is fully realized, more than 5,000,000 people living around the lake will have to migrate as a result of negative living conditions. In fact, along with the increasing drought levels of recent years, regional migrations have already begun. In addition to migration issues, it is also necessary to specify the negative effects on human life and on all forms of life that depend on the lake: because the lake is salty, its desiccation leads to the formation of salt storms and the mixing of salt into the air and soil, which seriously threaten human health. The main aim of this work is to raise national and international awareness of this problem, which is an environmental and a human tragedy at the same time. This research has two basic questions: 1) In the case of Lake Urmia, what are the environmental problems, how have they emerged, and what is the role of governments? 2) In relation to the first question, what are the social consequences of this problem? In response, after a literature search, a comparative view was taken of the situation of the Aral Sea and the Great Salt Lake (Utah, USA), the two major international examples: the first is relevant in terms of population and migration, the second in terms of biological properties. Then, data and status information obtained after three years of field research have been evaluated. Towards the end, with the support of qualitative and quantitative methods, a study of social mobilization in the region has been carried out. An example of this mobilization is the use of the public space of TRAXTOR matches as a protest arena.Keywords: environment problems, water, social mobilization, Lake Urmia, migration
Procedia PDF Downloads 132638 Big Data Applications for the Transport Sector
Authors: Antonella Falanga, Armando Cartenì
Abstract:
Today, an unprecedented amount of data coming from several sources, including mobile devices, sensors, tracking systems, and online platforms, characterizes our lives. The term “big data” refers not only to the quantity of data but also to the variety and speed of data generation. These data hold valuable insights that, when extracted and analyzed, facilitate informed decision-making. The 4Vs of big data - velocity, volume, variety, and value - highlight essential aspects, showcasing the rapid generation, vast quantities, diverse sources, and potential value addition of these kinds of data. This surge of information has revolutionized many sectors, such as business for improving decision-making processes, healthcare for clinical record analysis and medical research, education for enhancing teaching methodologies, agriculture for optimizing crop management, finance for risk assessment and fraud detection, media and entertainment for personalized content recommendations, emergency management for real-time response during crises and events, and also mobility, for urban planning and for the design and management of public and private transport services. Big data's pervasive impact enhances societal aspects, elevating the quality of life, service efficiency, and problem-solving capacities. However, during this transformative era, new challenges arise, including data quality, privacy, data security, cybersecurity, interoperability, the need for advanced infrastructures, and staff training. Within the transportation sector (the one investigated in this research), applications span planning, designing, and managing systems and mobility services. Among the most common big data applications within the transport sector are, for example, real-time traffic monitoring, bus and freight vehicle route optimization, vehicle maintenance, road safety, and autonomous and connected vehicle applications. Benefits include a reduction in travel times, road accidents, and pollutant emissions. Among these issues, proper transport demand estimation is crucial for sustainable transportation planning. Evaluating the impact of sustainable mobility policies starts with a quantitative analysis of travel demand. Achieving transportation decarbonization goals hinges on precise estimations of demand for individual transport modes. Emerging technologies, offering substantial big data at lower costs than traditional methods, play a pivotal role in this context. Starting from these considerations, this study explores the usefulness of big data for transport demand estimation. This research focuses on leveraging (big) data collected during the COVID-19 pandemic to estimate the evolution of mobility demand in Italy. Estimation results reveal, in the post-COVID-19 era, more than 96 million national daily trips, about 2.6 trips per capita, and a mobile population of more than 37.6 million Italian travelers per day. Overall, this research allows us to conclude that big data enhances rational decision-making for mobility demand estimation, which is imperative for adeptly planning and allocating investments in transportation infrastructures and services.Keywords: big data, cloud computing, decision-making, mobility demand, transportation
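The headline demand figures can be related to one another with a trivial consistency calculation. The sketch below is an illustrative cross-check of the quoted aggregates, not part of the study's estimation methodology, and it assumes the per-capita rate is read against the mobile population.

# Simple consistency check of the aggregate demand figures quoted above
# (illustrative only; not the estimation procedure used in the study).
daily_trips       = 96e6    # > 96 million national daily trips (post-COVID-19)
mobile_population = 37.6e6  # > 37.6 million Italian travellers per day

trips_per_traveller = daily_trips / mobile_population
print(f"trips per mobile traveller per day: {trips_per_traveller:.2f}")  # ~2.55, close to the quoted 2.6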
Procedia PDF Downloads 61637 Materiality of Gender Roles in Gede City State
Authors: David Maina Muthegethi
Abstract:
For decades, archaeological work on Swahili Civilization has mainly concentrated on exploration of the economic and political dynamics of city states. This paper moves further and explores how gender roles were formed, maintained, negotiated, and re-negotiated through time and space in Gede City. Unlike other Swahili city states, Gede was located around two miles away from the shores of the Indian Ocean. Nonetheless, the city was characterized by security walls, stone houses, mosques, and tombs typical of Swahili city states such as Kilwa. The study employed several methods of data collection, namely archival research, survey, re-examination of collected materials, and excavation of the Gede archaeological site. Since the study aimed to examine gender roles across different social classes, a total of three houses were excavated based on their social hierarchy. The houses were roughly categorized as belonging to the elite, middle, and lower classes, and were located within the inner wall, the second inner wall, and the outer wall of Gede City respectively. Key findings show that gender roles differed considerably along class lines at the Gede archaeological site. For instance, women of the elite and middle classes were active participants in Gede's international trade through the production and consumption of imported goods. This participation corresponded with the commercialization of Gede households, especially in elite areas, where they hosted international traders. In the middle-class houses, women concentrated on running light industries aimed at supplying goods for the urban community; thus, they were able to afford exotic goods like their elite counterparts. Lastly, the gender roles of the lower class were oriented toward subsistence, with little participation in Gede's formal commerce. Interestingly, gender roles in Gede were dynamic in nature, responding to cultural diffusion, the spread of Islam, intensification of trade, diversification of subsistence patterns, and urbanization. Therefore, these findings demonstrate the centrality of gender in reconstructing the social lives of the Swahili Civilization.Keywords: gender roles, Islam, Swahili civilization, urbanization
Procedia PDF Downloads 92636 Ranking Theory-The Paradigm Shift in Statistical Approach to the Issue of Ranking in a Sports League
Authors: E. Gouya Bozorg
Abstract:
The issue of ranking sports teams, in particular soccer teams, is of primary importance in professional sports. However, ranking is still based on classical statistics and on models outside the area of mathematics. Rigorous mathematics, and in turn statistics, despite the expectations held of them, have not been able to engage effectively with the issue of ranking, a shortcoming that requires serious diagnosis. The purpose of this study is to change the approach in order to get closer to mathematics proper for use in ranking. We recommend theoretical mathematics as a good option because it can hermeneutically obtain the theoretical concepts and criteria needed for ranking from the everyday language of a league. We have proposed a framework that puts the issue of ranking into a new space, which we have applied to soccer as a case study. This is an experimental and theoretical study of the issue of ranking in a professional soccer league, based on theoretical mathematics followed by theoretical statistics. First, we show the theoretical definition of the constant Є = 1.33, the 'golden number' of a soccer league. Then, we define the 'efficiency of a team' by this number and the formula μ = Pts / (k·Є) - 1, in which Pts is the number of points obtained by a team in k games played. Moreover, the k·Є index has been used to show the theoretical median line in the league table and to compare top teams and bottom teams. A theoretical coefficient σ = 1 / (1 + Ptx/Ptxn) has also been defined for every match between teams x and xn, with respect to the ability of each team and their points Ptx and Ptxn; it gives a performance point resulting in a special ranking for the league, and it has been particularly useful in evaluating the performance of weaker teams. The current theory has been examined against the statistical data of 4 major European leagues during the period 1998-2014. The results of this study showed that the issue of ranking depends on appropriate theoretical indicators of a league. These indicators allowed us to find different forms of ranking of teams in a league, including the 'special table' of a league. Furthermore, on this basis, the issue of a team's record has been revised and amended. In addition, the theory of ranking can be used to compare and classify different leagues and tournaments. Experimental results obtained from the archival statistics of major professional leagues around the world over the past two decades have confirmed the theory. This topic introduces a new theory for the ranking of a soccer league. Moreover, this theory can be used to compare different leagues and tournaments.Keywords: efficiency of a team, ranking, special table, theoretical mathematic
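To make the two formulas concrete, the sketch below evaluates the efficiency μ and the match coefficient σ for made-up point totals. The notation follows the abstract (Є = 1.33, Pts points after k games, Ptx and Ptxn the points of the two teams in a match); the numerical inputs are illustrative assumptions only.

# Worked example of the two quantities defined above (illustrative values only).
EPSILON = 1.33   # the 'golden number' of a soccer league, as defined in the abstract

def efficiency(pts, k):
    # Team efficiency mu = Pts / (k * Epsilon) - 1.
    return pts / (k * EPSILON) - 1

def match_coefficient(pt_x, pt_xn):
    # Performance coefficient sigma = 1 / (1 + Ptx / Ptxn) for a match x vs xn.
    return 1 / (1 + pt_x / pt_xn)

print(f"mu for 30 points after 15 games: {efficiency(30, 15):+.3f}")   # above the k*Epsilon median line
print(f"mu for 15 points after 15 games: {efficiency(15, 15):+.3f}")   # below the k*Epsilon median line
print(f"sigma for a 30-point team facing a 20-point team: {match_coefficient(30, 20):.2f}")

On this reading, a positive μ places a team above the theoretical k·Є median line of the table and a negative μ below it, while the weaker side in a match receives the larger σ, which is presumably why the abstract notes its usefulness for evaluating weaker teams.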
Procedia PDF Downloads 417