Search results for: industry process engineering
134 Upward Spread Forced Smoldering Phenomenon: Effects and Applications
Authors: Akshita Swaminathan, Vinayak Malhotra
Abstract:
Smoldering is one of the most persistent forms of combustion and can continue for very long periods (hours, days, even months) if fuel is abundant. It causes a notable number of accidents and is a prime suspect in fire and safety hazards. It can be initiated by weaker ignition sources and is more difficult to suppress than flaming combustion. In upward spread smoldering, the air flow is parallel to the direction of the smoldering front. This type of smoldering is difficult to control, and hence there is a need to study the phenomenon. Compared with flaming combustion, smoldering often goes unrecognised and is therefore a cause of various fire accidents. A simplified experimental setup was built to study upward spread smoldering, its response to varying forced flow, and its behaviour in the presence of external heat sources and alternative energy sources such as acoustic energy. Linear configurations were studied to determine the effects of varying forced flow on upward spread smoldering. The effect of varying forced flow was observed and studied (i) in the presence of an external heat source and (ii) in the presence of an external alternative energy source (acoustic energy). The role of ash removal was also observed and studied. Results indicate that upward spread forced smoldering was affected by key controlling parameters such as the speed of the forced flow, the surface orientation, and the interspace distance (the distance between the forced flow and the pilot fuel). When an external heat source was placed on either side of the pilot fuel, the smoldering phenomenon was observed to be affected. The surface orientation and the interspace distance between the external heat sources and the pilot fuel were found to play a major role in altering the regression rate.
Lastly, by impinging an alternative energy source in the form of acoustic energy on the smoldering front, it was observed that different frequencies affected the smoldering phenomenon in different ways. The surface orientation again played an important role. This project highlights the importance of fire and safety hazards and of better combustion for all kinds of scientific research and practical applications. The knowledge acquired from this work can be applied to engineering systems ranging from aircraft and spacecraft to building fires and wildfires, and can help in better understanding, and hence avoiding, such widespread fires. Various fire disasters have been recorded in aircraft where small electrical short circuits led to smoldering fires; these eventually caused the engine to catch fire, at the cost of life and property. Studying this phenomenon can help us to control, if not prevent, such disasters.
Keywords: alternative energy sources, flaming combustion, ignition, regression rate, smoldering
Procedia PDF Downloads 141
133 Interface Designer as Cultural Producer: A Dialectic Materialist Approach to the Role of Visual Designer in the Present Digital Era
Authors: Cagri Baris Kasap
Abstract:
In this study, how interface designers can be viewed as producers of culture in the current era is interrogated from a critical theory perspective. Walter Benjamin was a German Jewish literary critical theorist who, during the 1930s, was engaged in opposing and criticizing the Nazi use of art and media. ‘The Author as Producer’ is an essay that Benjamin read at the Communist Institute for the Study of Fascism in Paris. In this essay, Benjamin relates directly to the dialectic between base and superstructure and argues that authors, normally placed within the superstructure, should consider how writing and publishing are production and thus directly related to the base. Through it, he discusses what it could mean to see the author as producer of his own text, as a producer of writing, understood as an ideological construct that rests on the apparatus of production and distribution. Benjamin concludes that the author must write in ways that relate to the conditions of production: he must do so in order to prepare his readers to become writers, and even make this possible for them by engineering an ‘improved apparatus’, and must work toward turning consumers into producers and collaborators. In today’s world, it has become a leading business model within the Web 2.0 services of multinational Internet technology and culture industries like Amazon, Apple and Google to transform readers, spectators, consumers or users into collaborators and co-producers through platforms such as Facebook, YouTube and Amazon’s CreateSpace Kindle Direct Publishing print-on-demand, e-book and publishing platforms. However, the way this transformation happens is tightly controlled and monitored by combinations of software and hardware. In these global market monopolies, it has become increasingly difficult to get insight into how one’s writing and collaboration are used, captured, and capitalized as a user of Facebook or Google.
Through the lens of this study, it could be argued that this criticism could very well be considered by digital producers, or even by the mass of collaborators in contemporary social networking software. How do software and design incorporate users and their collaboration? Are users truly empowered; are they put in a position where they are able to understand the apparatus and how their collaboration is part of it? Or has the apparatus become a means against the producers? Thus, when using corporate systems like Google and Facebook, the iPhone and the Kindle, without any control over the means of production, which is closed off by opaque interfaces and licenses that limit our rights of use and ownership, we are already the collaborators that Benjamin calls for. For example, the iPhone and the Kindle combine a specific use of technology to distribute the relations between the ‘authors’ and the ‘prodUsers’ in ways that secure their monopolistic business models by limiting the potential of the technology.
Keywords: interface designer, cultural producer, Walter Benjamin, materialist aesthetics, dialectical thinking
Procedia PDF Downloads 142
132 Effect of Fresh Concrete Curing Methods on Its Compressive Strength
Authors: Xianghe Dai, Dennis Lam, Therese Sheehan, Naveed Rehman, Jie Yang
Abstract:
Concrete is one of the most widely used construction materials; it may be mixed onsite as fresh concrete and then placed in formwork to produce the desired structural shapes. It is recognized that the raw materials and mix proportions of concrete dominate the mechanical characteristics of the hardened concrete, and that the curing method and environment applied to the concrete in the early stages of hardening significantly influence concrete properties such as compressive strength, durability, and permeability. In construction practice, there are various curing methods for maintaining the presence of mixing water throughout the early stages of concrete hardening. These are also beneficial to concrete in hot weather, as they provide cooling and prevent the evaporation of water. Such methods include ponding or immersion, spraying or fogging, and saturated wet coverings. There are also curing methods intended to reduce the loss of water from the concrete surface, such as covering the concrete with a layer of impervious paper, plastic sheeting, or a membrane. In the concrete materials laboratory, accelerated strength-gain methods supply the concrete with heat and additional moisture by applying live steam, heating coils, or electrically warmed pads. Currently, when determining the mechanical parameters of a concrete, the concrete is usually sampled from fresh concrete on site and then cured and tested in laboratories where standardized curing procedures are adopted. In engineering practice, however, the curing procedures followed on construction sites after the placing of concrete can differ considerably from the laboratory criteria; indeed, some standard laboratory curing procedures cannot be applied on site. Sometimes the contractor compromises on curing methods in order to reduce construction costs.
Obviously, the difference between curing procedures adopted in the laboratory and those used on construction sites might over- or under-estimate the real concrete quality. This paper presents the effect of three typical curing methods (air curing, water immersion curing, plastic film curing), and of maintaining concrete in steel moulds, on the compressive strength development of normal concrete. In this study, Portland cement with 30% fly ash was used, and different curing periods of 7 days, 28 days and 60 days were applied. The highest compressive strength was observed in concrete samples to which 7-day water immersion curing was applied and in samples maintained in steel moulds up to the testing date. The results imply that concrete used as infill in steel tubular members might develop a higher strength than predicted by design assumptions based on air curing methods. Wrapping concrete with plastic film as a curing method might delay concrete strength development in the early stages, whereas water immersion curing for 7 days might significantly increase the concrete compressive strength.
Keywords: compressive strength, air curing, water immersion curing, plastic film curing, maintaining in steel mould, comparison
Procedia PDF Downloads 291
131 Application of Multiwall Carbon Nanotubes with Anionic Surfactant to Cement Paste
Authors: Maciej Szelag
Abstract:
The discovery of carbon nanotubes (CNT) has led to a breakthrough in materials engineering. The CNT is characterized by a very large surface area, a very high Young's modulus (about 2 TPa), exceptional durability, and high tensile strength (about 50 GPa) and bending strength. Their diameter usually lies in the range from 1 to 100 nm, and their length from 10 nm to 10⁻² m. A relatively new approach is the application of the CNT in concrete technology. The biggest problem in the use of the CNT in cement composites is its uneven dispersion and low adhesion to the cement paste. Adding the nanotubes alone to the cement matrix produces no effect because, owing to their large surface area, they tend to agglomerate. Most often, the CNT is used as an aqueous suspension in the presence of a surfactant, sonicated beforehand. This paper presents the results of investigations of the basic physical properties (apparent density, shrinkage) and mechanical properties (compressive and tensile strength) of cement paste with the addition of multiwall carbon nanotubes (MWCNT). The studies were carried out on four series of specimens (made with two different Portland cements). Within each series, samples were made with three w/c (water/cement) ratios: 0.4, 0.5 and 0.6. Two series were an unmodified cement matrix. In the remaining two series, the MWCNT was added in an amount of 0.1% by cement weight. The MWCNT was used as an aqueous dispersion in the presence of a surfactant, SDS (sodium dodecyl sulfate, C₁₂H₂₅OSO₂ONa). The prepared aqueous solution was sonicated for 30 minutes. The MWCNT aqueous dispersion and cement were then mixed using a mechanical stirrer. The parameters were tested after 28 days of maturation. Additionally, the change in these parameters was determined after the samples were subjected to a temperature of 250°C for 4 hours (thermal shock).
Measurement of the apparent density indicated that the cement paste with the MWCNT addition was about 30% lighter than the conventional cement matrix. This is because the use of the MWCNT water dispersion with SDS as surfactant resulted in the formation of air pores, which were trapped in the volume of the material. SDS, as an anionic surfactant, exhibits characteristics specific to blowing agents: gas-forming and foaming substances. Because of this increased porosity, the cement pastes with the MWCNT exhibited lower compressive and tensile strengths than the cement paste without the additive. It was observed, however, that the samples with the MWCNT showed the smallest decreases in compressive and tensile strength after exposure to the elevated temperature. The MWCNT, when well dispersed in the cement matrix, can form bridges between hydrates at the nanoscale of the material's structure, which may result in increased cohesion of the cement material subjected to a thermal shock. The obtained material could be used for the production of aerated concrete or, together with lightweight aggregates, for the production of lightweight concrete.
Keywords: cement paste, elevated temperature, mechanical parameters, multiwall carbon nanotubes, physical parameters, SDS
Procedia PDF Downloads 353
130 Conserving Naubad Karez Cultural Landscape – a Multi-Criteria Approach to Urban Planning
Authors: Valliyil Govindankutty
Abstract:
Human civilizations across the globe stand testimony to water being one of the major points of interaction with nature. In drier areas especially, these interactions revolve around water: harnessing, transporting, using and managing it. Many ingenious ideas for harnessing, transporting, storing and distributing water were born, nurtured and developed in the drier parts of the world. Many methods of water extraction, collection and management can be found throughout the world, some of which are associated with the efficient, sustained use of surface water, groundwater and rainwater. The karez is one such ingenious method of collecting, transporting, storing and distributing groundwater. Most of the karez systems in India were developed during the reign of Muslim dynasties whose ruling classes descended from Persia or had influential connections there, and who invited expert engineers from the region. Karez have strongly influenced village socio-economic organisation owing to the multitude of uses to which they were put. These masterpieces of engineering collect groundwater and direct it, through a subsurface gallery with a gradual slope, to surface canals that provide water to settlements and agricultural fields. This ingenious technology, the karez, was the result of the need to harness groundwater in arid areas like Bidar. The study views this traditional technology in a historical perspective linked to the sustainable utilization and management of groundwater and, above all, the immediate environment. The karez system is one of the best available demonstrations of human ingenuity and adaptability to situations and locations of water scarcity. Bidar, capital of the erstwhile Bahmani sultanate with a history of more than 700 years, is one of the heritage cities of the present Karnataka State. The unique water systems of Bidar, along with other historic entities, have been listed under the World Heritage Watch List by the World Monument Fund.
The historical, or cultural, landscape in Bidar is very closely associated with the natural resources of the region, the karez systems being among the best examples. The karez systems were the lifeline of Bidar in its historical period, providing potable water and fulfilling domestic and irrigation needs both within and outside the fort enclosures. These systems are still functional, but under great pressure and the threat of rapid and unplanned urbanisation. Changes in land use and the fragmentation of land are already paving the way for irreversible modification of the karez cultural and geographic landscape. The paper discusses the significance of the character-defining elements of the Naubad karez landscape, highlights the importance of conserving cultural heritage, and presents a geographical approach to its revival.
Keywords: karez, groundwater, traditional water harvesting, cultural heritage landscape, urban planning
Procedia PDF Downloads 493
129 The Direct Deconvolutional Model in the Large-Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
The utilization of Large Eddy Simulation (LES) has been extensive in turbulence research. LES concentrates on resolving the significant grid-scale motions while representing the smaller scales through subfilter-scale (SFS) models. The deconvolution model, among the available SFS models, has proven successful in LES of engineering and geophysical flows. Nevertheless, a thorough investigation of how sub-filter-scale dynamics and filter anisotropy affect SFS modeling accuracy has been lacking. The outcomes of LES are significantly influenced by filter selection and grid anisotropy, factors that have not been adequately addressed in earlier studies. This study examines two crucial aspects of LES. Firstly, the accuracy of the direct deconvolution model (DDM) is evaluated with respect to sub-filter-scale (SFS) dynamics across varying filter-to-grid ratios (FGR) in isotropic turbulence. Various invertible filters are employed, including Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The importance of the FGR becomes evident, as it plays a critical role in controlling errors for precise SFS stress prediction. When the FGR is set to 1, the DDM models struggle to faithfully reconstruct the SFS stress due to inadequate resolution of the SFS dynamics. Notably, prediction accuracy improves when the FGR is set to 2, leading to accurate reconstruction of the SFS stress, except for cases involving the Helmholtz I and II filters. Remarkably high precision, nearly 100%, is achieved at an FGR of 4 for all DDM models. Furthermore, the study extends to filter anisotropy and its impact on SFS dynamics and LES accuracy. By utilizing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 are examined for the LES filters. The results emphasize the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions.
Notably high correlation coefficients, exceeding 90%, are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori analysis, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, including velocity spectra, probability density functions of the vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is evident that as filter anisotropy intensifies, the results of the DSM and DMM deteriorate, while the DDM consistently delivers satisfactory outcomes across all filter-anisotropy scenarios. These findings underscore the potential of the DDM framework as a valuable tool for advancing the development of sophisticated SFS models for LES in turbulence research.
Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
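As a rough illustration of the deconvolution idea behind such models, the sketch below filters a 1-D field with a spectral Gaussian filter and then approximately inverts the filter with a few van Cittert iterations. The grid size, filter width, and iteration count are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

# Illustrative 1-D sketch (not the study's code): filter a field with a
# Gaussian kernel, then approximately deconvolve it with van Cittert
# iterations u_{n+1} = u_n + (f_bar - G * u_n).

N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(5.0 * x)          # "true" unfiltered field

def gaussian_filter(f, delta, dx):
    """Apply a spectral Gaussian filter of width delta."""
    k = np.fft.fftfreq(f.size, d=dx) * 2.0 * np.pi
    G = np.exp(-(k * delta) ** 2 / 24.0)       # Gaussian transfer function
    return np.real(np.fft.ifft(np.fft.fft(f) * G))

dx = x[1] - x[0]
delta = 8.0 * dx                               # filter width (an FGR-like choice)
f_bar = gaussian_filter(u, delta, dx)          # resolved (filtered) field

# van Cittert approximate deconvolution of the filtered field
u_star = f_bar.copy()
for _ in range(5):
    u_star = u_star + (f_bar - gaussian_filter(u_star, delta, dx))

err_before = np.linalg.norm(f_bar - u) / np.linalg.norm(u)
err_after = np.linalg.norm(u_star - u) / np.linalg.norm(u)
print(err_after < err_before)                  # deconvolution recovers detail
```

The iteration amplifies the filtered-out high-wavenumber content back toward the unfiltered field, which is the mechanism a deconvolution-type SFS model exploits when reconstructing SFS stresses.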
Procedia PDF Downloads 74
128 Numerical Modelling of the Influence of Meteorological Forcing on Water-Level in the Head Bay of Bengal
Authors: Linta Rose, Prasad K. Bhaskaran
Abstract:
Water-level information along the coast is very important for disaster management, navigation, shoreline management planning, coastal engineering and protection works, port and harbour activities, and a better understanding of near-shore ocean dynamics. The water-level variation along a coast arises from various factors such as astronomical tides and meteorological and hydrological forcing. The study area is the Head Bay of Bengal, which is highly vulnerable to flooding events caused by monsoons, cyclones and sea-level rise. The study aims to explore the extent to which wind and surface pressure can influence water-level elevation, in view of the low-lying topography of the coastal zones in the region. The ADCIRC hydrodynamic model has been customized for the Head Bay of Bengal, discretized using flexible finite elements, and validated against tide gauge observations. Monthly mean climatological wind and mean sea level pressure fields from the ERA-Interim reanalysis were used as input forcing, in addition to tidal forcing, to simulate water-level variation in the Head Bay of Bengal. The output water-level was compared against that produced using tidal forcing alone, so as to quantify the contribution of meteorological forcing to the water-level. The average contribution of the meteorological fields to the water-level in January is 5.5% at a deep-water location and 13.3% at a coastal location. During July, when the monsoon winds are strongest in this region, this increases to 10.7% and 43.1% at the deep-water and coastal locations, respectively. The model output was tested by varying the input conditions of the meteorological fields in an attempt to quantify the relative significance of wind speed and wind direction on the water-level. Under uniform wind conditions, the results showed a higher contribution of the meteorological fields for south-west winds than for north-east winds when the wind speed was higher.
A comparison of the spectral characteristics of the output water-level with those generated by tidal forcing alone showed additional modes with seasonal and annual signatures. Moreover, the non-linear monthly mode was found to be weaker than in the tidal simulation. Both results indicate that the meteorological fields have little effect on the water-level at periods of less than a day, and that they induce non-linear interactions between existing modes of oscillation. The study signifies the role of meteorological forcing under fair weather conditions and points out that a combination of multiple forcing fields, including tides, wind, atmospheric pressure, waves, precipitation and river discharge, is essential for efficient and effective forecast modelling, especially during extreme weather events.
Keywords: ADCIRC, Head Bay of Bengal, mean sea level pressure, meteorological forcing, water-level, wind
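The quoted percentage contributions can be understood as a relative difference between two model runs, one with tidal forcing alone and one with the full forcing. The helper and the sample water levels below are hypothetical, for illustration only; they are not the study's model output.

```python
# Illustrative sketch (hypothetical numbers): how a "contribution of
# meteorological forcing" percentage can be derived from two model runs.

def meteo_contribution_pct(level_tide_only, level_full):
    """Percent of the total water-level attributable to meteorological forcing."""
    return 100.0 * abs(level_full - level_tide_only) / abs(level_full)

# e.g. a coastal point where tides alone give 1.40 m but full forcing gives 1.62 m
pct = meteo_contribution_pct(1.40, 1.62)
print(round(pct, 1))  # → 13.6
```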
Procedia PDF Downloads 217
127 Exploring Instructional Designs on the Socio-Scientific Issues-Based Learning Method in Respect to STEM Education for Measuring Reasonable Ethics on Electromagnetic Wave through Science Attitudes toward Physics
Authors: Adisorn Banhan, Toansakul Santiboon, Prasong Saihong
Abstract:
The Socio-Scientific Issues-Based Learning method was used to compare blended instruction with STEM education in a sample of 84 students in two classes at the 11th grade level of Sarakham Pittayakhom School. The two instructional models were organized into five instructional lesson plans in the context of the electromagnetic wave issue. The research procedure assigned each instructional method to one of two groups: a 40-student experimental group taught with the STEM education (STEMe) design, and a 40-student control group administered the Socio-Scientific Issues-Based Learning (SSIBL) method. Associations between students' learning achievements under each instructional method and their science attitudes, as predicted from their exploring activities toward physics with the STEMe and SSIBL methods, were compared. The Measuring Reasonable Ethics Test (MRET) was used to assess students' reasonable ethics under the STEMe and SSIBL instructional design methods in each group. A pretest and posttest technique was used to monitor and evaluate students' performance of reasonable ethics on the electromagnetic wave issue in the STEMe and SSIBL classes. Students were observed and gained experience with the phenomena being studied under the Socio-Scientific Issues-Based Learning model. This supports the view that STEM is not just teaching about Science, Technology, Engineering, and Mathematics; it is a culture that needs to be cultivated to help create a problem-solving, creative, critical-thinking workforce for tomorrow in physics. Students' attitudes were assessed with the Test Of Physics-Related Attitude (TOPRA), modified from the original Test Of Science-Related Attitude (TOSRA). Comparisons between students' learning achievements under the different instructional methods, the STEMe and SSIBL, were analyzed.
Associations between students' performance under the STEMe and SSIBL instructional design methods, their reasonable ethics, and their science attitudes toward physics were examined. The findings show that the efficiency of the SSIBL and STEMe innovations met the criteria, with IOC values above the 80/80 standard level. Students' learning achievements in the control and experimental groups with the SSIBL and STEMe differed significantly at the .05 level. Comparing students' reasonable ethics under the two methods, students' responses to the instructional activities were higher with the STEMe than with the SSIBL instructional method. For associations between students' later learning achievements with the SSIBL and STEMe, the predictive efficiency values of R² indicate that 67% and 75% of the variance for the SSIBL, and 74% and 81% for the STEMe, were attributable to the development of reasonable ethics and science attitudes toward physics, respectively.
Keywords: socio-scientific issues-based learning method, STEM education, science attitudes, measurement, reasonable ethics, physics classes
Procedia PDF Downloads 291
126 Computational Homogenization of Thin Walled Structures: On the Influence of the Global vs Local Applied Plane Stress Condition
Authors: M. Beusink, E. W. C. Coenen
Abstract:
The increased application of novel structural materials, such as high-grade asphalt, concrete and laminated composites, has sparked the need for a better understanding of the often complex, non-linear mechanical behavior of such materials. The effective macroscopic mechanical response is generally dependent on the applied load path. Moreover, it is also significantly influenced by the microstructure of the material, e.g. embedded fibers, voids and/or grain morphology. At present, multiscale techniques are widely adopted to assess micro-macro interactions in a numerically efficient way. Computational homogenization techniques have been successfully applied to a wide range of engineering cases, e.g. cases involving first-order and second-order continua, thin shells and cohesive zone models. Most of these homogenization methods rely on Representative Volume Elements (RVE), which model the relevant microstructural details in a confined volume. Imposed through kinematical constraints or boundary conditions, an RVE can be subjected to a microscopic load sequence. This provides the RVE's effective stress-strain response, which can serve as constitutive input for macroscale analyses. Simultaneously, such a study of an RVE gives insight into fine-scale phenomena such as microstructural damage and its evolution. It has been reported by several authors that the type of boundary conditions applied to the RVE affects the resulting homogenized stress-strain response. As a consequence, dedicated boundary conditions have been proposed to deal appropriately with this concern. For the specific case of a planar assumption for the analyzed structure, e.g. plane strain, axisymmetric or plane stress, this assumption needs to be addressed consistently at all considered scales. Although a planar condition has been employed in many multiscale studies, its impact on the multiscale solution has not been explicitly investigated.
This work therefore focuses on the influence of the planar assumption in multiscale modeling. In particular, the plane stress case is highlighted by proposing three different implementation strategies compatible with a first-order computational homogenization framework. The first method consists of applying classical plane stress theory at the microscale, whereas with the second method a generalized plane stress condition is assumed at the RVE level. For the third method, the plane stress condition is applied at the macroscale by requiring that the resulting macroscopic out-of-plane forces equal zero. These strategies are assessed through a numerical study of a thin-walled structure, and the resulting effective macroscale stress-strain responses are compared. It is shown that there is a clear influence of the length scale at which the planar condition is applied.
Keywords: first-order computational homogenization, planar analysis, multiscale, microstructures
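For the first strategy, applying classical plane stress theory amounts to enforcing zero out-of-plane stress and condensing the 3-D constitutive law to the familiar in-plane stiffness matrix. A minimal sketch for an isotropic material follows; the material constants are illustrative assumptions, not values from the study.

```python
import numpy as np

# Plane-stress stiffness for an isotropic material, obtained by enforcing
# sigma_zz = 0 and condensing the out-of-plane strain out of 3-D Hooke's law.
# E and nu are illustrative (steel-like) values, not the paper's material data.

E, nu = 210e9, 0.3  # Young's modulus [Pa], Poisson's ratio

# Q relates [s_xx, s_yy, s_xy] to [e_xx, e_yy, gamma_xy] under plane stress.
Q = (E / (1.0 - nu**2)) * np.array([
    [1.0, nu,  0.0],
    [nu,  1.0, 0.0],
    [0.0, 0.0, (1.0 - nu) / 2.0],
])

eps = np.array([1e-3, 0.0, 0.0])   # a uniaxial in-plane strain state
sig = Q @ eps                      # resulting in-plane stresses
print(sig)
```

The generalized plane stress and macroscale variants of the paper relax this pointwise condition, enforcing it only in a through-thickness or macroscopic average sense instead.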
Procedia PDF Downloads 233
125 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain
Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende
Abstract:
SPICE-based simulators are quite robust and widely used for the simulation of electronic circuits; their algorithms support linear and non-linear lumped components, and they can handle a large number of encapsulated elements. Despite the great potential of these SPICE-based simulators in the analysis of quasi-static electromagnetic field interaction, that is, at low frequency, they are limited when applied to microwave hybrid circuits in which there are both lumped and distributed elements. Usually the spatial discretization of the FDTD (Finite-Difference Time-Domain) method is done according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum temporal discretization accepted for that spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability conditions for the leapfrogging of the Yee algorithm; however, it is known that for the field update, the stability of the complete FDTD procedure depends on factors other than just the stability of the Yee algorithm, because an FDTD program needs other algorithms in order to be useful in engineering problems. Examples of these algorithms are Absorbing Boundary Conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulation at ultra-wide frequencies.
The models of the resistive source, resistor, capacitor, inductor, and diode are evaluated, among the mathematical models for lumped components in the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) method, through a parametric analysis of the size of the Yee cells that discretize the lumped components. In this way, an ideal cell size is sought so that the analysis in the FDTD environment agrees as closely as possible with the expected circuit behavior while maintaining the stability conditions of the method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the computational implementation of the models is carried out in the Matlab® environment. The Mur boundary condition is used as the absorbing boundary of the FDTD method. Validation of the models is done by comparing the results obtained by the FDTD method, through the electric field values and the currents in the components, with analytical results using circuit parameters.
Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis
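The Courant criterion mentioned above can be written down directly: for a 3-D Yee grid the time step is bounded by the cell sizes and the wave propagation speed. The millimetre cell size below is an illustrative assumption for a microwave-band mesh, not a value from the paper.

```python
import math

# Courant stability limit for the 3-D FDTD leapfrog scheme:
# dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2)).
# The 1 mm cubic cells are an illustrative assumption.

c = 299_792_458.0            # speed of light in vacuum [m/s]

def courant_dt_max(dx, dy, dz):
    """Largest stable time step for given Yee cell dimensions."""
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

dx = dy = dz = 1e-3          # 1 mm cubic Yee cells
dt = courant_dt_max(dx, dy, dz)
print(f"{dt:.3e} s")         # on the order of a few picoseconds
```

Shrinking the cells to resolve a small lumped component therefore shrinks the stable time step in proportion, which is one reason the cell-size parametric study matters.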
Procedia PDF Downloads 151
124 Congruency of English Teachers’ Assessments Vis-à-Vis 21st Century Skills Assessment Standards
Authors: Mary Jane Suarez
Abstract:
A massive educational overhaul has taken place at the onset of the 21st century, addressing the mismatch between employability skills and the scholastic skills taught in schools. For a community to thrive in an ever-developing economy, every educational institution should teach the skills necessary for job competency. However, in harnessing 21st-century skills amongst learners, teachers often lack familiarity with, and thorough insight into, the emerging 21st-century skills, yet they must both comprehend the characteristics of 21st-century skills learning and implement the tenets of 21st-century skills teaching. To espouse 21st-century skills learning and teaching, a United States-based national coalition, the Partnership for 21st Century Skills (P21), has identified the four most important skills in 21st-century learning: critical thinking, communication, collaboration, and creativity and innovation, with an established framework for 21st-century skills standards. Assessment of skills is the lifeblood of every teaching and learning encounter. It is correspondingly crucial to look at the 21st-century standards and the assessment guides recognized by P21 to ensure that learners are 21st-century ready. This mixed-method study sought to discover and describe the classroom assessments used by English teachers in a public secondary school in the Philippines with course offerings in science, technology, engineering, and mathematics (STEM). The research evaluated the assessment tools implemented by English teachers and how these tools were congruent with the 21st-century assessment standards of P21. A convergent parallel design was used to analyze assessment tools and practices in four phases.
In the data-gathering phase, survey questionnaires, document reviews, interviews, and classroom observations were used to gather quantitative and qualitative data simultaneously and to establish how assessment tools and practices were consistent with the P21 framework, with the four Cs as its foci. In the analysis phase, the data were treated using mean, frequency, and percentage. In the merging and interpretation phases, a side-by-side comparison was used to identify convergent and divergent aspects of the results. In conclusion, the results yielded assessment tools and practices that were inconsistently used by teachers, if used at all. Findings showed inconsistencies in implementing authentic assessments; a scarcity of rubric use for critically assessing 21st-century skills in both language and literature subjects; incongruencies in using portfolio and self-reflective assessments; the exclusion of intercultural aspects in assessing the four Cs; and a failure to integrate collaboration into formative and summative assessments. As a recommendation, a harmonized assessment scheme of P21 skills was fashioned for teachers to plan, implement, and monitor classroom assessments of 21st-century skills, ensuring the alignment of such assessments to P21 standards for the furtherance of the institution’s thrust to effectively integrate 21st-century skills assessment standards into its curricula.
Keywords: 21st-century skills, 21st-century skills assessments, assessment standards, congruency, four Cs
Procedia PDF Downloads 192
123 Effect of Internet Addiction on Dietary Behavior and Lifestyle Characteristics among University Students
Authors: Hafsa Kamran, Asma Afreen, Zaheer Ahmed
Abstract:
Internet addiction, a mental health disorder that has emerged over the last two decades, is manifested by an inability to control internet use, leading to academic, social, physiological and/or psychological difficulties. The present study aimed to assess the levels of internet addiction among university students in Lahore and to explore the effects of internet addiction on their dietary behavior and lifestyle. It was an analytical cross-sectional study. Data were collected from October to December 2016 from students of four universities selected through a two-stage sampling method. There were 500 participants, and 13 questionnaires were rejected due to incomplete information. Levels of internet addiction (IA) were calculated using the Young Internet Addiction Test (YIAT). Data were also collected on students’ demographics, lifestyle factors and dietary behavior using a self-reported questionnaire. Data were analyzed using SPSS (version 21). The chi-square test was applied to evaluate the relationships between variables. Results of the study revealed that 10% of the population had severe internet addiction, while moderate internet addiction was present in 42%. Higher prevalence was found among males (11% vs. 8%), private-sector university students (p = 0.008) and engineering students (p = 0.000). The lifestyle habits of internet addicts were of significantly poorer quality than those of normal users (p = 0.05). Internet addiction was found to be associated with being less physically active (p = 0.025), a shorter duration of physical activity (p = 0.016), a more disorganized sleep pattern (p = 0.023), a shorter duration of sleep (p = 0.019), being more tired and sleepy in class (p = 0.033) and spending more time on the internet compared with normal users. Severe and moderate internet addicts were also found to be more overweight and obese than normal users (p = 0.000). The dietary behavior of internet addicts was significantly poorer than that of normal users.
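Associations like those above are tested with the Pearson chi-square statistic on a contingency table; a minimal sketch with hypothetical counts (the study's raw data are not given in this abstract):

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c table of observed counts
    (list of rows): sum over cells of (observed - expected)^2 / expected."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical 2x2 table: rows = internet addicts / normal users,
# columns = skip breakfast / do not skip breakfast
observed = [[60, 40],
            [45, 85]]
print(round(chi_square(observed), 3))
```

The statistic is then compared against the chi-square distribution with (r-1)(c-1) degrees of freedom to obtain the p-values reported above.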
Internet addicts were found to skip breakfast more often than normal users (p = 0.039). Common reasons for meal skipping were lack of time and snacking between meals (p = 0.000). They also had increased meal size (p = 0.05) and a habit of snacking while using the internet (p = 0.027). Fast food (p = 0.016) and fried items (p = 0.05) were the most consumed snacks, while carbonated beverages (p = 0.019) were the most consumed beverages among internet addicts. Internet addicts were found to consume fewer than the recommended daily servings of dairy (p = 0.008) and fruits (p = 0.000) and more servings of the meat group (p = 0.025) than their non-addict counterparts. In conclusion, this study demonstrated that internet addicts have unhealthy dietary behavior and inappropriate lifestyle habits. University students should be educated regarding the importance of a balanced diet and a healthy lifestyle, which are critical for effective primary prevention of numerous chronic degenerative diseases. Furthermore, it is necessary to raise awareness concerning the adverse effects of internet addiction among youth and their parents.
Keywords: dietary behavior, internet addiction, lifestyle, university students
Procedia PDF Downloads 200
122 Biotechnological Methods for the Grouting of the Tunneling Space
Authors: V. Ivanov, J. Chu, V. Stabnikov
Abstract:
Different biotechnological methods for the production of construction materials and for the performance of construction processes in situ are developing within a new scientific discipline of Construction Biotechnology. The aim of this research was to develop and test new biotechnologies and biotechnological grouts for minimizing the hydraulic conductivity of fractured rocks and porous soil. This is essential for minimizing the flow of groundwater into construction sites and the tunneling space before and after excavation, and inside levees, as well as for stopping water seepage from aquaculture ponds, agricultural channels, radioactive waste or toxic chemical storage sites, landfills, and polluted soils. Conventional fine or ultrafine cement grouts or chemical grouts have such restrictions as high cost, high viscosity and, in some cases, toxicity, whereas biogrouts, which are based on microbial or enzymatic activities and some inexpensive inorganic reagents, could be more suitable in many cases because of their lower cost and low or zero toxicity. Because of these advantages, the development of biotechnologies for biogrouting is growing rapidly. However, the biogrout most popular at present, which is based on the activity of urease-producing bacteria initiating the crystallization of calcium carbonate from a calcium salt, has such disadvantages as the production of toxic ammonium/ammonia and the development of high pH. Therefore, the aim of our studies was the development and testing of new biogrouts that are environmentally friendly and of low cost, suitable for large-scale geotechnical, construction, and environmental applications. New microbial biotechnologies have been studied and tested in sand columns, fissured rock samples, a 1 m3 tank with sand, and a pack of stone sheets that served as models of the porous soil and fractured rocks.
Several biotechnological methods showed positive results: 1) biogrouting using sequential desaturation of sand by injection of denitrifying bacteria and medium, followed by biocementation using urease-producing bacteria, urea and calcium salt, decreased the hydraulic conductivity of sand to 2×10⁻⁷ m s⁻¹ after 17 days of treatment and consumed almost three times less reagent than conventional calcium- and urea-based biogrouting; 2) biogrouting using slime-producing bacteria decreased the hydraulic conductivity of sand to 1×10⁻⁶ m s⁻¹ after 15 days of treatment; 3) biogrouting of rocks with fissures 65×10⁻⁶ m wide, using a calcium bicarbonate solution produced from CaCO3 and CO2 under 30 bar pressure, decreased the hydraulic conductivity of the fissured rocks to 2×10⁻⁷ m s⁻¹ after 5 days of treatment. These bioclogging technologies could have many advantages over conventional construction materials and processes and can be used in geotechnical engineering, agriculture and aquaculture, and for environmental protection.
Keywords: biocementation, bioclogging, biogrouting, fractured rocks, porous soil, tunneling space
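Hydraulic conductivities like those reported above are typically obtained from a constant-head (Darcy) column test; a minimal sketch, with all column dimensions and flow rates hypothetical:

```python
def hydraulic_conductivity(flow_rate, length, area, head_diff):
    """Hydraulic conductivity k (m/s) from a constant-head test via Darcy's law:
    k = Q * L / (A * dh), with Q in m^3/s, L and dh in m, and A in m^2."""
    return flow_rate * length / (area * head_diff)

# Hypothetical treated-sand column: Q = 2e-9 m^3/s, L = 0.1 m,
# A = 1e-3 m^2, dh = 1.0 m
k = hydraulic_conductivity(2e-9, 0.1, 1e-3, 1.0)
print(f"k = {k:.1e} m/s")
```

With these illustrative numbers the test recovers a conductivity on the order of the 2×10⁻⁷ m s⁻¹ reported for the biocemented sand.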
Procedia PDF Downloads 207
121 Active Vibration Reduction for a Flexible Structure Bonded with Sensor/Actuator Pairs on Efficient Locations Using a Developed Methodology
Authors: Ali H. Daraji, Jack M. Hale, Ye Jianqiao
Abstract:
With the extensive use of high-specific-strength structures to optimise loading capacity and material cost in aerospace and most engineering applications, much effort has been expended to develop intelligent structures for active vibration reduction and structural health monitoring. These structures are highly flexible, have inherently low internal damping, and are associated with large vibration amplitudes and long decay times. The modification of such structures by adding lightweight piezoelectric sensors and actuators at efficient locations, integrated with an optimal control scheme, is considered an effective solution for structural vibration monitoring and control. The size and location of the sensors and actuators are important research topics, as they affect the level of vibration detection and reduction and the amount of energy provided by a controller. Several methodologies have been presented to determine the optimal location of a limited number of sensors and actuators for small-scale structures. However, these studies have tackled the problem directly, measuring the fitness function based on eigenvalues and eigenvectors obtained with numerous combinations of sensor/actuator pair locations and converging on an optimal set using heuristic optimisation techniques such as genetic algorithms. This is computationally expensive for both small- and large-scale structures when a number of sensor/actuator pairs must be optimised to suppress multiple vibration modes. This paper proposes an efficient method to determine optimal locations for a limited number of sensor/actuator pairs for active vibration reduction of a flexible structure, based on the finite element method and Hamilton’s principle.
The current work takes the simplified approach of modelling a structure with sensors at all locations, subjecting it to an external force to excite the various modes of interest, and noting the locations of the sensors giving the largest average percentage sensor effectiveness, measured by dividing each sensor output voltage by the maximum for each mode. The methodology was implemented for a cantilever plate under external force excitation to find the optimal distribution of six sensor/actuator pairs to suppress the first six modes of vibration. It is shown that the resulting optimal sensor locations agree well with published optimal locations, but with very much reduced computational effort and higher effectiveness. Furthermore, it is shown that collocated sensor/actuator pairs placed at these locations give very effective active vibration reduction using an optimal linear quadratic control scheme.
Keywords: optimisation, plate, sensor effectiveness, vibration control
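The sensor-effectiveness measure described above, each sensor's voltage divided by the per-mode maximum and averaged over the modes of interest, can be sketched as follows; the voltages are illustrative placeholders, not the paper's data:

```python
def sensor_effectiveness(voltages_by_mode):
    """Average percentage effectiveness of each candidate sensor location.
    voltages_by_mode[m][s] is the output voltage of sensor s for mode m;
    each sensor is scored per mode as |V| / max|V| over that mode (x100),
    then averaged across modes."""
    n_sensors = len(voltages_by_mode[0])
    scores = [0.0] * n_sensors
    for mode in voltages_by_mode:
        v_max = max(abs(v) for v in mode)
        for s, v in enumerate(mode):
            scores[s] += 100.0 * abs(v) / v_max
    return [score / len(voltages_by_mode) for score in scores]

# Hypothetical voltages for 2 modes x 3 candidate sensor locations
eff = sensor_effectiveness([[0.2, 1.0, 0.5],
                            [0.8, 0.4, 0.8]])
print(eff)
```

Locations with the highest average score are then selected for the collocated sensor/actuator pairs.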
Procedia PDF Downloads 230
120 Computational Code for Solving the Navier-Stokes Equations on Unstructured Meshes Applied to the Leading Edge of the Brazilian Hypersonic Scramjet 14-X
Authors: Jayme R. T. Silva, Paulo G. P. Toro, Angelo Passaro, Giannino P. Camillo, Antonio C. Oliveira
Abstract:
An in-house C++ code has been developed, at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics of the Institute of Advanced Studies (Brazil), to estimate the aerothermodynamic properties around the Hypersonic Vehicle Integrated to the Scramjet. In the future, this code will be applied to the design of the Brazilian Scramjet Technological Demonstrator 14-X B. The first step towards accomplishing this objective is to apply the in-house C++ code to the leading edge of a flat plate, simulating the leading edge of the 14-X Hypersonic Vehicle and making it possible to analyze the wave phenomena of the oblique shock and the boundary layer. The development of modern hypersonic space vehicles requires knowledge of the characteristics of hypersonic flows in the vicinity of the leading edge of lifting surfaces. The strong interaction between a shock wave and a boundary layer in a viscous flow at the high supersonic Mach number 4, close to the leading edge of the plate and considering the no-slip condition, is numerically investigated; the small slip region is neglected. The study consists of solving the fluid flow equations on unstructured meshes, applying the SIMPLE algorithm within the Finite Volume Method. Unstructured meshes are generated by the in-house software ‘Modeler’, developed at the Virtual Engineering Laboratory of the Institute of Advanced Studies, initially for Finite Element problems and, in this work, adapted to the resolution of the Navier-Stokes equations based on the SIMPLE pressure-correction scheme for all-speed flows in a Finite Volume formulation. The in-house C++ code is based on the two-dimensional Navier-Stokes equations for unsteady flow, with no body forces, no volumetric heating, and no mass diffusion. Air is considered a calorically perfect gas, with constant Prandtl number and Sutherland's law for the viscosity.
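Sutherland's law, used above for the air viscosity, can be sketched directly; the reference constants are the commonly tabulated values for air (an assumption, since the abstract does not list the constants used):

```python
def sutherland_viscosity(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
    """Dynamic viscosity of air (Pa*s) from Sutherland's law:
    mu = mu_ref * (T / T_ref)**1.5 * (T_ref + S) / (T + S),
    with T in kelvin and commonly tabulated air constants as defaults."""
    return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

mu = sutherland_viscosity(300.0)
print(f"mu(300 K) = {mu:.3e} Pa*s")
```

The law captures the increase of gas viscosity with temperature, which matters strongly in the heated boundary layer near the leading edge.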
Solutions of the flat plate problem for Mach number 4 include pressure, temperature, density and velocity profiles as well as 2-D contours. The boundary layer thickness, boundary conditions, and mesh configurations are also presented. The same problem has been solved with an academic license of the software Ansys Fluent and with another in-house C++ code, which solves the fluid flow equations on structured meshes applying the MacCormack method within the Finite Difference Method, and the results will be compared.
Keywords: boundary layer, scramjet, SIMPLE algorithm, shock wave
Procedia PDF Downloads 487
119 Modeling and Analysis of Occupant Behavior on Heating and Air Conditioning Systems in a Higher Education and Vocational Training Building in a Mediterranean Climate
Authors: Abderrahmane Soufi
Abstract:
The building sector is the largest consumer of energy in France, accounting for 44% of French consumption. To reduce energy consumption and improve energy efficiency, France implemented an energy transition law targeting 40% energy savings by 2030 in the tertiary building sector. Building simulation tools are used to predict the energy performance of buildings, but the reliability of these tools is hampered by discrepancies between the real and simulated energy performance of a building. This performance gap lies in the simplified assumptions made for certain factors, such as occupant behavior with regard to air conditioning and heating, which is treated as deterministic by setting a fixed operating schedule and a fixed indoor comfort temperature. However, occupant behavior with regard to air conditioning and heating is stochastic, diverse, and complex, because it can be affected by many factors. Probabilistic models are an alternative to deterministic models. These models are usually derived from statistical data and express occupant behavior by assuming a probabilistic relationship to one or more variables. In the literature, logistic regression has been used to model occupant behavior with regard to heating and air conditioning systems via univariate logistic models in residential buildings; however, few studies have developed multivariate models for higher education and vocational training buildings in a Mediterranean climate. Therefore, in this study, occupant behavior with regard to heating and air conditioning systems was modeled using logistic regression. Occupant behavior related to turning on the heating and air conditioning systems was studied through experimental measurements collected over a period of one year (June 2023–June 2024) in three classrooms occupied by several groups of engineering and professional-training students. Instrumentation was provided to collect indoor temperature and indoor relative humidity at 10-min intervals.
Furthermore, the state of the heating/air conditioning system (off or on) and the set point were recorded. The outdoor air temperature, relative humidity, and wind speed were collected as weather data. The number of occupants, their age, and their sex were also considered. Logistic regression was used to model an occupant turning on the heating and air conditioning systems. The results yielded a proposed model that can be used in building simulation tools to predict the energy performance of teaching buildings. Based on the first months (summer and early autumn) of the investigation, the results illustrate that occupant behavior with the air conditioning systems is affected by the indoor relative humidity and temperature in June, July, and August, and by the indoor relative humidity, temperature, and number of occupants in September and October. Occupant behavior was analyzed monthly, and univariate and multivariate models were developed.
Keywords: occupant behavior, logistic regression, behavior model, Mediterranean climate, air conditioning, heating
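A multivariate logistic model of the kind developed here takes the form p = 1 / (1 + exp(-(b0 + b1·T + b2·RH))); a minimal sketch with placeholder coefficients, since the fitted values are not reported in this abstract:

```python
import math

def turn_on_probability(indoor_temp_c, indoor_rh_pct,
                        b0=-15.0, b_temp=0.45, b_rh=0.05):
    """Probability that an occupant turns on the air conditioning, from a
    multivariate logistic model. The coefficients are illustrative
    placeholders, not the study's fitted values."""
    z = b0 + b_temp * indoor_temp_c + b_rh * indoor_rh_pct
    return 1.0 / (1.0 + math.exp(-z))

# A warm, humid room should yield a higher switch-on probability
p_hot = turn_on_probability(30.0, 60.0)
p_mild = turn_on_probability(22.0, 40.0)
print(round(p_hot, 3), round(p_mild, 3))
```

In practice the coefficients are estimated by maximum likelihood from the observed on/off states and the logged indoor conditions, month by month as described above.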
Procedia PDF Downloads 57
118 Synergistic Effect of Chondroinductive Growth Factors and Synovium-Derived Mesenchymal Stem Cells on Regeneration of Cartilage Defects in Rabbits
Authors: M. Karzhauov, А. Mukhambetova, M. Sarsenova, E. Raimagambetov, V. Ogay
Abstract:
Regeneration of injured articular cartilage remains one of the most difficult and unsolved problems in traumatology and orthopedics. Currently, surgical techniques for stimulating the regeneration of cartilage in damaged joints, such as multiple microperforation, mosaic chondroplasty, abrasion and microfracture, are used to treat cartilage defects. However, as clinical practice has shown, they cannot provide full and sustainable recovery of articular hyaline cartilage. In this regard, high hopes for the regeneration of cartilage defects are currently and reasonably associated with the use of tissue engineering approaches to restore the structural and functional characteristics of damaged joints using stem cells, growth factors and biopolymers or scaffolds. The purpose of the present study was to investigate the effects of chondroinductive growth factors and synovium-derived mesenchymal stem cells (SD-MSCs) on the regeneration of cartilage defects in rabbits. SD-MSCs were isolated from the synovial membrane of Flemish giant rabbits and expanded in complete culture medium α-MEM. Rabbit SD-MSCs were characterized by CFU assay and by their ability to differentiate into osteoblasts, chondrocytes and adipocytes. The effects of growth factors (TGF-β1, BMP-2, BMP-4 and IGF-I) on MSC chondrogenesis were examined in micromass pellet cultures using histological and biochemical analysis. An articular cartilage defect (4 mm in diameter) in the intercondylar groove of the patellofemoral joint was created with a kit for mosaic chondroplasty. The defect was made down to the subchondral bone plate. Delivery of SD-MSCs and growth factors was conducted in combination with hyaluronic acid (HA). SD-MSC, growth factor and control groups were compared macroscopically and histologically at 10, 30, 60 and 90 days after intra-articular injection.
Our in vitro comparative study revealed that TGF-β1 and BMP-4 are key chondroinductive factors for both the growth and the chondrogenesis of SD-MSCs. The highest effect on MSC chondrogenesis was observed with the synergistic interaction of TGF-β1 and BMP-4. In addition, biochemical analysis of the chondrogenic micromass pellets revealed that the levels of glycosaminoglycans and DNA after combined treatment with TGF-β1 and BMP-4 were significantly higher in comparison with individual application of these factors. The in vivo study showed that complete regeneration of cartilage defects after intra-articular injection of SD-MSCs with HA takes 90 days. However, a single injection of SD-MSCs in combination with TGF-β1, BMP-4 and HA significantly promoted the regeneration rate of the cartilage defects in rabbits; in this case, complete regeneration of cartilage defects was observed 30 days after intra-articular injection. Thus, our in vitro and in vivo studies demonstrated that combined application of rabbit SD-MSCs with chondroinductive growth factors and HA results in a strong synergistic effect on chondrogenesis, significantly enhancing regeneration of the damaged cartilage.
Keywords: mesenchymal stem cells, synovium, chondroinductive factors, TGF-β1, BMP-2, BMP-4, IGF-I
Procedia PDF Downloads 304
117 Assessment of Current and Future Opportunities of Chemical and Biological Surveillance of Wastewater for Human Health
Authors: Adam Gushgari
Abstract:
The SARS-CoV-2 pandemic has catalyzed the rapid adoption of wastewater-based epidemiology (WBE) methodologies both domestically and internationally. To support the rapid scale-up of pandemic-response wastewater surveillance systems, multiple federal agencies (e.g., the US CDC), non-government organizations (e.g., the Water Environment Federation), and private charities (e.g., the Bill and Melinda Gates Foundation) have funded over $220 million USD supporting development and expanding equitable access to surveillance methods. Funds were primarily distributed directly to municipalities under the CARES Act (90.6%), followed by academic projects (7.6%) and initiatives developed by private companies (1.8%). In addition to federal funding for wastewater monitoring primarily conducted at wastewater treatment plants, state and local governments and private companies have leveraged wastewater sampling to obtain health and lifestyle data on student, prison inmate, and employee populations. We explore the viable paths for expansion of the WBE methodology across a variety of analytical methods; the development of WBE-specific samplers and real-time wastewater sensors; and their application to various government and private-sector industries. Considerable investment in, and public acceptance of, WBE suggests the methodology will be applied to other notifiable diseases and health risks in the future. Early research suggests that WBE methods can be applied to a host of additional “biological insults”, including communicable diseases and pathogens such as influenza, Cryptosporidium, Giardia, mycotoxin exposure, hepatitis, dengue, West Nile, Zika, and yellow fever. Interest in chemical insults is also likely, providing community health and lifestyle data on narcotics consumption, use of pharmaceutical and personal care products (PPCP), PFAS and hazardous chemical exposure, and microplastic exposure.
The successful application of WBE to monitor analytes correlated with carcinogen exposure, community stress prevalence, and dietary indicators has also been shown. Additionally, technological developments in in situ wastewater sensors, WBE-specific wastewater samplers, and the integration of artificial intelligence will drastically change the landscape of WBE through the development of “smart sewer” networks. The rapid expansion of the WBE field is creating significant business opportunities for professionals across the scientific, engineering, and technology industries, ultimately focused on community health improvement.
Keywords: wastewater surveillance, wastewater-based epidemiology, smart cities, public health, pandemic management, substance abuse
Procedia PDF Downloads 108
116 Modelling of Air-Cooled Adiabatic Membrane-Based Absorber for Absorption Chillers Using Low Temperature Solar Heat
Authors: M. Venegas, M. De Vega, N. García-Hernando
Abstract:
Absorption chillers have received growing attention over the past few decades, as they allow the use of low-grade heat to produce a cooling effect. The combination of this technology with solar thermal energy in the summer period can reduce the electricity consumption peak due to air-conditioning. One of the main components, the absorber, is designed for simultaneous heat and mass transfer. Usually, shell-and-tube heat exchangers are used, which are large and heavy. Cooling water from a cooling tower is conventionally used to extract the heat released during the absorption and condensation processes. These are clear inconveniences for the generalization of absorption technology, limiting its benefits in contributing to the reduction of CO2 emissions, particularly for the H2O-LiBr solution, which can work with low-temperature heat sources such as solar panels. In the present work a promising new technology is under study, consisting of the use of membrane contactors in adiabatic microchannel mass exchangers. The configuration proposed here consists of one or several modules (depending on the cooling capacity of the chiller), each containing two vapour channels separated from the solution by adjacent microporous membranes. The solution is confined in rectangular microchannels. A plastic or synthetic wall separates the solution channels from each other. The solution entering the absorber is previously subcooled using ambient air; in this way, the need for a cooling tower is avoided. A model of the proposed configuration is developed based on mass and energy balances, and correlations were selected to predict the heat and mass transfer coefficients. The concentrations and temperatures along the channels cannot be determined explicitly from the set of equations obtained. For this reason, the equations were implemented in a computer code using the Engineering Equation Solver software, EES™.
With the aim of minimizing the absorber volume in order to reduce the size of absorption chillers, the ratio between the cooling power of the chiller and the absorber volume (R) is calculated. Its variation along the solution channels is shown, allowing its optimization for selected operating conditions. For the case considered, the solution channel length is recommended to be shorter than 3 cm. The maximum values of R obtained in this work are higher than those found in optimized horizontal falling-film absorbers using the same solution. The results also show the variation of R and the chiller efficiency (COP) for different ambient temperatures and for desorption temperatures typically obtained using flat-plate solar collectors. The proposed configuration of an adiabatic membrane-based absorber using ambient air to subcool the solution is a good technology for reducing the size of absorption chillers, allowing the use of low-temperature solar heat and avoiding the need for cooling towers.
Keywords: adiabatic absorption, air-cooled, membrane, solar thermal energy
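The two figures of merit discussed above, the power-to-volume ratio R and the thermal COP, are straightforward to compute; a minimal sketch with hypothetical chiller figures, since the paper's operating points are not given here:

```python
def absorber_ratio(cooling_power_w, absorber_volume_m3):
    """R = chiller cooling power / absorber volume, in kW per m^3."""
    return (cooling_power_w / 1000.0) / absorber_volume_m3

def chiller_cop(cooling_power_w, driving_heat_w):
    """Thermal COP of an absorption chiller: cooling effect per unit
    of driving (e.g., solar) heat input."""
    return cooling_power_w / driving_heat_w

# Hypothetical 4.5 kW chiller with a 3-litre membrane absorber,
# driven by 6 kW of low-temperature solar heat
r = absorber_ratio(4500.0, 0.003)
c = chiller_cop(4500.0, 6000.0)
print(round(r, 1), round(c, 2))
```

Maximizing R at a fixed cooling capacity is what motivates the short (under 3 cm) solution channels recommended above.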
Procedia PDF Downloads 283
115 Bio-Medical Equipment Technicians: Crucial Workforce to Improve Quality of Health Services in Rural Remote Hospitals in Nepal
Authors: C. M. Sapkota, B. P. Sapkota
Abstract:
Background: Continuous developments in science and technology are increasing the availability of thousands of medical devices, all of which should be of good quality and used appropriately to address global health challenges. Biomedical devices are becoming ever more indispensable in health service delivery, and among the key workforce responsible for their design, development, regulation, evaluation and training in their use, the biomedical equipment technician (BMET) is crucial. As a pivotal member of the health workforce, biomedical technicians are an essential component of the quality health service delivery mechanism supporting the attainment of the Sustainable Development Goals. Methods: The study was based on a cross-sectional descriptive design. Indicators measuring the quality of health services were assessed in Mechi Zonal Hospital (MZH) and Sagarmatha Zonal Hospital (SZH). Indicators were calculated based on data about hospital utilization and performance in 2018, available in the medical record sections of both hospitals. MZH had employed a BMET during 2018, but SZH had no BMET in 2018. A focus group discussion with health workers in both hospitals was conducted to validate the hospital records. Client exit interviews were conducted to assess the level of client satisfaction in both hospitals. Results: In MZH there was round-the-clock availability and utilization of radiodiagnostic and laboratory equipment, and the operation theater (OT) was functional throughout the year. The bed occupancy rate in MZH was 97%, but in SZH it was only 63%. In SZH, the OT was functional on only 54% of the days in 2018. The CT scan machine was newly installed but not functional. The computerized X-ray in SZH was functional on only 72% of the days. The level of client satisfaction was 87% in MZH but just 43% in SZH. MZH performed all 256 indicated Caesarean sections, but SZH performed only 36% of 210 Caesarean sections in 2018.
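The bed occupancy rate used as an indicator above follows the standard definition; a minimal sketch, with a hypothetical bed count and inpatient-day total chosen to reproduce the 97% reported for MZH:

```python
def bed_occupancy_rate(inpatient_days, beds, days_in_period=365):
    """Bed occupancy rate (%) = inpatient days / (beds * days in period) * 100."""
    return 100.0 * inpatient_days / (beds * days_in_period)

# Hypothetical 100-bed hospital with 35,405 inpatient days over one year
occ = bed_occupancy_rate(35405, 100)
print(round(occ, 1))
```

The same structure applies to the equipment-uptime indicators (functional days divided by days in the year, times 100).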
In the annual performance ranking of government hospitals, MZH was placed 1st, whereas SZH was placed 19th out of 32 referral hospitals nationwide in 2018. Conclusion: Biomedical technicians are crucial members of the human resources for health team, with a pivotal role. Trained and qualified BMET professionals are required within health-care systems in order to design, evaluate, regulate, acquire, maintain, manage and train on safe medical technologies, applying knowledge of engineering and technology to health-care systems to ensure the availability, affordability, accessibility, acceptability and utilization of safer, higher-quality, effective, appropriate and socially acceptable biomedical technology for preventive, promotive, curative, rehabilitative and palliative care across all levels of health service delivery.
Keywords: biomedical equipment technicians, BMET, human resources for health, HRH, quality health service, rural hospitals
Procedia PDF Downloads 126
114 Slope Stability and Landslides Hazard Analysis, Limitations of Existing Approaches, and a New Direction
Authors: Alisawi Alaa T., Collins P. E. F.
Abstract:
The analysis and evaluation of slope stability and landslide hazards are critically important in civil engineering projects and in broader considerations of safety. The level of slope stability risk should be identified because of its significant and direct financial and safety effects. Slope stability hazard analysis is performed considering static and/or dynamic loading circumstances. To reduce and/or prevent the failure hazard caused by landslides, a sophisticated and practical hazard analysis method using advanced constitutive modeling should be developed and linked to an effective solution that corresponds to the specific type of slope stability and landslide failure risk. Previous studies on slope stability analysis methods identify the failure mechanism and its corresponding solution. The commonly used approaches include limit equilibrium methods, empirical approaches for rock slopes (e.g., slope mass rating and Q-slope), finite element or finite difference methods, and distinct element codes. This study presents an overview and evaluation of these analysis techniques. Contemporary source materials are used to examine these various methods on the basis of their hypotheses, factor of safety estimation, soil types, load conditions, and analysis conditions and limitations. Limit equilibrium methods play a key role in assessing the level of slope stability hazard. The slope stability safety level can be defined by identifying the equilibrium of the shear stress and shear strength. The slope is considered stable when the forces resisting movement are greater than those driving it, with a factor of safety (the ratio of the resisting to the driving forces) greater than 1.00.
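The limit equilibrium factor of safety defined above (resisting over driving forces, stable when FS > 1.00) can be sketched for the simplest case, an infinite slope; all soil parameters below are illustrative, not from the study:

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, u=0.0):
    """Factor of safety of an infinite slope by limit equilibrium:
    FS = [c' + (gamma*z*cos^2(beta) - u) * tan(phi')] / (gamma*z*sin(beta)*cos(beta)),
    with cohesion c' and pore pressure u in kPa, unit weight gamma in kN/m^3,
    failure-plane depth z in m, and slope/friction angles in degrees."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Dry slope: c' = 5 kPa, phi' = 30 deg, gamma = 19 kN/m^3, z = 3 m, beta = 25 deg
fs = infinite_slope_fs(5.0, 30.0, 19.0, 3.0, 25.0)
print(round(fs, 2))   # FS > 1.0, i.e. stable under these assumed parameters
```

Raising the pore pressure u (e.g., after heavy rainfall) reduces the frictional resistance term and can drive FS below 1.0, which is one of the mechanisms such analyses screen for.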
However, popular and practical methods, including limit equilibrium approaches, are not effective when the slope experiences complex failure mechanisms, such as progressive failure, liquefaction, internal deformation, or creep. The present study represents the first episode of an ongoing project that involves the identification of the types of landslide hazards, assessment of the level of slope stability hazard, development of a sophisticated and practical hazard analysis method, linkage of the failure type of specific landslide conditions to the appropriate solution, and application of an advanced computational method for mapping slope stability properties in the United Kingdom and elsewhere through geographical information systems (GIS) and the inverse distance weighted (IDW) spatial interpolation technique. This study investigates and assesses the different analysis and solution techniques to enhance knowledge of the mechanisms of slope stability and landslide hazard analysis and to determine the available solutions for each potential landslide failure risk.Keywords: slope stability, finite element analysis, hazard analysis, landslides hazard
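The limit equilibrium criterion described above (factor of safety as the ratio of resisting to driving forces, with FS > 1.00 indicating stability) can be illustrated with the classical infinite-slope formula. This is a generic textbook sketch, not the authors' method; the function name and all input values are illustrative only:

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, u=0.0):
    """Factor of safety for an infinite slope by limit equilibrium.

    c        -- effective cohesion on the slip plane (kPa)
    phi_deg  -- effective friction angle (degrees)
    gamma    -- unit weight of soil (kN/m^3)
    z        -- depth of the slip plane (m)
    beta_deg -- slope angle (degrees)
    u        -- pore-water pressure on the slip plane (kPa)
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    # Driving shear stress and available shear strength on the slip plane
    normal_stress = gamma * z * math.cos(beta) ** 2
    shear_stress = gamma * z * math.sin(beta) * math.cos(beta)
    strength = c + (normal_stress - u) * math.tan(phi)
    return strength / shear_stress  # FS > 1 means resisting forces govern

# A dry slope with illustrative soil parameters
fs = infinite_slope_fs(c=5.0, phi_deg=30.0, gamma=18.0, z=2.0, beta_deg=25.0)
```

For a cohesionless dry slope this reduces to FS = tan(phi)/tan(beta), the familiar special case in which the slope is at limiting equilibrium when the slope angle equals the friction angle.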
Procedia PDF Downloads 98113 Advancing Sustainable Seawater Desalination Technologies: Exploring the Sub-Atmospheric Vapor Pipeline (SAVP) and Energy-Efficient Solution for Urban and Industrial Water Management in Smart, Eco-Friendly, and Green Building Infrastructure
Authors: Mona Shojaei
Abstract:
The Sub-Atmospheric Vapor Pipeline (SAVP) introduces a distinct approach to seawater desalination with promising applications in both land and industrial sectors. SAVP systems exploit the temperature difference between a hot source and a cold environment to facilitate efficient vapor transfer, offering substantial benefits in diverse industrial and field applications. This approach incorporates dynamic boundary conditions, where the temperatures of hot and cold sources vary over time, particularly in natural and industrial environments. Such variations critically influence convection and diffusion processes, introducing challenges that require the refinement of the convection-diffusion equation and the derivation of temperature profiles along the pipeline through advanced engineering mathematics. This study formulates vapor temperature as a function of time and length using two mathematical approaches: eigenfunction expansions and Green’s functions. Combining detailed theoretical modeling, mathematical simulations, and extensive field and industrial tests, this research underscores the SAVP system’s scalability for real-world applications. Results reveal a high degree of accuracy, highlighting SAVP’s significant potential for energy conservation and environmental sustainability. Furthermore, the integration of SAVP technology within smart and green building systems creates new opportunities for sustainable urban water management. By capturing and repurposing vapor for non-potable uses such as irrigation, greywater recycling, and ecosystem support in green spaces, SAVP aligns with the principles of smart and green buildings. Smart buildings emphasize efficient resource management, enhanced system control, and automation for optimal energy and water use, while green buildings prioritize environmental impact reduction and resource conservation. SAVP technology bridges both paradigms, enhancing water self-sufficiency and reducing reliance on external water supplies.
The sustainable and energy-efficient properties of SAVP make it a vital component in resilient infrastructure development, addressing urban water scarcity while promoting eco-friendly living. This dual alignment with smart and green building goals positions SAVP as a transformative solution in the pursuit of sustainable urban resource management.Keywords: sub-atmospheric vapor pipeline, seawater desalination, energy efficiency, vapor transfer dynamics, mathematical modeling, sustainable water solutions, smart buildings
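The eigenfunction route mentioned above can be sketched for the simplest fixed-boundary case. This is an illustrative reduction only (the study treats time-varying boundary temperatures, which this sketch does not capture): for a constant vapor speed u and diffusivity alpha on a pipeline of length L,

```latex
\frac{\partial T}{\partial t} + u\,\frac{\partial T}{\partial x}
  = \alpha\,\frac{\partial^{2} T}{\partial x^{2}}, \qquad 0 < x < L,\; t > 0 .
```

With homogeneous end conditions, the substitution T(x,t) = exp(ux/2\alpha - u^2 t/4\alpha) \theta(x,t) reduces this to the heat equation for \theta, whose eigenfunction expansion gives

```latex
T(x,t) = e^{\frac{u x}{2\alpha} - \frac{u^{2} t}{4\alpha}}
  \sum_{n=1}^{\infty} c_{n}\, e^{-\alpha \left(\frac{n\pi}{L}\right)^{2} t}
  \sin\frac{n\pi x}{L},
```

with the coefficients c_n fixed by the initial temperature profile. The Green's-function approach sums the same series into an integral kernel acting on the initial and boundary data.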
Procedia PDF Downloads 10112 COVID Prevention, Working Environmental Risk Prevention, and Business Continuity among SMEs in Selected Districts in Sri Lanka
Authors: Champika Amarasinghe
Abstract:
Introduction: The Covid 19 pandemic hit the Sri Lankan economy badly during the year 2021. More than 65% of the Sri Lankan work force is engaged in small and medium scale businesses, which undoubtedly had to struggle for their survival and business continuity during the pandemic. Objective: To assess the association between adherence to the new norms during the Covid 19 pandemic and maintenance of healthy working environmental conditions for business continuity. A cross sectional study was carried out to assess the OSH status and adequacy of Covid 19 preventive strategies among 200 SMEs in two selected districts in Sri Lanka. These two districts were selected considering the highest availability of SMEs. The sample size was calculated, and probability proportionate to size sampling was used to select the SMEs registered with the small and medium scale development authority. An interviewer-administered questionnaire was used to collect the data, and an OSH risk assessment was carried out by a team of experts to assess the OSH status in these industries. Results: According to the findings, more than 90% of the employees in these industries had a moderate awareness of COVID 19 and its preventive strategies, such as the importance of mask use, hand sanitizing practices, and distance maintenance, but only forty percent of them implemented these practices. Furthermore, only thirty five percent of the employees and employers in these SMEs knew the reasons behind the new norms, which may explain the reluctance to implement these strategies and to adhere to the new norms in this sector. The OSH risk assessment findings revealed that workplace organization for maintaining distance between two employees was poor due to the inadequacy of space in these entities. More than fifty five percent of the SMEs had proper ventilation and lighting facilities.
More than eighty five percent of these SMEs had poor electrical safety measures. Furthermore, eighty two percent of them had not maintained fire safety measures. Eighty five percent of workers were exposed to high noise levels and chemicals, yet they were not using any personal protective equipment, nor were any other engineering controls in place. Floor conditions were poor, and records of occupational accidents and occupational diseases were not maintained. Conclusions: Based on the findings, awareness sessions were carried out by NIOSH. Six physical training sessions and continuous online trainings were carried out to overcome these issues, which made a drastic change in the working environments and ended with hundred percent implementation of the Covid 19 preventive strategies. This in turn improved worker participation in the businesses, reduced absenteeism, improved business opportunities, and allowed the businesses to continue without interruption during the third wave of Covid 19 in Sri Lanka.Keywords: working environment, Covid 19, occupational diseases, occupational accidents
Procedia PDF Downloads 86111 Sensitivity and Uncertainty Analysis of Hydrocarbon-In-Place in Sandstone Reservoir Modeling: A Case Study
Authors: Nejoud Alostad, Anup Bora, Prashant Dhote
Abstract:
Kuwait Oil Company (KOC) has been producing from its major reservoirs that are well defined, highly productive and of superior reservoir quality. These reservoirs are maturing, and priority is shifting towards difficult reservoirs to meet future production requirements. This paper discusses the results of a detailed integrated study for one of the satellite complex fields discovered in the early 1960s. Following acquisition of new 3D seismic data in 1998 and re-processing work in the year 2006, an integrated G&G study was undertaken to review the Lower Cretaceous prospectivity of this reservoir. Nine wells have been drilled in the area to date, with only three wells showing hydrocarbons in two formations. The average oil density is around 30° API (American Petroleum Institute), and the average porosity and water saturation of the reservoir are about 23% and 26%, respectively. The area is dissected by a number of NW-SE trending faults. Structurally, the area consists of horsts and grabens bounded by these faults and is hence compartmentalized. The Wara/Burgan formation consists of discrete, dirty sands with clean channel sand complexes. There is a dramatic change in Upper Wara distributary channel facies, and the reservoir quality of the Wara and Burgan section varies with the change of facies over the area. So predicting reservoir facies and quality from sparse well data is a major challenge in delineating the prospective area. To characterize the reservoir of the Wara/Burgan formation, an integrated workflow involving seismic, well, petro-physical, reservoir and production engineering data has been used. Porosity and water saturation models are prepared and analyzed to predict the reservoir quality of the Wara and Burgan 3rd sand upper reservoirs. Subsequently, boundary conditions are defined for reservoir and non-reservoir facies by integrating facies, porosity and water saturation.
Based on the detailed analyses of volumetric parameters, potential volumes of stock-tank oil initially in place (STOIIP) and gas initially in place (GIIP) were documented after running several probabilistic sensitivity analyses using the Monte Carlo simulation method. Sensitivity analysis on probabilistic models of reservoir horizons, petro-physical properties, and oil-water contacts, and their effect on reserves, clearly shows some alteration in the reservoir geometry. All these parameters have a significant effect on the oil in place. This study has helped to identify the uncertainty and risks of this prospect in particular, and the company is planning to develop this area by drilling new wells.Keywords: original oil-in-place, sensitivity, uncertainty, sandstone, reservoir modeling, Monte-Carlo simulation
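The probabilistic STOIIP workflow above can be sketched with the standard volumetric equation and Monte Carlo sampling. The porosity and water saturation means come from the abstract (23% and 26%); the area, net pay, formation volume factor, and all spreads are invented placeholders, not the field's actual figures:

```python
import random

random.seed(42)

BARRELS_PER_ACRE_FT = 7758  # conversion constant for area in acres, pay in feet

def stoiip(area_acres, h_ft, phi, sw, bo):
    """Stock-tank oil initially in place, in barrels."""
    return BARRELS_PER_ACRE_FT * area_acres * h_ft * phi * (1.0 - sw) / bo

# Monte Carlo sampling of the uncertain volumetric inputs
trials = []
for _ in range(20_000):
    phi = random.gauss(0.23, 0.02)    # porosity, mean from the abstract
    sw = random.gauss(0.26, 0.03)     # water saturation, mean from the abstract
    area = random.uniform(900, 1100)  # acres (illustrative range)
    h = random.uniform(45, 55)        # net pay thickness, ft (illustrative)
    bo = 1.2                          # formation volume factor (assumed)
    trials.append(stoiip(area, h, phi, sw, bo))

trials.sort()
# Industry convention: P90 is the low (90%-probability-of-exceeding) estimate
p90, p50, p10 = (trials[int(len(trials) * q)] for q in (0.10, 0.50, 0.90))
```

The resulting P90/P50/P10 spread is what a sensitivity study then decomposes input by input (horizons, petro-physics, contacts) to rank each parameter's contribution.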
Procedia PDF Downloads 196110 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering flows and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration extends to filter anisotropy to address its impact on SFS dynamics and LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated.
The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM deteriorate, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for LES of turbulence.Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
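The deconvolution idea behind the DDM family can be sketched with the classical van Cittert iteration, which approximately inverts a known low-pass filter: u_{k+1} = u_k + (f - G(u_k)). This one-dimensional, pure-Python toy (an invented three-point filter and test signal, not the paper's filters or solver) recovers an unfiltered field from its filtered counterpart:

```python
import math

def smooth(u):
    """A simple 3-point (1/4, 1/2, 1/4) low-pass filter G, periodic in x."""
    n = len(u)
    return [0.25 * u[i - 1] + 0.5 * u[i] + 0.25 * u[(i + 1) % n] for i in range(n)]

def van_cittert(f, iterations=20):
    """Approximate deconvolution: iteratively invert the filter G.

    Each pass adds back the residual (f - G(u_k)), which damps the
    filtering error mode by mode as (1 - G_hat)^k in wavenumber space.
    """
    u = list(f)
    for _ in range(iterations):
        gu = smooth(u)
        u = [ui + (fi - gi) for ui, fi, gi in zip(u, f, gu)]
    return u

# A two-mode periodic test signal, filtered and then deconvolved
n = 64
truth = [math.sin(2 * math.pi * i / n) + 0.5 * math.sin(8 * math.pi * i / n)
         for i in range(n)]
filtered = smooth(truth)
recovered = van_cittert(filtered, iterations=30)
err_filtered = max(abs(t - f) for t, f in zip(truth, filtered))
err_recovered = max(abs(t - r) for t, r in zip(truth, recovered))
```

In an actual DDM, the reconstructed field is used to evaluate the SFS stress directly; the toy above only demonstrates why invertible filters and a sufficient FGR matter, since the iteration can only recover scales the filtered field still resolves.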
Procedia PDF Downloads 75109 Strategic Asset Allocation Optimization: Enhancing Portfolio Performance Through PCA-Driven Multi-Objective Modeling
Authors: Ghita Benayad
Abstract:
Asset allocation, which affects the long-term profitability of portfolios by distributing assets to fulfill a range of investment objectives, is the cornerstone of investment management in the dynamic and complicated world of financial markets. This paper offers a technique for optimizing strategic asset allocation with the goal of improving portfolio performance by addressing the inherent complexity and uncertainty of the market through the use of Principal Component Analysis (PCA) in a multi-objective modeling framework. The study's first section starts with a critical evaluation of conventional asset allocation techniques, highlighting how poorly they are able to capture the intricate relationships between assets and the volatile nature of the market. In order to overcome these challenges, the project suggests a PCA-driven methodology that isolates important characteristics influencing asset returns by decreasing the dimensionality of the investment universe. This decrease provides a stronger basis for asset allocation decisions by facilitating a clearer understanding of market structures and behaviors. Using a multi-objective optimization model, the project builds on this foundation by taking into account a number of performance metrics at once, including risk minimization, return maximization, and the accomplishment of predetermined investment goals like regulatory compliance or sustainability standards. This model provides a more comprehensive understanding of investor preferences and portfolio performance in comparison to conventional single-objective optimization techniques. The PCA-driven multi-objective optimization model is then applied to historical market data, with the aim of constructing portfolios that perform better under different market situations.
Compared to portfolios produced from conventional asset allocation methodologies, the results show that portfolios optimized using the proposed method display improved risk-adjusted returns, more resilience to market downturns, and better alignment with specified investment objectives. The study also looks at the implications of this PCA technique for portfolio management, including the prospect that it might give investors a more advanced framework for navigating financial markets. The findings suggest that by combining PCA with multi-objective optimization, investors may obtain a more strategic and informed asset allocation that is responsive to both market conditions and individual investment preferences. In conclusion, this capstone project improves the field of financial engineering by creating a sophisticated asset allocation optimization model that integrates PCA with multi-objective optimization. In addition to raising concerns about the condition of asset allocation today, the proposed method of portfolio management opens up new avenues for research and application in the area of investment techniques.Keywords: asset allocation, portfolio optimization, principal component analysis, multi-objective modelling, financial market
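The dimensionality-reduction step can be sketched as follows: estimate the return covariance matrix and extract the leading principal component by power iteration. The synthetic four-asset return data and factor loadings below are invented for illustration; a real implementation would use numpy/scipy and a full eigendecomposition rather than this pure-Python toy:

```python
import random

random.seed(7)

# Illustrative daily returns for four assets: a common market factor plus noise
n_days, n_assets = 500, 4
market = [random.gauss(0, 0.01) for _ in range(n_days)]
betas = [0.8, 1.0, 1.2, 0.3]  # hypothetical factor loadings
returns = [[betas[j] * market[t] + random.gauss(0, 0.004) for j in range(n_assets)]
           for t in range(n_days)]

def covariance(r):
    """Sample covariance matrix of the columns of r."""
    n, m = len(r), len(r[0])
    means = [sum(row[j] for row in r) / n for j in range(m)]
    return [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in r) / (n - 1)
             for j in range(m)] for i in range(m)]

def leading_component(cov, iters=200):
    """First principal component via power iteration on the covariance matrix."""
    m = len(cov)
    v = [1.0] * m
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    eigval = sum(v[i] * sum(cov[i][j] * v[j] for j in range(m)) for i in range(m))
    return v, eigval

cov = covariance(returns)
pc1, lam1 = leading_component(cov)
total_var = sum(cov[i][i] for i in range(n_assets))
explained = lam1 / total_var  # share of total variance captured by PC1
```

Because the assets share one dominant factor, the first component explains most of the variance; a multi-objective optimizer would then trade off return, risk, and constraint objectives in this reduced factor space.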
Procedia PDF Downloads 44108 Stimulation of Nerve Tissue Differentiation and Development Using Scaffold-Based Cell Culture in Bioreactors
Authors: Simon Grossemy, Peggy P. Y. Chan, Pauline M. Doran
Abstract:
Nerve tissue engineering is the main field of research aimed at finding an alternative to autografts as a treatment for nerve injuries. Scaffolds are used as a support to enhance nerve regeneration. In order to successfully design novel scaffolds and in vitro cell culture systems, a deep understanding of the factors affecting nerve regeneration processes is needed. Physical and biological parameters associated with the culture environment have been identified as potentially influential in nerve cell differentiation, including electrical stimulation, exposure to extracellular-matrix (ECM) proteins, dynamic medium conditions and co-culture with glial cells. The mechanisms involved in driving the cell to differentiation in the presence of these factors are poorly understood; the complexity of each of them raises the possibility that they may strongly influence each other. Some questions that arise in investigating nerve regeneration include: What are the best protein coatings to promote neural cell attachment? Is the scaffold design suitable for providing all the required factors combined? What is the influence of dynamic stimulation on cell viability and differentiation? In order to study these effects, scaffolds adaptable to bioreactor culture conditions were designed to allow electrical stimulation of cells exposed to ECM proteins, all within a dynamic medium environment. Gold coatings were used to make the surface of viscose rayon microfiber scaffolds (VRMS) conductive, and poly-L-lysine (PLL) and laminin (LN) surface coatings were used to mimic the ECM environment and allow the attachment of rat PC12 neural cells. The robustness of the coatings was analyzed by surface resistivity measurements, scanning electron microscope (SEM) observation and immunocytochemistry. Cell attachment to protein coatings of PLL, LN and PLL+LN was studied using DNA quantification with Hoechst. 
The double coating of PLL+LN was selected based on high levels of PC12 cell attachment and the reported advantages of laminin for neural differentiation. The underlying gold coatings were shown to be biocompatible using cell proliferation and live/dead staining assays. Coatings exhibiting stable properties over time under dynamic fluid conditions were developed; indeed, cell attachment and the conductive power of the scaffolds were maintained over 2 weeks of bioreactor operation. These scaffolds are promising research tools for understanding complex neural cell behavior. They have been used to investigate major factors in the physical culture environment that affect nerve cell viability and differentiation, including electrical stimulation, bioreactor hydrodynamic conditions, and combinations of these parameters. The cell and tissue differentiation response was evaluated using DNA quantification, immunocytochemistry, RT-qPCR and functional analyses.Keywords: bioreactor, electrical stimulation, nerve differentiation, PC12 cells, scaffold
Procedia PDF Downloads 242107 Comparison of Equivalent Linear and Non-Linear Site Response Model Performance in Kathmandu Valley
Authors: Sajana Suwal, Ganesh R. Nhemafuki
Abstract:
Evaluation of ground response under earthquake shaking is crucial in geotechnical earthquake engineering. Damage due to seismic excitation is mainly correlated to local geological and geotechnical conditions. It is evident from past earthquakes (e.g., 1906 San Francisco, USA; 1923 Kanto, Japan) that the local geology has a strong influence on the amplitude and duration of ground motions. Since then, significant studies have been conducted on ground motion amplification, revealing the important influence of local geology on ground response. Observations from damaging earthquakes (e.g., Niigata and San Francisco, 1964; Irpinia, 1980; Mexico, 1985; Kobe, 1995; L'Aquila, 2009) divulged that non-uniform damage patterns, particularly in soft fluvio-lacustrine deposits, are due to the local amplification of seismic ground motion. Non-uniform damage patterns were also observed in Kathmandu Valley during the 1934 Bihar-Nepal earthquake and the recent 2015 Gorkha earthquake, seemingly due to the modification of earthquake ground motion parameters. In this study, site effects resulting from amplification of soft soil in Kathmandu are presented. A large amount of subsoil data was collected and used to define an appropriate subsoil model for the Kathmandu valley. A comparative study of one-dimensional total-stress equivalent linear and non-linear site response is performed using four strong ground motions for six sites of the Kathmandu valley. In general, one-dimensional (1D) site-response analysis involves the excitation of a soil profile using the horizontal component and calculating the response at individual soil layers. In the present study, both equivalent linear and non-linear site response analyses were conducted using the computer program DEEPSOIL. The results show that there is no significant deviation between the equivalent linear and non-linear site response models until the maximum strain reaches 0.06-0.1%.
Overall, it is clearly observed from the results that the non-linear site response model performs better as compared to the equivalent linear model. However, the significant deviation between the two models results from other influencing factors, such as the assumptions made in 1D site response and the lack of accurate values of the shear wave velocity and nonlinear properties of the soil deposit. The results are also presented in terms of amplification factors, which are predicted to be around four times greater in the non-linear analysis as compared to the equivalent linear analysis. Hence, the nonlinear behavior of soil underlines the urgent need to study the dynamic characteristics of the soft soil deposit so as to derive site-specific design spectra for the Kathmandu valley for building structures resilient to future damaging earthquakes.Keywords: deep soil, equivalent linear analysis, non-linear analysis, site response
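The amplification behavior discussed above can be illustrated with the standard closed-form transfer function for a uniform damped soil layer on rigid bedrock, a textbook approximation rather than the DEEPSOIL analyses used in the study. The layer properties below are invented for a generic soft deposit:

```python
import math

def amplification(freq_hz, vs, h, damping):
    """Amplitude of the 1D transfer function for a uniform damped soil
    layer over rigid bedrock (textbook approximation):

        |F| = 1 / sqrt(cos^2(kH) + (xi * kH)^2),  kH = 2*pi*f*H / Vs
    """
    kh = 2 * math.pi * freq_hz * h / vs
    return 1.0 / math.sqrt(math.cos(kh) ** 2 + (damping * kh) ** 2)

# Illustrative soft deposit: shear wave velocity, thickness, damping ratio
vs, h, xi = 200.0, 50.0, 0.05
f1 = vs / (4 * h)                 # fundamental site frequency = Vs / 4H = 1 Hz
peak = amplification(f1, vs, h, xi)
```

At the fundamental frequency the cosine term vanishes and the peak amplification approaches 2/(pi * xi), showing why low-damping soft layers amplify so strongly; equivalent linear and nonlinear analyses differ precisely in how damping and stiffness degrade with strain around such peaks.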
Procedia PDF Downloads 289106 Human 3D Metastatic Melanoma Models for in vitro Evaluation of Targeted Therapy Efficiency
Authors: Delphine Morales, Florian Lombart, Agathe Truchot, Pauline Maire, Pascale Vigneron, Antoine Galmiche, Catherine Lok, Muriel Vayssade
Abstract:
Targeted therapy molecules are used as a first-line treatment for metastatic melanoma with B-Raf mutation. Nevertheless, these molecules can cause side effects in patients and are effective in only 50 to 60% of them. Indeed, melanoma cell sensitivity to targeted therapy molecules depends on the tumor microenvironment (cell-cell and cell-extracellular matrix interactions). To better unravel the factors modulating cell sensitivity to B-Raf inhibitors, we have developed and compared several melanoma models: from metastatic melanoma cells cultured as a monolayer (2D) to a co-culture in a 3D dermal equivalent. Cell response was studied in different melanoma cell lines such as SK-MEL-28 (mutant B-Raf (V600E), sensitive to Vemurafenib), SK-MEL-3 (mutant B-Raf (V600E), resistant to Vemurafenib) and a primary culture of dermal human fibroblasts (HDFn). Assays were initially performed in monolayer cell culture (2D), then repeated in a 3D dermal equivalent (dermal human fibroblasts embedded in a collagen gel). All cell lines were treated with Vemurafenib (a B-Raf inhibitor) for 48 hours at various concentrations. Cell sensitivity to treatment was assessed under various aspects: cell proliferation (cell counting, EdU incorporation, MTS assay), MAPK signaling pathway analysis (Western blotting), apoptosis (TUNEL), cytokine release (IL-6, IL-1α, HGF, TGF-β, TNF-α) upon Vemurafenib treatment (ELISA), and histology for the 3D models. In the 2D configuration, the inhibitory effect of Vemurafenib on cell proliferation was confirmed on SK-MEL-28 cells (IC50 = 0.5 µM), and not on the SK-MEL-3 cell line. No apoptotic signal was detected in SK-MEL-28-treated cells, suggesting a cytostatic effect of Vemurafenib rather than a cytotoxic one. The inhibition of SK-MEL-28 cell proliferation upon treatment was correlated with a strong decrease in the expression of phosphorylated proteins involved in the MAPK pathway (ERK, MEK, and AKT/PKB).
Vemurafenib (from 5 µM to 10 µM) also slowed down HDFn proliferation in both cell culture configurations (monolayer and 3D dermal equivalent). SK-MEL-28 cells cultured in the dermal equivalent were still sensitive to high Vemurafenib concentrations. To better characterize the impact of each cell population (melanoma cells, dermal fibroblasts) on Vemurafenib efficacy, cytokine release is being studied in the 2D and 3D models. We have successfully developed and validated a relevant 3D model mimicking cutaneous metastatic melanoma and its tumor microenvironment. This 3D melanoma model will be made more complex by adding a third cell population, keratinocytes, allowing us to characterize the influence of the epidermis on melanoma cell sensitivity to Vemurafenib. In the long run, the establishment of more relevant 3D melanoma models with patients' cells might be useful for personalized therapy development. The authors would like to thank the Picardie region and the European Regional Development Fund (ERDF) 2014/2020 for the funding of this work and the Oise committee of "La ligue contre le cancer".Keywords: 3D human skin model, melanoma, tissue engineering, vemurafenib efficiency
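Dose-response readouts like the IC50 reported above (0.5 µM for SK-MEL-28) are typically derived from a sigmoidal fit to proliferation data. As a hedged illustration only, this sketch evaluates a two-parameter Hill model on synthetic viability data and recovers the IC50 by log-linear interpolation; it is not the authors' assay or fitting software, and all names and values are hypothetical:

```python
import math

def hill(dose, ic50, slope):
    """Fractional viability under a two-parameter Hill (logistic) model."""
    return 1.0 / (1.0 + (dose / ic50) ** slope)

def estimate_ic50(doses, responses):
    """IC50 by log-linear interpolation between the points bracketing 50%."""
    points = list(zip(doses, responses))
    for (d0, r0), (d1, r1) in zip(points, points[1:]):
        if r0 >= 0.5 >= r1:
            t = (r0 - 0.5) / (r0 - r1)
            return 10 ** (math.log10(d0) + t * (math.log10(d1) - math.log10(d0)))
    raise ValueError("response curve does not cross 50%")

# Synthetic viability data generated around an assumed IC50 of 0.5 uM
doses = [0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0]  # uM
responses = [hill(d, ic50=0.5, slope=1.5) for d in doses]
ic50_est = estimate_ic50(doses, responses)
```

In practice the fit would be a four-parameter logistic regression on replicate MTS or EdU measurements with noise, but the interpolation above captures the quantity the IC50 summarizes.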
Procedia PDF Downloads 302105 Development a Forecasting System and Reliable Sensors for River Bed Degradation and Bridge Pier Scouring
Authors: Fong-Zuo Lee, Jihn-Sung Lai, Yung-Bin Lin, Xiaoqin Liu, Kuo-Chun Chang, Zhi-Xian Yang, Wen-Dar Guo, Jian-Hao Hong
Abstract:
In recent years, climate change has been a major factor increasing rainfall intensity and the frequency of extreme rainfall. Increased rainfall intensity and extreme rainfall frequency raise the probability of flash floods with abundant sediment transport in a river basin. Floods caused by heavy rainfall may damage bridges, embankments and hydraulic works, and cause other disasters. Therefore, foundation scouring of bridge piers, embankments and spur dikes caused by floods has been a severe problem worldwide. This severe problem has occurred in many East Asian countries, such as Taiwan and Japan, because these areas suffer typhoons, earthquakes, and flood events every year. Because river morphology results from the complex interaction between the fluid flow patterns caused by hydraulic works and sediment transport, it is extremely difficult to develop a reliable and durable sensor to measure river bed degradation and bridge pier scouring. Therefore, an innovative scour monitoring sensor using vibration-based Micro-Electro Mechanical Systems (MEMS) was developed. This vibration-based MEMS sensor was packaged inside a stainless sphere with the proper protection of full-filled resin, and can measure free vibration signals to detect scouring/deposition processes at the bridge pier. In addition, a friendly operational system is developed in this research, which includes a rainfall-runoff model, one-dimensional and two-dimensional numerical models, and the applicable sediment transport equations and local scour formulas for bridge piers. The friendly operational system produces simulation results for flood events, including the elevation changes of river bed erosion near the specified bridge pier and the erosion depth around bridge piers.
In addition, the system is developed with easy operation and an integrated interface; users can calibrate and verify the numerical models and display simulation results through the interface, comparing them against the scour monitoring sensors. To forecast the erosion depth of the river bed and the main bridge pier in the study area, the system also connects to rainfall forecast data from the Taiwan Typhoon and Flood Research Institute. The results can provide useful information in advance for the units managing river and bridge engineering.Keywords: flash flood, river bed degradation, bridge pier scouring, a friendly operational system
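The detection principle behind a vibration-based scour sensor is that the exposed (scoured) length of the instrumented element changes its natural frequency, which can be tracked from the free vibration signal. As a hedged sketch only (a direct DFT magnitude scan on a synthetic damped sinusoid, not the actual MEMS firmware or signal chain):

```python
import math

def dominant_frequency(signal, sample_rate):
    """Dominant frequency via a direct DFT magnitude scan (small-N sketch)."""
    n = len(signal)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * sample_rate / n

# Simulated free vibration of the buried sensor: a damped 12 Hz sinusoid
fs = 256.0
times = [i / fs for i in range(512)]
signal = [math.exp(-2.0 * ti) * math.sin(2 * math.pi * 12.0 * ti) for ti in times]
f_peak = dominant_frequency(signal, fs)
```

In a deployed system an FFT would replace the direct scan, and the time history of the identified frequency would be mapped to scour/deposition depth through a calibrated structural model.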
Procedia PDF Downloads 190