Search results for: complex and dynamic systems
1893 Ferromagnetic Potts Models with Multi-Site Interaction
Authors: Nir Schreiber, Reuven Cohen, Simi Haber
Abstract:
The Potts model has been widely explored in the literature for the last few decades. While many analytical and numerical results concern the traditional two-site interaction model in various geometries and dimensions, little is yet known about models where more than two spins interact simultaneously. We consider a ferromagnetic four-site interaction Potts model on the square lattice (FFPS), where the four spins reside in the corners of an elementary square. Each spin can take an integer value 1, 2, ..., q. We write the partition function as a sum over clusters consisting of monochromatic faces. When the number of faces becomes large, tracing out spin configurations is equivalent to enumerating large lattice animals. It is known that the asymptotic number of animals with k faces is governed by λᵏ, with λ ≈ 4.0626. Based on this observation, systems with q < 4 and q > 4 exhibit second- and first-order phase transitions, respectively. The nature of the transition in the q = 4 case is borderline. For any q, a critical giant component (GC) is formed. In the first-order case, the GC is simple, while it is fractal when the transition is continuous. Using simple equilibrium arguments, we obtain a (zero-order) bound on the transition point. It is claimed that this bound should apply to other lattices as well. Next, taking into account higher-order site contributions, the critical bound becomes tighter. Moreover, for q > 4, if corrections due to contributions from small clusters are negligible in the thermodynamic limit, the improved bound should be exact. The improved bound is used to relate the critical point to the finite correlation length. Our analytical predictions are confirmed by an extensive numerical study of the FFPS using the Wang-Landau method. In particular, the q = 4 marginal case is supported by a highly ambiguous pseudo-critical finite-size behavior.
Keywords: entropic sampling, lattice animals, phase transitions, Potts model
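Below is a minimal sketch of the entropic sampling approach named in the abstract: a Wang-Landau estimate of the density of states for a small four-site (plaquette) Potts model. The lattice size, q, sweep length, and flatness criterion are illustrative assumptions, not the authors' production setup.

```python
import numpy as np

# Minimal Wang-Landau (entropic sampling) sketch for a four-site
# (plaquette) Potts model on an L x L square lattice with periodic
# boundaries.  Illustrative reconstruction only: L, q, and the
# flatness/modification schedule are assumptions chosen for brevity.

L, q = 4, 4
rng = np.random.default_rng(0)

def plaquette_energy(s):
    # -1 for every elementary square whose four corner spins agree
    right = np.roll(s, -1, axis=1)
    down = np.roll(s, -1, axis=0)
    diag = np.roll(right, -1, axis=0)
    mono = (s == right) & (s == down) & (s == diag)
    return -int(mono.sum())

spins = rng.integers(0, q, size=(L, L))
E = plaquette_energy(spins)
n_levels = L * L + 1                   # E ranges over -L^2 .. 0
log_g = np.zeros(n_levels)             # running estimate of ln g(E)
hist = np.zeros(n_levels)
ln_f = 1.0                             # modification factor, reduced on flat histograms

def idx(E):
    return -E                          # map E in [-L^2, 0] to [0, L^2]

while ln_f > 1e-4:
    for _ in range(10000):
        i, j = rng.integers(0, L, size=2)
        old = spins[i, j]
        spins[i, j] = rng.integers(0, q)
        E_new = plaquette_energy(spins)
        # accept with min(1, g(E_old)/g(E_new)): the flat-histogram rule
        if np.log(rng.random()) < log_g[idx(E)] - log_g[idx(E_new)]:
            E = E_new
        else:
            spins[i, j] = old
        log_g[idx(E)] += ln_f
        hist[idx(E)] += 1
    visited = hist[hist > 0]
    if visited.min() > 0.8 * visited.mean():   # flatness check
        hist[:] = 0
        ln_f /= 2.0

print("estimated ln g(E), up to an additive constant:", log_g - log_g.max())
```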
Procedia PDF Downloads 161
1892 Distributed Listening in Intensive Care: Nurses’ Collective Alarm Responses Unravelled through Auditory Spatiotemporal Trajectories
Authors: Michael Sonne Kristensen, Frank Loesche, James Foster, Elif Ozcan, Judy Edworthy
Abstract:
Auditory alarms play an integral role in intensive care nurses’ daily work. Most medical devices in the intensive care unit (ICU) are designed to produce alarm sounds in order to make nurses aware of immediate or prospective safety risks. The utilisation of sound as a carrier of crucial patient information is highly dependent on nurses’ presence - both physically and mentally. For ICU nurses, especially the ones who work with stationary alarm devices at the patient bed space, it is a challenge to display ‘appropriate’ alarm responses at all times as they have to navigate with great flexibility in a complex work environment. While being primarily responsible for a small number of allocated patients, they are often required to engage with other nurses’ patients, relatives, and colleagues at different locations inside and outside the unit. This work explores the social strategies used by a team of nurses to comprehend and react to the information conveyed by the alarms in the ICU. Two main research questions guide the study: To what extent do alarms from a patient bed space reach the relevant responsible nurse by direct auditory exposure? By which means do responsible nurses get informed about their patients’ alarms when not directly exposed to the alarms? A comprehensive video-ethnographic field study was carried out to capture and evaluate alarm-related events in an ICU. The study involved close collaboration with four nurses who wore eye-level cameras and ear-level binaural audio recorders during several work shifts. At all times, the entire unit was monitored by multiple video and audio recorders. From a data set of hundreds of hours of recorded material, information about the nurses’ location, social interaction, and alarm exposure at any point in time was coded in a multi-channel replay interface. The data show that responsible nurses’ direct exposure to and awareness of the alarms of their allocated patients vary significantly depending on workload, social relationships, and the location of the patient’s bed space. Distributed listening is deliberately employed by the nursing team as a social strategy to respond adequately to alarms, but the patterns of information flow prompted by alarm-related events are not uniform. Auditory Spatiotemporal Trajectory (AST) is proposed as a methodological label to designate the integration of temporal, spatial, and auditory load information. As a mixed-methods metric, it provides tangible evidence of how nurses’ individual alarm-related experiences differ from one another and from stationary points in the ICU. Furthermore, it is used to demonstrate how alarm-related information reaches the individual nurse through principles of social and distributed cognition, and how that information relates to the actual alarm event. Thereby it bridges a long-standing gap in the literature on medical alarm utilisation between, on the one hand, initiatives to measure objective data of the medical sound environment without consideration for any human experience, and, on the other hand, initiatives to study subjective experiences of the medical sound environment without detailed evidence of the objective characteristics of the environment.
Keywords: auditory spatiotemporal trajectory, medical alarms, social cognition, video-ethnography
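To make the idea of joining spatial, temporal, and auditory information concrete, the sketch below flags whether a nurse was directly exposed to a given alarm by intersecting a position trajectory with the alarm's sounding interval. The data layout, the field names, and the 5 m audibility radius are invented for illustration and are not the study's coding scheme.

```python
from dataclasses import dataclass

# Illustrative bookkeeping behind an Auditory Spatiotemporal Trajectory:
# join a nurse's time-stamped positions to alarm events to flag direct
# auditory exposure.  All values below are hypothetical.

@dataclass
class AlarmEvent:
    t_start: float      # seconds from shift start
    t_end: float
    x: float            # bed-space coordinates, metres
    y: float

@dataclass
class PositionSample:
    t: float
    x: float
    y: float

AUDIBLE_RADIUS = 5.0    # assumed distance within which an alarm is directly heard

def directly_exposed(alarm: AlarmEvent, trajectory: list[PositionSample]) -> bool:
    """True if the nurse was within audible range at any point while the alarm sounded."""
    for p in trajectory:
        if alarm.t_start <= p.t <= alarm.t_end:
            if (p.x - alarm.x) ** 2 + (p.y - alarm.y) ** 2 <= AUDIBLE_RADIUS ** 2:
                return True
    return False

# Example: one alarm, a nurse who walks toward the bed space mid-alarm.
alarm = AlarmEvent(t_start=100, t_end=130, x=0.0, y=0.0)
trajectory = [PositionSample(t, 20.0 - 0.5 * (t - 90), 0.0) for t in range(90, 140, 5)]
print(directly_exposed(alarm, trajectory))  # True once the nurse comes within 5 m
```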
Procedia PDF Downloads 192
1891 Evaluation of Biological Seed Coating Technology on the Field Performance of Wheat in Regenerative Agriculture and Conventional Systems
Authors: S. Brain, P. J. Storer, H. Strydom, Z. M. Solaiman
Abstract:
Increasing farmer awareness of soil health, the impact of agricultural management practices, and the requirement for high-quality agricultural produce are major factors driving the rapid adoption of biological seed treatments - currently valued globally at USD 1.5 billion. Biological seed coatings with multistrain plant-beneficial microbial technology can positively affect plant establishment, growth, and development. These beneficial plant microbes can potentially increase soil health, plant yield, and nutrition - acting as biofertilisers, rhizoremediators, phytostimulators, and stress modulators - and can ultimately reduce the overall use of agrichemicals. A field trial was conducted on MACE wheat in the central wheat belt of Western Australia to evaluate a proprietary seed coating technology (Langleys Bio-Energetic™ Microbe Blend (BMB)) on a conventional program (+/- BMB microbes) and a regenerative Biomineral fertiliser program (+/- BMB microbes). The Conventional (+BMB) and Biomineral (+BMB) treated plants had no fungicide treatments and no disease issues. Control (no fertiliser, no microbes), Conventional (no microbes), and Biomineral (no microbes) plants were treated with fungicides (seed dressing and foliar). The research findings show that, compared to the control and no-microbe treatments, both the Conventional (+BMB) and Biomineral (+BMB) treatments gave significant increases in soil carbon (SOC), seed germination, nutrient use efficiency (NUE) of nitrogen, phosphate and mineral nutrients, grain mineral nutrient uptake, protein %, hectolitre weight, yield, and gross margins, as well as fewer screenings.
Keywords: biological seed coating, biomineral fertiliser, plant nutrition, regenerative and conventional agriculture
Procedia PDF Downloads 82
1890 Transmission Line Congestion Management Using Hybrid Fish-Bee Algorithm with Unified Power Flow Controller
Authors: P. Valsalal, S. Thangalakshmi
Abstract:
There is a widespread changeover in the electrical power industry from the old-style monopolistic structure towards a horizontally distributed competitive structure to meet the demand of rising consumption. When the transmission lines of a deregulated system are incapable of accommodating the entire service demand, the lines become overloaded or congested. The intermediary between customer and power producer, nominated as the Independent System Operator (ISO), is tasked with lessening congestion without violating transmission line limits. Among the existing approaches to congestion management, the most frequently used are generation rescheduling and load curtailment. There is a limit to rescheduling the generators, and additional load may not be served with the prevailing resources unless more private power producers are added to the system at considerably higher cost. Hence, congestion is relieved by appropriate Flexible AC Transmission Systems (FACTS) devices, which boost the existing transfer capacity of transmission lines. Among FACTS devices, the Unified Power Flow Controller (UPFC) is preferred; its correct placement is vital, and it should be positioned in the most congested line. Hence, the weak line is identified by using a power flow performance index with a new objective function solved by the proposed hybrid Fish-Bee algorithm. Further, locating the UPFC in the appropriate line reduces branch loading and minimizes voltage deviation. The power transfer capacity of lines is determined with and without the UPFC in the identified congested line of the IEEE 30-bus system, and the simulated results are compared with prevailing algorithms. It is observed that the transfer capacity of the existing line is increased with the presented algorithm, thus alleviating the congestion.
Keywords: available line transfer capability, congestion management, FACTS device, hybrid Fish-Bee algorithm, ISO, UPFC
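The weak-line identification step can be illustrated with a small sketch of a power flow performance index of the common textbook form PI = Σ (w/2n)(P/Pmax)^(2n), used to rank lines by loading. The line flows, ratings, weights, and exponent below are placeholders (not the IEEE 30-bus data), and the index form itself is an assumption based on the usual definition rather than the paper's exact objective function.

```python
import numpy as np

# Hedged sketch: rank congested lines with a power-flow performance
# index PI = sum_l (w_l / 2n) * (P_l / P_l,max)^(2n).  Line data are
# made-up placeholders; w = 1 and n = 2 are typical textbook choices.

def performance_index(p_flow, p_max, w=None, n=2):
    p_flow, p_max = np.asarray(p_flow, float), np.asarray(p_max, float)
    w = np.ones_like(p_flow) if w is None else np.asarray(w, float)
    return (w / (2 * n)) * (p_flow / p_max) ** (2 * n)

# MW flows and ratings for five hypothetical lines
p_flow = [95.0, 40.0, 130.0, 60.0, 82.0]
p_max  = [100.0, 80.0, 125.0, 90.0, 85.0]

pi_per_line = performance_index(p_flow, p_max)
ranking = np.argsort(pi_per_line)[::-1]        # most congested first
print("system PI:", pi_per_line.sum())
print("candidate line for UPFC placement:", ranking[0])  # line 2 (flow > rating)
```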
Procedia PDF Downloads 385
1889 Industrial Rock Characterization Using Nuclear Magnetic Resonance (NMR): A Case Study of Ewekoro Quarry
Authors: Olawale Babatunde Olatinsu, Deborah Oluwaseun Olorode
Abstract:
Industrial rocks were collected from a quarry site at Ewekoro in south-western Nigeria and analysed using the Nuclear Magnetic Resonance (NMR) technique. NMR measurements were conducted on the samples under partially water-saturated and fully brine-saturated conditions. Raw NMR data were analysed with the aid of T2 curves and T2 spectra generated by inversion of the raw NMR data using a conventional regularized least-squares inversion routine. Results show that NMR transverse relaxation (T2) signatures adequately distinguish between the rock types. Similar T2 curve trends and rates at partial saturation suggest that the relaxation is mainly due to adsorption of water on micropores of similar sizes, while T2 curves at full saturation depict relaxation decay rates as 1/T2(shale) > 1/T2(glauconite) > 1/T2(limestone) and 1/T2(sandstone). NMR T2 distributions at full brine saturation show a unimodal distribution in shale, a bimodal distribution in sandstone and glauconite, and a trimodal distribution in limestone. Full-saturation T2 distributions revealed the presence of well-developed and more abundant micropores in all the samples, with T2 in the range 402-504 μs. Mesopores with amplitudes much lower than those of micropores are present in limestone, sandstone, and glauconite, with T2 ranges of 8.45-26.10 ms, 6.02-10.55 ms, and 9.45-13.26 ms, respectively. Very low-amplitude macropores, with T2 values of 90.26-312.16 ms, are recognizable only in limestone samples. Samples with multiple peaks showed well-connected pore systems, with sandstone having the highest degree of connectivity. The differences in T2 curves and distributions for the rocks at full saturation can be utilised as a potent diagnostic tool for discriminating between these rock types found at Ewekoro.
Keywords: Ewekoro, NMR techniques, industrial rocks, characterization, relaxation
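The conventional regularized least-squares inversion mentioned above can be sketched as a non-negative Tikhonov fit of a multi-exponential kernel to the decay data. The synthetic decay, T2 grid, and regularization strength below are assumptions for illustration, not the study's measured signals.

```python
import numpy as np
from scipy.optimize import nnls

# Sketch of regularized least-squares T2 inversion: solve
# min ||K f - m||^2 + alpha^2 ||f||^2 with f >= 0, where
# K_ij = exp(-t_i / T2_j).  All inputs are synthetic.

t = np.linspace(1e-4, 1.0, 400)                  # echo times, s
T2_grid = np.logspace(-4, 0, 64)                 # trial T2 values, s
K = np.exp(-t[:, None] / T2_grid[None, :])       # multi-exponential kernel

# synthetic bimodal sample: micropores near 0.5 ms, mesopores near 10 ms
m = 0.7 * np.exp(-t / 5e-4) + 0.3 * np.exp(-t / 1e-2)
m += 0.005 * np.random.default_rng(1).standard_normal(t.size)  # noise

alpha = 0.1                                      # regularization strength
K_aug = np.vstack([K, alpha * np.eye(T2_grid.size)])
m_aug = np.concatenate([m, np.zeros(T2_grid.size)])
f, _ = nnls(K_aug, m_aug)                        # non-negative T2 amplitudes

peaks = T2_grid[f > 0.05 * f.max()]
print("T2 values carrying signal (s):", peaks.min(), "to", peaks.max())
```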
Procedia PDF Downloads 302
1888 Design and Thermal Analysis of Power Harvesting System of a Hexagonal-Shaped Small Spacecraft
Authors: Mansa Radhakrishnan, Anwar Ali, Muhammad Rizwan Mughal
Abstract:
Many universities around the world are working on modular and low-budget architectures for small spacecraft to reduce the development cost of the overall system. This paper focuses on the design of a modular solar power harvesting system for a hexagonal-shaped small satellite. The designed solar power harvesting system is composed of solar panels and power converter subsystems. The solar panel is composed of solar cells mounted on the external face of a printed circuit board (PCB), while the electronic components for power conversion are mounted on the interior side of the same PCB. The solar panel, with dimensions of 16.5 cm × 99 cm, is composed of 36 solar cells (each 4 cm × 7 cm) divided into four parallel banks, where each bank consists of 9 solar cells. The output voltage of a single solar cell is 2.14 V, and the combined output voltage of 9 series-connected solar cells is around 19.3 V. The output voltage of the solar panel is boosted to the satellite power distribution bus voltage level (28 V) by a boost converter working on a constant-voltage maximum power point tracking (MPPT) technique. The solar panel module is an eight-layer PCB with an embedded coil in four internal layers. This coil consumes power to generate a magnetic field that rotates the spacecraft and is thus used to control its attitude. As the power converter and distribution subsystem components are mounted on the internal layers of the PCB, thermal analysis is mandatory to ensure that the overall module temperature stays within thermal safety limits. The main focus of the overall design is on compactness, miniaturization, and efficiency enhancement.
Keywords: small satellites, power subsystem, efficiency, MPPT
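A minimal sketch of the constant-voltage MPPT idea follows: the converter duty cycle is nudged until the panel sits at a fixed reference voltage. The panel response model, the control gain, and the assumed open-circuit voltage are placeholders, not the flight design; only the 19.3 V string voltage is taken from the text.

```python
# Hedged sketch of constant-voltage MPPT: adjust the boost converter's
# duty cycle so the panel holds a fixed reference voltage.  The linear
# panel/converter response below is a crude stand-in.

V_OC = 21.4          # assumed open-circuit voltage of the 9-cell string, V
V_REF = 19.3         # regulation target for the panel, V (from the text)
K_P = 0.005          # proportional gain on the voltage error
duty = 0.3           # initial boost-converter duty cycle

def panel_voltage(duty):
    # stand-in electrical response: heavier loading (larger duty)
    # pulls the panel voltage down
    return V_OC * (1.0 - 0.45 * duty)

for step in range(200):                 # one iteration per control tick
    v = panel_voltage(duty)
    error = v - V_REF                   # above target -> load harder
    duty = min(0.95, max(0.05, duty + K_P * error))

print(f"settled duty cycle {duty:.3f}, panel voltage {panel_voltage(duty):.2f} V")
```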
Procedia PDF Downloads 80
1887 Spatial Interpolation of Intermediate Soil Properties to Enhance Geotechnical Surveying for Foundation Design
Authors: Yelbek B. Utepov, Assel T. Mukhamejanova, Aliya K. Aldungarova, Aida G. Nazarova, Sabit A. Karaulov, Nurgul T. Alibekova, Aigul K. Kozhas, Dias Kazhimkanuly, Akmaral K. Tleubayeva
Abstract:
This research focuses on enhancing geotechnical surveying for foundation design through the spatial interpolation of intermediate soil properties. Traditional geotechnical practices rely on discrete data from borehole drilling, soil sampling, and laboratory analyses, often neglecting the continuous nature of soil properties and disregarding values at intermediate locations. This study challenges these omissions by emphasizing interpolation techniques such as Kriging, inverse distance weighting, and spline interpolation to capture the nuanced spatial variations in soil properties. The methodology is applied to geotechnical survey data from two construction sites in Astana, Kazakhstan, revealing continuous representations of Young's modulus, cohesion, and friction angle. The spatial heatmaps generated through interpolation offer valuable insights into the subsurface environment, highlighting heterogeneity and aiding in more informed foundation design decisions for the considered sites. Moreover, intriguing patterns of heterogeneity, as well as visual clusters and transitions between soil classes, were explored within seemingly uniform layers. The study bridges the gap between discrete borehole samples and the continuous subsurface, contributing to the evolution of geotechnical engineering practices. The proposed approach, utilizing open-source geographic information system software, provides a practical tool for visualizing soil characteristics and may pave the way for future advancements in geotechnical surveying and foundation design.
Keywords: soil mechanical properties, spatial interpolation, inverse distance weighting, heatmaps
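Of the techniques listed, inverse distance weighting is the simplest to sketch; the snippet below maps scattered borehole values of Young's modulus onto a grid of the kind a heatmap is drawn from. The borehole positions and values are invented examples, not the Astana survey data.

```python
import numpy as np

# Minimal inverse distance weighting (IDW) sketch: interpolate a soil
# property at grid points from scattered borehole samples.  Closer
# boreholes carry more weight; power=2 is the usual default.

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Interpolate values at xy_query from scattered xy_known samples."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power            # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)       # normalize per query point
    return w @ values

# five hypothetical boreholes (x, y in metres) with Young's modulus E (MPa)
boreholes = np.array([[0, 0], [50, 10], [20, 60], [80, 70], [40, 35]], float)
E = np.array([12.0, 18.5, 9.8, 22.1, 15.0])

# a coarse grid over the site, e.g. for a heatmap
gx, gy = np.meshgrid(np.linspace(0, 80, 5), np.linspace(0, 70, 5))
grid = np.column_stack([gx.ravel(), gy.ravel()])
E_grid = idw(boreholes, E, grid).reshape(gx.shape)
print(E_grid.round(1))
```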
Procedia PDF Downloads 88
1886 Energy Performance Gaps in Residences: An Analysis of the Variables That Cause Energy Gaps and Their Impact
Authors: Amrutha Kishor
Abstract:
Today, with rising global warming and the depletion of resources, every industry is moving toward sustainability and energy efficiency. As part of this movement, it is nowadays obligatory for architects to play their part by creating energy predictions for their designs. In many cases, however, these predictions do not reflect the real quantities of energy that newly built buildings use when operating. These discrepancies can be described as ‘energy performance gaps’. This study aims to determine the underlying reasons for these gaps. Seven houses designed by Allan Joyce Architects, UK, from 1998 until 2019 were considered for this study. The data from the residents’ energy bills were cross-referenced with the predictions made with the software SefairaPro and from energy reports. Results indicated that the predictions did not match the actual energy usage. An account of how energy was used in these seven houses was made by means of personal interviews. The main factors considered in the study were occupancy patterns, heating systems and usage, lighting profile and usage, and appliances’ profile and usage. The study found that the main reason for the energy gaps was the discrepancy between predicted and actual occupant usage and patterns of energy consumption. This study is particularly useful for energy-conscious architectural firms to fine-tune their approach to designing houses and analysing their energy performance. As the findings reveal that energy usage in homes varies based on the way residents use the space, they help deduce the most efficient technological combinations. This information can be used to set guidelines for future policies and regulations related to energy consumption in homes. This study can also be used by the developers of simulation software to understand how architects use their product and drive improvements in its future versions.
Keywords: architectural simulation, energy efficient design, energy performance gaps, environmental design
Procedia PDF Downloads 120
1885 Furniture Embodied Carbon Calculator for Interior Design Projects
Authors: Javkhlan Nyamjav, Simona Fischer, Lauren Garner, Veronica McCracken
Abstract:
Current whole-building life cycle assessments (LCA) primarily focus on structural and major architectural elements to measure building embodied carbon. Most interior finishes and fixtures are available in digital tools (such as Tally); however, furniture is still left unaccounted for. Due to repeated refreshes and its complexity, furniture embodied carbon can accumulate over time, becoming comparable to structure and envelope numbers. This paper presents a method to calculate the Global Warming Potential (GWP) of furniture elements in commercial buildings. The calculator uses the quantity takeoff method with GWP averages gathered from environmental product declarations (EPDs). The data were collected from EPD databases and furniture manufacturers from North America to Europe. A total of 48 GWP numbers were collected, with 16 coming from alternative EPDs. The finalized calculator shows the average GWP of typical commercial furniture and helps the decision-making process to reduce embodied carbon. The calculator was tested on MSR Design projects and showed that furniture can account for more than half of the interior embodied carbon. The calculator highlights the importance of adding furniture to the overall conversation. However, the data collection process showed that a) acquiring furniture EPDs is not as straightforward as for other building materials; b) furniture EPDs are very limited, which can be explained from many perspectives, including the EPD price; c) the EPDs themselves vary in terms of units, LCA scopes, and timeframes, which makes it hard to compare products. Even though there are current limitations, the emerging focus on interior embodied carbon will create more demand for furniture EPDs and will allow manufacturers to represent their efforts to reduce embodied carbon. In addition, the study concludes with recommendations on how designers can reduce furniture embodied carbon through reuse and closed-loop systems.
Keywords: furniture, embodied carbon, calculator, tenant improvement, interior design
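The quantity takeoff method reduces to multiplying item counts by per-type GWP averages; a short sketch follows. The GWP figures and furniture types are placeholders, not the 48 values collected in the study.

```python
# Hedged sketch of a quantity-takeoff embodied carbon calculator:
# item counts times EPD-derived average GWP per furniture type.
# The averages below are illustrative placeholders only.

GWP_AVERAGES = {          # kg CO2e per unit, hypothetical values
    "task chair": 72.0,
    "workstation": 310.0,
    "conference table": 450.0,
    "lounge seat": 180.0,
}

def furniture_embodied_carbon(takeoff: dict[str, int]) -> float:
    """Total embodied carbon (kg CO2e) for a furniture quantity takeoff."""
    missing = set(takeoff) - set(GWP_AVERAGES)
    if missing:
        raise KeyError(f"no EPD-based GWP average for: {missing}")
    return sum(GWP_AVERAGES[item] * qty for item, qty in takeoff.items())

project = {"task chair": 120, "workstation": 110, "conference table": 6}
total = furniture_embodied_carbon(project)
print(f"furniture embodied carbon: {total:,.0f} kg CO2e")  # 45,440 kg CO2e
```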
Procedia PDF Downloads 220
1884 GIS-Based Flash Flood Runoff Simulation Model of the Upper Teesta River Basin Using ASTER DEM and Meteorological Data
Authors: Abhisek Chakrabarty, Subhraprakash Mandal
Abstract:
Flash floods are among the catastrophic natural hazards of the mountainous regions of India. The recent flood in the Mandakini River at Kedarnath (14-17 June 2013) is a classic example of a flash flood that devastated Uttarakhand, killing thousands of people. The disaster was a combined effect of high-intensity rainfall, the sudden breach of Chorabari Lake, and very steep topography. Every year in the Himalayan region, flash floods occur due to intense rainfall over a short period of time, cloudbursts, glacial lake outbursts, and collapses of artificial check dams, all of which cause high river flows. In the Sikkim-Darjeeling Himalaya, one of the probable flash flood occurrence zones is the Teesta watershed. The Teesta River is a right tributary of the Brahmaputra, draining a mountain area of approximately 8600 sq. km. It originates in the Pauhunri massif (7127 m). The total length of the mountain section of the river amounts to 182 km. The Teesta is characterized by a complex hydrological regime. The river is fed not only by precipitation but also by melting glaciers and snow, as well as groundwater. The present study describes an attempt to model surface runoff in the upper Teesta basin, which is directly related to catastrophic flood events, by creating a system based on GIS technology. The main objective was to construct a direct unit hydrograph for excess rainfall by estimating the streamflow response at the outlet of the watershed. Specifically, the methodology was based on the creation of a spatial database in a GIS environment and on data editing. Moreover, rainfall time-series data were collected from the Indian Meteorological Department and processed in order to calculate flow time and runoff volume. Apart from the meteorological data, background data such as topography, drainage network, land cover, and geological data were also collected. The watershed was clipped from the entire area, streamlines were generated for the Teesta watershed, and cross-sectional profiles were plotted across the river at various locations from ASTER DEM data using ERDAS IMAGINE 9.0 and ArcGIS 10.0 software. Different hydraulic models for detecting flash flood probability were analysed using HEC-RAS, Flow-2D, and HEC-HMS software, which were of great importance in achieving the final result. With an input rainfall intensity above 400 mm per day for three days, the flood runoff simulation model shows outbursts of lakes and check dams, individually or in combination with runoff, causing severe damage to the downstream settlements. The model output shows that a 313 sq. km area, including Melli, Jourthang, Chungthang, and Lachung, is most vulnerable to flash floods, and a 655 sq. km area, including Rangpo, Yathang, Dambung, Bardang, Singtam, Teesta Bazar, and Thangu Valley, is moderately vulnerable. The model was validated by inserting the rainfall data of a flood event that took place in August 1968; 78% of the actual flooded area was reflected in the output of the model. Lastly, preventive and curative measures were suggested to reduce the losses from probable flash flood events.
Keywords: flash flood, GIS, runoff, simulation model, Teesta river basin
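As a hedged illustration of the per-cell rainfall-runoff step such a model chains together, the sketch below applies the SCS curve-number method to the 400 mm/day storm mentioned above. Only that rainfall figure comes from the text; the curve number is an assumption, and the study's actual loss and routing models (HEC-HMS, HEC-RAS) are far more elaborate.

```python
# Illustrative SCS curve-number runoff step, a stand-in for the loss
# model inside a GIS flood simulation.  CN = 85 is an assumed value for
# steep, thinly vegetated Himalayan slopes.

def scs_runoff_depth(P_mm: float, CN: float) -> float:
    """Direct runoff depth (mm) from storm rainfall P via the SCS-CN method."""
    S = 25400.0 / CN - 254.0        # potential maximum retention, mm
    Ia = 0.2 * S                    # initial abstraction
    if P_mm <= Ia:
        return 0.0
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

CN = 85.0
for day, P in enumerate([400.0, 400.0, 400.0], start=1):
    Q = scs_runoff_depth(P, CN)
    print(f"day {day}: rainfall {P:.0f} mm -> runoff depth {Q:.0f} mm")
```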
Procedia PDF Downloads 322
1883 Analytical Model of Locomotion of a Thin-Film Piezoelectric 2D Soft Robot Including Gravity Effects
Authors: Zhiwu Zheng, Prakhar Kumar, Sigurd Wagner, Naveen Verma, James C. Sturm
Abstract:
Soft robots have drawn great interest recently due to the rich range of possible shapes and motions they can take on to address new applications, compared to traditional rigid robots. Large-area electronics (LAE) provides a unique platform for creating soft robots by leveraging thin-film technology to enable the integration of a large number of actuators, sensors, and control circuits on flexible sheets. However, the rich shapes and motions possible, especially when interacting with complex environments, pose significant challenges to forming well-generalized and robust models necessary for robot design and control. In this work, we describe an analytical model for predicting the shape and locomotion of a flexible (steel-foil-based) piezoelectric-actuated 2D robot based on Euler-Bernoulli beam theory. Nominally (unpowered), it lies flat on the ground, and when powered, its shape is controlled by an array of piezoelectric thin-film actuators. Key features of the model are its ability to incorporate the significant effects of gravity on the shape and to precisely predict the spatial distribution of friction against the contacting surfaces, necessary for determining inchworm-type motion. We verified the model by developing a distributed discrete-element representation of a continuous piezoelectric actuator and by comparing its analytical predictions to discrete-element robot simulations using PyBullet. Without gravity, predicting the shape of a sheet with a linear array of piezoelectric actuators at arbitrary voltages is straightforward. However, gravity significantly distorts the shape of the sheet, causing some segments to flatten against the ground. Our work includes the following contributions: (i) A self-consistent approach was developed to exactly determine which parts of the soft robot are lifted off the ground, and the exact shape of these sections, for an arbitrary array of piezoelectric voltages and configurations. (ii) Inchworm-type motion relies on controlling the relative friction with the ground surface in different sections of the robot. By adding torque balance to our model and analyzing shear forces, the model can determine the exact spatial distribution of the vertical force that the ground exerts on the soft robot. Through this, the spatial distribution of friction forces between ground and robot can be determined. (iii) By combining this spatial friction distribution with the shape of the soft robot, as a function of time as the piezoelectric actuator voltages are changed, the inchworm-type locomotion of the robot can be determined. As a practical example, we calculated the performance of a 5-actuator system on a 50-µm thick steel foil. Piezoelectric properties of commercially available thin-film piezoelectric actuators were assumed. The model predicted inchworm motion of up to 200 µm per step. For independent verification, we also modelled the system using PyBullet, a discrete-element robot simulator. To model a continuous thin-film piezoelectric actuator, we broke each actuator into multiple segments, each of which consisted of two rigid arms with appropriate mass connected by a ‘motor’ whose torque was set by the applied actuator voltage. Excellent agreement between our analytical model and the discrete-element simulator was shown for both the full deformation shape and the motion of the robot.
Keywords: analytical modeling, piezoelectric actuators, soft robot locomotion, thin-film technology
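A toy version of the segment discretization is sketched below: the sheet is a chain of rigid segments, each powered joint bends by a voltage-proportional curvature, and ground contact is handled by simply clamping the shape at z = 0, which is a simplification of the paper's self-consistent lift-off solution. All constants are placeholders, not the paper's actuator parameters.

```python
import numpy as np

# Toy chain-of-segments shape model in the spirit of the paper's
# discrete-element check.  KAPPA_PER_VOLT and all dimensions are
# assumed values for illustration only.

N_SEG = 50                 # segments along the sheet
SEG_LEN = 2e-3             # 2 mm per segment -> 100 mm sheet
KAPPA_PER_VOLT = 0.05      # assumed joint curvature per volt, 1/m per V

def sheet_shape(voltages):
    """x, z coordinates of segment endpoints for per-joint voltages."""
    theta, x, z = 0.0, [0.0], [0.0]
    for v in voltages:
        theta += KAPPA_PER_VOLT * v * SEG_LEN      # bend at this joint
        x.append(x[-1] + SEG_LEN * np.cos(theta))
        z.append(z[-1] + SEG_LEN * np.sin(theta))
    z = np.maximum(np.array(z), 0.0)               # crude ground clamp
    return np.array(x), z

# energize the first 20 joints to curl the leading section off the ground
volts = np.zeros(N_SEG)
volts[:20] = 150.0
x, z = sheet_shape(volts)
print(f"tip lift {1e3 * z.max():.1f} mm over a {1e3 * x[-1]:.0f} mm footprint")
```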
Procedia PDF Downloads 183
1882 Blueprinting of Normalized Supply Chain Processes: Results in Implementing Normalized Software Systems
Authors: Bassam Istanbouli
Abstract:
With technology evolving every day and with the increase in global competition, industries are always under pressure to be the best. They need to provide good-quality products at competitive prices, when and how the customer wants them. In order to achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise, changes will be extremely expensive, slow, and ridden with combinatorial effects. Those combinatorial effects impact the whole organizational structure from management, financial, documentation, and logistics perspectives, and especially from the perspective of the Enterprise Resource Planning (ERP) information system. By applying the normalized systems concept/theory to segments of the supply chain, we believe these effects can be kept minimal, especially at the time of launching an organization’s global software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch or implement existing ERP software for its business needs, and if its business processes are normalized and modular, then most probably this will yield a normalized and modular software system that can be easily modified when the business evolves. Another important goal of this paper is to increase awareness regarding the design of the business processes in a software implementation project. If the blueprints created are normalized, then the software developers and configurators can use those modular blueprints to map them into modular software. This paper only prepares the ground for further studies; the above concept will be supported by going through the steps of developing, configuring, and/or implementing a software system for an organization using two methods: the Software Development Life Cycle (SDLC) method and the Accelerated SAP (ASAP) implementation method. Both methods start with the customer requirements, then blueprinting of the business processes, and finally mapping those processes into a software system. Since those requirements and processes are the starting point of the implementation process, normalizing those processes will result in normalized software.
Keywords: blueprint, ERP, modular, normalized
Procedia PDF Downloads 140
1881 Sensor and Actuator Fault Detection in Connected Vehicles under a Packet Dropping Network
Authors: Z. Abdollahi Biron, P. Pisu
Abstract:
Connected vehicles are one of the promising technologies for future Intelligent Transportation Systems (ITS). A connected vehicle system is essentially a set of vehicles communicating through a network to exchange their information with each other and the infrastructure. Although this interconnection of vehicles can be potentially beneficial in creating an efficient, sustainable, and green transportation system, a set of safety and reliability challenges comes with this technology. The first challenge arises from information loss due to an unreliable communication network, which affects the control/management system of the individual vehicles and the overall system. Such a scenario may lead to degraded or even unsafe operation, which could be potentially catastrophic. Secondly, faulty sensors and actuators can affect the individual vehicle’s safe operation and in turn create a potentially unsafe node in the vehicular network. Further, sending faulty sensor information to other vehicles, as well as actuator failures, may significantly affect the safe operation of the overall vehicular network. Therefore, it is of utmost importance to take these issues into consideration while designing the control/management algorithms of the individual vehicles as part of a connected vehicle system. In this paper, we consider a connected vehicle system under Co-operative Adaptive Cruise Control (CACC) and propose a fault diagnosis scheme that deals with these aforementioned challenges. Specifically, the conventional CACC algorithm is modified by adding a Kalman filter-based estimation algorithm to suppress the effect of lost information under an unreliable network. Further, a sliding mode observer-based algorithm is used to improve sensor reliability under faults. The effectiveness of the overall diagnostic scheme is verified via simulation studies.
Keywords: fault diagnostics, communication network, connected vehicles, packet dropout, platoon
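The packet-loss handling can be sketched as a standard Kalman filter with intermittent observations: predict every step, and apply the measurement update only when a packet arrives. The model matrices, noise levels, and drop rate below are illustrative assumptions, not the paper's CACC design.

```python
import numpy as np

# Hedged sketch: Kalman filtering of an inter-vehicle [gap, relative
# speed] state, skipping the update whenever the packet carrying the
# preceding vehicle's data is dropped.  All numbers are illustrative.

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])      # simple gap/relative-speed dynamics
H = np.eye(2)                              # both states measured on packet arrival
Q = 0.01 * np.eye(2)                       # process noise covariance
R = 0.05 * np.eye(2)                       # measurement noise covariance

x = np.array([10.0, 0.0])                  # true initial gap (m), rel. speed (m/s)
x_hat = np.array([8.0, 0.5])               # filter's initial guess
P = np.eye(2)
rng = np.random.default_rng(0)

for k in range(100):
    x = A @ x + rng.multivariate_normal([0, 0], Q)   # true plant
    x_hat = A @ x_hat                                # predict
    P = A @ P @ A.T + Q
    packet_received = rng.random() > 0.3             # assumed 30% drop rate
    if packet_received:                              # update only on arrival
        z = H @ x + rng.multivariate_normal([0, 0], R)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x_hat = x_hat + K @ (z - H @ x_hat)
        P = (np.eye(2) - K @ H) @ P

print("true state:", x.round(2), " estimate:", x_hat.round(2))
```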
Procedia PDF Downloads 240
1880 Diffuse CO₂ Degassing to Study Blind Geothermal Systems: The Acoculco, Puebla (Mexico) Case Study
Authors: Mirna Guevara, Edgar Santoyo, Daniel Perez-Zarate, Erika Almirudis
Abstract:
The Acoculco caldera, located in Puebla (Mexico), has been preliminarily identified as a blind hot-dry-rock geothermal system. Two drilled wells suggest the existence of high temperatures (>300°C), and non-conventional tools are being applied to study this system. A comprehensive survey of soil-gas (CO₂) flux measurements (1,500 sites) was carried out during the dry seasons over almost two years (2015 and 2016). Isotopic analyses of δ¹³C-CO₂ were performed to discriminate the source of the CO₂ fluxes. The soil CO₂ flux measurements were made in situ by the accumulation chamber method, whereas gas samples for δ¹³C-CO₂ were selectively collected from the accumulation chamber with evacuated gas vials via a septum. Two anomalous geothermal zones were identified as a result of these campaigns: Los Azufres (19°55'29.4'' N; 98°08'39.9'' W; 2,839 masl) and Alcaparrosa (19°55'20.6'' N; 98°08'38.3'' W; 2,845 masl). To elucidate the origin of the C in soil CO₂ fluxes, the isotopic signature of δ¹³C was used. Graphical Statistical Analysis (GSA) and a three end-member mixing diagram were used to corroborate the presence of distinctive statistical samples and trends for the diffuse gas fluxes. Spatial and temporal distributions of the CO₂ fluxes were studied. High CO₂ emission rates of up to 38,217 g/m²/d and 33,706 g/m²/d were measured for the Los Azufres and Alcaparrosa zones, respectively, whereas the δ¹³C signatures showed values ranging from -3.4 to -5.5 ‰ for both zones, confirming their magmatic origin. This study has provided a valuable framework to set the direction of further exploration campaigns in the Acoculco caldera. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
Keywords: accumulation chamber method, carbon dioxide, diffusive degassing, geothermal exploration
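For context, an accumulation-chamber flux is typically reduced to g/m²/d by fitting the initial slope of the chamber CO₂ rise and scaling by chamber geometry and molar air density; a sketch follows. The chamber dimensions, ambient conditions, and synthetic readings are assumptions, not the survey's instrument data.

```python
import numpy as np

# Sketch of accumulation-chamber data reduction: flux = dC/dt * (V/A),
# converted from ppm/s to g CO2 per m^2 per day via the molar density
# of air.  All inputs are invented for illustration.

V = 2.8e-3        # chamber volume, m^3 (assumed)
A = 3.1e-2        # chamber footprint, m^2 (assumed)
P = 73000.0       # ambient pressure at ~2840 masl, Pa (assumed)
T = 288.15        # air temperature, K (assumed)
R_GAS = 8.314     # J/(mol K)
M_CO2 = 44.01     # g/mol

t = np.arange(0, 60, 5, dtype=float)             # s
c_ppm = 420.0 + 9.5 * t                          # synthetic chamber readings

slope_ppm_s = np.polyfit(t, c_ppm, 1)[0]         # dC/dt in ppm/s
mol_per_m3 = P / (R_GAS * T)                     # moles of air per m^3
flux = slope_ppm_s * 1e-6 * mol_per_m3 * M_CO2 * (V / A) * 86400.0
print(f"CO2 flux ~ {flux:.0f} g m^-2 d^-1")
```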
Procedia PDF Downloads 266
1879 Effects of Inlet Filtration Pressure Loss on Single- and Two-Spool Gas Turbines
Authors: Enyia James Diwa, Dodeye Ina Igbong, Archibong Archibong Eso
Abstract:
Gas turbine operators have been faced with dramatic financial setbacks resulting from compressor fouling. A highly deregulated power industry with stiff market competition has made it imperative to devise means of reducing maintenance costs in order to yield maximum profit. Compressor fouling results from the deposition of contaminants, in the presence of oil and moisture, on the compressor blade or annulus surfaces, which leads to a loss in flow capacity and compressor efficiency. These combined effects reduce power output, increase heat rate, and cause creep life reduction. This paper also contains a model of two gas turbine engines built with Cranfield University software known as TURBOMATCH, a simulation tool used here to assess engine fouling rates. The model engines are of different configurations and capacities, and operate in two different modes, constant output power and constant turbine inlet temperature, with two- and three-stage filter systems. The idea is to investigate the more economically viable filtration system for gas turbine users based on performance only. It has been demonstrated in the results that the two-spool engine is slightly more beneficial than the single-spool engine. This is a result of the higher pressure ratio of the two-spool engine as well as the deceleration of the high-pressure compressor and high-pressure turbine speeds at constant TET. Meanwhile, the inlet filtration system was properly designed and balanced with a well-timed and economical compressor washing regime/scheme to control compressor fouling. The different technologies of inlet air filtration and compressor washing are considered, and an attempt is made at optimization with respect to the cost of a combination of both control measures.
Keywords: inlet filtration, pressure loss, single spool, two spool
Procedia PDF Downloads 325
1878 Recent Findings of Late Bronze Age Mining and Archaeometallurgy Activities in the Mountain Region of Colchis (Southern Lechkhumi, Georgia)
Authors: Rusudan Chagelishvili, Nino Sulava, Tamar Beridze, Nana Rezesidze, Nikoloz Tatuashvili
Abstract:
The South Caucasus is one of the most important centers of prehistoric metallurgy, known for its Colchian bronze culture. Modern Lechkhumi – historical Mountainous Colchis, where the existence of prehistoric metallurgy is confirmed by the discovery of many artifacts – is a part of this area. Studies focused on prehistoric smelting sites, related artefacts, and ore deposits have been conducted in Lechkhumi during the last ten years. More than 20 prehistoric smelting sites and artefacts associated with metallurgical activities (ore roasting furnaces, slags, crucible, and tuyère fragments) have been identified so far. Within the framework of integrated studies, it was established that these sites were operating in the 13th-9th centuries B.C. and were used for copper smelting. Palynological studies of slags revealed that chestnut (Castanea sativa) and hornbeam (Carpinus sp.) wood were used as smelting fuel. Geological exploration-analytical studies revealed that copper ore mining, processing, and smelting sites were distributed close to each other. Despite recent complex data, signs of prehistoric mines (trenches) have not been found in this part of the study area so far. Since 2018, the archaeological-geological exploration has been focused on the southern part of Lechkhumi and has covered the areas of the villages Okureshi and Opitara. Several copper smelting sites (Okureshi 1 and 2, Opitara 1), as well as a Colchian Bronze culture settlement, have been identified here. Three mine workings have been found in the narrow gorge of the river Rtkhmelebisgele in the vicinity of the village Opitara. In order to establish a link between the Opitara-Okureshi archaeometallurgical sites, Late Bronze Age settlements, and mines, various scientific analytical methods – petrography of mineralized rocks and slags and atomic absorption spectrophotometry (AAS) analysis – have been applied. The careful examination of the Opitara mine workings revealed that there is a striking difference between mine #1 on the right bank of the river and mines #2 and #3 on the left bank. The first one has all the characteristic features of a Soviet-period mine working (e.g., a high portal with angular ribs and a roof showing signs of blasting). In contrast, mines #2 and #3, which are located very close to each other, have round-shaped portals/entrances, low roofs, and fairly smooth ribs, and are filled with thick layers of river sediments and collapsed weathered rock mass. A thorough review of the publications related to prehistoric mine workings revealed some striking similarities between mines #2 and #3 and their worldwide analogues. Apparently, the ore extraction from these mines was conducted by fire-setting, applying primitive tools. It was also established that the mines are cut in Jurassic mineralized volcanic rocks. Ore minerals (chalcopyrite, pyrite, galena) are related to calcite and quartz veins. The results obtained through the petrochemical and petrographic studies of mineralized rock samples from the Opitara mines and prehistoric slags are in complete correlation with each other, establishing a direct link between copper mining and smelting within the study area. Acknowledgment: This work was supported by the Shota Rustaveli National Science Foundation of Georgia (grant # FR-19-13022).
Keywords: archaeometallurgy, Mountainous Colchis, mining, ore minerals
Procedia PDF Downloads 181
1877 Development and Characterization of Self-Nanoemulsifying Drug Delivery Systems of the Poorly Soluble Drug Dutasteride
Authors: Rajinikanth Siddalingam, Poonguzhali Subramanian
Abstract:
The present study aims to prepare and evaluate a self-nanoemulsifying drug delivery system (SNEDDS) to enhance the dissolution rate of the poorly soluble drug dutasteride. The formulation was prepared using Capryol PGMC, Cremophor EL, and polyethylene glycol (PEG) 400 as oil, surfactant, and co-surfactant, respectively. Pseudo-ternary phase diagrams in the presence and absence of the drug were plotted to find the nanoemulsification range and to evaluate the effect of dutasteride on the emulsification behavior of the phases. The prepared SNEDDS formulations were evaluated for particle size distribution, nanoemulsifying properties, robustness to dilution, self-emulsification time, turbidity, drug content, and in vitro dissolution. Heating-cooling cycle, centrifugation, freeze-thaw cycling, particle size distribution, and zeta potential studies were further carried out on the optimized formulations to confirm the stability of the formed SNEDDS. The particle size, zeta potential, and polydispersity index of the optimized formulation were found to be 35.45 nm, -15.45 mV, and 0.19, respectively. The in vitro results revealed that the prepared formulation significantly enhanced the dissolution rate of dutasteride compared with the pure drug. The in vivo studies were conducted in rats, and the results revealed that the SNEDDS formulation significantly enhanced the bioavailability of dutasteride compared with the raw drug. Based on the results, it was concluded that the dutasteride-loaded SNEDDS shows potential to enhance the dissolution of dutasteride, thus improving its bioavailability and therapeutic effects.
Keywords: self-emulsifying drug delivery system, dutasteride, enhancement of bioavailability, dissolution enhancement
Procedia PDF Downloads 270
1876 Recovery in Serious Mental Illness: Perception of Health Care Trainees in Morocco
Authors: Sophia El Ouazzani, Amer M. Burhan, Mary Wickenden
Abstract:
Background: Despite improvements in recent years, the Moroccan mental healthcare system still faces a disparity between available resources and the current population’s needs. Societal stigma and limited economic, political, and human resources are all factors shaping the psychiatric system, exacerbating the discontinuity of services for users after discharge from the hospital. As a result, limited opportunities for social inclusion and meaningful community engagement undermine human rights and recovery potential for people with mental health problems, especially those with psychiatric disabilities from serious mental illness (SMI). Recovery-oriented practice, such as mental health rehabilitation, addresses the complex needs of patients with SMI and supports their community inclusion. The cultural acceptability of recovery-oriented practice is an important notion to consider for successful implementation. Exploring the extent to which recovery-oriented practices are used in Morocco is a necessary first step to assess the cultural relevance of such a practice model. Aims: This study aims to explore understanding, knowledge, perceptions, and perspectives about core concepts in mental health rehabilitation, including psychiatric disability, recovery, and engagement in meaningful occupations for people with SMI in Morocco. Methods: A pilot qualitative study was undertaken. Data were collected via semi-structured interviews and focus group discussions with healthcare professional students. Questions were organised around the following themes: 1) students’ perceptions, understanding, and expectations around concepts such as SMI, mental health disability, and recovery, and 2) changes in their views and expectations after starting their professional training. Students’ perspectives on the concept of ‘meaningful occupation’, and how it is viewed within the context of the research questions, were further analysed. The data were extracted using an inductive thematic analysis approach. As this is the pilot stage of a doctoral project, further data will be collected and analysed until saturation is reached. Results: A total of eight students were included in this study, comprising occupational therapy and mental health nursing students receiving training in Morocco. The following themes emerged as influencing students’ perceptions and views around the main concepts: 1) stigma and discrimination, 2) fatalism and low expectations, 3) gendered perceptions, 4) religious causation, 5) family involvement, 6) professional background, 7) inaccessibility of services and treatment. Discussion/Contribution: Preliminary analysis of the data suggests that students’ perceptions changed after gaining more clinical experience and being exposed to people with psychiatric disabilities. Prior to their training, stigma greatly shaped how they viewed people with SMI. The fear, misunderstanding, and shame around SMI and the functional capacities of those affected may contribute to people with SMI being stigmatised and marginalised from their families and communities. Religious causations associated with SMI are understood as further deepening the social stigma around psychiatric disability. Perceptions are influenced by gender, with women being doubly discriminated against in relation to recovery opportunities. Therapeutic pessimism seems to persist amongst students and within the mental healthcare system in general regarding the recovery potential and opportunities for people with SMI.
The limited resources, fatalism, and stigma all contribute to the low expectations for recovery and community inclusion. Implications and future directions will be discussed.
Keywords: disability, mental health rehabilitation, recovery, serious mental illness, transcultural psychiatry
Procedia PDF Downloads 145
1875 Modeling and Performance Evaluation of an Urban Corridor under Mixed Traffic Flow Condition
Authors: Kavitha Madhu, Karthik K. Srinivasan, R. Sivanandan
Abstract:
Indian traffic can be considered mixed and heterogeneous due to the presence of various types of vehicles that operate with weak lane discipline. Consequently, vehicles can position themselves anywhere in the traffic stream depending on the availability of gaps. The choice of lateral positioning is an important component in representing and characterizing mixed traffic. Field data provide evidence that vehicle trajectories on Indian urban roads have significantly varying longitudinal and lateral components. Further, the notion of headway, which is widely used in homogeneous traffic simulation, is not well defined in conditions lacking lane discipline. From field data, it is clear that following is not as strict as in homogeneous, lane-disciplined conditions, and neighbouring vehicles ahead of a given vehicle, as well as those adjacent to it, can influence the subject vehicle’s choice of position, speed, and acceleration. Given these empirical features, the suitability of using headway distributions to characterize mixed traffic in Indian cities is questionable, and the approach needs to be modified appropriately. To address these issues, this paper attempts to analyze the time gap distribution between consecutive vehicles (in a time sense) crossing a section of roadway. More specifically, to characterize the complex interactions noted above, the influence of composition, manoeuvre types, and lateral placement characteristics on time gap distribution is quantified in this paper. The developed model is used for evaluating various performance measures such as link speed, midblock delay, and intersection delay, which further helps to characterise vehicular fuel consumption and emissions on urban roads of India. Identifying and analyzing exact interactions between various classes of vehicles in the traffic stream is essential for increasing the accuracy and realism of microscopic traffic flow modelling. In this regard, this study aims to develop and analyze time gap distribution models and quantify them by lead-lag pair, manoeuvre type, and lateral position characteristics in heterogeneous, non-lane-based traffic. Once the modelling scheme is developed, it can be used for estimating the vehicle kilometres travelled for the entire traffic system, which helps to determine vehicular fuel consumption and emissions. The approach to this objective involves: data collection, statistical modelling and parameter estimation, simulation using the calibrated time-gap distribution and its validation, empirical analysis of simulation results and associated traffic flow parameters, and application to analyze illustrative traffic policies. In particular, videographic methods are used for data extraction from urban mid-block sections in Chennai, where the data comprise vehicle type, vehicle position (both longitudinal and lateral), speed, and time gap. Statistical tests are carried out to compare the simulated data with the actual data, and the model performance is evaluated. The effect of integrating the above-mentioned factors into vehicle generation is studied by comparing performance measures like density, speed, flow, capacity, and area occupancy under various traffic conditions and policies. The implications of the quantified distributions and the simulation model for estimating the PCU (Passenger Car Units), capacity, and level of service of the system are also discussed.
Keywords: lateral movement, mixed traffic condition, simulation modeling, vehicle following models
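The statistical modelling step can be sketched as fitting candidate distributions to observed time gaps and sampling the best fit to generate vehicle arrivals, as below. The gap data here are synthetic, and the study further stratifies gaps by lead-lag pair, manoeuvre type, and lateral position before fitting.

```python
import numpy as np
from scipy import stats

# Hedged sketch: fit candidate time-gap distributions to field data,
# compare goodness of fit, then sample the winner to drive vehicle
# generation in a simulation.  The "field" gaps below are synthetic.

rng = np.random.default_rng(42)
gaps = rng.lognormal(mean=0.2, sigma=0.7, size=500)     # stand-in data, s

candidates = {
    "exponential": stats.expon,
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
}
for name, dist in candidates.items():
    params = dist.fit(gaps, floc=0)                     # fix location at zero
    ks = stats.kstest(gaps, dist.cdf, args=params)      # goodness of fit
    print(f"{name:12s} KS statistic = {ks.statistic:.3f}")

# generate arrivals from the best-fitting distribution (lognormal here)
params = stats.lognorm.fit(gaps, floc=0)
arrival_times = np.cumsum(stats.lognorm.rvs(*params, size=20, random_state=1))
print("first simulated arrivals (s):", arrival_times[:5].round(2))
```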
Procedia PDF Downloads 343
1874 From Parchment to Pixels: Digital Preservation for the Future
Authors: Abida Khatoon
Abstract:
This study provides an overview of ancient manuscripts, including their historical significance, current digital preservation methods, and the challenges we face in safeguarding these invaluable resources. India has a long-standing tradition of manuscript preservation, with texts that span a wide range of subjects, from religious scriptures to scientific treatises. These manuscripts were written on various materials, including palm leaves, parchment, metal, bark, wood, animal skin, and paper, and they offer deep insight into India's cultural and intellectual history. Ancient manuscripts are crucial historical records, providing valuable insights into past civilizations and knowledge systems. As these physical documents become increasingly fragile, digital preservation methods have become essential to ensure their continued accessibility. Digital preservation involves several key techniques. Scanning and digitization create high-resolution digital images of manuscripts, while reprography produces copies to reduce wear on originals. Digital archiving ensures proper storage and management of these digital files, and preservation of electronic data addresses modern formats like web pages and emails. Despite its benefits, digital preservation faces several challenges. Technological obsolescence, data integrity issues, and the resource-intensive nature of the process are significant hurdles. Securing adequate funding is particularly challenging due to high initial costs and ongoing expenses. Looking ahead, the future of digital preservation is promising. Advancements in technology, increased collaboration among institutions, and the development of sustainable funding models will enhance the preservation and accessibility of these important historical documents.
Keywords: preservation strategies, Indian manuscript, cultural heritage, archiving
Procedia PDF Downloads 25
1873 Impact of Tillage and Crop Establishment on Fertility and Sustainability of the Rice-Wheat Cropping System in Inceptisols of Varanasi, UP, India
Authors: Pramod Kumar Sharma, Pratibha Kumari, Udai Pratap Singh
Abstract:
In the Indo-Gangetic Plains of South Asia, the rice-wheat cropping system (RWCS) is dominated by conventional tillage (CT) without residue management, which leads to depletion of soil fertility and unsustainable crop productivity. Hence, this investigation was planned to identify suitable natural resource management practices involving different tillage and crop establishment (TCE) methods, along with crop residue management, and their effects on the sustainability of the dominant cropping system through enhanced soil fertility and productivity. The study was conducted for two consecutive years (2018-19 and 2019-20) on a long-term field experiment started in 2015-16, with six different combinations of TCE methods, viz. CT, partial conservation agriculture (PCA, i.e., anchored rice residue), and full conservation agriculture (FCA, i.e., anchored rice and wheat residue), under RWCS, assessed in terms of crop productivity, sustainability of soil health, and crop nutrition. Results showed that zero-tillage direct-seeded rice (ZTDSR) - zero-tillage wheat (ZTW) [FCA + green gram residue retention (RR)] recorded the highest yield attributes and yields for both crops. Compared to conventional-tillage rice (CTR) - conventional-tillage wheat (CTW) [residue removal (R0)], the soil quality parameters were improved significantly with ZTDSR-ZTW (FCA+RR). Overall, ZTDSR-ZTW (FCA+RR) had higher nutrient uptake by the crops than the CT-based treatments CTR-CTW (R0) and CTR-CTW (RI). These results show that the adoption of FCA brings significant gains in yield and resource utilization, and it may be a better alternative to the dominant tillage system, i.e., CT, in RWCS.
Keywords: tillage and crop establishment, soil fertility, rice-wheat cropping system, sustainability
Procedia PDF Downloads 109
1872 Towards a Biologically Relevant Tumor-on-a-Chip: Multiplex Microfluidic Platform to Study Breast Cancer Drug Response
Authors: Soroosh Torabi, Brad Berron, Ren Xu, Christine Trinkle
Abstract:
Microfluidics integrated with 3D cell culture is a powerful technology to mimic the cellular environment and can be used to study cell activities such as proliferation, migration, and response to drugs. This technology has gained attention in cancer studies over the past years, and many organ-on-a-chip systems have been developed to study cancer cell behaviors in an ex vivo tumor microenvironment. However, there are still some barriers to adoption, including low throughput, complexity of 3D cell culture integration, and limitations on non-optical analysis of cells. In this study, a user-friendly microfluidic multi-well plate was developed to mimic the in vivo tumor microenvironment. The microfluidic platform feeds multiple 3D cell culture sites at the same time, which enhances the throughput of the system. The platform uses hydrophobic Cassie-Baxter surfaces created by microchannels to enable convenient loading of hydrogel/cell suspensions into the device, while providing barrier-free placement of the hydrogel and cells adjacent to the fluidic path. The microchannels support convective flow and diffusion of nutrients to the cells, and a removable lid enables further chemical and physiological analysis of the cells. Different breast cancer cell lines were cultured in the device and then monitored to characterize nutrient delivery to the cells as well as cell invasion and proliferation. In addition, the drug response of breast cancer cell lines cultured in the device was compared to the response of xenograft models to the same drugs, to analyze the relevance of this platform for use in future drug-response studies.
Keywords: microfluidics, multi-well 3D cell culture, tumor microenvironment, tumor-on-a-chip
Procedia PDF Downloads 266
1871 How the Writer Tells the Story Should Be the Primary Concern rather than Who Can Write about Whom: The Limits of Cultural Appropriation Vis-à-Vis The Ethics of Narrative Empathy
Authors: Alexandra Cheira
Abstract:
Cultural appropriation has been theorised as a form of colonialism in which members of a dominant culture reduce cultural elements that are deeply meaningful to a minority culture to the category of the “exotic other”, since they do not experience the oppression and discrimination faced by members of the minority culture. Yet, in the particular case of literature, writers such as Lionel Shriver and Bernardine Evaristo have argued that authors from a cultural majority have a right to write in the voice of someone from a cultural minority, hence attacking the idea that this is a form of cultural appropriation. By definition, Shriver and Evaristo claim, writers are supposed to write beyond their own culture, gender, class, and/or race. In this light, this paper discusses the limits of cultural appropriation vis-à-vis the ethics of narrative empathy by addressing the mixed critical reception of Kathryn Stockett’s The Help (2009) and Jeanine Cummins’s American Dirt (2020). In fact, both novels were acclaimed as global eye-openers regarding the struggles of, respectively, African American maids and Mexican migrants. At the same time, both novelists have been accused of cultural appropriation for telling a story that is not theirs to tell, given the fact that they are white women telling these stories in what critics have argued is really an American voice telling a story to American readers. These claims will be investigated within the framework of Edward Said’s foundational examination of Orientalism in the field of postcolonial studies as a Western style for authoritatively restructuring the Orient. This means that Orientalist stereotypes regarding Eastern cultures have implicitly validated colonial and imperial pursuits, in the specific context of literary representations of African American and Mexican cultures by white writers. At the same time, the conflicted reception of American Dirt and The Help will be examined within the critical framework of narrative empathy as theorised by Suzanne Keen. Hence, there will be a particular focus on the way a reader’s heated perception that the author’s perspective is purely dishonest can result from a friction between an author’s intention and a reader’s experience of narrative empathy, while a shared sense of empathy between authors and readers can be a rousing momentum to move beyond literary response to social action. Finally, in order to assess the claim that “the key question should not be who can write about whom, but how the writer tells the story”, the recent controversy surrounding Dutch author Marieke Lucas Rijneveld’s decision to resign from translating American poet Amanda Gorman’s work into Dutch will be duly investigated. In fact, Rijneveld stepped down after journalist and activist Janice Deul criticised Dutch publisher Meulenhoff for choosing a translator who was not also Black, despite the fact that 22-year-old Gorman had selected the 29-year-old Rijneveld herself, as a fellow young writer who had likewise come to fame early in life. In this light, the critical argument that the controversial reception of The Help reveals as much about US race relations in the early twenty-first century as about the complex literary transactions between individual readers and the novel itself will also be discussed in the extended context of American Dirt and white author Marieke Lucas Rijneveld’s withdrawal from the projected translation of Black poet Amanda Gorman’s work.
Keywords: cultural appropriation, cultural stereotypes, narrative empathy, race relations
Procedia PDF Downloads 74
1870 State of Emergency in Turkey (July 2016-July 2018): A Case of Utilization of Law as a Political Instrument
Authors: Neslihan Cetin
Abstract:
In this study, we aim to analyze how the period of the state of emergency in Turkey led to gaps in the law and to the formation of areas entirely lacking supervision. The state of emergency proclaimed following the coup attempt of July 15, 2016, continued until July 18, 2018, that is, for two years, without regard to whether the initial circumstances persisted. As part of this work, we claim that the state of emergency provided the executive power with important tools for governing, of which it made constant use. We highlight how the concern for security, central to the preoccupations of the citizenry, was exploited by the military power in Turkey as grounds for interfering in the political, legal, and social spheres. The constitutions of 1924, 1961, and 1982 entrusted the army with the role of protector of the integrity of the state. This became an instrument in the hands of the military to legitimize, in the name of public security, interventions in the political field that were in fact politically motivated. The constitution and the legislative and regulatory systems were modified and monopolized by a military power that dominated legislative, regulatory, and judicial authority, leading to a state of exception. Amid the political convulsions of the past decade, the government was able to usurp this instrument of the state of exception. In particular, the decree-laws of the state of emergency, of which the executive made frequent and generally abusive use, became instruments in the hands of the government to take measures that escape pre-established rules and control mechanisms. The struggle against the political opposition thus becomes more unbalanced and destructive. To this must be added the ineffectiveness of ex-post controls and domestic remedies. This research allows us to stress how a legal concept such as ‘the state of emergency’ can be politically exploited and turned into a legal weapon that continues to produce victims. Keywords: constitutional law, state of emergency, rule of law, instrumentalization of law
Procedia PDF Downloads 145
1869 Organic Contaminant Degradation Using H₂O₂ Activated Biochar with Enhanced Persistent Free Radicals
Authors: Kalyani Mer
Abstract:
Hydrogen peroxide (H₂O₂) is one of the most efficient and commonly used oxidants in the in-situ chemical oxidation (ISCO) of organic contaminants. In the present study, we investigated the activation of H₂O₂ by heavy-metal-loaded biochar (nickel and lead ions) for phenol degradation in an aqueous solution (concentration = 100 mg/L). It was found that H₂O₂ can be effectively activated by biochar, which produces hydroxyl (•OH) radicals owing to an increase in the formation of persistent free radicals (PFRs) on the biochar surface. Ultrasound-treated (30 s duration) biochar, chemically activated with 30% phosphoric acid and functionalized with diethanolamine (DEA), was used for the adsorption of heavy metal ions from aqueous solutions. The modified biochar removed almost 60% of nickel within eight hours; for lead, the removal efficiency reached up to 95% over the same duration. The heavy-metal-loaded biochar was further used for the degradation of phenol in the absence and presence of H₂O₂ (20 mM) within 4 hours of reaction time. The phenol removal efficiencies in the presence of H₂O₂ were 80.3% and 61.9% for biochar loaded with nickel and lead ions, respectively. These results suggest that nickel-loaded biochar exhibits a better phenol removal capacity than lead-loaded biochar when used in H₂O₂-based oxidation systems. Meanwhile, control experiments run without any activating biochar showed a removal efficiency of only 19.1% when H₂O₂ alone was added to the reaction solution. Overall, the proposed approach serves a dual purpose: biochar is first used for heavy metal ion removal, and the resulting metal-loaded biochar is then used to activate H₂O₂ in ISCO treatment of organic contaminants. Keywords: biochar, ultrasound, heavy metals, in-situ chemical oxidation, chemical activation
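As a quick illustration of the arithmetic behind the reported figures, the sketch below computes removal efficiency as (C₀ − Cₜ)/C₀ × 100 from the 100 mg/L initial phenol concentration given in the abstract; the final concentrations are back-calculated from the reported percentages purely for illustration and are not measured data.

```python
# Removal efficiency from initial and final concentrations. The 100 mg/L
# initial phenol concentration comes from the abstract; the final values
# below are back-calculated from the reported efficiencies for illustration.
def removal_efficiency(c0, ct):
    """Percent of contaminant removed: (C0 - Ct) / C0 * 100."""
    return (c0 - ct) / c0 * 100.0

c0 = 100.0  # initial phenol concentration, mg/L
for label, ct in [("Ni-loaded biochar + H2O2", 19.7),
                  ("Pb-loaded biochar + H2O2", 38.1),
                  ("H2O2 only (control)", 80.9)]:
    print(f"{label}: {removal_efficiency(c0, ct):.1f}% removed")
```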
Procedia PDF Downloads 138
1868 Close-Range Remote Sensing Techniques for Analyzing Rock Discontinuity Properties
Authors: Sina Fatolahzadeh, Sergio A. Sepúlveda
Abstract:
This paper presents advanced developments in close-range, terrestrial remote sensing techniques to enhance the characterization of rock masses. The study integrates two state-of-the-art laser-scanning technologies, the HandySCAN and GeoSLAM laser scanners, to extract high-resolution geospatial data for rock mass analysis. These instruments offer high accuracy and precision, low acquisition time, and high efficiency in capturing intricate geological features in small- to medium-sized outcrops and slope cuts. Using the HandySCAN and GeoSLAM laser scanners facilitates real-time, three-dimensional mapping of rock surfaces, enabling comprehensive assessments of rock mass characteristics. The collected data provide valuable insights into structural complexities, surface roughness, and discontinuity patterns, which are essential for geological and geotechnical analyses. The synergy of these advanced remote sensing technologies contributes to a more precise and straightforward understanding of rock mass behavior. In this case, the main parameters (RQD, joint spacing, persistence, aperture, roughness, infill, weathering, water condition, and joint orientation) were remotely analyzed for a slope cut along the Sea-to-Sky Highway, BC, to calculate and evaluate the Rock Mass Rating (RMR) and Geological Strength Index (GSI) classification systems. Automatic and manual analyses of the acquired data were then compared with field measurements. The results show the usefulness of the proposed remote sensing methods and their good agreement with the actual field data. Keywords: remote sensing, rock mechanics, rock engineering, slope stability, discontinuity properties
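For readers unfamiliar with the RMR bookkeeping the abstract refers to, the sketch below tallies a rating in the style of Bieniawski's 1989 scheme: five parameter ratings are summed and a (negative) adjustment for joint orientation is applied. All ratings here are hypothetical placeholders, not values measured at the Sea-to-Sky site, and the RMR-to-GSI link shown is one common empirical approximation.

```python
# Hypothetical RMR89 tally; each rating must fall within the scheme's range
# for that parameter (e.g., RQD contributes up to 20 points).
ratings = {
    "intact rock strength (UCS)": 12,
    "RQD": 17,
    "joint spacing": 10,
    "joint condition (persistence, aperture, roughness, infill, weathering)": 20,
    "groundwater": 10,
}
orientation_adjustment = -5  # "fair" joint orientation, hypothetical

rmr = sum(ratings.values()) + orientation_adjustment
print("RMR =", rmr)  # 64 -> class II, "good rock" in the 1989 classification

# One widely quoted empirical link (for RMR89 > 23) is GSI = RMR89 - 5.
print("GSI ~", rmr - 5)
```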
Procedia PDF Downloads 68
1867 The Feasibility Evaluation of the Compressed Air Energy Storage System in the Porous Media Reservoir
Authors: Ming-Hong Chen
Abstract:
In this study, the mechanical and financial feasibility of a compressed air energy storage (CAES) system in a porous media reservoir in Taiwan is evaluated. By 2035, Taiwan aims to install 16.7 GW of wind power and 40 GW of photovoltaic (PV) capacity. However, renewable energy sources often generate more electricity than needed, particularly during winter. Consequently, Taiwan requires long-term, large-scale energy storage systems to ensure the security and stability of its power grid. Currently, the primary large-scale energy storage options are Pumped Hydro Storage (PHS) and Compressed Air Energy Storage (CAES). Taiwan has not previously ventured into CAES-related technologies due to geological and cost constraints; however, the imperative of achieving net-zero carbon emissions by 2050 creates a substantial need for renewable energy development. PHS is mature in Taiwan, with an overall installed capacity of 4.68 GW. CAES, which offers a similar scale and power generation duration to PHS, is now under consideration. Unlike the salt caverns used elsewhere, Taiwan's geology offers porous media reservoirs, whose flow resistance affects gas injection and extraction. This study employs a system-level program analysis model to establish the performance analysis capabilities of CAES. A finite volume model is then used to assess the impact of the porous media, and the findings are fed back into the system performance analysis as corrections. Subsequently, the financial implications are calculated and compared with the existing literature. For Taiwan, the strategic development of CAES technology is crucial, not only for meeting energy needs but also for decentralizing energy allocation, a feature of great significance in regions lacking alternative natural resources. Keywords: compressed-air energy storage, efficiency, porous media, financial feasibility
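To give a rough sense of the storage scale such a system targets, the sketch below applies the textbook isothermal upper bound E = p·V·ln(p/p₀) for the energy held in a compressed air reservoir; the pressure and pore-volume figures are hypothetical illustrations, not numbers from the study.

```python
import math

# Back-of-envelope isothermal bound on stored compressed-air energy:
# E = p_store * V * ln(p_store / p_ambient). All inputs are hypothetical.
p_ambient = 1.0e5   # Pa
p_store   = 8.0e6   # Pa (80 bar, a typical CAES storage pressure)
volume    = 3.0e5   # m^3 of effective pore volume

energy_j = p_store * volume * math.log(p_store / p_ambient)
print("ideal isothermal storage: %.1f GWh" % (energy_j / 3.6e12))  # ~2.9 GWh

# A real porous-media plant recovers only a fraction of this bound: pore-scale
# flow resistance and non-isothermal compression/expansion reduce round-trip
# efficiency, which is why the study couples a reservoir flow model to the
# system performance analysis.
```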
Procedia PDF Downloads 69
1866 ACO-TS: An ACO-Based Algorithm for Optimizing Cloud Task Scheduling
Authors: Fahad Y. Al-dawish
Abstract:
A large number of organizations and individuals are now turning to cloud computing, which many consider a significant shift in the field of computing. Cloud computing platforms are distributed, parallel systems consisting of collections of interconnected physical and virtual machines. With the increasing demand for cloud computing infrastructure, diverse computing processes can be executed in the cloud environment, and many organizations and individuals around the world depend on it to run their applications, platforms, and infrastructure. One of the major issues in this environment is allocating incoming tasks to suitable virtual machines (cloud task scheduling). Cloud task scheduling is an optimization problem, and several meta-heuristic algorithms have been proposed to solve it. A good task scheduler should adapt its scheduling technique to a changing environment and to the types of incoming task sets. In this research project, a cloud task scheduling methodology based on the ant colony optimization (ACO) algorithm, called ACO-TS (Ant Colony Optimization for Task Scheduling), is proposed and compared with other scheduling algorithms: Random, First Come First Serve (FCFS), and Fastest Processor to the Largest Task First (FPLTF). ACO is a randomized optimization search method used here to assign incoming tasks to available virtual machines (VMs). The main goal of the proposed algorithm is to minimize the makespan of a given task set and to maximize resource utilization by balancing the load among virtual machines. The proposed scheduling algorithm was evaluated using the CloudSim toolkit framework. Analysis of the experimental results shows that ACO-TS outperforms the Random, FCFS, and FPLTF algorithms in both makespan and resource utilization. Keywords: cloud task scheduling, ant colony optimization (ACO), CloudSim, cloud computing
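To make the mechanism concrete, here is a minimal, self-contained sketch of ACO applied to task-to-VM assignment. The task lengths, VM speeds, and ACO parameters (alpha, beta, rho, Q) are illustrative placeholders rather than the paper's configuration, and depositing pheromone in inverse proportion to makespan is one common design choice, not necessarily the one ACO-TS uses.

```python
import random

# Synthetic workload: task lengths in instructions, VM speeds in instr/sec.
task_lengths = [400, 900, 250, 1200, 700, 300, 1000, 550]
vm_speeds = [500, 750, 1000]

ALPHA, BETA = 1.0, 2.0   # pheromone vs. heuristic weight
RHO, Q = 0.5, 100.0      # evaporation rate, deposit constant
N_ANTS, N_ITERS = 10, 50

n_tasks, n_vms = len(task_lengths), len(vm_speeds)
# eta[t][v]: heuristic desirability = 1 / execution time of task t on VM v
eta = [[vm_speeds[v] / task_lengths[t] for v in range(n_vms)]
       for t in range(n_tasks)]
tau = [[1.0] * n_vms for _ in range(n_tasks)]  # pheromone trails

def makespan(assignment):
    """Completion time of the busiest VM under the given task->VM mapping."""
    load = [0.0] * n_vms
    for t, v in enumerate(assignment):
        load[v] += task_lengths[t] / vm_speeds[v]
    return max(load)

def construct():
    """One ant builds a full assignment, sampling VMs by tau^alpha * eta^beta."""
    assignment = []
    for t in range(n_tasks):
        weights = [(tau[t][v] ** ALPHA) * (eta[t][v] ** BETA)
                   for v in range(n_vms)]
        assignment.append(random.choices(range(n_vms), weights=weights)[0])
    return assignment

best, best_ms = None, float("inf")
for _ in range(N_ITERS):
    solutions = [construct() for _ in range(N_ANTS)]
    # Evaporate, then deposit pheromone inversely proportional to makespan,
    # so assignments that finish sooner attract more ants next iteration.
    for t in range(n_tasks):
        for v in range(n_vms):
            tau[t][v] *= (1.0 - RHO)
    for sol in solutions:
        ms = makespan(sol)
        if ms < best_ms:
            best, best_ms = sol, ms
        for t, v in enumerate(sol):
            tau[t][v] += Q / ms

print("best assignment:", best, "makespan: %.2f s" % best_ms)
```

Because the deposit rewards low-makespan assignments on every (task, VM) pair they contain, load tends to spread across VMs, which is the balancing behaviour the abstract describes.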
Procedia PDF Downloads 424
1865 Prevalence and Diagnostic Evaluation of Schistosomiasis in School-Going Children in Nelson Mandela Bay Municipality: Insights from Urinalysis and Point-of-Care Testing
Authors: Maryline Vere, Wilma ten Ham-Baloyi, Lucy Ochola, Opeoluwa Oyedele, Lindsey Beyleveld, Siphokazi Tili, Takafira Mduluza, Paula E. Melariri
Abstract:
Schistosomiasis, caused by the Schistosoma (S.) haematobium and Schistosoma (S.) mansoni parasites, poses a significant public health challenge in low-income regions. Diagnosis typically relies on identifying specific urine biomarkers such as haematuria, protein, and leukocytes for S. haematobium, while the Point-of-Care Circulating Cathodic Antigen (POC-CCA) assay is employed for detecting S. mansoni. Urinalysis and the POC-CCA assay are favoured for their rapid, non-invasive nature and cost-effectiveness. However, traditional diagnostic methods such as Kato-Katz and urine filtration lack sensitivity in low-transmission areas, which can lead to underreporting of cases and hinder effective disease control efforts. Therefore, in this study, urinalysis and the POC-CCA assay were used to diagnose schistosomiasis among school-going children in Nelson Mandela Bay Municipality. This was a cross-sectional study in which a total of 759 children, aged 5 to 14 years, provided urine samples. Urinalysis was performed using urinary dipstick tests, which measure multiple parameters, including haematuria, protein, leukocytes, bilirubin, urobilinogen, ketones, pH, specific gravity, and other biomarkers; the strip is dipped into the urine sample and colour changes on specific reagent pads are observed. The POC-CCA test was conducted by applying a drop of urine onto a cassette containing CCA-specific antibodies; a visible test line indicated a positive result for S. mansoni infection. Descriptive statistics were used to summarize the urine parameters, and Pearson correlation coefficients (r) were calculated in R software (version 4.3.1) to analyze associations among them. Among the 759 children, the prevalence of S. haematobium using haematuria as a diagnostic marker was 33.6%. Additionally, leukocytes were detected in 21.3% of the samples, and protein in 15%. The prevalence of positive POC-CCA test results for S. mansoni was 3.7%. Urine parameters exhibited low to moderate associations, suggesting complex interrelationships. For instance, specific gravity and pH showed a negative correlation (r = -0.37), indicating that higher specific gravity was associated with lower pH. Weak correlations were observed between haematuria and pH (r = -0.10), bilirubin and ketones (r = 0.14), protein and bilirubin (r = 0.13), and urobilinogen and pH (r = 0.12). A mild positive correlation was found between leukocytes and blood (r = 0.23), reflecting some association between these inflammation markers. In conclusion, the study identified a significant prevalence of schistosomiasis among school-going children in Nelson Mandela Bay Municipality, with S. haematobium detected through haematuria and S. mansoni identified using the POC-CCA assay. The detection of leukocytes and protein in urine samples serves as a critical biomarker for schistosomiasis infection, reinforcing the presence of the disease in the study area when considered alongside haematuria. These urine parameters are indicative of the inflammatory responses associated with schistosomiasis, underscoring the necessity for effective diagnostic methodologies. Such findings highlight the importance of comprehensive diagnostic assessments to accurately identify and monitor schistosomiasis prevalence and its associated health impacts. The significant burden of schistosomiasis in this population highlights the urgent need for targeted control interventions to effectively reduce its prevalence in the study area. Keywords: schistosomiasis, urinalysis, haematuria, POC-CCA
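For reference, the Pearson coefficient the authors report is simply the normalized covariance of two samples. The sketch below computes it in Python on made-up dipstick values (the study itself used R 4.3.1), purely to illustrate the calculation behind figures such as r = -0.37.

```python
import math

# Hypothetical dipstick readings for five children; values are illustrative,
# not data from the study.
specific_gravity = [1.010, 1.015, 1.020, 1.025, 1.030]
ph               = [7.0,   6.5,   6.0,   5.5,   5.0]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print("r(specific gravity, pH) = %.2f" % pearson_r(specific_gravity, ph))
# A negative r, as in the study's reported r = -0.37, indicates that more
# concentrated urine tended to be more acidic.
```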
Procedia PDF Downloads 24
1864 Energy and Carbon Footprint Analysis of Food Waste Treatment Alternatives for Hong Kong
Authors: Asad Iqbal, Feixiang Zan, Xiaoming Liu, Guang-Hao Chen
Abstract:
The water-food-energy nexus is a vital subject for achieving the sustainable development goals worldwide. Wastewater (WW) and food waste (FW) from municipal sources are primary contributors to a country's total waste streams. Along with the loss of these invaluable natural resources, their treatment systems consume considerable energy and abiotic resource inputs, with a perceptible contribution to global warming. Hence, the global paradigm has evolved from simple pollution mitigation towards resource recovery systems (RRS). In this study, the prospects of six alternative FW treatment scenarios for Hong Kong are quantitatively evaluated in terms of energy use and greenhouse emissions (GHEs) potential, using life cycle assessment (LCA). The scenarios considered are aerobic composting, anaerobic digestion (AD), combined AD and composting (ADC), co-disposal and treatment with wastewater (CoD-WW), incineration, and conventional landfilling as the base case. Results revealed that in terms of GHE savings, all new scenarios performed significantly better than conventional landfilling, with ADC as the best case and incineration, AD alone, and CoD-WW ranked second, third, and fourth best, respectively. In terms of energy balance, composting was the worst case, while incineration ranked best and AD alone, ADC, and CoD-WW ranked second, third, and fourth best, respectively. However, these results are highly sensitive to boundary settings, e.g., whether the impacts of biogenic carbon emissions and of waste collection and transportation are included, and to several other influential parameters. The study provides valuable insights and policy guidelines for local decision-makers and a generic modelling template for environmental impact assessment. Keywords: food waste, resource recovery, greenhouse emissions, energy balance
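The scenario ranking ultimately rests on simple net-GHG bookkeeping: direct process emissions plus collection and transport, minus credits for recovered energy and materials. The sketch below illustrates that accounting with entirely hypothetical inventory numbers; none of them come from the study's LCA data.

```python
# Hypothetical LCA inventory, kg CO2-eq per tonne of food waste treated.
# "credits" are avoided emissions from recovered electricity, heat, or compost.
scenarios = {
    "landfill (base case)":  {"direct": 740, "transport": 25, "credits": 60},
    "anaerobic digestion":   {"direct": 110, "transport": 25, "credits": 220},
    "AD + composting (ADC)": {"direct": 130, "transport": 25, "credits": 300},
    "incineration":          {"direct": 250, "transport": 25, "credits": 330},
}

for name, s in scenarios.items():
    net = s["direct"] + s["transport"] - s["credits"]  # net burden per tonne
    print(f"{name}: net {net} kg CO2-eq/t")
```

Whether biogenic CO2 or the transport term is counted at all is exactly the kind of boundary-setting choice the abstract flags as decisive for the ranking.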
Procedia PDF Downloads 110