Search results for: step-wise linear regression
39 Potential of Hyperion (EO-1) Hyperspectral Remote Sensing for Detection and Mapping Mine-Iron Oxide Pollution
Authors: Abderrazak Bannari
Abstract:
Acid Mine Drainage (AMD) from mine wastes and the contamination of soils and water with metals are considered a major environmental problem in mining areas. It is produced by interactions of water, air, and sulphidic mine wastes. This environmental problem results from a series of chemical and biochemical oxidation reactions of sulfide minerals, e.g., pyrite and pyrrhotite. These reactions lead to acidity as well as the dissolution of toxic and heavy metals (Fe, Mn, Cu, etc.) from tailings, waste rock piles, and open pits. Soil and aquatic ecosystems could be contaminated and, consequently, human health and wildlife will be affected. Furthermore, secondary minerals, typically formed during weathering of mine waste storage areas when the concentration of soluble constituents exceeds the corresponding solubility product, are also important. The most common secondary mineral compositions are hydrous iron oxides (goethite, etc.) and hydrated iron sulfates (jarosite, etc.). The objectives of this study focus on the detection and mapping of mine iron oxide pollution (MIOP) in the soil using Hyperion EO-1 (Earth Observing - 1) hyperspectral data and the constrained linear spectral mixture analysis (CLSMA) algorithm. The abandoned Kettara mine, located approximately 35 km northwest of Marrakech city (Morocco), was chosen as the study area. For 44 years (from 1938 to 1981), this mine was exploited for iron oxide and iron sulphide minerals. Previous studies have shown that the soils surrounding Kettara are contaminated by heavy metals (Fe, Cu, etc.) as well as by secondary minerals. To achieve our objectives, several soil samples representing different MIOP classes were sampled and located using accurate GPS (≤ ±30 cm). Then, endmember spectra were acquired over each sample using an Analytical Spectral Device (ASD) covering the spectral domain from 350 to 2500 nm. Considering each soil sample separately, the average of forty spectra was resampled and convolved using Gaussian response profiles to match the bandwidths and band centers of the Hyperion sensor. Moreover, the MIOP content in each sample was estimated by geochemical analyses in the laboratory, and a ground truth map was generated using simple Kriging in a GIS environment for validation purposes. The acquired Hyperion data were corrected for a spatial shift between the VNIR and SWIR detectors, striping, dead columns, noise, and gain and offset errors. They were then atmospherically corrected using the MODTRAN 4.2 radiative transfer code and transformed to surface reflectance, corrected for sensor smile (1-3 nm shift in VNIR and SWIR), and post-processed to remove residual errors. Finally, geometric distortions and relief displacement effects were corrected using a digital elevation model. The MIOP fraction map was extracted using CLSMA considering the entire spectral range (427-2355 nm) and validated by reference to the ground truth map generated by Kriging. The obtained results show the promising potential of the proposed methodology for the detection and mapping of mine iron oxide pollution in the soil.
Keywords: hyperion eo-1, hyperspectral, mine iron oxide pollution, environmental impact, unmixing
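As a rough illustration of the unmixing step named above, the sketch below solves a constrained linear spectral mixture problem with non-negative fractions and a softly enforced sum-to-one constraint. The endmember matrix, pixel vector, and penalty weight are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of constrained linear spectral mixture analysis (CLSMA),
# assuming endmember spectra resampled to the Hyperion bands are available.
import numpy as np
from scipy.optimize import lsq_linear

def clsma_unmix(E, pixel, sum_to_one_weight=1e3):
    """Estimate endmember fractions for one pixel.

    E     : (n_bands, n_endmembers) endmember reflectance matrix
    pixel : (n_bands,) observed surface reflectance

    Non-negativity is enforced by bounds; the sum-to-one constraint is
    imposed softly by appending a heavily weighted row of ones.
    """
    n_bands, n_end = E.shape
    A = np.vstack([E, sum_to_one_weight * np.ones((1, n_end))])
    b = np.append(pixel, sum_to_one_weight)   # fractions must sum to 1
    res = lsq_linear(A, b, bounds=(0.0, 1.0))
    return res.x  # fraction of each endmember, e.g. iron-oxide minerals

# fractions = clsma_unmix(E, pixel)  # map MIOP by running this per pixel
```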
Procedia PDF Downloads 227
38 Application of Aerogeomagnetic and Ground Magnetic Surveys for Deep-Seated Kimberlite Pipes in Central India
Authors: Utkarsh Tripathi, Bikalp C. Mandal, Ravi Kumar Umrao, Sirsha Das, M. K. Bhowmic, Joyesh Bagchi, Hemant Kumar
Abstract:
The Central India Diamond Province (CIDP) is known for occurrences of primary and secondary sources of diamonds in the Vindhyan platformal sediments, which host several kimberlites, with one operating mine. The known kimberlites are Neo-Proterozoic in age and intrude into the Kaimur Group of rocks. Based on the interpretation of aero-geomagnetic data, three potential zones were demarcated in parts of the Chitrakoot and Banda districts, Uttar Pradesh, and the Satna district, Madhya Pradesh, India. To validate the aero-geomagnetic interpretation, a ground magnetic survey coupled with a gravity survey was conducted to validate the anomalies and explore the possibility of pipes concealed beneath the Vindhyan sedimentary cover. Geologically, the area exposes milky white to buff-colored arkosic and arenitic sandstones belonging to the Dhandraul Formation of the Kaimur Group, which are undeformed and unmetamorphosed, providing an almost transparent medium for geophysical exploration. There is neither surface nor geophysical indication of intersections of linear structures, but the joint patterns depict three principal joints along the NNE-SSW, ENE-WSW, and NW-SE directions, with vertical to sub-vertical dips. Aeromagnetic data interpretation brings out three promising zones with bipolar magnetic anomalies (69-602 nT) that represent potential kimberlite intrusives concealed at an approximate depth of 150-170 m. The ground magnetic survey has brought out the above-mentioned anomalies in zone-I, which is congruent with the available aero-geophysical data. The magnetic anomaly map shows a total variation of 741 nT over the area. Two very high magnetic zones (H1 and H2) have been observed, with magnitudes of around 500 nT and 400 nT, respectively. Anomaly zone H1 is located in the west-central part of the area, south of Madulihai village, while anomaly zone H2 is located 2 km away in the north-eastern direction. The Euler 3D solution map indicates the possible existence of an ultramafic body beneath both magnetic highs (H1 and H2); H2 yields a shallow-depth solution, while H1 yields a deeper one. In the reduced-to-pole (RTP) method, the bipolar anomaly disappears, indicating the existence of one causative source for both anomalies, which is in all probability an ultramafic suite of rocks. The H1 magnetic high represents the main body, which persists to depths of ~500 m, as depicted through the upward continuation derivative map. The Radially Averaged Power Spectrum (RAPS) shows a thickness of loose sediments of up to 25 m, with a cumulative depth of 154 m for the sandstone overlying the ultramafic body. The average depth range of the shallower body (H2) is 60.5-86 m, as estimated through the Peters half-slope method. The total-field (TF) magnetic anomaly with Bouguer anomaly (BA) contours also shows high BA values around the magnetic high zones (H1 and H2), suggesting that the causative body has higher density and susceptibility than the surrounding host rock. The ground magnetic survey coupled with gravity confirms a potential target for further exploration, as the findings correlate with the presence of known diamondiferous kimberlites in this region, which post-date the rocks of the Kaimur Group.
Keywords: Kaimur, kimberlite, Euler 3D solution, magnetic
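For readers unfamiliar with the RAPS step mentioned above, here is a minimal sketch of computing a radially averaged power spectrum from a gridded magnetic anomaly; the grid, spacing, and bin count are illustrative assumptions, and source depths are conventionally read from the slope of the log-power spectrum.

```python
# Sketch: radially averaged power spectrum (RAPS) of a gridded anomaly.
import numpy as np

def radially_averaged_power_spectrum(tmi, dx, n_bins=50):
    """tmi: 2D gridded total-field anomaly; dx: grid spacing."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(tmi))) ** 2
    ky, kx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(tmi.shape[0], dx)),
                         np.fft.fftshift(np.fft.fftfreq(tmi.shape[1], dx)),
                         indexing='ij')
    k = 2 * np.pi * np.hypot(kx, ky)                 # radial wavenumber
    edges = np.linspace(0, k.max(), n_bins + 1)
    idx = np.digitize(k.ravel(), edges)
    power = [spec.ravel()[idx == i].mean() if np.any(idx == i) else np.nan
             for i in range(1, n_bins + 1)]
    # depth to source is commonly taken as -slope/2 of log(power) vs k
    return edges[1:], np.array(power)
```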
Procedia PDF Downloads 73
37 Learning from Dendrites: Improving the Point Neuron Model
Authors: Alexander Vandesompele, Joni Dambre
Abstract:
The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks, dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron an increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality observed in another study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is determined not only by the weight of the synapse but also by the activity of other synapses. This is a form of short-term plasticity in which synapses are potentiated or depressed by the preceding activity of neighbouring synapses, and a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. We use Spike-Time-Dependent Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same pattern through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network, which causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other is a LIF neuron with dendritic relationships. The five input neurons are then allowed to fire in a particular order. The membrane potentials are reset, and subsequently the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, the membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response differs between the two sequences. Hence, the dendritic mechanism improves the neuron's capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. It is possible to learn synaptic strength with STDP, to make a neuron more sensitive to its input. Similarly, it is possible to learn dendritic relationships with STDP, to make the neuron more sensitive to spatiotemporal input sequences. Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
Keywords: dendritic computation, spiking neural networks, point neuron model
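To make the synaptic-relation idea concrete, here is a minimal sketch with assumed parameters and names (it is not the Bindsnet implementation) of a LIF neuron whose per-spike impact is modulated by activity traces on neighbouring synapses:

```python
# Toy LIF neuron with pairwise synaptic relations R: each incoming spike's
# impact is scaled by the recent activity of the other synapses, a simple
# short-term-plasticity-like dependency. All constants are assumptions.
import numpy as np

class DendriticLIF:
    def __init__(self, n_in, tau_m=20.0, tau_trace=10.0, v_th=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.uniform(0.1, 0.3, n_in)             # synaptic weights
        self.R = rng.uniform(-0.5, 0.5, (n_in, n_in))    # synaptic relations
        np.fill_diagonal(self.R, 0.0)
        self.trace = np.zeros(n_in)     # recent presynaptic activity
        self.v = 0.0
        self.tau_m, self.tau_trace, self.v_th = tau_m, tau_trace, v_th

    def step(self, spikes, dt=1.0):
        # modulate each active synapse by the traces of its neighbours
        gain = 1.0 + self.R @ self.trace
        self.v += np.sum(self.w * spikes * gain) - dt * self.v / self.tau_m
        self.trace += spikes - dt * self.trace / self.tau_trace
        fired = self.v >= self.v_th
        if fired:
            self.v = 0.0               # reset after a spike
        return fired
```

Because the gain depends on which synapses were recently active, the same set of input spikes delivered in a different order yields a different somatic response, which is exactly the order sensitivity described above.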
Procedia PDF Downloads 132
36 Analytical Model of Locomotion of a Thin-Film Piezoelectric 2D Soft Robot Including Gravity Effects
Authors: Zhiwu Zheng, Prakhar Kumar, Sigurd Wagner, Naveen Verma, James C. Sturm
Abstract:
Soft robots have drawn great interest recently due to the rich range of possible shapes and motions they can take on to address new applications, compared to traditional rigid robots. Large-area electronics (LAE) provides a unique platform for creating soft robots by leveraging thin-film technology to enable the integration of a large number of actuators, sensors, and control circuits on flexible sheets. However, the rich shapes and motions possible, especially when interacting with complex environments, pose significant challenges to forming well-generalized and robust models necessary for robot design and control. In this work, we describe an analytical model, based on Euler-Bernoulli beam theory, for predicting the shape and locomotion of a flexible (steel-foil-based) piezoelectric-actuated 2D robot. Nominally (unpowered), the robot lies flat on the ground; when powered, its shape is controlled by an array of piezoelectric thin-film actuators. Key features of the model are its ability to incorporate the significant effects of gravity on the shape and to precisely predict the spatial distribution of friction against the contacting surfaces, necessary for determining inchworm-type motion. We verified the model by developing a distributed discrete-element representation of a continuous piezoelectric actuator and by comparing its analytical predictions to discrete-element robot simulations using PyBullet. Without gravity, predicting the shape of a sheet with a linear array of piezoelectric actuators at arbitrary voltages is straightforward. However, gravity significantly distorts the shape of the sheet, causing some segments to flatten against the ground. Our work includes the following contributions: (i) A self-consistent approach was developed to exactly determine which parts of the soft robot are lifted off the ground, and the exact shape of these sections, for an arbitrary array of piezoelectric voltages and configurations. (ii) Inchworm-type motion relies on controlling the relative friction with the ground surface in different sections of the robot. By adding torque balance to our model and analyzing shear forces, the model can determine the exact spatial distribution of the vertical force that the ground exerts on the soft robot and, through this, the spatial distribution of friction forces between ground and robot. (iii) By combining this spatial friction distribution with the shape of the soft robot as a function of time, as the piezoelectric actuator voltages are changed, the inchworm-type locomotion of the robot can be determined. As a practical example, we calculated the performance of a 5-actuator system on a 50-µm-thick steel foil. Piezoelectric properties of commercially available thin-film piezoelectric actuators were assumed. The model predicted inchworm motion of up to 200 µm per step. For independent verification, we also modelled the system using PyBullet, a discrete-element robot simulator. To model a continuous thin-film piezoelectric actuator, we broke each actuator into multiple segments, each consisting of two rigid arms with appropriate mass connected by a 'motor' whose torque was set by the applied actuator voltage. Excellent agreement between our analytical model and the discrete-element simulator was shown for both the full deformation shape and the motion of the robot.
Keywords: analytical modeling, piezoelectric actuators, soft robot locomotion, thin-film technology
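A toy sketch of the shape-prediction idea follows: each actuator segment imposes a voltage-proportional curvature (small-deflection Euler-Bernoulli), the curvature is integrated along the arc length to a profile, and points below the ground plane are crudely clamped. The constants are invented, and the paper's self-consistent gravity/contact solution is far more careful than this stand-in.

```python
# Toy shape model: piecewise-constant curvature set by actuator voltages,
# integrated twice to give the sheet profile. Clamping z at zero is only a
# crude stand-in for the paper's self-consistent ground-contact solution.
import numpy as np

def sheet_profile(voltages, seg_len=0.02, kappa_per_volt=0.5, n_sub=20):
    """Profile of a sheet whose segments bend with voltage-set curvature."""
    kappa = np.repeat(kappa_per_volt * np.asarray(voltages), n_sub)
    ds = seg_len / n_sub
    theta = np.cumsum(kappa) * ds        # slope = integral of curvature
    x = np.cumsum(np.cos(theta)) * ds    # arc-length integration
    z = np.cumsum(np.sin(theta)) * ds
    z = np.maximum(z, 0.0)               # crude clamp: no sinking below ground
    return x, z

# x, z = sheet_profile([5.0, -3.0, 4.0, -2.0, 1.0])  # 5-actuator example
```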
Procedia PDF Downloads 176
35 Planning Railway Assets Renewal with a Multiobjective Approach
Authors: João Coutinho-Rodrigues, Nuno Sousa, Luís Alçada-Almeida
Abstract:
Transportation infrastructure systems are fundamental to modern society and the economy. However, they need modernizing, maintaining, and reinforcing interventions, which require large investments. In many countries, accumulated intervention delays arise from aging and intense use, magnified by the financial constraints of the past. The decision problem of managing the renewal of large backlogs is common to several types of important transportation infrastructure (e.g., railways, roads). This problem requires considering financial aspects as well as operational constraints under a multidimensional framework. The present research introduces a linear programming multiobjective model for managing railway infrastructure asset renewal. The model aims at minimizing three objectives: (i) the yearly investment peak, by evenly spreading investment throughout multiple years; (ii) the total cost, which includes extra maintenance costs incurred from renewal backlogs; (iii) priority delays related to work start postponements on the higher-priority railway sections. Operational constraints ensure that passenger and freight services are not excessively delayed by having railway line sections under intervention. Achieving a balanced annual investment plan, without compromising the total financial effort or excessively postponing the execution of the priority works, was the motivation for pursuing the research now presented. The methodology, inspired by a real case study and tested with real data, reflects aspects of the practice of an infrastructure management company and is generalizable to different types of infrastructure (e.g., railways, highways). It was conceived for treating renewal interventions in infrastructure assets, which in a railway network may be rails, ballast, sleepers, etc.; while a section is under intervention, trains must run at reduced speed, causing delays in services. The model cannot, therefore, allow an accumulation of works on the same line, which might cause excessively large delays. Similarly, the lines do not all have the same socio-economic importance or service intensity, making it necessary to prioritize the sections to be renewed. The model takes these issues into account, and its output is an optimized works schedule for the renewal project, translatable into Gantt charts. The infrastructure management company provided all the data for the first test case study and validated the parameterization. This case consists of several sections to be renewed over 5 years, belonging to 17 lines. A large instance was also generated, reflecting a problem of a size similar to the USA railway network (considered the largest in the world), so considerably larger problems are not expected to appear in real life; an average backlog of 25 years and a project horizon of ten years were considered. Despite the very large increase in the number of decision variables (200 times as large), the computational time did not increase very significantly. It is thus expectable that just about any real-life problem can be treated on a modern computer, regardless of size. The trade-off analysis shows that if the decision maker allows some increase in the maximum yearly investment (i.e., degradation of objective i), solutions improve considerably in the remaining two objectives.
Keywords: transport infrastructure, asset renewal, railway maintenance, multiobjective modeling
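A toy, weighted-sum rendition of the three-objective renewal model might look as follows (PuLP, with made-up data; the actual model's constraints on service delays and same-line work accumulation are omitted):

```python
# Weighted-sum sketch of the three objectives: binary x[s][t] = 1 if section
# s starts renewal in year t. All data are placeholders, not the case study.
import pulp

sections = ["S1", "S2", "S3"]
years = range(3)
cost = {"S1": 10, "S2": 7, "S3": 5}             # renewal cost per section
backlog_penalty = {"S1": 2, "S2": 1, "S3": 1}   # extra maintenance per year deferred
priority = {"S1": 3, "S2": 1, "S3": 2}          # higher = more urgent

m = pulp.LpProblem("renewal", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (sections, years), cat="Binary")
peak = pulp.LpVariable("peak", lowBound=0)      # objective (i): yearly peak

for s in sections:                              # every section renewed once
    m += pulp.lpSum(x[s][t] for t in years) == 1
for t in years:                                 # peak bounds each year's spend
    m += pulp.lpSum(cost[s] * x[s][t] for s in sections) <= peak

total = pulp.lpSum((cost[s] + backlog_penalty[s] * t) * x[s][t]
                   for s in sections for t in years)    # objective (ii)
delay = pulp.lpSum(priority[s] * t * x[s][t]
                   for s in sections for t in years)    # objective (iii)
m += 1.0 * peak + 1.0 * total + 1.0 * delay     # weighted sum of objectives
m.solve(pulp.PULP_CBC_CMD(msg=False))
```

Varying the three weights and re-solving traces out the trade-off surface discussed in the abstract.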
Procedia PDF Downloads 143
34 Multiple Freezing/Thawing Cycles Influence Internal Structure and Mechanical Properties of Achilles Tendon
Authors: Martyna Ekiert, Natalia Grzechnik, Joanna Karbowniczek, Urszula Stachewicz, Andrzej Mlyniec
Abstract:
Tendon grafting is a common procedure performed to treat tendon rupture. Before the surgical procedure, tissues intended for grafts (i.e., the Achilles tendon) are stored at ultra-low temperatures for a long time and may also be subjected to unfavorable conditions, such as repetitive freezing (F) and thawing (T). Such storage protocols may strongly influence the graft's mechanical properties, decrease its functionality, and thus increase the risk of complications during the transplant procedure. Literature reports on the influence of multiple F/T cycles on the internal structure and mechanical properties of tendons remain inconclusive, simultaneously confirming and denying the negative influence of multiple F/T. Inconsistent research methodology and the lack of a clear limit on the number of F/T cycles that disqualifies tissue for surgical graft purposes encouraged us to investigate the issue of multiple F/T cycles by means of biomechanical tensile tests supported with Scanning Electron Microscope (SEM) imaging. The study was conducted on male bovine Achilles tendons derived from a local abattoir. Fresh tendons were cleaned of excessive membranes and then sectioned to obtain fascicle bundles. Collected samples were randomly assigned to 6 groups subjected to 1, 2, 4, 6, 8 and 12 freezing-thawing (F/T) cycles, respectively. Each F/T cycle included deep freezing at -80°C, followed by thawing at room temperature. After the final thawing, thin slices of the side part of samples subjected to 1, 4, 8 and 12 F/T cycles were collected for SEM imaging. Then, the width and thickness of all samples were measured to calculate the cross-sectional area. Biomechanical tests were performed using a universal testing machine (model Instron 8872, INSTRON®, Norwood, Massachusetts, USA) with a load cell of 250 kN maximum capacity, under standard atmospheric conditions. Both ends of each fascicle bundle were manually clamped in grasping clamps using abrasive paper and wet cellulose wadding swabs to prevent tissue slipping during clamping and testing. Samples were subjected to a testing procedure including pre-loading, pre-cycling, loading, holding and unloading steps to obtain stress-strain curves representing tendon stretching and relaxation. The stiffness of the AT fascicle bundle samples was evaluated in terms of the modulus of elasticity (Young's modulus), calculated from the slope of the linear region of the stress-strain curves. SEM imaging was preceded by chemical sample preparation, including 24 h fixation in 3% glutaraldehyde buffered with 0.1 M phosphate buffer, washing with 0.1 M phosphate buffer solution, and dehydration in a graded ethanol series. SEM images (Merlin Gemini II microscope, ZEISS®) were taken at 30,000× magnification, which allowed measurement of the diameter of collagen fibrils. The results show a decrease in fascicle bundle Young's modulus as well as a decrease in the diameter of collagen fibrils. These results confirm the negative influence of multiple F/T cycles on the mechanical properties of tendon tissue.
Keywords: biomechanics, collagen, fascicle bundles, soft tissue
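For illustration, stiffness extraction of the kind described, fitting the slope of the linear region of a stress-strain curve, can be sketched as below; the 2-4% strain window is an assumption, not the authors' stated choice.

```python
# Young's modulus as the slope of the linear region of a stress-strain curve.
import numpy as np

def youngs_modulus(strain, stress, window=(0.02, 0.04)):
    """Fit the slope within an assumed linear strain window."""
    mask = (strain >= window[0]) & (strain <= window[1])
    slope, _intercept = np.polyfit(strain[mask], stress[mask], 1)
    return slope  # in the units of stress (e.g., MPa) per unit strain
```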
Procedia PDF Downloads 123
33 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method
Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek
Abstract:
Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., Kolmogorov's scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially with a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information required for the DSEM code to start in parallel, extracted from the mesh file, into text files (pre-files). It packs integer-type information in a stream binary format in pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, in such a way that each MPI rank acquires its information from the file in parallel. In the case of GPFS, on each computational node a single MPI rank reads data from the file, which is specifically generated for that computational node, and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory's Mira (GPFS), the National Center for Supercomputing Applications' Blue Waters (Lustre), the San Diego Supercomputer Center's Comet (Lustre), and UIC's Extreme (Lustre). The tests showed that one file per node is suited to GPFS, and parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, for calculation of the solution at every time step. For this, the code can make use of its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact and the discontinuous nature of the method make the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed a scalable and efficient performance of the code in parallel computing.
Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow
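A minimal mpi4py sketch of the Lustre-style startup read described above, in which every rank pulls its own slice of a shared binary pre-file with collective MPI I/O, follows. The file name, 32-bit integer layout, and even partitioning are assumptions; the actual DSEM code is not Python.

```python
# Collective parallel read of a shared binary pre-file: each rank computes
# its byte offset and reads only its slice (Lustre-friendly pattern).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

fh = MPI.File.Open(comm, "partition.pre", MPI.MODE_RDONLY)
n_total = fh.Get_size() // 4            # file assumed to hold 32-bit integers
counts = [n_total // size + (r < n_total % size) for r in range(size)]
offset = 4 * sum(counts[:rank])         # byte offset of this rank's slice

buf = np.empty(counts[rank], dtype=np.int32)
fh.Read_at_all(offset, buf)             # collective, parallel read
fh.Close()
```

The GPFS variant described in the abstract would instead have one rank per node read a node-specific file and forward slices to node-local ranks with non-blocking point-to-point sends.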
Procedia PDF Downloads 132
32 Catastrophic Health Expenditures: Evaluating the Effectiveness of Nepal's National Health Insurance Program Using Propensity Score Matching and Doubly Robust Methodology
Authors: Simrin Kafle, Ulrika Enemark
Abstract:
Catastrophic health expenditure (CHE) is a critical issue in low- and middle-income countries like Nepal, exacerbating financial hardship among vulnerable households. This study assesses the effectiveness of Nepal's National Health Insurance Program (NHIP), launched in 2015, in reducing out-of-pocket (OOP) healthcare costs and mitigating CHE. Conducted in Pokhara Metropolitan City, the study used an analytical cross-sectional design, sampling 1,276 households through a two-stage random sampling method. Data were collected via face-to-face interviews between May and October 2023. The analysis was conducted using SPSS version 29, incorporating propensity score matching (PSM) to minimize biases and create comparable groups of households enrolled and not enrolled in the NHIP. PSM helped reduce confounding effects by matching households with similar baseline characteristics. Additionally, a doubly robust methodology was employed, combining propensity score adjustment with regression modeling to enhance the reliability of the results. This comprehensive approach ensured a more accurate estimation of the impact of NHIP enrollment on CHE. Among the 1,276 sampled households, 534 (41.8%) were enrolled in the NHIP. Of these, 84.3% renewed their insurance card, though some cited long waiting times, lack of medications, and complex procedures as barriers to renewal. Approximately 57.3% of households reported known diseases before enrollment, with 49.8% attending routine health check-ups in the past year. The primary motivation for enrollment was encouragement from insurance employees (50.2%). The data indicate that 12.5% of enrolled households experienced CHE versus 7.5% of non-enrolled households. Enrollment in the NHIP does not contribute to lower CHE (AOR: 1.98, 95% CI: 1.21-3.24). Key factors associated with increased CHE risk were the presence of non-communicable diseases (NCDs) (AOR: 3.94, 95% CI: 2.10-7.39), acute illnesses/injuries (AOR: 6.70, 95% CI: 3.97-11.30), larger household size (AOR: 3.09, 95% CI: 1.81-5.28), and households below the poverty line (AOR: 5.82, 95% CI: 3.05-11.09). Other factors, such as gender, education level, caste/ethnicity, presence of elderly members, and under-five children, also showed varying associations with CHE, though not all were statistically significant. The study concludes that enrollment in the NHIP does not significantly reduce the risk of CHE. The reason could be inadequate coverage, where high-cost medicines, treatments, and transportation costs are not fully included in the insurance package, leading to significant out-of-pocket expenses. We also considered the long waiting times, lack of medicines, and complex procedures for utilizing NHIP benefits, which might result in the underuse of covered services. Finally, gaps in enrollment and retention might leave certain households vulnerable to CHE despite the existence of the NHIP. Key factors contributing to increased CHE include NCDs, acute illnesses, larger household sizes, and poverty. To improve the program's effectiveness, it is recommended that NHIP benefits and coverage be expanded to better protect against high healthcare costs. Additionally, simplifying the renewal process, addressing long waiting times, and enhancing the availability of services could improve member satisfaction and retention. Targeted financial protection measures should be implemented for high-risk groups, and efforts should be made to increase awareness and encourage routine health check-ups to prevent severe health issues that contribute to CHE.
Keywords: catastrophic health expenditure, effectiveness, national health insurance program, Nepal
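As a hedged sketch of the doubly robust idea invoked above, the snippet below combines a propensity model with separate outcome models via augmented inverse-probability weighting; the column names are invented placeholders, and this is not the authors' SPSS workflow.

```python
# Doubly robust (AIPW) estimate of the effect of enrollment on CHE: the
# estimate is consistent if either the propensity model or the outcome
# model is correctly specified.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def aipw_effect(df: pd.DataFrame, covariates, treat="enrolled", outcome="che"):
    X, t, y = df[covariates].values, df[treat].values, df[outcome].values
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

    # outcome models fit separately in enrolled and non-enrolled groups
    m1 = LogisticRegression(max_iter=1000).fit(X[t == 1], y[t == 1])
    m0 = LogisticRegression(max_iter=1000).fit(X[t == 0], y[t == 0])
    mu1, mu0 = m1.predict_proba(X)[:, 1], m0.predict_proba(X)[:, 1]

    # augmented inverse-probability weighting estimator
    return (np.mean(t * (y - mu1) / ps + mu1)
            - np.mean((1 - t) * (y - mu0) / (1 - ps) + mu0))
```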
Procedia PDF Downloads 23
31 Biostabilisation of Sediments for the Protection of Marine Infrastructure from Scour
Authors: Rob Schindler
Abstract:
Industry-standard methods of mitigating the erosion of seabed sediments rely on 'hard engineering' approaches, which have numerous environmental shortcomings: (1) direct loss of habitat by smothering of benthic species, (2) disruption of sediment transport processes, damaging geomorphic and ecosystem functionality, (3) generation of secondary erosion problems, (4) introduction of material that may propagate non-local species, and (5) provision of pathways for the spread of invasive species. Recent studies have also revealed the importance of biological cohesion, the result of naturally occurring extracellular polymeric substances (EPS), in stabilizing natural sediments. Mimicking these strong bonding kinetics through the deliberate addition of EPS to sediments, henceforth termed 'biostabilisation', offers a means to mitigate erosion induced by structures or episodic increases in hydrodynamic forcing (e.g. storms and floods) whilst avoiding, or reducing, hard engineering. Here we present unique experiments that systematically examine how biostabilisation reduces scour around a monopile in a current, a first step to realizing the potential of this new method of scour reduction for a wide range of engineering purposes in aquatic substrates. Experiments were performed in Plymouth University's recirculating sediment flume, which includes a recessed scour pit. The model monopile was 0.048 m in diameter, D. Assuming a prototype monopile diameter of 2.0 m yields a geometric ratio of 41.67. When applied to a 10 m prototype water depth, this yields a model depth, d, of 0.24 m. The sediment pit containing the monopile was filled with different biostabilised substrata prepared using a mixture of fine sand (D50 = 230 μm) and EPS (xanthan gum). Nine sand-EPS mixtures were examined, spanning EPS contents of 0.0% < b0 < 0.50%. Scour development was measured using a laser point gauge along a 530 mm centreline at 10 mm increments at regular periods over 5 h. Maximum scour depth and excavated area were determined at different time steps and plotted against time to yield equilibrium values. After 5 hours the current was stopped and a detailed scan of the final scour morphology was taken. Results show that increasing EPS content causes a progressive reduction in the equilibrium depth and lateral extent of scour, and hence of excavated material. Very small amounts, equating to natural communities (< 0.1% by mass), reduce the scour rate, depth, and extent of scour around monopiles. Furthermore, the strong linear relationships between EPS content, equilibrium scour depth, excavated area, and the timescales of scouring offer a simple index on which to modify existing scour prediction methods. We conclude that the biostabilisation of sediments with EPS may offer a simple, cost-effective and ecologically sensitive means of reducing scour in a range of contexts including OWFs, bridge piers, pipeline installation, and void filling in rock armour. Biostabilisation may also reduce economic costs through (1) use of existing site sediments or waste dredged sediments, (2) reduced fabrication of materials, (3) lower transport costs, and (4) less dependence on specialist vessels and precise sub-sea assembly. Further, its potential environmental credentials may allow sensitive use of the seabed in marine protection zones across the globe.
Keywords: biostabilisation, EPS, marine, scour
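The reported linear EPS-scour relationships lend themselves to a simple scour-reduction index; the sketch below fits such a line to invented placeholder numbers, not the flume measurements.

```python
# Least-squares fit of equilibrium scour depth against EPS content; the
# slope/intercept pair acts as the simple correction index mentioned above.
import numpy as np

eps_content = np.array([0.0, 0.05, 0.1, 0.2, 0.35, 0.5])     # % by mass
scour_depth = np.array([1.00, 0.85, 0.72, 0.48, 0.25, 0.05])  # normalised

slope, intercept = np.polyfit(eps_content, scour_depth, 1)
predict = lambda b0: slope * b0 + intercept  # scour depth at EPS content b0
```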
Procedia PDF Downloads 165
30 Fe Modified Tin Oxide Thin Film Based Matrix for Reagentless Uric Acid Biosensing
Authors: Kashima Arora, Monika Tomar, Vinay Gupta
Abstract:
Biosensors have found potential applications ranging from environmental testing and biowarfare agent detection to clinical testing, health care, and cell analysis. This is driven in part by the desire to decrease the cost of health care and to obtain precise information more quickly about the health status of patients through the development of various biosensors, which have become increasingly prevalent in clinical testing and point-of-care testing for a wide range of biological elements. Uric acid is an important byproduct in the human body, and a number of pathological disorders are related to its high concentration. In the past few years, rapid growth in the development of new materials and improvements in sensing techniques have led to the evolution of advanced biosensors. In this context, metal oxide thin film based matrices, due to their biocompatible nature, strong adsorption ability, high isoelectric point (IEP) and abundance in nature, have become the materials of choice for recent technological advances in biotechnology. In the past few years, wide band-gap metal oxide semiconductors, including ZnO, SnO₂ and CeO₂, have gained much attention as matrices for the immobilization of various biomolecules. Tin oxide (SnO₂), a wide band gap semiconductor (Eg = 3.87 eV), despite having multifunctional properties for a broad range of applications including transparent electronics, gas sensors, acoustic devices, UV photodetectors, etc., has not been explored much for biosensing purposes. To realize a high-performance miniaturized biomolecular electronic device, the rf sputtering technique is considered the most promising for the reproducible growth of good-quality thin films, controlled surface morphology, and the desired film crystallization with improved electron transfer properties. Recently, iron oxide and its composites have been widely used as matrices for biosensing applications, exploiting the electron communication feature of Fe, for the detection of various analytes using urea, hemoglobin, glucose, phenol, L-lactate, H₂O₂, etc. However, to the authors' knowledge, no work has been reported on modifying the electronic properties of SnO₂ by implanting it with a suitable metal (Fe) to induce a redox couple and utilizing it for the reagentless detection of uric acid. In the present study, an Fe-implanted SnO₂ based matrix has been utilized for a reagentless uric acid biosensor. Implantation of Fe into the SnO₂ matrix was confirmed by energy-dispersive X-ray spectroscopy (EDX) analysis. Electrochemical techniques have been used to study the response characteristics of the Fe-modified SnO₂ matrix before and after uricase immobilization. The developed uric acid biosensor exhibits a high sensitivity of about 0.21 mA/mM and a linear variation in current response over the concentration range from 0.05 to 1.0 mM of uric acid, besides a long shelf life (~20 weeks). The Michaelis-Menten kinetic parameter (Km) is found to be very low (0.23 mM), which indicates the high affinity of the fabricated bioelectrode towards uric acid (the analyte). Also, other interferents present in human serum have a negligible effect on the performance of the biosensor. Hence, the obtained results highlight the importance of the implanted Fe:SnO₂ thin film as an attractive matrix for the realization of reagentless biosensors for uric acid.
Keywords: Fe implanted tin oxide, reagentless uric acid biosensor, rf sputtering, thin film
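To illustrate how calibration figures of the kind quoted above are typically obtained, the sketch below fits a linear sensitivity and an apparent Km to placeholder current-concentration data; the arrays are invented, not the measured response.

```python
# Sensitivity from the slope of the linear calibration region, plus a
# Michaelis-Menten fit for the apparent Km. Data are placeholders.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.05, 0.1, 0.25, 0.5, 0.75, 1.0])              # uric acid, mM
current = np.array([0.012, 0.023, 0.055, 0.105, 0.150, 0.190])  # response, mA

sensitivity = np.polyfit(conc, current, 1)[0]  # slope in mA/mM

def michaelis_menten(c, i_max, km):
    return i_max * c / (km + c)

(i_max, km), _ = curve_fit(michaelis_menten, conc, current, p0=(0.3, 0.2))
```

A low fitted Km, as reported (0.23 mM), corresponds to the response saturating at low analyte concentration, i.e., high enzyme-electrode affinity.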
Procedia PDF Downloads 180
29 Establishment of a Classifier Model for Early Prediction of Acute Delirium in Adult Intensive Care Unit Using Machine Learning
Authors: Pei Yi Lin
Abstract:
Objective: The objective of this study is to use machine learning methods to build an early prediction classifier model for acute delirium, to improve the quality of medical care for intensive care patients. Background: Delirium is a common acute and sudden disturbance of consciousness in critically ill patients. Once it occurs, it tends to prolong the length of hospital stay and increase medical costs and mortality. In 2021, the incidence of delirium in the internal medicine intensive care unit was as high as 59.78%, which indirectly prolonged the average length of hospital stay by 8.28 days, and the mortality rate has been about 2.22% over the past three years. Therefore, we aim to build a delirium prediction classifier through big data analysis and machine learning methods to detect delirium early. Method: This is a retrospective study, using an artificial intelligence big data database to extract the characteristic factors related to delirium in intensive care unit patients and letting the machine learn from them. The study included patients over 20 years old who were admitted to the intensive care unit between May 1, 2022, and December 31, 2022, excluding patients with a GCS assessment < 4 points, ICU stays of less than 24 hours, and cases without CAM-ICU evaluation. The CAM-ICU delirium assessment results, taken every 8 hours within 30 days of hospitalization, are regarded as events, and the cumulative data from ICU admission to the prediction time point are extracted to predict the possibility of delirium occurring in the next 8 hours. A total of 63,754 case records were collected, and 12 features were selected to train the model, including age, sex, average ICU stay in hours, visual and auditory abnormalities, RASS assessment score, APACHE-II score, number of indwelling invasive catheters, use of restraints, and sedative and hypnotic drugs. Through feature data cleaning, processing, and KNN imputation, a total of 54,595 case events were extracted for machine learning model analysis. Events from May 1 to November 30, 2022, were used as the model training data, of which 80% formed the training set for model training and 20% the internal validation set; events from December 1 to December 31, 2022, formed the external validation set. Model inference and performance evaluation were then performed, after which the model was retrained with adjusted parameters. Results: In this study, XGBoost, Random Forest, Logistic Regression, and Decision Tree models were analyzed and compared. The average accuracy in internal validation was highest for Random Forest (AUC = 0.86); in external validation, Random Forest and XGBoost were the highest, with an AUC of 0.86; and the average cross-validation accuracy was highest for Random Forest (ACC = 0.77). Conclusion: Clinically, medical staff usually conduct CAM-ICU assessments at the bedside of critically ill patients, but there is a lack of machine learning classification methods to assist with real-time assessment of ICU patients, so more objective and continuous monitoring data cannot be provided to help clinical staff more accurately identify and predict the occurrence of delirium. It is hoped that the development of predictive models through machine learning can predict delirium early and immediately, enable clinical decisions at the best time, and, combined with PADIS delirium care measures, provide individualized non-drug interventional care to maintain patient safety and thereby improve the quality of care.
Keywords: critically ill patients, machine learning methods, delirium prediction, classifier model
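A hedged sketch of the modelling workflow described above follows, with KNN imputation, an 80/20 split, and a Random Forest scored by AUC; the file name and column names are placeholders (a subset of the 12 listed features), not the study's dataset.

```python
# Train and internally validate a Random Forest delirium classifier.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

features = ["age", "sex", "icu_hours", "sensory_abnormality", "rass",
            "apache_ii", "n_catheters", "restraint", "sedative_hypnotics"]

df = pd.read_csv("icu_events.csv")            # assumed file of 8-hour events
X = KNNImputer(n_neighbors=5).fit_transform(df[features])
y = df["delirium_next_8h"]                    # label: delirium in next 8 h

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2,
                                            stratify=y, random_state=42)
clf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_tr, y_tr)
print("internal AUC:", roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1]))
```

External validation, as in the study, would apply the fitted model unchanged to the held-out December events.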
Procedia PDF Downloads 73
28 Unknown Groundwater Pollution Source Characterization in Contaminated Mine Sites Using Optimal Monitoring Network Design
Authors: H. K. Esfahani, B. Datta
Abstract:
Groundwater is one of the most important natural resources in many parts of the world; however, it is widely polluted due to human activities. Currently, effective and reliable groundwater management and remediation strategies are obtained using the characterization of groundwater pollution sources, where measured data at monitoring locations are utilized to estimate the unknown pollutant source location and magnitude. However, accurately identifying the characteristics of contaminant sources is a challenging task due to uncertainties in terms of predicting the source flux injection, hydro-geological and geo-chemical parameters, and the concentration field measurements. Reactive transport of chemical species in contaminated groundwater systems, especially with multiple species, is a complex and highly non-linear geochemical process. Although sufficient concentration measurement data are essential to accurately identify source characteristics, available data are often sparse and limited in quantity. Therefore, this inverse problem of characterizing unknown groundwater pollution sources is often considered ill-posed, complex, and non-unique. Different methods have been utilized to identify pollution sources; however, the linked simulation-optimization approach is one effective method for obtaining acceptable results under uncertainties in complex real-life scenarios. With this approach, the numerical flow and contaminant transport simulation models are externally linked to an optimization algorithm, with the objective of minimizing the difference between the measured concentrations and the estimated pollutant concentrations at observation locations. Concentration measurement data are very important for accurately estimating pollution source properties; therefore, optimal design of the monitoring network is essential to gather adequate measured data at the desired times and locations. Due to budget and physical restrictions, an efficient and effective approach for groundwater pollutant source characterization is to design an optimal monitoring network, especially when only inadequate and arbitrary concentration measurement data are initially available. In this approach, preliminary concentration observation data are utilized for preliminary identification of the source location, magnitude, and duration of source activity, and these results are utilized for monitoring network design. Further, feedback information from the monitoring network is used as input for sequential monitoring network design, to improve the identification of the unknown source characteristics. To design an effective monitoring network of observation wells, optimization and interpolation techniques are used. A simulation model should be utilized to accurately describe the aquifer properties in terms of hydro-geochemical parameters and boundary conditions. However, the simulation of transport processes becomes complex when the pollutants are chemically reactive. A three-dimensional transient flow and reactive contaminant transport process is considered. The proposed methodology uses HYDROGEOCHEM 5.0 (HGCH) as the simulation model for flow and transport processes with multiple chemically reactive species. Adaptive Simulated Annealing (ASA) is used as the optimization algorithm in the linked simulation-optimization methodology to identify the unknown source characteristics. Therefore, the aim of the present study is to develop a methodology to optimally design an effective monitoring network for pollution source characterization with reactive species in polluted aquifers. The performance of the developed methodology will be evaluated for an illustrative polluted aquifer site, for example, an abandoned mine site in Queensland, Australia.
Keywords: monitoring network design, source characterization, chemical reactive transport process, contaminated mine site
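Conceptually, the linked simulation-optimization loop can be sketched as below, with simulated annealing searching for source parameters that minimize the misfit between simulated and observed concentrations. Here run_transport_model is a toy analytic stand-in for a HYDROGEOCHEM-style simulator (not its real API), and the well coordinates and bounds are invented.

```python
# Linked simulation-optimization: an annealing-type optimizer drives the
# forward transport model toward the observed concentrations.
import numpy as np
from scipy.optimize import dual_annealing

wells = np.array([[200., 300.], [600., 400.], [450., 700.]])  # assumed wells

def run_transport_model(params):
    # toy stand-in for the simulator: a steady plume of strength `flux`
    # decaying with squared distance from the source at (x, y)
    x, y, flux = params
    d2 = np.sum((wells - [x, y]) ** 2, axis=1)
    return flux * np.exp(-d2 / 1e5)

measured = run_transport_model([500.0, 500.0, 20.0])  # synthetic observations

def misfit(params):
    return np.sum((run_transport_model(params) - measured) ** 2)

bounds = [(0, 1000), (0, 1000), (0, 50)]   # source x, y and flux ranges
result = dual_annealing(misfit, bounds, maxiter=200, seed=1)
print(result.x)   # recovered source location and flux
```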
Procedia PDF Downloads 230
27 Policies for Circular Bioeconomy in Portugal: Barriers and Constraints
Authors: Ana Fonseca, Ana Gouveia, Edgar Ramalho, Rita Henriques, Filipa Figueiredo, João Nunes
Abstract:
Due to persistent climate pressures, there is a need to find a resilient economic system that is regenerative in nature. Bioeconomy offers the possibility of replacing non-renewable and non-biodegradable materials derived from fossil fuels with ones that are renewable and biodegradable, while a Circular Economy aims at sustainable and resource-efficient operations. The term "Circular Bioeconomy", which can be summarized as all activities that transform biomass for its use in various product streams, expresses the interaction between these two ideas. Portugal has a very favourable context for promoting a Circular Bioeconomy due to its variety of climates and ecosystems, availability of biologically based resources, location, and geomorphology. Recently, there have been political and legislative efforts to develop the Portuguese Circular Bioeconomy. The Action Plan for a Sustainable Bioeconomy, approved in 2021, is composed of five axes of intervention, ranging from sustainable production and the use of regionally based biological resources to the development of a circular and sustainable bioindustry through research and innovation. However, as some statistics show, Portugal is still far from achieving circularity. According to Eurostat, Portugal has a circularity rate of 2.8%, the second lowest among the member states of the European Union. Several challenges contribute to this scenario, including sectorial heterogeneity and fragmentation, the prevalence of small producers, lack of attractiveness for younger generations, and the absence of collaborative solutions implemented amongst producers and along value chains. Regarding the Portuguese industrial sector, there is a tendency towards complex bureaucratic processes, which leads to economic and financial obstacles and an unclear national strategy. Together with the limited number of incentives the country offers to those who intend to abandon the linear economic model, many entrepreneurs are hesitant to invest the capital needed to make their companies more circular. The absence of disaggregated, georeferenced, and reliable information regarding the actual availability of biological resources is also a major issue. Low literacy on bioeconomy among many of the sectoral agents and in society in general directly impacts production and final consumption decisions. The WinBio project seeks to outline a strategic approach for the management of weaknesses/opportunities in the technology transfer process, given the reality of the territory, through road mapping and national and international benchmarking. The work included the identification and analysis of agents in the interior region of Portugal, natural endogenous resources, and products and processes with potential for development. Specific flows of biological wastes, possible value chains, and the potential for replacing critical raw materials with bio-based products were assessed, taking into consideration other countries with a mature bioeconomy. The study found that the food industry, agriculture, forestry, and fisheries generate huge amounts of waste streams, which in turn provide an opportunity for the establishment of local bio-industries powered by this biomass. The project identified biological resources with potential for replication and applicability in the Portuguese context. The richness of natural resources and known potential in the interior region of Portugal is a major key to developing the Circular Economy and the sustainability of the country.
Keywords: circular bioeconomy, interior region of Portugal, regional development, public policy
Procedia PDF Downloads 91
26 White-Rot Fungi Phellinus as a Source of Antioxidant and Antitumor Agents
Authors: Yogesh Dalvi, Ruby Varghese, Nibu Varghese, C. K. Krishnan Nair
Abstract:
Introduction: The genus Phellinus, locally known as Phansomba, is a well-known traditional folk medicine. Especially in the Western Ghats of India, many tribes use several species of Phellinus for various ailments related to the teeth, throat, tongue, and stomach, and even for wound healing. It is one of the few mushrooms that play a pivotal role in Ayurvedic Dravyaguna. Aim: The present study investigates the phytochemical profile and the antioxidant and antitumor (in vitro and in vivo) potential of Phellinus robinae from South India, Kerala. Material and Methods: The present study explores the following: 1. Phellinus samples were collected from Artocarpus heterophyllus Lam. in Ranni, Pathanamthitta district of Kerala state, India, and the species was identified using the rDNA region. 2. The fruiting body was shade-dried, powdered, and extracted with 50% alcohol in a water bath at 60°C; the extract was further condensed in a rotary evaporator and lyophilized at -40°C. 3. Secondary metabolites were analyzed using various phytochemical screening assays (Hager's test, Wagner's test, sodium hydroxide test, lead acetate test, ferric chloride test, Folin-Ciocalteu test, foaming test, Benedict's test, Fehling's test and Lowry's test). 4. Antioxidant and free radical scavenging activities were analyzed by DPPH, FRAP and iron chelating assays. 5. The antitumor potential of the water-alcohol extract of Phellinus (PAWE) was evaluated in vitro by the Trypan blue dye exclusion method in the DLA cell line and in vivo in a murine model. Result and Discussion: Preliminary phytochemical screening by various biochemical tests revealed the presence of a variety of active secondary molecules, such as alkaloids, flavonoids, saponins, carbohydrates, proteins and phenols. In the DPPH and FRAP assays, PAWE showed significantly higher antioxidant activity compared to standard ascorbic acid, while in the iron chelating assay, PAWE exhibited antioxidant activity similar to that of the standard, butylated hydroxytoluene (BHT). Further, in the in vitro study, PAWE showed significant inhibition of DLA cell proliferation in a dose-dependent manner and showed no toxicity towards mouse splenocytes, compared to the standard chemotherapy drug doxorubicin. In the in vivo study, oral administration of PAWE produced dose-dependent tumor regression in mice and also raised immunogenicity by restoring the levels of antioxidant enzymes in liver and kidney tissue. In both the in vitro and in vivo gene expression studies, PAWE up-regulates pro-apoptotic genes (Bax, Caspases 3, 8 and 9) and down-regulates anti-apoptotic genes (Bcl2). PAWE also down-regulates an inflammatory gene (Cox-2) and an angiogenic gene (VEGF). Conclusion: Preliminary phytochemical screening revealed that PAWE contains various secondary metabolites, which contribute to its antioxidant and free radical scavenging properties as evaluated by the DPPH, FRAP and iron chelating assays. PAWE exhibits anti-proliferative activity through the induction of apoptosis via a signaling cascade involving the death receptor-mediated extrinsic pathway (Caspase 8 and Tnf-α), the mitochondria-mediated intrinsic pathway (Caspase 9) and the caspase pathways (Caspases 3, 8 and 9), and also by regressing an angiogenic factor (VEGF), without any inflammation or adverse side effects. Hence, PAWE serves as a potential antioxidant and antitumor agent.
Keywords: antioxidant, antitumor, Dalton lymphoma ascites (DLA), fungi, Phellinus robinae
Procedia PDF Downloads 301
25 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation
Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong
Abstract:
Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. It is known that CT images are inherently more prone to artefacts due to the image formation process, in which a large number of independent detectors are involved and assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. Thus, it is desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide a better interpretation of the anatomical and pathological characteristics. However, this is considered a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on a deep neural network framework in which denoising auto-encoders are stacked to build multiple layers. The denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction of the same size as the input data. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme is applied, using residual-driven dropout determined from the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with the back-propagation algorithm. In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders together with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of the PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation
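One stacked layer of the kind described can be sketched as follows (PyTorch, with assumed hyperparameters; the authors' residual-driven dropout and TV decomposition are not reproduced here):

```python
# Minimal denoising auto-encoder layer: corrupt the input, encode through a
# non-linear hidden mapping, decode to input size, and train on the squared
# reconstruction error against the clean input (normal-residual assumption).
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, n_in, n_hidden, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.enc = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        corrupted = x + self.noise_std * torch.randn_like(x)
        return self.dec(self.enc(corrupted))

model = DenoisingAE(n_in=64 * 64, n_hidden=1024)   # 64x64 image patches
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(clean_batch):   # clean_batch: (B, 4096) flattened patches
    opt.zero_grad()
    loss = loss_fn(model(clean_batch), clean_batch)
    loss.backward()            # classical SGD + back-propagation
    opt.step()
    return loss.item()
```

Stacking several such layers, and feeding TV-decomposed intrinsic images during training as the abstract describes, yields the full restoration network.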
Procedia PDF Downloads 189
24 Source of Professionalism and Knowledge among Sport Industry Professionals in India with Limited Sport Management Higher Education
Authors: Sandhya Manjunath
Abstract:
The World Association for Sport Management (WASM) was established in 2012, and its mission is "to facilitate sport management research, teaching, and learning excellence and professional practice worldwide". As the field of sport management evolves, it have seen increasing globalization of not only the sport product but many educators have also internationalized courses and curriculums. Curricula should reflect globally recognized issues and disseminate specific intercultural knowledge, skills, and practices, but regional disparities still exist. For example, while India has some of the most ardent sports fans and events in the world, sport management education programs and the development of a proper curriculum in India are still in their nascent stages, especially in comparison to the United States and Europe. Using the extant literature on professionalization and institutional theory, this study aims to investigate the source of knowledge and professionalism of sports managers in India with limited sport management education programs and to subsequently develop a conceptual framework that addresses any gaps or disparities across regions. This study will contribute to WASM's (2022) mission statement of research practice worldwide, specifically to fill the existing disparities between regions. Additionally, this study may emphasize the value of higher education among professionals entering the workforce in the sport industry. Most importantly, this will be a pioneer study highlighting the social issue of limited sport management higher education programs in India and improving professional research practices. Sport management became a field of study in the 1980s, and scholars have studied its professionalization since this time. Dowling, Edwards, & Washington (2013) suggest that professionalization can be categorized into three broad categories of organizational, systemic, and occupational professionalization. However, scant research has integrated the concept of professionalization with institutional theory. A comprehensive review of the literature reveals that sports industry research is progressing in every country worldwide at its own pace. However, there is very little research evidence about the Indian sports industry and the country's limited higher education sport management programs. A growing need exists for sports scholars to pursue research in developing countries like India to develop theoretical frameworks and academic instruments to evaluate the current standards of qualified professionals in sport management, sport marketing, venue and facilities management, sport governance, and development-related activities. This study may postulate a model highlighting the value of higher education in sports management. Education stakeholders include governments, sports organizations and their representatives, educational institutions, and accrediting bodies. As these stakeholders work collaboratively in developed countries like the United States and Europe and developing countries like India, they simultaneously influence the professionalization (i.e., organizational, systemic, and occupational) of sport management education globally. The results of this quantitative study will investigate the current standards of education in India and the source of knowledge among industry professionals. Sports industry professionals will be randomly selected to complete the COSM survey on PsychData and rate their perceived knowledge and professionalism on a Likert scale. 
Additionally, they will answer questions about their competencies, experience, and challenges in contributing to Indian sports management research. Multivariate regression will be used to measure the degree to which the various independent variables impact the current knowledge, contribution to research, and professionalism of India's sports industry professionals, as sketched below. This quantitative study will contribute to the limited academic literature available to Indian sports practitioners. Additionally, it will synthesize knowledge from previous work on professionalism and institutional knowledge, providing a springboard for new research that will fill the existing knowledge gaps. While a further empirical investigation is warranted, our conceptualization contributes to and highlights India's burgeoning sport management industry.Keywords: sport management, professionalism, source of knowledge, higher education, India
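As a minimal sketch of the planned regression step, the snippet below fits an ordinary least squares model of perceived professionalism on hypothetical predictors; the variable names and Likert-scale data are invented for illustration and are not the COSM instrument's actual items.

```python
# Hedged sketch of the planned regression analysis; all variable names and
# values are illustrative assumptions, not the study's survey data.
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative survey responses: Likert ratings plus background factors.
df = pd.DataFrame({
    "perceived_professionalism": [4, 3, 5, 2, 4, 3, 5, 4],
    "higher_education_years":    [4, 2, 6, 0, 4, 2, 6, 4],
    "industry_experience_years": [5, 8, 3, 10, 6, 7, 2, 4],
})

# Regress the outcome on the hypothesized predictors.
model = smf.ols(
    "perceived_professionalism ~ higher_education_years"
    " + industry_experience_years",
    data=df,
).fit()
print(model.summary())  # coefficients show each predictor's contribution
```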
Procedia PDF Downloads 69
23 Pisolite Type Azurite/Malachite Ore in Sandstones at the Base of the Miocene in Northern Sardinia: The Authigenic Hypothesis
Authors: S. Fadda, M. Fiori, C. Matzuzzi
Abstract:
Mineralized formations in the bottom sediments of a Miocene transgression have been discovered in Sardinia. The mineral assemblage consists of copper sulphides and oxidation products, suggesting fluctuations of redox conditions in neutral to high-pH restricted shallow-water coastal basins. Azurite/malachite has been observed as authigenic and occurs as loose spheroidal crystalline particles associated with the transitional-littoral horizon forming the bottom of the marine transgression. Many field observations are consistent with a supergene circulation of metals involving terrestrial groundwater-seawater mixing. Both the clastic materials and the metals come from Tertiary volcanic edifices, while the main precipitating anions, carbonate and sulphide species, are of both continental and marine origin. Formation of the Cu carbonates as a supergene secondary 'oxide' assemblage does not agree with the field evidence, petrographic observations, or textural evidence in the host-rock types. Samples were collected along the sedimentary sequence for different analyses: the majority of elements were determined by X-ray fluorescence and plasma-atomic emission spectroscopy. Mineral identification was obtained by X-ray diffractometry and scanning electron microprobe. Thin sections of the samples were examined by optical microscopy, while porosity measurements were made using a mercury intrusion porosimeter. The Cu-carbonates deposited at a temperature below 100 °C, which is consistent with the clay minerals in the matrix of the host rock, dominated by illite and montmorillonite. Azurite nodules grew during the early diagenetic stage through reaction of cupriferous solutions with CO₂ imported from the overlying groundwater and circulating through the sandstones during shallow burial. Decomposition of organic matter in the bottom anoxic waters released additional carbon dioxide to the pore fluids for azurite stability. In this manner, localized reducing environments were also generated in which Cu was fixed as Cu-sulphide and sulphosalts. Microscopic examination of the textural features of the azurite nodules gives evidence of primary malachite/azurite deposition rather than supergene oxidation in place of primary sulphides. Photomicrographs show nuclei of azurite and malachite surrounded by newly formed microcrystalline carbonates which constitute the matrix. The typical pleochroism of the crystals can also be observed when this mineral fills microscopic fissures or cracks. Sedimentological evidence of transgression and regression indicates that the pore water would have been a variable mixture of marine water and groundwaters, with a possible meteoric component, in an alternately exposed and subaqueous environment owing to water-level fluctuation. Salinity data of the pore fluids, assessed at random intervals along the mineralised strata, confirmed values between about 7,000 and 30,000 ppm measured in coeval sediments at the base of the Miocene, falling in the range of more or less diluted seawater. This suggests a variation in mean pore-fluid pH between 5.5 and 8.5, compatible with the oxidized and reduced mineral parageneses described in this work. The results of stable isotope studies reflect the marine transgressive-regressive cyclicity of events and are compatible with carbon derivation from seawater. During the last oxidative stage of diagenesis, under surface conditions of higher activity of H₂O and O₂, CO₂ partial pressure decreased, and malachite became the stable Cu mineral.
The potential for these small but high-grade deposits does exist.Keywords: sedimentary, Cu-carbonates, authigenic, tertiary, Sardinia
Procedia PDF Downloads 130
22 Multiaxial Stress Based High Cycle Fatigue Model for Adhesive Joint Interfaces
Authors: Martin Alexander Eder, Sergei Semenov
Abstract:
Many glass-epoxy composite structures, such as large utility wind turbine rotor blades (WTBs), comprise adhesive joints with typically thick bond lines used to connect the different components during assembly. Performance optimization of rotor blades to increase power output while maintaining high stiffness-to-low-mass ratios entails intricate geometries in conjunction with complex anisotropic material behavior. Consequently, adhesive joints in WTBs are subject to multiaxial stress states with significant stress gradients depending on the local joint geometry. Moreover, the dynamic aero-elastic interaction of the WTB with the airflow generates non-proportional, variable amplitude stress histories in the material. Experience shows that a prominent failure type in WTBs is high cycle fatigue failure of adhesive bond line interfaces, which has over time developed into a design driver as WTB sizes increase rapidly. Structural optimization employed at an early design stage therefore sets high demands on computationally efficient interface fatigue models capable of predicting the critical locations prone to interface failure. The numerical stress-based interface fatigue model presented in this work uses the Drucker-Prager criterion to compute three different damage indices corresponding to the two interface shear tractions and the outward normal traction. The two-parameter Drucker-Prager model was chosen because of its ability to consider shear strength enhancement under compression and shear strength reduction under tension. The governing interface damage index is taken as the maximum of the triple. The damage indices are computed through the well-known linear Palmgren-Miner rule after separate rainflow counting of the equivalent shear stress history and the equivalent pure normal stress history. The equivalent stress signals are obtained by self-similar scaling of the Drucker-Prager surface, whose shape is defined by the uniaxial tensile strength and the shear strength, such that it intersects with the stress point at every time step. This approach implicitly assumes that the damage caused by the prevailing multiaxial stress state is the same as the damage caused by an amplified equivalent uniaxial stress state in the three interface directions. The model was implemented as a Python plug-in for the commercially available finite element code Abaqus for use with solid elements. The model was used to predict the interface damage of an adhesively bonded, tapered glass-epoxy composite cantilever I-beam tested by LM Wind Power under constant amplitude compression-compression tip load in the high cycle fatigue regime. Results show that the model was able to predict the location of debonding in the adhesive interface between the webfoot and the cap. Moreover, with a set of two different constant life diagrams, namely in shear and tension, it was possible to predict both the fatigue lifetime and the failure mode of the sub-component with reasonable accuracy. It can be concluded that the fidelity, robustness, and computational efficiency of the proposed model make it especially suitable for rapid fatigue damage screening of large 3D finite element models subject to complex dynamic load histories.Keywords: adhesive, fatigue, interface, multiaxial stress
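The bookkeeping at the heart of such a model, rainflow counting of an equivalent stress history followed by linear Palmgren-Miner summation, can be sketched as follows. The S-N constants, the synthetic shear tractions, and the use of the open-source rainflow package are assumptions for illustration, not the authors' Abaqus plug-in.

```python
# Minimal sketch: rainflow counting of an equivalent stress history followed
# by linear Palmgren-Miner damage summation. S-N constants and the synthetic
# traction histories are illustrative assumptions, not the study's data.
import numpy as np
import rainflow  # pip install rainflow

def miner_damage(history, C=1e12, m=8.0):
    """D = sum(n_i / N_i), with N = C * amplitude**(-m) as an assumed S-N law."""
    damage = 0.0
    for stress_range, mean, count, i_start, i_end in rainflow.extract_cycles(history):
        amp = 0.5 * stress_range
        if amp > 0:
            damage += count / (C * amp ** (-m))
    return damage

# Equivalent shear traction magnitude from the two interface shear components.
t1 = np.random.default_rng(0).normal(0.0, 5.0, 10_000)   # MPa, synthetic
t2 = np.random.default_rng(1).normal(0.0, 5.0, 10_000)
tau_eq = np.hypot(t1, t2)

print(f"Shear damage index: {miner_damage(tau_eq):.3e}")  # failure when >= 1
```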
Procedia PDF Downloads 168
21 Development of Anti-Fouling Surface Features Bioinspired by the Patterned Micro-Textures of the Scophthalmus rhombus (Brill)
Authors: Ivan Maguire, Alan Barrett, Alex Forte, Sandra Kwiatkowska, Rohit Mishra, Jens Ducrèe, Fiona Regan
Abstract:
Biofouling is defined as the gradual accumulation of organisms on submerged surfaces. Biomimetics refers to the use and imitation of principles copied from nature and has found interest across many commercial disciplines. Among many biological objects and their functions, aquatic animals deserve special attention due to their antimicrobial capabilities resulting from chemical composition, surface topography, or other behavioural defences, which can be used as an inspiration for antifouling technology. Marine biofouling has detrimental effects on seagoing vessels, both commercial and leisure, as well as on oceanographic sensors, offshore drilling rigs, and aquaculture installations. Sensor optics, membranes, housings, and platforms can become fouled, leading to problems with sensor performance and data integrity. While many anti-fouling solutions are currently being investigated as a cost-cutting measure, biofouling settlement may also be prevented by creating a surface that does not satisfy the settlement conditions. Brill (Scophthalmus rhombus) is a small flatfish occurring in marine waters of the Mediterranean as well as off Norway and Iceland. It inhabits sandy and muddy coastal waters from 5 to 80 meters. Its skin colour changes depending on the environment, but it is generally brownish with light and dark freckles and a creamy underside. Brill is oval in shape, and its flesh is white. The aim of this study is to translate the unique micro-topography of the brill scale to design a marine-inspired biomimetic surface coating and test it against a typical fouling organism. Following an extensive SEM study of the scale topography of the brill fish (Scophthalmus rhombus) and the settlement behaviour of the diatom species Psammodictyon sp., two state-of-the-art antifouling surface solutions were designed and investigated: a brill fish scale bioinspired surface pattern platform (BFD), and a generic, uniformly arrayed circular micropillar platform (MPD), with offsets based on diatom species settlement behaviour. The BFD approach consists of different ~5 μm by ~90 μm brill-replica patterns, grown to a 5 μm height, in a linear array pattern. The MPD approach utilises hexagonally packed cylindrical pillars 10.6 μm in diameter, grown to a height of 5 μm, with a vertical offset of 15 μm and a horizontal offset of 26.6 μm. Photolithography was employed for microstructure growth, with a polydimethylsiloxane (PDMS) chip used as a testbed for diatom adhesion on both platforms. Settlement and adhesion tests were performed on this PDMS microfluidic chip by subjecting it to centrifugal force via an in-house developed 'spin-stand', which features a motor in combination with a high-resolution camera for real-time observation of diatom release from the PDMS material. Diatom adhesion strength can therefore be determined from the centrifugal force generated at varying rotational speeds. It is hoped that both the replica and bio-inspired solutions will give comparable anti-fouling results to these synthetic surfaces, whilst also helping determine whether anti-fouling solutions should predominantly pursue a fully bioreplica-based or a bioinspired, synthetically based design.Keywords: anti-fouling applications, bio-inspired microstructures, centrifugal microfluidics, surface modification
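As a back-of-envelope illustration of the spin-stand principle, the detachment load on an adhered diatom at release equals the centrifugal force F = mω²r; the diatom size, density, and radial position below are assumed values, not measurements from the study.

```python
# Toy sketch of the spin-stand detachment calculation. All physical values
# (diatom radius, density, chip radius) are illustrative assumptions.
import math

def centrifugal_force(rpm, radius_m, mass_kg):
    omega = 2.0 * math.pi * rpm / 60.0        # angular velocity, rad/s
    return mass_kg * omega ** 2 * radius_m    # Newtons

# Approximate a ~20 um diatom as a sphere of density 1100 kg/m^3.
r_cell = 10e-6
mass = 1100.0 * (4.0 / 3.0) * math.pi * r_cell ** 3

for rpm in (1000, 3000, 5000):
    f = centrifugal_force(rpm, radius_m=0.03, mass_kg=mass)
    print(f"{rpm:>5} rpm -> {f:.2e} N")  # force grows with the square of speed
```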
Procedia PDF Downloads 317
20 Optimizing AI Voice for Adolescent Health Education: Preferences and Trustworthiness Across Teens and Parent
Authors: Yu-Lin Chen, Kimberly Koester, Marissa Raymond-Flesh, Anika Thapar, Jay Thapar
Abstract:
Purpose: Effectively communicating adolescent health topics to teens and their parents is crucial. This study emphasizes critically evaluating the optimal use of artificial intelligence (AI) tools, which are increasingly prevalent in disseminating health information. By fostering a deeper understanding of AI voice preference in the context of health, the research aspires to have a ripple effect, enhancing the collective health literacy and decision-making capabilities of both teenagers and their parents. This study explores AI voices' potential within health learning modules for annual well-child visits. We aim to identify preferred voice characteristics and understand the factors influencing perceived trustworthiness, ultimately aiming to improve health literacy and decision-making in both demographics. Methods: A cross-sectional study assessed preferences and trust perceptions of AI voices in learning modules among teens (11-18) and their parents/guardians in Northern California. The study involved the development of four distinct learning modules covering various adolescent health-related topics, including general communication, sexual and reproductive health communication, parental monitoring, and well-child check-ups. Participants were asked to evaluate eight AI voices across the modules on six factors, namely intelligibility, naturalness, prosody, social impression, trustworthiness, and overall appeal, using Likert scales ranging from 1 to 10 (the higher, the better). They were also asked to select their preferred voice for each module. Descriptive statistics summarized participant demographics. Chi-square/t-tests explored differences in voice preferences between groups. Regression models identified factors impacting the perceived trustworthiness of the top-selected voice per module. Results: Data from 104 participants (teens = 63; adult guardians = 41) were included in the analysis. The mean age was 14.9 for teens (54% male) and 41.9 for parents/guardians (12% male). While voice quality ratings were similar across groups, preferences varied by topic. For instance, in general communication, teens leaned towards young female voices, while parents preferred mature female tones. Interestingly, this trend reversed for parental monitoring, with teens favoring mature male voices and parents opting for mature female ones. Both groups, however, converged on mature female voices for sexual and reproductive health topics. Beyond preferences, the study delved into the factors influencing perceived trustworthiness. Notably, social impression and sound appeal emerged as the most significant contributors across all modules, jointly explaining 71-75% of the variance in trustworthiness ratings. Conclusion: The study emphasizes the importance of catering AI voices to specific audiences and topics. Social impression and sound appeal emerged as critical factors influencing perceived trustworthiness across all modules. These findings highlight the need to tailor AI voices by age and by the specific health information being delivered. Ensuring AI voices resonate with both teens and their parents can foster their engagement and trust, ultimately leading to improved health literacy and decision-making for both groups. Limitations and future research: This study lays the groundwork for understanding AI voice preferences among teenagers and their parents in healthcare settings. However, limitations exist.
The sample represents a specific geographic location, and cultural variations might influence preferences. Additionally, the modules focused on topics related to well-child visits, and preferences might differ for more sensitive health topics. Future research should explore these limitations and investigate the long-term impact of AI voices on user engagement, health outcomes, and health behaviors.Keywords: artificial intelligence, trustworthiness, voice, adolescent
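A minimal sketch of the variance-explained analysis follows: trustworthiness is regressed on social impression and sound appeal, and the R² is read off. The ratings are synthetic; only the sample size matches the study.

```python
# Hedged sketch of the trustworthiness regression; data are simulated, not
# the study's survey responses.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 104                                    # matches the reported sample size
social = rng.uniform(1, 10, n)             # 1-10 scale ratings
appeal = rng.uniform(1, 10, n)
trust = 0.5 * social + 0.4 * appeal + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([social, appeal]))
fit = sm.OLS(trust, X).fit()
print(f"R^2 = {fit.rsquared:.2f}")         # study reports 0.71-0.75 jointly
```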
Procedia PDF Downloads 54
19 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering
Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi
Abstract:
In the recent decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA (cfDNA). Accordingly, identifying the tissue origin of these DNA fragments from the plasma can result in more accurate and faster disease diagnosis and precise treatment protocols. Open chromatin regions (OCRs) are important epigenetic features of DNA that reflect the cell types of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insights into tissue-specific and disease-specific regulatory mechanisms. There have been several studies in the area of cancer liquid biopsy that integrate distinct genomic and epigenomic features for early cancer detection along with tissue-of-origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, which leads to substantial cost and time. To overcome these limitations, the idea of predicting OCRs from whole genome sequencing (WGS) is of particular importance. In this regard, we proposed a computational approach to predict open chromatin regions, as an important epigenetic feature, from cell-free DNA whole genome sequencing data. To fulfill this objective, local sequencing depth is fed to our proposed algorithm, and the most probable open chromatin regions are predicted from the whole genome sequencing data. Our method integrates a signal processing approach with sequencing depth data and includes count normalization, Discrete Fourier Transform conversion, graph construction, graph cut optimization by linear programming, and clustering. To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions related to human blood samples of the ATAC-DB database. The percentage of overlap between predicted open chromatin regions and the experimentally validated regions obtained by ATAC-seq in ATAC-DB is greater than 67%, which indicates meaningful prediction. As is evident, OCRs are mostly located at the transcription start sites (TSS) of genes. In this regard, we compared the concordance between the predicted OCRs and the human gene TSS regions obtained from refTSS, finding accordance of around 52.04% with all genes and ~78% with the housekeeping genes. Accurately detecting open chromatin regions from plasma cell-free DNA-seq data is a very challenging computational problem due to the existence of several confounding factors, such as technical and biological variations. Although this approach is in its infancy, there has already been an attempt to apply it, which led to a tool named OCRDetector with some restrictions, like the need for high-depth cfDNA WGS data, prior information about the OCR distribution, and consideration of multiple features. However, we implemented a graph signal clustering based on a single depth feature in an unsupervised learning manner, which resulted in faster performance and decent accuracy.
Overall, we tried to investigate the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used along with other tools to investigate the genetic and epigenetic aspects of a single whole genome sequencing dataset for efficient liquid biopsy-related analysis.Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering
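The flavor of the depth-based pipeline can be sketched as follows; note that the paper's graph construction and linear-programming graph cut are replaced here by a simple k-means step purely for illustration, and the coverage signal is synthetic.

```python
# Simplified sketch of the depth-based OCR prediction idea: normalize binned
# cfDNA coverage, low-pass filter it with a DFT, and 2-cluster the bins into
# OCR+ / OCR-. k-means stands in for the paper's LP-based graph cut.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
depth = rng.poisson(30, 2048).astype(float)    # per-bin sequencing depth
depth[500:540] *= 0.6                          # OCRs show reduced coverage

norm = depth / depth.mean()                    # count normalization

spectrum = np.fft.rfft(norm)                   # DFT conversion
spectrum[100:] = 0                             # drop high-frequency noise
smooth = np.fft.irfft(spectrum, n=norm.size)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    smooth.reshape(-1, 1)
)
ocr_plus = labels == labels[520]               # cluster holding the coverage dip
print(f"Bins flagged OCR+: {ocr_plus.sum()}")
```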
Procedia PDF Downloads 148
18 Investigation on Pull-Out-Behavior and Interface Critical Parameters of Polymeric Fibers Embedded in Concrete and Their Correlation with Particular Fiber Characteristics
Authors: Michael Sigruener, Dirk Muscat, Nicole Struebbe
Abstract:
Fiber reinforcement is a state-of-the-art approach to enhancing the mechanical properties of plastics. For concrete and civil engineering, steel reinforcements are commonly used. Steel reinforcements show disadvantages in their chemical resistance and weight, whereas polymer fibers' major problems lie in fiber-matrix adhesion and mechanical properties. In spite of these facts, longevity, easy handling, and chemical resistance motivate researchers to develop a polymeric material for fiber-reinforced concrete. Adhesion and interfacial mechanisms in fiber-polymer composites have already been studied thoroughly. For polymer fibers used as concrete reinforcement, the bonding behavior still requires deeper investigation. Therefore, several differing polymers (e.g., polypropylene (PP), polyamide 6 (PA6) and polyetheretherketone (PEEK)) were spun into fibers via single screw extrusion and monoaxial stretching. The fibers were then embedded in a concrete matrix, and Single-Fiber Pull-Out Tests (SFPT) were conducted to investigate the bonding characteristics and microstructural interface of the composite. Differences in maximum pull-out force, displacement, and the slope of the linear part of the force-displacement curve, which depict the adhesion strength and the ductility of the interfacial bond, were studied. In SFPT, fiber debonding is an inhomogeneous process in which interfacial bonding and friction mechanisms add up to the resulting value. Therefore, correlations between polymer properties and pull-out mechanisms have to be emphasized. To investigate these correlations, all fibers were subjected to a series of analyses such as differential scanning calorimetry (DSC), contact angle measurement, surface roughness and hardness analysis, tensile testing, and scanning electron microscopy (SEM). Of each polymer, smooth and abraded fibers were tested, first to simulate the abrasion and damage caused by a concrete mixing process and secondly to estimate the influence of mechanical anchoring of rough surfaces. In general, abraded fibers showed a significant increase in maximum pull-out force due to better mechanical anchoring. Friction processes therefore play a major role in increasing the maximum pull-out force. The polymer hardness affects the tribological behavior, and polymers with high hardness lead to lower surface roughness, as verified by SEM and surface roughness measurements. This results in a decreased maximum pull-out force for hard polymers. High surface energy polymers show better interfacial bonding strength in general, which coincides with the conducted SFPT investigation. Polymers such as PEEK or PA6 show higher bonding strength for both smooth and roughened fibers, revealed through high pull-out forces and concrete particles bonded to the fiber surface, as pictured via SEM analysis. The surface energy divides into a dispersive and a polar part, with the slope correlating with the polar part. Only polar polymers increase their SFPT-curve slope, due to better wetting abilities, when a rough surface offers a larger bonding area. Hence, the maximum force and the bonding strength of an embedded fiber are a function of polarity, hardness, and consequently surface roughness. Other properties, such as crystallinity or tensile strength, do not affect the bonding behavior. Through the conducted analysis, it is now feasible to understand and resolve the different effects in pull-out behavior step by step, based on the polymer properties themselves.
This investigation developed a roadmap for engineering highly adhesive polymeric materials for the fiber reinforcement of concrete.Keywords: fiber-matrix interface, polymeric fibers, fiber reinforced concrete, single fiber pull-out test
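A small sketch of how the two SFPT descriptors discussed above, maximum pull-out force and the initial linear slope, might be extracted from a force-displacement record; the curve below is synthetic, not measured data.

```python
# Toy extraction of SFPT descriptors from a synthetic force-displacement curve.
import numpy as np

disp = np.linspace(0.0, 2.0, 200)                 # displacement, mm
force = np.where(disp < 0.5,
                 120.0 * disp,                    # linear (bonded) region
                 60.0 * np.exp(-(disp - 0.5)))    # frictional pull-out decay

f_max = force.max()
linear = disp < 0.4                               # assumed linear region
slope, intercept = np.polyfit(disp[linear], force[linear], 1)

print(f"Max pull-out force: {f_max:.1f} N")
print(f"Initial stiffness (adhesion proxy): {slope:.1f} N/mm")
```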
Procedia PDF Downloads 111
17 Radioprotective Effects of Super-Paramagnetic Iron Oxide Nanoparticles Used as Magnetic Resonance Imaging Contrast Agent for Magnetic Resonance Imaging-Guided Radiotherapy
Authors: Michael R. Shurin, Galina Shurin, Vladimir A. Kirichenko
Abstract:
Background. Visibility of hepatic malignancies is poor on the non-contrast imaging used for daily verification of liver targets prior to radiation therapy on MRI-guided linear accelerators (MR-Linac). Ferumoxytol® (Feraheme, AMAG Pharmaceuticals, Waltham, MA) is a superparamagnetic iron oxide nanoparticle (SPION) agent that is increasingly utilized off-label as a hepatic MRI contrast agent. This agent has the advantage of providing a functional assessment of the liver based upon its uptake by hepatic Kupffer cells proportionate to vascular perfusion, resulting in strong T1, T2 and T2* relaxation effects and enhanced contrast of malignant tumors, which lack Kupffer cells. The latter characteristic has recently been utilized for MRI-guided radiotherapy planning with precision targeting of liver malignancies. However, the potential radiotoxicity of SPIONs has never been addressed for their safe use as an MRI contrast agent during liver radiotherapy on MR-Linac. This study defines the radiomodulating properties of SPIONs in vitro on human monocyte and macrophage cell lines exposed to ⁶⁰Co gamma rays within the clinical radiotherapy dose range. Methods. Human monocyte and macrophage cell lines in culture were loaded with a clinically relevant concentration of Ferumoxytol (30 µg/ml) for 2 and 24 h and irradiated to 3 Gy, 5 Gy, and 10 Gy. Cells were washed and cultured for an additional 24 and 48 h prior to assessing their phenotypic activation by flow cytometry and their function, including viability (Annexin V/PI assay), proliferation (MTT assay), and cytokine expression (Luminex assay). Results. Our results revealed that SPIONs affected both human monocytes and macrophages in vitro. Specifically, iron oxide nanoparticles decreased radiation-induced apoptosis and prevented radiation-induced inhibition of human monocyte proliferative activity. Furthermore, Ferumoxytol protected monocytes from radiation-induced modulation of phenotype. For instance, while irradiation decreased polarization of monocytes to the CD11b+CD14+ and CD11bnegCD14neg phenotypes, Ferumoxytol prevented these effects. In macrophages, Ferumoxytol counteracted the ability of radiation to up-regulate cell polarization to the CD11b+CD14+ phenotype and prevented radiation-induced down-regulation of the expression of HLA-DR and CD86 molecules. Finally, Ferumoxytol uptake by human monocytes down-regulated expression of the pro-inflammatory chemokines MIP-1α (macrophage inflammatory protein 1α), MIP-1β (CCL4), and RANTES (CCL5). In macrophages, Ferumoxytol reversed the expression of IL-1RA, IL-8, IP-10 (CXCL10), and TNF-α, and up-regulated expression of MCP-1 (CCL2) and MIP-1α in irradiated macrophages. Conclusion. The SPION agent Ferumoxytol increases the resistance of human monocytes to radiation-induced cell death in vitro and supports an anti-inflammatory phenotype of human macrophages under radiation. The effect is radiation dose-dependent and depends on the duration of Feraheme uptake. This study also finds strong evidence that SPIONs reversed the effect of radiation on the expression of pro-inflammatory cytokines involved in the initiation and development of radiation-induced liver damage. Correlative translational work at our institution will directly assess the cyto-protective effects of Ferumoxytol on human Kupffer cells in vitro and via ex vivo analysis of explanted liver specimens in a subset of patients receiving Feraheme-enhanced MRI-guided radiotherapy to primary liver tumors as a bridge to liver transplant.Keywords: superparamagnetic iron oxide nanoparticles, radioprotection, magnetic resonance imaging, liver
Procedia PDF Downloads 71
16 The Impact of Supporting Productive Struggle in Learning Mathematics: A Quasi-Experimental Study in High School Algebra Classes
Authors: Sumeyra Karatas, Veysel Karatas, Reyhan Safak, Gamze Bulut-Ozturk, Ozgul Kartal
Abstract:
Productive struggle entails a student's cognitive exertion to comprehend mathematical concepts and uncover solutions that are not immediately apparent. The significance of productive struggle in learning mathematics is accentuated by influential educational theorists, who emphasize its necessity for learning mathematics with understanding. Consequently, supporting productive struggle in learning mathematics is recognized as a high-leverage and effective mathematics teaching practice. In this study, the investigation into the role of productive struggle in learning mathematics led to the development of a comprehensive rubric for productive struggle pedagogy through an exhaustive literature review. The rubric consists of eight primary criteria and 37 sub-criteria, providing a detailed description of the teacher actions and pedagogical choices that foster students' productive struggle. These criteria encompass various pedagogical aspects, including task design, tool implementation, allowing time for struggle, posing questions, scaffolding, handling mistakes, acknowledging efforts, and facilitating discussion/feedback. Utilizing this rubric, a team of researchers and teachers designed eight 90-minute lesson plans employing productive struggle pedagogy for a two-week unit on solving systems of linear equations. Simultaneously, another set of eight lesson plans on the same topic, featuring identical content and problems but employing a traditional lecture-and-practice model, was designed by the same team. The objective was to assess the impact of supporting productive struggle on students' mathematics learning, as defined by the strands of mathematical proficiency. This quasi-experimental study compares the control group, which received traditional lecture-and-practice instruction, with the treatment group, which experienced the productive struggle pedagogy. Sixty-six 10th- and 11th-grade students from two algebra classes, taught by the same teacher at a high school, underwent either the productive struggle pedagogy or the lecture-and-practice approach over eight 90-minute class sessions across two weeks. To measure students' learning, an assessment was created and validated by a team of researchers and teachers. It comprised seven open-response problems assessing the strands of mathematical proficiency on the topic: procedural and conceptual understanding, strategic competence, and adaptive reasoning. The test was administered at the beginning and end of the two weeks as pre- and post-tests. Students' solutions were scored using an established rubric, subjected to expert validation and an inter-rater reliability process involving multiple criteria for each problem based on the steps and procedures. An analysis of covariance (ANCOVA) was conducted to examine the differences between the control group, which received traditional pedagogy, and the treatment group, exposed to the productive struggle pedagogy, on the post-test scores while controlling for the pre-test. The results indicated a significant effect of treatment on post-test scores for procedural understanding (F(2, 63) = 10.47, p < .001), strategic competence (F(2, 63) = 9.92, p < .001), adaptive reasoning (F(2, 63) = 10.69, p < .001), and conceptual understanding (F(2, 63) = 10.06, p < .001), controlling for pre-test scores. This demonstrates the positive impact of supporting productive struggle in learning mathematics. In conclusion, the results revealed the significance of the role of productive struggle in learning mathematics.
The study further explored the practical application of productive struggle through the development of a comprehensive rubric describing the pedagogy of supporting productive struggle.Keywords: effective mathematics teaching practice, high school algebra, learning mathematics, productive struggle
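A minimal sketch of the reported ANCOVA, comparing post-test scores by group while controlling for pre-test scores; the group labels and scores below are simulated, and only the sample size matches the study.

```python
# Hedged sketch of the ANCOVA: post ~ pre + group, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 66                                        # matches the reported sample
group = np.repeat(["lecture", "struggle"], n // 2)
pre = rng.normal(50, 10, n)
post = pre + np.where(group == "struggle", 8, 2) + rng.normal(0, 5, n)

df = pd.DataFrame({"group": group, "pre": pre, "post": post})
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))        # F-test for the treatment effect
```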
Procedia PDF Downloads 51
15 Comparison of Titanium and Aluminum Functions as Spoilers for Dose Uniformity Achievement in Abutting Oblique Electron Fields: A Monte Carlo Simulation Study
Authors: Faranak Felfeliyan, Parvaneh Shokrani, Maryam Atarod
Abstract:
Introduction: The use of electron beams is widespread in radiotherapy. The main criterion in radiation therapy is to irradiate the tumor volume with the maximum prescribed dose and a minimum dose to the vital organs around it. Using abutting fields is common in radiotherapy. The main problem in using abutting fields is dose inhomogeneity in the junction region. Electron beam divergence and lateral scattering may lead to hot and cold spots in the junction region. One solution for this problem is the use of a spoiler to broaden the penumbra and produce a uniform dose in the junction region. The goal of this research was to compare the effects of titanium and aluminum as spoilers for dose uniformity achievement in the junction region of oblique electron fields with Monte Carlo simulation. Dose uniformity in the junction region depends on the density, scattering power, and thickness of the spoiler and the angle between the two fields. Materials and Methods: In this study, a Monte Carlo model of the Siemens Primus linear accelerator was simulated for a 5 MeV nominal energy electron beam using manufacturer-provided specifications. The BEAMnrc and EGSnrc user codes were used to simulate the treatment head in electron mode (simulation of the beam model). The resulting phase space file was used as a source for dose calculations for a 10×10 cm² field size at SSD = 100 cm in a 30×30×45 cm³ water phantom using the DOSXYZnrc user code (dose calculations). An automatic MP3-M water phantom tank, the MEPHYSTO mc² software platform, and a Semi-Flex Chamber-31010 with a sensitive volume of 0.125 cm³ (PTW, Freiburg, Germany) were used for dose distribution measurements; the electron field size was 10×10 cm² at SSD = 100 cm. Validation of the developed beam model was done by comparing the measured and calculated depth and lateral dose distributions (verification of the electron beam model). Simulation of the spoilers (using the SLAB component module), placed at the end of the electron applicator, was done using the previously validated phase space file for a 5 MeV nominal energy and 10×10 cm² field size (simulation of the spoiler). An in-house routine was developed to calculate the combined isodose curves resulting from the two simulated abutting fields (calculation of dose distribution in abutting electron fields). Results: Verification of the developed 5.9 MeV electron beam model was done by comparing the calculated and measured dose distributions. The maximum percentage difference between calculated and measured PDD was 1%, except for the build-up region, in which the difference was 2%. The difference between the calculated and measured profiles was 2% at the edges of the field and less than 1% in other regions. The effect of PMMA, aluminum, titanium, and chromium on dose uniformity achievement in abutting normal electron fields, with thicknesses equivalent to 5 mm of PMMA, was evaluated. Comparing the R90 and uniformity index of the different materials, aluminum was chosen as the optimum spoiler; titanium had the maximum surface dose. Thus, aluminum and titanium were chosen for dose uniformity achievement in oblique electron fields. Using the optimum beam spoiler, the junction dose decreased from 160% to 110% for 15 degrees, from 180% to 120% for 30 degrees, from 160% to 120% for 45 degrees, and from 180% to 100% for 60 degrees oblique abutting fields. Using the titanium spoiler, the junction dose decreased from 160% to 120% for 15 degrees, 180% to 120% for 30 degrees, 160% to 120% for 45 degrees, and 180% to 110% for 60 degrees.
In addition, the penumbra width at the surface for 15 degrees increased from 10 mm without a spoiler to 15.5 mm with the titanium spoiler; for 30 degrees, from 9 mm to 15 mm; for 45 degrees, from 4 mm to 6 mm; and for 60 degrees, from 5 mm to 8 mm. Conclusion: Using spoilers, the penumbra width at the surface increased, the size and depth of hot spots decreased, and dose homogeneity improved at the junction of the abutting electron fields. The dose at the junction region of abutting oblique fields was improved significantly by using a spoiler. The maximum dose at the junction region for 15°, 30°, 45° and 60° decreased by about 40, 60, 40, and 70 percentage points, respectively, for titanium and by about 50, 60, 40, and 80 percentage points for aluminum. Despite the significant decrease in maximum dose with the titanium spoiler, the dose in the junction region could not be reduced below 110%.Keywords: abutting fields, electron beam, radiation therapy, spoilers
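A toy illustration of the profile-combination step: each field edge is modeled as an error-function penumbra, the two abutting fields overlap slightly, and the junction maximum is read off the summed profile. The widths and the 4 mm overlap are invented for illustration, not values from the EGSnrc simulations.

```python
# Toy sketch: sum two abutting field profiles and report the junction maximum.
# A broader penumbra (spoiler) lowers the hot spot, mirroring the trend above.
import numpy as np
from scipy.special import erf

x = np.linspace(-30.0, 30.0, 1201)                  # mm across the junction

def edge(x, position, sigma, rising):
    """100% plateau rolling off across a penumbra of spread sigma."""
    s = (x - position) / (sigma * np.sqrt(2.0))
    return 50.0 * (1.0 + erf(s if rising else -s))

for sigma, label in ((2.0, "no spoiler"), (5.0, "with spoiler")):
    field_a = edge(x, +2.0, sigma, rising=False)    # covers x < +2 mm
    field_b = edge(x, -2.0, sigma, rising=True)     # covers x > -2 mm
    combined = field_a + field_b
    print(f"{label:>12}: junction max = {combined.max():.0f}%")
```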
Procedia PDF Downloads 175
14 A Generative Pretrained Transformer-Based Question-Answer Chatbot and Phantom-Less Quantitative Computed Tomography Bone Mineral Density Measurement System for Osteoporosis
Authors: Mian Huang, Chi Ma, Junyu Lin, William Lu
Abstract:
Introduction: Bone health has attracted more attention recently, and an intelligent question and answer (QA) chatbot for osteoporosis is helpful for science popularization. With Generative Pretrained Transformer (GPT) technology developing, we built an osteoporosis corpus dataset and then fine-tuned LLaMA, a famous open-source GPT foundation large language model (LLM), on our self-constructed osteoporosis corpus. Evaluated by clinical orthopedic experts, our fine-tuned model outperforms vanilla LLaMA on the osteoporosis QA task in Chinese. Three-dimensional quantitative computed tomography (QCT) measured bone mineral density (BMD) has in recent years been considered more accurate than DXA for BMD measurement. We developed an automatic phantom-less QCT (PL-QCT) that is more efficient for BMD measurement since it requires no external phantom for calibration. Combined with the LLM on osteoporosis, our PL-QCT provides efficient and accurate BMD measurement for our chatbot users. Material and Methods: We built an osteoporosis corpus containing about 30,000 Chinese publications whose titles are related to osteoporosis. The whole process is done automatically, including crawling publications in .pdf format, localizing text/figure/table regions by a layout segmentation algorithm, and recognizing text by an OCR algorithm. We trained our model by continued pre-training with Low-rank Adaptation (LoRA, rank = 10) technology to adapt the LLaMA-7B model to the osteoporosis domain; the basic principle is to mask the next word in the text and make the model predict that word. The loss function is defined as the cross-entropy between the predicted and ground-truth words. The experiment was implemented on a single NVIDIA A800 GPU for 15 days. Our automatic PL-QCT BMD measurement adopts an AI-associated region-of-interest (ROI) generation algorithm for localizing a vertebrae-parallel cylinder in cancellous bone. With no phantom for BMD calibration, we calculate ROI BMD from the CT-BMD of the person's own muscle and fat. Results & Discussion: Clinical orthopedic experts were invited to design 5 osteoporosis questions in Chinese, evaluating the performance of vanilla LLaMA and our fine-tuned model. Our model outperforms LLaMA on over 80% of these questions, understanding 'Expert Consensus on Osteoporosis', 'QCT for osteoporosis diagnosis', and 'Effect of age on osteoporosis'. Detailed results are shown in the appendix. Future work may train a larger LLM on the whole of orthopedics with more high-quality domain data, or a multi-modal GPT combining and understanding X-ray images and medical text for orthopedic computer-aided diagnosis. However, GPT models sometimes give unexpected outputs, such as repetitive text or seemingly normal but wrong answers (so-called 'hallucinations'). Even when GPT gives correct answers, they cannot be considered valid clinical diagnoses in place of those from clinical doctors. The PL-QCT BMD system provided by Bone's QCT (Bone's Technology (Shenzhen) Limited) achieves 0.1448 mg/cm² (spine) and 0.0002 mg/cm² (hip) mean absolute error (MAE) and linear correlation coefficients R² = 0.9970 (spine) and R² = 0.9991 (hip) (compared to QCT-Pro (Mindways)) on 155 patients in a three-center clinical trial in Guangzhou, China. Conclusion: This study builds a Chinese osteoporosis corpus and develops a fine-tuned, domain-adapted LLM as well as a PL-QCT BMD measurement system. Our fine-tuned GPT model shows better capability than the vanilla LLaMA model on most of the osteoporosis test questions.
Combined with our PL-QCT BMD system, we look forward to providing science popularization and early screening for potential osteoporotic patients.Keywords: GPT, phantom-less QCT, large language model, osteoporosis
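A hedged sketch of LoRA-based continued pre-training in the spirit described above, using the Hugging Face peft and transformers libraries; the checkpoint name, target modules, file path, and training arguments are assumptions, and only the LoRA rank of 10 follows the abstract.

```python
# Sketch of rank-10 LoRA continued pre-training with a causal LM objective
# (next-token cross-entropy). Paths and hyperparameters are illustrative.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "huggyllama/llama-7b"                      # assumed LLaMA-7B checkpoint
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token                     # LLaMA has no pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Rank-10 adapters, matching the LoRA rank reported in the abstract.
model = get_peft_model(model, LoraConfig(
    r=10, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
))

ds = load_dataset("text", data_files="osteoporosis_corpus.txt")["train"]
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=512),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments("osteo-lora", per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=ds,
    # mlm=False gives the causal LM objective: cross-entropy on the next token.
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```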
Procedia PDF Downloads 70
13 Sandstone Petrology of the Kolhan Basin, Eastern India: Implications for the Tectonic Evolution of a Half-Graben
Authors: Rohini Das, Subhasish Das, Smruti Rekha Sahoo, Shagupta Yesmin
Abstract:
The Paleoproterozoic Kolhan Group (Purana) ensemble constitutes the youngest lithostratigraphic 'outlier' in the Singhbhum Archaean craton. The Kolhan unconformably overlies both the Singhbhum granite and the Iron Ore Group (IOG). Representing a typical sandstone-shale (± carbonates) sequence, the Kolhan is characterized by the development of thin and discontinuous patches of basal conglomerates draped by sandstone beds. The IOG fault limits the western 'distal' margin of the Kolhan basin, showing evidence of passive subsidence subsequent to the initial rifting stage. The basin evolved as a half-graben under the influence of an extensional stress regime. The assumption of a tectonic setting for the NE-SW trending Kolhan basin possibly relates the basin opening to the E-W extensional stress system that prevailed during the development of the Newer Dolerite dykes. The Paleoproterozoic age of the Kolhan basin is based on the consideration of the conformable stress pattern responsible both for the basin opening and for the development of the conjugate fracture system along which the Newer Dolerite dykes intruded the Singhbhum Archaean craton. The Kolhan sandstones show a progressive change towards greater textural and mineralogical maturity in their upbuilding. The trend of variations in different mineralogical and textural attributes, however, exhibits inflections at different lithological levels. Petrological studies collectively indicate that the sandstones were dominantly derived from a weathered granitic crust under humid climatic conditions. Provenance-derived variations in sandstone compositions are therefore a key to unraveling regional tectonic histories. The basin axis controlled the progradation direction, which was likely driven by climatically induced sediment influx, a eustatic fall, or both. In the case of the incongruent shift, increased sediment supply permitted the rivers to cross the basinal deep. Temporal association of the Kolhan with tectonic structures in the belt indicates that syn-tectonic thrust uplift, not isostatic uplift or climate, caused the influx of quartz. The sedimentation pattern in the Kolhan reflects a change from a braided fluvial-ephemeral pattern to a fan-delta-lacustrine type. The channel geometries and the climate exerted a major control on the processes of sediment transfer. Repeated fault-controlled uplift of the source followed by subsidence and forced regression generated multiple sediment cyclicity that led to the fluvial-fan delta sedimentation pattern. Intermittent uplift of the faulted blocks exposed fresh bedrock to mechanical weathering that generated a large amount of detritus and resulted in forced regressions, repeatedly disrupting the cycles, which may reflect a stratigraphic response of connected rift basins at the early stage of extension. The marked variations in the thickness of the fan delta succession and the stacking pattern in different measured profiles reflect the overriding tectonic controls on fan delta evolution. The accumulated fault displacement created higher accommodation and thicker delta sequences. Intermittent uplift of fault blocks exposed fresh bedrock to mechanical weathering, generated a large amount of detritus, and resulted in forced closure of the land-locked basin, repeatedly disrupting the fining-upward pattern. The control of source rock lithology or climate was of secondary importance to tectonic effects.
Such a retrograding fan delta could be a stratigraphic response of connected rift basins at the early stage of extension.Keywords: Kolhan basin, petrology, sandstone, tectonics
Procedia PDF Downloads 501
12 Supply Side Readiness for Universal Health Coverage: Assessing the Availability and Depth of Essential Health Package in Rural, Remote and Conflict Prone District
Authors: Veenapani Rajeev Verma
Abstract:
Context: Assessing facility readiness is paramount, as it can indicate the capacity of facilities to provide essential care with resilience to health challenges. In the context of decentralization, estimation of supply-side readiness indices at the sub-national level is imperative for effective evidence-based policy but remains a colossal challenge due to the lack of dependable and representative data sources. Setting: District Poonch of Jammu and Kashmir was selected for this study. It is a remote, rural district with formidable topographical barriers and is identified as high priority by the government. It is also a fragile area, as it is bounded by the Line of Control with Pakistan and bears the brunt of ceasefire violations, military skirmishes, and sporadic militant attacks. Hilly terrain, a rudimentary or absent road network, and impoverishment characterize this area. Objectives: The objectives of the study are to a) evaluate the service readiness of health facilities and create a concise index subsuming a plethora of discrete indicators, and b) ascertain supply-side barriers in service provisioning via stakeholder analysis. The study also strives to expand the analytical domain by unravelling context- and area-specific intricacies associated with service delivery. Methodology: A mixed-method approach was employed to triangulate quantitative analysis with qualitative nuances. A facility survey encompassing 90 sub-centres, 44 primary health centres, 3 community health centres, and 1 district hospital was conducted to gauge general service availability and service-specific availability (depth of coverage). A compendium checklist was designed using the Indian Public Health Standards (IPHS) in the form of a standard core questionnaire, and a scorecard was generated for each facility. Information was collected across the dimensions of amenities, equipment, medicines, laboratory, and infection control protocols, as proposed in WHO's Service Availability and Readiness Assessment (SARA). A two-stage polychoric principal component analysis was employed to generate a parsimonious index by coalescing an array of tracer indicators. An OLS regression was used to determine the factors explaining the composite index generated from the PCA, as sketched below. Stakeholder analysis was conducted to discern qualitative information: a myriad of techniques, such as observations, key informant interviews, and focus group discussions using semi-structured questionnaires with both leaders and laggards, were administered for critical stakeholder analysis. Results: The general readiness score of health facilities was found to be 0.48. Results indicated the poorest readiness for sub-centres and PHCs (the first points of contact), with composite scores of 0.47 and 0.41, respectively. For primary care facilities, the principal component was characterized by basic newborn care as well as preparedness for delivery. For facilities providing secondary care, availability of equipment and surgical preparedness had the lowest scores (0.46 and 0.47). Presence of contractual staff, more than a 1-hour walk to the facility, location in zone A (most vulnerable to cross-border shelling), and inaccessibility due to snowfall and thick jungle were negatively associated with the readiness index. Nonchalant staff attitudes, unavailability of staff quarters, leakages, and constraints in the supply chain of drugs and consumables were other impediments identified. Conclusions/Policy Implications: It is pertinent to first strengthen primary care facilities in this setting.
Complex dimensions such as geographic barriers and user and provider behavior are not within the purview of this methodology.Keywords: effective coverage, principal component analysis, readiness index, universal health coverage
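A compact sketch of the index construction: PCA over tracer indicators with the first component rescaled to [0, 1] as the readiness score, followed by an OLS on facility covariates. Ordinary PCA stands in here for the two-stage polychoric PCA, and all data are synthetic.

```python
# Hedged sketch of the readiness-index pipeline; standard PCA substitutes for
# polychoric PCA, and the indicator and covariate data are simulated.
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n = 138                                   # 90 SCs + 44 PHCs + 3 CHCs + 1 DH
tracers = rng.integers(0, 2, (n, 25))     # availability of 25 tracer items

score = PCA(n_components=1).fit_transform(tracers).ravel()
readiness = (score - score.min()) / (score.max() - score.min())

# Example covariates: long walking distance and shelling-zone location.
X = sm.add_constant(np.column_stack([
    rng.integers(0, 2, n),                # >1 hr walk to facility
    rng.integers(0, 2, n),                # zone A (cross-border shelling)
]))
print(sm.OLS(readiness, X).fit().params)  # negative coefficients expected
```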
Procedia PDF Downloads 120
11 Analyzing Spatio-Structural Impediments in the Urban Trafficscape of Kolkata, India
Authors: Teesta Dey
Abstract:
Integrated transport development with proper traffic management leads to sustainable growth of any urban sphere. Appropriate mass transport planning is essential for populous cities in third-world countries like India. The exponential growth of motor vehicles on an unplanned road network is now a common feature of major urban centres in India. Kolkata, the third largest megacity in India, is no exception. The imbalance between demand and supply of unplanned transport services in this city is manifested in the high economic and environmental costs borne by the associated society. With the passage of time, the growth and extent of passenger demand for rapid urban transport has outstripped proper infrastructural planning and causes severe transport problems in the overall urban realm. Hence, Kolkata stands out in the world as one of the most crisis-ridden metropolises. The urban transport crisis of this city involves severe traffic congestion, disparity in mass transport services across changing peripheral land uses, route overlapping, lowered travel speeds, and faulty implementation of governmental plans, mostly induced by the rapid growth of private vehicles on limited road space with a huge carbon footprint. Therefore, the paper critically analyzes the extant road network pattern for improving regional connectivity and accessibility, assesses the degree of congestion, identifies the deviation from the demand-supply balance, and finally evaluates the emerging alternative transport options promoted by the government. For this purpose, the linear, nodal, and spatial transport network has been assessed based on selected indices, viz. Road Degree, Traffic Volume, Shimbel Index, Direct Bus Connectivity, Average Travel and Waiting Time Indices, Route Variety, Service Frequency, Bus Intensity, Concentration Analysis, Delay Rate, Quality of Traffic Transmission, Lane Length Duration Index, and Modal Mix. A total of 20 Traffic Intersection Points (TIPs) have been selected for the measurement of nodal accessibility; a worked example of the Shimbel Index computation is sketched below. Critical Congestion Zones (CCZs) are delineated based on one-km buffer zones around each TIP for congestion pattern analysis. A total of 480 bus routes are assessed to identify deficiencies in network planning. Apart from bus services, the combined effects of other mass and para-transit modes, comprising metro rail, auto, cab, and ferry services, are also analyzed. Based on a systematic random sampling method, a total of 1,500 daily urban passengers' perceptions were studied to check the ground realities. The outcome of this research identifies the spatial disparity among the 15 boroughs of the city, with severe route overlapping and congestion problems. Mass transport services based in North and Central Kolkata exceed the transport strength of south and peripheral Kolkata. Faulty infrastructural conditions, service inadequacy, economic loss, and workers' inefficiency are the most dominant reasons behind the defective mass transport network plan. Hence, there is an urgent need to revive the extant road-based mass transport system of this city through a holistic management approach: upgrading traffic infrastructure, designing new roads, fostering better cooperation among mass transport agencies, coordinating transport and land-use policies, substantially increasing funding, and raising general passengers' awareness.Keywords: carbon footprint, critical congestion zones, direct bus connectivity, integrated transport development
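As one concrete example of the nodal indices, the Shimbel Index of a node is the sum of its shortest-path distances to all other nodes, so lower values indicate better accessibility; the toy network below is illustrative, not Kolkata's actual TIP network.

```python
# Shimbel Index on a toy road network of Traffic Intersection Points (TIPs).
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([            # (TIP, TIP, road distance in km)
    ("TIP1", "TIP2", 2.0), ("TIP2", "TIP3", 1.5),
    ("TIP1", "TIP4", 4.0), ("TIP3", "TIP4", 2.5), ("TIP2", "TIP4", 3.0),
])

for node in G:
    dist = nx.shortest_path_length(G, source=node, weight="weight")
    shimbel = sum(dist.values())       # sum of distances to all other nodes
    print(f"{node}: Shimbel index = {shimbel:.1f} km")
```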
Procedia PDF Downloads 272
10 From Linear to Circular Model: An Artificial Intelligence-Powered Approach in Fosso Imperatore
Authors: Carlotta D’Alessandro, Giuseppe Ioppolo, Katarzyna Szopik-Depczyńska
Abstract:
The growing scarcity of resources and the mounting pressures of climate change, water pollution, and chemical contamination have prompted societies, governments, and businesses to seek ways to minimize their environmental impact. To combat climate change and foster sustainability, Industrial Symbiosis (IS) offers a powerful approach, facilitating the shift toward a circular economic model. IS has gained prominence in the European Union's policy framework as a crucial enabler of resource efficiency and circular economy practices. The essence of IS lies in the collaborative sharing of resources such as energy, material by-products, waste, and water, enabled by geographic proximity. It is exemplified by eco-industrial parks (EIPs), which are natural environments for boosting cooperation and resource sharing between businesses. EIPs are characterized by groups of businesses situated in proximity, connected by a network of both cooperative and competitive interactions. They represent a sustainable industrial model aimed at reducing resource use, waste, and environmental impact while fostering economic and social wellbeing. IS, combined with Artificial Intelligence (AI)-driven technologies, can further optimize resource sharing and efficiency within EIPs. This research, supported by the "CE_IPs" project, aims to analyze the potential of IS and AI in advancing circularity and sustainability at Fosso Imperatore. The Fosso Imperatore Industrial Park in Nocera Inferiore, Italy, specializes in agriculture and the industrial transformation of agricultural products, particularly tomatoes, tobacco, and textile fibers. This unique industrial cluster, centered around tomato cultivation and processing, also includes mechanical engineering enterprises and agricultural packaging firms. To stimulate the shift from a traditional to a circular economic model, an AI-powered Local Development Plan (LDP) is developed for Fosso Imperatore. It can leverage data analytics, predictive modeling, and stakeholder engagement to optimize resource utilization, reduce waste, and promote sustainable industrial practices. A comprehensive SWOT analysis of the AI-powered LDP revealed several key factors influencing its potential success and challenges. Among the notable strengths and opportunities arising from AI implementation are reduced processing times, fewer human errors, and increased revenue generation. Furthermore, predictive analytics minimize downtime, bolster productivity, and elevate quality while mitigating workplace hazards. However, the integration of AI also presents potential weaknesses and threats. Implementing and maintaining AI systems requires significant financial investment and can be costly. The widespread adoption of AI could lead to job losses in certain sectors. Lastly, AI systems are susceptible to cyberattacks, posing risks to data security and operational continuity. Moreover, an Analytic Hierarchy Process (AHP) analysis was employed to yield a prioritized ranking of the outlined AI-driven LDP practices based on stakeholder input, ensuring a more comprehensive and representative understanding of their relative significance for achieving sustainability in Fosso Imperatore Industrial Park. While this study provides valuable insights into the potential of an AI-powered LDP at Fosso Imperatore, it is important to note that the findings may not be directly applicable to all industrial parks, particularly those with different sizes, geographic locations, or industry compositions.
Further study is necessary to examine the generalizability of these results and to identify best practices for implementing AI-driven LDPs in diverse contexts.Keywords: artificial intelligence, climate change, Fosso Imperatore, industrial park, industrial symbiosis
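A small sketch of the AHP step: priority weights for hypothetical LDP practices are derived from a pairwise comparison matrix via its principal eigenvector, and a consistency ratio checks the coherence of the judgments; the judgments below are invented.

```python
# Hedged AHP sketch: priority vector and consistency ratio for three
# hypothetical LDP practices, using invented Saaty-scale judgments.
import numpy as np

# Pairwise comparison matrix among hypothetical practices A, B, C.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
cr = ci / 0.58                                # random index RI = 0.58 for n = 3
print(f"weights = {weights.round(3)}, CR = {cr:.3f}")  # CR < 0.1 is acceptable
```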
Procedia PDF Downloads 23