Search results for: optimal binary linear codes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7093

43 Application of Aerogeomagnetic and Ground Magnetic Surveys for Deep-Seated Kimberlite Pipes in Central India

Authors: Utkarsh Tripathi, Bikalp C. Mandal, Ravi Kumar Umrao, Sirsha Das, M. K. Bhowmic, Joyesh Bagchi, Hemant Kumar

Abstract:

The Central India Diamond Province (CIDP) is known for occurrences of primary and secondary diamond sources in the Vindhyan platformal sediments, which host several kimberlites, with one operating mine. The known kimberlites are Neo-Proterozoic in age and intrude into the Kaimur Group of rocks. Based on the interpretation of aero-geomagnetic data, three potential zones were demarcated in parts of the Chitrakoot and Banda districts, Uttar Pradesh, and the Satna district, Madhya Pradesh, India. To validate the aero-geomagnetic interpretation, a ground magnetic survey coupled with a gravity survey was conducted to confirm the anomalies and explore the possibility of pipes concealed beneath the Vindhyan sedimentary cover. Geologically, the area exposes milky-white to buff-colored arkosic and arenitic sandstone belonging to the Dhandraul Formation of the Kaimur Group; these rocks are undeformed and unmetamorphosed, providing an almost transparent medium for geophysical exploration. There is neither surface nor geophysical indication of intersecting linear structures, but the joint patterns show three principal joint sets along the NNE-SSW, ENE-WSW, and NW-SE directions, with vertical to sub-vertical dips. Aeromagnetic data interpretation brings out three promising zones with bipolar magnetic anomalies (69-602 nT) that represent potential kimberlite intrusives concealed at an approximate depth of 150-170 m. The ground magnetic survey reproduced the above-mentioned anomalies in zone I, congruent with the available aero-geophysical data. The magnetic anomaly map shows a total variation of 741 nT over the area. Two very high magnetic zones (H1 and H2) have been observed, with magnitudes of around 500 nT and 400 nT, respectively. Anomaly zone H1 is located in the west-central part of the area, south of Madulihai village, while anomaly zone H2 lies 2 km away to the north-east.
The Euler 3D solution map indicates the possible existence of an ultramafic body beneath both magnetic highs (H1 and H2), with a shallower depth solution for H2 and a deeper one for H1. In the reduced-to-pole (RTP) map, the bipolar anomaly disappears, indicating a single causative source for both anomalies, in all probability an ultramafic suite of rocks. The H1 magnetic high represents the main body, which persists to depths of ~500 m, as depicted in the upward-continuation derivative map. The Radially Averaged Power Spectrum (RAPS) shows a loose-sediment thickness of up to 25 m, with a cumulative depth of 154 m of sandstone overlying the ultramafic body. The average depth of the shallower body (H2) is 60.5-86 m, as estimated by the Peters half-slope method. The total-field (TF) magnetic anomaly overlaid with Bouguer anomaly (BA) contours also shows high BA values around the magnetic highs (H1 and H2), suggesting that the causative body has higher density and susceptibility than the surrounding host rock. The ground magnetic survey coupled with gravity confirms a potential target for further exploration, as the findings correlate with the presence of the known diamondiferous kimberlites in this region, which post-date the rocks of the Kaimur Group.
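
The depth range quoted for the shallower body can be illustrated with a short sketch of the Peters half-slope rule (our illustration, not the authors' computation; the half-slope separation and the index value are assumed for the demo):

```python
# Illustrative sketch of the Peters half-slope depth estimate. Depth to the top of
# a magnetic body is approximated as d = x / k, where x is the horizontal distance
# between the two points where the anomaly flank has half its maximum slope, and
# k is an empirical index (commonly 1.2-2.0; 1.6 is a typical choice).

def peters_half_slope_depth(x_half_slope_m: float, index: float = 1.6) -> float:
    """Estimate depth (m) to the top of a magnetic source."""
    return x_half_slope_m / index

# Hypothetical half-slope separation, chosen so the result falls in the
# 60.5-86 m range reported in the abstract:
depth = peters_half_slope_depth(120.0)  # 120 m separation -> 75 m depth
print(round(depth, 1))
```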

Keywords: Kaimur, kimberlite, Euler 3D solution, magnetic

Procedia PDF Downloads 44
42 The Healthcare Costs of BMI-Defined Obesity among Adults Who Have Undergone a Medical Procedure in Alberta, Canada

Authors: Sonia Butalia, Huong Luu, Alexis Guigue, Karen J. B. Martins, Khanh Vu, Scott W. Klarenbach

Abstract:

Obesity is associated with significant personal impacts on health and imposes a substantial economic burden on payers through increased healthcare use. A contemporary estimate of the healthcare costs associated with obesity at the population level is lacking; such evidence may provide further rationale for weight-management strategies. Methods: Adults who underwent a medical procedure between 2012 and 2019 in Alberta, Canada were categorized into the investigational cohort (had body mass index [BMI]-defined class 2 or 3 obesity based on a procedure-associated code) and the control cohort (did not have the BMI procedure-associated code); those who had bariatric surgery were excluded. Characteristics were presented and healthcare costs ($CDN) determined over a 1-year observation period (2019/2020). Logistic regression and a generalized linear model with log link and gamma distribution were used to assess total healthcare costs (comprising hospitalizations, emergency department visits, ambulatory care visits, physician visits, and outpatient prescription drugs). Potential confounders included age, sex, region of residence, and whether the medical procedure was performed within 6 months before the observation period in the partial adjustment, and additionally the type of procedure performed, socioeconomic status, Charlson Comorbidity Index (CCI), and seven obesity-related health conditions in the full adjustment. Cost ratios and estimated cost differences with 95% confidence intervals (CI) were reported; incremental cost differences within the adjusted models represent referent cases.
Results: The investigational cohort (n=220,190) was older (mean age: 53 [standard deviation (SD) ±17] vs 50 [SD ±17] years), had more females (71% vs 57%), lived in rural areas to a greater extent (20% vs 14%), experienced a higher overall burden of disease (CCI: 0.6 SD±1.3 vs 0.3 SD±0.9), and was less socioeconomically well-off (14%/14% in the most well-off material/social deprivation quintile vs 20%/19%) compared with controls (n=1,955,548). Unadjusted total healthcare costs were estimated to be 1.77 times (95% CI: 1.76, 1.78) higher in the investigational versus control cohort; each healthcare resource contributed to the higher cost ratio. After adjusting for potential confounders, the total healthcare cost ratio decreased but remained higher in the investigational versus control cohort (partial adjustment: 1.57 [95% CI: 1.57, 1.58]; full adjustment: 1.21 [95% CI: 1.20, 1.21]); each healthcare resource contributed to the higher cost ratio. Among urban-dwelling 50-year-old females who previously had non-operative procedures, no procedures performed within 6 months before the observation period, a social deprivation index score of 3, a CCI score of 0.32, and no history of select obesity-related health conditions, the predicted cost difference between those living with and without obesity was $386 (95% CI: $376, $397). Conclusions: If these findings hold for the Canadian population, one would expect an estimated additional $3.0 billion per year in healthcare costs nationally related to BMI-defined obesity (based on an adult obesity rate of 26% and an estimated annual incremental cost of $386 [21%]); incremental costs are higher when obesity-related health conditions are not adjusted for. Results of this study provide additional rationale for investment in interventions that are effective in preventing and treating obesity and its complications.
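
The national extrapolation can be reproduced with simple arithmetic (a back-of-envelope check; the Canadian adult population of roughly 30 million is our assumption, not a figure from the study):

```python
# Sketch of the reported $3.0 billion/year national extrapolation.
adult_population = 30_000_000          # assumed Canadian adult population
obesity_rate = 0.26                    # reported adult obesity rate
incremental_cost_per_person = 386      # reported annual incremental cost ($CDN)

national_cost = adult_population * obesity_rate * incremental_cost_per_person
print(f"${national_cost / 1e9:.1f} billion per year")  # ~$3.0 billion
```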

Keywords: administrative data, body mass index-defined obesity, healthcare cost, real world evidence

Procedia PDF Downloads 80
41 Learning from Dendrites: Improving the Point Neuron Model

Authors: Alexander Vandesompele, Joni Dambre

Abstract:

The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks, dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky-integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality observed in a previous study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is determined not only by the weight of the synapse but also by the activity of other synapses. This is a form of short-term plasticity in which synapses are potentiated or depressed by the preceding activity of neighbouring synapses, and a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning.
We use Spike-Timing-Dependent Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same pattern through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network, causing the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other is a LIF neuron with dendritic relationships. The five input neurons are then allowed to fire in a particular order. The membrane potentials are reset, and subsequently the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, its membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response differs between the two sequences. Hence, the dendritic mechanism improves the neuron's capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. Just as synaptic strengths can be learned with STDP to make a neuron more sensitive to its input, dendritic relationships can be learned with STDP to make the neuron more sensitive to spatiotemporal input sequences. Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
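
The five-input demonstration above can be sketched in a few lines (a minimal stand-in, not the authors' Bindsnet code; the relation matrix R, time constants, and weights are illustrative assumptions):

```python
import numpy as np

# A leaky integrator driven by 5 inputs firing one per step. In the "dendritic"
# variant, each spike's impact is scaled by the recent activity of the other
# synapses through a pairwise relation matrix R.

def run_lif(order, R=None, w=1.0, tau=20.0, trace_tau=5.0):
    """Return the peak membrane potential for a given input firing order."""
    v, peak = 0.0, 0.0
    trace = np.zeros(5)                       # per-synapse activity traces
    for pre in order:
        gain = 1.0
        if R is not None:                     # dendritic modulation by neighbours
            gain += R[pre] @ trace
        v += w * gain
        peak = max(peak, v)
        trace *= np.exp(-1.0 / trace_tau)     # decay traces
        trace[pre] += 1.0
        v *= np.exp(-1.0 / tau)               # membrane leak
    return peak

rng = np.random.default_rng(0)
R = rng.uniform(-0.5, 0.5, size=(5, 5))       # random synaptic relations
np.fill_diagonal(R, 0.0)

fwd, rev = [0, 1, 2, 3, 4], [4, 3, 2, 1, 0]
plain_diff = abs(run_lif(fwd) - run_lif(rev))        # ~0: linear summation
dendritic_diff = abs(run_lif(fwd, R) - run_lif(rev, R))  # nonzero: order-sensitive
print(plain_diff, dendritic_diff)
```

As in the abstract's demonstration, the plain neuron responds identically to both orders, while the dendritic variant distinguishes them even with random relations.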

Keywords: dendritic computation, spiking neural networks, point neuron model

Procedia PDF Downloads 101
40 The Semiotics of Soft Power: An Examination of the South Korean Entertainment Industry

Authors: Enya Trenholm-Jensen

Abstract:

This paper employs various semiotic methodologies to examine the mechanism of soft power. Soft power refers to a country's global reputation and its ability to leverage that reputation to achieve certain aims. South Korea has invested heavily in its soft-power strategy for a multitude of predominantly historical and geopolitical reasons. On account of this investment and the global prominence of its strategy, South Korea was considered the optimal candidate for the aims of this investigation. Having isolated the entertainment industry as one of the most heavily funded segments of the South Korean soft-power strategy, the analysis restricted itself to this sector. Within this industry, two entertainment products were selected as case studies. The case studies were chosen based on commercial success according to metrics such as streams, purchases, and subsequent revenue, a criterion deemed to be the most objective and verifiable indicator of the products' general appeal. The entertainment products that met the chosen criterion were Netflix's "Squid Game" and BTS' hit single "Butter". The methodologies employed were chosen according to the medium of the entertainment products. For "Squid Game," an aesthetic analysis was carried out to investigate how multi-layered meanings were mobilized in a show popularized by its visual grammar. To examine "Butter", both music semiology and linguistic analysis were employed. The music section featured an analysis underpinned by denotative and connotative music-semiotic theories borrowing from the scholars Theo van Leeuwen and Martin Irvine. The linguistic analysis focused on stance and semantic fields according to scholarship by George Yule and John W. DuBois. The aesthetic analysis of the first case study revealed intertextual references to famous artworks, which served to augment the emotional provocation of the Squid Game narrative.
For the second case study, the findings exposed a set of musical meaning units arranged in a patchwork of familiar and futuristic elements, achieving a song that exists on the boundary between old and new. The linguistic analysis of the song's lyrics found a deceptively innocuous surface-level meaning that bore implications for authority, intimacy, and commercial success. Whether through visual metaphor, embedded auditory associations, or linguistic subtext, the collective findings of the three analyses exhibited a desire to conjure a form of positive arousal in the spectator. In the synthesis section, this process is likened to branding: through an exploration of branding, the entertainment products can be understood as cogs in a larger operation aiming to create positive associations with Korea as a country and a concept. Limitations in the form of a timeframe-biased perspective are addressed, and directions for future research are suggested.

Keywords: BTS, cognitive semiotics, entertainment, soft power, South Korea, Squid Game

Procedia PDF Downloads 124
39 Analytical Model of Locomotion of a Thin-Film Piezoelectric 2D Soft Robot Including Gravity Effects

Authors: Zhiwu Zheng, Prakhar Kumar, Sigurd Wagner, Naveen Verma, James C. Sturm

Abstract:

Soft robots have drawn great interest recently due to the rich range of shapes and motions they can take on to address new applications, compared to traditional rigid robots. Large-area electronics (LAE) provides a unique platform for creating soft robots by leveraging thin-film technology to integrate a large number of actuators, sensors, and control circuits on flexible sheets. However, the rich shapes and motions possible, especially when interacting with complex environments, pose significant challenges to forming the well-generalized and robust models necessary for robot design and control. In this work, we describe an analytical model, based on Euler-Bernoulli beam theory, for predicting the shape and locomotion of a flexible (steel-foil-based) piezoelectric-actuated 2D robot. Nominally (unpowered), the robot lies flat on the ground; when powered, its shape is controlled by an array of piezoelectric thin-film actuators. Key features of the model are its ability to incorporate the significant effects of gravity on the shape and to precisely predict the spatial distribution of friction against the contacting surfaces, necessary for determining inchworm-type motion. We verified the model by developing a distributed discrete-element representation of a continuous piezoelectric actuator and by comparing the analytical predictions to discrete-element robot simulations using PyBullet. Without gravity, predicting the shape of a sheet with a linear array of piezoelectric actuators at arbitrary voltages is straightforward. However, gravity significantly distorts the shape of the sheet, causing some segments to flatten against the ground. Our work includes the following contributions: (i) A self-consistent approach was developed to determine exactly which parts of the soft robot are lifted off the ground, and the exact shape of these sections, for an arbitrary array of piezoelectric voltages and configurations.
(ii) Inchworm-type motion relies on controlling the relative friction with the ground surface in different sections of the robot. By adding torque balance to our model and analyzing shear forces, the model can determine the exact spatial distribution of the vertical force that the ground exerts on the soft robot, and from this the spatial distribution of friction forces between ground and robot. (iii) By combining this spatial friction distribution with the shape of the soft robot, as a function of time as the piezoelectric actuator voltages are changed, the inchworm-type locomotion of the robot can be determined. As a practical example, we calculated the performance of a 5-actuator system on a 50-µm-thick steel foil. Piezoelectric properties of commercially available thin-film piezoelectric actuators were assumed. The model predicted inchworm motion of up to 200 µm per step. For independent verification, we also modelled the system using PyBullet, a discrete-element robot simulator. To model a continuous thin-film piezoelectric actuator, we broke each actuator into multiple segments, each consisting of two rigid arms with appropriate mass connected by a 'motor' whose torque was set by the applied actuator voltage. Excellent agreement between our analytical model and the discrete-element simulator was shown for both the full deformation shape and the motion of the robot.
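
The gravity-free shape calculation described above can be sketched by integrating curvature along a discretized sheet (a toy simplification, not the authors' model; the curvature-per-volt gain, voltages, and segment length are assumed, and gravity is ignored):

```python
import numpy as np

# Under Euler-Bernoulli assumptions, each actuated segment contributes a curvature
# proportional to its applied voltage; integrating curvature -> slope -> height
# gives the deformed profile of a sheet that is flat when unpowered.

def sheet_profile(curvatures, seg_len=0.01):
    """curvatures: per-segment curvature (1/m); returns x, z coordinates (m)."""
    theta = np.concatenate([[0.0], np.cumsum(np.asarray(curvatures) * seg_len)])
    x = np.concatenate([[0.0], np.cumsum(seg_len * np.cos(theta[1:]))])
    z = np.concatenate([[0.0], np.cumsum(seg_len * np.sin(theta[1:]))])
    return x, z

# Hypothetical 50-segment sheet with five actuator regions, two of them powered:
k = 2.0                                    # curvature per volt (assumed gain)
voltages = np.repeat([0, 3, 0, 3, 0], 10)  # volts per segment
x, z = sheet_profile(k * voltages)
print(round(z[-1], 4))  # tip height of the deformed sheet (m)
```

Handling gravity then amounts to deciding, self-consistently, which of these segments are pressed flat against the ground, which is the harder problem the abstract addresses.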

Keywords: analytical modeling, piezoelectric actuators, soft robot locomotion, thin-film technology

Procedia PDF Downloads 144
38 Phenotype and Psychometric Characterization of Phelan-Mcdermid Syndrome Patients

Authors: C. Bel, J. Nevado, F. Ciceri, M. Ropacki, T. Hoffmann, P. Lapunzina, C. Buesa

Abstract:

Background: Phelan-McDermid syndrome (PMS) is a genetic disorder caused by deletion of the terminal region of chromosome 22 or mutation of the SHANK3 gene. Shank3 disruption in mice leads to dysfunction of synaptic transmission, which can be restored by epigenetic regulation with Lysine-Specific Demethylase 1 (LSD1) inhibitors. PMS presents with a variable degree of intellectual disability, delay or absence of speech, autism spectrum disorder symptoms, low muscle tone, motor delays, and epilepsy. Vafidemstat is an LSD1 inhibitor in Phase II clinical development with a well-established and favorable safety profile, and with data supporting restoration of memory and cognition deficits as well as reduction of agitation and aggression in several animal models and clinical studies. Therefore, vafidemstat has the potential to become a first-in-class precision-medicine approach to treating PMS patients. Aims: The goal of this research is to perform an observational trial to psychometrically characterize individuals carrying deletions in SHANK3 and build a foundation for subsequent precision-psychiatry clinical trials with vafidemstat. Methodology: This study characterizes the clinical profile of 20 to 40 subjects, aged >16 years, with a genotypically confirmed PMS diagnosis. Subjects complete a battery of neuropsychological scales, including the Repetitive Behavior Questionnaire (RBQ), the Vineland Adaptive Behavior Scales, the Autism Diagnostic Observation Schedule (ADOS-2), the Battelle Developmental Inventory, and the Behavior Problems Inventory (BPI). Results: By March 2021, 19 patients had been enrolled. Unsupervised hierarchical clustering of the results obtained so far identifies 3 groups of patients, characterized by different profiles of cognitive and behavioral scores. The first cluster is characterized by low Battelle age, high ADOS, and low Vineland, RBQ, and BPI scores.
Low Vineland, RBQ, and BPI scores are also detected in the second cluster, which in contrast has a high Battelle age and low ADOS scores. The third cluster is intermediate for the Battelle, Vineland, and ADOS scores while displaying the highest levels of aggression (high BPI) and repetitive behaviors (high RBQ). In line with the observation that female patients are generally affected by milder forms of autistic symptoms, no male patients are present in the second cluster. Dividing the results by gender highlights that male patients in the third cluster are characterized by a higher frequency of aggression, whereas female patients from the same cluster display a tendency toward higher repetitive behavior. Finally, statistically significant differences in deletion sizes are detected when comparing the three clusters (also after correcting for gender), and deletion size appears to be positively correlated with ADOS and negatively correlated with Vineland A and C scores. No correlation is detected between deletion size and the BPI and RBQ scores. Conclusions: Precision medicine may open a new way to understand and treat Central Nervous System disorders. Epigenetic dysregulation has been proposed as an important mechanism in the pathogenesis of schizophrenia and autism. Vafidemstat holds exciting therapeutic potential in PMS, and this study will provide data regarding the optimal endpoints for a future clinical study to explore vafidemstat's ability to treat SHANK3-associated psychiatric disorders.
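
The idea of grouping patients by their profile of scale scores can be sketched as follows (entirely synthetic data, not patient records; the study used hierarchical clustering, and a simple k-means stands in here only to show the unsupervised-grouping idea):

```python
import numpy as np

# Synthetic standardized profiles, columns ~ (Battelle age, ADOS, Vineland, RBQ, BPI),
# built around three cluster archetypes loosely modeled on the abstract's description.
rng = np.random.default_rng(1)
centers = np.array([[-1.0,  1.0, -1.0, -1.0, -1.0],   # low Battelle, high ADOS
                    [ 1.0, -1.0, -1.0, -1.0, -1.0],   # high Battelle, low ADOS
                    [ 0.0,  0.0,  0.0,  1.5,  1.5]])  # high RBQ/BPI
X = np.vstack([c + 0.2 * rng.standard_normal((6, 5)) for c in centers])

def kmeans(X, k, iters=50):
    """Tiny k-means: assign to nearest centroid, recompute, repeat."""
    cents = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - cents) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):           # guard against empty clusters
                cents[j] = X[labels == j].mean(0)
    return labels

labels = kmeans(X, 3)
print(len(set(labels.tolist())))  # number of non-empty groups found
```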

Keywords: autism, epigenetics, LSD1, personalized medicine

Procedia PDF Downloads 141
37 Planning Railway Assets Renewal with a Multiobjective Approach

Authors: João Coutinho-Rodrigues, Nuno Sousa, Luís Alçada-Almeida

Abstract:

Transportation infrastructure systems are fundamental to modern society and the economy. However, they need modernizing, maintaining, and reinforcing interventions, which require large investments. In many countries, accumulated intervention backlogs arise from aging and intense use, magnified by past financial constraints. The decision problem of managing the renewal of large backlogs is common to several types of important transportation infrastructure (e.g., railways, roads) and requires considering financial aspects as well as operational constraints under a multidimensional framework. The present research introduces a multiobjective linear programming model for managing railway infrastructure asset renewal. The model minimizes three objectives: (i) the yearly investment peak, by evenly spreading investment throughout multiple years; (ii) the total cost, which includes extra maintenance costs incurred from renewal backlogs; (iii) priority delays related to work-start postponements on the higher-priority railway sections. Operational constraints ensure that passenger and freight services are not excessively delayed by having railway line sections under intervention. Achieving a balanced annual investment plan, without compromising the total financial effort or excessively postponing the execution of the priority works, was the motivation for the research presented here. The methodology, inspired by a real case study and tested with real data, reflects the practice of an infrastructure management company and is generalizable to different types of infrastructure (e.g., railways, highways). It was conceived for treating renewal interventions in infrastructure assets, which in a railway network may be rails, ballast, sleepers, etc.; while a section is under intervention, trains must run at reduced speed, causing delays in services.
The model cannot, therefore, allow an accumulation of works on the same line, which would cause excessively large delays. Similarly, the lines do not all have the same socio-economic importance or service intensity, making it necessary to prioritize the sections to be renewed. The model takes these issues into account, and its output is an optimized works schedule for the renewal project, translatable into Gantt charts. The infrastructure management company provided all the data for the first test case study and validated the parameterization. This case consists of several sections to be renewed over 5 years, belonging to 17 lines. A large instance was also generated, reflecting a problem of a size similar to the USA railway network (considered the largest in the world), so considerably larger problems are not expected to appear in real life; an average backlog of 25 years and a project horizon of ten years were considered. Despite the very large increase in the number of decision variables (200 times as many), the computational time did not increase very significantly. It is thus expected that just about any real-life problem can be treated on a modern computer, regardless of size. The trade-off analysis shows that if the decision maker allows some increase in the maximum yearly investment (i.e., degradation of objective i), solutions improve considerably in the remaining two objectives.
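
The "even out the yearly investment peak" objective can be sketched as a small min-max linear program (a toy instance with hypothetical project costs, not the authors' full model, which also covers total cost, priority delays, and service constraints):

```python
import numpy as np
from scipy.optimize import linprog

# Fractions x[p, y] of each renewal project p executed in year y, with the peak
# yearly spend T minimized (LP relaxation; the real problem would add more terms).
costs = np.array([10.0, 20.0, 30.0, 40.0])   # hypothetical project costs
P, Y = len(costs), 4                          # 4 projects over a 4-year horizon

n = P * Y + 1                                 # decision vars: x (flattened) + T
c = np.zeros(n); c[-1] = 1.0                  # objective: minimize T

# Each project must be fully scheduled: sum_y x[p, y] = 1
A_eq = np.zeros((P, n))
for p in range(P):
    A_eq[p, p * Y:(p + 1) * Y] = 1.0
b_eq = np.ones(P)

# Yearly spend cannot exceed the peak: sum_p cost_p * x[p, y] - T <= 0
A_ub = np.zeros((Y, n))
for y in range(Y):
    for p in range(P):
        A_ub[y, p * Y + y] = costs[p]
    A_ub[y, -1] = -1.0
b_ub = np.zeros(Y)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n)
print(round(res.fun, 2))  # optimal peak: total 100 spread over 4 years -> 25.0
```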

Keywords: transport infrastructure, asset renewal, railway maintenance, multiobjective modeling

Procedia PDF Downloads 124
36 Multiple Freezing/Thawing Cycles Influence the Internal Structure and Mechanical Properties of the Achilles Tendon

Authors: Martyna Ekiert, Natalia Grzechnik, Joanna Karbowniczek, Urszula Stachewicz, Andrzej Mlyniec

Abstract:

Tendon grafting is a common procedure performed to treat tendon rupture. Before the surgical procedure, tissues intended for grafts (e.g., the Achilles tendon) are stored at ultra-low temperatures for long periods and may be subjected to unfavorable conditions, such as repetitive freezing (F) and thawing (T). Such storage protocols may strongly influence the graft's mechanical properties, decrease its functionality, and thus increase the risk of complications during the transplant procedure. Literature reports on the influence of multiple F/T cycles on the internal structure and mechanical properties of tendons remain inconclusive, simultaneously confirming and denying the negative influence of multiple F/T. Inconsistent research methodology and the lack of a clear limit on F/T cycles beyond which tissue is disqualified for surgical graft purposes encouraged us to investigate the issue by means of biomechanical tensile tests supported by Scanning Electron Microscope (SEM) imaging. The study was conducted on male bovine Achilles tendons derived from a local abattoir. Fresh tendons were cleaned of excess membranes and then sectioned to obtain fascicle bundles. Collected samples were randomly assigned to 6 groups subjected to 1, 2, 4, 6, 8, and 12 freezing/thawing (F/T) cycles, respectively. Each F/T cycle included deep freezing at -80°C, followed by thawing at room temperature. After the final thawing, thin slices of the side part of the samples subjected to 1, 4, 8, and 12 F/T cycles were collected for SEM imaging. The width and thickness of all samples were then measured to calculate the cross-sectional area. Biomechanical tests were performed on a universal testing machine (model Instron 8872, INSTRON®, Norwood, Massachusetts, USA) using a load cell with a maximum capacity of 250 kN, under standard atmospheric conditions.
Both ends of each fascicle bundle were manually clamped in grasping clamps using abrasive paper and wet cellulose wadding swabs to prevent tissue slipping during clamping and testing. Samples were subjected to a testing procedure including pre-loading, pre-cycling, loading, holding, and unloading steps to obtain stress-strain curves representing tendon stretching and relaxation. The stiffness of the Achilles tendon fascicle bundle samples was evaluated in terms of the modulus of elasticity (Young's modulus), calculated from the slope of the linear region of the stress-strain curves. SEM imaging was preceded by chemical sample preparation, including 24-hour fixation in 3% glutaraldehyde buffered with 0.1 M phosphate buffer, washing with 0.1 M phosphate buffer solution, and dehydration in a graded ethanol series. SEM images (Merlin Gemini II microscope, ZEISS®) were taken at 30,000× magnification, which allowed the diameter of collagen fibrils to be measured. The results show a decrease in the fascicle bundles' Young's modulus as well as a decrease in the diameter of collagen fibrils, confirming the negative influence of multiple F/T cycles on the mechanical properties of tendon tissue.
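
Extracting a Young's modulus from the linear region of a stress-strain curve can be sketched as a slope fit (synthetic data; the 600 MPa modulus and the 1-3% strain window are illustrative assumptions, not measured tendon values):

```python
import numpy as np

# Fit the slope of the linear region of a synthetic stress-strain curve.
E_true = 600.0                                # MPa (assumed for the demo)
strain = np.linspace(0.0, 0.04, 100)          # dimensionless
stress = E_true * strain                      # MPa, ideal linear response
stress += 0.05 * np.sin(200 * strain)         # small deterministic "noise"

# Restrict the fit to the assumed linear region (1-3% strain):
mask = (strain >= 0.01) & (strain <= 0.03)
E_fit = np.polyfit(strain[mask], stress[mask], 1)[0]
print(round(E_fit, 1))  # slope ~ Young's modulus in MPa
```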

Keywords: biomechanics, collagen, fascicle bundles, soft tissue

Procedia PDF Downloads 102
35 Impact of Increased Radiology Staffing on After-Hours Radiology Reporting Efficiency and Quality

Authors: Peregrine James Dalziel, Philip Vu Tran

Abstract:

Objective / Introduction: Demand for radiology services from Emergency Departments (ED) continues to increase, with greater demands placed on radiology staff providing reports for the management of complex cases. Queuing theory indicates that wide variability in process time, combined with the random nature of request arrivals, increases the probability of significant queues. This can delay the time-to-availability of radiology reports (TTA-RR) and potentially impair ED patient flow. In addition, the greater "cognitive workload" of higher volume may reduce productivity and increase errors. We sought to quantify the potential ED flow improvements obtainable from increased radiology providers serving 3 public hospitals in Melbourne, Australia, and to assess the potential productivity gains, quality improvement, and cost-effectiveness of increased labor inputs. Methods & Materials: The Western Health Medical Imaging Department moved from single-resident coverage on weekend days (8:30 am-10:30 pm) to a limited period of 2-resident coverage (1 pm-6 pm) on both weekend days. The TTA-RR for weekend CT scans was calculated from the PACS database for the 8-month period symmetrically around the date of the staffing change. A multivariate linear regression model was developed to isolate the improvement in TTA-RR between the two 4-month periods. Daily and hourly scan volumes at the time of each CT scan were calculated to assess the impact of varying department workload. To assess any improvement in report quality/errors, a random sample of 200 studies was assessed to compare the average number of clinically significant over-read addendums to reports between the 2 periods. Cost-effectiveness was assessed by comparing the marginal cost of additional staffing against a conservative estimate of the economic benefit of improved ED patient throughput, using the Australian national insurance rebate for private ED attendance as a revenue proxy.
Results: The primary resident on call and the type of scan accounted for most of the explained variability in time to report availability (R2=0.29). Increasing daily and hourly volumes were associated with increased TTA-RR (1.5 min (p<0.01) and 4.8 min (p<0.01) per additional scan ordered within each time frame, respectively). Reports were available 25.9 minutes sooner on average in the 4 months post-implementation of double coverage (p<0.01), with an additional 23.6-minute improvement when 2 residents were on-site concomitantly (p<0.01). The aggregate average improvement in TTA-RR was 24.8 hours per weekend day. This represents increased decision-making time available to ED physicians and a potential improvement in ED bed utilisation. 5% of reports from the intervention period contained clinically significant addendums vs 7% in the single-resident period, but this was not statistically significant (p=0.7). The marginal cost was less than the anticipated economic benefit, assuming a 50% capture of the improved TTA-RR in patient disposition and using the lowest available national insurance rebate as a proxy for economic benefit. Conclusion: TTA-RR improved significantly during the period of increased staff availability, both during the specific period of increased staffing and throughout the day. Increased labor utilisation is cost-effective given the potential productivity improvement for ED cases requiring CT imaging.
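
The multivariate linear model described above can be sketched with ordinary least squares on synthetic data (not the hospital dataset; the predictor distributions, intercept, and noise level are our assumptions, with the effect sizes taken from the abstract):

```python
import numpy as np

# Regress TTA-RR on daily volume, hourly volume, and a double-coverage indicator.
rng = np.random.default_rng(42)
n = 500
daily = rng.integers(20, 80, n).astype(float)     # scans ordered that day
hourly = rng.integers(1, 10, n).astype(float)     # scans ordered that hour
double = rng.integers(0, 2, n).astype(float)      # 1 = two residents on site

# Simulate TTA-RR (minutes) using the reported effects: +1.5 min per daily scan,
# +4.8 min per hourly scan, -23.6 min with concomitant double coverage.
tta = 30 + 1.5 * daily + 4.8 * hourly - 23.6 * double + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), daily, hourly, double])
beta, *_ = np.linalg.lstsq(X, tta, rcond=None)
print(np.round(beta, 1))  # intercept, daily, hourly, double-coverage coefficients
```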

Keywords: workflow, quality, administration, CT, staffing

Procedia PDF Downloads 89
34 Optimal Pressure Control and Burst Detection for Sustainable Water Management

Authors: G. K. Viswanadh, B. Rajasekhar, G. Venkata Ramana

Abstract:

Water distribution networks play a vital role in ensuring a reliable supply of clean water to urban areas. However, they face several challenges, including pressure control, pump speed optimization, and burst event detection. This paper combines insights from two studies to address these critical issues in water distribution networks, focusing on the specific context of Kapra Municipality, India. The first part of this research concentrates on optimizing pressure control and pump speed in complex water distribution networks. It utilizes the EPANET-MATLAB Toolkit to integrate EPANET functionalities into the MATLAB environment, offering a comprehensive approach to network analysis. By optimizing pressure reducing valves (PRVs) and variable speed pumps (VSPs), this study achieves remarkable results. In the benchmark water distribution system (WDS), the proposed PRV optimization algorithm reduces average leakage by 20.64%, surpassing the previous achievement of 16.07%. When applied to the South-Central and East zone WDS of Kapra Municipality, it identifies PRV locations that were previously missed by existing algorithms, resulting in average leakage reductions of 22.04% and 10.47%, respectively. These reductions translate to significant daily water savings, enhancing water supply reliability and reducing energy consumption. The second part of this research addresses the pressing issue of burst event detection and localization within the water distribution system. Burst events are a major contributor to water losses and repair expenses. The study employs wireless sensor technology to monitor pressure and flow rate in real time, enabling the detection of pipeline abnormalities, particularly burst events. The methodology relies on transient analysis of pressure signals, utilizing Cumulative Sum (CUSUM) and wavelet analysis techniques to robustly identify burst occurrences.
To enhance precision, burst event localization is achieved through meticulous analysis of time differentials in the arrival of negative pressure waveforms across distinct pressure sensing points, aided by nodal matrix analysis. To evaluate the effectiveness of this methodology, a PVC water pipeline test bed is employed, demonstrating the algorithm's success in detecting pipeline burst events at flow rates of 2-3 l/s. Remarkably, the algorithm achieves a localization error of merely 3 meters, outperforming previously established algorithms. This research presents a significant advancement in efficient burst event detection and localization within water pipelines, holding the potential to markedly curtail water losses and the concomitant financial implications. In conclusion, this combined research addresses critical challenges in water distribution networks, offering solutions for optimizing pressure control, pump speed, burst event detection, and localization. These findings contribute to the enhancement of water distribution systems, resulting in improved water supply reliability, reduced water losses, and substantial cost savings. The integrated approach presented in this paper holds promise for municipalities and utilities seeking to improve the efficiency and sustainability of their water distribution networks.
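The two detection steps described above, a CUSUM test on the pressure signal and localization from negative-pressure-wave arrival times, can be sketched as follows. All numbers (pipe length, wave speed, pressure levels, thresholds) are illustrative assumptions, not the study's test-bed parameters.

```python
import numpy as np

def cusum_drop(signal, target, k=0.5, h=5.0):
    """Return the first index where the one-sided (lower) CUSUM exceeds h."""
    s = 0.0
    for i, x in enumerate(signal):
        s = max(0.0, s + (target - x) - k)  # accumulate drops below target
        if s > h:
            return i
    return None

rng = np.random.default_rng(1)
pressure = np.concatenate([rng.normal(50, 0.3, 200),   # normal operation
                           rng.normal(47, 0.3, 100)])  # head drop after a burst
alarm = cusum_drop(pressure, target=50.0)

# Localization from negative-pressure-wave arrival times at the two pipe ends:
# a burst at distance x from sensor 1 gives t1 = x/c and t2 = (L - x)/c,
# hence x = (L + c*(t1 - t2)) / 2.
L, c = 300.0, 1000.0            # pipe length (m) and wave speed (m/s), assumed
x_true = 120.0
t1, t2 = x_true / c, (L - x_true) / c
x_est = (L + c * (t1 - t2)) / 2
print(alarm, x_est)
```

The alarm fires within a few samples of the simulated burst, and the arrival-time formula recovers the burst position exactly in this noise-free localization example.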

Keywords: pressure reducing valve, complex networks, variable speed pump, wavelet transform, burst detection, CUSUM (Cumulative Sum), water pipeline monitoring

Procedia PDF Downloads 52
33 Quantitative Texture Analysis of Shoulder Sonography for Rotator Cuff Lesion Classification

Authors: Chung-Ming Lo, Chung-Chien Lee

Abstract:

In many countries, the lifetime prevalence of shoulder pain is up to 70%. In America, the health care system spends 7 billion per year on health issues related to shoulder pain. With respect to origin, up to 70% of shoulder pain is attributed to rotator cuff lesions. This study proposed a computer-aided diagnosis (CAD) system to assist radiologists in classifying rotator cuff lesions with less operator dependence. Quantitative features were extracted from shoulder ultrasound images acquired using an ALOKA alpha-6 US scanner (Hitachi-Aloka Medical, Tokyo, Japan) with a linear array probe (scan width: 36mm) ranging from 5 to 13 MHz. During examination, patients were in a standard sitting position and followed the regular routine. After acquisition, the shoulder US images were exported from the scanner and stored as 8-bit images with pixel values ranging from 0 to 255. Based on the sonographic appearance, the boundary of each lesion was delineated by a physician to indicate the specific pattern for analysis. The three lesion categories for classification comprised 20 cases of tendon inflammation, 18 cases of calcific tendonitis, and 18 cases of supraspinatus tear. For each lesion, second-order statistics were quantified in the feature extraction. The second-order statistics were texture features describing the correlations between adjacent pixels in a lesion. Because echogenicity patterns are expressed in grey scale, grey-level co-occurrence matrices with four angles of adjacent pixels were used. The texture metrics included the mean and standard deviation of energy, entropy, correlation, inverse difference moment, inertia, cluster shade, cluster prominence, and Haralick correlation. Then, the quantitative features were combined in a multinomial logistic regression classifier to generate a prediction model of rotator cuff lesions.
The multinomial logistic regression classifier is widely used in classification involving more than two categories, such as the three lesion types in this study. In the classifier, backward elimination was used to select the most relevant feature subset, i.e., the subset yielding the trained classifier with the lowest error rate. Leave-one-out cross-validation was used to evaluate the performance of the classifier: each case in turn was left out of the total cases and used to test the classifier trained on the remaining cases. With the physician’s assessment as the reference standard, the performance of the proposed CAD system was reported as accuracy. As a result, the proposed system achieved an accuracy of 86%. A CAD system based on statistical texture features interpreting echogenicity values in shoulder musculoskeletal ultrasound was thus established to generate a prediction model for rotator cuff lesions. Clinically, it is difficult to distinguish some kinds of rotator cuff lesions, especially partial-thickness tears of the rotator cuff. Based on the available literature, shoulder orthopaedic surgeons and musculoskeletal radiologists report greater diagnostic test accuracy than general radiologists or ultrasonographers. Consequently, the proposed CAD system, which was developed according to the assessments of a shoulder orthopaedic surgeon, can provide reliable suggestions to general radiologists or ultrasonographers. More quantitative features related to the specific patterns of different lesion types will be investigated in a future study to improve the prediction.
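The second-order statistics above can be illustrated with a minimal grey-level co-occurrence computation. This sketch quantizes a synthetic region of interest to 8 grey levels and averages a few of the named metrics (energy, entropy, inverse difference moment, inertia) over the four standard angles; the image, level count, and feature subset are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

def glcm(img, dy, dx, levels=8):
    """Normalized grey-level co-occurrence matrix for one pixel offset."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y, x], img[y2, x2]] += 1
    return m / m.sum()

def texture_features(p):
    """Four of the second-order statistics named in the abstract."""
    i, j = np.indices(p.shape)
    energy = (p ** 2).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    idm = (p / (1 + (i - j) ** 2)).sum()   # inverse difference moment
    inertia = (p * (i - j) ** 2).sum()     # also called contrast
    return energy, entropy, idm, inertia

rng = np.random.default_rng(2)
lesion = rng.integers(0, 8, (32, 32))      # stand-in for an 8-level quantized ROI

# Average each feature over the four standard angles (0°, 45°, 90°, 135°)
offsets = [(0, 1), (1, 1), (1, 0), (1, -1)]
feats = np.mean([texture_features(glcm(lesion, dy, dx)) for dy, dx in offsets],
                axis=0)
print(feats.round(3))
```

In practice such features, computed per delineated lesion, would form the input vector of the multinomial logistic regression classifier.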

Keywords: shoulder ultrasound, rotator cuff lesions, texture, computer-aided diagnosis

Procedia PDF Downloads 256
32 Burkholderia Cepacia ST 767 Causing a Three Years Nosocomial Outbreak in a Hemodialysis Unit

Authors: Gousilin Leandra Rocha Da Silva, Stéfani T. A. Dantas, Bruna F. Rossi, Erika R. Bonsaglia, Ivana G. Castilho, Terue Sadatsune, Ary Fernandes Júnior, Vera l. M. Rall

Abstract:

Kidney failure causes decreased diuresis and the accumulation of nitrogenous substances in the body. To increase patient survival, hemodialysis is used as a partial substitute for renal function. However, contamination of the water used in this treatment, causing bacteremia in patients, is a worldwide concern. The Burkholderia cepacia complex (Bcc), a group of more than 20 species, is frequently isolated from hemodialysis water samples and comprises opportunistic bacteria affecting immunosuppressed patients, owing to its wide variety of virulence factors and innate resistance to several antimicrobial agents, which contribute to its persistence in the hospital environment and to pathogenesis in the host. The objective of the present work was to molecularly and phenotypically characterize Bcc isolates collected from the water and dialysate of the Hemodialysis Unit and from the blood of patients at a public hospital in Botucatu, São Paulo, Brazil, between 2019 and 2021. We used 33 Bcc isolates previously obtained from blood cultures from patients with bacteremia undergoing hemodialysis treatment (2019-2021) and 24 isolates obtained from water and dialysate samples in the Hemodialysis Unit over the same period. The recA gene was sequenced to identify the specific species within the Bcc group. All isolates were tested for the presence of genes encoding virulence factors, namely cblA, esmR, zmpA, and zmpB. Considering the epidemiology of the outbreak, the Bcc isolates were molecularly characterized by Multilocus Sequence Typing (MLST) and by pulsed-field gel electrophoresis (PFGE). Biofilm formation on polystyrene microplates was verified and quantified by incubating the isolates at different temperatures (20°C, the average water temperature, and 35°C, the optimal growth temperature for the group).
The antibiogram was performed with agar disc diffusion tests, using discs impregnated with cefepime (30µg), ceftazidime (30µg), ciprofloxacin (5µg), gentamicin (10µg), imipenem (10µg), amikacin (30µg), sulfamethoxazole/trimethoprim (23.75/1.25µg), and ampicillin/sulbactam (10/10µg). The zmpB gene was identified in all isolates and zmpA in 96.5% of them, while none presented the cblA or esmR genes. The antibiogram of the 33 human isolates indicated that all were resistant to gentamicin, colistin, ampicillin/sulbactam, and imipenem. Sixteen (48.5%) isolates were resistant to amikacin, and lower rates of resistance were observed for meropenem, ceftazidime, cefepime, ciprofloxacin, and piperacillin/tazobactam (6.1%). All isolates were sensitive to sulfamethoxazole/trimethoprim, levofloxacin, and tigecycline. As for the water isolates, resistance was observed only to gentamicin (34.8%) and imipenem (17.4%). According to the PFGE results, all isolates obtained from humans and water belonged to the same pulsotype (1), which was identified by recA sequencing as B. cepacia, belonging to sequence type ST-767. The observation of a single pulsotype over three years shows the persistence of this isolate in the pipeline, contaminating patients undergoing hemodialysis despite the routine disinfection of water with peracetic acid. This persistence is probably due to the production of biofilm, which protects bacteria from disinfectants; making this scenario more critical, several isolates proved to be multidrug-resistant (resistant to at least three groups of antimicrobials), making patient care even more difficult.

Keywords: hemodialysis, burkholderia cepacia, PFGE, MLST, multi drug resistance

Procedia PDF Downloads 68
31 New Hybrid Process for Converting Small Structural Parts from Metal to CFRP

Authors: Yannick Willemin

Abstract:

Carbon fibre-reinforced plastic (CFRP) offers outstanding value. However, like all materials, CFRP also has its challenges. Many forming processes are largely manual and hard to automate, making it challenging to control repeatability and reproducibility (R&R); they generate significant scrap and are too slow for high-series production; fibre costs are relatively high and subject to supply and cost fluctuations; the supply chain is fragmented; many forms of CFRP are not recyclable, and many materials have yet to be fully characterized for accurate simulation; shelf-life and out-life limitations add cost; continuous-fibre forms have design limitations; many materials are brittle; and small and/or thick parts are costly to produce and difficult to automate. A majority of small structural parts are metal due to the high cost of fabricating CFRP parts in this size class. The fact that the CFRP manufacturing processes producing the highest-performance parts also tend to be the slowest and least automated is another reason CFRP parts are generally higher in cost than comparably performing metal parts, which are easier to produce. Fortunately, business is in the midst of a major manufacturing evolution—Industry 4.0—and one technology seeing rapid growth is additive manufacturing/3D printing, thanks to new processes and materials, plus an ability to harness Industry 4.0 tools. No longer limited to just prototype parts, metal-additive technologies are used to produce tooling and mold components for high-volume manufacturing, and polymer-additive technologies can incorporate fibres to produce true composites and be used to produce end-use parts with high aesthetics, unmatched complexity, mass customization opportunities, and high mechanical performance.
A new hybrid manufacturing process combines the best capabilities of additive technologies—high complexity, low energy usage and waste, 100% traceability, faster time to market—with those of post-consolidation—tight tolerances, high R&R, established materials and supply chains. The platform was developed by Zürich-based 9T Labs AG and is called Additive Fusion Technology (AFT). It consists of design software, which determines the optimal fibre layup and then exports files back to check predicted performance, plus two pieces of equipment: a 3D printer, which lays up (near-)net-shape preforms using neat thermoplastic filaments and slit, roll-formed unidirectional carbon fibre-reinforced thermoplastic tapes, and a post-consolidation module, which consolidates and then shapes preforms into final parts using a compact compression press fitted with a heating unit and matched metal molds. Matrices—currently including PEKK, PEEK, PA12, and PPS, although nearly any high-quality commercial thermoplastic tapes and filaments can be used—are matched between filaments and tapes to assure excellent bonding. Since thermoplastics are used exclusively, larger assemblies can be produced by bonding or welding together smaller components, and end-of-life parts can be recycled. By combining compression molding with 3D printing, higher part quality with very low voids and excellent surface finish on A and B sides can be achieved. Tight tolerances (min. section thickness = 1.5mm, min. section height = 0.6mm, min. fibre radius = 1.5mm) with high R&R can be cost-competitively held in production volumes of 100 to 10,000 parts/year on a single set of machines.

Keywords: additive manufacturing, composites, thermoplastic, hybrid manufacturing

Procedia PDF Downloads 72
30 Biostabilisation of Sediments for the Protection of Marine Infrastructure from Scour

Authors: Rob Schindler

Abstract:

Industry-standard methods of mitigating erosion of seabed sediments rely on ‘hard engineering’ approaches which have numerous environmental shortcomings: (1) direct loss of habitat by smothering of benthic species, (2) disruption of sediment transport processes, damaging geomorphic and ecosystem functionality, (3) generation of secondary erosion problems, (4) introduction of material that may propagate non-local species, and (5) provision of pathways for the spread of invasive species. Recent studies have also revealed the importance of biological cohesion, the result of naturally occurring extracellular polymeric substances (EPS), in stabilizing natural sediments. Mimicking these strong bonding kinetics through the deliberate addition of EPS to sediments – henceforth termed ‘biostabilisation’ – offers a means to mitigate erosion induced by structures or episodic increases in hydrodynamic forcing (e.g. storms and floods) whilst avoiding, or reducing, hard engineering. Here we present unique experiments that systematically examine how biostabilisation reduces scour around a monopile in a current, a first step to realizing the potential of this new method of scour reduction for a wide range of engineering purposes in aquatic substrates. Experiments were performed in Plymouth University’s recirculating sediment flume, which includes a recessed scour pit. The model monopile was 0.048 m in diameter, D. Assuming a prototype monopile diameter of 2.0 m yields a geometric ratio of 41.67. When applied to a 10 m prototype water depth this yields a model depth, d, of 0.24 m. The sediment pit containing the monopile was filled with different biostabilised substrata prepared using a mixture of fine sand (D50 = 230 μm) and EPS (xanthan gum). Nine sand-EPS mixtures were examined, spanning EPS contents of 0.0% < b0 < 0.50%. Scour development was measured using a laser point gauge along a 530 mm centreline at 10 mm increments at regular periods over 5 h.
Maximum scour depth and excavated area were determined at different time steps and plotted against time to yield equilibrium values. After 5 hours the current was stopped and a detailed scan of the final scour morphology was taken. Results show that increasing EPS content causes a progressive reduction in the equilibrium depth and lateral extent of scour, and hence in the excavated material. Very small amounts, equivalent to those produced by natural communities (<0.1% by mass), reduce the rate, depth, and extent of scour around monopiles. Furthermore, the strong linear relationships between EPS content, equilibrium scour depth, excavation area, and timescales of scouring offer a simple index with which to modify existing scour prediction methods. We conclude that the biostabilisation of sediments with EPS may offer a simple, cost-effective, and ecologically sensitive means of reducing scour in a range of contexts, including offshore wind farms, bridge piers, pipeline installation, and void filling in rock armour. Biostabilisation may also reduce economic costs through (1) use of existing site sediments or waste dredged sediments, (2) reduced fabrication of materials, (3) lower transport costs, and (4) less dependence on specialist vessels and precise sub-sea assembly. Further, its potential environmental credentials may allow sensitive use of the seabed in marine protection zones across the globe.
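The scaling arithmetic and the linear EPS-scour index described above can be sketched numerically. The scour-depth data below are synthetic, built from the common S/D ≈ 1.3 rule of thumb for current-induced scour around piles and an assumed linear EPS reduction; only the diameters, depth, and geometric ratio come from the abstract.

```python
import numpy as np

# Geometric scaling used in the flume experiments
D_model, D_proto = 0.048, 2.0      # monopile diameters (m)
ratio = D_proto / D_model          # geometric ratio, ~41.67
d_model = 10.0 / ratio             # 10 m prototype depth -> 0.24 m model depth

# A linear "biostabilisation index": equilibrium scour depth vs EPS content.
# Synthetic data: S/D ~ 1.3 baseline with an assumed linear EPS reduction,
# purely to illustrate the kind of linear index the abstract describes.
b0 = np.array([0.0, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5])   # EPS content, % by mass
S_eq = 1.3 * D_model * (1 - 1.6 * b0)                  # equilibrium scour depth (m)
slope, intercept = np.polyfit(b0, S_eq, 1)
print(round(ratio, 2), round(d_model, 3), round(slope, 4))
```

The negative fitted slope is the kind of simple correction factor that could be applied to an existing unstabilised scour prediction.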

Keywords: biostabilisation, EPS, marine, scour

Procedia PDF Downloads 147
29 Understanding Systemic Barriers (and Opportunities) to Increasing Uptake of Subcutaneous Medroxy Progesterone Acetate Self-Injection in Health Facilities in Nigeria

Authors: Oluwaseun Adeleke, Samuel O. Ikani, Fidelis Edet, Anthony Nwala, Mopelola Raji, Simeon Christian Chukwu

Abstract:

Background: The DISC project collaborated with partners to implement demand creation and service delivery interventions, including the MoT (Moment of Truth) innovation, in over 500 health facilities across 15 states. This has increased the voluntary conversion rate to self-injection among women who opt for injectable contraception. While some facilities recorded an increasing trend in key performance indicators, a few others persistently performed sub-optimally due to provider- and system-related barriers. Methodology: Twenty-two facilities performing sub-optimally were selected purposively from three Nigerian states. Low productivity was appraised using low reporting rates and poor self-injection (SI) conversion rates as indicators. Interviews were conducted with health providers across these health facilities using a rapid diagnosis tool. The project also conducted a data quality assessment that evaluated the veracity of data elements reported across the three major sources of family planning data in each facility. Findings: The inability, and sometimes refusal, of providers to support clients to self-inject effectively was associated with a misunderstanding of its value to their work experience. It was also observed that providers still held a strong influence over clients’ method choices. Furthermore, providers held biases and misconceptions about DMPA-SC that restricted the access of obese clients and new acceptors to services – a clear departure from the recommendations of the national guidelines. Additionally, quality-of-care standards were compromised because job aids were not used to inform service delivery. Facilities performing sub-optimally often under-reported DMPA-SC utilization data, and there were multiple uncoordinated responsibilities for recording and reporting. Additionally, data validation meetings were not regularly convened, and those held were ineffective in authenticating data received from health facilities.
Other reasons for sub-optimal performance included poor documentation and tracking of stock inventory resulting in commodity stockouts, low client flow because of poor positioning of health facilities, and ineffective messaging. Some facilities lacked adequate human and material resources to provide services effectively and received very few supportive supervision visits. Supportive supervision visits and Data Quality Audits have been useful to address the aforementioned performance barriers. The project has deployed digital DMPA-SC self-injection checklists that have been aligned with nationally approved templates. During visits, each provider and community mobilizer is accorded special attention by the supervisor until he/she can perform procedures in line with best practice (protocol). Conclusion: This narrative provides a summary of a range of factors that identify health facilities performing sub-optimally in their provision of DMPA-SC services. Findings from this assessment will be useful during project design to inform effective strategies. As the project enters its final stages of implementation, it is transitioning high-impact activities to state institutions in the quest to sustain the quality of service beyond the tenure of the project. The project has flagged activities, as well as created protocols and tools aimed at placing state-level stakeholders at the forefront of improving productivity in health facilities.

Keywords: family planning, contraception, DMPA-SC, self-care, self-injection, barriers, opportunities, performance

Procedia PDF Downloads 53
28 Fe Modified Tin Oxide Thin Film Based Matrix for Reagentless Uric Acid Biosensing

Authors: Kashima Arora, Monika Tomar, Vinay Gupta

Abstract:

Biosensors have found potential applications ranging from environmental testing and biowarfare agent detection to clinical testing, health care, and cell analysis. This is driven in part by the desire to decrease the cost of health care and to obtain precise information about the health status of patients more quickly through the development of various biosensors, which have become increasingly prevalent in clinical testing and point-of-care testing for a wide range of biological analytes. Uric acid is an important byproduct in the human body, and a number of pathological disorders are related to its high concentration. In the past few years, rapid growth in the development of new materials and improvements in sensing techniques have led to the evolution of advanced biosensors. In this context, metal oxide thin-film based matrices, due to their biocompatible nature, strong adsorption ability, high isoelectric point (IEP), and abundance in nature, have become the materials of choice for recent technological advances in biotechnology. Wide band-gap metal oxide semiconductors including ZnO, SnO₂, and CeO₂ have gained much attention as matrices for the immobilization of various biomolecules. Despite its multifunctional properties for a broad range of applications, including transparent electronics, gas sensors, acoustic devices, and UV photodetectors, tin oxide (SnO₂), a wide band-gap semiconductor (Eg = 3.87 eV), has not been explored much for biosensing. To realize a high-performance miniaturized biomolecular electronic device, the rf sputtering technique is considered the most promising for the reproducible growth of good-quality thin films, controlled surface morphology, and desired film crystallization with improved electron transfer properties.
Recently, iron oxide and its composites have been widely used as matrices for biosensing applications, exploiting the electron communication feature of Fe for the detection of various analytes such as urea, hemoglobin, glucose, phenol, L-lactate, and H₂O₂. However, to the authors’ knowledge, no work has been reported on modifying the electronic properties of SnO₂ by implanting it with a suitable metal (Fe) to induce a redox couple in it and utilizing it for the reagentless detection of uric acid. In the present study, an Fe-implanted SnO₂ based matrix has been utilized for a reagentless uric acid biosensor. The implantation of Fe into the SnO₂ matrix was confirmed by energy-dispersive X-ray spectroscopy (EDX) analysis. Electrochemical techniques have been used to study the response characteristics of the Fe-modified SnO₂ matrix before and after uricase immobilization. The developed uric acid biosensor exhibits a high sensitivity of about 0.21 mA/mM and a linear variation in current response over the concentration range from 0.05 to 1.0 mM of uric acid, besides a long shelf life (~20 weeks). The Michaelis-Menten kinetic parameter (Km) is found to be relatively low (0.23 mM), which indicates the high affinity of the fabricated bioelectrode towards uric acid (the analyte). Also, the presence of other interferents found in human serum has a negligible effect on the performance of the biosensor. Hence, the obtained results highlight the importance of implanted Fe:SnO₂ thin films as an attractive matrix for the realization of reagentless biosensors for uric acid.
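The Michaelis-Menten parameter quoted above can be recovered from calibration data with a Lineweaver-Burk linearization. The sketch below uses the reported Km of 0.23 mM over the reported 0.05-1.0 mM range; the maximum current Imax and the noise-free response are illustrative assumptions, not the study's measurements.

```python
import numpy as np

# Michaelis-Menten response of the bioelectrode: I = Imax * S / (Km + S)
Km_true, Imax_true = 0.23, 0.25    # Km from the abstract (mM); Imax assumed (mA)
S = np.array([0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])   # uric acid (mM)
I = Imax_true * S / (Km_true + S)                     # noise-free response

# Lineweaver-Burk linearization: 1/I = (Km/Imax)*(1/S) + 1/Imax
slope, intercept = np.polyfit(1 / S, 1 / I, 1)
Imax_est = 1 / intercept
Km_est = slope * Imax_est
print(round(Km_est, 2), round(Imax_est, 2))
```

A low fitted Km, as reported, indicates that the electrode approaches its maximum current at low substrate concentration, i.e. high affinity of the immobilized uricase for uric acid.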

Keywords: Fe implanted tin oxide, reagentless uric acid biosensor, rf sputtering, thin film

Procedia PDF Downloads 151
27 The Impact of Kids Science Labs Intervention Program on Independent Thinking and Academic Achievement in Young Children

Authors: Aliya Kamilyevna Salahova

Abstract:

This study examines the effectiveness of the Kids Science Labs intervention program, based on STEM, in fostering independent thinking among preschool and elementary school children and its influence on their academic achievement. Through a comprehensive methodology involving interviews, surveys, observations, case studies, and statistical tests, data were collected from various sources to accurately analyze the program's effects. The findings indicate a significant positive impact on children's independent thinking abilities, leading to improved academic performance in mathematics and science, enhanced learning motivation, and a propensity to critically evaluate problem-solving approaches. This research contributes to the theoretical understanding of how STEM activities can foster independent thinking and academic success in young children, providing valuable insights for the development of educational programs. Introduction: The goal of this study is to investigate the influence of the Kids Science Labs intervention program, grounded in STEM, on the development of independent thinking skills among preschool and elementary school children. By addressing this objective, we aim to explore the program's potential to enhance academic performance in mathematics and science. The study's findings have theoretical significance as they shed light on the ways in which STEM activities can foster independent thinking in young children, thus enabling educators to design effective learning programs that promote academic success. Methodology: This study employs a robust methodology that includes interviews, surveys, observations, case studies, and statistical tests. These methods were carefully selected to collect comprehensive data from multiple sources, such as documents and records, ensuring a thorough analysis of the program's effects. 
The use of diverse data collection and analysis procedures facilitated an in-depth exploration of the research questions and yielded reliable results. Results: The results indicate that children participating in the Kids Science Labs program experienced a sustained positive impact on their independent thinking abilities. Moreover, these children demonstrated improved academic performance in mathematics and science, displaying higher learning motivation and the capacity to critically evaluate problem-solving methods and seek optimal solutions. Theoretical Importance: This study contributes significantly to the existing theoretical knowledge by elucidating how STEM activities can foster independent thinking and enhance academic success in preschool and elementary school children. The findings have practical implications for educators, empowering them to develop learning programs that stimulate independent thinking, leading to improved academic performance in young children. Discussion: The findings of this research affirm that the Kids Science Labs intervention program is highly effective in fostering independent thinking among preschool and elementary school children. The program's positive impact extends to improved academic performance in mathematics and science, highlighting its potential to enhance learning outcomes. Educators can leverage these findings to develop educational programs that promote independent thinking and elevate academic achievement in young children. Conclusion: In conclusion, the Kids Science Labs intervention program has been found to be highly effective in fostering independent thinking among preschool and elementary school children. Furthermore, participation in the program correlates with improved academic performance in mathematics and science. The study's outcomes underscore the importance of developing educational initiatives that stimulate independent thinking in young children, thereby enhancing their academic success.

Keywords: STEM in preschool, STEM in elementary school, kids science labs, independent thinking, STEM activities in early childhood education

Procedia PDF Downloads 63
26 Policies for Circular Bioeconomy in Portugal: Barriers and Constraints

Authors: Ana Fonseca, Ana Gouveia, Edgar Ramalho, Rita Henriques, Filipa Figueiredo, João Nunes

Abstract:

Due to persistent climate pressures, there is a need to find a resilient economic system that is regenerative in nature. The bioeconomy offers the possibility of replacing non-renewable and non-biodegradable materials derived from fossil fuels with ones that are renewable and biodegradable, while a circular economy aims at sustainable and resource-efficient operations. The term "Circular Bioeconomy", which can be summarized as all activities that transform biomass for its use in various product streams, expresses the interaction between these two ideas. Portugal has a very favourable context for promoting a Circular Bioeconomy due to its variety of climates and ecosystems, availability of biologically based resources, location, and geomorphology. Recently, there have been political and legislative efforts to develop the Portuguese Circular Bioeconomy. The Action Plan for a Sustainable Bioeconomy, approved in 2021, is composed of five axes of intervention, ranging from sustainable production and the use of regionally based biological resources to the development of a circular and sustainable bioindustry through research and innovation. However, as some statistics show, Portugal is still far from achieving circularity. According to Eurostat, Portugal has a circularity rate of 2.8%, the second lowest among the member states of the European Union. Several challenges contribute to this scenario, including sectorial heterogeneity and fragmentation, the prevalence of small producers, a lack of attractiveness for younger generations, and the absence of collaborative solutions implemented amongst producers and along value chains. Regarding the Portuguese industrial sector, there is a tendency towards complex bureaucratic processes, which leads to economic and financial obstacles and an unclear national strategy.
Given the limited number of incentives the country offers to those who intend to abandon the linear economic model, many entrepreneurs are hesitant to invest the capital needed to make their companies more circular. The absence of disaggregated, georeferenced, and reliable information regarding the actual availability of biological resources is also a major issue. Low literacy on bioeconomy among many of the sectoral agents and in society in general directly impacts production and final consumption decisions. The WinBio project seeks to outline a strategic approach for the management of weaknesses/opportunities in the technology transfer process, given the reality of the territory, through road mapping and national and international benchmarking. The work included the identification and analysis of agents in the interior region of Portugal, natural endogenous resources, products, and processes associated with potential development. Specific flows of biological wastes, possible value chains, and the potential for replacing critical raw materials with bio-based products were assessed, taking into consideration other countries with a mature bioeconomy. The study found that the food industry, agriculture, forestry, and fisheries generate huge amounts of waste streams, which in turn provide an opportunity for the establishment of local bio-industries powered by this biomass. The project identified biological resources with potential for replication and applicability in the Portuguese context. The richness of natural resources and the known potential of the interior region of Portugal are key to developing the Circular Economy and the sustainability of the country.

Keywords: circular bioeconomy, interior region of Portugal, regional development, public policy

Procedia PDF Downloads 64
25 A Quantitative Case Study Analysis of Store Format Contributors to U.S. County Obesity Prevalence in Virginia

Authors: Bailey Houghtaling, Sarah Misyak

Abstract:

Food access, that is, the availability, affordability, convenience, and desirability of food and beverage products within communities, influences consumers’ purchasing and consumption decisions. These variables may contribute to the lower dietary quality scores and higher obesity prevalence documented among rural and disadvantaged populations in the United States (U.S.). Current research assessing linkages between food access and obesity outcomes has primarily focused on distance to a traditional grocery/supermarket store as a measure of optimality. However, low-income consumers especially, including participants in the U.S. Department of Agriculture’s Supplemental Nutrition Assistance Program (SNAP), seem to utilize non-traditional food store formats with greater frequency for household dietary needs. Non-traditional formats have been associated with less nutritious food and beverage options and with consumer purchases that are high in saturated fats, added sugars, and sodium. The authors’ formative research indicated differences by U.S. region and rurality in the distribution of traditional and non-traditional SNAP-authorized food store formats. Therefore, using Virginia as a case study, the purpose of this research was to determine whether a relationship between store format, rurality, and obesity exists. This research applied SNAP-authorized food store data (food access points for SNAP as well as non-SNAP consumers) and obesity prevalence data by Virginia county using two publicly available databases: (1) the SNAP Retailer Locator and (2) U.S. County Health Rankings. The alpha level was set a priori at 0.05. All Virginia SNAP-authorized stores (n=6,461) were coded by format – grocery, drug, mass merchandiser, club, convenience, dollar, supercenter, specialty, farmers market, independent grocer, and non-food store. Simple linear regression was first applied to assess the relationship between store format and obesity.
Thereafter, multiple variables were added to the regression to account for potential moderating relationships (e.g., county income, rurality). Convenience, dollar, non-food or restaurant, mass merchandiser, farmers market, and independent grocer formats were significantly and positively related to obesity prevalence. Upon controlling for urban-rural status and income, the following formats remained significantly related to county obesity prevalence with a small, positive effect: convenience (p=0.010), accounting for 0.3% of the variance in obesity prevalence; dollar (p=0.005), accounting for 0.5% of the variance; and non-food (p=0.030), accounting for 1.3% of the variance. These results align with current literature on consumer behavior at non-traditional formats. For example, consumers’ food and beverage purchases at convenience and dollar stores are documented to be high in saturated fats, added sugars, and sodium. Further, non-food stores (i.e., quick-serve restaurants) often contribute a large portion of U.S. consumers’ dietary intake and thus lower dietary quality scores. Current food access research investigates grocery/supermarket access and obesity outcomes. These results suggest more research is needed that focuses on non-traditional food store formats. Nutrition interventions within convenience, dollar, and non-food stores, for example, that aim to enhance not only healthy food access but also the affordability, convenience, and desirability of nutritious food and beverage options may impact obesity rates in Virginia. More research is warranted utilizing the presented investigative framework in other U.S. and global regions to explore the role and potential of non-traditional food store formats in preventing and reducing obesity.
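The first analysis step, regressing county obesity prevalence on the count of a given store format, can be sketched as an ordinary least-squares fit. The data below are hypothetical illustration values, not the study's Virginia data, and the variable names are ours:

```python
import numpy as np

# Hypothetical county-level data (illustrative only, NOT the study's data):
# x = number of dollar-format stores per county, y = obesity prevalence (%)
x = np.array([1, 2, 3, 5, 7, 8], dtype=float)
y = np.array([24.0, 27.0, 26.0, 33.0, 34.0, 36.0])

# Simple linear regression (least squares), as in the first analysis step
slope, intercept = np.polyfit(x, y, 1)

# Coefficient of determination for the fit
y_hat = slope * x + intercept
ss_res = float(np.sum((y - y_hat) ** 2))
ss_tot = float(np.sum((y - y.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot
print(f"slope={slope:.2f} percentage points per store, R^2={r2:.2f}")
```

In the study itself, a second step added county income and urban-rural status as covariates (multiple regression) to test whether the format effect persisted after controlling for them.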

Keywords: food access, food store format, non-traditional food stores, obesity prevalence

Procedia PDF Downloads 110
24 A Risk-Based Comprehensive Framework for the Assessment of the Security of Multi-Modal Transport Systems

Authors: Mireille Elhajj, Washington Ochieng, Deeph Chana

Abstract:

The challenges of the rapid growth in the demand for transport have traditionally been seen within the context of the problems of congestion, air quality, climate change, safety, and affordability. However, there are increasing threats, including crime-related ones such as cyber-attacks, that threaten the security of the transport of people and goods. To the best of the authors’ knowledge, this paper presents, for the first time, a comprehensive framework for the assessment of the current and future security issues of multi-modal transport systems. The proposed method is based on a structured framework starting with a detailed specification of the transport asset map (transport system architecture), followed by the identification of vulnerabilities. The asset map and vulnerabilities are used to identify the various approaches for exploitation of the vulnerabilities, leading to the creation of a set of threat scenarios. The threat scenarios are then transformed into risks and their categories, with insights for their mitigation. The consideration of the mitigation space is holistic and includes the formulation of appropriate policies and tactics and/or technical interventions. The quality of the framework is ensured through a structured and logical process that identifies the stakeholders, reviews the relevant documents including policies and identifies gaps, incorporates targeted surveys to augment the reviews, and uses subject matter experts for validation. The approach to categorising security risks is an extension of the methods typically employed. Specifically, partitioning risks into either physical or cyber categories is too limited for developing mitigation policies and tactics/interventions for transport systems, where an interplay between physical and cyber processes is very often the norm.
This interplay is rapidly taking on increasing significance for security as emerging cyber-physical technologies shape the future of all transport modes. Examples include: Connected Autonomous Vehicles (CAVs) in road transport; the European Rail Traffic Management System (ERTMS) in rail transport; the Automatic Identification System (AIS) in maritime transport; advanced Communications, Navigation and Surveillance (CNS) technologies in air transport; and the Internet of Things (IoT). The framework adopts a risk categorisation scheme that considers risks as falling within the following threat→impact relationships: Physical→Physical, Cyber→Cyber, Cyber→Physical, and Physical→Cyber. Thus the framework enables a more complete risk picture to be developed for today’s transport systems and, more importantly, is readily extendable to account for emerging trends in the sector that will define future transport systems. The framework facilitates the audit and retro-fitting of mitigations in current transport operations and the analysis of security management options for the next generation of transport, enabling strategic aspirations such as systems with security-by-design and the co-design of safety and security to be achieved. An initial application of the framework to transport systems has shown that intra-modal consideration of security measures is sub-optimal and that a holistic and multi-modal approach is required, one that also addresses the intersections/transition points of such networks, as their vulnerability is high. This is in line with traveler-centric transport service provision, widely accepted as the future of mobility services. In summary, a risk-based framework is proposed for use by stakeholders to comprehensively and holistically assess the security of transport systems.
It requires a detailed understanding of the transport architecture to enable a detailed vulnerabilities analysis to be undertaken, creates threat scenarios, and transforms them into risks, which form the basis for the formulation of interventions.
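The four-way threat→impact categorisation described above can be represented directly in code. The sketch below is our own illustration of the scheme, not the authors' implementation; the class names and the example scenario are assumptions:

```python
from dataclasses import dataclass
from enum import Enum

class Domain(Enum):
    PHYSICAL = "physical"
    CYBER = "cyber"

@dataclass(frozen=True)
class RiskCategory:
    threat: Domain   # domain in which the threat originates
    impact: Domain   # domain in which the impact is felt

# The four threat->impact relationships adopted by the framework
CATEGORIES = {
    RiskCategory(Domain.PHYSICAL, Domain.PHYSICAL),
    RiskCategory(Domain.CYBER, Domain.CYBER),
    RiskCategory(Domain.CYBER, Domain.PHYSICAL),
    RiskCategory(Domain.PHYSICAL, Domain.CYBER),
}

def categorise(threat: Domain, impact: Domain) -> RiskCategory:
    """Map a threat scenario onto one of the four framework categories."""
    return RiskCategory(threat, impact)

# Hypothetical scenario: a spoofed navigation signal (cyber threat)
# causes a vessel collision (physical impact)
print(categorise(Domain.CYBER, Domain.PHYSICAL))
```

Making the cross-domain pairs first-class categories, rather than forcing each risk into "physical" or "cyber", is what lets the framework capture the cyber-physical interplay the abstract emphasises.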

Keywords: mitigations, risk, transport, security, vulnerabilities

Procedia PDF Downloads 133
23 The Pro-Reparative Effect of Vasoactive Intestinal Peptide in Chronic Inflammatory Osteolytic Periapical Lesions

Authors: Michelle C. S. Azevedo, Priscila M. Colavite, Carolina F. Francisconi, Ana P. Trombone, Gustavo P. Garlet

Abstract:

VIP (vasoactive intestinal peptide) is known as a potential protective factor in view of its marked immunosuppressive properties. In this work, we investigated a possible association of VIP with the clinical status of experimental periapical granulomas and with the expression of markers potentially associated with periapical lesion pathogenesis. C57BL/6WT mice were treated or not with recombinant VIP. Animals with active/progressive (N=40) or inactive/stable (N=70) periapical granulomas and controls (N=50) were anesthetized, and the right mandibular first molar was surgically opened, allowing exposure of the dental pulp. Endodontic pathogenic bacterial strains were inoculated: Porphyromonas gingivalis, Prevotella nigrescens, Actinomyces viscosus, and Fusobacterium nucleatum subsp. polymorphum. The cavity was not sealed after bacterial inoculation. During lesion development, animals were treated or not with recombinant VIP 3 days post infection. Animals were killed after 3, 7, 14, and 21 days of infection, and the jaws were dissected. Total RNA was extracted from periodontal tissues, and the integrity of the samples was checked. qPCR reactions using TaqMan chemistry with inventoried primers were performed on ViiA7 equipment. The results, depicted as relative levels of gene expression, were calculated in reference to GAPDH and β-actin expression. Periodontal tissues from upper molars were harvested and incubated in supplemented RPMI, followed by processing with 0.05% DNase. Cell viability and counts were determined by Neubauer chamber analysis. For flow cytometry analysis, after cell counting, the cells were stained with the optimal dilution of each antibody: (PE)-conjugated and (FITC)-conjugated antibodies against CD4, CD25, FOXP3, IL-4, IL-17, and IFN-γ, as well as their respective isotype controls. Cells were analyzed with a FACScan and CellQuest software.
Results are presented as the number of cells in the periodontal tissues or the number of positive cells for each marker in the CD4+FOXp3+, CD4+IL-4+, CD4+IFNγ+, and CD4+IL-17+ subpopulations. mRNA levels were measured by qPCR. VIP expression predominated in inactive lesions, as did part of the clusters of cytokine/Th markers identified as protective factors, and a negative correlation between VIP expression and lesion evolution was observed. A quantitative analysis of IL1β, IL17, TNF, IFN, MMP2, RANKL, OPG, IL10, TGFβ, CTLA4, COL5A1, CTGF, CXCL11, FGF7, ITGA4, ITGA5, SERP1, and VTN expression was performed in experimental periapical lesions treated with VIP 7 and 14 days after lesion induction and in healthy animals. After 7 days, all targets presented a significant increase in comparison to untreated animals. Regarding migration kinetics, the profile of chemokine receptor expression of CD4+ T-cell subsets and the phenotypic analysis of Treg, Th1, Th2, and Th17 cells during the course of experimental disease were evaluated by flow cytometry and depicted as the number of positive cells for each marker. CD4+IFNγ+ and CD4+FOXp3+ cell migration was significantly increased 7 days post VIP treatment. CD4+IL17+ cell migration was significantly increased 7 and 14 days post VIP treatment, and CD4+IL4+ cell migration was significantly increased 14 and 21 days post VIP treatment compared to the control group. In conclusion, our experimental data support VIP involvement in determining the inactivity of periapical lesions. Financial support: FAPESP #2015/25618-2.
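The abstract states that relative expression levels were calculated in reference to GAPDH and β-actin but does not give the formula. A common approach for TaqMan qPCR data is the 2^-ΔΔCt (Livak) method; the sketch below assumes that method, and all Ct values are invented for illustration:

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Relative quantification by the 2^-ddCt (Livak) method.

    ct_ref_* is the reference-gene Ct, e.g. an average of the GAPDH
    and beta-actin Ct values for the same sample. Returns the fold
    change of the target gene in the sample vs. the control group.
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Invented Ct values: the target amplifies 2 cycles earlier (relative to
# the reference genes) in the treated sample -> ~4-fold up-regulation
print(relative_expression(24.0, 18.0, 26.0, 18.0))  # 4.0
```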

Keywords: chronic inflammation, cytokines, osteolytic lesions, VIP (Vasoactive Intestinal Peptide)

Procedia PDF Downloads 166
22 Carbon Nanotube-Based Catalyst Modification to Improve Proton Exchange Membrane Fuel Cell Interlayer Interactions

Authors: Ling Ai, Ziyu Zhao, Zeyu Zhou, Xiaochen Yang, Heng Zhai, Stuart Holmes

Abstract:

Optimizing the catalyst layer structure is crucial for enhancing the performance of proton exchange membrane fuel cells (PEMFCs) with low platinum (Pt) loading. Current work has focused on the utilization, durability, and site activity of Pt particles on supports, and performance enhancement has been achieved by loading Pt onto porous supports with different morphologies, such as graphene, carbon fiber, and carbon black. Some schemes have also incorporated cost considerations to achieve lower Pt loading. However, the design of the catalyst layer (CL) structure in the membrane electrode assembly (MEA) must consider the interactions between the layers. Addressing the crucial aspects of water management, low contact resistance, and the establishment of an effective three-phase boundary for the MEA, multi-walled carbon nanotubes (MWCNTs) are a promising CL support due to their intrinsically high hydrophobicity, high axial electrical conductivity, and potential for ordered alignment. However, the drawbacks of MWCNTs, such as strong agglomeration, chemical inertness of the wall surface, and unopened ends, are unfavorable for Pt nanoparticle loading, which is detrimental to MEA processing and leads to inhomogeneous CL surfaces. This further deteriorates the utilization of Pt and increases the contact resistance. Robust chemical oxidation or nitrogen doping can introduce polar functional groups onto the surface of MWCNTs, facilitating the creation of open tube ends and inducing defects in tube walls. This improves dispersibility and load capacity but reduces length and conductivity. Consequently, a trade-off exists between maintaining the intrinsic properties of MWCNTs and their degree of functionalization. In this work, MWCNTs were modified based on the operational requirements of the MEA from the viewpoint of interlayer interactions, including the search for the optimal degree of oxidation, N-doping, and micro-arrangement.
MWCNTs were functionalized by oxidation and N-doping, as well as micro-alignment, to achieve lower contact resistance between the CL and the proton exchange membrane (PEM), better hydrophobicity, and enhanced performance. Furthermore, this work expects to construct a more continuously distributed three-phase boundary by aligning MWCNTs to form a locally ordered structure, which is essential for the efficient utilization of Pt active sites. Unlike other chemical oxidation schemes that used HNO3:H2SO4 (1:3) mixed acid to strongly oxidize MWCNTs, this scheme adopted pure HNO3 to partially oxidize MWCNTs at a lower reflux temperature (80 ℃) and shorter treatment times (0 to 10 h) to preserve the morphology and intrinsic conductivity of the MWCNTs. A maximum power density of 979.81 mW cm-2 was achieved by Pt loaded on MWCNTs oxidized for 6 h (Pt-MWCNT6h). This represented a 59.53% improvement over the commercial Pt/C catalyst (614.17 mW cm-2). In addition, due to the stronger electrical conductivity, the charge transfer resistance of Pt-MWCNT6h in the electrochemical impedance spectroscopy (EIS) test was 0.09 Ohm cm2, 48.86% lower than that of Pt/C. This study will discuss the developed catalysts and their efficacy in a working fuel cell system. This research will validate the impact of low-functionalization modification of MWCNTs on the performance of PEMFCs, simplifying CL preparation and contributing to the widespread commercial application of PEMFCs on a larger scale.
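The reported improvement figures follow directly from the stated values. The quick check below uses only numbers given in the abstract; the implied Pt/C charge-transfer resistance is our back-calculation from the stated 48.86% reduction, not a number the abstract reports:

```python
def pct_change(new, baseline):
    """Percentage change of `new` relative to `baseline`."""
    return (new - baseline) / baseline * 100.0

# Peak power density: Pt-MWCNT6h vs. commercial Pt/C (values from the abstract)
print(round(pct_change(979.81, 614.17), 2))  # 59.53 (% improvement)

# Charge-transfer resistance: 0.09 Ohm cm2 is stated to be 48.86% lower
# than Pt/C, implying a Pt/C value of roughly 0.09 / (1 - 0.4886)
print(round(0.09 / (1 - 0.4886), 3))  # ~0.176 Ohm cm2
```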

Keywords: carbon nanotubes, electrocatalyst, membrane electrode assembly, proton exchange membrane fuel cell

Procedia PDF Downloads 37
21 Case Study Hyperbaric Oxygen Therapy for Idiopathic Sudden Sensorineural Hearing Loss

Authors: Magdy I. A. Alshourbagi

Abstract:

Background: The National Institute for Deafness and Communication Disorders defines idiopathic sudden sensorineural hearing loss (ISSNHL) as the idiopathic loss of hearing of at least 30 dB across 3 contiguous frequencies occurring within 3 days. The most common clinical presentation involves an individual experiencing a sudden unilateral hearing loss, tinnitus, a sensation of aural fullness, and vertigo. The etiologies and pathologies of ISSNHL remain unclear. Several pathophysiological mechanisms have been described, including vascular occlusion, viral infections, labyrinthine membrane breaks, immune-associated disease, abnormal cochlear stress response, trauma, abnormal tissue growth, toxins, ototoxic drugs, and cochlear membrane damage. The rationale for the use of hyperbaric oxygen to treat ISSNHL is supported by an understanding of the high metabolism and paucity of vascularity of the cochlea. The cochlea and the structures within it require a high oxygen supply. The direct vascular supply, particularly to the organ of Corti, is minimal. Tissue oxygenation to the structures within the cochlea occurs via oxygen diffusion from cochlear capillary networks into the perilymph and the cortilymph. The perilymph is the primary oxygen source for these intracochlear structures. Unfortunately, perilymph oxygen tension is decreased significantly in patients with ISSNHL. To achieve a consistent rise in perilymph oxygen content, the arterial-perilymphatic oxygen concentration difference must be extremely high. This can be restored with hyperbaric oxygen therapy. Subject and Methods: A 37-year-old man presented at the clinic with a five-day history of muffled hearing and tinnitus of the right ear. Symptoms were of sudden onset, with no associated pain, dizziness, or otorrhea and no past history of hearing problems or medical illness. Family history was negative. Physical examination was normal.
Otologic examination revealed normal tympanic membranes bilaterally, with no evidence of cerumen or middle ear effusion. Tuning fork examination showed a positive Rinne test bilaterally but with lateralization of the Weber test to the left side, indicating right-ear sensorineural hearing loss. Audiometric analysis confirmed sensorineural hearing loss of about 70 dB across all frequencies in the right ear. Routine lab work was within normal limits. A clinical diagnosis of idiopathic sudden sensorineural hearing loss of the right ear was made, and the patient began medical treatment (corticosteroid, vasodilator, and HBO therapy). The recommended treatment profile consists of 100% O2 at 2.5 atmospheres absolute for 60 minutes daily (six days per week) for 40 treatments. The optimal number of HBOT treatments will vary, depending on the severity and duration of symptomatology and the response to treatment. Results: As HBOT is not yet a standard treatment for idiopathic sudden sensorineural hearing loss, it was introduced to this patient as an adjuvant therapy. The HBOT program was scheduled for 40 sessions; we used a 12-seat multiplace chamber, and HBOT was started on day seven after the hearing loss onset. After the tenth session of HBOT, improvement of both hearing (by audiogram) and tinnitus was obtained in the affected (right) ear. Conclusions: In conclusion, HBOT may be used for idiopathic sudden sensorineural hearing loss as an adjuvant therapy. It may promote oxygenation of the inner ear apparatus and revive hearing ability. Patients who fail to respond to oral and intratympanic steroids may benefit from this treatment. Further investigation is warranted, including animal studies to understand the molecular and histopathological aspects of HBOT and randomized controlled clinical studies.

Keywords: idiopathic sudden sensorineural hearing loss (ISSNHL), hyperbaric oxygen therapy (HBOT), decibel (dB), oxygen (O2)

Procedia PDF Downloads 408
20 Multiaxial Stress Based High Cycle Fatigue Model for Adhesive Joint Interfaces

Authors: Martin Alexander Eder, Sergei Semenov

Abstract:

Many glass-epoxy composite structures, such as large utility wind turbine rotor blades (WTBs), comprise adhesive joints with typically thick bond lines used to connect the different components during assembly. Performance optimization of rotor blades to increase power output while simultaneously maintaining high stiffness-to-low-mass ratios entails intricate geometries in conjunction with complex anisotropic material behavior. Consequently, adhesive joints in WTBs are subject to multiaxial stress states with significant stress gradients depending on the local joint geometry. Moreover, the dynamic aero-elastic interaction of the WTB with the airflow generates non-proportional, variable-amplitude stress histories in the material. Experience shows that a prominent failure type in WTBs is high cycle fatigue failure of adhesive bond line interfaces, which over time has developed into a design driver as WTB sizes increase rapidly. Structural optimization employed at an early design stage therefore sets high demands on computationally efficient interface fatigue models capable of predicting the critical locations prone to interface failure. The numerical stress-based interface fatigue model presented in this work uses the Drucker-Prager criterion to compute three different damage indices corresponding to the two interface shear tractions and the outward normal traction. The two-parameter Drucker-Prager model was chosen because of its ability to consider shear strength enhancement under compression and shear strength reduction under tension. The governing interface damage index is taken as the maximum of the triple. The damage indices are computed through the well-known linear Palmgren-Miner rule after separate rainflow counting of the equivalent shear stress history and the equivalent pure normal stress history.
The equivalent stress signals are obtained by self-similar scaling of the Drucker-Prager surface whose shape is defined by the uniaxial tensile strength and the shear strength such that it intersects with the stress point at every time step. This approach implicitly assumes that the damage caused by the prevailing multiaxial stress state is the same as the damage caused by an amplified equivalent uniaxial stress state in the three interface directions. The model was implemented as Python plug-in for the commercially available finite element code Abaqus for its use with solid elements. The model was used to predict the interface damage of an adhesively bonded, tapered glass-epoxy composite cantilever I-beam tested by LM Wind Power under constant amplitude compression-compression tip load in the high cycle fatigue regime. Results show that the model was able to predict the location of debonding in the adhesive interface between the webfoot and the cap. Moreover, with a set of two different constant life diagrams namely in shear and tension, it was possible to predict both the fatigue lifetime and the failure mode of the sub-component with reasonable accuracy. It can be concluded that the fidelity, robustness and computational efficiency of the proposed model make it especially suitable for rapid fatigue damage screening of large 3D finite element models subject to complex dynamic load histories.
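The damage-accumulation step described above can be sketched as follows. The S-N curve shape and all numerical values are hypothetical placeholders (the paper itself uses constant life diagrams in shear and tension); only the Palmgren-Miner summation D = Σ nᵢ/Nᵢ is the method being illustrated:

```python
def basquin_cycles_to_failure(stress_amplitude, sigma_f, b):
    """Allowable cycles N for a given stress amplitude from a
    Basquin-type S-N curve: sigma_a = sigma_f * N**b, with b < 0."""
    return (stress_amplitude / sigma_f) ** (1.0 / b)

def miner_damage(cycle_counts, stress_amplitudes, sigma_f, b):
    """Linear Palmgren-Miner damage index D = sum(n_i / N_i);
    failure is predicted when D >= 1. The paper's model computes one
    such index per interface traction direction and takes the maximum."""
    return sum(n / basquin_cycles_to_failure(s, sigma_f, b)
               for n, s in zip(cycle_counts, stress_amplitudes))

# Hypothetical S-N parameters and rainflow-counted stress bins (placeholders)
sigma_f, b = 100.0, -0.1   # S-N curve: 100 MPa at N=1, slope exponent -0.1
n_i = [1e4, 1e5]           # counted cycles per stress bin
s_i = [40.0, 25.0]         # equivalent stress amplitude per bin (MPa)

D = miner_damage(n_i, s_i, sigma_f, b)
print(f"damage index D = {D:.3f}")
```

In the paper's workflow, the `n_i`/`s_i` bins would come from rainflow counting of the equivalent shear and equivalent normal stress histories obtained by scaling the Drucker-Prager surface.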

Keywords: adhesive, fatigue, interface, multiaxial stress

Procedia PDF Downloads 142
19 Improving Diagnostic Accuracy of Ankle Syndesmosis Injuries: A Comparison of Traditional Radiographic Measurements and Computed Tomography-Based Measurements

Authors: Yasar Samet Gokceoglu, Ayse Nur Incesu, Furkan Okatar, Berk Nimetoglu, Serkan Bayram, Turgut Akgul

Abstract:

Ankle syndesmosis injuries pose a significant challenge in orthopedic practice due to their potential for prolonged recovery and chronic ankle dysfunction. Accurate diagnosis and management of these injuries are essential for achieving optimal patient outcomes. The use of radiological methods, such as X-ray, computed tomography (CT), and magnetic resonance imaging (MRI), plays a vital role in the accurate diagnosis of syndesmosis injuries in the context of ankle fractures. Treatment options for ankle syndesmosis injuries vary, with surgical interventions such as screw fixation and suture-button implantation being commonly employed. The choice of treatment is influenced by the severity of the injury and the presence of associated fractures. Additionally, the mechanism of injury, such as pure syndesmosis injury or specific fracture types, can impact the stability and management of syndesmosis injuries. Ankle fractures with syndesmosis injury present a complex clinical scenario, requiring accurate diagnosis, appropriate reduction, and tailored management strategies. The interplay between the mechanism of injury, associated fractures, and treatment modalities significantly influences the outcomes of these challenging injuries. The long-term outcomes and patient satisfaction following ankle fractures with syndesmosis injury are crucial considerations in the field of orthopedics. Patient-reported outcome measures, such as the Foot and Ankle Outcome Score (FAOS), provide essential information about functional recovery and quality of life after these injuries. When diagnosing syndesmosis injuries, standard measurements, such as the medial clear space, tibiofibular overlap, tibiofibular clear space, anterior tibiofibular ratio (ATFR), and the anterior-posterior tibiofibular ratio (APTF), are assessed through radiographs and computed tomography (CT) scans. 
These parameters are critical in evaluating the presence and severity of syndesmosis injuries, enabling clinicians to choose the most appropriate treatment approach. Despite advancements in diagnostic imaging, challenges remain in accurately diagnosing and treating ankle syndesmosis injuries. Traditional diagnostic parameters, while beneficial, may not capture the full extent of the injury or provide sufficient information to guide therapeutic decisions. This gap highlights the need for exploring additional diagnostic parameters that could enhance the accuracy of syndesmosis injury diagnoses and inform treatment strategies more effectively. The primary goal of this research is to evaluate the usefulness of traditional radiographic measurements in comparison to new CT-based measurements for diagnosing ankle syndesmosis injuries. Specifically, this study aims to assess the accuracy of conventional parameters, including medial clear space, tibiofibular overlap, tibiofibular clear space, ATFR, and APTF, in contrast with the recently proposed CT-based measurements such as the delta and gamma angles. Moreover, the study intends to explore the relationship between these diagnostic parameters and functional outcomes, as measured by the Foot and Ankle Outcome Score (FAOS). Establishing a correlation between specific diagnostic measurements and FAOS scores will enable us to identify the most reliable predictors of functional recovery following syndesmosis injuries. This comparative analysis will provide valuable insights into the accuracy and dependability of CT-based measurements in diagnosing ankle syndesmosis injuries and their potential impact on predicting patient outcomes. The results of this study could greatly influence clinical practices by refining diagnostic criteria and optimizing treatment planning for patients with ankle syndesmosis injuries.

Keywords: ankle syndesmosis injury, diagnostic accuracy, computed tomography, radiographic measurements, tibiofibular syndesmosis distance

Procedia PDF Downloads 39
18 Development of Anti-Fouling Surface Features Bioinspired by the Patterned Micro-Textures of the Scophthalmus rhombus (Brill)

Authors: Ivan Maguire, Alan Barrett, Alex Forte, Sandra Kwiatkowska, Rohit Mishra, Jens Ducrèe, Fiona Regan

Abstract:

Biofouling is defined as the gradual accumulation of organisms on submerged surfaces. Biomimetics refers to the use and imitation of principles copied from nature and has found interest across many commercial disciplines. Among many biological objects and their functions, aquatic animals deserve special attention due to their antimicrobial capabilities resulting from chemical composition, surface topography, or other behavioural defences, which can be used as an inspiration for antifouling technology. Marine biofouling has detrimental effects on seagoing vessels, both commercial and leisure, as well as on oceanographic sensors, offshore drilling rigs, and aquaculture installations. Sensor optics, membranes, housings, and platforms can become fouled, leading to problems with sensor performance and data integrity. While many anti-fouling solutions are currently being investigated as a cost-cutting measure, biofouling settlement may also be prevented by creating a surface that does not satisfy the settlement conditions. Brill (Scophthalmus rhombus) is a small flatfish occurring in marine waters of the Mediterranean as well as of Norway and Iceland. It inhabits sandy and muddy coastal waters from 5 to 80 meters deep. Its skin colour changes depending on the environment but is generally brownish with light and dark freckles and a creamy underside. Brill is oval in shape, and its flesh is white. The aim of this study is to translate the unique micro-topography of the brill scale to design a marine-inspired biomimetic surface coating and test it against a typical fouling organism. Following an extensive study of the scale topography of the brill fish (Scophthalmus rhombus) and the settlement behaviour of the diatom species Psammodictyon sp.
via SEM, two state-of-the-art antifouling surface solutions were designed and investigated: a brill fish scale bioinspired surface pattern platform (BFD), and a generic, uniformly arrayed circular micropillar platform (MPD), with offsets based on the diatom species’ settlement behaviour. The BFD approach consists of different ~5 μm by ~90 μm brill-replica patterns, grown to a 5 μm height, in a linear array pattern. The MPD approach utilises hexagonally packed cylindrical pillars 10.6 μm in diameter, grown to a height of 5 μm, with a vertical offset of 15 μm and a horizontal offset of 26.6 μm. Photolithography was employed for microstructure growth, with a polydimethylsiloxane (PDMS) chip used as a testbed for diatom adhesion on both platforms. Settlement and adhesion tests were performed using this PDMS microfluidic chip by subjecting it to centrifugal force via an in-house developed ‘spin-stand’, which features a motor in combination with a high-resolution camera for real-time observation of diatom release from the PDMS material. Diatom adhesion strength can therefore be determined based on the centrifugal force generated at varying rotational speeds. It is hoped that both the replica and bio-inspired solutions will give comparable anti-fouling results to existing synthetic surfaces, whilst also helping to determine whether anti-fouling solutions should predominantly pursue fully bioreplica-based designs or bioinspired, synthetically based ones.
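The adhesion-strength readout relies on the centrifugal force on a settled diatom scaling with the square of the rotational speed, F = m·ω²·r. A minimal sketch with an assumed particle mass and radial position (not values from the study):

```python
import math

def centrifugal_force(mass_kg, radius_m, rpm):
    """Centrifugal force F = m * omega^2 * r on a settled particle."""
    omega = 2.0 * math.pi * rpm / 60.0   # angular speed in rad/s
    return mass_kg * omega ** 2 * radius_m

# Assumed illustrative values (not from the study): a ~1 ng diatom
# settled 4 cm from the chip's centre of rotation
m = 1e-12   # kg
r = 0.04    # m
for rpm in (1000, 3000, 6000):
    print(f"{rpm:5d} rpm -> F = {centrifugal_force(m, r, rpm):.2e} N")
```

Because F grows with the square of the rotational speed, stepping the spin-stand through increasing speeds and recording the speed at which each diatom releases gives a direct estimate of its adhesion strength.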

Keywords: anti-fouling applications, bio-inspired microstructures, centrifugal microfluidics, surface modification

Procedia PDF Downloads 293
17 Leveraging Digital Transformation Initiatives and Artificial Intelligence to Optimize Readiness and Simulate Mission Performance across the Fleet

Authors: Justin Woulfe

Abstract:

Siloed logistics and supply chain management systems throughout the Department of Defense (DoD) have led to disparate approaches to modeling and simulation (M&S), a lack of understanding of how one system impacts the whole, and issues with “optimal” solutions that are good for one organization but have dramatically negative impacts on another. Many different systems have evolved to try to understand and account for uncertainty and to reduce the consequences of the unknown. As the DoD undertakes expansive digital transformation initiatives, there is an opportunity to fuse traditionally disparate data into a centrally hosted source of truth and leverage it. With a streamlined process incorporating machine learning (ML) and artificial intelligence (AI), advanced M&S will enable informed decisions that guide program success via optimized operational readiness and improved mission success. One current challenge is to leverage the terabytes of data generated by monitored systems to provide actionable information to all levels of users. A cloud-based application that analyzes data transactions, learns and predicts future states from current and past states in real time, and communicates those anticipated states is an appropriate solution for reducing latency and improving confidence in decisions. Decisions made with an ML and AI application combined with advanced optimization algorithms will improve the mission success and performance of systems, which will improve the overall cost and effectiveness of any program. The Systecon team constructs and employs model-based simulations, cutting across traditional silos of data, aggregating maintenance and supply data, incorporating sensor information, and applying optimization and simulation methods to an as-maintained digital twin, with the ability to aggregate results across a system’s lifecycle and across logical and operational groupings of systems.
This coupling of data throughout the enterprise enables tactical, operational, and strategic decision support, detachable and deployable logistics services, and configuration-based automated distribution of digital technical and product data to enhance supply and logistics operations. As a complete solution, this approach significantly reduces program risk by allowing flexible configuration of data, data relationships, and business process workflows, and by enabling early test and evaluation, especially budget trade-off analyses. A true capability to tie resources (dollars) to weapon system readiness, in alignment with the real-world scenarios a warfighter may experience, is an objective that has yet to be realized. By developing and solidifying an organic capability to relate dollars directly to readiness and to inform the digital twin, the decision-maker is empowered with valuable insight and traceability. This kind of educated decision-making provides an advantage over adversaries who struggle to maintain system readiness at an affordable cost. The M&S capability developed allows program managers to independently evaluate system design and support decisions by quantifying their impact on operational availability and on operations and support cost, resulting in the ability to optimize readiness and cost simultaneously. This allows stakeholders to make data-driven decisions when trading cost against readiness throughout the life of the program. Finally, sponsors can validate product deliverables more efficiently and with much higher accuracy than in previous years.
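The “dollars to readiness” trade-off described above is, at its core, a marginal-allocation problem: spend each increment of budget where it buys the most availability. The following toy sketch illustrates the greedy cost-effectiveness idea; it is not Systecon's actual algorithm, and the availability model and item data are invented for illustration:

```python
import math

def marginal_allocation(items, budget):
    """Greedy marginal allocation: repeatedly buy the spare whose
    availability gain per dollar is largest, until the budget is exhausted.

    Toy availability model (an assumption for this sketch): each extra
    spare of an item halves that item's shortage risk, and system
    availability is the product of the per-item terms.
    """
    spares = {name: 0 for name in items}

    def avail(s):
        return math.prod(1.0 - items[n]["risk"] * 0.5 ** s[n] for n in s)

    spent = 0.0
    while True:
        base = avail(spares)
        best, best_ratio = None, 0.0
        for n, it in items.items():
            if spent + it["cost"] > budget:
                continue                      # can no longer afford this item
            trial = dict(spares)
            trial[n] += 1
            ratio = (avail(trial) - base) / it["cost"]
            if ratio > best_ratio:
                best, best_ratio = n, ratio
        if best is None:                      # nothing affordable remains
            return spares, spent, base
        spares[best] += 1
        spent += items[best]["cost"]
```

Running the loop with a budget sweep traces out the availability-versus-cost curve, which is the shape a program manager trades against when balancing readiness and cost.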

Keywords: artificial intelligence, digital transformation, machine learning, predictive analytics

Procedia PDF Downloads 130
16 Geospatial and Statistical Evidences of Non-Engineered Landfill Leachate Effects on Groundwater Quality in a Highly Urbanised Area of Nigeria

Authors: David A. Olasehinde, Peter I. Olasehinde, Segun M. A. Adelana, Dapo O. Olasehinde

Abstract:

An investigation was carried out on the underground water system dynamics within the Ilorin metropolis to monitor subsurface flow and its corresponding pollution. Africa's population growth rate is the highest among the regions of the world, especially in urban areas. A corresponding increase in waste generation and a change in waste composition from predominantly organic to non-organic waste have also been observed. Percolation of leachate from non-engineered landfills, the chief means of waste disposal in many African cities, constitutes a threat to underground water bodies. Ilorin city, a transboundary town in southwestern Nigeria, is a ready microcosm of Africa’s unique challenge. Although groundwater is naturally protected from common contaminants such as bacteria, since the subsurface provides a natural attenuation process, the groundwater samples nevertheless possess relatively high levels of dissolved chemical contaminants such as bicarbonate, sodium, and chloride, which pose a great threat to environmental receptors and to human consumption. A Geographic Information System (GIS) was used as a tool to illustrate the subsurface dynamics and the corresponding pollutant indicators. Forty-four sampling points were selected around known groundwater pollutant sources: major old dumpsites without landfill liners. The results of the groundwater flow directions and the corresponding contaminant transport were presented using expert geospatial software. The experimental results were subjected to four descriptive statistical analyses, namely principal component analysis, Pearson correlation analysis, scree plot analysis, and Ward cluster analysis.
Regression models were also developed, aimed at finding functional relationships that adequately describe the behaviour of the water quality parameters in terms of the hypothetical landfill-related factors that may influence them, namely: distance of the water source from the dumpsites, static water level of the groundwater, subsurface permeability (inferred from hydraulic gradient), and soil infiltration. The regression equations developed were validated using a graphical approach. Underground water appears to flow from the northern portion of the Ilorin metropolis southwards, transporting contaminants. The pollution pattern in the study area generally assumed a bimodal distribution, with the major concentration of chemical pollutants in the underground watershed and the recharge areas. The correlation between contaminant concentrations and the spread of pollution indicates that areas of lower subsurface permeability display a higher concentration of dissolved chemical content. The principal component analysis showed that conductivity, suspended solids, calcium hardness, total dissolved solids, total coliforms, and coliforms were the chief contaminant indicators in the underground water system of the study area. Pearson correlation revealed a high correlation of electrical conductivity with many of the parameters analyzed. In the same vein, the regression models suggest that the heavier the molecular weight of a chemical contaminant from a point source, the greater the pollution of the underground water system at short distances. The study concludes that landfill characteristics have a significant effect on groundwater quality in the study area.
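The principal-component and scree-plot steps described above can be reproduced with a standard eigendecomposition of the correlation matrix; the scree plot is then just the sorted eigenvalue spectrum. A minimal sketch on synthetic data (the 44 × 6 random matrix stands in for the real water-quality measurements, which are not reproduced here):

```python
import numpy as np

def pca_scree(X):
    """PCA via the correlation matrix: returns the explained-variance
    ratios (scree values, descending) and the component loadings."""
    R = np.corrcoef(X, rowvar=False)          # parameters as columns
    evals, evecs = np.linalg.eigh(R)
    order = np.argsort(evals)[::-1]           # descending, as drawn in a scree plot
    evals, evecs = evals[order], evecs[:, order]
    return evals / evals.sum(), evecs

# synthetic stand-in: 44 sampling points x 6 water-quality parameters
rng = np.random.default_rng(1)
X = rng.normal(size=(44, 6))
explained, loadings = pca_scree(X)
```

The dominant contaminant indicators are then read off as the parameters with the largest absolute loadings on the leading components.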

Keywords: dumpsite, leachate, groundwater pollution, linear regression, principal component

Procedia PDF Downloads 86
15 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering  

Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi

Abstract:

In the past decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA (cfDNA). Accordingly, identifying the tissue of origin of these DNA fragments from plasma can enable faster and more accurate disease diagnosis and more precise treatment protocols. Open chromatin regions (OCRs) are important epigenetic features of DNA that reflect the cell types of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insights into tissue-specific and disease-specific regulatory mechanisms. Several studies in the area of cancer liquid biopsy integrate distinct genomic and epigenomic features for early cancer detection along with tissue-of-origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, which leads to substantial cost and time. To overcome these limitations, the idea of predicting OCRs from whole genome sequencing (WGS) data is of particular importance. In this regard, we propose a computational approach to predict open chromatin regions, an important epigenetic feature, from cell-free DNA whole genome sequencing data. To fulfill this objective, the local sequencing depth is fed to the proposed algorithm, and the most probable open chromatin regions are predicted from the whole genome sequencing data. Our method integrates a signal processing approach with sequencing depth data and includes count normalization, Discrete Fourier Transform conversion, graph construction, graph cut optimization by linear programming, and clustering.
To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions from human blood samples in the ATAC-DB database. The overlap between the predicted open chromatin regions and the experimentally validated regions obtained by ATAC-seq in ATAC-DB is greater than 67%, which indicates a meaningful prediction. OCRs are known to be located mostly at the transcription start sites (TSSs) of genes. In this regard, we compared the concordance between the predicted OCRs and the human gene TSS regions obtained from refTSS, which showed agreement of around 52.04% with all genes and ~78% with the housekeeping genes. Accurately detecting open chromatin regions from plasma cell-free DNA-seq data is a very challenging computational problem due to several confounding factors, such as technical and biological variations. Although this approach is in its infancy, there has already been an attempt to apply it, leading to a tool named OCRDetector, which has some restrictions, such as the need for high-depth cfDNA WGS data, prior information about the OCR distribution, and the use of multiple features. In contrast, we implemented graph signal clustering based on a single depth feature in an unsupervised learning manner, which resulted in faster performance and decent accuracy. Overall, we investigated the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used along with other tools to investigate the genetic and epigenetic aspects of a single whole genome sequencing dataset for efficient liquid biopsy-related analysis.
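The validation step above reduces to interval-overlap bookkeeping: what fraction of predicted regions intersect at least one experimentally validated region. A minimal sketch (the coordinates are hypothetical, and a real comparison would be done per chromosome, e.g. with bedtools-style intersection):

```python
def overlaps(a, b):
    """True if two half-open genomic intervals (start, end) intersect."""
    return a[0] < b[1] and b[0] < a[1]

def overlap_fraction(predicted, validated):
    """Fraction of predicted OCRs that hit at least one validated OCR."""
    if not predicted:
        return 0.0
    hits = sum(any(overlaps(p, v) for v in validated) for p in predicted)
    return hits / len(predicted)

# hypothetical coordinates on a single chromosome
predicted = [(100, 600), (1000, 1500), (3000, 3200)]
validated = [(400, 700), (1400, 2000)]
frac = overlap_fraction(predicted, validated)   # 2 of 3 predicted regions overlap
```

The same function, applied with refTSS promoter windows as the validated set, yields the TSS-concordance figures quoted above.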

Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering

Procedia PDF Downloads 116
14 Investigation on Pull-Out-Behavior and Interface Critical Parameters of Polymeric Fibers Embedded in Concrete and Their Correlation with Particular Fiber Characteristics

Authors: Michael Sigruener, Dirk Muscat, Nicole Struebbe

Abstract:

Fiber reinforcement is a state-of-the-art method for enhancing the mechanical properties of plastics. For concrete and civil engineering, steel reinforcements are commonly used. Steel reinforcements have disadvantages in their chemical resistance and weight, whereas the major problems of polymer fibers are fiber-matrix adhesion and mechanical properties. In spite of these facts, longevity, easy handling, and chemical resistance motivate researchers to develop polymeric materials for fiber-reinforced concrete. Adhesion and interfacial mechanisms in fiber-polymer composites have already been studied thoroughly. For polymer fibers used as concrete reinforcement, the bonding behavior still requires deeper investigation. Therefore, several different polymers (e.g., polypropylene (PP), polyamide 6 (PA6), and polyetheretherketone (PEEK)) were spun into fibers via single screw extrusion and monoaxial stretching. The fibers were then embedded in a concrete matrix, and Single-Fiber Pull-Out Tests (SFPT) were conducted to investigate the bonding characteristics and microstructural interface of the composite. Differences in the maximum pull-out force, the displacement, and the slope of the linear part of the force-displacement curve, which reflect the adhesion strength and the ductility of the interfacial bond, were studied. In SFPT, fiber debonding is an inhomogeneous process in which interfacial bonding and friction mechanisms add up to the resulting value. Therefore, correlations between polymer properties and pull-out mechanisms have to be examined. To investigate these correlations, all fibers were subjected to a series of analyses, including differential scanning calorimetry (DSC), contact angle measurement, surface roughness and hardness analysis, tensile testing, and scanning electron microscopy (SEM).
For each polymer, smooth and abraded fibers were tested, first to simulate the abrasion and damage caused by a concrete mixing process and second to estimate the influence of the mechanical anchoring of rough surfaces. In general, abraded fibers showed a significant increase in maximum pull-out force due to better mechanical anchoring. Friction processes therefore play a major role in increasing the maximum pull-out force. The polymer hardness affects the tribological behavior, and polymers with high hardness have lower surface roughness, as verified by SEM and surface roughness measurements. This results in a decreased maximum pull-out force for hard polymers. Polymers with high surface energy show better interfacial bonding strength in general, which coincides with the conducted SFPT investigation. Polymers such as PEEK and PA6 show higher bonding strength for both smooth and roughened fibers, revealed through high pull-out forces and through concrete particles bonded to the fiber surface, as pictured via SEM analysis. The surface energy divides into a dispersive and a polar part, and the slope of the force-displacement curve correlates with the polar part: only polar polymers increase their SFPT slope through better wetting when a rough surface offers a larger bonding area. Hence, the maximum force and the bonding strength of an embedded fiber are a function of polarity, hardness, and, consequently, surface roughness. Other properties, such as crystallinity or tensile strength, do not affect the bonding behavior. Through the conducted analyses, it is now feasible to understand and resolve the different effects in pull-out behavior step by step, based on the polymer properties themselves. This investigation developed a roadmap for engineering highly adhering polymeric materials for the fiber reinforcement of concrete.
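The two SFPT metrics discussed above, maximum pull-out force and the slope of the initial linear region, can be extracted from a recorded force-displacement curve by a least-squares fit over the early portion of the trace. A minimal sketch (fitting the first 30% of points is an assumption for illustration, not the study's procedure):

```python
import numpy as np

def pullout_metrics(displacement, force, linear_frac=0.3):
    """Return (max pull-out force, initial slope) from an SFPT trace.

    The slope is a least-squares line fit over the first `linear_frac`
    of the points, assumed to lie in the linear elastic region.
    """
    d = np.asarray(displacement, dtype=float)
    f = np.asarray(force, dtype=float)
    f_max = f.max()
    n = max(2, int(len(d) * linear_frac))     # points used for the linear fit
    slope, _intercept = np.polyfit(d[:n], f[:n], 1)
    return f_max, slope

# synthetic trace: linear rise, then a friction-dominated plateau
d = np.linspace(0.0, 1.0, 100)
f = np.minimum(3.0 * d, 1.5)
f_max, slope = pullout_metrics(d, f)
```

Comparing `slope` between smooth and abraded fibers of the same polymer then separates the adhesion contribution from the friction contribution described above.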

Keywords: fiber-matrix interface, polymeric fibers, fiber reinforced concrete, single fiber pull-out test

Procedia PDF Downloads 91