Search results for: white box testing
235 Development of a Novel Ankle-Foot Orthotic Using a User Centered Approach for Improved Satisfaction
Authors: Ahlad Neti, Elisa Arch, Martha Hall
Abstract:
Studies have shown that individuals who use Ankle-Foot-Orthoses (AFOs) have a high level of dissatisfaction regarding their current AFOs. Studies point to the focus on technical design with little attention given to the user perspective as a source of AFO designs that leave users dissatisfied. To design a new AFO that satisfies users and thereby improves their quality of life, the reasons for their dissatisfaction and their wants and needs for an improved AFO design must be identified. There has been little research into the user perspective on AFO use and desired improvements, so the relationship between AFO design and satisfaction in daily use must be assessed to develop appropriate metrics and constraints prior to designing a novel AFO. To assess the user perspective on AFO design, structured interviews were conducted with 7 individuals (average age of 64.29±8.81 years) who use AFOs. All interviews were transcribed and coded to identify common themes using Grounded Theory Method in NVivo 12. Qualitative analysis of these results identified sources of user dissatisfaction such as heaviness, bulk, and uncomfortable material and overall needs and wants for an AFO. Beyond the user perspective, certain objective factors must be considered in the construction of metrics and constraints to ensure that the AFO fulfills its medical purpose. These more objective metrics are rooted in a common medical device market and technical standards. Given the large body of research concerning these standards, these objective metrics and constraints were derived through a literature review. Through these two methods, a comprehensive list of metrics and constraints accounting for both the user perspective on AFO design and the AFO’s medical purpose was compiled. These metrics and constraints will establish the framework for designing a new AFO that carries out its medical purpose while also improving the user experience. The metrics can be categorized into several overarching areas for AFO improvement. Categories of user perspective related metrics include comfort, discreteness, aesthetics, ease of use, and compatibility with clothing. Categories of medical purpose related metrics include biomechanical functionality, durability, and affordability. These metrics were used to guide an iterative prototyping process. Six concepts were ideated and compared using system-level analysis. From these six concepts, two concepts – the piano wire model and the segmented model – were selected to move forward into prototyping. Evaluation of non-functional prototypes of the piano wire and segmented models determined that the piano wire model better fulfilled the metrics by offering increased stability, longer durability, fewer points for failure, and a strong enough core component to allow a sock to cover over the AFO while maintaining the overall structure. As such, the piano wire AFO has moved forward into the functional prototyping phase, and healthy subject testing is being designed and recruited to conduct design validation and verification.Keywords: ankle-foot orthotic, assistive technology, human centered design, medical devices
Procedia PDF Downloads 155
234 Polyurethane Membrane Mechanical Property Study for a Novel Carotid Covered Stent
Authors: Keping Zuo, Jia Yin Chia, Gideon Praveen Kumar Vijayakumar, Foad Kabinejadian, Fangsen Cui, Pei Ho, Hwa Liang Leo
Abstract:
The carotid artery is the major vessel supplying blood to the brain. Carotid artery stenosis is one of the three major causes of stroke, and stroke is the fourth leading cause of death and the first leading cause of disability in most developed countries. Although there is increasing interest in carotid artery stenting for the treatment of cervical carotid artery bifurcation atherosclerotic disease, currently available bare-metal stents cannot provide adequate protection against the detachment of plaque fragments over the diseased carotid artery, which could result in the formation of micro-emboli and subsequent stroke. Our research group has recently developed a novel preferential covered stent for the carotid artery that aims to prevent friable fragments of atherosclerotic plaques from flowing into the cerebral circulation while retaining the ability to preserve flow to the external carotid artery. Preliminary animal studies have demonstrated the potential of this novel covered-stent design for the treatment of carotid atherosclerotic stenosis. The purpose of this study is to evaluate the biomechanical properties of PU membranes of different concentration configurations in order to refine the stent coating technique and enhance the clinical performance of our novel carotid covered stent. Results from this study also provide the material property information needed for accurate simulation analysis of our stents. Methods: Medical-grade polyurethane (ChronoFlex AR) was used to prepare PU membrane specimens. Different PU membrane configurations were subjected to uniaxial testing: 22%, 16%, and 11% PU solutions were made by mixing the original solution with the proper amount of dimethylacetamide (DMAC). The specimens were then immersed in physiological saline solution for 24 hours before testing. All specimens were moistened with saline solution before mounting and subsequent uniaxial testing. The specimens were preconditioned by loading the PU membrane sample to a peak stress of 5.5 MPa for 10 consecutive cycles at a rate of 50 mm/min. The specimens were then stretched to failure at the same loading rate. Results: The stress-strain response curves of all PU membrane samples exhibited nonlinear characteristics. The ultimate failure stress of the 22% PU membrane was significantly higher than that of the 16% membrane (p<0.05). In general, our preliminary results showed that the lower-concentration PU membrane is stiffer than the higher-concentration one. From the perspective of mechanical properties, the 22% PU membrane is a better choice for the covered stent. Interestingly, the hyperelastic Ogden model is able to accurately capture the nonlinear, isotropic stress-strain behavior of the PU membrane, with R² of 0.9977 ± 0.00172. This result will be useful for future biomechanical analysis of our stent designs and will play an important role in computational modeling of our covered-stent fatigue study. Keywords: carotid artery, covered stent, nonlinear, hyperelastic, stress, strain
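For readers unfamiliar with the Ogden fit mentioned above, a common one-term incompressible form and its uniaxial nominal stress are sketched below; the abstract does not state which order of the model was fitted, so the one-term form and the parameter convention are assumptions.

```latex
% One-term incompressible Ogden strain energy (assumed form; \mu, \alpha are fitted parameters)
W(\lambda_1,\lambda_2,\lambda_3) = \frac{\mu}{\alpha}\left(\lambda_1^{\alpha}+\lambda_2^{\alpha}+\lambda_3^{\alpha}-3\right)
% Uniaxial tension with incompressibility (\lambda_2=\lambda_3=\lambda^{-1/2}) gives the nominal stress
P(\lambda) = \frac{\mu}{\lambda}\left(\lambda^{\alpha}-\lambda^{-\alpha/2}\right)
```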
Procedia PDF Downloads 307
233 Comparison of Non-destructive Devices to Quantify the Moisture Content of Bio-Based Insulation Materials on Construction Sites
Authors: Léa Caban, Lucile Soudani, Julien Berger, Armelle Nouviaire, Emilio Bastidas-Arteaga
Abstract:
Improvement of the thermal performance of buildings is a high concern for the construction industry. With the increase in environmental issues, new types of construction materials are being developed. These include bio-based insulation materials. They capture carbon dioxide, can be produced locally, and have good thermal performance. However, their behavior with respect to moisture transfer is still facing some issues. With a high porosity, the mass transfer is more important in those materials than in mineral insulation ones. Therefore, they can be more sensitive to moisture disorders such as mold growth, condensation risks or decrease of the wall energy efficiency. For this reason, the initial moisture content on the construction site is a piece of crucial knowledge. Measuring moisture content in a laboratory is a mastered task. Diverse methods exist but the easiest and the reference one is gravimetric. A material is weighed dry and wet, and its moisture content is mathematically deduced. Non-destructive methods (NDT) are promising tools to determine in an easy and fast way the moisture content in a laboratory or on construction sites. However, the quality and reliability of the measures are influenced by several factors. Classical NDT portable devices usable on-site measure the capacity or the resistivity of materials. Water’s electrical properties are very different from those of construction materials, which is why the water content can be deduced from these measurements. However, most moisture meters are made to measure wooden materials, and some of them can be adapted for construction materials with calibration curves. Anyway, these devices are almost never calibrated for insulation materials. The main objective of this study is to determine the reliability of moisture meters in the measurement of biobased insulation materials. The determination of which one of the capacitive or resistive methods is the most accurate and which device gives the best result is made. Several biobased insulation materials are tested. Recycled cotton, two types of wood fibers of different densities (53 and 158 kg/m3) and a mix of linen, cotton, and hemp. It seems important to assess the behavior of a mineral material, so glass wool is also measured. An experimental campaign is performed in a laboratory. A gravimetric measurement of the materials is carried out for every level of moisture content. These levels are set using a climatic chamber and by setting the relative humidity level for a constant temperature. The mass-based moisture contents measured are considered as references values, and the results given by moisture meters are compared to them. A complete analysis of the uncertainty measurement is also done. These results are used to analyze the reliability of moisture meters depending on the materials and their water content. This makes it possible to determine whether the moisture meters are reliable, and which one is the most accurate. It will then be used for future measurements on construction sites to assess the initial hygrothermal state of insulation materials, on both new-build and renovation projects.Keywords: capacitance method, electrical resistance method, insulation materials, moisture transfer, non-destructive testing
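For reference, the gravimetric (mass-based) moisture content used above as the benchmark is conventionally defined on a dry basis as follows; this is the standard definition, not a formula quoted from the abstract.

```latex
% Dry-basis gravimetric moisture content (standard definition)
u = \frac{m_{\mathrm{wet}} - m_{\mathrm{dry}}}{m_{\mathrm{dry}}} \times 100\%
```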
Procedia PDF Downloads 123
232 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping
Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo
Abstract:
Conventional methods for nutrient soil mapping are based on laboratory tests of samples that are obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons that researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques to spatially interpolate point values at an unobserved location from observations of values at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points. Hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Networks (ANN) scheme was used to predict macronutrient values at un-sampled points. ANN has become a popular tool for prediction as it eliminates certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorous and potassium values in the soil of the study area. A limited number of samples were used in the training, validation and testing phases of ANN (pattern reconstruction structures) to classify soil properties and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project at Selangor of Malaysia were used. Soil maps were produced by the Kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural network predicted values). For each macronutrient element, three types of maps were generated with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element, a base map using 236 actual samples and test maps using 118, 59 and 30 actual samples respectively produced by the Kriging method. A set of parameters was defined to measure the similarity of the maps that were generated with the proposed method, termed the sample reduction method. The results show that the maps that were generated through the sample reduction method were more accurate than the corresponding base maps produced through a smaller number of real samples. For example, nitrogen maps that were produced from 118, 59 and 30 real samples have 78%, 62%, 41% similarity, respectively with the base map (236 samples) and the sample reduction method increased similarity to 87%, 77%, 71%, respectively. Hence, this method can reduce the number of real samples and substitute ANN predictive samples to achieve the specified level of accuracy.Keywords: artificial neural network, kriging, macro nutrient, pattern recognition, precision farming, soil mapping
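A minimal sketch of the idea behind the sample reduction method is given below: train a back-propagation feed-forward network on a limited set of measured points, then generate "virtual" macronutrient values at unsampled locations before kriging. The synthetic coordinates and nutrient values are placeholders, not the Sawah Sempadan survey data.

```python
# Sketch only (not the authors' code): feed-forward network predicting N, P, K from location,
# then "virtual" predictions at unsampled grid nodes to densify the input for kriging.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
xy = rng.uniform(0, 1000, size=(118, 2))                  # synthetic sampled locations (m)
npk = np.column_stack([                                   # synthetic N, P, K values
    0.05 + 0.0001 * xy[:, 0] + rng.normal(0, 0.005, 118),
    30 + 0.01 * xy[:, 1] + rng.normal(0, 2, 118),
    150 + 0.02 * xy.sum(axis=1) + rng.normal(0, 10, 118),
])

X_tr, X_te, y_tr, y_te = train_test_split(xy, npk, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)
model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
model.fit(scaler.transform(X_tr), y_tr)
print("held-out R^2:", model.score(scaler.transform(X_te), y_te))

# "Virtual" samples at unsampled grid nodes; actual + virtual values then feed the kriging step
gx, gy = np.meshgrid(np.linspace(0, 1000, 25), np.linspace(0, 1000, 25))
virtual = model.predict(scaler.transform(np.column_stack([gx.ravel(), gy.ravel()])))
```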
Procedia PDF Downloads 70
231 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker
Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, some encoding methods, such as one-hot encoding or k-mers, have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine if they can provide improvements or if current methods offer superior results. Data from the 16S rRNA gene, a universal marker, was used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genes included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in Genbank file format, followed by a syntactic analysis to selectively extract relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes. Specifically, 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, Fourier transform, and Wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods. The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 Score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracies between encoding methods vary by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 Score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.Keywords: DNA encoding, machine learning, Fourier transform, Fourier transformation
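The following sketch illustrates two of the encodings compared above, one-hot encoding and k-mer counting, applied to a toy DNA fragment; it is not the authors' pipeline, and the example sequence is invented.

```python
# Sketch of two DNA encodings: one-hot (per-base binary matrix) and k-mer counts.
import numpy as np
from itertools import product

BASES = "ACGT"

def one_hot(seq):
    """Return a (len(seq), 4) binary matrix; ambiguous bases become all-zero rows."""
    idx = {b: i for i, b in enumerate(BASES)}
    mat = np.zeros((len(seq), 4), dtype=np.uint8)
    for row, base in enumerate(seq.upper()):
        if base in idx:
            mat[row, idx[base]] = 1
    return mat

def kmer_counts(seq, k=3):
    """Return a fixed-length vector of counts over all 4**k possible k-mers."""
    vocab = {"".join(p): i for i, p in enumerate(product(BASES, repeat=k))}
    vec = np.zeros(len(vocab), dtype=np.int32)
    seq = seq.upper()
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in vocab:
            vec[vocab[kmer]] += 1
    return vec

fragment = "ACGTTGCAGGCCTAACACATGCAAGTCGA"   # toy stand-in for a 16S rRNA read
print(one_hot(fragment).shape)               # (29, 4)
print(kmer_counts(fragment, k=3)[:10])       # first few 3-mer counts
```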
Procedia PDF Downloads 22
230 Multicenter Evaluation of the ACCESS HBsAg and ACCESS HBsAg Confirmatory Assays on the DxI 9000 ACCESS Immunoassay Analyzer, for the Detection of Hepatitis B Surface Antigen
Authors: Vanessa Roulet, Marc Turini, Juliane Hey, Stéphanie Bord-Romeu, Emilie Bonzom, Mahmoud Badawi, Mohammed-Amine Chakir, Valérie Simon, Vanessa Viotti, Jérémie Gautier, Françoise Le Boulaire, Catherine Coignard, Claire Vincent, Sandrine Greaume, Isabelle Voisin
Abstract:
Background: Beckman Coulter, Inc. has recently developed fully automated assays for the detection of HBsAg on a new immunoassay platform. The objective of this European multicenter study was to evaluate the performance of the ACCESS HBsAg and ACCESS HBsAg Confirmatory assays† on the recently CE-marked DxI 9000 ACCESS Immunoassay Analyzer. Methods: The clinical specificity of the ACCESS HBsAg and HBsAg Confirmatory assays was determined using HBsAg-negative samples from blood donors and hospitalized patients. The clinical sensitivity was determined using presumed HBsAg-positive samples. Sample HBsAg status was determined using a CE-marked HBsAg assay (Abbott ARCHITECT HBsAg Qualitative II, Roche Elecsys HBsAg II, or Abbott PRISM HBsAg assay) and a CE-marked HBsAg confirmatory assay (Abbott ARCHITECT HBsAg Qualitative II Confirmatory or Abbott PRISM HBsAg Confirmatory assay) according to manufacturer package inserts and pre-determined testing algorithms. False initial reactive rate was determined on fresh hospitalized patient samples. The sensitivity for the early detection of HBV infection was assessed internally on thirty (30) seroconversion panels. Results: Clinical specificity was 99.95% (95% CI, 99.86 – 99.99%) on 6047 blood donors and 99.71% (95%CI, 99.15 – 99.94%) on 1023 hospitalized patient samples. A total of six (6) samples were found false positive with the ACCESS HBsAg assay. None were confirmed for the presence of HBsAg with the ACCESS HBsAg Confirmatory assay. Clinical sensitivity on 455 HBsAg-positive samples was 100.00% (95% CI, 99.19 – 100.00%) for the ACCESS HBsAg assay alone and for the ACCESS HBsAg Confirmatory assay. The false initial reactive rate on 821 fresh hospitalized patient samples was 0.24% (95% CI, 0.03 – 0.87%). Results obtained on 30 seroconversion panels demonstrated that the ACCESS HBsAg assay had equivalent sensitivity performances compared to the Abbott ARCHITECT HBsAg Qualitative II assay with an average bleed difference since first reactive bleed of 0.13. All bleeds found reactive in ACCESS HBsAg assay were confirmed in ACCESS HBsAg Confirmatory assay. Conclusion: The newly developed ACCESS HBsAg and ACCESS HBsAg Confirmatory assays from Beckman Coulter have demonstrated high clinical sensitivity and specificity, equivalent to currently marketed HBsAg assays, as well as a low false initial reactive rate. †Pending achievement of CE compliance; not yet available for in vitro diagnostic use. 2023-11317 Beckman Coulter and the Beckman Coulter product and service marks mentioned herein are trademarks or registered trademarks of Beckman Coulter, Inc. in the United States and other countries. All other trademarks are the property of their respective owners.Keywords: dxi 9000 access immunoassay analyzer, hbsag, hbv, hepatitis b surface antigen, hepatitis b virus, immunoassay
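As an illustration of how such figures are typically derived, the sketch below computes a clinical specificity with a 95% exact (Clopper–Pearson) interval. The abstract does not state which interval method was used, and the split of the six false positives across cohorts is inferred here from the reported 99.95% on 6,047 donor samples, so both are assumptions.

```python
# Sketch: specificity point estimate with an exact (Clopper-Pearson) 95% CI.
from scipy.stats import beta

def specificity_with_ci(true_negatives, total_negatives, alpha=0.05):
    k, n = true_negatives, total_negatives
    point = k / n
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return point, lower, upper

# Assumed split: 3 of the 6 false positives in the 6,047-donor cohort (consistent with 99.95%)
point, lo, hi = specificity_with_ci(6047 - 3, 6047)
print(f"specificity {point:.2%} (95% CI {lo:.2%} - {hi:.2%})")
```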
Procedia PDF Downloads 88
229 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping
Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello
Abstract:
Batch processes are widely used in food industry and have an important role in the production of high added value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure, and are usually monitored using control charts based on multiway principal components analysis (MPCA). Process control of a new batch is carried out comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; it is clear that proper determination of the reference set is key for the success of a correct signalization of non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassifications of non-conforming batches in the conching phase may lead to significant financial losses. In such context, the accuracy of process control grows in relevance. In addition to that, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. Such assumption is often not satisfied in chocolate manufacturing process. As a consequence, traditional techniques as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables’ trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power of classification of batches in two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm. Real data from a milk chocolate conching process was collected and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts’ evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, synchronized datasets obtained from these methods performed differently when inputted in the KNN classification algorithm. Kassidas, MacGregor and Taylor’s (named KMT) method was deemed the best DTW method for aligning and synchronizing a milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity and 90.3% specificity in batch classification, being considered the best option to determine the reference set for the milk chocolate dataset. Such method was recommended due to the lowest number of iterations required to achieve convergence and highest average accuracy in the testing portion using the KNN classification technique.Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration
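A minimal sketch of classical dynamic time warping is given below for two trajectories of unequal duration; it shows the textbook accumulated-cost recursion rather than the KMT variant ultimately recommended, and the temperature ramps are toy data.

```python
# Sketch of classical DTW between two 1-D batch trajectories of different lengths.
import numpy as np

def dtw_distance(a, b):
    """Return the DTW distance and the accumulated-cost matrix for trajectories a, b."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m], cost[1:, 1:]

# Toy example: the same temperature ramp recorded over 495 and 600 "minutes"
ref = np.linspace(40, 70, 495) + np.random.default_rng(0).normal(0, 0.3, 495)
new = np.linspace(40, 70, 600) + np.random.default_rng(1).normal(0, 0.3, 600)
dist, _ = dtw_distance(ref, new)
print("DTW distance:", round(dist, 1))
```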
Procedia PDF Downloads 165
228 The Impact of Gestational Weight Gain on Subclinical Atherosclerosis, Placental Circulation and Neonatal Complications
Authors: Marina Shargorodsky
Abstract:
Aim: Gestational weight gain (GWG) has been related to altering future weight-gain curves and increased risks of obesity later in life. Obesity may contribute to vascular atherosclerotic changes as well as excess cardiovascular morbidity and mortality observed in these patients. Noninvasive arterial testing, such as ultrasonographic measurement of carotid IMT, is considered a surrogate for systemic atherosclerotic disease burden and is predictive of cardiovascular events in asymptomatic individuals as well as recurrent events in patients with known cardiovascular disease. Currently, there is no consistent evidence regarding the vascular impact of excessive GWG. The present study was designed to investigate the impact of GWG on early atherosclerotic changes during late pregnancy, using intima-media thickness, as well as placental vascular circulation and inflammatory lesions and pregnancy outcomes. Methods: The study group consisted of 59 pregnant women who gave birth and underwent a placental histopathological examination at the Department of Obstetrics and Gynecology, Edith Wolfson Medical Center, Israel, in 2019. According to the IOM guidelines the study group has been divided into two groups: Group 1 included 32 women with pregnancy weight gain within recommended range; Group 2 included 27 women with excessive weight gain during pregnancy. The IMT was measured from non-diseased intimal and medial wall layers of the carotid artery on both sides, visualized by high-resolution 7.5 MHz ultrasound (Apogee CX Color, ATL). Placental histology subdivided placental findings to lesions consistent with maternal vascular and fetal vascular malperfusion according to the criteria of the Society for Pediatric Pathology, subdividing placental findings to lesions consistent with maternal vascular and fetal vascular malperfusion, as well as the inflammatory response of maternal and fetal origin. Results: IMT levels differed between groups and were significantly higher in Group 1 compared to Group 2 (0.7+/-0.1 vs 0.6+/-0/1, p=0.028). Multiple linear regression analysis of IMT included variables based on their associations in univariate analyses with a backward approach. Included in the model were pre-gestational BMI, HDL cholesterol and fasting glucose. The model was significant (p=0.001) and correctly classified 64.7% of study patients. In this model, pre-pregnancy BMI remained a significant independent predictor of subclinical atherosclerosis assessed by IMT (OR 4.314, 95% CI 0.0599-0.674, p=0.044). Among placental lesions related to fetal vascular malperfusion, villous changes consistent with fetal thrombo-occlusive disease (FTOD) were significantly higher in Group 1 than in Group 2, p=0.034). In Conclusion, the present study demonstrated that excessive weight gain during pregnancy is associated with an adverse effect on early stages of subclinical atherosclerosis, placental vascular circulation and neonatal complications. The precise mechanism for these vascular changes, as well as the overall clinical impact of weight control during pregnancy on IMT, placental vascular circulation as well as pregnancy outcomes, deserves further investigation.Keywords: obesity, pregnancy, complications, weight gain
Procedia PDF Downloads 51
227 Risks beyond Cyber in IoT Infrastructure and Services
Authors: Mattias Bergstrom
Abstract:
Significance of the Study: This research will provide new insights into the risks with digital embedded infrastructure. Through this research, we will analyze each risk and its potential negation strategies, especially for AI and autonomous automation. Moreover, the analysis that is presented in this paper will convey valuable information for future research that can create more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks with hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions have then been evaluated on an open source IoT hardware setup. This list shows the identified passive and active risks evaluated in the research. Passive Risks: (1) Hardware failures- Critical Systems relying on high rate data and data quality are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivers erroneous data- Sensors break, and when they do so, they don’t always go silent; they can keep going, just that the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad Hardware injection- Erroneous generated sensor data can be pumped into a system by malicious actors with the intent to create disruptive noise in critical systems. (4) Data gravity- The weight of the data collected will affect Data-Mobility. (5) Cost inhibitors- Running services that need huge centralized computing is cost inhibiting. Large complex AI can be extremely expensive to run. Active Risks: Denial of Service- It is one of the most simple attacks, where an attacker just overloads the system with bogus requests so that valid requests disappear in the noise. Malware- Malware can be anything from simple viruses to complex botnets created with specific goals, where the creator is stealing computer power and bandwidth from you to attack someone else. Ransomware- It is a kind of malware, but it is so different in its implementation that it is worth its own mention. The goal with these pieces of software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing- By spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion or to corrupt and re-inject into a running system creating a data echo noise loop. After testing multiple potential solutions. We found that the most prominent solution to these risks was to use a Peer 2 Peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. By the devices autonomously policing themselves for deviant behavior, all risks listed above can be negated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution to any future autonomous IoT deployments. As it provides separation from the open Internet, at the same time, it is accessible over the blockchain keys.Keywords: IoT, security, infrastructure, SCADA, blockchain, AI
Procedia PDF Downloads 106
226 The Effect of Degraded Shock Absorbers on the Safety-Critical Tipping and Rolling Behaviour of Passenger Cars
Authors: Tobias Schramm, Günther Prokop
Abstract:
In Germany, the number of road fatalities has been falling since 2010 at a more moderate rate than before. At the same time, the average age of all registered passenger cars in Germany is rising continuously. Studies show that there is a correlation between the age and mileage of passenger cars and the degradation of their chassis components. Various studies show that degraded shock absorbers increase the braking distance of passenger cars and have a negative impact on driving stability. The exact effect of degraded vehicle shock absorbers on road safety is still the subject of research. A shock absorber examination as part of the periodic technical inspection is only mandatory in very few countries. In Germany, there is as yet no requirement for such a shock absorber examination. More comprehensive findings on the effect of degraded shock absorbers on the safety-critical driving dynamics of passenger cars can provide further arguments for the introduction of mandatory shock absorber testing as part of the periodic technical inspection. The specific effect chains of untripped rollover accidents are also still the subject of research. However, current research results show that the high proportion of sport utility vehicles in the vehicle field significantly increases the probability of untripped rollover accidents. The aim of this work is to estimate the effect of degraded twin-tube shock absorbers on the safety-critical tipping and rolling behaviour of passenger cars, which can lead to untripped rollover accidents. A characteristic curve-based five-mass full vehicle model and a semi-physical phenomenological shock absorber model were set up, parameterized and validated. The shock absorber model is able to reproduce the damping characteristics of vehicle twin-tube shock absorbers with oil and gas loss for various excitations. The full vehicle model was validated with steering wheel angle sinus sweep driving maneuvers. The model was then used to simulate steering wheel angle sine and fishhook maneuvers, which investigate the safety-critical tipping and rolling behavior of passenger cars. The simulations were carried out in a realistic parameter space in order to demonstrate the effect of various vehicle characteristics on the effect of degraded shock absorbers. As a result, it was shown that degraded shock absorbers have a negative effect on the tipping and rolling behavior of all passenger cars. Shock absorber degradation leads to a significant increase in the observed roll angles, particularly in the range of the roll natural frequency. This superelevation has a negative effect on the wheel load distribution during the driving maneuvers investigated. In particular, the height of the vehicle's center of gravity and the stabilizer stiffness of the vehicles has a major influence on the effect of degraded shock absorbers on the overturning and rolling behaviour of passenger cars.Keywords: numerical simulation, safety-critical driving dynamics, suspension degradation, tipping and rolling behavior of passenger cars, vehicle shock absorber
Procedia PDF Downloads 2
225 An Exploratory Study on the Integration of Neurodiverse University Students into Mainstream Learning and Their Performance: The Case of the Jones Learning Center
Authors: George Kassar, Phillip A. Cartwright
Abstract:
Based on data collected from The Jones Learning Center (JLC), University of the Ozarks, Arkansas, U.S., this study explores the impact of inclusive classroom practices on neuro-diverse college students’ and their consequent academic performance having participated in integrative therapies designed to support students who are intellectually capable of obtaining a college degree, but who require support for learning challenges owing to disabilities, AD/HD, or ASD. The purpose of this study is two-fold. The first objective is to explore the general process, special techniques, and practices of the (JLC) inclusive program. The second objective is to identify and analyze the effectiveness of the processes, techniques, and practices in supporting the academic performance of enrolled college students with learning disabilities following integration into mainstream university learning. Integrity, transparency, and confidentiality are vital in the research. All questions were shared in advance and confirmed by the concerned management at the JLC. While administering the questionnaire as well as conducted the interviews, the purpose of the study, its scope, aims, and objectives were clearly explained to all participants prior starting the questionnaire / interview. Confidentiality of all participants assured and guaranteed by using encrypted identification of individuals, thus limiting access to data to only the researcher, and storing data in a secure location. Respondents were also informed that their participation in this research is voluntary, and they may withdraw from it at any time prior to submission if they wish. Ethical consent was obtained from the participants before proceeding with videorecording of the interviews. This research uses a mixed methods approach. The research design involves collecting, analyzing, and “mixing” quantitative and qualitative methods and data to enable a research inquiry. The research process is organized based on a five-pillar approach. The first three pillars are focused on testing the first hypothesis (H1) directed toward determining the extent to the academic performance of JLC students did improve after involvement with comprehensive JLC special program. The other two pillars relate to the second hypothesis (H2), which is directed toward determining the extent to which collective and applied knowledge at JLC is distinctive from typical practices in the field. The data collected for research were obtained from three sources: 1) a set of secondary data in the form of Grade Point Average (GPA) received from the registrar, 2) a set of primary data collected throughout structured questionnaire administered to students and alumni at JLC, and 3) another set of primary data collected throughout interviews conducted with staff and educators at JLC. The significance of this study is two folds. First, it validates the effectiveness of the special program at JLC for college-level students who learn differently. Second, it identifies the distinctiveness of the mix of techniques, methods, and practices, including the special individualized and personalized one-on-one approach at JLC.Keywords: education, neuro-diverse students, program effectiveness, Jones learning center
Procedia PDF Downloads 72
224 Effect of Electric Arc Furnace Coarse Slag Aggregate And Ground Granulated Blast Furnace Slag on Mechanical and Durability Properties of Roller Compacted Concrete Pavement
Authors: Amiya Kumar Thakur, Dinesh Ganvir, Prem Pal Bansal
Abstract:
Industrial by product utilization has been encouraged due to environment and economic factors. Since electric arc furnace slag aggregate is a by-product of steel industry and its storage is a major concern hence it can be used as a replacement of natural aggregate as its physical and mechanical property are comparable or better than the natural aggregates. The present study investigates the effect of partial and full replacement of natural coarse aggregate with coarse EAF slag aggregate and partial replacement of cement with ground granulated blast furnace slag (GGBFS) on the mechanical and durability properties of roller compacted concrete pavement (RCCP).The replacement level of EAF slag aggregate were at five levels (i.e. 0% ,25% ,50%,75% & 100%) and of GGBFS was (0 % & 30%).The EAF slag aggregate was stabilized by exposing to outdoor condition for several years and the volumetric expansion test using steam exposure device was done to check volume stability. Soil compaction method was used for mix proportioning of RCCP. The fresh properties of RCCP investigated were fresh density and modified vebe test was done to measure the consistency of concrete. For investigating the mechanical properties various tests were done at 7 and 28 days (i.e. Compressive strength, split tensile strength, flexure strength modulus of elasticity) and also non-destructive testing was done at 28 days (i.e. Ultra pulse velocity test (UPV) & rebound hammer test). The durability test done at 28 days were water absorption, skid resistance & abrasion resistance. The results showed that with the increase in slag aggregate percentage there was an increase in the fresh density of concrete and also slight increase in the vebe time but with the 30 % GGBFS replacement the vebe time decreased and the fresh density was comparable to 0% GGBFS mix. The compressive strength, split tensile strength, flexure strength & modulus of elasticity increased with the increase in slag aggregate percentage in concrete when compared to control mix. But with the 30 % GGBFS replacement there was slight decrease in mechanical properties when compared to 100 % cement concrete. In UPV test and rebound hammer test all the mixes showed excellent quality of concrete. With the increase in slag aggregate percentage in concrete there was an increase in water absorption, skid resistance and abrasion resistance but with the 30 % GGBFS percentage the skid resistance, water absorption and abrasion resistance decreased when compared to 100 % cement concrete. From the study it was found that the mix containing 30 % GGBFS with different percentages of EAF slag aggregate were having comparable results for all the mechanical and durability property when compared to 100 % cement mixes. Hence 30 % GGBFS can be used as cement replacement with 100 % EAF slag aggregate as natural coarse aggregate replacement.Keywords: durability properties, electric arc furnace slag aggregate, GGBFS, mechanical properties, roller compacted concrete pavement, soil compaction method
Procedia PDF Downloads 143
223 Integrative-Cyclical Approach to the Study of Quality Control of Resource Saving by the Use of Innovation Factors
Authors: Anatoliy A. Alabugin, Nikolay K. Topuzov, Sergei V. Aliukov
Abstract:
It is well known, that while we do a quantitative evaluation of the quality control of some economic processes (in particular, resource saving) with help innovation factors, there are three groups of problems: high uncertainty of indicators of the quality management, their considerable ambiguity, and high costs to provide a large-scale research. These problems are defined by the use of contradictory objectives of enhancing of the quality control in accordance with innovation factors and preservation of economic stability of the enterprise. The most acutely, such factors are felt in the countries lagging behind developed economies of the world according to criteria of innovativeness and effectiveness of management of the resource saving. In our opinion, the following two methods for reconciling of the above-mentioned objectives and reducing of conflictness of the problems are to solve this task most effectively: 1) the use of paradigms and concepts of evolutionary improvement of quality of resource-saving management in the cycle "from the project of an innovative product (technology) - to its commercialization and update parameters of customer value"; 2) the application of the so-called integrative-cyclical approach which consistent with complexity and type of the concept, to studies allowing to get quantitative assessment of the stages of achieving of the consistency of these objectives (from baseline of imbalance, their compromise to achievement of positive synergies). For implementation, the following mathematical tools are included in the integrative-cyclical approach: index-factor analysis (to identify the most relevant factors); regression analysis of relationship between the quality control and the factors; the use of results of the analysis in the model of fuzzy sets (to adjust the feature space); method of non-parametric statistics (for a decision on the completion or repetition of the cycle in the approach in depending on the focus and the closeness of the connection of indicator ranks of disbalance of purposes). The repetition is performed after partial substitution of technical and technological factors ("hard") by management factors ("soft") in accordance with our proposed methodology. Testing of the proposed approach has shown that in comparison with the world practice there are opportunities to improve the quality of resource-saving management using innovation factors. We believe that the implementation of this promising research, to provide consistent management decisions for reducing the severity of the above-mentioned contradictions and increasing the validity of the choice of resource-development strategies in terms of parameters of quality management and sustainability of enterprise, is perspective. Our existing experience in the field of quality resource-saving management and the achieved level of scientific competence of the authors allow us to hope that the use of the integrative-cyclical approach to the study and evaluation of the resulting and factor indicators will help raise the level of resource-saving characteristics up to the value existing in the developed economies of post-industrial type.Keywords: integrative-cyclical approach, quality control, evaluation, innovation factors. economic sustainability, innovation cycle of management, disbalance of goals of development
Procedia PDF Downloads 245
222 Indeterminacy: An Urban Design Tool to Measure Resilience to Climate Change, a Caribbean Case Study
Authors: Tapan Kumar Dhar
Abstract:
How well are our city forms designed to adapt to climate change and its resulting uncertainty? What urban design tools can be used to measure and improve resilience to climate change, and how would they do so? In addressing these questions, this paper considers indeterminacy, a concept originated in the resilience literature, to measure the resilience of built environments. In the realm of urban design, ‘indeterminacy’ can be referred to as built-in design capabilities of an urban system to serve different purposes which are not necessarily predetermined. An urban system, particularly that with a higher degree of indeterminacy, can enable the system to be reorganized and changed to accommodate new or unknown functions while coping with uncertainty over time. Underlying principles of this concept have long been discussed in the urban design and planning literature, including open architecture, landscape urbanism, and flexible housing. This paper argues that the concept indeterminacy holds the potential to reduce the impacts of climate change incrementally and proactively. With regard to sustainable development, both planning and climate change literature highly recommend proactive adaptation as it involves less cost, efforts, and energy than last-minute emergency or reactive actions. Nevertheless, the concept still remains isolated from resilience and climate change adaptation discourses even though the discourses advocate the incremental transformation of a system to cope with climatic uncertainty. This paper considers indeterminacy, as an urban design tool, to measure and increase resilience (and adaptive capacity) of Long Bay’s coastal settlements in Negril, Jamaica. Negril is one of the popular tourism destinations in the Caribbean highly vulnerable to sea-level rise and its associated impacts. This paper employs empirical information obtained from direct observation and informal interviews with local people. While testing the tool, this paper deploys an urban morphology study, which includes land use patterns and the physical characteristics of urban form, including street networks, block patterns, and building footprints. The results reveal that most resorts in Long Bay are designed for pre-determined purposes and offer a little potential to use differently if needed. Additionally, Negril’s street networks are found to be rigid and have limited accessibility to different points of interest. This rigidity can expose the entire infrastructure further to extreme climatic events and also impedes recovery actions after a disaster. However, Long Bay still has room for future resilient developments in other relatively less vulnerable areas. In adapting to climate change, indeterminacy can be reached through design that achieves a balance between the degree of vulnerability and the degree of indeterminacy: the more vulnerable a place is, the more indeterminacy is useful. This paper concludes with a set of urban design typologies to increase the resilience of coastal settlements.Keywords: climate change adaptation, resilience, sea-level rise, urban form
Procedia PDF Downloads 364
221 A Study of the Effect of the Flipped Classroom on Mixed Abilities Classes in Compulsory Secondary Education in Italy
Authors: Giacoma Pace
Abstract:
The research seeks to evaluate whether students with impairments can achieve enhanced academic progress by actively engaging in collaborative problem-solving activities with teachers and peers, to overcome the obstacles rooted in socio-economic disparities. Furthermore, the research underscores the significance of fostering students' self-awareness regarding their learning process and encourages teachers to adopt a more interactive teaching approach. The research also posits that reducing conventional face-to-face lessons can motivate students to explore alternative learning methods, such as collaborative teamwork and peer education within the classroom. To address socio-cultural barriers it is imperative to assess their internet access and possession of technological devices, as these factors can contribute to a digital divide. The research features a case study of a Flipped Classroom Learning Unit, administered to six third-year high school classes: Scientific Lyceum, Technical School, and Vocational School, within the city of Turin, Italy. Data are about teachers and the students involved in the case study, some impaired students in each class, level of entry, students’ performance and attitude before using Flipped Classrooms, level of motivation, family’s involvement level, teachers’ attitude towards Flipped Classroom, goal obtained, the pros and cons of such activities, technology availability. The selected schools were contacted; meetings for the English teachers to gather information about their attitude and knowledge of the Flipped Classroom approach. Questionnaires to teachers and IT staff were administered. The information gathered, was used to outline the profile of the subjects involved in the study and was further compared with the second step of the study made up of a study conducted with the classes of the selected schools. The learning unit is the same, structure and content are decided together with the English colleagues of the classes involved. The pacing and content are matched in every lesson and all the classes participate in the same labs, use the same materials, homework, same assessment by summative and formative testing. Each step follows a precise scheme, in order to be as reliable as possible. The outcome of the case study will be statistically organised. The case study is accompanied by a study on the literature concerning EFL approaches and the Flipped Classroom. Document analysis method was employed, i.e. a qualitative research method in which printed and/or electronic documents containing information about the research subject are reviewed and evaluated with a systematic procedure. Articles in the Web of Science Core Collection, Education Resources Information Center (ERIC), Scopus and Science Direct databases were searched in order to determine the documents to be examined (years considered 2000-2022).Keywords: flipped classroom, impaired, inclusivity, peer instruction
Procedia PDF Downloads 52
220 Generic Early Warning Signals for Program Student Withdrawals: A Complexity Perspective Based on Critical Transitions and Fractals
Authors: Sami Houry
Abstract:
Complex systems exhibit universal characteristics as they near a tipping point. Among them are common generic early warning signals which precede critical transitions. These signals include: critical slowing down in which the rate of recovery from perturbations decreases over time; an increase in the variance of the state variable; an increase in the skewness of the state variable; an increase in the autocorrelations of the state variable; flickering between different states; and an increase in spatial correlations over time. The presence of the signals has management implications, as the identification of the signals near the tipping point could allow management to identify intervention points. Despite the applications of the generic early warning signals in various scientific fields, such as fisheries, ecology and finance, a review of literature did not identify any applications that address the program student withdrawal problem at the undergraduate distance universities. This area could benefit from the application of generic early warning signals as the program withdrawal rate amongst distance students is higher than the program withdrawal rate at face-to-face conventional universities. This research specifically assessed the generic early warning signals through an intensive case study of undergraduate program student withdrawal at a Canadian distance university. The university is non-cohort based due to its system of continuous course enrollment where students can enroll in a course at the beginning of every month. The assessment of the signals was achieved through the comparison of the incidences of generic early warning signals among students who withdrew or simply became inactive in their undergraduate program of study, the true positives, to the incidences of the generic early warning signals among graduates, the false positives. This was achieved through significance testing. Research findings showed support for the signal pertaining to the rise in flickering which is represented in the increase in the student’s non-pass rates prior to withdrawing from a program; moderate support for the signals of critical slowing down as reflected in the increase in the time a student spends in a course; and moderate support for the signals on increase in autocorrelation and increase in variance in the grade variable. The findings did not support the signal on the increase in skewness of the grade variable. The research also proposes a new signal based on the fractal-like characteristic of student behavior. The research also sought to extend knowledge by investigating whether the emergence of a program withdrawal status is self-similar or fractal-like at multiple levels of observation, specifically the program level and the course level. In other words, whether the act of withdrawal at the program level is also present at the course level. The findings moderately supported self-similarity as a potential signal. Overall, the assessment of the signals suggests that the signals, with the exception with the increase of skewness, could be utilized as a predictive management tool and potentially add one more tool, the fractal-like characteristic of withdrawal, as an additional signal in addressing the student program withdrawal problem.Keywords: critical transitions, fractals, generic early warning signals, program student withdrawal
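The sketch below illustrates how three of the generic early warning signals discussed above (rising variance, rising skewness, and rising lag-1 autocorrelation) can be computed over a rolling window of a state variable; the grade series is synthetic and the window length is arbitrary.

```python
# Sketch: rolling-window early warning signals for a single state variable.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical sequence of a student's course grades drifting downward over time
grades = pd.Series(np.clip(75 - np.linspace(0, 15, 20) + rng.normal(0, 5, 20), 0, 100))

window = 8
rolling = grades.rolling(window)
signals = pd.DataFrame({
    "variance": rolling.var(),
    "skewness": rolling.skew(),
    "lag1_autocorr": grades.rolling(window).apply(lambda w: w.autocorr(lag=1), raw=False),
})
print(signals.dropna().round(2))   # an upward trend in these columns is the warning signal
```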
Procedia PDF Downloads 184
219 Data Calibration of the Actual versus the Theoretical Micro Electro Mechanical Systems (MEMS) Based Accelerometer Reading through Remote Monitoring of Padre Jacinto Zamora Flyover
Authors: John Mark Payawal, Francis Aldrine Uy, John Paul Carreon
Abstract:
This paper shows the application of Structural Health Monitoring, SHM into bridges. Bridges are structures built to provide passage over a physical obstruction such as rivers, chasms or roads. The Philippines has a total of 8,166 national bridges as published on the 2015 atlas of the Department of Public Works and Highways (DPWH) and only 2,924 or 35.81% of these bridges are in good condition. As a result, PHP 30.464 billion of the 2016 budget of DPWH is allocated on roads and/or bridges maintenance alone. Intensive spending is owed to the present practice of outdated manual inspection and assessment, and poor structural health monitoring of Philippine infrastructures. As the School of Civil, Environmental, & Geological Engineering of Mapua Institute of Technology (MIT) continuous its well driven passion in research based projects, a partnership with the Department of Science and Technology (DOST) and the DPWH launched the application of Structural Health Monitoring, (SHM) in Padre Jacinto Zamora Flyover. The flyover is located along Nagtahan Boulevard in Sta. Mesa, Manila that connects Brgy. 411 and Brgy. 635. It gives service to vehicles going from Lacson Avenue to Mabini Bridge passing over Legarda Flyover. The flyover is chosen among the many located bridges in Metro Manila as the focus of the pilot testing due to its site accessibility, and complete structural built plans and specifications necessary for SHM as provided by the Bureau of Design, BOD department of DPWH. This paper focuses on providing a method to calibrate theoretical readings from STAAD Vi8 Pro and sync the data to actual MEMS accelerometer readings. It is observed that while the design standards used in constructing the flyover was reflected on the model, actual readings of MEMS accelerometer display a large difference compared to the theoretical data ran and taken from STAAD Vi8 Pro. In achieving a true seismic response of the modeled bridge or hence syncing the theoretical data to the actual sensor reading also called as the independent variable of this paper, analysis using single degree of freedom (SDOF) of the flyover under free vibration without damping using STAAD Vi8 Pro is done. The earthquake excitation and bridge responses are subjected to earthquake ground motion in the form of ground acceleration or Peak Ground Acceleration, PGA. Translational acceleration load is used to simulate the ground motion of the time history analysis acceleration record in STAAD Vi8 Pro.Keywords: accelerometer, analysis using single degree of freedom, micro electro mechanical system, peak ground acceleration, structural health monitoring
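For reference, the undamped single-degree-of-freedom free-vibration model mentioned above takes the standard form below; the flyover's actual modal mass and stiffness are not given in the abstract.

```latex
% Undamped SDOF free vibration (standard form)
m\ddot{x}(t) + kx(t) = 0, \qquad
\omega_n = \sqrt{\frac{k}{m}}, \qquad
x(t) = x_0\cos(\omega_n t) + \frac{\dot{x}_0}{\omega_n}\sin(\omega_n t)
```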
Procedia PDF Downloads 319
218 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices
Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese
Abstract:
Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits among which lipid component plays a key role in defining quality and, consequently, consumers’ acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are getting a foothold in the meat market envisioning more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount of dry-cured ham slices, in terms of total, intermuscular and intramuscular fractions. To the aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted in grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, actual thresholding on corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of total, intermuscular and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular and intramuscular fat content by the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of coefficient of determination (R²), hypothesis testing and pattern of residuals. Good regression models have been found being 0.73, 0.82, and 0.73 the R2 values for the total fat, the sum of intermuscular and intramuscular fat and the intermuscular fraction, respectively. In conclusion, the edge enhancement visual procedure brought to a good fat segmentation making the simple visual approach for the quantification of the different fat fractions in dry-cured ham slices sufficiently simple, accurate and precise. The presented image analysis approach steers towards the development of instruments that can overcome destructive, tedious and time-consuming chemical determinations. As future perspectives, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method of dry-cured hams based on fat distribution. Therefore, the system will be able not only to predict the actual fat content but it will also reflect the visual appearance of samples as perceived by consumers.Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis
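A minimal sketch of the image-processing steps described above (grey-scale conversion, noise reduction, Canny edge enhancement, thresholding, and fat fraction as a share of slice area) is given below; the file name, Canny thresholds, and background cut-off are placeholders rather than the study's calibrated values.

```python
# Sketch only (not the published pipeline): grey-scale, Canny edges, Otsu threshold, fat %.
import cv2
import numpy as np

img = cv2.imread("ham_slice.png")                # hypothetical scan; imread returns None if missing
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
grey = cv2.GaussianBlur(grey, (5, 5), 0)         # noise reduction before edge detection

edges = cv2.Canny(grey, 50, 150)                 # edge enhancement (thresholds are guesses)

# Fat appears bright against the darker muscle: Otsu threshold on the grey image
_, fat_mask = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

slice_area = np.count_nonzero(grey > 10)         # crude slice-vs-background cut-off
total_fat_pct = 100.0 * np.count_nonzero(fat_mask) / max(slice_area, 1)
print(f"total fat fraction: {total_fat_pct:.1f}% of slice area")
```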
Procedia PDF Downloads 173
217 Diagnostic Yield of CT PA and Value of Pre Test Assessments in Predicting the Probability of Pulmonary Embolism
Authors: Shanza Akram, Sameen Toor, Heba Harb Abu Alkass, Zainab Abdulsalam Altaha, Sara Taha Abdulla, Saleem Imran
Abstract:
Acute pulmonary embolism (PE) is a common disease and can be fatal. The clinical presentation is variable and nonspecific, making accurate diagnosis difficult. Testing of patients with suspected acute PE has increased dramatically. However, the overuse of some tests, particularly CT and D-dimer measurement, may not improve care while potentially leading to patient harm and unnecessary expense. CTPA is the investigation of choice for PE. Its easy availability, accuracy and ability to provide an alternative diagnosis have lowered the threshold for performing it, resulting in its overuse. Guidelines have recommended the use of clinical pretest probability tools such as the Wells score to assess the risk of suspected PE. Unfortunately, implementation of guidelines in clinical practice is inconsistent. This has led to low-risk patients being subjected to unnecessary imaging, radiation exposure and possible contrast-related complications. Aim: To study the diagnostic yield of CTPA and the clinical pretest probability of patients according to the Wells score, and to determine whether there was overuse of CTPA in our service. Methods: CT scans done on patients with suspected PE in our hospital from 1st January 2014 to 31st December 2014 were retrospectively reviewed. Medical records were reviewed to study demographics, clinical presentation and final diagnosis, and to establish whether the Wells score and D-dimer were used correctly in predicting the probability of PE and the need for subsequent CTPA. Results: 100 patients (51 male) underwent CTPA in the time period. Mean age was 57 years (24-91 years). The majority of patients presented with shortness of breath (52%). Other presenting symptoms included chest pain (34%), palpitations (6%), collapse (5%) and haemoptysis (5%). A D-dimer test was done in 69%. The overall Wells score was low (<2) in 28%, moderate (2 to 6) in 47% and high (>6) in 15% of patients. The Wells score was documented in the medical notes of only 20% of patients. PE was confirmed in 12% of patients (8 male). Four had bilateral PEs. In the high-risk group (Wells >6) (n=15), there were 5 diagnosed PEs. In the moderate-risk group (Wells 2 to 6) (n=47), there were 6, and in the low-risk group (Wells <2) (n=28), one case of PE was confirmed. CT scans negative for PE showed pleural effusion in 30, consolidation in 20, atelectasis in 15 and a pulmonary nodule in 4 patients. 31 scans were completely normal. Conclusion: The yield of CTPA for pulmonary embolism was low in our cohort at 12%. A significant number of our patients who underwent CTPA had low Wells scores. This suggests that CTPA is overutilized in our institution. The Wells score was poorly documented in the medical notes. CTPA was able to detect alternative pulmonary abnormalities explaining the patients' clinical presentations. CTPA requires concomitant pretest clinical probability assessment to be an effective diagnostic tool for confirming or excluding PE. Clinicians should use validated clinical prediction rules to estimate pretest probability in patients in whom acute PE is being considered. Combining Wells scores with clinical and laboratory assessment may reduce the need for CTPA.
Keywords: CT PA, D dimer, pulmonary embolism, Wells score
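The three-tier Wells stratification used above can be expressed compactly in code. The sketch below is an illustrative Python helper based on the commonly published Wells criteria and weights for PE; it is not taken from the paper itself, and the cut-offs mirror the low (<2), moderate (2 to 6), and high (>6) bands used in the study.

```python
def wells_score(signs_of_dvt, pe_most_likely, hr_over_100,
                immobilisation_or_recent_surgery, previous_dvt_or_pe,
                haemoptysis, malignancy):
    """Return (score, risk band) using the conventional Wells criteria for PE."""
    score = (3.0 * signs_of_dvt +
             3.0 * pe_most_likely +
             1.5 * hr_over_100 +
             1.5 * immobilisation_or_recent_surgery +
             1.5 * previous_dvt_or_pe +
             1.0 * haemoptysis +
             1.0 * malignancy)
    if score < 2:
        band = "low"
    elif score <= 6:
        band = "moderate"
    else:
        band = "high"
    return score, band

# Example: tachycardic patient with haemoptysis and no other criteria.
print(wells_score(False, False, True, False, False, True, False))  # (2.5, 'moderate')
```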
Procedia PDF Downloads 231
216 Choosing Mountains Over the Beach: Evaluating the Effect of Altitude on Covid Brain Severity and Treatment
Authors: Kennedy Zinn, Chris Anderson
Abstract:
Chronic Covid syndrome (CCS) is a condition in which individuals who test positive for Covid-19 experience persistent symptoms after recovering from the virus. CCS affects every organ system, including the central nervous system. Neurological “long-haul” symptoms last from a few weeks to several months and include brain fog, chronic fatigue, dyspnea, mood dysregulation, and headaches. Data suggest that 10-30% of individuals testing positive for Covid-19 develop CCS. Current literature indicates a decreased quality of life in individuals with persistent symptoms. CCS is a pervasive and pernicious COVID-19 sequela. More research is needed to understand risk factors, impact, and possible interventions. Research frequently cites cytokine storming as a noteworthy etiology of CCS. Cytokine storming is a dysfunctional immune response that facilitates multidimensional, interconnected physiological responses. The most prominent responses include abnormal blood flow, hypoxia/hypoxemia, inflammation, and endothelial damage. Neurological impairments and pathogenesis in CCS parallel those of traumatic brain injury (TBI). Both exhibit impairments in memory, cognition, mood, and sustained attention, as well as chronic fatigue. Evidence suggests abnormal blood flow, inflammation, and hypoxemia as shared causal factors. Cytokine storming is also typical in mild TBI (mTBI). The shared characteristics in symptoms and etiology suggest parallel routes of investigation that allow for a better understanding of CCS. Research on the effect of altitude in mTBI varies. The literature finds decreased rates of concussions at higher altitudes. Other studies suggest that at higher altitude, pre-existing mTBI symptoms are exacerbated. This may mean that the geographical location where individuals live, and the location where they experienced acute Covid-19 symptoms, influences the severity of CCS and the risk of developing it. It also suggests that clinics which treat mTBI patients could provide benefits for those with CCS. This study aims to examine the relationship between altitude and CCS as a risk factor and to investigate the longevity and severity of symptoms at different altitudes. Existing patient data from a concussion clinic, using fMRI scans and self-reported symptoms, will be used for approximately 30 individuals with CCS symptoms. The association between acclimated altitude and CCS severity will be analyzed. Patients will be classified into low, medium, and high altitude groups and compared for differences in fMRI severity scores and self-reported measures. It is anticipated that individuals living at lower altitudes are at higher risk of developing more severe neuropsychological symptoms in CCS. It is also anticipated that a treatment approach for mTBI will be beneficial to those with CCS.
Keywords: altitude, chronic covid syndrome, concussion, covid brain, EPIC treatment, fMRI, traumatic brain injury
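The planned three-group comparison can be sketched in a few lines of Python. This is a generic illustration, not the study's analysis plan: the severity scores, group sizes, and altitude cut-offs are hypothetical, and both a one-way ANOVA and a rank-based Kruskal-Wallis test are shown as alternatives.

```python
import numpy as np
from scipy import stats

# Hypothetical fMRI severity scores for patients grouped by acclimated altitude.
low_alt = np.array([62, 71, 58, 66, 74, 69])
mid_alt = np.array([55, 60, 52, 63, 58, 57])
high_alt = np.array([48, 53, 50, 45, 56, 49])

# One-way ANOVA across the three altitude groups.
f_stat, p_anova = stats.f_oneway(low_alt, mid_alt, high_alt)

# Kruskal-Wallis as a rank-based alternative if normality cannot be assumed.
h_stat, p_kw = stats.kruskal(low_alt, mid_alt, high_alt)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")
```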
Procedia PDF Downloads 131
215 The Location of Park and Ride Facilities Using the Fuzzy Inference Model
Authors: Anna Lower, Michal Lower, Robert Masztalski, Agnieszka Szumilas
Abstract:
Contemporary cities are facing serious congestion and parking problems. In urban transport policy, the introduction of the park and ride system (P&R) is an increasingly popular way of limiting vehicular traffic. Determining the location of P&R facilities is a key aspect of the system. Criteria for assessing the quality of a selected location are usually formulated in a general and descriptive way. Research outsourced to specialists is expensive and time-consuming, and the focus is mostly on the examination of a few selected places. Practice has shown that choosing the location of these sites intuitively, without a detailed analysis of all the circumstances, often gives negative results; the existing facilities are then not used as expected. Location methods are also widely covered in the scientific literature, but the mathematical models built often do not treat the problem comprehensively, e.g. assuming that the city is linear and developed along one main transport corridor. The paper presents a new method in which expert knowledge is applied to a fuzzy inference model. With such a system, even less experienced users, e.g. urban planners or officials, can benefit from it. The analysis result is obtained in a very short time, so a large number of proposed locations can also be verified quickly. The proposed method is intended for testing car park locations in a city. The paper shows selected examples of locations of P&R facilities in cities planning to introduce P&R. An analysis of existing facilities is also presented and confronted with the opinions of system users, with particular emphasis on unpopular locations. The research is executed using the fuzzy inference model, which was built and described in more detail in an earlier paper by the authors. The results of the analyses are compared with P&R facility location studies outsourced by the city and with opinions of existing facility users expressed on social networking sites. The research on existing facilities was conducted by means of the fuzzy model, and the results are consistent with actual user feedback. The proposed method proves to be good and does not require the involvement of a large team of experts or large financial contributions for complicated research. The method also provides an opportunity to propose alternative locations for P&R facilities. The performed studies confirm the method. It can be applied in the urban planning of P&R facility locations in relation to the accompanying functions. Although the results of the method are approximate, they are not worse than the results of analyses by employed experts. The advantage of this method is its ease of use, which simplifies professional expert analysis. The ability to analyze a large number of alternative locations gives a broader view of the problem. It is valuable that the arduous analysis by a team of people can be replaced by the model's calculation. According to the authors, the proposed method is also suitable for implementation on a GIS platform.
Keywords: fuzzy logic inference, park and ride system, P&R facilities, P&R location
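The fuzzy inference step can be illustrated with a small, self-contained example. The sketch below is a zero-order Sugeno-style inference over two hypothetical criteria (walking distance to a transit stop and inbound road traffic volume); the membership functions, rules, and output values are illustrative assumptions and do not reproduce the authors' model, which is described in their earlier paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def pr_location_score(dist_to_stop_m, traffic_veh_per_h):
    """Score a candidate P&R site in [0, 100] from two hypothetical criteria."""
    # Fuzzify the inputs (breakpoints are illustrative assumptions).
    dist_near = tri(dist_to_stop_m, -1, 0, 400)
    dist_far = tri(dist_to_stop_m, 200, 800, 10_000)
    traffic_high = tri(traffic_veh_per_h, 800, 2500, 10_000)
    traffic_low = tri(traffic_veh_per_h, -1, 0, 1500)

    # Zero-order Sugeno rules: each rule proposes a crisp score, weighted by its firing strength.
    rules = [
        (min(dist_near, traffic_high), 90.0),  # close to transit, heavy inbound traffic: very good
        (min(dist_near, traffic_low), 60.0),   # close to transit, light traffic: moderate
        (min(dist_far, traffic_high), 40.0),   # far from transit, heavy traffic: weak
        (min(dist_far, traffic_low), 10.0),    # far from transit, light traffic: poor
    ]
    weights = np.array([w for w, _ in rules])
    scores = np.array([s for _, s in rules])
    return float((weights * scores).sum() / max(weights.sum(), 1e-9))

print(pr_location_score(dist_to_stop_m=150, traffic_veh_per_h=3000))
```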
Procedia PDF Downloads 322
214 Chemical Modifications of Three Underutilized Vegetable Fibres for Improved Composite Value Addition and Dye Absorption Performance
Authors: Abayomi O. Adetuyi, Jamiu M. Jabar, Samuel O. Afolabi
Abstract:
Vegetable fibres are low-density, biodegradable, and non-abrasive fibres that are largely abundant, have specific properties, and are mostly obtained from plants. They are classified into three categories depending on the part of the plant from which they are obtained, namely fruit, bast and leaf fibres. Ever since the fourth/fifth millennium B.C., attention has focused on the most common and highly utilized cotton fibre, obtained from the fruit of cotton plants (Gossypium spp.), for the production of the cotton fabric used in every home today. The present study, therefore, focused on three underutilized vegetable (fruit) fibres, namely coir fibre (Eleas coniferus), palm kernel fibre and empty fruit bunch fibre (Elias guinensis), chemically modified for better composite value addition performance in polyurethane foam and for dye adsorption. These fibres were sourced from their parent plants, identified, cleansed with 2% hot detergent solution 1:100, rinsed in distilled water and oven-dried to constant weight before being chemically modified through alkali bleaching, mercerization and acetylation. Alkali bleaching involved treating 0.5 g of each fibre material with 100 mL of 2% H2O2 in 25% NaOH solution under reflux for 2 h, while mercerization and acetylation involved treating the fibres with 5% sodium hydroxide (NaOH) solution for 2 h and with 10% acetic acid-acetic anhydride 1:1 (v/v) (CH3COOH/(CH3CO)2O) solution with concentrated H2SO4 as catalyst for 1 h, respectively. All were subsequently washed thoroughly with distilled water and oven-dried at 105 °C for 1 h. These modified fibres were incorporated as composites into polyurethane foam and used in a dye adsorption study of indigo. The first two treatments led to fibre weight reduction, while the acidified acetic anhydride treatment gave the fibres a weight increment. All the treated fibres were found to be less hydrophilic and to have better mechanical properties, higher thermal stability and better adsorption surfaces/capacities than the untreated ones. These were confirmed by gravimetric analysis, an Instron universal testing machine, a thermogravimetric analyser and scanning electron microscopy (SEM), respectively. The morphology of the modified fibres showed smoother surfaces than that of the unmodified fibres. The empty fruit bunch fibre and the coconut coir fibre are better than the palm kernel fibre as reinforcers for composites or as adsorbents for waste-water treatment. Acetylation and alkaline bleaching treatments improve the potential of the fibres more than mercerization treatment. Conclusively, vegetable fibres, especially the empty fruit bunch fibre and the coconut coir fibre, which are cheap, abundant and underutilized, can replace the very costly powdered activated carbon in wastewater treatment and serve as reinforcers in foam.
Keywords: chemical modification, industrial application, value addition, vegetable fibre
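Batch dye-adsorption performance of such fibres is commonly summarized by the equilibrium uptake and the removal efficiency. The sketch below is an illustrative Python helper using the standard mass-balance relations q_e = (C0 - Ce) * V / m and removal % = 100 * (C0 - Ce) / C0; the indigo concentrations, solution volume, and fibre mass are hypothetical examples, not data from the study.

```python
def adsorption_metrics(c0_mg_l, ce_mg_l, volume_l, fibre_mass_g):
    """Equilibrium uptake q_e [mg/g] and dye removal [%] from a batch adsorption run."""
    qe = (c0_mg_l - ce_mg_l) * volume_l / fibre_mass_g
    removal = 100.0 * (c0_mg_l - ce_mg_l) / c0_mg_l
    return qe, removal

# Hypothetical indigo adsorption run on a treated fibre sample.
qe, removal = adsorption_metrics(c0_mg_l=50.0, ce_mg_l=12.0, volume_l=0.1, fibre_mass_g=0.5)
print(f"q_e = {qe:.2f} mg/g, removal = {removal:.1f} %")
```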
Procedia PDF Downloads 328
213 Highly Automated Trucks In Intermodal Logistics: Findings From a Field Test in Railport and Container Depot Operations in Germany
Authors: Dustin Schöder
Abstract:
The potential benefits of utilizing highly automated and autonomous trucks in logistics operations are of interest to the entire logistics industry. The benefits of these new technologies have been scientifically investigated and incorporated into roadmaps. So far, reliable data and experience from real-life use cases are still limited. A German research consortium of both academia and industry developed a highly automated (SAE level 4) vehicle for yard operations at railports and container depots. After development and testing, a several-month field test was conducted at the DUSS Terminal in Ulm-Dornstadt (Germany) and the nearby DB Intermodal Services Container Depot in Ulm-Dornstadt. The truck was piloted in a shuttle service between the two sites. In a holistic automation approach, the vehicle was integrated into a digital communication platform so that the truck could move autonomously, without a driver and the driver's manual interactions with a wide variety of stakeholders. The main goal is to investigate the effects of highly automated trucks on the key processes of container loading, unloading and relocation in holistic railport yard operation. The field test data were used to investigate changes in the efficiency of key railport and container yard processes. Moreover, effects on capacity utilization and the potential for smoothing peak workloads were analyzed. The results show that process efficiency in the piloted use case was significantly higher. The reason for this could be found in the digitalized data exchange and automated dispatch. However, the field test has shown that the effect varies greatly depending on the ratio of highly automated to manual trucks in the yard as well as on the congestion level in the loading area. Furthermore, the data confirmed that, under the right conditions, the capacity utilization of highly automated trucks could be increased. In regard to the potential for smoothing peak workloads, no significant findings could be made, given the limited requirements and regulations of railway operation in Germany. In addition, an empirical survey among railport managers, operational supervisors, innovation managers and strategists (n=15) within the logistics industry in Germany was conducted. The goal was to identify key characteristics of future railports and terminals as well as requirements that railports will have to meet in the future. Furthermore, the railport processes where automation and autonomization make the greatest impact, as well as hurdles and challenges in the introduction of new technologies, were surveyed. Hence, further potential use cases of highly automated and autonomous applications could be identified, and expectations were mapped. As a result, a highly detailed and practice-based roadmap towards a ‘terminal 4.0’ was developed.
Keywords: highly automated driving, autonomous driving, SAE level 4, railport operations, container depot, intermodal logistics, potentials of autonomization
Procedia PDF Downloads 78
212 The Inverse Problem in the Process of Heat and Moisture Transfer in Multilayer Walling
Authors: Bolatbek Rysbaiuly, Nazerke Rysbayeva, Aigerim Rysbayeva
Abstract:
Relevance: Energy saving has been elevated to public policy in almost all developed countries. One of the avenues for energy efficiency is the improvement and tightening of design standards; in line with state standards, high demands are placed on the thermal protection of buildings. The constructive arrangement of layers should ensure normal operation, in which the moisture content of the construction materials does not exceed a certain level. Elevated moisture levels in the walls can be regarded as a defective condition, as moisture significantly reduces the physical, mechanical and thermal properties of materials. The absence, at the design stage, of modeling of the processes occurring in the construction and of prediction of the behavior of structures during their service in the real world leads to increased heat loss and premature aging of structures. Method: To solve this problem, the method of mathematical modeling of heat and mass transfer in materials is widely used. The mathematical model of heat and mass transfer takes into account the coupled equations of the interconnected layers [1]. In winter, the thermal and hydraulic conductivity characteristics of the materials are nonlinear and depend on the temperature and moisture in the material. In this case, the experimental determination of the freezing or thawing coefficient of the material becomes much more difficult. Therefore, in this paper we propose an approximate method for calculating the thermal conductivity and moisture permeability characteristics of freezing or thawing material. Questions: The development of methods for solving the inverse problem of mathematical modeling allows us to answer questions that are closely related to the rational design of enclosing structures: where the zone of condensation lies in the body of the multilayer enclosure; how and where insulation should rationally be placed; which constructive measures are necessary to provide for the removal of moisture from the structure; what temperature and humidity conditions are required for the normal operation of the premises' enclosing structure; and what the longevity of the structure is in terms of the frost resistance of its component materials. Tasks: The proposed mathematical model is used to solve the following problems: to assess the thermo-physical condition of designed structures under different operating conditions and to select appropriate material layers; to calculate the temperature field in structurally complex multilayer structures; to determine, from temperature and moisture measured at characteristic points, the thermal characteristics of the materials constituting the surveyed construction; to significantly reduce laboratory testing time, eliminating climatic-chamber and expensive instrumentation experiments; and to simulate real-life situations that arise in multilayer enclosing structures associated with the freezing, thawing, drying and cooling of any layer of the building material.
Keywords: energy saving, inverse problem, heat transfer, multilayer walling
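The forward problem that underlies such an inverse analysis can be sketched as a one-dimensional transient conduction model through the wall layers. The following Python sketch uses an explicit finite-difference scheme with layer-dependent (here constant) properties; the layer thicknesses, property values, and boundary temperatures are illustrative placeholders, and the moisture coupling and freezing effects treated in the paper's model are omitted.

```python
import numpy as np

# Three illustrative layers: inner plaster, insulation, outer brick (placeholder properties).
layers = [  # (thickness [m], conductivity [W/mK], density [kg/m^3], heat capacity [J/kgK])
    (0.02, 0.70, 1600.0, 840.0),
    (0.10, 0.04,   30.0, 1450.0),
    (0.25, 0.60, 1800.0, 880.0),
]

dx = 0.005
k, rho_c = [], []
for thickness, cond, rho, cp in layers:
    cells = int(round(thickness / dx))
    k += [cond] * cells
    rho_c += [rho * cp] * cells
k, rho_c = np.array(k), np.array(rho_c)
n = len(k)

T = np.full(n, 18.0)          # initial wall temperature [C]
T_in, T_out = 20.0, -15.0     # indoor and outdoor surface temperatures [C] (Dirichlet for simplicity)

alpha_max = (k / rho_c).max()
dt = 0.4 * dx**2 / alpha_max  # time step within the explicit stability limit

for _ in range(int(24 * 3600 / dt)):          # simulate one day
    Tn = T.copy()
    Tn[0], Tn[-1] = T_in, T_out               # fixed surface temperatures
    # Interface fluxes; conductivity taken at the lower node (a harmonic mean would be
    # more accurate exactly at the layer boundaries).
    flux = k[:-1] * (T[1:] - T[:-1]) / dx
    Tn[1:-1] = T[1:-1] + dt / (rho_c[1:-1] * dx) * (flux[1:] - flux[:-1])
    T = Tn

print(f"Plaster/insulation interface: {T[4]:.1f} C, insulation/brick interface: {T[24]:.1f} C")
```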
Procedia PDF Downloads 397
211 Influence of Temperature and Immersion on the Behavior of a Polymer Composite
Authors: Quentin C.P. Bourgogne, Vanessa Bouchart, Pierre Chevrier, Emmanuel Dattoli
Abstract:
This study presents experimental and theoretical work conducted on a polyphenylene sulfide reinforced with 40 wt% of short glass fibers (PPS GF40) and on its matrix. Thermoplastics are widely used in the automotive industry to lightweight automotive parts. The replacement of metallic parts by thermoplastics is reaching under-the-hood parts, near the engine. In this area, the parts are subjected to high temperatures and are immersed in cooling liquid. This liquid is composed of water and glycol and can affect the mechanical properties of the composite. The aim of this work was thus to quantify the evolution of the mechanical properties of the thermoplastic composite as a function of temperature and liquid aging effects, in order to enable a reliable design of parts. An experimental campaign in the tensile mode was carried out at different temperatures and for various glycol proportions in the cooling liquid, for monotonic and cyclic loadings, on a neat and a reinforced PPS. The results of these tests highlighted some of the main physical phenomena occurring during such loading under severe hydrothermal conditions. Indeed, the tests showed that temperature and cooling liquid aging can affect the mechanical behavior of the material in several ways. The more water the cooling liquid contains, the more the mechanical behavior is affected. It was observed that PPS shows a higher sensitivity to absorption than to the chemical aggressiveness of the cooling liquid, which explains this dominant sensitivity to water content. Two kinds of behavior were noted: an elasto-plastic type below the glass transition temperature and a visco-pseudo-plastic one above it. It was also shown that viscosity is the leading phenomenon above the glass transition temperature for the PPS and can also be important below this temperature, mostly under cyclic conditions and when the stress rate is low. Finally, it was observed that loading this composite at high temperatures reduces the benefit provided by the fibers. A new phenomenological model was then built to take these experimental observations into account. This new model allows the prediction of the evolution of the mechanical properties as a function of the loading environment, with a reduced number of parameters compared to previous studies. It was also shown that the presented approach enables the description and prediction of the mechanical response with very good accuracy (2% average error at worst) over a wide range of hydrothermal conditions. A temperature-humidity equivalence principle was highlighted for the PPS, allowing aging effects to be considered within the proposed model. Then, a limit on the accuracy achievable by all models using this data set was determined by applying an artificial-intelligence-based model, allowing a comparison between artificial-intelligence-based models and phenomenological ones.
Keywords: aging, analytical modeling, mechanical testing, polymer matrix composites, sequential model, thermomechanical
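The phenomenological fitting step can be illustrated with a generic example. The sketch below fits a smooth elasto-plastic-like stress-strain law to a tensile curve with scipy.optimize.curve_fit and applies a hypothetical moisture shift to an effective temperature; the functional form, shift coefficient, and data points are illustrative assumptions and not the model proposed in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def pheno_curve(strain, modulus, sigma_sat):
    """Generic smooth elasto-plastic law: initial slope `modulus`, saturation stress `sigma_sat`."""
    return sigma_sat * np.tanh(modulus * strain / sigma_sat)

# Hypothetical tensile data for one hydrothermal condition (strain [-], stress [MPa]).
strain = np.array([0.001, 0.003, 0.006, 0.010, 0.015, 0.020])
stress = np.array([11.0, 32.0, 61.0, 89.0, 111.0, 122.0])

params, _ = curve_fit(pheno_curve, strain, stress, p0=[10000.0, 100.0])
modulus, sigma_sat = params
print(f"Initial modulus = {modulus:.0f} MPa, saturation stress = {sigma_sat:.1f} MPa")

# Hypothetical temperature-humidity equivalence: water uptake shifts the effective temperature.
def effective_temperature(temp_c, water_uptake_pct, shift_per_pct=15.0):
    return temp_c + shift_per_pct * water_uptake_pct

print("Effective temperature:", effective_temperature(temp_c=80.0, water_uptake_pct=1.2), "C")
```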
Procedia PDF Downloads 116
210 Mood Symptom Severity in Service Members with Posttraumatic Stress Symptoms after Service Dog Training
Authors: Tiffany Riggleman, Andrea Schultheis, Kalyn Jannace, Jerika Taylor, Michelle Nordstrom, Paul F. Pasquina
Abstract:
Introduction: Posttraumatic Stress (PTS) and Posttraumatic Stress Disorder (PTSD) remain significant problems for military and veteran communities. Symptoms of PTSD often include poor sleep, intrusive thoughts, difficulty concentrating, and trouble with emotional regulation. Unfortunately, despite its high prevalence, service members diagnosed with PTSD often do not seek help, usually because of the perceived stigma surrounding behavioral health care. To help address these challenges, non-pharmacological therapeutic approaches are being developed to improve care and enhance compliance. The Service Dog Training Program (SDTP), which involves teaching patients how to train puppies to become mobility service dogs, has been successfully implemented into PTS/PTSD care programs, with anecdotal reports of improved outcomes. This study was designed to assess the biopsychosocial effects of SDTP in military beneficiaries with PTS symptoms. Methods: Individuals between the ages of 18 and 65 with PTS symptoms were recruited to participate in this prospective study. Each subject completes 4 weeks of baseline testing, followed by 6 weeks of active service dog training (two one-hour sessions per week) with a professional service dog trainer. Outcome measures included the Posttraumatic Stress Checklist for the DSM-5 (PCL-5), the Generalized Anxiety Disorder questionnaire-7 (GAD-7), the Patient Health Questionnaire-9 (PHQ-9), social support/interaction, anthropometrics, blood/serum biomarkers, and qualitative interviews. Preliminary analysis of 17 participants examined mean scores on the GAD-7, PCL-5, and PHQ-9, pre- and post-SDTP, and changes were assessed using Wilcoxon signed-rank tests. Results: Post-SDTP, there was a statistically significant mean decrease in PCL-5 scores of 13.5 on an 80-point scale (p=0.03) and a significant mean decrease of 2.2 in PHQ-9 scores on a 27-point scale (p=0.04), suggestive of decreased PTSD and depression symptoms. While there was a decrease in mean GAD-7 scores post-SDTP, the difference was not significant (p=0.20). Recurring themes among the results from the qualitative interviews include decreased pain, forgetting about stressors, an improved sense of calm, increased confidence, improved communication, and establishing a connection with the service dog. Conclusion: Preliminary results from the first 17 participants in this study suggest that individuals who received SDTP had a statistically significant decrease in PTS symptoms, as measured by the PCL-5 and PHQ-9. This ongoing study seeks to enroll a total of 156 military beneficiaries with PTS symptoms. Future analyses will include additional psychological outcomes, pain scores, blood/serum biomarkers, and other measures of the social aspects of PTSD, such as relationship satisfaction and sleep hygiene.
Keywords: post-concussive syndrome, posttraumatic stress, service dog, service dog training program, traumatic brain injury
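The paired pre/post comparison named above (Wilcoxon signed-rank test) is straightforward to reproduce with SciPy. The sketch below uses hypothetical paired PCL-5 scores, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired PCL-5 scores (0-80 scale) for participants before and after SDTP.
pre_pcl5 = np.array([52, 61, 48, 70, 55, 66, 44, 58])
post_pcl5 = np.array([40, 45, 42, 51, 47, 50, 39, 43])

stat, p_value = wilcoxon(pre_pcl5, post_pcl5)
mean_change = (post_pcl5 - pre_pcl5).mean()
print(f"Mean change = {mean_change:.1f} points, Wilcoxon W = {stat:.1f}, p = {p_value:.3f}")
```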
Procedia PDF Downloads 112
209 Developing a Product Circularity Index with an Emphasis on Longevity, Repairability, and Material Efficiency
Authors: Lina Psarra, Manogj Sundaresan, Purjeet Sutar
Abstract:
In response to the global imperative for sustainable solutions, this article proposes the development of a comprehensive circularity index applicable to a wide range of products across various industries. The absence of a consensus on a universal metric to assess circularity performance presents a significant challenge in prioritizing and effectively managing sustainable initiatives. This circularity index serves as a quantitative measure to evaluate the adherence of products, processes, and systems to the principles of a circular economy. Unlike traditional distinct metrics such as recycling rates or material efficiency, this index considers the entire lifecycle of a product in a single metric, also incorporating additional factors such as reusability, scarcity of materials, reparability, and recyclability. Through a systematic approach and by reviewing existing metrics and past methodologies, this work aims to address this gap by formulating a circularity index that can be applied to diverse product portfolios and assist in comparing the circularity of products on a scale of 0-100%. Project objectives include developing a formula, designing and implementing a pilot tool based on the developed Product Circularity Index (PCI), evaluating the effectiveness of the formula and tool using real product data, and assessing the feasibility of integration into various sustainability initiatives. The research methodology involves an iterative process of comprehensive research, analysis, and refinement, where key steps include defining circularity parameters, collecting relevant product data, applying the developed formula, and testing the tool in a pilot phase to gather insights and make necessary adjustments. The major findings of the study indicate that the PCI provides a robust framework for evaluating product circularity across various dimensions. The Excel-based pilot tool demonstrated high accuracy and reliability in measuring circularity, and the database proved instrumental in supporting comprehensive assessments. The PCI facilitated the identification of key areas for improvement, enabling more informed decision-making towards circularity and benchmarking across different products, ultimately supporting better resource management. In conclusion, the development of the Product Circularity Index represents a significant advancement in global sustainability efforts. By providing a standardized metric, the PCI empowers companies and stakeholders to systematically assess product circularity, track progress, identify improvement areas, and make informed decisions about resource management. This project contributes to the broader discourse on sustainable development by offering a practical approach to enhancing circularity within industrial systems, thus paving the way towards a more resilient and sustainable future.
Keywords: circular economy, circular metrics, circularity assessment, circularity tool, sustainable product design, product circularity index
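One common way to structure such an index is as a weighted combination of normalized sub-scores reported on a 0-100% scale. The sketch below illustrates that structure in Python; the sub-score names, weights, and example values are hypothetical and do not reproduce the PCI formula developed in the paper.

```python
# Illustrative structure for a product circularity index: a weighted average of
# normalized sub-scores in [0, 1], reported on a 0-100% scale.

WEIGHTS = {
    "longevity": 0.25,            # achieved vs. reference service life
    "repairability": 0.20,        # ease of disassembly and spare-part availability
    "material_efficiency": 0.20,  # recycled/renewable content and production scrap
    "reusability": 0.15,
    "recyclability": 0.15,
    "material_scarcity": 0.05,    # highest when no critical raw materials are used
}

def product_circularity_index(scores: dict) -> float:
    """Weighted 0-100% index from normalized sub-scores in [0, 1]."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return 100.0 * sum(WEIGHTS[key] * scores[key] for key in WEIGHTS)

example_product = {
    "longevity": 0.8, "repairability": 0.6, "material_efficiency": 0.5,
    "reusability": 0.4, "recyclability": 0.7, "material_scarcity": 0.9,
}
print(f"PCI = {product_circularity_index(example_product):.1f} %")
```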
Procedia PDF Downloads 27
208 Thermal Stress and Computational Fluid Dynamics Analysis of Coatings for High-Temperature Corrosion
Authors: Ali Kadir, O. Anwar Beg
Abstract:
Thermal barrier coatings are among the most popular methods for providing corrosion protection in high-temperature applications, including aircraft engine systems, external spacecraft structures, and rocket chambers. Many different materials are available for such coatings, of which ceramics generally perform the best. Motivated by these applications, the current investigation presents detailed finite element simulations of coating stress analysis for a 3-dimensional, 3-layered model of a test sample representing a typical gas turbine component scenario. Structural steel is selected for the main inner layer, a titanium (Ti) alloy for the middle layer, and silicon carbide (SiC) for the outermost layer. The model dimensions are 20 mm (width), 10 mm (height) and three 1 mm thick layers. ANSYS software is employed to conduct three types of analysis: static structural analysis, thermal stress analysis, and computational fluid dynamics erosion/corrosion analysis (via ANSYS FLUENT). The specified geometry, which corresponds exactly to the corrosion test samples, is discretized using a body-sizing meshing approach comprising mainly tetrahedral cells. Refinements were concentrated at the connection points between the layers to shift the focus towards the static effects dissipated between them. A detailed grid independence study is conducted to confirm the accuracy of the selected mesh densities. To recreate gas turbine scenarios, static loading and thermal environment conditions of up to 1000 N and 1000 K are imposed in the stress analysis simulations. The default solver was used to set the controls for the simulation, with the fixed support set as one side of the model while the opposite side was subjected to a tabular force of 500 and 1000 newtons. Equivalent elastic strain, total deformation, equivalent stress and strain energy were computed for all cases. Each analysis was repeated twice, removing one of the layers each time, to allow the static and thermal effects to be tested with each of the coatings. An ANSYS FLUENT simulation was conducted to study the effect of corrosion on the model under similar thermal conditions. The momentum and energy equations were solved, and the viscous heating option was applied to represent improved thermal physics of heat transfer between the layers of the structures. A Discrete Phase Model (DPM) in ANSYS FLUENT was employed, which allows for the injection of a continuous, uniform stream of air particles onto the model, thereby enabling the corrosion factor caused by hot air injection to be calculated (particles prescribed a velocity of 5 m/s and a temperature of 1273.15 K). Extensive visualization of results is provided. The simulations reveal interesting features associated with the coating response to realistic gas turbine loading conditions, including significantly different stress concentrations with different coatings.
Keywords: thermal coating, corrosion, ANSYS FEA, CFD
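A quick order-of-magnitude check on the coating stresses computed by such an FEA can be made with the standard equibiaxial thermal mismatch formula for a thin coating on a thick substrate, sigma = E_coat * (alpha_sub - alpha_coat) * dT / (1 - nu_coat). The Python sketch below applies it with indicative placeholder properties for SiC on a Ti alloy; it neglects the multilayer stack, geometry, and mechanical loading resolved by the simulations.

```python
def coating_mismatch_stress(E_coat_gpa, nu_coat, alpha_coat, alpha_sub, delta_t_k):
    """Equibiaxial thermal mismatch stress [MPa] in a thin coating on a thick substrate."""
    return E_coat_gpa * 1e3 * (alpha_sub - alpha_coat) * delta_t_k / (1.0 - nu_coat)

# Indicative placeholder properties: SiC coating on a Ti-alloy substrate, heated by ~700 K.
stress_mpa = coating_mismatch_stress(
    E_coat_gpa=410.0,      # SiC Young's modulus
    nu_coat=0.16,          # SiC Poisson's ratio
    alpha_coat=4.0e-6,     # SiC thermal expansion coefficient [1/K]
    alpha_sub=9.0e-6,      # Ti-alloy thermal expansion coefficient [1/K]
    delta_t_k=700.0,
)
print(f"Estimated mismatch stress in the SiC layer: {stress_mpa:.0f} MPa")
```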
Procedia PDF Downloads 134
207 Integrated Manufacture of Polymer and Conductive Tracks for Functional Objects Fabrication
Authors: Barbara Urasinska-Wojcik, Neil Chilton, Peter Todd, Christopher Elsworthy, Gregory J. Gibbons
Abstract:
The recent increase in the application of Additive Manufacturing (AM) to products has resulted in new demands on capability. The ability to integrate both form and function within printed objects is the next frontier in the 3D printing area. To move beyond prototyping into low-volume production, we demonstrate a UK-designed and built hybrid AM system that combines polymer-based structural deposition with digital deposition of electrically conductive elements. This hybrid manufacturing system is based on a multi-planar build approach to improve on many of the limitations associated with AM, such as poor surface finish, low geometric tolerance, and poor robustness. Specifically, the approach involves a multi-planar Material Extrusion (ME) process in which separated build stations with up to 5 axes of motion replace traditional horizontally-sliced layer modeling. The construction of multi-material architectures also involved using multiple print systems in order to combine both ME and digital deposition of conductive material. To demonstrate multi-material 3D printing, three thermoplastics, namely acrylonitrile butadiene styrene (ABS), polyamide 6,6/6 copolymers (CoPA), and polyamide 12 (PA), were used to print specimens, on top of which our high-viscosity Ag-particulate ink was printed in a non-contact process, during which drop characteristics such as shape, velocity, and volume were assessed using a drop watching system. Spectroscopic analysis of these 3D printed materials in the IR region helped to determine the optimum in-situ curing system for implementation into the AM system to achieve improved adhesion and surface refinement. Thermal analyses were performed to determine the printed materials' glass transition temperature (Tg), stability and degradation behavior, in order to find the optimum post-printing annealing conditions. Electrical analysis of printed conductive tracks on polymer surfaces during mechanical testing (static tensile, 3-point bending, and dynamic fatigue) was performed to assess the robustness of the electrical circuits. The tracks on CoPA, ABS, and PA exhibited low electrical resistance, and in the case of PA, track resistance values remained unchanged across hundreds of repeated tensile cycles up to 0.5% strain amplitude. Our developed AM printer has the ability to fabricate fully functional objects in one build, including complex electronics. It enables product designers and manufacturers to produce functional, saleable electronic products from a small-format modular platform. It will make 3D printing better, faster and stronger.
Keywords: additive manufacturing, conductive tracks, hybrid 3D printer, integrated manufacture
Procedia PDF Downloads 165
206 Anti-Angiogenic and Anti-Metastatic Effect of Aqueous Fraction from Euchelus Asper Methanolic Extract
Authors: Sweta Agrawal, Sachin Chaugule, Gargi Rane, Shashank More, Madhavi Indap
Abstract:
Angiogenesis and metastasis are two of the most important hallmarks of cancer. Hence, most cancer therapies nowadays are multi-targeted, so as to reduce resistance and improve efficacy. As synthetic molecules carry a burden of toxicities and side effects, more and more research is focused on exploiting the vast natural sources of drugs in the form of plants and animals. Although the idea of using marine organisms as a source of pharmaceuticals is not new, the pace at which marine drugs are being discovered has surged. In the present study, we have assessed the anti-angiogenic and in vitro anti-metastatic activity of an aqueous fraction from the extract of the marine gastropod Euchelus asper. The soft body of Euchelus asper was extracted with methanol, and the extract was named EAME. Partition chromatography of EAME gave three fractions: EAME I, II and III. Biochemical analysis revealed the presence of proteins in EAME III. Preliminary analysis had revealed that, of the three fractions, anti-angiogenic activity was exhibited by EAME III. Thereafter, EAME III (25 µg/ml to 400 µg/ml) was tested on the chick chorioallantoic membrane (CAM) model for detailed analysis of its potential anti-angiogenic effect. In vitro testing of the fraction (0.25 µg/ml to 1 µg/ml) involved cytotoxicity by SRB assay, cell cycle analysis by flow cytometry, and assessment of the anti-proliferative effect by scratch wound healing assay on A549 lung carcinoma cells. Apart from this, a portion of treated CAM as well as conditioned medium from treated A549 cells were subjected to gelatin zymography for assessment of matrix metalloproteinase (MMP-2 and MMP-9) levels. Our results revealed that EAME III exhibited significant anti-angiogenic activity on the CAM, which was also supported by histological observations. During histological studies of the CAM, it was found that EAME III caused a reduction in angiogenesis by altering the extracellular matrix of the CAM. In vitro analysis disclosed that EAME III exhibited a moderate cytotoxic effect on A549 cells, and its effect was not dose-dependent. The results of flow cytometry confirmed that EAME III caused cell cycle arrest in the A549 cell line, as almost all of the treated cells were found in the G1 phase. Further, the migration and proliferation of A549 cells were significantly reduced by EAME III, as observed from the scratch wound assay. Moreover, gelatin zymography analysis revealed that EAME III suppressed MMP-2 in the CAM and reduced MMP-9 and MMP-2 expression in A549 cells. This verified that the anti-angiogenic and anti-metastatic effects of EAME III were correlated with the suppression of MMP-2 and -9. To conclude, EAME III shows dual anti-tumour action by reducing angiogenesis and exerting an anti-metastatic effect on lung cancer cells; thus, it has the potential to be used as an anti-cancer agent against lung carcinoma.
Keywords: angiogenesis, anti-cancer, marine drugs, matrix metalloproteinases
Procedia PDF Downloads 230