Search results for: fault prediction
68 Estimation of State of Charge, State of Health and Power Status for the Li-Ion Battery On-Board Vehicle
Authors: S. Sabatino, V. Calderaro, V. Galdi, G. Graber, L. Ippolito
Abstract:
Climate change is a rapidly growing global threat caused mainly by increased emissions of carbon dioxide (CO₂) into the atmosphere. These emissions come from multiple sources, including industry, power generation, and the transport sector. The need to tackle climate change and reduce CO₂ emissions is indisputable. A crucial solution to achieving decarbonization in the transport sector is the adoption of electric vehicles (EVs). These vehicles use lithium-ion (Li-Ion) batteries as an energy source, making them highly efficient with low direct emissions. However, Li-Ion batteries are not without problems, including the risk of overheating and performance degradation. To ensure their safety and longevity, it is essential to use a battery management system (BMS). The BMS constantly monitors battery status and adjusts temperature and cell balance, ensuring optimal performance and preventing dangerous situations. Based on this monitoring, it is also able to manage the battery optimally to extend its life. Among the parameters monitored by the BMS, the main ones are State of Charge (SoC), State of Health (SoH), and State of Power (SoP). The evaluation of these parameters can be carried out in two ways: offline, using benchtop batteries tested in the laboratory, or online, using batteries installed in moving vehicles. Online estimation is the preferred approach, as it relies on capturing real-time data from batteries operating in real-life situations, such as everyday EV use. Actual battery usage conditions are highly variable. Moving vehicles are exposed to a wide range of factors, including temperature variations, different driving styles, and complex charge/discharge cycles. This variability is difficult to replicate in a controlled laboratory environment and can greatly affect performance and battery life. Online estimation captures this variety of conditions, providing a more accurate assessment of battery behavior in real-world situations. In this article, a hybrid approach based on a neural network and a statistical method for real-time estimation of the SoC, SoH, and SoP parameters of interest is proposed. These parameters are estimated from the analysis of a one-day driving profile of an electric vehicle, assumed to be divided into the following four phases: (i) partial discharge (SoC 100% - SoC 50%), (ii) partial charge (SoC 50% - SoC 80%), (iii) deep discharge (SoC 80% - SoC 30%), and (iv) full charge (SoC 30% - SoC 100%). The neural network predicts the values of ohmic resistance and incremental capacity, while the statistical method is used to estimate the parameters of interest. This reduces the complexity of the model and improves its prediction accuracy. The effectiveness of the proposed model is evaluated by analyzing its performance in terms of root mean square error (RMSE) and mean absolute percentage error (MAPE) and comparing it with the reference method found in the literature.
Keywords: electric vehicle, Li-Ion battery, BMS, state-of-charge, state-of-health, state-of-power, artificial neural networks
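The abstract reports accuracy in terms of RMSE and MAPE. As a minimal sketch (not the authors' implementation), assuming a reference SoC trace and the corresponding online estimates are available as arrays, the two error measures could be computed as follows; the numerical values below are placeholders, not data from the study.
```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between reference and estimated values."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

# Placeholder values: reference SoC over one driving day and the SoC estimated
# online by the hybrid neural-network/statistical model.
soc_reference = [1.00, 0.92, 0.81, 0.63, 0.50, 0.72, 0.80, 0.55, 0.30, 0.65, 1.00]
soc_estimated = [1.00, 0.93, 0.80, 0.61, 0.52, 0.70, 0.81, 0.57, 0.31, 0.63, 0.99]

print(f"RMSE: {rmse(soc_reference, soc_estimated):.4f}")
print(f"MAPE: {mape(soc_reference, soc_estimated):.2f}%")
```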
Procedia PDF Downloads 69
67 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory
Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker
Abstract:
In view of the ageing of vital infrastructure facilities, a reliable condition assessment of concrete structures is becoming of increasing interest for asset owners to plan timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting the serviceability and, eventually, structural performance. The determination of the quantitative chloride ingress is required not only to provide valuable information on the present condition of a structure, but the data obtained can also be used for the prediction of its future development and associated risks. At present, wet chemical analysis of ground concrete samples by a laboratory is the most common test procedure for the determination of the chloride content. As the chloride content is expressed by the mass of the binder, the analysis should involve determination of both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly. The chloride profile obtained is based on depth intervals of 10 mm. LIBS is an economically viable alternative providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentrations like in a crack. The results are correlated directly to the mass of the binder, and it can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples for the application of the method in the laboratory for the investigation of diffusion and migration of chlorides, sulfates, and alkalis are presented. An example for the visualization of the Li transport in concrete is also shown. These examples show the potential of the method for a fast, reliable, and automated two-dimensional investigation of transport processes. Due to the better spatial resolution, more accurate input parameters for model calculations are determined. By the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in only one measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer. A portable scanner allows a two-dimensional quantitative element mapping. Results show the quantitative chloride analysis on wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site. Results obtained were compared and verified with laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedures - the wet chemical analysis of ground concrete samples. Currently, work is underway to develop a technical code of practice for the application of the method for the determination of chloride concentration in concrete.Keywords: chemical analysis, concrete, LIBS, spectroscopy
Procedia PDF Downloads 105
66 Middle School as a Developmental Context for Emergent Citizenship
Authors: Casta Guillaume, Robert Jagers, Deborah Rivas-Drake
Abstract:
Civically engaged youth are critical to maintaining and/or improving the functioning of local, national and global communities and their institutions. The present study investigated how school climate and academic beliefs (academic self-efficacy and school belonging) may inform emergent civic behaviors (emergent citizenship) among self-identified middle school youth of color (African American, Multiracial or Mixed, Latino, Asian American or Pacific Islander, Native American, and other). Study aims: 1) Understand whether and how school climate is associated with civic engagement behaviors, directly and indirectly, by fostering a positive sense of connection to the school and/or engendering feelings of self-efficacy in the academic domain. Accordingly, we examined 2) The association of youths’ sense of school connection and academic self-efficacy with their personally responsible and participatory civic behaviors in school and community contexts—both concurrently and longitudinally. Data from two subsamples of a larger study of social/emotional development among middle school students were used for longitudinal and cross sectional analysis. The cross-sectional sample included 324 6th-8th grade students, of which 43% identified as African American, 20% identified as Multiracial or Mixed, 18% identified as Latino, 12% identified as Asian American or Pacific Islander, 6% identified as Other, and 1% identified as Native American. The age of the sample ranged from 11 – 15 (M = 12.33, SD = .97). For the longitudinal test of our mediation model, we drew on data from the 6th and 7th grade cohorts only (n =232); the ethnic and racial diversity of this longitudinal subsample was virtually identical to that of the cross-sectional sample. For both the cross-sectional and longitudinal analyses, full information maximum likelihood was used to deal with missing data. Fit indices were inspected to determine if they met the recommended thresholds of RMSEA below .05 and CFI and TLI values of at least .90. To determine if particular mediation pathways were significant, the bias-corrected bootstrap confidence intervals for each indirect pathway were inspected. Fit indices for the latent variable mediation model using the cross-sectional data suggest that the hypothesized model fit the observed data well (CFI = .93; TLI =. 92; RMSEA = .05, 90% CI = [.04, .06]). In the model, students’ perceptions of school climate were significantly and positively associated with greater feelings of school connectedness, which were in turn significantly and positively associated with civic engagement. In addition, school climate was significantly and positively associated with greater academic self-efficacy, but academic self-efficacy was not significantly associated with civic engagement. Tests of mediation indicated there was one significant indirect pathway between school climate and civic engagement behavior. There was an indirect association between school climate and civic engagement via its association with sense of school connectedness, indirect association estimate = .17 [95% CI: .08, .32]. The aforementioned indirect association via school connectedness accounted for 50% (.17/.34) of the total effect. Partial support was found for the prediction that students’ perceptions of a positive school climate are linked to civic engagement in part through their role in students’ sense of connection to school.Keywords: civic engagement, early adolescence, school climate, school belonging, developmental niche
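The mediation result above was obtained with a latent-variable model. As a simplified observed-variable sketch of the bias-corrected bootstrap used to test the indirect pathway, the following could be used; the data are simulated placeholders and the helper functions are illustrative, not the authors' SEM code.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    """a*b indirect effect from two OLS fits: m ~ x (path a), y ~ x + m (path b)."""
    a = np.polyfit(x, m, 1)[0]                       # slope of m on x
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]      # slope of y on m, controlling x
    return a * b

def bc_bootstrap_ci(x, m, y, n_boot=2000, alpha=0.05):
    """Bias-corrected bootstrap confidence interval for the indirect effect."""
    est = indirect_effect(x, m, y)
    n = len(x)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        boots[i] = indirect_effect(x[idx], m[idx], y[idx])
    z0 = stats.norm.ppf(np.mean(boots < est))        # bias-correction factor
    z_lo, z_hi = stats.norm.ppf([alpha / 2, 1 - alpha / 2])
    lo, hi = stats.norm.cdf(2 * z0 + z_lo), stats.norm.cdf(2 * z0 + z_hi)
    return est, np.quantile(boots, [lo, hi])

# Placeholder data: school climate (x), school connectedness (m), civic engagement (y).
n = 324
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(scale=0.8, size=n)
y = 0.4 * m + 0.1 * x + rng.normal(scale=0.8, size=n)

est, (ci_lo, ci_hi) = bc_bootstrap_ci(x, m, y)
print(f"indirect effect = {est:.3f}, 95% BC CI = [{ci_lo:.3f}, {ci_hi:.3f}]")
```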
Procedia PDF Downloads 370
65 The Role of Supply Chain Agility in Improving Manufacturing Resilience
Authors: Maryam Ziaee
Abstract:
This research proposes a new approach and provides an opportunity for manufacturing companies to produce large amounts of products that meet their prospective customers’ tastes, needs, and expectations and simultaneously enable manufacturers to increase their profit. Mass customization is the production of products or services to meet each individual customer’s desires to the greatest possible extent, in high quantities and at reasonable prices. This process takes place at different levels, such as the customization of goods’ design, assembly, sale, and delivery status, and can be classified into several categories. The main focus of this study is on one class of mass customization, called optional customization, in which companies try to provide their customers with as many options as possible to customize their products. These options could range from the design phase to the manufacturing phase, or even methods of delivery. Mass customization values customers’ tastes, but it is only one side of clients’ satisfaction; the other side is the company’s fast and responsive delivery. This brings in the concept of agility, which is the ability of a company to respond rapidly to changes in volatile markets in terms of volume and variety. Indeed, mass customization is not effectively feasible without integrating the concept of agility. To gain customers’ satisfaction, companies need to be quick in responding to their customers’ demands, which highlights the significance of agility. This research offers a different method that successfully integrates mass customization and fast production in manufacturing industries. It is built upon the hypothesis that the key to being agile in mass customization is to forecast demand, cooperate with suppliers, and control inventory. The supply chain (SC) therefore becomes especially significant at this stage. Since SC behavior is dynamic and changes constantly, companies have to apply a predictive technique to identify the changes associated with SC behavior in order to respond properly to any unwelcome events. System dynamics, utilized in this research, is a simulation approach that provides a mathematical model among different variables to understand, control, and forecast SC behavior. The final stage is delayed differentiation, the production strategy considered in this research. In this approach, the main platform of products is produced and stocked, and when the company receives an order from a customer, a specific customized feature is assigned to this platform and the customized product is created. The main research question is to what extent applying system dynamics for the prediction of SC behavior improves the agility of mass customization. This research is built upon a qualitative approach to bring about richer, deeper, and more revealing results. The data is collected through interviews and is analyzed through NVivo software. The proposed model offers numerous benefits, such as a reduction in the number of product inventories and their storage costs, improvement in the resilience of companies’ responses to their clients’ needs and tastes, an increase in profits, and the optimization of productivity with the minimum level of lost sales.
Keywords: agility, manufacturing, resilience, supply chain
Procedia PDF Downloads 91
64 Teleconnection between El Nino-Southern Oscillation and Seasonal Flow of the Surma River and Possibilities of Long Range Flood Forecasting
Authors: Monika Saha, A. T. M. Hasan Zobeyer, Nasreen Jahan
Abstract:
El Nino-Southern Oscillation (ENSO) is the interaction between atmosphere and ocean in tropical Pacific which causes inconsistent warm/cold weather in tropical central and eastern Pacific Ocean. Due to the impact of climate change, ENSO events are becoming stronger in recent times, and therefore it is very important to study the influence of ENSO in climate studies. Bangladesh, being in the low-lying deltaic floodplain, experiences the worst consequences due to flooding every year. To reduce the catastrophe of severe flooding events, non-structural measures such as flood forecasting can be helpful in taking adequate precautions and steps. Forecasting seasonal flood with a longer lead time of several months is a key component of flood damage control and water management. The objective of this research is to identify the possible strength of teleconnection between ENSO and river flow of Surma and examine the potential possibility of long lead flood forecasting in the wet season. Surma is one of the major rivers of Bangladesh and is a part of the Surma-Meghna river system. In this research, sea surface temperature (SST) has been considered as the ENSO index and the lead time is at least a few months which is greater than the basin response time. The teleconnection has been assessed by the correlation analysis between July-August-September (JAS) flow of Surma and SST of Nino 4 region of the corresponding months. Cumulative frequency distribution of standardized JAS flow of Surma has also been determined as part of assessing the possible teleconnection. Discharge data of Surma river from 1975 to 2015 is used in this analysis, and remarkable increased value of correlation coefficient between flow and ENSO has been observed from 1985. From the cumulative frequency distribution of the standardized JAS flow, it has been marked that in any year the JAS flow has approximately 50% probability of exceeding the long-term average JAS flow. During El Nino year (warm episode of ENSO) this probability of exceedance drops to 23% and while in La Nina year (cold episode of ENSO) it increases to 78%. Discriminant analysis which is known as 'Categoric Prediction' has been performed to identify the possibilities of long lead flood forecasting. It has helped to categorize the flow data (high, average and low) based on the classification of predicted SST (warm, normal and cold). From the discriminant analysis, it has been found that for Surma river, the probability of a high flood in the cold period is 75% and the probability of a low flood in the warm period is 33%. A synoptic parameter, forecasting index (FI) has also been calculated here to judge the forecast skill and to compare different forecasts. This study will help the concerned authorities and the stakeholders to take long-term water resources decisions and formulate policies on river basin management which will reduce possible damage of life, agriculture, and property.Keywords: El Nino-Southern Oscillation, sea surface temperature, surma river, teleconnection, cumulative frequency distribution, discriminant analysis, forecasting index
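As an illustration of the two analyses named in the abstract, the sketch below computes the correlation between JAS flow and Nino 4 SST and the conditional probability of exceeding the long-term mean flow by ENSO phase. The data frame is randomly generated as a placeholder, and the ±0.5 anomaly thresholds used to label El Nino/La Nina years are an assumption, not the study's classification.
```python
import numpy as np
import pandas as pd

# Placeholder frame: one row per year (1975-2015) with mean JAS flow of the Surma
# and mean JAS Nino 4 SST anomaly; real values would come from gauge/NOAA records.
rng = np.random.default_rng(1)
years = np.arange(1975, 2016)
sst = rng.normal(size=years.size)                            # Nino 4 SST anomaly (placeholder)
flow = -0.5 * sst + rng.normal(scale=0.8, size=years.size)   # standardized JAS flow (placeholder)
df = pd.DataFrame({"year": years, "nino4_sst": sst, "jas_flow": flow})

# Teleconnection strength: correlation between JAS flow and Nino 4 SST.
r = df["jas_flow"].corr(df["nino4_sst"])
print(f"Pearson r (JAS flow vs Nino 4 SST): {r:.2f}")

# Probability of exceeding the long-term mean JAS flow, conditioned on ENSO phase.
df["phase"] = pd.cut(df["nino4_sst"], [-np.inf, -0.5, 0.5, np.inf],
                     labels=["La Nina", "Neutral", "El Nino"])
exceed = df["jas_flow"] > df["jas_flow"].mean()
print(exceed.groupby(df["phase"]).mean().rename("P(flow > mean)"))
```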
Procedia PDF Downloads 156
63 Thermodynamics of Aqueous Solutions of Organic Molecule and Electrolyte: Use Cloud Point to Obtain Better Estimates of Thermodynamic Parameters
Authors: Jyoti Sahu, Vinay A. Juvekar
Abstract:
Electrolytes are often used to bring about salting-in and salting-out of organic molecules and polymers (e.g. polyethylene glycols/proteins) from aqueous solutions. For quantification of these phenomena, a thermodynamic model which can accurately predict the activity coefficient of the electrolyte as a function of temperature is needed. The thermodynamic models available in the literature contain a large number of empirical parameters. These parameters are estimated using the lower/upper critical solution temperature of the solution in the electrolyte/organic molecule at different temperatures. Since the number of parameters is large, inaccuracies can creep in during their estimation, which can affect the reliability of prediction beyond the range in which these parameters are estimated. The cloud point of a solution is related to its free energy through its temperature and composition derivatives. Hence, cloud point measurements can be used for accurate estimation of the temperature and composition dependence of the parameters in the model for free energy. Hence, if we use a two-pronged procedure in which we first use the cloud point of the solution to estimate some of the parameters of the thermodynamic model and determine the rest using osmotic coefficient data, we gain on two counts. First, since the parameters estimated in each of the two steps are fewer, we achieve higher accuracy of estimation. The second and more important gain is that the resulting model parameters are more sensitive to temperature. This is crucial when we wish to use the model outside the temperature window within which the parameter estimation is sought. The focus of the present work is to prove this proposition. We have used electrolyte (NaCl/Na2CO3)-water-organic molecule (isopropanol/ethanol) as the model system. The model of Robinson-Stokes-Glueckauf is modified by incorporating temperature-dependent Flory-Huggins interaction parameters. The Helmholtz free energy expression contains, in addition to electrostatic and translational entropic contributions, three Flory-Huggins pairwise interaction contributions, viz., water-polymer, water-salt, and polymer-salt (w-water, p-polymer, s-salt). These parameters depend both on temperature and concentrations. The concentration dependence is expressed in the form of a quadratic expression involving the volume fractions of the interacting species, and the temperature dependence is expressed through a prescribed functional form of temperature. To obtain the temperature-dependent interaction parameters for the organic molecule-water and electrolyte-water systems, the critical solution temperature of electrolyte-water-organic molecule mixtures is measured using a cloud point measuring apparatus. The temperature- and composition-dependent interaction parameters for electrolyte-water-organic molecule are estimated through measurement of the cloud point of the solution. The model is used to estimate the critical solution temperature (CST) of electrolyte-water-organic molecule solutions. We have experimentally determined the critical solution temperature of different compositions of electrolyte-water-organic molecule solution and compared the results with the estimates based on our model. The two sets of values show good agreement. On the other hand, when only osmotic coefficients are used for estimation of the free energy model, the CST predicted using the resulting model shows poor agreement with the experiments.
Thus, the importance of the CST data in the estimation of parameters of the thermodynamic model is confirmed through this work.
Keywords: concentrated electrolytes, Debye-Hückel theory, interaction parameters, Robinson-Stokes-Glueckauf model, Flory-Huggins model, critical solution temperature
Procedia PDF Downloads 393
62 Isolation and Characterization of a Narrow-Host Range Aeromonas hydrophila Lytic Bacteriophage
Authors: Sumeet Rai, Anuj Tyagi, B. T. Naveen Kumar, Shubhkaramjeet Kaur, Niraj K. Singh
Abstract:
Since their discovery, indiscriminate use of antibiotics in human, veterinary and aquaculture systems has resulted in the global emergence/spread of multidrug-resistant bacterial pathogens. Thus, the need for alternative approaches to control bacterial infections has become utmost important. The high selectivity/specificity of bacteriophages (phages) permits the targeting of specific bacteria without affecting the desirable flora. In this study, a lytic phage (Ahp1) specific to Aeromonas hydrophila subsp. hydrophila was isolated from a finfish aquaculture pond. The host range of Ahp1 was tested against 10 isolates of A. hydrophila, 7 isolates of A. veronii, 25 Vibrio cholerae isolates, 4 V. parahaemolyticus isolates and one isolate each of V. harveyi and Salmonella enterica collected previously. Except for the host A. hydrophila subsp. hydrophila strain, no lytic activity against any other bacterial isolate was detected. During the adsorption rate and one-step growth curve analysis, 69.7% of phage particles were able to adsorb onto the host cell, followed by the release of 93 ± 6 phage progenies per host cell after a latent period of ~30 min. Phage nucleic acid was extracted by column purification methods. After determining the nature of the phage nucleic acid as dsDNA, the phage genome was subjected to next-generation sequencing by generating paired-end (PE, 2 x 300bp) reads on the Illumina MiSeq system. De novo assembly of sequencing reads generated a circular phage genome of 42,439 bp with a G+C content of 58.95%. During open reading frame (ORF) prediction and annotation, 22 ORFs (out of 49 total predicted ORFs) were functionally annotated and the rest encoded hypothetical proteins. Proteins involved in major functions such as phage structure formation and packaging, DNA replication and repair, DNA transcription and host cell lysis were encoded by the phage genome. The complete genome sequence of Ahp1 along with gene annotation was submitted to NCBI GenBank (accession number MF683623). The stability of Ahp1 preparations at storage temperatures of 4 °C, 30 °C, and 40 °C was studied over a period of 9 months. At 40 °C storage, phage counts declined by 4 log units within one month, with a total loss of viability after 2 months. At 30 °C, the phage preparation was stable for < 5 months. On the other hand, phage counts decreased by only 2 log units over a period of 9 months during storage at 4 °C. As some phages have also been reported to be glycerol sensitive, the stability of Ahp1 preparations in (0%, 15%, 30% and 45%) glycerol stocks was also studied during storage at -80 °C over a period of 9 months. The phage counts decreased only by 2 log units during storage, and no significant difference in phage counts was observed at different concentrations of glycerol. The Ahp1 phage discovered in our study had a very narrow host range, and it may be useful for phage typing applications. Moreover, the endolysin and holin genes in the Ahp1 genome could be ideal candidates for recombinant cloning and expression of antimicrobial proteins.
Keywords: Aeromonas hydrophila, endolysin, phage, narrow host range
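As an illustration of the two quantities reported from the adsorption and one-step growth experiments, a minimal sketch of the arithmetic is given below; the titre values are placeholders, not counts from the study.
```python
# Adsorption percentage and burst size from plaque counts (PFU/mL). The inputs
# below are invented placeholders chosen only to land near the reported figures.

def adsorption_percent(initial_free_pfu, free_pfu_after_adsorption):
    """Fraction of phage particles adsorbed onto host cells, in percent."""
    adsorbed = initial_free_pfu - free_pfu_after_adsorption
    return 100.0 * adsorbed / initial_free_pfu

def burst_size(plateau_pfu, infected_centers_pfu):
    """Progeny phages released per infected cell (one-step growth curve)."""
    return plateau_pfu / infected_centers_pfu

print(f"adsorption: {adsorption_percent(1.0e7, 3.0e6):.1f} %")      # ~70 % adsorbed
print(f"burst size: {burst_size(9.3e8, 1.0e7):.0f} progeny/cell")   # ~93 progeny/cell
```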
Procedia PDF Downloads 163
61 FracXpert: Ensemble Machine Learning Approach for Localization and Classification of Bone Fractures in Cricket Athletes
Authors: Madushani Rodrigo, Banuka Athuraliya
Abstract:
In today's world of medical diagnosis and prediction, machine learning stands out as a powerful tool, transforming traditional approaches to healthcare. This study analyzes the use of machine learning in the specialized domain of sports medicine, with a focus on the timely and accurate detection of bone fractures in cricket athletes. Failure to identify bone fractures in real time can result in malunion or non-union conditions. To ensure proper treatment and enhance the bone healing process, accurately identifying fracture locations and types is necessary. Interpretation of X-ray images relies on the expertise and experience of medical professionals. Sometimes, radiographic images are of low quality, leading to potential issues. Therefore, it is necessary to have a proper approach to accurately localize and classify fractures in real time. The research revealed that the optimal approach needs to address the stated problem and employ appropriate radiographic image processing techniques and object detection algorithms. These algorithms should effectively localize and accurately classify all types of fractures with high precision and in a timely manner. In order to overcome the challenges of misidentifying fractures, a distinct model for fracture localization and classification has been implemented. The research also incorporates radiographic image enhancement and preprocessing techniques to overcome the limitations posed by low-quality images. A classification ensemble model has been implemented using ResNet18 and VGG16. In parallel, a fracture segmentation model has been implemented using the enhanced U-Net architecture. Combining the results of these two implemented models, the FracXpert system can accurately localize exact fracture locations along with fracture types from the available 12 different types of fracture patterns, which include avulsion, comminuted, compressed, dislocation, greenstick, hairline, impacted, intraarticular, longitudinal, oblique, pathological, and spiral. The system also generates a confidence score indicating the degree of confidence in the predicted result. The fracture segmentation model, based on the enhanced U-Net architecture, achieved a high accuracy level of 99.94%, demonstrating its precision in identifying fracture locations. Simultaneously, the classification ensemble model, built on the ResNet18 and VGG16 architectures, achieved an accuracy of 81.0%, showcasing its ability to categorize various fracture patterns, which is instrumental in the fracture treatment process. In conclusion, FracXpert is a promising ML application in sports medicine, demonstrating its potential to revolutionize fracture detection processes. By leveraging the power of ML algorithms, this study contributes to the advancement of diagnostic capabilities in cricket athlete healthcare, ensuring timely and accurate identification of bone fractures for the best treatment outcomes.
Keywords: multiclass classification, object detection, ResNet18, U-Net, VGG16
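A minimal sketch of the classification side of such a system is given below: two CNN backbones (ResNet18 and VGG16) whose softmax outputs are averaged to label a radiograph with one of the 12 fracture patterns and report a confidence score. It assumes a recent torchvision release; the input size, the averaging rule, and the untrained weights are illustrative assumptions, not the authors' trained ensemble.
```python
import torch
import torch.nn as nn
from torchvision import models

FRACTURE_CLASSES = ["avulsion", "comminuted", "compressed", "dislocation",
                    "greenstick", "hairline", "impacted", "intraarticular",
                    "longitudinal", "oblique", "pathological", "spiral"]

def build_ensemble(num_classes=len(FRACTURE_CLASSES)):
    # Replace each backbone's final layer with a 12-way fracture classifier.
    resnet = models.resnet18(weights=None)
    resnet.fc = nn.Linear(resnet.fc.in_features, num_classes)
    vgg = models.vgg16(weights=None)
    vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, num_classes)
    return resnet, vgg

@torch.no_grad()
def predict(image_batch, resnet, vgg):
    """Average the two models' class probabilities and report a confidence score."""
    resnet.eval(); vgg.eval()
    probs = (torch.softmax(resnet(image_batch), dim=1) +
             torch.softmax(vgg(image_batch), dim=1)) / 2.0
    conf, idx = probs.max(dim=1)
    return [(FRACTURE_CLASSES[int(i)], float(c)) for i, c in zip(idx, conf)]

if __name__ == "__main__":
    resnet, vgg = build_ensemble()
    x = torch.randn(2, 3, 224, 224)   # stand-in for preprocessed X-ray crops
    print(predict(x, resnet, vgg))
```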
Procedia PDF Downloads 124
60 Deterioration Prediction of Pavement Load Bearing Capacity from FWD Data
Authors: Kotaro Sasai, Daijiro Mizutani, Kiyoyuki Kaito
Abstract:
Expressways in Japan have been built in an accelerating manner since the 1960s with the aid of rapid economic growth. About 40 percent in length of expressways in Japan is now 30 years and older and has become superannuated. Time-related deterioration has therefore reached to a degree that administrators, from a standpoint of operation and maintenance, are forced to take prompt measures on a large scale aiming at repairing inner damage deep in pavements. These measures have already been performed for bridge management in Japan and are also expected to be embodied for pavement management. Thus, planning methods for the measures are increasingly demanded. Deterioration of layers around road surface such as surface course and binder course is brought about at the early stages of whole pavement deterioration process, around 10 to 30 years after construction. These layers have been repaired primarily because inner damage usually becomes significant after outer damage, and because surveys for measuring inner damage such as Falling Weight Deflectometer (FWD) survey and open-cut survey are costly and time-consuming process, which has made it difficult for administrators to focus on inner damage as much as they have been supposed to. As expressways today have serious time-related deterioration within them deriving from the long time span since they started to be used, it is obvious the idea of repairing layers deep in pavements such as base course and subgrade must be taken into consideration when planning maintenance on a large scale. This sort of maintenance requires precisely predicting degrees of deterioration as well as grasping the present situations of pavements. Methods for predicting deterioration are determined to be either mechanical or statistical. While few mechanical models have been presented, as far as the authors know of, previous studies have presented statistical methods for predicting deterioration in pavements. One describes deterioration process by estimating Markov deterioration hazard model, while another study illustrates it by estimating Proportional deterioration hazard model. Both of the studies analyze deflection data obtained from FWD surveys and present statistical methods for predicting deterioration process of layers around road surface. However, layers of base course and subgrade remain unanalyzed. In this study, data collected from FWD surveys are analyzed to predict deterioration process of layers deep in pavements in addition to surface layers by a means of estimating a deterioration hazard model using continuous indexes. This model can prevent the loss of information of data when setting rating categories in Markov deterioration hazard model when evaluating degrees of deterioration in roadbeds and subgrades. As a result of portraying continuous indexes, the model can predict deterioration in each layer of pavements and evaluate it quantitatively. Additionally, as the model can also depict probability distribution of the indexes at an arbitrary point and establish a risk control level arbitrarily, it is expected that this study will provide knowledge like life cycle cost and informative content during decision making process referring to where to do maintenance on as well as when.Keywords: deterioration hazard model, falling weight deflectometer, inner damage, load bearing capacity, pavement
Procedia PDF Downloads 390
59 Toy Engagement Patterns in Infants with a Familial History of Autism Spectrum Disorder
Authors: Vanessa Do, Lauren Smith, Leslie Carver
Abstract:
It is widely known that individuals with autism spectrum disorder (ASD) may exhibit sensitivity to stimuli. Even at a young age, they tend to display stimuli-related discomfort in their behavior during play. Play serves a crucial role in a child’s early years as it helps support healthy brain development, socio-emotional skills, and adaptation to their environment. There is research dedicated to studying infant preferences for toys, especially in regard to gender preferences, the advantages of promoting play, and the caregiver’s role in their child’s play routines. However, there is a disproportionate amount of literature examining how play patterns may differ in children with sensory sensitivity, such as children diagnosed with ASD. Prior literature has studied and found supporting evidence that individuals with ASD have deficits in social communication and an increased presence of repetitive behaviors and/or restricted interests, which also show in early childhood play patterns. This study aims to examine potential differences in toy preference between infants with (FH+) and without (FH-) a familial history of ASD at ages 6, 9, and 12 months. More specifically, this study will address the question, “do FH+ infants tend to play more with toys that require less social engagement compared to FH- infants?” Infants and their caregivers were recruited and asked to engage in a free-play session in their homes that lasted approximately 5 minutes. The sessions were recorded and later coded offline for engagement behaviors categorized by toy; each toy that the infants interacted with was coded as belonging to one of 6 categories: sensory (designed to stimulate one or more senses, such as light-up toys or musical toys), construction (e.g., building blocks, rubber suction cups), vehicles (e.g., toy cars), instructional (requiring steps to accomplish a goal, such as flip phones or books), imaginative (e.g., dolls, stuffed animals), and miscellaneous (toys that do not fit into these categories). Toy engagement was defined as the infant looking at and touching the toy (ILT) or looking at the toy while their caregiver was holding it (IL-CT). Results reported include/will include the proportion of time the infant was actively engaged with the toy out of the total usable video time per subject; distractions observed during the session were excluded from analysis. Data collection is still ongoing; however, the prediction is that FH+ infants will have higher engagement with sensory and construction toys, as they require the least amount of social effort. Furthermore, FH+ infants will have the least engagement with the imaginative toys, as prior literature has supported the claim that individuals with ASD are less likely to engage in pretend play and other play that requires social skills. Examining which toys are more or less engaging to FH+ infants is important, as it provides significant contributions to their healthy cognitive, social, and emotional development. As play is one of the first ways for a child to understand the complexities of the larger world, the findings of this study may help guide further research into encouraging play with toys that are more engaging and sensory-sensitive for children with ASD.
Keywords: autism engagement, children’s play, early development, free-play, infants, toy
Procedia PDF Downloads 221
58 Comparison of Machine Learning-Based Models for Predicting Streptococcus pyogenes Virulence Factors and Antimicrobial Resistance
Authors: Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Diego Santibañez Oyarce, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
Streptococcus pyogenes is a gram-positive bacterium involved in a wide range of diseases and is a major human-specific bacterial pathogen. In Chile, the 'Ministerio de Salud' declared an alert this year due to the increase in strains throughout the year. This increase can be attributed to a multitude of factors, including antimicrobial resistance (AMR) and virulence factors (VF). Understanding these VF and AMR is crucial for developing effective strategies and improving public health responses. Moreover, experimental identification and characterization of these pathogenic mechanisms are labor-intensive and time-consuming. Therefore, new computational methods are required to provide robust techniques for accelerating this identification. Advances in machine learning (ML) algorithms represent an opportunity to refine and accelerate the discovery of VF associated with Streptococcus pyogenes. In this work, we evaluate the accuracy of various machine learning models in predicting the virulence factors and antimicrobial resistance of Streptococcus pyogenes, with the objective of providing new methods for identifying the pathogenic mechanisms of this organism. Our comprehensive approach involved downloading 32,798 GenBank files of S. pyogenes from the NCBI dataset, coupled with the incorporation of data from the Virulence Factor Database (VFDB) and the Comprehensive Antibiotic Resistance Database (CARD), which contain AMR gene sequences and resistance profiles. These datasets provided labeled examples of both virulent and non-virulent genes, enabling a robust foundation for feature extraction and model training. We employed preprocessing, characterization and feature extraction techniques on primary nucleotide/amino acid sequences and selected the optimal features for model training. The feature set was constructed using sequence-based descriptors (e.g., k-mers and one-hot encoding) and functional annotations based on database prediction. The ML models compared are logistic regression, decision trees, support vector machines, and neural networks, among others. The results of this work show some differences in accuracy between the algorithms; these differences allow us to identify different aspects that represent unique opportunities for a more precise and efficient characterization and identification of VF and AMR. This comparative analysis underscores the value of integrating machine learning techniques in predicting S. pyogenes virulence and AMR, offering potential pathways for more effective diagnostic and therapeutic strategies. Future work will focus on incorporating additional omics data, such as transcriptomics, and exploring advanced deep learning models to further enhance predictive capabilities.
Keywords: antibiotic resistance, Streptococcus pyogenes, virulence factors, machine learning
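As a minimal sketch of the sequence-based feature extraction and one of the compared classifiers, the snippet below counts character k-mers (k = 4 here, an assumed value) and fits a logistic regression with scikit-learn; the toy sequences and labels are placeholders, not entries from VFDB/CARD or the NCBI download.
```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

sequences = [
    "ATGGCTAAAGTTCTGACCGGT", "ATGCCGTTAGCTAAAGGTTCA",  # labelled as virulence/AMR genes
    "ATGTTTGGCAACGATCCGTAA", "ATGAAACCGGGTTTACGCTGA",  # labelled as neither
]
labels = [1, 1, 0, 0]   # 1 = virulence factor / AMR gene, 0 = neither

# Character k-mers (k = 4) turn each sequence into a sparse count vector.
kmer_vectorizer = CountVectorizer(analyzer="char", ngram_range=(4, 4), lowercase=False)
model = make_pipeline(kmer_vectorizer, LogisticRegression(max_iter=1000))

scores = cross_val_score(model, sequences, labels, cv=2)
print("cross-validated accuracy:", scores.mean())
```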
Procedia PDF Downloads 36
57 Mathematical Modeling of Avascular Tumor Growth and Invasion
Authors: Meitham Amereh, Mohsen Akbari, Ben Nadler
Abstract:
Cancer has been recognized as one of the most challenging problems in biology and medicine. Aggressive tumors are a lethal type of cancers characterized by high genomic instability, rapid progression, invasiveness, and therapeutic resistance. Their behavior involves complicated molecular biology and consequential dynamics. Although tremendous effort has been devoted to developing therapeutic approaches, there is still a huge need for new insights into the dark aspects of tumors. As one of the key requirements in better understanding the complex behavior of tumors, mathematical modeling and continuum physics, in particular, play a pivotal role. Mathematical modeling can provide a quantitative prediction on biological processes and help interpret complicated physiological interactions in tumors microenvironment. The pathophysiology of aggressive tumors is strongly affected by the extracellular cues such as stresses produced by mechanical forces between the tumor and the host tissue. During the tumor progression, the growing mass displaces the surrounding extracellular matrix (ECM), and due to the level of tissue stiffness, stress accumulates inside the tumor. The produced stress can influence the tumor by breaking adherent junctions. During this process, the tumor stops the rapid proliferation and begins to remodel its shape to preserve the homeostatic equilibrium state. To reach this, the tumor, in turn, upregulates epithelial to mesenchymal transit-inducing transcription factors (EMT-TFs). These EMT-TFs are involved in various signaling cascades, which are often associated with tumor invasiveness and malignancy. In this work, we modeled the tumor as a growing hyperplastic mass and investigated the effects of mechanical stress from surrounding ECM on tumor invasion. The invasion is modeled as volume-preserving inelastic evolution. In this framework, principal balance laws are considered for tumor mass, linear momentum, and diffusion of nutrients. Also, mechanical interactions between the tumor and ECM is modeled using Ciarlet constitutive strain energy function, and dissipation inequality is utilized to model the volumetric growth rate. System parameters, such as rate of nutrient uptake and cell proliferation, are obtained experimentally. To validate the model, human Glioblastoma multiforme (hGBM) tumor spheroids were incorporated inside Matrigel/Alginate composite hydrogel and was injected into a microfluidic chip to mimic the tumor’s natural microenvironment. The invasion structure was analyzed by imaging the spheroid over time. Also, the expression of transcriptional factors involved in invasion was measured by immune-staining the tumor. The volumetric growth, stress distribution, and inelastic evolution of tumors were predicted by the model. Results showed that the level of invasion is in direct correlation with the level of predicted stress within the tumor. Moreover, the invasion length measured by fluorescent imaging was shown to be related to the inelastic evolution of tumors obtained by the model.Keywords: cancer, invasion, mathematical modeling, microfluidic chip, tumor spheroids
Procedia PDF Downloads 113
56 Three-Stage Least Squared Models of a Station-Level Subway Ridership: Incorporating an Analysis on Integrated Transit Network Topology Measures
Authors: Jungyeol Hong, Dongjoo Park
Abstract:
The urban transit system is a critical part of a solution to the economic, energy, and environmental challenges. Furthermore, it ultimately contributes the improvement of people’s quality of lives. For taking these kinds of advantages, the city of Seoul has tried to construct an integrated transit system including both subway and buses. The effort led to the fact that approximately 6.9 million citizens use the integrated transit system every day for their trips. Diagnosing the current transit network is a significant task to provide more convenient and pleasant transit environment. Therefore, the critical objective of this study is to establish a methodological framework for the analysis of an integrated bus-subway network and to examine the relationship between subway ridership and parameters such as network topology measures, bus demand, and a variety of commercial business facilities. Regarding a statistical approach to estimate subway ridership at a station level, many previous studies relied on Ordinary Least Square regression, but there was lack of studies considering the endogeneity issues which might show in the subway ridership prediction model. This study focused on both discovering the impacts of integrated transit network topology measures and endogenous effect of bus demand on subway ridership. It could ultimately contribute to developing more accurate subway ridership estimation accounting for its statistical bias. The spatial scope of the study covers Seoul city in South Korea, and it includes 243 subway stations and 10,120 bus stops with the temporal scope set during twenty-four hours with one-hour interval time panels each. The subway and bus ridership information in detail was collected from the Seoul Smart Card data in 2015 and 2016. First, integrated subway-bus network topology measures which have characteristics regarding connectivity, centrality, transitivity, and reciprocity were estimated based on the complex network theory. The results of integrated transit network topology analysis were compared to subway-only network topology. Also, the non-recursive approach which is Three-Stage Least Square was applied to develop the daily subway ridership model as capturing the endogeneity between bus and subway demands. Independent variables included roadway geometry, commercial business characteristics, social-economic characteristics, safety index, transit facility attributes, and dummies for seasons and time zone. Consequently, it was found that network topology measures were significant size effect. Especially, centrality measures showed that the elasticity was a change of 4.88% for closeness centrality, 24.48% for betweenness centrality while the elasticity of bus ridership was 8.85%. Moreover, it was proved that bus demand and subway ridership were endogenous in a non-recursive manner as showing that predicted bus ridership and predicted subway ridership is statistically significant in OLS regression models. Therefore, it shows that three-stage least square model appears to be a plausible model for efficient subway ridership estimation. It is expected that the proposed approach provides a reliable guideline that can be used as part of the spectrum of tools for evaluating a city-wide integrated transit network.Keywords: integrated transit system, network topology measures, three-stage least squared, endogeneity, subway ridership
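The endogeneity argument above is what motivates a three-stage least squares system in which bus and subway demand appear in each other's equations. A minimal sketch of such a two-equation system is given below; it assumes the third-party linearmodels package and its IV3SLS formula interface, and the variable names, instruments, and randomly generated data frame are illustrative stand-ins, not the study's Seoul Smart Card dataset or model specification.
```python
import numpy as np
import pandas as pd
from linearmodels.system import IV3SLS  # assumes the `linearmodels` package is installed

rng = np.random.default_rng(0)
n = 243  # one row per subway station (placeholder)
df = pd.DataFrame({
    "betweenness": rng.uniform(0, 1, n),        # network topology measures
    "closeness": rng.uniform(0, 1, n),
    "business_density": rng.uniform(0, 1, n),
    "bus_stops": rng.integers(1, 40, n).astype(float),
})
# Simulated mutual dependence between bus and subway demand (placeholder data).
df["bus_riders"] = 500 * df["bus_stops"] + rng.normal(0, 300, n)
df["subway_riders"] = (0.8 * df["bus_riders"] + 4000 * df["betweenness"]
                       + 2000 * df["business_density"] + rng.normal(0, 500, n))
df["bus_riders"] += 0.2 * df["subway_riders"]

equations = {
    # Subway ridership with endogenous bus ridership, instrumented by bus_stops.
    "subway": "subway_riders ~ 1 + betweenness + closeness + business_density"
              " + [bus_riders ~ bus_stops]",
    # Bus ridership with endogenous subway ridership, instrumented by topology measures.
    "bus": "bus_riders ~ 1 + bus_stops + business_density"
           " + [subway_riders ~ betweenness + closeness]",
}
results = IV3SLS.from_formula(equations, df).fit()
print(results.summary)
```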
Procedia PDF Downloads 179
55 Statistical Models and Time Series Forecasting on Crime Data in Nepal
Authors: Dila Ram Bhandari
Abstract:
Throughout the 20th century, new governments were created in which identities such as ethnic, religious, linguistic, caste, communal, tribal, and others played a part in the development of constitutions and the legal systems of victim and criminal justice. Acute issues with extremism, poverty, environmental degradation, cybercrimes, human rights violations, and crimes against, and victimization of, both individuals and groups have recently plagued South Asian nations. Every day a massive number of crimes are committed, and these frequent crimes have made the lives of ordinary citizens restless. Crime is one of the major threats to society and to civilization, and a bone of contention that can create societal disturbance. Old-style crime-solving practices are unable to live up to the requirements of the existing crime situation. Crime analysis is one of the most important activities of the majority of intelligence and law enforcement organizations all over the world. The South Asia region, unlike the Central Asia or Asia-Pacific regions, lacks a regional coordination mechanism to facilitate criminal intelligence sharing and operational coordination related to organized crime, including illicit drug trafficking and money laundering. There have been numerous conversations in recent years about using data mining technology to combat crime and terrorism. The Data Detective program from the software company Sentient uses data mining techniques to support the police (Sentient, 2017). The goals of this internship are to test out several predictive model solutions and choose the most effective and promising one. First, extensive literature reviews on data mining, crime analysis, and crime data mining were conducted. Sentient offered a 7-year archive of crime statistics that was aggregated daily to produce a univariate dataset. Moreover, a daily incidence-type aggregation was performed to produce a multivariate dataset. Each solution's forecast period lasted seven days. The experiments were split into two main groups: statistical models and neural network models. For the crime data, neural networks fared better than statistical models. This study gives a general review of the applied statistical and neural network models. A detailed picture of each model's performance on the available data and its generalizability is provided by a comparative analysis of all the models on a comparable dataset. The studies demonstrated that, in comparison to other models, Gated Recurrent Units (GRU) produced better predictions. The crime records of 2005-2019 were collected from the Nepal Police headquarters and analysed with R programming. In conclusion, a gated recurrent unit implementation could benefit the police in predicting crime. Hence, time series analysis using GRU could be a prospective additional feature in Data Detective.
Keywords: time series analysis, forecasting, ARIMA, machine learning
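As a minimal sketch of the GRU approach the abstract favours, the snippet below trains a small Keras GRU on a sliding window of daily counts and produces a seven-day forecast. The synthetic series, window length, and network size are assumptions for illustration; the study's actual archive and model configuration are not reproduced.
```python
import numpy as np
import tensorflow as tf

WINDOW, HORIZON = 28, 7

# Synthetic daily counts with weekly seasonality (placeholder for real crime data).
rng = np.random.default_rng(0)
t = np.arange(7 * 365)
series = 50 + 10 * np.sin(2 * np.pi * t / 7) + rng.poisson(5, t.size)

def make_windows(series, window, horizon):
    """Slice the series into (past window, next horizon) training pairs."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window:i + window + horizon])
    return np.array(X)[..., None], np.array(y)

X, y = make_windows(series.astype("float32"), WINDOW, HORIZON)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(HORIZON),   # predicts the next seven days jointly
])
model.compile(optimizer="adam", loss="mae")
model.fit(X[:-30], y[:-30], epochs=5, batch_size=32, verbose=0)

# Seven-day-ahead forecast from the most recent window.
forecast = model.predict(X[-1:], verbose=0)[0]
print(np.round(forecast, 1))
```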
Procedia PDF Downloads 166
54 South African Multiple Deprivation-Concentration Index Quantiles Differentiated by Components of Success and Impediment to Tuberculosis Control Programme Using Mathematical Modelling in Rural O. R. Tambo District Health Facilities
Authors: Ntandazo Dlatu, Benjamin Longo-Mbenza, Andre Renzaho, Ruffin Appalata, Yolande Yvonne Valeria Matoumona Mavoungou, Mbenza Ben Longo, Kenneth Ekoru, Blaise Makoso, Gedeon Longo Longo
Abstract:
Background: The gap between the complexities related to the integration of tuberculosis/HIV control and evidence-based knowledge motivated the initiation of the study. Therefore, the objective of this study was to explore correlations between national TB management guidelines, multiple deprivation indexes, quantiles, components, and levels of the tuberculosis control programme using mathematical modeling in rural O.R. Tambo District health facilities, South Africa. Methods: The study design used mixed secondary data analysis and cross-sectional analysis between 2009 and 2013 across O.R. Tambo District, Eastern Cape, South Africa, using univariate/bivariate analysis, linear multiple regression models, and multivariate discriminant analysis. Health inequality indicators and components of impediment to the tuberculosis control programme were evaluated. Results: In total, 62 400 records of TB notification were analyzed for the period 2009-2013. There was a significant but negative correlation between Financial Year Expenditure (r = -0.894; P = 0.041), seropositive HIV status (r = -0.979; P = 0.004), Population Density (r = -0.881; P = 0.048) and the number of TB defaulters among all TB cases. Unsuccessful control of the TB management programme was shown through correlations between the numbers of new PTB smear-positive cases, TB defaulters among new smear-positive cases, TB failures among all TB cases, the pulmonary tuberculosis case finding index, and the deprivation-concentration-dispersion index. Successful TB programme control was shown through significant and negative associations between declining numbers of deaths from HIV-TB co-infection, TB deaths among all TB cases, and the SMIAD gradient/deprivation-concentration-dispersion index. The multivariate linear model was summarized by an unadjusted R of 96%, an adjusted R² of 95%, a standard error of the estimate of 0.110, an R² change of 0.959, and a significant variance change (P = 0.004) to explain the prediction of TB defaulters among all TB cases with the equation y = 8.558 - 0.979 × (number of HIV seropositive), after adjusting for confounding factors (PTB case finding index, TB defaulters among new smear-positive cases, TB deaths among all TB cases, TB defaulters among all TB cases, and TB failures among all TB cases). HIV-TB deaths, as well as new PTB smear-positive cases, were identified as the most important, significant, and independent indicators discriminating the most deprived deprivation quintile from the other deprivation quintiles (2-5) using discriminant analysis. Conclusion: Eliminating poverty-related conditions such as overcrowding, lack of sanitation, and environments with the highest burden of HIV might end the TB threat in O.R. Tambo District, Eastern Cape, South Africa. Furthermore, an ongoing, adequately budgeted, comprehensive, holistic, and collaborative initiative towards the Sustainable Development Goals (SDGs) is necessary for the complete elimination of TB in the poor O.R. Tambo District.
Keywords: tuberculosis, HIV/AIDS, success, failure, control program, health inequalities, South Africa
Procedia PDF Downloads 171
53 Towards Visual Personality Questionnaires Based on Deep Learning and Social Media
Authors: Pau Rodriguez, Jordi Gonzalez, Josep M. Gonfaus, Xavier Roca
Abstract:
Image sharing in social networks has increased exponentially in the past years. Officially, there are 600 million Instagrammers uploading around 100 million photos and videos per day. Consequently, there is a need for developing new tools to understand the content expressed in shared images, which will greatly benefit social media communication and will enable broad and promising applications in education, advertisement, entertainment, and also psychology. Following these trends, our work aims to take advantage of the existing relationship between text and personality, already demonstrated by multiple researchers, so that we can prove that there exists a relationship between images and personality as well. To achieve this goal, we consider that images posted on social networks are typically conditioned on specific words, or hashtags, therefore any relationship between text and personality can also be observed with those posted images. Our proposal makes use of the most recent image understanding models based on neural networks to process the vast amount of data generated by social users to determine those images most correlated with personality traits. The final aim is to train a weakly-supervised image-based model for personality assessment that can be used even when textual data is not available, which is an increasing trend. The procedure is described next: we explore the images directly publicly shared by users based on those accompanying texts or hashtags most strongly related to personality traits as described by the OCEAN model. These images will be used for personality prediction since they have the potential to convey more complex ideas, concepts, and emotions. As a result, the use of images in personality questionnaires will provide a deeper understanding of respondents than through words alone. In other words, from the images posted with specific tags, we train a deep learning model based on neural networks, that learns to extract a personality representation from a picture and use it to automatically find the personality that best explains such a picture. Subsequently, a deep neural network model is learned from thousands of images associated with hashtags correlated to OCEAN traits. We then analyze the network activations to identify those pictures that maximally activate the neurons: the most characteristic visual features per personality trait will thus emerge since the filters of the convolutional layers of the neural model are learned to be optimally activated depending on each personality trait. For example, among the pictures that maximally activate the high Openness trait, we can see pictures of books, the moon, and the sky. For high Conscientiousness, most of the images are photographs of food, especially healthy food. The high Extraversion output is mostly activated by pictures of a lot of people. In high Agreeableness images, we mostly see flower pictures. Lastly, in the Neuroticism trait, we observe that the high score is maximally activated by animal pets like cats or dogs. In summary, despite the huge intra-class and inter-class variabilities of the images associated to each OCEAN traits, we found that there are consistencies between visual patterns of those images whose hashtags are most correlated to each trait.Keywords: emotions and effects of mood, social impact theory in social psychology, social influence, social structure and social networks
Procedia PDF Downloads 198
52 Management of Myofascial Temporomandibular Disorder in Secondary Care: A Quality Improvement Project
Authors: Rishana Bilimoria, Selina Tang, Sajni Shah, Marianne Henien, Christopher Sproat
Abstract:
Temporomandibular disorders (TMD) may affect up to a third of the general population, and there is evidence demonstrating the majority of Myofascial TMD cases improve after education and conservative measures. In 2015 our department implemented a modified care pathway for myofascial TMD patients in an attempt to improve the patient journey. This involved the use of an interactive group therapy approach to deliver education, reinforce conservative measures and promote self-management. Patient reported experience measures from the new group clinic revealed 71% patient satisfaction. This service is efficient in improving aspects of health status while reducing health-care costs and redistributing clinical time. Since its’ establishment, 52 hours of clinical time, resources and funding have been redirected effectively. This Quality Improvement Project was initiated because it was felt that this new service was being underutilised by our surgical teams. The ‘Plan-Do-Study-Act cycle’ (PDSA) framework was employed to analyse utilisation of the service: The ‘plan’ stage involved outlining our aims: to raise awareness amongst clinicians of the unified care pathway and to increase referral to this clinic. The ‘do’ stage involved collecting data from a sample of 96 patients over 4 month period to ascertain the proportion of Myofascial TMD patients who were correctly referred to the designated clinic. ‘Suitable’ patients who weren’t referred were identified. The ‘Study’ phase involved analysis of results, which revealed that 77% of suitable patients weren’t referred to the designated clinic. They were reviewed on other clinics, which are often overbooked, or managed by junior staff members. This correlated with our original prediction. Barriers to referral included: lack of awareness of the clinic, individual consultant treatment preferences and patient, reluctance to be referred to a ‘group’ clinic. The ‘Act’ stage involved presenting our findings to the team at a clinical governance meeting. This included demonstration of the clinical effectiveness of the care-pathway and explaining the referral route and criteria. In light of the evaluation results, it was decided to keep the group clinic and maximize utilisation. The second cycle of data collection following these changes revealed that of 66 Myofascial TMD patients over a 4 month period, only 9% of suitable patients were not seen via the designated pathway; therefore this QIP was successful in meeting the set objectives. Overall, employing the PDSA cycle in this QIP resulted in appropriate utilisation of the modified care pathway for patients with myofascial TMD in Guy’s Oral Surgery Department. In turn, this leads to high patient satisfaction with the service and effectively redirected 52 hours of clinical time. It permitted adoption of a collaborative working style with oral surgery colleagues to investigate problems, identify solutions, and collectively raise standards of clinical care to ensure we adopt a unified care pathway in secondary care management of Myofascial TMD patients.Keywords: myofascial, quality Improvement, PDSA, TMD
Procedia PDF Downloads 14151 Predicting Suicidal Behavior by an Accurate Monitoring of RNA Editing Biomarkers in Blood Samples
Authors: Berengere Vire, Nicolas Salvetat, Yoann Lannay, Guillaume Marcellin, Siem Van Der Laan, Franck Molina, Dinah Weissmann
Abstract:
Predicting suicidal behaviors is one of the most complex challenges of daily psychiatric practice. Today, suicide risk prediction using biological tools is not validated and is only based on subjective clinical reports of the at-risk individual. Therefore, there is a great need to identify biomarkers that would allow early identification of individuals at risk of suicide. Alterations of adenosine-to-inosine (A-to-I) RNA editing of neurotransmitter receptors and other proteins have been shown to be involved in the etiology of different psychiatric disorders and linked to suicidal behavior. RNA editing is a co- or post-transcriptional process leading to a site-specific alteration in RNA sequences. It plays an important role in the epitranscriptomic regulation of RNA metabolism. On postmortem human brain tissue (prefrontal cortex) of depressed suicide victims, Alcediag found specific alterations of RNA editing activity on the mRNA coding for the serotonin 2C receptor (5-HT2cR). Additionally, an increase in expression levels of ADARs, the RNA editing enzymes, and modifications of RNA editing profiles of prime targets, such as phosphodiesterase 8A (PDE8A) mRNA, have also been observed. Interestingly, the PDE8A gene is located on chromosome 15q25.3, a genomic region that has recurrently been associated with early-onset major depressive disorder (MDD). In the current study, we examined whether modifications in the RNA editing profiles of prime targets allow identifying disease-relevant blood biomarkers and evaluating suicide risk in patients. To address this question, we performed a clinical study to identify an RNA editing signature in the blood of depressed patients with and without a history of suicide attempts. Patients’ samples were drawn in PAXgene tubes and analyzed on Alcediag’s proprietary RNA editing platform using next-generation sequencing technology. In addition, gene expression analysis by quantitative PCR was performed. We generated a multivariate algorithm comprising various selected biomarkers to detect patients at high risk of attempting suicide. We evaluated the diagnostic performance using the relative proportion of PDE8A mRNA editing at different sites and/or isoforms as well as the expression of PDE8A and the ADARs. The significance of these biomarkers for suicidality was evaluated using the area under the receiver-operating characteristic curve (AUC). The generated algorithm comprising the biomarkers was found to have strong diagnostic performance with high specificity and sensitivity. In conclusion, we developed tools to measure disease-specific biomarkers in blood samples of patients for identifying individuals at the greatest risk for future suicide attempts. This technology not only fosters patient management but is also suitable for predicting the risk of drug-induced psychiatric side effects such as an iatrogenic increase in suicidal ideas/behaviors.Keywords: blood biomarker, next-generation-sequencing, RNA editing, suicide
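A minimal sketch (not Alcediag's proprietary pipeline) of how a multivariate biomarker algorithm of the kind described above could be built and evaluated: assumed editing-proportion and expression features are combined by a logistic model and judged by the cross-validated AUC. The column names and input file are illustrative assumptions.

```python
# Hedged sketch: combine editing/expression biomarkers and report cross-validated AUC.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

df = pd.read_csv("editing_biomarkers.csv")           # assumed file, one row per patient
features = ["pde8a_site1_editing", "pde8a_isoform_ratio",
            "htr2c_editing", "adar1_expr", "adar2_expr"]   # assumed column names
X, y = df[features], df["suicide_attempt"]            # y: 1 = attempter, 0 = non-attempter

clf = LogisticRegression(max_iter=1000)
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", roc_auc_score(y, proba))
```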
Procedia PDF Downloads 25950 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model
Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson
Abstract:
The subfield of poverty and welfare estimation that applies machine learning tools and methods on satellite imagery is a nascent but rapidly growing one. This is in part driven by the Sustainable Development Goals, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor is their implementation sufficiently swift to gain an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and thus have seen limited downstream applications, as humans are generally apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a deep learning (DL) model using different resolutions of satellite imagery to estimate the welfare levels of demographic and health survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6m per pixel at zoom level 18, while that of the machine learning model was sourced from the comparatively lower resolution Sentinel-2 10m per pixel data for the same cluster locations. Rank correlation coefficients between 0.31 and 0.32 achieved by the human readers were much lower than those attained by the machine learning model – 0.69-0.79. This superhuman performance by the model is even more significant given that it was trained on the relatively lower 10-meter resolution satellite data, while the human readers estimated welfare levels from the higher 0.6m spatial resolution data from which key markers of poverty and slums – roofing and road quality – are discernible. It is important to note, however, that the human readers did not receive any training before the rating exercise, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall relating to limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship – eXplainable Artificial Intelligence – through a collaborative rather than a comparative framework.Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania
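A minimal sketch (assumed workflow, not the authors' code) of the comparison reported above, assuming Spearman's rank correlation is the rank coefficient used: cluster wealth scores are correlated with human-reader ratings and with model predictions. The file and column names are illustrative assumptions.

```python
# Hedged sketch: rank correlation of wealth ground truth vs. human and model estimates.
import pandas as pd
from scipy.stats import spearmanr

clusters = pd.read_csv("tanzania_clusters.csv")      # assumed file, 608 DHS clusters
rho_human, _ = spearmanr(clusters["wealth_index"], clusters["human_rating"])
rho_model, _ = spearmanr(clusters["wealth_index"], clusters["model_prediction"])
print(f"human readers rho = {rho_human:.2f}, DL model rho = {rho_model:.2f}")
```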
Procedia PDF Downloads 10749 Optimization of Perfusion Distribution in Custom Vascular Stent-Grafts Through Patient-Specific CFD Models
Authors: Scott M. Black, Craig Maclean, Pauline Hall Barrientos, Konstantinos Ritos, Asimina Kazakidi
Abstract:
Aortic aneurysms and dissections are leading causes of death in cardiovascular disease. Both inevitably lead to hemodynamic instability without surgical intervention in the form of vascular stent-graft deployment. An accurate description of the aortic geometry and blood flow in patient-specific cases is vital for treatment planning and long-term success of such grafts, as they must generate physiological branch perfusion and in-stent hemodynamics. The aim of this study was to create patient-specific computational fluid dynamics (CFD) models through a multi-modality, multi-dimensional approach with boundary condition optimization to predict branch flow rates and in-stent hemodynamics in custom stent-graft configurations. Three-dimensional (3D) thoracoabdominal aortae were reconstructed from four-dimensional flow-magnetic resonance imaging (4D Flow-MRI) and computed tomography (CT) medical images. The former employed a novel approach to generate and enhance vessel lumen contrast via through-plane velocity at discrete, user-defined cardiac time steps post-hoc. To produce patient-specific boundary conditions (BCs), the aortic geometry was reduced to a one-dimensional (1D) model. Thereafter, a zero-dimensional (0D) 3-Element Windkessel model (3EWM) was coupled to each terminal branch to represent the distal vasculature. In this coupled 0D-1D model, the 3EWM parameters were optimized to yield branch flow waveforms that are representative of the 4D Flow-MRI-derived in-vivo data. Thereafter, a 0D-3D CFD model was created, utilizing the optimized 3EWM BCs and a 4D Flow-MRI-obtained inlet velocity profile. A sensitivity analysis of the effects of stent-graft configuration and BC parameters was then undertaken using multiple stent-graft configurations and a range of distal vasculature conditions. 4D Flow-MRI granted unparalleled visualization of blood flow throughout the cardiac cycle in both the pre- and post-surgical states. Segmentation and reconstruction of healthy and stented regions from retrospective 4D Flow-MRI images also generated 3D models with geometries that were successfully validated against their CT-derived counterparts. 0D-1D coupling efficiently captured branch flow and pressure waveforms, while 0D-3D models also enabled 3D flow visualization and quantification of clinically relevant hemodynamic parameters for in-stent thrombosis and graft limb occlusion. It was apparent that changes in 3EWM BC parameters had a pronounced effect on perfusion distribution and near-wall hemodynamics. Results show that the 3EWM parameters could be iteratively changed to simulate a range of graft limb diameters and distal vasculature conditions for a given stent-graft to determine the optimal configuration prior to surgery. To conclude, this study outlined a methodology to aid in the prediction of post-surgical branch perfusion and in-stent hemodynamics in patient-specific cases for the implementation of custom stent-grafts.Keywords: 4D flow-MRI, computational fluid dynamics, vascular stent-grafts, windkessel
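The outlet boundary condition referred to above is the three-element Windkessel (3EWM): a proximal resistance Rp in series with a compliance C and distal resistance Rd in parallel. Below is a minimal forward-Euler sketch of its governing relation for an assumed branch flow waveform; the parameter values are illustrative assumptions, not the optimized patient-specific values obtained in the study.

```python
# Hedged sketch: 3EWM outlet pressure for a prescribed flow waveform Q(t).
import numpy as np

def windkessel_3ewm(t, Q, Rp, C, Rd, Pc0=10.0e3):
    """Return outlet pressure P(t) [Pa]; Pc is the pressure over the compliance."""
    P = np.empty_like(Q)
    Pc = Pc0
    dt = t[1] - t[0]
    for i, q in enumerate(Q):
        Pc += dt * (q / C - Pc / (Rd * C))   # C dPc/dt = Q - Pc/Rd
        P[i] = Pc + Rp * q                   # P = Pc + Rp*Q
    return P

t = np.linspace(0.0, 1.0, 1000)                       # one 1 s cardiac cycle
Q = 1.0e-5 * np.clip(np.sin(2 * np.pi * t), 0, None)  # crude systolic inflow [m^3/s] (assumed)
P = windkessel_3ewm(t, Q, Rp=2.0e8, C=5.0e-10, Rd=3.0e9)   # illustrative SI parameters
```

In the optimization described above, Rp, C and Rd for each branch would be adjusted until the resulting flow split matches the 4D Flow-MRI-derived waveforms.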
Procedia PDF Downloads 18148 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel
Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler
Abstract:
Fuel cell vehicles have become the most competitive solution for the transportation sector in the hydrogen economy. Type IV pressure vessels are currently the most popular and widely developed technology for on-board storage, based on their high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a great amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of the composite layup design shows great potential in reducing the overall material usage, yet it requires comprehensive understanding of the underlying mechanisms as well as of the influence of different design parameters on mechanical performance. Given the type of materials and manufacturing processes by which type IV pressure vessels are manufactured, the design and optimization are a nuanced subject. The manifold of stacking sequence and fiber orientation variation possibilities has an outstanding effect on vessel strength due to the anisotropic properties of carbon fiber composites, which makes the design space high-dimensional. Each variation of design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup and simulation process can be very time consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation process for different tank designs regarding various parameters is conducted and automated in the commercial finite element analysis framework Abaqus. It is worth mentioning that the model of the composite overwrap is automatically generated using the Abaqus-Python scripting interface. The prediction of the winding angle of each layer and the corresponding thickness variation in the dome region is the most crucial step of the modeling; it is calculated and implemented using analytical methods. Subsequently, these different composite layups are simulated as axisymmetric models to reduce the computational complexity and the calculation time. Finally, the results are evaluated and compared regarding the ultimate tank strength. By automatically modeling, evaluating and comparing various composite layups, this system is applicable to the optimization of the tank structures. As mentioned above, the mechanical performance of the pressure vessel is highly dependent on the composite layup, which requires a large number of simulations. Consequently, automating the simulation process provides a rapid way to compare the various designs and indicate the optimal one. Moreover, this automation process can also be used to create a data bank of layups and corresponding mechanical properties with few preliminary configuration steps for further case analysis. Subsequently, machine learning could, for example, be used to obtain the optimum directly from this data pool without running further simulations.Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process
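A minimal sketch of the kind of analytical pre-processing step described above: estimating the winding angle and layer thickness over the dome before the layup is generated in the FE model. It assumes geodesic winding (Clairaut's relation) and a simple fiber-continuity thickness rule; the analytical relations actually used by the authors may differ, and all dimensions are illustrative assumptions.

```python
# Hedged sketch: geodesic winding angle and thickness build-up over a dome.
import numpy as np

R_CYL = 0.175        # cylinder radius [m] (assumed)
R_OPEN = 0.035       # polar opening radius [m] (assumed)
T_CYL = 0.6e-3       # layer thickness on the cylinder [m] (assumed)

def winding_angle(r):
    """Geodesic winding angle [rad] at meridian radius r (Clairaut: r*sin(a) = const)."""
    return np.arcsin(np.clip(R_OPEN / r, -1.0, 1.0))

def layer_thickness(r):
    """Thickness from fiber continuity: t(r)*r*cos(a(r)) = t_cyl*R*cos(a_cyl)."""
    a_cyl = winding_angle(R_CYL)
    return T_CYL * R_CYL * np.cos(a_cyl) / (r * np.cos(winding_angle(r)))

r = np.linspace(R_OPEN * 1.05, R_CYL, 50)    # meridian radii across the dome
angles_deg = np.degrees(winding_angle(r))
thickness_mm = 1e3 * layer_thickness(r)      # thickens towards the polar opening
```

In an automated workflow, values such as these would be written into the axisymmetric model for each layer by the scripting interface before the job is submitted.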
Procedia PDF Downloads 13547 Surviral: An Agent-Based Simulation Framework for Sars-Cov-2 Outcome Prediction
Authors: Sabrina Neururer, Marco Schweitzer, Werner Hackl, Bernhard Tilg, Patrick Raudaschl, Andreas Huber, Bernhard Pfeifer
Abstract:
History and the current outbreak of COVID-19 have shown the deadly potential of infectious diseases. However, infectious diseases also have a serious impact on areas other than health and healthcare, such as the economy or social life. These areas are strongly codependent. Therefore, disease control measures, such as social distancing, quarantines, curfews, or lockdowns, have to be adopted in a very considerate manner. Infectious disease modeling can support policy- and decision-makers with adequate information regarding the dynamics of the pandemic and therefore assist in planning and enforcing appropriate measures that will prevent the healthcare system from collapsing. In this work, an agent-based simulation package named “Surviral” for simulating infectious diseases is presented. A special focus is put on SARS-CoV-2. The presented simulation package was used in Austria to model the SARS-CoV-2 outbreak from the beginning of 2020. Agent-based modeling is a relatively recent modeling approach. Since our world is getting more and more complex, the complexity of the underlying systems is also increasing. The development of tools and frameworks and increasing computational power advance the application of agent-based models. For parametrizing the presented model, different data sources, such as known infections, wastewater virus load, blood donor antibodies, circulating virus variants and the hospitalization capacity in use, as well as the availability of medical materials like ventilators, were integrated with a database system and used. The simulation results of the model were used for predicting the dynamics and the possible outcomes and were used by the health authorities to decide on the measures to be taken in order to control the pandemic situation. The Surviral package was implemented in the programming language Java, and the analytics were performed with RStudio. During the first run in March 2020, the simulation showed that without measures other than individual personal behavior and appropriate medication, the death toll would have been about 27 million people worldwide within the first year. The model predicted the hospitalization rates (standard and intensive care) for Tyrol and South Tyrol with an average error of about 1.5%. They were calculated to provide 10-day forecasts. The state government and the hospitals were provided with the 10-day models to support their decision-making. This ensured that standard care was maintained for as long as possible without restrictions. Furthermore, various measures were estimated and thereafter enforced. Among other things, communities were quarantined based on the calculations while, in accordance with the calculations, the curfews for the entire population were reduced. With this framework, which is used in the national crisis team of the Austrian province of Tyrol, a very accurate model could be created at the federal state level as well as at the district and municipal level, which was able to provide decision-makers with a solid information basis. This framework can be transferred to various infectious diseases and thus can be used as a basis for future monitoring.Keywords: modelling, simulation, agent-based, SARS-CoV-2, COVID-19
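The package described above is written in Java; purely as an illustration of what a single day-step of such an agent-based infection model does, here is a deliberately simplified Python sketch with random mixing and assumed rates. The real framework is far richer (contact structure, hospitalisation, variants, interventions, data assimilation), so this should be read as a conceptual toy, not the authors' simulator.

```python
# Hedged sketch: minimal agent-based S-I-R loop with random mixing and assumed rates.
import random

random.seed(1)
N, BETA, P_RECOVER = 10_000, 0.25, 1 / 10          # agents, per-contact infection prob., daily recovery prob.
states = ["S"] * N
for i in random.sample(range(N), 20):               # 20 seed infections
    states[i] = "I"

for day in range(120):
    infected = [i for i, s in enumerate(states) if s == "I"]
    for i in infected:
        contact = random.randrange(N)               # random mixing, no contact network
        if states[contact] == "S" and random.random() < BETA:
            states[contact] = "I"
        if random.random() < P_RECOVER:
            states[i] = "R"
    print(day, states.count("S"), states.count("I"), states.count("R"))
```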
Procedia PDF Downloads 17546 An Argument for Agile, Lean, and Hybrid Project Management in Museum Conservation Practice: A Qualitative Evaluation of the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts
Authors: Maria Ledinskaya
Abstract:
This paper is part case study and part literature review. It seeks to introduce Agile, Lean, and Hybrid project management concepts from business, software development, and manufacturing fields to museum conservation by looking at their practical application on a recent conservation project at the Sainsbury Centre for Visual Arts. The author outlines the advantages of leaner and more agile conservation practices in today’s faster, less certain, and more budget-conscious museum climate where traditional project structures are no longer as relevant or effective. The Morris Collection Conservation Project was carried out in 2019-2021 in Norwich, UK, and concerned the remedial conservation of around 150 Abstract Constructivist artworks bequeathed to the Sainsbury Centre by private collectors Michael and Joyce Morris. It was a medium-sized conservation project of moderate complexity, planned and delivered in an environment with multiple known unknowns – unresearched collection, unknown conditions and materials, unconfirmed budget. The project was later impacted by the COVID-19 pandemic, introducing indeterminate lockdowns, budget cuts, staff changes, and the need to accommodate social distancing and remote communications. The author, then a staff conservator at the Sainsbury Centre who acted as project manager on the Morris Project, presents an incremental, iterative, and value-based approach to managing a conservation project in an uncertain environment. The paper examines the project from the point of view of Traditional, Agile, Lean, and Hybrid project management. The author argues that most academic writing on project management in conservation has focussed on a Traditional plan-driven approach – also known as Waterfall project management – which has significant drawbacks in today’s museum environment due to its over-reliance on prediction-based planning and its low tolerance to change. In the last 20 years, alternative Agile, Lean and Hybrid approaches to project management have been widely adopted in software development, manufacturing, and other industries, although their recognition in the museum sector has been slow. Using examples from the Morris Project, the author introduces key principles and tools of Agile, Lean, and Hybrid project management and presents a series of arguments on the effectiveness of these alternative methodologies in museum conservation, including the ethical and practical challenges to their implementation. These project management approaches are discussed in the context of consequentialist, relativist, and utilitarian developments in contemporary conservation ethics. Although not intentionally planned as such, the Morris Project had a number of Agile and Lean features which were instrumental to its successful delivery. These key features are identified as distributed decision-making, a co-located cross-disciplinary team, servant leadership, focus on value-added work, flexible planning done in shorter sprint cycles, light documentation, and emphasis on reducing procedural, financial, and logistical waste. Overall, the author’s findings point in favour of a hybrid model, which combines traditional and alternative project processes and tools to suit the specific needs of the project.Keywords: agile project management, conservation, hybrid project management, lean project management, waterfall project management
Procedia PDF Downloads 7145 Experimental-Numerical Inverse Approaches in the Characterization and Damage Detection of Soft Viscoelastic Layers from Vibration Test Data
Authors: Alaa Fezai, Anuj Sharma, Wolfgang Mueller-Hirsch, André Zimmermann
Abstract:
Viscoelastic materials have been widely used in the automotive industry over the last few decades with different functionalities. Besides their main application as a simple and efficient surface damping treatment, they may ensure optimal operating conditions for on-board electronics as thermal interface or sealing layers. The dynamic behavior of viscoelastic materials is generally dependent on many environmental factors, the most important being temperature and strain rate or frequency. Prior to the reliability analysis of systems including viscoelastic layers, it is, therefore, crucial to accurately predict the dynamic and lifetime behavior of these materials. This includes the identification of the dynamic material parameters under critical temperature and frequency conditions along with a precise damage localization and identification methodology. The goal of this work is twofold. The first part aims at applying an inverse viscoelastic material-characterization approach for a wide frequency range and under different temperature conditions. For this purpose, dynamic measurements are carried out on a single-lap joint specimen using an electrodynamic shaker and an environmental chamber. The specimen consists of aluminum beams assembled to adapter plates through a viscoelastic adhesive layer. The experimental setup is reproduced in finite element (FE) simulations, and frequency response functions (FRF) are calculated. The parameters of both the generalized Maxwell model and the fractional derivatives model are identified through an optimization algorithm minimizing the difference between the simulated and the measured FRFs. The second goal of the current work is to guarantee on-line detection of damage, i.e., delamination in the viscoelastic bonding of the described specimen, during frequency-monitored end-of-life testing. For this purpose, an inverse technique, which determines the damage location and size based on the modal frequency shift and on the change of the mode shapes, is presented. This includes a preliminary FE model-based study correlating the delamination location and size to the change in the modal parameters and a subsequent experimental validation achieved through dynamic measurements of specimens with different, pre-generated crack scenarios and comparison with the virgin specimen. The main advantage of the inverse characterization approach presented in the first part resides in the ability to adequately identify the damping and stiffness behavior of soft viscoelastic materials over a wide frequency range and under critical temperature conditions. Classic forward characterization techniques such as dynamic mechanical analysis are usually linked to limitations under critical temperature and frequency conditions due to the material behavior of soft viscoelastic materials. Furthermore, the inverse damage detection described in the second part guarantees an accurate prediction of not only the damage size but also its location using a simple test setup and, therefore, outlines the significance of inverse numerical-experimental approaches in predicting the dynamic behavior of soft bonding layers applied in automotive electronics.Keywords: damage detection, dynamic characterization, inverse approaches, vibration testing, viscoelastic layers
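As an illustration of the inverse identification idea, the sketch below writes the adhesive's complex modulus as a generalized Maxwell (Prony) series and fits its parameters by least squares to synthetic "measured" data. In the actual study the misfit is computed on full FE-simulated FRFs rather than on the modulus itself, and the two-term series, frequency range, and parameter values here are assumptions for illustration.

```python
# Hedged sketch: fit generalized Maxwell (Prony) parameters by minimizing a misfit.
import numpy as np
from scipy.optimize import least_squares

def complex_modulus(omega, g_inf, g_i, tau_i):
    """Generalized Maxwell: G*(w) = g_inf + sum_i g_i * (i*w*tau_i) / (1 + i*w*tau_i)."""
    jwt = 1j * np.outer(omega, tau_i)
    return g_inf + (g_i * jwt / (1.0 + jwt)).sum(axis=1)

def residual(params, omega, g_measured):
    g_inf, g1, g2, tau1, tau2 = params
    g_model = complex_modulus(omega, g_inf, np.array([g1, g2]), np.array([tau1, tau2]))
    return np.concatenate([(g_model - g_measured).real, (g_model - g_measured).imag])

omega = 2 * np.pi * np.logspace(1, 4, 60)                       # 10 Hz - 10 kHz (assumed)
g_meas = complex_modulus(omega, 2e6, np.array([5e6, 8e6]), np.array([1e-3, 1e-5]))  # synthetic data
fit = least_squares(residual, x0=[1e6, 1e6, 1e6, 1e-2, 1e-4],
                    args=(omega, g_meas), bounds=(0, np.inf))
```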
Procedia PDF Downloads 20644 Web-Based Decision Support Systems and Intelligent Decision-Making: A Systematic Analysis
Authors: Serhat Tüzün, Tufan Demirel
Abstract:
Decision Support Systems (DSS) have been investigated by researchers and technologists for more than 35 years. This paper analyses the developments in the architecture and software of these systems, provides a systematic analysis of different Web-based DSS approaches and Intelligent Decision-making Technologies (IDT), and closes with suggestions for future studies. The Decision Support Systems literature begins with the building of model-oriented DSS in the late 1960s, theory developments in the 1970s, and the implementation of financial planning systems and Group DSS in the early and mid-80s. It then documents the origins of Executive Information Systems, online analytic processing (OLAP) and Business Intelligence. The implementation of Web-based DSS occurred in the mid-1990s. With the beginning of the new millennium, intelligence has been the main focus of DSS studies. Web-based technologies are having a major impact on design, development and implementation processes for all types of DSS. Web technologies are being utilized for the development of DSS tools by leading developers of decision support technologies. Major companies are encouraging their customers to port their DSS applications, such as data mining, customer relationship management (CRM) and OLAP systems, to a web-based environment. Similarly, real-time data fed from manufacturing plants are now helping floor managers make decisions regarding production adjustment to ensure that high-quality products are produced and delivered. Web-based DSS are being employed by organizations as decision aids for employees as well as customers. A common usage of Web-based DSS has been to assist customers in configuring products and services according to their needs. These systems allow individual customers to design their own products by choosing from a menu of attributes, components, prices and delivery options. The Intelligent Decision-making Technologies (IDT) domain is a fast-growing area of research that integrates various aspects of computer science and information systems. This includes intelligent systems, intelligent technology, intelligent agents, artificial intelligence, fuzzy logic, neural networks, machine learning, knowledge discovery, computational intelligence, data science, big data analytics, inference engines, recommender systems or engines, and a variety of related disciplines. Innovative applications that emerge using IDT often have a significant impact on decision-making processes in government, industry, business, and academia in general. This is particularly pronounced in finance, accounting, healthcare, computer networks, real-time safety monitoring and crisis response systems. Similarly, IDT is commonly used in military decision-making systems, security, marketing, stock market prediction, and robotics. Even though many research studies have been conducted on Decision Support Systems, a systematic analysis of the subject is still missing. Because of this necessity, this paper has been prepared to survey recent articles on DSS. The literature has been reviewed in depth and, by classifying previous studies according to their preferences, a taxonomy for DSS has been prepared. With the aid of the taxonomic review and the recent developments in the subject, this study aims to analyze future trends in decision support systems.Keywords: decision support systems, intelligent decision-making, systematic analysis, taxonomic review
Procedia PDF Downloads 28043 An Engineer-Oriented Life Cycle Assessment Tool for Building Carbon Footprint: The Building Carbon Footprint Evaluation System in Taiwan
Authors: Hsien-Te Lin
Abstract:
The purpose of this paper is to introduce the BCFES (building carbon footprint evaluation system), which is an LCA (life cycle assessment) tool developed by the Low Carbon Building Alliance (LCBA) in Taiwan. A qualified BCFES for the building industry should fulfill the function of evaluating carbon footprint throughout all stages in the life cycle of building projects, including the production, transportation and manufacturing of materials, construction, daily energy usage, renovation and demolition. However, many existing BCFESs are too complicated and not very designer-friendly, creating obstacles in the implementation of carbon reduction policies. One of the greatest obstacles is the misapplication of the carbon footprint inventory standards of PAS2050 or ISO14067, which are designed for mass-produced goods rather than building projects. When these product-oriented rules are applied to building projects, one must compute a tremendous amount of data for raw materials and the transportation of construction equipment throughout the construction period based on purchasing lists and construction logs. This verification method is very cumbersome by nature and unhelpful to the promotion of low carbon design. With a view to providing an engineer-oriented BCFES with pre-diagnosis functions, a component input/output (I/O) database system and a scenario simulation method for building energy are proposed herein. Most existing BCFESs base their calculations on a product-oriented carbon database for raw materials like cement, steel, glass, and wood. However, data on raw materials is meaningless for the purpose of encouraging carbon reduction design without a feedback mechanism, because an engineering project is not designed based on raw materials but rather on building components, such as flooring, walls, roofs, ceilings, roads or cabinets. The LCBA Database has been compiled from existing carbon footprint databases for raw materials and architectural graphic standards. Project designers can now use the LCBA Database to conduct low carbon design in a much simpler and more efficient way. Daily energy usage throughout a building's life cycle, including air conditioning, lighting, and electric equipment, is very difficult for the building designer to predict. A good BCFES should provide a simplified and designer-friendly method to overcome this obstacle in predicting energy consumption. In this paper, the author has developed a simplified tool, the dynamic Energy Use Intensity (EUI) method, to accurately predict energy usage with simple multiplications and additions using EUI data and the designed efficiency levels for the building envelope, AC, lighting and electrical equipment. Remarkably simple to use, it can help designers pre-diagnose hotspots in the building carbon footprint and further enhance low carbon designs. The BCFES-LCBA offers the advantages of an engineer-friendly component I/O database, simplified energy prediction methods, pre-diagnosis of carbon hotspots and sensitivity to good low carbon designs, making it an increasingly popular carbon management tool in Taiwan. To date, about thirty projects have been awarded BCFES-LCBA certification and the assessment has become mandatory in some cities.Keywords: building carbon footprint, life cycle assessment, energy use intensity, building energy
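A minimal sketch of the arithmetic behind the dynamic EUI idea described above: annual operational energy is estimated per end use from a baseline energy-use intensity, the floor area, and a design efficiency factor. The end uses, baseline EUI values, efficiency factors, grid emission factor, and service life below are illustrative assumptions, not values from the BCFES-LCBA database.

```python
# Hedged sketch: "simple multiplications and additions" estimate of use-phase energy and carbon.
FLOOR_AREA_M2 = 12_000
END_USES = {                          # baseline EUI [kWh/m2/yr], design efficiency factor (assumed)
    "air_conditioning": (55.0, 0.85),  # 0.85 = better-than-baseline envelope + AC design
    "lighting":         (20.0, 0.70),
    "equipment":        (25.0, 1.00),
}

annual_kwh = sum(eui * factor * FLOOR_AREA_M2 for eui, factor in END_USES.values())
co2_kg = annual_kwh * 0.5             # assumed grid emission factor [kgCO2e/kWh]
lifecycle_use_co2_t = co2_kg * 60 / 1000   # assumed 60-year service life, in tonnes
print(f"{annual_kwh:,.0f} kWh/yr, {lifecycle_use_co2_t:,.0f} t CO2e over the service life")
```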
Procedia PDF Downloads 13942 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection
Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy
Abstract:
Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform by automatically extracting the features needed for the detection of facial expressions and emotions. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by utilizing several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of the produced information. In doing so, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We further develop this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from reaching the gold labels too soon, which drives the model towards over-fitting, because it is not able to determine adequately discriminant feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic rather than a static shape of the input tensor in the SoftMax layer and by specifying a desired soft margin. In effect, the margin acts as a controller of how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting the same class labels and separating different class labels in the normalized log domain: we penalize those predictions with high divergence from the ground-truth labels. In other words, we shorten correct feature vectors and enlarge false prediction tensors, meaning that we assign more weight to those classes that lie close to one another (namely, “hard labels to learn”). By doing this, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on addressing the weak convergence of the Adam optimizer for a non-convex problem. Our optimizer works with an alternating gradient-update procedure using an exponentially weighted moving average function for faster convergence, and it exploits a weight decay method to drastically reduce the learning rate near optima in order to reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013; a 16% improvement over the previous first rank after 10 years, reaching 90.73% on RAF-DB; and 100% k-fold average accuracy on the CK+ dataset. The method is also shown to provide top performance compared to that of other networks, which require much larger training datasets.Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks
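As a rough illustration only: the sketch below implements a generic soft-margin SoftMax cross-entropy in numpy, where the target-class logit is reduced by a fixed margin before normalization, forcing the network to keep the correct class well above the others rather than converging on the gold label too soon. The dynamic margin schedule and the tensor-shaping details of the proposed Dynamic Soft-Margin SoftMax are not reproduced here, so this is an assumed simplification, not the authors' loss.

```python
# Hedged sketch: additive soft-margin cross-entropy (fixed margin, not the paper's dynamic schedule).
import numpy as np

def soft_margin_softmax_loss(logits, labels, margin=0.35):
    """logits: (batch, classes); labels: (batch,) integer class ids; margin is an assumed constant."""
    z = logits.copy()
    z[np.arange(len(labels)), labels] -= margin       # penalize the target logit
    z -= z.max(axis=1, keepdims=True)                 # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

logits = np.array([[2.0, 0.5, -1.0], [0.1, 1.5, 0.3]])
labels = np.array([0, 1])
print(soft_margin_softmax_loss(logits, labels))
```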
Procedia PDF Downloads 7541 A Comprehensive Survey of Artificial Intelligence and Machine Learning Approaches across Distinct Phases of Wildland Fire Management
Authors: Ursula Das, Manavjit Singh Dhindsa, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran
Abstract:
Wildland fires, also known as forest fires or wildfires, have exhibited an alarming surge in frequency in recent times, further adding to what is already a perennial global concern. Forest fires often lead to devastating consequences ranging from loss of healthy forest foliage and wildlife to substantial economic losses and the tragic loss of human lives. Despite the existence of substantial literature on the detection of active forest fires, numerous potential research avenues in forest fire management, such as preventative measures and the ancillary effects of forest fires, remain largely underexplored. This paper undertakes a systematic review of these underexplored areas in forest fire research, meticulously categorizing them into distinct phases, namely the pre-fire, during-fire, and post-fire stages. The pre-fire phase encompasses the assessment of fire risk, analysis of fuel properties, and other activities aimed at preventing or reducing the risk of forest fires. The during-fire phase includes activities aimed at reducing the impact of active forest fires, such as the detection and localization of active fires, optimization of wildfire suppression methods, and prediction of the behavior of active fires. The post-fire phase involves analyzing the impact of forest fires on various aspects, such as the extent of damage in forest areas, post-fire regeneration of forests, impact on wildlife, economic losses, and health impacts from byproducts produced during burning. A comprehensive understanding of the three stages is imperative for effective forest fire management and mitigation of the impact of forest fires on both ecological systems and human well-being. Artificial intelligence and machine learning (AI/ML) methods have garnered much attention in the cyber-physical systems domain in recent times, leading to their adoption in decision-making in diverse applications, including disaster management. This paper explores the current state of AI/ML applications for managing the activities in the aforementioned phases of forest fires. While conventional machine learning and deep learning methods have been extensively explored for the prevention, detection, and management of forest fires, a systematic classification of these methods into distinct AI research domains is conspicuously absent. This paper gives a comprehensive overview of the state of forest fire research across more recent and prominent AI/ML disciplines, including big data, classical machine learning, computer vision, explainable AI, generative AI, natural language processing, optimization algorithms, and time series forecasting. By providing a detailed overview of the potential areas of research and identifying the diverse ways AI/ML can be employed in forest fire research, this paper aims to serve as a roadmap for future investigations in this domain.Keywords: artificial intelligence, computer vision, deep learning, during-fire activities, forest fire management, machine learning, pre-fire activities, post-fire activities
Procedia PDF Downloads 7240 Finite Element Modeling of Global Ti-6Al-4V Mechanical Behavior in Relationship with Microstructural Parameters
Authors: Fatna Benmessaoud, Mohammed Cheikh, Vencent Velay, Vanessa Vedal, Farhad Rezai-Aria, Christine Boher
Abstract:
The global mechanical behavior of materials is strongly linked to their microstructure, especially their crystallographic texture and grain morphology. These material aspects determine the character of the mechanical fields (heterogeneous or homogeneous); thus, they give the global behavior a degree of anisotropy according to the initial microstructure. For these reasons, the prediction of the global behavior of materials in relationship with the microstructure must be performed with a multi-scale approach. Therefore, multi-scale modeling in the context of crystal plasticity is widely used. In this present contribution, a phenomenological elasto-viscoplastic model developed in the crystal plasticity context and the finite element method are used to investigate the effects of crystallographic texture and grain size on the global behavior of a polycrystalline equiaxed Ti-6Al-4V alloy. The constitutive equations of this model are written at the local scale for each slip system within each grain, while the strain and stress mechanical fields are investigated at the global scale via a finite element scale transition. The beta phase of the modeled Ti-6Al-4V alloy is negligible; its volume fraction is less than 10%. Three families of slip systems of the alpha phase are considered: the basal and prismatic families with an ⟨a⟩-type Burgers vector and the pyramidal family with a ⟨c+a⟩-type Burgers vector.Keywords: microstructural parameters, multi-scale modeling, crystal plasticity, Ti-6Al-4V alloy
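For concreteness, a commonly used phenomenological viscoplastic slip law of the kind referred to above is written below; this is a standard textbook form (power-law slip rate with phenomenological hardening) and not necessarily the exact constitutive equations of the present model.

```latex
% Resolved shear stress on slip system alpha (m: slip direction, n: slip plane normal)
\tau^{\alpha} = \boldsymbol{\sigma} : \left( \mathbf{m}^{\alpha} \otimes \mathbf{n}^{\alpha} \right)
% Power-law viscoplastic slip rate with reference rate \dot{\gamma}_0 and rate exponent n
\dot{\gamma}^{\alpha} = \dot{\gamma}_0 \left| \frac{\tau^{\alpha}}{g^{\alpha}} \right|^{n} \operatorname{sign}\!\left( \tau^{\alpha} \right)
% Evolution of the slip resistance g^{\alpha} through a hardening matrix h_{\alpha\beta}
\dot{g}^{\alpha} = \sum_{\beta} h_{\alpha\beta} \left| \dot{\gamma}^{\beta} \right|
```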
Procedia PDF Downloads 12639 Absolute Quantification of the Bexsero Vaccine Component Factor H Binding Protein (fHbp) by Selected Reaction Monitoring: The Contribution of Mass Spectrometry in Vaccinology
Authors: Massimiliano Biagini, Marco Spinsanti, Gabriella De Angelis, Sara Tomei, Ilaria Ferlenghi, Maria Scarselli, Alessia Biolchi, Alessandro Muzzi, Brunella Brunelli, Silvana Savino, Marzia M. Giuliani, Isabel Delany, Paolo Costantino, Rino Rappuoli, Vega Masignani, Nathalie Norais
Abstract:
The gram-negative bacterium Neisseria meningitidis serogroup B (MenB) is an exclusively human pathogen representing the major cause of meningitis and severe sepsis in infants and children, but also in young adults. This pathogen is usually carried by about 30% of the healthy population, which acts as a reservoir, spreading it through saliva and respiratory fluids during coughing, sneezing, and kissing. Among the surface-exposed protein components of this diplococcus, factor H binding protein (fHbp) is a lipoprotein proven to be a protective antigen and used as a component of the recently licensed Bexsero vaccine. fHbp is a highly variable meningococcal protein: to reflect its remarkable sequence variability, it has been classified into three variants (or two subfamilies), with poor cross-protection among the different variants. Furthermore, the level of fHbp expression varies significantly among strains, and this has also been considered an important factor for predicting MenB strain susceptibility to anti-fHbp antisera. Different methods have been used to assess fHbp expression in meningococcal strains; however, all these methods use anti-fHbp antibodies, and for this reason, the results are affected by the different affinities that antibodies can have for different antigenic variants. To overcome the limitations of an antibody-based quantification, we developed a quantitative Mass Spectrometry (MS) approach. Selected Reaction Monitoring (SRM) recently emerged as a powerful MS tool for detecting and quantifying proteins in complex mixtures. SRM is based on the targeted detection of proteotypic peptides (PTPs), which are unique signatures of a protein that can be easily detected and quantified by MS. This approach, proven to be highly sensitive, quantitatively accurate and highly reproducible, was used to quantify the absolute amount of the fHbp antigen in total extracts derived from 105 clinical isolates, evenly distributed among the three main variant groups and selected to be representative of the fHbp circulating subvariants around the world. We extended the study to the genetic level, investigating the correlation between the differential level of expression and the polymorphisms present within the genes and their promoter sequences. The implications of fHbp expression for the susceptibility of the strain to killing by anti-fHbp antisera are also presented. To date, this is the first comprehensive fHbp expression profiling in a large panel of Neisseria meningitidis clinical isolates driven by an antibody-independent MS-based methodology, opening the door to new applications in vaccine coverage prediction and reinforcing the molecular understanding of released vaccines.Keywords: quantitative mass spectrometry, Neisseria meningitidis, vaccines, bexsero, molecular epidemiology
Procedia PDF Downloads 314