Search results for: voltage-time curve
48 Identification of Hub Genes in the Development of Atherosclerosis
Authors: Jie Lin, Yiwen Pan, Li Zhang, Zhangyong Xia
Abstract:
Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids, immune cells, and extracellular matrix in the arterial walls. This pathological process can lead to the formation of plaques that obstruct blood flow and trigger various cardiovascular diseases such as heart attack and stroke. The underlying molecular mechanisms remain unclear, although many studies have revealed the dysfunction of endothelial cells, the recruitment and activation of monocytes and macrophages, and the production of pro-inflammatory cytokines and chemokines in atherosclerosis. This study aimed to identify hub genes involved in the progression of atherosclerosis and to analyze their biological function in silico, thereby enhancing our understanding of the disease’s molecular mechanisms. Through the analysis of microarray data, we examined gene expression in the media and neo-intima from plaques, as well as in distant macroscopically intact tissue, across a cohort of 32 hypertensive patients. Initially, 112 differentially expressed genes (DEGs) were identified. Subsequent immune infiltration analysis indicated a predominant presence of 27 immune cell types in the atherosclerosis group, particularly noting an increase in monocytes and macrophages. In the weighted gene co-expression network analysis (WGCNA), 10 modules with a minimum of 30 genes were defined as key modules, with the blue, dark olive green, and sky-blue modules being the most significant. These modules corresponded respectively to monocyte, activated B cell, and activated CD4 T cell gene patterns, revealing a strong morphological-genetic correlation. From these three gene patterns (module morphologies), a total of 2509 key genes (gene significance >0.2, module membership >0.8) were extracted. Six hub genes (CD36, DPP4, HMOX1, PLA2G7, PLIN2, and ACADL) were then identified by intersecting the 2509 key genes and the 102 DEGs with lipid-related genes from the GeneCards database. 
The discriminative power of the six hub genes was assessed with a robust classifier that achieved an area under the curve (AUC) of 0.873 in the ROC plot, indicating excellent efficacy in differentiating between the disease and control groups. Moreover, PCA visualization demonstrated clear separation between the groups based on these six hub genes, suggesting their potential utility as classification features in predictive models. Protein-protein interaction (PPI) analysis highlighted DPP4 as the most interconnected gene. Within the constructed key gene-drug network, 462 drugs were predicted, with ursodeoxycholic acid (UDCA) identified as a potential therapeutic agent for modulating DPP4 expression. In summary, our study identified critical hub genes implicated in the progression of atherosclerosis through comprehensive bioinformatic analyses. These findings not only advance our understanding of the disease but also pave the way for applying similar analytical frameworks and predictive models to other diseases, thereby broadening the potential for clinical applications and therapeutic discoveries.
Keywords: atherosclerosis, hub genes, drug prediction, bioinformatics
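The reported AUC of 0.873 comes from a ROC analysis of the six-gene classifier. As a minimal illustration of how such an AUC can be computed, the rank-based (Mann-Whitney) formulation is sketched below; the scores and labels are hypothetical, not the study's data:

```python
import numpy as np

def roc_auc(scores, labels):
    """Rank-based AUC (Mann-Whitney U): the probability that a randomly chosen
    positive sample scores higher than a randomly chosen negative one."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count pairwise wins of positives over negatives; ties count as half.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical classifier scores for disease (1) and control (0) samples.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.75, 0.4, 0.6, 0.3, 0.2, 0.1]
auc = roc_auc(scores, labels)  # one misranked pair out of 16 -> 0.9375
```

An AUC of 1.0 would indicate perfect separation of the two groups; 0.5 is chance level.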
47 Empowering Indigenous Epistemologies in Geothermal Development
Authors: Te Kīpa Kēpa B. Morgan, Oliver W. Mcmillan, Dylan N. Taute, Tumanako N. Fa'aui
Abstract:
Epistemologies are ways of knowing. Indigenous Peoples are aware that they do not perceive and experience the world in the same way as others. So it is important, when empowering Indigenous epistemologies such as that of the New Zealand Māori, to also be able to represent a scientific understanding within the same analysis. A geothermal development assessment tool has been developed by adapting the Mauri Model Decision Making Framework. Mauri is a metric capable of representing the change in the life-supporting capacity of things and collections of things. The Mauri Model is a method of grouping mauri indicators as dimension averages in order to allow holistic assessment and also to conduct sensitivity analyses for the effect of worldview bias. R Shiny is the coding platform used for this Vision Mātauranga research, which has created an expert decision support tool (DST) combining a stakeholder assessment of worldview bias with an impact assessment of mauri-based indicators to determine the sustainability of proposed geothermal development. The initial intention was to develop guidelines for quantifying mātauranga Māori impacts related to geothermal resources. To do this, three typical scenarios were considered: a resource owner wishing to assess the potential for new geothermal development; another party wishing to assess the environmental and cultural impacts of the proposed development; and an assessment that focuses on the holistic sustainability of the resource, including its surface features. Indicator sets and measurement thresholds were developed that are considered necessary for each assessment context, and these have been grouped to represent four mauri dimensions that mirror the four well-being criteria used for resource management in Aotearoa, New Zealand. Two case studies have been conducted to test the DST's suitability for quantifying mātauranga Māori and other biophysical factors related to a geothermal system. 
This involved estimating mauri0meter values for physical features such as temperature, flow rate, frequency, and colour, and developing indicators to also quantify qualitative observations about the geothermal system made by Māori. A retrospective analysis was then conducted to verify different understandings of the geothermal system. The case studies found that the expert DST is useful for geothermal development assessment, especially where hapū (indigenous sub-tribal groupings) are conflicted regarding the benefits and disadvantages of their own and others’ geothermal developments. These results have been supplemented with evaluations of the cumulative impacts of geothermal developments experienced by different parties, using integration techniques applied to the time-history curve of worldview-bias-weighted mauri0meter scores from the expert DST. Cumulative impacts represent the change in resilience or potential of geothermal systems, which directly assists with the holistic interpretation of change from an Indigenous Peoples’ perspective.
Keywords: decision support tool, holistic geothermal assessment, indigenous knowledge, mauri model decision-making framework
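The dimension averaging and worldview-bias weighting described above can be sketched as follows. The dimension names, scores, and weights are illustrative assumptions only, not values from the case studies; the mauri0meter scale runs from -2 (fully denigrating) to +2 (fully enhancing):

```python
def mauri_score(dimension_scores, weights):
    """Weighted average of the mauri dimension scores.
    Scores use the mauri0meter scale (-2 to +2); the weights express a
    stakeholder's worldview bias and must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[d] * s for d, s in dimension_scores.items())

# Hypothetical indicator averages for four illustrative dimensions.
dims = {"environment": 1.0, "community": -0.5, "family": 0.5, "economy": 1.5}

# Sensitivity analysis: an unbiased worldview vs. an economically biased one.
equal = {d: 0.25 for d in dims}
eco_biased = {"environment": 0.1, "community": 0.1, "family": 0.1, "economy": 0.7}

base = mauri_score(dims, equal)        # 0.625
biased = mauri_score(dims, eco_biased)  # 1.15
```

Comparing the two results shows how worldview bias alone can shift the apparent sustainability of the same proposal, which is the point of the Mauri Model's sensitivity analysis.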
46 Viscoelastic Behavior of Human Bone Tissue under Nanoindentation Tests
Authors: Anna Makuch, Grzegorz Kokot, Konstanty Skalski, Jakub Banczorowski
Abstract:
Cancellous bone is a porous composite with a hierarchical structure and anisotropic properties. Biological tissue is considered a viscoelastic material, but many studies based on the nanoindentation method have focused on its elasticity and microhardness. However, the response of many organic materials depends not only on the load magnitude but also on its duration and time course. The Depth Sensing Indentation (DSI) technique has been used to examine creep in polymers, metals, and composites. In indentation tests on biological samples, the mechanical properties are most frequently determined for animal tissues (ox, monkey, pig, rat, mouse, bovine). However, there are rare reports of studies of bone viscoelastic properties at the microstructural level. Various rheological models have been used to describe the viscoelastic behaviour of bone identified in the indentation process (e.g., the Burgers model, linear model, two-dashpot Kelvin model, and Maxwell-Voigt model). The goal of the study was to determine the influence of the creep effect on the mechanical properties of human cancellous bone in indentation tests. A further aim of this research was the assessment of the material properties of bone structures, having in mind the energy aspects of the indenter loading-depth curve obtained in the loading/unloading cycle. The effect of different holding times on the results for trabecular bone was considered. As a result, indentation creep (CIT), hardness (HM, HIT, HV), and elasticity are obtained. Human trabecular bone samples (n=21; mean age 63±15 yrs) from femoral heads replaced during hip alloplasty were removed from alcohol and drained 1 h before the experiment. The indentation process was conducted using a CSM Microhardness Tester equipped with a Vickers indenter. Each sample was indented 35 times (7 times for each of 5 hold times: t1=0.1 s, t2=1 s, t3=10 s, t4=100 s, and t5=1000 s). The indenter was advanced at a rate of 10 mN/s to 500 mN. 
The Oliver-Pharr method was used in the calculation process. The increase of hold time is associated with a decrease of the hardness parameters (HIT(t1)=418±34 MPa, HIT(t2)=390±50 MPa, HIT(t3)=313±54 MPa, HIT(t4)=305±54 MPa, HIT(t5)=276±90 MPa) and elasticity (EIT(t1)=7.7±1.2 GPa, EIT(t2)=8.0±1.5 GPa, EIT(t3)=7.0±0.9 GPa, EIT(t4)=7.2±0.9 GPa, EIT(t5)=6.2±1.8 GPa), as well as with an increase of the elastic (Welastic(t1)=4.11∙10⁻⁷±4.2∙10⁻⁸ Nm, Welastic(t2)=4.12∙10⁻⁷±6.4∙10⁻⁸ Nm, Welastic(t3)=4.71∙10⁻⁷±6.0∙10⁻⁹ Nm, Welastic(t4)=4.33∙10⁻⁷±5.5∙10⁻⁹ Nm, Welastic(t5)=5.11∙10⁻⁷±7.4∙10⁻⁸ Nm) and inelastic (Winelastic(t1)=1.05∙10⁻⁶±1.2∙10⁻⁷ Nm, Winelastic(t2)=1.07∙10⁻⁶±7.6∙10⁻⁸ Nm, Winelastic(t3)=1.26∙10⁻⁶±1.9∙10⁻⁷ Nm, Winelastic(t4)=1.56∙10⁻⁶±1.9∙10⁻⁷ Nm, Winelastic(t5)=1.67∙10⁻⁶±2.6∙10⁻⁷ Nm) work of the material's response. The indentation creep increased logarithmically (R²=0.901) with increasing hold time: CIT(t1)=0.08±0.01%, CIT(t2)=0.7±0.1%, CIT(t3)=3.7±0.3%, CIT(t4)=12.2±1.5%, CIT(t5)=13.5±3.8%. A pronounced impact of the creep effect on the mechanical properties of human cancellous bone was observed in the experimental studies. While the elastic-inelastic description, and thus the Oliver-Pharr method of data analysis, may apply in a few limited cases, most biological tissues do not exhibit elastic-inelastic indentation responses. The viscoelastic properties of tissues may play a significant role in remodelling. This aspect is still under analysis and numerical simulation. Acknowledgements: The presented results are part of a research project funded by the National Science Centre (NCN), Poland, no. 2014/15/B/ST7/03244.
Keywords: bone, creep, indentation, mechanical properties
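For reference, the Oliver-Pharr calculation used above can be sketched as follows, assuming an ideal Vickers area function (A = 24.5 h_c²), ε = 0.75, a diamond indenter (E ≈ 1141 GPa, ν ≈ 0.07), and an illustrative Poisson's ratio of 0.3 for the sample; the input values in the example are plausible for bone but hypothetical:

```python
import math

def oliver_pharr(P_max, h_max, S, nu_sample=0.3,
                 E_indenter=1141e9, nu_indenter=0.07):
    """Indentation hardness and modulus via the Oliver-Pharr method.
    P_max [N] = peak load, h_max [m] = depth at peak load,
    S [N/m] = unloading stiffness dP/dh at the start of unloading."""
    eps = 0.75
    h_c = h_max - eps * P_max / S          # contact depth
    A_c = 24.5 * h_c ** 2                  # projected contact area (ideal Vickers)
    H_IT = P_max / A_c                     # indentation hardness [Pa]
    E_r = math.sqrt(math.pi) * S / (2.0 * math.sqrt(A_c))  # reduced modulus
    # Remove the indenter's compliance to recover the sample modulus.
    E_IT = (1 - nu_sample**2) / (1 / E_r - (1 - nu_indenter**2) / E_indenter)
    return H_IT, E_IT

# Hypothetical unloading data: 500 mN peak load, 8 um depth, S = 3.2e5 N/m.
H, E = oliver_pharr(0.5, 8e-6, 3.2e5)  # roughly 440 MPa and 7.7 GPa
```

These illustrative inputs land in the same order of magnitude as the HIT and EIT values reported above.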
45 Swedish–Nigerian Extrusion Research: Channel for Traditional Grain Value Addition
Authors: Kalep Filli, Sophia Wassén, Annika Krona, Mats Stading
Abstract:
The food security challenge posed by the growing population in Sub-Saharan Africa centers on its agricultural transformation, where about 70% of the population is directly involved in farming. Research input can create economic opportunities, reduce malnutrition and poverty, and generate faster, fairer growth. Africa discards $4 billion worth of grain annually due to pre- and post-harvest losses. Grains and tubers play a central role in food supply in the region, but their production has generally lagged behind because there has been no robust scientific input to meet the challenge. African grains remain chronically underutilized, to the detriment of the well-being of the people of Africa and elsewhere. The major reason for their underutilization is that they are under-researched. Any commitment by the scientific community to intervene needs creative solutions focused on innovative approaches that will support economic growth. To overcome this hurdle, co-creation activities and initiatives are necessary. One such initiative has been established between Modibbo Adama University of Technology, Yola, Nigeria, and RISE (the Research Institutes of Sweden), Gothenburg, Sweden. An exchange of expertise in research activities, as a channel for value addition to agricultural commodities in the region under the 'Traditional Grain Network' programme, is in place. Process technologies such as extrusion offer the possibility of creating products in the food and feed sectors with better storage stability, added value, lower transportation cost, and new markets. The Swedish–Nigerian initiative has focused on the development of high-protein pasta. Microscopy of the dry pasta samples shows a continuous structural framework of proteins and starch matrix. The water absorption index (WAI) results showed that water was absorbed steadily and followed the master curve pattern. The WAI values ranged between 250 – 300%. 
In all aspects, the water absorption history was within a narrow range for all eight samples. The total cooking time for all eight samples in our study ranged between 5 – 6 minutes, with dry sample diameters ranging between 1.26 – 1.35 mm. The water solubility index (WSI) ranged from 6.03 – 6.50%, a narrow range; the cooking loss, which is a measure of WSI, is considered one of the main parameters in the assessment of pasta quality. The protein contents of the samples ranged between 17.33 – 18.60%. The firmness of the cooked pasta ranged from 0.28 – 0.86 N. The results show that increasing the ratio of cowpea flour and the level of pregelatinized cowpea tends to increase the firmness of the pasta. The breaking strength, an index of the toughness of the dry pasta, ranged from 12.9 – 16.5 MPa.
Keywords: cowpea, extrusion, gluten free, high protein, pasta, sorghum
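The WAI and WSI figures above follow from simple gravimetric definitions. A minimal sketch, with hypothetical masses chosen to fall inside the reported ranges (the abstract does not give the raw weighings):

```python
def water_absorption_index(dry_sample_g, sediment_g):
    """WAI as a percentage: hydrated sediment mass per unit dry sample mass."""
    return 100.0 * sediment_g / dry_sample_g

def water_solubility_index(dry_sample_g, dissolved_solids_g):
    """WSI as a percentage: solids leached into the cooking water
    per unit dry sample mass (a measure of cooking loss)."""
    return 100.0 * dissolved_solids_g / dry_sample_g

# Hypothetical cooking test on a 10 g dry pasta sample.
wai = water_absorption_index(10.0, 27.5)   # within the reported 250-300% range
wsi = water_solubility_index(10.0, 0.63)   # within the reported 6.03-6.50% range
```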
44 Fabrication of High Energy Hybrid Capacitors from Biomass Waste-Derived Activated Carbon
Authors: Makhan Maharjan, Mani Ulaganathan, Vanchiappan Aravindan, Srinivasan Madhavi, Jing-Yuan Wang, Tuti Mariana Lim
Abstract:
There is great interest in exploiting sustainable, low-cost, renewable resources as carbon precursors for energy storage applications. Research on the development of energy storage devices has been growing rapidly due to the mismatch between power supply and demand from renewable energy sources. This paper reports the synthesis of porous activated carbon from biomass waste and evaluates its performance in supercapacitors. In this work, we employed orange peel (a waste material) as the starting material and synthesized activated carbon by pyrolysis of KOH-impregnated orange peel char at 800 °C in an argon atmosphere. The resultant orange peel-derived activated carbon (OP-AC) exhibited a high BET surface area of 1,901 m² g⁻¹, which is the highest surface area so far reported for orange peel. The pore size distribution (PSD) curve exhibits pores centered at 11.26 Å width, suggesting dominant microporosity. The OP-AC was studied as a positive electrode in combination with different negative electrode materials, such as pre-lithiated graphite (LiC6) and Li4Ti5O12, to make different hybrid capacitors. The lithium-ion capacitor (LIC) fabricated using OP-AC with pre-lithiated graphite delivered a high energy density of ~106 Wh kg⁻¹. The energy density of the OP-AC||Li4Ti5O12 capacitor was ~35 Wh kg⁻¹. For comparison, OP-AC||OP-AC capacitors were studied in both aqueous (1 M H2SO4) and organic (1 M LiPF6 in EC-DMC) electrolytes, delivering energy densities of 6.6 Wh kg⁻¹ and 16.3 Wh kg⁻¹, respectively. The cycling retentions obtained at a current density of 1 A g⁻¹ were ~85.8, ~87.0, ~82.2, and ~58.8% after 2500 cycles for the OP-AC||OP-AC (aqueous), OP-AC||OP-AC (organic), OP-AC||Li4Ti5O12, and OP-AC||LiC6 configurations, respectively. 
In addition, characterization studies were performed by elemental and proximate composition, thermogravimetry, field emission-scanning electron microscopy (FE-SEM), Raman spectroscopy, X-ray diffraction (XRD), Fourier transform-infrared spectroscopy, X-ray photoelectron spectroscopy (XPS), and N2 sorption isotherms. The morphological features from FE-SEM exhibited well-developed porous structures. Two typical broad peaks observed in the XRD pattern of the synthesized carbon imply an amorphous graphitic structure. The ID/IG ratio of 0.86 in the Raman spectra indicates a high degree of graphitization in the sample. The C 1s band spectra in XPS display well-resolved peaks related to carbon atoms in various chemical environments; for instance, characteristic binding energies appeared at ~283.83, ~284.83, ~286.13, ~288.56, and ~290.70 eV, corresponding to sp²-graphitic C, sp³-graphitic C, C-O, C=O, and π-π*, respectively. The characterization studies revealed the synthesized carbon to be a promising electrode material for energy storage devices. The findings open up the possibility of developing high-energy LICs from abundant, low-cost, renewable biomass waste.
Keywords: lithium-ion capacitors, orange peel, pre-lithiated graphite, supercapacitors
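The gravimetric energy densities quoted above follow from the standard capacitor relation E = ½CV². A minimal sketch with an entirely hypothetical cell (the capacitance, voltage window, and mass below are not values from the study):

```python
def specific_energy_wh_per_kg(capacitance_F, voltage_V, active_mass_kg):
    """Gravimetric energy density of a capacitor cell: E = 1/2 C V^2,
    converted from joules to watt-hours (1 Wh = 3600 J) and normalized
    by the active electrode mass."""
    energy_J = 0.5 * capacitance_F * voltage_V ** 2
    return energy_J / 3600.0 / active_mass_kg

# Hypothetical cell: 20 F, 3.8 V window, 1 g of active material.
e = specific_energy_wh_per_kg(20.0, 3.8, 1e-3)  # about 40 Wh/kg
```

The wide voltage window of lithium-containing hybrid configurations is what lifts their energy density far above that of the aqueous symmetric cell, since E grows with the square of V.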
43 Characterization and Evaluation of the Dissolution Increase of Molecular Solid Dispersions of Efavirenz
Authors: Leslie Raphael de M. Ferraz, Salvana Priscylla M. Costa, Tarcyla de A. Gomes, Giovanna Christinne R. M. Schver, Cristóvão R. da Silva, Magaly Andreza M. de Lyra, Danilo Augusto F. Fontes, Larissa A. Rolim, Amanda Carla Q. M. Vieira, Miracy M. de Albuquerque, Pedro J. Rolim-Neto
Abstract:
Efavirenz (EFV) is a drug used as first-line treatment of AIDS. However, it has poor aqueous solubility and wettability, presenting problems for gastrointestinal absorption and bioavailability. One of the most promising strategies to improve solubility is the use of solid dispersions (SDs). Therefore, this study aimed to characterize SDs of EFV with the polymers PVP-K30, PVPVA 64, and Soluplus® in order to find an optimal formulation for a future pharmaceutical product for AIDS therapy. Initially, physical mixtures (PMs) and SDs with the polymers were obtained containing 10, 20, 50, and 80% of drug (w/w) by the solvent method. The best formulation among the SDs was selected by an in vitro dissolution test. Finally, the chosen drug-carrier system, in all ratios obtained, was analyzed by the following techniques: differential scanning calorimetry (DSC), polarization microscopy, scanning electron microscopy (SEM), and absorption spectrophotometry in the infrared region (IR). From the dissolution profiles of EFV, the PMs, and the SDs, the area under the curve (AUC) values were calculated. The data showed that the AUC of all PMs is greater than that of isolated EFV; this result derives from the hydrophilic properties of the polymers, which favor a decrease in surface tension between the drug and the dissolution medium. In addition, this increases the wettability of the drug. In parallel, it was found that the SDs with the highest AUC values were those with the greatest amount of polymer (only 10% drug). As the amount of drug increased, these results either decreased or were statistically similar. The AUC values of the SDs with the three different polymers followed this decreasing order: SD PVPVA 64-EFV 10% > SD PVP-K30-EFV 10% > SD Soluplus®-EFV 10%. The DSC curves of the SDs did not show the characteristic endothermic event of the drug melting process, suggesting that EFV was converted to its amorphous state. 
Polarized light microscopy showed significant birefringence in the PMs, but this was not observed in films of the SDs, suggesting conversion of the drug from the crystalline to the amorphous state. In the electron micrographs of all PMs, independently of the drug percentage, the crystal structure of EFV was clearly detectable. Moreover, in the electron micrographs of the SDs at the different ratios investigated, we observed particles with irregular size and morphology, with an extensive change in the appearance of the polymer, such that it was not possible to differentiate the two components. The IR spectra of the PMs correspond to the overlap of the polymer and EFV bands, indicating no interaction between them, unlike the spectra of all SDs, which showed complete disappearance of the band related to the axial deformation of the NH group of EFV. Therefore, this study obtained a suitable formulation to overcome the solubility limitations of EFV, since SD PVPVA 64-EFV 10% was chosen as the best system for delaying crystallization of the drug, reaching higher levels of supersaturation.
Keywords: characterization, dissolution, Efavirenz, solid dispersions
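The AUC comparison above reduces to numerical integration of each dissolution profile. A minimal sketch using the trapezoidal rule; the time points and percent-dissolved values are hypothetical, not the study's data:

```python
def dissolution_auc(time_min, percent_dissolved):
    """Area under a dissolution profile by the trapezoidal rule (% * min)."""
    auc = 0.0
    for i in range(1, len(time_min)):
        dt = time_min[i] - time_min[i - 1]
        auc += 0.5 * (percent_dissolved[i] + percent_dissolved[i - 1]) * dt
    return auc

# Hypothetical 60 min profiles: a solid dispersion vs. the pure drug.
t = [0, 5, 15, 30, 60]
sd = [0, 40, 70, 85, 90]     # fast, near-complete release
drug = [0, 5, 12, 20, 28]    # slow release of the poorly soluble drug

auc_sd = dissolution_auc(t, sd)
auc_drug = dissolution_auc(t, drug)
```

A larger AUC reflects both faster and more complete dissolution over the test window, which is the basis on which the best SD was selected.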
42 In vitro Antioxidant Activity and Total Phenolic Content of Dillenia indica and Garcinia penducalata, Commonly Used Fruits in Assamese Cuisine
Authors: M. Das, B. P. Sarma, G. Ahmed
Abstract:
The human diet can be a major source of antioxidants. Polyphenols, which are organic compounds present in the regular human diet, have good antioxidant properties. Many diseases are detected too late, after irreversible damage to the body has occurred. Therefore, foods that are natural sources of antioxidants can prevent free radicals from damaging body tissues. Dillenia indica and Garcinia penducalata are two major fruits easily available in Assam, a northeastern Indian state. In the present study, the in vitro antioxidant properties of the fruits of these plants are compared, as decoctions of these fruits form a major part of Assamese cuisine. The DPPH free radical scavenging activity of the methanol, petroleum ether, and water extracts of G. penducalata and D. indica fruits was assayed by the method of Cotelle et al. (1996). Different concentrations, ranging from 10–110 ug/ml, of the extracts were added to 100 uM DPPH (2,2-diphenyl-1-picrylhydrazyl), and the absorbance was read at 517 nm after incubation. Ascorbic acid was used as the standard. Different concentrations of the methanol, petroleum ether, and water extracts of G. penducalata and D. indica fruits were mixed with sodium nitroprusside and incubated; Griess reagent was added to the mixtures and their optical density was read at 546 nm, following the method of Marcocci et al. (1994). Ascorbic acid was used as the standard. To find the scavenging activity of the extracts against hydroxyl radicals, the method of Kunchandy & Ohkawa (1990) was followed. The superoxide scavenging activity of the methanol, petroleum ether, and water extracts of the fruits was determined by the method of Robak & Gryglewski (1998). Six replicates were maintained in each experiment and their SEM was evaluated, based on which nonlinear regression (curve fitting, exponential growth) was used to calculate the IC50 values of the extracts and standard compounds. 
All the statistical analyses were done using the paired t-test. While the hydroxyl radical scavenging activity of the various extracts of D. indica exhibited IC50 values <110 ug/ml, the scavenging activity of the extracts of G. penducalata was, surprisingly, >110 ug/ml. Similarly, the oxygen free radical scavenging activity of the different extracts of D. indica exhibited IC50 values <110 ug/ml, with the methanolic extract exhibiting better free radical scavenging activity than vitamin C. The DPPH scavenging activities of the various extracts of D. indica and G. penducalata were <110 ug/ml, with the methanolic extract of D. indica again exhibiting an IC50 value better than that of vitamin C. The higher phenolic content of the methanolic extract of D. indica might be one of the major causes of its enhanced in vitro antioxidant activity. The present study concludes that Dillenia indica and Garcinia penducalata both possess antioxidant activities, and that the antioxidant activity of Dillenia indica is superior to that of Garcinia penducalata due to its higher phenolic content.
Keywords: antioxidants, free radicals, phenolic, scavenging
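The IC50 values above come from fitting the dose-response data. As a simplified stand-in for the nonlinear regression used in the study, the sketch below locates IC50 by linear interpolation between the two points bracketing 50% inhibition; the assay data are hypothetical:

```python
def ic50(concentrations, inhibition_pct):
    """Concentration giving 50% inhibition, by linear interpolation between
    the bracketing points of a monotonically increasing dose-response curve.
    Returns None if 50% is not reached within the tested range."""
    for i in range(1, len(concentrations)):
        lo, hi = inhibition_pct[i - 1], inhibition_pct[i]
        if lo <= 50.0 <= hi:
            frac = (50.0 - lo) / (hi - lo)
            return concentrations[i - 1] + frac * (
                concentrations[i] - concentrations[i - 1])
    return None

# Hypothetical DPPH assay: % inhibition at 10-110 ug/ml.
conc = [10, 30, 50, 70, 90, 110]
inhib = [12, 28, 44, 58, 69, 77]
value = ic50(conc, inhib)  # between 50 and 70 ug/ml
```

An extract that never reaches 50% inhibition at the highest tested concentration is the ">110 ug/ml" case reported for G. penducalata.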
41 Enhancing Financial Security: Real-Time Anomaly Detection in Financial Transactions Using Machine Learning
Authors: Ali Kazemi
Abstract:
The digital evolution of financial services, while offering unprecedented convenience and accessibility, has also escalated vulnerability to fraudulent activities. In this study, we introduce a distinct approach to real-time anomaly detection in financial transactions, aiming to fortify the defenses of banking and financial institutions against such threats. Utilizing unsupervised machine learning algorithms, specifically autoencoders and isolation forests, our research focuses on identifying irregular patterns indicative of fraud within transactional data, thus enabling immediate action to prevent financial loss. The data used in this study included the monetary value of each transaction, a crucial feature since fraudulent transactions may follow different amount distributions than legitimate ones; timestamps indicating when transactions occurred, since analyzing temporal patterns can reveal anomalies (e.g., unusual activity in the middle of the night); the sector or category of the merchant where the transaction occurred (retail, groceries, online services, etc.), since specific categories may be more prone to fraud; and the type of payment used (e.g., credit, debit, online payment systems), since different payment methods carry different fraud risk levels. This dataset, anonymized to ensure privacy, reflects a wide array of transactions typical of a global banking institution, ranging from small-scale retail purchases to large wire transfers, embodying the diverse nature of potentially fraudulent activities. By engineering features that capture the essence of transactions, including normalized amounts and encoded categorical variables, we tailor our data to enhance model sensitivity to anomalies. 
The autoencoder model leverages its reconstruction error mechanism to flag transactions that deviate significantly from the learned normal pattern, while the isolation forest identifies anomalies based on their susceptibility to isolation from the dataset's majority. Our experimental results, validated through techniques such as k-fold cross-validation, are evaluated using precision, recall, and the F1 score, alongside the area under the receiver operating characteristic (ROC) curve. Our models achieved an F1 score of 0.85 and an ROC AUC of 0.93, indicating high accuracy in detecting fraudulent transactions without excessive false positives. This study contributes to the academic discourse on financial fraud detection and provides a practical framework for banking institutions seeking to implement real-time anomaly detection systems. By demonstrating the effectiveness of unsupervised learning techniques in a real-world context, our research offers a pathway to significantly reduce the incidence of financial fraud, thereby enhancing the security and trustworthiness of digital financial services.
Keywords: anomaly detection, financial fraud, machine learning, autoencoders, isolation forest, transactional data analysis
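The reconstruction-error mechanism described above can be illustrated with a compact stand-in: a linear, single-component autoencoder (mathematically equivalent to PCA) that flags the transaction it reconstructs worst. The features and data below are synthetic, and this simplification is not the study's actual autoencoder or isolation forest:

```python
import numpy as np

def reconstruction_anomaly_scores(X, k=1):
    """Linear 'autoencoder' via truncated SVD: project rows onto the top-k
    principal components, reconstruct, and score each row by its
    reconstruction error. Rows that break the dominant pattern
    reconstruct poorly and receive high scores."""
    mu = X.mean(axis=0)
    Xc = X - mu
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:k]                        # top-k component directions
    X_hat = (Xc @ V.T) @ V + mu       # encode, then decode
    return np.linalg.norm(X - X_hat, axis=1)

rng = np.random.default_rng(0)
# Synthetic features [normalized amount, scaled hour-of-day]: 200 legitimate
# transactions near a line, plus one anomalous off-pattern transfer.
normal = rng.normal(0, 1, size=(200, 1)) @ np.array([[1.0, 0.5]])
normal += rng.normal(0, 0.05, size=normal.shape)
X = np.vstack([normal, [[4.0, -2.0]]])

scores = reconstruction_anomaly_scores(X, k=1)
flagged = int(np.argmax(scores))  # index of the most anomalous transaction
```

In a deployed system, a threshold on the score (rather than an argmax) would decide which transactions are held for review.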
40 Isolation and Characterization of a Narrow-Host Range Aeromonas hydrophila Lytic Bacteriophage
Authors: Sumeet Rai, Anuj Tyagi, B. T. Naveen Kumar, Shubhkaramjeet Kaur, Niraj K. Singh
Abstract:
Since their discovery, indiscriminate use of antibiotics in human, veterinary, and aquaculture systems has resulted in the global emergence and spread of multidrug-resistant bacterial pathogens. Thus, the need for alternative approaches to control bacterial infections has become of utmost importance. The high selectivity and specificity of bacteriophages (phages) permits the targeting of specific bacteria without affecting the desirable flora. In this study, a lytic phage (Ahp1) specific to Aeromonas hydrophila subsp. hydrophila was isolated from a finfish aquaculture pond. The host range of Ahp1 was tested against 10 isolates of A. hydrophila, 7 isolates of A. veronii, 25 Vibrio cholerae isolates, 4 V. parahaemolyticus isolates, and one isolate each of V. harveyi and Salmonella enterica collected previously. Except for the host A. hydrophila subsp. hydrophila strain, no lytic activity against any other bacterial isolate was detected. During the adsorption rate and one-step growth curve analyses, 69.7% of phage particles adsorbed onto host cells, followed by the release of 93 ± 6 phage progenies per host cell after a latent period of ~30 min. Phage nucleic acid was extracted by column purification methods. After determining the nature of the phage nucleic acid as dsDNA, the phage genome was subjected to next-generation sequencing by generating paired-end (PE, 2 x 300 bp) reads on an Illumina MiSeq system. De novo assembly of the sequencing reads generated a circular phage genome of 42,439 bp with a G+C content of 58.95%. During open reading frame (ORF) prediction and annotation, 22 ORFs (out of 49 total predicted ORFs) were functionally annotated, and the rest encoded hypothetical proteins. Proteins involved in major functions such as phage structure formation and packaging, DNA replication and repair, DNA transcription, and host cell lysis were encoded by the phage genome. The complete genome sequence of Ahp1, along with its gene annotation, was submitted to NCBI GenBank (accession number MF683623). 
The stability of Ahp1 preparations at storage temperatures of 4 °C, 30 °C, and 40 °C was studied over a period of 9 months. At 40 °C storage, phage counts declined by 4 log units within one month, with a total loss of viability after 2 months. At 30 °C, the phage preparation was stable for less than 5 months. On the other hand, phage counts decreased by only 2 log units over the 9-month period during storage at 4 °C. As some phages have also been reported to be glycerol sensitive, the stability of Ahp1 preparations in glycerol stocks (0%, 15%, 30%, and 45%) was also studied during storage at -80 °C over a period of 9 months. The phage counts decreased by only 2 log units during storage, and no significant difference in phage counts was observed at the different concentrations of glycerol. The Ahp1 phage discovered in our study had a very narrow host range, and it may be useful for phage typing applications. Moreover, the endolysin and holin genes in the Ahp1 genome could be ideal candidates for recombinant cloning and expression of antimicrobial proteins.
Keywords: Aeromonas hydrophila, endolysin, phage, narrow host range
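The adsorption percentage and burst size reported above follow from standard titer arithmetic. A minimal sketch; the titers below are hypothetical values chosen only to reproduce the reported 69.7% adsorption and burst size of ~93:

```python
def percent_adsorbed(free_phage_t0, free_phage_t):
    """Fraction of phage particles adsorbed onto host cells, from the drop
    in free (unadsorbed) phage titer during the adsorption assay."""
    return 100.0 * (free_phage_t0 - free_phage_t) / free_phage_t0

def burst_size(final_titer, infected_cells):
    """Average progeny phage released per infected cell,
    from the plateau of a one-step growth curve."""
    return final_titer / infected_cells

adsorbed = percent_adsorbed(1.0e8, 3.03e7)  # 69.7% adsorbed
progeny = burst_size(9.3e9, 1.0e8)          # 93 progeny per host cell
```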
39 Investigation of Processing Conditions on Rheological Features of Emulsion Gels and Oleogels Stabilized by Biopolymers
Authors: M. Sarraf, J. E. Moros, M. C. Sánchez
Abstract:
Oleogels are self-standing systems that are able to trap edible liquid oil in a tridimensional network and can also help reduce fat content through the crystallization of oleogelators. There are different ways to achieve oleogelation and oil structuring, including direct dispersion, structured biphasic systems, oil sorption, and the indirect (emulsion-template) method. The selection of processing conditions, as well as the composition of the oleogels, is essential to obtain a stable oleogel with characteristics suitable for its purpose. In this sense, polysaccharides are among the ingredients widely used in food products to produce oleogels and emulsions. Basil seed gum (BSG), obtained from Ocimum basilicum, is a new native polysaccharide used in the food industry, with high viscosity and pseudoplastic behavior because of its high molecular weight. Also, proteins can stabilize oil in water due to the presence of amino and carboxyl moieties that result in surface activity. Whey proteins are widely used in the food industry because they are available, cheap ingredients with nutritional and functional characteristics, acting as emulsifiers and gelling agents with thickening and water-binding capacity. In general, the interaction of proteins and polysaccharides has a significant effect on food structures and their stability, such as the texture of dairy products, by controlling the interactions in macromolecular systems. Using edible oleogels for oil structuring enables targeted delivery of a component trapped in the structural network. Therefore, the development of efficient oleogels is essential in the food industry. A complete understanding of the important factors, such as the oil-phase ratio, processing conditions, and concentrations of biopolymers, that affect the formation and stability of the emulsion can provide crucial information for the production of a suitable oleogel. 
In this research, the effects of oil concentration and of the pressure used to manufacture the emulsion prior to obtaining the oleogel have been evaluated through analysis of the droplet size and rheological properties of the resulting emulsions and oleogels. The results show that emulsions prepared in the high-pressure homogenizer (HPH) at higher pressures have smaller droplet sizes and greater uniformity in the size distribution curve. Regarding the rheological characteristics of the emulsions and oleogels obtained, the predominantly elastic character of the systems must be noted, as they present storage modulus values higher than loss modulus values and show an important plateau zone, typical of structured systems. Likewise, steady-state viscous flow tests on both emulsions and oleogels confirm that the pressure used in the homogenizer is an important factor for obtaining emulsions with adequate droplet size and, subsequently, the oleogel. Thus, various routes for trapping oil inside a biopolymer matrix with adjustable mechanical properties could be applied to create the three-dimensional network needed to absorb the oil and form the oleogel.
Keywords: basil seed gum, particle size, viscoelastic properties, whey protein
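Droplet size and distribution uniformity of the kind discussed above are commonly summarized by the Sauter mean diameter (d32) and the distribution span. The sketch below uses illustrative (not measured) size data to show how a higher homogenization pressure would appear in these descriptors.

```python
import numpy as np

def sauter_mean_diameter(d, n):
    """Sauter mean diameter d32 = sum(n * d^3) / sum(n * d^2)."""
    d, n = np.asarray(d, dtype=float), np.asarray(n, dtype=float)
    return (n * d**3).sum() / (n * d**2).sum()

def span(d10, d50, d90):
    """Distribution span (d90 - d10) / d50; lower means more uniform."""
    return (d90 - d10) / d50

# Illustrative droplet diameters (micrometres) and counts for two emulsions
d_low_p  = [1.0, 2.0, 4.0, 8.0]   # lower homogenization pressure
n_low_p  = [10, 40, 35, 15]
d_high_p = [0.5, 1.0, 2.0, 4.0]   # higher homogenization pressure
n_high_p = [20, 50, 25, 5]

d32_low  = sauter_mean_diameter(d_low_p, n_low_p)
d32_high = sauter_mean_diameter(d_high_p, n_high_p)
print(d32_high < d32_low)  # True: higher pressure, smaller mean droplet size
```

A lower span and smaller d32 together indicate the finer, more uniform emulsions reported for the higher HPH pressures.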
38 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks
Authors: Andrew N. Saylor, James R. Peters
Abstract:
Scoliosis is a complex 3D deformity of the thoracic and lumbar spines, clinically diagnosed by measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle made by the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered using X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from the coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. In order to normalize the input data, each image was resized using bi-linear interpolation to a size of 500 × 187 pixels, and the pixel intensities were scaled to be between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python Version 3.7.3 and TensorFlow 1.13.1. 
The activation functions (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The network that performed best used ReLU neurons, three hidden layers, and 100 neurons per layer. The average mean squared error of this network was 222.28 ± 30 degrees², and the average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using tanh neurons, one hidden layer, and 10 neurons per layer performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. From the results of this study, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects.
Keywords: scoliosis, artificial neural networks, Cobb angle, medical imaging
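The best-performing architecture described above can be sketched as a plain dense network. The following minimal NumPy sketch (not the authors' TensorFlow code) shows the forward pass and mean-squared-error loss for that configuration: three hidden layers of 100 ReLU neurons and a linear output predicting the Cobb angle. The weights here are random; only the image size, batch size, and layer sizes follow the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def init_dense(n_in, n_out):
    # He-style initialization, common for ReLU layers
    return rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n_out)), np.zeros(n_out)

# Architecture from the study: flattened 500 x 187 image -> three hidden
# ReLU layers of 100 neurons -> 1 linear output (Cobb angle in degrees)
layers = [500 * 187, 100, 100, 100, 1]
params = [init_dense(a, b) for a, b in zip(layers[:-1], layers[1:])]

def predict(x):
    h = x
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:   # linear activation on the output layer
            h = relu(h)
    return h

def mse(pred, target):
    return np.mean((pred - target) ** 2)

batch = rng.random((10, 500 * 187))  # batch size 10, pixels scaled to [0, 1]
angles = predict(batch)
print(angles.shape)  # (10, 1)
```

Training would then minimize `mse` over labeled Cobb angles with stochastic gradient descent (learning rate 0.01 in the study), stopping early on the validation error.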
37 Use of End-Of-Life Footwear Polymer EVA (Ethylene Vinyl Acetate) and PU (Polyurethane) for Bitumen Modification
Authors: Lucas Nascimento, Ana Rita, Margarida Soares, André Ribeiro, Zlatina Genisheva, Hugo Silva, Joana Carvalho
Abstract:
The footwear industry is an essential branch of the fashion industry, producing various types of footwear such as shoes, boots, sandals, sneakers, and slippers. Global footwear consumption has doubled every 20 years since the 1950s. It is estimated that in 1950 each person consumed one new pair of shoes yearly; by 2005, over 20 billion pairs of shoes were consumed. To meet global demand, production reached $24.2 billion, equivalent to about $74 per person in the United States, or three new pairs of shoes per person worldwide. The issue of footwear waste stems from the fact that shoe production generates a large amount of waste, much of which is difficult to recycle or reuse. This waste includes scraps of leather, fabric, rubber, plastics, toxic chemicals, and other materials. The search for alternative solutions for waste treatment and valorization is increasingly relevant in the current context, mainly when focused on utilizing waste as a source of substitute materials. From the perspective of the new circular-economy paradigm, this approach is of utmost importance, as it aims to preserve natural resources and minimize the environmental impact associated with sending waste to landfills. In this sense, incorporating waste into industrial sectors that can absorb large volumes, such as road construction, becomes an urgent and necessary solution from an environmental standpoint. This study explores the use of plastic waste from the footwear industry as a substitute for virgin polymers in bitumen modification, a solution that supports a more sustainable future. Replacing conventional polymers with plastic waste in asphalt composition reduces the amount of waste sent to landfills and offers an opportunity to extend the lifespan of road infrastructures.
By incorporating waste into construction materials, it is possible to reduce the consumption of natural resources and the emission of pollutants, promoting a more circular and efficient economy. In the initial phase of this study, waste materials from end-of-life footwear were selected, and the plastic waste with the highest application potential was separated. Based on a literature review, EVA (ethylene vinyl acetate) and PU (polyurethane) were identified as the polymers suitable for modifying 50/70 classification bitumen. Each polymer was analysed at concentrations of 3% and 5%. The production process involved fragmenting the polymers to a size of 4 millimetres, heating the materials to 180 ºC, and mixing for 10 minutes at low speed, followed by 30 minutes of mixing in a high-speed mixer. The tests included penetration, softening point, viscosity, and rheological assessments. Based on the results obtained, the mixtures with EVA demonstrated better performance than those with PU, as EVA had greater resistance to temperature, a better viscosity curve, and greater elastic recovery in rheology.
Keywords: footwear waste, hot asphalt pavement, modified bitumen, polymers
36 Detection and Quantification of Viable but Not Culturable Vibrio Parahaemolyticus in Frozen Bivalve Molluscs
Authors: Eleonora Di Salvo, Antonio Panebianco, Graziella Ziino
Abstract:
Background: Vibrio parahaemolyticus is a human pathogen that is widely distributed in marine environments. It is frequently isolated from raw seafood, particularly shellfish. Consumption of raw or undercooked seafood contaminated with V. parahaemolyticus may lead to acute gastroenteritis. Vibrio spp. have excellent resistance to low temperatures, so they can persist in frozen products for a long time. Recently, the viable but non-culturable (VBNC) state of bacteria has attracted great attention, and more than 85 bacterial species have been demonstrated to be capable of entering this state. VBNC cells cannot grow in conventional culture media but are viable and maintain metabolic activity, and may therefore constitute an unrecognized source of food contamination and infection. V. parahaemolyticus can also enter the VBNC state under nutrient starvation or low-temperature conditions. Aim: The aim of the present study was to optimize methods to detect V. parahaemolyticus VBNC cells and to investigate their presence in frozen bivalve molluscs regularly marketed. Materials and Methods: Propidium monoazide (PMA) treatment was combined with real-time polymerase chain reaction (qPCR) targeting the tl gene to detect and quantify V. parahaemolyticus in the VBNC state. PMA-qPCR proved highly specific to V. parahaemolyticus, with a limit of detection (LOD) of 10⁻¹ log CFU/mL in pure bacterial culture. A standard curve for V. parahaemolyticus cell concentrations was established, with a correlation coefficient of 0.9999 over the linear range of 1.0 to 8.0 log CFU/mL. A total of 77 samples of frozen bivalve molluscs (35 mussels; 42 clams) were subsequently subjected to qualitative (on alkaline phosphate buffer solution) and quantitative detection of V. parahaemolyticus on thiosulfate-citrate-bile salts-sucrose (TCBS) agar (DIFCO) with 2.5% NaCl, with incubation at 30°C for 24-48 hours.
Real-time PCR was conducted on homogenate samples, in duplicate, with and without propidium monoazide (PMA) dye, after exposure for 45 min under halogen lights (650 W). Total DNA was extracted from cell suspensions in homogenate samples according to a boiling protocol. The real-time PCR was conducted with species-specific primers for V. parahaemolyticus, in a final volume of 20 µL containing 10 µL of SYBR Green Mixture (Applied Biosystems), 2 µL of template DNA, 2 µL of each primer (final concentration 0.6 mM), and 4 µL of H2O. The qPCR was carried out on a CFX96 Touch™ (Bio-Rad, USA). Results: All samples were negative in both the quantitative and qualitative detection of V. parahaemolyticus by the classical culturing technique. PMA-qPCR allowed the identification of VBNC V. parahaemolyticus in 20.78% of the samples, with values between Log 10⁻¹ and Log 10⁻³ CFU/g. Only clam samples were positive by PMA-qPCR detection. Conclusion: The present research is the first to evaluate a PMA-qPCR assay for the detection of VBNC V. parahaemolyticus in bivalve mollusc samples, and the method used was applicable to the rapid control of marketed bivalve molluscs. We strongly recommend the use of PMA-qPCR to identify VBNC forms, which are undetectable by classic microbiological methods. Precise knowledge of V. parahaemolyticus in the VBNC form is fundamental for correct risk assessment, not only in bivalve molluscs but also in other seafood.
Keywords: food safety, frozen bivalve molluscs, PMA dye, real-time PCR, VBNC state, Vibrio parahaemolyticus
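Quantification against a qPCR standard curve of the kind described above typically fits the measured cycle threshold (Ct) linearly against the log of the standard concentrations, then inverts the fit for unknown samples. The sketch below uses hypothetical calibration values, not the study's data.

```python
import numpy as np

# Hypothetical calibration: Ct measured for standards of known concentration
log_cfu = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])   # log CFU/mL
ct      = np.array([33.2, 29.9, 26.5, 23.1, 19.8, 16.4, 13.1, 9.7])

# Linear fit: Ct = slope * log_cfu + intercept
slope, intercept = np.polyfit(log_cfu, ct, 1)
r = np.corrcoef(log_cfu, ct)[0, 1]          # correlation coefficient

def quantify(sample_ct):
    """Convert a sample Ct back to log CFU/mL via the standard curve."""
    return (sample_ct - intercept) / slope

# PCR amplification efficiency implied by the slope (1.0 = perfect doubling)
efficiency = 10.0 ** (-1.0 / slope) - 1.0
print(round(quantify(21.5), 2))
```

A correlation coefficient near 1 over the linear range (0.9999 in the study) indicates the curve can be inverted reliably for quantification.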
35 Pharmacokinetics and Safety of Pacritinib in Patients with Hepatic Impairment and Healthy Volunteers
Authors: Suliman Al-Fayoumi, Sherri Amberg, Huafeng Zhou, Jack W. Singer, James P. Dean
Abstract:
Pacritinib is an oral kinase inhibitor with specificity for JAK2, FLT3, IRAK1, and CSF1R. In clinical studies, pacritinib was well tolerated with clinical activity in patients with myelofibrosis. The most frequent adverse events (AEs) observed with pacritinib are gastrointestinal (diarrhea, nausea, and vomiting; mostly grade 1-2 in severity) and typically resolve within 2 weeks. A human ADME mass balance study demonstrated that pacritinib is predominantly cleared via hepatic metabolism and biliary excretion (>85% of administered dose). The major hepatic metabolite identified, M1, is not thought to materially contribute to the pharmacological activity of pacritinib. Hepatic diseases are known to impair hepatic blood flow, drug-metabolizing enzymes, and biliary transport systems and may affect drug absorption, disposition, efficacy, and toxicity. This phase 1 study evaluated the pharmacokinetics (PK) and safety of pacritinib and the M1 metabolite in study subjects with mild, moderate, or severe hepatic impairment (HI) and matched healthy subjects with normal liver function to determine if pacritinib dosage adjustments are necessary for patients with varying degrees of hepatic insufficiency. Study participants (aged 18-85 y) were enrolled into 4 groups based on their degree of HI as defined by Child-Pugh Clinical Assessment Score: mild (n=8), moderate (n=8), severe (n=4), and healthy volunteers (n=8) matched for age, BMI, and sex. Individuals with concomitant renal dysfunction or progressive liver disease were excluded. A single 400 mg dose of pacritinib was administered to all participants. Blood samples were obtained for PK evaluation predose and at multiple time points postdose through 168 h. 
Key PK parameters evaluated included maximum plasma concentration (Cmax), time to Cmax (Tmax), area under the plasma concentration-time curve (AUC) from hour zero to the last measurable concentration (AUC0-t), AUC extrapolated to infinity (AUC0-∞), and apparent terminal elimination half-life (t1/2). Following treatment, pacritinib was quantifiable in all study participants from 1 h through 168 h postdose. Systemic pacritinib exposure was similar between healthy volunteers and individuals with mild HI. However, there was a significant difference between those with moderate and severe HI and healthy volunteers with respect to peak concentration (Cmax) and plasma exposure (AUC0-t, AUC0-∞). Mean Cmax decreased by 47% and 57%, respectively, in participants with moderate and severe HI vs matched healthy volunteers. Similarly, mean AUC0-t decreased by 36% and 45%, and mean AUC0-∞ decreased by 46% and 48%, respectively, in individuals with moderate and severe HI vs healthy volunteers. Mean t1/2 ranged from 51.5 to 74.9 h across all groups. The variability in exposure ranged from 17.8% to 51.8% across all groups. Systemic exposure of M1 was also significantly decreased in study participants with moderate or severe HI vs healthy participants and individuals with mild HI. These changes were not significantly dissimilar from the inter-patient variability in these parameters observed in healthy volunteers. All AEs were grade 1-2 in severity. Diarrhea and headache were the only AEs reported in >1 participant (n=4 each). Based on these observations, it is unlikely that dosage adjustments would be warranted in patients with mild, moderate, or severe HI treated with pacritinib.
Keywords: pacritinib, myelofibrosis, hepatic impairment, pharmacokinetics
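The non-compartmental PK parameters named above can be computed directly from a concentration-time profile: AUC0-t by the linear trapezoidal rule, λz from a log-linear fit of the terminal points, then t1/2 = ln 2 / λz and AUC0-∞ = AUC0-t + Clast/λz. The sketch below uses an invented illustrative profile, not the study's data.

```python
import numpy as np

# Hypothetical plasma concentration-time profile (ng/mL vs h)
t = np.array([0, 1, 2, 4, 8, 24, 48, 96, 168], dtype=float)
c = np.array([0, 50, 80, 70, 55, 30, 15, 5, 1], dtype=float)

# AUC from zero to the last measurable concentration (linear trapezoidal rule)
auc_0_t = float(np.sum((t[1:] - t[:-1]) * (c[1:] + c[:-1]) / 2.0))

# Terminal elimination rate constant from a log-linear fit of the last points
lam_z = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]
t_half = np.log(2.0) / lam_z            # apparent terminal half-life (h)
auc_0_inf = auc_0_t + c[-1] / lam_z     # extrapolation to infinity

cmax = c.max()
tmax = t[np.argmax(c)]
print(cmax, tmax)   # 80.0 2.0
```

Comparing these parameters between hepatic-impairment groups and matched healthy volunteers is what yields the percentage decreases in Cmax and AUC reported in the abstract.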
34 Physiological Effects on Scientist Astronaut Candidates: Hypobaric Training Assessment
Authors: Pedro Llanos, Diego García
Abstract:
This paper aims to expand our understanding of the effects of hypoxia training on our bodies in order to better model its dynamics and leverage some of its implications for human health. Hypoxia training is a recommended practice for military and civilian pilots, allowing them to recognize their early hypoxia signs and symptoms, and for Scientist Astronaut Candidates (SACs), who underwent hypobaric hypoxia (HH) exposure as part of a training activity for prospective suborbital flight applications. This observational-analytical study describes the physiologic responses and symptoms experienced by a SAC group before, during and after HH exposure and proposes a model for assessing predicted versus observed physiological responses. A group of individuals with diverse Science, Technology, Engineering and Mathematics (STEM) backgrounds conducted a hypobaric training session at altitudes up to 22,000 ft (FL220), or 6,705 meters, during which heart rate (HR), breathing rate (BR) and core temperature (Tc) were monitored with a chest strap sensor before and after HH exposure. A pulse oximeter registered oxygen saturation (SpO2) levels and the number and duration of desaturations during the HH chamber flight. Hypoxia symptoms described by the SACs during the HH training session were also registered. These data allowed the generation of a preliminary predictive model of the oxygen desaturation and O2 pressure curve for each subject, which consists of a sixth-order polynomial fit during exposure and a fifth- or fourth-order polynomial fit during recovery. Data analysis showed no significant differences in HR and BR between pre and post HH exposure in most of the SACs, while Tc measures showed slight but consistent decremental changes.
All subjects registered SpO2 greater than 94% for the majority of their individual HH exposures, but all of them presented at least one clinically significant desaturation (SpO2 < 85% for more than 5 seconds), and half of the individuals showed SpO2 below 87% for at least 30% of their HH exposure time. Finally, real-time collection of HH symptoms identified temperature somatosensory perceptions (SP) in 65% of individuals and task-focus issues in 52.5% of individuals as the most common HH indications. 95% of the subjects experienced HH onset symptoms below FL180; all participants achieved full recovery from HH symptoms within 1 minute of donning their O2 masks. The current HH study performed on this group of individuals suggests a rapid and fully reversible physiologic response after HH exposure, as expected and as obtained in previous studies. Our data showed consistent agreement between predicted and observed SpO2 curves during HH, suggesting a mathematical function that may be used to model HH performance deficiencies. During the HH study, real-time HH symptoms were registered, providing evidence of SP and task-focus issues as the earliest and most common indicators. Finally, an assessment of HH signs and symptoms in a heterogeneous group of non-pilot individuals showed results similar to previous studies in homogeneous populations of pilots.
Keywords: slow onset hypoxia, hypobaric chamber training, altitude sickness, symptoms and altitude, pressure cabin
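The polynomial-fit model described above (sixth order during exposure, fourth or fifth order during recovery) can be sketched with NumPy. The SpO2 samples below are synthetic stand-ins for one subject's chamber data; only the fit orders mirror the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic SpO2 (%) readings: gradual desaturation over a 20-minute HH
# exposure, then rapid recovery after donning the O2 mask (assumed data)
t_exp = np.linspace(0.0, 20.0, 40)
spo2_exp = 98.0 - 13.0 * (t_exp / 20.0) ** 2 + rng.normal(0, 0.3, t_exp.size)
t_rec = np.linspace(0.0, 1.0, 20)
spo2_rec = 85.0 + 13.0 * t_rec ** 0.5 + rng.normal(0, 0.3, t_rec.size)

# Sixth-order polynomial fit for exposure, fourth-order for recovery,
# mirroring the orders reported in the study
model_exp = np.poly1d(np.polyfit(t_exp, spo2_exp, 6))
model_rec = np.poly1d(np.polyfit(t_rec, spo2_rec, 4))

# Root-mean-square error of the exposure fit as a goodness-of-fit check
rmse_exp = float(np.sqrt(np.mean((model_exp(t_exp) - spo2_exp) ** 2)))
print(round(rmse_exp, 2))
```

Comparing `model_exp(t)` against each subject's observed SpO2 curve is what allows the predicted-versus-observed assessment described in the abstract.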
33 Visco-Hyperelastic Finite Element Analysis for Diagnosis of Knee Joint Injury Caused by Meniscal Tearing
Authors: Eiji Nakamachi, Tsuyoshi Eguchi, Sayo Yamamoto, Yusuke Morita, H. Sakamoto
Abstract:
In this study, we aim to reveal the relationship between meniscal tearing and articular cartilage injury of the knee joint by using the dynamic explicit finite element (FE) method. Meniscal injuries reduce the functional ability of the meniscus and consequently increase the load on the articular cartilage of the knee joint. In order to prevent the induction of osteoarthritis (OA) caused by meniscal injuries, many medical treatment techniques, such as artificial meniscus replacement and meniscal regeneration, have been developed. However, it is reported that these treatments are not comprehensive solutions. In order to reveal the fundamental mechanism of OA induction, mechanical characterization of the meniscus in normal and injured states was carried out using FE analyses. First, an FE model of the human knee joint in the normal ('intact') state was constructed using magnetic resonance (MR) tomography images and the image construction code Materialize Mimics. Next, two types of meniscal injury models, with radial tears of the medial and lateral menisci, were constructed. In the FE analyses, a linear elastic constitutive law was adopted for the femur and tibia bones, a visco-hyperelastic constitutive law for the articular cartilage, and a visco-anisotropic hyperelastic constitutive law for the meniscus. Material properties of the articular cartilage and meniscus were identified using stress-strain curves obtained from our compressive and tensile tests. The numerical results under the normal walking condition revealed how and where the maximum compressive stress occurred on the articular cartilage. The maximum compressive stress and its occurrence point varied among the intact and two meniscal tear models. These compressive stress values can be used to establish the threshold value for pathological change for diagnosis.
In this study, FE analyses of the knee joint were carried out to reveal the influence of meniscal injuries on cartilage injury. The following conclusions are obtained. 1. A 3D FE model consisting of the femur, tibia, articular cartilage and menisci was constructed based on MR images of the human knee joint using the image-processing code Materialize Mimics, and the model was meshed with tetrahedral FE elements. 2. A visco-anisotropic hyperelastic constitutive equation was formulated by adopting the generalized Kelvin model. The material properties of the meniscus and articular cartilage were determined by curve fitting with experimental results. 3. Stresses on the articular cartilage and menisci were obtained for the intact case and for the two radial tears of the medial and lateral menisci. Compared with the intact knee joint, the two tear models showed almost the same stress values as each other, higher than those of the intact joint. It was shown that both meniscal tears induce stress localization in both the medial and lateral regions. It is confirmed that our newly developed FE analysis code has the potential to become a new diagnostic system to evaluate meniscal damage to the articular cartilage through mechanical functional assessment.
Keywords: finite element analysis, hyperelastic constitutive law, knee joint injury, meniscal tear, stress concentration
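The generalized Kelvin model mentioned in conclusion 2 leads, in its linearized relaxation form, to a Prony-series modulus E(t) = E∞ + Σᵢ Eᵢ·exp(−t/τᵢ), which is what the curve fitting to experimental stress-strain data determines. The sketch below evaluates such a series with hypothetical cartilage-like parameters; it is an illustration of the model form, not the authors' identified values.

```python
import numpy as np

def relaxation_modulus(t, e_inf, branches):
    """Prony-series relaxation modulus of a generalized viscoelastic model:
    E(t) = E_inf + sum_i E_i * exp(-t / tau_i)."""
    t = np.asarray(t, dtype=float)
    e = np.full_like(t, float(e_inf))
    for e_i, tau_i in branches:
        e = e + e_i * np.exp(-t / tau_i)
    return e

# Hypothetical long-term modulus (MPa) and (E_i, tau_i) branches (MPa, s)
e_inf = 0.5
branches = [(0.8, 0.1), (0.4, 1.0), (0.2, 10.0)]

t = np.logspace(-3, 2, 6)
E = relaxation_modulus(t, e_inf, branches)
# Instantaneous modulus E(0) = E_inf + sum(E_i); long-time limit -> E_inf
```

Fitting `e_inf` and the branch pairs to measured relaxation curves is the curve-fitting step the abstract refers to.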
32 Predicting Suicidal Behavior by an Accurate Monitoring of RNA Editing Biomarkers in Blood Samples
Authors: Berengere Vire, Nicolas Salvetat, Yoann Lannay, Guillaume Marcellin, Siem Van Der Laan, Franck Molina, Dinah Weissmann
Abstract:
Predicting suicidal behavior is one of the most complex challenges of daily psychiatric practice. Today, suicide risk prediction using biological tools is not validated, and prediction is based only on subjective clinical reports of the at-risk individual. Therefore, there is a great need to identify biomarkers that would allow early identification of individuals at risk of suicide. Alterations of adenosine-to-inosine (A-to-I) RNA editing of neurotransmitter receptors and other proteins have been shown to be involved in the etiology of different psychiatric disorders and linked to suicidal behavior. RNA editing is a co- or post-transcriptional process leading to site-specific alterations in RNA sequences. It plays an important role in the epitranscriptomic regulation of RNA metabolism. On postmortem human brain tissue (prefrontal cortex) of depressed suicide victims, Alcediag found specific alterations of RNA editing activity on the mRNA coding for the serotonin 2C receptor (5-HT2cR). Additionally, an increase in the expression levels of ADARs, the RNA editing enzymes, and modifications of the RNA editing profiles of prime targets, such as phosphodiesterase 8A (PDE8A) mRNA, have also been observed. Interestingly, the PDE8A gene is located on chromosome 15q25.3, a genomic region that has recurrently been associated with early-onset major depressive disorder (MDD). In the current study, we examined whether modifications in the RNA editing profiles of prime targets allow the identification of disease-relevant blood biomarkers and the evaluation of suicide risk in patients. To address this question, we performed a clinical study to identify an RNA editing signature in the blood of depressed patients with and without a history of suicide attempts. Patients' samples were drawn in PAXgene tubes and analyzed on Alcediag's proprietary RNA editing platform using next-generation sequencing technology. In addition, gene expression analysis by quantitative PCR was performed.
We generated a multivariate algorithm comprising various selected biomarkers to detect patients at high risk of attempting suicide. We evaluated the diagnostic performance using the relative proportion of PDE8A mRNA editing at different sites and/or isoforms, as well as the expression of PDE8A and the ADARs. The significance of these biomarkers for suicidality was evaluated using the area under the receiver-operating characteristic curve (AUC). The generated algorithm comprising the biomarkers was found to have strong diagnostic performance, with high specificity and sensitivity. In conclusion, we developed tools to measure disease-specific biomarkers in blood samples of patients in order to identify individuals at the greatest risk of future suicide attempts. This technology not only fosters patient management but is also suitable for predicting the risk of drug-induced psychiatric side effects such as iatrogenic increase of suicidal ideas/behaviors.
Keywords: blood biomarker, next-generation sequencing, RNA editing, suicide
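The AUC used above to assess the biomarker algorithm can be computed without any curve construction as the Mann-Whitney statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. The sketch below applies this to synthetic composite scores; the scores and labels are invented for illustration, not the study's data.

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC of the ROC curve via the Mann-Whitney U statistic:
    fraction of positive/negative pairs ranked correctly (ties count 0.5)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Synthetic composite biomarker scores (e.g., combining PDE8A editing levels)
scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1, 1, 1, 0, 1, 0, 0, 0]   # 1 = history of suicide attempt
print(roc_auc(scores, labels))       # 0.9375
```

An AUC of 0.5 means the composite score is uninformative; values near 1 correspond to the high specificity and sensitivity reported for the algorithm.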
31 Biodegradable Cross-Linked Composite Hydrogels Enriched with Small Molecule for Osteochondral Regeneration
Authors: Elena I. Oprita, Oana Craciunescu, Rodica Tatia, Teodora Ciucan, Reka Barabas, Orsolya Raduly, Anca Oancea
Abstract:
Healing of osteochondral defects requires repair of the damaged articular cartilage, the underlying subchondral bone and the interface between these tissues (the functional calcified layer). For this purpose, developing a single monophasic scaffold that can regenerate two specific lineages (cartilage and bone) becomes a challenge. The aim of this work was to develop variants of a biodegradable cross-linked composite hydrogel based on a natural polypeptide (gelatin) and polysaccharide components (chondroitin-4-sulphate and hyaluronic acid), in a ratio of 2:0.08:0.02 (w/w/w), mixed with Si-hydroxyapatite (Si-Hap) in two ratios of 1:1 and 2:1 (w/w). Si-Hap was synthesized and characterized as a better alternative to conventional Hap. Subsequently, both composite hydrogel variants were cross-linked with N-(3-dimethylaminopropyl)-N'-ethylcarbodiimide (EDC) and enriched with a small bioactive molecule, icariin. The small molecule icariin (Ica) (C33H40O15) is the main active constituent (a flavonoid) of Herba epimedium, used in traditional Chinese medicine to treat bone- and cartilage-related disorders. Ica enhances osteogenic and chondrogenic differentiation of bone marrow mesenchymal stem cells (BMSCs), facilitates matrix calcification and increases the synthesis of specific extracellular matrix (ECM) components by chondrocytes. Afterward, the composite hydrogels were characterized for their physicochemical properties in terms of enzymatic biodegradation in the presence of type I collagenase and trypsin, swelling capacity and degree of cross-linking (TNBS assay). The cumulative release of Ica and its real-time concentration were quantified at predetermined time points, against a standard curve of Ica, after incubation of the hydrogels in saline buffer at physiological parameters. The obtained cross-linked composite hydrogels enriched with the small molecule Ica were also characterized for morphology by scanning electron microscopy (SEM).
Their cytocompatibility was evaluated according to the EN ISO 10993-5:2009 standard for medical device testing. Thus, analyses of cell viability (Live/Dead assay), cell proliferation (Neutral Red assay) and cell adhesion to the composite hydrogels (SEM) were performed using the NCTC clone L929 cell line. The final results showed that both cross-linked composite hydrogel variants enriched with Ica presented optimal physicochemical, structural and biological properties for use as natural scaffolds able to repair osteochondral defects. The data did not reveal any toxicity of the composite hydrogels in the NCTC stabilized cell line within the tested range of concentrations. Moreover, cells were capable of spreading and proliferating on both composite hydrogel surfaces. In conclusion, the designed biodegradable cross-linked composites enriched with Si and Ica are recommended for further testing as natural temporary scaffolds that can allow cell migration and synthesis of new extracellular matrix within osteochondral defects.
Keywords: composites, gelatin, osteochondral defect, small molecule
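The swelling capacity and cumulative-release measurements described above reduce to simple arithmetic: swelling degree from swollen versus dry weights, and cumulative release as a running percentage of the loaded amount. The sketch below uses hypothetical numbers, not the study's measurements.

```python
def swelling_degree(w_swollen, w_dry):
    """Swelling degree (%) from swollen and dry hydrogel weights (g)."""
    return 100.0 * (w_swollen - w_dry) / w_dry

def cumulative_release(released_mg, loaded_mg):
    """Cumulative Ica release (%) of the loaded amount, per time point."""
    curve, total = [], 0.0
    for m in released_mg:
        total += m
        curve.append(100.0 * total / loaded_mg)
    return curve

# Hypothetical gravimetric and release measurements
sd = swelling_degree(w_swollen=2.4, w_dry=0.3)
release = cumulative_release([0.2, 0.15, 0.1, 0.05], loaded_mg=1.0)
print(round(sd, 1), [round(x, 1) for x in release])
```

In practice, each released amount would be read off the Ica standard curve from absorbance measurements of the saline buffer at the sampling times.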
30 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance
Authors: Ammar Alali, Mahmoud Abughaban
Abstract:
Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents have been a major segment of the costs associated with non-productive time (NPT). Traditionally, stuck pipe problems are treated as part of operations and solved after sticking occurs. However, the real key to savings and success is predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and machine learning (ML) algorithms to predict drilling events in real time using surface drilling data with minimal computational power. The method combines two types of analysis: (1) real-time prediction and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses two physical methods (stacking and flattening) to filter any noise in the signature and create a robust pre-determined pilot signature that adheres to the local geology. Once the drilling operation starts, live Wellsite Information Transfer Standard Markup Language (WITSML) surface data are fed into a matrix and aggregated at the same frequency as the pre-determined signature. Then, the matrix is correlated with the pre-determined stuck-pipe signature for the field, in real time. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class while identifying redundant ones. The correlation output is interpreted as a probability curve for stuck pipe incident prediction in real time.
Once this probability passes a fixed, user-defined threshold, the other component, cause analysis, alerts the user to the expected incident based on the set of pre-determined signatures. A set of recommendations is then provided to reduce the associated risk. The validation process involved feeding historical drilling data as a live stream, mimicking actual drilling conditions, for an onshore oil field. Pre-determined signatures had previously been created for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. The prediction of the stuck pipe problem requires a method to capture geological, geophysical and drilling data and to recognize the indicators of this issue at the field and geological formation levels. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe
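The CFS criterion mentioned above scores a feature subset by its merit, k·r̄cf / √(k + k(k−1)·r̄ff), where r̄cf is the mean feature-class correlation and r̄ff the mean feature-feature correlation: relevant but mutually non-redundant subsets score highest. The sketch below computes this merit from Pearson correlations on synthetic surface-drilling features; it illustrates the CFS criterion generically, not the authors' implementation.

```python
import numpy as np

def pearson(a, b):
    return float(np.corrcoef(a, b)[0, 1])

def cfs_merit(X, y, subset):
    """CFS merit of a feature subset:
    merit = k * mean|r(feature, class)| / sqrt(k + k*(k-1) * mean|r(fi, fj)|)."""
    k = len(subset)
    r_cf = np.mean([abs(pearson(X[:, i], y)) for i in subset])
    if k == 1:
        return float(r_cf)
    r_ff = np.mean([abs(pearson(X[:, i], X[:, j]))
                    for i in subset for j in subset if i < j])
    return float(k * r_cf / np.sqrt(k + k * (k - 1) * r_ff))

# Synthetic features: one informative, one redundant copy, one pure noise
rng = np.random.default_rng(0)
y = rng.normal(size=200)                                # stuck-pipe indicator proxy
X = np.column_stack([y + 0.1 * rng.normal(size=200),    # relevant
                     y + 0.1 * rng.normal(size=200),    # redundant with the first
                     rng.normal(size=200)])             # irrelevant
print(cfs_merit(X, y, [0]) > cfs_merit(X, y, [2]))      # True
```

Note how adding the redundant second feature barely raises the merit of subset `[0]`, which is exactly how CFS flags redundancy.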
Procedia PDF Downloads 232
29 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire
Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan
Abstract:
Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated design of timber structures considering standard fire exposure, while modern principles of performance-based design enable the use of advanced non-standard fire curves. In Europe, the standard for fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, i.e. the residual cross-section of uncharred timber reduced additionally by a so-called zero-strength layer. For standard fire exposure, Eurocode 5 gives a fixed value of the zero-strength layer, i.e. 7 mm, while for non-standard parametric fires no comments or recommendations on the zero-strength layer are given. Designers therefore often apply the adopted 7 mm rule to parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero-strength layer can be on the unsafe side even for standard fire exposure, its use in the case of a parametric fire is highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero-strength layer and the parametric charring rates used in the effective cross-section method in case of parametric fire. Parametric studies are carried out on a simple solid timber beam exposed to a large number of parametric fire curves. The zero-strength layer and charring rates are determined from numerical simulations performed with a recently developed advanced two-step computational model. 
The first step comprises a hygro-thermal model which predicts the temperature, moisture, and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner's kinematically exact beam model and accounts for the membrane, shear, and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, according to Eurocode 5, assumed to have a fixed temperature of around 300°C. Based on the performed study and observations, improved values of the charring rates and a new thickness of the zero-strength layer for parametric fires are determined. The reduced cross-section method is thus substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero-strength layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and experimental research in the future. Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer
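For orientation, the effective cross-section method that the study sets out to improve reduces a member's section by the notional char depth plus the zero-strength layer. A minimal sketch of the one-sided depth reduction, using an illustrative notional charring rate (not the improved parametric values derived in the study):

```python
def effective_depth(d_init_mm, beta_n_mm_per_min, t_min, d0_mm=7.0):
    """Reduced cross-section method in the style of EN 1995-1-2:
    effective depth = initial depth - notional char depth - zero-strength layer.
    d0 = 7 mm is the fixed Eurocode 5 value for standard fire exposure,
    which the abstract argues may be unsafe for parametric fires."""
    char_depth = beta_n_mm_per_min * t_min  # one-sided charring
    return max(d_init_mm - char_depth - d0_mm, 0.0)

# e.g. 200 mm deep beam, notional charring rate 0.7 mm/min, 30 min exposure
d_ef = effective_depth(200.0, 0.7, 30.0)  # 200 - 21 - 7 = 172 mm
```

The study's contribution is precisely to replace the fixed 7 mm default and the charring rate with values calibrated against the two-step hygro-thermo-mechanical simulations for parametric fire curves.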
Procedia PDF Downloads 168
28 Detection of Triclosan in Water Based on Nanostructured Thin Films
Authors: G. Magalhães-Mota, C. Magro, S. Sério, E. Mateus, P. A. Ribeiro, A. B. Ribeiro, M. Raposo
Abstract:
Triclosan [5-chloro-2-(2,4-dichlorophenoxy)phenol], belonging to the class of Pharmaceuticals and Personal Care Products (PPCPs), is a broad-spectrum antimicrobial agent and bactericide. Because of its antimicrobial efficacy, it is widely used in personal health and skin care products, such as soaps, detergents, hand cleansers, cosmetics, toothpastes, etc. However, it is considered to disrupt the endocrine system, for instance thyroid hormone homeostasis and possibly the reproductive system. Considering the widespread use of triclosan, environmental and food safety problems regarding triclosan are expected to increase dramatically. Triclosan has been found in river water samples in both North America and Europe and is likely widely distributed wherever triclosan-containing products are used. Although significant amounts are removed in sewage plants, considerable quantities remain in the sewage effluent, initiating widespread environmental contamination. Triclosan undergoes bioconversion to methyl-triclosan, which has been demonstrated to bioaccumulate in fish. In addition, triclosan has been found in human urine samples from persons with no known industrial exposure and in significant amounts in samples of mother's milk, demonstrating its presence in humans. The action of sunlight in river water is known to turn triclosan into dioxin derivatives, raising the possibility of pharmacological dangers not envisioned when the compound was originally utilized. The aim of this work is to detect low concentrations of triclosan in an aqueous complex matrix through a sensor array system, following the electronic tongue concept based on impedance spectroscopy. To achieve this goal, we selected molecules for the sensor that have a high affinity for triclosan and whose sensitivity ensures the detection of concentrations down to at least the nanomolar range. 
Thin films of organic molecules and oxides were produced by the layer-by-layer (LbL) technique and by sputtering onto glass solid supports already covered with gold interdigitated electrodes. By submerging the films in complex aqueous solutions with different concentrations of triclosan, resistance and capacitance values were obtained at different frequencies. The preliminary results showed that an array of interdigitated electrode sensors, coated or uncoated with different LbL and sputtered films, can be used to detect TCS traces in aqueous solutions over a wide concentration range, from 10⁻¹² to 10⁻⁶ M. The PCA method was applied to the measured data in order to differentiate the solutions with different concentrations of TCS. Moreover, it was also possible to trace a calibration curve, a plot of the logarithm of resistance versus the logarithm of concentration, which allowed us to fit the plotted data points with a decreasing straight line with a slope of 0.022 ± 0.006, corresponding to the best sensitivity of our sensor. To find the sensor resolution near the smallest concentration used (Cs = 1 pM): the smallest change in log(resistance) that can be measured with resolution is 0.006, so Δlog C = 0.006/0.022 ≈ 0.273, and therefore C − Cs ≈ 0.9 pM. This leads to a sensor resolution of about 0.9 pM near the smallest concentration used, 1 pM. This detection limit is lower than the values reported in the literature. Keywords: triclosan, layer-by-layer, impedance spectroscopy, electronic tongue
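The resolution estimate follows directly from the fitted calibration line. A short numerical check of the quoted figures (slope magnitude 0.022 per decade, smallest resolvable change 0.006 in log resistance):

```python
# Calibration reported in the abstract: log10(R) vs log10(C) is a
# straight line with slope magnitude 0.022 +/- 0.006.
slope = 0.022        # |d log10(R) / d log10(C)|
delta_logR = 0.006   # smallest resolvable change in log10(R)

delta_logC = delta_logR / slope      # ~0.273 decades in concentration
Cs = 1e-12                           # smallest concentration used, 1 pM
C = Cs * 10 ** delta_logC            # nearest distinguishable concentration
resolution = C - Cs                  # ~0.9e-12 M, i.e. ~0.9 pM
```

This reproduces the ~0.9 pM resolution stated in the abstract, confirming the arithmetic of the Δlog C step.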
Procedia PDF Downloads 253
27 Brittle Fracture Tests on Steel Bridge Bearings: Application of the Potential Drop Method
Authors: Natalie Hoyer
Abstract:
Usually, steel structures are designed for the upper region of the steel toughness-temperature curve. To address the reduced toughness properties in the temperature transition range, additional safety assessments based on fracture mechanics are necessary. These assessments enable the appropriate selection of steel materials to prevent brittle fracture. In this context, recommendations were established in 2011 to regulate the appropriate selection of steel grades for bridge bearing components. However, these recommendations are no longer fully aligned with more recent insights: designing bridge bearings and their components in accordance with DIN EN 1337 and the relevant sections of DIN EN 1993 has led to an increasing trend of using large plate thicknesses, especially for long-span bridges. These plate thicknesses, however, surpass the application limits specified in the national annex of DIN EN 1993-2. Furthermore, compliance with the regulations outlined in DIN EN 1993-1-10 regarding material toughness and through-thickness properties requires some further modifications. Therefore, these standards cannot be directly applied to the material selection for bearings without additional information. In addition, recent findings indicate that certain bridge bearing components are subjected to high fatigue loads, which must be considered in structural design, material selection, and calculations. To address this issue, the German Center for Rail Traffic Research initiated a research project aimed at developing a proposal to enhance the existing standards. This proposal seeks to establish guidelines for the selection of steel materials for bridge bearings to prevent brittle fracture, particularly for thick plates and components exposed to specific fatigue loads. The results derived from theoretical analyses, including finite element simulations and analytical calculations, are verified through large-scale component testing. 
During these large-scale tests, in which brittle failure is deliberately induced in a bearing component, an artificially generated defect is introduced into the specimen at the predetermined hotspot. Subsequently, a dynamic load is imposed until crack initiation occurs, replicating realistic conditions, i.e. a sharp notch resembling a fatigue crack. To stop the dynamic load in time, it is important to determine precisely the point at which crack growth transitions from stable to unstable. To achieve this, the potential drop measurement method is employed. The proposed paper discusses the choice of measurement method (alternating current potential drop (ACPD) or direct current potential drop (DCPD)), presents results from correlations with the created FE models, and proposes a new approach for introducing beach marks into the fracture surface within the framework of potential drop measurement. Keywords: beach marking, bridge bearing design, brittle fracture, design for fatigue, potential drop
Procedia PDF Downloads 43
26 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data: Impact of Image Format
Authors: Maryam Fallahpoor, Biswajeet Pradhan
Abstract:
Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretations and, thereby, reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: neuroimaging informatics technology initiative (NIfTI) and digital imaging and communications in medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was designated as 1, while normal images were assigned a value of 0. Subsequently, a U-net-based lung segmentation module was applied to obtain 3D segmented lung regions. 
The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest area under the curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM-format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection. Keywords: deep learning, COVID-19 detection, NIfTI format, DICOM format
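The intensity pre-processing described above (HU clipping, normalization, zero-centering) can be sketched in a few lines of NumPy. This is an illustrative sketch under stated assumptions, not the authors' pipeline: resampling to 128 × 128 × 60 at 1 mm isotropic spacing is assumed to have been done beforehand, since it requires each scan's voxel spacing, and the toy array stands in for a real CT volume.

```python
import numpy as np

def preprocess_ct(volume_hu):
    """Clip intensities to the (-1000, 400) HU window used in the abstract,
    min-max normalize the window to [0, 1], then zero-center."""
    v = np.clip(volume_hu, -1000.0, 400.0)
    v = (v + 1000.0) / 1400.0   # map [-1000, 400] HU -> [0, 1]
    return v - v.mean()         # zero-centering

# Toy 2D "volume" with out-of-window values at both ends
vol = np.array([[-1200.0, -1000.0],
                [0.0, 500.0]])
out = preprocess_ct(vol)
```

After this step a training loader would shuffle the resulting volumes before feeding them to the 3D CNNs.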
Procedia PDF Downloads 88
25 The Impact of a Simulated Teaching Intervention on Preservice Teachers’ Sense of Professional Identity
Authors: Jade V. Rushby, Tony Loughland, Tracy L. Durksen, Hoa Nguyen, Robert M. Klassen
Abstract:
This paper reports a study investigating the development and implementation of an online multi-session ‘scenario-based learning’ (SBL) program administered to preservice teachers in Australia. The transition from initial teacher education to the teaching profession can present numerous cognitive and psychological challenges for early career teachers. Therefore, the identification of additional supports, such as scenario-based learning, that can supplement existing teacher education programs may help preservice teachers to feel more confident and prepared for the realities and complexities of teaching. Scenario-based learning is grounded in situated learning theory which holds that learning is most powerful when it is embedded within its authentic context. SBL exposes participants to complex and realistic workplace situations in a supportive environment and has been used extensively to help prepare students in other professions, such as legal and medical education. However, comparatively limited attention has been paid to investigating the effects of SBL in teacher education. In the present study, the SBL intervention provided participants with the opportunity to virtually engage with school-based scenarios, reflect on how they might respond to a series of plausible response options, and receive real-time feedback from experienced educators. The development process involved several stages, including collaboration with experienced educators to determine the scenario content based on ‘critical incidents’ they had encountered during their teaching careers, the establishment of the scoring key, the development of the expert feedback, and an extensive review process to refine the program content. The 4-part SBL program focused on areas that can be challenging in the beginning stages of a teaching career, including managing student behaviour and workload, differentiating the curriculum, and building relationships with colleagues, parents, and the community. 
Results from prior studies implemented by the research group using a similar 4-part format have shown a statistically significant increase in preservice teachers’ self-efficacy and classroom readiness from the pre-test to the final post-test. In the current research, professional teaching identity - incorporating self-efficacy, motivation, self-image, satisfaction, and commitment to teaching - was measured over six weeks at multiple time points: before, during, and after the 4-part scenario-based learning program. Analyses included latent growth curve modelling to assess the trajectory of change in the outcome variables throughout the intervention. The paper outlines (1) the theoretical underpinnings of SBL, (2) the development of the SBL program and methodology, and (3) the results from the study, including the impact of the SBL program on aspects of participating preservice teachers’ professional identity. The study shows how SBL interventions can be implemented alongside the initial teacher education curriculum to help prepare preservice teachers for the transition from student to teacher. Keywords: classroom simulations, e-learning, initial teacher education, preservice teachers, professional learning, professional teaching identity, scenario-based learning, teacher development
Procedia PDF Downloads 72
24 Magnetic Carriers of Organic Selenium (IV) Compounds: Physicochemical Properties and Possible Applications in Anticancer Therapy
Authors: E. Mosiniewicz-Szablewska, P. Suchocki, P. C. Morais
Abstract:
Despite the significant progress in cancer treatment, there is a need to search for new therapeutic methods in order to minimize side effects. Chemotherapy, the main current method of treating cancer, is non-selective and has a number of limitations. Toxicity to healthy cells is undoubtedly the biggest problem limiting the use of many anticancer drugs. The problem of how to kill cancer without harming the patient may be solved by using organic selenium (IV) compounds. Organic selenium (IV) compounds are a new class of materials showing strong anticancer activity. They are the first organic compounds containing selenium at the +4 oxidation state, and therefore they eliminate multidrug resistance in all tumor cell lines tested so far. These materials are capable of selectively killing cancer cells without damaging healthy ones. They are obtained by the incorporation of selenous acid (H2SeO3) into molecules of the fatty acids of sunflower oil, and they are therefore inexpensive to manufacture. Attaching these compounds to magnetic carriers enables their precise delivery directly to the tumor area and the simultaneous application of magnetic hyperthermia, thus creating a huge opportunity to effectively get rid of the tumor without side effects. Poly(lactic-co-glycolic) acid (PLGA) nanocapsules loaded with maghemite (γ-Fe2O3) nanoparticles and organic selenium (IV) compounds were successfully prepared by the nanoprecipitation method. The in vitro antitumor activity of the nanocapsules was evidenced using murine melanoma (B16-F10), oral squamous cell carcinoma (OSCC), and murine (4T1) and human (MCF-7) breast cancer lines. Further exposure of these cells to an alternating magnetic field increased the antitumor effect of the nanocapsules. Moreover, the nanocapsules presented an antitumor effect while not affecting normal cells. The magnetic properties of the nanocapsules were investigated by means of dc magnetization, ac susceptibility, and electron spin resonance (ESR) measurements. 
The nanocapsules presented typical superparamagnetic behavior around room temperature, manifested by the split between the zero-field-cooled/field-cooled (ZFC/FC) magnetization curves and the absence of hysteresis in the field-dependent magnetization curve above the blocking temperature. Moreover, the blocking temperature decreased with increasing applied magnetic field. The superparamagnetic character of the nanocapsules was also confirmed by the occurrence of a maximum in the temperature dependences of both the real χ′(T) and imaginary χ′′(T) components of the ac magnetic susceptibility, which shifted towards higher temperatures with increasing frequency. Additionally, upon decreasing the temperature, the ESR signal shifted to lower fields and gradually broadened, closely following the predictions for the ESR of superparamagnetic nanoparticles. The observed superparamagnetic properties of the nanocapsules enable their simple manipulation by means of a magnetic field gradient after introduction into the blood stream, which is a necessary condition for their use as magnetic drug carriers. The observed anticancer and superparamagnetic properties show that the magnetic nanocapsules loaded with organic selenium (IV) compounds should be considered an effective material system for magnetic drug delivery and as a magnetohyperthermia inductor in antitumor therapy. Keywords: cancer treatment, magnetic drug delivery system, nanomaterials, nanotechnology
Procedia PDF Downloads 204
23 qPCR Method for Detection of Halal Food Adulteration
Authors: Gabriela Borilova, Monika Petrakova, Petr Kralik
Abstract:
Nowadays, European producers are increasingly interested in the production of halal meat products. Halal meat has been appearing more and more in the EU's market network, and meat products from European producers are being exported to Islamic countries. Halal criteria are mainly related to the origin of the muscle used in production and to the way products are obtained and processed. Although the EU has legislatively addressed the question of food authenticity, the circumstances of previous years, when products with undeclared horse or poultry meat content appeared on EU markets, raised the question of the effectiveness of control mechanisms. Replacement of expensive or unavailable types of meat with low-priced meat has been practised on a global scale for a long time. Likewise, halal products may be contaminated (falsified) with pork or food components obtained from pigs. These components include collagen, offal, pork fat, mechanically separated pork, emulsifiers, blood, dried blood, dried blood plasma, gelatin, and others. These substances can influence the sensory properties of meat products (color, aroma, flavor, consistency, and texture), or they are added for preservation and stabilization. Food manufacturers sometimes resort to these substances mainly because of their ready availability and low prices. However, the use of these substances is not always declared on the product packaging. Verification of the presence of declared ingredients, including the detection of undeclared ingredients, is among the basic control procedures for determining the authenticity of food. Molecular biology methods, based on DNA analysis, offer rapid and sensitive testing. The PCR method and its modifications can be successfully used to identify animal species in single- and multi-ingredient raw and processed foods, and qPCR is the first choice for food analysis. Like all PCR-based methods, it is simple to implement, and its greatest advantage is the absence of post-PCR visualization by electrophoresis. 
qPCR allows detection of trace amounts of nucleic acids and, by comparing an unknown sample with a calibration curve, can also provide information on the absolute quantity of individual components in the sample. Our study addresses the problem that the molecular-biological approach of most work on the identification and quantification of animal species is based on the construction of specific primers amplifying a selected section of the mitochondrial genome. In addition, the sections amplified in conventional PCR are relatively long (hundreds of bp) and unsuitable for use in qPCR, because when DNA is fragmented the amplification of long target sequences is quite limited. Our study focuses on finding a suitable genomic DNA target and optimizing qPCR to reduce the variability and distortion of results, which is necessary for the correct interpretation of quantification results. In halal products, the impact of falsification of meat products by the addition of components derived from pigs is all the greater because it is not just an economic issue but above all a religious and social one. This work was supported by the Ministry of Agriculture of the Czech Republic (QJ1530107). Keywords: food fraud, halal food, pork, qPCR
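Absolute quantification against a calibration (standard) curve, mentioned above, conventionally fits the quantification cycle Cq against log10 of the standard's copy number across a serial dilution. A minimal sketch of the back-calculation, with purely illustrative slope and intercept values (the study's actual target and calibration are not reproduced here):

```python
def quantify(cq, slope, intercept):
    """Absolute quantification from a qPCR standard curve:
    Cq = slope * log10(N0) + intercept, solved for the starting
    copy number N0 of the species-specific target."""
    return 10 ** ((cq - intercept) / slope)

def efficiency(slope):
    """Amplification efficiency from the standard-curve slope;
    slope = -3.32 corresponds to ~100% (doubling per cycle)."""
    return 10 ** (-1.0 / slope) - 1.0

# Hypothetical calibration: slope -3.32, intercept 38 (Cq of a single copy)
n0 = quantify(24.72, -3.32, 38.0)   # ~1e4 starting copies
eff = efficiency(-3.32)             # ~1.0, i.e. ~100% efficiency
```

A fragmented-DNA sample depresses the apparent N0 for long amplicons, which is exactly why the study seeks short genomic targets suitable for qPCR.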
Procedia PDF Downloads 247
22 Enhancing Seismic Resilience in Colombia's Informal Housing: A Low-Cost Retrofit Strategy with Buckling-Restrained Braces to Protect Vulnerable Communities in Earthquake-Prone Regions
Authors: Luis F. Caballero-castro, Dirsa Feliciano, Daniela Novoa, Orlando Arroyo, Jesús D. Villalba-morales
Abstract:
Colombia faces a critical challenge in seismic resilience due to the prevalence of informal housing, which constitutes approximately 70% of residential structures. More than 10 million Colombians (20% of the population) live in homes susceptible to collapse in the event of an earthquake. This, combined with the fact that 83% of the population lives in areas of intermediate and high seismic hazard, has had serious consequences for the country. These consequences became evident during the 1999 Armenia earthquake, which affected nearly 100,000 properties and caused economic losses equivalent to 1.88% of that year's Gross Domestic Product (GDP). Despite previous efforts to reinforce informal housing through methods like externally reinforced masonry walls, alternatives based on seismic protection devices (SPDs), such as Buckling-Restrained Braces (BRBs), have not yet been explored in the country. BRBs are reinforcement elements capable of withstanding both compression and tension, making them effective in enhancing the lateral stiffness of structures. In this study, the use of low-cost and easily installable BRBs for the retrofit of informal housing in Colombia was evaluated, considering the economic limitations of the communities. For this purpose, a case study was selected involving an informally constructed dwelling in the country, from which field information on its structural characteristics and construction materials was collected. Based on the gathered information, nonlinear models with and without BRBs were created, and their seismic performance was analyzed and compared through incremental static (pushover) and nonlinear dynamic analyses. The first analysis yielded the capacity curve, showing the sequence of failure events from initial yielding to structural collapse. In the second, the model underwent nonlinear dynamic analyses using a set of seismic records consistent with the country's seismic hazard. 
Based on the results, fragility curves were calculated to evaluate the probability of failure of the informal houses before and after the intervention with BRBs, providing essential information about their effectiveness in reducing seismic vulnerability. The results indicate that low-cost BRBs can significantly increase the capacity of informal housing to withstand earthquakes. The dynamic analysis revealed that retrofitted structures experienced lower displacements and deformations, enhancing the safety of residents and the seismic performance of informally constructed houses. In other words, the use of low-cost BRBs in the retrofit of informal housing in Colombia is a promising strategy for improving structural safety in seismic-prone areas. This study emphasizes the importance of seeking affordable and practical solutions to address seismic risk in vulnerable communities in earthquake-prone regions of Colombia and serves as a model for addressing the similar challenges of informal housing worldwide. Keywords: buckling-restrained braces, fragility curves, informal housing, incremental dynamic analysis, seismic retrofit
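Fragility curves of the kind computed above are conventionally expressed as a lognormal cumulative distribution of the intensity measure. A minimal sketch, with hypothetical median capacities and dispersion standing in for the values estimated from the study's nonlinear dynamic analyses:

```python
import math

def fragility(im, theta, beta):
    """Lognormal fragility curve: probability of reaching a damage state
    at intensity measure `im`, given median capacity `theta` and
    lognormal dispersion `beta` (both fit from dynamic analysis results)."""
    z = (math.log(im) - math.log(theta)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# Hypothetical: the BRB retrofit raises the median collapse capacity
# from 0.4 g to 0.8 g at the same dispersion.
p_before = fragility(0.5, theta=0.4, beta=0.5)  # high failure probability
p_after  = fragility(0.5, theta=0.8, beta=0.5)  # markedly reduced
```

Comparing the two curves at a given hazard level quantifies the vulnerability reduction the abstract reports for the retrofitted houses.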
Procedia PDF Downloads 96
21 Scoring System for the Prognosis of Sepsis Patients in Intensive Care Units
Authors: Javier E. García-Gallo, Nelson J. Fonseca-Ruiz, John F. Duitama-Munoz
Abstract:
Sepsis is a syndrome involving physiological and biochemical abnormalities induced by severe infection and carries high mortality and morbidity; therefore, the severity of the patient's condition must be assessed quickly. After patient admission to an intensive care unit (ICU), it is necessary to synthesize the large volume of information collected from the patient into a value that represents the severity of their condition. Traditional severity-of-illness scores seek to be applicable to all patient populations and usually assess in-hospital mortality. However, the use of machine learning techniques on data from a population that shares a common characteristic could lead to customized mortality prediction scores with better performance. This study presents the development of a score for one-year mortality prediction of patients admitted to an ICU with a sepsis diagnosis. 5650 ICU admissions extracted from the MIMIC-III database were evaluated, divided into two groups: 70% to develop the score and 30% to validate it. Comorbidities, demographics, and clinical information from the first 24 hours after ICU admission were used to develop the mortality prediction score. LASSO (least absolute shrinkage and selection operator) and SGB (stochastic gradient boosting) variable importance methodologies were used to select the set of variables that make up the score. Each of these variables was dichotomized at a cut-off point that divides the population into two groups with different mean mortalities: if the patient belongs to the group with the higher mortality, a one is assigned to the variable, otherwise a zero. These binary variables are used in a logistic regression (LR) model, and its coefficients are rounded to the nearest integer. The resulting integers are the point values of the score, which is obtained by multiplying them by the corresponding binary variables and summing. 
The one-year mortality probability was estimated using the score as the only variable in an LR model. The predictive power of the score was evaluated using the 1695 admissions of the validation subset, obtaining an area under the receiver operating characteristic curve of 0.7528, which outperforms the results obtained with the Sequential Organ Failure Assessment (SOFA), Oxford Acute Severity of Illness Score (OASIS), and Simplified Acute Physiology Score II (SAPS II) scores on the same validation subset. Observed and predicted mortality rates within the deciles of estimated probability were compared graphically and found to be similar, indicating that the risk estimate obtained with the score is close to the observed mortality. It is also observed that the number of events (deaths) indeed increases from the decile with the lowest probabilities to the decile with the highest probabilities. Sepsis is a syndrome that carries a high mortality, 43.3% for the patients included in this study; therefore, tools that help clinicians to quickly and accurately predict a worse prognosis are needed. This work demonstrates the importance of customizing mortality prediction scores, since the developed score provides better performance than traditional scoring systems. Keywords: intensive care, logistic regression model, mortality prediction, sepsis, severity of illness, stochastic gradient boosting
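The score construction described above (dichotomize, assign rounded integer points, sum, then feed the sum into a one-variable logistic model) can be sketched as follows. All variable names, cut-offs, and coefficients here are hypothetical placeholders; the real ones come from the MIMIC-III fit and are not reproduced in the abstract.

```python
import math

# Hypothetical dichotomization cut-offs and rounded-LR-coefficient points
CUTOFFS = {"age": 65, "lactate": 2.0, "creatinine": 1.5}
POINTS  = {"age": 2,  "lactate": 3,   "creatinine": 1}

def score(patient):
    """Sum integer points for every dichotomized variable whose value
    falls in the higher-mortality group (above its cut-off)."""
    return sum(POINTS[k] for k, cut in CUTOFFS.items() if patient[k] > cut)

def mortality_probability(s, b0=-3.0, b1=0.5):
    """One-year mortality from a logistic model with the score as the
    only covariate (illustrative intercept b0 and slope b1)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * s)))

p = mortality_probability(score({"age": 70, "lactate": 3.1, "creatinine": 1.2}))
```

Rounding the coefficients to integers trades a little discrimination for a score a clinician can tally at the bedside, which is the point of this construction.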
Procedia PDF Downloads 223
20 Peculiarities of Absorption near the Edge of the Fundamental Band of Irradiated InAs-InP Solid Solutions
Authors: Nodar Kekelidze, David Kekelidze, Elza Khutsishvili, Bela Kvirkvelia
Abstract:
Semiconductor devices are irreplaceable elements for investigations in space (artificial Earth satellites, interplanetary spacecraft, probes, rockets), for the investigation of elementary particles in accelerators, for atomic power stations and nuclear reactors, and for robots operating in heavily radiation-contaminated territories (Chernobyl, Fukushima). Unfortunately, the most important parameters of semiconductors worsen dramatically under irradiation, so the creation of radiation-resistant semiconductor materials for opto- and microelectronic devices is a pressing problem, as is the investigation of the complicated processes that develop in irradiated solids. Homogeneous single crystals of InP-InAs solid solutions were grown by the zone-melting method. The dependence of the optical absorption coefficient on photon energy near the fundamental absorption edge was studied; this dependence changes dramatically with irradiation. The experiments were performed on InP, InAs, and InP-InAs solid solutions before and after irradiation with electrons and fast neutrons. The investigations of optical properties were carried out on an infrared spectrophotometer in the temperature range 10 K-300 K and the spectral region 1 µm-50 µm. The fluence of fast neutrons was 2·10¹⁸ neutrons/cm², and electrons of 3 MeV and 50 MeV were applied up to fluences of 6·10¹⁷ electrons/cm². Under irradiation, an exponential dependence of the optical absorption coefficient on photon energy, with an energy deficiency, was revealed. This phenomenon takes place at high and low temperatures, at different impurity concentrations, and in practically all cases of irradiation by electrons of various energies and by fast neutrons. We have developed a common mechanism for this phenomenon in unirradiated materials and implemented quantitative calculations of the distinctive parameter, which are in satisfactory agreement with the experimental data. For the irradiated crystals, the picture becomes more complicated. 
In this work, the corresponding analysis is carried out. It is shown that in the case of InP irradiated with electrons (Ф = 1·10^17 el/cm²), the optical absorption curve shifts to lower energies. This is caused by the appearance of density-of-states tails in the forbidden band due to local fluctuations in the ionized impurity (defect) concentration. The situation is more complicated for InAs and for solid solutions with compositions close to InAs, where, in addition to the phenomenon noted above, the Burstein effect occurs, caused by the increase in electron concentration as a result of irradiation. We have shown that under certain conditions the Burstein effect can prevail, which produces the opposite result: a shift of the optical absorption edge to higher energies. Thus, two oppositely directed processes take place in these solid solutions. By selecting the solid-solution composition and the doping impurity, we obtained an InP-InAs solid solution in which the radiation-induced displacements of the optical absorption curves mutually compensate. This result makes it possible to create radiation-resistant optical materials based on InP-InAs solid solutions. Conclusion: the nature of the optical absorption near the fundamental edge in these semiconductor materials was established, and a radiation-resistant optical material was created.
Keywords: InAs-InP, electron concentration, irradiation, solid solutions
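The compensation described above can be illustrated with a minimal numerical sketch of an exponential (Urbach-type) absorption edge. The parameter values below are hypothetical placeholders, not the paper's fitted data; the point is only that a downward edge shift from density-of-states tails and an upward Burstein shift of equal magnitude leave the observed absorption curve unchanged.

```python
import math

def urbach_alpha(E, alpha0, E0, E_U):
    """Exponential absorption below the fundamental edge:
    alpha(E) = alpha0 * exp((E - E0) / E_U)."""
    return alpha0 * math.exp((E - E0) / E_U)

# Illustrative parameters (hypothetical, InP-like order of magnitude)
alpha0 = 1.0e4   # cm^-1
E0 = 1.35        # eV, edge reference energy
E_U = 0.010      # eV, characteristic (Urbach) energy

# Tail states shift the edge DOWN by dE_tail; the Burstein effect
# shifts it UP by dE_burstein. Equal magnitudes compensate exactly.
dE_tail = 0.030      # eV
dE_burstein = 0.030  # eV

E = 1.30  # probe photon energy, eV
before = urbach_alpha(E, alpha0, E0, E_U)
after = urbach_alpha(E, alpha0, E0 - dE_tail + dE_burstein, E_U)
print(before, after)  # identical: the two opposite shifts cancel
```

With unequal shifts the edge would move toward lower energies (tail states dominate) or higher energies (Burstein effect dominates), which is the trade-off the composition and doping selection exploits.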
Procedia PDF Downloads 201
19 Design and Implementation of a Hardened Cryptographic Coprocessor with 128-bit RISC-V Core
Authors: Yashas Bedre Raghavendra, Pim Vullers
Abstract:
This study presents the design and implementation of an abstract cryptographic coprocessor, leveraging AMBA (Advanced Microcontroller Bus Architecture) protocols - APB (Advanced Peripheral Bus) and AHB (Advanced High-performance Bus) - to enable seamless integration with the main CPU (central processing unit) and to enhance the coprocessor's algorithm flexibility. The primary objective is to create a versatile coprocessor that can execute various cryptographic algorithms, including ECC (elliptic-curve cryptography), RSA (Rivest-Shamir-Adleman), and AES (Advanced Encryption Standard), while providing a robust and secure solution for modern secure embedded systems. To achieve this goal, the coprocessor is equipped with a tightly coupled memory (TCM) for rapid data access during cryptographic operations. The TCM is placed within the coprocessor, ensuring quick retrieval of critical data and optimizing overall performance. Additionally, the program memory is positioned outside the coprocessor, allowing for easy updates and reconfiguration, which enhances adaptability to future algorithm implementations. Direct links are employed instead of DMA (direct memory access) for data transfer, ensuring faster communication and reducing complexity. The AMBA-based communication architecture facilitates seamless interaction between the coprocessor and the main CPU, streamlining data flow and ensuring efficient utilization of system resources. The abstract nature of the coprocessor allows for easy integration of new cryptographic algorithms in the future. As the security landscape continues to evolve, the coprocessor can adapt and incorporate emerging algorithms, making it a future-proof solution for cryptographic processing. Furthermore, this study explores the addition of custom instructions to the RISC-V ISE (instruction set extension) to enhance cryptographic operations.
By incorporating custom instructions specifically tailored for cryptographic algorithms, the coprocessor achieves higher efficiency and reduced cycles per instruction (CPI) compared to traditional instruction sets. The adoption of the RISC-V 128-bit architecture significantly reduces the total number of instructions required for complex cryptographic tasks, leading to faster execution times and improved overall performance. Comparisons are made with 32-bit and 64-bit architectures, highlighting the advantages of the 128-bit architecture in terms of reduced instruction count and CPI. In conclusion, the abstract cryptographic coprocessor presented in this study offers significant advantages in terms of algorithm flexibility, security, and integration with the main CPU. By leveraging AMBA protocols and employing direct links for data transfer, the coprocessor achieves high-performance cryptographic operations without compromising system efficiency. With its TCM and external program memory, the coprocessor is capable of securely executing a wide range of cryptographic algorithms. This versatility and adaptability, coupled with the benefits of custom instructions and the 128-bit architecture, make it an invaluable asset for secure embedded systems, meeting the demands of modern cryptographic applications.
Keywords: abstract cryptographic coprocessor, AMBA protocols, ECC, RSA, AES, tightly coupled memory, secure embedded systems, RISC-V ISE, custom instructions, instruction count, cycles per instruction
Procedia PDF Downloads 70
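The instruction-count advantage claimed for the 128-bit datapath can be sketched with a simple counting model. This is not the authors' design, only an illustration under one assumption: a word-parallel operation over a 128-bit cryptographic state (e.g., an AES AddRoundKey-style XOR) needs one instruction per machine word.

```python
def word_op_count(block_bits: int, word_bits: int) -> int:
    """Number of word-wide instructions needed to apply one
    word-parallel operation (e.g., XOR) to a block of block_bits."""
    return -(-block_bits // word_bits)  # ceiling division

# Instructions to XOR one 128-bit state at each register width
counts = {w: word_op_count(128, w) for w in (32, 64, 128)}
print(counts)  # {32: 4, 64: 2, 128: 1}
```

Under this model, a 128-bit register file covers the whole AES state in a single operation where a 32-bit core needs four, which is the kind of reduction in instruction count (and hence execution time) the abstract attributes to the 128-bit architecture.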