Search results for: artificial intelligence and genetic algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5678

398 Analysis of the Blastocysts Chromosomal Set Obtained after the Use of Donor Oocyte Cytoplasmic Transfer Technology

Authors: Julia Gontar, Natalia Buderatskaya, Igor Ilyin, Olga Parnitskaya, Sergey Lavrynenko, Eduard Kapustin, Ekaterina Ilyina, Yana Lakhno

Abstract:

Introduction: It is well known that oocytes obtained from women of advanced reproductive age have accumulated mitochondrial DNA mutations, which negatively affect the morphology of the developing embryo and may lead to the birth of a child with mitochondrial disease. Special techniques have been developed that allow donor oocyte cytoplasmic transfer while retaining the parents’ biological nuclear DNA. At the same time, it is important to understand whether the procedure affects the future embryonic chromosome set, as the nuclear DNA is the subject of transfer in this new, complex procedure. Material and Methods: From July 2015 to July 2016, the investigation was carried out in the Medical Centre IGR. 34 donor oocytes (group A) were used for the manipulation with the aim of donating cytoplasm: 21 oocytes were used for zygote pronuclear transfer and 13 oocytes for spindle transfer. The mean age of the oocyte donors was 28.4±2.9 years. The procedure was performed using a Nikon Ti Eclipse inverted microscope equipped with Narishige micromanipulators (Japan), a Saturn 3 laser console (UK), and Oosight imaging systems (USA). For preimplantation genetic screening (PGS), blastocyst biopsy was performed, and trophectoderm samples were diagnosed using fluorescent in situ hybridization for chromosomes 9, 13, 15, 16, 17, 18, 21, 22, X, and Y. For comparison of morphological characteristics and euploidy, a group of embryos (group B) comprising 121 blastocysts obtained from 213 oocytes from donor programs of assisted reproductive technologies (ART) was chosen. Group B was not subjected to the donor oocyte cytoplasmic transfer procedure and was studied for the above-mentioned chromosomes. Statistical analysis was carried out using Student's t-test and the χ² test at significance levels p<0.05, p<0.01, and p<0.001. Results: After the donor cytoplasm transfer procedure, the number of embryos developing on day three was 27 (79.4%).
At this stage, group B consisted of 189 (88.7%) developing embryos, and there was no statistically significant difference (SSD) between the two groups (p>0.05). After a comparative analysis of the morphological characteristics of the embryos on the fifth day, we also found no SSD between the studied groups (p>0.05): from the 34 oocytes exposed to manipulation, 14 (41.2%) blastocysts were obtained, while the group B blastocyst yield was 56.8% (n=121) from 213 oocytes. The following results were obtained after performing PGS: in group A, 28.6% (n=4) of blastocysts were euploid for the studied chromosomes, whereas in group B this rate was 40.5% (n=49); 28.6% (n=4) and 21.5% (n=26) mosaic embryos and 42.8% (n=6) and 38.0% (n=46) aneuploid blastocysts, respectively, were identified. None of these parameters showed an SSD (p>0.05). However, attention was drawn to the blastocysts in group A with identified mosaicism, which was chaotic, without any cell having a euploid chromosomal set, in contrast to the mosaic embryos in group B, where chaotic mosaicism was identified in only 2.5% (n=3). Conclusions: According to the obtained results, there is no direct effect of the procedure on the chromosome set of embryos obtained following donor oocyte cytoplasmic transfer. Thus, introduction of this technology will enhance the effectiveness of infertility treatment and help avoid the birth of a child with mitochondrial disease.
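The group comparisons reported here (e.g., euploidy in 4 of 14 group A blastocysts vs 49 of 121 in group B) can be illustrated with a plain-Python Pearson χ² test on a 2×2 table; this is a sketch of the statistic, not the authors' actual analysis code.

```python
# Sketch: Pearson chi-square statistic (1 df) for a 2x2 contingency table,
# as used to compare euploidy rates between group A (4 euploid of 14
# blastocysts) and group B (49 euploid of 121).
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Group A: 4 euploid, 10 not; group B: 49 euploid, 72 not.
chi2 = chi_square_2x2(4, 10, 49, 72)
# The statistic falls well below 3.841, the critical value for p<0.05 with
# 1 df, consistent with the reported absence of an SSD (p>0.05).
print(round(chi2, 3), chi2 < 3.841)
```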

Keywords: donor oocyte cytoplasmic transfer, embryos’ chromosome set, oocyte spindle transfer, pronuclear transfer

Procedia PDF Downloads 328
397 Selection of Suitable Reference Genes for Assessing Endurance Related Traits in a Native Pony Breed of Zanskar at High Altitude

Authors: Prince Vivek, Vijay K. Bharti, Manishi Mukesh, Ankita Sharma, Om Prakash Chaurasia, Bhuvnesh Kumar

Abstract:

High endurance performance in equids requires adaptive changes involving physio-biochemical and molecular responses in an attempt to regain homeostasis. We hypothesized that identification of suitable reference genes might be considered for assessing endurance-related traits in ponies at high altitude and may help select individuals with potent endurance traits. A total of 12 Zanskar pony mares were divided into three groups, group-A (without load), group-B (60 kg backpack load), and group-C (80 kg backpack load), and subjected to a load-carry protocol on a steep 4 km uphill climb over a gravel, uneven, rocky track at an altitude of 3292 m to 3500 m (endpoint). Blood was collected before and immediately after the load carry into sodium heparin anticoagulant, and peripheral blood mononuclear cells were separated for total RNA isolation and subsequent cDNA synthesis. Real-time PCR reactions were carried out to evaluate the mRNA expression profiles of a panel of putative internal control genes (ICGs) belonging to different functional classes, namely glyceraldehyde 3-phosphate dehydrogenase (GAPDH), β₂ microglobulin (β₂M), β-actin (ACTB), ribosomal protein S18 (RS18), hypoxanthine-guanine phosphoribosyltransferase (HPRT), ubiquitin B (UBB), ribosomal protein L32 (RPL32), transferrin receptor protein (TFRC), and succinate dehydrogenase complex subunit A (SDHA), for normalizing the real-time quantitative polymerase chain reaction (qPCR) data of native ponies. Three different algorithms, geNorm, NormFinder, and BestKeeper, were used to evaluate the stability of the reference genes. The results showed that GAPDH was the most stable gene, and the best combination of two genes was TFRC and β₂M. In conclusion, the geometric mean of GAPDH, TFRC, and β₂M might be used for accurate normalization of transcriptional data for assessing endurance-related traits in Zanskar ponies during load carrying.
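The concluding recommendation, normalizing against the geometric mean of GAPDH, TFRC, and β₂M, can be sketched as below; the expression values are hypothetical relative quantities (e.g., 2^-ΔCt), not data from the study.

```python
# Sketch of geometric-mean normalization against the three recommended
# reference genes (GAPDH, TFRC, B2M). All numeric values are hypothetical.
from math import prod

def normalization_factor(ref_expressions):
    """Geometric mean of the reference-gene expression values."""
    return prod(ref_expressions) ** (1.0 / len(ref_expressions))

def normalize(target, refs):
    """Target-gene expression divided by the reference normalization factor."""
    return target / normalization_factor(refs)

refs = {"GAPDH": 1.10, "TFRC": 0.92, "B2M": 1.05}   # hypothetical values
print(normalize(2.0, list(refs.values())))
```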

Keywords: endurance exercise, ubiquitin B (UBB), β₂ microglobulin (β₂M), high altitude, Zanskar ponies, reference gene

Procedia PDF Downloads 131
396 Production of Bacillus Lipopeptides for Biocontrol of Postharvest Crops

Authors: Vivek Rangarajan, Kim G. Klarke

Abstract:

With overpopulation threatening the world’s ability to feed itself, food production and protection have become a major issue, especially in developing countries. Almost one-third of the food produced for human consumption, around 1.3 billion tonnes, is either wasted or lost annually. Postharvest decay in particular constitutes a major cause of crop loss, with about 20% of fruits and vegetables produced lost during postharvest storage, mainly due to fungal disease. Some of the major phytopathogenic fungi affecting postharvest fruit crops in South Africa include Aspergillus, Botrytis, Penicillium, Alternaria and Sclerotinia spp. To date, control of fungal phytopathogens has primarily been dependent on synthetic chemical fungicides, but these chemicals pose a significant threat to the environment, mainly due to their xenobiotic properties and tendency to generate resistance in the phytopathogens. Here, an environmentally benign alternative approach to controlling postharvest fungal phytopathogens in perishable fruit crops is presented, namely the application of a bio-fungicide in the form of lipopeptide molecules. Lipopeptides are biosurfactants produced by Bacillus spp. which have been established as green, nontoxic and biodegradable molecules with antimicrobial properties. However, since Bacillus spp. are capable of producing a large number of lipopeptide homologues with differing efficacies against distinct target organisms, the lipopeptide production conditions and strategy are critical to producing the maximum lipopeptide concentration with homologue ratios to specification for optimum bio-fungicide efficacy. Process conditions, and their impact on Bacillus lipopeptide production, were evaluated in fully instrumented laboratory-scale bioreactors under well-regulated, controlled and defined environments. Factors such as oxygen availability and trace element and nitrate concentrations had profound influences on lipopeptide yield, productivity and selectivity.
Lipopeptide yield and homologue selectivity were enhanced in cultures where the oxygen in the sparge gas was increased from 21 to 30 mole%. The addition of trace elements, particularly Fe²⁺, increased the total concentration of lipopeptides, and a nitrate concentration equivalent to 8 g/L ammonium nitrate resulted in optimum lipopeptide yield and homologue selectivity. Efficacy studies of the culture supernatant containing the crude lipopeptide mixture were conducted using phytopathogens isolated from fruit in the field and identified by genetic sequencing. The supernatant exhibited antifungal activity against all the test isolates, namely Lewia, Botrytis, Penicillium, Alternaria and Sclerotinia spp., even in this crude form. Thus the efficacy of the lipopeptide product in controlling the main diseases has been confirmed, even in its basic crude form. Future studies will be directed towards purification of the lipopeptide product and enhancement of efficacy.

Keywords: antifungal efficacy, biocontrol, lipopeptide production, perishable crops

Procedia PDF Downloads 404
395 Antimicrobial Resistance of Acinetobacter baumannii in Veterinary Settings: A One Health Perspective from Punjab, Pakistan

Authors: Minhas Alam, Muhammad Hidayat Rasool, Mohsin Khurshid, Bilal Aslam

Abstract:

The genus Acinetobacter has emerged as a significant concern in hospital-acquired infections, particularly due to the versatility of Acinetobacter baumannii in causing nosocomial infections. The organism's remarkable metabolic adaptability allows it to thrive in diverse niches, including the environment, animals, and humans. However, the extent of antimicrobial resistance in Acinetobacter species from veterinary settings, especially in developing countries like Pakistan, remains unclear. This study aimed to isolate and characterize Acinetobacter spp. from veterinary settings in Punjab, Pakistan. A total of 2,230 specimens were collected, including 1,960 samples from veterinary settings (nasal and rectal swabs from dairy and beef cattle), 200 from the environment, and 70 from human clinical settings. Isolates were identified using routine microbiological procedures and confirmed by polymerase chain reaction (PCR). Antimicrobial susceptibility was determined by the disc diffusion method, and minimum inhibitory concentration (MIC) was measured by the broth microdilution method. Molecular techniques, such as PCR and DNA sequencing, were used to screen for antimicrobial resistance determinants. Genetic diversity was assessed using standard techniques. The results showed that the overall prevalence of A. baumannii in cattle was 6.63% (65/980). Among cattle, a higher prevalence of A. baumannii was observed in dairy cattle, 7.38% (54/731), than in beef cattle, 4.41% (11/249). Of the 65 A. baumannii isolates, carbapenem resistance was found in 18 strains (27.7%). The prevalence of A. baumannii was higher in nasopharyngeal swabs, 87.7% (57/65), than in rectal swabs, 12.3% (8/65). The class D β-lactamase genes blaOXA-23 and blaOXA-51 were present in all carbapenem-resistant A. baumannii (CRAB) isolates from cattle. Among carbapenem-resistant isolates, 94.4% (17/18) were positive for the class B β-lactamase gene blaIMP, whereas the blaNDM-1 gene was detected in only one isolate of A. 
baumannii. Among the 70 clinical isolates of A. baumannii, 58/70 (82.9%) were positive for the blaOXA-23-like gene, and 87.1% (61/70) were CRAB isolates. The blaOXA-51-like gene was present in all clinical isolates; hence, the co-existence of blaOXA-23 and blaOXA-51 was found in 82.9% of clinical isolates. From the environmental settings, a total of 18 A. baumannii isolates were recovered; among these, 38.9% (7/18) of strains showed carbapenem resistance. All environmental isolates harbored the class D β-lactamase gene blaOXA-51, while blaOXA-23 was detected in 38.9% (7/18) of isolates; hence, the co-existence of blaOXA-23 and blaOXA-51 was found in 38.9% of environmental isolates. MLST results showed ten different sequence types (STs) in clinical isolates, with ST589 being the most common among carbapenem-resistant isolates. In veterinary isolates, ST2 was most common among CRAB isolates from cattle. Immediate control measures are needed to prevent the transmission of CRAB isolates among animals, the environment, and humans. Further studies are warranted to understand the mechanisms of antibiotic resistance spread and to implement effective disease control programs.

Keywords: Acinetobacter baumannii, carbapenemases, drug resistance, MLST

Procedia PDF Downloads 71
394 In vivo Estimation of Mutation Rate of the Aleutian Mink Disease Virus

Authors: P.P. Rupasinghe, A.H. Farid

Abstract:

The Aleutian mink disease virus (AMDV, Carnivore amdoparvovirus 1) causes persistent infection, plasmacytosis, and the formation and deposition of immune complexes in various organs in adult mink, leading to glomerulonephritis, arteritis, and sometimes death. The disease has no cure and no effective vaccine, and identification and culling of mink positive for anti-AMDV antibodies have not been successful in controlling the infection in many countries. The failure to eradicate the virus from infected farms may be caused by keeping false-negative individuals on the farm, or by virus transmission from wild animals or neighboring farms. The identification of sources of infection, which can be performed by comparing viral sequences, is important for the success of viral eradication programs. High mutation rates could cause inaccuracies when viral sequences are used to trace an infection back to its origin. There is no published information on the mutation rate of AMDV either in vivo or in vitro. In vivo estimation is the most accurate method, but it is difficult to perform because of inherent technical complexities, namely infecting live animals, the unknown number of viral generations (i.e., infection cycles), the removal of deleterious mutations over time, and genetic drift. The objective of this study was to determine the mutation rate of AMDV, on which no information was available. A homogenate was prepared from the spleen of one naturally infected American mink (Neovison vison) from Nova Scotia, Canada (parental template). The near full-length genome of this isolate (91.6%, 4,143 bp) was bidirectionally sequenced. A group of black mink was inoculated with this homogenate (descendant mink). Spleen samples were collected from 10 descendant mink at 16 weeks post-inoculation (wpi) and from another 10 mink at 176 wpi, and their near full-length genomes were bidirectionally sequenced.
Sequences of these mink were compared with each other and with the sequence of the parental template. The number of nucleotide substitutions at 176 wpi was 3.1 times greater than that at 16 wpi (113 vs 36), whereas the estimated mutation rate at 176 wpi was 3.1 times lower than that at 16 wpi (9.13×10⁻⁴ vs 2.85×10⁻³ substitutions/site/year), showing a decreasing trend in the mutation rate per unit of time. Although there is no report of an in vivo estimate of the mutation rate of animal DNA viruses using the same method as the current study, these estimates are at the higher end of reported values for DNA viruses determined by various techniques. These high estimates are plausible given the wide range of diversity and pathogenicity of AMDV isolates. The results suggest that increases in the number of nucleotide substitutions over time, and the subsequent divergence, make it difficult to accurately trace AMDV isolates back to their origin when several years have elapsed between the two samplings.
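The per-site, per-year rate calculation implied by the abstract can be sketched as follows. The sequenced length (4,143 bp) and group size (10 mink) come from the abstract; averaging the substitution counts over the 10 mink is an assumption on our part, made because it approximately reproduces the reported 2.85×10⁻³ and 9.13×10⁻⁴ values.

```python
# Sketch: substitutions averaged per animal, divided by the number of
# sequenced sites and the elapsed time in years. The per-animal averaging
# is an assumption; the abstract reports only total substitution counts.
def mutation_rate(total_substitutions, n_animals, sites, weeks):
    subs_per_animal = total_substitutions / n_animals
    years = weeks / 52.0
    return subs_per_animal / sites / years

rate_16wpi = mutation_rate(36, 10, 4143, 16)     # ~2.8e-3 subs/site/year
rate_176wpi = mutation_rate(113, 10, 4143, 176)  # ~8e-4 subs/site/year
print(f"{rate_16wpi:.2e} {rate_176wpi:.2e}")
```

Note how the rate per unit time falls even though the substitution count rises, matching the decreasing trend the authors describe.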

Keywords: Aleutian mink disease virus, American mink, mutation rate, nucleotide substitution

Procedia PDF Downloads 125
393 Production Optimization under Geological Uncertainty Using Distance-Based Clustering

Authors: Byeongcheol Kang, Junyi Kim, Hyungsik Jung, Hyungjun Yang, Jaewoo An, Jonggeun Choe

Abstract:

It is important to characterize reservoir properties for better production management. Due to limited information, there are geological uncertainties in very heterogeneous or channel reservoirs. One solution is to generate multiple equiprobable realizations using geostatistical methods. However, some models have wrong properties, which need to be excluded for simulation efficiency and reliability. We propose a novel model selection scheme based on distance-based clustering for reliable application of a production optimization algorithm. Distance is defined as a degree of dissimilarity between the data. We calculate the Hausdorff distance to classify the models based on their similarity, as it is useful for shape matching of the reservoir models. We use multi-dimensional scaling (MDS) to describe the models in two-dimensional space and group them by K-means clustering. Rather than simulating all models, we choose one representative model from each cluster and find the best model, which has production rates similar to the true values. From this process, we can select good reservoir models near the best model with high confidence. We generate 100 channel reservoir models using single normal equation simulation (SNESIM). Since oil and gas preferentially flow through the sand facies, it is critical to characterize the pattern and connectivity of the channels in the reservoir. After calculating Hausdorff distances and projecting the models by MDS, we can see that the models cluster according to their channel patterns. These channel distributions affect the operation controls of each production well, so the model selection scheme improves the management optimization process. We use particle swarm optimization (PSO), a useful global search algorithm, for our production optimization. PSO is good at finding the global optimum of an objective function, but it takes much time due to its use of many particles and iterations.
In addition, if we use multiple reservoir models, the simulation time for PSO soars. By using the proposed method, we can select good and reliable models that already match the production data. Considering the geological uncertainty of the reservoir, we can obtain well-optimized production controls for maximum net present value. The proposed method offers a novel solution for selecting good cases among numerous probable realizations. The model selection scheme can be applied not only to production optimization but also to history matching and other ensemble-based methods for efficient simulations.
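The pairwise model comparison at the heart of the scheme can be sketched with a plain-Python symmetric Hausdorff distance; each model is reduced here to a set of (i, j) cells assigned to the channel (sand) facies. The toy grids are illustrative, not study models.

```python
# Sketch: symmetric Hausdorff distance between two reservoir models, each
# represented as a set of channel-facies cell coordinates.
def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (Euclidean)."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def directed(src, dst):
        # Largest distance from any point in src to its nearest point in dst.
        return max(min(d(p, q) for q in dst) for p in src)

    return max(directed(a, b), directed(b, a))

model_1 = {(0, 0), (1, 1), (2, 2)}   # diagonal channel
model_2 = {(0, 1), (1, 2), (2, 3)}   # same channel shifted by one cell
print(hausdorff(model_1, model_2))   # every nearest neighbour is 1.0 away
```

In the full workflow, the resulting distance matrix would feed MDS and K-means clustering, for which library implementations would typically be used.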

Keywords: distance-based clustering, geological uncertainty, particle swarm optimization (PSO), production optimization

Procedia PDF Downloads 143
392 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration

Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu

Abstract:

Petroleum refineries are highly complex process industries with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages until the final product is obtained. To meet the desired product specifications, process parameters are strictly followed. To ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut points of distillates in the crude distillation unit are crucial for the efficiency of downstream processes. To maximize process efficiency, the determination of distillate quality should be as fast, reliable, and cost-effective as possible. In this sense, an alternative study was carried out on the crude oil distillation unit that serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates: Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are named after separation by the number of carbons they contain. LSRN consists of five- to six-carbon hydrocarbons, HSRN of six- to ten-carbon hydrocarbons, and kerosene of sixteen- to twenty-two-carbon hydrocarbons. Physical properties of the three crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using near-infrared spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow-through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each petroleum product over almost four years.
Several different crude oil grades were processed during the sample collection period. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to the FT-NIR spectra of the samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS, and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to the preprocessed FT-NIR spectra. The predictive performance of each multivariate calibration and preprocessing technique was compared, and the best models were chosen according to the reproducibility of the ASTM reference methods. This work demonstrates that the developed models can be used for routine analysis instead of conventional analytical methods, with over 90% accuracy.
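The Savitzky-Golay preprocessing step can be sketched with the classic 5-point quadratic kernel (-3, 12, 17, 12, -3)/35; the window length and polynomial order used in the study are not stated, and the spectrum below is synthetic, not refinery data.

```python
# Sketch: 5-point quadratic Savitzky-Golay smoothing, as applied to FT-NIR
# spectra before multivariate calibration. Endpoints are left unsmoothed
# for simplicity.
def savgol_5pt(y):
    """Smooth interior points with the 5-point quadratic SG kernel."""
    k = (-3, 12, 17, 12, -3)
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(c * y[i + j - 2] for j, c in enumerate(k)) / 35.0
    return out

spectrum = [1.0, 1.2, 3.0, 1.2, 1.0, 1.1]   # synthetic spike at index 2
print(savgol_5pt(spectrum))                  # spike is damped, baseline kept
```

A useful property to note: the kernel weights sum to 35, so a flat baseline passes through unchanged while narrow noise spikes are attenuated.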

Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery

Procedia PDF Downloads 129
391 Switchable Lipids: From a Molecular Switch to a pH-Sensitive System for the Drug and Gene Delivery

Authors: Jeanne Leblond, Warren Viricel, Amira Mbarek

Abstract:

Although several products have reached the market, gene therapeutics are still in their early stages and require optimization. Their limited efficiency can be improved by the use of carefully engineered vectors able to carry the genetic material through each of the biological barriers they need to cross. In particular, getting inside the cell is a major challenge, because these hydrophilic nucleic acids have to cross the lipid-rich plasma and/or endosomal membrane before being degraded in lysosomes. It takes less than one hour for newly endocytosed liposomes to reach highly acidic lysosomes, meaning that degradation of the carried gene occurs rapidly, thus limiting transfection efficiency. We propose to use a new pH-sensitive lipid able to change its conformation upon protonation at endosomal pH values, leading to disruption of the lipid bilayer and thus to fast release of the nucleic acids into the cytosol. It is expected that this new pH-sensitive mechanism promotes endosomal escape of the gene and thereby improves its transfection efficiency. The main challenge of this work was to design a preparation with fast-responding lipid-bilayer destabilization at endosomal pH 5 that remains stable at blood pH and during storage. A series of pH-sensitive lipids able to perform a conformational switch upon acidification were designed and synthesized. Liposomes containing these switchable lipids, as well as co-lipids, were prepared and characterized. The liposomes were stable at 4°C and pH 7.4 for several months. Incubation with siRNA led to full entrapment of the nucleic acids as soon as the positive/negative charge ratio was greater than 2. The best liposomal formulation demonstrated silencing down to 10% residual expression on HeLa cells, very similar to a commercial agent, with lower toxicity than the commercial agent.
Using flow cytometry and microscopy assays, we demonstrated that the drop in pH was required for transfection, since bafilomycin blocked the transfection efficiency. Additional evidence was provided by the synthesis of a negative-control lipid, which was unable to switch its conformation and consequently exhibited no transfection ability. Mechanistic studies revealed that uptake was mediated through endocytosis, by clathrin and caveolae pathways, as reported for previous lipid nanoparticle systems. This potent system was used for the treatment of hypercholesterolemia. The switchable lipids were able to knock down PCSK9 expression in human hepatocytes (Huh-7). Their efficiency is currently being evaluated in vivo in a PCSK9 KO mouse model. In summary, we designed and optimized a new cationic pH-sensitive lipid for gene delivery. Its transfection efficiency is similar to that of the best available commercial agent, without the usually associated toxicity. These promising results have led to its use for the treatment of hypercholesterolemia in a mouse model. Anticancer applications and chronic pulmonary disease are also currently under investigation.

Keywords: liposomes, siRNA, pH-sensitive, molecular switch

Procedia PDF Downloads 204
390 Occupational Safety and Health in the Wake of Drones

Authors: Hoda Rahmani, Gary Weckman

Abstract:

The body of research examining the integration of drones into various industries is expanding rapidly. Despite progress made in addressing the cybersecurity concerns of commercial drones, knowledge deficits remain in determining the potential occupational hazards and risks of drone use to employees’ well-being and health in the workplace. This creates difficulty in identifying key approaches to risk mitigation strategies and thus reflects the need to raise awareness among employers, safety professionals, and policymakers about workplace drone-related accidents. The purpose of this study is to investigate the prevalence of and possible risk factors for drone-related mishaps by comparing the application of drones in the construction and manufacturing industries. The chief reason for considering these specific sectors is to ascertain whether there is any significant difference between indoor and outdoor flights, since most construction sites use drones outdoors whereas manufacturing facilities mostly use them indoors. The current research therefore seeks to examine the causes and patterns of workplace drone-related mishaps and suggest possible ergonomic interventions through data collection. Potential ergonomic practices to mitigate hazards associated with flying drones could include providing operators with professional training, conducting risk analyses, and promoting the use of personal protective equipment. For data analysis, two data mining techniques, the random forest and association rule mining algorithms, will be applied to find meaningful associations and trends in the data, as well as influential features that affect the occurrence of drone-related accidents in the construction and manufacturing sectors. In addition, Spearman’s correlation and chi-square tests will be used to measure possible correlations between different variables.
Indeed, by recognizing risks and hazards, occupational safety stakeholders will be able to pursue data-driven and evidence-based policy change with the aim of reducing drone mishaps, increasing productivity, creating a safer work environment, and extending human performance in safe and fulfilling ways. This research study was supported by the National Institute for Occupational Safety and Health through the Pilot Research Project Training Program of the University of Cincinnati Education and Research Center Grant #T42OH008432.
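The association-rule metrics the planned analysis would mine (support and confidence over incident records) can be sketched in a few lines; the records and field names below are hypothetical placeholders, not data from the study.

```python
# Sketch: support and confidence for association rules over incident
# records, each record being a set of attributes. All records here are
# hypothetical examples.
def support(records, items):
    """Fraction of records containing every item in `items`."""
    return sum(1 for r in records if items <= r) / len(records)

def confidence(records, antecedent, consequent):
    """P(consequent | antecedent) estimated from the records."""
    return support(records, antecedent | consequent) / support(records, antecedent)

incidents = [
    {"outdoor", "construction", "injury"},
    {"outdoor", "construction", "near-miss"},
    {"indoor", "manufacturing", "injury"},
    {"outdoor", "construction", "injury"},
]
print(confidence(incidents, {"construction"}, {"outdoor"}))
print(confidence(incidents, {"construction"}, {"injury"}))
```

A full analysis would enumerate candidate rules (e.g., via Apriori) and keep those above chosen support and confidence thresholds.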

Keywords: commercial drones, ergonomic interventions, occupational safety, pattern recognition

Procedia PDF Downloads 209
389 Evaluation of the Irritation Potential of Three Topical Formulations of Minoxidil 5% + Finasteride 0.1% Using Patch Test

Authors: Joshi Rajiv, Shah Priyank, Thavkar Amit, Rohira Poonam, Mehta Suyog

Abstract:

A topical formulation containing minoxidil and finasteride helps hair growth in the treatment of male androgenetic alopecia. The objective of this study was to compare the irritation potential of three conventional formulations of minoxidil 5% + finasteride 0.1% topical solution in a human patch test. The study was a single-centre, double-blind, non-randomized controlled study in 53 healthy adult Indian subjects. An occlusive patch test for 24 hours was performed with three formulations of minoxidil 5% + finasteride 0.1% topical solution. Products tested included aqueous-based minoxidil 5% + finasteride 0.1% (AnasureTM-F, Sun Pharma, India – Brand A), lipid-based minoxidil 5% + finasteride 0.1% (Brand B), and aqueous-based minoxidil 5% + finasteride 0.1% (Brand C). Isotonic saline 0.9% and 1% w/w sodium lauryl sulphate were included as negative and positive controls, respectively. Patches were applied and removed after 24 hours. The skin reaction was assessed and clinically scored 24 hours after removal of the patches under a constant artificial daylight source using the Draize scale (0-4 point scale for erythema/dryness/wrinkles and for oedema). A follow-up was scheduled after one week to confirm recovery from any reaction. A combined mean score up to 2.0/8.0 indicates a product is “non-irritant”, a score between 2.0/8.0 and 4.0/8.0 indicates “mildly irritant”, and a score above 4.0/8.0 indicates “irritant”. The patch test procedure followed the principles outlined by the Bureau of Indian Standards (BIS) (IS 4011:2018; Methods of Test for Safety Evaluation of Cosmetics, 3rd revision). Fifty-three subjects with a mean age of 31.9 years (25 males and 28 females) participated in the study. The combined mean scores ± standard deviation were: 0.06 ± 0.23 (Brand A), 0.81 ± 0.59 (Brand B), 0.38 ± 0.49 (Brand C), 2.92 ± 0.47 (positive control), and 0.0 ± 0.0 (negative control).
The combined mean score of Brand A (Sun Pharma product) was significantly lower than that of Brand B (p=0.001) and that of Brand C (p=0.001). The mean erythema scores ± standard deviation were: 0.06 ± 0.23 (Brand A), 0.81 ± 0.59 (Brand B), 0.38 ± 0.49 (Brand C), 2.09 ± 0.4 (positive control), and 0.0 ± 0.0 (negative control). The mean erythema score of Brand A was significantly lower than that of Brand B (p=0.001) and that of Brand C (p=0.001). Any reaction observed at 24 hours after patch removal subsided within a week. All three topical formulations of minoxidil 5% + finasteride 0.1% were non-irritant. Brand A of minoxidil 5% + finasteride 0.1% (Sun Pharma) was found to be less irritant than Brand B and Brand C based on the combined mean score and mean erythema score in the human patch test as per BIS IS 4011:2018.
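The irritancy classification applied to the combined mean Draize score (out of 8.0) reduces to simple thresholds, sketched below with the scores reported in the abstract.

```python
# Sketch: classification of the combined mean Draize score per the
# thresholds described in the abstract (up to 2.0/8.0 non-irritant,
# 2.0-4.0 mildly irritant, above 4.0 irritant).
def classify(combined_mean_score):
    if combined_mean_score <= 2.0:
        return "non-irritant"
    if combined_mean_score <= 4.0:
        return "mildly irritant"
    return "irritant"

# Combined mean scores reported in the abstract:
for brand, score in [("Brand A", 0.06), ("Brand B", 0.81),
                     ("Brand C", 0.38), ("positive control", 2.92)]:
    print(brand, classify(score))
```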

Keywords: erythema, finasteride, irritation, minoxidil, patch test

Procedia PDF Downloads 84
388 Evaluation of the Irritation Potential of Three Topical Formulations of Minoxidil 2% Using Patch Test

Authors: Sule Pallavi, Shah Priyank, Thavkar Amit, Rohira Poonam, Mehta Suyog

Abstract:

Introduction: Minoxidil has long been used topically to assist hair growth in the management of male androgenetic alopecia. The aim of this study was a comparative assessment of the irritation potential of three commercial formulations of minoxidil 2% topical solution in a human patch test. Methodology: The study was a non-randomized, double-blind, controlled, single-center study of 56 healthy adult Indian subjects. A 24-hour occlusive patch test was conducted with three formulations of minoxidil 2% topical solution. Products tested were aqueous-based minoxidil 2% (AnasureTM 2%, Sun Pharma, India – Brand A), alcohol-based minoxidil 2% (Brand B), and aqueous-based minoxidil 2% (Brand C). Isotonic saline 0.9% and 1% w/w sodium lauryl sulphate were included as negative and positive controls, respectively. Patches were applied on the back and removed after 24 hours. The Draize scale (0-4 point scale for erythema/dryness/wrinkles and for oedema) was used to evaluate and clinically score the skin reaction under constant artificial daylight 24 hours after removal of the patches. The patch test was based on the principles outlined by the Bureau of Indian Standards (BIS) (IS 4011:2018; Methods of Test for Safety Evaluation of Cosmetics, 3rd revision). A combined mean score up to 2.0/8.0 indicates that a product is “non-irritant,” a score between 2.0/8.0 and 4.0/8.0 indicates “mildly irritant”, and a score above 4.0/8.0 indicates “irritant”. If any skin reaction was observed, a follow-up was planned after one week to confirm recovery. Results: The 56 subjects who participated in the study had a mean age of 28.7 years (28 males and 28 females). The combined mean scores ± standard deviation were: 0.09 ± 0.29 (Brand A), 0.29 ± 0.53 (Brand B), 0.30 ± 0.46 (Brand C), 3.25 ± 0.77 (positive control), and 0.02 ± 0.13 (negative control).
The combined mean score of Brand A (Sun Pharma) was significantly lower than that of Brand B (p=0.016) and that of Brand C (p=0.004). The mean erythema score ± standard deviation was: 0.09 ± 0.29 (Brand A), 0.27 ± 0.49 (Brand B), 0.30 ± 0.46 (Brand C), 2.5 ± 0.66 (positive control) and 0.02 ± 0.13 (negative control). The mean erythema score of Brand A (Sun Pharma) was significantly lower than that of Brand B (p=0.019) and that of Brand C (p=0.004). Reactions observed 24 hours after patch removal subsided within a week. Conclusion: Based on the human patch test as per BIS IS 4011:2018, all three topical formulations of minoxidil 2% were found to be non-irritant. Brand A of 2% minoxidil (Sun Pharma) was found to be less irritant than Brand B and Brand C based on the combined mean score and mean erythema score.
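The IS 4011:2018 scoring bands described in the abstract can be sketched as a small classifier; the inclusive treatment of the 2.0 and 4.0 boundaries is an assumption based on the abstract's wording ("up to 2.0/8.0"), not the standard itself:

```python
def irritancy_class(combined_mean_score):
    """Classify a 24-h patch-test combined mean score (0-8 Draize scale)
    into the IS 4011:2018 bands quoted in the abstract.
    Boundary inclusivity is an assumption."""
    if combined_mean_score <= 2.0:
        return "non-irritant"
    if combined_mean_score <= 4.0:
        return "mildly irritant"
    return "irritant"
```

With the reported combined mean scores, all three minoxidil brands (0.09, 0.29, 0.30) fall in the "non-irritant" band.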

Keywords: erythema, irritation, minoxidil, patch test

Procedia PDF Downloads 82
387 A Cooperative, Autonomous, and Continuously Operating Drone System Offered to Railway and Bridge Industry: The Business Model Behind

Authors: Paolo Guzzini, Emad Samuel M. Ebeid

Abstract:

Bridges and railways are critical infrastructures. Ensuring safety for transport using such assets is a primary goal, as it directly impacts people's lives. However, improving safety may require increased investment in O&M, so optimizing resource usage for asset maintenance becomes crucial. Drones4Safety (D4S), a European project funded under the H2020 Research and Innovation Action (RIA) program, aims to increase the safety of European civil transport by building a system that relies on 3 main pillars: • Drones operating autonomously in swarm mode; • Drones able to recharge themselves using inductive phenomena produced by transmission lines in the vicinity of the bridge and railway assets to be inspected; • Acquired data analyzed with AI-empowered algorithms for defect detection. This paper describes the business model behind this disruptive project. The business model is structured in 2 parts: • The first part focuses on the design of the business model Canvas, to explain the value provided by the Drones4Safety project; • The second part defines a detailed financial analysis, with the target of calculating the IRR (Internal Rate of Return) and the NPV (Net Present Value) of the investment over a 7-year plan (2 years to run the project + 5 years post-implementation).
For the financial analysis, 2 different points of view are assumed: • The point of view of the Drones4Safety company in charge of designing, producing, and selling the new system; • The point of view of the utility company that will adopt the new system in its O&M practices. Assuming the point of view of the Drones4Safety company, 3 scenarios were considered: • Selling the drones > revenues produced by drone sales; • Renting the drones > revenues produced by drone rental (with a time-based model); • Selling the data acquisition service > revenues produced by sales of the pictures acquired by drones. Assuming the point of view of a utility adopting the D4S system, a 4th scenario was analyzed, taking into account the decremental costs related to the change of operation and maintenance practices. The paper will show, for both companies, which key parameters most affect the business model and which scenarios are sustainable.
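The NPV/IRR calculation over the 7-year plan (2 project years + 5 post-implementation years) can be sketched as follows. The cash flows and discount rate below are illustrative assumptions, not the project's figures:

```python
def npv(rate, cashflows):
    """Net Present Value of cashflows[t] received at end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-12):
    """Internal Rate of Return found by bisection on NPV(rate) = 0,
    valid for an investment-then-income cash-flow pattern."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid          # NPV still positive: rate is below the IRR
        else:
            hi = mid
    return (lo + hi) / 2

# Illustrative plan: years 0-1 project investment, years 2-6 net revenues.
flows = [-500_000, -300_000, 150_000, 250_000, 300_000, 350_000, 400_000]
project_npv = npv(0.08, flows)   # NPV at an assumed 8% discount rate
project_irr = irr(flows)
```

An investment is typically considered sustainable when the NPV at the chosen discount rate is positive, i.e. when the IRR exceeds that rate.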

Keywords: a swarm of drones, AI, bridges, railways, drones4safety company, utility companies

Procedia PDF Downloads 141
386 Corrosion Study of Magnetically Driven Components in Spinal Implants by Immersion Testing in Simulated Body Fluids

Authors: Benjawan Saengwichian, Alasdair E. Charles, Philip J. Hyde

Abstract:

Magnetically controlled growing rods (MCGRs) have been used to stabilise and correct spinal curvature in children to support non-invasive scoliosis adjustment. Although the encapsulated driving components are intended to be isolated from body fluid contact, in vivo corrosion was observed on these components due to sealing mechanism damage. Consequently, a corrosion circuit is created with the body fluids, resulting in malfunction of the lengthening mechanism. In particular, the chloride ions in blood plasma or cerebrospinal fluid (CSF) may corrode the MCGR alloys, possibly resulting in metal ion release in long-term use. However, no data are available on the corrosion resistance of spinal implant alloys in CSF. In this study, an in vitro immersion configuration was designed to simulate in vivo corrosion of 440C SS-Ti6Al4V couples. The 440C stainless steel (SS) was heat-treated to investigate the effect of tempering temperature on intergranular corrosion (IGC), while crevice and galvanic corrosion were studied by limiting the clearance of the dissimilar couples. Tests were carried out in a neutral artificial cerebrospinal fluid (ACSF) and phosphate-buffered saline (PBS) under aeration and deaeration for 2 months. The composition of the passive films and metal ion release were analysed. The effects of galvanic coupling, pH, dissolved oxygen and anion species on corrosion rates and corrosion mechanisms are discussed based on quantitative and qualitative measurements. The results suggest that ACSF is more aggressive than PBS due to the combination of aggressive chloride and sulphate anions, while phosphate in PBS acts as an inhibitor that delays corrosion. The presence of Vivianite on the SS surface in PBS lowered the corrosion rate (CR) by more than 5 times under aeration and nearly 2 times under deaeration, compared with ACSF. The CR of 440C depends on passive film properties, which vary with tempering temperature and anion species.
Although the CR of Ti6Al4V is insignificant, it tends to release more Ti ions in deaerated ACSF than under aeration, by about 6 µg/L. The crevice-like design appears to have a greater effect on macroscopic corrosion than coupling the dissimilar metals, whereas IGC is predominantly observed on the sensitized microstructure.

Keywords: cerebrospinal fluid, crevice corrosion, intergranular corrosion, magnetically controlled growing rods

Procedia PDF Downloads 129
385 Geographic Variation in the Baseline Susceptibility of Helicoverpa armigera (Hubner) (Noctuidae: Lepidoptera) Field Populations to Bacillus thuringiensis Cry Toxins for Resistance Monitoring

Authors: Muhammad Arshad, M. Sufian, Muhammad D. Gogi, A. Aslam

Abstract:

Transgenic cotton expressing Bacillus thuringiensis (Bt) toxins provides effective control of Helicoverpa armigera, the most damaging pest of the cotton crop. However, Bt cotton may not be the optimal solution owing to the selection pressure of Cry toxins. As Bt cotton expresses the insecticidal proteins throughout the growing season, there is a risk of resistance development in the target pests. Regular monitoring and surveillance of the target pest's baseline susceptibility to Bt Cry toxins is crucial for early detection of any resistance development. The present study was conducted to monitor changes in the baseline susceptibility of field populations of H. armigera to the Bt Cry1Ac toxin. Field-collected larval populations were maintained in the laboratory on an artificial diet, and F1 generation larvae were used for diet-incorporated diagnostic studies. LC₅₀ and MIC₅₀ values were calculated, and the resistance level of each population was expressed as a ratio over the susceptible population. The monitoring results indicated a significant difference in the susceptibility (LC₅₀) of H. armigera among first, second, third and fourth instar larval populations sampled from different cotton growing areas over the study period 2016-17. Susceptibility varied with insect age and decreased as the larvae aged. Overall, the average resistance ratio (RR) of all field-collected populations (FSD, SWL, MLT, BWP and DGK) exposed to the Bt toxin Cry1Ac ranged from 3.381-fold to 7.381-fold for 1st instar, 2.370-fold to 3.739-fold for 2nd instar, 1.115-fold to 1.762-fold for 3rd instar and 1.141-fold to 2.504-fold for 4th instar larvae, with the maximum RR in the MLT population and the minimum in the FSD and SWL populations. The moult inhibitory concentration results for H. armigera larvae (1st-4th instars) exposed to different concentrations of the Bt Cry1Ac toxin indicated that, among all field populations, the Multan (MLT) and Bahawalpur (BWP) populations showed higher MIC₅₀ values than the Faisalabad (FSD) and Sahiwal (SWL) populations, whereas the DG Khan (DGK) population showed intermediate moult inhibitory concentrations. Establishing baseline susceptibility data for Bt Cry toxins before the widespread commercial release of transgenic Bt cotton cultivars in Pakistan is important for developing more effective resistance monitoring programs and identifying resistant H. armigera populations.

Keywords: Bt cotton, baseline, Cry1Ac toxins, H. armigera

Procedia PDF Downloads 141
384 Detecting Elderly Abuse in US Nursing Homes Using Machine Learning and Text Analytics

Authors: Minh Huynh, Aaron Heuser, Luke Patterson, Chris Zhang, Mason Miller, Daniel Wang, Sandeep Shetty, Mike Trinh, Abigail Miller, Adaeze Enekwechi, Tenille Daniels, Lu Huynh

Abstract:

Machine learning and text analytics have been used to analyze child abuse, cyberbullying, domestic abuse and domestic violence, and hate speech. However, to the authors’ knowledge, no research to date has used these methods to study elder abuse in nursing homes or skilled nursing facilities from field inspection reports. We used machine learning and text analytics methods to analyze 356,000 inspection reports extracted from CMS Form-2567 field inspections of US nursing homes and skilled nursing facilities between 2016 and 2021. Our algorithm detected occurrences of various types of abuse, including physical abuse, psychological abuse, verbal abuse, sexual abuse, and passive and active neglect. For example, to detect physical abuse, our algorithms search for combinations of phrases and words suggesting willful infliction of damage (hitting, pinching or burning, tethering, tying) or consciously ignoring an emergency. To detect occurrences of elder neglect, our algorithm looks for combinations of phrases and words suggesting both passive neglect (neglecting vital needs, allowing malnutrition and dehydration, allowing decubiti, deprivation of information, limitation of freedom, negligence toward safety precautions) and active neglect (intimidation and name-calling, tying the victim up to prevent falls without consent, consciously ignoring an emergency, not calling a physician in spite of indication, stopping important treatments, failure to provide essential care, deprivation of nourishment, leaving a person alone for an inappropriate amount of time, excessive demands in a situation of care). We further compared the prevalence of abuse before and after Covid-19 related restrictions on nursing home visits. We also identified facilities with the highest numbers of abuse cases and no abuse-free facilities within a 25-mile radius as the most likely candidates for additional inspections.
We also built an interactive display to visualize the location of these facilities.
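The phrase-matching idea described above can be sketched as a small pattern scanner over report text. The pattern lists below are illustrative examples drawn from the categories named in the abstract, not the authors' actual lexicons:

```python
import re

# Illustrative per-category patterns; a real lexicon would be far larger.
ABUSE_PATTERNS = {
    "physical_abuse": [r"\bhit(ting)?\b", r"\bpinch(ed|ing)?\b",
                       r"\bburn(ed|ing)?\b", r"\bt(ether|y|i)(ed|ing)\b"],
    "passive_neglect": [r"\bmalnutrition\b", r"\bdehydrat(ed|ion)\b",
                        r"\bdecubit(us|i)\b"],
    "active_neglect": [r"\bname.calling\b",
                       r"\bignor(ed|ing) an emergency\b"],
}

def detect_abuse(report_text):
    """Return the set of abuse categories whose patterns match the text."""
    text = report_text.lower()
    return {cat for cat, pats in ABUSE_PATTERNS.items()
            if any(re.search(p, text) for p in pats)}

finding = detect_abuse("Resident showed signs of dehydration and decubiti; "
                       "staff ignored an emergency call.")
```

A production system would combine such lexical matching with the trained machine-learning models the abstract mentions, rather than rely on patterns alone.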

Keywords: machine learning, text analytics, elder abuse, elder neglect, nursing home abuse

Procedia PDF Downloads 146
383 Algorithm for Modelling Land Surface Temperature and Land Cover Classification and Their Interaction

Authors: Jigg Pelayo, Ricardo Villar, Einstine Opiso

Abstract:

The rampant and unintended spread of urban areas has increased artificial features in the land cover of the countryside, bringing forth the urban heat island (UHI) effect. This has paved the way for a wide range of negative influences on human health and the environment, commonly related to air pollution, drought, higher energy demand, and water shortage. Land cover type also plays a relevant role in understanding the interaction between ground surfaces and local temperature. At the moment, the depiction of land surface temperature (LST) at the city/municipality scale, particularly in certain areas of Misamis Oriental, Philippines, is inadequate to support efficient mitigation of and adaptation to the surface urban heat island (SUHI). Thus, this study applies Landsat 8 satellite data and low-density Light Detection and Ranging (LiDAR) products to map out a quality automated LST model and crop-level land cover classification at a local scale, through a theoretical and algorithm-based approach utilizing the principles of data analysis applied to a multi-dimensional image object model. The paper also aims to explore the relationship between the derived LST and the land cover classification. The results of the presented model showed the ability of comprehensive data analysis and GIS functionalities, integrated with an object-based image analysis (OBIA) approach, to automate complex map production processes with considerable efficiency and high accuracy. The findings may potentially lead to expanded investigation of the temporal dynamics of land surface UHI.
It is worthwhile to note that the environmental significance of these interactions, examined through the combined application of remote sensing, geographic information tools, mathematical morphology and data analysis, can provide microclimate perception, awareness and improved decision-making for land use planning and characterization at local and neighborhood scales. As a result, it can aid in problem identification and support mitigation and adaptation more efficiently.

Keywords: LiDAR, OBIA, remote sensing, local scale

Procedia PDF Downloads 282
382 An Overview of Bioinformatics Methods to Detect Novel Riboswitches Highlighting the Importance of Structure Consideration

Authors: Danny Barash

Abstract:

Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems on the evolutionary timescale. One of the biggest challenges in riboswitch research is that many are found in prokaryotes, but only a small percentage of known riboswitches have been found in certain eukaryotic organisms. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods that include some slight structural considerations. These pattern-matching methods were the first to be applied for riboswitch detection, and they can be programmed very efficiently using a data structure called affix arrays, making them suitable for genome-wide searches of riboswitch patterns. However, they are limited in their ability to detect harder-to-find riboswitches that deviate from the known patterns. Several methods have been developed since then to tackle this problem. The method most commonly used by practitioners is Infernal, which relies on Hidden Markov Models (HMMs) and Covariance Models (CMs). Profile Hidden Markov Models were also implemented in the pHMM Riboswitch Scanner web application, independently of Infernal. Other computational approaches that have been developed include RMDetect, which uses 3D structural modules, and RNAbor, which utilizes the Boltzmann probability of structural neighbors. We have tried to incorporate more sophisticated secondary structure considerations based on RNA folding prediction using several strategies. The first idea was to utilize window-based methods in conjunction with folding predictions by energy minimization. The moving-window approach is heavily geared towards secondary structure consideration, with the sequence treated as a constraint.
However, the method cannot be used genome-wide due to its high cost: each folding prediction by energy minimization in the moving window is computationally expensive, so only the vicinity of genes of interest can be scanned. The second idea was to remedy the inefficiency of the previous approach by constructing a pipeline that consists of inverse RNA folding, which considers RNA secondary structure, followed by a BLAST search that is sequence-based and highly efficient. This approach, which relies on inverse RNA folding in general and our own in-house fragment-based inverse RNA folding program RNAfbinv in particular, is capable of finding attractive candidates that are missed by Infernal and other standard methods used for riboswitch detection. We demonstrate attractive candidates found by both the moving-window approach and the inverse RNA folding approach performed together with BLAST. We conclude that structure-based methods like the two strategies outlined above hold considerable promise for detecting riboswitches and other conserved RNAs of functional importance in a variety of organisms.
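The moving-window scan described above can be sketched as follows. The scoring function here is a trivial stand-in (GC content); in the actual approach each window would be scored by an RNA-folding energy-minimization routine, which is the expensive step the abstract notes:

```python
def windows(seq, size, step):
    """Yield (offset, subsequence) pairs for a sliding window."""
    for i in range(0, len(seq) - size + 1, step):
        yield i, seq[i:i + size]

def gc_content(s):
    # Placeholder score; a real scan would call a folding predictor here.
    return (s.count("G") + s.count("C")) / len(s)

def scan(seq, size=40, step=10, threshold=0.55, score=gc_content):
    """Return (offset, window) pairs whose score exceeds the threshold."""
    return [(i, w) for i, w in windows(seq, size, step) if score(w) > threshold]
```

Because `score` is called once per window, the scan's cost is linear in the number of windows, which is why an expensive folding predictor restricts the method to the vicinity of genes of interest.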

Keywords: riboswitches, RNA folding prediction, RNA structure, structure-based methods

Procedia PDF Downloads 234
381 Endodontic Pretreatments, Clinical Opportunities and Challenges

Authors: Ilma Robo, Manola Kelmendi, Saimir Heta, Megi Tafa, Vera Ostreni

Abstract:

Preservation of a natural tooth, even one endodontically treated, is more indicated than its replacement with an artificial tooth placed prosthetically or by implant treatment. Technology and endodontic treatment procedures have evolved significantly. Significant developments have also been made in both dental prosthetics and implant treatments, and in both specialties it is emphasized that neither a prosthetic tooth nor an implant-supported tooth can replace the natural tooth. The issue is whether long-term periapical tissue healing is achieved after a successful endodontic treatment, and for this, clinical data should be collected: for cases in which the apical closure or "apical filling" with the endodontic filling was carried out correctly clinically but, for various reasons, healing of the periapical tissues did not occur, and also for cases in which the endodontic treatment did not achieve the "apical filling" of the root canal. Endodontic retreatments have their clinical difficulty, but when the reason why endodontic treatment success was not achieved is known clinically, the clinical endodontic approach is easier. In this process, it is important for the dentist to recognize the clinical and radiographic signs of persistent or renewed apical periodontitis. After this initial procedure, dentists must assess the possibility of clinical endodontic retreatment, reporting, not precisely but with approximate values, the percentage of clinical success of endodontic retreatment. Depending on the reason it is performed, endodontic retreatment may also need more specialized equipment or tools, and the professional who undertakes the retreatment must have the relevant knowledge of their use and clinical application.
Evaluating the clinical success of endodontic retreatment is a more difficult process and requires more clinical responsibility, since the initial treatment may not have been performed by the same specialist who undertakes the retreatment. Thus, the clinical endodontic retreatment of a tooth should not be seen as the preserve of only a successful endodontist, but as part of routine endodontic treatments; nor should it be seen as a typical case requiring the most advanced tools and technological devices in the endodontic field. The clinical picture of endodontic retreatments offers the possibility of detecting endodontic malpractice, of more accurately assessing dental morphological anomalies, and, above all, of diagnosing persistent apical periodontitis. This study evaluates these three directions, presenting in numbers and percentages the frequency of the reasons why endodontic success of root canal treatment is not always achieved.

Keywords: apical periodontitis, clinical success, endodontics, E. faecalis

Procedia PDF Downloads 8
380 Exploring Valproic Acid (VPA) Analogues Interactions with HDAC8 Involved in VPA Mediated Teratogenicity: A Toxicoinformatics Analysis

Authors: Sakshi Piplani, Ajit Kumar

Abstract:

Valproic acid (VPA) is the first synthetic therapeutic agent used to treat epileptic disorders, which affect nearly 1% of the world population. Teratogenicity caused by VPA has prompted the search for next-generation drugs with better efficacy and fewer side effects. Recent studies have identified HDAC8 as a direct target of VPA that causes the teratogenic effect in the foetus. We employed molecular dynamics (MD) and docking simulations to understand the binding mode of VPA and its analogues onto HDAC8. A total of twenty 3D structures of human HDAC8 isoforms were selected using a BLAST-P search against the PDB. Multiple sequence alignment was carried out using ClustalW, and PDB 3F07, having the fewest missing and mutated regions, was selected for the study. The missing residues of the loop region were constructed using MODELLER, and the energy was minimized. A set of 216 structural analogues (>90% identity) of VPA was obtained from the PubChem and ZINC databases, and their energies were optimized with the ChemSketch software using a 3D CHARMM-type force field. Four major enzymes of neurotransmitter metabolism (GABAt, SSADH, α-KGDH, GAD) involved in anticonvulsant activity were docked with VPA and its analogues. Out of the 216 analogues, 75 were selected on the basis of lower binding energy and inhibition constant compared with VPA, and thus predicted to have anticonvulsant activity. The selected hHDAC8 structure was then subjected to MD simulation using a licensed version of YASARA with the AMBER99SB force field. The structure was solvated in a rectangular box of TIP3P water. The simulation was carried out with periodic boundary conditions, and electrostatic interactions were treated with the particle mesh Ewald algorithm. The pH of the system was set to 7.4, the temperature to 323 K and the pressure to 1 atm. Simulation snapshots were stored every 25 ps. The MD simulation was carried out for 20 ns, and a PDB file of the HDAC8 structure was saved every 2 ns.
The structures were analysed using CASTp and UCSF Chimera, and the most stabilized structure (at 20 ns) was used for the docking study. Molecular docking of the 75 selected VPA analogues with PDB 3F07 was performed using AutoDock 4.2.6. The Lamarckian Genetic Algorithm was used to generate conformations of the docked ligand and structure. The docking study revealed that VPA and its analogues have greater affinity towards the ‘hydrophobic active site channel’, whose hydrophobic properties allow VPA and its analogues to take part in van der Waals interactions with TYR24, HIS42, VAL41, TYR20, SER138 and TRP137, while TRP137 and SER138 showed hydrogen bonding interactions with the VPA analogues. 14 analogues showed better binding affinity than VPA. The admetSAR server was used to predict the ADMET properties of the selected VPA analogues to assess their druggability. On the basis of ADMET screening, nine molecules were selected and are being used for in vivo evaluation using a Danio rerio model.

Keywords: HDAC8, docking, molecular dynamics simulation, valproic acid

Procedia PDF Downloads 250
379 Extraction of Scandium (Sc) from an Ore with Functionalized Nanoporous Silicon Adsorbent

Authors: Arezoo Rahmani, Rinez Thapa, Juha-Matti Aalto, Petri Turhanen, Jouko Vepsalainen, Vesa-Pekka Lehto, Joakim Riikonen

Abstract:

Production of scandium (Sc) is a complicated process because Sc is found in ores only at low concentrations, much lower than those of other metals. Therefore, utilization of typical extraction processes such as solvent extraction is problematic. Adsorption/desorption methods can be used, but it is challenging to prepare materials which have good selectivity, high adsorption capacity, and high stability. Efficient and environmentally friendly methods for Sc extraction are therefore needed. In this study, a nanoporous composite material was developed for extracting Sc from an Sc ore. The nanoporous composite material offers several advantageous properties, such as a large surface area, high chemical and mechanical stability, fast diffusion of the metals in the material and the possibility to construct a filter out of the material with good flow-through properties. The nanoporous silicon material was produced by first stabilizing the surfaces with a silicon carbide layer and then functionalizing the surface with bisphosphonates that act as metal chelators. The surface area and porosity of the material were characterized by N₂ adsorption, and the morphology was studied by scanning electron microscopy (SEM). The bisphosphonate content of the material was studied by thermogravimetric analysis (TGA). The concentration of metal ions in the adsorption/desorption experiments was measured with inductively coupled plasma mass spectrometry (ICP-MS). The maximum capacity of the material, obtained from the adsorption isotherm, was 25 µmol/g Sc at pH 1 and 45 µmol/g Sc at pH 3. The selectivity of the material towards Sc in artificial solutions containing several metal ions was studied at pH 1 and pH 3. The results show good selectivity of the nanoporous composite towards adsorption of Sc.
Scandium was less efficiently adsorbed from the solution leached from the Sc ore because of excessive amounts of iron (Fe), aluminum (Al) and titanium (Ti), which disturbed the adsorption process. For example, the concentration of Fe was more than 4500 ppm, while the concentration of Sc was only 3 ppm, approximately 1500 times lower. Precipitation methods were developed to lower the concentrations of the metals other than Sc. The optimal pH for precipitation was found to be pH 4. The concentrations of Fe, Al and Ti were decreased by 99%, 70% and 99.6%, respectively, while the concentration of Sc decreased by only 22%. Despite the large reduction in the concentrations of the other metals, more work is needed to further increase the relative concentration of Sc so that it can be efficiently extracted using the developed nanoporous composite material. Nevertheless, the developed material may provide an affordable, efficient and environmentally friendly method to extract Sc on a large scale.

Keywords: adsorption, nanoporous silicon, ore solution, scandium

Procedia PDF Downloads 146
378 Development of Gully Erosion Prediction Model in Sokoto State, Nigeria, using Remote Sensing and Geographical Information System Techniques

Authors: Nathaniel Bayode Eniolorunda, Murtala Abubakar Gada, Sheikh Danjuma Abubakar

Abstract:

The challenge of erosion in the study area is persistent, suggesting the need for a better understanding of the mechanisms that drive it. Thus, the study developed a predictive erosion model (RUSLE_Sok) deploying Remote Sensing (RS) and Geographical Information System (GIS) tools. The nature and pattern of the erosion factors were characterized, and soil losses were quantified. The factors' impacts were also measured, and the morphometry of gullies was described. Data on the five RUSLE factors and the distances to settlements, roads and rivers (K, R, LS, P, C, DS, DRd and DRv) were combined and processed following standard RS and GIS algorithms. The Harmonized World Soil Database (HWSD), a Shuttle Radar Topography Mission (SRTM) image, Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS), a Sentinel-2 image accessed and processed within Google Earth Engine, the road network and settlements were the data combined and calibrated into the factors for erosion modeling. A gully morphometric study was conducted at purposively selected sites. The soil erosion factors showed low, moderate to high patterns. Soil losses ranged from 0 to 32.81 tons/ha/year, classified into low (97.6%), moderate (0.2%), severe (1.1%) and very severe (1.05%) forms. The multiple regression analysis shows that the factors statistically significantly predicted soil loss, F(8, 153) = 55.663, p < .0005. Except for the C-factor, which had a negative coefficient, all factors were positive, with contributions in the order LS>C>R>P>DRv>K>DS>DRd. Gullies range from less than 100 m to about 3 km in length. The average minimum and maximum depths at gully heads are 0.6 and 1.2 m, and those at mid-stream are 1 and 1.9 m, respectively. The minimum depth downstream is 1.3 m, while the maximum is 4.7 m. Deeper gullies exist in proximity to rivers.
With minimum and maximum gully elevations ranging between 229 and 338 m and an average slope of about 3.2%, the study area is relatively flat. The study concluded that the major erosion influencers in the study area are topography and vegetation cover, and that RUSLE_Sok predicted soil loss more effectively than the ordinary RUSLE. The adoption of conservation measures such as tree planting and contour ploughing on sloping farmlands was recommended.
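The standard RUSLE estimate underlying the model is the per-cell product A = R · K · LS · C · P (tons/ha/year); RUSLE_Sok augments this with the distance predictors, which are not modeled in this minimal sketch. The factor values and severity thresholds below are illustrative assumptions, not the study's calibrated rasters or class boundaries:

```python
def rusle_soil_loss(R, K, LS, C, P):
    """Annual soil loss A (tons/ha/year) for one raster cell, following
    the standard RUSLE form A = R * K * LS * C * P."""
    return R * K * LS * C * P

def classify_loss(A):
    # Severity classes loosely mirroring the abstract's low/moderate/
    # severe/very severe bands; the thresholds are assumptions.
    if A < 10:
        return "low"
    if A < 20:
        return "moderate"
    if A < 30:
        return "severe"
    return "very severe"
```

In a GIS workflow the same product is evaluated for every cell of the co-registered factor rasters, yielding the soil-loss surface that is then classified.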

Keywords: RUSLE_Sok, Sokoto, google earth engine, sentinel-2, erosion

Procedia PDF Downloads 75
377 Microchip-Integrated Computational Models for Studying Gait and Motor Control Deficits in Autism

Authors: Noah Odion, Honest Jimu, Blessing Atinuke Afuape

Abstract:

Introduction: Motor control and gait abnormalities are commonly observed in individuals with autism spectrum disorder (ASD), affecting their mobility and coordination. Understanding the underlying neurological and biomechanical factors is essential for designing effective interventions. This study focuses on developing microchip-integrated wearable devices to capture real-time movement data from individuals with autism. By applying computational models to the collected data, we aim to analyze motor control patterns and gait abnormalities, bridging a crucial knowledge gap in autism-related motor dysfunction. Methods: We designed microchip-enabled wearable devices capable of capturing precise kinematic data, including joint angles, acceleration, and velocity during movement. A cross-sectional study was conducted on individuals with ASD and a control group to collect comparative data. Computational modelling was applied using machine learning algorithms to analyse motor control patterns, focusing on gait variability, balance, and coordination. Finite element models were also used to simulate muscle and joint dynamics. The study employed descriptive and analytical methods to interpret the motor data. Results: The wearable devices effectively captured detailed movement data, revealing significant gait variability in the ASD group. For example, gait cycle time was 25% longer, and stride length was reduced by 15% compared to the control group. Motor control analysis showed a 30% reduction in balance stability in individuals with autism. Computational models successfully predicted movement irregularities and helped identify motor control deficits, particularly in the lower limbs. Conclusions: The integration of microchip-based wearable devices with computational models offers a powerful tool for diagnosing and treating motor control deficits in autism. 
These results have significant implications for patient care, providing objective data to guide personalized therapeutic interventions. The findings also contribute to the broader field of neuroscience by improving our understanding of the motor dysfunctions associated with ASD and other neurodevelopmental disorders.
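The group comparison reported above (e.g. a 25% longer gait cycle in the ASD group) reduces to comparing group means of a kinematic metric. A minimal sketch with illustrative data, not the study's measurements:

```python
def mean(xs):
    return sum(xs) / len(xs)

def percent_change(test_group, control_group):
    """Signed percentage difference of a test-group mean vs the control mean."""
    return 100.0 * (mean(test_group) - mean(control_group)) / mean(control_group)

# Illustrative gait-cycle times in seconds (one value per stride).
control_cycle = [1.00, 1.02, 0.98, 1.00]
asd_cycle     = [1.25, 1.27, 1.23, 1.25]   # ~25% longer, as in the abstract

delta = percent_change(asd_cycle, control_cycle)
```

The same helper applies to stride length or balance metrics; in practice the per-stride values would come from segmenting the wearable's acceleration and joint-angle streams into gait cycles.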

Keywords: motor control, gait abnormalities, autism, wearable devices, microchips, computational modeling, kinematic analysis, neurodevelopmental disorders

Procedia PDF Downloads 23
376 Automatic and High Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

To describe and propagate the behavior of a system, mathematical models are formulated. Parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems this approach can be incomplete, and hence imprecise, and moreover too slow to be computed efficiently. Such models may therefore not be applicable to the numerical optimization of real systems, since these techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification may be available, so the system must be adapted manually. This paper therefore describes an approach that generates models overcoming the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general in that it generates models for any system, detached from the scientific background. Additionally, it can be used in a more general sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables, not only of single variables. This enables a far more precise and better representation of causal correlations. The basis and explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. In this way, system changes due to aging, wear, or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g. time, energy consumption, operational costs, or a mixture of them, subject to additional constraints. The proposed method has been tested successfully in several complex applications with strong industrial requirements. The generated models were able to simulate the given systems with a precision error of less than one percent. Moreover, the automatic identification of correlations was able to discover previously unknown relationships. To summarize, the approach described above is able to efficiently compute highly precise, real-time-adaptive, data-based models for different fields of industry. Combined with an effective mathematical optimization algorithm such as WORHP (We Optimize Really Huge Problems), complex systems can now be represented by a high-precision model and optimized according to the user's wishes. The proposed methods are illustrated with different examples.

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 409
375 Phage Display-Derived Vaccine Candidates for Control of Bovine Anaplasmosis

Authors: Itzel Amaro-Estrada, Eduardo Vergara-Rivera, Virginia Juarez-Flores, Mayra Cobaxin-Cardenas, Rosa Estela Quiroz, Jesus F. Preciado, Sergio Rodriguez-Camarillo

Abstract:

Bovine anaplasmosis is an infectious, tick-borne disease caused mainly by Anaplasma marginale; typical signs include anemia, fever, abortion, weight loss, decreased milk production, jaundice, and potentially death. Sick cattle can recover when antibiotics are administered; however, they usually remain carriers for life, posing an infection risk for susceptible cattle. Anaplasma marginale is an obligate intracellular Gram-negative bacterium whose genetic composition is highly diverse among geographical isolates. There are currently no fully effective vaccines against bovine anaplasmosis; therefore, economic losses due to the disease persist. Vaccine formulation is a hard task for pathogens such as Anaplasma marginale, but peptide-based vaccines are an interesting way to induce specific responses. Phage-displayed peptide libraries have proven to be one of the most powerful technologies for identifying specific ligands. Screening of these peptide libraries is also a tool for studying interactions between proteins or peptides. Thus, it has allowed the identification of ligands recognized by polyclonal antisera, and it has been successful in identifying relevant epitopes in chronic diseases and toxicological conditions. The protective immune response to bovine anaplasmosis includes high levels of immunoglobulin subclass G2 (IgG2) but not subclass IgG1. Therefore, IgG2 from the serum of protected cattle can be useful for identifying ligands, which can become part of an immunogen for cattle. In this work, the phage display random peptide library Ph.D.™-12 was incubated with IgG2 or with blood sera of bovines immunized against A. marginale as targets. After three rounds of biopanning, several candidates were selected for additional analysis. Subsequently, their reactivity with sera immunized against A. marginale, as well as with sera positive and negative for A. marginale, was evaluated by immunoassays. A collection of recognized peptides tested by ELISA was generated. More than three hundred phage-peptides were separately evaluated against the molecules used during panning. At least ten different peptide sequences were determined from their nucleotide composition. In this approach, three phage-peptides were selected for their binding and affinity properties. For the development of vaccines or diagnostic reagents, it is important to evaluate the immunogenic and antigenic properties of the peptides. The in vitro and in vivo immunogenic behavior of the peptides will be assayed, both as synthetic peptides and as phage-peptides, to determine their vaccine potential. Acknowledgment: This work was supported by grant SEP-CONACYT 252577 given to I. Amaro-Estrada.

Keywords: bovine anaplasmosis, peptides, phage display, veterinary vaccines

Procedia PDF Downloads 141
374 Optimization for Autonomous Robotic Construction by Visual Guidance through Machine Learning

Authors: Yangzhi Li

Abstract:

Network transfer of information and performance customization are now viable methods of digital industrial production in the era of Industry 4.0. Robot platforms and network platforms have grown more important in digital design and construction. The pressing need for novel building techniques is driven by the growing labor scarcity problem and increased awareness of construction safety. Robotic approaches in construction research are regarded as an extension of operational and production tools. Several technological theories related to autonomous robot recognition, including high-performance computing, physical system modeling, extensive sensor coordination, and deep learning on large datasets, have not yet been explored in intelligent construction, and the relevant transdisciplinary theory and practice research still has specific gaps. Optimizing high-performance computing and autonomous visual guidance technologies improves a robot's grasp of the scene and its capacity for autonomous operation. Camera calibration is a serious issue for intelligent vision guidance in industrial robots, and the use of intelligent visual guidance and identification technologies in industrial production imposes strict accuracy requirements; precision shortcomings in visual recognition systems directly impact the effectiveness and standard of industrial production, necessitating stronger research on the positioning precision of recognition technology. To best facilitate the handling of complicated components, an approach for the visual recognition of parts using machine learning algorithms is proposed. This study will identify the position of target components by detecting information at the boundary and corners of a dense point cloud and determining the aspect ratio in accordance with the guidelines for the modularization of building components. To collect and use components, operational processing systems assign them to the same coordinate system based on their locations and postures. Inclination detection in the RGB image, verified against the depth image, will be used to determine a component's current posture. Finally, a virtual environment model for the robot's obstacle-avoidance route will be constructed using the point cloud information.

Keywords: robotic construction, robotic assembly, visual guidance, machine learning

Procedia PDF Downloads 86
373 Design, Development and Testing of Polymer-Glass Microfluidic Chips for Electrophoretic Analysis of Biological Sample

Authors: Yana Posmitnaya, Galina Rudnitskaya, Tatyana Lukashenko, Anton Bukatin, Anatoly Evstrapov

Abstract:

An important area of biological and medical research is the study of genetic mutations and polymorphisms that can alter gene function and cause inherited and other diseases. The following methods are used to analyse DNA fragments: capillary electrophoresis, electrophoresis on a microfluidic chip (MFC), mass spectrometry combined with electrophoresis on an MFC, and hybridization assays on microarrays. Electrophoresis on an MFC allows small sample volumes to be analysed with high speed and throughput. Soft lithography in polydimethylsiloxane (PDMS) was chosen for rapid fabrication of the MFCs. A master form of silicon and photoresist SU-8 2025 (MicroChem Corp.) was created for forming the micro-sized structures in PDMS. A universal topology combining a T-injector and a simple cross was selected for the electrophoretic separation of the sample. Glass K8 and PDMS Sylgard® 184 (Dow Corning Corp.) were used to fabricate the MFCs. Electroosmotic flow (EOF) plays an important role in the electrophoretic separation of the sample. Therefore, estimating the magnitude of the EOF and finding ways to regulate it are of interest for developing new methods of electrophoretic separation of biomolecules. The following surface-modification methods were chosen to change the EOF: high-frequency (13.56 MHz) plasma treatment in oxygen and argon at low pressure (1 mbar); a 1% aqueous solution of polyvinyl alcohol; and a 3% aqueous solution of Kolliphor® P 188 (Sigma-Aldrich Corp.). The electroosmotic mobility was evaluated by the method of Huang X. et al., using a borate buffer. The influence of the physical and chemical treatments on the wetting properties of the PDMS surface was monitored by the sessile drop method. The most effective way of modifying the MFC surfaces, in terms of yielding both the smallest contact angle and the smallest EOF, was treatment with the aqueous solution of Kolliphor® P 188. This modification method was therefore selected for treating the channels of the MFCs used to separate a mixture of fluorescently labeled oligonucleotides with chain lengths of 10, 20, 30, 40, and 50 nucleotides. Electrophoresis was performed on the MFAS-01 device (IAI RAS, Russia) at a separation voltage of 1500 V. A 6% solution of polydimethylacrylamide with the addition of 7 M carbamide was used as the separation medium. The separation time of the mixture's components was determined from electropherograms: ~275 s for the untreated MFC versus ~220 s for the MFC treated with the Kolliphor® P 188 solution. The study of physical-chemical surface-modification methods thus identified the most effective way to reduce the EOF, modification with an aqueous solution of Kolliphor® P 188, which decreased the separation time of the oligonucleotide mixture by about 20%. Further optimization of the channel-modification method will allow the separation time to be decreased further and the throughput of the analysis to be increased.

Keywords: electrophoresis, microfluidic chip, modification, nucleic acid, polydimethylsiloxane, soft lithography

Procedia PDF Downloads 413
372 CyberSteer: Cyber-Human Approach for Safely Shaping Autonomous Robotic Behavior to Comply with Human Intention

Authors: Vinicius G. Goecks, Gregory M. Gremillion, William D. Nothwang

Abstract:

Modern approaches to train intelligent agents rely on prolonged training sessions, high amounts of input data, and multiple interactions with the environment. This restricts the application of these learning algorithms in robotics and real-world applications, in which there is low tolerance to inadequate actions, interactions are expensive, and real-time processing and action are required. This paper addresses this issue by introducing CyberSteer, a novel approach to efficiently design intrinsic reward functions based on human intention to guide deep reinforcement learning agents with no environment-dependent rewards. CyberSteer uses non-expert human operators for the initial demonstration of a given task or desired behavior. The trajectories collected are used to train a behavior cloning deep neural network that asynchronously runs in the background and suggests actions to the deep reinforcement learning module. An intrinsic reward is computed based on the similarity between the actions suggested and the actions taken by the deep reinforcement learning algorithm commanding the agent. This intrinsic reward can also be reshaped through additional human demonstration or critique. This approach removes the need for environment-dependent or hand-engineered rewards while still being able to safely shape the behavior of autonomous robotic agents, in this case, based on human intention. CyberSteer is tested in a high-fidelity unmanned aerial vehicle simulation environment, Microsoft AirSim. The simulated aerial robot performs collision avoidance through a clustered forest environment using forward-looking depth sensing and roll, pitch, and yaw reference angle commands to the flight controller. This approach shows that the behavior of robotic systems can be shaped in a reduced amount of time when guided by a non-expert human, who is only aware of the high-level goals of the task.
Decreasing the amount of training time required and increasing safety during training maneuvers will allow for faster deployment of intelligent robotic agents in dynamic real-world applications.

Keywords: human-robot interaction, intelligent robots, robot learning, semisupervised learning, unmanned aerial vehicles

Procedia PDF Downloads 259
371 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.

Keywords: cost prediction, machine learning, project management, random forest, neural networks

Procedia PDF Downloads 54
370 Methodologies for Deriving Semantic Technical Information Using an Unstructured Patent Text Data

Authors: Jaehyung An, Sungjoo Lee

Abstract:

Patent documents constitute an up-to-date and reliable source of knowledge reflecting technological advance, so patent analysis has been widely used to identify technological trends and formulate technology strategies. However, identifying technological information from patent data entails limitations such as high cost, complexity, and inconsistency, because it relies on expert knowledge. To overcome these limitations, researchers have applied quantitative analysis based on keyword techniques, which can capture the technological implications of documents, particularly patents, by extracting keywords that indicate their important contents. However, this approach uses only simple counting of keyword frequency, so it cannot take into account the semantic relationships among keywords or semantic information such as how technologies are used in their technology area and how they affect other technologies. To automatically analyze the unstructured technological information in patents and extract semantic information, the text should be transformed into an abstracted form that captures the key technological concepts. The sentence structure 'SAO' (subject, action, object) has recently emerged as a representation of such key concepts and can be extracted by natural language processing (NLP). An SAO structure can be organized in a problem-solution format if the action-object (AO) pair states the problem and the subject (S) forms the solution. In this paper, we propose a new methodology that extracts SAO structures through technical-element extraction rules. Although sentences in patent texts have a unique format, prior studies have depended on general NLP tools applied to common documents such as newspapers, research papers, and Twitter mentions, which cannot take into account the specific sentence structures of patent documents. To overcome this limitation, we identified the unique form of patent sentences and defined the SAO structures in patent text data. Four types of technical elements are distinguished: technology adoption purpose, application area, tool for technology, and technical components. Each of these four sentence-structure types in patents has its own specific word structure, defined by the location or sequence of parts of speech in the sentence. Finally, we developed algorithms for extracting SAOs; the results offer insight into the technology innovation process by providing different perspectives on technology.

Keywords: NLP, patent analysis, SAO, semantic-analysis

Procedia PDF Downloads 262
369 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 39