Search results for: modified simplex algorithm
855 Use of Pig as an Animal Model for Assessing the Differential MicroRNA Profiling in Kidney after Aristolochic Acid Intoxication
Authors: Daniela E. Marin, Cornelia Braicu, Gina C. Pistol, Roxana Cojocneanu-Petric, Ioana Berindan Neagoe, Mihail A. Gras, Ionelia Taranu
Abstract:
Aristolochic acid (AA) is a carcinogenic, mutagenic, and nephrotoxic compound commonly found in the Aristolochiaceae family of plants. AA is frequently associated with urothelial carcinoma of the upper urinary tract in humans and animals and is considered responsible for Balkan Endemic Nephropathy. The pig provides a good animal model because the porcine urological system is very similar to that of humans in both physiology and anatomy. MicroRNAs (miRNAs) are small non-coding RNAs that affect a wide range of biological processes by regulating gene expression at the post-transcriptional level. The objective of this study was to analyze the miRNA profile in the kidneys of AA-intoxicated swine. For this purpose, ten TOPIGS-40 crossbred weaned piglets, 4 weeks old, male and female, with an initial average body weight of 9.83 ± 0.5 kg, were studied for 28 days. They were given ad libitum access to water and feed and randomly allotted to one of the following groups: a control group (C) or an aristolochic acid group (AA). They were fed a maize-soybean-meal-based diet contaminated or not with 0.25 mg AA/kg. To profile miRNA in the kidneys of pigs, microarray and bioinformatics approaches were applied to the kidneys of control and AA-intoxicated pigs. After normalization, our results showed that a total of 5 known miRNAs and 4 novel miRNAs were differentially expressed in the kidneys of intoxicated animals versus controls. Expression of miR-32-5p, miR-497-5p, miR-423-3p, miR-218-5p and miR-128-3p was up-regulated by 0.25 mg AA/kg feed, while the expression of miR-9793-5p, miR-9835-3p, miR-9840-3p and miR-4334-5p was down-regulated. The miRNA profile in the kidneys of intoxicated animals was associated with modified expression of target genes such as RICTOR, LASP1, SFRP2, DKK2, BMI1, RAF1, IGF1R, MAP2K1, WEE1, HDGF, BCL2 and EIF4E, which are involved in the cell division cycle, apoptosis, cell differentiation, cell migration, cell signaling and cancer. In conclusion, this study provides new data concerning the miRNA profile in the kidney after aristolochic acid intoxication, with important implications for human and animal health.
Keywords: aristolochic acid, kidney, microRNA, swine
Procedia PDF Downloads 285
854 Evolutionary Swarm Robotics: Dynamic Subgoal-Based Path Formation and Task Allocation for Exploration and Navigation in Unknown Environments
Authors: Lavanya Ratnabala, Robinroy Peter, E. Y. A. Charles
Abstract:
This research paper addresses the challenges of exploration and navigation in unknown environments from an evolutionary swarm robotics perspective. Path formation plays a crucial role in enabling cooperative swarm robots to accomplish these tasks. The paper presents a method called sub-goal-based path formation, which establishes a path between two different locations by exploiting visually connected sub-goals. Simulation experiments conducted in the Argos simulator demonstrate the successful formation of paths in the majority of trials. Furthermore, the paper tackles the problem of inter-robot collisions (traffic) among a large number of robots engaged in path formation, which negatively impacts the performance of the sub-goal-based method. To mitigate this issue, a task allocation strategy is proposed, leveraging local communication protocols and light-signal-based communication. The strategy evaluates the distance between points and determines the required number of robots for the path formation task, reducing unwanted exploration and traffic congestion. The performance of the sub-goal-based path formation and task allocation strategy is evaluated by comparing path length, time, and resource reduction against the A* algorithm. The simulation experiments demonstrate promising results, showcasing the scalability, robustness, and fault tolerance characteristics of the proposed approach.
Keywords: swarm, path formation, task allocation, Argos, exploration, navigation, sub-goal
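As an illustration of the distance-based allocation idea, the sketch below estimates how many robots a path needs from the point-to-point distance; the visual range and safety margin are assumed values, not the paper's calibrated parameters.

```python
import math

def robots_needed(start, goal, visual_range_m=1.0, margin=1.2):
    """Estimate how many robots must chain between two points so that
    each sub-goal robot stays visually connected to its neighbours.
    `visual_range_m` and `margin` are illustrative values, not the
    paper's calibrated parameters."""
    distance = math.hypot(goal[0] - start[0], goal[1] - start[1])
    # One robot per visual-range segment, inflated by a safety margin.
    return math.ceil(margin * distance / visual_range_m)

print(robots_needed((0.0, 0.0), (4.0, 3.0)))  # 5 m apart -> 6 robots
```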
Procedia PDF Downloads 42
853 Combining ASTER Thermal Data and Spatial-Based Insolation Model for Identification of Geothermal Active Areas
Authors: Khalid Hussein, Waleed Abdalati, Pakorn Petchprayoon, Khaula Alkaabi
Abstract:
In this study, we integrated ASTER thermal data with an area-based spatial insolation model to identify and delineate geothermally active areas in Yellowstone National Park (YNP). Two pairs of L1B ASTER day- and nighttime scenes were used to calculate land surface temperature. We employed the Emissivity Normalization Algorithm, which separates temperature from emissivity, to calculate surface temperature. We calculated the incoming solar radiation for the area covered by each of the four ASTER scenes using an insolation model and used this information to compute the temperature due to solar radiation. We then identified statistical thermal anomalies using land surface temperature and the residuals calculated from modeled temperatures and ASTER-derived surface temperatures. Areas with temperatures or temperature residuals greater than 2σ, or between 1σ and 2σ, were considered ASTER-modeled thermal anomalies. The areas identified as thermal anomalies were in strong agreement with the thermal areas obtained from the YNP GIS database. Also, the YNP hot springs and geysers were located within areas identified as anomalous thermal areas. The consistency between our results and known geothermally active areas indicates that thermal remote sensing data, integrated with a spatial-based insolation model, provide an effective means of identifying and locating areas of geothermal activity over large areas and rough terrain.
Keywords: thermal remote sensing, insolation model, land surface temperature, geothermal anomalies
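The 1σ/2σ thresholding step can be sketched as follows, assuming co-registered NumPy arrays of ASTER-derived and modeled temperatures (an illustration of the rule, not the authors' exact code):

```python
import numpy as np

def classify_anomalies(lst, modeled):
    """Flag statistical thermal anomalies from ASTER-derived LST and an
    insolation-modeled temperature field (same-shape arrays).
    Returns 2 for residuals above 2 sigma, 1 for 1-2 sigma, else 0."""
    residual = lst - modeled
    z = (residual - residual.mean()) / residual.std()
    labels = np.zeros(lst.shape, dtype=int)
    labels[(z > 1) & (z <= 2)] = 1   # weaker anomaly band
    labels[z > 2] = 2                # strong anomaly
    return labels
```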
Procedia PDF Downloads 371
852 Identification of Watershed Landscape Character Types in Middle Yangtze River within Wuhan Metropolitan Area
Authors: Huijie Wang, Bin Zhang
Abstract:
In China, the middle reaches of the Yangtze River are well developed, boasting a wealth of different watershed landscape types. In this regard, landscape character assessment (LCA) can serve as a basis for the protection, management and planning of trans-regional watershed landscape types. For this study, we chose the middle reaches of the Yangtze River in the Wuhan metropolitan area as our study site, where the water system exhibits a rich variety of landscape types. We analyzed trans-regional data to cluster and identify landscape character types at two levels. Fifty-five basins were analyzed using topography, land cover and river system features as variables in order to identify the watershed landscape character types. For the watershed landscape, drainage density and degree of curvature were specified as special variables to directly reflect the regional differences in river system features. Then, we used the principal component analysis (PCA) method and a hierarchical clustering algorithm, based on a geographic information system (GIS) and statistical products and services solution (SPSS) software, to cluster the watershed landscapes into 8 characteristic groups. These groups highlighted the watershed landscape characteristics of different river systems as well as key landscape characteristics that can serve as a basis for targeted protection of watershed landscape character, thus helping to rationally develop multi-value landscape resources and promote coordinated trans-regional development.
Keywords: GIS, hierarchical clustering, landscape character, landscape typology, principal component analysis, watershed
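A minimal sketch of the PCA-plus-hierarchical-clustering pipeline, with scikit-learn standing in for SPSS and a hypothetical input file:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

# X: one row per basin (55 rows); columns are topography, land-cover and
# river-system variables such as drainage density and degree of curvature.
X = np.loadtxt("basin_variables.csv", delimiter=",")  # hypothetical file

X_std = StandardScaler().fit_transform(X)             # comparable scales
scores = PCA(n_components=0.9).fit_transform(X_std)   # keep 90% variance
groups = AgglomerativeClustering(n_clusters=8,
                                 linkage="ward").fit_predict(scores)
print(groups)  # one of 8 character groups per basin
```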
Procedia PDF Downloads 233
851 High Aspect Ratio Micropillar Array Based Microfluidic Viscometer
Authors: Ahmet Erten, Adil Mustafa, Ayşenur Eser, Özlem Yalçın
Abstract:
We present a new viscometer based on a microfluidic chip with elastic high-aspect-ratio micropillar arrays. The displacement of the pillar tips in the flow direction can be used to analyze the viscosity of a liquid. In our work, computational fluid dynamics (CFD) is used to analyze the pillar displacement of various micropillar array configurations in the flow direction at different viscosities. Following CFD optimization, micro-CNC-based rapid prototyping is used to fabricate molds for the microfluidic chips. The microfluidic chips are fabricated out of polydimethylsiloxane (PDMS) using soft lithography methods with molds machined out of aluminum. Tip displacements of the micropillar array (300 µm in diameter and 1400 µm in height) in the flow direction are recorded using a microscope-mounted camera, and the displacements are analyzed using image processing with an algorithm written in MATLAB. Experiments are performed with water-glycerol solutions mixed at 4 different ratios to attain viscosities of 1 cP, 5 cP, 10 cP and 15 cP at room temperature. The prepared solutions are injected into the microfluidic chips using a syringe pump at flow rates from 10-100 mL/hr, and the displacement versus flow rate is plotted for different viscosities. A displacement of around 1.5 µm was observed for the 15 cP solution at 60 mL/hr, while only a 1 µm displacement was observed for the 10 cP solution. The presented viscometer design optimization is still in progress for better sensitivity and accuracy. Our microfluidic viscometer platform has potential for tailor-made microfluidic chips to enable real-time observation and control of viscosity changes in biological or chemical reactions.
Keywords: Computational Fluid Dynamics (CFD), high aspect ratio, micropillar array, viscometer
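A sketch of the displacement measurement, assuming thresholded grayscale frames and an illustrative pixel calibration; the authors' MATLAB routine is not published here, so this is only one plausible centroid-based approach:

```python
import numpy as np

def tip_displacement(frame_ref, frame_flow, threshold=128, pixel_um=0.5):
    """Estimate micropillar tip displacement (µm) along the flow
    direction by comparing the bright-pixel centroid of a reference
    frame with a frame under flow. Grayscale uint8 images are assumed;
    `threshold` and `pixel_um` are illustrative calibration values."""
    def centroid_x(img):
        ys, xs = np.nonzero(img > threshold)   # pixels of the tip
        return xs.mean()
    shift_px = centroid_x(frame_flow) - centroid_x(frame_ref)
    return shift_px * pixel_um
```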
Procedia PDF Downloads 248
850 Eco-Friendly Silicone/Graphene-Based Nanocomposites as Superhydrophobic Antifouling Coatings
Authors: Mohamed S. Selim, Nesreen A. Fatthallah, Shimaa A. Higazy, Hekmat R. Madian, Sherif A. El-Safty, Mohamed A. Shenashen
Abstract:
After the 2003 prohibition on employing TBT-based antifouling coatings, polysiloxane antifouling nano-coatings have gained popularity as environmentally friendly and cost-effective replacements. A series of non-toxic polydimethylsiloxane (PDMS) nanocomposites filled with nanosheets of graphene oxide (GO) decorated with magnetite nanospheres (GO-Fe₃O₄ nanospheres) were developed and cured via a catalytic hydrosilation method. Various GO-Fe₃O₄ hybrid concentrations were mixed with the silicone resin via a solution casting technique to evaluate the structure-property connection. To generate the GO nanosheets, a modified Hummers method was applied. A simple co-precipitation method was used to make spherical magnetite particles under inert nitrogen. The hybrid GO-Fe₃O₄ composite fillers were developed by a simple ultrasonication method. A superhydrophobic PDMS/GO-Fe₃O₄ nanocomposite surface with micro/nano-roughness, reduced surface free energy (SFE) and high fouling-release (FR) efficiency was achieved. The physical, mechanical, and anticorrosive features of the virgin and GO-Fe₃O₄-filled nanocomposites were investigated. The synergistic effects of the well-dispersed GO-Fe₃O₄ hybrid on the water repellency and surface topological roughness of the PDMS/GO-Fe₃O₄ nanopaints were extensively studied. The addition of the GO-Fe₃O₄ hybrid fillers up to 1 wt.% could increase the coating's water contact angle (158°±2°), minimize its SFE to 12.06 mN/m, develop outstanding micro/nano-roughness, and improve its bulk mechanical and anticorrosion properties. Several microorganisms were employed to examine the fouling resistance of the coated specimens for 1 month. Silicone coatings filled with 1 wt.% GO-Fe₃O₄ nanofiller showed the lowest biodegradability% among all the tested microorganisms, whereas GO-Fe₃O₄ at 5 wt.% nanofiller possessed the highest biodegradability% potency for all the microorganisms. We successfully developed a non-toxic and low-cost nanostructured FR composite coating with high antifouling resistance, reproducible superhydrophobic character, and enhanced service time for maritime navigation.
Keywords: silicone antifouling, environmentally friendly, nanocomposites, nanofillers, fouling repellency, hydrophobicity
Procedia PDF Downloads 115
849 Evaluation of Mito-Uncoupler Induced Hyper Metabolic and Aggressive Phenotype in Glioma Cells
Authors: Yogesh Rai, Saurabh Singh, Sanjay Pandey, Dhananjay K. Sah, B. G. Roy, B. S. Dwarakanath, Anant N. Bhatt
Abstract:
One of the most common signatures of highly malignant gliomas is their capacity to metabolize more glucose to lactic acid than normal brain tissues, even under normoxic conditions (the Warburg effect), indicating that aerobic glycolysis is constitutively upregulated through stable genetic or epigenetic changes. However, oxidative phosphorylation (OxPhos) is also required to maintain the mitochondrial membrane potential for tumor cell survival. In the process of tumorigenesis, tumor cells at their fastest growth rate exhibit both high glycolysis and high OxPhos. Therefore, metabolically reprogrammed cancer cells combining aerobic glycolysis with altered OxPhos develop a robust metabolic phenotype, which confers a selective growth advantage. In our study, we grew high-glycolytic BMG-1 (glioma) cells with continuous exposure to the mitochondrial uncoupler 2,4-dinitrophenol (DNP) for 10 passages to obtain a phenotype of high glycolysis with enhanced altered OxPhos. We found that the OxPhos-modified BMG (OPMBMG) cells had a similar growth rate and cell cycle distribution but higher mitochondrial mass and functional enzymatic activity than the parental cells. In in-vitro studies, OPMBMG cells showed enhanced invasion, proliferation and migration properties. Moreover, they also showed enhanced angiogenesis in a matrigel plug assay. Xenografted tumors from OPMBMG cells showed a reduced latent period, a faster growth rate and a nearly five-fold reduction in the tumor take in nude mice compared to BMG-1 cells, suggesting that the robust metabolic phenotype facilitates tumor formation and growth. OPMBMG cells, which were found to be radio-resistant, showed enhanced radio-sensitization by 2-DG compared to the parental BMG-1 cells. This study suggests that metabolic reprogramming in cancer cells enhances the potential for migration, invasion and proliferation. It also strengthens cancer cells to escape death processes, conferring resistance to therapeutic modalities. Our data also suggest that combining metabolic inhibitors like 2-DG with conventional therapeutic modalities can sensitize such metabolically aggressive cancer cells more than the therapies alone.
Keywords: 2-DG, BMG, DNP, OPM-BMG
Procedia PDF Downloads 226
848 Error Detection and Correction for Onboard Satellite Computers Using Hamming Code
Authors: Rafsan Al Mamun, Md. Motaharul Islam, Rabana Tajrin, Nabiha Noor, Shafinaz Qader
Abstract:
In an attempt to enrich the lives of billions of people by providing proper information, security and a way of communicating with others, the need for efficient and improved satellites is constantly growing. Thus, there is an increasing demand for better error detection and correction (EDAC) schemes capable of protecting the data onboard satellites. This paper is aimed at detecting and correcting such errors using a special algorithm called the Hamming code, which uses the concept of parity and parity bits to prevent single-bit errors onboard a satellite in Low Earth Orbit. The paper focuses on the study of Low Earth Orbit satellites and the process of generating the Hamming code matrix to be used for EDAC using computer programs. The most effective version of the Hamming code generated was the Hamming (16, 11, 4) version using MATLAB, and the paper compares this particular scheme with other EDAC mechanisms, including other versions of Hamming codes and Cyclic Redundancy Check (CRC), along with the limitations of this scheme. This particular version of the Hamming code guarantees single-bit error correction as well as double-bit error detection. Furthermore, this version has proved to be fast, with a checking time of 5.669 nanoseconds, has a relatively higher code rate and lower bit overhead compared to the other versions, and can detect a greater percentage of errors per length of code than other EDAC schemes with similar capabilities. In conclusion, with proper implementation of the system, it is quite possible to ensure a relatively uncorrupted satellite storage system.
Keywords: bit-flips, Hamming code, low earth orbit, parity bits, satellite, single error upset
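A sketch of the Hamming(16, 11, 4) scheme, a (15, 11) Hamming code extended with an overall parity bit for SEC-DED, following the textbook construction rather than the authors' MATLAB implementation:

```python
def hamming_16_11_encode(data_bits):
    """Encode 11 data bits into a 16-bit extended Hamming (SEC-DED)
    codeword: even-parity bits at positions 1, 2, 4, 8 (1-indexed)
    plus an overall parity bit at position 16."""
    assert len(data_bits) == 11
    code = [0] * 17                      # index 1..16
    data = iter(data_bits)
    for pos in range(1, 16):
        if pos not in (1, 2, 4, 8):      # data positions
            code[pos] = next(data)
    for p in (1, 2, 4, 8):               # set each parity bit
        code[p] = sum(code[i] for i in range(1, 16) if i & p) % 2
    code[16] = sum(code[1:16]) % 2       # overall parity for DED
    return code[1:]

def hamming_16_11_check(code):
    """Return (syndrome, overall_parity): a nonzero syndrome with bad
    overall parity locates a single-bit error; a nonzero syndrome with
    good overall parity signals an uncorrectable double-bit error."""
    c = [0] + list(code)                 # re-index from 1
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(c[i] for i in range(1, 16) if i & p) % 2:
            syndrome += p
    overall = sum(c[1:17]) % 2
    return syndrome, overall
```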
Procedia PDF Downloads 130
847 Pilot-free Image Transmission System of Joint Source Channel Based on Multi-Level Semantic Information
Authors: Linyu Wang, Liguo Qiao, Jianhong Xiang, Hao Xu
Abstract:
In semantic communication, existing joint source-channel coding (JSCC) wireless communication systems without pilots have unstable transmission performance and cannot effectively capture the global information and location information of images. In this paper, a pilot-free image transmission system of joint source-channel coding based on multi-level semantic information (multi-level JSCC) is proposed. The transmitter of the system is composed of two networks. The feature extraction network is used to extract the high-level semantic features of the image, compress the information transmitted by the image, and improve the bandwidth utilization. The feature retention network is used to preserve low-level semantic features and image details to improve communication quality. The receiver is also composed of two networks. The received high-level semantic features are fused with the low-level semantic features after a feature enhancement network in the same dimension, and then the image dimension is restored through a feature recovery network, and the image location information is effectively used for image reconstruction. This paper verifies that the proposed multi-level JSCC algorithm can effectively transmit and recover image information in both AWGN and Rayleigh fading channels, and the peak signal-to-noise ratio (PSNR) is improved by 1-2 dB compared with other algorithms under the same simulation conditions.
Keywords: deep learning, JSCC, pilot-free picture transmission, multilevel semantic information, robustness
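For concreteness, the PSNR metric used in the comparison is the standard definition:

```python
import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a transmitted image and
    its reconstruction, the metric used to compare JSCC schemes."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```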
Procedia PDF Downloads 121
846 Modeling and Simulation of Primary Atomization and Its Effects on Internal Flow Dynamics in a High Torque Low Speed Diesel Engine
Authors: Muteeb Ulhaq, Rizwan Latif, Sayed Adnan Qasim, Imran Shafi
Abstract:
Diesel engines are highly efficient, reliable and adaptable. Most research and development to date has been directed towards high-speed diesel engines for commercial use, where the objective is to optimize acceleration while reducing exhaust emissions to meet international standards. In high-torque low-speed engines the requirements are altogether different. These engines are mostly used in the maritime industry, in agriculture, and as static engines and compressor drives. Unfortunately, due to a lack of research and development, these engines have low efficiency and high soot emissions, and one of the most effective ways to overcome these issues is efficient combustion in the engine cylinder; the fuel spray atomization process plays a vital role in defining mixture formation, fuel consumption, combustion efficiency and soot emissions. Therefore, a comprehensive understanding of the fuel spray characteristics and atomization process is of great importance. In this research, we examine the effects of primary breakup modeling on the spray characteristics under diesel engine conditions. The KH-ACT model is applied to capture the effect of aerodynamics in the engine cylinder as well as the cavitation and turbulence generated inside the injector. It is a modified form of the commonly used KH model, which considers only the aerodynamically induced breakup based on the Kelvin–Helmholtz instability. Our model is extensively evaluated by performing 3-D time-dependent simulations in OpenFOAM, an open-source flow solver. Spray characteristics such as spray penetration, liquid length, spray cone angle and Sauter mean diameter (SMD) were validated by comparing the results of OpenFOAM and MATLAB. Including the effects of cavitation and turbulence enhances primary breakup, leading to smaller droplet sizes, a decrease in liquid penetration, and an increase in the radial dispersion of the spray. All these properties favor early evaporation of the fuel, which enhances engine efficiency.
Keywords: Kelvin–Helmholtz instability, OpenFOAM, primary breakup, Sauter mean diameter, turbulence
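A sketch of the Kelvin–Helmholtz breakup step, the parent-radius relaxation at the core of both the KH and KH-ACT models; the fastest-growing wavelength and growth rate are taken as inputs from the usual correlations, and the constants shown are customary illustrative values:

```python
def kh_radius_update(r, wavelength, growth_rate, dt, b0=0.61, b1=40.0):
    """One explicit time step of the Kelvin-Helmholtz breakup model:
    the parent-drop radius relaxes towards the child radius
    r_c = B0 * Lambda with time scale tau = 3.726 * B1 * r / (Lambda * Omega).
    `wavelength` (Lambda) and `growth_rate` (Omega) of the fastest-growing
    surface wave come from the usual KH correlations; B0 and B1 are the
    customary model constants (values here are illustrative)."""
    r_child = b0 * wavelength
    if r_child >= r:                      # no breakup predicted
        return r
    tau = 3.726 * b1 * r / (wavelength * growth_rate)
    return r + dt * (r_child - r) / tau   # dr/dt = -(r - r_c) / tau
```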
Procedia PDF Downloads 212
845 Development of Generally Applicable Intravenous to Oral Antibiotic Switch Therapy Criteria
Authors: H. Akhloufi, M. Hulscher, J. M. Prins, I. H. Van Der Sijs, D. Melles, A. Verbon
Abstract:
Background: A timely switch from intravenous (IV) to oral antibiotic therapy has many advantages, such as a reduced incidence of IV-line-related infections, a decreased hospital length of stay and less workload for healthcare professionals, with equivalent patient safety. Additionally, numerous studies have demonstrated significant decreases in the costs of a timely IV-to-oral antibiotic switch, while maintaining efficacy and safety. However, considerable variation in IV-to-oral antibiotic switch criteria has been described in the literature. Here, we report the development of a set of IV-to-oral switch criteria that are generally applicable in all hospitals. Material/methods: A RAND-modified Delphi procedure composed of 3 rounds was used. This Delphi procedure is a widely used structured process for developing consensus using multiple rounds of questionnaires within a qualified panel of selected experts. The international expert panel was multidisciplinary, composed of clinical microbiologists, infectious disease consultants and clinical pharmacists. This panel of 19 experts appraised 6 major IV-to-oral antibiotic switch criteria and operationalized these criteria using 41 measurable conditions extracted from the literature. The procedure to select a concise set of IV-to-oral switch criteria included 2 questionnaire rounds and a face-to-face meeting. Results: The procedure resulted in the selection of 16 measurable conditions, which operationalize 6 major IV-to-oral antibiotic switch criteria. The following 6 major switch criteria were selected: (1) Vital signs should be good, or improving if abnormal. (2) Signs and symptoms related to the infection have to be resolved or improved. (3) The gastrointestinal tract has to be intact and functioning. (4) The oral route should not be compromised. (5) Absence of contra-indicated infections. (6) An oral variant of the antibiotic with good bioavailability has to exist. Conclusions: This systematic stepwise method, which combined evidence and expert opinion, resulted in a feasible set of 6 major IV-to-oral antibiotic switch criteria operationalized by 16 measurable conditions. This set of early IV-to-oral antibiotic switch criteria can be used in daily practice for all adult hospital patients. Future use in audits and as rules in computer-assisted decision support systems will lead to improvement of antimicrobial stewardship programs.
Keywords: antibiotic resistance, antibiotic stewardship, intravenous to oral, switch therapy
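As a hedged sketch of how the six criteria could be encoded as rules in a computer-assisted decision support system (the field names are hypothetical, and each flag would itself be derived from the 16 measurable conditions):

```python
def ready_for_oral_switch(patient):
    """Evaluate the six major IV-to-oral switch criteria for one patient
    record (a dict of booleans). Field names are hypothetical; each flag
    would be derived from the paper's 16 measurable conditions."""
    criteria = [
        patient["vitals_good_or_improving"],                # criterion 1
        patient["infection_signs_resolved_or_improved"],    # criterion 2
        patient["gi_tract_intact_and_functioning"],         # criterion 3
        patient["oral_route_not_compromised"],              # criterion 4
        patient["no_contraindicated_infection"],            # criterion 5
        patient["oral_variant_with_good_bioavailability"],  # criterion 6
    ]
    return all(criteria)
```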
Procedia PDF Downloads 357
844 Using Geo-Statistical Techniques and Machine Learning Algorithms to Model the Spatiotemporal Heterogeneity of Land Surface Temperature and its Relationship with Land Use Land Cover
Authors: Javed Mallick
Abstract:
In metropolitan areas, rapid changes in land use and land cover (LULC) have ecological and environmental consequences. Saudi Arabia's cities have experienced tremendous urban growth since the 1990s, resulting in urban heat islands, groundwater depletion, air pollution, loss of ecosystem services, and so on. This study examines the variance and heterogeneity in land surface temperature (LST) caused by LULC changes in Abha-Khamis Mushyet, Saudi Arabia, from 1990 to 2020. LULC was mapped using a support vector machine (SVM). The mono-window algorithm was used to calculate the land surface temperature (LST). To identify LST clusters, the local indicator of spatial association (LISA) model was applied to the spatiotemporal LST maps. In addition, the parallel coordinate plot (PCP) method was used to investigate the relationship between LST clusters and urban biophysical variables as a proxy for LULC. According to the LULC maps, urban areas increased by more than 330% between 1990 and 2018. Between 1990 and 2018, built-up areas had an 83.6% transitional probability. Furthermore, between 1990 and 2020, vegetation and agricultural land were converted into built-up areas at rates of 17.9% and 21.8%, respectively. Uneven LULC changes in built-up areas resulted in more LST hotspots. LST hotspots were associated with high NDBI but not with NDWI or NDVI. This study could assist policymakers in developing mitigation strategies for urban heat islands.
Keywords: land use land cover mapping, land surface temperature, support vector machine, LISA model, parallel coordinate plot
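The LISA clustering step can be sketched with a local Moran's I computation; the rook-contiguity weights and wrap-around edges below are simplifying assumptions, and permutation-based significance testing is omitted:

```python
import numpy as np

def local_morans_i(lst):
    """Local Moran's I on a gridded LST raster with rook (4-neighbour)
    contiguity weights -- a minimal sketch of the LISA statistic used to
    find LST hot- and cold-spot clusters. Edges wrap for simplicity."""
    z = (lst - lst.mean()) / lst.std()
    lag = np.zeros_like(z)
    for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
        lag += np.roll(z, shift, axis=axis)
    lag /= 4.0                    # row-standardised spatial lag
    return z * lag                # I_i > 0: HH or LL cluster (hot/cold spot)
```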
Procedia PDF Downloads 78
843 Influence of Surface Wettability on Imbibition Dynamics of Protein Solution in Microwells
Authors: Himani Sharma, Amit Agrawal
Abstract:
The stability of the Cassie and Wenzel wetting states depends on the intrinsic contact angle and the geometric features of a surface, which can be exploited to capture biofluids in microwells. However, the mechanism of imbibition of biofluids into microwells is not well understood in terms of the wettability of the substrate. In this work, we experimentally demonstrated the filling dynamics of hydrophilic and hydrophobic microwells by protein solutions. Towards this, we utilized a lotus leaf as a mold to fabricate microwells on a polydimethylsiloxane (PDMS) surface. The lotus leaf, containing micrometer-sized blunt-conical pillars with a height of 8-15 µm and a diameter of 3-8 µm, was transferred onto the PDMS. Furthermore, the PDMS surface was treated with oxygen plasma to render it hydrophilic. 10 µL droplets containing fluorescein isothiocyanate (FITC)-labelled bovine serum albumin (BSA) were rested on both hydrophobic (θa = 108°, where θa is the apparent contact angle) and hydrophilic (θa = 60°) PDMS surfaces. Time-dependent fluorescence microscopy was conducted on these modified PDMS surfaces by recording the fluorescence intensity over a 5-minute period. It was observed that initially (at t = 1 min) FITC-BSA accumulated on the periphery of both hydrophilic and hydrophobic microwells due to incomplete penetration of the liquid-gas meniscus. This deposition of FITC-BSA on the periphery of the microwells did not change with time for the hydrophobic surfaces, whereas complete filling occurred in the hydrophilic microwells (at t = 5 min). This is attributed to a gradual movement of the three-phase contact line along the vertical surface of the hydrophilic microwells, as compared to stable pinning in the hydrophobic microwells, as confirmed by Surface Evolver simulations. In addition, if cavities are present on hydrophobic surfaces, air bubbles will be trapped inside the cavities once an aqueous solution is placed over these surfaces, resulting in the Cassie-Baxter wetting state. This condition hinders the trapping of proteins inside the microwells. Thus, it is necessary to impart hydrophilicity to the microwell surfaces so as to induce the Wenzel state, such that the entire solution is fully in contact with the walls of the microwells. The imbibition of microwells by protein solutions was analyzed in terms of fluorescence intensity versus time. The present work underlines the importance of the geometry of microwells and the surface wettability of the substrate in wetting and effective capturing of solid sub-phases in biofluids.
Keywords: BSA, microwells, surface evolver, wettability
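For reference, the two wetting states discussed above are classically described by the Wenzel and Cassie-Baxter relations (standard results quoted for context, not taken from the paper), where θ is the intrinsic and θ* the apparent contact angle, r the roughness ratio and f_s the solid fraction in contact with the liquid:

```latex
% Wenzel state: the liquid fully wets the rough texture (r >= 1)
\cos\theta^{*} = r\cos\theta
% Cassie-Baxter state: the liquid rests on pillar tops over trapped air
\cos\theta^{*} = f_{s}\left(\cos\theta + 1\right) - 1
```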
Procedia PDF Downloads 200
842 Storm-Runoff Simulation Approaches for External Natural Catchments of Urban Sewer Systems
Authors: Joachim F. Sartor
Abstract:
According to German guidelines, external natural catchments are larger sub-catchments without significant portions of impervious areas, which possess a surface drainage system and empty into a sewer network. Basically, such catchments should be disconnected from sewer networks, particularly from combined systems. If this is not possible due to local conditions, their flow hydrographs have to be considered in the design of sewer systems, because the impact may be significant. Since there is a lack of sufficient measurements of storm-runoff events for such catchments, and hence of verified simulation methods to analyze their design flows, German standards give only general advice and demand special consideration in such cases. Compared to urban sub-catchments, external natural catchments exhibit greatly different flow characteristics. With increasing area size, their hydrological behavior approximates that of rural catchments, e.g. sub-surface flow may prevail and lag times are comparably long. Only a few observed peak flow values and simple (mostly empirical) approaches are offered by the literature for Central Europe. Most of them are at least helpful for cross-checking results achieved by simulation without calibration. Using storm-runoff data from five monitored rural watersheds in the west of Germany with catchment areas between 0.33 and 1.07 km², the author investigated, by multiple-event simulation, three different approaches for determining the rainfall excess. These are the modified SCS variable run-off coefficient methods by Lutz and Zaiß as well as the soil moisture model by Ostrowski. Selection criteria for storm events from continuous precipitation data were taken from the recommendations of M 165, and the runoff concentration method (parallel cascades of linear reservoirs) from a DWA working report to which the author had contributed. In general, the two run-off coefficient methods showed results of sufficient accuracy for most practical purposes. The soil moisture model showed no significantly better results, at least not to such a degree that it would justify the additional data collection that its parameter determination requires. Particularly typical convective summer events after long dry periods, which are often decisive for sewer networks (not so much for rivers), showed discrepancies between simulated and measured flow hydrographs.
Keywords: external natural catchments, sewer network design, storm-runoff modelling, urban drainage
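For orientation, the classical SCS curve-number relation on which the Lutz and Zaiß variants build can be sketched as follows; the variants themselves differ in how the retention and run-off coefficient are derived:

```python
def scs_runoff_depth(p_mm, cn):
    """Direct-runoff depth (mm) from event rainfall via the classical
    SCS curve-number relation, Q = (P - Ia)^2 / (P - Ia + S) with
    Ia = 0.2 * S. Shown only as the common starting point of the
    modified variable run-off coefficient methods cited above."""
    s = 25400.0 / cn - 254.0          # potential retention (mm)
    ia = 0.2 * s                      # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(round(scs_runoff_depth(50.0, 75), 1))  # ~9.3 mm of runoff
```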
Procedia PDF Downloads 153
841 Model-Based Fault Diagnosis in Carbon Fiber Reinforced Composites Using Particle Filtering
Abstract:
Carbon fiber reinforced composites (CFRP) used as aircraft structures are subject to lightning strikes, putting structural integrity at risk. Indirect damage may occur after a lightning strike, where the internal structure can be damaged by the excessive heat induced by the lightning current while the surface of the structure remains intact. Three damage modes may be observed after a lightning strike: fiber breakage, inter-ply delamination and intra-ply cracks. The assessment of internal damage states in composites is challenging due to the complicated microstructure, inherent uncertainties, and existence of multiple damage modes. In this work, a model-based approach is adopted to diagnose faults in carbon composites after lightning strikes. A resistor network model is implemented to relate the overall electrical and thermal conduction behavior under a simulated lightning current waveform to the intrinsic temperature-dependent material properties, microstructure and degradation of the materials. A fault detection and identification (FDI) module utilizes the physics-based model and a particle filtering algorithm to identify the damage mode as well as calculate the probability of structural failure. Extensive simulation results are provided to substantiate the proposed fault diagnosis methodology for both single-fault and multiple-fault cases. The approach is also demonstrated on transient resistance data collected from an IM7/Epoxy laminate under a simulated lightning strike.
Keywords: carbon composite, fault detection, fault identification, particle filter
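A generic bootstrap particle-filter update of the kind applied here can be sketched as follows; the propagation and measurement functions stand in for the resistor-network model and are assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement,
                         propagate, predict_measurement, noise_std):
    """One bootstrap particle-filter update for damage-state estimation:
    propagate particles through the model, reweight by the measurement
    likelihood, then resample. `propagate` and `predict_measurement`
    stand in for the physics-based resistor-network model."""
    particles = propagate(particles)                 # prior dynamics
    predicted = predict_measurement(particles)       # model response
    likelihood = np.exp(-0.5 * ((measurement - predicted) / noise_std) ** 2)
    weights = weights * likelihood
    weights /= weights.sum()
    # Systematic resampling to avoid weight degeneracy.
    positions = (rng.random() + np.arange(len(weights))) / len(weights)
    idx = np.searchsorted(np.cumsum(weights), positions)
    particles = particles[idx]
    weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```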
Procedia PDF Downloads 196
840 Pattern of Adverse Drug Reactions with Platinum Compounds in Cancer Chemotherapy at a Tertiary Care Hospital in South India
Authors: Meena Kumari, Ajitha Sharma, Mohan Babu Amberkar, Hasitha Manohar, Joseph Thomas, K. L. Bairy
Abstract:
Aim: To evaluate the pattern of occurrence of adverse drug reactions (ADRs) to platinum compounds in cancer chemotherapy at a tertiary care hospital. Methods: This was a retrospective, descriptive case record study of patients admitted to the medical oncology ward of Kasturba Hospital, Manipal, from July to November 2012. The inclusion criteria comprised patients of both sexes and all ages diagnosed with cancer who were on platinum compounds and developed at least one adverse drug reaction during or after the treatment period. The CDSCO proforma was used for reporting ADRs. Causality was assessed using the Naranjo algorithm. Results: A total of 65 patients were included in the study. Females comprised 67.69% of the patients, the rest being males. Around 49.23% of the ADRs were seen in the age group of 41-60 years, followed by 20% in 21-40 years, 18.46% in patients over 60 years and 12.31% in the 1-20 years age group. The anticancer agents that caused adverse drug reactions in our study were carboplatin (41.54%), cisplatin (36.92%) and oxaliplatin (21.54%). The most common adverse drug reactions observed were oral candidiasis (21.53%), vomiting (16.92%), anaemia (12.3%), diarrhoea (12.3%) and febrile neutropenia (0.08%). The causality assessment of most of the cases was 'probable'. Conclusion: The adverse effects of chemotherapeutic agents are a matter of concern in the pharmacological management of cancer, as they affect the quality of life of patients. This information would be useful in identifying and minimizing preventable adverse drug reactions while generally enhancing the knowledge of prescribers to deal with these adverse drug reactions more efficiently.
Keywords: adverse drug reactions, platinum compounds, cancer, chemotherapy
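The Naranjo causality categories follow fixed cut-offs on the summed questionnaire score, which can be sketched as:

```python
def naranjo_causality(total_score):
    """Map a summed Naranjo questionnaire score (ten weighted
    yes/no/unknown items) to the standard causality category,
    using the published Naranjo cut-offs."""
    if total_score >= 9:
        return "definite"
    if total_score >= 5:
        return "probable"
    if total_score >= 1:
        return "possible"
    return "doubtful"

print(naranjo_causality(6))  # "probable", the most common result here
```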
Procedia PDF Downloads 433
839 Heuristics for Optimizing Power Consumption in the Smart Grid
Authors: Zaid Jamal Saeed Almahmoud
Abstract:
Our increasing reliance on electricity, with inefficient consumption trends, has resulted in several economic and environmental threats. These threats include wasting billions of dollars, draining limited resources, and elevating the impact of climate change. As a solution, the smart grid is emerging as the future power grid, with smart techniques to optimize power consumption and electricity generation. Minimizing the peak power consumption under a fixed delay requirement is a significant problem in the smart grid. In addition, matching demand to supply is a key requirement for the success of the future electricity grid. In this work, we consider the problem of minimizing the peak demand under appliance constraints by scheduling power jobs with uniform release dates and deadlines. As the problem is known to be NP-hard, we propose two versions of a heuristic algorithm for solving it. Our theoretical analysis and experimental results show that our proposed heuristics outperform existing methods by providing a better approximation to the optimal solution. In addition, we consider dynamic pricing methods to minimize the peak load and match demand to supply in the smart grid. Our contribution is the proposal of generic, as well as customized, pricing heuristics to minimize the peak demand and match demand with supply. In addition, we propose optimal pricing algorithms that can be used when the maximum deadline period of the power jobs is relatively small. Finally, we provide theoretical analysis and conduct several experiments to evaluate the performance of the proposed algorithms.
Keywords: heuristics, optimization, smart grid, peak demand, power supply
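A hedged sketch of one possible greedy heuristic for the stated setting, with jobs sharing a common release date and deadline and placed so as to keep the peak low; it illustrates the problem, not the paper's specific algorithms:

```python
def schedule_min_peak(jobs, horizon):
    """Greedy peak-minimisation heuristic: jobs are (duration in slots,
    power draw) pairs, all released at slot 0 with deadline `horizon`.
    Each job, largest power first, is placed at the contiguous start
    slot that adds the smallest local peak."""
    load = [0.0] * horizon
    plan = {}
    for j, (dur, power) in sorted(enumerate(jobs),
                                  key=lambda kv: -kv[1][1]):
        best_start, best_peak = 0, float("inf")
        for s in range(horizon - dur + 1):
            peak = max(load[s:s + dur]) + power
            if peak < best_peak:
                best_start, best_peak = s, peak
        for t in range(best_start, best_start + dur):
            load[t] += power
        plan[j] = best_start
    return plan, max(load)

jobs = [(2, 3.0), (1, 2.0), (3, 1.0)]   # (slots, kW)
print(schedule_min_peak(jobs, horizon=4))
```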
Procedia PDF Downloads 89
838 Applying Kinect on the Development of a Customized 3D Mannequin
Authors: Shih-Wen Hsiao, Rong-Qi Chen
Abstract:
In the field of fashion design, the 3D mannequin is an assisting tool that can rapidly realize design concepts. When the 3D mannequin is applied to computer-aided fashion design, it connects with the development and application of design platforms and systems. It is therefore critical to develop a 3D mannequin module that corresponds to the needs of fashion design. This research proposes a concrete plan for developing and constructing a 3D mannequin system with Kinect. Ergonomic measurements of the subject's features are attained in real time through the Kinect depth camera, and mesh morphing is then implemented by transforming the locations of the control points on the model according to these ergonomic data, yielding an exclusive 3D mannequin model. In the proposed methodology, after the points scanned by the Kinect are revised for accuracy and smoothed, a complete human shape is reconstructed by the ICP algorithm together with image processing methods. The subject's features can then be recognized, analyzed and measured. Furthermore, the ergonomic measurements are applied to shape morphing of the 3D mannequin, which is reconstructed from feature curves and divided accordingly. Since a standardized and customer-oriented 3D mannequin can be generated through subdivision, this research can be applied to fashion design or to the presentation and display of 3D virtual clothes. To examine the practicality of the proposed structure, a 3D mannequin system was constructed with a Java program, and its practicability was verified through repeated experiments.
Keywords: 3D mannequin, Kinect scanner, iterative closest point, shape morphing, subdivision
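A minimal sketch of the ICP alignment step (nearest-neighbour matching plus a closed-form Kabsch/SVD rigid transform); a real Kinect pipeline would add the accuracy revision and smoothing mentioned above:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """Minimal point-to-point ICP: at each iteration, match every source
    point to its nearest target point, then solve the best rigid
    transform in closed form via SVD (Kabsch). `source` and `target`
    are (N, 3) arrays; outlier rejection is omitted for brevity."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)            # nearest-neighbour matches
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:            # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_t     # apply rigid transform
    return src
```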
Procedia PDF Downloads 309
837 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values
Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi
Abstract:
A major challenge in medical studies, especially longitudinal ones, is the problem of missing measurements, which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's disease studies have focused on the delineation of Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN), which is essential for developing effective and early treatment methods. To address the aforementioned challenges, this paper explores the potential of the eXtreme Gradient Boosting (XGBoost) algorithm for handling missing values in multiclass classification. We seek a generalized classification scheme where all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study, and in the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes AD, CN, EMCI, and LMCI. Using a 10-fold cross-validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62% and a recall of 80.51%, supporting the more natural and promising multiclass classification.
Keywords: eXtreme gradient boosting, missing data, Alzheimer's disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest
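A sketch of the classification setup, assuming a hypothetical ADNI feature loader and illustrative, untuned hyperparameters:

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import cross_val_score

# Hypothetical loader: X holds multimodal features with ~28% NaNs,
# y holds labels 0..3 for CN, EMCI, LMCI, AD.
X, y = load_adni_features()  # placeholder, not a real ADNI API

# XGBoost handles NaNs natively by learning default split directions;
# the hyperparameters below are illustrative, not the paper's values.
clf = xgb.XGBClassifier(
    objective="multi:softprob",
    n_estimators=300,
    max_depth=4,
    learning_rate=0.1,
)
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(scores.mean())   # the paper reports ~80.5% accuracy
```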
Procedia PDF Downloads 189
836 Fragment Domination for Many-Objective Decision-Making Problems
Authors: Boris Djartov, Sanaz Mostaghim
Abstract:
This paper presents a number-based dominance method. The main idea is to fragment the many attributes of the problem into subsets suitable for the well-established concept of Pareto dominance. Although similar methods can be found in the literature, they focus on comparing the solutions one objective at a time, while the focus of this method is to compare entire subsets of the objective vector. Given the nature of the method, it is computationally costlier than other methods, and it is thus geared more towards selecting an option from a finite set of alternatives, where each solution is defined by multiple objectives. The need for this method was motivated by dynamic alternate airport selection (DAAS). In DAAS, pilots, while en route to their destination, can find themselves in a situation where they need to select a new landing airport. In such a predicament, they need to consider multiple alternatives with many different characteristics, such as wind conditions, available landing distance, and the fuel needed to reach them. Hence, this method is primarily aimed at human decision-makers. Many methods within the field of multi-objective and many-objective decision-making rely on the decision-maker to initially provide the algorithm with preference points and weight vectors; however, this method aims to omit this very difficult step, especially when the number of objectives is large. The proposed method will be compared to the Favour (1 − k)-Dom and L-dominance (LD) methods. The test will be conducted using well-established test problems from the literature, such as the DTLZ problems. The proposed method is expected to outperform the currently available methods in the literature and hopefully provide future decision-makers and pilots with support when dealing with many-objective optimization problems.
Keywords: multi-objective decision-making, many-objective decision-making, multi-objective optimization, many-objective optimization
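A hedged sketch of the fragment idea follows; the majority-count rule used to combine the fragment-wise Pareto comparisons is an assumed illustration, not the paper's exact definition:

```python
def pareto_dominates(a, b):
    """Standard Pareto dominance for minimisation: a is no worse in every
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def fragment_dominates(a, b, fragments):
    """Split the objective vector into index subsets ("fragments") and
    ask whether a Pareto-dominates b on more fragments than the reverse.
    The counting rule here is an assumed illustration of the concept."""
    wins_a = sum(pareto_dominates([a[i] for i in f], [b[i] for i in f])
                 for f in fragments)
    wins_b = sum(pareto_dominates([b[i] for i in f], [a[i] for i in f])
                 for f in fragments)
    return wins_a > wins_b

# Ten objectives split into three fragments:
frags = [[0, 1, 2], [3, 4, 5, 6], [7, 8, 9]]
```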
Procedia PDF Downloads 91
835 Biodegradation of Phenazine-1-Carboxylic Acid by Rhodanobacter sp. PCA2 Proceeds via Decarboxylation and Cleavage of Nitrogen-Containing Ring
Authors: Miaomiao Zhang, Sabrina Beckmann, Haluk Ertan, Rocky Chau, Mike Manefield
Abstract:
Phenazines are a large class of nitrogen-containing aromatic heterocyclic compounds, which are almost exclusively produced by bacteria from diverse genera including Pseudomonas and Streptomyces. Phenazine-1-carboxylic acid (PCA), one of the 'core' phenazines, is converted from chorismic acid before being modified into other phenazine derivatives in different cells. Phenazines have attracted enormous interest because of their multiple roles in biocontrol, bacterial interaction, biofilm formation and the fitness of their producers. However, despite their ecological importance, degradation as a part of the fate of phenazines has so far received extremely limited attention. Here, to isolate PCA-degrading bacteria, 200 mg L-1 PCA was supplied as the sole carbon, nitrogen and energy source in minimal mineral medium. Quantitative PCR and reverse-transcription PCR were employed to study the abundance and activity, respectively, of the functional gene MFORT 16269 in PCA degradation. Intermediates and products of PCA degradation were identified with LC-MS/MS. After enrichment and isolation, a PCA-degrading strain was selected from soil and designated Rhodanobacter sp. PCA2 based on full-length 16S rRNA sequencing. As determined by HPLC, strain PCA2 consumed 200 mg L-1 (836 µM) PCA at a rate of 17.4 µM h-1, accompanied by a significant cell yield increase from 1.92 × 105 to 3.11 × 106 cells per mL. Strain PCA2 was capable of degrading other phenazines as well, including phenazine (4.27 µM h-1), pyocyanin (2.72 µM h-1), neutral red (1.30 µM h-1) and 1-hydroxyphenazine (0.55 µM h-1). Moreover, during the incubation, transcript copies of the MFORT 16269 gene increased significantly from 2.13 × 106 to 8.82 × 107 copies mL-1, which was 2.77 times faster than the increase in the corresponding gene copy number (2.20 × 106 to 3.32 × 107 copies mL-1), indicating that the MFORT 16269 gene was activated and played a role in PCA degradation. As analyzed by LC-MS/MS, decarboxylation from the ring structure was determined to be the first step of PCA degradation, followed by cleavage of the nitrogen-containing ring by a dioxygenase, which converted phenazine to nitrosobenzene. Subsequently, phenylhydroxylamine was detected after incubation for two days and was then transformed into aniline and catechol. Additionally, genomic and proteomic analyses were carried out for strain PCA2. Overall, the findings presented here show that the newly isolated strain Rhodanobacter sp. PCA2 is capable of degrading phenazines through decarboxylation and cleavage of the nitrogen-containing ring, during which the MFORT 16269 gene is activated and plays an important role.
Keywords: decarboxylation, MFORT 16269 gene, phenazine-1-carboxylic acid degradation, Rhodanobacter sp. PCA2
Procedia PDF Downloads 224
834 Production of Bio-Composites from Cocoa Pod Husk for Use in Packaging Materials
Authors: L. Kanoksak, N. Sukanya, L. Napatsorn, T. Siriporn
Abstract:
A growing population and demand for packaging are driving up the usage of natural resources as raw materials in the pulp and paper industry, and the long-term environmental effects are disrupting people's way of life all across the planet. It is therefore necessary to find pulp sources that can replace wood pulp. Various other plants or plant parts can be employed as substitute raw materials; for example, pulp and paper can be made from agricultural residues in place of wood. In this study, cocoa pod husks, an agricultural residue of the cocoa and chocolate industries, were used to develop composite materials to replace wood pulp in packaging materials, with the paper coated with polybutylene adipate-co-terephthalate (PBAT). Fresh cocoa pod husks were selected, cleaned, reduced in size, and dried, and their morphology and elemental composition were studied. To evaluate the mechanical and physical properties, the dried cocoa husks were pulped using the soda-pulping process. After selecting the best formulations, paper with a PBAT bioplastic coating was produced on a paper-forming machine, and its physical and mechanical properties were studied. Field Emission Scanning Electron Microscopy with Energy Dispersive X-Ray Spectroscopy (FESEM/EDS) of the dried cocoa pod husks revealed their main components; no porous structure was found, and the fibers were firmly bound, suitable for use as a raw material for pulp manufacturing. Dry cocoa pod husks contain the major elements carbon (C) and oxygen (O), while magnesium (Mg), potassium (K) and calcium (Ca) were minor elements found at very low levels. Among the soda-pulping formulations, the SAQ5 formula gave the best pulp yield, moisture content and water drainage. To achieve the basis weight of the TAPPI T205 sp-02 standard, cocoa pod husk pulp and modified starch were mixed. The paper was then coated with PBAT bioplastic, produced from bioplastic resin by the blown-film extrusion technique, and its contact angle, dispersion component and polar component were measured. The coating is an effective hydrophobic material for rigid packaging applications.
Keywords: cocoa pod husks, agricultural residue, composite material, rigid packaging
Procedia PDF Downloads 77
833 Artificial Intelligence-Generated Previews of Hyaluronic Acid-Based Treatments
Authors: Ciro Cursio, Giulia Cursio, Pio Luigi Cursio, Luigi Cursio
Abstract:
Communication between practitioner and patient is of the utmost importance in aesthetic medicine: as of today, images of previous treatments are the most common tool used by doctors to describe and anticipate future results for their patients. However, using photos of other people often reduces the engagement of the prospective patient and is further limited by the number and quality of pictures available to the practitioner. Pre-existing work solves this issue in two ways: 3D scanning of the area with manual editing of the 3D model by the doctor, or automatic prediction of the treatment by warping the image with hand-written parameters. The first approach requires manual intervention by the doctor, while the second generates results that are not always realistic. Thus, in one case there is significant manual work required of the doctor, and in the other the prediction looks artificial. We propose an AI-based algorithm that autonomously generates a realistic prediction of treatment results. For the purpose of this study, we focus on hyaluronic acid treatments in the facial area. Our approach takes into account the individual characteristics of each face, and furthermore, the prediction system allows the patient to decide which area of the face she wants to modify. We show that the predictions generated by our system are realistic: first, the quality of the generated images is on par with real images; second, the prediction matches the actual results obtained after the treatment is completed. In conclusion, the proposed approach provides a valid tool for doctors to show patients what they will look like before deciding on the treatment.
Keywords: prediction, hyaluronic acid, treatment, artificial intelligence
Procedia PDF Downloads 116
832 College Faculty Perceptions of Instructional Strategies That Are Effective for Students with Dyslexia
Authors: Samantha R. Dutra
Abstract:
There are many issues that students face in college, such as academic struggles, financial issues, family responsibilities, and vocational problems. Students with dyslexia struggle even more with these problems compared to other students. This qualitative study examines faculty perceptions of instructing students with dyslexia. The study is important to the human services and post-secondary education fields due to the increase in students with disabilities enrolled in college. It is also significant because of the reported bias faced by students with dyslexia and their academic failure. When students with learning disabilities (LDs) such as dyslexia experience bias, discrimination, and isolation, they are more apt not to seek accommodations, to lack communication with faculty, and to drop out or fail. College students with dyslexia often take longer to complete their post-secondary education and are more likely to withdraw or drop out without earning a degree. Faculty attitudes and academic cultures are major barriers to the success and use of accommodations as well as modified instruction for students with disabilities, both of which lead to student success. Faculty members are often uneducated or misinformed regarding students with dyslexia. More importantly, many faculty members are unaware of the ethical and legal implications they face regarding accommodating students with dyslexia. Instructor expectations can generally be defined as instructors' understanding and perceptions of students regarding their academic success. Skewed instructor expectations can affect how instructors interact with their students and can also affect student success. This is true for students with dyslexia in that instructors may have lower and biased expectations of these students and therefore directly impact students' academic successes and failures. It is vital to understand how instructor attitudes affect the academic achievement of dyslexic students. This study will examine faculty perceptions of instructing students with dyslexia and faculty attitudes towards accommodations and institutional support. The literature concludes that students with dyslexia have many deficits and several learning needs. Furthermore, these are the students with the highest dropout and failure rates, as well as the lowest retention rates. Students with disabilities generally have many reasons why accommodations and supports do not help, although some research suggests that accommodations do help students and show positive outcomes. Many improvements need to be made among student support service personnel, faculty, and administrators regarding providing access and adequate support for students with dyslexia. As the research also suggests, providing more efficient and effective accommodations may increase positive student as well as faculty attitudes in college and may improve student outcomes overall.
Keywords: dyslexia, faculty perception, higher education, learning disability
Procedia PDF Downloads 139
831 Quantum Statistical Machine Learning and Quantum Time Series
Authors: Omar Alzeley, Sergey Utev
Abstract:
Minimizing a constrained multivariate function is fundamental to machine learning, and such algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of optimization, and this optimization is central to learning theory. One approach to complex systems, in which the dynamics of the system are inferred from a statistical analysis of the fluctuations in time of some associated observable, is time series analysis. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. Firstly, we propose a quantum time series (QTS) model. Although the Hamiltonian technique has become an established tool for detecting deterministic chaos, other approaches are emerging. The quantum probabilistic technique is used to motivate the construction of our QTS model. The QTS model resembles the quantum dynamic model which has been applied to financial data. Secondly, various statistical methods, including machine learning algorithms such as the Kalman filter, are applied to estimate and analyze the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo have been used to support our investigations. The proposed model has been examined using real and simulated data. We establish the relation between quantum statistical machine learning and quantum time series via random matrix theory. It is interesting to note that the primary focus of the application of QTS in the field of quantum chaos was to find a model that explains chaotic behaviour. This model may reveal further insight into quantum chaos.
Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series
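As an illustration of the state-space estimation mentioned above, a scalar Kalman filter for an AR(1) latent state can be sketched as follows (a minimal classical example, not the paper's quantum model):

```python
import numpy as np

def kalman_filter(y, a, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a latent AR(1) state x_t = a*x_{t-1} + w_t
    observed as y_t = x_t + v_t; q and r are the process and observation
    noise variances. A minimal sketch of the filtering step used to
    estimate unknown time-series parameters."""
    x, p = x0, p0
    estimates = []
    for obs in y:
        x, p = a * x, a * a * p + q            # predict
        k = p / (p + r)                        # Kalman gain
        x, p = x + k * (obs - x), (1 - k) * p  # update
        estimates.append(x)
    return np.array(estimates)
```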
Procedia PDF Downloads 469
830 Artificial Membrane Comparison for Skin Permeation in Skin PAMPA
Authors: Aurea C. L. Lacerda, Paulo R. H. Moreno, Bruna M. P. Vianna, Cristina H. R. Serra, Airton Martin, André R. Baby, Vladi O. Consiglieri, Telma M. Kaneko
Abstract:
The modified Franz cell is the most widely used model for in vitro permeation studies; however, it still presents some disadvantages. Thus, alternative methods have been developed, such as Skin PAMPA, a bio-artificial membrane that has been applied to estimate the skin penetration of xenobiotics based on a high-throughput permeability model. Skin PAMPA's greatest advantage is that it allows more tests to be carried out quickly and inexpensively. The membrane system mimics the characteristics of the stratum corneum, which is the primary skin barrier. The barrier properties are given by corneocytes embedded in a multilamellar lipid matrix. This layer is the main penetration route through the paracellular permeation pathway, and it consists of a mixture of cholesterol, ceramides, and fatty acids as the dominant components. However, there is no consensus on the membrane composition. The objective of this work was to compare the performance of different bio-artificial membranes for studying permeation in the Skin PAMPA system. Material and methods: In order to mimic the lipid composition present in the human stratum corneum, six membranes were developed. The membrane composition was an equimolar mixture of cholesterol, ceramides 1-O-C18:1, C22, and C20, plus fatty acids C20 and C24. The membrane integrity assay was based on the transport of Brilliant Cresyl Blue, which has low permeability, and Lucifer Yellow, which has very poor permeability and should effectively be completely rejected. The membranes were characterized using confocal laser Raman spectroscopy, with a laser stabilized at 785 nm, a 10-second integration time and 2 accumulations. The membrane behaviour on the PAMPA system was statistically evaluated, and all of the compositions showed integrity and permeability. The confocal Raman spectra obtained in the region of 800-1200 cm-1, which is associated with the C-C stretches of the carbon scaffold of the stratum corneum lipids, showed a similar pattern for all the membranes. The ceramides, long-chain fatty acids and cholesterol in equimolar ratio yielded lipid mixtures with self-organization capability, similar to that occurring in the stratum corneum. Conclusion: The artificial biological membranes studied for Skin PAMPA were similar to one another and of comparable properties to the stratum corneum.
Keywords: bio-artificial membranes, comparison, confocal Raman, skin PAMPA
Procedia PDF Downloads 509
829 Implementation of a Monostatic Microwave Imaging System Using a UWB Vivaldi Antenna
Authors: Babatunde Olatujoye, Binbin Yang
Abstract:
Microwave imaging is a portable, noninvasive, and non-ionizing imaging technique that employs low-power microwave signals to reveal objects in the microwave frequency range. This technique has immense potential for adoption in commercial and scientific applications such as security scanning, material characterization, and nondestructive testing. This work presents a monostatic microwave imaging setup using an ultra-wideband (UWB), low-cost, miniaturized Vivaldi antenna with a bandwidth of 1-6 GHz. The backscattered signals (S-parameters) of the Vivaldi antenna used for scanning targets were measured in the lab using a vector network analyzer (VNA). An automated two-dimensional (2-D) scanner was employed to move the transceiver and collect the measured scattering data from different positions. The targets consist of four metallic objects, each with a distinct shape. A similar setup was also simulated in Ansys HFSS. A high-resolution Back Propagation Algorithm (BPA) was applied to both the simulated and experimental backscattered signals. The BPA utilizes the phase and amplitude information recorded over a two-dimensional aperture of 50 cm × 50 cm with a discrete step size of 2 cm to reconstruct a focused image of the targets. The adoption of the BPA was demonstrated by coherently resolving and reconstructing reflection signals from conventional time-of-flight profiles. For both the simulated and experimental data, the BPA accurately reconstructed a high-resolution 2-D image of the targets in terms of shape and location. A further improvement in target resolution was achieved by applying a filtering method in the frequency domain. Keywords: back propagation, microwave imaging, monostatic, vivaldi antenna, ultra wideband
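As a rough illustration of the back-propagation reconstruction described above, the sketch below coherently sums monostatic frequency-domain data over a scan aperture, compensating the round-trip phase 2kR for every image pixel. It is simplified to a 1-D linear scan with one synthetic point target; the geometry, array sizes, and variable names are assumptions, not the authors' setup.

```python
import numpy as np

# Back-projection sketch (not the paper's code): for each pixel, multiply
# the measured reflections S[pos, freq] by exp(+j*2*k*R) and sum, so that
# contributions from a true scatterer location add in phase.
c = 3e8                                   # speed of light (m/s)
freqs = np.linspace(1e9, 6e9, 101)        # 1-6 GHz sweep, as in the abstract
xs = np.arange(-0.25, 0.26, 0.02)         # 50 cm aperture, 2 cm steps (1-D here)
k = 2 * np.pi * freqs / c                 # wavenumbers

# Synthetic monostatic data: a point target at x = 0.05 m, range z0 = 0.30 m
z0, xt = 0.30, 0.05
R_true = np.sqrt((xs[:, None] - xt) ** 2 + z0 ** 2)
S = np.exp(-1j * 2 * k[None, :] * R_true)

# Image grid (cross-range x, down-range z)
xi = np.linspace(-0.25, 0.25, 101)
zi = np.linspace(0.10, 0.50, 81)
image = np.zeros((len(zi), len(xi)), dtype=complex)
for m, z in enumerate(zi):
    for n, x in enumerate(xi):
        R = np.sqrt((xs - x) ** 2 + z ** 2)            # antenna-to-pixel ranges
        phase = np.exp(1j * 2 * k[None, :] * R[:, None])
        image[m, n] = np.sum(S * phase)                 # coherent summation

peak = np.unravel_index(np.abs(image).argmax(), image.shape)
print("peak at x =", round(xi[peak[1]], 3), "m, z =", round(zi[peak[0]], 3), "m")
```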
Procedia PDF Downloads 23
828 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods
Authors: Sohyoung Won, Heebal Kim, Dajeong Lim
Abstract:
Genomic prediction is an effective way to measure the abilities of livestock for breeding, based on genomic estimated breeding values: values statistically predicted from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability of a quantitative trait locus being in strong linkage disequilibrium (LD) with the markers is higher. To use haplotypes efficiently in genomic prediction, optimal ways of defining haplotypes must be found. In this study, 770K SNP chip data were collected from a Hanwoo (Korean cattle) population consisting of 2,506 cattle. Haplotypes were first defined in three different ways: based on 1) the length of the haplotype (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, the haplotypes defined by each method were set to comparable sizes: haplotypes defined to contain an average of 5, 10, 20, or 50 SNPs were tested in each method. A modified GBLUP method using haplotype alleles as predictor variables was implemented to test the prediction reliability of each haplotype set. The conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was also tested to evaluate the performance of the haplotype sets in genomic prediction. Carcass weight was used as the test phenotype. As a result, haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP, with few differences in reliability between the haplotype-defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes of around 20 SNPs can be optimal markers for genomic prediction. When the numbers of alleles generated by the haplotype-defining methods were compared, clustering by LD generated the fewest alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables decreased when the LD-based method was used, while all three haplotype-defining methods showed similar performance. This suggests that defining haplotypes based on LD can reduce computational cost and allows efficient prediction. Finding optimal ways to define haplotypes and using haplotype alleles as markers can provide improved performance and efficiency in genomic prediction. Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium
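As an illustration of the GBLUP machinery the abstract builds on, the toy sketch below solves Henderson's mixed-model equations on simulated data; the columns of M may equally hold SNP genotype counts or, as in the abstract's modified GBLUP, haplotype-allele counts. The population size, marker count, and variance ratio are assumptions for the sketch, not the study's values.

```python
import numpy as np

# Toy GBLUP sketch (illustrative only): genomic estimated breeding values
# from the mixed model y = Xb + Zu + e, with u ~ N(0, G * sigma_u^2).
rng = np.random.default_rng(1)
n, m = 200, 500                        # animals, markers (or haplotype alleles)
M = rng.integers(0, 3, size=(n, m)).astype(float)
M -= M.mean(axis=0)                    # center the predictor columns
G = M @ M.T / m                        # simplified genomic relationship matrix
G += np.eye(n) * 1e-6                  # numerical jitter for invertibility

true_u = G @ rng.normal(size=n) * 0.5  # simulated breeding values
y = 10.0 + true_u + rng.normal(0, 1.0, size=n)   # phenotype (e.g. carcass weight)

X = np.ones((n, 1))                    # fixed effect: overall mean
Z = np.eye(n)                          # one record per animal
lam = 1.0                              # sigma_e^2 / sigma_u^2, assumed known

# Henderson's mixed-model equations
lhs = np.block([[X.T @ X, X.T @ Z],
                [Z.T @ X, Z.T @ Z + np.linalg.inv(G) * lam]])
rhs = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(lhs, rhs)
gebv = sol[1:]                         # genomic estimated breeding values

print("prediction reliability (corr):", round(np.corrcoef(gebv, true_u)[0, 1], 3))
```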
Procedia PDF Downloads 141
827 Drought Risk Analysis Using Neural Networks for Agri-Businesses and Projects in Lejweleputswa District Municipality, South Africa
Authors: Bernard Moeketsi Hlalele
Abstract:
Drought is a complicated natural phenomenon that creates significant economic, social, and environmental problems. An analysis of paleoclimatic data indicates that severe and extended droughts are an inevitable part of the natural climatic cycle. This study characterised drought in Lejweleputswa using the Standardised Precipitation Index (SPI) to quantify drought and neural networks (NN) to predict it. A 37-year monthly precipitation time series was obtained from an online NASA database. Prior to the final analysis, this dataset was checked for outliers in SPSS; outliers were removed and replaced using the Expectation Maximization algorithm in SPSS. This was followed by both homogeneity and stationarity tests to ensure non-spurious results. A non-parametric Mann-Kendall test was used to detect monotonic trends present in the dataset. Two temporal scales, SPI-3 and SPI-12, corresponding to agricultural and hydrological drought events, showed statistically significant decreasing trends with p-values of 0.0006 and 4.9 × 10⁻⁷, respectively. The study area has been plagued by severe drought events on the SPI-3 scale, while on SPI-12 it showed an approximately 20-year cycle. The study concluded the analyses with a seasonal analysis that showed no significant trend patterns; NN were therefore used to predict possible SPI-3 values for the last season of 2018/2019 and the four seasons of 2020. The predicted drought intensities ranged from mild to extreme drought events to come. It is therefore recommended that farmers, agri-business owners, and other relevant stakeholders resort to drought-resistant crops as a means of adaptation. Keywords: drought, risk, neural networks, agri-businesses, project, Lejweleputswa
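For readers unfamiliar with the SPI computation the study relies on, the following simplified sketch fits a gamma distribution to precipitation aggregated over a k-month window and maps the fitted CDF to standard-normal quantiles. Zero-precipitation handling and per-calendar-month fitting, which a full SPI implementation requires, are omitted for brevity; the data is synthetic.

```python
import numpy as np
from scipy import stats

# Simplified SPI sketch (not the study's code): SPI-k is the standard-normal
# quantile of the fitted gamma CDF of k-month precipitation totals.
def spi(precip, k=3):
    """Return SPI-k for a 1-D array of monthly precipitation totals."""
    agg = np.convolve(precip, np.ones(k), mode="valid")   # k-month rolling sums
    shape, _, scale = stats.gamma.fit(agg, floc=0)        # 2-parameter gamma fit
    cdf = stats.gamma.cdf(agg, shape, loc=0, scale=scale)
    return stats.norm.ppf(cdf)                            # equiprobability transform

# Synthetic 37-year monthly record, matching the abstract's study period length
rng = np.random.default_rng(42)
precip = rng.gamma(shape=2.0, scale=30.0, size=37 * 12)
spi3 = spi(precip, k=3)
print("months at severe-or-worse drought (SPI-3 <= -1.5):", int((spi3 <= -1.5).sum()))
```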
Procedia PDF Downloads 128
826 Secure Automatic Key SMS Encryption Scheme Using Hybrid Cryptosystem: An Approach for One Time Password Security Enhancement
Authors: Pratama R. Yunia, Firmansyah, I., Ariani, Ulfa R. Maharani, Fikri M. Al
Abstract:
Nowadays, although the role of SMS as a means of communication has been largely taken over by online applications such as WhatsApp and Telegram, it is indisputable that SMS is still used for certain important communication needs. Among these is the sending of one-time passwords (OTP) as an authentication medium for various online applications, ranging from chatting and shopping to online banking. However, SMS usage by itself hardly guarantees the security of transmitted messages: messages transmitted between base transceiver stations (BTS) are still in plaintext, making them extremely vulnerable to eavesdropping, especially when the message is confidential, as an OTP is. One solution to this problem is an SMS application that provides security services for each transmitted message. Responding to this problem, this study designed an automatic key SMS encryption scheme as a means of securing SMS communication. The proposed scheme provides SMS sending that is automatically encrypted with constantly changing keys (automatic key update), together with automatic key exchange and automatic key generation. In terms of the security method, the proposed scheme applies cryptographic techniques with a hybrid cryptosystem mechanism. To prove the proposed scheme, a client-to-client SMS encryption application was developed on the Java platform, with AES-256 as the encryption algorithm, RSA-768 as the public and private key generator, and SHA-256 as the message hashing function. The result of this study is a secure automatic key SMS encryption scheme using a hybrid cryptosystem that can guarantee the security of every transmitted message, making it a reliable solution for sending confidential messages through SMS, although it still has weaknesses in terms of processing time. Keywords: encryption scheme, hybrid cryptosystem, one time password, SMS security
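The hybrid mechanism can be sketched as follows: a fresh AES-256 session key encrypts each message, RSA wraps the session key, and SHA-256 hashes the plaintext. The sketch below is not the paper's Java application; it uses Python's cryptography library, assumes AES-GCM as the block-cipher mode (the paper specifies only AES-256), and uses a 2048-bit RSA key for illustration rather than the paper's RSA-768.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hybrid-cryptosystem sketch: AES-256 for the message body, RSA to wrap
# the per-message session key, SHA-256 as the integrity digest.
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = recipient_key.public_key()

# --- Sender side ---
sms = b"Your OTP is 483921"
session_key = AESGCM.generate_key(bit_length=256)   # fresh key per message
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, sms, None)
wrapped_key = public_key.encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))
digest = hashes.Hash(hashes.SHA256())
digest.update(sms)
sms_hash = digest.finalize()

# --- Receiver side ---
key = recipient_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
check = hashes.Hash(hashes.SHA256())
check.update(plaintext)
assert check.finalize() == sms_hash                 # integrity verified
print(plaintext.decode())
```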
Procedia PDF Downloads 130