Search results for: whole genome sequence
440 Distribution, Settings, and Genesis of Burj-Dolomite Shale-Hosted Copper Mineralization in the Central Wadi Araba, Jordan
Authors: Mohammad Salem Abdullah Al-Hwaiti
Abstract:
Field and petrographic study of the stratiform copper mineralization in the Burj-Dolomite Shale (BDS) Formation shows that the copper mineralization occurs as hydrated copper chlorides and carbonates (mainly paratacamite and malachite, respectively) in the dolomitic lithofacies, while copper silicates (mainly chrysocolla and plancheite) are the major ore minerals in the sandy and shaly lithofacies. On the basis of the petrographic and field occurrence, three main stages operated during the development of the copper ore in the sandy and shaly lithofacies. During the first stage, amorphous chrysocolla replaced clays, feldspars, and quartz. This stage was followed by the transition from an amorphous phase to a better-crystallized phase, i.e., the formation of plancheite and of veins from chrysocolla. The third stage was the formation of chrysocolla along fracture planes. Other secondary minerals are pseudomalachite, dioptase, and neotocite, together with authigenic fluorapatite. Paratacamite and malachite, which are common in the dolomitic lithofacies, are relatively rare in the sandy and silty lithofacies. The Rare Earth Element (REE) patterns for the BDS indicate three stages in the evolution of the Precambrian–Cambrian copper mineralization system, involving the following: (A) epigenetic mobilization of Cu-bearing solutions with formation of Cu-carbonate mineralization in dolomite and limestone and Cu-silicate mineralization in sandstone; (B) transgression of the Cambrian Sea and SSC deposition of Cu-sulphides during dolomite diagenesis in the BDS Formation; continued diagenesis and oxidation led to the formation of Cu(II) minerals; (C) erosion and supergene enrichment of Cu in basement rocks, with detrital copper-bearing sediments accumulating in the lower Cambrian clastic sequence.
Keywords: dolomite shale, copper mineralization, REE, Jordan
Procedia PDF Downloads 83
439 Ambisyllabic Conditioning in English: Evidence from the Accent of Nigerian Speakers of English
Authors: Nkereke Mfon Essien
Abstract:
In an ambisyllabic environment, one consonant sound simultaneously assumes both the coda and onset positions of a word due to its structural proclivity to affect two phonological processes or repair two ill-formed sequences in those syllable positions at the same time. This study sets out to examine the structural conditions that trigger this not-so-common phonological privilege for consonant sounds in English and in Nigerian English, and whether such constraints have any correspondence in the varieties studied. Data for the study were obtained from a native speaker of English, who served as the control, and twenty (20) educated Nigerian speakers of English from the three major ethnic/linguistic groups in Nigeria. Preliminary findings from the data show that ambisyllabicity in English is triggered mainly by stress, a condition which causes a consonant in a stressed syllable to become glottalised while simultaneously devoicing the nearest voiced consonant in the next syllable. For example, the word coupler, /'kʌplɜr/, is realized as ['kʌˀpl̥ɜr]. In some varieties of Nigerian English, preliminary findings show that ambisyllabicity is triggered by a sequence of an intervocalic short, high central vowel and a coda nasal. Since the short vowel may not occur in an open syllable, the nasal serves to close the impermissible open syllable. However, since the Nigerian English foot structure does not permit a CVC.V syllable, the same coda nasal simultaneously repairs the impermissible syllable foot to (CV.CV) by applying the Maximal Onset Principle. Since this is a preliminary investigation, a firm conclusion would not suffice yet.
Keywords: ambisyllabicity, nasal, coda, stress, phonological process, syllable, foot
Procedia PDF Downloads 18
438 Automatic Reporting System for Transcriptome Indel Identification and Annotation Based on Snapshot of Next-Generation Sequencing Reads Alignment
Authors: Shuo Mu, Guangzhi Jiang, Jinsa Chen
Abstract:
The analysis of indels in RNA sequencing of clinical samples is easily affected by sequencing errors and software selection. In order to improve the efficiency and accuracy of the analysis, we developed an automatic reporting system for indel recognition and annotation based on image snapshots of transcriptome read alignments. This system includes sequence local assembly and realignment, target-point snapshot generation, and image-based recognition processes. We integrated high-confidence indel datasets from several known databases as a training set to improve the accuracy of image processing and added a bioinformatics processing module to annotate and filter indel artifacts. The system then automatically generates a report that includes data quality levels and image results. Sanger sequencing verification of the reference indel mutations of cell line NA12878 showed that the process can achieve 83% sensitivity and 96% specificity. Analysis of the collected clinical samples showed that the interpretation accuracy of the process was equivalent to that of manual inspection, while the processing efficiency was significantly improved. This work shows the feasibility of accurate indel analysis of clinical next-generation sequencing (NGS) transcriptomes. This result may be useful for RNA studies of clinical samples with microsatellite instability in immunotherapy in the future.
Keywords: automatic reporting, indel, next-generation sequencing, NGS, transcriptome
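To make the "target point" step concrete, the sketch below scans a window of an aligned transcriptome BAM file and counts reads whose CIGAR strings contain an insertion or deletion; sites with enough supporting reads would then be passed to the snapshot and image-recognition stages. This is a minimal illustration under assumed inputs (a coordinate-sorted, indexed BAM and a pysam dependency), not the authors' pipeline, and the function name and the min_support threshold are placeholders.

```python
import pysam

def candidate_indels(bam_path, chrom, start, end, min_support=3):
    """Count reads whose CIGAR contains an indel inside a target window."""
    support = {}
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(chrom, start, end):
            if read.is_unmapped or read.cigartuples is None:
                continue
            pos = read.reference_start  # 0-based reference coordinate
            for op, length in read.cigartuples:
                if op == 1:    # insertion relative to the reference
                    key = (chrom, pos, "I", length)
                    support[key] = support.get(key, 0) + 1
                elif op == 2:  # deletion relative to the reference
                    key = (chrom, pos, "D", length)
                    support[key] = support.get(key, 0) + 1
                if op in (0, 2, 3, 7, 8):  # operations that consume the reference
                    pos += length
    return {k: v for k, v in support.items() if v >= min_support}
```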
Procedia PDF Downloads 191
437 The Genetic Architecture Underlying Dilated Cardiomyopathy in Singaporeans
Authors: Feng Ji Mervin Goh, Edmund Chee Jian Pua, Stuart Alexander Cook
Abstract:
Dilated cardiomyopathy (DCM) is a common cause of heart failure. Genetic mutations account for 50% of DCM cases, with TTN mutations being the most common, accounting for up to 25% of DCM cases. However, the genetic architecture underlying Asian DCM patients is unknown. We evaluated 68 patients (17 female) with DCM who underwent follow-up at the National Heart Centre, Singapore, from 2013 through 2014. Clinical data were obtained and analyzed retrospectively. Genomic DNA was subjected to next-generation targeted sequencing. Nextera Rapid Capture Enrichment was used to capture the exons of a panel of 169 cardiac genes. DNA libraries were sequenced as paired-end 150-bp reads on an Illumina MiSeq. Raw sequence reads were processed and analysed using standard bioinformatics techniques. The average age of onset of DCM was 46.1±10.21 years. The average left ventricular ejection fraction (LVEF), left ventricular diastolic internal diameter (LVIDd), and left ventricular systolic internal diameter (LVIDs) were 26.1±11.2%, 6.20±0.83 cm, and 5.23±0.92 cm, respectively. The frequencies of mutations in major DCM-associated genes were as follows: TTN (5.88% vs. a published frequency of 20%), LMNA (4.41% vs. 6%), MYH7 (5.88% vs. 4%), MYH6 (5.88% vs. 4%), and SCN5a (4.41% vs. 3%). The average callability at 10 times coverage of each major gene was: TTN (99.7%), LMNA (87.1%), MYH7 (94.8%), MYH6 (95.5%), and SCN5a (94.3%). In conclusion, TTN mutations are not common in Singaporean DCM patients. The frequencies of mutations in other major DCM-associated genes are comparable to the frequencies published in the current literature.
Keywords: heart failure, dilated cardiomyopathy, genetics, next-generation sequencing
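The "callability at 10 times coverage" figure can be computed as the fraction of bases in each gene region with sequencing depth of at least 10. The sketch below assumes a per-base depth table in the format produced by `samtools depth -a` (chromosome, position, depth) and illustrative gene coordinates; it is not the authors' exact workflow.

```python
from collections import defaultdict

def callability(depth_file, gene_regions, min_depth=10):
    """Fraction of bases in each region covered at >= min_depth.

    depth_file: TSV with columns chrom, pos, depth (e.g. `samtools depth -a` output).
    gene_regions: dict gene -> (chrom, start, end), 1-based inclusive coordinates.
    """
    covered = defaultdict(int)
    with open(depth_file) as fh:
        for line in fh:
            chrom, pos, depth = line.split()[:3]
            pos, depth = int(pos), int(depth)
            for gene, (g_chrom, start, end) in gene_regions.items():
                if chrom == g_chrom and start <= pos <= end and depth >= min_depth:
                    covered[gene] += 1
    return {gene: covered[gene] / (end - start + 1)
            for gene, (chrom, start, end) in gene_regions.items()}
```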
Procedia PDF Downloads 243
436 DNA Barcoding Application in Study of Ichthyo-Biodiversity in Rivers of Pakistan
Authors: Asma Karim
Abstract:
Fish taxonomy plays a fundamental role in the study of biodiversity. However, traditional methods of fish taxonomy rely on morphological features, which can lead to confusion due to the great similarities between closely related species. To overcome this limitation, modern taxonomy employs DNA barcoding as a species identification method. This involves using a short standardized mitochondrial DNA region as a barcode, specifically a 658 base pair fragment near the 5′ end of the mitochondrial cytochrome c oxidase subunit 1 (CO1) gene, and exploiting the diversity in this region to identify species. To test the effectiveness and reliability of DNA barcoding, 25 fish specimens from nine fish species found in various rivers of Pakistan were identified morphologically using a dichotomous key at the start of the study. The nine freshwater species used in the present study were Mystus cavasius, Mystus bleekeri, Osteobrama cotio, Labeo rohita, Labeo calbasu, Labeo gonius, Cyprinus carpio, Catla catla and Cirrhinus mrigala. DNA was extracted from one of the pectoral fins, and a partial sequence of the CO1 gene was amplified using the conventional PCR method. Analysis of the barcodes confirmed that the genetically identified fishes were the same as those identified morphologically at the beginning of the study. The sequences were also analyzed for biodiversity and phylogenetic studies. Based on the results of the study, it can be concluded that DNA barcoding is an effective and reliable method for studying biodiversity and conducting phylogenetic analysis of different fish species in Pakistan.
Keywords: DNA barcoding, freshwater fishes, taxonomy, biodiversity, Pakistan
Procedia PDF Downloads 108
435 Combination of Geological, Geophysical and Reservoir Engineering Analyses in Field Development: A Case Study
Authors: Atif Zafar, Fan Haijun
Abstract:
A sequence of different reservoir engineering methods and tools for reservoir characterization and field development is presented in this paper, using real data from the Jin Gas Field of the L-Basin of Pakistan. The basic concept behind this work is to highlight the importance of well test analysis in a broader sense (i.e. reservoir characterization and field development), rather than just determining the permeability and skin parameters. Normally, for reservoir characterization we rely on well test analysis to some extent, but for the field development plan, well test analysis has become a forgotten tool, specifically for locating new development wells. This paper describes the successful implementation of well test analysis in the Jin Gas Field, where the main uncertainties were identified during the initial stage of field development, when the location of a new development well had been marked only on the basis of G&G (geologic and geophysical) data. The seismic interpretation could not detect a boundary (fault, sub-seismic fault, heterogeneity) near the main and only producing well of the Jin Gas Field, whereas the results of the well test analysis model played a crucial role in proposing the location of the second well of the newly discovered field. The results from the different well test analysis methods for the Jin Gas Field are also integrated with and supported by other reservoir engineering tools, i.e. the material balance method and the volumetric method. In this way, a comprehensive workflow and algorithm is obtained for integrating well test analyses with geological and geophysical analyses for reservoir characterization and field development. On the strong basis of this workflow and algorithm, it was established that the proposed location of the new development well was not justified and that it should be placed elsewhere, other than in the south direction.
Keywords: field development plan, reservoir characterization, reservoir engineering, well test analysis
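As a reminder of what the volumetric method mentioned above computes, the sketch below evaluates the standard original-gas-in-place relation G = 43,560·A·h·φ·(1−Sw)/Bg in field units. The input numbers are purely illustrative assumptions, not the Jin Gas Field data.

```python
def ogip_volumetric(area_acres, thickness_ft, porosity, water_saturation, bg_rcf_per_scf):
    """Original gas in place (scf) by the volumetric method:
    G = 43,560 * A * h * phi * (1 - Sw) / Bg (field units)."""
    return 43560.0 * area_acres * thickness_ft * porosity * (1.0 - water_saturation) / bg_rcf_per_scf

# Hypothetical inputs for illustration only (not the Jin Gas Field values).
G = ogip_volumetric(area_acres=1200, thickness_ft=50, porosity=0.12,
                    water_saturation=0.35, bg_rcf_per_scf=0.004)
print(f"OGIP ~ {G / 1e9:.1f} Bscf")
```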
Procedia PDF Downloads 364
434 Linkage Disequilibrium and Haplotype Blocks Study from Two High-Density Panels and a Combined Panel in Nelore Beef Cattle
Authors: Priscila A. Bernardes, Marcos E. Buzanskas, Luciana C. A. Regitano, Ricardo V. Ventura, Danisio P. Munari
Abstract:
Genotype imputation has been used to reduce genomic selection costs. In order to increase haplotype detection accuracy in methods that consider linkage disequilibrium, another approach could be used, such as combining genotype data from different panels. Therefore, this study aimed to evaluate the linkage disequilibrium and haplotype blocks in two high-density panels before and after imputation to a combined panel in Nelore beef cattle. A total of 814 animals were genotyped with the Illumina BovineHD BeadChip (IHD), of which 93 animals (23 bulls and 70 progenies) were also genotyped with the Affymetrix Axiom Genome-Wide BOS 1 Array Plate (AHD). After quality control, 809 IHD animals (509,107 SNPs) and 93 AHD animals (427,875 SNPs) remained for analysis. The combined genotype panel (CP) was constructed by merging both panels after quality control, resulting in 880,336 SNPs. Imputation analysis was conducted using the software FImpute v.2.2b. The reference (CP) and target (IHD) populations consisted of 23 bulls and 786 animals, respectively. The linkage disequilibrium and haplotype block studies were carried out for IHD, AHD, and the imputed CP. Two linkage disequilibrium measures were considered: the correlation coefficient between alleles at two loci (r²) and |D'|. Both measures were calculated using the software PLINK. The haplotype blocks were estimated using the software Haploview. The r² measure presented a different decay when compared to |D'|, wherein AHD and IHD had almost the same decay. For r², even with possible overestimation due to the small sample size for AHD (93 animals), IHD presented higher values than AHD at shorter distances, but with increasing distance both panels presented similar values. The r² measure is influenced by the minor allele frequency of the pair of SNPs, which can explain the observed difference between the r² decay and the |D'| decay. As the CP combines SNPs from the Illumina and Affymetrix panels, it presented a decay equivalent to the mean of these combinations. The numbers of haplotype blocks detected for IHD, AHD, and CP were 84,529, 63,967, and 140,336, respectively. The IHD blocks had a mean length of 137.70 ± 219.05 kb, the AHD blocks 102.10 ± 155.47 kb, and the CP blocks 107.10 ± 169.14 kb. The majority of the haplotype blocks in these three panels comprised fewer than 10 SNPs, with only 3,882 (IHD), 193 (AHD) and 8,462 (CP) haplotype blocks composed of 10 SNPs or more. There was an increase in the number of chromosomes covered with long haplotypes when CP was used, as well as an increase in haplotype coverage for short chromosomes (23-29), which can contribute to studies that explore haplotype blocks. In general, using the CP could be an alternative to increase density and the number of haplotype blocks, increasing the probability of obtaining a marker close to a quantitative trait locus of interest.
Keywords: Bos taurus indicus, decay, genotype imputation, single nucleotide polymorphism
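For reference, the two LD measures discussed above can be computed directly from a haplotype frequency and the two allele frequencies: D = p_AB − p_A·p_B, r² = D²/(p_A·p_a·p_B·p_b), and |D'| = |D|/D_max. The sketch below is a textbook illustration of those formulas (biallelic, phased data assumed), not PLINK's implementation.

```python
def ld_measures(p_ab, p_a, p_b):
    """Pairwise LD from haplotype frequency p_AB and allele frequencies p_A, p_B.

    Returns (r_squared, d_prime). Assumes biallelic loci with known phase.
    """
    d = p_ab - p_a * p_b
    r2 = d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = abs(d) / d_max if d_max > 0 else 0.0
    return r2, d_prime

# Example: p_AB = 0.45, p_A = 0.6, p_B = 0.7 -> D = 0.03
print(ld_measures(0.45, 0.6, 0.7))
```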
Procedia PDF Downloads 280
433 Modeling Operating Theater Scheduling and Configuration: An Integrated Model in Health-Care Logistics
Authors: Sina Keyhanian, Abbas Ahmadi, Behrooz Karimi
Abstract:
We present a multi-objective binary programming model which simultaneously considers the scheduling of surgical cases among operating rooms and the configuration of surgical instruments in limited-capacity hospital trays. Many mathematical models have been developed previously in the literature to address different challenges in health-care logistics, such as assigning operating rooms, leveling beds, etc. But what happens inside the operating rooms, along with the inventory management of the instruments required for various operations and their integration with surgical scheduling, has been poorly discussed. Our model minimizes the movements between trays during a surgery, which recalls the famous cell formation problem in group technology. This assumption can also provide a major potential contribution to robotic surgeries. The tray configuration problem, which uses the surgical instrument requirement plan (SIRP) and the sequence of surgical procedures based on required instruments (SIRO), is nested inside the bin packing problem. This modeling approach helps us understand that most of the same-output solutions will not necessarily be identical when it comes to the rearrangement of surgeries among rooms. A numerical example is solved via a proposed nested simulated annealing (SA) optimization approach, which provides insights about how various configurations inside a solution can alter the optimal condition.
Keywords: health-care logistics, hospital tray configuration, off-line bin packing, simulated annealing optimization, surgical case scheduling
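The sketch below shows the generic simulated annealing loop that a nested SA approach like the one described could build on. The `cost` and `neighbor` callables are placeholders for problem-specific pieces (e.g., counting inter-tray instrument movements and swapping two instruments between trays); the loop itself is standard SA, not the authors' nested formulation.

```python
import math
import random

def simulated_annealing(initial, cost, neighbor, t0=100.0, cooling=0.95, iters=5000):
    """Generic SA: accept worse candidates with probability exp(-delta / T)."""
    current, best = initial, initial
    t = t0
    for _ in range(iters):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t = max(t * cooling, 1e-9)  # geometric cooling, floored to avoid division by zero
    return best
```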
Procedia PDF Downloads 282
432 Replica-Exchange Metadynamics Simulations of G-Quadruplex DNA Structures Under Substitution of K+ by Na+ Ions
Authors: Juan Antonio Mondragon Sanchez, Ruben Santamaria
Abstract:
The DNA G-quadruplex is a four-stranded DNA structure formed by stacked planes of four base-paired guanines (G-quartets). Guanine-rich DNA sequences are present at many sites of genomic DNA and can potentially form G-quadruplexes, especially at the 3'-terminus of the human telomeric DNA with its many TTAGGG repeats. The formation and stabilization of a G-quadruplex by small ligands at the telomeric region can inhibit telomerase activity. In turn, the ligands can be used to regulate oncogene expression, making the G-quadruplex an attractive target for anticancer therapy. Clearly, the G-quadruplex structure in the telomeric DNA is of fundamental importance for rational drug design. In this context, we investigate two G-quadruplex structures: the first follows from the sequence TTAGGG(TTAGGG)3TT (HUT1), and the second from AAAGGG(TTAGGG)3AA (HUT2), both in a K+ solution. We determine the free energy surfaces of the HUT1 and HUT2 structures and investigate their conformations using replica-exchange metadynamics simulations. The carbonyl-carbonyl distances between different guanine residues are selected as the main collective variables to determine the free energy surfaces. The surfaces exhibit two main local minima, compatible with experiments on the conformational transformations of HUT1 and HUT2 under substitution of the K+ ions by Na+ ions. The conformational transitions are not observed in short MD simulations without the metadynamics approach. The results of this work should help in understanding the formation and stability of the human telomeric G-quadruplex in environments including K+ and Na+ ions.
Keywords: g-quadruplex, metadynamics, molecular dynamics, replica-exchange
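To clarify the replica-exchange ingredient of the method, the sketch below implements only the Metropolis swap criterion between two replicas at different temperatures, accept with probability min(1, exp[(β_i − β_j)(E_i − E_j)]). It is a didactic fragment under assumed units (kcal/mol, Kelvin), not the production metadynamics workflow, and the energies in the example are made up.

```python
import math
import random

K_B = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

def attempt_exchange(energy_i, temp_i, energy_j, temp_j):
    """Metropolis criterion for swapping configurations between two replicas."""
    beta_i, beta_j = 1.0 / (K_B * temp_i), 1.0 / (K_B * temp_j)
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return delta >= 0 or random.random() < math.exp(delta)

# Illustrative: neighbouring replicas at 300 K and 320 K with hypothetical energies.
print(attempt_exchange(-1050.0, 300.0, -1042.0, 320.0))
```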
Procedia PDF Downloads 346
431 A Grey-Box Text Attack Framework Using Explainable AI
Authors: Esther Chiramal, Kelvin Soh Boon Kai
Abstract:
Explainable AI is a strong strategy implemented to understand complex black-box model predictions in a human-interpretable language. It provides the evidence required for the use of trustworthy and reliable AI systems. On the other hand, however, it also opens the door to locating possible vulnerabilities in an AI model. Traditional adversarial text attacks use word substitution, data augmentation techniques, and gradient-based attacks on powerful pre-trained Bidirectional Encoder Representations from Transformers (BERT) variants to generate adversarial sentences. These attacks are generally white-box in nature and not practical, as they can be easily detected by humans, e.g., changing the word from "Poor" to "Rich". We propose a simple yet effective grey-box cum black-box approach that does not require knowledge of the target model while using a set of surrogate Transformer/BERT models to perform the attack using explainable AI techniques. As Transformers are the current state-of-the-art models for almost all Natural Language Processing (NLP) tasks, an attack generated from BERT1 is transferable to BERT2. This transferability is made possible by the attention mechanism in the transformer, which allows the model to capture long-range dependencies in a sequence. Using the power of BERT generalisation via attention, we attempt to exploit how transformers learn by attacking a few surrogate transformer variants which are all based on different architectures. We demonstrate that this approach is highly effective in generating semantically good sentences by changing as little as one word that is not detectable by humans while still fooling other BERT models.
Keywords: BERT, explainable AI, Grey-box text attack, transformer
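The sketch below illustrates the general shape of such an attack: score word importance on a surrogate model with a simple, model-agnostic explanation (leave-one-out), then substitute the most important word until the surrogate's label flips. The `predict_proba` callable (returning a list of class probabilities) and the `synonyms` dictionary are assumptions standing in for the surrogate BERT models and the candidate-word source used in the paper; this is not the authors' exact framework.

```python
def leave_one_out_importance(sentence, predict_proba, target_label):
    """Rank words by how much deleting each one drops confidence in target_label."""
    words = sentence.split()
    base = predict_proba(sentence)[target_label]
    scores = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        scores.append((base - predict_proba(reduced)[target_label], i))
    return sorted(scores, reverse=True)

def one_word_attack(sentence, predict_proba, target_label, synonyms):
    """Substitute the most important word first; stop when the predicted label flips."""
    words = sentence.split()
    for _, idx in leave_one_out_importance(sentence, predict_proba, target_label):
        for candidate in synonyms.get(words[idx].lower(), []):
            perturbed = " ".join(words[:idx] + [candidate] + words[idx + 1:])
            probs = list(predict_proba(perturbed))
            if probs.index(max(probs)) != target_label:
                return perturbed
    return None  # no successful one-word substitution found
```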
Procedia PDF Downloads 137
430 A Twelve-Week Intervention Programme to Improve the Gross Motor Skills of Selected Children Diagnosed with Autism Spectrum Disorder
Authors: Eileen K. Africa, Karel J. van Deventer
Abstract:
Neuro-typical children develop the motor skills necessary to play, do schoolwork and interact with others. However, this is not always observed in children who have learning or behavioural problems. Children with Autism Spectrum Disorder (ASD) are often referred to as clumsy because their body parts do not work well together in a sequence. Physical activity (PA) has been shown to be beneficial to the general population; therefore, providing children with ASD opportunities to take part in PA programmes could prove to be beneficial in many ways and should be investigated. The purpose of this study was to design a specialised group intervention programme to attempt to improve the gross motor skills of selected children diagnosed with ASD between the ages of eight and 13 years. A government school for ASD learners was recruited to take part in this study, and a sample of convenience (N=7) was selected. Children in the experimental group (n=4) participated in a 12-week group intervention programme twice per week, while the control group continued with their normal daily routine. The Movement Assessment Battery for Children-Second Edition (MABC-2) was administered pre- and post-test to determine the children's gross motor proficiency and whether the group intervention programme had an effect on the gross motor skills of the experimental group. Statistically significant improvements were observed in the total motor skill proficiency (p < 0.05) of the experimental group. These results demonstrate the importance of gross motor skill interventions for children diagnosed with ASD. Future research should include more participants to ensure that the results can be generalised.
Keywords: autism spectrum disorder, children, gross motor skills, group intervention programme
Procedia PDF Downloads 295
429 Evaluation of Real-Time Background Subtraction Technique for Moving Object Detection Using Fast-Independent Component Analysis
Authors: Naoum Abderrahmane, Boumehed Meriem, Alshaqaqi Belal
Abstract:
Background subtraction is a widely used technique for detecting moving objects in video surveillance by extracting the foreground objects from a reference background image. There are many challenges in testing a good background subtraction algorithm, such as changes in illumination, dynamic backgrounds (e.g. swinging leaves, rain, snow), and changes in the background itself, for example, the moving and stopping of vehicles. In this paper, we propose an efficient and accurate background subtraction method for moving object detection in video surveillance. The main idea is to use a developed fast-independent component analysis (ICA) algorithm to separate background, noise, and foreground masks from an image sequence in practical environments. The fast-ICA algorithm is adapted and adjusted with a matrix calculation and a search for an optimal non-quadratic function to make it faster and more robust. Moreover, in order to estimate the de-mixing matrix and the denoising de-mixing matrix parameters, we propose to convert all images to the YCrCb color space, where the luma component Y (brightness of the color) gives suitable results. The proposed technique has been verified on the publicly available datasets CDnet 2012 and CDnet 2014, and experimental results show that our algorithm can competently and accurately detect moving objects in challenging conditions compared to other methods in the literature, in terms of quantitative and qualitative evaluations, at a real-time frame rate.
Keywords: background subtraction, moving object detection, fast-ICA, de-mixing matrix
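To make the ICA idea concrete, the sketch below applies a common two-mixture scheme: a reference background frame and the current frame are treated as two mixed observations of a background source and a foreground source, separated with FastICA, and the component least correlated with the background is kept as the foreground map. It assumes grayscale/luma frames as 2-D NumPy arrays and uses scikit-learn's generic FastICA rather than the adapted fast-ICA described in the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_foreground(background_frame, current_frame):
    """Separate background/foreground sources from two frames via FastICA."""
    x = np.column_stack([background_frame.ravel().astype(float),
                         current_frame.ravel().astype(float)])  # (n_pixels, 2)
    ica = FastICA(n_components=2, random_state=0)
    sources = ica.fit_transform(x)                              # (n_pixels, 2)
    # Keep the component least correlated with the background mixture.
    corr = [abs(np.corrcoef(sources[:, k], x[:, 0])[0, 1]) for k in range(2)]
    fg = sources[:, int(np.argmin(corr))].reshape(current_frame.shape)
    return np.abs(fg)  # threshold this map to obtain a binary foreground mask
```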
Procedia PDF Downloads 96
428 Management and Genetic Characterization of Local Sheep Breeds for Better Productive and Adaptive Traits
Authors: Sonia Bedhiaf-Romdhani
Abstract:
The sheep (Ovis aries) was domesticated approximately 11,000 years before present (YBP) in the Fertile Crescent from the Asian mouflon (Ovis orientalis). Northern African (NA) sheep, some 7,000 years old, represent a remarkable diversity of populations reared under traditional and low-input farming systems (LIFS) over millennia. The majority of small ruminants in developing countries are kept in low-input production systems, and the resilience of local communities in rural areas is often linked to the wellbeing of small ruminants. Despite the rich biodiversity encountered in sheep ecotypes, there are four main sheep breeds in the country, with the Barbarine (a fat-tailed breed) and the Queue Fine de l'Ouest (a thin-tailed breed) accounting for 61.6 and 35.4 percent, respectively. The Phoenicians introduced the Barbarine sheep from the steppes of Central Asia in the Carthaginian period, 3,000 years ago. The Queue Fine de l'Ouest is a thin-tailed meat breed heavily concentrated in the western and central semi-arid regions. The Noire de Thibar breed, comprising animals that produce mutton and fine wool, has been on the verge of extinction; it is a composite black-coated sheep breed found in the northern sub-humid region because of its higher nutritional requirements and non-tolerance of the prevailing harsher conditions. The D'Man breed, originating from Morocco, is mainly located in the southern oases of the extremely arid ecosystem. A genetic investigation of Tunisian sheep breeds using a genome-wide scan of approximately 50,000 SNPs was performed. Genetic analysis of the relationships between breeds highlighted the genetic differentiation of the Noire de Thibar breed from the other local breeds, reflecting the effect of past events of introgression of a European gene pool. The Queue Fine de l'Ouest breed showed genetic heterogeneity and was close to the Barbarine. The D'Man breed shared considerable gene flow with the thin-tailed Queue Fine de l'Ouest breed. Native small ruminant breeds are capable of being efficiently productive if the essential ingredients and coherent breeding schemes are implemented and followed. Assessing the status of genetic variability of native sheep breeds could provide important clues for researchers and policy makers to devise better strategies for the conservation and management of genetic resources.
Keywords: sheep, farming systems, diversity, SNPs
Procedia PDF Downloads 147
427 Characterization of Bacteriophage for Biocontrol of Pseudomonas syringae, Causative Agent of Canker in Prunus spp.
Authors: Mojgan Rabiey, Shyamali Roy, Billy Quilty, Ryan Creeth, George Sundin, Robert W. Jackson
Abstract:
Bacterial canker is a major disease of Prunus species such as cherry (Prunus avium). It is caused by Pseudomonas syringae pathovars including P. syringae pv. syringae (Pss) and P. syringae pv. morsprunorum race 1 (Psm1) and race 2 (Psm2). Concerns over the environmental impact of, and developing resistance to, copper controls call for alternative approaches to disease management. One method of control could be achieved using naturally occurring bacteriophages (phages) infective to the bacterial pathogens. Phages were isolated from soil, leaves, and bark of cherry trees at five locations in the South East of England. The phages were assessed for their host range against strains of Pss, Psm1, and Psm2. The phages exhibited a differential ability to infect and lyse different Pss and Psm isolates as well as some other P. syringae pathovars. However, the phages were unable to infect beneficial bacteria such as Pseudomonas fluorescens. A subset of 18 of these phages was further characterised genetically (Random Amplification of Polymorphic DNA-PCR fingerprinting and sequencing) and by electron microscopy. The phages are tentatively identified as belonging to the order Caudovirales and the families Myoviridae, Podoviridae, and Siphoviridae, with the genetic material being dsDNA. Future research will fully sequence the phage genomes. The efficacy of the phages, both individually and in cocktails, in reducing disease progression in vivo will be investigated to understand the potential for practical use of these phages as biocontrol agents.
Keywords: bacteriophage, Pseudomonas, bacterial canker, biological control
Procedia PDF Downloads 151
426 Identification of Babesia ovis Through Polymerase Chain Reaction in Sheep and Goat in District Muzaffargarh, Pakistan
Authors: Muhammad SAFDAR, Mehmet Ozaslan, Musarrat Abbas Khan
Abstract:
Babesiosis is a haemoparasitic disease caused by the multiplication of the protozoan parasite Babesia ovis in the red blood cells of the host, and it causes numerous economic losses in ruminants, including sheep and goats. Early identification and successful treatment of Babesia ovis are among the key steps in the control and health management of livestock resources. The objective of this study was to establish a polymerase chain reaction (PCR) based method for the detection of Babesia spp. in small ruminants and to determine the risk factors involved in the spread of babesiosis infections. A total of 100 blood samples were collected from 50 sheep and 50 goats from randomly selected herds in different areas of Muzaffargarh, Pakistan. Data on the characteristics of the sheep and goats were collected through questionnaires. Of the 100 blood samples examined, 18 were positive for Babesia ovis upon microscopic examination, whereas 11 were positive for the presence of Babesia spp. by PCR assay. For the recognition of parasitic DNA, a primer set targeting a 500 bp region of the B. ovis 18S rRNA gene was used for PCR amplification. The prevalence of babesiosis in small ruminants detected by PCR was significantly higher in female animals (28%) than in males (8%). PCR analysis of the reference samples showed that the detection limit of the assay was 0.01%. Taken together, the data indicate that this PCR assay is a simple, fast and specific detection method for Babesia ovis in small ruminants compared to other available methods.
Keywords: Babesia ovis, PCR amplification, 18S rRNA, sheep and goat
Procedia PDF Downloads 126
425 The Big Bang Was Not the Beginning, but a Repeating Pattern of Expansion and Contraction of the Spacetime
Authors: Amrit Ladhani
Abstract:
The cyclic universe theory is a model of cosmic evolution according to which the universe undergoes endless cycles of expansion and cooling, each beginning with a "big bang" and ending in a "big crunch". In this paper, we propose a unique property of space-time. This particular and marvelous nature of space shows us that space can stretch, expand, and shrink. This property of space arises because the size of the universe changes over time, growing or shrinking. The observed accelerated expansion, which in the new theory corresponds to the stretching of shrunk space, is derived. The theory is based on three underlying notions. First, the Big Bang is not the beginning of space-time; rather, in the very first fraction of a second there was an infinite force of infinitely shrunk space in the cosmic singularity. That force gave rise to the Big Bang and caused the rapid growth of space, all other forms of energy were transformed into new matter and radiation, and a new period of expansion and cooling began. Second, there was a previous phase leading up to it, with multiple cycles of contraction and expansion that repeat indefinitely. Third, the two principal long-range forces are the gravitational force and the repulsive force generated by shrunk space; they are the two most fundamental quantities in the universe governing cosmic evolution, and they may provide the clockwork mechanism that operates our eternal cyclic universe. The universe will not continue to expand forever; there is, however, no need for dark energy and dark matter. This new model of space-time and its unique properties enables us to describe a sequence of events from the Big Bang to the Big Crunch.
Keywords: dark matter, dark energy, cosmology, big bang and big crunch
Procedia PDF Downloads 78
424 Apolipoprotein A1 -75 G to A Substitution and Its Relationship with Serum ApoA1 Levels among Indian Punjabi Population
Authors: Savjot Kaur, Mridula Mahajan, AJS Bhanwer, Santokh Singh, Kawaljit Matharoo
Abstract:
Background: Disorders of lipid metabolism and genetic predisposition are CAD risk factors. ApoA1 is the apolipoprotein component of anti-atherogenic high-density lipoprotein (HDL) particles. The protective action of HDL and ApoA1 is attributed to their central role in reverse cholesterol transport (RCT). Aim: This study aimed at identifying sequence variation in ApoA1 (-75G>A) and its association with serum ApoA1 levels. Methods: A total of 300 CAD patients and 300 normal individuals (controls) were analyzed. The PCR-RFLP method was used to determine the DNA polymorphism in the ApoA1 gene; PCR products were digested with the restriction enzyme MspI, followed by agarose gel electrophoresis. Serum apolipoprotein A1 concentration was estimated with an immunoturbidimetric method. Results: Deviation from Hardy-Weinberg equilibrium (HWE) was observed for this gene variant. The A allele frequency was higher among coronary artery disease patients (53.8) than among controls (45.5), p = 0.004, OR = 1.38 (1.11-1.75). Under recessive model analysis (AA vs. GG+GA), the AA genotype of the ApoA1 G>A substitution conferred an increased risk of CAD (p = 0.002, OR = 1.72 (1.2-2.43)). With serum ApoA1 levels < 107, the A allele frequency was higher among CAD cases (50) than among controls (43.4) [p = 0.23, OR = 1.2 (0.84-2)], and the A allele did not occur at all in individuals with ApoA1 levels > 177. Conclusion: Serum ApoA1 levels were associated with the ApoA1 promoter region variation and influence CAD risk. Individuals with the APOA1 -75 A allele carry an excess risk of developing CAD as a result of its effect in lowering serum concentrations of ApoA1.
Keywords: apolipoprotein A1 (G>A) gene polymorphism, coronary artery disease (CAD), reverse cholesterol transport (RCT)
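The odds ratios and confidence intervals quoted above follow from a standard 2×2 allele-count calculation, sketched below. The counts in the example are hypothetical, back-of-the-envelope numbers consistent with the reported allele frequencies, not the study's raw data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table of allele counts:
    a = risk allele in cases, b = other allele in cases,
    c = risk allele in controls, d = other allele in controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Hypothetical allele counts for illustration only.
print(odds_ratio_ci(a=323, b=277, c=273, d=327))
```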
Procedia PDF Downloads 315
423 Reuse of Municipal Solid Waste Incinerator Fly Ash for the Synthesis of Zeolite: Effects of Different Operation Conditions
Authors: Jyh-Cherng Chen, Yi-Jie Lin
Abstract:
This study attempts to reuse the fly ash of a municipal solid waste incinerator (MSWI) for the synthesis of zeolites. The fly ashes were treated by NaOH alkali fusion at different temperatures for 40 min, and the zeolites were then synthesized by the hydrothermal method at 105 °C for different operation times. The effects of the different operation conditions were explored and the optimum synthesis parameters identified. The specific surface area, surface morphology, species identification, adsorption capacity, and reuse potential of the synthesized zeolites were analyzed and evaluated. Experimental results showed that the optimum operation conditions for the synthesis of zeolite from the mixed fly ash were Si/Al = 20, alkali/ash = 1.5, alkali fusion with NaOH at 800 °C for 40 min, hydrolysis with L/S = 200 at 105 °C for 24 h, and hydrothermal synthesis at 105 °C for 48 h. The largest specific surface area of the synthesized zeolite reached 943.05 m²/g. The influence of the operation parameters on the synthesis of zeolite from mixed fly ash followed the sequence Si/Al > hydrolysis L/S > hydrothermal time > alkali fusion temperature > alkali/ash ratio. The XRD patterns of the synthesized zeolites were identified as similar to ZSM-23 zeolite. The adsorption capacities of the synthesized zeolite for pollutants increased with its specific surface area. In summary, MSWI fly ash can be treated and reused to synthesize zeolite with a high specific surface area by the alkali fusion and hydrothermal method. The zeolite can be reused for the adsorption of various pollutants and has great potential for development.
Keywords: alkali fusion, hydrothermal, fly ash, zeolite
Procedia PDF Downloads 174
422 Evaluating Psychosocial Influence of Dental Aesthetics: A Cross-Sectional Study
Authors: Mahjabeen Akbar
Abstract:
Dental aesthetics and its associated psychosocial influence have a significant impact on individuals. Correcting malocclusion is a key motivating factor for the majority of patients; however, psychosocial factors have rarely been incorporated in evaluating malocclusions. Therefore, it is necessary to study the psychosocial influence of malocclusion in patients. The study aimed to determine the psychosocial influence of dental aesthetics in dental students using the 'Psychosocial Impact of Dental Aesthetics Questionnaire' and the self-rated Aesthetic Component of the Index of Orthodontic Treatment Need (IOTN). This was a quantitative study using a cross-sectional design. One hundred twenty dental students (71 females and 49 males; mean age 24.5) were selected via purposive sampling from July to August 2019. Dental students with no former orthodontic treatment were requested to fill out the 'Psychosocial Impact of Dental Aesthetics Questionnaire.' Variables including self-confidence/insecurity, social influence, psychological influence and self-perception of the need for orthodontic treatment were evaluated by a sequence of statements, while dental aesthetics were evaluated using the IOTN Aesthetic Component. To determine significance, the Kruskal-Wallis test was used. The results show that all four variables measuring psychosocial impact had significant correlations with the perceived malocclusion, with p-values of less than 0.01. The results indicate a strong psychological and social influence of altered dental aesthetics on an individual. Moreover, the relationship between the IOTN-AC grading and the psychosocial wellbeing of an individual was confirmed, indicating that the perception of altered dental aesthetics is as important a factor in treatment need as the degree of malocclusion.
Keywords: dental aesthetics, malocclusion, psychosocial influence, dental students
Procedia PDF Downloads 151
421 Lane-Change Path Planning of Autonomous Driving Using Model-Based Optimization, Deep Reinforcement Learning and 5G Vehicle-to-Vehicle Communications
Authors: William Li
Abstract:
Lane-change path planning is a crucial and yet complex task in autonomous driving. The traditional path planning approach, based on a system of carefully crafted rules to cover various driving scenarios, becomes unwieldy as more and more rules are added to deal with exceptions and corner cases. This paper proposes to divide the entire path planning into two stages. In the first stage, the ego vehicle travels longitudinally in the source lane to reach a safe state. In the second stage, the ego vehicle makes the lateral lane-change maneuver to the target lane. The paper derives the safe state conditions based on the lateral lane-change maneuver calculation to ensure that the second stage is collision free. To determine the acceleration sequence that minimizes the time to reach a safe state in the first stage, the paper proposes three schemes, namely, kinetic model based optimization, deep reinforcement learning, and 5G vehicle-to-vehicle (V2V) communications, and investigates them via simulation. The model-based optimization is sensitive to the model assumptions. Deep reinforcement learning is more flexible in handling scenarios beyond the model assumed by the optimization. 5G V2V eliminates the uncertainty in predicting future behaviors of surrounding vehicles by sharing driving intents and enabling cooperative driving.
Keywords: lane change, path planning, autonomous driving, deep reinforcement learning, 5G, V2V communications, connected vehicles
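The sketch below illustrates the flavour of the first-stage, model-based scheme: under a constant-acceleration kinematic assumption, it searches for the first time at which the gap to the target-lane lead vehicle satisfies a required safe gap. The gap rule, parameter names, and numbers are illustrative assumptions, not the paper's derived safe-state conditions.

```python
def time_to_safe_state(gap0, v_ego, v_lead, accel, safe_gap, horizon=15.0, dt=0.05):
    """Return the first time the gap to the target-lane lead vehicle reaches safe_gap.

    gap0: initial gap to the lead vehicle in the target lane [m]
    v_ego, v_lead: current speeds [m/s]; accel: constant ego acceleration [m/s^2]
    safe_gap: required gap before starting the lateral maneuver [m]
    Returns None if the condition is not met within the horizon.
    """
    t = 0.0
    while t <= horizon:
        gap = gap0 + (v_lead - v_ego) * t - 0.5 * accel * t * t
        if gap >= safe_gap:
            return t
        t += dt
    return None

# Illustrative: ego 2 m/s faster than lead, decelerating at 0.5 m/s^2, needs a 20 m gap.
print(time_to_safe_state(gap0=12.0, v_ego=22.0, v_lead=20.0, accel=-0.5, safe_gap=20.0))
```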
Procedia PDF Downloads 252
420 Frequent Pattern Mining for Digenic Human Traits
Authors: Atsuko Okazaki, Jurg Ott
Abstract:
Some genetic diseases ('digenic traits') are due to the interaction between two DNA variants. For example, certain forms of Retinitis Pigmentosa (a genetic form of blindness) occur in the presence of two mutant variants, one in the ROM1 gene and one in the RDS gene, while the occurrence of only one of these mutant variants leads to a completely normal phenotype. Detecting such digenic traits by genetic methods is difficult. A common approach to finding disease-causing variants is to compare hundreds of thousands of variants between individuals with a trait (cases) and those without the trait (controls). Such genome-wide association studies (GWASs) have been very successful but hinge on genetic effects of single variants; that is, there should be a difference in allele or genotype frequencies between cases and controls at a disease-causing variant. Frequent pattern mining (FPM) methods offer an avenue for detecting digenic traits even in the absence of single-variant effects. The idea is to enumerate pairs of genotypes (genotype patterns), with the two genotypes in a pattern originating from different variants that may be located at very different genomic positions. What is needed is for genotype patterns to be significantly more common in cases than in controls. Let Y = 2 refer to cases and Y = 1 to controls, with X denoting a specific genotype pattern. We are seeking association rules, 'X → Y', with high confidence, P(Y = 2|X), significantly higher than the proportion of cases, P(Y = 2), in the study. Clearly, generally available FPM methods are very suitable for detecting disease-associated genotype patterns. We use fpgrowth as the basic FPM algorithm and built a framework around it to enumerate high-frequency digenic genotype patterns and to evaluate their statistical significance by permutation analysis. Application to a published dataset on opioid dependence furnished results that could not be found with classical GWAS methodology. There were 143 cases and 153 healthy controls, each genotyped for 82 variants in eight genes of the opioid system. The aim was to find out whether any of these variants were disease-associated. The single-variant analysis did not lead to significant results. Application of our FPM implementation resulted in one significant (p < 0.01) genotype pattern, with both genotypes in the pattern being heterozygous and originating from two variants on different chromosomes. This pattern occurred in 14 cases and none of the controls. Thus, the pattern seems quite specific to this form of substance abuse and is also rather predictive of disease. An algorithm called Multifactor Dimensionality Reduction (MDR) was developed some 20 years ago and has been in use in human genetics ever since. MDR and our algorithm share some properties, but they are also very different in other respects. The main difference seems to be that our algorithm focuses on patterns of genotypes, while the main object of inference in MDR is the 3 × 3 table of genotypes at two variants.
Keywords: digenic traits, DNA variants, epistasis, statistical genetics
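As a concrete illustration of the rule 'X → Y' and its permutation-based significance, the sketch below computes the confidence P(case | genotype pattern) for one digenic pattern and a one-sided permutation p-value obtained by shuffling case/control labels. It is a minimal stand-in for the fpgrowth-based framework described above; the data structures (a list of variant-to-genotype dictionaries) and parameter names are assumptions.

```python
import random

def digenic_confidence(genotypes, labels, v1, v2, g1, g2):
    """Confidence P(case | pattern) for 'variant v1 = g1 and variant v2 = g2'.
    genotypes: list of dicts variant -> genotype; labels: 1 = control, 2 = case."""
    carriers = [y for row, y in zip(genotypes, labels)
                if row.get(v1) == g1 and row.get(v2) == g2]
    if not carriers:
        return 0.0, 0
    return sum(1 for y in carriers if y == 2) / len(carriers), len(carriers)

def permutation_p(genotypes, labels, v1, v2, g1, g2, n_perm=1000, seed=1):
    """One-sided permutation p-value for the observed confidence."""
    rng = random.Random(seed)
    observed, _ = digenic_confidence(genotypes, labels, v1, v2, g1, g2)
    shuffled, hits = list(labels), 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        conf, _ = digenic_confidence(genotypes, shuffled, v1, v2, g1, g2)
        if conf >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```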
Procedia PDF Downloads 122
419 Project Time and Quality Management during Construction
Authors: Nahed Al-Hajeri
Abstract:
Time and cost are an integral part of every construction plan and can affect each party's contractual obligations. Performance on both time and cost is usually important to the client and the contractor during the project. Almost all construction projects experience time overrun, and these overruns are always costly to both client and contractor. Construction of any project inside the gathering centers involves complex management skills related to the workforce, materials, plant, machinery, new technologies, etc. It also involves many agencies interdependent on each other, such as the vendors and the structural and functional designers, including various types of specialized engineers, and it requires the support of contractors and specialized contractors. This paper mainly highlights the types of construction delays that cause projects to suffer time and cost overrun. It also discusses the delay causes and factors that contribute to construction sequence delays in oil and gas projects. Construction delay is one of the most recurrent problems in construction projects, and it has an adverse effect on project success in terms of time, cost and quality. Some effective methods are identified to minimize delays in construction projects, such as: 1. site management and supervision, 2. effective strategic planning, 3. clear information and communication channels. Our research paper studies the types of delay with some real examples and statistical results and suggests solutions to overcome this problem.
Keywords: non-compensable delay, delays caused by force majeure, compensable delay, delays caused by the owner or the owner’s representative, non-excusable delay, delay caused by the contractor or the contractor’s representative, concurrent delay, delays resulting from two separate causes at the same time
Procedia PDF Downloads 242
418 The Instability of the TetM Gene Encoding Tetracycline Resistance in Lactobacillus casei FNCC 0090
Authors: Sarah Devi Silvian, Hanna Shobrina Iqomatul Haq, Fara Cholidatun Nabila, Agustin Krisna Wardani
Abstract:
The ability of bacteria to survive in the presence of an antibiotic is controlled by genes that encode antibiotic resistance proteins. The instability of an antibiotic resistance gene can be observed by exposing the bacteria to doses below the lethal dose of the antibiotic. A low concentration of antibiotic can induce mutation, which may play a role in bacterial adaptation to the antibiotic concentration. Lactobacillus casei FNCC 0090 is a probiotic bacterium that is able to survive in tetracycline by expressing the tetM gene. The aims of this study are to observe the possibility of mutation in L. casei FNCC 0090 after exposure to sub-lethal doses of tetracycline and to observe the instability of the tetM gene by comparing the sequences of the wild type and the mutant. L. casei FNCC 0090 has a lethal dose of 60 µg/ml; low concentrations were applied to induce mutation, ranging over 10 µg/ml, 15 µg/ml, 30 µg/ml, 45 µg/ml, and 50 µg/ml. L. casei FNCC 0090 was exposed to these low concentrations from the lowest to the highest to induce adaptation. The plasmid was isolated from the highest-concentration culture (50 µg/ml) using a modified alkaline lysis method with the addition of lysozyme. The tetM gene was isolated using the PCR (Polymerase Chain Reaction) method, and the PCR amplicon was then purified and sequenced. Sequencing was done on both samples, wild type and mutant. The two sequences were compared, and mutations can be traced by the presence of nucleotide changes. A change in the nucleotides means that the tetM gene is unstable.
Keywords: L. casei FNCC 0090, probiotic, tetM, tetracycline
Procedia PDF Downloads 188
417 Genetic Diversity and Variation of Nigerian Pigeon (Columba livia domestica) Populations Based on the Mitochondrial Coi Gene
Authors: Foluke E. Sola-Ojo, Ibraheem A. Abubakar, Semiu F. Bello, Isiaka H. Fatima, Sule Bisola, Adesina M. Olusegun, Adeniyi C. Adeola
Abstract:
The domesticated pigeon, Columba livia domestica, has many valuable characteristics, including high nutritional value and a fast growth rate. There is a lack of information on its genetic diversity in Nigeria; thus, the genetic variability in mitochondrial cytochrome c oxidase subunit I (COI) sequences of 150 domestic pigeons from four different locations was examined. Three haplotypes (HT) were identified in the Nigerian populations; the most common haplotype, HT1, was shared with wild and domestic pigeons from Europe, America, and Asia, while HT2 and HT3 were unique to Nigeria. The overall haplotype diversity was 0.052 ± 0.025, and the nucleotide diversity was 0.026 ± 0.068 across the four investigated populations. The phylogenetic tree showed significant clustering and a genetic relationship of Nigerian domestic pigeons with other global pigeons. The median-joining network showed a star-like pattern suggesting population expansion. AMOVA results indicated that genetic variation in Nigerian pigeons occurred mainly within populations (99.93%), while the neutrality test results suggested that the Nigerian domestic pigeon population experienced a recent expansion. This study showed low genetic diversity and population differentiation among Nigerian domestic pigeons, consistent with a relatively conserved COI sequence with few polymorphic sites. Furthermore, the COI gene could serve as a candidate molecular marker to investigate the genetic diversity and origin of pigeon species. The current data are insufficient for further conclusions; therefore, more research evidence from multiple molecular markers is required.
Keywords: Nigeria pigeon, COI, genetic diversity, genetic variation, conservation
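For readers unfamiliar with the two diversity statistics reported above, the sketch below computes Nei's haplotype diversity, Hd = n/(n−1)·(1 − Σp_i²), and nucleotide diversity, π, as the average pairwise differences per site over aligned sequences. The toy sequences are made up; this is not the software actually used for the study.

```python
from collections import Counter
from itertools import combinations

def haplotype_diversity(haplotypes):
    """Nei's haplotype (gene) diversity: Hd = n/(n-1) * (1 - sum(p_i^2))."""
    n = len(haplotypes)
    freqs = [c / n for c in Counter(haplotypes).values()]
    return n / (n - 1) * (1 - sum(p * p for p in freqs))

def nucleotide_diversity(sequences):
    """Average pairwise differences per site (pi) over aligned, equal-length sequences."""
    length = len(sequences[0])
    pairs = list(combinations(sequences, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * length)

# Toy example: four short aligned COI fragments.
seqs = ["ACGTACGT", "ACGTACGT", "ACGTACGA", "ACGAACGT"]
print(haplotype_diversity(seqs), nucleotide_diversity(seqs))
```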
Procedia PDF Downloads 195
416 An Efficient Aptamer-Based Biosensor Developed via Irreversible Pi-Pi Functionalisation of Graphene/Zinc Oxide Nanocomposite
Authors: Sze Shin Low, Michelle T. T. Tan, Poi Sim Khiew, Hwei-San Loh
Abstract:
An efficient graphene/zinc oxide (PSE-G/ZnO) platform based on pi-pi stacking, non-covalent interactions for the development of an aptamer-based biosensor is presented in this study. As a proof of concept, the DNA recognition capability of the as-developed PSE-G/ZnO enhanced aptamer-based biosensor was evaluated using Coconut cadang-cadang viroid disease (CCCVd). The G/ZnO nanocomposite was synthesised via a simple, green and efficient approach. The pristine graphene was produced through a single-step exfoliation of graphite in a sonochemical alcohol-water treatment, while the zinc nitrate hexahydrate was mixed with the graphene and subjected to low-temperature hydrothermal growth. This facile, environmentally friendly method provides a safer synthesis procedure by eliminating the need for harsh reducing chemicals and high temperatures. The as-prepared nanocomposite was characterised by X-ray diffractometry (XRD), scanning electron microscopy (SEM) and energy-dispersive spectroscopy (EDS) to evaluate its crystallinity, morphology and purity. Electrochemical impedance spectroscopy (EIS) was employed for the detection of the CCCVd sequence with the use of potassium ferricyanide (K3[Fe(CN)6]). Recognition of the RNA analytes was achieved via the significant increase in resistivity for the double-stranded DNA, as compared to the single-stranded DNA. The PSE-G/ZnO enhanced aptamer-based biosensor exhibited higher sensitivity than the bare biosensor, attributable to the synergistic effect of the high electrical conductivity of graphene and the good electroactive property of ZnO.
Keywords: aptamer-based biosensor, graphene/zinc oxide nanocomposite, green synthesis, screen printed carbon electrode
Procedia PDF Downloads 369
415 The Biological Function and Clinical Significance of Long Non-coding RNA LINC AC008063 in Head and Neck Squamous Carcinoma
Authors: Maierhaba Mijiti
Abstract:
Objective: The aim is to understand the relationship between the expression level of the long non-coding RNA LINC AC008063 and the clinicopathological parameters of patients with head and neck squamous cell carcinoma (HNSCC), and to clarify the biological function of LINC AC008063 in HNSCC cells, thereby providing a potential biomarker for the diagnosis, treatment, and prognosis evaluation of HNSCC. Methods: The expression level of LINC AC008063 in HNSCC was analyzed using transcriptome sequencing data from the TCGA (The Cancer Genome Atlas) database. The expression levels of LINC AC008063 in human embryonic lung diploid cells 2BS, human immortalized keratinocytes HaCaT, and the HNSCC cell lines CAL-27, Detroit562, AMC-HN-8, FD-LSC-1, FaDu and WSU-HN30 were determined by real-time quantitative PCR (qPCR). RNAi (RNA interference) was used for LINC AC008063 knockdown in HNSCC cell lines, the localization and abundance of LINC AC008063 were determined by RT-qPCR, and the biological functions were examined by CCK-8, clone formation, flow cytometry, transwell invasion and migration assays, and the Seahorse assay. Results: LINC AC008063 was upregulated in HNSCC tissue (P<0.001), which was verified by qPCR in the HNSCC cell lines. The survival analysis revealed that the overall survival (OS) of patients in the high LINC AC008063 expression group was significantly lower than that in the low LINC AC008063 expression group; the median survival times for the two groups were 33.10 months and 61.27 months, respectively (P=0.002). The clinical correlation analysis revealed that its expression was positively correlated with the age of patients with HNSCC (P<0.001) and with pathological stage (T3+T4 > T1+T2, P=0.03). The RT-qPCR results showed that LINC AC008063 was mainly enriched in the cytoplasm (P=0.01). Knockdown of LINC AC008063 inhibited proliferation, colony formation, migration and invasion, and the glycolytic capacity was significantly decreased in the HNSCC cell lines (P<0.05). Conclusion: A high level of LINC AC008063 was associated with the malignant progression of HNSCC and with the promotion of important biological functions, including proliferation, colony formation, migration and invasion; in particular, its knockdown decreased the glycolytic capacity of HNSCC cells. Therefore, LINC AC008063 may serve as a potential biomarker for HNSCC and a distinct molecular target to inhibit glycolysis.
Keywords: head and neck squamous cell carcinoma, oncogene, long non-coding RNA, LINC AC008063, invasion and metastasis
Procedia PDF Downloads 11
414 Association of Genetically Proxied Cholesterol-Lowering Drug Targets and Head and Neck Cancer Survival: A Mendelian Randomization Analysis
Authors: Danni Cheng
Abstract:
Background: Preclinical and epidemiological studies have reported potential protective effects of low-density lipoprotein cholesterol (LDL-C) lowering drugs on head and neck squamous cell cancer (HNSCC) survival, but the evidence for causality has not been consistent. Genetic variants associated with LDL-C lowering drug targets can predict the effects of their therapeutic inhibition on disease outcomes. Objective: We aimed to evaluate the causal association of genetically proxied cholesterol-lowering drug targets and circulating lipid traits with cancer survival in HNSCC patients stratified by human papillomavirus (HPV) status using two-sample Mendelian randomization (MR) analyses. Method: Single-nucleotide polymorphisms (SNPs) in the gene regions of LDL-C lowering drug targets (HMGCR, NPC1L1, CETP, PCSK9, and LDLR) associated with LDL-C levels in a genome-wide association study (GWAS) from the Global Lipids Genetics Consortium (GLGC) were used to proxy LDL-C lowering drug action. SNPs proxying circulating lipids (LDL-C, HDL-C, total cholesterol, triglycerides, apolipoprotein A and apolipoprotein B) were also derived from the GLGC data. Genetic associations of these SNPs with cancer survival were derived from 1,120 HPV-positive oropharyngeal squamous cell carcinoma (OPSCC) and 2,570 non-HPV-driven HNSCC patients in the VOYAGER program. We estimated the causal associations of LDL-C lowering drugs and circulating lipids with HNSCC survival using the inverse-variance weighted (IVW) method. Results: Genetically proxied HMGCR inhibition was significantly associated with worse overall survival (OS) in non-HPV-driven HNSCC patients (inverse-variance weighted hazard ratio (HR IVW), 2.64 [95% CI, 1.28-5.43]; P = 0.01) but with better OS in HPV-positive OPSCC patients (HR IVW, 0.11 [95% CI, 0.02-0.56]; P = 0.01). Estimates for NPC1L1 were strongly associated with worse OS in both total HNSCC (HR IVW, 4.17 [95% CI, 1.06-16.36]; P = 0.04) and non-HPV-driven HNSCC patients (HR IVW, 7.33 [95% CI, 1.63-32.97]; P = 0.01). Similarly, genetically proxied PCSK9 inhibition was significantly associated with poor OS in non-HPV-driven HNSCC (HR IVW, 1.56 [95% CI, 1.02-2.39]). Conclusion: Genetically proxied long-term HMGCR inhibition was significantly associated with decreased OS in non-HPV-driven HNSCC and increased OS in HPV-positive OPSCC, while genetically proxied NPC1L1 and PCSK9 inhibition was associated with worse OS in total and non-HPV-driven HNSCC patients. Further research is needed to understand whether these drugs have consistent associations with head and neck tumor outcomes.
Keywords: Mendelian randomization analysis, head and neck cancer, cancer survival, cholesterol, statin
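To clarify what the IVW estimates above represent, the sketch below computes the fixed-effect inverse-variance weighted estimator from per-SNP exposure and outcome effects, β_IVW = Σ(β_Xj·β_Yj/σ_Yj²)/Σ(β_Xj²/σ_Yj²), with exp(β_IVW) giving a hazard ratio when the outcome effects are log-HRs. The input numbers are toy values for illustration, not the GLGC or VOYAGER estimates.

```python
import numpy as np

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Fixed-effect IVW combination of per-SNP Wald ratios.

    beta_exposure: SNP effects on the exposure (e.g., LDL-C in the drug-target region)
    beta_outcome:  SNP effects on the outcome (e.g., log hazard ratio for OS)
    se_outcome:    standard errors of the outcome effects
    Returns (beta_IVW, se_IVW).
    """
    bx, by, se = map(np.asarray, (beta_exposure, beta_outcome, se_outcome))
    w = bx ** 2 / se ** 2
    beta_ivw = np.sum(bx * by / se ** 2) / np.sum(w)
    se_ivw = np.sqrt(1.0 / np.sum(w))
    return beta_ivw, se_ivw

# Toy numbers for illustration only.
b, s = ivw_estimate([0.10, 0.08, 0.12], [0.05, 0.03, 0.07], [0.02, 0.02, 0.03])
print(np.exp(b), s)  # hazard ratio per unit exposure, and SE on the log scale
```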
Procedia PDF Downloads 99
413 Optimal Sequential Scheduling of Imperfect Maintenance Last Policy for a System Subject to Shocks
Authors: Yen-Luan Chen
Abstract:
Maintenance has a great impact on the capacity of production and on the quality of the products, and therefore it deserves continuous improvement. A maintenance procedure performed before a failure is called preventive maintenance (PM). Sequential PM, which specifies that a system should be maintained at a sequence of intervals with unequal lengths, is one of the commonly used PM policies. This article proposes a generalized sequential PM policy for a system subject to shocks, with imperfect maintenance and random working times. The shocks arrive according to a non-homogeneous Poisson process (NHPP) with a varied intensity function in each maintenance interval. When a shock occurs, the system suffers one of two types of failure with number-dependent probabilities: a type-I (minor) failure, which is rectified by a minimal repair, and a type-II (catastrophic) failure, which is removed by corrective maintenance (CM). The imperfect maintenance is carried out to improve the system failure characteristic due to the altered shock process. The sequential preventive maintenance-last (PML) policy is defined such that the system is maintained before any CM occurs, at a planned time Ti or at the completion of a working time in the i-th maintenance interval, whichever occurs last. At the N-th maintenance, the system is replaced rather than maintained. This article is the first to take up the sequential PML policy with random working times and imperfect maintenance in reliability engineering. The optimal preventive maintenance schedule that minimizes the mean cost rate of a replacement cycle is derived analytically, and its existence and uniqueness are determined. The proposed models provide a general framework for analyzing maintenance policies in reliability theory.
Keywords: optimization, preventive maintenance, random working time, minimal repair, replacement, reliability
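As a small illustration of the shock model underlying the policy (not the paper's cost-rate derivation), the sketch below simulates NHPP shock arrivals within one maintenance interval by Lewis-Shedler thinning; the intensity function and interval length are assumed for illustration, and the expected number of minimal repairs follows by weighting shocks with the type-I failure probability.

```python
import random

def simulate_nhpp(intensity, t_end, lambda_max, rng=random.Random(0)):
    """Shock arrival times on [0, t_end] for an NHPP with rate intensity(t) <= lambda_max,
    generated by thinning a homogeneous Poisson process of rate lambda_max."""
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(lambda_max)
        if t > t_end:
            return arrivals
        if rng.random() < intensity(t) / lambda_max:
            arrivals.append(t)

# Illustrative increasing intensity within a maintenance interval of length 10.
shocks = simulate_nhpp(lambda t: 0.2 + 0.05 * t, t_end=10.0, lambda_max=0.7)
# If each shock causes a minor (type-I) failure with probability p, the expected
# number of minimal repairs in the interval is p times the expected shock count.
print(len(shocks))
```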
Procedia PDF Downloads 275
412 The Role of Information Technology in Supply Chain Management
Authors: V. Jagadeesh, K. Venkata Subbaiah, P. Govinda Rao
Abstract:
This paper explains the significance of information technology tools and software packages in supply chain management (SCM) for managing the entire supply chain. The aim is to manage material flow, financial flow and information flow effectively and efficiently with the aid of information technology tools and packages, in order to deliver the right quantity and quality of goods at the right time, using the right methods and technology. Information technology plays a vital role in streamlining sales forecasting and demand planning, inventory control and transportation in supply networks, and finally production planning and scheduling. It achieves these objectives by streamlining business processes and integrating them within the enterprise and across its extended enterprise. SCM starts with the customer and involves a sequence of activities from the customer through the retailer, distributor, manufacturer and supplier within the supply chain framework. It is the process of integrating demand planning, supply network planning, and production planning and control. Forecasting indicates the direction for planning raw materials in order to meet the production planning requirements. Inventory control and transportation planning allocate the optimal or economic order quantity and utilize the shortest possible routes to deliver the goods to the customer. Production planning and control utilize the optimal resource mix in order to meet the capacity requirements plan. The above operations can be achieved by using appropriate information technology tools and software packages for supply chain management.
Keywords: supply chain management, information technology, business process, extended enterprise
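The economic order quantity mentioned in the inventory-control step is the classic EOQ = sqrt(2DS/H). The sketch below evaluates it with illustrative figures; the demand, ordering-cost and holding-cost numbers are assumptions, not data from the paper.

```python
import math

def economic_order_quantity(annual_demand, order_cost, holding_cost_per_unit):
    """Classic EOQ: the order size minimizing total ordering + holding cost,
    EOQ = sqrt(2 * D * S / H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

# Illustrative: 12,000 units/year demand, $80 per order, $4/unit/year holding cost.
print(round(economic_order_quantity(12000, 80, 4)))  # ~693 units per order
```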
Procedia PDF Downloads 376
411 Selection of Soil Quality Indicators of Rice Cropping Systems Using Minimum Data Set Influenced by Imbalanced Fertilization
Authors: Theresa K., Shanmugasundaram R., Kennedy J. S.
Abstract:
Nutrient supplements are indispensable for raising crops and reaping the desired productivity. The nutrient imbalance between replenishment and crop uptake is addressed through the input of inorganic fertilizers. Excessive dumping of inorganic nutrients in the soil causes stagnation and decline in yield, and an imbalanced N-P-K ratio in the soil exacerbates and agitates the soil ecosystem. The study evaluated the effect of conventional (CF), organic and Integrated Nutrient Management (INM) fertilization practices on soil quality using key indicators and soil quality indices. Twelve rice fields in the Thondamuthur block of Coimbatore district, of which ten followed conventional cultivation practices and one each was organic-based and INM-based, all cultivated under a monocropping sequence, were fixed, and their physical, chemical and biological properties were studied for four cropping seasons to determine the soil quality index (SQI). SQI was computed for the conventional, organic and INM fields. Comparing conventional farming (CF) with organic and INM, CF recorded a lower soil quality index, while the organic and INM fields registered higher SQI values of 0.99 and 0.88, respectively. CF₄, which received a super-optimal dose of N (250%), showed a lower SQI value (0.573) as well as a lower yield (3.20 t ha⁻¹), whereas CF₆, which received 125% N, recorded the highest SQI (0.715) and yield (6.20 t ha⁻¹). Likewise, most of the CFs that received N beyond the 125% level, except CF₃ and CF₉, recorded lower yields. The CFs that received super-optimal P, in the order CF₆ & CF₇ > CF₁ & CF₁₀, recorded lower yields except for CF₆. Super-optimal K application also resulted in lower yields in CF₄, CF₇ and CF₉.
Keywords: rice cropping system, soil quality indicators, imbalanced fertilization, yield
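A common way such an SQI is computed from a minimum data set is to score each selected indicator between 0 and 1 and combine the scores in a weighted sum, SQI = Σ wᵢ·sᵢ. The sketch below is a generic illustration of that idea; the indicator names, scoring ranges and weights are made-up assumptions, not the scoring functions or weights used in this study.

```python
def soil_quality_index(indicator_values, scoring_functions, weights):
    """Weighted additive SQI: SQI = sum(w_i * score_i(value_i)), scores in [0, 1]."""
    return sum(weights[name] * scoring_functions[name](indicator_values[name])
               for name in indicator_values)

# Hypothetical 'more is better' scoring with illustrative thresholds and weights.
more_is_better = lambda x, lo, hi: min(max((x - lo) / (hi - lo), 0.0), 1.0)
scores = {
    "organic_carbon":    lambda x: more_is_better(x, 0.2, 1.0),   # %
    "available_N":       lambda x: more_is_better(x, 100, 300),   # kg/ha
    "microbial_biomass": lambda x: more_is_better(x, 50, 400),    # ug/g soil
}
weights = {"organic_carbon": 0.4, "available_N": 0.3, "microbial_biomass": 0.3}
print(soil_quality_index({"organic_carbon": 0.7, "available_N": 220, "microbial_biomass": 180},
                         scores, weights))
```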
Procedia PDF Downloads 157