Search results for: Omicron variant
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 282

132 Lennox-Gastaut Syndrome Associated with Dysgenesis of Corpus Callosum

Authors: A. Bruce Janati, Muhammad Umair Khan, Naif Alghassab, Ibrahim Alzeir, Assem Mahmoud, M. Sammour

Abstract:

Rationale: Lennox-Gastaut syndrome (LGS) is an electro-clinical syndrome composed of the triad of mental retardation, multiple seizure types, and the characteristic generalized slow spike-wave complexes in the EEG. In this article, we report on two patients with LGS whose brain MRI showed dysgenesis of the corpus callosum (CC). We review the literature and stress the role of the CC in the genesis of secondary bilateral synchrony (SBS). Method: This was a clinical study conducted at King Khalid Hospital. Results: The EEG was consistent with LGS in patient 1 and showed unilateral slow spike-wave complexes in patient 2. The MRI showed hypoplasia of the splenium of the CC in patient 1, and global hypoplasia of the CC combined with Joubert syndrome in patient 2. Conclusion: Based on the data, we proffer the following hypotheses: (1) Hypoplasia of the CC interferes with the functional integrity of this structure. (2) The genu of the CC plays a pivotal role in the genesis of secondary bilateral synchrony. (3) Electrodecremental seizures in LGS emanate from pacemakers generated in the brain stem, in particular the mesencephalon, projecting abnormal signals to the cortex via thalamic nuclei. (4) Unilateral slow spike-wave complexes in the context of mental retardation and multiple seizure types may represent a variant of LGS, justifying neuroimaging studies.

Keywords: EEG, Lennox-Gastaut syndrome, corpus callosum, MRI

Procedia PDF Downloads 442
131 Synchronized Vehicle Routing for Equitable Resource Allocation in Food Banks

Authors: Rabiatu Bonku, Faisal Alkaabneh

Abstract:

Inspired by the distribution operations of a non-profit food bank, we study a variant of the synchronized vehicle routing problem for equitable resource allocation. This research paper introduces a Mixed Integer Programming (MIP) model aimed at addressing the complex challenge of efficiently distributing vital resources, particularly for food banks serving vulnerable populations in urban areas. Our optimization approach places a strong emphasis on social equity, ensuring a fair allocation of food to partner agencies while minimizing wastage. The primary objective is to enhance operational efficiency while guaranteeing fair distribution and timely deliveries to prevent food spoilage. Furthermore, we assess four distinct models that consider various aspects of sustainability, including social and economic factors. We conduct a comprehensive numerical analysis using real-world data to gain insights into the trade-offs that arise, while also demonstrating the models’ performance in terms of fairness, effectiveness, and the percentage of food waste. This provides valuable managerial insights for food bank managers. We show that our proposed approach makes a significant contribution to the field of logistics optimization and social responsibility, offering valuable insights for improving the operations of food banks.
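
The full routing model is not given in the abstract. Purely as an illustration of the equity objective, the sketch below (Python, scipy.optimize.linprog) allocates a single supply among partner agencies so that the minimum fill rate is maximized; the demand and supply numbers are hypothetical, and the synchronization and routing constraints of the paper's MIP are omitted.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: depot supply and demands of four partner agencies (in pallets).
supply = 100.0
demand = np.array([40.0, 35.0, 50.0, 25.0])
J = len(demand)

# Variables: x_1..x_J (allocation to each agency) and z (minimum fill rate).
# Maximize z  <=>  minimize -z, subject to x_j >= demand_j * z and sum(x_j) <= supply.
c = np.zeros(J + 1)
c[-1] = -1.0

A_ub = np.zeros((J + 1, J + 1))
b_ub = np.zeros(J + 1)
for j in range(J):
    A_ub[j, j] = -1.0          # -x_j + d_j * z <= 0
    A_ub[j, -1] = demand[j]
A_ub[J, :J] = 1.0              # total allocation cannot exceed the supply
b_ub[J] = supply

bounds = [(0, d) for d in demand] + [(0, 1)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)

x, z = res.x[:J], res.x[-1]
print("allocations:", np.round(x, 1), "min fill rate:", round(z, 3))
```

In the full model, such an equity term would be combined with the routing, synchronization and spoilage constraints described in the abstract.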

Keywords: food banks, humanitarian logistics, equitable resource allocation, synchronized vehicle routing

Procedia PDF Downloads 59
130 Reducing Hazardous Materials Releases from Railroad Freights through Dynamic Trip Plan Policy

Authors: Omar A. Abuobidalla, Mingyuan Chen, Satyaveer S. Chauhan

Abstract:

Railroad transportation of hazardous materials freight is important to the North American economy and supports the national supply chain. This paper introduces various extensions of the dynamic hazardous materials trip plan problem. The problem captures most of the operational features of real-world railroad transportation systems that dynamically initiate a set of blocks and assign each shipment to a single block path or multiple block paths. The dynamic hazardous materials trip plan policies have the distinguishing feature of integrating the blocking plan and the block activation decisions. We also present a non-linear mixed integer programming formulation for each variant and present managerial insights based on a hypothetical railroad network. The computational results reveal that the dynamic car scheduling policies are not only able to take advantage of the capacity of the network but are also capable of diminishing the population and environmental risks by rerouting the active blocks along the least risky train services without sacrificing the cost advantage of the railroad. The empirical results of this research illustrate that the issue of integrating the blocking plan and the train makeup of hazardous materials freight must receive closer attention.

Keywords: dynamic car scheduling, planning and scheduling hazardous materials freights, airborne hazardous materials, gaussian plume model, integrated blocking and routing plans, box model

Procedia PDF Downloads 202
129 Sensitivity Analysis of Principal Stresses in Concrete Slab of Rigid Pavement Made From Recycled Materials

Authors: Aleš Florian, Lenka Ševelová

Abstract:

A complex sensitivity analysis of stresses in a concrete slab of a real type of rigid pavement made from recycled materials is performed. The computational model of the pavement is designed as a spatial (3D) model, is based on a nonlinear variant of the finite element method that respects the structural nonlinearity, enables modeling of different arrangements of joints, and allows the entire model to be subjected to thermal loading. Interaction of adjacent slabs in joints and contact of the slab and the underlying layer are modeled with the help of special contact elements. Four concrete slabs separated by transverse and longitudinal joints and the additional structural layers and soil to a depth of about 3 m are modeled. The thickness of individual layers, physical and mechanical properties of materials, characteristics of joints, and the temperature of the upper and lower surface of the slabs are treated as random variables. The modern simulation technique Updated Latin Hypercube Sampling with 20 simulations is used. For the sensitivity analysis, a sensitivity coefficient based on the Spearman rank correlation coefficient is utilized. As a result, estimates of the influence of the random variability of individual input variables on the random variability of principal stresses s1 and s3 at 53 points on the upper and lower surfaces of the concrete slabs are obtained.
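
The abstract names Latin Hypercube Sampling with 20 simulations and a Spearman-based sensitivity coefficient; the sketch below (Python, SciPy) illustrates only that generic recipe on a placeholder response function, not the authors' 3D FEM pavement model, and the input variables and ranges are assumptions.

```python
import numpy as np
from scipy.stats import qmc, spearmanr

n_sim = 20  # the abstract uses 20 Latin Hypercube simulations

# Hypothetical input variables: slab thickness (m), concrete E-modulus (GPa),
# subgrade reaction (MPa/m), temperature gradient (deg C); ranges are assumptions.
names = ["thickness", "E_concrete", "k_subgrade", "dT"]
lower = [0.20, 25.0, 30.0, -10.0]
upper = [0.30, 40.0, 90.0, 10.0]

sampler = qmc.LatinHypercube(d=len(names), seed=0)
X = qmc.scale(sampler.random(n=n_sim), lower, upper)

def principal_stress(x):
    # Placeholder response standing in for the FEM model output (MPa).
    h, E, k, dT = x
    return 2.5 / h**2 + 0.04 * E - 0.01 * k + 0.08 * abs(dT)

y = np.array([principal_stress(x) for x in X])

# Spearman rank correlation between each input and the response as a sensitivity coefficient.
for j, name in enumerate(names):
    rho = spearmanr(X[:, j], y).correlation
    print(f"{name:12s} sensitivity (Spearman rho) = {rho:+.2f}")
```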

Keywords: concrete, FEM, pavement, sensitivity, simulation

Procedia PDF Downloads 327
128 Virtual Prototyping of LED Chip Scale Packaging Using Computational Fluid Dynamic and Finite Element Method

Authors: R. C. Law, Shirley Kang, T. Y. Hin, M. Z. Abdullah

Abstract:

LED technology has been evolving aggressively in recent years, from the incandescent bulb of earlier days to packages as small as the chip scale package, and it will continue to stay bright in the future. As such, there is tremendous pressure to stay competitive in the market by optimizing products to the next level of performance and reliability with the shortest time to market. This changes the conventional way of product design and development to virtual prototyping by means of Computer Aided Engineering (CAE), which comprises the deployment of the Finite Element Method (FEM) and Computational Fluid Dynamics (CFD). FEM accelerates the investigation for early detection of failures such as cracking, improves the thermal performance of the system, and enhances solder joint reliability. CFD helps to simulate the flow pattern of the molding material as a function of temperature and molding parameter settings to evaluate failures such as voids and displacement. This paper briefly discusses the procedures and applications of FEM in thermal stress and solder joint reliability and of CFD in compression molding of LED CSP. Integration of virtual prototyping in product development has greatly reduced the time to market. Many successful achievements, with a minimized number of evaluation iterations required in the scope of material, process settings, and package architecture variants, have been materialized with this approach.

Keywords: LED, chip scale packaging (CSP), computational fluid dynamic (CFD), virtual prototyping

Procedia PDF Downloads 283
127 The MTHFR C677T Polymorphism Screening: A Challenge in Recurrent Pregnancy Loss

Authors: Rim Frikha, Nouha Bouayed, Afifa Sellami, Nozha Chakroun, Salima Daoud, Leila Keskes, Tarek Rebai

Abstract:

Introduction: Recurrent pregnancy loss (RPL), defined as two or more pregnancy losses, is a serious clinical problem. Methylene-tetrahydro-folate-reductase (MTHFR) polymorphisms, most commonly the variant C677T, are recognized as an inherited thrombophilia which might affect embryonic development and pregnancy success and cause pregnancy complications such as RPL. Material and Methods: DNA was extracted from peripheral blood samples, and PCR-RFLP was performed for the molecular diagnosis of the C677T MTHFR polymorphism among 70 patients (35 couples) with more than 2 fetal losses. Aims and Objective: The aim of this study is to determine the frequency of MTHFR C677T among Tunisian couples with RPL and to critically analyze the available literature on the importance of MTHFR polymorphism testing in the management of RPL. Results and Comments: No C677T mutation was detected in the carriers of RPL. This result may be related to the sample size and to the different inclusion criteria (number of abortions). The association between MTHFR polymorphisms and pregnancy complications has been reported, but with controversial results, and there is a lack of evidence for MTHFR polymorphism testing, previously recommended by the ACMG (American College of Medical Genetics). Our study highlights the importance of screening for the MTHFR polymorphism, since the real impact of such a thrombotic molecular defect on pregnancy outcome is evident. Folic acid supplementation of these patients during pregnancy can prevent such complications and lead to a successful pregnancy outcome.

Keywords: methylenetetrahydrofolate reductase, C677T, recurrent pregnancy loss, genetic testing

Procedia PDF Downloads 301
126 Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms

Authors: Sergey Kopysov, Nikita Nedozhogin, Leonid Tonkov

Abstract:

The article presents a parallel iterative solver for large sparse linear systems which can be used on a heterogeneous platform. Traditionally, the problem of solving linear systems does not scale well on multi-CPU/multi-GPU clusters. For example, most attempts to implement the classical conjugate gradient method at best kept the solution time constant as the problem was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited for hybrid CPU/GPU computing since it requires only one synchronization point per iteration instead of two for standard CG. The standard and pipelined CG methods need the vector entries generated by the current GPU and other GPUs for matrix-vector products, so the communication between GPUs becomes a major performance bottleneck on a multi-GPU cluster. The article presents an approach to minimize the communication between parallel parts of the algorithms. Additionally, computation and communication can be overlapped to reduce the impact of data exchange. Using the pipelined version of the CG method with one synchronization point, the possibility of asynchronous calculations and communications, and load balancing between the CPU and GPU for solving large linear systems allows for scalability. The algorithm is implemented with the combined use of the MPI, OpenMP, and CUDA technologies. We show that an almost optimum speedup on 8 CPUs/2 GPUs may be reached (relative to a single-GPU execution). The parallelized solver achieves a speedup of up to 5.49 times on 16 NVIDIA Tesla GPUs, as compared to one GPU.
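
As a concrete illustration of the idea of reducing synchronization points, here is a minimal NumPy sketch of a communication-reduced CG iteration in which the two dot products are evaluated back to back so that, in a distributed setting, they could be fused into a single global reduction. It is a serial toy, not the authors' MPI/OpenMP/CUDA solver, and it omits preconditioning and the overlap of communication with the matrix-vector product.

```python
import numpy as np

def cg_single_reduction(A, b, tol=1e-8, maxiter=500):
    """CG rearranged so that the two inner products of every iteration are
    computed together; on a cluster they would form a single fused global
    reduction instead of two separate synchronization points."""
    x = np.zeros_like(b)
    r = b - A @ x
    w = A @ r
    gamma = r @ r                     # (r, r)
    delta = w @ r                     # (Ar, r)
    alpha, beta = gamma / delta, 0.0
    p = np.zeros_like(b)
    s = np.zeros_like(b)
    b_norm = np.linalg.norm(b)
    for _ in range(maxiter):
        p = r + beta * p              # search direction
        s = w + beta * s              # s == A @ p, maintained by recurrence
        x = x + alpha * p
        r = r - alpha * s
        w = A @ r                     # the single matrix-vector product
        gamma_new = r @ r
        delta = w @ r                 # both reductions happen here, back to back
        if np.sqrt(gamma_new) <= tol * b_norm:
            break
        beta = gamma_new / gamma
        alpha = gamma_new / (delta - beta * gamma_new / alpha)
        gamma = gamma_new
    return x

if __name__ == "__main__":
    n = 200
    rng = np.random.default_rng(0)
    M = rng.random((n, n))
    A = M @ M.T + n * np.eye(n)       # symmetric positive definite test matrix
    b = rng.random(n)
    x = cg_single_reduction(A, b)
    print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```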

Keywords: conjugate gradient, GPU, parallel programming, pipelined algorithm

Procedia PDF Downloads 159
125 Perceiving Casual Speech: A Gating Experiment with French Listeners of L2 English

Authors: Naouel Zoghlami

Abstract:

Spoken-word recognition involves the simultaneous activation of potential word candidates, which compete with each other for final correct recognition. In continuous speech, the activation-competition process gets more complicated due to speech reductions occurring at word boundaries. Lexical processing is more difficult in L2 than in L1 because L2 listeners often lack phonetic, lexico-semantic, syntactic, and prosodic knowledge in the target language. In this study, we investigate the on-line lexical segmentation hypotheses that French listeners of L2 English form and then revise as subsequent perceptual evidence is revealed. Our purpose is to shed further light on the processes of L2 spoken-word recognition in context and to better understand L2 listening difficulties through a comparison of skilled and unskilled reactions at the point where the working hypothesis is rejected. We use a variant of the gating experiment in which subjects transcribe an English sentence presented in increments of progressively greater duration. The spoken sentence was “And this amazing athlete has just broken another world record”, chosen mainly because it included common reductions and phonetic features of English, such as elision and assimilation. Our preliminary results show that there is an important difference in the manner in which proficient and less-proficient L2 listeners handle connected speech. Less-proficient listeners delay recognition of words as they wait for lexical and syntactic evidence to appear in the gates. Further statistical analyses are currently underway.

Keywords: gating paradigm, spoken word recognition, online lexical segmentation, L2 listening

Procedia PDF Downloads 458
124 On the Relationship between the Concepts of "[New] Social Democracy" and "Democratic Socialism"

Authors: Gintaras Mitrulevičius

Abstract:

This text, which is based on a conference report, seeks to briefly examine the relationship between the concepts of social democracy and democratic socialism, drawing attention to the essential aspects of its development and, in particular, discussing the contradictions in the relationship between these concepts in the modern period. In the preparation of this text, research methods such as the historical and historical-comparative methods were used, as well as methods of analyzing, synthesizing, and generalizing texts. The history of the use of the terms social democracy and democratic socialism shows that these terms were used alternately and almost synonymously. At the end of the 20th century, traditional social democracy was transformed into the so-called "new social democracy." Many of the new social democrats do not consider themselves democratic socialists and avoid the historically characteristic identification of social democracy with democratic socialism. It has become quite popular to believe that social democracy is an ideology separate from democratic socialism, or that it has become a variant of the ideology of liberalism. This is a testimony to the crisis of ideological self-awareness of social democracy. Since the beginning of the 21st century, social democracy has also experienced a growing crisis of electoral support. This, among other things, led to its slight shift to the left. In this context, some social democrats are once again talking about democratic socialism. The rise of the ideas of democratic socialism in the United States was catalyzed by Bernie Sanders, but the proponents of democratic socialism in the United States have different concepts of democratic socialism. In modern Europe, democratic socialism is also spoken of by leftists of non-social-democratic origin, whose understanding of democratic socialism differs from that inherent in classical social democracy. Some political scientists also distinguish between the concepts in question. Analysis of the problem shows that there are currently several concepts of democratic socialism on the spectrum of the political left, both social-democratic and non-social-democratic.

Keywords: democratic socialism, socialism, social democracy, new social democracy, political ideologies

Procedia PDF Downloads 109
123 A Comparative Analysis Of Da’wah Methodology Applied by the Two Variant Factions of Jama’atu Izalatil Bid’ah Wa-Iqamatis Sunnah in Nigeria

Authors: Aminu Alhaji Bala

Abstract:

The Jama’atu Izalatil Bid’ah Wa-Iqamatis Sunnah is a Da’wah organization and reform movement launched in Jos, Nigeria, in 1978 as a purely reform movement under the leadership of the late Shaykh Ismai’la Idris. The organization started full-fledged preaching sessions at the national, state and local government levels immediately after its formation. The contributions of this organization to Da’wah activities in Nigeria are paramount. The organization conducted its preaching under the council of preaching with the help of the executives, elders and patrons of the movement. Teaching and preaching have been recognized as the major programs of the society. Its preaching activities are conducted from ward, local, state and national levels throughout the states of Nigeria and beyond. It also engaged itself in establishing mosques and schools, and it offers sermons during Friday congregations and Eid days throughout its mosques, where the sermon is translated into the vernacular language; this attracted many Muslims who do not understand Arabic to patronize its activities. The organization, however, split into two factions due to different approaches to Da’wah methodology and some seemingly selfish interests among its leaders. It is against this background that this research was conducted, using an analytical method to compare and contrast the Da’wah methodology applied by the two factions of the organization. The research discusses the formation and Da’wah activities of the organization. It also compares and contrasts the Da’wah approach and methodology of the two factions. The research findings reveal that the different approaches and methods applied by these factions are one of the main reasons for their split, in addition to other selfish interests among its leaders.

Keywords: activities, Da’wah, methodology, organization

Procedia PDF Downloads 211
122 Compensatory Articulation of Pressure Consonants in Telugu Cleft Palate Speech: A Spectrographic Analysis

Authors: Indira Kothalanka

Abstract:

For individuals born with a cleft palate (CP), there is no separation between the nasal cavity and the oral cavity, due to which they cannot build up enough air pressure in the mouth for speech. Therefore, it is common for them to have speech problems. Common cleft type speech errors include abnormal articulation (compensatory or obligatory) and abnormal resonance (hyper, hypo and mixed nasality). These are generally resolved after palate repair. However, in some individuals, articulation problems do persist even after the palate repair. Such individuals develop variant articulations in an attempt to compensate for the inability to produce the target phonemes. A spectrographic analysis is used to investigate the compensatory articulatory behaviours of pressure consonants in the speech of 10 Telugu speaking individuals aged between 7-17 years with a history of cleft palate. Telugu is a Dravidian language which is spoken in Andhra Pradesh and Telangana states in India. It is a language with the third largest number of native speakers in India and the most spoken Dravidian language. The speech of the informants is analysed using single word list, sentences, passage and conversation. Spectrographic analysis is carried out using PRAAT, speech analysis software. The place and manner of articulation of consonant sounds is studied through spectrograms with the help of various acoustic cues. The types of compensatory articulation identified are glottal stops, palatal stops, uvular, velar stops and nasal fricatives which are non-native in Telugu.
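
The study's spectrograms were produced in PRAAT; purely as an illustration of the kind of wide-band spectrogram used to inspect cues such as stop bursts and formant transitions, a comparable analysis can be computed in Python with SciPy. The WAV file name is a placeholder, and this is not the authors' PRAAT workflow.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

# Placeholder recording of a target word produced by one informant.
fs, x = wavfile.read("speaker01_target_word.wav")
if x.ndim > 1:
    x = x.mean(axis=1)                     # mix down to mono if stereo

# Wide-band spectrogram (short analysis window) keeps stop bursts and
# formant transitions visible, similar to what is inspected in PRAAT.
win = int(0.005 * fs)                      # 5 ms window
f, t, Sxx = spectrogram(x, fs=fs, nperseg=win, noverlap=win // 2)
Sxx_db = 10 * np.log10(Sxx + 1e-12)        # log-magnitude for display

print("time frames:", t.shape[0], "frequency bins up to", f[-1], "Hz")
```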

Keywords: cleft palate, compensatory articulation, spectrographic analysis, PRAAT

Procedia PDF Downloads 437
121 Approach for Updating a Digital Factory Model by Photogrammetry

Authors: R. Hellmuth, F. Wehner

Abstract:

Factory planning has the task of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the building of a factory have changed in recent years. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of product and production technology as well as a VUCA world (Volatility, Uncertainty, Complexity & Ambiguity) lead to more frequent restructuring measures within a factory. A digital factory model is the planning basis for rebuilding measures and becomes an indispensable tool. Short-term rescheduling can no longer be handled by on-site inspections and manual measurements. The tight time schedules require up-to-date planning models. Due to the high adaptation rate of factories described above, a methodology for rescheduling factories on the basis of a modern digital factory twin is conceived and designed for practical application in factory restructuring projects. The focus is on rebuild processes. The aim is to keep the planning basis (digital factory model) for conversions within a factory up to date. This requires the application of a methodology that reduces the deficits of existing approaches. The aim is to show how a digital factory model can be kept up to date during ongoing factory operation. A method based on photogrammetry technology is presented. The focus is on developing a simple and cost-effective solution to track the many changes that occur in a factory building during operation. The method is preceded by a hardware and software comparison to identify the most economical and fastest variant. 
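
The paper's method is only summarized here; as a small, hedged illustration of one building block that photogrammetry-based updating relies on (matching repeat photographs of the same factory area so that changed regions can be flagged), the sketch below uses OpenCV ORB features. The file names are placeholders, and this is not the authors' tool chain.

```python
import cv2

# Placeholder photographs of the same factory bay taken before and after a rebuild step.
img_old = cv2.imread("factory_bay_before.jpg", cv2.IMREAD_GRAYSCALE)
img_new = cv2.imread("factory_bay_after.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp_old, des_old = orb.detectAndCompute(img_old, None)
kp_new, des_new = orb.detectAndCompute(img_new, None)

# Brute-force Hamming matching with cross-check keeps only mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_old, des_new), key=lambda m: m.distance)

print(f"{len(matches)} tie points between the two photographs")
# Such tie points would feed the reconstruction step of a photogrammetry pipeline,
# and regions with few consistent matches are candidates for updating the model.
```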

Keywords: digital factory model, photogrammetry, factory planning, restructuring

Procedia PDF Downloads 109
120 Axial, Bending Interaction Diagrams of Reinforced Concrete Columns Exposed to Chloride Attack

Authors: Rita Greco, Giuseppe Carlo Marano

Abstract:

Chloride-induced reinforcement corrosion is widely accepted to be the most frequent mechanism causing premature degradation of reinforced concrete members, whose economic and social consequences are growing continuously. Prevention of these phenomena has great importance in structural design, and modern codes and standards impose prescriptions concerning design details and concrete mix proportions for structures exposed to different external aggressive conditions, grouped in environmental classes. This paper focuses on the degradation over time of the load-carrying capacity of reinforced concrete columns due to chloride-induced pitting corrosion of the steel. The structural element is considered to be exposed to a marine environment, and the effects of corrosion are described by the time degradation of the axial-bending interaction diagram. Because chloride ingress and the consequent pitting corrosion propagation are both time-dependent mechanisms, the study adopts a time-variant predictive approach to evaluate the residual strength of corroded reinforced concrete columns at different lifetimes. The corrosion initiation and propagation process is modelled by taking into account all the parameters, such as external environmental conditions, concrete mix proportion, concrete cover and so on, which influence the time evolution of the corrosion phenomenon and its effects on the residual strength of RC columns.
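
The abstract does not spell out its chloride ingress model; a common starting point, and presumably close in spirit given that the keywords list a diffusion coefficient and a surface chloride concentration, is the error-function solution of Fick's second law, from which a corrosion initiation time can be estimated. The parameter values below are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.special import erf, erfinv

# Illustrative parameters (assumptions, not taken from the paper).
C_s    = 3.5      # surface chloride concentration, % by weight of binder
C_crit = 0.6      # critical (threshold) chloride content at the rebar
cover  = 0.050    # concrete cover, m
D      = 1.0e-12  # apparent chloride diffusion coefficient, m^2/s

def chloride_profile(x, t):
    """Error-function solution of Fick's second law: C(x, t)."""
    return C_s * (1.0 - erf(x / (2.0 * np.sqrt(D * t))))

# Corrosion initiation time: solve C(cover, t) = C_crit for t.
t_init = cover**2 / (4.0 * D * erfinv(1.0 - C_crit / C_s) ** 2)
print(f"estimated initiation time: {t_init / 3.15576e7:.1f} years")
print("check: C(cover, t_init) =", round(chloride_profile(cover, t_init), 3))
```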

Keywords: pitting corrosion, strength deterioration, diffusion coefficient, surface chloride concentration, concrete structures, marine environment

Procedia PDF Downloads 313
119 Using Cyclic Structure to Improve Inference on Network Community Structure

Authors: Behnaz Moradijamei, Michael Higgins

Abstract:

Identifying community structure is a critical task in analyzing social media data sets often modeled by networks. Statistical models such as the stochastic block model have proven to explain the structure of communities in real-world network data. In this work, we develop a goodness-of-fit test to examine the existence of community structure by using a distinguishing property of networks: cyclic structures are more prevalent within communities than across them. To better understand how communities are shaped by the cyclic structure of the network rather than just the number of edges, we introduce a novel method for deciding on the existence of communities. We utilize these structures by incorporating the renewal non-backtracking random walk (RNBRW) into the existing goodness-of-fit test. RNBRW is an important variant of the random walk in which the walk is prohibited from returning to a node in exactly two steps and terminates and restarts once it completes a cycle. We investigate the use of RNBRW to improve the performance of existing goodness-of-fit tests for community detection algorithms based on the spectral properties of the adjacency matrix. Our proposed test of community structure is based on the probability distribution of eigenvalues of the normalized retracing probability matrix derived by RNBRW. We attempt to make the best use of asymptotic results on such a distribution when there is no community structure, i.e., the asymptotic distribution under the null hypothesis. Moreover, we provide a theoretical foundation for our statistic by obtaining the true mean and a tight lower bound for the variance of RNBRW edge weights.
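
From the description given (no immediate backtracking; the walk terminates and restarts once it completes a cycle, and the cycle-closing edge is retraced), a minimal sketch of the RNBRW edge-weighting step might look as follows in Python with NetworkX. It is an interpretation of the abstract, not the authors' code, and it stops at the edge weights rather than building the retracing probability matrix and its eigenvalue statistics.

```python
import random
import networkx as nx

def rnbrw_edge_weights(G, walks_per_node=50, seed=0):
    """One interpretation of RNBRW weighting: repeatedly start non-backtracking
    walks and, whenever a walk closes a cycle, increment the weight of the
    edge that closed it."""
    rng = random.Random(seed)
    w = {frozenset(e): 0 for e in G.edges()}
    for _ in range(walks_per_node):
        for start in G.nodes():
            prev, cur = None, start
            visited = {start}
            while True:
                nbrs = [v for v in G.neighbors(cur) if v != prev]
                if not nbrs:
                    break                                  # dead end: renew the walk
                nxt = rng.choice(nbrs)
                if nxt in visited:                         # cycle completed
                    w[frozenset((cur, nxt))] += 1          # retrace the closing edge
                    break
                visited.add(nxt)
                prev, cur = cur, nxt
    return w

G = nx.karate_club_graph()
weights = rnbrw_edge_weights(G)
top = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)[:5]
print("edges most often closing cycles:", [tuple(e) for e, _ in top])
```

Edges that frequently close cycles tend to sit inside communities, which is the property the proposed test exploits.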

Keywords: hypothesis testing, RNBRW, network inference, community structure

Procedia PDF Downloads 144
118 A Case Report of Aberrant Vascular Anatomy of the Deep Inferior Epigastric Artery Flap

Authors: Karissa Graham, Andrew Campbell-Lloyd

Abstract:

The deep inferior epigastric artery perforator (DIEP) flap is used to reconstruct large volumes of tissue. The DIEP flap is based on the deep inferior epigastric artery (DIEA) and vein. Accurate knowledge of the anatomy of these vessels allows for efficient dissection of the flap, minimal damage to surrounding tissue, and a well-vascularized flap. A 54-year-old woman was assessed for bilateral delayed autologous reconstruction with DIEP free flaps. The right DIEA was consistent with the described anatomy. The left DIEA had a vessel branching shortly after leaving the external iliac artery and before entering the muscle. This independent branch entered the muscle and had a long intramuscular course to the largest perforator. The main DIEA vessel demonstrated a type II branching pattern but had perforators that were too small to support a viable DIEP flap. There were no communicating arterial branches between the independent vessel and the DIEA; however, there was one venous communication between them. A muscle-sparing transverse rectus abdominis muscle flap was raised using the main periumbilical perforator from the independent vessel. Our case report demonstrates an unreported anatomical variant of the DIEA. A few anatomical variants have been described in the literature, including a unilaterally absent DIEA and peritoneal-cutaneous perforators that had no connection to the DIEA. Performing a pre-operative CTA helps to identify these rare anatomical variations, which leads to safer, more efficient, and more effective operating.

Keywords: aberrant anatomy, CT angiography, DIEP anatomy, free flap

Procedia PDF Downloads 128
117 A Picture Is Worth a Billion Bits: Real-Time Image Reconstruction from Dense Binary Pixels

Authors: Tal Remez, Or Litany, Alex Bronstein

Abstract:

The pursuit of smaller pixel sizes at ever increasing resolution in digital image sensors is mainly driven by the stringent price and form-factor requirements of sensors and optics in the cellular phone market. Recently, Eric Fossum proposed a novel concept of an image sensor with dense sub-diffraction limit one-bit pixels (jots), which can be considered a digital emulation of silver halide photographic film. This idea has been recently embodied as the EPFL Gigavision camera. A major bottleneck in the design of such sensors is the image reconstruction process, producing a continuous high dynamic range image from oversampled binary measurements. The extreme quantization of the Poisson statistics is incompatible with the assumptions of most standard image processing and enhancement frameworks. The recently proposed maximum-likelihood (ML) approach addresses this difficulty, but suffers from image artifacts and has impractically high computational complexity. In this work, we study a variant of a sensor with binary threshold pixels and propose a reconstruction algorithm combining an ML data fitting term with a sparse synthesis prior. We also show an efficient hardware-friendly real-time approximation of this inverse operator. Promising results are shown on synthetic data as well as on HDR data emulated using multiple exposures of a regular CMOS sensor.
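
For a one-bit pixel that fires when it detects at least one photon, the Poisson model gives P(bit = 1) = 1 − exp(−λ), so the per-pixel maximum-likelihood estimate from K binary frames is λ̂ = −ln(1 − k/K). The NumPy sketch below demonstrates this ML data-fitting step alone on synthetic data; the sparse synthesis prior and the real-time hardware-friendly approximation proposed in the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground-truth photon flux per jot per exposure (arbitrary toy scene).
H, W, K = 64, 64, 256                       # image size and number of binary frames
yy, xx = np.mgrid[0:H, 0:W]
lam_true = 0.05 + 2.0 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 300.0)

# Each frame: Poisson photon counts thresholded at >= 1 photon (one-bit "jots").
photons = rng.poisson(lam_true, size=(K, H, W))
bits = (photons >= 1).astype(np.float64)

# Per-pixel ML estimate from the fraction of frames in which the jot fired.
p_hat = bits.mean(axis=0)
p_hat = np.clip(p_hat, 0.0, 1.0 - 1.0 / K)  # avoid log(0) where every frame fired
lam_ml = -np.log(1.0 - p_hat)

err = np.abs(lam_ml - lam_true).mean()
print("mean absolute error of the ML estimate:", round(float(err), 4))
```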

Keywords: binary pixels, maximum likelihood, neural networks, sparse coding

Procedia PDF Downloads 197
116 Incorporating Morality Standards in eLearning Process at INU

Authors: Khader Musbah Titi

Abstract:

In this era, traditional education systems do not meet the new challenges created by emerging technologies. On the other hand, eLearning offers all the necessary tools to meet these challenges. Using the Internet has brought numerous benefits to most educational institutions; it has also stretched the traditional problems of plagiarism, cheating, stealing, vandalism, and spying into cyberspace. This research discusses these issues in an eLearning environment. It attempts to provide suggestions and possible solutions to some of these issues. The main aim of this research is to conduct a survey at Irbid National University (INU), one of the oldest and biggest universities in Jordan, to study information related to moral and ethical issues in the e-learning environment that affect the construction of students’ characters in the future. The study focuses on students’ behavior and actions on the Internet when using a Learning Management System (LMS). Another aim of this research is to analyze the opinions of the instructors and final-year students at INU about ethical behavior and interaction through the LMS. The results show that educational institutes that use an LMS should focus on student character development along with field knowledge. On the negative side, the results of the study showed that most students behave unethically in their online activities (cheating, plagiarism, copy/paste, etc.) while studying online courses through the LMS. The results showed that instructors play a major role in the character development of students, and that academic institutes must have various mechanisms and a strict policy in the LMS to control unethical actions of students.

Keywords: LMS, cyber ethics, e-learning, IT ethics, students’ behaviors

Procedia PDF Downloads 239
115 Apolipoprotein A1 -75 G to A Substitution and Its Relationship with Serum ApoA1 Levels among the Indian Punjabi Population

Authors: Savjot Kaur, Mridula Mahajan, AJS Bhanwer, Santokh Singh, Kawaljit Matharoo

Abstract:

Background: Disorders of lipid metabolism and genetic predisposition are CAD risk factors. ApoA1 is the apolipoprotein component of anti-atherogenic high-density lipoprotein (HDL) particles. The protective action of HDL and ApoA1 is attributed to their central role in reverse cholesterol transport (RCT). Aim: This study was aimed at identifying sequence variations in ApoA1 (-75G>A) and their association with serum ApoA1 levels. Methods: A total of 300 CAD patients and 300 normal individuals (controls) were analyzed. The PCR-RFLP method was used to determine the DNA polymorphism in the ApoA1 gene; PCR products were digested with the restriction enzyme MspI, followed by agarose gel electrophoresis. Serum apolipoprotein A1 concentration was estimated with an immunoturbidimetric method. Results: Deviation from Hardy-Weinberg Equilibrium (HWE) was observed for this gene variant. The A allele frequency was higher among coronary artery disease patients (53.8%) compared to controls (45.5%), p = 0.004, OR = 1.38 (1.11-1.75). Under recessive model analysis (AA vs. GG+GA), the AA genotype of the ApoA1 G>A substitution conferred an increased risk of CAD susceptibility (p = 0.002, OR = 1.72 (1.2-2.43)). With serum ApoA1 levels < 107, the A allele frequency was higher among CAD cases (50%) as compared to controls (43.4%) [p = 0.23, OR = 1.2 (0.84-2)], and there was zero occurrence of the A allele in individuals with ApoA1 levels > 177. Conclusion: Serum ApoA1 levels were associated with the ApoA1 promoter region variation and influence CAD risk. Individuals with the APOA1 -75 A allele carry an excess hazard of developing CAD as a result of its effect on low serum concentrations of ApoA1.
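
As a small worked example of the reported association, allele counts reconstructed approximately from the stated frequencies (53.8% A in 300 cases, 45.5% A in 300 controls, i.e., 600 alleles per group) give an odds ratio close to the published 1.38 (1.11-1.75). The sketch below shows the standard 2x2 odds-ratio and Woolf 95% confidence-interval calculation; the integer counts are rounded reconstructions, not the authors' raw table.

```python
import math

# Reconstructed allele counts (600 alleles per group; rounded from the reported frequencies).
cases_A, cases_G = 323, 277          # CAD patients
ctrl_A, ctrl_G = 273, 327            # controls

odds_ratio = (cases_A * ctrl_G) / (cases_G * ctrl_A)

# Woolf (log-OR) 95% confidence interval.
se_log_or = math.sqrt(1 / cases_A + 1 / cases_G + 1 / ctrl_A + 1 / ctrl_G)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}-{hi:.2f})")
```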

Keywords: apolipoprotein A1 (G>A) gene polymorphism, coronary artery disease (CAD), reverse cholesterol transport (RCT)

Procedia PDF Downloads 311
114 A Gene Selection Algorithm for Microarray Cancer Classification Using an Improved Particle Swarm Optimization

Authors: Arfan Ali Nagra, Tariq Shahzad, Meshal Alharbi, Khalid Masood Khan, Muhammad Mugees Asif, Taher M. Ghazal, Khmaies Ouahada

Abstract:

Gene selection is an essential step for the classification of microarray cancer data. Gene expression cancer data (DNA microarrays) facilitate computing the robust and concurrent expression of various genes. Particle swarm optimization (PSO) requires simple operators and a small number of parameters for tuning the model in gene selection. The selection of a prognostic gene with small redundancy is a great challenge for the researcher, as there are a few complications in PSO-based selection methods. In this research, a new variant of PSO (self-inertia weight adaptive PSO) is proposed. In the proposed algorithm, SIW-APSO-ELM is explored to achieve high gene selection prediction accuracies. This new algorithm balances the exploration capabilities of the improved inertia weight adaptive particle swarm optimization with exploitation. The self-inertia weight adaptive particle swarm optimization (SIW-APSO) is used to search the solution space. The SIW-APSO is updated with an evolutionary process in such a way that each particle iteratively improves its velocity and position. An extreme learning machine (ELM) has been designed for the selection procedure. The proposed method has been used to identify a number of genes in the cancer datasets. The classification algorithm contains ELM, K-centroid nearest neighbor (KCNN), and support vector machine (SVM) classifiers, which attain high forecast accuracy as compared to state-of-the-art methods on microarray cancer datasets and show the effectiveness of the proposed method.
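
The SIW-APSO-ELM pipeline itself cannot be reproduced from the abstract; the sketch below only illustrates the underlying mechanism it builds on, a global-best particle swarm with an inertia weight, minimizing a toy objective. Replacing the toy objective with a classifier's cross-validated error over gene-inclusion vectors is the usual way such a swarm is turned into a gene selector.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best PSO with a fixed inertia weight w; the paper's variant
    adapts the inertia weight per particle, which is not reproduced here."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))      # positions
    v = np.zeros_like(x)                                 # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()               # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Toy objective (sphere function) standing in for, e.g., 1 - classification accuracy.
best_x, best_val = pso_minimize(lambda z: float(np.sum(z ** 2)), dim=10)
print("best objective value found:", round(best_val, 6))
```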

Keywords: microarray cancer, improved PSO, ELM, SVM, evolutionary algorithms

Procedia PDF Downloads 80
113 Multiple Variations of the Nerves of Gluteal Region and Their Clinical Implications, a Case Report

Authors: A. M. Prasad

Abstract:

Knowledge of variations of the nerves of the gluteal region is important for clinicians administering intramuscular injections, for orthopedic surgeons dealing with hip surgeries, and possibly for physiotherapists managing painful conditions and paralysis of this region. Herein, we report multiple variations of the nerves of the gluteal region. In the current case, the sciatic nerve was absent. The common peroneal and tibial nerves arose from the sacral plexus and reached the gluteal region through the greater sciatic foramen above and below the piriformis, respectively. The common peroneal nerve gave a muscular branch to the gluteus maximus. The inferior gluteal nerve and the posterior cutaneous nerve of the thigh arose from a common trunk. The common trunk was formed by three roots. The upper and middle roots arose from the sacral plexus and entered the gluteal region through the greater sciatic foramen above and below the piriformis, respectively. The lower root arose from the pudendal nerve and joined the common trunk. These variations were seen in the right gluteal region of an adult male cadaver aged approximately 70 years. Innervation of the gluteus maximus by the common peroneal nerve and the presence of a common trunk of the inferior gluteal nerve and the posterior cutaneous nerve of the thigh make this case unique. The variant nerves may be subjected to iatrogenic injuries during surgical approaches to the hip. They may also get compressed if there is hypertrophy of the piriformis (piriformis syndrome). Hence, knowledge of these variations is of importance to clinicians, orthopedic surgeons and possibly physiotherapists.

Keywords: gluteal region, multiple variations, nerve injury, sciatic nerve

Procedia PDF Downloads 339
112 Learning Dynamic Representations of Nodes in Temporally Variant Graphs

Authors: Sandra Mitrovic, Gaurav Singh

Abstract:

In many industries, including telecommunications, churn prediction has been a topic of active research. A lot of attention has been drawn to devising the most informative features, and this area of research has gained even more focus with the spread of (social) network analytics. Call detail records (CDRs) have been used to construct customer networks and extract potentially useful features. However, to the best of our knowledge, no studies including network features have yet proposed a generic way of representing network information; instead, ad-hoc and dataset-dependent solutions have been suggested. In this work, we build upon a recently presented method (node2vec) to obtain representations for nodes in an observed network. The proposed approach is generic and applicable to any network and domain. Unlike node2vec, which assumes a static network, we consider a dynamic and time-evolving network. To account for this, we propose an approach that constructs the feature representation of each node by generating its node2vec representations at different timestamps, concatenating them and finally compressing them using an auto-encoder-like method in order to retain reasonably long and informative feature vectors. We test the proposed method on a churn prediction task in the telco domain. To predict churners at timestamp ts+1, we construct training and testing datasets consisting of feature vectors from the time intervals [t1, ts-1] and [t2, ts] respectively, and use traditional supervised classification models like SVM and logistic regression. The observed results show the effectiveness of the proposed approach as compared to ad-hoc feature-selection-based approaches and static node2vec.
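
A compressed sketch of the pipeline described (per-timestamp node embeddings, concatenated over time, then compressed) is given below. For brevity it uses plain uniform random walks with gensim's Word2Vec in place of node2vec's biased walks, and PCA in place of the auto-encoder; those substitutions, the toy snapshot graphs and all parameter values are assumptions, not the authors' setup.

```python
import random
import networkx as nx
import numpy as np
from gensim.models import Word2Vec
from sklearn.decomposition import PCA

def walks(G, num_walks=10, walk_len=20, seed=0):
    """Uniform random walks (a simplification of node2vec's biased walks)."""
    rng = random.Random(seed)
    out = []
    for _ in range(num_walks):
        for start in G.nodes():
            walk, cur = [str(start)], start
            for _ in range(walk_len - 1):
                nbrs = list(G.neighbors(cur))
                if not nbrs:
                    break
                cur = rng.choice(nbrs)
                walk.append(str(cur))
            out.append(walk)
    return out

def embed_snapshot(G, dim=16):
    model = Word2Vec(walks(G), vector_size=dim, window=5, min_count=1, sg=1, seed=1, workers=1)
    return {n: model.wv[str(n)] for n in G.nodes()}

# Toy "call graph" snapshots at three timestamps (stand-ins for CDR-based networks).
snapshots = [nx.erdos_renyi_graph(60, 0.08, seed=t) for t in range(3)]
per_time = [embed_snapshot(G) for G in snapshots]

# Concatenate each node's embeddings over time, then compress (PCA as an auto-encoder stand-in).
nodes = list(snapshots[0].nodes())
concat = np.vstack([np.concatenate([emb[n] for emb in per_time]) for n in nodes])
features = PCA(n_components=8, random_state=0).fit_transform(concat)
print("final feature matrix:", features.shape)   # (num_nodes, 8) -> input to SVM / LogReg
```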

Keywords: churn prediction, dynamic networks, node2vec, auto-encoders

Procedia PDF Downloads 310
111 Government Size and Economic Growth: Testing the Non-Linear Hypothesis for Nigeria

Authors: R. Santos Alimi

Abstract:

Using time-series techniques, this study empirically tested the validity of the existing theory which stipulates that there is a nonlinear relationship between government size and economic growth, such that government spending is growth-enhancing at low levels but growth-retarding at high levels, with the optimal size occurring somewhere in between. This study employed three estimation equations. First, for the size of government, two measures are considered: (i) the share of total expenditures in gross domestic product, and (ii) the share of recurrent expenditures in gross domestic product. Second, the study adopted real GDP (without the government expenditure component) as a variant measure of economic growth, other than real total GDP, in estimating the optimal level of government expenditure. The study is based on annual Nigeria country-level data for the period 1970 to 2012. Estimation results show that the inverted U-shaped curve exists for the two measures of government size, and the estimated optimum shares are 19.81% and 10.98%, respectively. Finally, with the adoption of real GDP (without the government expenditure component), the optimum government size was found to be 12.58% of GDP. Our analysis shows that the actual share of government spending on average (2000-2012) is about 13.4%. This study adds to the literature confirming that the optimal government size exists not only for developed economies but also for a developing economy like Nigeria. Thus, a public intervention threshold level that fosters economic growth is a reality; beyond this point economic growth should be left in the hands of the private sector. This finding has significant implications for the appraisal of government spending and budgetary policy design.
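
The inverted-U logic behind the test can be written as growth = β0 + β1·g + β2·g², with the optimal government size at g* = −β1/(2β2) when β2 < 0. The sketch below fits that quadratic with NumPy on hypothetical illustrative data; it is not the paper's fully modified OLS estimation on the Nigerian series.

```python
import numpy as np

# Hypothetical observations: government size g (% of GDP) and real GDP growth (%).
g = np.array([8, 10, 12, 14, 16, 18, 20, 22, 24, 26], dtype=float)
growth = np.array([2.1, 3.0, 3.9, 4.3, 4.5, 4.4, 4.0, 3.4, 2.6, 1.7])

# Quadratic (inverted-U) fit: growth = b2*g^2 + b1*g + b0
b2, b1, b0 = np.polyfit(g, growth, 2)
g_star = -b1 / (2 * b2)            # turning point = estimated optimal government size
print(f"b2 = {b2:.4f} (inverted U requires b2 < 0), optimal size = {g_star:.1f}% of GDP")
```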

Keywords: public expenditure, economic growth, optimum level, fully modified OLS

Procedia PDF Downloads 416
110 Aesthetic Preference and Consciousness in African Theatre: A Performance Appraisal of Tyrone Terrence's A Husband's Wife

Authors: Oluwatayo Isijola

Abstract:

The destructive influence of Europe on Africa has also taken a toll on the aesthetic essence of African art, which centres on morality and value for human life. In a parallel vein, the adverse turn of this influence on the dramaturgy of some contemporary African plays impedes audience consciousness in performance engagements. Through the spectrum of African aesthetics, this study attempts a performance appraisal of A Husband’s Wife, an unpublished play written by Tyrone Terence for the African audience. The researcher proffers two variant textual interpretations of the play to evaluate performance engagement: one in its default realistic mode, which holds an unresolved 'Medean-impulse', and another wherein the resolution is treated to a paradigm shift for aesthetic preference. The investigation employs the mixed method, which combines quantitative and qualitative methodologies. Keen observation of the reactions and responses of audience members engaged in both performances, and on-the-spot interviews with selected audience members, were the primary sources of the qualitative data. Quantitative data were captured in an on-the-spot survey with the instrument of a questionnaire served to a sample population of the audience. The study observes that the preference for African aesthetics, as exemplified in the second performance, which deployed a paradigm shift, did enhance audience consciousness. Hinging on performance aesthetic theory, the paper recommends that all such African plays bestowed with the shortcoming of African aesthetics should be appropriately treated to paradigm shifts for performance engagement, in the interest of enhancing audience consciousness in the Nigerian theatre.

Keywords: African aesthetics, audience consciousness, paradigm shift, medean-impulse

Procedia PDF Downloads 326
109 Optimization of Springback Prediction in U-Channel Process Using Response Surface Methodology

Authors: Muhamad Sani Buang, Shahrul Azam Abdullah, Juri Saedon

Abstract:

There is little effective guidance on the selection of design parameters for springback of advanced high-strength steel sheet metal in the U-channel process during cold forming. This paper presents the development of a predictive model for springback in the U-channel process on advanced high-strength steel sheet employing Response Surface Methodology (RSM). The experiments were performed on dual-phase steel sheet, DP590, in a U-channel forming process, while a design of experiments (DoE) approach was used to investigate the effects of four factors, namely blank holder force (BHF), clearance (C), punch travel (Tp) and rolling direction (R), as input parameters at two levels by applying a full factorial design (2⁴). From a statistical analysis of variance (ANOVA), results showed that blank holder force (BHF), clearance (C) and punch travel (Tp) displayed significant effects on the springback of the flange angle (β2) and the wall opening angle (β1), while the rolling direction (R) factor is insignificant. The significant parameters are optimized in order to reduce the springback behavior using a Central Composite Design (CCD) in RSM, and the optimum parameters were determined. A regression model for springback was developed. The effect of individual parameters and their responses was also evaluated. The results obtained from the optimum model are in agreement with the experimental values.
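
As a generic illustration of the 2⁴ full factorial screening step described (not the authors' measured data), the sketch below builds the coded design matrix for BHF, clearance, punch travel and rolling direction and fits a main-effects model by least squares; the springback responses used here are made-up placeholders.

```python
import itertools
import numpy as np

factors = ["BHF", "C", "Tp", "R"]
# Coded 2^4 full factorial design: every combination of -1 / +1 levels (16 runs).
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))), dtype=float)

# Placeholder wall-opening-angle springback responses (degrees), one per run.
rng = np.random.default_rng(1)
y = 78 - 1.8 * design[:, 0] - 1.2 * design[:, 1] - 0.9 * design[:, 2] + rng.normal(0, 0.2, 16)

# Main-effects regression: y = b0 + sum_i b_i * x_i
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["intercept"] + factors, coef):
    print(f"{name:10s} effect estimate = {b:+.2f}")
```

A follow-up central composite design, as used in the paper, would add axial and center points to this screening design so that curvature of the response surface can be estimated.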

Keywords: advance high strength steel, u-channel process, springback, design of experiment, optimization, response surface methodology (rsm)

Procedia PDF Downloads 538
108 Resource Allocation and Task Scheduling with Skill Level and Time Bound Constraints

Authors: Salam Saudagar, Ankit Kamboj, Niraj Mohan, Satgounda Patil, Nilesh Powar

Abstract:

Task assignment and scheduling is a challenging Operations Research problem when there is a limited number of resources and a comparatively higher number of tasks. The Cost Management team at Cummins needs to assign tasks based on a deadline and must prioritize some of the tasks as per business requirements. Moreover, there is a constraint on the resources: the assignment of tasks should be done based on individual skill levels, which may vary for different tasks. Another constraint is for scheduling the tasks, which should be evenly distributed in terms of the number of working hours, and this adds further complexity to the problem. The proposed greedy approach to solve the assignment and scheduling problem first assigns the tasks based on management priority and then by the closest deadline. This is followed by an iterative selection of an available resource with the least allocated total working hours for a task, i.e., making the locally optimal choice for each task with the goal of determining the global optimum. The greedy task allocation is compared with a variant of the Hungarian algorithm, and it is observed that the proposed approach gives an equal allocation of working hours among the resources. The comparative study of the proposed approach is also done against manual task allocation, and it is noted that the visibility of the task timeline has increased from 2 months to 6 months. An interactive dashboard app was created for the greedy assignment and scheduling approach, and the tasks with more than a 2-month horizon that were initially waiting in a queue without a delivery date are now analyzed effectively by the business, with expected timelines for completion.
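
A minimal sketch of the greedy rule as described (order by management priority, then by closest deadline; assign each task to an eligible resource, by skill level, with the least allocated hours) is given below. The task and resource data are invented placeholders, and the Hungarian-algorithm comparison and the dashboard are not reproduced.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    priority: int      # lower value = higher management priority
    deadline: int      # e.g., day number
    skill: int         # minimum skill level required
    hours: float

@dataclass
class Resource:
    name: str
    skill: int
    allocated: float = 0.0
    tasks: list = field(default_factory=list)

def greedy_assign(tasks, resources):
    # 1) order tasks by management priority, then closest deadline
    for t in sorted(tasks, key=lambda t: (t.priority, t.deadline)):
        # 2) among resources meeting the skill requirement, pick the least-loaded one
        eligible = [r for r in resources if r.skill >= t.skill]
        if not eligible:
            continue                       # task stays unassigned
        r = min(eligible, key=lambda r: r.allocated)
        r.allocated += t.hours
        r.tasks.append(t.name)

tasks = [Task("cost-review", 1, 10, 2, 16), Task("audit", 2, 5, 3, 24),
         Task("forecast", 1, 7, 1, 8), Task("report", 3, 20, 1, 12)]
resources = [Resource("analyst-A", 3), Resource("analyst-B", 2), Resource("analyst-C", 1)]

greedy_assign(tasks, resources)
for r in resources:
    print(r.name, r.allocated, "h ->", r.tasks)
```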

Keywords: assignment, deadline, greedy approach, Hungarian algorithm, operations research, scheduling

Procedia PDF Downloads 141
107 Distribution of Gamma-Radiation Levels in Core Sediment Samples in Gulf of İzmir, Eastern Aegean Sea, Turkey

Authors: D. Kurt, İ. F. Barut, Z. Ü. Yümün, E. Kam

Abstract:

After the industrial revolution, industrial plants and settlements spread widely along sea coasts. This concentration also brings environmental pollution to the sea. This study focuses on the Gulf of İzmir, which is located in the west of Turkey and is a fascinating natural gulf of the Eastern Aegean Sea. Investigating recent marine sediments is extremely important for detecting pollution. Pollution of the marine environment by natural radionuclides is also recognized as a significant environmental concern. Ground drilling cores (the depth of each sediment core varies) were collected from four different locations in the Gulf of İzmir: Karşıyaka, İnciraltı, Çeşmealtı and Bayraklı. These sediment cores were placed in preservation bags, each weighing around 1 kg, and were dried at room temperature for a week to remove moisture. They were then sieved through a 1 mm mesh, and finally these powdered samples were transferred to 100 ml polyethylene Marinelli beakers. Each prepared sediment was stored for 40 days to reach radioactive equilibrium between uranium and thorium. Gamma spectrometry measurements were performed using an HPGe (high-purity germanium) semiconductor detector. Semiconductor detectors have very good energy resolution and are easily able to differentiate peaks that are close to each other. That is why gamma spectroscopy is commonly used for the determination of the activities of U-238, Th-232, Ra-226, Cs-137 and K-40 in Bq kg⁻¹. In this study, the results show that the average activity concentrations are, respectively, 2.2 ± 1.5 Bq kg⁻¹, 0.98 ± 0.02 Bq kg⁻¹, 8 ± 0.96 Bq kg⁻¹, 0.93 ± 0.14 Bq kg⁻¹, and 76.05 ± 0.93 Bq kg⁻¹. The outcomes of the study can be used as a baseline for forthcoming research, and the obtained data would be useful for radiological mapping of the surveyed areas.
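
The abstract does not state how activities were computed from the spectra; the usual gamma-spectrometry relation is A = N_net / (ε · Iγ · t · m), with N_net the net peak counts, ε the detection efficiency at the peak energy, Iγ the gamma emission probability, t the counting time and m the sample mass. The numbers in the sketch are illustrative assumptions only, not values from the study.

```python
def activity_concentration(net_counts, efficiency, gamma_intensity, live_time_s, mass_kg):
    """Standard gamma-spectrometry activity concentration in Bq/kg:
    A = N_net / (epsilon * I_gamma * t * m)."""
    return net_counts / (efficiency * gamma_intensity * live_time_s * mass_kg)

# Illustrative example: K-40 via its 1460.8 keV line (I_gamma ~ 0.107), assumed values.
a_k40 = activity_concentration(net_counts=5200, efficiency=0.012,
                               gamma_intensity=0.107, live_time_s=86400, mass_kg=0.6)
print(f"K-40 activity concentration = {a_k40:.1f} Bq/kg")
```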

Keywords: gamma, Gulf of İzmir (Eastern Aegean Sea-Turkey), natural radionuclides, pollution

Procedia PDF Downloads 253
106 Analyzing the Results of Buildings Energy Audit by Using Grey Set Theory

Authors: Tooraj Karimi, Mohammadreza Sadeghi Moghadam

Abstract:

Grey set theory has the advantage of using fewer data to analyze many factors, and it is therefore more appropriate for system study than traditional statistical regression, which requires massive data, normal distribution of the data and few varying factors. So, in this paper, grey clustering and the entropy of the coefficient vector of grey evaluations are used to analyze energy consumption in buildings of the Oil Ministry in Tehran. In fact, this article intends to analyze the results of energy audit reports and to define the most favorable characteristic of the system, which is the energy consumption of buildings, and the most favorable factors affecting this characteristic in order to modify and improve them. According to the results of the model, ‘the real Building Load Coefficient’ has been selected as the most important system characteristic and ‘uncontrolled area of the building’ has been diagnosed as the most favorable factor which has the greatest effect on the energy consumption of the building. Grey clustering in this study has been used for two purposes: first, all the variables of the building related to the energy audit are clustered into two main groups of indicators and the number of variables is reduced; second, grey clustering with variable weights has been used to classify all buildings into three categories named ‘no standard deviation’, ‘low standard deviation’ and ‘non-standard’. The entropy of the coefficient vector of grey evaluations is calculated to investigate the greyness of the results. It shows that among the 38 buildings surveyed in terms of energy consumption, 3 cases are in the standard group, 24 cases are in the ‘low standard deviation’ group and 11 buildings are completely non-standard. In addition, the clustering greyness of 13 buildings is less than 0.5 and the average uncertainty of the clustering results is 66%.

Keywords: energy audit, grey set theory, grey incidence matrixes, grey clustering, Iran oil ministry

Procedia PDF Downloads 371
105 Comparative Analysis of Control Techniques Based Sliding Mode for Transient Stability Assessment for Synchronous Multicellular Converter

Authors: Rihab Hamdi, Amel Hadri Hamida, Fatiha Khelili, Sakina Zerouali, Ouafae Bennis

Abstract:

This paper presents a comparative performance study of the sliding mode controller (SMC) for closed-loop voltage control of direct current to direct current (DC-DC) three-cell buck converters connected in parallel, operating in continuous conduction mode (CCM): SMC based on pulse-width modulation (PWM) versus SMC based on hysteresis modulation (HM), where an adaptive feedforward technique is adopted. On the one hand, for the PWM-based SM, the approach is to incorporate a fixed-frequency PWM scheme which is effectively a variant of SM control. On the other hand, for the HM-based SM, an adaptive feedforward control is introduced that makes the hysteresis band variable in the hysteresis modulator of the SM controller, with the aim of restricting the switching frequency variation in the case of any change of the line input voltage or output load. The results obtained under load change, input change and reference change clearly demonstrate a similar dynamic response of both proposed techniques; both are effective, with fast and smooth tracking of the desired output voltage. The PWM-based SM technique greatly improves the dynamic behavior and is slightly advantageous compared to the HM-based SM technique, as well as providing stability in all operating conditions. Simulation studies in the MATLAB/Simulink environment have been performed to verify the concept.
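
As a self-contained toy, the sketch below simulates hysteresis-modulated sliding mode control of a single buck cell (forward-Euler integration, sliding surface combining the voltage error and its derivative). It is only meant to make the HM mechanism concrete; the three-cell parallel topology, the PWM-based SM variant and the adaptive feedforward band of the paper are not modeled, and all component values are assumptions.

```python
import numpy as np

# Assumed single-cell buck converter parameters (not from the paper).
Vin, Vref = 24.0, 12.0
L, C, R = 100e-6, 100e-6, 10.0
dt, t_end = 1e-7, 5e-3
tau = np.sqrt(L * C)        # weight on the error derivative in the sliding surface
band = 0.05                 # fixed hysteresis band (the paper adapts this on-line)

iL, vC, u = 0.0, 0.0, 0
t = 0.0
while t < t_end:
    # Sliding surface: s = (Vref - vC) + tau * d(Vref - vC)/dt, with dvC/dt = (iL - vC/R)/C
    s = (Vref - vC) - tau * (iL - vC / R) / C
    if s > band:
        u = 1               # switch on
    elif s < -band:
        u = 0               # switch off
    # Forward-Euler update of the switched converter model
    diL = (u * Vin - vC) / L
    dvC = (iL - vC / R) / C
    iL += diL * dt
    vC += dvC * dt
    t += dt

print(f"output voltage after {t_end*1e3:.0f} ms: {vC:.2f} V (reference {Vref} V)")
```

In the adaptive scheme described in the abstract, the fixed band above would be recomputed from the measured input voltage and load so that the switching frequency stays roughly constant.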

Keywords: DC-DC converter, hysteresis modulation, parallel multi-cells converter, pulse-width modulation, robustness, sliding mode control

Procedia PDF Downloads 165
104 Effect of CYP2B6 c.516G>T and c.983T>C Single Nucleotide Polymorphisms on Plasma Nevirapine Levels in Zimbabwean HIV/AIDS Patients

Authors: Doreen Duri, Danai Zhou, Babil Stray-Pedersen, Collet Dandara

Abstract:

Given the high prevalence of HIV/AIDS in sub-Saharan Africa and the elusive search for a cure, understanding the pharmacogenetics of currently used drugs is critical in populations from the most affected regions. Compared to Asian and Caucasian populations, African population groups are more genetically diverse, making it difficult to extrapolate findings from one ethnic group to another. This study aimed to investigate the role of genetic variation in the CYP2B6 (c.516G>T and c.983T>C) single nucleotide polymorphisms on plasma nevirapine levels among HIV-infected adult Zimbabwean patients. In a cross-sectional study, patients on nevirapine-containing HAART who had reached steady state (more than six weeks on treatment) were recruited to participate. Blood samples were collected after patients provided consent, and the samples were used to extract DNA for genetic analysis or to measure plasma nevirapine levels. Genetic analysis was carried out using PCR and RFLP or SNaPshot for the two single nucleotide polymorphisms, CYP2B6 c.516G>T and c.983T>C, while LC-MS/MS was used to analyze the nevirapine concentration. CYP2B6 c.516G>T and c.983T>C significantly predicted plasma nevirapine concentration, with the c.516T and c.983T alleles being associated with elevated plasma nevirapine concentrations. Comparisons of the variant allele frequencies observed in this group with those reported in some African, Caucasian and Asian populations showed significant differences. We conclude that the pharmacogenetics of nevirapine can be creatively used to identify patients who are likely to develop nevirapine-associated side effects, as well as those with plasma concentrations too low for viral suppression.

Keywords: allele frequencies, genetically diverse, nevirapine, single nucleotide polymorphism

Procedia PDF Downloads 451
103 Allele Mining for Rice Sheath Blight Resistance by Whole-Genome Association Mapping in a Tail-End Population

Authors: Naoki Yamamoto, Hidenobu Ozaki, Taiichiro Ookawa, Youming Liu, Kazunori Okada, Aiping Zheng

Abstract:

Rice sheath blight is one of the destructive fungal diseases of rice, and sheath blight resistance is thought to be a polygenic trait. Host-pathogen interactions and secondary metabolites such as lignin and phytoalexins are likely to be involved in defense against R. solani. However, to our knowledge, it is still unknown how sheath blight resistance can be enhanced in rice breeding. To seek an alternative genetic factor that contributes to sheath blight resistance, we mined relevant allelic variations from rice core collections created in Japan. Based on disease lesion length on detached leaf sheaths, we selected the 30 top tail-end and the 30 bottom tail-end varieties from the core collections to perform genome-wide association mapping. Re-sequencing reads for these varieties were used to call single nucleotide polymorphisms among the 60 varieties and create a SNP panel, which contained 1,137,131 homozygous variant sites after filtering. Association mapping highlighted a locus on the long arm of chromosome 11, which is co-localized with three sheath blight QTLs, qShB11-2-TX, qShB11, and qSBR-11-2. Based on the localization of the trait-associated alleles, we identified an ankyrin repeat-containing protein gene (ANK-M) as an uncharacterized candidate factor for rice sheath blight resistance. Allelic distributions for ANK-M in the whole rice population supported the reliability of the trait-allele associations. Gene expression characteristics were checked to evaluate the functionality of ANK-M. Since an ANK-M homolog (OsPIANK1) in rice appears to be a basal defense regulator against rice blast and bacterial leaf blight, ANK-M may also play a role in the rice immune system.
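
With a tail-end design (30 resistant vs. 30 susceptible varieties), a minimal per-SNP association scan can be written as an allele-count Fisher's exact test, as sketched below on a random stand-in genotype matrix; the study's actual SNP calling, association model and multiple-testing handling are not reproduced.

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)

# Stand-in genotype matrix: rows = 60 varieties (30 resistant + 30 susceptible),
# columns = SNPs coded 0/1 for the two homozygous allele classes (inbred rice).
n_res, n_sus, n_snp = 30, 30, 1000
geno = rng.integers(0, 2, size=(n_res + n_sus, n_snp))
is_resistant = np.array([True] * n_res + [False] * n_sus)

p_values = np.empty(n_snp)
for j in range(n_snp):
    a = int(geno[is_resistant, j].sum())          # allele 1 in the resistant tail
    b = n_res - a
    c = int(geno[~is_resistant, j].sum())         # allele 1 in the susceptible tail
    d = n_sus - c
    _, p_values[j] = fisher_exact([[a, b], [c, d]])

top = np.argsort(p_values)[:5]
print("most associated SNP indices:", top, "p-values:", np.round(p_values[top], 4))
```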

Keywords: allele mining, GWAS, QTL, rice sheath blight

Procedia PDF Downloads 73