Search results for: latent tuberculosis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 580


100 The Hierarchical Model of Fitness Services Quality Perception in Serbia

Authors: Mirjana Ilic, Dragan Zivotic, Aleksandra Perovic, Predrag Gavrilovic

Abstract:

The perception of service quality depends on many factors, such as the area in which the services are provided, and the socioeconomic status, educational status, experience, age, and gender of consumers, among many others. For this reason, it is not possible to apply an instrument for establishing service quality perception that was developed for other areas and other populations. The aim of the research was to develop an instrument for assessing quality perception in the field of fitness in Serbia. After analyzing the available literature and conducting a pilot study, 15 areas were isolated in which it was possible to observe the perception of service quality. The areas included: material and technical basis, secondary facilities, coaches, programs, reliability, credibility, security, rapid response, compassion, communication, prices, satisfaction, loyalty, quality outcomes, and motives. These areas were covered by a questionnaire consisting of 100 items, where the number of items per area varied from 3 to 11. The questionnaire was administered to 350 subjects of both genders (174 men and 176 women) aged 18 to 68 years, all beneficiaries of fitness services for at least one year. In each area, an exploratory factor analysis was conducted using the principal components method. The number of significant factors was determined according to the Kaiser-Guttman criterion. The initial factor solutions were simplified using Varimax rotation. The analyses per area produced from 1 to 4 factors. Afterward, a factor analysis of each respondent's factor scores on the first principal component of each analyzed area was performed, and a factor structure was obtained with four latent dimensions, interpreted as offer, relationship with the coaches, experience of quality, and initial impression. 
This factor structure was analyzed by hierarchical analysis of oblique factors, which in the second-order space produced a single factor interpreted as a general factor of service quality perception. The resulting questionnaire is an instrument that can serve managers in the field of fitness to optimize center development, raising the quality of services in line with consumers' needs and expectations.
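The two-stage procedure described above (a per-area exploratory PCA with the Kaiser-Guttman retention rule, then a second-order analysis of the first-component scores) can be sketched roughly as follows. This is an illustrative sketch only: the area names and simulated responses are hypothetical, not the study's data, and the real analysis also involved Varimax and oblique rotations not shown here.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical example: 350 respondents and three of the 15 areas, with a
# few items each (the real questionnaire has 15 areas and 100 items in total)
areas = {"coaches": 5, "programs": 4, "prices": 3}
responses = {name: rng.normal(size=(350, k)) for name, k in areas.items()}

first_pc_scores = []
for name, X in responses.items():
    Z = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize items
    pca = PCA().fit(Z)
    # Kaiser-Guttman criterion: retain components with eigenvalue > 1
    n_keep = int((pca.explained_variance_ > 1).sum())
    print(f"{name}: {n_keep} significant factor(s)")
    first_pc_scores.append(pca.transform(Z)[:, 0])  # first-PC score per person

# Second-order analysis: factor the area-level first-PC scores together,
# looking for a single general quality-perception factor
general = PCA().fit(np.column_stack(first_pc_scores))
print(general.explained_variance_ratio_.round(2))
```

With real questionnaire data, a dominant first component in the second-order analysis would correspond to the general service-quality-perception factor the authors report.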

Keywords: fitness, hierarchical model, quality perception, factor analysis

Procedia PDF Downloads 301
99 Forecasting Future Society to Explore Promising Security Technologies

Authors: Jeonghwan Jeon, Mintak Han, Youngjun Kim

Abstract:

Due to the rapid development of information and communication technology (ICT), a substantial transformation is currently happening in society. As the range of intelligent technologies and services continuously expands, ‘things’ are becoming capable of communicating with one another and even with people. However, the “Internet of Things” has technical weaknesses, so a great amount of the information transferred in real time may be widely exposed to security threats. Users’ personal data are a typical example facing a serious security threat. Security threats will diversify and arise more frequently as the next generation of unfamiliar technologies develops. Moreover, as society becomes increasingly complex, security vulnerability will increase as well. In the existing literature, a considerable number of private and public reports that forecast future society have been published as a precedent step in the selection of future technologies and the establishment of strategies for competitiveness. Although there are previous studies that forecast security technology, they have focused only on technical issues and overlooked the interrelationships between security technology and social factors. Therefore, investigations of future security threats, and of the security technologies able to protect people from them, are required. In response, this study aims to derive potential security threats associated with the development of technology and to explore the security technologies that can protect against them. To do this, first of all, private and public reports that forecast the future, as well as online documents from technology-related communities, are collected. By analyzing the data, future issues are extracted and categorized in terms of STEEP (Society, Technology, Economy, Environment, and Politics), as well as security. 
Second, the components of potential security threats are developed based on the classified future issues. Then, points at which the security threats may occur (for example, a mobile payment system based on fingerprint scanning) are identified. Lastly, alternatives that prevent potential security threats are proposed by matching security threats with these points and investigating related security technologies from patent data. The proposed approach can identify ICT-related latent security menaces and provide guidelines in a ‘problem - alternative’ form by linking each threat point with security technologies.

Keywords: future society, information and communication technology, security technology, technology forecasting

Procedia PDF Downloads 455
98 Prevalence of Seropositivity for Cytomegalovirus in Patients with Hereditary Bleeding Diseases in West Azerbaijan of Iran

Authors: Zakieh Rostamzadeh, Zahra Shirmohammadi

Abstract:

Human cytomegalovirus (HCMV) is a species of the genus Cytomegalovirus, which in turn is a member of the viral family Herpesviridae, the herpesviruses. Although they may be found throughout the body, HCMV infections are frequently associated with the salivary glands. HCMV infection typically goes unnoticed in healthy people but can be life-threatening for the immunocompromised, such as HIV-infected persons, organ transplant recipients, or newborn infants. After infection, HCMV has the ability to remain latent within the body over long periods. Cytomegalovirus (CMV) causes infection in immunocompromised individuals, hemophilia patients, and those who frequently receive blood transfusions. This study aimed at determining the prevalence of CMV antibodies in hemophilia patients. Materials and Methods: A retrospective observational study was carried out in Urmia, North West of Iran. The study population comprised a sample of 50 hemophilic patients born after 1985 who had received blood factors in West Azerbaijan. The exclusion criteria were: drug abuse, high-risk sexual contact, vertical mother-to-fetus transmission, and suspicious needle exposure. All samples were evaluated by ELISA, using a single type of kit in a single laboratory. Results: Fifty hemophiliacs from the 250 patients registered with the Urmia Hemophilia Society were enrolled in the study, including 43 (86%) males and 7 (14%) females. The mean age of patients was 10.3 years (range 3 to 25 years). None of the patients had the risk factors mentioned above. Among the studied population, 34 (68%) had hemophilia A, 1 (2%) hemophilia B, 8 (16%) VWF deficiency, 3 (6%) factor VII deficiency, 1 (2%) factor V deficiency, 1 (2%) factor X deficiency, and 1 (2%). Sera were investigated for CMV-specific immunoglobulin G (IgG) and IgM: 91.89% of patients were anti-CMV IgG positive, and 40.54% were seropositive for anti-CMV IgM. 
In addition, 37.8% of patients had serological evidence of reactivation, and 2.7% had primary infection. Discussion: There was no relationship between the antibody titer and drug abuse, high-risk sexual contact, vertical mother-to-fetus transmission, or suspicious needle exposure.

Keywords: bioinformatics, biomedicine, cytomegalovirus, immunocompromise

Procedia PDF Downloads 349
97 Computational Team Dynamics in Student New Product Development Teams

Authors: Shankaran Sitarama

Abstract:

Teamwork is an extremely effective pedagogical tool in engineering education. New Product Development (NPD) has been an effective strategy for companies to streamline and bring innovative products and solutions to customers. Thus, the engineering curricula of many schools, some in collaboration with business schools, have brought NPD into the curriculum at the graduate level. Teamwork is invariably used during instruction, where students work in teams to come up with new products and solutions. A significant portion of the grade rests on the semester-long teamwork so that it is taken seriously by students. As the students work in teams and go through this process to develop new product prototypes, their effectiveness and learning depend to a great extent on how they function as a team, go through the creative process, come together, and work towards the common goal. A core attribute of a successful NPD team is its creativity and innovation. The team needs to be creative as a group, generating a breadth of ideas and innovative solutions that solve or address the problem they are targeting and meet the user's needs. They also need to be very efficient in their teamwork as they work through the various stages of development of these ideas, resulting in a proof-of-concept (POC) implementation or a prototype of the product. The simultaneous requirement that teams be creative and at the same time converge and work together imposes different types of tensions on their interactions. These ideational tensions/conflicts, and sometimes relational tensions/conflicts, are inevitable. Effective teams have to manage these team dynamics so as to be resilient and yet creative. 
This research paper provides a computational analysis of the teams' communication as a reflection of team dynamics and, through a superimposition of latent semantic analysis on social network analysis, provides a computational methodology for arriving at visual patterns of interaction. These team interaction patterns have clear correlations to the team dynamics and provide insights into the functioning, and thus the effectiveness, of the teams. 23 student NPD teams over 2 years of a course on managing NPD, with a blend of engineering and business school students, are considered, and the results are presented. The analysis is also correlated with the teams' detailed, tailored individual and group feedback and a self-reflection and evaluation questionnaire.
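The superimposition of latent semantic analysis on social network analysis could be sketched along the following lines. This is a minimal illustration, not the authors' implementation: the team messages, the 0.5 similarity threshold, and the member names are all hypothetical.

```python
# Sketch: embed each member's messages in a low-dimensional semantic space
# (LSA), then build a graph connecting semantically similar communicators.
import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

messages = {  # hypothetical per-member message text
    "alice": "prototype the sensor housing and test the casing fit",
    "bob": "sensor housing tolerances need another print run",
    "carol": "survey users about pricing for the subscription plan",
    "dave": "pricing survey results favor a monthly subscription",
}

names = list(messages)
tfidf = TfidfVectorizer().fit_transform(messages.values())
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
sim = cosine_similarity(lsa)

# Superimpose on a social network: an edge means two members' communication
# is semantically similar above an (assumed) threshold
G = nx.Graph()
G.add_nodes_from(names)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if sim[i, j] > 0.5:
            G.add_edge(names[i], names[j], weight=float(sim[i, j]))

print(sorted(G.edges()))  # clusters suggest sub-team interaction patterns
```

In the toy data, the two topical pairs (prototyping vs. pricing) tend to cluster, which is the kind of visual interaction pattern the paper correlates with team dynamics.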

Keywords: team dynamics, social network analysis, team interaction patterns, new product development teamwork, NPD teams

Procedia PDF Downloads 98
96 Developing Novel Bacterial Primase (DnaG) Inhibitors

Authors: Shanakr Bhattarai, V. S. Tiwari, Barak Akabayov

Abstract:

The rising number of infections and deaths is due to the development of drug-resistant bacteria. In addition, the number of antibiotic drugs approved by the Food and Drug Administration (FDA) is insufficient. Therefore, developing new drugs and finding novel targets in the central metabolic pathways of bacteria is urgently needed. One of the promising targets is the DNA replication machinery, which consists of many essential proteins and enzymes. DnaG primase is an essential enzyme and a central part of the DNA replication machinery. DnaG primase synthesizes the short RNA primers from which Okazaki fragments are initiated by the lagging-strand DNA polymerase. Therefore, it is reasonable to assume that inhibition of primase activity will stall DNA replication and prevent bacterial proliferation. We expressed and purified eight different bacterial DnaGs (Mycobacterium tuberculosis (Mtb), Bacillus anthracis (Ba), Mycobacterium smegmatis (Msmeg), Francisella tularensis (Ft), Vibrio cholerae (Vc), Yersinia pestis (Yp), Staphylococcus aureus (Saureus), and Escherichia coli (Ecoli)), followed by a radioactive activity assay. After obtaining pure and active DnaG proteins, we synthesized inhibitors for them. The inhibitors were divided into five groups of five molecules each, and a cocktail inhibition assay was performed against each DnaG. The groups of molecules inhibiting the DnaGs were further tested with the individual molecules belonging to those groups. Each molecule showing inhibition was titrated against the corresponding DnaG to find its IC50. We identified one molecule (VS167) that acted as a broad inhibitor, inhibiting all eight DnaGs. Molecules VS180 and VS186 inhibited seven DnaGs (all except Saureus). Similarly, two molecules (VS173, VS176) inhibited five DnaGs (Mtb, Ba, Ft, Yp, Ecoli). VS261 inhibited four DnaGs (Mtb, Ba, Ft, Vc). MS50 inhibited the Ba and Vc DnaGs. Some of the inhibitors inhibited only one DnaG. 
Thus, we found both broad and specific inhibitors for different bacterial DnaGs, and their structure-activity relationship (SAR) analysis was done. Further, we tried to explain the similarities among the DnaG enzymes from different bacteria based on their inhibition patterns.

Keywords: DNA replication, DnaG, okazaki fragments, antibiotic drugs

Procedia PDF Downloads 83
95 The Psychometric Properties of an Instrument to Estimate Performance in Ball Tasks Objectively

Authors: Kougioumtzis Konstantin, Rylander Pär, Karlsteen Magnus

Abstract:

Ball skills, as a subset of fundamental motor skills, are predictors of performance in sports. Currently, most tools evaluate ball skills using subjective ratings. The aim of this study was to examine the psychometric properties of a newly developed instrument to objectively measure ball-handling skills (BHS-test) using digital instruments. Participants were a convenience sample of 213 adolescents (age M = 17.1 years, SD = 3.6; 55% females, 45% males) recruited from upper secondary schools and invited to a sports hall for the assessment. The 8-item instrument incorporated both accuracy-based ball skill tests and repetitive-performance tests with a ball. Testers counted performance manually in four of the tests (one throwing and three juggling tasks). In the other four tests (one balancing and three rolling tasks), assessment was technologically enhanced using a ball machine, a Kinect camera, and balls with motion sensors. 3D printing technology was used to construct equipment, while all results were administered digitally with smartphones/tablets, computers, and a specially constructed application that sent data to a server. The instrument was deemed reliable (α = .77), and principal component analysis was used on a random subset (53 of the participants). Furthermore, latent variable modeling was employed to confirm the structure with the remaining subset (160 of the participants). The analysis showed good factorial validity, with one factor explaining 57.90% of the total variance. Four loadings were larger than .80, two more exceeded .76, and the other two were .65 and .49. The one-factor solution was confirmed by a first-order model with one general factor and an excellent fit between model and data (χ² = 16.12, DF = 20; RMSEA = .00, CI90 .00–.05; CFI = 1.00; SRMR = .02). The loadings on the general factor ranged between .65 and .83. Our findings indicate good reliability and construct validity for the BHS-test. 
To develop the instrument further, more studies are needed with various age groups, e.g., children. We suggest using the BHS-test for diagnostic or assessment purposes in talent development and in sports participation interventions that focus on ball games.
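Two of the psychometric quantities reported above (the reliability coefficient α and the share of variance carried by the first principal component) can be computed as sketched below. The 8-item responses are simulated from a single hypothetical latent skill; the numbers produced are illustrative, not the study's results.

```python
# Sketch: Cronbach's alpha and first-PC variance share for an 8-item test
# whose items all load on one latent ball-handling skill (simulated data).
import numpy as np

rng = np.random.default_rng(1)
skill = rng.normal(size=(213, 1))                 # latent skill, 213 test-takers
items = skill + 0.8 * rng.normal(size=(213, 8))   # item score = skill + noise

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))

# Share of total variance explained by the first principal component
Z = (items - items.mean(axis=0)) / items.std(axis=0)
eigvals = np.sort(np.linalg.eigvalsh(np.cov(Z, rowvar=False)))[::-1]
pc1_share = eigvals[0] / eigvals.sum()

print(round(alpha, 2), round(pc1_share, 2))
```

A one-factor structure like the study's (one PC explaining 57.90% of variance, α = .77) shows up in this kind of simulation as a large α and a dominant first eigenvalue.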

Keywords: ball-handling skills, ball-handling ability, technologically-enhanced measurements, assessment

Procedia PDF Downloads 78
94 Design, Construction and Evaluation of a Mechanical Vapor Compression Distillation System for Wastewater Treatment in a Poultry Company

Authors: Juan S. Vera, Miguel A. Gomez, Omar Gelvez

Abstract:

Water is Earth's most valuable resource, and its scarcity is currently a critical problem for society. Untreated wastewater contributes to this situation, especially wastewater from industrial activities, as it reduces the quality of water bodies, annihilating all kinds of life and bringing disease to people in contact with it. An effective solution to this problem is distillation, which removes most contaminants. However, this approach must also be energetically efficient in order to appeal to industry. Most water distillation treatments fail in this respect, with the exception of the Mechanical Vapor Compression (MVC) distillation system, which achieves high efficiency through the energy input of a compressor and the exchange of latent heat. This paper presents the process of design, construction, and evaluation of a Mechanical Vapor Compression (MVC) distillation system for the main Colombian poultry company, Avidesa Macpollo SA. The system will be located in the principal slaughterhouse in the state of Santander, and it will work along with the Gas Energy Mixing (GEM) system to treat the wastewater from the plant. The main goal of the MVC distiller, rarely used in this type of application, is to reduce the chloride, Chemical Oxygen Demand (COD), and Biological Oxygen Demand (BOD) levels according to state regulations, since the GEM cannot decrease them enough. The MVC distillation system consists of three components: the evaporator/condenser heat exchanger, where the distillation takes place; a low-pressure compressor, which supplies the energy that creates the temperature differential between the evaporator and condenser cavities; and a preheater to recover the remaining energy from the distillate. The model equations used to describe how the compressor power consumption, heat exchange area, and distillate production are related are based on a thermodynamic balance and heat transfer analysis, with correlations taken from the literature. 
Finally, the design calculations and the measurements of the installation are compared, showing agreement with the predictions for distillate production and power consumption as the temperature difference across the evaporator/condenser is varied.
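A first-order version of the energy balance behind an MVC distiller can be sketched as below. All the numbers (distillate flow, temperature lift, compressor efficiency) are assumed for illustration, not taken from the plant; the calculation uses a simple heat-pump (Carnot-bound) estimate rather than the paper's full model with literature correlations.

```python
# Sketch: the condenser must reject Q = m_dot * h_fg of latent heat, and a
# heat pump moving Q across a temperature lift dT needs at least Q * dT / T.
m_dot = 0.05          # kg/s distillate produced (assumed)
h_fg = 2257e3         # J/kg latent heat of vaporization of water near 100 C
T_cond = 373.15       # K condensing temperature (assumed near atmospheric)
dT = 5.0              # K evaporator/condenser temperature difference (assumed)
eta = 0.6             # overall compressor efficiency (assumed)

q_latent = m_dot * h_fg                # heat recycled through the exchanger, W
w_min = q_latent * dT / T_cond         # reversible (minimum) compressor work, W
w_actual = w_min / eta                 # estimated shaft power, W
specific = w_actual / m_dot / 3.6e6    # kWh per kg of distillate
print(round(w_actual), round(specific, 4))
```

The estimate makes the key design trade-off visible: shrinking the evaporator/condenser temperature difference cuts compressor power but requires a larger heat exchange area, which is exactly the relationship the paper's model equations capture.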

Keywords: mechanical vapor compression, distillation, wastewater, design, construction, evaluation

Procedia PDF Downloads 152
93 Identification and Classification of Fiber-Fortified Semolina by Near-Infrared Spectroscopy (NIR)

Authors: Amanda T. Badaró, Douglas F. Barbin, Sofia T. Garcia, Maria Teresa P. S. Clerici, Amanda R. Ferreira

Abstract:

Food fortification is the intentional addition of a nutrient to a food matrix; it has been widely used to overcome a lack of nutrients in the diet or to increase the nutritional value of food. Fortified food must meet the demands of the population, taking into account their habits and the risks that these foods may pose. Wheat and its by-products, such as semolina, have been strongly indicated for use as a food vehicle, since they are widely consumed and used in the production of other foods. These products have been strategically used to add nutrients such as fibers. Methods for the analysis and quantification of these kinds of components are destructive and require lengthy sample preparation and analysis. Therefore, industry has searched for faster and less invasive methods, such as Near-Infrared Spectroscopy (NIR). NIR is a rapid and cost-effective method; however, it is based on indirect measurements and yields a large amount of data. Therefore, NIR spectroscopy requires calibration with mathematical and statistical tools (chemometrics) to extract analytical information from the corresponding spectra, such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). PCA is well suited to NIR, since it can handle many spectra at a time and can be used for non-supervised classification. An advantage of PCA, which is also a data reduction technique, is that it reduces the spectral data to a smaller number of latent variables for further interpretation. On the other hand, LDA is a supervised method that searches for the canonical variables (CV) with the maximum separation among different categories. In LDA, the first CV is the direction of the maximum ratio between inter- and intra-class variances. The present work used a portable infrared (NIR) spectrometer for the identification and classification of pure and fiber-fortified semolina samples. 
The fiber was added to semolina in two different concentrations, and after spectra acquisition, the data were used for PCA and LDA to identify and discriminate the samples. The results showed that NIR spectroscopy associated with PCA was very effective in identifying pure and fiber-fortified semolina. Additionally, the classification rates of the samples using LDA were between 78.3% and 95% for calibration and between 75% and 95% for cross-validation. Thus, after multivariate analyses such as PCA and LDA, it was possible to verify that NIR associated with chemometric methods is able to identify and classify the different samples in a fast and non-destructive way.
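The PCA-then-LDA workflow described above can be sketched with scikit-learn as follows. The synthetic "spectra" (a shared absorbance shape plus a fiber-dependent band) and the class sizes are hypothetical stand-ins for the study's measurements.

```python
# Sketch: PCA for dimensionality reduction followed by LDA classification
# of pure vs. fiber-fortified "spectra" (simulated, 200 wavelengths).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
wavelengths = 200

def spectra(n, fiber_level):
    base = np.sin(np.linspace(0, 3, wavelengths))         # shared absorbance shape
    band = fiber_level * np.linspace(0, 1, wavelengths)   # fiber-dependent band
    return base + band + 0.05 * rng.normal(size=(n, wavelengths))

X = np.vstack([spectra(30, 0.0), spectra(30, 0.2), spectra(30, 0.4)])
y = np.repeat([0, 1, 2], 30)   # pure, lower-fiber, higher-fiber semolina

model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
acc = cross_val_score(model, X, y, cv=5).mean()           # cross-validated rate
print(round(acc, 2))
```

Reducing the hundreds of correlated wavelengths to a few latent variables before LDA mirrors the paper's rationale: LDA alone is ill-conditioned when predictors far outnumber samples.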

Keywords: chemometrics, fiber, linear discriminant analysis, near-infrared spectroscopy, principal component analysis, semolina

Procedia PDF Downloads 199
92 Disaggregate Travel Behavior and Transit Shift Analysis for a Transit Deficient Metropolitan City

Authors: Sultan Ahmad Azizi, Gaurang J. Joshi

Abstract:

Urban transportation has come into the limelight in recent times due to deteriorating travel quality. The economic growth of India has driven a significant rise in private vehicle ownership in cities, whereas public transport systems have largely been ignored in metropolitan cities. Even though there is latent demand for public transport systems such as organized bus services, most metropolitan cities have an unsustainably low share of public transport. Unfortunately, Indian metropolitan cities have failed to maintain a balanced mode share among travel modes in the absence of the timely introduction of mass transit systems of the required capacity and quality. As a result, personalized travel modes like two-wheelers have become the principal modes of travel, causing significant environmental, safety, and health hazards to citizens. Of late, policy makers have realized the need to improve public transport systems in metro cities to sustain development. However, the challenge for transit planning authorities is to design a transit system that may attract people to switch from their existing, rather convenient, modes of travel to the transit system, under the influence of household socio-economic characteristics and the given travel pattern. In this context, the fast-growing industrial city of Surat is taken up as a case study of the likely shift to bus transit. The deterioration of the public bus transport system after 1998 has led to tremendous growth in two-wheeler traffic on city roads. The inadequate and poor service quality of the present bus transit has failed to attract riders and correct the mode use balance in the city. Disaggregate travel behavior for trip generation and travel mode choice has been studied for the West Adajan residential sector of the city. Mode-specific utility functions are calibrated in a multinomial logit framework for two-wheelers, cars, and auto rickshaws with respect to bus transit using SPSS. 
Estimation of the shift to bus transit indicates that an average of 30% of auto rickshaw users and nearly 5% of two-wheeler users are likely to shift to bus transit if service quality is improved. However, car users are not expected to shift to the bus transit system.
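The multinomial logit structure underlying the mode-specific utility functions can be sketched as below: each mode's choice probability is a softmax over its utility. All coefficients, times, and costs are hypothetical placeholders, not the calibrated SPSS estimates.

```python
# Sketch: multinomial logit mode choice. Utility = intercept + b_time*time
# + b_cost*cost, with bus as the reference mode (all values assumed).
import numpy as np

modes = ["two_wheeler", "car", "auto_rickshaw", "bus"]
intercept = np.array([1.2, 0.5, 0.8, 0.0])   # alternative-specific constants
b_time, b_cost = -0.08, -0.05                # disutility of time and cost
time_min = np.array([15, 20, 25, 35])        # door-to-door minutes (assumed)
cost_inr = np.array([10, 40, 25, 8])         # trip cost in rupees (assumed)

v = intercept + b_time * time_min + b_cost * cost_inr   # systematic utilities
p = np.exp(v) / np.exp(v).sum()                          # choice probabilities
for mode, prob in zip(modes, p):
    print(f"{mode:>13}: {prob:.2f}")
```

Improving bus service quality enters such a model as a higher bus utility (e.g., lower in-vehicle time or a better alternative-specific constant), which shifts probability mass toward bus, the mechanism behind the estimated 30% and 5% shifts reported above.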

Keywords: bus transit, disaggregate travel behavior, mode choice behavior, public transport

Procedia PDF Downloads 249
91 Partial Least Square Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. 
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.

Keywords: partial least square regression, genetics data, negative shrinkage factors, high-dimensional data, highly correlated data

Procedia PDF Downloads 37
90 Identifying Environmental Adaptive Genetic Loci in Calotropis Procera (Estabragh): Population Genetics and Landscape Genetics Analyses

Authors: Masoud Sheidaei, Mohammad-Reza Kordasti, Fahimeh Koohdar

Abstract:

Calotropis procera (Aiton) W.T.Aiton (Apocynaceae) is an economically and medicinally important plant species; it is an evergreen, perennial shrub growing in arid and semi-arid climates that can tolerate very low annual rainfall (150 mm) and a dry season. The plant can also tolerate a temperature range of 20 to 30°C but is not frost tolerant. This species prefers free-draining sandy soils but can also grow in alkaline and saline soils. It is found at a range of altitudes, from exposed coastal sites to medium elevations up to 1300 m. Due to its morpho-physiological adaptations and its ability to tolerate various abiotic stresses, this taxon can compete with desirable pasture species and form dense thickets that interfere with stock management, particularly mustering activities. Calotropis procera grows only in the southern part of Iran, where it comprises a limited number of geographical populations. We used different population genetics and landscape genetics analyses to produce data on geographical populations of C. procera, based on a molecular genetic study using SCoT molecular markers. First, we used spatial principal components analysis (sPCA), as it can analyze data in a reduced space and can be used for co-dominant markers as well as presence/absence data, as is the case for SCoT molecular markers. This method also carries out Moran's I and Mantel tests to reveal spatial autocorrelation and test for the occurrence of isolation by distance (IBD). We also performed a Random Forest analysis to identify the importance of spatial and geographical variables for genetic diversity. Moreover, we used both RDA (redundancy analysis) and LFMM (latent factor mixed models) to identify the genetic loci significantly associated with geographical variables. A niche modeling analysis was carried out to predict the present potential distribution area of these plants and also the area predicted by the year 2050. The results obtained will be discussed in this paper.
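The isolation-by-distance (IBD) check mentioned above rests on a Mantel test: a permutation test of the correlation between genetic and geographic distance matrices. A minimal sketch, with simulated populations rather than the study's SCoT data, might look like this.

```python
# Sketch: Mantel test for isolation by distance on simulated populations
# whose genetic distance partly tracks geographic distance.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(12, 2))           # 12 population locations
geo = squareform(pdist(coords))                      # geographic distances
gen = 0.01 * geo + 0.1 * rng.normal(size=geo.shape)  # "genetic" distances
gen = (gen + gen.T) / 2                              # keep matrix symmetric
np.fill_diagonal(gen, 0)

iu = np.triu_indices_from(geo, k=1)                  # upper-triangle entries

def mantel_r(a, b):
    return np.corrcoef(a[iu], b[iu])[0, 1]

obs = mantel_r(geo, gen)
perms = []
for _ in range(999):                                 # permute population labels
    order = rng.permutation(len(geo))
    perms.append(mantel_r(geo, gen[np.ix_(order, order)]))
p_value = (np.sum(np.array(perms) >= obs) + 1) / 1000
print(round(obs, 2), p_value)
```

A significant positive correlation, as this simulation is built to produce, is the signature of IBD that sPCA's built-in Mantel test would report for the real populations.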

Keywords: population genetics, landscape genetics, Calotropis procera, niche modeling, SCoT markers

Procedia PDF Downloads 83
89 Protecting the Health of Astronauts: Enhancing Occupational Health Monitoring and Surveillance for Former NASA Astronauts to Understand Long-Term Outcomes of Spaceflight-Related Exposures

Authors: Meredith Rossi, Lesley Lee, Mary Wear, Mary Van Baalen, Bradley Rhodes

Abstract:

The astronaut community is unique, and may be disproportionately exposed to occupational hazards not commonly seen in other communities. The extent to which the demands of the astronaut occupation and exposure to spaceflight-related hazards affect the health of the astronaut population over the life course is not completely known. A better understanding of the individual, population, and mission impacts of astronaut occupational exposures is critical to providing clinical care, targeting occupational surveillance efforts, and planning for future space exploration. The ability to characterize the risk of latent health conditions is a significant component of this understanding. Provision of health screening services to active and former astronauts ensures individual, mission, and community health and safety. Currently, the NASA-Johnson Space Center (JSC) Flight Medicine Clinic (FMC) provides extensive medical monitoring to active astronauts throughout their careers. Upon retirement, astronauts may voluntarily return to the JSC FMC for an annual preventive exam. However, current retiree monitoring includes only selected screening tests, representing an opportunity for augmentation. The potential long-term health effects of spaceflight demand an expanded framework of testing for former astronauts. The need is two-fold: screening tests widely recommended for other aging populations are necessary to rule out conditions resulting from the natural aging process (e.g., colonoscopy, mammography); and expanded monitoring will increase NASA’s ability to better characterize conditions resulting from astronaut occupational exposures. To meet this need, NASA has begun an extensive exploration of the overall approach, cost, and policy implications of expanding the medical monitoring of former NASA astronauts under the Astronaut Occupational Health program. 
Increasing the breadth of monitoring services will ultimately enrich the existing evidence base of occupational health risks to astronauts. Such an expansion would therefore improve the understanding of the health of the astronaut population as a whole, and the ability to identify, mitigate, and manage such risks in preparation for deep space exploration missions.

Keywords: astronaut, long-term health, NASA, occupational health, surveillance

Procedia PDF Downloads 519
88 Exploration of Probiotics and Anti-Microbial Agents in Fermented Milk from Pakistani Camel spp. Breeds

Authors: Deeba N. Baig, Ateeqa Ijaz, Saloome Rafiq

Abstract:

Camel is a religious and culturally significant animal in Asian and African regions. In Pakistan Dromedary and Bactrian are common camel breeds. Other than the transportation use, it is a pivotal source of milk and meat. The quality of its milk and meat is predominantly dependent on the geographical location and variety of vegetation available for the diet. Camel milk (CM) is highly nutritious because of its reduced cholesterol and sugar contents along with enhanced minerals and vitamins level. The absence of beta-lactoglobulin (like human milk), makes CM a safer alternative for infants and children having Cow Milk Allergy (CMA). In addition to this, it has a unique probiotic profile both in raw and fermented form. Number of Lactic acid bacteria (LAB) including lactococcus, lactobacillus, enterococcus, streptococcus, weissella, pediococcus and many other bacteria have been detected. From these LAB Lactobacilli, Bifidobacterium and Enterococcus are widely used commercially for fermentation purpose. CM has high therapeutic value as its effectiveness is known against various ailments like fever, arthritis, asthma, gastritis, hepatitis, Jaundice, constipation, postpartum care of women, anti-venom, dropsy etc. It also has anti-diabetic, anti-microbial, antitumor potential along with its robust efficacy in the treatment of auto-immune disorders. Recently, the role of CM has been explored in brain-gut axis for the therapeutics of neurodevelopmental disorders. In this connection, a lot of grey area was available to explore the probiotics and therapeutics latent in the CM available in Pakistan. Thus, current study was designed to explore the predominant probiotic flora and antimicrobial potential of CM from different local breeds of Pakistan. The probiotics have been identified through biochemical, physiological and ribo-typing methods. In addition to this, bacteriocins (antimicrobial-agents) were screened through PCR-based approach. 
Results of this study revealed that CM from different camel breeds harbored a number of similar probiotic candidates within a limited range of variability. Likewise, the nucleotide sequence analysis of selected anti-listerial bacteriocins revealed minimal variability. In conclusion, CM has a sufficient probiotic complement and significant anti-microbial potential.

Keywords: bacteriocins, camel milk, probiotics potential, therapeutics

Procedia PDF Downloads 118
87 Properties of Ettringite According to Hydration, Dehydration and Carbonation Process

Authors: Bao Chen, Frederic Kuznik, Matthieu Horgnies, Kevyn Johannes, Vincent Morin, Edouard Gengembre

Abstract:

The contradiction between energy consumption, environmental protection and social development has intensified over recent decades. At the same time, to move away from a thirst for fossil fuels, attention has turned to renewable energy such as solar energy, wind power and hydropower. However, due to the unavoidable geographical and temporal mismatch between production and consumption, energy storage appears to be one of the most reasonable solutions for enlarging the use of renewable energies. Thermal energy storage (TES), a branch of energy storage, concerns the capture, storage and consumption of thermal energy for later use at different scales (individual house, apartment, district and city). In the TES research field, sensible and latent heat storage have been widely studied and are at an advanced stage of development. Compared with them, thermochemical energy storage is still at an initial phase but provides a relatively higher theoretical energy density and a long shelf life without heat dissipation during storage. Among thermochemical energy storage materials, inorganic pure or composite compounds such as micro-porous silica gel, SrBr₂ hydrate and MgSO₄-zeolite have been reported as promising for integration into thermal energy storage systems. However, the cost of these materials is one of the main obstacles that may hinder the wide use of energy storage systems at real application scales (individual house, apartment, district and even city). New studies on ettringite show promising application for thermal energy storage owing to its high energy density and its large availability from cementitious materials. Ettringite, or calcium trisulfoaluminate hydrate, whose chemical formula is 3CaO∙Al₂O₃∙3CaSO₄∙32H₂O, or C₆AS̅₃H₃₂ in cement chemistry notation, is one of the most important members of the AFt group. 
As a common compound in hydrated cements, ettringite has been widely studied for its performance in construction but is barely known as a thermochemical material. In this study, we summarize the available data on the structure and properties of ettringite and its metastable phase (meta-ettringite), including the processes of hydration, thermal conversion and carbonation durability for thermal energy storage.

Keywords: building materials, ettringite, meta-ettringite, thermal energy storage

Procedia PDF Downloads 202
86 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer tested), with an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. 
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay among accuracy, computing resources and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is especially important in explaining complex biological mechanisms.
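The k-mer representation at the heart of this approach is straightforward to sketch. The snippet below is an illustrative Python reconstruction, not the authors' pipeline, and the example sequence is a hypothetical fragment rather than a real MTB genome.

```python
from collections import Counter

def kmer_counts(sequence, k):
    """Count all overlapping k-mers in a DNA sequence."""
    sequence = sequence.upper()
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

# Hypothetical fragment; a real analysis would use whole MTB genomes.
seq = "ATGCGATCGATGCA"
counts = kmer_counts(seq, 3)
```

In practice, each isolate's counts would be mapped onto a shared k-mer vocabulary so that every genome yields a fixed-length feature vector for the classifier, with larger k giving sparser but more specific features.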

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 156
85 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Darlington Mapiye, Mpho Mokoatle, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer tested), with an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. 
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay among accuracy, computing resources and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is especially important in explaining complex biological mechanisms.

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 142
84 Influence of Long-Term Variability in Atmospheric Parameters on Ocean State over the Head Bay of Bengal

Authors: Anindita Patra, Prasad K. Bhaskaran

Abstract:

The atmosphere and ocean form a dynamically linked system that governs the exchange of energy, mass, and gas at the air-sea interface. The exchange of energy takes place in the form of sensible heat, latent heat, and momentum, commonly referred to as fluxes along the atmosphere-ocean boundary. Large-scale features such as the El Niño-Southern Oscillation (ENSO) are a classic example of the interaction mechanism at the air-sea interface that governs the inter-annual variability of the Earth’s climate system. Most importantly, the ocean and atmosphere as a coupled system act in tandem, thereby maintaining the energy balance of the climate system, a manifestation of the coupled air-sea interaction process. The present work is an attempt to understand the long-term variability in atmospheric parameters (from the surface to upper levels) and investigate their role in influencing the surface ocean variables. More specifically, the influence of atmospheric circulation and its variability on the mean sea level pressure (SLP) has been explored. The study reports a critical examination of both ocean and atmosphere parameters during the monsoon season over the head Bay of Bengal region. A trend analysis has been carried out for several atmospheric parameters, such as air temperature, geopotential height, and omega (vertical velocity), at different vertical levels in the atmosphere (from the surface to the troposphere), covering the period from 1992 to 2012. The Reanalysis 2 dataset from the National Centers for Environmental Prediction-Department of Energy (NCEP-DOE) was used in this study. The study signifies that the variability in air temperature and omega corroborates the variation noticed in geopotential height. Further, the study advocates that, in the lower atmosphere, the geopotential heights depict a typical east-west contrast, exhibiting a zonal dipole behavior over the study domain. 
In addition, the study clearly brings to light that the variations over different levels in the atmosphere play a pivotal role in supporting the observed dipole pattern, as clearly evidenced from the trends in SLP and the associated surface wind speed and significant wave height over the study domain.

Keywords: air temperature, geopotential height, head Bay of Bengal, long-term variability, NCEP reanalysis 2, omega, wind-waves

Procedia PDF Downloads 219
83 Inhibitory Effect of Coumaroyl Lupendioic Acid on Inflammation Mediator Generation in Complete Freund’s Adjuvant-Induced Arthritis

Authors: Rayhana Begum, Manju Sharma

Abstract:

Careya arborea Roxb., a member of the Lecythidaceae family, is traditionally used as an anthelmintic, an astringent and an antidote to snake venom, and in the treatment of tumors, bronchitis, epileptic fits, inflammation, skin disease, diarrhea, dysentery with bloody stools, dyspepsia, ulcers, toothache and ear pain. The present study investigated the anti-arthritic effect of coumaroyl lupendioic acid, a new lupane-type triterpene from Careya arborea stem bark, in a chronic inflammatory model, and further assessed its possible mechanism in the modulation of inflammatory biomarkers. Arthritis was induced by injecting 0.1 ml of Complete Freund’s Adjuvant (5 mg/ml of heat-killed Mycobacterium tuberculosis) into the subplantar region of the left hind paw. Treatment with coumaroyl lupendioic acid (10 and 20 mg/kg, p.o.) and reference drugs (indomethacin and dexamethasone, each at a dose of 5 mg/kg, p.o.) was started on the day of induction and continued up to 28 days. The progression of arthritis was evaluated by measuring paw volume, tibio-tarsal joint diameter, and arthritic index. The effect of coumaroyl lupendioic acid (CLA) on the production of PGE₂, NO, MPO, NF-κB, TNF-α, IL-1β, and IL-6 in serum as well as in inflamed paw tissue was also assessed. In addition, ankle joints and spleens were collected and prepared for histological examination. CLA treatment of inflamed rats resulted in significant amelioration of paw edema, tibio-tarsal joint swelling and arthritic score compared to the CFA control group. The results indicated that the CLA-treated groups had markedly decreased levels of inflammatory mediators (PGE₂, NO, MPO and NF-κB) and down-regulated production of pro-inflammatory cytokines (TNF-α, IL-1β, and IL-6) in paw tissue homogenates as well as in serum. However, the more pronounced effect was observed in the inflamed paw tissue homogenates. CLA also exhibited a protective effect on the tibio-tarsal joint cartilage and spleen. 
These results suggest that coumaroyl lupendioic acid may inhibit inflammation through suppression of the cascade of pro-inflammatory mediators via the down-regulation of NF-κB.

Keywords: complete Freund’s adjuvant, coumaroyl lupendioic acid, pro-inflammatory cytokines, prostaglandin E2

Procedia PDF Downloads 131
82 Utilizing Topic Modelling for Assessing Mhealth App’s Risks to Users’ Health before and during the COVID-19 Pandemic

Authors: Pedro Augusto Da Silva E Souza Miranda, Niloofar Jalali, Shweta Mistry

Abstract:

BACKGROUND: Software developers utilize automated solutions to scrape users’ reviews and extract meaningful knowledge to identify problems (e.g., bugs, compatibility issues) and possible enhancements (e.g., users’ requests) to their solutions. However, most of these solutions do not consider the health risks to users. Recent works have shed light on the importance of including health risk considerations in the development cycle of mHealth apps to prevent harm to their users. PROBLEM: The COVID-19 pandemic in Canada (and worldwide) is currently forcing physical distancing upon the general population. This new lifestyle has made the usage of mHealth applications more essential than ever, with a projected market forecast of 332 billion dollars by 2025. However, this surge in mHealth usage comes with possible risks to users’ health due to mHealth app problems (e.g., a wrong insulin dosage indication due to a UI error). OBJECTIVE: This work aims to raise awareness amongst mHealth developers of the importance of considering risks to users’ health within their development lifecycle. Moreover, this work also aims to help mHealth developers with a Proof-of-Concept (POC) solution to understand, process, and identify possible health risks to users of mHealth apps based on users’ reviews. METHODS: We conducted a mixed-method study design. We developed a crawler to mine the negative reviews submitted by Google Play store users for a sample of two mHealth apps (my fitness, medisafe). For each mHealth app, we performed the following steps: • The reviews were divided into two groups, before COVID-19 (reviews submitted before 15 Feb 2019) and during COVID-19 (reviews submitted from 16 Feb 2019 to Dec 2020). 
• For each period, a Latent Dirichlet Allocation (LDA) topic model was used to identify the different clusters of reviews based on similar topics. • The topics before and during COVID-19 were compared, and significant differences in the frequency and severity of similar topics were identified. RESULTS: We successfully scraped, filtered, processed, and identified health-related topics using both qualitative and quantitative approaches. The results demonstrated the similarity between topics before and during COVID-19.
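As a rough sketch of the LDA step, the following Python fragment applies scikit-learn's LatentDirichletAllocation to a handful of invented reviews; the review texts, topic count, and preprocessing here are illustrative assumptions, not the study's actual data or settings.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical negative reviews standing in for the scraped Google Play data.
reviews = [
    "app crashed and lost my medication schedule",
    "reminder notifications stopped working after update",
    "sync fails and deletes my logged doses",
    "calorie tracking shows the wrong totals",
]

# Bag-of-words representation of each review.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(reviews)

# Fit an LDA model; n_components (the topic count) is a tuning choice.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # per-review topic distribution
```

Each row of `doc_topics` is a probability distribution over topics for one review; comparing these distributions across the two time periods is what enables the before/during contrast described above.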

Keywords: natural language processing (NLP), topic modeling, mHealth, COVID-19, software engineering, telemedicine, health risks

Procedia PDF Downloads 121
81 Evaluation of Mito-Uncoupler Induced Hyper Metabolic and Aggressive Phenotype in Glioma Cells

Authors: Yogesh Rai, Saurabh Singh, Sanjay Pandey, Dhananjay K. Sah, B. G. Roy, B. S. Dwarakanath, Anant N. Bhatt

Abstract:

One of the most common signatures of highly malignant gliomas is their capacity to metabolize more glucose to lactic acid than normal brain tissues, even under normoxic conditions (the Warburg effect), indicating that aerobic glycolysis is constitutively upregulated through stable genetic or epigenetic changes. However, oxidative phosphorylation (OxPhos) is also required to maintain the mitochondrial membrane potential for tumor cell survival. In the process of tumorigenesis, tumor cells at their fastest growth rate exhibit both high glycolysis and high OxPhos. Therefore, metabolically reprogrammed cancer cells combining aerobic glycolysis with altered OxPhos develop a robust metabolic phenotype, which confers a selective growth advantage. In our study, we grew the highly glycolytic BMG-1 (glioma) cells with continuous exposure to the mitochondrial uncoupler 2,4-dinitrophenol (DNP) for 10 passages to obtain a phenotype of high glycolysis with enhanced altered OxPhos. We found that the OxPhos-modified BMG (OPMBMG) cells had a similar growth rate and cell cycle distribution but higher mitochondrial mass and functional enzymatic activity than the parental cells. In in-vitro studies, OPMBMG cells showed enhanced invasion, proliferation and migration properties. Moreover, they also showed enhanced angiogenesis in a matrigel plug assay. Xenografted tumors from OPMBMG cells showed a reduced latent period, a faster growth rate and a nearly five-fold reduction in the tumor take in nude mice compared to BMG-1 cells, suggesting that the robust metabolic phenotype facilitates tumor formation and growth. OPMBMG cells, which were found to be radio-resistant, showed enhanced radio-sensitization by 2-DG compared to the parental BMG-1 cells. This study suggests that metabolic reprogramming in cancer cells enhances the potential for migration, invasion and proliferation. It also strengthens the ability of cancer cells to escape death processes, conferring resistance to therapeutic modalities. 
Our data also suggest that combining metabolic inhibitors like 2-DG with conventional therapeutic modalities can sensitize such metabolically aggressive cancer cells more than the therapies alone.

Keywords: 2-DG, BMG, DNP, OPM-BMG

Procedia PDF Downloads 216
80 Dimensions of Public Spaces in Indian Market Places: Feelings through Human Senses

Authors: Piyush Hajela

Abstract:

Public spaces in Indian market places are vibrant and colorful and carry latent dimensions that make them attractive and popular gathering spaces. These markets satisfy the household needs of the people as well as their social, cultural and traditional aspirations. Going to a market place for shopping in India is a great source of entertainment, and people will happily spend as much time there as possible, staying for longer than strictly required. It is this desire of the people that generates public spaces. Many of these public spaces emerge as squares, plazas and corners of varied shapes and sizes at different locations, and yet provide a conducive environment. Such public spaces grow organically and are discovered by the people themselves. Indian markets serve people of different cultures, religions, castes, ages and genders, which keeps them alive all year round. India is a diverse country, and this diversity is clearly reflected in its market places. They hold people together and promote harmony across cultures. Free access to these market places makes them magnets for social interaction. Public spaces are spread across a city and have more or less established their existence and prominence in the social set-up. While a few of them are deliberately created, the others are discovered by the people themselves in their constant search for desirable interactive public spaces. These are the most sought-after gathering spaces, having the qualities of promoting social interaction, providing free accessibility, offering a desirable scale, and so on. The paper aims at identifying these freely accessible public spaces and the dimensions within them that make these public spaces hold people for significant durations of time. The dimensions present are judged through the collective response of the human senses, in the form of safety, comfort and so on, through the expressions of the participants. 
The aim, therefore, is to trace the freely accessible public spaces that have emerged in Indian markets and evaluate them for human response and behavior. The hierarchy of market places in the city of Bhopal is well established as city-center level, sub-city-center level, community level, and local and convenient level market places. While many city-centers are still referred to as the old, traditional or core area of the city, the others are part of the planned city. These different levels of market places are studied for emerged public spaces. These emerged public spaces are then documented in detail to unveil the dimensions they offer, through photographs, visual observations, questionnaires and the responses of the participants in these public spaces.

Keywords: human comfort, enclosure, safety, social interaction

Procedia PDF Downloads 403
79 Structural Equation Modelling Based Approach to Integrate Customers and Suppliers with Internal Practices for Lean Manufacturing Implementation in the Indian Context

Authors: Protik Basu, Indranil Ghosh, Pranab K. Dan

Abstract:

Lean management is an integrated socio-technical system to bring about a competitive state in an organization. The purpose of this paper is to explore and integrate the role of customers and suppliers with the internal practices of the Indian manufacturing industries towards successful implementation of lean manufacturing (LM). An extensive literature survey is carried out. An attempt is made to build an exhaustive list of all the input manifests related to customers, suppliers and internal practices necessary for LM implementation, coupled with a similar exhaustive list of the benefits accrued from its successful implementation. A structural model is thus conceptualized, which is empirically validated based on the data from the Indian manufacturing sector. With the current impetus on developing the industrial sector, the Government of India recently introduced the Lean Manufacturing Competitiveness Scheme that aims to increase competitiveness with the help of lean concepts. There is a huge scope to enrich the Indian industries with the lean benefits, the implementation status being quite low. Hardly any survey-based empirical study in India has been found to integrate customers and suppliers with the internal processes towards successful LM implementation. This empirical research is thus carried out in the Indian manufacturing industries. The basic steps of the research methodology followed in this research are the identification of input and output manifest variables and latent constructs, model proposition and hypotheses development, development of survey instrument, sampling and data collection and model validation (exploratory factor analysis, confirmatory factor analysis, and structural equation modeling). The analysis reveals six key input constructs and three output constructs, indicating that these constructs should act in unison to maximize the benefits of implementing lean. 
The structural model presented in this paper may be treated as a guide to integrating customers and suppliers with internal practices to successfully implement lean. Integrating customers and suppliers with internal practices into a unified, coherent manufacturing system will lead to an optimum utilization of resources. This work is one of the very first studies to carry out a survey-based empirical analysis of the role of customers, suppliers and the internal practices of the Indian manufacturing sector towards an effective lean implementation.
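The exploratory factor analysis step of the model-validation methodology can be sketched as follows; the data here are synthetic stand-ins for survey responses, and scikit-learn's FactorAnalysis with varimax rotation is used purely for illustration (the study's actual estimation software is not specified).

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic stand-in for survey data: 200 respondents x 8 manifest variables
# (e.g., Likert-scale items); the study used real field data from industry.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))          # two underlying latent constructs
loadings = rng.normal(size=(2, 8))          # how constructs drive the items
items = latent @ loadings + 0.5 * rng.normal(size=(200, 8))

# Exploratory factor analysis with varimax rotation to recover the
# latent constructs from the manifest variables.
efa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
scores = efa.fit_transform(items)           # factor scores per respondent
```

The rotated loadings in `efa.components_` indicate which manifest variables cluster under which construct; confirmatory factor analysis and structural equation modeling would then test the hypothesized paths among those constructs.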

Keywords: customer management, internal manufacturing practices, lean benefits, lean implementation, lean manufacturing, structural model, supplier management

Procedia PDF Downloads 167
78 Implications of Human Cytomegalovirus as a Protective Factor in the Pathogenesis of Breast Cancer

Authors: Marissa Dallara, Amalia Ardeljan, Lexi Frankel, Nadia Obaed, Naureen Rashid, Omar Rashid

Abstract:

Human Cytomegalovirus (HCMV) is a ubiquitous virus that remains latent in approximately 60% of individuals in developed countries. The viral load is kept at a minimum by the robust immune response produced in most individuals, who remain asymptomatic. HCMV has recently been implicated in cancer research because it may impose oncomodulatory effects on the tumor cells it infects, which could affect the progression of cancer. HCMV has been implicated in the increased pathogenicity of certain cancers such as gliomas, but, in contrast, it can also exhibit anti-tumor activity. HCMV seropositivity has been recorded in tumor cells, and this may have implications for decreased pathogenesis in certain forms of cancer, such as leukemia, as well as increased pathogenesis in others. This study aimed to investigate the correlation between cytomegalovirus and the incidence of breast cancer. Methods: The data used in this project were extracted from a Health Insurance Portability and Accountability Act (HIPAA) compliant national database to compare patients infected with cytomegalovirus to patients not infected, using ICD-10 and ICD-9 codes. Permission to utilize the database was given by Holy Cross Health, Fort Lauderdale, for the purpose of academic research. Data analysis was conducted using standard statistical methods. Results: The query was analyzed for dates ranging from January 2010 to December 2019, which yielded 14,309 patients in each of the infected and control groups. The two groups were matched by age range and CCI score. The incidence of breast cancer was 1.642% (235 patients) in the cytomegalovirus group compared to 4.752% (680 patients) in the control group. The difference was statistically significant, with a p-value of less than 2.2 x 10^-16 and an odds ratio of 0.43 (95% confidence interval: 0.40 to 0.48). 
Investigation into the effects of HCMV treatment modalities, including valganciclovir, cidofovir, and foscarnet, on breast cancer in both groups was conducted, but the numbers were insufficient to yield any statistically significant correlations. Conclusion: This study demonstrates a statistically significant correlation between cytomegalovirus and a reduced incidence of breast cancer. If HCMV can exert anti-tumor effects on breast cancer and inhibit its growth, it could potentially be used to formulate immunotherapies that target various types of breast cancer. Further evaluation is warranted to assess the implications of cytomegalovirus in reducing the incidence of breast cancer.

Keywords: human cytomegalovirus, breast cancer, immunotherapy, anti-tumor

Procedia PDF Downloads 194
77 Synthetic Classicism: A Machine Learning Approach to the Recognition and Design of Circular Pavilions

Authors: Federico Garrido, Mostafa El Hayani, Ahmed Shams

Abstract:

The exploration of the potential of artificial intelligence (AI) in architecture is still embryonic; however, its latent capacity to change design disciplines is significant. 'Synthetic Classicism' is a research project that questions the underlying aspects of classically organized architecture not just in aesthetic terms but also from a geometrical and morphological point of view, intending to generate new architectural information using historical examples as source material. The main aim of this paper is to explore the uses of artificial intelligence and machine learning algorithms in architectural design while creating a coherent narrative to be contained within a design process. The purpose is twofold: on one hand, to develop and train machine learning algorithms to produce architectural information about small pavilions and, on the other, to synthesize new information from previous architectural drawings. These algorithms are intended to 'interpret' graphical information from each pavilion and then generate new information from it. Once these algorithms are trained, the procedure is the following: starting from a line profile, a synthetic 'front view' of a pavilion is generated; then, using it as source material, an isometric view is created from it; and finally, a top view is produced. Thanks to generative adversarial network (GAN) algorithms, it is also possible to generate front and isometric views without any graphical input at all. The final intention of the research is to produce isometric views out of historical information, such as the pavilions of Sebastiano Serlio, James Gibbs, or John Soane. The idea is to create and interpret new information not just in terms of historical reconstruction but also to explore AI as a novel tool in the narrative of a creative design process. 
This research also challenges the idea that algorithmic design is tied to efficiency or fitness, embracing instead the possibility of a creative collaboration between artificial intelligence and a human designer. Hence the double feature of this research, both analytical and creative: first synthesizing images based on a given dataset, and then generating new architectural information from historical references. We find that the ability to creatively understand and manipulate historic (and synthetic) information will be a key feature of future innovative design processes. Finally, the main question that we propose is whether an AI could be used not just to create an original and innovative group of simple buildings but also to explore the possibility of fostering a novel architectural sensibility grounded in the specificities of the architectural dataset, whether historic, human-made or synthetic.

Keywords: architecture, central pavilions, classicism, machine learning

Procedia PDF Downloads 131
76 A Review of Atomization Mechanisms Used for Spray Flash Evaporation: Their Effectiveness and Proposal of Rotary Bell Atomizer for Flashing Application

Authors: Murad A. Channa, Mehdi Khiadani, Yasir Al-Abdeli

Abstract:

Considering the severity of water scarcity around the world and its widening at an alarming rate, practical improvements in desalination techniques need to be engineered at the earliest opportunity. Atomization is a major aspect of the flashing phenomenon, yet it has received comparatively little attention until now. There is a need to test efficient methods of atomization for the flashing process. Alongside reverse osmosis, flash evaporation is a commercially mature desalination technique, most familiar in the form of multi-stage flash (MSF) distillation. Even though reverse osmosis is massively practical, it is not economical or sustainable compared to flash evaporation. However, flash evaporation has its drawbacks as well, such as a lower efficiency of water production against a higher consumption of power and time. Flash evaporation is simply the instant boiling of a liquid introduced as droplets into a well-maintained negative-pressure environment. The reduced pressure inside the vacuum chamber lowers the boiling point to far below the droplet temperature, so the superheated droplets release their latent heat and turn into vapor, which is collected and condensed back into an impurity-free liquid in a condenser. Atomization is the main difference between pool and spray flash evaporation. Atomization is the heart of the flash evaporation process, as it increases the evaporating surface area per drop atomized. Atomization can be categorized into many levels depending on drop size, which in turn becomes crucial for increasing the droplet density (drop count) for a given flow rate. This review comprehensively summarizes selected results relating to the methods of atomization and their effectiveness on the evaporation rate, from earlier works to date. 
In addition, the reviewers propose using centrifugal atomization for the flashing application, which brings several advantages, namely ultra-fine droplets, uniform droplet density, and a swirling spray geometry with kinetically more energetic sprays during their flight. Finally, several challenges of using a rotary bell atomizer (RBA) and RBA sprays inside the chamber have been identified, which will be explored in detail. A schematic of the integration of the RBA with the chamber has been designed. This powerful centrifugal atomization has the potential to increase potable water production in commercial multi-stage flash evaporators, where it would be particularly advantageous.

Keywords: atomization, desalination, flash evaporation, rotary bell atomizer

Procedia PDF Downloads 67
75 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources

Authors: Mustafa Alhamdi

Abstract:

Industrial application to classify gamma ray and neutron events is investigated in this study using deep machine learning. Identification using a convolutional neural network and a recursive neural network has shown a significant improvement in prediction accuracy across a variety of applications. The ability to identify the isotope type and activity from spectral information depends on the feature extraction method, followed by classification. The features extracted from the spectrum profiles capture patterns and relationships that represent the actual spectrum energy in a low-dimensional space; increasing the separation between classes in feature space improves the achievable classification accuracy. Neural networks extract features through a variety of nonlinear transformations and mathematical optimization, whereas principal component analysis relies on linear transformations to extract features and subsequently improve classification accuracy. In this paper, the isotope spectrum information has been preprocessed by computing the frequency components over time and using them as the training dataset. The Fourier transform used to extract the frequency components has been optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4. The readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, improved the classification accuracy of the neural networks. Discriminating gamma and neutron events in a single prediction approach using deep machine learning has shown high accuracy. The paper's findings show that applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes improves classification accuracy.
Tuning the deep machine learning models through hyperparameter optimization enhanced the separation in the latent space and made it possible to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
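The preprocessing stage described above can be sketched as a windowed short-time DFT. The pure-Python version below is a minimal illustration; the actual work presumably uses an optimized FFT and a tuned window, and the frame sizes here are toy values.

```python
import math

def hann(n):
    # Hann window: tapers each frame to reduce spectral leakage
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def spectrogram(signal, frame_len=8, hop=4):
    """Magnitude spectrogram: one windowed DFT per overlapping frame."""
    win = hann(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = [s * w for s, w in zip(signal[start:start + frame_len], win)]
        mags = []
        for k in range(frame_len // 2 + 1):  # non-negative frequency bins
            re = sum(x * math.cos(2 * math.pi * k * n / frame_len)
                     for n, x in enumerate(frame))
            im = -sum(x * math.sin(2 * math.pi * k * n / frame_len)
                      for n, x in enumerate(frame))
            mags.append(math.hypot(re, im))
        frames.append(mags)
    return frames  # time-by-frequency matrix fed to the classifier

# a pure tone at 2 cycles per 8-sample frame peaks in frequency bin 2
tone = [math.sin(2 * math.pi * 2 * t / 8) for t in range(32)]
spec = spectrogram(tone)
```

The resulting time-frequency matrix, rather than the raw spectrum, is what serves as the training input; the windowing choice trades frequency resolution against leakage between bins.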

Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification

Procedia PDF Downloads 138
74 A Review of Type 2 Diabetes and Diabetes-Related Cardiovascular Disease in Zambia

Authors: Mwenya Mubanga, Sula Mazimba

Abstract:

Background: In Zambia, much of the focus on nutrition and health has been on reducing micronutrient deficiencies, wasting, and underweight malnutrition, rather than on the globally rising trends in obesity and type 2 diabetes. The aim of this review was to identify and collate studies on the prevalence of obesity, diabetes, and diabetes-related cardiovascular disease conducted in Zambia, to summarize their findings, and to identify areas needing further research. Methods: The Medical Literature Analysis and Retrieval System (MEDLINE) database was searched for peer-reviewed articles on the prevalence of, and factors associated with, obesity, type 2 diabetes, and diabetes-related cardiovascular disease among Zambian residents, using a combination of search terms. The search period was from 1 January 2000 to 31 December 2016. We expanded the search terms to include all possible synonyms and spellings. Additionally, we performed a manual search for other articles and the references of peer-reviewed articles. Results: In Zambia, the current prevalence of obesity and type 2 diabetes is estimated at 13%-16% and 2.0%-3.0%, respectively. Risk factors such as the adoption of Western dietary habits, the social stigma attached to rapid weight loss caused by tuberculosis and/or the human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS), and rapid urbanization have all been blamed for fueling the increased risk of obesity and type 2 diabetes. However, unlike traditional Western populations, those with no formal education were less likely to be obese than those with secondary or tertiary education. Approximately 30% of those surveyed were unaware of their diabetes diagnosis, and more than 60% were not on treatment despite a known diabetic status.
Socio-demographic factors such as older age, female sex, urban dwelling, lack of tobacco use, and marital status were associated with an increased risk of obesity, impaired glucose tolerance, and type 2 diabetes. We were unable to identify studies that specifically examined diabetes-related cardiovascular disease. Conclusion: Although the prevalence of obesity and type 2 diabetes in Zambia appears low, more representative studies focusing on parts of the country outside the main industrial zone need to be conducted, and research on diabetes-related cardiovascular disease is needed. National surveillance, monitoring, and evaluation of all non-communicable diseases should be prioritized, and policies that address underweight, obesity, and type 2 diabetes developed.

Keywords: type 2 diabetes, Zambia, obesity, cardiovascular disease

Procedia PDF Downloads 232
73 The Interaction of Lay Judges and Professional Judges in French, German and British Labour Courts

Authors: Susan Corby, Pete Burgess, Armin Hoeland, Helene Michel, Laurent Willemez

Abstract:

In German 1st instance labour courts, lay judges always sit with a professional judge and in British and French 1st instance labour courts, lay judges sometimes sit with a professional judge. The lay judges’ main contribution is their workplace knowledge, but they act in a juridical setting where legal norms prevail. Accordingly, the research question is: does the professional judge dominate the lay judges? The research, funded by the Hans-Böckler-Stiftung, is based on over 200 qualitative interviews conducted in France, Germany and Great Britain in 2016-17 with lay and professional judges. Each interview lasted an hour on average, was audio-recorded, transcribed and then analysed using MaxQDA. Status theories, which argue that external sources of (perceived) status are imported into the court, and complementary notions of informational advantage suggest professional judges might exercise domination and control. Furthermore, previous empirical research on British and German labour courts, now some 30 years old, found that professional judges dominated. More recent research on lay judges and professional judges in criminal courts also found professional judge domination. Our findings, however, are more nuanced and distinguish between the hearing and deliberations, and also between the attitudes of judges in the three countries. First, in Germany and Great Britain the professional judge has specialist knowledge and expertise in labour law. In contrast, French professional judges do not study employment law and may only seldom adjudicate on employment law cases. Second, although the professional judge chairs and controls the hearing when he/she sits with lay judges in all three countries, exceptionally in Great Britain lay judges have some latent power as they have to take notes systematically due to the lack of recording technology. Such notes can be material if a party complains of bias, or if there is an appeal. 
Third, as to labour court deliberations: in France, the professional judge alone determines the outcome of the case, but only if the lay judges were unable to agree at a previous hearing, which occurs in only 20% of cases. In Great Britain and Germany, although the two lay judges and the professional judge have equal votes, the workplace knowledge contributed by British lay judges is less important than that of their German counterparts. British lay judges essentially sit only on discrimination cases, where the law, the purview of the professional judge, is complex; they do not sit routinely on unfair dismissal cases, where workplace practices are often a key factor in the decision. Also, British professional judges are less reliant on their lay judges than German professional judges: whereas the latter are career judges, the former become professional judges only after several years' experience in the law, and many know, albeit indirectly through their clients, about a wide range of workplace practices. In conclusion, whether the professional judge dominates lay judges in labour courts varies by country, mediated by the attitudes of the judges who interact.

Keywords: cross-national comparisons, labour courts, professional judges, lay judges

Procedia PDF Downloads 284
72 Assessment of Incidence and Predictors of Mortality Among HIV-Positive Children on ART in Public Hospitals of Harer Town Who Were Enrolled From 2011 to 2021

Authors: Getahun Nigusie Demise

Abstract:

Background: Antiretroviral treatment reduces HIV-related morbidity and prolongs the survival of patients; however, there is a lack of up-to-date information concerning the treatment's long-term effect on the survival of HIV-positive children, especially in the study area. Objective: The aim of this study is to assess the incidence and predictors of mortality among HIV-positive children on antiretroviral therapy (ART) in public hospitals of Harer town who were enrolled from 2011 to 2021. Methodology: An institution-based retrospective cohort study was conducted among 429 HIV-positive children enrolled in the ART clinic from January 1st, 2011 to December 30th, 2021. Data were collected from medical cards using a data extraction form. Descriptive analyses were used to summarize the results, and a life table was used to estimate the survival probability at specific points in time after the introduction of ART. The Kaplan-Meier survival curve, together with the log-rank test, was used to compare survival between different categories of covariates, and a multivariable Cox proportional hazards regression model was used to estimate adjusted hazard ratios. Variables with p-values ≤0.25 in bivariable analysis were candidates for the multivariable analysis; finally, variables with p-values < 0.05 were considered significant. Results: The study participants were followed for a total of 2549.6 child-years (30,596 child-months), with an overall mortality rate of 1.5 (95% CI: 1.1, 2.04) per 100 child-years. The median survival time was 112 months (95% CI: 101-117). There were 38 children with unknown outcomes, 39 deaths, and 55 children transferred out to different facilities. The overall survival at 6, 12, 24, and 48 months was 98%, 96%, 95%, and 94%, respectively.
Being in WHO clinical stage four (AHR=4.55, 95% CI: 1.36, 15.24), having anemia (AHR=2.56, 95% CI: 1.11, 5.93), a low baseline absolute CD4 count (AHR=2.95, 95% CI: 1.22, 7.12), stunting (AHR=4.1, 95% CI: 1.11, 15.42), wasting (AHR=4.93, 95% CI: 1.31, 18.76), poor adherence to treatment (AHR=3.37, 95% CI: 1.25, 9.11), having TB infection at enrollment (AHR=3.26, 95% CI: 1.25, 8.49), and no history of regimen change (AHR=7.1, 95% CI: 2.74, 18.24) were independent predictors of death. Conclusion: More than half of the deaths occurred within 2 years. Prevalent tuberculosis, anemia, wasting and stunting nutritional status, socioeconomic factors, and baseline opportunistic infection were independent predictors of death. Increased early screening for, and management of, these predictors are required.
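The Kaplan-Meier product-limit estimator used in the methodology above can be sketched in a few lines; the follow-up data below are hypothetical, not the study's.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times  : follow-up time for each subject
    events : 1 if the subject died at that time, 0 if censored
             (e.g. lost to follow-up or transferred out)
    Returns (time, S(t)) pairs: at each death time, S(t) is
    multiplied by (1 - deaths / number still at risk).
    """
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        deaths = sum(1 for tt, e in zip(times, events) if tt == t and e == 1)
        if deaths:
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        # everyone observed at t (dead or censored) leaves the risk set
        at_risk -= sum(1 for tt in times if tt == t)
    return curve

# hypothetical follow-up months: deaths at 1, 3, 4; one child censored at 2
curve = kaplan_meier([1, 2, 3, 4], [1, 0, 1, 1])
```

Note how the censored child at month 2 shrinks the risk set without dropping the curve; this is what lets children with unknown outcomes or transfers contribute follow-up time without being counted as deaths.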

Keywords: human immunodeficiency virus-positive children, anti-retroviral therapy, survival, treatment, Ethiopia

Procedia PDF Downloads 28
71 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

Over recent years, with rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited by a special challenge: numerous regulations require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes, because they are often considered a black box and fail to explain why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model was developed at Dun and Bradstreet that blends Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to incorporate domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in predicting customer risk are identified. Then, these features are employed to introduce the observed non-linear relationships between the explanatory and dependent variables into traditional scorecards.
Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which yields an estimate of the WoE for each bin. This capability helps to build powerful scorecards from sparse cases, which cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The analysis shows that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that, in some scenarios, the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern over the difficulty of explaining the models for regulatory purposes.
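For context, the conventional WoE computation that the Hybrid Model replaces with score-distribution matching can be sketched as follows (a minimal illustration; the bin counts are hypothetical):

```python
import math

def weight_of_evidence(bins):
    """Classical WoE per bin from observed good/bad counts.

    bins: list of (goods, bads) counts per bin of an explanatory variable.
    WoE_i = ln( (goods_i / total_goods) / (bads_i / total_bads) ).
    With few or zero bads in a bin this estimate breaks down, which is
    where matching a Machine Learning score distribution helps instead.
    """
    total_good = sum(g for g, _ in bins)
    total_bad = sum(b for _, b in bins)
    return [math.log((g / total_good) / (b / total_bad)) for g, b in bins]

# two bins: one good-heavy, one bad-heavy, with symmetric counts
woe = weight_of_evidence([(80, 20), (20, 80)])
```

The binned WoE values then enter the scorecard as the transformed explanatory variable, which is how the non-linear relationships learned by the Machine Learning model can be carried into a linear, explainable scoring structure.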

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 123