Search results for: Approximate Bayesian Computation (ABC)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 638

128 Epilepsy Seizure Prediction by Effective Connectivity Estimation Using Granger Causality and Directed Transfer Function Analysis of Multi-Channel Electroencephalogram

Authors: Mona Hejazi, Ali Motie Nasrabadi

Abstract:

Epilepsy is a persistent neurological disorder that affects more than 50 million people worldwide. Hence, there is a need for an efficient prediction model that supports correct diagnosis of epileptic seizures and accurate prediction of their type. In this study we consider how the Effective Connectivity (EC) patterns obtained from intracranial Electroencephalographic (EEG) recordings reveal information about the dynamics of the epileptic brain and can be used to predict imminent seizures, enabling patients (and caregivers) to take appropriate precautions. We use this approach because effective connectivity begins to change as a seizure approaches, so seizures can be predicted from this feature. Results are reported on the standard Freiburg EEG dataset, which contains data from 21 patients suffering from medically intractable focal epilepsy. Six EEG channels from each patient are considered, and effective connectivity is estimated using the Directed Transfer Function (DTF) and Granger Causality (GC) methods. We concentrate on the standard deviation of effective connectivity over time, and feature changes in five brain frequency sub-bands (Alpha, Beta, Theta, Delta, and Gamma) are compared. The performance obtained by the proposed scheme is: an average prediction time of 50 minutes before seizure onset, a maximum sensitivity of approximately 80%, and a false positive rate of 0.33 FP/h. The DTF method is the more suitable of the two for predicting epileptic seizures, and in general the best results are observed in the gamma and beta sub-bands. This research is significantly helpful for clinical applications, especially for the development of online portable devices.
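The Granger-causality idea underlying this abstract can be sketched in a few lines: channel y "Granger-causes" channel x if y's past improves the prediction of x beyond x's own past. The sketch below uses synthetic two-channel data and a single lag; it is a minimal illustration, not the authors' multi-channel DTF pipeline.

```python
import numpy as np

def granger_score(x, y, lag=1):
    """Bivariate Granger-causality score of y -> x at a single lag.

    Fits an AR model of x with and without lagged y and compares residual
    sums of squares; scores well above zero mean y helps predict x."""
    n = len(x)
    target = x[lag:]
    X_r = np.column_stack([x[:n - lag], np.ones(n - lag)])              # restricted: own past only
    X_f = np.column_stack([x[:n - lag], y[:n - lag], np.ones(n - lag)]) # full: plus y's past
    rss_r = np.sum((target - X_r @ np.linalg.lstsq(X_r, target, rcond=None)[0]) ** 2)
    rss_f = np.sum((target - X_f @ np.linalg.lstsq(X_f, target, rcond=None)[0]) ** 2)
    return float(np.log(rss_r / rss_f))  # >= 0; large when y improves the prediction of x

# Synthetic example: x is driven by y's past, but not vice versa.
rng = np.random.default_rng(0)
y = rng.standard_normal(2000)
x = np.empty(2000)
x[0] = 0.0
for t in range(1, 2000):
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.standard_normal()
print(granger_score(x, y))  # clearly positive: y drives x
print(granger_score(y, x))  # near zero: x does not drive y
```

In the study, such directed influence measures are computed for all channel pairs and tracked over time; a change in their standard deviation serves as the seizure-prediction feature.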

Keywords: effective connectivity, Granger causality, directed transfer function, epilepsy seizure prediction, EEG

Procedia PDF Downloads 463
127 Confidence Intervals for Process Capability Indices for Autocorrelated Data

Authors: Jane A. Luke

Abstract:

Persistent pressure passed on to manufacturers from escalating consumer expectations and ever-growing global competitiveness has produced a rapidly increasing interest in the development of various manufacturing strategy models. Academic and industrial circles are taking a keen interest in the field of manufacturing strategy. Many manufacturing strategies are currently centered on the traditional concepts of focused manufacturing capabilities such as quality, cost, dependability, and innovation. The analysis of process capability indices (PCIs) has traditionally been conducted assuming that the process under study is in statistical control and that independent observations are generated over time. In practice, however, it is very common to come across processes which, due to their inherent nature, generate autocorrelated observations. The degree of autocorrelation affects the behavior of patterns on control charts: even small levels of autocorrelation between successive observations can have considerable effects on the statistical properties of conventional control charts. When observations are autocorrelated, the classical control charts exhibit nonrandom patterns and an apparent lack of control. Many authors have considered the effect of autocorrelation on the performance of statistical process control charts. In this paper, the effect of autocorrelation on confidence intervals for different PCIs is included. Stationary Gaussian processes are explained, and the effect of autocorrelation on PCIs is described in detail. Confidence intervals for Cp and Cpk are constructed and computed for both independent and autocorrelated data. Approximate lower confidence limits for various Cpk are computed assuming an AR(1) model for the data. Simulation studies and industrial examples are considered to demonstrate the results.
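Bissell's approximation (named in the keywords below) gives a quick lower confidence limit for Cpk. The sketch below implements the standard independent-observations form; note that for AR(1) data, the subject of the paper, autocorrelation widens the true interval, so this bound is optimistic. The sample data are hypothetical.

```python
import math

def cpk(data, lsl, usl):
    """Sample Cpk = min(USL - mean, mean - LSL) / (3 s)."""
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((v - mean) ** 2 for v in data) / (n - 1))
    return min(usl - mean, mean - lsl) / (3.0 * s)

def cpk_lower_bissell(cpk_hat, n, z=1.645):
    """Approximate one-sided 95% lower confidence limit (Bissell, 1990),
    derived under independence; optimistic for autocorrelated data."""
    se = math.sqrt(1.0 / (9.0 * n) + cpk_hat ** 2 / (2.0 * (n - 1)))
    return cpk_hat - z * se

# Hypothetical process sample with specification limits [9, 11].
data = [10.2, 9.9, 10.1, 10.0, 9.8, 10.3, 10.1, 9.9, 10.0, 10.2]
est = cpk(data, 9.0, 11.0)
print(round(est, 3), round(cpk_lower_bissell(est, len(data)), 3))
```

With only 10 observations the lower limit falls well below the point estimate, illustrating why the paper's wider AR(1)-aware intervals matter in practice.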

Keywords: autocorrelation, AR(1) model, Bissell’s approximation, confidence intervals, statistical process control, specification limits, stationary Gaussian processes

Procedia PDF Downloads 385
126 ADP Approach to Evaluate the Blood Supply Network of Ontario

Authors: Usama Abdulwahab, Mohammed Wahab

Abstract:

This paper presents the application of uncapacitated facility location problems (UFLP) and 1-median problems to support decision making in blood supply chain networks. A plethora of factors make blood supply-chain networks a complex yet vital problem for the regional blood bank: rapidly increasing demand; criticality of the product; strict storage and handling requirements; and the vastness of the theater of operations. As in the UFLP, facilities can be opened at any of m predefined locations with given fixed costs, and clients have to be allocated to the open facilities. In classical location models, the allocation cost is the distance between a client and an open facility; in this model, the costs are the allocation cost, transportation costs, and inventory costs. To address this problem, the median algorithm is used to analyze inventory, evaluate supply chain status, monitor performance metrics at different levels of granularity, and detect potential problems and opportunities for improvement. Euclidean distance data for some Ontario cities (demand nodes) are used to test the developed algorithm. SITATION software, a Lagrangian relaxation algorithm, and branch-and-bound heuristics are used to solve this model. Computational experiments confirm the efficiency of the proposed approach. Compared to existing modeling and solution methods, the median algorithm approach not only provides a more general modeling framework but also leads to efficient solution times in general.
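The UFLP structure described above (fixed opening costs plus allocation costs, clients served by the cheapest open facility) can be made concrete on a toy instance. This brute-force enumeration is a sketch with hypothetical costs, standing in for the Lagrangian relaxation and branch-and-bound methods the paper actually uses on realistic networks.

```python
from itertools import combinations

# Hypothetical toy instance: 3 candidate sites, 4 demand nodes.
fixed_cost = [100, 120, 90]      # cost of opening each facility
alloc_cost = [                   # alloc_cost[j][i]: client j served by facility i
    [20, 40, 50],
    [45, 15, 60],
    [30, 35, 10],
    [55, 25, 20],
]

def solve_uflp(fixed_cost, alloc_cost):
    """Exact UFLP by enumerating every set of open facilities; each client
    is served by its cheapest open facility.  Fine for toy sizes only;
    larger instances need Lagrangian relaxation or branch and bound."""
    m = len(fixed_cost)
    best = (float("inf"), None)
    for k in range(1, m + 1):
        for open_set in combinations(range(m), k):
            cost = sum(fixed_cost[i] for i in open_set)
            cost += sum(min(row[i] for i in open_set) for row in alloc_cost)
            best = min(best, (cost, open_set))
    return best

cost, sites = solve_uflp(fixed_cost, alloc_cost)
print(cost, sites)
```

In this instance, opening the single cheap facility beats every multi-facility combination once fixed costs are counted, which is exactly the trade-off the blood-bank model balances.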

Keywords: approximate dynamic programming, facility location, perishable product, inventory model, blood platelet, P-median problem

Procedia PDF Downloads 505
125 Socio-Demographic Factors and Testing Practices Are Associated with Spatial Patterns of Clostridium difficile Infection in the Australian Capital Territory, 2004-2014

Authors: Aparna Lal, Ashwin Swaminathan, Teisa Holani

Abstract:

Background: Clostridium difficile infections (CDIs) have been on the rise globally. In Australia, rates of CDI in all states and territories have increased significantly since mid-2011. Identifying risk factors for CDI in the community can help inform targeted interventions to reduce infection. Methods: We examine the role of neighbourhood socio-economic status, demography, testing practices, and the number of residential aged care facilities on spatial patterns in CDI incidence in the Australian Capital Territory. Data on all tests conducted for CDI were obtained from ACT Pathology by postcode for the period 1 January 2004 through 31 December 2014. The distribution of age groups and the neighbourhood Index of Relative Socio-economic Advantage and Disadvantage (IRSAD) were obtained from the Australian Bureau of Statistics 2011 National Census data. A Bayesian spatial conditional autoregressive model was fitted at the postcode level to quantify the relationship between CDI and socio-demographic factors. To identify CDI hotspots, exceedance probabilities were computed at a threshold of twice the estimated relative risk. Results: CDI showed a positive spatial association with the number of tests (RR=1.01, 95% CI 1.00-1.02) and with the resident population over 65 years (RR=1.00, 95% CI 1.00-1.01). The standardized IRSAD was significantly negatively associated with CDI (RR=0.74, 95% CI 0.56-0.94). We identified three postcodes with a high probability (0.8-1.0) of excess risk. Conclusions: We demonstrate geographic variation in CDI in the ACT, with a positive association between CDI and socio-economic disadvantage, and identify areas with a high probability of elevated risk compared with surrounding communities. These findings highlight community-based risk factors for CDI.
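The hotspot criterion used here, an exceedance probability, is simply the fraction of posterior draws of the relative risk that land above the chosen threshold (twice the estimated RR). The sketch below illustrates this with hypothetical posterior samples standing in for draws from a fitted CAR model; the postcode labels are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior draws of relative risk for two postcodes.
posterior_rr = {
    "2600": rng.lognormal(mean=0.9, sigma=0.2, size=4000),  # elevated risk
    "2602": rng.lognormal(mean=0.0, sigma=0.2, size=4000),  # background risk
}

def exceedance(samples, threshold=2.0):
    """P(RR > threshold), estimated as the fraction of posterior draws
    above the threshold; the study flags postcodes where this
    probability falls in 0.8-1.0."""
    return float(np.mean(samples > threshold))

for postcode, draws in posterior_rr.items():
    print(postcode, round(exceedance(draws), 3))
```

Only the first (elevated) postcode crosses the 0.8 flagging level, mirroring how the three ACT hotspots were identified.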

Keywords: spatial, socio-demographic, infection, Clostridium difficile

Procedia PDF Downloads 318
124 The Determination of Phosphorus Solubility in Iron as a Function of the Other Components

Authors: Andras Dezső, Peter Baumli, George Kaptay

Abstract:

Phosphorus is an important component of steels because it changes their mechanical properties and can modify their structure. Phosphorus can form the Fe3P compound, which segregates at ferrite grain boundaries at the nano- to microscale. This intermetallic compound degrades the mechanical properties; for example, it causes blue brittleness, i.e. embrittlement by the segregated particles at 200 ... 300 °C. This work describes phosphide solubility as affected by the other components. We performed calculations for the Ni, Mo, Cu, S, V, C, Si, Mn, and Cr elements using the Thermo-Calc software and approximate the effects by fitted functions. The binary Fe-P system has a solubility line with a determining equation: ln w0 = -3,439 - 1.903/T, where w0 is the weight percent of the maximum dissolved phosphorus concentration and T is the temperature in Kelvin. The equation shows that phosphorus becomes more soluble as the temperature increases. Nickel, molybdenum, vanadium, silicon, manganese, and chromium make the maximum dissolved concentration dependent on their content; these effects vary with the concentrations at which these elements are added to the steels. Copper, sulphur, and carbon have no effect on phosphorus solubility. We predict that in all cases the maximum solubility concentration increases as the temperature rises. Between 473 K and 673 K, the phase diagrams of these systems contain mostly two- or three-phase eutectoid regions and single-phase ferritic intervals; in the eutectoid areas the ferrite, the iron phosphide, and the metal(III) phosphide are in equilibrium. With this modelling we predicted which elements are useful for avoiding phosphide segregation. These data are important when making or choosing steels in which phosphide segregation must be prevented.

Keywords: phosphorus, steel, segregation, Thermo-Calc software

Procedia PDF Downloads 623
123 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring

Authors: Daniel Fundi Murithi

Abstract:

Data from economic, social, clinical, and industrial studies are often incomplete or incorrect due to censoring, and such data may have adverse effects if used naively in estimation. We propose the use of Maximum Likelihood Estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and Newton-Raphson (NR) algorithms. These algorithms are compared because both iteratively produce satisfactory results in this estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that in most simulation cases, the estimates obtained using the EM algorithm had smaller biases, smaller variances, narrower confidence intervals, and smaller RMSE than those generated via the NR algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) confirmed that the EM algorithm performs better than the NR algorithm under the progressive type-II censoring scheme.
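The Newton-Raphson side of the comparison iterates σ ← σ − ℓ′(σ)/ℓ″(σ) on the log-likelihood. As a minimal sketch, the block below applies NR to the scale of a one-parameter Rayleigh model on complete (uncensored) data, where the MLE also has a closed form to check against; the paper's censored two-parameter likelihood follows the same pattern with more terms. The data are hypothetical.

```python
import math

def rayleigh_mle_nr(data, sigma0=1.0, tol=1e-10, max_iter=100):
    """Newton-Raphson for the Rayleigh scale on complete data:
    log-lik derivative  l'(s) = -2n/s + (sum x^2)/s^3,
    second derivative  l''(s) =  2n/s^2 - 3 (sum x^2)/s^4."""
    n = len(data)
    s2 = sum(x * x for x in data)
    sigma = sigma0
    for _ in range(max_iter):
        score = -2.0 * n / sigma + s2 / sigma ** 3
        hess = 2.0 * n / sigma ** 2 - 3.0 * s2 / sigma ** 4
        step = score / hess
        sigma -= step
        if abs(step) < tol:
            break
    return sigma

data = [1.2, 0.8, 2.1, 1.5, 0.9, 1.7, 1.1, 1.4]
sigma_hat = rayleigh_mle_nr(data)
closed_form = math.sqrt(sum(x * x for x in data) / (2 * len(data)))
print(sigma_hat, closed_form)  # NR converges to the closed-form MLE
```

Under censoring no closed form exists, which is why the paper iterates NR or alternates EM's expectation and maximization steps instead.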

Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring

Procedia PDF Downloads 157
122 Discrimination and Classification of Vestibular Neuritis Using Combined Fisher and Support Vector Machine Model

Authors: Amine Ben Slama, Aymen Mouelhi, Sondes Manoubi, Chiraz Mbarek, Hedi Trabelsi, Mounir Sayadi, Farhat Fnaiech

Abstract:

Vertigo is a sensation of feeling off balance; the cause of this symptom is very difficult to interpret and requires a complementary examination. Generally, vertigo is caused by an ear problem; some of the most common causes include benign paroxysmal positional vertigo (BPPV), Meniere's disease, and vestibular neuritis (VN). In clinical practice, different tests of the videonystagmographic (VNG) technique are used to detect the presence of vestibular neuritis. The topographical diagnosis of this disease presents a large diversity of characteristics, which poses a mixture of problems for the usual etiological analysis methods. In this study, a vestibular neuritis analysis method is proposed for VNG applications, using an estimation of pupil movements in the case of uncontrolled motion to obtain efficient and reliable diagnostic results. First, the pupil displacement vectors are estimated using the Hough Transform (HT) to approximate the location of the pupil region. Then, temporal and frequency features are computed from the rotation angle variation of the pupil motion. Finally, optimized features are selected using the Fisher criterion for discrimination and classification of the VN disease. Experimental results are analyzed in two categories: normal and pathologic. By classifying the reduced features with a Support Vector Machine (SVM), a classification accuracy of 94% is achieved. Compared to recent studies, the proposed expert system is extremely helpful and highly effective in resolving the problem of VNG analysis and providing an accurate diagnosis for medical devices.
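The Fisher criterion used for feature selection scores each feature by J = (m1 − m2)² / (s1² + s2²): large values mean the feature separates the two classes (normal vs pathologic) well. A minimal numpy sketch with synthetic stand-ins for the VNG features (the real features come from pupil rotation angles, which are not reproduced here):

```python
import numpy as np

def fisher_score(feature, labels):
    """Fisher criterion J = (m1 - m2)^2 / (s1^2 + s2^2) for a single
    feature and binary labels; larger J means better class separation."""
    a = feature[labels == 0]
    b = feature[labels == 1]
    return float((a.mean() - b.mean()) ** 2 / (a.var() + b.var()))

rng = np.random.default_rng(2)
labels = np.array([0] * 50 + [1] * 50)
# Hypothetical features: one discriminative, one pure noise.
informative = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(3.0, 1.0, 50)])
noise = rng.normal(0.0, 1.0, 100)

print(fisher_score(informative, labels))  # high
print(fisher_score(noise, labels))        # near zero
```

Ranking features by this score and keeping the top ones is the "reduced features" step that feeds the SVM classifier in the paper.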

Keywords: nystagmus, vestibular neuritis, videonystagmographic system, VNG, Fisher criterion, support vector machine, SVM

Procedia PDF Downloads 134
121 Environment Management Practices at Oil and Natural Gas Corporation Hazira Gas Processing Complex

Authors: Ashish Agarwal, Vaibhav Singh

Abstract:

Harmful emissions from oil and gas processing facilities have long remained a matter of concern for governments and environmentalists throughout the world. This paper analyses the Oil and Natural Gas Corporation (ONGC) gas processing plant in Hazira, Gujarat, India. It is the largest gas-processing complex in the country, designed to process 41 MMSCMD of sour natural gas and associated sour condensate. The complex, sprawling over an area of approximately 705 hectares, is the mother plant for almost all industries at Hazira and along the Hazira-Bijapur-Jagdishpur pipeline. The various sources of pollution from each unit along the processing chain, from the Gas Terminal to the Dew Point Depression unit and the Caustic Wash unit, were examined with the help of emission data obtained from ONGC. Pollution discharged to the environment was classified into water, air, hazardous waste, and solid (non-hazardous) waste so as to analyze each of them efficiently. To protect the air environment, a sulphur recovery unit was adopted, along with automatic ambient air quality monitoring stations and automatic stack monitoring stations, among numerous other practices. To protect the water environment, different effluent treatment plants were used, with due emphasis on aquaculture in the nearby area. The Hazira plant has obtained authorization for the handling and disposal of five types of hazardous waste; most of the hazardous waste was sold to authorized recyclers, and the rest was given to vendors authorized by the Gujarat Pollution Control Board. Non-hazardous waste was also handled with an overall objective of zero negative impact on the environment. The effect of the methods adopted is evident from the plant's emission data, which were found to be well under Gujarat Pollution Control Board limits.

Keywords: sulphur recovery unit, effluent treatment plant, hazardous waste, sour gas

Procedia PDF Downloads 222
120 Phylogenetic Analysis Based on the Internal Transcribed Spacer-2 (ITS2) Sequences of Diadegma semiclausum (Hymenoptera: Ichneumonidae) Populations Reveals Significant Adaptive Evolution

Authors: Ebraheem Al-Jouri, Youssef Abu-Ahmad, Ramasamy Srinivasan

Abstract:

The parasitoid Diadegma semiclausum (Hymenoptera: Ichneumonidae) is one of the most effective exotic parasitoids of the diamondback moth (DBM), Plutella xylostella, in the lowland areas of Homs, Syria. Molecular evolution studies are useful tools to shed light on the molecular bases of insect geographical spread and adaptation to new hosts and environments, and for designing better control strategies. In this study, molecular evolution analysis was performed based on 42 nuclear internal transcribed spacer-2 (ITS2) sequences representing D. semiclausum and eight other Diadegma spp. from Syria and worldwide. Possible recombination events were identified with the RDP4 program; four potential recombinants of the American D. insulare and D. fenestrale (Jeju) were detected. After detecting and removing recombinant sequences, the ratio of non-synonymous (dN) to synonymous (dS) substitutions per site (dN/dS = ω) was used to identify codon positions involved in adaptive processes. Bayesian techniques were applied to detect selective pressures at the codon level using five different approaches: fixed effects likelihood (FEL), internal fixed effects likelihood (IFEL), random effects likelihood (REL), the mixed effects model of evolution (MEME), and phylogenetic analysis by maximum likelihood (PAML). Among the 40 positively selected amino acids (aa) that differed significantly between clades of Diadegma species, three aa under positive selection were identified only in D. semiclausum. Additionally, all D. semiclausum tree branches were found to be under episodic diversifying selection (EDS) at p ≤ 0.05. Our study provides evidence that both recombination and positive selection have contributed to the molecular diversity of Diadegma spp. and highlights the significant contribution of adaptive evolution to fitness in the DBM parasitoid D. semiclausum.
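The ω = dN/dS statistic at the heart of this analysis compares substitution rates at nonsynonymous and synonymous sites; ω < 1 suggests purifying selection, ω > 1 positive selection. A minimal counting-style sketch (in the spirit of Nei-Gojobori, with a Jukes-Cantor multiple-hit correction) is shown below; the change and site counts are hypothetical, and the site-counting step itself is omitted for brevity.

```python
import math

def jukes_cantor(p):
    """Correct a raw proportion of differences for multiple hits."""
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

def omega(nd, n_sites, sd, s_sites):
    """dN/dS from counted changes, assuming the nonsynonymous and
    synonymous site totals have already been computed."""
    dn = jukes_cantor(nd / n_sites)
    ds = jukes_cantor(sd / s_sites)
    return dn / ds

# Hypothetical counts: 12 nonsynonymous changes over 300 nonsynonymous
# sites, 10 synonymous changes over 100 synonymous sites.
w = omega(12, 300, 10, 100)
print(round(w, 3))  # < 1 here: purifying selection at these sites
```

The likelihood methods listed in the abstract (FEL, REL, MEME, PAML) estimate ω per codon or per branch rather than genome-wide, which is how individual positively selected amino acids and episodically selected branches are detected.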

Keywords: diadegma sp, DBM, ITS2, phylogeny, recombination, dN/dS, evolution, positive selection

Procedia PDF Downloads 412
119 Spatial Data Mining: Unsupervised Classification of Geographic Data

Authors: Chahrazed Zouaoui

Abstract:

In recent years, the volume of geospatial information has been increasing due to the evolution of communication and information technologies. This information is often presented by geographic information systems (GIS) and stored in spatial databases (SDB). Classical data mining has revealed a weakness in knowledge extraction from these enormous amounts of data, due to the particularity of spatial entities, which are characterized by interdependence among themselves (the first law of geography). This gave rise to spatial data mining. Spatial data mining is the process of analyzing geographic data that allows the extraction of knowledge and spatial relationships from geospatial data; among the methods of this process we distinguish the monothematic and the thematic. Geo-clustering, one of the main tasks of spatial data mining, belongs to the monothematic methods. It groups similar geo-spatial entities into the same class and assigns more dissimilar ones to different classes; in other words, it maximizes intra-class similarity and minimizes inter-class similarity, taking into account the particularity of geo-spatial data. Two approaches to geo-clustering exist: dynamic processing of the data, which involves applying algorithms designed for the direct treatment of spatial data, and the approach based on spatial data pre-processing, which consists of applying classic clustering algorithms to pre-processed data (with the spatial relationships integrated).
This pre-processing-based approach is quite complex in many cases, so the search for approximate solutions involves the use of approximation algorithms. Among these, we are interested in dedicated approaches (partitioning and density-based clustering methods) and in the bees approach (a biomimetic approach). Our study proposes a design for this problem that uses different algorithms for automatically detecting geo-spatial neighborhoods in order to implement geo-clustering by pre-processing, and applies the bees algorithm to this problem for the first time in the geo-spatial field.
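The partitioning family of geo-clustering methods mentioned above can be illustrated with plain k-means on 2-D coordinates: maximize intra-class similarity, minimize inter-class similarity. This is a minimal stand-in on synthetic points, not the bees algorithm or the neighborhood pre-processing the study proposes.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: alternate nearest-centre assignment and centre
    updates; a minimal example of partitioning-based geo-clustering."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assign every point to its nearest centre (Euclidean distance)
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its members (keep it if empty)
        centers = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

# Two well-separated synthetic "neighbourhoods" of geographic points.
rng = np.random.default_rng(3)
pts = np.vstack([rng.normal((0.0, 0.0), 0.5, size=(40, 2)),
                 rng.normal((5.0, 5.0), 0.5, size=(40, 2))])
labels, centers = kmeans(pts, k=2)
```

The pre-processing approach the study advocates would first encode spatial neighborhood relations into the feature space before running such a classic algorithm; the biomimetic bees algorithm instead searches the space of partitions directly.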

Keywords: mining, GIS, geo-clustering, neighborhood

Procedia PDF Downloads 370
118 Utilizing Fiber-Based Modeling to Explore the Presence of a Soft Storey in Masonry-Infilled Reinforced Concrete Structures

Authors: Akram Khelaifia, Salah Guettala, Nesreddine Djafar Henni, Rachid Chebili

Abstract:

Recent seismic events have underscored the significant influence of masonry infill walls on the resilience of structures. The irregular positioning of these walls exacerbates their adverse effects, resulting in substantial material and human losses. Research and post-earthquake evaluations emphasize the necessity of considering infill walls in both the design and assessment phases. This study delves into the presence of soft storeys in reinforced concrete structures with infill walls. Employing an approximate method relying on pushover analysis results, fiber-section-based macro-modeling is utilized to simulate the behavior of infill walls. The findings shed light on the emergence of soft first storeys, revealing a notable 240% enhancement in resistance for weak-column strong-beam designed frames due to infill walls. Conversely, the effect is more moderate, at 38%, for strong-column weak-beam designed frames. Interestingly, a uniform distribution of infill walls throughout the structure's height does not influence soft-storey emergence in the same seismic zone, irrespective of column-beam strength. In regions of low seismic intensity, infill walls dissipate energy, resulting in consistent seismic behavior regardless of column configuration. Regardless of column strength, structures with open ground storeys remain vulnerable to soft first-storey emergence, underscoring the crucial role of infill walls in reinforced concrete structural design.
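A simple way to see the open-ground-storey problem is the code-style stiffness check: a storey is commonly flagged as soft when its lateral stiffness falls below about 70% of the storey above. The sketch below uses that illustrative threshold and hypothetical storey stiffnesses; neither the values nor the 70% criterion are taken from the paper, which works from pushover results instead.

```python
def soft_storeys(stiffness, ratio=0.70):
    """Flag storeys whose lateral stiffness is below `ratio` times the
    stiffness of the storey above (an illustrative code-style
    irregularity check, not the paper's pushover-based method)."""
    return [stiffness[i] < ratio * stiffness[i + 1]
            for i in range(len(stiffness) - 1)]

# Hypothetical storey stiffnesses (kN/mm), ground storey first:
# an open ground storey under infilled upper storeys vs a uniform frame.
bare_ground = [40.0, 150.0, 150.0, 140.0]
uniform     = [150.0, 150.0, 150.0, 140.0]
print(soft_storeys(bare_ground))  # ground storey flagged
print(soft_storeys(uniform))      # no soft storey
```

The stiff infilled upper storeys are precisely what makes the bare ground storey stand out, which matches the study's finding that open-ground-storey structures remain vulnerable regardless of column strength.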

Keywords: masonry infill walls, soft storey, pushover analysis, fiber section, macro-modeling

Procedia PDF Downloads 62
117 Performance Evaluation of Using Genetic Programming Based Surrogate Models for Approximating Simulation of Complex Geochemical Transport Processes

Authors: Hamed K. Esfahani, Bithin Datta

Abstract:

Transport of reactive chemical contaminant species in groundwater aquifers is a complex and highly non-linear physical and geochemical process, especially in real-life scenarios. Simulating this transport process involves solving complex nonlinear equations and generally requires huge computational time for a given aquifer study area, and the development of optimal remediation strategies may require repeated solution of such complex numerical simulation models. To overcome this computational limitation and make large numbers of repeated simulations feasible, Genetic Programming based trained surrogate models are developed to approximately simulate such complex transport processes. The transport of acid mine drainage, a hazardous pollutant, is first simulated using a numerical simulation model, HYDROGEOCHEM 5.0, for a contaminated aquifer at a historic mine site. The simulation model solution results for an illustrative contaminated aquifer site are then approximated by training and testing a Genetic Programming (GP) based surrogate model. Performance evaluation of the ensemble GP models as surrogate models for reactive species transport in groundwater demonstrates the feasibility of their use and the associated computational advantages. The results show the efficiency and feasibility of using ensemble GP surrogate models as approximate simulators of complex hydrogeologic and geochemical processes in a contaminated groundwater aquifer, incorporating uncertainties in the historic mine site.
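The train/approximate/reuse workflow behind surrogate modeling can be shown in miniature. Below, a cheap polynomial fit deliberately replaces the paper's genetic-programming search (purely for brevity), and a one-parameter analytic function stands in for an expensive HYDROGEOCHEM run; both substitutions are assumptions of this sketch.

```python
import numpy as np

def expensive_simulator(x):
    """Stand-in for a costly reactive-transport run: concentration as a
    smooth nonlinear function of a single input parameter."""
    return np.exp(-0.5 * x) * np.sin(2.0 * x) + 1.0

# Train a cheap surrogate on a handful of "simulator" runs.
x_train = np.linspace(0.0, 3.0, 25)
y_train = expensive_simulator(x_train)
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=6))

# Evaluate the surrogate where the simulator was never run.
x_test = np.linspace(0.1, 2.9, 50)
err = float(np.max(np.abs(surrogate(x_test) - expensive_simulator(x_test))))
print(err)  # small approximation error at a fraction of the cost
```

Once trained, the surrogate can be called thousands of times inside a remediation-optimization loop where the full simulator would be prohibitively slow; GP surrogates add the ability to discover the functional form itself rather than fixing a polynomial basis.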

Keywords: geochemical transport simulation, acid mine drainage, surrogate models, ensemble genetic programming, contaminated aquifers, mine sites

Procedia PDF Downloads 274
116 Evaluating the Understanding of the University Students (Basic Sciences and Engineering) about the Numerical Representation of the Average Rate of Change

Authors: Saeid Haghjoo, Ebrahim Reyhani, Fahimeh Kolahdouz

Abstract:

The present study aimed to evaluate the understanding of students in Tehran universities (Iran) of the numerical representation of the average rate of change, based on the Structure of Observed Learning Outcomes (SOLO) taxonomy. In this descriptive survey research, the statistical population comprised undergraduate students (basic sciences and engineering) in the universities of Tehran; the sample of 604 students was selected by random multi-stage clustering. The measurement tool was a task whose face and content validity were confirmed by mathematics and mathematics education professors. Using Cronbach's alpha, the reliability coefficient of the task was 0.95, verifying its reliability. The collected data were analyzed by descriptive and inferential statistics (chi-squared and independent t-tests) using SPSS 24. According to the SOLO model, at the prestructural, unistructural, and multistructural levels basic science students had a higher percentage of understanding than engineering students, although the outcome was reversed at the relational level; there was, however, no significant difference in the average understanding of the two groups. The results indicated that students failed to achieve a proper understanding of the numerical representation of the average rate of change, and also exhibited misconceptions when using physics formulas to solve the problem. In addition, multiple solutions, along with their dominant methods, were derived during the qualitative analysis. The current research proposes that teachers and professors focus on context problems involving approximate calculations and numerical representation, use software, and connect common relations between mathematics and physics in the teaching process.
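The mathematical object the students struggled with is simply the slope of a secant line. A short sketch, with a kinematics example of the kind the study links to physics formulas (the specific function is illustrative, not taken from the task):

```python
def average_rate_of_change(f, a, b):
    """Numerical representation of the average rate of change of f on
    [a, b]: the slope of the secant line, (f(b) - f(a)) / (b - a)."""
    return (f(b) - f(a)) / (b - a)

# Example: position s(t) = t**2 metres.  The average velocity on [1, 3]
# is (9 - 1) / 2 = 4 m/s; the instantaneous velocity (derivative) at the
# midpoint t = 2 is also 4 m/s, as the mean value theorem guarantees for
# some point in the interval.
s = lambda t: t ** 2
print(average_rate_of_change(s, 1.0, 3.0))  # 4.0
```

Distinguishing this secant slope from the instantaneous (derivative) value is exactly the relational-level understanding the SOLO analysis probes.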

Keywords: average rate of change, context problems, derivative, numerical representation, SOLO taxonomy

Procedia PDF Downloads 89
115 Introducing Two Species of Parastagonospora (Phaeosphaeriaceae) on Grasses from Italy and Russia, Based on Morphology and Phylogeny

Authors: Ishani D. Goonasekara, Erio Camporesi, Timur Bulgakov, Rungtiwa Phookamsak, Kevin D. Hyde

Abstract:

Phaeosphaeriaceae comprises a large number of species occurring mainly on grasses and cereal crops as endophytes, saprobes and, especially, pathogens. Parastagonospora is an important genus in Phaeosphaeriaceae that includes pathogens causing leaf and glume blotch on cereal crops. Currently, fifteen Parastagonospora species have been described, including both pathogens and saprobes. In this study, one sexual morph species and one asexual morph species, occurring as saprobes on members of Poaceae, are introduced based on morphology and a combined molecular analysis of LSU, SSU, ITS, and RPB2 gene sequence data. The sexual morph species, Parastagonospora elymi, was isolated from a Russian sample of Elymus repens, a grass commonly known as couch grass, which is important for grazing animals, as a weed, and in traditional Austrian medicine. P. elymi is similar to the sexual morph of P. avenae in having cylindrical asci bearing 8 overlapping, biseriate, fusiform ascospores, but can be distinguished by its subglobose to conical, wider ascomata; in addition, no sheath was observed surrounding the ascospores. The asexual morph species, a coelomycete, was isolated from a specimen from Italy on Dactylis glomerata, a common grass of temperate regions. It is introduced as Parastagonospora macrouniseptata and bears a close resemblance to P. allouniseptata and P. uniseptata in having globose to subglobose pycnidial conidiomata and hyaline, cylindrical, 1-septate conidia; however, the new species can be distinguished by its much larger conidiomata. In the phylogenetic analysis, which consisted of maximum likelihood and Bayesian analyses, P. elymi showed low bootstrap support but was well segregated from the other strains within the Parastagonospora clade. P. neoallouniseptata formed a sister clade with P. allouniseptata with high statistical support.

Keywords: dothideomycetes, multi-gene analysis, Poaceae, saprobes, taxonomy

Procedia PDF Downloads 115
114 Assessment of Wastewater Reuse Potential for an Enamel Coating Industry

Authors: Guclu Insel, Efe Gumuslu, Gulten Yuksek, Nilay Sayi Ucar, Emine Ubay Cokgor, Tugba Olmez Hanci, Didem Okutman Tas, Fatos Germirli Babuna, Derya Firat Ertem, Okmen Yildirim, Ozge Erturan, Betul Kirci

Abstract:

In order to eliminate water scarcity problems, effective precautions must be taken. Growing competition for water is increasingly forcing facilities to tackle their own water scarcity problems, and at this point the application of wastewater reclamation and reuse offers considerable economic advantages. In this study, an enamel coating facility, one of the facilities with high water consumption, is evaluated in terms of its wastewater reuse potential; wastewater reclamation and reuse can be regarded as one of the best available techniques for this sector. Hence, process and pollution profiles, together with detailed characterization of segregated wastewater sources, are appraised so as to identify the recoverable effluent streams arising from enamel coating operations. Daily, 170 m3 of process water is required and 160 m3 of wastewater is generated. The segregated streams generated by the two enamel coating processes are characterized in terms of conventional parameters. Relatively clean segregated wastewater streams (reusable wastewaters) are collected separately, and experimental treatability studies are conducted on them. The results show that the reusable wastewater fraction amounts to approximately 110 m3/day, accounting for 68% of the total wastewater. The treatment needed for the reusable wastewaters is determined by considering the water quality requirements of the various operations and the characterization of the reusable wastewater streams. Ultrafiltration (UF), nanofiltration (NF) and reverse osmosis (RO) membranes are subsequently applied to the reusable effluent fraction; however, adequate organic matter removal is not obtained with this treatment sequence.

Keywords: enamel coating, membrane, reuse, wastewater reclamation

Procedia PDF Downloads 325
113 Aristotelian Techniques of Communication Used by Current Affairs Talk Shows in Pakistan for Creating Dramatic Effect to Trigger Emotional Relevance

Authors: Shazia Anwer

Abstract:

The current TV talk shows, especially on domestic politics in Pakistan, follow Aristotelian techniques, including deductive reasoning, the three modes of persuasion, and guidelines for communication. The application of "approximate truth" is also seen when talk-show presenters create doubts about political personalities or national issues. The mainstream media of Pakistan, a key carrier of narrative construction for the sake of the primary function of national consensus and of regional and extended public diplomacy, is failing this purpose. This paper highlights the Aristotelian communication methodology, its purposes, its limitations for serious discussion, and its connection to mistrust among the Pakistani population regarding fake or embedded, funded information. Data have been collected from three Pakistani TV talk shows, and their analysis has been carried out by applying the Aristotelian communication method to highlight the core issues. The paper also elaborates that current media education is impaired in providing transparent techniques to train future journalists for meaningful, thought-provoking discussion; for this reason, the paper gives an overview of the Higher Education Commission's (HEC) graduate-level mass communication syllabus for Pakistani universities. The ideas of ethos, logos, and pathos are the main components of TV talk shows, and as a result the educated audience is losing trust in the mainstream media, which eventually generates feelings of distrust and betrayal in society, because the productions resemble the genre of drama instead of facts and analysis; thus the line between current affairs shows and infotainment has become blurred. In the last section, practical implications for improving the meaningfulness and transparency of TV talk shows are suggested, replacing the Aristotelian communication method with a cognitive-semiotic communication approach.

Keywords: Aristotelian techniques of communication, current affairs talk shows, drama, Pakistan

Procedia PDF Downloads 202
112 Use of SUDOKU Design to Assess the Implications of the Block Size and Testing Order on Efficiency and Precision of Dulce De Leche Preference Estimation

Authors: Jéssica Ferreira Rodrigues, Júlio Silvio De Sousa Bueno Filho, Vanessa Rios De Souza, Ana Carla Marques Pinheiro

Abstract:

This study aimed to evaluate the implications of block size and testing order for the efficiency and precision of preference estimation for Dulce de leche samples. Efficiency was defined as the inverse of the average variance of pairwise comparisons among treatments. Precision was defined as the inverse of the variance of the estimates of treatment means (or effects). The experiment was originally designed to test 16 treatments as a series of 8 Sudoku 16x16 designs, 4 randomized independently and 4 others in the reverse order, to yield balance in testing order. Linear mixed models were fitted to the whole experiment, with 112 testers and all their grades, as well as to their partially balanced subgroups, namely: a) experiment with the four initial EU; b) experiment with EU 5 to 8; c) experiment with EU 9 to 12; and d) experiment with EU 13 to 16. Responses were recorded on a nine-point hedonic scale, and a mixed linear model was assumed with random tester and treatment effects and a fixed testing-order effect. Analysis with a cumulative random-effects probit link model was very similar, with essentially no different conclusions, so for simplicity we present the results under the Gaussian assumption. The R-CRAN library lme4 and its function lmer (Fit Linear Mixed-Effects Models) were used for the mixed models, and the libraries Bayesthresh (default Gaussian threshold function) and ordinal, with the function clmm (Cumulative Link Mixed Model), were used to check Bayesian analyses of threshold models and cumulative link probit models. It was noted that the number of samples tested in the same session can influence the acceptance level, underestimating acceptance. However, providing a large number of samples can help to improve sample discrimination.
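The efficiency criterion defined in the abstract, the inverse of the average variance of pairwise treatment comparisons, can be computed for any block design from the intra-block information matrix. The Python sketch below illustrates the calculation on a generic randomized complete block design as a sanity check; it is not the authors' Sudoku layout or their R code.

```python
import numpy as np

def avg_pairwise_variance(N, r, k):
    """Average variance (in units of sigma^2) of all pairwise treatment
    comparisons for a block design with treatment-by-block incidence N."""
    t = N.shape[0]
    R = np.diag(np.asarray(r, dtype=float))   # replications per treatment
    K = np.diag(np.asarray(k, dtype=float))   # block sizes
    C = R - N @ np.linalg.inv(K) @ N.T        # intra-block information matrix
    Cp = np.linalg.pinv(C)                    # generalized inverse of C
    # var(tau_i - tau_j) / sigma^2 = Cp[i,i] + Cp[j,j] - 2*Cp[i,j]
    v = [Cp[i, i] + Cp[j, j] - 2 * Cp[i, j]
         for i in range(t) for j in range(i + 1, t)]
    return float(np.mean(v))

# Sanity check: a randomized complete block design with t = 16 treatments
# in b = 8 complete blocks (so r = 8 replicates, block size k = 16).
t, b = 16, 8
N = np.ones((t, b))
print(avg_pairwise_variance(N, np.full(t, b), np.full(b, t)))  # 2/r = 0.25
```

For a complete design with r replicates this recovers the textbook value 2&sigma;&sup2;/r; an incidence matrix for a Sudoku-type incomplete layout can be substituted for N to compare efficiencies.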

Keywords: acceptance, block size, mixed linear model, testing order

Procedia PDF Downloads 320
111 Exploring the Applications of Neural Networks in the Adaptive Learning Environment

Authors: Baladitya Swaika, Rahul Khatry

Abstract:

Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), in which items are selected by maximum-information (or posterior-based) criteria and ability is estimated with maximum-likelihood (ML) or maximum a posteriori (MAP) estimators. This study aims at combining classical and Bayesian approaches to IRT to create a dataset that is then fed to a neural network, automating the process of ability estimation, and at comparing the result to traditional CAT models designed using IRT. The study uses Python as the base coding language, pymc for statistical modelling of the IRT and scikit-learn for the neural network implementation. On creation of the model and on comparison, it is found that the neural network based model performs 7-10% worse than the IRT model for score estimation. Although performing poorly compared to the IRT model, the neural network model can be beneficially used in back-ends for reducing time complexity, as the IRT model would have to re-calculate the ability every time it receives a request, whereas the prediction from a neural network can be done in a single step for an existing trained regressor. This study also proposes a new kind of framework whereby the neural network model could incorporate feature sets beyond the normal IRT feature set and use a neural network's capacity to learn unknown functions to give rise to better CAT models. Categorical features, such as test type, could be learnt and incorporated in IRT functions with the help of techniques like logistic regression, and could be used to learn functions expressed as models that may not be trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments. This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by incorporating newer and better datasets, which would eventually lead to higher quality testing.
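The MAP ability estimation mentioned above can be sketched for the two-parameter logistic (2PL) model. The item parameters below are hypothetical, and this grid-search version stands in for, rather than reproduces, the study's pymc model.

```python
import numpy as np

def map_ability(responses, a, b):
    """Grid-search MAP estimate of ability theta under a 2PL IRT model
    with a standard normal prior; responses are 0/1 item scores."""
    grid = np.linspace(-4, 4, 801)
    theta = grid[:, None]                        # shape (grid, 1)
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # 2PL correct-response prob.
    loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
    logpost = loglik - 0.5 * grid**2             # add the N(0,1) log-prior
    return grid[np.argmax(logpost)]

# Hypothetical item bank: discriminations a, difficulties b.
a = np.array([1.2, 0.8, 1.5, 1.0, 2.0])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
strong = map_ability(np.array([1, 1, 1, 1, 1]), a, b)
weak = map_ability(np.array([0, 0, 0, 0, 0]), a, b)
print(strong, weak)  # the all-correct respondent gets the higher estimate
```

A neural-network regressor can then be trained on (response pattern, MAP estimate) pairs, which is the single-step prediction the abstract proposes for back-end use.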

Keywords: computer adaptive tests, item response theory, machine learning, neural networks

Procedia PDF Downloads 172
110 Modelling Volatility Spillovers and Cross Hedging among Major Agricultural Commodity Futures

Authors: Roengchai Tansuchat, Woraphon Yamaka, Paravee Maneejuk

Abstract:

In recent years, the global financial crisis, economic instability, and large fluctuations in agricultural commodity prices have led to increased concerns about volatility transmission among those commodities. The problem is further exacerbated when one commodity's volatility is driven by other commodities' price fluctuations, so that decisions on hedging strategy become both costly and ineffective. This paper therefore analyzes the volatility spillover effects among major agricultural commodities, including corn, soybeans, wheat and rice, to help commodity suppliers hedge their portfolios and manage their risk and co-volatility. We provide a switching-regime approach to analyzing volatility spillovers in different economic conditions, namely economic upturns and downturns. In particular, we investigate the relationships and volatility transmissions between these commodities in different economic conditions. We propose a copula-based multivariate Markov-switching GARCH model with two regimes that depend on economic conditions and perform a simulation study to check the accuracy of the proposed model. In this study, the correlation term in the cross-hedge ratio is obtained from six copula families: two elliptical copulas (Gaussian and Student-t) and four Archimedean copulas (Clayton, Gumbel, Frank, and Joe). We use one-step maximum likelihood estimation to estimate our models and compare the performance of these copulas using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). In the application to agricultural commodities, the weekly data cover the period from 4 January 2005 to 1 September 2016, comprising 612 observations. The empirical results indicate that the volatility spillover effects among cereal futures differ in response to economic conditions. In addition, the results on hedge effectiveness suggest optimal cross-hedge strategies in different economic conditions, especially economic upturns and downturns.
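The full copula-based Markov-switching GARCH model is beyond a short snippet, but the minimum-variance cross-hedge ratio at its core, h* = Cov(s, f)/Var(f), can be sketched on simulated returns. All numbers below are illustrative, not the paper's data or estimates.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated weekly returns: a spot commodity hedged with a related
# futures contract; rho, the sigmas and n are placeholders.
n, rho, sig_s, sig_f = 600, 0.6, 0.02, 0.03
z = rng.standard_normal((n, 2))
spot = sig_s * z[:, 0]
fut = sig_f * (rho * z[:, 0] + np.sqrt(1 - rho**2) * z[:, 1])

# Minimum-variance cross-hedge ratio: h* = Cov(spot, fut) / Var(fut).
h = np.cov(spot, fut)[0, 1] / np.var(fut, ddof=1)

hedged = spot - h * fut
print(np.var(hedged) < np.var(spot))  # hedging reduces the variance
```

In the paper's setting the correlation entering h* comes from the fitted copula and switches with the regime, so the hedge ratio differs between upturn and downturn states.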

Keywords: agricultural commodity futures, cereal, cross-hedge, spillover effect, switching regime approach

Procedia PDF Downloads 198
109 Machine Learning in Agriculture: A Brief Review

Authors: Aishi Kundu, Elhan Raza

Abstract:

"Necessity is the mother of invention" - the rapid increase in the global human population has directed the agricultural domain toward machine learning. The basic need of human beings is considered to be food, which can be satisfied through farming. Farming is one of the major revenue generators for the Indian economy. Agriculture is not only a source of employment but also fulfils humans' basic needs, so it is regarded as both a source of employment and a pillar of the economy in developing countries like India. This paper provides a brief review of the progress made in implementing machine learning in the agricultural sector. Accurate predictions are necessary at the right time to boost production and to aid the timely and systematic distribution of agricultural commodities, making their availability in the market faster and more effective. The paper includes a thorough analysis of various machine learning algorithms applied in different aspects of agriculture (crop management, soil management, water management, yield tracking, livestock management, etc.). Crop production is affected by climate change; machine learning can analyse the changing patterns and suggest suitable approaches to minimize loss and maximize yield. Machine learning algorithms and models (regression, support vector machines, Bayesian models, artificial neural networks, decision trees, etc.) are used in smart agriculture to analyze sensor data and predict specific outcomes, which can be vital in increasing the productivity of the agricultural food industry. Machine learning is an ongoing technology helping farmers to improve gains in agriculture and minimize losses. This paper discusses how irrigation and farming management systems are evolving to operate efficiently in real time, and how artificial intelligence (AI) enabled programs are emerging to support farmers through extensive examination of data.

Keywords: machine learning, artificial intelligence, crop management, precision farming, smart farming, pre-harvesting, harvesting, post-harvesting

Procedia PDF Downloads 101
108 Evaluation of the Effect of Milk Recording Intervals on the Accuracy of an Empirical Model Fitted to Dairy Sheep Lactations

Authors: L. Guevara, Glória L. S., Corea E. E, A. Ramírez-Zamora M., Salinas-Martinez J. A., Angeles-Hernandez J. C.

Abstract:

Mathematical models are useful for identifying the characteristics of sheep lactation curves in order to develop and implement improved strategies. However, the accuracy of these models is influenced by factors such as the recording regime, mainly the intervals between test-day records (TDR). The current study aimed to evaluate the effect of different TDR intervals on the goodness of fit of the Wood model (WM) applied to dairy sheep lactations. A total of 4,494 weekly TDRs from 156 lactations of dairy crossbred sheep were analyzed. Three new databases were generated from the original weekly TDR data (7D), comprising intervals of 14 (14D), 21 (21D), and 28 (28D) days. The parameters of the WM were estimated using the "minpack.lm" package in the R software. The shape of the lactation curve (typical or atypical) was defined based on the WM parameters. Goodness of fit was evaluated using the mean square of prediction error (MSPE), the root of MSPE (RMSPE), the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the coefficient of correlation (r) between the actual and estimated total milk yield (TMY). The WM showed an adequate estimate of TMY regardless of the TDR interval (P=0.21) and the shape of the lactation curve (P=0.42). However, we found higher values of r for typical curves compared to atypical curves (0.90 vs. 0.74), with the highest values for the 28D interval (r=0.95). Likewise, we observed an overestimated peak yield (0.92 vs. 6.6 L) and an underestimated time of peak yield (21.5 vs. 1.46) in atypical curves. The best values of RMSPE were observed for the 28D interval for both lactation curve shapes. The significantly lowest values of AIC (P=0.001) and BIC (P=0.001) were shown by the 7D interval for typical and atypical curves. These results represent a first approach to defining an adequate recording regime for dairy sheep in Latin America and showed a better fit of the Wood model using the 7D interval. However, it is possible to obtain good estimates of TMY using a 28D interval, which reduces the sampling frequency and would save additional costs for dairy sheep producers.
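Wood's model y(t) = a t^b exp(-ct) can also be fitted by log-linearization with ordinary least squares. The sketch below uses synthetic noiseless data and numpy, as a stand-in for the authors' nonlinear minpack.lm fit in R; the parameter values are illustrative.

```python
import numpy as np

# Wood's model y(t) = a * t**b * exp(-c*t).  Taking logs gives
#   ln y = ln a + b*ln t - c*t,
# a linear model solvable by ordinary least squares.
a_true, b_true, c_true = 1.8, 0.25, 0.04
t = np.arange(1.0, 211.0, 7.0)      # weekly test-day records (days in milk)
y = a_true * t**b_true * np.exp(-c_true * t)

X = np.column_stack([np.ones_like(t), np.log(t), -t])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
a_hat, b_hat, c_hat = np.exp(coef[0]), coef[1], coef[2]
print(b_hat, c_hat)   # recovers b and c exactly on noiseless data
```

Time of peak yield is t = b/c and the peak yield itself is a(b/c)^b e^(-b), which is how the fitted parameters map onto the peak-yield and time-of-peak traits compared in the study.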

Keywords: incomplete gamma, ewes, curve shapes, modeling

Procedia PDF Downloads 72
107 '3D City Model' through Quantum Geographic Information System: A Case Study of Gujarat International Finance Tec-City, Gujarat, India

Authors: Rahul Jain, Pradhir Parmar, Dhruvesh Patel

Abstract:

Planning and drawing are important aspects of civil engineering. Computer-based urban models are used to test theories about spatial location and the interaction between land uses and related activities. The planner's primary interest is in creating 3D models of buildings and obtaining the terrain surface for urban morphological mapping, virtual reality, disaster management, fly-through generation, visualization, etc. 3D city models have a variety of applications in urban studies. Gujarat International Finance Tec-City (GIFT) is an ongoing construction site between Ahmedabad and Gandhinagar, Gujarat, India. It will be built on 3,590,000 m², between North Latitude 23°9’5’’N and 23°10’55’’N and East Longitude 72°42’2’’E and 72°42’16’’E. To develop 3D city models of GIFT City, the base map of the city was collected from the GIFT office. A Differential Global Positioning System (DGPS) was used to collect Ground Control Points (GCPs) in the field. The GCPs were used for the registration of the base map in QGIS. The registered map was projected in the WGS 84/UTM zone 43N grid and digitized with the help of various shapefile tools in QGIS. The approximate heights of the buildings to be constructed were collected from the GIFT office and recorded in the attribute table of each layer created using the shapefile tools. The Shuttle Radar Topography Mission (SRTM) 1 Arc-Second Global (30 m x 30 m) grid data were used to generate the terrain of GIFT City. The Google Satellite Map was placed in the background to obtain the exact location of GIFT City. Various plugins and tools in QGIS were used to convert the raster layer of the base map into a 3D model, and the fly-through tool was used to capture and view the entire city in 3D. This paper discusses all these techniques and their usefulness in 3D city model creation from the GCPs, base map, SRTM data and QGIS.

Keywords: 3D model, DGPS, GIFT City, QGIS, SRTM

Procedia PDF Downloads 242
106 Effectiveness of Adrenal Venous Sampling in the Management of Primary Aldosteronism: Single Centered Cohort Study at a Tertiary Care Hospital in Sri Lanka

Authors: Balasooriya B. M. C. M., Sujeeva N., Thowfeek Z., Siddiqa Omo, Liyanagunawardana J. E., Jayawardana Saiu, Manathunga S. S., Katulanda G. W.

Abstract:

Introduction and objectives: Adrenal venous sampling (AVS) is the gold standard for discriminating unilateral primary aldosteronism (UPA) from bilateral disease (BPA). AVS is technically demanding and only performed in a limited number of centers worldwide. To the best of our knowledge, except for one study conducted in India, no other research on this topic has been conducted in South Asia. This study aimed to evaluate the effectiveness of AVS in the management of primary aldosteronism. Methods: A total of 32 patients who underwent AVS at the National Hospital of Sri Lanka from April 2021 to April 2023 were enrolled. Demographic, clinical and laboratory data were obtained retrospectively. A procedure was considered successful when adequate cannulation of both adrenal veins was demonstrated. The cortisol gradient between the adrenal vein (AV) and the peripheral vein was used to establish the success of venous cannulation. Lateralization was determined by the aldosterone gradient between the two sides. Continuous and categorical variables were summarized with means, SDs, and proportions, respectively. The mean and standard deviation of the contralateral suppression index (CSI) were estimated with an intercept-only Bayesian inference model. Results: Of the 32 patients, the average age was 52.47 ± 26.14 years, and 19 (59.4%) were males. Both AVs were successfully cannulated in 12 (37.5%). Among them, lateralization was demonstrated in 11 (91.7%), and one was diagnosed as bilateral disease. There were no total failures. Right AV cannulation was unsuccessful in 18 (56.25%), of whom lateralization was demonstrated in 9 (50%), and the others were inconclusive. Left AV cannulation was unsuccessful in only 2 (6.25%); one was lateralized, and the other remained inconclusive. The estimated mean of the CSI was 0.33 (89% credible interval 0.11-0.86). Seven patients underwent unilateral adrenalectomy and demonstrated significant improvement in blood pressure during follow-up. Two patients await surgery; the others were treated medically. Conclusions: Despite failures due to procedural difficulties, AVS remained useful in the management of patients with PA. However, the procedure requires experienced hands and advanced equipment to achieve optimal outcomes.

Keywords: adrenal venous sampling, lateralization, contralateral suppression index, primary aldosteronism

Procedia PDF Downloads 59
105 Congenital Heart Defect (CHD), “The Silent Crises”: The Need for New Innovative Ways to Save the Ghanaian Child - A Retrospective Study

Authors: Priscilla Akua Agyapong

Abstract:

Background: In a country of nearly 34 million people, Ghana suffers from rapidly growing numbers of pediatric CHD cases and not enough pediatric specialists to attend to the burgeoning needs of these children. Most cases are either missed or diagnosed late, resulting in increased mortality. According to the National Cardiothoracic Centre, 1 in every 100,000 births in Ghana has CHD; however, there is limited data on the clinical presentation and its management, one of the many reasons I decided to conduct this case study, coupled with the loss of my 2-month-old niece to multiple ventricular septal defects 3 years ago due to late diagnosis. Method: A retrospective cohort study was performed at the child health clinic of one of Ghana's public tertiary institutions using data from its electronic health record (EHR) from February 2021 to April 2022. All suspected or provisionally diagnosed cases were included in the analysis. Results: Records of over 3,000 children were reviewed, with an approximate male-to-female ratio of 1:1; 53 cases were diagnosed during the study period, most in children less than 5 years of age. 25 cases had complete clinical records, with acyanotic septal defects being the most commonly diagnosed: 62.5% of the cases were ventricular septal defects, followed by patent ductus arteriosus (23%) and atrial septal defects (4.5%). Tetralogy of Fallot was the predominant complex cyanotic CHD, at 10%. Conclusion: The indeterminate coronary anatomy of infants makes it difficult to rely only on echocardiography and other conventional clinical methods in screening for CHDs. Rising modernizations and new innovative approaches can be employed in Ghana for early detection, preventing the delay of a potential surgical repair. It is, therefore, imperative to create the needed awareness about these “silent crises” and help save the Ghanaian child's life.

Keywords: congenital heart defect (CHD), ventricular septal defect (VSD), atrial septal defect (ASD), patent ductus arteriosus (PDA)

Procedia PDF Downloads 85
104 Aeromagnetic Data Interpretation and Source Body Evaluation Using Standard Euler Deconvolution Technique in Obudu Area, Southeastern Nigeria

Authors: Chidiebere C. Agoha, Chukwuebuka N. Onwubuariri, Collins U. Amasike, Tochukwu I. Mgbeojedo, Joy O. Njoku, Lawson J. Osaki, Ifeyinwa J. Ofoh, Francis B. Akiang, Dominic N. Anuforo

Abstract:

In order to interpret airborne magnetic data and evaluate the approximate location, depth, and geometry of the magnetic sources within the Obudu area using the standard Euler deconvolution method, very high-resolution aeromagnetic data over the area were acquired, processed digitally and analyzed using Oasis Montaj 8.5 software. Data analysis and enhancement techniques, including reduction to the equator, the horizontal derivative, first and second vertical derivatives, upward continuation and regional-residual separation, were carried out for detailed data interpretation. Standard Euler deconvolution for structural indices of 0, 1, 2, and 3 was also carried out, and the respective maps were obtained using the Euler deconvolution algorithm. Results show that the total magnetic intensity ranges from -122.9 nT to 147.0 nT and the regional intensity varies between -106.9 nT and 137.0 nT, while the residual intensity ranges between -51.5 nT and 44.9 nT, clearly indicating the masking effect of deep-seated structures over surface and shallow subsurface magnetic materials. Results also indicate that the positive residual anomalies have an NE-SW orientation, which coincides with the trend of major geologic structures in the area. Euler deconvolution for all the considered structural indices yields depths to magnetic sources ranging from the surface to more than 2,000 m. Interpretation of the various structural indices revealed the locations and depths of the source bodies and the existence of geologic models including sills, dykes, pipes, and spherical structures. The area is characterized by intrusive and very shallow basement materials and represents an excellent prospect for solid mineral exploration and development.
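Standard Euler deconvolution solves, for each data window, the linear system implied by Euler's homogeneity equation (x - x0)dT/dx + (y - y0)dT/dy + (z - z0)dT/dz = N(B - T). Below is a minimal numpy sketch on a synthetic homogeneous source with structural index 2 and hypothetical coordinates, not the Oasis Montaj workflow used in the study.

```python
import numpy as np

# Euler's homogeneity equation with background level B and structural
# index SI:  (x-x0)*Tx + (y-y0)*Ty + (z-z0)*Tz = SI*(B - T).
# Rearranged per observation into  A @ [x0, y0, z0, B] = rhs.
SI = 2.0                                   # structural index (e.g. pipe-like)
x0, y0, z0 = 120.0, -40.0, 300.0           # hypothetical source position (m)
k = 1e9                                    # source strength (arbitrary units)

g = np.arange(-500.0, 501.0, 50.0)
x, y = (m.ravel() for m in np.meshgrid(g, g))
r2 = (x - x0)**2 + (y - y0)**2 + z0**2     # observations on the plane z = 0
T = k / r2**(SI / 2)                       # homogeneous field of degree -SI
Tx = -SI * k * (x - x0) / r2**(SI / 2 + 1)
Ty = -SI * k * (y - y0) / r2**(SI / 2 + 1)
Tz = SI * k * z0 / r2**(SI / 2 + 1)        # vertical derivative at z = 0

A = np.column_stack([Tx, Ty, Tz, SI * np.ones_like(T)])
rhs = x * Tx + y * Ty + SI * T             # the z term vanishes at z = 0
sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
print(sol[:3])   # recovers the assumed source position and depth
```

In practice the derivatives come from the measured grid (the horizontal and vertical derivative maps above), the system is solved in sliding windows, and solutions are kept or rejected by depth-uncertainty criteria.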

Keywords: Euler deconvolution, horizontal derivative, Obudu, structural indices

Procedia PDF Downloads 75
103 Numerical Analysis of Gas-Particle Mixtures through Pipelines

Authors: G. Judakova, M. Bause

Abstract:

The ability to model and numerically simulate natural gas flow in pipelines has become highly important for the design of pipeline systems. The formation of hydrate particles and their dynamical behavior are of particular interest, since these processes govern the operating properties of the systems and are responsible for system failures by clogging of the pipelines under certain conditions. Mathematically, natural gas flow can be described by multiphase flow models. Using the two-fluid modeling approach, the gas phase is modeled by the compressible Euler equations and the particle phase by the pressureless Euler equations. The numerical simulation of compressible multiphase flows is an important research topic. It is well known that for nonlinear fluxes, even for smooth initial data, discontinuities in the solution are likely to occur in finite time; they are called shock waves or contact discontinuities. For hyperbolic and singularly perturbed parabolic equations, the standard application of the Galerkin finite element method (FEM) leads to spurious oscillations (e.g. the Gibbs phenomenon). In our approach, we use a stabilized FEM, the streamline upwind Petrov-Galerkin (SUPG) method, in which artificial diffusion acting only in the direction of the streamlines is added, together with a special treatment of the boundary conditions in the inviscid convective terms. Numerical experiments show that the numerical solution obtained and stabilized by SUPG captures discontinuities or steep gradients of the exact solution in layers. However, within such a layer the approximate solution may still exhibit overshoots or undershoots. To suitably reduce these artifacts we add a discontinuity-capturing (shock-capturing) term. The performance of our numerical scheme is illustrated for a two-phase flow problem.
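A one-dimensional analogue shows the SUPG mechanism: for steady advection-diffusion with linear elements, SUPG reduces to adding tau*a^2 of streamline (here, upwind) diffusion, and the classical choice of tau makes the scheme nodally exact. The sketch below is an illustration under these simplifying assumptions, not the authors' two-fluid scheme.

```python
import numpy as np

# 1D model problem: -eps*u'' + a*u' = 0 on (0,1), u(0)=0, u(1)=1.
# For linear elements, SUPG adds tau*a^2 of streamline diffusion; the
# classical tau = h/(2a) * (coth(Pe) - 1/Pe) makes the nodes exact.
a, eps, n = 1.0, 0.01, 10
h = 1.0 / n
Pe = a * h / (2 * eps)                      # element Peclet number
tau = h / (2 * a) * (1 / np.tanh(Pe) - 1 / Pe)
eps_t = eps + tau * a**2                    # effective stabilized diffusion

# Interior equations: a*(u[i+1]-u[i-1])/2 + eps_t*(-u[i-1]+2u[i]-u[i+1])/h = 0
m = n - 1
lo = -eps_t / h - a / 2                     # coefficient of u[i-1]
di = 2 * eps_t / h                          # coefficient of u[i]
up = -eps_t / h + a / 2                     # coefficient of u[i+1]
K = np.diag(np.full(m, di)) + np.diag(np.full(m - 1, lo), -1) \
    + np.diag(np.full(m - 1, up), 1)
f = np.zeros(m)
f[-1] = -up                                 # boundary condition u(1) = 1
u = np.linalg.solve(K, f)

x = np.linspace(0.0, 1.0, n + 1)[1:-1]
exact = np.expm1(a * x / eps) / np.expm1(a / eps)
print(np.max(np.abs(u - exact)))            # machine-precision nodal error
```

With tau = 0 (plain Galerkin) the same system oscillates at this Peclet number, which is the instability the abstract describes; the discontinuity-capturing term treats the residual over- and undershoots inside layers.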

Keywords: two-phase flow, gas-particle mixture, inviscid two-fluid model, Euler equations, finite element method, streamline upwind Petrov-Galerkin, shock capturing

Procedia PDF Downloads 308
102 Climate Change Impact Due to Timber Product Imports in the UK

Authors: Juan A. Ferriz-Papi, Allan L. Nantel, Talib E. Butt

Abstract:

Buildings are thought to consume about 50% of the total energy in the UK. The use stage of a building's life cycle has the largest energy consumption, although various assessments show that construction can equal several years of maintenance and operation. The selection of materials with lower embodied energy is therefore very important for reducing this consumption, and timber is a suitable material due to its low embodied energy and its capacity to act as carbon storage. The use of timber in the construction industry is very significant: sawn wood, for example, is one of the top 5 construction materials consumed in the UK according to National Statistics. The embodied energy of a building product covers the energy consumed in the extraction and production stages, but the result differs considerably depending on whether the product is produced locally or sourced from further afield. Transport is thus a very relevant factor that profoundly influences embodied energy results. The case of timber use in the UK is important because the balance between imports and exports is strongly negative, with industry consuming more imported timber than is produced domestically. Nearly 80% of sawn softwood used in construction is imported, and the import-export deficit for sawn wood accounted for more than 180 million pounds during the first four months of 2016. More than 85% of these imports come from Europe (83% from the EU). The aim of this study is to analyze the climate change impact of transport for timber products consumed in the UK. An approximate estimation of the energy consumed and the carbon emissions is calculated according to each timber product's import origin. The results are compared to the total consumption of each product, estimating the impact of transport on the final embodied energy and carbon emissions. The analysis suggests that one big challenge for climate change is the reduction of external dependency, with the associated improvement of internal production of timber products. A study of different types of timber products produced in the UK and abroad is developed to understand the possibilities for the country to improve sustainability and self-management. Reuse and recycling possibilities are also considered.
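The transport contribution described above can be approximated as mass x distance x mode-specific emission factor, summed over import legs. All figures in the sketch below are placeholders, not the paper's dataset; real emission factors (kg CO2e per tonne-km) vary by source, vessel and mode.

```python
# Transport-stage CO2e for imported sawn softwood: tonnes * km * factor.
# Factors and shipments are illustrative placeholders only.
FACTORS = {"sea": 0.015, "road": 0.11, "rail": 0.03}  # kg CO2e / tonne-km

shipments = [
    # (origin, tonnes, mode, km) -- hypothetical import legs
    ("Sweden", 50_000, "sea", 1400),
    ("Latvia", 30_000, "sea", 1700),
    ("Ireland", 10_000, "road", 600),
]

total = sum(t * FACTORS[mode] * km for _, t, mode, km in shipments)
print(f"{total / 1000:.0f} t CO2e")   # -> 2475 t CO2e for these legs
```

Comparing this transport term with the production-stage embodied carbon of each product is what lets the study estimate how much of the final footprint import distance is responsible for.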

Keywords: embodied energy, climate change, CO2 emissions, timber, transport

Procedia PDF Downloads 341
101 Source Identification Model Based on Label Propagation and Graph Ordinary Differential Equations

Authors: Fuyuan Ma, Yuhan Wang, Junhe Zhang, Ying Wang

Abstract:

Identifying the sources of information dissemination is a pivotal task in the study of collective behaviors in networks, enabling us to discern and intercept the critical pathways through which information propagates from its origins. This allows the dissemination impact of the information to be controlled in its early stages. Numerous methods for source detection rely on pre-existing, underlying propagation models as prior knowledge. Current models that eschew prior knowledge attempt to harness label propagation algorithms to model the statistical characteristics of propagation states, or employ Graph Neural Networks (GNNs) for deep reverse modeling of the diffusion process. These approaches are either deficient in modeling the propagation patterns of information or are constrained by the over-smoothing problem inherent in GNNs, which limits the stacking of sufficient model depth to excavate global propagation patterns. Consequently, we introduce the ODESI model. Initially, the model employs a label propagation algorithm to delineate the distribution density of infected states within a graph structure and extends the representation of infected states from integers to state vectors, which serve as the initial states of nodes. Subsequently, the model constructs a deep architecture based on GNNs coupled with Ordinary Differential Equations (ODEs) to model the global propagation patterns of continuous propagation processes. Addressing the challenges associated with solving ODEs on graphs, we approximate the analytical solutions to reduce computational costs. Finally, we conduct simulation experiments on two real-world social network datasets, and the results affirm the efficacy of our proposed ODESI model in source identification tasks.
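The label-propagation stage described above can be sketched by diffusing the observed infection indicator over the graph and ranking candidate sources by the converged density. The GNN-ODE stage is omitted, and the graph and parameters below are illustrative, not the ODESI model itself.

```python
import numpy as np

def propagate(adj, infected, alpha=0.5, iters=100):
    """Diffuse the 0/1 infection indicator over the graph; nodes deep
    inside the infected region accumulate the highest density."""
    A = np.asarray(adj, dtype=float)
    D_inv = np.diag(1.0 / A.sum(axis=1))       # row-normalization
    x0 = np.zeros(len(A))
    x0[list(infected)] = 1.0
    x = x0.copy()
    for _ in range(iters):
        x = alpha * D_inv @ A @ x + (1 - alpha) * x0
    return x

# Path graph 0-1-2-3-4-5-6 with an observed infected cluster {2, 3, 4};
# the most central infected node is the natural source candidate.
n = 7
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1
infected = {2, 3, 4}
scores = propagate(A, infected)
source = max(infected, key=lambda v: scores[v])
print(source)  # node 3 sits at the centre of the infected region
```

In the full model these converged densities are expanded into state vectors and refined by the GNN-coupled ODE before the final source ranking.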

Keywords: source identification, ordinary differential equations, label propagation, complex networks

Procedia PDF Downloads 3
100 S. cerevisiae Strains Co-Cultured with Isochrysis Galbana Create Greater Biomass for Biofuel Production than Nannochloropsis sp.

Authors: Madhalasa Iyer

Abstract:

The increase in sustainable practices has encouraged research into and production of alternative fuels. New bio-flocculation techniques using added yeast and bacteria strains have increased the efficiency of biofuel production. Fatty acid methyl ester (FAME) analysis in previous research has indicated that yeast can serve as a plausible enhancer of microalgal lipid production. This research seeks to identify the yeast and microalgae treatment group that produces the largest algal biomass; the mass of the dried algae is used as a proxy for TAG production, which correlates with biofuel cultivation. The study uses a model bioreactor built from PVC pipes, an 8-port sprinkler system manifold, a CO2 aquarium tank, and disposable water bottles to grow the microalgae. Nannochloropsis sp. and Isochrysis galbana were inoculated separately, in experimental groups 1 and 2 with no treatments, and in experimental groups 3 and 4 with each alga co-cultured with Saccharomyces cerevisiae, in a medium of standard garden stone fertilizer. S. cerevisiae was grown in a petri dish with nutrient agar medium before inoculation. A Secchi stick was used before extraction to collect data on the optical density of the microalgae, and a biomass estimator was then used to measure the approximate biomass production. The microalgae were grown and extracted with a French press to analyze secondary measurements using the dried biomass. The experimental units of Isochrysis galbana treated with the baker's yeast strains showed an increase in the overall mass of the dried algae; S. cerevisiae proved an accurate and helpful addition to the culture for promoting algal growth. The increase in the productivity of this fuel source legitimizes the possible replacement of non-renewable sources with more promising renewable alternatives, and this research furthers the notion that yeast and yeast mutants can be engineered for efficient biofuel creation.

Keywords: biofuel, co-culture, S. cerevisiae, microalgae, yeast

Procedia PDF Downloads 106
99 Processing and Evaluation of Jute Fiber Reinforced Hybrid Composites

Authors: Mohammad W. Dewan, Jahangir Alam, Khurshida Sharmin

Abstract:

Synthetic fibers (carbon, glass, aramid, etc.) are generally utilized to make composite materials with better mechanical and thermal properties. However, they are expensive and non-biodegradable. From the perspective of Bangladesh, jute fibers are available, inexpensive, and possess good mechanical properties. The improved properties (i.e., low cost, low density, eco-friendliness) of natural fibers have made them a promising reinforcement in hybrid composites without sacrificing mechanical properties. In this study, jute and E-glass fiber reinforced hybrid composite materials are fabricated utilizing hand lay-up followed by a compression molding technique. A room-temperature-cured two-part epoxy resin is used as the matrix. Approximately 6-7 mm thick composite panels are fabricated utilizing 17 layers of woven glass and jute fibers with different fiber layering sequences: only jute, only glass, glass and jute alternating (g/j/g/j---), and 4 glass - 9 jute - 4 glass (4g-9j-4g). The fabricated composite panels are analyzed through fiber volume calculation, tensile testing, bending testing, and water absorption testing. The hybridization of jute and glass fiber results in better tensile, bending, and water absorption properties than only-jute composites, but inferior properties compared to only-glass composites. Among the different fiber layering sequences, the 4g-9j-4g sequence resulted in the best tensile, bending, and water absorption properties. The effects of chemical treatment of the woven jute fiber and of chopped glass microfiber infusion are also investigated. The chemically treated jute fiber and 2 wt.% chopped glass microfiber infused hybrid composite shows about 12% improvement in flexural strength compared to the untreated panel without microfiber infusion. However, fiber chemical treatment and micro-filler do not have a significant effect on tensile strength.

Keywords: compression molding, chemical treatment, hybrid composites, mechanical properties

Procedia PDF Downloads 154