Search results for: weighted permutation entropy (WPE)
389 From Single to Multilayer Polyvinylidene Fluoride Based Polymer for Electro-Caloric Cooling
Authors: Nouh Zeggai, Lucas Debrux, Fabien Parrain, Brahim Dkhil, Martino Lobue, Morgan Almanza
Abstract:
Refrigeration and air conditioning are among the largest energy consumers in our daily life, especially vapor compression refrigeration. Electrocaloric materials appear as an alternative towards solid-state cooling. Polyvinylidene fluoride (PVDF) based polymers have shown promising adiabatic temperature change (∆T) and entropy change (∆S). There is practically no limit to the electric field that can be applied, except the one that the material can withstand. However, when working with a large surface as required in a device, the chance of having a defect is larger, which can drastically reduce the breakdown voltage and thus degrade the electrocaloric properties. In this work, we propose to study how the characteristics of a single film transpose when going to a multilayer. The laminator and the hot press appear as two interesting processes, and both have been investigated to achieve a multilayer film. The study focuses mainly on the breakdown field and the adiabatic temperature change, but the phase and crystallinity have also been measured. We process single-layer PVDF-based films and assemble them to obtain a multilayer. Hot pressing and lamination were used for the production of the thin films. The multilayer film shows higher breakdown strength, temperature change, and crystallinity (beta phase) using the hot press technique.
Keywords: PVDF-TrFE-CFE, multilayer, electrocaloric effect, hot press, cooling device
Procedia PDF Downloads 170
388 Energy and Exergy Analyses of Thin-Layer Drying of Pineapple Slices
Authors: Apolinar Picado, Steve Alfaro, Rafael Gamero
Abstract:
Energy and exergy analyses of thin-layer drying of pineapple slices (Ananas comosus L.) were conducted in a laboratory tunnel dryer. Drying experiments were carried out at three temperatures (100, 115 and 130 °C) and an air velocity of 1.45 m/s. The effects of drying variables on energy utilisation, energy utilisation ratio, exergy loss and exergy efficiency were studied. The enthalpy difference of the gas increased as the inlet gas temperature increased. It is observed that at 75 minutes into the drying process the outlet gas enthalpy reaches a maximum value that is very close to the inlet value and remains constant until the end of the drying process. This behaviour is due to the reduction of the total enthalpy within the system, or in other words, the reduction of the effective heat transfer from the hot gas flow to the vegetable being dried. Further, the outlet entropy exhibits a significant increase that is due not only to the temperature variation, but also to the increase of the water vapour phase contained in the hot gas flow. The maximum value of the exergy efficiency curve corresponds to the maximum value observed within the drying rate curves. This maximum represents the stage when the available energy is efficiently used in the removal of the moisture within the solid. As the drying rate decreases, the available energy begins to be less efficiently employed. The exergetic efficiency was directly dependent on the evaporation flux, and since convective drying is less efficient than other types of dryers, it is likely that the exergetic efficiency has relatively low values.
Keywords: efficiency, energy, exergy, thin-layer drying
Procedia PDF Downloads 255
387 Modelling of Aerosols in Absorption Column
Authors: Hammad Majeed, Hanna Knuutila, Magne Hillestad, Hallvard F. Svendsen
Abstract:
Formation of aerosols can cause serious complications in industrial exhaust gas cleaning processes. Small mist droplets and fog formed can normally not be removed in conventional demisting equipment because their submicron size allows the particles or droplets to follow the gas flow. As a consequence of this, aerosol-based emissions in the order of grams per Nm³ have been identified from PCCC plants. The model predicts the droplet size, the droplet internal variable profiles, and the mass transfer fluxes as a function of position in the absorber. The Matlab model is based on a subclass of the method of weighted residuals for boundary value problems, namely the orthogonal collocation method. This paper presents results describing the basic simulation tool for the characterization of aerosols formed in CO2 absorption columns and describes how various entering droplets grow or shrink through an absorber and how their composition changes with respect to time. Preliminary simulation results are given below for aerosol droplet composition and temperature profiles.
Keywords: absorption columns, aerosol formation, amine emissions, internal droplet profiles, monoethanolamine (MEA), post combustion CO2 capture, simulation
Procedia PDF Downloads 244
386 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data
Authors: Nicola Colaninno, Eugenio Morello
Abstract:
The urban environment affects local-to-global climate and, in turn, suffers global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. Physical and morphological features of the built-up space affect urban air temperature locally, causing the urban environment to be warmer than the surrounding rural areas. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed Land Surface Temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature. However, the spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although different interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such an approach may not effectively reflect the real climatic conditions of an interpolated point. Quantifying local UHI for extensive areas based on weather stations’ observations alone is not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST. Data from Landsat, ASTER, or MODIS have been extensively used. Indeed, LST has an indirect but significant influence on air temperatures. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach to enable NSAT estimation by accounting for the spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature from fixed weather stations and satellite-derived LST. The approach is structured in two main steps.
First, a GWR model was set up to estimate NSAT at low resolution, by combining air temperature from discrete observations retrieved by weather stations (dependent variable) and the LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite, at 1 kilometer of spatial resolution, were employed. Two time periods are considered according to the satellite revisit period, i.e. 10:30 am and 9:30 pm. Afterward, the results were downscaled to 30 meters of spatial resolution by setting up a GWR model between the previously retrieved near-surface air temperature (dependent variable), the multispectral information provided by the Landsat mission, in particular the albedo, and the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 meters. Albedo and DEM are now the predictors. The area under investigation is the Metropolitan City of Milan, which covers an area of approximately 1,575 km² and encompasses a population of over 3 million inhabitants. Both models, low-resolution (1 km) and high-resolution (30 meters), have been validated by cross-validation relying on indicators such as R², Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). All the employed indicators give evidence of highly efficient models. In addition, an alternative network of weather stations, available for the City of Milano only, was employed for testing the accuracy of the predicted temperatures, giving an RMSE of 0.6 and 0.7 for daytime and night-time, respectively.
Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing
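As a rough illustration of the local-regression idea behind GWR, the sketch below fits a separate weighted least-squares line (air temperature vs. LST) at each prediction point, with Gaussian kernel weights decaying with distance. The station data, bandwidth and kernel are assumptions for illustration, not the paper's actual setup:

```python
import numpy as np

def gwr_predict(coords, y, x, pred_coords, pred_x, bandwidth):
    """Local weighted least squares y = b0 + b1*x at each prediction point,
    with Gaussian kernel weights decaying with distance (a minimal GWR)."""
    X = np.column_stack([np.ones_like(x), x])
    preds = []
    for c, xv in zip(pred_coords, pred_x):
        d = np.linalg.norm(coords - c, axis=1)          # distances to stations
        w = np.exp(-0.5 * (d / bandwidth) ** 2)         # Gaussian kernel weights
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        preds.append(beta[0] + beta[1] * xv)
    return np.array(preds)

# synthetic stations: air temperature depends linearly on LST plus noise
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 10.0, size=(50, 2))   # hypothetical station locations (km)
lst = rng.uniform(20.0, 40.0, size=50)          # hypothetical satellite LST (degC)
t_air = 0.8 * lst + 2.0 + rng.normal(0.0, 0.1, size=50)
pred = gwr_predict(coords, t_air, lst, coords[:5], lst[:5], bandwidth=5.0)
```

In a real GWR the bandwidth would be selected by cross-validation rather than fixed, and the kernel may be adaptive to station density.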
Procedia PDF Downloads 195
385 Investigation of Complexity Dynamics in a DC Glow Discharge Magnetized Plasma Using Recurrence Quantification Analysis
Authors: Vramori Mitra, Bornali Sarma, Arun K. Sarma
Abstract:
Recurrence is a ubiquitous feature of any real dynamical system. The states in the phase space trajectory of a system have an inherent tendency to return to the same state, or a close one, after a certain time lapse. The recurrence quantification analysis (RQA) technique, based on this fundamental feature of a dynamical system, detects the evolution of the system state under variation of a control parameter. The paper presents an investigation of the nonlinear dynamical behavior of plasma floating potential fluctuations, obtained using a Langmuir probe in different magnetic fields under the variation of discharge voltages. The main measures of recurrence quantification analysis considered are determinism (DET), linemax and entropy. An increment of the DET and linemax variables indicates that the predictability and periodicity of the system are increasing. The linemax variable indicates that the chaoticity diminishes as the magnetic field decreases, while increasing the magnetic field enhances the chaotic behavior. The fractal property of the plasma time series, estimated by the detrended fluctuation analysis (DFA) technique, reflects that the long-range correlation of plasma fluctuations decreases while the fractal dimension increases with the enhancement of the magnetic field, which corroborates the RQA analysis.
Keywords: detrended fluctuation analysis, chaos, phase space, recurrence
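The DET measure used above can be sketched in a few lines: build a recurrence matrix from a delay-embedded signal, then compute the fraction of recurrence points that lie on diagonal lines of length at least two. The test signal, embedding delay and threshold below are illustrative assumptions, not the plasma data of the paper:

```python
import numpy as np

def embed(x, dim=2, delay=1):
    """Time-delay embedding of a scalar series into dim-dimensional vectors."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])

def recurrence_matrix(traj, eps):
    """R[i, j] = 1 if states i and j are closer than eps."""
    d = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=2)
    return (d <= eps).astype(int)

def determinism(R, lmin=2):
    """Fraction of off-diagonal recurrence points on diagonal lines
    of length >= lmin (the DET measure of RQA)."""
    n = R.shape[0]
    on_lines = 0
    for k in range(1, n):                       # upper triangle; R is symmetric
        run = 0
        for v in list(np.diagonal(R, k)) + [0]:
            if v:
                run += 1
            else:
                if run >= lmin:
                    on_lines += run
                run = 0
    recurrent = (R.sum() - n) // 2              # off-diagonal points, one triangle
    return on_lines / recurrent if recurrent else 0.0

t = np.linspace(0, 8 * np.pi, 400)
traj = embed(np.sin(t), dim=2, delay=12)        # periodic test signal
R = recurrence_matrix(traj, eps=0.2)
det = determinism(R)
```

A strongly periodic signal like this one yields DET close to 1; a stochastic signal would give a much lower value.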
Procedia PDF Downloads 328
384 Bounds on the Laplacian Vertex PI Energy
Authors: Ezgi Kaya, A. Dilek Maden
Abstract:
A topological index is a number related to a graph which is invariant under graph isomorphism. In theoretical chemistry, molecular structure descriptors (also called topological indices) are used for modeling physicochemical, pharmacologic, toxicologic, biological and other properties of chemical compounds. Let G be a graph with n vertices and m edges. For a given edge uv, the quantity nu(e) denotes the number of vertices closer to u than to v; the quantity nv(e) is defined analogously. The vertex PI index is defined as the sum of nu(e) and nv(e), where the sum is taken over all edges of G. The energy of a graph is defined as the sum of the absolute values of the eigenvalues of the adjacency matrix of G, and the Laplacian energy of a graph is defined as the sum of the absolute values of the differences between the Laplacian eigenvalues and the average degree of G. In theoretical chemistry, the π-electron energy of a conjugated carbon molecule, computed using the Hückel theory, coincides with the energy. Hence results on graph energy assume special significance. The Laplacian matrix of a graph G weighted by the vertex PI weighting is the Laplacian vertex PI matrix, and the Laplacian vertex PI eigenvalues of a connected graph G are the eigenvalues of its Laplacian vertex PI matrix. In this study, the Laplacian vertex PI energy of a graph G is defined. We also give some bounds for the Laplacian vertex PI energy of graphs in terms of the vertex PI index, the sum of the squares of entries in the Laplacian vertex PI matrix, and the absolute value of the determinant of the Laplacian vertex PI matrix.
Keywords: energy, Laplacian energy, Laplacian vertex PI eigenvalues, Laplacian vertex PI energy, vertex PI index
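The quantities above can be computed directly on a small example. The sketch below weights each edge uv by nu(e) + nv(e), forms the corresponding weighted Laplacian, and evaluates an energy of the form Σ|μᵢ − 2W/n| (the usual Laplacian-energy shape with total edge weight W in place of m); the exact centering used in the paper may differ:

```python
import numpy as np
from itertools import combinations

def vertex_pi_laplacian(adj):
    """Laplacian weighted by nu(e) + nv(e) per edge (a sketch).

    nu(e): vertices strictly closer to u than to v, computed from
    all-pairs shortest paths (Floyd-Warshall)."""
    n = len(adj)
    D = np.where(adj, 1.0, np.inf)
    np.fill_diagonal(D, 0.0)
    for k in range(n):                                  # Floyd-Warshall
        D = np.minimum(D, D[:, [k]] + D[[k], :])
    W = np.zeros((n, n))
    for u, v in combinations(range(n), 2):
        if adj[u][v]:
            nu = np.sum(D[:, u] < D[:, v])              # closer to u
            nv = np.sum(D[:, v] < D[:, u])              # closer to v
            W[u, v] = W[v, u] = nu + nv
    L = np.diag(W.sum(axis=1)) - W
    return L, W

# 4-cycle: for every edge, nu = nv = 2, so each edge weight is 4
adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
L, W = vertex_pi_laplacian(adj)
mu = np.linalg.eigvalsh(L)                              # Laplacian PI eigenvalues
total_weight = W.sum() / 2
pi_energy = np.sum(np.abs(mu - 2 * total_weight / len(adj)))
```

For the 4-cycle the weighted Laplacian eigenvalues are 0, 8, 8, 16, giving an energy of 16 under this centering.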
Procedia PDF Downloads 245
383 Preparation of Activated Carbon from Lignocellulosic Precursor for Dyes Adsorption
Authors: H. Mokaddem, D. Miroud, N. Azouaou, F. Si-Ahmed, Z. Sadaoui
Abstract:
The synthesis and characterization of activated carbon from a local lignocellulosic precursor (Algerian alfa) was carried out for the removal of cationic dyes from aqueous solutions. The effects of production variables such as the chemical impregnation agent, impregnation ratio, activation temperature and activation time were investigated. The carbon obtained using the optimum conditions (CaCl2 / 1:1 / 500 °C / 2 h) was characterized by various analytical techniques: scanning electron microscopy (SEM), infrared spectroscopic analysis (FTIR) and the point of zero charge (pHpzc). Adsorption tests of methylene blue (MB) on the optimal activated carbon were conducted. The effects of contact time, amount of adsorbent, initial dye concentration and pH were studied. The adsorption equilibrium, examined using the Langmuir, Freundlich, Temkin and Redlich–Peterson models, reveals that the Langmuir model is the most appropriate to describe the adsorption process. The kinetics of MB sorption onto activated carbon follows the pseudo-second-order rate expression. The thermodynamic analysis indicates that the adsorption process is spontaneous (ΔG° < 0) and endothermic (ΔH° > 0); the positive value of the standard entropy shows the affinity between the activated carbon and the dye. The present study showed that the optimal activated carbon prepared from Algerian alfa is an effective low-cost adsorbent and can be employed as an alternative to commercial activated carbon for the removal of MB dye from aqueous solution.
Keywords: activated carbon, adsorption, cationic dyes, Algerian alfa
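Fitting the Langmuir isotherm mentioned above, qe = qmax·KL·Ce / (1 + KL·Ce), is a one-line nonlinear regression. The equilibrium data below are synthetic placeholders (the paper's measured values are not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * kl * ce / (1 + kl * ce)

# synthetic equilibrium data (Ce in mg/L, qe in mg/g) -- illustrative only
ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = langmuir(ce, qmax=150.0, kl=0.05)

# nonlinear least-squares fit of qmax and KL from the data
(qmax_fit, kl_fit), _ = curve_fit(langmuir, ce, qe, p0=[100.0, 0.01])
```

The same pattern (a model function plus `curve_fit`) applies to the Freundlich, Temkin and Redlich–Peterson models compared in the study.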
Procedia PDF Downloads 229
382 A Neural Network Approach to Evaluate Supplier Efficiency in a Supply Chain
Authors: Kishore K. Pochampally
Abstract:
The success of a supply chain heavily relies on the efficiency of the suppliers involved. In this paper, we propose a neural network approach to evaluate the efficiency of a supplier that is being considered for inclusion in a supply chain, using the available linguistic (fuzzy) data of suppliers that already exist in the supply chain. The approach is carried out in three phases, as follows: In phase one, we identify criteria for evaluation of the supplier of interest. Then, in phase two, we use performance measures of already existing suppliers to construct a neural network that gives the weights (importance values) of the criteria identified in phase one. Finally, in phase three, we calculate the overall rating of the supplier of interest. The following are the major findings of the research conducted for this paper: (i) linguistic (fuzzy) ratings of suppliers such as 'good', 'bad', etc., can be converted (defuzzified) to numerical ratings (1–10 scale) using fuzzy logic so that those ratings can be used for further quantitative analysis; (ii) it is possible to construct and train a multi-level neural network in order to determine the weights of the criteria that are used to evaluate a supplier; and (iii) Borda’s rule can be used to group the weighted ratings and calculate the overall efficiency of the supplier.
Keywords: fuzzy data, neural network, supplier, supply chain
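The defuzzify-then-aggregate step can be sketched as follows. The linguistic-to-numeric mapping and criterion weights here are hypothetical (in the paper the weights come from a trained neural network and the final grouping uses Borda's rule; a plain weighted sum stands in for that aggregation):

```python
# map linguistic ratings to numeric scores on a 1-10 scale
# (hypothetical defuzzified values, not the paper's membership functions)
SCALE = {"very bad": 1.5, "bad": 3.0, "fair": 5.0, "good": 7.0, "very good": 9.0}

def overall_rating(ratings, weights):
    """Weighted aggregate of defuzzified criterion ratings.

    A simple weighted-sum stand-in for the paper's NN-weights + Borda step."""
    total_w = sum(weights.values())
    return sum(SCALE[ratings[c]] * w for c, w in weights.items()) / total_w

weights = {"quality": 0.5, "delivery": 0.3, "cost": 0.2}   # hypothetical NN output
ratings = {"quality": "good", "delivery": "very good", "cost": "fair"}
score = overall_rating(ratings, weights)
```

With these assumed numbers the candidate supplier scores (7·0.5 + 9·0.3 + 5·0.2) = 7.2 out of 10.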
Procedia PDF Downloads 114
381 Comparison of Data Mining Models to Predict Future Bridge Conditions
Authors: Pablo Martinez, Emad Mohamed, Osama Mohsen, Yasser Mohamed
Abstract:
Highway and bridge agencies, such as the Ministry of Transportation of Ontario, use the Bridge Condition Index (BCI), defined as the weighted condition of all bridge elements, to determine the rehabilitation priorities for their bridges. Accurate forecasting of the BCI is therefore essential for bridge rehabilitation budget planning. The large amount of bridge condition data available over several years makes traditional mathematical models infeasible as analysis methods. This research study focuses on investigating different classification models developed to predict the bridge condition index in the province of Ontario, Canada, based on publicly available data for 2,800 bridges over a period of more than 10 years. Data preparation is a key factor in developing acceptable classification models, even with the simplest one, the k-NN model. All the models were tested, compared and statistically validated via cross-validation and t-tests. A simple k-NN model showed reasonable results (within 0.5% relative error) when predicting the bridge condition in an incoming year.
Keywords: asset management, bridge condition index, data mining, forecasting, infrastructure, knowledge discovery in databases, maintenance, predictive models
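A minimal k-NN forecast of the kind described can be sketched on synthetic data. The features (bridge age and current BCI) and the deterioration model are assumptions standing in for the Ontario records:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# synthetic stand-in for the Ontario data: bridge age and current BCI
# are used to predict next year's BCI (variables are assumptions)
rng = np.random.default_rng(1)
age = rng.uniform(0.0, 60.0, 300)
bci = 100.0 - 0.5 * age + rng.normal(0.0, 0.5, 300)
next_bci = bci - 0.5 + rng.normal(0.0, 0.2, 300)   # mild yearly deterioration

X = np.column_stack([age, bci])
knn = KNeighborsRegressor(n_neighbors=5).fit(X[:250], next_bci[:250])
pred = knn.predict(X[250:])
rel_err = np.mean(np.abs(pred - next_bci[250:]) / next_bci[250:])
```

On this toy data the 5-nearest-neighbour average lands well within a few percent relative error, illustrating why even the simplest model can perform acceptably after careful data preparation.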
Procedia PDF Downloads 191
380 The Use of Stochastic Gradient Boosting Method for Multi-Model Combination of Rainfall-Runoff Models
Authors: Phanida Phukoetphim, Asaad Y. Shamseldin
Abstract:
In this study, the novel Stochastic Gradient Boosting (SGB) combination method is addressed for producing daily river flows from four different rainfall-runoff models of the Ohinemuri catchment, New Zealand. The selected rainfall-runoff models are two empirical black-box models, the linear perturbation model and the linear varying gain factor model, and two conceptual models, the soil moisture accounting and routing model and the Nedbør-Afstrømnings Model. The simple average combination method and the weighted average combination method were used as benchmarks for comparing the results of the novel SGB combination method. The models and combination results are evaluated using statistical and graphical criteria. The overall results of this study show that the use of a combination technique can certainly improve the simulated river flows of the four selected models for the Ohinemuri catchment, New Zealand. The results also indicate that the novel SGB combination method is capable of accurate prediction when used to combine the simulated river flows in New Zealand.
Keywords: multi-model combination, rainfall-runoff modeling, stochastic gradient boosting, bioinformatics
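The three combination schemes compared above can be sketched side by side on synthetic flows. The two "model" series below are stand-ins with assumed bias and noise levels, not the four rainfall-runoff models of the paper; `subsample < 1` in scikit-learn's gradient boosting gives the "stochastic" variant:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# synthetic daily flows and two imperfect model simulations (stand-ins;
# biases and noise levels are assumptions)
rng = np.random.default_rng(2)
obs = 50.0 + 20.0 * np.sin(np.linspace(0.0, 12.0, 400)) + rng.normal(0.0, 1.0, 400)
m1 = obs + rng.normal(5.0, 3.0, 400)   # biased but precise model
m2 = obs + rng.normal(0.0, 5.0, 400)   # unbiased but noisy model

simple = (m1 + m2) / 2                 # simple average combination
w = 1.0 / np.array([np.mean((m1 - obs) ** 2), np.mean((m2 - obs) ** 2)])
weighted = (w[0] * m1 + w[1] * m2) / w.sum()    # inverse-MSE weighted average

# stochastic gradient boosting combiner, trained on part of the record
X = np.column_stack([m1, m2])
sgb = GradientBoostingRegressor(subsample=0.8, random_state=0).fit(X[:300], obs[:300])
mse_sgb = np.mean((sgb.predict(X[300:]) - obs[300:]) ** 2)
mse_simple = np.mean((simple[300:] - obs[300:]) ** 2)
```

Because the boosted combiner can learn to correct a systematic bias that both averaging schemes inherit, it outperforms the simple average on this toy series.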
Procedia PDF Downloads 339
379 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data
Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone
Abstract:
The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability to distinguish between controls and patients using mean signals extracted from ICA components corresponding to 15 well-known networks. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria, and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. The estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR images. All rsFMRIs were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach, with the number of components set to 21. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in the network), using the R language.
The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM, to obtain a rank of the most predictive variables. Thus, we built two new classifiers only on the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest value of lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the network that best discriminated between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum and working memory networks. These findings, in accordance with the early manifestation of motor/sensorial deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine
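The Gini-based feature ranking described above can be sketched as follows (in Python rather than the R used in the study). The data are synthetic: 37 subjects, 15 features, with class signal injected only into feature 0 to mimic a single discriminant network:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# synthetic stand-in: 37 subjects x 15 network mean signals, with one
# informative feature mimicking the sensori-motor I network (an assumption)
rng = np.random.default_rng(3)
y = np.array([0] * 19 + [1] * 18)          # 19 controls, 18 early-MS patients
X = rng.normal(0.0, 1.0, (37, 15))
X[:, 0] += 2.5 * y                         # class signal only in feature 0

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top_feature = int(np.argmax(rf.feature_importances_))   # Gini importance ranking
```

Refitting a classifier on only the top-ranked feature, as the study does, then tests whether that single network carries most of the discriminative information.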
Procedia PDF Downloads 240
378 Effect of Sex and Breed on Live Weight of Adult Iranian Pigeons
Authors: Sepehr Moradi, Mehdi Asadi Rad
Abstract:
This study evaluates the live weight of adult pigeons to investigate the effects of sex, breed, their interaction, and some auxiliary variables in four breeds: Kabood, Tizpar, Parvazy, and Namebar. In this paper, 152 pigeons, as 76 male-female pairs of equal age, were studied randomly. The birds were weighed on a scale with one-gram precision. Software was used for statistical analysis. The mean live weights of adult male and female pigeons in the four breeds (Kabood, Tizpar, Parvazy and Namebar, with (15, 20, 20, 21) and (20, 21, 18, 17) records) were (530±56, 388.75±32, 392±34, 552±48) and (446±34, 342±32, 341±46, 457±57) g, respectively. The difference in adult live weight between males and females was significant at the 1% level (P < 0.01). Differences in live weight among adult males were significant at the 5% level (P < 0.05). Differences in live weight of adult females between the Kabood, Parvazy and Tizpar breeds were significant at the 5% level (P < 0.05), but the mean live weights of the Kabood and Namebar breeds, and of the Parvazy and Tizpar breeds, did not differ significantly. The results showed that the highest and lowest mean live weights belonged to males of the Namebar breed and females of the Parvazy breed, respectively.
Keywords: Iranian native pigeons, adult weight, live weight, adult pigeons
Procedia PDF Downloads 202
377 Urban Big Data: An Experimental Approach to Building-Value Estimation Using Web-Based Data
Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin
Abstract:
Real-estate value estimation is difficult for laymen and is usually performed by specialists. This paper presents an automated estimation process, based on big data and machine-learning technology, that calculates the influence of building conditions on real-estate price measurement. The present study analyzed actual building sales sample data for Nonhyeon-dong, Gangnam-gu, Seoul, Korea, measuring the major influencing factors among the various building conditions. Following that analysis, a prediction model was established and applied using RapidMiner Studio, a graphical user interface (GUI)-based tool for the derivation of machine-learning prototypes. The prediction model is formulated by reference to previous examples. When new examples are applied, it analyses and predicts accordingly. The analysis process discerns the crucial factors affecting price increases by calculating weighted values. The model was verified, and its accuracy determined, by comparing its predicted values with actual price increases.
Keywords: apartment complex, big data, life-cycle building value analysis, machine learning
Procedia PDF Downloads 374
376 Frequent Pattern Mining for Digenic Human Traits
Authors: Atsuko Okazaki, Jurg Ott
Abstract:
Some genetic diseases (‘digenic traits’) are due to the interaction between two DNA variants. For example, certain forms of Retinitis Pigmentosa (a genetic form of blindness) occur in the presence of two mutant variants, one in the ROM1 gene and one in the RDS gene, while the occurrence of only one of these mutant variants leads to a completely normal phenotype. Detecting such digenic traits by genetic methods is difficult. A common approach to finding disease-causing variants is to compare hundreds of thousands of variants between individuals with a trait (cases) and those without the trait (controls). Such genome-wide association studies (GWASs) have been very successful but hinge on genetic effects of single variants; that is, there should be a difference in allele or genotype frequencies between cases and controls at a disease-causing variant. Frequent pattern mining (FPM) methods offer an avenue for detecting digenic traits even in the absence of single-variant effects. The idea is to enumerate pairs of genotypes (genotype patterns), with each of the two genotypes originating from different variants that may be located at very different genomic positions. What is needed is for genotype patterns to be significantly more common in cases than in controls. Let Y = 2 refer to cases and Y = 1 to controls, with X denoting a specific genotype pattern. We are seeking association rules, ‘X → Y’, with high confidence, P(Y = 2|X), significantly higher than the proportion of cases, P(Y = 2), in the study. Clearly, generally available FPM methods are very suitable for detecting disease-associated genotype patterns. We use fpgrowth as the basic FPM algorithm and built a framework around it to enumerate high-frequency digenic genotype patterns and to evaluate their statistical significance by permutation analysis. Application to a published dataset on opioid dependence furnished results that could not be found with classical GWAS methodology.
There were 143 cases and 153 healthy controls, each genotyped for 82 variants in eight genes of the opioid system. The aim was to find out whether any of these variants were disease-associated. The single-variant analysis did not lead to significant results. Application of our FPM implementation resulted in one significant (p < 0.01) genotype pattern, with both genotypes in the pattern being heterozygous and originating from two variants on different chromosomes. This pattern occurred in 14 cases and none of the controls. Thus, the pattern seems quite specific to this form of substance abuse and is also rather predictive of disease. An algorithm called Multifactor Dimensionality Reduction (MDR) was developed some 20 years ago and has been in use in human genetics ever since. That algorithm and ours share some properties, but they are also very different in other respects. The main difference is that our algorithm focuses on patterns of genotypes, while the main object of inference in MDR is the 3 × 3 table of genotypes at two variants.
Keywords: digenic traits, DNA variants, epistasis, statistical genetics
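The rule search 'X → Y' described above can be sketched by brute force (a stand-in for fpgrowth, without the permutation step): enumerate all genotype pairs across variant pairs and keep those whose confidence P(case | pattern) exceeds a threshold. The toy genotypes and thresholds below are assumptions:

```python
from itertools import combinations

def digenic_patterns(genos, labels, min_conf=0.9, min_support=5):
    """Enumerate genotype pairs (variant i = gi, variant j = gj) and keep
    those whose confidence P(case | pattern) is at least min_conf.

    genos: one tuple of genotypes (0/1/2 per variant) per subject;
    labels: 1 for case, 0 for control. Brute-force stand-in for fpgrowth."""
    n_var = len(genos[0])
    rules = []
    for i, j in combinations(range(n_var), 2):
        counts = {}                                 # (gi, gj) -> (total, cases)
        for g, lab in zip(genos, labels):
            key = (g[i], g[j])
            tot, cas = counts.get(key, (0, 0))
            counts[key] = (tot + 1, cas + lab)
        for (gi, gj), (tot, cas) in counts.items():
            if tot >= min_support and cas / tot >= min_conf:
                rules.append(((i, gi), (j, gj), cas / tot))
    return rules

# toy data: the pattern (variant 0 het, variant 1 het) occurs only in cases
cases = [(1, 1, 0)] * 6 + [(0, 2, 1)] * 4
controls = [(0, 1, 2)] * 5 + [(2, 0, 1)] * 5
rules = digenic_patterns(cases + controls, [1] * 10 + [0] * 10)
```

In the real framework, the significance of each surviving rule would then be assessed by permuting the case/control labels.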
Procedia PDF Downloads 123
375 Dynamic Correlations and Portfolio Optimization between Islamic and Conventional Equity Indexes: A Vine Copula-Based Approach
Authors: Imen Dhaou
Abstract:
This study examines the conditional Value at Risk by applying a GJR-GARCH-EVT-copula model and finds the optimal portfolio for eight Dow Jones Islamic-conventional pairs. Our methodology consists of modeling the data by a bivariate GJR-GARCH model, from which we extract the filtered residuals and then apply the peaks-over-threshold (POT) model to fit the residual tails in order to model the marginal distributions. After that, we use pair-copulas to model the portfolio risk dependence structure. Finally, with Monte Carlo simulations, we estimate the Value at Risk (VaR) and the conditional Value at Risk (CVaR). The empirical results show the VaR and CVaR values for an equally weighted portfolio of Dow Jones Islamic-conventional pairs. In sum, we found that the optimal investment focuses on the Islamic-conventional US market index pair because of its high investment proportion, whereas all other index pairs have low investment proportions. These results have practical implications for portfolio managers and policymakers concerning optimal asset allocation, portfolio risk management and the diversification advantages of these markets.
Keywords: CVaR, Dow Jones Islamic index, GJR-GARCH-EVT-pair copula, portfolio optimization
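The final Monte Carlo step, estimating VaR and CVaR from simulated portfolio returns, can be sketched as below. Plain Gaussian returns stand in for draws from the fitted GJR-GARCH-EVT-copula model, and the return parameters are assumptions:

```python
import numpy as np

def var_cvar(returns, alpha=0.95):
    """Value at Risk and conditional VaR (expected shortfall) at level alpha
    from simulated returns, reported as positive loss figures."""
    losses = -np.asarray(returns)
    var = np.quantile(losses, alpha)              # loss exceeded 5% of the time
    cvar = losses[losses >= var].mean()           # mean loss beyond the VaR
    return var, cvar

# Monte Carlo stand-in: Gaussian returns for an equally weighted pair
# (the paper instead simulates from the fitted GJR-GARCH-EVT-copula model)
rng = np.random.default_rng(4)
r_islamic = rng.normal(0.0003, 0.010, 100_000)
r_conventional = rng.normal(0.0002, 0.012, 100_000)
port = 0.5 * r_islamic + 0.5 * r_conventional
var95, cvar95 = var_cvar(port, alpha=0.95)
```

CVaR is always at least as large as VaR at the same level, since it averages the losses in the tail beyond the VaR threshold.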
Procedia PDF Downloads 256
374 Determination of Biomolecular Interactions Using Microscale Thermophoresis
Authors: Lynn Lehmann, Dinorah Leyva, Ana Lazic, Stefan Duhr, Philipp Baaske
Abstract:
Characterization of biomolecular interactions, such as protein-protein, protein-nucleic acid or protein-small molecule, provides critical insights into cellular processes and is essential for the development of drug diagnostics and therapeutics. Here we present a novel, label-free, and tether-free technology to analyze picomolar-to-millimolar affinities of biomolecular interactions by Microscale Thermophoresis (MST). The entropy of the hydration shell surrounding molecules determines thermophoretic movement. MST exploits this principle by measuring interactions using optically generated temperature gradients. MST detects changes in the size, charge and hydration shell of molecules and measures biomolecular interactions under close-to-native conditions: immobilization-free and in bioliquids of choice, including cell lysates and blood serum. Thus, MST measures interactions without laborious sample purification. We demonstrate how MST determines the picomolar affinities of antibody::antigen interactions, and protein::protein interactions measured directly from cell lysates. MST assays are highly adaptable to fit the diverse requirements of different and complex biomolecules. NanoTemper's unique technology is ideal for studies requiring flexibility and sensitivity at the experimental scale, making MST suitable for basic research investigations and pharmaceutical applications.
Keywords: biochemistry, biophysics, molecular interactions, quantitative techniques
Procedia PDF Downloads 526
373 Analysis of Weather Variability Impact on Yields of Some Crops in Southwest, Nigeria
Authors: Olumuyiwa Idowu Ojo, Oluwatobi Peter Olowo
Abstract:
The study developed a Geographical Information Systems (GIS) database and mapped inter-annual changes in the crop yields of cassava, cowpea, maize, rice, melon and yam as a response to inter-annual rainfall and temperature variability in Southwest Nigeria. The aim of this project is a comparative analysis of the impact of weather variability on the yields of six crops (rice, melon, yam, cassava, maize and cowpea) in the South Western states of Nigeria (Oyo, Osun, Ekiti, Ondo, Ogun and Lagos) from 1991–2007. The data were imported and analysed in the ArcGIS 9.3 software environment. The various parameters (temperature, rainfall, crop yields) were interpolated using the kriging method. The results generated through interpolation were clipped to the study area. Geographically weighted regression was chosen from the Spatial Statistics toolbox in ArcGIS 9.3 to analyse and predict the relationships between temperature, rainfall and the different crops (cowpea, maize, rice, melon, yam, and cassava).
Keywords: GIS, crop yields, comparative analysis, temperature, rainfall, weather variability
Procedia PDF Downloads 326372 Comparison of Live Weight of Pure and Mixed Races Tizpar 30-Day Squabs
Authors: Sepehr Moradi, Mehdi Asadi Rad
Abstract:
The aim of this study is to evaluate and compare the live weight of pure and mixed race Tizpar 30-day pigeons with respect to their sex, race, and some auxiliary variables. In this paper, 70 pigeons, as 35 male and female pairs of equal age, are studied randomly. A natural incubation was done from each pair. All produced squabs were weighed at 30 days of age, before and after fasting, using a scale with one gram precision. A covariance analysis was used since there were many auxiliary variables and unequal observations. SAS software was used for the statistical analysis. The mean live weight in the pure race (Tizpar-Tizpar), with 12 records, was 182.3±60.9 g, while in the mixed races of Tizpar-Kabood, Tizpar-Parvazy, Tizpar-Namebar, Kabood-Tizpar, Namebar-Tizpar, and Parvazy-Tizpar, with 10, 10, 8, 6, 12, and 12 records, it was 114.3±71.6, 210.6±71.7, 353.2±86, 520.8±81.5, 288.3±65.6, and 382.6±70.4 g, respectively. Effects of sex, race and some auxiliary variables were also significant at the 1% level (P < 0.01). The difference in 30-day live weight of the Tizpar-Tizpar race from the Tizpar-Namebar and Parvazy-Tizpar mixed races was significant at the 5% level (P < 0.05), and from the Kabood-Tizpar mixed race at the 1% level (P < 0.01), but not from the Tizpar-Kabood, Namebar-Tizpar and Tizpar-Parvazy mixed races. The results showed that the highest and lowest live weights belonged to Kabood-Tizpar and Tizpar-Kabood.Keywords: squabs, Tizpar race, 30-day live weight, pigeons
Procedia PDF Downloads 177371 An Automated Procedure for Estimating the Glomerular Filtration Rate and Determining the Normality or Abnormality of the Kidney Stages Using an Artificial Neural Network
Authors: Hossain A., Chowdhury S. I.
Abstract:
Introduction: The use of a gamma camera is a standard procedure in nuclear medicine facilities or hospitals to diagnose chronic kidney disease (CKD), but the gamma camera does not precisely stage the disease. The authors sought to determine whether an artificial neural network (ANN) could be used to classify CKD as being in a normal or abnormal stage based on glomerular filtration rate (GFR) values. Method: The 250 kidney patients (training 188, testing 62) who underwent an ultrasonography test for renal diagnosis in our nuclear medicine center were scanned using a gamma camera. Before the scanning procedure, the patients received an injection of ⁹⁹ᵐTc-DTPA. The gamma camera computes the pre- and post-syringe radioactive counts after the injection has been pushed into the patient's vein. The artificial neural network uses the softmax function with cross-entropy loss in the output layer to determine whether CKD is normal or abnormal based on the GFR value. Results: The proposed ANN model had a 99.20% accuracy according to K-fold cross-validation. The sensitivity and specificity were 99.10% and 99.20%, respectively. The AUC was 0.994. Conclusion: The proposed model can distinguish between normal and abnormal stages of CKD by using an artificial neural network. The gamma camera could be upgraded to diagnose normal or abnormal stages of CKD with an appropriate GFR value following the clinical application of the proposed model.Keywords: artificial neural network, glomerular filtration rate, stages of the kidney, gamma camera
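The softmax-with-cross-entropy output layer described above can be sketched as follows. The weights here are hypothetical placeholders for illustration; the actual model was trained on the 250-patient dataset:

```python
import math

def softmax(logits):
    # numerically stable softmax: subtract the max before exponentiating
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, label):
    # loss minimized during training; label is the index of the true class
    return -math.log(probs[label] + 1e-12)

def classify_gfr(gfr, w=(-0.1, 0.1), b=(6.0, -6.0)):
    # hypothetical 2-unit output layer fed by a scalar GFR feature;
    # class 0 = "abnormal", class 1 = "normal"
    logits = [w[k] * gfr + b[k] for k in range(2)]
    p = softmax(logits)
    return ("normal" if p[1] > p[0] else "abnormal"), p
```

The softmax converts the two output logits into class probabilities that sum to one, and the cross-entropy loss penalizes low probability assigned to the true stage.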
Procedia PDF Downloads 103370 Intelligent Staff Scheduling: Optimizing the Solver with Tabu Search
Authors: Yu-Ping Chiu, Dung-Ying Lin
Abstract:
Traditional staff scheduling methods, relying on employee experience, often lead to inefficiencies and resource waste. The challenges of transferring scheduling expertise and adapting to changing labor regulations further complicate this process. Manual approaches become increasingly impractical as companies accumulate complex scheduling rules over time. This study proposes an algorithmic optimization approach to address these issues, aiming to expedite scheduling while ensuring strict compliance with labor regulations and company policies. The method focuses on generating optimal schedules that minimize weighted company objectives within a compressed timeframe. Recognizing the limitations of conventional commercial software in modeling and solving complex real-world scheduling problems efficiently, this research employs Tabu Search with both long-term and short-term memory structures. The study will present numerical results and managerial insights to demonstrate the effectiveness of this approach in achieving intelligent and efficient staff scheduling.Keywords: intelligent memory structures, mixed integer programming, meta-heuristics, staff scheduling problem, tabu search
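The tabu search skeleton described above, with a short-term tabu list and a long-term frequency memory, can be sketched on a toy staffing model. The cost function here is a hypothetical stand-in; the study's real objective encodes weighted company rules and labor regulations:

```python
import random

def tabu_search(n_staff, n_shifts, demand, iters=200, tenure=5, seed=0):
    # toy objective: squared deviation of per-shift headcount from demand
    rng = random.Random(seed)
    sol = [rng.randrange(n_shifts) for _ in range(n_staff)]
    cost = lambda s: sum((s.count(k) - demand[k]) ** 2 for k in range(n_shifts))
    best, best_cost = sol[:], cost(sol)
    tabu = {}   # short-term memory: forbidden (staff, shift) -> expiry iteration
    freq = {}   # long-term memory: move frequencies, used as a small penalty
    for it in range(iters):
        candidates = []
        for i in range(n_staff):
            for k in range(n_shifts):
                if k == sol[i]:
                    continue
                trial = sol[:]
                trial[i] = k
                c = cost(trial) + 0.01 * freq.get((i, k), 0)
                # aspiration criterion: a tabu move is kept if it beats the best
                if tabu.get((i, k), -1) >= it and cost(trial) >= best_cost:
                    continue
                candidates.append((c, i, k, trial))
        if not candidates:
            break
        _, i, k, trial = min(candidates, key=lambda t: t[0])
        tabu[(i, sol[i])] = it + tenure   # forbid moving i back to its old shift
        freq[(i, k)] = freq.get((i, k), 0) + 1
        sol = trial
        if cost(sol) < best_cost:
            best, best_cost = sol[:], cost(sol)
    return best, best_cost
```

The short-term tabu list prevents immediate cycling back to recent assignments, while the frequency penalty gently diversifies the search toward less-visited moves.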
Procedia PDF Downloads 27369 Moisture Absorption Analysis of LLDPE-NR Nanocomposite for HV Insulation
Authors: M. S. Kamarulzaman, N. A. Muhamad, N. A. M. Jamail, M. A. M. Piah, N. F. Kasri
Abstract:
Insulation for high voltage applications that has been in service for a very long time is subjected to several types of degradation. The degradation can lead to premature breakdown, and replacing the cable is highly costly. Thus, nanocomposite materials have received serious attention in many research studies due to their ability to enhance electrical performance through the addition of nanofillers. In this paper, the water absorption of Linear Low-Density Polyethylene (LLDPE) with different amounts of nanofiller added is studied. This study is necessary since electrical apparatus such as cable insulation is dominantly used in high voltage applications, and insulation continuously exposed to an uncontrolled environment may suffer a degradation process. Three types of nanofillers were used in this study: silicon dioxide (SiO2), titanium dioxide (TiO2) and montmorillonite (MMT). The percentage of water absorption was measured by weighing the samples on high-precision scales over an absorption process of up to 92 days. Experimental results demonstrate that SiO2 absorbs less water than the other fillers, while MMT, which has hydrophilic properties, absorbs more water than the other samples.Keywords: nano composite, nano filler, water absorption, hydrophilic properties
Procedia PDF Downloads 356368 Road Maintenance Management Decision System Using Multi-Criteria and Geographical Information System for Takoradi Roads, Ghana
Authors: Eric Mensah, Carlos Mensah
Abstract:
The road maintenance backlogs created as a result of deferred maintenance, especially in developing countries, have caused considerable deterioration of many road assets. This is usually due to difficulties encountered in selecting and prioritising maintainable roads based on objective criteria rather than political or other less important criteria. In order to ensure judicious use of limited resources for road maintenance, five factors were identified as the most important criteria for road management within the study area, based on the judgements of 40 experts. The results were further used to develop weightings using the Multi-Criteria Decision Process (MCDP) to analyse and select road alternatives according to the maintenance goal. Using Geographical Information Systems (GIS), maintainable roads were grouped using Jenks natural breaks to allow further prioritisation in order of importance for display on a dashboard of maps, charts, and tables. This reduces the problem of subjective road selection, thereby reducing wastage of resources and easing the maintenance process through an objectively organised spatial decision support system.Keywords: decision support, geographical information systems, multi-criteria decision process, weighted sum
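The weighted-sum prioritisation step can be sketched as below. The criterion names, scores and weights are hypothetical; the study's actual weights came from the 40-expert MCDP exercise:

```python
def weighted_sum_rank(roads, weights):
    # roads: name -> list of criterion scores normalised to 0..1
    # (higher = more urgent); weights: one weight per criterion, summing to 1.
    # Returns (name, score) pairs sorted by descending weighted score.
    score = lambda vals: sum(w * v for w, v in zip(weights, vals))
    return sorted(((name, score(vals)) for name, vals in roads.items()),
                  key=lambda t: t[1], reverse=True)
```

For example, with five criteria weighted 0.3, 0.25, 0.2, 0.15 and 0.1, a road scoring higher on the heavily weighted criteria rises to the top of the maintenance list.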
Procedia PDF Downloads 377367 Developing Fire Risk Factors for Existing Small-Scale Hospitals
Authors: C. L. Wu, W. W. Tseng
Abstract:
Since the National Health Insurance (NHI) system was introduced in Taiwan in 2000, there have been several problems in transformed small-scale hospitals, such as mobility of patients, shortage of nursing staff, medical pipelines breaking fire compartments and insufficient fire protection systems. Due to the shrinking funding scale and the aging society, fire safety in small-scale hospitals has recently given cause for concern. The aim of this study is to determine a fire risk index for small-scale hospitals through a systematic approach. The selection of fire safety mitigation methods can be regarded as a multi-attribute decision-making process which must be validated by expert groups. First, safety-related factors were identified and selected, and evaluation criteria were identified through literature reviews and an expert group. Secondly, the Fuzzy Analytic Hierarchy Process method was applied to ascertain a weighted value which enables rating of the importance of each of the selected factors. Overall, sprinkler type and compartmentation are the most crucial indices in mitigating fire; that is to say, structural approaches play an important role in decreasing losses in fire events.Keywords: Fuzzy Delphi Method, fuzzy analytic hierarchy process, risk assessment, fire events
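The weighting step of an analytic hierarchy process can be illustrated with a crisp geometric-mean sketch. The pairwise judgements below are hypothetical, and the fuzzy variant used in the study additionally aggregates fuzzy judgements from the expert group before defuzzification:

```python
import math

def ahp_weights(pairwise):
    # geometric-mean approximation of the principal-eigenvector weights
    # pairwise[i][j] = how much more important criterion i is than j
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]
```

For instance, with three criteria ordered (sprinkler type, compartmentation, staffing; names hypothetical) and judgements [[1, 2, 4], [1/2, 1, 3], [1/4, 1/3, 1]], sprinkler type receives the largest weight, mirroring the abstract's finding that it is among the most crucial indices.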
Procedia PDF Downloads 447366 Thermal Instability in Rivlin-Ericksen Elastico-Viscous Nanofluid with Connective Boundary Condition: Effect of Vertical Throughflow
Authors: Shivani Saini
Abstract:
The effect of vertical throughflow on the onset of convection in a Rivlin-Ericksen elastico-viscous nanofluid with a convective boundary condition is investigated. The flow is simulated with the modified Darcy model under the assumption that the nanoparticle volume fraction is not actively managed on the boundaries. The heat conservation equation is formulated by introducing the convective term of nanoparticle flux. A linear stability analysis based upon normal modes is performed, and an approximate solution of the eigenvalue problem is obtained using the Galerkin weighted residual method. The dependence of the Rayleigh number on various viscous and nanofluid parameters is investigated. It is found that throughflow and nanofluid parameters hasten the convection, while the capacity ratio, kinematic viscoelasticity, and Vadasz number do not govern the stationary convection. Using the convective component of nanoparticle flux, the critical wave number is a function of the nanofluid parameters as well as the throughflow parameter. The obtained solution provides important physical insight into the behavior of this model.Keywords: Darcy model, nanofluid, porous layer, throughflow
Procedia PDF Downloads 137365 A Weighted Group EI Incorporating Role Information for More Representative Group EI Measurement
Authors: Siyu Wang, Anthony Ward
Abstract:
Emotional intelligence (EI) is a well-established personal characteristic. It has been viewed as a critical factor which can influence an individual's academic achievement, ability to work and potential to succeed. When working in a group, EI is fundamentally connected to the group members' interaction and ability to work as a team. The ability of a group member to intelligently perceive and understand own emotions (Intrapersonal EI), to intelligently perceive and understand other members' emotions (Interpersonal EI), and to intelligently perceive and understand emotions between different groups (Cross-boundary EI) can be considered as Group emotional intelligence (Group EI). In this research, a more representative Group EI measurement approach, which incorporates information on the composition of a group and an individual's role in that group, is proposed. To demonstrate that this approach is more representative, this study adopts a multi-method research design, involving a combination of both qualitative and quantitative techniques, to establish a metric of Group EI. From the results, it can be concluded that by introducing the weight coefficient of each group member's contribution to group work into the measurement of Group EI, Group EI will be more representative and more capable of capturing what happens during teamwork than previous approaches.Keywords: case study, emotional intelligence, group EI, multi-method research
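The role-weighted aggregation of member EI scores can be sketched as follows. The three dimension names follow the abstract, while the role weights and the score scale are hypothetical:

```python
def group_ei(members):
    # members: list of (role_weight, scores) where scores is a dict with
    # "intra", "inter" and "cross" EI dimensions for that member.
    # Returns the role-weighted mean per dimension plus an overall mean.
    total_w = sum(w for w, _ in members)
    dims = ("intra", "inter", "cross")
    agg = {d: sum(w * s[d] for w, s in members) / total_w for d in dims}
    agg["overall"] = sum(agg[d] for d in dims) / len(dims)
    return agg
```

A member with a central role (larger weight) thus pulls the group score toward their individual profile, which is the intuition behind weighting by role rather than taking a plain average.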
Procedia PDF Downloads 126364 Exergy Based Analysis of Parabolic Trough Collector Using Twisted-Tape Inserts
Authors: Atwari Rawani, Suresh Prasad Sharma, K. D. P. Singh
Abstract:
In this paper, an analytical investigation based on energy and exergy analysis of a parabolic trough collector (PTC) with alternate clockwise and counter-clockwise twisted tape inserts in the absorber tube is presented. For fully developed flow under quasi-steady state conditions, energy equations have been developed in order to analyze the rise in fluid temperature, thermal efficiency, entropy generation and exergy efficiency, and the effect of system and operating parameters on performance has been studied. A computer program based on the mathematical models is developed in the C++ language to estimate the temperature rise of the fluid for evaluation of performance under specified conditions. For the numerical simulations, four different twist ratios (x = 2, 3, 4, 5) and mass flow rates from 0.06 kg/s to 0.16 kg/s, covering the Reynolds number range of 3000 to 9000, are considered. This study shows that twisted tape inserts show great promise for enhancing the performance of the PTC. Results show that for x = 1, the Nusselt number/heat transfer coefficient is found to be 3.528 and 3.008 times that of the plain absorber of the PTC at mass flow rates of 0.06 kg/s and 0.16 kg/s respectively, while the corresponding enhancement in thermal efficiency is 12.57% and 5.065% respectively. Also, the exergy efficiency has been found to be 10.61% and 10.97%, and the enhancement factor 1.135 and 1.048, for the same set of conditions.Keywords: exergy efficiency, twisted tape ratio, turbulent flow, useful heat gain
Procedia PDF Downloads 174363 Harnessing Microorganism Having Potential for Biotreatment of Wastewater
Authors: Haruna Saidu, Sulaiman Mohammed, Abdulkarim Ali Deba, Shaza Eva Mohamad
Abstract:
Determining the diversity of the indigenous microorganisms in Palm Oil Mill Effluent (POME) could allow their wider application for the treatment of recalcitrant agro-based wastewater discharged into the environment. Many research studies have mainly determined the efficiency of microorganisms or their co-cultivation with microalgae for enhanced treatment of wastewater, suggesting a limited emphasis on the application of microbial diversity. In this study, microorganisms were cultured in POME for a period of 15 days using microalgae as a source of carbon. Pyrosequencing analysis reveals a greater diversity of the microbial community in the 20% (v/v) culture than in the control experiment. Most of the bacterial species identified in POME belong to the families Bacillaceae, Paenibacillaceae, Enterococcaceae, Clostridiaceae, Peptostreptococcaceae, Caulobacteraceae, Enterobacteriaceae, Moraxellaceae, and Pseudomonadaceae. Alpha (α) diversity analysis reveals a high microbial community composition of 52 in both samples. The beta (β) diversity index indicated the occurrence of similar species of microorganisms in the unweighted UniFrac rather than the weighted UniFrac analysis of both samples. It is therefore suggested that bacteria found in these families could have a potential for synergistic treatment of high-strength wastewater generated from the palm oil industry.Keywords: diversity, microorganism, wastewater, pyrosequencing, palm oil mill effluent
Procedia PDF Downloads 39362 Multi-Objective Electric Vehicle Charge Coordination for Economic Network Management under Uncertainty
Authors: Ridoy Das, Myriam Neaimeh, Yue Wang, Ghanim Putrus
Abstract:
Electric vehicles are a popular transportation medium renowned for their potential environmental benefits. However, large and uncontrolled charging volumes can negatively impact distribution networks. Smart charging is widely recognized as an efficient solution to achieve both improved renewable energy integration and grid relief. Nevertheless, different decision-makers may pursue diverse and conflicting objectives. In this context, this paper proposes a multi-objective optimization framework to control electric vehicle charging to achieve both energy cost reduction and peak shaving. A weighted-sum method is developed due to its intuitiveness and efficiency. Monte Carlo simulations are implemented to investigate the impact of uncertain electric vehicle driving patterns and provide decision-makers with a robust outcome in terms of prospective cost and network loading. The results demonstrate that there is a conflict between energy cost efficiency and peak shaving, and the decision-makers need to make a collaborative decision.Keywords: electric vehicles, multi-objective optimization, uncertainty, mixed integer linear programming
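The weighted-sum scalarization of the two objectives, averaged over Monte Carlo draws of an uncertain charging need, can be sketched on a toy single-vehicle model. The hourly price and load profiles, the duration distribution and the weight λ are all hypothetical; the paper's actual framework is a mixed integer linear program over a fleet:

```python
import random

def best_charge_start(prices, base_load, ev_kw, lam, trials=200, seed=1):
    # weighted-sum objective: lam * energy cost + (1 - lam) * peak load,
    # averaged over Monte Carlo draws of the uncertain charging duration
    rng = random.Random(seed)
    hours = len(prices)
    durations = [rng.choice([3, 4, 5]) for _ in range(trials)]
    best_start, best_val = None, float("inf")
    for start in range(hours):
        if start + max(durations) > hours:
            continue   # require the longest session to finish within the day
        vals = []
        for d in durations:
            load = base_load[:]
            cost = 0.0
            for h in range(start, start + d):
                load[h] += ev_kw
                cost += prices[h] * ev_kw
            vals.append(lam * cost + (1 - lam) * max(load))
        avg = sum(vals) / trials
        if avg < best_val:
            best_start, best_val = start, avg
    return best_start
```

Sweeping λ from 0 to 1 traces the trade-off between cost efficiency (λ → 1) and peak shaving (λ → 0), which is the conflict the abstract reports.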
Procedia PDF Downloads 179361 Multiple Linear Regression for Rapid Estimation of Subsurface Resistivity from Apparent Resistivity Measurements
Authors: Sabiu Bala Muhammad, Rosli Saad
Abstract:
Multiple linear regression (MLR) models for fast estimation of true subsurface resistivity from apparent resistivity field measurements are developed and assessed in this study. The parameters investigated were apparent resistivity (ρₐ), horizontal location (X) and depth (Z) of measurement as the independent variables, and true resistivity (ρₜ) as the dependent variable. To achieve linearity in both resistivity variables, the datasets were first transformed into the logarithmic domain, following diagnostic checks of the normality of the dependent variable and of heteroscedasticity to ensure accurate models. Four MLR models were developed based on hierarchical combinations of the independent variables. The generated MLR coefficients were applied to another dataset to estimate ρₜ values for validation. Contours of the estimated ρₜ values were plotted and compared to the observed data plots, using the same colour scale and blanking, for visual assessment. The accuracy of the models was assessed using the coefficient of determination (R²), standard error (SE) and weighted mean absolute percentage error (wMAPE). It is concluded that the MLR models can estimate ρₜ with a high level of accuracy.Keywords: apparent resistivity, depth, horizontal location, multiple linear regression, true resistivity
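A log-domain MLR of this kind can be sketched with a pure-Python normal-equations solve. The variable layout below (intercept, log₁₀ ρₐ, X, Z) is an assumption standing in for the paper's four hierarchical models, which are not reproduced here:

```python
import math

def fit_mlr(rows):
    # rows: (rho_a, X, Z, rho_t);
    # model: log10(rho_t) = b0 + b1*log10(rho_a) + b2*X + b3*Z
    A, y = [], []
    for ra, x, z, rt in rows:
        A.append([1.0, math.log10(ra), x, z])
        y.append(math.log10(rt))
    n = len(A[0])
    # least squares via normal equations AtA b = Aty, Gaussian elimination
    AtA = [[sum(r[i] * r[j] for r in A) for j in range(n)] for i in range(n)]
    Aty = [sum(r[i] * yi for r, yi in zip(A, y)) for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(AtA[k][i]))  # partial pivot
        AtA[i], AtA[p] = AtA[p], AtA[i]
        Aty[i], Aty[p] = Aty[p], Aty[i]
        for k in range(i + 1, n):
            f = AtA[k][i] / AtA[i][i]
            for j in range(i, n):
                AtA[k][j] -= f * AtA[i][j]
            Aty[k] -= f * Aty[i]
    b = [0.0] * n
    for i in range(n - 1, -1, -1):
        b[i] = (Aty[i] - sum(AtA[i][j] * b[j] for j in range(i + 1, n))) / AtA[i][i]
    return b

def predict(b, ra, x, z):
    # back-transform from the log domain to resistivity units
    return 10 ** (b[0] + b[1] * math.log10(ra) + b[2] * x + b[3] * z)
```

Because both resistivity variables are fitted in log₁₀ space, the back-transformed prediction is multiplicative in ρₐ, which matches the linearization rationale described above.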
Procedia PDF Downloads 276360 Exergy Analysis of Poultry Litter-to-Energy Production by the Advanced Combustion System
Authors: Samuel Oludayo Alamu, Seong Lee
Abstract:
The need to generate energy from biomass in an efficient way, and to maximize the yield of total energy from the thermal conversion process, has been a major concern for researchers. A holistic approach which combines the first law of thermodynamics (FLT) and the second law of thermodynamics (SLT) is required for conducting an effective assessment of an energy plant, since FLT analysis alone fails to identify the quality of the dissipated energy and how much work potential is available. The overall purpose of this study is to investigate the exergy analysis of direct combustion of poultry waste being converted to energy, together with an environmental assessment of the conversion processes, in order to maximize thermal efficiency. The exergy analysis around the shell and tube heat exchanger (STHE) was investigated primarily by varying the operating parameters for different tube shapes and flow directions, and an exergy model was obtained from estimations of the higher heating value and standard entropy of poultry waste based on its elemental composition. The STHE was designed and fabricated by the Lee Research Group at Morgan State University. The analysis conducted on the STHE, using the flue gas temperatures entering and exiting, shows that only about one-third of the energy input to the STHE was available to do work, with an overall efficiency of 13.8%, while a huge amount was lost to the surroundings. By recirculating the flue gas, the exergy efficiency of the combustion system can be maximized with a greater reduction in the amount of exergy loss.Keywords: exergy analysis, shell and tube heat exchanger, thermodynamics, combustion system, thermal efficiency
Procedia PDF Downloads 109