Search results for: quadratic discriminant
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 337

157 Parasitological Study and Its Role in Fisheries Management and Stock Assessment of Boops boops (Linnaeus, 1758) along the Tunisian Coast

Authors: I. Chebbi, L. Boudaya, L. Neifar

Abstract:

The bogue, Boops boops, is an economically important fishery resource commonly captured in the Mediterranean, and its parasite fauna has been used as a tool to differentiate between stocks along the Tunisian coast, an approach widely accepted in fisheries management. In this study, a total of 90 fish from three localities off Tunisia (Kelibia, Mahdia, and Zarzis) were examined. Fifteen parasite species totaling 1270 individuals were recovered from B. boops, of which ten were used as biological tags. Based on the Mahalanobis distance, each parasite species contributes substantially to the discrimination between groups. Tetraphyllidea larvae are the most influential parasites in determining the position of samples from Kelibia, whereas monogenean species and Hysterothylacium sp. are the most important for samples from Mahdia. Specimens from Zarzis are characterized by the absence of the four monogenean species and of the Tetraphyllidea larvae. Parasites allocated B. boops individuals to their communities of origin with an accuracy of 83.3%. These results were corroborated by the discriminant analyses, which highlighted the presence of three stocks and confirmed that the parasitological method can be considered a reliable tool for discriminating among B. boops stocks in Tunisian waters.
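
As an aside, the classification step described above can be sketched in a few lines; the parasite counts below are hypothetical placeholders rather than the study's data, and the linear discriminant could be swapped for its quadratic counterpart.

# Minimal sketch (not the authors' code): allocating fish to localities from
# parasite abundances with discriminant analysis on hypothetical count data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.poisson(lam=2.0, size=(90, 10)).astype(float)   # 90 fish x 10 parasite tags
y = np.repeat(["Kelibia", "Mahdia", "Zarzis"], 30)      # locality labels

lda = LinearDiscriminantAnalysis()                      # class means + pooled covariance
print("cross-validated allocation accuracy:", cross_val_score(lda, X, y, cv=5).mean())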

Keywords: biological marker, Boops boops, parasite, population structure

Procedia PDF Downloads 104
156 Predicting Relative Performance of Sector Exchange Traded Funds Using Machine Learning

Authors: Jun Wang, Ge Zhang

Abstract:

Machine learning has been used in many areas today. It thrives at reviewing large volumes of data and identifying patterns and trends that might not be apparent to a human. Given the huge potential benefit and the amount of data available in the financial market, it is not surprising to see machine learning applied to various financial products. While future prices of financial securities are extremely difficult to forecast, we study them from a different angle. Instead of trying to forecast future prices, we apply machine learning algorithms to predict the direction of future price movement, in particular, whether a sector Exchange Traded Fund (ETF) would outperform or underperform the market in the next week or in the next month. We apply several machine learning algorithms for this prediction: Linear Discriminant Analysis (LDA), k-Nearest Neighbors (KNN), Decision Tree (DT), Gaussian Naive Bayes (GNB), and Neural Networks (NN). We show that these machine learning algorithms, most notably GNB and NN, have some predictive power in forecasting out-performance and under-performance out of sample. We also explore whether it is possible to use the predictions from these algorithms to outperform the buy-and-hold strategy on the S&P 500 index. The trading strategy that exploits out-performance predictions does not perform very well, but the trading strategy that exploits under-performance predictions earns higher returns than simply holding the S&P 500 index out of sample.
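
A minimal sketch of this classification setup (not the authors' code) follows; the features and labels are synthetic placeholders standing in for lagged relative returns and a next-week out-performance indicator.

# Illustrative sketch only: predict whether a sector ETF beats the market next
# week using three of the classifiers named above, on synthetic features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))                 # e.g., lagged relative returns
y = (rng.normal(size=500) > 0).astype(int)    # 1 = outperforms the S&P 500

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)  # preserve time order
for model in (LinearDiscriminantAnalysis(), KNeighborsClassifier(), GaussianNB()):
    print(type(model).__name__, model.fit(X_tr, y_tr).score(X_te, y_te))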

Keywords: machine learning, ETF prediction, dynamic trading, asset allocation

Procedia PDF Downloads 59
155 Basic Modal Displacements (BMD) for Optimizing Buildings Subjected to Earthquakes

Authors: Seyed Sadegh Naseralavi, Mohsen Khatibinia

Abstract:

In structural optimization through meta-heuristic algorithms, structures are analyzed many times. For this reason, performing the analyses in a time-saving way is valuable. This is all the more important for time-history analyses, which are time-consuming. To this end, peak-picking methods, also known as spectrum analyses, are generally used. However, such methods do not provide the required accuracy, whether performed with the square root of the sum of squares (SRSS) rule or the complete quadratic combination (CQC) rule. The paper presents an efficient technique for evaluating the dynamic responses during the optimization process with high speed and accuracy. In the method, an initial design is first obtained by using a static equivalent of the earthquake. Then, the displacements in the modal coordinates are computed. These displacements are herein called basic modal displacements (BMD). For each new design of the structure, the responses can be derived by suitably scaling each BMD in time and amplitude and superposing them using the corresponding modal matrices, as sketched below. To illustrate the efficiency of the method, an optimization problem is studied. The results show that the proposed approach is a suitable replacement for the conventional time-history and spectrum analyses in such problems.
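
A minimal numeric sketch of the superposition step follows; the mode shapes, stored modal histories, and scaling factors are placeholders, and the scaling rule itself is only an illustration of the idea, not the paper's formula.

# Sketch of the superposition idea: stored modal displacement histories q_i(t)
# are scaled and recombined through the mode shapes Phi of the new design.
import numpy as np

n_dof, n_modes, n_steps = 6, 3, 1000
Phi = np.linalg.qr(np.random.randn(n_dof, n_modes))[0]   # placeholder mode shapes
q_basic = np.random.randn(n_modes, n_steps)              # basic modal displacements
amp_scale = np.array([1.1, 0.9, 1.05])                   # per-mode amplitude scaling (assumed)

u = Phi @ (amp_scale[:, None] * q_basic)   # nodal response history, shape (n_dof, n_steps)
print(u.shape)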

Keywords: basic modal displacements, earthquake, optimization, spectrum

Procedia PDF Downloads 335
154 Optimization of Effective Parameters for the Removal of H₂S Gas in a Self-Priming Venturi Scrubber Using Response Surface Methodology

Authors: Manisha Bal, B. C. Meikap

Abstract:

The highly toxic and corrosive gas H₂S is recognized as a hazardous air pollutant with a significant effect on human health, and its abatement from air is therefore essential. H₂S is mainly released by industries such as the paper and leather industries, as well as during crude oil production and wastewater treatment. Exposure to H₂S at high concentrations may cause immediate death, while lower concentrations can cause various respiratory problems. In the present study, a self-priming venturi scrubber is used to remove H₂S gas from air. Response surface methodology with a central composite design has been chosen to observe the effect of the process parameters on the removal efficiency of H₂S. Experiments were conducted by varying the throat gas velocity, the liquid level in the outer cylinder, and the inlet H₂S concentration. An ANOVA test confirmed the significant effect of the parameters on the removal efficiency. A quadratic equation has been obtained which predicts the removal efficiency very well. The suitability of the developed model is supported by the high R² value obtained from the regression analysis. The investigation found that the throat gas velocity has the most significant effect and the inlet H₂S concentration the least effect on the removal efficiency.
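
A minimal sketch of the quadratic response-surface fit described above follows; the coded factors and efficiencies are synthetic placeholders, not the experimental data.

# Hedged sketch of the RSM step: fit a full quadratic model (linear, squared,
# and interaction terms) to removal efficiency on synthetic coded factors.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(20, 3))     # coded throat velocity, liquid level, inlet conc.
y = 80 + 5*X[:, 0] - 3*X[:, 2] - 4*X[:, 0]**2 + rng.normal(0, 1, 20)   # synthetic efficiency

quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)
print("R^2 of the fitted quadratic model:", model.score(quad.transform(X), y))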

Keywords: desulfurization, pollution control, response surface methodology, venturi scrubber

Procedia PDF Downloads 107
153 Modelling and Optimization of Laser Cutting Operations

Authors: Hany Mohamed Abdu, Mohamed Hassan Gadallah, El-Giushi Mokhtar, Yehia Mahmoud Ismail

Abstract:

Laser beam cutting is a nontraditional machining process. This paper optimizes the laser beam cutting parameters for stainless steel (316L) by considering the effect of the input parameters, viz. power, oxygen pressure, frequency, and cutting speed. A statistical design of experiments is carried out at three levels, and process responses such as average kerf taper (Ta) and surface roughness (Ra) are measured accordingly. A quadratic mathematical model (RSM) for each of the responses is developed as a function of the process parameters. Responses predicted by the models (as per Taguchi's L27 OA) are employed to search for an optimal parametric combination to achieve the desired yield of the process. RSM models are developed for the mean responses, the S/N ratio, and the standard deviation of the responses. Optimization models are formulated as single-objective problems subject to process constraints. The models are formulated based on analysis of variance (ANOVA) using the MATLAB environment. The optimum solutions are compared with the Taguchi methodology results.

Keywords: optimization, laser cutting, robust design, kerf width, Taguchi method, RSM and DOE

Procedia PDF Downloads 581
152 Hydraulic Resources Management under Imperfect Competition with Thermal Plants in the Wholesale Electricity Market

Authors: Abdessalem Abbassi, Ahlem Dakhlaoui, Lota D. Tamini

Abstract:

In this paper, we analyze infinite discrete-time games between hydraulic and thermal power operators in the wholesale electricity market under Cournot competition. We consider a deregulated electrical industry in which demand is satisfied by hydraulic and thermal technologies. The hydraulic operator chooses the production in each season of each period so as to maximize the sum of expected profits from power generation, subject to the stochastic dynamic constraint on the water stored in the dam, the environmental constraint, and the non-negative output constraint. In contrast, the thermal plant is operated with a quadratic cost function, subject to the production capacity constraint and the non-negative output constraint. We show that under imperfect competition, the hydraulic operator stores water strategically for the peak season. We then quantify the strategic inter-annual and intra-annual water transfers and compare the numerical results. Finally, we show that the thermal operator can restrict the hydraulic output without compensation.

Keywords: asymmetric risk aversion, electricity wholesale market, hydropower dams, imperfect competition

Procedia PDF Downloads 327
151 Modeling and Optimization of Algae Oil Extraction Using Response Surface Methodology

Authors: I. F. Ejim, F. L. Kamen

Abstract:

Aims: In this experiment, algae oil extraction with a combination of n-hexane and ethanol was investigated. The effects of extraction solvent concentration, extraction time, and temperature on the yield and quality of the oil were studied using Response Surface Methodology (RSM). Experimental Design: A Box-Behnken design was used to generate 17 experimental runs in a three-factor, three-level design, in which oil yield, specific gravity, acid value, and saponification value were evaluated as the responses. Result: A minimum oil yield of 17% and a maximum of 44% were realized. The optimum values for yield, specific gravity, acid value, and saponification value from the overlay plot were 40.79%, 0.8788, 0.5056 mg KOH/g, and 180.78 mg KOH/g, respectively, with a desirability of 0.801. The predicted maximum was a yield of 40.79% at an n-hexane solvent concentration of 66.68, a temperature of 40.0°C, and an extraction time of 4 h. Analysis of Variance (ANOVA) results showed that the linear and quadratic coefficients were all significant at p<0.05. The experiment was validated, and the results obtained agreed with the predicted values. Conclusion: Algae oil extraction was successfully optimized using RSM, and the oil quality indicated that it is suitable for many industrial uses.

Keywords: algae oil, response surface methodology, optimization, Box-Behnken, extraction

Procedia PDF Downloads 303
150 Statistical Optimization of Distribution Coefficient for Reactive Extraction of Lactic Acid Using Tri-n-octyl Amine in Oleyl Alcohol and n-Hexane

Authors: Avinash Thakur, Parmjit S. Panesar, Manohar Singh

Abstract:

The distribution coefficient, KD, for the reactive extraction of lactic acid from aqueous solutions using 10-30% (v/v) tri-n-octyl amine (extractant) dissolved in n-hexane (inert diluent) and 20% (v/v) oleyl alcohol (modifier) was optimized by using response surface methodology (RSM). A three-level Box-Behnken design was employed for the experimental design, the analysis of the results, and the depiction of the combined interactive effect of seven independent variables, viz. lactic acid concentration (cl), pH, TOA concentration in the organic phase (ψ), treat ratio (φ), temperature (T), agitation speed (ω), and batch agitation time (τ), on the distribution coefficient of lactic acid. The regression analysis indicated that the quadratic model is significant (R² and adjusted R² are 98.72% and 98.69%, respectively). A numerical optimization resulted in a maximum lactic acid distribution coefficient (KD) of 3.16 at the optimized values of the test variables cl, pH, ψ, φ, T, ω, and τ of 0.15 [M], 3.0, 22.75% (v/v), 1.0 (v/v), 26°C, 145 rpm, and 23 min, respectively. Good agreement between the predicted and experimentally obtained values of the distribution coefficient was observed under the optimized conditions.

Keywords: Distribution coefficient, tri-n-octylamine, lactic acid, response surface methodology

Procedia PDF Downloads 417
149 Optimum Design of Alkali Activated Slag Concretes for Low Chloride Ion Permeability and Water Absorption Capacity

Authors: Müzeyyen Balçikanli, Erdoğan Özbay, Hakan Tacettin Türker, Okan Karahan, Cengiz Duran Atiş

Abstract:

In this research, the effects of curing time (TC), curing temperature (CT), sodium concentration (SC), and silicate modulus (SM) on the compressive strength, chloride ion permeability, and water absorption capacity of alkali activated slag (AAS) concretes were investigated. The best possible combination of TC, CT, SC, and SM was determined to maximize the compressive strength while minimizing the chloride ion permeability and water absorption capacity of AAS concretes. An experimental program was conducted using the central composite design method. The alkali solution-to-slag ratio was kept constant at 0.53 in all mixtures. The effects of the independent parameters on the measured properties (dependent parameters) were characterized and analyzed using statistically significant quadratic regression models. The proposed regression models are valid for AAS concretes with SC from 0.1% to 7.5%, SM from 0.4 to 3.2, CT from 20 °C to 94 °C, and TC from 1.2 hours to 25 hours. The results of the tests and analysis indicate that the most effective parameter for the compressive strength, chloride ion permeability, and water absorption capacity is the sodium concentration.

Keywords: alkali activation, slag, rapid chloride permeability, water absorption capacity

Procedia PDF Downloads 285
148 Adapting the Chemical Reaction Optimization Algorithm to the Printed Circuit Board Drilling Problem

Authors: Taisir Eldos, Aws Kanan, Waleed Nazih, Ahmad Khatatbih

Abstract:

Chemical Reaction Optimization (CRO) is an optimization metaheuristic inspired by the nature of chemical reactions as a natural process of transforming substances from unstable to stable states. Starting with some unstable molecules with excessive energy, a sequence of interactions takes the set to a state of minimum energy. Researchers have reported successful applications of the algorithm to engineering problems, such as the quadratic assignment problem, with superior performance compared with other optimization algorithms. We adapted this optimization algorithm to the Printed Circuit Board Drilling Problem (PCBDP) with the aim of reducing the drilling time and hence improving PCB manufacturing throughput. Although the PCBDP can be viewed as an instance of the popular Traveling Salesman Problem (TSP), it has some characteristics that require special attention to the operations that explore the solution landscape. Experimental test results using the standard CROToolBox are not promising for practically sized problems, although the algorithm could find optimal solutions for artificial problems and small benchmarks as a proof of concept.

Keywords: evolutionary algorithms, chemical reaction optimization, traveling salesman, board drilling

Procedia PDF Downloads 487
147 Bayesian Estimation under Different Loss Functions Using Gamma Prior for the Case of Exponential Distribution

Authors: Md. Rashidul Hasan, Atikur Rahman Baizid

Abstract:

The Bayesian estimation approach is a non-classical estimation technique in statistical inference and is very useful in real-world situations. The aim of this paper is to study the Bayes estimators of the parameter of the exponential distribution under different loss functions and then to compare them with one another as well as with the classical maximum likelihood estimator (MLE). In real life, we always try to minimize the loss, and we also want to use prior information (a prior distribution) about the problem in order to solve it accurately. Here the gamma distribution is used as the prior for the exponential parameter when deriving the Bayes estimator. In our study, we also use different symmetric and asymmetric loss functions, such as the squared error loss function, the quadratic loss function, the modified linear exponential (MLINEX) loss function, and the non-linear exponential (NLINEX) loss function. Finally, the mean square errors (MSE) of the estimators are obtained and presented graphically.
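
As a worked illustration of this setup (assuming the rate parameterization of the exponential distribution, which the abstract does not state), the gamma prior is conjugate, and under squared error loss the Bayes estimator is the posterior mean:

\[
f(x\mid\theta)=\theta e^{-\theta x},\qquad \theta\sim\mathrm{Gamma}(a,b)
\;\Longrightarrow\;
\theta\mid x_1,\dots,x_n \sim \mathrm{Gamma}\Big(a+n,\; b+\sum_{i=1}^{n}x_i\Big),
\]
\[
\hat{\theta}_{\mathrm{SE}} = E[\theta\mid x_1,\dots,x_n]=\frac{a+n}{b+\sum_{i=1}^{n}x_i}
\qquad\text{versus}\qquad
\hat{\theta}_{\mathrm{MLE}}=\frac{n}{\sum_{i=1}^{n}x_i}.
\]

The asymmetric losses (MLINEX, NLINEX) lead to different functionals of the same gamma posterior.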

Keywords: Bayes estimator, maximum likelihood estimator (MLE), modified linear exponential (MLINEX) loss function, Squared Error (SE) loss function, non-linear exponential (NLINEX) loss function

Procedia PDF Downloads 355
146 Optimization of Process Parameters using Response Surface Methodology for the Removal of Zinc(II) by Solvent Extraction

Authors: B. Guezzen, M.A. Didi, B. Medjahed

Abstract:

A factorial design of experiments and response surface methodology were implemented to investigate the liquid-liquid extraction of zinc(II) from an acetate medium using 1-butyl-imidazolium di(2-ethylhexyl) phosphate [BIm+][D2EHP-]. The optimization of the extraction parameters, namely the initial pH (2.5, 4.5, and 6.6), the ionic liquid concentration (1, 5.5, and 10 mM), and the salt concentration (0.01, 5, and 10 mM), was carried out using a three-level full factorial design (3³). The results of the factorial design demonstrate that all these factors are statistically significant, including the square effects of pH and ionic liquid concentration. The order of significance is: IL concentration > salt effect > initial pH. Analysis of variance (ANOVA), showing a high coefficient of determination (R² = 0.91) and low probability values (P < 0.05), confirms the validity of the predicted second-order quadratic model for Zn(II) extraction. The optimum conditions for the extraction of zinc(II) at constant temperature (20 °C), initial Zn(II) concentration (1 mM), and an A/O ratio of unity were: initial pH of 4.8, extractant concentration of 9.9 mM, and NaCl concentration of 8.2 mM. Under the optimized conditions, the metal ion could be quantitatively extracted.

Keywords: ionic liquid, response surface methodology, solvent extraction, zinc acetate

Procedia PDF Downloads 344
145 Comparison of the Boundary Element Method and the Method of Fundamental Solutions for Analysis of Potential and Elasticity

Authors: S. Zenhari, M. R. Hematiyan, A. Khosravifard, M. R. Feizi

Abstract:

The boundary element method (BEM) and the method of fundamental solutions (MFS) are well-known fundamental solution-based methods for solving a variety of problems. Both methods are boundary-type techniques and can provide accurate results. In comparison to the finite element method (FEM), which is a domain-type method, the BEM and the MFS need less manual effort to solve a problem. The aim of this study is to compare the accuracy and reliability of the BEM and the MFS. This comparison is made for 2D potential and elasticity problems with different boundary and loading conditions. In the comparisons, both convex and concave domains are considered. Both linear and quadratic elements are employed for boundary element analysis of the examples. The discretization of the problem domain in the BEM, i.e., converting the boundary of the problem into boundary elements, is relatively simple; however, in the MFS, obtaining appropriate locations of collocation and source points needs more attention to obtain reliable solutions. The results obtained from the presented examples show that both methods lead to accurate solutions for convex domains, whereas the BEM is more suitable than the MFS for concave domains.

Keywords: boundary element method, method of fundamental solutions, elasticity, potential problem, convex domain, concave domain

Procedia PDF Downloads 62
144 Tourist Cultural Literacy: Scale Development and Validation

Authors: Yun-Ru Tsai, Jo-Hui Lin

Abstract:

The cultural interactions between tourists and destination communities have received increased attention. Tourists play an important role in constructing a rewarding intercultural experience and cultural understanding. Cultural literacy is the ability of tourists to negotiate different cultures. This research aimed to develop a measurement of Tourist Cultural Literacy (TCL); the result provides a theoretical framework for assessing how tourists interact with different cultural destinations. A pilot qualitative study was conducted in order to generate the initial items. In this study, the procedure for developing the TCL scale was divided into two parts. First, an exploratory factor analysis was conducted; a 25-item TCL scale was developed and six factors were identified: cultural sensitivity, appreciation of the culture, respect for the culture, knowledge of the culture, participation in the culture, and empathy for the culture. Second, confirmatory factor analyses and structural equation modeling were employed; the six-factor model was verified and was shown to have good fit, reliability, convergent validity, discriminant validity, and criterion-related validity. The study provides managerial implications for tourist management and education: the popularization of TCL might increase the respect and understanding between tourists and local societies and decrease the culture shocks and negative socio-cultural impacts derived from tourism activities, thereby reducing the maintenance cost of management and allowing tourists to obtain a better cultural experience. Suggestions for future research are also provided.

Keywords: cultural literacy, cultural tourism, scale development, tourism contact

Procedia PDF Downloads 325
143 Coupling Time-Domain Analysis for Dynamic Positioning during S-Lay Installation

Authors: Sun Li-Ping, Zhu Jian-Xun, Liu Sheng-Nan

Abstract:

In order to study the performance of the dynamic positioning system during S-lay operations, the dynamic positioning system is simulated with the hull-stinger-pipe coupling effect. The stinger rollers are simulated using generalized elastic contact theory, the stinger is composed of Morison members, and the force on the pipe is calculated by the lumped mass method. A time-domain analysis of the fully coupled barge model is carried out, combining a PID controller, a Kalman filter, and thrust allocation using the sequential quadratic programming (SQP) method. The effect of hull wave-frequency motion on the pipe-stinger coupling force and on the dynamic positioning system is also analyzed, as is the way S-lay operations affect the dynamic positioning accuracy. The simulation results are validated by checking the pipe stress against the API criterion. The effect of heave and yaw motion on the hull-stinger-pipe coupling force and the dynamic positioning system cannot be ignored. It is important to reduce the barge's pitch motion and to lay pipe in head seas in order to improve the safety of the S-lay installation and of dynamic positioning.
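
A minimal sketch of the thrust-allocation step follows; the thruster geometry, lever arms, limits, and commanded forces are assumptions made for illustration, and SciPy's SLSQP routine is used as a stand-in SQP-type solver.

# Illustrative thrust allocation (not the paper's code): find thruster force
# components that reproduce the commanded (Fx, Fy, Mz) with minimum effort,
# subject to saturation limits, using an SQP-type solver (SLSQP).
import numpy as np
from scipy.optimize import minimize

# Hypothetical configuration: 3 azimuth thrusters, x/y force components each.
B = np.array([[1, 0, 1, 0, 1, 0],        # Fx contribution
              [0, 1, 0, 1, 0, 1],        # Fy contribution
              [0, 30, 0, -25, 5, 0]])    # Mz from assumed lever arms (m)
tau_cmd = np.array([200.0, -50.0, 1500.0])    # commanded forces (kN) and moment (kN.m)

cost = lambda u: float(u @ u)                              # power-use proxy
cons = {"type": "eq", "fun": lambda u: B @ u - tau_cmd}    # reproduce commanded tau
res = minimize(cost, np.zeros(6), method="SLSQP", constraints=[cons],
               bounds=[(-300, 300)] * 6)
print(res.x.round(1), "max residual:", np.abs(B @ res.x - tau_cmd).max())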

Keywords: S-lay operation, dynamic positioning, coupling motion, time domain, allocation of thrust

Procedia PDF Downloads 429
142 Assessing the Utility of Unmanned Aerial Vehicle-Borne Hyperspectral Image and Photogrammetry Derived 3D Data for Wetland Species Distribution Quick Mapping

Authors: Qiaosi Li, Frankie Kwan Kit Wong, Tung Fung

Abstract:

Lightweight unmanned aerial vehicles (UAVs) carrying novel sensors offer a low-cost approach to data acquisition in complex environments. This study established a framework for applying a UAV system to quick mapping in complex environments and assessed the performance of UAV-borne hyperspectral imagery and a digital surface model (DSM) derived from photogrammetric point clouds for classifying 13 species in the wetland of the Mai Po Inner Deep Bay Ramsar Site, Hong Kong. The study area was part of a shallow bay with flat terrain, and the major species included reedbed and four mangroves: Kandelia obovata, Aegiceras corniculatum, Acrostichum auerum and Acanthus ilicifolius. Other species included various graminaceous plants, arbors, shrubs, and the invasive species Mikania micrantha. In particular, the invasive species climbed up to the mangrove canopy, causing damage and morphological change that might make the species harder to distinguish. Hyperspectral images were acquired with a Headwall Nano sensor with a spectral range from 400 nm to 1000 nm and a spatial resolution of 0.06 m. A sequence of multi-view RGB images was captured with 0.02 m spatial resolution and 75% overlap. The hyperspectral imagery was corrected for radiometric and geometric distortion, while the high-resolution RGB images were matched to generate maximally dense point clouds, from which a 5 cm grid digital surface model (DSM) was derived. Multiple feature-reduction methods were compared to identify the most efficient one and to explore the spectral bands most significant for distinguishing the species; the examined methods were stepwise discriminant analysis (DA), support vector machine (SVM), and minimum noise fraction (MNF) transformation. Subsequently, spectral subsets composed of the 20 most important bands extracted by SVM, DA, and MNF, as well as multi-source subsets adding the DSM to the 20 spectral bands, served as inputs to a maximum likelihood classifier (MLC) and an SVM classifier to compare the classification results. The classification results showed that the feature-reduction methods, from best to worst, were MNF transformation, DA, and SVM; MNF accuracy was even higher than that obtained using all bands as input. The selected bands frequently lay along the green peak, the red edge, and the near infrared. Additionally, DA found that the chlorophyll-absorption red band and the yellow band were also important for species classification. In terms of 3D data, the DSM enhanced the ability to discriminate among low plants, arbors, and mangroves, and it largely reduced misclassification due to shadow effects and inter-species morphological variation. With respect to the classifier, the nonparametric SVM outperformed the MLC for high-dimensional and multi-source data in this study; the SVM classifier tended to produce higher overall accuracy and fewer scattered patches, although it cost more time than the MLC. The best result was obtained by combining MNF components and the DSM in the SVM classifier. This study offers a precise species-distribution survey solution for inaccessible wetland areas at a low cost in time and labour. In addition, the findings on the positive effect of the DSM and on spectral feature identification indicate that UAV-borne hyperspectral imagery and photogrammetry-derived 3D data are promising for further research on wetland species, such as bio-parameter modelling and biological-invasion monitoring.
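
A simplified sketch of the classification stage is given below; PCA stands in for the MNF transform, and the spectra, DSM values, and labels are synthetic placeholders.

# Simplified sketch: reduce the hyperspectral bands, append the DSM as an extra
# feature, and train an SVM. PCA is used here as a stand-in for MNF; all arrays
# are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
spectra = rng.normal(size=(1000, 270))     # pixels x bands (400-1000 nm)
dsm = rng.normal(size=(1000, 1))           # canopy height per pixel
labels = rng.integers(0, 13, size=1000)    # 13 wetland species classes

reduced = PCA(n_components=20).fit_transform(spectra)
features = np.hstack([reduced, dsm])
print("CV accuracy:", cross_val_score(SVC(kernel="rbf"), features, labels, cv=5).mean())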

Keywords: digital surface model (DSM), feature reduction, hyperspectral, photogrammetric point cloud, species mapping, unmanned aerial vehicle (UAV)

Procedia PDF Downloads 230
141 Steepest Descent Method with New Step Sizes

Authors: Bib Paruhum Silalahi, Djihad Wungguli, Sugi Guritman

Abstract:

The steepest descent method is a simple gradient method for optimization. It converges slowly toward the optimal solution because of the zigzag form of its steps. Barzilai and Borwein modified the algorithm so that it performs well for problems with large dimensions. The Barzilai and Borwein method has sparked considerable research on the steepest descent method, including the alternate minimization gradient method and Yuan's method. Inspired by previous works, we modified the step size of the steepest descent method. We then compare the modified method against the Barzilai and Borwein method, the alternate minimization gradient method, and Yuan's method for quadratic functions in terms of the number of iterations and the running time. The average results indicate that the steepest descent method with the new step sizes provides good results for small dimensions and is able to compete with the Barzilai and Borwein method and the alternate minimization gradient method for large dimensions. The new step sizes converge faster than the other methods, especially for cases with large dimensions.
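
A minimal sketch of gradient descent with the Barzilai-Borwein step on a quadratic follows; it illustrates the baseline against which the new step sizes are compared, not the paper's proposed step sizes.

# Minimal sketch of gradient descent on a quadratic f(x) = 0.5 x'Ax - b'x with
# the Barzilai-Borwein (BB1) step size.
import numpy as np

A = np.diag(np.linspace(1, 100, 50))     # ill-conditioned SPD quadratic
b = np.ones(50)
grad = lambda x: A @ x - b

x = np.zeros(50)
g = grad(x)
step = 1.0 / 100                         # safe first step (1 / largest eigenvalue)
for k in range(500):
    x_new = x - step * g
    g_new = grad(x_new)
    s, yv = x_new - x, g_new - g
    step = float(s @ s) / float(s @ yv)  # BB1 step size
    x, g = x_new, g_new
    if np.linalg.norm(g) < 1e-8:
        break
print("iterations:", k + 1, "final gradient norm:", np.linalg.norm(g))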

Keywords: steepest descent, line search, iteration, running time, unconstrained optimization, convergence

Procedia PDF Downloads 519
140 Development and Validation of Sense of Humor Questionnaire in China

Authors: Yunshi Peng, Shanshan Gao, Sang Qin

Abstract:

The sense of humor is an integration of cognition, emotion, and behavioral tendencies in the process of expressing humor. Previous studies have evidenced the positive impact of a sense of humor on mental health. However, very few studies have investigated this with Chinese populations, and the absence of a validated questionnaire limits empirical research on the sense of humor in China. This study aimed to develop a Chinese instrument to examine the sense of humor among college students in China. A pool of 72 items was developed through a series of qualitative methods, including an open-ended questionnaire, individual interviews, and literature analysis, followed by expert rating. A total of 500 college students were recruited from 7 provinces in China to complete all 72 items. The factor structure of the sense of humor was established, and a 25-item scale was eventually formed using exploratory factor analysis (EFA). The questionnaire comprises four subscales: humor comprehension, humor creativity, attitudes towards humor, and optimism level. Confirmatory factor analyses (CFA) in a follow-up study with a different sample of 1200 college students showed good model fit. All subscales and the overall questionnaire display satisfactory internal consistency. Correlations with criterion variables demonstrated good convergent and discriminant validity. The sense of humor questionnaire is a psychometrically sound instrument for the population of college students in China, and it is applicable in future studies that identify the structure of the sense of humor and evaluate individuals' levels of humor.

Keywords: college students, EFA and CFA, questionnaire, sense of humor

Procedia PDF Downloads 311
139 Thermal End Effect on the Isotachophoretic Separation of Analytes

Authors: Partha P. Gopmandal, S. Bhattacharyya

Abstract:

We investigate the thermal end effect on the pseudo-steady-state behavior of the isotachophoretic transport of ionic species in a 2-D microchannel. Both ends of the channel are kept at a constant temperature, which may lead to significant changes in the electrophoretic migration speed. A mathematical model based on the Nernst-Planck equations for the transport of ions, coupled with the equation for the temperature field, is considered. In addition, the charge conservation equations govern the potential field due to the external electric field. We compute the equations for ion transport, potential, and temperature in a coupled manner through the finite volume method. The diffusive terms are discretized via a central difference scheme, while the QUICK (Quadratic Upstream Interpolation for Convective Kinematics) scheme is used to discretize the convective terms. We find that the thermal end effect has a significant influence on the isotachophoretic (ITP) migration speed of the analyte. Our results show that in the temperature-dependent case the ITP velocity no longer varies linearly with the applied electric field. A detailed analysis has been made to provide a range of the key parameters that minimizes the Joule heating effect on the ITP transport of analytes.
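
For reference, the QUICK face interpolation used for the convective terms can be stated in its standard uniform-grid form (with C the node immediately upstream of the face, D the downstream node, and U the far-upstream node); this is the textbook weighting, not a formula taken from the paper:

\[
\phi_f \;=\; \tfrac{6}{8}\,\phi_C \;+\; \tfrac{3}{8}\,\phi_D \;-\; \tfrac{1}{8}\,\phi_U .
\]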

Keywords: finite volume method, isotachophoresis, QUICK scheme, thermal effect

Procedia PDF Downloads 247
138 Coping with Heat Stress by Crushed Fennel (Foeniculum Vulgare) Seeds in Broilers: Growth, Redox Balance, and Humoral Immune Response

Authors: Adia Fatima, Naila Chand, Rifat Ullah Khan

Abstract:

The goal of this study was to determine how fennel seed supplementation affects broiler growth, carcass quality, antioxidant status, and antibody titer in heat-stressed broilers. A total of 720 one-day-old broiler chickens were weighed and assigned to 28 floor pens (25 broiler chickens per pen). The broiler chickens were housed either in a thermoneutral (TN) environment or exposed to heat stress (HS), and were kept under fluorescent lighting for 23 hours. For 35 d, the HS broiler chickens were fed a control diet or fennel seed powder at three levels: 15 g/kg (Fen-15), 20 g/kg (Fen-20), and 25 g/kg (Fen-25). Overall feed intake, weight gain, and dressing percentage were considerably greater (P < 0.05) in Fen-25 and TN, while FCR was significantly reduced (P < 0.01) in the same groups. When TN, Fen-20, and Fen-25 were compared to the control, malondialdehyde (MDA), paraoxonase (PON1), and the antibody titer against Newcastle disease (ND) were considerably (P < 0.05) greater. Furthermore, linear and quadratic responses were observed for feed intake, weight gain, FCR, MDA, PON1, and ND titer. It was concluded that Fen-20 and Fen-25 improved broiler growth, carcass quality, antioxidant status, and immunological response under HS conditions.

Keywords: heat stress, growth, antioxidant, immunity

Procedia PDF Downloads 67
137 A Study on Golden Ratio (φ) and Its Implications on Seismic Design Using ETABS

Authors: Vishal A. S. Salelkar, Sumitra S. Kandolkar

Abstract:

The golden ratio (φ), also referred to as the golden mean or golden section, is a proportion often used by architects when conceiving the aesthetics of a structure. The golden ratio is an irrational number, roughly 1.618, derived from the quadratic equation x² - x - 1 = 0. Its use can be observed throughout history, as far back as the ancient Egyptians, and it peaked during the Greek golden age; the technique remains prevalent, and at present architects around the world prefer it as one of the primary means of deciding aesthetics. In this study, an analysis has been performed to investigate whether the use of the golden ratio while planning a structure has any effect on the seismic behavior of the structure. The structure is modeled and analyzed in ETABS (by Computers and Structures, Inc.) for seismic requirements equivalent to Zone III (region: Goa, India) as per the Indian Standard code IS 1893. The results were compared to those of an identical structure modeled along the lines of a normal design philosophy, without the golden ratio tools, and the comparison covered story shear, story drift, and story displacement readings. A slight improvement in performance was observed. Similar improvements were also observed in subsequent iterations, performed using time-acceleration data of previous major earthquakes matched to Zone III as per IS 1893.
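
The derivation implied above is worth stating: the positive root of the quadratic gives the ratio.

\[
x^{2}-x-1=0 \;\Longrightarrow\; x=\frac{1\pm\sqrt{5}}{2},
\qquad
\phi=\frac{1+\sqrt{5}}{2}\approx 1.618.
\]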

Keywords: ETABS, golden ratio, seismic design, structural behavior

Procedia PDF Downloads 137
136 In silico Repopulation Model of Various Tumour Cells during Treatment Breaks in Head and Neck Cancer Radiotherapy

Authors: Loredana G. Marcu, David Marcu, Sanda M. Filip

Abstract:

Advanced head and neck cancers are aggressive tumours which require aggressive treatment. Treatment efficiency is often hindered by cancer cell repopulation during radiotherapy, which is due to various mechanisms triggered by the loss of tumour cells and involves both stem and differentiated cells. The aim of the current paper is to present in silico simulations of radiotherapy schedules on a virtual head and neck tumour grown with biologically realistic kinetic parameters. Using the linear quadratic formalism of cell survival after radiotherapy, altered fractionation schedules employing various treatment breaks for normal tissue recovery are simulated, and a repopulation mechanism is implemented, in order to evaluate the impact of the various cancer cell contributions on tumour behaviour during irradiation. The model has shown that the timing of treatment breaks is an important factor influencing tumour control in rapidly proliferating tissues such as squamous cell carcinomas of the head and neck. Furthermore, not only stem cells but also differentiated cells, via the mechanism of abortive division, can contribute to malignant cell repopulation during treatment.
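
For reference, the linear quadratic formalism referred to above gives the surviving fraction after n fractions of dose d each; the biologically effective dose (BED) is the standard companion quantity. These are the general textbook forms, with no parameter values taken from the paper:

\[
S=\exp\!\big[-n\,(\alpha d+\beta d^{2})\big],
\qquad
\mathrm{BED}=nd\left(1+\frac{d}{\alpha/\beta}\right).
\]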

Keywords: radiation, tumour repopulation, squamous cell carcinoma, stem cell

Procedia PDF Downloads 247
135 Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis

Authors: Petr Gurný

Abstract:

One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. In this paper, the possibility of determining a financial institution's PD using credit-scoring models is discussed. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression, and probit regression) from a sample of almost three hundred US commercial banks. These models are then compared and verified on a control sample with a view to choosing the best one. The second part of the paper applies the chosen model to a portfolio of three key Czech banks to estimate their present financial stability. However, it is no less important to be able to estimate the evolution of PD in the future. For this reason, the second task in this paper is to estimate the probability distribution of the future PD for the Czech banks. To this end, the values of particular indicators are sampled randomly and the distribution of PDs is estimated, under the assumption that the indicators are distributed according to a multidimensional subordinated Lévy model (the Variance Gamma model and the Normal Inverse Gaussian model, in particular). Although the obtained results show that all the banks are relatively healthy, there is still a high chance that "a financial crisis" will occur, at least in terms of probability; this is indicated by the various quantiles of the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.
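
A minimal sketch of a logit-type credit-scoring model as mentioned above follows; the balance-sheet ratios and default labels are synthetic placeholders, not data on the banks studied.

# Hedged sketch of a logit scoring model: map balance-sheet ratios to a
# probability of default (PD). All values below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
ratios = rng.normal(size=(300, 5))                          # e.g., capital, liquidity, ROA ...
default = (ratios[:, 0] + rng.normal(size=300) < -1).astype(int)

scorecard = LogisticRegression().fit(ratios, default)
pd_new_bank = scorecard.predict_proba(rng.normal(size=(1, 5)))[0, 1]
print(f"estimated PD for a new bank: {pd_new_bank:.2%}")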

Keywords: credit-scoring models, multidimensional subordinated Lévy model, probability of default

Procedia PDF Downloads 424
134 Quadratic Convective Flow of a Micropolar Fluid in a Non-Darcy Porous Medium with Convective Boundary Condition

Authors: Ch. Ramreddy, P. Naveen, D. Srinivasacharya

Abstract:

The objective of the present study is to investigate the effect of nonlinear temperature and concentration variations on the mixed convective flow of a micropolar fluid over an inclined flat plate in a non-Darcy porous medium in the presence of a convective boundary condition. In order to analyze all the essential features, the transformed nonlinear conservation equations are solved numerically by a spectral method. Through a comparison between vertical, horizontal, and inclined plates, the physical quantities of the flow and its characteristics are presented graphically and quantitatively for various parameters. An increase in the coupling number and in the angle of inclination tends to decrease the skin friction and the mass transfer rate, with the reverse trend in the wall couple stress and the heat transfer rate. For sufficiently high values of the Biot number, only a nominal effect on the wall couple stress and skin friction is encountered, whereas a significant effect on the local heat and mass transfer rates is found.

Keywords: convective boundary condition, micropolar fluid, non-darcy porous medium, non-linear convection, spectral method

Procedia PDF Downloads 251
133 Detection of Internal Mold Infection of Intact Tomatoes by Non-Destructive, Transmittance VIS-NIR Spectroscopy

Authors: K. Petcharaporn

Abstract:

The external characteristics of tomatoes, such as freshness, color, and size, are typically used in quality control processes for tomato sorting. However, internal mold infection of intact tomatoes cannot be sorted out on the basis of visual assessment, and destructive methods alone are unsuitable. In this study, a non-destructive technique was used to predict the internal mold infection of intact tomatoes using transmittance visible and near infrared (VIS-NIR) spectroscopy. Spectra of 200 samples (100 normal and 100 mold-infected tomatoes) were acquired in the wavelength range of 665-955 nm. These data were used in conjunction with the partial least squares discriminant analysis (PLS-DA) method to generate a classification model separating normal from internally mold-infected tomato samples. For this task, the data were split into two groups: 140 samples for a training set and 60 samples for a test set. The spectra of the normal and the internally mold-infected tomatoes showed different features in the visible wavelength range. Combined spectral pretreatments of standard normal variate transformation (SNV) and smoothing (Savitzky-Golay) gave the optimal calibration model on the training set, with an accuracy of 85.0% (63 out of 71 normal samples and 56 out of 69 internal-mold samples). The classification accuracy of the best model on the test set was 91.7% (29 out of 29 normal samples and 26 out of 31 internal-mold samples). The results of this experiment show that transmittance VIS-NIR spectroscopy can be used as a non-destructive technique to predict the internal mold infection of intact tomatoes.
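
A minimal sketch of the PLS-DA step (SNV pretreatment, PLS regression on a 0/1 class label, 0.5 decision threshold) follows; the spectra are synthetic stand-ins for the measured transmittance data.

# Sketch of the PLS-DA step on synthetic spectra: SNV-pretreat, regress a 0/1
# label with PLS, and threshold the prediction at 0.5.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
spectra = rng.normal(size=(200, 120))      # stand-in for 665-955 nm transmittance
labels = np.repeat([0, 1], 100)            # 0 = normal, 1 = mold-infected

snv = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)
idx = rng.permutation(200)
X_tr, y_tr = snv[idx[:140]], labels[idx[:140]]
X_te, y_te = snv[idx[140:]], labels[idx[140:]]

pls = PLSRegression(n_components=10).fit(X_tr, y_tr.astype(float))
pred = (pls.predict(X_te).ravel() > 0.5).astype(int)
print("test-set accuracy:", (pred == y_te).mean())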

Keywords: tomato, mold, quality, prediction, transmittance

Procedia PDF Downloads 340
132 Improved Blood Glucose-Insulin Monitoring with Dual-Layer Predictive Control Design

Authors: Vahid Nademi

Abstract:

Wearable medical devices equipped with a continuous glucose monitor (CGM) and an insulin pump are widely used, yet advanced control methods are still needed to obtain the full benefit of these devices. Unlike costly clinical trials, implementing effective insulin-glucose control strategies can make a significant contribution to patients suffering from chronic diseases such as diabetes. This study deals with the key role of a two-layer insulin-glucose regulator based on a model predictive control (MPC) scheme, so that the patient's predicted glucose profile is kept in compliance with the insulin automatically injected through the insulin pump. This is achieved by an iterative optimization algorithm, the integrated perturbation analysis and sequential quadratic programming (IPA-SQP) solver, which handles uncertainties due to unexpected variations in glucose-insulin values and in the body's characteristics. The feasibility of the discussed control approach is also evaluated by means of numerical simulations of two case scenarios using measured data. The results obtained verify the superior and reliable performance of the proposed control scheme with no negative impact on patient safety.

Keywords: blood glucose monitoring, insulin pump, predictive control, optimization

Procedia PDF Downloads 111
131 Classification of Business Models of Italian Bancassurance by Balance Sheet Indicators

Authors: Andrea Bellucci, Martina Tofi

Abstract:

The aim of this paper is to analyze the business models of bancassurance in Italy for the life business. The life insurance business is very developed in the Italian market, and bank branches hold 80% of the market share. Given its maturity, the life insurance market needs to consolidate its organizational form to allow for the development of the non-life business, which nowadays collects few premiums but represents a great opportunity to enlarge the market share of bancassurance through its strength in the distribution channel, while the market share of independent agents is decreasing. Starting from the main business model of bancassurance for the life business, this paper analyzes the performance of life companies in the Italian market using balance sheet indicators and the main discriminant variables of business models. The study observes trends from 2013 to 2015 for the Italian market by exploiting a database managed by the Associazione Nazionale delle Imprese di Assicurazione (ANIA). The approach is a bottom-up analysis that starts with variables and indicators to define the classification of business models. The statistical classification algorithm proposed by Ward is employed to design the business model profiles. The result of the analysis is a representation of the main business models, each built from its profile of indicators. In this way, an unsupervised analysis is developed; its limitation is the judgmental dimension based on the researchers' opinion, but it makes it possible to obtain a design of effective business models.
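
A minimal sketch of the Ward classification step mentioned above follows; the balance-sheet indicators are synthetic placeholders, not values from the ANIA database.

# Sketch of the classification step: Ward's hierarchical clustering on
# standardized balance-sheet indicators (synthetic values used here).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(6)
indicators = rng.normal(size=(40, 6))    # 40 life companies x 6 balance-sheet indicators
Z = linkage(zscore(indicators, axis=0), method="ward")
profiles = fcluster(Z, t=3, criterion="maxclust")   # cut the tree into 3 business models
print("companies per business model:", np.bincount(profiles)[1:])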

Keywords: bancassurance, business model, non-life bancassurance, insurance business value drivers

Procedia PDF Downloads 270
130 Exploration of Artificial Neural Network and Response Surface Methodology in Removal of Industrial Effluents

Authors: Rakesh Namdeti

Abstract:

Toxic dyes found in industrial effluent must be treated before disposal because of their harmful impact on human health and aquatic life. In this work, Musa acuminata (banana leaves) was employed as a biosorbent to remove methylene blue from a synthetic solution. The effects of process parameters such as temperature, pH, biosorbent dosage, and initial methylene blue concentration on the percentage of dye removal were investigated using a central composite design (CCD). The response was modelled using a quadratic model based on the CCD, and analysis of variance (ANOVA) revealed the most influential factors for the response. A temperature of 44.3°C, a pH of 7.1, a biosorbent dose of 0.3 g, and an initial methylene blue concentration of 48.4 mg/L, giving 84.26 percent dye removal, were the best conditions for Musa acuminata (banana leaf powder). Under these optimal conditions, the experimental biosorption percentage was 76.93. The agreement between the estimates of the developed ANN model and the experimental results defined the success of the ANN modelling; the study's experimental results were found to be quite close to the model's predicted outcomes.

Keywords: Musa acuminata, central composite design, methylene blue, artificial neural network

Procedia PDF Downloads 41
129 Detection of Internal Mold Infection of Intact Tomatoes by Non-Destructive, Transmittance VIS-NIR Spectroscopy

Authors: K. Petcharaporn, N. Prathengjit

Abstract:

The external characteristics of tomatoes, such as freshness, color, and size, are typically used in quality control processes for tomato sorting. However, internal mold infection of intact tomatoes cannot be sorted out on the basis of visual assessment, and destructive methods alone are unsuitable. In this study, a non-destructive technique was used to predict the internal mold infection of intact tomatoes using transmittance visible and near infrared (VIS-NIR) spectroscopy. Spectra of 200 samples (100 normal and 100 mold-infected tomatoes) were acquired in the wavelength range of 665-955 nm. These data were used in conjunction with the partial least squares discriminant analysis (PLS-DA) method to generate a classification model separating normal from internally mold-infected tomato samples. For this task, the data were split into two groups: 140 samples for a training set and 60 samples for a test set. The spectra of the normal and the internally mold-infected tomatoes showed different features in the visible wavelength range. Combined spectral pretreatments of standard normal variate transformation (SNV) and smoothing (Savitzky-Golay) gave the optimal calibration model on the training set, with an accuracy of 85.0% (63 out of 71 normal samples and 56 out of 69 internal-mold samples). The classification accuracy of the best model on the test set was 91.7% (29 out of 29 normal samples and 26 out of 31 internal-mold samples). The results of this experiment show that transmittance VIS-NIR spectroscopy can be used as a non-destructive technique to predict the internal mold infection of intact tomatoes.

Keywords: tomato, mold, quality, prediction, transmittance

Procedia PDF Downloads 490
128 Optimization Approach to Estimate Hammerstein–Wiener Nonlinear Blocks in Presence of Noise and Disturbance

Authors: Leili Esmaeilani, Jafar Ghaisari, Mohsen Ahmadian

Abstract:

The Hammerstein–Wiener model is a block-oriented model in which a linear dynamic system is surrounded by two static nonlinearities at its input and output; it can be used to model various processes. This paper presents an optimization-based method for analysing the Hammerstein–Wiener system identification problem. The method relies on reformulating the identification problem, solving it as a constrained quadratic problem, and analysing its solutions. During the formulation of the problem, the effects on the resulting equations of adding noise to both the input and output signals of the nonlinear blocks, and of adding a disturbance to the linear block, are discussed. Additionally, a possible parametric form of the matrix operations that reduces the size of the equations is presented. To analyse the possible solutions of this system of equations, a method is presented that reduces the difference between the number of equations and the number of unknown variables by formulating and importing existing knowledge about the nonlinear functions. The obtained equations are applied to an example H–W system to validate the results and illustrate the proposed method.

Keywords: identification, Hammerstein-Wiener, optimization, quantization

Procedia PDF Downloads 235