Search results for: Maximum Independent subset

1771 C@sa: Intelligent Home Control and Simulation

Authors: Berardina De Carolis, Giovanni Cozzolongo

Abstract:

In this paper, we present C@sa, a multiagent system for modeling, controlling, and simulating the behavior of an intelligent house. The system aims to provide architects, designers, and psychologists with a simulation and control tool for understanding the impact of embedded and pervasive technology on people's daily lives. In this vision, the house is seen as an environment made up of independent and distributed devices, controlled by agents that interact to support the user's goals and tasks.

Keywords: Ambient intelligence, agent-based systems, influence diagrams.

1770 The Extremal Graph with the Largest Merrifield-Simmons Index of (n, n + 2)-graphs

Authors: M. S. Haghighat, A. Dolati, M. Tabari, E. Mohseni

Abstract:

The Merrifield-Simmons index of a graph G is defined as the total number of its independent sets. An (n, n+2)-graph is a connected simple graph with n vertices and n+2 edges. In this paper we characterize the (n, n+2)-graph with the largest Merrifield-Simmons index. We show that its Merrifield-Simmons index, i.e. the upper bound of the Merrifield-Simmons index over all (n, n+2)-graphs, is 9 × 2^(n-5) + 1 for n ≥ 5.
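
To make the counted quantity concrete, here is a minimal Python sketch (our illustration, not code from the paper) that counts independent sets by brute force; the 5-cycle used as a sanity check has Merrifield-Simmons index equal to the Lucas number L5 = 11.

```python
from itertools import combinations

def merrifield_simmons(n, edges):
    """Total number of independent sets (including the empty set) of a
    graph on vertices 0..n-1, counted by brute force; feasible only for
    small n since all 2^n subsets are examined."""
    count = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            s = set(subset)
            if all(u not in s or v not in s for u, v in edges):
                count += 1
    return count

# Sanity check on the 5-cycle C5: its index is the Lucas number L5 = 11.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(merrifield_simmons(5, c5))  # -> 11
```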

Keywords: Merrifield-Simmons index, (n, n+2)-graph.

1769 Heavy Metals Transport in the Soil Profiles under the Application of Sludge and Wastewater

Authors: A. Behbahaninia, S. A. Mirbagheri, A. H. Javid

Abstract:

Heavy metal transfer in soil profiles is a major environmental concern because even slow transport through the soil may eventually lead to deterioration of groundwater quality. The use of sewage sludge and effluents from wastewater treatment plants for irrigation of agricultural lands is on the rise, particularly in peri-urban areas of developing countries. In this study, soils under sludge application and wastewater irrigation were examined, with samples collected along the soil profiles from the surface to a depth of 100 cm. For this purpose, three plots were set up at a treatment plant in the south of Tehran, Iran. The first plot was irrigated only with effluent from the wastewater treatment plant, the second with heavy metal concentrations simulating 50 years of irrigation, and the third with both sewage sludge and effluent. Trace metal concentrations (Cd, Cu) were determined in the soil samples. The results indicate that metal movement occurred, but the highest metal concentrations were found in the topsoil samples. The highest cadmium concentration, 4.5 mg/kg, was measured in the topsoil of plot 3, and maximum cadmium movement was observed in the 0-20 cm layer. The highest copper concentration was 27.76 mg/kg, with maximum percolation likewise in the 0-20 cm layer. Both metals (Cd, Cu) were also measured in the leached water. Preferential flow and metal complexation with soluble organic matter apparently allow leaching of heavy metals.

Keywords: Heavy metal, sludge, soil, transport.

1768 A New Approach for Image Segmentation using Pillar-Kmeans Algorithm

Authors: Ali Ridho Barakbah, Yasushi Kiyoki

Abstract:

This paper presents a new approach to image segmentation applying the Pillar-Kmeans algorithm. The segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies K-means clustering to image segmentation after optimization by the Pillar algorithm. The Pillar algorithm treats the placement of pillars, which should be located as far from each other as possible to withstand the pressure distribution of a roof, as analogous to the placement of centroids within the data distribution. The algorithm is able to optimize K-means clustering for image segmentation with respect to both precision and computation time. It designates the initial centroid positions by calculating the accumulated distance metric between each data point and all previously selected centroids, and then selects the data point with the maximum distance as the next initial centroid, thereby distributing all initial centroids according to the maximum accumulated distance metric. This paper evaluates the proposed approach by comparing it with the K-means and Gaussian Mixture Model algorithms across the RGB, HSV, HSL, and CIELAB color spaces. The experimental results demonstrate the effectiveness of our approach in improving segmentation quality in terms of both precision and computation time.
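
A minimal Python sketch of the seeding rule described above, assuming a plain accumulated-Euclidean-distance criterion and omitting the published algorithm's outlier handling and normalization steps; the synthetic three-blob data set is ours.

```python
import numpy as np

def pillar_init(X, k):
    """Pillar-style seeding (simplified): the first centroid is the point
    farthest from the grand mean; each subsequent centroid is the point
    with the maximum accumulated distance to all previous centroids."""
    acc = np.zeros(len(X))
    idx = [int(np.argmax(np.linalg.norm(X - X.mean(axis=0), axis=1)))]
    while len(idx) < k:
        acc += np.linalg.norm(X - X[idx[-1]], axis=1)
        cand = acc.copy()
        cand[idx] = -1.0          # do not re-select an existing centroid
        idx.append(int(np.argmax(cand)))
    return X[idx].copy()

def kmeans(X, k, iters=100):
    """Plain Lloyd iterations started from Pillar-style seeds."""
    C = pillar_init(X, k)
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - C[None], axis=2), axis=1)
        C = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                      else C[j] for j in range(k)])
    return labels, C

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in (0.0, 3.0, 6.0)])
labels, C = kmeans(X, 3)
print(np.round(C, 2))   # one centroid near each of (0,0), (3,3), (6,6)
```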

Keywords: Image segmentation, K-means clustering, Pillar algorithm, color spaces.

1767 Development of Total Maximum Daily Load Using Water Quality Modelling as an Approach for Watershed Management in Malaysia

Authors: S. A. Che Osmi, W. M. F. Wan Ishak, H. Kim, M. A. Azman, M. A. Ramli

Abstract:

Rivers are an important water source for many activities, including industrial and domestic usage, transportation, power supply, and recreation. However, increasing activity along rivers has multiplied the sources of pollutants entering the water body and degraded river water quality. It has become a challenge to develop effective river management that ensures these water sources are well managed and regulated. In Malaysia, several approaches to river management have been implemented, such as the Integrated River Basin Management (IRBM) program, led by the Department of Irrigation and Drainage (DID), Malaysia, which coordinates the management of resources in a natural environment on a river-basin basis to ensure their sustainability. Nowadays, the Total Maximum Daily Load (TMDL) is one of the best approaches to river management in Malaysia; TMDL implementation is already regulated and practiced in the United States. A study on the development of a TMDL for the Malacca River has been carried out through water quality monitoring, the development of a water quality model using the Environmental Fluid Dynamics Code (EFDC), and a TMDL implementation plan. Implementing the TMDL will help stakeholders and regulators control and improve the water quality of the river, making it a sound approach for river management in Malaysia.

Keywords: EFDC, river management, TMDL, water quality modelling.

1766 Effect of Thistle Ecotype in the Physical-Chemical and Sensorial Properties of Serra da Estrela Cheese

Authors: Raquel P. F. Guiné, Marlene I. C. Tenreiro, Ana C. Correia, Paulo Barracosa, Paula M. R. Correia

Abstract:

The objective of this study was to evaluate the physical and chemical characteristics of Serra da Estrela cheese and to compare these results with those of a sensory analysis. Six samples of Serra da Estrela cheese, produced with six different ecotypes of thistle, were taken from a dairy situated in Penalva do Castelo. The chemical properties evaluated were moisture content, protein, fat, ash, chlorides, and pH; the physical properties studied were color and texture; finally, a sensory evaluation was undertaken. The results showed moisture varying in the range 40-48%, protein 15-20%, fat 41-45%, ash 3.9-5.0%, and chlorides from 1.2 to 3.0%; the pH varied from 4.8 to 5.4. The textural properties revealed that crust hardness is relatively low (maximum 7.3 N), although greater than flesh firmness (maximum 1.7 N), and that these cheeses are indeed of the soft-paste type, with measurable stickiness and intense adhesiveness. The color analysis showed that the crust is relatively light (L* over 50) with a predominantly yellow coloration (b* around 20 or over), although with a slight greenish tone (negative a*). The sensory analysis did not show great variability for most of the attributes measured, although some differences were found in attributes such as crust thickness, crust uniformity, and flesh creaminess.

Keywords: Chemical composition, color, sensorial analysis, Serra da Estrela cheese, texture.

1765 Design of an Eddy Current Brake System for the Use of Roller Coasters Based on a Human Factors Engineering Approach

Authors: Adam L. Yanagihara, Yong Seok Park

Abstract:

The goal of this paper is to converge upon a design for a brake system that could be used on a roller coaster at an amusement park. It was necessary to determine what could be deemed a “comfortable” deceleration, so that passengers do not feel suddenly jerked and pressed against the restraining harnesses. A human factors engineering approach was taken to determine this deceleration. Based on a previous study that tested the deceleration of transit vehicles, a deceleration of 0.45 g was adopted as the design requirement around which to build the system. An adjustable linear eddy current brake using permanent magnets was found to be the ideal system for meeting this requirement. Anthropometric data were then used to determine a realistic weight and length for the roller coaster the brake was being designed for. The weight and length data were then factored into magnetic brake force equations, which were used to determine the design of the brake system and the layout of the brake run. A final design was reached: a total of 12 brakes would be needed, with a maximum braking distance of 53.6 m, to stop a roller coaster travelling at its top speed and loaded to maximum capacity. This design is derived from theoretical calculations, but is within the realm of feasibility.
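
A quick kinematic consistency check of the quoted figures (our back-calculation, not from the paper): with v^2 = 2ad, a 0.45 g stop over the stated 53.6 m implies an entry speed of roughly 22 m/s.

```python
# Back-of-envelope kinematics for the figures quoted above.
g = 9.81          # m/s^2
a = 0.45 * g      # "comfortable" deceleration from the transit-vehicle study
d = 53.6          # quoted maximum braking distance, m

v = (2 * a * d) ** 0.5   # from v^2 = 2*a*d
print(f"implied top speed: {v:.1f} m/s ({v * 3.6:.0f} km/h)")  # ~21.8 m/s (~78 km/h)
```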

Keywords: Eddy current brake, engineering design, human factors engineering.

1764 A Case Study on the Numerical-Probability Approach for Deep Excavation Analysis

Authors: Komeil Valipourian

Abstract:

Urban development and the growing need for infrastructure have increased the importance of deep excavations. In this study, after introducing probability analysis as an important issue, an attempt is made to apply it to the deep excavation project of the Bangkok Metro as a case study. To this end, a numerical probability model was developed based on the Finite Difference Method (FDM) and a Monte Carlo sampling approach. The results indicate that disregarding probability in this project would result in an inappropriate design of the retaining structure. Therefore, a probabilistic redesign of the support is proposed and carried out as one application of probability analysis. A 50% reduction in the flexural strength of the structure increases the failure probability by just 8%, keeping it within the allowable range, and improves the economics of the design while maintaining mechanical efficiency. Given the lack of efficient design in most deep excavations, an attempt was then made, by considering geometrical and geotechnical variability, to develop an optimal practical design standard for deep excavations based on failure probability. On this basis, a practical relationship is presented for estimating the maximum allowable horizontal displacement, which can help improve design conditions without carrying out a full probability analysis.
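
A schematic Python sketch of the Monte Carlo step described above; the limit state, parameter distributions, and allowable displacement below are hypothetical stand-ins, not the Bangkok case values.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                       # Monte Carlo samples

# Hypothetical input variability (illustrative, not the case-study values).
E_s = rng.lognormal(mean=np.log(40e3), sigma=0.25, size=N)   # soil modulus, kPa
phi = rng.normal(30.0, 2.5, size=N)                          # friction angle, deg

# Stand-in response model: wall displacement decreasing in both parameters.
delta = 1.0e6 / (E_s * np.tan(np.radians(phi)))              # mm

delta_allow = 70.0                # assumed allowable horizontal displacement, mm
p_f = np.mean(delta > delta_allow)
print(f"failure probability ~ {p_f:.2%}")   # fraction of samples exceeding the limit
```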

Keywords: Numerical probability modeling, deep excavation, allowable maximum displacement, finite difference method, FDM.

1763 Jeffrey's Prior for Unknown Sinusoidal Noise Model via Cramer-Rao Lower Bound

Authors: Samuel A. Phillips, Emmanuel A. Ayanlowo, Rasaki O. Olanrewaju, Olayode Fatoki

Abstract:

This paper employs the Jeffrey's prior technique to estimate the periodograms and frequency of a sinusoidal model for unknown noisy time-variant or oscillating events (data) in a Bayesian setting. The non-informative Jeffrey's prior was adopted for the posterior trigonometric function of the sinusoidal model, and Cramer-Rao Lower Bound (CRLB) inference was used to carve out the minimum variance needed to curb the invariance-structure effect for unknown noisy time observations and repeated circular patterns. An average monthly oscillating temperature series measured in degrees Celsius (°C) from 1901 to 2014 was subjected to the posterior solution of the unknown noisy events of the sinusoidal model via Markov Chain Monte Carlo (MCMC). It was deduced not only that a two-minute period is required to complete a cycle of changing temperature from one particular degree Celsius to another, but also that the sinusoidal model via the CRLB-Jeffrey's prior for unknown noisy events produced a smaller posterior Maximum A Posteriori (MAP) estimate than that for known noisy events.

Keywords: Cramer-Rao Lower Bound (CRLB), Jeffrey's prior, Sinusoidal, Maximum A Posteriori (MAP), Markov Chain Monte Carlo (MCMC), Periodograms.

1762 Thermodynamic Cycle Analysis for Overall Efficiency Improvement and Temperature Reduction in Gas Turbines

Authors: Jeni A. Popescu, Ionut Porumbel, Valeriu A. Vilag, Cleopatra F. Cuciumita

Abstract:

The paper presents a thermodynamic cycle analysis for turboshaft engines. The first cycle is a Brayton cycle, describing the evolution of a classical turboshaft based on the Klimov TV2 engine. The other four cycles aim at approaching an Ericsson cycle by replacing the adiabatic expansion in the Brayton-cycle turbine with quasi-isothermal expansion. The maximum quasi-Ericsson cycle temperature is set lower than the maximum Brayton cycle temperature, equal to the Brayton-cycle power-turbine inlet temperature, in order to decrease the engine's NOx emissions. In two of the considered quasi-Ericsson cycles, the efficiencies of the gas generator turbine, as well as the power/expansion ratio distribution over its stages, are kept the same as in the reference case, while in the other two cases the efficiencies are increased in order to obtain the same shaft power as in the reference case. For the two cases respecting the first condition, both the shaft power and the thermodynamic efficiency of the engine decrease, while for the other two, power and efficiency are maintained as a result of assuming new, more efficient gas generator turbines.

Keywords: Combustion, Ericsson, thermodynamic analysis, turbine.

1761 Evaluation of Biofertilizer and Manure Effects on Quantitative Yield of Nigella sativa L.

Authors: Mohammad Reza Haj Seyed Hadi, Fereshteh Ghanepasand, Mohammad Taghi Darzi

Abstract:

The main objective of this study was to determine the effects of nitrogen-fixing bacteria and manure application on seed yield and yield components in black cumin (Nigella sativa L.). The experiment was carried out at the RAN Research Station in Firouzkouh in 2012 as a 4×4 factorial arranged in a randomized complete block design with three replications. Nitrogen-fixing bacteria at four levels (control, Azotobacter, Azospirillum, and Azotobacter + Azospirillum) and manure at four levels (0, 2.5, 5, and 7.5 t ha^-1) were used in this investigation. The results show that the greatest plant height, 1000-seed weight, seed number per follicle, follicle yield, seed yield, and harvest index were obtained when Azotobacter and Azospirillum were applied simultaneously. Manure application affected only follicle yield, with the highest follicle yield obtained at 5 t manure ha^-1. The maximum seed yield was obtained when black cumin seeds were inoculated with Azotobacter + Azospirillum and 5 t manure ha^-1 was applied. According to these results, the integrated management of Azotobacter and Azospirillum with manure application is the best treatment for maximizing the quantitative characteristics of black cumin.

Keywords: Azotobacter, azospirillum, black cumin, yield, yield components.

1760 Numerical Optimization within Vector of Parameters Estimation in Volatility Models

Authors: J. Arneric, A. Rozga

Abstract:

In this paper, the usefulness of the quasi-Newton iteration procedure for estimating the parameters of the conditional variance equation within the BHHH algorithm is presented. An analytical maximization of the likelihood function using first and second derivatives is too complex when the variance is time-varying. The advantage of the BHHH algorithm over other optimization algorithms is that it requires no third derivatives while still assuring convergence. To simplify the optimization procedure, the BHHH algorithm uses an approximation of the matrix of second derivatives based on the information identity. However, parameter estimation in (a)symmetric GARCH(1,1) models assuming normally distributed returns is not that simple, i.e. it is difficult to solve analytically; the maximum of the likelihood function is instead found by iterating until no further increase can be obtained. Because the solutions of the numerical optimization are very sensitive to the initial values, starting parameters for the GARCH(1,1) model are defined; the number of iterations can be reduced by using starting values close to the global maximum. The optimization procedure is illustrated by modeling daily volatility of the most liquid stocks on the Croatian capital market: Podravka (food industry), Petrokemija (fertilizer industry), and Ericsson Nikola Tesla (information and communications industry).
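
A compact Python sketch of a BHHH iteration for a Gaussian GARCH(1,1), using numerical per-observation scores and the outer-product approximation to the Hessian; the simulated series, step size, and positivity guard are our illustrative choices.

```python
import numpy as np

def loglik_t(theta, r):
    """Per-observation Gaussian GARCH(1,1) log-likelihoods l_t(theta)."""
    omega, alpha, beta = theta
    h = np.empty_like(r)
    h[0] = r.var()
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return -0.5 * (np.log(2 * np.pi) + np.log(h) + r ** 2 / h)

def bhhh(r, theta0, steps=500, eps=1e-5, lr=0.5):
    """BHHH iterations: the Hessian is replaced by the outer product of
    per-observation scores (information-matrix identity), so only first
    derivatives, here numerical, are ever needed."""
    theta, I = np.asarray(theta0, float), np.eye(3)
    for _ in range(steps):
        G = np.column_stack([(loglik_t(theta + eps * I[i], r)
                              - loglik_t(theta - eps * I[i], r)) / (2 * eps)
                             for i in range(3)])
        step = np.linalg.solve(G.T @ G + 1e-8 * I, G.sum(axis=0))
        theta = np.maximum(theta + lr * step, 1e-6)  # crude positivity guard
        if np.linalg.norm(step) < 1e-7:
            break
    return theta

# Simulated returns with (omega, alpha, beta) = (0.05, 0.08, 0.90),
# just to exercise the routine.
rng = np.random.default_rng(1)
T, (w, a, b) = 2000, (0.05, 0.08, 0.90)
h, r = w / (1 - a - b), np.empty(T)
for t in range(T):
    r[t] = rng.normal(0.0, np.sqrt(h))
    h = w + a * r[t] ** 2 + b * h
print(bhhh(r, theta0=(0.1, 0.1, 0.8)))  # should move toward the true values
```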

Keywords: Heteroscedasticity, Log-likelihood Maximization, Quasi-Newton iteration procedure, Volatility.

1759 An Evaluation of Solubility of Wax and Asphaltene in Crude Oil for Improved Flow Properties Using a Copolymer Solubilized in Organic Solvent with an Aromatic Hydrocarbon

Authors: S. M. Anisuzzaman, Sariah Abang, Awang Bono, D. Krishnaiah, N. M. Ismail, G. B. Sandrison

Abstract:

Wax and asphaltene are high-molecular-weight compounds that contribute to the stability of crude oil in a dispersed state. Transportation of crude oil along pipelines from the oil rig to the refineries causes temperature fluctuations which lead to the coagulation of wax and the flocculation of asphaltenes. This paper focuses on preventing the deposition of wax and asphaltene precipitates on the inner surface of pipelines by using a wax inhibitor and an asphaltene dispersant. The novelty of this prevention method is the combination of three substances: a wax inhibitor dissolved in a wax inhibitor solvent and an asphaltene solvent, namely ethylene-vinyl acetate (EVA) copolymer dissolved in methylcyclohexane (MCH) and toluene (TOL), to inhibit the precipitation and deposition of wax and asphaltene. The objective of this paper was to optimize the percentage composition of each component in the inhibitor so as to maximize the viscosity reduction of the crude oil. The optimization was divided into two stages: a laboratory experimental stage, in which the viscosity of crude oil samples containing inhibitors of different component compositions was tested at decreasing temperatures, and a data optimization stage using response surface methodology (RSM) to design an optimizing model. The experimental results showed that the combination of 50% EVA + 25% MCH + 25% TOL gave a maximum viscosity reduction of 67%, while the RSM model indicated that the combination of 57% EVA + 20.5% MCH + 22.5% TOL gave a maximum viscosity reduction of up to 61%.

Keywords: Asphaltene, ethylene-vinyl acetate, methylcyclohexane, toluene, wax.

1758 An Approximation Method for Three Quark Systems in the Hyper-Spherical Approach

Authors: B. Rezaei, G. R. Boroun, M. Abdolmaleki

Abstract:

The bound-state energy of three-quark systems is studied in the framework of a non-relativistic, spin-independent phenomenological model. Hyper-spherical coordinates are adopted for the solution of this system. Using Jacobi coordinates, we determined the bound-state energy for the (uud) and (ddu) quark systems, treating the quark masses as flavor-independent, with the choice of potential restricted at low and high range in the nucleon bag for a bound state.

Keywords: Adiabatic expansion, grand angular momentum, binding energy, perturbation, baryons.

1757 Systematics of Water Lilies (Genus Nymphaea L.) Using 18S rDNA Sequences

Authors: M. Nakkuntod, S. Srinarang, K.W. Hilu

Abstract:

Water lily (Nymphaea L.) is the largest genus of the Nymphaeaceae. This family comprises six genera (Nuphar, Ondinea, Euryale, Victoria, Barclaya, Nymphaea), whose members occur nearly worldwide in tropical and temperate regions. The classification of some species of Nymphaea is ambiguous due to high variation in leaf and flower parts, such as the leaf margin and stamen appendages. Therefore, phylogenetic relationships based on 18S rDNA were reconstructed to delimit this genus. DNA from 52 specimens belonging to the water lily family was extracted using a modified conventional method based on cetyltrimethylammonium bromide (CTAB). The amplified fragment is about 1600 base pairs in size. After analysis, the aligned sequences showed 9.36% variable characters, comprising 2.66% parsimony-informative sites and 6.70% singleton sites; moreover, there are six insertion/deletion regions of 1-2 bases. Phylogenetic trees based on maximum parsimony and maximum likelihood, with high bootstrap support, indicated that the genus Nymphaea is paraphyletic because of the placement of Ondinea, Victoria, and Euryale within it. Within Nymphaea, subgenus Nymphaea is a basal lineage grouping with Euryale and Victoria. The other four subgenera, namely Lotos, Hydrocallis, Brachyceras, and Anecphya, fell in the same large clade, with Ondinea placed within the Anecphya clade owing to shared geography.

Keywords: nrDNA, phylogeny, taxonomy, Waterlily.

1756 Feature Reduction of Nearest Neighbor Classifiers using Genetic Algorithm

Authors: M. Analoui, M. Fadavi Amiri

Abstract:

The design of a pattern classifier includes an attempt to select, among a set of possible features, a minimum subset of weakly correlated features that better discriminate the pattern classes. This is usually a difficult task in practice, normally requiring the application of heuristic knowledge about the specific problem domain. The selection and quality of the features representing each pattern have a considerable bearing on the success of subsequent pattern classification. Feature extraction is the process of deriving new features from the original features in order to reduce the cost of feature measurement, increase classifier efficiency, and allow higher classification accuracy. Many current feature extraction techniques involve linear transformations of the original pattern vectors to new vectors of lower dimensionality. While this is useful for data visualization and for increasing classification efficiency, it does not necessarily reduce the number of features that must be measured, since each new feature may be a linear combination of all of the features in the original pattern vector. In this paper, a new approach to feature extraction is presented in which feature selection, feature extraction, and classifier training are performed simultaneously using a genetic algorithm. Each feature value is first normalized by a linear equation and then scaled by an associated weight prior to training, testing, and classification. A k-NN classifier is used to evaluate each set of feature weights. The genetic algorithm optimizes a vector of feature weights, which are used to scale the individual features in the original pattern vectors in either a linear or a nonlinear fashion. With this approach, the number of features used in classification can be substantially reduced.
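
A toy Python sketch of the scheme described above (our simplification): a real-valued GA evolves feature weights scored by leave-one-out k-NN accuracy, so near-zero weights effectively deselect features.

```python
import numpy as np

def knn_accuracy(Xw, y, k=3):
    """Leave-one-out accuracy of a k-NN classifier on weighted features."""
    D = np.linalg.norm(Xw[:, None] - Xw[None], axis=2)
    np.fill_diagonal(D, np.inf)              # a point may not vote for itself
    nn = np.argsort(D, axis=1)[:, :k]
    pred = np.array([np.bincount(y[i]).argmax() for i in nn])
    return float((pred == y).mean())

def ga_feature_weights(X, y, pop=30, gens=40, seed=0):
    """Toy GA over real-valued weight vectors: tournament selection,
    blend crossover, Gaussian mutation, elitism."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(0, 1, (pop, X.shape[1]))
    best_w, best_f = P[0], -1.0
    for _ in range(gens):
        fit = np.array([knn_accuracy(X * w, y) for w in P])
        if fit.max() > best_f:
            best_w, best_f = P[fit.argmax()].copy(), float(fit.max())
        i, j = rng.integers(0, pop, (2, pop))
        parents = P[np.where(fit[i] > fit[j], i, j)]          # tournaments
        mix = rng.uniform(0, 1, (pop, 1))                     # blend crossover
        P = mix * parents + (1 - mix) * parents[rng.permutation(pop)]
        P = np.clip(P + rng.normal(0, 0.05, P.shape), 0, 1)   # mutation
        P[0] = best_w                                         # elitism
    return best_w

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 40)
X = np.hstack([rng.normal(y[:, None] * 2.0, 1.0, (80, 2)),   # 2 informative
               rng.normal(0.0, 1.0, (80, 6))])               # 6 noise features
w = ga_feature_weights(X, y)
print(np.round(w, 2))  # informative columns should receive the larger weights
```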

Keywords: Feature reduction, genetic algorithm, pattern classification, nearest neighbor rule classifiers (k-NNR).

1755 Impact of GCSC on Measured Impedance by Distance Relay in the Presence of Single Phase to Earth Fault

Authors: M. Zellagui, A. Chaghi

Abstract:

This paper presents a study of the impact of GTO-Controlled Series Capacitor (GCSC) parameters on the impedance (Zseen) measured by MHO distance relays on a single 220 kV high-voltage transmission line in the presence of a single-phase-to-earth fault with fault resistance (RF). The study considers a 220 kV transmission line of the eastern Algerian transmission network operated by Sonelgaz (the Algerian electricity and gas company), compensated by a series Flexible AC Transmission System (FACTS) device, i.e. a GCSC connected at the midpoint of the line. The transmitted active and reactive powers are controlled by three GCSCs. The effects of the maximum reactive power and the maximum voltage injected by the GCSC on the impedance measured by the distance relays are treated. The simulation results investigate the effects of the GCSC injected parameters, namely variable reactance (XGCSC), variable voltage (VGCSC), and injected reactive power (QGCSC), on the measured resistance and reactance in the presence of an earth fault with fault resistance varied between 5 and 50 Ω for three case studies.

Keywords: GCSC Parameters, Transmission line, Earth fault, Symmetrical components, Distance protection, Measured impedance.

1754 Microbial Oil Production by Mixed Culture of Microalgae Chlorella sp. KKU-S2 and Yeast Torulaspora maleeae Y30

Authors: Ratanaporn Leesing, Rattanaporn Baojungharn, Thidarat Papone

Abstract:

Compared with oil production from single microorganisms, little work has been performed on mixed cultures of microalgae and yeast. This article aims to show the high oil-accumulation potential of a mixed culture of the microalga Chlorella sp. KKU-S2 and the oleaginous yeast Torulaspora maleeae Y30 using sugarcane molasses as substrate. The monoculture of T. maleeae Y30 grew faster than that of the microalga Chlorella sp. KKU-S2. In the yeast monoculture, a biomass of 6.4 g/L with a specific growth rate (μ) of 0.265 d^-1 and a lipid yield of 0.466 g/L were obtained, while 2.53 g/L of biomass with μ of 0.133 d^-1 and a lipid yield of 0.132 g/L were obtained for the monoculture of Chlorella sp. KKU-S2. The biomass concentration in the mixed culture of T. maleeae Y30 with Chlorella sp. KKU-S2 increased faster and was higher than in the monocultures and in the mixed culture of microalgae. In the mixed culture of the microalgae Chlorella sp. KKU-S2 and C. vulgaris TISTR8580, a biomass of 3.47 g/L and a lipid yield of 0.123 g/L were obtained. In the mixed culture of T. maleeae Y30 with Chlorella sp. KKU-S2, a maximum biomass of 7.33 g/L and a lipid yield of 0.808 g/L were obtained. The maximum cell yield coefficient (YX/S, 0.229 g/L), specific lipid yield (YP/X, 0.11 g lipid/g cells), and volumetric lipid production rate (QP, 0.115 g/L/d) were obtained in the mixed culture of yeast and microalgae. Clearly, T. maleeae Y30 and Chlorella sp. KKU-S2 use sugarcane molasses efficiently as organic nutrients in mixed culture under mixotrophic growth. Biomass productivity and lipid yield are notably enhanced in comparison with the monocultures.

Keywords: Microbial oil, Chlorella sp. KKU-S2, Chlorella vulgaris, Torulaspora maleeae Y30, mixed culture, biodiesel.

1753 Organizational De-Evolution; the Small Group or Single Actor Terrorist

Authors: Audrey Heffron-Casserleigh, Jarrett Broder, Brad Skillman

Abstract:

Traditionally, terror groups have been formed by ideologically aligned actors who perceive a lack of options for achieving political or social change. However, terrorist attacks have increasingly been carried out by small groups of actors or lone individuals who may be only ideologically affiliated with larger, formal terrorist organizations. The formation of these groups represents the inverse of traditional organizational growth, whereby structural de-evolution within issue-based organizations leads to the formation of small, independent terror cells. Ideological franchising, the bypassing of formal affiliation to the "parent" organization, represents the de-evolution of traditional concepts of organizational structure in favor of an organic, independent, and focused unit. Traditional definitions of issue-based dark networks include focus on an identified goal, commitment to achieving this goal through unrestrained actions, and selection of symbolic targets. The next step in the de-evolution of small dark networks is the mini-organization, consisting of only a handful of actors working toward a common, violent goal. Information sharing through social media platforms, coupled with the civil liberties of democratic nations, provides the communication systems, access to information, and freedom of movement necessary for small dark networks to flourish without the aid of a parent organization. As attacks such as the 7/7 bombings demonstrate the effectiveness of small dark networks, terrorist actors will feel increasingly comfortable aligning with an ideology only, without formally organizing. The natural result of this de-evolving organization is the single-actor event, where an individual appears to subscribe to a larger organization's violent ideology with little or no formal ties.

Keywords: Organizational de-evolution, single actor, small group, terrorism.

1752 The Prevalence of Organized Retail Crime in Riyadh, Saudi Arabia

Authors: Saleh Dabil

Abstract:

This study investigates the prevalence of organized retail crime in supermarkets in Riyadh, Saudi Arabia. Store managers, security managers, and general employees were asked about the types of retail crime occurring in their stores. Three independent variables were related to reports of organized retail theft: 1) the supermarket profile (volume, location, standard, and type of store); 2) the social and physical environment of the store (maintenance, cleanliness, and overall organizational cooperation); and 3) the security techniques and loss-prevention electronics used. The theoretical framework of the study is based on social disorganization theory. The study concludes that organized retail theft is moderately apparent in Riyadh stores. The general results showed that the store environment has an effect on the prevalence of organized retail theft in relation to the gender of thieves, age groups, working shift, type of stolen items, and the number of thieves involved in a single case. Among other reasons, one factor in organized theft is the economic pressure on customers, depending on the location of the store. How thefts are handled was also investigated to provide a clear picture of how stores deal with organized retail theft; the results showed that thieves are mostly released without any action, are sometimes given a written warning, and in very few cases are referred to the police. To solve the problem of organized theft, the study suggests, first, distributing duties and responsibilities well among employees, especially for security purposes; second, installing strong security systems and designing store layouts well; and third, providing general employees with training, including periodic security-skills training. Other factors and suggestions are discussed in the full text.

Keywords: Organized Crime, Retail, Theft, Loss prevention, Store environment.

1751 Strong Limit Theorems for Dependent Random Variables

Authors: Libin Wu, Bainian Li

Abstract:

In this article we establish a moment inequality for dependent random variables, and furthermore prove some theorems on the strong law of large numbers and complete convergence for sequences of dependent random variables. In particular, the Marcinkiewicz law of large numbers for independent and identically distributed variables is generalized to the case of m0-dependent sequences.
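
For reference, a minimal LaTeX statement of the classical i.i.d. Marcinkiewicz-Zygmund strong law that the abstract generalizes (our rendering of the textbook result, not notation from the paper):

```latex
% Classical i.i.d. Marcinkiewicz--Zygmund strong law of large numbers:
% if $X_1, X_2, \dots$ are i.i.d. with $\mathbb{E}|X_1|^p < \infty$ for some
% $0 < p < 2$, then
\[
  \frac{1}{n^{1/p}} \sum_{i=1}^{n} \left( X_i - c \right)
  \xrightarrow{\text{a.s.}} 0,
  \qquad
  c =
  \begin{cases}
    \mathbb{E}X_1, & 1 \le p < 2, \\
    0,             & 0 < p < 1.
  \end{cases}
\]
```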

Keywords: Lacunary System, Generalized Gaussian, NA sequences, strong law of large numbers.

1750 Modelling Hydrological Time Series Using Wakeby Distribution

Authors: Ilaria Lucrezia Amerise

Abstract:

The statistical modelling of precipitation data for a given portion of territory is fundamental for the monitoring of climatic conditions and for Hydrogeological Management Plans (HMP). This modelling is rendered particularly complex by changes in the frequency and intensity of precipitation, presumably attributable to global climate change. This paper applies the five-parameter Wakeby distribution as a theoretical reference model. The number and quality of its parameters indicate that this distribution may be an appropriate choice for interpolating hydrological variables; moreover, the Wakeby is particularly suitable for describing phenomena with heavy tails. The estimation methods proposed for determining the Wakeby parameters are those used for density functions with heavy tails. The commonly used procedure is the classic method of probability-weighted moments (PWM), although this has often shown difficulty of convergence, or rather convergence to an inappropriate parameter configuration. In this paper, we analyze the problem of likelihood estimation for a random variable expressed through its quantile function. The method of maximum likelihood is, in this case, more demanding than in more usual estimation settings. Its appeal lies in the sampling and asymptotic properties of maximum likelihood estimators, which improve the estimates by providing indications of their variability and, therefore, of their accuracy and reliability. These features are highly appreciated in contexts where poor decisions, attributable to an inefficient or incomplete information base, can cause serious damage.
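
A short Python sketch of the distribution's defining object, the quantile function, in Hosking's usual parameterization, together with inverse-transform sampling; the parameter values are illustrative only.

```python
import numpy as np

def wakeby_quantile(F, xi, alpha, beta, gamma, delta):
    """Wakeby quantile function
       x(F) = xi + (alpha/beta) * (1 - (1-F)^beta)
                 - (gamma/delta) * (1 - (1-F)^(-delta)).
    The distribution is defined directly through x(F); there is no
    closed-form density, which is why likelihood work is done on the
    quantile function."""
    F = np.asarray(F, float)
    return (xi
            + (alpha / beta) * (1.0 - (1.0 - F) ** beta)
            - (gamma / delta) * (1.0 - (1.0 - F) ** (-delta)))

# Inverse-transform sampling: plug uniforms into the quantile function.
rng = np.random.default_rng(0)
params = dict(xi=0.0, alpha=5.0, beta=0.5, gamma=1.0, delta=0.3)  # illustrative
sample = wakeby_quantile(rng.uniform(size=10_000), **params)
print(sample.mean(), np.quantile(sample, 0.99))  # heavy upper tail for delta > 0
```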

Keywords: Generalized extreme values (GEV), likelihood estimation, precipitation data, Wakeby distribution.

1749 Seamless Flow of Voluminous Data in High Speed Network without Congestion Using Feedback Mechanism

Authors: T. Sheela, J. Raja

Abstract:

The continuously growing needs of Internet applications that transmit massive amounts of data have led to the emergence of high-speed networks. Data transfer must take place without congestion, and hence feedback parameters must be sent from the receiver to the sender to restrict the sending rate. Although TCP tries to avoid congestion by restricting the sending rate and window size, it never informs the sender of the capacity of data that can be sent, and it halves the window size at the time of congestion, resulting in decreased throughput, low bandwidth utilization, and maximum delay. In this paper, the XCP protocol is used: feedback parameters are calculated from the arrival rate, service rate, traffic rate, and queue size, and the receiver informs the sender about the throughput, the capacity of data to be sent, and the window-size adjustment. This avoids drastic decreases in window size and allows better increases in sending rate, giving a continuous flow of data without congestion. As a result, throughput increases markedly, bandwidth is highly utilized, and delay is minimized. The results of the proposed work are presented as graphs of throughput, delay, and window size. The XCP protocol is thus illustrated in detail, and the various parameters are thoroughly analyzed and presented.
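
For intuition, a sketch of the per-interval aggregate feedback an XCP router computes (after Katabi et al.'s formulation with the usual alpha = 0.4 and beta = 0.226); the link numbers in the example are invented.

```python
def xcp_aggregate_feedback(capacity_bps, input_rate_bps, queue_bytes, avg_rtt_s,
                           alpha=0.4, beta=0.226):
    """Per-control-interval aggregate feedback of an XCP router:
    phi = alpha * rtt * spare_bandwidth - beta * persistent_queue.
    Positive phi is then apportioned to flows via congestion headers,
    so senders learn how much to grow (or shrink) their windows."""
    spare_Bps = (capacity_bps - input_rate_bps) / 8.0   # spare bandwidth, bytes/s
    return alpha * avg_rtt_s * spare_Bps - beta * queue_bytes  # bytes

# Example: a 100 Mb/s link at 90% utilization with a 50 kB standing queue.
phi = xcp_aggregate_feedback(100e6, 90e6, 50_000, avg_rtt_s=0.08)
print(f"aggregate feedback: {phi:,.0f} bytes per control interval")  # ~ +28,700
```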

Keywords: Bandwidth-Delay Product, Congestion Control, Congestion Window, TCP/IP

1748 Effect of Concrete Strength and Aspect Ratio on Strength and Ductility of Concrete Columns

Authors: Mohamed A. Shanan, Ashraf H. El-Zanaty, Kamal G. Metwally

Abstract:

This paper presents the effect of concrete compressive strength and rectangularity (aspect) ratio on the strength and ductility of normal- and high-strength reinforced concrete columns confined with transverse steel under axial compressive loading. Nineteen normal-strength rectangular concrete columns with different variables were tested in this research to study the effect of concrete compressive strength and rectangularity ratio on column strength and ductility. The paper also presents a nonlinear finite element analysis, using ANSYS 15 finite element software, of these specimens and of another twenty high-strength square concrete columns tested by other researchers. The results indicate that the axial force-axial strain relationships obtained from the ANSYS model are in good agreement with the experimental data; the comparison shows that ANSYS is capable of modeling and predicting the actual nonlinear behavior of confined normal- and high-strength concrete columns under concentric loading, and the maximum applied load and maximum strain were also confirmed to be satisfactory. Based on this agreement between the experimental and analytical results, a parametric numerical study was conducted with ANSYS 15 to clarify and evaluate the effect of each variable on column strength and ductility.

Keywords: ANSYS, concrete compressive strength effect, ductility, rectangularity ratio, strength.

1747 A Novel SVM-Based OOK Detector in Low SNR Infrared Channels

Authors: J. P. Dubois, O. M. Abdul-Latif

Abstract:

The Support Vector Machine (SVM) is a recent class of statistical classification and regression techniques that plays an increasing role in detection problems across engineering, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to an infrared (IR) binary communication system with different channel models, including Ricean multipath fading and a partially developed scattering channel, with additive white Gaussian noise (AWGN) at the receiver. The structure and performance of the SVM in terms of the bit error rate (BER) metric are derived and simulated for these stochastic channel models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of the SVM is then compared with classical binary maximum likelihood detection using a matched filter driven by On-Off Keying (OOK) modulation. We found that the performance of the SVM is superior to that of the traditional optimal detection schemes used in statistical communication, especially in very low signal-to-noise ratio (SNR) ranges, while for large SNR its performance is similar to that of the classical detectors. The implication of these results is that SVM can prove very beneficial to IR communication systems, which notoriously suffer from low SNR, at the cost of increased computational complexity.
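
A minimal Python/scikit-learn sketch contrasting the two detectors on synthetic OOK data; note that in the pure-AWGN toy below the matched filter is optimal, so the SVM can at best match it, and the paper's reported gains arise under the fading and scattering channel models.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_bits, spb = 4000, 8                     # number of bits, samples per bit
bits = rng.integers(0, 2, n_bits)
pulse = np.ones(spb)                      # rectangular OOK pulse
sigma = 2.0                               # heavy noise: a low-SNR regime
X = bits[:, None] * pulse + rng.normal(0.0, sigma, (n_bits, spb))

half = n_bits // 2                        # train on first half, test on second

# Matched-filter baseline: correlate with the pulse, threshold at midpoint.
mf_hat = (X[half:] @ pulse > pulse @ pulse / 2).astype(int)

# SVM detector on the raw per-bit sample vectors.
svm_hat = SVC(kernel="rbf").fit(X[:half], bits[:half]).predict(X[half:])

print("matched-filter BER:", np.mean(mf_hat != bits[half:]))
print("SVM BER:           ", np.mean(svm_hat != bits[half:]))
```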

Keywords: Least square-support vector machine, on-off keying, matched filter, maximum likelihood detector, wireless infrared communication.

1746 Experimental Investigation of the Effect of Compression Ratio in a Direct Injection Diesel Engine Running on Different Blends of Rice Bran Oil and Ethanol

Authors: Perminderjit Singh, Randeep Singh

Abstract:

The performance, emission, and combustion characteristics of a single-cylinder, four-stroke, variable compression ratio multi-fuel engine fueled with different blends of rice bran oil methyl ester and ethanol are investigated and compared with the results for standard diesel. The biodiesel used in this study was produced from rice bran oil by transesterification. Experiments were conducted at a fixed engine speed of 1500 rpm, 50% load, and compression ratios of 16.5:1, 17:1, 17.5:1, and 18:1. The impact of compression ratio on fuel consumption, brake thermal efficiency, and exhaust gas emissions was investigated, and the optimum compression ratio giving the best performance was identified. The results indicate a longer ignition delay, a higher maximum rate of pressure rise, a lower heat release rate, and a higher mass fraction burnt at higher compression ratios for the methyl ester blends compared with diesel. The brake thermal efficiency at 50% load was calculated for the rice bran oil methyl ester blends and diesel, and the blend B40 was found to give the maximum thermal efficiency. When used as fuel, the blends reduce carbon monoxide and hydrocarbon emissions but increase nitrogen oxide emissions.

Keywords: Biodiesel, Rice bran oil, Transesterification, Ethanol, Compression Ratio.

1745 Asymmetrical Informative Estimation for Macroeconomic Model: Special Case in the Tourism Sector of Thailand

Authors: Chukiat Chaiboonsri, Satawat Wannapan

Abstract:

This paper applies an asymmetric information concept to the estimation of a macroeconomic model of the tourism sector in Thailand. The variables analyzed are Thailand's international and domestic tourism revenues, the expenditures of foreign and domestic tourists, service investments by the private sector, service investments by the government of Thailand, Thailand's service imports and exports, and net service income transfers. All data are time-series indices observed between 2002 and 2015. Empirically, the tourism multiplier and accelerator were estimated by two statistical approaches. The first was the Generalized Method of Moments (GMM) model, based on the assumption that the tourism market in Thailand has perfect information (symmetrical data). The second was the Maximum Entropy Bootstrap approach (MEboot), based on a process that attempts to deal with imperfect information and reduce uncertainty in the data observations (asymmetrical data). In addition, tourism leakages were investigated with a simple model based on the injections-and-leakages concept. The empirical findings show that the parameters computed by the MEboot approach differ from those of the GMM method; however, both the MEboot estimation and the GMM model suggest that Thailand's tourism sectors are in a period capable of stimulating the economy.
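
A simplified Python sketch of one maximum-entropy bootstrap replicate in the spirit of Vinod's meboot (the full algorithm's tail treatment is omitted); the toy series is ours.

```python
import numpy as np

def meboot_replicate(x, rng):
    """One maximum-entropy bootstrap replicate (simplified): resample from
    a smoothed empirical quantile function, then restore the original time
    ordering so the replicate keeps the series' shape."""
    x = np.asarray(x, float)
    n = len(x)
    order = np.argsort(x)
    xs = x[order]
    # interpolation knots: endpoints plus midpoints between order statistics
    z = np.concatenate(([xs[0]], (xs[:-1] + xs[1:]) / 2, [xs[-1]]))
    u = np.sort(rng.uniform(size=n))
    draw = np.interp(u, np.linspace(0, 1, n + 1), z)
    # put the sorted draws back in the original series' rank positions
    rep = np.empty(n)
    rep[order] = draw
    return rep

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=120)) + 10          # toy monthly index
reps = np.array([meboot_replicate(series, rng) for _ in range(999)])
# e.g. an entropy-bootstrap band for the series mean:
print(np.percentile(reps.mean(axis=1), [2.5, 97.5]))
```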

Keywords: Thailand tourism, maximum entropy bootstrapping approach, macroeconomic model, asymmetric information.

1744 A Combined Approach of a Sequential Life Testing and an Accelerated Life Testing Applied to a Low-Alloy High Strength Steel Component

Authors: D. I. De Souza, D. R. Fonseca, G. P. Azevedo

Abstract:

Sometimes the amount of time available for testing can be considerably less than the expected lifetime of the component. To overcome this problem, there is the accelerated life testing alternative, aimed at forcing components to fail by testing them at much higher-than-intended application conditions; the models relating stress to life are known as acceleration models. One possible way to translate test results obtained under accelerated conditions to normal use conditions is through the application of the "Maxwell Distribution Law." In this paper we apply a combined approach of sequential life testing and accelerated life testing to a low-alloy, high-strength steel component used in the construction of overpasses in Brazil. The underlying sampling distribution is the three-parameter Inverse Weibull model, whose parameters we estimate by a maximum likelihood approach for censored failure data, assuming a linear acceleration condition. To evaluate the accuracy (significance) of the parameter values obtained under normal conditions for the underlying Inverse Weibull model, we apply a sequential life test with a truncation mechanism to the expected normal failure times. An example illustrates the application of this procedure.
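
A Python sketch of the censored-data maximum likelihood step for the three-parameter Inverse Weibull model, under our assumed Frechet-form CDF and synthetic data with a fixed censoring time (illustrative values, not the paper's data).

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, observed):
    """Negative log-likelihood for the three-parameter Inverse Weibull
    (Frechet) model with right censoring, using the CDF
    F(t) = exp(-((t - theta) / delta) ** (-beta)) for t > theta.
    Failures contribute log f(t); censored units contribute log(1 - F(t))."""
    beta, delta, theta = params
    if beta <= 0 or delta <= 0 or np.any(t <= theta):
        return np.inf
    z = (t - theta) / delta
    logF = -z ** (-beta)
    logf = np.log(beta / delta) - (beta + 1) * np.log(z) + logF
    return -(logf[observed].sum() + np.log1p(-np.exp(logF[~observed])).sum())

# Synthetic failure times with illustrative parameters and a fixed
# right-censoring time c (inverse-transform sampling of the Frechet form).
rng = np.random.default_rng(0)
b0, d0, th0 = 2.0, 100.0, 10.0
t = th0 + d0 * (-np.log(rng.uniform(size=300))) ** (-1.0 / b0)
c = 400.0
observed = t < c
t = np.minimum(t, c)

res = minimize(neg_loglik, x0=(1.0, float(np.median(t)), 0.5 * float(t.min())),
               args=(t, observed), method="Nelder-Mead")
print(res.x)   # should land near (2.0, 100.0, 10.0)
```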

Keywords: Sequential Life Testing, Accelerated Life Testing, Underlying Three-Parameter Weibull Model, Maximum Likelihood Approach, Hypothesis Testing.

1743 “Post-Industrial” Journalism as a Creative Industry

Authors: Lynette Sheridan Burns, Benjamin J. Matthews

Abstract:

The context of post-industrial journalism is one in which the material circumstances of mechanical publication have been displaced by digital technologies, increasing the distance between the orthodoxy of the newsroom and the culture of journalistic writing. Content is, with growing frequency, created for delivery via the internet, publication on web-based ‘platforms’ and consumption on screen media. In this environment, the question is not ‘who is a journalist?’ but ‘what is journalism?’ today. The changes bring into sharp relief new distinctions between journalistic work and journalistic labor, providing a key insight into the current transition between the industrial journalism of the 20th century, and the post-industrial journalism of the present. In the 20th century, the work of journalists and journalistic labor went hand-in-hand as most journalists were employees of news organizations, whilst in the 21st century evidence of a decoupling of ‘acts of journalism’ (work) and journalistic employment (labor) is beginning to appear. This 'decoupling' of the work and labor that underpins journalism practice is far reaching in its implications, not least for institutional structures. Under these conditions we are witnessing the emergence of expanded ‘entrepreneurial’ journalism, based on smaller, more independent and agile - if less stable - enterprise constructs that are a feature of creative industries. Entrepreneurial journalism is realized in a range of organizational forms from social enterprise, through to profit driven start-ups and hybrids of the two. In all instances, however, the primary motif of the organization is an ideological definition of journalism. An example is the Scoop Foundation for Public Interest Journalism in New Zealand, which owns and operates Scoop Publishing Limited, a not for profit company and social enterprise that publishes an independent news site that claims to have over 500,000 monthly users. Our paper demonstrates that this journalistic work meets the ideological definition of journalism; conducted within the creative industries using an innovative organizational structure that offers a new, viable post-industrial future for journalism.

Keywords: Creative industries, digital communication, journalism, post-industrial.

1742 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals

Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty

Abstract:

A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs, and the steep computational cost of evaluating these integrals poses a major numerical challenge in the efficient implementation of quantum chemistry software. This work presents a moment-based machine learning approach for the efficient evaluation of electron-repulsion integrals, which are approximated using linear combinations of a small number of moments. Machine learning algorithms were applied to estimate the coefficients in the linear combination. A random forest was used to identify promising features via recursive feature elimination; it performed best for learning the sign of each coefficient, but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, together with an iterative feature-masking approach that compresses the input vector by identifying a small subset of orbitals whose coefficients suffice for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results compared with a single network.
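
A small Python/scikit-learn sketch of the final step, an ensemble with median-rule fusion, on stand-in regression data (not actual electron-repulsion integral data).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Stand-in data: features playing the role of moment descriptors, target
# playing the role of a coefficient magnitude.
X = rng.normal(size=(2000, 16))
y = np.abs(X[:, :4] @ np.array([0.8, -0.5, 0.3, 0.2])) + 0.05 * rng.normal(size=2000)

X_train, X_test, y_train, y_test = X[:1600], X[1600:], y[:1600], y[1600:]

# Small ensemble of two-hidden-layer networks, as described above;
# diversity comes from different random initializations.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=s)
    .fit(X_train, y_train)
    for s in range(5)
]

# Median rule for decision fusion across ensemble members.
preds = np.median([m.predict(X_test) for m in ensemble], axis=0)
print("ensemble MAE:  ", np.mean(np.abs(preds - y_test)))
print("single-net MAE:", np.mean(np.abs(ensemble[0].predict(X_test) - y_test)))
```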

Keywords: Quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction.
