Search results for: nonideal complex plasma
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6219

4629 Evaluation of Insulin Sensitizing Effects of Different Fractions from Total Alcoholic Extract of Moringa oleifera Lam. Bark in Dexamethasone-Induced Insulin Resistant Rats

Authors: Hasanpasha N. Sholapur, Basanagouda M.Patil

Abstract:

Alcoholic extract of the bark of Moringa oleifera Lam. (MO) (Moringaceae) has previously been evaluated experimentally for its insulin-sensitizing potential. To explore which class of phytochemical(s) may be responsible for this experimental claim, the alcoholic extract was fractionated into non-polar [petroleum ether (PEF)], moderately non-polar [ethyl acetate (EAF)] and polar [aqueous (AQF)] fractions. All the fractions, with pioglitazone (PIO) as standard (10 mg/kg, p.o., once daily for 11 d), were investigated for their chronic effect on fasting plasma glucose, triglycerides, total cholesterol, insulin and oral glucose tolerance in a chronic dexamethasone-induced (1 mg/kg s.c., once daily for 11 d) model of insulin resistance (IR) in rats, and for their acute effect on oral glucose tolerance in an acute model (1 mg/kg i.p., for 4 h). Among all the fractions tested, chronic treatment with EAF (140 mg/kg) and PIO (10 mg/kg) prevented dexamethasone-induced IR, indicated by prevention of hypertriglyceridemia, hyperinsulinemia and oral glucose intolerance, whereas treatment with AQF (95 mg/kg) prevented hepatic IR but not peripheral IR. In the acute study, single-dose treatment with EAF (140 mg/kg) and PIO (10 mg/kg) prevented dexamethasone-induced oral glucose intolerance; fraction PEF showed no effect on these parameters in either model. The present study indicates that the triterpenoid and phenolic classes of phytochemicals detected in the EAF of the alcoholic extract of MO bark may be responsible for the prevention of dexamethasone-induced insulin resistance in rats.

Keywords: Moringa oleifera, insulin resistance, dexamethasone, serum triglyceride, insulin, oral glucose tolerance test

Procedia PDF Downloads 372
4628 Smartphone-Based Human Activity Recognition by Machine Learning Methods

Authors: Yanting Cao, Kazumitsu Nawata

Abstract:

As smartphones upgrade, their software and hardware become smarter, and smartphone-based human activity recognition can accordingly become more refined, complex, and detailed. In this context, we analyzed a set of experimental data obtained by observing and measuring 30 volunteers performing six activities of daily living (ADL). Because of the large sample size, and especially the 561-feature vector of time- and frequency-domain variables, cleaning these intractable features and training a proper model become extremely challenging. After a series of feature-selection and parameter-tuning steps, a well-performing SVM classifier was trained.
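The abstract does not specify the feature-selection or tuning procedure; as a purely illustrative sketch (standard-library Python, with a simple variance filter and a nearest-centroid classifier standing in for the SVM), the shape of such a pipeline might look like:

```python
import math

# Hypothetical stand-in for the 561-feature HAR data: each sample is a
# feature vector plus an activity label. The paper's actual pipeline and
# its SVM are not described, so this only sketches the general idea of
# dropping near-constant features before training a simple classifier.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def select_features(samples, threshold=1e-3):
    """Return indices of features whose variance exceeds the threshold."""
    keep = []
    for j in range(len(samples[0])):
        if variance([s[j] for s in samples]) > threshold:
            keep.append(j)
    return keep

def nearest_centroid_fit(samples, labels):
    """Compute one mean vector (centroid) per activity label."""
    centroids = {}
    for lab in set(labels):
        rows = [s for s, l in zip(samples, labels) if l == lab]
        centroids[lab] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def nearest_centroid_predict(centroids, x):
    """Assign x to the label of the closest centroid."""
    return min(centroids, key=lambda lab: math.dist(centroids[lab], x))
```

In a real HAR setting the constant second feature below would be one of many near-uninformative sensor channels filtered out before training.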

Keywords: smart sensors, human activity recognition, artificial intelligence, SVM

Procedia PDF Downloads 144
4627 Cellular Automata Using Fractional Integral Model

Authors: Yasser F. Hassan

Abstract:

In this paper, a proposed model of cellular automata is studied by means of a fractional integral function. A cellular automaton is a decentralized computing model that provides an excellent platform for performing complex computation using only local information. The paper discusses how a fractional integral function can be used to represent cellular automaton memory or state. The architecture of the computing and learning model is given, together with the results of calibrating the approach.
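The abstract does not give the model's equations; the toy sketch below (all rules and parameters are the editor's assumptions, not the authors' model) only illustrates the general idea of a cellular automaton whose cells carry a power-law-weighted memory of past states, loosely mimicking a discrete fractional integral:

```python
# Toy 1-D binary cellular automaton (rule 90, periodic boundary) whose
# cells also accumulate a real-valued "memory": a power-law-weighted sum
# of past states. The weighting (age + 1) ** (-alpha) is illustrative.

def rule90_step(cells):
    """One synchronous update: each cell XORs its two neighbours."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def fractional_memory(history, alpha=0.5):
    """Weight past states by (age + 1) ** (-alpha); newest state has age 0."""
    n = len(history[0])
    mem = [0.0] * n
    for age, state in enumerate(reversed(history)):
        w = (age + 1) ** (-alpha)
        for i in range(n):
            mem[i] += w * state[i]
    return mem

def run(cells, steps, alpha=0.5):
    """Evolve the automaton and return (final state, memory vector)."""
    history = [cells]
    for _ in range(steps):
        cells = rule90_step(cells)
        history.append(cells)
    return cells, fractional_memory(history, alpha)
```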

Keywords: fractional integral, cellular automata, memory, learning

Procedia PDF Downloads 413
4626 Phytoextraction of Copper and Zinc by Willow Varieties in a Pot Experiment

Authors: Muhammad Mohsin, Mir Md Abdus Salam, Pertti Pulkkinen, Ari Pappinen

Abstract:

Soil and water contamination by heavy metals is a major environmental challenge. Phytoextraction is an emerging, environmentally friendly and cost-efficient technology in which plants are used to eliminate pollutants from soil and water. We aimed to assess the copper (Cu) and zinc (Zn) removal efficiency of two willow varieties, Klara (S. viminalis x S. schwerinii x S. dasyclados) and Karin ((S. schwerinii x S. viminalis) x (S. viminalis x S. burjatica)), under different soil treatments (control/unpolluted, polluted, lime with polluted, wood ash with polluted). In a 180-day pot experiment, these willow varieties were grown in highly polluted soil collected from the Pyhasalmi mining area in Finland. Lime and wood ash were added to the polluted soil to improve the soil pH and to observe their effects on metal accumulation in plant biomass. An inductively coupled plasma optical emission spectrometer (ELAN 6000 ICP-OES, Perkin-Elmer Corporation) was used to assess the heavy metal concentrations in the plant biomass. The results show that both willow varieties can accumulate considerable amounts of Cu and Zn, varying from 36.95 to 314.80 mg kg⁻¹ and 260.66 to 858.70 mg kg⁻¹, respectively. The application of lime and wood ash substantially stimulated plant height and dry biomass and affected the deposition of Cu and Zn in total plant biomass. The lime application also appeared to increase Cu and Zn concentrations in the shoots and leaves of both willow varieties when planted in polluted soil, whereas wood ash application was more effective at mobilizing the metals in the roots of both varieties. The study recommends willow plantations for rehabilitating Cu- and Zn-polluted soils.

Keywords: heavy metals, lime, phytoextraction, wood ash, willow

Procedia PDF Downloads 237
4625 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood

Authors: Randa Alharbi, Vladislav Vyshemirsky

Abstract:

Systems biology is an important field of science which focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their functions, and their interactions. A well-designed model requires selecting a suitable mechanism which can capture the main features of the system, defining its essential components, and representing an appropriate law for the interactions between those components. Complex biological systems exhibit stochastic behaviour, so probabilistic models are suitable for describing and analysing them. The continuous-time Markov chain (CTMC) is one such probabilistic model: it describes the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time is governed by the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet inference in such a complex system is challenging, as it requires evaluating the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation (ABC) is a common approach which relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper we discuss the efficiency and possible practical issues of each method, taking their computational cost into account.
We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
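Neither the Repressilator model nor the authors' implementation is reproduced here; the following standard-library sketch only illustrates the ABC rejection idea on a toy pure-birth process, with a made-up uniform prior and tolerance:

```python
import random

# Likelihood-free (ABC rejection) inference on a toy stochastic model:
# the "data" are a single event count from a pure-birth process with an
# unknown rate. We draw candidate rates from a uniform prior, simulate
# the model, and keep rates whose simulated count is close to the data.
# The prior range, tolerance, and draw count are illustrative choices.

def simulate_count(rate, t_end=1.0, rng=random):
    """Gillespie-style simulation: count births on [0, t_end]."""
    t, count = 0.0, 0
    while True:
        t += rng.expovariate(rate)  # exponential waiting time to next birth
        if t > t_end:
            return count
        count += 1

def abc_rejection(observed, n_draws=20000, tol=1, rng=None):
    """Keep prior draws whose simulated count lies within tol of the data."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    accepted = []
    for _ in range(n_draws):
        rate = rng.uniform(0.1, 50.0)  # uniform prior on the birth rate
        if abs(simulate_count(rate, rng=rng) - observed) <= tol:
            accepted.append(rate)
    return accepted

accepted = abc_rejection(observed=10)
posterior_mean = sum(accepted) / len(accepted)
```

The accepted draws approximate the posterior over the rate; PMCMC would instead embed a sequential Monte Carlo likelihood estimate inside a Metropolis-Hastings chain, at a higher per-iteration cost.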

Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)

Procedia PDF Downloads 202
4624 Ergosterol Biosynthesis: Non-Conventional Method for Improving Process

Authors: Madalina Postaru, Alexandra Tucaliuc, Dan Cascaval, Anca Irina Galaction

Abstract:

Ergosterol (ergosta-5,7,22-trien-3β-ol) is the precursor of vitamin D2 (ergocalciferol), known as provitamin D2 because it is converted to this vitamin under UV radiation. The natural sources of ergosterol are mainly yeasts (Saccharomyces sp., Candida sp.), but it can also be found in fungi (Claviceps sp.) and plants (orchids). As ergosterol accumulates mainly in yeast cell membranes, especially in free form in the plasma membrane, and the chemical synthesis of ergosterol is not an efficient production method, this study aimed to analyze the influence of aeration efficiency on ergosterol production by S. cerevisiae in batch and fed-batch fermentations, considering different levels of mixing intensity, aeration rate, and n-dodecane concentration. Our previous studies on ergosterol production by S. cerevisiae in batch and fed-batch fermentation systems indicated that the addition of n-dodecane increased this sterol's concentration by almost 50%, the highest productivity being reached in the fed-batch process. The experiments were carried out in a laboratory stirred bioreactor with computer-controlled and recorded parameters. In the batch fermentation system, the study indicated that the oxygen mass transfer coefficient, kLa, is amplified about 3-fold by increasing the volumetric concentration of n-dodecane from 0 to 15%. Moreover, the increase in dissolved oxygen concentration obtained by adding n-dodecane leads to a 3.5-fold reduction in the amount of alcohol produced. In the fed-batch fermentation process, the positive influence of the hydrocarbon on the oxygen transfer rate is amplified mainly at its higher concentration levels, as a result of the increased amount of yeast cells. Thus, by varying the n-dodecane concentration from 0 to 15% vol., the increase in kLa becomes more important than for batch fermentation, reaching 4-fold.

Keywords: ergosterol, yeast fermentation, n-dodecane, oxygen-vector

Procedia PDF Downloads 119
4623 Toward a Coalitional Subject in Contemporary American Feminist Literature

Authors: Su-Lin Yu

Abstract:

Coalition politics has been one of feminists' persistent concerns. Following recent feminist discussion of new modes of affiliation across difference, the author will explore how the process of female subject formation depends on alliances across different cultural locations. First, she will examine how coalition politics is reformulated across difference in contemporary feminist literature. In particular, the paper will identify the particular contexts and locations in which coalition building both enables and constrains the female subject, and will explore how contemporary feminist literature highlights the possibilities and limitations of solidarity and affiliation. To understand coalition politics in contemporary feminist works, the author will engage in close readings of two texts: Rebecca Walker's Black, White and Jewish: Memoir of a Shifting Self and Danzy Senna's Caucasia. Both Walker and Senna have articulated the complex nodes of identity that are staged by a politics of location, refusing to be boxed into simplistic essentialist positions. Their texts are characterized by the characters' racial ambiguity and their social and geographical mobility in the contemporary United States. Their experiences of living through conflictual and contradictory relationships never fully fit the boundaries of racial categorization. Each of these texts demonstrates the limits as well as the possibilities of working with diversity among and within persons and groups, thus laying the ground for complex alliance formation. Because each of the protagonists must negotiate a set of contradictions, they constantly shift their affiliations. Rather than construct a static alliance, they describe a process of moving 'beyond boundaries,' an embracing of multiple locations. As self-identified third wavers, Rebecca Walker and Danzy Senna have been marked with the status of 'leader' by the feminist establishment and by mainstream U.S. media.
Their texts have captured both mass popularity and critical attention in the feminist and, often, the non-feminist literary community. By analyzing these texts, the author will show how contemporary American feminist literature reveals a coalition politics fraught with complications and unintended consequences. Taken as a whole, then, these works provide not only an important examination of the coalition politics of American feminism but also a snapshot of a central debate within feminist critique of coalition politics as a whole.

Keywords: coalition politics, contemporary women’s literature, identity, female subject

Procedia PDF Downloads 294
4622 Site Specific Nutrient Management Need in India Now

Authors: A. H. Nanher, N. P. Singh, Shashidhar Yadav, Sachin Tyagi

Abstract:

Agricultural production is the outcome of a complex interaction of seed, soil, water and agro-chemicals (including fertilizers); judicious management of all these inputs is therefore essential for the sustainability of such a complex system. Precision agriculture gives farmers the ability to use crop inputs, including fertilizers, pesticides, tillage and irrigation water, more effectively. More effective use of inputs means greater crop yield and/or quality without polluting the environment. The focus on enhancing productivity during the Green Revolution, coupled with a total disregard for proper input management and ecological impacts, resulted in environmental degradation. This paper evaluates a new approach, site-specific nutrient management (SSNM). Large variation in initial soil fertility characteristics and in the indigenous supply of N, P, and K was observed among fields. Field- and season-specific NPK applications were calculated by accounting for the indigenous nutrient supply, yield targets, and nutrient demand as a function of the interactions between N, P, and K. Nitrogen applications were fine-tuned based on season-specific rules and field-specific monitoring of crop N status. The performance of SSNM did not differ significantly between high-yielding and low-yielding climatic seasons, but improved over time, with larger benefits observed in the second year. Future strategies for nutrient management in intensive rice systems must become more site-specific and dynamic to manage spatially and temporally variable resources, based on a quantitative understanding of the congruence between nutrient supply and crop demand. The SSNM concept has demonstrated promising agronomic and economic potential. It can be used for managing plant nutrients at any scale, ranging from a general recommendation for homogeneous management of a larger domain to true management of between-field variability.
Assessment of pest profiles in FFP and SSNM plots suggests that SSNM may also reduce pest incidence, particularly diseases that are often associated with excessive N use or unbalanced plant nutrition.

Keywords: nutrient, pesticide, crop, yield

Procedia PDF Downloads 430
4621 Removal of Nickel and Vanadium from Crude Oil by Using Solvent Extraction and Electrochemical Process

Authors: Aliya Kurbanova, Nurlan Akhmetov, Abilmansur Yeshmuratov, Yerzhigit Sugurbekov, Ramiz Zulkharnay, Gulzat Demeuova, Murat Baisariyev, Gulnar Sugurbekova

Abstract:

In recent decades, crude oils have tended to become more challenging to process owing to increasing amounts of sour and heavy crudes. Some crude oils have a high vanadium and nickel content; for example, Pavlodar LLP crude oil contains more than 23.09 g/t nickel and 58.59 g/t vanadium. In this study, we used two types of metal removal methods: solvent extraction and electrochemical treatment. The present research provides a comparative analysis of deasphalting with organic solvents (cyclohexane, carbon tetrachloride, chloroform) and the electrochemical method. The two metal extraction methods were compared using cyclic voltammetric analysis (CVA) and inductively coupled plasma mass spectrometry (ICP-MS). The maximum efficiency of deasphalting, with cyclohexane as the solvent in a Soxhlet extractor, was 66.4% for nickel and 51.2% for vanadium. The percentage of Ni extraction reached a maximum of approximately 55% using the electrochemical method in an electrolysis cell developed for this research. The cell consists of three sections: a solution of oil and protonating agent (EtOH) between two conducting membranes, which separate it from two compartments of 10% sulfuric acid, with two graphite electrodes closing the electrical circuit across all three parts. Metal ions pass through the membranes and remain in the acid solutions. The best result was obtained in 60 minutes with an ethanol-to-oil ratio of 25% to 75%, with the current in the range 0.3 A to 0.4 A and the voltage varying from 12.8 V to 17.3 V.
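For reference, the removal efficiencies quoted above follow the usual definition, (initial − final)/initial × 100. A small helper makes the arithmetic explicit; note that the 7.76 g/t figure in the example is back-calculated for illustration, not a value reported in the paper:

```python
# Removal efficiency of a metal from crude oil, from contents measured
# before and after treatment (e.g. in g/t).

def removal_efficiency(c_initial, c_final):
    """Percentage of metal removed relative to the initial content."""
    if c_initial <= 0:
        raise ValueError("initial concentration must be positive")
    return 100.0 * (c_initial - c_final) / c_initial

# With the reported initial Ni content of 23.09 g/t, a residual content
# of about 7.76 g/t (hypothetical) corresponds to the 66.4% efficiency
# reported for cyclohexane deasphalting.
```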

Keywords: demetallization, deasphalting, electrochemical removal, heavy metals, petroleum engineering, solvent extraction

Procedia PDF Downloads 326
4620 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge, calling for novel data processing and analytic methods as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with analysing whole genome sequence data to produce interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and achieved an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction.
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay amongst accuracy, computing resources and explainability of classification results. The analysis also provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
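As an illustration of the k-mer representation (on a made-up fragment, not MTB data), counting k-mers reduces a sequence to a feature vector of substring frequencies:

```python
from collections import Counter

# Slide a window of length k along a DNA sequence and count each
# substring. Classification feature vectors are then built from these
# counts; larger k yields richer but much higher-dimensional features.

def kmer_counts(seq, k):
    """Return a Counter mapping each k-mer to its number of occurrences."""
    if k < 1 or k > len(seq):
        raise ValueError("k must be between 1 and len(seq)")
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

counts = kmer_counts("ATGATGCG", 3)  # hypothetical 8-base fragment
```

A sequence of length n yields n − k + 1 overlapping k-mers, which is why memory and compute grow quickly with k for whole genomes.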

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 167
4618 Mesoporous Na2Ti3O7 Nanotube-Constructed Materials with Hierarchical Architecture: Synthesis and Properties

Authors: Neumoin Anton Ivanovich, Opra Denis Pavlovich

Abstract:

Materials based on titanium oxide compounds are widely used in areas such as solar energy, photocatalysis, the food industry and hygiene products, biomedical technologies, etc. Demand for them has also formed in the battery industry (an example is the commercialization of Li4Ti5O12), where much attention has recently been paid to the development of next-generation systems and technologies such as sodium-ion batteries. This dictates the need to search for new materials with improved characteristics, as well as scalable ways of obtaining them. One way to address these problems is the creation of nanomaterials, which often have physicochemical properties that differ radically from those of their micro- or macroscopic counterparts. At the same time, it is important to control the texture (specific surface area, porosity) of such materials. In view of the above, the hydrothermal technique seems suitable among other methods, as it allows wide-ranging control over the synthesis conditions. In the present study, a method was developed for the preparation of mesoporous nanostructured sodium trititanate (Na2Ti3O7) with a hierarchical architecture. The materials were synthesized by hydrothermal processing and exhibit a complex, hierarchically organized two-level architecture: at the first level of the hierarchy, the materials are represented by particles with a rough surface, and at the second level, by one-dimensional nanotubes. The products were found to have a high specific surface area and porosity, with a narrow pore size distribution (about 6 nm). As is known, specific surface area and porosity are important characteristics of functional materials, which largely determine the possibilities and directions of their practical application. Electrochemical impedance spectroscopy data show that the resulting sodium trititanate has a sufficiently high electrical conductivity.
The synthesized, complexly organized nanoarchitecture based on porous sodium trititanate is thus expected to be in practical demand, for example, in next-generation electrochemical energy storage and conversion devices.

Keywords: sodium trititanate, hierarchical materials, mesoporosity, nanotubes, hydrothermal synthesis

Procedia PDF Downloads 107
4617 Robust Control of a Parallel 3-RRR Robotic Manipulator via μ-Synthesis Method

Authors: A. Abbasi Moshaii, M. Soltan Rezaee, M. Mohammadi Moghaddam

Abstract:

Control of some mechanisms is hard because of their complex dynamic equations. If part of the complexity results from uncertainties, robust control is an efficient way to address it: the control procedure can be simple and fast, and in the end a simple controller can be designed. One such mechanism is the 3-RRR, a parallel mechanism with three revolute joints. This paper aims to robustly control a 3-RRR planar mechanism and shows that the approach can be applied to other mechanisms, thereby addressing a significant problem in mechanism control. The relevant diagrams are drawn, and they show the correctness of the control process.

Keywords: 3-RRR, dynamic equations, mechanisms control, structural uncertainty

Procedia PDF Downloads 558
4616 Spatio-Temporal Analysis of Land Use Change and Green Cover Index

Authors: Poonam Sharma, Ankur Srivastav

Abstract:

Cities are complex and dynamic systems that constitute a significant challenge to urban planning. The increasing size of the built-up area, owing to growing population pressure and economic growth, has led to massive land use/land cover change, resulting in the loss of natural habitat and thus reducing green cover in urban areas. Urban environmental quality is influenced by several aspects, including a city's geographical configuration, the scale and nature of the human activities occurring there, and the environmental impacts generated. Cities and their sustainability are often discussed together, as cities stand confronted with numerous environmental concerns in an increasingly urbanized world and are situated in the mesh of global networks in multiple senses. A rapidly transformed urban setting plays a crucial role in changing the green area of natural habitats. This paper examines the pattern of urban growth and measures land use/land cover change in Gurgaon, Haryana, India, through the integration of geospatial techniques. Satellite images are used to measure the spatio-temporal changes that have occurred in land use and land cover, resulting in a new cityscape. The analysis shows that drastic changes in land use have occurred, with a massive rise in built-up area and a decrease in green cover, making the sustainability of the city an important area of concern. The massive increase in built-up area has influenced localised temperatures and heat concentration. To enhance the decision-making process in urban planning, a detailed and real-world depiction of these urban spaces is the need of the hour. Monitoring indicators of key processes in land use and economic development is essential for evaluating policy measures.
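The abstract does not state how the green cover index is computed from the satellite images; a common remote-sensing proxy, assumed here purely for illustration and not necessarily the authors' method, is NDVI = (NIR − Red)/(NIR + Red), with pixels above a threshold counted as green:

```python
# Green cover fraction from per-pixel near-infrared (NIR) and red
# reflectance bands, via the standard NDVI formula. The 0.3 threshold
# is a typical vegetation cutoff, chosen here as an assumption.

def ndvi(nir, red):
    """Normalized difference vegetation index of one pixel."""
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

def green_cover_fraction(nir_band, red_band, threshold=0.3):
    """Fraction of pixels whose NDVI exceeds the threshold."""
    pixels = list(zip(nir_band, red_band))
    green = sum(1 for n, r in pixels if ndvi(n, r) > threshold)
    return green / len(pixels)
```

Comparing this fraction across image dates gives a simple spatio-temporal green cover trend of the kind the paper discusses.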

Keywords: cityscape, geospatial techniques, green cover index, urban environmental quality, urban planning

Procedia PDF Downloads 277
4615 Interaction Between Gut Microorganisms and Endocrine Disruptors - Effects on Hyperglycaemia

Authors: Karthika Durairaj, Buvaneswari G., Gowdham M., Gilles M., Velmurugan G.

Abstract:

Background: Hyperglycaemia is the primary cause of metabolic illness. Recently, researchers have focused on the possibility that chemical exposure could promote metabolic disease. Hyperglycaemia causes a variety of metabolic diseases depending on its etiologic conditions. According to animal and population-based research, individual chemical exposures cause health problems by altering endocrine function under microbial influence. We were intrigued by the role of gut microbiota variation in high-fat and chemically induced hyperglycaemia. Methodology: C57/Bl6 mice were subjected to two different treatments to generate etiology-based diabetes models: I – a high-fat (45 kcal) diet, and II – a cocktail of endocrine-disrupting chemicals (EDCs). The mice were monitored periodically for changes in body weight and fasting glucose. After 120 days of the experiment, blood anthropometry, faecal metagenomics and metabolomics were performed and analyzed statistically using one-way ANOVA and Student's t-test. Results: After 120 days of exposure, we found hyperglycaemic changes in both experimental models. The treatment groups also differed in plasma lipid levels, creatinine, and hepatic markers. Microbial profiles and metabolite levels relevant to glucose metabolism were significantly different between groups, and the expression of genes associated with glucose metabolism varied between hosts and treatments. Conclusion: This research will result in the identification of biomarkers and molecular targets for better diabetes control and treatment.
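For reference, the Student's t-test named in the methodology reduces to the pooled two-sample t statistic; a minimal standard-library sketch on made-up readings (not the study's data):

```python
import math
from statistics import mean, variance

# Equal-variance (pooled) two-sample t statistic, as used in a classic
# Student's t-test comparing two treatment groups. statistics.variance
# is the sample variance (n - 1 denominator), which the pooled formula
# expects.

def t_statistic(a, b):
    """Pooled two-sample t statistic for groups a and b."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
```

The statistic would then be compared against the t distribution with na + nb − 2 degrees of freedom to obtain a p-value.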

Keywords: hyperglycaemia, endocrine-disrupting chemicals, gut microbiota, host metabolism

Procedia PDF Downloads 41
4614 Development of Building Information Modeling in Property Industry: Beginning with Building Information Modeling Construction

Authors: B. Godefroy, D. Beladjine, K. Beddiar

Abstract:

In France, construction BIM actors commonly evoke the BIM gains for exploitation by integrating of the life cycle of a building. The standardization of level 7 of development would achieve this stage of the digital model. The householders include local public authorities, social landlords, public institutions (health and education), enterprises, facilities management companies. They have a dual role: owner and manager of their housing complex. In a context of financial constraint, the BIM of exploitation aims to control costs, make long-term investment choices, renew the portfolio and enable environmental standards to be met. It assumes a knowledge of the existing buildings, marked by its size and complexity. The information sought must be synthetic and structured, it concerns, in general, a real estate complex. We conducted a study with professionals about their concerns and ways to use it to see how householders could benefit from this development. To obtain results, we had in mind the recurring interrogation of the project management, on the needs of the operators, we tested the following stages: 1) Inculcate a minimal culture of BIM with multidisciplinary teams of the operator then by business, 2) Learn by BIM tools, the adaptation of their trade in operations, 3) Understand the place and creation of a graphic and technical database management system, determine the components of its library so their needs, 4) Identify the cross-functional interventions of its managers by business (operations, technical, information system, purchasing and legal aspects), 5) Set an internal protocol and define the BIM impact in their digital strategy. 
In addition, ensuring continuity of management by carrying construction models into the operation phase raises the question of interoperability: controlling the production of IFC files, their conversion into the operator's proprietary format, and the export and import processes, a solution rivaled by the traditional method of vectorizing paper plans. Companies that digitize housing complexes, and those in facilities management, produce IFC files directly according to their needs, without recourse to the construction model; they produce business models for exploitation. They standardize the components and equipment that are useful for coding. We observed the consequences of using BIM in the property industry and made the following observations: a) The value of data prevails over graphics; 3D is little used, b) The owner must, through his organization, promote the feedback of technical management information during the design phase, c) The operator's reflection on outsourcing concerns the acquisition of its information system and related services, weighing the risks and costs of internal versus external development. This study allows us to highlight: i) The need for operators to organize internally before responding to construction management, ii) The evolution towards automated methods for creating models dedicated to exploitation, for which a specialization would be required, iii) A rethinking of project management communication: management continuity does not revolve around the building model alone; it must take into account the operator's environment and scope of action.

Keywords: information system, interoperability, models for exploitation, property industry

Procedia PDF Downloads 144
4613 Maximizing the Aerodynamic Performance of Wind and Water Turbines by Utilizing Advanced Flow Control Techniques

Authors: Edwin Javier Cortes, Surupa Shaw

Abstract:

In recent years, there has been a growing emphasis on enhancing the efficiency and performance of wind and water turbines to meet the increasing demand for sustainable energy sources. One promising approach is the utilization of advanced flow control techniques to optimize aerodynamic performance. This paper explores the application of advanced flow control techniques in both wind and water turbines, aiming to maximize their efficiency and output. By manipulating the flow of air or water around the turbine blades, these techniques offer the potential to improve energy capture, reduce drag, and minimize turbulence-induced losses. The paper will review various flow control strategies, including passive and active techniques such as vortex generators, boundary layer suction, and plasma actuators. It will examine their effectiveness in optimizing turbine performance under different operating conditions and environmental factors. Furthermore, the paper will discuss the challenges and opportunities associated with implementing these techniques in practical turbine designs. It will consider factors such as cost-effectiveness, reliability, and scalability, as well as the potential impact on overall turbine efficiency and lifecycle. Through a comprehensive analysis of existing research and case studies, this paper aims to provide insights into the potential benefits and limitations of advanced flow control techniques for wind and water turbines. It will also highlight areas for future research and development, with the ultimate goal of advancing the state-of-the-art in turbine technology and accelerating the transition towards a more sustainable energy future.

Keywords: flow control, efficiency, passive control, active control

Procedia PDF Downloads 70
4612 Intersectionality and Sensemaking: Advancing the Conversation on Leadership as the Management of Meaning

Authors: Clifford Lewis

Abstract:

This paper aims to advance the conversation on an alternative view of leadership, namely 'leadership as the management of meaning'. Here, leadership is considered a social process of managing meaning within an employment context, as opposed to a psychological trait, a set of behaviours, or a relational consequence, as in mainstream leadership research. Specifically, this study explores the relationship between intersectional identities and the management of meaning. Design: Semi-structured, one-on-one interviews were conducted with women and men of colour working in various leadership positions in South African private sector organisations. Employing an intersectional approach using gender and race, participants were selected using purposive and snowball sampling concurrently. Thematic and axial coding was used to identify dominant themes. Findings: Findings suggest that both gender and race shape how leaders manage meaning. Findings also confirm that intersectionality is an appropriate approach when studying the leadership experiences of groups underrepresented in organisational leadership structures. The findings point to the need for further research into the differential effects of intersecting identities on organisational leadership experiences, and suggest that 'leadership as the management of meaning' is an appropriate approach for addressing this knowledge gap. Theoretical Contribution: There is a large body of literature on the complex challenges faced by women and people of colour in leadership, but relatively little empirical work on how identity influences the management of meaning. This study contributes to the leadership literature by providing insight into how intersectional identities influence the management of meaning at work and how this shapes the leadership experiences of largely marginalised groups.
Practical Implications: Understanding the leadership experiences of underrepresented groups is important because of both legal mandates and for building diverse talent for organisations and societies. Such an understanding assists practitioners in being sensitive to simplistic notions of challenges individuals might face in accessing and practicing leadership in organisations. Advancing the conversation on leadership as the management of meaning allows for a better understanding of complex challenges faced by women and people of colour and an opportunity for organisations to systematically remove unfair structural obstacles and develop their diverse leadership capacity.

Keywords: intersectionality, diversity, leadership, sensemaking

Procedia PDF Downloads 272
4611 A Resilience-Based Approach for Assessing Social Vulnerability in New Zealand's Coastal Areas

Authors: Javad Jozaei, Rob G. Bell, Paula Blackett, Scott A. Stephens

Abstract:

In the last few decades, Social Vulnerability Assessment (SVA) has been a favoured means of evaluating the susceptibility of social systems to drivers of change, including climate change and natural disasters. However, applying SVA to inform responsive and practical strategies for dealing with uncertain climate change impacts has always been challenging, and agencies typically revert to conventional risk/vulnerability assessment. These challenges include the complex nature of social vulnerability concepts, which limits their applicability; complications in identifying and measuring the determinants of social vulnerability; transitory social dynamics in a changing environment; and the unpredictability of the scenarios of change that shape the vulnerability regime (including contention over when these impacts might emerge). Research suggests that conventional quantitative approaches in SVA cannot appropriately address these problems; hence, the outcomes can be misleading and unfit for addressing the ongoing, uncertain rise in risk. The second phase of New Zealand's Resilience to Nature's Challenges (RNC2) is developing a forward-looking vulnerability assessment framework and methodology that informs decision-making and policy development for changing coastal systems and accounts for the complex dynamics of New Zealand's coastal systems (socio-economic, environmental and cultural). RNC2 also requires the new methodology to consider plausible drivers of incremental and unknowable change, to create mechanisms that enhance social and community resilience, and to fit New Zealand's multi-layer governance system. This paper aims to analyse conventional approaches and methodologies in SVA and to offer recommendations for more responsive approaches that inform adaptive decision-making and policy development in practice.
The research adopts a qualitative design to examine different aspects of conventional SVA processes; the methods used to achieve the research objectives include a systematic review of the literature and case studies. We found that the conventional quantitative, reductionist and deterministic mindset in SVA processes, with its focus on the impacts of rapid stressors (i.e. tsunamis, floods), shows deficiencies in accounting for the complex dynamics of social-ecological systems (SES) and the uncertain, long-term impacts of incremental drivers. The paper will focus on the links between resilience and vulnerability and suggest how resilience theory and its underpinning notions, such as the adaptive cycle, panarchy, and system transformability, could address these issues and thereby reshape the perception of the vulnerability regime and its assessment processes. In this regard, it will be argued that a paradigm shift from 'specific resilience', which focuses on the adaptive capacity associated with the notion of 'bouncing back', to 'general resilience', which accounts for system transformability, regime shift, and 'bouncing forward', can deliver more effective strategies in an era characterised by ongoing change and deep uncertainty.

Keywords: complexity, social vulnerability, resilience, transformation, uncertain risks

Procedia PDF Downloads 101
4610 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving

Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian

Abstract:

In recent years, advancements in deep learning have enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has a drawback: the state space of possible actions for a car is so huge that no dataset can cover every possible road scenario. To overcome this problem, this research investigates the concept of reinforcement learning (RL). Since autonomous driving can be modeled in a simulation, it lends itself naturally to the domain of reinforcement learning. The advantage of this approach is that different and complex road scenarios can be modeled in simulation without deploying in the real world. The autonomous agent can learn to drive by finding the optimal policy, and the learned model can then be easily deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we used TORCS (The Open Racing Car Simulator), which provides a strong foundation for testing our models. The inputs to the algorithms are the sensor data provided by the simulator, such as velocity and distance from the side pavement. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results: with PPO, the reward is greater, and the acceleration, steering angle and braking are more stable than with the other algorithms, meaning the agent learns to drive in a better and more efficient way. Additionally, we have produced a dataset from the training of the agent with the DDPG and PPO algorithms. It contains all the steps of the agent during one full training run in the form (all input values, acceleration, steering angle, brake, loss, reward).
This study can serve as a basis for further, more complex road scenarios. Furthermore, it can be extended into the field of computer vision, using images to find the best policy.
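To illustrate the tabular end of the spectrum compared in this abstract, the following minimal sketch (our illustration, not the authors' TORCS setup) applies Q-learning to a toy lane-keeping task with discretized lateral positions; the states, actions, and reward here are hypothetical stand-ins for the simulator's sensor inputs and controls.

```python
import random

# Toy lane-keeping task: states are 5 discrete lateral positions (2 = lane
# center), actions shift the car left, keep straight, or shift right.
N_STATES, ACTIONS = 5, [-1, 0, +1]
CENTER = 2

def step(state, action):
    nxt = min(N_STATES - 1, max(0, state + action))
    return nxt, -abs(nxt - CENTER)   # reward: negative distance from center

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(N_STATES)
        for _ in range(20):
            # epsilon-greedy action selection
            a = rng.randrange(len(ACTIONS)) if rng.random() < eps \
                else max(range(len(ACTIONS)), key=lambda i: q[s][i])
            s2, r = step(s, ACTIONS[a])
            # Q-learning update: bootstrap on the greedy value of the next state
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
policy = [ACTIONS[max(range(len(ACTIONS)), key=lambda i: q[s][i])]
          for s in range(N_STATES)]
print(policy)  # the learned policy steers every off-center state back to center
```

DDPG and PPO replace the table with neural networks so that continuous sensor inputs and continuous steering/acceleration outputs can be handled, which is why they suit the TORCS setting better than tabular Q-learning.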

Keywords: autonomous driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization), reinforcement learning

Procedia PDF Downloads 149
4609 Supramolecular Approach towards Novel Applications: Battery, Band Gap and Gas Separation

Authors: Sudhakara Naidu Neppalli, Tejas S. Bhosale

Abstract:

It is well known that block copolymers (BCPs) can form complexes, through non-covalent bonds such as hydrogen bonds, ionic bonds and coordination bonds, with low-molecular-weight compounds as well as with macromolecules, providing vast applications that include the alteration of polymer morphology and properties. We therefore examined the importance of non-covalent bonds in increasing the otherwise unfavourable segmental interactions of the blocks, by attaching and detaching the bonds between the BCP and the additive. We also monitored the phase transition of the block copolymer and the effective interaction parameter (χeff) for Li-doped polymers using small-angle X-ray scattering and transmission electron microscopy. The effective interaction parameter (χeff) between the two block components was evaluated using Leibler theory, based on the incompressible random phase approximation (RPA), for the ionized BCP in the disordered state. Furthermore, conductivity experiments demonstrate that the ionic conductivity in samples quenched from different structures is morphology-independent, while it increases with increasing salt concentration. Morphological transitions, the interaction parameter, and thermal stability were also examined in the quaternized block copolymer. D-spacing was used to estimate the effective interaction parameter (χeff) of the block components in the weak and strong segregation regimes of the ordered phase. Metal-containing polymers have been the topic of great attention in recent years due to their wide range of potential applications. Similarly, metal-ligand complexes are used as supramolecular linkers between polymers, giving rise to a 'metallo-supramolecular' assembly. More precisely, a functionalized polymer end-capped with a 2,2':6',2''-terpyridine ligand can be selectively complexed with a wide range of transition metal ions and then subsequently attached to another terpyridine-terminated polymer block.
Compared with other supramolecular assemblies, BCP-based metallo-supramolecular assemblies offer vast applications such as optical activity, electrical conductivity, luminescence and photorefractivity.

Keywords: band gap, block copolymer, conductivity, interaction parameter, phase transition

Procedia PDF Downloads 169
4608 Synchronization of Semiconductor Laser Networks

Authors: R. M. López-Gutiérrez, L. Cardoza-Avendaño, H. Cervantes-de Ávila, J. A. Michel-Macarty, C. Cruz-Hernández, A. Arellano-Delgado, R. Carmona-Rodríguez

Abstract:

In this paper, synchronization of multiple chaotic semiconductor lasers is achieved by appealing to complex systems theory. In particular, we consider dynamical networks composed of semiconductor lasers as interconnected nodes, where the interactions in the network are defined by coupling the first state of each node. An interesting case is synchronization in a master-slave configuration with star topology. The nodes of these networks are modeled as lasers and simulated in Matlab. These results are applicable to private communications.
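The master-slave star coupling described here can be illustrated with a far simpler chaotic node than a semiconductor laser. The sketch below (our illustration, not the authors' Matlab laser model) diffusively couples chaotic logistic maps to a hub in a star topology and checks that the slaves lock onto the hub's trajectory: the synchronization error contracts by a factor of roughly (1 - eps) times the map's local stretching at each step.

```python
# Star-topology master-slave synchronization of chaotic maps: each slave node
# is driven by the hub through diffusive coupling of its single state variable.
def f(x):
    return 4.0 * x * (1.0 - x)          # logistic map in its chaotic regime

def simulate(n_slaves=4, eps=0.9, steps=200, x0_hub=0.123):
    hub = x0_hub
    slaves = [0.1 + 0.15 * k for k in range(n_slaves)]  # distinct initial states
    for _ in range(steps):
        new_hub = f(hub)
        # each slave blends its own dynamics with the hub's (coupling strength eps)
        slaves = [(1.0 - eps) * f(x) + eps * new_hub for x in slaves]
        hub = new_hub
    return hub, slaves

hub, slaves = simulate()
errors = [abs(x - hub) for x in slaves]
print(max(errors))   # near zero: all slaves have synchronized with the hub
```

With eps = 0.9 the error shrinks by at most a factor of 0.4 per step (since |f'| ≤ 4), so a couple of hundred iterations suffice; laser rate-equation models behave analogously but with continuous-time coupling.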

Keywords: chaotic laser, network, star topology, synchronization

Procedia PDF Downloads 566
4607 Building Envelope Engineering and Typologies for Complex Architectures: Composition and Functional Methodologies

Authors: Massimiliano Nastri

Abstract:

The study examines façade systems according to their constitutive and typological characters, as well as functional and applicative requirements such as expressive, constructive, and interactive criteria towards environmental, perceptive, and energy conditions. Envelope systems are understood as instruments of mediation, interchange, and dynamic interaction with environmental conditions. The façades are examined through the sustainable concept of eco-efficient envelopes: selective, multi-purpose filters, adaptable and adjustable according to environmental performance.

Keywords: typologies of façades, environmental and energy sustainability, interaction and perceptive mediation, technical skins

Procedia PDF Downloads 152
4606 Hybrid-Nanoengineering™: A New Platform for Nanomedicine

Authors: Mewa Singh

Abstract:

Nanomedicine, a fusion of nanotechnology and medicine, is an emerging technology ideally suited to targeted therapies. Nanoparticles overcome the low selectivity of anti-cancer drugs toward the tumor relative to normal tissue and hence result in less severe side effects. Our new technology, HYBRID-NANOENGINEERING™, uses a new molecule (MR007) in the creation of nanoparticles that not only helps nanonize the medicine but also provides synergy with it. The simplified manufacturing process results in reduced manufacturing costs. Treatment is made more convenient because hybrid nanomedicines can be produced in oral, injectable or transdermal formulations. The manufacturing process uses no protein, oil or detergents. The particle size is below 180 nm, with a narrow size distribution. Importantly, these properties confer great structural stability. The formulation does not aggregate in plasma and is stable over a wide range of pH. The final hybrid formulation is stable for at least 18 months as a powder. More than 97 drugs, including paclitaxel, docetaxel, tamoxifen, doxorubicin, prednisone, and artemisinin, have been nanonized in water-soluble formulations. Preclinical studies on tumor cell cultures show promising results. Our HYBRID-NANOENGINEERING™ platform enables the design and development of hybrid nano-pharmaceuticals that combine efficacy with tolerability, giving patients hope for both extended overall survival and improved quality of life. This study presents this new HYBRID-NANOENGINEERING™ approach, addressing drug delivery, synergistic and potentiating effects, barriers to drug delivery, and advanced drug delivery systems.

Keywords: nano-medicine, nano-particles, drug delivery system, pharmaceuticals

Procedia PDF Downloads 486
4605 Effects of Whole-Body Vibration Training on Fibrinolytic and Coagulative Factors in Healthy Young Man

Authors: Farshad Ghazalian, Seyed Hossein Alavi

Abstract:

Background: The use of whole-body vibration (WBV) as an exercise method has increased rapidly over the last decade. The aim of this study was to evaluate the effects of five weeks of whole-body vibration training with different amplitudes and progressive frequencies on fibrinolytic and coagulative factors. Methods: Twenty-five healthy male students were divided randomly into three groups: a high-amplitude vibration group (n=10), a low-amplitude vibration group (n=10), and a control group (n=5). The vibration training consisted of 5 weeks of whole-body vibration 3 times a week with amplitudes of 4 and 2 mm and progressive frequencies starting from 25 Hz with increments of 5 Hz weekly. Concentrations of fibrinogen, plasminogen, tPA, and PAI-1 before and after 5 weeks of training were measured in plasma samples. Statistical analysis was done using one-way analysis of variance; pre-test and post-test values were compared using the Wilcoxon signed-rank test. P<0.05 was considered statistically significant. Results: The 5-week high-amplitude vibration training caused a significant improvement in tissue plasminogen activator (tPA) (p=0.028) and PAI-1 (p=0.033); fibrinogen decreased, albeit not significantly (p=0.052), as did plasminogen (p=0.508). Low-amplitude vibration training caused a significant improvement in tPA (p=0.006), while PAI-1 decreased, not significantly (p=0.907); fibrinogen (p=0.19) and plasminogen (p=0.095) also decreased, not significantly. However, between groups there was no significant effect on tPA (p=0.50), PAI-1 (p=0.249), plasminogen (p=0.742), or fibrinogen (p=0.299). Conclusion: The amplitude of vibration training is an important variable that affects fibrinolytic factors.

Keywords: vibration, fibrinolysis, blood coagulation, plasminogen

Procedia PDF Downloads 404
4604 Influence of the Cooking Technique on the Iodine Content of Frozen Hake

Authors: F. Deng, R. Sanchez, A. Beltran, S. Maestre

Abstract:

The high nutritional value associated with seafood is related to the presence of essential trace elements. Moreover, seafood is considered an important source of energy, proteins, and long-chain polyunsaturated fatty acids. Generally, seafood is consumed cooked; consequently, its nutritional value may be degraded. Seafood such as fish, shellfish, and seaweed can be considered one of the main sources of iodine. Deficient or excessive consumption of iodine can cause dysfunction and pathologies related to the thyroid gland. The main objective of this work is to evaluate iodine stability in hake (Merluccius) subjected to different culinary techniques. The culinary processes considered were: boiling, steaming, microwave cooking, baking, cooking en papillote (a twisted cover with the shape of a sweet wrapper), and coating with a batter of flour and deep-frying. The determination of iodine was carried out by Inductively Coupled Plasma Mass Spectrometry (ICP-MS). Regarding sample handling strategies, liquid-liquid extraction has demonstrated itself to be a powerful pre-concentration and clean-up approach for trace metal analysis by ICP techniques. Extraction with tetramethylammonium hydroxide (TMAH reagent) was used as the sample preparation method in this work. Based on the results, it can be concluded that iodine was degraded by the cooking processes. The greatest degradation was observed for the boiling and microwave cooking processes, where the iodine content of hake decreased by up to 60% and 52%, respectively. However, if the boiling cooking liquid is preserved, the loss generated during cooking is reduced. Only when the fish was cooked following the en papillote process was the iodine content preserved.

Keywords: cooking process, ICP-MS, iodine, hake

Procedia PDF Downloads 142
4603 Culture of Primary Cortical Neurons on Hydrophobic Nanofibers Induces the Formation of Organoid-Like Structures

Authors: Nick Weir, Robert Stevens, Alan Hargreaves, Martin McGinnity, Chris Tinsley

Abstract:

Hydrophobic materials have previously demonstrated the ability to elevate cell-cell interactions and promote the formation of neural networks whilst aligned nanofibers demonstrate the ability to induce extensive neurite outgrowth in an aligned manner. Hydrophobic materials typically elicit an immune response upon implantation and thus materials used for implantation are typically hydrophilic. Poly-L-lactic acid (PLLA) is a hydrophobic, non-immunogenic, FDA approved material that can be electrospun to form aligned nanofibers. Primary rat cortical neurons cultured for 10 days on aligned PLLA nanofibers formed 3D cell clusters, approximately 800 microns in diameter. Neurites that extended from these clusters were highly aligned due to the alignment of the nanofibers they were cultured upon and fasciculation was also evident. Plasma treatment of the PLLA nanofibers prior to seeding of cells significantly reduced the hydrophobicity and abolished the cluster formation and neurite fasciculation, whilst reducing the extent and directionality of neurite outgrowth; it is proposed that hydrophobicity induces the changes to cellular behaviors. Aligned PLLA nanofibers induced the formation of a structure that mimics the grey-white matter compartmentalization that is observed in vivo and thus represents a step forward in generating organoids or biomaterial-based implants. Upon implantation into the brain, the biomaterial architectures described here may provide a useful platform for both brain repair and brain remodeling initiatives.

Keywords: hydrophobicity, nanofibers, neurite fasciculation, neurite outgrowth, PLLA

Procedia PDF Downloads 160
4602 Impacts on Marine Ecosystems Using a Multilayer Network Approach

Authors: Nelson F. F. Ebecken, Gilberto C. Pereira, Lucio P. de Andrade

Abstract:

Bays, estuaries and coastal ecosystems are among the most used and threatened natural systems globally. Their deterioration is due to intense and increasing human activities. This paper aims to monitor a socio-ecological system in Brazil and to model and simulate it through a multilayer network representing a DPSIR structure (Drivers, Pressures, States, Impacts, Responses), following the concept of ecosystem-based management to support decision-making under the National/State/Municipal Coastal Management policy. This approach accounts for several interferences and can represent a significant advance in several scientific respects. The main objective of this paper is the coupling of three different types of complex networks, the first being an ecological network, the second a social network, and the third a network of economic activities, in order to model the marine ecosystem. Multilayer networks comprise two or more "layers", which may represent different types of interactions, different communities, different points in time, and so on. The dependency between layers results from processes that affect several layers at once; for example, the dispersion of individuals between two patches affects the network structure of both. A multilayer network consists of (i) a set of physical nodes representing entities (e.g., species, people, companies); (ii) a set of layers, which may include multiple layering aspects (e.g., time dependency and multiple types of relationships); (iii) a set of state nodes, each of which corresponds to the manifestation of a given physical node in a specific layer; and (iv) a set of edges (weighted or not) connecting the state nodes. The edge set includes the familiar intralayer edges and the interlayer edges, which connect state nodes across layers.
The methodology, applied to an existing case, uses flow cytometry and models the ecological relationships (trophic and non-trophic) following fuzzy-theory concepts and graph visualization. The identification of subnetworks in the fuzzy graphs is carried out using a specific computational method. This methodology makes it possible to consider the influence of different factors and to assess their contributions to the decision-making process.
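The (i)-(iv) decomposition above can be made concrete in a few lines. The following sketch (our illustration with made-up ecological/social/economic entities, not the authors' Brazilian dataset) stores state nodes as (physical node, layer) pairs and separates intralayer from interlayer edges.

```python
# Minimal multilayer-network container following the (i)-(iv) decomposition:
# physical nodes, layers, state nodes = (node, layer) pairs, and one edge set
# mixing intralayer and interlayer links.
physical_nodes = {"fisher", "crab", "market"}       # (i) entities
layers = ["ecological", "social", "economic"]       # (ii) layering aspects

# (iii) state nodes: a physical node manifested in a specific layer
state_nodes = {("crab", "ecological"), ("fisher", "ecological"),
               ("fisher", "social"), ("fisher", "economic"),
               ("market", "economic")}

# (iv) edges between state nodes (unweighted here)
edges = {
    (("fisher", "ecological"), ("crab", "ecological")),   # harvesting (intralayer)
    (("fisher", "social"), ("fisher", "economic")),       # same person, two roles (interlayer)
    (("fisher", "economic"), ("market", "economic")),     # trade (intralayer)
}

def intralayer(edge):
    # an edge is intralayer when both endpoints live in the same layer
    (_, la), (_, lb) = edge
    return la == lb

intra = [e for e in edges if intralayer(e)]
inter = [e for e in edges if not intralayer(e)]
print(len(intra), len(inter))   # 2 intralayer edges, 1 interlayer edge
```

The interlayer edge is what carries the dependency between layers that the abstract emphasizes: removing the fisher from the economic layer also alters the structure seen from the social layer.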

Keywords: marine ecosystems, complex systems, multilayer network, ecosystems management

Procedia PDF Downloads 113
4601 Stakeholder Perceptions of Wildlife Tourism in Communal Conservancies within the Mudumu North Complex, Zambezi Region, Namibia

Authors: Shimhanda M. N., Mogomotsi P. K., Thakadu O. T., Rutina L. P.

Abstract:

Wildlife tourism (WT) in communal conservancies has the potential to contribute significantly to sustainable rural development. However, understanding local perceptions, promoting participation, and addressing stakeholder concerns are all required for sustainability. This study looks at stakeholder perceptions of WT in conservancies near protected areas in Namibia's Zambezi region, specifically the Mudumu North Complex. A mixed-methods approach was employed to collect data from 356 households using stratified sampling. Qualitative data was gathered through six focus group discussions and 22 key informant interviews. Quantitative analysis, using descriptive statistics and Spearman correlation, investigated socio-demographic influences on WT perceptions, while qualitative data were subjected to thematic analysis to identify key themes. Results revealed high awareness and generally positive perceptions of WT, particularly in Mashi Conservancy, which benefits from diverse tourism activities and joint ventures with lodges. Kwandu and Kyaramacan, which rely heavily on consumptive tourism, had lower awareness and perceived benefits. Human-wildlife conflict emerged as a persistent issue, especially in Kwandu and Mashi, where crop damage and wildlife interference undermined community support for WT. Younger, more educated, and employed individuals held more positive attitudes towards WT. The study highlights the importance of recognising community heterogeneity and tailoring WT strategies to meet diverse needs, including HWC mitigation. Policy implications include increasing community engagement, ensuring equitable benefit distribution, and implementing inclusive tourism strategies that promote long-term sustainability. These findings are critical for developing long-term WT models that address local challenges, encourage community participation, and contribute to socioeconomic development and conservation goals.

Keywords: sustainable tourism, stakeholder perceptions, community involvement, socio-economic development

Procedia PDF Downloads 17
4600 On the Zeros of the Degree Polynomial of a Graph

Authors: S. R. Nayaka, Putta Swamy

Abstract:

A graph polynomial is one of the algebraic representations of a graph, and the degree polynomial is one of the simplest. The degree polynomial of a graph G of order n is the polynomial Deg(G, x) with coefficients deg(G, i), where deg(G, i) denotes the number of vertices of degree i in G. In this article, we investigate the behavior of the roots of the degree polynomials of some families of graphs in the complex field. We identify the graphs whose degree polynomials have only integral roots. Further, we characterize the graphs having a single root or having only real roots, and the behavior of the polynomial at particular values is also obtained.
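The definition lends itself to direct computation. The sketch below (ours, for illustration) builds the coefficient list of Deg(G, x) from an adjacency list and evaluates it by Horner's rule, using the 4-cycle C4 (every vertex of degree 2, so Deg(C4, x) = 4x²) as a check; note that Deg(G, 1) always equals the order n of the graph.

```python
def degree_polynomial(adj):
    """Coefficients [deg(G,0), deg(G,1), ...] of Deg(G, x) for a graph
    given as an adjacency list {vertex: iterable of neighbours}."""
    degrees = [len(nbrs) for nbrs in adj.values()]
    coeffs = [0] * ((max(degrees) + 1) if degrees else 1)
    for d in degrees:
        coeffs[d] += 1          # deg(G, i) counts vertices of degree i
    return coeffs

def evaluate(coeffs, x):
    # Horner's rule: value of Deg(G, x) at a point
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

# 4-cycle C4: every vertex has degree 2, so Deg(C4, x) = 4x^2
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(degree_polynomial(c4))                # [0, 0, 4]
print(evaluate(degree_polynomial(c4), 1))   # 4 = number of vertices
```

For C4 the only root is x = 0 (with multiplicity two), an example of a graph whose degree polynomial has only integral roots; the path P3 gives Deg(P3, x) = 2x + x², with integral roots 0 and -2.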

Keywords: degree polynomial, regular graph, minimum and maximum degree, graph operations

Procedia PDF Downloads 249