Search results for: complex urban geometry
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2781


201 Analysis of Pressure Drop in a Concentrated Solar Collector with Direct Steam Production

Authors: Sara Sallam, Mohamed Taqi, Naoual Belouaggadia

Abstract:

Solar thermal power plants using parabolic trough collectors (PTC) are currently a powerful technology for generating electricity. Most of these solar power plants use thermal oils as heat transfer fluid. The latter is heated in the solar field and transfers the absorbed heat in an oil-water heat exchanger for the production of steam driving the turbines of the power plant. Currently, we are seeking to develop PTCs with direct steam generation (DSG). This process consists of circulating water under pressure in the receiver tube to generate steam directly in the solar loop. This makes it possible to reduce the investment and maintenance costs of the PTCs (the oil-water exchangers are removed) and to avoid the environmental risks associated with the use of thermal oils. The pressure drops in these systems are an important parameter to ensure their proper operation. The determination of these losses is complex because of the presence of the two phases, and they are most often described by models using empirical correlations. A comparison of these models with experimental data was performed. Our calculations focused on the evolution of the pressure of the liquid-vapor mixture along the receiver tube of a PTC-DSG for pressure values and inlet flow rates ranging respectively from 3 to 10 MPa and from 0.4 to 0.6 kg/s. The comparison of the numerical results with experimental data allows us to demonstrate the validity of some models according to the pressures and inlet flow rates in the PTC-DSG receiver tube. The analysis of these two parameters’ effects on the evolution of the pressure along the receiver tube shows that increasing the inlet pressure and decreasing the flow rate lead to minimal pressure losses.
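For orientation only, the sketch below shows the kind of empirical two-phase correlation such models rest on: a homogeneous-flow estimate of the frictional and gravitational pressure gradient in the receiver tube. The fluid properties, Blasius friction factor, tube diameter and operating point are illustrative assumptions, not the correlations or data compared in the paper.

```python
import numpy as np

def homogeneous_dp_dz(G, x, rho_l, rho_g, mu_l, mu_g, D, g=9.81, theta=0.0):
    """Frictional + gravitational pressure gradient (Pa/m) for a liquid-vapour
    mixture, using the homogeneous two-phase flow model (illustrative only).
    G: mass flux [kg/m^2/s], x: vapour quality [-], D: tube diameter [m]."""
    rho_h = 1.0 / (x / rho_g + (1.0 - x) / rho_l)   # homogeneous mixture density
    mu_h = 1.0 / (x / mu_g + (1.0 - x) / mu_l)      # McAdams mixture viscosity
    Re = G * D / mu_h
    f = 0.079 * Re**-0.25                           # Blasius friction factor
    dp_fric = 2.0 * f * G**2 / (D * rho_h)
    dp_grav = rho_h * g * np.sin(theta)
    return dp_fric + dp_grav

# Example point in the range studied (inlet 3-10 MPa, 0.4-0.6 kg/s); the 50 mm
# receiver diameter and property values are assumptions.
A = np.pi * 0.05**2 / 4
print(homogeneous_dp_dz(G=0.5 / A, x=0.2,
                        rho_l=750.0, rho_g=50.0,
                        mu_l=1e-4, mu_g=2e-5, D=0.05), "Pa/m")
```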

Keywords: Direct steam generation, parabolic trough collectors, pressure drop.

200 Fertigation Use in Agriculture and Biosorption of Residual Nitrogen by Soil Microorganisms

Authors: Irina Mikajlo, Jakub Elbl, Antonín Kintl, Jindřich Kynický, Martin Brtnický, Jaroslav Záhora

Abstract:

The present work deals with the possible use of fertigation in agriculture and its impact on the availability of mineral nitrogen (Nmin) in topsoil and subsoil horizons. The aim of the present study is to demonstrate the effect of the presence of organic matter in fertigation on the microbial transformation and availability of mineral nitrogen forms. The main reason for the investigation is the potential use of pretreated waste water as a source of organic carbon (Corg) and residual nutrients (Nmin) for fertigation. A laboratory experiment has been conducted to demonstrate the effect of the arable land fertilization method on Nmin availability at different depths of the soil, using model experimental containers filled with soil from topsoil and subsoil horizons taken from the study area. Tufted hairgrass (Deschampsia caespitosa) has been chosen as a model plant. The water source protection zone Brezova nad Svitavou, where significant underground reservoirs of drinking water of the highest quality are located, has been the research area. Since the second half of the last century, local sources of drinking water have shown an increase in nitrogenous compounds that originate almost exclusively from arable lands. Therefore, the following text focuses on the fate of mineral nitrogen in the plant-soil complex. Research results show that the application of fertigation with Corg in combination with mineral fertilizer can reduce the amount of Nmin leached from the topsoil horizon of agricultural soils. In addition, some reduction in plant biomass production may occur.

Keywords: Fertigation, fertilizers, mineral nitrogen, soil microorganisms.

199 A Set Theory Based Factoring Technique and Its Use for Low Power Logic Design

Authors: Padmanabhan Balasubramanian, Ryuta Arisaka

Abstract:

Factoring Boolean functions is one of the basic operations in algorithmic logic synthesis. A novel algebraic factorization heuristic for single-output combinatorial logic functions is presented in this paper and is developed based on the set theory paradigm. The impact of factoring is analyzed mainly from a low power design perspective for standard cell based digital designs. The physical implementation of a number of MCNC/IWLS combinational benchmark functions and sub-functions is compared before and after factoring, based on a simple technology mapping procedure utilizing only standard gate primitives (readily available as standard cells in a technology library) and not cells corresponding to optimized complex logic. The power results were obtained at the gate-level by means of an industry-standard power analysis tool from Synopsys, targeting a 130nm (0.13μm) UMC CMOS library, for the typical case. The wire-loads were inserted automatically and the simulations were performed with maximum input activity. The gate-level simulations demonstrate the advantage of the proposed factoring technique in comparison with other existing methods from a low power perspective, for arbitrary examples. Though the benchmark experiments report mixed results, the mean savings in total power and dynamic power for the factored solution over a non-factored solution were 6.11% and 5.85% respectively. In terms of leakage power, the average savings for the factored forms was significant, to the tune of 23.48%. The factored solution is expected to better its non-factored counterpart in terms of the power-delay product, as it is well known that factoring, in general, yields a delay-efficient multi-level solution.

Keywords: Factorization, Set theory, Logic function, Standard cell based design, Low power.

198 Simulating Human Behavior in (Un)Built Environments: Using an Actor Profiling Method

Authors: Hadas Sopher, Davide Schaumann, Yehuda E. Kalay

Abstract:

This paper addresses the shortcomings of architectural computation tools in representing human behavior in built environments, prior to construction and occupancy of those environments. Evaluating whether a design fits the needs of its future users is currently done solely post construction, or is based on the knowledge and intuition of the designer. This issue is of high importance when designing complex buildings such as hospitals, where the quality of treatment as well as patient and staff satisfaction are of major concern. Existing computational pre-occupancy human behavior evaluation methods are geared mainly to test ergonomic issues, such as wheelchair accessibility, emergency egress, etc. As such, they rely on Agent Based Modeling (ABM) techniques, which emphasize the individual user. Yet we know that most human activities are social, and involve a number of actors working together, which ABM methods cannot handle. Therefore, we present an event-based model that manages the interaction between multiple Actors, Spaces, and Activities, to describe dynamically how people use spaces. This approach requires expanding the computational representation of Actors beyond their physical description, to include psychological, social, cultural, and other parameters. The model presented in this paper includes cognitive abilities and rules that describe the response of actors to their physical and social surroundings, based on the actors’ internal status. The model has been applied in a simulation of hospital wards, and showed adaptability to a wide variety of situated behaviors and interactions.
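As a rough illustration of the event-based idea (as opposed to purely agent-based stepping), the sketch below coordinates several Actors, Spaces and Activities through a time-ordered event queue; an activity fires only when all of its participants are free. The class names, the postponement rule and the fixed activity duration are hypothetical, not the authors' model.

```python
import heapq
from dataclasses import dataclass, field

@dataclass
class Actor:
    name: str
    role: str
    status: dict = field(default_factory=dict)   # e.g. fatigue, task queue

@dataclass(order=True)
class Event:
    time: float
    activity: str = field(compare=False)
    actors: list = field(compare=False)
    space: str = field(compare=False)

def run(events, horizon):
    """Pop events in time order; an activity fires only when all of its
    participants are free, otherwise it is postponed (simple social rule)."""
    busy_until = {}
    heapq.heapify(events)
    while events:
        ev = heapq.heappop(events)
        if ev.time > horizon:
            break
        if any(busy_until.get(a.name, 0.0) > ev.time for a in ev.actors):
            heapq.heappush(events, Event(ev.time + 1.0, ev.activity, ev.actors, ev.space))
            continue
        print(f"t={ev.time:5.1f}  {ev.activity} in {ev.space} with "
              + ", ".join(a.name for a in ev.actors))
        for a in ev.actors:                      # the activity occupies every participant
            busy_until[a.name] = ev.time + 10.0  # assumed fixed duration

nurse, doctor = Actor("nurse", "staff"), Actor("doctor", "staff")
run([Event(0.0, "ward round", [nurse, doctor], "ward A"),
     Event(5.0, "medication", [nurse], "room 3")], horizon=60.0)
```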

Keywords: Agent based modeling, architectural design evaluation, event modeling, human behavior simulation, spatial cognition.

197 ZMP Based Reference Generation for Biped Walking Robots

Authors: Kemalettin Erbatur, Özer Koca, Evrim Taşkıran, Metin Yılmaz, Utku Seven

Abstract:

The recent fifteen years have witnessed fast improvements in the field of humanoid robotics. The human-like robot structure is better suited to the human environment, with its superior obstacle avoidance properties, when compared with wheeled service robots. However, walking control for bipedal robots is a challenging task due to their complex dynamics. Stable reference generation plays a very important role in control. The Linear Inverted Pendulum Model (LIPM) and the Zero Moment Point (ZMP) criterion are applied in a number of studies for stable walking reference generation of biped walking robots. This paper follows this main approach too. We propose a natural and continuous ZMP reference trajectory for a stable and human-like walk. The ZMP reference trajectories move forward under the sole of the support foot when the robot body is supported by a single leg. The robot center of mass trajectory is obtained from predefined ZMP reference trajectories by a Fourier series approximation method. The Gibbs phenomenon problem common with Fourier approximations of discontinuous functions is avoided by employing continuous ZMP references. Also, these ZMP reference trajectories possess pre-assigned single and double support phases, which are very useful in experimental tuning work. The ZMP based reference generation strategy is tested via three-dimensional full-dynamics simulations of a 12-degrees-of-freedom biped robot model. Simulation results indicate that the proposed reference trajectory generation technique is successful.
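A minimal sketch of the ZMP-to-CoM step is given below, assuming the standard LIPM relation p = x − (z_c/g)·x''; each Fourier harmonic of the (periodic part of the) ZMP reference is divided by 1 + z_c·ω²/g to obtain the corresponding CoM harmonic. The walking period, CoM height, ramp shape and amplitudes are illustrative, and the pre-assigned support phases of the paper are not reproduced.

```python
import numpy as np

g, z_c = 9.81, 0.8            # gravity, assumed constant CoM height [m]
T, N = 1.6, 800               # walking period [s] and samples per period (assumed)
t = np.arange(N) * T / N

# Hypothetical continuous ZMP reference in the walking direction: a forward
# ramp plus a periodic sway that moves the ZMP under the support foot.
ramp = 0.3 * t / T
p_ref = ramp + 0.02 * np.sin(2 * np.pi * t / T)

# LIPM: p = x - (z_c/g) x''.  For each harmonic e^{i w t}: P_k = X_k (1 + z_c w^2 / g),
# so the CoM spectrum follows by dividing the ZMP spectrum harmonic-wise.
# The linear ramp satisfies p = x exactly, so it is removed and added back.
P = np.fft.rfft(p_ref - ramp)
w = 2 * np.pi * np.fft.rfftfreq(N, d=T / N)
X = P / (1.0 + z_c * w**2 / g)
x_com = np.fft.irfft(X, n=N) + ramp

print(float(np.max(np.abs(x_com - p_ref))))   # the CoM sways less than the ZMP reference
```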

Keywords: Biped robot, Linear Inverted Pendulum Model, Zero Moment Point, Fourier series approximation.

196 Cost Benefit Analysis: Evaluation among the Millimetre Wavebands and SHF Bands of Small Cell 5G Networks

Authors: Emanuel Teixeira, Anderson Ramos, Marisa Lourenço, Fernando J. Velez, Jon M. Peha

Abstract:

This article discusses the cost-benefit analysis aspects of millimetre wavebands (mmWaves) and the Super High Frequency (SHF) band. The decay of the carrier-to-noise-plus-interference ratio with the coverage distance is assessed by considering two different path loss models: the two-slope urban micro Line-of-Sight (UMiLoS) model for the SHF band and the modified Friis propagation model for frequencies above 24 GHz. The equivalent supported throughput is estimated at the 5.62, 28, 38, 60 and 73 GHz frequency bands and the influence of the carrier-to-noise-plus-interference ratio in the radio and network optimization process is explored. Mostly owing to the attenuation behaviour of the two-slope propagation model for the SHF band, the supported throughput at this band is higher than at the millimetre wavebands only for the longest cell lengths. The cost-benefit analysis of these pico-cellular networks was performed for regular cellular topologies, considering the unlicensed spectrum. For the shortest distances, an optimum of the revenue in percentage terms can be distinguished at a cell length of R ≈ 10 m for the millimetre wavebands, while for the longest distances an optimum of the revenue can be observed at R ≈ 550 m for the 5.62 GHz band. It is possible to observe that, for the 5.62 GHz band, the profit is slightly lower than for the millimetre wavebands for the shortest values of R, and starts to increase for cell lengths approximately equal to the ratio between the break-point distance and the co-channel reuse factor, achieving a maximum for values of R approximately equal to 550 m.
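The two path loss models can be sketched as below; the break-point formula, antenna heights and path loss exponents are assumptions chosen only to illustrate why the two-slope SHF model eventually falls off faster than the single-slope modified Friis model used at mmWave frequencies.

```python
import numpy as np

c = 3e8  # speed of light [m/s]

def friis_modified(d, f_hz, n=2.1, d0=1.0):
    """Modified Friis path loss [dB]: free-space loss at reference distance d0
    plus a single slope of exponent n beyond it (exponent is an assumption)."""
    pl_d0 = 20 * np.log10(4 * np.pi * d0 * f_hz / c)
    return pl_d0 + 10 * n * np.log10(d / d0)

def two_slope_umi_los(d, f_hz, h_bs=10.0, h_ue=1.5, n1=2.2, n2=4.0):
    """Two-slope urban-micro LoS model: exponent n1 before the break-point
    distance, n2 after it (all parameter values are illustrative)."""
    lam = c / f_hz
    d_bp = 4 * h_bs * h_ue / lam                  # classical break-point distance
    pl_1m = 20 * np.log10(4 * np.pi * 1.0 / lam)
    pl_bp = pl_1m + 10 * n1 * np.log10(d_bp)
    return np.where(d < d_bp,
                    pl_1m + 10 * n1 * np.log10(d),
                    pl_bp + 10 * n2 * np.log10(d / d_bp))

d = np.array([10.0, 100.0, 550.0])                # cell lengths of interest [m]
print(two_slope_umi_los(d, 5.62e9))               # SHF band
print(friis_modified(d, 28e9))                    # mmWave band
```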

Keywords: 5G, millimetre wavebands, super high-frequency band, SINR, signal-to-interference-plus-noise ratio, cost benefit analysis.

195 Impact of Computer-Mediated Communication on Virtual Teams' Performance: An Empirical Study

Authors: Nadeem Ehsan, Ebtisam Mirza, Muhammad Ahmad

Abstract:

In a complex project environment, project teams face multi-dimensional communication problems that can ultimately lead to project breakdown. Team performance varies in a Face-to-Face (FTF) environment versus groups working remotely in a computer-mediated communication (CMC) environment. A brief review of the Input-Process-Output model suggested by James E. Driskell, Paul H. Radtke and Eduardo Salas in “Virtual Teams: Effects of Technological Mediation on Team Performance” (2003) has been done to develop the basis of this research. This model theoretically analyzes the effects of technological mediation on team processes, such as cohesiveness, status and authority relations, counternormative behavior and communication. An empirical study described in this paper has been undertaken to test the cohesiveness of diverse project teams in a multi-national organization. This study uses both quantitative and qualitative techniques for data gathering and analysis. These techniques include interviews and questionnaires for data collection and graphical data representation for analyzing the collected data. Computer-mediated technology may impact team performance because of differences in cohesiveness among teams, and this difference may be moderated by factors such as the type of communication environment, the type of task and the temporal context of the team. Based on the reviewed model, sets of hypotheses are devised and tested. This research reports on a study that compared team cohesiveness among virtual teams using CMC and non-CMC communication mediums. The findings suggest that CMC can help virtual teams increase cohesiveness among their members, making CMC an effective medium for increasing productivity and team performance.

Keywords: Computer-mediated Communication, Virtual Teams, Team Performance, Team Cohesiveness.

194 Estimation of Exhaust and Non-Exhaust Particulate Matter Emissions’ Share from On-Road Vehicles in Addis Ababa City

Authors: Solomon Neway Jida, Jean-Francois Hetet, Pascal Chesse

Abstract:

Vehicular emission is the key source of air pollution in the urban environment. This includes both fine particles (PM2.5) and coarse particulate matter (PM10). However, particulate matter emissions from road traffic comprise emissions from the exhaust tailpipe and emissions due to wear and tear of vehicle parts such as brakes, tires and clutch, as well as re-suspension of dust (non-exhaust emissions). This study estimates the share of these two sources of pollutant particle emissions from on-road vehicles in the Addis Ababa municipality, Ethiopia. To calculate the shares, two methods were applied; the exhaust-tailpipe emissions were calculated using the European emission inventory Tier II method, and Tier I was used for the non-exhaust emissions (vehicle tire wear, brake wear, and road surface wear). The results show that of the total traffic-related particulate emissions in the city, 63% is emitted from vehicle exhaust and the remaining 37% from non-exhaust sources. Annual road transport exhaust emissions amount to around 2394 tons of particles from all vehicle categories. Of the total yearly non-exhaust particulate matter emissions, tire and brake wear contributed around 65% and road-surface wear the remaining 35%. Furthermore, vehicle tire and brake wear were responsible for annual emissions of 584.8 tons of coarse particles (PM10) and 314.4 tons of fine particles (PM2.5) in the city, whereas surface wear emissions were responsible for around 313.7 tons of PM10 and 169.9 tons of PM2.5. This suggests that non-exhaust sources might be as significant as exhaust sources and have a considerable contribution to the impact on air quality.
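The inventory arithmetic behind such an estimate reduces to activity data multiplied by emission factors; a minimal sketch follows, with placeholder fleet data and factors rather than the Addis Ababa values or the official EMEP/EEA ones.

```python
# Minimal sketch of the inventory arithmetic: exhaust PM via a Tier-II style sum
# over vehicle categories, non-exhaust PM via Tier-I style bulk emission factors.
# All activity data and factors below are placeholders, not values from the paper.
fleet = {                      # category: (vehicle-km per year, exhaust EF g/km)
    "passenger cars": (1.2e9, 0.05),
    "minibuses":      (0.8e9, 0.20),
    "heavy trucks":   (0.3e9, 0.45),
}
non_exhaust_ef = {"tyre wear": 0.0107, "brake wear": 0.0075, "road wear": 0.0075}  # g/km

vkm_total = sum(vkm for vkm, _ in fleet.values())
exhaust_t = sum(vkm * ef for vkm, ef in fleet.values()) / 1e6            # tonnes/yr
non_exhaust_t = {k: ef * vkm_total / 1e6 for k, ef in non_exhaust_ef.items()}

total = exhaust_t + sum(non_exhaust_t.values())
print(f"exhaust share     {exhaust_t / total:.0%}")
print(f"non-exhaust share {sum(non_exhaust_t.values()) / total:.0%}")
```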

Keywords: Addis Ababa, automotive emission, emission estimation, particulate matters.

193 Perceptions and Attitudes towards Infant's Physical Health and Caring: Immigrants and Native Born Mothers

Authors: Orly Sarid, Yana Shraga

Abstract:

Purpose: To compare attitudes and perceptions of Israeli native born mothers versus former Soviet Union (FSU) immigrant mothers regarding the physical health of their infant. Methodology: Cross-sectional design. A convenience sample of 50 participants was recruited by face-to-face and snowball techniques. A questionnaire was constructed according to the instructions of the Ministry of Health for the care and treatment of infants. The main areas explored were: sources of knowledge that the young mother acquired regarding the care of her infant, ways of caring for the infant, hygiene and sanitary habits, and the pattern of referral to health professionals. The last topic relates to emotions mothers might experience towards their infant. Results: Mothers from both cultural groups present some similar caring behaviors, which may express a universal aspect of mothers' behavior towards their infants. However, immigrant mothers differ significantly from native born mothers by relying less on their mothers' and grandmothers' experience; they wean their infants from diapers earlier, they are stricter about hygiene and sanitary habits, and they tend to consult a physician when their infant has a low fever. Native born and immigrant mothers differ in their expressions of pride and wonder. Immigrant mothers report a lesser degree of these emotions towards their infants than native born mothers. Conclusion: The theoretical model of socialization and acculturation of immigrant mothers is employed as an explanatory model for the current findings. Young immigrant mothers undergo a complex acculturation process and adapt behavioral patterns in various areas to comply with Israeli norms and values, demonstrating assimilation. In other areas they adhere to the norms of their original culture.

Keywords: Attitudes, immigrant mothers, infant, physical health

192 Holistic Approach to Assess the Potential of Using Traditional and Advance Insulation Materials for Energy Retrofit of Office Buildings

Authors: Marco Picco, Mahmood Alam

Abstract:

Improving the energy performance of existing buildings can be challenging, particularly when facades cannot be modified and the only available option is internal insulation. In such cases, the choice of the most suitable material becomes increasingly complex, as in addition to thermal transmittance and capital cost, the designer needs to account for the impact of the intervention on the internal spaces, and in particular the loss of usable space due to the additional layers of material installed. This paper explores this issue by analyzing a case study of an average office building needing to go through a refurbishment in order to reach the limits imposed by current regulations to achieve energy efficiency in buildings. The building is simulated through dynamic performance simulation under three different climate conditions in order to evaluate its energy needs. The use of Vacuum Insulated Panels as an option for energy refurbishment is compared to traditional insulation materials (XPS, Mineral Wool). For each scenario, energy consumptions are calculated and, in combination with their expected capital costs, used to perform a financial feasibility analysis. A holistic approach is proposed, taking into account the impact of the intervention on internal space by quantifying the value of the lost usable space and including it in the financial feasibility analysis. The proposed approach highlights how taking into account different drivers will lead to the choice of different insulation materials, showing how accounting for the economic value of space can make VIPs an attractive solution for energy retrofitting under various climate conditions.
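A minimal sketch of the holistic cost comparison is given below: the capital value of the usable floor area lost to the insulation layer is added to the investment before computing a simple payback. All prices, savings and areas are hypothetical; the paper's dynamic simulations and detailed feasibility analysis are not reproduced.

```python
def simple_payback(capex, annual_energy_saving, floor_area_lost_m2,
                   value_per_m2=3000.0):
    """Years to recover the investment when the capital value of the usable
    floor area lost to internal insulation is added to the cost side.
    value_per_m2 is an assumed market value, not a figure from the paper."""
    effective_cost = capex + floor_area_lost_m2 * value_per_m2
    return effective_cost / annual_energy_saving

# Illustrative comparison: thin VIP layer vs. thicker mineral wool for the same
# U-value target (all numbers hypothetical).
print("VIP          :", simple_payback(capex=18000, annual_energy_saving=1500,
                                        floor_area_lost_m2=1.0), "years")
print("Mineral wool :", simple_payback(capex=6000,  annual_energy_saving=1500,
                                        floor_area_lost_m2=6.0), "years")
```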

Keywords: Vacuum insulated panels, building performance simulation, payback period, building energy retrofit.

191 Influence of Internal Topologies on Components Produced by Selective Laser Melting: Numerical Analysis

Authors: C. Malça, P. Gonçalves, N. Alves, A. Mateus

Abstract:

Regardless of the manufacturing process used, subtractive or additive, and of the material, purpose and application, produced components are conventionally solid masses with more or less complex shape depending on the production technology selected. Aspects such as reducing the weight of components, associated with the low volume of material required and the almost non-existent material waste, speed and flexibility of production and, primarily, a high mechanical strength combined with high structural performance are competitive advantages in any industrial sector, from automotive, molds, aviation, aerospace, construction, pharmaceuticals and medicine to, more recently, human tissue engineering. Such features, properties and functionalities are attained in metal components produced using the additive technique of Rapid Prototyping from metal powders, commonly known as Selective Laser Melting (SLM), with optimized internal topologies and varying densities. In order to produce components with high strength and high structural and functional performance, regardless of the type of application, three different internal topologies were developed and analyzed using numerical computational tools. The developed topologies were numerically submitted to mechanical compression and four-point bending testing. Finite Element Analysis results demonstrate how different internal topologies can contribute to improve mechanical properties, even with a high degree of porosity relative to fully dense components. Results are very promising not only from the point of view of mechanical resistance, but especially through the achievement of considerable variation in density without loss of structural and functional high performance.

Keywords: Additive Manufacturing, Internal topologies, Porosity, Rapid Prototyping, Selective Laser Melting.

190 CT-based Monte Carlo Dose Calculations for Proton Therapy Using a New Interface Program

Authors: A. Esmaili Torshabi, A. Terakawa, K. Ishii, H. Yamazaki, S. Matsuyama, Y. Kikuchi, M. Nakhostin, H. Sabet, A. Ishizaki, W. Yamashita, T. Togashi, J. Arikawa, H. Akiyama, K. Koyata

Abstract:

The purpose of this study is to introduce a new interface program to calculate dose distributions with the Monte Carlo method in complex heterogeneous systems such as organs or tissues in proton therapy. This interface program was developed under MATLAB software and includes a friendly graphical user interface with several tools such as image property adjustment or result display. The quadtree decomposition technique was used as an image segmentation algorithm to create optimum geometries from Computed Tomography (CT) images for dose calculations of the proton beam. The result of this technique is a number of non-overlapping squares with different sizes in every image. In this way, the resolution of the image segmentation is high enough in and near heterogeneous areas to preserve the precision of dose calculations, and is low enough in homogeneous areas to reduce the number of cells directly. Furthermore, a cell reduction algorithm can be used to combine neighboring cells with the same material. The validation of this method has been done in two ways: first, in comparison with experimental data obtained with an 80 MeV proton beam at the Cyclotron and Radioisotope Center (CYRIC) at Tohoku University and, second, in comparison with data based on the polybinary tissue calibration method, performed at CYRIC. These results are presented in this paper. This program can read the output file of the Monte Carlo code while the region of interest is selected manually, and gives a plot of the dose distribution of the proton beam superimposed onto the CT images.
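A generic quadtree split on a synthetic 2D slice is sketched below to show how homogeneous regions collapse into few large cells while boundaries stay finely resolved; the homogeneity criterion, tolerance and toy image are assumptions, not the MATLAB interface program described here.

```python
import numpy as np

def quadtree(img, x0, y0, size, tol, min_size, cells):
    """Recursively split a square region of a CT-like slice until the spread of
    values within it falls below `tol` (or the cell reaches `min_size`);
    homogeneous regions therefore end up as large cells."""
    block = img[y0:y0 + size, x0:x0 + size]
    if size <= min_size or block.max() - block.min() <= tol:
        cells.append((x0, y0, size, float(block.mean())))
        return
    h = size // 2
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        quadtree(img, x0 + dx, y0 + dy, h, tol, min_size, cells)

# Toy 64x64 "CT image": a dense circular insert inside a uniform background.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
img = np.where((xx - 40) ** 2 + (yy - 24) ** 2 < 100, 1000.0, 0.0)

cells = []
quadtree(img, 0, 0, n, tol=50.0, min_size=2, cells=cells)
print(len(cells), "cells instead of", n * n, "pixels")
```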

Keywords: Monte Carlo, CT images, Quadtree decomposition, Interface program, Proton beam

189 Adaptive Design of Large Prefabricated Concrete Panels Collective Housing

Authors: Daniel M. Muntean, Viorel Ungureanu

Abstract:

More than half of the urban population in Romania lives today in residential buildings made out of large prefabricated reinforced concrete panels. Since their initial design dates from the 1960s, these housing units are now technically and morally outdated, consuming large amounts of energy for heating, cooling, ventilation and lighting, while failing to meet the needs of the contemporary life-style. Due to their widespread use, the design of a system that improves their energy efficiency would have a real impact, not only on the energy consumption of the residential sector, but also on the quality of life that it offers. Furthermore, with the transition of today’s existing power grid to a “smart grid”, buildings could become an active element for future electricity networks by contributing to micro-generation and energy storage. One of the most addressed issues today is to find locally adapted strategies that can be applied considering the 20-20-20 EU policy criteria and to offer sustainable and innovative solutions for the cost-optimal energy performance of buildings, adapted to the existing local market. This paper presents a possible adaptive design scenario towards sustainable retrofitting of these housing units. The apartments are transformed in order to meet current living requirements and additional extensions are placed on top of the building, replacing the unused roof space, acting not only as housing units, but as active solar energy collection systems. An adaptive building envelope is ensured in order to achieve overall air-tightness and an elevator system is introduced to facilitate access to the upper levels.

Keywords: Adaptive building, energy efficiency, retrofitting, residential buildings, smart grid.

188 Spatial Indeterminacy: Destabilization of Dichotomies in Modern and Contemporary Architecture

Authors: Adrian Lo

Abstract:

Since the advent of modern architecture, notions of free plan and transparency have proliferated well into current trends. The movement’s notion of a spatially homogeneous, open and limitless ‘free plan’ contrasts with the spatially heterogeneous ‘series of rooms’ defined by load bearing walls, which in turn triggered new notions of transparency created by vast expanses of glazed walls. Similarly, transparency was also dichotomized as something that was physical or optical, as well as something conceptual, akin to spatial organization. As opposed to merely accepting the duality and possible incompatibility of these dichotomies, this paper asks how space can be both literally and phenomenally transparent, as well as exhibit both homogeneous and heterogeneous qualities. This paper explores this potential destabilization or blurring of spatial phenomena by dissecting the transparent layers and volumes of a series of selected case studies to investigate how different architects have devised strategies of spatial ambiguity and interpenetration. Projects by Peter Eisenman, Sou Fujimoto, and SANAA will be discussed and analyzed to show how the superimposition of geometries and spaces achieves different conditions of layering, transparency, and interstitiality. Their particular buildings will be explored to reveal various innovative kinds of spatial interpenetration produced through the articulate relations of the elements of architecture, which challenge conventional perceptions of interior and exterior whereby visual homogeneity blurs with spatial heterogeneity. The results show how spatial conceptions such as interpenetration and transparency have the ability to subvert not only inside-outside dialectics, but could also produce multiple degrees of interiority within complex and indeterminate spatial dimensions in constant flux, as well as present alternative forms of social interaction.

Keywords: interpenetration, literal and phenomenal transparency, spatial heterogeneity, visual homogeneity

187 Evaluation of Linear and Geometrically Nonlinear Static and Dynamic Analysis of Thin Shells by Flat Shell Finite Elements

Authors: Djamel Boutagouga, Kamel Djeghaba

Abstract:

The choice of finite element to use in order to predict the nonlinear static or dynamic response of complex structures is an important factor. The main goal of this research work is therefore to study the effect of the in-plane rotational degrees of freedom in linear and geometrically nonlinear static and dynamic analysis of thin shell structures by flat shell finite elements. To this end: First, simple triangular and quadrilateral flat shell finite elements are implemented in an incremental formulation based on the updated Lagrangian corotational description for geometrically nonlinear analysis. The triangular element is a combination of the DKT and CST elements, while the quadrilateral is a combination of the DKQ and the bilinear quadrilateral membrane element. In both elements, the sixth degree of freedom is handled via introducing fictitious stiffness. Secondly, in the same code, the sixth degree of freedom in these elements is handled differently, where the in-plane rotational d.o.f. is considered as an effective d.o.f. in the in-plane field interpolation. Our goal is to compare the resulting shell elements. Third, the analysis is extended to linear dynamic analysis by direct integration using Newmark's implicit method. Finally, the linear dynamic analysis is extended to geometrically nonlinear dynamic analysis, where Newmark's method is used to integrate the equations of motion and the Newton-Raphson method is employed for iterating within each time step increment until equilibrium is achieved. The obtained results demonstrate the effectiveness and robustness of the interpolation of the in-plane rotational d.o.f. and present the deficiencies of using fictitious stiffness in dynamic linear and nonlinear analysis.
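For the direct-integration step, a minimal linear Newmark scheme (average acceleration by default) is sketched below; it is the textbook algorithm, not the authors' corotational nonlinear formulation or their shell elements, and the 2-DOF test system is invented for illustration.

```python
import numpy as np

def newmark(M, C, K, f, u0, v0, dt, beta=0.25, gamma=0.5):
    """Implicit Newmark time integration of M u'' + C u' + K u = f(t) for a
    linear system.  f has shape (nsteps, ndof); returns the displacement
    history with the same shape."""
    n, ndof = f.shape
    u = np.zeros((n, ndof)); v = np.zeros((n, ndof)); a = np.zeros((n, ndof))
    u[0], v[0] = u0, v0
    a[0] = np.linalg.solve(M, f[0] - C @ v0 - K @ u0)
    Keff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    for i in range(n - 1):
        rhs = (f[i + 1]
               + M @ (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                      + (1 / (2 * beta) - 1) * a[i])
               + C @ (gamma * u[i] / (beta * dt) + (gamma / beta - 1) * v[i]
                      + dt * (gamma / (2 * beta) - 1) * a[i]))
        u[i + 1] = np.linalg.solve(Keff, rhs)
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (1 / (2 * beta) - 1) * a[i])
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return u

# 2-DOF spring-mass chain under a suddenly applied tip load: the undamped
# response oscillates about the static deflection.
M = np.diag([1.0, 1.0]); C = np.zeros((2, 2))
K = np.array([[2000.0, -1000.0], [-1000.0, 1000.0]])
f = np.tile([0.0, 10.0], (2000, 1))
u = newmark(M, C, K, f, np.zeros(2), np.zeros(2), dt=1e-3)
print(u[-1], "static solution:", np.linalg.solve(K, [0.0, 10.0]))
```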

Keywords: Flat shell, dynamic analysis, nonlinear, Newmark, drilling rotation.

186 An Efficient Approach for Shear Behavior Definition of Plant Stalk

Authors: M. R. Kamandar, J. Massah

Abstract:

Information on the impact cutting behavior of plant stalks plays an important role in the design and fabrication of plant cutting equipment. It is difficult to establish a theoretical method for defining the cutting properties of plant stalks because the cutting process is complex. Thus, it is necessary to set up an experimental approach to determine the cutting parameters for a single stalk. To measure the shear force, shear energy and shear strength of plant stalks, a special impact cutting tester was fabricated. It was similar to an Izod impact cutting tester for metals, but a cutting blade and a data acquisition system were attached to the end of the pendulum's arm. The apparatus included four strain gauges and a digital indicator to show the real-time cutting force of the plant stalk. To measure the shear force and also test the apparatus, the stalks of two plants, buxus and privet, were selected. The samples (buxus and privet stalks) were cut under the impact cutting process at four loading rates (1, 2, 3 and 4 m.s-1) and three internodes (fifth, tenth and fifteenth) by the apparatus. In the buxus cutting analysis, the minimum value of cutting energy was obtained at the fifth internode and loading rate 4 m.s-1, and the maximum value of shear energy was obtained at the fifteenth internode and loading rate 1 m.s-1. In the privet cutting analysis, the minimum value of shear consumption energy was obtained at the fifth internode and loading rate 4 m.s-1, and the maximum value of shear energy was obtained at the fifteenth internode and loading rate 1 m.s-1. The statistical analysis for both plants showed that increasing the impact cutting speed decreases the shear consumption energy and shear strength. In both cases, the results showed that with an increase in cutting speed, the shear force decreases.

Keywords: Buxus, privet, impact cutting, shear energy.

185 Evaluation of Model-Based Code Generation for Embedded Systems–Mature Approach for Development in Evolution

Authors: Nikolay P. Brayanov, Anna V. Stoynova

Abstract:

The model-based development approach is gaining more support and acceptance. Its higher abstraction level brings a simplification of the systems’ description that allows domain experts to do their best without particular knowledge in programming. The different levels of simulation support rapid prototyping, verifying and validating the product even before it exists physically. Nowadays the model-based approach is beneficial for modelling of complex embedded systems as well as for the generation of code for many different hardware platforms. Moreover, it is possible to apply it in safety-relevant industries like automotive, which brings extra automation to the expensive device certification process and especially to the software qualification. Using it, some companies report cost savings and quality improvements, but there are others claiming no major changes or even cost increases. This publication demonstrates the level of maturity and autonomy of the model-based approach for code generation. It is based on a real-life automotive seat heater (ASH) module, developed using The Mathworks, Inc. tools. The model, created with Simulink, Stateflow and Matlab, is used for automatic generation of C code with Embedded Coder. To prove the maturity of the process, the Code Generation Advisor is used for automatic configuration. All additional configuration parameters are set to auto, when applicable, leaving the generation process to function autonomously. As a result of the investigation, the publication compares the quality of the generated embedded code and a manually developed one. The measurements show that, generally, the code generated by the automatic approach is not worse than the manual one. A deeper analysis of the technical parameters enumerates the disadvantages, part of them identified as topics for our future work.

Keywords: Embedded code generation, embedded C code quality, embedded systems, model-based development.

184 Comparison of Different Hydrograph Routing Techniques in XPSTORM Modelling Software: A Case Study

Authors: Fatema Akram, Mohammad Golam Rasul, Mohammad Masud Kamal Khan, Md. Sharif Imam Ibne Amir

Abstract:

A variety of routing techniques are available to develop surface runoff hydrographs from rainfall. The selection of the runoff routing method is very important as it is directly related to the type of watershed and the required degree of accuracy. There are different modelling softwares available to explore the rainfall-runoff process in urban areas. XPSTORM, a link-node based, integrated stormwater modelling software, has been used in this study for developing the surface runoff hydrograph for a golf course area located in Rockhampton in Central Queensland, Australia. Four commonly used methods, namely SWMM runoff, Kinematic wave, Laurenson, and Time-Area, are employed to generate the runoff hydrograph for the design storm of this study area. In the runoff mode of XPSTORM, the rainfall, infiltration, evaporation and depression storage for subcatchments were simulated and the runoff from the subcatchment to the collection node was calculated. The simulation results are presented, discussed and compared. The total surface runoff generated by the SWMM runoff, Kinematic wave and Time-Area methods is found to be reasonably close, which indicates that any of these methods can be used for developing the runoff hydrograph of the study area. The Laurenson method produces a comparatively smaller amount of surface runoff; however, it creates the highest peak of surface runoff among all, which may be suitable for hilly regions. Although the Laurenson hydrograph technique is a widely accepted surface runoff routing technique in Queensland (Australia), extensive investigation is recommended with detailed topographic and hydrologic data in order to assess its suitability for use in the case study area.
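Of the four methods, the Time-Area technique is the simplest to sketch: the outlet hydrograph is the convolution of the rainfall excess with the time-area histogram. The sketch below uses an invented storm burst and catchment discretisation, not the Rockhampton golf course data.

```python
import numpy as np

def time_area_hydrograph(excess_mm, ta_area_km2, dt_min):
    """Convolve rainfall excess (mm per time step) with a time-area histogram
    (km^2 contributing in each travel-time band) to get the outlet hydrograph.
    1 mm over 1 km^2 equals 1000 m^3 of runoff volume per time step."""
    vol_per_step = np.convolve(excess_mm, ta_area_km2) * 1000.0   # m^3 per step
    return vol_per_step / (dt_min * 60.0)                         # m^3/s

# Hypothetical 6-step burst of rainfall excess (mm) and a 4-band time-area curve.
excess = np.array([2.0, 8.0, 15.0, 10.0, 4.0, 1.0])
time_area = np.array([0.10, 0.35, 0.40, 0.15])      # km^2 in each travel-time band
q = time_area_hydrograph(excess, time_area, dt_min=10)
print(np.round(q, 2), "peak:", round(float(q.max()), 2), "m^3/s")
```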

Keywords: ARI, design storm, IFD, rainfall temporal pattern, routing techniques, surface runoff, XPSTORM.

183 Pollution and Water Quality of the Beshar River

Authors: Fardin Boustani, Mohammah Hosein Hojati

Abstract:

The Beshar River is an aquatic ecosystem affected by pollutants. This study was conducted to evaluate the effects of human activities on the water quality of the Beshar River. This river is approximately 190 km in length and situated at the geographical positions of 51° 20' to 51° 48' E and 30° 18' to 30° 52' N. It is one of the most important aquatic ecosystems of Kohkiloye and Boyerahmad province, next to the city of Yasuj in southern Iran. The Beshar River has been contaminated by industrial, agricultural and other activities in this region, such as factories, hospitals, agricultural farms, urban surface runoff and effluent of wastewater treatment plants. In order to evaluate the effects of these pollutants on the quality of the Beshar River, five monitoring stations were selected along its course. The first station is located upstream of Yasuj near the Dehnow village; stations 2 to 4 are located east, south and west of the city; and the 5th station is located downstream of Yasuj. Several water quality parameters were sampled. These include pH, dissolved oxygen, biological oxygen demand (BOD), temperature, conductivity, turbidity, total dissolved solids and discharge or flow measurements. Water samples from the five stations were collected and analysed to determine the following physicochemical parameters: EC, pH, TDS, TH, NO2, DO, BOD5 and COD during 2008 to 2009. The study shows that the BOD5 value of station 1 is at a minimum (1.5 ppm), increases downstream from stations 2 to 4 to a maximum (7.2 ppm), and then decreases at station 5. The DO value of station 1 is a maximum (9.55 ppm), decreases downstream to stations 2 - 4, which are at a minimum (3.4 ppm), before increasing at station 5. The amounts of BOD and TDS are highest at the 4th station and the amount of DO is lowest at this station, marking the 4th station as more highly polluted than the other stations. The physicochemical parameters improve at the 5th station due to pollutant degradation and dilution. Finally, the point and nonpoint pollution sources of the Beshar River were determined and compared to the monitoring results.

Keywords: Beshar river, physicochemical parameters, water pollution, Yasuj

182 Composite Coatings of Piezoelectric Quartz Sensors Based on Viscous Sorbents and Casein Micelles

Authors: Anastasiia Shuba, Tatiana Kuchmenko, Umarkhanov Ruslan, Bogdanova Ekaterina

Abstract:

The development of new sensitive coatings for sensors is one of the key directions in the development of sensor technologies. Recently, there has been a trend towards the creation of multicomponent coatings for sensors, which make it possible to increase the sensitivity and specificity, and to improve the performance properties of sensors. When analyzing samples with a complex matrix of biological origin, the inclusion of micelles of bioactive substances (amino and nucleic acids, peptides, proteins) in the composition of the sensor coating can also increase useful analytical information. The purpose of this work is to evaluate the analytical characteristics of composite coatings of piezoelectric quartz sensors based on medium-molecular viscous sorbents with incorporated micellar casein concentrate during the sorption of vapors of volatile organic compounds. The sorption properties of the coatings were studied by piezoelectric quartz microbalance. Macromolecular compounds (dicyclohexyl-18-crown-6, Triton X-100, lanolin, micellar casein concentrate) were used as sorbents. Highly volatile organic compounds of various classes (alcohols, acids, aldehydes, esters) and water were selected as test substances. It has been established that composite coatings of sensors with the inclusion of micellar casein are more stable and more selective to vapors of highly volatile compounds than to water vapor. The method and technique of forming a composite coating using molecular viscous sorbents do not affect the kinetic features of VOC sorption. When casein micelles are used, the features of kinetic sorption depend on the matrix of the coating.

Keywords: Composite coating, piezoelectric quartz microbalance, sensor, volatile organic compounds.

181 Genetic Algorithm Application in a Dynamic PCB Assembly with Carryover Sequence-Dependent Setups

Authors: M. T. Yazdani Sabouni, Rasaratnam Logendran

Abstract:

We consider a typical problem in the assembly of printed circuit boards (PCBs) in a two-machine flow shop system to simultaneously minimize the weighted sum of weighted tardiness and weighted flow time. The investigated problem is a group scheduling problem in which PCBs are assembled in groups and the interest is to find the best sequence of groups, as well as of the boards within each group, to minimize the objective function value. The type of setup operation between any two board groups is characterized as carryover sequence-dependent setup time, which exactly matches the real application of this problem. As a technical constraint, all of the boards must be kitted before the assembly operation starts (kitting operation) and by kitting staff. The main idea developed in this paper is to completely eliminate the role of kitting staff by assigning the task of kitting to the machine operator during the time he is idle, which is referred to as the integration of internal (machine) and external (kitting) setup times. Performing the kitting operation, which is a preparation process for the next set of boards while the other boards are currently being assembled, results in the boards continuously entering the system, i.e., having dynamic arrival times. Consequently, a dynamic PCB assembly system is introduced for the first time in the assembly of PCBs, which also has characteristics similar to those of just-in-time manufacturing. The problem investigated is computationally very complex, meaning that finding the optimal solutions, especially when the problem size gets larger, is impossible. Thus, a heuristic based on a Genetic Algorithm (GA) is employed. An example problem on the application of the developed GA is demonstrated, and numerical results of applying the GA to solving several instances are also provided.

Keywords: Genetic algorithm, Dynamic PCB assembly, Carryover sequence-dependent setup times, Multi-objective.

180 Social Movements and the Diffusion of Tactics and Repertoires: Activists' Network in Anti-globalism Movement

Authors: Kyoko Tominaga

Abstract:

Non-Government Organizations (NGOs), Non-Profit Organizations (NPOs), Social Enterprises and other actors play an important role in the political decisions of governments at the international level. In particular, such organizations’ and activists’ networks in civil society are quite important in affecting global politics. To solve the complex social problems of the global era, diverse actors should cooperate with each other. Moreover, the network of protesters also contributes to the diffusion of tactics, information and other resources of social movements.

Based on the findings from the study of International Trade Fairs (ITFs), the author analyzes the network of activists in the anti-globalism movement. This research focuses on the transition of the whole network of 54 activists in the “protest event” against the 2008 G8 summit in Japan. Their network is examined in three periods: before the protest event, during the protest event and after the event. A mixed method is used in this study: the author derives hypotheses from social network analysis and evaluates them with interview data. This analysis gives two results. Firstly, the more protesters participate in the various events during the protest event, the more they build their network. Afterwards, active protesters maintain their network as well. From the interview data, we can understand that the active protesters can build their network and diffuse information because they communicate with other participants and understand that diverse issues are related. This paper comes to the same conclusion as previous research: protest events activate the network among political activists. However, some participants succeed in building their network, while others do not. “Networked” activists participate in the various events for short periods of time and encourage the diffusion of information and tactics of social movements.

Keywords: Social Movement, Global Justice Movement, Tactics, Diffusion.

179 Minimization of Non-Productive Time during 2.5D Milling

Authors: Satish Kumar, Arun Kumar Gupta, Pankaj Chandna

Abstract:

In modern manufacturing systems, the use of thermal cutting techniques using oxyfuel, plasma and laser has become indispensable for the shape forming of high quality complex components; however, the conventional chip removal production techniques still have their widespread place in the manufacturing industry. Both these types of machining operations require the positioning of the end effector tool at the edge where the cutting process commences. This repositioning of the cutting tool in every machining operation is repeated several times and is termed non-productive time or airtime motion. Minimization of this non-productive machining time plays an important role in mass production with high speed machining. As the tool moves from one region to the other by rapid movement and visits a particular region only once in the whole operation, the non-productive time can be minimized by synchronizing the tool movements. In this work, this problem is formulated as a general travelling salesman problem (TSP) and a genetic algorithm approach has been applied to solve it. For improving the efficiency of the algorithm, the GA has been hybridized with a novel special heuristic and simulated annealing (SA). In the present work, a novel heuristic in combination with the GA has been developed for synchronization of toolpath movements during repositioning of the tool. A comparative analysis of the new metaheuristic techniques with the simple genetic algorithm has been performed. The proposed metaheuristic approach shows better performance than the simple genetic algorithm for minimization of non-productive toolpath length. Also, the results obtained with the help of the hybrid simulated annealing genetic algorithm (HSAGA) are found to be better than the results using the simple genetic algorithm only.
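The HSAGA itself is not reproduced here; the sketch below only illustrates the underlying formulation, treating the rapid (airtime) repositioning moves between machining regions as a TSP and improving the visiting order with a simulated-annealing search over 2-opt reversals. The point coordinates and annealing schedule are arbitrary.

```python
import math, random

def tour_length(order, pts):
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def anneal_toolpath(pts, t0=10.0, cooling=0.999, iters=20000, seed=0):
    """Minimise total airtime (rapid-move) length over machining regions by
    simulated annealing with 2-opt reversals."""
    rng = random.Random(seed)
    order = list(range(len(pts)))
    best, best_len = order[:], tour_length(order, pts)
    cur_len, temp = best_len, t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(len(pts)), 2))
        cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]   # 2-opt move
        cand_len = tour_length(cand, pts)
        if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / temp):
            order, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = order[:], cur_len
        temp *= cooling
    return best, best_len

# Random pocket/drilling start points on a 2.5D part (coordinates in mm, invented).
rng = random.Random(1)
points = [(rng.uniform(0, 300), rng.uniform(0, 200)) for _ in range(40)]
naive = tour_length(list(range(40)), points)
_, optimised = anneal_toolpath(points)
print(f"airtime length: naive {naive:.0f} mm -> optimised {optimised:.0f} mm")
```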

Keywords: Non-productive time, Airtime, 2.5D milling, Laser cutting, Metaheuristic, Genetic Algorithm, Simulated Annealing.

178 Emotions Triggered by Children’s Literature Images

Authors: A. Breda, C. Cruz

Abstract:

The role of images/illustrations in communicating meanings and triggering emotions becomes increasingly relevant in contemporary texts, regardless of the age group for which they are intended or the nature of the texts that host them. It is no coincidence that children's books are full of illustrations and that the image/text ratio decreases as the age group grows. The vast majority of children's books can be considered as multimodal texts containing text and images/illustrations, interacting with each other, to provide the young reader with a broader and more creative understanding of the book's narrative. This interaction is very diverse, ranging from images/illustrations that are not essential for understanding the storytelling to those that contribute significantly to the meaning of the story. Usually, these books are also read by adults, namely by parents, educators, and teachers who act as mediators between the book and the children, explaining aspects that are or seem to be too complex for the child's context. It should be noted that there are books labeled as children's books that are clearly intended for both children and adults. In this work, following a qualitative and interpretative methodology based on written productions, participant observation, and field notes, we will describe the perceptions of future teachers of the 1st cycle of basic education, attending a master’s degree at a Portuguese university, about the role of the image in literary and non-literary texts, namely in mathematical texts, and how these can constitute precious resources for emotional regulation and for the design of creative didactic situations. The analysis of the collected data allowed us to obtain evidence of the evolution of the participants' perception regarding the crucial role of images in children's literature, not only as an emotional regulator for young readers but also as a creative source for the design of meaningful didactical situations, crossing other scientific areas, other than the mother tongue, namely mathematics.

Keywords: Children’s literature, emotions, multimodal texts, soft skills.

177 Trend Analysis of Annual Total Precipitation Data in Konya

Authors: Naci Büyükkaracığan

Abstract:

Hydroclimatic observation values are used in the planning of water resources projects. Climate variables are among the first values used in planning such projects. At the same time, the climate system is a complex and interactive system involving the atmosphere, land surfaces, snow and ice, the oceans and other water bodies. The amount and distribution of precipitation, which is an important climate parameter, is a limiting environmental factor for dispersed living things. Trend analysis is applied to detect the presence of a pattern or trend in a data set. Many trend studies in different parts of the world are usually made for the determination of climate change. The detection and attribution of past trends and variability in climatic variables is essential for explaining potential future alteration resulting from anthropogenic activities. Parametric and non-parametric tests are used for determining the trends in climatic variables. In this study, trend tests were applied to annual total precipitation data obtained in the period from 1972 to 2012 in the Konya Basin. Non-parametric trend tests (Sen’s T, Spearman’s Rho, Mann-Kendall, Sen’s T trend, Wald-Wolfowitz) and a parametric test (mean square) were applied to the annual total precipitations of 15 stations for trend analysis. The linear slopes (change per unit time) of the trends are calculated by using a non-parametric estimator developed by Sen. The beginning of trends is determined by using the Mann-Kendall rank correlation test. In addition, the homogeneity of precipitation trends is tested by using a method developed by Van Belle and Hughes. As a result of the tests, negative linear slopes were found in annual total precipitations in Konya.
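A minimal sketch of two of the tests named above, the Mann-Kendall test (without tie correction) and Sen's slope estimator, is given below; the precipitation series is synthetic, standing in for the 15 Konya stations.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(y):
    """Mann-Kendall trend test: statistic S, its normal approximation Z and the
    two-sided p-value (no tie correction, for brevity)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    s = sum(np.sign(y[j] - y[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, z, 2 * (1 - norm.cdf(abs(z)))

def sens_slope(y):
    """Sen's estimator: median of all pairwise slopes (change per time unit)."""
    y = np.asarray(y, dtype=float)
    slopes = [(y[j] - y[i]) / (j - i) for i in range(len(y) - 1)
              for j in range(i + 1, len(y))]
    return float(np.median(slopes))

# Synthetic "annual total precipitation" series (mm) with a weak negative drift,
# standing in for 1972-2012 observations (41 years).
rng = np.random.default_rng(3)
precip = 320 - 0.8 * np.arange(41) + rng.normal(0, 40, 41)
s, z, p = mann_kendall(precip)
print(f"S={s:.0f}  Z={z:.2f}  p={p:.3f}  Sen slope={sens_slope(precip):.2f} mm/yr")
```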

Keywords: Trend analysis, precipitation, hydroclimatology, Konya, Turkey.

176 A Development of Home Service Robot using Omni-Wheeled Mobility and Task-Based Manipulation

Authors: Hijun Kim, Jungkeun Sung, Seungwoo Kim

Abstract:

In this paper, a Smart Home Service Robot, McBot II, which performs mess-cleanup and other functions in the house, is designed much more optimally than other service robots. It is newly developed as a much more practical system than McBot I, which we had developed two years ago. One characteristic attribute of mobile platforms equipped with a set of dependent wheels is their omni-directionality and the ability to realize complex translational and rotational trajectories for agile indoor navigation. An accurate coordination of the steering angle and spinning rate of each wheel is necessary for a consistent motion. This paper develops a trajectory controller for a 3-wheel omni-directional mobile robot using a fuzzy azimuth estimator. A specialized anthropomorphic robot manipulator, which can be attached to the housemaid robot McBot II, is also developed in this paper. This built-in type manipulator consists of two arms with 3 DOF (Degrees of Freedom) each and two hands with 3 DOF each. The robotic arm is optimally designed to satisfy both the minimum mechanical size and the maximum workspace. Minimum mass and length are required for the built-in cooperated-arms system, but that makes the workspace small. This paper proposes an optimal design method to overcome this problem by using a neck joint to move the arms horizontally forward/backward and a waist joint to move them vertically up/down. The robotic hand, which has two fingers and a thumb, is also optimally designed with a task-based concept. Finally, the good performance of the developed McBot II is confirmed through live tests of the mess-cleanup task.
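As a small aside on the omni-directional base, the inverse kinematics of a three-wheel platform can be sketched as below: each (tangentially mounted) wheel speed is a linear combination of the desired body velocities. The wheel placement angles and radius are assumed, and the fuzzy azimuth estimator of the paper is not reproduced.

```python
import numpy as np

def wheel_speeds(vx, vy, omega, R=0.18, wheel_angles_deg=(90.0, 210.0, 330.0)):
    """Inverse kinematics of a 3-wheel omni-directional base: linear speed of
    each tangentially mounted wheel for a desired body twist (vx, vy, omega).
    R is the distance from the robot centre to each wheel (assumed geometry)."""
    th = np.radians(wheel_angles_deg)
    J = np.column_stack((-np.sin(th), np.cos(th), np.full(3, R)))
    return J @ np.array([vx, vy, omega])        # m/s at each wheel rim

# Translate diagonally while slowly rotating in place (values illustrative).
print(np.round(wheel_speeds(vx=0.3, vy=0.2, omega=0.5), 3))
# Pure rotation: all three wheels get the same speed R * omega.
print(np.round(wheel_speeds(0.0, 0.0, 1.0), 3))
```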

Keywords: Holonomic Omni-wheeled Mobile Robot, Special-purpose, Manipulation, Home Service Robot

175 Analysis of Noise Level Effects on Signal-Averaged Electrocardiograms

Authors: Chun-Cheng Lin

Abstract:

The noise level has critical effects on the diagnostic performance of the signal-averaged electrocardiogram (SAECG), because the true starting and end points of the QRS complex can be masked by the residual noise and are sensitive to the noise level. Several studies and commercial machines have used a fixed number of heart beats (typically between 200 to 600 beats) or set a predefined noise level (typically between 0.3 to 1.0 μV) in each X, Y and Z lead to perform SAECG analysis. However, different criteria or methods used to perform SAECG would cause discrepancies in the noise levels among study subjects. According to the recommendations of the 1991 ESC, AHA and ACC Task Force Consensus Document for the use of SAECG, the determinations of onset and offset are related closely to the mean and standard deviation of the noise sample. Hence, this study tries to perform SAECG using consistent root-mean-square (RMS) noise levels among study subjects and to analyze the noise level effects on the SAECG. This study also evaluates the differences between normal subjects and chronic renal failure (CRF) patients in the time-domain SAECG parameters. The study subjects were composed of 50 normal Taiwanese subjects and 20 CRF patients. During the signal-averaged processing, different RMS noise levels were adjusted to evaluate their effects on three time domain parameters: (1) filtered total QRS duration (fQRSD), (2) RMS voltage of the last 40 ms of the QRS (RMS40), and (3) duration of the low amplitude signals below 40 μV (LAS40). The study results demonstrated that the reduction of the RMS noise level can increase fQRSD and LAS40 and decrease RMS40, and can further increase the differences in fQRSD and RMS40 between normal subjects and CRF patients. The SAECG may also become abnormal due to the reduction of the RMS noise level. In conclusion, it is essential to establish diagnostic criteria for SAECG using consistent RMS noise levels to reduce the noise level effects.
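A minimal sketch of the three time-domain indices is given below, assuming the QRS onset and offset have already been detected on the filtered vector magnitude (the detection step, which is where the noise level enters, is not reproduced); the synthetic signal only serves to make the function runnable.

```python
import numpy as np

def late_potential_indices(vm, qrs_onset, qrs_offset, fs=1000.0):
    """Time-domain SAECG indices from the filtered vector magnitude vm (in uV):
    fQRSD (ms), RMS40 (uV over the terminal 40 ms) and LAS40 (duration in ms of
    the terminal signal staying below 40 uV).  Onset/offset are sample indices
    assumed to have been detected already (e.g. against the noise level)."""
    fqrsd = (qrs_offset - qrs_onset) / fs * 1000.0
    n40 = int(round(0.040 * fs))
    tail = vm[qrs_offset - n40:qrs_offset]
    rms40 = float(np.sqrt(np.mean(tail**2)))
    above = np.nonzero(vm[qrs_onset:qrs_offset] >= 40.0)[0]
    last_above = qrs_onset + (above[-1] if above.size else 0)
    las40 = (qrs_offset - last_above) / fs * 1000.0
    return fqrsd, rms40, las40

# Synthetic filtered QRS: a large deflection that decays into a low-level tail.
fs = 1000.0
t = np.arange(0, 0.140, 1 / fs)
vm = 1200.0 * np.exp(-((t - 0.050) / 0.012) ** 2) + 15.0 * np.exp(-t / 0.05)
print(late_potential_indices(vm, qrs_onset=10, qrs_offset=130, fs=fs))
```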

Keywords: Signal-averaged electrocardiogram, Ventricular late potentials, Chronic renal failure, Noise level effects.

174 Selective Encryption using ISMA Cryp in Real Time Video Streaming of H.264/AVC for DVB-H Application

Authors: Jay M. Joshi, Upena D. Dalal

Abstract:

Multimedia information availability has increased dramatically with the advent of video broadcasting on handheld devices. But with this availability come problems of maintaining the security of information that is displayed in public. ISMA Encryption and Authentication (ISMACryp) is one of the chosen technologies for service protection in DVB-H (Digital Video Broadcasting - Handheld), the TV system for portable handheld devices. ISMACryp content is encoded with H.264/AVC (advanced video coding), while leaving all structural data as it is. Two modes of ISMACryp are available: the CTR mode (Counter type) and the CBC mode (Cipher Block Chaining). Both modes of ISMACryp are based on the 128-bit AES algorithm. The AES algorithm is complex and requires a longer execution time, which is not suitable for real-time applications like live TV. The proposed system aims to gain a deep understanding of video data security in multimedia technologies and to provide security for real-time video applications using selective encryption for H.264/AVC. Five levels of security are proposed in this paper based on the content of the NAL unit in the Constrained Baseline profile of H.264/AVC. The selective encryption at the different levels encrypts the intra-prediction mode, residue data, inter-prediction mode or motion vectors only. Experimental results shown in this paper demonstrate that the fifth level, which is full ISMACryp, provides a higher level of security with more encryption time, while the first level provides a lower level of security by encrypting only motion vectors with lower execution time, without compromising compression or the quality of the visual content. This encryption scheme combines with the compression process at low cost and keeps the file format unchanged, with some direct operations supported. Simulations were carried out in Matlab.

Keywords: AES-128, CAVLC, H.264, ISMACryp

173 The Loess Regression Relationship Between Age and BMI for both Sydney World Masters Games Athletes and the Australian National Population

Authors: Joe Walsh, Mike Climstein, Ian Timothy Heazlewood, Stephen Burke, Jyrki Kettunen, Kent Adams, Mark DeBeliso

Abstract:

Thousands of masters athletes participate quadrennially in the World Masters Games (WMG), yet this cohort of athletes remains proportionately under-investigated. Due to a growing global obesity pandemic, in the context of the benefits of physical activity across the lifespan, the BMI trends for this unique population were of particular interest. The nexus between health, physical activity and aging is complex and has raised much interest in recent times due to the realization that a multifaceted approach is necessary in order to counteract the obesity pandemic. By investigating age-based trends within a population adhering to competitive sport at older ages, further insight might be gleaned to assist in understanding one of many factors influencing this relationship. BMI was derived using data gathered on a total of 6,071 masters athletes (51.9% male, 48.1% female) aged 25 to 91 years (mean = 51.5, s = ±9.7), competing at the Sydney World Masters Games (2009). Using linear and loess regression it was demonstrated that the usual tendency for the prevalence of higher BMI to increase with age was reversed in the sample. This trend reversal was repeated for both the male-only and female-only subsets of the sample participants, indicating the possibility of improved prevalence of BMI with increasing age for both the sample as a whole and these individual sub-groups. This evidence of improved classification in one index of health (reduced BMI) for masters athletes (when compared to the general population) implies that there are either improved levels of this index of health with aging due to adherence to sport, or possibly that the reduced BMI is advantageous and contributes to this cohort adhering (or being attracted) to masters sport at older ages.
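A minimal sketch of the two regression steps is shown below on a synthetic stand-in for the athlete sample (the real WMG data are not reproduced); it uses the lowess smoother from statsmodels and an ordinary least-squares slope for the linear fit.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Synthetic stand-in for the athlete data (age, BMI) with a mild negative age
# trend built in for illustration only.
rng = np.random.default_rng(0)
age = rng.uniform(25, 91, 6071)
bmi = 26.5 - 0.02 * (age - 25) + rng.normal(0, 3.0, age.size)

# Linear fit (slope per year of age) and a loess curve over the same data.
slope, intercept = np.polyfit(age, bmi, 1)
smoothed = lowess(bmi, age, frac=0.3)          # columns: sorted age, fitted BMI

print(f"linear slope: {slope:.3f} BMI units per year")
print("loess fit at ages 30/50/70:",
      [round(float(np.interp(a, smoothed[:, 0], smoothed[:, 1])), 2)
       for a in (30, 50, 70)])
```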

Keywords: Aging, masters athlete, Quetelet Index, sport

172 Using Artificial Neural Network and Leudeking-Piret Model in the Kinetic Modeling of Microbial Production of Poly-β- Hydroxybutyrate

Authors: A. Qaderi, A. Heydarinasab, M. Ardjmand

Abstract:

Poly-β-hydroxybutyrate (PHB) is one of the most famous biopolymers and has various applications in the production of biodegradable carriers. The most important strategy for enhancing efficiency in the production process and reducing the price of PHB is the accurate expression of the kinetic model of product formation and of the parameters that affect it, such as Dry Cell Weight (DCW) and substrate consumption. Considering the high capability of artificial neural networks in modeling and simulating non-linear systems such as biological and chemical processes, which are mainly multivariable systems, a three-layer perceptron neural network model was used in this study for the kinetic modeling of microbial PHB production, a complex and non-linear biological process. The artificial neural network trains itself and finds the hidden laws behind the data by mapping experimental data, with dry cell weight and substrate concentration as inputs and PHB concentration as output. For training the network, a series of experimental data for PHB production by Hydrogenophaga pseudoflava on a glucose carbon source was used. After training the network, two other experimental data sets that had not been involved in the network training, including dry cell concentration and substrate concentration, were applied as inputs to the network, and the PHB concentration was predicted by the network. Comparison of the data predicted by the network with the experimental data indicated high prediction precision for both fructose and whey carbon sources. Also, in the present study, for a better understanding of the ability of neural networks in modeling biological processes, the microbial production kinetics of PHB were modeled with the Leudeking-Piret empirical equation. The observed results indicated a more accurate prediction of PHB concentration by the artificial neural network than by the Leudeking-Piret model.
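For reference, the Leudeking-Piret relation mentioned above, dP/dt = α·dX/dt + β·X, can be sketched as below with an assumed logistic biomass curve; all parameter values are illustrative and not fitted to the Hydrogenophaga pseudoflava data, and the neural network itself is not reproduced.

```python
import numpy as np

def leudeking_piret(t, X0=0.2, Xmax=8.0, mu=0.25, alpha=0.12, beta=0.004):
    """Logistic biomass growth X(t) and Leudeking-Piret product formation
    dP/dt = alpha*dX/dt + beta*X, integrated explicitly.  All parameter values
    are illustrative, not fitted to Hydrogenophaga pseudoflava data."""
    dt = t[1] - t[0]
    X = np.empty_like(t); P = np.empty_like(t)
    X[0], P[0] = X0, 0.0
    for i in range(len(t) - 1):
        dX = mu * X[i] * (1.0 - X[i] / Xmax)            # logistic growth rate
        X[i + 1] = X[i] + dX * dt
        P[i + 1] = P[i] + (alpha * dX + beta * X[i]) * dt
    return X, P

t = np.linspace(0.0, 48.0, 481)                          # hours
X, P = leudeking_piret(t)
print(f"final DCW ~ {X[-1]:.2f} g/L, PHB ~ {P[-1]:.2f} g/L")
```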

Keywords: Kinetic Modeling, Poly-β-Hydroxybutyrate (PHB), Hydrogenophaga Pseudoflava, Artificial Neural Network, Leudeking-Piret
