Search results for: methodology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1414

1114 Application of Statistical Approach for Optimizing CMCase Production by Bacillus tequilensis S28 Strain via Submerged Fermentation Using Wheat Bran as Carbon Source

Authors: A. Sharma, R. Tewari, S. K. Soni

Abstract:

Biofuel production has emerged as a key technology for combating the depletion of fossil fuels. Bio-based ethanol production via enzymatic degradation of lignocellulosic biomass is an efficient route and has caught the eye of the scientific community; however, the high cost of the enzyme is the major obstacle preventing commercialization of the process. The main objective of the present study was therefore to optimize the composition of the culture medium to enhance cellulase production by a newly isolated strain of Bacillus tequilensis. Nineteen factors were screened using a statistical Plackett-Burman design. The variables with a significant influence on cellulase production were then carried into response surface methodology, using a central composite design, to maximize cellulase production. The optimum medium composition for cellulase production was: peptone (4.94 g/L), ammonium chloride (4.99 g/L), yeast extract (2.00 g/L), Tween-20 (0.53 g/L), calcium chloride (0.20 g/L) and cobalt chloride (0.60 g/L), at pH 7, an agitation speed of 150 rpm and 72 h of incubation at 37 °C. Analysis of variance (ANOVA) gave a high coefficient of determination (R² = 0.99). A maximum cellulase productivity of 11.5 IU/ml was observed against a model-predicted value of 13 IU/ml. The enzyme was optimally active at 60 °C and pH 5.5.
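The quadratic response-surface fit behind this kind of CCD analysis can be sketched in a few lines. The coded factor levels and CMCase values below are invented placeholders (the paper's data are not reproduced here), so this illustrates only the model-fitting step, not the authors' calculation:

```python
# Hypothetical sketch: fitting a second-order (quadratic) response surface to
# CCD-style data and reading off R^2, as in RSM studies like the one above.
# Factor levels and responses are illustrative, not the authors' data.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Coded levels of two significant variables (e.g. peptone, ammonium chloride)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41],
              [0, 0], [0, 0], [0, 0]])
y = np.array([6.1, 8.4, 7.0, 10.9, 5.8, 9.7, 6.5, 8.9, 11.3, 11.6, 11.4])  # CMCase, IU/ml

# Full quadratic model: linear, interaction and squared terms (plus intercept)
quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)

r2 = model.score(quad.transform(X), y)   # coefficient of determination
print(f"R^2 = {r2:.3f}")
print("best design point (coded) =", X[np.argmax(model.predict(quad.transform(X)))])
```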

Keywords: Bacillus tequilensis, CMCase, Submerged Fermentation, Optimization, Plackett-Burman Design, Response Surface Methodology.

1113 Taguchi-Based Six Sigma Approach to Optimize Surface Roughness for Milling Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using Six Sigma methodologies to improve the surface roughness of a part produced on a CNC milling machine. It presents a case study in which the surface roughness of milled aluminum must be reduced in order to eliminate defects and to improve the process capability indices Cp and Cpk of the CNC milling process. The Six Sigma DMAIC (define, measure, analyze, improve, and control) approach was applied in this study to improve the process, reduce defects, and ultimately reduce costs. The Taguchi-based Six Sigma approach was applied to identify the optimized processing parameters that led to the targeted surface roughness specified by our customer. An L9 orthogonal array was used in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of feed rate, depth of cut, spindle speed, and surface roughness. The noise factor is the difference between the old cutting tool and the new cutting tool. The confirmation run with the optimal parameters confirmed that the new parameter settings are correct and also improved the process capability index. This study shows that the Taguchi-based Six Sigma approach can be used efficiently to phase out defects and improve the process capability index of a CNC milling process.
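The capability indices and the smaller-the-better Taguchi signal-to-noise ratio used in this kind of study can be computed directly; the specification limits and roughness readings in this sketch are assumptions, not the case-study values:

```python
# Minimal sketch (illustrative numbers only): process capability indices and the
# smaller-the-better Taguchi S/N ratio used when the response is surface roughness.
import numpy as np

def cp_cpk(samples, lsl, usl):
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6 * sigma)                 # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)    # capability accounting for centering
    return cp, cpk

def sn_smaller_is_better(y):
    # Taguchi S/N ratio for a "smaller-the-better" response such as roughness
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

roughness = [1.52, 1.48, 1.55, 1.47, 1.50]   # Ra measurements, um (hypothetical)
print(cp_cpk(roughness, lsl=1.0, usl=2.0))
print(sn_smaller_is_better(roughness))
```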

Keywords: CNC machining, Six Sigma, Surface roughness, Taguchi methodology.

1112 Machine Learning Based Approach for Measuring Promotion Effectiveness in Multiple Parallel Promotions’ Scenarios

Authors: Revoti Prasad Bora, Nikita Katyal

Abstract:

Promotion is a key element in the retail business. Thus, the analysis of promotions to quantify their effectiveness in terms of Revenue and/or Margin is an essential activity in the retail industry. However, measuring the sales/revenue uplift relies on estimation, as the actual sales/revenue without the promotion is not observable. Further, the presence of Halo and Cannibalization in a multiple parallel promotions scenario complicates the problem. Calculating the Baseline from inter-brand/competitor items, or estimating the impact of Halo and Cannibalization on Revenue by treating the Baseline of each item simply as its unit sales in neighboring non-promotional weeks, may not capture the overall Revenue uplift when multiple promotions run in parallel. Hence, this paper proposes a Machine Learning based method for calculating the Revenue uplift by considering the Halo and Cannibalization impact on both the Baseline and the Revenue. In the first part of the proposed methodology, the Baseline of an item is calculated by incorporating the impact of the promotions on its related items. In the later part, the Revenue of an item is calculated by considering both Halo and Cannibalization impacts. Hence, this methodology enables correct calculation of the overall Revenue uplift due to a given promotion.
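A minimal sketch of the baseline idea follows, assuming a hypothetical weekly sales table: a random forest is trained on non-promotional weeks, with features that include promotion flags of related items so that halo/cannibalization effects can influence the prediction. The file and column names are placeholders, not the authors' pipeline:

```python
# Hedged sketch: predict what an item would have sold without the promotion
# (the baseline) from non-promotional weeks, then take uplift = actual - baseline.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

sales = pd.read_csv("weekly_sales.csv")              # hypothetical input table
features = ["week_of_year", "own_price", "related_item_on_promo", "related_item_price"]

train = sales[sales["on_promo"] == 0]                # non-promotional weeks only
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(train[features], train["units"])

promo_weeks = sales[sales["on_promo"] == 1]
baseline_units = model.predict(promo_weeks[features])
uplift = promo_weeks["units"].values - baseline_units   # estimated promotion uplift
print(uplift.sum())
```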

Keywords: Halo, cannibalization, promotion, baseline, temporary price reduction, retail, elasticity, cross price elasticity, machine learning, random forest, linear regression.

1111 Efficiency Based Model for Solar Urban Planning

Authors: Amado, M. P., Amado, A., Poggi, F., Correia de Freitas, J.

Abstract:

Today it is widely understood that global energy consumption patterns are directly related to the urban expansion and development process. This expansion is based on the natural growth of human activities and has left most urban areas totally dependent on fossil-fuel-derived external energy inputs. This status quo of production, transportation, storage and consumption of energy has become inefficient and is set to become even more so when the continuous increases in energy demand are factored in. The territorial management of land use and related activities is a central component in the search for more efficient models of energy use, models that can meet current and future regional, national and European goals.

In this paper a methodology is developed and discussed with the aim of improving energy efficiency at the municipal level. The development of this methodology is based on the monitoring of energy consumption and its use patterns resulting from the natural dynamism of human activities in the territory and can be utilized to assess sustainability at the local scale. A set of parameters and indicators are defined with the objective of constructing a systemic model based on the optimization, adaptation and innovation of the current energy framework and the associated energy consumption patterns. The use of the model will enable local governments to strike the necessary balance between human activities and economic development and the local and global environment while safeguarding fairness in the energy sector.

Keywords: Solar urban planning, solar smart city, urban development, energy efficiency.

1110 Analysis of a Lignocellulose Degrading Microbial Consortium to Enhance the Anaerobic Digestion of Rice Straws

Authors: Supanun Kangrang, Kraipat Cheenkachorn, Kittiphong Rattanaporn, Malinee Sriariyanun

Abstract:

Rice straw is a lignocellulosic biomass that can be utilized as a substrate for biogas production. However, owing to its structure and composition, rice straw is difficult to degrade with hydrolytic enzymes. One of the pretreatment methods that modify such properties of lignocellulosic biomass is the application of lignocellulose-degrading microbial consortia. The aim of this study is to investigate the effect of such microbial consortia on enhancing biogas production. To select the most efficient consortium, cellulase enzymes were extracted and their activities were analyzed. The results suggested that the microbial consortium culture obtained from cattle manure is the best candidate compared to those from decomposed wood and horse manure. The microbial consortium isolated from cattle manure was then mixed with anaerobic sludge and used as the inoculum for biogas production. The optimal conditions for biogas production were investigated using response surface methodology (RSM). The tested parameters were the ratio of the amount of isolated microbial consortium to the amount of anaerobic sludge (MI:AS), the substrate-to-inoculum ratio (S:I) and the temperature. The model explained the data with a regression coefficient of R² = 0.7661, which is high enough to support the significance of the model. The highest cumulative biogas yield was 104.6 ml/g rice straw at the optimum MI:AS ratio of 2.5:1, S:I ratio of 15:1 and temperature of 44 °C.

Keywords: Lignocellulolytic biomass, microbial consortium, cellulase, biogas, Response Surface Methodology.

1109 Statistical Screening of Medium Components on Ethanol Production from Cashew Apple Juice using Saccharomyces diasticus

Authors: Karuppaiya Maruthai, Viruthagiri Thangavelu, Manikandan Kanagasabai

Abstract:

In the present study, the effect of fifteen critical medium components on ethanol production from waste cashew apple juice (CAJ) using the yeast Saccharomyces diasticus was investigated. A statistical response surface methodology (RSM) based Plackett-Burman design (PBD) was used for the design of experiments. The design contained a total of 32 experimental trials. The effect of the medium components on ethanol was studied at two levels, a low concentration level (-) and a high concentration level (+). The dependent variables selected in this study were ethanol concentration (g/L) and cell mass concentration (g/L). The data obtained from the RSM experiments on ethanol production were subjected to analysis of variance (ANOVA). In general, the initial substrate concentration significantly influenced microbial growth and product formation. Of the medium components evaluated, CAJ concentration, yeast extract, (NH4)2SO4, and malt extract showed a significant effect on ethanol fermentation. A second-order polynomial model was used to predict the experimental data, and the model fitted the data with a high coefficient of determination (R² > 0.98). Maximum ethanol (15.3 g/L) and biomass (6.4 g/L) concentrations were obtained at the optimum medium composition and optimum conditions (temperature 30 °C; initial pH 6.8) after 72 h of fermentation using S. diasticus.

Keywords: cashew apple juice, ethanol, fermentation, yeast, response surface methodology

1108 Transformation of Vocal Characteristics: A Review of Literature

Authors: Dong-Yan Huang, Ee Ping Ong, Susanto Rahardja, Minghui Dong, Haizhou Li

Abstract:

The transformation of vocal characteristics aims at modifying the voice such that the intelligibility of aphonic voice is increased, or such that the voice characteristics of a speaker (source speaker) are perceived as if another speaker (target speaker) had uttered the speech. In this paper, the current state-of-the-art voice characteristics transformation methodology is reviewed. Special emphasis is placed on voice transformation methodology, and issues in improving the intelligibility and naturalness of the transformed speech are discussed. In particular, it is suggested to use the modulation theory of speech as a base for research on high-quality voice transformation. This approach allows one to separate the linguistic, expressive, organic and perspective information of speech, based on an analysis of how they are fused when speech is produced. Therefore, this theory provides the fundamentals not only for manipulating non-linguistic, extra-/paralinguistic and intra-linguistic variables for voice transformation, but also for paving the way to transposing existing voice transformation methods to emotion-related voice quality transformation and speaking style transformation. From the perspectives of human speech production and perception, the popular voice transformation techniques are described and classified according to whether their underlying principles derive from speech production mechanisms, from perception mechanisms, or from both. In addition, the advantages and limitations of voice transformation techniques and the experimental manipulation of vocal cues are discussed through examples from past and present research. Finally, conclusions and a road map toward more natural voice transformation algorithms are presented.

Keywords: Voice transformation, Voice Quality, Emotion, Individuality, Speaking Style, Speech Production, Speech Perception.

1107 Resolving a Piping Vibration Problem by Installing Viscous Damper Supports

Authors: Carlos Herrera Sierralta, Husain M. Muslim, Meshal T. Alsaiari, Daniel Fischer

Abstract:

The vast majority of piping vibration problems in the Oil & Gas industry are provoked by the process flow characteristics, which are basically related to the fluid properties, the type of service and its different operational scenarios. In general, the corrective actions recommended for flow-induced vibration in piping systems can be grouped in two major areas: those which affect the excitation mechanisms, typically associated with process variables, and those which affect the response mechanism of the pipework per se. Where possible, the first option is to try to solve the flow-induced problem from the excitation mechanism perspective. However, in producing facilities the approach of changing process parameters might not always be convenient, as it could lead to a reduction of production rates or it may require the shutdown of the system. That impediment might lead to a second option, which is to modify the response of the piping system to the excitation generated by the process flow. In principle, shifting the natural frequency of the system well above the frequency inherent to the process always favours the elimination of, or considerably reduces, the level of vibration experienced by the piping system. Tightening up the clearances at the supports (ideally to zero gap) and adding new static supports to the system are typical ways of increasing the natural frequency of the piping system. However, only stiffening the piping system may not be sufficient to resolve the vibration problem, and in some cases it might not be feasible to implement it at all, as the available piping layout could create limitations on adding supports due to thermal expansion/contraction requirements. In these cases, the use of viscous damper supports can be recommended, as these devices allow relatively large quasi-static movement of the piping while providing sufficient capability to dissipate the vibration. Therefore, when correctly selected and installed, viscous damper supports can have a significant effect on the response of the piping system over a wide range of frequencies. Viscous dampers cannot be used to support sustained, static loads. This paper presents, through a real case example, a methodology for selecting viscous damper supports via a dynamic analysis model. By implementing this methodology, it is possible to resolve piping vibration problems by adding new viscous damper supports to the system. The methodology applied in this paper can be used to resolve similar vibration issues.

Keywords: dynamic analysis, flow induced vibration, piping supports, turbulent flow, slug flow, viscous damper

1106 Accuracy of Autonomy Navigation of Unmanned Aircraft Systems through Imagery

Authors: Sidney A. Lima, Hermann J. H. Kux, Elcio H. Shiguemori

Abstract:

Unmanned Aircraft Systems (UAS) usually navigate using the Global Navigation Satellite System (GNSS) combined with an Inertial Navigation System (INS). However, GNSS accuracy can be degraded at any time, and the GNSS signal can even be lost entirely. In addition, there is the possibility of malicious interference, known as jamming. An image navigation system can therefore solve the autonomy problem, because if GNSS is disabled or degraded, the image navigation system continues to provide coordinate information to the INS, preserving the autonomy of the system. This work aims to evaluate the accuracy of positioning through photogrammetry concepts. The methodology uses orthophotos and Digital Surface Models (DSM) as a reference to represent the object space, and photographs obtained during the flight to represent the image space. For the calculation of the coordinates of the perspective center and the camera attitudes, it is necessary to know the coordinates of homologous points in the object space (orthophoto coordinates and DSM altitude) and in the image space (column and line of the photograph). If the homologous points can be identified automatically in real time, the coordinates and attitudes can be calculated with their respective accuracies. With the methodology applied in this work, maximum errors on the order of 0.5 m in position and 0.6° in camera attitude were verified, so image-based navigation can reach an accuracy equal to or better than that of GNSS receivers without differential correction. Therefore, navigating through imagery is a good alternative for enabling autonomous navigation.

Keywords: Autonomy, navigation, security, photogrammetry, remote sensing, spatial resection, UAS.

1105 A Methodology for the Synthesis of Multi-Processors

Authors: Hamid Yasinian

Abstract:

Random epistemologies and hash tables have garnered minimal interest from both security experts and experts in the last several years. In fact, few information theorists would disagree with the evaluation of expert systems. In our research, we discover how flip-flop gates can be applied to the study of superpages. Though such a hypothesis at first glance seems perverse, it is derived from known results.

Keywords: Synthesis, Multi-Processors, Interactive Model, Moore’s Law.

1104 Expert Based System Design for Integrated Waste Management

Authors: A. Buruzs, M. F. Hatwágner, A. Torma, L. T. Kóczy

Abstract:

Recently, an increasing number of researchers have been focusing on working out realistic solutions to sustainability problems. As sustainability issues gain higher importance for organisations, the management of such decisions becomes critical. Knowledge representation is a fundamental issue of complex knowledge-based systems. Many types of sustainability problems would benefit from models based on experts’ knowledge. Cognitive maps have been used for analyzing and aiding decision making, and a cognitive map can be made of almost any system or problem. A fuzzy cognitive map (FCM) can successfully represent knowledge and human experience, introducing concepts to represent the essential elements and the cause-and-effect relationships among the concepts, in order to model the behaviour of any system. Integrated waste management systems (IWMS) are complex systems that can be decomposed into related and non-related subsystems and elements, where many factors have to be taken into consideration that may be complementary, contradictory, and competitive; these factors influence each other and determine the overall decision process of the system. The goal of the present paper is to construct an efficient IWMS which considers various factors. The authors’ intention is to propose an expert-based system design approach for implementing expert decision support in the area of IWMSs and to introduce an appropriate methodology for the development and analysis of group FCMs. A framework for such a methodology, consisting of development and application phases, is presented.
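A minimal numpy sketch of the FCM inference step is given below; the three concepts, the weight matrix and the particular activation-rule variant are illustrative assumptions, not the expert-derived IWMS map built in the paper:

```python
# Tiny fuzzy cognitive map update loop: concepts are repeatedly re-activated
# through the weighted causal links until the map settles to a steady state.
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

# W[i, j] = causal influence of concept j on concept i (values in [-1, 1])
W = np.array([[ 0.0, 0.6, -0.3],
              [ 0.4, 0.0,  0.5],
              [-0.2, 0.7,  0.0]])
state = np.array([0.5, 0.4, 0.6])     # initial activation of 3 concepts

for _ in range(50):                    # iterate until the map converges
    new_state = sigmoid(W @ state + state)
    if np.allclose(new_state, state, atol=1e-6):
        break
    state = new_state

print(state)                           # steady-state concept activations
```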

Keywords: Factors, fuzzy cognitive map, group decision, integrated waste management system.

1103 The Impact of Government Expenditure on Economic Growth: A Study of Asian Countries

Authors: K. P. K. S. Lahirushan, W. G. V. Gunasekara

Abstract:

The main purpose of this study is to identify the impact of government expenditure on economic growth in Asian countries. Accordingly, the main objective is to analyze whether government expenditure causes economic growth in Asian countries, and vice versa, and then to examine whether a long-run equilibrium relationship exists between them. The study is based entirely on secondary data. The methodology is quantitative, employing the econometric techniques of cointegration, panel fixed-effects modelling and Granger causality on panel data for nine Asian countries (Singapore, Malaysia, Thailand, South Korea, Japan, China, Sri Lanka, India and Bhutan), with 44 observations per country, totalling 396 observations from 1970 to 2013. The model used is a random-effects panel OLS model. With this methodology, the study reached the following findings. First, the empirical findings exhibit a significant positive impact of government expenditure on Gross Domestic Product in the Asian region. Second, government expenditure and economic growth show a long-run relationship in Asian countries. In conclusion, there is unidirectional causality from economic growth to government expenditure and from government expenditure to economic growth in Asian countries. Hence the study finds results in line with both Keynesian theory and Wagner’s law. Consequently, it can be concluded that government plays a vital role in the economic growth of Asian countries. However, if government expenditure is not aligned with the economy’s needs, it may affect the economy considerably in a negative way, so that society bears the costs.
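One building block of the described toolkit, a pairwise Granger-causality test, can be run with statsmodels; the file, column and country names here are assumptions, and the paper's panel-level analysis is more involved than this single-country sketch:

```python
# Hedged sketch: does government expenditure Granger-cause GDP growth for one
# country? Column order matters: the test asks whether the 2nd column helps
# predict the 1st. File and column names are hypothetical placeholders.
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

df = pd.read_csv("asia_panel.csv")                       # hypothetical panel data
one_country = df[df["country"] == "Japan"].sort_values("year")

grangercausalitytests(one_country[["gdp_growth", "gov_expenditure"]], maxlag=2)
```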

Keywords: Asian Countries, Government Expenditure, Keynesian theory, Wagner’s theory, Random effects panel OLS model.

1102 Qualification and Provisioning of xDSL Broadband Lines using a GIS Approach

Authors: Mavroidis Athanasios, Karamitsos Ioannis, Saletti Paola

Abstract:

This paper presents a Geographic Information System (GIS) approach for qualifying and monitoring broadband lines in an efficient way. The methodology used for interpolation is the Delaunay Triangulated Irregular Network (TIN). The method is applied to a case study of an ISP in Greece monitoring 120,000 broadband lines.
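A TIN-style linear interpolation over measurement points, the core of the approach described, can be sketched with SciPy, whose LinearNDInterpolator builds a Delaunay triangulation internally; the coordinates and rate values below are invented for illustration:

```python
# Small sketch: interpolate line-quality measurements over a service area by
# triangulating the sample points (Delaunay) and interpolating linearly inside
# each triangle. All numbers are illustrative, not the case-study data.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# (x, y) positions of measured broadband lines and their attainable rate (Mbps)
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
rate   = np.array([12.0, 8.0, 10.0, 5.0, 9.0])

tin = LinearNDInterpolator(points, rate)

# Qualify an arbitrary address inside the triangulated area
print(tin(0.25, 0.75))        # interpolated attainable rate at that location
```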

Keywords: GIS loop qualification, GIS xDSL, LLU, TIN.

1101 Removal of Malachite Green from Aqueous Solution using Hydrilla verticillata -Optimization, Equilibrium and Kinetic Studies

Authors: R. Rajeshkannan, M. Rajasimman, N. Rajamohan

Abstract:

In this study, the sorption of Malachite green (MG) on Hydrilla verticillata biomass, a submerged aquatic plant, was investigated in a batch system. The effects of operating parameters such as temperature, adsorbent dosage, contact time, adsorbent size, and agitation speed on the sorption of Malachite green were analyzed using response surface methodology (RSM). According to the ANOVA results, the proposed quadratic model for the central composite design (CCD) fitted the experimental data very well and could be used to navigate the design space. The optimum sorption conditions were determined as: temperature 43.5 °C, adsorbent dosage 0.26 g, contact time 200 min, adsorbent size 0.205 mm (65 mesh), and agitation speed 230 rpm. The Langmuir and Freundlich isotherm models were applied to the equilibrium data. The maximum monolayer coverage capacity of Hydrilla verticillata biomass for MG was found to be 91.97 mg/g at an initial pH of 8.0, indicating that this is the optimum initial pH for sorption. The external and intra-particle diffusion models were also applied to the sorption data, and it was found that both external diffusion and intra-particle diffusion contribute to the actual sorption process. The pseudo-second-order kinetic model described the MG sorption process with a good fit.
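The isotherm-fitting step mentioned above can be sketched with non-linear least squares; the equilibrium concentration and uptake values are placeholders, not the reported Hydrilla verticillata data:

```python
# Hedged sketch: fit the Langmuir and Freundlich isotherms to equilibrium data.
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])        # equilibrium conc., mg/L (illustrative)
qe = np.array([25.0, 41.0, 60.0, 76.0, 87.0])       # uptake, mg/g (illustrative)

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1.0 / n)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[90.0, 0.05])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[10.0, 2.0])

print(f"Langmuir qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")
print(f"Freundlich KF = {KF:.2f}, n = {n:.2f}")
```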

Keywords: Response surface methodology, Hydrilla verticillata, malachite green, adsorption, central composite design

1100 Systematic Mapping Study of Digitization and Analysis of Manufacturing Data

Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton

Abstract:

The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. As part of this phase of the industrial revolution, traditional manufacturing processes are being combined with digital technologies to achieve smarter and more efficient production. To successfully digitally transform a manufacturing facility, the processes must first be digitized, that is, information must be converted from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide paradigm, Industry 4.0. The formal methodology of a systematic mapping study was utilized to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data. Research papers were classified according to the type of research and the type of contribution to the research area. Upon analyzing the 54 papers identified in this area, it was noted that 23 of them originated in Germany. This is unsurprising, as Industry 4.0 originated as a German strategy, with strong policy instruments used in Germany to support its implementation. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing institution, with three papers published. The literature suggested future research directions and highlighted one specific gap in the area: an unresolved gap between data science experts and manufacturing process experts in industry. Data analytics expertise is of little use unless the manufacturing process information is utilized, and a genuine understanding of the data is crucial to perform accurate analytics and gain true, valuable insights into the manufacturing process. A gap exists between manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test whether this gap exists, the researcher initiated an industrial case study in which they embedded themselves between the subject-matter expert of the manufacturing process and the data scientist. Of the papers resulting from the systematic mapping study, 12 contributed a framework, another 12 were based on a case study, and 11 focused on theory. However, only three papers contributed a methodology. This provides further evidence of the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research.

Keywords: Analytics, digitization, industry 4.0, manufacturing.

1099 Determining the Maximum Lateral Displacement Due to Severe Earthquakes without Using Nonlinear Analysis

Authors: Mussa Mahmoudi

Abstract:

For seismic design, it is important to estimate the maximum lateral (inelastic) displacement of structures due to severe earthquakes, for several reasons. Seismic design provisions estimate the maximum roof and storey drifts occurring in major earthquakes by amplifying the drifts obtained from elastic analysis under the seismic design load with a coefficient named the “displacement amplification factor”, which is greater than one. This coefficient depends on various parameters, such as the ductility and overstrength factors. The present research aims to evaluate the value of the displacement amplification factor in seismic design codes and then proposes a value for estimating the maximum lateral structural displacement due to severe earthquakes without using non-linear analysis. In seismic codes, the displacement amplification is related to the “force reduction factor”; this relation is adopted in the current study. Two methodologies are applied to evaluate the displacement amplification factor and its relation with the force reduction factor. In the first methodology, which applies to all structures, the ratio of the displacement amplification and force reduction factors is determined directly. In the second methodology, which is applicable only to R/C moment-resisting frames, the ratio is obtained by calculating both factors separately. The results of the two methodologies agree and estimate the ratio of the two factors at 1 to 1.2. The results also indicate that the ratio of the displacement amplification factor to the force reduction factor differs from the values proposed by seismic provisions such as NEHRP, IBC and the Iranian seismic code (Standard No. 2800).
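As a reader's aid (not taken from the paper), the quantity being evaluated can be written in the notation commonly used in code-based design, where the inelastic drift is obtained from the elastic drift under the reduced design forces:

\[
\delta_{\max} \;\approx\; C_d\,\delta_{e,\mathrm{design}} \;=\; \frac{C_d}{R}\,\delta_{e,\mathrm{elastic}},
\]

with \(C_d\) the displacement amplification factor, \(R\) the force reduction factor, \(\delta_{e,\mathrm{design}}\) the drift from elastic analysis under the design (reduced) seismic load, and \(\delta_{e,\mathrm{elastic}}\) the drift the structure would experience if it remained elastic under the unreduced load; the study estimates the ratio \(C_d/R\) at roughly 1.0 to 1.2.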

Keywords: Displacement amplification factor, Ductility factor, Force reduction factor, Maximum lateral displacement.

1098 Engineering of E-Learning Content Creation: Case Study for African Countries

Authors: María-Dolores Afonso-Suárez, Nayra Pumar-Carreras, Juan Ruiz-Alzola

Abstract:

This research addresses the use of an e-Learning creation methodology for learning objects. Throughout the process, indicators are gathered to determine whether it responds to the main objectives of an engineering discipline. These parameters will also indicate whether it is necessary to review the creation cycle and readjust any phase. Within the project developed for this study, apart from the use of structured methods, there has been a central objective: the establishment of a learning atmosphere, a place where all the professionals involved are able to collaborate, plan, solve problems and set guidelines to follow in order to develop creative and innovative solutions. It has been outlined as a blended learning program with an assessment plan that proposes face-to-face lessons, coaching, collaboration, multimedia and web-based learning objects, as well as support resources. The project has been conceived as a long-term task; the pilot teaching actions designed so far provide the preliminary results studied here. This methodology is being used in the creation of learning content for the African countries of Senegal, Mauritania and Cape Verde. It has been developed within the framework of MACbioIDi, an Interreg European project for international cooperation and development. The educational area of this project focuses on the training and advising of medical professionals as well as engineers in the use of medical imaging technology applications, specifically the 3DSlicer application and the Open Anatomy Browser.

Keywords: Teaching contents engineering, e-learning, blended learning, international cooperation, 3DSlicer, open anatomy browser.

1097 The MUST ADS Concept

Authors: J-B. Clavel, N. Thiollière, B. Mouginot

Abstract:

The presented work is motivated by a French law regarding nuclear waste management. A new conceptual Accelerator Driven System (ADS) designed for Minor Actinide (MA) transmutation has been assessed by numerical simulation. The MUltiple Spallation Target (MUST) ADS combines high thermal power (up to 1.4 GWth) and high specific power. A 30 mA, 1 GeV proton beam is divided into three secondary beams transmitted onto three liquid lead-bismuth spallation targets. Neutron and thermal-hydraulic simulations have been performed with the code MURE, based on the Monte Carlo transport code MCNPX. A methodology has been developed to define the characteristics of the MUST ADS concept according to a specific transmutation scenario. The reference scenario is based on an MA flux (neptunium, americium and curium) coming from European Fast Reactors (EPR), and a plutonium multi-reprocessing strategy is accounted for. The MUST ADS reference concept is a sodium-cooled fast reactor. The MA fuel at equilibrium is mixed with an MgO inert matrix to limit the core reactivity and improve the fuel thermal conductivity. The fuel is irradiated over five years. Five years of cooling and two years for fuel fabrication are taken into account. The MUST ADS reference concept burns about 50% of the initial MA inventory during a complete cycle. In terms of mass, up to 570 kg/year are transmuted in one concept. The methodology used to design the MUST ADS and to calculate the fuel composition at equilibrium is described in detail in the paper. A detailed fuel evolution analysis is performed, and the reference scenario is compared to a scenario in which only americium transmutation is performed.

Keywords: Accelerator Driven System, double strata scenario, minor actinides, MUST, transmutation.

1096 Strategies for Developing e-LMS for Tanzania Secondary Schools

Authors: Ellen A. Kalinga, R. B. Bagile Burchard, Lena Trojer

Abstract:

Tanzanian secondary schools in rural areas are geographically and socially isolated and hence face a number of problems in obtaining learning materials, resulting in poor performance in national examinations. E-learning, defined as the use of information and communication technology (ICT) to support educational processes, has motivated Tanzania to apply ICT in its education system. There have been efforts to improve secondary school education using ICT through several projects. ICT for e-learning in Tanzanian rural secondary schools is one of the research projects conceived by the University of Dar es Salaam through its College of Engineering and Technology. The main objective of the project is to develop a tool enabling ICT to support rural secondary schools. The project is comprehensive, with a number of components, one being the development of an e-learning management system (e-LMS) for Tanzanian secondary schools. This paper presents strategies for developing the e-LMS. It shows the importance of integrating action research methodology with the modeling methods of model-driven architecture (MDA), and the usefulness of the Unified Modeling Language (UML) for modeling. The benefits of MDA are combined with development based on the software development life cycle (SDLC) process, from the analysis and requirements phase through the design and implementation stages, as employed by the object-oriented system analysis and design approach. The paper also explains the reuse of open-source code from open-source learning platforms for the context-sensitive development of the e-LMS for Tanzanian secondary schools.

Keywords: Action Research Methodology, OOSA&D, MDA, UML, Open Source LMS.

1095 Prediction of Product Size Distribution of a Vertical Stirred Mill Based on Breakage Kinetics

Authors: C. R. Danielle, S. Erik, T. Patrick, M. Hugh

Abstract:

In the last decade there has been an increase in demand for fine grinding due to the depletion of coarse-grained orebodies and an increase in the processing of finely disseminated minerals and complex orebodies. These ores have posed new challenges in concentrator design because fine and ultra-fine grinding is required to achieve acceptable recovery rates. Therefore, the correct design of a grinding circuit is important for minimizing unit costs and increasing product quality. The use of ball mills for grinding in fine size ranges is inefficient; therefore, vertical stirred grinding mills are becoming increasingly popular in the mineral processing industry due to their well-known high energy efficiency. This work presents, as a hypothesis, a methodology to predict the product size distribution of a vertical stirred mill using a Bond ball mill. The Population Balance Model (PBM) was used to empirically analyze the performance of a vertical mill and a Bond ball mill. The breakage parameters obtained for both grinding mills are compared to determine the possibility of predicting the product size distribution of a vertical mill based on the results obtained from the Bond ball mill. The biggest advantage of this methodology is that most mineral processing laboratories already have a Bond ball mill to perform the tests suggested in this study. Preliminary results show the possibility of predicting the performance of a laboratory vertical stirred mill using a Bond ball mill.
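For readers unfamiliar with the breakage-kinetics formalism mentioned above, the size-discretized batch-grinding population balance model is usually written as follows; this is the standard textbook form, added here as context rather than taken from the paper:

\[
\frac{dm_i(t)}{dt} \;=\; -\,S_i\, m_i(t) \;+\; \sum_{j=1}^{i-1} b_{ij}\, S_j\, m_j(t),
\]

where \(m_i\) is the mass fraction in size class \(i\), \(S_i\) is the selection (breakage-rate) function and \(b_{ij}\) is the breakage distribution function describing how broken material from class \(j\) reports to class \(i\). Comparing the fitted \(S_i\) and \(b_{ij}\) parameters of the Bond ball mill and the vertical stirred mill is the basis of the prediction proposed in the abstract.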

Keywords: Bond ball mill, population balance model, product size distribution, vertical stirred mill.

1094 Multi-Criteria Nautical Ports Capacity and Services Planning

Authors: N. Perko, N. Kavran, M. Bukljaš, I. Berbić

Abstract:

This paper presents the results of research on a proposed methodology for nautical port capacity planning, introducing a multi-criteria approach with defined criteria for the Adriatic Sea region. The purpose was to analyze the determinants (the characteristics of the infrastructure and services of the allocated nautical port capacity), which, especially now due to the COVID-19 pandemic, are crucial for the successful operation of nautical ports. Defining priorities for short-term and long-term planning is essential not only for the development of nautical tourism but also for developing the maritime system; unfortunately, this is not always done. Evaluation of the use of resources should follow from a detailed analysis of all aspects of those resources, bearing in mind that nautical tourism should use resources in a sustainable manner and generates effects in both the tourism and maritime sectors. Consequently, the multiplier effect of nautical tourism, which should be defined and quantified in detail, makes it one of the major competitive products on the Croatian Adriatic and in the Mediterranean. Research on nautical tourism is necessary to quantify these effects and to support the required development of the planning system. In the future, the greatest threat to the long-term sustainable development of nautical tourism may be its further uncontrolled, unlimited and undirected development, especially under the pressure of demand for new moorings in the Mediterranean that markedly exceeds supply. The results of this research are applicable to nautical port management and to decision makers in maritime transport system development. This paper presents the research carried out and the result obtained: a developed methodology for nautical port capacity planning based on multi-criteria decision-making. The proposed methodological approach to multi-criteria capacity planning includes four criteria (spatial-transport, cost-infrastructure, ecological and organizational, and additional services). The importance of the criteria and sub-criteria is evaluated and serves as the basis for a sensitivity analysis. Based on the analysis of the identified and quantified importance of the criteria and sub-criteria, as well as the sensitivity analysis and the analysis of changes in that quantified importance, scientific and applicable results are presented. These results are of practical use to nautical port management in planning capacity increases and further development, and in adapting existing nautical ports. The research is applicable and replicable in other seas, and the results are especially important and useful in the challenging maritime development environment of the COVID-19 pandemic.
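As a rough illustration of the aggregation and sensitivity steps behind such a multi-criteria method (the paper's actual weighting procedure is not reproduced here), a weighted-sum sketch with invented weights and port scores:

```python
# Weighted-sum aggregation of four criteria into a port-capacity score, with a
# one-at-a-time sensitivity check on the weights. All numbers are assumptions.
import numpy as np

criteria = ["spatial-transport", "cost-infrastructure", "ecological-organizational", "services"]
weights = np.array([0.35, 0.30, 0.20, 0.15])          # assumed importance
scores = np.array([                                    # normalised scores per port
    [0.8, 0.6, 0.7, 0.5],    # port A
    [0.6, 0.9, 0.5, 0.7],    # port B
])

print("base scores:", scores @ weights)

# Sensitivity: perturb each weight by +/-10 % (renormalised) and re-score
for i, name in enumerate(criteria):
    for delta in (-0.1, 0.1):
        w = weights.copy()
        w[i] *= 1 + delta
        w /= w.sum()
        print(f"{name} {delta:+.0%}:", scores @ w)
```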

Keywords: Adriatic Sea, capacity, infrastructures, maritime system, methodology, nautical ports, nautical tourism, service.

1093 Battery Energy Storage System Economic Benefits Assessment on a Network Frequency Control

Authors: Kréhi Serge Agbli, Samuel Portebos, Michaël Salomon

Abstract:

A methodology is presented for evaluating the economic benefit of providing primary frequency control with a Battery Energy Storage System (BESS). In this methodology, two control types (basic and hysteresis) are implemented, and the minimum energy storage system power that keeps the frequency drop inside a given threshold under a given contingency is identified for each and compared using DigSilent’s PowerFactory software. Following this step, the corresponding energy storage capacity (in MWh) is calculated. As PowerFactory is dedicated to dynamic simulation for transient analysis, a first-order model of the IEEE 9-bus grid used for the PowerFactory analysis is characterized and implemented in MATLAB-Simulink. Primary frequency control with the two control types is simulated on this Simulink model over one month of grid frequency deviation data. This simulation yields the energy throughput of both the basic and the hysteresis BESS. It emerges that a 15-minute operating band of the battery capacity allocated to frequency control is sufficient under the considered disturbances. A sensitivity analysis on the width of the control deadband is then performed for the two control types. Varying the deadband width leads to identical sizing, with the hysteresis control showing better frequency control at the cost of a higher delivered throughput compared to the basic control. An economic analysis comparing the cost of the sized BESS to the potential revenues is then performed.
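A hedged sketch of the throughput calculation for the basic (deadband-limited, droop-like) control is shown below; the file name, deadband, droop setting and BESS rating are assumptions, not the values used with the IEEE 9-bus model:

```python
# Energy throughput of a deadband-limited primary frequency control strategy,
# driven by a time series of measured frequency deviations (1-second samples).
import numpy as np
import pandas as pd

f = pd.read_csv("frequency_deviation.csv")["delta_f_hz"].to_numpy()  # hypothetical file
dt_h = 1.0 / 3600.0                  # sample period in hours
deadband = 0.02                      # Hz, no response inside the band (assumed)
p_rated = 10.0                       # MW BESS power rating (assumed)
full_activation = 0.2                # Hz deviation giving full power (assumed droop)

dev = np.where(np.abs(f) > deadband, f - np.sign(f) * deadband, 0.0)
power = np.clip(-dev / full_activation * p_rated, -p_rated, p_rated)  # MW, discharge on under-frequency

throughput_mwh = np.sum(np.abs(power)) * dt_h
print(f"energy throughput over the period: {throughput_mwh:.1f} MWh")
```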

Keywords: Battery Energy Storage System, electrical network frequency stability, frequency control unit, PowerFactory.

1092 A β-mannanase from Fusarium oxysporum SS-25 via Solid State Fermentation on Brewer’s Spent Grain: Medium Optimization by Statistical Tools, Kinetic Characterization and Its Applications

Authors: S. S. Rana, C. Janveja, S. K. Soni

Abstract:

This study is concerned with the optimization of fermentation parameters for the hyper-production of mannanase from Fusarium oxysporum SS-25, employing a two-step statistical strategy, and with the kinetic characterization of the crude enzyme preparation. The Plackett-Burman design used to screen out the important factors in the culture medium revealed 20% (w/w) wheat bran, 2% (w/w) each of potato peels, soyabean meal and malt extract, 1% tryptone, 0.14% NH4SO4, 0.2% KH2PO4, 0.0002% ZnSO4, 0.0005% FeSO4, 0.01% MnSO4, 0.012% SDS, 0.03% NH4Cl and 0.1% NaNO3 in a brewer’s spent grain based medium with 50% moisture content, inoculated with 2.8×10⁷ spores and incubated at 30 °C for 6 days, to be the main parameters influencing enzyme production. Of these factors, four variables (soyabean meal, FeSO4, MnSO4 and NaNO3) were chosen to study their interactive effects and optimum levels in a central composite design of response surface methodology, giving a final mannanase yield of 193 IU/gds. The kinetic characterization revealed the crude enzyme to be active over a broad temperature and pH range. The preparation gave a 26.6% reduction in kappa number, a 4.93% higher tear index and a 1% increase in brightness when used to treat wheat straw based kraft pulp. The hydrolytic potential of the enzyme was also demonstrated on both locust bean gum and guar gum.

Keywords: Brewer’s Spent Grain, Fusarium oxysporum, Mannanase, Response Surface Methodology.

1091 Parameter Optimization and Thermal Simulation in Laser Joining of Coach Peel Panels of Dissimilar Materials

Authors: Masoud Mohammadpour, Blair Carlson, Radovan Kovacevic

Abstract:

The quality of laser welded-brazed (LWB) joints is strongly dependent on the main process parameters; therefore, the effects of laser power (3.2–4 kW), welding speed (60–80 mm/s) and wire feed rate (70–90 mm/s) on mechanical strength and surface roughness were investigated in this study. A comprehensive optimization process based on response surface methodology (RSM) and a desirability function was used for the multi-criteria optimization. The experiments were planned on a Box–Behnken design, implementing linear and quadratic polynomial equations to predict the desired output properties. Finally, validation experiments were conducted at an optimized process condition and showed good agreement between the predicted and experimental results. AlSi3Mn1 was selected as the filler material for joining aluminum alloy 6022 and hot-dip galvanized steel in a coach peel configuration. The high scanning speed limited the thickness of the intermetallic compound (IMC) layer to as little as 5 µm. Thermal simulations of the joining process were conducted by the finite element method (FEM), and the results were validated against experimental data. The Fe/Al interfacial thermal history showed that the duration in the critical temperature range (700–900 °C) in this high-scanning-speed process was less than 1 s. This short interaction time leads to the formation of a reaction-controlled IMC layer rather than diffusion-controlled growth.

Keywords: Laser welding-brazing, finite element, response surface methodology, multi-response optimization, cross-beam laser.

1090 Research on the Methodologies of the Opportune Innovation - A Case Study of BYD

Authors: Guangjie Liu

Abstract:

The main purpose of this paper is to examine the methodologies BYD uses to implement opportune innovation. BYD is a Chinese company with businesses in IT component manufacturing, rechargeable batteries and automobiles. The paper deals with the innovation methodology, as well as the IPR management, that BYD implements in order to achieve rapid growth in technology development at a reasonable cost in money and time.

Keywords: Opportune innovation, vertical integration, unpatenting integration, patenting.

1089 Technology Identification, Evaluation and Selection Methodology for Industrial Process Water and Waste Water Treatment Plant of 3x150 MWe Tufanbeyli Lignite-Fired Power Plant

Authors: Cigdem Safak Saglam

Abstract:

Most thermal power plants use steam as the working fluid in their power cycle. Therefore, in addition to fuel, water is the other main input for thermal plants. Water and steam must be highly pure in order to protect the systems from corrosion, scaling and biofouling. Pure process water is produced in water treatment plants employing several treatment methods. The treatment plant design is selected depending on the raw water source and the required water quality. Although the working principle of fossil-fuel-fired thermal power plants is the same, there is no standard design and equipment arrangement valid for all thermal power plant utility systems. In addition, many other technology evaluation and selection criteria affect the design of optimal water systems, such as local conditions, environmental restrictions, availability and transport of electricity and other consumables, process water sources and scarcity, and land use constraints. The aim of this study is to explain the methodology adopted for technology selection for the process water preparation and industrial waste water treatment plant in a thermal power plant project located in Tufanbeyli, Adana Province, Turkey. The power plant is fired with indigenous lignite coal extracted from adjacent lignite reserves. This paper addresses all the above-mentioned factors affecting the design of the plant's water treatment facilities (demineralization and waste water treatment) and describes the final design of the Tufanbeyli Thermal Power Plant water treatment plant.

Keywords: Thermal power plant, lignite coal, pre-treatment, demineralization, electrodialysis, recycling, waste water, process water.

1088 Optimization of Assembly and Welding of Complex 3D Structures on the Base of Modeling with Use of Finite Elements Method

Authors: M. N. Zelenin, V. S. Mikhailov, R. P. Zhivotovsky

Abstract:

It is known that residual welding deformations negatively affect the processability and operational quality of welded structures, complicating their assembly and reducing their strength. Therefore, the selection of an optimal technology, ensuring minimum welding deformations, is one of the main goals in developing a technology for manufacturing welded structures. Over the years, JSC SSTC has been developing a theory for estimating welding deformations and practical measures for reducing and compensating for such deformations during the welding process. For a long time, a methodology based on analytic dependences was used. It allowed the volumetric changes of the metal due to welding heating and subsequent cooling to be defined. However, the dependences for determining structural deformations arising from the volumetric changes of metal in the weld area allowed calculations only for simple structures, such as units, flat sections and sections with small curvature. For complex 3D structures, estimates based on analytic dependences gave significant errors. To eliminate this shortcoming, it was suggested to use the finite element method for solving the deformation problem. Here, one first calculates the longitudinal and transverse shortenings of the welded joints using the analytic dependences and then, from the obtained shortenings, calculates the forces whose action is equivalent to that of the active welding stresses. Next, a finite element model of the structure is developed and the equivalent forces are applied to this model. Based on the calculation results, an optimal sequence of assembly and welding is selected, and special measures to reduce and compensate for welding deformations are developed and implemented.

Keywords: Finite elements method, modeling, expected welding deformations, welding, assembling.

1087 CO2 Emission and Cost Optimization of Reinforced Concrete Frame Designed by Performance Based Design Approach

Authors: Jin Woo Hwang, Byung Kwan Oh, Yousok Kim, Hyo Seon Park

Abstract:

As the greenhouse effect has been recognized as a serious global environmental problem, interest in carbon dioxide (CO2) emissions, which comprise the major part of greenhouse gas (GHG) emissions, has increased recently. Since the construction industry accounts for a relatively large portion of the world's total CO2 emissions, extensive studies on reducing CO2 emissions in the construction and operation of buildings have been carried out since the 2000s. In addition, performance-based design (PBD) methodology based on nonlinear analysis has been developed intensively since the 1994 Northridge Earthquake to assess and assure the seismic performance of buildings more precisely, because structural engineers recognized that the prescriptive code-based design approach cannot address inelastic earthquake responses directly. Although CO2 emissions and the PBD approach are both rising issues in the construction industry and structural engineering, few or no studies have considered them simultaneously. Thus, the objective of this study is to minimize the CO2 emissions and cost of a building designed by the PBD approach at the structural design stage, considering the structural materials. A four-story, four-span reinforced concrete building was optimally designed, using the non-dominated sorting genetic algorithm II (NSGA-II), to minimize CO2 emissions and cost while satisfying a specified seismic performance level (collapse prevention under the maximum considered earthquake) and prescriptive code regulations. The optimized design showed that minimized CO2 emissions and cost were achieved while the specified seismic performance was satisfied. Therefore, the methodology proposed in this paper can be used to reduce both the CO2 emissions and the cost of buildings designed by the PBD approach.
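The core filtering idea of NSGA-II, keeping only non-dominated (Pareto-optimal) designs in the CO2 versus cost plane, can be sketched in pure numpy; the candidate designs below are invented placeholders, not results from the paper:

```python
# Pareto (non-dominated) filter over two objectives to be minimised:
# embodied CO2 and cost. NSGA-II repeatedly applies this kind of sorting.
import numpy as np

def non_dominated(F):
    """Boolean mask of Pareto-optimal rows of F (all objectives minimised)."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue
        # j dominates i if j is <= i in every objective and < in at least one
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominates_i.any():
            mask[i] = False
    return mask

# columns: [CO2 emissions (t), cost (arbitrary units)] for candidate frame designs
F = np.array([[520, 1.00], [480, 1.10], [455, 1.30], [600, 0.95], [530, 1.12]])
print(F[non_dominated(F)])     # the CO2-cost trade-off front
```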

Keywords: CO2 emissions, performance based design, optimization, sustainable design.

1086 Influence of Local Soil Conditions on Optimal Load Factors for Seismic Design of Buildings

Authors: Miguel A. Orellana, Sonia E. Ruiz, Juan Bojórquez

Abstract:

Optimal load factors (dead, live and seismic) used for the design of buildings may differ depending on the characteristics of the seismic ground motions to which the buildings are subjected, which are closely related to the soil conditions at the site where the structures are located. The influence of the type of soil on those load factors is analyzed in the present study. A methodology for establishing optimal load factors that minimize the cost over the life cycle of the structure is employed; as a constraint, it is established that the probability of structural failure must be less than or equal to a prescribed value. The life-cycle cost model used here includes different types of costs. The optimization methodology is applied to two groups of reinforced concrete buildings. One set (consisting of 4-, 7-, and 10-story buildings) is located on firm ground (with a dominant period Ts = 0.5 s) and the other (consisting of 6-, 12-, and 16-story buildings) on soft soil (Ts = 1.5 s) of Mexico City. Each group of buildings is designed using different combinations of load factors. The statistics of the maximum inter-story drifts (associated with the structural capacity) are found by means of incremental dynamic analyses. The buildings located in the firm-ground zone are analyzed under the action of 10 strong seismic records, and those in the soft-soil zone under 13 strong ground motions. All the motions correspond to seismic subduction events with magnitudes M = 6.9. The structural damage and expected total costs corresponding to each group of buildings are then estimated. It is concluded that the optimal load factor combination for the design of buildings located on firm ground differs from that for buildings located on soft soil.

Keywords: Life-cycle cost, optimal load factors, reinforced concrete buildings, total costs, type of soil.

1085 Statistical Analysis and Optimization of a Process for CO2 Capture

Authors: Muftah H. El-Naas, Ameera F. Mohammad, Mabruk I. Suleiman, Mohamed Al Musharfy, Ali H. Al-Marzouqi

Abstract:

CO2 capture and storage technologies play a significant role in the control of climate change through the reduction of carbon dioxide emissions into the atmosphere. The present study evaluates and optimizes CO2 capture through a process in which carbon dioxide is passed into pH-adjusted, high-salinity water and reacted with sodium chloride to form a precipitate of sodium bicarbonate. This process is based on a modified Solvay process with higher CO2 capture efficiency, higher sodium removal, and a higher pH level, without the use of ammonia. The process was tested in a bubble column semi-batch reactor and was optimized using response surface methodology (RSM). CO2 capture efficiency and sodium removal were optimized in terms of the major operating parameters, based on four levels and four variables in a central composite design (CCD). The operating parameters were gas flow rate (0.5–1.5 L/min), reactor temperature (10–50 °C), buffer concentration (0.2–2.6%) and water salinity (25–197 g NaCl/L). The experimental data were fitted to a second-order polynomial using multiple regression and analyzed using analysis of variance (ANOVA). The optimum values of the selected variables were obtained using a response optimizer. The optimum conditions were tested experimentally using desalination reject brine with a salinity ranging from 65,000 to 75,000 mg/L. The CO2 capture efficiency in 180 min was 99% and the maximum sodium removal was 35%. The experimental and predicted values were within the 95% confidence interval, which demonstrates that the developed model can successfully predict the capture efficiency and sodium removal using the modified Solvay method.

Keywords: Bubble column reactor, CO2 capture, Response Surface Methodology, water desalination.
