Search results for: Model checking
301 Conservation Agriculture Practice in Bangladesh: Farmers’ Socioeconomic Status and Soil Environment Perspective
Authors: Mohammad T. Uddin, Aurup R. Dhar
Abstract:
The study was conducted to assess the impact of conservation agriculture practice on farmers’ socioeconomic condition and soil environmental quality in Bangladesh. A total of 450 farmers (50 focal, 150 proximal and 250 control) from five districts were selected for this study. Descriptive statistics such as sums, averages and percentages were calculated to evaluate the socioeconomic data. Using Enyedi’s crop productivity index, it was found that the crop productivity of focal, proximal and control farmers increased by 0.9, 1.2 and 1.3 percent, respectively. The result of difference-in-difference (DID) analysis indicated that the impact of conservation agriculture practice on farmers’ average annual income was significant. The multidimensional poverty index (MPI) indicates that poverty in terms of deprivation of health, education and living standards decreased, and a remarkable improvement in farmers’ socioeconomic status was found after adopting conservation agriculture practice. Most focal and proximal farmers reported an improved soil environmental condition, whereas the majority of control farmers reported no change in this regard. The Probit model reveals that minimum tillage operation, permanent organic soil cover, and application of compost and vermicompost were significant factors affecting soil environmental quality under conservation agriculture. Input support, motivation, training programmes and extension services are recommended to raise the awareness and enrich the knowledge of farmers on conservation agriculture practice.
Keywords: Conservation agriculture, crop productivity, socioeconomic status, soil environment quality.
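As a rough illustration of the difference-in-difference estimate used for annual income in the abstract above, the following Python sketch computes a DID value from a simple 2x2 before/after comparison; the income figures and group labels are invented for illustration and are not the study’s data.

```python
# Hypothetical mean annual incomes (thousand BDT); illustration only.
income = {
    ("adopter", "before"): 152.0,
    ("adopter", "after"): 188.0,
    ("control", "before"): 149.0,
    ("control", "after"): 161.0,
}

# Difference-in-difference: change in adopters minus change in controls.
did = (income[("adopter", "after")] - income[("adopter", "before")]) \
    - (income[("control", "after")] - income[("control", "before")])

print(f"DID estimate of the practice's impact on income: {did:.1f} thousand BDT")
```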
300 Pharmaceutical Microencapsulation Technology for Development of Controlled Release Drug Delivery Systems
Authors: Mahmood Ahmad, Asadullah Madni, Muhammad Usman, Abubakar Munir, Naveed Akhtar, Haji M. Shoaib Khan
Abstract:
This article demonstrates the development of a controlled release system of an NSAID drug, diclofenac sodium, employing different ratios of ethyl cellulose. Diclofenac sodium and ethyl cellulose in different proportions were processed by microencapsulation based on a phase separation technique to formulate microcapsules. The prepared microcapsules were then compressed into tablets to obtain controlled release oral formulations. For in-vitro evaluation, a dissolution test of each preparation was conducted in 900 ml of phosphate buffer solution of pH 7.2 maintained at 37 ± 0.5 °C and stirred at 50 rpm, with samples collected at predetermined time intervals (0, 0.5, 1.0, 1.5, 2, 3, 4, 6, 8, 10, 12, 16, 20 and 24 h). The drug concentration in the collected samples was determined by UV spectrophotometer at 276 nm. The physical characteristics of the diclofenac sodium microcapsules were within the accepted range: they were off-white, free flowing and spherical in shape. The release profile of diclofenac sodium from the microcapsules was found to be directly proportional to the proportion of ethyl cellulose and the coat thickness. The in-vitro release pattern showed that with ratios of 1:1 and 1:2 (drug:polymer), the percentage release of drug at the first hour was 16.91% and 11.52%, respectively, compared with only 6.87% for the 1:3 ratio within the same time. The release mechanism followed the Higuchi model. The tablet formulation (F2) of the present study was found comparable in release profile to the marketed brand Phlogin-SR, and the microcapsules showed an extended release beyond 24 h. Further, a good correlation was found between drug release and the proportion of ethyl cellulose in the microcapsules. Microencapsulation based on coacervation was found to be a good technique for controlling the release of diclofenac sodium in controlled release formulations.
Keywords: Diclofenac sodium, microencapsulation technology, ethyl cellulose, in-vitro release profile.
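The Higuchi model mentioned above relates cumulative release Q to the square root of time, Q = kH·√t. Below is a minimal Python sketch of fitting kH by least squares through the origin; the time points follow the dissolution schedule in the abstract (the zero point omitted), but the release percentages are invented for illustration only.

```python
import numpy as np

# Sampling times (h) from the dissolution schedule; release values are hypothetical.
t = np.array([0.5, 1, 1.5, 2, 3, 4, 6, 8, 10, 12, 16, 20, 24], dtype=float)
q = np.array([7, 11, 14, 16, 20, 23, 28, 33, 37, 40, 46, 52, 57], dtype=float)  # % released

# Higuchi model: Q = kH * sqrt(t); fit kH through the origin by least squares.
x = np.sqrt(t)
k_h = np.sum(x * q) / np.sum(x * x)

pred = k_h * x
r2 = 1 - np.sum((q - pred) ** 2) / np.sum((q - q.mean()) ** 2)
print(f"Higuchi constant kH = {k_h:.2f} %/sqrt(h), R^2 = {r2:.3f}")
```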
299 Thermo-mechanical Deformation Behavior of Functionally Graded Rectangular Plates Subjected to Various Boundary Conditions and Loadings
Authors: Mohammad Talha, B. N. Singh
Abstract:
This paper deals with the thermo-mechanical deformation behavior of shear deformable functionally graded ceramic-metal (FGM) plates. Theoretical formulations are based on a higher order shear deformation theory with a considerable amendment in the transverse displacement, using the finite element method (FEM). The mechanical properties of the plate are assumed to be temperature-dependent and graded in the thickness direction according to a power-law distribution in terms of the volume fractions of the constituents. The temperature field is assumed to be uniform over the plate surface (XY plane) and to vary in the thickness direction only. The fundamental equations for the FGM plates are obtained using a variational approach by considering traction-free boundary conditions on the top and bottom faces of the plate. A C0 continuous isoparametric Lagrangian finite element with thirteen degrees of freedom per node has been employed to accomplish the results. Convergence and comparison studies have been performed to demonstrate the efficiency of the present model. The numerical results are obtained for different thickness ratios, aspect ratios, volume fraction indices and temperature rises with different loading and boundary conditions. Numerical results for the FGM plates are provided in dimensionless tabular and graphical forms. The results show that the temperature field and the gradient in the material properties have a significant role on the thermo-mechanical deformation behavior of the FGM plates.
Keywords: Functionally graded material, higher order shear deformation theory, finite element method, independent field variables.
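As a small illustration of the power-law gradation referred to above, the Python sketch below evaluates an effective property through the plate thickness using the rule of mixtures with ceramic volume fraction Vc(z) = (z/h + 1/2)^n. The moduli, thickness and gradation indices are placeholder values, not the paper’s data.

```python
import numpy as np

def effective_property(z, h, p_ceramic, p_metal, n):
    """Rule-of-mixtures property at thickness coordinate z in [-h/2, h/2],
    with ceramic volume fraction Vc = (z/h + 0.5)**n (power-law FGM)."""
    vc = (z / h + 0.5) ** n
    return p_ceramic * vc + p_metal * (1.0 - vc)

h = 0.02                              # plate thickness (m), placeholder
E_ceramic, E_metal = 380e9, 70e9      # Young's moduli (Pa), placeholder values
for n in (0.5, 1.0, 2.0):             # volume fraction index
    z = np.linspace(-h / 2, h / 2, 5)
    E = effective_property(z, h, E_ceramic, E_metal, n)
    print(f"n={n}: E(z) from bottom (metal-rich) to top (ceramic-rich) = {np.round(E / 1e9, 1)} GPa")
```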
298 Nonlinear Finite Element Analysis of Optimally Designed Steel Angelina™ Beams
Authors: Ferhat Erdal, Osman Tunca, Serkan Tas, Serdar Carbas
Abstract:
Web-expanded steel beams provide an easy and economical solution for systems with longer structural members. The main goal of manufacturing these beams is to increase the moment of inertia and section modulus, which results in greater strength and rigidity. Until the advent of sinusoidal web-expanded beams, there were two common types of open web-expanded beams: beams with hexagonal openings, also called castellated beams, and beams with circular openings, referred to as cellular beams. In the present research, the optimum design of the new generation of beams, namely sinusoidal web-expanded beams, is carried out and the design results are compared with castellated and cellular beam solutions. Thanks to a reduced fabrication process and substantial material savings, the web-expanded beam with sinusoidal holes (Angelina™ Beam) meets the economic requirements of steel design problems while ensuring optimum safety. The objective of this research is to carry out non-linear finite element analysis (FEA) of the web-expanded beam with sinusoidal holes. The FE method has been used to predict the beams' entire response to increasing values of external loading until they lose their load carrying capacity. An FE model of each specimen used in the experimental studies is built. These models are used to simulate the experimental work, to verify the test results, and to investigate the non-linear behavior of failure modes such as web-post buckling, shear buckling and Vierendeel bending of the beams.
Keywords: Steel structures, web-expanded beams, Angelina™ beam, optimum design, failure modes, finite element analysis.
297 Impact of Climate Change on Sea Level Rise along the Coastline of Mumbai City, India
Authors: Chakraborty Sudipta, A. R. Kambekar, Sarma Arnab
Abstract:
Sea-level rise is one of the most important impacts of anthropogenic climate change, resulting from global warming and the melting of ice at the Arctic and Antarctic. This paper reviews investigations carried out by various researchers, both on the Indian coast and elsewhere, during the last decade. The paper aims to ascertain how consistently the different suggested methods can predict near-accurate future sea level rise along the coast of Mumbai. Case studies on the East Coast, the Southern Tip, and the West and South West coasts of India have been reviewed. The Coastal Vulnerability Index of several important international places has been compared and found to match Intergovernmental Panel on Climate Change forecasts. The reviewed studies apply Geographic Information System mapping and remote sensing technology, using both Multi Spectral Scanner and Thematic Mapper data from Landsat classified through the Iterative Self-Organizing Data Analysis Technique, to arrive at high, moderate and low Coastal Vulnerability Index values at various important coastal cities. Rather than purely data-driven, hindcast-based forecasts of significant wave height, inclusion of the additional impact of sea level rise has been suggested. The efficacy and limitations of numerical methods compared with Artificial Neural Networks have been assessed, and the importance of root mean square error in judging numerical results is noted. Among the computerized methods compared, forecast results obtained from MIKE 21 have been considered more reliable than those of the Delft 3D model.
Keywords: Climate change, coastal vulnerability index, global warming, sea level rise.
296 Reducing Defects through Organizational Learning within a Housing Association Environment
Authors: T. Hopkin, S. Lu, P. Rogers, M. Sexton
Abstract:
Housing Associations (HAs) contribute circa 20% of the UK’s housing supply. HAs are, however, under increasing pressure as a result of funding cuts and rent reductions. Due to the increased pressure, a number of processes are currently being reviewed by HAs, especially how they manage and learn from defects. Learning from defects is considered a useful approach to achieving defect reduction within the UK housebuilding industry. This paper contributes to our understanding of how HAs learn from defects by undertaking an initial round table discussion with key HA stakeholders, as part of an ongoing collaborative research project with the National House Building Council (NHBC) to better understand how house builders and HAs learn from defects to reduce their prevalence. The initial discussion shows that defect information runs through a number of groups, both internal and external to a HA, during both the defects management process and the organizational learning (OL) process. Furthermore, HAs are reliant on capturing and recording defect data as the foundation for the OL process. During the OL process, defect data analysis is the primary enabler for recognizing a need for a change to organizational routines. When a need for change has been recognized, new options are typically pursued to design out defects via updates to a HA's Employer’s Requirements. Proposed solutions are selected by a review board and committed to organizational routine. After implementing a change, both structured and unstructured feedback is sought to establish the change’s success. The findings from the HA discussion demonstrate that OL can achieve defect reduction within the house building sector in the UK. The paper concludes by outlining a potential ‘learning from defects model’ for the housebuilding industry as well as describing future work.
Keywords: Defects, new homes, housing associations, organizational learning.
295 The Cloud Systems Used in Education: Properties and Overview
Authors: Agah Tuğrul Korucu, Handan Atun
Abstract:
The diversity and usefulness of the information used in education have increased with the development of technology. Web technologies have made enormous contributions to distance learning systems in particular. Mobile systems, one of the most widely used technologies in distance education, have made it much easier to access web technologies. Unbound by space and time, individuals now have the opportunity to access information on the web. In addition, the storage of educational information and resources, and access to them, are crucial for both students and teachers; ease of access to information and resources is therefore provided by web technologies. Dynamic web technologies, introduced as new technologies that enable the sharing and reuse of information, resources or applications via the Internet and that turn websites into expandable platforms, are commonly known as Web 2.0 technologies. Cloud systems are one of these dynamic web technologies, defined by NIST as a model for accessing the demanded information independently of time and space under appropriate circumstances. One of the most important advantages of cloud systems is that they meet users’ requirements directly on the web regardless of hardware and software, with no installation required. Hence, this study aims at using cloud services in education and investigating the services provided by cloud computing. The survey method has been used as the research method. The findings of this research reveal that cloud systems are used in education for resource sharing, collaborative work, assignment submission and feedback, and project development, and that cloud systems have plenty of significant advantages in terms of facilitating teaching activities and the interaction between teacher, student and environment.
Keywords: Cloud systems, cloud systems in education, distance learning, e-learning, integration of information technologies, online learning environment.
294 Scheduling Method for Electric Heater in HEMS Considering User’s Comfort
Authors: Yong-Sung Kim, Je-Seok Shin, Ho-Jun Jo, Jin-O Kim
Abstract:
Home Energy Management Systems (HEMS), which enable residential consumers to contribute to demand response, have been attracting attention in recent years. The aim of a HEMS is to minimize the consumer's electricity cost by controlling the use of appliances according to the electricity price. The use of appliances in a HEMS may be affected by conditions such as external temperature and electricity price. Therefore, the user’s usage pattern of appliances should be modeled according to the external conditions, and the resulting usage pattern is related to the user’s comfort in using each appliance. This paper proposes a methodology to model the usage pattern based on historical data with the copula function. Through the copula function, the usage range of each appliance can be obtained so as to satisfy the user’s comfort under the external conditions expected for the next day. Within the usage range, an optimal scheduling of appliances is conducted to minimize the electricity cost while considering the user’s comfort. Among home appliances, the electric heater (EH) is a representative appliance affected by the external temperature. In this paper, an optimal scheduling algorithm for the EH is addressed based on the branch and bound method. As a result, scenarios for EH usage are obtained according to the user’s comfort levels, and the residential consumer then selects the best scenario. The case study shows the effects of the proposed algorithm compared with the traditional operation of the EH and illustrates the impact of the comfort level on the scheduling result.
Keywords: Load scheduling, usage pattern, user’s comfort, copula function, branch and bound, electric heater.
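A minimal illustration of deriving a usage range from a copula, as described in the abstract above: a Gaussian copula links external temperature to daily heater usage, and conditional samples give a plausible usage band for a forecast temperature. The marginal distributions, correlation, and forecast value below are invented for the sketch and are not from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = -0.7          # assumed dependence: colder days -> more heater use
n = 10000

# Sample from a Gaussian copula (bivariate normal -> uniforms via the normal CDF).
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
u_temp, u_use = stats.norm.cdf(z[:, 0]), stats.norm.cdf(z[:, 1])

# Map uniforms through assumed marginals: temperature (C) and usage (hours/day).
temp = stats.norm.ppf(u_temp, loc=2.0, scale=6.0)
usage = stats.gamma.ppf(u_use, a=4.0, scale=1.2)

# Conditional usage range for a forecast temperature near -5 C.
mask = np.abs(temp - (-5.0)) < 1.0
lo, hi = np.percentile(usage[mask], [10, 90])
print(f"Suggested EH usage range for ~-5 C: {lo:.1f} to {hi:.1f} h/day")
```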
293 Knowledge Transfer among Cross-Functional Teams as a Continual Improvement Process
Authors: Sergio Mauricio Pérez López, Luis Rodrigo Valencia Pérez, Juan Manuel Peña Aguilar, Adelina Morita Alexander
Abstract:
The culture of continuous improvement in organizations is very important as it represents a source of competitive advantage. This article discusses the transfer of knowledge between companies which formed cross-functional teams and used a dynamic model for knowledge creation as a framework. In addition, the article discusses the structure of cognitive assets in companies and the concept of "stickiness" (which is defined as an obstacle to the transfer of knowledge). The purpose of this analysis is to show that an improvement in the attitude of individual members of an organization creates opportunities, and that an exchange of information and knowledge leads to generating continuous improvements in the company as a whole. This article also discusses the importance of creating the proper conditions for sharing tacit knowledge. By narrowing gaps between people, mutual trust can be created and thus contribute to an increase in sharing. The concept of adapting knowledge to new environments will be highlighted, as it is essential for companies to translate and modify information so that such information can fit the context of receiving organizations. Adaptation will ensure that the transfer process is carried out smoothly by preventing "stickiness". When developing the transfer process on cross-functional teams (as opposed to working groups), the team acquires the flexibility and responsiveness necessary to meet objectives. These types of cross-functional teams also generate synergy due to the array of different work backgrounds of their individuals. When synergy is established, a culture of continuous improvement is created.
Keywords: Knowledge transfer, continuous improvement, teamwork, cognitive assets.
292 Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences
Authors: C. Xavier Mendieta, J. J McArthur
Abstract:
Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have resulted in a significant amount of publicly available data, providing researchers with a unique opportunity to develop location-specific energy and carbon emission benchmarks from this data set, which can then be used to develop building archetypes and to inform urban energy models. This study presents the development of such a benchmark using the public reporting data. The data from Ontario’s Ministry of Energy for Post-Secondary Educational Institutions are being used to develop a series of building archetype dynamic building loads and energy benchmarks to fill a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas in Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology presented includes data cleaning, statistical analysis, and benchmark development, and lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) the importance of careful data screening and outlier identification to develop a valid dataset; (2) the key features used to develop a model of the data are building age, size, and occupancy schedules, and these can be used to estimate energy consumption; and (3) policy changes affecting the primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical to evaluate the validity of the reported data.
Keywords: Building archetypes, data analysis, energy benchmarks, GHG emissions.
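The screening and benchmarking steps described above can be illustrated with a short Python sketch: an interquartile-range filter removes implausible energy-use intensities, and a simple linear model relates the remaining intensities to building age and size. The synthetic records below are placeholders standing in for the Ontario disclosure data, not the actual dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic residence records: age (years), floor area (m2), EUI (ekWh/m2).
age = rng.uniform(5, 60, 200)
area = rng.uniform(2000, 20000, 200)
eui = 250 + 1.5 * age - 0.004 * area + rng.normal(0, 25, 200)
eui[:5] = [5, 9, 1400, 1550, 2.0]          # injected reporting errors

# 1) Outlier screening with the interquartile-range rule.
q1, q3 = np.percentile(eui, [25, 75])
iqr = q3 - q1
keep = (eui > q1 - 1.5 * iqr) & (eui < q3 + 1.5 * iqr)

# 2) Benchmark model: EUI ~ age + area (ordinary least squares).
X = np.column_stack([np.ones(keep.sum()), age[keep], area[keep]])
coef, *_ = np.linalg.lstsq(X, eui[keep], rcond=None)
print(f"Records kept: {keep.sum()}/200, benchmark EUI = "
      f"{coef[0]:.1f} + {coef[1]:.2f}*age + {coef[2]:.4f}*area")
```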
291 Control of Vibrations in Flexible Smart Structures using Fast Output Sampling Feedback Technique
Authors: T.C. Manjunath, B. Bandyopadhyay
Abstract:
This paper features the modeling and design of a Fast Output Sampling (FOS) Feedback control technique for the Active Vibration Control (AVC) of a smart flexible aluminium cantilever beam for a Single Input Single Output (SISO) case. Controllers are designed for the beam by bonding patches of piezoelectric layer as sensor / actuator to the master structure at different locations along the length of the beam by retaining the first 2 dominant vibratory modes. The entire structure is modeled in state space form using the concept of piezoelectric theory, Euler-Bernoulli beam theory, Finite Element Method (FEM) and the state space techniques by dividing the structure into 3, 4, 5 finite elements, thus giving rise to three types of systems, viz., system 1 (beam divided into 3 finite elements), system 2 (4 finite elements), system 3 (5 finite elements). The effect of placing the sensor / actuator at various locations along the length of the beam for all the 3 types of systems considered is observed and the conclusions are drawn for the best performance and for the smallest magnitude of the control input required to control the vibrations of the beam. Simulations are performed in MATLAB. The open loop responses, closed loop responses and the tip displacements with and without the controller are obtained and the performance of the proposed smart system is evaluated for vibration control.
Keywords: Smart structure, Finite element method, State space model, Euler-Bernoulli theory, SISO model, Fast output sampling, Vibration control, LMI.
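The fast output sampling design itself stacks several output samples per control update, and that gain synthesis is beyond a short sketch; for orientation, the Python sketch below only shows the kind of single-mode state-space beam model such a controller acts on, with a simple rate-feedback gain standing in for the FOS gain. All numerical values are placeholders, not the paper's beam data.

```python
import numpy as np

# One dominant vibratory mode of the smart cantilever as a mass-spring-damper
# in state space, x = [modal displacement, modal velocity]. The rate-feedback
# gain below merely stands in for the FOS controller (assumed values throughout).
wn, zeta, dt = 2 * np.pi * 12.0, 0.005, 1e-4     # assumed 12 Hz first mode
A = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])
B = np.array([0.0, 1.0])                         # scaled actuator influence
kd = 50.0                                        # placeholder feedback gain

x = np.array([1e-3, 0.0])                        # 1 mm initial tip deflection
for k in range(20000):                           # 2 s of simulated response
    u = -kd * x[1]                               # rate feedback (active damping)
    x = x + dt * (A @ x + B * u)                 # forward-Euler time stepping

print(f"Tip displacement after 2 s with control: {x[0] * 1e3:.4f} mm")
```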
290 The Micro Ecosystem Restoration Mechanism Applied for Feasible Research of Lakes Eutrophication Enhancement
Authors: Ching-Tsan Tsai, Sih-Rong Chen, Chi-Hung Hsieh
Abstract:
The technique of inducing micro ecosystem restoration is one of the aquatic ecology engineering methods used to remediate polluted water. Batch scale, pilot plant and field studies were carried out to observe eutrophication using the Inducing Ecology Restorative Symbiosis Agent (IERSA), consisting mainly of products degraded by lactobacillus, saccharomycete, and phycomycete. The results obtained from the batch scale and pilot plant studies allowed us to develop the parameters for the field study. A pond, 5 m from the outlet of a lake, with an area of 500 m2, a depth of 0.6-1.2 m and containing about 500 tons of water, was selected as a model. After treatment with 10 mg IERSA/L water twice a week for 70 days, the micro restoration mechanism consisted of three stages (i.e., restoration, impact maintenance, and ecology recovery after impact). The COD, TN, TKN, and chlorophyll a were reduced significantly in the first week. Unexpected heavy rain and contamination from the sewage system slowed the ecology restoration; however, the self-cleaning function continued and chlorophyll a was reduced by 50% in one month. In the fourth week, amoeba, paramecium, rotifer, and red wriggle worm reappeared, and the number of fish fry reached up to 1000 fish fry/m3. These results proved that the inducing restorative mechanism can be applied to improve eutrophication and to control the growth of algae in lakes by achieving self-cleaning through the induction of, and competition among, microbes. The growth of fish also reached an excellent level owing to the improvement in water quality.
Keywords: Ecosystem restoration, eutrophication, lake.
289 Conventional and Hybrid Network Energy Systems Optimization for Canadian Community
Authors: Mohamed Ghorab
Abstract:
Locally generated and distributed systems for thermal and electrical energy are foreseen in the near future to reduce the transmission losses of the centralized system. Distributed Energy Resources (DER) are designed at different sizes (small and medium) and are incorporated in the energy distribution between hubs. The energy generated by each technology at each hub should meet the local energy demands. Economic and environmental enhancement can be achieved when there is interaction and energy exchange between the hubs. Network energy system and CO2 optimization between six different hubs, representing a Canadian community, are investigated in this study. Three different technology scenarios are studied to meet both the thermal and electrical demand loads of the six hubs. The conventional system is used as the first scenario and the reference case: it includes a boiler to provide the thermal energy, while the electrical energy is imported from the utility grid. The second scenario includes a combined heat and power (CHP) system to meet the thermal demand loads and part of the electrical demand load. The third scenario integrates CHP and an Organic Rankine Cycle (ORC), where the thermal waste energy from the CHP system is used by the ORC to generate electricity. The General Algebraic Modeling System (GAMS) is used to model the DER system optimization based on energy economics and CO2 emission analyses. The results are compared with the conventional energy system. They show that scenarios 2 and 3 provide annual total cost savings of 21.3% and 32.3%, respectively, compared to the conventional system (scenario 1). Additionally, scenario 3 (CHP and ORC systems) provides a 32.5% saving in CO2 emissions compared to the conventional system, whereas scenario 2 (CHP system) provides 9.3%.
Keywords: Distributed energy resources, network energy system, optimization, microgeneration system.
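In its simplest form, the hub dispatch problem sketched above reduces to a cost-minimizing linear program: meet a thermal and an electrical demand from a boiler, a CHP unit, and grid imports. The Python sketch below uses illustrative efficiencies, prices, and demands that are assumptions, not values from the study's GAMS model.

```python
from scipy.optimize import linprog

# Decision variables: [boiler fuel, CHP fuel, grid electricity import] (kWh).
fuel_price, grid_price = 0.04, 0.15        # $/kWh, assumed
eta_boiler = 0.85                          # boiler thermal efficiency, assumed
eta_chp_th, eta_chp_el = 0.45, 0.35        # CHP thermal / electrical efficiency, assumed
heat_demand, elec_demand = 400.0, 250.0    # kWh for one period, assumed

c = [fuel_price, fuel_price, grid_price]   # objective: total energy cost
# Equality constraints: heat balance and electricity balance.
A_eq = [[eta_boiler, eta_chp_th, 0.0],
        [0.0,        eta_chp_el, 1.0]]
b_eq = [heat_demand, elec_demand]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
boiler, chp, grid = res.x
print(f"Cost: ${res.fun:.2f}  boiler fuel={boiler:.0f}  CHP fuel={chp:.0f}  grid={grid:.0f} kWh")
```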
288 Accounting for Rice Productivity Heterogeneity in Ghana: The Two-Step Stochastic Metafrontier Approach
Authors: Franklin Nantui Mabe, Samuel A. Donkoh, Seidu Al-Hassan
Abstract:
Rice yields among agro-ecological zones are heterogeneous. Farmers, researchers and policy makers are making frantic efforts to bridge rice yield gaps between agro-ecological zones through the promotion of improved agricultural technologies (IATs). Farmers are also modifying these IATs and blending them with indigenous farming practices (IFPs) to form farmer innovation systems (FISs). Different metafrontier models have also been used in estimating productivity performance and its drivers. This study used the two-step stochastic metafrontier model to estimate the productivity performance of rice farmers and its determining factors in the GSZ, FSTZ and CSZ. The study used both primary and secondary data. Farmers in the CSZ are the most technically efficient. Technical inefficiencies of farmers are negatively influenced by age, sex, household size, years of education, extension visits, contract farming, access to improved seeds, access to irrigation, high rainfall amount, less lodging of rice, and well-coordinated and synergized adoption of technologies. Although farmers in the CSZ are doing well in terms of rice yield, they still have the highest potential for increasing rice yield since they had the lowest TGR. It is recommended that government, through the Ministry of Food and Agriculture, development partners and private companies, promote the adoption of IATs as well as educate farmers on how to coordinate and synergize the adoption of the whole package. The contract farming concept and agricultural extension intensification should be vigorously pursued to the letter.
Keywords: Efficiency, farmer innovation systems, improved agricultural technologies, two-step stochastic metafrontier approach.
287 Food Security in the Middle East and North Africa
Authors: Sara D. Garduño-Diaz, Philippe Y. Garduño-Diaz
Abstract:
To date, one of the few comprehensive indicators for the measurement of food security is the Global Food Security Index (GFSI). This index is a dynamic quantitative and qualitative benchmarking model, constructed from 28 unique indicators, that measures drivers of food security across both developing and developed countries. Whereas the GFSI has been calculated across a set of 109 countries, in this paper we aim to present and compare, for the Middle East and North Africa (MENA), 1) the Food Security Index scores achieved and 2) the data available on affordability, availability, and quality of food. The data for this work were taken from the latest available report published by the creators of the GFSI, which in turn used information from national and international statistical sources. MENA countries rank from place 17/109 (Israel, although with recent political turmoil this is likely to have changed) to place 91/109 (Yemen), with household expenditure spent on food ranging from 15.5% (Israel) to 60% (Egypt). Lower spending on food as a share of household consumption in most countries and better food safety net programs in the MENA have contributed to a notable increase in food affordability. The region has also, however, experienced a decline in food availability, owing to more limited food supplies and higher volatility of agricultural production. In terms of food quality and safety, the MENA has the top ranking country (Israel). The most frequent challenges faced by the countries of the MENA include public expenditure on agricultural research and development as well as volatility of agricultural production. Food security is a complex phenomenon that interacts with many other indicators of a country’s wellbeing; in the MENA it is slowly but markedly improving.
Keywords: Diet, food insecurity, global food security index, nutrition, sustainability.
286 Computational Investigation of Secondary Flow Losses in Linear Turbine Cascade by Modified Leading Edge Fence
Authors: K. N. Kiran, S. Anish
Abstract:
It is well known that secondary flow losses account for about one third of the total loss in any axial turbine. Modern gas turbine blades have smaller height and longer chord length, which may lead to an increase in secondary flow. In order to improve the efficiency of the turbine, it is important to understand the behavior of secondary flow and devise mechanisms to curtail these losses. The objective of the present work is to understand the effect of a streamwise end-wall fence on the aerodynamics of a linear turbine cascade. The study is carried out computationally using the commercial software ANSYS CFX. The effect of the end-wall on the flow field is calculated based on RANS simulation using the SST transition turbulence model. The Durham cascade, which is similar to a high-pressure axial flow turbine, is used for the simulation. The aim of fencing in the blade passage is to get the maximum benefit from flow deviation and to destroy the passage vortex, in terms of loss reduction. It is observed that, for the present analysis, a fence in the blade passage helps to reduce the strength of the horseshoe vortex and is capable of restraining the flow along the blade passage. The fence in the blade passage helps in reducing the underturning by 70 in comparison with the base case. A fence on the end-wall is effective in preventing the movement of the pressure side leg of the horseshoe vortex and helps in breaking the passage vortex. Computations are carried out for different fence heights whose curvature is different from the blade camber. The optimum fence geometry and location reduce the loss coefficient by 15.6% in comparison with the base case.
Keywords: Boundary layer fence, horseshoe vortex, linear cascade, passage vortex, secondary flow.
285 Advanced Stochastic Models for Partially Developed Speckle
Authors: Jihad S. Daba (Jean-Pierre Dubois), Philip Jreije
Abstract:
Speckled images arise when coherent microwave, optical, and acoustic imaging techniques are used to image an object, surface or scene. Examples of coherent imaging systems include synthetic aperture radar, laser imaging systems, imaging sonar systems, and medical ultrasound systems. Speckle noise is a form of object or target induced noise that results when the surface of the object is Rayleigh rough compared to the wavelength of the illuminating radiation. Detection and estimation in images corrupted by speckle noise is complicated by the nature of the noise and is not as straightforward as detection and estimation in additive noise. In this work, we derive stochastic models for speckle noise, with an emphasis on speckle as it arises in medical ultrasound images. The motivation for this work is the problem of segmentation and tissue classification using ultrasound imaging. Modeling of speckle in this context involves a partially developed speckle model where an underlying Poisson point process modulates a Gram-Charlier series of Laguerre weighted exponential functions, resulting in a doubly stochastic filtered Poisson point process. The statistical distribution of partially developed speckle is derived in a closed canonical form. It is observed that as the mean number of scatterers in a resolution cell is increased, the probability density function approaches an exponential distribution. This is consistent with fully developed speckle noise as demonstrated by the Central Limit theorem.
Keywords: Doubly stochastic filtered process, Poisson point process, segmentation, speckle, ultrasound.
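A quick numerical check of the limiting behaviour stated above: summing a Poisson-distributed number of random phasors per resolution cell gives partially developed speckle, and as the mean number of scatterers grows the intensity statistics approach the exponential (fully developed) law, for which the speckle contrast equals one. The scatterer counts in this Python sketch are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(42)

def speckle_intensity(mean_scatterers, cells=20000):
    """Intensity of one resolution cell: coherent sum of N unit phasors with
    random phases, where N ~ Poisson(mean_scatterers) (partially developed speckle)."""
    n = rng.poisson(mean_scatterers, cells)
    total = np.zeros(cells, dtype=complex)
    for i, k in enumerate(n):
        if k:
            phases = rng.uniform(0, 2 * np.pi, k)
            total[i] = np.sum(np.exp(1j * phases))
    return np.abs(total) ** 2

for mean_n in (2, 20):
    inten = speckle_intensity(mean_n)
    inten /= inten.mean()
    # For fully developed speckle the normalized intensity is exponential,
    # whose standard deviation equals its mean (contrast = 1).
    print(f"mean scatterers={mean_n:2d}  speckle contrast={inten.std():.2f}")
```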
284 Effect of Laser Power and Powder Flow Rate on Properties of Laser Metal Deposited Ti6Al4V
Authors: Mukul Shukla, Rasheedat M. Mahamood, Esther T. Akinlabi, Sisa. Pityana
Abstract:
Laser Metal Deposition (LMD) is an additive manufacturing process whose capabilities include producing a new part directly from a three-dimensional Computer Aided Design (3D CAD) model, building a new part on an existing old component, and repairing existing high-value component parts that would have been discarded in the past. Despite these capabilities and its advantages over other additive manufacturing techniques, the underlying physics of the LMD process is yet to be fully understood, probably because of the high interaction between the processing parameters; studying many parameters at the same time makes the process even more complex to understand. In this study, the effects of laser power and powder flow rate on the physical properties (deposition height and deposition width), metallurgical property (microstructure) and mechanical property (microhardness) of the laser-deposited, widely used aerospace alloy Ti6Al4V are studied. Also, because Ti6Al4V is very expensive, and LMD is capable of reducing the buy-to-fly ratio of aerospace parts, the material utilization efficiency is also studied. Four sets of experiments were performed and repeated to establish repeatability, using laser powers of 1.8 kW and 3.0 kW and powder flow rates of 2.88 g/min and 5.67 g/min, while keeping the gas flow rate and scanning speed constant at 2 l/min and 0.005 m/s, respectively. The deposition height and width are found to increase with increasing laser power and powder flow rate. Material utilization is favoured by higher power, while a higher powder flow rate reduces material utilization. The results are presented and fully discussed.
Keywords: Laser Metal Deposition, material efficiency, microstructure, Ti6Al4V.
283 Autonomous Robots' Visual Perception in Underground Terrains Using Statistical Region Merging
Authors: Omowunmi E. Isafiade, Isaac O. Osunmakinde, Antoine B. Bagula
Abstract:
Robots' visual perception is a field that is gaining increasing attention from researchers. This is partly due to emerging trends in the commercial availability of 3D scanning systems or devices that produce a high level of information accuracy for a variety of applications. In the history of mining, the mortality rate of mine workers has been alarming, and robots exhibit a great deal of potential to tackle safety issues in mines. However, an effective vision system is crucial to safe autonomous navigation in underground terrains. This work investigates robots' perception in underground terrains (mines and tunnels) using the statistical region merging (SRM) model. SRM reconstructs the main structural components of an image by a simple but effective statistical analysis. An investigation is conducted on different regions of the mine, such as the shaft, stope and gallery, using publicly available mine frames together with a stream of locally captured mine images. An investigation is also conducted on a stream of underground tunnel image frames, using the XBOX Kinect 3D sensors. The Kinect sensors produce streams of red, green and blue (RGB) and depth images of 640 x 480 resolution at 30 frames per second. Integrating the depth information into drivability gives a strong cue to the analysis, which detects 3D results augmenting drivable and non-drivable regions in 2D. The results of the 2D and 3D experiments on different terrains, mines and tunnels, together with the qualitative and quantitative evaluation, reveal that a good drivable region can be detected in dynamic underground terrains.
Keywords: Drivable region detection, Kinect sensor, robots' perception, SRM, underground terrains.
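The region-merging idea behind SRM can be illustrated with a simplified Python sketch: 4-neighbour pixel pairs are sorted by intensity difference and regions are merged when their mean difference falls below a size-dependent bound. The predicate below is a simplified variant inspired by SRM rather than the exact published test, and the tiny synthetic image stands in for the mine and tunnel frames.

```python
import numpy as np

def srm_like_segmentation(img, Q=32.0, g=256.0, delta=None):
    """Simplified region merging in the spirit of SRM (illustrative only)."""
    h, w = img.shape
    n = h * w
    if delta is None:
        delta = 1.0 / (6.0 * n * n)
    parent = np.arange(n)
    size = np.ones(n)
    mean = img.astype(float).ravel().copy()

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i

    # Collect 4-connectivity edges and sort by intensity difference.
    idx = np.arange(n).reshape(h, w)
    edges = [(idx[y, x], idx[y, x + 1]) for y in range(h) for x in range(w - 1)]
    edges += [(idx[y, x], idx[y + 1, x]) for y in range(h - 1) for x in range(w)]
    edges.sort(key=lambda e: abs(mean[e[0]] - mean[e[1]]))

    def bound(sz):
        return g * np.sqrt(np.log(1.0 / delta) / (2.0 * Q * sz))

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb and abs(mean[ra] - mean[rb]) <= np.hypot(bound(size[ra]), bound(size[rb])):
            # Merge rb into ra, updating region size and running mean.
            mean[ra] = (mean[ra] * size[ra] + mean[rb] * size[rb]) / (size[ra] + size[rb])
            size[ra] += size[rb]
            parent[rb] = ra
    return np.array([find(i) for i in range(n)]).reshape(h, w)

# Tiny synthetic "tunnel" frame: dark drivable floor band below brighter walls.
img = np.full((40, 60), 60.0)
img[25:, :] = 180.0
img += np.random.default_rng(3).normal(0, 5, img.shape)
labels = srm_like_segmentation(img)
print("regions found:", len(np.unique(labels)))
```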
282 The Effects of Seasonal Variation on the Microbial-N Flow to the Small Intestine and Prediction of Feed Intake in Grazing Karayaka Sheep
Authors: Mustafa Salman, Nurcan Cetinkaya, Zehra Selcuk, Bugra Genc
Abstract:
The objectives of the present study were to estimate the microbial-N flow to the small intestine and to predict the digestible organic matter intake (DOMI) in grazing Karayaka sheep based on urinary excretion of purine derivatives (xanthine, hypoxanthine, uric acid, and allantoin) by the use of spot urine sampling under field conditions. In the trial, 10 Karayaka sheep from 2 to 3 years of age were used. The animals were grazed in a pasture for ten months and fed with concentrate and vetch plus oat hay for the other two months (January and February) indoors. Highly significant linear and cubic relationships (P<0.001) were found among months for purine derivatives index, purine derivatives excretion, purine derivatives absorption, microbial-N and DOMI. Through urine sampling and the determination of levels of excreted urinary PD and Purine Derivatives / Creatinine ratio (PDC index), microbial-N values were estimated and they indicated that the protein nutrition of the sheep was insufficient.
In conclusion, the prediction of protein nutrition of sheep under field conditions may be possible with the use of spot urine sampling, urinary excreted PD and the PDC index. The mean purine derivative levels in spot urine samples from sheep were highest in June, July and October. Protein nutrition of pastured sheep may be affected by weather changes, including rainfall. Spot urine sampling may be useful in modeling the feed consumption of pasturing sheep. However, further studies are required under different field conditions with different breeds of sheep to develop spot urine sampling as a model.
Keywords: Karayaka sheep, spot sampling, urinary purine derivatives, PDC index, microbial-N, feed intake.
281 Green Lean TQM Human Resource Management Practices in Malaysian Automotive Companies
Authors: Noor Azlina Mohd Salleh, Salmiah Kasolang, Ahmed Jaffar
Abstract:
The Green Lean Total Quality Management (LTQM) Human Resource Management (HRM) System comprises HRM practices within an Environmental Management System (EMS), integrated with TQM and Lean Manufacturing (LM) principles. HRM is essential, especially in dealing with low motivation and less productive employees. The ultimate goal of this system is to achieve total human resource development, with employees motivated and able to optimize their creativity as part of a Green and Lean TQM organization. A survey questionnaire was developed, distributed to 30 highly active automotive vendors in Malaysia, and analyzed with Minitab v16 and SPSS v17. It was found that companies practicing Green LTQM HRM have generated more revenue and have R&D capability. However, the number of years since a company's establishment does not affect its openness to adopting new initiatives that can improve the effectiveness of its operations. The findings also highlight the importance of training, communication and rewards for employees. The Green LTQM HRM practices framework model established in this study will hopefully give preliminary insight, especially to companies that are still looking for a system that can improve their productivity through managing human resources. This is a preliminary study that combined practices from four award and standard frameworks, ISO/TS16949, the Toyota Production System SAEJ4000, the MAJAICO Lean Production System and EMS, focusing on highly active companies that have been involved in the MAJAICO Program and the Proton Vendor Development Program. Future studies can be conducted to determine the status in other industries, as well as case studies pertaining to this system.
Keywords: Automotive Industry, Lean Manufacturing, Operational Engineering Management, Total Quality Management, Environmental Management System.
280 Effects of Heavy Pumping and Artificial Groundwater Recharge Pond on the Aquifer System of Langat Basin, Malaysia
Authors: R. May, K. Jinno, I. Yusoff
Abstract:
The paper aims at evaluating the effects of heavy groundwater withdrawal and artificial groundwater recharge from an ex-mining pond on the aquifer system of the Langat Basin through three-dimensional (3D) numerical modeling. Many mining sites were left behind by the massive mining exploitation in Malaysia during the English colonization era and in the last few decades. These sites are able to accommodate more than a million cubic meters of water from precipitation, runoff, groundwater, and rivers. Most of the time, the mining sites are turned into ponds for recreational activities. In the current study, artificial groundwater recharge from an ex-mining pond in the Langat Basin was proposed due to its capacity to store >50 million m3 of water. The pond is located near the Langat River and opposite a steel company where >4 million gallons of groundwater are withdrawn on a daily basis. The 3D numerical simulation was developed using the Groundwater Modeling System (GMS). The calibrated model (error about 0.7 m) was utilized to simulate two scenarios: (1) Case 1, an artificial recharge pond with no pumping, and (2) Case 2, an artificial pond with pumping. The results showed that in Case 1, the pond played a very important role in supplying additional water to the aquifer and river. About 90,916 m3/d of water from the pond, 1,173 m3/d from the Langat River, and 67,424 m3/d from the direct recharge of precipitation infiltrated into the aquifer system. In Case 2, the abstraction of groundwater by the company caused a steep depression around the wells, river, and pond. The water budget showed an increased rate of inflow to the pond and river, of 92,493 m3/d and 3,881 m3/d, respectively. The outcome of the current study provides useful information on the aquifer behavior of the Langat Basin.
Keywords: Groundwater and surface water interaction, groundwater modeling, GMS, artificial recharge pond, ex-mining site.
279 DYVELOP Method Implementation for the Research Development in Small and Middle Enterprises
Authors: Jiří F. Urbánek, David Král
Abstract:
Small and Middle Enterprises (SME) have a specific mission, characteristics, and behavior in global, competitive business environments. They must respect policies, rules, requirements and standards in all their inherent and outer processes of supply-customer chains and networks. The aim and purpose of this paper is to introduce computational assistance that enables the use of the prevailing MS Office tools (SmartArt, etc.) for mathematical models, using the DYVELOP (Dynamic Vector Logistics of Processes) method. For the SME's global environment, this provides the capability to achieve its commitment regarding the effectiveness of the quality management system in meeting customer requirements, as well as the continual improvement of the organization's and SME's overall process performance and efficiency, and of its societal security via continual planning improvement. The DYVELOP model's maps, the Blazons, can express mathematically and graphically the relationships among entities, actors, and processes, including the discovery and modeling of cyclic cases and their phases. The Blazons are best comprehended through a live PowerPoint presentation of this paper's mission of added-value analysis. The crisis management of SMEs is obliged to use the cycles for successful coping with crisis situations. Repeated cycling of these cases is a necessary condition for encompassing both the emergency event and the mitigation of the organization's damages. An uninterrupted and continuous cycling process is a good indicator and controlling actor of SME continuity and its advanced possibilities for sustainable development.
Keywords: Blazons, computational assistance, DYVELOP method, small and middle enterprises.
278 Influence of the Seat Arrangement in Public Reading Spaces on Individual Subjective Perceptions
Authors: Jo-Han Chang, Chung-Jung Wu
Abstract:
This study involves a design proposal. The objective is to create a seat arrangement model for public reading spaces that enables free arrangement without disturbing the users. Through a subjective perception scale, this study explored whether the distance between seats and the direction of seats influence individual subjective perceptions in a public reading space. The study analyzed user subjective perceptions when reading in settings with seats at three different directions and five distances between seats. The results may be applied to public chair design. This study investigated (a) whether different directions of seats and distances between seats influence individual subjective perceptions and (b) the acceptable personal space between two strangers in a public reading space. The results are as follows: (a) the direction of seats and the distance between seats influenced individual subjective perceptions; (b) subjective evaluation scores were higher for back-to-back seat directions with distances A (10 cm) and B (62 cm) compared with face-to-face and side-by-side seat directions; however, when the seat distance exceeded 114 cm (distance C), no difference existed among the seat directions; (c) regarding reading in public spaces, when the distance between seats is only 10 cm, we recommend arranging the seats back-to-back to increase user comfort, and face-to-face and side-by-side arrangements should be avoided. When the seat arrangement is limited to a face-to-face design, the distance between seats should be increased to at least 62 cm. Moreover, the distance between seats should be increased to at least 114 cm for side-by-side seats to improve user comfort.
Keywords: Individual Subjective Perceptions, Personal Space, Seat Arrangement.
277 ZigBee Wireless Sensor Nodes with Hybrid Energy Storage System Based On Li-ion Battery and Solar Energy Supply
Authors: Chia-Chi Chang, Chuan-Bi Lin, Chia-Min Chan
Abstract:
Most ZigBee sensor networks to date make use of nodes with limited processing, communication, and energy capabilities. Energy consumption is of great importance in wireless sensor applications as their nodes are commonly battery-driven. Once ZigBee nodes are deployed outdoors, limited power may make a sensor network useless before its purpose is complete. At present, there are two strategies for long node and network lifetime. The first strategy is saving as much energy as possible: energy consumption is minimized by switching the node from active mode to sleep mode and by using a routing protocol with ultra-low energy consumption. The second strategy is to evaluate the energy consumption of sensor applications as accurately as possible. An erroneous energy model may render a ZigBee sensor network useless before the batteries can be changed.
In this paper, we present a ZigBee wireless sensor node with four key modules: a processing and radio unit, an energy harvesting unit, an energy storage unit, and a sensor unit. The processing unit uses a CC2530 for controlling the sensor, running the routing protocol, and performing wireless communication with other nodes. The harvesting unit uses a 2 W solar panel to provide lasting energy for the node. The storage unit consists of a rechargeable 1200 mAh Li-ion battery and a battery charger using a constant-current/constant-voltage algorithm. Our solution to extend node lifetime is implemented. Finally, a long-term sensor network test is used to exhibit the functionality of the solar powered system.
Keywords: ZigBee, Li-ion battery, solar panel, CC2530.
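The constant-current/constant-voltage algorithm mentioned for the storage unit can be summarized in a short control-loop sketch: charge at a fixed current until the cell reaches its voltage limit, then hold that voltage and let the current taper until a cut-off. The simple resistive cell model and all parameter values in this Python sketch are rough placeholders for a 1200 mAh Li-ion cell, not measurements from the described charger.

```python
# Constant-current/constant-voltage (CC/CV) charging sketch for a Li-ion cell.
# The resistive cell model and all numbers below are assumptions.
CAPACITY_AH = 1.2        # nominal capacity (1200 mAh)
V_MAX = 4.2              # CV voltage limit (V)
I_CC = 0.6               # CC charge current, 0.5C (A)
I_CUTOFF = 0.06          # terminate when taper current falls below this (A)
R_INT = 0.15             # internal resistance (ohm), assumed
DT_H = 1.0 / 60.0        # 1-minute time step, in hours

def ocv(soc):
    """Very rough open-circuit-voltage curve (assumed, monotone in SOC)."""
    return 3.5 + 0.7 * soc

soc, t = 0.2, 0.0
while True:
    current = I_CC
    if ocv(soc) + current * R_INT > V_MAX:          # CV phase: hold V_MAX
        current = max((V_MAX - ocv(soc)) / R_INT, 0.0)
    if current < I_CUTOFF:
        break
    soc = min(soc + current * DT_H / CAPACITY_AH, 1.0)
    t += DT_H

print(f"Charge complete in about {t:.2f} h, final SOC ~ {soc:.2f}")
```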
276 The Study of Implications on Modern Businesses Performances by Digital Communities: Case of Data Leak
Authors: Asim Majeed, Anwar Ul Haq, Mike Lloyd-Williams, Arshad Jamal, Usman Butt
Abstract:
This study aims to investigate the impact of the leak of M&S customer data on digital communities. Modern businesses use digital communities as an important public relations tool for marketing purposes. This form of communication helps companies build better relationships with their customers and also acts as another source of information. The communication between customers and organizations is not regulated, so users may post positive and negative comments. New platforms are being developed on a daily basis, and it is crucial for businesses not only to become familiar with them but also to know how to reach their existing and prospective consumers. The driving force of marketing and communication in modern businesses is digital communities, which are continuously increasing and developing; this phenomenon is changing the way marketing is conducted. The current research discusses the implications for M&S's business performance after the data were exploited on digital communities; users contacted M&S and raised security concerns. M&S closed down its website for a few hours to try to resolve the issue, and the next day made a public apology about the incident. This information proliferated across various digital communities and negatively impacted the M&S brand name, sales and customers. A content analysis approach is used to collect qualitative data from 100 digital bloggers, including social media communities such as Facebook and Twitter. The results and findings provide useful new insights into the nature and form of the security concerns of digital users. The findings have theoretical and practical implications. This research showcases a large corporation utilizing various digital community platforms and can serve as a model for future organizations.
Keywords: Digital, communities, performance, dissemination, implications, data, exploitation.
275 Effect of High-Energy Ball Milling on the Electrical and Piezoelectric Properties of (K0.5Na0.5)(Nb0.9Ta0.1)O3 Lead-Free Piezoceramics
Authors: Chongtham Jiten, K. Chandramani Singh, Radhapiyari Laishram
Abstract:
Nanocrystalline powders of the lead-free piezoelectric material, tantalum-substituted potassium sodium niobate (K0.5Na0.5)(Nb0.9Ta0.1)O3 (KNNT), were produced using a Retsch PM100 planetary ball mill by setting the milling time to 15h, 20h, 25h, 30h, 35h and 40h, at a fixed speed of 250rpm. The average particle size of the milled powders was found to decrease from 12nm to 3nm as the milling time increases from 15h to 25h, which is in agreement with the existing theoretical model. An anomalous increase to 98nm and then a drop to 3nm in the particle size were observed as the milling time further increases to 30h and 40h respectively. Various sizes of these starting KNNT powders were used to investigate the effect of milling time on the microstructure, dielectric properties, phase transitions and piezoelectric properties of the resulting KNNT ceramics. The particle size of starting KNNT was somewhat proportional to the grain size. As the milling time increases from 15h to 25h, the resulting ceramics exhibit enhancement in the values of relative density from 94.8% to 95.8%, room temperature dielectric constant (εRT) from 878 to 1213, and piezoelectric charge coefficient (d33) from 108pC/N to 128pC/N. For this range of ceramic samples, grain size refinement suppresses the maximum dielectric constant (εmax), shifts the Curie temperature (Tc) to a lower temperature and the orthorhombic-tetragonal phase transition (Tot) to a higher temperature. Further increase of milling time from 25h to 40h produces a gradual degradation in the values of relative density, εRT, and d33 of the resulting ceramics.
Keywords: Ceramics, Dielectric, High-energy milling, Perovskite.
274 Long Wavelength Coherent Pulse of Sound Propagating in Granular Media
Authors: Rohit Kumar Shrivastava, Amalia Thomas, Nathalie Vriend, Stefan Luding
Abstract:
A mechanical wave or vibration propagating through granular media exhibits a specific signature in time. A coherent pulse or wavefront arrives first, with multiply scattered waves (coda) arriving later. The coherent pulse is microstructure independent, i.e., it depends only on the bulk properties of the disordered granular sample, the sound wave velocity of the granular sample, and hence the bulk and shear moduli. The coherent wavefront attenuates (decreases in amplitude) and broadens with distance from its source. The pulse attenuation and broadening effects are affected by disorder (polydispersity; contrast in size of the granules) and have often been attributed to dispersion and scattering. To study the effect of disorder and of the initial amplitude (non-linearity) of the pulse imparted to the system on the coherent wavefront, numerical simulations have been carried out on one-dimensional sets of particles (granular chains). The interaction force between the particles is given by a Hertzian contact model. The sizes of particles have been selected randomly from a Gaussian distribution, where the standard deviation of this distribution is the relevant parameter that quantifies the effect of disorder on the coherent wavefront. Since the coherent wavefront is system-configuration independent, ensemble averaging has been used to improve the signal quality of the coherent pulse and to remove the multiply scattered waves. The results concerning the width of the coherent wavefront have been formulated in terms of scaling laws. An experimental set-up of photoelastic particles constituting a granular chain is proposed to validate the numerical results.
Keywords: Discrete elements, Hertzian contact, polydispersity, weakly nonlinear, wave propagation.
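A small Python sketch of the kind of granular-chain simulation described above: a one-dimensional chain of spheres interacting through the Hertzian contact law F = k·δ^(3/2) (compression only), integrated with velocity Verlet after giving the first particle an initial impulse. The particle properties, polydispersity level, and contact stiffness are placeholder values, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
radius = 5e-3 * (1.0 + 0.05 * rng.standard_normal(n))    # ~5% polydispersity
mass = 2500.0 * (4.0 / 3.0) * np.pi * radius**3           # assumed density (kg/m3)
k_hertz = 1e9                                             # contact stiffness (placeholder)

# Particles initially just touching along a line; first particle gets an impulse.
x = np.concatenate(([0.0], np.cumsum(radius[:-1] + radius[1:])))
v = np.zeros(n)
v[0] = 0.1                                                # initial impulse (m/s)

def forces(x):
    f = np.zeros(n)
    overlap = (radius[:-1] + radius[1:]) - np.diff(x)     # contact overlaps
    fc = k_hertz * np.where(overlap > 0, overlap, 0.0) ** 1.5
    f[:-1] -= fc                                          # push left particle back
    f[1:] += fc                                           # push right particle forward
    return f

dt, a = 1e-7, forces(x) / mass
for step in range(50000):                                 # velocity Verlet integration
    x += v * dt + 0.5 * a * dt * dt
    a_new = forces(x) / mass
    v += 0.5 * (a + a_new) * dt
    a = a_new

front = np.argmax(np.abs(v))
print(f"After {50000 * dt * 1e3:.1f} ms the pulse front is near particle {front}")
```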
273 Analysis of Vortex-Induced Vibration Characteristics for a Three-Dimensional Flexible Tube
Authors: Zhipeng Feng, Huanhuan Qi, Pingchuan Shen, Fenggang Zang, Yixiong Zhang
Abstract:
Numerical simulations of the vortex-induced vibration of a three-dimensional flexible tube under uniform turbulent flow are performed at a Reynolds number of 1.35×10^4. In order to capture the vortex-induced vibration, the three-dimensional unsteady, viscous, incompressible Navier-Stokes equations and an LES turbulence model are solved with the finite volume approach, the tube is discretized according to finite element theory, and its dynamic equilibrium equations are solved by the Newmark method. The fluid-tube interaction is realized by utilizing the diffusion-based smooth dynamic mesh method. For the vortex-induced vibration system, the variation trends of the lift coefficient, drag coefficient, displacement, vortex shedding frequency, and phase difference angle of the tube are analyzed under different frequency ratios. The nonlinear phenomena of lock-in and phase-switch are captured successfully. Meanwhile, the limit cycle and bifurcation of the lift coefficient and displacement are analyzed using trajectories, phase portraits, and Poincaré sections. The results reveal that when the drag coefficient reaches its minimum value, the transverse amplitude reaches its maximum, and the “lock-in” begins simultaneously. In the lock-in range, the amplitude decreases gradually with increasing frequency ratio. When the lift coefficient reaches its minimum value, the phase difference undergoes a sudden change from the “out-of-phase” to the “in-phase” mode.
Keywords: Vortex induced vibration, limit cycle, CFD, FEM.
272 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things
Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker
Abstract:
Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect a change in the stochastic process as quickly as possible, with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes the data are conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. the Dempster-Shafer and Dezert-Smarandache Theories) to represent and manage uncertainty and conflict, for fast change detection and effective handling of complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors that need to be combined, so that computational efficiency could be improved. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data.
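A compact Python sketch of two ingredients named above: Dempster's rule for combining mass functions from two sensors over the hypotheses {change, no change}, and a CUSUM statistic driven by the log-likelihood ratio, whose post-change mean equals the KL divergence between the post- and pre-change distributions. The distributions, mass values, and change point are synthetic illustration values, not the paper's experimental data.

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule of combination over the frame {A: change, B: no change},
    with the ignorance mass on {A, B}. Masses are dicts over 'A', 'B', 'AB'."""
    combined, conflict = {"A": 0.0, "B": 0.0, "AB": 0.0}, 0.0
    for f1, v1 in m1.items():
        for f2, v2 in m2.items():
            inter = set(f1) & set(f2)
            if not inter:
                conflict += v1 * v2
            else:
                combined["".join(sorted(inter))] += v1 * v2
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sensors' mass functions about a change (synthetic values).
m_fused = dempster_combine({"A": 0.6, "B": 0.2, "AB": 0.2},
                           {"A": 0.5, "B": 0.3, "AB": 0.2})
print("fused masses:", {k: round(v, 3) for k, v in m_fused.items()})

# CUSUM on the log-likelihood ratio for a mean shift N(0,1) -> N(1,1);
# the mean of each increment after the change equals the KL divergence (0.5 here).
rng = np.random.default_rng(11)
data = np.concatenate([rng.normal(0, 1, 300), rng.normal(1, 1, 100)])
threshold, s, alarm = 8.0, 0.0, None
for t, xt in enumerate(data):
    llr = xt - 0.5                      # log[N(x;1,1)/N(x;0,1)] for unit variance
    s = max(0.0, s + llr)
    if s > threshold and alarm is None:
        alarm = t
print(f"change injected at t=300, CUSUM alarm at t={alarm}")
```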